iPhone 4S, and a promising future for Siri
My iPhone 4 went through quite the sequence of unfortunate events, prompting an upgrade to a 4S, which I wasn’t originally planning on doing…
My Apple devices have rarely had any concerns of loss or damage at my hands, until the iPhone 4, that is. The original 4 went with me to Moscow, Russia, and helped me navigate the city, all without issue. In early spring, it took a tumble out of my hands and managed to pop open when it landed on a concrete floor. That event prompted my first ever visit to Apple for my one-time “oops” replacement, which went off without a hitch, even though the phone had been jailbroken (thanks, unnamed Apple Store employee who looked the other way while I wiped and reset the device). The “oops” replacement phone went through more hell than any electronic device I’ve ever owned. It survived a dip in a (clean) toilet and several unusual falls, and eventually, either due to the falls or the swim, it lost its ability to connect to AT&T’s cell network. It’s currently stuck in a perpetual “searching…” / “No Service” loop. Because of the swim in the toilet, and the fact that I’d already used up my “oops” replacement, I was SOL. Thus the need for yet another new phone. Luckily, the timing was right for a 4S at the normal upgrade price.
It arrived yesterday, and the two things I was most excited about, the A5 CPU and the camera, haven’t disappointed. I’ve always had issues with the UI response lag on my vanilla 4, and the new A5 CPU in the 4S has eliminated it completely. Very sweet… The camera is also super speedy, and with iOS 5’s lock screen access, it can be opened amazingly quickly. Image quality and overall sensor response have been superb.
Finally, the reason for this post: the thing I was most skeptical about was Siri.
Having worked at a VoIP startup, and having been responsible for speech recognition and synthesis software implementation in the past, I was very curious to experience Siri. Overall, it works pretty well, and will only improve with time.
The biggest issue at the startup I worked at was that all recognition had to take place on the server side, so on crappy cell phone connections, garbage input was all the server had to work with. As you can imagine: less than ideal, and less than stellar performance on average.
Apple, and Google for that matter with their voice tech, have solved this problem by doing the basic speech breakdown on the device, and then sending those data to server farms for the heavy number crunching. This is ideal because the phone itself gets a pristine input regardless of what’s going on with your cell network. It lets them raise both the accuracy and the speed of processing in an almost perfect world for speech recognition: capture the input at the best possible source, do the heavy processing on a massive server farm. Win-win. In the case of Siri and its iOS integration, the result is extremely impressive, and I’m now super excited to see how Siri pans out in the future.
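To make the split concrete, here’s a minimal sketch of what a device-side front end like this looks like: frame the raw audio, reduce each frame to a compact feature (just log-energy here; real systems use much richer features like MFCCs), and pack the result into a small payload for upload to the recognition servers. The function names and the choice of feature are my own for illustration, not anything Apple or Google has published.

```python
import math
import struct

def frame_signal(samples, frame_len=400, hop=160):
    """Split raw PCM samples into overlapping frames (25 ms / 10 ms at 16 kHz)."""
    return [samples[start:start + frame_len]
            for start in range(0, len(samples) - frame_len + 1, hop)]

def log_energy(frame):
    """Reduce one frame to a single log-energy feature."""
    energy = sum(s * s for s in frame) / len(frame)
    return math.log(energy + 1e-10)  # epsilon avoids log(0) on silence

def extract_features(samples):
    """Device-side front end: raw audio in, compact feature vector out."""
    return [log_energy(f) for f in frame_signal(samples)]

def serialize_features(features):
    """Pack features as little-endian floats for upload to the server farm."""
    return struct.pack(f"<{len(features)}f", *features)

# Simulate one second of 16 kHz audio (a 440 Hz tone standing in for speech).
samples = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
features = extract_features(samples)
payload = serialize_features(features)
```

The point of the design is visible in the sizes: one second of 16-bit audio is 32,000 bytes, while the feature payload here is only a few hundred, so a flaky cell link carries far less data and the server still receives a clean, lossless representation of what the microphone heard.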