
Apple's Ironic Product Positioning


Back in February, during the 2007 Oscars, Apple aired its first national television commercials for the new iPhone. The spot strung together TV and film clips of stars (ranging from Marilyn Monroe to The Flintstones’ Betty Rubble) picking up the phone and saying "hello." It ended with two frames, the first showing the word "Hello" and the second "Coming in June."
Good thing the tagline wasn’t "Say hello to iPhone." Say anything to the device and your words will fall on deaf ears. That’s why I wrote the following commentary when Steve Jobs wowed attendees at MacWorld in early January: "iPhone may be a tour-de-force for the touch-screen, but it’s inexplicably odd to introduce a new smartphone with so few speech-based features. I can hardly express how profoundly disappointed I am that this shiny, new thing—the first must-have product since Nintendo’s Wii—has less voice processing than Tickle-Me Elmo."
A Missed Opportunity
Because iPhone is not speech-enabled out of the box, the view from our part of the world is that this is a tremendous missed opportunity for the normally infallible Apple. Instead, Apple expects to do just fine by positioning iPhone as a widescreen iPod, advanced mobile phone, and wireless Internet access device. It will essentially be the most sought-after fashion accessory in time for summer.
Apple’s CEO Steve Jobs has indicated that he’ll be content working with a single carrier, AT&T/Cingular, and garnering a modest percentage of new wireless phone sales this year. Adding speech applications to the already formidable mix of entertainment, communications, and Web browsing services would, apparently, be overkill.
Meanwhile, other wireless carriers should be expected to work with other device makers and service providers to offer an impressive array of speech-enabled mobile services. The speech recognition powers of Microsoft’s Windows Mobile operating system, IBM’s embedded ViaVoice, Nuance Mobile, VoiceSignal, and Tellme for Phone are key enabling technologies that are finally conditioning the mobile public for more robust voice control of multimodal applications. In the midst of these offerings, Apple is paying attention mostly to appearance.
General availability for the iPhone begins next month. At that time, Apple will go public to answer many of the mysteries surrounding its latest must-have gadget, such as what processor it employs and whether it will work with standard iPod peripherals. I’ve already taken a bit of heat for saying definitively that there are no voice applications for iPhone. According to Jobs, the device runs the OS X operating system, which has included a speech processing framework (built on Java Beans). On paper, then, it comes equipped with robust resources for both speech recognition and text-to-speech rendering.
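To make that claim concrete, here is a minimal command-and-control sketch against the Mac’s built-in speech engines, written in modern Swift against AppKit’s NSSpeechSynthesizer and NSSpeechRecognizer. Treat it as an illustration, not Apple’s recommended pattern: the class name and command phrases are hypothetical, and a developer in 2007 would have reached these same engines through Objective-C or the Carbon Speech Recognition Manager rather than Swift.

    import AppKit

    // Minimal sketch: constrain the OS recognizer to a small command
    // vocabulary and echo each recognized command back via text-to-speech.
    // The class name and command phrases here are hypothetical examples.
    final class PhoneCommandListener: NSObject, NSSpeechRecognizerDelegate {
        let synthesizer = NSSpeechSynthesizer()
        let recognizer = NSSpeechRecognizer() // failable: nil if the engine is unavailable

        func start() {
            recognizer?.commands = ["call home", "check voicemail", "play music"]
            recognizer?.delegate = self
            recognizer?.listensInForegroundOnly = false
            recognizer?.startListening()
        }

        // Called by the system engine when an utterance matches a command.
        func speechRecognizer(_ sender: NSSpeechRecognizer,
                              didRecognizeCommand command: String) {
            _ = synthesizer.startSpeaking("Recognized: \(command)")
        }
    }

Even this little sketch underscores the point that follows: the plumbing is there, but turning it into a polished, phone-grade voice interface is an application-level job.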
But let’s not mistake a capability for a full-blown application. My unscientific survey of Mac developers has led me to believe that Apple’s speech processing capabilities suffer from a sort of benign neglect. In a world where wonderfulness is defined by a multitouch interface and the ability to rotate an image on screen, refining a speech-enabled interface for command and control or dictation is underwhelming. It does very little for the device’s image. Thus, developers have characterized the speech processing framework as an afterthought at best and crippleware at worst.
It’s Not Too Late
Now that it is May, we’re at the 11th hour and 55th minute of the iPhone atomic clock. Apple has opted to support widescreen iPod functions, basic telephony, Web browsing, text messaging, and photo exchange. Out of the box, the iPhone is not destined to be a platform for many of the high-growth areas surrounding conversational commerce, such as speech-based local search, presence-based instant messaging, or navigation.
Back in January, Jobs pointed out that the killer app is making calls. Then he went about showing what visual splendor can be embedded into initiating and manipulating call flows. He used multitouch to scroll through the address book. He demonstrated visual access to voicemail messages. He pressed buttons to merge calls.
Defining the next generation of iPhone functions, including the speech-enabled ones, is now up to Apple’s devoted community of application developers and partners.



Dan Miller is the founder of Opus Research and a senior analyst there. He published Telemedia News & Views, a monthly newsletter featuring developments in voice processing and intelligent network services. Contact him at dmiller@opusresearch.net.
