Apple brought its smart speaker into the world with a shoddy name and an unconventional pitch, but anyone brash enough to cast the device aside so easily will surely pay the price. Apple, unlike Amazon and Google, understands that selling glorified intelligence-in-a-box as a method of human-computer interaction lacks foresight — people want a product, not a technology.
Launching the HomePod is the most startup-y move Apple has made in years. The company is simultaneously disrupting the massive home speaker market and the emerging smart speaker space. By treating hardware as an opportunity rather than a means to an end, Apple might actually be able to deliver a product that ends up as more than a gimmicky metal tube bound to eventually collect dust on a shelf.
Let’s be clear: Siri is behind. It was behind before WWDC 2017, and it is still behind after WWDC 2017. The company is racing to get its intelligent assistant up to spec, bringing on machine learning engineers and making large acquisitions to jump the tech readiness line. No doubt pitching the HomePod as an audio solution rather than the hardware manifestation of an AI is convenient. But at the end of the day, it doesn’t really matter.
Apple is going product first and platform second for a reason. It’s still very early days for monetizing personal assistants — Amazon has declined to prognosticate on Alexa as a revenue stream for years. The home speaker market, on the other hand, is well defined. I hate market size estimates as much as the next guy, but wireless audio is estimated to be worth north of $50 billion.
Back at TechCrunch Disrupt New York 2016, Mike George, Amazon’s VP of Echo, pointed to music when asked about the device’s most common use case. This is interesting because the Echo’s strongest hardware differentiator is its far-field microphone; for a device with a voice interface that is most commonly used to play music, it makes sense that the design priority would be the speaker itself.
Ultimately, the Echo’s value proposition is that it can detect distant voices with much higher accuracy than a phone. Meanwhile, the Echo and smartphones alike can run Alexa and all the automatic speech recognition and natural language understanding under the hood.
Apple needs to get machine intelligence right for its entire product portfolio because it’s a critical enabling technology, not because it needs to compete against Google and Amazon to answer the most arcane voice query. And there’s little reason to doubt the company’s ability to get this done. Apple isn’t some poor, left-behind $800 billion company that can’t do AI. Machine learning is everywhere in Apple products — Spotlight Search, Mail, iMessage, the list goes on.
One thing that was weird about Apple’s WWDC HomePod pitch was its emphasis on sound quality. The traditional Apple move is to play up ease of use — the ability for pairs of speakers to work together and the ease of pairing an iPhone to the speakers. Perhaps this is just Apple’s attempt to justify the high $349 price tag. And sure, it’s Apple, so these things are going to be pricey and hard to get until V1 gets a price cut when V2 drops — that strategy isn’t going to stop working now.
The right way to think about this launch is as AirPods for the home. In the future, both AirPods and the HomePod will benefit from additional integrations with Siri — but nobody buys AirPods, a MacBook or an iPhone for Siri. To believe in the HomePod is to believe in the market for home audio and to believe in Apple’s ability to execute a product-first strategy.