For several years now, Apple has neatly lined up its operating-system timetables to let it unveil multiple platform upgrades at its annual WWDC keynote. There’s iOS, the company’s modern flagship. macOS, its historic mainstay. And watchOS and tvOS, both of which remain works in progress. It’s a massive amount of software functionality to announce all at once.
However, at Monday’s WWDC keynote—which we’ll be liveblogging—the news about all four of these operating systems will matter less than whatever Apple has to say about Siri, its voice assistant.
Siri isn’t an operating system in a traditional, literal sense: It’s a feature which spans all of Apple’s products, and which mostly lives in the cloud rather than running on the devices themselves. But personal technology’s biggest platform war is raging between AI-infused voice services: Siri, Amazon’s Alexa, the Google Assistant, and Microsoft’s Cortana. And in any rational analysis of that battle, Apple is a distinct underdog to Amazon and Google.
It’s not just about the useful things a voice assistant can do—and which other assistants do in greater quantity, with a deeper understanding of spoken commands, than Siri. It’s also become an embarrassing truism that Apple is bad at AI, at least in comparison to Amazon, Google, and Microsoft. As with most truisms, that doesn’t necessarily line up with reality, and Apple’s traditional reluctance to reveal much about its research efforts makes it particularly hard to gauge its progress in AI. But when the company does make major strides in this field, they’ll be most likely to show up as new capabilities within Siri.
Bloomberg’s Mark Gurman, the Apple beat’s most reliable reporter when it comes to intelligence on unannounced products, has a pre-WWDC report which suggests that the keynote will not be jam-packed with major news. But Gurman’s story hardly mentions Siri at all; it wouldn’t be a shocker if WWDC brings advances for the service which Apple has managed to keep secret. (After all, it’s a lot easier to prevent leaks when a product is purely software-based than it is with a piece of hardware that’s chock full of components manufactured by other companies.)
Here are some things I’ll be looking for on Monday from my seat in San Jose’s McEnery Convention Center:
Apple has been opening Siri up to useful functionality provided by other companies for a while: You can already speak to the service to hail an Uber, turn on a Hue light bulb, or send a message in WhatsApp. But the company, which has always preferred controlling its experiences to providing outsiders with unfettered access, has been slow to turn its assistant into an ecosystem. Instead, it’s Amazon, with features such as the ability for third-party developers to charge for content within an Alexa skill, that’s built a platform that feels like the iOS of voice. Apple may have no interest in Siri becoming as open as Alexa, but it’ll never make up for time already lost unless it lets other companies help teach Siri new tricks.
Amazon, Google, and Microsoft are all spreading their respective voice platforms’ influence by encouraging other manufacturers to incorporate them into new devices—such as Google Assistant-powered smart screens and Alexa-ready laptops. Apple may be no more likely to pursue this strategy than it would be to let someone else build an iOS smartphone. But rumor has it that we may at least see a Siri speaker from Apple-owned Beats. And additional integration into cars—one area where Apple already lets Siri venture into third-party hardware—would be welcome.
Ever since Apple announced the HomePod, it’s been reminding people that the #1 activity on smart speakers is listening to music. That’s allowed it to focus on audio quality and downplay its device as a direct rival to Amazon’s Echo and Google’s Home, both of which have more of an emphasis on home automation and other capabilities. I’ve always assumed that Apple has emphasized this positioning merely to bide time while it worked on rethinking Siri for the new context that the smart-speaker category demands. Monday’s keynote, a year after the HomePod’s announcement, would be an ideal venue for showing off new features that emphasize the smart in smart speaker.
As Tim Cook is fond of pointing out, Apple is in the business of selling hardware rather than advertising, which gives it little incentive to squirrel away its customers’ data and poke through it in ways that anyone might regard as a violation of privacy. However, the company must balance this approach with the fact that AI loves data; the more it knows about us, the smarter it seems. I’d expect Cook and company to bring up privacy and reiterate Apple’s hardline stance during this year’s WWDC keynote—now more than ever—but they must simultaneously show that Siri isn’t done getting better at understanding us.
It’s certainly rankled Apple to be burdened with an image as an AI laggard. One thing that would remind the world that the company is serious about shedding that reputation would be for John Giannandrea, the former Google AI chief who defected to Cupertino a couple of months ago, to make an on-stage appearance on Monday. Even though his influence on Apple’s platforms won’t truly be felt until next year’s conference, he could start to spell out a new vision.
Much of what Apple still needs to do with artificial intelligence involves catching up with the competition. But it would be far more intriguing if Siri’s AI went places that nobody is expecting. At I/O, Google did that with Duplex, the new feature designed to let the Google Assistant call local businesses and interact with humans to do things such as schedule a haircut. Apple is unlikely to be working on anything in the same ballpark—which is just as well given how little Google has done to allay concerns about its technology. But if Apple shows Siri doing something that’s both utterly new and classically Apple, it will be the best possible way to make WWDC 2018 memorable.