This week, Google introduced one of the biggest redesigns to Android, which includes a wild new concept for giving voice commands: just talk normally — even if it’s a multi-layered request that involves translations, texting and emoji.
Gemini AI, as it stretches across Android devices like phones, watches and car systems, promises to solve what I feel is one of the biggest problems with voice assistants today. We've been required to talk in a particular, choppy way: starting with the right wake word, using the keyword that triggers the proper command, being careful not to get too casual. I do a delicate word-salad dance with Siri every time I ask it to play a certain Apple Music playlist. And even simple commands don't always give the results you want.
But with AI, can we finally have computers that get our requests right? Apple has made these promises before, with an AI-infused Siri that is supposed to give more personal help and dig into our messages and emails, but that still hasn't launched. So when the Android team tells us this will happen soon across Google devices, I'm skeptical that it can really work as advertised.
In this week’s episode of One More Thing, which you can watch embedded above, I explain what makes the new Gemini AI pitch so impressive. But when on-screen disclaimers warn that the results might not be accurate, what use is it to have an assistant you cannot trust?
If you’re looking for more One More Thing, subscribe to our YouTube page to catch Bridget Carey breaking down the latest Apple news and issues every Friday.