Fantas-Tech?
The new iPhone fails to deliver on its most important new feature
Last fall, Apple ran an ad touting the iPhone's personalized artificial intelligence. Actor Bella Ramsey asked Siri, "What is the name of the guy I had a meeting with a couple of months ago at Cafe Grenel?" The problem was the Apple Intelligence software in the iPhone couldn't actually do that. Apple, under pressure, pulled the ad.
A year later, there's a new phone that can answer that exact question: The Google Pixel 10. The iPhone 17, just unveiled, still cannot.
Apple is touting the new iPhone's speed, durability and camera, as it has for nearly two decades. It now comes in orange. But the defining technology of this era is AI - and the new iPhone 17 already feels about two years old.
For a few weeks I've been testing two phones: an iPhone running the iOS 26 software that comes with the iPhone 17, and the Google Pixel 10 that debuted in August. Side by side, it's clear the Pixel's AI can do things I wish the new iPhone could.
I don't expect this will change many people's immediate buying decisions. For one, the companies make switching hard. Still, the takeaway from my test is that AI is starting to make a real impact on what makes a phone smart and useful - and the advantage is Google's.
Of course, any smartphone can run AI apps like ChatGPT or Google's Gemini. Both the iPhone and the Pixel have chips designed for AI, and both boast AI functions like live translation and drafting emails. Yet many AI features have turned out to be parlor tricks or half-baked. Apple Intelligence can, uniquely, summarize notifications and messages - but I don't find those summaries very useful or even accurate.
Three years after ChatGPT made it possible to have a natural conversation with a computer, it makes sense that we should be able to interact more naturally with our phones, too.
Here are three ways AI is changing how phones work - and where both Google and Apple are with delivering it.
• 1) Getting personal
Your phone is already the repository of your important personal information, tucked away in separate email, chat, photo and other apps. What if AI could dig into them and remember the important stuff?
Google has made inroads into personalizing its AI assistant Gemini. I pressed the button on the Pixel to summon Gemini and asked, "When was my last haircut?" It looked into my calendar and told me the answer.
On the iPhone, Siri got confused and told me about an appointment coming up.
To make this work on the Pixel, I granted Google's Gemini access to my Gmail, Google calendar and other services. On most Android phones, Gemini can also tap into your local device's Messages, WhatsApp, Phone and Photos apps. (Whether you want Google's AI to know that much about you and what it means for your privacy are topics for a different column.)
Gemini's personalization worked even when I chained together a command such as, "Get directions home and share the ETA with Steven."
It's far from perfect. Gemini's ability to respond was at times fragile, and Google doesn't do a great job of making clear what its AI can and cannot do. A few times when I asked about my last haircut, it didn't think to check the calendar unless I specifically told it to. It wouldn't add an address from a message to a contact card, because it doesn't yet have that ability.
And if you use Microsoft's Outlook or Slack for important information, it can't tap into them, though Google says it's working on more integrations.
Apple's AI personalization efforts have been beset by delays. When it first introduced Apple Intelligence in the summer of 2024, the company offered a pretty exciting vision: the iPhone's AI would keep track of what's going on in your life, so Siri could be smarter. And it described a privacy-preserving way it would store and process all these details.
But Apple failed to deliver these core elements of Apple Intelligence, saying they didn't live up to its quality standards. In June, Apple said it expects to ship its personalized Siri sometime in 2026. In theory, all models since the iPhone 15 Pro have the hardware and processing power to run it. Until Apple delivers, it's what techies call "vaporware."
• 2) Being proactive
On the Pixel, a friend texted me, "What time is dinner tonight?" Right in the chat, up popped a little bubble with the details. With one tap, it sent them to my friend.
On an iPhone, holding down on the words "dinner tonight" in a similar text pops up a view of my calendar, but no answer for my friend.
The step forward here is the new Pixel AI feature called Magic Cue. It provides proactive help by watching what you're doing on your phone: whom you're calling, what you're texting. Then it tries to deliver relevant information including dates, names, locations, weather, airline booking numbers, etc. - culled from your Gmail, Calendar, Contacts, Messages and recent open screens. Its suggestions appear within the current app as a floating window.
Behind the scenes, Magic Cue runs entirely on the phone using its AI processor, tracking and providing information without sending anything to Google.
Magic Cue worked for me when someone asked my frequent flier number, and in the weather app when I was about to take a trip out of town. I just wanted to see it even more; the range of situations where it kicks in and the range of data it draws on are pretty limited. Google tells me it's starting by trying to address some specific pain points, but hopes to learn about more.
The closest parallel on the iPhone is a function called Siri Suggestions. Apple's AI tries to learn from how you use your phone, and then makes suggestions on what you might want to do next, such as suggesting people to add to an email. That's fine, but AI that both has access to my personal information and the context of what I'm doing on the phone has the potential to be much more helpful.
• 3) Reinventing voice chats
Everybody knows how asking a question to a bumbling AI voice assistant can be a comedy of errors.
So I was impressed by one of the ways Google reimagined interacting with its Gemini assistant as an actual conversation. Press a button labeled "Live" on the Pixel's home screen, and you can talk naturally back and forth with the bot - even interrupt it.
But then Gemini Live goes further: Press a button with a video icon, and you're not only talking, but the AI can also see through your phone's camera and chat with you while you walk around.
I gave it to my 3-year-old and he ran around pointing at plants and asking questions about them. I got so used to it that Gemini Live became my go-to way to just look up information and get help.
If you grant permission, Gemini Live can also see what's on your phone's screen while you're chatting with it, so you can get help or more information.
But again, Gemini Live doesn't always do what you want. It is technically a different system from one-off voice chats with Gemini - and it can't access the same set of personal information or apps, such as your Gmail.
Over on the iPhone, Apple has made some inroads into getting Siri to better understand conversational language. And it has a partnership with OpenAI that will throw many more complicated queries to ChatGPT to answer. But so far, Siri has no equivalent to the live voice back-and-forth chat.
A new Siri feature called Visual Intelligence can also help you look up information or get feedback about things you photograph or screenshot on your iPhone. But it works only with still images, not an ongoing video chat.
The closest thing to a Gemini Live experience on the iPhone is installing and using the Gemini app. I liked using it so much, I assigned my iPhone's action button to launch Gemini as a shortcut. That's one way to hack your iPhone into being smarter.