Gemini is an increasingly good chatbot, but it’s still a bad assistant



Google leverages the theoretical power of generative AI to give Gemini access to data across multiple apps. When it works, this can be very handy. For example, you can ask Gemini to check your email for a specific message, extract data, and pass it to another app. I was excited about this functionality at first, but in practice, it makes me miss the way Assistant would just fail without wasting my time.

I was reminded of this issue recently when I asked Gemini to dig up a shipment tracking number from an email—something I do fairly often. It appeared to work just fine, with the robot citing the correct email and spitting out a long string of numbers. I didn’t realize anything was amiss until I tried to look up the tracking number. It didn’t work in Google’s search-based tracker, and going to the US Postal Service website yielded an error.

That’s when it dawned on me: The tracking number wasn’t a tracking number; it was a confabulation. It was a believable one, too. The number was about the right length, and like all USPS tracking numbers, it started with a nine. I could have looked up the tracking number myself in a fraction of the time it took to root out Gemini’s mistake, which is very, very frustrating. Gemini appeared confident that it had completed the task I had given it, but getting mad at the chatbot wouldn’t do any good—it can’t understand my anger any more than it can understand the nature of my original query.

At this point, I would kill for Assistant’s “Sorry, I don’t understand.”

This is just one of many similar incidents I’ve had with Gemini over the last year—I can’t count how many times Gemini has added calendar events to the wrong day or put incorrect data in a note. In fairness, Gemini usually gets these tasks right, but its mechanical imagination wanders often enough that its utility as an assistant is suspect. Assistant just couldn’t do a lot of things, but it didn’t waste my time acting like it could. Gemini is more insidious, claiming to have solved my problem when, in fact, it’s sending me down a rabbit hole to fix its mistakes. If a human assistant operated like this, I would have to conclude they were incompetent or openly malicious.


