Almost all the use cases Google gives to justify putting Gemini in place of Assistant, along with the suggested prompts shown when you open a new chat with Gemini, are the kinds of things that demonstrate the power of large language models but aren’t actually that useful: prompts such as “give me a language plan to learn Mandarin” or “generate an image in a watercolour style”. The utility for an ordinary consumer seems small compared with the effort big companies are putting into promoting the technology.
There are specific situations where the technology can clearly help. I imagined I was trying to design graphics for a basketball team and asked Gemini to create some ideas featuring a wasp, and it did pretty well. It didn’t do as well when I asked for it in specific colours (it always used yellow), and I had mixed results when I asked it to add text. Common phrases such as “slam dunk” or “no way” tended to work fine, but it had a horrendous time trying to add “Melbourne Wasps”. Overall, I can see how it would be useful as a starting point.
On the other hand, I showed Gemini a picture of my cat sleeping on a crate of vinyl records and asked it to draft some social captions, an application companies frequently suggest. It made 10 suggestions, all of which were either very generic (“just chillin’ like a villain”) or total nonsense (“the only thing softer than that fur is that crate”).
For factual content, Gemini is not yet at a place where you can take its responses at face value, and it’s certainly not alone there. As with Bing, it presents its findings confidently but always needs double-checking.
Some answers it gave to my questions were clearly wrong (it insisted that the latest science indicated the moon was older than Mars but gave a nonsensical explanation), while others seemed fine (its explanation of doorbell voltage transformers checked out). In cases where the answer was effectively “it depends”, Gemini gave long rambling non-answers with a tendency to hedge its bets.
In one case, it answered, “What is the oldest living civilisation?” with a long explanation of why that’s such a difficult question, but it did make arguments for several candidates that seemed sound. When asked, “How many Sonic the Hedgehog video games are there?” Gemini rightly explained it depended on how you interpret what a game is and how many esoteric spinoffs you consider to be part of the main series. It then gave a detailed breakdown of sub-categories and how many games are in each. It looked like it had taken the information straight from Wikipedia, except that it was all completely incorrect. The numbers were just made up.
When it comes to factual results, Gemini’s output is often automatically double-checked by a web search that happens in the background. If Google thinks it has verified the information, it highlights it in green. If it thinks the information could be wrong, it highlights it in yellow. But it rarely attributes the facts it reports to any source. In most cases, you’d be better off just searching the web in the first place and clicking a link from somewhere trustworthy.
As for Gemini’s suitability as a voice assistant, a Google support page makes it clear that the standard Assistant is still better when it comes to almost every task you might need it for, including setting reminders and timers, controlling smart home devices, sending messages, playing media and navigating using Google Maps.
Meanwhile, all of Gemini’s strengths – summarising text, generating and analysing images, creating bulleted plans – are things I can’t quite imagine needing my phone to do on demand.