Pros
- Free
- Higher token limit than ChatGPT Free
- Faster image generation
- Largely accurate
- Can upload and examine large documents
Cons
- Randomly hits token limit
- Illogical, frustrating image generation
- Image analysis is inaccurate
- Long documents can’t be pasted directly
Google Gemini has come a long way, from its former life as Bard, when hallucinated inaccuracies sent Google’s stock price tumbling, to its rebrand as Gemini, when hallucinated inaccuracies again sent Google’s stock price tumbling. After two years in the oven, it finally seems Google has gotten its chatbot right.
The free version of Gemini, which runs on the 2.5 Flash model, is a strong product. It answers questions quickly and gets facts right. I found it had a higher rate limit than the free version of ChatGPT, so I could use it more often and spend less time waiting. In one test, I could generate multiple images on Gemini, whereas on ChatGPT Free, a single image threw me over my limit. Image generation was also much faster than on ChatGPT Free; in fact, ChatGPT’s image generator was the slowest one CNET tested. At the same time, I’d sometimes hit my token limit on Gemini Free after asking just my first question of the day.
For casual AI users, Gemini 2.5 Flash is more than enough. It gives users plenty of access to a competent AI model without making them feel they’re constantly on the edge of hitting their limit. Still, the model isn’t perfect and can make mistakes, especially on the imaging side of things. Despite this, for general use cases, the free version of Gemini can take over your Google searches.
How CNET reviews AI models
I took a different approach to reviewing AI chatbots this year. As AI models have improved, simpler queries no longer stress them, and these models are also connected to the internet, which helps with their accuracy. So instead, I took a more experiential approach. AI chatbots are everything machines: the way I use one, as a writer and a journalist, will differ from how a coder, a lawyer or an artist would. Thankfully, since journalists are generalists, I feel my usage will cross-apply to a wide range of users. That does mean we won’t be asking the exact same questions of every AI model we review and simply comparing answers.
How accurate is Gemini Free?
Compared with past iterations of Gemini, Gemini Free, which runs on the newly updated 2.5 Flash model, is largely accurate. That’s not only because Gemini has an open connection to the internet to cross-reference information, but also because 2.5 Flash is a “thinking” model. This means the model isn’t simply working as “autocomplete on steroids”; it tries to follow a set of rules and rationales before giving an output.
Of course, there’s plenty of debate as to whether thinking or reasoning models actually do either, or whether, in reality, it’s nothing more than extra mathematical computation that gives sentence generation greater accuracy. Regardless, with Gemini 2.5 Flash, you can actually see how the model is thinking, a feature pulled from China’s DeepSeek R1, which hit the internet earlier this year.
As a new Nintendo Switch 2 owner, I’m excited to see which games come to the console. Rumors are surfacing that Stellar Blade, previously a PS5 exclusive (with a recent PC release), could be coming to the Switch 2. Given that it’s a technically demanding game, I was curious how it would run on Nintendo’s new handheld.
Gemini 2.5 Flash did a great job of giving me a sense of how Stellar Blade might perform. It broke down how well Unreal Engine 4 titles ran on the original Switch and extrapolated how a port might run on the more powerful Switch 2. It found that, in docked mode, a theoretical Stellar Blade port would most likely run at 1080p at a consistent 30 FPS with the help of DLSS, Nvidia’s AI upscaling. While I’m not a hardware expert, this conclusion seems in line with other Switch 2 ports, like Cyberpunk 2077.
I’ve also been researching whether it’s smart or economical to rent out a car on Turo in New York City. Turo is a rental service in which individuals rent out their own cars as a way to earn money; think of it like Airbnb, but for your car. New York can be a difficult market, given parking constraints, street rules and other costs of ownership. Gemini 2.5 Flash did a fantastic job of breaking down why a manual transmission Toyota GR86 could have advantages on Turo as a more enthusiast-oriented vehicle but would also be too niche for most drivers.
Gemini also pointed out specific engine issues with the 2022 model, which, albeit rare, is something to consider. It then broke down the math and what types of revenues and profits I might be looking at. It gave me low, medium and high estimates. It helped me conclude that renting out a manual transmission Toyota GR86 might be more trouble than it’s worth. I assume Gemini was able to pull from data on both dedicated forums and Reddit. (Google signed a $60 million licensing deal with Reddit last year.)
Gemini can pull from YouTube, Google Maps and a range of other Google-owned products, which gives it an advantage over other AI chatbots. If you want to know what ingredients a restaurant uses in its burritos, for example, Gemini can cross-reference Google Maps reviews to help find an answer. ChatGPT, by contrast, has to search through Yelp and other resources to find that answer.
Rate limits are both rare and random
Google says the newly updated Gemini 2.5 Flash model has a 1-million-token context window. That far surpasses the 128,000-token window of ChatGPT’s GPT-4o model, even on the paid tier. Granted, ChatGPT’s flagship GPT-4.1 model also has a 1-million-token context window.
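For rough perspective on what those token counts mean, here’s a quick back-of-the-envelope conversion, using the common rule of thumb of roughly 0.75 English words per token (an approximation, not an official Google or OpenAI figure):

```python
# Rough conversion of context-window sizes into approximate word counts.
# Assumes ~0.75 English words per token, a common rule of thumb rather
# than an official figure from Google or OpenAI.
WORDS_PER_TOKEN = 0.75

context_windows = {
    "Gemini 2.5 Flash": 1_000_000,
    "GPT-4o": 128_000,
}

for model, tokens in context_windows.items():
    print(f"{model}: ~{int(tokens * WORDS_PER_TOKEN):,} words")
# Gemini 2.5 Flash: ~750,000 words
# GPT-4o: ~96,000 words
```

By that math, a 1-million-token window should comfortably hold several novels’ worth of text, which makes the pasting issue described below all the more puzzling.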
I didn’t encounter many restrictions when using Gemini 2.5 Flash. I could continue asking questions and have it generate multiple images without it ever limiting me.
Randomly, however, I’d be hit with a limit, even if it was my first question of the day. When that happened, I had to wait a few hours for it to reset. I’m not sure how Google is measuring usage. Is it based on how much you use it in an hour or does it accumulate over days? The latter certainly wouldn’t make sense.
Given that Google says Gemini has a 1-million token context window, I was surprised when I couldn’t paste in the transcript from a two-hour meeting for summarization. Weirdly, I could only paste a quarter of it. When I asked Gemini why I couldn’t paste more, it was adamant that I could, confident that its large context window could handle whatever I could throw at it. I tried again; same result.
It was only after I had uploaded a .txt file that it was able to read the entire meeting and summarize it for me. When I asked Gemini why, it said that it’s possible Google put a character limit on direct-text inputs to prevent browser slowdown. I didn’t run into this problem with the paid version of Gemini.
While I didn’t test its coding capabilities, Google says Gemini Code Assist gives free users 180,000 completions per month, which, according to the company, means a user would have to code for 14 hours a day, every day, before hitting the limit.
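As a rough sanity check on that framing, the arithmetic works out like this, assuming a 30-day month (the per-hour and per-minute rates are my own estimates, not Google’s):

```python
# Back-of-the-envelope check of the "14 hours a day" framing for Gemini
# Code Assist's free tier, which Google caps at 180,000 completions a month.
# Assumes a 30-day month; the rates below are estimates, not Google figures.
MONTHLY_CAP = 180_000
DAYS_PER_MONTH = 30
HOURS_PER_DAY = 14

per_day = MONTHLY_CAP / DAYS_PER_MONTH      # 6,000 completions a day
per_hour = per_day / HOURS_PER_DAY          # ~429 completions an hour
per_minute = per_hour / 60                  # ~7 completions a minute

print(f"{per_day:,.0f}/day, {per_hour:,.0f}/hour, {per_minute:.1f}/minute")
```

That works out to roughly seven code completions every minute for 14 hours straight, which is why the cap is effectively unlimited for most individual developers.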
Google is strangely behind on shopping when compared to ChatGPT
Google makes its money from online ads, which accounted for 78% of its 2024 revenue. Google searches aren’t just filled with ads nowadays but also with product carousels and sponsored product posts, to the point that, in my opinion, they can be obnoxious. So you’d expect Google’s AI chatbot to be a shopping powerhouse too, right?
For shopping, Gemini 2.5 Flash lags far behind ChatGPT. Earlier this year, OpenAI rolled out an update for all ChatGPT users that made shopping a dynamic experience within the chatbot, with product links and corresponding images. I found ChatGPT Free’s shopping experience to be rather good, despite occasional linking hiccups. OpenAI says it isn’t monetizing shopping recommendations.
Shopping on Gemini, however, is a lackluster experience. Sure, for product research, Gemini 2.5 Flash can pull up the necessary bits of information and cross-compare products. But it doesn’t link to products unless asked. And it won’t pull in images like a Google Search would, either.
When shopping for webcams to connect to my Nintendo Switch 2 for Mario Kart World gaming, Gemini did a solid job of recommending products and was even able to cross-reference a Reddit post I linked to.
Oddly, during that same webcam search, I ran into an error that simply said, “something went wrong,” with no further explanation. I waited a bit but ultimately had to start a new chat to get things working again.
An error pop-up in Google Gemini Free
Image generation with Gemini Free: You get what you pay for
Gemini 2.5 Flash is incredibly generous with image generation. Unfortunately, getting it to generate the correct image is a frustrating process.
I wanted Gemini to create a nostalgic-feeling image of a boy playing a Game Boy in the back of his parents’ car during a nighttime road trip. While Gemini was able to make the image, the world logic was completely off.
At first, Gemini 2.5 Flash generated an image of a sad-looking boy.
Gemini Free incorrectly renders an image.
When I called Gemini out, saying this was not at all what I was looking for, it course-corrected but still didn’t do a great job. One subsequent try was certainly better, but it didn’t have the color palette I was looking for. Also, the boy was in the front seat of the car. Not really safe. In another iteration, the car in the background was driving away, which breaks basic world logic.
A bizarre image generated by Gemini Free.
After much back-and-forth, Gemini continued to generate images that were wrong and looked bizarrely off. In one, for instance, the boy was in the front seat with his parents but facing toward the back. I eventually gave up.
While Gemini 2.5 Flash is fast and generous with its image generation, it’s far from ideal. Google still needs to fix the internal logic within Gemini. Google DeepMind’s Demis Hassabis talked about “world models” at Google I/O earlier this year, models that do a much better job of understanding and representing lifelike physics. Hopefully, that tech trickles down to the free version of Gemini soon.
Gemini Free gives more than ChatGPT Free
Google deserves credit for how much it has improved Gemini this past year. The AI chatbot is far more accurate and provides a feature-rich experience, and the fact that so much of it is given away for free is also impressive. It definitely puts the other AI chatbots on notice and shows how the power of Google can be difficult to compete against, especially as the company tries to establish Gemini as people’s go-to AI chatbot.
ChatGPT, Claude and Perplexity can compete by delivering higher-quality and more accurate information. Of course, that will require more investment, innovation and server spending, which might be harder for companies not as rich as Google. The free version of Claude, for example, runs on Sonnet 4, a “hybrid reasoning model” that uses a multi-tier approach to getting the best out of an AI.
Still, it’s impressive that Google is giving everyone access to a “thinking” model for zero dollars. Considering DeepSeek R1 did the same earlier this year, this might have forced Google’s hand. Regardless, this newly improved Gemini is a step in the right direction.