As Google continues to improve Gemini, it recently released Gemini 2.0 Flash Thinking Experimental, the latest prototype of its language model. This version of Gemini can reason through problems on its own and is built for “multimodal understanding, reasoning, and coding”. To be clear, Gemini 2.0 cannot reason like a human being; instead, it breaks instructions down into smaller tasks and works through them step by step.
Google Product Lead Logan Kilpatrick demonstrated an example of what Gemini 2.0 can do, with the AI platform solving a complex physics question and explaining how it came up with its solution.
Those curious about this version of Gemini can try it out on Google’s AI Studio. We asked Gemini 2.0 about the optimal running temperature for laptops, and it gave a comprehensive answer that went beyond listing specific temperature figures for each use case. Aside from the more comprehensive answer, it delivered its response several seconds faster than the current iteration of Gemini, which shows that Google is making leaps with its language model even in an experimental version.
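For those who would rather poke at the model programmatically than through the AI Studio web interface, here is a minimal sketch using Google’s google-generativeai Python library. The model ID “gemini-2.0-flash-thinking-exp” and the placeholder API key are assumptions on our part, so check AI Studio for the exact identifier before running it.

```python
# Minimal sketch: querying the experimental model via the Gemini API.
# Assumptions: you have an API key from Google AI Studio, and the
# experimental model is exposed under the ID below (ID may differ).
import google.generativeai as genai

genai.configure(api_key="YOUR_AI_STUDIO_API_KEY")  # key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "What is the optimal running temperature for laptops?"
)
print(response.text)
```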
Google’s push for this iteration of Gemini is in line with what OpenAI has to offer: OpenAI recently unveiled its o1 reasoning model, which is available to its subscribers at a hefty $200 (~Php 12,000) a month.