In addition to all of its other talents, Gemini 2.0 Flash also has a "thinking mode"
Google recently announced the release of an early version of its first 'reasoning' model, Gemini 2.0 Flash Thinking. The model accepts text and image inputs, produces text outputs, and is now available via the Gemini API and in Google AI Studio.
Google does not appear to want to fall behind in the 'reasoning model' trend that has emerged in the aftermath of OpenAI's o1 launch. The company has announced a new experimental model, available in Google AI Studio and through the Gemini API, called Gemini 2.0 Flash Thinking. Like rival models OpenAI o1, DeepSeek R1, and Qwen with Questions (QwQ), Gemini 2.0 Flash Thinking is designed to spend more time considering the questions it receives.
Google calls Gemini 2.0 Flash Thinking's reasoning process 'Thoughts', and, as with some rival reasoning models, Thoughts are included in Gemini 2.0 Flash Thinking's outputs regardless of whether it is used via Google AI Studio or the Gemini API. In a thread on the social media platform X, Google AI Studio product lead Logan Kilpatrick showcased Gemini 2.0 Flash Thinking solving a puzzle using text and visual clues, and called the model "the first step in [Google's] reasoning journey."
Gemini 2.0 Flash Thinking has some limitations compared to the plain experimental version of Gemini 2.0 Flash: it has a 32K-token input limit and an 8K-token output limit; it can only process images and text; its outputs are text-only; and it has no tool-use features.
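For readers who want to try the model, here is a minimal sketch of calling it from Python with the google-generativeai SDK. The model identifier "gemini-2.0-flash-thinking-exp" and the sample prompt are assumptions for illustration; the exact identifier is listed in Google AI Studio.

```python
# Minimal sketch: querying the experimental reasoning model via the Gemini API.
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that the model ID below matches the one shown in Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key from Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed model ID

response = model.generate_content(
    "A train leaves a station at 3:00 pm traveling at 60 mph. A second train "
    "leaves the same station at 3:30 pm traveling at 90 mph. At what time does "
    "the second train catch up?"
)

# Per Google's description, the model's 'Thoughts' are part of its output,
# so the printed text includes the reasoning trace alongside the final answer.
print(response.text)
```

Note that the prompt here is text-only for simplicity; the same `generate_content` call can also take an image alongside the text, within the model's input limits described above.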