Google Gemini 2.0 Flash Thinking vs OpenAI o1

The world of artificial intelligence is undergoing a major shift, with both OpenAI and Google announcing new language models designed for advanced reasoning. While OpenAI launched the o1 family some time ago, Google recently introduced its own model, Gemini 2.0 Flash Thinking.

In early September, OpenAI announced the o1 series, designed to spend more time reasoning before producing a response. These models handle complex reasoning tasks better than previous models, especially in areas such as science, coding, and mathematics.

Following this announcement, Google introduced its own reasoning-oriented language model, Gemini 2.0 Flash Thinking. This experimental model is available to developers under the name “gemini-2.0-flash-thinking-exp-1219” in Google AI Studio, and Google says it targets multimodal understanding, reasoning, and coding.

According to Google, the basis of Gemini 2.0 Flash Thinking’s capabilities lies in the increased inference-time compute the model uses. Although the tech giant did not offer specific comparisons, reports from Chatbot Arena showed the model taking first place across all categories, surpassing the standard Gemini 2.0 Flash as well.

The Gemini 2.0 Flash Thinking model supports context lengths of more than 128 thousand tokens and has a knowledge cutoff of August 2024. Developers can access the new model through the Gemini API in Google AI Studio and Vertex AI.
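For illustration, here is a minimal sketch of what calling the experimental model through the Gemini API might look like, using Google’s google-generativeai Python SDK. The API key and prompt are placeholders, and the exact availability of the model in your account is an assumption; only the model name comes from the article itself.

```python
# Minimal sketch: querying the experimental thinking model via the Gemini API.
# Assumes the google-generativeai SDK is installed and an API key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Model name as cited in the article; availability may vary by account/region.
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp-1219")

response = model.generate_content("Explain, step by step, why the sky appears blue.")
print(response.text)
```

The same model can also be selected directly in the Google AI Studio interface or served through Vertex AI, as noted above.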

Around the same time as Google’s announcement, OpenAI said that the o1 model is now available to more developers. The updated o1 model reportedly delivers improved results on various popular AI benchmarks and offers optimizations for customer support scenarios.

Source link: https://shiftdelete.net/dusunen-yapay-zeka-donemi-openai-o1-vs-google-gemini-2-0