On Monday, OpenAI introduced a new artificial intelligence model called GPT-4o. The model is described as a big step toward making human-computer interaction more natural. GPT-4o can accept any combination of text, audio, and image input and produce output in those same formats. It also stands out for its ability to recognize emotions, to be interrupted mid-conversation, and to respond at nearly the same speed as a human.
GPT-4o delivers GPT-4-level intelligence to all of OpenAI's users, including free users, CTO Mira Murati said in a livestreamed presentation, calling it a major step forward in ease of use. During the presentation, GPT-4o was shown performing live translation between English and Italian, helping a researcher solve a linear equation on paper in real time, and guiding an OpenAI executive through deep-breathing techniques simply by listening to his breathing.
OpenAI’s new GPT-4o model
No different from humans, and now much smarter
The “o” in GPT-4o stands for “omni,” a reference to the model’s versatile capabilities. OpenAI explained that GPT-4o is trained on text, images, and audio, and that all inputs and outputs are processed by the same neural network. Unlike the company’s previous models, GPT-3.5 and GPT-4, it lets users ask questions simply by speaking, and it can transcribe speech into text while capturing tone and emotion.
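For developers, this single-network multimodality surfaces in the API as messages whose content mixes text and image parts. The sketch below builds such a request body without making any network call; the payload shape follows OpenAI's published Chat Completions format, but the question and image URL are placeholder assumptions, not examples from the announcement.

```python
import json

def build_multimodal_request(model: str, question: str, image_url: str) -> str:
    """Return a JSON request body that mixes text and image input
    in a single user message, per the Chat Completions format."""
    payload = {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }
    return json.dumps(payload)

# Placeholder inputs for illustration only.
body = build_multimodal_request(
    "gpt-4o",
    "What equation is written on this paper?",
    "https://example.com/equation.jpg",
)
print(json.loads(body)["model"])
```

Because one model handles every modality, the same message array can carry text alone, an image alone, or both, with no separate vision endpoint.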
The new model will roll out to all users, including free ChatGPT users, in the coming weeks, and a desktop version of ChatGPT, initially for Mac, becomes available to paid users starting today. OpenAI’s announcement came a day before Google I/O, Google’s annual developer conference; shortly after OpenAI unveiled GPT-4o, Google introduced a version of its own AI chatbot, Gemini, with similar capabilities.
Source link: https://www.teknolojioku.com/yapay-zeka/openainin-yeni-gpt-4o-modeli-insandan-farksiz-artik-cok-daha-zeki-66481c80a736ad837d0e096b