OpenAI has announced GPT-4o, an evolution of the model behind the popular ChatGPT, promising to be even faster and more efficient. The new model lets users not only type but also speak to ChatGPT and show it objects, and it responds almost instantly, making the interaction feel much more like a real conversation.
A New Dimension in Interactivity
In a humorous comparison on social media, some users are saying that GPT-4o resembles the virtual assistant from the movie "Her," in which the protagonist falls in love with an advanced operating system. Sam Altman, CEO of OpenAI, even mentioned the movie on his profile on X (formerly Twitter).
GPT-4o in Action
OpenAI's demonstration included a video where a person shows their outfit for a job interview and asks for ChatGPT's opinion, as well as a test where it creates a song, showcasing the model's versatility.
Key Differences from GPT-4 Turbo
GPT-4o stands out from its predecessor, GPT-4 Turbo, in several key areas:
- Pricing: GPT-4o is 50% cheaper than GPT-4 Turbo, priced at $5 per million input tokens and $15 per million output tokens.
- Rate Limits: GPT-4o boasts rate limits that are 5 times higher than those of GPT-4 Turbo—up to 10 million tokens per minute.
- Speed: GPT-4o operates at double the speed of GPT-4 Turbo.
- Vision Capabilities: GPT-4o performs better in evaluations related to vision capabilities compared to GPT-4 Turbo.
- Multilingual Support: GPT-4o offers improved support for non-English languages over GPT-4 Turbo.
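The pricing above can be turned into a quick back-of-the-envelope cost estimate. The sketch below is illustrative only: the GPT-4o rates ($5 and $15 per million tokens) come from the article, and the GPT-4 Turbo rates ($10 and $30) are assumed from the "50% cheaper" claim; the function name `estimate_cost` is hypothetical, not part of any API.

```python
# Rough per-request cost estimate from the rates quoted above (USD per 1M tokens).
# GPT-4 Turbo rates are inferred from the "50% cheaper" comparison.
PRICES = {
    "gpt-4o":      {"input": 5.00,  "output": 15.00},
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request with the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a request with 2,000 input tokens and 500 output tokens.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 2000, 500):.4f}")
```

For that example request, GPT-4o comes out at exactly half the GPT-4 Turbo cost, matching the 50% figure.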
Faster and Smarter
GPT-4o impresses with its speed, responding to audio prompts in an average of 320 milliseconds, far faster than the 2.8 seconds of GPT-3.5 or the 5.4 seconds of the paid GPT-4. "We trained a single model to handle text, vision, and audio, using the same neural network to process all inputs and outputs," explained OpenAI. Sam Altman is enthusiastic about the result, describing GPT-4o as "the best model ever created by the company, intelligent, fast, and natively multimodal."
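To put those latency figures in perspective, the speedups can be worked out directly from the numbers quoted above (320 ms for GPT-4o, 2.8 s for GPT-3.5, 5.4 s for GPT-4); this is simple arithmetic on the article's figures, not a benchmark.

```python
# Average voice-response latencies quoted in the article.
GPT4O_MS = 320     # GPT-4o: 320 milliseconds
GPT35_S = 2.8      # GPT-3.5: 2.8 seconds
GPT4_S = 5.4       # GPT-4: 5.4 seconds

# Convert seconds to milliseconds, then divide to get the speedup factor.
speedup_vs_gpt35 = GPT35_S * 1000 / GPT4O_MS
speedup_vs_gpt4 = GPT4_S * 1000 / GPT4O_MS
print(f"vs GPT-3.5: {speedup_vs_gpt35:.1f}x, vs GPT-4: {speedup_vs_gpt4:.1f}x")
```

In other words, GPT-4o answers roughly 9 times faster than GPT-3.5 and nearly 17 times faster than GPT-4 in voice interactions.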