OpenAI Introduces GPT-4o; Opens Access to All ChatGPT Users

Image Courtesy: OpenAI
In Short
  • The GPT-4o model is truly multimodal, and it will be available to all ChatGPT users, free and paid. ChatGPT Plus features will also be available to everyone.
  • The new GPT-4o model is excellent at voice conversations. It can see and talk naturally, with very low latency and no awkward interruptions.
  • ChatGPT also gets a desktop app for macOS. You can share your screen with ChatGPT and continue the conversation using voice seamlessly.

At the Spring Update event, OpenAI introduced its latest flagship model, GPT-4o (‘Omni’), and it’s going to be available to everyone, including free and paid ChatGPT users. Finally, free users get access to GPT-4-class intelligence without any charge. Not only that, but all of the tools and premium features of ChatGPT Plus are being made available to free users as well.

To name a few of these features, free users can now browse the internet on ChatGPT; upload images and use GPT-4o’s vision capabilities; upload and analyze files and documents; create charts and perform Advanced Data Analysis (earlier called Code Interpreter); enable the Memory feature; and access GPTs and the GPT Store as well.

Basically, OpenAI is bringing all of the paid features to the free version of ChatGPT. OpenAI says the new model will be rolled out to all users in the next few weeks. Keep in mind that there is a limit on the number of messages for free users. Once you reach the limit, you will be switched to the GPT-3.5 model automatically.

GPT-4o is Truly Multimodal

The most significant thing about GPT-4o is that it’s a multimodal model from the ground up. Earlier, OpenAI used different models for different modalities, which increased latency and resulted in interruptions and a poor experience. During Voice Chat, it used Whisper for voice processing; for vision, it used GPT-4V; and for text processing and reasoning, well, GPT-4.

However, the GPT-4o model can process all three modalities (text, audio, and vision) at the same time and reason across them intelligently. In some of the demos shown on stage, it really felt like a scene straight out of the movie Her. You, of course, need a good internet connection to experience something like that.

Image Courtesy: OpenAI

The GPT-4o model can see things in real time and express feelings naturally, with a variety of tones. The conversation now feels less robotic and more spontaneous. You can also interrupt it simply by starting to speak and then continue the conversation.

In addition to that, the GPT-4o model understands the emotion in your voice as well. For example, if you are feeling anxious and breathing rapidly, it tells you to calm down. It can also translate languages in real time. OpenAI says the new GPT-4o model supports 50 languages.
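For developers, the practical upshot of a single multimodal model is that text and an image can go into one request instead of being routed through separate models. Below is a minimal sketch using the official OpenAI Python SDK; the image URL is just a placeholder, and the exact set of modalities exposed through the API may differ from the ChatGPT experience described above.

    # Minimal sketch: one request carrying both text and an image to GPT-4o.
    # Assumes the OPENAI_API_KEY environment variable is set; the image URL is illustrative.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this picture."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    )

    print(response.choices[0].message.content)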

ChatGPT Gets a Desktop App for macOS

Finally, ChatGPT gets a desktop app for macOS, and you can voice chat with ChatGPT on your Mac as well. OpenAI has also added vision capability to its macOS app, which is truly remarkable. You can turn on vision and let it see your screen. If you are coding and want ChatGPT to take a look at your code, it can see it and reason about it. That’s pretty excellent. I am not sure if a similar ChatGPT app is coming to Windows.

Image Courtesy: OpenAI

As for developers, the new GPT-4o model is available via the API. Compared to GPT-4 Turbo, it is 50% cheaper, 2x faster, and comes with 5x higher rate limits.
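Because GPT-4o sits on the same Chat Completions endpoint, switching over from GPT-4 Turbo is typically just a one-line change of the model name. The sketch below (again assuming the official Python SDK) also streams the response, which is where the speed difference is most noticeable.

    # Minimal sketch: swap the model name and stream tokens as they arrive.
    from openai import OpenAI

    client = OpenAI()

    stream = client.chat.completions.create(
        model="gpt-4o",  # previously something like "gpt-4-turbo"
        messages=[{"role": "user", "content": "Summarize the GPT-4o announcement in one sentence."}],
        stream=True,
    )

    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="", flush=True)
    print()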

Now that OpenAI has brought all of the paid features to free users, you might be wondering what is left for ChatGPT Plus subscribers. Well, OpenAI says paid users will have 5x the message capacity of free users. In addition, Mira Murati announced at the end of the event that OpenAI will be releasing its next “frontier” model pretty soon. So, paid users will get access to the “next big thing” before long.

VIA OpenAI