OpenAI Launches GPT-4 Turbo Model with Latest Knowledge Cutoff

Image: Sam Altman announcing GPT-4 Turbo

At its first-ever developer conference, OpenAI launched a new model called GPT-4 Turbo. It improves on the GPT-4 model across the board and brings numerous changes that developers and general users have been requesting for a long time. In addition, the new model's knowledge is updated up to April 2023, and it is much cheaper to use. To learn all about OpenAI's GPT-4 Turbo model, read on.

GPT-4 Turbo Model is Here!

The GPT-4 Turbo model supports a 128K context window, which is even larger than Claude's 100K context length. Until now, OpenAI's GPT-4 model was generally available with an 8K context window, with 32K offered only to select users. According to OpenAI, the new model can ingest more than 300 pages of a book in one go, and that's impressive.
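If you're a developer wondering what that looks like in practice, here is a minimal sketch of feeding a long document to the new model over the Chat Completions API. It assumes the openai Python SDK (v1) and the gpt-4-1106-preview model name OpenAI announced for GPT-4 Turbo; the file path is just a placeholder.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "book.txt" is a placeholder for a long document; GPT-4 Turbo's 128K window
# is large enough to fit roughly 300 pages of text in a single request.
with open("book.txt", "r", encoding="utf-8") as f:
    long_document = f.read()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
    messages=[
        {"role": "system", "content": "Summarize the document the user provides."},
        {"role": "user", "content": long_document},
    ],
)
print(response.choices[0].message.content)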

Image: Sam Altman announcing the GPT-4 Turbo knowledge cutoff date

Not to forget, OpenAI has finally updated the knowledge cutoff to April 2023 with the GPT-4 Turbo model. On the user side, the ChatGPT experience has also been improved, and users can access the GPT-4 Turbo model starting today. What's great is that you no longer need to select a particular mode for a task: ChatGPT can now smartly pick what to use when needed. It can browse the web, use a plugin, analyze code, and more, all in one mode.

Image: Sam Altman demonstrating new features of the GPT-4 Turbo model

For developers, a lot of new things have been announced. First off, the company has launched a new text-to-speech (TTS) model that generates incredibly natural-sounding speech in six different preset voices. Furthermore, OpenAI has released the next version of its open-source speech recognition model, Whisper V3, which will soon be available via the API.
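As a rough sketch, here is how the new TTS endpoint can be called with the openai Python SDK; the tts-1 model and the alloy voice are the names OpenAI announced, and the output filename is arbitrary.

from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",   # the new text-to-speech model
    voice="alloy",   # one of the six preset voices
    input="GPT-4 Turbo was announced at OpenAI's first developer conference.",
)
speech.stream_to_file("announcement.mp3")  # write the generated audio to disk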

What’s interesting is that APIs for DALL·E 3, GPT-4 Turbo with Vision, and the new TTS model have been released today. Coca-Cola is launching a Diwali campaign today that lets customers generate Diwali cards using the DALL·E 3 API. Moving on, there is a new JSON mode that constrains the model to respond with valid JSON output.
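Here is a minimal sketch of JSON mode with the openai Python SDK. Note that the prompt itself has to mention JSON, otherwise the API rejects the request in this mode; the keys in the example are made up for illustration.

import json
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # enable JSON mode
    messages=[
        # The word "JSON" must appear in the prompt when JSON mode is enabled
        {"role": "system", "content": "Reply in JSON with the keys 'model' and 'context_window'."},
        {"role": "user", "content": "Which model was announced and how big is its context window?"},
    ],
)
print(json.loads(response.choices[0].message.content))  # parses cleanly as JSON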

Plus, function calling has been improved in the newer model, and OpenAI is giving developers more control over the model's behavior: you can now set a seed parameter to get consistent, reproducible outputs.
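For the seed parameter, a minimal sketch (again assuming the openai Python SDK and gpt-4-1106-preview; OpenAI describes the reproducibility as best-effort rather than guaranteed):

from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        seed=42,         # same seed + same parameters -> (mostly) the same output
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Two identical calls should now return near-identical text.
print(ask("Name three uses of a 128K context window."))
print(ask("Name three uses of a 128K context window."))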

Image: Sam Altman announcing GPT-4 Turbo pricing

Coming to fine-tuning support, developers can now apply for GPT-4 fine-tuning under an experimental access program. GPT-4 customers have also been upgraded to a higher rate limit, with double the tokens per minute. Finally, on pricing, the GPT-4 Turbo model is significantly cheaper than GPT-4: it costs 1 cent ($0.01) per 1,000 input tokens and 3 cents ($0.03) per 1,000 output tokens. Effectively, OpenAI says GPT-4 Turbo is 2.75x cheaper than GPT-4.
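As a back-of-the-envelope illustration, using the quoted GPT-4 Turbo prices and GPT-4's standard 8K pricing of $0.03 per 1,000 input tokens and $0.06 per 1,000 output tokens, here is what one hypothetical workload would cost (the token mix is made up, and OpenAI's 2.75x figure is a blended average):

# Prices: GPT-4 Turbo $0.01/$0.03 per 1K tokens (input/output),
# GPT-4 8K $0.03/$0.06 per 1K tokens. The 100K-in / 10K-out mix is illustrative.
def cost(input_tokens: int, output_tokens: int, in_rate: float, out_rate: float) -> float:
    return input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate

turbo = cost(100_000, 10_000, 0.01, 0.03)  # $1.30
gpt4 = cost(100_000, 10_000, 0.03, 0.06)   # $3.60
print(f"GPT-4 Turbo: ${turbo:.2f}, GPT-4: ${gpt4:.2f}, ratio: {gpt4 / turbo:.2f}x")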

So, what do you think about the new GPT-4 Turbo model? Let us know in the comment section below.

Source: OpenAI