OpenAI Blog Leak Hints at GPT-4.5 Turbo, Sparking Interest

Image Courtesy: OpenAI
In Short
  • Bing and DuckDuckGo indexed a webpage from the OpenAI blog that suggests the company is working on an upcoming GPT-4.5 Turbo model.
  • It's touted to deliver better "speed, accuracy, and scalability," surpassing GPT-4 Turbo.
  • Strangely, the captured meta description says the model will have a knowledge cutoff of June 2024.

In a surprising development, a Reddit post last night sparked interest in the AI community about an upcoming GPT-4.5 Turbo model from OpenAI. The company appears to have accidentally published a blog post about GPT-4.5 Turbo, which was then indexed by Bing and DuckDuckGo. The meta description claims that GPT-4.5 Turbo surpasses GPT-4 Turbo in speed, accuracy, and scalability.

This raised the curiosity of many users on Reddit and X (formerly Twitter). I also tried to access the OpenAI blog post for the GPT-4.5 Turbo model from here, but it now throws a 404 error. The page has since been de-indexed on Bing and DuckDuckGo and no longer shows up in search results. That said, when I looked it up last night, Bing did show the indexed page scraped from the OpenAI blog.

Another post on X also shows that GPT-4.5 Turbo is mentioned in the webpage's source code. It reads:

“OpenAI has announced GPT-4.5 Turbo. GPT-4.5 Turbo is a new model that exceeds GPT-4 Turbo in speed, accuracy, and scalability. See how GPT-4.5 Turbo can generate natural language and code with a 256k context window and a June 2024 knowledge cutoff.”

Strangely, the captured description says that GPT-4.5 Turbo will have a knowledge cutoff of June 2024. Many speculate that this is either a typo or a hint that OpenAI may release the GPT-4.5 Turbo model in the next three to four months, around July or August. Whatever the case, it looks like OpenAI will release an intermediate model before launching the next-generation GPT-5 model.

That said, the good news is that OpenAI is finally expanding the context window and will bring support for 256K tokens with GPT-4.5 Turbo. With the current GPT-4 Turbo model, you can process only up to 128K tokens. After the launch of Claude 3 (200K tokens) and Gemini 1.5 Pro (1 million tokens), it's time for OpenAI to double down on expanding the context window.
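To put those token counts in perspective, here is a minimal sketch of how you might check whether a long prompt fits within the current 128K-token limit versus the rumored 256K window, using OpenAI's tiktoken tokenizer. The 256K figure and the choice of the cl100k_base encoding are assumptions based on the leaked description, not confirmed GPT-4.5 Turbo specifications.

```python
# Rough sketch: count tokens in a prompt and compare against context window limits.
# The 256K limit is the rumored GPT-4.5 Turbo figure from the leak (unconfirmed);
# cl100k_base is the encoding used by GPT-4-era models and is assumed here.
import tiktoken

GPT_4_TURBO_CONTEXT = 128_000            # current GPT-4 Turbo context window
RUMORED_GPT_4_5_TURBO_CONTEXT = 256_000  # hypothetical limit from the leak

def fits_in_context(text: str, limit: int) -> bool:
    """Return True if `text` encodes to no more tokens than `limit`."""
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text)) <= limit

# Example: a very long document dumped into a single prompt.
document = "lorem ipsum " * 50_000
print(fits_in_context(document, GPT_4_TURBO_CONTEXT))            # likely exceeds 128K
print(fits_in_context(document, RUMORED_GPT_4_5_TURBO_CONTEXT))  # may fit in 256K
```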

So do you think OpenAI is going to release the GPT-4.5 Turbo model in the next few months? Let us know your thoughts in the comment section below.
