
- OpenAI has finally unveiled the GPT-4.5 model and it's rolling out to ChatGPT Pro subscribers. ChatGPT Plus users will get GPT-4.5 next week.
- It's not a frontier model and doesn't outperform OpenAI's o-series reasoning models, but it delivers better performance than GPT-4o.
- OpenAI says GPT-4.5 has a thoughtful personality and excels at creative writing. It also exhibits fewer hallucinations.
OpenAI introduced GPT-4o, a non-reasoning model, to ChatGPT users back in May 2024. Nearly 10 months later, the hot AI startup has today unveiled its next-generation GPT-4.5 model, codenamed ‘Orion’. GPT-4.5 will be the last non-reasoning model from OpenAI, as the upcoming GPT-5 will merge the GPT series with the o3 reasoning model into a unified AI system.
OpenAI says GPT-4.5 is the “largest and most knowledgeable language model” it has developed so far, but it’s not a frontier model. It’s designed to be more general-purpose than the STEM-focused o-series reasoning models.
This means GPT-4.5 excels at creative writing, natural conversation, and practical problem-solving, and it offers a broader knowledge base. Note that it’s a multimodal model, so it can process images and files too.
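For developers who reach OpenAI models programmatically rather than through ChatGPT, here is a minimal sketch of what a multimodal request could look like, assuming GPT-4.5 is exposed through OpenAI’s standard Chat Completions API. The model identifier `gpt-4.5-preview` and the image URL are illustrative assumptions, not details confirmed in this article.

```python
# Minimal sketch: sending text plus an image to GPT-4.5 via the OpenAI Python SDK.
# The model name "gpt-4.5-preview" is an assumed identifier for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed identifier, not confirmed by the article
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.png"},  # placeholder URL
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```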
Interestingly, GPT-4.5 exhibits fewer hallucinations than GPT-4o: its hallucination rate drops to 37.1% from GPT-4o’s 61.8%, while its accuracy improves to 62.5% from GPT-4o’s 38.2%. Beyond that, early testers say GPT-4.5 feels “warm, intuitive, and natural” in conversation.

As for benchmarks, GPT-4.5 outperforms GPT-4o on MMLU across 14 languages. In SWE-bench Verified, which evaluates the ability to solve real-world software issues, GPT-4.5 achieves 38% while GPT-4o gets 30.7%. That said, it still performs worse than the o1, o3, and o3-mini reasoning models.
In SWE-Lancer, a new benchmark developed by OpenAI that evaluates performance on real-world, economically valuable software engineering tasks, GPT-4.5 solves 32.6% of the tasks, compared to GPT-4o’s 23.3%. In GPQA (Science), GPT-4.5 scores 71.4% while GPT-4o gets 53.6%.

As for availability, GPT-4.5 is rolling out to ChatGPT Pro users starting today. OpenAI says it will reach ChatGPT Plus, Team, and Edu users starting next week.
All in all, it appears that scaling LLMs via pre-training alone has hit a wall, which is why OpenAI says GPT-4.5 will be its last non-reasoning model. The benchmark numbers make it clear that the o-series reasoning models perform exceptionally well, even though they are built on older base models.
Nevertheless, GPT-4.5 performs better than GPT-4o in every aspect while being 10x more efficient. It has a refined personality, produces superior writing, and has broader world knowledge. Now, anticipation builds for the unified GPT-5 AI system, which will integrate the o3 reasoning model and is likely to be released in May this year.