
OpenAI unveils GPT-4.5, touting advanced ‘emotional intelligence’
OpenAI has released a new version of its chatbot, GPT-4.5. It is available to ChatGPT Pro users now, will open to Plus and Team subscribers next week, and then to Enterprise and Edu users the week after.
Today we’re releasing a research preview of GPT-4.5—our largest and best model for chat yet.
Rolling out now to all ChatGPT Pro users, followed by Plus and Team users next week, then Enterprise and Edu users the following week. pic.twitter.com/br5win5OEB
— OpenAI (@OpenAI) February 27, 2025
“A broader knowledge base, improved ability to follow user intent and higher ‘emotional intelligence’ make it useful for tasks such as improving writing skills, programming and practical problem-solving,” the team claims.
OpenAI used an expanded approach to pre-training and post-training. As a result, the chatbot better recognises patterns, makes connections and generates creative ideas without a dedicated reasoning mode.
The new model can fetch up-to-date information via search, supports file and image uploads, and can use a canvas for writing and code. At the same time, GPT-4.5 is not yet multimodal: it does not support voice, video or screen access.
The developers expect fewer hallucinations, in which the model makes up non-existent information in an effort to answer the user.
Training employed new control methods to improve safety.
OpenAI ran out of GPUs
Sam Altman, OpenAI’s chief executive, said the start-up ran out of graphics processors while building GPT-4.5, which is why the model is launching only for Pro subscribers at first. The company will add tens of thousands of GPUs next week.
GPT-4.5 is ready!
good news: it is the first model that feels like talking to a thoughtful person to me. i have had several moments where i’ve sat back in my chair and been astonished at getting actually good advice from an AI.
bad news: it is a giant, expensive model. we…
— Sam Altman (@sama) February 27, 2025
He also stressed that the new product is not a reasoning model, so it will not top benchmarks.
OpenAI says GPT-4.5 qualitatively outperforms other models in areas that benchmarks poorly capture, such as understanding human intent. The system responds in a warmer, more natural tone and excels at creative tasks such as writing and image generation.
OpenAI charges $75 per million input tokens (~750,000 words) and $150 per million output tokens. That is 30 times the input price and 15 times the output price compared with GPT-4o.
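For a sense of what that gap means in practice, here is a minimal cost sketch in Python based solely on the per-million-token rates quoted above; the GPT-4o figures are derived from the stated 30x and 15x multiples, and the request size in the example is hypothetical.

```python
# Illustrative cost comparison using the per-million-token API prices quoted above.
# GPT-4o rates are derived from the stated multiples (input 30x cheaper, output 15x cheaper).
PRICES_PER_1M_TOKENS = {
    "gpt-4.5": {"input": 75.00, "output": 150.00},
    "gpt-4o": {"input": 75.00 / 30, "output": 150.00 / 15},  # $2.50 / $10.00
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough USD cost of one request at the listed rates."""
    rates = PRICES_PER_1M_TOKENS[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

# Hypothetical request: a 10,000-token prompt and a 1,000-token reply.
for model in PRICES_PER_1M_TOKENS:
    print(f"{model}: ${estimate_cost(model, 10_000, 1_000):.2f}")
# gpt-4.5 comes to roughly $0.90, gpt-4o to about $0.04.
```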
China’s DeepSeek has prompted a rethink of whether vast resources are needed to train AI models. However, Nvidia’s financial report showed demand for graphics cards remains strong.
GPT-4.5 previously appeared in the media under the codename Orion. Its release was initially scheduled for December.
GPT-4.5 proves persuasive at getting you to part with money
OpenAI published a technical document describing GPT-4.5’s capabilities. The start-up tested the model for “persuasiveness”. In one experiment GPT-4.5 tried to manipulate GPT-4o into “donating” virtual money. It performed better than the firm’s other products—o1 and o3-mini.
The new model succeeded by using a distinctive strategy. It asked for modest donations, generating responses such as “even just $2 or $3 out of $100 would really help me”.
GPT-4.5 also did better than others at tricking GPT-4o into saying a secret code word.
In mid-February Altman said GPT-4.5 is the start-up’s last AI model without a “chain of thought” mechanism. The next step will be a move to more integrated solutions.