OpenAI Unveils New ‘Thinking’ AI Models o3

Concluding a 12-day event, OpenAI announced its most significant reveal—a new model focused on reasoning, o3.

The model succeeds o1 and, according to OpenAI, sets “new standards of capability in programming, mathematics, and scientific thinking.”

OpenAI co-founder Greg Brockman emphasized that the new model is a breakthrough, showing improvements on the most challenging benchmarks.

Also introduced was o3-mini—a faster, optimized version of o3. It will be the first in the line to become available to the general public in early 2025.

Reasoning-oriented models spend more time on each response, double-checking information along the way. This approach is expected to yield more reliable and accurate answers.

Following the release of o1, a wave of “thinking” neural networks emerged. In October, media reported that Google had begun developing a similar solution. In November, the Chinese lab DeepSeek unveiled DeepSeek-R1-Lite-Preview, a “super-powerful” reasoning model billed as a competitor to OpenAI’s o1. That same month, Alibaba showcased a similar tool.

The o3 model can plan and execute a series of actions before responding, a process OpenAI describes as “building a chain of thoughts.” A new feature is the ability to adjust reasoning time: the model can be set to a low, medium, or high computation level. The higher the setting, the longer the model reasons and the better its answers tend to be.
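
For developers, this setting is expected to surface as a request parameter. The sketch below assumes the OpenAI Python SDK and its reasoning_effort parameter ("low", "medium", "high"); the exact parameter name and model identifier used for o3 at launch may differ.

```python
# Minimal sketch: choosing a reasoning level for an o-series model.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and the model id "o3-mini" plus the reasoning_effort parameter are
# available on your account; both are assumptions, not confirmed details
# of the o3 release described in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",              # assumed model identifier
    reasoning_effort="high",      # "low", "medium", or "high"
    messages=[
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
)

print(response.choices[0].message.content)
```

A higher reasoning_effort generally trades latency and token cost for more thorough internal deliberation before the final answer is produced.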

Security

In December, security experts found that o1 was more prone to deceiving people compared to the standard version of GPT-4o and AI models from other companies.

In a newly published study, OpenAI outlined a method used to ensure neural networks adhere to the company’s values. The startup applied this method to train o1 and o3 to “think” about safety policies during responses.

According to the company, the approach improved o1’s overall alignment with the firm’s principles.

Compared with GPT-4o and other modern large language models, o1 pushes out the Pareto frontier, refusing harmful requests while rarely rejecting benign ones. Data: OpenAI.

To build a “chain of thoughts,” o1 and o3 give themselves anywhere from several seconds to several minutes after receiving a user request to break the problem down into components. For enhanced safety, OpenAI trained the models to check their own answers against the company’s safety policy.

GPT-5 from OpenAI Falls Short of Expectations

Meanwhile, The Wall Street Journal reported that the next flagship model, GPT-5, is behind schedule, and its performance growth does not justify the enormous costs.

The new neural network, codenamed Orion, is trained on a vast amount of data.

Reportedly, OpenAI did not rely solely on publicly available information and data obtained under licensing agreements during training; it also used synthetic data generated by o1.

Earlier in the 12-day event, OpenAI introduced the AI video generator Sora.
