
OpenAI Unveils Fast AI Models GPT-5.4 Mini and GPT-5.4 Nano
OpenAI introduces GPT-5.4 mini and nano, optimized for minimal latency tasks.
OpenAI has launched two new AI models: GPT-5.4 mini and GPT-5.4 nano. These are specifically optimized for tasks requiring minimal latency in response generation.
We’re introducing GPT-5.4 mini and nano, our most capable small models yet.
GPT-5.4 mini is more than 2x faster than GPT-5 mini. Optimized for coding, computer use, multimodal understanding, and subagents.
For lighter-weight tasks, GPT-5.4 nano is our smallest and cheapest… pic.twitter.com/cdp5HWtM2M
— OpenAI Developers (@OpenAIDevs) March 17, 2026
The company said GPT‑5.4 mini significantly outperforms GPT‑5 mini in coding, logical reasoning, multimodal analysis, and working with third-party tools, while processing data more than twice as fast.
On several benchmarks, including SWE-Bench Pro and OSWorld-Verified, the model approached the performance of the full GPT‑5.4.

“GPT‑5.4 mini is designed for tasks where response speed directly determines product quality,” the announcement states.
OpenAI outlined scenarios where the new solutions are most effective:
- AI assistants with instant response;
- subagents that instantly perform auxiliary functions;
- interface management systems that create and interpret screenshots;
- multimodal applications for real-time image processing.
“In such scenarios, the best model is not necessarily the most powerful. It provides instant answers, works reliably with tools, and successfully handles complex professional tasks,” the company added.
GPT‑5.4 nano is the most compact and fastest version of GPT‑5.4. The company recommends using it for classification and data extraction, ranking, and subagents performing simple auxiliary tasks.
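As a sketch of the classification use case described above, here is how a request to the nano model might be assembled. The model identifier "gpt-5.4-nano", the prompt, and the helper function are assumptions based on this announcement, not confirmed API details:

```python
def build_classification_request(text: str, labels: list[str]) -> dict:
    """Assemble a chat-style request payload for a lightweight classifier.

    Hypothetical helper; the actual API parameters may differ.
    """
    return {
        "model": "gpt-5.4-nano",  # assumed identifier; check the API docs
        "messages": [
            {
                "role": "system",
                "content": (
                    "Classify the user text into exactly one of: "
                    + ", ".join(labels)
                    + ". Reply with the label only."
                ),
            },
            {"role": "user", "content": text},
        ],
        "max_tokens": 5,   # a single label needs very few output tokens
        "temperature": 0,  # deterministic output suits classification
    }

req = build_classification_request(
    "Refund my last order, please.",
    ["billing", "tech_support", "other"],
)
print(req["model"])
```

Keeping the output to a single short label is what makes such calls cheap and fast, which is exactly the niche the company assigns to nano.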
Programming
OpenAI emphasized that GPT‑5.4 mini is particularly effective in development workflows where iteration speed is crucial. With minimal latency, the model handles targeted edits, codebase navigation, frontend generation, and debugging cycles.
“This makes it an excellent choice for tasks requiring high speed and low costs,” the announcement states.
In benchmarks, GPT‑5.4 mini outperforms GPT‑5 mini while responding significantly faster.

Computer Use
GPT‑5.4 mini delivers “excellent” results on multimodal tasks related to interface control: it can rapidly interpret screenshots of cluttered control panels and translate them into computer actions.

Access and Pricing
GPT‑5.4 mini is available via the API, Codex, and ChatGPT. It costs $0.75 per million input tokens and $4.50 per million output tokens.
GPT‑5.4 nano is offered exclusively through the API, at lower rates: $0.20 per million input tokens and $1.25 per million output tokens.
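A quick sketch of what these rates mean per request. The prices are taken from the announcement; the model keys and the helper function itself are illustrative:

```python
# Per-million-token rates in USD, as stated in the announcement.
RATES = {
    "gpt-5.4-mini": {"input": 0.75, "output": 4.50},
    "gpt-5.4-nano": {"input": 0.20, "output": 1.25},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the published rates."""
    r = RATES[model]
    return (input_tokens * r["input"] + output_tokens * r["output"]) / 1_000_000

# One million tokens in and one million out on each model:
print(estimate_cost("gpt-5.4-mini", 1_000_000, 1_000_000))  # 5.25
print(estimate_cost("gpt-5.4-nano", 1_000_000, 1_000_000))  # 1.45
```

At these rates, nano works out to roughly a quarter of mini's price, which matches its positioning for high-volume auxiliary tasks like classification and ranking.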
In March, OpenAI integrated the music recognition service Shazam into ChatGPT. Users can identify tracks without leaving the application.