{"id":96471,"date":"2026-04-24T11:52:25","date_gmt":"2026-04-24T08:52:25","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=96471"},"modified":"2026-04-24T11:56:03","modified_gmt":"2026-04-24T08:56:03","slug":"openai-unveils-gpt-5-5-its-flagship-ai-model","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/openai-unveils-gpt-5-5-its-flagship-ai-model\/","title":{"rendered":"OpenAI unveils GPT-5.5, its flagship AI model"},"content":{"rendered":"<p>OpenAI has released GPT-5.5. The model is positioned as \u201ca new level of intelligence for real work and powering agents.\u201d<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Introducing GPT-5.5<\/p>\n<p>A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done.<\/p>\n<p>Now available in ChatGPT and Codex. <a href=\"https:\/\/t.co\/rPLTk99ZH5\">pic.twitter.com\/rPLTk99ZH5<\/a><\/p>\n<p>\u2014 OpenAI (@OpenAI) <a href=\"https:\/\/twitter.com\/OpenAI\/status\/2047376561205325845?ref_src=twsrc%5Etfw\">April 23, 2026<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The neural network is \u201cbuilt to understand complex tasks, use tools, check its work, and carry more tasks through to completion.\u201d<\/p>\n<p>The model can infer user intent, plan work autonomously and carry tasks through to completion. 
GPT-5.5 \u201cdoes an excellent job\u201d of writing and debugging code, searching the internet, analysing data, creating documents and spreadsheets, controlling software and switching between tools.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cInstead of carefully supervising every step, you can assign GPT\u20115.5 a complex multi\u2011stage task and trust it to plan, apply tools, check its work, overcome ambiguities and keep going,\u201d the announcement says.<\/p>\n<\/blockquote>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-ef2af28c169fd9f2-2621801763014178.webp\" alt=\"image\" class=\"wp-image-278928\"\/><figcaption class=\"wp-element-caption\">GPT\u20115.5 tests across benchmarks. Source: OpenAI.<\/figcaption><\/figure>\n<p>OpenAI noted that the new model is particularly effective in agentic programming, computer control, knowledge work and early-stage scientific research\u2014areas where long chains of reasoning and actions matter.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cGPT\u20115.5 delivers a jump in intelligence without sacrificing speed. Larger, more capable models often run more slowly, but GPT\u20115.5 matches GPT\u20115.4 in real\u2011world per\u2011token latency while exhibiting a much higher level of intelligence,\u201d the startup said.<\/p>\n<\/blockquote>\n<p>The model uses \u201csignificantly fewer\u201d tokens when operating in Codex.<\/p>\n<p>OpenAI said it applied the \u201cmost powerful\u201d set of safety measures ahead of release, working with internal and external experts.<\/p>\n<h2 class=\"wp-block-heading\">Availability<\/h2>\n<p>GPT-5.5 is available in ChatGPT and Codex for Plus, Pro, Business and Enterprise plans. 
A separate GPT\u20115.5 Pro is offered for Pro, Business and Enterprise.<\/p>\n<p>Both variants will soon be available via <span data-descr=\"application programming interface\" class=\"old_tooltip\">API<\/span> at $5 per 1M input tokens and $30 per 1M output tokens. The context window is 1M tokens.<\/p>\n<p>In Codex, GPT\u20115.5 is available for Plus, Pro, Business, Enterprise, Edu and Go plans with a 400,000\u2011token context window. GPT\u20115.5 is also offered in a Fast mode, which generates tokens 1.5 times faster at 2.5x the cost.<\/p>\n<p>GPT\u20115.5 carries a higher per\u2011token price than GPT\u20115.4, which OpenAI attributes to the model\u2019s greater token efficiency.<\/p>\n<h2 class=\"wp-block-heading\">Capabilities of GPT-5.5<\/h2>\n<p>The new model uses fewer tokens and relies less on retries when solving tasks. On Artificial Analysis\u2019s programming index it delivers a \u201cstate\u2011of\u2011the\u2011art level of intelligence\u201d at half the cost of competing models.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-33245636426af7b8-2621860695898658.webp\" alt=\"image\" class=\"wp-image-278929\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>GPT-5.5 is the company\u2019s most powerful system for agentic programming. 
On Terminal\u2011Bench 2.0, which evaluates complex command\u2011line scenarios, it reaches 82.7% accuracy.<\/p>\n<p>On SWE\u2011Bench Pro it scored 58.6%; on Expert\u2011SWE the model outperformed GPT\u20115.4.<\/p>\n<p>Across all three benchmarks the newcomer beat its predecessor while using fewer tokens.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-77a3bfef22fe70be-2621877003904332.webp\" alt=\"image\" class=\"wp-image-278930\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cThe model\u2019s strengths in programming are particularly apparent in Codex, where it can perform engineering tasks\u2014from implementation and refactoring to debugging, testing and validation,\u201d the company blog says.<\/p>\n<\/blockquote>\n<p>GPT-5.5 has a better grasp of system structure: why something fails, where fixes are needed and which parts of the code they affect.<\/p>\n<p>The model \u201csignificantly outperforms\u201d GPT-5.4 and Claude Opus 4.7 in reasoning and autonomy: it anticipates problems, and predicts testing and review needs without explicit prompts.<\/p>\n<p>On GDPval, which assesses agents\u2019 ability to complete well\u2011defined intellectual tasks across 44 professions, GPT\u20115.5 scores 84.9%. 
On OSWorld\u2011Verified it posts 78.7%, and on Tau2\u2011bench 98%.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-24ebf21d613fd067-2621901312424252.webp\" alt=\"image\" class=\"wp-image-278931\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>GPT\u20115.5 also performs strongly in other tests: 60% on FinanceAgent, 88.5% on internal investment\u2011banking modelling tasks and 54.1% on OfficeQA Pro.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-1dcaa1256da59cb7-2621911186315728.webp\" alt=\"image\" class=\"wp-image-278932\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<h2 class=\"wp-block-heading\">Information work<\/h2>\n<p>GPT-5.5 is \u201ca powerful tool for everyday computer work.\u201d The model better understands user intent and more confidently traverses the full information\u2011handling cycle: search, analysis, tool use, verification and transforming raw inputs into finished outputs.<\/p>\n<p>In Codex, GPT\u20115.5 outperforms GPT\u20115.4 at producing documents, spreadsheets and slide presentations.<\/p>\n<p>More than 85% of staff across OpenAI divisions use Codex weekly, including in software development as well as finance, communications, marketing, data analytics and product management.<\/p>\n<h2 class=\"wp-block-heading\">Scientific research<\/h2>\n<p>In scientific and technical workflows GPT-5.5 also does better. 
These are tasks that do not boil down to answering a specific question: the model can explore an idea step by step, gather evidence, test a hypothesis and interpret the resulting data.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-4bab3e9598de4c7c-2621944308386952.webp\" alt=\"image\" class=\"wp-image-278933\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>GPT\u20115.5 shows improvements over GPT\u20115.4 on GeneBench\u2014a platform for multi\u2011step analysis of scientific data in genetics and quantitative biology.<\/p>\n<p>On BixBench the new model also surpassed its predecessor.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-df898332aeadfe0f-2621957718904255.webp\" alt=\"image\" class=\"wp-image-278934\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>In April, OpenAI <a href=\"https:\/\/forklog.com\/en\/news\/openai-launches-cloud-based-ai-agents-for-team-collaboration\">introduced<\/a> \u201cagents for the workspace\u201d in ChatGPT. 
Teams can create shared assistants for complex tasks and long\u2011running processes.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has launched GPT-5.5, billed as \u201ca new class of intelligence for real work and powering agents.\u201d<\/p>\n","protected":false},"author":1,"featured_media":96472,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"OpenAI launches GPT-5.5, a \u201cnew class of intelligence\u201d for real work and agents.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1201,1190],"class_list":["post-96471","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chatbots","tag-openai"],"aioseo_notices":[],"amp_enabled":true,"views":"13","promo_type":"1","layout_type":"1","short_excerpt":"OpenAI launches GPT-5.5, a \u201cnew class of intelligence\u201d for real work and 
agents.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/96471","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=96471"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/96471\/revisions"}],"predecessor-version":[{"id":96473,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/96471\/revisions\/96473"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/96472"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=96471"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=96471"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=96471"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}