{"id":94264,"date":"2026-02-13T17:34:00","date_gmt":"2026-02-13T14:34:00","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=94264"},"modified":"2026-02-13T17:35:16","modified_gmt":"2026-02-13T14:35:16","slug":"openai-introduces-a-streamlined-version-of-codex-ai-programmer","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/openai-introduces-a-streamlined-version-of-codex-ai-programmer\/","title":{"rendered":"OpenAI Introduces a Streamlined Version of Codex AI Programmer"},"content":{"rendered":"<p>OpenAI has released a streamlined version of its programming tool, Codex.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">GPT-5.3-Codex-Spark is now in research preview.<\/p>\n<p>You can just build things\u2014faster. <a href=\"https:\/\/t.co\/85LzDOgcQj\">pic.twitter.com\/85LzDOgcQj<\/a><\/p>\n<p>\u2014 OpenAI (@OpenAI) <a href=\"https:\/\/twitter.com\/OpenAI\/status\/2022009582210715925?ref_src=twsrc%5Etfw\">February 12, 2026<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>GPT-5.3-Codex-Spark is positioned as a scaled-down version of GPT-5.3 Codex, the <a href=\"https:\/\/forklog.com\/en\/news\/openai-counters-anthropic-with-the-release-of-gpt-5-3-codex\">model introduced in February<\/a>, built for faster <span data-descr=\"process of generating a response\" class=\"old_tooltip\">inference<\/span>. 
To achieve this, the startup runs the model on a specialized chip from its hardware partner, Cerebras.<\/p>\n<p>In January, the companies <a href=\"https:\/\/openai.com\/ru-RU\/index\/cerebras-partnership\/\">announced<\/a> their collaboration.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cThe integration of Cerebras into our computing solutions aims to significantly speed up AI response times,\u201d OpenAI stated at the time.<\/p>\n<\/blockquote>\n<p>The firm describes Spark as the \u201cfirst milestone\u201d of the partnership.<\/p>\n<p>The underlying hardware is built for rapid real-time operation: the model runs on the Wafer Scale Engine 3, the third generation of Cerebras&#8217; wafer-scale megachips, which packs four trillion transistors.<\/p>\n<p>OpenAI describes the new tool as a \u201cdaily productivity driver\u201d that aids rapid prototyping, while the original GPT-5.3 Codex is intended for longer, more labor-intensive tasks.<\/p>\n<p>In an official statement, OpenAI emphasized that Spark is engineered for minimal latency in Codex.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cCodex-Spark is the first step towards Codex operating in two complementary modes: real-time collaboration for quick iterations and extended tasks requiring deep reasoning,\u201d the company noted.<\/p>\n<\/blockquote>\n<p>Earlier in February 2026, OpenAI <a href=\"https:\/\/forklog.com\/en\/news\/openai-launches-codex-app-for-ai-programming-assistance\">released<\/a> a standalone application for the Codex programming assistant.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has released a streamlined version of its programming tool, 
Codex.<\/p>\n","protected":false},"author":1,"featured_media":94265,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"OpenAI releases a streamlined version of Codex programming tool.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1201,1190],"class_list":["post-94264","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chatbots","tag-openai"],"aioseo_notices":[],"amp_enabled":true,"views":"160","promo_type":"1","layout_type":"1","short_excerpt":"OpenAI releases a streamlined version of Codex programming tool.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94264","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=94264"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94264\/revisions"}],"predecessor-version":[{"id":94266,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/94264\/revisions\/94266"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/94265"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=94264"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=94264"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=94264"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}