{"id":92116,"date":"2025-12-12T11:50:54","date_gmt":"2025-12-12T08:50:54","guid":{"rendered":"https:\/\/forklog.com\/en\/?p=92116"},"modified":"2025-12-12T11:55:24","modified_gmt":"2025-12-12T08:55:24","slug":"openai-unveils-gpt-5-2-ai-model-series","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/openai-unveils-gpt-5-2-ai-model-series\/","title":{"rendered":"OpenAI Unveils GPT-5.2 AI Model Series"},"content":{"rendered":"<p>OpenAI has introduced its GPT-5.2 model lineup. As of December 12, the Instant, Thinking, and Pro versions are available to all users.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">GPT-5.2 is now rolling out to everyone.<a href=\"https:\/\/t.co\/nfubPwnIIw\">https:\/\/t.co\/nfubPwnIIw<\/a><\/p>\n<p>\u2014 OpenAI (@OpenAI) <a href=\"https:\/\/twitter.com\/OpenAI\/status\/1999182098859700363?ref_src=twsrc%5Etfw\">December 11, 2025<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;Currently, ChatGPT Enterprise users save an average of 40-60 minutes daily thanks to AI, with active users saving more than 10 hours weekly. 
We developed GPT-5.2 to unlock even greater economic potential for people,&#8221; the startup&#8217;s blog states.<\/p>\n<\/blockquote>\n<p>The Thinking version has posted strong results across several benchmarks.<\/p>\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-8b53ba96f04b961e-745476768478562-1024x576.png\" alt=\"image\" class=\"wp-image-271623\" srcset=\"https:\/\/forklog.com\/wp-content\/uploads\/img-8b53ba96f04b961e-745476768478562-1024x576.png 1024w, https:\/\/forklog.com\/wp-content\/uploads\/img-8b53ba96f04b961e-745476768478562-300x169.png 300w, https:\/\/forklog.com\/wp-content\/uploads\/img-8b53ba96f04b961e-745476768478562-768x432.png 768w, https:\/\/forklog.com\/wp-content\/uploads\/img-8b53ba96f04b961e-745476768478562.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>The model has reached expert level on the GDPval benchmark, which evaluates performance on knowledge-work tasks across 44 professions. 
Test tasks include creating presentations and working with spreadsheets.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-6472b6552939ea14-745492570888480.webp\" alt=\"image\" class=\"wp-image-271624\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>The &#8220;Thinking&#8221; version of the LLM completes GDPval benchmark tasks 11 times faster than human experts, at less than 1% of the cost of a human specialist.<\/p>\n<p>The company emphasized that GPT-5.2 Thinking &#8220;raises the bar for professional work.&#8221; It demonstrates:<\/p>\n<ul class=\"wp-block-list\">\n<li>advanced reasoning over long context;<\/li>\n<li>significant improvements in spreadsheet generation and analysis;<\/li>\n<li>stronger slide-deck creation.<\/li>\n<\/ul>\n<p>The GPT-5.2 Instant version is designed for everyday work and learning. It features a warm, conversational style, clear explanations that highlight key information, improved step-by-step guides, and high-quality translation of technical material.<\/p>\n<p>GPT-5.2 Pro is positioned as the most powerful option for complex queries. The neural network performs strongly in specialized fields, including programming and scientific research.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;GPT-5.2 is part of the ongoing process of model improvement. We continue to work on known issues such as unwarranted refusals and delays to make the product more useful,&#8221; OpenAI emphasized.<\/p>\n<\/blockquote>\n<p>GPT-5.1 will remain available to paying users for three months.<\/p>\n<h2 class=\"wp-block-heading\">Programming<\/h2>\n<p>GPT\u20115.2 Thinking set a new record in the SWE-Bench Pro test \u2014 55.6%. 
The benchmark assesses the model&#8217;s ability to work with four programming languages.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-b8cc38d60155f2ea-745547481182129.webp\" alt=\"image\" class=\"wp-image-271625\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>The model also scored highly on the SWE-bench Verified test \u2014 80%.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;For everyday professional use, this means the model more reliably debugs code, implements requests for new features, refactors large codebases, and carries out fixes from start to finish with less manual intervention,&#8221; states the OpenAI blog.<\/p>\n<\/blockquote>\n<p>GPT-5.2 Thinking outperforms GPT-5.1 Thinking in frontend development and in building complex, unconventional interfaces.<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>&#8220;GPT-5.2 is the biggest leap for GPT models in agentic programming since GPT-5 and is the best solution in its price range,&#8221; noted OpenAI.<\/p>\n<\/blockquote>\n<h2 class=\"wp-block-heading\">Hallucinations and Context<\/h2>\n<p>GPT\u20115.2 Thinking hallucinates less than GPT\u20115.1 Thinking. The model is more reliable in everyday information work: conducting research, writing, analysis, and decision support.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-ac5be44bafda690b-745585360166014.webp\" alt=\"image\" class=\"wp-image-271626\"\/><figcaption class=\"wp-element-caption\">Source: OpenAI.<\/figcaption><\/figure>\n<p>GPT\u20115.2 Thinking &#8220;sets a new standard&#8221; in long-context reasoning. 
It achieved leading scores on OpenAI MRCRv2 \u2014 a test that checks the model&#8217;s ability to integrate information distributed across long documents.<\/p>\n<p>In real-world deep-analysis tasks that require relating information across hundreds of thousands of tokens, GPT\u20115.2 Thinking is &#8220;significantly more accurate&#8221; than GPT\u20115.1 Thinking.<\/p>\n<h2 class=\"wp-block-heading\">Vision<\/h2>\n<p>GPT\u20115.2 Thinking is OpenAI&#8217;s most powerful visual perception model. It roughly halves errors in diagram analysis and in understanding software interfaces.<\/p>\n<p>The neural network can more accurately interpret dashboards, screenshots, technical graphs, and reports.<\/p>\n<figure class=\"wp-block-image size-full\"><img decoding=\"async\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/img-a3174add7fe749d2-745617731705546.webp\" alt=\"image\" class=\"wp-image-271627\"\/><figcaption class=\"wp-element-caption\">Comparison of low-quality image analysis. Source: OpenAI.<\/figcaption><\/figure>\n<p>The example shows that GPT-5.2 successfully identifies key areas and delineates object boundaries. In contrast, GPT-5.1 highlights only individual fragments, showing a weak grasp of spatial structure.<\/p>\n<p>Although both models make errors, version 5.2 handles image analysis noticeably better.<\/p>\n<p>In December, OpenAI had <a href=\"https:\/\/forklog.com\/en\/news\/chatgpts-growth-slows-amid-googles-advances\">planned<\/a> the release of GPT-5.2 as a response to the growing popularity of Google&#8217;s Gemini.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI has introduced its GPT-5.2 model lineup. 
As of December 12, the Instant, Thinking, and Pro versions are available to all users.<\/p>\n","protected":false},"author":1,"featured_media":92117,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"1","news_style_id":"1","cryptorium_level":"","_short_excerpt_text":"OpenAI unveils GPT-5.2 models, available from December 12.","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1201,1190],"class_list":["post-92116","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-chatbots","tag-openai"],"aioseo_notices":[],"amp_enabled":true,"views":"194","promo_type":"1","layout_type":"1","short_excerpt":"OpenAI unveils GPT-5.2 models, available from December 12.","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/92116","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=92116"}],"version-history":[{"count":1,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/92116\/revisions"}],"predecessor-version":[{"id":92118,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/92116\/revisions\/92118"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/92117"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=92116"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=92116"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v
2\/tags?post=92116"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}