{"id":19963,"date":"2024-12-27T17:06:35","date_gmt":"2024-12-27T15:06:35","guid":{"rendered":"https:\/\/forklog.com\/en\/chinese-startup-deepseek-unveils-ai-model-surpassing-meta-and-openai\/"},"modified":"2024-12-27T17:06:35","modified_gmt":"2024-12-27T15:06:35","slug":"chinese-startup-deepseek-unveils-ai-model-surpassing-meta-and-openai","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/chinese-startup-deepseek-unveils-ai-model-surpassing-meta-and-openai\/","title":{"rendered":"Chinese Startup DeepSeek Unveils AI Model Surpassing Meta and OpenAI"},"content":{"rendered":"<p>Chinese AI startup DeepSeek has introduced its own large language model, which has outperformed competitors from Meta and OpenAI in tests.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">? Introducing DeepSeek-V3!<\/p>\n<p>Biggest leap forward yet:<br \/>\u26a1 60 tokens\/second (3x faster than V2!)<br \/>? Enhanced capabilities<br \/>? API compatibility intact<br \/>? Fully open-source models &#038; papers<\/p>\n<p>? 1\/n <a href=\"https:\/\/t.co\/p1dV9gJ2Sd\">pic.twitter.com\/p1dV9gJ2Sd<\/a><\/p>\n<p>\u2014 DeepSeek (@deepseek_ai) <a href=\"https:\/\/twitter.com\/deepseek_ai\/status\/1872242657348710721?ref_src=twsrc%5Etfw\">December 26, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>DeepSeek V3 boasts 671 billion parameters. In comparison, <a href=\"https:\/\/forklog.com\/en\/news\/meta-unveils-llama-3-1-ai-model-collection\">Llama 3.1 405B<\/a> has 405 billion. 
A larger parameter count generally gives a model more capacity to handle complex tasks and produce accurate responses.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-qw.googleusercontent.com\/docsz\/AD_4nXd_rtKV6EwBZb6bJ8jzvMQ7b9w86nePkE7-XCgC5aULKHvgDzkEIFyhMYiZns1OKHqYrOO32n3OYr9U8bOiskQ2fAAH9mpjP0Of07Oi2hzNLv-HtDzcC1zQL86C7-291S3pXOSgWg?key=lhKNHbg1WFRFN78zKyO3B6Y7\" alt=\"Chinese Startup DeepSeek Unveils AI Model Surpassing Meta and OpenAI\"\/><figcaption class=\"wp-element-caption\">Comparison of DeepSeek V3 with competitors. Data: DeepSeek.<\/figcaption><\/figure>\n<p>The Hangzhou-based company trained the model in two months for $5.58 million, using significantly fewer computational resources (2048 GPUs) than larger tech companies typically deploy. It claims to offer the best price-to-performance ratio on the market.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">API Pricing Update<\/p>\n<p>Until Feb 8: same as V2!<br \/>From Feb 8 onwards:<br \/>Input: $0.27\/million tokens ($0.07\/million tokens with cache hits)<br \/>Output: $1.10\/million tokens<\/p>\n<p>Still the best value in the market!<\/p>\n<p>
3\/n <a href=\"https:\/\/t.co\/OjZaB81Yrh\">pic.twitter.com\/OjZaB81Yrh<\/a><\/p>\n<p>\u2014 DeepSeek (@deepseek_ai) <a href=\"https:\/\/twitter.com\/deepseek_ai\/status\/1872242663489188088?ref_src=twsrc%5Etfw\">December 26, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Future plans include adding multimodality and &#8220;other advanced features.&#8221;<\/p>\n<p>Former OpenAI researcher Andrej Karpathy noted that DeepSeek has demonstrated impressive research and development with limited resources.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget (2048 GPUs for 2 months, $6M).<\/p>\n<p>For reference, this level of capability is supposed to require clusters of closer to 16K GPUs, the ones being\u2026 <a href=\"https:\/\/t.co\/EW7q2pQ94B\">https:\/\/t.co\/EW7q2pQ94B<\/a><\/p>\n<p>\u2014 Andrej Karpathy (@karpathy) <a href=\"https:\/\/twitter.com\/karpathy\/status\/1872362712958906460?ref_src=twsrc%5Etfw\">December 26, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\"><p>\n<cite>&#8220;Does this mean you don&#8217;t need large <span data-descr=\"graphics processing units\" class=\"old_tooltip\">GPU<\/span> clusters for frontier <span data-descr=\"large language models\" class=\"old_tooltip\">LLMs<\/span>? No, but you should ensure you&#8217;re not wasting what you have. 
This looks like a good demonstration that there&#8217;s still much to be done with both data and algorithms,&#8221; he added.<\/cite><\/p><\/blockquote>\n<p>Previously, DeepSeek introduced DeepSeek-R1-Lite-Preview, a reasoning model positioned as a &#8220;competitor to OpenAI&#8217;s o1.&#8221;<\/p>\n<p>Back in July, Chinese company Kuaishou <a href=\"https:\/\/forklog.com\/en\/news\/chinese-kling-emerges-as-a-rival-to-sora\">publicly unveiled the Kling AI model<\/a> for video generation.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Chinese AI startup DeepSeek has introduced its own large language model, which has outperformed competitors from Meta and OpenAI in tests. Introducing DeepSeek-V3! Biggest leap forward yet:\u26a1 60 tokens\/second (3x faster than V2!) Enhanced capabilities API compatibility intact Fully open-source models &#038; papers 1\/n pic.twitter.com\/p1dV9gJ2Sd \u2014 DeepSeek (@deepseek_ai) December 26, 2024 DeepSeek V3 [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,133],"class_list":["post-19963","post","type-post","status-publish","format-standard","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-china"],"aioseo_notices":[],"amp_enabled":true,"views":"54","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19963","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],
"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=19963"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19963\/revisions"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=19963"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=19963"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=19963"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}