{"id":15529,"date":"2024-07-25T11:12:11","date_gmt":"2024-07-25T08:12:11","guid":{"rendered":"https:\/\/forklog.com\/en\/mistral-ai-unveils-flagship-ai-model-large-2\/"},"modified":"2024-07-25T11:12:11","modified_gmt":"2024-07-25T08:12:11","slug":"mistral-ai-unveils-flagship-ai-model-large-2","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/mistral-ai-unveils-flagship-ai-model-large-2\/","title":{"rendered":"Mistral AI Unveils Flagship AI Model Large 2"},"content":{"rendered":"<p>Paris-based AI startup Mistral AI has <a href=\"https:\/\/mistral.ai\/news\/mistral-large-2407\/\">released<\/a> its flagship model, Large 2, which &#8220;significantly surpasses the previous version Large and operates on par with <a href=\"https:\/\/forklog.com\/en\/news\/openai-unveils-a-more-human-like-version-of-chatgpt\">GPT-4o<\/a>, <a href=\"https:\/\/forklog.com\/en\/news\/claude-3-opus-surpasses-gpt-4-in-user-ratings\">Claude 3 Opus<\/a>, and Llama 3 405B.&#8221;\u00a0<\/p>\n<p>Large 2 emerged a day after Meta launched a new collection of AI models, including the open-source Llama 3.1 405B.\u00a0<\/p>\n<p>The new neural network from Mistral AI features a context window of 128,000 tokens and 123 billion parameters, figures that indicate how much text the model can process at once and how large its learned capacity is. Llama 3.1 405B, by comparison, has the same 128,000-token context window and 405 billion parameters.\u00a0<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXe9_brrjoSOxEl3g6qIwheEyyAPKR33hpIgSBvaR_NNn59fZwHQc8ls3-L03tmmJ_h74yNHcy7qCPHSoNQ16vBTIOYxhIs-MBiFk6qQdFZpXMa7DIVoS2gUMEDZ7aPReJyQrUTb0o-axB9f17Xbwl8bR1Qp?key=VljzumEjkzeHEOL6B8etXQ\" alt=\"Mistral AI Unveils Flagship AI Model Large 2\"\/><figcaption class=\"wp-element-caption\">Comparison of Large 2 with Llama 3.1 405B and Llama 3.1 70B in code generation and mathematics performance. 
Data: Mistral AI.<\/figcaption><\/figure>\n<p>The press release highlights Mistral AI&#8217;s efforts to minimize &#8220;hallucination&#8221; issues. Large 2 has been trained to be more discerning in its responses and to avoid fabricating plausible information.<\/p>\n<p>The new AI system from Mistral understands English, French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, as well as over 80 programming languages, including Python, Java, C, C++, JavaScript, and Bash. It provides more concise answers compared to previous models.\u00a0<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-rt.googleusercontent.com\/docsz\/AD_4nXdVauVsBRbfI9isZocqISeHwRcKjz-39KaYVI3TJcMNHtirVCOwsyNx8NQUku0S0GIeBrZpaMQ4R0Q0XguR223oGvJVMZrgEJ94cVvOxVW9O4S9VdcOeSv6hrjEVWOkqvZnB_oTVKpNACfjTp6sLbDytvza?key=VljzumEjkzeHEOL6B8etXQ\" alt=\"Mistral AI Unveils Flagship AI Model Large 2\"\/><figcaption class=\"wp-element-caption\">Comparison of Large 2 with Llama 3.1 405B and Llama 3.1 70B in multilingual capabilities. Data: Mistral.<\/figcaption><\/figure>\n<p>In June, Mistral AI raised $640 million at a valuation of $6 billion. 
Founded by former employees of Google DeepMind and Meta, the startup is seen as a European competitor to OpenAI.\u00a0<\/p>\n<p>Back in late May, the French AI company <a href=\"https:\/\/forklog.com\/en\/news\/mistral-ai-unveils-code-generating-ai-model\">introduced<\/a> Codestral, an artificial intelligence model for code generation.<\/p>\n<p>In February, Microsoft <a href=\"https:\/\/forklog.com\/en\/news\/microsoft-commits-2-1-billion-to-mistral-ai-partnership\">announced<\/a> a $2.1 billion investment in Mistral AI.\u00a0<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Paris-based AI startup Mistral AI has released its flagship model, Large 2, which &#8220;significantly surpasses the previous version Large and operates on par with GPT-4o, Claude 3 Opus, and Llama 3 405B.&#8221;\u00a0 Large 2 emerged a day after Meta Corporation launched a new collection of AI models, including the open-source Llama 3.1 405B.\u00a0 The enhanced [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":15528,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1477],"class_list":["post-15529","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-mistral"],"aioseo_notices":[],"amp_enabled":true,"views":"92","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/15529","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embe
ddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=15529"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/15529\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/15528"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=15529"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=15529"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=15529"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}