{"id":19485,"date":"2024-12-12T17:33:27","date_gmt":"2024-12-12T15:33:27","guid":{"rendered":"https:\/\/forklog.com\/en\/google-unveils-gemini-2-0-with-advanced-ai-enhancements\/"},"modified":"2024-12-12T17:33:27","modified_gmt":"2024-12-12T15:33:27","slug":"google-unveils-gemini-2-0-with-advanced-ai-enhancements","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/google-unveils-gemini-2-0-with-advanced-ai-enhancements\/","title":{"rendered":"Google Unveils Gemini 2.0 with Advanced AI Enhancements"},"content":{"rendered":"<div class=\"wp-block-text-wrappers-keypoints article_keypoints\">\n<ul class=\"wp-block-list\">\n<li>Google has launched the new Gemini 2.0 model, which is more powerful than its predecessor and features multimodal capabilities.<\/li>\n<li>The updated version includes a Deep Research tool for reasoning.<\/li>\n<li>Additionally, the AI agent Project Mariner has been introduced, capable of performing tasks on a computer on behalf of the user.<\/li>\n<li>Google announced improvements to the AI Overviews search query summary system.<\/li>\n<\/ul>\n<\/div>\n<p>On December 11, Google unveiled its &#8220;most powerful AI model to date&#8221; \u2014 Gemini 2.0.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Welcome to the world, Gemini 2.0 \u2728 our most capable AI model yet.<\/p>\n<p>We&#8217;re first releasing an experimental version of 2.0 Flash \u26a1 It has better performance, new multimodal output, <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a> tool use \u2014 and paves the way for new agentic experiences. ? 
<a href=\"https:\/\/t.co\/ywY2oZv76p\">https:\/\/t.co\/ywY2oZv76p<\/a> <a href=\"https:\/\/t.co\/1Wgcr3m2Ip\">pic.twitter.com\/1Wgcr3m2Ip<\/a><\/p>\n<p>\u2014 Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1866869343570608557?ref_src=twsrc%5Etfw\">December 11, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The first release is an experimental version, 2.0 Flash, boasting high performance, multimodal features, and &#8220;paving the way for new agentic experiences.&#8221;<\/p>\n<p>Gemini 2.0 Flash outperforms 1.5 Pro on key benchmarks at twice the speed. It can generate images, text, and speech in multiple languages, and is reported to bring significant improvements in coding and image analysis.<\/p>\n<p>2.0 Flash replaces 1.5 Pro as the flagship model. Unlike the previous generation, it can use third-party applications and services such as Google Search and external <span data-descr=\"application programming interface\" class=\"old_tooltip\">API<\/span>s.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">As our workhorse model, Gemini 2.0 Flash outperforms 1.5 Pro on key benchmarks, at twice the speed.<\/p>\n<p>It can generate images mixed with text as well as customizable text-to-speech multilingual audio. 
2.0 Flash can also call tools like <a href=\"https:\/\/twitter.com\/Google?ref_src=twsrc%5Etfw\">@Google<\/a> Search, code execution and third-party\u2026 <a href=\"https:\/\/t.co\/OVicGFnJdP\">pic.twitter.com\/OVicGFnJdP<\/a><\/p>\n<p>\u2014 Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1866869346514899088?ref_src=twsrc%5Etfw\">December 11, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>A chat version of Gemini 2.0 Flash is available to all users, while the experimental multimodal model with text-to-speech and image-generation features is accessible to developers via the Gemini API in Google AI Studio and Vertex AI. In the coming months, the enhanced model will gradually roll out to products such as Android Studio, Chrome DevTools, Firebase, Gemini Code Assist, and others.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Google&#8217;s AI Agent<\/strong><\/h2>\n<p>Google&#8217;s AI division, DeepMind, has <a href=\"http:\/\/deepmind.google\/project-mariner\">introduced<\/a> its first AI agent capable of operating independently on the internet.<\/p>\n<p>Project Mariner, built on Gemini 2.0, is available to a limited group of testers. It controls the Chrome browser, moves the cursor on the screen, clicks buttons, fills out forms, and navigates websites much as a human would.<\/p>\n<p>Once the AI agent is set up, a chat window appears on the right side of the browser. 
Users can give instructions to the neural network, such as assembling a grocery-store shopping cart based on an attached list.<\/p>\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/lh7-qw.googleusercontent.com\/docsz\/AD_4nXfX8pEZCGxzOrWiVKewTCiXKlquvB2rIdNL11yi8yn3IfQ6CPDXJyOh0Aet4TaH4e9WHcm21skYiR284kuO7a7CK2XjTbKk1lL6CxhhFTQCmu8ZGOjDWrBalIuW7-3OX4qGUhGwwA?key=3hQr1QXky530HKHXL07lJmHQ\" alt=\"Gemini 2.0: Google released a series of AI updates\"\/><figcaption class=\"wp-element-caption\">Data: Google.<\/figcaption><\/figure>\n<p>After the user specifies all parameters, the agent goes to the supermarket website, finds the necessary items, and adds them to the virtual cart. It operates slowly, taking about five seconds to click a single button.<\/p>\n<p>Project Mariner cannot fill in credit card numbers or other payment information, accept cookies, or sign service agreements. These restrictions are intentional and give users more control.<\/p>\n<p>The agent can be used for searching for flights, hotels, and recipes, purchasing items, and other tasks. While the agent is executing a request, the computer cannot be used for anything else.<\/p>\n<p>Project Mariner is not yet available to the general public, and its release date is unknown.<\/p>\n<h2 class=\"wp-block-heading\"><strong>Other AI Agents<\/strong><\/h2>\n<p>In addition to Project Mariner, Google has introduced several other AI agents for more specialized tasks:<\/p>\n<ul class=\"wp-block-list\">\n<li>Deep Research can help explore complex topics by creating multi-step learning plans. 
It is not intended for solving mathematical or logical problems, writing code, or analyzing data;<\/li>\n<li>Jules can write code, integrates into GitHub workflows, and is slated to launch in 2025;<\/li>\n<li>a third agent focuses on assisting players in video games; its release date is unknown.<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\"><strong>Gemini Learns to Reason<\/strong><\/h2>\n<p>The updated version of Gemini has received the Deep Research feature, which uses &#8220;advanced reasoning&#8221; and &#8220;long-context capabilities&#8221; to create concise research summaries. Briefs can be exported to Google Docs for further editing.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">The new Deep Research feature from Google feels like one of the most appropriately &#8220;Google-y&#8221; uses of AI to date, and is quite impressive.<\/p>\n<p>I&#8217;ve had access for a bit and it does very good initial reports on almost any topic. The paywalls around academic sources puts some limits. <a href=\"https:\/\/t.co\/dwSqr6aKGZ\">pic.twitter.com\/dwSqr6aKGZ<\/a><\/p>\n<p>\u2014 Ethan Mollick (@emollick) <a href=\"https:\/\/twitter.com\/emollick\/status\/1866875581863739434?ref_src=twsrc%5Etfw\">December 11, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The service analyzes information related to a query across the internet, acting as a kind of research assistant. It presents the result as a brief summary with links to sources. 
The procedure is as follows:<\/p>\n<ol class=\"wp-block-list\">\n<li>The user writes a query.<\/li>\n<li>Deep Research creates a &#8220;multi-step research plan.&#8221;<\/li>\n<li>The user confirms the start of the analysis.<\/li>\n<li>Deep Research conducts the research over several minutes and generates a response.<\/li>\n<\/ol>\n<p>The service is available to subscribers of the paid Gemini Advanced tier.<\/p>\n<h2 class=\"wp-block-heading\"><strong>AI Overviews to Become Smarter and Multimodal<\/strong><\/h2>\n<p>Google <a href=\"https:\/\/blog.google\/technology\/google-deepmind\/google-gemini-ai-update-december-2024\/#gemini-2-0\">announced<\/a> improvements to AI Overviews, its search query summary system. The service is reported to soon handle &#8220;more complex topics&#8221; as well as &#8220;multimodal&#8221; and &#8220;multi-step&#8221; searches, including advanced mathematical queries and programming tasks.<\/p>\n<p>Testing of the functionality will begin this week, with a broader rollout at the start of next year.<\/p>\n<p>The enhancement of AI Overviews is driven by the launch of Gemini 2.0.<\/p>\n<p>In November, Google trained the Gemini chatbot to remember contextual information about users&#8217; lives, interests, and preferences.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google has launched the new Gemini 2.0 model, which is more powerful than its predecessor and features multimodal capabilities. The updated version includes a Deep Research tool for reasoning. Additionally, the AI agent Project Mariner has been introduced, capable of performing tasks on a computer on behalf of the user. 
Google announced improvements to the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":19484,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,719,738,1150],"class_list":["post-19485","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-gemini","tag-google","tag-news-plus"],"aioseo_notices":[],"amp_enabled":true,"views":"59","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19485","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=19485"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19485\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/19484"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=19485"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=19485"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=19485"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}