{"id":19630,"date":"2024-12-17T17:36:58","date_gmt":"2024-12-17T15:36:58","guid":{"rendered":"https:\/\/forklog.com\/en\/google-deepmind-unveils-advanced-ai-video-and-image-generators\/"},"modified":"2024-12-17T17:36:58","modified_gmt":"2024-12-17T15:36:58","slug":"google-deepmind-unveils-advanced-ai-video-and-image-generators","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/google-deepmind-unveils-advanced-ai-video-and-image-generators\/","title":{"rendered":"Google DeepMind Unveils Advanced AI Video and Image Generators"},"content":{"rendered":"<p>DeepMind, Google&#8217;s AI-focused division, has announced a new model for video generation, Veo 2.\u00a0<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Today, we\u2019re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. ?<\/p>\n<p>We\u2019re also releasing an improved version of our text-to-image model, Imagen 3 \u2014 available to use in ImageFX through\u2026 <a href=\"https:\/\/t.co\/h6ejHaMUM4\">pic.twitter.com\/h6ejHaMUM4<\/a><\/p>\n<p>\u2014 Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1868703624714395907?ref_src=twsrc%5Etfw\">December 16, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The neural network is capable of:<\/p>\n<ul class=\"wp-block-list\">\n<li>creating videos with resolutions up to 4K;<\/li>\n<li>understanding camera control requirements in prompts;<\/li>\n<li>accurately replicating the laws of physics and realistic human facial expressions.<\/li>\n<\/ul>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cIn direct comparisons of results conducted by experts, Veo 2 was preferred over other advanced video generation models,\u201d noted DeepMind.<\/p>\n<\/blockquote>\n<p>In the experimental Google 
VideoFX tool for video creation, clips are currently available with resolutions up to 720p and durations up to eight seconds. Full access to Veo 2 is currently limited to a select group of users, with a broader launch planned for next year.<\/p>\n<p>Google has also enhanced the Imagen 3 image generator. The model can now render a range of art styles, including realism and fantasy.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">We\u2019ve also enhanced Imagen 3\u2019s ability to:<br \/>\u25aa\ufe0f Produce diverse art styles: realism, fantasy, portraiture and more<br \/>\u25aa\ufe0f More faithfully turn prompts into accurate images<br \/>\u25aa\ufe0f Generate brighter, more compositionally balanced visuals<\/p>\n<p>\u2192 <a href=\"https:\/\/t.co\/MmdiHyrugR\">https:\/\/t.co\/MmdiHyrugR<\/a> <a href=\"https:\/\/t.co\/L1l2Qzs5up\">pic.twitter.com\/L1l2Qzs5up<\/a><\/p>\n<p>\u2014 Google DeepMind (@GoogleDeepMind) <a href=\"https:\/\/twitter.com\/GoogleDeepMind\/status\/1868703631056552337?ref_src=twsrc%5Etfw\">December 16, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>The model also renders prompts more faithfully and generates brighter, more compositionally balanced visuals.<\/p>\n<p>Earlier in December, Google released the Gemini 2.0 model, the Deep Research reasoning tool, and the AI agent Project Mariner, which can perform actions on a computer on behalf of the user.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>DeepMind, Google&#8217;s AI-focused division, has announced a new model for video generation, Veo 2.\u00a0 Today, we\u2019re announcing Veo 2: our state-of-the-art video generation model which produces realistic, high-quality clips from text or image prompts. ? 
We\u2019re also releasing an improved version of our text-to-image model, Imagen 3 \u2014 available to use in ImageFX through\u2026 pic.twitter.com\/h6ejHaMUM4 [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":19629,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,1474,738,1392],"class_list":["post-19630","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-deepmind","tag-google","tag-video"],"aioseo_notices":[],"amp_enabled":true,"views":"90","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19630","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=19630"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/19630\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/19629"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=19630"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=19630"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=19630"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}