{"id":17516,"date":"2024-10-04T11:42:28","date_gmt":"2024-10-04T08:42:28","guid":{"rendered":"https:\/\/forklog.com\/en\/google-lens-enhances-ar-capabilities-with-moving-object-queries\/"},"modified":"2024-10-04T11:42:28","modified_gmt":"2024-10-04T08:42:28","slug":"google-lens-enhances-ar-capabilities-with-moving-object-queries","status":"publish","type":"post","link":"https:\/\/forklog.com\/en\/google-lens-enhances-ar-capabilities-with-moving-object-queries\/","title":{"rendered":"Google Lens Enhances AR Capabilities with Moving Object Queries"},"content":{"rendered":"<p>Google has updated its augmented reality application, Lens, so that it can answer questions about moving objects captured on video.<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">Over the years, we\u2019ve expanded Search so you can ask a question in many ways \u2014 from typing a query to humming a tune. <\/p>\n<p>Today, we&#8217;re introducing even more new AI-powered ways to search what you see and hear. Learn more \u2193 <a href=\"https:\/\/t.co\/MHZCqo1ujX\">https:\/\/t.co\/MHZCqo1ujX<\/a><\/p>\n<p>\u2014 Google (@Google) <a href=\"https:\/\/twitter.com\/Google\/status\/1841904286826938444?ref_src=twsrc%5Etfw\">October 3, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Users on Android and iOS can record a video and ask questions about objects of interest in it. Lens relies on the Gemini AI model to understand the context of the scene.\u00a0<\/p>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cSuppose you want to learn more about an interesting fish. 
Lens will prepare an overview explaining why they swim in circles and provide additional resources and useful information,\u201d noted Lou Wang, Product Director at the company.<\/p>\n<\/blockquote>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"680\" height=\"680\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/NEW-Lens-video-understanding.webp\" alt=\"NEW-Lens-video-understanding\" class=\"wp-image-242495\"\/><figcaption class=\"wp-element-caption\">After voicing a question during video recording, Lens will provide an answer. Data: <a href=\"https:\/\/blog.google\/products\/search\/google-search-lens-october-2024-updates\/?utm_source=tw&#038;utm_medium=social&#038;utm_campaign=og&#038;utm_content=&#038;utm_term=\">Google<\/a>.<br \/><\/figcaption><\/figure>\n<p>The experimental feature is available in English only. To access it, users must sign up for the Google Search Labs program and enable \u201cAI Overviews and more.\u201d\u00a0<\/p>\n<p>AI determines which objects in the frame are the most interesting, significant, and relevant to the question asked.\u00a0<\/p>\n<p>Lens can also search using still images: users take a photo and ask their question aloud.\u00a0<\/p>\n<p>Lens has also been enhanced for e-commerce. 
The application can recognize a product in a photo and display information about it: price, brand, reviews, and stock availability.<\/p>\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"680\" height=\"680\" src=\"https:\/\/forklog.com\/wp-content\/uploads\/NEW-Lens-shopping-results.webp\" alt=\"NEW-Lens-shopping-results\" class=\"wp-image-242496\"\/><figcaption class=\"wp-element-caption\">Searching for a backpack using Lens. Data: Google.<\/figcaption><\/figure>\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p>\u201cSuppose you see a backpack you like. You can use Lens to identify this product and instantly see details that interest you,\u201d noted Wang.\u00a0<\/p>\n<\/blockquote>\n<p>Meanwhile, the lead of the <a href=\"https:\/\/forklog.com\/en\/news\/openai-unveils-sora-enthusiasm-and-challenges\">Sora<\/a> video generation project has left OpenAI to join DeepMind, Google&#8217;s AI division.\u00a0<\/p>\n<blockquote class=\"twitter-tweet\">\n<p lang=\"en\" dir=\"ltr\">I will be joining <a href=\"https:\/\/twitter.com\/GoogleDeepMind?ref_src=twsrc%5Etfw\">@GoogleDeepMind<\/a> to work on video generation and world simulators! Can&#8217;t wait to collaborate with such a talented team.<\/p>\n<p>I had an amazing two years at OpenAI making Sora. Thank you to all the passionate and kind people I worked with. 
Excited for the next chapter!<\/p>\n<p>\u2014 Tim Brooks (@_tim_brooks) <a href=\"https:\/\/twitter.com\/_tim_brooks\/status\/1841982327431561528?ref_src=twsrc%5Etfw\">October 3, 2024<\/a><\/p><\/blockquote>\n<p> <script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>In recent months, several key employees have departed from the developer of ChatGPT. This comes amid reports of Sam Altman&#8217;s desire to change the company&#8217;s corporate structure.<\/p>\n<p>Reports have emerged about Google&#8217;s work on a \u201cthinking\u201d artificial intelligence.<\/p>\n<p>In mid-September, AI startup OpenAI unveiled a new large language model, o1, trained using reinforcement learning for complex reasoning tasks.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Google has updated its augmented reality application, Lens, to include responses to queries about moving environments. Over the years, we\u2019ve expanded Search so you can ask a question in many ways \u2014 from typing a query to humming a tune. Today, we&#8217;re introducing even more new AI-powered ways to search what you see and hear. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":17515,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"select":"","news_style_id":"","cryptorium_level":"","_short_excerpt_text":"","creation_source":"","_metatest_mainpost_news_update":false,"footnotes":""},"categories":[3],"tags":[438,738,1392],"class_list":["post-17516","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-news-and-analysis","tag-artificial-intelligence","tag-google","tag-video"],"aioseo_notices":[],"amp_enabled":true,"views":"62","promo_type":"","layout_type":"","short_excerpt":"","is_update":"","_links":{"self":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/17516","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/comments?post=17516"}],"version-history":[{"count":0,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/posts\/17516\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media\/17515"}],"wp:attachment":[{"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/media?parent=17516"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/categories?post=17516"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/forklog.com\/en\/wp-json\/wp\/v2\/tags?post=17516"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}