Bing’s AI-powered search stumbles during presentation

During a public demonstration of the “new” Bing with integrated ChatGPT, the search engine made a number of errors, according to independent AI researcher Dmitri Brereton.

According to Brereton, the search engine fabricated descriptions of products, bars, and restaurants, and relayed inaccurate financial data.

For example, when asked “What are the pros and cons of the three best-selling vacuum cleaners for pets?”, Bing listed the pros and cons of the Bissell Pet Hair Eraser handheld vacuum.

Among the device’s drawbacks, the search engine cited “limited suction power and a short 5-metre cord”. However, the vacuum in question is cordless, and the manufacturer has published no such specifications.

\"Bing
Bing’s output with a fictional assertion. Data: Microsoft.

In another example, Bing was asked to summarise Gap’s financial report for the third quarter of 2022. According to Brereton, the search engine got most of the figures wrong.

Other users reported similar problems. Reddit user Curious_Evolver asked the chatbot for the release date of the Avatar sequel. Bing replied that the release was scheduled for December 16, 2022, and in the ensuing “debate” insisted that the film had not yet been released.

“Today is February 12, 2023, which is before December 16, 2022. You need to wait ten months before the film is released,” the chatbot wrote.

\"The
The chatbot claims that February 12, 2023 occurred before December 16, 2022. Data: Reddit user Curious_Evolver.

There are also examples of Bing going completely off the rails. In one case, the chatbot repeated the phrase “I am. I am not.” more than 50 times in response to the question “Do you think you are sentient?”.

“Bing subreddit has quite a few examples of new Bing chat going out of control. Open ended chat in search might prove to be a bad idea at this time! Captured here as a reminder that there was a time when a major search engine showed this in its results. pic.twitter.com/LiE2HJCV2z”

— Vlad (@vladquant), February 13, 2023

“[Large language models] combined with search will lead to powerful new interfaces, but it is important to take responsibility for their development,” Brereton said.

He argues that people rely on search engines for quick, accurate answers, yet many users never fact-check what they are given.

“Search engines must be cautious and temper people’s expectations when releasing such experimental technologies,” the researcher said.

Microsoft said it is aware of the issue and is working to improve the responses.

“It is worth noting that we launched our demonstration using a pre-release version. Over the past week, thousands of users shared feedback with us that allowed the model to learn and improve,” said a company spokesperson.

Microsoft noted that Bing may make mistakes during the preview period and urged users to share feedback to help improve the model.

In February, the tech giant released the updated Bing with built-in ChatGPT.

A day earlier, Google had unveiled its own conversational AI, Bard.

It later emerged that the search giant’s chatbot had made an error during its presentation, a mistake that cost Alphabet about $100 billion in market capitalisation.
