Calls to ban Clearview AI, a robot that 'thinks aloud', and other AI news

We aim to inform readers not only about events in the Bitcoin industry, but also about what is happening in adjacent technology spheres — cybersecurity, and now the world of artificial intelligence (AI).

ForkLog has compiled the most important AI news from the past week.

  • A driverless Tesla crashed, killing two people. Elon Musk said the car was not on Autopilot, and U.S. senators called for a federal investigation.
  • The UK intervened in the Nvidia-Arm deal on national-security grounds. Analysts say the chances of closing the deal are now slim.
  • The European Commission unveiled new rules for AI regulation.
  • The U.S. Federal Trade Commission urged companies to scrutinize the use of racist and sexist AI.
  • Activists in the UK filed a lawsuit against TikTok over collecting facial data of children.
  • A supercomputer maker unveiled an AI chip with 850,000 cores.
  • Researchers developed an algorithm to detect conspiracy theories online.
  • The U.S. military integrated a conversational AI into a combat robot to recognise vocal commands.
  • Italian researchers taught the Pepper robot to ‘think aloud’ to understand how it makes decisions.

Tesla on Autopilot in crash with two fatalities

In a Houston suburb, a Tesla Model S, presumably in Autopilot mode, crashed into a tree at high speed and caught fire. Two men, aged 59 and 69, were killed.

Two men dead after fiery crash in Tesla Model S.

“[Investigators] are 100-percent certain that no one was in the driver seat driving that vehicle at the time of impact,” Harris County Precinct 4 Constable Mark Herman said. “They are positive.” #KHOU11 https://t.co/q57qfIXT4f pic.twitter.com/eQMwpSMLt2

— Matt Dougherty (@MattKHOU) April 18, 2021

Police said that at the time of the crash, no one was operating the electric car. One passenger was in the front passenger seat, the other in the back. 

Firefighters needed 113,600 litres of water and four hours to extinguish the car because its battery kept reigniting.

Later, Elon Musk said the crashed EV was not in Autopilot mode.

Your research as a private individual is better than professionals @WSJ!

Data logs recovered so far show Autopilot was not enabled & this car did not purchase FSD.

Moreover, standard Autopilot would require lane lines to turn on, which this street did not have.

— Elon Musk (@elonmusk) April 19, 2021

"Records show that Autopilot was not engaged, and the full self-driving feature for this vehicle was not purchased," the businessman wrote.

He added that for Autopilot to work, road markings are required, which were not present on the street where the crash occurred.

U.S. Senators Richard Blumenthal and Ed Markey urged the National Highway Traffic Safety Administration to take "corrective measures" against Tesla to prevent further misuse of its advanced driver-assistance system (ADAS).

They urged the federal agency to determine the exact cause of the recent crash and develop recommendations to improve automated driving systems and ADAS, such as Tesla’s Autopilot.

UK intervenes in Nvidia-Arm deal

The British government intervened in the $40 billion Nvidia-Arm deal on national-security grounds.

The antitrust regulator must prepare a report by 30 July 2021 examining the competition and national-security issues surrounding the deal.

In light of this, Nvidia shares fell about 4%.

Analysts think that the odds of closing the deal are no more than 10%. Yet the company leadership remains confident Arm will become part of Nvidia in 2022, as previously planned.

In September 2020, Nvidia announced the purchase of chip designer Arm from SoftBank. Qualcomm, Google and Microsoft opposed the deal, fearing the new owner would offer them unfavourable terms.

The European Commission unveiled AI regulation recommendations

The European Commission unveiled new rules governing AI in the European Union, including a ban on most surveillance technologies used on citizens.

The authors proposed banning applications that enable authorities to conduct social scoring or exploit children.

For high-risk AI used in hiring, maintaining critical infrastructure, migration, and law-enforcement needs, the Commission proposed strict limitations.

Penalties could reach up to 6% of global turnover or €30 million. The new rules require approval by the European Parliament and EU member states.

The U.S. regulator threatens firms using biased AI

The U.S. Federal Trade Commission urged companies to pay close attention to the use of sexist and racist algorithms. Otherwise the regulator threatened to intervene.

The agency’s attorney Elisa Jillson noted that the law prohibits the sale or use of algorithms that could lead to discrimination on the basis of race, colour, religion, national origin, sex, marital status or age.

Jillson reminded companies that misrepresenting the results of algorithm tests that check for discrimination violates federal law.
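One widely used discrimination test of the kind such audits rely on is the "four-fifths rule," which compares selection rates between groups and flags a potential disparate impact when the lower rate falls below 80% of the higher one. The sketch below is illustrative only (the group data and threshold application are invented for the example, not taken from the FTC):

```python
# Toy sketch of the "four-fifths rule" disparate-impact test.
# A selection rate is the share of positive outcomes (e.g. approvals) in a group;
# a ratio of the lower to the higher rate below 0.8 flags potential discrimination.

def selection_rate(outcomes: list[bool]) -> float:
    """Share of positive outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a: list[bool], group_b: list[bool]) -> bool:
    """True if the lower selection rate is at least 80% of the higher one."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    lo, hi = sorted((ra, rb))
    return hi == 0 or lo / hi >= 0.8

# Group A approved 9 of 10 applicants, group B only 5 of 10:
# 0.5 / 0.9 ≈ 0.56, below the 0.8 threshold.
print(four_fifths_check([True] * 9 + [False], [True] * 5 + [False] * 5))  # → False
```

Real audits go further (statistical significance, intersectional groups), but this ratio test is the usual starting point.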

TikTok sued in UK over children's facial data

TikTok was accused of illegally collecting facial data and other confidential information from British children.

Former Children’s Commissioner Anne Longfield alleges the platform has illegally collected personal information from 3.5 million underage users since 2018.

TikTok said it does not agree with the allegations and intends to defend its position.

If found liable, the company could pay more than £3 billion in compensation.

Supercomputer maker unveils AI chip with 850k cores

Cerebras unveiled the second-generation Wafer Scale Engine (WSE-2) processor designed for machine learning and AI workloads.

It contains 2.6 trillion transistors, 850,000 cores, and 40 GB of SRAM with a bandwidth of 20 Pbit/s. The processor is partitioned into small blocks connected by an internal network with throughput of up to 200 Pbit/s.

WSE-2 is manufactured with a 7‑nm process, and its area measures 46,000 mm².

\"Clearview
Wafer Scale Engine 2. Data: Cerebras.

The new processor will be part of the CS-2 supercomputer, with shipments due in Q3 2021. The price is not yet known. According to some sources, the previous generation cost around $2 million.

Scientists create algorithm to detect conspiracy theories online

A group of American researchers developed an algorithm that identifies conspiracy theories about COVID-19 on social media and models their evolution over time.

The researchers assembled a dataset of 1.8 million Twitter posts with relevant keywords to form four popular theories:

  • 5G cellular towers spread the virus;
  • The Gates Foundation is involved in the pandemic;
  • The virus was developed in a laboratory;
  • COVID-19 vaccines are dangerous.

They manually labeled several hundred messages in each category to build training sets.
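The pipeline described above (bucket tweets by theory-specific keywords, then compare sentiment across buckets) can be sketched roughly as follows. The keyword lists and negative-word lexicon here are invented for illustration; the study's actual features and classifiers are not public in this article:

```python
# Minimal sketch of the pipeline: assign tweets to conspiracy-theory
# categories by keyword match, then compute a naive lexicon-based
# negativity score to compare misinformation against factual tweets.
# All keyword lists below are illustrative assumptions, not the study's.

THEORY_KEYWORDS = {
    "5g": ["5g", "cell tower", "radiation"],
    "gates": ["gates foundation", "bill gates", "microchip"],
    "lab_origin": ["lab leak", "wuhan lab", "bioweapon"],
    "vaccine": ["vaccine danger", "vaccine injury", "anti-vax"],
}

NEGATIVE_WORDS = {"danger", "dangerous", "hoax", "poison", "kill", "evil"}

def categorize(tweet: str) -> list[str]:
    """Return every theory category whose keywords appear in the tweet."""
    text = tweet.lower()
    return [name for name, kws in THEORY_KEYWORDS.items()
            if any(kw in text for kw in kws)]

def negativity(tweet: str) -> float:
    """Fraction of tokens found in the (toy) negative-sentiment lexicon."""
    tokens = tweet.lower().split()
    if not tokens:
        return 0.0
    return sum(t.strip(".,!?") in NEGATIVE_WORDS for t in tokens) / len(tokens)

tweets = [
    "5G cell tower radiation spreads the virus, it's dangerous!",
    "The Gates Foundation funds vaccine research worldwide.",
]
for t in tweets:
    print(categorize(t), round(negativity(t), 2))
```

The manually labeled examples mentioned above would then serve as training data for a proper classifier in place of the keyword match.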

\"Clearview
Changes in word importance over time for tweets related to the Bill and Melinda Gates conspiracy theory. The darker the color, the more important the word. Data: study.

Results showed misinformation messages contain more negative sentiment than tweets with facts.

Researchers also found that over time conspiracy theories are augmented with details from other conspiracy theories and unrelated real-world events.

The team hopes their algorithm will help detect the spread of such theories more quickly in the future and combat misinformation online.

U.S. military develops conversational AI for robots

The U.S. military and researchers at the University of Southern California developed a conversational AI system for a combat robot to understand spoken commands.

The developers trained the system to recognise speech from soldiers’ dialogues recorded during search-and-rescue operations, ported the model to a robot, and successfully tested it in a similar training environment.

\"Clearview
Robot with conversational AI on board during testing. Data: U.S. Army.

The team plans to improve the system by training it on more commands and integrating computer vision to analyse gestures or facial expressions.

Pepper robot taught to ‘think aloud’

Italian researchers taught the Pepper robot to ‘think aloud’ to better understand how it makes decisions.

In a test they trained it to set a table according to etiquette rules. They then deliberately asked the robot to violate them by placing a napkin in the wrong spot.

"This situation upsets me. I would never break the rules, but I cannot upset the person who gave the command, so I do what they want," Pepper thought aloud after confirming the request was correct.

Researchers say this method helps address the ‘black box’ problem and makes the decision-making process of algorithms more transparent. They also found that with the ‘inner voice’ enabled, Pepper is more likely to complete the task successfully.
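The mechanism described above (verbalize the conflict between a rule and a command before complying) can be sketched in a few lines. The rule table and phrasing are invented for illustration and are not the researchers' implementation:

```python
# Toy sketch of robot "inner speech": before acting, the agent states any
# conflict between its etiquette rules and the user's command, then complies.
# The rule set and wording below are illustrative assumptions.

RULES = {"napkin": "left of the plate"}  # item -> correct placement

def place(item: str, spot: str, think_aloud: bool = True) -> str:
    """Carry out a placement command, verbalizing rule conflicts if enabled."""
    correct = RULES.get(item)
    if correct and spot != correct and think_aloud:
        print(f"This upsets me: the {item} belongs {correct}, "
              f"but I will do what the person asked.")
    return f"{item} placed: {spot}"

print(place("napkin", "on the plate"))  # triggers the inner-speech message
```

The reported benefit is exactly this trace: a human observer can see why the robot chose an action, not just the action itself.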

Also on Forklog:

  • New Zealand startup created a virtual Einstein;
  • Human rights advocates urged Biden to drop use of Clearview AI;
  • MIT launched an initiative focused on data privacy;
  • U.S. lawmakers proposed a legislative ban on Clearview AI;
  • Researchers created a deepfake model to forge satellite images.

Subscribe to ForkLog’s Telegram: ForkLog AI — all the news from the AI world!
