
Tech giants urge US Supreme Court to bar lawsuits against algorithms
A group of companies, users, scholars and human-rights experts has spoken out in support of the tech giants in the YouTube algorithms case being heard by the US Supreme Court, CNN reports.
The case is Gonzalez v. Google. In 2015, a relative of the plaintiffs was killed in an ISIS attack in Paris. According to the plaintiffs, YouTube’s recommendation algorithms were partly responsible for the spread of the terrorist organization’s recruitment videos.
Google argues that Section 230 of the Communications Decency Act shields it from such lawsuits. Stripping AI-based recommendation systems of that legal protection could radically change the Internet, the tech giants’ allies say.
Companies such as Meta, Twitter and Microsoft have sided with Google, along with critics of the corporations, including Yelp and the Electronic Frontier Foundation.
They are joined by Reddit and a group of the platform’s volunteer moderators, who say the lawsuit would set a dangerous precedent: a ruling for the plaintiffs could eventually invite lawsuits over non-algorithmic forms of recommendation and even against individual users.
“The entire Reddit platform is built on users recommending content through voting and pinning posts. In this case there is no doubt about the consequences of the suit: their theory would dramatically expand the potential to hold people liable for their online interactions,” the company said.
Yelp says its business depends on providing users with relevant, non-deceptive reviews. A decision in favour of the plaintiffs could disrupt the service’s core functions, effectively forcing it to stop curating reviews altogether and to display even manipulative or fake ones.
Section 230 guarantees that platforms can moderate content so as to surface the most relevant information for users from the vast amount available on the Internet, Twitter said.
“Today, an average user would need about 181 million years to download all the data on the Internet,” the companies added.
Meta argues that such a reinterpretation of Section 230 would raise broad questions about what it means to “recommend” something on the Internet.
“If merely displaying a third-party post in a user’s feed qualifies as a ‘recommendation,’ many services would face potential liability for essentially all third-party content they host,” the company said.
Meta representatives added that nearly all decisions about sorting, selecting, organizing and displaying third-party content could be interpreted as a “recommendation.”
A ruling for the plaintiffs would also threaten GitHub. Microsoft says that for a platform with 94 million users, the consequences of restricting Section 230 would be “devastating.”
A company spokesperson said Bing search and the LinkedIn social network also rely on the same provision’s protection of algorithmic recommendations.
According to the Stern Center for Business and Human Rights at New York University, it is impossible to craft a rule that would isolate algorithmic recommendation as a meaningful liability category. Such attempts could lead to “the loss or suppression of a substantial amount of valuable speech,” especially for minorities.
“Web sites use ‘targeted recommendations’ because they make their platforms convenient and useful. Without protection for recommendations, services would have to remove third-party content […]. Valuable freedom of speech would disappear,” the NYU statement says.
In November 2022, Twitter leadership laid off a group of AI researchers working on the transparency and fairness of the platform’s algorithms.
In September, Facebook’s recommendation system was accused of “targeted incitement of atrocities” committed by the Myanmar military against the Rohingya.