
Experts criticise plan to pause AI research
A number of AI experts criticised the authors of the Future of Life Institute’s letter calling for a six-month pause on developing large language models.
Critics argue the initiative amounts to AI hype and misrepresents the academic work of some researchers.
In arguing for a pause, the letter's authors cited Timnit Gebru, a former Google researcher whom the company fired after she criticised large language models in her academic work.
But Gebru says the authors misrepresented the paper. In her words, the work in fact cautions against inflated claims about AI.
“They basically say the opposite of what we say and cite our paper,” the scientist wrote.
One of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have ‘human-competitive intelligence.’
They basically say the opposite of what we say and cite our paper?
— @timnitGebru@dair-community.social on Mastodon (@timnitGebru) March 30, 2023
Emily Bender, a co-author of the paper, said the letter is a mess and soaked in AI hype.
In her words, the risks of the technology were never about it being “too powerful.” Rather, the researchers warned about the concentration of power in the hands of people, the reproduction of systems of oppression, and damage to the information ecosystem.
Instead: They’re about concentration of power in the hands of people, about reproducing systems of oppression, about damage to the information ecosystem, and about damage to the natural ecosystem (through profligate use of energy resources).
— @emilymbender@dair-community.social on Mastodon (@emilymbender) March 29, 2023
Stability AI chief Emad Mostaque, a signatory of the letter, walked back one of its demands just hours after publication.
“I do not think a six-month pause is the best idea,” he wrote on Twitter.
However, large emergent models can be very dangerous without achieving AGI or sentience.
I don’t think a pause will do much but signed in the spirit of the letter that transparency & coordination & better governance is needed
— Emad (@EMostaque) March 29, 2023
Another signatory, psychology professor Gary Marcus, spent much of the day arguing with critics on Twitter, including computer science professor Nick Holliman. Users accused the scientist of creating “a silly distraction”.
One critic suggested the letter was signed by “white people” who are indifferent to current issues of algorithmic bias.
“I have repeatedly stated that we must be concerned with how existing systems perpetuate past biases”, Marcus replied.
This misrepresents me.
I have repeatedly argued (Rebooting AI, my pinned tweet since 2021, Atlantic article last week, etc) that we must be concerned with how existing systems perpetuate past bias.
I stand by what I wrote yesterday: “We desperately need to stop dichotomizing.… https://t.co/GohmvZ6ty2
— Gary Marcus (@GaryMarcus) March 29, 2023
Large tech firms such as Google, Meta, and Microsoft have been working on AI systems for years, yet interest in the technology surged only after the launch of ChatGPT.
In March 2023, OpenAI introduced GPT-4, a new version of its language model, which likely prompted the open letter.
Earlier in March, more than 1,000 AI experts signed the letter calling for a six-month pause in the development of large language models.