
AI Employees Sound Alarm Over Disclosure Rights and AGI Fears
Former and current employees of AI companies have published an open letter demanding expanded rights to disclose potentially hazardous corporate information.
“Companies involved in artificial intelligence possess confidential information about the capabilities and limitations of their systems, their safeguards, and the levels of risk [associated with the technologies]. Currently, they have only minor obligations to provide some of this information to governments, but not to the public,” the document states.
Among the signatories of the letter are 13 former and current employees of OpenAI, Google DeepMind, and Anthropic, some of whom wished to remain anonymous.
The authors argue that until effective government oversight is established over AI giants, their employees are “the only ones who can hold them accountable.”
However, such companies have “strong financial incentives to avoid oversight.” According to the authors, they are often constrained by non-disclosure agreements and other corporate procedures:
“Ordinary whistleblower protections are insufficient because they are tied to illegal activities, whereas many of the risks that concern us are not yet regulated. Some reasonably fear various forms of retaliation, given the history of similar cases in the industry.”
As a mitigating measure, the authors proposed that corporations adhere to several principles:
- Do not enter into agreements that prohibit “denigration” or criticism of the company.
- Provide an anonymous and impartial process for employees to report risks.
- Foster a “culture of open criticism” and acknowledge a range of potential concerns, from manipulation by AI to loss of control over it (the so-called “singularity”).
- Refrain from retaliating against employees who publicly disclose risk-related confidential information.
The Omnipotent AGI
Another former OpenAI employee, Leopold Aschenbrenner, published an essay expressing concerns about the rapid development of artificial general intelligence (AGI).
The work is part of his Situational Awareness series and is dedicated to OpenAI co-founder Ilya Sutskever, who had recently left the company.
“The AGI race has begun. We are building machines that can think and reason. By 2025-2026, these machines will surpass many college graduates. […] Along the way, national security agencies will intervene. The US government will awaken, and by 2027-2028 we will have some form of government AGI project. No startup will be able to handle superintelligence. Somewhere in a SCIF [a secure government facility], the endgame will play out,” writes Aschenbrenner.
He observes that the technological rivalry is intensifying each year, and that by the end of the decade “trillions of dollars will be spent on graphics processors.”
Meanwhile, the leading American AI developers treat safety as a secondary concern; in his words, they are currently “serving AGI on a platter to the Communist Party of China.”
“Superintelligence will provide a decisive economic and military advantage. China is not out of the game yet. In the race to create AGI, the survival of the free world is at stake. Can we maintain our superiority over authoritarian regimes? And can we avoid self-destruction along the way?” the former OpenAI employee asks.
He believes that only a small number of people involved in AI development can influence the future of humanity.
Earlier, billionaire Elon Musk identified artificial intelligence as his primary concern. In his view, neural networks could eliminate the need for humans to work.
Back in May 2024, billionaire investor Warren Buffett compared the threat of deepfakes to nuclear weapons.