
US to set rules for generative AI models

The Biden administration will study whether rules are needed for AI systems such as ChatGPT, The Wall Street Journal reports.


The U.S. Department of Commerce has launched a call for comments on whether potentially risky AI models should undergo a certification process before their release.


“We know we need to set guardrails to ensure they are used responsibly,” said Alan Davidson, head of the National Telecommunications and Information Administration at the department.


The comment window will run through June 10, 2023. According to Davidson, the comments will help the agency craft recommendations for policymakers on how to approach AI issues.


He noted that his agency’s legal mandate includes advising the president on technical policy, not writing or enforcing rules.


Representatives of Microsoft backed the administration’s decision to place AI development under close scrutiny.


“We should all welcome such a step in public policy to gather broad feedback, consider the issues carefully, and act promptly,” the company said.


Earlier, the Cyberspace Administration of China published a draft rule aimed at developers of generative AI models. If the document is adopted, the regulator would require providers to ensure that their services do not generate content that threatens national security.


In March, more than 1,000 industry experts urged pausing the development of large language models for six months.


Later, a number of other experts criticised the letter’s authors, accusing them of distorting scientific research to promote the idea of AI’s “excessive power”.


In April, Biden called concerns about AI premature, though he acknowledged that tech companies bear responsibility for the safety of their AI products.
