OpenAI Backs Research into AI Morality

OpenAI is funding scientific research into algorithms capable of predicting human moral judgments, according to TechCrunch, citing a filing with the IRS.

The startup has awarded a grant to researchers at Duke University for a project titled “AI Morality Research.”

Details of the work are scarce, and lead researcher Walter Sinnott-Armstrong declined to comment on the progress. The grant is set to conclude in 2025.

Previously, Sinnott-Armstrong and fellow project participant Jana Borg co-authored a book on the potential of AI to serve as a “moral GPS,” helping people make better-informed decisions.

Together with other teams, they developed a “morally oriented” algorithm to assist in deciding the allocation of donor kidneys. They also evaluated scenarios where people might prefer to delegate decision-making to AI.

The aim of the OpenAI-funded work is to train algorithms to “predict human moral judgments” in situations that provoke conflicts in medical, legal, and business contexts.

Earlier, Sam Altman’s startup was preparing to launch an AI agent under the code name “Operator.”
