ForkLog

OpenAI Backs Research into AI Morality

OpenAI is funding scientific research into algorithms capable of predicting human moral judgments, according to TechCrunch, citing a filing with the IRS.

The startup has awarded a grant to researchers at Duke University for a project titled “AI Morality Research.”

Details of the work are scarce, and lead researcher Walter Sinnott-Armstrong declined to comment on its progress. The grant is set to conclude in 2025.

Previously, Sinnott-Armstrong and fellow project participant Jana Borg authored a book on the potential of AI as a “moral GPS,” aiding individuals in making more informed decisions.

Together with other teams, they developed a “morally oriented” algorithm to assist in deciding the allocation of donor kidneys. They also evaluated scenarios where people might prefer to delegate decision-making to AI.

The aim of the OpenAI-funded work is to train algorithms to “predict human moral judgments” in scenarios involving ethical conflicts in medicine, law, and business.

Earlier, Sam Altman’s startup was preparing to launch an AI agent under the code name “Operator.”
