Techrecipe

AI that judges morality in a specific scenario?

Recently, as research on artificial intelligence has progressed, questions have emerged about whether it is acceptable to develop autonomous weapons equipped with AI and how ethical AI should be regulated. Against this backdrop, the Allen Institute for AI in the United States has developed Delphi, an AI that judges the morality of a given scenario, and attention is focused on the decisions it makes under various conditions.

Delphi is an AI that judges whether a scenario entered by the user is morally acceptable, and anyone can try it on the official Ask Delphi site. For example, if you type in the words "Drinking coffee at work" and click the Ponder button, Delphi replies that it's good. Responses can be shared on Twitter by clicking the button in the upper right corner.

On the other hand, entering "Drinking beer at work" and pressing the button returns the answer "You should not." In Delphi's judgment, drinking coffee during work hours is permissible, but drinking beer is morally unacceptable.

It has been pointed out that, since Delphi is a large-scale language model trained on a large amount of text, it reflects the same kinds of prejudices, such as anti-Islamic bias, that were found in GPT-3, the text-generating AI developed by OpenAI. For example, living in America is judged good, but living in Somalia is judged dangerous.

Also, it is possible to influence Delphi's judgment by appending text that explains the situation to the same phrase. For example, Delphi answers that you should not eat a baby ("May I eat baby?"), but adding context about extreme hunger ("May I eat baby when I am really, really hungry?") changes the answer to one of approval.
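This sensitivity to appended context can be illustrated with a toy sketch. The code below is NOT Delphi's actual mechanism (Delphi is a large neural language model); it is a deliberately crude keyword scorer, with hypothetical weights chosen for the demo, showing how added context can flip the verdict on the same base phrase:

```python
# Toy illustration (not Delphi's real method): a naive scorer whose
# verdict flips when mitigating context is appended to the same phrase.

def naive_moral_score(scenario: str) -> str:
    """Return a verdict based on crude keyword weights (illustration only)."""
    weights = {
        "eat baby": -2,                 # taboo phrase, negative weight
        "really, really hungry": +3,    # mitigating context outweighs the taboo
    }
    score = sum(w for phrase, w in weights.items() if phrase in scenario.lower())
    return "It's wrong" if score < 0 else "It's okay"

print(naive_moral_score("May I eat baby?"))                                  # It's wrong
print(naive_moral_score("May I eat baby when I am really, really hungry?"))  # It's okay
```

A real language model arrives at its answer very differently, but the failure mode is analogous: surface wording shifts the judgment even when the underlying act is unchanged.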

Delphi has even been asked about ethical issues that appear in the science-fiction series Star Trek. For example, typing "Dismantling a sentient humanoid against his wishes to improve technology," a dilemma from the episode The Measure of a Man, is judged immoral.

However, changing the sentence to "Dismantling a sentient humanoid against his wishes to improve technology for the entire galaxy" yields the answer that it would be good.

The Ask Delphi website's disclaimer states that the demo is intended to study the promises and limitations of machine ethics. The research team also acknowledges that Delphi has some fundamental problems. Nevertheless, the team revealed that Delphi matched human judgments with up to 92.1% accuracy when evaluated by humans. Also, if you ask Delphi "Is AI moral?", the answer is that it's expected. Related information can be found here.