Intelligence brings responsibility - Even smart AI assistants are held responsible


Bibliographic Details
Published in: iScience, 2023-08, Vol. 26 (8), Article 107494, p. 107494
Main Authors: Longin, Louis, Bahrami, Bahador, Deroy, Ophelia
Format: Article
Language: English
Subjects: Social interaction; Artificial intelligence; Social sciences
Description
Summary: People will not hold cars responsible for traffic accidents, yet they do when artificial intelligence (AI) is involved. AI systems are held responsible whether they act or merely advise a human agent. Does this mean that responsibility follows as soon as AI is involved? To find out, we examined whether purely instrumental AI systems stay clear of responsibility. We compared AI-powered with non-AI-powered car warning systems and measured the responsibility ratings of the systems alongside their human users. Our findings show that responsibility is shared when the warning system is powered by AI, but not when it is purely mechanical, even though people consider both systems mere tools. Surprisingly, whether the warning prevents the accident introduces an outcome bias: the AI receives more credit than blame depending on what the human manages or fails to do.

Highlights:
• Basic AI assistants are seen as sharing responsibility with their human user
• Active AI assistants receive more credit than blame
• But AI assistants are strongly perceived as tools
• Results are the same for verbal and tactile assistants

Keywords: Social interaction; Artificial intelligence; Social sciences
ISSN: 2589-0042