Why Is Artificial Intelligence Blamed More? Analysis of Faulting Artificial Intelligence for Self-Driving Car Accidents in Experimental Settings

Bibliographic Details
Published in: International Journal of Human-Computer Interaction, 2020-11, Vol. 36 (18), p. 1768-1774
Main Authors: Hong, Joo-Wha, Wang, Yunwen, Lanz, Paulina
Format: Article
Language:English
Description
Summary: This study conducted an experiment to test how the level of blame differs between an artificial intelligence (AI) and a human driver, drawing on attribution theory and the computers are social actors (CASA) paradigm. It used a 2 (human vs. AI driver) × 2 (victim survived vs. victim died) × 2 (female vs. male driver) design. After reading a given scenario, participants (N = 284) were asked to assign a level of responsibility to the driver. Participants blamed the driver more when the driver was an AI than when the driver was a human. A higher level of blame was also assigned when the outcome was more severe. However, driver gender did not significantly affect the level of blame. These results indicate that the tendency to blame AI stems from a perception of dissimilarity, and that the severity of outcomes influences the level of blame. Implications of the findings for applications and theory are discussed.
ISSN:1044-7318
1532-7590
DOI:10.1080/10447318.2020.1785693