Press ‘1’ to speak to a machine: An examination of the psychological factors influencing preference for interaction with artificially intelligent actors
Author(s)
Yang, Hee Jin (Heather)
Advisor
Carroll, John Stephen
Abstract
What psychological factors influence the preference for interaction with a human versus an artificially intelligent actor? How can these factors be used to increase adoption of novel technologies, and what are their broader societal impacts? In this dissertation, I answer these questions through two streams of research: first, by examining what kinds of people seek out algorithmic advice, and second, by examining how the implicit application of social information to algorithmic agents affects their interpretability and evaluation.
In Chapter 1, I examine individual-level differences among users of artificially intelligent advisors. Across four studies, users’ cognitive style predicted advice-seeking from algorithmic advisors, even after controlling for a host of consequential factors, such as prior experience with artificial intelligence, comfort with technology, social anxiety, and educational background. Building on the dual-process theory literature, I show that greater cognitive reflection is associated with higher perceived accuracy of algorithmic (versus human) advisors, with accuracy perceptions mediating the relationship between cognitive style and advisor preference. Compared with their more deliberative counterparts, individuals who rely on intuition perceive human advisors as more accurate than algorithmic advisors and rate algorithmic advisors as less impartial.
In Chapter 2, I investigate how individuals apply social stereotypes to digital voiced assistants (DVAs) and how doing so facilitates understanding of novel personified devices. By experimentally pairing participants with simulated artificially intelligent voiced agents, I demonstrate that individuals implicitly apply social stereotypes to these agents in the same way they do to humans. Consistent with traditional gender stereotypes, and in contrast to current academic accounts that rely on a generalized preference for female voices, I find that individuals prefer female (versus male) voiced artificially intelligent agents in roles that are female-typed, but not male-typed, demonstrating a stereotype-congruence effect. I extend this finding to show how gender-stereotype-congruent features of a novel device help inexperienced users understand its capabilities.
Finally, I discuss the implications of this research for managers, policy makers, developers, and users of artificially intelligent agents.
Date issued
2021-06
Department
Sloan School of Management
Publisher
Massachusetts Institute of Technology