Event Recap / March 2018

Video: Juliana Schroeder, “Mistaking Minds and Machines”

On Thursday, March 22 at 12pm, the Center for Long-Term Cybersecurity was honored to host Juliana Schroeder, Assistant Professor in the Management of Organizations Group at UC Berkeley’s Haas School of Business, for the second event in our Spring 2018 Seminar Series.

Schroeder’s presentation, “Mistaking Minds and Machines: How Cues in Language Affect Evaluations of ‘Humanness’,” was based largely on a recent paper published in the Journal of Experimental Psychology (with Nicholas Epley, 2016). She provided an overview of a series of experiments focused on how people interpret communication differently depending on whether it comes from machines or from other human beings. This work was nominated for a best paper award at the Hawaii International Conference on System Sciences (HICSS).

“Every single day we are interacting with machines and making the decision about whether or not to give them access to our personal or private information,” Schroeder said. “Whether we’re letting our phone use our location or whether we’re completing a personality quiz on Facebook, we have to make that decision constantly. That’s a decision that requires trust.”

She defined trust as “the belief that [a machine] will behave with benevolence, integrity, predictability, or competence,” and noted that “a critical component of trust is this belief that a machine seems more or less humanlike. If you look at the traits associated with trust, those traits have empirically been more associated with humanness.”

In 2017, Schroeder received a grant from CLTC for a project titled “Sharing Personal Information with Humanlike Machines: The Role of Human Cues in Anthropomorphism and Trust in Machines,” which she has used to further explore the degree to which humans’ trust in machines depends on whether they believe the machine has a humanlike mind, with the capacity to think and feel. Integrating research in social psychology and human-computer interaction, Schroeder and her fellow researchers are developing a theoretical model of anthropomorphism in which they experimentally test the marginal added contribution of different types of human cues (e.g., language, face, voice) to the belief that a machine has a humanlike mind.

“There’s growing research focused on the link between anthropomorphism and trust,” she said. “What I study in particular is, what are the ways that people infer humanness in machines—and in other people as well—and particularly how they do this through different aspects of language.”

Watch the video of the presentation below or on YouTube.

https://www.youtube.com/watch?v=4DQ5q6Rzik8