News / March 2018

Q&A with Juliana Schroeder: On Humans, Machines, and Trust

On Thursday, March 22 at 12pm, the Center for Long-Term Cybersecurity will host the second event in our Spring 2018 Seminar Series. This presentation will feature Juliana Schroeder, Assistant Professor in UC Berkeley’s Haas Management of Organizations Group. RSVP here to attend this event, which will be held in South Hall, Room 205, on the UC Berkeley campus (map).

As outlined on her website, Schroeder’s research focuses on how people navigate their social worlds: first, how people form inferences about others’ mental states and mental capacities, and second, how these inferences influence their interactions. Her research has been published in journals such as the Journal of Personality and Social Psychology, the Journal of Experimental Psychology, and Psychological Science. It has been featured by outlets such as the New York Times, Newsweek, NBC, and the Today Show, and has been funded by the National Science Foundation. She received her B.A. in psychology and economics from the University of Virginia; an M.A. in social psychology and advanced methods from the University of Chicago; an M.B.A. from the Chicago Booth School of Business; and a Ph.D. in psychology and business from the University of Chicago.

In recent years, Schroeder’s focus has turned to humans’ interactions with machines, including how people develop trust based on the nature of the communication interface. Her CLTC seminar is based on a paper, “Mistaking Minds and Machines: How Speech Affects Dehumanization and Anthropomorphism,” that she published in 2016 (together with Nicholas Epley) in the Journal of Experimental Psychology: General. This work was recently nominated for a best paper award at the Hawaii International Conference on System Sciences (HICSS).

In 2017, she received a grant from CLTC for work on the topic, “Sharing Personal Information with Humanlike Machines: The Role of Human Cues in Anthropomorphism and Trust in Machines,” which she used to further explore the degree to which humans’ trust in machines depends on whether they believe the machine has a humanlike mind, with the capacity to think and feel. Integrating research in social psychology and human-computer interaction, Schroeder and her fellow researchers are developing a theoretical model of anthropomorphism in which they experimentally test the marginal contribution of different types of human cues (e.g., language, face, voice) to the belief that a machine has a humanlike mind.

In anticipation of her CLTC seminar, we asked Juliana Schroeder a few questions about her research. Below are her responses (lightly edited for clarity).

What are your primary research interests?

I’m primarily interested in understanding how people make judgments of other agents’ (particularly other humans’ and machines’) mental capacities. The belief that a person has weak mental capacity—either reduced capacity to think or to feel—is a form of dehumanization, whereas believing that a machine has strong mental capacity is a form of anthropomorphism. I study how people make these judgments and their consequences. One consequence I’ve been considering recently is trust—specifically, willingness to share information with another person or machine.

You recently completed research focused on trust and communication between humans and machines. What did you discover?

In three field experiments and a laboratory experiment, we’ve found evidence that people are more likely to share their personal information with “virtual assistant applications” when they are talking to the applications than when they are typing to them. Talking to the applications makes people believe they are engaging in a more humanlike, social interaction, which seems to make them more willing to trust the agent with their information.

What led you to become interested in this topic?

I did my PhD in psychology and business at the University of Chicago and worked with a professor, Nicholas Epley, who is an expert in this type of research (you can find his latest book here).

Part of your work focuses on how different types of human cues—e.g., language, face, or voice—shape people’s beliefs that a machine has a humanlike mind. What are the roles that these different cues play?

Voice appears to be critical for humanization—specifically, speech (that is, a natural human voice combined with language). Our experiments suggest that voice may be more humanizing than visual cues and more humanizing than language alone (i.e., in text form). One of the reasons why voice seems to convey a person’s (or machine’s) mental capacities is that variance in paralinguistic signals (e.g., tone of voice, volume, speech rate) expresses nuances in thoughts and emotions very well. For example, other research shows that hearing a person speak increases observers’ empathic accuracy (i.e., their ability to accurately judge the communicator’s thoughts and feelings) compared to reading the same words in text form.

What are the cybersecurity issues or concerns that might emerge as machines become more humanlike?

I think users become more comfortable interacting with machines as the machines look and sound increasingly humanlike, which may lead users to be less vigilant about their privacy around such machines. Security concerns could arise from this reduced vigilance.

How would you predict human-technology interfaces to evolve in the future, given what you’ve learned?

We are certainly moving in the direction of voice recognition. Not only does it feel more natural to talk to a machine, but it’s also more convenient, at least once the voice recognition technology works adequately. I think humans and machines are starting to form deeper relationships, and we might be moving in a direction where machines start supplementing or even replacing human relationships. So it’s really critical to understand which aspects of machines affect trust, and what security concerns are associated with this shift.

What do you hope people will get out of your CLTC seminar?

I hope they’ll think more carefully about how the form of language through which people interact with machines (and other humans) can affect their experiences.

Please RSVP to attend the March 22, 12pm lunch seminar with Juliana Schroeder.