Responsibility and Trust in AI-Mediated Communication

Published: 7 March 2019 | Version 1 | DOI: 10.17632/632ph8d4vj.1
Jess Hohenstein, Malte Jung


Included are data from successful and unsuccessful conversations conducted with either a standard or an AI-mediated messaging app. In each condition, the conversational outcome was controlled by a confederate. After each conversation, participants distributed responsibility for the outcome among themselves, their partner, and the AI mediation, and rated their perceived trust in their partner and in the AI mediation. We found that the presence of AI-generated smart replies increases perceptions of trust between human communicators, and that the AI serves as a moral crumple zone when things go awry, absorbing the responsibility that would otherwise have been assigned to the human. Also included are Linguistic Inquiry and Word Count (LIWC) variables for the confederate and participant sides of the conversations, as well as for the smart replies seen during the conversations. Due to the sensitive nature of the information revealed by participants in the conversations, participants were assured that the raw conversation data would remain confidential and would not be shared.
Artificial Intelligence, Communication, Computer-Mediated Communication, Interpersonal Trust