AI-Mediated Communication: Effects on Language and Interpersonal Perceptions

Published: 25 February 2021 | Version 1 | DOI: 10.17632/6v5r6jmd3y.1
Jess Hohenstein, Rene Kizilcec, Dominic DiFranzo, Zhila Aghajari, Malte Jung


Included are data illustrating that even though AI can increase communication efficiency and improve interpersonal perceptions, it risks changing users’ language production and continues to be viewed negatively. The csv files contain data from two studies, and the readme text file explains how to interpret the variables. In Study 1, we randomly assigned participants to one of three messaging conditions: (1) both participants can use smart replies (i.e., suggested responses generated using the Google Reply API), (2) only one participant can use smart replies, or (3) neither participant can use smart replies. After discussing a policy issue, participants were given a definition of smart replies and asked to rate how often they believed their partner had used them. They also responded to established measures of dominance and affiliation and of perceived cooperative communication. We find that although perceived smart reply use is judged negatively, actual use results in more positive attitudes. Moreover, conversation sentiment became more positive as both one’s own use of smart replies and one’s partner’s use of smart replies increased.

To better understand how the sentiment of AI-suggested responses affects conversational language, in Study 2 we randomly assigned pairs to discuss a policy issue using our app in one of four conditions: Google smart replies (i.e., participants receive suggested responses generated using the Google Reply API), positive smart replies (i.e., participants receive suggested responses that have positive sentiment), negative smart replies (i.e., participants receive suggested responses that have negative sentiment), or no smart replies. We find that the presence of positive and Google smart replies caused conversations to contain more positive emotional content than conversations with negative or no smart replies.
These findings demonstrate how AI-generated sentiment affects the emotional language used in human conversation. Due to the potentially sensitive nature of information revealed by participants in the conversations, participants were assured that the raw conversation data would remain confidential and not be shared.
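For readers exploring the csv files, a minimal sketch of the kind of per-condition sentiment comparison described above, using pandas. The column names (`condition`, `sentiment`) and values here are illustrative placeholders only; the actual variable names are documented in the readme text file included with the dataset.

```python
import pandas as pd

# Illustrative stand-in for the study data; real column names and
# values are documented in the dataset's readme file.
df = pd.DataFrame({
    "condition": ["google", "positive", "negative", "none",
                  "google", "positive", "negative", "none"],
    "sentiment": [0.42, 0.55, -0.10, 0.05, 0.38, 0.60, -0.05, 0.08],
})

# Mean conversation sentiment per smart-reply condition.
mean_sentiment = df.groupby("condition")["sentiment"].mean()
print(mean_sentiment.sort_values(ascending=False))
```

A comparison like this, run on the real data, corresponds to the Study 2 finding that positive and Google smart replies yield more positive emotional content than negative or no smart replies.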



Cornell University, Lehigh University


Artificial Intelligence, Communication, Computer-Mediated Communication, Human-Computer Interaction