Transparency and Public Communication Foster Trust in AI Companies
Description
This study examines how organizational characteristics of companies producing Artificial Intelligence (AI) technologies influence public trust, using a vignette-based experimental design. Building on prior frameworks of trust, we focus on five sub-dimensions of trust: Benevolence, Standards and Guidelines, Data Quality, Reliability, and Transparency, each manipulated at three levels. Results indicate that Transparency and Benevolence are the strongest drivers of trust. Organizations that clearly explain their AI technologies and demonstrate societal accountability by seeking and incorporating public feedback are viewed more favorably. Adherence to external standards, such as national or international guidelines, further enhances trust. Technical performance and data quality are less influential, as participants assume the technology functions adequately for their limited use. We conclude that transparent practices, societal engagement, and institutional collaboration foster public confidence in companies producing AI technologies.