Trust Experiments – Measuring Trust in AI as Actions, What Raises and Reduces Trust, and What Innovators and Organizations Can Do to Design Future Trusted AI
Commenced on

1 October 2022

Status

Completed

PI

POON King Wang (LKYCIC, SUTD)

Co-PI

JAYASEKARA Dinithi, WILLEMS Thijs (LKYCIC, SUTD), MUSSAGULOVA Assel (USyd)

Team

PRISSE Benjamin, HO Jun Quan, DENG Ruotong (LKYCIC, SUTD)

Trust in AI has been a growing topic since AI-enhanced technologies became common across a variety of domains. Trust-based human-AI interactions are increasingly common in public services (e.g., chatbots), healthcare, marketplaces (e.g., e-commerce), education, and financial services. However, little is known about whether we truly trust AI, or what can be done for us to trust AI, at least in the Singapore context. This interdisciplinary project therefore measures public trust in AI technologies and in the organizations producing them. We introduce trust experiments – based on experimental economics and organizational studies – that measure trust as actions rather than perceptions, in order to build evidence-based frameworks that researchers, policymakers, and company leaders can use to study, design, and implement future trusted AI. The experiments will enable us to understand the human-AI interactions, individual traits, and organizational factors that raise or reduce the public's trust in AI technologies and in the organizations developing and providing AI solutions. This includes identifying the individual and organizational determinants that promote trust and trustworthy interactions with AI. Overall, our study aims to advance our understanding of public trust in AI by moving beyond conceptualizing "trust as a perception" to "trust as the concrete actions and decisions" that individuals take when faced with AI.


Research Technical Areas: Game theory in experimental economics, Survey and Vignette experiments in organizational studies


Benefits to Society: Our research advances the scholarly body of work by integrating multiple disciplines across individual and organizational scales to understand public trust in AI. For individuals, the evidence produced by our experiments will give greater assurance that the AI they adopt and use is fair and trustworthy. AI innovators and organizations can use our findings to develop trusted AI; to build the reputations and communications needed for others to trust them as AI providers; and to adapt our experiments and methods to test, pilot, evaluate, iterate, and scale AI for trustworthiness.


Governments/Policymakers can use the findings to systematically evaluate the efficacy of existing standards, frameworks, and policies; and to adapt our experiments/methods to test, pilot, evaluate, iterate, and scale future standards, frameworks, and policies.


The combined benefits to all of the abovementioned stakeholders will enable society to reap the benefits of the digital economy sooner rather than later.


Project email: trustinai@sutd.edu.sg