BETWEEN LOGIC AND TRUST: HOW PEOPLE PERCEIVE AI IN HIGH-STAKES CONTEXTS
Keywords:
Artificial Intelligence, Trust in AI, Explainable AI, Human-AI Collaboration, Transparency, Algorithmic Fairness, Ethical Decision-Making

Abstract
This paper examines how individuals perceive and trust artificial intelligence (AI) in high-stakes decision-making contexts such as healthcare, finance, education, and autonomous transport. Data were collected through a structured, anonymous, self-administered, scenario-based questionnaire completed by 197 individuals aged 15 to 39, designed to capture participants' preference for human versus AI assistance in decision-making and the effect of transparency on perceived trustworthiness. The findings reveal a strong preference for human oversight: 69.5% of respondents considered a senior surgeon without AI assistance more trustworthy than a mid-level surgeon with AI assistance, and 72.6% preferred human–AI cooperation over fully autonomous financial management. Education was the only sector in which participants expressed notable trust in AI-based admissions systems, with 17.8% expressing full trust and 47.7% partial or conditional trust. Transparency emerged as a significant factor in acceptance: 76.6% of respondents indicated they would trust AI systems more if clearer explanations were provided. Overall, the results suggest that although people recognize the efficiency of AI, they remain cautious about allowing it to operate independently and prefer systems that balance automation with human responsibility.