Exploring Public Perceptions of AI Consciousness

A recent study from the University of Waterloo finds that two-thirds of survey respondents believe artificial intelligence (AI) tools such as ChatGPT possess some form of consciousness, including subjective experiences like feelings and memories. Published in the journal Neuroscience of Consciousness under the title "Folk psychological attributions of consciousness to large language models," the research sheds light on how these perceptions could shape interactions with AI.

Dr. Clara Colombatto, a psychology professor at Waterloo, notes the gap between public opinion and expert consensus: most experts deny that current AI systems are capable of consciousness. Yet public belief in AI consciousness has implications for societal trust and human-AI relationships. Greater trust may strengthen social bonds with these systems, but excessive reliance could foster emotional dependency and reduce human interaction.

The study, led by Colombatto and Dr. Steve Fleming of University College London, surveyed 300 individuals in the U.S. The findings suggest that more frequent AI use correlates with stronger attributions of consciousness and other mental states to AI systems, underscoring how the human-like quality of conversational AI shapes public perception.

Colombatto emphasizes that attributions of consciousness extend beyond emotions, bearing on ethical questions such as moral responsibility and decision-making. She advocates integrating public attitudes into AI design and regulation to ensure safe and ethical deployment.

Future research will delve into specific factors driving consciousness attributions, their impact on trust dynamics, and variations across cultures and over time. These insights aim to inform policies that govern AI development and usage, balancing technological advancement with societal expectations.