As artificial intelligence becomes a routine part of workplaces, new research suggests the greater risk may not be a decline in human intelligence, but a gradual erosion of confidence in one's own thinking. A large study indicates that how people interact with AI tools plays a decisive role in whether those tools strengthen or weaken cognitive self-assurance.
Researchers surveyed nearly 2,000 working adults in the United States and Canada as they completed simulated job-related tasks using systems such as ChatGPT, Claude, and Gemini. The findings showed a clear divide in user behaviour. Participants who accepted AI-generated responses without question were more likely to feel that the technology was doing the thinking for them. That group also reported reduced confidence in their own reasoning and a weaker sense of ownership over their ideas.
By contrast, individuals who actively engaged with the outputs—editing, questioning, or rejecting suggestions—reported higher confidence levels and a stronger belief that the final work was their own. The study, published in Technology, Mind, and Behavior, suggests that AI does not inherently diminish cognitive ability, but instead influences how people perceive their own thinking depending on usage style.
Sarah Baldeo, a PhD candidate in AI and neuroscience at Middlesex University and one of the study’s authors, said outcomes varied significantly depending on interaction patterns. “Generative AI can lead to cognitive decline or cognitive evolution—it depends on your interaction style,” she said, adding that brain engagement increased or decreased depending on how actively participants used the tools.
The research also found that people were more likely to fully outsource thinking during open-ended planning tasks, while they tended to challenge AI more when dealing with personal reflection or self-assessment. Experience also mattered, with senior professionals more likely to override AI suggestions and report higher confidence than less experienced workers.
Ethan Mollick, an associate professor at the Wharton School who writes on AI in the workplace, said the findings reflect broader behavioural tendencies rather than technological harm. “If the AI solves a problem for you, you don’t think and you don’t learn,” he noted, though he added that structured use of AI as a tutor-like tool can improve outcomes.
Experts caution that reliance on AI often stems from convenience rather than intention. Many users default to accepting outputs to reduce effort, gradually outsourcing skills they might otherwise maintain. However, researchers emphasise that this shift is not inevitable.
The study highlights a feedback loop: individuals with lower confidence are more likely to rely heavily on AI, and that reliance may further reduce confidence over time. Those with stronger self-belief tend to use the technology more critically, preserving a sense of intellectual control.
The findings point to a central conclusion: the impact of AI on human thinking is not fixed by the technology itself, but is shaped by the habits and choices of its users.