10/29/2025 | Press release | Distributed by Public on 10/29/2025 04:33
The finding adds to a rapidly growing body of research indicating that blindly trusting AI output carries risks, from 'dumbing down' people's ability to source reliable information to workforce de-skilling. While people did perform better when using ChatGPT, it is concerning that they all overestimated that performance.
'AI literacy is truly important nowadays, and therefore this is a very striking effect. AI literacy might be very technical, and it's not really helping people actually interact fruitfully with AI systems,' says Welsch.
'Current AI tools are not enough. They are not fostering metacognition [awareness of one's own thought processes] and we are not learning about our mistakes,' adds doctoral researcher Daniela da Silva Fernandes. 'We need to create platforms that encourage our reflection process.'
The article was published on October 27 in the journal Computers in Human Behavior.
Why a single prompt is not enough
The researchers designed two experiments in which some 500 participants completed logical reasoning tasks from the US Law School Admission Test (LSAT). Half of the group used AI and half didn't. After each task, subjects were asked to assess how well they had performed -- and they were promised extra compensation for doing so accurately.
'These tasks take a lot of cognitive effort. Now that people use AI daily, it's typical that you would give something like this to AI to solve, because it's so challenging,' Welsch says.