Using AI for Just 10 Minutes Might Make You Lazy and Dumb, Study Shows


Using AI chatbots for even just 10 minutes may have a shockingly negative impact on people's ability to think and problem-solve, according to a new study from researchers at Carnegie Mellon, MIT, Oxford, and UCLA.

Researchers tasked people with solving various problems, including simple fractions and reading comprehension, through an online platform that paid them for their work. They conducted three experiments, each involving several hundred people. Some participants were given access to an AI assistant capable of solving the problem autonomously. When the AI helper was abruptly taken away, these people were significantly more likely to give up on the problem or flub their answers. The study suggests that widespread use of AI might boost productivity at the expense of developing foundational problem-solving skills.

“The takeaway is not that we should ban AI in education or workplaces,” says Michiel Bakker, an assistant professor at MIT involved with the study. “AI can clearly help people perform better in the moment, and that can be valuable. But we should be more careful about what kind of assistance AI provides, and when.”

I recently met up with Bakker, who has wild hair and a wide grin, on MIT's campus. Originally from the Netherlands, he previously worked at Google DeepMind in London. He told me that a well-known essay on the way AI may disempower humans over time inspired him to think about how the technology could already be eroding people's abilities. The essay makes for somewhat bleak reading, because it suggests that disempowerment is inevitable. That said, perhaps figuring out how AI can help people develop their own mental capabilities should be part of how models are aligned with human values.

“It is fundamentally a cognitive question: about persistence, learning, and how people respond to difficulty,” Bakker tells me. “We wanted to take these broader concerns about long-term human-AI interaction and study them in a controlled experimental setting.”

The resulting study seems particularly concerning, says Bakker, because a person's willingness to persist with problem-solving is crucial to acquiring new skills and also predicts their capacity to learn over time.

Bakker says it may be necessary to rethink how AI tools work so that, like a good human teacher, models sometimes prioritize a person's learning over solving a problem for them. “Systems that give direct answers may have very different long-term effects from systems that scaffold, coach, or challenge the user,” Bakker says. He admits, however, that balancing this kind of “paternalistic” approach could be tricky.

AI companies do already think about the more subtle effects their models can have on users. The sycophancy of some models, or how likely they are to agree with and flatter users, is something that OpenAI has sought to address with newer releases of GPT.

Putting too much faith in AI would seem especially problematic when the tools may not behave as you expect. Agentic AI systems are particularly unpredictable because they do complex chores independently and can introduce strange errors. It makes you wonder what Claude Code and Codex are doing to the skills of coders who may sometimes need to fix the bugs they introduce.

I recently got a lesson in the danger of offloading critical thinking to AI myself. I've been using OpenClaw (with Codex inside) as a daily helper, and I've found it to be remarkably good at solving configuration issues on Linux. Recently, however, after my Wi-Fi connection kept dropping, my AI assistant suggested running a series of commands in order to tweak the driver talking to the Wi-Fi card. The result was a machine that refused to boot no matter what I did.

Perhaps, instead of simply trying to solve the problem for me, OpenClaw should have paused to teach me how to fix the issue for myself. I might have a more capable computer (and brain) as a result.


This is an edition of Will Knight's AI Lab newsletter. Read previous newsletters here.
