AI feels like magic. And that’s a problem.

AI is casting a spell over us by making things feel too easy. Here's how you break the curse.

 

The weekend that changed everything

A CEO builds a working strategic dashboard over a Saturday afternoon. A marketing director vibe-codes a competitive analysis tool before Monday's board meeting. A CFO spins up a scenario planning model that would have taken the finance team three weeks - in an hour, using Claude, with no developer in sight.


This is not science fiction. It is happening right now, in organisations at every level, and it is genuinely extraordinary. The ability to build working things - not just receive answers, but create functional tools, prototypes, and systems - without technical skills or lengthy IT cycles is one of the most significant shifts in organisational life in a generation. The thrill is real. The productivity gain is real. The democratisation of building is real.


And inside all of it, quiet and invisible, is a trap.

 

The ‘magic’ problem

The spell cast by AI builder tools is different from the spell cast by a good search result. When you receive an answer, you know you received an answer. When you build a thing - when you watch a working prototype materialise from a conversation, when the dashboard populates, when the proof of concept actually runs - something deeper happens. You feel capable in a new way. The friction that used to stand between an idea and a thing has gone. That feeling is intoxicating, and it is entirely justified.


But the same magic that removes the friction of building can quietly remove the friction of thinking. Every prototype that works becomes a reason not to interrogate what it was built for. Every tool that produces impressive outputs becomes harder to question. Psychologists call the underlying mechanism automation bias - the tendency to over-rely on automated systems and under-apply human judgement. The trap is not that AI gets things wrong. It is that AI gets things wrong fluently, confidently, and in ways that are very hard to argue with in a Monday morning board presentation.

"How AI is subtly shrinking our thinking" is not a headline about AI failing. It is a headline about AI succeeding - and what that success can quietly cost.


An article published this week in Harvard Business Review puts a precise name to what is at stake. Researchers studying how organisations learn found that as AI reuse rose, independent exploration fell and teams converged on the same few approaches. Productivity went up. Innovation quietly flattened. They call what is being lost absorptive capacity - the ability to evaluate, adapt and improve ideas rather than just copy them. A manager with absorptive capacity does not just accept an AI-generated competitor analysis. She spots what is missing, tests its assumptions against her own market knowledge, and adapts the recommendations to her context. A manager without it forwards the output unchanged.


The thinking gap is what happens when absorptive capacity erodes faster than productivity rises.


The thinking gap

The leaders winning with AI are not using it less. They are thinking better before - and during - the build. Three questions - a pre-flight check for closing the thinking gap - take minutes and prevent the mistakes that take months to fix.


1. Framing: Are you building the right thing?

The most common failure mode in AI-assisted building is not technical. It is that the tool gets built before the problem gets defined. And because building is now so fast and so satisfying, the pull toward starting before thinking is stronger than it has ever been.


The HBR researchers frame the solution as a simple principle: deposit your own thinking before you withdraw the AI's. Contribute understanding to withdraw wisdom. In practice this means stating the problem precisely before you build anything.


Here is what that looks like. Most people start with something like: "Build me something to help improve our sales performance."


The better starting point: "Our enterprise deal cycle is running 40% longer than mid-market despite similar complexity. Build something that helps me diagnose where the delays are happening - specifically at the procurement sign-off stage."


The first prompt will generate something plausible that could apply to almost any company. The second forces you to have already done some thinking - and rewards that thinking with something genuinely useful. The discipline: before you build, state the problem in one sentence with no vague verbs. No "improve," "streamline," or "enhance." If you cannot, you have a direction, not a destination.


Amazon learned this expensively. It framed its hiring challenge as "identify high-performing candidates faster" and trained an AI on a decade of successful hires. Its workforce was disproportionately male - so the AI learned to replicate that pattern, penalising CVs containing the word "women's." Efficient. Fluent. Wrong. Not because the AI failed. Because the real problem was never precisely stated.


2. Width: Who and what else does this touch?

When AI tools move beyond personal use - into teams, processes, client interactions - they enter systems. A solution that improves one part frequently creates friction, cost or fragility somewhere else. Before deploying, spend five minutes drawing the map: which other teams, metrics, relationships or processes does this affect? Who is invisibly dependent on the thing you are about to change?


Uber's surge pricing solved a genuine supply-and-demand problem elegantly. What the algorithm could not see was the reputational system it was also part of. When surge pricing activated during the Sydney hostage crisis, fares quadrupled as people tried to flee. One width question - "How does this behave in contexts we did not design for?" - might have caught it.


If you are building something purely for personal use, width matters less. The more important question then is the next one.


3. Foresight: What might this cost that won't appear on any budget?

This is the most personal question of the three - and the one least likely to get asked when a prototype is working and a weekend well spent feels like a win.


The HBR research identified three types of AI user among BCG consultants: builders who retained control of both what to do and how to do it; builders who drove the what but collaborated closely with AI on the how; and self-automators, who ceded control of both. Self-automators experienced what the researchers call "no-skilling": they developed neither domain expertise nor AI fluency. The direction of information flow told the story. Builders pushed context into the AI. Self-automators pulled finished outputs out of it.


Before automating anything involving human judgement, ask: which of those am I becoming? The dashboard that saves you an hour each morning - was that hour where you noticed things? Made connections? Kept your feel for the business? If so, the saving and the cost are not the same number.


A study in The Lancet found that after just three months of using an AI colonoscopy tool, doctors became measurably less adept at finding precancerous growths without it. The detection metric improved. The underlying capability quietly eroded. The tool worked perfectly. Nobody asked the foresight question.


The ‘spell’ - and how to stay conscious inside it

Here is what makes all of this personal rather than merely organisational.


Every time you watch something impressive emerge from a build session and move straight to deployment, you are not just saving time. You are practising a habit - the habit of letting the magic substitute for the thinking. Repeated often enough, that habit reshapes your judgement. Your threshold for what counts as enough thinking quietly lowers. The spell does not feel like diminishment. It feels like capability. It feels like a Saturday afternoon well spent.


The HBR researchers found that the antidote is not less AI. It is what they call strategic friction - a small but deliberate investment of your own thinking before the AI does its work. Not bureaucratic process. Not slowing the build. Just the discipline of depositing something of your own before you withdraw the AI's output. A rough hypothesis. A constraint the AI does not know about. Three observations from the front line that no model has seen. That investment, small as it is, keeps your absorptive capacity alive. It keeps you a builder rather than a free-rider on your own thinking.


The leaders most at risk are not the sceptics who never opened Claude. They are the enthusiasts - the ones who built something extraordinary last weekend and cannot wait to build the next thing. Which, if you have read this far, probably includes you. And me.


Build fast. Prototype boldly. Deposit your thinking first.


The competitive advantage of the AI era turns out to look remarkably like wisdom.


The good news is that wisdom, unlike technology, cannot be downloaded by a competitor overnight.

 

 
 