Matt Shumer says the sky is on fire. What should you do about it?

A sober response to AI hype

There is a certain kind of technological story that recurs every few years. Someone who works close to the frontier has an experience that feels revelatory, a moment where the future suddenly compresses into the present. And, thrilled or frightened or both, they try to warn the rest of us. The article “Something big is happening” by Matt Shumer is written in that spirit. It’s urgent, confident, and strongly framed: AI is not only arriving faster than people expect, it’s already reshaping Matt’s job. What happened to him, he argues, is about to happen to everyone else. And soon.

There's truth here. The progress has been extraordinary. Matt describes software built from plain English descriptions, arriving polished and tested. AI making choices that feel like judgment, like taste - the supposedly human element we thought machines couldn't touch. It's powerful testimony. Some examples are genuinely impressive.

But when an article radiates this much intensity, we need to step back and ask a more useful question. Not "Is this exciting or terrifying?" but "So what?" What does this actually mean for how we work and what leaders should do today?

The Adoption Gap

First, separate emotion from practicality. People embedded in technological ecosystems experience change differently from those running businesses, managing teams, or dealing with the messy human reality of organisations. Innovation happens in bursts. Adoption crawls. As of July 2025, a Which? survey estimated that 21 million UK users still ran Windows 10 – roughly a third of the population clinging to an operating system that Windows 11 superseded back in 2021. LLMs have only been in widespread use since ChatGPT launched in November 2022. The gap between frontier capability and widespread adoption is enormous.

Second, distinguish anecdote from trend. Matt's workflow transformed overnight. That might be entirely accurate for him. But one compelling case isn't evidence of widespread capability, reliability, or readiness. In real organisations, impressive demos mean little if the system breaks under compliance pressure, fails quietly in edge cases, or can't be audited. It's the low-code story all over again: brilliant for quick proofs of concept, like pulling teeth the moment you stray beyond the standard use cases.

The Trust Bottleneck

Here's what determines AI adoption speed more than anything else: trust.

Trust is the invisible gravity holding back deployment, no matter how astonishing frontier capabilities become. The problem isn't that AI gets things wrong - humans do that constantly. The problem is how AI fails.

Humans display visible patterns of competence. You can judge a draft quickly: tone, structure, logic, attention to detail. These offer subconscious clues about whether the thinking holds together. Find a few well-supported points early, and you trust the rest was produced with similar care.

AI shatters this contract. It doesn't fail through laziness. It produces 99% beautiful, polished output... then buries catastrophic errors in the remaining 1%. An invented fact. A legal precedent that never existed. A flipped sign in a numbers table. It lies confidently in small, unpredictable places, destroying the traditional cues humans use to build trust.

A system producing high-quality work 99% of the time and made-up nonsense 1% of the time is actually more dangerous than consistent mediocrity. When everything looks impressive, finding the one disastrous mistake becomes prohibitively expensive. Until verification gets easier or predictable guardrails emerge, trust, not capability, remains the limiting factor.
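
To see why, consider a minimal sketch with made-up numbers (the parameters are illustrative assumptions, not measurements from any real system): a reviewer spot-checks ten of the hundred claims in a document whose rare errors are hidden uniformly at random.

```python
import random

def miss_rate(n_claims=100, error_rate=0.01, k_checked=10, trials=100_000):
    """Fraction of flawed documents that sail through a random spot-check."""
    flawed = missed = 0
    for _ in range(trials):
        # Each claim independently hides an error with probability error_rate.
        errors = {i for i in range(n_claims) if random.random() < error_rate}
        if not errors:
            continue  # a clean document: nothing to miss
        flawed += 1
        checked = set(random.sample(range(n_claims), k_checked))
        if not checked & errors:
            missed += 1
    return missed / flawed

# Spot-checking 10 of 100 claims lets roughly 85% of flawed documents through.
print(f"flawed documents missed: {miss_rate():.0%}")
```

Driving that miss rate toward zero means checking nearly every claim - at which point the time saved by generation is spent again on verification.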

The S-Curve Reality

Matt’s article argues for an "intelligence explosion": each model builds the next, accelerating toward systems "smarter than almost all humans" by 2026-2027. It's seductive because acceleration is visibly happening.

But technology history teaches caution. Almost every technology's growth follows a sigmoid curve. Early growth is slow, then rapid and seemingly exponential, then constraints emerge: data limits, compute ceilings, reliability bottlenecks, regulatory friction, human oversight requirements. The curve flattens. Every field that looked exponential eventually revealed its S-curve. AI might be different. History suggests otherwise.
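
To make that concrete, here is a toy comparison (arbitrary units, made-up parameters, purely illustrative): in its early phase, a logistic S-curve is numerically almost indistinguishable from pure exponential growth, and only later does the ceiling reveal itself.

```python
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """Sigmoid growth: exponential-looking at first, flattening at a ceiling."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def pure_exponential(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """The exponential that matches the sigmoid's early behaviour."""
    return ceiling * math.exp(rate * (t - midpoint))

for t in (2, 4, 6, 12, 16):
    print(f"t={t:>2}  sigmoid={logistic(t):8.2f}  exponential={pure_exponential(t):10.2f}")
```

Up to around t=6 the two curves agree to within a couple of percent; by t=16 they differ by a factor of 400. An observer standing inside the steep part of the curve has no way to tell which world they're in until the constraints bite.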

Consider self-driving cars. The core technology was nearly there a decade ago. A Guardian article in 2015 reported predictions that we'd all be permanent backseat drivers by 2020. Today? Some cities have automated taxis, but adoption remains glacial. Taxi drivers, delivery drivers and truckers have not lost their jobs overnight. Not because the technology doesn't exist, but because trust takes time to build and regulation takes time to write.

What Actually Matters

Focus on what's here now, not speculative futures. Today's AI capabilities are extremely useful - not magical, not autonomous, not job-destroying in apocalyptic ways, but genuinely valuable as productivity multipliers. AI excels at drafting documents, producing summaries, accelerating research, generating first-pass analysis, writing boilerplate code, refactoring, and transforming raw information into structured formats.

But AI works best with humans in the loop. The powerful pattern isn't "AI does your job." It's "You + AI": you direct, review, and decide; AI assists at extraordinary speed. This is where adoption takes off, not replacing roles wholesale, but transforming workflows.

Practical Steps Forward

What should organisations do in the next 12-24 months? Neither panic nor complacency. Structured curiosity.

Start where AI's weaknesses don't create large risk: document drafting, summarisation, boilerplate code, meeting notes, refining presentations. Build human-in-the-loop review checkpoints. Create internal guidelines. Train staff in verifying AI output, not just using it. Shift roles toward judgment-heavy work, away from mechanical drafting. Reward responsible adoption. Remove the stigma from admitting AI assistance.
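
As one concrete illustration, a review checkpoint can be as simple as a rule that nothing ships without a named human approver. Here is a minimal sketch; `generate_draft`, `human_signoff`, `publish`, and the reviewer name are all hypothetical placeholders standing in for whatever model call and workflow tooling you actually use, not a real API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    task: str
    content: str
    reviewer: Optional[str] = None
    approved: bool = False

def generate_draft(task: str) -> Draft:
    """Placeholder for the AI step: the model call would go here."""
    return Draft(task=task, content=f"[AI draft for: {task}]")

def human_signoff(draft: Draft, reviewer: str, approved: bool) -> Draft:
    """The checkpoint: an accountable human reviews and decides."""
    draft.reviewer = reviewer
    draft.approved = approved
    return draft

def publish(draft: Draft) -> None:
    if not (draft.approved and draft.reviewer):
        raise PermissionError("No human sign-off recorded; refusing to publish.")
    print(f"Published '{draft.task}', approved by {draft.reviewer}")

draft = generate_draft("Summarise Q3 supplier contracts")
publish(human_signoff(draft, reviewer="j.smith", approved=True))
```

The point isn't the code itself but the audit trail: every output carries the name of the human who vouched for it.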

The winners won't be those who automate fastest. They'll be the ones who understand the difference between capability and reliability, between flashy demos and safe workflows, between frontier models and auditable processes. They'll amplify human expertise rather than replace it prematurely. They'll build trust gradually.

Stay Calm

This technology is transformative, but transformation comes unevenly. Some tasks will vanish, others evolve, many new ones emerge. The future isn't pre-written by model releases. It's shaped by how intelligent people choose to use available tools.

Don't get swept up in the hype, but don't ignore the momentum either. Look beyond sensationalism, recognise limits and strengths, and focus on what you can build, verify, trust, and scale today. Do that, and you won't just navigate the coming changes - you'll help shape them.

The sky isn't on fire. But the weather is definitely changing.

Read more about the Refractis approach to sustainable AI adoption: Stop the trAIn, I want to get off! — Refractis
