Listen. I’ve been in the trenches with these AI coding tools for six weeks now, building an accessibility issue dashboard at my day job, and I need to tell you something before you make the same mistake I almost made.
The tools are fast. Obscenely, dangerously fast. And that speed is a loaded weapon pointed directly at your own foot.
When you speed up code output in this environment, you are speeding up the rate at which you build the wrong thing. The guessing doesn't go away; it just ships sooner.
Andrew Murphy nailed it: speed up code output in a broken process and you've automated the guessing. You aren't shipping faster; you're failing faster, with more confidence and better commit messages.
My first reports were done in a week. Done — as in, measuring what I wanted, displaying correctly, technically functional. A week. I felt like a genius. I felt like the future. I felt like I should probably buy a better chair because clearly I was going to be sitting at this desk changing the world.
I did not deploy.
Instead, I did the unglamorous thing. The thing that doesn’t get written up in breathless Substack posts about the AI productivity revolution. I wrote up the designs. I explained how these “done” tools actually worked. I showed them to people. Real people — not the helpful imaginary users I’d been designing for inside my own skull.
One design became five. Not “show me how fast accessibility issues are getting fixed.” That was naive. That was me designing for the ghost of a user. What it actually needed to be:
- Theming to cut the glare in the default report UI — because real people work in real environments with real eyes
- Pattern settings on the graphs for colorblind users — because a legend is useless if the colors all look the same to you
- Priority auditing for issues — because fix velocity without priority context is just noise dressed up as data
- Training gap identification to surface why certain issue types keep recurring — that’s the actually interesting question
- Full keyboard and screen reader interaction with the graphs — which meant going back into the guts of the open source charting tools and having a very unpleasant afternoon
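Those last items sound abstract until you see what they mean in markup. Here's a minimal sketch of the general technique, not the dashboard's actual code: the function name, pattern ids, and data shape are all hypothetical, and a real chart library would generate the matching SVG `<pattern>` definitions in its `<defs>`. The point is that each bar gets a texture fill (so color is never the only signal) plus a focusable element with its own accessible name (so keyboard and screen reader users can step through the data).

```typescript
// Hypothetical sketch: emit bar-chart rects with pattern fills and
// per-bar accessible names, instead of relying on color alone.

type Bar = { label: string; value: number };

// Assumed pattern ids; the corresponding <pattern> elements would
// live in the SVG's <defs> section (omitted here).
const PATTERNS = ["diagonal-stripes", "dots", "crosshatch"];

function accessibleBars(bars: Bar[], maxValue: number): string {
  return bars
    .map((bar, i) => {
      const height = Math.round((bar.value / maxValue) * 100);
      const pattern = PATTERNS[i % PATTERNS.length];
      // Each bar is focusable (tabindex) and carries its own
      // accessible name (aria-label), so assistive tech can
      // announce "Contrast: 12 issues" rather than "rectangle".
      return (
        `<rect x="${i * 30}" y="${100 - height}" width="24" height="${height}" ` +
        `fill="url(#${pattern})" tabindex="0" role="img" ` +
        `aria-label="${bar.label}: ${bar.value} issues" />`
      );
    })
    .join("\n");
}
```

That's the shape of the "unpleasant afternoon" work: it isn't clever, it's just absent from what the tools generate by default, because no imaginary user ever asked for it.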
The first graphs weren’t trash. But they were a skeleton. Context-free. Orphaned from the actual review and planning processes people were living inside every day.
Here’s the point, and I’ll make it plainly because I want it to land: The speed of generation is irrelevant if you haven’t done that work first — you’ve just got a faster car with no map, on a road you invented, going somewhere nobody asked to go.
You still have to talk to actual humans. You still have to write down what you’re building so other people can tell you you’re wrong. You still have to chase your assumptions into dark corners and interrogate them until they confess.
Don’t mistake fluency with a hammer for knowing what to build. You’ll build the wrong feature at warp speed, ship it like a proud idiot, watch it crater, and then sit in a retro while someone says “we really need to talk to users more” and every single person in the room nods like a Churchill dog on a dashboard. And then nothing changes. It never changes.
Go talk to someone.
An actual person.
Today.