AI can’t be accountable


I’ve felt this in my job lately. Specs, PRDs, tickets and PRs are arriving faster than ever, and I’ve caught myself having an unexpected reaction: I don’t want to read all of it. Not because I don’t care, but because there’s too much of it, and my brain starts looking for a shortcut.

But what am I supposed to do here? Take an AI-generated spec and ask AI to review it? That feels a bit daft.

There’s no question about whether AI can generate enough output. It quite clearly can. But what happens when that output starts piling up faster than anyone really wants to read it?

The documents exist, the approvals happen, everyone nods along and looks great, and that’s brilliant… right up until something goes wrong and we all start looking around the room like it’s someone else’s problem.


The AI assembly line

AI makes everyone more productive, not just engineers. We’re getting to a point where it’s involved at nearly every stage of every process.

You give it a product idea, and it helps shape the PRD. Give it access to the codebase and it can turn that PRD into a tech spec. From there, it can generate extra-detailed tickets, sometimes even telling you which lines to change. Then you pick up the ticket and of course, it can implement the code, open the PR, write the description, and even reply to review comments.

In isolation, none of those steps are unreasonable - I do a lot of that myself. The problem isn’t the tools; it’s that the process hasn’t caught up with the volume. And that’s what makes me uneasy when I zoom out and look at the entire assembly line.

Each stage creates more output. More artefacts, more words, more decisions and more things that are supposed to be reviewed, understood and then signed off by a human.

But, unlike AI, I don’t have unlimited capacity to read it all and take it all in. The process looks good from the outside. Lovely. A well oiled machine. However, if every step is happening faster than people can absorb it, the neatness can hide the weakness.


When review becomes theatre

The byproduct of all of this is volume. If AI is helping produce all these artefacts, the amount of stuff moving through the process becomes a lot to keep up with - humans still need food, sleep, and the occasional break from staring at a screen full of bullet points. And that’s when the gates start getting weaker.

It’s not because people stop caring, not at all. It’s because the review burden becomes unrealistic. You end up reading enough to feel comfortable, and you assume (or hope) someone else read it properly.

That’s the bit that makes me uneasy. The checks may still exist, but it feels like they are becoming more ceremonial. The spec gets approved. The ticket gets accepted. The PR gets merged. Everyone has technically done their job. But if nobody has really challenged the process end to end, then what exactly did those approvals mean?

That’s where “AI-assisted” starts drifting into “human rubber-stamped.”


So, who owns it?

This is the question I keep thinking about.

If AI helped write the PRD, shape the spec, break the work into tickets, and implement the code, then who exactly owns the outcome when something goes wrong? Because it certainly isn’t the AI. It doesn’t carry the stress when production falls over. It just sits patiently, waiting for the next prompt.

The answer is that a human still owns it. In fact, several humans probably do - although good luck getting everyone in a room to agree on exactly which ones.

AI can help produce the output, but it cannot take responsibility for the consequences.

That sounds obvious, but the more AI is involved at each stage, the easier it becomes to feel like a passenger rather than an owner. The work didn’t really come from you - it came through you. And that distance makes accountability harder to locate when something goes wrong.

But approval is still approval. If you sign off something you didn’t properly understand, that doesn’t remove accountability.


Productivity at the cost of control

This is the silent trade-off. AI makes it easier to produce more, no doubt. And because all of that looks like progress, the speed alone can trick us into thinking that everything is working.

But speed and control are not the same thing. Moving fast doesn’t mean moving in the right direction. If a spec moves through review before anyone has really stress-tested it, you might build the wrong thing faster. If tickets are generated from a spec nobody fully read, bugs get baked in earlier. If PRs are merged because the volume made proper reviews impossible, incidents become more likely. The pipeline looks efficient, but the output might not be.

The same gates are technically in place, but their ability to provide control gets weaker. Review still happens, technically. Approval still happens, technically. But the quality of those checks starts slipping, because the volume has changed and human capacity has not.

That might be fine in some contexts. A rough internal tool that three people use is one thing. But a production system handling money, health, legal decisions, or anything else with real consequences is another. The higher the stakes, the less comfortable I feel with a workflow that produces more material than people can realistically check.


In the end…

The answer is not to stop using AI. That clearly isn’t happening, and honestly, I wouldn’t want it to. The gains are real. The help is real. The productivity is real.

But I do think we need to be honest about what comes with that.

AI can help at almost every stage. But what it can’t do is sit in the incident meeting and hold its hand up.

And if the volume it creates is quietly making human reviews worse, then the real risk is not that AI makes mistakes. It’s that we’re approving too much without understanding enough.

That’s the bit that feels dangerous to me. Not the generation, but the weakening of the gates.