What's common between AI Copilots and Outsourcing
Is AI the Anonymous Indian that decreases your work or increases it?
Now that Claude can use a computer, worries about AI taking jobs have suddenly increased. As a counter to that sentiment, Offshoring and AI Agents is a well-written, detailed article pointing out the ways in which AI copilots can cause problems for complex, long-lived codebases. Similar problems probably exist in other fields too, but this is the area I am familiar with, and I agree with these opinions:
It starts by comparing AI copilots to interns:
A stagiaire is the restaurant equivalent of an intern, raw and wide-eyed. With any luck, they’d be eager to learn, and with even more luck, they’d actually be capable of working somewhat independently in the kitchen. So I’d take these stagiaires, and find something useful for them to do, like slicing onions.
On one level, what slicing onions involves is:
Take onions, cut into 1/16” slices.
In an ideal world, the stagiaire will take my bag of onions, go away, and return with a pile of sliced onions, saving me the 15 minutes it would have taken to do it myself. In reality though, this apparently simple task is a minefield. A few things that I’ve seen go wrong:
Onions got sliced into the wrong shape (rings instead of sticks, half rings instead of full rings).
Huge mess in prep station, onion detritus everywhere, other cooks pissed.
Stagiaire used knife instead of mandoline, returned with ¼” slices with 100% variance in thickness, not the nice, even 1/16” slices.
Stagiaire used mandoline instead of knife, cut off fingertips, blood everywhere.
Stagiaire used finger guard on mandoline, tiny plastic chips in onions (but no blood).
Stagiaire used meat slicer, failed to clean it (or tell anyone).
People who’ve used AI extensively, especially for more complex tasks, will be familiar with this problem. To get the AI to do the right thing, your specification of what needs to be done has to get more and more precise and detailed.
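The same minefield exists in code. As a minimal sketch (the “deduplicate the user list” task and both function names are hypothetical), here are two implementations that each satisfy a vague one-line spec yet return different results, the programming equivalent of rings versus sticks:

```python
# Hypothetical task: "deduplicate the user list". Both functions
# satisfy that one-line spec, yet they disagree with each other.

def dedupe_v1(users: list[str]) -> list[str]:
    # Drops duplicates, but scrambles the original order
    # (rings instead of sticks).
    return list(set(users))

def dedupe_v2(users: list[str]) -> list[str]:
    # Preserves order, but treats "Alice" and "alice" as different
    # users, a detail the one-line spec never mentioned.
    seen: set[str] = set()
    result: list[str] = []
    for user in users:
        if user not in seen:
            seen.add(user)
            result.append(user)
    return result

# The spec that rules out both surprises ends up longer than the
# original task: "Deduplicate case-insensitively, keep the first
# occurrence, preserve order, and don't mutate the input."
```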
The same problem, of course, exists with outsourced programming work. So, in a world where AI copilots do a lot of the coding work, what becomes less important and what becomes more important?
All three of these helpers – the stagiaire, the offshore team, and the coding assistant – have shifted the nature of the work: from doing to specifying, from building to teaching, from creating to verifying.
Specifying, teaching, and verifying increase in importance. And many good programmers are not great at these skills.
With complex, long-lived codebases, five subtle problems occur, and they will persist even if we fix the problem of AI hallucinations and subtle bugs. Even with non-buggy code, if AI copilots have written most of your code, knowing what’s in your code becomes more difficult because of:
Volume of code: copilots don’t usually write concise code and won’t refactor by default (see the sketch after this list).
Inexperienced programmers: more code will be written by people less familiar with what they’re writing, and management will expect people to use AI to get something that works even if they don’t understand it.
Reading code is harder than writing code: AI will primarily be used for the latter; by default, it doesn’t help you do the former.
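To make the volume problem concrete, here is a hedged sketch (the function and field names are invented for illustration): two functions that compute the same total, one in the sprawling style copilots tend to generate, and one the refactor they won’t volunteer.

```python
# Hypothetical illustration of the "volume of code" problem.

def total_price_verbose(items):
    # Typical generated style: temporary variables and defensive
    # checks that restate the obvious.
    total = 0.0
    if items is not None:
        for item in items:
            price = item["price"]
            quantity = item["quantity"]
            subtotal = price * quantity
            total = total + subtotal
    return total

def total_price(items):
    # The same computation after the refactor a reviewer would have
    # to ask for explicitly.
    return sum((i["price"] * i["quantity"] for i in items or []), 0.0)
```

Multiply that stylistic bloat across a whole codebase and the sheer volume becomes a comprehension tax of its own.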
An 80% accurate system is safer than a 99% accurate system. People using an 80% accurate system instinctively know they should check for errors and verify everything carefully. People using a 99% accurate system simply assume that it is 100% accurate, and so, when it blows up, it blows up spectacularly.
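A toy simulation makes this vigilance effect concrete. The review rates below are assumptions for illustration, not measurements: reviewers of a system they know is flaky check almost everything, while reviewers of a system they believe is near-perfect barely look.

```python
import random

# Assumed reviewer behavior (not data): 99% of errors caught when the
# system is distrusted, only 20% caught when it is trusted.

def undetected_errors(accuracy: float, review_rate: float,
                      tasks: int = 100_000, seed: int = 0) -> int:
    """Count errors that slip past both the AI and the human reviewer."""
    rng = random.Random(seed)
    slipped = 0
    for _ in range(tasks):
        if rng.random() < accuracy:
            continue  # the AI got this one right
        if rng.random() < review_rate:
            continue  # the reviewer caught the error
        slipped += 1  # the error ships
    return slipped

print(undetected_errors(accuracy=0.80, review_rate=0.99))  # roughly 200 ship
print(undetected_errors(accuracy=0.99, review_rate=0.20))  # roughly 800 ship
```

Under these assumptions, the “worse” system ships a quarter as many errors: perceived reliability, not raw accuracy, determines how much scrutiny the output gets.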
Where will the expert programmers come from? All the low-level drudge work currently done by interns and juniors will be done by AI copilots. But doing that drudge work is the training your brain needs to become an expert.
Of course, it is important to remember that AI is improving very fast and we have only seen the tip of the iceberg so far. It is possible that in a few years (or months?) AI will improve to a point where these problems go away completely. But maybe not.
So read the whole article. It is a good addition to your mental model of what the future might or might not be like.