3 AI Competencies

Knowing how to code by hand won’t prepare you to use AI-augmented coding, the same way that writing with a pen wouldn’t prepare you to use a typewriter.

When someone says they “know AI,” what exactly do they mean?

I think there are 3 distinct AI competencies:

AI Researcher: Creates or, more often, tunes models. Understands math, neural networks, deep learning, and generative model architectures.

AI Engineer: Builds applications that use AI. Sets up application and API infrastructure and integrations, builds RAG pipelines, does prompt engineering, evaluates models, and tracks usage and cost.

AI Operator: Uses AI to enhance their daily work. Runs agents, builds workflows, and leverages tools to achieve real-world outcomes.

With that out of the way, here are my thoughts on these competencies, in no particular order.

AI Operators don’t know what they’re doing

AI Research is new (years), and AI Engineering is really new (months). This means that AI Operation is absurdly new.

We’re used to a world where tools have reached general availability and gone through some level of iteration before we adopt them. AI is NOT there yet.

I think this is the best way to understand the current issue with software engineers letting Claude Code run wild and committing whatever it spills out without reading it: we don’t know how to properly operate it yet.

When typewriters first hit the market, one of the obvious problems — crazy in hindsight — was that nobody knew how to type.

We had to literally invent typewriters before we could invent typing.

There’s even a theory that QWERTY was designed so salespeople could demo the word “typewriter” quickly by just using the top row of keys. Had you noticed that before?

In any case, AI Operation is in the same place today. Right now, most of us are just mashing prompts, stringing tools together, and hoping (or assuming) good things will come out.

They won’t.

Knowing how to write documents won’t prepare you to use AI tools, and knowing how to code by hand won’t prepare you to use AI-augmented coding, the same way that writing with a pen wouldn’t prepare you to use a typewriter.

Assuming we can use AI tools without competence just because they spill out somewhat coherent content is like assuming we know how to use a typewriter just because the typewriter’s output is more legible than our handwriting.

Which gets me to my next point: not only do we not know we’re poor AI Operators, but AI Engineers are just making the problem worse.

AI Engineers are digging the Operators’ hole

Instead of making the tools easier to use and training AI Operators on how to use them, AI Engineers are making our tools more powerful.

It’s like saying: “Oh, you’re having trouble with this handgun? Here, take an assault rifle. Try now.”

Sure, that’s one way to “enhance your users’ ability.” But while more power makes simple tasks easier, it also makes the blast radius of incompetence infinitely wider.

And right now, we’re spraying bullets everywhere — with no signs of stopping.

In a world where people can’t use simple tools, “empowering” everyone with complex, more powerful tools is exactly the wrong thing to do.

The implicit assumption here, of course, which I’ll make explicit now, is that AI is smart enough that it’ll make up for the Operator’s incompetence. It’s not.

Because it’s not AI that needs to be smart enough — “AI” is plenty smart. What we don’t have enough of is maturity in our AI Research, AI Engineering, and baseline AI Operator competence to make simple AI tasks trivial.

But if the situation is so dire, then why are we insisting on this mistake?

Well, that’s easy:

Assault rifles sell.

Companies are good at buying but bad at training

If your competitors are buying the most powerful AI tool, it’s hard to justify buying the simpler, more limited one. After all, AI is going to change how business is conducted everywhere, forever, so why settle for less than the best?

So if you’re an AI Engineer building AI tools, you’re only selling if your AI tool is the most powerful. Ease of use is not currently a competitive advantage, in part because it’s so hard to measure. In fact, we don’t even know what the difference is between easy-to-use AI and hard-to-use AI. Many assume more powerful tools are just all around better, including easier to use, since AI will just do more of the work for us.

In a way, I imagine this is not unlike when computers first came out and companies put a computer on each employee’s desk and left them to figure it out. Progress!

Honestly, the truth is that humanity hasn’t quite figured out how to improve people’s competencies systematically. We do a great job anecdotally, in sports, music, and some other activities, but our institutionalized education systems, such as schools and academia, are quite inconsistent and inefficient. And so are companies.

It’s not so much that companies are bad at training AI Operators — they’re bad at training, period.

Ironically, I think one of the biggest reasons for this inefficiency is that learning is not really economical (despite what “they” say). Training is costly, and companies have a hard time investing a large portion of their budget in training instead of in areas that directly add to the company’s bottom line, such as having employees do the actual work.

Most of the time, of course, it’s more economical for the company to just replace an existing employee with a new hire from the market who has the competencies it needs. But that’s impossible for new technologies like AI, since there’s no talent pool to hire from.

When Apple came out with the App Store and the iPhone went from being an iPod that makes calls to a computer that fits in your pocket, suddenly we needed iPhone App developers. How many did we have then? Nearly zero. A few Objective-C developers for Mac OS X, perhaps? There could be no iPhone App developers before the App Store.

That same chicken-and-egg dynamic now exists between AI Operators and AI tools. Agentic tasks, MCP integrations, agentic coding, AI workflow integrations: these are skills that couldn’t exist before the tools were created. And the tools are being created and updated literally every week, creating even more learning pressure.

This is like the App Store all over again. Companies will have to choose: either grow talent in-house or wait.

And they can’t wait because AI is much, much, much faster than the mobile revolution. It’s not even close.

You’re already behind!!

I keep having conversations with people who say, “Yeah, I need to learn this, AI is the future.”

And I keep telling them the same thing: AI is not the future; it’s the present!

You missed the first train, so pick the next one.

Maybe AI competence will become so accessible that the gap between the “haves” and “have-nots” shrinks. But I’m not betting on it. The gap is expanding, and I believe it will keep expanding.

Yes, compared to today, it’ll be easier: easier to operate AI, easier to build AI apps, easier to update AI models.

But by the time that future arrives, the cutting edge will already be far ahead.

And importantly: power in the world will only continue to consolidate around this.

People won’t be replaced by AI — they’ll be replaced by better AI Operators. And the better we get at AI Engineering and the stronger our AI Research is, the more powerful these tools will be.

Only a bad craftsman blames their tools. There’s much leverage strong AI Operators can achieve today by mastering the existing AI toolset, and this competitive advantage will only continue to increase.

This AI expertise is not automatic, or even easy to acquire. It needs to be earned and constantly refreshed, as illustrated by everyone trying and failing to use high-leverage AI tools with little preparation.

AI is not creating a level playing field at all — it’s raising the bar for what’s possible.

Yes, the incompetent will get better.

But the competent will become exponentially better.