What happens to our jobs given AI?

Juju and Tico enjoying retirement

Many friends have asked me what happens to our jobs given AI. The real answer is: nobody knows.

That said, we can all speculate, and I’m in a relatively good position to speculate given a somewhat broad understanding of Engineering in general and AI in particular, Moral and Political Philosophy, and Economics, including Behavioral Economics.

My take in summary: I think we’re in trouble.

Here are the details.

First things first: I’m a capitalist

I know not everybody is a capitalist, meaning (in summary) someone who believes capitalism is a reasonable way to organize society economically. But I am one. I’m sure that biases my opinions.

In particular, several American friends of mine seem to be socialists and/or anti-capitalists. Although I think some of them haven’t had much opportunity to learn or think deeply about the topic, many of them have.

Honestly, it’s not like “capitalism or socialism (or anything else, like anarchism) is better” is a settled question: our economic modeling capabilities are too limited to provide any signal that could survive scientific scrutiny, and moral philosophy hasn’t evolved enough to define what “better” even means without major controversy.

So, take that for what you will: I’m not that much more informed in my view than my friends are in theirs, even if we diverge, because in practice I believe we’re all grossly uninformed about the subject.

That said, there’s a major misconception about capitalism when people talk about AI and UBI: we don’t need to “move the economy.”

Capitalists don’t want to sell, they just want your work

There’s a myth that capitalists want to keep the economy moving by building products and paying workers so that people can, in turn, buy those products. That they want to keep workers paid because otherwise nobody would buy their products and they would go out of business.

This view, that workers need to be paid so they can buy from capitalists, is common, but I think it’s flawed.

What capitalists want is not to sell products. In fact, I think most capitalists, if they could make money without selling products (or adding value) to people, would choose to do just that (and many, in fact, have). What capitalists want is the money, not the sale, and the reason they want the money is to pay for labor.

In fact, if we lived in a more dystopian world where people worked for companies for free, and companies got to both build products and keep their money, companies would likely just spend that money with other companies, buying complementary services, and distribute the rest to shareholders so they could buy products and services.

In that world, none of the money goes to the laborer: companies just exchange money and goods among themselves, distributing some of it to shareholders, and this money mediates the transfer of real wealth (products and services).

To a large extent, this is not that different from how the world is organized today: most of the money, and wealth, is concentrated within companies and distributed to shareholders.

But there’s still a missing piece of the capitalist puzzle: many companies still spend 80%+ of their OpEx on people.

That, I suspect, will end with AI.

Thanks for your service, see you later

Companies are incentivized to reduce their spend on everything, including people, and that will continue to happen. If AI makes people redundant, companies will definitely take the deal — it’s what they’re incentivized to do.

That doesn’t mean that everything that is currently labor will get replaced by AI one for one. As human services get cheaper through AI, the demand for them will also increase: more software, more accounting, more art.

But unless that demand is infinite, assuming AI can do similar work to what people can do with a better margin, people will be more and more replaced by AI. Besides, the demand is market demand, driven by those holding money, and if that money is less distributed to people, they’ll no longer count as demand.

This means that if AI continues to increase in sophistication, it will replace some humans, who will then neither provide labor nor count as market demand, lowering demand further, until new equilibria are found.

If humans could work for any amount of money, market forces would eventually push the price of human labor down to AI’s cost. But if humans can’t compete at that price, the tendency is for humans to lose more and more of their current work.
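The dynamic above can be sketched as a toy model (all numbers hypothetical, not a forecast): each worker has a reservation wage, the least they can accept per task, and employers pay at most the AI’s cost per task. As AI cost falls, the share of workers who can still compete shrinks toward zero.

```python
# Toy model (hypothetical numbers): employers choose the cheaper of
# human labor or AI for each task. A worker stays employed only if
# their reservation wage is at or below the AI's per-task cost.

reservation_wages = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55]  # hypothetical

def employed_share(ai_cost, wages):
    """Share of workers willing (and able) to work at or below AI's cost."""
    return sum(w <= ai_cost for w in wages) / len(wages)

# As AI gets cheaper, the employed share falls toward zero.
for ai_cost in [60, 40, 25, 12, 8]:
    print(f"AI cost {ai_cost:>2}: {employed_share(ai_cost, reservation_wages):.0%} employed")
```

This is deliberately crude: it ignores rising demand for cheaper services and any wage floor from subsistence needs, but it captures the one-way pressure the paragraph describes.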

But will there be new work for humans to do?

Nope.

This is the end of the technological road

When the Luddites broke textile machines, they were concerned about being automated out of work, but at the time that just meant people shifted to other work that the machines couldn’t automate.

This AI revolution is different: AI will economically automate virtually everything humans can do. That is, it’ll be more economical to have AI, rather than a human, do almost everything humans can and will do.

In that world, there’s no “Welp, guess I’ll just play the guitar, paint pictures, or write open source software.” Virtually nobody will be economically incentivized to pay you, instead of an AI, to do a job.

There will probably be some exceptions for a while, of course: dentistry, cooking, babysitting, BJJ sparring. Robotics will evolve at a slower pace than digital AI, and maybe it won’t ever be more economical. Hard to say at this point.

But so much of our economy is now either mechanized or digital that I’m not sure we can make cooks and dentists out of everybody. Besides, who will have the money to pay us... other cooks and dentists?

Which brings me back to why I don’t think UBI is the answer to the AI question.

Why UBI is not really gonna work

Universal Basic Income is the name for the concept of giving people a fixed stipend, like a salary, unconditionally. In simplified terms, it’s like making money without having to work.

I think UBI would actually be a reasonable solution if it worked: the economy keeps moving much as it does now, except that instead of people working to get paid, AI does the work and people still get paid.

The problem is that UBI assumes companies will want to share their wealth with people, and there’s no reason to believe they’re motivated to do so, and many reasons to believe they’re not. So it’ll come down to who has power over whom, and if companies hold all the wealth, they’ll hold all the power.

I agree with the perception that technological progress erodes democracy: the bigger the levers that can be wielded by individuals, the less likely it is that there is power in numbers.

So in a world where the people demand access to wealth, and companies say “well, make me,” I think the result will come down to whoever wins the arm-wrestling match.

In a world where AI dominates work, is owned by companies, and people have no jobs, I don’t think it’s the people who’d win.

In the end, companies and money are just abstractions over people: who owns the AIs, who owns the land, who owns the wealth, and who doesn’t.

This also means that the attention economy is, probably, coming to an end.

The attention economy is a labor economy

Why are companies like Google and Meta so valuable? Well, that’s easy: because they hold our attention.

But why is our attention so valuable? Well, because we hold power. In this case... mostly purchasing power (and also voting power).

In a world where people can’t purchase goods (erosion of work) and can’t vote (erosion of democracy), there’s no value to our attention.

If there is power in numbers, and people can revolt, then it’s still valuable to influence people’s behavior. But if the technological lever is big enough that there’s no real concern people would win the arm-wrestling match, then there’s nothing to worry about, as we see in authoritarian countries that control their people through force rather than propaganda.

Because labor is used to produce value, and companies acquire money to get labor, once you’re no longer offering labor, there’s really no economic reason to share wealth with you under our current capitalist economic systems.

We can, of course, already see this happening throughout the world with the people who are “economically excluded” — this is just likely gonna happen to way more people if AI economically excludes them.

So how will humanity get out of this, Dui?

I don’t know, really.

I think we’re mostly screwed, honestly.

Well, then what should I do??

I think there are a few principles that are outlined in the above.

1) Become a capitalist:

Labor is becoming less and less valuable. Owning stuff will, in turn, most likely become more and more valuable.

So own stuff.

2) Invest in AI, and divest from attention economies:

Many attention economy companies, like Meta and Google, are also investing in AI. I think their future value will basically depend on how well they make this transition.

Fields that are dependencies of AI, like energy, chips, and data centers, will likely continue to appreciate enormously in value.

3) Build bigger value levers for your labor:

In a world where technological innovation is very high, the gap between the haves and have-nots of competence is wide.

Do what you need to do to be on the good side of this competence gap.

This probably means finding ways of being much better at your work than competitors who aren’t leveraging AI.

4) Uncommoditize yourself:

As labor becomes more scarce and competence gaps become wider, the commoditization of valuable labor will end.

As with companies, your work is more likely to keep being valued if it’s unique than if it’s undifferentiated.

This also probably means investing a lot in personal marketing.

Sorry.

Well that was depressing, Dui

Yeah, I know.

If it makes matters worse, and it does, I didn’t even talk about the crisis of meaning that we humans will face if AI does all the work. That is covered (a bit convolutedly) in Nick Bostrom’s book Deep Utopia. It’s a great but not very encouraging read about the best-case scenario.

And I didn’t even touch on the topic of existential risk.

So, I guess, get cracking on the above suggestions.

And good luck.