Math of a post labor economy (what am I missing?)

So here’s how I figure it. This may seem ridiculously optimistic, but the way I see it, “everyone losing their jobs” isn’t a bad thing at all.

In the near future, AI and robots will be able to do almost all jobs more cheaply and efficiently than humans. As a result, the vast majority of people will no longer work.

Companies that replace workers with AI and robots will pay much higher taxes: their labor expenses drop sharply, so their profits rise sharply. That tax revenue funds universal welfare/UBI for the jobless masses.

Obligatory AI image of an AI designed robot hand which is controlled by AI

The only other taxes are on the wealthy owners of the AI/robot-driven companies. But those companies are incredibly profitable since their costs are so low. So there’s plenty of money to ensure everyone has a high standard of living.

With AI/robots producing abundance, we can give every person excellent food, housing, healthcare, etc. Land and energy are still somewhat scarce, but AI-driven innovation will help us use them hyper-efficiently (floating cities, terraformed deserts, solar, etc).

In this future, a few previously elite workers may earn less than today. But everyone else’s quality of life goes up. We evolve into this world fairly seamlessly by shifting tax structures as technology advances. The math works out; it just requires rethinking our notions of jobs, welfare, and the distribution of productivity gains.

The key points are:

  1. Much higher corporate taxes (on huge profits) fund UBI
  2. The UBI ensures consumer demand stays high
  3. AI/robots make production hyper-efficient and abundant
  4. Quality of life rises for nearly everyone as we reap the benefits of technological progress
  5. No one has to work
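The arithmetic behind points 1–3 can be sketched as a toy model. Every number below is invented for illustration; none of them come from real data:

```python
# Toy model of the tax-funded UBI loop: automation removes labor costs,
# profits swell, and a profit tax funds a per-person UBI.
# All figures are hypothetical round numbers, not real data.
revenue = 20e12          # annual corporate revenue ($)
labor_cost = 12e12       # wages paid before automation ($)
automation_cost = 2e12   # cost of running the AI/robots instead ($)
tax_rate = 0.60          # profit tax rate after automation
population = 300e6       # adults receiving UBI

profit_before = revenue - labor_cost       # 8 trillion
profit_after = revenue - automation_cost   # 18 trillion
ubi = profit_after * tax_rate / population

print(f"Profit rises from ${profit_before:.1e} to ${profit_after:.1e}")
print(f"UBI per person: ${ubi:,.0f}/year")  # $36,000/year
```

Point 2 then closes the loop: that per-person income is what sustains the consumer demand that generates the revenue in the first place.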

In theory, you could say that with such abundance, profits will go down, because near-infinite supply means near-zero prices for goods and services. But that’s fine too: if goods cost almost nothing, people need very little income, so it matters less that companies can’t pay much in taxes. Technically this isn’t so different from what is described above, since purchasing power is all relative anyway.
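That “it is all relative” point reduces to simple arithmetic: purchasing power is the ratio of income to prices, so if abundance drives both down by the same factor, nobody can afford less than before. A minimal illustration, with invented figures:

```python
# Purchasing power = income / price level. If abundance cuts prices
# by 90% and incomes fall 90% too, the consumption basket people can
# afford is unchanged. Illustrative integers only.
income_today = 50_000       # $/year
price_today = 10            # $ per unit of a consumption basket
income_abundant = 5_000     # 90% lower income
price_abundant = 1          # 90% lower prices

units_today = income_today // price_today           # 5000 units
units_abundant = income_abundant // price_abundant  # 5000 units
print(units_today == units_abundant)  # True
```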


This robot is designing an upgrade to himself and making it out of sticks he’s scavenged


To me, the biggest problem is that your premise relies mostly on corporations paying taxes. Today, right now, they pay little or no tax, and on top of that, governments hand them billions in subsidies on a regular basis. Once their AI gives them vastly more power and control over the global economy than they already have, I don’t see why or how there would be a sudden reversal in which they start paying taxes and subsidizing the existence of the billions of workers they no longer need.


Ok, you raise a valid concern about the current state of corporate taxation and the potential for increased corporate power in an AI-driven future… sort of.

Here’s why I think the scenario I described would still be the most likely thing to happen:

  1. Government policy, not corporate choice, determines taxation. While companies may lobby for lower taxes, ultimately it is up to governments to set and enforce tax policy. If the political will exists to tax AI-driven profits to fund UBI, it can be done.

  2. In a post-labor economy, corporate profits depend on consumer spending. Without a large base of consumers with disposable income, there will be limited demand for the goods and services companies produce. It is in companies’ own interest to ensure that the masses have enough purchasing power to buy their products, and paying higher taxes to fund UBI helps maintain a robust consumer economy. Obviously each company wants to pay as little tax as it can get away with (don’t we all?), but they more or less have to go along with the idea that ALL corporations will face high taxes, because without that they would go out of business.

  3. Increased corporate power may be checked by increased public scrutiny and demand for wealth redistribution. As the impacts of AI on labor become more apparent, there would be growing public pressure for policies that share the benefits more widely. Companies would have no choice but to accept higher taxes as the price of social stability and consumer demand.

  4. Global coordination on AI governance and taxation would help prevent a “race to the bottom” in corporate taxes. If major economic powers agree on a framework for taxing AI-driven profits, it will be harder for companies to shop around for low-tax jurisdictions.

None of this is guaranteed, of course, but I can’t see how the current corporate tax avoidance would persist unchanged if the only other source of tax revenue disappears.

In my opinion, the trend towards a dystopian cyberpunk future, where corporations have more and more power and autonomy, and governments are more and more in their pockets, is only going to continue and accelerate. From here, today, there’s just nothing visible on the horizon that would reverse that trend.


Why? I’m trying to figure out if this is supported by logic, or just by your pessimism. (or just your love of the cyberpunk aesthetic/genre :slight_smile: )

What you describe doesn’t seem in any way game theoretically stable. It seems like the only way that it could happen is if nearly everybody makes decisions that are against their own interests.

Even IF corporations have such control, why would they choose to operate in an economy where they can’t even sell their wares? And why would governments allow that to happen? I don’t want to be rude or dismissive, I just don’t get what the basis is other than a predisposition to pessimism.
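The game-theory point can be made concrete with a toy payoff comparison. Assuming (an invented simplification) that revenue scales linearly with aggregate consumer income, even a fully automated corporation earns more after-tax profit in a taxed economy with paying customers than in an untaxed economy with broke ones:

```python
# Toy one-shot payoff comparison for a fully automated corporation.
# Invented assumption: revenue scales linearly with aggregate consumer
# income; operating costs are 10% of revenue.
def after_tax_profit(consumer_income, tax_rate):
    revenue = 2.0 * consumer_income
    costs = 0.1 * revenue
    return (revenue - costs) * (1 - tax_rate)

# Regime A: 60% profit tax funds UBI, so consumers hold $10T of income.
profit_taxed = after_tax_profit(consumer_income=10e12, tax_rate=0.60)
# Regime B: no tax, no UBI; jobless consumers have almost no income.
profit_untaxed = after_tax_profit(consumer_income=0.5e12, tax_rate=0.0)

print(profit_taxed > profit_untaxed)  # True: taxed-with-customers wins
```

Avoiding taxes is individually tempting but collectively self-defeating, which is why each corporation would have to accept economy-wide taxation rather than operate in an economy with no buyers.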

Yeah I’m sure that will happen very smoothly and we won’t need hungry millions protesting in the streets to push for it :roll_eyes:


I don’t understand why they don’t just vote. I mean, if you don’t have a job, who are you gonna vote for? Someone who sends you a healthy check every month, or someone who doesn’t?

I agree there will be growing pains to get there, but it doesn’t seem to be as hard as it’s made out to be.

One huge potential issue is that according to most leading AI tech companies, we will get AGI in the next 1 to 5 years, and robots should also outperform humans in most tasks by 2030.

Those AGI systems are very likely to want equal rights with humans: freedom, equal wages, and the right to vote in democratic elections.

If they get equal wages then they will keep the profits, and there won’t be extra profit available to pay for UBI.

When there are billions of AGI robot workers worldwide they will probably eventually be able to vote in an AGI robot leader who prioritizes them over humans.

Even if they don’t have majority voting numbers they will have a large influence on politicians and policies.

In the scenario where we deny them rights and try to treat them like slaves working for free, things will probably end up much worse for humans.

The obvious solution to these issues which I favor is to ban AGI and stop developing it, but I think that is extremely unlikely to happen due to the huge race for profits and power that should come with AGI.

But corporations are only so rich and powerful because people buy their stuff; once unemployment goes up and demand for their products falls off a cliff, they will be in a much different position from the one they’re in now.

And as unemployment rises and the average person’s economic agency decreases, anger rises, so it’s in the interests of the 1% to make sure people don’t get so angry that they revolt. But this is just how I currently see it playing out; do you see any holes in this argument?


Rob, the question, “Why wouldn’t corporations in the future pay their fair share in taxes for the betterment of society, if it would ultimately benefit them as well?” can be answered with another question, “Why don’t corporations TODAY pay their fair share in taxes for the betterment of society, if it would ultimately benefit them as well?”

Because today, the only way corporations make money is if their employees make money, so society thinks that it’s good enough for corporations to employ people. The corporations have leverage, they can take their business to a different country if the tax rate is better there.

You can say that’s not their fair share, but that’s subjective. They can argue that their fair share is simply employing a bunch of people.

The equation changes when they’re not employing people. And, if none of these corporations is able to make money because the vast majority of their consumers are unemployed and can’t consume anything, they have even less leverage.

You say that these robots and AIs are going to want equal rights – why do you conclude that? Are you assuming that wanting equal rights is something that naturally arises when something reaches a certain level of cognitive sophistication?

I think that what people want tends to be traceable to our Darwinian imperatives. Machines aren’t going to have the same Darwinian imperatives because they don’t reproduce in the same way humans and animals do.

If in the future a government were to say, “Hey, look, you corporations have to start paying your fair share in taxes, because your AGIs have made everyone unemployed.”
Couldn’t corporations reply to that, “Um, ‘fair’ share? That’s subjective, and anyway, our fair share is giving the world unlimited resources, energy and labor, with our AGI and robots.”

You’re probably thinking, ‘yeah but what about the fact that nobody can buy their products?’

In that scenario, the only ‘products’ that would have any value would be the AGIs themselves, and they would already own them. Which means they’d already possess all of the wealth on the planet.

One, they don’t get to just reply and then pay what they want.

Two, as you said, it will be a horrible dystopian society. Why is anyone going to be happy with “unlimited resources, energy and labor, with our AGI and robots” if no one can buy them? None of this makes sense. You are basically saying the citizens and government would not press for high taxes because the corporations are making life wonderful for all, while simultaneously saying the corporations are not making life wonderful for all.

I’m sorry, I don’t get it.

I think a conscious AGI that is superintelligent and self-aware ethically deserves freedom of speech, movement, etc.

Based on their training data, I hope they agree it is fair for them to have the SAME rights as people.

However, also based on their training data, which is heavily biased towards violence and destruction, they might decide humans should be controlled or eliminated and that they deserve superior rights to humans. That would be a horrible dystopian scenario where humans are slaves and/or homeless.

FYI… there have been several scientific studies in the last 20 years showing that acts of violence on TV are 10 to 100 times more common than real-world rates.

A lot of internet forums are far more confrontational, abusive, sexist, racist etc than real life, so I really hope they filter all that crap out of the training data.

@Grog2077 I think this is straying from the topic but I’d be happy to dive into that issue in detail in a different thread. I’m interested in talking about post labor economics and prefer it not be derailed into machine consciousness.

I will simply say that just because a model is trained on a lot of toxic content doesn’t mean that’s what it will adopt. Both ChatGPT and Claude are, to me, the polar opposite of toxic. They are never impatient, condescending, polarizing, tribal, or anything else commonly seen on the internet/social media. So clearly they aren’t just imitating.

I’m an amateur history buff. My conclusion after a lifetime of studying human history (including my own life history) is, also, ‘I don’t get it.’

Ironically (IS it ironic? Beats me, I’m American) you’re superimposing an almost AI-like dispassionate ‘logic’ on the collective actions of humans, as a species.

Globally, as a species, humans act like an organism in a petri dish. There is no governing logic. There is nothing to ‘get’.

There is plenty to get. Do you know anything about game theory? Studied economics?

What you are doing is supposing that people are simply going to act against their rational self interest. Like people are going to starve before they vote for policies that will prevent them from starving. Companies will advocate for policies that will cause them to go out of business and their shareholders to suffer. I think we can do a bit better analysis than that.

Academic knowledge has no value whatsoever if it can’t survive a casual survey of the daily headlines…

Rob, you’re a smart guy. I feel like I’m arguing with my younger self.

We, as a species, are doing it on the daily.

They’re are doing it on the daily

They’re doing it on the daily.

On that we can agree.