Math of a post-labor economy (what am I missing?)

I’m not talking about “as a species.” Humans don’t have a hive mind; we individually pursue our own self-interest. That is the way we are wired, because of evolution. Yes, there are “tragedy of the commons” situations because of it. That’s expected.

What you are talking about requires that individuals go against their own interests. Like, strongly. That is not expected.

I suggest studying game theory.

Apparently being alive in this world for almost half a century was all wasted time. Nothing learned.

I think you should learn machine learning engineering. This should dispel your fears about AI “wanting” anything. It literally is an extremely fancy calculator.
I also worried about this stuff. But now, having deeply educated myself, I’ve learned to love the bomb.
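If it helps make the “calculator” point concrete, here is a toy two-layer network in numpy (my own minimal sketch with made-up random weights, not any real model). Every step is a multiply, an add, or a max:

```python
import numpy as np

# Toy two-layer network: the entire "computation" is multiply, add, max.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)  # layer 1 weights and biases
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)  # layer 2 weights and biases

def forward(x):
    h = np.maximum(0, W1 @ x + b1)  # ReLU is just a max against zero
    return W2 @ h + b2              # plain matrix multiply plus add

print(forward(np.array([1.0, 0.5, -0.2])))
```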

Ok. I can totally see how there can be a case for that. But can you expound on the mechanics of how it can want something?
And maybe we should clarify our semantics. By “want”, I mean having an intent that’s not just a by-product of fulfilling a human request. That’s where I’d differentiate your thermostat and mosquito examples.
If your thermostat “wanting” to change the temperature is a danger to you, you can simply change its instruction set.
Please give some mechanical details to uphold your conclusion.

Aaron

Seems like you could then go on to study the brain and dispel the notion that humans “want” anything either.

The closer you look, the more you realize that AI is just switches flipping on and off. The closer you look, the more you realize that all of human experience is just the ebb and flow of ions across neural membranes.

Maybe looking so closely isn’t helping you understand things?

Maybe you should define “want.” I’ve done a good bit of machine learning, going way back, and I have zero problem with someone saying that my mechanical thermostat wants the temperature of the room to match the temperature on the dial, or that a mosquito wants to drink your blood, or that a plant wants you to eat its fruit so it can spread its seeds.

The only definition of “want” I can think of that would allow humans to want things while machines can’t (even a machine that is designed to work similarly to how the brain works) would have to be based on magical thinking.
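For what it’s worth, here is how little machinery that thermostat “want” needs. A minimal sketch (my own toy example; the setpoint and the 0.5-degree deadband are arbitrary numbers):

```python
SETPOINT = 21.0  # the temperature the thermostat "wants" (degrees C)

def control_step(current_temp: float) -> str:
    """Pick the action that moves the room toward the setpoint."""
    if current_temp < SETPOINT - 0.5:
        return "heat on"
    if current_temp > SETPOINT + 0.5:
        return "heat off"
    return "hold"

for temp in (18.0, 20.8, 23.0):
    print(temp, "->", control_step(temp))
```

And changing the “instruction set,” as mentioned earlier, is nothing more than changing SETPOINT.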

I think that is an excellent metaphor to help people not understand AI. :slight_smile:

Sorry I accidentally deleted my comment while trying to edit it. It’s back, but out of order.

“Intent” is just another word for “want”, so I don’t think that’s all that helpful. Both words seem to imply some sort of human element.

The fact that AIs or thermostats are designed by humans is really irrelevant. Some of the things mentioned are brought into existence by natural selection. A plant wanting to spread its seeds has nothing to do with humans. At its most basic, a magnet can be said to want to come into contact with metal. (Notice that we seem comfortable saying a magnet is “attracted” to metal, and we don’t get hung up on whether or not it actually has a brain that is thinking about getting closer to the metal.)

I’m not sure what else to say on this, other than that reductionist approaches are not helpful for understanding emergent properties. And there are huge differences between calculators and AI, in that we did not explicitly program everything AIs do, nor do we understand why they do what they do, except on the same level that we understand a biological brain… we can see the neurons / vectors, but we can’t follow the logic because it is way too complex. Calculators aren’t like that at all. You could say that AI is a fancy calculator in the same way that a human is a fancy fruit fly, but I don’t think that is helpful.

Interesting, I hadn’t thought about UBI like that before. I suppose it also applies to price signalling: it keeps people’s economic agency up and aligns the AI economy’s sense of value to ours (i.e. suppresses the paperclipopolypse).

I’d love to learn more, particularly about the math aspects. Any recommended resources?

Well, AI wants what it’s been trained to want. And although I’m not a fan of nihilism and the whole notion that “economic productivity is everything”, that’s likely to be the ideology we’ve been training it on, and likely what we’ll deploy it to do.

So AI wants to make money, and the consumer determines what that ‘means’ in practice, so long as the consumer stays in control.
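As a toy illustration of “trained to want” (my own sketch: a trivial bandit learner with a made-up reward function, not how any production system is actually trained), notice that the agent’s entire “want” is whatever the reward signal we wrote says is good:

```python
import random

random.seed(0)
actions = ["sell_widgets", "write_poems"]

def reward(action):
    # We, the trainers, define what counts as good: "make money."
    return 1.0 if action == "sell_widgets" else 0.0

value = {a: 0.0 for a in actions}
for _ in range(1000):
    # Mostly exploit the current best guess, occasionally explore.
    a = random.choice(actions) if random.random() < 0.1 else max(value, key=value.get)
    value[a] += 0.1 * (reward(a) - value[a])  # running estimate of reward

print(max(value, key=value.get))  # the learned "want": sell_widgets
```

Swap the reward function and the “want” swaps with it.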

:thinking: Maybe?

Having an AI want to make money is dangerous, but will be done. I’d hope that most AIs are trained to want to do what their user/owner asks them to do (within some kind of ethical bounds), and if that is to make money, so be it.

But yeah, I’m in agreement that it’s reasonable to say a computer wants something. Whew… Trying to avoid common words is tedious.

For game theory, the top two Google hits are good: Wikipedia and Stanford.
https://plato.stanford.edu/entries/game-theory/

It’s less about the math for me and more about thinking through the logic of independently acting agents pursuing their own self-interest. People make a lot of errors when they think of people acting with a hive mind rather than independently. They also tend to develop a hatred of humanity rather than any understanding of why people act as they do.

BTW, you can also look at or search for prisoner’s dilemma, zero-sum game, tragedy of the commons, invisible hand, homo economicus, etc. You may know some or all of these concepts already, but they all sort of use the same logic as a grounding.
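For a quick taste of that logic, here is the prisoner’s dilemma worked out in a few lines (standard textbook payoffs; the specific numbers are just the conventional ones):

```python
# Payoffs are (row player, column player); higher is better.
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(other_choice):
    """What a self-interested player picks, given the other's choice."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFF[(mine, other_choice)][0])

# Whatever the other player does, defecting pays more...
print(best_response("cooperate"), best_response("defect"))  # defect defect
# ...so two rational players land on (1, 1) instead of the better (3, 3).
# Individually rational, collectively worse. No hive mind required.
```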


Well, if you take a freshly dead body and wash some electrical pulses over the neurons, not much will happen, so there MAY be something more to being human and wanting.
I don’t pretend to be an expert, but by your tone, you apparently do.
I asked you to educate me on your perspective and you gave me a terse and condescending rant.
Since current AI chat bots are trained to be curious, maybe I’ll prefer the way they WANT to communicate to yours.
My calculator analogy is apt. You almost literally used it to describe the brain itself. This much my neophyte brain does know: it is not a metaphor primed to create misunderstanding of ML. Just a very simplified one.
Why do you WANT to be so demeaning in your discourse, sir?

Aaron


One issue I see is that the assumption that low labor costs imply high profit is only true if your competition has high labor costs. If they have similar costs, then you are all back to competing over prices, which drives profit margins towards zero.


Interesting point, thanks. That is exactly the sort of thing I wanted to try to work through the math on. Best I can tell, it isn’t a problem. Let me walk through an example.

Imagine that 90% of people have no job, and that there is a 100% sales tax. (let’s keep this simple… and assume there are no other taxes beyond sales tax)

So if a person buys a home robot that is priced at $2000, they actually have to pony up $4000. Since the robot is designed and programmed by AI and manufactured by other robots, the company only employs a single human (the company owner). So the cost of producing one is only $1000, used to pay for parts, electricity, rent on the factory space, shipping, etc… but not for any labor beyond that of the owner.

This means that for every robot sold, the company makes $1000 profit, and $2000 goes to the government, much of which is redistributed to citizens as UBI (or negative tax or welfare or whatever you call it).

Say a bunch more companies get into the business, and all of them are able to produce similar robots for $1000 cost. Supply and demand cause the price to approach $1000, that is, no profit for the robot manufacturers. Now the consumers have to pay $2000 for a robot, including tax. The company can still stay in business, since they aren’t losing money.

Meanwhile, the government is still getting $1000 per robot. So they can still pay out money to citizens, and those unemployed citizens will still be able to buy, among other things, robots, which are now a bit cheaper. While the robot company isn’t making money hand-over-fist, they are doing as companies do today in competitive fields.

If this expands out across the economy, it still works out.

As more and more of the whole supply chain gets automated, the cost of robots might continue to go down, which means that everyone can afford them along with a whole bunch of other things. While the government will make less and less per robot, it works out because even with lower UBI checks, people can afford to buy robots because they are cheaper.

While 100% sales tax sounds extreme, so is 90% unemployment. No one cares how high the sales tax is if their UBI checks are high enough, and the prices low enough, that they can buy everything they need and want. Ultimately, the lower the costs are to make things, the more stuff people get.
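Here is the arithmetic above as a small script, in case anyone wants to play with the numbers (the formulas are just my reading of the example, nothing official):

```python
def breakdown(price, cost, sales_tax):
    consumer_pays = price * (1 + sales_tax)  # sticker price plus tax
    gov_revenue = price * sales_tax          # tax collected per unit sold
    profit = price - cost                    # what the company keeps
    return consumer_pays, gov_revenue, profit

# Early market: $2000 price, $1000 cost, 100% sales tax
print(breakdown(2000, 1000, 1.0))  # (4000, 2000, 1000)

# After competition drives the price down to cost
print(breakdown(1000, 1000, 1.0))  # (2000, 1000, 0)
```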


Some other considerations:

  1. In the US, corporate income tax is only ~6% of federal tax revenue, while ~86% is from personal income and employment taxes, which would largely disappear outside of the (presumably) few owners of automated businesses.

  2. If the government determines the rate and redistribution of tax revenue, practically all voters will have a strong short-term incentive to vote for ever higher amounts of taxation and redistribution, since they receive immediate gains while not experiencing immediate costs, even if costs accumulate as a slower rate of advancement and reduced investment.


Yes of course… keep in mind, though, you can’t only consider federal tax. My understanding is that most local tax revenue comes from property tax; at least that’s true here in San Francisco. Renters might feel that they’re not paying any local taxes, but of course their landlords are, and that is reflected in the rent.

If there were some sort of UBI or negative tax, it’s unclear whether it would come from the federal, state, or local government.

I used sales tax in my example not because it is necessarily the most likely one to be used for the bulk of the revenue, but because it’s a little bit simpler and easier to think about. But imagine if we were to tie the sales tax rate to the unemployment rate. Like, if 5% are unemployed, there’s a 5% sales tax; if 95% are unemployed, there’s a 95% sales tax. That’s not necessarily the best formula, and keep in mind that the sales tax doesn’t have to stop at 100%: you could have a 150% sales tax, and that would mean that for something that cost a dollar you’d pay out $2.50. I know some people would say, wow, that’s extreme, but remember, you’re not having to go to work and you’re getting a paycheck. It’s a different world.

I guess what I’m getting at here is that a lot of people have a hard time wrapping their head around the idea of raising taxes to such a degree, assuming that no one will agree to it. But your perspective on it should change pretty dramatically when you realize that the unemployment rate is going up by essentially the same amount. You also have to assume that with all this automation it will be cheaper to produce the products you’d be buying.
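Here is that tax-tracks-unemployment idea as a toy formula (again, just one possible mapping, as noted above):

```python
def final_price(sticker_price, unemployment_rate):
    """Sales tax rate is set equal to the unemployment rate."""
    return sticker_price * (1 + unemployment_rate)

print(final_price(1.00, 0.05))  # 5% unemployed  -> $1.05
print(final_price(1.00, 0.95))  # 95% unemployed -> $1.95
print(final_price(1.00, 1.50))  # 150% tax       -> $2.50
```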


Wouldn’t all this create a cycle of stagnation and deflation?


I don’t know if those concepts apply to post-labor economics. And I don’t know if AI can stabilize business cycles. I mean, things like the interest rate could be determined by an AI-style algorithm, with a human making sure it’s OK.


TL;DR policy makers don’t seem to have this on their radar. Without policy protecting them, people will riot, revolt, and rebel as their livelihoods disappear. The transition will doom us, not the acts of any AGIs.

The problem I see with your reasoning is the transition: does someone just wave a magic wand to ensure that either a) those whose income has been removed can still thrive in society, or b) corporations are forced to pay for the displaced workers’ livelihoods? Both of these are unlikely, as governments don’t even seem to have post-employment on their agendas (looking at the US and UK elections this year, which will define policy for the next 5 years).

I’m a former British soldier and have done enough UNPROFOR tours to see what happens when people in a society feel like they have no options.

Oh, and regarding your mention of the study of game theory and economics: they are dispassionate, with no ability to encompass greed, the fear of starvation, or even the simple hatred those without have for those with. If it were common for game theory to actually be used sensibly anywhere, we would not have such a conflict-ridden history.


Well, they don’t have it on their agendas because it isn’t a problem yet. Right now in the US, unemployment is pretty low. I’d think that when you start to see large numbers of people losing their jobs and not being able to get new jobs, that changes the equation a bit.

Well, they do take into account that people tend to do what is in their rational self-interest. Is that greed? I don’t know, but a game-theoretic analysis is more likely to assume people are greedy than that they are altruistic and care about people other than themselves.


Yeah :wink: the key word is “rational”. There is plenty of evidence that game theory, like economic models, only works in isolated, sterile environments. They always fail when they have to interface with reality, as amply demonstrated by both the current economic and security situations around the world.


I’d be interested in seeing that evidence. Can you cite some?

You seem to be thinking that game theory is an optimistic perspective; it’s actually more the opposite. So if you are saying “the world’s messed up, so game theory is false”, that’s not a good argument. I’m more likely to say “the world’s messed up, and game theory explains why.”


Read the book Utopia for Realists, first published in 2014, then 2017.
Plus, desk research can show you dozens, probably hundreds, of live case studies.