[Post-labor economics] DAO AI

I assume most of you are familiar with Daveshap’s opinions and assessments on post-labor economics. I’d like to try making a DAO, but since I have no coding experience, I wondered if some of you know where I should start?

Also, from what I’ve gathered, DAO technology is still under development and has traditionally been used for financial purposes.

But my intention is to use the technology to make a DAO specialized in non-monetary goals.

The first thing I had in mind was creating a DAO that would gradually buy up natural areas that need to be preserved (old-growth forests, mires, etc.), to prevent businesses from clearing them for cabins or other development.

The DAO would also be responsible for maintaining the forest, making it more accessible to the public, building hiking trails, etc.

I know that everything is based on smart contracts, which I understand to be codified rules that, among other things, determine the voting weight of each owner’s tokens (if a token system is chosen).
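To make the idea concrete: here is a rough sketch of token-weighted governance in plain Python. This is illustrative pseudologic, not an actual on-chain smart contract (those are typically written in a language like Solidity), and all the names and numbers are hypothetical:

```python
# Minimal sketch of token-weighted DAO governance.
# Illustrative only -- a real DAO would run this logic on-chain.

class TokenDAO:
    def __init__(self):
        self.balances = {}  # member -> number of governance tokens
        self.votes = {}     # proposal -> {"yes": weight, "no": weight}

    def mint(self, member, amount):
        """Issue governance tokens, e.g. in return for a contribution."""
        self.balances[member] = self.balances.get(member, 0) + amount

    def vote(self, member, proposal, support):
        """A member's vote counts with the weight of their token balance."""
        weight = self.balances.get(member, 0)
        tally = self.votes.setdefault(proposal, {"yes": 0, "no": 0})
        tally["yes" if support else "no"] += weight

    def passes(self, proposal):
        """A proposal passes if weighted 'yes' votes exceed 'no' votes."""
        tally = self.votes.get(proposal, {"yes": 0, "no": 0})
        return tally["yes"] > tally["no"]


dao = TokenDAO()
dao.mint("alice", 100)  # alice contributed more, so holds more tokens
dao.mint("bob", 30)
dao.vote("alice", "buy-forest-plot-1", True)
dao.vote("bob", "buy-forest-plot-1", False)
print(dao.passes("buy-forest-plot-1"))  # True: alice's larger stake outweighs bob's
```

A non-monetary DAO could use the same mechanics but issue tokens for volunteer work (trail maintenance, surveying) rather than for capital.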

I think this could be an important way for people to participate in society and gain economic agency (for example, by giving people who invest in something like this tax deductions).

What do you people think?

Such a technology would also depend on local and regional governments recognizing DAOs as an ownership model.

I think the main difficulty is the lack of a CEO to be legally and financially liable.

But then the idea is simply to have an AI agent as CEO.


Hi, here to respond to your DAO post in the introduction feed. I agree that it would be an ideal way to sideline economic incentives, but before implementing this tech on a huge, irrevocable scale, shouldn’t we start by figuring out the tech itself a bit more?

Is it ethical to let LLMs make crucial, life-impacting decisions? Look at the misalignment of the SOTA models right now and imagine the ramifications if those biases slip through into the decision making.

The first step should be realising that you’re essentially ‘talking’ to the huge average of the whole internet. Is that representative of the world population right now?

Upon realising that, we should carefully identify what adds the ‘special’ value of the LLM. Is it new intrinsic information the model creates, or was it already present? Is it like a filter on a picture, or is it more? A closely related subject in philosophy is hermeneutics.

I think we should answer these questions before implementing it in these ways. The structure is really promising, I agree, but if you’re unsure about the basic functioning of the LLMs making this work, I’d argue that figuring that out first would be the responsible thing to do.

There are some great philosophers who have written amazing groundwork for answering these questions, and I’d love to talk about that in detail if you’re interested. I hope my point comes across. Curious to see your response.


Yes, I think we should absolutely find the optimal ways to tune a DAO.
I think it depends on the goal, and whether or not it needs a lot of scaling, investment, etc.

I’m actually glad that the EU AI Act passed and now requires all high-risk AI models to go through a fairly thorough approval process, with testing and a quality system.
It’s reminiscent of the process for pharmaceuticals.

I feel it will create a safe structure for monitoring all AI models in Europe.

I don’t see a problem with misalignment if it passes such an approval process.

But we should also strive for a framework that is universal and works precisely.
I would recommend Daveshap’s heuristic imperatives and ACE framework.

I don’t know what you’re hinting at with the first step. Do you mean the training data that “made up” the AI? As if the resulting persona the AI is based on represents the mean of the training data?

In that case, I believe we need more high-quality data. Which I think will come soon, as AI becomes more and more embodied, giving it the possibility to gain its own experiences.

I believe that the internet and AI are stages of human evolution.

And I feel that as we dig deeper and deeper into AI intelligence, in search of the special sauce, we will discover that it’s just math and that there is no sauce.
And in discovering this, we will conclude that humans don’t have the special sauce either.

That said, I also think that the metaphysical exists. Just that it’s not scientifically tested enough.

I love discussing philosophy, so I’d be more than happy to discuss this with you.

Just a thought on unilateral controls… I believe these will be completely ineffective. Europe mandates lots of red tape to jump through? The work will simply continue where that red tape does not exist.

While these controls may safeguard against job losses in those regions in the short term, they will not impede the accelerating pace of development in the ML sphere.

Yeah, I do agree that despite the EU’s intentions, it will only slow implementation.

And I think it’s going to be easy to implement a validation AI or a quality management system AI to do all the boring work.

But at least we will force the models open so the code is under supervision. :+1: