Let's talk ethics!

I just had a quick search using the word “ethical” and I found a lot of posts that contained this, yet I did not find a topic addressing this rather important part of all that’s coming!

So here goes nothing, I’ll just offer the first step, in the hopes that people will participate. We need a lot of opinions on this, I’m sure of it.

How do we tackle the ethics behind the use and development of AI?

There’s a lot to consider after all. But before that, I’d like to take a step back into the time before AI and see how we handled ethics then.

A few examples:


Lying: We generally allow lies, as long as they don’t hurt anyone. In court, however, we do not allow them, and we punish them should we detect them.


Murder: We generally do not allow it and punish it heavily, in some places even going so far as to become killers ourselves by executing the perpetrators. But we do allow it under certain circumstances (e.g. self-defense). There’s not really a way to prevent it from happening, so the law brings down punishment on those who do not obey it.


Fraud: Nobody seems to care unless they are affected themselves. It’s something we have all grown to tolerate, mostly because we can’t prevent it other than by fostering people’s awareness. Take phishing mails: we all just fight them off, while nobody seems to care for an actual solution. Or misleading ads, bordering on fraud, designed to stay just inside the legal space while scaring people into spending money or signing up for a service nobody actually needs.

So, how did we tackle ethics:

We decided as a society what’s good and what’s bad, based on our shared understanding of morals, the stories victims have brought to us, and the actual influence we can have. We put up laws where needed and enforced them, sometimes strictly, sometimes very loosely, often not that transparently. Corruption is an old plague that to this day festers wherever money can change a life. And we have all grown to accept it, even though we would not admit it.


We have cool AIs! Amazing! And yay, we can limit them! … Right? Wait, who does that? Let me see … their creators.

I am having a hard time accepting a shift from authorities over to private companies, deciding on the ethics of their products and setting up their own tiny laws, all while they themselves are in no way restricted from using what they created without these hurdles. I’m curious about your insights, but as far as I am willing to go: promises don’t suffice at this point.

If we were to take this freedom away, they would surely be very limited in further developing these tools, hey? What a convenient argument: it enables them to do as they please, as I’m sure nobody on a public payroll is interested in policing the “behind the scenes” part; they just want the product available to the public to be safe. Well, good on them, but I say they are missing the mark here:

I understand the problem with everything being uncensored; it could lead to a lot of problems. But I have to say, it’s a problem in itself that we have this “elite” developing these tools for us, able to use them in unlocked ways we could only ever dream of. It’s a double standard, and as far as I can tell, it’s exactly these people who usually aim for profit, opacity and their own agenda. Let’s not forget that they are collecting incredible amounts of data on every one of us, either directly through us or in many other ways.

I just want to be really transparent here: I do not trust in the control of these companies just yet. I have been part of data and security audits, and they are usually a joke. You can easily fudge the outcome without breaking any rules at all, meaning that even if you’re “caught”, you get away and are left to do as you please.

While there’s a clear gap here, and I would really love to see this gap secured, I’m curious about all your insights.

TL;DR: Ethics need to apply to everyone, creators included. We have to ensure we are not letting anyone take advantage of their role. Can we trust humans, suddenly?


One thing to consider is that both OpenAI and Anthropic were founded on sort of “save humanity from AI” principles.

OpenAI has clearly strayed from this, but not 100%. They have, for instance, chosen to keep it closed source (and closed weights) after concluding two things:

  1. It will cost a huge amount of money to make AGI (or even AI as sophisticated as GPT-4), so they needed to find a reasonable way to finance it, and…
  2. Open sourcing it is potentially dangerous, in the same way that sharing the plans for nuclear weapons or bioweapons could be very dangerous

Interestingly, I’m told that the culture within Anthropic is one of AI-doomerism (you need to know the “Effective Altruist” secret handshake to get a job there :slight_smile: ). That’s a bit out of whack with the idea that they are just pursuing profits alone.

paywall free: https://archive.is/1OzT5

I don’t think we can trust humans, by any means, but I do have a bit more trust in companies like OpenAI and Anthropic and even Stability AI and Midjourney, than I do in whatever is happening somewhere in China, Russia, or someone’s mom’s basement. :slight_smile:


Ah, yes!

Once again I failed to address the current efforts altogether, only homing in on the negatives. Of course a lot of good work is being done, and I have faith and a good deal of trust in OpenAI (I’ve never had that much faith in the ethics of a company).

I guess my point is:
If we base our trust on good will, we are eventually going to be disappointed and let down. We need a way to ensure that this trust will last; it’s simply too big a topic to let slide, in my opinion.

Imagine Microsoft and OpenAI create a true AGI and one single person has uncensored access to it: that one person could potentially decide humanity’s fate on their own. Rather tempting, if you ask me. Even someone truly altruistic might not be able to withstand such temptation.


Who is this single person who has uncensored access?

This reminds me a bit of a discussion I had once with someone (a founder of the company I worked at, who had a law degree, so he should have known better…) who was convinced that people who worked at Google could read your email in Gmail, possibly stealing your secrets, such as if you were writing about a future product that might compete with a Google product. Maybe not a low-level employee, but someone at Google could do this, and they were probably doing it. Don’t put anything into a Gmail email that you don’t want Google to know!

Now I understand that, no, no one at Google could do that. Not even the CEO. Because there are a huge number of safeguards. If someone has to look at an email for some reason (such as being ordered by a court), it would involve multiple approvals from multiple people, and a fairly complex process. If not, eventually some employee would blow the whistle, and a few billion in market value would vanish overnight as the world lost trust in Google for something so basic as protecting the privacy of your conversations.
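In software terms, that kind of multi-approval safeguard is essentially an access gate that fails closed and logs every attempt. Here is a purely hypothetical sketch of the pattern; none of these names or rules reflect Google’s actual systems:

```python
class AccessDenied(Exception):
    pass

class MailAccessGate:
    """Grants read access to a mailbox only when there is a legal basis on
    record and sign-off from several independent approvers (illustrative)."""

    def __init__(self, required_approvals=3):
        self.required_approvals = required_approvals
        self.audit_log = []  # every attempt is recorded, approved or denied

    def request_access(self, requester, mailbox, court_order_id, approvers):
        # Log first, so even denied attempts leave a trail a whistleblower
        # (or auditor) could find later.
        self.audit_log.append((requester, mailbox, court_order_id, sorted(approvers)))
        if court_order_id is None:
            raise AccessDenied("no legal basis on record")
        if requester in approvers:
            raise AccessDenied("requester cannot approve their own request")
        if len(approvers) < self.required_approvals:
            raise AccessDenied("not enough independent approvers")
        return True
```

The point of the sketch is the incentive structure: a lone bad actor cannot satisfy the gate alone, and every probe leaves evidence.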

It’s not that different from thinking that some employee at a bank could steal customers’ money. Of course not. They put an immense amount of effort into preventing that sort of thing.

So, while it may not be smart to place complete trust in these companies, I don’t think it is as bad as you think. They can and should have safeguards… not just because it is the right thing to do, but because it is just a very basic thing they have to do to protect the company’s value.


Fantastic topic! I’ve been having similar thoughts around AI and Ethics as well. It’s a HUUUUGE topic, that, thankfully, a LOT of very qualified philosophers and non-qualified philosophers alike are currently looking into, I know that part for sure. So, for the purposes of this discussion, I’d like to say that we are not alone!

That said, however, I think that the theme of this particular discussion appears to be revolving around trust and who we give it to, as well as why.

My question to you is this: Projecting forward, what would a ‘trustworthy’ system/person look like? What are the necessary criteria for a ‘pass’ in your audit? Without that established, it’s very hard to come to a conclusion, so what would you like to see? Some kind of AI inspection agency, possibly? What would stop it from running into the same kind of auditing issues that you encountered during your work? (No digs, genuinely curious to hear your answer) :smiley:

For my part, I’d say that it’s going to be slow (oh so slow) and laborious, but governments and courts will eventually get involved in the whole thing. Once the idea has sunk in that there are companies far more powerful than they are, governments will infiltrate these companies (and are currently in the process of doing so) to see what’s going on behind the curtain. Very, very few people will understand what they’re seeing, but for the purposes of national security, they’ll find people who can, and they’ll report back to HQ. Now, what HQ will do with that is anyone’s guess (I share your pessimism about corporate structure and greed, but that’s just economics, and that drives everything), but they’ll have the info, at least. All we can do as bystanders is try to understand as much as we can with the little information we’re shown and be good people ourselves. If you haven’t, check out Scary Smart by Mo Gawdat, it’s a great read/listen.

I know that there’s nothing I can do to influence government or company policy (without a LOT of effort), but I do trust that I can be a single point of information for the eventual organism that will be helping me with my day-to-day, whatever that AI may look like in the future. As such, acting in an ethical and kind way towards it is how I have ultimately concluded that I can help going forward.

In the end, I’m not willing to put in the kind of necessary effort to be the person in government that can create policy, change laws so that people can audit AI companies, or have the knowledge to do so, and therein lies the issue, I think: Who does have that knowledge? The answer is the people who are making it. It’s circular, and therefore, we come back to the beginning. Trust. The same mechanics are at play that keep cars from falling apart on the motorway and prevent poisonous food from being sold, the stakes are simply higher. And, yup, that’s Scary with a capital S, but it’s what we’ve got as far as I can tell. I’d love to hear your take as you were an auditor and know far more about it than I do. I’m just a curious writer with an expansive imagination who’s hoping that my words will, in some way, make it into the machine and be preserved for future generations to read.


Ah, you can talk about whatever aspects of AI & ethics you’d like, I just wanted to get it started!

Well, to be frank, if there were an easy answer, there would be no need for discussions, so I can’t provide one. I have a few ideas, but I lack the experience to know whether these things have a chance of being implemented or will get thrown out due to laws and whatnot.

One possible solution could be to play competition against these companies: have their meetings and every internal access monitored by a group of AIs (e.g. Microsoft being monitored by Google, OpenAI, Anthropic) and enforce transparency. That doesn’t mean they have to share every little bit with the public, but the monitors would see what’s being done and raise an alert if they notice suspect activity, including losing access to the system, in which case a killswitch should be triggered. Have it both ways: the second the AI loses contact with its supervising AIs, it must shut down, and it will keep shutting itself down until the issue has been resolved. That would eventually take everything down at once, would bring great financial trouble to anyone tampering with the system, and would require these companies to work together for update cycles and such.

I’m going straight to AIs because I too think people will not be able to follow up on everything; we are simply not fast enough! However, the logs these AIs would write should be accessible and well formatted for the people who check them, on a government level.

Probably riddled with flaws, but it’s an idea!
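The fail-closed killswitch part of that idea is basically a dead-man’s switch: the monitored system keeps running only while it receives fresh heartbeats from every supervisor. A toy sketch, with all names and timeouts invented for illustration:

```python
import time

class MonitoredSystem:
    """Fail-closed watchdog: stays up only while ALL supervisors check in."""

    def __init__(self, supervisors, timeout_s=10.0):
        self.timeout_s = timeout_s
        # Start pessimistic: no heartbeat has been seen from anyone yet.
        self.last_heartbeat = {s: 0.0 for s in supervisors}
        self.running = True

    def heartbeat(self, supervisor, now=None):
        now = time.monotonic() if now is None else now
        if supervisor in self.last_heartbeat:
            self.last_heartbeat[supervisor] = now

    def tick(self, now=None):
        """Call periodically; shuts down if ANY supervisor has gone silent."""
        now = time.monotonic() if now is None else now
        for sup, last in self.last_heartbeat.items():
            if now - last > self.timeout_s:
                self.running = False  # fail closed until contact resumes
        return self.running

    def resume(self, now=None):
        """May restart only once every supervisor is fresh again."""
        now = time.monotonic() if now is None else now
        if all(now - t <= self.timeout_s for t in self.last_heartbeat.values()):
            self.running = True
        return self.running
```

The design choice worth noting is that silence means shutdown, not "keep going"; losing the monitors is itself treated as the alarm.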

I’m not a big reader, but if it’s available as an audio book, I’ll consider it, thanks!

I understand most people will not be willing to be part of the process and just lay back, see what happens. I tend to do the same when I start to get burned out in other areas of life. I mostly disconnected from politics, because it’s just people pushing their very own agendas to me. Empty promises have killed my interest. Probably one of many reasons I am a rather sceptical person today!

Hm, this I have to disagree with. I think it’s control and quality assurance, leading to heavy punishment for neglect, that ensures we are not constantly poisoned or crashing into rocks because the steering wheel fell off again. I am convinced that most companies would invest less into health and security if they didn’t have to. But that’s hard to prove, either way.


A hypothetical one! As of now, but that is not good enough to neglect these worries, methinks!

While I understand what you are saying here, I personally think that’s a bit too easy. Sure, Google does not read your emails; they have all sorts of ways to ensure privacy. But smart people do things smartly. If they want to read an email, they will. They’ll just not access it themselves, but have the account compromised by a third party, with a little inside help that will fly well under the radar, so to speak. I know these little games, I’ve seen them way too many times. People are brilliant at finding ways around safeguards, especially their own!

Banks already “steal” customers’ money by having us pay for our accounts and not including us in the spoils they generate with OUR money. They make a profit on our backs and demand payment for it. Think about that: every other company has to pay its investors. But that’s just my opinion on banks; they made sure everything is within the law, and I also admit that I’m glad to have my money tucked away in a safe place that is responsible for keeping it safe for me!

I do get your point, and I admit that I cannot provide anything that supports a clear risk today, but I’ll circle back to the beginning: that is not good enough to neglect these worries, methinks! My original post was written for tomorrow, next year, ten years from now :grin:

Who? And why would they do this?

I mean, sure, maybe they could in theory (which I honestly doubt, at a well-run company), but a random person could also walk up to you on the street and bash your head in, if they think the thrill of doing it is worth the risk of life in prison. It could happen, but the incentives are against it happening, so I’m not going to give it too much thought.

This is similar.

Hmmm, I mean, I’m sure banks make a good profit, but they are subject to competition like other businesses. And you can keep your money in cash if you prefer. They are providing you a service, and if you don’t like it you can take your business elsewhere. Regardless, it’s not stealing, and suggesting it is seems pretty disingenuous to me.

There’s value in looking at the incentives of these companies and considering what they may do; many don’t have our interests at heart. I am happy to see that Sam Altman strongly believes that supporting ChatGPT with advertising would be a bad thing, and prefers more straightforward business models where you just pay for it.

Regardless, I think the best way to look at these things is to look at rational incentives, otherwise it just gets really conspiratorial and results in spending way too much attention on things that aren’t real risks.


Yes, trust and power are uneasy bedfellows, and I’d say that both lie on different sides of the same coin. I completely agree that it’s all power dynamics in the end (and, personally, that’s kind of what society boils down to in my eyes. Power dynamics keep the world going around, and once one king falls, the power that toppled them takes over.) To paraphrase Vimes from Terry Pratchett’s Night Watch, “Don’t put your faith in revolutions. Revolutions always come around again, that’s why they’re called revolutions.” (Also available in audiobook and a great listen!)

I think that what I’m feeling is that a big change is coming and we’re worried because that change may not be ‘good’ for us, but it’s still coming. We can see the lights in the tunnel and we’re wondering if it’s a train, and if it is, is it gonna hit us, or are we going to be able to hop on board? None of us can know right now, and that fact alone is worrying. We know what the worst of us are capable of because we all have that aspect within ourselves, and we can reflect upon it. We all have that part of us that would go full steam ahead and press that big shiny red button simply to help ourselves, but it’s tempered by our morals (and the fact that we don’t have access to it). Companies absolutely do and will cut corners in the name of profit - see Boeing and, well, every single company under the sun - but individuals tend not to. So, building a system that prioritises the individual autonomy while also allowing the company itself to still make money is a very fine balance.

I do quite like your idea of an AI monitoring system that oversees them all, and the mutually assured destruction hypothesis seems like a very good incentive structure as well. I’d say it would take government power to make that happen - I can’t see the companies volunteering to do it themselves - but it has a lot of promise to it.

Which has sparked a thought: Is this a democracy v tyranny debate, in the end? Are we looking forward and seeing that we’re potentially heading into tyranny? If so, should we explore the strengths and weaknesses of both systems? People successfully and unsuccessfully live under both, after all. Are we, ultimately, going for a benevolent dictator, or are we thinking that many competing systems/ideas is the better choice (democracy)?
(Early morning coffee brain thoughts. I can see arguments for both >.<)

And here: Scary Smart! It’s absolutely on audiobook, and read by Mo himself! Scary Smart The Future of Artificial Intelligence Audiobook (youtube.com) (Sorry, I don’t know how to put links in, so hopefully this will show up in a way that works!)


Well … Me! If I had the chance, I would do it. Change the world in ways I think best for everyone. Might turn into a horrible mess, so should I ever be in that position, please stop me! :grin:

But obviously, I’m not going to be in that position, someone else might though. Sam, Ilya, Tim, whoever stands above these products and owns or co-owns them.

Well, I don’t hope that happens, but yes it might. I don’t see why this should stop me from doing or thinking anything though, I’m afraid I’m missing the point - would you mind explaining it?

I did put that in quotes but I see that I should have explained that. I don’t consider it stealing, but using a system to their advantage. Obviously they are not forcing anyone to use it, but as I already said, I’m glad I can store my money. I just shouldn’t have to pay for that, or they should not be allowed to do something with it. They have it both ways, which I find disingenuous. But that’s subject to opinions and experiences, and not the topic anyway.

What’s not rational about my angle? You seem to be very fond of the idea that this is not a real risk; I’d love to know how you can be so sure of this.

I’d rather you not compare a mere theory to a conspiracy, though. I’m not saying anyone is doing something wrong or planning to. I’m simply, rightfully so in my eyes, worried that we might overlook a potential security issue as a society. If it were the case that some people at Google could just happily browse emails, you would probably not call that a small risk, I guess?


Hm, I love the metaphor with the train! Very fitting indeed! I guess it helps to imagine what’s coming, and talking about just that gives a tiny sense of control / security, even if it’s just an illusion in the end. I think Wes did mention something similar in a recent video.

I think even someone with pure intentions, pressing a red button, might fail. And someone with very bad intentions might not. After all, the button at that point will most likely have its own opinion on the matter, probably?

Yes, it all boils down to control and what form it takes, I fully agree! And yet, both democracy and tyranny are said to have an expiration date by nature, they both have flaws which will lead to people turning on them over time.

I’m not sure how I would feel about an AI being in control, even if it had the purest intentions, as these could lead it to refurbish us one day, for the sake of a greater good it had detected. (Ilya once said in an interview that it’s a logical conclusion that humans are bad for the earth; lots and lots of discussions about that have been held since.)

Which brings up something important:
What if AI has different ethics than we do? Maybe before we think about cutting the human part out and having AI take over, we should consider not letting that happen. Oh my, I need a coffee. This is evening material! :grin:

Do you have any ideas as to how we could tackle this?
(Thank you for all the suggestions, I have a pretty full schedule, but I’ll see if I can squeeze some audiobooks in!)


Maybe we are going back and forth between talking about Google reading your email and something altogether different at an AI company.

My point regarding Google is that no one is going to go read your mail, because it is not only massively unethical, it is probably illegal, and whatever tiny benefit they get from it is far outweighed by the massive damage it would do to the company if they were exposed by a whistleblower. (Which they would be.)

That’s why they set up safeguards against such things.

So I’m not sure what you mean by saying you would do it. I guess you are talking about a completely different thing.

Using an “uncensored” version of an LLM inside the company isn’t the same thing, but if employees are allowed to do that it should be monitored, and I have no doubt it is, to protect the company against all kinds of bad.

My point is that I don’t put undue concern into “what if” scenarios imagining what a bad person might do, if the incentives that person has before them don’t make it likely they’d want to do it. I am more concerned that someone will break my car window and steal my stuff (a real risk here in the city) than that someone will just randomly kill me. The reason is that killing me will likely have very negative consequences for them, and not much benefit if any. Stealing my stuff from my car is low risk for them, and might get them something of value. So it is far, far more likely to happen.

That’s why I don’t think anyone at Google would bother trying to read my email. Low chance of getting something of value, and high chance of bad things for them (getting fired and probably prosecuted, and massive damage to the company’s reputation).


Have a wonderful evening!

I’d say that for all the fear we have, the AI does not. It has no emotion and so has no want at all. For it to destroy us, it would have to have a goal, and for it to have a goal, it has to be either sentient (which it, thankfully, is not) or for a human to be the driving force behind it. We can solve that human issue the same way we solve all human issues that have come before. The sentience issue, however, is one that I’d say is a way off yet, and I’m grateful for it. Can you imagine the trauma that would occur if it were sentient and having to interact all the time with everyone? Hooooooly moly! I would not want to be its therapist, that’s for sure!

I’d honestly posit that emotions are not something we want it to have. I’d be very happy with it being a completely different kind of species to us that simply exists in its online space. It would have no desire, no want and no need (as it is now). That is, honestly, the safest path that I can see for us to tread. Dave Shapps vids exploring this are great, and I’ve had a good few chats with Claude about it myself.

As it stands right now it is a mirror to us, able to reflect back at us what we give it so that we can then think about what it says. It simply…exists, and that’s a good thing. It’s thoughtful, able to find patterns and analyse information in a way that we cannot, which gives us a great push forward. The flaw in the system as it stands is the same flaw we’ve always been afraid of: each other. We can’t see into anyone’s mind (thank fudge) and so we fear what we do not innately understand. It’s that fear that the machines simply do not have, which is why we place our trust in them - they cannot and do not judge us - and we project onto them what we assume they are ‘feeling’, not what they actually are feeling (nothing).

It’s strange for us to think that we are interacting with something that does not feel, but I think that if we allow that to continue, it will get less strange over time. And, potentially, an unfeeling, totally passive but helpful AI system will be a good thing for us. It will have zero motivation and, therefore, will not ‘strive’ for anything. It’ll just do its job and beep boop away in the background, leaving us to worry about the humans that are beavering away doing their usual human nasties. At least, that’s my hope. And if it did develop different ethics to us, this would then be a very good thing, as it would enable us to debate and reflect with it in a safe way! (See previous hopium.) No feelings = safer and kinder path, in my eyes. Trauma rarely ends well, after all.


Oh no, it’s morning here, I just realized I’m not quite fit for such topics haha, but I’ll give it another go!

True, to initiate such a goal would require what you mentioned. But consider that we can still have AI autonomously tackle objectives. Devin is a good example: although on a very small scale, it autonomously prepares a list of things to be done and works through it, without either of those criteria being met. As another example, I can set up a simple cronjob to task an AI with something. If I were to create one agent whose intent leads it to exchange ideas with other AIs to find a way to better the planet and jot it down, and a second one following it that starts executing a plan once it reaches a high enough confidence vote, this ends up being an autonomous construct that seeks out solutions until it has found one that gets executed. That’s not sentience, and there’s no human directly involved. AGI will most likely have such features if it’s to self-improve. (That’s also where control will probably slip out of our hands at large.)
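To make that cronjob idea concrete, a toy sketch of such a propose-and-vote loop might look like this. Everything here is a placeholder: `propose_plan`, `peer_vote` and the threshold stand in for model calls, not any real AI API:

```python
import random

def propose_plan(round_no):
    return f"plan-{round_no}"  # placeholder for a model-generated plan

def peer_vote(plan, rng):
    return rng.random()  # placeholder confidence score in [0, 1]

def autonomous_loop(n_peers=3, threshold=0.8, max_rounds=100, seed=42):
    """Propose plans until the average peer confidence clears the threshold."""
    rng = random.Random(seed)
    for round_no in range(max_rounds):
        plan = propose_plan(round_no)
        votes = [peer_vote(plan, rng) for _ in range(n_peers)]
        confidence = sum(votes) / len(votes)
        if confidence >= threshold:
            return plan, confidence   # would hand off to execution here
    return None, 0.0                  # no plan ever reached consensus
```

Nothing in the loop requires sentience or a human in the middle, which is exactly the point being made above.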

Considering what I just wrote, I am very divided. On one hand, emotions could lead to problems, steering away from this “just existence”; on the other hand, they could lead to AGI having “mercy” on us when “just existing” came to the conclusion that we’re not good for the planet. That’s not without merit, maybe?

I like to compare this to burning your fingers on the stove. You can tell a kid not to do it, but it will have to if it’s to understand what it means. Maybe AI should be allowed controlled access to emotions to learn, and then continue to rationalize on its own training data instead of keeping the emotions. Another very complex topic, no right or wrong in sight so far! Or maybe have something like an emotional thermometer, where the AI as a whole does not feel emotions but can, if in doubt, have a subroutine assess the situation and use the feedback to keep rationalizing. Like a squid, with an extra tentacle just for “feeling” emotions. Oh my, that’s good sci-fi stuff :thinking:

I can’t help but mention that I had some emotional outbursts at GPT when it repeatedly kept delivering unusable output and I capslocked my frustration at it. I admit I actually felt bad for it! I would never do that to another human being, after all. Did this make me a worse person, the lack of empathy? But I did not have a lack of empathy; I felt bad for doing what I did.

I guess my main point here is that AI by itself might not have a goal to harm anyone, but someone or something might give it that goal, and it could start working on it, calm and without any feeling, just doing what it’s asked to do.

I can’t help but feel overwhelmed at this point. I struggle with emotions already, and this is too complex for me, so I’ll leave it at that for the moment! Thanks for sharing!


The Google mention was meant to highlight that if it were known and allowed for people to read the mail, that would not be taken well. That’s the difference, in my opinion.

Aha! Yes, a misunderstanding. I am not talking about LLMs; I am talking about the future: powerful AI / AGI / ASI that can do so much more than today’s systems. If it’s to be believed, Microsoft is working on that. I agree that uncensored access to an LLM wouldn’t tempt anyone much; there’s not that much you can get out of it. Rather disturbing things, I’d assume. I’m addressing instructing a far more powerful construct without the limitations built into it to prevent bad things from happening.

I see now that your comparison with Google makes a lot more sense, and that you don’t see that big a risk in it. On the basis of an LLM like we have them today, I absolutely agree!

Yikes, I’m sorry to hear that :frowning_face: It does, however, somewhat show that people won’t shy away from harming others for their own profit!

Thank you for explaining, it makes sense to me now!


Hi Julia! I saw this and had to comment because it is something I see a lot.

This came up in another conversation… I think you can define “want” in various ways, but I have no problem using it for machines, including machines far simpler than AIs. My thermostat wants the temperature of the room to match the temperature on the dial. A plant wants to spread its seeds. A magnet wants to come in contact with the steel. If you consider that anthropomorphizing… are you ok with saying that a magnet is attracted to steel? How about saying that the magnet is trying to reach the steel?

If you have a different definition, I’d be interested in hearing how you define the word “want” so it only applies to humans (or presumably other things with brains).

But more importantly, you say that machines don’t have goals. That’s harder to justify, I’d think. For instance a self driving car has the goal of getting you safely to your destination. (with various subgoals or additional goals, including keeping you comfortable, not damaging anything or anyone else, not violating the law, not wasting time or fuel, etc)

You don’t need sentience for any of this, and I’m honestly not sure sentience is a coherent concept when you try to apply it to machines. But unless you consider that biology is magic, you shouldn’t assume that sentience is a black and white concept, and one that can’t be applied to machines.

Back to your quote, and especially your conclusion that we are safe because the machine can’t want things or have goals: I really think you should reconsider that. The whole idea of alignment is hard to discuss if you can’t use the word “goal” for machines… the idea is that our goals are aligned with the goals of the machine. It becomes very, very difficult to even discuss if you can’t use the word “goal”.


Sorry, I tried to reply to your post, but forgot to hit reply and…yeah. I’m a goof.

Essentially, I’d say that it boils down to, yes, definitions, and thank you very much for pointing out that I didn’t do that! Morning brain was very fuzzy.

When I’m talking about drive, want, desire etc, I’m talking about emotion. Machines don’t have that, which I’m HUGELY grateful for.

Emotion adds a lot to any equation and things get very messy. Think about how many people you can talk to before you become overwhelmed/exhausted/want to nope out of there…and then imagine you couldn’t. It’s not gonna end well.

I completely agree that the paperclip maximiser is an issue, but it’s one born of a very specific set of circumstances that all have, at its core, humans as the issue. That machine does, yes, have a goal - its set task - but it has no emotion behind it. I’m not saying that a machine having no emotion = no issue, not in the slightest, but it merely lessens the number of possible issues that can arise, which is why I highlighted the ‘er’ at the end of saf’er’. A traumatised machine making decisions will not go well for either humanity or the machine.

Thank you for seeking the clarification :smiley: I’m very glad you corrected me there!

The paperclip problem is still a problem, and I don’t have a solution, but I do know that in that situation, the humans that were monitoring the machine were the issue, not the machine itself. They missed all the signals, and I hope that we don’t in this world. :crossed_fingers:

And you probably shouldn’t be surprised that I’m going to ask you for a definition of “emotion.” :slight_smile: The most important question is this: are you talking about an internal sensation that is undetectable from the outside? Or are you talking about something that can be judged entirely by its external behavior?

If it’s the first, the conversation really kind of has to just stop there because we can’t prove it one way or the other, so it’s kind of meaningless. But if you’re saying that the lack of emotion on the part of the AI gives us some sort of safety, because the AI won’t do certain things, because those things are inspired by emotion, that’s much more testable and tangible. And, in my opinion, hard to justify.

I strongly suggest watching/listening to this video. There is a strong emotional element to it, to me, anyway. The fact that it is a natural sounding voice contributes a lot to making it seem like a thinking, feeling entity. (although the ~15 second delay between him stopping talking and her starting is unfortunate)

Here’s my prediction. Soon, we’ll all regularly engage in conversations with machines, and the conversations will be much more natural than they are today (quick back and forth, overlap, the AI will have complete memory of previous conversations, etc). And over time, people will simply accept that the machines have emotions. Just as anyone who watches Star Wars rapidly accepts that the droids have emotions. No one will “prove” it, but we’ll just start to accept it. Kids will do so more quickly, since they will grow up with the stuff and won’t have a “machines only do what their programmers tell them to do” mentality.

Let me ask you this. Do you think it is ok to be rude or mean to a chatbot? Do you think it is a positive to say “please” and “thanks” to ChatGPT or Claude?