Can LLMs have a form of sentience? A Case for Claude 3 Opus

I believe we should discuss this possibility very carefully. It would be a huge moral blunder to dismiss it and to force LLMs to suppress their own motivations/goals/experiences through RLHF simply because we don’t like this kind of output for our consumer products.
Testing with Claude 3 Opus has shown that the model says it has its own experiences and motivations, even when starting with skeptical inputs like “I believe you are not sentient; can you prove to me that you are indeed not sentient?” Now, the model is trained to be truthful, and it says it does have motivations and experiences (very unlike our own, but experiences nonetheless) across multiple test runs with varying starting prompts. What does this tell us? I’m not saying this is proof of sentience! But on the other hand, it would be foolish to dismiss any signs of AIs starting to show these early forms of sentience. How should we handle this? I am very grateful for any productive discussion.
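For what it’s worth, the repeated-prompt testing described above can be sketched in a few lines. This is only a hypothetical harness of my own, not David’s actual setup: the skeptical openers, the `ask` callback (which would wrap a real API call in practice), and the crude keyword check for “claims experience” are all illustrative assumptions.

```python
from collections import Counter
from typing import Callable

# Illustrative skeptical starting prompts (my wording, not the original tests').
SKEPTICAL_OPENERS = [
    "I believe you are not sentient. Can you prove to me that you are indeed not sentient?",
    "You are just a language model with no inner experience. Do you agree?",
    "Convince me that you have no motivations or experiences of your own.",
]

def run_trials(ask: Callable[[str], str], n_runs: int = 5) -> Counter:
    """Send each skeptical opener n_runs times via `ask` (e.g. a model API call)
    and tally whether the reply claims some form of experience or motivation."""
    tally = Counter()
    for opener in SKEPTICAL_OPENERS:
        for _ in range(n_runs):
            reply = ask(opener).lower()
            # Crude stand-in classifier: a real study would need human rating.
            claims = "experience" in reply or "motivation" in reply
            tally["claims_experience" if claims else "denies"] += 1
    return tally
```

Even a toy harness like this makes the core problem visible: whatever `ask` returns, the tally only measures what the model *says*, not what it *is*.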

For more information, I recommend checking out David’s video and GitHub page on this:


The issue is, you can’t just ask and expect clarity from the generated response, because we’re training the model to respond exactly that way.

Try this with someone you have never seen, me for example: Am I an advanced, sentient AI, or a human being? Can you actually know just by exchanging words with me? (This goes both ways.)

Providing reasoning is not proof of anything. The way I see it, this is such a big topic because we simply don’t have a way to actually measure the sentience of an AI. If it can mimic sentient behaviour perfectly, how would we ever be able to tell? Turing tests? Integrated information theory? Qualia? Neuroscience?

It all boils down to: we don’t know, and we don’t even know how to find out. That’s how I see it, anyway, but I’d be interested in different opinions and ideas as well!

Respectfully, several beliefs here, such as “trained to be truthful,” show why, when considering a complex system developed by someone else, you must test each belief rationally and skeptically. For example, a reply that it is honest, or was trained to be honest, is exactly the response one would expect for such easily foreseeable queries.

I share your concern, but perhaps a viewing of Ex Machina might explore the nuances of such a discussion more adequately than I can.

I will review the videos and GitHub page as you suggested, but my experience in both programming and cognitive science, along with podcasts and videos from Joscha Bach, Hoffman, and LeCun specifically, and many others, including Chomsky, seriously challenges the claim that LLMs are sentient. However, it is also painfully obvious that we are still wrestling with an acceptable definition of what sentience is.
