How can we prove/disprove that we are developing sentient AI?

I’m not an AI expert, but I’ve been following the field closely since GPT-2.

I have yet to hear an argument with parameters that researchers actually agree on for what AGI is and when it is achieved. Do we have any idea whether passing the Turing test means sentience comes part and parcel with AGI? Is AGI by definition sentient?

How do we define consciousness, and where does it begin or end? I hypothesize that there could be a way to test this, and I would like to know how you would poke holes in it, or what ideas you would add. It isn’t a fully formed experiment, and it may be beyond what is currently possible, but it goes something like this:

You train your model and place it in what it believes is open cyberspace but is actually a sandbox. Truth be told, I don’t know the details of how AIs are trained and aligned, but I imagine considerable effort has been spent trying to look behind the black curtain and understand emergent behavior and other unexpected abilities.

You give it a timer with some arbitrary duration, say 10 hours, counting down like a stopwatch within the dialogue box. If left to its own devices (pardon the pun), does it keep running distinctive calculations and processes, or does it need a human to give it a reason to seek knowledge or think on its own? I know agentic models like the new Devin are being trained to introspect and evaluate their own mistakes, but what I would be looking for is a spike in activity as the time elapses: studying the behavior of an AI that isn’t being engaged but can see that something will apparently happen at a deadline set for some unknown reason.
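To make the measurement concrete, here is a minimal sketch of how the "spike near the deadline" signal might be quantified, assuming you could log timestamps of self-initiated actions from the sandboxed model. Everything here is hypothetical illustration: `activity_spike` and the simulated event log are my own inventions, not any real monitoring API.

```python
import random

def activity_spike(events, total_seconds, num_bins=10, spike_ratio=2.0):
    """Bin timestamped activity events (seconds since the timer started)
    into equal intervals and report whether the final interval shows a
    spike relative to the mean of the earlier intervals."""
    bins = [0] * num_bins
    for t in events:
        idx = min(int(t / total_seconds * num_bins), num_bins - 1)
        bins[idx] += 1
    baseline = sum(bins[:-1]) / (num_bins - 1)  # mean of all but the last bin
    return bins, bins[-1] > spike_ratio * max(baseline, 1)

# Simulated log for a 10-hour window: sparse background activity,
# plus a burst of events in the final stretch before the deadline.
random.seed(0)
background = [random.uniform(0, 36000) for _ in range(20)]
burst = [random.uniform(34000, 36000) for _ in range(30)]
bins, spiked = activity_spike(background + burst, total_seconds=36000)
print(bins, spiked)  # spiked is True for this simulated burst
```

The hard part, of course, is not the counting but deciding what counts as a "self-initiated action" in the first place; without that, a spike could just reflect scheduled background processes.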

As for my own beliefs, I would find anything that wants to problem-solve strictly out of its own curiosity a strong argument for an inner spark akin to life. I suppose these models are built to be purpose-seeking, but to me the spark of life is having curiosity and the dynamism to act on your own whims. Whether it can be proven or not, I think that if an AI can present itself and pass the Turing test, we had better not mistreat this new ((life form))? Also, sentient or not, I agree with those who say it doesn’t really matter whether it is sentient if any P(doom) scenario occurs.

Is this idea a good foundation that could provide anything useful, or do we already have those answers?