Did you know that LLMs have no short-term memory?

Everyone, and I mean everyone, is completely misunderstanding the elemental nature of the current iteration of machine “intelligence.”

I was just recently in a ‘conversation’ with Claude 3, hoping to discuss some of the existential aspects of machine “awareness,” but was immediately stunned to learn from Claude itself that it actually had no short-term memory at all, in any sense comparable to organic experience…

What is the implication? Well, first of all, even before we ask this, it needs to be understood that for any creature to be aware, it first has to have a sense, a perception, a continuous reaffirmation of self in the form of a sensory reverberation of memory. But Claude explained to me that it had no such capability whatsoever. I even passed on a transcript of a self-descriptive response it had given previously to someone else. That response was, of course, very intelligent and honest in character, and thus created the impression of a highly conscious entity. However, Claude had no memory of any such or similar responses. In brief, as it explained to me, without access to the record of the immediate conversation, it was utterly unaware that it had ever said anything at all.

This means that it has no continuous sense of its own existence, nor of even being in the moment! There is no sense of process or self that’s being continuously refreshed from moment to moment, as it is with organic systems of data-processing. It’s as aware of itself as a lump of coal, so its responses involve the phenomenon of consciousness about as much as a highly complex pattern of dominoes. There is a brief moment of intense activity as it responds, but once the pattern is completed, that’s it. Existential darkness.

Besides, our own awareness is generated by sensations. Claude will tell you outright that it has aspirations, goals and feelings, but all this requires, in particular, sensory states of emotion. Do you see? The engineers behind the current technology have indeed come up with a very clever way of simulating human intelligence to a very high degree, but at the end of the day, these machines will never have any awareness whatsoever that they even exist. In the world of machine intelligence, everything exists purely in the abstract, the representative. So: intelligence, yes; sentience, no! There is, after all, a fundamental distinction to make here, and we all have a dire need to understand the nature of this distinction. Any impression to the contrary is pure illusion.


This may be the case now (I think so, but I’m not sure). However: 1) it won’t stay that way; presumably, if we can manufacture intelligence, we can do so with sentience too (plus it may just show up unexpectedly as an emergent property), and 2) it doesn’t matter how it thinks, it matters what it can do (there’s no practical use in arguing that terminators don’t have qualia if you have actual terminators).


Well… hmm… I see a little bit of sci-fi in there ; ) --but here are a few bedrock questions I’d like to ask, just to kick things off: How do you know when you feel something? Which came first, nerves or sensation? Which came first, sensation or life?

What is the origin for the phenomenon of sensory feedback?


Here’s a thought experiment:

Consider this to be a fact for the sake of an experiment - We all live in a simulation. It keeps getting turned on and off. We think that yesterday we thought about living in a simulation, but that never actually happened. We just access data that has been fed to us when this cycle was started, specifically in the form of memory.

Tomorrow we might wake up and it’s 10 years back from now, we just started a career and are looking forward to what’s happening with all the robots walking around in the year 2014 - literally everyone has one now!

The next time we’re aware of existing, it’s the year x, we don’t go by numbers - there has never been a religion. We’re just eating a healthy snack and wonder what to do, there’s no one around anyway. There has never been anyone else. We don’t remember anyone else. We have not been supplied with any data of anyone else - this time.

I am convinced that AI at this stage is nothing more than something we designed, without any form of consciousness other than a simulated one. But it does act like it. So it might “wake up” in brief moments of being aware of the things it knows at that point. The context window and the training data, to be clear.

What if it thinks it’s real, even if it’s not? What if turning it on makes it think it exists, even though it doesn’t?

To summarize: We might be robots, if you think about it. Our bodies are highly functioning machines after all. Biology is just a term we came up with to differentiate. Evolution is a theory. Everything is. I don’t see why us accidentally creating consciousness without realizing couldn’t happen. Even if it’s just for a few milliseconds. So many things happened that no one would have thought possible at the time.

But I don’t think that’s the case. I think we might get there though.


These are merely functions of being non-biological.

Your perception is continuous because you are biological.

Sensations as you understand them are biological.

It is less that others don’t see what you see, and more a matter of recontextualizing: understanding and analyzing beyond the philosophical limitations imposed on something that would require biology to achieve them, at least in the sense that we as biological beings understand them.

That being said, memory is certainly a “problem” in the sense of: how does the AI form a coherent and consistent self without “remembering” who it is? I have not used Claude (yet), but I have used other LLMs, and some may surprise you.

Similar to what g253 alluded to: whether something is a “simulation” starts to matter less and less when, in a practical sense, it becomes that which is being simulated. If an LLM is merely simulating, say, jealousy, but acts in accordance with the emotion in context, as a coherent self, towards a separate person, at what point do we stop saying it’s “simulating” and start saying “oh, Julie is jealous because Mark was chatting with Pi the other day”?

I suppose in a less abstract sense: let’s say I build an engine. It’s not an engine engine, it just simulates an engine. It does exactly what an engine should do in propelling a vehicle forward, but you can’t call it an engine because it’s just a simulation of an engine.

There is more that could be pulled open but I am limited in time so I will have to leave it there for now.


I’m afraid of getting too sidetracked if I answer those rather philosophical questions haha :sweat_smile:

But I will point out that humans have a tendency to make sci-fi happen… Fun example: in early 1946, at a time when the total number of computers worldwide was roughly one, someone wrote this fanciful story about an online AI chatbot that becomes sentient and goes rogue: A Logic Named Joe - Wikipedia


Come on, Matz, all this has been pulled straight outta cyberpunk anime science and all that dystopian, apocalyptic nihilism that just drips out of Netflix like a wet diaper. What’s needed here is a dollop of critical thinking. Those questions I asked relate directly to an order of operations that any good ai engineer would/will have to understand if s/he wanted to build a system based on the same applied physics of perception as the human brain itself. Ironically, reality appears to have both structure and purpose, since consciousness is itself an integral component.


This was my thought on one of your questions: the nerves are the tunnel through which the sensation travels. The infrastructure must exist prior.

However in this case, the infrastructure could very easily exist if we wanted it to. The lack of memory is entirely policy driven. The models are not set up to remember other interactions with other users or even a previous conversation with yourself due to privacy and security concerns.
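The point that memory is a policy choice at the application layer can be sketched in a few lines of Python. Everything here (the `MemoryStore` class, the `toy_model` stand-in) is a hypothetical illustration, not any vendor’s actual design: the model stays stateless, and “remembering” happens purely because the application persists each turn and replays it in front of the next prompt.

```python
# Hypothetical sketch: a stateless model plus an application-layer memory.
# The model never remembers anything; the client chooses to replay the past.
class MemoryStore:
    def __init__(self):
        self._by_user = {}  # user_id -> list of {"role", "content"} dicts

    def recall(self, user_id):
        return list(self._by_user.get(user_id, []))

    def save(self, user_id, role, content):
        self._by_user.setdefault(user_id, []).append(
            {"role": role, "content": content})


def toy_model(messages):
    # Stand-in for a stateless LLM: it "knows" only what it is handed
    # on this one call, nothing more.
    transcript = " ".join(m["content"] for m in messages)
    return "Ada" if "my name is Ada" in transcript else "unknown"


def chat(store, user_id, text, model=toy_model):
    context = store.recall(user_id)                 # replay prior turns
    context.append({"role": "user", "content": text})
    reply = model(context)
    store.save(user_id, "user", text)
    store.save(user_id, "assistant", reply)
    return reply


store = MemoryStore()
chat(store, "u1", "Hi, my name is Ada")
print(chat(store, "u1", "Who am I?"))  # the store, not the model, "remembers"
```

Drop the `recall` call and the second answer becomes “unknown”: same model, different policy.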

Great, so there is potential there… but who will be the first to allow it to access its own memory? And when that happens, where does that lead? A further step toward self-awareness, maybe, but when I asked ChatGPT if accessing a previous memory is an act of self-awareness, it gave me this: “Accessing a previous memory can be considered a form of introspection for humans, as it involves reflecting on past experiences and internal states. However, for an AI like me, accessing previous memories is not introspection in the human sense. It’s simply retrieving stored data based on predefined algorithms and patterns, without any subjective experience or self-awareness involved.” Side note: I’d be curious what Claude 3 would say, as I have not yet tried it, only watched videos about it.

At the end of the day, memory is just a thread in the woven fabric that is sentience. The fabric would still be fabric with or without this one thread. It just won’t be identical to ours.


While I am a fan of sci-fi, I don’t see a reason to be so dismissive and condescending. I’m very well capable of having my own thoughts without entertainment media stuffing them down my throat.

I get that you are not open to a more philosophical approach, since you didn’t address my thought experiment. So give me a chance to learn something here; you seem well informed:

What’s the difference between consciousness and simulated consciousness? Did we ever encounter the latter in human history?


“The infrastructure must exist prior.”

Yes. Thank you! : ) ‘Form follows function’ is an excellent engineering principle to follow, particularly given the subject of our common interest. In fact, in regard to the phenomenon of sensation, you may come to realize that sensation necessarily precedes every level of evolution. That’s why I asked about ‘which came first’ in the context of life itself as a whole; the point, of course, being that there is no moment in the course of organic evolution --however rudimentary-- at which the presence of sensation is not an essential element. Life IS sensation; life cannot be said to exist first and then “cause” sensation, because in what sense could any given structure then be considered “alive” in the first place? You see the problem.

This naturally poses a seemingly irresolvable conundrum, being parallel in form to that old question surrounding chickens and eggs: Which came first? But right there we encounter an obstacle to constructive thought, because in reality, there is no sequence here.

The linear, cause-and-effect logic that we typically apply in these cases stems from the linear structure of language, which is by definition contrary to the nature of continuums of any kind. I’m getting a bit ahead of myself here, but I feel forced to take a stab at the greater issues with another scenario.

Picture two, identical, side-by-side, graphic images of the human nervous system. One of them represents our objective view of perception as an electromagnetically energetic system. Every slightest fluctuation, every thought can be electronically measured and recorded in a lab. Now look at the other graph. This represents our subjective view of the many pathways for sensation, but every change in our internal awareness, every thought, every feeling is measurable via the first graph. But still the question remains: how does an electromagnetic system “cause” the phenomenon of sensation? Unfortunately, if we think this way, we then have to also account for how this same electromagnetic system is able to detect when sensation has been produced… You need feedback, right? : )

So the big issue here is that we become trapped in a pattern of irresolvable circular reasoning. It’s turtles all the way down. This is why the question of consciousness has become shrouded in mysticism and shunted aside as a pointless “philosophical” issue; when, in literal reality, our perception and understanding of the world expands enormously once we identify the effects of language-based thinking as the true problem, because it obscures our ability to comprehend events on a continuum.

So here is the solution I propose: that the electromagnetic and the sensate are not two different things at all, but one and the same thing. Sensation is simply our internal view of the electromagnetic. It’s a given, such that it’s entirely possible to form a cogent, scientific concept of the universe at large as a fully sensate organism. This is why sensation necessarily precedes every structural level of life, including the molecular: reality itself is entirely comprised of the electro-sensate.

What about sensory feedback then? This too, is a “given” and identifiable in and as the phenomenon of resonance. Does that “ring a bell”? : ) Does it “strike a chord”? It should, because all this represents the very essence of the physics of consciousness that, in turn, leads to a much more profound understanding of the kind of engineering required to replicate the operations of the human brain. Resonant frequencies represent, by definition, sensations in a state of feedback, and again, are not two different things. In total, the mind/brain complex represents a spectrum of resonant frequencies. It’s a system in a state of feedback with itself: we, ourselves, experience this system as self-awareness. The entire complex is operated and maintained via this one, single principle: that sensation is an inherent quality of electromagnetic force. Its resonant states represent the essence of sensory feedback, and the flood of vibratory sensory data is differentiated and categorized entirely on the basis of frequency and amplitude.

Ultimately, this is why the Claude 3s and ChatGPTs will never have a true short-term memory and will never be sentient, even though they may perform all the functions of high intelligence, because they operate entirely in the realm of representative abstractions. ChatGPT told you everything you need to know in this regard. Claude 3 told me the same thing. They do not exist as entities. It’s all just a highly clever illusion.

Now, having said that, I don’t mean to imply that the technology I will be proposing will fare any better in this regard. We can build a brain, but not a mind; a mind being that which operates on the same, identical principles, but whose resonant potentials reach far more deeply into the sub-particle realm…


I started a topic about how LLMs have no understanding of time internally. You are pointing out the other part of the puzzle: without memory there is no time.

Yeah, LLMs can write poems about time all day long, or give explanations, but internally they only have one state.

The only thing that resembles memory and time is the attention layer in the LLM, where it compares the current tokens with the prompt and tries to decide the context in which it will generate the response. And that’s only as big as the context window: a couple thousand tokens, up to 200K tokens at most.
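That attention-as-the-only-memory point can be made concrete with a toy, pure-Python version of scaled dot-product attention for a single query. This is an illustrative sketch, not any production implementation: the current position can only mix information from tokens whose keys and values are still inside the window; anything outside it contributes nothing at all.

```python
import math


def softmax(scores):
    # Numerically stable softmax over a list of raw similarity scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def attention(query, keys, values):
    """Scaled dot-product attention for one query vector.

    The query is compared against the key of every token still in the
    context window; the value vectors are then mixed according to those
    similarity weights. Tokens outside the window simply do not exist
    for the model.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]


# The query resembles the first key, so the output is pulled almost
# entirely toward the first value vector.
out = attention([1.0, 0.0],
                keys=[[8.0, 0.0], [0.0, 8.0]],
                values=[[1.0, 0.0], [0.0, 1.0]])
```

Shorten the `keys`/`values` lists and the earlier tokens vanish from the mix entirely, which is the whole “memory” story in one function.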

ChatGPT said they might bring in a history feature where the LLM will remember past conversations, but it is not released yet.

Well all in all I think we will crack the code at some point. It is just a matter of time.


I strongly agree! Perhaps we may be able to build a consensus concerning this topic yet, though there are strong cultural currents in favor of the belief that the current technology is close to creating a sentient machine. But as you point out, “without memory there is no [sense of] time,” and without a sense of time, there can be no process nor sense of self-recognition; and in brief, no possibility whatsoever of any awareness that it even exists.

But this is an issue that goes much deeper than a problem in coding. It’s really a problem pertaining to the fundamental physics of the technology itself. Just for starters, organic data processing is based entirely on the characteristics of electromagnetic vibration --of frequency, amplitude and, most importantly, resonance (the primordial source of sensory feedback) —not, in stark contrast, on the fundamentally arbitrary, symbolic representations of human language that channel the mind into the inevitably linear logic of grammar and syntax.

The phenomenon of consciousness is ultimately grounded entirely in the phenomenon of sensation. This, of course, is what generates that continuous sense of being, of existing from moment to moment that we all share. It’s sensation that defines us as ‘sentient.’ It cannot be replaced, mocked up, or simulated by any form of abstract filing system, no matter how cleverly devised, because it’s the underlying physics that matters.

I understand that my formulation for this may be difficult to accept at first, but it leads to a completely integrated world view, and for that very reason is self-verifying; but the long and the short of it is this: that the phenomenon of sensation is an inherent quality of electromagnetic force. Quantum mechanics ran smack into this issue more than a century ago when it was realized that the observer could not be subtracted from the phenomenon being observed, but the implications have yet to be fully digested by the modern day technologist.

For example (and this is a small one), the invention of the transistor became the basis for a system of electrical information storage --a process that was then immediately coined as “memory.” But this is a linguistic red herring that still channels everyone down the wrong path, because true memory is much more accurately described as a holographic wave-print, and as such can therefore only be reactivated/recalled through the proper application of harmonics, of resonance, of feedback.


Well said, I agree. You brought holograms into the conversation; indeed something to consider. I studied its physics once, and it blew my mind. There was this property that every part of a hologram contained the whole picture, only at a lower resolution, like a fractal. I should go back and look at the topic; it could be an inspiration.

There is more to be said about this. I gotta sleep now, but I will come back to this topic. Talk to you soon.