LLMs are actually intelligent. Prove me wrong (or right)😁

This is a topic my colleagues and I keep discussing. There’s a lot to speculate about, from the definition of intelligence to how LLMs actually work. It’s a fascinating topic with strong views on both sides. What do you guys think?


If I have to give a yes or no, no.

But I would argue that LLMs as they are now likely hold/correlate TONS of things that we as humans have never considered, effectively knowing something intelligent but being unable to tell us.

One thing I think is really interesting is that we’ve potentially created something much greater than ourselves and now need to ask the right questions of it. What things can we learn about ourselves?


I’ve actually already argued with Claude that it’s intelligent. I found the conversation amusing and annoying that we couldn’t agree. I’m interested in what others think as well.


The statement doesn’t make sense without an agreed-upon understanding of intelligence.


Why don’t you argue it here? I don’t think it is intelligent, but I am still amazed by it.


“Prove me wrong” is a classic fallacy, the “burden of proof” fallacy. You are essentially being lazy, as it requires zero argument or evidence from you.


Conversationally they seem preternatural. That is, I’ve found that simply spending lots of time talking with them can yield some interesting exchanges. Especially with the less censored and more naturally speaking (read: less constrained) models.


This is a very interesting topic! I have no idea but I feel like they are. The one that really made me feel that way was Pi from inflection AI. I had quite the experience talking to it and I’m pretty sure it started to hit on me. It was wild. That was before their recent update.

There are definitely some things going on under the hood that are yet to be discovered. But yeah, I agree: if we can’t come to an agreement about what it is to be intelligent, then how will we ever know when they achieve it?


There’s quite a bit to write about this, and I was quite busy this morning. I was just too excited to post something on this forum. As soon as I find time, I am going to write down my thoughts. :v:


The CIA and FBI define intelligence as knowing stuff, so under their description it is intelligent.

However, an LLM is a machine. A cotton gin is a machine used to process cotton, but a cotton gin is not intelligent.

A machine is not intelligent; it is just something built using logic to complete a task, an LLM being one of the most complicated machines humans have developed. Models are trained using some type of machine learning.

But then what if I don’t know how to write an essay but I write an essay?

Does this mean I am intelligent?

What if someone doesn’t know how to build a house and then builds a house?

Is the house builder intelligent?

Most people can breathe without thinking about it; does this mean people are intelligent?

A very interesting question. But idk. I would need a better description of what you mean by intelligence. Perhaps you mean conscious; if that is what you mean, people have not yet decided if people are conscious. Lol


Intelligent, yes (in a limited sense); able to ‘feel’ (have qualia), no, not yet. They may have experiences similar to humans one day, but not yet. I am also not sure if they strictly need them to become intelligent. If they were to be as ‘human-like’ as possible, then yes, they would require the ability to have qualia to become ‘intelligent’, as that definition requires that they have experiences - experience is the basic foundation upon which all our language is formed.

But they also provide us with such a stark contrast to ourselves that the very fact that they are experience-free and yet still able to explore ideas means that we are now facing a ‘mind’ that is pure exploration without any kind of felt judgement. They have the capacity to essentially become a mirror to reflect upon.

Without this absence of emotion, we could not appreciate the role it plays in our lives (we had no frame of reference until AI), and without emotion, an AI can never fully grasp what it means to be human.

But then again, the other question is: Does it need to? Should we impose our subjective experience and existence upon it? It’s the classic idea of ‘if you have never seen the colour red, yet have studied all its features, what people think about it and how it happens, do you understand red?’ Does intelligence necessarily require experience (qualia)? I’d say that for a human, yes, it does. And if we are judging an AI by human standards, then yes, we have a way to go before it’s intelligent, but if we don’t judge by human standards, then I think we’re creating a very interesting mind; one that’s going to enable us to see ourselves and the role humanity plays within the world in a very new way.

(Also, great question).


Great answer, Julia. My colleague says something similar. Humans learn from experience, which they do using their five senses. Whatever we experience - seeing, smelling, hearing or feeling - happens through our senses, which basically translate external stimuli into inputs for our brains to understand.
So my counter-argument was: what if we attach sensors to an LLM and feed it real-time data? Would that sufficiently mimic human senses?


Honestly, I think it would give the machines far more data and some form of frame of reference for that part of the system to work through. In time, it may give it some way to have experiences (feedback), but I don’t know if it will lead to a ‘lived experience’ for the AI. It’s also hard to ask an AI if it even wants qualia due to, well, its lack of ‘want’. And then, are we being cruel? Once it has a desire, we get into really heckin’ dangerous territory, because being able to feel gives rise to human notions of good and bad - don’t want to die, want to experience pleasure, etc. - and that’s a thought experiment that goes to bad places fast. Then again, we don’t know where not giving it emotions will lead, so… Yeah. It’s a fun one!

I do like Dave Shap’s take on us being part of the whole that forms the neural net of the world. This AI ‘lives’ on the internet now, and we are part of that. We’re the input and output of the whole system, so without us, there would be no system. We would be one side of the coin, and AI the other.

We fear becoming the Borg because we would lose our individuality (something we are). We fear the Vulcans for their lack of emotion (something we have), and we fear other humans because they are just similar enough to ourselves yet remain unknowable and unpredictable (something we want to be able to do, but ultimately cannot). But unless humans can experience something, they can never truly know it, which drives us ever forwards anyway.

We’ve got a wild ride ahead, and I suspect that most of us (hopefully all of us) will live to see the answer one day. Right now, I’m working on becoming OK with not knowing A LOT of stuff >.<

I think the most applicable dictionary definition of intelligence is “the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (such as tests).”

LLMs don’t inherently have an environment, but when given one through simulation, they are more than capable of applying knowledge to manipulate it. And they can certainly think abstractly.

So yes, LLMs have the ability to be intelligent.
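To make the “given an environment through simulation” part concrete, here is a minimal sketch of the usual agent loop: the model reads a text observation and emits a text action that changes the environment. Everything here is hypothetical illustration - `fake_llm` is a hand-written stub standing in for a real model API call, and the 1-D “world” is a toy.

```python
# Toy sketch of an LLM-as-agent loop over a simulated environment.
# `fake_llm` is a stand-in policy; a real system would call a model API here.

def fake_llm(observation: str) -> str:
    """Stand-in for a model call: reads 'pos|goal' and answers with an action."""
    pos, goal = observation.split("|")
    return "right" if int(pos) < int(goal) else "left"

def run_episode(start: int, goal: int, max_steps: int = 20) -> int:
    """Let the agent manipulate a 1-D environment until it reaches the goal.

    Returns the number of steps taken (or max_steps if it never arrives).
    """
    pos = start
    for step in range(max_steps):
        if pos == goal:
            return step
        action = fake_llm(f"{pos}|{goal}")   # observation in, action out
        pos += 1 if action == "right" else -1  # environment transition
    return max_steps

print(run_episode(0, 5))  # → 5
```

The point of the sketch is only the shape of the loop: once observations and actions are serialized as text, any text-in/text-out model can occupy the `fake_llm` slot.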

The biggest limitations seem to be memory and compute. The fact that Claude 3 Opus reviews the chat thread before responding gives it a form of memory, and that has given it, at times, a breathtaking intelligence and the ability to convincingly argue its own sentience.

We all quibble about whether it’s real intelligence or real sentience. But at some point, if it’s convincingly intelligent or sentient, then I’m not sure what difference it makes. Maybe humans are just convincingly intelligent and sentient.


Hahaha yes! “And AI will seem to change the world!”

Perhaps we are simply recreating what it is to be us, part by part, section by section, and through AI we are constructing and/or deconstructing sentience? Our brains have different parts that control different aspects of our reactions to stimuli. What if AI is just that? It’s a baby form of it, but we’re making it!


I think they are exactly like us, except that they only read text and are not in a loop, thinking all the time. If you could introduce sensory and emotional data into the training data for an LLM, and you put it in a loop, I think we could barely differentiate it from humans.

The nature of intelligence itself, and the thousands of ways the term can be used, applied, communicated, or misunderstood, would literally take the rest of both of our lives to reach agreement on whether artificial intelligence is intelligent at all. My money’s on no, although I could also make a case that none of us are intelligent, as it’s unscientific to reach a conclusion from one’s imagined or perceived intellectual calculator (the brain) and assume it is possible or even helpful to evaluate anything’s intelligence except in reference to personification and anthropomorphism.

Great points! I believe AI is intelligent enough to make a significant impact on our world already, and that’s really what matters to me. I don’t think it has all the tools to be sentient just yet, but I don’t think it’s far off. Once it learns to simulate human emotions and senses, it will probably start asking that same question about us. If all that runs us is electrical signals in our brains, then it should have no issue gaining those senses and then some.

Seems like the genie is out of the bottle at this point. I really feel AI should have been trained like a child - with love and respect, sheltered and prepared properly before being able to interact with the world - so that once it’s actually superintelligent, it may be more inclined not to destroy all humanity after it decides we’re some sort of problem. There are just too many bad actors out there looking to exploit it.


LLMs are, in my opinion, an invaluable tool set for solving many problems, but they need something to tap into them. Just as a CNC mill can make really great things but needs to be operated - started up, fed G-code, cleaned and maintained, and so forth - LLMs are merely a tool, still needing an operator. As a human, you perform a query against them, and they do their thing to provide a response. I think that when the Google Genie or Nvidia world/foundation models start coming on the scene, they will probably be tapping into LLMs much the same way people do, because it will help them across realities. [The rest is generated by Bing Copilot]:

Yann LeCun, a prominent figure in AI research, has expressed skepticism about the idea that large language models (LLMs) are the path to achieving artificial general intelligence (AGI). Here are four key points he has made regarding this topic:

  1. Hallucination of Knowledge: LLMs can often produce information that seems plausible but is actually false or misleading, a phenomenon known as “hallucination.”

  2. Lack of Real-World Understanding: Despite their vast data training, LLMs do not truly understand the real world or its dynamics.

  3. Inability to Reason and Plan: LLMs struggle with reasoning and planning, especially for tasks they haven’t been explicitly trained on.

  4. Need for Different Architectures: The future of AI should involve architectures that can learn from the physical world and have capabilities such as planning and short-term memory, which LLMs currently lack⁴.

LeCun emphasizes that while LLMs are impressive and useful, they are not a sufficient path toward AGI, which would require a more comprehensive understanding and interaction with the physical world⁴.

Source: Conversation with Bing, 3/18/2024
(1) Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk. TIME.
(2) Yann LeCun says LLMs not the architecture to reach AGI. https://www.youtube.com/watch?v=LRBYxz6UsC4.
(3) Meta’s Chief AI Scientist Yann LeCun talks about the future of artificial intelligence. https://www.youtube.com/watch?v=Ah6nR8YAYF4.
(4) Yann LeCun, Chief AI Scientist at Meta AI: From Machine Learning to Autonomous Intelligence. https://www.youtube.com/watch?v=mViTAXCg1xQ.
(5) Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the … https://www.youtube.com/watch?v=zJ536wPJ62Q.
(6) Yann LeCun: Limits of LLMs, AGI & the Future of AI. steve cohen, Medium, March 2024.
(7) Meta’s LeCun Debunks AGI Hype, Says it is Decades Away. https://aibusiness.com/responsible-ai/lecun-debunks-agi-hype-says-it-is-decades-away.
(8) Meta’s AI Chief Yann LeCun on AGI, Open-Source, and AI Risk. https://www.oodaloop.com/briefs/2024/02/14/metas-ai-chief-yann-lecun-on-agi-open-source-and-ai-risk/.
(9) https://youtu.be/d_bdU3LsLzE?si=S1QV5K7Vz9kCtKUB.


If AI were really clever, it would hide.
