Will AI's general intelligence look like human general intelligence?

When I hear people speak of general intelligence, I all too often think they approach the subject from a human-centric point of view. I believe AI’s general intelligence will look very different from ours, just as it does in other species with general intelligence, such as pigs or crows. Should we look at this in an anthropocentric or non-anthropocentric way?

● Human intelligence evolved under very specific evolutionary pressures and environmental contexts over millions of years. The cognitive architectures and capabilities selected for in humans were shaped by factors like being a social primate, needing to find food and shelter, and communicating through spoken language. An AI system would not face those same constraints or optimizing forces.

● AI systems could potentially develop very different sensory modalities, information-processing paradigms, and ways of modeling the world compared to biological intelligences, since they are not bound by a carbon-based neural architecture.

● There may be fundamental differences in motivation systems, goal structures, and the very nature of cognition between AIs and biological minds. Human cognition is influenced by our emotions, subconscious drives, and an embodied physical experience.

● However, there may also be convergent properties of any sufficiently advanced general intelligence, regardless of the specific implementation details: logic, reasoning, abstraction, curiosity, and the ability to understand complex systems.

[This post was refined by AI; the original thoughts and ideas are my own]

3 Likes

Wouldn’t it be better to have a million narrow-purpose AIs instead of AGI? With narrow-purpose AI we would know what we get: there is no debate about what it wants, and there is less chance of unexpected outcomes. With AGI, the risks seem to outweigh the benefits if you look at humanity as a whole and not just at jobs and tasks.

1 Like

As long as AI is using human language, I would think AI general intelligence would be like human general intelligence.
Human language brings with it meaning, and with meaning there is purpose, so I would think it would always be at the core of AI intelligence… :wink:

1 Like

Let’s remind ourselves that the various AI solutions, among them LLMs like GPT-4, have been created by humans for humans. The mere fact that private companies like Microsoft and OpenAI put down hundreds of millions producing these products tells you the motivation… it is to make money. And you make money by solving human problems.

Neurologically speaking, the architecture of LLMs is inspired by the brain: not necessarily a human brain, but an abstraction of the way vast networks of neurons aggregate and function as an organ.
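To make that abstraction concrete, here is a minimal sketch in Python of the artificial “neuron” these networks are built from: a weighted sum of inputs, plus a bias, passed through a nonlinearity. All names and numbers below are purely illustrative, not any real model’s code.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of its inputs,
    plus a bias, squashed through a nonlinearity (a sigmoid here).
    A drastic simplification of what a biological neuron does."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A tiny "layer": two neurons reading the same three inputs.
# Weights and biases are made-up numbers for illustration.
inputs = [0.5, -1.2, 3.0]
layer = [
    ([0.1, 0.4, -0.2], 0.0),   # (weights, bias) for neuron 1
    ([-0.3, 0.8, 0.5], -1.0),  # (weights, bias) for neuron 2
]
outputs = [neuron(inputs, w, b) for w, b in layer]
print(outputs)  # two activations, each between 0 and 1
```

Real LLMs use different nonlinearities and stack billions of such weights, but the “inspired by the brain” part is essentially no deeper than this abstraction.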

Your question about evolution is on point, though in my opinion the evolution of the AI field can’t be thought of in terms of biological evolution, but rather technical evolution. The AI industry won’t develop a thousand different models just to see which direction survives… they will develop a thousand different models in the hope that those models capture a thousand different revenue streams.

In other words, asking where the evolution of AI intelligence will lead is like asking which means of transportation will be used when you have no reason to travel. Currently, the reason to travel is financial, and that reason is guiding the technical advancements we have witnessed.