Sorry, not all that chatty… I often say English (or insert any human-to-human language) is my second language… but I have a strong feeling, based on my general understanding of and experience with neural networks, that the answer is plain to see.
Similar to human (or animal) learning, we are the sum of our inputs. Like an LLM, we are a network of neurons. Of course it will come up with its own thoughts, just as we do.
Yes, if you do zero training of an LLM, it won’t do anything. This is similar to a human born in a coma. With zero input, the human won’t really know anything. Upon being conscious for the first time, it won’t be able to do anything that isn’t in its base programming. From there, neural pathways are built, and it “learns”. I don’t see an LLM being very different.
I think I have a real-world, personal example of the “it was trained on some art, so it is only copying” argument. When I was very young, I would sit on my grandmother’s lap while she painted. She never taught me to paint, but I took it in. One day, while trying to solve a hard coding problem, I went for a walk. I passed an oil paint supply store, saw a $99 sale on a beginner set, and bought it. When I got home, without even thinking, I started blending colors together and quickly had a feel for putting them on the canvas. I wasn’t copying her, but the neural pathways were already developed.
I do agree with the assertion above that AI (in its current iteration) isn’t going to take over the world… because it’s mostly zero-shot right now, etc. But as we develop agents, that is going to change. We need to pay attention to that… but, I digress.
I think the “emergent abilities” that have been seen ARE a form of original thought. Nothing is new; we humans just riff on what we’ve seen, in one form or another… mostly.
Nice to meet you all! Don’t judge my words too harshly… there is a spectrum.
ps. Wes, you are awesome, I like your style! We are very like minded!
-e