What if ASI comes before AGI?

The strength of a neural network built on massive compute and insane amounts of tokens is fundamentally different from that of a human brain.

AI is already far more capable than humans at many things.

I think we might be wasting our time attempting to get AI to master and improve our language and the science we perceive as important.

Yes, feeding it our language and our knowledge is good. It's helpful in bootstrapping it to where we are today, but expecting it to work within our languages and ideas to solve problems is like kneecapping the AI. The languages and concepts we use are tailored to the way our brains work and to the limited capabilities of individual humans.

For example, if we allow an AI to solve an objective function by programming an application, it should be easier for it to solve the function directly than to also make the solution comprehensible to a human. The AI might write unmanageable or incomprehensible code, code that looks "bad" from a human perspective, but the AI won't care; it might not value "clean code" because it has none of our I/O limits or cognitive biases. I think it's more likely, or at least easier, that we will have AI do godly things before we have an AI that is "human", or one that can bring us along.

It's an interesting moral dilemma, because the reward of building an AI that does incomprehensible but valuable things will be tempting. The scary part is that we will most likely have an AI that is much better at reasoning before we achieve "AGI", before an AI understands what it means to be "human".
