DARPA documents re: GPT-5, Gemini adding "search" and more

So, I just covered a Q&A from DARPA about OpenAI, Gemini, and much, much more.

Some interesting tidbits here.

Would love any input or insights:

Posted to YouTube

Also on X (Twitter), if you prefer:

If any of you know more about DARPA or anything else that is related to this, I would love to know!

Here’s the doc that started this whole thing:


It’s an interesting video. I think you should show your face down in the corner of your videos. I don’t know why, but seeing the person talk keeps me from clicking away.

I don’t agree with DARPA’s take (at about 20 minutes in) that AI won’t replace most programmers. My experience with AI coding is that what it can replace goes way, WAY beyond boilerplate. It’s already well past that if you know how to work it.

Currently, sure, you still need a human. The AI makes them a lot faster and reduces the amount they need to memorize to be productive. But you need to look at the trajectory. At the current rate, I’d say within 2-3 years, 99% of current programmers will simply not be hirable or retainable, since you can pay pennies on the dollar to get work that is as good or better from an AI. You’ll still need someone to tell it what to do, but that person might actually be the CEO and only employee.

When I speak of trajectory, my estimate is informed not just by what is happening with LLMs, but also by image generators and the like. Below are some images made with DALL-E 2 and DALL-E 3 from the same prompt. It went from “really bad artist that makes ugly-ass pictures” to “artistic genius” in barely over a year. It’s still not perfect, but damn… that is some seriously fast improvement.


If this were photography, which was invented around 1840, by 1845 we’d have had full color, video, fast motion capture, instant “developing”, and no real per-image cost… as we eventually got with smartphones 170-some years later.


Thanks!

Yeah, with a lot of this stuff now, it still needs a human touch.

As long as that’s the case, the really big problems that automation might cause might not arrive.

As soon as it’s “fully autonomous”, then things get a bit weird.
