Yesterday I attended the Queensland AI Summit, and saw the keynote from Zach Kass, the (former) head of GTM at OpenAI. His talk was about the road to AGI. His arguments boiled down to:
AGI is scientifically inevitable. We are on the path.
Nobody can imagine what AGI will look like. We can imagine the steps that lead us there; the phase immediately before it will be Multi Modal AI.
You can imagine Multi Modal AI as all operating systems being “AI first”, so all software uses AI to interact with you, and with all other software (rather than APIs).
The big roadblocks to AGI will be bad policy (premature regulation) and failing to overcome physical constraints (current AI uses too much compute and energy, and needs several orders of magnitude of improvement). Bad policy is the thing most likely to prevent us from solving the physical constraints (e.g. nuclear energy policy).
I’m always looking out for a reason to be angry at the government, so I agreed with his last point. The rest, I struggled a bit more with.
I first read Zero to One when it came out, and I hadn’t built anything. I found it motivating, but empty. I read it again last year and with new perspective it was a totally different book. One idea I come back to often is that of definite and indefinite optimism.
Briefly, the idea is that it’s easy to be optimistic about the future without saying (or knowing) what you’re specifically optimistic about. “Things will get better, but in ways that are hard to imagine.” This is called indefinite optimism and it’s bad.
The problem with thinking this way is that it doesn’t lead to things getting better, because you can’t build something without having a design, vision, or plan. The counter-view, definite optimism, requires being optimistic about the future because of the specific things you can imagine and are working towards building. (Read the full chapter notes here.)
I find the AGI arguments to fall mostly in the indefinite optimism camp. We’re told that AGI will cure cancer and Alzheimer’s; that AGI will solve our energy problems (if we haven’t already) through scientific breakthroughs; that it will lead to infinite abundance of food and resources and make working optional for everyone; that it will give everyone great healthcare and education; that it will cause mass deflation (but in a good way); and that it will be the most impactful industrial revolution yet.
To be clear, I want them to be right about many of these things. I also wanted the crypto people to be right when they said crypto would solve the problems that central banks and modern monetary theorists create. But it turns out wanting to be right wasn’t enough, and without a more definite view of how that would happen, everyone just ended up trading pictures of monkeys. Not because that’s a better outcome, but because it wasn’t clear what to build to achieve the outcome they wanted.
AGI utopians argue that AGI is inevitable, but that we can’t yet imagine what it will look like, presumably because it will be so different to anything we can conceive today. I can see the argument for why imagining it might be a bit scary and put some people off, but if everybody claims they have no idea what it will look like, how will we build towards it?
Indefinite optimism leaves us in a weird spot today. AGI utopians argue that everyone should look forward to a world where AI is integrated into everything. To understand what that will feel like, start by using some AI today. How do you try AI? Go use a chatbot. Okay, so in the future there will be more chatbots? NO, you’re not imagining the possibilities enough!
Without a more definite view of how AGI will come about, and what it will look like, I think a more likely path for AI is that the future looks similar to today, but with more chatbots.
Two other thoughts that didn’t really fit into that:
Zach argued that new companies started today that are AI first will totally disrupt all incumbents (unless they reinvent themselves as AI first), in every industry. At first I was like ehh, but then I realised I know first hand how internet-first companies were able to totally disrupt companies that didn’t reinvent themselves to be internet-first. Maybe not in every single industry, but in many. So maybe there is something there.
I think most laypeople today think of AI as something that’s been anthropomorphized. Most “AIs” you interact with have names1 and try to behave like humans. Does this mean that imagining an AGI future means imagining a future with lots of robots? It’s not so hard to imagine! But on the other hand, some people argue that AGI will be like electricity: an invisible force that’s available everywhere, in all the things you use. This is hard enough to picture without the mental model we all now have of AI being personal.
By the way, the QLD AI Hub team did such an awesome job with this conference. It was really well put together. I appreciated the invite!
1. big fan of X, the Spotify DJ
Re: your comment that we're told AGI will cure our healthcare / energy problems: aren't there lots of more concrete claims as to _how_ AGI (or just very good AI) will help here? E.g., predicting protein folding, or building a virtual cell, and then combining these advances to run orders of magnitude more experiments to find the compounds that make an impact. On the energy front, I'm a little more skeptical, I suppose.
Re: whether incumbents get disrupted. I'm probably more with you on the original "ehh". The internet was a new paradigm that required entirely different capabilities for a company and completely altered the economics of business (e.g., distribution and scale fundamentally changed). AI feels big, but it runs on the same hardware, and we might change the UX but it's still software capabilities *and*, so far, models work best when you have the right data, which skews incumbent. I'm heavily skeptical of saying that this generation of big companies is unbeatable, but I'm also not sure AI is to today's companies what the internet was to the last generation.