Vibing our way into the singularity

My LinkedIn feed is positively abuzz with CEOs and business majors desperately trying to tell me how vibe coding will make all software developers obsolete any moment now. The Anthropic CEO, Dario Amodei, said recently that AI will write all code for software engineers after 2026 ("Anthropic CEO Predicts AI Will Take Over Coding in 12 Months", Entrepreneur). I think this shows just how little these people know about what software developers actually do at work, or - in cases like Mr Amodei - he knows it very well and lies to you to pump his own stock price. I don't know if Amodei has any stock he could pump, but it's the vibe that counts. Maybe he actually drank the Kool-Aid and wholeheartedly believes what he's saying. In that case, may god have mercy on his soul.

I'd even go a step further and say that AI exposed how little people in general know about what professionals from other disciplines actually do. Be honest, have you ever seen one of those AI videos and thought to yourself 'wow, I bet soon it will be good enough to replace a lot of people in the movie industry'? If so, you probably don't know what professionals in the movie industry actually do.

Professionals worry about the last 20%, vibe coders and amateurs only worry about the first 80%

Professionals worry about the last 20% of the Pareto principle, the tiny details that are hard to get right. For anyone who has a clear product vision and attention to detail, these last 20% are non-negotiable, and if you want to work for them, you have to get them right. I've seen developers pore over code to shave off a few microseconds, sound engineers complain about a sound playing an imperceptible 30ms after you press a button instead of 15ms, art directors home in on a single pixel in a video review and demand its color be changed. All of these people are professionals and have a very high bar for quality work - do you think they'll be impressed if you present them some AI-generated vibe-based work that slightly changes every time you chant your 'pretty please' proomts into the chat window?

Where no man has gone before

But let's go a step further, let's jump right into the singularity (singularity in this case meaning an AI capable of self-improvement, which leads to a rapid increase in computational power, which leads to more self-improvement, ad infinitum) and AGI. Almost every AI expert and researcher is convinced that something like a self-improving AI is just around the corner, or at least that something like AGI with super-human intelligence is possible. Ray Kurzweil has been telling us that the singularity is near for over 20 years now! Looking up the release date of that book made me feel really old for some reason...

I think AI singularity is impossible, at least until I see some compelling evidence to change my mind. The hardness of the self-improvement problem vastly outpaces any gains that a super-intelligent AI could implement. Let me explain what I mean by that - I've taught theoretical computer science to university students and spent a good deal of my career banging my head against hard algorithmic problems involving big search spaces (multidimensional search spaces very quickly outpace any CPU and memory you have; you need very smart constraints to guide your algorithms, and even then you mostly just get bad approximations of a real solution). I'm by no means an expert in the field, but I've developed a gut feeling for how hard a problem is to solve computationally and what's possible with the limited resources we have available in our universe. Intelligence (I would define 'intelligence' as the capability to solve problems: the more effective and efficient your problem-solving skills, the more intelligent you are) and computational speed are great at solving polynomial-time problems. But they very quickly run up against a wall when having to solve an exponential-time problem that can't be reduced down to polynomial time via approximation.
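That wall between polynomial and exponential time can be made concrete with a toy example - a brute-force subset-sum search (this is my own illustration, not from any AI benchmark). The search space doubles with every added element, so even enormous raw speed-ups barely move the size of the instance you can solve:

```python
from itertools import combinations

def subset_sum_bruteforce(numbers, target):
    """Try every subset of `numbers` -- 2^n candidates for n numbers."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

# Doubling your compute buys you exactly ONE more element:
for n in (30, 60, 90):
    print(f"n = {n}: {2**n:.1e} subsets to check")
# n = 30 is feasible (~1e9 subsets), n = 90 is hopeless (~1e27),
# no matter how 'intelligent' your hardware upgrade is.
```

A thousandfold faster machine lets you handle about ten more elements before the doubling eats the gain again - that's the gut feeling about exponential problems I'm talking about.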

Allow me a simple analogy to show why this is tricking so many people into believing we're close to a singularity: a computer trying to solve a problem is like an athlete trying to jump over an obstacle. We've developed some great tools akin to a trampoline that allow the athlete to jump over obstacles previously thought unjumpable. A great example here would be AlphaFold, which pretty much solved the protein folding problem for known proteins. AI with its vast training sets is like an automatic muscle trainer, supercharging the athlete to jump higher and higher. The obstacle they're talking about, however - self-improving AI - would require the athlete to jump to the fucking moon. And the next self-improvement cycle would require a jump to Alpha Centauri. Exponential growth be like that.
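To put rough numbers on that analogy (the distances are standard astronomical figures; the framing is mine):

```python
MOON_KM = 3.844e5            # average Earth-Moon distance
ALPHA_CENTAURI_KM = 4.13e13  # ~4.37 light-years

print(f"Moon jump:          {MOON_KM:.2e} km")
print(f"Next 'improvement': {ALPHA_CENTAURI_KM:.2e} km")
print(f"Scale-up required:  {ALPHA_CENTAURI_KM / MOON_KM:.0e}x")
# Each cycle demands a ~100-million-fold bigger jump,
# not a comfortable percentage gain over the last one.
```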

The only way this is even remotely realistic is if you believe that after every new self-improvement the AI can come up with a new technological breakthrough that allows it to completely change the nature of its computations. Maybe quantum computers are strong enough to let it jump to the moon first, but after that it will hit another wall very quickly. Imagine having to not only develop a new technology like quantum computers for every improvement iteration, but having to do it faster each time. It just sounds like people have zero regard for actual physical limits and live in a made-up sci-fi novel.

Even if we could develop an entity with infinite intelligence in the form of an oracle (which could actually be possible if multiverses are real and we find a way to communicate with them, or do something like combining the superpositions of our quantum computers with theirs), there would still be innumerable trivial computational problems that we couldn't solve. You want to tell me AGI can infinitely self-improve itself to the singularity by vibing its way out of the Matrix simulation we currently live in without considering the halting problem even once? Please.
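The halting problem jab is Turing's classic diagonalization argument, which a few lines of Python can sketch (the function names are my own, and `halts` is deliberately impossible to implement):

```python
def halts(program, program_input):
    """Pretend this is a perfect halting oracle. It cannot exist."""
    raise NotImplementedError("no such oracle can exist")

def troublemaker(program):
    """Do the opposite of whatever the oracle predicts."""
    if halts(program, program):
        while True:   # oracle said 'halts'? Then loop forever.
            pass
    return "halted"   # oracle said 'loops'? Then halt immediately.

# Now ask: does troublemaker(troublemaker) halt?
# - If the oracle answers yes, troublemaker loops forever -> oracle wrong.
# - If the oracle answers no, troublemaker halts          -> oracle wrong.
# Either way the oracle contradicts itself, so no amount of
# self-improvement can ever produce one.
```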

Dog-food your own AI slop, you coward

So, as a lay person, how can you judge whether Google or Microsoft is just soft-pitching you the next Bitcoin bubble and trying to get your money for their AI platforms - or whether they can deliver on their promises and solve real-world problems that professionals would look at with a satisfied nod? It's pretty easy - it's when they start dog-fooding and building their own critical infrastructure completely with the same AI tools they want to sell you. And I don't mean their engineers getting more productive by using Copilot, I mean replacing them completely with Copilot and letting the AI reign supreme. If Google starts to switch the code base for its ad-revenue service and search engine over to AI, with no human intervention (aside from some vibes and prompting, of course), then I'll consider it more than a money grab. I want to see Meta replace the entire dev-ops team for their production servers with a chat window on Mark Zuckerberg's desktop. Do it, you coward!

One of the problems with the ham-fisted approach that AI companies take right now - scraping the entire web several times a day for training data, torrenting copyrighted material and hammering open-source project websites into non-responsiveness - is that they are cutting off their own lifeblood. Without a StackOverflow to provide millions of data points for training, how can you train your AI to get better at coding? A lot of the current web is built on the goodwill of all the actors involved, and that goodwill is eroding fast. I can see a world where all the new and hot content from popular web frameworks, upcoming artists or book authors is locked behind a login screen to keep those pesky AI bots outside. That would make it really hard for AI companies to train their models on relevant data.

In conclusion, AI companies currently look to me like a car salesman trying to sell me a used VW Golf with a multitude of problems. To distract me from the leaking oil pan, he shows me how the car is crushing a bunch of benchmarks that the dealership developed itself. And on top of that he's trying to tell me that soon the car will be able to drive so fast that it crushes the light-speed barrier, in an event they call the singularity, and then we'll all live in an age of car abundance. Meanwhile, they themselves don't want to drive the car. Sure, buddy, sure.

Author: Michael Galetzka. Last modified: March 30, 2025.