Software Survival 3.0
by jaybrueder on 1/29/2026, 9:32:31 AM
https://steve-yegge.medium.com/software-survival-3-0-97a2a6255f7b
Comments
by: nickorlow
Stuff like this makes me feel like I'm living in a different reality than the author
1/30/2026, 10:09:56 PM
by: Kerrick
> Friction_cost is the energy lost to errors, retries, and misunderstandings when actually using the tool. [...] if the tool is very low friction, agents will revel in it like panthers in catnip, as I'll discuss in the Desire Paths section.

This is why I think Ruby is such a great language for LLMs. Yeah, it's token-efficient, but that's not my point [0]. The DWIM/TIMTOWTDI [1] culture of Ruby libraries is *incredible* for LLMs. And LLMs help to compound exactly that.

For example, I recently published a library, RatatuiRuby [2], that feeds event objects to your application. It includes predicates like `event.a?` for the "a" key, and `event.enter?` for the Enter key. While building with the library, I saw the LLM try `event.tilde?`, which didn't exist. So... I added it! And dozens more [3]. It's great for humans *and* LLMs, because the friction of using it just disappears.

EDIT: I see that this was his later point exactly! FTA:

> What I did was make their hallucinations real, over and over, by implementing whatever I saw the agents trying to do [...]

[0]: Incidentally, Matz's static typing design, RBS, keeps it even more token-efficient as it adds type annotations. The types live in different files than the source code, which means they don't have to be loaded into context. Instead, only static analysis errors get added to context, which saves a *lot* of tokens compared to inline static types.

[1]: Do What I Mean / There Is More Than One Way To Do It

[2]: https://www.ratatui-ruby.dev

[3]: https://git.sr.ht/~kerrick/ratatui_ruby/commit/1eebe9806308003e3ec1c580f8b8dd9b29c1eeef
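The predicate-per-key pattern described above can be sketched in a few lines. This is a hypothetical illustration, not RatatuiRuby's actual implementation: the `KeyEvent` class and its key table are invented here to show how `event.tilde?`-style predicates can be generated from data, so that every name an agent (or human) is likely to guess actually exists.

```ruby
# Hypothetical sketch of a predicate-style key event API.
# Each named key gets an `event.<name>?` method, so callers can
# guess method names and usually be right -- low-friction by design.
class KeyEvent
  NAMED_KEYS = {
    enter: "\r", tab: "\t", escape: "\e", space: " ",
    tilde: "~", backtick: "`"
  }.freeze

  def initialize(char)
    @char = char
  end

  # Generate one predicate per named key, e.g. `enter?`, `tilde?`.
  NAMED_KEYS.each do |name, char|
    define_method("#{name}?") { @char == char }
  end

  # Single printable letters get predicates too: `a?`, `b?`, ...
  ("a".."z").each do |letter|
    define_method("#{letter}?") { @char == letter }
  end
end

event = KeyEvent.new("~")
puts event.tilde?  # true
puts event.a?      # false
```

Because the predicates are generated from a table rather than written by hand, "making a hallucination real" is a one-line change: add the key to the table and the guessed method exists.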
1/30/2026, 9:00:08 PM
by: pron
Here's what I don't get about the "AI can build all software" scenario. It extrapolates AI capabilities up to a certain, very advanced point, and then, inexplicably, it stops.

If AI is capable enough to "build pretty much anything", why is it not capable enough to also use what it builds (instead of people using it) or, for that matter, to decide *what* to build?

If AI can, say, build air traffic control software as well as humans, why can't it also be the controller as well as humans? If it can build medical diagnosis software and healthcare management software, why can't it offer the diagnosis and prescribe treatment?

Is the argument that there's something special about writing software that AI can do as well as people, but not other things? Why is that?

I don't know how soon AI will be able to "build pretty much anything", but when it does, Yegge's point that "all software sectors are threatened" seems unimaginative. Why not all sectors, full stop?
1/30/2026, 10:24:59 PM
by: mrandish
Some interesting long-term, directional ideas about the future of software dev here, but the implied near-termness of SaaS being disintermediated ignores how management in large orgs evaluates build-vs-buy SaaS decisions. 'Build' getting 10x cheaper/easier is revolutionary to developers and quite possibly irrelevant, or only 'nice-to-have', to senior management.

Even if 10x cheaper, internally built SaaS tools don't come with service level agreements, a vendor to blame/cancel if it goes wrong, or a built-in defense of "But we picked the Gartner top quadrant tool".
1/30/2026, 9:45:43 PM
by: joshribakoff
> First let’s talk about my credentials and qualifications for this post. My next-door neighbor Marv has a fat squirrel that runs up to his sliding-glass door every morning, waiting to be fed.

Some of the writing here feels a little incoherent. The article implies progress will be exponential as a matter of fact, but we will be lucky to maintain linear progress or even avoid regressing.
1/30/2026, 10:22:15 PM
by: 2001zhaozhao
I'm not convinced by this post's hopeful argument near the end. If you are doing SaaS as a way of making money and don't have a deep moat aside from the code itself, it will probably be dead in a few years. The AI agents of the future will choose free alternatives by default over your paid software, and by the way, said free alternatives are probably made using reliable AI agents and are high-quality and feature-complete. AI agents also don't need your paid support or add-on services from your SaaS company, and if everyone uses agents, nobody will be left to give you money.

As a technical person today, I wouldn't pay a $10/month SaaS subscription if I can log in to my VPS and tell Claude to install [alternate free software] self-hosted on it. The thing is, everyone is going to have access to this in a few years (if nothing else, it will be through the next generation of ChatGPT/Claude artifacts), and the free options are going to get much better at fitting any needs common enough to have a significant market size.

You probably need another moat, like network effects or unique content, to actually survive.
1/30/2026, 10:09:34 PM
by: munificent
I'm not a business dude, but even I can see one problem in his argument: he tacitly equates "agents use your software" with "your software survives". But having your software invoked by an agent doesn't magically enrich you. Up to now, *human* users have made software successful in one of two ways:

1. Paying money for the software or access to it.

2. Allowing a fraction of their attention to be siphoned off and sold to advertisers while they use the software.

I don't think advertisers want to pay much for the "mindshare" of mindless bots. And I'm not sure that agents have wallets they can use to pony up cash with. Hopefully someone will figure out a business model here, but Yegge's article certainly doesn't posit one.
1/30/2026, 10:19:18 PM
by: meisel
> Gas Town has illuminated and kicked off the next wave for everyone<p>That sounds pretty hyperbolic. Everyone? Next “wave”?
1/30/2026, 9:06:34 PM
by: xyzsparetimexyz
I feel like this isn't adequately accounting for the fact that existing software is becoming easier to refactor as well. If someone wants a 3D modelling program but is unsatisfied with the performance of some operation in Blender, are they going to vibe code a new modelling program, or are they going to just vibe refactor the operation in Blender?
1/29/2026, 5:15:43 PM
by: deborahjacob
In big orgs, 'agents can build it' rarely changes the buy vs build decision. The pragmatic moat I see isn’t the code, it’s turning AI work into something finance and security can trust. If you can’t measure and control failure-cost at the workflow level, you don’t have software.
1/30/2026, 10:21:35 PM
by: jonathaneunice
> If you believe the AI researchers–who have been spot-on accurate for literally four decades

LOLWUT?

Counter-factual much?
1/30/2026, 10:09:58 PM
by: coldtea
> *I debated with Claude endlessly about this selection model, and Claude made me discard a bunch of interesting but less defensible claims. But in the end, I was able to convince Claude it’s a good model*

Convinced an LLM to agree with you? What a feat!

Yegge's latest posts are not exactly half AI slop, half marketing spam (for Beads and co), but close enough.
1/30/2026, 10:05:45 PM
by: SirensOfTitan
I'm frankly exhausted from AI takes from both pessimists and optimists--people are applying a vast variety of mental models to predict the future during what could be a paradigm shift. A lot of the content I see on here is often only marginally more insightful than the slop on LinkedIn. Unfortunately the most intelligent people are most susceptible to projecting their intelligence on these LLMs and not seeing it: LLMs mirror back a person's strengths and flaws.

I've used these tools on-and-off an awful lot, and I decided last month to entirely stop using LLMs for programming (my one exception is if I'm stuck on a problem longer than 2-3 hours). I think there is little cost to not getting acquainted with these tools, but there is a heavy cognitive cost to offloading critical thinking work that I'm not willing to pay yet. Writing a design document is usually just a small part of the work. I tend to prototype and work within the code as a living document, and LLMs separate me from incurring the cost of incorrect decisions fully.

I will continue to use LLMs for my weird interests. I still use them to engage on spiritual questions, since they just act as mirrors on my own thinking and there is no right answer (my side project this past year was looking through the Christian Gospels and some of the Nag Hammadi collection from a mystical / non-dual lens).
1/30/2026, 9:11:35 PM
by: tra3
I've been using Claude and it's a game changer in my day to day. The caveat being, of course, that my tasks are at a small "feature" level and all interactions are supervised. I see no evidence that this is going to change soon...

My other thought, that I can't articulate that well, is: what about testing? Sure, LLMs can generate tons of code, but so what? If your two-sentence prompt is for a tiny feature, that's one thing. If you ask Claude to "build me a todo system", the results will likely rapidly diverge from what you're expecting. The specification for the system is the code, right? I just don't see how this can scale.
1/30/2026, 9:28:03 PM
by: xyzsparetimexyz
> I debated with Claude endlessly about this selection model, and Claude made me discard a bunch of interesting but less defensible claims. But in the end, I was able to convince Claude it’s a good model

This is not a good way to do anything. The models are sycophantic; all you need to do in order to get them to agree with you is keep prompting: https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-advice-fatal-overdose-21266718.php
1/29/2026, 5:27:03 PM
by: troupo
Steve Yegge used to be a decent engineer with a clear head and an ability to precisely describe problems he was seeing. His "Google Platforms Rant" [1] is still required reading IMO.

Now his bloviating blog posts only speak of a man extremely high on his own supply. Long, pointless, meandering, self-aggrandising. It really is easier to dump this dump into an LLM to try to summarize it than to spend time trying to understand what he means.

And he means *very* little.

The gist: I am great and amazing and predicted the inevitable orchestration of agents. I also call the hundreds of thousands of lines of extremely low quality AI slop "I spent the last year programming". Also here are some impressive-sounding terms that I pretend I didn't pull out of my ass to sound like I am a great philosopher with a lot of untapped knowledge. Read my book. Participate in my meme coin pump-and-dump schemes. The future is futuring now and in the future.

[1] https://gist.github.com/chitchcock/1281611
1/30/2026, 10:02:09 PM
by: voldemolt
Holy fuck is this guy blowing smoke up his own ass.

He needs an editor; I’m sure he can afford one.

I look forward to him confronting his existence as he gets to be as old as his neighbor. It will be a fun spectacle. He can tell us all about how he was right all along as to the meaning of life. For decades, no less.
1/30/2026, 9:11:30 PM
by: AIorNot
So much noise...

Too many people are running an LLM or Opus in a code cycle or a new set of Markdown specs (sorry, Agents), getting some cool results, and then writing thought-pieces on what is happening to tech. It's just silly and far too driven by the immediate news cycle (moltbot, gastown, etc., really?).

Reminds me of how the current news cycle in politics has devolved into hour-by-hour introspection with no long view or clear-headed analysis -- we lose attention before we even digest the last story. Oh, the nurse had a gun; no, he spit at ICE; masks on ICE; look at this new angle on the shooting, etc. Just endless tweet-level thoughts turned into YouTube videos and 'in-depth' but shallow thought-pieces.

It's impossible to separate the hype from baseline chatter, let alone figure out what the real innovation cycle is and where it is really heading.

Sadly this has more momentum than the actual tech trends and serves to guide them chaotically in terms of business decisions. Then, when confused C-suite leaders who follow the hype make stupid decisions, we blame them... all while pushing their own stock picks...

Don't get me started on the secondary LinkedIn posts that come out of these cycles. I hate the low barrier to entry in connected media sometimes... it feels like we need to go back to newspapers and print magazines. </end rant>
1/30/2026, 9:13:08 PM
by: henning
This is one of those instances where bullshit takes more effort to debunk than it does to create.

We already went over how Stack Overflow was in decline before LLMs.

SaaS is not about build vs. buy, it's about having someone else babysit it for you. Before LLMs, if you wanted shitty software for cheap, you could try hiring a cheap freelancer on Fiverr or something. Paying for LLM tokens instead of giving it to someone in a developing country doesn't really change anything. PagerDuty's value isn't that it has an API that will call someone if there's an error; you could write a proof of concept of that by hand in any web framework in a day. The point is that PagerDuty is up even if your service isn't. You're paying for maintenance and whatever SLA you negotiate.

Steve Yegge's detachment from reality is sad to watch.
1/30/2026, 8:38:29 PM
by: Traubenfuchs
I feel like we are in Universal Paperclips, a game about turning all matter in the universe into paperclips.

We are entering the absurd phase where we are beginning to turn all of earth into paperclips.

All software is gonna be agents orchestrating agents?

Oh, how I wish I had learned a useful skill.
1/30/2026, 9:05:39 PM
by: yodon
Steve Yegge is hella smart, and I've spent many hours digging into his recent work on GasTown and Beads, but he needs to read up on business strategy.

I'd recommend starting with Stratechery's articles on Platforms and Aggregators [0], and a semester-long course on Porter's Five Forces [1].

[0] https://stratechery.com/2019/shopify-and-the-power-of-platforms/

[1] https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysis
1/29/2026, 4:02:03 PM