Good code will still win
by dakshgupta on 3/31/2026, 2:32:26 PM
https://www.greptile.com/blog/ai-slopware-future
Comments
by: seamossfet
I find most developers fall into one of two camps:

1. You treat your code as a means to an end: making a product for a user.

2. You treat the code itself as your craft, with the product being a vector for your craft.

The people who typically have the most negative things to say about AI fall into camp #2, where AI is automating a large part of what they considered their art while enabling people in camp #1 to iterate on their product faster.

Personally, I fall into the first camp.

No one has ever made a purchasing decision based on how good your code is.

The general public does not care about anything other than the capabilities and limitations of your product. Sure, if you vibe-code a massive bug into your product, then that'll manifest as an outcome that impacts the user negatively.

With that said, I do have respect for people in the latter camp. But they're generally best fit for projects where that level of craftsmanship is actually useful (think: mission-critical software, libraries other devs depend on, etc.).

I just feel like it's hard to talk about this stuff if we're not clear on which types of projects we're talking about.
3/31/2026, 6:01:11 PM
by: Animats
> Meanwhile, the complexity of the average piece of software is drastically increasing. ... The stats suggest that devs are shipping more code with coding agents. The consequences may already be visible: analysis of vendor status pages [3] shows outages have steadily increased since 2022, suggesting software is becoming more brittle.

We've already seen a large-scale AWS outage because of this. It could get much worse. In a few years, we could have major infrastructure outages that the AI can't fix, with no human left who understands the code.

AI coders, as currently implemented, don't have a design-level representation of what they're doing other than the prompt history and the code itself. That inherently leads to complexity growth. This isn't fundamental to AI. It's just a property of the way AI-driven coding is done now.

Is anybody working on useful design representations as intermediate forms in AI-driven coding projects?

"The mending apparatus is itself in need of mending" - "The Machine Stops", by E.M. Forster, 1909.
3/31/2026, 6:04:36 PM
by: reese_john
> Why build each new airplane with the care and precision of a Rolls-Royce? In the early 1970s, Kelly Johnson and I [Ben Rich] had dinner in Los Angeles with the great Soviet aerodynamicist Alexander Tupolev, designer of their Backfire bomber. 'You Americans build airplanes like a Rolex watch,' he told us. 'Knock it off the night table and it stops ticking. We build airplanes like a cheap alarm clock. But knock it off the table and still it wakes you up.' ... The Soviets, he explained, built brute-force machines that could withstand awful weather and primitive landing fields. Everything was ruthlessly sacrificed to cut costs, including pilot safety. We don't need to be ruthless to save costs, but why build the luxury model when the Chevy would do just as well? Build it right the first time, but don't build it to last forever.
>
> - Ben Rich in Skunk Works
3/31/2026, 5:49:15 PM
by: xnx
If "good code" == "useful code", then yes.

People forget that good engineering isn't "the strongest bridge" but the cheapest bridge that just barely won't fail under the expected conditions.
3/31/2026, 5:08:26 PM
by: pagecalm
Agreed on the economics side. Clean code saves you time and money whether a human or AI wrote it. That part doesn't change.

But I don't think the models are going to get there on their own. AI will generate a working mess all day long if you let it. The pressure to write good code has to come from the developer actually reviewing what comes out and pushing back. The incentive is there, but it only matters if someone acts on it.
3/31/2026, 6:06:07 PM
by: simianwords
People are not emotionally ready to accept that certain layers of abstraction don't need as much care and effort if they can be automated.

We are at the point where a single class can be dirty, but the API of the classes should be clean. There's no point reviewing the internals of a class anymore. I'm more or less sure that they will work as intended.

The next step is the microservice itself. The API of that microservice should be clean, but the internals can be whatever. We are 10% of the way there.
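A minimal sketch of the idea above, using a hypothetical `RateLimiter` class (not from the article): review effort and tests target the public contract, while the internals stay unreviewed for as long as the contract holds.

```python
class RateLimiter:
    """Hypothetical class: the public contract is just allow(key) -> bool."""

    def __init__(self, limit: int):
        self._limit = limit
        self._d = {}  # deliberately terse internals that nobody reviews

    def allow(self, key: str) -> bool:
        # Count calls per key; permit up to `limit` of them.
        n = self._d.get(key, 0) + 1
        self._d[key] = n
        return n <= self._limit

# Review (and tests) exercise only the API, not the implementation:
rl = RateLimiter(limit=2)
assert [rl.allow("a") for _ in range(3)] == [True, True, False]
```

If generated internals misbehave, the contract-level checks catch it without anyone reading the class body.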
3/31/2026, 5:56:32 PM
by: ezekg
The background pattern really makes it hard to read, just fyi. I'd make the content have a white bg if you absolutely must use the pattern.
3/31/2026, 6:17:00 PM
by: personality1
I wish I could write beautiful, well-crafted code; every part of me wants to, but I'm forced to deliver as fast as I can.
3/31/2026, 6:19:16 PM
by: socalgal2
When has this ever been true?

Did the best processor win? No; x86 is trash.

Did the best computer language win? No (not that you can pick a best).

The same is true pretty much everywhere else outside computers, with rare exceptions.
3/31/2026, 6:12:50 PM
by: muskstinks
... for now.

And just to be clear: AI continues to progress. There are already rumors about the next Anthropic model coming out, and we are now in the phase of the biggest centralized reinforcement loop that has ever existed: everyone using AI for writing and giving it feedback.

Thanks to LLMs, we are now able to codify humans, and while it's not clear how fast this is going, I no longer believe my skills are unique.

A small hobby application cost me 11 dollars over the weekend and took me 3 hours to 'build', where I would probably have needed 2-3 days for it.

And we are still limited by resources and normal human progress. Claude teams are still experimental, and things like Gastown or orchestrator architectures are not that established and consume quite a lot of tokens.

We have not even had time yet to build optimized models. Claude Code still understands A LOT of languages (human languages and programming languages).

I don't think anyone really cares about code quality. I do, but I'm a software engineer. Everyone around me doesn't. Business doesn't. Even fellow co-workers don't, or don't understand good code.

Even glaring things like the GTA 5 Online (or was it RDR2?) startup bug went unfound for ages: there was some algorithmic complexity in loading a config file that made startup take forever, until someone outside Rockstar found it and Rockstar fixed it.

We also have plenty of code where it doesn't matter as long as it works. Offline apps, scripts, research scripts, etc.
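The startup-time story above (widely reported for GTA Online: a config parser that re-scanned the buffer on every token, making loading quadratic in file size) can be sketched in miniature. This is an illustrative toy, not the actual Rockstar code:

```python
def parse_items_quadratic(blob: str) -> list:
    # Toy parser that re-walks the remaining buffer for every item
    # (the same shape as calling sscanf/strlen per token on a huge
    # config file): each iteration does a linear scan plus a linear
    # copy of the tail, so total work is O(n^2) in the input size.
    items = []
    while blob:
        end = blob.find(",")      # linear scan of what's left...
        if end == -1:
            items.append(blob)
            break
        items.append(blob[:end])
        blob = blob[end + 1:]     # ...plus a linear copy per item
    return items

def parse_items_linear(blob: str) -> list:
    # Single pass over the input: O(n), same result.
    return blob.split(",")
```

Both functions return identical results for non-empty input; only the asymptotics differ, which is exactly why the bug stayed invisible until the file got big.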
3/31/2026, 6:04:13 PM
by: vb-8448
Good code wasn't winning even before the AI slop era!

The pattern was always: ship fast, fix/document later; but when "later" comes, "don't touch what is working".

To date nothing has changed, and I bet it won't change in the future either.
3/31/2026, 4:52:59 PM
by: rbbydotdev
The wrinkle here is what exactly “win” means
3/31/2026, 6:06:29 PM
by: ahussain
My prediction is that we'll start to see a whole new layer of abstraction to help us write high-quality code with LLMs - meaning new programming languages, new toolchains, stricter typechecking, built-in feedback loops, etc.

The slop we're seeing today comes primarily from the fact that LLMs are writing code with tools meant for human users.
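A toy version of the "built-in feedback loop" idea, assuming nothing beyond Python's own `compile` builtin; a real toolchain would substitute a strict typechecker and feed its diagnostics back into the model's next prompt:

```python
def check_generated_code(source: str) -> "str | None":
    """Compile-check a generated snippet. Returns the diagnostic text
    that would be fed back to the model, or None if the snippet is clean."""
    try:
        compile(source, "<generated>", "exec")
    except SyntaxError as err:
        return f"line {err.lineno}: {err.msg}"
    return None

# One round of the loop: reject with feedback, or accept.
feedback = check_generated_code("def f(:")
assert feedback is not None          # the model would get this string back
assert check_generated_code("x = 1 + 1") is None
```

The point is that the loop closes mechanically, with no human in the middle reading the generated code before it gets a first round of correction.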
3/31/2026, 6:17:03 PM
by: sublinear
> economic forces will drive AI models toward generating good, simpler, code because it will be cheaper overall

Economic forces are completely irrelevant to the code quality of AI.

> I believe that economic incentives will start to take effect and AI models will be forced to generate good code to stay competitive amongst software developers and companies

Wherever AI succeeds, it will be because a dev is spending time on a process that requires a lot of babysitting. That time is about the same as writing it by hand. Language models reduce the need to manually type something, because that's what they are designed to do, but that doesn't mean faster or better code.

AI is a rubber duck that can talk back. It's also a natural-language search tool. It's training wheels for devs to learn how to plan better and write half-decent code. What we have is an accessibility tool being sold as anything and everything else, because investors completely misunderstand how software development works and are still in denial about it.

Code quality starts and ends with business needs being met, not technical capability. There is no way to provide that to AI as "context" or automate it away. AI is the wrong tool when those needs can be met by ideas already familiar to an experienced developer. They can write that stuff in their sleep (or while sitting in meetings) and quickly move on.
3/31/2026, 6:17:03 PM
by: yshamrei
Good code does not earn money =)
3/31/2026, 5:17:26 PM
by: RcouF1uZ4gsC
The existence and ubiquity of bash scripts make me doubt this.
3/31/2026, 5:28:49 PM
by: seniorThrowaway
this submission is basically an ad
3/31/2026, 4:53:47 PM
by: aplomb1026
[dead]
3/31/2026, 5:32:32 PM
by: throwaway613746
[dead]
3/31/2026, 5:05:27 PM
by: sloptile
[flagged]
3/31/2026, 4:51:27 PM
by: 7e
None of this is true. Models will soon scale to several million tokens of context. That, combined with the combined experience of millions of feedback cycles, will make software a solved problem for machines, even as humans remain dumb. Yes, even complex software. Complex software is actually better because it is, generally, faster with more features. It’s smarter. Like a jet fighter, the more complex it is, the more capable it is.
3/31/2026, 5:01:25 PM