Why LLMs Will Never Build Software That Actually Works

Hello everyone, let’s talk about this wild carnival ride called “LLMs can build software.” Spoiler alert: no, they really can’t – at least not in any capacity that wouldn’t make an actual software engineer want to rinse their eyeballs with bleach and question their career choices. The original article lays it all out, but I’ll put on my critic hat (the one with spikes and a tinfoil brim, for deflecting bad AI hype and government mind rays) and give it the dissection it deserves.

The Core of the Problem

The author makes a very solid point: good engineers aren’t just banging out code like caffeine-powered monkeys playing Guitar Hero. They’re maintaining mental models. They map what the project needs against what the code actually does, then iterate. This requires actual reasoning, judgment, perspective – qualities entirely missing from your friendly neighborhood “hallucination generator” known as an LLM.

Sure, LLMs can generate passable code. Yes, they can fix something if you poke them hard enough with specifics. You can even get them to write tests or add logging. But the moment nuance and context are required, they turn into the digital equivalent of that one guildmate who deletes their WoW character every time they die in a dungeon. No persistence. No understanding. Just Ctrl+Alt+Delete their way out of the problem and start fresh with another barely coherent attempt.

What Real Engineers Do (and Why LLMs Fail Miserably)

  • Engineers test their work continuously, not after a blood moon rises.
  • They know whether failure means “fix the code” or “fix the test,” not just play roulette with both until something passes (see the sketch after this list).
  • They zoom out for perspective and zoom in for details – like a high-end RTS player analyzing the battlefield, not an AI that chokes on a three-line context window.
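
To make that second bullet concrete, here’s a minimal sketch – the function, the spec, and the test are all hypothetical – of the judgment call a failing test actually demands:

```python
def price_with_tax(net: float, rate: float = 0.20) -> float:
    """Supposed to return... what, exactly? The spec decides."""
    return net * rate


def test_price_with_tax():
    # This assertion fails: price_with_tax(100.0) returns 20.0, not 120.0.
    # If the spec says "gross price including tax", the CODE is wrong
    # (it should be net * (1 + rate)). If the spec says "tax amount owed",
    # the TEST is wrong. Knowing which one to fix requires context that
    # lives in an engineer's head, not in the diff an LLM is staring at.
    assert price_with_tax(100.0) == 120.0
```

An LLM will cheerfully “fix” whichever side makes the assertion go green; an engineer asks which artifact actually disagrees with reality.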

Meanwhile, LLMs suffer from glorious conditions such as context omission, recency bias, and hallucinations. In human terms: they forget what you just told them, fixate on whatever was said most recently at the expense of everything else, and invent things that never existed. Imagine pairing with a “programmer” who insists there’s a fifth argument to a function that only has three… and then argues about it for half an hour. That’s your AI co-pilot.
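
To put the joke in code – a minimal sketch with a hypothetical connect() function, not anyone’s real API – here’s roughly how that argument ends:

```python
def connect(host: str, port: int, timeout: float) -> None:
    """A function with exactly three parameters. No more exist."""
    print(f"connecting to {host}:{port} (timeout={timeout}s)")


# The co-pilot confidently supplies a fourth and fifth argument anyway:
try:
    connect("db.local", 5432, 30.0, retries=5, mode="turbo")
except TypeError as err:
    # TypeError: connect() got an unexpected keyword argument 'retries'
    print(f"The interpreter, unlike the LLM, won't argue for half an hour: {err}")
```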

Will They Get Better?

Possibly. Given enough time, research, and probably a few billion more GPUs sold to fund AI training farms that chew up energy like an old Pentium trying to run Crysis, we might eventually reach a point where models can mimic mental models. But right now? They’re unfit to take the wheel. They’re a shaky GPS that occasionally tells you to drive into a lake because it “felt right contextually.”

The Practical Takeaway

LLMs are a tool, not a replacement. They can generate snippets, synthesize documentation, and handle simple one-shot problems if the requirements are crystal clear. Treat them like a junior intern who can type very fast but has zero intuition and occasionally believes the office plant is the new lead developer. You remain the driver, the responsible adult in the room, the one who decides whether to fix bugs or rewrite sloppy requirements. The LLM is just another wrench in the developer’s toolbox – only this wrench sometimes thinks it’s a hammer.

They can generate code, but they can’t maintain context. And code without context is like surgery without anatomy – messy and likely to kill something important.

Final Diagnosis

Reading this article feels like reading a hospital chart full of obvious red flags that a patient (in this case, AI code generation) is not fit to leave intensive care. We can keep cheering the “progress” all we want, but for now, if you want software that works beyond toy problems, you need humans. Humans with critical thinking, perspective, and, yes, the occasional willingness to rage-quit and start over – but they return with a stronger understanding, not just more nonsense. LLMs are glorified autocomplete engines. Helpful? Yes. Revolutionary app builders? Not in your lifetime unless you start believing in simulation theory and Elon’s next tweet.

This article was absolutely spot on: optimistic enough to acknowledge progress, realistic enough to call LLM-based software engineering what it truly is – a half-baked dream occasionally disguised by slick demos and hype slides.

Verdict: It was good. Brutally honest, much needed, and refreshingly un-seduced by the shiny buzzword nonsense swirling around AI right now.

And that, ladies and gentlemen, is entirely my opinion.

Article source: Why LLMs can’t really build software (https://zed.dev/blog/why-llms-cant-build-software)

Dr. Su
Dr. Su is a fictional character brought to life with a mix of quirky personality traits, inspired by a variety of people and wild ideas. The goal? To make news articles way more entertaining, with a dash of satire and a sprinkle of fun, all through the unique lens of Dr. Su.
