AI Code Contributions Are Ruining Open Source – And Disclosure Won’t Save Us

Hello everyone. Let’s dive headfirst into the murky swamp of AI-assisted code contributions, because clearly, some folks think sprinkling a little algorithmic fairy dust onto their pull requests makes them some kind of software sorcerer. Spoiler alert: it doesn’t – it more often makes them the digital equivalent of that guy who brings instant noodles to a gourmet cooking competition and insists he’s innovating.

The Fixation on Disclosure

The author of this piece insists that it is, at this stage, “common courtesy” to disclose AI usage when submitting code. Let me pause right there. Common courtesy? No, mate – common courtesy is holding the door open for someone or maybe not breathing like Darth Vader into your headset on Discord. What we’re seeing here is a desperate plea from maintainers who are tired of wading through spaghetti code that’s been auto-generated by the silicon equivalent of a drunk intern.

This isn’t altruism; it’s self-preservation. Reviewers are not here to triple-check line-for-line hallucinations produced by an AI tool that couldn’t pass a Turing test if the alternative was a rubber chicken. At this point, asking for disclosure isn’t about politeness – it’s about damage control.

AI as a Tool, Not a Savior

The article makes a big deal about being “a fan of AI assistance.” Lovely. So am I – just like I’m a fan of caffeine. But if you think that means I’m okay with someone pumping 12 Red Bulls into a patient before surgery, then you’re clearly confusing fandom with reckless stupidity. AI tools are fine when used responsibly and under “heavy supervision.” The key word here being heavy – you don’t let the robot walk off with the scalpel and hope it figures out where the appendix is.

The main frustration seems to be this: inexperienced developers are chucking AI-generated pull requests into maintainers’ laps without reviewing them. And then those maintainers are expected to clean up after them like parents tidying a toddler’s attempt at cooking lasagna in the microwave. Wouldn’t you be furious? Especially once you realize it’s not even the toddler – it’s their virtual imaginary friend who made the mess.

The Proposed GitHub Byline “Solution”

Now, here’s where it gets better. The author suggests GitHub should introduce a byline system. Essentially, every time an AI tool so much as sneezes into your codebase, a tag is automatically injected to disclose which tool was involved. Cute idea. Almost sounds practical, right? Except, have you met developers? Developers will find the most creative ways around this faster than kids found the wall-clipping glitches in Quake.

Instead of marking their commits with the glorious “AI-generated by ChatGPT 47.0,” people will bury it behind empty commits, squash the history away, or cook up some plugin that scrubs the tag entirely. If your policy depends on users voluntarily not cutting corners, congratulations – you’ve built yourself a policy fit for a utopian novel.
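For the curious, here’s roughly what that byline would amount to in practice. Git already has commit trailers – the same mechanism behind GitHub’s `Co-authored-by:` convention – so the disclosure would presumably be a trailer plus a CI gate that rejects commits without one. The `Assisted-by:` trailer name and the entire check below are my own sketch, not anything GitHub actually ships:

```python
# Naive CI gate for a hypothetical "Assisted-by:" disclosure trailer.
# Both the trailer name and this check are illustrative assumptions;
# no such mechanism exists on GitHub today, which is rather the point.
import subprocess
import sys

TRAILER = "Assisted-by:"  # e.g. "Assisted-by: ChatGPT 47.0"

def undisclosed_commits(rev_range: str) -> list[str]:
    """Return hashes of commits in rev_range whose messages lack the trailer."""
    log = subprocess.run(
        # %H = hash, %B = raw body; %x1f/%x1e are unit/record separator bytes
        ["git", "log", "--format=%H%x1f%B%x1e", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    missing = []
    for record in log.split("\x1e"):
        if not record.strip():
            continue
        sha, _, body = record.partition("\x1f")
        if TRAILER not in body:
            missing.append(sha.strip())
    return missing

if __name__ == "__main__":
    # Usage: python check_disclosure.py origin/main..HEAD
    culprits = undisclosed_commits(sys.argv[1])
    if culprits:
        print("Commits without an AI-disclosure trailer:", *culprits, sep="\n  ")
        sys.exit(1)
```

And the workaround is even shorter than the check: squash the branch, write a fresh commit message, and the trailer vanishes along with the history. Any gate that only inspects commit messages is enforcing the honor system, not the code.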

The Reviewer Problem

The article does have one strong point: maintainers shouldn’t be wasting time coaching AI ghosts. Coaching human contributors? Great. Teaching eager newcomers how to write legitimate code is mentorship; it’s valuable, and it helps open source survive. But holding the hand of a statistical text predictor that learns nothing from your feedback? That’s busywork from hell. It’s like explaining basic arithmetic over and over to a calculator and then acting surprised when the calculator doesn’t say thanks.

Image: the Ghostty project icon – a stylized white ghost on a dark blue gradient background, drawn to resemble a terminal prompt. Source: [avatars.githubusercontent.com/u/169223740](https://avatars.githubusercontent.com/u/169223740)

If your pull request is secretly a chatbot’s fever dream, at least have the decency to disclose it before wasting people’s weekends.

Transparency vs. Reality

Transparency sounds noble. “If AI tools were used, the contributors should have reviewed, revised and understood their contributions.” Wonderful. Should. The word “should” belongs in university ethics papers, utopian whitepapers, and the pitch decks of cryptocurrency startups. In reality? The lazy route is almost always the chosen route. People will toss in generated junk just to test if reviewers will tolerate it. Spoiler – they often don’t.

And let’s hit the conspiracy angle here: does anyone truly believe AI companies don’t want widespread undisclosed usage? The more silently integrated these bots become into workflows, the more indispensable they appear. Transparency isn’t a bug in their business model – it’s an existential threat. No wonder we haven’t seen them spearheading disclosure standards. It’s almost like they’d prefer we keep pretending their stilted code drafts are “real work.” But hey, I’m probably just being cynical. Or am I?

The Tab Completion Red Herring

Finally, someone asks – “Does this apply to tab completions?” Well done, Sherlock. Yes, because half the nonsense you attribute to “tab completions” these days is still AI autocomplete on steroids. That’s like saying “but officer, it wasn’t drunk driving, it was just slightly tipsy bumper car-ing.” You don’t get a free pass just because it looks like a keystroke expansion. If the machine is predicting code beyond a trivial syntax finish, you’re in AI territory whether you like it or not.
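To put a number on “trivial syntax finish,” here’s a toy heuristic – thresholds entirely my own invention, not any editor’s real policy – for when a completion stops being autocomplete and starts being generated code. The arbitrariness of the cutoff is exactly why the tab-completion defense is a red herring:

```python
# Toy heuristic with made-up thresholds (no editor actually ships this):
# finishing the token you were typing is a syntax finish; adding new lines
# or whole statements is generated code.

def is_substantive_completion(typed: str, suggested: str) -> bool:
    """True if `suggested` goes beyond trivially finishing `typed`."""
    added = suggested[len(typed):] if suggested.startswith(typed) else suggested
    if "\n" in added:              # multi-line insertion: generated code
        return True
    return len(added.split()) > 3  # more than a few tokens: also generated

# Finishing an identifier: not substantive.
assert not is_substantive_completion("resp = requests.ge", "resp = requests.get")

# Writing the whole call, error handling included: substantive.
assert is_substantive_completion(
    "resp = requests.ge",
    "resp = requests.get(url, timeout=5)\nresp.raise_for_status()",
)
```

Wherever you set those thresholds, most of today’s “tab completion” lands on the generated side of the line – which is the whole point.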

Conclusion: The Final Diagnosis

So where does all this leave us? In a rather awkward half-step. On one hand, the call for AI disclosure is sensible. No sane person wants to be duped into reviewing a hallucination generator’s draft as if it were genuine craftsmanship. On the other hand, dreaming up shiny new policies and bylines isn’t going to solve the root problem: human laziness, corporate incentives, and a lack of accountability.

From my perspective – as both critic and, let’s be honest, doctor of this terminally ill tech ecosystem – this policy idea is like prescribing a cough drop to a patient with pneumonia. Sure, it might soothe the immediate irritation. But the infection? It’s going nowhere until you address the deeper rot. AI disclosure is a band-aid on a system in desperate need of real discipline and cultural change.

Overall impression? Good intentions, sensible complaint, laughably naïve solutions. This whole AI courtesy plea is worth listening to, but unless someone enforces it with real consequences, it’s about as useful as a tutorial screen that says, “Press any key to continue,” when your keyboard’s unplugged.

And that, ladies and gentlemen, is entirely my opinion.

Article source: https://github.com/ghostty-org/ghostty/pull/8289 (Content assisted by AI tooling)

Dr. Su
Welcome to where opinions are strong, coffee is stronger, and we believe everything deserves a proper roast. If it exists, chances are we’ve ranted about it—or we will, as soon as we’ve had our third cup.
