benterix 2 hours ago

Did these OCaml maintainers undergo some special course in dealing with difficult people? They show enormous amounts of maturity and patience. I'd just give the offender the Torvalds treatment and block them from the repo, case closed.

  • Mariehane 2 hours ago

    I think you naturally undergo that course when you are maintainer of a large OSS project.

    • pjc50 an hour ago

      Well, you go one of two ways. Classic Torvalds is the other way, until an intervention was staged.

  • hypeatei 2 hours ago

    It's clear some people have had their brain broken by the existence of AI. Some maintainers are definitely too nice, and it's infuriating to see their time get wasted by such delusional people.

  • kace91 an hour ago

    I honestly reread the whole thread in awe.

    Not because of the submitter, as clickbaity as that was, but from reading the maintainers and comparing their replies with what I would have written in their place.

    That was a masterclass of defending your arguments rationally, with empathy, and leaving negative emotions at the door. I wish I was able to communicate like this.

    My only doubt is whether this has a good or bad effect overall, given that the PR's author, if he was genuine, seemed to be having his delusions enabled.

    Would more hostility have been productive? Or is this a good general approach? In any case it is refreshing.

    • squigz 31 minutes ago

      I don't think 'hostility' is called for, but certainly a little bit more... bluntness.

      But indeed, huge props to the maintainers for staying so cool.

oliwarner 4 hours ago

There are LLMs with more self-awareness than this guy.

Repeatedly using AI to answer questions about the legitimacy of commits from an AI, to people who are clearly skeptical, is breathtakingly dense. At least they're open about it.

I did love the ~"I'll help maintain this trash mountain, but I'll need paying". Classy.

  • sheepscreek 2 hours ago

    Kudos to the community folks for maintaining their composure and offering constructive criticism. That alone makes me want to contribute something to the OCaml ecosystem - not like this dude of course :)

  • the_gipsy 2 hours ago

    Yea that part is the icing on the cake.

pluc 2 hours ago

To all the AI apologists here I'd like to submit a simple scenario and hear your answer: you use AI to create a keynote speech on a topic you needed AI to write about. At the end of your speech, people ask you questions about its contents. What do you say?

This is the same.

  • laterium 20 minutes ago

    What have politicians been doing forever?

  • thisisit an hour ago

    "I lack funding to answer. Pay me and I'll ask AI to answer your question."

  • bilekas an hour ago

    "The AI has a complete understanding of your question, prove me wrong"

  • j4coh 2 hours ago

    "Beats me. AI decided to do so and I didn't question it."

  • genewitch 2 hours ago

    "hey bixby, answer the next question you hear"

paxys 2 hours ago

Even if you are okay with AI-generated code in the PR, the fact that the community is taking time to engage with the author, asking reasonable questions and offering reasonable feedback, while the author simply copy-pastes walls of AI-generated text in response, warrants an instant ban.

If you want to behave like a spam bot don't complain when people treat you like a spam bot.

  • ptsneves an hour ago

    Some time ago I had a co-worker do this to me, pasting answers to my questions. He would paste the Jira ticket into ChatGPT (this was in GPT-3 times) and submit the PR. I would review it and ask questions, and the answers had that typical rephrasing and persona of ChatGPT. I had no proof, so one day I just used the PR and my comments as a prompt. The answers the co-worker gave me were almost the same, down to the word, as what ChatGPT gave me. I told my team I would no longer be available to review his changes and that I would rather just take the ticket outright.

rsynnott 4 hours ago

> Here's the AI-written copyright analysis...

Oh, wow. They're being way too tolerant IMO; I'd have just blocked him from the repo at about that point.

  • fhd2 3 hours ago

    Their emotional maturity is off the charts, rather impressive.

  • creata 2 hours ago

    And then, later in the thread:

    > I did ask AI to look at the OxCaml implementation in the beginning.

autumnstwilight 14 hours ago

>>> Here's my question: why did the files that you submitted name Mark Shinwell as the author?

>>> Beats me. AI decided to do so and I didn't question it.

Really sums the whole thing up...

  • j4coh 2 hours ago

    After having previously said "AI has a very deep understanding of how this code works. Please challenge me on this."

  • andai 4 hours ago

    I thought you were paraphrasing. What in blazes...

  • lambda_foo 10 hours ago

    Pretty much. I guess it’s open source but it’s not in the spirit of open source contribution.

    Plus it puts the burden of reviewing the AI slop onto the project maintainers, and the future maintenance is not the submitter's problem. So you've generated lots of code using AI; nice work, that's faster for you but slower for everyone else around you.

    • skeledrew 9 hours ago

      Another consideration here that hits both sides at once is that the maintainers on the project are few. So while it could be a great burden pushing generated code on them for review, it also seems a great burden to get new features done in the first place. So it boils down to the choice of dealing with generated code for X feature, or not having X feature for a long time, if ever.

      • swiftcoder 2 hours ago

        > or not having X feature for a long time, if ever

        Given that the feature is already quite far into development (i.e. the implementation that the LLM copied), it doesn't seem like that is the case here

      • gexla 4 hours ago

        Their issue seemed to be the process. They're set up for a certain flow, and jamming that flow breaks it. It wouldn't matter if it were AI or a sudden surge of interested developers. So it's not a question of accepting or not accepting AI-generated code, but rather of changing the process. That in itself is time-consuming and carries potential risk.

        • skeledrew 3 hours ago

          Definitely, with the primary issue in this case being that the PRer didn't discuss with the maintainers before going to work. Things could have gone very differently if that discussion had happened, especially with the intent to use generated code disclosed up front. Though of course there's the risk that disclosure could have led to a preemptive shutdown of the discussion, as there are those who simply don't want to consider it at all.

      • dudinax 7 hours ago

        With the understanding that generated code for X may never be mergeable, given the limited resources.

        • skeledrew 3 hours ago

          Yes, and that may eventually lead to a more generation-friendly fork to which those desiring said friendliness, or just more features in general, will flock.

          • squigz 3 hours ago

            I think everyone would appreciate if these people using LLMs to spit out these PRs would fork things and "contribute" to those forks instead.

            • skeledrew 3 hours ago

              It's a fairly simple matter to reject a PR. And a nice-to-have if they update their contribution guidelines to reflect their preferences.

              • squigz 3 hours ago

                It's also a fairly simple matter to respect the time of the maintainers of software you want to contribute to - by, for example, talking to them before dumping 16,000 LoC in a PR and expecting them to review it.

                Unless, of course, it has nothing to do with actually contributing and improving software.

fzaninotto 3 hours ago

I've closed my share of AI-generated PRs on some OSS repositories I maintain. These contributors seem to jump from one project to another until their contribution is accepted (recognized?).

I wonder how long the open-source ecosystem will be able to resist this wave. The burden of reviewing AI-generated PRs is already not sustainable for maintainers, and the number of real open-source contributors is decreasing.

Side note: discovering the discussions in this PR is exactly why I love HN. It's like witnessing the changes in our trade in real time.

  • inejge 3 hours ago

    > I wonder how long the open-source ecosystem will be able to resist this wave.

    This PR was very successfully resisted: closed and locked without much reviewing. And with a lot of tolerance and patience from the developers, much more than I believe to be fruitful: the "author" is remarkably resistant to argument. So, I think that others can resist in the same way.

  • raincole 2 hours ago

    Open-source maintainers will resist this wave even just because they don't want to be mocked on HN/Reddit/their own forums.

    It's corporation software that we need to worry about.

    • the_gipsy an hour ago

      OSS has always pushed back, just because of the maintenance burden in general, while corporate can just "fix it later" because there are literally devs on payroll. Or at least push through and then dump the project; the goals are just completely different, and each style works in its context.

      But I don't know if corporate software can really "push through" these new amounts of code, without also automating the testing part.

bilekas an hour ago

> It’s not where I obtained this PR but how.

The fact that this was said as what seems to be a boast or a brag is concerning: as if, by the magic of their words, the solution appeared on paper, rather than acknowledging that the bulk of the submitted code was taken from someone else.

fxtentacle 2 hours ago

"This seems to be largely a copy of the work done in OxCaml by @mshinwell and @spiessimon"

"The webpage credits another author: Native binary debugging for OCaml (written by Claude!) @joelreymont, could you please explain where you obtained the code in this PR?"

That pretty much sums up the experience of coding with LLMs. They are really damn awesome at regurgitating someone else's source code, and they have memorized all of GitHub. But just as you can get sued for using Mickey Mouse in your advertisements (yes, even if AI drew it), you can get sued for stealing someone else's source code (yes, even if AI wrote it).

  • neom 2 hours ago

    Not quite. Mickey Mouse involves trademark protection (and copyright), where unauthorized commercial use of a protected mark can lead to liability regardless of who created the derivative work. Source code copyright infringement requires the copied code to be substantially similar AND protected by copyright. Not all code is copyrightable: ideas, algorithms, and functional elements often aren't protected.

footy 3 hours ago

> AI decided to do so and I didn't question it

in response to someone asking about why the author name doesn't match the contributor's name. Incredible response.

TYPE_FASTER 25 minutes ago

> Looking over this PR, the vast majority of the code is a DWARF library by itself. This should really not live in the compiler, nor should it become a maintenance burden for the core devs.

I think this is a good point, that publishing a library (when possible, not sure if it's possible in this case) or module both reduces/removes the maintenance burden and makes it feel like more of an opt-in.

anilgulecha 4 hours ago

For the longest time, Linus's dictum "Talk is cheap. Show me the code" held. Now that's fallen! New rules for the new world are needed.

  • Cthulhu_ an hour ago

    I don't think it's fallen, but if the code is 13K LOC and written without any prior planning, nobody will read it.

  • aarestad 3 hours ago

    “Code is cheap, show me the talk”, i.e. “show me you _understand_ the ‘cheap’ code”.

    • svantana 2 hours ago

      Doesn't work in this case, because the 'talk' (GitHub PR comments) is also computer-generated. But in person (i.e. at work) it's a good strategy.

flakiness 3 hours ago

In this case the PR author (whether LLM or person) is "honest" enough to leave the generated copyright header that includes the LLM's source material. It's not hard to imagine more selfish people tweaking the code to hide the origin. The same situation as AI-generated homework essays.

I generally like AI coding using CC etc., but this forced me to remember that this generated code ultimately came from stolen (spiritually, not necessarily legally) pieces.

armchairhacker 4 hours ago

OP’s code (at least plausibly) helped him. From https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...

> Damn, I can’t debug OCaml on my Mac because there’s no DWARF info…But, hey, there’s AI and it seems to one-shot fairly complex stuff in different languages, from just a Github issue…My needs are finally taken care of!

So I do believe using an LLM to generate a big feature like OP did can be very useful, so much so that I'm expecting to see such cases more frequently soon. Perhaps in the future, everyone will be constantly generating big program/library extensions that are buggy outside their particular use case, could be swapped with someone else's non-public extensions generated for the same use case, and must be re-generated each time the main program/library updates. And that's OK, as long as the code generation doesn't use too much energy or cause unforeseen problems. Even badly-written code is still useful when it works.

What’s probably not useful is submitting such code as a PR. Even if it works for its original use-case, it almost certainly still has bugs, and even ignoring bugs it adds tech debt (with bugs, the tech debt is significantly worse). Our code already depends on enough libraries that are complicated, buggy, and badly-written, to the extent that they slow development and make some feasible-sounding features infeasible; let’s not make it worse.

  • Cthulhu_ an hour ago

    > Even badly-written code is still useful when it works.

    Sure, just as long as it's not used in production or to handle customer or other sensitive data. But for tools, utilities, weekend hack projects, coding challenges, etc by all means.

    • armchairhacker an hour ago

      Exactly.

      And yeah, people will start using AI for important things it’s not capable of…people have already started and will continue to do so regardless. We should find good ways for people to make their lives easier with AI, because people will always try to make their lives easier, so otherwise they’ll find bad ways themselves.

  • squigz 3 hours ago

    > cause unforeseen problems

    This is literally the point of having software developers, PR reviews, and other such things. To help prevent such problems. What you're describing sounds like security hell, to say nothing of the support nightmare.

    • armchairhacker 2 hours ago

      The point is that one-off LLM-generated projects don’t get support. If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, the people who decided to rely on it can pool a fund and hire real developers to fix it, probably by rewriting the entire thing from scratch. If a vibe-coded project becomes so popular that people start being pressured or indirectly forced to rely on it, then there’s an issue; but I’m saying that important shared codebases shouldn’t have unreviewed LLM-generated code, it’s OK for unimportant code like one-off features.

      And people still shouldn’t be using LLM-generated projects when security or reliability is required. For mundane tasks, I can’t imagine worse security or reliability consequences from those projects, than existing projects that use small untrusted dependencies.

      • squigz 2 hours ago

        > The point is that one-off LLM-generated projects don’t get support.

        Just sounds like more headaches for maintainers and those of us who provide support for FOSS. 5 hours into trying to pin down an issue and the user suddenly remembers they generated some code 3 years ago.

        > If a vibe-coder needs to solve a problem and their LLM can’t, they can hire a real developer. If a vibe-coded project gets popular and starts breaking, whoever decides to use it can pool a fund to hire real developers to fix it, probably by rewriting the entire thing from scratch.

        Considering FOSS already has a funding problem, you seem very optimistic about this happening.

raincole 7 hours ago

https://news.ycombinator.com/edit?id=45982416

(Not so) interestingly, the PR author even advertised this work on HN.

  • pityJuke 2 hours ago

    Your link doesn’t work when logged out because it’s to the edit page. s/edit/item

  • ares623 6 hours ago

    What's stopping the author from maintaining their own fork, I wonder?

    • kreetx 4 hours ago

      Nothing!

      Another question, though, when reading his blog: is he himself fully AI? As in, not even a human writing those blog posts. It reads a bit like that.

andrepd 2 hours ago

> AI has a deep understanding of how this code works. Please challenge me on this.

> > Here's my question: why did the files that you submitted name Mark Shinwell as the author?

> Beats me. AI decided to do so and I didn't question it.

I'm howling

franktankbank an hour ago

Everybody is dunking on this guy like he's some dopey protagonist in a movie, but you guys watched the movie. I think the interaction is pretty damn interesting. At least I see this interaction as "better" than the similar bug reports that have been discussed here (though I can't put my finger on why). If someone wants to contribute to OCaml, I think they should read this issue to get a sense of how the maintainers work. Excellent communication from them, and anyone could learn something about software professionalism. So I have to give kudos to the AI megaman for sparking the discussion and thought.

One thing I never really liked about professional software development is the way it can stall on big movements because we reject large PRs. Some stuff just won't happen if you take a simple heuristic position on this (IMO, obviously).

bndr 2 hours ago

Oh wow, that was painful to read, I especially liked this analysis part:

> Different naming conventions (DW_OP_* vs DW_op_*)

ochronus 4 hours ago

Kudos to the folks in the thread!

wilg 13 hours ago

Incredibly, everyone in this situation seems to have acted reasonably and normally and the situation was handled.

heldrida 2 hours ago

I just can’t…

Welcome to 2025!

bsder 14 hours ago

Can we please go back to "You have to make an account on our server to contribute or pull from the git"?

One of the biggest problems is the fact that the public nature of Github means that fixes are worth "Faux Internet Points" and a bunch of doofuses at companies like Google made "social contribution" part of the dumbass employee evaluation process.

Forcing a person to sign up would at least stop people who need "Faux Internet Points" from doing a drive-by.

  • fhd2 4 hours ago

    Fully agree. Luckily I don't maintain projects on GitHub anymore, but this used to be challenging long before LLMs. I had one fairly questionable contribution from someone who asked me to please merge it because their professor had tasked them with building out a GitHub profile. I kinda see where the professor was coming from, but that wasn't the way. The contributor didn't really care about the project or improving it; they cared about doing what they were told, and the quality of the code and conversation followed from that.

    There are many other kinds of questionable contributions. In my experience, the best ones are from people who actively use the thing, somewhat actively engage in the community (well, tickets), and try to improve the software for themselves or others. GitHub encourages the bad kind, and the minor barriers to entry posed by almost any other contribution method largely deter them. As sad as that may be.

  • dijksterhuis 2 hours ago

    i’ve been quite happy moving over to gitlab as much as i can.

    fewer people have a gitlab account — instant “not actually interested in helping” filter.

bravetraveler 15 hours ago

"Challenge me on this" while meaning "endure the machine, actually"

I guess the proponents are right. We'll use LLMs one way or another, after all. They'll become one.

  • fzeroracer 7 hours ago

    "Challenge me on this"

    Five seconds later when challenged on why AI did something

    "Beats me, AI did it and I didn't question it."

    Really embarrassing stuff all around. I feel bad for open source maintainers.

bdbdbdb 3 hours ago

No, it does not. AI does not understand anything at all. It is a word prediction engine.

xtracto an hour ago

This won't be a popular opinion here, but this resistance and skepticism toward AI code, and toward the people making it, smells to me very similar to the stance I see from some developers who believe that people from other countries CANNOT be as good as them (like saying that outsourcing or hiring people from developing countries will invariably bring low[er]-quality code).

Feels a bit like snobbism, and a projection of fear that what they do is becoming less valuable. In this case: how DARE a computer program write such code!

It's interesting how this is happening. And in the future it will be amazing to see the turning point when machine-generated code cannot be ignored.

Kind of like chess/Go players: First they laughed at a computer playing chess/Go, but now, they just accept that there's NO way they could beat a computer, and keep playing other humans for fun.

  • pjc50 an hour ago

    Except it's the other way round: the poor quality is evident up front, and "they used AI" is an inference for why the quality is poor.

djoldman 14 hours ago

Maintainers and repo owners will get where they want to go the fastest by not referring to what/who "generated" code in a PR.

Discussions about AI/LLM code being a problem solely because it is AI/LLM code are not generally productive.

Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.

Additionally, if there isn't a code of conduct, AI policy, or, perhaps most importantly, a policy on how to submit PRs and which are acceptable, it's a huge weakness in a project.

In this case, clearly some feathers were ruffled, but cool heads prevailed. Well done in the end.

  • rogerrogerr 14 hours ago

    AI/LLMs are a problem because they create plausible-looking code that can pass any review I have time to do, but there's no brain behind it that can be held accountable for the code later.

    As a maintainer, it used to be I could merge code that “looked good”, and if it did something subtly goofy later I could look in the blame, ping the guy who wrote it, and get a “oh yeah, I did that to flobberate the bazzle. Didn’t think about when the bazzle comes from the shintlerator and is already flobbed” response.

    People who wrote plausible-looking code were usually decent software people.

    Now, I would get “You’re absolutely right! I implemented this incorrectly. Here’s a completely different set of changes I should have sent instead. Hope this helps!”

    • chii 11 hours ago

      > doesn’t have a brain behind it that can be accountable for the code later.

      the submitter could also bail just as easily. Having an AI make the PR or not makes zero difference for this accountability. Ultimately, the maintainer pressing the merge button is accountable.

      What else would your value be as a maintainer, if all you did was a surface look, press merge, then find blame later when shit hits the fan?

      • ares623 8 hours ago

        If I had a magic wand I would wish for 2 parallel open source communities diverging from today.

        One path continues on the track it has always been on, human written and maintained.

        The other is fully on the AI track. Massive PRs with reviewers rubber stamping them.

        I’d love to see which track comes out ahead.

        Edit: in fact, perhaps there are open source projects already fully embracing AI-authored contributions?

        • ctenb 7 hours ago

          I agree. It would also work out like a long-term supervised learning process, though: humans showing how it's really done, and AI companies taking that as a gold standard for training and development of AI.

          • ares623 7 hours ago

            I'm not so sure. There's already decades of data available for the existing process.

            • ctenb 5 hours ago

              That is true, but it doesn't help for new languages, frameworks, etc.

        • jebarker 4 hours ago

          How would you define “ahead”?

          • forgetfulness 2 hours ago

            Able to make changes preserving correctness over time

            Vibecoding reminds me sharply of the height of the Rails hype: products quickly rushed to market off the back of a slurry of gems and autoimports inserted into generated code, the original authors dipping, and teams of maintainers then screeching to a halt.

            Here the bots will pigheadedly heap one 9,000-line PR onto another, shredding the code base to bits while making it look like a lot of work in the process.

            • jebarker 2 hours ago

              Yes, preserving correctness seems like a good metric. My immediate reaction was to think that the parent comment was saying they’d like to see this comparison because AI will come out ahead. On this metric and based on current AI coding it’s hard to see that being the case or even possible to verify.

      • rogerrogerr 10 hours ago

        I don’t accept giant contributions from people who don’t have track records of sticking around. It’s faster for me to write something myself than review huge quantities of outsider code as a zero-trust artifact.

  • armchairhacker 4 hours ago

    I agree, but @gasche brings up real points in https://github.com/ocaml/ocaml/pull/14369#issuecomment-35565.... In particular I found these important:

    - Copyright issues. Even among LLM-generated code, this PR is particularly suspicious, because some files begin with the comment “created by [someone’s name]”

    - No proposal. Maybe the feature isn’t useful enough to be worth the tech debt, maybe the design doesn’t follow conventions and/or adds too much tech debt

    - Not enough tests

    - The PR is overwhelmingly big, too big for the small core team that maintains OCaml

    - People are already working on this. They’ve brainstormed the design, they’re breaking the task into smaller reviewable parts, and the code they write is trusted more than LLM-generated code

    Later, @bluddy mentions a design issue: https://github.com/ocaml/ocaml/pull/14369#issuecomment-35568...

  • williamdclt 2 hours ago

    > Better is to critique the actual PR itself. For example, needs more tests, needs to be broken up, doesn't follow our protocols for merging/docs, etc.

    They did: the main point being made is "I'm not reading 13k LOCs when there's been no proposal and discussion that this is something we might want, and how we might want to have it implemented". Which is an absolutely fair point (there's no other possible answer really, unless you have days to waste) whether the code is AI-written or human-written.

  • snickerbockers 13 hours ago

    I don't suppose you saw the post where OP asked Claude to explain why this patch was not plagiarized? It's pretty damning.

    • orwin 3 hours ago

      I think that's probably the most beautiful AI-generated post ever generated. The fact that he posted it shows that he either didn't read it, didn't understand it, or thought it would be fun to show how the AI implementation was inferior to the one it was 'inspired' by.

    • lambda_foo 10 hours ago

      Why have the OP in the loop at all if he’s just sending prompts to AI? Surely it’s a wonderful piece of performance art.

  • abathologist an hour ago

    For example "cites a different person as an author, who happened to have done all the substantive work on a related code base". ;)