
The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have. I have a handful of open source contributions. All of them are for small-ish projects, and the complexity of my contributions is in the same ballpark as what I work on day-to-day. And even though I am relatively confident in my competency as a developer, these contributions are probably the most thoroughly tested and reviewed pieces of code I have ever written. I just really, really don't want to bother someone who graciously offers their time to work on open source stuff with low quality "help".

Other people apparently don't have this feeling at all. Maybe I shouldn't have been surprised by this, but I've definitely been caught off guard by it.

It's because a lot of people that weren't skilful weren't on your path before. Now that Pandora's box has been re-opened, those people feel "they get a second chance at life". It's not that they have no shame, it's that they have no perspective to put that shame in.

You, on the other hand, have honed your craft for many years. The more you learn, the more you discover there is to learn, aka you realize how little you know. They don't have this. _At all_. They see this as a "free ticket to the front row", and when we politely push back (we should be way harsher in this, it's the only language they understand) all they hear is "he doesn't like _me_", which is an escape.

You know how much work you're asking of me when you open a PR on my project; they don't. They will just see it as "why don't you let me join, since I have AI I should have the same skill as you"... unironically.

In other words, these "other people" that we talk about haven't worked a day in the field in their life, so they simply don't understand much of it, yet they feel they understand everything of it.


This is so completely spot on. It’s happening in other fields too, particularly non-coding (but still otherwise specialized or technical) areas. AI is extremely empowering but what’s happening is that people are now showing up in all corners of the world armed with their phone at the end of their outstretched arm saying “Well ChatGPT says…” and getting very upset when told that, no, many apologies, but ChatGPT is wrong here too.

It's why artists despise the AI art users. In that field it isn't simply them trying to contribute, but insisting that you wasted your time learning to create art and that if you're a professional you deserve to starve. All while being completely ignorant of the medium or the process.

You know...

Many artists through the ages have learned to work in various mediums, like sculpture in different materials, oil painting, watercolors, fresco or whatever. There are myriad ways to express your visual art using physical materials.

Likewise, a girlfriend of mine was a college-educated artist; she had some great output in all sorts of media, and a great grasp of paints, paper, canvas and what-have-you.

But she was also an Amiga aficionado, and then worked on the PCs I had, and ultimately the item she wanted most in life was a Wacom Tablet. This tablet was a force-multiplier for her art, and allowed her some real creative freedom to work in digital mediums and create art with ease that was unheard-of for messy oil paintings or whatever on canvas in your garage (we actually lived in a converted garage anyway.)

So, digital art was her saving grace, but also a significant leveler of playing fields. What would distinguish her original creativity from A.I.-generated stuff later on? Really not much. You could still make an oil or watercolor painting that is obviously hand-made. Forgeries of great artists have been perpetrated, but most of us can't explain, e.g. the Shroud of Turin anyway.

So generative A.I. is competing in these digital mediums, and perhaps 3D-printing is competing in the realm of physical objects, but it's unfortunate for artists that their choices have narrowed so far that they are practically required to work in digital media exclusively, and master those apps, and therefore compete with gen A.I. in the virtual realm. That's just how it's gonna be, until folks go back to sculpting marble and painting soup cans.


FWIW, even in physical medium, artists have huge competition with "factory art", i.e. a lot of low-paid laborers creating paintings and drawings for cheap. Quantity, not quality, is the name of the game here - and this is the art that adorns all the offices and hallways around the world.

It's basically like GenAI, but running on a protein substrate instead of a silicon one.

And even in the digital realm, artists already spent the last decade+ competing with equivalent "factory art", too. Advertising stands on art, and most of that isn't commissioned, it's rented or bought for cheap from stock art providers, and a lot of supply there comes from people and organizations who specialize in producing art for them. The OG slop art, before AI.

EDIT: there's some irony here, in that people like to talk about how GenAI might start (or might already have started) putting artists out of work. But I haven't seen anyone mention that AI has already put slop creators out of work.


A shrug and a sigh in so many words.

Funny, reading your comment I had the idle thought: I mostly see callousness towards artists coming from people retaliating after being belittled by artists for using AI.

And here's your response to what felt like a pretty good faith response that deserved at most an equally earnest answer, and at worst no response.

Instead they got worse than no response lol.


   > All while being completely ignorant of the medium or the process.
also ignorant that the art they generated was made possible by those people who "wasted their time"...

That all makes sense. But the more I know, the more I realize that a lot of software engineering isn't about crazy algorithms and black magic. I'd argue a good 80% of it is the ability to pick up the broken glass, something even many students can pull off. Another 15% comes down to avoiding landmines in a large field as you pick up said glass.

But that care isn't even evident here. People are submitting PRs that don't even compile, bug reports for issues that may not even exist. The minimum I'd expect is to check the work of whatever you vibe coded. We can't even get that. It's some odd form of clout chasing, as if repos are a factor of success, not what you contribute to them.


I find that interesting because for the first 10 years of my career, I didn't feel any confidence in contributing to open source at all because I didn't feel I had the expertise to do so. I was even reluctant to file bugs because I always figured I was in the wrong and I didn't want to cause churn for the maintainers.

This is easily the most spot-on comment I've read on HN in a long time.

The humility of understanding what you don't know, and the limits of that knowledge, is out the window for many people now. I see time and time again the idea that "expertise is dead". Yet it's crystal clear that it's not. But those people cannot understand why.

It all boils down to a simple reality: you can't understand why something is fundamentally bad if you don't understand it at all.


It's not as if there weren't that sort of people in our profession even before the rise of LLMs, as evidenced by the not infrequent comments about "gatekeeping" and "nobody needs to know academic stuff in a real day-to-day job" on HN.

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

ever had a client second-guess you by replying to you with a screenshot from GPT?

ever asked anything in a public group only to have a complete moron reply to you with a screenshot from GPT or - at least a bit of effort there - a copy/paste of the wall of text?

no, people have no shame. they have a need for a little bit of (borrowed) self-importance and validation.

Which is why I applaud every code of conduct that has public ridicule as punishment for wasting everybody's time.


Problem is people seriously believe that whatever GPT tells them must be true, because… I don't even know. Just because it sounds self-confident and authoritative? Because computers are supposed to not make mistakes? Because talking computers in science fiction do not make mistakes like that? The fact that LLMs ended up having this particular failure mode, out of all possible failure modes, is incredibly unfortunate and detrimental to the society.

Last year I had to deal with a contractor who sincerely believed that a very popular library had an issue because it was erroring when parsing ChatGPT-generated JSON... I'm still shocked; this is seriously scary.

"SELECT isn't broken" isn't a new advice, and it exists for a reason.

My boss says it's because they are backed by trillion dollar companies and the companies would face dire legal threats if they did not ensure the correctness of AI output.

Point out to your boss that trillion dollar companies have million dollar lawyers making sure their terms of service put all responsibility on the user, and if someone still tries to sue them they hire $2000/hour litigators from top law firms to deal with it.

Your boss sounds hilariously naive about how the world works.

In a lot of ways he is, despite directly witnessing a lot of how the sausage is made. Honestly, I think at least half of it is wanting to convince himself that the world still functions in ways that make sense to him rather than admit that it's mostly grifters grifting all the way down.

The Gervais Principle framework calls this type of person a Clueless. They sit in middle management as a buffer between the Sociopaths who run the world, and the Losers who know the world sucks but would just like to get their paycheck and go home. I'm surprised to hear this actually play out — the Gervais Principle doesn't seem very empirical.

The high-trust Boomer brain cannot comprehend the actual low-trust society of grifters in which we live.

I don't agree with this blanket statement. The internet is low trust for lots of reasons, but regular (read small, proximal/spatiotemporally constrained) communities still exist and are not grifters all the way down. Acknowledging that distant strangers are not trustworthy in the traditional sense seems reasonable, but is categorically different than addressing natural social groups (small and local).

Yes, and most young Americans are locked out of those small, high-trust suburbs due to high housing prices. So instead they get to experience the magic of low-trust America first-hand, hence the disconnect between the young and the boomers.

Exactly. Sadly, low-trust America has become the default where most people live. There are still nice, small-town, local shopping, suburban high-trust enclaves here and there, but as soon as you go online or deal with a business with more than a handful of locations, you're back in the low-trust grifting zone.

This is a good heuristic, and it's how most things in life operate. It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss and jail time serving as strong incentive for food manufacturers and vendors to not fuck this up. The same is the case with drugs, utilities, car safety and other important aspects of life.

So their boss may be naive, but not hilariously so - because that is, in fact, how the world works[1]! And as a boss, they probably have some understanding of it.

The thing they miss is that AI fundamentally[2] cannot provide this kind of "correct" output, and more importantly, that the "trillion dollar companies" not only don't guarantee that, they actually explicitly inform everyone everywhere, including in the UI, that the output may be incorrect.

So it's mostly failure to pay attention and realize they're dealing with an exception to the rule.

--

[0] - Actually hurt you, I'm ignoring all the fitness/healthy eating fads and "ultraprocessed food" bullshit.

[1] - On a related note, it's also something security people often don't get: real world security relies on being connected - via contracts and laws and institutions - to "men with guns". It's not perfect, but scales better.

[2] - Because LLMs are not databases, but - to a first-order approximation - little people on a chip!


> It's the reason you can just buy food in stores without any worry that it might hurt you[0] - there's potential for million ${local currency} fines, lawsuits, customer loss [...]

We are currently facing a political climate trying to tear many of these safeguards down. Some people really think "caveat emptor" is some kind of natural, efficient, ideal way of life.


> [1]

Cybersecurity is also an exception here.

"men with guns" only work for cases where the criminal must be in the jurisdiction of the crime for the crime to have occurred.

If you rob a bank in London, you must be in London, and the British police can catch you. If you rob a bank somewhere else, the British police don't care. If you hack a bank in London, though, you may very well be in North Korea.


That's a fair point, and I suppose it is a major reason cybersecurity looks the way it does. The Internet as it is ignores the jurisdictional borders. But I still think cybersec is going overboard with controls, constraining use cases where international cybercrime is not a major factor in the threat model.

For this logic I like to point out that every AI service has text that says, essentially, "AI can be wrong, double check your answers". If you had the same disclaimer on your food - "This food's quality is not assured" - would you feel comfortable buying it, or would you take pause until you'd built up trust with the seller and manufacturer?

There's so much CYA because there is an A that needs C'ing


He just doesn't understand the scale of money.

Maybe a million dollar company needs to be compliant. A billion dollar company can start to ward off any loopholes with lawsuits instead of compliance.

A trillion dollar company will simply change the law and fight governments over the law to begin with, rather than worrying about compliance.


And just how many rs does your boss think are in strawberry?

If only every LLM shop out there would put disclaimers on their page that they hope will absolve them of responsibility for correctness, so that your boss could make up his own mind... Oh wait.

I think people's attitude would be better calibrated to reality if LLM providers were legally required to call their service "a random drunk guy on the subway"

E.g.

"A random drunk guy on the subway suggested that this wouldn't be a problem if we were running the latest SOL server version" "Huh, I guess that's worth testing"


There's a non-zero number of people who would get a chuckle out of a browser extension that replaces every occurrence of LLM or AI with "a random drunk guy on the subway".

It could be the same extension that replaces every occurrence of the cloud with my butt.

That's the one I was trying to think of; I could only remember the 'butt' part.

People's trust in LLMs imo stems from a lack of awareness of AI hallucination. Hallucination benchmarks are often hidden or talked about hastily in marketing videos.

I think it's better to say that LLMs only hallucinate. All the text they produce is entirely unverified. Humans are the ones reading the text and constructing meaning.

[flagged]


To quote Luke Skywalker: Amazing. Every word of what you just said is wrong.

Which is why I keep saying that anthropomorphizing LLMs gives you good high-order intuitions about them, and should not be discouraged.

Consider: GP would've been much more correct if they'd said "It's just a person on a chip." Still wrong, but qualitatively much less so than they are now.


No, it does not, it just adds to the risk that you'd be fooled by them or the corporations that produce them and surveil you through their SaaS-models.

It's a person in the same sense as a Markov chain is one, or the bot in the reception on Starship Titanic, i.e. not at all.


Just a weird little guy.


Similar analogy, yes.

FWIW, I prefer my "little people on a chip" because this is a deliberate riff on SoC, aka. System on a Chip, aka. an actual component you put when designing computer systems. The implication being, when you design information processing systems, the box with "LLM" on it should go where you'd consider putting a box with "Person" on it, not where you'd put "Database" or any other software/hardware box.


No, it is not. It's a funny way of compressing and querying data, nothing else.

It is probabilistic, unlike a database, which is not. It is also a lossy way to compress data. We could go on about the differences, but those two things make it not a database.

Edit: unless we are talking about MongoDB. It will only keep your data if you are lucky and might lose it. :)
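
To make the "probabilistic, unlike a database" point concrete, here's a toy sketch in Python (the numbers and names are made up for illustration; real models sample over vocabularies of ~100k tokens):

    import random

    # A database lookup is deterministic: same query, same answer, every run.
    db = {"capital_of_france": "Paris"}
    assert db["capital_of_france"] == "Paris"  # holds on every run

    # An LLM is, to a first approximation, a next-token probability
    # distribution that gets *sampled*: same prompt, possibly different output.
    next_token = {"Paris": 0.92, "Lyon": 0.05, "Marseille": 0.03}
    for _ in range(5):
        print(random.choices(list(next_token), weights=next_token.values())[0])
    # Usually "Paris" -- but nothing guarantees it, which is the
    # "hallucination" failure mode in miniature.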


No, it is still just a database. It is a way to store and query information, it is nothing else.

It's not just the weirdness in Mongo that could exhibit non-deterministic behaviour, some common indexing techniques do not guarantee order and/or exhaustiveness.

Let it go. LLMs and related compression techniques aren't very special, and neither are chatbots or copy-paste-oriented software development. Optimising them for speed or manipulation does not change this, at least not from a technical perspective.


> It's just a database. There is no difference in a technical sense between "hallucination" and whatever else you imagine.

It's like a JPEG. Except instead of lossy compression on images that gives you a pixel soup only vaguely resembling the original when you're resource bound (and even modern SOTA models are, when it comes to LLMs), you get stuff that looks more or less correct but just isn't.


It would be like JPEG if opening JPEG files involved pushing in a seed to get an image out. It's like a database, it just sits there until you enter a query.

This comes from not having a specific area of understanding; if you ask it about an area you know well, you'll see.

I get what you're saying but I think it's wrong (I also think it's wrong when people say "well, people used to complain about calculators...").

An LLM chatbot is not like querying a database. Postgres doesn't have a human-like interface. Querying SQL is highly technical; when you get nonsensical results out of it (which is more often than not) you immediately suspect the JOIN you wrote or whatever. There's no "confident vibe" in results spat out by the DB engine.

Interacting with a chat bot is highly non-technical. The chat bot seems to many people like a highly competent person-like robot that knows everything, and it knows it with a high degree of confidence too.

So it makes sense to talk about "hallucinations", even though it's a flawed analogy.

I think the mistake people make when interacting with LLMs is similar to what they do when they read/watch the news: "well, they said so on the news, so it must be true."


No, it does not. It's like saying 'I talk to angels' because you hear voices in the humming from the ventilation.

It's precisely like a database. You might think the query interface is special, but that's all it is and if you let it fool you, fine, go ahead, keep it public that you do.


I don't remember exactly who said it, but at one point I read a good take - people trust these chatbots because there's big companies and billions behind them, surely big companies test and verify their stuff thoroughly?

But (as someone else described), GPTs and other current-day LLMs are probabilistic. But 99% of what they produce seems feasible enough.


> But 99% of what they produce seems feasible enough.

This being a big part of the problem-- their false answers are more plausible and convincing than the truth. The output almost always seems feasible-- whether it's true or not is an entirely different matter.

Historically, when most things fail they produce nonsense. If they don't, they are producing something related to the truth (but perhaps biased or mis-calibrated). LLM output can be both highly plausible and unrelated to reality.


Billions of dollars of marketing have been spent to enable them to believe that, in order to justify the trillions of investment. Why would you invest a trillion dollars in a machine that occasionally randomly gave wrong answers?

I think in science fiction it’s one of the most common themes for the talking computer to be utterly horribly wrong, often resulting in complete annihilation of all life on earth.

Unless I have been reading very different science fiction I think it’s definitely not that.

I think it’s more the confidence and seeming plausibility of LLM answers


People are literally taking Black Mirror storylines and trying to manifest them. I think they did a `s/dys/u/` and don't know how to undo it...

They codysld start by trying to dysndo it.

I'm sorry. That was a terrible joke.


Sure, but this failure mode is not that. "AI will malfunction and doom us all" is pretty far from "AI will malfunction by sometimes confabulating stuff".

In terms of mass exposure, you're probably talking things like Cmdr Data from Star Trek, who was very much on the 'infallible' end of the fictional AI spectrum.

Data was also famous for getting things embarrassingly wrong, particularly when interacting with his human colleagues.

The stories I read had computers being utterly horribly right, which resulted in attempts (sometimes successful) to annihilate humanity.

This is probably more of an AGI achievement, but we definitely need confidence levels when it comes to queries with factual responses.

But yes, look at the US c.2025-6. As long as the leader sounds assertive, some people will eat the blatant lies that can be disproven even by the same AI tools they laud.


This sounds a bit like the "Asking vs. Guessing culture" discussion on the front page yesterday. With the "Guesser" being GP who's front-loading extra investigation, debugging and maintenance work so the project maintainers don't have to do it, and with the "Asker" being the client from your example, pasting the submission to ChatGPT and forwarding its response.

>> In Guess Culture, you avoid putting a request into words unless you're pretty sure the answer will be yes. Guess Culture depends on a tight net of shared expectations. A key skill is putting out delicate feelers. If you do this with enough subtlety, you won't even have to make the request directly; you'll get an offer. Even then, the offer may be genuine or pro forma; it takes yet more skill and delicacy to discern whether you should accept.

delicate feelers are like octopus arms


Or octocat arms in this context?

Still, I meant that in the other direction: not request, but a gift/favor. "Guess culture" would be going out of your way to make the gift valuable for the receiver - matching what they need, and not generating extra burden. "Ask culture" would be like doing whatever's easiest that matches the explicit requirements, and throwing it over the fence.


I've also had the opposite.

I raise an issue or PR after carefully reviewing someone else's open source code.

They ask Claude to answer me; neither they nor Claude understood the issue.

Well, at least it's their repo, they can do whatever.


Not OP, but I don't consider these the same thing.

The client in your example isn't a (presumably) professional developer, submitting code to a public repository, inviting the scrutiny of fellow professionals and potential future clients or employers.


I consider them to be the same attitude. Machine made it / Machine said it. It must be right, you must be wrong.

They are sure they know better because they get a yes man doing their job for them.


Our CEO chiming in on a technical discussion between engineers: by the way, this is what Claude says: *some completely made-up bullshit*

I do want to counter that in the past before AI, the CEO would just chime in with some completely off the wall bullshit from a consultant.

Hi CEO, thanks for the input. Next time we have a discussion, we will ask Claude instead of discussing with whoever wrote the offending code.

Didn't happen to me yet.

I'm not looking forward to it...


Random people don’t do this. Your boss however…

Keep in mind that many people also contribute to big open source projects just because they believe it will look good on their CV/GitHub and help them get a job. They don't care about helping anyone; they just want to write "contributed to Ghostty" in their application.

I think this falls under the "have no shame" comment that they made

It's worse. Some of them are required to contribute to an existing project of their choice for some course they're taking.

From my experience, it's not about helping anyone or CV building. I just ran into a bug or a missing feature that is blocking me.

TBH I'm not sure if this is a "growing up in a good area" vibe. But over the last decade or so I have had to slowly learn that the people around me have no sense of shame. This wasn't their fault, but mine. Society has changed, and if you don't adapt you'll end up confused and abused.

I am not saying one has to lose their shame, but at best, understand it.


Like with all things in life, shame is best in moderation.

Too little or too much shame can lead to issues.

Problem is no one tells you what too little or too much actually is and there are many different situations where you need to figure it out on your own.

So I think sometimes people just get it wrong but ultimately everyone tries their best. Truly malicious shameless people are extremely rare in my experience.

For the topic at hand, I think a lot of these “shameless” contributions come from kids.


I feel like there is a growing number of people who just can't even recognize or acknowledge shame. It's not even an emotion they are capable of or understand.

So many people now respond to "You shouldn't do that..." with one or more of:

- But, I'm allowed to.

- But, it's legal.

- But, the rules don't say I can't.

- But, nobody is stopping me.

The shared cultural understanding of right and wrong is shrinking. More and more, there's just can and can't.


I agree, and I think it's a backlash to the 2010s, when many felt there was too much shame/shaming going on in most online spaces.

Fwiw I haven’t noticed either phenomenon much irl but that might just be my bubble.


Certainly in the political arena we have people that are completely shameless. Maybe that counts as online space, but it has big effects on people's real life.

To add, I don't know if this is a cultural, personal, or other thing but nowadays even if people get shamed for whatever they do, they see it more as a challenge, and it makes them rebel even harder against what is perceived to be old fashioned or whatever.

Basically teenagers. But it feels like the rebellious teenager phase lasts longer nowadays. Zero evidence besides vibes and anecdotes, but still.

Or maybe it's me that's getting old?


The adaptation is going to be that competent, knowledgeable people will begin forming informal and formal networks of people they know are skilled and intelligent, and begin to scorn the people who aren't skilled and aren't intelligent. They will be less willing to work with people who don't have a proven record of competence. This results in greater stratification and makes it harder for people who aren't already part of the in-group to break in.

> skilled and intelligent [people] begin to scorn the people who aren't skilled and aren't intelligent

That has NEVER led to a positive result in the whole of human history, especially that the second group is much larger than the first.


Shame is a good thing; it shows one has a conscience and positive self-regard.

Just like pain is a good thing, it tells you and signals to remove your hand from the stove.


I've been saying for a couple years now that we need a healthy revitalization of shame in society. Sure in the past (and present) shaming people has been done for bad reasons but shame itself serves an important social function and I feel like there has been a collapse in its effectiveness, which has been very bad for society. People should be made to feel ashamed for certain things they do. It should impact them deeply and it should linger with them and be reinforced by others around them until they successfully make behavior changes. For example I see people lie pretty shamelessly and they suffer no lasting consequences for it. They should be stained with shame until they alter their behavior. People should not let them move past it and move on to the next lie.

Yeah, but it's not helpful if it's the new air fryer that's burning the hand, not the stove, unless you adapt.

It doesn't help that it seems like society has been trending to reward individuals with a lack of shame. Fortune favors the bold, that is.

Think of a lot of the inflammatory content on social media, how people have made whole careers and fortunes over outrage, and they have no shame over it.

It really does begin to look like having a good sense of shame isn't rewarded in the same way.


Lack of shame, and antisocial behavior in general are also directly economically rewarded nowadays thanks to the attention economy.

I worked for a major open-source company for half a decade. Everyone thinks their contribution is a gift and you should be grateful. To quote Bo Burnham, "you think your dick is a gift, I promise it's not".

> To quote Bo Burnham, "you think your dick is a gift, I promise it's not".

For those curious:

https://www.youtube.com/watch?v=llGvsgN17CQ


Sounds like everyone's got some main character syndrome; the cure for that is to be a meaningless cog in the enterprise wheels for a while. But then I suspect a lot of open source contributions are done exactly by those people - they don't really matter in their day job, but in open source they can Make A Difference.

Of course, the vast majority of OS work is the same cog-in-a-machine work, and with low-effort AI-assisted contributions, the non-hero-coding work becomes more prevalent than ever.


Kind of by definition we will not see the people who do not submit frivolous PRs that waste the time of other people. So keep in mind that there's likely a huge amount of survivor bias involved.

Just like with email spam I would expect that a big part of the issue is that it only takes a minority of shameless people to create a ton of contribution spam. Unlike email spam these people actually want their contributions to be tied to their personal reputation. Which in theory means that it should be easier to identify and isolate them.


All email is spam.

"Other people" might also just be junior devs - I have seen time and again how (over-)confident newbies can be in their code. (I remember one case where a student suspected a bug in the JVM when some Java code of his caused an error.)

It's not necessarily maliciousness or laziness, it could simply be enthusiasm paired with lack of experience.


Funny, I had a similar experience TAing “Intro to CS” (a first-semester C programming course). The student was certain they had encountered a compiler bug (pushing back on my assumption that there was something wrong with their code, since while compilers do have bugs, they are probably not in the code generation of a nested for loop). After spending a few minutes parsing their totally unindented code, the off-by-one error revealed itself.
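
To show the shape of the thing, here's a hypothetical reconstruction of that class of bug (the student's actual code was C; the grid and the neighbour-summing task are invented for illustration, sketched in Python):

    # Sum each cell of an n x n grid with its right-hand neighbour.
    grid = [[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]]
    n = len(grid)

    total = 0
    for row in range(n):
        for col in range(n):              # bug: runs through the last column,
            total += grid[row][col + 1]   # then reads one element past the end

    # Python fails loudly here with an IndexError. C happily reads whatever
    # sits in memory next, so the output is garbage -- and "the compiler is
    # broken" suddenly feels plausible. The fix is range(n - 1).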

Off topic, but I feel like this could be made into a Zen Koan from The Codeless Code[0]. You're almost there with it!

[0] https://thecodelesscode.com/


Off topic, but the Codeless Code isn't Zen Koans. It's formatted like Zen Koans, and it's entertaining and brings value to the world, but it isn't the same thing.

Our postgres replication suddenly stopped working and it took three of us hours - maybe days - of looking through the postgres source before we actually accepted it wasn't us or our hosting provider being stupid and submitted a ticket.

I can't imagine the level of laziness or entitlement required for a student (or any developer) to blame their tools so quickly without conducting a thorough investigation.


I had a professor who cautioned us not to assume the problem was in the compiler, or in anyone else’s code. Students assuming that there is a compiler (or similar) bug is not uncommon. Common enough he felt it necessary to pre-empt those discussions.

I have found bugs in the native JVM; usually it takes some effort, though. Printing the assembly is the easiest approach. (I consider bugs in java.lang/util/io/etc. code not an interesting case.)

Memory leaks and issues with the memory allocator are a months-long process to pin on the JVM...

In the early days (Bug Parade times), bugs were a lot more common; nowadays I'd say it'd be extreme naivete to consider the JVM the culprit from the get-go.


Some people just want their name in the contributor list, whether it's for ego, to build a portfolio, etc. I think that's what it comes down to. Many projects, especially high profile ones, have to deal with low effort contributions - correcting spelling mistakes, reformatting code, etc. It's been going on for a long time. The Linux contributor guidelines - probably a lot of other projects too - specifically call this stuff out and caution people not to do it lest they suffer the wrath of the LKML. AI coding tools open up all kinds of new possibilities for these types of contributors, but it's not AI that's the problem.

It's good to regularly see such policies and the discussions around them, to remind me how staggeringly shameless some people can be and how many such people are out there. Interacting mostly with my peers, friends, and acquaintances, I tend to forget that they don't represent the average population, and after some time I start to assume all people are reasonable and act in good faith.

Yep, this. You can just look at the state of FOSS licensing across GitHub to see it in action: licenses are routinely stripped or changed to remove the original developers, even on trivial items, even on forked projects where the action is easily visible, even on licenses that allow for literally everything else. State "You can do everything except this" and loads of people will still actively do it, because they have no shame (or because they enjoy breaking someone else's rules? Because it gives them a power trip? Who knows).

I think of it like people just have crappy prompt adherence. It makes more sense that way.

With AI at least you can wipe the context and reapply system prompt.

You don't have to obey the copyright of anyone who's not willing to sue you for copyright infringement. Business moguls know this.

A subset of open source contributors are only interested in getting something accepted so they can put it on their resume.

Any smart interviewer knows that you have to look at actual code of the contributions to confirm it was actually accepted and that it was a non-trivial change (e.g. not updating punctuation in the README or something).

In my experience this is where the PR-spammers fall apart in interviews. When they proudly tell you they’re a contributor to a dozen popular projects and you ask for direct links to their contributions, they start coming up with excuses for why they can’t find them or their story changes.

There are of course lazy interviewers who will see the resume line about having contributed to popular projects and take it as strong signal without second guessing. That’s what these people are counting on.


You just have to go take a look at what people write in social media, using their real name and photo, to conclude that no, some people have no shame at all.

I would imagine there are a lot of "small nice to haves" that people submit because they are frustrated about the mere complexity of submitting changes. Minor things that involve a lot of complexity merely in terms of changing some config or some default etc. Something where there is a significant probability of it being wrong but also a high probability of someone who knows the project being able to quickly see if it's ok or not.

i.e. imagine a change that is literally a small diff, that is easy to describe as a mere user and not a developer, and that requires quite a lot of deep understanding merely to submit as a PR (build the project! run the tests! write the template for the PR!).

Really a lot of this stuff ends up being a kind of failure mode of various projects that we all fall into at some point, where "config" is in the code and what could be a simple change and test involves a lot of friction.

Obviously not all submissions are going to be like this but I think I've tried a few little ones like that where I would normally just leave whatever annoyance I have alone but think "hey maybe it's 10 min faff with AI and a PR".

The structure of the project incentives kind of creates this. Increasing the cost of contribution is a valid strategy of course, but from a holistic project point of view it is not always a good one, especially assuming you are not dealing with adversarial contributors but only slightly incompetent ones.


To have that shame, you need to know better. If you don't know any better, having access to a model that can make code, plus a cursory understanding of the language syntax, probably feels like knowing how to write good code. Dunning-Kruger strikes again.

I’ll bet there are probably also people trying to farm accounts with plausible histories for things like anonymous supply chain attacks.


When it comes to enabling opportunities, I don't think it's a matter of shame for them anymore. A lot of people (especially in regions where living is tough and competition is fierce) will do anything by hook or by crook to get ahead of the competition. And if GitHub contributions are a metric for getting hired or getting noticed, then you are going to see them get spammed.

Funny enough, reading this gives me a little more confidence and a little less... shame.

I've been deep-diving into AI code generation for more niche platforms, to see if it can either fill the coding gap in my skillset, or help me learn more code. And without writing my whole blog post(s) here, it's been fairly mediocre but improving over time.

But for the life of me I would never submit PRs of this code. Not if I can't explain every line and why it's there. And in preparation of publishing anything to my own repos I have a readme which explicitly states how the code was generated and requesting not to bother any upstream or community members with issues from it. It's just (uncommon) courtesy, no?


This is one thing I find funny about all the discussion around AI watermarking. Yes for absolutely nefarious bad actors it is incredibly important, but what seems clear is that the majority of AI users do absolutely nothing to conceal obvious tells of AI generation. Turns out people are shameless!

Two immediate ones I can think of:

- The yellow hue/sepia tone of any image coming out of ChatGPT

- People responding to text by starting with "Good Question!" or inserting hard-to-memorize-or-type unicode symbols like → into text where they obviously wouldn't have used that and have no history of using it.


> how little shame people apparently have

You can expand this sentiment to everyday life. The things some people are willing to say and do in public are a never-ending supply of surprises.


> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have

My guess is that those people have different incentives. They need to build a portfolio of open-source contributions, so shame is not their concern. So, yeah, where you stand depends on where you sit.


The major companies that made available the very tools used to create this spam code applied the exact same ethics.

Shamelessness is very definitely in vogue at the moment. It will pass, let's hope for more than ruins.

To put this another way, shame is only effective if it's coupled with other repercussions with long-standing effects.

An example I have of this is from high school where there were guys that were utterly shameless in asking girls for sex. The thing is it worked for them. Regardless of how many people turned them down they got enough of a hit rate it was an effective strategy. Simply put there was no other social mechanism that provided enough disincentive to stop them.

And to play devil's advocate, why should they feel shame? Shame is typically a moral construct of the culture you're raised in, and what to be ashamed of can vary widely.

For example, if you're raised in the culture of Abrahamic religions, it's very likely you're told to be ashamed of being gay. Whereas a non-religious upbringing is more likely to say: why the hell would you be ashamed of being gay?

TL;DR: shame is not an effective mechanism on the internet because you're dealing with far too many cultures that have wildly different views on shame, and any particular viewpoint on shame is apt to have millions to billions of people who don't believe the same.


It's because the AI is generating code better than they would write, and if you don't like it then that's fine... they didn't write it

it's easy to not have shame when you have no skin in the game... this is similar to how narcissists think so highly of themselves, it's never their fault


I'm not surprised. Lower barrier of entry -- thanks to AI in this case -- often leads to a decrease in quality in most things.

https://x.com/JDHamkins/status/2014085911110131987

I am seeing the doomed future of AI math: just received another set theory paper by a set theory amateur with an AI workflow and an interest in the continuum hypothesis.

At first glance, the paper looks polished and advanced. It is beautifully typeset and contains many correct definitions and theorems, many of which I recognize from my own published work and in work by people I know to be expert. Between those correct bits, however, are sprinkled whole passages of claims and results with new technical jargon. One can't really tell at first, but upon looking into it, it seems to be meaningless nonsense. The author has evidently hoodwinked himself.

We are all going to be suffering under this kind of garbage, which is not easily recognizable for the slop it is without effort. It is our regrettable fate.


Lots of people cosplay as developers, and "contributing" to open source is a box they must check. It's like they go through the motions without understanding they're doing the opposite of what they should be doing. Same with having a tech blog: they don't understand that the end goal is not "having a blog" but "producing and sharing quality content".

> The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have.

My guess is it's mostly people from countries with a culture that rewards shameless behavior.


> Other people apparently don't have this feeling at all.

I think this is interesting too. I've noticed the difference in dating/hook-up contexts. The people you're talking about also end up getting laid more, but that group also has a very large intersection with sex pests and other shitty people. The thing they have in common, though, is that they just don't care what other people think about them. That leads some of them to be successful if they are otherwise good people... or to become borderline or actual criminals if not. I find it fascinating actually: how does this difference come about, and can it actually be changed, or is it something we get early in life or from the genetic lottery?


The Internet (and developer communities) used to be a high trust society - mostly academics and developers, everyone with shared experiences of learning when it was harder to get resources, etc.

The grift culture has changed that completely, now students face a lot of pressure to spam out PRs just to show they have contributed something.


If you are from a poor society you can't afford to have shame. You either succeed or fail, again and again, and keep trying.

In other news, wet roads cause rain.

"The biggest surprise to me with all this low-quality contribution spam is how little shame people apparently have."

And this is one half of why I think

"Bad AI drivers will be [..] ridiculed in public."

isn't a good clause. The other half is that ridiculing others, no matter what, is just not decent behavior. Putting it as a rule in your policy document only makes it worse.


> The other half is that ridiculing others, no matter what, is just not decent behavior.

Shaming people for violating valid social norms is absolutely decent behaviour. It is the primary mechanism we have to establish social norms. When people do bad things that are harmful to the rest of society, shaming them is society's first-level corrective response to get them to stop doing bad things. If people continue to violate norms, then society's higher levels of corrective behaviour can involve things like establishing laws and fining or imprisoning people, but you don't want to start with that level of response. Although putting these LLM spammers in jail does sound awfully enticing to me in a petty way, it's probably not the most constructive way to handle the problem.

The fact that shamelessness is taking over in some cultures is another problem altogether, and I don't know how you deal with that. Certain cultures have completely abdicated the ability to influence people's behaviour socially without resorting to heavy-handed intervention, and on the internet, this becomes everyone in the world's problem. I guess the answer is probably cultivation of spaces with strict moderation to bar shameless people from participating. The problem could be mitigated to some degree if a Github-like entity outright banned these people from their platform so they could not continue to harass open-source maintainers, but there is no platform like that. It unfortunately takes a lot of unrewarding work to maintain a curated social environment on the internet.


In a functioning society the primary mechanism to deal with violation of social norms is (temporary or permanent) social exclusion and in consequence the loss of future cooperative benefits.

To demand public humiliation doesn’t just put you on the same level as our medieval ancestors, who responded to violations of social norms with the pillory - it’s actually even worse: the contemporary internet pillory never forgets.


You think exile is a better first step than shame? That's certainly a take. On the internet, that does manifest as my suggested way of dealing with people where shame doesn't work, a curated space where offenders are banned -- but I would still advocate for attempting lesser corrective behaviour first before exclusion. Moreover, exclusion only works if you have a means to viably exclude people. Shame is something peers can do; exclusion requires authority.

Shame is also not the same thing as "public humiliation". They are publicly humiliating themselves. Pointing out that what they publicly chose to do themselves is bad is in no way the same as coercing them into being humiliated, which is what "public humiliation as a medieval punishment" entails. For example, the medieval practice of dragging a woman through the streets nude in order to humiliate her is indeed abhorrent, but you can hardly complain if you march through the streets nude of your own volition, against other people's desires, and are then publicly shamed for it.


No society can function without enforced rules. Most people do the pro-social thing most of the time. But for the rest, society must create negative experiences that help train people to do the right thing.

What negative experience do you think should instead be created for people breaking these rules?


Temporary or permanent social exclusion, and consequently the loss of future cooperative benefits.

A permanent public internet pillory isn’t just useless against the worst offenders, who are shameless anyway. It’s also permanently damaging to those who are still learning societal norms.

The Ghostty AI policy lacks any nuance in this regard. No consideration for the age or experience of the offender. No consideration for how serious the offense actually was.


Drive-by PRs don't come from people interested in participating in the community in question. They have infinite places to juke their stats.

I see plenty of nuance beyond the bold print. They clearly say they love to help junior developers. Your assumption that they will apply this without thought is, well, your assumption. I'd rather see what they actually do instead of getting wrapped up in your fantasies.


Thanks to Social Media bubbles, there's no social exclusion possible anymore. Shameless people just go online find each other and reinforce each others' shamelessness. I bet there's a Facebook group for people who don't return their shopping carts.

Getting to live by the rules of decency is a privilege now denied us. I can accept that but I don't have to like it or like the people who would abuse my trust for their personal gain.

Tit for tat


It is well supported that TFT with a delayed mirroring component, and Generous Tit for Tat, where you sometimes still cooperate after a defection, are pretty successful.

What is written in the Ghostty AI policy lacks any nuance or generosity. It's more like a Grim Trigger strategy than Tit for Tat.
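
For anyone who hasn't met the jargon, here's a minimal sketch of the three strategies in an iterated prisoner's dilemma (the payoff values are the standard ones; the function names and the 30% forgiveness rate are just illustrative choices):

    import random

    # Iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
    # Standard payoffs from the row player's perspective.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def tit_for_tat(opp_history):
        # Cooperate first, then mirror the opponent's previous move.
        return opp_history[-1] if opp_history else "C"

    def generous_tft(opp_history, forgive=0.3):
        # Like TFT, but after a defection still cooperate some of the time.
        if opp_history and opp_history[-1] == "D":
            return "C" if random.random() < forgive else "D"
        return "C"

    def grim_trigger(opp_history):
        # Cooperate until the opponent's first defection, then defect forever.
        return "D" if "D" in opp_history else "C"

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b, score_a, score_b = [], [], 0, 0
        for _ in range(rounds):
            a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's past
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

Add a little noise (an occasional accidental "D") and generous TFT recovers cooperation, while grim trigger locks into mutual defection forever - which is the comparison being made about the policy.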


You can't have 1,000,000 abusers and be nuanced and generous to all of them all the time. At some point you either choose to knowingly enable the abuse or you draw a line in the sand, drop the hammer, send a message, whatever you want to call the process of setting boundaries in anger. Getting a hammer dropped on them isn't going to feel fair to the individuals it falls on, but it's also unrealistic to expect that a mob-like group can trample with impunity because of the fear of being rude or unjust to an individual member of that mob.

It is an understanding of these dynamics that led us to our current system of law: punitive justice, but forgiveness through pardons.



