Which goes on to prove that the bottleneck isn't in writing the code. It is in reading and understanding the code.
We've all had that one "productive" engineer on our team who would write huge PRs containing large swaths of refactoring, warranted or not, and that was long before anyone could imagine, even in their wildest dreams, that neural networks could generate code in those quantities.
The net effect of such a "productive" engineer was never an increase in team velocity. Instead the team would slow to a crawl: either his PRs had to be reviewed in detail, eating up everyone's time, or a cursory LGTM let them through and they blew up in production, forcing everyone back to the drawing board. Meanwhile the project architecture had shifted so rapidly due to his "productivity" that no one had a clear picture of the codebase, of what lived where, except that one "super smart talented productive loyal to the company goals" guy.
Sounds like a tactical tornado; made me think of this paragraph:
“Almost every software development organization has at least one developer who takes tactical programming to the extreme: a tactical tornado. The tactical tornado is a prolific programmer who pumps out code far faster than others but works in a totally tactical fashion. When it comes to implementing a quick feature, nobody gets it done faster than the tactical tornado. In some organizations, management treats tactical tornadoes as heroes. However, tactical tornadoes leave behind a wake of destruction. They are rarely considered heroes by the engineers who must work with their code in the future. Typically, other engineers must clean up the messes left behind by the tactical tornado, which makes it appear that those engineers (who are the real heroes) are making slower progress than the tactical tornado.”
- John Ousterhout, A Philosophy of Software Design
I have seen precisely zero consequences for these people because they usually leave before too long and go somewhere else, sometimes for higher pay. The slower folks end up with the worse code and no raises, in exchange for camaraderie.
But also I have no idea how that situation arises unless the slower folks are just auto-approving PRs. You kind of did that to yourself if you let the new person get away with it.
My experience is exactly the opposite. The TT ends up being the last engineer standing a lot of the time. The people who want to have better refactoring and more maintainable code are usually the ones who move on. The TT often stays in the same place for 25 years. Often correcting mistakes they themselves made in the past.
I knew one engineer who came in every Sunday night to process missed orders from an e-com system he wrote. He was unable to actually fix the problems with his code, so he just fixed the problems by hand. Every week... for years on end. Management thought he was a star who worked hard. The devs knew he was the worst engineer they had ever worked with. He still works at that same company 25 years later.
The gap between what management thinks and reality can be pretty large at times.
You start by rejecting those PRs, saying "write more maintainable code, not quick hacks".
Management starts pressuring the original developer "why is it not merged yet, I thought you had it working".
That developer hits back with "well, it failed code review, they want me to refactor it".
Management goes back to the reviewer, "why did you fail this? It meets coding standards right? Pipeline is green".
Reviewer says "Well, yes, it technically meets coding standards, but it's full of hacks and is not future-proof; it will bite us."
Management says "If we coded for tomorrow we'd never get anything done. Don't be so awkward". And then code gets merged.
Then you learn to just let these people go wild. If it hurts in the future you have a nice little "I told you so". But in my experience, management doesn't actually care if it hurts us in the future, it's not their problem. They just say "Well give me bigger estimates if you need to refactor". Fair enough, it's not a big deal but it is a pointless slog of picking up the pieces.
The other way it comes about is when the original developer just isn't really that good of a developer. So you end up in such an endless feedback loop trying to get the code in a good state that you piss everyone off and it's just easier to merge it.
Some hills just aren't worth dying on. And these guys can be exploited for your own advantage if you want to get code merged quickly ;)
>You start by rejecting those PRs, saying "write more maintainable code, not quick hacks".
How do you go about that when, for example, my previous employer just allowed any software developer to commit to any branch, and there was never any code review happening?
In objective terms, these tactical tornadoes are among the most valuable headcount at a big company to the extent that they can rapidly patch production issues and restore service, by any means possible.
The problem is allowing this kind of frantic tactical development even in "peace time".
They'll often exploit a power dynamic. If you're less senior than them, in some organizations it can be very difficult to stand up to that behavior. I experienced that at my previous employer as a relatively senior engineer, even. Top down organizations look unfavorably on feedback that runs upwards. Your pushback will be seen as standing in the way of progress.
> You kind of did that to yourself if you let the new person get away with it.
In theory, sure, but in practice, to echo the others, you often don't have a choice, because of power dynamics/politics.
It's easy to say "it's management's fault", but the principle is the same: these guys are spammers and quacks (and deserve nothing less than to be confined to the level of hell reserved for spammers); they just have to spam long enough and something will get through (volume over quality). And after their "success", i.e. fraud, they can ditch the company and move on to the next. I've seen multiple "seniors" like this, not actually very good at the work, but great at pushing half-baked slop.
Yup, this was my first thought. Tell an LLM that there's a bug, and it will _happily_ add 200 lines to the project, usually wrapped in if statements so that it all interleaves with existing code. Then it will write twice as many lines in tests, run it all, and be done. Your bug is fixed. All the tests run, and test coverage went up. Now do that a couple dozen more times. :shudder:
For their bug bounty program, the company could just charge $5-10 per submission to guarantee everything you send gets thoroughly reviewed by a human; that alone would eliminate the bot-slop DDoS of submissions overnight. If your bug and PR were actually good, you get your $10 back plus the $1000 bounty. If they weren't good, you need to do better due diligence next time, and the skilled human feedback on why they weren't good is a valuable lesson for your engineering career that only cost you the price of a Starbucks latte. It also cuts out all the scammers polluting the system. This way everyone wins.
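To make the economics concrete, here's a minimal sketch of the settlement logic I have in mind (the $10 fee and $1000 bounty are example numbers, and the "honest miss gets refunded" rule is one possible policy, not anything Turso has said):

```python
from enum import Enum

class Verdict(Enum):
    VALID_BUG = "valid bug, bounty due"
    HONEST_MISS = "good-faith report, not actually a bug"
    SLOP = "low-effort / AI slop"

SUBMISSION_FEE = 10   # example deposit, paid up front by the reporter
BOUNTY = 1000         # example bounty for a confirmed bug

def settle(verdict: Verdict) -> int:
    """Net amount returned to the reporter after a human review."""
    if verdict is Verdict.VALID_BUG:
        return SUBMISSION_FEE + BOUNTY  # deposit back plus the bounty
    if verdict is Verdict.HONEST_MISS:
        return SUBMISSION_FEE           # deposit back, no bounty
    return 0                            # slop forfeits the deposit

# A spray-and-pray bot now loses $10 per junk submission, while a
# careful reporter is never out of pocket.
assert settle(Verdict.SLOP) == 0
assert settle(Verdict.VALID_BUG) == 1010
```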
I said it before and I'll say it again: for opportunities open to the entire world on the internet, adding monetary friction is THE ONLY (anonymous) WAY to separate serious people from bad actors doing spray-and-pray, hoping they'll make some money, or get that job, by weaponizing AI bots. You can't rely on honor systems and a high-trust society on the anonymous open internet; you need to financially gatekeep to save yourself and your sanity, and to make sure the honest, serious people you want to engage with don't end up drowning in the noise of the scammers and unscrupulous opportunists.
But we can't shut ourselves down just because we refuse to apply solutions to the AI slop DDoS.
This is a great strategy idea, I like it. I'm not good at thinking out the curse of the monkey's paw, so I'm curious if folks can think of any downsides.
I said it before and I'll say it again: for opportunities open to the entire world on the internet, adding monetary friction is the only way to separate serious people from bad actors doing spray-and-pray, hoping they'll make some money or get that job by weaponizing AI bots and sucking all the air out of the room.
So many problems can be solved that way, including customer support. Instead of having to post a sob story on Twitter and HN when the AI at BigCo bans my account for no reason, why not charge me $100 for access to human support that is empowered to triage and escalate genuine issues? Then, issue a refund if the problem is on their end.
$100 is way too much. Maybe $5 to get people to spend 30 seconds on google to solve the easy problems instead of calling. But I wonder if even that would be enough to significantly incentivize claiming everything is intended behavior / user error just for another revenue stream.
But seriously, I guarantee you the opposite is more common: the incompetent devs who can't manage to ship anything keep trying to make "surgical and small edits" after a week of thinking about them, and then those blow up in prod for someone else to fix quickly, because if it's up to them, it'll take 2-3 sprints.
10 years ago I was a lot closer to what y'all are talking about. After having more and more colleagues I can no longer agree, and I suspect this is mostly the opinion of incompetents who try to discredit regular devs.
Another thing they always lack is the ability to see when a large change is simply what's required to ship the feature in a stable manner. Sorry to say this, but opening this discussion by trying to discredit large change sets in the age of AI is incredibly inept.
When you've written your software well, large changes are possible and increase stability when you actually need to make a fundamental change of behavior. Which can come from a minuscule requirement.
But to close off on the topic of this article: they made the right call. In the open source context you cannot keep this kind of incentive anymore, with openclaw continuously shitting out one PR after another.
> Which goes on to prove that the bottleneck isn't in writing the code. It is in reading and understanding the code.
AI evangelists often say "typing" instead of "writing code", because they don't really understand -- or it's not lucrative for them to acknowledge -- what makes writing code hard.
We don't just write code to be executed by machines, we also write it to be read by humans. Code reviews, debugging, future changes -- all of these things involve reading and understanding the code someone wrote. And until we have an AI that we can actually hold responsible for its actions, we can't delegate the understanding to it.
"[...] bottleneck isn't in writing the code. It is in reading and understanding the code". 100% agreed! Furthermore, the more code is generated by AI, the fewer people will actually understand it!
Generally, software engineers already have little to no understanding of the code that's actually being executed. We're so used to high- and higher-level abstractions like C, Go, Python, and JavaScript that we forget that we're already working with mostly-deterministic symbolism in a process that more closely resembles invoking magic spells than writing machine code. One more level of abstraction is not the end of software engineering.
Consider a plumber who doesn't understand metallurgy or electronics but relies on some foundational trade principles that they learned from a mentor, and who can understand manufacturer guides for clever new fittings and pumps.
That's the level that most competent software engineers should be working at.
Delegating understanding to LLMs is a totally different thing. It's not plumbing at all. It's more like hiring an unlicensed, generalist but well-reputed handyman from Craigslist and then going out to a movie while they do the work. It could turn out fine, or not, and if it does work out, it could even save time and money if their rate is low enough.
But it's not plumbing anymore, and you should be wary about billing plumber's rates for their work or taking on liability for it if you haven't even made sure that work meets your own standards of trade and quality.
You can argue that it's "one more level of abstraction" but it's a qualitatively different kind of abstraction. And in the economy of skilled labor, and the legal landscape of accountability and liability, that difference is enormously relevant.
This argument comes up a lot. The point is that with unreviewed AI nobody understood the code at any time (including the AI). This is completely different to a C compiler wherein the writers and maintainers deeply understand the code. This means that even though I don't understand it, I can use it with some confidence.
Your point about AI being another abstraction similar to the "mostly deterministic" C compiler also comes up often but there are many arguments against it. If you think the determinism of a compiler and an AI are similar then I'm not sure whether you know anything about how either of them work or have even compared examples of what they produce.
That's a you problem. If you feel this way, it's the universe saying that you aren't very good at writing software. Good engineers don't have this problem.
PS: We have way too many levels of abstraction now; that doesn't mean the right answer is to add another. Even worse, unlike the others, LLMs aren't deterministic.
There’s a large difference between understanding precisely what some code does and understanding what code intends to do. It’s why “what happens when you begin typing into your web browser’s address bar?” is such a powerful question for weeding out low-quality interview candidates. I’ve never worked at Google, but I can talk about how they probably handle the incoming requests. I’ve never worked on Windows OS-level software, but I can start talking about input buffers. Kind of reminds me of WIRED’s “5 Levels” series…
Anyway, my point is prompts are non-deterministic, and there’s no way of inferring what code output by an LLM is intended to do, because that’s not how LLMs work.
It's almost impossible to have a rational discussion about the effects of this technology because this point is so easily lost. Even super smart, credentialed, expert people easily (and often!) fall into the trap of anthropomorphizing the bot because it makes human noises. It's really important to remember the mechanical principles underlying its function. No different from any other computer program in that respect, the difference is the psychological hold it gets on the user. There is no intention behind its actions, but it's very easy to hallucinate one because with every other thing that speaks human language there is some intention behind the words and actions.
Precisely: compilers are deterministic, so extreme cases aside, we could expect that, given the documentation and a piece of code, engineers would most of the time be able to translate it properly to assembly and explain what the assembly actually triggers in mechanical terms.
LLMs, as pushed currently, are not deterministic.
Moreover, I have yet to see a compiler whose output tries to convince me I'm completely right and brings very smart, interesting points to the table. Quite the contrary, actually, though error messages generally stop short of telling users outright how stupid the proposed code is when it doesn't even pass mere syntax and fundamental logic requirements.
To the extent that's true, it's already a problem plaguing the profession.
I wouldn't advocate for using different tools, but everyone should be able to reason about the machine instructions underlying their code. Both in the immediate sense of the assembly a simple function turns into, and the tricks language runtimes use to enable their neat features.
The attitude that things are magic is poison. There is a difference between feeling confident something is comprehensible and not yet needing to go learn it, vs resigning to a position of powerlessness.
I agree in principle, but every time I run a debugger on modern C++ it makes it clear that, rather than being a simple and cutesy transformation, "compiler optimization" is actually black magic.
The thing is that C is formal by itself. Opcodes, assembly, C, Python, Common Lisp, … are all equivalent to each other, meaning there’s no statement and no algorithm that you can’t map from one to another. That’s what it means to be Turing complete.
The main issue is that not everyone cares about the semantics of what they’re writing. You don’t need to know assembly to talk about C’s semantics, or know C to talk about Python’s semantics. It does not require going up and down some abstraction tower.
I was (almost) just that guy for one PR. I removed something like 20% or more of the codebase by better leveraging the libraries and external tools we already had in use, but it meant almost every single thing we were doing had to use the library function instead of the one we wrote. But assuming you have good regression tests and linters, so you know the code works and isn't terrible, the review should be more about overall high-level quality than poring over every character to check correctness. It was still a pain to review, though.
You’re not an example of what we’re talking about here. Congratulations!
A better example would be if you’d changed the behavior of the library as you did this work, and the library changes introduced hard-to-detect bugs across the application.
Yes, exactly. The GP isn't what we are talking about, and a huge PR isn't what we are talking about either.
PRs can be huge and that's OK. For example, codebases that moved from Python 2 to Python 3 would have had huge PRs, but the cognitive load was well understood.
As per the other person's comment, yeah basically I could have broken it up but it would've been an arbitrary demarcation. I just deleted our functions and fixed everything that yelled. Admittedly that could've been one and then leveraging the libraries better could've been another, but they would've been 2 PRs that changed almost every line. So done as one to mitigate review time.
Arbitrary demarcations can still be valuable! Just because something is arbitrary doesn't mean that it's not helpful. Working in chunks will let you take more time to review each callsite individually, and increase your confidence in the changes
In the future, I would definitely encourage you to explore a more iterative solution—fix the first 50 occurrences first, or maybe all the occurrences of a handful of functions. For example, if you have utility functions A, B, C and D, maybe fix functions A and B first, and then C and D second.
Ultimately it's going to depend on how much code you're touching. If you're only touching 100 library calls, it's probably easy to do them in one PR. But if you're updating 1000 library calls, you'll need to take a more iterative approach. Building those skills now will serve you well in the future when working on bigger codebases and harder refactors.
Well another problem is that there was a developer also working on those functions at the same time. So just like the recent post on a 25 million LoC reformat done in a weekend, it seemed better to do it in one fell swoop. If it's good enough at 25 million, I'm sure it's good enough at a few thousand
It depends a bit. But it would now mean there are multiple ways of doing the same thing: call your internal function, or call the library directly. You need to put up some linting around it so that people use only your function or only the library function.
Otherwise you may end up in a situation where you have your function, you think everywhere is using it, you fix a bug in it, and poof: you've introduced bugs at all the other call sites.
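For example (a sketch; the module and function names here are made up for illustration): a tiny CI script that fails the build whenever anyone calls the library directly from outside the one blessed wrapper module.

```python
# ci_check_wrapper.py - fail CI if anyone calls the underlying library
# directly instead of going through our internal wrapper.
# "thirdparty_lib.parse_config" and "our_wrappers.py" are illustrative.
import pathlib
import re
import sys

FORBIDDEN = re.compile(r"\bthirdparty_lib\.parse_config\(")  # direct call
ALLOWED_FILE = "our_wrappers.py"  # the one module allowed to call it

violations = []
for path in pathlib.Path("src").rglob("*.py"):
    if path.name == ALLOWED_FILE:
        continue
    for lineno, line in enumerate(path.read_text().splitlines(), 1):
        if FORBIDDEN.search(line):
            violations.append(f"{path}:{lineno}: call the wrapper instead")

if violations:
    print("\n".join(violations))
    sys.exit(1)
```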
My whole career I yearned for green field projects but somehow predominantly worked on existing grown code bases and legacy projects.
That naturally meant reading and understanding more code than writing. Sometimes my LOC count was even negative, and I was proud of that accomplishment.
Now with AI I write even less, and I've given up on the dream of gaining fulfillment that way. The ability to quickly understand large amounts of code from questionable sources, be they machine or human, should hopefully stay valuable until my retirement, especially when supported by AI? What do you think?
Statistically speaking, most codebases are brownfield. Without joining a startup, working on a greenfield project is actually a pretty rare treat. And deleting code is wonderful. I'd go as far as to say lines removed is more often than not more valuable than lines added; every line that's no longer in the codebase is code you no longer have to understand or care about. So long as functionality remains intact, of course.
The problem, as I see it, with prolific use of AI to generate code is that it goes in the exact opposite direction. More and more code is bolted on top of existing code; more and more edge-cases, patch-ups, workarounds, etc. accumulate; the codebase grows and grows. In the end, no matter how good you are at understanding "code from questionable sources", you're still a human being. The AI can generate new code at rates several orders of magnitude faster than you can ingest and understand it, and when your meat brain becomes exhausted, the machine does not tire. From a business perspective, your employer will weigh their options: they can wait for you to interpret the code and produce good code (whether by hand or by machine plus human review), or they can just keep pulling the lever on the slot machine until it works well enough to sell. And for the business exec just looking for the fastest path to paydirt, I'm afraid the latter option is going to look way more appealing.
Personally, and this is just my wild guess, I think almost no one is _really_ going to learn to code again. That's not a slight on the next generation or anything; I wouldn't know how to code right now had LLMs existed 25 years ago. I love coding, but it's the _result_ that's the real accomplishment. The journey matters a bit... but having the result always just sitting there, ready to grab with no effort, is too much temptation to resist.
And as this goes on, folks who can run an LLM _and_ understand/criticize/rework/re-prompt are just going to get more and more scarce. Even using an LLM in my preferred style, where you guide the model through a long series of small steps, will fade away.
> Which goes on to prove that the bottleneck isn't in writing the code. It is in reading and understanding the code.
No, it's that the PR submissions are quite literally spammed. Whoever made them is acting like a spammer, sending out lots of garbage and hoping some of it sticks. I.e., they're not doing their part in reviewing what the AI found and generated.
I don't understand why one wouldn't just auto reject big PRs and tell them to make smaller ones. Sounds like it's a communication and social problem, not a technological one.
Even with AI, just tell it to make smaller self contained PRs. I do this with Claude or GPT models and they do just fine.
It's not a communication problem, it's a scale problem. An AI could spit out dozens more massive PRs in less time than it takes you to even evaluate whether or not the current one is AI.
Power dynamics. Usually the person making the giant PRs is the one with all the sway. An earlier-career engineer is unlikely to push back against that level of influence.
It can be a company-wide policy rather than something targeting a single individual, even if the outcome is that they are targeted. This is something that should be addressed to them through a manager, etc.; if not, it's time to leave while they ruin the product over time.
What's the alternative? You push back or you don't, leaving you likelier to leave in the future. For non-junior devs, the market is still humming along.
I don't see how the market is humming along for anyone.
Then again, I have zero network. Maybe you can just call someone on the phone and jump ship next week? I can't. Many other people cannot as well.
My idea right now is to find ways to do things mostly my way and introduce a near-perfect meritocracy in my team. No seniors or juniors; I am technically "the most senior" but we all have differing and unique experiences. I share my experiences and when I feel stronger about something I make it clear why but I don't go sad in the corner if the other engineers overrule me.
Regardless of how the market is, I like getting along with people. Of course sometimes (actually: often) it's not possible in which case either a team restructuring should be done, or one should indeed leave (which is the nuclear option; not just "oh well, things did not work out").
The same can be said back to you. Obviously my and 50+ acquaintances' experience is not the entire world, but geographical clusters and/or work-area clusters do apply.
Can you pick up the phone and be in the next job the next week?
PRs are all about power dynamics and (un)spoken deals…
If you rubberstamp some people's PRs all the time, you can then get them to greenlight your unpleasant PRs via PM instantly.
The other way round, retaliation: I once added some serious review notes to the PR of a very senior engineer because it was a dangerous topic. He would then spend the next months nitpicking every single PR I created. Had to post my PR in slack whenever he was not online to get them merged. After that I never seriously reviewed his PRs again. Too much of a headache.
And honestly, even if you do push back, you probably can't succeed. I used to work with a guy that made enormous, 10k line PRs to our Jenkins code, and would give only 3-4 days for people to review it. We tried to push back on it, but he was the golden boy of one of the people in charge of the project. Even the (inevitable) breakage of software builds when he merged his changes didn't cause any consequences for him. Unfortunately, sometimes with office politics there's absolutely nothing you can do.
If you don't ever have a massive PR from a dynamite session, then you cannot ever be better than "average and plodding". So the question is, what's the context of the massive PR and how should it be handled?
* Mature product making money, intermediate engineer just refactored everything so it's "better"? Shut the fuck up, kindly please, you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation.
* Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Obviously there are many other contexts and these are 2 extremes in a multi-dimensional space. But if the process is "we litigate every line", then that's just not an innovative place to be. Yes, most PRs should be small, targeted, easy to review and tied to a ticket but if you're innovating? By definition it's a little different.
> you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation
I can fling that back to you: very often the team hates the conclusion I arrive at, which is "It worked during your initial crunch and then everyone is just afraid to change it, which means your test coverage is far from good -- why is it not enriched?"
I am not trying to be an arse on purpose but the inertia and cargo-culting and tribe-defending practices I've seen during my contracting years (10-11) made me almost physically sick. Programmers are a fiercely territorial bunch and it's often to the detriment of the organization.
Of course the reverse cases exist: where the domain is difficult and ugly hacks had to be done so the project works and makes money. Absolutely. I love receiving this knowledge and integrating it; makes for interesting engineering discussions.
> Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Yep, full agree. And often times these stylistic concerns are not even that; they are often "I suffered here at the beginning, this green-horn should suffer as well!" which is honestly pathetic and it also happens quite a lot.
In retort, that's just doubling down that everything should always be average and plodding.
I'm not saying one shouldn't learn how to stage large changes into a mature codebase. Sometimes the overhead is very worth it, maybe most times if you're close to the profit center of a faang. But one should understand multiple ways of working, for different situations.
If you can't be arsed to prepare your code for review because it's such a buzzkill to your velocity, why are you even reviewing then? Just push to main.
I'm not being snarky. I put different review standards in place for different repos on my team. Sometimes the standard is no standard. Push to main. Figure it out later.
Without AI, both writing and reading code are bottlenecks.
How many times have you reviewed your old code and been appalled at the terrible quality? You personally created slop; it's no different from GenAI output except that a human had to spend precious time crafting it. You likely were indeed bottlenecked by your ability to churn out code that you just had to get to work, for one reason or another.
The real issue is in the asymmetry when one party can use automation to create more code than another party can possibly manually verify.
Exactly! They should have set [your agentic AI toolkit could be here!] loose on these issues and 100x'd their output, all while actually shipping fixes to these issues instead of closing them. These Luddites are going to be left in the dust as AI is here to stay!
The reality is somewhere in the middle. Features are shipping 2x to 5x faster at a lot of organizations, with solid code still being produced and reviewed.
Anyone trying to suggest that AI hasn't sped up quality code production is just insisting on keeping their head in the sand, IMO.
They're just working at companies with mature products where people are in meetings all day -- they say so! Startups very much want to crank shit out faster.
I don't understand this. If that project is not offering a bug bounty, why are they getting so many PRs? What possible incentive is there to spend real money on tokens just to push junk PRs? Are the PRs spamming a product or something?
Why does every programming job application ask for your GitHub profile? The industry used open source contributions as a proxy for candidate quality, and this is Goodhart's law in action.
Closing the program is totally reasonable. However, there is another option: Make submitters pay a nominal fee that is returned in the case that a real bug is found.
Asking people to pay to submit bugs would start a firestorm of internet drama about asking people to do free work for the company and pay for the privilege. It doesn’t matter if the program actually paid out.
If they got even one report closed incorrectly we would never hear the end of it.
I see that as an absolute win. Free publicity and all that jazz. Plus idiots with their fake submissions of bugs will either bleed money or gtfo from my repo.
I will include this security project as an addendum to my reply: <https://github.com/juli/taint>. Views on crypto can differ dramatically depending on the country you live in.
There are many cryptocurrencies that allow anyone to move money quickly and cheaply, settling in under a minute, with zero bank accounts required.
And which are trivial to convert back and forth between real money and cryptocurrency? And hold their value with sufficient stability that you can convert USD into the currency, make a transaction, wait a few weeks, make a transaction the other direction and then convert back into USD, with roughly no loss in value?
Unfortunately this isn't all black-and-white. There are some bug bounty programs where the company is very eager not to pay any bounty, aggressively marking vulnerabilities as out-of-scope or working-as-intended.
In those cases you already lose time; in this future you would also lose money.
Unfortunately you don't know how a company will react before submitting, especially if it's a small one.
I think it would be fair to distinguish "reasonable report, but not actually a vulnerability" (where you get the submission fee back) and "slop" (where you don’t).
Price it right. At the right price, it pays for everything you are talking about. At an even higher price, it is basically closing the program.
I'm not trying to suggest they _need_ to implement it. Like I said, closing it is reasonable. Completely aside from any other considerations, one could just decide that they don't feel like dealing with it. But there are other options.
It sounds like the bug bounty requires the user to extend the simulator to cover the type of bug they found. Maybe they could require a full run of the simulator test suite before submission? This serves as a nice check (that they didn't break the simulator), and maybe it could also produce some proof-of-work artifact as a side effect… (is this possible? I don't know security).
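Roughly, yes, though only as a weak artifact: hash the commit plus the full test-suite transcript, which a maintainer can replay and compare, provided the suite's output is deterministic. It proves consistency, not honest work, since the submitter controls the machine. A sketch (the cargo invocation is an assumption about how the simulator suite would be run):

```python
# pow_artifact.py - a cheap "I ran the test suite" artifact.
# Not a real proof-of-work: the submitter controls the machine, so this
# deters laziness, not determined cheating. Verifiable only if the
# suite's output is deterministic.
import hashlib
import subprocess

def test_run_artifact(test_cmd: list[str]) -> str:
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    run = subprocess.run(test_cmd, capture_output=True, text=True)
    transcript = f"{commit}\n{run.returncode}\n{run.stdout}{run.stderr}"
    return hashlib.sha256(transcript.encode()).hexdigest()

# Example, assuming the simulator suite runs under cargo:
# print(test_run_artifact(["cargo", "test", "--workspace"]))
```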
The problem with that approach is that it will also deter genuine submissions, probably moreso than a "no bounty" system.
For those who encounter bugs as part of their employment, they'd now need to convince their employer to fork over money up front. For most employers, getting them to spend even insignificant money is like pulling teeth.
But even for the self-employed or hobbyists, it's gambling real money on "are they going to be a jerk about my exploit report?". No offense towards Turso, but the bulk of software firms are TERRIBLE about handling reports like that. Many already have unstated policies of screwing people out of deserved bug bounties at every step.
To submit such reports today already requires you to accept that your work is, statistically, just going to be a bunch of free labour that you gave away for the betterment of the product's users. Adding a cash fee further deters submissions, especially once people haven't gotten their money back a few times. (Consider how many "AI detection tools" are themselves incredibly unreliable machine learning or sometimes even LLM systems.)
How so? These bot systems work on volume – there's no regard for how much reviewer time they gobble up. The idea is to make producing reports basically free, so getting 1 in 1000 positives is still a success if you have no regard for externalities.
If they have to pay for reviewer time for each of 1000 reports, then the scheme stops being viable.
The majority of the exploits I can think of are fixed by setting the correct price. Other suggestions in this thread of denominating in bitcoin fix the other exploitation: chargebacks.
If you can think of something that isn't solved by one of those two mechanisms, I'd be interested in hearing them enumerated.
Honestly I think this is a great idea. My only suggestion is instead of being very nominal, it should be "reasonable" (so $10 and not $1).
It's even possible to directly link this to maintainers/employees: if you can review 10 such AI/real things per hour (likely more if it's AI slop that's easy to detect), you're generating another revenue stream. Now, I have no idea if these guys are based in SF Bay or a third-world country with low COL, but as an "add-on", $100 an hour isn't too shabby (and can be on the "low end" if one's good at spotting AI crap).
Side note, isn't it possible to have some way to verify if the "vulns" are actual vulns or not? ...Heck why not throw an LLM at it, powered by a single $10 submission fee?
I believe the company is based in SF, but the developers are all over the world, so $100/hr is probably in the ballpark. Interestingly one of the senior developers is working from prison so his costs are probably a bit lower: https://news.ycombinator.com/item?id=44288937
Sounds like a startup idea to me! Admittedly, the friction and the fact that you have to pay would prevent a lot of legitimate people from participation which sucks.
AI is really throwing a wrench in the economics of software development, isn’t it?
Phabricator used to run on a similar system. You had to pay to send them bug reports & feature requests.
Sounds a bit weird for an open source project but I can tell you that the one company I worked at that used Phabricator did pay (and they definitely wouldn't have otherwise) so I think it's a viable strategy. Plus it makes you immune to slop!
On the other hand, they did shut down a year or so ago. They didn't say why.
Possibly stupid question (this is outside my wheelhouse): is there any way a final full run of the simulator test cases (presumably required to make sure the submitted simulator changes don’t break the thing) could act as a proof-of-work?
I wonder what Hacktoberfest would look like now if they were still giving out t-shirts to everyone. Probably not enough cotton in the world.
It can't be on individual maintainers to stop this; IMO it's on GitHub (and GitLab) to stop these sorts of accounts from even getting to the point of submitting PRs. It's essentially spam.
Look at the user who created the first PR they reference https://github.com/Samuelsills. This is not an account that should be allowed to do anything close to opening a PR against a well known repo.
I could have sworn it said "0 contributions in the last year" when I opened the profile before, and it showed no activity in the contribution graph, but now I check again and it shows a bunch of stuff... Guess some request failed last time or something, my bad.
it seems we all will slowly learn to live within new contexts; i really appreciate their openness about it and it gives me insights to munch on
thanks to you all also for ringing in with dev-style anecdotes (i'm still learning every day, and hope to continue for a long time): those big-PR and tactical-tornado stories are helping keep the craft and the thinking afloat, somehow.
> It is possible to set up automated systems to gatekeep this, but with a non-negligible dollar value attached to it, the incentive is just too great for the AIs to just keep arguing, reopening the same PR, etc.
I think a lot of this is exposing a change in assumed context, but it seems better to adapt to the new trends than discontinue security programs.
AI lets good-faith bug hunters look through more repos they are not deeply familiar with. They may recognize a bad pattern quickly, almost like a very specialized static-analysis rule. But without project context, it is not always clear whether something is a real bug, a footgun, expected behavior, or just out of scope.
The blog shows obvious slop examples, but I think borderline accepted vs rejected examples would be more useful. They would help people understand what is worth reporting and what would just drain maintainers.
It could also help to ask reporters to clarify how the bug was found so you let people set reasonable expectations: "AI-found and manually confirmed", "AI-assisted", or "no AI used".
> It could also help to ask reporters to clarify how the bug was found so you let people set reasonable expectations: "AI-found and manually confirmed", "AI-assisted", or "no AI used".
It doesn't really require all people to tell the truth.
If the bug hunter is acting in good faith, they can communicate how much scrutiny they think their report deserves, which may reduce maintainer frustration.
If the bug hunter is acting in bad faith, and they claim "no AI used" but the report shows obvious AI-generated content, detectable by a classifier, maintainers can dismiss it more easily.
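A sketch of how those labels could drive triage (the classifier score is a placeholder; as noted elsewhere in the thread, AI-text classifiers are themselves unreliable, so this only filters the blatant cases):

```python
def triage(claimed_origin: str, ai_score: float) -> str:
    """Decide how much human scrutiny a report gets.

    claimed_origin: self-reported, one of "no-ai", "ai-assisted",
                    "ai-found-manually-confirmed".
    ai_score: output of some AI-text classifier in [0, 1]
              (a placeholder; such classifiers are unreliable).
    """
    if claimed_origin == "no-ai" and ai_score > 0.9:
        # Claim contradicts a strong classifier signal: likely bad faith.
        return "dismiss-with-note"
    if claimed_origin == "ai-found-manually-confirmed":
        return "normal-review"       # a human has vouched for it
    if claimed_origin == "ai-assisted":
        return "quick-screen-first"  # skim before committing real time
    return "normal-review"
```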
Why not require putting up some money, say $20, to submit a bug eligible for a payout? If you know what you’re doing you wouldn’t mind this at all because you’ve proven it to yourself and you’ll get paid $1000. If the bug turns out to not be legit and it was a good faith effort then you can return the deposit as well. Slop doesn’t get a refund.
An interesting "conundrum" (at least from my outsider perspective): how many of those bot requests are from agents that utilize Turso on their backends?
Has anyone used Turso in production? It's an SQLite-compatible rewrite in Rust, but with added features like multiple-writer support, and it's open to external contributions, which SQLite is not.
I was thinking of using it for my full stack Rust apps just so everything works with cargo and I don't have to bring in SQLite separately.
It's alright as a drop-in SQLite replacement. I ran into a bunch of problems with libsql on Windows a year or two ago when I tried it, but I'd assume it's fixed now. They also offer Turso DB as a service with a very generous free plan, which was my main reason to try it.
We sorely need a way to reliably detect AI slop, but unfortunately it doesn't seem possible and it's just getting harder and harder.
Last month I tried my hand at finding a way to tell whether an OSS project is slop or not, based on the amount of "human attention" it received vs the amount of code it contains. The idea is that a 100k LOC project which received 3 days' worth of attention from a human is most certainly slop.
The approach doesn't work very well, though, mostly because it's hard to gauge the amount of attention that was given. If I see one commit with +3000 LOC, I can assume it's AI-generated, but maybe you're just the type of dev who commits infrequently.
Maybe we need some sort of "proof of human attention" for digital artifacts, that guarantees that a human spent X time working on it.
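The crude version of what I tried looks something like this (the 4-hour cap and the scoring are arbitrary), and it also shows exactly where it breaks: inter-commit gaps only bound the attention a human could have paid, not what was actually paid.

```python
from dataclasses import dataclass

@dataclass
class Commit:
    timestamp: float   # seconds since epoch
    lines_added: int

def slop_score(commits: list[Commit]) -> float:
    """Lines of code added per second of apparent human attention.

    Attention is approximated by inter-commit gaps, capped at 4 hours
    so a week of inactivity doesn't count as a week of work. This is
    the heuristic's weakness: an infrequent committer looks exactly
    like a code generator.
    """
    commits = sorted(commits, key=lambda c: c.timestamp)
    attention = 0.0
    for prev, cur in zip(commits, commits[1:]):
        attention += min(cur.timestamp - prev.timestamp, 4 * 3600)
    total_loc = sum(c.lines_added for c in commits)
    return total_loc / max(attention, 1.0)  # higher = more slop-like
```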
I suspect that it will be impossible, soon. People will just train LLMs to "act human," and pass the various turing tests we throw at them.
I stay pretty busy[0], and have been accused of "gaming" my GH repos.
That's not the case. I'm retired, experienced, and working on software all day, every day. I just don't get paid for it.
I also don't especially care, whether or not anyone thinks I'm a bot. I eat my own dogfood. Most of my work is on modules that I use in my own projects.
There's no reason to care that a human spent time on it.
Humans are bad at writing code. Garbage PRs and slop have been a problem in open source and bug bounty programs since long before AI came on the scene.
We need better AI so that there's no need to solicit external bug fixes, and better AI so other contributions can be evaluated for usefulness and quality.
What do you care if a human ever looked at it at all? It implies that humans are adding value to the process. It's possible for a human to add value. The right human can add tremendous value. But I'll take a completely autonomous AI over 99% of the human software engineers and 99% of the people contributing PRs and bugfixes.
It was hard to keep up with slop before. It's a lot harder now. AI will help weed through the garbage.
If AI is already mass-producing garbage PRs and other unreliable crap, what makes AI (established as producing unreliable crap) the solution for review? What makes the reviewing AI not produce unreliable crap with regards to the review?
A magical, hypothetical AI that always gets it right and will make all these problems go away is neither a solution nor a plan. It's wishful thinking.
AI in the hands of the right people is incredibly powerful. A good team of engineers with AI doing their own bug-hunting on their own code is already far better than any outsider—human, AI, or human-assisted AI—could ever do. A good internal AI-assisted team is also the only thing that can vet all other contributions. It doesn't matter if those contributions are 100% human-written, 100% AI-written, or a combination. The problem is the same.
Unless you stop accepting outside contributions at all, there's simply no way to determine if a human was involved in the process. Any mandate that all contributions come from humans will fail because there's no detection or enforcement mechanism. You have to assume it's slop either way, and improve your ability to vet it. Only another AI can do that, because we don't have enough qualified humans to keep up.
I did address it: AI in the hands of the right people.
Random contributions to bug bounty programs or random PRs for new features come from all corners: expert engineers producing fantastic code; intermediate engineers trying their hardest but producing mediocre code; junior engineers wasting everyone's time with ill-conceived poorly-written code; and all of the above with varying amounts of AI assistance. And now also purely-automated AI, where the only human involved is pointing their AI at GitHub with no guidance.
You can't stop it on the inbox side. Either you turn the inbox off, or you leverage AI to help you separate the wheat from the chaff.
Being a verifiable human identity (not as in age verification or whatever, but as in having a known, public reputation online) will go a long way in this new slop-first world.
There is hardly a bright line between real and fake. An influencer is just a person who rents out their identity. Can you imagine getting a real PR from a human engineer you trust, but the description says "This pull request was sponsored by Skeezy Software Inc."?
Well, yes and no. What I mean is, being a real person who is indeed a person (by whatever means you establish that) and having some sort of standard by which you won't be bought seems increasingly rare, and therefore valuable.
By "bought" I don't mean they won't sponsor stuff. I mean they've got a public standard that can be trusted to some degree.
Your final example isn't exactly what I'm thinking of here. I'm thinking that a well-known identity and name within a community bypasses a lot of this BS with AI slop and communities bombarded by the slop will continue to close themselves off which will increase the value of being a known, contributing member.
Idk I need to figure out a way to articulate this better but essentially the value of being verifiably human is increasing IMO.
I'm imagining a chart with two lines. Both have to do with verifying humanity online. The first line shows its cost. It increases dramatically over time. The other line shows its value. That one declines gradually, then falls off a cliff.
I think we're very close to those two lines crossing. Which is another way of saying that people might care today whether something was generated by/with AI, but I don't think they will care soon. Humans will still decide what gets created, but the how won't matter as much.
You might be right that the software equivalent of a sourdough-baking Reddit community will continue to exist. But most people will buy bread at the store and have no idea how it's made.
Digital human fortresses are totally becoming a thing.
For example, our community [0] asks you to submit an application before you're granted an invite code. If you attend a meetup in person we'll grant a "Verified Human" badge too. This gives you the power to invite others into the fortress: you're responsible for them.
The price to pay is steep because community growth is now glacial. It really does solve the slop problem though. (I'm also no longer convinced maximizing growth is Good.) Maybe there's some in-between solution for those who dislike invite-only spaces.
The project does not accept bug bounty submissions without BBBS attestation. To get it, you must first submit your report to the BBBS for review.
Now, if this is your first submission (you are unknown to the BBBS), you must submit $50 to the BBBS along with the bug report, to pay a human to spend an hour looking at your work to verify it is written in good faith. This is not a review of whether the bug is real or valuable, just a readover to verify the report is coherent and plausible. If you have done this before, you can get a free attestation based on being a member in good standing, but submitting slop (per the judgement of the BBBS reviewer or the project receiving the report) is an account ban.
The BBBS couldn't steal your work and submit it themselves if they gave you some sort of signed hash as a receipt, which as a side effect would also be a deterrent against bounty programs stealing your work.
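The receipt itself is easy with standard primitives: a signature over the report's hash and a timestamp later proves both that the BBBS saw the report and exactly what they saw. A sketch using an HMAC (in a real system you'd want a public-key signature so third parties can verify too, and a proper key store instead of a hard-coded secret):

```python
import hashlib
import hmac
import json
import time

BBBS_SECRET = b"server-side signing key"  # placeholder; use a real KMS key

def issue_receipt(report: bytes) -> dict:
    """BBBS side: sign (report digest, timestamp), return the receipt."""
    digest = hashlib.sha256(report).hexdigest()
    payload = json.dumps({"report_sha256": digest, "ts": int(time.time())})
    tag = hmac.new(BBBS_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_receipt(receipt: dict, report: bytes) -> bool:
    """Check the signature and that the receipt matches this report."""
    expected = hmac.new(BBBS_SECRET, receipt["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, receipt["tag"]):
        return False
    payload = json.loads(receipt["payload"])
    return payload["report_sha256"] == hashlib.sha256(report).hexdigest()
```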
Submissions would only be expensive per submission for an anonymous user, enabling the low friction high trust communication under which collaboration works best when reputation has been established.
The BBBS itself won't be overrun by slop since the price of establishing an account far exceeds what a bot might expect to make with a single malicious submission. Nor can legitimate established accounts be sold since the cost of creating them exceeds the value to be expected from abusing them. Moreover, the cost to establish a reputation as a bug bounty hunter is small in dollars compared to the cost in time and expertise that a legitimate hunter would be expected to expend in the course of their work.
The vast majority of slop would go away as the cost of a first submission is much too high. The cost to the project is close to nothing - integrating with the BBBS attestation API. The cost to a legitimate bug bounty hunter is low - some human review while establishing a reputation, which could even be made useful if it came in the form of feedback. All review is paid for by the submitter, so no one is trying to counter infinite slop with volunteer hours.
Moreover, the BBBS can serve as a mediator of trust, not only against AI, but as a place to receive reputational merit for high value work and trustworthy bug bounty programs.
I realize I am describing a lightweight guild, which is subject to well known political failure modes (the most significant of which is exploiting newcomers), but the concept has the advantage that guilds have functioned as successful slop gatekeepers in society for a very long time and a lot is known about how to make them work.
Bots are using real tokens for this. So, ultimate honeypot idea: post heavily commented skeleton code in a GitHub repo, promise a generous money reward for closing issues, and never pay anyone. Watch the bots swarm and burn their tokens writing code for you.
Is there a description of this project on any other site? They clearly can't admit it's bot bait on the Git repo itself, and maybe not on the leaderboards site either, because it's linked from the repo.
But there must be some announcement about the project somewhere? I'd like to get that to pass it around.
AI can find useful exploits but the highly publicized ones are among a sea of false positives and the successes I've read were found by people who were already experts. I can 100% see a public bug bounty program being inundated with garbage even if there are diamonds in the rough.
Let’s take curl as an example. Daniel Stenberg wrote about how he had to stop curl’s bug bounty program due to prevalent AI slop[0]. He also wrote about how he eventually restarted security bug reports without a bounty[1]. It turns out that without a bounty, the reports are higher quality. It almost seems like by removing the monetary incentive, it attracts people who are reporting bugs due to genuine altruism and concern for security, rather than hope for a quick buck. It feels like it harkens back to an earlier age of free software development on the Internet untainted by commercial interests.
So my opinion is that security bug reports should continue, but bug bounties should not. Turso should probably still encourage corruption bug reports but with no bounty.
The weird thing is it can't be that economically feasible to burn a ton of tokens in the hopes that you might get a bounty.. seems like a great way to set money on fire.
i was just thinking about that. i don't think anyone is systematically making money with that. it's probably just people who set up openclawd or whatever and told it to solve bounties.
we automated finding bugs. then we automated submitting bugs. now we're automating rejecting submissions. at no point did anyone automate fixing the bugs.
I actually just got a PR from my boss's AI agent.
It identified the wrong problem (surface level), and corrected it by corrupting the document data, but making it no longer throw an exception.
Fixing bugs has been automated by claude code and other tools for a while. We've merged a bunch of bug-fix PRs by claude, some of which were found by other bots.
Oh look it's more of exactly what AI skeptics said would happen: low effort bullshit generated at scale making life hell for people actually trying to make things. That's wild.
Edit: it is genuinely wild. I don't know of another product category that selects so perfectly for the WORST type of person to be its enthusiast. Just every single person I see hyped about AI is fucking insufferable on at least one, and usually multiple, axes.
Web3 is the closest analogue in recent memory, but if you go back further to the pre-enlightenment era (and some pockets of more recent history, particularly in isolated rural/colonial regions) you can see similar behaviors. It's mad religious fervor coupled with poor education. They see what their beliefs tell them they should see, and lack the mental rigor to analyze the actual data. Not their fault! It's our fault for letting them into the profession. Other disciplines are much better at keeping these folks outside the gates.
I think people would be more interested in listening to "AI skeptics" if they offered realistic solutions to the problems they predict. Pandora's box has been opened, let's deal with the consequences now instead of trying to shut the box which cannot be shut.
> I think people would be more interested in listening to "AI skeptics" if they offered realistic solutions to the problems they predict.
AI is the fucking problem. Yes, it has (some) uses. It is not nearly the number advertised. And more and more the median use case seems to be, again, overloading people actually trying to do work with an avalanche of bullshit.
The solution is exactly what the linked article says: shut it down. The AI people have ruined another good thing that was both beneficial to the project, and to a number of individuals.
This response is incredibly annoying and insufferable. It's only "impossible" at this point because people continually ignored skeptics and anyone warning about exactly these outcomes.
Now that doom is here, it's too late to do anything about it. Just accept the doom!
The critics didn't do themselves any favors. Some think the Terminator has something useful to say on the subject; others invent contrived scenarios like self-driving cars having to resolve trolley problems. Reality turned out to be much more boring.
But yes, what you said but unironically. Like it or not it's here, it's not going away, so all the remaining options have to assume that.
> The critics didn't do themselves any favors. Some think the Terminator has something useful to say on the subject; others invent contrived scenarios like self-driving cars having to resolve trolley problems. Reality turned out to be much more boring.
I'm referring to actual people I argued with in the past. People convinced that AI in a self-driving car would involve the car calculating whether to kill a pedestrian or the driver, rather than trying to figure out whether this thing half obscured by foliage is a speed limit sign or not.
Obviously that's not what everyone argues, my point is that there's a lot of chaff in such arguments and not much wheat. People make a lot of noise about dramatic but completely unrealistic scenarios, while ignoring the far more boring reality.
The PauseAI people are for instance talking about human extinction, somehow. And not crappy GitHub PRs.
I never said it did, "doom" in this case is just "any negative consequences of AI", because anyone saying that AI could lead to negative consequences has been accused of "doomerism". My point is simply that the negative consequences are here right now, in the room with us, and AI boosters are still pretending that they don't exist.
No one serious is arguing it will destroy the world (outside of LLM companies using these people as free marketing), the real critics argue that this is going to weaken labor while all the benefits go to the few capitalists and nothing ultimately improves in regards to society.
Unless improvement for you means increased cancer rates, an exacerbated climate crisis, or poor systems used to kill schoolchildren in wars.
All outcomes humans with souls typically want to avoid.
You need to get out of your SV big tech bubble and go attend your local planning board meeting; the vast majority of the public hates this technology. It's literally killing members of their communities and ruining the ecology.
The question we should ask is why a subset of humans is so gung-ho about this technology when all it's done is induce mass misery at an even greater scale. We all know the actual answer: they want more money, even if the cost is more societal misery.
Be careful tho: we already know people are willing to commit violence, and if there's one thing you can count on in the USA, it's that when economic conditions worsen, more people become desperate. That desperation leads to pretty extreme reactions, and those reactions are typically adored by the public writ large too (see the public's Luigi reactions).
Quite the powder keg, and I don't think SV realizes the backlash they are brewing for themselves.
My only objection is calling it doom. It isn't and calling it that gives this stupid shit used by low-effort people far, far too much credit. It's just slop.
But it does suck, you know? Part of what makes OSS so great is that anyone could contribute. If someone uses a thing, and finds something broken or a way to make it better, they could do that and then push it back up to the project and ideally have it merged so everyone can use it. That's what makes it awesome. The project benefits, the maintainer benefits, the coder benefits, the users benefit.
Now we have to stop that because lazy people can't stop shitting it up with generated PRs and trying to get money for not fucking doing anything.
I mean, I was being a little hyperbolic and spitting back against people who claim that any criticism of AI is "doomerism", but I edited out the part of my comment that made that clear before posting. My bad.
Although I do want to push back mildly. I think this situation is a bit worse than just "it sucks", and if you extrapolate out to a world where every institution that's like open source gets polluted by the same fundamental dynamics, it's not quite doom, but it's quite a bit worse than "it does suck".
> forget about the shutting it down and think of something actually realistic.
Why is it not realistic? Small teams do excellent work. Keep your team small and trusted. Only accept contributions from your team, and people outside your team who are personally vouched for by someone on your team. It's like climbing mountains or sailing or any other type of inherently risky activity--you don't go out with people you don't trust. It's eminently possible, you just don't like the idea of it.
Right, so the Github "open contributions" model where anyone can open an issue or a PR or otherwise waste a maintainer's time is broken. Fundamentally insecure under this type of attack. Now that the exploit is being used widely, and costing us immensely, we need to put a lid on it. If the only way to guarantee an AI bot (or its meatspace sock puppet) doesn't waste your time is to move to a "look but don't touch" model, then that's what we need to do. I think this would be a reasonable default:
Public repos are read only except for contributors who have been given specific permission, and those permissions are granular, e.g. in order of increasing damage potential:
- comment on issue
- create issue
- comment on PR
- create PR
- run CI against PR
- etc.
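Purely as an illustration of that ordering (a minimal sketch with made-up types, not any forge's actual permission API), the tiers amount to a totally ordered enum plus a per-contributor ceiling:

    #[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
    enum Permission {
        // Declaration order doubles as the damage-potential order.
        CommentOnIssue,
        CreateIssue,
        CommentOnPr,
        CreatePr,
        RunCi,
    }

    struct Contributor {
        name: String,
        max_permission: Option<Permission>, // None = read-only default
    }

    impl Contributor {
        fn may(&self, action: Permission) -> bool {
            // Allowed only if explicitly granted this tier or a higher one.
            self.max_permission.map_or(false, |granted| action <= granted)
        }
    }

    fn main() {
        let anon = Contributor { name: "driveby".into(), max_permission: None };
        let vouched = Contributor { name: "teammate".into(), max_permission: Some(Permission::CreatePr) };
        assert!(!anon.may(Permission::CreateIssue)); // read-only by default
        assert!(vouched.may(Permission::CommentOnPr)); // lower tiers included
        assert!(!vouched.may(Permission::RunCi)); // higher tiers still gated
    }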
>the author just injected garbage bytes manually into the database header, and then argued that this corrupted the database
>Steps to reproduce: Modified cli/main.rs to include a Vec with limited capacity. Forced a volatile write beyond the allocated bounds using std::ptr::write_volatile.
>author claims to have found a critical vulnerability that allows for the execution of arbitrary SQL statements. Imagine that? A SQL database that allows the execution of SQL statements. How can we ever recover from this.
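For anyone who hasn't been subjected to one of these, the second "repro" amounts to something like the following sketch (my reconstruction, not the reporter's actual code): the harness commits undefined behavior against its own memory, so any corruption it then observes is self-inflicted, not a bug in the database under test.

    fn main() {
        // Allocate a small buffer...
        let mut v: Vec<u8> = Vec::with_capacity(8);
        let p = v.as_mut_ptr();
        unsafe {
            // ...then deliberately write past the allocation. This is
            // undefined behavior in the *test harness* itself; any program
            // can corrupt its own memory this way, and it says nothing
            // about the library being "tested".
            std::ptr::write_volatile(p.add(16), 0xFF);
        }
    }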
I wonder why they are even doing this. Do any of these PRs ever win any money? It feels like they are burning down a forest thinking they'll find gold if they do, without any evidence that there will be any gold after the forest is burnt down.
Isn't there some alternative approach? I.e. when someone submits AI slop they get a strike. Three strikes and you are suspended from submitting to the bug bounty for x months/years?
*Edit - I get it. It seems like the authentication is a challenge.
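For what it's worth, the bookkeeping half of a strike system is trivial; the entire cost lives in the human review that justifies each strike, and in the fact that bans only bind an account, not a person. A toy sketch (names and threshold made up):

    use std::collections::HashMap;

    struct StrikeLedger {
        strikes: HashMap<String, u32>,
        max_strikes: u32, // e.g. 3
    }

    impl StrikeLedger {
        /// Returns true when the account crosses the suspension threshold.
        /// The expensive part is everything that happens *before* this
        /// call: a human had to review the submission to call it slop.
        fn record_slop(&mut self, account: &str) -> bool {
            let n = self.strikes.entry(account.to_string()).or_insert(0);
            *n += 1;
            *n >= self.max_strikes
        }
    }

    fn main() {
        let mut ledger = StrikeLedger { strikes: HashMap::new(), max_strikes: 3 };
        ledger.record_slop("acct1");
        ledger.record_slop("acct1");
        assert!(ledger.record_slop("acct1")); // third strike: suspended
    }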
How about "It costs $1000 to submit a bug bounty for approval", and raise the reward to $2000 (or $5000 if it's in the cards, since that will have a deterrant impact on non-AI responses).
I think that's entirely sensible. Doesn't even have to be that expensive, just expensive enough to deter people who go "oooh, free money", and expensive enough to compensate for having to review slop far enough to realize it's slop.
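The deterrence arithmetic is easy to make concrete. With the hypothetical $1000 fee and $2000 reward above, a submission only has positive expected value if the submitter believes their report is valid more than half the time:

    /// Expected value of one submission under a fee-gated bounty.
    /// `p_valid` is the submitter's own estimate that the report is real.
    fn expected_value(p_valid: f64, fee: f64, reward: f64) -> f64 {
        p_valid * reward - fee
    }

    fn main() {
        // A spray-and-pray bot whose reports pan out ~1% of the time loses money:
        println!("{:.0}", expected_value(0.01, 1000.0, 2000.0)); // -980
        // A careful researcher confident in a finding still comes out ahead:
        println!("{:.0}", expected_value(0.90, 1000.0, 2000.0)); // 800
    }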
You still need to spend effort reviewing the code to figure out when you can give a strike, thrice for an actual ban. This would still waste precious maintainer time.
They mentioned they had identified alternatives, but that it would be costly to implement them. One can imagine that evading a ban by generating a new user account would be easy for an LLM agent. It's going to be a long, long game of whack-a-mole.
This probably gets solved outside of the level of an individual project. No small team can handle this without building a whole product just to handle the bug bounty.
(Thank you.)
You start by rejecting those PRs, saying "write more maintainable code, not quick hacks".
Management starts pressuring the original developer "why is it not merged yet, I thought you had it working".
That developer hits back with "well, it failed code review, they want me to refactor it".
Management goes back to the reviewer, "why did you fail this? It meets coding standards right? Pipeline is green".
Reviewer says "Well, yes it technically meets coding standards but it's full of hacks and is not future proof, it will bite us."
Management says "If we coded for tomorrow we'd never get anything done. Don't be so awkward". And then code gets merged.
Then you learn to just let these people go wild. If it hurts in the future you have a nice little "I told you so". But in my experience, management doesn't actually care if it hurts us in the future, it's not their problem. They just say "Well give me bigger estimates if you need to refactor". Fair enough, it's not a big deal but it is a pointless slog of picking up the pieces.
The other way it comes about is when the original developer just isn't really that good of a developer. So you end up in such an endless feedback loop trying to get the code in a good state that you piss everyone off and it's just easier to merge it.
Some hills just aren't worth dying on. And these guys can be exploited for your own advantage if you want to get code merged quickly ;)
How do you go about that when, for example, my previous employer just allowed any software developer to commit to any branch, and there was never any code review happening?
Restricting changes to PRs is nowhere near universal.
The problem is allowing this kind of frantic tactical development even in "peace time".
In theory, sure, but in practice, to echo the others, you often don't have a choice, because of power dynamics/politics.
It's easy to say "it's management's fault", but the principle is the same - these guys are spammers and quacks (and deserve nothing less than to be confined to the level of hell reserved for spammers); they just have to spam long enough and something will get through (volume over quality). And after their "success", i.e. fraud, they can ditch the company and move on to the next. I've seen multiple "seniors" like this, not actually very good at the work, but great at pushing half-baked slop.
For their bug bounty program, the company could just charge $5-10 per submission to guarantee everything you send gets thoroughly reviewed by a human, which would completely eliminate bot slop DDoS submissions overnight. If your bug and PR were actually good, you get the $10 back plus the $1000 bounty; if they weren't, you need to do better due diligence next time, and the skilled human feedback on why they weren't good is a valuable lesson for your engineering career that only cost you the price of a Starbucks latte. It also cuts out all the scammers polluting the system. This way everyone wins.
I said it before and I'll say it again, for opportunities open to the entire world on the internet, adding monetary friction is THE ONLY (anonymous) WAY to filter out serious people from bad actors doing spray-and-pray hoping they'll make some money, or get that job, by weaponizing AI bots. You can't rely on honor systems and a high trust society on the anonymous open internet, you need to financially gatekeep to save yourself and your sanity, and make sure the honest serious people you want to engage with don't end up drowning in the noise of the scammers and unscrupulous opportunists.
But we can't shut ourselves down just because we refuse to apply solutions to AI slop DDoS.
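A sketch of the refundable-deposit mechanics being proposed (the $10 deposit and $1000 bounty are the commenter's hypothetical figures; amounts in cents to avoid floating-point money):

    /// Outcome of the paid human review in the deposit scheme above.
    enum Review {
        Valid,   // real bug: deposit refunded plus the bounty
        Invalid, // not a bug: the deposit pays for the reviewer's time
    }

    fn payout(deposit_cents: u64, bounty_cents: u64, outcome: Review) -> u64 {
        match outcome {
            Review::Valid => deposit_cents + bounty_cents,
            Review::Invalid => 0, // the fee funded the feedback you received
        }
    }

    fn main() {
        assert_eq!(payout(1_000, 100_000, Review::Valid), 101_000); // $10 + $1000
        assert_eq!(payout(1_000, 100_000, Review::Invalid), 0);
    }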
So many problems can be solved that way, including customer support. Instead of having to post a sob story on Twitter and HN when the AI at BigCo bans my account for no reason, why not charge me $100 for access to human support that is empowered to triage and escalate genuine issues? Then, issue a refund if the problem is on their end.
I don't understand why this isn't a thing.
You could probably adjust the cost per region, but then you open yourself up to spam bots again because it’s trivial to spoof one’s location.
It would almost need to be analog. Fill out this form and drop it in the mail with 10 bucks inside.
Sure there is. That would be casus belli for a real ban.
Totally.
But seriously, I guarantee you the opposite is more common: the incompetent devs who can't manage to ship anything keep trying to do "surgical and small edits" after a week of thinking about them, and then have them blow up in prod for someone else to fix quickly, because if it's up to them, it'll take 2-3 sprints.
10 years ago I was a lot closer to what y'all are talking about. After having more and more colleagues I can no longer agree, and I suspect this is mostly the opinion of incompetents who try to discredit regular devs.
Another thing they always lack is the ability to see when a large change is simply what's required to ship the feature in a stable manner. Sorry to say this, but opening this discussion by trying to discredit large change sets in the age of AI is incredibly inept.
When you've written your software well, large changes are possible and increase stability when you actually need a fundamental change of behavior. Which can come from a minuscule requirement.
But to close off on the topic of this article: they made the right call. In the open source context you cannot have this kind of incentive anymore, with OpenClaw continuously shitting out one PR after another.
AI evangelists often say "typing" instead of "writing code", because they don't really understand -- or it's not lucrative for them to acknowledge -- what makes writing code hard.
We don't just write code to be executed by machines, we also write it to be read by humans. Code reviews, debugging, future changes -- all of these things involve reading and understanding the code someone wrote. And until we have an AI that we can actually hold responsible for its actions, we can't delegate the understanding to it.
That's the level that most competent software engineers should be working at.
Delegating understanding to LLMs is a totally different thing. It's not plumbing at all. It's more like hiring an unlicensed, generalist but well-reputed handyman from Craigslist and then going out to a movie while they do the work. It could turn out fine, or not, and if it does work out, it could even save time and money if their rate is low enough.
But it's not plumbing anymore, and you should be wary about billing plumber's rates for their work or taking on liability for it if you haven't even made sure that work meets your own standards of trade and quality.
You can argue that it's "one more level of abstraction" but it's a qualitatively different kind of abstraction. And in the economy of skilled labor, and the legal landscape of accountability and liability, that difference is enormously relevant.
Your point about AI being another abstraction similar to the "mostly deterministic" C compiler also comes up often but there are many arguments against it. If you think the determinism of a compiler and an AI are similar then I'm not sure whether you know anything about how either of them work or have even compared examples of what they produce.
PS: We have way too many levels of abstraction now; that doesn't mean the right answer is to add another. Even worse, unlike the others, LLMs aren't deterministic.
Anyway, my point is that prompts are non-deterministic, and there's no way of inferring what code output by an LLM is intended to do, because that's not how LLMs work.
It's almost impossible to have a rational discussion about the effects of this technology because this point is so easily lost. Even super smart, credentialed, expert people easily (and often!) fall into the trap of anthropomorphizing the bot because it makes human noises. It's really important to remember the mechanical principles underlying its function. No different from any other computer program in that respect, the difference is the psychological hold it gets on the user. There is no intention behind its actions, but it's very easy to hallucinate one because with every other thing that speaks human language there is some intention behind the words and actions.
LLMs, as pushed currently, are not deterministic.
Moreover, I have yet to see a compiler whose output tries to convince me I'm completely right and brings very smart, interesting points to the table. Quite the contrary, actually, though error messages generally stop short of telling users how stupid the proposed code is when it doesn't even pass mere syntax and basic logic checks.
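To make the determinism contrast concrete (an illustrative toy, not how any particular compiler or model works): a compiler is a pure function of its input, while typical LLM decoding samples from a distribution, so identical prompts need not yield identical output.

    /// Stand-in for a compiler: a pure function. Same source, same output, always.
    fn compile(source: &str) -> u64 {
        source.bytes().map(u64::from).sum()
    }

    /// Stand-in for temperature sampling: the chosen token depends on a
    /// random draw in [0, 1), so repeated runs on the same "prompt" can differ.
    fn sample_token(weights: &[(char, f64)], draw: f64) -> char {
        let total: f64 = weights.iter().map(|(_, w)| w).sum();
        let mut acc = 0.0;
        for &(tok, w) in weights {
            acc += w / total;
            if draw < acc {
                return tok;
            }
        }
        weights.last().expect("non-empty weights").0
    }

    fn main() {
        assert_eq!(compile("fn main() {}"), compile("fn main() {}")); // deterministic
        let weights = [('a', 0.6), ('b', 0.4)];
        assert_eq!(sample_token(&weights, 0.1), 'a'); // one draw...
        assert_eq!(sample_token(&weights, 0.9), 'b'); // ...another draw, another output
    }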
I wouldn't advocate for using different tools, but everyone should be able to reason about the machine instructions underlying their code. Both in the immediate sense of the assembly a simple function turns into, and the tricks language runtimes use to enable their neat features.
The attitude that things are magic is poison. There is a difference between feeling confident something is comprehensible and not yet needing to go learn it, vs resigning to a position of powerlessness.
The main issue is that not everyone cares about the semantics of what they're writing. You don't need to know assembly to talk about C's semantics, or know C to talk about Python's semantics. It does not require going up and down some abstraction tower.
A better example would be if you’d changed the behavior of the library as you did this work, and the library changes introduced hard-to-detect bugs across the application.
PRs can be huge, and that's OK. For example, codebases that moved from Python 2 to Python 3 would have had huge PRs, but the cognitive load was well understood.
In the future, I would definitely encourage you to explore a more iterative solution—fix the first 50 occurrences first, or maybe all the occurrences of a handful of functions. For example, if you have utility functions A, B, C and D, maybe fix functions A and B first, and then C and D second.
Ultimately, at the end of the day it's going to depend on how much code you're touching. If you're only touching 100 library calls, then it's probably easy to do them in one PR. But if you're updating 1000 library calls, you'll need to take a more iterative approach. Building those skills now will serve you well in the future when working on bigger codebases and harder refactors.
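If the codebase is Rust, one low-tech way to keep such an iterative migration honest (a sketch; the helper names are made up) is to deprecate the old call so every remaining call site surfaces as a compiler warning, a live checklist of what's left:

    /// The old helper, kept alive during the migration. Every remaining
    /// call site now produces a warning on each build.
    #[deprecated(note = "migrate to read_line_v2")]
    fn read_line_v1() -> String {
        String::new() // stand-in for the old implementation
    }

    /// The new helper that call sites are moved to, batch by batch.
    fn read_line_v2() -> String {
        String::new() // stand-in for the new implementation
    }

    fn main() {
        // Already-migrated call sites compile cleanly:
        let _ = read_line_v2();
        // A not-yet-migrated call site; without the allow attribute, the
        // compiler flags it as deprecated on every build:
        #[allow(deprecated)]
        let _ = read_line_v1();
    }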
> setup_terminal(); enable_input(); while(...) inp = read_character(); .....
vs
> readline()
So yes I could've stubbed out the other stuff and replaced just one, but that's just adding tech debt
Otherwise you may end up in a situation where you have your function, you think everywhere is using it, you fix a bug in it, and poof, you've introduced bugs at the other call sites.
That naturally meant reading and understanding more code than writing. Sometimes my LOC count was even negative, and I was proud of that accomplishment.
Now with AI I write even less, and I've given up on the dream to gain fulfillment that way. The ability to quickly understand large amounts of code from questionable sources, be they machine or human, should hopefully stay valuable until my retirement, especially when supported by AI? What do you think?
The problem, as I see it, with prolific use of AI to generate code is that it goes in the exact opposite direction. More and more code is bolted on top of existing code; more and more edge-cases, patch-ups, workarounds, etc. accumulate; the codebase grows and grows. In the end, no matter how good you are at understanding "code from questionable sources", you're still a human being. The AI can generate new code at rates several orders of magnitude faster than you can ingest and understand it, and when your meat brain becomes exhausted, the machine does not tire. From a business perspective, your employer will weigh their options: they can wait for you to interpret the code and produce good code (whether by hand or by machine + human review), or they can just keep pulling the lever on the slot machine until it works well enough to sell. And for the business exec just looking for the fastest path to paydirt, I'm afraid the latter option is going to look way more appealing.
And as this goes on, folks who can run an LLM _and_ understand/criticize/rework/re-prompt are just going to get more and more scarce. Even using an LLM in my preferred style, where you guide the model through a long series of small steps, will fade away.
No, it's that quite literally the PR submissions are spammed. Whoever made them is acting like a spammer, sending out lots of garbage and hoping some of it sticks; i.e., they're not doing their part in reviewing what the AI found and generated.
Even with AI, just tell it to make smaller, self-contained PRs. I do this with Claude or GPT models and they do just fine.
Beautiful theory, but only that.
Then again, I have zero network. Maybe you can just call someone on the phone and jump ship next week? I can't. Many other people can't either.
My idea right now is to find ways to do things mostly my way and introduce a near-perfect meritocracy in my team. No seniors or juniors; I am technically "the most senior" but we all have differing and unique experiences. I share my experiences and when I feel stronger about something I make it clear why but I don't go sad in the corner if the other engineers overrule me.
Regardless of how the market is, I like getting along with people. Of course sometimes (actually: often) it's not possible in which case either a team restructuring should be done, or one should indeed leave (which is the nuclear option; not just "oh well, things did not work out").
Everybody is not you.
The market is bleak - but don’t mistake everyone’s leverage - or understanding their leverage - for your own.
Can you pick up the phone and be in the next job the next week?
If you rubberstamp some people‘s PRs all the time, you can then get them to greenlight your unpleasant PRs via pm instantly.
The other way round, retaliation: I once added some serious review notes to the PR of a very senior engineer because it was a dangerous topic. He would then spend the next months nitpicking every single PR I created. Had to post my PR in slack whenever he was not online to get them merged. After that I never seriously reviewed his PRs again. Too much of a headache.
Do you want one big PR or 100 small ones? You can't escape the sheer volume of code it's going to produce.
If you don't ever have a massive PR from a dynamite session, then you cannot ever be better than "average and plodding". So the question is, what's the context of the massive PR and how should it be handled?
* Mature product making money, intermediate engineer just refactored everything so it's "better"? Shut the fuck up, kindly please, you will have to demonstrate that you understand why things are this way and why it's better before we even have this conversation.
* Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Obviously there are many other contexts and these are 2 extremes in a multi-dimensional space. But if the process is "we litigate every line", then that's just not an innovative place to be. Yes, most PRs should be small, targeted, easy to review and tied to a ticket but if you're innovating? By definition it's a little different.
I can fling that back to you: very often the team hates the conclusion I arrive at, which is "It worked during your initial crunch and then everyone is just afraid to change it, which means your test coverage is far from good -- why is it not enriched?"
I am not trying to be an arse on purpose but the inertia and cargo-culting and tribe-defending practices I've seen during my contracting years (10-11) made me almost physically sick. Programmers are a fiercely territorial bunch and it's often to the detriment of the organization.
Of course the reverse cases exist: where the domain is difficult and ugly hacks had to be done so the project works and makes money. Absolutely. I love receiving this knowledge and integrating it; makes for interesting engineering discussions.
> Greenfield dev, trusted engineer getting from 0 -> 1 on something big? Maybe it shouldn't be held up in committee for 2 weeks. Maybe most objections will be superficial stylistic concerns.
Yep, full agree. And often times these stylistic concerns are not even that; they are often "I suffered here at the beginning, this green-horn should suffer as well!" which is honestly pathetic and it also happens quite a lot.
That's just cope to avoid learning how to turn a big change into a well organized patch series.
I'm not saying one shouldn't learn how to stage large changes into a mature codebase. Sometimes the overhead is very worth it, maybe most times if you're close to the profit center of a faang. But one should understand multiple ways of working, for different situations.
I'm not being snarky. I put different review standards in place for different repos on my team. Sometimes the standard is no standard. Push to main. Figure it out later.
How many times have you reviewed your old code and been appalled at the terrible quality? You personally created slop; it's no different from GenAI output except that a human had to spend precious time crafting it. You likely were indeed bottlenecked by your ability to churn out code that you just had to get to work, for one reason or another.
The real issue is in the asymmetry when one party can use automation to create more code than another party can possibly manually verify.
So all we have to do is write code without reading or understanding it! Larry Wall was right all along!
Anyone trying to suggest that AI hasn't sped up quality code production is just insisting on keeping their head in the sand, IMO.
https://github.com/UnsafeLabs/Bounty-Hunters
The corresponding leaderboard:
https://clankers-leaderboard.pages.dev
Almost every time someone on HN asks how to increase their chances of employment, the response is to contribute to other people's Git* projects.
It's likely to get blacklisted by AI bots, soon enough, though.
Asking people to pay to submit bugs would start a firestorm of internet drama about asking people to do free work for the company and pay for the privilege. It doesn’t matter if the program actually paid out.
If they got even one report closed incorrectly we would never hear the end of it.
At this point there isn't an excuse.
In those cases you already lose time, but in the future you would also lose money.
Unfortunately you don't know how a company will react before submitting, especially if it's a small one.
I'm not trying to suggest they _need_ to implement it. Like I said, closing it is reasonable. Completely aside from any other considerations, one could just decide that they don't feel like dealing with it. But there are other options.
For those who encounter bugs as part of their employment, they'd now need to convince their employer to fork over money up front. For most employers, getting them to spend even insignificant money is like pulling teeth.
But even for the self-employed or hobbyists, gambling real money on "are they going to be a jerk about my exploit report". No offense towards Turso, but the bulk of software firms are TERRIBLE about handling reports like that. Many already have unstated policies of screwing people out of deserved bug bounties at every step.
To submit such reports today already requires you to accept that your work is, statistically, just going to be a bunch of free labour that you gave away for the betterment of the product's users. Adding a cash fee just further deters submissions, especially once people haven't gotten their money back a few times. (Consider how many "AI detection tools" are themselves incredibly unreliable machine-learning or even LLM systems.)
I'd say closing a program which doesn't work anymore is a better idea.
If they have to pay for reviewer time for each of 1000 reports, then the scheme stops being viable.
If you can think of something that isn't solved by one of those two mechanisms, I'd be interested in hearing it enumerated.
It's even possible to directly link this to maintainers/employees - if you can review 10 such AI/real things per hour (likely more if it's AI slop that's easy to detect), you're generating another revenue stream. Now, I have no idea if these guys are based in SF Bay or a 3rd world country with low COL but as an "add on", $100 an hour isn't too shabby (and can be on the "low end" if one's good at spotting AI crap.)
Side note, isn't it possible to have some way to verify if the "vulns" are actual vulns or not? ...Heck why not throw an LLM at it, powered by a single $10 submission fee?
AI is really throwing a wrench in the economics of software development, isn’t it?
Sounds a bit weird for an open source project but I can tell you that the one company I worked at that used Phabricator did pay (and they definitely wouldn't have otherwise) so I think it's a viable strategy. Plus it makes you immune to slop!
On the other hand, they did shut down a year or so ago. Didn't say why.
It can't be on individual maintainers to stop this; IMO it's on GitHub (and GitLab) to stop these sorts of accounts from even getting to the point of submitting PRs. It's essentially spam.
Look at the user who created the first PR they reference https://github.com/Samuelsills. This is not an account that should be allowed to do anything close to opening a PR against a well known repo.
> It is possible to set up automated systems to gatekeep this, but with a non-negligible dollar value attached to it, the incentive is just too great for the AIs to just keep arguing, reopening the same PR, etc.
AI lets good-faith bug hunters look through more repos they are not deeply familiar with. They may recognize a bad pattern quickly, almost like a very specialized static-analysis rule. But without project context, it is not always clear whether something is a real bug, a footgun, expected behavior, or just out of scope.
The blog shows obvious slop examples, but I think borderline accepted vs rejected examples would be more useful. They would help people understand what is worth reporting and what would just drain maintainers.
It could also help to ask reporters to clarify how the bug was found so you let people set reasonable expectations: "AI-found and manually confirmed", "AI-assisted", or "no AI used".
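Those three labels are cheap to encode on a report form, and they mostly matter for how a triager budgets attention. A sketch (the type name and the minute budgets are mine; only the label wording comes from the comment above):

    /// Self-reported provenance labels from the suggestion above.
    enum Provenance {
        AiFoundManuallyConfirmed, // "AI-found and manually confirmed"
        AiAssisted,               // "AI-assisted"
        NoAiUsed,                 // "no AI used"
    }

    /// How much up-front scrutiny a triager might budget per label.
    /// Purely illustrative numbers; the label is a hint, honest or not.
    fn triage_minutes(p: &Provenance) -> u32 {
        match p {
            Provenance::NoAiUsed => 30,
            Provenance::AiFoundManuallyConfirmed => 20,
            Provenance::AiAssisted => 10,
        }
    }

    fn main() {
        assert_eq!(triage_minutes(&Provenance::AiAssisted), 10);
    }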
And why would they tell the truth?
If the bug hunter is acting in good faith, they can communicate how much scrutiny they think their report deserves, which may reduce maintainer frustration.
If the bug hunter is acting in bad faith, and they claim "no AI used" but the report shows obvious AI-generated content, detectable by a classifier, maintainers can dismiss it more easily.
I was thinking of using it for my full stack Rust apps just so everything works with cargo and I don't have to bring in SQLite separately.
https://x.com/doodlestein/status/2052910351474209258
Last month I tried my hand at finding a way to tell whether an OSS project is slop or not, based on the amount of "human attention" it received vs the amount of code it contains. The idea is that a 100k LOC project which received 3 days' worth of attention from a human is most certainly slop.
The approach doesn't work very well, though¹, mostly because it's hard to gauge the amount of attention that was given. If I see one commit with +3000 LOC, I can assume it's AI-generated, but maybe you're just the type of dev that commits infrequently.
Maybe we need some sort of "proof of human attention" for digital artifacts, that guarantees that a human spent X time working on it.
¹ I wrote about it here https://pscanf.com/s/352/
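For the curious, the crudest form of that heuristic is a pass over `git log --numstat`. A sketch that shells out to git (the 3000-added-lines threshold is the arbitrary figure from the comment above, and a hit is a weak signal, not proof: vendored deps and generated files trip it too):

    use std::process::Command;

    fn main() {
        let out = Command::new("git")
            .args(["log", "--numstat", "--format=commit %h"])
            .output()
            .expect("failed to run git");
        let (mut current, mut added) = (String::new(), 0u64);
        for line in String::from_utf8_lossy(&out.stdout).lines() {
            if let Some(hash) = line.strip_prefix("commit ") {
                if added > 3000 {
                    println!("{current}: +{added} lines, check for generated code");
                }
                current = hash.to_string();
                added = 0;
            } else if let Some(n) = line.split_whitespace().next() {
                // numstat lines look like "added<TAB>deleted<TAB>path";
                // binary files show "-", which parses as 0 here.
                added += n.parse::<u64>().unwrap_or(0);
            }
        }
        if added > 3000 {
            println!("{current}: +{added} lines, check for generated code");
        }
    }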
I stay pretty busy[0], and have been accused of "gaming" my GH repos.
That's not the case. I'm retired, experienced, and working on software all day, every day. I just don't get paid for it.
I also don't especially care, whether or not anyone thinks I'm a bot. I eat my own dogfood. Most of my work is on modules that I use in my own projects.
[0] https://github.com/ChrisMarshallNY#github-stuff
Humans are bad at writing code. Garbage PRs and slop have been a problem in open source and bug bounty programs since long before AI came on the scene.
We need better AI so that there's no need to solicit external bug fixes, and better AI so other contributions can be evaluated for usefulness and quality.
What do you care if a human ever looked at it at all? It implies that humans are adding value to the process. It's possible for a human to add value. The right human can add tremendous value. But I'll take a completely autonomous AI over 99% of the human software engineers and 99% of the people contributing PRs and bugfixes.
It was hard to keep up with slop before. It's a lot harder now. AI will help weed through the garbage.
A magical, hypothetical AI that always gets it right and will make all these problems go away is neither a solution nor a plan. It's wishful thinking.
Unless you stop accepting outside contributions at all, there's simply no way to determine if a human was involved in the process. Any mandate that all contributions come from humans will fail because there's no detection or enforcement mechanism. You have to assume it's slop either way, and improve your ability to vet it. Only another AI can do that, because we don't have enough qualified humans to keep up.
We already know AI is spamming unreliable crap and slop. The apparent solution is "more, better AI".
Why wouldn't this AI for screening all this also produce crap and slop?
Is the plan there "AI but it actually works right and doesn't produce crap and slop"?
Random contributions to bug bounty programs or random PRs for new features come from all corners: expert engineers producing fantastic code; intermediate engineers trying their hardest but producing mediocre code; junior engineers wasting everyone's time with ill-conceived poorly-written code; and all of the above with varying amounts of AI assistance. And now also purely-automated AI, where the only human involved is pointing their AI at GitHub with no guidance.
You can't stop it on the inbox side. Either you turn the inbox off, or you leverage AI to help you separate the wheat from the chaff.
By "bought" I don't mean they won't sponsor stuff. I mean they've got a public standard that can be trusted to some degree.
Your final example isn't exactly what I'm thinking of here. I'm thinking that a well-known identity and name within a community bypasses a lot of this BS with AI slop and communities bombarded by the slop will continue to close themselves off which will increase the value of being a known, contributing member.
Idk I need to figure out a way to articulate this better but essentially the value of being verifiably human is increasing IMO.
I think we're very close to those two lines crossing. Which is another way of saying that people might care today whether something was generated by/with AI, but I don't think they will care soon. Humans will still decide what gets created, but the how won't matter as much.
You might be right that the software equivalent of a sourdough-baking Reddit community will continue to exist. But most people will buy bread at the store and have no idea how it's made.
For example, our community [0] asks you to submit an application before you're granted an invite code. If you attend a meetup in person we'll grant a "Verified Human" badge too. This gives you the power to invite others into the fortress: you're responsible for them.
The price to pay is steep because community growth is now glacial. It really does solve the slop problem though. (I'm also no longer convinced maximizing growth is Good.) Maybe there's some in-between solution for those who dislike invite-only spaces.
[0] https://handmadecities.com/chat
The project does not accept bug bounty submissions without BBBS attestation. To get it, you must first submit your report to the BBBS for review.
Now, if this is your first submission (you are unknown to the BBBS), you must submit $50 to the BBBS along with the bug report, to pay a human to spend an hour looking at your work to verify it is written in good faith. This is not a review of whether the bug is real or valuable, just a readover to verify the report is coherent and plausible. If you have done this before, you can get a free attestation based on being a member in good standing, but submitting slop (per the judgement of the BBBS reviewer or the project receiving the report) is an account ban.
The BBBS couldn't steal your work and submit it themselves if they gave you some sort of signed hash as a receipt, which as a side effect would also be a deterrent against bounty programs stealing your work.
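The receipt idea needs nothing exotic, just a hash and a signature. A sketch assuming the `sha2` and `ed25519-dalek` crates (my illustration of the idea, not an existing BBBS API): the BBBS signs a digest of the report plus a timestamp, so the submitter can later prove exactly what was sent and when.

    use ed25519_dalek::{Signer, SigningKey, Verifier};
    use sha2::{Digest, Sha256};

    fn main() {
        // Fixed demo key; a real BBBS would generate this once, keep the
        // secret half offline, and publish the verifying half.
        let key = SigningKey::from_bytes(&[7u8; 32]);

        // The submitter sends the report; the BBBS hashes it with a timestamp...
        let report = b"header corruption in pager, steps attached";
        let timestamp = "2026-02-01T12:00:00Z";
        let digest = Sha256::new()
            .chain_update(report)
            .chain_update(timestamp.as_bytes())
            .finalize();

        // ...and returns the signed digest as the submitter's receipt.
        let receipt = key.sign(digest.as_slice());

        // Anyone with the public key can later verify the receipt, proving
        // the BBBS saw exactly this report at that time.
        assert!(key.verifying_key().verify(digest.as_slice(), &receipt).is_ok());
    }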
Submissions would only be expensive for an anonymous user, enabling the low-friction, high-trust communication under which collaboration works best once reputation has been established.
The BBBS itself won't be overrun by slop since the price of establishing an account far exceeds what a bot might expect to make with a single malicious submission. Nor can legitimate established accounts be sold since the cost of creating them exceeds the value to be expected from abusing them. Moreover, the cost to establish a reputation as a bug bounty hunter is small in dollars compared to the cost in time and expertise that a legitimate hunter would be expected to expend in the course of their work.
The vast majority of slop would go away as the cost of a first submission is much too high. The cost to the project is close to nothing - integrating with the BBBS attestation API. The cost to a legitimate bug bounty hunter is low - some human review while establishing a reputation, which could even be made useful if it came in the form of feedback. All review is paid for by the submitter, so no one is trying to counter infinite slop with volunteer hours.
Moreover, the BBBS can serve as a mediator of trust, not only against AI, but as a place to receive reputational merit for high value work and trustworthy bug bounty programs.
I realize I am describing a lightweight guild, which is subject to well known political failure modes (the most significant of which is exploiting newcomers), but the concept has the advantage that guilds have functioned as successful slop gatekeepers in society for a very long time and a lot is known about how to make them work.
But there must be some announcement about the project somewhere? I'd like to get that to pass it around.
An ultimate honeypot would not give the creator so much financial liability for passing out "generous" rewards.
Let’s take curl as an example. Daniel Stenberg wrote about how he had to stop curl’s bug bounty program due to prevalent AI slop[0]. He also wrote about how he eventually restarted security bug reports without a bounty[1]. It turns out that without a bounty, the reports are higher quality. It almost seems like by removing the monetary incentive, it attracts people who are reporting bugs due to genuine altruism and concern for security, rather than hope for a quick buck. It feels like it harkens back to an earlier age of free software development on the Internet untainted by commercial interests.
So my opinion is that security bug reports should continue, but bug bounties should not. Turso should probably still encourage corruption bug reports but with no bounty.
[0]: https://daniel.haxx.se/blog/2026/01/26/the-end-of-the-curl-b...
[1]: https://daniel.haxx.se/blog/2026/04/22/high-quality-chaos/
...large swaths of approaches on online engagement just becoming non-viable
Take that clankers
Someone automated rewriting Bun in Rust, allegedly fixing the bugs.
Edit: it is genuinely wild. I don't know of another product category that selects so perfectly for the WORST type of person to be its enthusiast. Just every single person I see hyped about AI is fucking insufferable on at least one, and usually multiple, axes.
China says no. What are you going to do now, sanction it? =)
At this point it's impossible, so I concur with the parent: forget about the shutting it down and think of something actually realistic.
You do very well in battles against straw men.
What doom? This is a mildly annoying problem that will likely be self-correcting long term.
Sounds like you can't accept AI is here to stay
No. You go out the door, and then I clean it up, and you don't get invited back. That's how that works.
Even pre-AI it was obvious that contributions have to be vetted for a bunch of reasons.
In other words, shut it down.
Not great for privacy or ad-hoc contributions, but I don't see a way out of the muck without some kind of trust net.
New identities are cheap.
Denominated in BTC to avoid chargebacks etc.
(Okay Claude is too expensive, but Deepseek can probably handle it.)
Skynet has won.