> PR approval is too boolean. The PR is approved or it's not approved. Real code review, like real life, lives in the middle
This is have-your-cake-and-eat-it. PR approval is a permission, so it is a boolean. Of course it is. Either the code can be merged or it can't.
What's really being described here is just something to make you feel slightly better about yourself whilst approving code you hate ("we should revisit this..."). Just open a new ticket.
I was in camp 'boolean', but I think this has convinced me. I always had a problem that there were developers who didn't really understand the code, but would click 'approve' anyhow because they didn't see any problems in the parts they understood.
This meant that they were completely unable to actually 'approve' a review, but were only able to reject it. They were juniors, so they'd eventually get to that point, but by then, everyone would be used to just ignoring their approvals.
The nuance is comments on the PR itself, rather than the state of the approval, which is binary (or ternary, if you want to count leaving it in an unknown state for extended periods of time).
The PR needs to have someone who knows the whole thing.
Having several people each review separate parts without understanding the others' can cause interaction bugs. If such bugs cannot happen (say, due to modularity, type-safety guarantees, etc.), then you don't need a partial approve in the first place.
I am not a fan of partial approve. Either you think the code is approvable, or it isn't.
I mean, that's fair, no? If the UX creates an impasse for the user, that leads to friction in the process. There's more than one way to address it: either the user overcomes their own internal dilemma, or the UX helps them get there. For example, it would be cool if there were a way to do a conditional approval with an issue tied to a stacked PR or something similar (just throwing out ideas; the point is to surface the friction as a UX take, not a protocol or API issue with git).
> If I clone a repo, I want a pretty limited history for that repo when I clone. If I start to go back in time, spin up a worker to go fetch that stuff from the VCS when I need it.
2. Raspberry pi as a forge for a long time: also check, the git server shim is super lightweight, it's just an XRPC layer over git repositories + an sqlite3 database. There are folks running it on a RISC-V board with 512 megs of RAM.
3. Actions are critical and they should be runnable on my local machine: IMO this ask is slightly misplaced. it is mostly your build-systems' job to be hermetic, run anywhere, handle cross-builds etc. it would be really cool to "promote" results of such builds to the forge itself.
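For what it's worth, the "limited history when I clone, fetch older stuff on demand" ask from the quote above already exists in stock git as partial clone. A runnable sketch (a local path stands in for a real server, and the toy repo names are made up):

```shell
set -e
tmp=$(mktemp -d)

# Toy "server" repo with two commits.
git init -q "$tmp/src"
cd "$tmp/src"
git config user.email you@example.org
git config user.name you
echo v1 > file.txt
git add file.txt
git commit -q -m "first"
echo v2 > file.txt
git commit -q -am "second"
git config uploadpack.allowfilter true  # servers must opt in to filters

# Blobless (partial) clone: full commit history, blobs fetched on demand.
cd "$tmp"
git clone -q --filter=blob:none "file://$tmp/src" partial
cd partial
git log --oneline   # both commits are present locally
```

Checkout fetches only the blobs needed for the working tree; older blobs are pulled lazily when something like `git log -p` asks for them.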
Surprised to see a raspberry pi hosting data that's supposed to be integral to a workflow. I've been burned too many times by SD card corruption in the past. Do you use NVMe drives nowadays? Just curious.
I've used an RPi 4 with a cheap SSD connected via a USB-to-SATA adapter as a tiny home server for a year now, and it works without issues.
I know that not all USB-to-SATA adapters are compatible with the RPi – I got lucky with the first one I tried (Unitek). Not sure if the RPi 5 has wider compatibility.
A year isn't much. I've gone through four SSDs (SATA, SATA, NVMe, NVMe), about one every other year, while the data store on some cheap WDC spinning rust still spins and won't stop for some years yet.
The Raspi 5 has a PCIe connection for SSD add-ons. I use one with a 2 TB drive and it runs fine; I just leave it on forever. It's been up for almost a year now without issues.
It is the job of the build system, yes. In most cases the problems people want to solve with locally runnable Actions aren't about the build not working; it's the whole integration of the thing: all the YAML definitions, secrets, exactly what commands run, how tools and caches get restored (which your build system _might_ take care of, but the available primitives for that in GHA are very poor), etc.
I do think it's just an awkward problem to solve though, because it essentially devolves to needing to run the entire system somewhere else, which is why every system I've seen like this ends up being trial-and-error.
>it is mostly your build-systems' job to be hermetic, run anywhere, handle cross-builds etc.
Yes, and... the idea here is that it would be neat to extend the hermetic-builds idea so that this can be run locally, or anywhere there's compute, easily. The root problem being called out is that running something until CI says it's green, with a change, commit, and network call in the cycle, is a pain in the ass. (The best way to avoid this churn cycle is to just never write bugs! TFIC ;P)
Both Radicle and Tangled miss the point: these are all for public collaborative work, but what about private repos? Many users work on side projects and use GitHub private repos for this. Once you learn GitHub, you also start your public projects on GitHub.
The point I am trying to make is, until you offer a user the ability to make a private repo for side projects, it's unlikely to take off.
What people want is the ability to make a private repo, go away for a few months and come back to find their repos right there waiting for them.
A remote execution cluster and CAS for build artifacts is a good way to avoid duplicate work on local vs CI, and avoid the problem of needing to trust local builds.
Between Radicle, Tangled and Grasp, Radicle feels the coolest to me emotionally: local-first, p2p; for whatever reason it feels kind of nostalgic.
Grasp is actually pretty cool too, built on nostr, which is maybe a stronger platform in the end? I don't really know enough about it. Stronger as in, you're maybe opening up more interoperability by putting your stuff on an "anything network" vs Radicle's "p2p git data network".
To be honest they're all cool ideas; Tangled feels somehow corporate though.
Grasp seems neat in the way it forces you to sign your commits (which should really be standard practice), but I think GitHub itself has proven that interoperability is not that important.
I can understand that. It's a difficult problem design-wise and needs someone that knows crypto, so I won't hold it against software that labels itself as alpha.
Does this conversation exist in 2026? If we can all code everything quickly and SaaS has no value then just build your own in a weekend and put GitHub out of business?
I think there's a gap in the market for a much simpler type of git service. All I need is a remote host to which I can push projects for others to see. I don't particularly want pull requests, actions or anything like that.
Maybe a way of facilitating "releases" with compiled binary assets (built locally and uploaded).
Forks can be handled by people cloning the repository and uploading a new project.
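The "just a remote host I can push to" use case is already covered by a bare repository on any box you can reach; a sketch using throwaway local paths in place of an SSH URL (on a real server the first step would be something like `ssh host 'git init --bare /srv/git/proj.git'`):

```shell
set -e
tmp=$(mktemp -d)

# The entire "git service": a bare repository to push to.
git init -q --bare "$tmp/proj.git"

# A local project pushing to it (a path stands in for an SSH URL here).
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email you@example.org
git config user.name you
echo hello > README
git add README
git commit -q -m "initial commit"
git remote add origin "$tmp/proj.git"
git push -q origin HEAD:main

git --git-dir="$tmp/proj.git" log --oneline main
```

No pull requests, actions, or web UI; others clone from the same path/URL, which matches the fork-by-cloning model described above.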
Can't a lot of this be accomplished by disabling features you don't need? I just checked my Forgejo instance and per repository there are options to disable: Code, Projects, Releases, Packages, Actions, Issues, PRs & Wikis.
When the solution becomes the problem, an opportunity for disruption opens up. Lots of chatter around this right now. I'd be curious to see if any of the many alternatives popping up gain traction before Github course corrects.
I'm building my own tools. I think people should build their own tools.
The future might look something like instead of paid software or open source software what you get is a set of requirements documents for a code forge, like a recipe. You bake your own.
Then you alter it to your particular use case and set of preferences.
I think the problem is that Microsoft committed to AI totally. There is no way back for them. And this also means that GitHub will suffer from it. Microsoft PR will tell people that AI is the solution to everything, but in reality it will lead to problems that keep coming up again and again. Now, people may say "but GitHub services being down does not have anything directly to do with AI" - and while that may be true, the problem is that Microsoft has already shifted its strategy, so most of its considerations will go toward top-down AI control. Whether people's GitHub workflow is disrupted is at best of secondary interest to Microsoft - and that specific problem will keep resurfacing again and again. Perhaps it will be quiet for 3 months or so, but I am 100% certain that in the not-so-distant future there will be a new drama story about how GitHub is declining. This is step-wise deterioration. Ghostty won't be the last here.

Whether alternatives arise ... that will be interesting to see. I mean, those alternatives need to not suck, but a lot of those websites etc. kind of suck.
They could support hosting your own git instance through their platform. That would IMO be more interesting: you keep your code and run your own runners, but don't get bombarded with bots.
I would use RFC2822 as the underlying format to store any kind of message (pull request, review comment, issue etc.), and of course when displaying messages use CommonMark to style them. Any metadata goes into headers, and Message-ID/In-Reply-To/References headers can be used to link them all together. Using this well defined format you can then decide how to best store and transport all the messages, maybe in a git repo as well, use email, or whatever else works.
I personally think Gerrit works much better than whatever GitHub et al. have for code reviews. As for CI, I would try to keep that out of it as much as possible; just hooks to start a pipeline and to display the result and decide whether to allow a merge or not.
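To make the RFC 2822 idea above concrete, here is a sketch using Python's stdlib email module. Only Message-ID/In-Reply-To/References are the standard threading headers; the X-Review-State header and the forge hostname are invented for illustration:

```python
from email.message import EmailMessage
from email import message_from_string

# A review comment as an RFC 2822 message. Standard headers thread the
# discussion; the X-* metadata header is hypothetical forge metadata.
comment = EmailMessage()
comment["From"] = "reviewer@example.org"
comment["Subject"] = "Re: [PR] Add retry logic to fetcher"
comment["Message-ID"] = "<comment-2@forge.example.org>"
comment["In-Reply-To"] = "<pr-1@forge.example.org>"
comment["References"] = "<pr-1@forge.example.org>"
comment["X-Review-State"] = "changes-requested"  # invented metadata header
comment.set_content("Looks good overall, but the *timeout* should be configurable.")

raw = comment.as_string()          # canonical on-disk/transport form
parsed = message_from_string(raw)  # any RFC 2822 parser can read it back
print(parsed["In-Reply-To"])
```

Because the on-disk form is plain RFC 2822 text, the same messages can be committed to a git repo, sent over SMTP, or indexed by any existing mail tooling.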
It does a mediocre job at all 4. But it does a good job of integrating them all together.
So I agree Gerrit is the superior code review model. But without the other 3 pieces you don't have a product. Even when I was at Google and working every day in Gerrit, I was dissatisfied with the poor integration between code search, code review, and CI.
Google3/Critique/Forge/etc -- Google's internal tooling -- did a much better job of tying that all together.
There's a good argument to be made that the data for reviews could be held in git repos just as easily as the source.
It can be done quite easily: have a branch per review with a known prefix (although these will rapidly clog up the default branch namespace), implement it via git namespaces to keep it distinct from the main namespace, or maybe just use a special branch, e.g. ".reviews", that contains the commit IDs for the tip of each review branch.
It just needs someone who's invested enough to specify it and make a viable implementation, after which people might start adopting it. I guess the reason github and the various forges didn't take this approach is that keeping the review metadata within their ecosystem is what gives their platform value. If anyone could use any local tool they like for reviewing other people's code, there wouldn't be as much vendor stickiness.
EDIT: actually, I guess there are other reasons why you might want your review metadata in a different repository, such as access control and/or cross-repo reviews.
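A minimal sketch of the "known prefix" variant in a throwaway repo; the refs/reviews/ prefix and the REVIEW.md convention are invented here for illustration, not an existing standard:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.org
git config user.name you

# A normal feature commit on the default branch.
echo hello > app.txt
git add app.txt
git commit -q -m "feature: add app"

# Record review material as a commit, park it on a dedicated ref
# (refs/reviews/ is an invented prefix; git-appraise uses notes instead),
# then rewind the branch so the review stays out of mainline history.
echo "LGTM, but rename app.txt" > REVIEW.md
git add REVIEW.md
git commit -q -m "review: comments on the feature commit"
git update-ref refs/reviews/1 HEAD
git reset -q --hard HEAD~1

git for-each-ref refs/reviews --format='%(refname) %(objectname:short)'
```

Since review data lives on ordinary refs, it pushes, fetches, and replicates with the same plumbing as the source itself.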
There were a few efforts like that back in the day (when people still cared about offline and store-and-forward-style operation[1]), like Bugs Everywhere[3], git-appraise[4] which stored its data in Git’s little-known “notes” namespace[5], and git-bug[6] which for some reason I’ve seen mentioned quite a bit in such threads recently unlike the others—though I’m not complaining about one of them getting mentioned at least.
Also, as far as read-only access, Gerrit review data is actually accessible via Git[7] (for review ABCDE, pull refs/changes/DE/ABCDE/meta instead of one of the usual numbered refs under that prefix), and someone made the effort[8] to make it accessible via Git notes too (as mentioned in the post on Git notes that I linked above).
Also also, the Fossil SCM of SQLite fame somewhat famously does[9] do this kind of thing with its builtin bug tracker. It has been relegated to obscurity partly as an accident of history (Git won) and partly on the merits (it is aggressively hostile to the kind of history rewriting we are used to routinely—if not always wisely—performing in Git).
Going back to working on top of Git, though, I think that part of the problem is that you really want custom merge strategies when you’re trying to build a fancy datatype, and Git’s support for them requires a lot of wrapping to make it seamless (the location tracking stuff in git-annex[10] is the only success story I am aware of, and that’s a sizeable Haskell project). The existing porcelain is just too rigid.
[1] Can I have a viable replacement for PGP for that use case? Please stop telling me that I don’t exist and should screw off[2]? Please?..
Lots of good points. As for the last point, most review tools seem to be centered on tracking a branch ref over time. The actual merge strategy probably doesn't really matter as long as the tool can see that the watched reference now points to a new commit.
> This person has a family. This person has hobbies. This person is, at this moment, crying.
Reminded me of one benefit of the email-based workflow.
If I start going through email, it's usually because I'm in the right mood to do so. In that mood, I'll be more focused, because I expect nothing else to interrupt my work.
My problem with notifications is that there's a pull towards clearing them as they show up. But there's no guarantee I have the right energy at that moment.
Also, I find that most notification systems on the web are poor mimics of what email clients already achieved decades ago. Maybe the old folks really got it right by using email.
One of my issues with the PR process is the way it is focused on comments first, instead of the code. You also have the same issue that email threads have: if you comment something substantial, with details, you pretty much always get a response that takes one small point and sort of ignores the rest.
I also hate how noisy the discussion part is, and how GitHub handles that noise by just… hiding part of the discussion. For controversial discussions it is such a mess, and horrible to track what is happening.
It's just not a great discussion platform, while also being the default tab in the PR view.
If I were making my own GitHub, I would make it possible to add one deploy key to multiple repos, same as in Gitea. Why that doesn't work is beyond me and it's a major PITA with no simple workarounds.
> 1. Stuff happens in the wrong order. […] You don't want the feedback loop after the commit you want it before. Let me do an enforced pre-commit hook to run the jobs remotely on the forge and provide the feedback to the user before they push.
My approach is to utilize https://pre-commit.com/ to have all checks available to run locally during commit (or push), but leave it to contributors whether they want to run them or not. If they don't, the checks still run on the forge after pushing. The upside of this approach is that it still allows contributors to commit without internet access or while the forge is down.
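For reference, a minimal `.pre-commit-config.yaml` along those lines; the pinned rev is an assumption, check the hook repo for the current tag:

```yaml
# .pre-commit-config.yaml — run `pre-commit install` once; after that the
# hooks run locally on every commit. CI can run `pre-commit run --all-files`
# so contributors who skip the local hooks still get checked on the forge.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0   # pin whatever tag is current for you
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-merge-conflict
```

The same config file drives both the local hook and the forge-side run, which is what keeps the two in sync.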
> 3. PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission.
Well, that's possible with GitHub and is just a matter of how permissions to merge PRs are configured. Just let every contributor merge changes without explicit approval. And if you want LLM approval, make that a GitHub Action whose success is mandatory for merging.
> 8. On the flip side, since I need to be online all the time to really work with a team […]
Sure, for communication you need internet access, but working on code can be much more efficient if you can do so without relying on internet access and the forge being available.
I'd even argue working on issues and reviewing PRs should be available entirely offline too with just the state getting synced whenever internet connectivity to the forge is available.
I really like Gerrit's workflow with diff reviews as opposed to pull requests, but unfortunately compared to something like gitea it lacks everything else we've come to expect from git hosting (issues, project planning, etc) which makes it seemingly a hard sell for many. I really wish there was a nice diff review platform kind of like phabricator but alas.
The number one thing that gerrit does that is important is keep comments tied to the code between commits.
By which I mean the discussion doesn't get broken between changes, and it makes it far more trivial to iterate on things in the review without breaking the discussion. And for the reviewer to see what's changed between revisions at the specific comment point they're looking at. And then have a nice clean commit at the end, instead of some dog's breakfast of a merge commit with revision commits shoved in it.
My best guess is lack of resources and that they want to focus on the well known PR workflow instead of trying new things out of the gate. It's exactly that, it's a proven github workflow for better or for worse that most people are familiar with.
1. Gerrit's approach requires a stable Change-Id in the commits, so it doesn't just work out of the box with stock git. It requires that the submitter's git configuration and the repository be set up to support this. (Note that JJ supports this out of the box)
2. Cargo cult. We have a whole generation of software developers who grew up with GitHub, love it, and have never known anything else. The "PR" approach is considered orthodox. Unless they went and worked at a Google or somewhere like that, they've probably never been exposed to alternative processes.
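On point 1: Gerrit's real commit-msg hook is a script you install into .git/hooks, but the core idea fits in a few lines. A toy Python version (not the actual hook) that stamps a Change-Id trailer onto a commit message:

```python
import hashlib
import uuid

def add_change_id(message: str) -> str:
    """Toy version of Gerrit's commit-msg hook: append a Change-Id
    trailer if the commit message doesn't already carry one."""
    if "Change-Id:" in message:
        return message  # already stamped; amending keeps the same ID
    # Gerrit Change-Ids are "I" followed by 40 hex digits.
    change_id = "I" + hashlib.sha1(uuid.uuid4().bytes).hexdigest()
    return message.rstrip("\n") + "\n\nChange-Id: " + change_id + "\n"

msg = add_change_id("fix: handle empty input")
print(msg)
```

The point of the stable ID is that rebasing or amending the commit leaves the trailer intact, so the server can group successive revisions of one change.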
This article says git was designed for distributed version control, then says git doesn't work for most distributed projects because there isn't high trust. But I'm puzzled why people would still want to build software with low trust.
Discussions like that need to get into the details: trust to do what? You don't want to let randos force push over your repository but you might want to let them submit patches.
If anybody here does do this, and please do, please abstract out the backend VCS so we can swap in/out better ones. Let people bring their own CI/CD. If you must roll your own CI, don't use YAML, for mercy's sake; use a declarative definition (a la Bazel with CUE, Skylark) or actual code.
This makes total sense to me. Everything that makes up the state of your project should either be part of the versioned repo (git) or not be part of the project.
I created a little Github Issues replacement for myself that puts the issues within the repo so that the work and the todos stay in sync. https://github.com/steviee/git-issues
And I bet there are numerous other projects like that.
If GitLab makes even some of its current premium features free (mostly around push rules and merge request guardrails), most companies will host their own GitLab in a heartbeat.
> Stuff happens in the wrong order. You know the PR. Commit 1: 'Feature.' Commit 2: 'fix.' Commit 3: 'fix.' Commit 4: 'actually fix.' Commit 5: 'please.' Commit 6, made at 11:47 PM on a Thursday: 'asdfasdf'. This person has a family. This person has hobbies. This person is, at this moment, crying. You don't want the feedback loop after the commit you want it before. Let me do an enforced pre-commit hook to run the jobs remotely on the forge and provide the feedback to the user before they push.
How would a pre-commit hook help? Would the developer not be crying and working late if their work was rejected by the pre-commit hook instead of the PR? Also, if the tests are so fast they wouldn’t block the terminal running `git commit` for more than a minute or two, you can just run the tests on the local machine, and you should be running them as part of your workflow.
> PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission. A nice one. With legroom. Let me customize and more easily control this. If the person is a maintainer and the LLM says its low risk/no risk just let them go.
You can do this with the existing forges: you can give trusted people the right to bypass the rules. Or you could build your own small PR auto-approval bot, which hands the diff to an LLM; if the LLM approves, the bot approves the PR on the forge.
The real problem is that CI can only run through the CI system. If the CI runs "make", the developer can run "make" at home and get feedback. If the CI is GitHub Actions, you can only run it on GitHub.
There are different workflows. I sometimes commit code that does not compile, so that I have a checkpoint. Or because it’s 16:59 and I want to leave the office (and I want to protect the code I wrote from hardware failure). I’d be annoyed if any pre-commit checks took more than 2-3 minutes, and for most projects, that is not enough to build and run any meaningful tests (especially remotely).
I started Free.ai as a weekend project with the same mindset. And a month in the work hasn't stopped. So I second this. Just find a good name, it helps.
If you want a certain app with a feature and the app isn't open source, then you may as well just clone the app and add the feature.
Claude Code and Codex (and other tools) have computer use and are perfectly capable of navigating, experimenting, cloning functionality, writing tests...
If the app is open source it's probably easier to just fork and add your features though. And cheaper.
Hell, I use the (closed-source) app Smart Audiobook Player and I wanted Audiobookshelf integration. I asked Claude, it decompiled the app, added the extra code, recompiled the APK and it works perfectly, syncing my book's progress with the server.
I could write something, but it would be "I told Claude to do this and it did, I'm happy", there isn't really much more detail to write about. What would you like to see?
> Stuff happens in the wrong order. You know the PR. Commit 1: 'Feature.' Commit 2: 'fix.' Commit 3: 'fix.' Commit 4: 'actually fix.' Commit 5: 'please.' Commit 6, made at 11:47 PM on a Thursday: 'asdfasdf'. This person has a family. This person has hobbies. This person is, at this moment, crying. You don't want the feedback loop after the commit you want it before. Let me do an enforced pre-commit hook to run the jobs remotely on the forge and provide the feedback to the user before they push.
Isn't this already totally possible? Or am I thinking of Subversion?
Pre-commit hook running remotely on the forge "before they push" sounds like an oxymoron. How does the code get to the forge for feedback? That's a post-push hook!
> Pre-commit hook running remotely on the forge "before they push" sounds like an oxymoron.
I think the implication is that a user doesn't host the CI locally. They are suggesting that there should (maybe?) be a configuration to call an API that submits the code changes for partial or total CI checking. This is only beneficial for orgs/individuals which somehow rank dev effectiveness based on how messy a branch's PR history is and how many times someone has submitted code that passes/fails a build. Maybe due to build cost, maybe due to ego.
I understand what they are asking for, but it feels like misusing git, or something based on some org process rather than a normal development flow. I don't understand the point.
Yeah, you can have GitHub Actions set up to run on arbitrary pushes to your branches, but there's no good interface for linking actions to bare commits and then having conversations, etc. The place where that happens is usually a PR.
> I was prompted to write this after reading the good post about Ghostty leaving GitHub but it's something I've written and talked about for a few years.
Many of us were annoyed already when Microslop, 'xcuse me, Microsoft assimilated GitHub. But we have to be realistic - the alternatives often sucked. SourceForge? I find creating issues there annoying to no end. I can use GitLab, which is a bit better than SourceForge, but I also hate creating issues there. I recently saw codeberg appears to have updated its UI (I think?), but I also find it quite annoying.
What GitHub got right was, initially, the UI, and also a focus on the folks using GitHub, e.g. making things easy/easier for them. They did not get everything right though - I find the wiki support awful. I rarely use the wikis because they are so bad.
I think the really big problem is that there are commercial interests, aka private interests. Microsoft is just one example here; it is a problem literally everywhere on similar sites. In the past I pointed at the example of issue discussions with regard to the xz-utils backdoor - the day after I participated in those discussions, Microsoft took it all down; though it also does not matter whether it was Microsoft or the repository owner. The problem is that individuals can too easily censor potentially useful information. The issue discussions WERE useful, and they were censored. If I remember correctly, the information from back then was never fully reinstated. Perhaps people mirrored it, but I did not see a link.

The point is that I think this shows that top-down control can be really detrimental. And let's be honest: how many of you trust Microsoft? We kind of need something that is decentralised, works reliably and well, and also has a good UI by default and a simple (or at least a good) workflow. And we also need to avoid the situation where private actors can hold everyone else hostage. I have absolutely no idea how to solve the above; perhaps it requires different approaches at the same time.
The www kind of changed, and I feel that private interests - aka huge mega-corporations in particular - have made things a lot worse in the last 10-15 years. That needs to change.
> Let me do an enforced pre-commit hook to run the jobs remotely on the forge and provide the feedback to the user before they push.
You have to 'push' the code to the forge to run it. This code is a 'branch' of the version that is in the repo.
> The PR is approved or it's not approved
The code is either merged or it's not. Sure you can trivially add a snooze feature...
> I don't need 4 eyes on every change, especially in a universe where LLMs exist.
Huh, I do. Anyone thinking LLMs replace human review, when LLMs are already replacing the coders, is just vibe-coding, not building a reliable library.
> Stacked PRs are just better.
I have no idea what this really means honestly. You can stack multiple commits in a single PR. You can create PRs based on other MRs.
> A forge shouldn't do everything. Issue tracking yes. Kanban board, probably not.
The board has to live in-sync with the issues or it's not a board.
The alternative to GitHub is already here. It is called self-hosting and there are many alternatives.
The Linux kernel is not hosted on GitHub and uses cgit. Others use GitLab, or Gitea and there is also Forgejo (Which Codeberg uses) that people are using and can be self hosted.
This is why everyone is now realising that "centralising everything to GitHub" [0] was a terrible idea, and GitHub has now (unsurprisingly) been run into the ground.
As a casual user, I find the UI incredibly confusing. And not just because it's different from GitHub, but because there are so many features and there are menus absolutely everywhere.
I'm sure if I used it more often that I would figure it out, but it's deeply off-putting for someone who only uses it twice a year or so.
If I were starting such a project -- and I must resist the temptation to do so -- I'd start by taking a very close look at Gerrit.
As a technology base to fork from, probably not ideal. But its flows are something to learn from.
The PR process in GitHub has always been garbage, and its cargo-cult adoption by the whole industry is sad. But also unnecessary. There were always alternatives, but GH's refusal to do proper multi-round review, and its tendency to encourage giant messy merge commits with no ability to track discussions between changes, is an organizational nightmare, and now with LLMs it's even more terrifying.
Every company I've worked at since I left Google has had this problem with giant "take it or leave it" submissions. Dozens of commits in one "review". No ability to properly track changes between revisions. A mess of commits that all land at once.
I don't see how one can build a serious software team structure over top of this. It's a mess. And GitLab only makes it slightly better.
Seems like there are lots of answers: pre-commits, rebase squashes, merge squash...
Feedback + commit is a loop. I often reply to comments w/ the commit sha that resolves it.
-2: This is a bad idea, don't do that
-1: This is a good idea but needs improvement
+1: LGTM but I don't have enough knowledge or authority to approve
+2: Approved
This provides that middle ground.
Given that, what's wrong with simply commenting on the PR to document the concerns, issues, lack of knowledge, etc?
Unless you're using those +/-2 to achieve some sort of goal... but you can also do that with labels, tags, etc. on the PR.
In many environments that depends on more than just code review, e.g. CI.
Someone else knows the other portion well and sees the +1 and decides to +2.
In practice this ends the stalemate where partial owners don't feel confident approving the whole thing.
Some people think that PR status can also communicate rationales and partial approvals.
Some think that should be done with tags and comments.
Lots of request systems have multiple stages between "open" and "resolved".
1. Is the PR suitable, and therefore should be approved, and
2. Is this person suitable to make that decision.
If 2 is false then the person should remove themselves from the list of reviewers. Then 1 can follow its normal process.
So you could require `Verified+2` (CI), `Code-Review+2`, and `Design+2`, for example.
[0]: <https://gerrit-review.googlesource.com/Documentation/config-...>
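For reference, labels like that are declared per repository in Gerrit's project.config. A sketch modeled on the documented Code-Review label (the "Design" label name, values, and wording here are illustrative):

```ini
[label "Design"]
    function = MaxWithBlock
    value = -2 Bad design, do not submit
    value = -1 I would prefer this is not submitted as is
    value =  0 No score
    value = +1 Design looks good to me, but someone else must approve
    value = +2 Design approved
```

With MaxWithBlock semantics, any -2 blocks submission and at least one +2 is required, which is exactly the "partial signal plus final gate" split discussed above.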
You want blobless clones:
Gets history and only fetches blobs on demand. Github has a great article on it https://github.blog/open-source/git/get-up-to-speed-with-par...

yes but tangled.org really does do most of that!
1. JJ as the VCS: tangled supports stacked PRs using jj change-ids. https://blog.tangled.org/stacking , we use it a lot to build tangled itself: https://tangled.org/tangled.org/core/pulls
2. Raspberry pi as a forge for a long time: also check, the git server shim is super lightweight, its just an XRPC layer over git repositories + an sqlite3 database. there are folks running it on a riscv board with 512 megs of RAM.
3. Actions are critical and they should be runnable on my local machine: IMO this ask is slightly misplaced. It is mostly your build system's job to be hermetic, run anywhere, handle cross-builds, etc. It would be really cool to "promote" the results of such builds to the forge itself.
I know that not all USB-to-SATA connectors are compatible with RPi – I got lucky with the first connector I tried (Unitek). Not sure if RPi 5 has a wider compatibility.
I do think it's just an awkward problem to solve though, because it essentially devolves to needing to run the entire system somewhere else, which is why every system I've seen like this ends up being trial-and-error.
Yes, and... the idea here is that it would be neat to extend the hermetic-builds idea so that this can easily be run locally, or anywhere there's compute. The root problem being called out is that iterating until CI goes green, with a change-commit-network-call round trip in the cycle, is a pain in the ass. (The best way to avoid this churn cycle is to just never write bugs! TFIC ;P)
The point I am trying to make is, until you offer a user the ability to make a private repo for side projects, it's unlikely to take off.
What people want is the ability to make a private repo, go away for a few months and come back to find their repos right there waiting for them.
Grasp is actually pretty cool too, built on nostr, which is maybe a stronger platform in the end? I don't really know enough about it. Stronger as in: you're maybe opening up more interoperability by putting your stuff on an "anything network" vs Radicle's "p2p git data network".
To be honest they're all cool ideas. Tangled feels somehow corporate though.
https://tangled.org/did:plc:wshs7t2adsemcrrd4snkeqli/core/is...
There is a fundamental contradiction here.
But just a few inches earlier, the author stated:
> Everything tools always turn into crap.
This seems like a contradiction to me.
Maybe a way of facilitating "releases" with compiled binary assets (built locally and uploaded).
Forks can be handled by people cloning the repository and uploading a new project.
Part of the reason for not wanting bells and whistles is for the service to have less chance of dying under a heavy load.
The future might look something like instead of paid software or open source software what you get is a set of requirements documents for a code forge, like a recipe. You bake your own.
Then you alter it to your particular use case and set of preferences.
I think the problem is that Microsoft committed to AI totally. There is no way back for them, and this also means that GitHub will suffer from it. Microsoft PR will tell people that AI is the solution to everything, but in reality it will lead to problems that keep coming up again and again.

Now, people may say "but GitHub services being down does not have anything directly to do with AI" - and while that may be true, the problem is that Microsoft has already shifted its strategy, so most of its considerations will be about top-down AI control. Whether people's workflow using GitHub is disrupted is at best of secondary interest to Microsoft - and that specific problem will keep resurfacing again and again. Perhaps it will be quiet for 3 months or so - but I am 100% certain that in the not so distant future, you'll have a new drama story about how GitHub is declining.
This is like step-wise deterioration. Ghostty won't be the last here.
Whether alternatives arise ... that will be interesting to see. I mean those alternatives need to not suck, but a lot of those websites etc... kind of suck.
I personally think Gerrit works much better than whatever GitHub et al. have for code reviews. As for CI, I would try to keep that out of it as much as possible; just hooks to start a pipeline and to display the result and decide whether to allow a merge or not.
1. Code review 2. Source browsing 3. Ticket tracking 4. CI
It does a mediocre job at all 4. But it does a good job of integrating them all together.
So I agree Gerrit is the superior code review model, but without the other 3 pieces you don't have a product. Even when I was at Google, working every day in Gerrit, I was dissatisfied with the poor integration between code search, code review, and CI.
Google3/Critique/Forge/etc -- Google's internal tooling -- did a much better job of tying that all together.
It could be done quite easily: a branch per review with a known prefix (although these would rapidly clog up the default branch namespace), or git namespaces kept distinct from the main namespace, or maybe just a special branch, e.g. ".reviews", that just contains commit IDs for the tip of each review branch.
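A minimal sketch of the known-prefix idea (the ref names here are invented; the point is that anything outside `refs/heads/` stays out of the normal branch list):

```shell
# Publish a commit for review under refs/reviews/ instead of refs/heads/
git init -q --bare forge.git
git init -q work
git -C work -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "proposed change"
git -C work push -q ../forge.git HEAD:refs/reviews/1234/head
# Reviewers mirror every pending review into a matching local namespace;
# `git branch` stays clean because refs/heads/ is never touched
git -C work fetch -q ../forge.git '+refs/reviews/*:refs/reviews/*'
git -C work log --oneline refs/reviews/1234/head
```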
It just needs someone who's invested enough to specify it and make a viable implementation, after which people might start adopting it. I guess the reason github and the various forges didn't take this approach is that keeping the review metadata within their ecosystem is what gives their platform value. If anyone could use any local tool they like for reviewing other people's code, there wouldn't be as much vendor stickiness.
EDIT: actually, I guess there are other reasons why you might want your review metadata in a different repository, such as access control and/or cross-repo reviews.
Also, as far as read-only access, Gerrit review data is actually accessible via Git[7] (for review ABCDE, pull refs/changes/DE/ABCDE/meta instead of one of the usual numbered refs under that prefix), and someone made the effort[8] to make it accessible via Git notes too (as mentioned in the post on Git notes that I linked above).
Also also, the Fossil SCM of SQLite fame somewhat famously does[9] do this kind of thing with its builtin bug tracker. It has been relegated to obscurity partly as an accident of history (Git won) and partly on the merits (it is aggressively hostile to the kind of history rewriting we are used to routinely—if not always wisely—performing in Git).
Going back to working on top of Git, though, I think that part of the problem is that you really want custom merge strategies when you’re trying to build a fancy datatype, and Git’s support for them requires a lot of wrapping to make it seamless (the location tracking stuff in git-annex[10] is the only success story I am aware of, and that’s a sizeable Haskell project). The existing porcelain is just too rigid.
[1] Can I have a viable replacement for PGP for that use case? Please stop telling me that I don’t exist and should screw off[2]? Please?..
[2] https://news.ycombinator.com/item?id=44239804
[3] https://github.com/aaiyer/bugseverywhere
[4] https://github.com/google/git-appraise
[5] https://tylercipriani.com/blog/2022/11/19/git-notes-gits-coo..., https://news.ycombinator.com/item?id=44345334 (579 points, 146 comments)
[6] https://github.com/git-bug/git-bug
[7] https://gerrit-review.googlesource.com/Documentation/note-db...
[8] https://gerrit.googlesource.com/plugins/reviewnotes/+/refs/h...
[9] https://fossil-scm.org/home/doc/trunk/www/bugtheory.wiki
[10] https://git-annex.branchable.com/
Reminded me of one benefit of the email-based workflow.

If I start reading email, that's usually because I'm in the right mood to do so. In that mood, I'll be more focused because I expect nothing else to interrupt my work.
My problem with notifications is that there's a pull toward clearing them as they show up. But there's no guarantee I have the right energy at that moment.

Also, I found that most notification systems on the web are poor mimics of what email clients already achieved decades ago. Maybe the old folks really got it right with email.
When in the mood for dealing with email, open the client.
The PR review process is flawed; it adds something, but maybe not what it intends.

It's just not a great discussion platform, yet that discussion is the default tab in the PR view.
My approach is to utilize https://pre-commit.com/ to have all checks available to run locally during commit (or push), but leave it to contributors whether they want to run it or not. If they don't, the checks still run on the forge after pushing. The upside of this approach is that it still allows contributors to commit without internet access or the forge being down.
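A minimal `.pre-commit-config.yaml` sketch for that setup (the hook ids below come from pre-commit's own hooks repo; substitute your project's actual linters):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```

Contributors who opt in run `pre-commit install` once per clone; everyone else still gets the same checks from the forge-side run.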
> 3. PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission.
Well, that's possible with GitHub and is just a matter of how permissions to merge PRs are configured. Just let every contributor merge changes without explicit approval. And if you want LLM approval, make that a GitHub Action whose success is mandatory for merging.
> 4. Stacked PRs are just better. […]
Seems like Github is working on this: https://github.github.com/gh-stack/
> 8. On the flip side, since I need to be online all the time to really work with a team […]
Sure, for communication you need internet access, but working on code can be much more efficient if you can do so without relying on internet access and the forge being available.
I'd even argue working on issues and reviewing PRs should be available entirely offline too with just the state getting synced whenever internet connectivity to the forge is available.
What part of Gerrit is so different? Stacked PRs work fine, right? (Not in GitHub, but as a concept.)
By which I mean the discussion doesn't get broken between changes, and it makes it far more trivial to iterate on things in the review without breaking the discussion. And for the reviewer to see what's changed between revisions at the specific comment point they're looking at. And then have a nice clean commit at the end instead of some dog's breakfast of a merge commit with revision commits shoved in it.
1. Gerrit's approach requires a stable Change-Id in the commits. So it doesn't just work out of the box with stock git. It requires the submitter's git configuration and the repository to be set up to support this. (Note that JJ supports this out of the box.)
2. Cargo cult. We have a whole generation of software developers who grew up on GitHub, love it, and have never known anything else. The "PR" approach is considered orthodox. Unless they went and worked at a Google or somewhere like that, they've probably never been exposed to alternative processes.
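On point 1: the Change-Id is just a trailer in the commit message, added by Gerrit's `commit-msg` hook. The hostname and Change-Id value below are made up for illustration:

```shell
# One-time setup per clone (hostname hypothetical):
#   curl -Lo .git/hooks/commit-msg https://gerrit.example.com/tools/hooks/commit-msg
#   chmod +x .git/hooks/commit-msg
# The hook's effect, illustrated here with stock git:
printf 'Fix parser crash\n\n' |
    git interpret-trailers --trailer 'Change-Id: I6a1fc3eed9b6c61a31fecb475e3c0a48db9b60aa'
```

The trailer survives rebases and amends (the hook leaves an existing Change-Id alone), which is what lets Gerrit track a change across revisions.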
I created a little Github Issues replacement for myself that puts the issues within the repo so that the work and the todos stay in sync. https://github.com/steviee/git-issues
And I bet there's numerous other projects like that.
Hope you get your submarine, man! ;)
How would a pre-commit hook help? Would the developer not be crying and working late if their work was rejected by the pre-commit hook instead of the PR? Also, if the tests are so fast they wouldn’t block the terminal running `git commit` for more than a minute or two, you can just run the tests on the local machine, and you should be running them as part of your workflow.
> PRs are too inflexible. I don't need 4 eyes on every change, especially in a universe where LLMs exist. The global GDP lost annually to senior engineers staring at a four-line PR waiting for someone — anyone — to type 'LGTM' could fund a moon mission. A nice one. With legroom.

Let me customize and more easily control this. If the person is a maintainer and the LLM says it's low risk/no risk, just let them go.
You can do this with the existing forges: you can give trusted people the right to bypass the rules. Or you could build your own small PR auto-approval bot, which hands the diff to an LLM, and if the LLM approves, the bot approves the PR on the forge.
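The bot half of that can be sketched with the `gh` CLI. Here `REVIEW_CMD` stands in for whatever LLM command you'd pipe the diff through, and the "print `approve` for low-risk changes" protocol is invented:

```shell
# Hypothetical auto-approval bot: pipe the PR diff through an LLM command
# and approve on its behalf only when the verdict is exactly "approve".
auto_approve() {
    pr="$1"
    verdict=$(gh pr diff "$pr" | $REVIEW_CMD)
    if [ "$verdict" = "approve" ]; then
        gh pr review "$pr" --approve --body "auto-approved: LLM judged low risk"
    fi
}
```

Anything other than an exact `approve` leaves the PR waiting for a human, which is the safe default.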
Aren't you describing why you'd want a pre-commit hook for this? You don't have to remember to run it, and new people don't need to learn it.
Why don't you see how far you get in a weekend with Claude.
If you want a certain app with a feature and the app isn't open source, then you may as well just clone the app and add the feature.
Claude Code and Codex (and other tools) have computer use and are perfectly capable of navigating, experimenting, cloning functionality, writing tests...
If the app is open source it's probably easier to just fork and add your features though. And cheaper.
Truly magical, it would have taken me months.
If no post planned, please consider - that’s very “an app is a home cooked meal”, and I love it.
It’s mostly your original story of motivation, in brief prose, that does the heavy lifting of a satisfying post,
followed by exactly what spec and names of tools you used, mundane as they may feel,
your exact prompt(s) (because this is of technical interest in and of itself),
and screenshots of excerpts/link to output.
Things that stood out to you along the way would also stand out to others.
The comment alone will probably be the most intriguing one I read all day.
I think the implication is that a user doesn't host the CI locally. They are suggesting there should be a configuration to call an API that submits the code changes for partial or total CI checking. This is only beneficial for orgs/individuals that somehow rank dev effectiveness on how messy a branch's PR history is and how many times someone has submitted code that passes/fails a build. Maybe due to build cost, maybe due to ego.
I understand what they are asking for, but it feels like misusing git, or like it's based on some org process rather than a normal development flow. I don't understand the point.
Many of us were annoyed already when Microslop, 'xcuse me, Microsoft assimilated GitHub. But we have to be realistic - alternatives often sucked. SourceForge? I find creating issues there annoying to no end. I can use GitLab, which is a bit better than SourceForge, but I also hate creating issues there. I recently saw that Codeberg appears to have updated its UI (I think?), but I also find it quite annoying.
What GitHub initially got right was the UI, and also a focus on the folks using GitHub, e.g. making things easy or easier for them. They did not get everything right though - I find the wiki support awful. I rarely use the wikis because they are so bad.
I think the really big problem is that there are commercial interests, aka private interests. Microsoft is just one example here; it is a problem literally everywhere on similar sites. In the past I pointed at the example of discussions in issues with regards to the xz-utils backdoor - the day after I participated in those discussions, Microsoft took it all down; though it also does not matter whether it was Microsoft or the repository owner. The problem is that individuals can too easily censor potentially useful information. The issue discussions WERE useful, and they were censored. If I remember correctly, the information from back then was never fully reinstated. Perhaps people mirrored it, but I did not see a link.

The point is that I think this shows that top-down control can be really detrimental. And let's be honest: how many of you trust Microsoft? We kind of need something that is decentralised, works reliably and well, and also has a good UI by default and a simple (or at least a good) workflow. And we also need to avoid the situation where private actors can hold everyone else hostage.

I have absolutely no idea how to solve the above; perhaps it requires different approaches at the same time.
The www kind of changed and I feel that private interests - aka huge mega-mega corporations in particular - made things a lot worse in the last 10-15 years here. That needs to change.
You have to 'push' the code to the forge to run it. This code is a 'branch' of the version that is in the repo.
> The PR is approved or it's not approved
The code is either merged or it's not. Sure you can trivially add a snooze feature...
> I don't need 4 eyes on every change, especially in a universe where LLMs exist.
Huh, I do. Anyone thinking LLMs replace human review, when LLMs are already replacing the coders, is just vibe-coding, not building a reliable library.
> Stacked PRs are just better.
I have no idea what this really means, honestly. You can stack multiple commits in a single PR. You can create PRs based on other PRs.
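For what it's worth, "stacking" usually means chained branches where each PR targets the one below it, so each review shows only its own commits (branch names below are invented):

```shell
git init -q demo
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base"
git -C demo checkout -q -b part-1
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "part 1: refactor"
git -C demo checkout -q -b part-2
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "part 2: feature on top of part 1"
# Open one PR for part-1 -> default branch, and another for part-2 -> part-1;
# each review then shows only its own commits. When part-1 merges,
# retarget/rebase part-2 onto the default branch.
```

The complaint about forges is that keeping such a chain rebased and each PR retargeted as the ones below it merge is manual busywork on GitHub, whereas Gerrit and jj-based tools track it for you.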
> A forge shouldn't do everything. Issue tracking yes. Kanban board, probably not.
The board has to live in-sync with the issues or it's not a board.
The Linux kernel is not hosted on GitHub; it uses cgit. Others use GitLab or Gitea, and there is also Forgejo (which Codeberg uses), all of which can be self-hosted.
This is why everyone is now realising that "centralising everything to GitHub" [0] was a terrible idea, now that GitHub has (unsurprisingly) been run into the ground.
[0] https://news.ycombinator.com/item?id=22867803
GitLab and Azure are a daily source of pain for us.
The UI is constantly inconsistent. You have to keep reloading the page in the hope of seeing what's up with your MR. It doesn't help that it's super slow to load.
The backend infra is super unreliable, with actions failing to start, merge trains being stuck, their webhooks being overloaded.
I'm sure if I used it more often that I would figure it out, but it's deeply off-putting for someone who only uses it twice a year or so.
As a technology base to fork from, probably not ideal. But its flows are something to learn from.
The PR process in GitHub has always been garbage, and its cargo-cult adoption by the whole industry is sad - but also unnecessary: there were always alternatives. GH's refusal to do proper multi-round review, and its tendency to encourage giant messy merge commits with no ability to track discussions between changes, is an organizational nightmare, and now with LLMs it's even more terrifying.
Every company I've worked at since I left Google has had this problem with giant "take it or leave it" submissions. Dozens of commits in one "review". No ability to properly track changes between revisions. A mess of commits that all land at once.
I don't see how one can build a serious software team structure over top of this. It's a mess. And GitLab only makes it slightly better.
Forge (github or whatever) doesn't matter.