I have no problem publishing my public projects on GitHub or Codeberg. Secretly, I'm hoping that LLM crawlers will choke to death on my bad code.
But private stuff runs on my own servers.
In 2025 it's mostly maintenance-free once the setup is running: Debian updates itself fully automatically via unattended-upgrades, and the hosted application gets updated manually when the need arises. Backups run automatically every night using Proxmox and its little brother, Proxmox Backup Server. Even the certificates are valid, thanks to DNS-based validation from Let's Encrypt.
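(For anyone wanting to replicate the unattended part: a minimal sketch on Debian, assuming the stock unattended-upgrades package:)

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades  # writes /etc/apt/apt.conf.d/20auto-upgrades
```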
I never quite understood why people call for "federated forking" when the truly decentralized model has existed forever: simply clone the repository to your machine, change what you want, create a patch, and communicate it to the maintainers (traditionally the Linux kernel used a mailing list for that, but conceivably you could use something on the Fediverse; Git doesn't care). Everybody gets to use their favourite tooling and no one is locked into a lowest-common-denominator GUI for things like reviewing proposed changes.
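Concretely, that whole model is a handful of commands; a sketch, with the URL and email address as placeholders:

```
git clone https://example.org/project.git        # any public URL, no account needed
cd project
# ...edit, commit...
git format-patch origin/main                     # one 000N-*.patch file per commit
git send-email --to=maintainer@example.org *.patch   # or attach/post them anywhere
```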
> You can git clone the repo to your local machine, and you can manually move it elsewhere, but there's no way to send a PR from your repo to mine.
There is a native git request-pull command [1] that generates a summary of pending changes to pull into an upstream project, but it doesn't support all the features offered by GitHub pull requests or GitLab merge requests.
[1]: https://git-scm.com/docs/git-request-pull
Yeah, `git request-pull` is only a helper to create a pull request, while GitHub/GitLab/etc include that in addition to a (convenient) vendor-specific way to track received pull requests and a code review web interface.
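For reference, a typical invocation looks roughly like this (URL and refs are placeholders); it prints a summary, diffstat, and fetch URL you can paste into an email:

```
# after pushing your branch somewhere the maintainer can fetch from:
git request-pull v1.0 https://example.org/my-clone.git main
```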
Initiatives like ForgeFed are trying to define a more neutral way to share this information, but I don't think there's any estimated date for when there'll be an actual implementation of it. If that ever happens, it'd be possible to get vendor-neutral tooling for that kind of collaboration.
All the drawbacks are not drawbacks to me.
My projects, my repos, my private server. I don't want or need to show off ("collaborate"), and if I did... what's the point of self-hosting then? Just use github/gitlab/codeberg.
I don't even need to rent a server for that. Everything runs on my router (openWRT is amazing)
> I don't want or need to show off ("collaborate"), and if I did... what's the point of self-hosting then? Just use github/gitlab/codeberg.
Let me repeat this again. We didn't centralize git when we started using github/gitlab etc. We centralized discoverability and reach of projects.
So far, everything else can be decentralized - issue tracking, pull requests, project planning, ci, release management and more. But we still don't have a solution to search projects on potentially thousands of servers, including self-hosted ones.
We do.
https://mvnrepository.com/repos/central
https://npmjs.com
https://packagist.org/
https://pypi.org/
https://www.debian.org/distrib/packages#search_packages
https://pkg.go.dev/
https://elpa.gnu.org/packages/
And many others.
And we still have forums like this one and Reddit where people can just announce their projects. Github is more of a bad-code refuge than a high-signal project-discovery tool.
Every single thing you showed is a place to publish software releases, not to post your half-finished project that someone some day might find useful. We need both.
As an example, I had to reverse engineer some kinda obscure piece of hardware, and after getting everything I needed for my project, I put everything on github in case it was useful to anyone. A few months later, someone was in a similar situation and built on top of my work. Neither of us made a "package" or even "a piece of software", just some badly written scripts and scattered notes. But it was still useful to publish somewhere where others can find it, especially a place with very good SEO and code-optimized search.
The subject is decentralization. Your above situation could be you posting on your blog or a forum and the other person getting to know about it that way. GitHub may be convenient, but it's neither necessary nor irreplaceable!
The solution for centralization isn't asking people to put the extra effort to avoid it. You won't convince anyone to accept that. Unless you have a unified interface to search all those projects at once - especially unfinished or unpublished projects, people will keep going back to platforms like github. That unified interface doesn't have to be centralized at all either.
So if I'm looking for a solution rather than a library or a project to take part in, I need to search a dozen sources to see if anything meets my requirements? What if the project was just abandoned without being published to any of those source registries? What if the project was a PoC of some algorithm that didn't need to be published anywhere else?
That hardly sounds like an alternative to what's possible with Github now. The only alternative that came anywhere close to that ideal was freshmeat - and even that didn't achieve the full potential. Check this discussion alone to see how many talk about 'network effects' or 'discoverability'.
The subject is decentralization. There's a huge value in curation and specific communities. You don't go to r/all to read about emacs or bash. Instead you go to r/emacs and r/bash. Even those "awesome $thing" lists are better than going through pages of GitHub search results.
r/emacs and r/bash are all communities on a centralized service that you can search using a single interface. That is an inaccurate comparison. Meanwhile, I didn't say that a common index has to be centralized.
And Github has the whole software ecosystem. For example I can search for text editors or ebook readers. In that case I am not searching for any specific programming language.
And most software has an actual website or is present in some distribution. I don't care that much for weekend projects.
We aren't talking about your preferences alone here, are we? The case that the parent commenter mentioned is exactly what I was talking about too. What if I want a solution? What if I'm looking for algorithms or examples? What if I want to find a group of projects that's tackling a certain problem? How are my needs invalid? Are the projects that don't have such elaborate setups, polish or completion unworthy of discovery and help? You're essentially dismissing requirements that drive others to large platforms.
Why do you need a search index on your self hosted git server? Doesn't Kagi solve that?
> Why do you need a search index on your self hosted git server
The search index doesn't have to be on your server, does it? What if there is an external (perhaps distributed/replicated) index that you could submit the relevant information to? Or if an external crawler could collect it on your behalf? (some sort of verification will also be needed.)
There are two reasons why such a dedicated index is useful. The first is that the general index is too full of noise. That's why people search projects directly on Github. The second problem is that the generic crawlers aren't very good at extracting relevant structured information from source projects, especially the information that the project owner wants to advertise. For example the readme, contribution guidelines, project status, installation and usage information, language(s), license(s), CoC, issue tracker location, bug and security reporting information, keywords, project type, etc. Github and sourcegraph allow you to do precise searches based on those. Try using a regular search engine to locate an obscure project that you already know about.
While less popular these days, email-based workflows don't have the account/federation issue - anyone can just drop by and post a patch series. Of course the workflow itself is going to be an obstacle for most but I'm not sure if that's inherent or only because it's not what people are used to.
> Suppose you want to make a Pull Request or just take a copy of the code. At the moment, you have to create a fork on my server. There's no way to easily fork something to your GitHub or personal server.
Why are people so keen on having that network graph of forks? It's not necessary for collaboration. Status symbol?
I don't see it as a status thing - I see it as a convenience thing.
If I want to fork your code and contribute back, that means I need to be on the same system as you.
There's a bunch of Gnome projects which require me to sign up to their specific git hosting service before I can contribute.
On most git servers, I have to fork in order to send a PR, which often means I have to create a fork on their system - which means I need to set up something to replicate it to my local machine.
It's all friction.
I'd love to see a project on (for example) GitHub and then clone it to my GitLab, work on it there, and send a PR from GL to GH.
> If I want to fork your code and contribute back, that means I need to be on the same system as you.
You really don't. You just clone, code, commit, and send a patch (which is just one or more text files). That's it. You may just code and do a diff, if it's a simple fix.
The project may have a more complex policy to accept contributions. But a fork and a PR are not a requirement.
How do I submit the patch to the repo on GitHub / GitLab / CodeBerg / whatever?
Presumably I need to hunt down the maintainer's email?
On lower frequency repos, I often look through the github forks to find patches for issues or features that were not upstreamed. It's not infrequent that I find a nice feature someone implemented for themselves but didn't upstream because it wasn't quite high enough quality; making a PR is a lot of effort, especially when it's not clear that it'll ever be accepted.
Fwiw.. I have a "normal" private git server over SSH, but for public projects, it copies merges into blessed branches out to public, read-only repos accessible via HTTPS. Here's an example:
https://www.bi6.us/GI/B/
I sort of punted on receiving patches and merge requests because most of the projects I'm distributing in this way aren't really open source, but "published source" and there's a small community of people who use them. The people who use the public, read-only repos know how to generate patches and distribute them via email or (in one case) uploading them to an S3 bucket.
Anyway... your mileage may vary, but it's worked reasonably well for a small community. Not sure I would recommend it for a huge open source project.
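The "copy merges into blessed branches" part can be as small as a server-side hook; a rough sketch (paths and branch names invented, public repo assumed to be served over dumb HTTP):

```
#!/bin/sh
# post-receive hook on the private repo: mirror the blessed branch
git push /var/www/git/project.git +refs/heads/public:refs/heads/main
git -C /var/www/git/project.git update-server-info  # keeps plain-HTTPS cloning working
```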
For simple changes, you can just `git diff` it to a file, put that file on a pastebin and share that link with any communication tool (or share the file directly). `git format-patch` is more for when you want to share the commits (message and all). And you're not tied to using email. It's just a collection of files that you can share using whatever.
You can store the patch somewhere (pastebin, cloud drive), and send the link instead. Or attach them to the email. No need to directly use `git send-email` or copy it in the body of the email.
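To make the distinction concrete (file names invented):

```
git diff > quick-fix.patch    # plain diff of uncommitted changes, no metadata
git format-patch -1 HEAD      # last commit as 0001-*.patch, message and all

# on the receiving side:
git apply quick-fix.patch     # applies the plain diff
git am 0001-*.patch           # recreates the commit, author and message intact
```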
OK, but then I still have to do work to get that patch and integrate it into my existing systems.
Have people forgotten that email exists?
Do you not see how much easier something like GH is?
GitHub seems easier because you are used to its workflow — which isn't always devoid of arcanum either.
I don't know if you've ever used GitHub, GitLab, or CodeBerg - but PRs just appear in a list there. I don't need to do any work. Very handy especially if they're big changes.
I can also leave comments on specific bits of code, rather than emailing someone.
But that requires a web browser and dealing with 2FA, and I would probably still export the patch and apply it locally to test. Email can be handled on a potato system, using my email client, which already has my workflow automated. Replying to someone takes even less effort than commenting.
My email client (the default offering from my service provider ofc) is a POS. Therefore all of email is a miserable, failed tool and I can't believe you're being such a boomer rn.
I agree somewhere between 0% and 100% with the surface level message, and will give similar bounds for my agreement with the literal opposite of the text.
It's a joke, not a manifesto in a joke suit.
Wouldn't 2. make transitioning them into 1. "impossible"?
Critical data here is how many PRs one is creating on others' projects and receiving on one's own projects. To me OP sounds like "Lifestyles of the Rich and Famous." At my level of git collaboration, just git push contributions on any code forge and send me a link, and I'll do the work to pull it in wherever I am.
For my private, personal Git server, I recently moved from Gitea to Soft Serve [1], and have enjoyed how light and fast it is. I have it hooked up via Tailscale (using TS_DEST_IP to forward traffic [2]), so it has its own IP on the network and I don't have services fighting for port 22.
[1] https://github.com/charmbracelet/soft-serve
[2] https://hub.docker.com/r/tailscale/tailscale
TUI tools over SSH definitely aren't for everyone, but if that's your style and you want a place to dump all your non-public projects, it's a great choice.
Most non-private stuff goes on Sourcehut, and anyone can contribute via email (i.e. without any account) assuming they don't mind going through the arcana required to set up git-send-email.
I've really enjoyed using them but I guess I don't do much with the web interface.
> TS_DEST_IP
So you run tailscale in your git server container so it gets a unique tailnet ip which won't create a conflict because you don't need to ssh into the container?
I might give that a go. I run tailscale on my host and use a custom port for git, which you set once in your ~/.ssh/config (for host/key config) on client machines, so you don't need to refer to it in repo URIs.
TBH, I think it's tailscale I'd like a light/fast alternative to! I have growing concerns because I often find it inexplicably consuming a lot of CPU, pointlessly spamming syslog (years-old github issues without response), or otherwise getting fucked up.
They're plenty fast, but it's hard to match the speed of terminal tools if you're used to working that way. With Soft Serve, I'm maybe 10 keystrokes and two seconds away from whatever I want to access from a blank desktop. Even a really performant web application is always going to be a bit slower than that.
Normally that kind of micro-optimization isn't all that useful, but it's great for flitting back and forth between a bunch of projects without losing your place.
> So you run tailscale in your git server container so it gets a unique tailnet ip which won't create a conflict because you don't need to ssh into the container?
Pretty much. It's a separate container in the same pod, and shows up as its own device on the tailnet. I can still `kubectl exec` or port forward or whatever if I need to access Soft Serve directly, but practically I never do that.
> TBH, I think it's tailscale I'd like a light/fast alternative to!
I've never noticed Tailscale's performance being an issue on any of my clients; it "just works" in my experience. I'm running self-hosted Headscale, but wouldn't expect it to be all that different performance-wise.
What do you even need a git server for if it's just for private projects? You can have a remote repo accessible via ssh without any other software needed.
Easy backups, especially for repos that use Git LFS. It's also nice to have a simple UI for browsing repos and project history. Soft Serve doesn't have built-in CI/CD, but I plan on bolting on a lightweight solution there.
I have hundreds of random tools and half-finished projects, having them all accessible and searchable from a single location is convenient.
No. You'd send 'patches' via email. Each patch roughly corresponds to a commit. These are diffs + commit metadata that are directly formatted as email messages. A single submission of multiple commits will be made of multiple threaded email messages. The maintainer then 'applies' the received patches to their repo to recreate the commits from the contributor. I know that this sounds complicated, but it's quite streamlined in git. You don't have to do all that manually.
There is one more way to contribute by email. And they are... surprise! Pull requests! You can send pull requests to the maintainer via email, as long as your own modified clone repo is hosted and accessible online somewhere. It doesn't have to be on the same server (unlike github forks). This is done using the `git request-pull` command.
Git has a set of commands that can either produce ready-to-send email messages as files (if you have a proper client, not a web one), or send the formatted email for you. Most good clients have a way to directly pipe messages to the shell, so you can execute git directly from there to apply the change. Quite streamlined once you've got everything set up (not that difficult) [0].
[0]: https://git-send-email.io/
But it does require people to be disciplined with their changes (no wip commits). This may require learning about the flags for `git rebase` and `git commit`.
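The usual cleanup pass before sending, as a sketch:

```
git commit --amend                       # fold fixes into the last commit
git commit --fixup=<sha>                 # mark a fix for a specific earlier commit
git rebase -i --autosquash origin/main   # squash/reword wip commits into a clean series
```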
I love hosting my personal git, so glad to see others do! I use it for certain projects that I just want to be lazy about (personal website with sensitive data committed to the repo), and projects that are for friends/family and don't need to be public. I chose gitea maybe 2 years ago? And it's been great. Very lightweight compared to gitlab, with useful things like actions. So I can push a new commit to main on my personal server and it'll deploy my website to the server. I still publish things I want to share to GitHub, but not everything.
I think the thing that sets it apart from others would be that I run it on an M2 Mac mini? Very low power consumption, never makes any noise, and seemingly has plenty of power for whatever I need to get done.
I got the self-hosting bug a few years back and have been running a gitea server (among many others) on a mini PC in my basement that is set up with tailscale so I can connect to it from anywhere, and it has been fantastic. For any OSS projects that I want to be open to the world, I still put it up on GitHub. But for anything else, it goes on my Gitea instance.
I just use ssh. I would use Gitea if it was a team project, but it's too much of a hassle for my personal usage. So creating a repository is `ssh git@server git init --bare repo.git` and the remote is `git@server:repo.git`. Pretty much all I need.
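Spelled out end to end (same assumptions: a `git` user on the server, paths invented):

```
ssh git@server 'git init --bare repo.git'    # create the remote repository
git remote add origin git@server:repo.git    # hook up an existing local repo
git push -u origin main
```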
GitLab is open source, you can self host it. Although the system requirements are quite high, you will not be able to host it on a two euro VPS.
And I wouldn't be that concerned about contributors. It's only the very top projects that get any contributors. And even then, most of the time contributors are not worth the hassle.
But in all seriousness, I do think that there is a lot of merit in the LKML way of doing things. Otherwise all of our servers would be on fire now!
Maybe, and the insane capacity to sell things from the githubs and gitlabs of the world has brainwashed us!
I recently began running Gitea on my NAS to host an Unreal 5 project which requires significant LFS; GitHub's free LFS limits are very small. And honestly, trying to decipher their paid account limits was too much for my old brain that day.
Anyone have experience with LFS on other repos?
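For what it's worth, the client-side setup is the same wherever the LFS server lives; a sketch, assuming the usual Unreal asset extensions:

```
git lfs install
git lfs track "*.uasset" "*.umap"    # route binary assets through LFS
git add .gitattributes
git commit -m "Track Unreal assets via LFS"
```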
gitweb + gitolite is all you need in my opinion for hosting private projects. I have had a wonderful experience with it since there are no complex databases, config files etc and it’s very light on resources
There are two unresolved problems preventing us from effectively replacing github or similar centralized forges with a fully decentralized/self-hosted ecosystem. These are the factors that keep pushing people back onto those platforms.
Note: Please add any major issues that you consider as a blocker. And, apologies for repeating some points that I expressed in other comments. I wasn't planning to write a top level comment.
So, Problem 1: Discoverability and network effects. This is the main reason why people keep going back to the largest platforms. There is practically zero chance of being seen or helped on self-hosted or small platforms. And this isn't because of a missing technical innovation either. Those platforms have the size and first-mover advantage. What we need is a single feature-rich search interface to query all those projects spread over the net. That would need an index. Unlike the interface, the index itself can be distributed or replicated to avoid centralization. It should be able to store structured information about projects, like instructions (readme, usage, contributor guidelines, license text, CoC, etc), license, language, pull-request interface, issue tracker location, bug/security reporting information, project type, ownership details, signing information, etc. This will allow rich searches like what's possible on github and sourcegraph.
Problem 2: Communication channel. The biggest problem hindering distributed git hosting is a way to do pull requests, code reviews, discussions, etc. Even with something like gitea, you need an account on every instance where you want to contribute. The truth is that we already have a communication channel that allows such collaboration - email. We already have working proofs of how it operates at LKML and sourcehut. However, people have a particular dislike towards email for git collaboration. IMO, this is actually the fault of email clients that absolutely massacre the elegance of plaintext emails. There are some powerful tools used with LKML that most people will never discover because of their hatred towards email. Sadly, email is a lost opportunity that could have been a delight with sufficient attention on the clients.
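One example of that tooling is b4, which wraps the fetch-and-apply dance around lore.kernel.org; a sketch from memory, so check its docs:

```
b4 am <message-id>    # grab a whole patch series as an mbox, cover letter and all
git am ./*.mbx        # apply it ('b4 shazam' does both steps in one go)
```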
I have given up trying to convince people to give it a try. Email is somehow a taboo in this generation. So what else? There are projects like radicle (stable, more about it later) and forgefed (activitypub, under development) that try to build such a channel. But you really don't need to create such a channel for git. Git can adapt to any current channel (eg: xmpp/movim, nntp) using its extension mechanisms. Email was probably just the default channel. You can do patch pushes (git format-patch) and pull requests (git request-pull) practically through any such channel. With a bit of attention on the UI/DX, you can avoid the mistakes of emails (like html) that made them harder to use for our purpose. Additionally, you'll also need to build other development tools (issue tracker, code review, discussions etc) around them. People got too fixated on the platform approach and missed the possibilities of protocols.
Radicle: Honestly, I don't know what to make of it. It's a very impressive effort and they seem to have solved some very difficult problems. But radicle also seems to have gone for a platform-based approach, and it encompasses everything. They seem to have an integrated issue tracker, CI system, etc. If its scope was restricted to just a p2p network for discovering and exchanging git objects and other arbitrary information, we could have built a wide ecosystem of development tools on top of it. Radicle also feels like a darkweb that exists on a different plane from the web, though web bridges exist. For example, you can't clone a repo from the net unless it's bridged somewhere. However, I reserve my final opinion until I have a proper insight. I may have some wrong assumptions here. Please let me know if I do.
It isn't that email is taboo. It is that it is effort.
I have to go to my email program. Download the patch. Upload it. And then, if I want some changes, have to copy-and-paste stuff into a reply. Then repeat.
That's more effort than clicking "review" on a hosted git service.
A theoretical future service could use email under the hood to send messages back-and-forth between instances. But that's just an implementation detail.
I self-host most of my personal git repos on a NAS. They're literally just `git init --bare` folders in `/srv/src` that I access over SSH. Requires essentially zero setup and maintenance and I don't have to be worried if my internet or some other provider dies.
Yup, though you actually lose the ability to control access via groups in the URI this way (not that it matters, it's way more flexible to use Unix FS permissions)
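A sketch of the permissions approach, assuming a shared `devs` group and an invented path:

```
chgrp -R devs /srv/src/project.git
chmod -R g+rwX /srv/src/project.git   # group members can read and push
git -C /srv/src/project.git config core.sharedRepository group   # preserve perms on new objects
```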
For example, I might want to host my code privately on GitHub and not have Microsoft use it to train their LLMs. That doesn't seem to be possible:
https://github.com/orgs/community/discussions/135400
https://github.com/orgs/community/discussions/171080
I do some personal R&D and want to keep it under wraps until I'm ready to share it with the world. Keeping stuff on services you can depend on is great for many things (I use SourceHut, btw.), but sometimes you want to have your code on-premises.
This is not the next billion dollar business, but I don't want to share the code until I write a couple of papers on it, and anchor the code's and idea's provenance correctly.
I had a softer stance on the issue, but since AI companies started to say "What's ours is ours, and what's yours is ours", I need to build thicker walls and deeper moats.
No, it won't be permissively licensed if I open source this.
I self-host a git site to keep my code private and to integrate an issue tracker. The maintenance effort is very low. I'm running Gitea on my local network Ubuntu server.
I would self-host if I also wanted build actions, issues, and a wiki, like GitHub/Gitea support. I believe the article is about this part. How do I collaborate with others without building a dependency on a tyrant? Gitea + Traefik + Keycloak + federated github login probably does the trick these days. It's been a while since I built an SDLC project.
Github steals my software, the 'software conservatory' steals my software, etc.
I don't want it in a vault, I don't want you to do anything other than read it on my site. I don't want an archive. Most of my code is not licensed. All rights reserved.
It's there as a personal portfolio, that's it.
And these scanners don't respect the LICENSE file; they think if it's on the web, they can not just index it but make full copies and reproduce it.
By virtue of uploading code to github you are granting them license as per their terms of service.
>on my site
That means not uploaded to github. That means self hosted, as is the point of the main discussion.
>these scanners don't respect the LICENSE file
I don't think github scans outside repos, but what is stated there certainly applies to OpenAI and others. They don't have a license to do what they are doing, but the US is not enforcing copyright law on them out of fear of losing the AI race.
Licensing a repository
Public repositories on GitHub are often used to share open source software. For your repository to truly be open source, you'll need to license it so that others are free to use, change, and distribute the software.
Choosing the right license
We created choosealicense.com, to help you understand how to license your code. A software license tells others what they can and can't do with your source code, so it's important to make an informed decision.
You're under no obligation to choose a license. However, without a license, the default copyright laws apply, meaning that you retain all rights to your source code and no one may reproduce, distribute, or create derivative works from your work.
via https://docs.github.com/en/repositories/managing-your-reposi...
> Because you retain ownership of and responsibility for Your Content, we need you to grant us — and other GitHub Users — certain legal permissions, listed in Sections D.4 — D.7. These license grants apply to Your Content. If you upload Content that already comes with a license granting GitHub the permissions we need to run our Service, no additional license is required. You understand that you will not receive any payment for any of the rights granted in Sections D.4 — D.7. The licenses you grant to us will end when you remove Your Content from our servers, unless other Users have forked it.
https://docs.github.com/en/site-policy/github-terms/github-t...