I guess in theory, if this is packaged as a PWA (or the old-school way, a single .html with everything needed inside it), you could actually run this anywhere, easily and without internet access.
Besides loading the frontend resources, is there anything else that wouldn't work? Seems like a simple idea, so as long as the assets could be loaded, you'd be able to "load" the "apps", wouldn't you?
Sure, but what's the point then? Seems like .html with extra steps, not to mention that the URL itself won't work.
Now for online, the data is in the URL already, publicly available (unless shared privately), and the "loader" is still served from the server, so you have to trust the server not to exfiltrate the data.
> Sure, but what's the point then? Seems like .html with extra steps, not to mention that the URL itself won't work.
Literally says in the submission title and the website itself: An entire website encoded in a URL.
And yes, the domain part of the URL might not work, but whatever URL you use locally would work just as well if you switch the domain, unless I'm missing something.
> Now for online, the data is in the URL already, publicly available (unless shared privately), and the "loader" is still served from the server, so you have to trust the server not to exfiltrate the data.
Yes, the data is in the URL, seems to be the single point of this entire project. I don't seem to find any "server" doing anything of worth here, all the meat really sits in the client-side code, which you can serve however you like, might even work through file://, haven't tried it myself.
I did a Show HN with a similar idea (it got a whopping 1 point and was flagged as spam, which was later removed by mods): you paste your HTML and it encodes it into a URL, and you can share the URL without server involvement. I even added a URL shortener, because while technically feasible, the encoded URL becomes long and a QR code no longer works reliably. I also added annotation so you can add your comments and pass it to colleagues.
If I understand correctly, when a nowhere URL is pasted in a browser, what happens is:
1. the browser downloads generic JS libraries from the main site
2. these libraries then decode the fragment part, and transform it into the UI
If that's correct, someone still has to host or otherwise distribute the libraries - hence why you need the app to use it while offline (it ships the libraries).
This is not criticism, I'm just trying to get my head around how it works.
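If that mental model is right, step 2 can be a few lines. A toy version, assuming the fragment is just base64url-encoded HTML (the real loader presumably also decompresses and sandboxes; `atob` would replace `Buffer` in a browser):

```javascript
// Toy "loader": pull the fragment off a URL and decode it back to HTML.
// In a browser you would then render it, ideally into a sandboxed
// iframe via srcdoc rather than straight into document.body.
function decodeFragment(hash) {
  const b64 = hash.replace(/^#/, "");
  return Buffer.from(b64, "base64url").toString("utf8");
}

// Round trip: encode a page into a link, decode it as the loader would.
const page = "<h1>hello from nowhere</h1>";
const link = "https://example.invalid/s#" + Buffer.from(page).toString("base64url");
const html = decodeFragment(new URL(link).hash);
console.log(html); // <h1>hello from nowhere</h1>
```

The domain in the link is illustrative; as discussed above, any host serving the same loader would decode it identically.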
I think it still fulfills the brief; the website you are accessing is still hosted "nowhere". Very cool concept; I just read about fragments on the MDN docs a couple of months ago.
But dependencies are part of a website? It literally says "Still here when the internet isn't." - but I can't go on there without an internet connection?
Service Workers can cough up this stuff even without a connection, provided you already visited the site once before. This is how sites like Twitter still load their bones even without a connection.
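The decision a Service Worker makes can be sketched as a plain function (in a real worker this lives in a `fetch` event handler using the `caches` API; the Map here just stands in for that cache):

```javascript
// Cache-first with network fallback: the pattern that lets a site
// "load its bones" offline after one successful visit.
// cache: Map of url -> body; fetchFn: stand-in for fetch(), which
// may reject when there is no connection.
async function respond(cache, url, fetchFn) {
  if (cache.has(url)) return cache.get(url); // already visited: serve from cache
  const body = await fetchFn(url);           // first visit: hit the network...
  cache.set(url, body);                      // ...and remember it for next time
  return body;
}

// After one online visit, an offline fetchFn is never even called.
const cache = new Map();
const online = async (url) => `bundle for ${url}`;
const offline = async () => { throw new Error("no connection"); };
const demo = (async () => {
  await respond(cache, "/loader.js", online);   // first, online visit
  return respond(cache, "/loader.js", offline); // later, offline: still resolves
})();
```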
> Very cool concept, just read about fragments on the MDN docs a couple month ago
Crazy to hear of someone reading about something today that's been around since the 90s and is probably one of the first things you touch when doing web development, but I guess you're just one of today's lucky 10,000 :) (https://xkcd.com/1053/)
Interesting thought to explore but overblown claims.
For the privacy claims to hold, a fundamental conceit is that you trust and use the nowhere app / domain. The source is open, so let’s imagine that you individually can be satisfied.
Now, the idea that entire apps can be shared via a link in a Signal chat or a QR code on a flier is a fascinating bit of compression and potential for archiving.
Imagine games shared on paper QR codes at a meetup.
Oh, but here's the rub: do you trust the arbitrary code you just scanned off of a QR code? TLS has become a proxy for trusted authorship. “Well, if it's really coming from my bank then it's probably safe.”
This resembles some serverless pastebins: data is serialized into the fragment part, and client-side JS deserializes it. The only practical difference is that this app renders it as HTML while those render it as text.
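The "fragment never reaches the server" part is visible in how URLs parse: the hash is its own component, separate from the path and query that make up the request target. A quick check (the fragment here is an arbitrary example, not real nowhr data):

```javascript
// The hash is parsed as a distinct URL component; browsers build the
// request line from path + query only, so pathname/search is all the
// server ever sees.
const u = new URL("https://nowhr.xyz/s#eNrLSM3JyQcABiwCFQ");
console.log(u.pathname); // "/s"
console.log(u.search);   // ""
console.log(u.hash);     // "#eNrLSM3JyQcABiwCFQ"
```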
A URL fragment is the part after #. The HTTP specification prohibits browsers from sending fragments to servers. The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out. No content is collected, stored, or logged. The privacy is structural.
A site that was never put on a server can never be taken off one. There is no account to suspend, no host to pressure, no platform that can decide your content should not exist. Each copy of the link is a complete copy of the site data.
Site creators can encrypt the URL itself with a password. Even possessing the link reveals nothing about what is inside.
> A site that was never put on a server can never be taken off one. There is no account to suspend, no host to pressure, no platform that can decide your content should not exist. Each copy of the link is a complete copy of the site data.
Unless site A is encoded in a format that only one other site B on the internet can decode and "serve" (even if it's all client-side), so whoever wants to block site A can just block site B as a whole.
> For orders, messages, and real-time coordination, Nowhere uses Nostr relays as communication infrastructure. Relays see only encrypted data they cannot read, arriving from ephemeral keys they cannot trace, sent from a nowhere site they cannot identify.
> The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out.
Technically true, practically a lie: that server delivers the JavaScript which decodes and presents the content, and that JavaScript absolutely has the ability to inspect, modify/censor, and leak the content (along with fingerprints of the browser).
> no host to pressure, no platform that can decide your content should not exist.
Except for https://nowhr.xyz, which becomes a single point of failure for all of these sites...
Yes! It's similar to people sharing a simple URL within a QR code only. I find it insulting and inconvenient - I can remember, or jot down and type in, a URL - I don't need a smartphone to do that.
In theory you could put a small html/website in a dense QR code, that would be truly offline - it's a similar thing.
There is also the Pico-8 cartridge format, where a game is steganographically embedded in a PNG:
https://github.com/l0kod/PX8
> Private through physics. Not through policy.
Goodness, the LLM really convinced itself this was groundbreaking.
You could describe a .html file sitting on your computer with all of the same marketing bluster.
Someone has to send it to you all the same, and you might as well not rely on some random internet service to render it??
https://mourner.github.io/bullshit.js/
Edit: Apparently "Platforms" => "Bullshit" ;)
> present everywhere
> Still here when the internet isn't
I'm afraid the OP may not have a full understanding of how the internet works. This is either some kind of post-irony, or some vibe-code fever dream.
Either way, I'm deeply confused.
It is very much not, open the network tab on any of the examples, behold.
Ok, using https://nowhr.xyz/s#yzXyzs8PcDbxyQ_0KbYMzzRNytKNyE0JDM0x8zT2... as found in the HN comments as an example.
Not a single one of those requests contain the string "This is a message site. I guess. Just checking.", or did I miss something? All it seems to load is the "website loader", which is the part that decodes the URL (locally) and displays you "the website".
So assuming you have local access to the loader and you have the parts from the URL, you'd be able to load it.
I'm not sure if y'all are consciously misreading how this is supposed to work, or if I'm misunderstanding what y'all are complaining about. It's not "A public internet website can be loaded if you're not connected to the public internet", it's "websites loaded in this way can be loaded this way as long as you have the loader".
https://easyanalytica.com/tools/html-playground/
But would this mean encoding the entire dist folder after the build step?
Yes, it's not communicated very clearly.
https://tinyurl.com/mrpas5dc
Posted to HN in 2023
https://news.ycombinator.com/item?id=37408150
So, it's just like sending your site's link through email/WhatsApp or any other channel. I don't know what the real use case for this idea could be!
This works as a "URL" in both Chrome and Safari:
data:text/html,<pre onkeyup="(function(d,t){d[t]('iframe')[0].contentDocument.body.innerHTML = d[t]('pre')[0].textContent;})(document,'getElementsByTagName')" style="width:100%;height:48%;white-space:pre-wrap;overflow:auto;padding:2px" contenteditable></pre><iframe style="width:100%;height:48%">
For example it will give you this: https://news.ycombinator.com/item?id=47888337#47888930#:~:te...
https://github.com/kelseyhightower/nocode
> The server that delivers the page never receives the content, never knows which site you are viewing, and has no way to find out.
https://github.com/5t34k/nowhere
Let me tell you about a thing called JavaScript.
> A site that was never put on a server can never be taken off one.
If you post a link on HN and the content is embedded in the link itself then HN is the de facto server.
You still have to share the link somewhere; why not just share a block of text (invitation, campaign, whatever) directly instead?
There are also the Piet and Pikt esoteric languages, where the visuals are the code: https://esolangs.org/wiki/Piet https://github.com/iamgio/pikt
I think its just for fun :)