If you’ve visited any of these sites recently it’s obvious that part of the issue is that you’re bombarded with popups, ads everywhere, autoplaying video, etc. It’s nauseating and a horrible user experience.
If all I’m looking for is straightforward content or info, then I’m naturally going to use the most efficient way to get it, and visiting a website is no longer the most efficient way.
These news sites run ads that are borderline gore, disturbing images promoting snake oil weight loss or skin care treatments, and wonder why nobody wants to click into their site.
Weren't those ads always there, though? The most obvious change is that a little AI popup appears on Google search providing a brief (even if hallucinated) overview of what the user queried.
Unrelated, but I wouldn't expect this take on HN where I assumed everyone knew what an ad-blocker was.
So, Google promotes the enshittification you decry by monopolizing the way you make money on the internet.
Then also Google cripples everyone’s ad-dependent business by sucking out the info these websites provide and have paid people to research and publish.
Nonetheless, Google good, websites bad.
Many of today's news websites (tech or otherwise) cashed in their goodwill / reputation / page rank to sell ads.
The first shoe dropped when news websites realized they weren't generating content fast enough. Hard, in-depth journalism takes time, but when people want to know something that happened _today_, they don't want to wait a week for all the facts to come out, and so the major websites started losing traffic to websites that churned out articles fast.
The additional benefit of churning out articles was that you could match against more and more long-tail keywords, which led to more traffic and more ability to sell ads. To keep up, many websites traded quality for speed, and consumers noticed.
The second shoe to drop was affiliate marketing -- articles on CNET / Wirecutter etc. were already ranking and rating products, so they figured "[...] why shouldn't we get a cut if someone ends up buying a product we recommend?" The challenge then became that consumers couldn't tell whether a product was recommended because it was good, or because it gave the biggest "kickback" to the website via the affiliate link. Thus, people who gave "honest" opinions on products (e.g. people asking on Reddit, at least for a while, as the article suggests) became the new source of truth.
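The "kickback" mechanism above is usually just a tracking tag on the outbound link, so the retailer knows which site to pay a commission to. A minimal sketch (the `tag` parameter follows Amazon's convention; the URL and affiliate ID here are hypothetical):

```python
from urllib.parse import urlencode

def make_affiliate_link(product_url: str, affiliate_id: str) -> str:
    """Append an affiliate tag so the referring site earns a commission.

    Uses `tag=` as Amazon does; other retailers use their own
    parameter names and tracking schemes.
    """
    sep = "&" if "?" in product_url else "?"
    return f"{product_url}{sep}{urlencode({'tag': affiliate_id})}"

# A reader clicking this link is attributed to the "techsite-20" account.
link = make_affiliate_link("https://example.com/product/123", "techsite-20")
print(link)  # https://example.com/product/123?tag=techsite-20
```

Since the tag is invisible unless you inspect the URL, a reader has no way to tell a genuine recommendation from the one with the biggest commission.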
The result is that these days, if you read a lot of articles on the major tech websites, they feel like they've been optimized for speed (i.e. churning out an article fast), SEO, and not much else. Many people have talked about how recipe websites are now short-story generators more than food instructions, but it's been common for a while that I go to a tech website to read about something I specifically Googled, only for it to feel like it was written _specifically_ to capture traffic for a keyword, rather than to actually solve the issue or question I came to the website with.
The cherry on top is that AI has none of these problems (so far) -- yes, there's some movement on trying to do SEO for AI, and of course ads will eventually come to AI like it has everything else, but currently, you can get the answers you want, described to you exactly how you'd like to hear it -- who wouldn't want that?
I recently replaced a power supply to upgrade a GPU. I bought the power supply on Craigslist, so it had a jumble of cables and no manual. In the past I would have read an article that I would have found on one of those sites.
This time I conversed entirely with Gemini, sending pictures of the cables and of the components and the motherboard.
I'll not soon forget when I plugged in a cable incorrectly and sent an image of that cable to Gemini.
Gemini said "It is very important that you stop and unplug that cable immediately... Hopefully the power supply's safety precautions kicked in before any permanent damage occurred."
I know that Gemini was conversing with me using plagiarized information from all those sites. But, it was so much better to do this than to try to synthesize that in my brain by reading a bunch of articles.
I don't see a future for tech content because Gemini isn't paying the authors and they don't give me an option to direct payments to them either.
It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage, and LLMs are literally famous for creating plausible-sounding but wrong output.
If you wanted to use an LLM to identify it, sure, you can validate that, and then find the manufacturer instructions and use those. But just following what it says about the cables, without any validation that it's correct, is wild to me. These are products with instruction manuals written specifically for this.
If the hardware changes significantly and those sites don't exist in the future, wouldn't that mean Gemini would degrade in quality because it has nothing to pull from?
Right, that success story is only because there was "organic" (for lack of a better term) information from an original source. What happens when all information is nth generation AI feedback with all links to the original source lost?
Edit: A question from AI/LLM ignorance- Can the source database for an LLM be one-way, in that it does not contain output from itself, or other LLMs? I can imagine a quarantined database used for specific applications that remains curated, but this seems impossible on the open internet.
That's definitely been my experience. I work with a lot of weird code bases that have never been public facing and AI has horrible responses for that stuff.
As soon as I tried to make a todomvc it started working great but I wonder how much value that really brings to the table.
It's great for me though. I can finally make a todomvc tailored to my specific needs.
Once or twice, for me, it's deflected rather than answering at all.
On the other hand, they've also surfaced information (later independently confirmed by myself) that I had not been able to find for years. I don't know what to make of it.
This then becomes the hardware manufacturer's problem. If their new hardware fails for too many users, it will no longer be purchased. If they externalize their problem solving like so many companies do, they won't be able to gain market share.
This creates a financial incentive to pay the companies running the new version of search. You're thinking of this as a problem for these companies, when in reality it is a financial incentive.
Yeah, so I’ve had an issue getting video output after boot on a new AMD R9700 Pro. None of the (albeit free) models from OpenAI/Google/Anthropic have really been helpful. I found the Pro drivers myself; they never mentioned them.
That’s not to say AI is bad. It’s great in many cases. It’s more that I’m worried about what happens when the repositories of new knowledge get hollowed out.
Also my favorite response was this gem from Sonnet:
> TL;DR: Move your monitor cable from the motherboard to the graphics card.
It'll be a single sheet of paper with a QR code that redirects to a canned prompt hosted at whichever LLM server paid the most to the manufacturer for their content.
Same experience here: someone at our company had a bricked MacBook Pro. It was previously MDM-managed with Jamf, and it wouldn't boot up. I asked ChatGPT to give me steps to fix it.
The first set of steps didn't work, so we iteratively sent pictures of the screen until the steps eventually did work and the issue was fixed.
I have never seen a review site or tech blog go into detail about how to wire a specific power supply to a specific motherboard. I would also never go to such a site to get information I can easily get from the manufacturer through a handbook but I would also never ask a chatbot. Really odd use case tbh.
That animated graph at the top is awful; does not render well on macOS Safari.
That being said, I am morbidly curious about traffic from RSS subscribers: has that gone up, gone down, or remained roughly the same in the same time period?
AI has successfully scraped enough of their content so that they are no longer needed. Thanks for creating the content, now someone else will make money with it.
Frankly, good riddance. The websites that had SEO-optimized their way to the top of Google's search results for queries like "how to change DNS settings", "best free VPN", or "best wireless earbuds under $300" were generally terrible, and I can't say I'm sad that creating that kind of "content" is no longer economically viable.
There were large categories of information that had become extremely difficult to search for thanks to SEO-optimized content farms like these. People switching to Reddit for discovery was a direct response to this search-index pollution. To me, LLMs feel like a return to the golden age of AltaVista and Google, where the Internet was a place you could reliably find the information you were looking for.
Noticed the other day that it now heavily cites and sources one of my blog posts to support the claim that yes, AI makes you boring, if you google "Does AI make you boring?"
If you search "Does AI make you interesting?", it drums up other sources to support that contradictory claim as well.
Many of these websites, I only ever interacted with when doing research either on tech or tech products. I did not appreciate their surface-level reviews and explanations, so in my head I've categorized most of these websites as "noise to wade through whenever I need to look up something". I can't say I'll miss these sites. I would be googling (ddg-ing) way more still if the internet wasn't full of low-effort SEO bait articles that dominate every search result.
Yes a lot of these publications produce low quality content. But some of it can be quite useful. If they disappear who is going to document at a consumer level the latest hardware doodad or whatnot? Manufacturers are going to have to invest in online resources that the AI bots can scrape. Perhaps good documentation will become a driver of profit.
Ads in the AI results, obviously. Google is now the king of the SEO spam website game: plagiarize the info from others, slap ads on it, and profit. That is the purpose of LLMs: end copyright law, but only for the 5 tech companies wealthy enough to run these massive models. The rest of us still have to follow the law, of course.
But they aren't doing this, are they? How can their ad revenue be constant (as per your sibling comment) while nearly 60% of the traffic is gone (as per the article), before such a scheme has rolled out?
OpenAI was already found to be integrating ads into its API. It is only a matter of time. Enshittification is inevitable.
Google is participating in the AI bubble. When it pops, they will aggressively seek monetization. Be ready to see chatbot output populated with popups, video ads, and the like.
Gemini already drives some really valuable ads. It has, in effect, solved the "best pants" problem. It will use personalized chat to give you attributable shopping links for pants. And they don't have to share that revenue with some SEOmaxxing pants blogger.
> Their entire business model was to funnel traffic to websites with their ads.
This doesn't change; they will still show ads somewhere around the AI overview, or as part of it if that's both technically feasible and legal.
The part of the equation that is gone is how organic traffic got to sites that published quality content. Now they might as well shut down or switch to a hard paywall. Neither will affect Google for a few years, until websites (other than shops) are dead, the knowledge stored in LLMs gets outdated, and search engines have a tiny index that is a shadow of its past size.
Quality content stopped being profitable well before ChatGPT. Quantity beat quality as a content strategy, flooding the search page with high-level, obvious “how-tos” and “best vacuum cleaner” slop. This destroyed the consumer search experience. Current models have plenty of rich historical data and are good at synthesizing quality responses with the right queries. Now the risk is that AI will be starved of recent quality information to pull from. Hopefully the pendulum swings back around to make quality information profitable again…
Bots creating content, bots clicking on content, bots reading that content, and bots creating more content from it. It'll be like a capitalist cousin to Newspeak[1]; instead of a top-down enforcement of the language it will evolve by popular demand.
Tech publications were already in a bind because their core demographic is people who think ad-blocking is the savior of the internet. Nobody was paying for their content to begin with.
honestly: good. all of them jumped up their own asses for the sake of SEO and minimum required regulation compliance, which stopped me from even going to the ones that aren't low-quality, content mills, which many of them are.
cut the cookies and tracking, so you don't have to have a ridiculous compliance banner. cut the paywall that tells me what you had to say wasn't important enough for public consumption. cut the full screen ad breaks and page takeover nonsense.
these outlets have had years (decades?) to figure out how to monetize content that didn't drive users away. they have failed over and over and over again, so why should I care that they are failing now? if it wasn't AI, it would be something else that came for them. if you rely on the captiveness of your audience, rather than the quality of your product, I'm always happy to see you destroyed. whatever comes next will be different, at the very least. and I'm an optimist - I'll always hope that it's a better way. if it's not, let that shit die, too.
regardless, I have every faith that the good will that buoyed these sites in their respective heydays will continue on to provide some other resources for the same kind of media.
What would be your suggestion for monetization without ads or subscription? Or are you thinking some type of privacy-respecting ad system? Because those have definitely been tried.
Subscriptions used to work. They can work again, even better than before, now that we have the facility for micro-transactions. A micro-transactional framework would have the added benefit of making it expensive for scrapers to steal content.
This is hard for me, an "information wants to be free" kinda guy, to espouse. But there are softer ways to do it, such as how The Guardian does it, or how public media does it.
Yeah, I think there's a lot of juice left in the "newsstand" model. We just have to figure out how to translate the efficiency of "drop in a quarter, get news" to digital currency and content. Like you said: a micro-transactional framework. That would be a hell of a thing to get started, but if you could, my money's on it working like a charm.
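The "drop in a quarter, get news" model could be sketched as a per-article wallet debit. A toy sketch, purely hypothetical names and prices, ignoring the hard parts (payment rails, fraud, scraper detection):

```python
from dataclasses import dataclass, field

ARTICLE_PRICE_CENTS = 25  # the digital "quarter"

@dataclass
class Reader:
    balance_cents: int = 0
    unlocked: set = field(default_factory=set)

def read_article(reader: Reader, article_id: str) -> bool:
    """Deduct a micro-payment the first time an article is opened."""
    if article_id in reader.unlocked:
        return True  # re-reads are free
    if reader.balance_cents < ARTICLE_PRICE_CENTS:
        return False  # prompt a small top-up instead of a full subscription
    reader.balance_cents -= ARTICLE_PRICE_CENTS
    reader.unlocked.add(article_id)
    return True

r = Reader(balance_cents=100)
assert read_article(r, "gpu-review")  # first read: 25 cents deducted
assert read_article(r, "gpu-review")  # already unlocked, no charge
print(r.balance_cents)  # 75
```

The same per-request charge is also what would make bulk scraping expensive: a scraper pulling thousands of articles pays the quarter every time.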
I'm not suggesting monetization without ads or subscriptions. I'm suggesting monetization without obnoxious bullshit like full page, scroll arresting ads, or news content locked behind a paywall, rather than editorial content locked behind a paywall.
If I go to your website where you purport to cover the news of the tech industry, it is always in your best interest to actually give me that news. I'd prefer it if they gave a dry, sometimes even bullet-pointed list of bare news facts. What they know, how they know it, and the basic ways it affects the site's topic/hobby, as soon as they possibly know it. From there, link to your subscription content that goes into detail about the news and provides attractive insight or framing or whatever, along with reasoned updates when the news stops breaking and we have some better or more reliable information. People who just want the news can hit the site, light up the in-page and side gutter banner ads, and then bounce. People curious for more or appreciative of the talent can subscribe and get more, and more informed, detail.
Basically, just the same old suggestions for any enterprise: figure out what people, right now, today, want; stop relying on what worked in the past or what is most convenient for your team. Break it down into how people actually function, and then place monetization where you would purchase, at a price that you would pay. I'll always be able to find the news without you, so you don't have any leverage to hold it hostage. Use it as a lead for your content, which can be the kind of reporting (different from news in subtle but meaningful ways) that people will be happy to pay for.
You can shape the AI responses for some niche topics relatively easily, even by accident. I recently saw two people arguing on a forum about a very niche industry topic; one of them started using Gemini as a source in the argument, and Gemini was already referencing their own thread as a source. I imagine people could start doing that on purpose with their own astroturfed blogs and public social media accounts.
> [...] currently, you can get the answers you want, described to you exactly how you'd like to hear it -- who wouldn't want that?
I thought we wanted the truth.
> Can the source database for an LLM be one-way, in that it does not contain output from itself, or other LLMs?
I think, for public internet data, we can only be reasonably confident for information before the big release of ChatGPT.
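One crude way to keep a corpus "one-way", along the lines suggested above, is a hard date cutoff at ChatGPT's public release: only documents captured before then are admitted. A minimal sketch (the corpus records here are hypothetical; real pipelines would work from crawl metadata):

```python
from datetime import date

CHATGPT_RELEASE = date(2022, 11, 30)

# Hypothetical corpus records; in practice these come from crawl snapshots.
documents = [
    {"url": "https://example.com/a", "crawled": date(2021, 5, 1)},
    {"url": "https://example.com/b", "crawled": date(2024, 2, 9)},
]

# Keep only documents captured before LLM output flooded the open web.
pre_llm_corpus = [d for d in documents if d["crawled"] < CHATGPT_RELEASE]
print(len(pre_llm_corpus))  # 1
```

A date filter is only a heuristic, of course: it freezes knowledge at the cutoff and cannot admit genuinely new human-written information, which is exactly why curating an open-internet corpus is hard.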
In that situation, they give the (wrong) answer that sounds the most plausible.
Results vary of course. I have some very wonderful synthesizer manuals.
> [...] we iteratively sent pictures of the screen until the steps eventually did work and the issue was fixed.
This saved us from having to call Apple support.
For some definitions of "better", that is. :(
For 99.99999% of people out there, LLMs are the new search. You can gnash teeth and yell and sob, but it is how things are.
> To me, LLMs feel like a return to the golden age of AltaVista and Google, where the Internet was a place you could reliably find the information you were looking for.
But we aren't there quite yet; that's tomorrow's problem. And I still have things that I need to do today.
https://www.bbc.com/future/article/20260218-i-hacked-chatgpt...
And now we can enjoy Marxist advertising ("posts") within our discussions on how to replace a TPMS sensor.
Reddit is Marxist? Hilarious.
I can't believe real people believe shit like this. Truly, the state of education is dire.
Marx and Engels must be rolling in their graves as a beacon of capitalist tech is called "Marxist". What a silly world we live in.
https://aftermath.site/gameshub-clickout-media-seo-gambling-...
Their entire business model was to funnel traffic to websites with their ads.
What is their income source now that they've all but stopped doing that?
[1] https://en.wikipedia.org/wiki/Newspeak
the rest are ad scams