As somebody who only buys used electronics, I am worried that this means used-electronics prices are going to start rising toward a more accurate (and more expensive) level. Bad for me, but probably good for allocation of resources.
I disagree. Articles written by AI are inherently less trustworthy (they're notorious for fabrications and hallucinations) and often have a very low content density. "Write me a 10 paragraph article about high hardware prices for my blog" is the sort of thing that expands to a lot of fluff with not a lot of content. I don't really want to read that article.
Even if AI might (might) be justifiable as an editor, it's still such a negative signal for "is this worth reading?" that in my opinion it is worthwhile to point out and discuss.
This case is interesting, because it seems obvious that the AI accusation is just plain wrong. The article is riddled with the kind of grammatical and spelling mistakes that humans regularly make but that a modern AI would never make.
That could easily be part of their prompt. I just did a quick test telling Copilot to "add a few spelling and grammar mistakes to look like a human wrote it" and it does a reasonably convincing job.
The errors are more than superficial; few authors would be willing to have AI screw up their work enough to invert the meaning of some of their sentences. https://news.ycombinator.com/item?id=47169256
It's also very easy to paste a paragraph into a chatbot and ask it to revise it. Or ask it to write an introduction.
I don't really have a problem with that use of AI.
But one of the costs is reputational: potential readers are now going to assume AI wrote the whole article, fairly or unfairly. That's a consideration writers have to weigh before choosing to do this.
I agree that AI-written text often has a low content density. I wonder if it's a matter of information theory.
Information theory defines the information of a symbol in terms of how likely it is to occur: the less expected a symbol is, the more information it carries. (Usually "symbol" means one character or byte, but it could be a word or word part.)
Well, if you think about LLMs that way, they give you the most-probable next word (or word part). That means that they give you less information than normal writing. I suspect that's why it reads as bland, low-content - because it really is low content, in the information theory sense.
Now, it doesn't always give you the most probable next symbol. There is some randomness. And you can increase the randomness by turning up the temperature. But if you do, then I suspect it becomes incoherent more quickly. (Random gibberish may have high information from an information theory standpoint, but humans don't want to read that either.)
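The argument above can be made concrete with a toy sketch. The function names and the example distribution here are my own invention, not anything from the article: it computes Shannon surprisal (−log₂ p) for a made-up next-word distribution, and shows how temperature scaling sharpens or flattens that distribution, lowering or raising the average information per symbol.

```python
import math

def surprisal_bits(p):
    """Information content (surprisal) of an outcome with probability p, in bits."""
    return -math.log2(p)

def expected_surprisal(probs):
    """Shannon entropy: the average information per symbol, in bits."""
    return sum(p * surprisal_bits(p) for p in probs)

def apply_temperature(probs, temperature):
    """Rescale a next-token distribution by temperature.
    T < 1 sharpens it (greedier, lower average surprisal);
    T > 1 flattens it (more random, higher average surprisal)."""
    logits = [math.log(p) for p in probs]
    scaled = [l / temperature for l in logits]
    z = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / z for s in scaled]

# A toy next-word distribution: one very likely word, three unlikely ones.
probs = [0.7, 0.1, 0.1, 0.1]
print(expected_surprisal(probs))                          # ~1.357 bits at T=1
print(expected_surprisal(apply_temperature(probs, 0.5)))  # ~0.41 bits: sharper, blander
print(expected_surprisal(apply_temperature(probs, 2.0)))  # ~1.84 bits: flatter, noisier
```

This matches the intuition in the comment: always picking the high-probability continuation drives the average surprisal down (bland but coherent), while turning the temperature up buys more information per symbol at the cost of coherence.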
Maybe it's because, no matter how it was written, this is a boring piece about the tiny, tiny minority of numbers-obsessed "gamers" who do upgrade their hardware yearly.
It's not even representative for gamers as a whole.
And starting from that, it's easy to also accuse it of being LLM generated, even if it isn't.
I have no opinion because I couldn't go past the first paragraph. It's not talking about any subgroup I can identify with.
Also, after skimming it, I found it didn't say anything new or insightful, no matter whether it came from a "content creator" or an "AI".
I don't know, it would take quite a subtle prompt to get an LLM to write the sub-edited slop in the latter part of the article:
> We are now again in a new inflatory phase, and this time the difference is that it does not only impact GPUs, but almost everything really important, like RAM and NVME storage.
> But there’s also something new on the software side. The advent of new technologies like DLSS, FSR, and more recently Framegen have also changed the performance equation a bit.
> Finally, the games themselves are not pushing the enveloppe [sic] as much as they used to.
> but that’s not obvious that the visual outcomes are far better.
Am I the only one who hasn't felt the need to upgrade in _way_ more than just one year? I still have an old XPS 15 9560 (pushing 10 years!!!) running Ubuntu which is perfectly usable. I upgraded the RAM (32 GB) and the battery, and I still consider it totally usable for most day-to-day tasks: development, Docker containers, browsing the web with an unhealthy number of tabs open. What more do I need?
Same. My daily driver is a high spec Dell laptop from 2018. I do CAD work on it and it's approximately fine. I upgraded the memory last year and I've had to repaste the heatsink and replace the battery, but I still can't justify getting a new machine given that it does everything I actually need flawlessly.
55 answers to that poll, with no idea whether or how it has changed from last year. I do believe that many people are postponing upgrades, but those poll results are not worth spending that many words on.
These aren't the words of a human – they're the words of an LLM
It just distracts the discussion away and adds nothing.
If we allow this to normalize, then I'm done with this part of the internet. (And I agree, one has to use this criticism with some care.)
> We have entered in the marginal progress zone.
> We had Witcher 3: Blood and Wine back in 2016 and while it’s the best looking game ever, it has aged very well.
I'm sure the author meant to say it's not the best looking game ever.
Today I checked its price on the website where I bought it, and it was almost €300 more.