5 comments

  • tossandthrow 13 minutes ago
    I think media outlets think way too highly of their contribution to AI.

    Had they never existed, it would likely not have made a dent in AI development - just as their being twice as productive would likely not have made a dent in the quality of LLMs.

  • gzread 1 hour ago
    This is why archive.is was created. Should we stop trying to hunt down and punish its creator and instead support it as the extremely useful project that it is?
    • philistine 55 minutes ago
      The creator can maintain anonymity. The creator does not deserve to continue being celebrated when they embarked on a DDOS campaign using the traffic of archive.is against a journalist trying to uncover their identity. By these actions, they have shown themselves to be capricious, vindictive, and willing to ensnare their users in their DDOS of others. Whoever they are, they’re terrible.
      • rdevilla 3 minutes ago
        This is great. Journalists are impeding the preservation of the historical record by blocking archivist traffic while simultaneously manhunting those archivists who find ways around their authwalls.

        Soon the news and the historical facts will be unnecessary. You can simply receive your wisdom from the AI, which, as a nondeterministic system, is free to change the facts at will.

      • Obscurity4340 20 minutes ago
        I had no idea that was the actual situation (journalist trying to hunt them down). Sorta changes the moral calculus, I'll allow it
      • MSFT_Edging 13 minutes ago
        If there's ever something a journalist would never ever do, it's destroy someone's life for a headline. Never ever. Totally impossible.
      • gzread 50 minutes ago
        Their life is in danger and one particular journalist is making it so
      • choo-t 37 minutes ago
        Well, if they deserve anonymity, they also deserve to be able to protect it, and they have very few tools against doxxing. The DDOS was one of them; corrupting the archived article was another, albeit a dangerous one for their own reputation as an archiver.

        The crux of the problem was the doxxing, not the defense against it.

        • ajam1507 30 minutes ago
          You don’t think leveraging your site to DDOS someone is a problem?

          Do people not also deserve to be protected from being DDOSed? Do people also not deserve to not have their internet traffic be used to DDOS someone?

          • psychoslave 2 minutes ago
            Not defending any party, it's a basic ethological expectation: a creature that tries to beat another should expect an aggressive response in return.

            Of course, never aggressing anyone and transforming any aggression against oneself into an opportunity to acculturate the aggressor into the same empathic behavior is the mark of a paragon of virtue. But paragons of virtue are not the median norm, by definition.

          • choo-t 8 minutes ago
            > You don’t think leveraging your site to DDOS someone is a problem?

            It is, but it's one of the only tools they have to prevent the doxxing site from being reachable.

            > Do people not also deserve to be protected from being DDOSed?

            You mean the person doing the doxxing should be protected?

            > Do people also not deserve to not have their internet traffic be used to DDOS someone?

            Yes, it should have been opt-in. But unless you don't run JS, you kinda give the websites you visit the right to run arbitrary code anyway.

          • kpcyrd 8 minutes ago
            You don't think non-consensually revealing somebody's identity is a problem?

            Resorting to DDoS is not pretty, but "why is my violent behavior met with violence" is a little oblivious and a reversal of victim and perpetrator roles.

  • user_7832 1 hour ago
    > But in recent months The New York Times began blocking the Archive from crawling its website, using technical measures that go beyond the web’s traditional robots.txt rules. That risks cutting off a record that historians and journalists have relied on for decades. Other newspapers, including The Guardian, seem to be following suit.

    I'm a bit surprised I never read about this till now; the move itself, though disappointing, is unfortunately not surprising.

    > The Times says the move is driven by concerns about AI companies scraping news content. Publishers seek control over how their work is used, and several—including the Times—are now suing AI companies over whether training models on copyrighted material violates the law. There’s a strong case that such training is fair use.

    I suspect part of it might be these corps not wanting people to skip a paywall (whether those people would ever have paid is a different story). But that argument makes no sense for the Guardian.

    • user_7832 1 hour ago
      I went to the Guardian's website to cross-check their motto (I was getting it confused with WaPo's) and got served this (hilarious? sad?) banner. As if blocking cross-website tracking is somehow bad.

      > Rejection hurts … You’ve chosen to reject third-party cookies while browsing our site. Not being able to use third party cookies means we make less from selling adverts to fund our journalism.

      > We believe that access to trustworthy, factual information is in the public good, which is why we keep our website open to all, without a paywall.

      > If you don’t want to receive personalised ads but would still like to help the Guardian produce great journalism 24/7, please support us today. It only takes a minute. Thank you.

      • duskdozer 17 minutes ago
        > If you don’t want to receive *personalised ads*

        So ads, just not personalized. Remind me again why personalized ads are good for me if I have to pay to have non-personalized ads?

      • mocd 53 minutes ago
        The Guardian’s ads asking for contributions have got progressively more desperate. I find their commitment to keeping their site paywall-free admirable, but the current almost-begging (and selling off their Sunday paper) has got so intense that it feels like it’s only a matter of time until they introduce some kind of paid content.
  • xnx 2 hours ago
    Does Internet Archive have a distributed residential IP crawler program? I would enthusiastically contribute to that.

    There must be some mechanism to prevent tampering in such a setup.

    • gzread 1 hour ago
      No, IA does everything above board and even honors invalid DMCA takedowns.
    • progval 1 hour ago
      The Internet Archive does not, but Archive Team does: https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
      • xnx 1 hour ago
        Yes! I'm running an instance right now.
    • Retr0id 1 hour ago
      > There must be some mechanism to prevent tampering in such a setup.

      Trivial as long as they terminate the TLS on their end, not yours. So you'd just be a residential proxy.
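
      To make that concrete, here is a minimal sketch of the idea (Python, purely illustrative: the controller address and the "host port" handshake are made up, and this is not IA's or Archive Team's actual protocol). The volunteer node is a dumb byte relay; the TLS session runs end to end between the archive's crawler and the target site, so the relay only ever sees ciphertext and can't read or alter what it helps fetch.

        import socket
        import threading

        # Hypothetical coordination endpoint run by the archive (made up for this sketch).
        ARCHIVE_CONTROLLER = ("archive.example.org", 9000)

        def pump(src, dst):
            # Copy bytes one way until the source closes, then signal EOF downstream.
            while True:
                data = src.recv(4096)
                if not data:
                    break
                dst.sendall(data)
            dst.shutdown(socket.SHUT_WR)

        def relay_one_job():
            # Ask the controller for a job; assume it replies with "host port" of the
            # site to fetch, then starts sending the crawler's TLS bytes through us.
            upstream = socket.create_connection(ARCHIVE_CONTROLLER)
            host, port = upstream.recv(1024).decode().split()
            target = socket.create_connection((host, int(port)))

            t = threading.Thread(target=pump, args=(upstream, target))
            t.start()
            pump(target, upstream)  # ciphertext only; the relay never holds TLS keys
            t.join()
            upstream.close()
            target.close()

        if __name__ == "__main__":
            relay_one_job()

      The volunteer contributes a residential IP and bandwidth, nothing more; if it tampered with the bytes in flight, TLS record authentication between crawler and site would simply fail.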

  • SlinkyOnStairs 1 hour ago
    Devil's advocate: Anyone seeking to limit AI scraping doesn't have much of a choice but to also block archivists.

    And it's genuinely not that weird for news organisations to want to stop AI scraping. This is just a repeat of their fight with social media embedding.

    Sure. The back catalogue should be as close to public domain as possible; libraries keeping those records is incredibly important for research.

    But with current news, that becomes complicated, as taking the articles without paying the subscription (or viewing their ads) directly takes away the revenue streams that newsrooms rely on to produce the news. Hence the "Newspaper trying to ban linking" mess, which was never about the links themselves but about social media sites embedding the headline and a snippet, which in turn made users stop clicking through and "paying" for the article.

    Social media relies on those newsrooms (as do, really, most other kinds of websites) to provide a lot of their content. And AI relies on them for all of its training data (remember: "synthetic data" does not appear ex nihilo) and to provide the news that AI users request. We can't just let the newsrooms die. The newsroom itself hasn't been replaced; its revenue has been destroyed.

    ---

    And so, the question of archives pops up. Because yes, you can with some difficulty block out the AI bots, even the social media bots. A paywall suffices.

    But this kills archiving. Yet if you whitelist the archives in some way, the AI scrapers will just pull their data out of the archive instead and the newsrooms still die. (Which also makes the archiving moot)

    A compromise solution might be for archives to accept/publish things on a delay, keeping the AI companies from taking current news without paying up while still granting everyone access to stuff from decades ago.

    There's just major disagreement about what a reasonable delay is. Most major news orgs and other such IP-holders are pretty upset about AI firms' "steal first, ask permission later" approach. Several AI firms setting the standard that training data is to be paid for doesn't help here either. In paying for training data they've created a significant market for archives, and a significant incentive not to make them freely accessible to the public.

    Why would The Times ever hand over their catalogue to the Internet Archive if Amazon will pay them a significant sum of money for it? The greater good of all humanity? Good luck getting that from a dying industry.

    ---

    Tangent: Another annoying wrinkle in the financial incentives here is that not all archiving organisations are engaging in fair play, which yet further pushes people to obstruct their work.

    To cite an HN-relevant example: the source code archivist "Software Heritage" has long engaged in holding a copy of all the source code they can get their hands on, regardless of its license. If it's ever been on GitHub, odds are they're distributing it. Even when licenses explicitly forbid that. (This is, of course, perfectly legal in the case of actual research and other fair use. But:)

    They were notably involved in HuggingFace's "The Stack" project by sharing their archives ... and received money from HuggingFace. While the latter is nominally a donation, this is in effect a sale.

    ---

    I find it quite displeasing that the EFF fails to identify the incentives at play here. Simply trying to nag everyone into "doing the thing for the greater good!" is loathsome and doesn't work. Unless we change this incentive structure, the outcome won't change.

    • Obscurity4340 16 minutes ago
      It would be better if there was some arrangement the papers could reach with the Archive where they just delay the release, or wait a week before it becomes part of the archive. That way, news stuff gets paid for when it's hot and fresh, but then it gets archived and the record is preserved.