We’ve had some trouble recently with posts from aggregator links like Google AMP, MSN, and Yahoo.

We’re now requiring that links go to the original source, not a conduit.

In a case like this, the MBFC bot can end up attributing the article to the aggregator instead of the original publisher, which can produce a rating that is more or less reliable than the original source actually warrants, and it also makes it harder to run down duplicates.

So anything that isn’t linked to the original source but instead goes through Google AMP, MSN, Yahoo, etc. will be removed.

  • geekwithsoul@lemm.ee · 2 months ago

    Many of the articles I’ve seen are not in fact behind a paywall, but obviously YMMV.

    • abff08f4813c@j4vcdedmiokf56h3ho4t62mlku.srv.us · 2 months ago

      What seems reasonable to me is, if someone is willing to make the optional effort, to link the original paywalled source as the primary link and then add the paywall-free MSN/Yahoo/AMP link either at the bottom of the description or in a comment. It looks like this would still be in line with the updated rules, but it would prevent duplicate posts (one with only the paywall-free version and one with only the paywalled link).

      • geekwithsoul@lemm.ee · 2 months ago

        There are much better ways to do archive links that deal with paywalls, e.g. archive.is and others. News aggregators should not be relied on for archival links, as a link that works today may not work a year from now when corporate agreements or ownership change.
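
        For the Wayback Machine at least, this can be scripted rather than done by hand. A rough sketch in Python (the availability check and the anonymous Save Page Now request are assumptions about the public endpoints, and both rate-limit or go down from time to time):

        import requests

        def archive_link(url: str) -> str | None:
            # Ask the availability API whether a snapshot of this URL already exists.
            resp = requests.get("https://archive.org/wayback/available",
                                params={"url": url}, timeout=30)
            resp.raise_for_status()
            closest = resp.json().get("archived_snapshots", {}).get("closest")
            if closest and closest.get("available"):
                return closest["url"]
            # No snapshot yet: ask Save Page Now to make one. The final URL after
            # redirects is usually the new snapshot.
            save = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
            save.raise_for_status()
            return save.url

        archive.is doesn’t have an official API that I know of, so for that one it’s still copy-and-paste into the web form.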

        • Ah, that’s a good point that I hadn’t considered. You’re right.

          Of course, there might be the rare exception where the archivers can’t get past the paywall on the original site, but the article is available from MSN or something.

          Even so, it seems like the general rule should be: prefer an archiver, fall back to a news aggregator only as a last resort, and then archive the news aggregator’s page so it’s retained even if the aggregator drops the article later on (rough sketch of what I mean below). Am I on the right track here?

          (Current example: https://archive.ph/nugTi did not succeed in capturing https://theintercept.com/2024/10/09/white-house-oct-7-israel-war-gaza/. In the past I’ve seen this overcome by archiving the Google-cached version or a copy already saved in the Wayback Machine, but Google Cache has been shut down by Google and archive.org is still down over this holiday weekend.)
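
          Something like this is what I had in mind for that rule of thumb (just a sketch; the Wayback availability and save endpoints are assumptions about the public interfaces, and wayback_snapshot / preferred_link are names I made up):

          import requests

          def wayback_snapshot(url: str) -> str | None:
              # Return an existing Wayback snapshot of the URL, if the availability API knows one.
              r = requests.get("https://archive.org/wayback/available",
                               params={"url": url}, timeout=30)
              r.raise_for_status()
              closest = r.json().get("archived_snapshots", {}).get("closest")
              return closest["url"] if closest and closest.get("available") else None

          def preferred_link(original_url: str, aggregator_url: str | None = None) -> str:
              # Prefer an archived copy of the original article.
              snap = wayback_snapshot(original_url)
              if snap:
                  return snap
              # Last resort: fall back to the aggregator copy, and archive that
              # page too so it's retained even if the aggregator drops the
              # article later on.
              if aggregator_url:
                  requests.get(f"https://web.archive.org/save/{aggregator_url}", timeout=120)
                  return aggregator_url
              return original_url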

          • geekwithsoul@lemm.ee · 2 months ago

            BTW, for that site and others with more of a nagwall than a paywall, viewing the article in reader view takes care of the popup (and many Lemmy clients can be set to default to reader view for links).

            • Thanks, the tip about reader view solves the original issue (reading nagwalled articles). I run my own pyfedi/piefed instance, so I’d be surprised if I could use a Lemmy client, but I’ll keep it in mind.

              If only there were a way I could feed my reader view into archive.is (which would solve the other issue: preserving the article in case the original ever goes down).
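
              The closest workaround I can think of is extracting the reader view locally and keeping my own copy. It doesn’t push anything into archive.is, but at least the text survives if the original goes down. Rough sketch (assumes the third-party readability-lxml package):

              import pathlib
              import requests
              from readability import Document  # from the readability-lxml package

              def save_reader_view(url: str, out_path: str) -> None:
                  # Fetch the page and extract a reader-view-style version of the article.
                  html = requests.get(url, timeout=30,
                                      headers={"User-Agent": "Mozilla/5.0"}).text
                  doc = Document(html)
                  # Document.summary() returns the cleaned-up article body as HTML.
                  cleaned = f"<h1>{doc.title()}</h1>\n{doc.summary()}"
                  pathlib.Path(out_path).write_text(cleaned, encoding="utf-8")

              save_reader_view(
                  "https://theintercept.com/2024/10/09/white-house-oct-7-israel-war-gaza/",
                  "white-house-oct-7.html",
              )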