I don’t want them publishing their archive while it’s up. If they archive but don’t republish while the site exists then there’s less damage.
I support the concept of archiving and screenshotting. I have my own linkwarden server set up and I use it all the time.
But I don’t republish anything that I archive because that dilutes the value of the original creator.
What if I’m looking for something but the page has changed?
Shouldn’t that be the content creator’s prerogative? What if the content had a significant error? What if they removed the page because someone living in the EU requested it under their laws? What if the page was edited because someone accidentally made their address and phone number public in a forum post?
Nah. It just lets slimy gits claim they never said XYZ, or that such and such a thing never happened. With a storage medium as volatile as the web, hard backups are absolutely necessary. Put it this way: would you have the same complaint about a newspaper? A TV show? Post your opinion piece to a newspaper and it’s fixed in ink forever. Yet somehow you complain when that same opinion piece is on a website? Get outta here.
Like I said, I have no problems with individuals archiving it and not republishing it.
If I take a newspaper article and republish it on my site I guarantee you I will get a takedown notice. That will be especially true if I start linking to my copy as the canonical source from places like Wikipedia.
It’s a fine line. Is archive.org a library (wasn’t there a court case about this recently…) or are they republishing?
Either way, it doesn’t matter for me any more. The pages are gone from the archive, and they won’t archive any more.
A couple of good examples are lifehacker.com and lifehack.org. Both sites used to have excellent content. The sites are still up and running, but the first one has turned into a collection of listicles and the second is an ad for an “AI-powered life coach”. All of that old content is gone and is only accessible through the Internet Archive.
In fact, many domains never shut down; they just change owners or change direction.
Again, isn’t that the site’s prerogative?
I think there should at least be a recognized way to opt-out that archive.org actually follows. For years they told people to put
User-agent: ia_archiver
Disallow: /
in robots.txt, but they still archived content from those sites. They refuse to publish the IP addresses they crawl from, even though that would be trivial to do. They refuse to use a User-Agent that you can filter on.
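If they did send a stable, documented User-Agent, blocking it server-side would be a one-liner. A minimal sketch, assuming nginx and assuming they actually identified themselves as ia_archiver (which they don’t, and that’s the whole complaint):

server {
    listen 80;
    server_name example.com;  # illustrative placeholder

    # Reject any request whose User-Agent contains "ia_archiver" (case-insensitive)
    if ($http_user_agent ~* "ia_archiver") {
        return 403;
    }

    root /var/www/html;
}

The robots.txt stanza above only works if the crawler chooses to honor it; a User-Agent filter at the server would not be optional.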
If you want to be a library, be open and honest about it. There’s no need to sneak around.