I just sent a DMCA takedown last week to get my site removed from their archive. They’ve claimed to follow meta tags and robots.txt since 1998, but no, they had over 1,000,000 of my pages going back that far. They even had the robots.txt that was configured for them archived from 1998.
I’m tired of people linking to archived versions of things that I worked hard to create. Sites like Wikipedia were archiving URLs and then linking to the archive, effectively removing branding and blocking user engagement.
Not to mention that I’m losing advertising revenue if someone views the site in an archive. I have fewer problems with archiving if the original site is gone, but to mirror and republish active content with no supported way to prevent it short of legal action is ridiculous. And I lose control over what’s done with that content – are they going to let Google train AI on it with their new partnership?
I’m not a fan. They could easily allow people to block archiving, but they choose not to. They offer a way to circumvent artist or owner control, and I’m surprised that they still exist.
So… That’s what I think is wrong with them.
From a security perspective it’s terrible that they were breached. But it is kind of ironic – maybe they can think of it as an archive of their passwords or something.
No one is using Internet Archive to bypass ads. Anyone who would think of doing that already has ad blockers on.
You misunderstood. If they view the site at Internet Archive, our site loses the opportunity for ad revenue.
I completely understood. No one is going to IA as their first stop. They’re only going there if they want to see a history change or if the original site is gone.
Yes, some Wikipedia editors are submitting the pages to archive.org and then linking to that instead of to the actual source.
So when you go to the Wikipedia page it takes you straight to archive.org – that is their first stop.
Because if you’re referencing something specific, why would you take the chance that someone changes that page? Are you going to monitor that from then on and make sure it’s still correct/relevant? No, you take what is effectively a screenshot and link to that.
You aren’t really thinking about this from any standpoint except your advertising revenue.
I’m thinking about it from the perspective of an artist or creator under existing copyright law. You can’t just take someone’s work and republish it.
It’s not allowed with books, it’s not allowed with music, and it’s not even allowed with public sculpture. If a sculpture shows up in a movie scene, they need the artist’s permission and may have to pay a licensing fee.
Why should the creation of text on the internet have lesser protections?
But copyright law is deeply rooted in damages, and if advertising revenue is lost that’s a very real example.
And I have recourse; I used it. I used current law (DMCA) to remove over 1,000,000 pages because it was my legal right to remove infringing content. If what they were doing had been legal, they wouldn’t have had to remove it.
This conversation makes me want to throw up, as most discussions that revolve around the DMCA usually do. Using rights under the DMCA doesn’t put you in very good company.
Have you ever heard of the mysterious places called “libraries”? IA does not “republish” anything; it is an archive.
Technically, each time that it is viewed it is a republication from a copyright perspective. It’s a digital copy that is redistributed; the original copy that was made doesn’t go away when someone views it. There’s not just one copy that people pass around like a library book.
Wah wah wah, my stuff’s been preserved and I don’t like it.
Lmao you think Google needs to go through Archive to scrape your site? Delusional.
The mechanisms used to serve ads over the internet nowadays are nasty in a privacy sense and in a psychological-manipulation sense. And you want people to be affected by them just to line your pockets? Are you also opposed to ad blockers, by any chance?
And how do you suggest a site which has been wiped off the face of the internet gets archived? Maybe we need to invest in a time machine for the Internet Archive?
What do you mean by “engagement”, exactly? Clicking on ads?
In SEO terms, user engagement refers to how people interact with the website. Do they click on another link? Does a new blog posting interest them?
Any activity from Google is easier to track, and I have a record of who downloaded content if it’s coming from my servers.
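As a rough illustration of the “record of who downloaded content” point above: hits from a self-identified crawler can be pulled straight out of a standard combined-format access log. The log path and agent string below are assumptions for illustration, not anything from this thread.

# Tiny sketch: scan a combined-format access log for hits from a given crawler.
import re

LOG_LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<req>[^"]*)" \d+ \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def hits_from(logfile: str, agent_substring: str):
    """Yield (ip, request) pairs whose User-Agent contains agent_substring."""
    with open(logfile, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LOG_LINE.match(line)
            if m and agent_substring.lower() in m.group("agent").lower():
                yield m.group("ip"), m.group("req")

if __name__ == "__main__":
    # Hypothetical log path and agent; adjust for your own server.
    for ip, req in hits_from("/var/log/nginx/access.log", "Googlebot"):
        print(ip, req)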
I agree that many sites use advertising in a different way. I use it in the older internet sense – someone contacts me to sponsor a page or portion of the site, and that page gets a single banner, created in-house, with no tracking. I’ve been using the internet for 36 years. I’m well aware of many uses that I view as unethical, and I take great pains not to replicate them on my own site.
I disapprove of ad blockers. I approve of things that block tracking.
As far as “lining my own pockets” goes, I want to recoup my hosting costs. I spend hours researching for each article/showcase, make the content free to view, and then I’m expected to pay to share it with anyone who’s interested? I have a day job. This is my hobby, but it’s also my blood, sweat, and tears.
archive.org could archive the content and only publish it if the page has been dark for a certain amount of time.
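A minimal sketch of what that delayed-publication rule could look like, assuming a hypothetical 30-day grace period and a crawler that records when it first saw the page go dark; nothing here reflects how archive.org actually works.

# Hypothetical "publish only after the page has been dark for a while" check.
# The 30-day window and function names are assumptions made for illustration.
import datetime
import urllib.error
import urllib.request

def is_dark(url: str) -> bool:
    """True if the live URL errors out or no longer resolves."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status >= 400
    except (urllib.error.URLError, OSError):
        return True

def may_publish(url: str, first_seen_dark: datetime.datetime | None,
                grace: datetime.timedelta = datetime.timedelta(days=30)) -> bool:
    """Serve an archived snapshot only if the page has stayed dark for `grace`.

    `first_seen_dark` must be a timezone-aware UTC timestamp (or None if the
    page has not yet been observed dark).
    """
    if not is_dark(url):
        return False  # page is back up: keep the snapshot private
    if first_seen_dark is None:
        return False  # start the clock; check again on the next pass
    return datetime.datetime.now(datetime.timezone.utc) - first_seen_dark >= grace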
Archiving is user-driven. Nothing would get archived in this case. And what if the content changes but the page remains up? What then? Fairly sure this is why Wikipedia uses archives.
Pretty sure mainstream ad blockers won’t block a custom in-house banner. And if it has no tracking, then it doesn’t matter whether it’s on Archive or not, you’re getting paid the same, no?
That’s a good point.
Some of them do block those kinds of ads – I’ve tried it out with a few. If it’s at archive.org I lose the ability to report back to the sponsor that their ad was viewed ‘n’ times (unless, ironically, I put a tracker in). It also means that if sponsorship changes, the main drivers of traffic like Wikipedia may not see that. It makes getting new sponsors more difficult because they want something timely for seasonal ads. Imagine sponsoring a page, but Wikipedia only links to the archived one. Your ad for gardening tools isn’t reflected by one of the larger drivers of traffic until December, and nobody wants to buy gardening tools in December.
Yes, I could submit pages to archive.org as sponsorship changes if this model continues.
It was a much bigger deal when we used Google ads a decade ago, but we stopped in early 2018 because tracking was getting out of hand.
If I was submitting pages myself I’d be all for it because I could control when it happened. But there have been times when I’ve edited a page and totally screwed it up, and archive.org just happened to grab it at that moment when the formatting was all weird or the wrong picture was loaded. I usually fix the page and forget about it until I see it on archive.org later.
I asked for pages like that to be removed, but archive.org was unresponsive until I used a DMCA takedown notice.
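For what it’s worth, resubmitting a page after a sponsorship change or a botched edit can be done through the Wayback Machine’s public “Save Page Now” endpoint. A rough sketch, assuming the plain GET form of https://web.archive.org/save/<url> still triggers a capture; the script and example URL are made up.

# Manually push a fresh capture of a page to the Wayback Machine.
# Assumes the public Save Page Now endpoint accepts a plain GET request;
# verify against archive.org's current documentation before relying on it.
import urllib.request

def save_page_now(url: str) -> int:
    """Ask the Wayback Machine to capture `url` now; returns the HTTP status."""
    req = urllib.request.Request(
        "https://web.archive.org/save/" + url,
        headers={"User-Agent": "manual-resubmit-sketch"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return resp.status  # 200 generally means a capture was triggered

if __name__ == "__main__":
    # Hypothetical page whose sponsor banner just changed.
    print(save_page_now("https://example.com/sponsored-page"))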
SEO killed the internet. You’re literally part of the reason why people go look for alternatives to viewing your website; no one wants ads.
I don’t think you know what SEO is. I think you know what bad SEO is.
Anyhow, Wikipedia is always free to link somewhere else if they can find better content.
Wait, people prefer the archived version? Too many ads?
Removed by mod
Did you just draw a comparison between redistribution of publicly available content and…rape? Dang.
Hey, if they choose to wrap their comments in completely inane reasoning they should be allowed to.
I 100% agree with you. I’m also allowed to call them out on their bullshit haha
deleted by creator
Removed by mod
Someone asked a question and I answered honestly. I’m sorry that you can’t understand my perspective.
Meaning, your content changes often?
I’m only trying to understand why you seem to be especially affected.
How do you expect an archive to happen if they are not allowed to archive while the site is still up? How are you supposed to track changes or see how the world has shifted? This is a very narrow and, in my opinion, selfish way to view the world.
I don’t want them publishing their archive while it’s up. If they archive but don’t republish while the site exists, then there’s less damage.
I support the concept of archiving and screenshotting. I have my own Linkwarden server set up and I use it all the time.
But I don’t republish anything that I archive, because that dilutes the value of the original creator’s work.
What if I’m looking for something but the page has changed?
Shouldn’t that be the content creator’s prerogative? What if the content had a significant error? What if they removed the page because someone living in the EU requested it under their laws? What if the page was edited because someone accidentally made their address and phone number public in a forum post?
Nah. It just lets slimy gits claim they never said XYZ, or that such and such a thing never happened. With as volatile a storage medium as the internet, hard backups are absolutely necessary. Put it this way: would you have the same complaint about a newspaper? A TV show? Post your opinion piece to a newspaper and it’s fixed in ink forever. Yet somehow you complain when that same opinion piece is on a website? Get outta here.
Like I said, I have no problems with individuals archiving it and not republishing it.
If I take a newspaper article and republish it on my site I guarantee you I will get a takedown notice. That will be especially true if I start linking to my copy as the canonical source from places like Wikipedia.
It’s a fine line. Is archive.org a library (wasn’t there a court case about this recently…) or are they republishing?
Either way, it doesn’t matter for me any more. The pages are gone from the archive, and they won’t archive any more.
A couple of good examples are lifehacker.com and lifehack.org. Both sites used to have excellent content. The sites are still up and running, but the first one has turned into a collection of listicles and the second is an ad for an “AI-powered life coach”. All of that old content is gone and is only accessible through the Internet Archive.
In fact, many domains never shut down, they just change owners or change direction.
Again, isn’t that the site’s prerogative?
I think there should at least be a recognized way to opt out that archive.org actually follows. For years they told people to put
User-agent: ia_archiver
Disallow: /
in robots.txt, but they still archived content from those sites. They refuse to publish which IP addresses they crawl from, even though that would be a trivial thing to do. They refuse to use a User-Agent that you can filter on.
If you want to be a library, be open and honest about it. There’s no need to sneak around.
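To make the filtering complaint concrete: if the crawler did send a recognizable User-Agent, refusing it server-side would only take a few lines. A minimal WSGI sketch, assuming the agent string contains “ia_archiver” (which, per the comment above, is exactly what you cannot count on).

# Sketch of User-Agent filtering for a WSGI site. The blocked-agent list is
# an assumption; the whole point of the complaint above is that the crawler
# may not identify itself this way.
from wsgiref.simple_server import make_server

BLOCKED_AGENTS = ("ia_archiver",)

def block_archiver(app):
    """Wrap a WSGI app and return 403 Forbidden to blocked User-Agents."""
    def middleware(environ, start_response):
        agent = environ.get("HTTP_USER_AGENT", "").lower()
        if any(blocked in agent for blocked in BLOCKED_AGENTS):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Archiving of this site is not permitted.\n"]
        return app(environ, start_response)
    return middleware

def site(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, ordinary visitor.\n"]

if __name__ == "__main__":
    make_server("", 8000, block_archiver(site)).serve_forever()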
About the only thing I can agree with you on here is that I don’t like it when people on Wikipedia archive a link and then list that as the primary source in the reference instead of the original link. Wikipedia (at least in English) has a proper method to follow for citations with links, and the archived version should only become the primary link if the original source is dead or has changed and no longer covers the reference.
They should also honor a DMCA takedown and robots.txt, but at least with the DMCA I’m sure there’s a backlog. Personally, I’ve always appreciated the archive’s existence, though, and I’d think their impact is small enough that it’s better to have them than to block them.