Highlighting the recent report of users and admins being unable to delete images, and how Trust & Safety tooling is currently lacking.
I have no clue how the jurisprudence would turn out. But keep in mind: this is not about the posts people make. The framework just needs to collect and store as little information as possible that could be considered PII, and it needs a way to remove it.
If deleting your account results in the PII actually being removed (username, IP address, other profile info, whatever data is stored under the hood), and these removals actually get federated… there should not be an issue.
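Roughly what I mean, as a sketch in Python (the `Account` shape and the outbox are made up for illustration, not any framework's actual API): on deletion the stored PII gets scrubbed rather than just hidden, and a Delete gets queued so other instances can purge their copies too.

```python
from dataclasses import dataclass, field


@dataclass
class Account:
    id: int
    username: str
    ip_addresses: list[str] = field(default_factory=list)
    profile: dict = field(default_factory=dict)
    deleted: bool = False


def delete_account(account: Account, federation_outbox: list) -> None:
    """Scrub stored PII locally, then federate the deletion.

    Sketch only: the point is that personal data is overwritten or removed,
    not merely flagged, and that a Delete activity is queued so remote
    instances can remove their copies as well.
    """
    account.username = f"deleted-{account.id}"   # non-identifying stub, keeps threads intact
    account.ip_addresses.clear()                 # drop stored IPs entirely
    account.profile.clear()                      # bio, avatar, email, etc.
    account.deleted = True

    # Hypothetical ActivityPub-style Delete; the real payload depends on the implementation.
    federation_outbox.append({
        "type": "Delete",
        "object": f"https://example.instance/users/{account.id}",
    })


if __name__ == "__main__":
    outbox: list = []
    acct = Account(id=42, username="alice", ip_addresses=["203.0.113.7"], profile={"bio": "hi"})
    delete_account(acct, outbox)
    print(acct)
    print(outbox)
```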
Admins might then still have to act if people start posting PII in messages, but that would probably count as doxxing and be up for removal anyway.
So mainly the issues boil down to:
The issue I see is this: if my instance is on the hook for the fediverse at large and I operate on an allowlist basis, malicious actors can still scrape PII and ignore the GDPR, and that would leave me on the hook for it. Isn't that right?
There is still plenty of jurisprudence and clarity needed, so… maybe. Hence the importance of the framework itself being as GDPR-compliant as possible: not storing PII unless necessary, and removing it once it is no longer necessary. (Storing someone’s IP for login, post validation, bans, etc. should be limited to a period that makes sense, not kept indefinitely.)
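For the retention part, something like a scheduled prune job would do. Again just a sketch: the 30-day window is made up, since the "period that makes sense" is a policy call for each instance, not something code can decide.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; the actual period is an instance policy decision.
IP_LOG_RETENTION = timedelta(days=30)


def prune_ip_logs(ip_logs: list[dict], now: datetime | None = None) -> list[dict]:
    """Keep only IP/login records newer than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - IP_LOG_RETENTION
    return [entry for entry in ip_logs if entry["seen_at"] >= cutoff]


if __name__ == "__main__":
    logs = [
        {"ip": "203.0.113.7", "seen_at": datetime.now(timezone.utc) - timedelta(days=90)},
        {"ip": "198.51.100.2", "seen_at": datetime.now(timezone.utc) - timedelta(days=2)},
    ]
    print(prune_ip_logs(logs))  # only the recent entry survives
```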
And in your example, the ‘malicious’ part of the third party probably makes it different. Maybe then it is a data leak.