Given how Reddit now makes money by selling its data to AI companies, I was wondering what the situation is for the fediverse. Typically you can block AI crawlers using robots.txt (The Verge reported on it recently: https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders). But this only works per domain/server, and the fediverse is about many different servers interacting with each other.

So if my kbin/lemmy or Mastodon server blocks OpenAI’s crawler via robots.txt, what does that actually accomplish when people on other servers that don’t block the crawler boost my posts on Mastodon, or when I reply to theirs? I suspect that unless every server I interact with blocks the same AI crawlers, I cannot prevent my posts from being used as AI training data.
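(For context, the per-domain blocking mentioned above works by listing crawler user agents in a robots.txt file served at the domain root. A minimal sketch — the crawler names are the publicly documented ones for OpenAI, Common Crawl, and Google's AI training opt-out, but any real list would need to be kept current:)

```text
# robots.txt — served at https://your-instance.example/robots.txt
# Disallow known AI training crawlers (illustrative, not exhaustive)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is purely advisory: compliant crawlers honor it, but nothing technically stops a scraper from ignoring it.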

  • Admiral Patrick@dubvee.org · 10 months ago

    Really, there’s only one way to prevent that, and even it offers no guarantees: the instance with the weakest security in the group would still let your posts be crawled.

    It would require an agreement among instances to block crawler bot traffic (by user agent, known IPs, etc.) and to federate, via allow lists, only with instances that adhere to the agreement. At that point it’s more of a federated private forum, but there would still be some benefit, I guess.
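    (As a sketch of what the user-agent blocking half of such an agreement might look like, assuming each instance runs behind an nginx reverse proxy — the crawler names are illustrative and the `map` block has to live in the `http` context of the config:)

    ```nginx
    # Flag requests from known AI crawler user agents (illustrative list)
    map $http_user_agent $is_ai_crawler {
        default        0;
        ~*GPTBot       1;
        ~*CCBot        1;
        ~*ClaudeBot    1;
    }

    server {
        listen 443 ssl;
        server_name instance.example;

        # Refuse flagged crawler traffic outright
        if ($is_ai_crawler) {
            return 403;
        }

        location / {
            proxy_pass http://fediverse_backend;
        }
    }
    ```

    Unlike robots.txt, this actively rejects the requests rather than asking nicely, but it still only catches crawlers that identify themselves honestly in the User-Agent header.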