cross-posted from: https://lemmy.ml/post/5400607

This is a classic case of tragedy of the commons, where a common resource is harmed by the profit interests of individuals. The traditional example of this is a public field that cattle can graze upon. Without any limits, individual cattle owners have an incentive to overgraze the land, destroying its value to everybody.

We have commons on the internet, too. Despite all of its toxic corners, it is still full of vibrant portions that serve the public good — places like Wikipedia and Reddit forums, where volunteers often share knowledge in good faith and work hard to keep bad actors at bay.

But these commons are now being overgrazed by rapacious tech companies that seek to feed all of the human wisdom, expertise, humor, anecdotes and advice they find in these places into their for-profit A.I. systems.

  • FaceDeer@kbin.social · 1 year ago

    The code that AI produces isn’t “copied” from those original authors, though. The AI learned how to code from them, it isn’t literally copying and pasting from them.

    If you think a bit of code is “really from” XYZ open-source project, that’s a copyright violation and you can pursue that legally. But you’ll need to actually show that the code is a copy.

    • NeoNachtwaechter@lemmy.world · 1 year ago

      The copyright violation happened when the code was fed into that AI’s greedy gullet, not when it came out of its rear end.

      • FaceDeer@kbin.social · 1 year ago

        That remains to be tested, legally speaking, and I don’t think it’s likely to pass muster. If the model was trained correctly (i.e., without overfitting), it does not contain a copy of the training inputs in any identifiable sense.

        • NeoNachtwaechter@lemmy.world · 1 year ago

          Yes, the laws are probably muddy in the USA as usual, but they’re rather clear here in the EU. Legal proceedings are slow, though, and Big Tech is making haste with its feeding.

          • FaceDeer@kbin.social · 1 year ago

            There are many jurisdictions beyond the US and EU; Japan in particular has been very vocal about going all-in on allowing AI training. And I wouldn’t say the EU’s laws are “clear” until they are actually tested.