Discussion
The Nexus of Privacy
@thenexusofprivacy@infosec.exchange · 2 days ago
Introducing CC Signals: A New Social Contract for the Age of AI, from @creativecommons:

"CC signals are a proposed framework to help content stewards express how they want their works used in AI training—emphasizing reciprocity, recognition, and sustainability in machine reuse. They aim to preserve open knowledge by encouraging responsible AI behavior without limiting innovation."

Note that the proposed "social contract" does not include a signal for content providers to say they don't want their work used in AI training. What's with that?

(I was going to file an issue on it, but as usual @mcc is ahead of me. Here's the link again ... if you also think this is a problem it might be useful to upvote it and perhaps add a comment.)

Hilariously, their list of concepts that the project "draws inspiration" from starts with "consent". Indeed, the full report (PDF) says that "We believe creator consent is a core value of and a key component to a new social contract." I believe that too; I just don't see how the actual proposal aligns with that.

UPDATE: discussion on GitHub suggests that this proposal builds on an IETF proposal, which does have a way to specify this -- but takes no position on the default if it's not specified. So that's somewhat better, but it's still opt-out -- and opt-out isn't real consent. More details in the reply at https://infosec.exchange/@thenexusofprivacy/114748419834505739

#consent #AI #creativeCommons

The Nexus of Privacy
@thenexusofprivacy@infosec.exchange · 2 days ago

At any rate, here are the signals in the @creativecommons CC Signals proposal (a rough sketch of how one might look on the wire follows the list). My initial reaction is that these seem reasonable enough as far as they go ... it's just that they don't include the ability to say "no".

  • Credit: You must give appropriate credit based on the method, means, and context of your use.

  • Direct Contribution: You must provide monetary or in-kind support to the Declaring Party for their development and maintenance of the assets, based on a good faith valuation taking into account your use of the assets and your financial means.

  • Ecosystem Contribution: You must provide monetary or in-kind support back to the ecosystem from which you are benefiting, based on a good faith valuation taking into account your use of the assets and your financial means.

  • Open: The AI system used must be open. For example, AI systems must satisfy the Model Openness Framework (MOF) Class II, MOF Class I, or the Open Source AI Definition (OSAID).
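
Purely to illustrate the mechanics (my sketch, not syntax from the proposal -- the "CC-Signal" header name and its value are invented here), a site attaching one of these signals to its HTTP responses might look something like this:

    # Hypothetical sketch: the proposal builds on HTTP headers / robots.txt,
    # but the "CC-Signal" header name and value below are invented.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SignalingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"<html><body>Some creative work</body></html>"
            self.send_response(200)
            # Declare the (hypothetical) "Ecosystem Contribution" signal.
            self.send_header("CC-Signal", "ecosystem-contribution")
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), SignalingHandler).serve_forever()

Note that nothing in this vocabulary lets the handler send "no".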

#consent #AI #creativecommons

The Nexus of Privacy
@thenexusofprivacy@infosec.exchange · 2 days ago

And btw, here's the @creativecommons supporters page, which thanks (among others) Microsoft, Google, the Chan Zuckerberg Foundation, Amazon Web Services, and Mozilla (which has pivoted to become an AI company these days).

Huh.

Not to be cynical or anything, but I wonder if that has anything to do with the reason they don't think content creators should be able to say "no" to #AI?

The Nexus of Privacy
@thenexusofprivacy@infosec.exchange · 2 days ago
@mcc was even farther ahead of me than I initially realized, with another issue report: "Your proposal is fundamentally bad; withdraw it."

"CC rolling out a welcome mat for "AI" trainers— inventing a new type of welcome mat for AI trainers specifically, even— harms the goal of nurturing the commons, and moreover harms the public perception that CC is equipped to advance this goal."

Agreed. I might feel differently about it if the proposal had both

  • a default of "do not use for training" if no signals are present, to establish that the "social contract" is based on consent,

  • and an explicit "do not train" signal to unambiguously signal intent to withhold consent. Without this, unless and until the signals and accompanying social contract are broadly adopted, AI scrapers will be able to argue that it's ambiguous whether consent is being withheld or just not specified (see the sketch after this list).
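
To make that ambiguity concrete, here's a toy sketch (mine, not from the proposal) of how the same missing signal reads under the two models:

    # Toy sketch (not from the proposal): the default changes everything.
    # A signal of None means the publisher said nothing at all.
    def may_train_consent_based(signal):
        # Consent model: only an explicit "yes" permits training.
        return signal == "train-ok"

    def may_train_opt_out(signal):
        # Opt-out model: anything short of an explicit "no" permits training.
        return signal != "do-not-train"

    for signal in ("train-ok", "do-not-train", None):
        print(signal, may_train_consent_based(signal), may_train_opt_out(signal))
        # None -> False under consent, True under opt-out: exactly the gap
        # a scraper can exploit when no default is specified.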

The Nexus of Privacy
@thenexusofprivacy@infosec.exchange · 2 days ago

A reply in the GitHub thread suggests that (based on the implementation section in the README) CC Signals is a refinement of an IETF design to convey opt-in/opt-out via robots.txt/HTTP headers. "That is, the IETF proposal allows you to give a blanket yes/no, and CC signals then allows you to give exceptions to the blanket statement for uses that flow back into the commons."

If so, that makes the situation somewhat better in that there is at least the ability to opt out. But even so, it's opt-out -- not real consent; the IETF Vocabulary For Expressing AI Usage Preferences says (in section 7) "This document takes no position on what default might be chosen as that will depend on policy constraints beyond the scope of this specification."
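
For a concrete picture of "blanket yes/no, with CC signals as exceptions", here's a rough sketch (the "AI-Usage" robots.txt directive below is invented for illustration; the real attachment syntax is in the IETF drafts):

    # Rough sketch only: "AI-Usage" is an invented directive name; the real
    # attachment mechanism is specified in the IETF aipref drafts.
    import urllib.request

    def fetch_ai_preference(site):
        """Read a hypothetical blanket AI-usage preference from robots.txt."""
        with urllib.request.urlopen(f"{site}/robots.txt") as resp:
            for raw in resp.read().decode("utf-8", "replace").splitlines():
                key, _, value = raw.partition(":")
                if key.strip().lower() == "ai-usage":   # invented directive
                    return value.strip().lower()        # e.g. "n" = opt out
        return None  # nothing stated -- and the IETF draft sets no default

    # A CC signal would then carve exceptions out of the blanket statement,
    # e.g. "no AI use, except uses that credit and contribute back."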

And the announcement post and the context and considerations page don't even mention the IETF proposal. The implementation page, the README, and the full report do briefly mention the intention to build on the IETF's work and provide links, but don't appear to mention the option of opting out of any AI usage. So they clearly need to do some documentation work!
