
"CC signals are a proposed framework to help content stewards express how they want their works used in AI training—emphasizing reciprocity, recognition, and sustainability in machine reuse. They aim to preserve open knowledge by encouraging responsible AI behavior without limiting innovation."
Note that the proposed "social contract" does not include a signal for content providers to say they don't want their work used in AI training at all. What's with that?
(I was going to file an issue on it but as usual @mcc is ahead of me. Here's the link again ... if you also think this is a problem it might be useful to upvote it and perhaps add a comment.)
Hilariously, their list of concepts that the project "draws inspiration" from starts with "consent". Indeed, the full report (PDF) says that "We believe creator consent is a core value of and a key component to a new social contract." I believe that too; I just don't see how the actual proposal aligns with it.
UPDATE: discussion on GitHub suggests that this proposal builds on an IETF proposal, which does have a way to specify this -- but it takes no position on what the default should be when no preference is given. So that's somewhat better, but it's still opt-out -- and opt-out isn't real consent. More details in the reply at https://infosec.exchange/@thenexusofprivacy/114748419834505739
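
To make the default problem concrete, here's a minimal sketch of how an opt-out regime plays out in code. The signal name and values are entirely hypothetical (the real vocabulary is whatever the IETF draft defines); the point is what happens when a steward has said nothing:

```python
# Hypothetical signal lookup: "ai-training" is an illustrative name,
# not the IETF draft's actual vocabulary.
def may_train_on(signals: dict) -> bool:
    preference = signals.get("ai-training")  # None if the steward set nothing
    if preference == "no":
        return False  # an explicit opt-out is honored
    # Under an opt-out regime, silence is treated as permission --
    # works whose stewards never heard of the signal are fair game.
    return True

print(may_train_on({}))                      # True: no preference = trainable
print(may_train_on({"ai-training": "no"}))   # False: only an explicit "no" blocks it
```

An opt-in (consent-based) regime would flip that last branch: `return preference == "yes"`, so silence means no permission. That one-line difference is the whole disagreement about defaults.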