C2PA: Another good idea subject to abuse
There are some new digital content standards under development with the best of intentions (if you are not a pessimist like me), and, like anything else, they may open the door to even more centralized control and other significant negative impacts.
C2PA is short for Coalition for Content Provenance and Authenticity, a development project involving such organizations as Adobe, Arm, Intel, Microsoft, and Truepic. According to its website, the coalition “addresses the prevalence of misleading information online through the development of technical standards for certifying the source and history (or provenance) of media content.”
Specifically, the intent is to digitally stamp written, video, photographic, and musical content with encryption-protected information about the author, location, the source machine, and other data. Any subsequent editions of the content would have added information detailing how the content has been changed, by whom, and when. A video on the C2PA homepage describes the process in detail.
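The core idea described above, binding author and edit information to content in a tamper-evident way, can be illustrated with a toy sketch. This is not the actual C2PA format, which is far more elaborate and built on public-key certificates; the HMAC below is a deliberate simplification standing in for a real digital signature, and all names here (`sign_manifest`, `verify`, the demo key) are hypothetical.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a real private signing key


def sign_manifest(content: bytes, author: str, parent_sig=None) -> dict:
    """Build a provenance manifest binding author info to a content hash."""
    manifest = {
        "author": author,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "parent": parent_sig,  # links an edited version back to its source
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest


def verify(content: bytes, manifest: dict) -> bool:
    """Recompute hash and signature; any alteration invalidates both."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    good_sig = hmac.compare_digest(
        manifest["signature"],
        hmac.new(SECRET, payload, hashlib.sha256).hexdigest(),
    )
    same_hash = claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    return good_sig and same_hash


original = b"It was a dark and stormy night."
m1 = sign_manifest(original, "Original Author")

edited = b"It was a bright and sunny day."
m2 = sign_manifest(edited, "Editor", parent_sig=m1["signature"])

print(verify(original, m1))  # original checks out against its manifest
print(verify(edited, m2))    # edit checks out, and m2 points back to m1
print(verify(edited, m1))    # mismatch: edit does not match the original manifest
```

The `parent` field is what gives each subsequent edition a link back through the chain of changes, which is the behavior the C2PA video describes.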
As a published author, I like the idea of my content being digitally marked in some way to ensure that I am always credited for my work and that I can prove my ownership in case of any piracy or if my work is purposely misused in some way. I don’t know how this system works in the case of quoting from someone else’s work (think copy and paste), but I assume that this is not an especially difficult obstacle. For someone who retypes the entire quote, however, the C2PA process would simply stamp a new document with the latest author’s information.
The video points out that this optional added security assures that “…downstream processes and the consuming public can feel confident that the content is unaltered and it is from who it says it is from.”
However, like anything else that is good, this has potential for abuse.
Let’s start with C2PA’s stated goal to address “…the prevalence of misleading information online…” What exactly is misleading information? Who gets to decide what is misleading versus what is simply a legitimate alternative perspective? What protections are there for an author who does not wish to be known? How optional will this added security really be? And finally, what presumptive actions will some organization take to correct alleged misinformation?
If the idea is to let the consumer beware and take appropriate personal action, this could work nicely. For example, if consumers read the C2PA information in a document and learn that it is not the original work they require, they could ignore that version and go find the original they need. Very nice.
However, what if the government, corporations, or other entities decide to search for this misleading information and take some form of legal or other action against the author? Remember, this isn’t about crime…this is about alternative or even unpopular views, simple mistaken opinions, and other issues traditionally equated with free speech and protected by the First Amendment. Lies that actually harm someone are already prosecutable; at the very least, then, C2PA would facilitate the identification and “cancellation” of an individual or entity even when no laws were actually broken.
What about hackers? I believe that, as with the hacking of the Internet of Things, this C2PA data will eventually get hacked. I can imagine being in court to prove I wrote my novel, as opposed to the thief whose C2PA data shows he authored it years before I did. I know I could probably prove my case, but consider the frustration, expense, and time it would likely take.
As with all digital constructs, a cybersecurity answer for C2PA would be required, and this in a global industry that is already short some 3.4 million security experts. In the U.S. alone, there is a shortage of hundreds of thousands of such experts. Nonetheless, some of this effort would have to focus on keeping C2PA legitimate.
Ultimately, given the goal of focusing on poorly defined “misleading” information, the high potential for abuse, and the cost of keeping it all under control, I still believe the C2PA concept could be beneficial. However, I will add that it also requires much more analysis and thoughtful design to prevent obvious abuses before it officially launches into cyberspace.