In the throes of the AI revolution, we find ourselves facing dilemmas that are as much ethical as they are technological. Among the most pressing is the unease of creative professionals — authors, musicians, actors — who fear their livelihoods are threatened by AI’s growing proficiency in content creation. Their worry is well founded, as the article “‘Bargaining for our very existence’: why the battle over AI is being fought in Hollywood” makes clear.
From AI that can write compelling prose to those that can generate lifelike deep fake videos, we are living in an age where the traditional boundaries of creativity are being blurred. But could these same technologies that incite fear also hold the key to an equitable future for creatives?
In this article, we look at ways in which blockchain and AI technologies can help.
Where do DAOs play a role?
The recent surge in interest around decentralised technologies, in particular DAOs (Decentralised Autonomous Organisations) and Web3 infrastructure, provides a fresh perspective on this issue. A DAO, governed by smart contracts on a blockchain, could serve as an impartial, transparent system for attribution and reward distribution among contributors.
Imagine a futuristic version of a royalty collection society, but one that is neither centralised nor opaque in its operations. Instead, it’s a transparent, auditable, and democratic system: every transaction and decision would be recorded on the blockchain, visible to all, and modifiable only if the collective agrees.
In this DAO, the contributors — authors, musicians, artists, and even AI programmers — would have their works protected and their contributions acknowledged. The AI’s outputs would also be monitored, and their inspirations traced back as accurately as possible. If a creator’s work is identified in an AI’s output, the smart contract could automatically trigger a royalty payment, ensuring swift and fair compensation.
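To make the idea concrete, here is a minimal sketch of the settlement logic such a smart contract might encode, written as plain Python rather than on-chain code. Everything here is illustrative: the `RoyaltyDAO` and `Contributor` names, the wallet addresses, and the attribution scores are hypothetical stand-ins for whatever provenance audit the DAO actually relies on.

```python
# Hypothetical sketch of a DAO routing royalties when a creator's work is
# identified in an AI's output. Names and figures are illustrative only;
# a real implementation would live in an on-chain smart contract.
from dataclasses import dataclass, field


@dataclass
class Contributor:
    address: str          # on-chain wallet address
    balance: float = 0.0  # royalties accumulated so far


@dataclass
class RoyaltyDAO:
    contributors: dict = field(default_factory=dict)

    def register(self, name: str, address: str) -> None:
        self.contributors[name] = Contributor(address)

    def settle(self, revenue: float, attribution: dict) -> None:
        """Split revenue in proportion to attribution scores.

        `attribution` maps contributor name -> the share of an AI output
        traced back to that contributor by some provenance audit.
        """
        total = sum(attribution.values())
        for name, score in attribution.items():
            self.contributors[name].balance += revenue * score / total


dao = RoyaltyDAO()
dao.register("alice", "0xA1")
dao.register("bob", "0xB2")
# Suppose an audit attributes 75% of an output to alice, 25% to bob.
dao.settle(revenue=100.0, attribution={"alice": 0.75, "bob": 0.25})
print(dao.contributors["alice"].balance)  # 75.0
```

The point of the sketch is the trigger: once an attribution score exists, the payout follows mechanically and auditably, with no intermediary deciding who gets paid.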
Web3 infrastructure further enhances the possibilities of this DAO approach by offering a decentralised and interoperable network where these interactions and transactions can happen seamlessly. It brings the promise of a fair, open internet where creators are directly rewarded for their work, and consumers can access and interact with content in a more direct, peer-to-peer manner.
Nevertheless, the application of DAOs to tackle this complex issue is in its infancy. There are numerous legal, technical, and ethical hurdles to overcome. But the promise is there — an impartial, decentralised solution to one of the most pressing problems of the AI age.
The role of Zero-Knowledge Proofs
In the context of large language models (LLMs), however, things get complicated. Blockchain technology, specifically zero-knowledge proofs (ZKPs), offers a promising method for auditing changes in data without revealing the underlying data itself. This could prove invaluable for verifying the authenticity of inputs to AI models and maintaining the integrity of the process. Yet LLMs do not simply copy their inputs; they learn and extract patterns, transforming this knowledge to generate outputs. This makes it challenging, if not impossible, to map a piece of input data directly to a specific output. Nevertheless, the potential of ZKPs in this context should not be disregarded.
Despite the inherent difficulties, ZKPs can serve as a powerful tool for tracking how data is used in the training of LLMs. This cryptographic principle allows for the confirmation of a claim about specific information, such as data input into an LLM, without revealing the data itself. This could provide a form of tagging system, albeit one that operates differently from what we might intuitively imagine.
Through ZKPs, changes — or the lack thereof — in the input data can be audited and proven without directly exposing the data. This level of security and privacy is paramount in scenarios dealing with sensitive or proprietary information. Suppose a piece of music, a novel, or a confidential document is part of the input data for the LLM. In that case, ZKPs can provide a robust means of ensuring that the usage of such data can be tracked and proven, while maintaining the necessary confidentiality.
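A simplified illustration of this “commit, then prove” pattern is a Merkle-tree commitment: the trainer publishes a single root hash over the training set, and can later prove that any one item was included without publishing the rest. This is only in the spirit of a ZKP — a true zero-knowledge proof would additionally hide the item being proven, whereas this hash-based sketch hides only the other items. The file names are, of course, toy examples.

```python
# Simplified sketch of committing to a training set and proving membership.
# A Merkle root commits to all items; an inclusion proof shows one item was
# present without revealing the others. (A genuine zero-knowledge proof
# would also hide the item itself; this sketch does not.)
import hashlib


def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]


def inclusion_proof(leaves, index):
    """Collect the sibling hashes needed to recompute the root."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is_left)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof


def verify(leaf, proof, root) -> bool:
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root


docs = [b"novel.txt", b"song.mid", b"memo.pdf", b"poem.txt"]
root = merkle_root(docs)                 # the published commitment
proof = inclusion_proof(docs, 1)         # prove "song.mid" was included
print(verify(b"song.mid", proof, root))  # True
```

Anyone holding only the root can check the proof; nothing about `novel.txt`, `memo.pdf`, or `poem.txt` is disclosed beyond their hashed presence in the tree.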
It’s important to note that while the ZKP can verify whether a certain set of data was included in the model’s training set, it doesn’t reveal how that specific data influenced the model’s output. Therefore, while ZKPs add an important layer of security and accountability in the data utilisation process, the complex nature of how LLMs learn and generate outputs remains a challenge to be addressed in the attribution discussion.
What role does AI itself play in a better solution?
Enter AI once again, this time as a potential solution to its own quandary. By analysing the stylistic and thematic elements of an AI’s output, we may be able to infer the source of its ‘inspiration.’ Although such a method would likely have its limitations — high rates of false positives and negatives, the complexity of style analysis, and potential privacy concerns — it’s a direction worth exploring.
Picture an AI system, akin to the deep learning models currently used for authorship attribution or music genre classification. By training on a wide variety of styles and themes, this AI could then analyse the outputs of an LLM and deduce patterns, rhythms, and stylistic quirks reminiscent of certain inputs. In essence, it would be identifying the ‘fingerprints’ of various sources, even if they were abstracted through the learning process of the LLM.
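One of the simplest versions of this fingerprinting idea is character n-gram profiling, a classic authorship-attribution technique: build a frequency profile of short character sequences for each candidate source and pick the one closest to the output’s profile. The sketch below uses toy texts and deliberately crude cosine similarity; real systems would use far larger corpora and learned models.

```python
# Illustrative sketch of style attribution via character n-gram profiles.
# Texts here are toy examples; this is a crude baseline, not a real system.
from collections import Counter
from math import sqrt


def ngram_profile(text: str, n: int = 3) -> Counter:
    """Frequency profile of overlapping character n-grams."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))


def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a if g in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def attribute(output: str, candidates: dict) -> str:
    """Return the candidate whose stylistic profile best matches the output."""
    profile = ngram_profile(output)
    return max(candidates,
               key=lambda name: cosine_similarity(profile,
                                                  ngram_profile(candidates[name])))


sources = {
    "author_a": "the sea whispered softly to the shore, and the shore whispered back",
    "author_b": "compile the module, link the objects, then run the test suite",
}
generated = "the waves whispered to the sand as the tide whispered back"
print(attribute(generated, sources))  # author_a
```

Even this crude baseline hints at both the promise and the peril: it correctly favours the lyrical source here, but on real LLM outputs, where styles are blended and abstracted, such surface statistics are exactly where the false positives and negatives discussed below creep in.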
However, this method is not without its limitations. For one, the risk of false positives and negatives could be high. Given the large volume and diversity of data that LLMs are trained on, the AI could attribute outputs to incorrect sources, or miss valid connections entirely.
Furthermore, the complexity of style analysis should not be underestimated. Styles are often subtle, subjective, and multifaceted — qualities that can be challenging for AI to accurately grasp. Can an AI truly understand the nuance of a poignant metaphor, the subtle rhythm of a literary cadence, or the emotional resonance of a particular chord progression? Our current technology still has limitations in this respect.
Finally, privacy concerns cannot be overlooked. If an AI is analysing output to infer sources, safeguards must be in place to prevent potential misuse of this capability, especially when dealing with sensitive or confidential inputs. We need to ensure the balance between the pursuit of attribution accuracy and respect for privacy.
Despite these challenges, this approach represents a fascinating convergence of technology and creativity, offering a potential path to reconcile the tension between AI-driven content generation and the rightful attribution to human creators. It might not provide a perfect solution, but it opens a new field of exploration in our quest for fair and accountable AI systems.
But what about the grey area?
At the same time, we must also consider the nature of creativity itself. Artists, whether consciously or not, draw from a broad spectrum of influences to create their work. Musicians learn and borrow from one another, producing remixes and mashups. Is AI-generated content not just another form of this tradition? Rejecting AI because it accelerates the creative process might be akin to dismissing electronic music production tools for their mechanical origins.
Yet, the analogy isn’t perfect. While musicians engage with their influences actively and knowingly, AI models absorb data passively, lacking the contextual understanding and intention that human artists possess. Therein lies a grey area and a compelling argument for a careful, nuanced approach to AI in creative fields.
Some tough questions ahead
Moving forward, we must grapple with some difficult questions. How can we ensure fair compensation for creators whose work contributes to training AI? How much ‘influence’ should AI be allowed to exert on its output, and how should this be regulated? And perhaps most importantly, how can we maintain a human-centric perspective on creativity while leveraging the power of AI?
As we tread into this uncharted territory, the guiding principle should be one of fairness, giving due credit to those whose work paves the way for new technology, while also acknowledging the incredible potential of AI. In the end, it’s not about pitting humans against machines, but rather, finding ways in which the two can coexist and, better yet, mutually benefit.
Therein lies the promise of the Web3 wave — not just as a tool for creating decentralised applications, but as a framework for a future where innovation and tradition coalesce, fostering a space that is as respectful of its roots as it is eager for what’s next.