Note: This information is not intended to be comprehensive, and it will change over time. Consult a qualified attorney for current information and advice.
It is vital to consider the complex legal and ethical implications surrounding the use of digital replicas and of AI in content generation. From protecting performers' rights to ensuring transparency and consent, navigating the challenges posed by this technology requires a nuanced understanding of both current laws and emerging regulations. This page offers insights into the critical considerations for responsible and ethical use of AI-driven creations, emphasizing the importance of balancing innovation with respect for individual rights.
The European Union (EU) has been proactive in addressing the implications of artificial intelligence (AI) through a comprehensive regulatory framework. Key laws and regulations include:
The Artificial Intelligence Act (AI Act):
Proposed in April 2021, the AI Act is one of the world's first legal frameworks aimed specifically at AI. It categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The regulation prohibits AI applications deemed to pose an unacceptable risk, such as social scoring by governments, while imposing stringent requirements on high-risk AI systems, like those used in critical infrastructure or biometric identification.
The AI Act also mandates transparency for AI systems interacting with humans, ensuring users are aware when they are interacting with AI. After lengthy negotiations, the regulation was formally adopted in 2024 and entered into force in August 2024, with its obligations taking effect in phases over the following years (Right of Publicity Roadmap).
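To make the transparency obligation concrete, the sketch below shows one way a service might disclose AI interaction to users. It is a minimal illustration under assumed requirements, not compliance advice; the `ChatResponse` wrapper, notice wording, and field names are all hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical notice text; actual wording would be set by counsel.
AI_NOTICE = "You are interacting with an AI system, not a human."

@dataclass
class ChatResponse:
    text: str                    # model output shown to the user
    ai_generated: bool = True    # machine-readable flag for downstream clients
    disclosure: str = AI_NOTICE  # human-readable notice rendered in the UI
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def respond(model_output: str) -> ChatResponse:
    """Attach the AI-interaction disclosure to every model reply."""
    return ChatResponse(text=model_output)

print(respond("Hello! How can I help?").disclosure)
```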
General Data Protection Regulation (GDPR):
Although not AI-specific, the GDPR, which took effect in 2018, plays a significant role in regulating AI by setting strict rules on data privacy and protection. AI systems that process personal data must comply with GDPR requirements, ensuring that individuals' data is handled lawfully, transparently, and for specified purposes. The GDPR also restricts solely automated decision-making that produces legal or similarly significant effects, giving individuals the right to meaningful information about the logic involved, to human intervention, and to contest such decisions (Right of Publicity Roadmap, Terms.law).
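As a rough illustration of how those rights might translate into system design, the sketch below records each automated decision together with a plain-language summary of the logic involved, and exposes a hook for human review when a decision is contested. The class and field names are hypothetical; a real implementation would require legal review.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AutomatedDecision:
    subject_id: str                       # data subject the decision concerns
    outcome: str                          # e.g. "application_declined"
    logic_summary: str                    # plain-language explanation of the logic
    contested: bool = False
    human_reviewer: Optional[str] = None  # filled in once a human re-examines it

class DecisionLog:
    """Hypothetical store supporting contest and human review of decisions."""

    def __init__(self) -> None:
        self._decisions: list[AutomatedDecision] = []

    def record(self, decision: AutomatedDecision) -> None:
        self._decisions.append(decision)

    def contest(self, subject_id: str, reviewer: str) -> list[AutomatedDecision]:
        """Flag all of a subject's decisions for human intervention."""
        flagged = [d for d in self._decisions if d.subject_id == subject_id]
        for d in flagged:
            d.contested = True
            d.human_reviewer = reviewer
        return flagged
```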
Ethics Guidelines for Trustworthy AI:
Published by the EU's High-Level Expert Group on AI in 2019, these guidelines emphasize that AI systems should be lawful, ethical, and robust. They outline seven key requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. While not legally binding, these guidelines have strongly influenced EU policy and corporate practices across the bloc (Federal Trade Commission).
The Digital Services Act (DSA) and Digital Markets Act (DMA):
Enacted in 2022, these acts aim to create a safer digital space where users' fundamental rights are protected and establish a level playing field for businesses. While not AI-specific, they address the responsibilities of online platforms, including transparency obligations for algorithmic decision-making, which is often AI-driven. The DSA, in particular, imposes requirements on large platforms to explain and audit their AI-based content moderation systems (Terms.law).
The European Data Strategy:
Launched in 2020, the strategy outlines plans to make the EU a leader in a data-driven society. It includes initiatives to create a single market for data that will allow data to flow freely across the EU, while ensuring high standards for data protection, security, and ethical AI. The strategy supports the development of a robust data infrastructure and governance framework that aligns with the principles of human-centric AI (Federal Trade Commission).
In the past three years, several legal updates in the United States have been enacted or proposed that significantly impact performers, particularly concerning their digital replicas and AI-generated or modified content.
Draft Legislation on Digital Replicas: The U.S. Copyright Office recommended new federal legislation to address unauthorized digital replicas, emphasizing the need to protect both living and deceased performers. This proposed law would prohibit the distribution of unauthorized digital replicas and require that any licensing of digital replicas be closely regulated, including limits on the duration and scope of such licenses. The recommendations also include protections against the exploitation of minors and considerations for First Amendment rights (Right of Publicity Roadmap, Terms.law).
State-Level Right of Publicity Laws: Various states, including California and New York, have proposed or enacted updates to their right of publicity laws to address the rise of digital replicas. These laws generally focus on protecting a person's likeness from unauthorized commercial use, extending protections to digital and AI-generated replicas. Some proposed bills, however, have sparked controversy due to provisions that might allow the use of deceased performers' likenesses, potentially to the detriment of living performers (Right of Publicity Roadmap).
Federal Trade Commission (FTC) Rule on AI-Generated Content: The FTC recently finalized a rule aimed at combating fake reviews and testimonials, including those generated by AI. This rule empowers the FTC to seek civil penalties against violators, underscoring the growing concern over AI-generated content's role in deceptive practices. Although primarily focused on consumer protection, this rule intersects with concerns about AI-generated content that could affect performers by creating or manipulating endorsements without their consent (Federal Trade Commission).
Companies utilizing digital human technology should consider a range of ethical considerations to ensure responsible and fair use. Here are key aspects to keep in mind:
Consent: Obtain explicit and informed consent from individuals whose likeness, voice, or personal data are being used. Ensure they understand how their digital representations will be utilized.
Transparency: Clearly communicate to users and stakeholders how the technology works, its intended use cases, and any potential risks. Be transparent about data collection, usage, and storage practices.
Control: Provide individuals with control over their digital representations, including the ability to review, modify, and revoke consent for the use of their digital likeness (a minimal consent-record sketch follows this list).
Privacy: Implement robust data protection measures to safeguard personal data from unauthorized access, use, or disclosure. Ensure compliance with relevant data privacy regulations, such as GDPR or CCPA.
Accuracy and Fairness: Ensure that digital replicas are accurate and do not misrepresent or harm the individuals they depict. Avoid biases in AI models that could lead to unfair or discriminatory outcomes.
Accountability: Establish clear accountability and governance frameworks to oversee the ethical use of digital human technology. Ensure there are mechanisms in place to address any misuse or ethical breaches.
Compensation: Fairly compensate individuals for the use of their digital likeness, voice, or other attributes. Ensure that compensation agreements are transparent and equitable.
Security: Implement strong cybersecurity measures to protect digital replicas and associated data from hacking, tampering, or misuse.
Misinformation and Misuse: Take proactive steps to prevent the technology from being used to create deepfakes, spread misinformation, or engage in other malicious activities. Develop and deploy detection tools to identify and counteract such uses.
Cultural Sensitivity: Be mindful of cultural differences and sensitivities when using digital human technology. Avoid creating content that could be offensive or culturally inappropriate.
Regulatory Compliance: Stay informed about and comply with relevant laws, regulations, and industry standards governing the use of AI and digital human technology.
Ethical AI Development: Follow ethical AI development practices, including conducting thorough impact assessments, involving diverse stakeholders in the development process, and continuously monitoring and improving the technology to address ethical concerns.
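To show how the consent, control, and compensation items above might be tracked in practice, here is a minimal sketch of a consent record with an explicit scope, expiry, and revocation path. Every name in it is hypothetical; it stands in for what would in reality be a legally reviewed consent-management system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class LikenessConsent:
    performer_id: str
    uses: list[str]                        # granted scope, e.g. ["promo_video", "voice_clone"]
    granted_at: datetime                   # timezone-aware datetimes expected
    expires_at: datetime                   # licenses should be time-limited
    revoked_at: Optional[datetime] = None

    @property
    def active(self) -> bool:
        now = datetime.now(timezone.utc)
        return self.revoked_at is None and now < self.expires_at

    def permits(self, use: str) -> bool:
        """Check a specific use against the granted scope."""
        return self.active and use in self.uses

    def revoke(self) -> None:
        """Performer withdraws consent; all further uses must stop."""
        self.revoked_at = datetime.now(timezone.utc)
```

A production system would also record the compensation terms tied to each grant and keep an audit trail of every access, in line with the accountability item above.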
Several other regional and international laws and frameworks, within and beyond the EU and the US, are also notable. These include:
United States: AI Regulation and State-Level Laws
While the U.S. does not yet have a comprehensive federal AI regulation similar to the EU's AI Act, several state-level laws impact AI usage. For example, the California Consumer Privacy Act (CCPA) and the newer California Privacy Rights Act (CPRA) regulate how personal data, which can be used in AI models, is collected and processed. Additionally, the U.S. Copyright Office and the Federal Trade Commission (FTC) have become increasingly involved in issues related to AI-generated content, such as copyright and consumer protection (Terms.law, Federal Trade Commission).
Canada: The Artificial Intelligence and Data Act (AIDA)
Proposed as part of Canada's Digital Charter Implementation Act, AIDA aims to regulate AI systems, focusing on high-impact AI technologies that pose risks to individuals or society. The act would establish rules for the responsible use of AI and data, ensuring transparency, accountability, and fairness in AI systems used within Canada (Terms.law).
China: AI Regulations and Ethical Guidelines
China has been rapidly developing its regulatory framework for AI. The New Generation Artificial Intelligence Development Plan (AIDP) and subsequent guidelines emphasize the ethical use of AI, national security, and social governance. China's deep synthesis provisions, issued in 2022 and effective in January 2023, govern deepfake technology and AI-generated content, requiring clear labeling of synthetic media and banning the use of such technologies for malicious purposes, such as spreading false information (Right of Publicity Roadmap).
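Labeling mandates like these are commonly met by attaching both a visible notice and machine-readable provenance metadata to generated media. The sketch below is a simplified, hypothetical illustration that ties the metadata to the asset with a content hash; real deployments increasingly rely on standards such as C2PA content credentials.

```python
import hashlib
import json
from datetime import datetime, timezone

VISIBLE_LABEL = "AI-generated content"  # required wording varies by jurisdiction

def label_generated_media(media_bytes: bytes, generator: str) -> dict:
    """Build machine-readable provenance metadata for a generated asset."""
    return {
        "label": VISIBLE_LABEL,
        "generator": generator,  # tool or model that produced the asset
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds metadata to content
        "created": datetime.now(timezone.utc).isoformat(),
    }

metadata = label_generated_media(b"<rendered video bytes>", generator="example-model-v1")
print(json.dumps(metadata, indent=2))
```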
International: OECD AI Principles
The Organisation for Economic Co-operation and Development (OECD) adopted AI principles in 2019, which are among the first intergovernmental standards on AI. These principles focus on ensuring AI is used in a manner that is fair, transparent, and respects human rights and democratic values. The OECD principles have influenced policies in member countries and are recognized as an important international standard (Terms.law).
UNESCO: Recommendation on the Ethics of Artificial Intelligence
Adopted in 2021, UNESCO's recommendation sets global standards for the ethical use of AI. It emphasizes human rights, sustainability, and inclusivity in AI development and deployment. UNESCO's recommendation encourages member states to adopt national policies and legislation that align with these ethical standards (Right of Publicity Roadmap, Federal Trade Commission).
Japan: AI Strategy and Ethical Guidelines
Japan's Social Principles of Human-Centric AI and subsequent AI Strategy 2021 outline the country's approach to AI governance, focusing on the ethical use of AI technologies. These guidelines emphasize respect for human dignity, fairness, transparency, and collaboration between the public and private sectors (Right of Publicity Roadmap).