Legal & Ethical
Note: This information is not intended to be comprehensive and it is expected to change over time. Work with a qualified attorney for current information and/or advice.
It is vital to consider the complex legal and ethical implications surrounding the use of digital replicas and of AI in content generation. From protecting performers' rights to ensuring transparency and consent, navigating the challenges posed by these technologies requires a nuanced understanding of both current laws and emerging regulations. This page offers insights into the critical considerations for responsible, ethical use of AI-driven creations, emphasizing the importance of balancing innovation with respect for individual rights.
European Union
The European Union (EU) has been proactive in addressing the implications of artificial intelligence (AI) through a comprehensive regulatory framework. Key laws and regulations include:
The Artificial Intelligence Act (AI Act):
Proposed in April 2021 and formally adopted in 2024, the AI Act is one of the world's first legal frameworks aimed specifically at AI. It categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The regulation prohibits AI applications deemed to pose an unacceptable risk, such as social scoring by governments, while imposing stringent requirements on high-risk AI systems, like those used in critical infrastructure or biometric identification.
The AI Act also mandates transparency for AI systems that interact with humans, ensuring users are aware when they are dealing with AI. The Act entered into force in August 2024, with its obligations taking effect in phases over the following years.
General Data Protection Regulation (GDPR):
Although not AI-specific, the GDPR, which took effect in 2018, plays a significant role in regulating AI by setting strict rules on data privacy and protection. AI systems that process personal data must comply with GDPR requirements, ensuring that individuals' data is handled lawfully, transparently, and for specified purposes. The GDPR also gives individuals rights regarding solely automated decision-making, including the right to obtain meaningful information about the logic involved, to request human intervention, and to contest such decisions.
Ethics Guidelines for Trustworthy AI:
Published by the European Commission's High-Level Expert Group on AI in 2019, these guidelines emphasize that AI systems should be lawful, ethical, and robust. They outline seven key requirements for trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. While not legally binding, these guidelines have strongly influenced EU policy and corporate practices across the bloc.
The Digital Services Act (DSA) and Digital Markets Act (DMA):
Enacted in 2022, these acts aim to create a safer digital space in which users' fundamental rights are protected and to establish a level playing field for businesses. While not AI-specific, they address the responsibilities of online platforms, including transparency obligations for algorithmic decision-making, which is often AI-driven. The DSA, in particular, imposes requirements on very large online platforms to explain and audit their AI-based content moderation systems.
The European Data Strategy:
Launched in 2020, the strategy outlines plans to make the EU a leader in a data-driven society. It includes initiatives to create a single market for data that will allow data to flow freely across the EU, while ensuring high standards for data protection, security, and ethical AI. The strategy supports the development of a robust data infrastructure and governance framework that aligns with the principles of human-centric AI.
United States
While the U.S. does not yet have a comprehensive federal AI regulation similar to the EU’s AI Act, several state-level laws affect AI usage. For example, California’s Consumer Privacy Act (CCPA) and the newer California Privacy Rights Act (CPRA) regulate how personal data, which can be used to train AI models, is collected and processed. Additionally, the U.S. Copyright Office and the Federal Trade Commission (FTC) have become increasingly involved in issues related to AI-generated content, such as copyright and consumer protection.
Canada
Proposed as part of Canada's Digital Charter Implementation Act, the Artificial Intelligence and Data Act (AIDA) aims to regulate AI systems, focusing on high-impact AI technologies that pose risks to individuals or society. The act would establish rules for the responsible use of AI and data, ensuring transparency, accountability, and fairness in AI systems used within Canada.
China
China has been rapidly developing its regulatory framework for AI. The New Generation Artificial Intelligence Development Plan (AIDP) and subsequent guidelines emphasize the ethical use of AI, national security, and social governance. China has also implemented rules governing deepfake technology and AI-generated content (the deep synthesis provisions, in force since January 2023), requiring clear labeling and banning the use of such technologies for malicious purposes, such as spreading false information.
OECD AI Principles
The Organisation for Economic Co-operation and Development (OECD) adopted its AI Principles in 2019, among the first intergovernmental standards on AI. These principles focus on ensuring AI is used in a manner that is fair, transparent, and respectful of human rights and democratic values. They have influenced policies in member countries and are recognized as an important international standard.
UNESCO Recommendation on the Ethics of Artificial Intelligence
Adopted in 2021, UNESCO's recommendation sets global standards for the ethical use of AI. It emphasizes human rights, sustainability, and inclusivity in AI development and deployment, and encourages member states to adopt national policies and legislation that align with these ethical standards.
Japan
Japan's Social Principles of Human-Centric AI and its subsequent AI Strategy 2021 outline the country's approach to AI governance, focusing on the ethical use of AI technologies. These guidelines emphasize respect for human dignity, fairness, transparency, and collaboration between the public and private sectors.