Legal & Ethical

Note: This information is not intended to be comprehensive, and it is expected to change over time. Consult a qualified attorney for current information and advice.

Using digital replicas and AI in content generation raises complex legal and ethical questions. From protecting performers' rights to ensuring transparency and consent, navigating the challenges posed by this technology requires a nuanced understanding of both current laws and emerging regulations. This page offers insights into the critical considerations for responsible and ethical use of AI-driven creations, emphasizing the importance of balancing innovation with respect for individual rights.

The European Union (EU) has been proactive in addressing the implications of artificial intelligence (AI) through a comprehensive regulatory framework. Key laws and regulations include:

  1. The Artificial Intelligence Act (AI Act):

    • Proposed in April 2021, the AI Act is one of the world's first legal frameworks aimed specifically at AI. It categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. The regulation prohibits AI applications deemed to pose an unacceptable risk, such as social scoring by governments, while imposing stringent requirements on high-risk AI systems, like those used in critical infrastructure or biometric identification.

    • The AI Act also mandates transparency for AI systems that interact with humans, ensuring users know when they are dealing with AI. The regulation was formally adopted in 2024 and entered into force on 1 August 2024, with its obligations applying in phases over the following years (Right of Publicity Roadmap).

  2. General Data Protection Regulation (GDPR):

    • Although not AI-specific, the GDPR, which came into effect in 2018, plays a significant role in regulating AI by setting strict rules on data privacy and protection. AI systems that process personal data must comply with GDPR requirements, ensuring that individuals' data is handled lawfully, transparently, and for specified purposes. The GDPR also gives individuals rights regarding automated decision-making, including meaningful information about the logic involved and the ability to contest such decisions (Right of Publicity Roadmap, Terms.law).

  3. Ethics Guidelines for Trustworthy AI:

    • Published by the European Commission's High-Level Expert Group on AI in 2019, these guidelines emphasize that AI systems should be lawful, ethical, and robust. They outline seven key requirements for AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability. While not legally binding, these guidelines have strongly influenced EU policy and corporate practices across the bloc (Federal Trade Commission).

  4. The Digital Services Act (DSA) and Digital Markets Act (DMA):

    • Enacted in 2022, these acts aim to create a safer digital space in which users' fundamental rights are protected and to establish a level playing field for businesses. While not AI-specific, they address the responsibilities of online platforms, including transparency obligations for algorithmic decision-making, which is often AI-driven. The DSA, in particular, requires large platforms to explain and audit their AI-based content moderation systems (Terms.law).

  5. The European Data Strategy:

    • Launched in 2020, the strategy outlines plans to make the EU a leader in a data-driven society. It includes initiatives to create a single market for data that will allow data to flow freely across the EU, while ensuring high standards for data protection, security, and ethical AI. The strategy supports the development of a robust data infrastructure and governance framework that aligns with the principles of human-centric AI (Federal Trade Commission).
