Additional Resources
Leaders in preventing nefarious uses of digital human AI and similar technologies include:
The Human Artistry Campaign: Outlines several core principles for the responsible use of artificial intelligence (AI) in creative fields. These principles emphasize the importance of human creativity and the need for AI to complement, rather than replace, human artistic expression.
DeepTrace Labs: Specializing in deepfake detection, DeepTrace Labs develops solutions to identify and mitigate the impact of malicious AI-generated content.
Microsoft: With initiatives like the AI for Good program, Microsoft works on developing technologies and policies to ensure the ethical use of AI, including efforts to detect and prevent deepfakes and other harmful uses of digital human technologies.
Google: Google is involved in research and development of AI ethics and safety, creating tools to detect manipulated media and partnering with organizations to promote responsible AI use.
IBM: Through its AI Ethics initiative, IBM focuses on creating frameworks and tools for the ethical development and deployment of AI technologies, including measures to prevent misuse.
DARPA (Defense Advanced Research Projects Agency): This U.S. government agency funds research and development projects aimed at countering deepfakes and other malicious AI-generated content through its Media Forensics (MediFor) program.
Partnership on AI: An alliance of major tech companies and academic institutions, including Amazon, Apple, Facebook, Google, Microsoft, and IBM, this organization promotes the responsible and ethical development and use of AI technologies.
European Union (EU): The EU has been proactive in establishing regulations and guidelines to ensure the ethical use of AI, including initiatives to address the risks posed by deepfakes and other malicious AI applications.
MIT Media Lab: Through various projects and research initiatives, MIT Media Lab focuses on developing technologies and frameworks to detect and prevent the misuse of AI-generated content.
Coalition for Content Provenance and Authenticity (C2PA): This group, formed by Adobe, Microsoft, and other partners (including those participating in this project), is creating standards for content authenticity and provenance to combat misinformation and deepfakes.
Together, these organizations are pursuing technological, regulatory, and ethical solutions to address the potential harms of digital human AI and ensure its responsible use.