A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation


Luca Deck, Max-Paul Förster, Raimund Weidlich, and Niklas Kühl
This study reviews existing methods for marking, detecting, and labeling deepfakes to assess their effectiveness under new EU regulations. Based on a multivocal literature review, the paper finds that individual methods are insufficient. Consequently, it proposes a novel multi-level strategy that combines the strengths of existing approaches for more scalable and practical content moderation on online platforms.

Problem

The increasing availability of deepfake technology poses a significant risk to democratic societies by enabling the spread of political disinformation. While the European Union has enacted regulations to enforce transparency, there is a lack of effective industry standards for implementation. This makes it challenging for online platforms to moderate deepfake content at scale, as current individual methods fail to meet regulatory and practical requirements.

Outcome

- Individual methods for marking, detecting, and labeling deepfakes are, on their own, insufficient to meet EU regulatory and practical requirements.
- The study proposes a multi-level strategy that combines the strengths of various methods (e.g., technical detection, trusted sources) to create a more robust and effective moderation process.
- A simple scoring mechanism is introduced to ensure the strategy is scalable and practical for online platforms managing massive amounts of content.
- The proposed framework is designed to be adaptable to new types of deepfake technology and allows for context-specific risk assessment, such as for political communication.
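To make the idea of a simple scoring mechanism concrete, the sketch below combines several moderation signals (provenance marking, detector confidence, trusted-source status, political context) into a single risk score with threshold-based actions. All signal names, weights, and thresholds here are illustrative assumptions, not the paper's actual mechanism.

```python
from dataclasses import dataclass

@dataclass
class ModerationSignals:
    watermark_detected: bool      # provenance marking found in the content
    detector_confidence: float    # ML deepfake-detector output, in [0, 1]
    trusted_source: bool          # uploader is a verified/trusted account
    political_context: bool       # content relates to political communication

def risk_score(s: ModerationSignals) -> float:
    """Aggregate signals into a risk score in [0, 1] (illustrative weights)."""
    score = 0.0
    if s.watermark_detected:
        score += 0.5              # explicit marking is strong evidence
    score += 0.4 * s.detector_confidence
    if s.trusted_source:
        score -= 0.3              # trusted provenance lowers risk
    if s.political_context:
        score += 0.2              # context-specific risk uplift
    return min(max(score, 0.0), 1.0)

def moderation_action(score: float) -> str:
    """Map a score to a platform action via simple thresholds."""
    if score >= 0.7:
        return "label-and-restrict"
    if score >= 0.4:
        return "label"
    return "no-action"

signals = ModerationSignals(True, 0.8, False, True)
print(moderation_action(risk_score(signals)))  # prints "label-and-restrict"
```

Because each signal contributes independently, such a mechanism can be evaluated cheaply per item, which is what makes a combined strategy tractable at platform scale.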
Keywords: Deepfakes, EU Regulation, Online Platforms, Content Moderation, Political Communication