Meta’s deepfake moderation isn’t good enough, says Oversight Board

The Verge
The Meta Oversight Board urged the company to overhaul its AI content moderation due to insufficient deepfake detection.

Summary

The Meta Oversight Board has called on Meta to overhaul its methods for identifying and labeling AI-generated content across Facebook, Instagram, and Threads, stating that current systems are "not robust or comprehensive enough." The recommendation follows an investigation into a fake AI video related to the Israel conflict and is deemed especially urgent given current "massive military escalations" in the Middle East. The Board found that Meta's system relies too heavily on user self-disclosure and escalated review, which fails to match the realities of the online environment, particularly the cross-platform proliferation of AI-generated media.

The Board's recommended steps include:

- Improving misinformation rules and establishing a separate standard for AI-generated content
- Developing better detection tools
- Increasing transparency around penalties
- Scaling AI content labeling, in particular ensuring "High-Risk AI" labels are applied and improving adoption of Content Credentials (C2PA)

(Source: The Verge)