Meta Platforms to Label AI-Generated Images: A Step Towards Transparency
Meta Platforms, the parent company of Facebook and Instagram, is set to introduce a new system to detect and label images generated by artificial intelligence (AI) services from various companies. This move aims to inform users about digitally created images that may resemble real photos, enhancing transparency on its platforms.
According to Nick Clegg, Meta’s president of global affairs, the company will mark content with invisible markers to identify AI-generated images, similar to its current practice for content generated using its own AI tools. This labeling initiative will apply to images created on services provided by OpenAI, Microsoft, Adobe, Midjourney, Shutterstock, and Google.
The announcement reflects the tech industry’s efforts to address potential risks associated with generative AI technologies, which can produce realistic-looking but fake content. While acknowledging that AI technology is still evolving, Clegg emphasizes the importance of creating momentum for industry-wide standards to address these challenges.
In addition to labeling AI-generated images, Meta plans to require users to disclose when they share altered audio and video content, with penalties for non-compliance. However, the company notes that there is currently no effective method for detecting and labeling AI-generated written text.
Meta’s independent oversight board recently criticized the company’s policy on misleadingly doctored videos, and Clegg acknowledges the need for improvement. He agrees with the board’s recommendation to label such content instead of removing it, indicating Meta’s commitment to transparency and user safety.
The decision to label AI-generated images marks a significant step towards enhancing transparency and trust in digital content on Meta’s platforms, signaling the company’s proactive approach to addressing emerging challenges in the digital landscape.