In a bid to tackle the growing issue of digitally altered media, Facebook owner Meta has announced significant changes to its policies. These changes come just ahead of the U.S. elections, where the platform faces the challenge of policing deceptive content generated by advanced artificial intelligence technologies.
So, what's changing?
Starting in May, Meta will begin adding "Made with AI" labels to AI-generated videos, images, and audio shared on its platforms. This move expands on a previous policy that only targeted a small portion of manipulated videos. Monika Bickert, Vice President of Content Policy at Meta, highlighted this expansion in a recent blog post.
But that's not all. Meta is also introducing separate, more prominent labels for digitally altered media that poses a high risk of materially deceiving the public, and these will apply regardless of whether the content was created with AI or other editing tools. The approach marks a shift away from simply removing such content toward leaving it up while giving viewers context about how it was made.
Additionally, Meta had previously announced plans to detect images created with other companies' AI tools by reading invisible markers those tools embed in the files. While no start date was specified at the time, Meta is now moving forward with these efforts.
It's important to note that these labeling changes will apply to content across Meta's various platforms, including Facebook, Instagram, and Threads. However, different rules will continue to govern its other services like WhatsApp and Quest virtual reality headsets.
The more prominent "high-risk" labels will be implemented immediately, according to a Meta spokesperson.
Why are these changes happening now?
The timing of these policy updates is significant, especially with the upcoming U.S. presidential election. Researchers in the tech industry have warned about the potential impact of new AI technologies on the electoral process. Already, political campaigns in places like Indonesia have begun utilizing AI tools, pushing the boundaries of existing guidelines set by platforms like Meta and leading AI companies like OpenAI.
One incident that prompted reflection on Meta's existing rules was a video of U.S. President Joe Biden posted on Facebook last year, which had been edited to falsely suggest inappropriate behavior on his part. Because the clip did not fall under Meta's narrow manipulated-media policy at the time, it was allowed to remain on the platform, a decision Meta's oversight board criticized while calling for clearer and more comprehensive rules on manipulated media.
The oversight board also emphasized that these policies should extend beyond AI-generated material, since conventionally edited content can be just as misleading. That includes audio-only recordings and videos depicting actions that never actually occurred.
In summary, Meta's recent policy changes aim to address the evolving landscape of digitally altered media, particularly in the context of AI advancements. By implementing clearer labeling and detection measures, Meta hopes to mitigate the spread of deceptive content while maintaining transparency for its users.
As users, it's essential to stay informed about these updates and remain vigilant when consuming media online. By understanding how content is created and labeled, we can better navigate the digital landscape and make informed decisions about the information we encounter.