Meta’s new policy to counter deepfake ads during the 2024 election

The year of deepfakes is fast approaching as some of the world’s largest democracies prepare for elections. The US, UK, and India are among the countries holding general elections in the coming year. Meta has taken a proactive approach, anticipating the role AI and deepfakes could play in spreading misinformation. In a recent statement, Meta announced that advertisers will be required to “disclose when they digitally create or alter a political or social issue ad in certain cases.” The new policy is aimed at helping users “understand when a social issue, election, or political advertisement on Facebook or Instagram has been digitally created or altered.”

Deepfakes are a sophisticated form of digital manipulation: hyper-realistic, AI-generated videos or audio recordings that substitute one person’s likeness or voice for another’s. In the context of political advertising, deepfakes pose a significant threat. They can undermine political discourse by fabricating images, speeches, or statements that spread misinformation and deceive the public.

Misinformation has long been a significant problem on social media, and most of us have fallen for it at least once. Deepfakes, video alterations, and AI-generated images have evolved to the point where judging authenticity without proper context is genuinely difficult. Meta’s move anticipates the likelihood of misleading content or deepfake videos going viral in the run-up to the upcoming elections.

Effective from January, Meta’s new global policy requires advertisers running ads with digitally altered media to clearly disclose those alterations

The policy covers scenarios such as portraying a real person saying words they never said, featuring a lifelike person who doesn’t exist, or depicting a realistic event that never occurred. Only alterations that materially contribute to the ad’s overall message require disclosure; minor edits, such as cropping a small image, would not violate the new guidelines.

Meta has committed to adding information to ads when digital alterations or deepfakes are disclosed, although it has not provided specifics. Failure to disclose altered media will result in “penalties against the advertiser,” and Meta reserves the right to remove content that violates these policies. While Meta’s prior policies already addressed deepfakes and digital alterations, the new rules represent a significant stride in its commitment to combating deceptive content.

Interestingly, Google has taken a similar stance, requiring disclosure of AI-generated content in election ads. X (formerly Twitter) relies on its Community Notes feature to combat misinformation and altered media, while TikTok takes the safer route of banning political advertising outright. This collective industry response reflects the growing threat of misinformation heading into an election year.
