Meta Platforms will require advertisers to disclose the use of artificial intelligence or other digital tools in political or election-related ads on Facebook and Instagram beginning in 2024. This includes synthetic people, as well as real people depicted as saying or doing things they did not say or do. Meta will also require disclosure of fabricated events or altered footage of real events.
The policy updates, including Meta's earlier announcement barring political advertisers from using its generative AI ad tools, come a month after the Facebook owner said it was expanding advertisers' access to AI-powered advertising tools that can instantly create backgrounds, image adjustments and variations of ad copy in response to simple text prompts.
Alphabet's Google, the biggest digital advertising company, announced the launch of similar image-customizing generative AI ads tools last week and said it planned to keep politics out of its products by blocking a list of "political keywords" from being used as prompts.
Lawmakers in the US have been concerned about the use of AI to create content that falsely depicts candidates in political advertisements and influences federal elections, as a slew of new generative AI tools has made it cheap and easy to create convincing deepfakes.
Meta has already been blocking its user-facing Meta AI virtual assistant from creating photo-realistic images of public figures, and its top policy executive, Nick Clegg, said last month that the use of generative AI in political advertising was "clearly an area where we need to update our rules."