
In a major internal overhaul, Meta is automating up to 90% of its risk and privacy review processes using artificial intelligence, marking a strategic departure from its decade-long reliance on human-led assessments. The change, aimed at accelerating product development across Facebook, Instagram, and other platforms, has triggered internal debate over the potential consequences for user safety and content integrity.
The new system, detailed in internal documents reported by NPR, allows product teams to complete a standardized questionnaire about upcoming features. AI tools will then instantly evaluate the responses, approving low-risk updates or flagging issues that must be addressed before launch. Teams are expected to self-certify compliance with these AI-generated recommendations.
Automation raises oversight concerns
Meta says this transition is designed to streamline workflows and allow human experts to focus on more complex and high-risk issues. “This change gives our teams more speed while maintaining necessary oversight,” the company stated. However, insiders worry that even sensitive areas — including AI safety, youth protection, and violent content moderation — could fall under the purview of automated systems.
A former Meta executive cautioned, “When scrutiny is reduced in favour of speed, the risks multiply.” Another employee warned that the human perspective — essential for identifying unintended consequences — could be lost in the process.
Critics have pointed to the timing of the changes, which come shortly after Meta shut down its third-party fact-checking initiative and relaxed certain hate speech moderation rules. While Meta insists its AI decisions will undergo regular audits, some observers argue the company is dismantling long-standing safeguards in pursuit of faster rollouts.
Increased scrutiny in Europe
In Europe, where digital oversight regulations are stricter, Meta will maintain human-led reviews. An internal memo confirms that risk assessments involving European users will continue to be overseen by the company’s Dublin-based headquarters to comply with the EU’s Digital Services Act.
This automation effort is part of Meta’s broader push to embed AI deeper into its operations. CEO Mark Zuckerberg recently revealed that Meta’s in-house AI agents are now responsible for writing a significant share of the company’s code, including for advanced models like Llama. According to the company, these agents can already debug code and outperform many human developers.
With other tech giants such as Google and OpenAI also ramping up AI use in development, Meta’s shift reflects a growing industry trend. Yet, the balance between innovation and accountability remains under scrutiny as platforms automate more decisions once made by humans.