New internal documents reviewed by NPR indicate that Meta plans to replace human risk assessors with AI as the company moves toward full automation. Meta has historically relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to its algorithms and safety features, as part of privacy and integrity reviews. Those critical evaluations may soon be handled by bots, however, as the company aims to automate 90 percent of the work using artificial intelligence.
Despite earlier assurances that AI would handle only “low-risk” releases, Meta is now using the technology for decisions involving AI safety, youth risk, and integrity, a category that covers misinformation and the moderation of violent content, according to NPR. Under the new framework, product teams submit questionnaires and receive instant risk assessments and recommendations, with engineers taking on greater decision-making authority.
While the automation could speed up app updates and developer releases in line with Meta’s efficiency goals, insiders warn that it also raises risks for billions of users, including unnecessary threats to data privacy.
In April, Meta’s Oversight Board issued a series of decisions that both upheld the company’s stance on allowing “controversial” speech and criticized its content moderation practices. The decisions underscored the need for Meta to identify and address potential adverse effects on human rights, particularly in countries facing crises such as armed conflict.
Earlier that month, Meta shut down its human fact-checking program, replacing it with crowd-sourced Community Notes and leaning more heavily on its content-moderating algorithm, which has reportedly both missed and incorrectly flagged misinformation and other posts that violate the company’s recently updated content policies.