Are some AGI systems too risky to release? Meta believes so.


**Mark Zuckerberg May Slow or Pause Development of High-Risk AGI Systems**

As artificial intelligence continues to advance, technology firms have been racing to push the field forward. A recent policy document, however, suggests that Meta CEO Mark Zuckerberg may slow or even pause the development of artificial general intelligence (AGI) systems the company classifies as "high risk" or "critical risk."

AGI refers to AI systems capable of performing any task a human can. Zuckerberg has previously said he wants to make AGI openly available one day. In Meta's newly published *Frontier AI Framework*, however, the company acknowledges that some highly capable AI systems may never be released publicly because of the dangers they pose.

The framework focuses in particular on reducing cybersecurity risks and the potential misuse of AI in chemical and biological weapons.

> "By focusing on these crucial areas, we can strive to safeguard national security while fostering innovation. Our framework delineates several procedures we adhere to in order to foresee and alleviate risks when developing frontier AI systems," Meta said in a press release.

The framework aims to identify and prevent catastrophic risks tied to cyberattacks as well as chemical and biological threats. It includes threat-modeling exercises to anticipate how malicious actors might misuse AGI and defines processes to keep risks within acceptable bounds.

If Meta concludes that a particular AGI system poses too great a risk, it will not be released to the public and will instead remain internal.
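
To illustrate the kind of release gate the framework describes, here is a minimal sketch of a risk-tier check. The tier names echo the document's "high risk" and "critical risk" categories, but the `FrontierSystem` type, the `release_decision` function, and the specific outcomes are illustrative assumptions rather than Meta's actual process.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Illustrative mapping of the risk tiers named in the framework."""
    MODERATE = "moderate"
    HIGH = "high"          # framework: too risky to release publicly
    CRITICAL = "critical"  # framework: the most severe category


@dataclass
class FrontierSystem:
    name: str
    tier: RiskTier


def release_decision(system: FrontierSystem) -> str:
    """Return a hypothetical release decision based on the assessed risk tier."""
    if system.tier is RiskTier.CRITICAL:
        return f"{system.name}: restrict access and pause further work"
    if system.tier is RiskTier.HIGH:
        return f"{system.name}: keep internal; do not release publicly"
    return f"{system.name}: eligible for public release after standard review"


if __name__ == "__main__":
    print(release_decision(FrontierSystem("example-model", RiskTier.HIGH)))
```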

Despite these precautions, the document stresses that the goal of developing advanced AI remains to deliver significant benefits to society.

While Zuckerberg has long championed AGI, this new framework suggests he is taking a more cautious approach, at least for now.