A recent report from the French non-profit SaferAI, published on October 2, highlights concern over the risk management practices of some of the top AI developers, with French company Mistral AI receiving particularly low ratings.
SaferAI, which advocates for the development of safer AI systems, assessed the risk management practices of companies including Anthropic, OpenAI, Google DeepMind, Meta, Mistral, and xAI. According to the report, all of these companies received moderate or worse scores for their risk management approaches.
SaferAI’s CEO, Simeon Campos, explained to Euractiv that the absence of large-scale AI-related harms does not reflect effective risk management, but rather the limited capabilities of current AI systems. He stressed that as AI technology advances rapidly, there is a pressing need for more robust risk management practices in the industry.
The companies were evaluated on their ability to identify, analyze, and mitigate risks, with criteria including safety testing, red teaming, and the quantification of risk thresholds. Anthropic, OpenAI, and Google DeepMind received moderate scores, largely on the strength of their risk identification efforts, such as safety testing and red teaming exercises. Their performance was more uneven, however, when it came to actively analyzing and mitigating the risks they identified. Meta, in particular, was rated “very weak” for both risk analysis and mitigation.
Mistral and xAI scored poorly overall: their practices were rated “non-existent” in several areas, with the exception of a “very weak” 0.25/5 score for risk identification.
SaferAI noted that Meta, Mistral, and xAI release their AI models as open source, giving the public direct access to use and modify them. While this is not inherently problematic, the organization emphasized that it becomes a concern when such releases are made without thorough threat and risk modeling.
At the time of publication, SaferAI had not received responses from the companies evaluated.
Yoshua Bengio, a Turing Award-winning AI researcher, expressed support for initiatives like SaferAI’s report, saying it is important to assess and compare companies’ safety approaches. Bengio, who chairs a working group drafting a Code of Practice for AI risk management, also highlighted that such initiatives align with efforts to ensure compliance with the EU AI Act.
In response to these concerns, the European Commission’s AI Office is increasing its focus on generative AI risk management. The Commission has been hiring staff to strengthen its technical capabilities in this area, with ongoing recruitment for technology specialists, including those with backgrounds in computer science and engineering.
However, some stakeholders have questioned both the pace of the Commission’s recruitment for the AI Office and the technical expertise of its new hires.