The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust. In the LinkedIn recruitment post, the firm states that applicants must have a minimum of five years of experience in chemical weapons and/or explosives defense and knowledge of radiological dispersal devices – also known as dirty bombs.

Anthropic is not alone in adopting this strategy. ChatGPT developer OpenAI has advertised a similar position focused on biological and chemical risks. The growing trend of AI firms recruiting experts in these fields reflects heightened concern over the risks their technologies pose. Critics argue that giving AI tools access to sensitive weapons information, even with the intention of preventing misuse, presents its own dangers.

Experts such as Dr. Stephanie Hare have warned that using AI systems to manage sensitive chemical and explosives information without proper regulatory oversight could exacerbate risks rather than mitigate them. As global tensions rise, the case for stricter controls and ethical frameworks around AI in defense applications grows more pressing. Despite the industry's repeated warnings about the existential threats posed by AI, there has been no significant push to slow its development, particularly as the US gears up for military operations that may further complicate the landscape of AI and warfare.