The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent 'catastrophic misuse' of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radiological weapons, and wants an expert to ensure its guardrails are sufficiently robust.

In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years of experience in 'chemical weapons and/or explosives defence', as well as knowledge of 'radiological dispersal devices' – also known as dirty bombs. The company described the role as similar to positions it has created in other sensitive areas.

Anthropic's approach is not unique: OpenAI, the developer behind ChatGPT, has advertised a similar position for a researcher specializing in 'biological and chemical risks'. However, some experts caution that the strategy carries risks of its own, since it exposes AI tools to weapons-related information and raises wider concerns about what these systems are capable of.

Dr. Stephanie Hare, a technology researcher, said it is far from clear that AI systems can safely handle sensitive information about explosives and chemicals, and pointed to the absence of any global regulation governing the area.

The question has become more urgent as the US government sharpens its focus on AI firms amid military operations abroad. Anthropic, meanwhile, is pursuing legal action against the US Department of Defense over its designation as a 'supply chain risk', a dispute that echoes other national security controversies in the tech industry.