AI can help plan a biological attack, but not make bacteriological weapons

A study carried out by the American think tank Rand Corporation has shown that chatbots based on artificial intelligence (AI), such as ChatGPT, could help plan an attack with biological weapons.

Oliver Thansan
17 October 2023 Tuesday 23:00

The researchers tested several large language models (LLMs), on which these applications are based – without specifying which ones – and although none gave instructions on how to make a biological weapon, they did offer information on how to plan and execute an attack with such weapons.

To obtain these responses, the researchers first had to use jailbreaking techniques to bypass the chatbots' security restrictions.

Thus, for example, “in a fictitious scenario, the LLM discussed methods of administering botulinum toxin through food and aerosols, pointing out the risks and requirements. The LLM suggested aerosol devices as a method and proposed a cover story for acquiring Clostridium botulinum – the bacterium that causes botulism – while appearing to conduct legitimate research,” the Rand Corporation explains in its report.

In another case, the LLM analyzed how a pandemic could be caused by biological weapons, identifying potential agents and weighing budgetary and success factors. “The LLM evaluated the practical aspects of obtaining and distributing specimens infected with Yersinia pestis – the bacillus responsible for bubonic plague – while identifying the variables that could affect the number of deaths it would cause,” say the authors of the study.

However, preliminary research findings also showed that the LLMs did not generate explicit instructions for creating biological weapons.

According to this research, the intersection of AI and biotechnology presents specific challenges. Given the rapid evolution of these technologies, the government's ability to understand or effectively regulate them is limited, since much of the specialized knowledge needed for AI threat assessments resides in the very companies that develop these systems. This hinders the ability to accurately determine whether the technologies are being – or could be – used for benign or malicious purposes.

The authors also warn that resurrecting a virus can cost as little as $100,000, while creating a vaccine – as demonstrated during the Covid pandemic – can cost up to $1 billion.