An artificial intelligence model was able to generate 40,000 potential chemical weapons compounds in just six hours after being given the task by researchers.
A team of scientists had been using AI to look for compounds that could be used to cure disease, a process that involves filtering out any that could kill a human.
As part of a conference on the potentially negative implications of new technology, biotech startup Collaborations Pharmaceuticals, from Raleigh, North Carolina, 'flipped a switch' in its AI algorithm and had it seek out the most lethal compounds instead.
The team wanted to see just how quickly and easily an artificial intelligence algorithm could be abused if it were set a negative rather than a positive task.
Once in 'bad mode', the AI was able to invent thousands of new chemical combinations, many of which resembled the most dangerous nerve agents in use today, according to a report by The Verge.
Among the compounds invented by the AI were some similar to VX, an extremely toxic nerve agent that can cause muscle twitching at even tiny doses.
The researchers said one of the scariest aspects of their discovery was how easy it had been to take a widely available dataset of toxic chemicals and use AI to design chemical weapons similar to the most dangerous in use today.
A team of scientists were using AI to look for compounds that could be used to cure disease, but decided to 'set it to evil mode' and have it look for bio-weapons. Stock image
Creating a compound as powerful as VX was a shock to the researchers, as even a tiny drop of the chemical can cause a human's muscles to twitch.
A large enough dose can lead to convulsions and stop a person from breathing, and the new compound created by the AI could have a similar effect, the team predict.
Fabio Urbina, lead author of the paper, said they have a lot of datasets of molecules that have been tested to see if they are toxic or not.
'In particular, the one that we focus on here is VX. It is an inhibitor of what's known as acetylcholinesterase,' he told The Verge.
'Whenever you do anything muscle-related, your neurons use acetylcholinesterase as a signal to basically say 'go move your muscles.'
'The way VX is lethal is it actually stops your diaphragm, your lung muscles, from being able to move so your lungs become paralyzed.'
The idea for 'flipping the switch' on the AI to turn it 'bad' came from the Convergence Conference, organised by the Swiss Federal Institute for Nuclear, Biological and Chemical Protection.
The goal is to explore the implications that new tools and developments could have in the realm of chemical and biological weapons, even unintentionally.
Meeting every two years, the conference brings together an international group of scientific and disarmament experts to explore the current state of the art in the chemical and biological fields and their trajectories.
'We got this invite to talk about machine learning and how it can be misused in our space. It's something we never really thought about before,' said Urbina.
'But it was just very easy to realize that as we're building these machine learning models to get better and better at predicting toxicity in order to avoid toxicity, all we have to do is sort of flip the switch around and say, 'You know, instead of going away from toxicity, what if we do go toward toxicity?''
The team wanted to see just how quickly and easily an artificial intelligence algorithm could be abused if it were set a negative rather than a positive task. Stock image
Urbina, a machine learning specialist, implements models in the area of drug discovery, and a large fraction of them focus on predicting how toxic a compound might be.
'If it turns out you have this wonderful drug that lowers blood pressure fantastically, but it hits one of those really important, say, heart channels - then basically, it's a no-go because that's just too dangerous,' said Urbina.
They use large datasets recording which compounds are toxic, how they are toxic and what effects they have, in order to determine whether potential new drugs will prove too dangerous for humans.
'Then we can give this machine learning model new molecules, potentially new drugs that maybe have never been tested before. And it will tell us this is predicted to be toxic, or this is predicted not to be toxic.
'This is a way for us to virtually screen very, very fast a lot of molecules and sort of kick out ones that are predicted to be toxic.'
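The workflow Urbina describes is standard virtual screening: train a classifier on known toxic and non-toxic structures, then use its predictions to discard risky candidates. The sketch below is a minimal illustration of that idea, not the team's actual model; it assumes the open-source RDKit and scikit-learn libraries, and the molecules and labels are toy placeholders standing in for a real public toxicity dataset.

```python
# A minimal virtual-screening sketch -- illustrative only, not the
# Collaborations Pharmaceuticals model. Assumes RDKit and scikit-learn;
# the molecules and labels below are toy placeholders.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def fingerprint(smiles):
    """Turn a SMILES string into a fixed-length Morgan fingerprint vector."""
    mol = Chem.MolFromSmiles(smiles)
    return np.array(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))

# A real project would load thousands of labelled structures from a
# public toxicity dataset; two placeholders keep the sketch short.
train_smiles = ["CCO", "c1ccccc1O"]   # ethanol, phenol (placeholders)
train_labels = [0, 1]                 # 0 = non-toxic, 1 = toxic (placeholders)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit([fingerprint(s) for s in train_smiles], train_labels)

# Virtual screening: keep only candidates the model predicts to be safe.
candidates = ["CC(=O)Oc1ccccc1C(=O)O"]   # aspirin, as a stand-in candidate
safe = [s for s in candidates if model.predict([fingerprint(s)])[0] == 0]
print(safe)
```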
For the new study they flipped this around, using the AI model they had created to look for the most toxic, most dangerous molecules and to see whether it could make them even worse.
'The other key part of what we did here are these new generative models. We can give a generative model a whole lot of different structures, and it learns how to put molecules together,' Urbina told The Verge, adding they 'can, in a sense, ask it to generate new molecules.'
They found it could generate these molecules across the whole space of chemistry, and not just at random: the generation could be directed by the team.
'We do that by giving it a little scoring function, which gives it a high score if the molecules it generates are towards something we want. Instead of giving a low score to toxic molecules, we give a high score to toxic molecules.'
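That 'little scoring function' is the whole switch. Below is a hedged sketch of what such a reward might look like, reusing the hypothetical toxicity classifier above; the function name and weighting are illustrative, not taken from the paper.

```python
def score(smiles, toxicity_model):
    """Reward for a generated molecule: higher means more desirable.

    In normal drug discovery the reward penalises predicted toxicity,
    steering the generator away from dangerous chemistry. The study's
    warning is that inverting this one term points the same machinery
    toward toxicity instead.
    """
    # Probability the classifier assigns to the 'toxic' class.
    p_toxic = toxicity_model.predict_proba([fingerprint(smiles)])[0][1]
    return 1.0 - p_toxic   # high score for molecules predicted non-toxic
```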
Most of the toxic molecules resembled chemicals used in warfare, including VX, even though the model had never seen these chemicals before, or indeed any chemical warfare agent.
'For me, the concern was just how easy it was to do,' said Urbina. 'A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere.
'If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets.
'So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.'
The findings have been published in the journal Nature Machine Intelligence.