Blogging on the Cyber Theory website (cybertheory.io/canary-in-the-cybermine/), Steve King writes that artificial intelligence is not being put to work for cyber defense.
The technology is here: artificial intelligence and machine learning can predict cyber attacks in advance and identify the exact threat vector and vulnerability. Yet adoption of AI is not keeping pace. The idea that machines can function with foresight that ultimately surpasses human thinking has made organizations wary of adopting it, and it is understandable to be concerned about technology we cannot control.
The problem is that threat actors have no such reservations about using advanced AI in their attacks. They will use the technology to compromise physical systems and penetrate organizational networks.
The newest threat is "generative adversarial networks," or GANs - two neural networks trained adversarially, one generating forgeries and the other trying to detect them - being used to create "fake news" and forged audio and video. Because the output is generated from scratch rather than altered from an original source, the forgeries are difficult or impossible to detect.
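To make the adversarial dynamic concrete, here is a minimal toy sketch (our illustration, not from King's post): a one-parameter-pair "generator" learns to mimic real data drawn from a Gaussian, while a logistic-regression "discriminator" tries to tell real samples from fakes. The two are updated in alternation, each against the other's current best effort. All names, learning rates, and distributions here are illustrative assumptions; real GANs use deep networks, but the training loop has this same shape.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from N(4, 1). The generator starts from noise
# z ~ N(0, 1) and must learn to produce samples that look like them.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = w*z + b (two scalar parameters, illustrative).
w, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(a*x + c) (two scalar parameters).
a, c = 0.1, 0.0

lr, n = 0.02, 64
for step in range(3000):
    x = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    g = w * z + b                      # fake samples

    # Discriminator step: ascend log D(x) + log(1 - D(g)),
    # i.e. score real samples high and fakes low.
    d_real = sigmoid(a * x + c)
    d_fake = sigmoid(a * g + c)
    a += lr * (np.mean((1 - d_real) * x) + np.mean(-d_fake * g))
    c += lr * (np.mean(1 - d_real) + np.mean(-d_fake))

    # Generator step: ascend log D(G(z)) (the non-saturating loss),
    # i.e. push fakes toward regions the discriminator calls "real".
    d_fake = sigmoid(a * g + c)
    upstream = (1 - d_fake) * a        # d/dg of log D(g)
    w += lr * np.mean(upstream * z)
    b += lr * np.mean(upstream)

fakes = w * rng.normal(0.0, 1.0, 10000) + b
print(f"fake sample mean after training: {fakes.mean():.2f}")
```

After training, the generator's output mean drifts toward the real data's mean of 4, with no human-labeled examples of "real" involved: each network improves only by competing with the other. That self-supervised arms race is exactly why GAN forgeries leave no original source to compare against.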
By publishing research into AI technology, we make it easier for adversary countries to turn it to their advantage. If researchers weighed the potential for misuse and put safety and information security above open access, the threat would be smaller. However, King is pessimistic: historically, the ethical ramifications of scientific breakthroughs are rarely addressed in advance, and businesses favor productivity over defense. The result is that the bad guys keep winning.