The world of ground warfare is messy and complex, and so is its data. Artificial intelligence used in combat differs from other areas of machine learning, and is vulnerable to deceptive adversaries and “dirty” data, said the Army Research Laboratory’s chief scientist.
Alexander Kott calls it “dinky, dirty, dynamic, deceptive data,” or D5. “That’s what you get instead of your beautiful, Google-ish kind of million pictures of dogs and million pictures of cats that you can use to train your typical machine learning algorithm,” Kott said. Rather, “you get a mess.” Kott spoke on a panel at the GovernmentCIO Magazine CXO Tech Forum: AI in Government in Washington, D.C., on Dec. 14.
Combat data doesn’t always come at such volume, either. For example, there may be five photos of improvised explosive devices, not a million, and those are constantly changing. “Nobody is going to label them, there’s no time,” Kott said.
He defined dirty data as containing more noise than signal, and dynamic data as constantly changing as the enemy adapts and tries new things every day. Data is deceptive because “everything in war is about deception.” The question is: how do you overcome D5?
DARPA’s Lifelong Learning Machines
The Defense Advanced Research Projects Agency is addressing this dynamically changing environment with a new program, Lifelong Learning Machines (L2M), led by program manager Hava Siegelmann. It focuses on the future of AI that adapts continuously without forgetting what it has already learned.
Siegelmann, who also spoke on the panel, said the program is only now under contract, so it hasn’t technically started yet, but its objective is to develop a next-generation adaptive AI system that continually learns from experience.
The AI system will apply learned knowledge to new circumstances without pre-programming or training sets, and it’ll update its network based on its situation for a variety of applications relevant to the DoD.
“A lot of the AI that was mentioned here was based on taking data, and crunching it and mining it and having all the time in the world to do that,” Siegelmann said. “What really happens is, in many applications not only in the [Defense Department], we don’t have that amount of data and we don’t have that amount of time.”
And the systems and data used in one combat zone might not match those in others, so it’s difficult for the DoD to adopt one machine learning algorithm. The idea of L2M is to “have something that we all do,” Siegelmann said, so AI can learn from experiences, not replace them.
Traditional and current AI applications are pre-programmed and trained with data before being fielded. Training on new data can erase what the system learned before, a problem researchers call catastrophic forgetting. And if something goes wrong, the system reverts to its defaults rather than learning from the problem and fixing it.
The L2M program will develop systems that train very little in advance but learn how to learn, Siegelmann said. Each will start with what it knows; once fielded, it will fail first, recognize that it failed and build a new training set to incorporate into the system.
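The forgetting problem Siegelmann describes shows up in even the simplest model. The sketch below is a toy illustration, not L2M code (the tasks and names are invented for the example): a one-weight model trained by gradient descent learns one task, then is retrained on a conflicting task, and its performance on the first task collapses because plain gradient descent overwrites the old weights rather than preserving them.

```python
# Toy sketch of catastrophic forgetting: sequential training on two
# conflicting tasks erases what was learned on the first.

def train(w, data, lr=0.1, epochs=200):
    """Fit a one-weight linear model y = w * x by gradient descent."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x   # gradient of squared error
    return w

def error(w, data):
    """Mean squared error of the model on a task's data."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]    # first task: slope 2
task_b = [(x, -1.0 * x) for x in (1.0, 2.0, 3.0)]   # conflicting task: slope -1

w = train(0.0, task_a)
err_a_before = error(w, task_a)   # near zero: task A learned

w = train(w, task_b)              # retrain on task B only
err_a_after = error(w, task_a)    # large: task A has been "erased"

print(f"error on task A before: {err_a_before:.4f}, after: {err_a_after:.4f}")
```

A lifelong-learning system in the spirit of L2M would instead detect the failure on the old task and fold both experiences into its training, rather than letting the new task silently overwrite the old one.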
“So this is really a very new way of AI,” Siegelmann said, because rather than relying on big data, “what you care about is what to do next,” using autonomous systems for decision support and application merging.
Another defense-based challenge in AI is adversarial learning, which is “a rapidly growing field of research in which it was discovered, very recently, that we can very consistently and reliably fool AI if it’s based on conventional machine learning,” Kott said.
In fact, these deception methods can be carried out without even knowing what is inside the targeted machine learning algorithm. “So with much wisdom comes much troubles,” Kott said. The enemy finds out what has been learned and uses that to deceive. “And of course, that’s what we’re seeing here.”
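Kott's point that a model can be fooled without seeing its internals can be illustrated with a minimal decision-based ("black-box") attack. The sketch below is my own toy example, not from the panel: the attacker queries only the classifier's output label, never its weights, and binary-searches along a line between two differently labeled inputs to find a point just past the decision boundary, where the label flips.

```python
# Toy black-box attack: flip a classifier's decision using only
# label queries, with no knowledge of the model's weights.

def classifier(x):
    """A 'hidden' linear model; the attacker never sees these weights."""
    w = [1.0, -2.0, 0.5]
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def boundary_attack(x_src, x_tgt, tol=1e-6):
    """Binary-search along the line from x_src to x_tgt for an input
    just past the decision boundary, using label queries only."""
    label_src = classifier(x_src)
    assert classifier(x_tgt) != label_src, "need a differently labeled target"
    lo, hi = 0.0, 1.0                    # interpolation toward x_tgt
    while hi - lo > tol:
        mid = (lo + hi) / 2
        point = [a + mid * (b - a) for a, b in zip(x_src, x_tgt)]
        if classifier(point) == label_src:
            lo = mid                     # still on the original side
        else:
            hi = mid                     # crossed over; tighten from above
    return [a + hi * (b - a) for a, b in zip(x_src, x_tgt)]

x = [1.0, 0.0, 0.0]                      # classified as 1
adv = boundary_attack(x, [-1.0, 0.0, 0.0])
print(classifier(x), classifier(adv))    # the label flips
```

Real decision-based attacks on deep networks work on the same principle, just with many more queries; the point is that no inside knowledge of the model is required.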
But adversarial learning is an active area of research, and Kott said to expect to hear more about it as AI advances.