Artificial Intelligence: From fake news to urban warfare, federal agencies look to AI to address a host of challenges

The federal government has big plans for artificial intelligence. 

From identifying fake news to navigating complex data and modernizing warfare, nearly every federal institution intends to harness AI.

A quick search of contract opportunities for “Artificial Intelligence” on SAM.gov returns dozens of AI projects underway at nearly every agency. 

AI got a big boost last weekend when the U.S. Senate voted overwhelmingly to override a presidential veto and pass the National Defense Authorization Act. 

The NDAA includes the implementation of a National Artificial Intelligence Initiative to support research and development, education, and training programs.

The executive order that launched the initiative aims to accelerate the evolution of the technology. Much of that will be achieved with government spending. 

Specifically, the Initiative directs the government to collaborate and engage with the private sector, academia, the public, and international partners on AI.

It directs the federal government to pursue five pillars for advancing AI: 

(1) invest in AI research and development (R&D), 
(2) unleash AI resources, 
(3) remove barriers to AI innovation, 
(4) train an AI-ready workforce, and 
(5) promote an international environment that is supportive of American AI innovation and its responsible use. 

It also directs agencies to actively leverage AI to help the federal government work smarter in its own services and missions.

It is clear the business and technology opportunities around AI are here now and will grow significantly in the future. 

An ongoing Defense Advanced Research Projects Agency (DARPA) program called Reverse Engineering of Deceptions (RED) illustrates how ambitious the goals around AI are. 

Humans are susceptible to being deceived by adversarially falsified media (images, video, audio, text) or other information, according to the RED project. The consequences may be significant, and deception plays an increasingly central role in information-based attacks. 

The RED project will develop techniques that automatically reverse engineer the toolchains behind attacks such as multimedia falsification, adversarial machine learning attacks, or other information deception attacks. 

RED will seek to develop techniques that support the automated identification of attack toolchains and the development of databases of attack toolchains.

According to Aaron Sant-Miller, a chief data scientist at Booz Allen Hamilton, AI could be an effective tool in detecting these kinds of deceptions. However, he does not believe the technology is ready to serve as an automated weapon against them.

“As this is foundational research work, I don’t believe the immediate goal should be using AI to fight the deceptions,” Sant-Miller said. “Rather, the approach should be multi-layered.”

The layers, according to Sant-Miller, should be: 

1.) Use AI to uncover these deceptions, 
2.) Use AI to understand these deceptions, and then 
3.) Use the outputs of the first two phases to fight the deceptions. 

“It’s important to start with increased visibility and understanding before prescribing a treatment,” he said.

While in many ways we are at the infancy of the technology, AI that exists today is ready to fulfill some of the goals of the federal initiative, according to Darrell M. West, co-author of Turning Point: Policymaking in the Era of Artificial Intelligence.

“AI has the capability of finding outliers right now,” West said. “It is a matter of training the algorithm on what looks normal versus abnormal, and getting the AI to find things that look unusual and thereby warrant additional attention.”
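West's description, learn what normal looks like and then flag deviations for review, is the core of anomaly detection. Below is a toy sketch of that idea in Python, using a simple statistical baseline (z-scores) rather than a trained machine-learning model; the data values and threshold here are illustrative assumptions, not any agency's actual method:

```python
import statistics

# Baseline of "normal" values (e.g., routine transaction amounts).
# These numbers are made up for illustration.
normal = [102.5, 98.7, 101.2, 99.8, 100.4, 97.9, 103.1, 100.0, 99.5, 101.8]
mean = statistics.mean(normal)
stdev = statistics.stdev(normal)

def is_unusual(value, threshold=3.0):
    """Flag a value whose z-score (distance from the mean in
    standard deviations) exceeds the threshold."""
    return abs(value - mean) / stdev > threshold

print(is_unusual(100.9))   # routine value, close to the baseline -> False
print(is_unusual(250.0))   # far outside the baseline -> True
```

In practice an agency would train a richer model (an isolation forest or an autoencoder, for instance) on historical records, but the workflow West describes is the same: establish what normal looks like, then surface the unusual cases for additional human attention.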

West is vice president of Governance Studies and a senior fellow at the Center for Technology Innovation at the Brookings Institution.

Sant-Miller notes that the reverse engineering aspect of DARPA's RED project is advanced technology that doesn't exist today.

“Adversarial attacks on AI are occurring today, but the research in this space, to automate the reverse engineering of these adversarial techniques, is just beginning,” he said. 

“As a whole, adversarial AI and its defenses is a young field of research. While I expect a lot of rapid progress in the research domain, fully-automated, integrated and operationalized reverse engineering systems will take some time,” Sant-Miller said.

One of the goals of the federal AI Initiative is to use the technology now to improve how agencies operate. Sant-Miller said there are ways to apply the technology now. 

“AI is very good at pattern recognition right now and current work should focus there,” Sant-Miller said. “More complex processes like decision-making, contextual learning, or conceptualizing the broader implication of patterns are much more challenging.”

But he also cautioned that agencies and developers of AI should keep an open mind about how the technology is developed and applied.

“AI capabilities, understanding, and research are growing so fast, every new discovery tends to point toward a new direction for AI,” Sant-Miller said. “This domain is growing and evolving so quickly, we’re constantly learning about new opportunities, new potential, and possible directions to take AI applications.”

Additionally, like many new technologies, moving from the lab to the real world will be a big step for AI, said Sant-Miller. 

“Operational scenarios have a lot more constraints and what a model needs to ‘learn’ and ‘know’ often evolves very quickly,” Sant-Miller said. “I think it’s important not to assume that just because something works really well in a test or in your lab environment, that it will work well in operations. There are also a lot of sampling and learning challenges that come into play, long term, if you make decisions on what data to collect based on what AI knows now.”

West recommends agencies learn from industry on how to best use AI at this early stage, to avoid those kinds of growing pains.

“Federal agencies need to figure out how AI can improve agency operations,” West said. “Finance is an obvious area because AI can help with fraud detection and internal agency operations. Many private companies are using AI to improve their operations and federal agencies need to do the same thing.”

Overall, both West and Sant-Miller agree the potential for AI in the federal government and beyond is enormous.

“AI is the transformative technology of our time and government agencies need to figure out how to harness it for agency applications,” West said. “Every leading private company is using AI and agencies need to up their innovation game to take advantage of these products.”