Auburn researcher moderates panel on using AI for cybersecurity

By Brian Wesley

Published: Mar 31, 2020 9:26:00 AM

Daniel Tauritz (left) moderated a panel discussion on the “Use of AI for Cybersecurity.”

Auburn University cybersecurity researcher Daniel Tauritz recently moderated a panel discussion on the “Use of AI for Cybersecurity” at the Intelligence National Security Alliance’s spring symposium in Arlington, Virginia.

The symposium, themed “Building an AI Powered Intelligence Community,” featured a variety of keynotes and moderated discussions offering insight into the nuances and applications of AI technology, from fundamental tools and technologies to AI’s role in research and development and in workforce development. Tauritz moderated the panel on how AI can be applied to defend against cyberattacks and fortify national security.

“One of the greatest threats to national security is that our adversaries may employ AI agents to attack us via cyberspace faster than human defenders can respond,” said Tauritz, associate professor of computer science and software engineering and chief cyber AI strategist for the Auburn Cyber Research Center. “It is therefore paramount that we focus as a nation on dominating the fields of AI and cybersecurity in order to field AI agents capable of defending against this threat, as well as employ AI to augment the capabilities of human defenders to identify high-consequence adversarial strategies and corresponding defenses.”

The panelists for “Use of AI for Cybersecurity” included Martin Stanley, senior technical advisor at the Cybersecurity and Infrastructure Security Agency of the U.S. Department of Homeland Security; Raffael Marty, vice president of research and intelligence at the cybersecurity company Forcepoint, headquartered in Austin, Texas; and Angela McKay, senior director of cybersecurity strategy and policy at Microsoft.

The speakers discussed several opportunities and threats posed by the use of AI in cybersecurity, including using AI to recognize cyberattacks, the limitations of the technology and building AI to assist with training human defenders. A key discussion point was how best to protect civilians’ privacy and safety while confronting cyberattacks from adversaries.

“In order to minimize the risk of collateral damage, particularly with the aim to avoid civilian impact, AI agents will need contextual knowledge to distinguish between self and non-self, and between military and civilian targets,” Tauritz said. “Additionally, they will need the equivalent of Isaac Asimov’s ‘Three Laws of Robotics’ – suitably modified for warfare – in order to be imbued with moral values.”


Media Contact: Chris Anthony, chris.anthony@auburn.edu, 334.844.3447
