
New Grant Supports Study of AI's Role in Protecting Cybersecurity Systems
- by newswise
- Sep 30, 2025

Credit: Photo by Tonia Moxley for Virginia Tech.
The research team working on the cybersecurity project includes (from left) Xavier Pleimling, Sifat M. Abdullah, Cameron Mraz, Bimal Viswanath, Rudra Patel, and Brianna Detter.
BYLINE: Tonia Moxley
Newswise — From deceptive images to toxic chatbots, Bimal Viswanath, associate professor of computer science, has for years warned about the dark sides of artificial intelligence (AI).
Now he’s seen the light.
“We’re asking how generative AI can be used to improve security, not just harm it,” Viswanath said. “It’s about fighting fire with fire.”
Under a new two-year, $600,000 grant from the National Science Foundation (NSF) Security, Privacy, and Trust in Cyberspace Medium program, he and his research team aim to use generative AI — the same kinds of tools used to create deceptive and harmful content — to better secure online systems.
Solving the data problem
Cybersecurity faces a growing suite of threats as everything from business and banking to national security and defense relies on digital technologies. As cybercriminals and adversarial nations adopt AI tools to circumvent security systems, deploying protective AI systems makes sense.
But there’s a problem: a lack of real-world data to train new AI cyberdefense systems.
AI defense tools rely on algorithms that “learn” to recognize new threats from vast amounts of data about malicious behavior and attacks, as well as benign activity. But high-quality cybersecurity data is difficult to access. Companies and researchers often have only small, biased, or incomplete data sets. That limits the accuracy of threat-detection tools, leaving critical gaps in digital defense.
Viswanath’s team aims to change that by using generative AI tools to create realistic but artificially generated examples of cyber threats. By filling in the gaps, this synthetic data could help machine learning models detect new ransomware, phishing attempts, or other attacks.
“If we can generate this high-quality synthetic data, we can make existing security tools smarter without even changing their core design,” Viswanath said.
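To make the idea concrete, here is a minimal sketch (not the team's actual method; the data, features, and model below are hypothetical stand-ins) of how augmenting a detector's training set with synthetic samples can improve an unchanged classifier:

```python
# Minimal sketch of synthetic-data augmentation for threat detection.
# NOTE: This is an illustration, not the research team's method. The
# arrays below are random stand-ins for real and synthetic feature
# vectors extracted from network or malware telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-in for a small, real labeled dataset: 200 samples, 16 features,
# label 1 = malicious, 0 = benign.
X_real = rng.normal(size=(200, 16))
y_real = (X_real[:, 0] + 0.5 * X_real[:, 1] > 0).astype(int)

# Stand-in for samples a generative model might produce to fill gaps in
# the real data (here, simply more draws from the same process).
X_syn = rng.normal(size=(1000, 16))
y_syn = (X_syn[:, 0] + 0.5 * X_syn[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.5, random_state=0
)

# Baseline: detector trained on the scarce real data alone.
base = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Augmented: the same detector, its design unchanged, trained on
# real plus synthetic data.
aug = RandomForestClassifier(random_state=0).fit(
    np.vstack([X_train, X_syn]), np.concatenate([y_train, y_syn])
)

print("real only :", accuracy_score(y_test, base.predict(X_test)))
print("augmented :", accuracy_score(y_test, aug.predict(X_test)))
```

Note that the classifier itself is untouched; only its training data grows, which is the sense in which synthetic data can make existing tools smarter "without even changing their core design."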
Expanding security
Most approaches to cybersecurity focus on detecting or neutralizing a specific kind of threat, such as malware or network breaches. But Viswanath’s project, which includes collaborators Atul Prakash from the University of Michigan and Shirin Nilizadeh from the University of Texas at Arlington, intends to create a framework that can significantly enhance threat detection across several domains.
“This project has the potential to reignite fields that have been stagnant because of limited data,” Viswanath said. “With synthetic data, we can push past those roadblocks.”
The work also could have implications for students in Viswanath’s courses in AI and security, where they learn how to train algorithms on real-world data. Incorporating synthetic data into these lessons could help students explore solutions to cybersecurity’s toughest challenges.
“I want students to see not just the problems AI creates, but the opportunities it opens,” he said.
Changing the AI conversation
Viswanath has spent years studying the harms posed by generative AI and developing strategies to detect and thwart them. Now, the technology is an ally.
“Most conversations about AI and security focus on the dangers,” he said. “This project is about showing how AI can be part of the solution.”
The research begins in October. Viswanath will lead a team of graduate and undergraduate students as they build and test new tools that may turn one of the most disruptive emerging technologies into a defense system for the future.