AI in Cybersecurity: Double-Edged Sword or Game-Changer?

Share This Post


Artificial Intelligence (AI) is transforming cybersecurity, but not all that glitters is gold. On one side, it empowers defenders with unprecedented capabilities—from faster threat detection to automated response. On the other, it arms cybercriminals with tools to launch faster, more convincing, and scalable attacks. For organisations caught in this technological arms race, the stakes have never been higher. So, is AI a double-edged sword or the game-changing solution security teams have been waiting for?

The Rise of AI-Driven Attacks

Cybercriminals are no longer relying on brute force or poor grammar to breach systems. They’re using AI to launch smarter, faster and far more personalised attacks. “Cybercriminals are no longer relying on brute force alone to get into systems,” explained Robert Cottrill, Technical Director at ANS. “AI is allowing them to scale attacks with precision, automating reconnaissance, generating realistic phishing content, and adapting tactics in real time.”

Robert Cottrill, Technical Director at ANS.

Generative AI has given threat actors new firepower. “Threat actors are not just targeting AI but also harnessing it,” David Cummins, SVP EMEA at Tenable, told Croner-i. “They have a number of powerful tools at their disposal, including AI-driven virtual assistants that can streamline and amplify their attacks.”

One rapidly growing threat is AI-enhanced phishing. Gone are the days of emails riddled with spelling errors. “Attacks can now closely mimic local vernacular, internal corporate lingo, and professionalism,” said Shobhit Gautam, Staff Solutions Architect at HackerOne. “Old tells are no longer reliable.” AI is also helping attackers identify vulnerabilities faster, reverse engineer security tools, and automate malware development.

The results are sobering. Cummins referenced the case of FunkSec, a previously little-known group that leapfrogged more established cybercrime gangs using AI-developed ransomware code. “FunkSec is thought to have been responsible for more incidents than any other cybercrime group,” he told Silicon UK.

Beyond emails, voice deepfakes have emerged as a serious risk. “A quarter of people have now experienced an AI voice clone, with 77% of victims losing money as a result,” said Gautam. Threat actors have used these clones to mimic CEOs and authorise fraudulent wire transfers worth hundreds of thousands of pounds. “Just a small number of audio files can achieve a 95% voice match,” he explained.

Peter Garraghan, CEO of Mindgard, pointed out that while the types of attacks aren’t new—phishing, credential harvesting, social engineering—AI has dramatically increased their efficiency and reach. “AI now enables the generation of hyper-personalised content at speed and scale,” he said.

In short, attackers are getting faster, smarter, and harder to detect. As Cummins warned, “There is a danger that the horse won’t just bolt through the open gate but charge through it and may even destroy the gate entirely.”

AI’s Role in Strengthening Cyber Defences

Despite the threats, AI isn’t just benefiting attackers—it’s also equipping defenders with tools to fight back. AI is being used to detect abnormal patterns, automate incident responses, and process vast volumes of data at speeds no human could match.

“AI is especially effective in threat detection, monitoring vast datasets in real time, identifying behavioural anomalies, and spotting cyber threats,” said Cottrill. It can also automate responses to low-level threats, giving human analysts more time to focus on strategic challenges.

One of the most significant breakthroughs is in context-driven decision-making. “Harnessing the power of GenAI enables security teams to work faster, search faster, analyse faster and ultimately make decisions faster,” Cummins told Silicon UK. “It’s like Google Translate for cyber defenders.”

GenAI is showing promise in SIEM rule generation, malware analysis, and even translating complex machine code into readable English. “This helps take the guesswork out of the process, recommending the exact path to remediation,” Cummins explained.

Security researchers are also using AI as a copilot to identify software vulnerabilities before hackers do. “AI helps them conduct more security testing and reach farther and deeper areas of the attack surface,” said Gautam. Research from HackerOne suggests that 38% of security researchers now use AI, with 20% seeing it as essential.

AI also plays a key role in collaborative security. It can translate dense technical reports into clear, actionable guidance—ensuring that teams across an organisation can coordinate effectively. “Faster processes and automation allow security teams to focus on strategically important tasks,” Gautam explained.

For high-risk threats, AI is proving especially useful in detecting advanced persistent threats (APTs) and zero-day vulnerabilities. “AI analyses massive data sets for abnormal patterns to identify new threats quickly,” said Niko Maroulis, VP of AI at Hack The Box.

Niko Maroulis, VP of AI at Hack The Box.
Niko Maroulis, VP of AI at Hack The Box.

And AI is even being used to protect other AI systems. “LLMs are increasingly being brought to the fight, serving as intermediary layers to defend other LLMs from threats like prompt injection and data leakage,” Garraghan noted.

Limitations, Ethics and the Human Element

But AI is not a silver bullet. It comes with major limitations—technical, ethical, and operational—that no business can afford to ignore.

First, there’s the data problem. “If you fail to educate the AI model correctly, then the model fails to deliver reliable results,” said Cummins. “It’s gold in, gold out—and vice versa.” AI is only as good as the data it’s trained on. If that data is biased, incomplete or inaccurate, the output will be flawed.

False positives are a significant issue too. “Regular false positives can overwhelm security teams, contributing to employee burnout and a weaker security posture,” said Maroulis. AI-generated bug reports can also hallucinate, leading developers on time-wasting wild goose chases. “Some reports look realistic but are actually nonsensical,” explained Gautam.

AI also struggles with nuance. “It can identify patterns, but not the intent behind them,” Cottrill told Silicon UK. That’s why human oversight is still crucial. “A human-in-the-loop approach is especially important when dealing with deception-based attacks,” Garraghan added.

Security teams must also grapple with privacy concerns. “It’s crucial to balance effective threat detection with responsible data use,” Cottrill explained. “Companies must ensure their AI systems comply with privacy regulations, are transparent in their decision-making, and avoid excessive surveillance.”

GenAI operates by analysing vast amounts of data. If mishandled, this can lead to breaches, manipulation, and loss of customer trust. “AI systems require robust security measures to minimise the risk of unauthorised access,” said Gautam. “If malicious attackers breach these systems, they can gain access to confidential data or manipulate training datasets.”

And then there’s the question of choosing the right tools. Businesses need to look past the hype and ask hard questions. “Ask how the model was trained, how it handles adversarial inputs, and whether it can explain its decisions,” Garraghan advised. Transparency, integration, vendor trustworthiness, and compliance certifications should all be part of the assessment process.

Ultimately, the success of AI in cybersecurity hinges on people—how well they train the models, interpret the data, and respond when things go wrong. As Maroulis summed up, “AI needs constant refinement and human oversight to stay effective.”

So, is AI shifting the balance in favour of defenders? The jury is still out. “In such a close-run race, it can be hard to tell who is in front,” Gautam said. “For the time being, security teams hold the upper hand. However, bad actors are not far behind.”

Cottrill agreed: “Right now, AI is escalating capabilities for both sides. The difference lies in how organisations adopt it.” With the right foundations—clean data, cloud security, and trained teams—AI could be a game-changer. But without them, it’s just another risk vector.

AI is reshaping the cybersecurity landscape at breakneck speed. While it offers powerful tools for defence, it also enables more sophisticated attacks. The result is a high-stakes arms race where speed, data quality, and strategic use of resources are everything.

Organisations that treat AI as a plug-and-play solution risk being outpaced. Success depends on integrating AI thoughtfully pairing its strengths with human expertise, ethical oversight, and robust infrastructure. The goal isn’t just to keep up with cybercriminals, but to stay one step ahead. With the right balance, AI can be a force multiplier—not a liability.

Mike Britton, CIO at Abnormal Security
Mike Britton, CIO at Abnormal Security.
Mike Britton, CIO at Abnormal Security.

How are cybercriminals leveraging AI to enhance the scale and sophistication of their attacks?

“As generative AI adoption has surged, criminals are adapting it for nefarious purposes, including to write malicious code or to write malicious social engineering emails.

“Many cybercriminals are exploiting commercial LLMs like ChatGPT to aid in these attacks, but we’re also seeing the emergence of a number of malicious LLMs specifically catered to threat actors. One of the most recent LLMs we’ve seen is that of GhostGPT. 

“GhostGPT likely either uses a wrapper to connect to a jailbroken version of ChatGPT or an open-source large language model (LLM), effectively removing any safeguards. By eliminating the ethical and safety constraints typically built into AI models, GhostGPT can provide direct, unfiltered answers to sensitive or harmful queries that would be blocked or flagged by conventional AI systems.”

What specific use cases—like deepfakes or AI-generated phishing—are you seeing in real-world attack scenarios?

“One of the most prominent ways AI is used in cybercrime is to facilitate social engineering attacks. This includes both deepfakes (audio and video impersonations) as well as highly targeted AI-generated phishing and business email compromise campaigns. 

“In these attacks, threat actors use LLMs to craft emails that mimic specific individuals or vendors in order to manipulate their victims – such as tricking finance teams into executing fraudulent wire transfers – even translating messages to local languages to improve success rates. 

“Adversaries are using AI for reconnaissance as well, scanning social media, leaked databases, and open-source platforms to gather intelligence on targets. These insights feed into more precise and contextualized attacks. Now more than ever, even inexperienced and petty criminals can easily create highly convincing email attacks at scale.”

On the defence side, where is AI proving most effective right now in detecting or stopping threats?

“AI is increasingly being used to fight back against AI-generated email attacks, with behavioural AI emerging as a powerful defence mechanism. Unlike traditional email security, which relies on static rules or known threat signatures, behavioural AI analyses patterns of human behaviour – how individuals typically write, when they send messages, who they interact with, and what kind of content is “normal” within an organisation. This baseline of normal behaviour becomes a critical filter for spotting anomalies that even well-crafted, AI-generated phishing emails can’t mimic perfectly.

“For example, if an employee suddenly receives an email from their “CEO” asking for a wire transfer – but the tone, timing, or sentence structure doesn’t align with how the CEO usually communicates – behavioural AI can flag it as suspicious, even if the email passes SPF, DKIM, and DMARC checks and contains no obvious malicious links. This contextual awareness makes it much harder for attackers to succeed, even when using generative AI tools to create flawless-looking messages.

“As attackers use AI to create more convincing lures, behavioural AI becomes essential because it’s not just analysing content, it’s analysing context. And that context is incredibly hard to fake, even for advanced generative models.”

What are the current limitations of AI in cybersecurity, and where does human judgment still play a critical role?

“Security teams should be careful not to over-rely on AI – it’s not a silver bullet, but rather a tool that can be layered with other defences for stronger overall security, while improving efficiency and elevating security team members in their roles.

This means that security leaders should not rely exclusively on AI-based email security technology to protect their organisations and should continue to implement security awareness training and other foundational security measures like multi-factor authentication and password management.”

How do you evaluate the risk of false positives or bias in AI-driven threat detection systems?

“Evaluating false positives and bias in AI-driven threat detection starts with the quality and diversity of training data. Models trained on narrow or unbalanced data can misclassify normal behaviour or miss real threats. To manage this, security teams should track precision and recall, adjusting sensitivity thresholds to balance detection accuracy with operational noise.

“Behavioural AI can be especially sensitive to changes in routine, such as users working from new locations or taking on new roles. Systems need to adapt gradually to evolving patterns without overreacting.

“Human-in-the-loop feedback is key as well. When analysts can mark alerts as accurate or not, the AI improves over time. Explainability also matters – models should show why they flagged something, helping teams validate alerts and spot bias in decision-making.”


What ethical or privacy concerns should companies keep in mind when deploying AI for security purposes?

“Regardless of the use case, any company that develops AI or uses AI in their products should prioritise transparency as much as possible, with assurances around how the AI operates and how they manage user data. 

“It’s also important to minimise the number of different data sources that AI systems have access to (i.e. only expose data that the system absolutely needs to access) and to prevent an AI system that consumes raw end user input to also execute actions over sensitive data. 

“Additionally, any good product will prioritise human accountability over system behaviour – meaning humans should be able to make the final decision when it comes to executing, and potentially undoing, any actions taken by AI.”

What should organizations look for when choosing AI-based cybersecurity tools—what questions should they be asking vendors?

“When choosing AI-based cybersecurity tools, organisations should prioritise solutions built natively with AI – not legacy products with AI bolted on. Native AI tools are designed from the ground up to process large volumes of data, detect novel threats in real time, and adapt continuously without heavy manual tuning.

“To assess this, ask vendors whether AI is central to their architecture or just an add-on. Dig into how their models are trained, how they detect zero-day threats, and how they reduce false positives. Look for explainable AI features that offer visibility into why decisions are made – this builds trust and speeds up response. You could also ask how often the system learns or retrains, whether it adapts to your environment, and how well it integrates with existing tools.”

Do you see AI ultimately shifting the balance in favour of defenders, or is it just raising the stakes on both sides?

“AI has proven to be a powerful asset for attackers, and organisations need to utilise AI defensively if they hope to keep pace. In the battle between bad AI and good AI, I believe that the defenders have the upper hand. After all, we’ve got a major advantage that malicious actors don’t: unique personal and organisational context. A deep understanding of how we interact with our communications applications is what enables good AI to automatically detect and remediate even the subtlest signs of malicious activity.”



Source link

Related Posts

- Advertisement -spot_img