top of page

Aggressor Tech -The AI Arsenal for 2025

AI Armed Digital Threats - The Top 6 Approaches 
AI Armed Digital Threats - The Top 6 Approaches 

In our previous piece, we shared an overview of the evolving battleground of Threat Actors & Defensive Blockers leveraging AI in Cybersecurity in 2025. Here we begin a deeper dive into the specific domains involved on the front lines, starting with the AI Threat Landscape and Top Tactics already in play by bad actors around the world.


The Trend

By 2030, AI agents are expected to autonomously conduct comprehensive cyberattacks, seamlessly executing all phases of the cyber kill chain—from initial reconnaissance to data exfiltration and monetization—without human oversight. In many ways, that future is already reality, necessitating advanced AI-driven defense mechanisms to counteract these sophisticated attacks.


We begin with a comprehensive survey of the most impactful tools & tactics already in use by threat actors globally.


AI-Enabled Threat Weapon Vectors

Vector

Core Idea

Representative AI Tools / Tactics from the Table

1. Social-Engineering & Impersonation Weapons

Use generative AI to deceive humans directly—by crafting persuasive messages or synthetic media.

• AI-personalized Phishing-as-a-Service kits & QR “quishing”

• WormGPT-style illicit LLM chatbots

• FraudGPT & Telegram LLM bots

• Deep-fake live-video & voice BEC + MFA-bypass voice/SMS

• AI-enhanced pig-butchering & romance-investment scams

2. Credential & Session Assault Weapons

Automate large-scale account abuse or traffic manipulation to breach perimeter defences.

• AI-optimised credential-stuffing & password spraying

• AI-optimised bad-bot swarms (ATO, scraping)

• AI-driven DDoS & botnet command optimisation

3. Autonomous Intrusion & Movement Weapons

Agents or malware that plan, adapt, and progress through environments with minimal human control.

• Autonomous multi-step attack agents (“AutoGPT-like”)

• BlackMamba polymorphic malware

4. Payload Creation & Exploit Weapons

Generate or customise malicious code—and even negotiate—faster than defenders can respond.

• AI-powered ransomware builders & negotiation bots

• LLM-assisted zero-day discovery & exploit writing

5. Model-Manipulation Weapons

Target defenders’ or public AI systems themselves to subvert, poison, or steal them.

• Prompt-injection & jailbreak attacks on defenders’ LLMs

• Adversarial ML (data-poisoning & model-stealing)

6. Identity-Fabrication Weapons

Produce synthetic personas and documents to bypass KYC / fraud controls.

• Synthetic-identity generators

These Vectors consolidate into related the 6 'threat families', as follows:


  • Vectors 1 & 2 exploit the human–application interface—they remain the top entry vectors by volume and business-email losses.

  • Vector 3 reduces dwell time from days to minutes, stressing SOC response cycles.

  • Vector 4 shortens the exploit and ransom negotiation loop, driving higher attack ROI.

  • Vector 5 shifts the battlefield into defenders’ own AI tooling—an emerging blind spot.

  • Vector 6 fuels a parallel wave of financial fraud outside classic IT intrusion telemetry


Breakdown


​Adversaries are rapidly integrating AI into every stage of the cyber kill chain. On the reconnaissance and social engineering front, AI enables highly convincing and large-scale deception. For example, generative models can produce spear-phishing emails or chat messages in flawless, native-level language (even across multiple languages and dialects) to trick targets.


Attackers are using deepfake video and voice technologies to impersonate trusted executives in real-time video calls, exploiting the “seeing is believing” bias – a deepfake was even used in Hong Kong to defraud a finance executive of $25 million by mimicking their CFO.


Voice cloning and AI dialogue systems (unconstrained by ethics filters) facilitate vishing (voice phishing) at scale. These AI-driven social engineering attacks are automated, adaptive, and more personalized than ever, making them harder for victims to recognize.


On the weaponization and delivery fronts, threat actors employ AI to craft autonomous malware that can outmaneuver traditional defenses. Notably, researchers demonstrated “BlackMamba,” an AI-synthesized polymorphic keylogger that dynamically rewrites its own code via an AI at runtime to evade detection. This proof-of-concept used a benign program to query a large language model (OpenAI’s API) and mutate its malicious payload on the fly, illustrating how generative AI can produce endlessly unique malware variants.


In underground forums, bespoke malicious AI models like WormGPT and FraudGPT emerged in 2023 as “blackhat” alternatives to ChatGPT. Trained on malware development and free from ethical safeguards, WormGPT was sold as a subscription service (€60–€100/month) and was praised by cybercriminal users for “writing snippets of malware [and] phishing campaigns” effectively. Although some of these illicit services were short-lived, they set a precedent for threat actors to develop or acquire AI tools specialized for offense.


AI also contributes to later stages of the kill chain, such as exploitation and command-and-control. Attackers automate vulnerability discovery and exploit development using AI-driven analysis of target software, significantly compressing the time required to find zero-days or misconfigurations.


During exploitation, “Agentic” AI (autonomous agents) can dynamically adapt an ongoing attack – for instance, adjusting privilege escalation techniques on the fly in response to obstacles.


For command-and-control activations, AI can help malware decide on stealthy communication strategies or even execute multi-step objectives autonomously.


Next Up


In our next piece, we will dive into the top specific threat tools used by threat actors here in 2025 and look at the threat actor pyramid.


Sources

  1. Abnormal Security – What Happened to WormGPT?​Abnormal AI

  2. Dark Reading – Phishing Kit “Darcula” Gets AI Upgrade​Dark Reading

  3. Adaptive Security – Quishing: QR-Code Phishing Attacks​Adaptive Security

  4. CNN – Finance Worker Pays $25 M After Deep-Fake Video Call​CNN

  5. Chainalysis – Crypto Pig-Butchering Scams Leverage AI​Chainalysis

  6. The Hacker News – How AI Agents Will Transform Credential Stuffing​The Hacker News

  7. A10 Networks – Cybercriminals Leveraging AI in DDoS​A10 Networks

  8. Reuters – AI Agents: Greater Capabilities & Enhanced Risks​Reuters

  9. HYAS – BlackMamba: AI-Synthesised Polymorphic Keylogger​Threat Investigation | HYAS

  10. Sotero – AI-Powered Ransomware & Negotiation Bots​Sotero

  11. SD Times – LLM-Assisted Zero-Day Vulnerability Detection​SD Times

  12. Palo Alto Networks – What Is a Prompt Injection Attack?​Palo Alto Networks

  13. NIST AI 100-2e2025 – Adversarial Machine Learning​NIST Publications

  14. North Carolina State U. – New Technique for Stealing AI Models​Electrical and Computer Engineering

  15. IDVerse – Deep-Fake Fraud Is Up 2,137 %​IDVerse

 
 
 
bottom of page