top of page

Adversarial Misuse of Generative AI


Google's New AI Risk Report-How Adversaries Are Exploiting LLM's & How to Stay Secure
Google's New AI Risk Report-How Adversaries Are Exploiting LLM's & How to Stay Secure

AI, like all transformative technologies, can be used for both beneficial and malicious purposes. The recent release of DeepSeek, which significantly reduces the cost of AI development and deployment worldwide, presents a new opportunity for malicious actors due to its open-source nature.

Top Malicious Use Cases of AI by Threat Actors

  1. Generating highly convincing phishing emails and social engineering content

  2. Creating misinformation campaigns to manipulate public opinion

  3. Poisoning training data to bias LLM outputs

  4. Developing sophisticated malware with custom code generation

  5. Exfiltrating sensitive information by crafting clever prompts to extract data from LLMs


These threats leverage generative AI’s ability to produce human-like text, deceiving users and automating cyberattacks at scale. Since the advent of ChatGPT 3.5, hackers have increasingly incorporated generative AI into their operations, and this trend is accelerating.


A Look Ahead

Following the release of DeepSeek, Google's Threat Intelligence Group (GTIG) has published new research detailing how cyber threat actors interact with AI-powered assistants to facilitate operations. While current AI models, such as Google’s Gemini, have robust safety measures, emerging AI technologies—particularly those developed in China—pose significant competitive and security challenges for Western enterprises.

The Use of Generative AI by Threat Actors

GTIG focused their analysis on government-backed threat actors using the Gemini web application. The report covers new findings across Advanced Persistent Threat (APT) groups and Information Operations (IO) actors tracked by GTIG. Using a combination of analyst reviews and LLM-assisted analysis, they investigated prompts submitted by these actors to misuse Gemini.

Key Definitions:

  • Advanced Persistent Threat (APT): Government-backed hacking groups engaged in cyber espionage and destructive network attacks.

  • Information Operations (IO): Coordinated, deceptive efforts to manipulate online audiences, often via fake accounts and comment brigading.

GTIG’s analysis revealed that while adversaries leverage generative AI for research, content creation, and coding, they have not yet developed breakthrough cyber capabilities using these tools. However, advancements in AI—especially from aggressive Chinese AI firms—are expected to shift the risk landscape.

Key Trends in Adversarial AI Use

  • Enhanced Cyber Operations – State-backed threat actors, particularly from Iran and China, are leveraging generative AI for reconnaissance, vulnerability research, and phishing campaigns.

  • Influence Operations – AI is increasingly being used to generate and localize content for disinformation campaigns, with China and Iran leading the charge.

  • Malware Development & Evasion – Underground forums now offer jailbroken AI models optimized for malware creation and bypassing security measures.

  • AI-Assisted Productivity Gains for Adversaries – Even without novel capabilities, generative AI allows cybercriminals to scale their operations more efficiently, particularly in phishing and social engineering campaigns.

The Role of Chinese AI in the Emerging Threat Landscape

China’s AI advancements are setting the stage for new cybersecurity challenges. State-backed AI development initiatives are producing LLMs that could rival or surpass Western models, introducing significant implications:

  • Scalability of Cyber Threats – More powerful AI models allow both state-sponsored and independent adversaries to scale their operations.

  • Strategic AI Deployment for Espionage – Chinese-backed groups historically use AI for reconnaissance and social engineering; future AI iterations could streamline these operations even further.

  • Proliferation of AI-Powered Influence Operations – China’s ability to integrate AI into state-driven propaganda efforts remains a significant geopolitical concern.


Safeguarding Enterprises: Strategic Considerations for CIOs & CISOs

Given AI’s role in both innovation and exploitation, security leaders must proactively reinforce their defenses.

Actionable Steps:

  1. Implement AI Risk Management Frameworks – Align security strategies with frameworks such as Google’s Secure AI Framework (SAIF), NIST’s AI Risk Management Framework, and MITRE’s AI threat models.

  2. Enhance Threat Intelligence Capabilities – Invest in AI-driven threat intelligence tools to detect and mitigate adversarial AI misuse.

  3. Adopt AI Governance Policies – Establish enterprise-wide AI usage policies to prevent vulnerabilities in AI-powered workflows.

  4. Monitor Emerging AI Technologies – Stay informed about Chinese AI advancements and their implications for cybersecurity.

  5. Collaboration & Information Sharing – Engage with industry groups and government agencies to share intelligence on AI-driven threats.

Conclusion

The rise of adversarial AI misuse represents a new paradigm in cybersecurity. While current AI tools include safety features, China’s rapid AI advancements could introduce unforeseen risks that Western enterprises must proactively prepare for.


For CIOs and CISOs, the time to act is now—by integrating AI risk management, enhancing cybersecurity frameworks, and staying ahead of emerging AI threats, organizations can protect themselves in an evolving digital landscape.


More to Come

Future reports will explore specific defensive strategies for enterprises looking to mitigate AI-related security risks.


Sources

Google GTIG, 1/29/25


 
 
 

Comments

Rated 0 out of 5 stars.
No ratings yet

Add a rating
bottom of page