Hackers Used AI to Build Zero-Day Exploit That Bypassed 2FA

5/12/2026
3min read
Denislav Manolov's Image
by Denislav Manolov
Crypto Expert at Airdrops.com
5/12/2026
3min read
Denislav Manolov's Image
by Denislav Manolov
Crypto Expert

Google Confirms First Known AI-Assisted Zero-Day Attack

Cybercriminals used an AI model to discover and weaponize a previously unknown software vulnerability capable of bypassing two-factor authentication, according to a new report from Google’s Threat Intelligence Group.

The flaw targeted a widely used open-source web administration tool and allowed attackers to bypass 2FA protections without needing to crack encryption or steal authentication codes directly. 

Google said the attackers were preparing a large-scale exploitation campaign before security teams intervened.

The incident marks the first confirmed case where Google directly linked AI tools to the discovery and development of a real-world zero-day exploit.

AI Helped Attackers Understand the Software’s Logic

According to Google researchers, the AI model was not simply scanning for broken code or software crashes.

Instead, it analyzed the application’s internal logic and identified contradictions in how authentication rules were implemented.

The system reportedly discovered a hidden condition that unintentionally bypassed two-factor authentication checks under certain circumstances.

Google explained: “Though frontier LLMs struggle to navigate complex enterprise authorization logic, they have an increasing ability to perform contextual reasoning.”

Researchers said the AI effectively understood the developer’s intended behavior and compared it against exceptions hardcoded into the system, eventually identifying the weakness.

Hackers Are Using AI as a “Force Multiplier”

Google warned that advanced AI coding systems are increasingly becoming “expert-level force multipliers” for cybercriminals.

The company noted that AI is now being used to:

  • identify software vulnerabilities,
  • generate exploit code,
  • automate malware creation,
  • and improve defense evasion tactics.

The report also stated that threat groups tied to:

  • China,
  • North Korea,
  • and Russia

are already integrating AI into cyberattack workflows.

According to Google, Russian-linked actors have used AI-generated obfuscation systems and decoy code to hide malware from security tools.

Meanwhile, Chinese and North Korean actors are reportedly focusing on AI-assisted vulnerability research and exploit development.

AI Models Are Becoming Better at Reverse Engineering

Google said the growing danger comes from AI systems becoming increasingly capable of understanding how applications are supposed to work.

Traditional scanners often look for:

  • broken functions,
  • crashes,
  • or obvious security failures.

AI systems, however, can analyze software behavior contextually and identify “strategically broken” logic that may appear normal to automated security tools.

That capability dramatically lowers the barrier for sophisticated exploit creation.

Researchers Debate How Big the Threat Really Is

Despite Google’s warning, some cybersecurity researchers argue the threat of AI-powered hacking is still being exaggerated.

A separate study led by University of Cambridge reviewed more than 90,000 cybercrime forum discussions and found that most criminals currently use AI primarily for:

phishing campaigns,

spam generation,

and social engineering.

The study suggested that highly advanced “AI super hackers” remain relatively uncommon for now.

Researchers wrote: “The role of jailbroken LLMs as instructors is overstated.”

Still, the Google report shows that sophisticated threat actors are already experimenting with more advanced AI-assisted exploit development techniques.

Google and AI Security Concerns Continue to Grow

The findings arrive during a period of growing concern around the security risks tied to AI-powered coding tools themselves.

Earlier this year, Google patched a prompt injection vulnerability affecting its Antigravity AI coding platform after researchers discovered attackers could potentially execute commands through manipulated prompts.

At the same time, Anthropic reportedly restricted access to its Claude Mythos model after internal testing revealed it could identify thousands of previously unknown software vulnerabilities.

Security researchers increasingly fear that AI systems are accelerating both cyber defense and cyber offense simultaneously.

Cybersecurity Experts Warn the Industry Is Entering a New Era

The broader concern among researchers is not simply that AI can help hackers write code faster, but that it can reason about software behavior in ways that were previously limited to highly skilled human experts.

Mozilla researchers recently described the pace of discovery as overwhelming, warning: “For a hardened target, just one such bug would have been red-alert in 2025.”

The rise of AI-assisted exploit development is now forcing cybersecurity firms, governments, and software developers to rethink how quickly vulnerabilities can be discovered and weaponized.

Share with your friends on social media:

Join the community and don't miss a crypto giveaway.

Subscribe for updates by e-mail with the latest research reviews, airdrop news, reward programs, event updates about upcoming airdrops.

By entering your email address you are accepting our Terms & Conditions and Privacy & Cookie Policy.