Google flags first AI-assisted zero-day attack targeting 2FA

by Adam Forsyth



Google’s Threat Intelligence Group said it found a zero-day exploit that likely used artificial intelligence during discovery and weaponization. 

Summary

  • Google’s report links AI to a zero-day 2FA bypass in a popular admin tool.
  • The exploit required valid credentials first, but then removed the second authentication barrier for attackers.
  • Crypto users face added risk as AI agents, wallets, and connectors attract phishing attempts.

The exploit targeted a popular open-source, web-based system administration tool and allowed attackers to bypass two-factor authentication after gaining valid login details.

The group said it worked with the affected vendor to disclose the flaw and stop the planned mass exploitation campaign. Google did not name the tool, the vendor, or the threat actor behind the operation.

Exploit needed valid credentials first

The flaw did not give attackers full access on its own. Google said the bypass required valid user credentials before the attacker could skip the second login step. That detail matters because two-factor authentication often protects crypto accounts, exchange logins, developer dashboards, and wallet-linked services.

Google said the weakness came from a logic error, not a common coding bug such as memory corruption or poor input handling. The company described it as a high-level semantic flaw, where a hardcoded trust assumption conflicted with the tool’s 2FA checks.

Moreover, Google said it had “high confidence” that the actor likely used an AI model to support discovery and weaponization of the vulnerability. The company said the exploit script included educational comments, a hallucinated CVSS score, and a clean Python format often linked to large language model output.

The company also said it does not believe Gemini was used in the operation. Its report noted that China and North Korea-linked actors have shown interest in AI-assisted vulnerability research, including prompt-based security testing and large-scale analysis of known flaws.

Crypto security risks widen

The warning adds to rising concern over AI tools in crypto security. Separate reports have tracked OpenClaw-related phishing, where attackers used cloned websites and malicious wallet prompts to target developers and drain crypto wallets.

Other security coverage has also warned that AI agents can create new weak points when they process outside content, connect to third-party tools, or act without enough human approval. Those risks are more serious when agents can access wallets, private files, browser data, or account credentials.

Google said threat actors are also testing AI for malware support, defense evasion, information operations, and access to AI systems. It named malware families such as PROMPTFLUX, HONESTCUE, and CANFAIL as examples of tools using LLMs for obfuscation or decoy code.


