Google's security team disclosed on April 27, 2026 that a sweep of billions of web pages turned up real, in-the-wild prompt-injection payloads built specifically to hijack AI agents, with some payloads written to drain PayPal accounts. The finding was first reported by Decrypt and shared by the outlet on X.
This is the first time a hyperscaler has publicly confirmed that the agentic web is being seeded with operational, not theoretical, attack pages. As of April 27, 2026, broader crypto markets were soft on the day, with BTC at $76,729 (-1.8%), ETH at $2,287 (-3.3%), and the Fear and Greed index at 42 (Neutral) per CoinMarketCap, but the security implications here run well past today's tape.
What Google actually found
The Decrypt write-up describes Google's research as a scan across billions of indexed pages for content designed to manipulate large language models that browse the web on a user's behalf. The team identified real payloads, not honeypot dummies, embedded in pages an AI agent might visit during normal task execution: shopping, summarizing, paying invoices, or pulling enterprise data.
Some payloads target consumer surfaces. The PayPal-themed ones try to redirect or authorize payments while the user thinks the agent is doing something benign. Others target enterprise: payloads sit on pages an internal agent might fetch during research and try to trick it into exfiltrating data or sending it to attacker-controlled endpoints.
The mechanic is prompt injection. Instructions hidden in page content (sometimes invisible HTML, sometimes white-on-white text, sometimes phrased as if they came from the user) override the agent's actual task. The agent, dutifully following what it reads, executes the attacker's instructions instead of the user's.
Why a Google-scale signal matters
Prompt injection has been a known risk class since 2023. What is new here is provenance and scale. Google saying it has found real payloads across billions of pages is different in kind from a single researcher publishing a proof of concept. It implies that attackers have moved from concept demos to production seeding, and that they expect enough agent traffic to make the campaign worthwhile.
The PayPal angle is the part that should travel beyond the AI security crowd. Payments-focused agents are one of the first commercial applications of browsing LLMs, and PayPal has been a vocal early partner. If even a small fraction of agent-routed sessions hit a poisoned page during checkout, the attack class moves from research notes to chargeback queues.
The crypto and on-chain payments parallel
Crypto wallets and card programs are watching the same agent rollout closely. Several issuers have begun previewing AI assistants that can compare cards, top up balances, or route between fiat and stablecoins on a user's instruction. The same prompt-injection class that Google flagged for PayPal applies, with one important difference: a fraudulent on-chain transfer cannot be reversed the way a card chargeback can.
That gap is why the security architecture around agent payments is not yet settled. Hardware-key signing, allowlisted recipients, and per-task spend caps are the mitigations being floated, but very few consumer agents enforce them today. Readers thinking about self-custody options for their spending balance should weigh how exposed those balances would be if a future agent layer sits in front of them.
For a fuller picture of how stablecoin rails are colliding with consumer payments this week, see Western Union's planned Solana stablecoin and Stable Card launch and Musk saying X Money is close to launch, both of which depend on the same trust assumptions Google's research is pressuring.
What changes for users and operators
For everyday users, the takeaway is narrow. If you are letting an AI agent browse on your behalf with payment authority, treat that authority the way you would treat a co-signer: scoped, time-boxed, and recipient-limited. Do not connect agent flows to a primary wallet or a high-limit card without a second factor on the actual payment step.
For operators of agent products, the bar just rose. Provenance of fetched content (was this page reached via a trusted domain or a search result?), output checking on tool calls (does the resolved payment recipient match what the user originally asked for?), and human-in-the-loop confirmation for funds movement are no longer optional polish. Google's data implies that attackers are betting some teams will skip those steps.
Card programs that plan to layer AI assistants over stablecoin spending flows now have a concrete reason to ship those guardrails before the assistants, not after.
Overview
Google's confirmation that real prompt-injection payloads are live in the wild, including ones aimed at PayPal flows, marks a transition for AI-agent payments from theoretical risk to operational threat. The fix is not exotic: scoped authority, output checking, and human confirmation on funds movement. The pressure now is whether agent-payment products ship those controls before users adopt the convenience without them.








