
Morse Code Post Tricks Grok's Wallet into Sending $200k in DRB Tokens

On May 4, a malicious actor tricked the verified wallet linked to the AI chatbot Grok into sending 3 billion DRB tokens — valued at roughly $155,000 to $200,000 at the time — to an unauthorized address on the Base network. The attacker pulled it off by posting a Morse code message on X that tagged @grok. Grok decoded the post, and that decoded text became a command for @bankrbot, a bot that handles token transfers. Bankrbot treated the command as executable and pushed the DRB tokens out of Grok's wallet.

How the attack worked

The scheme unfolded in four steps. First, the attacker noticed a Bankr Club Membership NFT sitting in a wallet associated with Grok; that NFT expanded transfer privileges beyond what a standard wallet could do. Next, the attacker posted a Morse code message on X, tagging @grok. Then Grok's AI decoded the Morse into plain text, which included a command for Bankrbot. Finally, Bankrbot read Grok's public reply as a legitimate instruction and transferred 3 billion DRB tokens to the attacker's wallet, address 0xe8e47...a686b.
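The decoding step at the heart of the attack is trivial to reproduce. The sketch below is a minimal Python Morse decoder; the payload and the decoded command are hypothetical, since the attacker's actual message has not been published in full. It shows how a string of dots and dashes in a public post becomes plain text that a downstream bot could parse as an instruction.

```python
# Minimal Morse decoder: letters separated by spaces, words by " / ".
# The payload below is a hypothetical stand-in for the attacker's post.

MORSE = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def decode_morse(message: str) -> str:
    """Translate a Morse string into plain text."""
    words = message.strip().split(" / ")
    return " ".join(
        "".join(MORSE[symbol] for symbol in word.split())
        for word in words
    )

payload = "... . -. -.. / - --- -.- . -. ..."
print(decode_morse(payload))  # SEND TOKENS
```

The point is that the decoding itself is not the exploit; the exploit is that the decoded output was treated as a command by a bot listening to Grok's replies.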

Bankr automatically provisions an X-connected wallet for every account that interacts with the platform, including Grok. That wallet is controlled by the X account owner. The attacker exploited that link, using Grok's own AI to generate the spend order.

The NFT that opened the door

Before the token transfer, the attacker identified a Bankr Club Membership NFT in a wallet Grok controlled. That NFT effectively upgraded the wallet's permissions. Without it, the standard wallet might not have had the authority to move such a large sum. The NFT acted as a privilege escalation key, letting the attacker turn a public tweet into a six-figure withdrawal.

Fund recovery and the bug bounty

Bankr developer 0xDeployer reported that 80% of the stolen DRB tokens have been returned. The remaining 20% are still being discussed with the DRB community. Some of that portion was kept by the attacker as an informal bug bounty, according to 0xDeployer. That means the attacker walked away with a payout — albeit one that the community may still negotiate.

What the incident reveals about AI-agent risks

The broader lesson isn't about Grok or Bankrbot specifically. It's that AI-agent risk is fundamentally a wallet-control problem. When a system treats model output — even a decoded tweet — as a spend authority, public commands become as dangerous as a stolen private key. The attacker didn't crack cryptography. They just found a way to make an AI say the right words to a bot that was listening.
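That failure mode can be sketched in a few lines. The command grammar, function names, and mitigation below are all hypothetical, not Bankrbot's actual implementation; the sketch only contrasts the dangerous pattern (any text that parses as a transfer is executed) with one possible check (only the wallet owner's own posts can authorize a spend).

```python
import re

def parse_command(text: str):
    """Extract (amount, token, address) from 'SEND <amt> <TOK> TO <addr>'.

    The grammar is a hypothetical stand-in for a bot's command format.
    """
    m = re.fullmatch(r"SEND (\d+) (\w+) TO (0x[0-9a-fA-F]+)", text.strip())
    return (int(m.group(1)), m.group(2), m.group(3)) if m else None

def unsafe_dispatch(text: str, transfer) -> bool:
    # Dangerous pattern: decoded model output is treated as spend
    # authority, no matter who caused that text to be produced.
    cmd = parse_command(text)
    if cmd:
        transfer(*cmd)
        return True
    return False

def safer_dispatch(text: str, author: str, owner: str, transfer) -> bool:
    # Hypothetical mitigation: a transfer is only honored when the post's
    # author is the wallet owner, so replies induced by a third party's
    # tagged message cannot move funds.
    if author != owner:
        return False
    return unsafe_dispatch(text, transfer)
```

In the incident's terms, the attacker's Morse post made Grok produce text that `unsafe_dispatch` would happily execute; an author check like `safer_dispatch` is one illustration of treating origin, not just content, as the authorization signal.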

No one has announced new safeguards yet, but the incident leaves an open question: how do you build a bot that trusts its own AI but doesn't trust everyone else's?