Tags: Badge, Charles Herder, Dan Kaufman, Ida Wahlquist-Ortiz
2025

SEAaaS: Social Engineering Attacks-as-a-Service

Thanks to Dan Kaufman, I am an advisor to Badge. Badge has done something incredibly powerful with identity: enabling strong, cryptographic identity without any stored secrets. At Google, my teams contributed to the compromise-o-rama that is passkeys (an improvement over passwords, no doubt, but if you were to ask yourself “exactly how is Apple syncing passkeys when I get a new device?” you wouldn’t love the answers), so when I met the Badge team I was excited to help out in any way I could.

Why provably human matters more now than ever before

Cheaply and reliably authenticating your presence on any device without having to store the crown jewels of either secret keys or — way worse — a centralized repository of biometrics is a Holy Grail challenge of cryptography, which is why Badge’s advancement is so powerful. For all the obvious use cases — multi-device authentication, account recovery, human-present transactions — Badge is going to change how companies approach the problem and make auth, transactions, and identity fundamentally safer for people around the world.
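To make “no stored secrets” a bit more concrete: one classic cryptographic tool for this problem is the fuzzy extractor, which stores only public “helper data” and re-derives a key from a fresh, noisy biometric reading. Below is a toy Python sketch of the textbook code-offset construction using a simple repetition code. This is my own illustration of the general idea, not Badge’s actual scheme; every name and parameter in it is simplified for clarity.

```python
import hashlib
import secrets

# Toy fuzzy-extractor sketch (code-offset construction).
# Illustrative only: real systems use proper error-correcting codes and
# formally analyzed extractors. This is NOT Badge's scheme.

REP = 5  # repetition factor: each key bit is encoded as 5 repeated bits

def enroll(biometric_bits: list[int]) -> tuple[bytes, list[int]]:
    """Enrollment: derive a key plus public 'helper data' from a reading.

    Nothing secret is stored; only the helper data persists."""
    assert len(biometric_bits) % REP == 0
    key_bits = [secrets.randbelow(2) for _ in range(len(biometric_bits) // REP)]
    codeword = [b for b in key_bits for _ in range(REP)]        # repetition encode
    helper = [c ^ w for c, w in zip(codeword, biometric_bits)]  # code offset
    return _bits_to_key(key_bits), helper

def reproduce(noisy_bits: list[int], helper: list[int]) -> bytes:
    """Authentication: recover the same key from a *noisy* re-reading plus
    the helper data, tolerating up to 2 flipped bits per 5-bit block."""
    offset = [h ^ w for h, w in zip(helper, noisy_bits)]  # noisy codeword
    key_bits = []
    for i in range(0, len(offset), REP):
        block = offset[i:i + REP]
        key_bits.append(1 if sum(block) > REP // 2 else 0)  # majority decode
    return _bits_to_key(key_bits)

def _bits_to_key(bits: list[int]) -> bytes:
    return hashlib.sha256(bytes(bits)).digest()  # hash bits into a usable key

# Demo: a re-reading with a couple of flipped bits still yields the same key.
reading = [secrets.randbelow(2) for _ in range(40)]
key, helper = enroll(reading)
noisy = reading[:]
noisy[3] ^= 1
noisy[27] ^= 1  # simulate sensor noise
assert reproduce(noisy, helper) == key
print("same key recovered from noisy reading")
```

The point of the sketch is that the only persisted artifact is the helper data, and a slightly noisy reading at authentication time still reproduces exactly the same key, so there is no key vault and no biometric database to breach.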

And just in time. Because one of the many impacts of LLMs and GenAI is that a whole class of cyber attacks are about to become available to script kiddies around the world. Think of it as “Social Engineering Attacks as a Service” — SEAaaS, most definitely pronounced “See Ass.”

One of Badge’s founders, Dr. Charles Herder, and I just wrote an op-ed on the topic, “In an AI World, Every Attack Is a Social Engineering Attack.” What was remarkable about writing it was how many of the ideas we were discussing made headlines between starting the article and completing it.

As we wrote:

With the emergence of Large Language Models (LLMs) and Generative AI, tasks that previously required significant investments in human capital and training are about to become completely automatable and turnkey. The same script kiddies who helped scale botnets, DDoS (distributed denial of service), and phishing attacks are about to gain access to Social Engineering as a Service.

As we were drafting, the story broke about Claude being used in a wide-ranging set of attacks:

Anthropic, which makes the chatbot Claude, says its tools were used by hackers “to commit large-scale theft and extortion of personal data”.

The firm said its AI was used to help write code which carried out cyber-attacks, while in another case, North Korean scammers used Claude to fraudulently get remote jobs at top US companies.

All of these attacks add pressure to the same point: the need to know whether an actor, or the author of a piece of code, is who they claim to be. Increasingly sophisticated attackers leveraging cutting-edge frontier models will exploit any form of identity that is vulnerable to replay or credential theft.
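To see what “vulnerable to replay” means in practice, here is a minimal challenge-response sketch in Python. HMAC over a shared key stands in for whatever signature scheme a real deployment would use (including a stored-secret-free one); the server/client split and all names here are hypothetical. The only point is that a fresh challenge makes a captured response worthless.

```python
import hashlib
import hmac
import secrets

# Minimal challenge-response sketch showing why a fresh nonce defeats replay.
# HMAC with a shared key is a stand-in for a real signature scheme; this is
# an illustration, not any particular product's protocol.

key = secrets.token_bytes(32)  # provisioned secret, never sent on the wire

def server_challenge() -> bytes:
    return secrets.token_bytes(16)  # fresh random nonce per attempt

def client_respond(nonce: bytes) -> bytes:
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(nonce: bytes, response: bytes) -> bool:
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Legitimate login: respond to the nonce issued for this session.
nonce = server_challenge()
assert server_verify(nonce, client_respond(nonce))

# Replay: an attacker who captured a past (nonce, response) pair gets
# nowhere, because the server issues a new nonce for the new attempt.
captured_response = client_respond(nonce)
new_nonce = server_challenge()
assert not server_verify(new_nonce, captured_response)
print("replayed response rejected")
```

A static credential (a password, a long-lived token, a raw biometric template) is exactly the opposite of this: capture it once and you can replay it forever.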

As we wrote:

The same AI that is being used today to generate fraudulent content and influence discussions on the Internet is also capable of generating synthetic accounts that are increasingly indistinguishable from real, human accounts. It is now becoming economical to completely automate the process of operating millions of accounts for years to emulate human behavior and build trust.

All this even if we’re incredibly careful about how we use LLMs.

Come talk about this more

Scared? Curious? In the Bay Area? Come join us at the MIT Club of Northern California to hear Charles and me in conversation with Ida Wahlquist-Ortiz. It should be a very interesting discussion.