Public verification signals.
Real, live numbers about which checks rejected suspicious or invalid attempts in the last 7 days. This page exists for transparency, not to pretend one dashboard can prove an unbeatable security posture. Updated every minute, cached for 60 seconds at the edge.
The important distinction: these numbers describe how our current rules behaved in production. They do not mean every rejection was a sophisticated bot, and they do not mean every attacker on the internet is covered forever. They are a public view into how the system works today.
Which checks rejected them
- Sim rejected: 1 · 25.0%
- Honeypot: 0 · 0.0%
- Replay: 0 · 0.0%
- No interaction: 3 · 75.0%
- Too fast: 0 · 0.0%
- Forged token: 0 · 0.0%
- Bad secret: 0 · 0.0%
How we count rejections
A rejected attempt is any verification flow that reached our backend, claimed to have passed a game, and was rejected by one of these checks:
- Honeypot: An invisible input field the widget plants in its Shadow DOM. Real users never see it; bots that scrape every input and fill them all in trip it instantly.
- Too fast: The challenge JWT carries a server-issued `iat` timestamp, and we require at least 800ms between issuance and submission. Humans can't play a 5-second game in 200ms; bots replaying tokens often try.
- No interaction: The widget flips a flag the moment a real pointer, touch, or key event fires inside the canvas. Headless browsers without input synthesis fail this check.
- Replay: Each token has a single-use `jti`, and the redeem update is atomic: the second time the same token shows up, it's a guaranteed 409. Common in stolen-token attacks.
- Forged token: Tokens are signed with RS256 (asymmetric crypto). A bot that fabricates a token with a wrong signature gets a flat 401 from the verifier.
- Bad secret: The customer's server verifies with its secret key. Mismatched secrets, typically scraped public keys reused in credential stuffing, fail Argon2id verification.
- Sim rejected: The most implementation-specific category. The game's outcome is re-simulated server-side from its private seed and the user's input event log. If the events could not have produced the claimed score under the original layout, the token is rejected. This catches naive tampering and impossible scores; it is not a behavioral check.
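The "Too fast" and "Replay" rules come down to two comparisons at redeem time. Here is a minimal sketch, assuming already-decoded JWT claims (signature verification would happen earlier, via an RS256-capable JWT library) and an in-memory set standing in for the atomic database redeem; all names are hypothetical:

```python
import time

MIN_ELAPSED_MS = 800  # hypothetical constant matching the "Too fast" rule


class TooFastError(Exception):
    """Submission arrived before the minimum play time elapsed."""


class ReplayError(Exception):
    """Token's jti was already redeemed; maps to an HTTP 409."""


class TokenRedeemer:
    def __init__(self):
        # Stand-in for a database table with an atomic conditional update.
        self._redeemed = set()

    def redeem(self, claims, now_ms=None):
        now_ms = now_ms if now_ms is not None else int(time.time() * 1000)
        # "Too fast": require at least MIN_ELAPSED_MS since issuance.
        # JWT `iat` is in seconds, so convert before comparing.
        if now_ms - claims["iat"] * 1000 < MIN_ELAPSED_MS:
            raise TooFastError("submitted too soon after issuance")
        # "Replay": each jti may be redeemed exactly once.
        if claims["jti"] in self._redeemed:
            raise ReplayError("token already redeemed")
        self._redeemed.add(claims["jti"])
        return True
```

In production the redeemed-`jti` check would be a conditional database write so that two concurrent redeems cannot both succeed; the in-memory set only illustrates the rule.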
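The "Bad secret" check is a keyed-hash comparison on the customer's side. Argon2id is not in the Python standard library, so this sketch swaps in scrypt as a stand-in KDF purely to show the shape of the check; the function names are hypothetical:

```python
import hashlib
import hmac


def hash_secret(secret: bytes, salt: bytes) -> bytes:
    # Stand-in KDF: the real system verifies with Argon2id; scrypt is
    # used here only because it ships with the standard library.
    return hashlib.scrypt(secret, salt=salt, n=2**14, r=8, p=1)


def secret_matches(stored_hash: bytes, salt: bytes, presented: bytes) -> bool:
    # Constant-time comparison so a mismatch leaks no timing signal.
    return hmac.compare_digest(stored_hash, hash_secret(presented, salt))
```

A scraped public key presented as a secret hashes to something entirely different, so the comparison fails and the attempt lands in the "Bad secret" bucket.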
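The "Sim rejected" check can be pictured as a deterministic replay: derive the layout from the private seed, run the client's event log through it, and compare scores. The game logic below is invented for illustration (a tiny click-the-targets grid); only the pattern matches the description above:

```python
import random


def replay_score(seed, events, grid=5):
    # Layout is a pure function of the private seed, so the server can
    # reconstruct it exactly. Three hypothetical target cells.
    rng = random.Random(seed)
    targets = {(rng.randrange(grid), rng.randrange(grid)) for _ in range(3)}
    score = 0
    for ev in events:
        if ev["type"] == "click" and (ev["x"], ev["y"]) in targets:
            score += 1
            targets.discard((ev["x"], ev["y"]))  # each target counts once
    return score


def verify_claim(seed, events, claimed_score):
    # "Sim rejected": the event log must reproduce the claimed score.
    return replay_score(seed, events) == claimed_score
```

A tampered submission that inflates the score, or an event log that never touched a target, re-simulates to a different number and is rejected.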
What these numbers don’t mean
We don’t count humans who failed a game. By design, a soft fail (timeout, missed target, hit too many bombs) never reaches our backend — the widget just shows a “Try again?” panel and the player gets a fresh challenge. So everything in the categories above is much more likely to reflect automation, abuse, or invalid integration behavior than a human having a bad day.
We also don’t count site-misconfiguration failures as bots. Origin mismatches, non-whitelisted domains, and unknown site keys are configuration issues on the customer side, surfaced separately on the project dashboard.
We also do not use this page to claim we outperform giant vendors on raw abuse intelligence. Their scale is different. Our claim is narrower and more defensible: we show the verification rules we run, we publish the signals they reject, and we keep the privacy and UX trade-offs explicit.