Playbook: Survive a Fraud Attack

TL;DR
  • Hour 0-1: Confirm attack is real, identify vector (card testing, ATO, etc.), enable emergency friction, block bad IPs/devices
  • Hour 1-4: Pattern analysis (IPs, devices, BINs, emails), deploy targeted rules in priority order
  • Hour 4-24: Measure effectiveness, tune controls, gradually relax emergency measures
  • Post-attack: Root cause analysis—what would have detected this earlier?
  • Even in a crisis, you're running experiments: hypothesis, metric, kill criteria

A 24-hour response guide for active fraud attacks.

Every attack is an expensive lesson. Don't waste it.

Workflow Overview

Phase     | Key Tasks
----------|--------------------------------------------------------------------
Contain   | Confirm attack is real, block bad IPs/devices, notify fraud team
Analyze   | Pattern analysis (IPs, devices, BINs), update rules, expand blocks
Stabilize | Fine-tune rules, restore normal operations, track losses
Learn     | Root cause analysis, permanent changes, write incident report

Prerequisites

Before starting, ensure you have:

  • Access to fraud rules dashboard (processor or third-party)
  • Ability to deploy emergency velocity rules
  • Contact info for fraud team and on-call rotation
  • Understanding of common fraud attack types

When to Use This Playbook

  • Sudden spike in fraud transactions (2x+ baseline)
  • Card testing attack detected
  • Coordinated fraud ring identified
  • ATO wave hitting your accounts

Hour 0-1: Assess & Contain

Immediate Assessment

□ Confirm attack is real (not a false-positive spike)
□ Identify attack vector (see the triage sketch below):
  □ Card testing (high velocity, small amounts, high declines)
  □ ATO wave (failed logins, profile changes)
  □ Application fraud burst (similar applications)
  □ Transaction fraud spike (unusual patterns)
□ Estimate current exposure ($)
□ Identify affected segments/products
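
A rough triage heuristic for the vectors in this checklist, assuming you can pull the last hour of transactions and login attempts into plain Python dicts (field names and thresholds here are illustrative, not tied to any specific processor):

```python
# Rough triage heuristic. Field names (amount, declined, failed) and the
# cutoffs are illustrative -- map them to your own schema and baselines.
from statistics import mean

def classify_attack(txns, logins):
    """Guess the dominant attack vector from the last hour of activity."""
    if not txns:
        return "no transaction volume -- check ATO signals only"
    decline_rate = mean(1.0 if t["declined"] else 0.0 for t in txns)
    avg_amount = mean(t["amount"] for t in txns)
    failed_login_rate = mean(1.0 if l["failed"] else 0.0 for l in logins) if logins else 0.0

    if decline_rate > 0.5 and avg_amount < 5:
        return "card testing (high velocity, small amounts, high declines)"
    if failed_login_rate > 0.3:
        return "ATO wave (failed logins, profile changes)"
    return "transaction fraud spike (unusual patterns) -- inspect manually"

# Example: 1,000 one-dollar auths with ~80% declines looks like card testing.
txns = [{"amount": 1.0, "declined": i % 5 != 0} for i in range(1000)]
print(classify_attack(txns, logins=[]))
```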

Immediate Containment

□ Enable emergency friction (CAPTCHA, rate limits)
□ Block identified bad IPs/devices
□ Lower auto-approval thresholds
□ Increase manual review queue priority
□ Notify on-call fraud team
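
If your processor or WAF doesn't already give you emergency rate limiting, a minimal sliding-window limiter plus static IP blocklist looks roughly like this sketch (the addresses, thresholds, and hook point are placeholders):

```python
# Minimal per-IP sliding-window rate limiter plus static blocklist.
# Most processors and WAFs expose equivalent controls -- prefer those.
import time
from collections import defaultdict, deque

BLOCKED_IPS = {"203.0.113.7", "198.51.100.22"}   # example addresses only
MAX_ATTEMPTS = 10                                 # per IP
WINDOW_SECONDS = 60

_attempts = defaultdict(deque)

def allow_request(ip: str, now: float | None = None) -> bool:
    """Return False if the IP is blocked or over the emergency velocity limit."""
    now = time.time() if now is None else now
    if ip in BLOCKED_IPS:
        return False
    window = _attempts[ip]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()            # drop attempts outside the window
    if len(window) >= MAX_ATTEMPTS:
        return False
    window.append(now)
    return True

# Example: the 11th attempt inside one minute gets rejected.
print([allow_request("192.0.2.1", now=1000.0 + i) for i in range(12)])
```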

This is a Bet

Some emergency measures hurt conversion. Take the hit, but write down your assumptions and metrics so you can back out intelligently later.

If you block all orders from a country, measure: how many good orders are you losing? Is that worth the fraud prevented?
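
A back-of-envelope way to frame that trade-off, with made-up numbers:

```python
# Back-of-envelope trade-off for a country-level block. All numbers are
# made up -- substitute your own hourly volumes and margins.
fraud_prevented_per_hour = 4_000      # $ of fraud you expect the block to stop
good_orders_blocked_per_hour = 120    # legitimate orders caught in the block
avg_margin_per_good_order = 12        # $ contribution margin per good order

good_revenue_lost = good_orders_blocked_per_hour * avg_margin_per_good_order
net_benefit_per_hour = fraud_prevented_per_hour - good_revenue_lost
print(f"Net benefit of the block: ${net_benefit_per_hour:,}/hour")  # $2,560/hour here
```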

Communication (Hour 1)

□ Alert fraud team lead
□ Notify security team
□ Brief management (if large scale)
□ Prepare customer support talking points

Hour 1-4: Analyze & Adapt

Pattern Analysis

□ Identify common attributes (see the concentration sketch below):
  □ IP ranges
  □ Device fingerprints
  □ BINs/card ranges
  □ Email patterns
  □ Shipping addresses
  □ Time patterns
□ Determine attack sophistication level
□ Estimate attack scale and trajectory
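
One way to surface those common attributes, assuming you can export recent transactions with a fraud label into a pandas DataFrame (column names are illustrative):

```python
# Concentration analysis: which attribute values are over-represented in
# confirmed or suspected fraud? Column names are illustrative.
import pandas as pd

def fraud_concentration(df: pd.DataFrame, attribute: str, min_count: int = 5) -> pd.DataFrame:
    """Fraud rate and volume per attribute value, highest risk first."""
    grouped = df.groupby(attribute)["is_fraud"].agg(fraud_rate="mean", txns="count")
    return (grouped[grouped["txns"] >= min_count]
            .sort_values(["fraud_rate", "txns"], ascending=False))

# Example usage against a recent-transactions export:
# df = pd.read_csv("last_6_hours.csv")
# for attr in ["ip", "device_fingerprint", "card_bin", "email_domain", "ship_zip"]:
#     print(fraud_concentration(df, attr).head(10))
```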

Deploy Targeted Rules

Based on patterns found, deploy rules in this order:

1. High-confidence blocks (low false positive risk):

□ Block specific device fingerprints seen in fraud
□ Block IPs with 100% fraud rate
□ Block email domains used only in fraud

2. Medium-confidence rules (some false positive risk):

□ Add velocity limits (transactions per IP/hour)
□ Require step-up auth for flagged BINs
□ Manual review for new customers from affected countries

3. Last resort (high false positive risk):

□ Decline all orders from specific countries
□ Manual review all new customers
□ Pause specific product categories
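
If you manage rules in code rather than only in a vendor dashboard, the priority order above might translate into something like this sketch (attribute names, thresholds, and list contents are placeholders, not a real rule engine):

```python
# Tiered rule evaluation mirroring the priority order above.
HIGH_CONFIDENCE_DEVICE_BLOCKS = {"fp_abc123"}   # fingerprints seen only in fraud
HIGH_CONFIDENCE_IP_BLOCKS = {"203.0.113.7"}     # IPs with a 100% fraud rate
STEP_UP_BINS = {"999999"}                       # flagged BINs -> step-up auth
MAX_TXNS_PER_IP_PER_HOUR = 5

def evaluate(txn: dict, txns_from_ip_last_hour: int) -> str:
    # Tier 1: high-confidence blocks (low false-positive risk)
    if txn["device_fingerprint"] in HIGH_CONFIDENCE_DEVICE_BLOCKS:
        return "block: known-bad device"
    if txn["ip"] in HIGH_CONFIDENCE_IP_BLOCKS:
        return "block: known-bad IP"
    # Tier 2: medium-confidence friction (some false-positive risk)
    if txns_from_ip_last_hour >= MAX_TXNS_PER_IP_PER_HOUR:
        return "review: IP velocity limit exceeded"
    if txn["card_bin"] in STEP_UP_BINS:
        return "step_up: 3DS / OTP required"
    # Tier 3 (country-level declines, review-everything) stays behind an
    # explicit human decision rather than in code.
    return "approve"

print(evaluate({"device_fingerprint": "fp_abc123", "ip": "192.0.2.9", "card_bin": "411111"},
               txns_from_ip_last_hour=1))
```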

Rule Testing, Even in a Crisis

You're still running experiments, just faster.

For each rule you deploy:

  • Hypothesis: This rule will block X% of fraud with Y% false positives
  • Metric: Block rate, false positive rate (check manually for first hour)
  • Kill criteria: If false positive rate exceeds 5%, tighten or remove
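
A minimal way to apply that kill criterion to a manually reviewed sample of blocked transactions (the 5% threshold comes from the list above; the sample size is up to you, and small samples are noisy):

```python
# Kill-criteria check from a manual review sample of what a rule blocked.
def check_kill_criteria(reviewed_blocks: list[bool], max_fp_rate: float = 0.05) -> str:
    """reviewed_blocks: True if a blocked transaction turned out to be legitimate."""
    if not reviewed_blocks:
        return "no data yet -- keep sampling"
    fp_rate = sum(reviewed_blocks) / len(reviewed_blocks)
    verdict = "tighten or remove rule" if fp_rate > max_fp_rate else "keep rule"
    return f"false positive rate {fp_rate:.1%} over {len(reviewed_blocks)} reviews -> {verdict}"

# Example: 3 legitimate orders out of 40 reviewed blocks = 7.5% -> tighten or remove.
print(check_kill_criteria([True] * 3 + [False] * 37))
```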

Hour 4-8: Stabilize

Measure Effectiveness

□ Track fraud rate change since rules deployed
□ Monitor false positive rate (customer complaints, support contacts)
□ Compare hourly fraud $ before/after
□ Assess customer impact
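
For the before/after comparison, a small helper like this works against a timestamped export with a fraud label (column names are illustrative):

```python
# Hourly fraud $ inside a time window, for comparing the periods before and
# after the rules went live. Column names are illustrative.
import pandas as pd

def hourly_fraud_dollars(df: pd.DataFrame, start: str, end: str) -> float:
    """df needs columns: timestamp (parseable), amount, is_fraud (bool)."""
    ts = pd.to_datetime(df["timestamp"])
    in_window = df[(ts >= pd.Timestamp(start)) & (ts < pd.Timestamp(end))]
    hours = (pd.Timestamp(end) - pd.Timestamp(start)).total_seconds() / 3600
    return in_window.loc[in_window["is_fraud"], "amount"].sum() / hours

# Example usage around a 14:00 rule deployment:
# df = pd.read_csv("transactions.csv")
# before = hourly_fraud_dollars(df, "2024-05-01 08:00", "2024-05-01 14:00")
# after  = hourly_fraud_dollars(df, "2024-05-01 14:00", "2024-05-01 20:00")
# print(f"fraud $/hr: before={before:.0f}, after={after:.0f}")
```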

Tune Controls

□ Tighten rules if attack continues
□ Loosen rules if false positives too high
□ Add new rules as patterns emerge
□ Remove ineffective rules

Document Actions

□ Log all rule changes with timestamps
□ Document decision rationale
□ Track affected customers/transactions
□ Preserve evidence for investigation

Hour 8-24: Recover & Learn

Transition to Normal

□ Gradually relax emergency controls
□ Return manual review to normal staffing
□ Monitor for attack resumption
□ Keep targeted blocks in place longer
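
One way to keep the relaxation disciplined is to loosen a single control at a time and re-tighten if the fraud rate climbs back toward attack levels; a sketch, with placeholder controls and thresholds:

```python
# Staged relaxation: one change, then observe. The control names, baseline,
# and multiplier are placeholders -- the discipline matters, not this code.
EMERGENCY_CONTROLS = ["country_review_all", "captcha_on_checkout", "low_approval_threshold"]
BASELINE_FRAUD_RATE = 0.002      # pre-attack baseline
RESUMPTION_MULTIPLIER = 1.5      # hold if fraud rate exceeds 1.5x baseline

def next_relaxation_step(current_fraud_rate: float, still_active: list[str]) -> str:
    if current_fraud_rate > BASELINE_FRAUD_RATE * RESUMPTION_MULTIPLIER:
        return "hold: fraud rate still elevated -- do not relax anything"
    if not still_active:
        return "done: all emergency controls removed (targeted blocks stay)"
    return f"relax '{still_active[0]}', then watch the fraud rate for a few hours"

print(next_relaxation_step(0.0021, EMERGENCY_CONTROLS))
```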

Impact Assessment

□ Total fraud prevented: $_______
□ Total fraud losses: $_______
□ Estimated false positives: _______
□ Customer complaints: _______
□ Operational cost: $_______

Post-Attack (Day 2-7): Learn

What Would Have Caught This Earlier?

The most valuable question. Don't skip it.

□ What signal appeared first?
□ How long between first signal and detection?
□ What alert or rule SHOULD have fired?
□ What data did we not have that would have helped?

Root Cause Analysis

□ How did the attack start?
□ Why wasn't it detected earlier?
□ What control gaps were exploited?
□ How can we prevent recurrence?

Permanent Improvements

Based on what you learned:

New detection rules:

□ Rule 1: _______________ (catches: _______________)
□ Rule 2: _______________ (catches: _______________)

New alerts:

□ Alert if _____________ exceeds _____________ in _____________ window

Process changes:

□ _____________________________________________
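
The alert template above usually reduces to "metric exceeds limit within a rolling window"; a generic sketch (the metric, the limit, and the window are placeholders):

```python
# Generic rolling-window threshold alert matching the template above.
import time
from collections import deque

class ThresholdAlert:
    def __init__(self, limit: float, window_seconds: int):
        self.limit = limit
        self.window_seconds = window_seconds
        self.events = deque()  # (timestamp, value)

    def record(self, value: float, now: float | None = None) -> bool:
        """Record one observation; return True if the windowed sum breaches the limit."""
        now = time.time() if now is None else now
        self.events.append((now, value))
        while self.events and now - self.events[0][0] > self.window_seconds:
            self.events.popleft()
        return sum(v for _, v in self.events) > self.limit

# Example: alert when declined auths exceed 300 in a 15-minute window,
# simulating a stream of declines arriving one second apart.
alert = ThresholdAlert(limit=300, window_seconds=15 * 60)
breached = any(alert.record(1, now=float(i)) for i in range(400))
print("page the on-call" if breached else "ok")
```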

Incident Report Template

Document for future reference:

Attack Summary:
- Date/time started: _______________
- Date/time contained: _______________
- Attack type: _______________
- Duration: _______________
- Total exposure: $_______________
- Loss prevented: $_______________
- Actual loss: $_______________
- False positives: _______________

Root Cause:
_________________________________

What Worked:
_________________________________

What Didn't Work:
_________________________________

Permanent Changes Made:
_________________________________

Open Questions:
_________________________________

Quick Reference: Common Attack Types

Attack            | Signs                                 | First Response            | False Positive Risk
------------------|---------------------------------------|---------------------------|--------------------
Card Testing      | High velocity, small $, high declines | Rate limit, CAPTCHA       | Low
ATO Wave          | Failed logins, profile changes        | Lock accounts, MFA        | Medium
App Fraud Burst   | Similar applications, velocity        | Tighten onboarding        | Medium
Transaction Spike | Unusual patterns, new customers       | Lower approval threshold  | High

Emergency Contacts Template

Fill in for your organization:

Role             | Name | Phone | Email
-----------------|------|-------|------
Fraud Lead       |      |       |
Security         |      |       |
Engineering      |      |       |
Customer Support |      |       |
Management       |      |       |
Legal            |      |       |

First Experiment to Run After an Attack

Once the crisis is over:

Hypothesis: We would have detected this attack X hours earlier if we had _____________.

Experiment: Build that alert or rule. Backtest against the attack data. Deploy in shadow mode.

Expected outcome: Next similar attack gets detected faster.
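
A minimal backtest harness for that experiment, assuming you kept a labeled export of the attack window (the candidate rule and column names are placeholders):

```python
# Backtest a candidate rule against labeled attack data before shadow-mode
# deployment. The candidate rule and column names are placeholders.
import pandas as pd

def backtest_rule(df: pd.DataFrame, rule) -> dict:
    """df needs is_fraud (bool); rule is a function row -> bool (True = would flag)."""
    flagged = df.apply(rule, axis=1)
    fraud = df["is_fraud"]
    return {
        "fraud_caught_pct": (flagged & fraud).sum() / max(fraud.sum(), 1),
        "false_positive_pct": (flagged & ~fraud).sum() / max((~fraud).sum(), 1),
        "flagged_total": int(flagged.sum()),
    }

# Example candidate: flag small first-time orders from IPs with prior declines.
# candidate = lambda r: r["amount"] < 5 and r["ip_prior_declines"] >= 3
# print(backtest_rule(pd.read_csv("attack_window.csv"), candidate))
```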