Manual Review

Prerequisites

Before setting up manual review, understand the basics summarized below:

TL;DR
  • Manual review = Human investigation for gray-zone cases where automation is uncertain
  • Good candidates: high-value transactions, VIP customers, ML uncertain, customer appeals
  • Bad candidates: clear fraud (auto-decline), clear legitimate (auto-approve), low-value
  • Target metrics: >95% decision accuracy, >90% SLA adherence
  • Feed decisions back to ML models to improve automation over time

Manual review is human investigation of complex fraud decisions that automation cannot settle confidently.

When to Use Manual Review

Good Candidates for Review

| Scenario | Why Manual Review |
| --- | --- |
| Gray-zone scores | ML is uncertain; needs human judgment |
| High-value transactions | Potential loss too large for automation error |
| VIP customers | False-positive cost too high |
| Complex patterns | Multiple signals that need synthesis |
| Appeals | Customer disputes an automated decision |

Poor Candidates for Review

| Scenario | Better Alternative |
| --- | --- |
| Clear fraud signals | Auto-decline |
| Clear legitimate signals | Auto-approve |
| Low-value transactions | Risk-accept the loss |
| High-volume attacks | Automated rules |

Review Queue Design

Prioritization

Priority Score = (Transaction Value × Risk Score × Time Sensitivity) ÷ Analyst Capacity

| Priority | Criteria | SLA |
| --- | --- | --- |
| Critical | >$5K, high risk, time-sensitive | 15 min |
| High | >$1K, high risk OR VIP | 1 hour |
| Medium | Medium risk, medium value | 4 hours |
| Low | Low value, marginal signals | 24 hours |
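The prioritization above can be sketched in Python. The risk threshold (0.8 for "high risk", 0.5 for "medium") and the exact bucket criteria are illustrative assumptions, not values this guide prescribes:

```python
# SLA targets per priority bucket, in minutes (from the table above).
SLA_MINUTES = {"critical": 15, "high": 60, "medium": 240, "low": 1440}


def priority_score(value_usd, risk_score, time_sensitivity, analyst_capacity):
    """Priority = (value x risk x time sensitivity) / analyst capacity."""
    return (value_usd * risk_score * time_sensitivity) / analyst_capacity


def sla_bucket(value_usd, risk_score, time_sensitive, is_vip=False):
    """Map a case to an SLA bucket per the criteria table.

    Thresholds are assumed for illustration: risk >= 0.8 counts as
    "high risk", 0.5-0.8 as "medium risk".
    """
    if value_usd > 5000 and risk_score >= 0.8 and time_sensitive:
        return "critical"
    if value_usd > 1000 and (risk_score >= 0.8 or is_vip):
        return "high"
    if risk_score >= 0.5:
        return "medium"
    return "low"
```

A queue worker would then pull cases in descending `priority_score` order, using `SLA_MINUTES[sla_bucket(...)]` as the aging deadline.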

Queue Management

  1. Real-time SLA tracking – Monitor case aging against SLA targets
  2. Automatic escalation – Alert or reassign when an SLA is breached
  3. Capacity planning – Staff to expected queue volume
  4. Skill-based routing – Send complex cases to senior analysts
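Automatic escalation (item 2 above) reduces to an aging check against the SLA. This is a minimal sketch; the `breach_fraction` knob, which lets escalation fire shortly before a hard breach rather than after, is an assumption:

```python
from datetime import datetime, timedelta

# SLA targets per priority bucket (from the prioritization table).
SLA = {
    "critical": timedelta(minutes=15),
    "high": timedelta(hours=1),
    "medium": timedelta(hours=4),
    "low": timedelta(hours=24),
}


def needs_escalation(priority, enqueued_at, now, breach_fraction=1.0):
    """True once a case has aged past its SLA.

    With breach_fraction < 1.0, escalation fires early (e.g. 0.8 means
    escalate at 80% of the SLA window), giving analysts time to react.
    """
    return (now - enqueued_at) >= SLA[priority] * breach_fraction
```

In practice this check would run on a schedule over the open queue and route breaching cases to a supervisor queue.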

The Review Process

Investigation Steps

1. Review automated decision reason

2. Examine transaction/application details

3. Check customer history

4. Query external data (device, email, phone)

5. Look for linked accounts

6. Make decision

7. Document rationale

Key Data Points

| Category | What to Check |
| --- | --- |
| Identity | Name, address, SSN verification |
| Device | Fingerprint, reputation, velocity |
| Behavior | Current pattern vs. history |
| Network | Links to other accounts (see synthetic identity) |
| External | Email age, phone history, credit bureau |

Decision Framework

| Evidence | Decision |
| --- | --- |
| Clear fraud (Tier 1 indicators) | Decline, flag account |
| Strong fraud (multiple Tier 2) | Decline, flag account |
| Unclear but risky | Challenge (step-up verification) |
| Risky but VIP | Approve with monitoring |
| Clear legitimate | Approve, whitelist signals |
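The evidence table can be encoded as a simple decision function. The indicator counts and the final fallback to "challenge" for genuinely unclear cases are illustrative assumptions:

```python
def review_decision(tier1_count, tier2_count, is_vip=False, looks_legitimate=False):
    """Map reviewed evidence to an action, per the decision framework.

    tier1_count / tier2_count are the number of Tier 1 / Tier 2 fraud
    indicators the analyst confirmed. Counts and ordering are assumed
    for illustration.
    """
    # Clear fraud (any Tier 1) or strong fraud (multiple Tier 2): decline.
    if tier1_count >= 1 or tier2_count >= 2:
        return "decline_and_flag"
    risky = tier2_count >= 1
    if risky and is_vip:
        return "approve_with_monitoring"
    if risky:
        return "challenge"
    if looks_legitimate:
        return "approve_and_whitelist"
    # No fraud signal but not clearly legitimate either: step up.
    return "challenge"
```

Note the ordering: the VIP carve-out is checked before the generic "risky" branch so VIPs are never challenged outright.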

Analyst Tools

Essential Features

  • Single pane of glass – All data in one view
  • Decision shortcuts – One-click common actions
  • Notes/comments – For handoffs and history
  • Timer – Track review time
  • Feedback loop – Outcome tracking

Nice-to-Have Features

  • Similar case search – "Show me cases like this"
  • Graph visualization – Network connections
  • Communication tools – Contact customer if needed
  • Quality scoring – Manager review integration

Quality Assurance

Review Sampling

| Sample Rate | Application |
| --- | --- |
| 100% | New analysts (first 30 days) |
| 20% | Standard analyst |
| 10% | Senior analyst |
| 5% | Expert analyst |
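The sampling table maps directly to a per-case QA draw; a minimal sketch, with the analyst-level names assumed:

```python
import random

# Sample rates from the table above, keyed by analyst level (names assumed).
SAMPLE_RATE = {"new": 1.00, "standard": 0.20, "senior": 0.10, "expert": 0.05}


def select_for_qa(level, rng=random.random):
    """Decide whether one completed review is pulled for QA.

    rng is injectable so the draw is deterministic in tests.
    """
    return rng() < SAMPLE_RATE[level]
```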

Quality Metrics

| Metric | Target |
| --- | --- |
| Decision accuracy | >95% |
| Documentation completeness | 100% |
| SLA adherence | >90% |
| False positive rate | Track by analyst |
| False negative rate | Track by analyst |

Feedback Loop

  1. Track outcomes – Was decision correct?
  2. Feed to models – Human decisions train ML
  3. Identify patterns – What do humans catch that ML misses?
  4. Update rules – Encode learnings
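Step 2 of the feedback loop, feeding human decisions back to the models, amounts to turning reviewed cases with a confirmed outcome into labeled training rows. A minimal sketch; the case dictionary schema (`id`, `confirmed_fraud`) is an assumption:

```python
def to_training_labels(reviewed_cases):
    """Convert confirmed review outcomes into (case_id, label) pairs.

    Only cases whose outcome is confirmed (fraud or not) become labels;
    cases still pending outcome are skipped so unverified human guesses
    never leak into training data.
    """
    labels = []
    for case in reviewed_cases:
        if case.get("confirmed_fraud") is None:
            continue  # outcome not yet known
        labels.append((case["id"], 1 if case["confirmed_fraud"] else 0))
    return labels
```

Comparing these labels against the model's original scores is also how the "what do humans catch that ML misses?" question (step 3) gets answered.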

Scaling Manual Review

When Volume Exceeds Capacity

  1. Raise review threshold – Only highest risk
  2. Auto-decide more – Accept some error
  3. Reduce review scope – Focus on key signals
  4. Add staff – If sustainable
  5. Improve models – Long-term solution

Efficiency Improvements

| Initiative | Impact |
| --- | --- |
| Better data presentation | 10-20% faster reviews |
| Keyboard shortcuts | 5-10% faster |
| Pre-computed insights | 15-25% faster |
| Decision templates | 10-15% faster |

Next Steps

Setting up manual review?

  1. Define queue prioritization - Critical vs. low priority
  2. Design the review process - Step-by-step workflow
  3. Set quality targets - SLA and accuracy goals

Improving review efficiency?

  1. Check efficiency improvements - Quick wins
  2. Build better analyst tools - Single pane of glass
  3. Implement feedback loop - Train ML from decisions

Scaling beyond capacity?

  1. Raise review threshold - Only highest risk
  2. Improve models - Long-term solution
  3. Consider vendors - Outsource review