VICIdial False Positive Rate: Why Your AMD Is Dropping Live Calls (and How to Fix It)

In outbound calling, a false positive happens when your answering machine detection classifies a live human as a voicemail. The call connected. A real person answered. But your system hung up on them before they ever reached an agent.

If you are running VICIdial with default AMD settings, this is happening on roughly 1 in 5 of your connected calls.

This guide explains exactly why it happens, how to measure it in your own operation, and what you can do to fix it — including both the parameter tuning approach and the replacement approach.

What Is a False Positive in AMD?

VICIdial uses Asterisk's built-in AMD (Answering Machine Detection) to classify calls after they connect. The system analyzes the first few seconds of audio and decides: HUMAN or MACHINE.

A false positive is when a human gets classified as a machine. The consequences:

  • The call is routed to your voicemail handling logic instead of an agent
  • The live person hears silence, a click, or a voicemail message
  • They hang up confused or frustrated
  • The call shows as a "machine" in your reports
  • You pay for the dial, the carrier connection, and get zero agent conversation

A false negative is the opposite — a machine classified as a human. Your agent gets connected to a voicemail greeting and has to manually detect and drop it.

Both cost money. False positives are typically the larger problem because they represent real revenue-generating conversations that never happened.

Why VICIdial's Default AMD Produces High False Positives

Asterisk's AMD was designed over a decade ago for a specific telephony environment. Several things have changed.

1. Modern Voicemail Greetings Have Changed

iOS 26 (released in 2025) changed the default voicemail greeting behavior significantly. The new synthesized greeting is shorter, plays faster, and has different acoustic characteristics than earlier versions. Asterisk AMD was not trained on this data.

The result: iOS 26 voicemail greetings are more likely to be classified as human because they do not match the long-greeting pattern the rules expect.

At the same time, real humans answering calls on iOS 26 devices sometimes trigger the machine detection heuristics — short answers like "Hello?" can look like a machine response to Asterisk's word-count logic.

2. Carrier-Side Audio Processing

Modern VoIP carriers apply aggressive audio processing to calls — noise reduction, echo cancellation, level normalization. This processing changes the acoustic properties of both human voices and voicemail greetings in ways that confuse threshold-based detection.

The silence thresholds and word burst parameters in your amd.conf were set against unprocessed audio. When carriers apply their own processing, the same human greeting can have different acoustic properties depending on which carrier the call routes through.

3. Short Human Responses

People answer the phone differently. A quick "Hello?" spoken in under 300ms is a common human response — especially on mobile. Asterisk AMD's minimum word length and between-words silence parameters can misclassify these as the lead-in silence of a machine greeting.

The same problem occurs with non-native English speakers, elderly callers, and people answering from noisy environments.

4. Default Parameters Were Never Calibrated for Your Traffic

The defaults in amd.conf are generic starting points, not optimized settings. They were not calibrated against your carrier mix, your calling regions, your demographic, or your time-of-day patterns. Running a high-volume outbound campaign on default AMD parameters is like running paid ads with default bidding — it works, but not optimally.

How to Measure Your Current False Positive Rate

Before changing anything, measure the problem. You need to know your actual false positive rate, not the theoretical one.

Method 1: Manual Sample Audit

This is the most reliable approach:

  1. Enable call recording for all AMD-classified calls (if not already enabled)
  2. Pull a random sample of 200 calls classified as MACHINE by your AMD
  3. Listen to each recording
  4. Count the calls where you can hear a human voice answering before the call drops

A false positive shows up in the recording as: call connects → human voice ("Hello?", "Yes?", a confused pause) → click/hangup or voicemail message plays

If 30 of your 200 MACHINE-classified calls have audible human responses, your false positive rate is approximately 15%. If 50 do, it is 25%.
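As a sanity check, the audit arithmetic can be written out in a few lines of Python (the counts below are the illustrative ones from this section, not real data):

```python
def false_positive_rate(machine_classified_sample: int, audible_humans: int) -> float:
    """Fraction of MACHINE-classified calls that were actually live humans."""
    return audible_humans / machine_classified_sample

# Manual audit: 200 MACHINE-classified recordings reviewed by hand.
print(false_positive_rate(200, 30))  # 30 human voices -> 0.15 (15%)
print(false_positive_rate(200, 50))  # 50 human voices -> 0.25 (25%)
```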

Method 2: A/B Route Comparison

A more systematic approach:

  1. Take a subset of your campaign traffic (10–20% of dials)
  2. Route it with AMD disabled — all connected calls go to agents
  3. Agents manually code call outcomes: HUMAN (live conversation), MACHINE (voicemail detected manually), NO_ANSWER
  4. Compare the MACHINE rate from manual agent coding vs. the MACHINE rate from AMD on the same time period

The difference between "AMD says machine" and "agent says machine" on a matched sample gives you your false positive rate.
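That comparison can be sketched as a simple estimator. Note the simplifying assumptions: the two traffic slices are statistically matched, and AMD false negatives (machines passed through as HUMAN) are ignored, so treat the result as a rough estimate rather than an exact figure:

```python
def estimated_fp_rate(amd_machine_rate: float, agent_machine_rate: float) -> float:
    """Rough false positive estimate from an A/B route comparison.

    Assumes matched traffic slices and ignores AMD false negatives
    (machines that AMD incorrectly passed through as HUMAN).
    """
    excess = amd_machine_rate - agent_machine_rate   # live humans swallowed by AMD
    return max(excess, 0.0) / amd_machine_rate       # as a share of AMD MACHINE calls

# Example: AMD flags 50% of connects as MACHINE, agents code only 40% as machine.
print(round(estimated_fp_rate(0.50, 0.40), 2))  # -> 0.2
```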

What Numbers to Expect

  • Default Asterisk parameters, mixed traffic: 18–25% false positive rate
  • Manually tuned parameters, US domestic: 12–18%
  • Manually tuned parameters, dedicated carrier: 8–15%
  • AI-powered AMD (amdify.io): 1–3%

The Revenue Impact of Your False Positive Rate

Here is the math on what false positives actually cost:

  • 5,000 dials/day, 12% connect rate, 20% false positives: 120 dropped live calls/day × $40 per conversation = $4,800/day lost
  • 10,000 dials/day, 12% connect rate, 20% false positives: 240 dropped live calls/day × $40 = $9,600/day lost
  • 20,000 dials/day, 12% connect rate, 20% false positives: 480 dropped live calls/day × $40 = $19,200/day lost
  • 20,000 dials/day, 12% connect rate, 3% false positives: 72 dropped live calls/day × $40 = $2,880/day lost

The difference between a 20% and a 3% false positive rate on a 20,000-dial-per-day operation is $16,320 per day in recoverable revenue. Over a 30-day month, that is nearly $490,000.

These numbers assume you can convert recovered conversations at your existing rate. The point is not that every recovered call converts — it is that right now, those conversations are never happening at all.
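The figures above can be reproduced with a short Python model, using the same assumption that recovered conversations convert at your existing rate:

```python
def daily_revenue_loss(daily_dials: int, connect_rate: float,
                       fp_rate: float, value_per_conversation: float) -> float:
    """Revenue lost to live calls dropped by AMD false positives."""
    dropped_live_calls = daily_dials * connect_rate * fp_rate
    return round(dropped_live_calls * value_per_conversation, 2)

baseline = daily_revenue_loss(20_000, 0.12, 0.20, 40)  # -> 19200.0
improved = daily_revenue_loss(20_000, 0.12, 0.03, 40)  # -> 2880.0
print(baseline - improved)          # -> 16320.0 per day recoverable
print((baseline - improved) * 30)   # -> 489600.0 over a 30-day month
```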

Fix Option 1: Tuning Asterisk AMD Parameters

If you want to stay on Asterisk AMD and squeeze more accuracy out of it, here are the key parameters and how to adjust them.

Edit /etc/asterisk/amd.conf:


[general]

; Time (ms) of initial silence before deciding MACHINE
; Increase this if you are dropping fast human responses
initial_silence=3000

; Max duration (ms) of a greeting to still be considered human
; If machines in your traffic have short greetings, lower this
greeting=1500

; Silence (ms) after greeting before deciding HUMAN
after_greeting_silence=800

; Total time (ms) AMD will analyze before giving up
total_analysis_time=5000

; Minimum word length (ms) — shorter is classified as noise
; Lower this to catch short human responses
min_word_length=80

; Silence (ms) between words
between_words_silence=50

; Max words in greeting before classifying as MACHINE
; Lower this if you are getting false negatives on machine calls
maximum_number_of_words=3

; Silence threshold (lower = more sensitive)
silence_threshold=200

After saving changes, reload the module: asterisk -rx "module reload app_amd.so"

Tuning Direction for High False Positives

If your problem is live humans being classified as machines:

  • Increase initial_silence — give more time before assuming it is a machine
  • Decrease min_word_length — catch shorter human responses
  • Increase maximum_number_of_words — allow humans to say more before being classified as machine

The tradeoff: more permissive settings reduce false positives but increase false negatives. You will connect agents to more voicemails. Measure both rates as you tune.

The Ceiling on Tuning

Manual tuning can reduce false positives but not eliminate them. The fundamental problem is that the same acoustic signatures appear in both human and machine audio — just with different frequencies. Rules cannot fully separate them.

Most operations that tune aggressively can get to an 8–12% false positive rate. Below that, the error rate is structural — it cannot be fixed by parameter adjustment.

Fix Option 2: Replace AMD with AI-Powered Detection

For operations where the economics justify it — in practice, anything above roughly 300 dials per hour — replacing Asterisk AMD entirely with an AI-powered solution is the higher-leverage fix.

amdify.io is built specifically for VICIdial and Asterisk environments. It replaces the AMD decision without changing anything else in your dialing stack.

How it works:

  1. Disable built-in AMD in VICIdial System Settings
  2. A small AGI script captures the first 3–5 seconds of connected call audio
  3. The audio is posted to the amdify.io API, which returns HUMAN or MACHINE in under 400ms
  4. Asterisk routes based on the result — same as it would with built-in AMD
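The AGI-side flow above can be sketched in Python. Note that the endpoint path, auth header, and response field below are illustrative assumptions, not the documented amdify.io request format — consult the integration guide for the real API shape:

```python
import json
import urllib.request

# Hypothetical endpoint; the real URL is in the amdify.io documentation.
AMDIFY_URL = "https://api.amdify.io/v1/classify"

def classify_audio(wav_bytes: bytes, api_key: str, timeout: float = 0.5) -> str:
    """POST the first seconds of connected-call audio; return 'HUMAN' or 'MACHINE'.

    The auth header and JSON response field are assumptions for this sketch.
    """
    req = urllib.request.Request(
        AMDIFY_URL,
        data=wav_bytes,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "audio/wav"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)["result"]  # assumed shape: {"result": "HUMAN"}

def route_for(result: str) -> str:
    """Map the classification to a dialplan branch, mirroring built-in AMD."""
    return "agent-queue" if result == "HUMAN" else "machine-handling"
```

The routing step is the only part that touches your dialplan: the AGI script sets a channel variable from `route_for()` and Asterisk branches on it, exactly as it would on the built-in AMDSTATUS.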

The API uses a neural model trained on current call recordings, including iOS 26 voicemail behavior, modern carrier audio processing, and diverse human response patterns. False positive rates drop to 1–3%.

Integration takes 30–60 minutes for a standard VICIdial setup. See the VICIdial integration guide for step-by-step instructions.

How to Confirm the Fix Is Working

After making changes — whether tuning or replacement — verify with data, not assumptions:

  1. Wait 48 hours to collect a statistically meaningful sample
  2. Pull another manual audit sample of 200 MACHINE-classified calls
  3. Count the human responses in the recordings
  4. Compare to your baseline false positive rate from the initial audit

If you moved from 20% to 3%, you will hear the difference clearly — almost no more human voices in the MACHINE-classified recordings.

Also monitor:

  • Connect rate (should increase as live calls stop being dropped)
  • Agent conversation rate per dial (should increase proportionally)
  • Abandonment rate (should stay below 3% — faster AMD classification helps here)
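A small helper for tracking those three rates after a change. The input counts are illustrative, and the abandonment definition used here (abandoned calls as a share of connects) is one common convention — match whatever definition your compliance reporting already uses:

```python
def monitor_metrics(dials: int, connects: int,
                    agent_conversations: int, abandoned: int) -> dict:
    """Post-change health check for the three rates called out above."""
    return {
        "connect_rate": connects / dials,
        "conversations_per_dial": agent_conversations / dials,
        "abandon_rate": abandoned / connects,  # share of connected calls
    }

m = monitor_metrics(dials=20_000, connects=2_600,
                    agent_conversations=2_300, abandoned=60)
print(m["connect_rate"])            # -> 0.13
assert m["abandon_rate"] < 0.03     # target from the checklist above
```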

Common Mistakes to Avoid

Tuning based on overall accuracy instead of false positive rate specifically: Overall accuracy can look good even with a high false positive rate if you are calling a list where most answers are actually machines. Measure false positives and false negatives separately.

Changing multiple parameters at once: You cannot diagnose which change helped. Adjust one parameter at a time with a 24-hour measurement window between changes.

Not accounting for carrier variation: If you use multiple SIP carriers, different carriers may perform differently with your AMD settings. Segment your AMD accuracy analysis by carrier.

Testing only in off-peak hours: AMD accuracy can vary by time of day because caller behavior changes. Validate your settings across your full operating window.

Final Thoughts

The default VICIdial AMD configuration was not designed for 2026 traffic patterns. iOS 26, modern carrier processing, and VoIP-native calling behavior have all moved the target in ways that static parameters cannot fully track.

Measure your false positive rate first. If it is above 10%, you have a problem worth solving. If it is above 15%, you are leaving meaningful revenue on the table every day.

Tuning can get you partway there. If you want to reliably reach the 1–3% false positive range that current AI-powered detection achieves, parameter adjustment alone will not close the gap.