A practical guide to candidate NPS in a funnel where the bots don't fill out surveys
Most candidate NPS guides treat the score as operational. It's lagging. Here's the leading indicator that predicts what cNPS will say six months later.
The cNPS score landed on the QBR slide Monday morning. Plus 38, up from plus 26 a quarter ago. The CFO did her small approving nod. Somebody from People Ops mentioned the new survey cadence had paid off. Twenty minutes later you were at the kitchen counter with a coffee, and the recruiter who runs the Customer Success funnel said, almost as an aside, that of the 422 people who applied to the lead role last month, only 168 finished the screening interview. A fair number of the 254 who didn’t had sent back snippy emails about the process before they bailed. None of them filled out the cNPS survey. They never got it. The survey goes out after the requisition closes, and they were gone before then.
You said something honest. The dashboard says the experience is fine. You went back to your desk and pulled the segment data anyway. The plus 38 was almost entirely promoters from people who had completed your process and gotten an offer. The handful of detractors were polite finalists who didn’t get the role. The candidates who actually hated the experience weren’t in the sample.
None of the candidate NPS guides on the first page of Google will name what’s happening. Candidate NPS is a lagging indicator of screening UX, and the leading indicator that predicts what cNPS will say six months later is screening completion rate. The methodology is fine. The score is real. The problem is that by the time it lands on the dashboard, the experience the candidate hated already happened and the candidate who hated it is mostly gone. cNPS is the autopsy. Screening completion rate is the live patient.
How candidate NPS actually works
Before the reframe, it's worth covering the basics cleanly. The metric is real and the methodology is sound. Most teams just need a refresher.
The formula
Candidate NPS is the percentage of promoters minus the percentage of detractors. You ask one question on a 0 to 10 scale: “How likely are you to recommend applying to our company to a friend or colleague?” Scores of 9 or 10 are promoters. Scores of 7 or 8 are passives and drop out of the calculation. Scores of 0 through 6 are detractors.
If 40% of respondents are promoters and 20% are detractors, your cNPS is plus 20. The score ranges from minus 100 to plus 100 in theory. In practice, most companies sit between minus 20 and plus 50.
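The arithmetic above is simple enough to sketch. This is a minimal illustration of the calculation as described, not a reference implementation; the function name and sample scores are made up for the example.

```python
# Sketch of the cNPS calculation described above.
# 9-10 = promoter, 7-8 = passive (dropped), 0-6 = detractor.

def candidate_nps(scores: list[int]) -> float:
    """Return cNPS: promoter share minus detractor share, in points."""
    if not scores:
        raise ValueError("need at least one survey response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores), 1)

# 4 promoters, 4 passives, 2 detractors out of 10 responses:
# 40% minus 20% gives plus 20, matching the example above.
responses = [10, 9, 9, 10, 7, 8, 7, 8, 4, 6]
print(candidate_nps(responses))  # 20.0
```

Note that the passives only affect the denominator: they shrink both percentages but never add to either side, which is why a wall of 7s and 8s drags a score toward zero.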
Benchmarks worth knowing
Above 0 means more candidates would recommend you than not. Plus 30 to plus 70 is the strong band. Above plus 70 is excellent and rare. Industry shapes the ceiling. Tech and SaaS tend to clear plus 45 because the pool is selected for engagement. Operations, logistics, retail, and frontline service typically sit between plus 15 and plus 25. A plus 11 in engineering hiring is considered strong because the bar for “would recommend” is higher among that population.
When to send the survey
The standard playbook is multi-stage. After application, after the recruiter screen, after the hiring manager interview, after the offer, after the rejection. Each stage gets a short survey, and the argument is that you can isolate where the experience breaks.
The cleanest single touchpoint is shortly after the requisition closes, with a considerate rejection or hire notification attached. The candidate’s memory is fresh and the outcome is no longer pending. Most teams running cNPS this way get response rates between 15 and 30 percent.
Everything above is the SERP’s consensus, and it’s not wrong. It’s just not the part of the conversation that helps you change the experience.
What a healthy cNPS hides in a 2026 funnel
In a funnel where the upstream stages are permeable to bots and auto-appliers, a clean cNPS is mostly an artifact of survival bias.
The score only sees the survivors
Your cNPS sample is built from people who completed enough of your process to receive a survey. In 2026, that’s a different population than the one that started. Of the 422 applicants on a typical mid-market role, a meaningful share are auto-appliers and AI-written applications that will never record a video, finish an assessment, or open a survey email. The human candidates who quit at screening because the experience felt like contempt also don’t show up. They’re already in somebody else’s process.
What’s left in the cNPS sample is heavily weighted toward people who got far enough in to have reasons to like or dislike specific things. Promoters are usually finalists. Detractors are usually polite finalists who didn’t get the role. The people who quit early because the screening signaled the company didn’t respect their time aren’t in the data set at all. Your dashboard reports a candidate experience score for the candidates who survived the candidate experience.
What “multi-stage cNPS” actually catches
Multi-stage cNPS sounds like the answer. Survey at every stage, and you’ll catch the early-funnel detractors before they leave. In practice, response rates collapse at the early stages. A candidate who applied on Tuesday in 90 seconds is not opening a survey email from you on Wednesday. The early-stage cNPS that does come back is dominated by people engaged enough to keep responding, which is the same selection bias one stage earlier. Multi-stage cNPS adds touchpoints. It doesn’t break the survival bias.
What the score lags by
cNPS cycle time is the hiring cycle plus the survey delay plus the analysis cadence plus the time to ship a fix. For most mid-market teams that’s three to six months between when a candidate felt friction and when somebody acted on it. The screening UX you fix in Q3 is the one a candidate flagged in Q1. By then your cost per hire on the rollover roles has already absorbed the cost of every detractor who walked.
The metric that predicts what your cNPS will say
There’s a different number, measured at a different stage, that does what most teams are asking cNPS to do. It’s the share of candidates who start a structured screening step and finish it. The screening step can be a recorded one-way interview, a short structured assessment, a 10-minute take-home, or a single behavioral question with a 30-second video response.
The completion rate at that stage has three properties cNPS can’t have.
It runs in real time
Screening completion rate updates with every candidate. No survey delay, no requisition-close trigger, no analysis cadence. Launch a new screening flow Monday, watch completion drop from 65 to 42 percent by Wednesday, and you know by Wednesday what’s failing. cNPS will tell you about it in October.
It captures the candidates who actually walked
A candidate who quits at screening is voting with their time. The 254 candidates in the opening scenario were telling you something specific about how your process felt. They won’t be in your cNPS sample. They’re in your screening completion rate. Drop-off at the screening stage is the leading indicator the SERP guides don’t name because survey vendors don’t sell it.
It points at concrete fixes
A low cNPS tells you candidates didn’t enjoy the experience. A low screening completion rate tells you what about it didn’t work. The role description was vague. The screening was 12 questions when 6 was enough. The candidate couldn’t tell who was reviewing or when they’d hear back. Each fix is testable inside a week, and the same fixes raise cNPS three months later when the survey catches up.
A healthy structured screening step runs between 60 and 85 percent completion for engaged hiring teams. Below 60 means something specific is wrong upstream and you can usually name what. Above 85 in a high-volume funnel sometimes means the step is too easy and isn’t doing screening work.
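The completion-rate read above can be sketched in a few lines. The thresholds are the ones this section names (60 to 85 percent for an engaged team), not an industry standard, and the function name and labels are illustrative.

```python
# Sketch of the screening completion rate read described above.
# The 60-85 percent health band is this article's heuristic, not a benchmark.

def screening_completion(started: int, finished: int) -> tuple[float, str]:
    """Return (completion rate in percent, a rough health label)."""
    if started <= 0 or finished < 0 or finished > started:
        raise ValueError("finished must be between 0 and started")
    rate = 100 * finished / started
    if rate < 60:
        label = "low: something specific upstream is costing you candidates"
    elif rate <= 85:
        label = "healthy band for an engaged hiring team"
    else:
        label = "high: check whether the step is doing real screening work"
    return round(rate, 1), label

# The opening scenario: 422 candidates started, 168 finished.
rate, label = screening_completion(422, 168)
print(rate, label)  # 39.8, plus the "low" label
```

Because it updates with every candidate, this number can sit on the same dashboard refresh as your applicant counts, which is the whole point: no survey delay, no requisition-close trigger.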
“But cNPS is the validated metric. Screening completion rate is just a drop-off rate.”
This is the strongest version of the objection.
The argument runs like this. cNPS has 20 years of marketing methodology behind it. The recommend-applying question has been validated against employer brand outcomes, time-to-hire, and offer acceptance rate. Drop-off rates at any single stage are operational noise. Treating screening completion rate as the leading indicator of candidate experience is mistaking a funnel mechanic for a sentiment metric.
Two things are true about that argument.
Where the objection is right
cNPS is more rigorous than a single drop-off rate measured in isolation. The methodology is genuinely strong, and the comparability across companies is real. If you want to benchmark against the rest of your industry, cNPS is the metric that lets you do that. Screening completion rate is mostly a within-company metric.
Where it stops being right
It stops being right when the validation behind cNPS depends on an assumption that broke. The metric was validated in funnels where the surveyed population was the same as the population whose experience you wanted to measure. In a bot-flooded 2026 funnel, those two populations have separated. Rigor on the wrong sample is rigor on the wrong sample.
Read both metrics together. cNPS gives you the brand-level number that compares to industry. Screening completion rate gives you the operational signal that says where to act this week. The post-rejection survey still has work to do. It’s just not the metric that should drive the next sprint.
A working week when you read both metrics together
Back to the QBR slides. Same Customer Success role, same 422 applicants, same survey cadence. Different setup.
What the candidate side looks like
The careers page funnels into a single Position Link. One URL. The candidate taps it on a phone, gets a 90-second welcome from the hiring manager, answers four screening questions on video, and submits. Total candidate time, between eight and twelve minutes. Resumes go through Truffle’s scoring against the criteria you set during intake. Recorded responses come back transcribed and ranked. At the top of the dashboard, Candidate Shorts compress each candidate’s most revealing moments into about thirty seconds. AI Match shows how closely each response aligns with the criteria. The post-rejection cNPS survey still goes out after the requisition closes.
What the dashboard looks like
The dashboard now reads two metrics next to each other. cNPS sits at plus 38, basically flat. Screening completion rate sits at 40 percent, a metric that didn’t exist on your old reporting. You watch it weekly. The week you launch a new question set the rate drops to 31 percent, and you know on Friday the new questions aren’t working. The week after, you cut two questions and the rate moves back to 44. Three months later the cNPS for the quarter comes in at plus 44, and you already know why.
What the recruiter side looks like
Tuesday morning, you sit down with coffee and a list of 168 finishers ranked by AI Match. You watch the top 20 Candidate Shorts in twelve minutes. You read AI Summaries for the next 20. By 10 a.m. you have eight people to schedule. The remaining 160 get a clear two-day decision and the kind of follow-up that turns a structured candidate experience into something candidates remember. Next week’s screening completion rate ticks up another two points. You don’t have to wait for October to know something worked.
The post-rejection cNPS survey still runs. It still gives you the brand-level read. It now tells you a story you already know the punchline of, instead of one you find out about a quarter late.
Two metrics, two jobs
cNPS measures whether the experience was worth recommending after it was over. Screening completion rate measures whether the experience was worth finishing while it was happening.
What the wrong reading produces
Read cNPS in isolation and the dashboard reports a candidate experience score for the candidates who survived your candidate experience. The score is real. The reading is incomplete. Quarters pass between when a candidate flags friction and when somebody acts on it. The next QBR claps for the same plus 38 and the recruiter pulling Customer Success backfills knows something the slide doesn’t.
What the right reading produces
Read both, and the dashboard tells the same story from two angles. Screening completion rate moves first and tells you what to fix this week. cNPS moves second and tells you whether the fix held at the brand level. The first is operational. The second is reputational. Together they’re the closest most TA teams will get to a real read on candidate experience.
The first reading has been the standard for so long that most teams default to it without checking. The first page of Google hasn’t caught up to the second one yet. Yours can.
Frequently asked questions about candidate NPS
What is a good candidate NPS score?
Above 0 means more candidates would recommend you than not. Plus 30 to plus 70 is the strong band. Above plus 70 is excellent and rare. Industry shapes the ceiling. Tech and SaaS tend to clear plus 45. Operations, logistics, retail, and frontline service typically sit between plus 15 and plus 25. Track screening completion rate next to cNPS, because it moves first and tells you what your cNPS will likely say a quarter from now.
How do you calculate candidate NPS?
cNPS is the percentage of promoters (9-10) minus the percentage of detractors (0-6). Passives at 7-8 drop out. So 40 percent promoters minus 20 percent detractors is a cNPS of plus 20. The score in theory ranges from minus 100 to plus 100. In practice, most companies sit between minus 20 and plus 50.
When is the best time to send the candidate NPS survey?
The cleanest single touchpoint is shortly after the requisition closes, with a considerate rejection or hire notification attached. The candidate’s memory is fresh and the outcome is no longer pending. Multi-stage cNPS adds touchpoints at application, screening, and offer, but response rates collapse at the earliest stages, which is where the candidates most likely to be detractors are also most likely to leave the sample.
Why is my candidate NPS high while candidates are still complaining?
Survival bias. The cNPS sample is built from people who completed enough of your process to receive a survey. The candidates who quit at screening because the experience felt like contempt aren’t in the data. Read screening completion rate next to cNPS and the gap between them is usually where the complaints are coming from.
Should we send candidate NPS surveys to candidates we rejected?
Yes, and pairing the survey with a considerate rejection email is the best place to ask. Rejected candidates are more honest than hired ones, and the response is structured around an outcome they’ve now received. The survey won’t catch the candidates who quit before reaching the rejection stage. That’s a different metric, and screening completion rate is the one that captures it.