Why your hiring velocity is going up and your hires are getting worse
Most hiring velocity advice ignores what is underneath the number. Here is how velocity, interview-to-hire ratio, and net-hire ratio actually fit together.
The two slides went up on Monday morning at the QBR. Hiring velocity, 18-day average, up from 26 in Q1. Interview-to-hire ratio, 4:1, down from 7:1. The CFO did the small nodding thing she does when a metric is moving the way she wants. The CRO clapped, sincerely. Twenty minutes later you were in the kitchen pouring coffee, and somebody from Customer Success leaned in to ask whether the third Customer Success Lead this year was going to stick around, because two of the last three had washed out by day 90 and the team was tired of training the same role.
You said something honest. The dashboard says the position’s fine. You went back to your desk anyway, pulled the post-90-day data for the eight hires you’d made that quarter, and three of them were already gone.
None of the velocity guides will name what’s happening. Your hiring velocity, your interview-to-hire ratio, and your net-hire ratio are one metric in three views, and the only view that tells you anything is the one that holds up after 90 days. The dashboards measure stage-transit speed. They don’t measure whether the people moving across stages are the people you meant to advance. In a funnel where AI-written resumes and auto-appliers can pass the upstream stages, your velocity number is improving partly because the input population changed. Not because your team got better at hiring.
Most hiring velocity advice was built for a funnel where the resumes were real
The standard playbook isn’t wrong. It’s solving the wrong scale of problem.
Where the velocity playbook came from
When the metric got its current shape, a typical mid-market position drew 60 to 80 applications, and most of the candidates who cleared the resume stage had read the description and meant it. The fix worked. Shorten the loop. Set a 24-hour feedback target. Standardize the rubric. Cycle time dropped from 35 days to 18, and the people you hired in 18 were roughly the same people you’d have hired in 35.
That funnel is what every guide on the first page of Google still describes. They’ll tell you the average interview-to-hire ratio is around 9:1, that 3:1 is good, and that anything between 4:1 and 20:1 is normal depending on role. Real numbers. They just don’t describe the funnel you’re running today. The math has changed shape because the population moving through the stages has.
What broke in 2026
The team that ran 60 candidates per role is now running 422. Some are humans. Some are running browser extensions that auto-apply across thirty postings in an afternoon. Some are pasting your description into ChatGPT and getting back a tailored resume in 8 seconds. Some are bots a high-volume hiring team learns to recognize by Wednesday. The resume layer was always the easiest signal to fake. Now it’s so easy to fake that what gets through is mostly indistinguishable from what you wanted, until somebody starts the job.
You’re not hiring faster because the team got better. You’re hiring faster because the upstream stages are more permeable, and they’re mostly permeable to candidates who don’t survive 90 days.
What hiring velocity, interview-to-hire ratio, and net-hire ratio actually measure
Three definitions live on three different SERP pages. Putting them next to each other is the first useful move.
The three formulas, side by side
| Metric | What it measures | Standard calculation | What it can hide |
|---|---|---|---|
| Hiring velocity | Stage-transit speed | Positions filled ÷ positions opened in the same period, or average days per stage | A funnel that compressed by dropping a stage, not by filtering better |
| Interview-to-hire ratio | How clean the path to hire looked | Total interviews ÷ total hires in the same period | A screening layer that isn’t pre-validating before the live interview |
| Net-hire ratio | Net headcount change, and (with a delay) hire durability | New hires ÷ terminations in the same period | Everything, if you don’t apply a 90-day delay to the numerator |
A few quick benchmarks for context. Hiring velocity at the mid-market sits between 18 and 28 days end-to-end, with 14 fast and 35-plus slow. Interview-to-hire ratio averages around 9:1 across roles, with 3:1 to 4:1 considered strong. Net-hire ratio above 1.0 means the company’s growing. None of those numbers mean very much in isolation, which is the whole point.
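The three calculations in the table reduce to simple division; a minimal sketch, using illustrative quarter counts rather than benchmarks:

```python
# The three standard calculations, side by side.
# All inputs are illustrative numbers for one quarter, not benchmarks.

def hiring_velocity(positions_filled, positions_opened):
    """Positions filled per position opened in the same period."""
    return positions_filled / positions_opened

def interview_to_hire(total_interviews, total_hires):
    """Interviews conducted per hire made in the same period."""
    return total_interviews / total_hires

def net_hire_ratio(new_hires, terminations):
    """New hires per termination in the same period (flat window)."""
    return new_hires / terminations

print(hiring_velocity(8, 10))    # 0.8 — filling slower than roles open
print(interview_to_hire(40, 5))  # 8.0 — an 8:1 ratio
print(net_hire_ratio(8, 5))      # 1.6 — headcount growing
```

Each function reports cleanly on its own, which is exactly the problem: none of the three inputs says anything about who the hires were.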
What all three quietly assume
Every one of those formulas assumes one thing. The candidate at every stage is the candidate you think they are. Velocity assumes a real person filled the position. Interview-to-hire assumes the candidates being interviewed are the ones you’d hire if they passed. Net-hire assumes hires and terminations are drawn from the same population on a delay. In a funnel where the upstream population has shifted, those assumptions are quietly wrong, and the dashboards keep right on reporting as if nothing changed.
This is what the recruitment funnel view of the same problem misses when it stops at conversion rates and never asks whether the population converting is real. Stage-to-stage conversion in a bot-flooded funnel reports cleanly. The hires that come out of it don’t.
The metric the industry is quietly converging on
Quality of hire. SHRM benchmarks it annually. AIHR has built a whole methodology page around it. It barely shows up on most TA dashboards because measuring it cleanly requires a delay window the dashboards aren’t designed to apply. Net-hire-at-90 is the simplest honest version of quality of hire that a TA team can actually run. It’s the same question (did the people you hired turn out to be the right people) with a calendar attached so the answer is computable. Read your three ratios with that lens and they stop competing for your attention. They start describing a single thing.
The funnel-efficiency metric that actually predicts a good hire
The three ratios are one metric in three views. How fast you got there. How clean the path looked. How durable what you produced turned out to be. Each can be improved by something other than getting better at hiring. Read together, with a screening layer underneath, the three describe a real engine.
Velocity tells you how fast
Velocity is the easiest of the three to spike. Drop a stage. Compress two interviews into one. Push a faster offer. Cycle time moves immediately, and none of it changes the candidates passing through. The hires of that quarter look about the same as last quarter, except in the cases where the people who used to filter themselves out at the missing stage no longer do.
The bottleneck the dashboards rarely name
End-to-end velocity gets blamed on the recruiting team. Stage-by-stage velocity tells a different story. The longest single delay in most mid-market funnels isn’t recruiter sourcing or scheduling. It’s the gap between a recruiter advancing a candidate and the hiring manager actually reviewing them. Three days. Five days. Sometimes a full week if it lands in a sprint where the manager has back-to-back planning. That delay shows up on the QBR slide as “slow recruiting.” It almost never is. Track stage velocity alongside end-to-end velocity and the bottleneck becomes visible. The fix is usually a structured manager-review block with a 24-hour SLA, not a faster recruiter.
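Finding that bottleneck is a small computation once you can export per-candidate stage timestamps. A sketch under that assumption; the stage names and dates here are hypothetical:

```python
# Stage-by-stage velocity for one candidate, from hypothetical ATS export data.
from datetime import date

# (stage, date the candidate entered that stage)
history = [
    ("applied",          date(2026, 3, 2)),
    ("recruiter_screen", date(2026, 3, 4)),
    ("manager_review",   date(2026, 3, 11)),  # the quiet week-long gap
    ("onsite",           date(2026, 3, 13)),
    ("offer",            date(2026, 3, 16)),
]

# Days spent waiting to reach each stage from the previous one
gaps = {
    later: (d2 - d1).days
    for (_, d1), (later, d2) in zip(history, history[1:])
}
bottleneck = max(gaps, key=gaps.get)
print(gaps)        # {'recruiter_screen': 2, 'manager_review': 7, 'onsite': 2, 'offer': 3}
print(bottleneck)  # manager_review
```

Averaged across a quarter's candidates, the same calculation names the stage the QBR slide blames on "slow recruiting."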
Interview-to-hire ratio tells you how clean the path looked
Interview-to-hire improves when your screening layer is doing its job. A 4:1 ratio means three out of four candidates who reached the interview stage didn’t become hires. A 2:1 ratio means one out of two did. The improvement reads as efficiency. It also reads as a screening layer that pre-validated candidates before the live interview, so by the time the recruiter spent 45 minutes on a Zoom, the people in the slot were already calibrated for the role. Both readings produce the same number on the slide.
Net-hire ratio tells you how durable
Net-hire is the metric most teams measure on a flat window. This quarter’s hires divided by this quarter’s terminations. The version that matters has a 90-day delay applied to the numerator. Run net-hire ratio with hires restricted to the people who survived past day 90 and the durability of your funnel becomes legible. The honest cycle-time number is the one that survives the attrition filter. If your 18-day velocity produces hires whose net-hire-at-90 reads under 1.0, the velocity number isn’t measuring hiring. It’s measuring how fast you’re producing the next backfill.
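The 90-day filter is a one-line condition on the numerator. A sketch, assuming you have start and end dates per hire; the records below are hypothetical:

```python
# Net-hire ratio with the 90-day survival filter applied to the numerator.
from datetime import date, timedelta

def net_hire_at_90(hires, terminations, as_of):
    """Hires who survived past day 90, divided by terminations in the period."""
    durable = sum(
        1 for start, end in hires
        if (end or as_of) - start >= timedelta(days=90)
    )
    return durable / terminations

# Hypothetical records: (start_date, end_date or None if still employed)
hires = [
    (date(2026, 1, 5),  None),              # still in seat
    (date(2026, 1, 12), date(2026, 3, 1)),  # gone by day 49
    (date(2026, 2, 2),  None),              # still in seat
]
print(net_hire_at_90(hires, terminations=2, as_of=date(2026, 6, 30)))  # 1.0
```

The flat-window version of the same quarter would report 3 ÷ 2 = 1.5 and look healthier than it is.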
A pre-validated screening layer, with evidence at the resume stage and again at the interview stage and optionally a third time at the assessment stage, is what makes the three ratios cohere. It’s also the part the SERP velocity guides skip. Their prescription is to shorten the loop. Shorten the loop without strengthening the layer underneath, and you get the dashboard you opened Monday morning.
“But slowing down will cost us candidates who go elsewhere”
This is the strongest version of the objection.
The argument runs like this. Real candidates are interviewing in five processes at once. Every extra day in your loop is another day for somebody else to make them an offer. Better to keep velocity high, push interview-to-hire ratio down, and accept that some hires won’t stick than to slow the funnel and watch your best-fit candidates accept somewhere else.
Two things are true about that argument.
Where the objection is right
Candidates do drop out of slow processes. The senior end of the market moves fast enough that a five-day delay between interview rounds can cost you the offer. The 24-hour-feedback discipline the velocity guides preach is real and worth keeping. Speed itself isn’t the enemy.
Where it stops being right
It stops being right when “slow” gets blamed for what’s actually unstructured. Candidates rarely drop because a process took eight days instead of six. They drop because they can’t tell what’s happening, who’s reviewing what they sent, or whether the work they already submitted is being read. A funnel running on pre-validated evidence at every stage is faster than one running on volume. The recruiter is reviewing 40 evidenced candidates instead of 300 lookalike ones. The interview is calibrated to a criterion the recruiter already saw the candidate clear in a structured response. The 24-hour feedback target is easier to hit because the team knows what they’re deciding on.
The choice isn’t between fast and slow. It’s between fast-and-validated and fast-and-noisy. The first produces a velocity number that survives 90 days. The second produces a velocity number you have to re-run next quarter on the same role.
A working quarter looks different when velocity is read with the other two
Back to the QBR slides. Same role, same Customer Success Lead position, same volume of inbound. Different setup.
What the candidate side looks like
The careers page funnels into a single Position Link. One URL. The candidate taps it on a phone, gets a 90-second welcome from the hiring manager, answers four screening questions on video, and submits. Total candidate time, eight to twelve minutes. Resumes go through Truffle’s scoring against the criteria you set during intake. Recorded responses come back transcribed and ranked. At the top of the dashboard, Candidate Shorts compress each candidate’s most revealing moments into about thirty seconds. AI Match shows how closely each response aligns with the criteria. For higher-stakes hires you layer in a short structured assessment so the third evidence layer is in place before the first live interview.
Of the 422 applicants on the role, 168 finish the screening interview. The other 254 don’t, and that drop is the signal. The auto-applies, the half-engaged tap-throughs, and the bots can’t record a video. Form completion on the careers page reads roughly where it was. The screening completion rate is the new top-of-funnel input, and it’s what the velocity number should be running on.
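The screening-completion read above reduces to two counts from your own funnel. A quick sketch with the numbers from this role:

```python
# Screening completion rate as the new top-of-funnel input.
applicants = 422
finished_screening = 168

completion_rate = finished_screening / applicants
filtered_out = applicants - finished_screening
print(f"completion: {completion_rate:.0%}, filtered out: {filtered_out}")
# completion: 40%, filtered out: 254
```

Everything downstream — velocity, interview-to-hire, net-hire — should run on the 168, not the 422.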
What the recruiter side looks like
Tuesday morning, you sit down with coffee and a list of 168 finishers ranked by AI Match. You watch the top 20 Candidate Shorts in roughly twelve minutes. You read AI Summaries for the next 20. By 10 a.m. you have five people to schedule for a real conversation. The five live interviews produce three offers and two acceptances. Interview-to-hire ratio of 2.5:1, down from your old 4:1. Hiring velocity at 17 days, basically flat. The headline is what reads at day 90. Both Customer Success hires are still in seat, the team isn’t retraining the role, and the net-hire ratio with a 90-day filter applied is the highest it’s been in four quarters.
Three ratios, one engine, one honest reading
Velocity describes how fast you got there. Interview-to-hire ratio describes how clean the path looked. Net-hire ratio, with a 90-day window applied, describes whether what you produced was durable.
What the wrong reading produces
Read in isolation, the three ratios are a slide. Velocity climbs. Interview-to-hire ratio drops. Net-hire ratio at the flat window reads above 1.0 and the QBR claps. Two quarters later the same roles re-open, and the cost per hire on the rollover roles dwarfs whatever the velocity gain saved.
What the right reading produces
Read together, with a screening layer underneath, the three converge on the same answer. The candidates surviving each stage are the candidates you meant to advance. Velocity holds. Interview-to-hire ratio drops because the upstream layer pre-validated. Net-hire ratio at 90 days holds because the people you hired actually wanted the position. The same dashboard tells a different story, and the story is the one that holds up at the next QBR.
The first reading is the standard, which is why most teams default to it without checking. The second is what every TA lead with three Customer Success backfills already knows in their bones.
Frequently asked questions about hiring velocity
What is a good hiring velocity?
Depends what you mean. Measured as positions filled versus positions opened in the same period, anything at or above 1.0 means you’re keeping pace with demand. Measured as stage-transit time, mid-market teams sit between 18 and 28 days end-to-end, with 14 fast and 35-plus slow. Either way, the number worth tracking next to it is your interview-to-hire ratio plus net-hire ratio at a 90-day window. Hiring velocity in isolation can be improved by changes that have nothing to do with hire quality.
How do you calculate interview to hire ratio?
Total interviews conducted divided by total hires made over the same period. Forty interviews and five hires is 8:1, meaning the team interviewed eight candidates on average for every successful hire. Industry average runs around 9:1 across roles. 3:1 to 4:1 is strong. Anything past 12:1 usually points to either a sourcing problem or a screening layer that isn’t pre-validating before the live interview.
What is a good net hire ratio?
Net-hire ratio is new hires divided by terminations in the period. A 1.0 means hires equal terminations and headcount is flat. Above 1.0, the company’s growing. Below 1.0, it’s shrinking. The version that matters for hiring quality is the same calculation with a delay applied to the numerator. Hires who survived past day 90, divided by terminations, is the cleanest read on whether the funnel is producing durable hires. It’s also the closest most TA teams will get to a real quality-of-hire number on their own dashboard.
Are interview to hire ratio and offer acceptance rate the same thing?
No. Interview-to-hire ratio counts every interview that happened and divides by hires. Offer acceptance rate is offers accepted divided by offers extended. A low interview-to-hire ratio with a low offer acceptance rate means screening is tight but candidates are walking at the offer stage. The two together give you a clearer read than either on its own.
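The distinction is easiest to see with both ratios computed from the same quarter's counts; the numbers below are illustrative:

```python
# Interview-to-hire vs offer acceptance rate from one quarter's counts.
interviews, offers_extended, offers_accepted = 24, 8, 3
hires = offers_accepted  # an accepted offer is a hire

interviews_per_hire = interviews / hires                 # path to hire
acceptance_rate = offers_accepted / offers_extended      # close rate at offer
print(interviews_per_hire)  # 8.0 — an 8:1 interview-to-hire ratio
print(acceptance_rate)      # 0.375 — candidates walking at the offer stage
```

Here the screening looks tight but the close rate says the funnel is leaking at the end, which neither number shows alone.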
Should we track hiring velocity by stage or end-to-end?
Both. End-to-end velocity is the QBR number. Stage velocity tells you which step is dragging, which is rarely the step the QBR will assume. The most common stage-velocity bottleneck is hiring-manager review time, not recruiter sourcing time. Tracking both reveals whether your end-to-end number is moving because the funnel got better or because one stage compressed at the expense of another.