# How to Run an Expert Call Program for Commercial Due Diligence (Without the Usual Mistakes)
*A practical guide from The Continental Exchange*
---
Most first-time users of expert networks waste between 30 and 40 percent of their call budget. Not because the experts were bad or the questions were wrong — because the program design was off before the first call was scheduled.
After facilitating expert research across 195 engagements and 17 industries, we've watched the same patterns of success and failure repeat across PE firms, strategy consulting teams, and corporate development groups. The problems aren't random. They're predictable. Which means they're preventable.
Here's what the programs that work do differently.
---
## Start With the Research Design, Not the Expert List
The single most common mistake in expert call programs is asking "who can we talk to?" before answering "what question are we trying to answer?"
These sound like the same thing. They're not.
A vague brief produces vague expert specifications. "A fleet manager" is not a useful specification. "A fleet manager who directly oversees 100-500 operational vehicles in a service-heavy sector — HVAC, healthcare delivery, construction — and who has evaluated fleet management software in the past 18 months" is a useful specification.
The discipline of writing specific expert profiles forces clarity on what you're actually trying to learn. If you can't describe the expert precisely, you probably haven't defined the question precisely.
Good research design answers three things before any expert is recruited:
1. **What are the three or four specific hypotheses we're testing?** Not "understand the market" but "determine whether customers would switch from their current fleet management company (FMC) to a software-only alternative at a lower price point."
2. **Which expert profile would have direct knowledge relevant to each hypothesis?** The expert who can confirm or disprove "switching behavior" is different from the expert who can confirm or disprove "competitive landscape."
3. **What do we need to believe by the end of this program, and what data would change our belief?** This question forces intellectual honesty about what the research is actually for.
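This discipline can be made concrete by writing the brief down as a structure before any recruiting begins. A minimal sketch in Python; every name, field, and example value here is illustrative, not a prescribed format:
```python
from dataclasses import dataclass, field

@dataclass
class ExpertSpec:
    """A precise profile of who would have direct knowledge of one hypothesis."""
    role: str             # e.g. "fleet manager, 100-500 operational vehicles"
    must_haves: list[str] # non-negotiable screening criteria
    angle: str            # "customer", "competitor", or "former_employee"

@dataclass
class Hypothesis:
    """One specific, testable claim the program is designed to test."""
    statement: str
    expert_specs: list[ExpertSpec] = field(default_factory=list)
    would_change_belief: str = ""  # the evidence that would overturn this claim

brief = [
    Hypothesis(
        statement=("Customers would switch from their current FMC to a "
                   "software-only alternative at a lower price point."),
        expert_specs=[ExpertSpec(
            role="Fleet manager overseeing 100-500 vehicles in a service-heavy sector",
            must_haves=["evaluated fleet management software in the past 18 months"],
            angle="customer",
        )],
        would_change_belief=("Churned customers cite service quality, not price, "
                             "as the switching driver."),
    ),
]
```
If a hypothesis ends up with an empty `expert_specs` list, the question behind it probably hasn't been defined precisely enough to recruit against.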
---
## Use the Three-Angle Structure — Every Time
Across 195 engagements, the programs that produced findings clients described as "deal-changing" or "surprising" had one thing in common: they structured expert outreach across all three angles — customers, competitors, and former employees.
Programs that collapsed to a single angle produced systematically incomplete intelligence.
**The customer angle** tells you what buyers actually experience versus what vendors claim. Happy customers, churned customers, and prospects who evaluated but didn't purchase all tell you different things. Restrict your customer calls to reference accounts that the vendor recommends, and you're reading a marketing document with a human voice.
**The competitor angle** tells you market structure — who's winning, who's losing, and why the market organizes the way it does. Former executives at competing firms are almost always preferable to current ones. They're more candid, and the compliance calculation is simpler.
**The former employee angle** tells you operational ground truth — the customer concentration that management describes as "diversified," the product roadmap commitment made to three anchor accounts, the internal debate about the pricing model. You can't get this from customers or competitors. Only someone who was inside can tell you.
Remove any one of these three, and you have a predictable blind spot. Remove two, and the research is likely to confirm what you already believe.
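One lightweight way to keep the triangulation honest is a coverage check on the call plan before kickoff. A hypothetical sketch; the angle labels and plan format are ours, not a standard:
```python
REQUIRED_ANGLES = {"customer", "competitor", "former_employee"}

def coverage_gaps(planned_calls: list[dict]) -> set[str]:
    """Return the angles a call plan fails to cover.

    One missing angle is a predictable blind spot; two missing angles
    means the research will mostly confirm what you already believe.
    """
    covered = {call["angle"] for call in planned_calls}
    return REQUIRED_ANGLES - covered

plan = [
    {"expert": "churned customer, 300-vehicle fleet", "angle": "customer"},
    {"expert": "former VP of sales at a competing FMC", "angle": "competitor"},
]
print(coverage_gaps(plan))  # {'former_employee'} -> recruit before kicking off
```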
---
## Invest in Screening Before You Book the Call
Expert selection is the most consequential decision in any research program. More important than the questionnaire. More important than the call structure.
The screening investment pays off in two ways:
**You avoid expensive bad calls.** A 60-minute call with an expert who doesn't actually have the experience the brief requires costs the same as a 60-minute call with someone who does, but only one of them advances the hypothesis. When you're running ten calls on a tight timeline, two or three bad-fit experts are a material drag on program quality.
**You surface compliance red flags early.** The worst time to discover that an expert has NDA obligations that constrain the conversation, or is currently employed at the target company, is 20 minutes into a call. Pre-call compliance screening is not optional — it protects everyone: the client, the research firm, and the expert.
A properly run screening process confirms that (1) the expert actually has the specific experience the brief requires, (2) they're available and willing to discuss the topic areas within appropriate bounds, and (3) they don't have conflicts that limit the value of the call.
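Expressed as a gate rather than a judgment call, the three checks might look like the sketch below (the field names are ours, purely illustrative):
```python
from dataclasses import dataclass, field

@dataclass
class ScreeningResult:
    fits_brief: bool      # has the specific experience the brief requires
    available: bool       # willing to discuss the topics within appropriate bounds
    conflicts: list[str] = field(default_factory=list)  # e.g. NDA constraints, current role at the target

def clear_to_book(result: ScreeningResult) -> bool:
    """All three checks must pass before a call goes on the calendar.

    Any conflict is a hard stop: the worst place to discover one
    is 20 minutes into the call.
    """
    return result.fits_brief and result.available and not result.conflicts
```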
---
## Plan for Call Volume That Actually Reaches Signal
The most common budget mistake is allocating for five calls and expecting definitive conclusions.
Five calls orient you. They don't saturate a hypothesis.
Here's the realistic arc of an expert call program:
**Calls 1-3:** Orientation. The research team calibrates to the market language, tests whether the initial hypotheses are asking the right questions, and begins to identify which angles are most productive. Insights at this stage are real but preliminary.
**Calls 4-8:** Signal building. Patterns start to emerge. The same themes appear independently across expert conversations. You begin to distinguish what's idiosyncratic to individual experts from what's systematic.
**Calls 9-15:** Saturation and validation. New information per call drops sharply. Additional calls confirm existing hypotheses or resolve contradictions from earlier calls. You reach the point where you'd be genuinely surprised by the next call's content — which is a good indicator that you've reached saturation.
For well-defined single-topic research in a relatively consolidated market, saturation can come earlier — seven or eight calls. For complex multi-hypothesis programs in fragmented markets, fifteen or more calls may be necessary.
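The "would the next call surprise us?" test can be approximated by tracking how much of each recent call is genuinely new. A rough heuristic, assuming each call is summarized as a set of theme tags (our convention, not an industry standard):
```python
def new_theme_rate(calls: list[set[str]], window: int = 3) -> float:
    """Fraction of themes in the last `window` calls not heard in any earlier call.

    When this rate falls toward zero, additional calls are confirming
    rather than informing: a practical signal of saturation.
    """
    if len(calls) <= window:
        return 1.0  # still orienting; treat everything as new
    seen = set().union(*calls[:-window])
    recent = set().union(*calls[-window:])
    if not recent:
        return 0.0
    return len(recent - seen) / len(recent)

# Eleven calls on the fleet-software example: the last three repeat
# themes already heard, so the rate is 0.0 and saturation is likely.
history = [{"pricing"}, {"churn"}, {"onboarding", "pricing"},
           {"pricing", "churn"}, {"churn"}, {"pricing"},
           {"onboarding"}, {"churn"}, {"pricing"},
           {"churn"}, {"pricing", "onboarding"}]
print(new_theme_rate(history))  # 0.0
```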
What you should not do: run three calls, decide you have enough, and build a conclusions deck. You have enough to form hypotheses. You don't have enough to test them.
---
## Structure the Calls for Insight, Not Confirmation
Expert calls produce different outcomes depending on how they're framed.
The frame that produces bad outcomes: presenting your hypothesis and asking the expert to confirm it. "We believe the market is moving toward software-only fleet management. Do you agree?" This produces agreement from agreeable experts and disagreement from contrarian ones. It doesn't produce ground truth.
The frame that produces good outcomes: using the expert's experience as data. "Walk me through the last major fleet technology evaluation you were involved in — what triggered it, who was in the room, what criteria drove the final decision." This produces specific, observable experience that either confirms or challenges your hypothesis without asking the expert to validate it for you.
Open questions about specific experiences outperform closed questions about general beliefs. "What happened when you switched from Geotab to Samsara?" produces more usable intelligence than "What do you think is driving telematics adoption?"
The other structural rule: save your most important questions for the first half of the call, not the end. Calls run over, conversations go sideways, experts get interrupted. The question you absolutely need answered should not be the one you get to with ten minutes left.
---
## Don't Forget: Wrong Expert, Wrong Data
After all the research design, screening, and question structure — if the expert doesn't have the specific experience the question requires, the call won't produce what you need.
This sounds obvious. But in practice, expert specifications drift under scheduling pressure. The ideal profile was "a fleet manager with 100-500 vehicles in a service vertical who evaluated fleet management software in the past 18 months," but the calendar was tight, so you took a call with a fleet manager who runs 50 vehicles and hasn't evaluated any software in three years. That call didn't fail because of bad questions. It failed because the expert was wrong for the hypothesis.
Hold the specification. Delay calls that don't meet the brief. The timeline pressure is always real; the quality compromise is rarely worth it.
---
The programs that get it right are disciplined about design before they're diligent about execution. Define the question. Specify the expert. Screen before scheduling. Allocate for the volume that reaches signal. Keep the angles triangulated.
That's the playbook. It's not complicated. It just requires applying it before the calendar gets full.
---
*The Continental Exchange facilitates expert research programs for PE firms, strategy consulting teams, and corporate development groups. With 195 engagements across 17 industries, we've built the process around what actually works.*
*[contact@thecontinentalexchange.com] | [www.thecontinentalexchange.com]*

