# What PE Firms Are Actually Asking About AI Right Now
*From 109 Research Engagements Where AI Was the Subject — Not the Tool*
---
In 109 of our last 195 research engagements, AI was the thing under investigation.
Not AI as a tool we used to conduct the research. AI as the market subject our clients were paying to understand. AI in insurance sales. AI in fleet management. AI in clinical trials. AI in CRM. AI in HR technology. AI in accounting software. AI in pharmaceutical research.
That's more than half of our engagements. And the questions have changed significantly over the past three years.
Here's what PE firms and strategy consultants are actually asking about AI right now — and what they're missing.
---
## The Question Has Changed Three Times
Understanding where AI diligence is today requires knowing where it's been.
**2022-2023: "Is this real?"** The first wave of AI-focused expert research was about legitimacy. Could this technology actually deliver what vendors claimed? Was there any there there? Research programs in this era were pulling together technical experts — CTOs, machine learning researchers, product managers at AI-native companies — to evaluate whether AI capabilities matched the marketing.
**2024: "Who's winning?"** By 2024, the legitimacy debate was largely resolved for most sectors. The new question was competitive: who had a real AI strategy and who was running a demo? Which incumbents could credibly respond to AI-native challengers? Which new entrants had genuine moats?
**2025-2026: "What's defensible?"** This is where most PE firms are now, and it's a harder research question. The assumption is that AI capability is real. The question is durability. Given that AI capabilities are increasingly distributed — available via API to anyone with a credit card and a use case — what makes any particular company's AI advantage defensible over a three-to-five-year hold period?
The current phase requires different expert profiles than the previous phases did. The legitimacy questions needed technical experts. The competitive questions needed commercial leaders. The defensibility questions need people who understand both the technology and the commercial dynamics at a level of depth most AI thought leaders don't have.
---
## The Five Questions That Actually Matter Now
Based on patterns across 109 AI-related engagements, these are the research questions that PE firms and strategy consultants are finding most valuable:
**1. Is the moat in the model, the data, or the distribution?**
This is the most important question in AI competitive diligence, and the most commonly underexamined. The answer shapes everything else.
Model-based moats are the weakest. Foundation models are improving and commoditizing rapidly. A company whose AI advantage lives in fine-tuning a model that will be outperformed by GPT-6 or whatever comes next is not sitting on a durable advantage.
Data-based moats are significantly stronger. A company with years of proprietary behavioral data, claims data, outcomes data, or interaction data that no competitor can replicate is in a fundamentally different competitive position. In one insurance engagement, the finding was clear: the carriers with genuine AI competitive advantage had built it on proprietary data that new entrants couldn't acquire — not on superior algorithms. The data moat, not the model moat, was the defensible position.
Distribution-based moats are durable but often undervalued. A company that has embedded AI into existing customer workflows — where switching would require re-training entire teams, rebuilding integrations, and losing years of model personalization — has an advantage that has nothing to do with AI capability per se.
**2. What does the customer actually see versus what the demo shows?**
Expert calls with customers who have been using AI-powered tools for six to twelve months consistently reveal a gap between demo performance and production performance. In HR technology research, AI-ranked candidate lists were frequently described as requiring full human review anyway. In fleet management, AI-powered predictive maintenance was described as theoretically compelling and operationally unproven because the underlying data infrastructure wasn't complete.
The gap isn't always disqualifying. Some tools are delivering real value in narrow, well-defined workflows. But the size of the gap — between what a company shows in a sales process and what customers experience in production — is one of the most diagnostic signals in AI diligence.
**3. Where is the regulatory exposure building?**
AI regulation is accelerating, and the sectors with the most advanced regulatory scrutiny — healthcare, financial services, insurance, and hiring — are exactly the sectors where AI adoption is deepest.
In insurance, state-level fair lending and rate discrimination rules are already constraining how AI-driven pricing can be deployed. In hiring, algorithmic bias in AI-powered candidate screening is an active legal risk in multiple jurisdictions. In healthcare, FDA pathways for AI-enabled medical devices and clinical decision support tools are still developing.
The expert profiles needed to assess regulatory exposure are different from the profiles needed to assess technical capability or commercial adoption. Regulatory affairs executives, compliance counsel who have navigated AI scrutiny, and former regulators who have reviewed AI systems are the relevant voices here — and they're systematically underrepresented in most AI diligence programs.
**4. Who is actually making AI decisions inside the target company?**
This is the most underasked question in AI diligence. Research programs typically focus on what an AI system can do. The more diagnostic question is whether the organization buying or building it is positioned to make good decisions about it.
In one HR technology engagement, interviews with CHROs revealed a consistent pattern: AI purchasing decisions were driven by vendor relationships and conference demos rather than rigorous capability evaluation. The buyers weren't equipped to distinguish genuine AI capability from sophisticated automation.
The talent risk is symmetrical on the sell side. AI teams at portfolio companies that are dependent on two or three key technical leaders have a talent concentration risk that doesn't appear on the balance sheet but matters enormously for the AI roadmap.
**5. What happens to the AI moat when the underlying model improves?**
This is the question nobody wants to ask in a bull market for AI, but it's increasingly relevant. If a company's AI advantage is built on GPT-4 or Claude 3, what happens when GPT-6 ships and makes the same capability available to every competitor who integrates the API?
The companies with durable AI advantages have built something that improves as the underlying models improve — because the moat is in the data, the workflows, the integrations, or the proprietary training signals that compound over time, not in access to a model that everyone can soon access.
---
## What PE Firms Are Missing
Three consistent gaps in how AI diligence is being conducted:
**1. The sourcing paradox isn't being managed.**
The people most knowledgeable about how AI is reshaping a given market are often still employed by the companies driving that change. They're off-limits. The next most knowledgeable are former executives who left AI-forward companies recently enough to retain current context — and they're the most competed-for expert profiles in the market.
Firms that rely on standard panel-based expert sourcing are systematically getting lower-quality AI experts than firms doing custom recruitment. The AI expertise that matters for defensibility questions isn't sitting on panels — the people who hold it left the relevant companies six to eighteen months ago and are now working somewhere else.
**2. Technical experts are overweighted.**
The legitimacy phase required technical experts. The defensibility phase requires commercial experts — people who have sold AI products, bought AI products, or watched companies make AI bets that succeeded or failed. The current expert profile demand has shifted, but sourcing strategies haven't fully caught up.
**3. AI as topic is being conflated with AI as capability.**
The fact that a company uses AI in its product doesn't mean its AI is a competitive advantage. It might mean it bought an API subscription. Expert research in AI diligence should distinguish clearly between AI-as-feature (commodity), AI-as-differentiation (meaningful but potentially replicable), and AI-as-moat (durable, defensible, and tied to proprietary assets the company uniquely controls).
That distinction is harder to make than most AI pitch decks suggest — and more important than most diligence processes acknowledge.
---
The questions PE firms are asking about AI are getting sharper. The research programs designed to answer them are still catching up.
The firms that will win on AI diligence are the ones who've moved from "is this technology real?" to "what makes this specific advantage durable?" — and who are recruiting the right experts to answer that harder question.
---
*The Continental Exchange has facilitated AI-focused expert research across insurance, fleet management, HR technology, clinical trials, pharma, CRM, and accounting — 109 of our 195 most recent engagements. For AI diligence research inquiries: [contact@thecontinentalexchange.com]*

