FIELD MEMO · APR 23, 2026
Six Principles for AI in PE Portfolios
What senior operating partners are learning about AI deployment in PE portfolios.
From the 2026 Private Capital Global Operating Partner Summit: a day of fireside chats, a roundtable, and an AI panel covered finance transformation, cross-functional value creation, and AI implementation across PE portfolio companies. The sessions ranged across topics, but the operating disciplines they kept surfacing were the same. Six principles follow, drawn from how senior operating partners are actually working through the current moment.
Where do AI failures actually come from?
A team builds an AI tool. It doesn't work. The operator asks where it broke and the team can't say, because no one set up evals at the start. A different team buys a $20K SaaS pilot, the pilot fails, and the real version now has to be built on top of the graveyard. An executive watches a weekend prototype get built in Claude and proposes replacing the company's ERP with it, pulling attention away from the systems of record that actually run the business.
None of these are AI failures. They're failures of the work that should have come before: no measurement system, procurement instincts that treated a strategic capability like a cheap experiment, no honest read on what the existing systems actually do.
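What "a measurement system" means in practice fits in a dozen lines. A minimal sketch, with hypothetical names (run_tool, golden_cases.json) and deliberately crude exact-match scoring; the point is that a file of agreed test cases and a loop like this, set up before the build starts, is what makes "where did it break?" answerable later.

```python
import json

def run_tool(prompt: str) -> str:
    """Placeholder for the AI tool under evaluation (hypothetical)."""
    raise NotImplementedError("wire this to the actual tool or vendor API")

def score(expected: str, actual: str) -> bool:
    """Crude exact-match scoring; real evals would use task-specific checks."""
    return expected.strip().lower() == actual.strip().lower()

def run_evals(path: str = "golden_cases.json") -> None:
    # golden_cases.json: [{"input": "...", "expected": "..."}, ...]
    with open(path) as f:
        cases = json.load(f)
    failures = []
    for case in cases:
        actual = run_tool(case["input"])
        if not score(case["expected"], actual):
            failures.append((case["input"], case["expected"], actual))
    print(f"{len(cases) - len(failures)}/{len(cases)} cases passed")
    for prompt, expected, got in failures:
        print("FAILED:", prompt[:60])

if __name__ == "__main__":
    run_evals()
```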
Principle 1: Every AI failure traces to a pre-AI failure.
How do you get started on a new AI initiative?
The framework most operators are using to find AI opportunities is the overlap between repeatable process and large headcount. Where many people are doing the same thing the same way, there's leverage.
The trap is in the word "same." A use case gets sized as a hundred people doing the same thing; the team digs in and finds eight workflow variants underneath what looked like one process. The development effort doesn't simply scale by eight; the tuning, the evals, and the edge-case handling are what explode. The savings number gets walked back. The framework was right; the workflow validation step got skipped.
A portfolio company brings in AI tools that look powerful in the demo. A former CIO now spends most of his time gluing those tools into incumbent systems that were never designed to plug into anything. The technology wasn't the bottleneck; the integration was. Three weeks for what should have been three hours. The workflow had been built around the old systems' constraints, and the new tools inherited every one.
This is why carve-outs came up repeatedly as the cleanest opening for serious AI work. In a carve-out you have to rebuild the workflow from scratch anyway, which means you can ask the workflow question first and the tool question second. One operator described replacing what would have been a 100-FTE customer-support stand-up with an automated solution and a small technical team. That was possible only because the workflow was being designed, not retrofitted.
Most operators don't have a carve-out to work with, but the posture is borrowable. The exercise: if you were rebuilding this capability from scratch today, knowing what you know about the tools available, what would you actually design? You may not follow the answer to the letter, but the gap between that designed workflow and the one you've inherited is where the real AI opportunity lives. Most opportunity lists start with "where can we add a tool"; the better list starts with "where is our inherited workflow furthest from the one we'd build today."
In practice, most teams start with the tool. The workflow question gets treated as something to figure out during implementation, when it's already too late to ask it cleanly. The tool then inherits everything that was wrong with the underlying workflow, plus a higher run rate. Use the framework to find candidates, but don't start installing the tool until you've validated the workflow.
Principle 2: Validate the workflow before you install the tool.
Which AI use cases are worth pursuing?
A panelist on the AI panel offered a framing: AI initiatives are most likely to return value when they compress the time to retrieve, summarize, or generate from existing knowledge. He called this zero-cost knowledge retrieval.
The shape of the work matters. RFP responses, customer service triage, sales enablement, internal research, document Q&A. These compress because the knowledge already exists and the labor cost was the time it took a human to surface, package, or deliver it. AI shortens that time. The use case has a working economic model.
Product ideation is a weak case under the same logic. The model produces a hundred bad ideas for every good one, and the human is back in the position of evaluating each one. The original work didn't compress; it just shifted location. Other weak cases share the same shape: tasks that require creative judgment, qualitative evaluation, or synthesis of contradictory inputs into a single defensible position.
Principle 3: Bet on retrieval, not ideation.
Who's actually going to do the work?
The conversation about a new AI initiative usually starts in the right place. Where's the opportunity? What's the use case? Who should own it? It rarely gets to the harder question underneath: who specifically is going to do this, and what are they going to stop doing to make room?
The initiative gets handed to a team already running at capacity, dropped onto a priority list that's already too long, and the question of what's coming off that list to make room never quite gets asked. Six months later the initiative is technically in flight and producing nothing.
Two AI projects at two different accounting firms make the case. One firm pulled a senior billable person off client work to lead the project full-time. The other left everyone in their day jobs. Months in, only the first is producing results. The second team is still juggling client work.
The pattern repeats at scale. One large healthcare-services portfolio company replatformed the foundation of its core business, pulling its most domain-experienced operators off existing work to lead the rebuild. That is hard to do, because by definition you're pulling your best people off the work that funds the rebuild, but the executive sponsor made the call and it worked.
The finance session named the same dynamic without AI vocabulary. CFOs who restructure their teams in the first ninety days move the business. CFOs who try to do the team work on the side while running the close get consumed by the close. The team work doesn't die; it just doesn't happen.
Real change requires asking the capacity question out loud and reallocating your best people, on purpose, away from the work that funds the change. The most common way for a serious initiative to fail is for the trade to never get named.
Principle 4: Name the trade out loud.
Why is AI ROI so hard to size?
The instinct is to size the AI initiative precisely before committing real money. Build the business case. Get the ROI to two decimal places. Then approve. The room kept dismantling that instinct from two directions.
First direction: nobody actually has a stable view of total cost. One firm audited the all-in cost of AI tools across the portfolio and found that the platform license was often only 20-25% of the real spend. Tokens, compute, and storage made up the rest, and that ratio is moving as vendors shift from seat-based to usage-based pricing. A $50K license became a $200K-plus annual run rate without anyone meaning to expand the program.
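The arithmetic behind that ratio, as a minimal sketch; the line items and dollar figures below are hypothetical and only echo the proportions described above.

```python
# Hypothetical all-in cost build-up, illustrating a license that is
# only ~25% of real spend. All figures are made up for illustration.
license_fee = 50_000   # seat/platform license, USD per year
tokens      = 95_000   # usage-based model/API charges
compute     = 40_000   # inference and hosting compute
storage     = 15_000   # vector stores, logs, retained data

total_run_rate = license_fee + tokens + compute + storage
license_share = license_fee / total_run_rate

print(f"All-in annual run rate: ${total_run_rate:,}")   # $200,000
print(f"License share of spend: {license_share:.0%}")   # 25%
```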
Second direction: the productivity gains themselves are uneven and concentrated. The often-cited 40%+ productivity number for AI-assisted coding is real, but it lands almost entirely on junior and mid-level engineers. Top engineers are much harder to displace, so a team with a heavy senior pyramid sees less of the gain than the headline implies. The gain is real; the average is misleading. Sizing has to be done at the team level, not the headline level.
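Why the headline average misleads, as a minimal sketch; the seniority mix and per-level gains are hypothetical assumptions, but the headcount-weighted mechanics are the point.

```python
# Hypothetical team-level sizing: a headline "40%+" gain that lands
# mostly on junior and mid-level engineers dilutes quickly on a
# senior-heavy team. All numbers below are illustrative assumptions.
def blended_gain(mix: dict[str, float], gains: dict[str, float]) -> float:
    """Headcount-weighted average productivity gain across levels."""
    return sum(mix[level] * gains[level] for level in mix)

per_level_gain = {"junior": 0.45, "mid": 0.35, "senior": 0.10}

junior_heavy = {"junior": 0.5, "mid": 0.3, "senior": 0.2}
senior_heavy = {"junior": 0.1, "mid": 0.3, "senior": 0.6}

print(f"Junior-heavy team: {blended_gain(junior_heavy, per_level_gain):.0%}")  # ~35%
print(f"Senior-heavy team: {blended_gain(senior_heavy, per_level_gain):.0%}")  # ~21%
```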
ROI on AI is directionally useful and mostly inaccurate. Pick the workflow worth attacking, commit real resources, ship something, learn what the actual costs and gains look like in your context, and resize quarterly. Static measures of what good looks like don't survive a quarter at the current pace of change.
Principle 5: You can't size a cost that's still discovering itself.
When does it actually make sense to build instead of buy?
The room's answer was lopsided toward buy, for a specific reason. Vendors selling category tools, in spend management or revenue intelligence or service management, have done many more implementations than any single portco can. They know which workflows fail in which contexts. Buying inherits that learning. Building means relearning it on your own data and timeline.
Build came up in only two narrow cases. The first: highly regulated, multi-year, complex programs where the workflow combines proprietary data, analytical depth, and a feedback loop no off-the-shelf tool can match. The second: internal firm-level tools that have to encode the firm's own accumulated judgment. One operator described building an internal tool that pressure-tests new deals against the firm's own IC history, surfacing why deals that look like this one have gone right or wrong before. No vendor is going to encode that.
The pattern underneath both cases: would any vendor have a commercial reason to build this for many customers? If yes, buy. If no, you have your build case.
The trap is the middle case: portfolio companies trying to do advanced AI development themselves, without the talent or the focus, because someone got excited and the test never got applied.
Buy where vendors have done the work, build only where context forces it, and apply the test before deciding. Most portfolio companies should focus on applying AI to their core operations, not on building the underlying infrastructure.
Principle 6: Build only when no vendor has a reason to.
What separates the operators furthest ahead?
The operating partner who put it most directly described his approach as paranoia-driven learning. Daily personal use of the tools. Mandated coursework for the team. Hands-on experimentation with new capabilities at the firm level, building internal tools, running structured pilots, working directly with vendors, well before recommending anything across the portfolio.
That hands-on familiarity is the thing most easily skipped. It's also the thing that determines whether you can tell, when a CEO describes an AI initiative on a board call, that the work behind it is real. Without your own hands on the tools, you're taking their word for it. The operators furthest ahead in this room would not take anyone's word for it, including their own from six months ago.
MARTEL CAMPBELL
Replies welcome: martel@martelhealth.com
© 2026 Martel Health