Hello World, Can We Talk?
Companies do not get ten years anymore.
That sentence may sound harsh.
Good.
For thirty years, enterprise technology lived inside a forgiving world. Bad systems survived. Bad implementations survived. Bad leadership survived. Shelfware survived. Consulting theater survived. Customers waited on hold. Employees endured broken workflows. Executives blamed “the system.” Boards tolerated underperformance because everyone else seemed equally slow.
That world is ending.
Not because AI is magic.
Because AI amplifies the visibility of institutional incompetence.
The old transformation cycles gave companies time. ERP took years. Cloud took years. Digital took years. Mobile took years. Organizations could lag, posture, rebrand, buy platforms, reorganize, fail quietly, and remain marginally viable.
AI is different.
AI does not merely modernize infrastructure. It enters the cognitive layer of the enterprise: decisions, service, coordination, analysis, software, customer interaction, knowledge work, procurement, hiring, support, compliance, and strategy itself.
The question is no longer whether you can adopt AI.
The question is whether your institution can still think.
Can we talk?
A large portion of enterprise IT still reports to the CFO. That fact is not administrative trivia. It is fossil evidence.
For decades, IT was treated primarily as a cost. Uptime, security, procurement, risk containment. Important work. Necessary work. Not the work of beating your competitors.
Now those same organizations must lead a transformation that touches the operating model, talent model, customer promise, decision architecture, and the company’s legitimacy.
How many CIOs truly believe they can lead that?
How many CEOs dare to trust them?
How many boards even understand the question?
The Last 5% is about the edge — the final institutional courage required to make capability real at the boundary where value, risk, and accountability concentrate.
This challenge is about the middle.
Earlier. More structural. The question of whether the institution itself is capable of carrying the cognitive load AI now demands of it.
This AI moment is leadership roulette, played for table stakes.
Some companies will adapt. Some will freeze. Some will buy theatrics. Some will delegate the future to committees. Some will hide behind compliance. Some will give the job to AI vendors who have never been accountable as fiduciaries. Some will discover, too late, that their employees already know the company is intellectually unserious.
That last point matters.
For the first time, ordinary employees can benchmark their employer against frontier capability every day.
They use AI at home. They see what is possible. Then they return to work and confront ticket queues, brittle systems, approval chains, dead language, and leaders pretending pilots are progress.
Talent will not work indefinitely for fools.
Customers will not endlessly tolerate inefficiency, opacity, and pissy service once better alternatives become emotionally obvious.
The comparison surface has escaped the enterprise perimeter.
Can we talk?
I recently had a major flight problem with United Airlines. I booked through American Express Travel. United’s systems canceled four flights without a clear explanation. Six-plus hours on hold. Forty-eight hours of unreality. Eventually, United customer service told American Express it was “an AI error.”
There it was.
The sentence of the era.
An AI error.
Not a person. Not a policy. Not an accountable operating failure. An AI error.
American Express did the right thing. They reimbursed my out-of-pocket cost.
United lost trust. American Express restored it.
That distinction is the future value chain.
The winner is not necessarily the company with the most AI. The winner is the company that knows how to combine AI leverage with human accountability when the machine breaks reality for the customer.
“AI error” is not an explanation. It is the absence of one. It reveals that no one owns the chain of consequences.
That brings us to the real question.
Who owns trust?
Not who sells models. Not who sells cloud seats. Not who sells implementation decks. Not who sells bee-to-honey agent wrappers promising to pollinate every workflow in the enterprise.
Who owns trust?
Enterprises have historically trusted large vendors because they were safe to buy from. Microsoft. Oracle. Salesforce. Accenture. IBM. Safe in the procurement sense. Defensible. Familiar. Contractable. Blame-transferable.
Procurement legitimacy is not fiduciary trust.
Those companies may be institutionally trusted vendors. That does not make them credible fiduciaries for autonomous or semi-autonomous cognition.
A fiduciary does not merely sell capability. A fiduciary accepts consequences.
That is the missing category.
If an AI agent denies a claim, cancels a flight, reprices inventory, reroutes a supply chain, changes a medical workflow, negotiates procurement, drafts legal language, flags fraud, changes hiring pools, or triggers a cyber response, who is accountable? Who is the fiduciary?
The software vendor? The systems integrator? The model provider? The enterprise executive? The board? The workflow owner? The human “in the loop” who never had practical authority to stop the loop?
None of the current enterprise theater answers this cleanly.
The startup swarm will claim it can.
The bees are everywhere now.
Every day, another agent company announces a raise. Another vertical AI platform. Another orchestration layer. Another promise to automate the enterprise. Another “copilot for X.” Another workflow veneer over frontier cognition.
Some will matter.
Most will not.
Many are not companies. They are temporary arbitrage plays against frontier model latency. They exist in the gap between what the frontier labs can do today and what they will absorb tomorrow.
The deeper enterprise buyer will eventually ask the only question that matters.
What are you doing that I cannot do without you?
That question will kill a lot of bees.
Behind the swarm sits another uncomfortable question.
What happens to venture capital and private equity if durable differentiation collapses faster than capital deployment cycles?
Power laws defined the modern venture era. A handful of companies returned entire funds. The rest became acceptable casualties inside the math.
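A quick sketch of that math, with assumed numbers chosen purely for illustration (a hypothetical $100M fund writing twenty $5M checks; no actual fund implied):

\[
\underbrace{25 \times \$5\text{M}}_{\text{one winner}} \;+\; \underbrace{19 \times \$0}_{\text{acceptable casualties}} \;=\; \$125\text{M} \;>\; \$100\text{M fund size}.
\]

One 25x outcome returns the whole vehicle even if every other check goes to zero.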
Power laws assume something critical.
Time.
Time to scale. Time to build moats. Time to compound distribution. Time to lock customers. Time to professionalize management. Time to create organizational gravity before the next capability wave arrives.
AI may compress those windows violently.
A startup launches with frontier leverage. Six months later, the frontier absorbs the feature. Twelve months later, the margin collapses. Eighteen months later, the workflow becomes part of the native infrastructure.
What exactly received the multiple?
The technology? The distribution? The trust layer? Or temporary asymmetry against a moving frontier?
Private equity may face a parallel problem.
The classic playbook of operational tightening, labor optimization, process extraction, and platform consolidation emerged from a world where organizational inefficiency could survive for years without existential exposure.
AI changes visibility.
Customers notice friction faster. Employees notice incompetence faster. Markets notice strategic drift faster.
Talent itself becomes a flight market, priced against institutional stupidity.
That changes the durability curve underneath many PE assumptions.
Can we talk?
Are portions of the current AI investment boom actually long-duration bets on short-duration advantage?
Some capital formation behavior increasingly resembles crossed fingers wrapped in management fees.
Not malicious. Not irrational. Just structurally trapped between unprecedented capability acceleration, unclear defensibility, and enormous pressure to deploy capital before the next wave forms.
The old venture model rewarded early discovery of the future.
This era may reward correctly identifying who can survive continuous future arrivals.
Capability access is not enough.
Enterprises do not merely need more AI. They need governed cognition they can trust under pressure.
That brings the frontier labs back into the frame.
If I were making a consequential AI decision for a Fortune 1000 today, I would not want a generic vendor fog around accountability. I would want an identifiable authority on the hook.
Sam. Dario. Demis.
Not because I trust them unquestioningly. Because I want human accountability attached to machine cognition.
Platform expansion. Alignment governance. Scientific stewardship. Three different institutional philosophies of the frontier. The differences matter.
In a transition this unstable, enterprises may not trust abstractions. They may trust accountable people before they trust mature institutions. That is historically normal. Legitimacy often begins with named humans before it becomes durable infrastructure.
Named humans do not scale.
No single founder can underwrite every agent, every deployment, every enterprise consequence, every sovereign boundary, every sector-specific failure mode.
The value chain may need to reorganize.
One possibility worth naming: Trust Syndicates. Strange bedfellows under systemic pressure. Frontier labs, insurers, auditors, regulators, cloud providers, domain operators, verification firms, and accountable human governance — composed into consequence-bearing structures for healthcare, defense, financial services, sovereign infrastructure, and critical operations. Not another software reseller model. A structure designed to carry the weight that “AI error” reveals nobody currently carries.
The market keeps pretending AI is a productivity tool.
It is not only that.
At enterprise scale, AI becomes an institutional actor. Not legally. Not spiritually. Operationally. It participates in decisions, alters incentives, compresses time, exposes weakness, and breaks old excuses.
The old enterprise software question was whether it works.
The AI question is whether we can trust systems that work too well, too fast, across too many consequence chains for any one person to see fully.
That is not a CIO-only question. That is not a CFO procurement question. That is not an innovation committee question.
That is a CEO question. A board question. A fiduciary question.
No lifeguard is coming.
The supervised world is over.
The companies still waiting for permission, consensus, best practices, and ten-year adoption curves may discover that the water did not get deeper.
They forgot how to swim.
P.S. Thank you, Joan Rivers, for the line.