Most executives can point to at least one AI pilot their organization ran that technically worked and commercially went nowhere.
The model performed. The demo impressed. The business impact stalled.
The usual postmortem focuses on talent gaps, change resistance, or timing. Those factors matter, but they are not what separates the companies that monetized AI from the ones still experimenting. The real divide sits somewhere less visible and more uncomfortable: how far leaders were willing to go in restructuring access to data, authority over data, and accountability for decisions derived from it.
The companies that made AI pay did not start with models. They started by making data usable in ways that disrupted internal power arrangements.
John Deere: treating data as a product, not exhaust
John Deere is often cited as an “AI success story,” but the reason it worked is frequently mischaracterized. This was not about clever algorithms. It was about a multi-year decision to treat equipment data as a core asset rather than an operational byproduct.
Modern Deere machinery generates massive volumes of telemetry data about soil conditions, planting depth, fuel efficiency, and yield variability. For years, much of that data existed, but it lived in fragmented systems designed for diagnostics and maintenance, not insight. The company could have layered machine learning on top of that mess and called it innovation. It did not.
Instead, Deere invested heavily in building an integrated data platform that could be used across agronomy, equipment design, and customer-facing products. That required negotiating data ownership with farmers, standardizing data formats across equipment lines, and accepting slower short-term returns in exchange for long-term leverage. Executives discussed these tradeoffs openly in earnings calls as early as 2017 and 2018.
The payoff was not abstract. Precision agriculture products like See & Spray and variable-rate planting systems now drive measurable value for customers and recurring revenue for Deere. AI models mattered, but they were downstream of a more consequential decision: making data accessible, reliable, and shareable across boundaries the company had previously protected.
Wired on Deere acquiring Blue River for $305M (context on ML + computer vision in agriculture):
https://www.wired.com/story/why-john-deere-just-spent-dollar305-million-on-a-lettuce-farming-robot/
Harvard Digital Initiative case write-up on Deere and data in precision agriculture (context on Deere as data/IoT player):
https://d3.harvard.edu/platform-digit/submission/farm-to-data-table-john-deere-and-data-in-precision-agriculture/
Capital One: collapsing the wall between data and the business
Capital One’s use of machine learning in credit and fraud is often described as a technology advantage. Internally, it was treated as an operating model choice.
Long before generative AI became fashionable, Capital One reorganized so that data scientists and machine learning engineers sat inside product and business teams with direct P&L accountability. Data was not something requested from a central function. It was something teams owned and were responsible for improving.
This mattered because it changed incentives. Models were evaluated on business outcomes, not technical metrics alone. Data quality issues surfaced quickly because the teams affected by them felt the impact directly. Decisions about feature engineering, model thresholds, and retraining schedules were business decisions, not handoffs to a separate analytics group.
The result was not perfection. Capital One has publicly acknowledged model limitations and regulatory constraints. But it has consistently translated AI into fraud reduction, credit risk management, and personalized product offers at scale. That consistency is visible in investor disclosures and regulatory filings, not marketing decks.
Capital One Tech: AI & Machine Learning hub (Capital One official):
https://www.capitalone.com/tech/machine-learning/
Forbes on enterprise data products and data ecosystem work:
https://www.forbes.com/sites/capitalone/2024/07/15/how-capital-one-is-evolving-data-management-to-build-a-trustworthy-ai-ready-data-ecosystem/
Harvard Digital Initiative case write-up on Capital One as “AI-first”:
https://d3.harvard.edu/platform-digit/submission/capital-one-transforming-traditional-banking-to-an-ai-first-experience/
Ping An: integration that Western firms avoid
Ping An’s AI story is often misunderstood in Western contexts because it violates a deeply held assumption: that data should remain siloed by line of business to manage risk.
Ping An built an integrated data ecosystem across insurance, healthcare, banking, and wealth management. This allowed AI systems to detect patterns that would be invisible inside isolated units, such as correlations between health behavior, insurance risk, and financial needs. The company then built products on top of those insights, not just internal efficiencies.
This level of integration required regulatory negotiation, significant investment in data infrastructure, and executive tolerance for organizational discomfort. It also required clarity about what data could be shared, how it could be used, and who was accountable when AI-informed decisions crossed business boundaries.
Academic research and industry analyses credit this integrated data approach as a major driver of Ping An’s ability to scale AI applications profitably, particularly in healthcare and insurance services.
IMD case study page (Ping An tech enabling a multi-ecosystem strategy):
https://www.imd.org/research-knowledge/strategy/case-studies/the-role-of-ping-an-technology-in-enabling-ping-an-group-s-digital-ecosystem/
Harvard Business School case listing (Ping An “Finance + Ecosystem” strategy context, may require login/purchase):
https://www.hbs.edu/faculty/Pages/item.aspx?num=57837
The counterexample: GE Digital and the cost of fragmentation
General Electric’s digital ambitions offer a useful contrast. GE invested billions in Predix and advanced analytics with the goal of becoming a digital industrial leader. The technology was sophisticated. The outcomes were mixed at best.
One persistent challenge was data fragmentation across business units that retained autonomy over their systems and priorities. Data standards varied. Access was negotiated. Incentives remained local. AI initiatives struggled to move from pilots to scaled offerings because the underlying data architecture reflected organizational boundaries that leadership was unwilling or unable to dismantle.
Former executives and investigative reporting have described how digital teams built capable tools that the businesses did not fully adopt, not because they lacked value, but because the operating context made integration costly and politically fraught.
Inc.com analysis of GE Digital / Predix trajectory:
https://www.inc.com/alex-moazed/why-ge-digital-didnt-make-it-big.html
Applico analysis of GE Digital (structure and commercialization challenge framing):
https://www.applicoinc.com/blog/ge-digital-failed/
The pattern leaders miss
Across these cases, a consistent pattern emerges. Companies that monetized AI made early, explicit decisions about data that constrained future optionality but increased execution power.
They centralized data where it created leverage, even when decentralization felt safer. They invested in data quality and accessibility before returns were obvious. They accepted that making data usable would expose inconsistencies, inefficiencies, and uncomfortable truths about how the organization actually worked.
Most companies hesitate here. They prefer to fund models, hire talent, and announce initiatives without confronting the harder question of who controls data and who bears the consequences of decisions derived from it. That hesitation shows up later as stalled pilots, brittle systems, and disappointment disguised as patience.
This is why AI success often looks mysterious from the outside. The decisive moves happened earlier, quietly, in budget reviews, org design debates, and infrastructure roadmaps that never mentioned AI at all.
A reframing worth sitting with
AI does not fail because organizations lack intelligence. It fails because leaders underestimate what they are asking data to do inside structures designed to keep it constrained.
The companies that made AI pay were willing to let data change how work actually gets done. Others tried to keep data subordinate to existing power structures and were surprised when the models followed suit.
That distinction will matter more over the next year, not less. As AI capabilities commoditize, the advantage will not belong to those with the best algorithms. It will belong to those who were willing, early, to make data usable at scale and live with the organizational consequences of that choice.