The Question Nobody Is Answering
Enterprise leaders know they need AI. The business case is clear, the budget is approved, and the vendor demos are impressive. But when it comes time to act, a deceptively simple question stops most initiatives cold:
Which AI?
There are over 46,000 AI startups globally and more than 8,000 vendors competing for enterprise attention (Stanford HAI, 2025; AIMultiple, 2026). The technology categories alone — Document AI, Agentic AI, Robotic Process Automation, Machine Learning, Predictive Analytics, Generative AI, Computer Vision, Natural Language Processing, Conversational AI — create a landscape so dense that even experienced transformation leaders struggle to map it to their operations.
The result is predictable. MIT's 2025 "State of AI in Business" study found that 95% of GenAI pilot programs delivered zero measurable P&L impact — despite $30-40 billion in global enterprise GenAI investment. And the failure isn't because the technology doesn't work. The technology works fine. The problem is that enterprises are applying the wrong technology to the wrong process.
GenAI Dominates the Conversation. It Shouldn't Dominate Your Strategy.
Worldwide GenAI spending hit $644 billion in 2025 (Gartner, March 2025). That figure dwarfs spending on every other AI technology category combined. Enterprise GenAI application spending alone surged from $1.7 billion to $37 billion since 2023, capturing 6% of the global SaaS market (Menlo Ventures, 2025).
But here's what the spending pattern doesn't tell you: Generative AI is the dominant technology for only about 9% of enterprise business subprocesses.
When we mapped the AI technology mix across all 127 universal business subprocesses in the BizBlocz taxonomy — using research from Everest Group, McKinsey, Gartner, Deloitte, and O'Reilly — a very different picture emerged:
| Dominant AI Technology | % of Subprocesses | Count |
|---|---|---|
| ML / Predictive Analytics | 60% | 73 |
| Agentic AI / RPA | 19% | 23 |
| Generative AI | 9% | 11 |
| Computer Vision | 8% | 10 |
| Document AI | 2% | 3 |
| NLP / Conversational | 2% | 2 |
ML and Predictive Analytics — forecasting, scoring, anomaly detection, pattern matching, optimization — dominate 60% of enterprise subprocesses. These aren't the technologies getting keynote slots at conferences or making headlines. They're the technologies that are actually delivering measurable value in production.
Gartner's 2025 Hype Cycle confirmed the asymmetry: GenAI has entered the Trough of Disillusionment, while Agentic AI sits at the Peak of Inflated Expectations. The technologies doing the most enterprise work — ML, RPA, Document AI — don't even make the hype headlines anymore. They're too busy being productive.
Same Company, Different Process, Completely Different AI
This is the part that makes generic "AI strategy" so dangerous. The technology mix doesn't just vary across industries — it varies across subprocesses within the same function, in the same company, on the same floor.
Consider four subprocesses, all within reach of a single CFO's organization:
Invoice Processing (AP01)
- Document AI: 40% — OCR, intelligent capture, data extraction from invoices
- Agentic AI / RPA: 30% — automated routing, approval workflows, exception handling
- ML / Predictive: 10% — matching confidence, duplicate detection
- Generative AI: 10% — exception narrative, supplier communication
- Computer Vision: 5% — physical document scanning
- NLP / Conversational: 5% — supplier query handling
This is a document-first process. The entry point is a piece of paper (or PDF), and everything downstream depends on how accurately you extract the data from it. An enterprise that deploys a GenAI chatbot to "handle invoices" is solving 10% of the problem with 100% of the budget.
Payment Execution (AP02)
- Agentic AI / RPA: 50% — batch processing, bank file generation, approval chains
- ML / Predictive: 20% — payment timing optimization, cash flow forecasting
- NLP / Conversational: 15% — payment status inquiries
- Generative AI: 10% — exception handling narratives
- Document AI: 5% — remittance advice processing
Same department, one subprocess downstream, and the dominant technology flips entirely. Payment execution is a rules-based, high-volume transaction workflow. The AI that matters here is the one that can select invoices, apply discount terms, trigger bank transfers, and process exceptions at scale without human intervention. That's Agentic AI and RPA — not GenAI.
Demand Forecasting (DP01)
- ML / Predictive: 65% — time-series forecasting, demand sensing, causal modeling
- Generative AI: 15% — scenario narrative generation, what-if analysis
- NLP / Conversational: 15% — market signal extraction from news and reports
- Agentic AI / RPA: 5% — automated forecast distribution
This is a prediction-first process. The value comes from the algorithm's ability to detect demand patterns across hundreds of SKUs, channels, and time periods. GenAI can help explain the forecast in natural language, but the forecast itself is pure ML.
Automated Defect Detection (QI01)
- Computer Vision: 65% — image analysis, defect classification, visual inspection
- ML / Predictive: 20% — defect pattern prediction, yield optimization
- NLP / Conversational: 10% — defect reporting and operator communication
- Agentic AI / RPA: 5% — automated disposition workflows
Here the dominant technology is Computer Vision — cameras and image analysis models inspecting products on a production line at speeds no human can match. Try solving this with a chatbot.
Four processes. Four completely different technology profiles. If your AI strategy doesn't operate at this level of specificity, it isn't a strategy — it's a budget allocation waiting to be disappointed.
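The four profiles above can be captured as plain data, which makes the vendor-fit check mechanical. Here is a minimal sketch — the percentages are taken from the examples above, but the function and variable names are illustrative, not BizBlocz's actual API:

```python
# Solution mixes for the four example subprocesses: percent of the value
# opportunity attributed to each AI technology type, per the profiles above.
SOLUTION_MIX = {
    "AP01 Invoice Processing": {
        "Document AI": 40, "Agentic AI / RPA": 30, "ML / Predictive": 10,
        "Generative AI": 10, "Computer Vision": 5, "NLP / Conversational": 5,
    },
    "AP02 Payment Execution": {
        "Agentic AI / RPA": 50, "ML / Predictive": 20,
        "NLP / Conversational": 15, "Generative AI": 10, "Document AI": 5,
    },
    "DP01 Demand Forecasting": {
        "ML / Predictive": 65, "Generative AI": 15,
        "NLP / Conversational": 15, "Agentic AI / RPA": 5,
    },
    "QI01 Automated Defect Detection": {
        "Computer Vision": 65, "ML / Predictive": 20,
        "NLP / Conversational": 10, "Agentic AI / RPA": 5,
    },
}

def dominant_technology(mix: dict[str, int]) -> str:
    """Return the technology type carrying the largest share of the value."""
    return max(mix, key=mix.get)

def vendor_fit(mix: dict[str, int], vendor_tech: str) -> int:
    """Share of the opportunity a single-technology vendor can address."""
    return mix.get(vendor_tech, 0)

for name, mix in SOLUTION_MIX.items():
    lead = dominant_technology(mix)
    genai = vendor_fit(mix, "Generative AI")
    print(f"{name}: lead = {lead}, GenAI covers {genai}% of the value")
```

Run against these four examples, a GenAI-only vendor covers at most 15% of any subprocess — which is exactly the mismatch the prose describes.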
Why Enterprises Get This Wrong
The technology selection problem has three root causes:
1. Vendor incentives are misaligned with process needs.
Every AI vendor leads with their technology. The Document AI vendor says AP needs IDP. The RPA vendor says AP needs bots. The GenAI vendor says AP needs a copilot. They're all partially right — and their sales cycle doesn't require them to tell you which percentage of the problem they actually solve.
2. The conversation happens at the wrong altitude.
"AI for Finance" is not a strategy. Neither is "AI for Supply Chain" or "AI for HR." These are department labels, not process specifications. Finance alone has 11 processes and 22 subprocesses in the BizBlocz taxonomy, each with a different technology profile. A transformation leader making technology decisions at the department level is guaranteed to over-invest in some subprocesses and under-invest in others.
3. No process-level AI technology framework existed.
Until now, no analyst firm, consulting firm, or research institution has published a framework that maps AI technology types to specific business processes at the subprocess level. Gartner published its first-ever Magic Quadrant for Intelligent Document Processing in 2025 — evaluating IDP vendors, but not prescribing where IDP fits relative to other AI types for each process. Forrester emphasizes use-case prioritization frameworks but doesn't map technology types to processes. BCG's "Widening AI Value Gap" report (September 2025) found that the top 5% of companies achieve 5x the revenue impact from AI — but doesn't tell you which technology type drives that value for which process.
The gap is structural. And it's why 95% of GenAI pilots fail, why 30% of GenAI projects are abandoned after proof of concept (Gartner, 2024), and why only 20% of organizations are generating revenue growth from AI despite 74% hoping to (Deloitte, 2026).
The Solution Mix as a Decision Framework
This is one of the core outputs of the BizBlocz VALUE assessment. For each of the 127 subprocesses in the universal taxonomy, the tool shows not just a savings range and confidence level, but the AI solution mix — the proportional breakdown of which AI technology types drive the value.
It's not a recommendation. It's what the evidence shows. Each allocation is generated by an algorithm that quantifies research inputs from Everest Group, McKinsey, Gartner, Deloitte, and O'Reilly — documented with source attribution and rationale for every subprocess.

An important note: no published research provides subprocess-level proportional AI type splits. These percentages are directional — they tell you which technology should lead, which plays a supporting role, and which is irrelevant for a given process. The dominant technology for each subprocess is well-supported by research; the precise proportions are calibrated estimates that provide broad direction, not decimal-point precision.

We disclose this because transparency about what the numbers are — and what they aren't — is the same principle that makes the rest of the platform trustworthy.
The practical impact is immediate:
- Before a vendor call: Know which technology type should dominate for your process. If a vendor is selling you GenAI for a process that's 65% ML/Predictive, ask them hard questions.
- During pilot design: Scope the pilot to the right technology. A Document AI pilot for Invoice Processing is not the same as a GenAI pilot for Invoice Processing — and the ROI profiles are completely different.
- During budget allocation: Distribute investment across technology types in proportion to where the evidence says the value lives, not where the hype cycle points.
- Post-implementation: Measure results against the right baseline. If your process is 40% Document AI and you only deployed the Agentic AI component, you've addressed 30% of the opportunity.
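The budget-allocation and baseline points above reduce to simple arithmetic over the same mix data. A hedged sketch, using the Invoice Processing (AP01) percentages from earlier — the dollar figure and function names are illustrative, not the platform's actual calculation:

```python
def allocate_budget(mix: dict[str, int], total_budget: float) -> dict[str, float]:
    """Split an AI budget across technology types in proportion to the mix."""
    return {tech: total_budget * share / 100 for tech, share in mix.items()}

def opportunity_addressed(mix: dict[str, int], deployed: set[str]) -> int:
    """Percent of the subprocess's value opportunity a deployment covers."""
    return sum(share for tech, share in mix.items() if tech in deployed)

# Invoice Processing (AP01) mix from the example above.
ap01 = {"Document AI": 40, "Agentic AI / RPA": 30, "ML / Predictive": 10,
        "Generative AI": 10, "Computer Vision": 5, "NLP / Conversational": 5}

budget = allocate_budget(ap01, 1_000_000)  # hypothetical $1M program budget
print(budget["Document AI"])               # 400000.0 — Document AI should lead

# Deploying only the Agentic AI / RPA component leaves 70% of the
# opportunity on the table — the post-implementation example above.
print(opportunity_addressed(ap01, {"Agentic AI / RPA"}))  # 30
```

The point of the baseline function is the denominator: measuring an RPA-only deployment against the full AP01 opportunity, rather than against the 30% it could ever address, makes a successful pilot look like a failure.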
The Bigger Picture
The AI solution mix is one lens in a system of five. The BizBlocz platform maps every subprocess across:
- VALUE — Where is the AI savings opportunity, and which technology mix drives it?
- READINESS — Is your organization prepared to capture that value?
- WORKFORCE — What happens to the people when the AI is deployed?
- SERVICES — What gets done, where, and by whom?
- RESULTS — Did you realize the value you projected?
The solution mix lives in VALUE, but it informs everything downstream. READINESS can't assess your automation baseline without knowing which technology to assess against. WORKFORCE impact analysis changes dramatically depending on whether a process is being automated by RPA (task elimination) or augmented by GenAI (task transformation). SERVICES scope depends on whether you're implementing Document AI pipelines or ML forecasting models — completely different skill sets, timelines, and integration patterns.
One taxonomy. 127 subprocesses. Six AI technology types. Every combination is different, and every combination is documented.
Where to Act
If you're a transformation leader trying to decide where AI will deliver the most value, the technology mix should be your second question — right after "which process?"
Not "should we use AI?" — that question is settled.
Not "which vendor?" — that question is premature.
"Which AI technology, applied to which specific process, with what evidence behind it?"
That's the question that separates the 5% of companies achieving AI value at scale (BCG, 2025) from the 95% still running pilots that go nowhere.
The map exists. It covers 127 subprocesses, 6 AI technology types, and 245+ research data points. Every allocation has a rationale. Every rationale has a source.
Sources: Stanford HAI (2025), AIMultiple (2026), MIT NANDA (2025), Gartner (2024, 2025), Menlo Ventures (2025), BCG (2025), Deloitte (2026), McKinsey (2025), Everest Group (2024), O'Reilly (2023), BizBlocz AI Mix Rationale
