There Are Six Types of AI. Most Enterprises Are Only Using One.

By Diego Navia · BizBlocz · April 2026


The word "AI" is doing too much work.

When a CFO says "we need an AI strategy," they often mean they want to automate invoice processing. When a vendor says "we do AI," they may mean they have trained a language model to generate emails. When a consultant says "AI transformation," they sometimes mean something different again.

All three are using the same word. None of them are talking about the same thing.

This matters because technology selection tends to follow the framing. When an organization treats "AI" as a single category, it often selects technology the way it selects a contractor: whoever has the best pitch. MIT's NANDA State of AI in Business 2025 study found that 95% of enterprise generative AI pilots delivered no measurable impact on the bottom line. The technology worked. The category fit did not.

I've been part of more than one of those pilots. In my view, the problem is rarely the model. It's that the process underneath was never a good match for generative AI in the first place, and nobody asked that question before the vendor was selected.

Daniel Kahneman argued in Thinking, Fast and Slow that the mind defaults to replacing hard questions with easier ones it already has answers to. That is close to what happens with "AI" in most strategy conversations today. "Which type of AI fits this subprocess" is the hard question. "Which GenAI vendor is ahead this quarter" is the substitute.

There are six types of AI in common enterprise use, a framework we refer to throughout this series as the BizBlocz AI Six. They are not interchangeable. Each one is built for a different kind of problem, uses a different technical approach, and delivers value in a different way. Here is what each one is.


1. Machine Learning predicts

Machine learning is the oldest and most widely deployed AI category in enterprise use. It works by finding patterns in historical data and using those patterns to make predictions about what happens next, without being explicitly programmed for each scenario.

The technical toolbox includes supervised learning (training on labeled examples), unsupervised learning (finding structure in unlabeled data), ensemble methods such as XGBoost and Random Forest, time-series forecasting models, and deep neural networks.

In practice it predicts which customers are likely to cancel in the next 90 days, forecasts demand for hundreds of SKUs across regions, flags transactions that deviate from historical payment patterns for fraud detection, scores credit risk across thousands of behavioral variables, and anticipates equipment failure hours or days before it happens.
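The supervised-learning idea is simple enough to show in a few lines: learn from labeled historical examples, then predict for a new case. This is a toy sketch, a k-nearest-neighbors vote over invented features, not a production model; real deployments use libraries, far more data, and careful feature engineering.

```python
# Minimal supervised learning: predict churn from labeled history.
# Feature names and data are invented for illustration.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def predict_churn(history, customer, k=3):
    """Classify a customer by majority vote of the k most similar
    historical customers (a k-nearest-neighbors sketch)."""
    ranked = sorted(history, key=lambda row: euclidean(row[0], customer))
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Each row: ([logins_per_month, support_tickets, months_tenure], churned?)
history = [
    ([20, 0, 36], False),
    ([18, 1, 24], False),
    ([15, 0, 30], False),
    ([2, 5, 3], True),
    ([1, 4, 2], True),
    ([3, 6, 5], True),
]

print(predict_churn(history, [2, 5, 4]))    # low activity, many tickets
print(predict_churn(history, [19, 0, 28]))  # healthy usage pattern
```

The point is the shape of the problem: historical data in, a prediction out, with no hand-written rule for each scenario.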

Machine learning is also the hardest of the six to build well. It brings me back to my engineering days studying statistics, linear programming, and optimization models. Mature applications like demand forecasting and supply chain modeling are well understood and consistently productive in my experience. Other problems are far less forgiving. I once worked on an ambitious intelligent staff-and-service assignment system for a global delivery network of tens of thousands of people delivering 800+ services. It worked, to an extent, until it didn't.

Any company running predictive analytics, risk scoring, demand planning, or anomaly detection is running machine learning, whether it is labeled that way or not. SAP Intelligent Suite, Oracle Analytics, Salesforce Einstein, and most platform "AI features" are predominantly machine learning underneath.

Across a reference taxonomy of 127 enterprise subprocesses, machine learning accounts for an estimated 32% of aggregate AI value, the largest single share. It does not generate text. It does not automate workflows. It does not read documents. It predicts.


2. Agentic AI acts

Agentic AI takes a sequence of actions to complete a goal, rather than answering a single question. The system observes, decides, executes, and if something goes wrong, adjusts.

The technology underneath includes AI agent frameworks (single and multi-agent orchestration), Large Action Models, Robotic Process Automation (the rules-based predecessor now being augmented by AI-driven agents), reinforcement learning at the training layer, and tool-use frameworks that give models access to external systems. Familiar products include UiPath, Automation Anywhere, Microsoft Copilot, Salesforce Agentforce, and newer agent-native tools such as Claude Cowork.

In practice it receives an invoice, validates it against the purchase order, routes it to the right approver, handles exceptions, and posts the entry to the ERP. It files an insurance claim end to end, covering intake, triage, validation, reserve setting, and communication. It monitors a software environment, detects an anomaly, opens a ticket, escalates it, and closes the loop. It schedules meetings, prepares briefing materials, and follows up based on outcomes.
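The observe-decide-act loop behind that invoice example can be sketched in a few lines. Everything here is a stand-in: the field names, the escalation messages, and the approval rule are invented, and a real agent would be calling ERP and workflow APIs rather than reading dictionaries.

```python
# Toy observe-decide-act flow for invoice handling. All names and
# thresholds are illustrative stand-ins for real ERP/workflow calls.

def process_invoice(invoice, purchase_orders, approvals):
    """Validate an invoice against its PO, route it for approval,
    and post it - escalating to a human when a step fails."""
    po = purchase_orders.get(invoice["po_number"])
    if po is None:
        return "escalated: no matching purchase order"
    if abs(po["amount"] - invoice["amount"]) > 0.01:
        return "escalated: amount mismatch"
    # Route by a simple rule: large invoices go to the director.
    approver = approvals[invoice["amount"] > 10_000]
    # A real agent would now wait on the approver's response.
    return f"posted to ERP after approval by {approver}"

purchase_orders = {"PO-1001": {"amount": 4200.00}}
approvals = {True: "director", False: "manager"}

print(process_invoice(
    {"po_number": "PO-1001", "amount": 4200.00},
    purchase_orders, approvals))
```

What makes a system agentic is not any single step but the chain: it validates, routes, waits, handles the exception path, and closes the loop without a person driving each transition.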

The distinction from generative AI is worth underlining because the two are frequently blurred. Generative AI creates. Agentic AI executes. A chatbot that drafts an approval email is generative AI. The system that reads the invoice, creates the workflow, sends the email, waits for the response, and posts the journal entry is agentic AI. Different technology, different return profile, different failure modes.

Agentic AI accounts for an estimated 22% of the enterprise AI portfolio. Gartner named AI agents the fastest-advancing technology on the 2025 Hype Cycle for Artificial Intelligence, and projects the share of enterprise applications featuring AI agents will rise from less than 5% in 2025 to 40% by 2026.


3. Generative AI creates

Generative AI produces new content (text, code, images, audio, video) in response to a prompt. The technology underneath it, including Large Language Models, diffusion models, and transformer architectures, learns the statistical distribution of training data and generates new outputs that match that distribution.

The familiar products are ChatGPT, Claude, Gemini, Llama, DALL-E, Stable Diffusion, and Sora. Common enterprise architectures around them include Retrieval-Augmented Generation (RAG), which grounds model output in a company's internal documents; fine-tuning on proprietary data; and reasoning models that extend generation into multi-step problem solving.
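The retrieval step in RAG can be illustrated with a deliberately crude stand-in: pick the internal document most relevant to a question, then ground the prompt in it. Production systems use embeddings and a vector store rather than word overlap, and the policy snippets below are invented.

```python
# Sketch of the "retrieval" in Retrieval-Augmented Generation.
# Word overlap stands in for embedding similarity; docs are invented.

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, documents):
    """Ground the model's answer in the retrieved internal document."""
    context = retrieve(question, documents)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Travel policy: economy class is required for flights under six hours.",
    "Expense policy: receipts are required for purchases over 25 dollars.",
]

print(build_prompt("Are receipts required for small expense purchases?", docs))
```

The architecture matters more than the model here: the generation step stays the same, but the answer is constrained to the company's own documents instead of the model's training data.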

In practice it drafts first versions of contracts, proposals, and reports from structured inputs, generates code from natural language specifications, summarizes long documents into short briefs, answers customer questions from a knowledge base, and turns raw data into written narrative for finance and management reporting.

Generative AI is the most-discussed category of AI by a wide margin. Across 127 enterprise subprocesses, it accounts for an estimated 18% of aggregate AI value and leads in a minority of processes. It is real, useful, and commonly misapplied to tasks where the deliverable is not language.


4. Natural Language Processing understands

Natural language processing is AI applied specifically to human language: reading it, classifying it, extracting meaning from it, translating it. The distinction from generative AI is that NLP understands language, while generative AI produces language.

The technology stack includes Named Entity Recognition (identifying people, organizations, dates, and dollar amounts in text), sentiment analysis, text classification, semantic search, machine translation, speech recognition, and information extraction. BERT and its derivatives have been among the most common underlying models, with LLMs increasingly covering the same ground.

In practice it reads tens of thousands of customer support tickets and classifies each by issue type, priority, and sentiment; monitors call center transcripts for compliance issues in real time; extracts relevant clauses from contract portfolios and flags deviations from standard terms; routes incoming customer inquiries to the right team based on the meaning of the message, not keyword matching; and converts call recordings into searchable, analyzable text.
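Two of the tasks above, entity extraction and ticket classification, can be caricatured in a few lines. Real systems use trained models (BERT-class or LLMs), not regexes and keyword lists; the issue categories below are invented for illustration.

```python
import re

# Toy versions of two NLP tasks: extracting entities (dollar amounts)
# and classifying a support ticket by issue type. Keyword lists stand
# in for trained classifiers; categories are invented.

def extract_amounts(text):
    """Pull dollar amounts out of free text (a tiny slice of NER)."""
    return re.findall(r"\$\d+(?:,\d{3})*(?:\.\d{2})?", text)

ISSUE_KEYWORDS = {
    "billing": {"invoice", "charge", "refund", "payment"},
    "access": {"login", "password", "locked", "account"},
}

def classify_ticket(text):
    """Assign the issue type whose keywords best match the ticket."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    scores = {k: len(v & words) for k, v in ISSUE_KEYWORDS.items()}
    return max(scores, key=scores.get)

ticket = "I was charged $49.99 twice and need a refund on my invoice."
print(extract_amounts(ticket))   # ['$49.99']
print(classify_ticket(ticket))   # 'billing'
```

Note the direction of the work: the system takes language in and produces structure (entities, labels, routes) out, which is the inverse of what generative AI does.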

NLP is often invisible because it is embedded inside platform features labeled "smart search," "auto-tagging," or "insights." It accounts for an estimated 14% of the enterprise AI portfolio.


5. Document AI reads paperwork

Document AI is purpose-built for one problem: extracting structured, usable data from documents that were not designed to be machine-readable. Invoices. Contracts. Insurance claims. Mortgage applications. Customs forms. Medical records.

Technically, it is a combination of computer vision (reading the document as an image) and NLP (understanding the content). It has its own category because it maps to a distinct and high-volume enterprise problem set and has its own vendor market, including ABBYY, Hyperscience, Instabase, AWS Textract, and Azure Document Intelligence.

The foundational technology underneath is optical character recognition (OCR), which dates back to 1959 and became standard equipment in banking, insurance, and finance operations through the 1990s, processing checks and invoices decades before most organizations were running machine learning pilots. Modern Document AI layers deep learning and NLP on top of that foundation: Intelligent Document Processing pipelines, layout analysis, table detection and extraction, form processing, handwriting recognition, and document classification. The OCR market alone was valued at approximately $17 billion in 2025 (Grand View Research), with banking and financial services accounting for the largest share.

I still recall the extensive use of OCR on financial statements, bank confirmations, and other Assurance documents while setting up PwC delivery centers across Asia and the Americas. The ROI case for OCR and document automation was unambiguous then. Two decades later, Document AI is still the category nobody brings up first in AI strategy conversations.

In practice it extracts vendor name, invoice number, line items, amounts, and purchase order references from PDF invoices regardless of format; reads stacks of insurance claim forms and populates the claims management system without manual data entry; reviews mortgage application packages and flags missing documents or inconsistent fields; and processes customs documentation across multiple formats and regulatory requirements.
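The extraction half of that pipeline, turning OCR output into structured fields, looks roughly like this in miniature. The sample text, field names, and patterns are invented; real Intelligent Document Processing adds layout analysis, table detection, and learned models on top of the OCR step.

```python
import re

# Sketch of Document AI's extraction step: flat OCR text in,
# structured fields out. Sample invoice and patterns are invented.

def parse_invoice(ocr_text):
    """Extract key fields from flat OCR text with labeled patterns."""
    patterns = {
        "vendor": r"Vendor:\s*(.+)",
        "invoice_number": r"Invoice\s*(?:No\.|#)\s*([A-Z0-9-]+)",
        "po_number": r"PO\s*(?:No\.|#)\s*([A-Z0-9-]+)",
        "total": r"Total:\s*\$([\d,]+\.\d{2})",
    }
    return {field: (m.group(1).strip() if (m := re.search(p, ocr_text)) else None)
            for field, p in patterns.items()}

ocr_text = """Vendor: Acme Industrial Supply
Invoice # INV-2041   PO # PO-1001
Total: $4,200.00"""

print(parse_invoice(ocr_text))
```

The hard part in practice is not any one pattern but format variety: thousands of vendors, each with a different layout, which is exactly why the learned layers exist.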

Document AI accounts for an estimated 9% of the enterprise AI portfolio, concentrated in specific high-volume subprocesses where the entry point is a piece of paper. Relative to its share of press attention, it is one of the more consistently productive AI categories in enterprise deployment.


6. Computer Vision sees

Computer vision enables machines to interpret visual information: images, video, and real-time camera feeds. Where Document AI reads a scanned form, computer vision reads a production line, a warehouse floor, a medical scan, or a retail shelf.

The technology stack includes Convolutional Neural Networks and Vision Transformers for image classification, YOLO and Faster R-CNN for object detection, segmentation models for pixel-level analysis, pose estimation, and specialized architectures for medical imaging.

In practice it inspects every unit on a production line at speeds no human inspector can match, counts products on retail shelves and identifies out-of-stock positions in real time, reads license plates entering a logistics facility and cross-references expected deliveries, and analyzes X-rays, MRIs, and pathology slides for diagnostic markers.
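The simplest possible version of visual inspection is to flag units whose image deviates too far from a known-good reference. Real systems run CNNs or vision transformers on camera frames; here a tiny grayscale grid of 0-255 values stands in for an image, and the threshold is invented.

```python
# Toy visual inspection: compare each unit's "image" (a grayscale
# grid) against a known-good reference. Data and threshold invented.

def defect_score(reference, unit):
    """Mean absolute pixel difference from the known-good reference."""
    diffs = [abs(r - u) for r_row, u_row in zip(reference, unit)
             for r, u in zip(r_row, u_row)]
    return sum(diffs) / len(diffs)

def inspect(reference, unit, threshold=10.0):
    """Pass/fail decision for one unit on the line."""
    return "defect" if defect_score(reference, unit) > threshold else "pass"

reference = [[200, 200, 200],
             [200, 50, 200],
             [200, 200, 200]]

good_unit = [[198, 201, 199],
             [202, 52, 197],
             [201, 199, 203]]

scratched = [[198, 30, 199],   # dark streak in the second column
             [202, 35, 197],
             [201, 199, 203]]

print(inspect(reference, good_unit))   # 'pass'
print(inspect(reference, scratched))   # 'defect'
```

Learned models replace the fixed threshold because real defects vary in shape, lighting, and position, but the economics are the same: one decision per unit, at line speed.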

Throughout my career I've watched computer vision deployed on assembly lines across Asia, on both hard-goods and soft-goods lines, and on domestic CPG production lines in the Americas. It does what human inspectors cannot: every unit, at line speed, all shift.

Computer vision has the narrowest primary application set of the six categories in general enterprise use, an estimated 5% of the aggregate AI portfolio across the 127 subprocesses. That share reflects coverage across all enterprise functions. In manufacturing, logistics, and healthcare the share is dramatically higher, and the technology is foundational.


Why Six Categories Are Not Enough

Knowing the six types is necessary but not sufficient. Enterprise processes do not use one type of AI. They use a mix.

Invoice processing, in a typical reference mix, is roughly 40% Document AI, 30% Agentic AI, 10% machine learning, and 10% generative AI. Payment execution, one subprocess downstream in the same department, is roughly 50% Agentic AI and 20% machine learning. Demand forecasting is roughly 65% machine learning. Automated defect detection is roughly 65% computer vision.

Same company. Same budget conversation. Four different technology profiles.

Yet most AI roadmaps are framed at the wrong level. The usable unit is the subprocess, not the department and not the enterprise. Which type of AI, applied to which specific subprocess, supported by what evidence. That is the conversation the next several articles take up.


Where This Series Goes

Over the next several weeks, each of the six categories gets its own article. Each follows the same structure: what the technology is, how it works, what it doesn't do, where it fits well, where another category leads, and how much of the enterprise AI portfolio it represents.

The series covers Machine Learning (the workhorse most enterprises are already running, often without calling it AI), Agentic AI (the category projected to reshape operating models over the next three years), Generative AI (what it is actually good at, and the processes where it is not the answer), Natural Language Processing (the quiet essential embedded inside many existing platforms), Document AI (the category with the deepest roots and one of the strongest ROI records in enterprise deployment), and Computer Vision (a narrow primary application set, essential where it applies).

Something I've watched firsthand across transformation programs at Big 4 and other global organizations over three decades: the companies extracting real AI value are not the ones chasing the most impressive tool round-up. They are the ones picking the right category for the right subprocess, honestly, one at a time.

How many of the six are in your current AI portfolio conversation, and which one is doing most of the talking?


Research basis: BizBlocz analysis of 127 universal enterprise business subprocesses, drawing on 245+ data points from 30+ independent research publications including Everest Group, McKinsey, Gartner, Deloitte, BCG, MIT, Stanford HAI, and O'Reilly. Cross-validated against AI taxonomy frameworks from NIST, IEEE, PMI, and Forrester. AI mix percentages reflect aggregate portfolio composition across the full 127-subprocess universe; individual process allocations vary significantly. Additional sources: Grand View Research OCR Market Report (2025); Gartner Hype Cycle for Artificial Intelligence (2025); Gartner Newsroom, "Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026" (August 2025); MIT NANDA, "State of AI in Business 2025."