From BPR to AI: What "Redesigning Work" Really Means for the Future of Work
By Diego Navia · BizBlocz · May 2026
There is a growing consensus in the conversation about AI and the future of work that the path to real value runs through redesigning how work gets done. McKinsey's 2025 State of AI survey identified workflow redesign as the factor most strongly correlated with AI value capture. A 2026 MIT paper argues AI benefits will come from redesigned chains of activities, not from speeding up individual tasks. Erik Brynjolfsson at Stanford has long predicted a productivity J-curve: output initially lags as companies re-engineer processes, then accelerates once AI is fully integrated. Tom Davenport, who co-authored one of the founding 1990 articles on business process reengineering and an early book on it, published a piece last week reinforcing the point and adding a useful frame: companies are "AI washing" by relabeling layoffs as redesign, the way they did with BPR thirty years ago.
These voices agree that AI piled on top of existing workflows produces marginal gains, while AI embedded in redesigned workflows produces step changes. They are right. But most of the conversation assumes everyone knows what "redesigning work" actually means, and most readers don't know what it means beyond the buzzword. So before getting to what AI is changing, it's worth being honest about what the discipline has actually been about for the last thirty years.
What work redesign is actually for
Work redesign has always been driven by five objectives, usually blended but typically led by one or two:
- Cost reduction. Fewer people doing the same work, or the same people doing more. This is the most honest objective, even when decks call it something else.
- Cycle time. Faster end-to-end. Less queue, less wait, less handoff drag. Customers care about this; finance teams measure it.
- Quality and consistency. Fewer errors, less variation, more predictable outcomes. Especially valuable in regulated industries where variance has compliance cost.
- Capacity unlock. Handle more volume without proportional headcount growth. The case most often used to defend automation investments.
- Strategic flexibility. Change direction faster when the business shifts. Less commonly cited but increasingly important as product cycles compress.
Different waves emphasized different objectives. BPR sold on cost and cycle time; Lean on quality and waste; Six Sigma on consistency; Hyperautomation on cost and capacity. The methodology changed; the underlying motivations stayed remarkably constant.
What also stayed constant for thirty years was the unspoken target.
The traditional playbook was always about handoffs
When Michael Hammer published "Reengineering Work: Don't Automate, Obliterate" in Harvard Business Review in 1990, he framed it as a shift from task-level thinking to process-level thinking. The article launched what became known as business process reengineering, or BPR. The standard reading is that BPR moved the unit of analysis from tasks to processes. But read it carefully and the deeper story is different. Hammer was arguing that the waste in modern organizations had migrated from inside individual tasks to the spaces between tasks. The handoffs.
The famous IBM Credit case in his 1993 book makes this explicit. A credit application moved through five departments. Each department's task was efficient. The total cycle time averaged six days; the actual work content was 90 minutes. The waste wasn't in any task. It was in the handoffs between them. Hammer's redesign collapsed the five-department relay into a single generalist role. Cycle time dropped to four hours. The process became the unit of analysis because handoffs were where the value was leaking.
Every redesign discipline since has been about identifying and eliminating different kinds of handoffs. Lean targeted temporal handoffs (waiting, queue, inventory). Six Sigma targeted variation-induced handoffs (variation forces buffers downstream). Digital Transformation targeted customer-facing handoffs across channels. Hyperautomation targeted system-to-system handoffs.
The unit of analysis kept changing. The underlying logic stayed constant: find the handoffs, eliminate them or compress them.
Two layers practitioners learned to add
Handoff redesign worked, but not at first. BPR's failure rate ran 60 to 90 percent through the 1990s, and the lesson was that redesigning handoffs in theory wasn't the same as redesigning them in practice. The discipline matured by adding two layers.
The first was visibility. BPR teams in 1990 worked from interviews, workshops, and SOP documents. Each of those sources was a model of the process, not the process itself. Teams were redesigning their beliefs about how work flowed, then implementing radical changes against those beliefs. Process mining, an academic field since the early 2000s, changed this when it reached enterprise scale around 2017 to 2019. Tools like Celonis read the event logs that enterprise systems were already producing and reconstructed the actual flow of work. The first time most organizations ran process mining on their own processes, they discovered the "standard" process happened in 20 to 30 percent of cases and cycle times were 5 to 10 times longer than anyone believed. Everything before process mining was redesigning from a model. Everything after could redesign from data.
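To make the visibility layer concrete, here is a toy version of what process mining does: reconstruct each case's actual path from event-log rows, then measure how often the documented "standard" path really occurs. The log rows, activity names, and happy-path definition below are invented for illustration; real tools like Celonis do this over millions of ERP and CRM events.

```python
from collections import Counter

# Toy event log: (case_id, activity, timestamp). In practice these rows
# come from enterprise system logs; this data is invented for illustration.
log = [
    ("c1", "receive", 1), ("c1", "check", 2), ("c1", "approve", 3),
    ("c2", "receive", 1), ("c2", "check", 2), ("c2", "rework", 3),
    ("c2", "check", 4), ("c2", "approve", 5),
    ("c3", "receive", 1), ("c3", "approve", 2),
]

# Reconstruct each case's actual path by ordering its events in time.
traces = {}
for case, activity, ts in sorted(log, key=lambda r: (r[0], r[2])):
    traces.setdefault(case, []).append(activity)

# Count process variants and compare against the documented "happy path".
variants = Counter(tuple(t) for t in traces.values())
standard = ("receive", "check", "approve")
share = variants[standard] / sum(variants.values())
print(f"standard path occurs in {share:.0%} of cases")  # here: 33%
```

Even this ten-line sketch shows the core discovery most organizations made: the variant the SOP describes is one path among several, and often a minority one.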
The second was continuity. BPR was an explicit one-shot exercise. Lean and Six Sigma made redesign continuous, embedding it into daily operations rather than treating it as a project. The shift turned redesign from something consultants did to organizations into something organizations did to themselves.
Three executors, one binary question
Through this whole arc, the redesign playbook also accumulated executor options. By the late 2010s, three were available for any given task:
Humans. The default executor for anything requiring judgment, relationships, or accountability. Slower, variable, expensive, but flexible.
Deterministic automation. Rules-based systems, RPA bots, workflow engines, integration platforms. Fast, consistent, cheap to operate. Limited to clearly specified, rule-shaped work. Still the dominant form of automation in most enterprises today, by spend and by deployed footprint.
Hyperautomation orchestration. Process mining plus deterministic automation plus governance, working together to identify and automate rule-shaped work systematically across an organization.
If you've ever been part of a process redesign project, you've seen the question that decided what to do with each task: does this task require judgment? Yes, keep human. No, automate. The question worked for thirty years because the available technologies were deterministic. Choose human, and you got variability with accountability. Choose automation, and you got consistency without judgment. The choice was clean.
That clean choice is what AI breaks.
What the AI-era playbook is quietly becoming
The same task that the traditional playbook would have classified as "requires judgment, keep human" can now be handled in many ways:
- Fully human, the way it always was
- AI-assisted, with a human reviewing what the AI produces
- AI-executed within deterministic guardrails that bound what the AI can do
- Multi-agent AI, where one AI checks another's work
- AI-executed with confidence-threshold routing, where high-confidence cases auto-process and low-confidence cases get human review
The judgment question hasn't disappeared. It's still useful. But the answer isn't binary anymore. It's a spectrum of executor patterns between fully human, fully deterministic, and fully AI-driven. The new design question becomes: which executor is best suited to this specific task (human, deterministic system, or AI), and how do they coordinate?
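One of those patterns, confidence-threshold routing, is concrete enough to sketch in a few lines of Python. The threshold values, names, and routing labels below are assumptions for illustration, not recommendations; a real deployment would calibrate the thresholds against measured error rates.

```python
from dataclasses import dataclass

# Illustrative thresholds, not recommendations: a real system would tune
# these against observed error rates and the cost of a wrong decision.
AUTO_APPROVE = 0.90   # at or above this, the AI's output ships unreviewed
HUMAN_REVIEW = 0.60   # between the thresholds, a human reviews the draft

@dataclass
class AIResult:
    answer: str
    confidence: float  # the model's calibrated confidence in its answer

def route(result: AIResult) -> str:
    """Pick the executor pattern: AI alone, AI plus review, or human alone."""
    if result.confidence >= AUTO_APPROVE:
        return "ai_executed"      # high confidence: auto-process
    if result.confidence >= HUMAN_REVIEW:
        return "ai_assisted"      # medium: human reviews the AI draft
    return "fully_human"          # low: route the case to a person

cases = [AIResult("approve", 0.97), AIResult("deny", 0.75), AIResult("escalate", 0.30)]
print([route(c) for c in cases])  # ['ai_executed', 'ai_assisted', 'fully_human']
```

The design point is visible in the two constants: "does this require judgment?" stops being a yes/no and becomes a tunable dial, moved case by case rather than task by task.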
Whatever you call it, this is the first foundational shift in work redesign in thirty years. Hammer's contribution was making handoffs the explicit target. The AI-era playbook keeps that target but replaces the binary judgment question with a design question that didn't have to exist before, because the technology that makes it necessary didn't exist before.
It also has consequences that ripple through the rest of the discipline.
The handoff equation gets redefined
For thirty years, handoffs were fundamentally about humans communicating with other humans. Even when bots and integrations were intermediaries, the chain ran human-to-human at its endpoints. A request started with a human and landed with a human. You could redesign by mapping who talked to whom and what they passed between them.
AI changes this in a way that's easy to miss. When an AI agent reads an email, drafts a response, sends it to another agent that calls an API, which routes the result to a third agent that executes the transaction, the entire chain can run with no humans communicating in the middle. The handoffs still exist. They are between non-human executors now, exchanging structured data rather than language.
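That kind of chain can be made concrete with a toy sketch. Everything here is hypothetical: the three agent functions stand in for LLM-backed services, and the payload fields are invented. The point is simply that each handoff carries structured data, not conversation.

```python
from typing import TypedDict

# A structured payload passed between agents; the fields are illustrative.
class Handoff(TypedDict):
    request_id: str
    intent: str
    amount: float
    status: str

def intake_agent(email_text: str) -> Handoff:
    """Agent 1: reads unstructured language, emits a structured handoff."""
    # A real agent would use an LLM to extract these; we hard-code them.
    return {"request_id": "R-1001", "intent": "refund",
            "amount": 42.0, "status": "parsed"}

def policy_agent(h: Handoff) -> Handoff:
    """Agent 2: applies a policy check and annotates the payload."""
    h["status"] = "approved" if h["amount"] <= 100.0 else "needs_human"
    return h

def execution_agent(h: Handoff) -> Handoff:
    """Agent 3: executes the transaction only if no human gate was raised."""
    if h["status"] == "approved":
        h["status"] = "executed"
    return h

# The chain runs end to end with no human node; every handoff is data.
result = execution_agent(policy_agent(intake_agent("Hi, I'd like a refund")))
```

Note the human gate in the middle: an out-of-policy amount stops the chain with `needs_human` rather than executing, which is the guardrail pattern from the list above in miniature.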
This isn't about removing humans from work. Humans remain central to judgment, accountability, supervision, and most of the activity that creates value. The shift is in what the redesign discipline assumes about the handoff map. For thirty years, the map put humans at every node. AI-era redesign requires a map that treats humans as one type of executor among several, with handoffs flowing between any two.
The visibility and continuity layers also have to extend. Visibility used to mean understanding which person passed work to which person. Now it has to include human-AI choreography, model performance, and where humans override or accept AI output. The instrumentation for that doesn't exist at scale yet. We're roughly where BPR was in 1992, before process mining made the work visible. Continuity used to mean keeping the redesign alive against business change. Now it has to include managing AI that itself keeps changing as foundation models update.
The job implication that's hiding in plain sight
When the unit of redesign was human-to-human handoffs, communication was the backbone of work. The data is striking and consistent.
The average knowledge worker spends roughly 60 percent of their time on coordination (communicating about work, searching for information, switching apps, chasing status updates) and only 40 percent on the skilled work they were hired to do, according to Asana's 2025 research. Microsoft's analysis of its own productivity data shows employees spend 57 percent of their time communicating versus 43 percent creating.
The pattern intensifies up the organization. Individual contributors average about 4 hours per week in meetings. Managers and directors average 13 hours, more than a quarter of their working time. Executives average 23 hours per week in meetings, according to Harvard Business Review. A thirteen-week study of large-company CEOs by Porter and Nohria at Harvard found that CEOs spend 72 percent of their working time in meetings.
This isn't a flaw in how organizations operate. Ronald Coase argued in 1937 that the entrepreneur's main task is coordinating internal activities that markets can't price directly. Drucker and Mintzberg made the same point: at higher levels, work increasingly is communication and coordination.
That's why the AI-era playbook has different implications at different levels. For roles whose work is predominantly task execution, the impact is direct: the executor for many of those tasks may no longer default to human. For roles whose work is predominantly communication and coordination (managers, directors, executives), the impact is structural in a different way. The discussion about AI and managers usually focuses on AI tools that help managers do their jobs faster. The deeper story is that much of the coordination work itself is exactly what AI agents can increasingly handle between themselves. AI doesn't just augment management. It compresses the volume of human-to-human coordination that management was needed for.
Why most companies aren't doing this well
Davenport's "AI washing" observation is worth taking seriously, and worth extending. AI washing happens not because executives are cynical but because the AI-era playbook genuinely doesn't exist yet. When the discipline isn't written, the choice for an executive isn't "do redesign well versus AI-wash." It's "do something visible with AI, or be the executive who fell behind." Layoffs are visible. Productivity stories are visible. Disciplined AI-era redesign is hard to even describe, let alone defend in a board meeting.
Every prior wave looked the same in its early years. The first decade of BPR was full of companies that "did reengineering" by laying people off. The discipline matured later, after enough failed implementations had produced enough lessons that consultants and academics could codify a methodology. We're early. That's not pessimism, it's calibration.
What this leaves us with
Three things, in plain language.
First: work redesign for thirty years has been about finding handoffs and eliminating them, with visibility and continuity as the layers that made it actually work. That part of the playbook still applies.
Second: AI introduces a genuinely new design question that the old playbook can't answer. The binary "human or automated?" collapses. The new question is which executor handles each task and how they coordinate. That's the first foundational shift since Hammer made handoffs explicit in 1990.
Third: when the executor question replaces the judgment question, the handoff map redraws itself. Humans are no longer assumed at every node. That has consequences for routine task work, but it has equally important consequences for the coordination work that occupies most of the time of most managers and executives. Both deserve honest attention.
The companies that figure this out first won't be the ones with the most confident frameworks. They'll be the ones asking the new question (what's the shape of the human-AI collaboration on this task?) instead of defaulting to the old one. That's not a methodology. It's a habit of mind. And like every prior wave's discipline, it will take a decade to mature, regardless of how loudly the consensus insists we should already have it figured out.
Sources: McKinsey, The State of AI, 2025; MIT NANDA, "The GenAI Divide," 2025; Tom Davenport on AI and process redesign, 2026; Hammer & Champy, Reengineering the Corporation, 1993; Asana, Anatomy of Work, 2025; Microsoft Work Trend Index; Porter & Nohria, "How CEOs Manage Time," HBR, 2018; Coase, "The Nature of the Firm," 1937; Brynjolfsson, Rock & Syverson, "The Productivity J-Curve," 2018.
