
Cyborgs, Centaurs, and Self-Automators: What Your AI Strategy Ignores About People

Most organizations talk about their AI strategy in terms of tools: which models they’ve deployed, how many licenses they’ve purchased, what percentage of employees have access. But a recent study from MIT Sloan and Boston Consulting Group reveals a problem that no amount of tooling can solve — the way your people actually interact with AI determines whether you’re building capability or quietly destroying it.

The study tracked 244 consultants given access to a generative AI platform and found they fell into three distinct usage patterns. Understanding these patterns — and managing them deliberately — may be the most important AI leadership decision of 2026.

Three Patterns Hiding in Your Workforce

Researchers identified three modes of AI interaction, each with radically different implications for performance and learning:

Cyborgs (60% of users) engage in what the researchers call “fused co-creation.” They collaborate closely with AI throughout every stage of their work — probing its suggestions, sometimes following its lead, sometimes pushing back. The AI becomes a thinking partner woven into their entire workflow.

Centaurs (14% of users) practice “directed co-creation.” They know exactly what they need from the AI and ask targeted, specific questions. Unlike cyborgs, centaurs maintain structured, controlled interactions — they harness AI for targeted efficiency rather than open-ended exploration.

Self-automators (27% of users) demonstrate “abdicated co-creation.” They offload tasks almost entirely to the AI, delegating analytical and evaluative thinking wholesale. The results come back fast and polished — but lack depth.

Here is the finding that should concern every leader: centaurs produced the most accurate work, and self-automators the least valuable. The 27% figure also likely understates the prevalence of self-automation in the broader workforce, where employees face less scrutiny than the study participants at BCG did.

The Hidden Cost: Skill Erosion at Scale

Performance differences are only half the story. The more consequential finding is what happened to learning.

Centaurs — those who used AI in targeted, disciplined ways — deepened their domain expertise. By maintaining their own analytical process and using AI to fill specific gaps, they actually got better at their jobs. They treated AI the way a skilled carpenter treats a power tool: useful for specific cuts, but the craftsperson still designs the joint.

Cyborgs gained something different. They didn’t build much domain expertise, but they became significantly better at using AI itself — learning prompt strategies, understanding model strengths, developing what amounts to a new professional skill. Not a bad outcome, but a different one than most organizations assume they’re getting.

Self-automators gained nothing. No domain expertise. No AI fluency. No skill development at all. They got their deliverables done faster, but each completed task left them slightly less capable than before. Multiply this across a quarter of your workforce, compound it over months, and you have a serious organizational capability problem that won’t show up in any productivity dashboard.

Why Traditional AI Strategies Miss This Entirely

The standard enterprise AI playbook focuses on deployment metrics: adoption rates, time saved, cost per query. These are input measurements that tell leaders nothing about the quality of human-AI interaction happening underneath.

In systems thinking terms, organizations are optimizing the wrong variable. Using the framework from Instant Competence, any outcome is the weighted sum of its system variables: the classic Y = w₁a + w₂b + w₃c formula. Most AI strategies pour resources into variable a (tool access and adoption) while ignoring variable b (interaction quality) and variable c (skill development trajectory). When the highest-weighted variable in long-term organizational performance is human capability, optimizing for adoption alone is like tuning the wrong knob on a mixing board.
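To see the arithmetic concretely, here is a minimal Python sketch; the weights and variable scores are invented for illustration, not taken from the study:

```python
# Illustrative only: the weights and scores below are invented to show the
# shape of the argument, not measured values from any study.

def outcome(a, b, c, w1=0.2, w2=0.3, w3=0.5):
    """Weighted-sum outcome: Y = w1*a + w2*b + w3*c."""
    return w1 * a + w2 * b + w3 * c

baseline = outcome(a=0.5, b=0.5, c=0.5)          # modest scores everywhere
adoption_only = outcome(a=1.0, b=0.5, c=0.5)     # max out tool adoption (a)
capability_first = outcome(a=0.5, b=0.5, c=1.0)  # improve skill trajectory (c)

print(f"baseline:         {baseline:.2f}")          # 0.50
print(f"adoption only:    {adoption_only:.2f}")     # 0.60
print(f"capability first: {capability_first:.2f}")  # 0.75
```

Maxing out the lowest-weighted variable moves the outcome far less than improving the highest-weighted one, which is the whole argument in three lines of arithmetic.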

This is compounded by what MIT researchers Thomas Davenport and Randy Bean identified as a critical 2026 trend: the need to shift generative AI from an individual productivity tool to an enterprise capability. Most companies still treat AI as a personal assistant for employees. The result is that each person develops their own idiosyncratic relationship with the technology — some becoming centaurs, most becoming cyborgs, and a troubling minority becoming self-automators — with no organizational visibility into which pattern is dominant.

The Agentic AI Complication

This challenge intensifies as organizations move toward agentic AI — systems that can perceive, reason, and complete tasks with minimal human supervision. Davenport and Bean note that ongoing hallucinations and security vulnerabilities have slowed agentic AI adoption, and that companies “will continue to have some human in the loop.”

But which human, doing what kind of thinking? If the humans in the loop are predominantly self-automators — people who have already habituated to offloading analytical work — then the guardrail is an illusion. The person reviewing the AI agent’s output needs the domain expertise to catch errors, and that expertise atrophies when you stop exercising it.

This is a second-order effect that most AI roadmaps fail to anticipate. Organizations invest in AI agents to handle routine work, which reduces the opportunities for humans to practice judgment on routine work, which degrades the judgment needed to supervise AI agents on complex work. The system quietly undermines its own safety mechanism.
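The dynamic is easier to see in a toy simulation. Everything below is assumed for illustration: the growth and decay rates, and the idea that supervisory skill compounds monthly, are stand-ins rather than calibrated values:

```python
# Toy model of the feedback loop: the more routine work is delegated to
# agents, the less judgment gets practiced, and supervisory skill erodes.
# Growth/decay rates are invented for illustration.

def simulate(months: int, delegation: float, skill: float = 1.0) -> float:
    """delegation: fraction of routine work handled by AI agents (0..1)."""
    for _ in range(months):
        practice = 1.0 - delegation
        # Skill grows with practice, decays with delegation, compounds monthly.
        skill *= 1.0 + 0.02 * practice - 0.03 * delegation
    return skill

for d in (0.2, 0.5, 0.9):
    print(f"delegation {d:.0%}: relative skill after 24 months = {simulate(24, d):.2f}")
```

Under these made-up rates, heavy delegation roughly halves supervisory skill in two years, while light delegation lets it keep compounding. The exact numbers are arbitrary; the direction of the loop is the point.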

What Leaders Should Do Differently

The research points toward several concrete shifts in how organizations manage AI adoption:

1. Make Interaction Patterns Visible

Before optimizing anything, leaders need to understand what’s actually happening. Which teams default to self-automation? Which roles require centaur-style precision? This isn’t about surveillance — it’s about the same kind of workflow analysis that organizations already do for process improvement. You cannot manage a variable you cannot see.
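What this could look like in practice is a rough heuristic over usage logs. The sketch below is hypothetical: the session fields, thresholds, and labels are assumptions for illustration, not a classifier from the study:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Hypothetical summary of one AI-assisted work session, from usage logs."""
    prompts: int          # prompts sent to the model
    edits_to_output: int  # times the user revised the AI's output
    pushbacks: int        # follow-ups that challenge or correct the AI

def classify(session: Session) -> str:
    """Rough heuristic mapping a session onto the three archetypes.
    Thresholds are illustrative, not empirically derived."""
    if session.edits_to_output == 0 and session.pushbacks == 0:
        return "self-automator"  # output accepted wholesale
    if session.prompts <= 3:
        return "centaur"         # few, targeted queries; user stays in control
    return "cyborg"              # extended back-and-forth throughout the task

print(classify(Session(prompts=12, edits_to_output=5, pushbacks=4)))  # cyborg
print(classify(Session(prompts=2,  edits_to_output=3, pushbacks=1)))  # centaur
print(classify(Session(prompts=1,  edits_to_output=0, pushbacks=0)))  # self-automator
```

Aggregated by team, even a crude signal like this shows where self-automation is becoming the default before it surfaces as a capability gap.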

2. Design Workflows That Preserve Judgment

The MIT researchers suggest that rather than leaving AI usage decisions entirely to individuals, organizations should build systems that prompt employees to think before delegating. This could include interfaces that visualize uncertainty in AI responses, or default questions that force a moment of independent analysis before diving into an AI conversation. The goal is to make centaur behavior the path of least resistance.
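One way such a gate could work is sketched below; the function, its wording, and the idea of recording the user's hypothesis are all invented for illustration:

```python
def gated_prompt(task_description: str, ask=input) -> str:
    """Hypothetical 'think before delegating' gate: the user must state their
    own expected answer before the request goes to the model."""
    hypothesis = ask(f"Before asking the AI about '{task_description}', "
                     "what is YOUR expected answer? ")
    if not hypothesis.strip():
        raise ValueError("A moment of independent analysis is required first.")
    # Record the hypothesis alongside the request so the review step can
    # compare it against the model's answer later.
    return f"{task_description}\n\n[User hypothesis on record: {hypothesis}]"
```

The design choice matters more than the code: the friction is small, but it converts wholesale delegation into a comparison between the user's judgment and the model's, making centaur behavior the default.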

3. Protect Skill Development for Junior Employees

The researchers specifically flag new employees as the highest-risk group. Entry-level professionals who self-automate from day one never build the foundational expertise they need to advance — or to supervise AI systems later. Organizations should provide structured AI onboarding that includes feedback on outputs, helping new employees understand where AI assists their thinking and where it replaces it.

4. Match the AI Strategy to the Task, Not the Tool

Not every task should be approached the same way. High-stakes analytical work — the kind where errors are costly and judgment matters — calls for centaur-style interaction: targeted, disciplined, expertise-led. Routine operational tasks may be appropriate for higher degrees of automation. The key is making this distinction explicit rather than leaving it to individual habit.
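A policy like this can be as simple as an explicit routing table. The task categories and mode assignments below are assumptions for the sketch, not recommendations from the research:

```python
# Illustrative routing table: task type -> recommended interaction mode.
INTERACTION_POLICY = {
    "high_stakes_analysis": "centaur",   # targeted queries, human-led judgment
    "exploratory_drafting": "cyborg",    # open-ended co-creation is acceptable
    "routine_formatting":   "automate",  # low-risk work can be delegated
}

def recommended_mode(task_type: str) -> str:
    # Default to the most conservative mode when the task type is unknown.
    return INTERACTION_POLICY.get(task_type, "centaur")

print(recommended_mode("high_stakes_analysis"))  # centaur
print(recommended_mode("unclassified_task"))     # centaur (conservative default)
```

Writing the table down is most of the value: it moves the delegation decision from individual habit to explicit policy.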

5. Measure Capability, Not Just Output

If your AI metrics only track productivity — tasks completed, time saved, adoption rates — you are flying blind on the variable that matters most. Add capability indicators: Are employees developing deeper expertise over time? Can they perform critical analyses without AI assistance when needed? Are junior team members building the judgment required for senior roles? These are harder to measure, but they determine whether your AI investment strengthens or hollows out your organization over the next five years.
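One hedged sketch of a capability indicator: periodic AI-free assessments, with the trend tracked per employee. The scores, names, and the assessment mechanism are all hypothetical:

```python
from statistics import mean

# Hypothetical records: scores on quarterly analysis exercises completed
# without AI assistance (0..1 scale). All values invented for illustration.
unaided_scores = {
    "analyst_a": [0.62, 0.68, 0.74],  # improving: expertise is compounding
    "analyst_b": [0.70, 0.66, 0.59],  # declining: possible self-automation flag
}

def capability_trend(scores: list[float]) -> float:
    """Average change between consecutive unaided assessments."""
    deltas = [later - earlier for earlier, later in zip(scores, scores[1:])]
    return mean(deltas)

for name, scores in unaided_scores.items():
    print(f"{name}: trend {capability_trend(scores):+.3f} per quarter")
```

A declining unaided trend alongside a rising productivity dashboard is exactly the pattern the study predicts for self-automators, and it is invisible if you only measure output.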

The Real AI Strategy Question

The temptation is to treat AI adoption as a technology problem with a technology solution. Deploy the tools, train the prompts, measure the usage. But the MIT research makes clear that the decisive variable is human — how people choose to interact with AI, what skills they build or lose in the process, and whether leadership creates conditions for disciplined augmentation rather than passive automation.

In the Instant Competence framework, this is what HD Vision looks like in practice: zooming in past the surface-level metrics to see the system dynamics actually driving outcomes. The organizations that thrive with AI will not be those with the most sophisticated models. They will be those that understand — and deliberately manage — the human side of the equation.


Ready to Think Differently?

If you want to bring systems thinking and AI strategy into your organization, book a call with Drago. Or start with the free Clarity Worksheet from Instant Competence.