When my teenage son developed mysterious symptoms, I followed the same path anyone else would: I put his health in the hands of a team of medical professionals. Multiple myeloma is a rare blood cancer. It is so uncommon in 17-year-olds that it doesn’t appear on diagnostic checklists. Despite having no clear starting point to work from, my son’s doctors worked their way to an accurate diagnosis through a process of trial and error, bouncing ideas off each other and testing and discarding hypotheses until they could tell us what was wrong. The process felt inefficient and uncertain at a time when I wanted fast answers and cast-iron guarantees. But this messy and distinctively human approach saved my son’s life.
AI promises to improve processes like this, replacing the fallible and unpredictable human mind with the analytic power of trained and tested algorithms. As someone who helps organizations implement AI technology, I know just how much potential it has to make processes and workflows more efficient. But before we start replacing human judgment at scale, we need to think carefully about the hidden costs that can come with productivity gains.
A recent study in The Lancet Gastroenterology & Hepatology presented some sobering findings for AI maximalists. Physicians who spent several months working with AI support in diagnostic roles showed a significant decline in unassisted performance when the technology was withdrawn. This kind of “deskilling” effect isn’t unique to either medicine or AI. We have known for years that extensive GPS use leads to a decline in spatial memory and that easy access to information reduces our ability to recall facts (the so-called “Google effect”).
Most people are willing to accept these cognitive losses in exchange for convenience. And that is a trade-off that individuals need to decide for themselves. But when it comes to organizations and institutions, things are more complex.
The first concern that leaps to mind is losing access to our AI tools after outsourcing our skills to them. What if the system crashes or performance drops off? While this is a real problem, it is nothing new. We can design backup solutions where necessary, just as we always have with technology.
But there is another set of problems that cannot be resolved simply by putting guardrails in place. Human skill sets are important not just because they let us act on those skills, but also because they let managers and decision-makers understand and supervise what is happening on the frontlines. If physicians lose their diagnostic chops, who will validate or audit the output of the algorithms? Who will notice that the edge cases—the patients with statistically implausible diseases—are not being diagnosed correctly? And, perhaps most importantly, who will take responsibility for the algorithmic judgments, whether they are right or wrong?
For most organizations, maintaining public trust is a core part of their relationship with society. Just as we won’t eat in a restaurant if we don’t trust the kitchen to deliver safe food, so we avoid products and services that we believe may harm us. Without accountability, trust is impossible.
As an IBM training manual put it nearly 50 years ago: “A computer can never be held accountable, therefore a computer must never make a management decision.” The same principle holds true for AI. Without a clear accountability trail that leads to a human decision-maker, it becomes impossible to hold anyone responsible for any harms that arise from the AI’s behavior. And this accountability deficit can destroy the legitimacy of an institution.
We can see these dynamics at work in the U.K.’s 2020 exam grading debacle. At the height of the COVID pandemic, with normal exams cancelled, the U.K. government used an algorithm to assign grades. The algorithm imported biases and systematically favored children from wealthy backgrounds. But even if it had worked perfectly, something critical would still have been missing: institutions that can justify their decisions to those affected by them. Nobody will be satisfied by an algorithmic explanation for a result that might have lifelong effects. Ultimately, the government reversed course, replacing the AI judgment with assessments made by each student’s teachers.
What this means for your organization
The challenge isn’t whether to use AI—it’s how to implement it without creating dangerous dependencies. Here are specific actions leaders, managers, and teams can take:
- Implement AI rotation schedules: Ensure that teams rotate periodically from AI-assisted work to manual work to maintain core competencies.
- Create skill preservation protocols: Document which human capabilities are mission-critical and cannot be outsourced.
- Establish accountability chains: Specify which decisions require human sign-off.
- Institute “analog days”: Schedule regular sessions where teams solve problems without AI tools.
- Design edge case challenges: Create exercises focusing on unusual scenarios AI might miss.
- Maintain decision logs: Create institutional memory of the value and role of human judgment by documenting when and why you override AI recommendations.
- Practice explanation exercises: Regularly require team members to explain AI outputs in plain language. If they can’t explain it, they shouldn’t rely on it.
- Rotate expertise roles: Ensure multiple people can perform critical tasks without AI support, preventing single points of failure.
Warning signs your organization is too AI-dependent
Watch for these red flags that indicate dangerous levels of dependency:
- Teams can’t explain AI recommendations
- Acceptance of AI results without validation has become the norm
- Staff fail to catch errors or outliers that the AI overlooks
- Employees express anxiety about performing tasks without AI assistance
- Simple decisions that once took seconds now require AI consultation
If you spot any of these signs, you need to intervene to restore human capability.
The path forward
My son’s cancer was successfully diagnosed thanks to structured redundancy in his care team. Multiple specialists approached the same problem through different lenses. The bone specialist saw what the blood specialist missed. The resident asked the naive question that made the senior doctor reconsider. This kind of overlap can look like inefficiency at times, but if we don’t work to retain it, we lose something vital.
We should not shy away from the advantages AI can offer when it comes to analytical speed and pattern-recognition. But at the same time, it is essential that we shield the decision-making process from being overwritten by a single algorithmic voice. We must keep humans in the loop both because they can look beyond statistical likelihood and because they can be held accountable for their final decisions.
Yes, maintaining human capabilities alongside AI will be expensive. Training tracks that preserve human skills, AI-off drills, and rigorous human audits all cost money. But they preserve the institutional muscle memory that holds the whole edifice up. The cost of losing the human perspective is one we cannot afford to bear.