The Illusion of Control

Why Corporate AI Initiatives Fail Inside the Firewall While Employees Clamor Outside

I am a fan of Joan Didion's narrative control, a discipline that demands an unflinching examination of reality. This essay adopts her style to interrogate the corporate narrative against the messier, employee-level reality of AI adoption. She taught that the first sentence is declarative, and the second is the commitment that locks it in. Here goes:

Employees are using AI, just not the corporate-prescribed kind. This Dickensian paradox, the gulf between sanctioned corporate systems and personal-professional AI usage, is both a stunning fact and a warning.

Two Counter-Narratives

Didion would seek out counter-narratives to tell the story. What counter-narratives demand examination, and from whose perspective?

Corporate Persona: A skeptical, technically-focused Chief Legal Officer (CLO) reads a compliance update at 7:30 PM. What is the story we are dangerously neglecting to tell?

Counter-Narrative: "Personal usage is out of control. The story neglected here is what happens when that 'well-meaning employee' accidentally uses copyrighted IP either in the input prompt or the resulting output, triggering a costly injunction. We need an 'AI panic room' to stop all infringement entirely."

Employee Persona: A highly productive, non-compliant senior team member reads the new policy update at 7:30 PM from their phone. What is their story?

Counter-Narrative: "If the legal department defines the biggest risks as IP leakage and unauthorized data use, why is it neglecting the story of institutional time-theft? Their AI prohibition costs me two hours of my life every single day, and the sanctioned tools are functionally useless. When I use the public AI, I'm not being malicious, I'm fixing their efficiency problem so I can solve my time problem. They have no problem using my phone to reach me at all hours, but suddenly have an issue when I use it to gain some time back? Where is the mandate that protects my time, my intellectual property, and my sanity?"

Image Caption: 7:30 PM. Two narratives collide. Is she the skeptical CLO, stressed over exposure? Or the top performer, fatigued by institutional time-theft? The policy is failing them both. This is the cost of illusory control.

The facts behind these two narratives are brutal and unyielding: nearly ninety percent of employees are using personal, public AI for work tasks, bypassing the corporate solutions entirely. Concurrently, up to ninety-five percent of corporate AI initiatives yield zero tangible return on investment. This is not a coincidence. It is the core narrative conflict of the AI era: the internal systems built for prohibition are so restrictive that they drive the actual work, the intellectual labor and insight, beyond the pale.

The corporate story is always about control. It is mandated by compliance, underwritten by IT, and backed by the certainty of the firewall. The narrative asserts: we will acquire sanctioned tools, establish rigorous guardrails, and mandate adoption through controlled internal systems. This corporate AI narrative has already failed.

For employees, the battle between agency and compliance ends in a decisive choice. By exercising agency and using personal AI systems at work, employees are becoming a copy-AI-paste army. The rate is high enough to suggest that a public AI might have ghostwritten the AI prohibition policies that exclude its own usage.

Yet one devastating truth remains: both sides face sheer, unacceptable exposure and loss.

The Catastrophic Loss

Joan Didion would use these counter-narratives to arrive at the real, shared story.

Far beyond the legal implications lies the loss of insight. The data derived from user interactions, the pain points, knowledge gaps, and daily challenges, is lost forever, leaving management blind to its workforce's strategic needs. When employees venture beyond the ramparts, that essential intelligence and knowledge capital goes with them. Outside, they leave behind the institutional nuance, branding, and knowledge that could generate meaningful value and keep them from hurling 'AI workslop' back over the wall.

Employees are learning to write business code through prompting. It is a new language, and their world is changing. On personal devices, their experiments and missteps stay safely hidden from the view of IT, management, and leadership. They need a development environment, not a restricted one, but one designed to allow mistakes, modeled on familiar IT infrastructure: sandboxes, dev servers, UAT, and production. The mandate must shift from "Thou Shalt Not" to "Here Is a Safe Place to Discover," demanding that the corporation embrace the agency employees already exercise.

A structure like this honors the employee's genuine agency and need for powerful tools (getting time back) while ensuring every query remains within the organization's legal and security confines. Within this controlled space, usage can be tracked, anonymously or otherwise, to feed organizational knowledge, turning employee curiosity into a direct line of insight for the business.

A Return to Shared Learning

Can we gain buy-in by helping employees understand that compliance serves a purpose? The employee, facing real-time pressure, votes with their fingers for the unrestricted public model, trading compliance for convenience and safety for speed. Infringement accidents happen, and when they happen on a personal system, they expose the employee to both professional and personal liability.

The history of technology demonstrates how the freedom to make early mistakes let users flourish alongside their tools. We once wrote 'Netiquette' guides for proper internet usage, preventive advice designed for the hapless employee who hit 'Reply All' instead of 'Reply.' We must teach users to write business code in the form of a prompt, to detect and flag IP issues in what they put in and what comes out, and to inform the organization when something goes wrong. Style guides and knowledge repositories that teach these skills are the keystone to bridging this divide and channeling employee curiosity toward strategic intent. This is how organizations learn and develop AI systems that genuinely meet their employees' needs.

Didion also taught that the last sentence is the culmination of the story. The imperative is clear:

The wellness of the CLO and the employee is intrinsically linked. Both personas seek self-preservation while striving for high performance. The challenge lies in forging their counter-narratives into a singular, successful story that converts AI outputs into meaningful outcomes.

Representative Works on Joan Didion’s Writing Style:

Joan Didion, "Why I Write," The New York Times Magazine, Dec. 5, 1976: [https://www.nytimes.com/1976/12/05/archives/why-i-write-why-i-write.html?smid=url-share]

Steve Weinberg, "The bold Joan Didion story you probably never read," Nieman Storyboard, Nieman Foundation for Journalism at Harvard: [https://niemanstoryboard.org/2022/01/13/the-bold-joan-didion-story-you-probably-never-read/]

"Joan Didion, The Art of Fiction No. 71," The Paris Review: [https://www.theparisreview.org/interviews/3439/the-art-of-fiction-no-71-joan-didion]

Or visit the Didion Dunne Literary Trust to explore her life and works: [https://www.joandidion.org/]

Sources

Image Courtesy of Motion Array. [https://motionarray.com/stock-photos/entrepreneur-joining-online-meeting-on-phone-at-night-in-office-3781830/]

Statistics from the MIT Media Lab report on the GenAI Divide: [https://www.artificialintelligence-news.com/wp-content/uploads/2025/08/ai_report_2025.pdf]

