The Sabotage Manual
How We're Using 80-Year-Old Tactics to Kill AI Innovation and Adoption
In 1944, the CIA’s predecessor, the Office of Strategic Services (OSS), published a manual teaching ordinary citizens how to quietly sabotage an organization from within: form committees, demand perfect paperwork, reward caution.
Eighty years later, we’re not just following this playbook; we’re using AI to execute it at scale.
Earlier this year, I published an article in the Enterprise Technology Leadership Journal called “Breaking the Sabotage Cycle.” My theme throughout was how the Department of War has become its own worst enemy when it comes to adopting new technology.
Refer All Matters to Committees.
“Refer all matters to committees, for ‘further study and consideration.’ Attempt to make the committees as large as possible — never less than five.”
I recently read an article in MIT Sloan Management Review by Robert Pozen and Renee Fry about how to increase the productivity benefits of AI. They mention that companies are racing to adopt AI by creating “centers of excellence” or other governance committees.
And for good reason: you do not want to blindly throw AI at something and have it inadvertently leak data or raise ethical concerns. (Who can forget when Grok began spewing hateful content earlier this year? Yikes.)
The problem is that these internal committees are not close enough to the work the technology is meant to improve. This stymies adoption and results in shelved pilots.
Think of how BMW successfully implemented its AI quality control system: workers on the floor trained the models themselves, not some distant center of excellence or external consultant.
Insist on Perfect Work.
“Insist on perfect work in relatively unimportant products; send back for refinishing those which have the least flaw.”
As a recovering perfectionist, I can wholeheartedly say that this one is sure to block the advancement of goals. The thing with experiments is that they’re meant to test a hypothesis, not show up on the balance sheet.
What I’m seeing more often is that organizations insist on a complete end-to-end ROI analysis and a crystal-ball forecast of what the business will look like after the transformation. It’s the corporate version of “measure twice, never cut.” The problem is that the technology (and the market) rarely sits still long enough for those assumptions to remain true.
In the time it takes to perfect the model, the opportunity window closes. Meanwhile, smaller teams are out there shipping prototypes, learning from feedback, and capturing the insights that perfectionism leaves on the table.
Perfectionism masquerades as prudence, but it’s really fear dressed up as rigor. It’s the quiet voice that says, “We’ll move forward once we have all the answers,” knowing full well that the answers only come after you start.
In the OSS sabotage manual, the instruction was to “insist on perfect work in unimportant matters.” In today’s AI era, that looks like chasing flawless model accuracy before release, or spending six months aligning PowerPoints instead of testing with real users. The result is the same: motion without progress.
The antidote isn’t recklessness, it’s iteration. Progress favors the teams who ship, learn, and improve in real time. In the world of AI, perfection isn’t the goal; adaptation is.
Multiply Paperwork.
“Multiply paper work in plausible ways. Start duplicate files.”
There is plenty of analysis worth doing to ensure a corporation’s dollars are not wasted, and there is value in identifying thin slices of work where a use case can meaningfully advance an organization’s goals and mission.
The problem is when endless layers of “AI readiness” assessments become busywork: they burn out teams who can’t tell whether the juice is worth the squeeze (see the Expertise Decay Tax I wrote about last time), or they lead to shelved ideas and pilots, leaving organizations exactly where they started.
That busywork takes many forms: planning documents, ROI models with 47 variables, 200-slide readiness assessments, or simply the quiet demands of an anxious leader insisting on one more round of analysis before a single model is tuned.
Paperwork is not a substitute for progress; it is just paperwork.
Apply Regulations to the Last Letter.
“Apply all regulations to the last letter… contrive as many interruptions to… work as you can…”
Unless you are living in the European Union, regulatory frameworks aren’t coming to save you…or stop you. The National Institute of Standards and Technology published an AI Risk Management Framework back in 2023. That was at least two ChatGPT versions, three major model releases, and an entire agentic AI revolution ago.
Since NIST’s 2023 framework, we’ve seen GPT-4 and GPT-5 launch, multimodal AI become standard, and AI agents begin autonomously executing tasks. The framework was written for a world that no longer exists.
And, according to Amy Webb’s Tech Trends Report, things don’t seem to be slowing down anytime soon. The report identifies “six structural changes that are already determining which organizations will thrive,” including speed, the extinction of the middle management layer, and new rules of talent, among others. None of those hinge on regulation.
What does this mean? Stop using regulatory uncertainty as camouflage for inaction. Map the regulations that actually apply to your specific use cases: data privacy, security requirements, industry-specific rules. Then move forward within those boundaries. Compliance is a constraint to work within, not an excuse to work without.
Of course, ‘waiting for regulatory clarity’ is just one flavor of institutional caution. The sabotage manual had an even simpler instruction…
Advocate Caution.
“Advocate ‘caution.’ Be ‘reasonable’ and urge your fellow-conferees to be ‘reasonable’ and avoid haste which might result in embarrassments or difficulties later on.”
“Let’s wait for better data before we start.” These are the words of an organization with its metaphorical head in the sand.
Caution is most pronounced the higher you are in the organization. However, as a recent Harvard Business Review article, the “Gen AI Playbook for Organizations,” pointed out, a “cautious ‘wait and see’ approach…is potentially dangerous.” The authors go on to provide a simple 2x2 framework, in true HBR form, to help leaders manage risk while allowing those closest to the work to identify and use gen AI to support it.
After all, who understands the true risks and benefits of an implementation better than those doing the work? (See my earlier discussion of BMW, above.)

Chances are, your organization has evolved dramatically from its scrappy startup days. Success brings scale, scale brings complexity, and complexity breeds caution. New leaders inherit not just the company’s assets but its fears; they are indoctrinated to protect what exists rather than build what’s next. “Manage risk” becomes the mantra, but somewhere along the way, “manage” became “avoid.”
The OSS manual was designed to help citizens slow down enemy organizations during wartime. Today, we’re doing it to ourselves in the corporate world. The difference? Our competitors aren’t following the manual. While we form committees, perfect our paperwork, and advocate caution, they’re shipping, learning, and capturing market share.
The sabotage manual worked because it disguised destruction as diligence. Maybe it’s time we recognize the disguise.
Breaking the Cycle
Leaders and employees at every level of an organization can break the cycle of sabotage and the feedback loops that go nowhere.
Here are some specific actions:
Start with one use case, not a transformation
Give teams permission to experiment within boundaries
Measure progress weekly, not quarterly
Replace “what could go wrong?” with “what could we learn?”
Thank you for reading! And special thanks, as always, to my friend Melissa Sayers for pointing this manual out to me earlier this year. I wish I’d had it years ago.