diff --git a/content/blog/2026-04-26-proof-of-concept-outlived-approval-window.md b/content/blog/2026-04-26-proof-of-concept-outlived-approval-window.md
new file mode 100644
index 0000000..e161143
--- /dev/null
+++ b/content/blog/2026-04-26-proof-of-concept-outlived-approval-window.md
@@ -0,0 +1,64 @@
---
title: "The Proof of Concept That Outlived Its Approval Window"
date: 2026-04-26
pillar: "Wild Card"
tags: [ai, enterprise, governance, architecture]
linkedin_copy: |
  The fastest way to kill an AI initiative inside an enterprise is to succeed with it before the governance process catches up.

  The pattern repeats: someone on the team tries a tool, builds a proof of concept in a week, and it works. Generates draft backlog items from transcripts. Maps API fields between systems. Writes first-pass test cases that used to take two days. The team starts depending on it informally.

  Then governance begins. Security review. Data privacy review. Responsible AI review. Vendor management review. Each one asking legitimate questions. Each one taking weeks.

  The proof of concept took a week to build. The approval process takes two months. During those two months, the vendor ships a breaking change, or deprecates the model version, or raises prices. The thing that worked is no longer the thing being evaluated. The proof point ages out before the gate clears.

  This is not a story about governance being too slow. Enterprise security and data handling reviews exist for real reasons. Telling the security team to hurry up is not a strategy.

  The approach that survives is architectural. Build provider-abstracted plumbing from day one. The workflow calls an abstraction layer, not a specific vendor API. Prompt templates live in version control, not a vendor UI. Model identity is a configuration variable, not a hardcoded dependency. Input redaction runs before the request leaves your network regardless of which model receives it.

  When governance asks "what model does this use," the answer is: whichever one you have approved. The workflow does not care. Swapping the provider is a configuration change, not a rebuild.

  The practitioners who succeed with AI inside enterprises share one trait: they treat the vendor as configuration and the workflow as the product. They build the audit trail before the first production call, not after the first incident.

  Full post: {{url}}
---

Somebody on the team tries a new AI tool. It works. They build a proof of concept over a weekend or a slow sprint. The proof of concept does something real: it generates draft backlog items from meeting transcripts, or it maps API fields between two systems, or it writes the first pass of integration test cases that used to take a developer two days.

The proof of concept works well enough that the team starts depending on it. Not formally. Nobody put it on a roadmap. But it saves four hours a week, and the person who built it keeps running it because the output is useful.

Then someone mentions it in a status meeting. And the governance process begins.

Security review. Data privacy review. Responsible AI review. Vendor management review. Each one independently reasonable. Each one asking legitimate questions. Where does the data go? What model processes it? Is there PII in the input? Who owns the output? Is the vendor on the approved list? What happens if the vendor changes their terms?

Good questions. All of them. The problem is not the questions. The problem is the timeline.

The proof of concept took a week to build. The governance review takes two months. During those two months, the vendor ships a breaking API change. Or the vendor raises prices. Or the model version that powered the proof of concept gets deprecated. Or a competing vendor clears the enterprise approval gate first, and now the team is told to rebuild on a different provider.

The proof of concept outlived its approval window. The thing that worked is no longer the thing being evaluated.

I have watched this pattern play out at least three times in the last year across different organizations and different tools. The sequence is identical each time. Someone adopts early. The early adoption works. The governance process starts late. The proof point ages out before the gate clears. The team either starts over or gives up.

This is not a story about governance being too slow. Governance at enterprise scale moves at the speed it does for real reasons. Data handling matters. Vendor risk matters. Model provenance matters. Telling the security team to hurry up is not a strategy.

This is a story about how practitioners can build AI workflows that survive the governance timeline instead of racing against it.

The answer is architectural, not procedural. Build provider-abstracted plumbing from the start.

That means: the workflow does not call a specific vendor's API directly. It calls an abstraction layer that routes to whichever provider has cleared the current approval gate. The prompt templates live in version control, not in a vendor's UI. The model identity is a configuration variable, not a hardcoded dependency. The input redaction step (strip PII, strip client names, strip anything sensitive) runs before the request leaves your network, regardless of which model receives it. A minimal sketch of this layer appears below.

This is not over-engineering. This is the minimum architecture that lets a workflow survive the reality of enterprise vendor management.

The CI platform you already run is the runtime for this. You do not need a new orchestration framework. You need a pipeline step that calls a model endpoint, logs the request and response, applies a redaction filter, and stores the output. Your CI system already knows how to do all of those things. Pinned dependencies through a proxied registry. Scoped credentials at every trust boundary. Audit trail on every call. A sketch of such a pipeline step appears below as well.

When the governance team asks "what model does this use," the answer is: whichever one you have approved. The workflow does not care. The prompts are the same. The redaction is the same. The audit trail is the same. Swapping the provider is a configuration change, not a rebuild.

This approach has a cost. It takes longer to build the first version. The person who could have had a working proof of concept in a weekend now needs a week to build the abstraction layer. That feels slow when you are excited about what the tool can do.

But the alternative is building a proof of concept that works for six weeks and then dies in a governance queue. Or building it three times because the approved vendor keeps changing. Or, worst of all, having the team give up on the workflow entirely because the rebuild cost exceeded the enthusiasm.

The practitioners I have seen succeed with AI tooling inside enterprises share one trait. They do not pick a vendor first. They build the plumbing first. They treat the vendor as a configuration detail and the workflow as the product. They build the audit trail before the first production call, not after the first incident.
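
To make this concrete, here is a minimal sketch of the abstraction layer in Python. Everything in it is an assumption for illustration: the module name, the provider entry, the environment variables, the endpoint URL, and the OpenAI-style response shape are placeholders, not any specific vendor's real API. The point is the shape: the workflow imports `complete`, and the vendor lives in configuration.

```python
# llm_gateway.py - hypothetical module name; a sketch, not a drop-in library.
import os
import re

import requests  # assumes the requests package is available

# Model identity is configuration, not code. A real setup would load this
# from a checked-in config file; this provider entry is a placeholder.
PROVIDERS = {
    "approved": {
        "endpoint": os.environ.get("MODEL_ENDPOINT", "https://llm-proxy.internal/v1/chat"),
        "model": os.environ.get("MODEL_NAME", "approved-model-v1"),
        "api_key_env": "MODEL_API_KEY",
    },
}

# Redaction runs before anything leaves the network. These two rules are
# deliberately naive examples; a real filter would be far more thorough.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\bAcme Corp\b"), "[CLIENT]"),  # hypothetical client name
]


def redact(text: str) -> str:
    """Strip sensitive strings from the input, whichever model receives it."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text


def complete(prompt: str, provider: str = "approved") -> str:
    """Send a redacted prompt to whichever provider is currently approved."""
    cfg = PROVIDERS[provider]
    response = requests.post(
        cfg["endpoint"],
        headers={"Authorization": f"Bearer {os.environ[cfg['api_key_env']]}"},
        json={
            "model": cfg["model"],
            "messages": [{"role": "user", "content": redact(prompt)}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # Assumes an OpenAI-style response body; per-provider quirks belong
    # behind this function, invisible to the workflow.
    return response.json()["choices"][0]["message"]["content"]
```

Swapping vendors means editing the provider configuration and the environment variables. Nothing upstream changes.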
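
And here is a sketch of the pipeline step a CI system would run, under the same caveats: the file names, the artifact location, and the `llm_gateway` import refer to the hypothetical module above, not to a real package.

```python
# generate_output.py - hypothetical pipeline step; a sketch under the
# assumptions above, not a definitive implementation.
import json
import pathlib
import sys
import time

from llm_gateway import complete, redact  # the layer sketched above


def main() -> None:
    # Prompt templates live in version control; the pipeline passes one in.
    prompt = pathlib.Path(sys.argv[1]).read_text()
    output = complete(prompt)

    # Log both sides of the exchange as an append-only build artifact.
    # The request is logged post-redaction so the audit trail stays clean.
    pathlib.Path("artifacts").mkdir(exist_ok=True)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "prompt_file": sys.argv[1],
        "request": redact(prompt),
        "response": output,
    }
    with open("artifacts/llm_audit.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")

    print(output)


if __name__ == "__main__":
    main()
```

The CI system supplies everything around this step: pinned dependencies, scoped credentials, retained artifacts.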
+ +And when the governance review finally clears, their workflow is already running. Because it was never waiting for a specific vendor. It was waiting for any vendor. And that is a wait that always ends. + +The proof of concept is the easy part. The architecture that lets it survive your own organization is the hard part. Build for the hard part first. \ No newline at end of file