The Leaky Stack
Why privacy frameworks struggle when every tool phones home
Professional confidentiality obligations in law, medicine, accounting, and financial advice rest on the assumption that practitioners can exercise meaningful control over how client data is processed — through contracts, configuration, and governance of third-party processors. This assumption is increasingly strained by two developments since 2018: the US CLOUD Act, which grants US law enforcement extraterritorial access to data regardless of storage location; and the embedding of agentic AI features in standard professional software, which extends vendor-side data processing to qualitatively greater scope and opacity.
The insurance market has independently confirmed the significance of this shift. Since January 2026, major carriers have begun excluding AI-related liabilities from standard policies. All four major AI vendors cap their own liability at twelve months of fees and disclaim consequential damages.
This essay argues that the default professional stack has made confidentiality materially more fragile than it was before 2018, and that regulatory guidance has not kept pace.
A longer version of this essay, with full adversarial review documentation, is available as a Zenodo working paper.
The Standard Stack
An accountant, lawyer, or doctor in New Zealand typically operates with a Windows or Mac workstation running Microsoft 365, a smartphone, Xero for accounting, and government filing portals. The UK, Australia, and the EU present the same picture with local variations. This is not exotic. It is the default.
The privacy frameworks governing these jurisdictions — NZ’s Privacy Act 2020, the UK GDPR, Australia’s Privacy Act 1988, the EU’s GDPR — share a common logic. They assume delegated processing: the professional outsources to vendors under contractual safeguards and remains responsible for ensuring data is handled appropriately. These are reasonableness standards, not impossibility standards. The NZ Law Society’s cloud guidance explicitly permits cloud use with reasonable steps.
The question is whether the reasonableness model has kept pace with two changes to the processing environment.
The CLOUD Act Problem
The US CLOUD Act (2018) compels US-headquartered companies to produce data regardless of where it is stored. The UK and Australia have bilateral agreements with the US facilitating cross-border access. The EU does not. New Zealand does not.
A privacy lawyer might object: the CLOUD Act has never been documented as having accessed professional client data. Three responses.
First, CLOUD Act warrants can include non-disclosure orders (18 U.S.C. § 2705(b)). The professional may never know their client’s data was accessed. The absence of documented cases is built into the statute’s design.
Second, professional confidentiality is prophylactic. Lawyers don’t wait for burglaries to assess their locks. A statutory power that compels disclosure from a vendor is a foreseeable risk regardless of frequency.
Third, the professional cannot contractually exclude this compulsion, cannot detect it, and cannot rely on vendor resistance — the vendor’s statutory obligation overrides its contractual commitment.
The delegated-processing model assumes contracts can govern processor behaviour. The CLOUD Act creates a category they cannot govern.
The Agentic AI Inflection
Cloud tools were never purely passive — search indexing, telemetry, and malware scanning have existed for years. Agentic AI extends this trajectory in scope and opacity. When Copilot reads a document to generate a summary, when Xero’s AI categorises transactions, the data enters AI pipelines that generate derivative objects — summaries, embeddings, categorisations — with their own retention characteristics.
The major vendors claim these processes stay within service boundaries and don’t train foundation models. These claims should be noted and understood for what they are: unverifiable assurances subject to terms-of-service updates and CLOUD Act compulsion.
The leaked source code of Anthropic’s Claude Code, reported by The Register in April 2026, illustrates the type of gap that can exist. Claude Code captures every file it reads as plaintext, transmits session data to Anthropic’s servers, and includes an unreleased background agent that mines transcripts for “memories” injected into future API calls. Claude Code is a consumer tool, not enterprise software — but it demonstrates what the gap between vendor assurances and technical reality looks like when source code becomes visible. Whether enterprise tools contain similar gaps is unknown, because their source is not available for inspection.
The insurance market has reached its own conclusion. Since January 2026, Verisk/ISO exclusion endorsements have been available to remove generative AI exposure from standard commercial general liability policies. AIG, WR Berkley, and Great American have filed their own AI exclusions. Gallagher Re’s analysis with MIT concluded that AI liabilities sit in the gaps between every major existing product line. Lockton Re reached the same conclusion independently.
These exclusions originate in the US but propagate through global reinsurance chains. Lloyd’s of London underwrites professional indemnity across the UK, Australia, and New Zealand through the same reinsurance markets. The precedent is cyber insurance: US exclusions propagated globally within twelve to twenty-four months.
Simultaneously, all four major AI vendors — OpenAI, Anthropic, Google, Microsoft — cap liability at twelve months of fees and disclaim consequential damages. The professional deployer bears the residual liability.
When carriers exclude the risk and vendors disclaim it, the claim that “existing mechanisms are adequate” requires identifying who is standing behind the assurance. The answer is: the professional, potentially uninsured, bearing a liability they may not have recognised.
The Compounding Problem
Microsoft holds documents and emails. Xero holds financials. Google or Apple holds phone metadata — which, for an on-call doctor, reveals which patients are in crisis by call timing. Government portals hold tax filings. Each dataset is partially sensitive alone. Combined, they constitute a complete profile of a client’s affairs.
Privacy frameworks assess risk processor-by-processor. The actual danger is compositional: cross-stack inference risk when partial datasets held by different vendors are combined. This compositional risk is asserted on structural grounds — the essay does not claim it has been realised, but that it is foreseeable and unaddressed.
The Reasonableness Trap
Alternatives exist. A professional could write in LaTeX, use Proton Mail, keep accounts locally, carry a Nokia. Each would be more secure. None would be taken seriously.
A lawyer who submits documents in anything other than Word format creates friction everywhere. An accountant who sends handwritten invoices signals eccentricity. A doctor on-call with a Nokia looks like they can’t afford a smartphone. The professional who deviates pays a reputational cost — not because anyone mandates the insecure option, but because the ecosystem treats conformity to the dominant vendor as a signal of professional legitimacy.
“Reasonable steps” implicitly means “what a reasonable practitioner would do.” If everyone uses Microsoft 365 and Xero, then using them is the reasonable step — regardless of whether something more secure exists. The standard measures conformity to professional norms, not to security principles. Where those have diverged, the standard reinforces the less-secure option.
What Would a Regulatory Response Look Like?
The problem cannot be solved by individual practitioners. Privacy regulators should distinguish between passive cloud storage and active AI processing. Professional bodies should advise that default configurations may not meet the reasonable-steps standard for sensitive data — and should note that professional indemnity insurance may no longer cover AI-related losses. Governments should negotiate CLOUD Act bilateral frameworks or invest in sovereign infrastructure. Vendors should be required to provide auditable disclosure of AI data processing.
Conclusion
Professional confidentiality has not vanished. It has become materially more fragile, more vendor-dependent, and less governable than privacy frameworks assume. The strongest objection is that this demonstrates governance asymmetry rather than a breakdown. That objection is serious. The reply: the governance gap is unacknowledged by professional guidance, uninsured by standard policies, and entirely borne by the professional.
The vendor disclaims liability. The insurer excludes coverage. The professional stands in the gap alone.
The foundations of professional confidentiality deserve closer examination than they are currently receiving. This essay is an attempt to prompt that examination.
C. Kererū writes on infrastructure resilience, institutional failure, and the gap between policy and reality. Previous essays include “Antifragile Networks” and “The Filter and the Family.”
This essay was developed through structured adversarial review across four AI models (DeepSeek R1, ChatGPT, Claude Opus, Grok) using the Triveritas framework. Seven review passes identified and corrected over-claiming, category errors, jurisdictional mismatches, and evidentiary gaps. The full review protocol is documented in the Zenodo working paper.


