Pillar Two programmes don't burn capital on math.
They burn it on coordination replay.
I have spent more time inside Pillar Two programmes than most people who are not required to file the resulting GIR. Across them, the same pattern keeps appearing — and it is not what the Big-4 advisory decks describe.
The decks describe a quantitative problem. They sketch jurisdictional ETR, the 15% minimum, the substance-based income exclusion, the QDMTT / IIR / UTPR routing sequence. They suggest the programme cost reflects the difficulty of the underlying mathematics.
That is not where the cost lives.
The cost lives in coordination replay.
The pattern
What I mean by that:
At the moment a top-up figure is challenged — by the group's external auditor, by a local tax authority asking why the Q3 figure differs from Q2, by a CFO who does not believe the number — the group does not lack a calc engine. It has several. Spreadsheets. Big-4 tooling. Tax-provision software. An internal model. What it lacks is a defensible chain of which constituent-entity books were authoritative, which Article 3.2 adjustments were made under which interpretation, and which version of the OECD Central Record's qualified-status flags drove the routing.
So the figure gets reconstructed by hand.
That reconstruction routinely occupies the group tax department, twelve subsidiary controllers, the local audit teams, and a Big-4 shadow team for several working days. Group tax pulls a CbCR-aligned cut. Subsidiary controllers each have their own view of what was excluded under Article 3.2.1(b) (excluded dividends), what was added back under Article 3.2.1(g) (policy disallowed expenses, including fines and penalties), and what counted as a non-qualified refundable credit. The Big-4 shadow produces an independent GloBE income figure that differs from the group tax figure by a per-jurisdiction amount nobody can immediately attribute. The chain is explained back via email, with attachments.
The next quarter, the same question comes back about the same jurisdiction. And it happens again.
The compute layer is doing the same Article 5 walk seven times because seven different teams are not coordinating on which constituent-entity inputs are authoritative. The cost is paid in headcount, in tax-provision reconciliation hours, and, most damagingly, in the credibility lost every time a top-up figure cannot be defended without manual reconstruction.
This is not a fault of the tax-provision vendors. They have done their job. The programme spend has not bought what the programmes were sold to buy.
The math is the easy part
The core Articles 3, 4, and 5 mathematics is surprisingly compact. The difficulty is not expressing the formulas; it is operationalising them safely across thirty-plus subsidiary controllers, each with their own local accountant, their own ERP cutoff, and their own read on which Article 3.2 line items apply this year.
GloBE income from book PBT, Adjusted Covered Taxes from current and deferred tax expense, the year-indexed SBIE carve-out, the top-up rate clamped at zero, the QDMTT credit netting. None of it is hard once you have settled what excluded_dividends_eur actually is on this constituent entity, this fiscal year, against this consolidated reporting policy.
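That compactness is checkable. A minimal sketch of the Article 5 arithmetic in Python — the function name and the figures are mine, illustrative rather than any tool's output:

```python
def jurisdictional_top_up(globe_income: float,
                          adjusted_covered_taxes: float,
                          sbie_carve_out: float,
                          qdmtt_paid: float,
                          minimum_rate: float = 0.15) -> float:
    """The Article 5 walk: ETR, top-up percentage, excess profit, QDMTT netting."""
    if globe_income <= 0:
        return 0.0  # no net GloBE income in the jurisdiction, no top-up
    etr = adjusted_covered_taxes / globe_income
    top_up_pct = max(0.0, minimum_rate - etr)            # clamped at zero
    excess_profit = max(0.0, globe_income - sbie_carve_out)
    gross_top_up = top_up_pct * excess_profit
    return max(0.0, gross_top_up - qdmtt_paid)           # QDMTT credit netting

# EUR 100M of GloBE income at a 9% ETR, EUR 20M SBIE, EUR 1M QDMTT already paid:
# 9% ETR -> 6% top-up rate -> 80M excess profit -> 4.8M gross -> 3.8M net
top_up = jurisdictional_top_up(100e6, 9e6, 20e6, 1e6)
```

Every one of those inputs, not the four lines of arithmetic, is where the work lives.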
That settling is the bottleneck. And settling is a coordination problem, not a math problem.
The three components of the coordination problem are predictable:
- Re-run economics. Each top-up is computed several times by different teams across the group because none can prove they are using the same constituent-entity inputs as the others. The marginal cost of one more recomputation is taken to be free; the cumulative cost is most of the programme.
- Explanation latency. When the CFO asks "why did our Ireland top-up move €2.4M between Q1 and Q2?" the answer is reconstructed manually on each occurrence, even when the question is the same one asked last quarter against the same jurisdiction.
- Authority fragmentation. No single store holds the authoritative versions of constituent-entity book numbers, the Article 3/4 adjustments applied, and the engine version that produced last quarter's filing. So every team caches its own. Every reconciliation between caches is a labour cost.
The architecture requirement that follows
Once you frame Pillar Two this way, the architecture requirements change.
The critical requirement is no longer merely "produce the top-up figure." It becomes:
Prove which constituent-entity inputs were admissible. Prove which Article 3 / 4 / 5 walk produced the figure. Prove that the same chain can be replayed later, by a regulator or a Big-4 auditor, without ambiguity.
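The artefact that satisfies those three proofs is small. A sketch of what one lineage record might carry — the field names and values here are mine, not a published schema:

```python
import hashlib
import json

def function_identity(engine_version: str, params: dict) -> str:
    """Hash the engine version together with the calibration that drove a run."""
    blob = json.dumps({"engine": engine_version, "params": params},
                      sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()

# One jurisdictional top-up, pinned to its inputs and the code that produced it.
lineage_node = {
    "jurisdiction": "IE",
    "fiscal_year": 2024,
    "input_leaves": ["<sha256 of each committed constituent-entity row>"],
    "function_ref": function_identity("engine-1.4.2",
                                      {"minimum_rate": 0.15,
                                       "central_record_version": "2024-06"}),
    "top_up_eur": 2_400_000,
}
```

Any later dispute about the figure becomes a dispute about a hash, which is resolvable.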
That requirement is closer to a governance substrate than to a tax-provision library. It is the part of the stack the vendors have not delivered, because their incentives are different — they sell breadth of jurisdictional coverage and integration with major ERPs. None of that helps when a tax authority asks why the top-up figure for the Ireland jurisdiction on this year's GIR is not consistent with the constituent-entity rows the group filed in its CbCR.
The tool
I have been building a tool that approaches Pillar Two from this angle. It is a self-hosted GloBE calculator with a few specific properties.
- Constituent-entity rows are committed to the system only after the caller has been shown the canonical form and has echoed back the SHA-256 of that canonical form. Mismatches are first-class refusals, persisted as queryable data.
- Every jurisdictional top-up produced is a node in a Merkle DAG anchored to its constituent-entity input leaves and to the function identity (engine version, regime calibration, parameter-set hash including the year-indexed safe-harbour thresholds and the Central Record qualified-status flags) that produced it. Drift any of those, the root hash changes.
- The lineage is exportable as a single JSON file. A regulator or Big-4 reviewer runs an offline verifier — one stdlib-only Python file, no project imports, no network — against that JSON and replays the chain. Any single-byte mutation produces a refusal at the precise broken node.
- The GloBE Information Return XML export carries the root hash, the function reference, and the body hash in its response headers. In block mode, the renderer refuses to ship a constituent entity that lacks provenance.
- Refusals are persisted as queryable entities. "Show me every refused Article 3 walk for the Ireland jurisdiction last fiscal year" is one endpoint call, not a ticket.
- The QDMTT / IIR / UTPR routing is driven by the OECD Administrative Guidance Central Record's qualified-status flags, calibrated in a single parameter file. When the Central Record updates — and it will, several times a year — the change is one diff to one file and the function-reference hash makes the calibration drift visible in every replay.
- It runs in your own infrastructure. Constituent-entity financials, Article 3.2 adjustment decisions, and top-up figures never leave your network.
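The commit handshake in the first bullet is a few lines of code. A sketch under my own invented names — the actual tool's canonicalisation and API are not reproduced here:

```python
import hashlib
import json

def canonical(row: dict) -> bytes:
    """One canonical form: sorted keys, no whitespace, UTF-8."""
    return json.dumps(row, sort_keys=True, separators=(",", ":")).encode("utf-8")

def commit_row(row: dict, echoed_sha256: str, refusals: list) -> bool:
    """Commit only when the caller echoes the hash of the canonical form."""
    expected = hashlib.sha256(canonical(row)).hexdigest()
    if echoed_sha256 != expected:
        # a mismatch is a first-class refusal, persisted as queryable data
        refusals.append({"row": row, "echoed": echoed_sha256,
                         "expected": expected})
        return False
    return True

refusals = []
row = {"entity": "IE-01", "book_pbt_eur": 12_500_000,
       "excluded_dividends_eur": 300_000}
ok = commit_row(row, hashlib.sha256(canonical(row)).hexdigest(), refusals)
bad = commit_row(row, "deadbeef", refusals)  # wrong echo: refused, not dropped
```

The point of the echo is that the caller proves it saw the same bytes the system will store, before anything is stored.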
The math is published. The verifier is published. The chain holds under tampering or it does not, and you can prove which.
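A verifier of that shape is genuinely small. A stdlib-only sketch of the replay check — the export schema here is illustrative, not the tool's:

```python
import hashlib
import json

def leaf_hash(leaf: dict) -> str:
    return hashlib.sha256(json.dumps(leaf, sort_keys=True,
                                     separators=(",", ":")).encode()).hexdigest()

def make_node(node_id: str, function_ref: str, leaves: list) -> dict:
    material = function_ref + "".join(leaf_hash(l) for l in leaves)
    return {"id": node_id, "function_ref": function_ref, "leaves": leaves,
            "hash": hashlib.sha256(material.encode()).hexdigest()}

def replay(export: dict):
    """Recompute every node hash from its leaves; name the first broken node."""
    for node in export["nodes"]:
        material = node["function_ref"] + "".join(leaf_hash(l)
                                                  for l in node["leaves"])
        if hashlib.sha256(material.encode()).hexdigest() != node["hash"]:
            return False, node["id"]  # refusal at the precise broken node
    return True, None

export = {"nodes": [make_node("IE-2024", "engine-1.4.2",
                              [{"entity": "IE-01",
                                "globe_income_eur": 100_000_000}])]}
assert replay(export) == (True, None)

export["nodes"][0]["leaves"][0]["globe_income_eur"] += 1  # one mutated value
ok, broken = replay(export)  # (False, "IE-2024")
```

Nothing in that check needs network access or the running system, which is the entire point.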
What that does to the multi-day reconciliation
A jurisdictional top-up movement that previously required group tax, the subsidiary controllers, and the Big-4 shadow several working days to reconstruct becomes a deterministic replay against a saved lineage root.
The replay does not need access to the running system. The verifier and the saved JSON are sufficient. An auditor at their own desk, in their own environment, can confirm the top-up figure is the figure that came out of the engine on the day, against the constituent-entity inputs claimed, under the Articles 3.2 / 4.1 / 5 walk claimed, against the Central Record qualified-status flags claimed.
The same answer next quarter is the same hash. If anything has changed — a subsidiary controller's dividend exclusion decision, a Central Record flag flip (a jurisdiction's QDMTT becomes qualified mid-year), an engine version, a regime parameter — the hash is different and the diff identifies the change. There is no manual reconstruction.
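That quarter-over-quarter check reduces to a hash comparison over the saved inputs. A sketch, again under invented names:

```python
import hashlib
import json

def h(obj) -> str:
    return hashlib.sha256(json.dumps(obj, sort_keys=True,
                                     separators=(",", ":")).encode()).hexdigest()

def changed_inputs(prev: dict, curr: dict) -> list:
    """Name every saved input whose hash moved between two quarters."""
    return [k for k in prev if h(prev[k]) != h(curr.get(k))]

q1 = {"dividend_exclusions": {"IE-01": 300_000},
      "central_record": {"IE_qdmtt_qualified": False},
      "engine": "1.4.2"}
q2 = {**q1, "central_record": {"IE_qdmtt_qualified": True}}  # flag flip mid-year

assert h(q1) != h(q2)           # different answer, provably different hash
moved = changed_inputs(q1, q2)  # ["central_record"]
```

The diff names the Central Record flip directly; nobody reconstructs anything.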
That is the operational unlock.
The wedge
This is a deliberately narrow positioning.
It is not a Thomson Reuters ONESOURCE replacement and is not pitched as one. The breadth-of-jurisdiction problem is well solved. The replayability problem is not. The two compete on different axes.
Groups do not need another opaque tax-provision engine. They need a way to defend their GIR figures without reconstructing them manually every quarter.
If that framing is recognisable from inside your programme, I would be interested in talking. Quietly, off any vendor list. The tool is private; access is granted on request after a short conversation.