Unit of Potential (UoP)
A single, governed AI deployment decision — outcome-anchored, constraint-aware, scored. Each UoP carries an Alignment Score and a maturity stage: Generated → Configured → In Production.
Read the spec →

We help executives decide what AI should do, where, for whom — and measure whether it actually worked.
Not a marketing site. An operator’s tool. Take five minutes — we’ll walk you through how to read it.
Most AI vendors slice the world by industry. We slice it by work archetype — because automation maps to work, not to logos. The same UoP for legal drafting ships across financial services, insurance, and government — and the alignment math is the same.
The AI industry assumes work = tasks. Wrong. Work = coordination. Matching tasks and automating them one by one is, most of the time, useless. What matters is alignment between stakeholders — only then can the tasks be derived. We see companies as complex adaptive systems, not task lists.
Industry is downstream of work. Vendors who can’t articulate the work archetype they’re solving for are selling you a logo, not a deployment.
Everything else in the platform is a surface for one of these four. If you know these, you can operate it.
Unit of Potential: A single, governed AI deployment decision — outcome-anchored, constraint-aware, scored. Each UoP carries an Alignment Score and a maturity stage: Generated → Configured → In Production. Read the spec →

Workbench: The operator console. Where to deploy? · Who's ready? · Vendor Allocation · UoP·Live · Cockpit. One surface for the whole AI-deployment decision loop. Open the Workbench →

Vendor Allocation: The honest multi-vendor shelf, per archetype. OpenAI is one of N. Anthropic leads SWE code-gen at 54%. We tell the truth — the allocation is the work, not the badge. See the allocation →

UoP·Live: The proof case. TAG × Harvey legal drafting — $30M realized, 1,240 roles, 0 displaced. One UoP in production today, measured against its business case. See it live →

We don't hide what we don't know. Every UoP, every cell, every recommendation carries one of four tiers — visible to anyone reading it.
1. External data only. Lightcast postings + O*NET DWAs + Eloundou α/β/γ. No org context yet.
2. Enriched with firmographics, geography, revenue band, sector-specific absorption signals.
3. Sponsor identified, Procensus consensus run (L6), governance + redeployment plan in place.
4. In production. Outcomes measured against the business case. Realized Potential vs. Estimated Potential reported.
Today: 3,315 of our 3,316 scored cells are Estimated. One — TAG × Harvey — is Measured. The progression up the ladder is the product.
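The ladder above can be sketched as a data model. This is a minimal illustration, not the platform's actual schema: every class, field, and tier name here (other than "Estimated" and "Measured", which appear on the ladder) is an assumption.

```python
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    """The four-tier confidence ladder, lowest to highest.
    Names for tiers 2 and 3 are invented for this sketch."""
    ESTIMATED = 1   # external data only, no org context
    ENRICHED = 2    # plus firmographics, geography, sector signals
    COMMITTED = 3   # sponsor identified, governance in place
    MEASURED = 4    # in production, outcomes measured

@dataclass
class UoP:
    """One Unit of Potential: a single scored deployment decision."""
    archetype: str
    alignment_score: float  # illustrative 0..1 proxy
    tier: Tier

# Today's ledger: one Measured cell (TAG × Harvey) and 3,315 Estimated.
# Scores here are placeholders.
cells = [UoP("legal drafting", 0.9, Tier.MEASURED)] + \
        [UoP("other archetype", 0.5, Tier.ESTIMATED) for _ in range(3315)]

measured = sum(1 for c in cells if c.tier is Tier.MEASURED)
print(f"{measured} of {len(cells)} cells are Measured")  # → "1 of 3316 cells are Measured"
```

The point of the shape: the tier travels with the cell, so any reader of a score can see how much evidence sits behind it.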
The Alignment Score is a proxy for whether the conditions are right for humans + AI to co-create value. When the score is high, three conditions are met:
1. Everyone — sponsor, operator, worker — agrees what good looks like. The outcome metric is named, owned, and measurable.
2. Humans + AI both make a meaningful contribution. Workers shape the deployment, not just absorb it. AI does what AI does best; humans do what humans do best.
3. Real outcomes, not deck theater. The UoP ships value the business actually feels — measured, governed, surfaced in the ledger.
Realized Potential exceeds Estimated Potential when emergence happens. We don’t predict — we set the table.
That’s why the primitive is called Unit of Potential, not Unit of Plan. Potential = capacity for value to emerge when conditions are set right. The plan is the floor; emergence is the ceiling.
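The floor/ceiling framing reduces to a simple ratio check. A sketch only: the function name and the $24M estimate below are assumptions; the $30M figure is the one realized number from the proof case.

```python
def emergence_ratio(realized: float, estimated: float) -> float:
    """Realized Potential over Estimated Potential.
    A ratio above 1.0 means emergence: value beyond the planned floor."""
    if estimated <= 0:
        raise ValueError("estimated potential must be positive")
    return realized / estimated

# Hypothetical: a $24M business case (the floor) that realized $30M.
ratio = emergence_ratio(realized=30_000_000, estimated=24_000_000)
print(f"emergence ratio: {ratio:.2f}")  # → "emergence ratio: 1.25"
```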
Dignity-with-productivity, not zero-displacement absolutism. The workforce side of every UoP is a first-class artifact — named pathways for every role affected, surfaced before the score lands.
The Worker POV is built into every score. If we can’t name what happens to the people, we don’t ship the UoP.
| Role | People affected | Path | Outcome |
|---|---|---|---|
| Junior associate | 412 | Reskilled toward client-facing legal ops | Augment |
| Paralegal | 318 | Reskilled toward AI-supervised drafting review | Augment |
| Document specialist | 187 | Reskilled toward case-management orchestration | Reskill |
| Contract reviewer | 162 | Augmented — throughput +3.4× with human-in-the-loop | Augment |
| Legal admin | 98 | Retained — coordination layer untouched | Retain |
| Knowledge manager | 63 | Expanded — AI corpus curation becomes core scope | Expand |
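The pathway table rolls up to the headline numbers in the proof case. A quick check, with the table transcribed as a plain dict (the "Displace" outcome label is an assumption, included only to show what the zero means):

```python
# (headcount, outcome) per role, copied from the pathway table above.
pathways = {
    "Junior associate":    (412, "Augment"),
    "Paralegal":           (318, "Augment"),
    "Document specialist": (187, "Reskill"),
    "Contract reviewer":   (162, "Augment"),
    "Legal admin":         (98,  "Retain"),
    "Knowledge manager":   (63,  "Expand"),
}

total = sum(n for n, _ in pathways.values())
displaced = sum(n for n, outcome in pathways.values() if outcome == "Displace")
print(f"{total} roles, {displaced} displaced")  # → "1240 roles, 0 displaced"
```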
The Workbench answers five questions in order. Each step is a surface — click any to jump in.
No marketing badges. Every source feeding the Global Labor Graph, with row counts and refresh cadence.
| Source | What we use | Coverage | Refresh |
|---|---|---|---|
| Lightcast | Postings · Profiles · Skills · Firmographics | 1.28B postings (16-yr) · 594M profiles · 28,795 skills · 7.3M companies | Daily |
| O*NET | DWAs · Tasks · KSA ontology | 19,265 DWAs · 1,016 occupations | Per release |
| BLS / OEWS | US workforce + wage data (CPS · OEWS p10–p90) | 396 metros · all SOC codes | Per release |
| ILOSTAT | International workforce employment | 172 countries with employment data | Per release |
| OECD | Sector employment + economic indicators | 38 OECD members | Annual |
| Eloundou et al. 2023 | GPT exposure α / β / γ per occupation | 100% of O*NET occupations | Static (research) |
| WEF Future of Jobs 2025 | Employer-reported skills trajectory | 1,000+ CHROs surveyed | Annual |
| TAG (proprietary depth) | HR telemetry · candidate interactions · placements | 100K enterprise clients · 104M placements/yr · 300M candidate interactions/yr | Continuous |
We work with hyperscalers, AI labs, workforce platforms, and the enterprises deploying both. If this is useful — for your team, your customers, or your platform — talk to us.