Moschetti Consulting Field Notes · №03
Field Notes on Workflow & AI

The process specification AI actually needs.

The first two field notes catalogued what goes wrong. This one asks the harder question: what would a process description have to look like for AI to succeed? The obvious wrong answer is a better document.


Process documentation is the comfortable answer to the question this field note poses — and it is the wrong answer. Every organization we have worked with has process documentation. Some of it is quite good. None of it is adequate as input to an AI integration, and no amount of careful rewriting can make it adequate, because the failure is not of diligence but of category. What AI needs is not a document. It is a specification — an artifact with the rigor of a schema and the executability of a contract, from which implementation, testing, audit, and operational monitoring can all be derived. This is a different thing from a better PowerPoint.

Most process documents begin in the middle, describing steps without establishing what the process is for. Most are aspirational rather than operational — they describe the process as leadership wants to think about it, not as it actually runs. Most assume, without saying so, that a population of knowledgeable humans will fill the gaps the document leaves. Remove those humans and the document falls apart. That is not a flaw; it is the design. It is also the reason AI integrations fail when given this documentation as input.

What follows is our working theory of the artifact that replaces it. We call it a PRISM — a Process Resource and Interaction Specification. It is not a longer document. It is a different kind of thing.

Part One · Why legacy documentation works — until the gap fillers leave.

Every process that appears to run today runs because a specific kind of person is embedded in it. We have a clumsy name for these people, but we will use it once and then retire it.

Good Actors With Additional Critical Knowledge — hereafter, gap fillers — are the people whose institutional knowledge closes the gaps the process documentation leaves. They know that the "approved vendor" flag is unreliable on records created before the 2019 migration. They know that the quarter-end supplemental run needs a second look because of what happened in Q3 2021. They know which exceptions to escalate and which to absorb.

The documentation is not wrong. It is incomplete in exactly the places the gap fillers know to fill.

This is a stable, well-functioning arrangement, and it has scaled organizations for decades. The documentation gets you most of the way there; the gap fillers take over. Leadership does not need to notice the handoff because it is seamless from above. Auditors do not need to probe it because the process produces results that reconcile. Training new hires is slow, because a lot of what they need to learn is not written down — but that, too, is stable.

AI integrations break this arrangement in a very specific way. The premise of the integration is almost always that the AI replaces some portion of the process — the mechanical execution, the routing and triage, increasingly the decisioning itself. What rarely gets said out loud is that the AI also displaces the gap fillers, or at least displaces the path through which the gap fillers were doing their work. The documentation is what gets handed to the integration team. The gap filling is not, because nobody has ever written it down.

The integration proceeds. The model is trained against the documented process. The implementation runs against the documented inputs. And somewhere in month four of production, the organization discovers, one incident at a time, every place where a gap filler used to be.

This is the setup. The rest of this field note is about the artifact that is required when there are no gap fillers left.

Part Two · Three properties a document doesn't have, and a specification must.

The difference between a document and a specification is not length, or care, or formatting. It is a set of structural properties that the specification possesses and the document does not. Three matter most.

Rigor of reference.

In a specification, every actor, artifact, event, and relationship is named and defined once — and that definition is the authoritative one, used consistently wherever the term appears. This sounds pedantic until you look closely at a real process document and find that it cannot survive the examination.

Consider a term from our earlier field note's example: approved vendor. A document uses the phrase as though its meaning is obvious. A specification asks: approved by whom? Approved for what? Approved in which system? Is "approved" a one-time state or does it expire? What causes approval to lapse? Is a vendor approved for procurement necessarily approved for payment? Is a vendor approved in one legal entity approved in its affiliates?

In most organizations, different teams answer these questions differently — and because the document never forced the question, nobody has noticed. Procurement treats "approved" as meaning eligible to quote. Accounts Payable treats it as meaning eligible to be paid. Compliance treats it as meaning screened against sanctions lists as of the last refresh. Each team is right in its own language. The process runs because gap fillers translate between the languages in real time.

A specification removes that translation by committing to a single meaning and enforcing it. This is the discipline several previous waves of computing tried and failed to build into business systems, and they failed for an instructive reason: the technology was usually fine, but the business meanings underneath it were too loose. A relationship like "has" was asked to carry the weight of "owns," "controls," "possesses," and "is responsible for" all at once. When the business meanings are that ambiguous, no formalism on top can rescue them.

The lesson, now available again under a new banner: AI is very quick to reveal business-definition inconsistency, because AI cannot rely on gap fillers to translate at runtime. What the gap fillers were absorbing, the specification must define.

Closure under interrogation.

A specification has been subjected to — and has survived — an adversarial reader whose job is to find what is unanswered, ambiguous, or contradictory. In a well-run modern workflow, that adversarial reader is partly human and increasingly partly AI, and AI is astonishingly effective in the role.

Ask a capable model "what is unanswered or contradictory in this specification?" and, if the model is any good, it will be relentless. It will find the place where approved is used two different ways in adjacent paragraphs. It will find the handoff rule that presumes a field the entity definition does not include. It will find the exception path that references an actor not named anywhere else in the document. Each of these findings is an ambiguity that a gap filler has been silently resolving. Each is a place where AI, deployed naively, will fail.

The discipline is not "write the specification carefully." It is "write a draft, then interrogate it, then rewrite, then interrogate again, until the interrogator has nothing left to say." The output is not the draft. The output is the draft plus the interrogation history plus the resolutions. That is what a specification is, operationally — not a prettier document, but one that has been passed through this process and survived it.

Executability by derivation.

The third property is the one that turns the specification from a better description into a different artifact entirely. A proper specification is not a thing the code sits beside. It is the thing the code is derived from — and alongside it, the test harness, the monitoring signals, and the audit evidence.

This is exactly the shift the software industry has been quietly undergoing. Increasingly, the intellectual property of a piece of software is no longer the source code it compiles to. It is the specification — the prompt, the spec, the structured intent — that the code and its tests were generated from. The code is a projection of the spec. Regenerate the spec and you regenerate the code.

The same shift is available for business processes. A specification rigorous enough to generate the implementation is also rigorous enough to generate the test harness that proves the implementation matches it, the monitoring rules that flag when operation diverges from it, and the audit evidence that traces any given action back to a stated business rule. All of these are derivations of a single source, rather than parallel artifacts that have to be kept in sync by humans.
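The single-source pattern can be sketched in miniature. In the hypothetical sketch below, one rule is encoded once as data, and both the runtime check and the test that verifies it read from that one definition; every name here is illustrative, not part of any real PRISM tooling:

```python
# Hypothetical sketch: one canonical rule drives both the runtime check
# and the test harness, so the two cannot drift apart.
from datetime import datetime, timezone

# The single canonical rule, as it might be extracted from a specification.
RULE = {
    "id": "screening-must-be-current",
    "description": "A vendor may not advance while screening has lapsed.",
    "check": lambda vendor, now: now <= vendor["screening_valid_until"],
}

def may_advance(vendor: dict, now: datetime) -> bool:
    """Runtime enforcement: a projection of RULE, not a parallel artifact."""
    return RULE["check"](vendor, now)

def test_rule():
    """Test harness: derived from the same RULE as the implementation."""
    valid_until = datetime(2025, 6, 30, tzinfo=timezone.utc)
    vendor = {"screening_valid_until": valid_until}
    assert may_advance(vendor, datetime(2025, 6, 1, tzinfo=timezone.utc))
    assert not may_advance(vendor, datetime(2025, 7, 1, tzinfo=timezone.utc))

test_rule()
```

Change `RULE` and both the behavior and the test change together; that is the whole point of derivation over parallel authorship.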

Part Three · The specification as superprompt.

The practical frame that has emerged from this work is that the specification functions as a superprompt — a single, authoritative source from which narrower prompts are carved out for specific purposes. One narrower prompt drives implementation. Another drives the test harness. Another drives the monitoring. Another drives the audit trail, or the operational analytics that look for anomalies in the running system — a daily check for distribution drift in an order-type field, for instance, or for unexpected changes in exception rates by region.
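A derived monitoring rule of the distribution-drift kind can be sketched as follows. The field name, sample data, and threshold are illustrative assumptions, not part of any real system:

```python
# Hypothetical sketch of one derived monitoring rule: a daily check for
# distribution drift in a categorical order-type field.
from collections import Counter

def distribution(values):
    """Relative frequency of each category."""
    counts = Counter(values)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def drift(baseline, today, threshold=0.15):
    """Total-variation distance between two categorical distributions.
    Returns True when today's mix has moved more than `threshold`."""
    p, q = distribution(baseline), distribution(today)
    keys = set(p) | set(q)
    tv = 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)
    return tv > threshold

baseline = ["standard"] * 80 + ["rush"] * 15 + ["backorder"] * 5
today = ["standard"] * 50 + ["rush"] * 45 + ["backorder"] * 5
# The rush share jumped from 15% to 45%, so this check flags it.
```

In the superprompt frame, the interesting part is not the arithmetic but the provenance: the field, the cadence, and the threshold are all read out of the specification rather than invented by an operations engineer after the fact.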

This is not workflow automation. Workflow automation takes an existing process and mechanizes it. The superprompt approach takes the business's actual intent and derives the process — along with everything downstream of it — from a single canonical description of that intent. The two can look superficially similar in their outputs, but the second is a vastly different posture, because the canonical artifact is the specification, not the running code.

The implication for how consequential changes are made is substantial. In the traditional setting, changing a business rule means changing the document, changing the code, updating the tests, adjusting the monitoring, briefing the compliance team, and hoping all of them agree afterward. In the superprompt setting, changing a business rule means changing the specification — precisely once, in one place — and regenerating everything that derives from it. Drift between layers becomes much harder, because drift requires the layers to have been written separately in the first place.

Part Four · A concrete scenario: the auditor walks in.

The hardest part of this argument to make in abstract terms lands cleanly in one specific scenario, because it is a scenario every executive reading this has lived through.

Legacy: the auditor asks

An auditor arrives and asks to understand system X. The owner provides a PowerPoint. The deck may be up to date or may not. Even if it is up to date, it drives nothing — it is a description, not a control. The auditor goes to the owner of component C1 and asks for details. Narrative is provided. Spreadsheets are provided. Perhaps more decks. None of it drives anything.

The only thing authoritative is the running code, because the running code is the only artifact with consequences. The documentation is a polite fiction negotiated between the audit team and the implementation team — a translation layer both sides understand is incomplete and that neither side has the standing to challenge.

Specification-first: the auditor asks

The auditor arrives and asks the same question. The answer is a specification — the canonical description of the process. The implementation is a derivation of the specification. The tests are a derivation of the specification. The monitoring signals are a derivation of the specification.

For any action the system has taken, the auditor can ask why, and the answer traces back through the generated code, through the test that validated the rule, to the clause of the specification that expressed the business requirement. The documentation and the behavior are the same artifact, viewed from different angles. There is no translation layer to negotiate because there are no longer two layers.

This is a change in what the audit is, not merely in how it is performed. In the legacy pattern, the audit is an attestation constructed after the fact, from artifacts that were never designed to support it. In the specification-first pattern, the audit is a property the system exposes by construction. The same collapse happens, incidentally, for regulatory evidence, for change-management approvals, and for the quarterly certifications that consume so much of a controller's calendar.

Part Five · What the specification must be rigorous about.

A specification is distinguished from a document not by what it contains but by how unambiguously it contains it. The following dimensions are the ones we find must, without exception, be rigorously specified for the result to be usable. A description that leaves any of these ambiguous is not yet a specification.

Actors

Every party that initiates, approves, handles, or is notified of anything. Defined once, scoped precisely — role or person, human or system, internal or external — and consistent across every clause that references them.

Artifacts

Every business object the process touches. The vendor, the invoice, the approval, the exception ticket — each defined with its authoritative source, its lifecycle states, and the semantics of every attribute that gates a downstream decision. "Approved vendor" is not an actor; it is an artifact attribute, and it must mean one thing.

Temporal rules

Every cutoff, cadence, deadline, and timezone convention. The special cases — quarter-end, year-end, holidays, daylight-saving transitions — named explicitly rather than left as lore.

Decision authority

Every decision point, the actor authorized to make it, the criteria applied, and — the clause AI forces into existence — whether the decision may be taken autonomously, must be proposed and confirmed, or must always be made by a named human.

Exception handling

Every class of exception the process can encounter, how it is detected, what the resolution path is, and whether the path is automated, routed to a queue for human handling, or escalated. A manual queue is a perfectly acceptable answer. "It depends on who notices" is not.

Accountability

The points in the process where a human bears legal, regulatory, or fiduciary accountability, and what specifically that human is attesting to. If AI is deployed in or around these points, the accountability does not move; the evidentiary requirements around the AI-assisted steps tighten.

Notice what is not on this list. Process diagrams. Organizational charts. Training curricula. Technology inventories. These have their place, but they are not what the specification is for. The specification is for closing the gaps the gap fillers used to close, and the dimensions above are where those gaps live.

Notice also what happens to ambiguity once a specification is pressed through the interrogation we described earlier. Ambiguity does not survive. It surfaces as a finding, and the organization has to decide, explicitly, what the term means. This is uncomfortable work. It is also, in our experience, the most durable work product an integration project produces — because the resolved ambiguities are the new shared understanding that everyone downstream, human or machine, will operate against.
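One of these dimensions, decision authority, is concrete enough to sketch as data. A minimal illustration, with hypothetical names throughout, of the autonomy clause that AI forces into existence:

```python
# Hypothetical sketch: encoding decision authority and autonomy as data.
# Enum values and decision ids are illustrative, not a standard.
from enum import Enum

class Autonomy(Enum):
    AUTONOMOUS = "autonomous"                          # AI may decide alone
    PROPOSED_AND_CONFIRMED = "proposed_and_confirmed"  # AI proposes, human confirms
    NEVER_AUTONOMOUS = "never_autonomous"              # a named human decides

DECISIONS = {
    "sanctions_screening": {"authority": "compliance_officer",
                            "autonomy": Autonomy.NEVER_AUTONOMOUS},
    "first_payment": {"authority": "ap_supervisor",
                      "autonomy": Autonomy.PROPOSED_AND_CONFIRMED},
}

def may_act_alone(decision_id: str) -> bool:
    """True only when the specification grants full autonomy."""
    return DECISIONS[decision_id]["autonomy"] is Autonomy.AUTONOMOUS
```

The value of the encoding is that "may the AI take this decision?" becomes a lookup rather than a judgment call made at 2 a.m. by whoever is on call.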

Part Six · A small specimen, and how the toolchain closes it.

Everything above is conceptual. Reasonable readers may wonder what the artifact actually looks like and how the discipline of interrogation and simulation operates in practice. We will not show a complete specification — those are necessarily long — but a small specimen for a tightly scoped business rule, paired with a sketch of the methodology, is enough to make the abstraction real.

The format is structured Markdown. A specification is not a config file and should not look like one; it is read by humans and by machines, authored by business owners and reviewed by technologists, rendered into PDFs for auditors and into prompts for AI tooling. Markdown serves all of these without surrendering to any of them.

The methodology, in one diagram.

A specification draft is authored, then passed through two AI-assisted methodology stages. The first, ClosureCompletion, probes the draft for ambiguity, contradiction, undefined references, and missing closure. The second, Simulator, exercises the closed specification against deterministic and adversarial scenarios to surface behaviors the author may not have anticipated. Either stage can return the specification to the author for revision; the loop terminates when both stages report high confidence.

[Figure: the author drafts the PRISM specification (structured markdown), which flows into ClosureCompletion (ambiguity, contradiction, undefined references; low confidence → revise), then into the Simulator (deterministic tests, adversarial scenarios; low confidence → revise). Only high confidence from both stages yields the settled PRISM.]
Fig. 1 — The methodology loop. The specification emerges only when both AI-assisted stages return high confidence.

The interior workings of ClosureCompletion and Simulator are uninteresting compared to what they produce. Both are AI-assisted stages built around carefully curated prompts; both produce actionable, structured outputs that the author can resolve. What matters is the loop, and the property the loop guarantees: a specification reaches the right edge of the diagram only after it has survived adversarial closure and adversarial simulation.
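The loop's control flow can be sketched in a few lines. In this hypothetical sketch the two stages are stubbed as ordinary functions returning a confidence flag and a list of findings; in practice both would be AI-assisted stages behind curated prompts:

```python
# Hypothetical sketch of the methodology loop. Stage and field names
# are illustrative stand-ins, not real PRISM tooling.

def run_loop(draft, closure_stage, simulator_stage, revise, max_rounds=10):
    """Iterate until both stages report high confidence, or give up."""
    spec = draft
    for _ in range(max_rounds):
        ok_closure, findings = closure_stage(spec)
        if not ok_closure:
            spec = revise(spec, findings)  # author resolves closure findings
            continue
        ok_sim, findings = simulator_stage(spec)
        if not ok_sim:
            spec = revise(spec, findings)  # author resolves scenario failures
            continue
        return spec                        # settled PRISM
    raise RuntimeError("specification did not converge")

# Toy stand-ins: a spec "closes" once its open findings are resolved.
def closure(spec):
    return (not spec["open_findings"], spec["open_findings"])

def simulate(spec):
    return (spec["scenarios_pass"], ["scenario failures"])

def revise(spec, findings):
    return {"open_findings": [], "scenarios_pass": True}

settled = run_loop(
    {"open_findings": ["ambiguous 'approved'"], "scenarios_pass": False},
    closure, simulate, revise,
)
```

The guarantee is structural: nothing reaches `settled` without passing both gates, which is exactly the property Fig. 1 depicts.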

The specimen.

What follows is a small fragment of a specification — not a whole process, but one bounded business rule about how a vendor moves between approval states in the procurement lifecycle. The specimen is shorter than a real specification by an order of magnitude. It is included to make the format tangible, not to represent completeness.

Specimen — PRISM fragment, structured markdown
---
spec_id: vendor-approval-states
version: 0.3
owner: head_of_procurement
scope: vendor lifecycle, north america
---

# Vendor Approval State Transitions

## Purpose

To define the authoritative states a 'vendor' may occupy with
respect to procurement and payment eligibility, and the rules
governing transitions between them.

## Actors

### procurement_lead

Named role; one per business unit; human.

### compliance_officer

Named role; enterprise-wide; human.

### ap_supervisor

Named role; one per paying entity; human.

## Artifacts

### vendor

A counterparty registered in the 'vendor_master_system'.

## States

### vendor.approval_state

One of: `prospective`, `screened`, `approved_to_quote`,
`approved_to_pay`, `suspended`, `retired`.

## State Transitions

### prospective → screened

- Trigger: 'compliance_officer' completes sanctions screening.
- Effect: 'vendor.screened_on' set to current UTC;
  'vendor.screening_valid_until' set to 'vendor.screened_on'
  + P365D.
- Authority: 'compliance_officer'. 'not_delegable'.
  'never_autonomous'.

### screened → approved_to_quote

- Trigger: 'procurement_lead' reviews 'vendor' and confirms
  procurement eligibility.
- Precondition: current UTC <= 'vendor.screening_valid_until'.
- Authority: 'procurement_lead'. 'delegable_below_role'.

### approved_to_quote → approved_to_pay

- Trigger: first payment instruction issued against this 'vendor'.
- Precondition: 'vendor' has at least one active purchase order
  AND current UTC <= 'vendor.screening_valid_until'.
- Authority: 'ap_supervisor'. 'autonomous' if the precondition
  is met without exception; otherwise 'proposed_and_confirmed'.

### any → suspended

- Trigger: any of 'compliance_officer', 'procurement_lead', or
  'ap_supervisor' raises a suspension request with a reason code.
- Authority: 'compliance_officer' for compliance reasons;
  'procurement_lead' for procurement reasons; 'ap_supervisor'
  for payment-integrity reasons. 'never_autonomous'.

## Temporal Rules

- All timestamps stored as UTC.
- 'vendor.screening_valid_until' lapse causes automatic transition
  to `suspended` at the daily 02:00 UTC sweep, with reason code
  `screening_expired`.

## Exceptions

- A payment instruction issued against a 'vendor' whose
  'vendor.approval_state' is not `approved_to_pay` is rejected.
  Routed to 'ap_supervisor' exception queue with reason code
  `vendor_not_payable`.
- A purchase order issued against a 'vendor' whose
  'vendor.approval_state' is not `approved_to_quote` is rejected.
  Routed to 'procurement_lead' exception queue with reason code
  `vendor_not_quotable`.

What ClosureCompletion finds.

The draft above is plausible enough to pass casual reading. Run it through ClosureCompletion and the picture changes. A representative subset of findings on this fragment:

ClosureCompletion findings · vendor-approval-states v0.3
  1. Undefined attribute reference. §State Transitions references active purchase order, but purchase_order is not declared as an artifact and the predicate active is not defined. Resolve: declare the artifact and define the lifecycle states that constitute "active."
  2. Ambiguous semantic. The transition any → suspended permits three different actors to initiate, but the resulting suspended state is single-valued. Downstream consumers cannot distinguish a compliance suspension from a payment-integrity suspension. Resolve: either subdivide the state, or require suspension_reason_class as a required attribute on transition.
  3. Missing inverse transition. suspended → * is undefined. The specification permits a vendor to enter suspended but provides no path out. Resolve: define the transition(s) out of suspension, the authority required, and any preconditions on re-entry.
  4. Temporal contradiction. §Temporal Rules states that screening_valid_until lapse causes automatic transition to suspended; §State Transitions states that the any → suspended transition is "never autonomous." These cannot both be true. Resolve.
  5. Authority gap. The transition approved_to_quote → approved_to_pay may be autonomous "if the precondition is met without exception." The specification does not define what constitutes "exception" in this context. Resolve: enumerate the conditions under which this transition must be confirmed rather than autonomous, or remove the qualifier.
  6. Accountability anchor missing. No clause identifies which named role bears periodic attestation that the population of approved_to_pay vendors is correctly maintained. Resolve: name the attesting role and the cadence.

None of the findings is exotic. Each is the kind of ambiguity a diligent gap filler resolves silently every day. Each is also, individually, capable of producing a class of production incident if encoded into AI-driven implementation as written. The author resolves the findings, re-submits, and iterates until ClosureCompletion reports high confidence — at which point the specification proceeds to the Simulator stage, which exercises it against scenarios both expected and adversarial. Findings from the Simulator are returned in the same form, and the loop continues until both stages clear.

The settled specification, on emerging from the loop, is what the implementation team consumes. The same artifact also feeds the test harness and the monitoring rules that are derived from it. From this point onward, the specification is the canonical reference; every artifact downstream of it is a projection.

Why this is less work, not more.

A PRISM looks, at first encounter, like more work than a process document. The interrogation is unfamiliar. The insistence on rigorous definitions is tedious. The whole apparatus reads as heavier than what most organizations are used to producing. This reaction is, we find, almost universal and almost always wrong.

Measured across the full lifecycle — the business analysts who interpret requirements, the developers who implement them, the testers who verify the implementation, the operations team who runs the result, the compliance team who reviews it, the auditors who attest to it — a specification-first process consumes substantially less labor than the alternative, because it eliminates the translation and reconciliation work that otherwise happens at every handoff. Drift between the document and the code does not have to be managed, because there is no drift. Audit evidence does not have to be assembled, because the system produces it by construction. Changes do not have to be propagated, because regeneration is the propagation.

What changes is where the work lives. It moves forward in the lifecycle, to the specification itself, and it moves into a form that can be interrogated, simulated, and regenerated rather than merely negotiated. For organizations that have spent the last three decades watching software, data, and process assets drift apart from their documentation, this is not a small change. It is the foundation on which AI integration can safely rest — and, arguably, the foundation on which any modern business process should rest, AI or no AI.

This is the artifact we help organizations produce. It is not the final deliverable; it is the stake in the ground from which the deliverable derives. If the first two field notes in this series explained what goes wrong and why, this one is about what stops going wrong, once the description of the process becomes the thing the process actually runs on.

— Moschetti Consulting

If your organization is contemplating AI integration, or simply tired of the drift between documentation and reality, we'd welcome the conversation.

inquiries@moschetticonsulting.com