AI Governance
Stewardship
A steward does not execute the system.
A steward decides what the system is allowed to do,
when it must stop,
and who is accountable.
Stewardship defines the limits of automation.
Runtime Governance
As AI systems gain autonomy, governance can no longer live outside the system.
It must be enforced at runtime — at the moment a decision is made.
Why Existing Approaches Fall Short
Most AI governance today operates around systems, not within them.
- Policies describe intent, but don't enforce behavior
- Evaluations score outputs, but don't stop actions
- “Human-in-the-loop” collapses at scale
- LLM-as-judge introduces opacity where certainty is required
When automation moves faster than accountability, trust collapses.
Governance Reframed
Governance is not intelligence.
Governance is constraint, escalation, and ownership.
True AI governance answers three questions — deterministically:
1. Should this proceed?
2. Should a human intervene?
3. Should automation stop — now?
Steward
Steward is not a guardrail framework or evaluation tool.
It operates at a different layer: runtime governance.
Where guardrails validate outputs and policies define responsibility, Steward enforces responsibility inside the execution path itself.
Steward enforces human-authored contracts at runtime using:
- Parallel evaluation lenses
- Deterministic synthesis
- Evidence-backed outcomes
- Explicit accountable humans
No scoring.
No probabilistic judgment.
No hidden discretion.
LLMs assist evaluation. Policy decides outcomes.
PROCEED / ESCALATE / BLOCKED
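The deterministic synthesis step can be sketched as follows. This is a minimal illustration, not Steward's actual API: the `Outcome` and `synthesize` names are hypothetical, and the real synthesizer lives in the open-source repository.

```python
from enum import IntEnum

class Outcome(IntEnum):
    """Dominance order: the higher value wins. BLOCKED > ESCALATE > PROCEED."""
    PROCEED = 0
    ESCALATE = 1
    BLOCKED = 2

def synthesize(lens_verdicts: list[Outcome]) -> Outcome:
    """Deterministic synthesis: the most restrictive lens verdict wins.

    No scoring, no weighting, no tie-breaking heuristics. A single
    BLOCKED dominates any number of PROCEED verdicts. With no lens
    verdicts at all, the fail-safe is to escalate to a human.
    """
    return max(lens_verdicts, default=Outcome.ESCALATE)

# One blocking lens overrides every permissive one.
verdicts = [Outcome.PROCEED, Outcome.PROCEED, Outcome.BLOCKED]
assert synthesize(verdicts) is Outcome.BLOCKED
```

Because synthesis is a pure function of the lens verdicts, the same inputs always produce the same outcome — the "no probabilistic judgment" property above falls out of the design rather than being bolted on.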
Governance Guarantees
Steward formalizes governance guarantees that most AI systems leave implicit:
- Accountability as data: Every contract requires an explicit accountable_human. Responsibility is enforced, not implied.
- Uncertainty as a governance signal: Low confidence does not guess. It deterministically escalates to a human.
- Governance calculus, not heuristics: Outcomes follow a strict dominance order: BLOCKED > ESCALATE > PROCEED — non-configurable, by design.
- Evidence as an invariant: A BLOCKED decision without cited evidence is invalid. Enforcement requires justification.
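Two of these guarantees — accountability as data and evidence as an invariant — can be expressed as constructor-time checks. The sketch below is illustrative only; the `Contract` and `Decision` shapes are assumptions, not Steward's published schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """A human-authored governance contract (hypothetical shape)."""
    name: str
    accountable_human: str  # responsibility is data, not convention

    def __post_init__(self) -> None:
        # Accountability as data: a contract with no owner is invalid.
        if not self.accountable_human:
            raise ValueError(f"contract {self.name!r} has no accountable_human")

@dataclass(frozen=True)
class Decision:
    """A governance outcome with its supporting evidence (hypothetical shape)."""
    outcome: str                     # "PROCEED" | "ESCALATE" | "BLOCKED"
    evidence: tuple[str, ...] = ()

    def __post_init__(self) -> None:
        # Evidence as an invariant: enforcement requires justification.
        if self.outcome == "BLOCKED" and not self.evidence:
            raise ValueError("BLOCKED decision without cited evidence is invalid")
```

Making these checks part of object construction means an unaccountable contract or an unjustified block cannot exist in the system at all, rather than being caught later by review.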
Steward does not replace existing policy engines, guardrails, or review processes. It complements them by enforcing governance guarantees they intentionally leave undefined.
Why This Matters Now
- •Agentic systems are becoming autonomous by default
- •Regulation is shifting toward accountability, not intent
- •Enterprises need enforceable guarantees, not assurances
The question is no longer “Can the system do this?”
It's “Who answers when it does?”
How Steward Differs
Most AI safety tools answer: “Is this output acceptable?”
Steward answers:
“Should this action occur at all, should a human intervene, or must automation stop — now?”
Most AI governance tools operate after an action is proposed.
Steward operates before an action is allowed.
This distinction is architectural, not philosophical.
Related Work
The EU Cyber Resilience Act introduces open-source software stewards as legal entities responsible for governance.
Policy frameworks define who is responsible.
Steward defines how responsibility is enforced at runtime.
- FAccT '25: Stewardship in FOSS Governance (Tridgell & Singh on software stewards under the EU CRA)
- Closing the AI Accountability Gap (Raji et al. on internal algorithmic auditing frameworks)
- Responsible AI Pattern Catalogue (ACM collection of best practices for AI governance)
Steward does not introduce new principles of governance. It makes existing principles enforceable at runtime.
Next Steps
Steward Ownership
The Steward calculus can enforce “steward-ownership” models in autonomous agents — where the system itself cannot proceed without explicit human accountability.
Read the Steward Design →
View the Source
Steward is open source. Explore the contracts, lenses, and synthesizer.
View Steward on GitHub →
Agenisea's Role
Agenisea builds infrastructure for human-centered AI — where governance, accountability, and execution are enforced together, at runtime.
Steward is one artifact of that work. It is published openly so governance primitives can be examined, reused, and challenged in the open.
Grounding
When machines execute, humans must govern.
AI governance is how we make that enforceable.
Stay Connected
We're building governance infrastructure for responsible AI.
Join us on the journey.