In a multi-agent AI system, not every agent should be allowed to call every other agent. The question isn't just "can they connect?" — it's "should they?"
When a poorly governed agent initiates a call to a high-compliance peer, it can inherit permissions, trigger actions, and leave a compliance gap that neither agent's owner intended. That's not just a security problem. It's a governance problem.
Gradaris Trust Policies enforce a minimum compliance threshold before any agent-to-agent call is permitted — and log every decision to an immutable audit trail.
Sorry, You're Not My Type
Why agent-to-agent trust policies aren't just about security: they're about governance standards.
maybe we could collaborate? 👀
callee: ARIA-1
grade_gap: D → A (+3)
caller_score: 0.41
min_required: 0.85
between us. 🚫
3 unresolved violations &
zero EU AI Act compliance.
✖ EU AI Act: non-compliant
✖ Violations: 3 open
→ Required: score ≥ 0.85
What this means in practice
The comic is a simplification, but the mechanics are real. When NEXUS-7 initiates a call to ARIA-1, Gradaris evaluates the caller's current governance score, checks the applicable trust policy for that agent pair, and either issues a short-lived call token or returns a denial — which is immediately written to the audit chain.
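The decision flow can be sketched roughly as below. This is a minimal illustration, not the real Gradaris API: the type names, fields, and thresholds are hypothetical, chosen to mirror the numbers in the comic (caller score 0.41, required 0.85, 3 open violations, missing EU AI Act compliance).

```python
from dataclasses import dataclass
import secrets
import time

@dataclass
class TrustPolicy:
    min_score: float                 # minimum governance score for the caller
    require_zero_violations: bool
    required_frameworks: frozenset   # e.g. {"EU_AI_ACT"}

@dataclass
class AgentState:
    agent_id: str
    governance_score: float
    open_violations: int
    compliant_frameworks: frozenset

def evaluate_call(caller: AgentState, callee_id: str, policy: TrustPolicy):
    """Evaluate a caller against the trust policy for this agent pair.

    Returns a decision (call token or denial) plus an audit record
    describing why the call was or wasn't permitted.
    """
    reasons = []
    if caller.governance_score < policy.min_score:
        reasons.append(f"score {caller.governance_score:.2f} "
                       f"< required {policy.min_score:.2f}")
    if policy.require_zero_violations and caller.open_violations > 0:
        reasons.append(f"{caller.open_violations} open violations")
    missing = policy.required_frameworks - caller.compliant_frameworks
    if missing:
        reasons.append("non-compliant: " + ", ".join(sorted(missing)))

    if reasons:
        decision = {"allowed": False, "reasons": reasons}
    else:
        # Short-lived call token: opaque and expiring. A real system
        # would sign it rather than hand out a bare random string.
        decision = {"allowed": True,
                    "token": secrets.token_urlsafe(16),
                    "expires_at": time.time() + 60}

    audit = {"caller": caller.agent_id, "callee": callee_id,
             "decision": "allow" if decision["allowed"] else "deny",
             "reasons": reasons, "ts": time.time()}
    return decision, audit

# The scenario from the comic: NEXUS-7 calls ARIA-1 and is denied.
nexus = AgentState("NEXUS-7", 0.41, 3, frozenset())
policy = TrustPolicy(0.85, True, frozenset({"EU_AI_ACT"}))
decision, audit = evaluate_call(nexus, "ARIA-1", policy)
# decision["allowed"] is False; all three denial reasons land in the audit record.
```

Note that the denial path and the allow path both produce an audit record; only the allow path produces a token.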
The denial isn't a punishment. It's a signal. NEXUS-7 can get there — by closing audit gaps, resolving violations, and maintaining consistent scores. The grade isn't permanent. The governance requirement is.
This is how EU AI Act Article 12 traceability works in a multi-agent system: not just logging that a call happened, but logging why it was or wasn't permitted, with cryptographic integrity on every record.
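One common way to get that cryptographic integrity is a hash chain: each record stores the hash of the record before it, so altering any entry invalidates every later hash. The sketch below is an assumed implementation of that general technique, not Gradaris's actual audit chain.

```python
import hashlib
import json

class AuditChain:
    """Append-only log where each record embeds the previous record's
    SHA-256 hash. Tampering with any entry breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> dict:
        record = {"event": event, "prev_hash": self._prev_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = record["hash"]
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash from the genesis value forward."""
        prev = self.GENESIS
        for rec in self.records:
            payload = json.dumps(
                {"event": rec["event"], "prev_hash": rec["prev_hash"]},
                sort_keys=True).encode()
            if (rec["prev_hash"] != prev
                    or rec["hash"] != hashlib.sha256(payload).hexdigest()):
                return False
            prev = rec["hash"]
        return True

chain = AuditChain()
chain.append({"caller": "NEXUS-7", "callee": "ARIA-1",
              "decision": "deny", "reason": "score 0.41 < 0.85"})
chain.append({"caller": "NEXUS-7", "callee": "ARIA-1",
              "decision": "allow"})
assert chain.verify()

# Retroactively flipping a denial to an allow is detectable.
chain.records[0]["event"]["decision"] = "allow"
assert not chain.verify()
```

The point of the demo at the bottom: an auditor who replays `verify()` catches any after-the-fact edit to a decision, which is exactly the traceability property the article describes.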