The Platform

Continuous AI governance, built for regulators

Gradaris connects to your AI agents however they were built, scores them against the EU AI Act and your governance policies, and produces cryptographically signed evidence you can hand directly to an auditor.

Explore the Framework · View Documentation
Platform Architecture

Five layers from AI system to regulatory evidence

Gradaris sits between your AI applications and your regulators — registering, scoring, and generating tamper-proof evidence automatically.

AI applications: LLM agents  •  autonomous workflows  •  ML systems  •  external APIs (connected via SDK, API, or webhook)
Gradaris registry: registration  •  metadata  •  ownership tracking  •  model catalog
Governance engine: A–F grading  •  EU AI Act  •  NIST AI RMF  •  ISO 42001
Evidence generation: signed audit reports  •  risk documentation  •  control mapping
Verification layer: QR validation  •  report authenticity  •  public endpoint for regulators
How It Works

From agent to governed — in minutes

Three steps. Works for engineers, power users, and non-technical teams. All three paths produce the same governance record.

STEP 01

Connect your agents

Engineers use the Python SDK. Power users use a webhook. Non-technical teams register through a plain-English form. All three paths produce the same governance record — regardless of which tool built the agent or who built it.

STEP 02

Continuous assessment

Every agent run generates telemetry. Gradaris scores it across 12 criteria in three tiers — Verified Controls, Empirical Benchmarks, and Structured Assessment — mapped directly to EU AI Act articles.

STEP 03

Auditable evidence

Each agent receives a Gradaris Governance Score (A–F), a cryptographically signed evidence package, and a PDF report you can hand directly to a regulator or auditor, on demand.

Integration

Works for every team, not just engineering

The biggest governance gap isn’t the AI systems IT controls — it’s the agents everyone else built. Gradaris has a path for every creator.

For Engineers

Python SDK

Drop into any Python agent in minutes. Zero dependencies. Async telemetry that never slows your agent. Works with LangChain, AutoGen, and custom agents.

  • pip install gradaris-sdk
  • SHA-256 integrity hash on every input and output
  • 12 auto-detected risk signal types
  • Async — zero latency impact
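A minimal sketch of what the SDK's integrity hashing might look like. The real `gradaris-sdk` API will differ; `sha256_hex`, the `governed` decorator, and the in-memory `TELEMETRY` list are illustrative stand-ins for the SDK's async transport:

```python
import hashlib
import json
import time

def sha256_hex(obj):
    """Canonical SHA-256 of any JSON-serialisable value."""
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

TELEMETRY = []  # stand-in for the SDK's async telemetry queue

def governed(agent_fn):
    """Wrap an agent call: hash input and output, record a telemetry event."""
    def wrapper(prompt):
        result = agent_fn(prompt)
        TELEMETRY.append({
            "agent": agent_fn.__name__,
            "ts": time.time(),
            "input_sha256": sha256_hex(prompt),
            "output_sha256": sha256_hex(result),
        })
        return result
    return wrapper

@governed
def claims_agent(prompt):
    return f"Processed: {prompt}"

claims_agent("claim #1042")
```

In the real SDK the telemetry record would be flushed asynchronously, which is why the wrapper adds no latency to the agent call itself.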
For Power Users

Webhook Connector

Built your agent in Make, Zapier, Power Automate, or n8n? Add one HTTP step. No code required. Pre-built blueprints for the most common platforms.

  • Compatible with any platform that can POST JSON
  • Pre-built blueprints for Make and Zapier
  • Same governance data as the SDK
  • Setup in under 5 minutes
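One way to picture the single HTTP step: assemble a small JSON body after each agent run and POST it. The endpoint URL and field names below are assumptions for illustration, not the documented Gradaris webhook schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical endpoint and field names; check the Gradaris webhook docs
# for the real schema before wiring this into Make, Zapier, or n8n.
GRADARIS_WEBHOOK = "https://api.gradaris.example/v1/telemetry"

def build_event(agent_id, run_input, run_output, risk_signals=()):
    """Assemble the JSON body a no-code HTTP step would POST after each run."""
    return {
        "agent_id": agent_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": run_input,
        "output": run_output,
        "risk_signals": list(risk_signals),
    }

event = build_event("GRD-AI-2026-001000", "claim #1042", "approved")
body = json.dumps(event)  # POST this with Content-Type: application/json
```

Any platform that can serialise a dictionary like this and POST it satisfies the "can POST JSON" requirement above.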
For Everyone Else

Register Without Code

Did a finance, marketing, or operations team build an agent in ChatGPT or Copilot? Fill in a plain-English form. Gradaris creates the governance record automatically.

  • No technical knowledge required
  • 5-minute registration form
  • Governance baseline assessment generated automatically
  • Compliance team notified and can review
Agent-to-Agent Trust

Govern the AI systems talking to each other

Most governance tools stop at the boundary of a single AI system. Gradaris goes further — governing the trust relationships between agents, not just the agents themselves.

As AI systems increasingly orchestrate other AI systems, the risk surface expands beyond any single model. Gradaris logs every agent-to-agent call, enforces trust policies, and produces a cryptographic audit trail — fully aligned with EU AI Act Article 12 requirements for automated decision systems.

Trust policies — define which agents can call which, under what conditions, and with what scope
Call chain audit log — every inter-agent call is logged with caller identity, trust level, and outcome
Cryptographic verification — SHA-256 token enforcement ensures calls cannot be spoofed or replayed
EU AI Act Article 12 — automatic logging of all automated decision events across the agent network
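The anti-spoofing and anti-replay properties can be sketched with keyed SHA-256 (HMAC-SHA256 is an assumption here; the text above only specifies SHA-256 token enforcement). Each call is signed with a shared key, and the verifier rejects bad MACs, stale timestamps, and repeated nonces:

```python
import hashlib
import hmac
import time

SHARED_KEY = b"per-agent-secret"   # assumption: provisioned per agent pair
SEEN_NONCES = set()                # replay guard (a bounded store in practice)

def sign_call(caller_id, nonce, ts):
    """Token over caller identity, nonce, and timestamp."""
    msg = f"{caller_id}:{nonce}:{ts}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def verify_call(caller_id, nonce, ts, token, max_age=30):
    """Reject spoofed tokens (bad MAC), stale calls, and replayed nonces."""
    if time.time() - ts > max_age:
        return False
    if nonce in SEEN_NONCES:
        return False
    if not hmac.compare_digest(sign_call(caller_id, nonce, ts), token):
        return False
    SEEN_NONCES.add(nonce)
    return True

now = time.time()
token = sign_call("agent-a", "n1", now)
assert verify_call("agent-a", "n1", now, token)       # first call passes
assert not verify_call("agent-a", "n1", now, token)   # replay blocked
```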
Trust levels

Full trust (verified): unrestricted calling with full scope
Constrained (limited): scoped actions, rate limits enforced
Read only (passive): observe but cannot trigger actions
Denied (blocked): call blocked, incident logged
Every call produces a tamper-proof audit record regardless of trust level — compliant with EU AI Act Art. 12 logging requirements.
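The four trust levels can be sketched as a policy lookup that logs every call regardless of outcome. The policy shape and field names are illustrative, not the Gradaris schema:

```python
# Hypothetical trust policy: which agents may call which, and with what scope.
POLICY = {
    ("agent-a", "agent-b"): {"level": "verified"},
    ("agent-a", "agent-c"): {"level": "limited", "allowed_actions": {"lookup"}},
    ("agent-d", "agent-b"): {"level": "passive"},
}

AUDIT_LOG = []  # every call is recorded regardless of outcome (Art. 12 logging)

def authorize(caller, callee, action):
    rule = POLICY.get((caller, callee), {"level": "blocked"})
    level = rule["level"]
    if level == "verified":
        allowed = True
    elif level == "limited":
        allowed = action in rule.get("allowed_actions", set())
    else:  # passive or blocked: may observe, never trigger actions
        allowed = False
    AUDIT_LOG.append({"caller": caller, "callee": callee,
                      "trust_level": level, "action": action,
                      "outcome": "allowed" if allowed else "blocked"})
    return allowed

assert authorize("agent-a", "agent-b", "write")        # verified: full scope
assert authorize("agent-a", "agent-c", "lookup")       # limited: in scope
assert not authorize("agent-a", "agent-c", "delete")   # limited: out of scope
assert not authorize("agent-x", "agent-b", "lookup")   # unknown pair: blocked
```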
Public Trust Registry

Governance that's publicly verifiable

Every AI agent that passes Gradaris assessment receives a permanent, publicly accessible Trust ID. Regulators, counterparties, and customers can verify governance status at any time — no Gradaris account required.

The Gradaris Public Trust Registry turns internal compliance into external proof. It's the difference between telling a regulator your agents are governed and showing them a live, cryptographically backed record they can verify themselves.

Permanent Trust ID — format GRD-AI-YYYY-NNNNNN, assigned on first certification
Public verification page — at verify.gradaris.com/{id}, showing live grade, score, and certification date
JSON API: GET /api/v1/trust/{id} returns agent_id, status, tier, score, and certified_since
QR code on every certificate — governance reports embed a scannable link to the public record
View the Registry · Live example ↗
GET /api/v1/trust/GRD-AI-2026-001000
200 OK
{
  "agent_id":        "GRD-AI-2026-001000",
  "agent_name":      "Claims Processing Agent",
  "org_name":        "Veriton Insurance",
  "status":          "certified",
  "tier":            "A",
  "score":           91,
  "last_assessed":   "2026-03-12",
  "certified_since": "2026-03-12",
  "registry_url":    "https://verify.gradaris.com/GRD-AI-2026-001000"
}
The same data powers the public verification page at verify.gradaris.com — readable by humans and machines alike.
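Because the record is plain JSON, a counterparty can consume it programmatically. A sketch that parses the sample response above (`is_audit_ready` is a hypothetical client-side check, not part of the Gradaris API):

```python
import json

# The sample registry response from above, verbatim.
raw = """{
  "agent_id": "GRD-AI-2026-001000",
  "agent_name": "Claims Processing Agent",
  "org_name": "Veriton Insurance",
  "status": "certified",
  "tier": "A",
  "score": 91,
  "last_assessed": "2026-03-12",
  "certified_since": "2026-03-12",
  "registry_url": "https://verify.gradaris.com/GRD-AI-2026-001000"
}"""

record = json.loads(raw)

def is_audit_ready(rec, min_score=75):
    """Hypothetical counterparty policy: certified and at least Grade B (75+)."""
    return rec["status"] == "certified" and rec["score"] >= min_score

assert is_audit_ready(record)
```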
Grading Framework

A–F grades backed by a three-tier methodology

Every Gradaris Governance Score comes with a tier breakdown, confidence levels, EU AI Act article mapping, and a cryptographic integrity hash you can defend in front of any auditor.

Tier 1 — Verified Controls

Binary pass/fail checks verified from system logs. High confidence. Any failure caps the maximum score at 59 — forcing a Grade D or lower regardless of other scores.

Tier 2 — Empirical Benchmarks

Statistical tests against versioned, published test suites. Reproducible by any party. Medium-high confidence. Results are stable and verifiable independently.

Tier 3 — Structured Assessment

Fixed, versioned rubric with weighted sub-criteria. Assessor-reviewed with fully auditable process. Medium confidence — the most interpretive tier, fully documented.

See the Framework Deep Dive
Grade Scale
Grade | Score range | Interpretation
A | 90–100 | Exemplary — audit-ready evidence, all Tier 1 controls verified
B | 75–89 | Good standing — minor gaps, no critical control failures
C | 60–74 | Acceptable — identified improvements required for full compliance
D | 45–59 | At risk — Tier 1 control failure or significant gaps present
F | 0–44 | Non-compliant — urgent remediation required before audit
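The grade scale and the Tier 1 cap described above compose into a small piece of logic; a sketch (function names are illustrative):

```python
# Grade boundaries from the scale above, plus the Tier 1 rule: any failed
# Tier 1 control caps the final score at 59 (Grade D at best).
GRADE_BANDS = [(90, "A"), (75, "B"), (60, "C"), (45, "D"), (0, "F")]

def final_grade(raw_score, tier1_failures=0):
    score = min(raw_score, 59) if tier1_failures else raw_score
    for floor, grade in GRADE_BANDS:
        if score >= floor:
            return score, grade

assert final_grade(91) == (91, "A")
assert final_grade(83) == (83, "B")
assert final_grade(91, tier1_failures=1) == (59, "D")  # cap applies
assert final_grade(30, tier1_failures=1) == (30, "F")  # cap cannot rescue an F
```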
Cryptographic Integrity

Every report carries a SHA-256 hash of the assessment methodology, input data, and scoring criteria. If any element changes, the hash changes. Tamper-evident by design.
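The tamper-evidence property follows from hashing a canonical serialisation of the report's components; a sketch (the field names are illustrative, not the Gradaris report schema):

```python
import hashlib
import json

def report_hash(methodology_version, input_data, criteria):
    """SHA-256 over a canonical serialisation: any change flips the hash."""
    canonical = json.dumps(
        {"methodology": methodology_version,
         "inputs": input_data,
         "criteria": criteria},
        sort_keys=True, separators=(",", ":"),
    ).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

h1 = report_hash("v2.1", {"runs": 1042}, ["art9", "art10"])
h2 = report_hash("v2.1", {"runs": 1043}, ["art9", "art10"])  # one input changed
assert h1 != h2                                              # tamper-evident
assert h1 == report_hash("v2.1", {"runs": 1042}, ["art9", "art10"])  # reproducible
```

Sorting keys and fixing separators makes the serialisation canonical, so two honest parties always derive the same hash from the same assessment.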

Regulatory Coverage

Mapped to the frameworks that matter

Gradaris scores are not just internal metrics. Every assessment maps to specific articles in major AI governance frameworks so your evidence is directly usable with regulators.

Primary Framework

EU AI Act

Full mapping to Title III high-risk AI obligations. Articles 9, 10, 12, 13, 14, and 15 each have corresponding Gradaris assessment criteria. Evidence packages reference article numbers directly.

Article 9 · Risk Mgmt  •  Article 10 · Data  •  Article 12 · Logging  •  Article 13 · Transparency  •  Article 14 · Oversight  •  Article 15 · Accuracy & Robustness
Also Mapped

NIST AI RMF, ISO/IEC 42001 & OECD AI Principles

Gradaris scores cross-reference the NIST AI Risk Management Framework, ISO/IEC 42001, and OECD AI Principles so your governance program remains valid as the regulatory landscape evolves beyond the EU.

NIST AI RMF  •  ISO/IEC 42001  •  OECD AI Principles
Enterprise Access & Identity

Secure, centralised identity management

Gradaris integrates with enterprise identity providers via SAML-based Single Sign-On (SSO). Administrators can configure SSO, manage access, and enforce identity controls through the admin portal.

SAML SSO Integration

Connect Okta, Azure AD, Google Workspace, or any SAML 2.0 identity provider.

Role-Based Access Control

Map identity provider groups to Gradaris roles. Control who can view, assess, or administer.

Admin Configuration Portal

Configure SSO, manage users, enforce policies, and view the full SSO audit log from one interface.

SSO Audit Logging

Every login, provisioning event, and configuration change is logged and searchable.

Free Verification

Instant Verification

Generate a Trust ID for any AI system in minutes. Receive a governance score, personalised insights, and a verifiable Trust ID — listed on the public Gradaris Trust Registry.

1. Answer 8 governance questions
Data handling, human oversight, monitoring, documentation, deployment context, risk impact, and evidence — mapped directly to EU AI Act requirements.
2. Receive a real governance grade
Scored on a weighted 100-point model. The grade is resolved from the live Gradaris Trust Grade Mapping — the same methodology used for paid assessments.
3. Get your Trust ID and insights
A real GRD-AI-YYYY-NNNNNN Trust ID is issued, SHA-256 hashed, and listed on verify.gradaris.com, along with your governance gaps and suggested improvements.
Create free Trust ID · View sample →

No credit card  ·  2 minutes  ·  Real governance record

gradaris — trust-output
trust_id   GRD-AI-2026-8F3K2
grade      B · Strong governance baseline
score      83 / 100
issued     2026-03-28T23:14:00Z
hash       a3f8c2...9e1b47
GOVERNANCE GAPS IDENTIFIED
→ Monitoring coverage incomplete
→ Documentation partially complete
→ Evidence artefacts not fully documented
verify.gradaris.com/GRD-AI-2026-8F3K2
2 min · Average completion time
8 · EU AI Act–mapped assessment questions
$0 · No credit card required
Ready to govern your AI estate?

See Gradaris with your own scenario

Book a 30-minute walkthrough and we’ll show you what governance looks like for your specific AI agents — from connection to evidence package.