Support

Frequently Asked Questions

Everything you need to know about AI Trust IDs, governance scoring, certification, and the Gradaris public trust registry.

Core Concepts

An AI Trust ID is a unique identifier assigned to an AI system that links to its verified governance record. It provides a public, tamper-evident reference to that system's grade, certification status, and evaluation history.
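To make the idea concrete, here is an illustrative sketch of the kind of record an AI Trust ID might reference. The field names and values are assumptions for illustration, not the actual Gradaris schema.

```python
# Illustrative sketch of a public trust record keyed by an AI Trust ID.
# Field names and values are hypothetical, not the real Gradaris schema.
from dataclasses import dataclass, field

@dataclass
class TrustRecord:
    trust_id: str          # unique, stable identifier for the AI system
    grade: str             # current published grade, e.g. "B"
    certified: bool        # current certification status
    evaluations: list[str] = field(default_factory=list)  # evaluation history references

record = TrustRecord(trust_id="ATID-0001", grade="B", certified=True,
                     evaluations=["eval-2024-q1"])
```

The identifier stays constant while the grade, status, and evaluation history behind it are updated over time.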

Traditional tools focus on internal policy management and documentation. Gradaris adds an external verification layer — combining structured evaluation, scoring, certification, and a public trust registry. It functions as a system of record for AI governance, not just a workflow tool.

Gradaris is both a certification service and an ongoing assessment platform. It performs structured evaluations that result in certification, and it supports ongoing reassessment to reflect changes in system behavior, controls, or risk posture over time.

Scoring & Evaluation

Grades are calculated using a structured evaluation framework across multiple criteria, including governance controls, transparency, reliability, and operational safeguards. Each criterion contributes to a composite score, with certain high-risk gaps capping the overall grade.
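The weighting-plus-cap logic described above can be sketched as follows. The criterion names, weights, grade bands, and the "cap at C" rule are illustrative assumptions, not the actual Gradaris framework.

```python
# Hypothetical sketch of composite grading with a high-risk cap.
# Criteria, weights, and grade bands are illustrative assumptions.

GRADE_BANDS = [(90, "A"), (75, "B"), (60, "C"), (40, "D"), (0, "F")]

def composite_grade(scores: dict[str, float],
                    weights: dict[str, float],
                    high_risk_gaps: set[str]) -> str:
    """Weighted average of per-criterion scores (0-100); any unresolved
    high-risk gap caps the final grade at 'C' regardless of the average."""
    total_weight = sum(weights.values())
    composite = sum(scores[c] * w for c, w in weights.items()) / total_weight
    grade = next(g for cutoff, g in GRADE_BANDS if composite >= cutoff)
    if high_risk_gaps and grade in ("A", "B"):
        grade = "C"  # cap: a high-risk gap limits the overall grade
    return grade
```

For example, scores of 90 for governance (weight 0.6) and 80 for transparency (weight 0.4) average to 86, a "B"; an unresolved high-risk gap would cap that same system at "C".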

Organizations cannot influence their own scores. They provide inputs and evidence, but scoring is determined independently based on evaluation criteria, and results cannot be directly modified by the organization.

Systems can be re-evaluated on a defined cadence or when material changes occur — such as model updates, policy changes, or new risk signals. Grades are updated whenever a re-evaluation is completed, ensuring the published record reflects the current state of the system.

Certification Lifecycle

If a system no longer meets required standards, its grade and certification status may be updated, downgraded, or revoked. The public record always reflects the most current evaluation.

Revocation may occur due to significant control failures, loss of required safeguards, material risk exposure, or failure to maintain evaluation standards over time.

Transparency & Public Registry

The public registry displays summarized results such as grade, status, and key attributes. Detailed internal evidence, sensitive configurations, and proprietary information are not exposed.

Gradaris does not assess business performance, financial outcomes, or non-AI operational processes. Its focus is on governance, risk, and trust characteristics of AI systems.

Data, Security & Integration

Gradaris is designed to minimize sensitive data exposure. Only required evaluation inputs and metadata are processed, and organizations maintain control over what is submitted.

Yes. Organizations can provide evidence and inputs without exposing sensitive internal details publicly; public outputs are limited to verification-relevant information.

Gradaris can integrate through APIs and structured data inputs, enabling connection with existing governance, monitoring, and operational systems.
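As a minimal sketch of what such an integration might look like, the snippet below pulls a public registry entry into an internal monitoring check. The endpoint, base URL, and response fields are hypothetical; the actual API details would come from Gradaris documentation.

```python
# Hypothetical integration sketch: fetch a public trust record and flag
# systems needing review. Endpoint and field names are illustrative only.
import json
import urllib.request

def fetch_trust_record(trust_id: str,
                       base_url: str = "https://api.example.com") -> dict:
    """Fetch the public registry entry for an AI Trust ID (illustrative URL)."""
    url = f"{base_url}/registry/{trust_id}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def needs_review(record: dict) -> bool:
    """Flag systems whose certification is no longer in good standing."""
    return record.get("status") not in ("certified", "provisional")
```

A monitoring pipeline could call `needs_review` on each registered system's record and alert when a grade is downgraded or a certification is revoked.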

Gradaris aligns with industry-standard security practices and is designed to support frameworks such as SOC 2, NIST, and ISO-based controls. Security architecture emphasizes data protection, access control, and auditability.

Still have questions?

Talk to us directly

Book a 30-minute walkthrough and we'll answer any questions about how Gradaris fits your specific AI governance requirements.