Trust Statement · 04

Responsible AI.

The EU AI Act is the floor, not the ceiling. We classify every system, document every model, gate every consequential decision through a named human, and report serious incidents to the regulator within statutory windows.

Status: Active · Effective: 15 May 2026 · Framework: Reg. (EU) 2024/1689 · Review: Quarterly, external
01 / THESIS

The AI Act is the floor, not the ceiling.

Regulation (EU) 2024/1689 — the Artificial Intelligence Act — defines a baseline for trustworthy AI in the European market. Ophanix builds against that baseline, and then past it. This page is the public summary of our responsible-AI programme. The implementation detail lives in our internal Risk Management System and is shared with customers under engagement.

02 / PROHIBITED USE

What we will not build.

We do not build, sell, or operate any of the following:

  • Subliminal, manipulative, or deceptive techniques to materially distort behaviour.
  • Exploitation of vulnerabilities of specific groups (age, disability, socio-economic status).
  • Social scoring of individuals by public authorities or by us on behalf of such authorities.
  • Predictive policing of individuals based solely on profiling or assessment of personality traits.
  • Untargeted scraping of facial images for facial-recognition databases.
  • Emotion-recognition systems in workplace or educational settings.
  • Biometric categorisation to infer sensitive personal characteristics.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement, outside the narrow exemptions under ART. 5.

This is not a marketing position. It is contractual, and any breach is grounds for immediate engagement termination at our discretion.

03 / RISK CLASSIFICATION

Every system, classified.

Each Ophanix-built AI system is classified per Annex III of the AI Act and per our internal taxonomy. Classification triggers controls — not policy theatre; an illustrative sketch of that mapping follows the list below. High-risk systems require:

  • Risk management system — documented, reviewed at each release.
  • Data governance — provenance, bias evaluation, statistical relevance documented for training, validation, and test sets.
  • Technical documentation — model card maintained per ART. 11 / Annex IV.
  • Record-keeping — automated decision logs retained per regulator schedule (minimum 6 months, typically 7 years).
  • Transparency — instructions for use that match what we actually built and how operators are supposed to use it.
  • Human oversight — see section 04.
  • Accuracy, robustness, cybersecurity — measured, monitored, and regression-tested on every release.
  • Conformity assessment — completed and registered in the EU database before market placement.
  • Post-market monitoring — drift detection, performance trending, serious-incident reporting.
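
To make "classification triggers controls" concrete, here is a minimal sketch of how a risk tier can mechanically gate a release. All names (RiskTier, REQUIRED_CONTROLS, release_gate) are hypothetical; the production logic lives in our internal Risk Management System.

```python
# Minimal illustration, not the production Risk Management System.
# All names here are hypothetical; the point is that the assigned risk tier
# mechanically determines which controls must be evidenced before release.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = 1    # never built, sold, or operated (see section 02)
    HIGH_RISK = 2
    LIMITED_RISK = 3
    MINIMAL_RISK = 4

# High-risk controls mirror the list above.
REQUIRED_CONTROLS: dict[RiskTier, set[str]] = {
    RiskTier.HIGH_RISK: {
        "risk_management_review",
        "data_governance_report",
        "technical_documentation",            # model card per ART. 11 / Annex IV
        "decision_record_keeping",
        "instructions_for_use",
        "human_oversight_plan",
        "accuracy_robustness_security_tests",
        "conformity_assessment",
        "post_market_monitoring_plan",
    },
    RiskTier.LIMITED_RISK: {"transparency_disclosure"},
    RiskTier.MINIMAL_RISK: set(),
}

def release_gate(tier: RiskTier, evidence: set[str]) -> list[str]:
    """Return the controls still missing; an empty list means the release may proceed."""
    if tier is RiskTier.PROHIBITED:
        raise ValueError("Prohibited systems are never released.")
    return sorted(REQUIRED_CONTROLS[tier] - evidence)
```

A release pipeline built on something like this fails closed: a non-empty return blocks market placement until the missing evidence exists.
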
EU AI ACT · RISK CLASSIFICATION

  • TIER 01 · Prohibited
  • TIER 02 · High-Risk
  • TIER 03 · Limited Risk
  • TIER 04 · Minimal Risk

TIER 02 · HIGH-RISK

Strictly controlled — most of what we build.

AI in critical infrastructure, employment, essential services, law enforcement. Requires conformity assessment, human oversight, transparency, and post-market monitoring.

  • Biometric identification systems
  • Critical-infrastructure decisioning
  • Employment / candidate screening
  • Credit-worthiness assessment
  • Law-enforcement risk profiling
04 / HUMAN OVERSIGHT

Decisions that matter are reviewed by a named human.

Every consequential decision — defined per system per customer, but always including takedowns, escalations, financial alerts, and counter-narrative deployment — routes through a Human-in-the-Loop reviewer with documented competence and named accountability. "The AI did it" is not an attribution we accept.

Operators can pause, override, and reverse any automated decision. Reversal feeds a structured retraining-feedback loop and an incident review when the reversal rate crosses defined thresholds.
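
A minimal sketch of that reversal-threshold mechanism follows. The names and the 5% default threshold are assumptions for illustration; actual thresholds are defined per system and per customer.

```python
# Illustrative sketch: hypothetical names, and the 5% default threshold is an
# assumed example value. Real thresholds are defined per system and per customer.
from dataclasses import dataclass

@dataclass
class OversightWindow:
    decisions: int   # automated decisions routed to the HITL reviewer in the window
    reversals: int   # decisions the named reviewer overrode or reversed

def reversal_rate(window: OversightWindow) -> float:
    return window.reversals / window.decisions if window.decisions else 0.0

def needs_incident_review(window: OversightWindow, threshold: float = 0.05) -> bool:
    """True once the reversal rate crosses the defined threshold, triggering an
    incident review alongside the structured retraining-feedback loop."""
    return reversal_rate(window) >= threshold

# Example: 24 reversals out of 400 reviewed decisions is a 6% rate -> review.
assert needs_incident_review(OversightWindow(decisions=400, reversals=24))
```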

05 / TRANSPARENCY

People know when they are talking to a machine.

  • AI interaction disclosure — RoboChat surfaces clearly identify as AI-driven and route to a human on request.
  • Synthetic content labelling — generated images, audio, and video carry C2PA-aligned provenance and visible labels where context permits.
  • Decision explanations — when an automated decision affects a person, an explanation suitable for that person is generated and made available through the customer's preferred channel.
  • Model cards — every production model ships with a model card: data summary, evaluation metrics, known failure modes, intended use, prohibited use (a minimal shape is sketched below).
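
One minimal shape such a card could take, with hypothetical field names; the content mirrors the bullet above, and the per-segment metrics connect directly to section 06.

```python
# Illustrative shape only: field names are hypothetical, and the full internal
# template (ART. 11 / Annex IV alignment) carries considerably more detail.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    version: str
    data_summary: str                       # provenance and composition of training data
    evaluation_metrics: dict[str, float]    # keyed per segment, not a single global number
    known_failure_modes: list[str]
    intended_use: str
    prohibited_use: list[str]
```
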
06 / BIAS, FAIRNESS & ACCURACY

Audited per segment, not just on average.

Accuracy and fairness are evaluated per language, region, demographic segment, and operational context. Disparities are documented in the model card and remediated through dataset rebalancing, post-processing, or human-review thresholds — not hidden behind a global accuracy number.
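
A sketch of that per-segment evaluation step. The names and the 0.03 disparity tolerance are assumed example values; real tolerances are set per system and recorded in the model card.

```python
# Illustrative sketch: names and the 0.03 disparity tolerance are assumed
# example values, not the per-system thresholds recorded in the model card.
def segment_accuracy(records: list[dict]) -> dict[str, float]:
    """Accuracy per segment (language, region, demographic group, context)."""
    hits: dict[str, int] = {}
    counts: dict[str, int] = {}
    for r in records:
        seg = r["segment"]
        counts[seg] = counts.get(seg, 0) + 1
        hits[seg] = hits.get(seg, 0) + int(r["prediction"] == r["label"])
    return {seg: hits[seg] / counts[seg] for seg in counts}

def flag_disparities(per_segment: dict[str, float], tolerance: float = 0.03) -> list[str]:
    """Segments whose accuracy trails the best-performing segment by more than
    the tolerance; each flagged segment is documented and remediated."""
    best = max(per_segment.values())
    return sorted(seg for seg, acc in per_segment.items() if best - acc > tolerance)
```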

For systems serving multiple jurisdictions, we map performance against the local regulatory baseline (e.g. DPDPA in India, POPIA in South Africa, equivalent regional frameworks) and adjust deployment posture accordingly.

07 / GENERAL-PURPOSE MODELS

We use them. We do not pretend we trained them.

Where our systems rely on third-party general-purpose AI models (GPT, Claude, Gemini, open-weight equivalents), we maintain provider documentation, version pinning, and a clear delineation of responsibility for outputs.

We do not contribute customer data to upstream training without explicit, scoped customer authorisation. We verify the provider's no-training contract clause at onboarding and on each model-version upgrade.
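
One way to picture the pinning-and-verification record, with hypothetical names. The detail that matters is that a version upgrade resets verification, so the no-training clause has to be checked again before the new version ships.

```python
# Illustrative sketch with hypothetical names: an upstream-model record that
# must be re-verified whenever the pinned version changes.
from dataclasses import dataclass, replace
from datetime import date

@dataclass(frozen=True)
class UpstreamModelPin:
    provider: str                               # third-party general-purpose model provider
    model_id: str                               # exact pinned version, never "latest"
    no_training_clause_verified: date | None    # when the clause was last verified, if ever
    customer_training_authorisation: bool       # explicit, scoped customer approval

def deployable(pin: UpstreamModelPin) -> bool:
    """Deployable only with a verified no-training clause for the pinned version."""
    return pin.no_training_clause_verified is not None

def upgrade(pin: UpstreamModelPin, new_model_id: str) -> UpstreamModelPin:
    """A version upgrade clears the verification date, forcing the clause check
    to be repeated before the new version becomes deployable."""
    return replace(pin, model_id=new_model_id, no_training_clause_verified=None)
```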

08 / SERIOUS INCIDENTS

If an AI system causes harm, we tell the regulator.

We notify the market surveillance authority of the affected member state of any serious incident — death, serious harm to health, fundamental rights infringement, critical infrastructure disruption, environmental damage — within 15 days of becoming aware. For death or widespread infringement, within 2 days. The customer is informed in parallel.
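
Restated as a rule, the windows above reduce to the sketch below; the names are hypothetical, and the deadlines are exactly those stated in this section.

```python
# Illustrative restatement of the windows above; names are hypothetical and the
# clock starts when we become aware of the incident.
from datetime import datetime, timedelta

FAST_TRACK = {"death", "widespread_fundamental_rights_infringement"}

def notification_deadline(incident_type: str, became_aware: datetime) -> datetime:
    """15 days by default; 2 days for death or widespread infringement."""
    days = 2 if incident_type in FAST_TRACK else 15
    return became_aware + timedelta(days=days)
```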

Engagement

Customers who want a full technical and contractual deep-dive on a specific Ophanix system can request a Responsible-AI Brief — produced from the actual implementation, signed by the named risk owner. Contact: ai-ethics@ophanix.org

Engage

Need the full evidence pack?

We share the complete audit pack — methodology, findings, remediation log, third-party attestations — with prospective customers under NDA during procurement.
