The EU AI Act is the floor, not the ceiling. We classify every system, document every model, gate every consequential decision through a named human, and report serious incidents to the regulator within statutory windows.
Regulation (EU) 2024/1689 — the Artificial Intelligence Act — defines a baseline for trustworthy AI in the European market. Ophanix builds against that baseline, and then past it. This page is the public summary of our responsible-AI programme. The implementation detail lives in our internal Risk Management System and is shared with customers under engagement.
We do not build, sell, or operate systems intended for prohibited practices, including those banned outright under Article 5 of the AI Act.
This is not a marketing position. It is contractual: any breach is grounds for immediate termination of the engagement, at our discretion.
Each Ophanix-built AI system is classified per Annex III of the AI Act and per our internal taxonomy. Classification triggers controls — not policy theatre.
High-risk covers AI in critical infrastructure, employment, essential services, and law enforcement. These systems require conformity assessment, human oversight, transparency, and post-market monitoring.
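To make the gating concrete, here is a minimal sketch, assuming a three-tier internal taxonomy and hypothetical control names; it is an illustration, not our Risk Management System. Each tier carries a fixed control set, and deployment stays blocked until every control has signed-off evidence.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"  # Annex III categories

# Controls required per tier; higher tiers carry everything below them.
TIER_CONTROLS = {
    RiskTier.MINIMAL: frozenset({"model_card"}),
    RiskTier.LIMITED: frozenset({"model_card", "transparency_notice"}),
    RiskTier.HIGH: frozenset({
        "model_card", "transparency_notice", "conformity_assessment",
        "human_oversight", "post_market_monitoring",
    }),
}

@dataclass
class SystemRecord:
    name: str
    tier: RiskTier
    evidence: set = field(default_factory=set)  # controls with signed-off evidence

    def missing_controls(self) -> frozenset:
        return TIER_CONTROLS[self.tier] - self.evidence

    def may_deploy(self) -> bool:
        # Hard gate: no evidence, no release.
        return not self.missing_controls()

# Example: a high-risk system with only a model card cannot ship.
record = SystemRecord("alerting-pipeline", RiskTier.HIGH, {"model_card"})
assert not record.may_deploy()  # four controls still lack evidence
```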
Every consequential decision — defined per system per customer, but always including takedowns, escalations, financial alerts, and counter-narrative deployment — routes through a Human-in-the-Loop reviewer with documented competence and named accountability. "The AI did it" is not an attribution we accept.
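As a sketch of what "routes through" means in practice (all names here are hypothetical, and the real consequential-action list is defined per system and per customer): the dispatcher fails closed on any consequential action that arrives without a named approval, so accountability is enforced in code rather than in process documents.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Illustrative set; the real list is defined per system, per customer.
CONSEQUENTIAL = {"takedown", "escalation", "financial_alert", "counter_narrative"}

@dataclass(frozen=True)
class Approval:
    reviewer: str        # a named human, never a service account
    competence_ref: str  # pointer to the reviewer's documented competence record
    approved_at: datetime

def execute(action: str, payload: dict, approval: Optional[Approval] = None) -> None:
    """Consequential actions fail closed without a named human approval."""
    if action in CONSEQUENTIAL and approval is None:
        raise PermissionError(f"'{action}' requires a named HITL approval")
    # Hypothetical downstream dispatch; the approval travels with the action.
    who = approval.reviewer if approval else "n/a (non-consequential)"
    print(f"dispatching {action!r}, approved_by={who}")
```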
Operators can pause, override, and reverse any automated decision. Reversals feed a structured retraining-feedback loop; when the reversal rate crosses defined thresholds, they also trigger an incident review.
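A minimal sketch of that threshold check, with illustrative window and threshold values (the real values are set per system): each decision outcome lands in a rolling window, and crossing the configured reversal rate opens an incident review.

```python
from collections import deque

class ReversalMonitor:
    """Rolling reversal-rate check. Window and threshold are illustrative;
    the real values are defined per system."""

    def __init__(self, window: int = 500, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)  # True = operator reversed the decision
        self.threshold = threshold

    def record(self, was_reversed: bool) -> bool:
        """Log one decision outcome; return True when an incident review should open."""
        self.outcomes.append(was_reversed)
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate >= self.threshold
```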
Accuracy and fairness are evaluated per language, region, demographic segment, and operational context. Disparities are documented in the model card and remediated through dataset rebalancing, post-processing, or human-review thresholds — not hidden behind a global accuracy number.
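The evaluation itself reduces to bookkeeping that refuses to average. A sketch under the assumption that each evaluation record carries a segment label and a correctness flag:

```python
from collections import defaultdict

def per_segment_accuracy(records):
    """records: iterable of (segment, was_correct) pairs, at least one per segment.
    Returns per-segment accuracy and the worst-to-best gap; the gap is what
    goes into the model card, not a single global number."""
    hits, totals = defaultdict(int), defaultdict(int)
    for segment, correct in records:
        totals[segment] += 1
        hits[segment] += int(correct)
    accuracy = {s: hits[s] / totals[s] for s in totals}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap
```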
For systems serving multiple jurisdictions, we map performance against the local regulatory baseline (e.g. DPDPA in India, POPIA in South Africa, equivalent regional frameworks) and adjust deployment posture accordingly.
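In configuration terms, that mapping is a table consulted at deploy time, not a judgment call made ad hoc. The entries below are purely illustrative:

```python
# Purely illustrative entries; actual postures come out of the per-system
# deployment review, not a static table.
JURISDICTION_BASELINES = {
    "EU": {"framework": "AI Act / GDPR", "posture": "standard"},
    "IN": {"framework": "DPDPA", "posture": "adjusted to local baseline"},
    "ZA": {"framework": "POPIA", "posture": "adjusted to local baseline"},
}

def posture_for(jurisdiction: str) -> dict:
    # Unmapped jurisdictions default to the most conservative posture.
    return JURISDICTION_BASELINES.get(
        jurisdiction, {"framework": "unmapped", "posture": "restricted"}
    )
```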
Where our systems rely on third-party general-purpose AI models (GPT, Claude, Gemini, open-weight equivalents), we maintain provider documentation, version pinning, and a clear delineation of responsibility for outputs.
We do not contribute customer data to upstream training without explicit, scoped customer authorisation. We verify the provider's no-training contract clause at onboarding and on each model-version upgrade.
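Structurally, a minimal version of that discipline looks like the sketch below, where every field and function name is hypothetical: the model is pinned to an exact version identifier, and the only upgrade path re-verifies the no-training clause.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PinnedModel:
    provider: str            # e.g. "Anthropic", "OpenAI", "Google"
    model_id: str            # an exact version identifier, never "latest"
    no_training_clause: str  # reference to the verified contract clause
    verified_on: str         # ISO 8601 date of the last clause verification

def upgrade(current: PinnedModel, new_model_id: str,
            clause_ref: str, verified_on: str) -> PinnedModel:
    """The only path that changes model_id also re-verifies the clause."""
    if not clause_ref or not verified_on:
        raise ValueError("re-verify the no-training clause before upgrading")
    return PinnedModel(current.provider, new_model_id, clause_ref, verified_on)
```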
We notify the market surveillance authority of the affected Member State of any serious incident — death, serious harm to health, fundamental rights infringement, critical infrastructure disruption, environmental damage — within 15 days of becoming aware. For death or widespread infringement, within 2 days. The customer is informed in parallel.
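Mechanically, those windows become hard deadlines in an incident tracker. A trivial sketch, using the windows stated above and illustrative incident-type keys:

```python
from datetime import datetime, timedelta, timezone

# The windows stated on this page; incident-type keys are illustrative.
REPORTING_WINDOWS = {
    "death": timedelta(days=2),
    "widespread_infringement": timedelta(days=2),
    "serious_incident": timedelta(days=15),
}

def report_deadline(aware_at: datetime, incident_type: str) -> datetime:
    """Latest permissible notification time to the market surveillance authority."""
    window = REPORTING_WINDOWS.get(incident_type, REPORTING_WINDOWS["serious_incident"])
    return aware_at + window

# Awareness at 2025-01-10T09:00Z of a death-class incident -> 2025-01-12T09:00Z.
print(report_deadline(datetime(2025, 1, 10, 9, tzinfo=timezone.utc), "death"))
```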
Customers who want a full technical and contractual deep-dive on a specific Ophanix system can request a Responsible-AI Brief — produced from the actual implementation, signed by the named risk owner. Requests go to ai-ethics@ophanix.org.
We share the complete audit pack — methodology, findings, remediation log, third-party attestations — with prospective customers under NDA during procurement.