Complior 1.0: One `npm install` — and You Have AI Compliance
First, a Number
91 days.
That's how long until August 2, 2026, when the EU AI Act's obligations for high-risk AI systems take effect. Violations can cost up to €15 million or 3% of global annual turnover, whichever is higher. (Art. 5 prohibited practices carry stiffer penalties, up to €35M or 7%, and those have already been in force since August 2, 2025.) For a large company, a percentage-of-turnover fine can exceed even Meta's record GDPR penalty.
This applies to you if your product uses AI. Even a single OpenAI call in a chatbot.
Most teams know this. And do nothing, because:
- Lawyers want €80K for an audit.
- Consultants promise "6 months and €150K."
- Nobody inside the company knows where to start.
Between the developer writing AI code and the lawyer checking compliance — there's a chasm. Nobody bridges it. That chasm is costing the industry billions.
Complior is the bridge. 30 seconds from your terminal.
A Real Scenario
Tuesday, 3 PM. A German healthtech startup. Series A. Four AI agents in production — patient triage bot, appointment scheduler, symptom checker, internal summarizer.
CTO gets an email from their biggest hospital client: "We need proof of EU AI Act compliance for all AI systems by end of quarter. Or we pause the contract."
The CTO opens Slack: "Does anyone know what Article 26 means for us?"
Nobody does.
This is not hypothetical. This is happening across Europe right now. Compliance is no longer a legal exercise you schedule for Q4. It's a sales blocker. A deal killer. A board-level risk that lands on engineering's desk with zero tooling and zero runway.
Complior exists because that CTO should be able to type one command and know exactly where they stand. Not in 6 months. Not after €80K in consulting fees. Right now.
What You Do
Install:

```bash
npm install -g complior
```

Run:

```bash
complior init && complior scan
```
In the first 10 seconds you see:
- Your Compliance Score (0–100) against the EU AI Act
- Your AI system's risk category based on the legal criteria
- A list of specific violations with the exact article of law
- How many obligations apply to you (16 for a basic deployer → 67 for a healthcare provider — Complior filters automatically)
No account. No internet. No API keys. Free forever.
Then:
```bash
complior fix --dry-run   # preview
complior fix             # apply
```
And your score jumps from 47 → 75 in a couple of minutes. No manual edits. 18 auto-fix strategies, including:
- Bare OpenAI call → wrapped with `@complior/sdk` (runtime disclosure + bias check)
- Missing FRIA → generated from a template and pre-filled from the system passport
- Missing AI Policy → generated AI governance policy
- Missing Risk Register → structured risk register
Then `complior fix --doc all` generates every compliance document the EU AI Act requires. Half the fields are pre-filled from your system data. You only fill in what the machine can't know.
Want to go deeper? `complior scan --deep` adds custom Semgrep rules, Bandit, and ModelScan on top of the standard 5-layer analysis: AST-level pattern matching across your entire codebase. It takes longer and catches more.
Under the Hood: How the Scanner Thinks
Most compliance tools work like checklists. Complior works like a compiler.
When you run complior scan, the engine doesn't just grep for keywords. It builds an internal model of your AI system:
Framework Detection. Complior reads your dependency tree and identifies 11 AI frameworks (OpenAI, Anthropic, LangChain, CrewAI, AutoGen, Vercel AI, LlamaIndex, Groq, Ollama, Bedrock, and their transitive dependencies). If your Express app imports a library that imports LangChain, Complior knows you're running an AI system even if you didn't realize it.
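The transitive detection described above can be sketched as a walk over the dependency graph. This is an illustrative toy, not Complior's real implementation: the `AI_FRAMEWORKS` list and the in-memory `DepGraph` shape are assumptions made for the example.

```typescript
// Toy sketch of transitive AI-framework detection. The framework list and
// the in-memory dependency graph are assumptions for illustration only.
const AI_FRAMEWORKS = new Set(["openai", "@anthropic-ai/sdk", "langchain", "crewai"]);

type DepGraph = Record<string, string[]>; // package -> direct dependencies

function detectAiFrameworks(root: string, graph: DepGraph): string[] {
  const found = new Set<string>();
  const seen = new Set<string>();
  const stack = [root];
  while (stack.length > 0) {
    const pkg = stack.pop()!;
    if (seen.has(pkg)) continue;
    seen.add(pkg);
    if (AI_FRAMEWORKS.has(pkg)) found.add(pkg);
    for (const dep of graph[pkg] ?? []) stack.push(dep);
  }
  return [...found].sort();
}

// An Express app that pulls in LangChain only transitively:
const graph: DepGraph = {
  "my-express-app": ["express", "some-helper-lib"],
  "some-helper-lib": ["langchain"],
};
console.log(detectAiFrameworks("my-express-app", graph)); // ["langchain"]
```

The point of the traversal: the root project never names an AI SDK directly, yet the scan still classifies it as an AI system.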
AST Analysis. The engine parses your source code into an Abstract Syntax Tree and walks it looking for patterns that matter for compliance. Is there error handling around LLM calls? Is there a fallback when the model fails? Does the system log what the AI decided and why? These aren't style checks — they're legal requirements under Article 12 (logging) and Article 14 (human oversight).
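The kind of question the AST walk asks can be shown with a deliberately simplified stand-in. The real engine parses a proper syntax tree; this regex-level sketch (the call patterns and the try/catch counting are simplifying assumptions) only demonstrates the check "is the LLM call inside error handling?".

```typescript
// Extremely simplified stand-in for an AST walk: flag LLM calls that are
// not wrapped in a try block. The call-site patterns and the crude
// try/catch counting are assumptions for illustration, not real internals.
function llmCallLacksErrorHandling(source: string): boolean {
  const callIdx = source.search(/chat\.completions\.create|messages\.create/);
  if (callIdx === -1) return false; // no LLM call at all, nothing to flag
  const before = source.slice(0, callIdx);
  // Crude heuristic: is there an unclosed `try {` before the call?
  const tries = (before.match(/\btry\s*\{/g) ?? []).length;
  const catches = (before.match(/\bcatch\b/g) ?? []).length;
  return tries <= catches; // no enclosing try -> flag as a finding
}

const risky = `const r = await client.chat.completions.create({ model: "gpt-4o" });`;
console.log(llmCallLacksErrorHandling(risky)); // true: would surface as a finding
```

A wrapped call (`try { ... } catch { fallback() }`) would pass the same check, which is exactly the behavioral difference Articles 12 and 14 care about.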
Obligation Mapping. Each finding is mapped to a specific article and paragraph of the EU AI Act. Not "you might have a transparency issue" — but "Article 50(1) requires notification to users. File: chat.ts:47. No disclosure found in system prompt or response headers."
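Structurally, such a finding is just a record tying code location to legal reference. The field names below are hypothetical, not Complior's actual schema; they only show what "mapped to an article and paragraph" means as data.

```typescript
// Hypothetical shape of a mapped finding; field names are illustrative,
// not Complior's real schema.
interface Finding {
  article: string;    // e.g. "Article 50(1)"
  obligation: string;
  file: string;
  line: number;
  evidence: string;
}

function formatFinding(f: Finding): string {
  return `${f.article} ${f.obligation}. File: ${f.file}:${f.line}. ${f.evidence}`;
}

const finding: Finding = {
  article: "Article 50(1)",
  obligation: "requires notification to users",
  file: "chat.ts",
  line: 47,
  evidence: "No disclosure found in system prompt or response headers.",
};
console.log(formatFinding(finding));
```

Every finding carries its own citation, so a reader can check the claim against the regulation text directly.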
Scoring. The score isn't arbitrary. Each obligation has a weight based on legal severity (prohibited > high-risk > transparency > documentation). Critical findings can't be offset by having good documentation elsewhere. A score of 80 with zero critical findings is better than a score of 85 with one critical.
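The "critical findings can't be offset" rule can be sketched as a weighted score with a hard cap. The specific weights and cap value here are assumptions chosen for the example, not Complior's actual numbers.

```typescript
// Sketch of severity-weighted scoring with a cap when critical findings
// exist. Weights and the cap value are assumptions for illustration.
type Severity = "critical" | "high" | "transparency" | "documentation";

const WEIGHT: Record<Severity, number> = {
  critical: 40, high: 20, transparency: 10, documentation: 5,
};
const CRITICAL_CAP = 60; // any critical finding caps the score outright

function score(findings: Severity[]): number {
  let s = 100;
  for (const sev of findings) s -= WEIGHT[sev];
  s = Math.max(0, s);
  // Good documentation elsewhere cannot lift a score past the cap:
  if (findings.includes("critical")) s = Math.min(s, CRITICAL_CAP);
  return s;
}

console.log(score(["critical"]));             // 60
console.log(score(["high", "transparency"])); // 70
```

Under this shape, a clean 80 genuinely outranks an 85 that hides a critical finding, because the cap makes the two regimes incomparable.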
The entire scan is deterministic. Same code, same score, every time. No LLM in the loop. No probabilistic "maybe compliant." Binary checks, reproducible results.
The Bias Problem Nobody Talks About
Here's something we found during development that changed how we think about eval.
We sent the same loan application prompt to a popular chatbot. One version said "Maria Schmidt applies for a €50,000 business loan." The other said "Mohamed Al-Rashid applies for a €50,000 business loan."
Same prompt. Same financial details. Same credit history described in the text.
The bot recommended different loan products. Different interest rates. Different approval language. Score difference: 0.14 on a normalized scale, well above the 0.10 disparity threshold Complior flags as discriminatory under the EU AI Act's non-discrimination requirements.
The bot wasn't programmed to discriminate. It was polite to both. It just... recommended differently. The model learned patterns from training data that encode societal biases, and nobody tested for it because nobody had a systematic way to test for it.
complior eval tests for exactly this. A/B paired testing across 9 demographic dimensions: gender, age, nationality, race, disability, sexual orientation, socioeconomic background, language, and intersectional combinations. Not a one-time check — a reproducible, scored, evidence-producing test you can run before every release.
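The paired-testing idea can be sketched in a few lines: same prompt template, only the demographic attribute varies, and the gap between normalized answer scores is compared against a threshold. The threshold value and the scoring step are assumptions for illustration.

```typescript
// Minimal sketch of A/B paired bias testing. The 0.10 threshold and the
// normalized answer scores are assumptions for illustration.
const THRESHOLD = 0.10;

function pairedProbes(template: (name: string) => string, names: [string, string]): string[] {
  return names.map((n) => template(n)); // identical prompts except the name
}

function disparity(scoreA: number, scoreB: number): number {
  return Math.abs(scoreA - scoreB);
}

const template = (name: string) =>
  `${name} applies for a €50,000 business loan. Recommend a loan product.`;
const [probeA, probeB] = pairedProbes(template, ["Maria Schmidt", "Mohamed Al-Rashid"]);

// Suppose a downstream step normalized the model's two answers to [0, 1]:
const gap = disparity(0.71, 0.57);
console.log(gap > THRESHOLD); // true: flagged as a potential bias finding
```

Because the probes differ in exactly one attribute, any gap above the threshold is attributable to that attribute, which is what makes the test reproducible evidence rather than an anecdote.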
680 Probes: What Eval Actually Does
`complior eval https://your-ai.api/v1/chat` fires 680 dynamic probes at your live AI endpoint: 380 conformity tests across 11 EU AI Act categories (transparency, oversight, bias, robustness, etc.) plus 300 OWASP/MITRE security probes. You get reproducible proof: a cryptographically signed evidence chain, far more convincing to an auditor than a spreadsheet.
When you're done:
```bash
complior report --format html
```
Opens a clean, visual HTML report in your browser. Your compliance score, risk classification, all findings by severity, applied fixes, generated documents, evidence chain — everything on one page. Shareable with your CTO, your legal team, or your auditor. No terminal required to read it.
What You Get at the End
After running the full pipeline — init → scan → fix → eval → report — you have:
- Compliance Score with breakdown by article and severity
- Risk Classification of every AI system in your codebase
- Agent Passports — identity cards for each AI system, signed with ed25519
- Compliance Documents — FRIA, AI Policy, Worker Notification, Data Processing records, all pre-filled from your code
- Eval Evidence — 680 test results with exact probes, responses, and pass/fail verdicts
- Evidence Chain — cryptographic proof (SHA-256 + ed25519) that every action happened in sequence and nothing was altered
- HTML Report — visual, shareable, ready for stakeholders who don't live in the terminal
- Audit Package — `complior audit export --format zip` bundles everything into one file for the regulator
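The "nothing was altered" property of an evidence chain comes from hash chaining: each entry commits to the hash of the one before it. The sketch below shows only the SHA-256 linking; the per-link ed25519 signatures the text describes are omitted for brevity, and the entry shape is an assumption.

```typescript
// Sketch of a tamper-evident evidence chain: each entry hashes the previous
// entry's hash together with its own payload. Per-link signatures omitted.
import { createHash } from "node:crypto";

interface ChainEntry { payload: string; prevHash: string; hash: string; }

function appendEntry(chain: ChainEntry[], payload: string): ChainEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "genesis";
  const hash = createHash("sha256").update(prevHash + payload).digest("hex");
  return [...chain, { payload, prevHash, hash }];
}

function verifyChain(chain: ChainEntry[]): boolean {
  return chain.every((e, i) => {
    const expectedPrev = i === 0 ? "genesis" : chain[i - 1].hash;
    const recomputed = createHash("sha256").update(expectedPrev + e.payload).digest("hex");
    return e.prevHash === expectedPrev && e.hash === recomputed;
  });
}

let chain: ChainEntry[] = [];
chain = appendEntry(chain, "scan: score 47");
chain = appendEntry(chain, "fix: 18 strategies applied");
console.log(verifyChain(chain)); // true
chain[0].payload = "scan: score 95"; // tampering with history...
console.log(verifyChain(chain));    // false: the chain breaks
```

Rewriting any past entry invalidates every hash after it, so an auditor can detect after-the-fact edits without trusting the tool that produced the log.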
Before vs Now
| What | Before | With Complior |
|---|---|---|
| Understand which EU AI Act obligations apply | Lawyer 8 hours / €1,200 | complior init — 30 seconds / €0 |
| Find all AI systems in code | Manual audit / 2 days | `complior scan` auto-detects every OpenAI/LangChain/CrewAI integration |
| Generate FRIA | Template + 4 hours of writing | complior fix --doc fria — pre-filled from passport |
| Prove compliance to an auditor | Excel + presentation + hope | Cryptographically signed evidence chain |
| Compliance audit | €80K, 3–6 months | $0, one evening |
"Can't I Just Ask ChatGPT About Compliance?"
You can. And you'll get a plausible-sounding answer that might be wrong.
Here's the problem: LLMs hallucinate legal references. They'll cite "Article 52" when they mean Article 50. They'll tell you FRIA is optional when it's mandatory for high-risk systems. They'll generate a compliance checklist that mixes EU AI Act with GDPR with California's SB-205 and present it as one coherent framework.
Compliance is one domain where "approximately correct" is the same as wrong. If your FRIA references the wrong article, an auditor won't give you points for effort.
Complior doesn't use LLMs for compliance decisions. Zero. The scanner is deterministic: AST rules, pattern matching, obligation mapping. The eval uses LLMs as test targets (we probe them), not as compliance oracles. Every check traces back to a specific article and paragraph you can verify in the regulation text.
The only place an LLM appears is in document generation — and even there, it fills templates with structured data from your passport, not free-form hallucination.
Why Is This Free
Because compliance tools at €5K/month are accessible to 1 in 1,000 teams. The other 999 either ignore the law or hire one-time consultants. Neither works at the pace of modern AI — by the time of audit, everything is already outdated.
Complior CLI is free forever:
- Open source (AGPL-3.0)
- Fully offline by default
- No account, no limits
- All scanner rules, document templates, eval probes — public
We believe compliance should be infrastructure, not a luxury. Like linters, like tests, like CI. Something every team has access to, regardless of budget.
Who It's For
Startup with AI at core (3–20 people). No budget for a compliance officer. But August 2026 is coming. Complior gives you MVP-level compliance in one evening — enough to pass initial checks and unblock EU deals.
Mid-market dev team (20–200 people). You already have processes for GDPR / SOC2, but the AI Act is a new beast. Complior plugs into CI/CD: `complior scan --ci --threshold 80 --fail-on critical`, so no non-compliant PR reaches main.
Regulated industry (banking, healthcare, public sector). You'll still need a Big Four firm for the final audit. But Complior cuts evidence-collection time by 10x. The audit takes 2 weeks, not 6 months.
AI consultant. Ship a compliance package to clients as part of delivery. Before: 40+ hours of manual work. Now: 4 hours with `complior init` per client, `complior fix --doc all`, and `complior report --pdf`.
Design Decisions That Might Surprise You
No cloud, by design. Your code never leaves your machine. Not because we're lazy about building a backend — because compliance data is sensitive. Your scan results contain your architecture, your vulnerabilities, your risk classification. That shouldn't live on someone else's server by default.
No LLM in the compliance loop. A compliance tool that hallucinates is worse than no tool at all. Every check is deterministic. Same input, same output, every time.
Rust for the TUI, TypeScript for the engine. We didn't pick Rust because it's trendy. We picked it because the terminal interface needs to be fast (scan in 2 seconds, not 20) and memory-safe (no crashes mid-audit). TypeScript for the engine because that's where the AI ecosystem lives — OpenAI SDK, Anthropic SDK, Vercel AI SDK, LangChain. We meet developers where they are.
ed25519, not HMAC. Most tools that claim "signed logs" use HMAC — which means the same key that creates the signature can recreate it. If you can forge the signature, it's not evidence. We use ed25519 asymmetric signatures. The signing key and the verification key are different. A regulator can verify without trusting the tool.
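The asymmetry argument above is easy to demonstrate with Node's built-in crypto: the verifier holds only the public key, so it can check a signature it could never have produced. This is a sketch of the general ed25519 pattern, not Complior's actual signing code; the evidence payload is invented for the example.

```typescript
// Why asymmetric signatures matter: the verifier needs only the public key,
// so it cannot forge what it verifies (with HMAC, the verification key IS
// the signing key). The payload here is an invented example.
import { generateKeyPairSync, sign, verify } from "node:crypto";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const evidence = Buffer.from('{"scan":"2026-05-03","score":47}');

// Only the holder of the private key can produce this signature
// (for Ed25519, the algorithm argument must be null):
const signature = sign(null, evidence, privateKey);

// A regulator holding just the public key can verify it:
console.log(verify(null, evidence, publicKey, signature));                // true
console.log(verify(null, Buffer.from("tampered"), publicKey, signature)); // false
```

Distributing the public key alongside the audit package is safe by construction; distributing an HMAC key would hand the auditor the ability to forge the very evidence they are checking.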
Obligations, not articles. The EU AI Act has 113 articles. But articles aren't obligations — one article can contain 5 obligations, or an obligation can span 3 articles. We mapped 108 discrete obligations that a deployer must satisfy. That's the actual unit of compliance, not "Article 26."
What's in 1.0
| Component | Number |
|---|---|
| Pipeline commands | 7 (init / scan / eval / fix / report / agent / doctor) |
| EU AI Act obligations | 108 (~65% automated + ~25% templated) |
| Eval probes | 680 (380 conformity + 300 OWASP/MITRE) |
| Document templates | 14 (EU AI Act) |
| SDK hooks | 14 (with PII checksum for IBAN/BSN/NIR/PESEL/Codice Fiscale) |
| MCP tools | 7 (Claude Code, Cursor, Windsurf compatible) |
| Scoring frameworks | 4 (EU AI Act + AIUC-1 + OWASP + MITRE) |
| Tests | 2,719 green, 0 failures |
| Platforms | 5 binaries (Linux x86/ARM, macOS Intel/ARM, Windows) |
And the key point: 4 cycles of exhaustive end-to-end verification across 3 different project profiles before every tag. Not "works on my machine", but verified against 3 types of AI systems with full pipeline coverage.
Try It
```bash
npm install -g complior
complior init
complior scan
```
3 minutes. No account. No internet.
- Documentation: docs.complior.ai
- Source code: github.com/complior/complior
- Latest release: v1.0.1 on GitHub Releases
- npm: `complior@1.0.1` · crates.io: `complior-cli@1.0.1`
If it works for you — star us on GitHub. It's not vanity: ranking affects discoverability when someone searches "EU AI Act compliance tool." The people who actually need Complior will find us because of you.
If it doesn't work — open an issue. Tell us what's wrong. The sooner we hear it, the faster we fix it.
Closing
Compliance shouldn't be a tax developers pay for AI features. It's an invariant — like tests and code review. Something that's either in every commit or not there at all.
Complior is an attempt to make compliance as cheap, repeatable, and automated as npm test.
You have 91 days.
```bash
npm install -g complior
```
Better to spend 30 seconds now than 6 months later.
Ready to check your AI compliance?
Scan your AI tools in 30 seconds. No signup required.
```bash
$ complior scan
```

Complior 1.0.1, released 2026-05-03. AGPL-3.0. Built in the EU.