We built a CLI tool that scans your codebase for EU AI Act compliance risks.

`npx @systima/comply scan` analyses your repository to detect AI framework usage, traces how AI outputs flow through the program, and flags patterns that may trigger regulatory obligations.

It runs in CI and posts findings on pull requests (no API keys required).

Under the hood it performs AST-based import detection using the TypeScript Compiler API and web-tree-sitter WASM across 37+ AI frameworks. It then traces AI return values through assignments and destructuring to identify four patterns:

1. conditional branching on AI output

2. persistence of AI output to a database

3. rendering AI output in a UI without disclosure

4. sending AI output to downstream APIs
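The import-detection step could be sketched roughly like this. This is a minimal illustration using the TypeScript Compiler API, not the tool's actual code; the package list and function name are hypothetical stand-ins for the 37+ frameworks the scanner covers:

```typescript
import * as ts from "typescript";

// Hypothetical subset of AI package names; the real scanner covers 37+ frameworks.
const AI_PACKAGES = new Set(["openai", "@anthropic-ai/sdk", "ai", "langchain"]);

// Parse a source file and return the AI package specifiers it imports.
export function findAiImports(source: string, fileName = "file.ts"): string[] {
  const sf = ts.createSourceFile(fileName, source, ts.ScriptTarget.Latest, true);
  const hits: string[] = [];
  const visit = (node: ts.Node): void => {
    if (ts.isImportDeclaration(node) && ts.isStringLiteral(node.moduleSpecifier)) {
      const name = node.moduleSpecifier.text;
      // Match exact package names or sub-path imports like "langchain/chains".
      if (AI_PACKAGES.has(name) || [...AI_PACKAGES].some((p) => name.startsWith(p + "/"))) {
        hits.push(name);
      }
    }
    ts.forEachChild(node, visit);
  };
  visit(sf);
  return hits;
}
```

From there, tracing which variables hold AI return values (through assignments and destructuring) is what lets the scanner check the four patterns above rather than just flagging the import.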

Findings are severity-adjusted by system domain. You declare what your system does (customer support, credit scoring, legal research, etc.) and the scanner weights its findings accordingly.

Example:

- a chatbot routing tool using AI output in an `if` statement produces an informational note

- a credit scoring system doing the same produces a critical finding
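The severity model behind that example could be sketched as follows. The domain and pattern names, mappings, and escalation rule are all hypothetical; the real model presumably covers more domains and finer-grained adjustments:

```typescript
type Severity = "info" | "warning" | "critical";
type Domain = "chatbot" | "customer-support" | "credit-scoring" | "legal-research";
type Pattern =
  | "branch-on-output"     // conditional branching on AI output
  | "persist-output"       // persistence of AI output to a database
  | "render-undisclosed"   // rendering AI output in a UI without disclosure
  | "forward-to-api";      // sending AI output to downstream APIs

// Hypothetical: domains that map onto high-risk use cases escalate
// any detected pattern to a critical finding.
const HIGH_RISK_DOMAINS = new Set<Domain>(["credit-scoring"]);

// Baseline severity per pattern for low-risk domains (illustrative values).
const BASE_SEVERITY: Record<Pattern, Severity> = {
  "branch-on-output": "info",
  "persist-output": "info",
  "render-undisclosed": "warning",
  "forward-to-api": "info",
};

export function adjustSeverity(pattern: Pattern, domain: Domain): Severity {
  if (HIGH_RISK_DOMAINS.has(domain)) return "critical";
  return BASE_SEVERITY[pattern];
}
```

Under this sketch, `adjustSeverity("branch-on-output", "chatbot")` stays informational while the same pattern in a credit-scoring domain escalates, matching the two bullets above.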

We tested it against Vercel’s 20k-star AI chatbot repository; the scan took about 8 seconds. Example PR comment with full results: https://github.com/systima-ai/chatbot-comply-test/pull/1

Comply ships as an npm package, a GitHub Action (systima-ai/comply@v1), and a TypeScript API. It can also generate PDF reports and template compliance documentation.

Repo and explanation: https://systima.ai/blog/systima-comply-eu-ai-act-compliance-...

Feedback welcome on the call-chain tracing approach and whether the domain-based severity model makes sense.
