Powered by the SourceRail Governance Engine

Every AI interaction.
Governed. Auditable. Yours.

Onyx intercepts LLM traffic at the network level—across every browser, CLI tool, and desktop app on your machines. Enforce policy, redact sensitive data before it leaves your perimeter, and maintain a cryptographic audit trail of every interaction.

Zero raw data in audit trails · Append-only cryptographic chain · Works with ChatGPT, Claude, Gemini & more

The Problem

Your employees are sharing secrets with AI.
You just don't know it yet.

Every day, engineers paste API keys into ChatGPT. Analysts feed customer PII into Gemini. Legal teams upload contracts to Claude. There is no perimeter, no audit trail, and no way to enforce policy across the dozens of AI touchpoints in your organization.

68%

of employees use AI tools with company data without IT's knowledge or approval.

$4.9M

average cost of a data breach in 2024. A single pasted credential can trigger one.

0

visibility most organizations have into what data their teams share with LLMs—across browsers, CLIs, and desktop apps.

How It Works

Three steps. Total visibility.

01

Intercept

Onyx installs as a lightweight desktop agent. It transparently intercepts all LLM traffic at the network level—every API call from every browser, CLI tool, IDE extension, and desktop app.

ChatGPT, Claude, Gemini, Copilot, Cursor, direct SDK calls—all governed through a single pane of glass.
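The interception step above comes down to recognizing governed destinations. A minimal sketch of the idea, with a hypothetical provider list (the hostnames and function name are illustrative, not the actual Onyx agent logic):

```python
# Illustrative only: decide whether an outbound request should be routed
# through the governance pipeline, based on its destination host.
GOVERNED_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def is_llm_traffic(host: str) -> bool:
    """Return True if the destination host is a known LLM provider."""
    return host.lower() in GOVERNED_HOSTS
```

Any matching request, whether it originates in a browser, a CLI tool, or an SDK call, would then receive the same governance treatment.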

02

Govern

Every request passes through the SourceRail governance pipeline. PII is redacted. Secrets are blocked. Content is classified. Policy rules are enforced—per user, per team, per provider.

Emails, SSNs, API keys, private keys, credit cards—detected and handled before they leave your network.
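The redaction idea can be sketched with a few pattern-based detectors. This is a toy version under our own assumptions (pattern set and placeholder format are ours); a production scrubber would add checksums, entropy analysis, and context-aware detection:

```python
import re

# Illustrative detectors for a few sensitive-data classes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def scrub(text: str) -> str:
    """Replace detected sensitive spans with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The typed placeholders matter: the downstream model still sees that an email or key was present, but the value itself never leaves the network.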

03

Audit

Every governance decision is recorded in an append-only, cryptographically linked chain—the TELEM chain. No raw data. Only digests, identifiers, and decisions. Tamper-evident by design.

Export compliance-ready audit packages. Prove what was blocked, what was redacted, and why.
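The tamper-evidence property described above is that of a hash chain: each entry commits to the one before it, and entries carry only digests and decisions. A minimal sketch of the concept (class and field names are ours, not the actual TELEM format):

```python
import hashlib
import json

class TelemChain:
    """Append-only hash chain holding digests and decisions, never raw data."""

    def __init__(self):
        self.entries = []
        self._head = "0" * 64  # genesis link

    def append(self, decision: str, content: bytes) -> dict:
        entry = {
            "prev": self._head,                             # link to prior entry
            "decision": decision,                           # e.g. ALLOW / DENY / REDACT
            "digest": hashlib.sha256(content).hexdigest(),  # digest only, no raw data
        }
        self._head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every link; any tampered entry breaks the chain."""
        head = "0" * 64
        for e in self.entries:
            if e["prev"] != head:
                return False
            head = hashlib.sha256(json.dumps(e, sort_keys=True).encode()).hexdigest()
        return True
```

Because each link is derived from the full prior entry, editing any recorded decision after the fact invalidates every subsequent link.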

Governance Pipeline

REQUEST → CLASSIFY → RULES → SCRUB → POLICY → ALLOW | DENY | REDACT → TELEM
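The pipeline stages above compose into a single pass over each request, with denial short-circuiting. A toy sketch under our own assumptions (stage implementations, the `sk-` secret heuristic, and verdict strings are illustrative, not the SourceRail API):

```python
def run_pipeline(request, stages):
    """Run a request through ordered stages; DENY short-circuits."""
    for stage in stages:
        request, verdict = stage(request)
        if verdict == "DENY":
            return request, "DENY"
    return request, ("REDACT" if request.get("redacted") else "ALLOW")

def classify(req):
    # Toy classifier: flag anything that looks like an API key.
    req["labels"] = ["secret"] if "sk-" in req["text"] else []
    return req, "ALLOW"

def scrub(req):
    # Remove flagged content before it can leave the network.
    if "secret" in req["labels"]:
        req["text"] = "[REDACTED]"
        req["redacted"] = True
    return req, "ALLOW"

def policy(req):
    # Example rule: redacted requests may proceed; a request still
    # carrying an unscrubbed secret is denied outright.
    if "secret" in req["labels"] and not req.get("redacted"):
        return req, "DENY"
    return req, "ALLOW"
```

The ordering is the point: classification informs scrubbing, scrubbing informs policy, and the final verdict (plus digests) is what lands in the audit chain.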

Built For

Industries where data is the asset

If your organization handles data that cannot be shared with external AI providers, Onyx gives you a way to adopt AI without the risk.

Healthcare

HIPAA compliance without blocking AI adoption

  • PHI detection and redaction before it reaches any LLM
  • Audit trails that satisfy compliance officers and regulators
  • Per-department policies—radiology can use AI, billing can’t share patient data

Financial Services

Protect trading strategies and customer financial data

  • Block credit card numbers, account details, and financial credentials in real time
  • Enforce model-level policies—allow GPT-4 for research, deny for code generation
  • Tamper-evident audit chain for regulatory examinations

Legal & Professional Services

Client privilege meets AI productivity

  • Prevent privileged client information from reaching third-party AI models
  • Redact names, case numbers, and confidential terms automatically
  • Prove to clients that their data never left your governance perimeter

The Bigger Picture

Beyond governance.
Toward sovereign AI.

Onyx is the first product built on the SourceRail engine—a patent-pending sovereign context governance framework that standardizes how data flows between humans, AI models, and systems.

At its core, SourceRail introduces the Framer—a canonical governed object that wraps every piece of context with policy, provenance, and cryptographic receipts. No raw data in audit trails. Ever. Only digests, decisions, and verifiable proofs.
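The Framer concept can be pictured as a small immutable wrapper. This sketch is ours, not SourceRail's schema; field names, and the choice of SHA-256 as the digest, are assumptions for illustration:

```python
import hashlib
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Framer:
    """Context wrapped with policy, provenance, and a digest (never the raw data)."""
    digest: str       # SHA-256 of the governed content
    policy: str       # policy identifier applied at capture time
    provenance: str   # where the context came from (user, app, model)
    captured_at: float = field(default_factory=time.time)

def frame(content: bytes, policy: str, provenance: str) -> Framer:
    """Wrap raw content into a governed, immutable Framer object."""
    return Framer(
        digest=hashlib.sha256(content).hexdigest(),
        policy=policy,
        provenance=provenance,
    )
```

Freezing the dataclass mirrors the receipt-like role described above: once context is framed, its digest, policy, and provenance cannot be silently altered.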

Today, Onyx uses this engine to protect your organization from data leaks. Tomorrow, the same architecture enables something far more profound.

Intellectual Property Attribution

When every AI interaction flows through a governed, auditable pipeline, we enter a world where the origin of ideas can be traced. An engineer in Tokyo has a breakthrough insight via an LLM. An artist in Berlin generates a concept weeks later. With SourceRail's cryptographic chain, the inception of that idea is provable—opening the door to royalties, attribution, and fair compensation.

A Universal Standard for AI Context

Every pixel, database record, and model interaction governed by a single, unified pipeline. The Framer object standardizes how context is captured, policy-checked, metered, and audited—across any AI system, any provider, any modality. One canon. One chain. One source of truth.

Creator Economics for the AI Age

Artists, researchers, and independent creators pour knowledge into AI systems with no way to track how that knowledge is reused. SourceRail's settlement layer makes it possible to meter usage, attribute contributions, and build economic models where creators are compensated when their work generates value through AI.

Be the first to govern your AI.

Onyx is in private early access. Join the waitlist to get notified when we open enrollment for your organization.

No spam. We'll only contact you about Onyx early access.