

All four interfaces call the same API. The difference is how you invoke it.

Decision tree

- Do you have a coding agent (Claude Code, Claude Desktop, Cursor, Windsurf)? Yes → **MCP server.** The agent calls tools directly. No shell, no files.
- Do you want file-based, version-controlled rules? Yes → **CLI.** Sources and tests live in `.aethis/`, committed alongside your code. Run `aethis test` in CI.
- Are you shipping a Python service that calls Aethis? Yes → **Python SDK.** Sync and async clients, typed Pydantic models, a stateful `DecisionSession` for wizard flows.
- Are you integrating from another stack (Node, Go, Rust, …) or want zero install? Yes → **REST API.** Any HTTP client. Decision endpoints on leaf rulesets are safe to call client-side (no key required).

Comparison

| | MCP server | CLI | Python SDK | REST API |
|---|---|---|---|---|
| Best for | Coding agents | Terminal · CI/CD | Python services | Any HTTP client |
| Install | `npx -y aethis-mcp` | `uv tool install aethis-cli` | `pip install aethis-sdk` | None |
| Auth | `AETHIS_API_KEY` env var | `aethis login` or env var | `api_key=` argument | `x-api-key` header |
| Anonymous decisions | No key needed | No key needed | Key always required | Key-free on leaf rulesets |
| Authoring tools | API key required | API key required | Not yet exposed | API key required |
| File-based workflow | No | Yes | No | No |
| CI/CD native | No | Yes (`aethis test`) | Yes (in test suites) | Possible |

MCP server

**Use when:** your authoring and evaluation happen inside a coding agent session. The agent reads your policy document, runs `aethis_create_ruleset`, iterates until tests pass, and publishes, all without you touching a shell.

**Concrete scenario:** you paste a benefits policy document into Claude Code and ask it to author rules. The agent calls `aethis_discover_sections`, creates rulesets for each section, runs `aethis_generate_and_test`, refines based on failures, and publishes, with you reviewing output and providing domain feedback.

MCP server overview →
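A typical client configuration might look like the following. The install command (`npx -y aethis-mcp`) and the `AETHIS_API_KEY` variable come from the comparison table; the server name and the `mcpServers` shape follow the common MCP client convention and are otherwise assumptions, not an exact snippet from this product's docs.

```json
{
  "mcpServers": {
    "aethis": {
      "command": "npx",
      "args": ["-y", "aethis-mcp"],
      "env": { "AETHIS_API_KEY": "your-key-here" }
    }
  }
}
```

Restart the agent after editing its config so the Aethis tools appear in its tool list.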

CLI

**Use when:** you want source documents, guidance, and test cases version-controlled as files alongside your codebase. CI pipelines run `aethis test` on pull requests to catch regressions. Local authoring happens in a terminal.

**Concrete scenario:** a compliance team maintains eligibility rules in a Git repo. When legislation changes, they update `sources/` and `tests/scenarios.yaml`, open a PR, CI runs `aethis test`, and the reviewer approves before publishing.

CLI reference →
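The CI step in that scenario might look like this as a GitHub Actions fragment. Only `uv tool install aethis-cli` and `aethis test` are documented above; everything else, including how `AETHIS_API_KEY` reaches the runner, is an assumption about your pipeline.

```yaml
# Hypothetical CI fragment: validate rulesets on every pull request.
- name: Validate Aethis rulesets
  env:
    AETHIS_API_KEY: ${{ secrets.AETHIS_API_KEY }}
  run: |
    uv tool install aethis-cli
    aethis test
```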

Python SDK

**Use when:** you're shipping a Python service that calls Aethis from a server context: FastAPI, Django, a Celery worker, a notebook. The SDK provides typed Pydantic response models, sync and async clients, and a stateful `DecisionSession` adapter for wizard / chatbot intake.

**Concrete scenario:** a FastAPI eligibility service holds one `AsyncAethis` instance per process, exposes `/eligibility/free-school-meals`, and forwards the typed `DecideResponse` envelope (with `decision_id`, `inputs_hash`, and `trace`) to the caller. The decision path is fully async, key-pooled, and pre-typed for IDE autocompletion.

Python SDK reference →
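A minimal sketch of that async decision path, with the framework layer omitted to keep it self-contained. The client surface (`client.decide(ruleset=…, inputs=…)`) and the import path are assumptions inferred from the names above (`AsyncAethis`, `DecideResponse`); consult the SDK reference for the real signatures.

```python
import asyncio

# from aethis import AsyncAethis  # assumed import path; see the SDK reference

async def decide_free_school_meals(client, household: dict) -> dict:
    """Forward applicant data to Aethis and return the audit-relevant
    fields of the typed DecideResponse envelope."""
    resp = await client.decide(ruleset="free-school-meals", inputs=household)
    return {
        "decision_id": resp.decision_id,
        "inputs_hash": resp.inputs_hash,
        "trace": resp.trace,
    }
```

In a FastAPI handler you would construct the client once per process and pass it in, so every request shares the same connection pool.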

REST API

**Use when:** you're building a product that integrates Aethis: a mortgage pre-qualification backend that calls `POST /decide` in real time, a compliance dashboard that shows `GET /schema` for each active ruleset, or a custom authoring pipeline that drives the API directly. Decision endpoints are safe to call client-side (no API key required); authoring endpoints require a key and should be called from your server.

**Concrete scenario:** a lending application calls `POST /api/v1/public/decide` with applicant data on every form submission. Under 5 ms per call, no infrastructure, full audit trail stored in the response.

REST API →
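That form-submission call can be sketched with only the standard library. The endpoint path comes from the scenario above; the base URL and the payload field names (`ruleset`, `inputs`) are assumptions — check `GET /schema` for the actual input contract of your ruleset.

```python
import json
import urllib.request

def build_decide_request(base_url: str, ruleset: str, inputs: dict) -> urllib.request.Request:
    """Build a POST request to the public decide endpoint (no API key needed)."""
    body = json.dumps({"ruleset": ruleset, "inputs": inputs}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/public/decide",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_decide_request(
    "https://api.aethis.ai",  # assumed base URL
    "free-school-meals",
    {"household_income": 14000, "children": 2},
)
# urllib.request.urlopen(req) would return the decision envelope
# (decision_id, inputs_hash, trace) described in the SDK section.
```

Because no key is required, the same request can be issued straight from a browser form handler in any stack.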