
Documentation Index

Fetch the complete documentation index at: https://docs.aethis.ai/llms.txt

Use this file to discover all available pages before exploring further.

What it is

MCP (Model Context Protocol) is an open standard that lets AI coding agents call external tools directly from a conversation, with no shell and no files. aethis-mcp implements this protocol, exposing 25 tools for evaluating eligibility and authoring rules from legislation. Install it once; after that, your coding agent can call aethis_decide, aethis_create_ruleset, aethis_generate_and_test, and aethis_publish as naturally as it calls any other tool.

Decision tools (evaluate, schema, explain) work with no API key. Authoring tools (create, generate, publish) require an API key.

Install with aethis-cli

If you have aethis-cli installed, one command wires up the MCP server in your editor's config: it picks up the API key cached by aethis login, writes a canonical aethis server entry into the right config file for each target, and preserves any other MCP servers you already have.
```shell
# All four clients at once:
aethis mcp install --target all

# Or one at a time:
aethis mcp install --target claude-code      # writes ./.mcp.json (project-local)
aethis mcp install --target cursor           # ~/.cursor/mcp.json
aethis mcp install --target claude-desktop   # ~/Library/Application Support/Claude/...
aethis mcp install --target windsurf         # ~/.codeium/windsurf/mcp_config.json
```

Restart your editor to pick up the change. Re-run the command after aethis account generate rotates your key; the entry updates in place. Reverse with aethis mcp uninstall --target <client>, which removes only the aethis entry. Don't have aethis-cli? Use the manual install below.

Manual install (without aethis-cli)

```shell
# Decision tools only (no key needed)
claude mcp add aethis -- npx -y aethis-mcp

# With authoring access
claude mcp add aethis -e AETHIS_API_KEY=ak_live_... -- npx -y aethis-mcp
```

The API key must be set in the MCP client's config file, not in your shell profile: the MCP server process doesn't inherit shell environment variables.
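For clients configured by hand, the entry follows the standard MCP client config shape. A sketch of what a ~/.cursor/mcp.json entry might look like (the exact shape can vary by client, and the key shown is a placeholder):

```json
{
  "mcpServers": {
    "aethis": {
      "command": "npx",
      "args": ["-y", "aethis-mcp"],
      "env": { "AETHIS_API_KEY": "ak_live_..." }
    }
  }
}
```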

Prompts

MCP prompts are pre-built workflow guides that compatible clients can surface as selectable templates.
| Prompt | Description |
| --- | --- |
| aethis-author | Step-by-step TDD workflow: gather requirements → create ruleset → generate → refine → publish |
| aethis-decide | Decision workflow: find ruleset → get schema → evaluate. Accepts an optional ruleset_id argument |

Try it — no API key required

Decision tools work immediately. Here are two examples you can try now.

UK Free School Meals (child eligibility). Ask: "Is a 10-year-old at a state-funded school eligible for Free School Meals?"

```js
aethis_decide({
  ruleset_id: "aethis/uk-fsm/child-eligibility",
  field_values: { "child.age": 10, "child.school_type": "state_funded" },
  include_trace: true
})
```

```json
{
  "decision": "eligible",
  "fields_provided": 2,
  "fields_evaluated": 2,
  "trace": {
    "age_check": "PASS — age 10 is within 4–15 (FSM Regulations 2014, Regulation 3(1))",
    "school_type_check": "PASS — school_type is state_funded (FSM Regulations 2014, Regulation 3(2)(b))"
  }
}
```
Spacecraft crew certification (Vogon applicant). Ask: "Is a Vogon eligible for spacecraft crew certification?"

```js
aethis_decide({
  ruleset_id: "aethis/spacecraft-crew-certification",
  field_values: { "space.crew.species": "Vogon" },
  include_trace: true
})
```

```json
{
  "decision": "not_eligible",
  "fields_provided": 1,
  "fields_evaluated": 11,
  "trace": {
    "species_check": "FAIL — species is 'Vogon' (disqualifying, Section 3)"
  }
}
```
One field provided. The engine determined a Vogon was disqualified without asking about flight hours, medical certificates, or anything else. It found the shortest path to a decision from the facts it already had.

Authoring with a coding agent

The real value of the MCP server is authoring. Paste a policy document into Claude Code and ask it to author rules. The agent handles the mechanics — you provide domain knowledge at checkpoints.

Agent workflow with human checkpoints

```text
Phase 1 — Section discovery
  Agent: aethis_discover_sections(domain, sources)
  ⬜ HUMAN: Review proposed sections — confirm or redirect
  Agent: aethis_validate_sections (if expected list known)
  Agent: aethis_refine_sections (if wrong)

Phase 2 — Fields (per section)
  Agent: aethis_create_ruleset(name, section_id, source_text, [])
  Agent: aethis_discover_fields(project_id)
  ⬜ HUMAN: Review field names and types — confirm before writing tests
  Agent: aethis_refine_fields (if names wrong)
  Agent: writes test cases using confirmed field names

Phase 3 — Rules (per section)
  Agent: aethis_generate_and_test(project_id)      # 60-120s
  IF failures:
    Agent: aethis_explain_failure → diagnose
    ⬜ HUMAN: Confirm diagnosis or provide domain correction
    Agent: aethis_refine(project_id, feedback)
    ↻ Repeat until all tests pass
  Agent: aethis_publish(project_id)
```

What the agent can do autonomously

  • Run all tool calls (create, generate, test, refine, publish)
  • Write test cases from confirmed field names and source text
  • Diagnose failures using aethis_explain_failure and craft guidance from the diagnosis
  • Iterate the TDD loop without human input when failures are mechanical (boundary off by one, missing enum value)

What requires human judgment

  • Section boundaries — whether two sets of criteria belong in one section or two
  • Field naming — whether household.annual_net_earnings or household.uc_assessed_income better reflects the source
  • Domain corrections — when the agent misreads a legislative clause (e.g. treating “may” as “must”)
  • Test adequacy — whether 6 tests cover the section or whether edge cases are missing

Error recovery patterns

| Situation | Agent action |
| --- | --- |
| Generation times out (504) | Wait 2 minutes, call aethis_list_rulesets to check. Do not re-trigger. |
| Test fails with a clear hint | Call aethis_refine with guidance referencing the specific clause |
| Test fails with an unclear hint | Call aethis_explain_failure for a deeper diagnosis, then refine |
| Field name mismatch (test silently wrong) | Call aethis_validate_fields to compare expected vs. discovered |
| All tests pass but results feel wrong | Add more edge-case tests and regenerate; don't publish prematurely |
| Guidance not converging after 3 iterations | Escalate to a human; the source text may be ambiguous or contradictory |

Authoring access

Private beta. Rule authoring is an invite-only private beta; approval is required. Decision tools work now with no sign-up. Join the authoring waitlist →
Bring your own key. Pass anthropic_key on aethis_generate_and_test or aethis_refine. The key is used for that request only and is never stored.

Troubleshooting

| Error | Cause | Fix |
| --- | --- | --- |
| "API key is required" | AETHIS_API_KEY not set | Set it in the MCP client config (not your shell profile). Decision tools don't need a key. |
| "Ruleset not found" (404) | Wrong ID or archived ruleset | Use aethis_list_projects → aethis_list_rulesets to find the correct ID. |
| "Cannot publish: tests failing" | Tests don't pass | Fix with aethis_refine, or pass force: true to override (not recommended for production). |
| Rate limit exceeded (429) | Daily limit hit | Wait and retry. Contact eng@aethis.ai for a higher tier. |
| Generation timeout (504) | Client timed out; normal for complex rules (5–15 min server-side) | Wait, then call aethis_list_rulesets to check whether a ruleset appeared. Do not re-trigger. |
| Date field type error | DATE field passed as an ISO string | Pass an integer ordinal: python3 -c "from datetime import date; print(date(2025,4,13).toordinal())" |