Score: 45/100 · Grade D · ◔ Limited (24)

ICME Preflight

Jailbreak-proof guardrails for AI agents. Policy enforcement is powered by Automated Reasoning and formal verification: an SMT solver, not an LLM, decides whether an action passes or fails, so the check cannot be prompt-injected. Every decision produces a cryptographic ZK proof. Includes a FREE check_logic tool that catches contradictions in agent reasoning (budget overflows, impossible timelines, conflicting constraints) using the Z3 SMT solver. No account needed.

13 tools covering the full workflow:
- check_logic — FREE. Mathematically prove reasoning is consistent before acting on it.
- make_rules — write guardrails in plain English; ICME compiles them to formal logic via Automated Reasoning.
- check_action / quick_check — verify any agent action against your policy. SAT = allowed, UNSAT = blocked.
- verify_proof — independently verify the ZK receipt from any prior check.
- get_scenarios / run_tests — test your policy with AWS Automated Reasoning scenarios before deploying.
- Account & billing — create accou…
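The SAT = allowed / UNSAT = blocked decision described above can be sketched as follows. This is a hypothetical, standalone illustration, not the ICME API: ICME delegates the real check to the Z3 solver, while here a toy consistency check over simple budget-style constraints stands in for it.

```python
# Hypothetical sketch (not the ICME API): mimics the SAT = allowed /
# UNSAT = blocked decision. "bounds" gives (low, high) ranges for named
# quantities; "total_cap" caps their combined minimum spend. An empty
# feasible region (a contradiction) means UNSAT, so the action is blocked.

def check_action(bounds, total_cap):
    """Return "SAT" (action allowed) or "UNSAT" (blocked)."""
    for name, (low, high) in bounds.items():
        if low > high:  # impossible range, e.g. a deadline before the start
            return "UNSAT"
    if sum(low for low, _ in bounds.values()) > total_cap:
        return "UNSAT"  # budget overflow: minimums already exceed the cap
    return "SAT"

# Budget overflow: committed minimums (60 + 50) exceed the 100 cap -> blocked.
print(check_action({"hosting": (60, 60), "ads": (50, 80)}, total_cap=100))  # UNSAT
# Feasible plan (40 + 30 <= 100) -> allowed.
print(check_action({"hosting": (40, 60), "ads": (30, 40)}, total_cap=100))  # SAT
```

A real SMT solver generalizes this far beyond linear sums, but the contract is the same: a satisfiable constraint set passes, an unsatisfiable one is blocked.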

AWS
Limited visibility — 2/4 applicable dimensions scored
Scored: Protocol Compliance ✓, Security Hygiene ✓ · Not scored: Schema Quality ○, Docs & Maintenance ○ · Not applicable: Reliability, Schema Interpretability
Dimension             Score   Weight
Schema Quality        n/a     25%
Protocol Compliance   10      20%
Reliability           n/a     20%
Docs & Maintenance    n/a     15%
Security Hygiene      81      20%
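The overall 45/100 is consistent with averaging only the two scored dimensions after renormalizing their weights. A minimal sketch of that assumed (undocumented) methodology:

```python
# Assumed methodology (not documented by MCP Scoreboard): the overall score
# is the weight-renormalized average of the scored dimensions only.
# Unscored dimensions (Schema Quality, Reliability, Docs & Maintenance)
# drop out of both numerator and denominator.
scored = {"Protocol Compliance": (10, 0.20), "Security Hygiene": (81, 0.20)}
total_weight = sum(w for _, w in scored.values())
overall = sum(score * w for score, w in scored.values()) / total_weight
print(int(overall))  # 45  (45.5 truncated; matches the displayed 45/100)
```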
30-Day Trend

[Charts: Score History · Category Trends · 30-Day Uptime, spanning 30 days ago → Today]

Latest Health Check

Status: Down
Connect: 0ms
Checked: 2 weeks, 4 days ago

Protocol Compliance

Schema Valid: Yes
Auth Discovery: Probed 3 weeks ago
Embed Badge

Add this to your README to display your MCP Scoreboard grade:

MCP Score Badge
[![MCP Score](https://mcpscoreboard.com/badge/34be83fa-0c40-4e1b-aba7-3e5874eab335.svg)](https://mcpscoreboard.com/server/34be83fa-0c40-4e1b-aba7-3e5874eab335/)