Debate Agent MCP
Score: 50/100 (Grade D), ◐ Assessed
Enables multi-agent code review with P0/P1/P2 severity scoring by orchestrating locally installed AI CLIs (Claude, Codex) to perform parallel analysis, deterministic scoring, and consensus-building on git diffs.
Anthropic
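The description above can be illustrated with a rough orchestration sketch: collect a git diff, fan it out to reviewer CLIs in parallel, then reduce the replies deterministically. Everything here is hypothetical; the CLI names, the `-p` prompt flag, and the intersection-based consensus rule are illustrative stand-ins, not the server's actual implementation.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical reviewer commands; real CLI names and flags may differ.
REVIEWERS = ["claude", "codex"]

def get_diff(base: str = "HEAD~1") -> str:
    """Collect the git diff that every reviewer will analyze."""
    return subprocess.run(
        ["git", "diff", base], capture_output=True, text=True, check=True
    ).stdout

def review(cli: str, diff: str) -> str:
    """Ask one locally installed AI CLI to review the diff (illustrative invocation)."""
    prompt = "Review this diff; label each finding P0/P1/P2:\n" + diff
    return subprocess.run([cli, "-p", prompt], capture_output=True, text=True).stdout

def consensus(reviews: list[str]) -> dict[str, int]:
    """Deterministic scoring: per severity, keep only the count every reviewer reached."""
    return {sev: min(r.count(sev) for r in reviews) for sev in ("P0", "P1", "P2")}

if __name__ == "__main__":
    diff = get_diff()
    with ThreadPoolExecutor() as pool:  # parallel analysis across CLIs
        reviews = list(pool.map(lambda cli: review(cli, diff), REVIEWERS))
    print(consensus(reviews))
```

A real server would parse structured findings rather than count substrings, but the shape (parallel fan-out, deterministic reduce) matches the listing's description.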
Assessed visibility: 4/3 applicable dimensions scored
- ✓ Schema Quality
- ✓ Protocol
- — Reliability
- ✓ Docs & Maintenance
- ✓ Security Hygiene
- — Schema Interpretability
| Dimension | Score | Weight |
|---|---|---|
| Schema Quality | 48 | 42% |
| Protocol Compliance | N/A (local server) | — |
| Reliability | N/A (local server) | — |
| Docs & Maintenance | 26 | 25% |
| Security Hygiene | 95 | 33% |
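The weights on the three scored dimensions sum to 100%, which suggests the overall figure is a weighted mean. A minimal sketch under that assumption follows; note that a plain weighted mean of the listed scores gives 58, not the displayed 50, so the grader's unpublished formula evidently applies further adjustments.

```python
# Scores and weights copied from the listing above. The aggregation formula
# itself is an assumption: a plain weighted mean over applicable dimensions.
DIMENSIONS = {
    "Schema Quality":     (48, 0.42),
    "Docs & Maintenance": (26, 0.25),
    "Security Hygiene":   (95, 0.33),
}

def weighted_score(dims: dict[str, tuple[int, float]]) -> float:
    """Weighted mean; the listed weights already sum to 1.0."""
    return sum(score * weight for score, weight in dims.values())

print(round(weighted_score(DIMENSIONS)))  # 58, vs. the displayed overall 50
```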
(Charts: Score History, Category Trends)
Static Analysis
| Metric | Score | Rating |
|---|---|---|
| Schema Completeness | 40 | Fair |
| Description Quality | 60 | Fair |
| Documentation Coverage | 30 | Poor |
| Maintenance Pulse | 30 | Poor |
| Dependency Health | 55 | Fair |
| License Clarity | — | Poor |
| Version Hygiene | — | Poor |
Analyzed 1 month ago