Overall score: 50/100 · Grade: D · Assessed: 43

Debate Agent MCP

Enables multi-agent code review with P0/P1/P2 severity scoring by orchestrating locally installed AI CLIs (Claude, Codex) to perform parallel analysis, deterministic scoring, and consensus-building on git diffs.
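The page does not show the server's internals, but the consensus step it describes can be sketched. Everything below — the agent names, the finding shape, and the merge rule (keep a finding if all agents report it, or if any agent rates it P0; most severe rating wins) — is an assumption for illustration, not the project's actual code.

```python
from collections import defaultdict

SEVERITY_RANK = {"P0": 0, "P1": 1, "P2": 2}  # lower rank = more severe

def build_consensus(findings_by_agent):
    """Merge per-agent findings keyed by (file, line).

    Assumed rule: keep a finding if every agent reported it, or if any
    agent rated it P0; the merged severity is the most severe reported.
    """
    merged = defaultdict(list)
    for findings in findings_by_agent.values():
        for f in findings:
            merged[(f["file"], f["line"])].append(f["severity"])

    consensus = []
    n_agents = len(findings_by_agent)
    for (path, line), severities in merged.items():
        worst = min(severities, key=SEVERITY_RANK.get)
        if len(severities) == n_agents or worst == "P0":
            consensus.append({"file": path, "line": line, "severity": worst})
    # Deterministic ordering: most severe first, then by file path
    return sorted(consensus,
                  key=lambda f: (SEVERITY_RANK[f["severity"]], f["file"]))

# Example: Claude alone flags a P0 (kept); both agents flag db.py:42
# (kept, most-severe rating wins); Codex alone flags ui.py:7 at P2 (dropped).
claude_findings = [
    {"file": "auth.py", "line": 10, "severity": "P0"},
    {"file": "db.py", "line": 42, "severity": "P2"},
]
codex_findings = [
    {"file": "db.py", "line": 42, "severity": "P1"},
    {"file": "ui.py", "line": 7, "severity": "P2"},
]
result = build_consensus({"claude": claude_findings, "codex": codex_findings})
```

A rule like this keeps the scoring deterministic: the output depends only on the set of findings, not on the order in which the agents respond.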

AI & Machine Learning · by ferdiangunawan · ★ 2 · Last commit: 3 months, 1 week ago · Anthropic
Assessed visibility — 3 of 6 dimensions scored:
Schema Quality ✓ · Protocol Compliance — · Reliability — · Docs & Maintenance ✓ · Security Hygiene ✓ · Schema Interpretability —
Schema Quality: 48 (42% weight)
Protocol Compliance: N/A (local server)
Reliability: N/A (local server)
Docs & Maintenance: 26 (25% weight)
Security Hygiene: 95 (33% weight)
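As a sanity check on the listed weights, assume (an assumption — the site's formula is not documented here) that the overall score is a weight-normalized average of the three scored dimensions:

```python
# Assumed formula: overall ≈ sum(score * weight) / sum(weight)
# for the three dimensions the page actually scores.
scores = {
    "Schema Quality":     (48, 0.42),
    "Docs & Maintenance": (26, 0.25),
    "Security Hygiene":   (95, 0.33),
}
total_weight = sum(w for _, w in scores.values())
weighted = sum(s * w for s, w in scores.values()) / total_weight
print(f"{weighted:.1f}")  # prints 58.0
```

That comes to roughly 58, above the displayed 50/100, so the site presumably applies further adjustments or penalties beyond a plain weighted average.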
30-Day Trend: score history and category trend charts (interactive, not shown)
Static Analysis

Metric                  Score   Rating
Schema Completeness     40      Fair
Description Quality     60      Fair
Documentation Coverage  30      Poor
Maintenance Pulse       30      Poor
Dependency Health       55      Fair
License Clarity         —       Poor
Version Hygiene         —       Poor

Analyzed 1 month ago
Embed Badge

Add this to your README to display your MCP Scoreboard grade:

MCP Score Badge
[![MCP Score](https://mcpscoreboard.com/badge/20ec3002-7d56-4794-a52a-013a64c0eb43.svg)](https://mcpscoreboard.com/server/20ec3002-7d56-4794-a52a-013a64c0eb43/)