Overall Score: 47/100 · Grade: D

Debug Companion MCP

Enables AI coding agents to debug Python projects by running pytest, extracting failure locations, displaying code context around failures, and optionally requesting fix suggestions from Gemini.
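The workflow above (run pytest, parse failure locations, show surrounding code) can be sketched in Python. This is a hypothetical illustration, not the server's actual code: the function names, the `--tb=line` output format assumption, and the regex are all assumptions.

```python
import re
import subprocess

# With "--tb=line", pytest prints one "path/to/file.py:LINE: message" line
# per failure, which a simple regex can pick apart.
FAILURE_RE = re.compile(r"^(?P<path>[^\s:]+\.py):(?P<line>\d+):", re.MULTILINE)


def run_pytest(target: str = ".") -> str:
    """Run pytest in quiet, one-line-traceback mode and capture its output."""
    result = subprocess.run(
        ["pytest", target, "--tb=line", "-q"],
        capture_output=True, text=True,
    )
    return result.stdout + result.stderr


def extract_failure_locations(pytest_output: str) -> list[tuple[str, int]]:
    """Return (file, line) pairs found in pytest's failure output."""
    return [(m["path"], int(m["line"])) for m in FAILURE_RE.finditer(pytest_output)]


def code_context(path: str, line: int, radius: int = 3) -> str:
    """Return numbered source lines surrounding a failure location."""
    with open(path) as f:
        lines = f.readlines()
    lo, hi = max(0, line - 1 - radius), min(len(lines), line + radius)
    return "".join(f"{i + 1:4d} | {lines[i]}" for i in range(lo, hi))
```

The extracted `(file, line)` pairs and the surrounding context are what an agent (or Gemini, in the optional fix-suggestion step) would receive as input.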

Category: AI & Machine Learning
Author: shanirap
Last commit: 3 months, 1 week ago
Integrations: Google Gemini
Complete visibility — 4/4 applicable dimensions scored (Protocol and Reliability are N/A for local servers)
✓ Schema Quality ✓ Protocol ✓ Reliability ✓ Docs & Maintenance ✓ Security Hygiene ✓ Schema Interpretability
Dimension                 Score   Weight
Schema Quality            48      42%
Protocol Compliance       N/A     (local server)
Reliability               N/A     (local server)
Docs & Maintenance        33      25%
Security Hygiene          95      33%
Schema Interpretability   86      15%
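The page does not state how the scored dimensions combine into the 47/100 total, and the listed weights sum to 115%. As a purely illustrative sketch — not the scoreboard's actual formula — a renormalized weighted average over the four applicable dimensions would look like this (note it yields roughly 63, not 47, so other factors evidently apply):

```python
# Scores and weights as listed on this page; the composite formula is an
# assumption for illustration only.
SCORES = {
    "Schema Quality": (48, 0.42),
    "Docs & Maintenance": (33, 0.25),
    "Security Hygiene": (95, 0.33),
    "Schema Interpretability": (86, 0.15),
}


def weighted_composite(scores: dict[str, tuple[float, float]]) -> float:
    """Weighted average with weights renormalized to sum to 1."""
    total_weight = sum(w for _, w in scores.values())
    return sum(s * w for s, w in scores.values()) / total_weight
```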
[Charts not reproduced: 30-Day Trend, Score History, Category Trends]

Latest Health Check

Status: Down
Connect: 1ms
7-day Uptime: 0.0%
Checked 1 month ago

Static Analysis

Metric                   Score   Rating
Schema Completeness      40      Fair
Description Quality      60      Fair
Documentation Coverage   30      Poor
Maintenance Pulse        45      Fair
Dependency Health        75      Good
License Clarity          —       Poor
Version Hygiene          —       Poor
Analyzed 1 month ago

Protocol Compliance

Schema: Valid
Probed 1 month ago
Embed Badge

Add this to your README to display your MCP Scoreboard grade:

[![MCP Score](https://mcpscoreboard.com/badge/00a889c1-2f7a-45eb-bac5-47a3942c518c.svg)](https://mcpscoreboard.com/server/00a889c1-2f7a-45eb-bac5-47a3942c518c/)
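If the target page renders raw HTML rather than Markdown, an equivalent snippet using the same badge and link URLs would be:

```html
<!-- Same URLs as the Markdown badge above -->
<a href="https://mcpscoreboard.com/server/00a889c1-2f7a-45eb-bac5-47a3942c518c/">
  <img src="https://mcpscoreboard.com/badge/00a889c1-2f7a-45eb-bac5-47a3942c518c.svg" alt="MCP Score">
</a>
```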