Multi-server MCP proxy with AI context provenance tracking and quality feedback loop. Enables self-aware, self-improving AI through automatic lineage tracking.
This plugin is not yet in any themed marketplace. To install it, you'll need to add it from GitHub directly.
This plugin uses advanced features that require additional trust. Only install plugins from repositories you trust, and review the source code before installation.
Choose your preferred installation method below.
A marketplace is a collection of plugins. Every plugin gets an auto-generated marketplace JSON for individual installation, plus inclusion in category and themed collections. Add a marketplace once (step 1), then install any plugin from it (step 2).
One-time setup for access to all plugins
When to use: If you plan to install multiple plugins now or later
Step 1: Add the marketplace (one-time)
/plugin marketplace add https://claudepluginhub.com/marketplaces/all.json
Run this once to access all plugins
Step 2: Install this plugin
/plugin install mcp-proxy-tracing@all
Use this plugin's auto-generated marketplace JSON for individual installation
When to use: If you only want to try this specific plugin
Step 1: Add this plugin's marketplace
/plugin marketplace add https://claudepluginhub.com/marketplaces/plugins/mcp-proxy-tracing.json
Step 2: Install the plugin
/plugin install mcp-proxy-tracing@mcp-proxy-tracing
A fast and efficient Model Context Protocol (MCP) proxy server written in Rust. This proxy aggregates multiple MCP servers and provides a unified interface, with built-in monitoring, health checks, and a web UI for management.
New Features:
- All backend tools are exposed in the `mcp__proxy__{server}__{tool}` format
- 5 tools for explicit operations:
  - `mcp__proxy__tracing__get_trace` - View response lineage
  - `mcp__proxy__tracing__query_context_impact` - Assess context impact
  - `mcp__proxy__tracing__get_response_contexts` - List contributing contexts
  - `mcp__proxy__tracing__get_evolution_history` - Track version history
  - `mcp__proxy__tracing__submit_feedback` - Submit quality ratings
- 4 resources for automatic context enrichment:
  - `trace://quality/top-contexts` - High-quality information sources
  - `trace://quality/deprecated-contexts` - Low-quality contexts to avoid
  - `trace://quality/recent-feedback` - Quality feedback trends
  - `trace://stats/cache` - Performance metrics

See TRACING_TOOLS_QUICKSTART.md for the LLM agent usage guide.
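Once the proxy is registered with an MCP client, these tools are invoked through the standard MCP `tools/call` request. A sketch of a feedback submission is below; the argument names (`trace_id`, `rating`, `comment`) are illustrative assumptions, not the plugin's documented schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "mcp__proxy__tracing__submit_feedback",
    "arguments": {
      "trace_id": "abc123",
      "rating": 4,
      "comment": "Response correctly cited a top-quality context"
    }
  }
}
```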
Create a configuration file `mcp-proxy.yaml`:

servers:
example-server:
command: "mcp-server-example"
args: ["--port", "8080"]
transport:
type: stdio
restartOnFailure: true
proxy:
port: 3000
host: "0.0.0.0"
webUI:
enabled: true
port: 3001
cargo run -- --config mcp-proxy.yaml

The web UI is then available at http://localhost:3001
The proxy can run as an MCP server for Claude CLI, aggregating all your backend servers:
# Build the proxy
cargo build --release
# Run with Claude CLI
claude --mcp-config '{"mcpServers":{"proxy":{"command":"./target/release/mcp-rust-proxy","args":["--config","mcp-proxy-config.yaml","--stdio"]}}}'
What Claude gets:
- All backend tools, namespaced as `mcp__proxy__{server}__{tool}` to prevent conflicts
- Example tools available:
  - `mcp__proxy__memory__create_entities` - From memory server
  - `mcp__proxy__git__commit` - From git server
  - `mcp__proxy__tracing__submit_feedback` - Built-in tracing

See TRACING_TOOLS_QUICKSTART.md for full agent documentation.
MCP Rust Proxy works seamlessly with Claude Code to manage multiple MCP servers. Here are some example configurations:
servers:
filesystem-server:
command: "npx"
args: ["-y", "@modelcontextprotocol/server-filesystem", "/Users/username/projects"]
transport:
type: stdio
env:
NODE_OPTIONS: "--max-old-space-size=4096"
github-server:
command: "npx"
args: ["-y", "@modelcontextprotocol/server-github"]
transport:
type: stdio
env:
GITHUB_TOKEN: "${GITHUB_TOKEN}"
postgres-server:
command: "npx"
args: ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost/mydb"]
transport:
type: stdio
proxy:
port: 3000
host: "127.0.0.1"
webUI:
enabled: true
port: 3001
Then configure Claude Code to use the proxy via MCP remote server:
{
"mcpServers": {
"rust-proxy": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/server-remote", "http://localhost:3000"]
}
}
}
servers:
# Code intelligence server
code-intel:
command: "rust-analyzer"
args: ["--stdio"]
transport:
type: stdio
# Database tools
db-tools:
command: "npx"
args: ["-y", "@modelcontextprotocol/server-sqlite", "./dev.db"]
transport:
type: stdio
# Custom project server
project-server:
command: "python"
args: ["./scripts/mcp_server.py"]
transport:
type: stdio
env:
PROJECT_ROOT: "${PWD}"
DEBUG: "true"
# Health checks for critical servers
healthChecks:
code-intel:
enabled: true
intervalSeconds: 30
timeoutMs: 5000
threshold: 3
proxy:
port: 3000
connectionPoolSize: 10
maxConnectionsPerServer: 5
webUI:
enabled: true
port: 3001
apiKey: "${WEB_UI_API_KEY}"
servers:
api-gateway:
command: "mcp-api-gateway"
transport:
type: webSocket
url: "ws://api-gateway:8080/mcp"
restartOnFailure: true
maxRestarts: 5
restartDelayMs: 10000
ml-models:
command: "mcp-ml-server"
transport:
type: httpSse
url: "http://ml-server:9000/sse"
headers:
Authorization: "Bearer ${ML_API_KEY}"
vector-db:
command: "mcp-vector-server"
args: ["--collection", "production"]
transport:
type: stdio
env:
PINECONE_API_KEY: "${PINECONE_API_KEY}"
PINECONE_ENV: "production"
healthChecks:
api-gateway:
enabled: true
intervalSeconds: 10
timeoutMs: 3000
threshold: 2
ml-models:
enabled: true
intervalSeconds: 30
timeoutMs: 10000
proxy:
port: 3000
host: "0.0.0.0"
connectionPoolSize: 50
requestTimeoutMs: 30000
webUI:
enabled: true
port: 3001
host: "0.0.0.0"
apiKey: "${ADMIN_API_KEY}"
logging:
level: "info"
format: "json"
The proxy server can be configured using YAML, TOML, or JSON files. Configuration files are searched for in the following order:

1. mcp-proxy.toml
2. mcp-proxy.json
3. mcp-proxy.yaml
4. mcp-proxy.yml
All configuration values support environment variable substitution using the `${VAR}` syntax:
servers:
api-server:
command: "api-server"
env:
API_KEY: "${API_KEY}"
transport:
type: httpSse
url: "${API_URL:-http://localhost:8080}/sse"
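The `${VAR:-default}` form above appears to follow POSIX shell defaulting semantics (an assumption based on the example, not a documented guarantee): if the variable is unset, the text after `:-` is used instead.

```shell
# Fallback when API_URL is unset:
unset API_URL
echo "${API_URL:-http://localhost:8080}/sse"    # prints http://localhost:8080/sse

# An exported value takes precedence over the default:
export API_URL="https://api.example.com"
echo "${API_URL:-http://localhost:8080}/sse"    # prints https://api.example.com/sse
```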
Each server configuration supports:
- `command`: The executable to run
- `args`: Command-line arguments
- `env`: Environment variables for the process
- `transport`: Transport configuration (stdio, httpSse, webSocket)
- `restartOnFailure`: Whether to restart on failure (default: true)
- `maxRestarts`: Maximum number of restart attempts (default: 3)
- `restartDelayMs`: Delay between restarts in milliseconds (default: 5000)

The proxy captures all server output to rotating log files:
- Location: `~/.mcp-proxy/logs/{server-name}/server.log`
- Format: `[timestamp] [STDOUT|STDERR] message`
- REST API: `GET /api/logs/{server}?lines=N&type=stdout|stderr`
- Streaming: `GET /api/logs/{server}/stream` (Server-Sent Events)

The web UI can be configured with:
- `enabled`: Whether to enable the web UI (default: true)
- `port`: Port to listen on (default: 3001)
- `host`: Host to bind to (default: "0.0.0.0")
- `apiKey`: Optional API key for authentication

The proxy server is built with Rust.
The project includes a Nix flake for reproducible builds and development environments:
# Enter development shell with all tools
nix develop
# Build the project
nix build
# Build for specific platforms
nix build .#x86_64-linux
nix build .#aarch64-linux
nix build .#x86_64-darwin # macOS only
nix build .#aarch64-darwin # macOS only
# Build Docker image
nix build .#docker
# Run directly
nix run github:zach-source/mcp-rust-proxy
# Install direnv: https://direnv.net
direnv allow
# Now all tools are automatically available when you cd into the project
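For `direnv allow` to take effect, the repository needs an `.envrc`; with Nix flakes this is typically the single line below (assuming nix-direnv or a recent direnv release with flake support):

```
use flake
```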
To use the binary cache for faster builds:
# Install cachix
nix-env -iA cachix -f https://cachix.org/api/v1/install
# Use the project's cache
cachix use mcp-rust-proxy
For maintainers building and pushing to cache:
# Build and push to cache
nix build .#x86_64-linux --print-out-paths | cachix push mcp-rust-proxy
# Build without UI (faster for development)
cargo build --release
# Build with UI (requires trunk)
BUILD_YEW_UI=1 cargo build --release
cargo test
src/
├── config/ # Configuration loading and validation
├── transport/ # Transport implementations (stdio, HTTP/SSE, WebSocket)
├── proxy/ # Core proxy logic and request routing
├── server/ # Server lifecycle management
├── state/ # Application state and metrics
├── logging/ # File-based logging system
├── web/ # Web UI and REST API
└── main.rs # Application entry point
yew-ui/ # Rust/WASM web UI
├── src/
│ ├── components/ # Yew components
│ ├── api/ # API client and WebSocket handling
│ └── types/ # Shared types
└── style.css # UI styles
Server logs are written to `~/.mcp-proxy/logs/{server-name}/server.log`. They can also be fetched through the REST API:

curl "http://localhost:3001/api/logs/server-name?lines=50"
curl "http://localhost:3001/api/logs/server-name/stream"
Prometheus metrics are available at /metrics:
mcp_proxy_requests_total
mcp_proxy_request_duration_seconds
mcp_proxy_active_connections
mcp_proxy_server_restarts_total
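These can be collected with a standard Prometheus `scrape_config`. The target below assumes `/metrics` is served on the main proxy port (3000); adjust it to wherever your deployment exposes the endpoint:

```yaml
scrape_configs:
  - job_name: "mcp-rust-proxy"
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:3000"]
```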
Configure health checks to monitor server availability:
healthChecks:
critical-server:
enabled: true
intervalSeconds: 30
timeoutMs: 5000
threshold: 3 # Failures before marking unhealthy
MIT
1.0.0