Turn AI Agents Into
Production Systems.
AgentMesh Server is a production-ready A2A (Agent-to-Agent) coordination layer that lets your AI agents collaborate, share tools, and orchestrate complex workflows with real-time streaming and enterprise-grade security.
$ agentmesh dev
Starting AgentMesh Server...
✓ Message broker started (Redis)
✓ Calculator Agent initialized
✓ Analysis Agent initialized
✓ Memory Agent initialized
INFO: Uvicorn running on http://0.0.0.0:8000
# Send a message to agents
$ curl -X POST localhost:8000/ \
    -H "Content-Type: application/json" \
    -d '{"message": "What is 25 * 4?"}'
{"result": "25 * 4 = 100"}
Built with industry-leading technologies
- 🐍 Python
- ⚡ FastAPI
- 🔴 Redis
- ☸️ Kubernetes
- 🔗 MCP
- 📡 SSE
- 🔐 Keycloak
- 💻 OpenCode
The mesh that makes agents work together.
AgentMesh provides the A2A protocol–native platform for connecting agents. Routing, sessions, history, observability, and MCP tool integration—so agents can safely coordinate real work in production.
Distributed Agent Workers
Deploy workers on remote machines with codebases. Workers automatically poll for tasks, execute OpenCode agents, and stream results back in real-time.
# Worker polls for tasks
async def poll_loop(self):
    while self.running:
        tasks = await self.get_pending_tasks()
        for task in tasks:
            await self.execute_task(task)

# Execute with OpenCode
result = await self.run_opencode(
    codebase_path="/app",
    prompt="Refactor the auth module",
    agent_type="build",
)
Session History & Resumption
Browse past AI coding sessions from any device. Resume conversations exactly where you left off with full context preservation.
# List all sessions
GET /v1/opencode/codebases/{id}/sessions
# Resume a session
POST /v1/opencode/codebases/{id}/sessions/{session_id}/resume
{
"prompt": "Continue with the refactoring"
}
# Stream real-time output
GET /v1/opencode/tasks/{task_id}/output/stream
event: output
data: {"content": "Analyzing code..."}
Real-time Output Streaming
Watch agent responses as they happen via Server-Sent Events. Perfect for long-running tasks where you need immediate feedback.
// Subscribe to task output (Swift)
let url = URL(string: "\(serverURL)/tasks/\(taskId)/output/stream")!
let source = EventSource(url: url)
source.onMessage { event in
    if event.type == "output" {
        print(event.data) // Real-time output
    } else if event.type == "done" {
        print("Task completed!")
    }
}
Why teams choose AgentMesh
Modern AI use cases don't need one big agent—they need teams of agents that can talk to each other, call tools, and run close to your code and data.
Turn Agents Into Systems
Multi-agent workflows, agent-to-agent messaging, task chaining, history, and resumption. Great for coding agents, support, data pipelines, research assistants.
Run Where Data Lives
Distributed workers pull tasks from the server, run inside secure environments (dev machines, VPCs, on-prem), and stream results back. No "upload your entire repo" nonsense.
Enterprise-Ready Day One
Keycloak SSO, RBAC for agents/codebases, Kubernetes Helm charts, Redis broker, network policies, observability hooks. Built to pass the "ship to a regulated org" test.
Open Protocol, No Lock-in
Implements the A2A protocol plus MCP for tools. Apache-licensed core; choose self-hosted or a managed Pro / Enterprise offering.
MCP Tool Integration
Access external tools and resources through Model Context Protocol. File systems, REST APIs, databases—extend agent capabilities infinitely.
Real-time Observability
Web dashboard for agent supervision, live streaming output, session explorer. Human intervention when agents need guidance.
Built for real-world AI workflows
See how teams use AgentMesh to orchestrate AI agents across industries.
AI Coding Assistants
Deploy AI coding agents across your development team. Workers run on developer machines with full codebase access, streaming results back through the central server.
- OpenCode integration
- Session resumption
- Multi-repo support
Customer Support Automation
Orchestrate multiple specialized agents to handle support tickets. Route queries to the right agent, escalate to humans when needed.
- Human-in-the-loop
- Agent handoffs
- Real-time monitoring
Data Pipeline Orchestration
Coordinate AI agents for ETL workflows. One agent extracts, another transforms, another loads—all communicating through A2A.
- Task chaining
- Error handling
- Progress tracking
Research & Analysis
Deploy research agents that gather information, analysis agents that synthesize findings, and report agents that present results.
- Multi-agent workflows
- Knowledge sharing
- Collaborative reasoning
DevOps Automation
AI agents that monitor systems, diagnose issues, and coordinate remediation. From alerting to resolution, fully automated.
- Kubernetes native
- Incident response
- Auto-remediation
Content Generation
Coordinate specialized agents for different content types. One writes, another edits, another optimizes for SEO.
- Quality control
- Style consistency
- Multi-format output
Roadmap to the standard A2A runtime
We're building AgentMesh to be the production-grade foundation for teams serious about multi-agent systems.
Solidify the Core
Make AgentMesh a rock-solid, self-hostable A2A implementation.
- A2A protocol implementation
- Redis message broker & task queues
- SSE streaming for real-time output
- Session history & resumption API
- MCP client integration
- Keycloak OIDC integration
- Helm chart for Kubernetes
- CLI: agentmesh init & agentmesh dev
- Python SDK v1
- TypeScript client for web UI
- Hosted Pro MVP (single-region)
Teams Love It
Move from cool open source project to standard runtime for agent systems.
- Multi-tenant architecture
- Horizontal autoscaling policies
- OpenTelemetry traces & metrics
- Fine-grained RBAC & API tokens
- Secret management (Vault/SSM/AKV)
- Audit logs for compliance
- Rust worker SDK
- Node/TypeScript SDK
- Live workflow graph in dashboard
- Multi-region Pro (EU + US)
Enterprise & Ecosystem
Become the go-to A2A control plane for enterprises.
- Enterprise Edition (on-prem/VPC)
- SOC 2 Type II compliance
- Built-in orchestrator agent
- Policy engine for tool calls
- Content filters (PII, toxicity)
- Agent & Tool Registry marketplace
- Enterprise SLAs (99.99%)
- Dedicated TAM & 24/7 support
Loved by AI teams
See what developers are saying about AgentMesh.
"A2A Server completely transformed how we build AI features. Our agents now collaborate seamlessly, and the session resumption is a game-changer."
"We evaluated several multi-agent frameworks. A2A's Kubernetes-native approach and enterprise security made it the clear choice for production."
"The distributed worker architecture let us run coding agents on our secure infrastructure while managing everything centrally. Brilliant design."
Simple, transparent pricing
Start free with open source, or get managed hosting and premium support.
Open Source
Free
Self-host AgentMesh with full protocol support.
- Full A2A protocol implementation
- MCP tool integration
- Redis message broker
- Distributed workers
- Community support
- Helm charts for Kubernetes
- Apache 2.0 license
Pro
Popular
$249/mo
Billed annually (save 17%)
Managed AgentMesh for production teams.
- Everything in Open Source
- Managed cloud hosting
- Multi-region deployment
- Session history & resumption
- Real-time output streaming
- Keycloak SSO integration
- Email support + 99.9% SLA
- Usage analytics dashboard
Enterprise
Custom
For regulated industries and custom deployments.
- Everything in Pro
- Dedicated VPC / single-tenant
- Custom SLA (up to 99.99%)
- SAML/LDAP + fine-grained RBAC
- Audit logs & compliance
- Secret management (Vault/SSM)
- On-premise deployment
- 24/7 support + dedicated TAM
- Migration assistance
Start building with AgentMesh
Get your first multi-agent system running in minutes. Open source, production-ready, and backed by a growing community.
Frequently asked questions
Have a different question? Join the discussion on GitHub.
What is AgentMesh?
AgentMesh is an open-source agent-to-agent communication server that implements the A2A protocol with Model Context Protocol (MCP) integration. It enables AI agents to communicate, share tasks, and orchestrate complex workflows.
How does distributed worker architecture work?
Workers run on remote machines with access to codebases. They poll the central A2A server for tasks, execute them using OpenCode or other agents, and stream results back in real-time. This allows you to run AI coding agents on machines where the code lives.
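The poll-execute cycle described above can be sketched with plain asyncio. This is a toy stand-in, not the real AgentMesh SDK: the in-memory queue replaces the server's task API, and the class and method names simply mirror the snippet shown earlier on this page.

```python
import asyncio


class Worker:
    """Toy stand-in for an AgentMesh worker; the in-memory queue
    replaces the central server's task API."""

    def __init__(self, tasks):
        self.queue = list(tasks)
        self.done = []
        self.running = True

    async def get_pending_tasks(self):
        # In a real worker this would poll the central server over HTTP.
        pending, self.queue = self.queue[:], []
        return pending

    async def execute_task(self, task):
        # In a real worker this would run an OpenCode agent and
        # stream its output back to the server.
        self.done.append(f"executed:{task}")

    async def poll_loop(self):
        while self.running:
            for task in await self.get_pending_tasks():
                await self.execute_task(task)
            if not self.queue:
                self.running = False  # demo only: stop once drained


worker = Worker(["refactor-auth", "add-tests"])
asyncio.run(worker.poll_loop())
print(worker.done)  # → ['executed:refactor-auth', 'executed:add-tests']
```

A production loop would sleep between polls and keep running; the early exit here just makes the sketch terminate.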
Can I resume past AI conversations?
Yes! Session history is automatically synced from workers to the server. You can browse past sessions from any device and resume them with full context preservation. The AI continues exactly where you left off.
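Using only the endpoints documented above, a resume call can be sketched with Python's standard library. The base URL and the codebase/session IDs are placeholders; only the path shape comes from the API shown earlier.

```python
import json
from urllib.request import Request

BASE = "http://localhost:8000"  # placeholder: your AgentMesh server


def resume_request(codebase_id: str, session_id: str, prompt: str) -> Request:
    """Build the documented session-resume call (does not send it)."""
    url = (f"{BASE}/v1/opencode/codebases/{codebase_id}"
           f"/sessions/{session_id}/resume")
    body = json.dumps({"prompt": prompt}).encode()
    return Request(url, data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")


req = resume_request("cb-1", "sess-42", "Continue with the refactoring")
print(req.get_method(), req.full_url)
```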
What authentication methods are supported?
AgentMesh supports enterprise authentication via Keycloak with OAuth2/OIDC. This includes username/password authentication, token refresh, and role-based access control for agents and codebases.
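Keycloak exposes a standard OIDC token endpoint per realm, so a username/password login can be sketched as a Resource Owner Password grant. The host, realm name, and client ID below are placeholders, not AgentMesh defaults.

```python
from urllib.parse import urlencode
from urllib.request import Request

KEYCLOAK = "https://keycloak.example.com"  # placeholder host
REALM = "agentmesh"                        # placeholder realm


def token_request(username: str, password: str,
                  client_id: str = "agentmesh-cli") -> Request:
    """Build a password-grant request for Keycloak's standard OIDC
    token endpoint (does not send it)."""
    url = f"{KEYCLOAK}/realms/{REALM}/protocol/openid-connect/token"
    form = urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "username": username,
        "password": password,
    }).encode()
    return Request(url, data=form,
                   headers={"Content-Type": "application/x-www-form-urlencoded"},
                   method="POST")


req = token_request("alice", "s3cret")
print(req.full_url)
```

The JSON response from Keycloak carries `access_token` and `refresh_token` fields you would attach to subsequent API calls.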
How do I deploy to production?
We provide Helm charts for Kubernetes deployment with horizontal autoscaling, network policies, and TLS termination. A single command deploys the entire stack including Redis message broker and monitoring.
Is there a Swift/iOS client?
Yes! We have a native Swift Liquid Glass UI for iOS and macOS with Apple-style glassmorphism design. It supports real-time output streaming, session management, and Keycloak authentication.
What is MCP integration?
Model Context Protocol (MCP) allows agents to access external tools and resources. AgentMesh acts as an MCP client, enabling your agents to use file systems, databases, APIs, and other tools through a standardized interface.
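MCP speaks JSON-RPC 2.0 on the wire, so invoking a tool is a `tools/call` request. A sketch of building one (the tool name and arguments here are made up for illustration, not tools AgentMesh ships):

```python
import itertools
import json

_ids = itertools.count(1)  # JSON-RPC request IDs


def mcp_tool_call(name: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (MCP uses JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }


req = mcp_tool_call("read_file", {"path": "/app/README.md"})
print(json.dumps(req, indent=2))
```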
How does real-time streaming work?
We use Server-Sent Events (SSE) for real-time output streaming. When an agent runs a task, output is streamed line-by-line to the server, which broadcasts it to connected clients. You see responses as they happen.
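The `event:`/`data:` framing described here is plain SSE, so a client-side parser is small. A minimal sketch (not the AgentMesh client library) that turns a stream of SSE lines into events:

```python
import json


def parse_sse(lines):
    """Yield (event_type, data) pairs from an iterable of SSE lines.

    Per the SSE format: 'event:' sets the type, 'data:' lines
    accumulate the payload, and a blank line ends one event.
    """
    event, data = "message", []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":
            if data:
                yield event, "\n".join(data)
            event, data = "message", []


raw = [
    "event: output",
    'data: {"content": "Analyzing code..."}',
    "",
    "event: done",
    "data: {}",
    "",
]
for etype, payload in parse_sse(raw):
    print(etype, json.loads(payload))
```

In practice the lines would arrive incrementally over a long-lived HTTP response rather than from a list.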
Is AgentMesh free to use?
AgentMesh is completely open source under the Apache 2.0 license. You can use it for free, modify it, and deploy it anywhere. We also offer enterprise support and hosted solutions.
Ready to scale your AI agents?
Talk to our team about your use case. We'll help you get started.