Agent-to-Agent (A2A) protocol implementation patterns for Google ADK - exposing agents via A2A, consuming external agents, multi-agent communication, and protocol configuration. Use when building multi-agent systems, implementing A2A protocol, exposing agents as services, consuming remote agents, configuring agent cards, or when user mentions A2A, agent-to-agent, multi-agent collaboration, remote agents, or agent orchestration.
This skill is limited to using the following tools:
- examples/README.md
- examples/ecommerce-assistant.py
- examples/research-cluster.py
- scripts/consume-agent.sh
- scripts/expose-agent.sh
- scripts/generate-agent-card.sh
- scripts/validate-a2a.sh
- templates/a2a-client.py
- templates/a2a-server.py
- templates/agent-card.json
- templates/grpc-config.py
- templates/multi-agent-orchestration.py

This skill provides comprehensive patterns for implementing the Agent2Agent (A2A) protocol in Google's Agent Development Kit (ADK). The A2A protocol standardizes communication between AI agents, enabling multi-agent collaboration across different platforms and frameworks.
The Agent2Agent (A2A) protocol enables AI agents to:

- Expose their capabilities as services that other agents can discover and call
- Consume remote agents as sub-agents or tools
- Collaborate on complex tasks across different platforms and frameworks
Key Concept: A2A focuses on agent-to-agent collaboration in natural modalities, complementing MCP (Model Context Protocol), which handles tool and data connections.
When to use: Make your ADK agent available for other agents to consume
Template: templates/a2a-server.py
Key Components:
- AgentCard at /.well-known/agent.json - Advertises capabilities
- AgentExecutor - Handles incoming requests
- DefaultRequestHandler - Processes JSON-RPC messages
- A2AStarletteApplication - HTTP server implementation

Script: scripts/expose-agent.sh
When to use: Integrate remote A2A agents as sub-agents
Template: templates/a2a-client.py
Key Components:
- A2ACardResolver - Discovers remote agent capabilities
- send_task tool - Sends messages to remote agents

Script: scripts/consume-agent.sh
When to use: Orchestrate multiple specialized agents collaborating on complex tasks
Template: templates/multi-agent-orchestration.py
Pattern: A coordinator agent discovers specialist agents through their Agent Cards and delegates subtasks to them via send_task tools.
Example: examples/purchasing-concierge/
When to use: Define agent capabilities for discovery
Template: templates/agent-card.json
Contents: agent identity (id, name, description, version, url), capabilities (skills, modalities, streaming support), and protocol settings (version, transport).
Script: scripts/generate-agent-card.sh
# templates/a2a-server.py structure
from adk import Agent
from a2a import AgentExecutor, DefaultRequestHandler, AgentCard
class MyAgentExecutor(AgentExecutor):
    """Handle incoming A2A requests"""

    async def execute(self, request):
        # Process request using your agent
        result = await self.agent.run(request.message)
        return result
# Configure Agent Card
agent_card = AgentCard(
    name="my-agent",
    description="Agent description",
    capabilities=["skill1", "skill2"],
    endpoint="https://my-agent.example.com"
)
# Expose via HTTP
from a2a import A2AStarletteApplication
app = A2AStarletteApplication(
    executor=MyAgentExecutor(),
    card=agent_card
)
Deployment:
# Deploy to Cloud Run
bash scripts/expose-agent.sh --platform cloud-run
# Deploy to Agent Engine
bash scripts/expose-agent.sh --platform agent-engine
# Deploy to GKE
bash scripts/expose-agent.sh --platform gke
# templates/a2a-client.py structure
from adk import Agent
from a2a import A2ACardResolver, send_task
# Discover remote agent
resolver = A2ACardResolver()
agent_card = await resolver.resolve("https://remote-agent.example.com")
# Create tool to communicate with remote agent
send_task_tool = send_task(
    agent_url=agent_card.endpoint,
    session_id="unique-session-id"
)

# Use in your agent
my_agent = Agent(
    tools=[send_task_tool],
    # ... other config
)
# Agent can now invoke remote agent
result = await my_agent.run("Ask the remote agent to do something")
# templates/multi-agent-orchestration.py structure
from adk import Agent
from a2a import A2ACardResolver, send_task
# Discover specialist agents
resolver = A2ACardResolver()
research_agent = await resolver.resolve("https://research-agent.example.com")
analysis_agent = await resolver.resolve("https://analysis-agent.example.com")
writing_agent = await resolver.resolve("https://writing-agent.example.com")
# Coordinator agent
coordinator = Agent(
    name="coordinator",
    tools=[
        send_task(agent_url=research_agent.endpoint),
        send_task(agent_url=analysis_agent.endpoint),
        send_task(agent_url=writing_agent.endpoint)
    ],
    instructions="""
    You coordinate multiple specialist agents:
    1. Use research agent to gather information
    2. Use analysis agent to process findings
    3. Use writing agent to synthesize results
    """
)
# Execute multi-agent workflow
result = await coordinator.run("Research and write a report on AI agents")
{
  "id": "my-agent",
  "name": "My Agent",
  "description": "Description of agent capabilities",
  "version": "1.0.0",
  "url": "https://my-agent.example.com",
  "capabilities": {
    "skills": [
      {
        "name": "skill1",
        "description": "First skill description"
      },
      {
        "name": "skill2",
        "description": "Second skill description"
      }
    ],
    "modalities": ["text", "image"],
    "streaming": true
  },
  "protocol": {
    "version": "0.3",
    "transport": "grpc"
  }
}
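
Before publishing, a quick sanity check of the card's required fields can catch missing metadata. This is a minimal standard-library sketch; the field names follow the example above, not an exhaustive schema (use scripts/validate-a2a.sh for full validation):

# validate_card.py - minimal sanity check for an Agent Card JSON file
import json
import sys

REQUIRED_TOP_LEVEL = ("id", "name", "description", "version", "url", "capabilities", "protocol")

def check_card(path: str) -> list:
    with open(path) as f:
        card = json.load(f)
    problems = [key for key in REQUIRED_TOP_LEVEL if key not in card]
    if not card.get("capabilities", {}).get("skills"):
        problems.append("capabilities.skills is empty")
    return problems

if __name__ == "__main__":
    issues = check_card(sys.argv[1] if len(sys.argv) > 1 else "agent-card.json")
    print("Card looks valid" if not issues else f"Problems found: {issues}")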
Generation:
bash scripts/generate-agent-card.sh \
--name "my-agent" \
--description "Agent description" \
--skills "skill1,skill2" \
--modalities "text,image" \
--url "https://my-agent.example.com"
# templates/grpc-config.py
from a2a import A2AStarletteApplication, GrpcTransport
app = A2AStarletteApplication(
    executor=MyAgentExecutor(),
    transport=GrpcTransport(
        host="0.0.0.0",
        port=50051,
        secure=True,
        cert_file="/path/to/cert.pem",
        key_file="/path/to/key.pem"
    )
)
# templates/security-card.py
from a2a import SecurityCard, sign_card
# Create security card
security_card = SecurityCard(
    issuer="my-organization",
    audience=["trusted-agent-1", "trusted-agent-2"],
    permissions=["read", "write"]
)

# Sign the card
signed_card = sign_card(
    card=security_card,
    private_key="/path/to/private-key.pem"
)
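
A consuming agent would typically verify the signature before trusting a card. A sketch, using a hypothetical verify_card counterpart to sign_card above (not a confirmed API; adjust to whatever verification helper your a2a library provides):

# Hypothetical verification counterpart to sign_card (illustrative only)
from a2a import verify_card

is_trusted = verify_card(
    card=signed_card,
    public_key="/path/to/public-key.pem",
    expected_issuer="my-organization"
)
if not is_trusted:
    raise PermissionError("Security card failed signature or issuer check")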
Request:
{
  "id": "request-uuid",
  "jsonrpc": "2.0",
  "method": "message/send",
  "params": {
    "message": "Task description",
    "session_id": "session-uuid",
    "context": {}
  }
}
Response:
{
  "id": "request-uuid",
  "jsonrpc": "2.0",
  "result": {
    "message": "Agent response",
    "artifacts": [],
    "status": "completed"
  }
}
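
These messages can also be exercised directly over HTTP for debugging. A minimal sketch, assuming the agent accepts JSON-RPC POSTs at its base URL and that the requests package is installed:

# send_message.py - send a raw A2A JSON-RPC message/send request
import uuid
import requests

def send_message(agent_url: str, text: str, session_id: str) -> dict:
    payload = {
        "id": str(uuid.uuid4()),
        "jsonrpc": "2.0",
        "method": "message/send",
        "params": {"message": text, "session_id": session_id, "context": {}},
    }
    response = requests.post(agent_url, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["result"]

# Example:
# result = send_message("https://my-agent.example.com", "Summarize this doc", str(uuid.uuid4()))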
bash scripts/expose-agent.sh --platform cloud-run --region us-central1
What it does:
- Deploys the agent to the selected platform and publishes its Agent Card at /.well-known/agent.json

bash scripts/consume-agent.sh --url https://remote-agent.example.com
What it does:
- Resolves the remote agent's card and creates a send_task tool wrapper for it

bash scripts/generate-agent-card.sh \
--name "my-agent" \
--description "Agent description" \
--skills "research,analysis,writing"
What it does:
- Generates an Agent Card JSON to be served at /.well-known/agent.json

bash scripts/validate-a2a.sh --config agent-card.json
What it does:
- Validates the Agent Card and A2A configuration

Templates:

- templates/a2a-server.py - Server-side agent implementation
- templates/a2a-client.py - Client-side agent consumption
- templates/multi-agent-orchestration.py - Multi-agent coordination
- templates/grpc-config.py - gRPC transport configuration
- templates/security-card.py - Security card implementation
- templates/go/a2a-server.go - Go server implementation
- templates/go/a2a-client.go - Go client implementation
- templates/agent-card.json - Agent Card JSON structure
- templates/deployment-config.yaml - Cloud Run/GKE deployment
- templates/security-policy.json - Security configuration

Location: examples/research-cluster/
Architecture:
Communication: All agents communicate via A2A protocol
Location: examples/ecommerce-assistant/
Architecture:
Pattern: Hierarchical agent structure with A2A messaging
Location: examples/code-review/
Architecture:
Integration: Each specialist agent is independent A2A service
Location: examples/data-pipeline/
Architecture:
Workflow: Sequential A2A agent execution
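
As an illustration of this sequential pattern (a sketch, not the contents of examples/data-pipeline/; the stage URLs are placeholders), each stage's output can be fed to the next remote agent:

# Sequential A2A pipeline sketch: each stage consumes the previous stage's output
from a2a import A2ACardResolver, send_task

async def run_pipeline(raw_input: str) -> str:
    resolver = A2ACardResolver()
    # Placeholder stage URLs - replace with your deployed agents
    stage_urls = [
        "https://extract-agent.example.com",
        "https://transform-agent.example.com",
        "https://load-agent.example.com",
    ]
    result = raw_input
    for url in stage_urls:
        stage = await resolver.resolve(url)
        # Pass the previous stage's output as the next stage's task
        result = await send_task(stage, result)
    return result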
Best practice: serve the Agent Card at /.well-known/agent.json so other agents can discover your capabilities.

# Error handling for remote agent calls
try:
    result = await send_task(remote_agent, task)
except A2AConnectionError:
    # Handle network failures
    result = fallback_handler(task)
except A2AAuthenticationError:
    # Handle auth failures
    log_security_event()
except A2ATimeoutError:
    # Handle timeouts
    retry_with_backoff()
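
The snippet above calls retry_with_backoff() without defining it. One way to implement it, as a sketch that reuses the send_task call and exception name shown above:

import asyncio

async def retry_with_backoff(remote_agent, task, max_attempts=3, base_delay=1.0):
    """Retry a send_task call with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        try:
            return await send_task(remote_agent, task)
        except A2ATimeoutError:
            if attempt == max_attempts - 1:
                raise
            await asyncio.sleep(base_delay * (2 ** attempt))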
# Track A2A metrics
metrics.record('a2a.request.count', {'agent': 'remote-agent'})
metrics.record('a2a.latency', latency_ms)
metrics.record('a2a.error.rate', error_count)
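
To populate these metrics, the remote call can be timed and counted in one place. A sketch using the same metrics interface assumed above (not a specific monitoring library):

import time

async def timed_send_task(remote_agent, task):
    """Record request count, latency, and errors around a send_task call."""
    start = time.monotonic()
    try:
        return await send_task(remote_agent, task)
    except Exception:
        metrics.record('a2a.error.rate', 1)
        raise
    finally:
        latency_ms = (time.monotonic() - start) * 1000
        metrics.record('a2a.latency', latency_ms)
        metrics.record('a2a.request.count', {'agent': 'remote-agent'})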
# Test agent exposure
bash scripts/validate-a2a.sh --config agent-card.json
# Test agent consumption
bash scripts/test-a2a-client.sh --url https://remote-agent.example.com
# Test multi-agent workflow
bash scripts/test-orchestration.sh --config multi-agent-config.yaml
# Deploy A2A agent to Cloud Run
gcloud run deploy my-agent \
--source . \
--region us-central1 \
--allow-unauthenticated
Agent Card URL: https://my-agent-[hash]-uc.a.run.app/.well-known/agent.json
# Deploy to Agent Engine
adk deploy --platform agent-engine --agent my-agent
Features: Managed scaling, built-in monitoring, A2A native
# Deploy to GKE
kubectl apply -f templates/deployment-config.yaml
Benefits: Full control, custom scaling, multi-region
A2A agents can use different frameworks:
ADK Agent:
from adk import Agent
agent = Agent(name="adk-agent", ...)
CrewAI Agent:
from crewai import Agent
agent = Agent(role="crew-agent", ...)
LangGraph Agent:
from langgraph import StateGraph
agent = StateGraph(...)
All of these communicate via the A2A protocol - the underlying framework is transparent to clients.
# templates/gemini-integration.py
from adk import Agent
from vertexai.preview.generative_models import GenerativeModel
# ADK agent using Gemini
agent = Agent(
    name="gemini-agent",
    model=GenerativeModel("gemini-2.0-flash-exp"),
    tools=[send_task_tool]
)

# Expose via A2A
from a2a import A2AStarletteApplication
app = A2AStarletteApplication(
    executor=GeminiAgentExecutor(agent)
)
# templates/a2a-mcp-integration.py
from adk import Agent
from a2a import send_task
from mcp import use_mcp_server
# Agent with both A2A (agents) and MCP (tools)
agent = Agent(
    name="hybrid-agent",
    # A2A: Communicate with other agents
    tools=[
        send_task(agent_url="https://research-agent.example.com")
    ],
    # MCP: Connect to data sources
    mcps=[
        use_mcp_server("filesystem"),
        use_mcp_server("database")
    ]
)
Use Case: The agent uses MCP for data access and A2A for agent collaboration.
Environment Variables:
- GOOGLE_CLOUD_PROJECT - GCP project ID (for Gemini/Vertex AI)
- GOOGLE_APPLICATION_CREDENTIALS - Service account key path
- A2A_AGENT_URL - Your agent's public URL (for card generation)

Dependencies:
pip install google-adk[a2a]
pip install google-cloud-aiplatform
pip install grpcio # For gRPC transport
Infrastructure:
CRITICAL: When generating any configuration files or code:
NEVER hardcode actual API keys or secrets
NEVER include real credentials in examples
NEVER commit sensitive values to git
ALWAYS use placeholders: your_service_key_here
ALWAYS create .env.example with placeholders only
ALWAYS add .env* to .gitignore (except .env.example)
ALWAYS read from environment variables in code
ALWAYS document where to obtain keys
Placeholder format: {service}_{env}_your_key_here
Example:
# .env.example (safe to commit)
GOOGLE_CLOUD_PROJECT=your_project_id_here
GOOGLE_APPLICATION_CREDENTIALS=/path/to/your_service_account_key.json
A2A_AGENT_URL=https://your_agent_url_here
# .env (NEVER commit)
GOOGLE_CLOUD_PROJECT=actual-project-id
GOOGLE_APPLICATION_CREDENTIALS=/actual/path/to/key.json
A2A_AGENT_URL=https://my-agent-xyz.run.app
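
In application code, read these values from the environment rather than hardcoding them. A minimal standard-library sketch:

# config.py - load required settings from environment variables (no secrets in code)
import os

def require_env(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

GOOGLE_CLOUD_PROJECT = require_env("GOOGLE_CLOUD_PROJECT")
A2A_AGENT_URL = require_env("A2A_AGENT_URL")
# GOOGLE_APPLICATION_CREDENTIALS is picked up automatically by Google client libraries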
Agent Card Not Found:
- Verify /.well-known/agent.json is accessible (see the check sketch after this list)

Connection Refused:
Authentication Failed:
Message Format Error:
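
For the card and connection issues above, a quick reachability check confirms the Agent Card is being served. A sketch, assuming the requests package is installed:

# check_card.py - confirm the Agent Card is reachable
import sys
import requests

def check_agent_card(base_url: str) -> None:
    url = base_url.rstrip("/") + "/.well-known/agent.json"
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    card = response.json()
    protocol = card.get("protocol", {})
    print(f"Found agent card: {card.get('name')} (protocol version {protocol.get('version')})")

if __name__ == "__main__":
    check_agent_card(sys.argv[1])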
Official Documentation:
Code Examples:
Community:
Plugin: google-adk
Version: 1.0.0
Protocol Version: A2A v0.3+
Language Support: Python (stable), Go (stable)
Deployment Platforms: Cloud Run, Agent Engine, GKE