Build conversational AI voice agents with ElevenLabs Platform using React, JavaScript, React Native, or Swift SDKs. Configure agents, tools (client/server/MCP), RAG knowledge bases, multi-voice, and Scribe real-time STT. Use when: building voice chat interfaces, implementing AI phone agents with Twilio, configuring agent workflows or tools, adding RAG knowledge bases, testing with CLI "agents as code", or troubleshooting deprecated @11labs packages, Android audio cutoff, CSP violations, dynamic variables, or WebRTC config. Keywords: ElevenLabs Agents, ElevenLabs voice agents, AI voice agents, conversational AI, @elevenlabs/react, @elevenlabs/client, @elevenlabs/react-native, @elevenlabs/elevenlabs-js, @elevenlabs/agents-cli, elevenlabs SDK, voice AI, TTS, text-to-speech, ASR, speech recognition, turn-taking model, WebRTC voice, WebSocket voice, ElevenLabs conversation, agent system prompt, agent tools, agent knowledge base, RAG voice agents, multi-voice agents, pronunciation dictionary, voice speed control, elevenlabs scribe, @11labs deprecated, Android audio cutoff, CSP violation elevenlabs, dynamic variables elevenlabs, case-sensitive tool names, webhook authentication, post-call webhook, webhook payload schema, ElevenLabs-Signature header, transcript null message, call_successful string, webhook cost credits USD, charging llm_price, user context extraction, llm_usage tokens, data_collection_results, evaluation_criteria_results, feedback thumb_rating, interrupted turn, source_medium, rag_retrieval_info, has_audio has_user_audio has_response_audio
This skill inherits all available tools. When active, it can use any tool Claude has access to.

Additional assets for this skill:
- README.md
- assets/agent-config-schema.json
- assets/ci-cd-example.yml
- assets/javascript-sdk-boilerplate.js
- assets/react-native-boilerplate.tsx
- assets/react-sdk-boilerplate.tsx
- assets/swift-sdk-boilerplate.swift
- assets/system-prompt-template.md
- assets/widget-embed-template.html
- references/api-reference.md
- references/cli-commands.md
- references/compliance-guide.md
- references/cost-optimization.md
- references/system-prompt-guide.md
- references/testing-guide.md
- references/tool-examples.md
- references/workflow-examples.md
- scripts/create-agent.sh
- scripts/deploy-agent.sh
- scripts/simulate-conversation.sh
ElevenLabs Agents Platform is a comprehensive solution for building production-ready conversational AI voice agents. The platform coordinates four core components: speech-to-text (ASR), a language model (LLM), text-to-speech (TTS), and a turn-taking model.
ElevenLabs migrated to new scoped packages in August 2025. Current packages:
npm install @elevenlabs/react@0.11.3 # React SDK
npm install @elevenlabs/client@0.11.3 # JavaScript SDK
npm install @elevenlabs/react-native@0.5.4 # React Native SDK
npm install @elevenlabs/elevenlabs-js@2.25.0 # Base SDK (Python: elevenlabs@1.59.0)
npm install -g @elevenlabs/agents-cli@0.6.1 # CLI
DEPRECATED: @11labs/react, @11labs/client (uninstall if present)
⚠️ CRITICAL: v1 TTS models will be removed 2025-12-15. Migrate to Turbo v2/v2.5.
npm install @elevenlabs/react zod
import { useConversation } from '@elevenlabs/react';
const { startConversation, stopConversation, status } = useConversation({
agentId: 'your-agent-id',
signedUrl: '/api/elevenlabs/auth', // Recommended (secure)
// OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY,
clientTools: { /* browser-side tools */ },
onEvent: (event) => { /* transcript, agent_response, tool_call */ },
serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
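Signed URL route (sketch): keep the API key server-side and mint a signed URL on demand. Assumes the GET /v1/convai/conversation/get_signed_url endpoint and a signed_url response field; route path and env var names are illustrative — verify against the current API reference.

```typescript
// app/api/elevenlabs/auth/route.ts — mints a signed URL so the browser never sees the API key.
// Assumption: GET /v1/convai/conversation/get_signed_url returns { signed_url }.
export async function GET() {
  const agentId = process.env.ELEVENLABS_AGENT_ID;
  const res = await fetch(
    `https://api.elevenlabs.io/v1/convai/conversation/get_signed_url?agent_id=${agentId}`,
    { headers: { 'xi-api-key': process.env.ELEVENLABS_API_KEY! } }
  );
  if (!res.ok) {
    return Response.json({ error: 'Failed to get signed URL' }, { status: 500 });
  }
  const { signed_url } = await res.json();
  return Response.json({ signedUrl: signed_url });
}
```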
npm install -g @elevenlabs/agents-cli
elevenlabs auth login
elevenlabs agents init # Creates agents.json, tools.json, tests.json
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env dev # Deploy
elevenlabs agents test "Bot" # Test
import { ElevenLabsClient } from '@elevenlabs/elevenlabs-js';
const client = new ElevenLabsClient({ apiKey: process.env.ELEVENLABS_API_KEY });
const agent = await client.agents.create({
name: 'Support Bot',
conversation_config: {
agent: { prompt: { prompt: "...", llm: "gpt-4o" }, language: "en" },
tts: { model_id: "eleven_turbo_v2_5", voice_id: "your-voice-id" }
}
});
1. Personality - Identity, role, character traits
2. Environment - Communication context (phone, web, video)
3. Tone - Formality, speech patterns, verbosity
4. Goal - Objectives and success criteria
5. Guardrails - Boundaries, prohibited topics, ethical constraints
6. Tools - Available capabilities and when to use them
Template:
{
"agent": {
"prompt": {
"prompt": "Personality:\n[Agent identity and role]\n\nEnvironment:\n[Communication context]\n\nTone:\n[Speech style]\n\nGoal:\n[Primary objectives]\n\nGuardrails:\n[Boundaries and constraints]\n\nTools:\n[Available tools and usage]",
"llm": "gpt-4o", // gpt-5.1, claude-sonnet-4-5, gemini-3-pro-preview
"temperature": 0.7
}
}
}
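If you create agents programmatically, the six building blocks can be assembled into the string passed to conversation_config.agent.prompt.prompt — a minimal sketch (section texts are placeholders):

```typescript
// Assemble the six building blocks into a single system prompt string.
interface PromptSections {
  personality: string;
  environment: string;
  tone: string;
  goal: string;
  guardrails: string;
  tools: string;
}

function buildSystemPrompt(s: PromptSections): string {
  return [
    `Personality:\n${s.personality}`,
    `Environment:\n${s.environment}`,
    `Tone:\n${s.tone}`,
    `Goal:\n${s.goal}`,
    `Guardrails:\n${s.guardrails}`,
    `Tools:\n${s.tools}`,
  ].join('\n\n');
}
```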
2025 LLM Models:
- gpt-5.1, gpt-5.1-2025-11-13 (Oct 2025)
- claude-sonnet-4-5, claude-sonnet-4-5@20250929 (Oct 2025)
- gemini-3-pro-preview (2025)
- gemini-2.5-flash-preview-09-2025 (Oct 2025)

Turn-Taking Modes:

| Mode | Behavior | Best For |
|---|---|---|
| Eager | Responds quickly | Fast-paced support, quick orders |
| Normal | Balanced (default) | General customer service |
| Patient | Waits longer | Information collection, therapy |
{ "conversation_config": { "turn": { "mode": "patient" } } }
Workflow Features:
- edge_order (determinism, Oct 2025)

{
"workflow": {
"nodes": [
{ "id": "node_1", "type": "subagent", "config": { "system_prompt": "...", "turn_eagerness": "patient" } },
{ "id": "node_2", "type": "tool", "tool_name": "transfer_to_human" }
],
"edges": [{ "from": "node_1", "to": "node_2", "condition": "escalation", "edge_order": 1 }]
}
}
Agent Management (2025):
- archived: true field (Oct 2025)

Dynamic Variables: Use {{var_name}} syntax in prompts, messages, and tool parameters.
System Variables:
- {{system__agent_id}}, {{system__conversation_id}}
- {{system__caller_id}}, {{system__called_number}} (telephony)
- {{system__call_duration_secs}}, {{system__time_utc}}
- {{system__call_sid}} (Twilio only)

Custom Variables:
await client.conversations.create({
agent_id: "agent_123",
dynamic_variables: { user_name: "John", account_tier: "premium" }
});
Secret Variables: {{secret__api_key}} (headers only, never sent to LLM)
⚠️ Error: Missing variables cause "Missing required dynamic variables" - always provide all referenced variables.
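A cheap pre-flight check avoids this error — a sketch that scans the prompt for {{placeholders}} (skipping system__ and secret__ variables, which the platform supplies) and reports anything missing:

```typescript
// Find {{placeholders}} referenced in a prompt that were not provided as dynamic variables.
function findMissingDynamicVariables(
  prompt: string,
  dynamicVariables: Record<string, string | number | boolean>
): string[] {
  const referenced = [...prompt.matchAll(/\{\{\s*(\w+)\s*\}\}/g)].map((m) => m[1]);
  return referenced.filter(
    (name) =>
      !name.startsWith('system__') &&
      !name.startsWith('secret__') &&
      !(name in dynamicVariables)
  );
}

// Example: fail fast before the conversation starts.
const missing = findMissingDynamicVariables(
  'Hello {{user_name}}, your tier is {{account_tier}}.',
  { user_name: 'John' }
);
if (missing.length > 0) {
  throw new Error(`Missing required dynamic variables: ${missing.join(', ')}`);
}
```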
Multi-Voice - Switch voices dynamically (adds ~200ms latency per switch):
{ "prompt": "When speaking as customer, use voice_id 'voice_abc'. As agent, use 'voice_def'." }
Pronunciation Dictionary - IPA, CMU, word substitutions (Turbo v2/v2.5 only):
{
"pronunciation_dictionary": [
{ "word": "API", "pronunciation": "ey-pee-ay", "format": "cmu" },
{ "word": "AI", "substitution": "artificial intelligence" }
]
}
PATCH Support (Aug 2025) - Update dictionaries without replacement
Speed Control - 0.7x-1.2x (use 0.9x-1.1x for natural sound):
{ "voice_settings": { "speed": 1.0 } }
Voice Cloning Best Practices: use clean audio with minimal background noise, keep a consistent mic distance, and avoid extreme volumes in training samples.
32+ Languages with automatic detection and in-conversation switching.
Multi-Language Presets:
{
"language_presets": [
{ "language": "en", "voice_id": "en_voice", "first_message": "Hello!" },
{ "language": "es", "voice_id": "es_voice", "first_message": "¡Hola!" }
]
}
Enable agents to access large knowledge bases without loading entire documents into context.
Workflow: upload documents → compute RAG index → attach document IDs to the agent → agent retrieves the most relevant chunks at query time.
Configuration:
{
"agent": { "prompt": { "knowledge_base": ["doc_id_1", "doc_id_2"] } },
"knowledge_base_config": {
"max_chunks": 5,
"vector_distance_threshold": 0.8
}
}
API Upload:
const doc = await client.knowledgeBase.upload({ file: fs.createReadStream('docs.pdf'), name: 'Docs' });
await client.knowledgeBase.computeRagIndex({ document_id: doc.id, embedding_model: 'e5_mistral_7b' });
⚠️ Gotchas: RAG adds ~500ms latency. Check index status before use - indexing can take minutes.
Client tools execute in the browser or mobile app. Tool names are case-sensitive.
clientTools: {
updateCart: {
description: "Update shopping cart",
parameters: z.object({ item: z.string(), quantity: z.number() }),
handler: async ({ item, quantity }) => {
// Client-side logic
return { success: true };
}
}
}
HTTP requests to external APIs. PUT support added Apr 2025.
{
"name": "get_weather",
"url": "https://api.weather.com/{{user_id}}",
"method": "GET",
"headers": { "Authorization": "Bearer {{secret__api_key}}" },
"parameters": { "type": "object", "properties": { "city": { "type": "string" } } }
}
⚠️ Secret variables only in headers (not URL/body)
2025 Features:
Connect to MCP servers for databases, IDEs, data sources.
Configuration: Dashboard → Add Custom MCP Server → Configure SSE/HTTP endpoint
Approval Modes: Always Ask | Fine-Grained | No Approval
2025 Updates:
⚠️ Limitations: SSE/HTTP only. Not available for Zero Retention or HIPAA.
Built-in conversation control (no external APIs):
- end_call, detect_language, transfer_agent
- transfer_to_number (telephony)
- dtmf_playpad, voicemail_detection (telephony)
- 2025: use_out_of_band_dtmf flag for telephony integration
const { startConversation, stopConversation, status, isSpeaking } = useConversation({
agentId: 'your-agent-id',
signedUrl: '/api/auth', // OR apiKey: process.env.NEXT_PUBLIC_ELEVENLABS_API_KEY
clientTools: { /* ... */ },
onEvent: (event) => { /* transcript, agent_response, tool_call, agent_tool_request (Oct 2025) */ },
onConnect/onDisconnect/onError,
serverLocation: 'us' // 'eu-residency' | 'in-residency' | 'global'
});
2025 Events:
- agent_chat_response_part - Streaming responses (Oct 2025)
- agent_tool_request - Tool interaction tracking (Oct 2025)

| Feature | WebSocket | WebRTC (Jul 2025 rollout) |
|---|---|---|
| Auth | signedUrl | conversationToken |
| Audio | Configurable (16k/24k/48k) | PCM_48000 (hardcoded) |
| Latency | Standard | Lower |
| Best For | Flexibility | Low-latency |
⚠️ WebRTC: Hardcoded PCM_48000, limited device switching
SDKs:
- @elevenlabs/react@0.11.3
- @elevenlabs/client@0.11.3 - new Conversation({...})
- @elevenlabs/react-native@0.5.4 - Expo SDK 47+, iOS/macOS (custom build required, no Expo Go)
- Widget: <script src="https://elevenlabs.io/convai-widget/index.js"></script>

Scribe (real-time STT): Real-time transcription with word-level timestamps. Single-use tokens, not API keys.
const { connect, startRecording, stopRecording, transcript, partialTranscript } = useScribe({
token: async () => (await fetch('/api/scribe/token')).json().then(d => d.token),
commitStrategy: 'vad', // 'vad' (auto on silence) | 'manual' (explicit .commit())
sampleRate: 16000, // 16000 or 24000
onPartialTranscript/onFinalTranscript/onError
});
Events: PARTIAL_TRANSCRIPT, FINAL_TRANSCRIPT_WITH_TIMESTAMPS, SESSION_STARTED, ERROR
⚠️ Closed Beta - requires sales contact. For agents, use Agents Platform instead (LLM + TTS + two-way interaction).
Comprehensive automated testing with 9 new API endpoints for creating, managing, and executing tests.
Test Types: LLM response tests (evaluate the agent's reply against success criteria) and tool-call tests (verify the expected tool was called with the right parameters).
CLI Workflow:
# Create test
elevenlabs tests add "Refund Test" --template basic-llm
# Configure in test_configs/refund-test.json
{
"name": "Refund Test",
"scenario": "Customer requests refund",
"success_criteria": ["Agent acknowledges empathetically", "Verifies order details"],
"expected_tool_call": { "tool_name": "lookup_order", "parameters": { "order_id": "..." } }
}
# Deploy and execute
elevenlabs tests push
elevenlabs agents test "Support Agent"
9 New API Endpoints (Aug 2025):
- POST /v1/convai/tests - Create test
- GET /v1/convai/tests/:id - Retrieve test
- PATCH /v1/convai/tests/:id - Update test
- DELETE /v1/convai/tests/:id - Delete test
- POST /v1/convai/tests/:id/execute - Execute test
- GET /v1/convai/test-invocations - List invocations (pagination, agent filtering)
- POST /v1/convai/test-invocations/:id/resubmit - Resubmit failed test
- GET /v1/convai/test-results/:id - Get results
- GET /v1/convai/test-results/:id/debug - Detailed debugging info

Test Invocation Listing (Oct 2025):
const invocations = await client.convai.testInvocations.list({
agent_id: 'agent_123', // Filter by agent
page_size: 30, // Default 30, max 100
cursor: 'next_page_cursor' // Pagination
});
// Returns: test run counts, pass/fail stats, titles
Programmatic Testing:
const simulation = await client.agents.simulate({
agent_id: 'agent_123',
scenario: 'Refund request',
user_messages: ["I want a refund", "Order #12345"],
success_criteria: ["Acknowledges request", "Verifies order"]
});
console.log('Passed:', simulation.passed);
Agent Tracking (Oct 2025): Tests now include agent_id association for better organization
2025 Features:
- call_start_before_unix parameter
- aggregation_interval (hour/day/week/month)
- tool_latency_secs tracking

Conversation Analysis: Success evaluation (LLM-based), data collection fields, post-call webhooks
Access: Dashboard → Analytics | Post-call Webhooks | API
Data Retention: 2 years default (GDPR). Configure: { "transcripts": { "retention_days": 730 }, "audio": { "retention_days": 2190 } }
Encryption: TLS 1.3 (transit), AES-256 (rest)
Regional: serverLocation: 'eu-residency' | 'us' | 'global' | 'in-residency'
Zero Retention Mode: Immediate deletion (no history, analytics, webhooks, or MCP)
Compliance: GDPR (1-2 years), HIPAA (6 years), SOC 2 (automatic encryption)
LLM Caching: Up to 90% savings on repeated inputs. { "caching": { "enabled": true, "ttl_seconds": 3600 } }
Model Swapping: GPT-5.1, GPT-4o/mini, Claude Sonnet 4.5, Gemini 3 Pro/2.5 Flash (2025 models)
Burst Pricing: 3x concurrency limit at 2x cost. { "burst_pricing_enabled": true }
2025 Platform Updates:
Events: audio, transcript, agent_response, tool_call, agent_chat_response_part (streaming, Oct 2025), agent_tool_request (Oct 2025), conversation_state
Custom Models: Bring your own LLM (OpenAI-compatible endpoints). { "llm_config": { "custom": { "endpoint": "...", "api_key": "{{secret__key}}" } } }
Post-Call Webhooks: HMAC verification required. Return 200 or auto-disable after 10 failures. Payload includes conversation_id, transcript, analysis.
Chat Mode: Text-only (no ASR/TTS). { "chat_mode": true }. Saves ~200ms + costs.
Telephony: SIP (sip-static.rtc.elevenlabs.io), Twilio native, Vonage, RingCentral. 2025: Twilio keypad fix (Jul), SIP TLS remote_domains validation (Oct)
Installation & Auth:
npm install -g @elevenlabs/agents-cli@0.6.1
elevenlabs auth login
elevenlabs auth residency eu-residency # 'in-residency' | 'global'
export ELEVENLABS_API_KEY=your-api-key # For CI/CD
Project Structure: agents.json, tools.json, tests.json + agent_configs/, tool_configs/, test_configs/
Key Commands:
elevenlabs agents init
elevenlabs agents add "Bot" --template customer-service
elevenlabs agents push --env prod --dry-run # Preview
elevenlabs agents push --env prod # Deploy
elevenlabs agents pull # Import existing
elevenlabs agents test "Bot" # 2025: Enhanced testing
elevenlabs tools add-webhook "Weather" --config-path tool_configs/weather.json
elevenlabs tools push
elevenlabs tests add "Test" --template basic-llm
elevenlabs tests push
Multi-Environment: Create agent.dev.json, agent.staging.json, agent.prod.json for overrides
CI/CD: GitHub Actions with --dry-run validation before deploy
.gitignore: .env, .elevenlabs/, *.secret.json
Cause: Variables referenced in prompts not provided at conversation start
Solution: Provide all variables in dynamic_variables: { user_name: "John", ... }
Cause: Tool name mismatch (case-sensitive)
Solution: Ensure tool_ids: ["orderLookup"] matches name: "orderLookup" exactly
Cause: Incorrect HMAC signature, not returning 200, or 10+ failures
Solution: Verify hmac = crypto.createHmac('sha256', SECRET).update(payload).digest('hex') and return 200
⚠️ Header Name: Use ElevenLabs-Signature (NOT X-ElevenLabs-Signature - no X- prefix!)
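Verification sketch for an Express-style endpoint, following the formula above. Assumes the raw body is available and the header carries the hex digest directly; if your payloads use a timestamp-prefixed signature, parse the header before comparing.

```typescript
import crypto from 'node:crypto';
import express from 'express';

const app = express();

// Compute the HMAC over the exact raw bytes ElevenLabs sent, not a re-serialized object.
app.post('/webhooks/elevenlabs', express.raw({ type: 'application/json' }), (req, res) => {
  const secret = process.env.ELEVENLABS_WEBHOOK_SECRET!;
  const signature = req.header('ElevenLabs-Signature') ?? ''; // no X- prefix

  const expected = crypto.createHmac('sha256', secret).update(req.body).digest('hex');
  const valid =
    signature.length === expected.length &&
    crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expected));

  if (!valid) {
    res.status(401).send('invalid signature');
    return;
  }

  const event = JSON.parse(req.body.toString('utf8'));
  // Process asynchronously if needed, but always return 200 — repeated failures auto-disable the webhook.
  console.log('conversation_id:', event.data?.conversation_id ?? event.conversation_id);
  res.status(200).send('ok');
});
```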
Cause: Background noise, inconsistent mic distance, extreme volumes in training
Solution: Use clean audio, consistent distance, avoid extremes
Cause: English-trained voice for non-English language
Solution: Use language-matched voices: { "language": "es", "voice_id": "spanish_voice" }
Cause: CLI doesn't support restricted API keys
Solution: Use unrestricted API key for CLI
Cause: Hash-based change detection missed modification
Solution: elevenlabs agents init --override + elevenlabs agents pull + push
Cause: Schema doesn't match usage
Solution: Add clear descriptions: "description": "Order ID (format: ORD-12345)"
Cause: Index still computing (takes minutes)
Solution: Check index.status === 'ready' before using
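A simple polling helper; the status call is a placeholder — use whichever knowledge-base index method/endpoint your SDK version exposes:

```typescript
// Poll until the RAG index reports ready before attaching the document to an agent.
// fetchIndexStatus is a placeholder for your SDK's index-status call.
async function waitForRagIndex(
  fetchIndexStatus: () => Promise<{ status: string }>,
  { intervalMs = 5_000, timeoutMs = 10 * 60_000 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const index = await fetchIndexStatus();
    if (index.status === 'ready') return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // indexing can take minutes
  }
  throw new Error('Timed out waiting for RAG index to become ready');
}
```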
Cause: Network instability or incompatible browser
Solution: Use WebRTC instead, implement reconnection logic
Cause: Agent visibility or API key config
Solution: Check visibility (public/private), verify API key in prod, check allowlist
Cause: Allowlist enabled but using shared link
Solution: Configure allowlist domains or disable for testing
Cause: Edge conditions creating loops
Solution: Add max iteration limits, test all paths, explicit exit conditions
Cause: Burst not enabled in settings
Solution: { "call_limits": { "burst_pricing_enabled": true } }
Cause: MCP server slow/unreachable
Solution: Check URL accessible, verify transport (SSE/HTTP), check auth, monitor logs
Cause: Android needs time to switch audio mode
Solution: connectionDelay: { android: 3_000, ios: 0 } (3s for audio routing)
Cause: Strict CSP blocks blob: URLs. SDK uses Audio Worklets loaded as blobs
Solution: Self-host worklets:
1. Copy the worklet files: cp node_modules/@elevenlabs/client/dist/worklets/*.js public/elevenlabs/
2. Point the SDK at them: workletPaths: { 'rawAudioProcessor': '/elevenlabs/rawAudioProcessor.worklet.js', 'audioConcatProcessor': '/elevenlabs/audioConcatProcessor.worklet.js' }
3. Allow them in CSP: script-src 'self' https://elevenlabs.io; worker-src 'self';

Gotcha: Update worklets when upgrading @elevenlabs/client

Cause: Schema expects message: string but ElevenLabs sends null when agent makes tool calls
Solution: Use z.string().nullable() for message field in Zod schemas
// ❌ Fails on tool call turns:
message: z.string()
// ✅ Correct:
message: z.string().nullable()
Real payload example:
{ "role": "agent", "message": null, "tool_calls": [{ "tool_name": "my_tool", ... }] }
Cause: Schema expects call_successful: boolean but ElevenLabs sends "success" or "failure" strings
Solution: Accept both types and convert for database storage
// Schema:
call_successful: z.union([z.boolean(), z.string()]).optional()
// Conversion helper:
function parseCallSuccessful(value: unknown): boolean | undefined {
if (value === undefined || value === null) return undefined
if (typeof value === 'boolean') return value
if (typeof value === 'string') return value.toLowerCase() === 'success'
return undefined
}
Cause: Real ElevenLabs payloads have many undocumented fields that strict schemas reject
Undocumented fields in transcript turns:
- agent_metadata, multivoice_message, llm_override, rag_retrieval_info
- llm_usage, interrupted, original_message, source_medium
Solution: Add all as .optional() with z.any() for fields you don't process
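A sketch of a tolerant turn schema along these lines (field list from above, shapes intentionally loose; .passthrough() keeps any future fields instead of rejecting the payload):

```typescript
import { z } from 'zod';

const transcriptTurnSchema = z
  .object({
    role: z.string(),
    message: z.string().nullable().optional(),
    tool_calls: z.array(z.any()).optional(),
    // Undocumented fields observed in real payloads — accept but don't validate deeply:
    agent_metadata: z.any().optional(),
    multivoice_message: z.any().optional(),
    llm_override: z.any().optional(),
    rag_retrieval_info: z.any().optional(),
    llm_usage: z.any().optional(),
    interrupted: z.boolean().optional(),
    original_message: z.string().nullable().optional(),
    source_medium: z.string().nullable().optional(),
  })
  .passthrough();
```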
Debugging tip: Use https://webhook.site to capture real payloads, then test schema locally

Cause: metadata.cost contains ElevenLabs credits, not USD dollars. Displaying this directly shows wildly wrong values (e.g., "$78.0000" when actual cost is ~$0.003)
Solution: Extract actual USD from metadata.charging.llm_price instead
// ❌ Wrong - displays credits as dollars:
cost: metadata?.cost // Returns 78 (credits)
// ✅ Correct - actual USD cost:
const charging = metadata?.charging as any
cost: charging?.llm_price ?? null // Returns 0.0036 (USD)
Real payload structure:
{
"metadata": {
"cost": 78, // ← CREDITS, not dollars!
"charging": {
"llm_price": 0.0036188999999999995, // ← Actual USD cost
"llm_charge": 18, // LLM credits
"call_charge": 60, // Audio credits
"tier": "pro"
}
}
}
Note: llm_price only covers LLM costs. Audio costs may require separate calculation based on your plan.
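If you need a rough total, one approach is to convert the credit charges with your plan's per-credit rate — a sketch; USD_PER_CREDIT is a placeholder you must replace with your own plan's pricing:

```typescript
// Rough total: exact LLM USD from llm_price plus call/audio credits converted at YOUR plan's rate.
const USD_PER_CREDIT = 0.0001; // placeholder — not an official rate

function estimateTotalUsd(charging: { llm_price?: number; call_charge?: number }): number {
  const llmUsd = charging.llm_price ?? 0;
  const audioUsd = (charging.call_charge ?? 0) * USD_PER_CREDIT;
  return llmUsd + audioUsd;
}

// With the payload above: estimateTotalUsd({ llm_price: 0.0036, call_charge: 60 })
```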
Cause: Webhook contains authenticated user info from widget but code doesn't extract it
Solution: Extract dynamic_variables from conversation_initiation_client_data
const dynamicVars = data.conversation_initiation_client_data?.dynamic_variables
const callerName = dynamicVars?.user_name || null
const callerEmail = dynamicVars?.user_email || null
const currentPage = dynamicVars?.current_page || null
Payload example:
{
"conversation_initiation_client_data": {
"dynamic_variables": {
"user_name": "Jeremy Dawes",
"user_email": "jeremy@jezweb.net",
"current_page": "/dashboard/calls"
}
}
}
Cause: ElevenLabs agents can collect structured data during calls (configured in agent settings). This data is stored in analysis.data_collection_results but often not parsed/displayed in UI.
Solution: Parse the JSON and display collected fields with their values and rationales
const dataCollectionResults = analysis?.dataCollectionResults
? JSON.parse(analysis.dataCollectionResults)
: null
// Display each collected field:
Object.entries(dataCollectionResults).forEach(([key, data]) => {
console.log(`${key}: ${data.value} (${data.rationale})`)
})
Payload example:
{
"data_collection_results": {
"customer_name": { "value": "John Smith", "rationale": "Customer stated their name" },
"intent": { "value": "billing_inquiry", "rationale": "Asking about invoice" },
"callback_number": { "value": "+61400123456", "rationale": "Provided for callback" }
}
}
Cause: Custom success criteria (configured in agent) produce results in analysis.evaluation_criteria_results but often not parsed/displayed
Solution: Parse and show pass/fail status with rationales
const evaluationResults = analysis?.evaluationCriteriaResults
? JSON.parse(analysis.evaluationCriteriaResults)
: null
Object.entries(evaluationResults).forEach(([key, data]) => {
const passed = data.result === 'success' || data.result === true
console.log(`${key}: ${passed ? 'PASS' : 'FAIL'} - ${data.rationale}`)
})
Payload example:
{
"evaluation_criteria_results": {
"verified_identity": { "result": "success", "rationale": "Customer verified DOB" },
"resolved_issue": { "result": "failure", "rationale": "Escalated to human" }
}
}
Cause: User can provide thumbs up/down feedback. Stored in metadata.feedback.thumb_rating but not extracted
Solution: Extract and store the rating (1 = thumbs up, -1 = thumbs down)
const feedback = metadata?.feedback as any
const feedbackRating = feedback?.thumb_rating ?? null // 1, -1, or null
// Also available:
const likes = feedback?.likes // Array of things user liked
const dislikes = feedback?.dislikes // Array of things user disliked
Payload example:
{
"metadata": {
"feedback": {
"thumb_rating": 1,
"likes": ["helpful", "natural"],
"dislikes": []
}
}
}
Cause: Each transcript turn has valuable metadata that's often ignored
Solution: Store these fields per message for analytics and debugging
const turnAny = turn as any
const messageData = {
// ... existing fields
interrupted: turnAny.interrupted ?? null, // Was turn cut off by user?
sourceMedium: turnAny.source_medium ?? null, // Channel: web, phone, etc.
originalMessage: turnAny.original_message ?? null, // Pre-processed message
ragRetrievalInfo: turnAny.rag_retrieval_info // What knowledge was retrieved
? JSON.stringify(turnAny.rag_retrieval_info)
: null,
}
Use cases:
- interrupted: true → User spoke over agent (UX insight)
- source_medium → Analytics by channel
- rag_retrieval_info → Debug/improve knowledge base retrieval

Cause: Three new boolean fields coming in August 2025 webhooks that may break schemas
Solution: Add these fields to schemas now (as optional) to be ready
// In webhook payload (coming August 15, 2025):
has_audio: boolean // Was full audio recorded?
has_user_audio: boolean // Was user audio captured?
has_response_audio: boolean // Was agent audio captured?
// Schema (future-proof):
const schema = z.object({
// ... existing fields
has_audio: z.boolean().optional(),
has_user_audio: z.boolean().optional(),
has_response_audio: z.boolean().optional(),
})
Note: These match the existing fields in the GET Conversation API response
This skill composes well with:
Official Documentation:
Examples:
Community:
Production Tested: WordPress Auditor, Customer Support Agents, AgentFlow (webhook integration)
Last Updated: 2025-12-06
Package Versions: elevenlabs@1.59.0, @elevenlabs/elevenlabs-js@2.25.0, @elevenlabs/agents-cli@0.6.1, @elevenlabs/react@0.11.3, @elevenlabs/client@0.11.3, @elevenlabs/react-native@0.5.4