Build backend AI with Vercel AI SDK v5/v6. Covers v6 beta (Agent abstraction, tool approval, reranking), v4→v5 migration (breaking changes), latest models (GPT-5/5.1, Claude 4.x, Gemini 2.5), Workers startup fix, and 12 error solutions (AI_APICallError, AI_NoObjectGeneratedError, streamText silent errors). Use when: implementing AI SDK v5/v6, migrating v4→v5, troubleshooting errors, fixing Workers startup issues, or updating to latest models.
This skill inherits all available tools. When active, it can use any tool Claude has access to.

Additional assets for this skill:
- README.md
- VERIFICATION_REPORT.md
- references/links-to-official-docs.md
- references/production-patterns.md
- references/providers-quickstart.md
- references/top-errors.md
- references/v5-breaking-changes.md
- rules/ai-sdk-core.md
- scripts/check-versions.sh
- templates/agent-with-tools.ts
- templates/anthropic-setup.ts
- templates/cloudflare-worker-integration.ts
- templates/generate-object-zod.ts
- templates/generate-text-basic.ts
- templates/google-setup.ts
- templates/multi-step-execution.ts
- templates/nextjs-server-action.ts
- templates/openai-setup.ts
- templates/package.json
- templates/stream-object-zod.ts
Backend AI with Vercel AI SDK v5 and v6 Beta.
Installation:
npm install ai @ai-sdk/openai @ai-sdk/anthropic @ai-sdk/google zod
# Beta: npm install ai@beta @ai-sdk/openai@beta
Status: Beta (stable release planned end of 2025)
Latest: ai@6.0.0-beta.107 (Nov 22, 2025)
1. Agent Abstraction
Unified interface for building agents with the ToolLoopAgent class.
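A minimal sketch of the pattern, assuming the beta's ToolLoopAgent export and the option names from the v6 beta announcement (both may still change before stable):

```typescript
import { ToolLoopAgent, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Bundle model, system prompt, tools, and loop control into one reusable unit.
const weatherAgent = new ToolLoopAgent({
  model: openai('gpt-5'),
  system: 'You are a helpful weather assistant.',
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 21 }),
    }),
  },
  stopWhen: stepCountIs(5), // bound the tool loop
});

// Reuse the same agent across requests.
const { text } = await weatherAgent.generate({ prompt: 'Weather in Tokyo?' });
```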
2. Tool Execution Approval (Human-in-the-Loop)
Request user confirmation before executing tools.
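A hedged sketch of the flow: the beta announcement describes a needsApproval flag on tool definitions, but the flag name and the approval-response round-trip may differ in current beta builds:

```typescript
import { streamText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = streamText({
  model: openai('gpt-5'),
  prompt: 'Clean up the temp directory',
  tools: {
    deleteFiles: tool({
      description: 'Delete files at a path',
      inputSchema: z.object({ path: z.string() }),
      // When true, the loop pauses and emits a tool-approval request
      // to the client before execute() runs.
      needsApproval: true,
      execute: async ({ path }) => ({ deleted: path }),
    }),
  },
});
```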
3. Reranking Support
Improve search relevance by reordering documents.
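A sketch of the new rerank() call, assuming a Cohere reranking model; the exact option and result names follow the beta announcement and may change, so check the current beta docs:

```typescript
import { rerank } from 'ai';
import { cohere } from '@ai-sdk/cohere';

const documents = [
  'The Eiffel Tower is in Paris.',
  'Mount Fuji is in Japan.',
  'The Louvre is a museum in Paris.',
];

const { rerankedDocuments } = await rerank({
  model: cohere.reranking('rerank-v3.5'),
  query: 'landmarks in Paris',
  documents,
  topN: 2, // keep only the two most relevant documents
});
```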
4. Structured Output (Stable)
Combine multi-step tool calling with structured data generation.
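In v5 this lived behind experimental_output; a sketch of the stabilized form, assuming an Output.object strategy alongside tools (the option and result property names may differ in the beta):

```typescript
import { generateText, Output, tool, stepCountIs } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Look up the weather in Tokyo and summarize it.',
  tools: {
    weather: tool({
      description: 'Get weather for a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ location, temperature: 21 }),
    }),
  },
  stopWhen: stepCountIs(5),
  // Structured final answer, produced after the tool loop completes.
  output: Output.object({
    schema: z.object({ city: z.string(), summary: z.string() }),
  }),
});
```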
5. Call Options
Dynamic runtime configuration.
6. Image Editing (Coming Soon)
Native support for image transformation workflows.
Unlike the v4→v5 migration, v6 introduces minimal breaking changes.
Install Beta:
npm install ai@beta @ai-sdk/openai@beta @ai-sdk/react@beta
Official Docs: https://ai-sdk.dev/docs/announcing-ai-sdk-6-beta
GPT-5 (Aug 7, 2025):
GPT-5.1 (Nov 13, 2025):
import { openai } from '@ai-sdk/openai';
const gpt5 = openai('gpt-5');
const gpt51 = openai('gpt-5.1');
Claude 4 Family (May-Oct 2025):
import { anthropic } from '@ai-sdk/anthropic';
const sonnet45 = anthropic('claude-sonnet-4-5-20250929'); // Latest
const opus41 = anthropic('claude-opus-4-1-20250805');
const haiku45 = anthropic('claude-haiku-4-5-20251015');
Gemini 2.5 Family (Mar-Sept 2025):
import { google } from '@ai-sdk/google';
const pro = google('gemini-2.5-pro');
const flash = google('gemini-2.5-flash');
const lite = google('gemini-2.5-flash-lite');
- generateText() - Text completion with tools
- streamText() - Real-time streaming
- generateObject() - Structured output (Zod schemas)
- streamObject() - Streaming structured data
See official docs for usage: https://ai-sdk.dev/docs/ai-sdk-core
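For reference, a minimal v5 generateText call (the model name is illustrative):

```typescript
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text, usage } = await generateText({
  model: openai('gpt-4o'),
  system: 'You are a concise assistant.',
  prompt: 'Explain server-sent events in one sentence.',
});

console.log(text);
console.log(`tokens used: ${usage.totalTokens}`);
```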
When returning streaming responses from an API, use the correct method:
| Method | Output Format | Use Case |
|---|---|---|
| toTextStreamResponse() | Plain text chunks | Simple text streaming |
| toUIMessageStreamResponse() | SSE with JSON events | Chat UIs (text-start, text-delta, text-end, finish) |
For chat widgets and UIs, always use toUIMessageStreamResponse():
const result = streamText({
model: workersai('@cf/qwen/qwen3-30b-a3b-fp8'),
messages,
system: 'You are helpful.',
});
// ✅ For chat UIs - returns SSE with JSON events
return result.toUIMessageStreamResponse({
headers: { 'Access-Control-Allow-Origin': '*' },
});
// ❌ For simple text - returns plain text chunks only
return result.toTextStreamResponse();
Note: toDataStreamResponse() does NOT exist in AI SDK v5 (common misconception).
IMPORTANT: workers-ai-provider@2.x requires AI SDK v5, NOT v4.
# ✅ Correct - AI SDK v5 with workers-ai-provider v2
npm install ai@^5.0.0 workers-ai-provider@^2.0.0 zod@^3.25.0
# ❌ Wrong - AI SDK v4 causes error
npm install ai@^4.0.0 workers-ai-provider@^2.0.0
# Error: "AI SDK 4 only supports models that implement specification version v1"
Zod Version: AI SDK v5 requires zod@^3.25.0 or later for zod/v3 and zod/v4 exports. Older versions (3.22.x) cause build errors: "Could not resolve zod/v4".
Problem: AI SDK v5 + Zod causes >270ms startup time (exceeds Workers 400ms limit).
Solution:
// ❌ BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
const workersai = createWorkersAI({ binding: env.AI });
// ✅ GOOD: Lazy initialization inside handler
app.post('/chat', async (c) => {
const { createWorkersAI } = await import('workers-ai-provider');
const workersai = createWorkersAI({ binding: c.env.AI });
// ...
});
Additional: also avoid constructing complex Zod schemas at module top level; define them inside handlers so schema construction does not count against startup time.
Breaking Changes:
- parameters → inputSchema (Zod schema)
- args → input, result → output
- ToolExecutionError removed (now tool-error content parts)
- maxSteps parameter removed → use stopWhen(stepCountIs(n))

New in v5: UIMessage/ModelMessage split, SSE-based streaming, and stopWhen/stepCountIs loop control.
AI SDK v5 introduced extensive breaking changes. If migrating from v4, follow this guide.
Parameter Renames
- maxTokens → maxOutputTokens
- providerMetadata → providerOptions

Tool Definitions
- parameters → inputSchema
- args → input, result → output

Message Types
- CoreMessage → ModelMessage
- Message → UIMessage
- convertToCoreMessages → convertToModelMessages

Tool Error Handling
- ToolExecutionError class removed; tool errors now surface as tool-error content parts

Multi-Step Execution
- maxSteps → stopWhen with stepCountIs() or hasToolCall()

Message Structure
- content string → parts array

Streaming Architecture
- streaming now uses Server-Sent Events (SSE)

Tool Streaming
- toolCallStreaming option removed (tool-call streaming is always enabled)

Package Reorganization
- ai/rsc → @ai-sdk/rsc
- ai/react → @ai-sdk/react
- LangChainAdapter → @ai-sdk/langchain

Before (v4):
import { generateText } from 'ai';
const result = await generateText({
model: openai.chat('gpt-4'),
maxTokens: 500,
providerMetadata: { openai: { user: 'user-123' } },
tools: {
weather: {
description: 'Get weather',
parameters: z.object({ location: z.string() }),
execute: async (args) => { /* args.location */ },
},
},
maxSteps: 5,
});
After (v5):
import { generateText, tool, stepCountIs } from 'ai';
const result = await generateText({
model: openai('gpt-4'),
maxOutputTokens: 500,
providerOptions: { openai: { user: 'user-123' } },
tools: {
weather: tool({
description: 'Get weather',
inputSchema: z.object({ location: z.string() }),
execute: async ({ location }) => { /* input.location */ },
}),
},
stopWhen: stepCountIs(5),
});
- Rename maxTokens to maxOutputTokens
- Rename providerMetadata to providerOptions
- Rename parameters to inputSchema
- Rename args to input
- Replace maxSteps with stopWhen(stepCountIs(n))
- Replace CoreMessage with ModelMessage
- Update ToolExecutionError handling
- Update package imports (e.g., ai/rsc → @ai-sdk/rsc)

AI SDK provides a migration tool:
npx @ai-sdk/codemod upgrade
This will update most breaking changes automatically. Review changes carefully.
Official Migration Guide: https://ai-sdk.dev/docs/migration-guides/migration-guide-5-0
Cause: API request failed (network, auth, rate limit).
Solution:
import { APICallError } from 'ai';
try {
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello',
});
} catch (error) {
if (error instanceof APICallError) {
console.error('API call failed:', error.message);
console.error('Status code:', error.statusCode);
console.error('Response:', error.responseBody);
// Check common causes
if (error.statusCode === 401) {
// Invalid API key
} else if (error.statusCode === 429) {
// Rate limit - implement backoff
} else if (error.statusCode >= 500) {
// Provider issue - retry
}
}
}
Prevention: validate API keys at startup, implement exponential backoff for 429 responses, and monitor provider status pages.
Cause: Model didn't generate valid object matching schema.
Solution:
import { NoObjectGeneratedError } from 'ai';
try {
const result = await generateObject({
model: openai('gpt-4'),
schema: z.object({ /* complex schema */ }),
prompt: 'Generate data',
});
} catch (error) {
if (error instanceof NoObjectGeneratedError) {
console.error('No valid object generated');
// Solutions:
// 1. Simplify schema
// 2. Add more context to prompt
// 3. Provide examples in prompt
// 4. Try different model (gpt-4 better than gpt-3.5 for complex objects)
}
}
Prevention: keep schemas simple, add context and examples to the prompt, and prefer more capable models for complex objects.
Cause: AI SDK v5 + Zod initialization overhead in Cloudflare Workers exceeds startup limits.
Solution:
// BAD: Top-level imports cause startup overhead
import { createWorkersAI } from 'workers-ai-provider';
import { complexSchema } from './schemas';
const workersai = createWorkersAI({ binding: env.AI });
// GOOD: Lazy initialization inside handler
export default {
async fetch(request, env) {
const { createWorkersAI } = await import('workers-ai-provider');
const workersai = createWorkersAI({ binding: env.AI });
// Use workersai here
}
}
Prevention: lazy-initialize providers inside handlers and avoid constructing complex Zod schemas at module top level.
GitHub Issue: Search for "Workers startup limit" in Vercel AI SDK issues
Cause: Stream errors can be swallowed by createDataStreamResponse.
Status: ✅ RESOLVED - Fixed in ai@4.1.22 (February 2025)
Solution (Recommended):
// Use the onError callback (added in v4.1.22)
const stream = streamText({
model: openai('gpt-4'),
prompt: 'Hello',
onError({ error }) {
console.error('Stream error:', error);
// Custom error logging and handling
},
});
// Stream safely
for await (const chunk of stream.textStream) {
process.stdout.write(chunk);
}
Alternative (Manual try-catch):
// Fallback if not using onError callback
try {
const stream = streamText({
model: openai('gpt-4'),
prompt: 'Hello',
});
for await (const chunk of stream.textStream) {
process.stdout.write(chunk);
}
} catch (error) {
console.error('Stream error:', error);
}
Prevention: use the onError callback for proper error capture (recommended).
GitHub Issue: #4726 (RESOLVED)
Cause: Missing or invalid API key.
Solution:
import { LoadAPIKeyError } from 'ai';
try {
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello',
});
} catch (error) {
if (error instanceof LoadAPIKeyError) {
console.error('API key error:', error.message);
// Check:
// 1. .env file exists and loaded
// 2. Correct env variable name (OPENAI_API_KEY)
// 3. Key format is valid (starts with sk-)
}
}
Prevention: verify the .env file is loaded, use the provider's expected variable name (e.g., OPENAI_API_KEY), and validate the key format at startup.
Cause: Invalid parameters passed to function.
Solution:
import { InvalidArgumentError } from 'ai';
try {
const result = await generateText({
model: openai('gpt-4'),
maxOutputTokens: -1, // Invalid!
prompt: 'Hello',
});
} catch (error) {
if (error instanceof InvalidArgumentError) {
console.error('Invalid argument:', error.message);
// Check parameter types and values
}
}
Prevention: rely on TypeScript types and validate parameter values before calling the SDK.
Cause: Model generated no content (safety filters, etc.).
Solution:
import { NoContentGeneratedError } from 'ai';
try {
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Some prompt',
});
} catch (error) {
if (error instanceof NoContentGeneratedError) {
console.error('No content generated');
// Possible causes:
// 1. Safety filters blocked output
// 2. Prompt triggered content policy
// 3. Model configuration issue
// Handle gracefully:
return { text: 'Unable to generate response. Please try different input.' };
}
}
Prevention: sanitize prompts that may trigger content policies and return a graceful fallback response when generation yields nothing.
Cause: Zod schema validation failed on generated output.
Solution:
import { TypeValidationError } from 'ai';
try {
const result = await generateObject({
model: openai('gpt-4'),
schema: z.object({
age: z.number().min(0).max(120), // Strict validation
}),
prompt: 'Generate person',
});
} catch (error) {
if (error instanceof TypeValidationError) {
console.error('Validation failed:', error.message);
// Solutions:
// 1. Relax schema constraints
// 2. Add more guidance in prompt
// 3. Use .optional() for unreliable fields
}
}
Prevention: use .optional() for fields that may not always be present.

Cause: All retry attempts failed.
Solution:
import { RetryError } from 'ai';
try {
const result = await generateText({
model: openai('gpt-4'),
prompt: 'Hello',
maxRetries: 3, // Default is 2
});
} catch (error) {
if (error instanceof RetryError) {
console.error('All retries failed');
console.error('Last error:', error.lastError);
// Check root cause:
// - Persistent network issue
// - Provider outage
// - Invalid configuration
}
}
Prevention: increase maxRetries for flaky networks and check provider status pages during outages.
Cause: Exceeded provider rate limits (RPM/TPM).
Solution:
// Implement exponential backoff
import { generateText, APICallError } from 'ai';
import { openai } from '@ai-sdk/openai';
async function generateWithBackoff(prompt: string, retries = 3) {
for (let i = 0; i < retries; i++) {
try {
return await generateText({
model: openai('gpt-4'),
prompt,
});
} catch (error) {
if (error instanceof APICallError && error.statusCode === 429) {
const delay = Math.pow(2, i) * 1000; // Exponential backoff
console.log(`Rate limited, waiting ${delay}ms`);
await new Promise(resolve => setTimeout(resolve, delay));
} else {
throw error;
}
}
}
throw new Error('Rate limit retries exhausted');
}
Prevention: implement request queuing, watch rate-limit response headers, and upgrade provider tiers as usage grows.
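The delay math in the solution above can be factored out and unit-tested; a small helper (hypothetical, not part of the SDK) that precomputes the backoff schedule:

```typescript
// Exponential backoff: baseMs * 2^attempt, capped at maxDelayMs,
// plus up to jitterMs of randomness to avoid thundering herds.
function backoffSchedule(
  retries: number,
  baseMs = 1000,
  maxDelayMs = 30_000,
  jitterMs = 0,
): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    const capped = Math.min(baseMs * 2 ** attempt, maxDelayMs);
    delays.push(capped + Math.floor(Math.random() * (jitterMs + 1)));
  }
  return delays;
}

console.log(backoffSchedule(4)); // [1000, 2000, 4000, 8000]
```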
Cause: Complex Zod schemas slow down TypeScript type checking.
Solution:
// Instead of deeply nested schemas at top level:
// const complexSchema = z.object({ /* 100+ fields */ });
// Define inside functions or use type assertions:
function generateData() {
const schema = z.object({ /* complex schema */ });
return generateObject({ model: openai('gpt-4'), schema, prompt: '...' });
}
// Or use z.lazy() for recursive schemas:
type Category = { name: string; subcategories?: Category[] };
const CategorySchema: z.ZodType<Category> = z.lazy(() =>
z.object({
name: z.string(),
subcategories: z.array(CategorySchema).optional(),
})
);
Prevention: use z.lazy() for recursive types.
Official Docs: https://ai-sdk.dev/docs/troubleshooting/common-issues/slow-type-checking
Cause: Some models occasionally return invalid JSON.
Solution:
// Use built-in retry and mode selection
const result = await generateObject({
model: openai('gpt-4'),
schema: mySchema,
prompt: 'Generate data',
mode: 'json', // Force JSON mode (supported by GPT-4)
maxRetries: 3, // Retry on invalid JSON
});
// Or catch and retry manually:
try {
const result = await generateObject({
model: openai('gpt-4'),
schema: mySchema,
prompt: 'Generate data',
});
} catch (error) {
// Retry with different model
const result = await generateObject({
model: openai('gpt-4-turbo'),
schema: mySchema,
prompt: 'Generate data',
});
}
Prevention: use mode: 'json' when available.
GitHub Issue: #4302 (Imagen 3.0 Invalid JSON)
More Errors: https://ai-sdk.dev/docs/reference/ai-sdk-errors (28 total)
AI SDK:
Latest Models (2025):
Check Latest:
npm view ai version
npm view ai dist-tags # See beta versions
Core:
GitHub:
Last Updated: 2025-11-22
Skill Version: 1.2.0
AI SDK: 5.0.98 stable / 6.0.0-beta.107