Vercel AI Elements for workflow UI components. Use when building chat interfaces, displaying tool execution, showing reasoning/thinking, or creating job queues. Triggers on ai-elements, Queue, Confirmation, Tool, Reasoning, Shimmer, Loader, Message, Conversation, PromptInput.
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Reference files: references/conversation.md, references/prompt-input.md, references/visualization.md, references/workflow.md

AI Elements is a comprehensive React component library for building AI-powered user interfaces. The library provides 30+ components specifically designed for chat interfaces, tool execution visualization, reasoning displays, and workflow management.
Install via shadcn registry:
npx shadcn@latest add https://ai-elements.vercel.app/r/[component-name]
Import Pattern: Components are imported from individual files, not a barrel export:
// Correct - import from specific files
import { Conversation } from "@/components/ai-elements/conversation";
import { Message } from "@/components/ai-elements/message";
import { PromptInput } from "@/components/ai-elements/prompt-input";
// Incorrect - no barrel export
import { Conversation, Message } from "@/components/ai-elements";
Components for displaying chat-style interfaces with messages, attachments, and auto-scrolling behavior.
See references/conversation.md for details.
Advanced text input with file attachments, drag-and-drop, speech input, and state management.
See references/prompt-input.md for details.
Components for displaying job queues, tool execution, and approval workflows.
See references/workflow.md for details.
ReactFlow-based components for workflow visualization and custom node types.
See references/visualization.md for details.
AI Elements is built on top of shadcn/ui and integrates seamlessly with its theming system.
AI Elements follows a composition-first approach where larger components are built from smaller primitives:
<Tool>
  <ToolHeader title="search" type="tool-call-search" state="output-available" />
  <ToolContent>
    <ToolInput input={{ query: "AI tools" }} />
    <ToolOutput output={results} errorText={undefined} />
  </ToolContent>
</Tool>
Many components use React Context for state management:
- PromptInputProvider for global input state
- MessageBranch for alternative response navigation
- Confirmation for approval workflow state
- Reasoning for collapsible thinking state

Components support both controlled and uncontrolled patterns:
// Uncontrolled (self-managed state)
<PromptInput onSubmit={handleSubmit} />

// Controlled (external state)
<PromptInputProvider initialInput="">
  <PromptInput onSubmit={handleSubmit} />
</PromptInputProvider>
The Tool component follows the Vercel AI SDK's state machine:
- input-streaming: Parameters being received
- input-available: Ready to execute
- approval-requested: Awaiting user approval (SDK v6)
- approval-responded: User responded (SDK v6)
- output-available: Execution completed
- output-error: Execution failed
- output-denied: Approval denied

Queue components support hierarchical organization:
<Queue>
  <QueueSection defaultOpen={true}>
    <QueueSectionTrigger>
      <QueueSectionLabel count={3} label="tasks" icon={<Icon />} />
    </QueueSectionTrigger>
    <QueueSectionContent>
      <QueueList>
        <QueueItem>
          <QueueItemIndicator completed={false} />
          <QueueItemContent>Task description</QueueItemContent>
        </QueueItem>
      </QueueList>
    </QueueSectionContent>
  </QueueSection>
</Queue>
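The Tool lifecycle states listed earlier map naturally onto a TypeScript string-literal union. The following is a sketch, not the library's actual types; the `isTerminalState` helper is a hypothetical illustration of how consumers might branch on the state machine:

```typescript
// Union mirroring the tool lifecycle states documented above.
type ToolState =
  | "input-streaming"
  | "input-available"
  | "approval-requested"
  | "approval-responded"
  | "output-available"
  | "output-error"
  | "output-denied";

// A tool call is terminal once it has produced output, failed, or been denied.
function isTerminalState(state: ToolState): boolean {
  return (
    state === "output-available" ||
    state === "output-error" ||
    state === "output-denied"
  );
}
```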
The Conversation component uses the use-stick-to-bottom hook for intelligent auto-scrolling.
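The core idea behind "stick to bottom" can be sketched as a pure predicate: auto-scroll only when the user is already near the bottom, so scrolling up to read history is never overridden. This is a simplified model of the hook's behavior, and the threshold value is an assumption:

```typescript
// Decide whether new content should trigger an auto-scroll.
// distanceFromBottom is how far the viewport's bottom edge is from the
// end of the scrollable content; small values mean "pinned to bottom".
function shouldStickToBottom(
  scrollTop: number,
  clientHeight: number,
  scrollHeight: number,
  threshold = 40 // px tolerance (assumed, not the hook's actual default)
): boolean {
  const distanceFromBottom = scrollHeight - (scrollTop + clientHeight);
  return distanceFromBottom <= threshold;
}
```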
PromptInput provides comprehensive file handling, including attachments via a file dialog and drag-and-drop.
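Attachment handling typically validates files against size and MIME-type limits before accepting them. The guard below is a hypothetical sketch of such a check (the names `canAttach` and `AttachmentLimits` are illustrative, not part of the library's API):

```typescript
interface AttachmentLimits {
  maxSizeBytes: number;
  acceptedTypes: string[]; // exact types or wildcards like "image/*"
}

// Returns true when a candidate file passes both the size cap and
// the accepted-type list; wildcard entries match by MIME prefix.
function canAttach(
  file: { size: number; type: string },
  limits: AttachmentLimits
): boolean {
  if (file.size > limits.maxSizeBytes) return false;
  return limits.acceptedTypes.some((t) =>
    t.endsWith("/*") ? file.type.startsWith(t.slice(0, -1)) : file.type === t
  );
}
```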
The PromptInputSpeechButton uses the Web Speech API for voice input.
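Web Speech API recognition emits interim and final results; a common pattern is to fold only the final transcripts into the input value. This sketch assumes a simplified result shape (the real `SpeechRecognitionResultList` nests transcripts inside alternatives):

```typescript
// Simplified stand-in for one SpeechRecognition result.
type RecognitionResult = { isFinal: boolean; transcript: string };

// Keep only finalized segments and join them into one string,
// discarding still-changing interim results.
function collectTranscript(results: RecognitionResult[]): string {
  return results
    .filter((r) => r.isFinal)
    .map((r) => r.transcript)
    .join(" ");
}
```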
The Reasoning component provides auto-collapse behavior, staying open while reasoning streams and collapsing when it completes.
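One plausible model of that behavior: the panel is open while content streams, collapses when streaming ends, and defers to the user once they toggle it manually. This is an assumed simplification, not the component's actual implementation:

```typescript
// Compute whether the reasoning panel should be open.
// A manual toggle always wins; otherwise the panel tracks streaming state.
function reasoningOpen(
  isStreaming: boolean,
  userToggled: boolean,
  userOpen: boolean
): boolean {
  if (userToggled) return userOpen; // user choice overrides auto behavior
  return isStreaming; // auto mode: open only while streaming
}
```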
All components are fully typed with TypeScript:
import type { ToolUIPart, FileUIPart, UIMessage } from "ai";
type ToolProps = ComponentProps<typeof Collapsible>;
type QueueItemProps = ComponentProps<"li">;
type MessageAttachmentProps = HTMLAttributes<HTMLDivElement> & {
  data: FileUIPart;
  onRemove?: () => void;
};
Combine Conversation, Message, and PromptInput for a complete chat UI:
import { Conversation, ConversationContent, ConversationScrollButton } from "@/components/ai-elements/conversation";
import { Message, MessageContent, MessageResponse } from "@/components/ai-elements/message";
import {
  PromptInput,
  PromptInputTextarea,
  PromptInputFooter,
  PromptInputTools,
  PromptInputButton,
  PromptInputSubmit
} from "@/components/ai-elements/prompt-input";
<div className="flex flex-col h-screen">
  <Conversation>
    <ConversationContent>
      {messages.map(msg => (
        <Message key={msg.id} from={msg.role}>
          <MessageContent>
            <MessageResponse>{msg.content}</MessageResponse>
          </MessageContent>
        </Message>
      ))}
    </ConversationContent>
    <ConversationScrollButton />
  </Conversation>
  <PromptInput onSubmit={handleSubmit}>
    <PromptInputTextarea />
    <PromptInputFooter>
      <PromptInputTools>
        <PromptInputButton onClick={() => attachments.openFileDialog()}>
          <PaperclipIcon />
        </PromptInputButton>
      </PromptInputTools>
      <PromptInputSubmit status={chatStatus} />
    </PromptInputFooter>
  </PromptInput>
</div>
Show tool execution with expandable details:
import { Tool, ToolHeader, ToolContent, ToolInput, ToolOutput } from "@/components/ai-elements/tool";
{toolInvocations.map(tool => (
  <Tool key={tool.id}>
    <ToolHeader
      title={tool.toolName}
      type={`tool-call-${tool.toolName}`}
      state={tool.state}
    />
    <ToolContent>
      <ToolInput input={tool.args} />
      {tool.result && (
        <ToolOutput output={tool.result} errorText={tool.error} />
      )}
    </ToolContent>
  </Tool>
))}
Request user confirmation before executing actions:
import {
  Confirmation,
  ConfirmationTitle,
  ConfirmationRequest,
  ConfirmationActions,
  ConfirmationAction,
  ConfirmationAccepted,
  ConfirmationRejected
} from "@/components/ai-elements/confirmation";

<Confirmation approval={tool.approval} state={tool.state}>
  <ConfirmationTitle>
    Approve deletion of {resource}?
  </ConfirmationTitle>
  <ConfirmationRequest>
    <ConfirmationActions>
      <ConfirmationAction onClick={approve} variant="default">
        Approve
      </ConfirmationAction>
      <ConfirmationAction onClick={reject} variant="outline">
        Reject
      </ConfirmationAction>
    </ConfirmationActions>
  </ConfirmationRequest>
  <ConfirmationAccepted>
    Action approved and executed.
  </ConfirmationAccepted>
  <ConfirmationRejected>
    Action rejected.
  </ConfirmationRejected>
</Confirmation>
Display task lists with completion status:
import {
  Queue,
  QueueSection,
  QueueSectionTrigger,
  QueueSectionLabel,
  QueueSectionContent,
  QueueList,
  QueueItem,
  QueueItemIndicator,
  QueueItemContent,
  QueueItemDescription
} from "@/components/ai-elements/queue";

<Queue>
  <QueueSection>
    <QueueSectionTrigger>
      <QueueSectionLabel count={todos.length} label="todos" />
    </QueueSectionTrigger>
    <QueueSectionContent>
      <QueueList>
        {todos.map(todo => (
          <QueueItem key={todo.id}>
            <QueueItemIndicator completed={todo.status === 'completed'} />
            <QueueItemContent completed={todo.status === 'completed'}>
              {todo.title}
            </QueueItemContent>
            {todo.description && (
              <QueueItemDescription completed={todo.status === 'completed'}>
                {todo.description}
              </QueueItemDescription>
            )}
          </QueueItem>
        ))}
      </QueueList>
    </QueueSectionContent>
  </QueueSection>
</Queue>
Components include built-in accessibility features.
Many components use Framer Motion for smooth animations.