# Vercel AI SDK

## Role in the runtime
The `VercelAI` service centralises model resolution, provider options, and `streamChat` integration. Every chat request flows through it, so configuration changes and tool execution stay consistent.
## Core flow

`src/services/VercelAI.ts`
```ts
import { Effect } from "effect"
import { stepCountIs, streamText, type ToolSet } from "ai"

const streamChat = <TOOLS extends ToolSet>(request: StreamChatRequest<TOOLS>) =>
  Effect.gen(function* () {
    // Load configuration and model
    const config = yield* configService.load
    const model = yield* getModel
    const { messages, tools, maxSteps, temperature, onStepFinish } = request
    return streamText({
      model,
      messages,
      tools,
      // Limit tool-calling iterations
      stopWhen: stepCountIs(maxSteps ?? config.maxSteps ?? 10),
      temperature: temperature ?? config.temperature,
      maxOutputTokens: config.maxTokens,
      providerOptions: normalizeProviderOptions(config) as never,
      onStepFinish,
    })
  })
```
- Model resolution: `getModel` lazily loads the provider-specific client via `createAnthropic`, `createOpenAI`, or `createGoogleGenerativeAI` (see the sketch after this list).
- Step gating: `stepCountIs` limits tool calls per response; callers can override `maxSteps` when required.
- Provider options: `normalizeProviderOptions` passes through provider-specific configuration stored in `ConfigService`.
- Streaming hook: `onStepFinish` forwards tool calls/results to the UI presenters.
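For orientation, here is a minimal sketch of what that lazy resolution could look like. It assumes `ConfigService` exposes `provider`, `model`, and `apiKey` fields; the real `getModel` in `src/services/VercelAI.ts` may be shaped differently.

```ts
// Sketch only: assumes config.provider / config.model / config.apiKey exist.
import { Effect } from "effect"
import { createAnthropic } from "@ai-sdk/anthropic"
import { createOpenAI } from "@ai-sdk/openai"
import { createGoogleGenerativeAI } from "@ai-sdk/google"

const getModel = Effect.gen(function* () {
  const config = yield* configService.load
  // Instantiate only the client the configured provider needs
  switch (config.provider) {
    case "anthropic":
      return createAnthropic({ apiKey: config.apiKey })(config.model)
    case "openai":
      return createOpenAI({ apiKey: config.apiKey })(config.model)
    default:
      return createGoogleGenerativeAI({ apiKey: config.apiKey })(config.model)
  }
})
```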
## Request contract

`src/services/VercelAI.ts`

```ts
import type { ModelMessage, StepResult, ToolSet } from "ai"

export interface StreamChatRequest<TOOLS extends ToolSet = ToolSet> {
  readonly messages: ModelMessage[]
  readonly tools: TOOLS
  readonly maxSteps?: number
  readonly temperature?: number
  readonly onStepFinish?: (step: StepResult<TOOLS>) => void
}
```
- `messages`: history produced by `MessageService` (system prompt + conversation).
- `tools`: the `ToolSet` from `ToolRegistry`.
- `onStepFinish`: invoked for every tool call/result chunk; optional for fire-and-forget streams.
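Putting the contract together, a hypothetical call site inside an `Effect.gen` block could look like this; `vercelAI`, `messages`, and `tools` are assumed to already be in scope.

```ts
// Hypothetical call site: messages from MessageService, tools from ToolRegistry.
const result = yield* vercelAI.streamChat({
  messages,
  tools,
  maxSteps: 5, // tighter step budget for this one request
  onStepFinish: (step) => {
    // Each StepResult carries the tool calls made during that step
    console.log(step.toolCalls.map((call) => call.toolName))
  },
})
```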
## Typical usage

Inside `MessageService`:

`src/chat/MessageService.ts`
```ts
const assistantText = yield* handleChatStream(
  messages,
  tools,
  {
    maxSteps: config.maxSteps ?? 10,
    temperature: config.temperature,
  },
  vercelAI,
)
```
`handleChatStream` calls `vercelAI.streamChat` and subscribes to the incremental output, keeping the CLI responsive.
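The helper itself is not shown here; as a rough sketch, assuming it drains the AI SDK's `textStream` async iterable, it might look like the following. The real `handleChatStream` in `src/chat/MessageService.ts` also wires tool output into the UI presenters.

```ts
import { Effect } from "effect"
import type { ModelMessage, ToolSet } from "ai"

// Sketch only; VercelAI is the assumed service interface exposing streamChat.
const handleChatStream = (
  messages: ModelMessage[],
  tools: ToolSet,
  options: { maxSteps?: number; temperature?: number },
  vercelAI: VercelAI,
) =>
  Effect.gen(function* () {
    const result = yield* vercelAI.streamChat({ messages, tools, ...options })
    // Drain the stream chunk by chunk so the CLI renders incrementally
    return yield* Effect.tryPromise(async () => {
      let text = ""
      for await (const chunk of result.textStream) {
        process.stdout.write(chunk)
        text += chunk
      }
      return text
    })
  })
```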
## Experiments

- Increase `AI_MAX_STEPS` to allow more tool iterations per request.
- Override `temperature` per message to explore creative vs deterministic behaviour.
- Provide custom `onStepFinish` callbacks to log raw tool payloads for debugging (see the sketch after this list).
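As a starting point for the last experiment, a debug callback could dump raw step payloads to stderr; `toolCalls` and `toolResults` are fields on the AI SDK's `StepResult`.

```ts
import type { StepResult, ToolSet } from "ai"

// Logs to stderr so the dump does not interleave with streamed text on stdout
const debugStep = <TOOLS extends ToolSet>(step: StepResult<TOOLS>) => {
  console.error(
    JSON.stringify(
      { toolCalls: step.toolCalls, toolResults: step.toolResults },
      null,
      2,
    ),
  )
}
```

Pass it as `onStepFinish` in the `StreamChatRequest` to capture every tool round.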
## Source

- `src/services/VercelAI.ts`
- `src/chat/MessageService.ts`
- `src/services/ConfigService.ts`