Releases: VoltAgent/voltagent
@voltagent/[email protected]
Minor Changes
- #773 35290d9 Thanks @hyperion912! - feat(postgres-memory-adapter): add schema configuration support

Adds support for defining a custom PostgreSQL schema during adapter initialization. Defaults to undefined (the database's default schema is used if not provided). Includes tests for schema configuration.

Resolves #763
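A minimal sketch of the new option, assuming a `PostgresMemoryAdapter` export and a `schema` config field (both names are assumptions; check the package docs for the exact ones):

```ts
import { PostgresMemoryAdapter } from "@voltagent/postgres"; // hypothetical export name

const memory = new PostgresMemoryAdapter({
  connectionString: process.env.DATABASE_URL ?? "", // assumed config field
  schema: "voltagent", // new: custom schema; omit to use the database default
});
```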
@voltagent/[email protected]
Patch Changes
- #785 f4b9524 Thanks @omeraplak! - fix: the `/agents/:id/text` response to always include tool calling data. Previously we only bubbled up the last step's `toolCalls`/`toolResults`, so multi-step providers (like `ollama-ai-provider-v2`) returned empty arrays even though the tool actually ran. We now aggregate tool activity across every step before returning the result, restoring parity with GPT-style providers and matching the AI SDK output.
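A minimal sketch of what this enables, assuming a locally running server; the agent id, request body shape, and response envelope are assumptions:

```ts
// Hypothetical agent id and default local port.
const response = await fetch("http://localhost:3141/agents/my-agent/text", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: "What's the weather in Berlin?" }),
});

// The exact envelope may differ (e.g., wrapped in a data field).
const result = await response.json();

// Previously empty for multi-step providers; now aggregated across every step.
console.log(result.toolCalls, result.toolResults);
```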
- #783 46597cf Thanks @omeraplak! - fix: unwrap provider-executed tool outputs when persisting conversation history so Anthropic's `server_tool_use` IDs stay unique on replay
- #786 f262b51 Thanks @omeraplak! - fix: ensure sub-agent metadata is persisted alongside supervisor history so supervisor conversations know which sub-agent produced each tool event and memory record. You can now filter historical events the same way you handle live streams:

```ts
const memoryMessages = await memory.getMessages(userId, conversationId);
const formatterSteps = memoryMessages.filter(
  (message) => message.metadata?.subAgentId === "Formatter"
);

for (const message of formatterSteps) {
  console.log(`[${message.metadata?.subAgentName}]`, message.parts);
}
```
The same metadata also exists on live `fullStream` chunks, so you can keep the streaming UI and the historical memory explorer in sync.
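A minimal sketch of the live-stream counterpart, assuming a streaming call on the supervisor agent; the exact location of the metadata fields on each chunk is an assumption based on the description above:

```ts
const stream = await supervisorAgent.streamText("Plan and format my workout");

for await (const chunk of stream.fullStream) {
  // Assumed: forwarded sub-agent chunks carry subAgentId / subAgentName.
  if (chunk.subAgentId === "Formatter") {
    console.log(`[${chunk.subAgentName}]`, chunk);
  }
}
```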
@voltagent/[email protected]
Patch Changes
- 65e3317 Thanks @omeraplak! - feat: add tags support for tools
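A minimal sketch of tagging a tool, assuming `createTool` accepts a `tags` array alongside its usual fields (the field name is an assumption):

```ts
import { createTool } from "@voltagent/core";
import { z } from "zod";

const weatherTool = createTool({
  name: "get_weather",
  description: "Fetch the current weather for a location",
  parameters: z.object({ location: z.string() }),
  tags: ["weather", "external-api"], // assumed field for the new tags support
  execute: async ({ location }) => {
    return { location, forecast: "sunny" }; // stubbed result
  },
});
```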
@voltagent/[email protected]
Patch Changes
- #767 cc1f5c0 Thanks @omeraplak! - feat: add tunnel command

New: `volt tunnel`

Expose your local VoltAgent server over a secure public URL with a single command:

```bash
pnpm volt tunnel 3141
```

The CLI handles tunnel creation for `localhost:3141` and keeps the connection alive until you press `Ctrl+C`. You can omit the port argument to use the default.
@voltagent/[email protected]
Patch Changes
- #767 cc1f5c0 Thanks @omeraplak! - feat: add tunnel command

New: `volt tunnel`

Expose your local VoltAgent server over a secure public URL with a single command:

```bash
pnpm volt tunnel 3141
```

The CLI handles tunnel creation for `localhost:3141` and keeps the connection alive until you press `Ctrl+C`. You can omit the port argument to use the default.
@voltagent/[email protected]
Minor Changes
- #761 0d13b73 Thanks @omeraplak! - feat: add `onHandoffComplete` hook for early termination in supervisor/subagent workflows

The Problem
When using the supervisor/subagent pattern, subagents always return to the supervisor for processing, even when they generate final outputs (like JSON structures or reports) that need no additional handling. This causes unnecessary token consumption.
Current flow:

```
Supervisor → SubAgent (generates 2K token JSON) → Supervisor (processes JSON) → User
                                                  ↑ wastes ~2K tokens
```

Example impact:
- Current: ~2,650 tokens per request
- With bail: ~560 tokens per request
- Savings: 79% (~2,000 tokens / ~$0.020 per request)
The Solution
Added an `onHandoffComplete` hook that allows supervisors to intercept subagent results and optionally bail (skip supervisor processing) when the subagent produces final output.

New flow:

```
Supervisor → SubAgent → bail() → User ✅
```

API
The hook receives a `bail()` function that can be called to terminate early:

```ts
const supervisor = new Agent({
  name: "Workout Supervisor",
  subAgents: [exerciseAgent, workoutBuilder],
  hooks: {
    onHandoffComplete: async ({ agent, result, bail, context }) => {
      // Workout Builder produces final JSON - no processing needed
      if (agent.name === "Workout Builder") {
        context.logger?.info("Final output received, bailing");
        bail(); // Skip supervisor, return directly to user
        return;
      }

      // Large result - bail to save tokens
      if (result.length > 2000) {
        context.logger?.warn("Large result, bailing to save tokens");
        bail();
        return;
      }

      // Transform and bail
      if (agent.name === "Report Generator") {
        const transformed = `# Final Report\n\n${result}\n\n---\nGenerated at: ${new Date().toISOString()}`;
        bail(transformed); // Bail with transformed result
        return;
      }

      // Default: continue to supervisor for processing
    },
  },
});
```
Hook Arguments
```ts
interface OnHandoffCompleteHookArgs {
  agent: Agent; // Target agent (subagent)
  sourceAgent: Agent; // Source agent (supervisor)
  result: string; // Subagent's output
  messages: UIMessage[]; // Full conversation messages
  usage?: UsageInfo; // Token usage info
  context: OperationContext; // Operation context
  bail: (transformedResult?: string) => void; // Call to bail
}
```
Features
- ✅ Clean API: No return value needed, just call `bail()`
- ✅ True early termination: Supervisor execution stops immediately, no LLM calls wasted
- ✅ Conditional bail: Decide based on agent, result content, size, etc.
- ✅ Optional transformation: `bail(newResult)` to transform before bailing
- ✅ Observability: Automatic logging and OpenTelemetry events with visual indicators
- ✅ Backward compatible: Existing code works without changes
- ✅ Error handling: Hook errors are logged, and the flow continues normally
How Bail Works (Implementation Details)
When `bail()` is called in the `onHandoffComplete` hook:

1. Hook Level (`packages/core/src/agent/subagent/index.ts`):
   - Sets the `bailed: true` flag in the handoff return value
   - Adds OpenTelemetry span attributes to both supervisor and subagent spans
   - Logs the bail event with metadata
2. Tool Level (`delegate_task` tool):
   - Includes `bailed: true` in the tool result structure
   - Adds the note: "One or more subagents produced final output. No further processing needed."
3. Step Handler Level (`createStepHandler` in `agent.ts`):
   - Detects bail during step execution when tool results arrive
   - Creates a `BailError` and aborts execution via `abortController.abort(bailError)`
   - Stores the bailed result in `systemContext` for retrieval
   - Works for both `generateText` and `streamText`
4. Catch Block Level (method-specific handling):
   - `generateText`: Catches `BailError`, retrieves the bailed result from `systemContext`, applies guardrails, calls hooks, and returns it as a successful generation
   - `streamText`: `onError` catches `BailError` gracefully (not logged as an error); `onFinish` retrieves and uses the bailed result
This unified abort-based implementation ensures true early termination for all generation methods.
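A generic sketch of the abort-with-reason pattern this relies on (illustrative only, not VoltAgent's actual code): a custom error is passed as the abort reason, so downstream handlers can distinguish an intentional bail from a real failure.

```ts
// Illustrative BailError; VoltAgent's real class may differ.
class BailError extends Error {
  constructor(public readonly bailedResult: string) {
    super("Subagent bailed with final output");
    this.name = "BailError";
  }
}

const abortController = new AbortController();

abortController.signal.addEventListener("abort", () => {
  const reason = abortController.signal.reason;
  if (reason instanceof BailError) {
    // Treated as successful early termination, not an error.
    console.log("Bailed result:", reason.bailedResult);
  }
});

// Aborting with the error as the reason stops execution immediately.
abortController.abort(new BailError("final JSON output"));
```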
Stream Support (NEW)
For `streamText` supervisors: when a subagent bails during streaming, the supervisor stream is immediately aborted using a `BailError`:

- Detection during streaming (`createStepHandler`):
  - Tool results are checked in the `onStepFinish` handler
  - If `bailed: true` is found, a `BailError` is created and the stream is aborted via `abortController.abort(bailError)`
  - The bailed result is stored in `systemContext` for retrieval in `onFinish`
- Graceful error handling (`streamText` `onError`):
  - `BailError` is detected and handled gracefully (not logged as an error)
  - Error hooks are NOT called for bail
  - Stream abort is treated as successful early termination
- Final result (`streamText` `onFinish`):
  - The bailed result is retrieved from `systemContext`
  - Output guardrails are applied to the bailed result
  - The `onEnd` hook is called with the bailed result

Benefits for streaming:

- ✅ Stream stops immediately when bail is detected (no wasted supervisor chunks)
- ✅ No unnecessary LLM calls after bail
- ✅ Works with `fullStreamEventForwarding` - subagent chunks are already forwarded
- ✅ Clean abort semantics with the `BailError` class
- ✅ Graceful handling - not treated as an error
Supported methods:
- ✅ `generateText` - Aborts execution during the step handler, catches `BailError`, and returns the bailed result
- ✅ `streamText` - Aborts the stream during the step handler, handles `BailError` in `onError` and `onFinish`
- ❌ `generateObject` - No tool support, bail not applicable
- ❌ `streamObject` - No tool support, bail not applicable
Key difference from initial implementation:
- ❌ OLD: Post-execution check in `generateText` (after the AI SDK completes) - redundant
- ✅ NEW: Unified abort mechanism in `createStepHandler` - works for both methods and stops execution immediately
Use Cases
Perfect for scenarios where specialized subagents generate final outputs:
- JSON/Structured data generators: Workout builders, report generators
- Large content producers: Document creators, data exports
- Token optimization: Skip processing for expensive results
- Business logic: Conditional routing based on result characteristics
Observability
When bail occurs, both logging and OpenTelemetry tracking provide full visibility:
Logging:

- Log event: `Supervisor bailed after handoff`
- Includes: supervisor name, subagent name, result length, transformation status

OpenTelemetry:

- Span event: `supervisor.handoff.bailed` (for timeline events)
- Span attributes added to both supervisor and subagent spans:
  - `bailed`: `true`
  - `bail.supervisor`: supervisor agent name (on the subagent span)
  - `bail.subagent`: subagent name (on the supervisor span)
  - `bail.transformed`: `true` if the result was transformed
Console Visualization:
Bailed subagents are visually distinct in the observability react-flow view:

- Purple border with shadow (`border-purple-500 shadow-purple-600/50`)
- "⚡ BAILED" badge in the header (shows "⚡ BAILED (T)" if transformed)
- Tooltip showing which supervisor initiated the bail
- Node opacity remains at 1.0 (fully visible)
- Status badge shows "BAILED" with purple styling instead of error styling
- Details panel shows "Early Termination" info section with supervisor info
Type Safety Improvements
Also improved type safety by replacing `usage?: any` with a proper `UsageInfo` type:

```ts
export type UsageInfo = {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
  cachedInputTokens?: number;
  reasoningTokens?: number;
};
```
This provides:
- ✅ Better autocomplete in IDEs
- ✅ Compile-time type checking
- ✅ Clear documentation of available fields
Breaking Changes
None - this is a purely additive feature. The `UsageInfo` type structure is fully compatible with existing code.
Patch Changes
- #754 c80d18f Thanks @omeraplak! - feat: encapsulate tool-specific metadata in toolContext + prevent AI SDK context collision

Changes

1. Tool Context Encapsulation

Tool-specific metadata is now organized under an optional `toolContext` field for better separation and future-proofing.

Migration:

```ts
// Before
execute: async ({ location }, options) => {
  // Fields were flat (planned, not released)
};

// After
execute: async ({ location }, options) => {
  const { name, callId, messages, abortSignal } = options?.toolContext || {};

  // Session context remains flat
  const userId = options?.userId;
  const logger = options?.logger;
};
```
...
@voltagent/[email protected]
Patch Changes
- #757 a0509c4 Thanks @omeraplak! - fix: evals & guardrails observability issue
@voltagent/[email protected]
Patch Changes
- #744 e9e467a Thanks @marinoska! - Refactor ToolManager into a hierarchical architecture with BaseToolManager and ToolkitManager

Introduces a new class hierarchy for improved tool management:
- BaseToolManager: Abstract base class with core tool management functionality
- ToolManager: Main manager supporting standalone tools, provider tools, and toolkits
- ToolkitManager: Specialized manager for toolkit-scoped tools (no nested toolkits)
Features:
- Enhanced type-safe tool categorization with type guards
- Conflict detection for toolkit tools
- Reorganized the tool preparation process - moved `prepareToolsForExecution` logic from the agent into ToolManager, simplifying agent code
Public API remains compatible.
- #752 002ebad Thanks @omeraplak! - fix: forward AI SDK tool call metadata (including `toolCallId`) to server-side tool executions - #746

Tool wrappers now receive the full options object from the AI SDK, so custom tools and hook listeners can access `toolCallId`, abort signals, and other metadata. We also propagate the real call id to OpenTelemetry spans. Existing tools keep working (the extra argument is optional), but they can now inspect the third `options` parameter if they need richer context.
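A minimal sketch of a tool reading that metadata, assuming the fields land directly on the options argument as described here (note that in @voltagent/core 1.2.0 the same metadata is grouped under `options.toolContext`, per the #754 entry above):

```ts
import { createTool } from "@voltagent/core";
import { z } from "zod";

const searchTool = createTool({
  name: "search_docs",
  description: "Search the documentation",
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }, options) => {
    // Correlate this execution with the model's tool call / OTel span.
    console.log("toolCallId:", options?.toolCallId);

    // Respect cancellation forwarded from the AI SDK.
    options?.abortSignal?.throwIfAborted();

    return { hits: [] }; // stubbed result
  },
});
```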
@voltagent/[email protected]
Patch Changes
- #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

The Problem
When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.
Symptoms:
- Timeline showed events during execution
- Timeline cleared/reset when workflow completed
- No execution history for completed workflows
- Events were lost after browser refresh
The Solution
Backend (Framework):
- Added `events`, `output`, and `cancellation` fields to the `WorkflowStateEntry` interface
- Modified workflow execution to collect all stream events in memory during execution
- Persist collected events to workflow state when the workflow completes, suspends, fails, or is cancelled
- Updated all storage adapters to support the new fields:
  - LibSQL: Added schema columns + automatic migration method (`addWorkflowStateColumns`)
  - Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
  - Postgres: Added schema columns + INSERT/UPDATE queries
  - In-Memory: Automatically supported via the TypeScript interface
Frontend (Console):
- Updated `WorkflowPlaygroundProvider` to include events when converting `WorkflowStateEntry` → `WorkflowHistoryEntry`
- Implemented a smart merge strategy for WebSocket updates: use backend-persisted events when the workflow finishes, keep streaming events during execution
- Events are now preserved across page refreshes and always visible in the timeline UI
What Gets Persisted
```jsonc
// In WorkflowStateEntry (stored in Memory V2):
{
  "events": [
    {
      "id": "evt_123",
      "type": "workflow-start",
      "name": "Workflow Started",
      "startTime": "2025-01-24T10:00:00Z",
      "status": "running",
      "input": { "userId": "123" }
    },
    {
      "id": "evt_124",
      "type": "step-complete",
      "name": "Step: fetch-user",
      "startTime": "2025-01-24T10:00:01Z",
      "endTime": "2025-01-24T10:00:02Z",
      "status": "success",
      "output": { "user": { "name": "John" } }
    }
  ],
  "output": { "result": "success" },
  "cancellation": {
    "cancelledAt": "2025-01-24T10:00:05Z",
    "reason": "User requested cancellation"
  }
}
```
Migration Guide
LibSQL Users
No action required - migrations run automatically on next initialization.
Supabase Users
When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:
```sql
-- Add workflow event persistence columns
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS events JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS output JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS cancellation JSONB;
```
Postgres Users
No action required - migrations run automatically on next initialization.
In-Memory Users
No action required - automatically supported.
VoltAgent Managed Memory Users
No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:
- Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
- Run automatic column addition migration for existing databases (lazy migration on first request)
- Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields
Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.
Impact
- ✅ Workflow execution timeline is now persistent and survives completion
- ✅ Full execution history visible for completed, suspended, and failed workflows
- ✅ Events, output, and cancellation metadata preserved in database
- ✅ Console UI timeline works consistently across all workflow states
- ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
- ✅ No data loss on workflow completion or page refresh
@voltagent/[email protected]
Patch Changes
- #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

The Problem
When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.
Symptoms:
- Timeline showed events during execution
- Timeline cleared/reset when workflow completed
- No execution history for completed workflows
- Events were lost after browser refresh
The Solution
Backend (Framework):
- Added `events`, `output`, and `cancellation` fields to the `WorkflowStateEntry` interface
- Modified workflow execution to collect all stream events in memory during execution
- Persist collected events to workflow state when the workflow completes, suspends, fails, or is cancelled
- Updated all storage adapters to support the new fields:
  - LibSQL: Added schema columns + automatic migration method (`addWorkflowStateColumns`)
  - Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
  - Postgres: Added schema columns + INSERT/UPDATE queries
  - In-Memory: Automatically supported via the TypeScript interface
Frontend (Console):
- Updated `WorkflowPlaygroundProvider` to include events when converting `WorkflowStateEntry` → `WorkflowHistoryEntry`
- Implemented a smart merge strategy for WebSocket updates: use backend-persisted events when the workflow finishes, keep streaming events during execution
- Events are now preserved across page refreshes and always visible in the timeline UI
What Gets Persisted
```jsonc
// In WorkflowStateEntry (stored in Memory V2):
{
  "events": [
    {
      "id": "evt_123",
      "type": "workflow-start",
      "name": "Workflow Started",
      "startTime": "2025-01-24T10:00:00Z",
      "status": "running",
      "input": { "userId": "123" }
    },
    {
      "id": "evt_124",
      "type": "step-complete",
      "name": "Step: fetch-user",
      "startTime": "2025-01-24T10:00:01Z",
      "endTime": "2025-01-24T10:00:02Z",
      "status": "success",
      "output": { "user": { "name": "John" } }
    }
  ],
  "output": { "result": "success" },
  "cancellation": {
    "cancelledAt": "2025-01-24T10:00:05Z",
    "reason": "User requested cancellation"
  }
}
```
Migration Guide
LibSQL Users
No action required - migrations run automatically on next initialization.
Supabase Users
When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:
```sql
-- Add workflow event persistence columns
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS events JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS output JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS cancellation JSONB;
```
Postgres Users
No action required - migrations run automatically on next initialization.
In-Memory Users
No action required - automatically supported.
VoltAgent Managed Memory Users
No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:
- Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
- Run automatic column addition migration for existing databases (lazy migration on first request)
- Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields
Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.
Impact
- ✅ Workflow execution timeline is now persistent and survives completion
- ✅ Full execution history visible for completed, suspended, and failed workflows
- ✅ Events, output, and cancellation metadata preserved in database
- ✅ Console UI timeline works consistently across all workflow states
- ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
- ✅ No data loss on workflow completion or page refresh