Releases: VoltAgent/voltagent

@voltagent/[email protected]

05 Nov 22:32
ebd88ce

Patch Changes

  • #767 cc1f5c0 Thanks @omeraplak! - feat: add tunnel command

    New: volt tunnel

    Expose your local VoltAgent server over a secure public URL with a single command:

    pnpm volt tunnel 3141

    The CLI handles tunnel creation for localhost:3141 and keeps the connection alive until you press Ctrl+C. You can omit the port argument to use the default.

@voltagent/[email protected]

05 Nov 22:32
ebd88ce

Patch Changes

  • #767 cc1f5c0 Thanks @omeraplak! - feat: add tunnel command

    New: volt tunnel

    Expose your local VoltAgent server over a secure public URL with a single command:

    pnpm volt tunnel 3141

    The CLI handles tunnel creation for localhost:3141 and keeps the connection alive until you press Ctrl+C. You can omit the port argument to use the default.

@voltagent/[email protected]

04 Nov 06:00
7df9b59

Minor Changes

  • #761 0d13b73 Thanks @omeraplak! - feat: add onHandoffComplete hook for early termination in supervisor/subagent workflows

    The Problem

    When using the supervisor/subagent pattern, subagents always return to the supervisor for processing, even when they generate final outputs (like JSON structures or reports) that need no additional handling. This causes unnecessary token consumption.

    Current flow:

    Supervisor → SubAgent (generates 2K token JSON) → Supervisor (processes JSON) → User
                                                        ↑ Wastes ~2K tokens
    

    Example impact:

    • Current: ~2,650 tokens per request
    • With bail: ~560 tokens per request
    • Savings: 79% (~2,000 tokens / ~$0.020 per request)

    The Solution

    Added onHandoffComplete hook that allows supervisors to intercept subagent results and optionally bail (skip supervisor processing) when the subagent produces final output.

    New flow:

    Supervisor → SubAgent → bail() → User ✅
    

    API

    The hook receives a bail() function that can be called to terminate early:

    const supervisor = new Agent({
      name: "Workout Supervisor",
      subAgents: [exerciseAgent, workoutBuilder],
      hooks: {
        onHandoffComplete: async ({ agent, result, bail, context }) => {
          // Workout Builder produces final JSON - no processing needed
          if (agent.name === "Workout Builder") {
            context.logger?.info("Final output received, bailing");
            bail(); // Skip supervisor, return directly to user
            return;
          }
    
          // Large result - bail to save tokens
          if (result.length > 2000) {
            context.logger?.warn("Large result, bailing to save tokens");
            bail();
            return;
          }
    
          // Transform and bail
          if (agent.name === "Report Generator") {
            const transformed = `# Final Report\n\n${result}\n\n---\nGenerated at: ${new Date().toISOString()}`;
            bail(transformed); // Bail with transformed result
            return;
          }
    
          // Default: continue to supervisor for processing
        },
      },
    });

    Hook Arguments

    interface OnHandoffCompleteHookArgs {
      agent: Agent; // Target agent (subagent)
      sourceAgent: Agent; // Source agent (supervisor)
      result: string; // Subagent's output
      messages: UIMessage[]; // Full conversation messages
      usage?: UsageInfo; // Token usage info
      context: OperationContext; // Operation context
      bail: (transformedResult?: string) => void; // Call to bail
    }

    Features

    • Clean API: No return value needed, just call bail()
    • True early termination: Supervisor execution stops immediately, no LLM calls wasted
    • Conditional bail: Decide based on agent, result content, size, etc.
    • Optional transformation: bail(newResult) to transform before bailing
    • Observability: Automatic logging and OpenTelemetry events with visual indicators
    • Backward compatible: Existing code works without changes
    • Error handling: Hook errors logged, flow continues normally

    How Bail Works (Implementation Details)

    When bail() is called in the onHandoffComplete hook:

    1. Hook Level (packages/core/src/agent/subagent/index.ts):

    • Sets bailed: true flag in handoff return value
    • Adds OpenTelemetry span attributes to both supervisor and subagent spans
    • Logs the bail event with metadata

    2. Tool Level (delegate_task tool):

    • Includes bailed: true in tool result structure
    • Adds note: "One or more subagents produced final output. No further processing needed."

    3. Step Handler Level (createStepHandler in agent.ts):

    • Detects bail during step execution when tool results arrive
    • Creates BailError and aborts execution via abortController.abort(bailError)
    • Stores bailed result in systemContext for retrieval
    • Works for both generateText and streamText

    4. Catch Block Level (method-specific handling):

    • generateText: Catches BailError, retrieves bailed result from systemContext, applies guardrails, calls hooks, returns as successful generation
    • streamText: onError catches BailError gracefully (not logged as error), onFinish retrieves and uses bailed result

    This unified abort-based implementation ensures true early termination for all generation methods.
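
    To make the mechanism above concrete, here is a minimal sketch of the step-handler detection, assuming illustrative names (the tool result shape, the systemContext key, and the handler signature are assumptions, not the actual VoltAgent internals):

    class BailError extends Error {
      constructor(public readonly bailedResult: string) {
        super("Subagent bailed; supervisor processing skipped");
        this.name = "BailError";
      }
    }
    
    // Hypothetical step handler: inspect tool results as they arrive and abort on bail.
    function handleStepFinish(
      step: { toolResults?: Array<{ output?: { bailed?: boolean; result?: string } }> },
      systemContext: Map<string, unknown>,
      abortController: AbortController
    ): void {
      for (const toolResult of step.toolResults ?? []) {
        if (toolResult.output?.bailed) {
          const bailError = new BailError(toolResult.output.result ?? "");
          // Stored here so the catch block (generateText) or onFinish (streamText) can return it.
          systemContext.set("bailedResult", bailError.bailedResult);
          // Aborting with the BailError stops execution immediately for both methods.
          abortController.abort(bailError);
          return;
        }
      }
    }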

    Stream Support (NEW)

    For streamText supervisors:

    When a subagent bails during streaming, the supervisor stream is immediately aborted using a BailError:

    1. Detection during streaming (createStepHandler):
      • Tool results are checked in onStepFinish handler
      • If bailed: true found, BailError is created and stream is aborted via abortController.abort(bailError)
      • Bailed result stored in systemContext for retrieval in onFinish
    2. Graceful error handling (streamText onError):
      • BailError is detected and handled gracefully (not logged as error)
      • Error hooks are NOT called for bail
      • Stream abort is treated as successful early termination
    3. Final result (streamText onFinish):
      • Bailed result retrieved from systemContext
      • Output guardrails applied to bailed result
      • onEnd hook called with bailed result

    Benefits for streaming:

    • ✅ Stream stops immediately when bail detected (no wasted supervisor chunks)
    • ✅ No unnecessary LLM calls after bail
    • ✅ Works with fullStreamEventForwarding - subagent chunks already forwarded
    • ✅ Clean abort semantic with BailError class
    • ✅ Graceful handling - not treated as error

    Supported methods:

    • generateText - Aborts execution during step handler, catches BailError and returns bailed result
    • streamText - Aborts stream during step handler, handles BailError in onError and onFinish
    • generateObject - No tool support, bail not applicable
    • streamObject - No tool support, bail not applicable

    Key difference from initial implementation:

    • OLD: Post-execution check in generateText (after AI SDK completes) - redundant
    • NEW: Unified abort mechanism in createStepHandler - works for both methods, stops execution immediately

    Use Cases

    Perfect for scenarios where specialized subagents generate final outputs:

    1. JSON/Structured data generators: Workout builders, report generators
    2. Large content producers: Document creators, data exports
    3. Token optimization: Skip processing for expensive results
    4. Business logic: Conditional routing based on result characteristics

    Observability

    When bail occurs, both logging and OpenTelemetry tracking provide full visibility:

    Logging:

    • Log event: Supervisor bailed after handoff
    • Includes: supervisor name, subagent name, result length, transformation status

    OpenTelemetry:

    • Span event: supervisor.handoff.bailed (for timeline events)
    • Span attributes added to both supervisor and subagent spans:
      • bailed: true
      • bail.supervisor: supervisor agent name (on subagent span)
      • bail.subagent: subagent name (on supervisor span)
      • bail.transformed: true if result was transformed

    Console Visualization:
    Bailed subagents are visually distinct in the observability react-flow view:

    • Purple border with shadow (border-purple-500 shadow-purple-600/50)
    • "⚡ BAILED" badge in the header (shows "⚡ BAILED (T)" if transformed)
    • Tooltip showing which supervisor initiated the bail
    • Node opacity remains at 1.0 (fully visible)
    • Status badge shows "BAILED" with purple styling instead of error
    • Details panel shows "Early Termination" info section with supervisor info

    Type Safety Improvements

    Also improved type safety by replacing usage?: any with proper UsageInfo type:

    export type UsageInfo = {
      promptTokens: number;
      completionTokens: number;
      totalTokens: number;
      cachedInputTokens?: number;
      reasoningTokens?: number;
    };

    This provides:

    • ✅ Better autocomplete in IDEs
    • ✅ Compile-time type checking
    • ✅ Clear documentation of available fields

    Breaking Changes

    None - this is a purely additive feature. The UsageInfo type structure is fully compatible with existing code.

Patch Changes

  • #754 c80d18f Thanks @omeraplak! - feat: encapsulate tool-specific metadata in toolContext + prevent AI SDK context collision

    Changes

    1. Tool Context Encapsulation

    Tool-specific metadata is now organized under the optional toolContext field for better separation and future-proofing.

    Migration:

    // Before
    execute: async ({ location }, options) => {
      // Fields were flat (planned, not released)
    };
    
    // After
    execute: async ({ location }, options) => {
      const { name, callId, messages, abortSignal } = options?.toolContext || {};
    
      // Session context remains flat
      const userId = options?.userId;
      const logger = options?.logger;

...

@voltagent/[email protected]

02 Nov 21:17
522eb96

Patch Changes

@voltagent/[email protected]

31 Oct 04:01
a9d5023

Patch Changes

  • #744 e9e467a Thanks @marinoska! - Refactor ToolManager into hierarchical architecture with BaseToolManager and ToolkitManager

    Introduces new class hierarchy for improved tool management:

    • BaseToolManager: Abstract base class with core tool management functionality
    • ToolManager: Main manager supporting standalone tools, provider tools, and toolkits
    • ToolkitManager: Specialized manager for toolkit-scoped tools (no nested toolkits)

    Features:

    • Enhanced type-safe tool categorization with type guards
    • Conflict detection for toolkit tools
    • Reorganized tool preparation process - moved prepareToolsForExecution logic from agent into ToolManager, simplifying agent code

    Public API remains compatible.
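
    As a rough, purely illustrative sketch of that hierarchy (the member names below are assumptions; only the three class names come from this change):

    abstract class BaseToolManager {
      protected readonly tools = new Map<string, { name: string }>();
    
      addTool(tool: { name: string }): void {
        // Conflict detection: reject duplicate tool names within one manager.
        if (this.tools.has(tool.name)) {
          throw new Error(`Tool "${tool.name}" is already registered`);
        }
        this.tools.set(tool.name, tool);
      }
    }
    
    // Toolkit-scoped tools only; nested toolkits are not supported.
    class ToolkitManager extends BaseToolManager {}
    
    // Top-level manager: standalone tools, provider tools, and whole toolkits.
    class ToolManager extends BaseToolManager {
      private readonly toolkits = new Map<string, ToolkitManager>();
    
      addToolkit(name: string, toolkit: ToolkitManager): void {
        this.toolkits.set(name, toolkit);
      }
    }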

  • #752 002ebad Thanks @omeraplak! - fix: forward AI SDK tool call metadata (including toolCallId) to server-side tool executions - #746

    Tool wrappers now receive the full options object from the AI SDK, so custom tools and hook listeners can access toolCallId, abort signals, and other metadata. We also propagate the real call id to OpenTelemetry spans. Existing tools keep working (the extra argument is optional), but they can now inspect the third options parameter if they need richer context.

@voltagent/[email protected]

25 Oct 17:10
825aeb2

Patch Changes

  • #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

    The Problem

    When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.

    Symptoms:

    • Timeline showed events during execution
    • Timeline cleared/reset when workflow completed
    • No execution history for completed workflows
    • Events were lost after browser refresh

    The Solution

    Backend (Framework):

    • Added events, output, and cancellation fields to WorkflowStateEntry interface
    • Modified workflow execution to collect all stream events in memory during execution
    • Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
    • Updated all storage adapters to support the new fields:
      • LibSQL: Added schema columns + automatic migration method (addWorkflowStateColumns)
      • Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
      • Postgres: Added schema columns + INSERT/UPDATE queries
      • In-Memory: Automatically supported via TypeScript interface

    Frontend (Console):

    • Updated WorkflowPlaygroundProvider to include events when converting WorkflowStateEntry to WorkflowHistoryEntry
    • Implemented a smart merge strategy for WebSocket updates (see the sketch after this list): use backend-persisted events when the workflow finishes, keep streaming events during execution
    • Events are now preserved across page refreshes and always visible in timeline UI
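
    A minimal sketch of that merge strategy (the WorkflowExecution shape and status values are assumptions for illustration, not the Console's actual types):

    type TimelineEvent = { id: string; type: string; status: string };
    type WorkflowExecution = { status: string; events?: TimelineEvent[] };
    
    function mergeTimelineEvents(
      current: WorkflowExecution,
      incoming: WorkflowExecution
    ): TimelineEvent[] {
      const finished = ["completed", "suspended", "error", "cancelled"].includes(incoming.status);
      // Once the workflow has finished, trust the events persisted by the backend;
      // while it is still running, keep the events accumulated from the stream.
      return finished && incoming.events?.length ? incoming.events : (current.events ?? []);
    }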

    What Gets Persisted

    // In WorkflowStateEntry (stored in Memory V2):
    {
      "events": [
        {
          "id": "evt_123",
          "type": "workflow-start",
          "name": "Workflow Started",
          "startTime": "2025-01-24T10:00:00Z",
          "status": "running",
          "input": { "userId": "123" }
        },
        {
          "id": "evt_124",
          "type": "step-complete",
          "name": "Step: fetch-user",
          "startTime": "2025-01-24T10:00:01Z",
          "endTime": "2025-01-24T10:00:02Z",
          "status": "success",
          "output": { "user": { "name": "John" } }
        }
      ],
      "output": { "result": "success" },
      "cancellation": {
        "cancelledAt": "2025-01-24T10:00:05Z",
        "reason": "User requested cancellation"
      }
    }

    Migration Guide

    LibSQL Users

    No action required - migrations run automatically on next initialization.

    Supabase Users

    When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:

    -- Add workflow event persistence columns
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS events JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS output JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS cancellation JSONB;

    Postgres Users

    No action required - migrations run automatically on next initialization.

    In-Memory Users

    No action required - automatically supported.

    VoltAgent Managed Memory Users

    No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:

    • Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
    • Run automatic column addition migration for existing databases (lazy migration on first request)
    • Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields

    Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.

    Impact

    • ✅ Workflow execution timeline is now persistent and survives completion
    • ✅ Full execution history visible for completed, suspended, and failed workflows
    • ✅ Events, output, and cancellation metadata preserved in database
    • ✅ Console UI timeline works consistently across all workflow states
    • ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
    • ✅ No data loss on workflow completion or page refresh

@voltagent/[email protected]

25 Oct 17:10
825aeb2

Patch Changes

  • #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

    The Problem

    When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.

    Symptoms:

    • Timeline showed events during execution
    • Timeline cleared/reset when workflow completed
    • No execution history for completed workflows
    • Events were lost after browser refresh

    The Solution

    Backend (Framework):

    • Added events, output, and cancellation fields to WorkflowStateEntry interface
    • Modified workflow execution to collect all stream events in memory during execution
    • Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
    • Updated all storage adapters to support the new fields:
      • LibSQL: Added schema columns + automatic migration method (addWorkflowStateColumns)
      • Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
      • Postgres: Added schema columns + INSERT/UPDATE queries
      • In-Memory: Automatically supported via TypeScript interface

    Frontend (Console):

    • Updated WorkflowPlaygroundProvider to include events when converting WorkflowStateEntry to WorkflowHistoryEntry
    • Implemented smart merge strategy for WebSocket updates: Use backend persisted events when workflow finishes, keep streaming events during execution
    • Events are now preserved across page refreshes and always visible in timeline UI

    What Gets Persisted

    // In WorkflowStateEntry (stored in Memory V2):
    {
      "events": [
        {
          "id": "evt_123",
          "type": "workflow-start",
          "name": "Workflow Started",
          "startTime": "2025-01-24T10:00:00Z",
          "status": "running",
          "input": { "userId": "123" }
        },
        {
          "id": "evt_124",
          "type": "step-complete",
          "name": "Step: fetch-user",
          "startTime": "2025-01-24T10:00:01Z",
          "endTime": "2025-01-24T10:00:02Z",
          "status": "success",
          "output": { "user": { "name": "John" } }
        }
      ],
      "output": { "result": "success" },
      "cancellation": {
        "cancelledAt": "2025-01-24T10:00:05Z",
        "reason": "User requested cancellation"
      }
    }

    Migration Guide

    LibSQL Users

    No action required - migrations run automatically on next initialization.

    Supabase Users

    When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:

    -- Add workflow event persistence columns
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS events JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS output JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS cancellation JSONB;

    Postgres Users

    No action required - migrations run automatically on next initialization.

    In-Memory Users

    No action required - automatically supported.

    VoltAgent Managed Memory Users

    No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:

    • Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
    • Run automatic column addition migration for existing databases (lazy migration on first request)
    • Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields

    Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.

    Impact

    • ✅ Workflow execution timeline is now persistent and survives completion
    • ✅ Full execution history visible for completed, suspended, and failed workflows
    • ✅ Events, output, and cancellation metadata preserved in database
    • ✅ Console UI timeline works consistently across all workflow states
    • ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
    • ✅ No data loss on workflow completion or page refresh

@voltagent/[email protected]

25 Oct 17:10
825aeb2

Patch Changes

  • #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

    The Problem

    When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.

    Symptoms:

    • Timeline showed events during execution
    • Timeline cleared/reset when workflow completed
    • No execution history for completed workflows
    • Events were lost after browser refresh

    The Solution

    Backend (Framework):

    • Added events, output, and cancellation fields to WorkflowStateEntry interface
    • Modified workflow execution to collect all stream events in memory during execution
    • Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
    • Updated all storage adapters to support the new fields:
      • LibSQL: Added schema columns + automatic migration method (addWorkflowStateColumns)
      • Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
      • Postgres: Added schema columns + INSERT/UPDATE queries
      • In-Memory: Automatically supported via TypeScript interface

    Frontend (Console):

    • Updated WorkflowPlaygroundProvider to include events when converting WorkflowStateEntry to WorkflowHistoryEntry
    • Implemented smart merge strategy for WebSocket updates: Use backend persisted events when workflow finishes, keep streaming events during execution
    • Events are now preserved across page refreshes and always visible in timeline UI

    What Gets Persisted

    // In WorkflowStateEntry (stored in Memory V2):
    {
      "events": [
        {
          "id": "evt_123",
          "type": "workflow-start",
          "name": "Workflow Started",
          "startTime": "2025-01-24T10:00:00Z",
          "status": "running",
          "input": { "userId": "123" }
        },
        {
          "id": "evt_124",
          "type": "step-complete",
          "name": "Step: fetch-user",
          "startTime": "2025-01-24T10:00:01Z",
          "endTime": "2025-01-24T10:00:02Z",
          "status": "success",
          "output": { "user": { "name": "John" } }
        }
      ],
      "output": { "result": "success" },
      "cancellation": {
        "cancelledAt": "2025-01-24T10:00:05Z",
        "reason": "User requested cancellation"
      }
    }

    Migration Guide

    LibSQL Users

    No action required - migrations run automatically on next initialization.

    Supabase Users

    When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:

    -- Add workflow event persistence columns
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS events JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS output JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS cancellation JSONB;

    Postgres Users

    No action required - migrations run automatically on next initialization.

    In-Memory Users

    No action required - automatically supported.

    VoltAgent Managed Memory Users

    No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:

    • Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
    • Run automatic column addition migration for existing databases (lazy migration on first request)
    • Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields

    Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.

    Impact

    • ✅ Workflow execution timeline is now persistent and survives completion
    • ✅ Full execution history visible for completed, suspended, and failed workflows
    • ✅ Events, output, and cancellation metadata preserved in database
    • ✅ Console UI timeline works consistently across all workflow states
    • ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
    • ✅ No data loss on workflow completion or page refresh

@voltagent/[email protected]

25 Oct 17:10
825aeb2

Patch Changes

  • #740 bac1f49 Thanks @marinoska! - Stable fix for the providerMetadata openai entries normalization bug: #718

  • #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

    The Problem

    When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.

    Symptoms:

    • Timeline showed events during execution
    • Timeline cleared/reset when workflow completed
    • No execution history for completed workflows
    • Events were lost after browser refresh

    The Solution

    Backend (Framework):

    • Added events, output, and cancellation fields to WorkflowStateEntry interface
    • Modified workflow execution to collect all stream events in memory during execution
    • Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
    • Updated all storage adapters to support the new fields:
      • LibSQL: Added schema columns + automatic migration method (addWorkflowStateColumns)
      • Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
      • Postgres: Added schema columns + INSERT/UPDATE queries
      • In-Memory: Automatically supported via TypeScript interface

    Frontend (Console):

    • Updated WorkflowPlaygroundProvider to include events when converting WorkflowStateEntry to WorkflowHistoryEntry
    • Implemented smart merge strategy for WebSocket updates: Use backend persisted events when workflow finishes, keep streaming events during execution
    • Events are now preserved across page refreshes and always visible in timeline UI

    What Gets Persisted

    // In WorkflowStateEntry (stored in Memory V2):
    {
      "events": [
        {
          "id": "evt_123",
          "type": "workflow-start",
          "name": "Workflow Started",
          "startTime": "2025-01-24T10:00:00Z",
          "status": "running",
          "input": { "userId": "123" }
        },
        {
          "id": "evt_124",
          "type": "step-complete",
          "name": "Step: fetch-user",
          "startTime": "2025-01-24T10:00:01Z",
          "endTime": "2025-01-24T10:00:02Z",
          "status": "success",
          "output": { "user": { "name": "John" } }
        }
      ],
      "output": { "result": "success" },
      "cancellation": {
        "cancelledAt": "2025-01-24T10:00:05Z",
        "reason": "User requested cancellation"
      }
    }

    Migration Guide

    LibSQL Users

    No action required - migrations run automatically on next initialization.

    Supabase Users

    When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:

    -- Add workflow event persistence columns
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS events JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS output JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS cancellation JSONB;

    Postgres Users

    No action required - migrations run automatically on next initialization.

    In-Memory Users

    No action required - automatically supported.

    VoltAgent Managed Memory Users

    No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:

    • Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
    • Run automatic column addition migration for existing databases (lazy migration on first request)
    • Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields

    Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.

    Impact

    • ✅ Workflow execution timeline is now persistent and survives completion
    • ✅ Full execution history visible for completed, suspended, and failed workflows
    • ✅ Events, output, and cancellation metadata preserved in database
    • ✅ Console UI timeline works consistently across all workflow states
    • ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
    • ✅ No data loss on workflow completion or page refresh
  • #743 55e3555 Thanks @omeraplak! - feat: add OperationContext support to Memory adapters for dynamic runtime behavior

    The Problem

    Memory adapters (InMemory, PostgreSQL, custom) had fixed configuration at instantiation time. Users couldn't:

    1. Pass different memory limits per generateText() call (e.g., 10 messages for quick responses, 100 for summaries)
    2. Access agent execution context (logger, tracing, abort signals) within memory operations
    3. Implement context-aware memory behavior without modifying adapter configuration

    The Solution

    Framework (VoltAgent Core):

    • Added optional context?: OperationContext parameter to all StorageAdapter methods
    • Memory adapters now receive full agent execution context including:
      • context.context - User-provided key-value map for dynamic parameters
      • context.logger - Contextual logger for debugging
      • context.traceContext - OpenTelemetry tracing integration
      • context.abortController - Cancellation support
      • userId, conversationId, and other operation metadata

    Type Safety:

    • Replaced any types with proper OperationContext type
    • No circular dependencies (type-only imports)
    • Full IDE autocomplete support

    Usage Example

    Dynamic Memory Limits

    import { Agent, Memory, InMemoryStorageAdapter } from "@voltagent/core";
    import type { OperationContext } from "@voltagent/core/agent";
    
    class DynamicMemoryAdapter extends InMemoryStorageAdapter {
      async getMessages(
        userId: string,
        conversationId: string,
        options?: GetMessagesOptions,
        context?: OperationContext
      ): Promise<UIMessage[]> {
        // Extract dynamic limit from context
        const dynamicLimit = context?.context.get("memoryLimit") as number;
        return super.getMessages(
          userId,
          conversationId,
          {
            ...options,
            limit: dynamicLimit || options?.limit || 10,
          },
          context
        );
      }
    }
    
    const agent = new Agent({
      memory: new Memory({ storage: new DynamicMemoryAdapter() }),
    });
    
    // Short context for quick queries
    await agent.generateText("Quick question", {
      context: new Map([["memoryLimit", 5]]),
    });
    
    // Long context for detailed analysis
    await agent.generateText("Summarize everything", {
      context: new Map([["memoryLimit", 100]]),
    });

    Access Logger and Tracing

    class ObservableMemoryAdapter extends InMemoryStorageAdapter {
      async getMessages(
        userId: string,
        conversationId: string,
        options?: GetMessagesOptions,
        context?: OperationContext
      ): Promise<UIMessage[]> {
        context?.logger.debug("Fetching messages", {
          traceId: context.traceContext.getTraceId(),
          userId,
        });
        return super.getMessages(userId, conversationId, options, context);
      }
    }

    Impact

    • Dynamic behavior per request without changing adapter configuration
    • Full observability - Access to logger, tracing, and operation metadata
    • Type-safe - Proper TypeScript types with IDE autocomplete
    • Backward compatible - Context parameter is optional
    • Extensible - Custom adapters can implement context-aware logic

    Breaking Changes

    None - the context parameter is optional on all methods.

@voltagent/[email protected]

24 Oct 18:35
6892fac

Patch Changes

  • #734 2084fd4 Thanks @omeraplak! - fix: add URL path support for single package updates and resolve 404 errors

    The Problem

    The update endpoint only accepted package names via request body (POST /updates with { "packageName": "@voltagent/core" }), but users expected to be able to specify the package name directly in the URL path (POST /updates/@voltagent/core). This caused 404 errors when trying to update individual packages using the more intuitive URL-based approach.

    The Solution

    Added a new route POST /updates/:packageName that accepts the package name as a URL parameter, providing a more RESTful API design while maintaining backward compatibility with the existing body-based approach.

    New Routes Available:

    • POST /updates/@voltagent/core - Update single package (package name in URL path)
    • POST /updates with body { "packageName": "@voltagent/core" } - Update single package (package name in body)
    • POST /updates with no body - Update all VoltAgent packages

    Package Manager Detection:
    The system automatically detects your package manager based on lock files:

    • pnpm-lock.yaml → uses pnpm add
    • yarn.lock → uses yarn add
    • package-lock.json → uses npm install
    • bun.lockb → uses bun add
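
    For illustration, detection along these lines could look like the following sketch (detectPackageManager is a hypothetical name, not the actual server-core function):

    import { existsSync } from "node:fs";
    import { join } from "node:path";
    
    type PackageManager = "pnpm" | "yarn" | "npm" | "bun";
    
    function detectPackageManager(projectRoot: string): PackageManager {
      // Check lock files in priority order; fall back to npm (package-lock.json or none).
      if (existsSync(join(projectRoot, "pnpm-lock.yaml"))) return "pnpm";
      if (existsSync(join(projectRoot, "yarn.lock"))) return "yarn";
      if (existsSync(join(projectRoot, "bun.lockb"))) return "bun";
      return "npm";
    }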

    Usage Example

    // Update a single package using URL path
    fetch("http://localhost:3141/updates/@voltagent/core", {
      method: "POST",
    });
    
    // Or using the body parameter (backward compatible)
    fetch("http://localhost:3141/updates", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ packageName: "@voltagent/core" }),
    });
    
    // Update all packages
    fetch("http://localhost:3141/updates", {
      method: "POST",
    });
  • #736 348bda0 Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

    The Problem

    When users configured a custom logger with a specific log level (e.g., level: "error"), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

    1. LoggerProxy was forwarding all log calls to the underlying logger without checking the configured level
    2. Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
    3. OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs

    The Solution

    Framework Changes:

    • Updated LoggerProxy to check configured log level before forwarding to console/stdout
    • Added a shouldLog(level) method that inspects the underlying logger's level (supports both Pino and ConsoleLogger); a simplified sketch follows this list
    • Separated console output filtering from OpenTelemetry emission:
      • Console/stdout: Respects configured level (error level → only shows error/fatal)
      • OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)
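
    A rough sketch of that level check, assuming standard Pino-style level ordering (shouldLog here is a simplified stand-in for the LoggerProxy method, not its actual source):

    const LEVELS = ["trace", "debug", "info", "warn", "error", "fatal"] as const;
    type LogLevel = (typeof LEVELS)[number];
    
    function shouldLog(configuredLevel: LogLevel, messageLevel: LogLevel): boolean {
      // Console output only for messages at or above the configured level;
      // the OpenTelemetry pipeline bypasses this check and receives every log.
      return LEVELS.indexOf(messageLevel) >= LEVELS.indexOf(configuredLevel);
    }
    
    shouldLog("error", "debug"); // false - hidden from the console
    shouldLog("error", "fatal"); // true  - shown on the console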

    What Gets Fixed:

    const logger = createPinoLogger({ level: "error" });
    
    logger.debug("Agent created");
    // Console: ❌ Hidden (keeps dev environment clean)
    // OpenTelemetry: ✅ Sent (full observability)
    
    logger.error("Generation failed");
    // Console: ✅ Shown (important errors visible)
    // OpenTelemetry: ✅ Sent (full observability)

    Impact

    • Cleaner Development: Console output now respects configured log levels
    • Full Observability: OpenTelemetry platforms receive all logs regardless of console level
    • Better Debugging: Debug/trace logs available in observability tools even in production
    • No Breaking Changes: Existing code works as-is with improved behavior

    Usage

    No code changes needed - the fix applies automatically:

    // Create logger with error level
    const logger = createPinoLogger({
      level: "error",
      name: "my-app",
    });
    
    // Use it with VoltAgent
    new VoltAgent({
      agents: { myAgent },
      logger, // Console will be clean, OpenTelemetry gets everything
    });

    Migration Notes

    If you were working around this issue by:

    • Filtering console output manually
    • Using different loggers for different components
    • Avoiding debug logs altogether

    You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.

  • Updated dependencies [2084fd4, 348bda0]: