
Releases: VoltAgent/voltagent

@voltagent/[email protected]

25 Oct 17:10
825aeb2


Patch Changes

  • #740 bac1f49 Thanks @marinoska! - Stable fix for the normalization bug affecting openai entries in providerMetadata - #718

  • #738 d3ed347 Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

    The Problem

    When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.

    Symptoms:

    • Timeline showed events during execution
    • Timeline cleared/reset when workflow completed
    • No execution history for completed workflows
    • Events were lost after browser refresh

    The Solution

    Backend (Framework):

    • Added events, output, and cancellation fields to the WorkflowStateEntry interface (sketched after this list)
    • Modified workflow execution to collect all stream events in memory during execution
    • Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
    • Updated all storage adapters to support the new fields:
      • LibSQL: Added schema columns + automatic migration method (addWorkflowStateColumns)
      • Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
      • Postgres: Added schema columns + INSERT/UPDATE queries
      • In-Memory: Automatically supported via TypeScript interface
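
    A minimal sketch of the interface additions (the field names match the persisted shape shown under "What Gets Persisted" below; everything else here is illustrative, not the actual source):

    // Illustrative event shape, trimmed to the fields shown below
    type WorkflowStreamEvent = {
      id: string;
      type: string; // e.g. "workflow-start", "step-complete"
      startTime: string;
      status: string;
    };

    // Hypothetical view of the new WorkflowStateEntry fields
    interface WorkflowStateEntry {
      // ...existing fields (id, status, ...)
      events?: WorkflowStreamEvent[]; // stream events collected during execution
      output?: unknown; // final workflow output
      cancellation?: {
        cancelledAt: string; // ISO timestamp
        reason?: string;
      };
    }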

    Frontend (Console):

    • Updated WorkflowPlaygroundProvider to include events when converting WorkflowStateEntry → WorkflowHistoryEntry
    • Implemented a smart merge strategy for WebSocket updates: use backend-persisted events once the workflow finishes, and keep streaming events while it is still running (see the sketch after this list)
    • Events are now preserved across page refreshes and always visible in timeline UI
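
    A minimal sketch of that merge strategy (names and statuses are illustrative, not the actual Console code):

    type TimelineEvent = { id: string; type: string; status?: string };

    // Prefer backend-persisted events once the run is terminal;
    // keep the events accumulated from the stream while it is running.
    function mergeTimelineEvents(
      status: string,
      persistedEvents: TimelineEvent[] | undefined,
      streamingEvents: TimelineEvent[]
    ): TimelineEvent[] {
      const isTerminal = ["completed", "suspended", "error", "cancelled"].includes(status);
      return isTerminal && persistedEvents?.length ? persistedEvents : streamingEvents;
    }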

    What Gets Persisted

    // In WorkflowStateEntry (stored in Memory V2):
    {
      "events": [
        {
          "id": "evt_123",
          "type": "workflow-start",
          "name": "Workflow Started",
          "startTime": "2025-01-24T10:00:00Z",
          "status": "running",
          "input": { "userId": "123" }
        },
        {
          "id": "evt_124",
          "type": "step-complete",
          "name": "Step: fetch-user",
          "startTime": "2025-01-24T10:00:01Z",
          "endTime": "2025-01-24T10:00:02Z",
          "status": "success",
          "output": { "user": { "name": "John" } }
        }
      ],
      "output": { "result": "success" },
      "cancellation": {
        "cancelledAt": "2025-01-24T10:00:05Z",
        "reason": "User requested cancellation"
      }
    }

    Migration Guide

    LibSQL Users

    No action required - migrations run automatically on next initialization.

    Supabase Users

    When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:

    -- Add workflow event persistence columns
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS events JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS output JSONB;
    
    ALTER TABLE voltagent_workflow_states
    ADD COLUMN IF NOT EXISTS cancellation JSONB;

    Postgres Users

    No action required - migrations run automatically on next initialization.

    In-Memory Users

    No action required - automatically supported.

    VoltAgent Managed Memory Users

    No action required - migrations run automatically on the first request to each managed memory database after the API deployment. The API has been updated to:

    • Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
    • Run automatic column addition migration for existing databases (lazy migration on first request)
    • Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields

    Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.
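
    A sketch of what that lazy migration could look like (illustrative only; the function name and client shape are assumptions, not the actual provisioner code):

    // Idempotently add the new columns on first access to an existing database
    async function ensureWorkflowStateColumns(run: (sql: string) => Promise<void>) {
      for (const column of ["events", "output", "cancellation"]) {
        await run(
          `ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS ${column} JSONB`
        );
      }
    }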

    Impact

    • ✅ Workflow execution timeline is now persistent and survives completion
    • ✅ Full execution history visible for completed, suspended, and failed workflows
    • ✅ Events, output, and cancellation metadata preserved in database
    • ✅ Console UI timeline works consistently across all workflow states
    • ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
    • ✅ No data loss on workflow completion or page refresh
  • #743 55e3555 Thanks @omeraplak! - feat: add OperationContext support to Memory adapters for dynamic runtime behavior

    The Problem

    Memory adapters (InMemory, PostgreSQL, custom) had fixed configuration at instantiation time. Users couldn't:

    1. Pass different memory limits per generateText() call (e.g., 10 messages for quick responses, 100 for summaries)
    2. Access agent execution context (logger, tracing, abort signals) within memory operations
    3. Implement context-aware memory behavior without modifying adapter configuration

    The Solution

    Framework (VoltAgent Core):

    • Added optional context?: OperationContext parameter to all StorageAdapter methods
    • Memory adapters now receive full agent execution context including:
      • context.context - User-provided key-value map for dynamic parameters
      • context.logger - Contextual logger for debugging
      • context.traceContext - OpenTelemetry tracing integration
      • context.abortController - Cancellation support
      • userId, conversationId, and other operation metadata

    Type Safety:

    • Replaced any types with proper OperationContext type
    • No circular dependencies (type-only imports)
    • Full IDE autocomplete support

    Usage Example

    Dynamic Memory Limits

    import { Agent, Memory, InMemoryStorageAdapter } from "@voltagent/core";
    import type { OperationContext } from "@voltagent/core/agent";
    // Option/message types used below; assumed to be exported from core
    import type { GetMessagesOptions, UIMessage } from "@voltagent/core";
    
    class DynamicMemoryAdapter extends InMemoryStorageAdapter {
      async getMessages(
        userId: string,
        conversationId: string,
        options?: GetMessagesOptions,
        context?: OperationContext
      ): Promise<UIMessage[]> {
        // Extract dynamic limit from context
        const dynamicLimit = context?.context.get("memoryLimit") as number | undefined;
        return super.getMessages(
          userId,
          conversationId,
          {
            ...options,
            limit: dynamicLimit || options?.limit || 10,
          },
          context
        );
      }
    }
    
    const agent = new Agent({
      memory: new Memory({ storage: new DynamicMemoryAdapter() }),
    });
    
    // Short context for quick queries
    await agent.generateText("Quick question", {
      context: new Map([["memoryLimit", 5]]),
    });
    
    // Long context for detailed analysis
    await agent.generateText("Summarize everything", {
      context: new Map([["memoryLimit", 100]]),
    });

    Access Logger and Tracing

    class ObservableMemoryAdapter extends InMemoryStorageAdapter {
      async getMessages(
        userId: string,
        conversationId: string,
        options?: GetMessagesOptions,
        context?: OperationContext
      ): Promise<UIMessage[]> {
        if (context) {
          context.logger.debug("Fetching messages", {
            traceId: context.traceContext.getTraceId(),
            userId,
          });
        }
        return super.getMessages(userId, conversationId, options, context);
      }
    }

    Impact

    • Dynamic behavior per request without changing adapter configuration
    • Full observability - Access to logger, tracing, and operation metadata
    • Type-safe - Proper TypeScript types with IDE autocomplete
    • Backward compatible - Context parameter is optional
    • Extensible - Custom adapters can implement context-aware logic

    Breaking Changes

    None - the context parameter is optional on all methods.

@voltagent/[email protected]

24 Oct 18:35
6892fac


Patch Changes

  • #734 2084fd4 Thanks @omeraplak! - fix: add URL path support for single package updates and resolve 404 errors

    The Problem

    The update endpoint only accepted package names via request body (POST /updates with { "packageName": "@voltagent/core" }), but users expected to be able to specify the package name directly in the URL path (POST /updates/@voltagent/core). This caused 404 errors when trying to update individual packages using the more intuitive URL-based approach.

    The Solution

    Added a new route POST /updates/:packageName that accepts the package name as a URL parameter, providing a more RESTful API design while maintaining backward compatibility with the existing body-based approach.

    New Routes Available:

    • POST /updates/@voltagent/core - Update single package (package name in URL path)
    • POST /updates with body { "packageName": "@voltagent/core" } - Update single package (package name in body)
    • POST /updates with no body - Update all VoltAgent packages

    Package Manager Detection:
    The system automatically detects your package manager from lock files (a detection sketch follows the list):

    • pnpm-lock.yaml → uses pnpm add
    • yarn.lock → uses yarn add
    • package-lock.json → uses npm install
    • bun.lockb → uses bun add
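
    A minimal sketch of that detection (illustrative; the actual server code may differ):

    import { existsSync } from "node:fs";
    import { join } from "node:path";

    type PackageManager = "pnpm" | "yarn" | "npm" | "bun";

    function detectPackageManager(projectRoot: string): PackageManager {
      if (existsSync(join(projectRoot, "pnpm-lock.yaml"))) return "pnpm";
      if (existsSync(join(projectRoot, "yarn.lock"))) return "yarn";
      if (existsSync(join(projectRoot, "bun.lockb"))) return "bun";
      return "npm"; // package-lock.json, or the default when no lock file is found
    }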

    Usage Example

    // Update a single package using URL path
    fetch("http://localhost:3141/updates/@voltagent/core", {
      method: "POST",
    });
    
    // Or using the body parameter (backward compatible)
    fetch("http://localhost:3141/updates", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ packageName: "@voltagent/core" }),
    });
    
    // Update all packages
    fetch("http://localhost:3141/updates", {
      method: "POST",
    });
  • #736 348bda0 Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

    The Problem

    When users configured a custom logger with a specific log level (e.g., level: "error"), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

    1. LoggerProxy was forwarding all log calls to the underlying logger without checking the configured level
    2. Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
    3. OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs

    The Solution

    Framework Changes:

    • Updated LoggerProxy to check configured log level before forwarding to console/stdout
    • Added a shouldLog(level) method that inspects the underlying logger's level (supports both Pino and ConsoleLogger); a sketch follows this list
    • Separated console output filtering from OpenTelemetry emission:
      • Console/stdout: Respects configured level (error level → only shows error/fatal)
      • OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)
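
    A sketch of that level check (illustrative; the real method reads the level off the underlying Pino or ConsoleLogger instance):

    type LogLevel = "trace" | "debug" | "info" | "warn" | "error" | "fatal";
    const LEVEL_ORDER: LogLevel[] = ["trace", "debug", "info", "warn", "error", "fatal"];

    // True when `level` meets the configured threshold, so console output
    // for lower levels is skipped while OpenTelemetry still gets every record.
    function shouldLog(configured: LogLevel, level: LogLevel): boolean {
      return LEVEL_ORDER.indexOf(level) >= LEVEL_ORDER.indexOf(configured);
    }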

    What Gets Fixed:

    const logger = createPinoLogger({ level: "error" });
    
    logger.debug("Agent created");
    // Console: ❌ Hidden (keeps dev environment clean)
    // OpenTelemetry: ✅ Sent (full observability)
    
    logger.error("Generation failed");
    // Console: ✅ Shown (important errors visible)
    // OpenTelemetry: ✅ Sent (full observability)

    Impact

    • Cleaner Development: Console output now respects configured log levels
    • Full Observability: OpenTelemetry platforms receive all logs regardless of console level
    • Better Debugging: Debug/trace logs available in observability tools even in production
    • No Breaking Changes: Existing code works as-is with improved behavior

    Usage

    No code changes needed - the fix applies automatically:

    // Create logger with error level
    const logger = createPinoLogger({
      level: "error",
      name: "my-app",
    });
    
    // Use it with VoltAgent
    new VoltAgent({
      agents: { myAgent },
      logger, // Console will be clean, OpenTelemetry gets everything
    });

    Migration Notes

    If you were working around this issue by:

    • Filtering console output manually
    • Using different loggers for different components
    • Avoiding debug logs altogether

    You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.

  • Updated dependencies [2084fd4, 348bda0]:

@voltagent/[email protected]

24 Oct 00:37
8aaffec


Patch Changes

  • #728 3952b4b Thanks @omeraplak! - feat: automatic detection and display of custom routes in console logs and Swagger UI

    Custom routes added via configureApp callback are now automatically detected and displayed in both server startup logs and Swagger UI documentation.

    What Changed

    Previously, only OpenAPI-registered routes were visible in:

    • Server startup console logs
    • Swagger UI documentation (/ui)

    Now all custom routes are automatically detected, including:

    • Regular Hono routes (app.get(), app.post(), etc.)
    • OpenAPI routes with full documentation
    • Routes with path parameters (:id, {id})

    Usage Example

    import { honoServer } from "@voltagent/server-hono";
    
    new VoltAgent({
      agents: { myAgent },
      server: honoServer({
        configureApp: (app) => {
          // These routes are now automatically detected!
          app.get("/api/health", (c) => c.json({ status: "ok" }));
          app.post("/api/calculate", async (c) => {
            const { a, b } = await c.req.json();
            return c.json({ result: a + b });
          });
        },
      }),
    });

    Console Output

    ══════════════════════════════════════════════════
      VOLTAGENT SERVER STARTED SUCCESSFULLY
    ══════════════════════════════════════════════════
      ✓ HTTP Server:  http://localhost:3141
      ✓ Swagger UI:   http://localhost:3141/ui
    
      ✓ Registered Endpoints: 2 total
    
        Custom Endpoints
          GET    /api/health
          POST   /api/calculate
    ══════════════════════════════════════════════════
    

    Improvements

    • ✅ Extracts routes from app.routes array (includes all Hono routes)
    • ✅ Merges with OpenAPI document routes for descriptions
    • ✅ Filters out built-in VoltAgent paths using exact matching (not regex)
    • ✅ Custom routes like /agents-dashboard or /workflows-manager are now correctly detected
    • ✅ Normalizes path formatting (removes duplicate slashes)
    • ✅ Handles both :param and {param} path parameter formats
    • ✅ Adds custom routes to Swagger UI with auto-generated schemas
    • ✅ Comprehensive test coverage (44 unit tests)

    Implementation Details

    The extractCustomEndpoints() function now does the following (a simplified sketch follows the list):

    1. Extracts all routes from app.routes (regular Hono routes)
    2. Merges with OpenAPI document routes (for descriptions)
    3. Deduplicates and filters built-in VoltAgent routes
    4. Returns a complete list of custom endpoints
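
    A simplified sketch of that extraction (illustrative; Hono exposes registered routes via app.routes, but the built-in path list and filtering shown here are assumptions):

    import type { Hono } from "hono";

    const BUILT_IN_PATHS = new Set(["/agents", "/workflows", "/updates"]); // examples only

    function extractCustomEndpoints(app: Hono): Array<{ method: string; path: string }> {
      const seen = new Set<string>();
      return app.routes
        .map((r) => ({ method: r.method, path: r.path.replace(/\/{2,}/g, "/") }))
        .filter((r) => !BUILT_IN_PATHS.has(r.path)) // exact match, not regex
        .filter((r) => {
          const key = `${r.method} ${r.path}`;
          if (seen.has(key)) return false;
          seen.add(key);
          return true;
        });
    }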

    The getEnhancedOpenApiDoc() function:

    1. Adds custom routes to OpenAPI document for Swagger UI
    2. Generates response schemas for undocumented routes
    3. Preserves existing OpenAPI documentation
    4. Supports path parameters and request bodies
  • Updated dependencies [59da0b5]:

@voltagent/[email protected]

24 Oct 18:35
6892fac


Patch Changes

  • #734 2084fd4 Thanks @omeraplak! - fix: add URL path support for single package updates and resolve 404 errors

    The Problem

    The update endpoint only accepted package names via request body (POST /updates with { "packageName": "@voltagent/core" }), but users expected to be able to specify the package name directly in the URL path (POST /updates/@voltagent/core). This caused 404 errors when trying to update individual packages using the more intuitive URL-based approach.

    The Solution

    Added a new route POST /updates/:packageName that accepts the package name as a URL parameter, providing a more RESTful API design while maintaining backward compatibility with the existing body-based approach.

    New Routes Available:

    • POST /updates/@voltagent/core - Update single package (package name in URL path)
    • POST /updates with body { "packageName": "@voltagent/core" } - Update single package (package name in body)
    • POST /updates with no body - Update all VoltAgent packages

    Package Manager Detection:
    The system automatically detects your package manager based on lock files:

    • pnpm-lock.yaml → uses pnpm add
    • yarn.lock → uses yarn add
    • package-lock.json → uses npm install
    • bun.lockb → uses bun add

    Usage Example

    // Update a single package using URL path
    fetch("http://localhost:3141/updates/@voltagent/core", {
      method: "POST",
    });
    
    // Or using the body parameter (backward compatible)
    fetch("http://localhost:3141/updates", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ packageName: "@voltagent/core" }),
    });
    
    // Update all packages
    fetch("http://localhost:3141/updates", {
      method: "POST",
    });
  • Updated dependencies [348bda0]:

@voltagent/[email protected]

24 Oct 18:35
6892fac


Patch Changes

  • #736 348bda0 Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

    The Problem

    When users configured a custom logger with a specific log level (e.g., level: "error"), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

    1. LoggerProxy was forwarding all log calls to the underlying logger without checking the configured level
    2. Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
    3. OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs

    The Solution

    Framework Changes:

    • Updated LoggerProxy to check configured log level before forwarding to console/stdout
    • Added shouldLog(level) method that inspects the underlying logger's level (supports both Pino and ConsoleLogger)
    • Separated console output filtering from OpenTelemetry emission:
      • Console/stdout: Respects configured level (error level → only shows error/fatal)
      • OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)

    What Gets Fixed:

    const logger = createPinoLogger({ level: "error" });
    
    logger.debug("Agent created");
    // Console: ❌ Hidden (keeps dev environment clean)
    // OpenTelemetry: ✅ Sent (full observability)
    
    logger.error("Generation failed");
    // Console: ✅ Shown (important errors visible)
    // OpenTelemetry: ✅ Sent (full observability)

    Impact

    • Cleaner Development: Console output now respects configured log levels
    • Full Observability: OpenTelemetry platforms receive all logs regardless of console level
    • Better Debugging: Debug/trace logs available in observability tools even in production
    • No Breaking Changes: Existing code works as-is with improved behavior

    Usage

    No code changes needed - the fix applies automatically:

    // Create logger with error level
    const logger = createPinoLogger({
      level: "error",
      name: "my-app",
    });
    
    // Use it with VoltAgent
    new VoltAgent({
      agents: { myAgent },
      logger, // Console will be clean, OpenTelemetry gets everything
    });

    Migration Notes

    If you were working around this issue by:

    • Filtering console output manually
    • Using different loggers for different components
    • Avoiding debug logs altogether

    You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.

@voltagent/[email protected]

24 Oct 18:35
6892fac


Patch Changes

  • #736 348bda0 Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

    The Problem

    When users configured a custom logger with a specific log level (e.g., level: "error"), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

    1. LoggerProxy was forwarding all log calls to the underlying logger without checking the configured level
    2. Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
    3. OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs

    The Solution

    Framework Changes:

    • Updated LoggerProxy to check configured log level before forwarding to console/stdout
    • Added shouldLog(level) method that inspects the underlying logger's level (supports both Pino and ConsoleLogger)
    • Separated console output filtering from OpenTelemetry emission:
      • Console/stdout: Respects configured level (error level → only shows error/fatal)
      • OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)

    What Gets Fixed:

    const logger = createPinoLogger({ level: "error" });
    
    logger.debug("Agent created");
    // Console: ❌ Hidden (keeps dev environment clean)
    // OpenTelemetry: ✅ Sent (full observability)
    
    logger.error("Generation failed");
    // Console: ✅ Shown (important errors visible)
    // OpenTelemetry: ✅ Sent (full observability)

    Impact

    • Cleaner Development: Console output now respects configured log levels
    • Full Observability: OpenTelemetry platforms receive all logs regardless of console level
    • Better Debugging: Debug/trace logs available in observability tools even in production
    • No Breaking Changes: Existing code works as-is with improved behavior

    Usage

    No code changes needed - the fix applies automatically:

    // Create logger with error level
    const logger = createPinoLogger({
      level: "error",
      name: "my-app",
    });
    
    // Use it with VoltAgent
    new VoltAgent({
      agents: { myAgent },
      logger, // Console will be clean, OpenTelemetry gets everything
    });

    Migration Notes

    If you were working around this issue by:

    • Filtering console output manually
    • Using different loggers for different components
    • Avoiding debug logs altogether

    You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.

@voltagent/[email protected]

24 Oct 05:24
a5cb1c7


Patch Changes

  • #730 1244b3e Thanks @omeraplak! - feat: add finish reason and max steps observability to agent execution traces - #721

    The Problem

    When agents hit the maximum steps limit (via maxSteps or stopWhen conditions), execution would terminate abruptly without a clear indication in observability traces. This caused confusion because:

    1. The AI SDK's finishReason (e.g., stop, tool-calls, length, error) was not being captured in OpenTelemetry spans
    2. MaxSteps termination looked like a normal completion with finishReason: "tool-calls"
    3. Users couldn't easily debug why their agent stopped executing

    The Solution

    Framework (VoltAgent Core):

    • Added setFinishReason(finishReason: string) method to AgentTraceContext to capture AI SDK finish reasons in OpenTelemetry spans as ai.response.finish_reason attribute
    • Added setStopConditionMet(stepCount: number, maxSteps: number) method to track when maxSteps limit is reached
    • Updated agent.generateText() and agent.streamText() to automatically record:
      • ai.response.finish_reason - The AI SDK finish reason (stop, tool-calls, length, error, etc.)
      • voltagent.stopped_by_max_steps - Boolean flag when maxSteps is reached
      • voltagent.step_count - Actual number of steps executed
      • voltagent.max_steps - The maxSteps limit that was configured

    What Gets Captured:

    // In OpenTelemetry spans:
    {
      "ai.response.finish_reason": "tool-calls",
      "voltagent.stopped_by_max_steps": true,
      "voltagent.step_count": 10,
      "voltagent.max_steps": 10
    }

    Impact

    • Better Debugging: Users can now clearly see why their agent execution terminated
    • Observability: All AI SDK finish reasons are now visible in traces
    • MaxSteps Detection: Explicit tracking when agents hit step limits
    • Console UI Ready: These attributes power the warning UI in VoltOps Console that alerts users when maxSteps is reached

    Usage

    No code changes needed - this is automatically tracked for all agent executions:

    const agent = new Agent({
      name: "my-agent",
      maxSteps: 5, // Will be tracked in spans
    });
    
    await agent.generateText("Hello");
    // Span will include ai.response.finish_reason and maxSteps metadata

@voltagent/[email protected]

24 Oct 00:37
8aaffec


Patch Changes

  • #727 59da0b5 Thanks @omeraplak! - feat: add agent.toTool() for converting agents into tools

    Agents can now be converted to tools using the .toTool() method, enabling multi-agent coordination where one agent uses other specialized agents as tools. This is useful when the LLM should dynamically decide which agents to call based on the request.

    Usage Example

    import { Agent } from "@voltagent/core";
    import { openai } from "@ai-sdk/openai";
    
    // Create specialized agents
    const writerAgent = new Agent({
      id: "writer",
      purpose: "Writes blog posts",
      model: openai("gpt-4o-mini"),
    });
    
    const editorAgent = new Agent({
      id: "editor",
      purpose: "Edits content",
      model: openai("gpt-4o-mini"),
    });
    
    // Coordinator uses them as tools
    const coordinator = new Agent({
      tools: [writerAgent.toTool(), editorAgent.toTool()],
      model: openai("gpt-4o-mini"),
    });
    
    // LLM decides which agents to call
    await coordinator.generateText("Create a blog post about AI");

    Key Features

    • Dynamic agent selection: LLM intelligently chooses which agents to invoke
    • Composable agents: Reuse agents as building blocks across multiple coordinators
    • Type-safe: Full TypeScript support with automatic type inference
    • Context preservation: Automatically passes through userId, conversationId, and operation context
    • Customizable: Optional custom name, description, and parameter schema

    Customization

    import { z } from "zod";

    const customTool = agent.toTool({
      name: "professional_writer",
      description: "Writes professional blog posts",
      parametersSchema: z.object({
        topic: z.string(),
        style: z.enum(["formal", "casual"]),
      }),
    });

    When to Use

    • Use agent.toTool() when the LLM should decide which agents to call (e.g., customer support routing)
    • Use Workflows for deterministic, code-defined pipelines (e.g., always: Step A → Step B → Step C)
    • Use Sub-agents for fixed sets of collaborating agents

    See the documentation and examples/with-agent-tool for more details.

@voltagent/[email protected]

24 Oct 18:35
6892fac


Patch Changes

  • #734 2084fd4 Thanks @omeraplak! - fix: auto-detect package managers and add automatic installation to volt update command

    The Problem

    The volt update CLI command had several UX issues:

    1. Only updated package.json without installing packages
    2. Required users to manually run installation commands
    3. Always suggested npm install regardless of the user's actual package manager (pnpm, yarn, or bun)
    4. No way to skip automatic installation when needed

    This was inconsistent with the HTTP API's updateSinglePackage and updateAllPackages functions, which properly detect and use the correct package manager.

    The Solution

    Enhanced the volt update command to match the HTTP API behavior:

    Package Manager Auto-Detection:

    • Automatically detects package manager by checking lock files:
      • pnpm-lock.yaml → runs pnpm install
      • yarn.lock → runs yarn install
      • package-lock.json → runs npm install
      • bun.lockb → runs bun install

    Automatic Installation:

    • After updating package.json, automatically runs the appropriate install command
    • Shows detected package manager and installation progress
    • Works in both interactive mode and --apply mode

    Optional Skip:

    • Added --no-install flag to skip automatic installation when needed
    • Useful for CI/CD pipelines or when manual control is preferred

    Usage Examples

    Default behavior (auto-install with detected package manager):

    $ volt update
    Found 3 outdated VoltAgent packages:
      @voltagent/core: 1.1.34 → 1.1.35
      @voltagent/server-hono: 0.1.10 → 0.1.11
      @voltagent/cli: 0.0.45 → 0.0.46
    
    ✓ Updated 3 packages in package.json
    
    Detected package manager: pnpm
    Running pnpm install...
    ⠹ Installing packages...
    ✓ Packages installed successfully

    Skip automatic installation:

    $ volt update --no-install
    ✓ Updated 3 packages in package.json
    ⚠ Automatic installation skipped
    Run 'pnpm install' to install updated packages

    Non-interactive mode:

    $ volt update --apply
    ✓ Updates applied to package.json
    Detected package manager: pnpm
    Running pnpm install...
    ✓ Packages installed successfully

    Benefits

    • Better UX: No manual steps required - updates are fully automatic
    • Package Manager Respect: Uses your chosen package manager (pnpm/yarn/npm/bun)
    • Consistency: CLI now matches HTTP API behavior
    • Flexibility: --no-install flag for users who need manual control
    • CI/CD Friendly: Works seamlessly in automated workflows

@voltagent/[email protected]

23 Oct 18:56
aa2037c


Minor Changes

  • #720 91c7269 Thanks @omeraplak! - fix: simplify CORS configuration and ensure custom routes are auth-protected

    Breaking Changes

    CORS Configuration

    CORS configuration has been simplified. Instead of configuring CORS in configureApp, use the new cors field:

    Before:

    server: honoServer({
      configureApp: (app) => {
        app.use(
          "*",
          cors({
            origin: "https://your-domain.com",
            credentials: true,
          })
        );
    
        app.get("/api/health", (c) => c.json({ status: "ok" }));
      },
    });

    After (Simple global CORS):

    server: honoServer({
      cors: {
        origin: "https://your-domain.com",
        credentials: true,
      },
      configureApp: (app) => {
        app.get("/api/health", (c) => c.json({ status: "ok" }));
      },
    });

    After (Route-specific CORS):

    import { cors } from "hono/cors";
    
    server: honoServer({
      cors: false, // Disable default CORS for route-specific control
    
      configureApp: (app) => {
        // Different CORS for different routes
        app.use("/agents/*", cors({ origin: "https://agents.com" }));
        app.use("/api/public/*", cors({ origin: "*" }));
    
        app.get("/api/health", (c) => c.json({ status: "ok" }));
      },
    });

    Custom Routes Authentication

    Custom routes added via configureApp are now registered AFTER authentication middleware. This means:

    • Opt-in mode (default): Custom routes follow the same auth rules as built-in routes
    • Opt-out mode (defaultPrivate: true): Custom routes are automatically protected

    Before: Custom routes bypassed authentication unless you manually added auth middleware.

    After: Custom routes inherit authentication behavior automatically.

    Example with opt-out mode:

    server: honoServer({
      auth: jwtAuth({
        secret: process.env.JWT_SECRET,
        defaultPrivate: true, // Protect all routes by default
        publicRoutes: ["GET /api/health"],
      }),
      configureApp: (app) => {
        // This is now automatically protected
        app.get("/api/user/profile", (c) => {
          const user = c.get("authenticatedUser");
          return c.json({ user }); // user is guaranteed to exist
        });
      },
    });

    Why This Change?

    1. Security: Custom routes are no longer accidentally left unprotected
    2. Simplicity: CORS configuration is now a simple config field for common cases
    3. Flexibility: Advanced users can still use route-specific CORS with cors: false
    4. Consistency: Custom routes follow the same authentication rules as built-in routes

Patch Changes