Releases: VoltAgent/voltagent
@voltagent/[email protected]
Patch Changes
- #740 `bac1f49` Thanks @marinoska! - Stable fix for the `providerMetadata` openai entries normalization bug: #718

- #738 `d3ed347` Thanks @omeraplak! - fix: persist workflow execution timeline events to prevent data loss after completion - #647

The Problem
When workflows executed, their timeline events (step-start, step-complete, workflow-complete, etc.) were only visible during streaming. Once the workflow completed, the WebSocket state update would replace the execution object without the events field, causing the timeline UI to reset and lose all execution history. Users couldn't see what happened in completed or suspended workflows.
Symptoms:
- Timeline showed events during execution
- Timeline cleared/reset when workflow completed
- No execution history for completed workflows
- Events were lost after browser refresh
The Solution
Backend (Framework):
- Added `events`, `output`, and `cancellation` fields to the `WorkflowStateEntry` interface
- Modified workflow execution to collect all stream events in memory during execution
- Persist collected events to workflow state when workflow completes, suspends, fails, or is cancelled
- Updated all storage adapters to support the new fields:
  - LibSQL: Added schema columns + automatic migration method (`addWorkflowStateColumns`)
  - Supabase: Added schema columns + migration detection + ALTER TABLE migration SQL
  - Postgres: Added schema columns + INSERT/UPDATE queries
  - In-Memory: Automatically supported via the TypeScript interface
Frontend (Console):
- Updated `WorkflowPlaygroundProvider` to include events when converting `WorkflowStateEntry` → `WorkflowHistoryEntry`
- Implemented a smart merge strategy for WebSocket updates: use the backend's persisted events once the workflow finishes, keep streaming events during execution (see the sketch below)
- Events are now preserved across page refreshes and always visible in the timeline UI
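A simplified sketch of that merge strategy, with illustrative event and status shapes (assumptions for this example; the real logic lives in `WorkflowPlaygroundProvider`):

```ts
interface WorkflowEvent {
  id: string;
  type: string;
  status: string;
}

type WorkflowStatus = "running" | "completed" | "suspended" | "error" | "cancelled";

// While the workflow is running, the live stream is the source of truth;
// once it finishes, prefer the events the backend persisted.
function mergeTimelineEvents(
  streaming: WorkflowEvent[],
  persisted: WorkflowEvent[] | undefined,
  status: WorkflowStatus
): WorkflowEvent[] {
  if (status === "running") return streaming;
  return persisted ?? streaming;
}
```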
What Gets Persisted
```jsonc
// In WorkflowStateEntry (stored in Memory V2):
{
  "events": [
    {
      "id": "evt_123",
      "type": "workflow-start",
      "name": "Workflow Started",
      "startTime": "2025-01-24T10:00:00Z",
      "status": "running",
      "input": { "userId": "123" }
    },
    {
      "id": "evt_124",
      "type": "step-complete",
      "name": "Step: fetch-user",
      "startTime": "2025-01-24T10:00:01Z",
      "endTime": "2025-01-24T10:00:02Z",
      "status": "success",
      "output": { "user": { "name": "John" } }
    }
  ],
  "output": { "result": "success" },
  "cancellation": {
    "cancelledAt": "2025-01-24T10:00:05Z",
    "reason": "User requested cancellation"
  }
}
```
Migration Guide
LibSQL Users
No action required - migrations run automatically on next initialization.
Supabase Users
When you upgrade and initialize the adapter, you'll see migration SQL in the console. Run it in your Supabase SQL Editor:
```sql
-- Add workflow event persistence columns
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS events JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS output JSONB;
ALTER TABLE voltagent_workflow_states ADD COLUMN IF NOT EXISTS cancellation JSONB;
```
Postgres Users
No action required - migrations run automatically on next initialization.
In-Memory Users
No action required - automatically supported.
VoltAgent Managed Memory Users
No action required - migrations run automatically on first request per managed memory database after API deployment. The API has been updated to:
- Include new columns in ManagedMemoryProvisioner CREATE TABLE statements (new databases)
- Run automatic column addition migration for existing databases (lazy migration on first request)
- Update PostgreSQL memory adapter to persist and retrieve events, output, and cancellation fields
Zero-downtime deployment: Existing managed memory databases will be migrated lazily when first accessed after the API update.
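A hypothetical sketch of that lazy-migration pattern (the function, cache, and `pg` usage here are illustrative assumptions, not the actual API code):

```ts
import { Pool } from "pg";

// Remember which managed databases this process has already migrated.
const migrated = new Set<string>();

// On the first request for a database, add the new columns if missing.
async function ensureWorkflowStateColumns(db: Pool, databaseId: string): Promise<void> {
  if (migrated.has(databaseId)) return;
  await db.query(`
    ALTER TABLE voltagent_workflow_states
      ADD COLUMN IF NOT EXISTS events JSONB,
      ADD COLUMN IF NOT EXISTS output JSONB,
      ADD COLUMN IF NOT EXISTS cancellation JSONB;
  `);
  migrated.add(databaseId);
}
```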
Impact
- ✅ Workflow execution timeline is now persistent and survives completion
- ✅ Full execution history visible for completed, suspended, and failed workflows
- ✅ Events, output, and cancellation metadata preserved in database
- ✅ Console UI timeline works consistently across all workflow states
- ✅ All storage backends (LibSQL, Supabase, Postgres, In-Memory) behave consistently
- ✅ No data loss on workflow completion or page refresh
- #743 `55e3555` Thanks @omeraplak! - feat: add OperationContext support to Memory adapters for dynamic runtime behavior

The Problem
Memory adapters (InMemory, PostgreSQL, custom) had fixed configuration at instantiation time. Users couldn't:
- Pass different memory limits per `generateText()` call (e.g., 10 messages for quick responses, 100 for summaries)
- Access agent execution context (logger, tracing, abort signals) within memory operations
- Implement context-aware memory behavior without modifying adapter configuration
The Solution
Framework (VoltAgent Core):
- Added an optional `context?: OperationContext` parameter to all `StorageAdapter` methods
- Memory adapters now receive the full agent execution context, including:
  - `context.context` - User-provided key-value map for dynamic parameters
  - `context.logger` - Contextual logger for debugging
  - `context.traceContext` - OpenTelemetry tracing integration
  - `context.abortController` - Cancellation support
  - `userId`, `conversationId`, and other operation metadata
Type Safety:
- Replaced `any` types with the proper `OperationContext` type
- No circular dependencies (type-only imports)
- Full IDE autocomplete support
Usage Example
Dynamic Memory Limits
```ts
import { Agent, Memory, InMemoryStorageAdapter } from "@voltagent/core";
import type { OperationContext } from "@voltagent/core/agent";
// GetMessagesOptions and UIMessage are assumed to be exported by the core package.
import type { GetMessagesOptions, UIMessage } from "@voltagent/core";

class DynamicMemoryAdapter extends InMemoryStorageAdapter {
  async getMessages(
    userId: string,
    conversationId: string,
    options?: GetMessagesOptions,
    context?: OperationContext
  ): Promise<UIMessage[]> {
    // Extract the dynamic limit from the per-call context
    const dynamicLimit = context?.context.get("memoryLimit") as number;
    return super.getMessages(
      userId,
      conversationId,
      {
        ...options,
        limit: dynamicLimit || options?.limit || 10,
      },
      context
    );
  }
}

const agent = new Agent({
  memory: new Memory({ storage: new DynamicMemoryAdapter() }),
});

// Short context for quick queries
await agent.generateText("Quick question", {
  context: new Map([["memoryLimit", 5]]),
});

// Long context for detailed analysis
await agent.generateText("Summarize everything", {
  context: new Map([["memoryLimit", 100]]),
});
```
Access Logger and Tracing
```ts
class ObservableMemoryAdapter extends InMemoryStorageAdapter {
  async getMessages(
    userId: string,
    conversationId: string,
    options?: GetMessagesOptions,
    context?: OperationContext
  ): Promise<UIMessage[]> {
    context?.logger.debug("Fetching messages", {
      traceId: context.traceContext.getTraceId(),
      userId,
    });
    return super.getMessages(userId, conversationId, options, context);
  }
}
```
Impact
- ✅ Dynamic behavior per request without changing adapter configuration
- ✅ Full observability - Access to logger, tracing, and operation metadata
- ✅ Type-safe - Proper TypeScript types with IDE autocomplete
- ✅ Backward compatible - Context parameter is optional
- ✅ Extensible - Custom adapters can implement context-aware logic
Breaking Changes
None - the `context` parameter is optional on all methods.
@voltagent/[email protected]
Patch Changes
- #734 `2084fd4` Thanks @omeraplak! - fix: add URL path support for single package updates and resolve 404 errors

The Problem
The update endpoint only accepted package names via the request body (`POST /updates` with `{ "packageName": "@voltagent/core" }`), but users expected to be able to specify the package name directly in the URL path (`POST /updates/@voltagent/core`). This caused 404 errors when trying to update individual packages using the more intuitive URL-based approach.

The Solution
Added a new route, `POST /updates/:packageName`, that accepts the package name as a URL parameter, providing a more RESTful API design while maintaining backward compatibility with the existing body-based approach.

New Routes Available:

- `POST /updates/@voltagent/core` - Update a single package (package name in URL path)
- `POST /updates` with body `{ "packageName": "@voltagent/core" }` - Update a single package (package name in body)
- `POST /updates` with no body - Update all VoltAgent packages
Package Manager Detection:

The system automatically detects your package manager based on lock files:

- `pnpm-lock.yaml` → uses `pnpm add`
- `yarn.lock` → uses `yarn add`
- `package-lock.json` → uses `npm install`
- `bun.lockb` → uses `bun add`
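A minimal sketch of this kind of lock-file detection, assuming a plain filesystem check (illustrative only, not the package's actual code):

```ts
import { existsSync } from "node:fs";
import { join } from "node:path";

type PackageManager = "pnpm" | "yarn" | "npm" | "bun";

// Lock files mapped to their package managers, checked in order.
const LOCK_FILES: Array<[string, PackageManager]> = [
  ["pnpm-lock.yaml", "pnpm"],
  ["yarn.lock", "yarn"],
  ["package-lock.json", "npm"],
  ["bun.lockb", "bun"],
];

function detectPackageManager(projectRoot: string): PackageManager {
  for (const [lockFile, manager] of LOCK_FILES) {
    if (existsSync(join(projectRoot, lockFile))) return manager;
  }
  return "npm"; // sensible default when no lock file is present
}
```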
Usage Example
```ts
// Update a single package using the URL path
fetch("http://localhost:3141/updates/@voltagent/core", {
  method: "POST",
});

// Or using the body parameter (backward compatible)
fetch("http://localhost:3141/updates", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ packageName: "@voltagent/core" }),
});

// Update all packages
fetch("http://localhost:3141/updates", {
  method: "POST",
});
```
- #736 `348bda0` Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

The Problem

When users configured a custom logger with a specific log level (e.g., `level: "error"`), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

- `LoggerProxy` was forwarding all log calls to the underlying logger without checking the configured level
- Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
- OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs
The Solution
Framework Changes:
- Updated `LoggerProxy` to check the configured log level before forwarding to console/stdout
- Added a `shouldLog(level)` method that inspects the underlying logger's level (supports both Pino and ConsoleLogger)
- Separated console output filtering from OpenTelemetry emission:
- Console/stdout: Respects configured level (error level → only shows error/fatal)
- OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)
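A minimal sketch of the level check this enables, assuming a conventional ordered-level comparison (the actual `shouldLog` lives in `LoggerProxy`):

```ts
const LEVELS = ["trace", "debug", "info", "warn", "error", "fatal"] as const;
type Level = (typeof LEVELS)[number];

// A message reaches the console only if its level is at or above the
// configured level; OpenTelemetry emission skips this check entirely.
function shouldLog(configured: Level, incoming: Level): boolean {
  return LEVELS.indexOf(incoming) >= LEVELS.indexOf(configured);
}

shouldLog("error", "debug"); // false - hidden from the console
shouldLog("error", "fatal"); // true - shown
```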
What Gets Fixed:
```ts
const logger = createPinoLogger({ level: "error" });

logger.debug("Agent created");
// Console: ❌ Hidden (keeps dev environment clean)
// OpenTelemetry: ✅ Sent (full observability)

logger.error("Generation failed");
// Console: ✅ Shown (important errors visible)
// OpenTelemetry: ✅ Sent (full observability)
```
Impact
- Cleaner Development: Console output now respects configured log levels
- Full Observability: OpenTelemetry platforms receive all logs regardless of console level
- Better Debugging: Debug/trace logs available in observability tools even in production
- No Breaking Changes: Existing code works as-is with improved behavior
Usage
No code changes needed - the fix applies automatically:
```ts
// Create a logger with error level
const logger = createPinoLogger({
  level: "error",
  name: "my-app",
});

// Use it with VoltAgent
new VoltAgent({
  agents: { myAgent },
  logger, // Console will be clean, OpenTelemetry gets everything
});
```
Migration Notes
If you were working around this issue by:
- Filtering console output manually
- Using different loggers for different components
- Avoiding debug logs altogether
You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.
- Updated dependencies [`2084fd4`, `348bda0`]:
  - @voltagent/[email protected]
  - @voltagent/[email protected]
@voltagent/[email protected]
Patch Changes
- #728 `3952b4b` Thanks @omeraplak! - feat: automatic detection and display of custom routes in console logs and Swagger UI

Custom routes added via the `configureApp` callback are now automatically detected and displayed in both server startup logs and Swagger UI documentation.

What Changed
Previously, only OpenAPI-registered routes were visible in:
- Server startup console logs
- Swagger UI documentation (`/ui`)
Now all custom routes are automatically detected, including:
- Regular Hono routes (`app.get()`, `app.post()`, etc.)
- OpenAPI routes with full documentation
- Routes with path parameters (`:id`, `{id}`)
Usage Example
```ts
import { honoServer } from "@voltagent/server-hono";

new VoltAgent({
  agents: { myAgent },
  server: honoServer({
    configureApp: (app) => {
      // These routes are now automatically detected!
      app.get("/api/health", (c) => c.json({ status: "ok" }));

      app.post("/api/calculate", async (c) => {
        const { a, b } = await c.req.json();
        return c.json({ result: a + b });
      });
    },
  }),
});
```
Console Output
```
══════════════════════════════════════════════════
  VOLTAGENT SERVER STARTED SUCCESSFULLY
══════════════════════════════════════════════════
  ✓ HTTP Server:  http://localhost:3141
  ✓ Swagger UI:   http://localhost:3141/ui

  ✓ Registered Endpoints: 2 total

    Custom Endpoints
      GET    /api/health
      POST   /api/calculate
══════════════════════════════════════════════════
```

Improvements
- ✅ Extracts routes from the `app.routes` array (includes all Hono routes)
- ✅ Merges with OpenAPI document routes for descriptions
- ✅ Filters out built-in VoltAgent paths using exact matching (not regex)
- ✅ Custom routes like `/agents-dashboard` or `/workflows-manager` are now correctly detected
- ✅ Normalizes path formatting (removes duplicate slashes)
- ✅ Handles both `:param` and `{param}` path parameter formats
- ✅ Adds custom routes to Swagger UI with auto-generated schemas
- ✅ Comprehensive test coverage (44 unit tests)
Implementation Details
The `extractCustomEndpoints()` function now:

- Extracts all routes from `app.routes` (regular Hono routes)
- Merges with OpenAPI document routes (for descriptions)
- Deduplicates and filters built-in VoltAgent routes
- Returns a complete list of custom endpoints

The `getEnhancedOpenApiDoc()` function:

- Adds custom routes to the OpenAPI document for Swagger UI
- Generates response schemas for undocumented routes
- Preserves existing OpenAPI documentation
- Supports path parameters and request bodies
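A rough sketch of the filtering and deduplication described above (the route shape and built-in path list are illustrative assumptions, not the actual implementation):

```ts
interface RouteEntry {
  method: string;
  path: string;
}

// Illustrative subset of built-in paths; matching is exact rather than
// regex-based, so custom routes like /agents-dashboard survive the filter.
const BUILT_IN_PATHS = new Set(["/agents", "/workflows", "/updates", "/ui"]);

function extractCustomEndpoints(routes: RouteEntry[]): RouteEntry[] {
  const seen = new Set<string>();
  return routes.filter(({ method, path }) => {
    const normalized = path.replace(/\/{2,}/g, "/"); // collapse duplicate slashes
    const key = `${method} ${normalized}`;
    if (BUILT_IN_PATHS.has(normalized) || seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```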
- Updated dependencies [`59da0b5`]:
  - @voltagent/[email protected]
@voltagent/[email protected]
Patch Changes
- #734 `2084fd4` Thanks @omeraplak! - fix: add URL path support for single package updates and resolve 404 errors

The Problem
The update endpoint only accepted package names via the request body (`POST /updates` with `{ "packageName": "@voltagent/core" }`), but users expected to be able to specify the package name directly in the URL path (`POST /updates/@voltagent/core`). This caused 404 errors when trying to update individual packages using the more intuitive URL-based approach.

The Solution
Added a new route, `POST /updates/:packageName`, that accepts the package name as a URL parameter, providing a more RESTful API design while maintaining backward compatibility with the existing body-based approach.

New Routes Available:

- `POST /updates/@voltagent/core` - Update a single package (package name in URL path)
- `POST /updates` with body `{ "packageName": "@voltagent/core" }` - Update a single package (package name in body)
- `POST /updates` with no body - Update all VoltAgent packages
Package Manager Detection:

The system automatically detects your package manager based on lock files:

- `pnpm-lock.yaml` → uses `pnpm add`
- `yarn.lock` → uses `yarn add`
- `package-lock.json` → uses `npm install`
- `bun.lockb` → uses `bun add`
Usage Example
```ts
// Update a single package using the URL path
fetch("http://localhost:3141/updates/@voltagent/core", {
  method: "POST",
});

// Or using the body parameter (backward compatible)
fetch("http://localhost:3141/updates", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ packageName: "@voltagent/core" }),
});

// Update all packages
fetch("http://localhost:3141/updates", {
  method: "POST",
});
```
- Updated dependencies [`348bda0`]:
  - @voltagent/[email protected]
@voltagent/[email protected]
Patch Changes
- #736 `348bda0` Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

The Problem

When users configured a custom logger with a specific log level (e.g., `level: "error"`), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

- `LoggerProxy` was forwarding all log calls to the underlying logger without checking the configured level
- Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
- OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs
The Solution
Framework Changes:
- Updated `LoggerProxy` to check the configured log level before forwarding to console/stdout
- Added a `shouldLog(level)` method that inspects the underlying logger's level (supports both Pino and ConsoleLogger)
- Separated console output filtering from OpenTelemetry emission:
- Console/stdout: Respects configured level (error level → only shows error/fatal)
- OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)
What Gets Fixed:
```ts
const logger = createPinoLogger({ level: "error" });

logger.debug("Agent created");
// Console: ❌ Hidden (keeps dev environment clean)
// OpenTelemetry: ✅ Sent (full observability)

logger.error("Generation failed");
// Console: ✅ Shown (important errors visible)
// OpenTelemetry: ✅ Sent (full observability)
```
Impact
- Cleaner Development: Console output now respects configured log levels
- Full Observability: OpenTelemetry platforms receive all logs regardless of console level
- Better Debugging: Debug/trace logs available in observability tools even in production
- No Breaking Changes: Existing code works as-is with improved behavior
Usage
No code changes needed - the fix applies automatically:
```ts
// Create a logger with error level
const logger = createPinoLogger({
  level: "error",
  name: "my-app",
});

// Use it with VoltAgent
new VoltAgent({
  agents: { myAgent },
  logger, // Console will be clean, OpenTelemetry gets everything
});
```
Migration Notes
If you were working around this issue by:
- Filtering console output manually
- Using different loggers for different components
- Avoiding debug logs altogether
You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.
@voltagent/[email protected]
Patch Changes
- #736 `348bda0` Thanks @omeraplak! - fix: respect configured log levels for console output while sending all logs to OpenTelemetry - #646

The Problem

When users configured a custom logger with a specific log level (e.g., `level: "error"`), DEBUG and INFO logs were still appearing in console output, cluttering the development environment. This happened because:

- `LoggerProxy` was forwarding all log calls to the underlying logger without checking the configured level
- Multiple components (agents, workflows, retrievers, memory adapters, observability) were logging at DEBUG level unconditionally
- OpenTelemetry logs were also being filtered by the same level, preventing observability platforms from receiving all logs
The Solution
Framework Changes:
- Updated
LoggerProxyto check configured log level before forwarding to console/stdout - Added
shouldLog(level)method that inspects the underlying logger's level (supports both Pino and ConsoleLogger) - Separated console output filtering from OpenTelemetry emission:
- Console/stdout: Respects configured level (error level → only shows error/fatal)
- OpenTelemetry: Always receives all logs (debug, info, warn, error, fatal)
What Gets Fixed:
```ts
const logger = createPinoLogger({ level: "error" });

logger.debug("Agent created");
// Console: ❌ Hidden (keeps dev environment clean)
// OpenTelemetry: ✅ Sent (full observability)

logger.error("Generation failed");
// Console: ✅ Shown (important errors visible)
// OpenTelemetry: ✅ Sent (full observability)
```
Impact
- Cleaner Development: Console output now respects configured log levels
- Full Observability: OpenTelemetry platforms receive all logs regardless of console level
- Better Debugging: Debug/trace logs available in observability tools even in production
- No Breaking Changes: Existing code works as-is with improved behavior
Usage
No code changes needed - the fix applies automatically:
```ts
// Create a logger with error level
const logger = createPinoLogger({
  level: "error",
  name: "my-app",
});

// Use it with VoltAgent
new VoltAgent({
  agents: { myAgent },
  logger, // Console will be clean, OpenTelemetry gets everything
});
```
Migration Notes
If you were working around this issue by:
- Filtering console output manually
- Using different loggers for different components
- Avoiding debug logs altogether
You can now remove those workarounds and use a single logger with your preferred console level while maintaining full observability.
@voltagent/[email protected]
Patch Changes
- #730 `1244b3e` Thanks @omeraplak! - feat: add finish reason and max steps observability to agent execution traces - #721

The Problem
When agents hit the maximum steps limit (via `maxSteps` or `stopWhen` conditions), execution would terminate abruptly without clear indication in observability traces. This created confusion because:

- The AI SDK's `finishReason` (e.g., `stop`, `tool-calls`, `length`, `error`) was not being captured in OpenTelemetry spans
- MaxSteps termination looked like a normal completion with `finishReason: "tool-calls"`
- Users couldn't easily debug why their agent stopped executing
The Solution
Framework (VoltAgent Core):
- Added a `setFinishReason(finishReason: string)` method to `AgentTraceContext` to capture AI SDK finish reasons in OpenTelemetry spans as the `ai.response.finish_reason` attribute
- Added a `setStopConditionMet(stepCount: number, maxSteps: number)` method to track when the maxSteps limit is reached
- Updated `agent.generateText()` and `agent.streamText()` to automatically record:
  - `ai.response.finish_reason` - The AI SDK finish reason (`stop`, `tool-calls`, `length`, `error`, etc.)
  - `voltagent.stopped_by_max_steps` - Boolean flag set when maxSteps is reached
  - `voltagent.step_count` - Actual number of steps executed
  - `voltagent.max_steps` - The maxSteps limit that was configured
What Gets Captured:
```jsonc
// In OpenTelemetry spans:
{
  "ai.response.finish_reason": "tool-calls",
  "voltagent.stopped_by_max_steps": true,
  "voltagent.step_count": 10,
  "voltagent.max_steps": 10
}
```
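A minimal sketch of what those two methods do conceptually, assuming the standard OpenTelemetry `Span` API (only the method and attribute names above are confirmed; the bodies are illustrative):

```ts
import type { Span } from "@opentelemetry/api";

class AgentTraceContext {
  constructor(private span: Span) {}

  // Record the AI SDK finish reason on the active span.
  setFinishReason(finishReason: string): void {
    this.span.setAttribute("ai.response.finish_reason", finishReason);
  }

  // Flag that execution stopped because the maxSteps limit was hit.
  setStopConditionMet(stepCount: number, maxSteps: number): void {
    this.span.setAttributes({
      "voltagent.stopped_by_max_steps": true,
      "voltagent.step_count": stepCount,
      "voltagent.max_steps": maxSteps,
    });
  }
}
```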
Impact
- Better Debugging: Users can now clearly see why their agent execution terminated
- Observability: All AI SDK finish reasons are now visible in traces
- MaxSteps Detection: Explicit tracking when agents hit step limits
- Console UI Ready: These attributes power warning UI in VoltOps Console to alert users when maxSteps is reached
Usage
No code changes needed - this is automatically tracked for all agent executions:
```ts
const agent = new Agent({
  name: "my-agent",
  maxSteps: 5, // Will be tracked in spans
});

await agent.generateText("Hello");
// Span will include ai.response.finish_reason and maxSteps metadata
```
@voltagent/[email protected]
Patch Changes
- #727 `59da0b5` Thanks @omeraplak! - feat: add `agent.toTool()` for converting agents into tools

Agents can now be converted to tools using the `.toTool()` method, enabling multi-agent coordination where one agent uses other specialized agents as tools. This is useful when the LLM should dynamically decide which agents to call based on the request.

Usage Example
```ts
import { Agent } from "@voltagent/core";
import { openai } from "@ai-sdk/openai";

// Create specialized agents
const writerAgent = new Agent({
  id: "writer",
  purpose: "Writes blog posts",
  model: openai("gpt-4o-mini"),
});

const editorAgent = new Agent({
  id: "editor",
  purpose: "Edits content",
  model: openai("gpt-4o-mini"),
});

// Coordinator uses them as tools
const coordinator = new Agent({
  tools: [writerAgent.toTool(), editorAgent.toTool()],
  model: openai("gpt-4o-mini"),
});

// LLM decides which agents to call
await coordinator.generateText("Create a blog post about AI");
```
Key Features
- Dynamic agent selection: LLM intelligently chooses which agents to invoke
- Composable agents: Reuse agents as building blocks across multiple coordinators
- Type-safe: Full TypeScript support with automatic type inference
- Context preservation: Automatically passes through userId, conversationId, and operation context
- Customizable: Optional custom name, description, and parameter schema
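For instance, context preservation means a coordinator call can carry identity through to the agents it invokes as tools (a sketch; the option names follow the `generateText` usage shown elsewhere in these notes):

```ts
await coordinator.generateText("Create a blog post about AI", {
  userId: "user-123", // forwarded to the writer/editor agents
  conversationId: "conv-456", // sub-calls share the same conversation
});
```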
Customization
```ts
import { z } from "zod";

const customTool = agent.toTool({
  name: "professional_writer",
  description: "Writes professional blog posts",
  parametersSchema: z.object({
    topic: z.string(),
    style: z.enum(["formal", "casual"]),
  }),
});
```
When to Use
- Use `agent.toTool()` when the LLM should decide which agents to call (e.g., customer support routing)
- Use Workflows for deterministic, code-defined pipelines (e.g., always: Step A → Step B → Step C)
- Use Sub-agents for fixed sets of collaborating agents
See the documentation and `examples/with-agent-tool` for more details.
@voltagent/[email protected]
Patch Changes
- #734 `2084fd4` Thanks @omeraplak! - fix: auto-detect package managers and add automatic installation to the `volt update` command

The Problem

The `volt update` CLI command had several UX issues:

- Only updated `package.json` without installing packages
- Required users to manually run installation commands
- Always suggested `npm install` regardless of the user's actual package manager (pnpm, yarn, or bun)
- No way to skip automatic installation when needed
This was inconsistent with the HTTP API's `updateSinglePackage` and `updateAllPackages` functions, which properly detect and use the correct package manager.

The Solution

Enhanced the `volt update` command to match the HTTP API behavior:

Package Manager Detection:

- Automatically detects the package manager by checking lock files:
  - `pnpm-lock.yaml` → runs `pnpm install`
  - `yarn.lock` → runs `yarn install`
  - `package-lock.json` → runs `npm install`
  - `bun.lockb` → runs `bun install`
Automatic Installation:
- After updating `package.json`, automatically runs the appropriate install command
- Shows the detected package manager and installation progress
- Works in both interactive mode and `--apply` mode
Optional Skip:
- Added a `--no-install` flag to skip automatic installation when needed
- Useful for CI/CD pipelines or when manual control is preferred
Usage Examples
Default behavior (auto-install with detected package manager):
```
$ volt update

Found 3 outdated VoltAgent packages:
  @voltagent/core: 1.1.34 → 1.1.35
  @voltagent/server-hono: 0.1.10 → 0.1.11
  @voltagent/cli: 0.0.45 → 0.0.46

✓ Updated 3 packages in package.json

Detected package manager: pnpm
Running pnpm install...
⠹ Installing packages...
✓ Packages installed successfully
```

Skip automatic installation:

```
$ volt update --no-install

✓ Updated 3 packages in package.json
⚠ Automatic installation skipped
Run 'pnpm install' to install updated packages
```
Non-interactive mode:
```
$ volt update --apply

✓ Updates applied to package.json
Detected package manager: pnpm
Running pnpm install...
✓ Packages installed successfully
```
Benefits
- Better UX: No manual steps required - updates are fully automatic
- Package Manager Respect: Uses your chosen package manager (pnpm/yarn/npm/bun)
- Consistency: CLI now matches HTTP API behavior
- Flexibility: `--no-install` flag for users who need manual control
- CI/CD Friendly: Works seamlessly in automated workflows
@voltagent/[email protected]
Minor Changes
- #720 `91c7269` Thanks @omeraplak! - fix: simplify CORS configuration and ensure custom routes are auth-protected

Breaking Changes
CORS Configuration
CORS configuration has been simplified. Instead of configuring CORS in `configureApp`, use the new `cors` field.

Before:
```ts
server: honoServer({
  configureApp: (app) => {
    app.use(
      "*",
      cors({
        origin: "https://your-domain.com",
        credentials: true,
      })
    );
    app.get("/api/health", (c) => c.json({ status: "ok" }));
  },
});
```
After (Simple global CORS):
```ts
server: honoServer({
  cors: {
    origin: "https://your-domain.com",
    credentials: true,
  },
  configureApp: (app) => {
    app.get("/api/health", (c) => c.json({ status: "ok" }));
  },
});
```
After (Route-specific CORS):
```ts
import { cors } from "hono/cors";

server: honoServer({
  cors: false, // Disable default CORS for route-specific control
  configureApp: (app) => {
    // Different CORS for different routes
    app.use("/agents/*", cors({ origin: "https://agents.com" }));
    app.use("/api/public/*", cors({ origin: "*" }));
    app.get("/api/health", (c) => c.json({ status: "ok" }));
  },
});
```
Custom Routes Authentication
Custom routes added via `configureApp` are now registered AFTER the authentication middleware. This means:

- Opt-in mode (default): Custom routes follow the same auth rules as built-in routes
- Opt-out mode (`defaultPrivate: true`): Custom routes are automatically protected
Before: Custom routes bypassed authentication unless you manually added auth middleware.
After: Custom routes inherit authentication behavior automatically.
Example with opt-out mode:
```ts
server: honoServer({
  auth: jwtAuth({
    secret: process.env.JWT_SECRET,
    defaultPrivate: true, // Protect all routes by default
    publicRoutes: ["GET /api/health"],
  }),
  configureApp: (app) => {
    // This is now automatically protected
    app.get("/api/user/profile", (c) => {
      const user = c.get("authenticatedUser");
      return c.json({ user }); // user is guaranteed to exist
    });
  },
});
```
Why This Change?
- Security: Custom routes are no longer accidentally left unprotected
- Simplicity: CORS configuration is now a simple config field for common cases
- Flexibility: Advanced users can still use route-specific CORS with `cors: false`
Patch Changes
- Updated dependencies [`efe4be6`]:
  - @voltagent/[email protected]