Releases: VoltAgent/voltagent

@voltagent/[email protected]

30 Apr 20:33
735da3d

Patch Changes

  • #71 1f20509 Thanks @omeraplak! - feat: Standardize Agent Error and Finish Handling

    This change introduces a more robust and consistent way of handling errors and successful finishes across the @voltagent/core Agent and LLM provider implementations (such as @voltagent/vercel-ai).

    Key Improvements:

    • Standardized Errors (VoltAgentError):

      • Introduced VoltAgentError, ToolErrorInfo, and StreamOnErrorCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now wrap underlying SDK/API errors into a structured VoltAgentError before passing them to onError callbacks or throwing them.
      • Agent methods (generateText, streamText, generateObject, streamObject) now consistently handle VoltAgentError, enabling richer context (stage, code, tool details) in history events and logs.
    • Standardized Stream Finish Results:

      • Introduced StreamTextFinishResult, StreamTextOnFinishCallback, StreamObjectFinishResult, and StreamObjectOnFinishCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now construct these standardized result objects upon successful stream completion.
      • Agent streaming methods (streamText, streamObject) now receive these standardized results in their onFinish handlers, ensuring consistent access to final output (text or object), usage, finishReason, etc., for history, events, and hooks.
    • Updated Interfaces: The LLMProvider interface and related options types (StreamTextOptions, StreamObjectOptions) have been updated to reflect these new standardized callback types and error-throwing expectations.

    These changes lead to more predictable behavior, improved debugging capabilities through structured errors, and a more consistent experience when working with different LLM providers.
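
    As a rough illustration, a minimal sketch of handlers built around the new types is shown below; the exact way these callbacks are wired into streamText is provider-specific, and the field names used here (text, usage, finishReason, stage, code, toolError) follow the descriptions above rather than a verbatim API reference.

    import type { StreamTextFinishResult, VoltAgentError } from "@voltagent/core";

    // Hypothetical onFinish handler: one standardized result shape across providers.
    const handleFinish = (result: StreamTextFinishResult) => {
      console.log("Final text:", result.text);
      console.log("Usage:", result.usage, "Finish reason:", result.finishReason);
    };

    // Hypothetical onError handler: a structured VoltAgentError instead of a raw SDK error.
    const handleError = (error: VoltAgentError) => {
      console.error(`[${error.stage}] ${error.message} (code: ${error.code})`);
      if (error.toolError) {
        // ToolErrorInfo carries details about the tool call that failed.
        console.error("Failed tool:", error.toolError.toolName);
      }
    };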

  • Updated dependencies [1f20509, 1f20509, 7a7a0f6]:

@voltagent/[email protected]

30 Apr 20:33
735da3d

Patch Changes

  • #71 1f20509 Thanks @omeraplak! - feat: Standardize Agent Error and Finish Handling

    This change introduces a more robust and consistent way of handling errors and successful finishes across the @voltagent/core Agent and LLM provider implementations (such as @voltagent/vercel-ai).

    Key Improvements:

    • Standardized Errors (VoltAgentError):

      • Introduced VoltAgentError, ToolErrorInfo, and StreamOnErrorCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now wrap underlying SDK/API errors into a structured VoltAgentError before passing them to onError callbacks or throwing them.
      • Agent methods (generateText, streamText, generateObject, streamObject) now consistently handle VoltAgentError, enabling richer context (stage, code, tool details) in history events and logs.
    • Standardized Stream Finish Results:

      • Introduced StreamTextFinishResult, StreamTextOnFinishCallback, StreamObjectFinishResult, and StreamObjectOnFinishCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now construct these standardized result objects upon successful stream completion.
      • Agent streaming methods (streamText, streamObject) now receive these standardized results in their onFinish handlers, ensuring consistent access to final output (text or object), usage, finishReason, etc., for history, events, and hooks.
    • Updated Interfaces: The LLMProvider interface and related options types (StreamTextOptions, StreamObjectOptions) have been updated to reflect these new standardized callback types and error-throwing expectations.

    These changes lead to more predictable behavior, improved debugging capabilities through structured errors, and a more consistent experience when working with different LLM providers.

  • Updated dependencies [1f20509, 1f20509, 7a7a0f6]:

@voltagent/[email protected]

30 Apr 20:33
735da3d

Patch Changes

  • #71 1f20509 Thanks @omeraplak! - feat: Introduce userContext for passing custom data through agent operations

    Introduced userContext, a Map<string | symbol, unknown> within the OperationContext. This allows developers to store and retrieve custom data across agent lifecycle hooks (onStart, onEnd) and tool executions for a specific agent operation (like a generateText call). This context is isolated per operation, providing a way to manage state specific to a single request or task.

    Usage Example:

    import {
      Agent,
      createHooks,
      createTool,
      type OperationContext,
      type ToolExecutionContext,
    } from "@voltagent/core";
    import { z } from "zod";
    
    // Define hooks that set and retrieve data
    const hooks = createHooks({
      onStart: (agent: Agent<any>, context: OperationContext) => {
        // Set data needed throughout the operation and potentially by tools
        const requestId = `req-${Date.now()}`;
        const traceId = `trace-${Math.random().toString(16).substring(2, 8)}`;
        context.userContext.set("requestId", requestId);
        context.userContext.set("traceId", traceId);
        console.log(
          `[${agent.name}] Operation started. RequestID: ${requestId}, TraceID: ${traceId}`
        );
      },
      onEnd: (agent: Agent<any>, result: any, context: OperationContext) => {
        // Retrieve data at the end of the operation
        const requestId = context.userContext.get("requestId");
        const traceId = context.userContext.get("traceId"); // Can retrieve traceId here too
        console.log(
          `[${agent.name}] Operation finished. RequestID: ${requestId}, TraceID: ${traceId}`
        );
        // Use these IDs for logging, metrics, cleanup, etc.
      },
    });
    
    // Define a tool that uses the context data set in onStart
    const customContextTool = createTool({
      name: "custom_context_logger",
      description: "Logs a message using trace ID from the user context.",
      parameters: z.object({
        message: z.string().describe("The message to log."),
      }),
      execute: async (params: { message: string }, options?: ToolExecutionContext) => {
        // Access userContext via options.operationContext
        const traceId = options?.operationContext?.userContext?.get("traceId") || "unknown-trace";
        const requestId =
          options?.operationContext?.userContext?.get("requestId") || "unknown-request"; // Can access requestId too
        const logMessage = `[RequestID: ${requestId}, TraceID: ${traceId}] Tool Log: ${params.message}`;
        console.log(logMessage);
        // In a real scenario, you might interact with external systems using these IDs
        return `Logged message with RequestID: ${requestId} and TraceID: ${traceId}`;
      },
    });
    
    // Create an agent with the tool and hooks
    const agent = new Agent({
      name: "MyCombinedAgent",
      llm: myLlmProvider, // Your LLM provider instance
      model: myModel, // Your model instance
      tools: [customContextTool],
      hooks: hooks,
    });
    
    // Trigger the agent. The LLM might decide to use the tool.
    await agent.generateText(
      "Log the following information using the custom logger: 'User feedback received.'"
    );
    
    // Console output will show logs from onStart, the tool (if called), and onEnd,
    // demonstrating context data flow.
  • #71 1f20509 Thanks @omeraplak! - feat: Standardize Agent Error and Finish Handling

    This change introduces a more robust and consistent way of handling errors and successful finishes across the @voltagent/core Agent and LLM provider implementations (such as @voltagent/vercel-ai).

    Key Improvements:

    • Standardized Errors (VoltAgentError):

      • Introduced VoltAgentError, ToolErrorInfo, and StreamOnErrorCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now wrap underlying SDK/API errors into a structured VoltAgentError before passing them to onError callbacks or throwing them.
      • Agent methods (generateText, streamText, generateObject, streamObject) now consistently handle VoltAgentError, enabling richer context (stage, code, tool details) in history events and logs.
    • Standardized Stream Finish Results:

      • Introduced StreamTextFinishResult, StreamTextOnFinishCallback, StreamObjectFinishResult, and StreamObjectOnFinishCallback types in @voltagent/core.
      • LLM Providers (e.g., Vercel) now construct these standardized result objects upon successful stream completion.
      • Agent streaming methods (streamText, streamObject) now receive these standardized results in their onFinish handlers, ensuring consistent access to final output (text or object), usage, finishReason, etc., for history, events, and hooks.
    • Updated Interfaces: The LLMProvider interface and related options types (StreamTextOptions, StreamObjectOptions) have been updated to reflect these new standardized callback types and error-throwing expectations.

    These changes lead to more predictable behavior, improved debugging capabilities through structured errors, and a more consistent experience when working with different LLM providers.

  • 7a7a0f6 Thanks @omeraplak! - feat: Refactor Agent Hooks Signature to Use Single Argument Object - #57

    This change refactors the signature for all agent hooks (onStart, onEnd, onToolStart, onToolEnd, onHandoff) in @voltagent/core to improve usability, readability, and extensibility.

    Key Changes:

    • Single Argument Object: All hooks now accept a single argument object containing named properties (e.g., { agent, context, output, error }) instead of positional arguments.
    • onEnd / onToolEnd Refinement: The onEnd and onToolEnd hooks no longer use an isError flag or a combined outputOrError parameter. They now have distinct output: <Type> | undefined and error: VoltAgentError | undefined properties, making it explicit whether the operation or tool execution succeeded or failed.
    • Unified onEnd Output: The output type for the onEnd hook (AgentOperationOutput) is now a standardized union type, providing a consistent structure regardless of which agent method (generateText, streamText, etc.) completed successfully.

    Migration Guide:

    If you have implemented custom agent hooks, you will need to update their signatures:

    Before:

    const myHooks = {
      onStart: async (agent, context) => {
        /* ... */
      },
      onEnd: async (agent, outputOrError, context, isError) => {
        if (isError) {
          // Handle error (outputOrError is the error)
        } else {
          // Handle success (outputOrError is the output)
        }
      },
      onToolStart: async (agent, tool, context) => {
        /* ... */
      },
      onToolEnd: async (agent, tool, result, context) => {
        // Assuming result might contain an error or be the success output
      },
      // ...
    };

    After:

    import type {
      OnStartHookArgs,
      OnEndHookArgs,
      OnToolStartHookArgs,
      OnToolEndHookArgs,
      // ... other needed types
    } from "@voltagent/core";
    
    const myHooks = {
      onStart: async (args: OnStartHookArgs) => {
        const { agent, context } = args;
        /* ... */
      },
      onEnd: async (args: OnEndHookArgs) => {
        const { agent, output, error, context } = args;
        if (error) {
          // Handle error (error is VoltAgentError)
        } else if (output) {
          // Handle success (output is AgentOperationOutput)
        }
      },
      onToolStart: async (args: OnToolStartHookArgs) => {
        const { agent, tool, context } = args;
        /* ... */
      },
      onToolEnd: async (args: OnToolEndHookArgs) => {
        const { agent, tool, output, error, context } = args;
        if (error) {
          // Handle tool error (error is VoltAgentError)
        } else {
          // Handle tool success (output is the result)
        }
      },
      // ...
    };

    Update your hook function definitions to accept the single argument object and use destructuring or direct property access (args.propertyName) to get the required data.

@voltagent/[email protected]

30 Apr 20:33
735da3d

Patch Changes

  • #73 ac6ecbc Thanks @necatiozmen! - feat: Add placeholder add command

    Introduces the add <agent-slug> command. Currently, this command informs users that the feature for adding agents from the marketplace is upcoming and provides a link to the GitHub discussions for early feedback and participation.

@voltagent/[email protected]

29 Apr 15:39

Patch Changes

@voltagent/[email protected]

29 Apr 15:39

Patch Changes

  • #40 37c2136 Thanks @TheEmi! - feat(groq-ai): initial implementation using groq-sdk

    • GroqProvider class implementing LLMProvider.
    • Integration with the groq-sdk.
    • Implementation of generateText, streamText, generateObject.
    • Stub for streamObject.
    • Basic build (tsup.config.ts, tsconfig.json) and package (package.json) setup copied from VercelAIProvider.

    Feature #13
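
    A hedged usage sketch follows; the constructor options (apiKey) and the model id are assumptions for illustration, and the agent setup mirrors the other examples in these notes.

    import { Agent } from "@voltagent/core";
    import { GroqProvider } from "@voltagent/groq-ai";

    // Construct the provider (apiKey option assumed; adjust to the actual constructor).
    const groq = new GroqProvider({ apiKey: process.env.GROQ_API_KEY });

    const agent = new Agent({
      name: "GroqAgent",
      llm: groq,
      model: "llama-3.3-70b-versatile", // any model id accepted by the Groq API
    });

    // generateText, streamText, and generateObject are implemented; streamObject is still a stub.
    await agent.generateText("Say hello from Groq.");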

  • Updated dependencies [55c58b0, d40cb14, e88cb12, 0651d35]:

@voltagent/[email protected]

29 Apr 15:39

Minor Changes

  • #52 96f2395 Thanks @foxy17! - feat: Add generateObject method for structured JSON output via Zod schemas and Google's JSON mode.
    feat: Add support for reading the Google GenAI API key from the GEMINI_API_KEY environment variable as a fallback.
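
    A hedged sketch of the new structured-output path is shown below; the provider class name, import path, constructor options, model id, and result shape are assumptions for illustration only.

    import { Agent } from "@voltagent/core";
    import { GoogleGenAIProvider } from "@voltagent/google-ai";
    import { z } from "zod";

    // The API key may also be picked up from GEMINI_API_KEY as a fallback (per this release).
    const google = new GoogleGenAIProvider({ apiKey: process.env.GOOGLE_GENAI_API_KEY });

    const agent = new Agent({
      name: "StructuredAgent",
      llm: google,
      model: "gemini-2.0-flash", // any Gemini model that supports JSON mode
    });

    // Zod schema describing the expected JSON output.
    const recipeSchema = z.object({
      title: z.string(),
      ingredients: z.array(z.string()),
    });

    const result = await agent.generateObject("Suggest a simple pancake recipe.", recipeSchema);
    console.log(result.object); // typed according to recipeSchema (result shape assumed)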

Patch Changes

@voltagent/[email protected]

29 Apr 15:39

Patch Changes

  • #51 55c58b0 Thanks @kwaa! - Use the latest Hono to avoid duplicate dependencies

  • #59 d40cb14 Thanks @kwaa! - fix: add package exports

  • e88cb12 Thanks @omeraplak! - feat: Enhance createPrompt with Template Literal Type Inference

    Improved the createPrompt utility to leverage TypeScript's template literal types. This provides strong type safety by:

    • Automatically inferring required variable names directly from {{variable}} placeholders in the template string.
    • Enforcing the provision of all required variables with the correct types at compile time when calling createPrompt.

    This significantly reduces the risk of runtime errors caused by missing or misspelled prompt variables.
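
    A minimal sketch, assuming a config object with a prompt template string and a variables map (the exact field names are assumptions based on the description above):

    import { createPrompt } from "@voltagent/core";

    // Variable names are inferred from the {{placeholders}} in the template string,
    // so a missing or misspelled variable becomes a compile-time error.
    const buildSupportPrompt = createPrompt({
      prompt: "You are a {{tone}} support agent. Answer the user's question about {{topic}}.",
      variables: { tone: "friendly", topic: "billing" },
    });

    // Override any of the inferred variables at call time.
    const promptText = buildSupportPrompt({ topic: "refunds" });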

  • #65 0651d35 Thanks @omeraplak! - feat: Add OpenAPI (Swagger) Documentation for Core API - #64

    • Integrated @hono/zod-openapi and @hono/swagger-ui to provide interactive API documentation.
    • Documented the following core endpoints with request/response schemas, parameters, and examples:
      • GET /agents: List all registered agents.
      • POST /agents/{id}/text: Generate text response.
      • POST /agents/{id}/stream: Stream text response (SSE).
      • POST /agents/{id}/object: Generate object response (Note: Requires backend update to fully support JSON Schema input).
      • POST /agents/{id}/stream-object: Stream object response (SSE) (Note: Requires backend update to fully support JSON Schema input).
    • Added /doc endpoint serving the OpenAPI 3.1 specification in JSON format.
    • Added /ui endpoint serving the interactive Swagger UI.
    • Improved API discoverability:
      • Added links to Swagger UI and OpenAPI Spec on the root (/) endpoint.
      • Added links to Swagger UI in the server startup console logs.
    • Refactored API schemas and route definitions into api.routes.ts for better organization.
    • Standardized generation options (like userId, temperature, maxTokens) in the API schema with descriptions, examples, and sensible defaults.
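
    A hedged sketch of calling one of the documented endpoints from TypeScript is shown below; the port, agent id, request body shape, and response handling are assumptions, and the interactive Swagger UI at /ui remains the authoritative reference.

    // POST /agents/{id}/text with some of the standardized generation options.
    const res = await fetch("http://localhost:3141/agents/my-agent/text", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        input: "Summarize today's support tickets.",
        options: { userId: "user-123", temperature: 0.7, maxTokens: 512 },
      }),
    });
    console.log(await res.json());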

[email protected]

25 Apr 16:57
d5f8255

Patch Changes

  • #33 3ef2eaa Thanks @kwaa! - Update package.json files:

    • Remove src directory from the files array.
    • Add explicit exports field for better module resolution.

@voltagent/[email protected]

25 Apr 16:57
d5f8255

Patch Changes