feat: return logprobs in OpenAIChatGenerator and OpenAIResponsesChatGenerator
#10035
Conversation
sjrl
left a comment
Looks good!
|
@Amnah199, for the failing test I might recommend using a bigger model, or improving the instructions to the LLM so that it only extracts the city name when calling the tool.
@sjrl In essence, is it necessary to check this? It's more of a model-performance issue than an error on our end.
I'd say it's necessary for us to test whether we can handle two tool calls being returned in a single response from an LLM. So as long as we are able to test that behavior, specifically testing which city values the LLM extracted is not important. But we don't want the test to pass regardless of whether one or two tool calls are returned when we specifically ask the LLM to return two.
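The assertion shape described above can be sketched as follows; the helper name and the tool-call payloads are hypothetical, not taken from the PR's test suite:

```python
# Sketch: assert that exactly two tool calls came back from the LLM,
# without checking the extracted city values themselves.
def check_two_tool_calls(tool_calls):
    # Fail whether one, zero, or three calls are returned.
    assert len(tool_calls) == 2, f"expected 2 tool calls, got {len(tool_calls)}"

# Illustrative payload: two calls to a hypothetical weather tool.
check_two_tool_calls([
    {"name": "get_weather", "args": {"city": "Paris"}},
    {"name": "get_weather", "args": {"city": "Berlin"}},
])
```

This way the test stays deterministic about the behavior under test (handling two tool calls) while tolerating model variance in the extracted arguments.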
sjrl
left a comment
Looks good!
Related Issues
Proposed Changes:
- Return `logprobs` in `ChatMessage.meta` for both OpenAI and OpenAIResponses chat generators
- Make `serialize_usage` generic, as it can be used to serialize multiple OpenAI-specific objects

How did you test it?
Updated a few tests with logprobs enabled
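Once returned in `ChatMessage.meta`, the per-token logprobs can be post-processed downstream, for example to recover token and sequence probabilities. A minimal sketch, assuming the meta field mirrors the per-token `{"token", "logprob"}` entries of OpenAI's chat-completion logprobs content (the payload values here are purely illustrative):

```python
import math

# Illustrative per-token logprobs, shaped like the entries an OpenAI
# chat completion returns and as they might appear in
# ChatMessage.meta["logprobs"] after this PR.
logprobs = [
    {"token": "Hello", "logprob": -0.01},
    {"token": "!", "logprob": -0.25},
]

# Per-token probabilities from the log-probabilities.
token_probs = [math.exp(t["logprob"]) for t in logprobs]

# Overall sequence probability: exp of the summed logprobs.
sequence_prob = math.exp(sum(t["logprob"] for t in logprobs))
print(round(sequence_prob, 3))  # → 0.771
```

Exposing the raw logprobs in `meta` leaves this kind of confidence scoring to the caller instead of baking a particular aggregation into the generator.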
Notes for the reviewer
Checklist
- The PR title starts with one of the conventional commit prefixes: `fix:`, `feat:`, `build:`, `chore:`, `ci:`, `docs:`, `style:`, `refactor:`, `perf:`, `test:`, with `!` added in case the PR includes breaking changes.