build(deps): bump vllm from 0.15.1 to 0.17.0 in /python/kserve (#1172)

dependabot[bot] wants to merge 1 commit into master from
Conversation
Bumps [vllm](https://github.com/vllm-project/vllm) from 0.15.1 to 0.17.0.

- [Release notes](https://github.com/vllm-project/vllm/releases)
- [Changelog](https://github.com/vllm-project/vllm/blob/main/RELEASE.md)
- [Commits](vllm-project/vllm@v0.15.1...v0.17.0)

---

updated-dependencies:
- dependency-name: vllm
  dependency-version: 0.17.0
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
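The diff itself is not shown in this conversation view. For a `direct:production` dependency like this one, the change is typically a one-line version bump in the package manifest under /python/kserve. A hypothetical sketch only — the actual file name and constraint style in this repo may differ:

```diff
# hypothetical sketch; the real manifest may be pyproject.toml,
# requirements.txt, or a lock file, and may use range constraints
-vllm==0.15.1
+vllm==0.17.0
```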
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull request has been approved by: dependabot[bot]. The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Hi @dependabot[bot]. Thanks for your PR.

I'm waiting for an opendatahub-io member to verify that this patch is reasonable to test. If it is, they should reply with
Regular contributors should join the org to skip this step. Once the patch is verified, the new status will be reflected by the
I understand the commands that are listed here.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Bumps vllm from 0.15.1 to 0.17.0.
Release notes
Sourced from vllm's releases.
... (truncated)
Commits
- b31e932 Bound openai to under 2.25.0
- e346c08 [Release] Include source distribution (sdist) in PyPI uploads (#35136)
- b7a423c [BUGFIX] Fix Qwen-Omni models audio max_token_per_item estimation error leadin...
- fa78ec8 [Bugfix] Fix Qwen-VL tokenizer implementation (#36140)
- 9a474ce [XPU] bump vllm-xpu-kernels to v0.1.3 (#35984)
- 097eb54 [Bugfix] Improve engine ready timeout error message (#35616)
- 7cdba98 [BugFix] Support tool_choice=none in the Anthropic API (#35835)
- 3c85cd9 [Rocm][CI] Fix ROCm LM Eval Large Models (8 Card) (#35913)
- edba150 [Bugfix] Guard mm_token_type_ids kwarg in get_mrope_input_positions (#35711)
- e379396 [Refactor] Clean up processor kwargs extraction (#35872)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

You can disable automated security fix PRs for this repo from the Security Alerts page.
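The ignore commands above also have a declarative equivalent: ignore conditions can be set persistently in the repository's `.github/dependabot.yml`. A minimal sketch, assuming a pip ecosystem rooted at /python/kserve (this matches the PR's path, but the repo's actual Dependabot config is not shown here):

```yaml
# .github/dependabot.yml -- hypothetical sketch, not this repo's actual config
version: 2
updates:
  - package-ecosystem: "pip"
    directory: "/python/kserve"
    schedule:
      interval: "weekly"
    ignore:
      - dependency-name: "vllm"
        # skip only major version bumps; minor/patch update PRs still open
        update-types: ["version-update:semver-major"]
```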