Async vllm #693
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available for 30 days after the last update.
Linked to #670
Awesome!
Or @lewtun if you want to take a look?
Btw, interestingly, I suspect our other current async model calls (like the TGI ones) only worked because we were only running one type of async loop; we might want to unify them in a later PR.
Really nice feature @clefourrier! The logic LGTM. I'm curious whether you've benchmarked both the speed-up and the approximate parity with the synchronous version for a small model on a few benchmarks?
I didn't quite understand the constraint about batch size = 1, because AFAIK vllm doesn't expose that: normally you just pass a list of prompts and let the engine's continuous batching handle the scheduling.
Re batch size = 1: a batch size greater than one is simply not supported yet in the generate method of the AsyncLLM model (unless I'm reading this comment wrong ^^).
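A minimal sketch of how batching can still be recovered under that constraint, assuming vLLM's async `generate()` interface (one prompt per call, streaming outputs); `generate_one` and `generate_all` are illustrative names, not lighteval code:

```python
import asyncio

from vllm import SamplingParams


# Illustrative only: the async engine's generate() takes a single prompt and
# streams RequestOutput objects, so "batching" happens by scheduling one
# coroutine per prompt and letting continuous batching interleave them.
async def generate_one(engine, prompt: str, params: SamplingParams, request_id: str):
    final = None
    async for output in engine.generate(prompt, params, request_id=request_id):
        final = output  # keep only the last (finished) output
    return final


async def generate_all(engine, prompts: list[str], params: SamplingParams):
    tasks = [
        generate_one(engine, prompt, params, request_id=str(i))
        for i, prompt in enumerate(prompts)
    ]
    # gather preserves the order of the submitted prompts
    return await asyncio.gather(*tasks)
```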
Ah I see, so in the async version one cannot pass a list of prompts. It would be interesting to benchmark one of the pass@1 evals like AIME24 with a DeepSeek-Distill model to get a sense of how much of a speed difference this makes (mostly asking to see if we adopt it in …)
On it; so DP2 with ray vs with async?
Great PR! Once tests are done it's ready :)
We can keep the cleanup for adapter models in this PR; those are small atomic changes, so I think it's fine.
Ok so I found a fun thing: I assumed that the …
There's an issue with the pass@ metrics that I need to investigate, as they are failing.
That would be a good test! Even better would be DP=8 if you can get a free node :)
Spent my day on this; I suspect I'm either missing something extremely trivial in how sampling is done in the async vllm, or in how I should gather results. I need to start working on something else, so feel free to take a look if it's urgent.
Pull Request Overview
Adds support for asynchronous inference using the new AsyncVLLM engine, updates the pipeline to dispatch sync vs async calls, and pins vllm dependencies for compatibility.
- Introduce `AsyncVLLMModel` with native async methods for generation and loglikelihood.
- Modify the pipeline to choose between `_run_model_async` and `_run_model_sync` based on `model.is_async` (sketched below).
- Update dependency versions for `vllm`, `ray`, and `more_itertools`.
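A minimal sketch of the dispatch described above; the method and flag names come from the PR summary, but the body is illustrative, not the actual implementation:

```python
import asyncio


class Pipeline:
    # Illustrative sketch only; the real class lives in src/lighteval/pipeline.py.
    def _run_model(self):
        # Dispatch on the is_async flag introduced on the abstract model class.
        if self.model.is_async:
            sample_id_to_responses = asyncio.run(self._run_model_async())
        else:
            sample_id_to_responses = self._run_model_sync()
        self.model.cleanup()
        # Copilot's first suppressed comment below flags that this return was missing.
        return sample_id_to_responses
```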
Reviewed Changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/lighteval/pipeline.py | Added async dispatch logic and separate async/sync methods |
| src/lighteval/models/vllm/vllm_model.py | Defined AsyncVLLMModel with async generation and loglikelihood |
| src/lighteval/models/transformers/delta_model.py | Added cleanup to remove delta weights temp folder |
| src/lighteval/models/transformers/adapter_model.py | Added cleanup to remove adapter weights temp folder |
| src/lighteval/models/model_loader.py | Select AsyncVLLMModel when config.is_async |
| src/lighteval/models/abstract_model.py | Introduced is_async flag on base model class |
| pyproject.toml | Pinned vllm, ray, and more_itertools versions |
Comments suppressed due to low confidence (3)
src/lighteval/pipeline.py:488
- The `_run_model` method cleans up the model but never returns `sample_id_to_responses`. Add `return sample_id_to_responses` after the cleanup.

  self.model.cleanup()
src/lighteval/models/vllm/vllm_model.py:579
- [nitpick] Variable name `input` shadows the built-in. Rename it to something like `req` or `request` for clarity.

  for response, input in zip(responses, requests):
src/lighteval/models/vllm/vllm_model.py:424
- [nitpick] New async paths (`_async_batch`, `greedy_until`, `loglikelihood`) are introduced here but lack dedicated tests. Adding unit tests for both success and error scenarios would improve confidence.

  class AsyncVLLMModel(VLLMModel):
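As a starting point for such tests, here is a self-contained sketch (not lighteval's actual test suite) of the property the async batching paths rely on: `asyncio.gather` returns results in submission order even when requests finish out of order.

```python
import asyncio


async def fake_generate(i: int) -> int:
    # Later requests finish first, simulating out-of-order completion.
    await asyncio.sleep(0.01 * (5 - i))
    return i


async def _gather_all() -> list[int]:
    return await asyncio.gather(*(fake_generate(i) for i in range(5)))


def test_gather_preserves_request_order():
    # Results must line up with the submitted requests, not completion order.
    assert asyncio.run(_gather_all()) == list(range(5))
```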
Co-authored-by: Copilot <[email protected]>
@NathanHB going to merge to avoid diverging from main too much, though there might be issues with sampling evals (added a warning for now, and created an issue to investigate).
Adds the option to use the new AsyncVLLM from vllm v1. It supports DP + PP/TP, but not setting the batch size, and deploys an independent async vllm model which manages requests on its own through the async engine.
Thanks to the kind people at vllm (vllm-project/vllm#17385), I realised we actually have to use a single event loop for async models, so that's what this PR does; it's also very fast now.
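To illustrate the single-event-loop point, a sketch under assumed names (`model.greedy_until` as a coroutine is hypothetical, not the PR's actual code): creating a fresh loop per batch, e.g. via repeated `asyncio.run()` calls, orphans the async engine's background task, so every request must be driven from one loop for the model's lifetime.

```python
import asyncio

# Anti-pattern: a new event loop per batch. The async engine starts its
# background work on the first call's loop; later asyncio.run() calls
# create fresh loops where that background task no longer exists.
#
#   for batch in batches:
#       asyncio.run(model.greedy_until(batch))  # breaks after the first batch
#
# Single-loop pattern instead:
async def run_all(model, batches):
    results = []
    for batch in batches:
        results.extend(await model.greedy_until(batch))
    return results

# One asyncio.run() drives the whole evaluation on a single loop:
# results = asyncio.run(run_all(model, batches))
```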