vllm.v1.executor.ray_utils ¶
FutureWrapper ¶
Bases: Future
A wrapper around a Ray output reference that meets the interface of .execute_model(): the top-level core busy loop expects the .result() API to block and return a single output.
If an aggregator is provided, the outputs from all workers are aggregated upon the result() call. If not, only the first worker's output is returned.
Source code in vllm/v1/executor/ray_utils.py
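The shape of this adapter can be sketched with a minimal Future subclass. Note this is illustrative only: `fetch_fn` stands in for resolving the Ray output reference (e.g. via `ray.get`), and the `aggregator` parameter name mirrors the description above, not vLLM's actual signature.

```python
from concurrent.futures import Future


class FutureWrapperSketch(Future):
    """Sketch: adapt a deferred multi-worker computation to the Future API.

    `fetch_fn` is a hypothetical callable that blocks and returns the list of
    per-worker outputs; `aggregator`, if given, reduces them to one output.
    """

    def __init__(self, fetch_fn, aggregator=None):
        super().__init__()
        self._fetch_fn = fetch_fn
        self._aggregator = aggregator

    def result(self, timeout=None):
        # Block until the worker outputs are available.
        outputs = self._fetch_fn()
        if self._aggregator is not None:
            # Aggregate outputs from all workers into a single output.
            return self._aggregator(outputs)
        # Otherwise return only the first worker's output.
        return outputs[0]
```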
RayWorkerWrapper ¶
Bases: WorkerWrapperBase
Ray wrapper for vllm.worker.Worker, allowing Worker to be lazily initialized after Ray sets CUDA_VISIBLE_DEVICES.
Source code in vllm/v1/executor/ray_utils.py
adjust_rank ¶
Adjust the rpc_rank based on the given mapping. It is only used during executor initialization, to adjust the rpc_rank of workers after all workers have been created.
Source code in vllm/v1/executor/ray_utils.py
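The remapping itself amounts to a dictionary lookup. A minimal sketch, assuming a rank-to-rank mapping where unmapped ranks are left unchanged (an assumption for illustration, not vLLM's exact behavior):

```python
def adjust_rank_sketch(current_rank: int, rank_mapping: dict[int, int]) -> int:
    """Remap an rpc_rank via a mapping built at executor initialization.

    Ranks absent from the mapping are kept as-is (illustrative assumption).
    """
    return rank_mapping.get(current_rank, current_rank)
```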
_verify_bundles ¶
_verify_bundles(
placement_group: PlacementGroup,
parallel_config: ParallelConfig,
device_str: str,
require_gpu_on_driver: bool = True,
)
Verify a given placement group has bundles located in the right place.
There are two rules:

- Warn if all tensor parallel workers cannot fit in a single node.
- Fail if the driver node is not included in the placement group (only when require_gpu_on_driver is True).
Source code in vllm/v1/executor/ray_utils.py
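The two rules can be sketched over plain data. This is illustrative only: the real function inspects Ray's `PlacementGroup` and `ParallelConfig` objects, whereas here the bundle-to-node assignment and tensor parallel size are passed directly.

```python
import warnings
from collections import Counter


def verify_bundles_sketch(
    bundle_node_ids: list[str],
    tensor_parallel_size: int,
    driver_node_id: str,
    require_gpu_on_driver: bool = True,
) -> None:
    """Sketch of the two verification rules (names are illustrative)."""
    per_node = Counter(bundle_node_ids)
    # Rule 1: warn if no single node can host a full tensor parallel group.
    if max(per_node.values()) < tensor_parallel_size:
        warnings.warn("tensor parallel workers cannot fit in a single node")
    # Rule 2: fail if the driver node holds no bundle.
    if require_gpu_on_driver and driver_node_id not in per_node:
        raise RuntimeError("driver node is not included in the placement group")
```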
_wait_until_pg_ready ¶
Wait until a placement group is ready.
It prints informative log messages if the placement group is not created within the expected time.
Source code in vllm/v1/executor/ray_utils.py
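The general poll-log-timeout pattern can be sketched as follows. This is a generic version under stated assumptions: the real helper waits on `placement_group.ready()` via `ray.wait` and logs which resources are still missing, while here an arbitrary `is_ready` callable stands in.

```python
import time


def wait_until_ready_sketch(is_ready, timeout_s: float = 10.0,
                            log_every_s: float = 1.0) -> None:
    """Poll `is_ready`, log periodically, and fail after `timeout_s`."""
    start = time.monotonic()
    next_log = start + log_every_s
    while not is_ready():
        now = time.monotonic()
        if now >= start + timeout_s:
            raise TimeoutError("placement group was not created in time")
        if now >= next_log:
            # The real helper logs the resources still missing.
            print("Waiting for placement group to be ready...")
            next_log = now + log_every_s
        time.sleep(0.01)
```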
assert_ray_available ¶
Raise an exception if Ray is not available.
build_actor_name ¶
Build a descriptive Ray actor name for dashboard visibility.
Source code in vllm/v1/executor/ray_utils.py
detach_zero_copy_from_model_runner_output ¶
Detach Ray SHM-channel zero-copy buffers from a ModelRunnerOutput in-place.
Ray compiled DAG SHM channels may return zero-copy objects (e.g. np.ndarray) backed by Ray's shared-memory object store. Ray's channel docs explicitly warn that subsequent reads may block if such an object is still in scope.
vLLM can return numpy-backed logprobs in ModelRunnerOutput.logprobs. If those arrays are backed by Ray SHM (commonly read-only), retaining them in scope across scheduler iterations can stall the channel and eventually hit RAY_CGRAPH_get_timeout.
Copy read-only numpy arrays so the returned output no longer retains references to Ray's shared-memory buffers.
We intentionally do not touch prompt_logprobs_dict: those entries are LogprobsTensors backed by PyTorch-owned CPU tensors (to_cpu_nonblocking or empty_cpu), not NumPy views decoded from Ray channels.
Source code in vllm/v1/executor/ray_utils.py
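The detachment rule for the logprobs arrays can be sketched with NumPy alone: Ray SHM-backed arrays are typically read-only, so those are copied, while writable arrays are already process-owned and kept as-is. The function name and list-based interface here are illustrative; the real function mutates `ModelRunnerOutput.logprobs` in place.

```python
import numpy as np


def detach_readonly_arrays_sketch(arrays: list[np.ndarray]) -> list[np.ndarray]:
    """Copy read-only (presumed SHM-backed) arrays; keep writable ones."""
    return [a.copy() if not a.flags.writeable else a for a in arrays]
```

Copying releases the reference into Ray's shared-memory buffer, so the channel can be read again without blocking on a still-in-scope object.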
get_bundles_for_indices ¶
get_bundles_for_indices(
placement_group: PlacementGroup,
bundle_indices: list[int],
world_size: int,
) -> list[tuple[int, str, str]]
Return GPU bundle indices paired with node IDs and node IPs for explicit bundle indices specified via VLLM_RAY_BUNDLE_INDICES.
Source code in vllm/v1/executor/ray_utils.py
get_bundles_sorted_by_node ¶
Return GPU bundle indices paired with node IDs and node IPs, sorted driver-first.
This utility has to be invoked from the driver node.
Example: a 3-node cluster with the driver on node-A and the PG bundles spread across nodes:

Input:
    [(0, node-C), (1, node-A), (2, node-B), (3, node-C), (4, node-A), (5, node-B)]
Output:
    [(1, node-A), (4, node-A), (2, node-B), (5, node-B), (0, node-C), (3, node-C)]
Source code in vllm/v1/executor/ray_utils.py
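The ordering in the example above can be reproduced with a single sort key: the driver node's bundles first, then the remaining bundles grouped by node. Using the node id as the tie-break between non-driver nodes is an assumption made for illustration; the real function also carries node IPs.

```python
def sort_bundles_driver_first_sketch(bundles: list[tuple[int, str]],
                                     driver_node: str) -> list[tuple[int, str]]:
    """Sort (bundle_index, node_id) pairs driver-first, grouped by node."""
    return sorted(bundles, key=lambda b: (b[1] != driver_node, b[1], b[0]))
```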
initialize_ray_cluster ¶
initialize_ray_cluster(
parallel_config: ParallelConfig,
ray_address: str | None = None,
require_gpu_on_driver: bool = True,
)
Initialize the distributed cluster with Ray.
It will connect to the Ray cluster and create a placement group for the workers, which includes the specification of the resources for each distributed worker.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `parallel_config` | `ParallelConfig` | The configurations for parallel execution. | required |
| `ray_address` | `str \| None` | The address of the Ray cluster. If None, uses the default Ray cluster address. | `None` |
| `require_gpu_on_driver` | `bool` | If True (default), require at least one GPU on the current (driver) node and pin the first PG bundle to it. Set to False for executors like RayExecutorV2 where all GPU work is delegated to remote Ray actors. | `True` |
Source code in vllm/v1/executor/ray_utils.py