Commit 4f84a45

[https://nvbugs/5452463][doc] update disagg doc about UCX_MAX_RNDV_RAILS (#7205)
Signed-off-by: zhengd-nv <[email protected]>
1 parent 20922b7 commit 4f84a45

1 file changed (+3 / -1 lines)

docs/source/advanced/disaggregated-service.md

@@ -36,6 +36,8 @@ There are some other useful environment variables that may help when encountering
 
 * `NCCL_GRAPH_MIXING_SUPPORT`: With the default value `1`, the CUDA driver may create too many CUDA streams while working with one CUDA graph, leading to a performance drop. Setting it to `0` will reduce the number of CUDA streams, but please make sure there are no other NCCL ops outside that one CUDA graph; otherwise it is unsafe.
 
+* `UCX_MAX_RNDV_RAILS`: With the default value `2`, UCX attempts to use two InfiniBand (IB) NIC devices per GPU for Rendezvous (RNDV) transfers. When both the context and generation instances enable tensor and expert parallelism (TEP), multiple TP ranks may transfer KV cache concurrently. Because each TP rank can use up to two NIC devices, some NIC devices can be shared across GPUs, causing contention and reduced throughput. Setting `UCX_MAX_RNDV_RAILS=1` can reduce contention in this case.
+
 ## Troubleshooting and FAQ
 
 ### General FAQs
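
For reference, here is a minimal sketch of how the two environment variables discussed in the hunk above might be set before launching disaggregated workers. The launch command is a hypothetical placeholder and not part of this commit; adapt it to your actual deployment.

```bash
# Hedged sketch: export the variables for each context/generation worker process.
export NCCL_GRAPH_MIXING_SUPPORT=0   # only safe if all NCCL ops stay inside the single CUDA graph
export UCX_MAX_RNDV_RAILS=1          # limit each TP rank to one IB NIC to avoid NIC sharing across GPUs

# launch_worker ...                  # placeholder for the actual worker launch command
```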
@@ -74,7 +76,7 @@ A. Yes, TRT-LLM supports using GPU direct RDMA for inter-node KV cache transfer.
 
 A. The communication for kvCache transfer between executors is established dynamically. The connection establishment process incurs significant overhead, which explains the apparently lower kvCache transfer bandwidth observed during the initial requests after service startup. This lower bandwidth reflects the inclusion of connection establishment overhead. When conducting benchmarks, it is recommended to perform a warm-up phase to ensure accurate performance measurements.
 
-*Q. When my servers are running on different NVLink domains, some servers hang or have lower performance. How to fix that?
+*Q. When my servers are running on different NVLink domains, some servers hang or have lower performance. How to fix that?*
 
 A. The NVLink domain can be found with `nvidia-smi -q` in the `Fabric.ClusterUUID` field. A few UCX environment variables can be adjusted when your servers have different NVLink domains:
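
As an illustration of the answer above, a hedged sketch for checking which NVLink domain each node belongs to; the exact field names in `nvidia-smi -q` output may vary by driver version, and the Fabric section only appears on fabric-attached systems.

```bash
# Print the Fabric section (including the cluster UUID) on each node; nodes with
# different cluster UUIDs are in different NVLink domains.
nvidia-smi -q | grep -i -A 3 "Fabric"
```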
