Commit 00c2b81

[None][chore] Skip failing import of mxfp4_moe (#8591)
Signed-off-by: Balaram Buddharaju <[email protected]>
1 parent df689f8 commit 00c2b81

File tree: 1 file changed (+4, −1)

tests/unittest/_torch/auto_deploy/unit/multigpu/custom_ops/test_mxfp4_moe_ep.py

Lines changed: 4 additions & 1 deletion
@@ -5,9 +5,12 @@
 import torch.distributed as dist
 from _dist_test_utils import get_device_counts
 
-from tensorrt_llm._torch.auto_deploy.custom_ops.mxfp4_moe import IS_TRITON_KERNELS_AVAILABLE
 from tensorrt_llm._torch.auto_deploy.distributed.common import spawn_multiprocess_job
 
+# FIXME: https://nvbugspro.nvidia.com/bug/5604136.
+# from tensorrt_llm._torch.auto_deploy.custom_ops.mxfp4_moe import IS_TRITON_KERNELS_AVAILABLE
+IS_TRITON_KERNELS_AVAILABLE = False
+
 
 def _split_range_last_remainder(n: int, world_size: int, rank: int):
     """[lo, hi) split along dim0; last rank gets remainder."""
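The commit hard-codes `IS_TRITON_KERNELS_AVAILABLE = False` until the tracked bug is fixed. A hypothetical alternative, sketched here and not part of the commit, would guard the import so the flag recovers on its own once the import stops failing:

```python
# Hypothetical sketch, NOT the commit's approach: fall back to False whenever
# the import fails. The catch is deliberately broad because the tracked
# failure may not surface as a plain ImportError.
try:
    from tensorrt_llm._torch.auto_deploy.custom_ops.mxfp4_moe import (
        IS_TRITON_KERNELS_AVAILABLE,
    )
except Exception:  # broad on purpose while the bug is open
    IS_TRITON_KERNELS_AVAILABLE = False
```

The commit likely avoids this pattern on purpose: a broad `except` can mask unrelated breakage, whereas the hard-coded `False` keeps the skip explicit and searchable until the bug is closed.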

0 commit comments