Expected behavior
R.isnan(), R.isinf(), and R.isfinite() should compile and produce correct boolean tensor outputs on both CPU and CUDA targets.
Actual behavior
All three ops crash during relax.build() with:
InternalError: CodeGenVM cannot handle this intrinsic now:
Op(relax.isnan)
This occurs on both CPU (llvm) and CUDA targets.
Environment
- TVM: main branch @ commit 0b0afd8dd (2026-04-25)
- OS: Ubuntu 20.04
Steps to reproduce
import numpy as np
import tvm
from tvm import relax
import tvm.relax.op as R

bb = relax.BlockBuilder()
x = relax.Var("x", relax.TensorStructInfo((4, 8), "float32"))
with bb.function("main", [x]):
    with bb.dataflow():
        out = bb.emit(R.isnan(x))
        gv = bb.emit_output(out)
    bb.emit_func_output(gv)
mod = bb.get()

pipeline = tvm.ir.transform.Sequential([relax.transform.LegalizeOps()])
mod_l = pipeline(mod)
exe = relax.build(mod_l, target="llvm")  # CRASH
Same crash for R.isinf(x) and R.isfinite(x).
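For reference, the expected boolean outputs follow NumPy semantics; a minimal sketch with a hypothetical sample input, independent of TVM:

```python
import numpy as np

# Sample input covering the interesting cases: finite, NaN, +inf, -inf.
x = np.array([1.0, float("nan"), float("inf"), -float("inf")], dtype="float32")

print(np.isnan(x).tolist())     # [False, True, False, False]
print(np.isinf(x).tolist())     # [False, False, True, True]
print(np.isfinite(x).tolist())  # [True, False, False, False]
```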
Notes
The PyTorch frontend added importer support for isnan/isinf/isfinite (PR [Relax][PyTorch] Add support for bitwise_not, isfinite, isinf, isnan, logical_not, sign and square ops #17659), but the backend code generation was not updated to handle them, so models using these ops can be imported but not compiled.
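A possible interim workaround (my assumption, not a verified TVM fix) is to rewrite these ops in terms of comparison ops that the VM codegen already handles. The underlying identities, checked here with NumPy rather than TVM:

```python
import numpy as np

x = np.array([0.0, -2.5, float("nan"), float("inf"), -float("inf")], dtype="float32")

# NaN is the only value that compares unequal to itself.
assert np.array_equal(np.isnan(x), x != x)
# abs(x) == inf holds exactly for +/-inf (comparisons against NaN are False).
assert np.array_equal(np.isinf(x), np.abs(x) == np.inf)
# abs(x) < inf holds exactly for finite values.
assert np.array_equal(np.isfinite(x), np.abs(x) < np.inf)
```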