
Enabled some cases with known issue to test triage_bot #1714


Open

daisyden wants to merge 2 commits into main

Conversation

@daisyden (Contributor) commented Jun 2, 2025

No description provided.

@pytorchxpubot commented

@sys_pytorchxpubot triage result for run 15387896749. Triage bot UT analysis result for reference only; please note each unique error message is reported only once:
  1. third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU test_compare_cpu_sparse_sampled_addmm_xpu_float32 failed with error message:
 NotImplementedError: Could not run 'aten::sparse_sampled_addmm' with arguments from the 'SparseCsrXPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::sparse_sampled_addmm' is only available for these backends: [XPU, Meta, SparseCsrCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradHIP, AutogradXLA, AutogradMPS, AutogradIPU, AutogradXPU, AutogradHPU, AutogradVE, AutogradLazy, AutogradMTIA, AutogradMAIA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, AutogradMeta, AutogradNestedTensor, Tracer, AutocastCPU, AutocastMTIA, AutocastMAIA, AutocastXPU, AutocastMPS, AutocastCUDA, FuncTorchBatched, BatchedNestedTensor, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PreDispatch, PythonDispatcher]. ; Exception: Caused by reference input at index 0: SampleInput(input=Tensor[size=(0, 0), device="xpu:0", dtype=torch.float32], args=TensorList[Tensor[size=(0, 0), device="xpu:0", dtype=torch.float32], Tensor[size=(0, 0), device="xpu:0", dtype=torch.float32]], kwargs={'alpha': '0.6', 'beta': '0.2'}, broadcasts_input=False, name='') 

Triage bot response:

{
  "similar_issue_id": 618,
  "similar_issue_state": "closed",
  "issue_owner": "PenghuiCheng",
  "issue_description": "The failed unit test is due to the 'sparse_sampled_addmm' operation not being implemented for the 'SparseCsrXPU' backend. This is a known issue related to missing sparse tensor operation support in the XPU backend.",
  "root_causes": [
    "The 'sparse_sampled_addmm' operation is not implemented for the 'SparseCsrXPU' backend.",
    "Sparse tensor operations are not fully supported on XPU, leading to similar issues as previously reported."
  ],
  "suggested_solutions": [
    "Implement the 'sparse_sampled_addmm' operation for the XPU backend to resolve the NotImplementedError.",
    "Enhance the support for sparse tensor operations on XPU to align with other backends."
  ]
}
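For context, this failure can be reproduced outside the test suite. Below is a minimal sketch, assuming an XPU-enabled PyTorch build with an available XPU device; it mirrors the failing SampleInput (0x0 float32 CSR input, alpha=0.6, beta=0.2):

```python
import torch

device = "xpu"

# Mirror the failing SampleInput: an empty (0, 0) sparse CSR `input` plus
# dense mat1/mat2. sampled_addmm requires `input` in CSR layout.
input_csr = torch.sparse_csr_tensor(
    torch.zeros(1, dtype=torch.int64),    # crow_indices for 0 rows
    torch.zeros(0, dtype=torch.int64),    # col_indices
    torch.zeros(0, dtype=torch.float32),  # values
    size=(0, 0),
    device=device,
)
mat1 = torch.zeros(0, 0, dtype=torch.float32, device=device)
mat2 = torch.zeros(0, 0, dtype=torch.float32, device=device)

# Expected to raise NotImplementedError: no 'aten::sparse_sampled_addmm'
# kernel is registered for the SparseCsrXPU dispatch key.
torch.sparse.sampled_addmm(input_csr, mat1, mat2, alpha=0.6, beta=0.2)
```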
  2. third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU test_python_ref__refs_eye_xpu_float8_e4m3fn failed with error message:
 RuntimeError: "where_xpu" not implemented for 'Float8_e4m3fn' ; Exception: Caused by reference input at index 0: SampleInput(input=0, args=(), kwargs={'device': "'xpu:0'", 'dtype': 'torch.float8_e4m3fn', 'requires_grad': 'False'}, broadcasts_input=False, name='') 

Triage bot response:

{
  "similar_issue_id": "233",
  "similar_issue_state": "closed",
  "issue_owner": "guangyey",
  "issue_description": "Failures in test_ops::TestCompositeCompliance related to test failures and feature support in torch-xpu-ops, with issues in tensor memory management and unsupported operations.",
  "root_causes": [
    "Unsupported operation for Float8_e4m3fn dtype on XPU.",
    "Missing implementation of 'where' function for Float8_e4m3fn on XPU backend."
  ],
  "suggested_solutions": [
    "Implement the 'where' function for Float8_e4m3fn dtype in the XPU backend.",
    "Ensure proper handling of Float8 data types in DispatchStub for XPU."
  ]
}
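This one can also be reproduced directly. A minimal sketch, assuming an XPU-enabled build: the test exercises the Python reference `torch._refs.eye`, which materializes the identity through `torch.where` on a boolean diagonal mask, so the missing float8 `where_xpu` kernel surfaces even though `eye` is the op under test:

```python
import torch
import torch._refs as refs

# The reference eye builds where(diag_mask, 1, 0) internally, so the
# missing float8 `where` kernel on XPU is what actually raises here.
try:
    refs.eye(0, device="xpu", dtype=torch.float8_e4m3fn)
except RuntimeError as e:
    print(e)  # "where_xpu" not implemented for 'Float8_e4m3fn'
```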
  3. third_party.torch-xpu-ops.test.xpu.test_ops_xpu.TestCommonXPU test_python_ref__refs_eye_xpu_float8_e4m3fnuz failed with error message:
 RuntimeError: "where_xpu" not implemented for 'Float8_e4m3fnuz' ; Exception: Caused by reference input at index 0: SampleInput(input=0, args=(), kwargs={'device': "'xpu:0'", 'dtype': 'torch.float8_e4m3fnuz', 'requires_grad': 'False'}, broadcasts_input=False, name='') 

Triage bot response:

{
  "similar_issue_id": "233",
  "similar_issue_state": "closed",
  "issue_owner": "guangyey",
  "issue_description": "Failures in test_ops::TestCompositeCompliance related to test failures and feature support in torch-xpu-ops, with issues in tensor operations and memory management.",
  "root_causes": [
    "Missing implementation of 'where' operation for Float8_e4m3fnuz dtype on XPU.",
    "Incomplete support for new data types in XPU backend operations."
  ],
  "suggested_solutions": [
    "Implement the 'where' operation for Float8_e4m3fnuz dtype in the XPU backend.",
    "Ensure proper stubbing and operation implementation for new data types in the DispatchStub."
  ]
}
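The third failure is the same missing `where` kernel, just for the `Float8_e4m3fnuz` variant, so the repro above applies with the dtype swapped. In the spirit of the suggested solutions, a harness could probe `where` dtype support on XPU before enabling such cases; `xpu_where_supports` below is a hypothetical helper sketched for illustration, not an existing API:

```python
import torch

def xpu_where_supports(dtype: torch.dtype) -> bool:
    """Hypothetical guard: check whether torch.where has an XPU kernel
    registered for `dtype`. Assumes an XPU-enabled PyTorch build."""
    cond = torch.zeros(1, dtype=torch.bool, device="xpu")
    # torch.empty only allocates, so it works even for dtypes whose
    # compute kernels are missing on the backend.
    val = torch.empty(1, dtype=dtype, device="xpu")
    try:
        torch.where(cond, val, val)
        return True
    except RuntimeError:
        # e.g. "where_xpu" not implemented for 'Float8_e4m3fnuz'
        return False

for dt in (torch.float8_e4m3fn, torch.float8_e4m3fnuz):
    print(dt, xpu_where_supports(dt))
```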
