Conversation

@defertoexpertise

On Macs, the XSampler was giving errors because CUDA wasn't present (because it's a Mac). I fixed that by converting an `if` to an `elif`, and set the dtype to the correct value for MPS at the same time to fix the second error.

Please test on a PC before accepting; however, it shouldn't break anything.

```
!!! Exception during processing !!! Torch not compiled with CUDA enabled
Traceback (most recent call last):
  File "/Users/user/AI/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/user/AI/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/user/AI/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/user/AI/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/user/AI/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 361, in sampling
    if torch.cuda.is_bf16_supported():
  File "/Users/user/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 138, in is_bf16_supported
    device = torch.cuda.current_device()
  File "/Users/user/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 878, in current_device
    _lazy_init()
  File "/Users/user/AI/ComfyUI/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 305, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
!!! Exception during processing !!! local variable 'dtype_model' referenced before assignment
Traceback (most recent call last):
  File "/Users/user/AI/ComfyUI/execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/user/AI/ComfyUI/execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
  File "/Users/user/AI/ComfyUI/execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)
  File "/Users/user/AI/ComfyUI/execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))
  File "/Users/user/AI/ComfyUI/custom_nodes/x-flux-comfyui/nodes.py", line 376, in sampling
    dtype=dtype_model, seed=noise_seed
UnboundLocalError: local variable 'dtype_model' referenced before assignment
```
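
A minimal sketch of the change described above. The exact branch structure in `nodes.py` and the float16 fallbacks are assumptions; only the if-to-elif conversion and the MPS dtype assignment come from the description:

```python
import torch

# Assumed shape of the bug: the CUDA bf16 check ran unconditionally, so on
# a Mac it raised "Torch not compiled with CUDA enabled", dtype_model was
# never assigned, and the second traceback's UnboundLocalError followed.
#
# Sketch of the fix: check MPS first and make the CUDA check an elif, so
# it is only evaluated when CUDA is actually present.
if torch.backends.mps.is_available():
    # MPS has no float64 and limited bf16 support; float16 is an assumed
    # choice here, not necessarily what the PR uses.
    dtype_model = torch.float16
elif torch.cuda.is_available() and torch.cuda.is_bf16_supported():
    dtype_model = torch.bfloat16
else:
    dtype_model = torch.float16
```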

@bijlevel

bijlevel commented Jan 19, 2025

Is this also the fix for this error: "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."? I'm asking because I'm not (AFAIK) using CUDA, and if I read the code well, your fix assumes the use of CUDA.
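
For context, the quoted message is PyTorch's generic MPS limitation rather than anything CUDA-specific: MPS rejects float64 outright, so float64 values must be cast to float32 before (or while) moving to the device. A minimal, hypothetical illustration:

```python
import torch

if torch.backends.mps.is_available():
    # float32 works on MPS:
    ok = torch.randn(4, dtype=torch.float32, device="mps")

    # Converting a float64 tensor to MPS raises:
    # "Cannot convert a MPS Tensor to float64 dtype as the MPS framework
    # doesn't support float64. Please use float32 instead."
    x64 = torch.randn(4, dtype=torch.float64)          # CPU tensor
    x_mps = x64.to(dtype=torch.float32, device="mps")  # safe cast + move
```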
