
Tensor Network Decoder #179


Merged
merged 118 commits into from
Jul 4, 2025
Conversation

Collaborator

@npancotti npancotti commented Jun 6, 2025

This pull request adds a tensor network decoder to the Python code base at libs/qec/python/cudaq_qec/plugins/decoders/

The main entry point is the TensorNetworkDecoder class defined in tensor_network_decoder.py. The file contains the basic functionality to manipulate tensor networks and to dispatch to different backends.

Other files in decoders/tensor_network_utils/

  • noise_model.py: helper functions to create tensor network representation of noise models
  • contractors.py: the tensor network contractors backends and path finders
  • tensor_network_factory.py: helper functions to build tensor networks

Features

  • Construct a tensor network from an arbitrary parity-check matrix (code capacity & circuit level noise)
  • Use arbitrary noise models and logical observables
    • Correlated, inhomogeneous, disordered
  • Support different CPU and GPU backends via numpy, torch, and cuQuantum (cuTensorNet)
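For intuition, the quantity such a decoder estimates can be written down directly: given a parity-check matrix H, logical observables L, and independent error priors p, the probability that a logical observable flipped is a sum of prior weights over all error patterns consistent with the observed syndrome. The brute-force sketch below is my own illustration, not the plugin's implementation; the tensor network contracts the same sum efficiently instead of enumerating all 2^n patterns.

```python
# Brute-force reference for what a maximum-likelihood decoder computes
# (illustration only -- the tensor network contracts this sum instead of
# enumerating all 2^n error patterns).
import numpy as np
from itertools import product

def ml_logical_probability(H, L, p, syndrome):
    """P(logical observable flipped | syndrome) under independent priors p."""
    n = H.shape[1]
    num = den = 0.0
    for bits in product([0, 1], repeat=n):
        e = np.array(bits)
        if not np.array_equal(H @ e % 2, syndrome):
            continue  # error pattern does not reproduce the syndrome
        w = np.prod(np.where(e == 1, p, 1 - p))  # prior weight of this pattern
        den += w
        if (L @ e % 2)[0] == 1:
            num += w
    return num / den

# 3-bit repetition code: two checks, one logical observable
H = np.array([[1, 1, 0], [0, 1, 1]])
L = np.array([[1, 1, 1]])
p = np.array([0.1, 0.1, 0.1])
print(ml_logical_probability(H, L, p, np.array([1, 0])))  # ~0.9
```

For syndrome [1, 0] only the patterns (1,0,0) and (0,1,1) are consistent; the single-flip pattern carries almost all of the weight, so the decoder is confident the observable flipped.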

Initialization

import cudaq_qec as qec
import numpy as np

# parity check matrix
H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
# logical observable
logicals = np.array([[0, 0, 1]])
# noise model
noise_model = 0.01 * np.random.rand(3)

decoder = qec.get_decoder(
    "tensor_network_decoder", 
    H, 
    logicals=logicals, 
    noise_model=noise_model, 
    contract_noise_model=True,
    # contractor_name="torch", # to use torch array type
    # device="cuda:0", # to load the tensors on a gpu via torch 
)

Decoding

Single syndrome

syndrome = [0.0, 1.0, 1.0]
res = decoder.decode(syndrome)
print(f"Results. res.result = {res.result}, converged = {res.converged}")

A batch of syndromes

syndrome_batch = np.array([[0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
res = decoder.decode_batch(syndrome_batch)
print([r.result for r in res])

Circuit level noise

For circuit level noise, you can extract H, logicals and noise_model from a Stim detector error model as follows.

To run this example, you need to install beliefmatching and stim in your environment:

pip install stim
pip install beliefmatching

import numpy as np
import stim

from beliefmatching.belief_matching import detector_error_model_to_check_matrices

def parse_detector_error_model(detector_error_model):
    matrices = detector_error_model_to_check_matrices(detector_error_model)

    out_H = np.zeros(matrices.check_matrix.shape)
    matrices.check_matrix.astype(np.float64).toarray(out=out_H)
    out_L = np.zeros(matrices.observables_matrix.shape)
    matrices.observables_matrix.astype(np.float64).toarray(out=out_L)

    return out_H, out_L, [float(p) for p in matrices.priors]

circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    rounds=3,
    distance=3,
    after_clifford_depolarization=0.001,
    after_reset_flip_probability=0.01,
    before_measure_flip_probability=0.01,
    before_round_data_depolarization=0.01
)

detector_error_model = circuit.detector_error_model(decompose_errors=True)

H, logicals, noise_model = parse_detector_error_model(detector_error_model)

decoder = qec.get_decoder(
    "tensor_network_decoder", 
    H, 
    logicals=logicals, 
    noise_model=noise_model, 
    contract_noise_model=True,
    contractor_name="cutensornet",
    device="cuda:0",
)


num_shots = 5
sampler = circuit.compile_detector_sampler()
detection_events, observable_flips = sampler.sample(num_shots, separate_observables=True)

res = decoder.decode_batch(detection_events)
print([r.result for r in res])

print(observable_flips)
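To judge decoding accuracy, the decoder's per-shot outputs can be thresholded into hard predictions and compared with observable_flips. The sketch below uses dummy arrays as stand-ins for the decoder and sampler outputs, and assumes each result carries one probability per logical observable; the exact result type isn't shown above, so treat these names as illustrative.

```python
# Hypothetical post-processing sketch: threshold per-shot logical
# probabilities and compare them with Stim's observable_flips.
# The arrays below are dummy stand-ins for the decoder/sampler outputs.
import numpy as np

probs = np.array([[0.9], [0.2], [0.7], [0.1], [0.6]])   # e.g. r.result per shot
observable_flips = np.array([[1], [0], [0], [0], [1]])  # from sampler.sample

predictions = (probs > 0.5).astype(int)                  # hard decisions
errors = np.any(predictions != observable_flips, axis=1) # per-shot mismatch
logical_error_rate = errors.mean()
print(logical_error_rate)  # -> 0.2
```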

New dependencies

  • torch
  • quimb
  • opt_einsum

Testing

Pick the <image-name> for your platform

  • ghcr.io/nvidia/cudaqx-dev:latest-amd64 for AMD64 platforms
  • ghcr.io/nvidia/cudaqx-dev:latest-arm64 for ARM64 platforms

Then

docker pull <image-name>
docker run -it --gpus all --name cudaqx-dev <image-name>

Inside the container

export CUDAQ_INSTALL_PREFIX=/usr/local/cudaq
export CUDAQX_INSTALL_PREFIX=~/.cudaqx
cd /workspaces

# Get latest source code
git clone https://github.com/npancotti/cudaqx.git
cd cudaqx
mkdir build && cd build

# Configure your build (adjust as necessary)
cmake -G Ninja -S .. \
  -DCUDAQ_INSTALL_DIR=$CUDAQ_INSTALL_PREFIX \
  -DCMAKE_INSTALL_PREFIX=${CUDAQX_INSTALL_PREFIX} \
  -DCUDAQ_DIR=${CUDAQ_INSTALL_PREFIX}/lib/cmake/cudaq \
  -DCMAKE_BUILD_TYPE=Release

# Install your build
ninja install

# Perform tests just to prove that it is running
export PYTHONPATH=${CUDAQ_INSTALL_PREFIX}:${CUDAQX_INSTALL_PREFIX}
export PATH="${CUDAQ_INSTALL_PREFIX}/bin:${CUDAQX_INSTALL_PREFIX}/bin:${PATH}"
ctest

Set up the virtual environment & dependencies

cd /workspaces/cudaqx

# checkout the PR branch
git checkout npancotti/tn_decoder 
# propagate the python changes to the build subfolder
ninja -C build 

# set up the virtual environment
apt update
apt install python3-venv -y
python3 -m venv .venv
echo "export PYTHONPATH=/usr/local/cudaq:/workspaces/cudaqx/build/python" >> .venv/bin/activate
source .venv/bin/activate

# install the old and new dependencies
pip install cuda-quantum-cu12
python -c "import cudaq_qec"
pip install quimb opt_einsum torch

If you modify anything in the python code, don't forget to run ninja -C build to propagate the modification.

You can test the environment by running the example above. Copy the whole block below and paste it into the container's bash shell; it creates a tn_decoder.py file with the example above.

echo 'import cudaq_qec as qec
import numpy as np

# parity check matrix
H = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
# logical observable
logicals = np.array([[0, 0, 1]])
# noise model
noise_model = 0.01 * np.random.rand(3)

decoder = qec.get_decoder(
    "tensor_network_decoder", 
    H, 
    logicals=logicals, 
    noise_model=noise_model, 
    contract_noise_model=True,
    # contractor_name="torch", # to use torch array type
    # device="cuda:0", # to load the tensors on a gpu via torch 
)

syndrome = [0, 1, 1]
res = decoder.decode(syndrome)
print(f"Results. res.result = {res.result}, converged = {res.converged}")' >> tn_decoder.py

And then

python tn_decoder.py # you will get some warnings

To run the actual tests, you can

pip install pytest
python -m pytest libs/qec/python/tests/test_tensor_network_decoder.py


Collaborator Author

npancotti commented Jun 10, 2025

Surprising thing I noticed on the way: the order I import stim and cudaq_qec matters.

(.venv) root@f75d8b092fe5:/workspaces/cudaqx# python 
Python 3.10.12 (main, Feb  4 2025, 14:57:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import stim 
>>> import cudaq_qec
>>> circuit = stim.Circuit.generated("surface_code:rotated_memory_z", rounds=3, distance=3)
>>> 
(.venv) root@f75d8b092fe5:/workspaces/cudaqx# python 
Python 3.10.12 (main, Feb  4 2025, 14:57:36) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import cudaq_qec
>>> import stim
>>> circuit = stim.Circuit.generated("surface_code:rotated_memory_z", rounds=3, distance=3)
Segmentation fault (core dumped)
(.venv) root@f75d8b092fe5:/workspaces/cudaqx# 

@bmhowe23
Collaborator

> Surprising thing I noticed on the way: the order I import stim and cudaq_qec matters.
Hmm, we will need to fix this. CUDA-Q includes a Stim-backed simulator. Perhaps it is re-exporting some Stim symbols that are conflicting with the true Python stim. Can you reproduce this in a standalone CUDA-Q Python file? If so, I think it would be helpful to post a CUDA-Q issue here: https://github.com/NVIDIA/cuda-quantum/issues

Collaborator

bmhowe23 commented Jul 2, 2025

/ok to test adfd366

npancotti added 3 commits July 2, 2025 01:16
@npancotti npancotti marked this pull request as ready for review July 2, 2025 19:09
@melody-ren
Collaborator

/ok to test 4ea411f

@bmhowe23 bmhowe23 added the enhancement New feature or request label Jul 2, 2025
npancotti added 5 commits July 3, 2025 00:34
@npancotti
Collaborator Author

/ok to test 922156b

npancotti added 2 commits July 3, 2025 02:08
@npancotti
Collaborator Author

/ok to test f411628

npancotti added 2 commits July 3, 2025 19:06
@npancotti
Collaborator Author

/ok to test 8a90b6d

npancotti added 2 commits July 3, 2025 19:37
@npancotti
Collaborator Author

/ok to test 0367c4c

npancotti added 2 commits July 3, 2025 23:53
@npancotti
Collaborator Author

/ok to test 28616c6


@bmhowe23 bmhowe23 left a comment


Looks great ... thanks, Nico!

@bmhowe23 bmhowe23 merged commit e276d89 into NVIDIA:main Jul 4, 2025
15 checks passed