
Commit 90a7fe4

Neuron SDK Release 2.24.0 (#1181)
1 parent fd0d619 commit 90a7fe4

31 files changed: +1042 −112 lines

dlami/index.rst

Lines changed: 42 additions & 48 deletions

@@ -89,20 +89,20 @@ Virtual Environments pre-installed
    * - Neuron Framework/Libraries supported
      - Virtual Environment

-   * - PyTorch 2.6 Torch NeuronX, NxD Core
-     - /opt/aws_neuronx_venv_pytorch_2_6
+   * - PyTorch 2.7 Torch NeuronX, NxD Core
+     - /opt/aws_neuronx_venv_pytorch_2_7

-   * - PyTorch 2.6 NxD Training, Torch NeuronX
-     - /opt/aws_neuronx_venv_pytorch_2_6_nxd_training
+   * - PyTorch 2.7 NxD Training, Torch NeuronX
+     - /opt/aws_neuronx_venv_pytorch_2_7_nxd_training

-   * - PyTorch 2.6 NxD Inference, Torch NeuronX
-     - /opt/aws_neuronx_venv_pytorch_2_6_nxd_inference
+   * - PyTorch 2.7 NxD Inference, Torch NeuronX
+     - /opt/aws_neuronx_venv_pytorch_2_7_nxd_inference

-   * - Transformers NeuronX (PyTorch 2.6)
-     - /opt/aws_neuronx_venv_pytorch_2_6_transformers
+   * - Transformers NeuronX (PyTorch 2.7)
+     - /opt/aws_neuronx_venv_pytorch_2_7_transformers

-   * - JAX 0.5 NeuronX
-     - /opt/aws_neuronx_venv_jax_0_5
+   * - JAX 0.6 NeuronX
+     - /opt/aws_neuronx_venv_jax_0_6

    * - Tensorflow 2.10 NeuronX
      - /opt/aws_neuronx_venv_tensorflow_2_10

@@ -114,7 +114,7 @@ Virtual Environments pre-installed
      - /opt/aws_neuron_venv_pytorch_1_13_inf1


-Within the PyTorch 2.6 NxD Training virtual environment, we have included a setup script that installs required dependencies for the package. To run this script,
+Within the PyTorch 2.7 NxD Training virtual environment, we have included a setup script that installs required dependencies for the package. To run this script,
 activate the virtual environment and run ``setup_nxdt.sh``; this will run :ref:`the setup steps here <nxdt_installation_guide>`.

 You can easily get started with the multi-framework DLAMI through the AWS console by following this :ref:`setup guide <setup-ubuntu22-multi-framework-dlami>`. If you are looking to

@@ -140,25 +140,25 @@ Single Framework DLAMIs supported
      - Neuron Instances Supported
      - DLAMI Name

-   * - PyTorch 2.6
+   * - PyTorch 2.7
      - Ubuntu 22.04
      - Inf2, Trn1, Trn1n, Trn2
-     - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04)
+     - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04)

-   * - PyTorch 2.6
+   * - PyTorch 2.7
      - Amazon Linux 2023
      - Inf2, Trn1, Trn1n, Trn2
-     - Deep Learning AMI Neuron PyTorch 2.6 (Amazon Linux 2023)
+     - Deep Learning AMI Neuron PyTorch 2.7 (Amazon Linux 2023)

-   * - JAX 0.5
+   * - JAX 0.6
      - Ubuntu 22.04
      - Inf2, Trn1, Trn1n, Trn2
-     - Deep Learning AMI Neuron JAX 0.5 (Ubuntu 22.04)
+     - Deep Learning AMI Neuron JAX 0.6 (Ubuntu 22.04)

-   * - JAX 0.5
+   * - JAX 0.6
      - Amazon Linux 2023
      - Inf2, Trn1, Trn1n, Trn2
-     - Deep Learning AMI Neuron JAX 0.5 (Amazon Linux 2023)
+     - Deep Learning AMI Neuron JAX 0.6 (Amazon Linux 2023)

    * - Tensorflow 2.10
      - Ubuntu 22.04

@@ -189,25 +189,25 @@ Virtual Environments pre-installed
      - Neuron Libraries supported
      - Virtual Environment

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04, Amazon Linux 2023)
-     - PyTorch 2.6 Torch NeuronX, NxD Core
-     - /opt/aws_neuronx_venv_pytorch_2_6
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04, Amazon Linux 2023)
+     - PyTorch 2.7 Torch NeuronX, NxD Core
+     - /opt/aws_neuronx_venv_pytorch_2_7

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04, Amazon Linux 2023)
-     - PyTorch 2.6 NxD Training, Torch NeuronX
-     - /opt/aws_neuronx_venv_pytorch_2_6_nxd_training
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04, Amazon Linux 2023)
+     - PyTorch 2.7 NxD Training, Torch NeuronX
+     - /opt/aws_neuronx_venv_pytorch_2_7_nxd_training

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04, Amazon Linux 2023)
-     - PyTorch 2.6 NxD Inference, Torch NeuronX
-     - /opt/aws_neuronx_venv_pytorch_2_6_nxd_inference
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04, Amazon Linux 2023)
+     - PyTorch 2.7 NxD Inference, Torch NeuronX
+     - /opt/aws_neuronx_venv_pytorch_2_7_nxd_inference

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04, Amazon Linux 2023)
-     - Transformers NeuronX PyTorch 2.6
-     - /opt/aws_neuronx_venv_pytorch_2_6_transformers
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04, Amazon Linux 2023)
+     - Transformers NeuronX PyTorch 2.7
+     - /opt/aws_neuronx_venv_pytorch_2_7_transformers

-   * - Deep Learning AMI Neuron JAX 0.5 (Ubuntu 22.04, Amazon Linux 2023)
-     - JAX NeuronX 0.5
-     - /opt/aws_neuronx_venv_jax_0_5
+   * - Deep Learning AMI Neuron JAX 0.6 (Ubuntu 22.04, Amazon Linux 2023)
+     - JAX NeuronX 0.6
+     - /opt/aws_neuronx_venv_jax_0_6

    * - Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 22.04)
      - Pytorch Neuron (Inf1)

@@ -298,36 +298,30 @@ SSM Parameter Prefix
    * - Deep Learning AMI Neuron (Amazon Linux 2023)
      - /aws/service/neuron/dlami/multi-framework/amazon-linux-2023

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Ubuntu 22.04)
-     - /aws/service/neuron/dlami/pytorch-2.6/ubuntu-22.04
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Ubuntu 22.04)
+     - /aws/service/neuron/dlami/pytorch-2.7/ubuntu-22.04

-   * - Deep Learning AMI Neuron PyTorch 2.6 (Amazon Linux 2023)
-     - /aws/service/neuron/dlami/pytorch-2.6/amazon-linux-2023
+   * - Deep Learning AMI Neuron PyTorch 2.7 (Amazon Linux 2023)
+     - /aws/service/neuron/dlami/pytorch-2.7/amazon-linux-2023

-   * - Deep Learning AMI Neuron JAX 0.5 (Ubuntu 22.04)
-     - /aws/service/neuron/dlami/jax-0.5/ubuntu-22.04
+   * - Deep Learning AMI Neuron JAX 0.6 (Ubuntu 22.04)
+     - /aws/service/neuron/dlami/jax-0.6/ubuntu-22.04

-   * - Deep Learning AMI Neuron JAX 0.5 (Amazon Linux 2023)
-     - /aws/service/neuron/dlami/jax-0.5/amazon-linux-2023
+   * - Deep Learning AMI Neuron JAX 0.6 (Amazon Linux 2023)
+     - /aws/service/neuron/dlami/jax-0.6/amazon-linux-2023

    * - Deep Learning AMI Neuron PyTorch 1.13 Inf1 (Ubuntu 22.04)
      - /aws/service/neuron/dlami/pytorch-1.13-inf1/ubuntu-22.04

    * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
      - /aws/service/neuron/dlami/tensorflow-2.10/ubuntu-22.04

-   * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04)
-     - /aws/service/neuron/dlami/tensorflow-2.10/ubuntu-20.04
-
    * - Deep Learning Base Neuron AMI (Amazon Linux 2023)
      - /aws/service/neuron/dlami/base/amazon-linux-2023

    * - Deep Learning Base Neuron AMI (Ubuntu 22.04)
      - /aws/service/neuron/dlami/base/ubuntu-22.04

-   * - Deep Learning Base Neuron AMI (Ubuntu 20.04)
-     - /aws/service/neuron/dlami/base/ubuntu-20.04
-

 For example, to find the latest DLAMI ID for the Multi-Framework DLAMI (Ubuntu 22), you can use the following
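
The hunk above ends just before the lookup command itself, so the following is a hedged sketch only: given the SSM prefixes in the table, an AWS CLI query for the latest multi-framework DLAMI ID might look like this. The trailing ``latest/image_id`` path segment is an assumption not shown in this diff; confirm it against the full dlami/index.rst.

# Sketch: resolve the newest multi-framework DLAMI ID from the SSM prefix above.
# The "latest/image_id" suffix is assumed, not taken from this hunk.
aws ssm get-parameter \
    --region us-east-1 \
    --name /aws/service/neuron/dlami/multi-framework/ubuntu-22.04/latest/image_id \
    --query "Parameter.Value" \
    --output text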

frameworks/torch/torch-neuronx/tutorials/training/finetune_hftrainer.rst

Lines changed: 10 additions & 9 deletions

@@ -62,15 +62,15 @@ First, paste the following script into your terminal to create a “run.sh” file

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_single_worker_training.sh
    :language: shell
-   :lines: 7-27
+   :lines: 7-28

 We optionally precompile the model and training script using neuron_parallel_compile to warm up the persistent
 graph cache (Neuron Cache) such that the actual run has fewer compilations (faster run
 time):

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_single_worker_training.sh
    :language: shell
-   :lines: 30
+   :lines: 31

 Please ignore the results from this precompile run as it is only for
 extracting and compiling the XLA graphs.

@@ -86,7 +86,7 @@ additional compilations.

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_single_worker_training.sh
    :language: shell
-   :lines: 32
+   :lines: 33

 If precompilation was not done, the first execution of ./run.sh will be slower due to serial compilations. Rerunning the same script a second time would show quicker execution as the compiled graphs will already be cached in the persistent cache.

@@ -108,23 +108,23 @@ Paste the following script into your terminal to create a “run_2w.sh” file

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_multi_worker_training_code.sh
    :language: shell
-   :lines: 7-27
+   :lines: 7-28

 Again, we optionally precompile the model and training script using neuron_parallel_compile to warm up the persistent
 graph cache (Neuron Cache), ignoring the results from this precompile run as it is only for
 extracting and compiling the XLA graphs:

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_multi_worker_training_code.sh
    :language: shell
-   :lines: 30
+   :lines: 31

 Precompilation is optional and only needs to be done once unless hyperparameters such as batch size are modified.
 After the optional precompilation, the actual run will be faster with minimal
 additional compilations.

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_multi_worker_training_code.sh
    :language: shell
-   :lines: 32
+   :lines: 33

 During the run, you will now notice that the "Total train batch size" is now 16 and the "Total optimization steps" is now half the number for one-worker training.

@@ -149,19 +149,19 @@ Paste the following script into your terminal to create a “run_converted.sh” file

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_converted_checkpoint_training.sh
    :language: shell
-   :lines: 38-59
+   :lines: 38-60

 If it is the first time running with the ``bert-large-uncased`` model or if hyperparameters have changed, then the optional one-time precompilation step can save compilation time:

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_converted_checkpoint_training.sh
    :language: shell
-   :lines: 62
+   :lines: 63

 If you have run the single worker training in a previous section, then you can skip the precompilation step and just do:

 .. literalinclude:: tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_converted_checkpoint_training.sh
    :language: shell
-   :lines: 65
+   :lines: 66

 .. _workarounds_for_older_versions:

@@ -234,6 +234,7 @@ The following are currently known issues:
 - Variable input sizes: When fine-tuning models such as dslim/bert-base-NER using the `token-classification example <https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification>`__, you may encounter timeouts (many "socket.h:524 CCOM WARN Timeout waiting for RX" messages) and execution hangs. This occurs because the NER dataset has different sample sizes, which causes many recompilations and compiled-graph (NEFF) reloads. Furthermore, different data parallel workers can execute different compiled graphs. This multiple-program multiple-data behavior is currently unsupported. To work around this issue, please pad to maximum length using the Trainer API option ``--pad_to_max_length``.
 - When running HuggingFace GPT fine-tuning with transformers version >= 4.21.0 and using XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1, you might see NaNs in the loss immediately at the first step. This issue occurs due to large negative constants used to implement attention masking (https://github.com/huggingface/transformers/pull/17306). To work around this issue, please use transformers version <= 4.20.0.
 - When using the Trainer API option --bf16, you will see "RuntimeError: No CUDA GPUs are available". To work around this error, please add "import torch; torch.cuda.is_bf16_supported = lambda: True" to the Python script (i.e. run_glue.py). (Trainer API option --fp16 is not yet supported.)
+- When using the latest HuggingFace transformers version, you may see "ValueError: Your setup doesn't support bf16/gpu." To fix this, please use ``--use_cpu True`` in your scripts.

 The following are resolved issues:
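
For orientation, the precompile-then-train flow that the ``:lines:`` options above select boils down to two shell steps; this is a minimal sketch assuming the run script is named ``run.sh`` as in the tutorial prose.

# Sketch: optional precompilation to warm the persistent Neuron Cache, then the real run.
# Results of the neuron_parallel_compile pass are only for graph extraction and should be ignored.
neuron_parallel_compile ./run.sh
./run.sh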

frameworks/torch/torch-neuronx/tutorials/training/tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_converted_checkpoint_training.sh

Lines changed: 1 addition & 0 deletions

@@ -47,6 +47,7 @@ NEURON_RT_STOCHASTIC_ROUNDING_EN=1 torchrun --nproc_per_node=2 ./run_glue.py \\
     --do_train \\
     --do_eval \\
     --bf16 \\
+    --use_cpu True \\
     --max_seq_length 128 \\
     --per_device_train_batch_size 8 \\
     --learning_rate 2e-5 \\

frameworks/torch/torch-neuronx/tutorials/training/tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_multi_worker_training_code.sh

Lines changed: 1 addition & 0 deletions

@@ -15,6 +15,7 @@ NEURON_RT_STOCHASTIC_ROUNDING_EN=1 torchrun --nproc_per_node=2 ./run_glue.py \\
     --do_train \\
     --do_eval \\
     --bf16 \\
+    --use_cpu True \\
     --max_seq_length 128 \\
     --per_device_train_batch_size 8 \\
     --learning_rate 2e-5 \\

frameworks/torch/torch-neuronx/tutorials/training/tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_setup_code.sh

Lines changed: 4 additions & 3 deletions

@@ -2,8 +2,9 @@
 set -eExuo

 # Install packages and clone transformers
-export HF_VER=4.44.0
-pip install -U transformers==$HF_VER datasets evaluate scikit-learn
+export HF_VER=4.52.0
+export ACC_VER=1.7.0
+pip install -U transformers==$HF_VER accelerate==$ACC_VER datasets evaluate scikit-learn
 cd ~/
 git clone https://github.com/huggingface/transformers --branch v$HF_VER
-cd ~/transformers/examples/pytorch/text-classification
+cd ~/transformers/examples/pytorch/text-classification
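
As a quick hedged check (not part of the commit) that the new pins resolved after running this setup script:

# Sketch: confirm the pinned package versions are active in the current environment.
python -c "import transformers, accelerate; print(transformers.__version__, accelerate.__version__)"
# Expected output under these pins: 4.52.0 1.7.0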

frameworks/torch/torch-neuronx/tutorials/training/tutorial_source_code/bert_mrpc_finetuning/bert_mrpc_finetuning_single_worker_training.sh

Lines changed: 1 addition & 0 deletions

@@ -15,6 +15,7 @@ NEURON_RT_STOCHASTIC_ROUNDING_EN=1 torchrun --nproc_per_node=1 ./run_glue.py \\
     --do_train \\
     --do_eval \\
     --bf16 \\
+    --use_cpu True \\
     --max_seq_length 128 \\
     --per_device_train_batch_size 8 \\
     --learning_rate 2e-5 \\
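
These three script diffs all pair ``--bf16`` with the new ``--use_cpu True`` flag. The older bf16 workaround from the known-issues list can also be applied with a one-liner; this is a sketch only, assuming GNU sed and the cloned ``run_glue.py``.

# Sketch: prepend the known-issues bf16 workaround to run_glue.py (GNU sed "1i" syntax assumed).
sed -i '1i import torch; torch.cuda.is_bf16_supported = lambda: True' run_glue.py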

general/appnotes/index.rst

Lines changed: 1 addition & 0 deletions

@@ -81,5 +81,6 @@ Neuron Application Notes
 .. toctree::
    :maxdepth: 1

+   /general/appnotes/torch-neuronx/introducing-pytorch-2-7
    /general/appnotes/torch-neuronx/introducing-pytorch-2-6
    /general/appnotes/torch-neuronx/introducing-pytorch-2-x

general/appnotes/torch-neuronx/introducing-pytorch-2-6.rst

Lines changed: 12 additions & 11 deletions

@@ -41,7 +41,7 @@ See :ref:`migrate_to_pytorch_2.6` for changes needed to use PyTorch NeuronX 2.6.
 How can I install PyTorch NeuronX 2.6?
 --------------------------------------------

-To install PyTorch NeuronX 2.6 please follow the :ref:`setup-torch-neuronx` guides for Amazon Linux 2023 and Ubuntu 22 AMI. Please also refer to the Neuron multi-framework DLAMI :ref:`setup guide <setup-ubuntu22-multi-framework-dlami>` for Ubuntu 22 with a pre-installed virtual environment for PyTorch NeuronX 2.6 that you can use to easily get started. PyTorch NeuronX 2.6 can be installed using the following:
+To install PyTorch NeuronX 2.6 please follow the :ref:`setup-torch-neuronx` guides for Amazon Linux 2023 and Ubuntu 22 AMI. Please also refer to the Neuron multi-framework DLAMI :ref:`setup guide <setup-ubuntu22-multi-framework-dlami>` for Ubuntu 22 with a pre-installed virtual environment for PyTorch NeuronX 2.6 that you can use to get started. PyTorch NeuronX 2.6 can be installed using the following:

 .. code::

@@ -66,7 +66,7 @@ To migrate the training scripts from PyTorch NeuronX 2.5 to PyTorch NeuronX 2.6,

 .. note::

-    ``xm`` below refers to ``torch_xla.core.xla_model`` and ``xr`` refers to ``torch_xla.runtime``
+    ``xm`` below refers to ``torch_xla.core.xla_model``, ``xr`` refers to ``torch_xla.runtime``, and ``xmp`` refers to ``torch_xla.distributed.xla_multiprocessing``

 * The environment variables ``XLA_DOWNCAST_BF16`` and ``XLA_USE_BF16`` are deprecated (warning when used) and will be removed in an upcoming release. Please switch to automatic mixed-precision or use ``model.to(torch.bfloat16)`` command to convert model to BF16 format. (see :ref:`migration_from_xla_downcast_bf16`)
 * The functions ``xm.xrt_world_size()``, ``xm.xla_model.get_ordinal()``, and ``xm.xla_model.get_local_ordinal()`` are deprecated (warning when used). Please switch to ``xr.world_size()``, ``xr.global_ordinal()``, and ``xr.local_ordinal()`` respectively as replacements.

@@ -123,20 +123,21 @@ Warning "XLA_DOWNCAST_BF16 will be deprecated after the 2.6 release, please down
 Environment variables ``XLA_DOWNCAST_BF16`` and ``XLA_USE_BF16`` are deprecated (warning when used). Please switch to automatic mixed-precision or use ``model.to(torch.bfloat16)`` command to cast model to BF16. (see :ref:`migration_from_xla_downcast_bf16`)


-AttributeError: <module 'torch_xla.core.xla_model' ... does not have the attribute 'xrt_world_size'
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+WARNING:root:torch_xla.core.xla_model.xrt_world_size() will be removed in release 2.7. is deprecated. Use torch_xla.runtime.world_size instead.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This is a warning that ``torch_xla.core.xla_model.xrt_world_size()`` will be removed in a future release. Please switch to using ``torch_xla.runtime.world_size`` instead.

-This is an error that ``torch_xla.core.xla_model.xrt_world_size()`` is removed in torch-xla version 2.7. Please switch to using ``torch_xla.runtime.world_size()`` instead.

-AttributeError: <module 'torch_xla.core.xla_model' ... does not have the attribute 'get_ordinal'
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+WARNING:torch_xla.core.xla_model.xla_model.get_ordinal() will be removed in release 2.7. is deprecated. Use torch_xla.runtime.global_ordinal instead.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-This is an error that ``torch_xla.core.xla_model.xla_model.get_ordinal()`` is removed in torch-xla version 2.7. Please switch to using ``torch_xla.runtime.global_ordinal()`` instead.
+This is a warning that ``torch_xla.core.xla_model.xla_model.get_ordinal()`` will be removed in a future release. Please switch to using ``torch_xla.runtime.global_ordinal`` instead.

-AttributeError: <module 'torch_xla.core.xla_model' ... does not have the attribute 'get_local_ordinal'
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+WARNING:torch_xla.core.xla_model.xla_model.get_local_ordinal() will be removed in release 2.7. is deprecated. Use torch_xla.runtime.local_ordinal instead.
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-This is an error that ``torch_xla.core.xla_model.xla_model.get_local_ordinal()`` is removed in torch-xla version 2.7. Please switch to using ``torch_xla.runtime.local_ordinal()`` instead.
+This is a warning that ``torch_xla.core.xla_model.xla_model.get_local_ordinal()`` will be removed in a future release. Please switch to using ``torch_xla.runtime.local_ordinal`` instead.


 Socket Error: Socket failed to bind
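
Since the reworded troubleshooting entries above point at the ``torch_xla.runtime`` replacements, here is a minimal hedged check that those APIs are importable. It assumes one of the DLAMI virtual environments with torch-xla installed; the venv path is illustrative.

# Sketch: verify the replacement runtime APIs named above exist in the installed torch-xla.
source /opt/aws_neuronx_venv_pytorch_2_7/bin/activate
python -c "import torch_xla.runtime as xr; print(xr.world_size, xr.global_ordinal, xr.local_ordinal)"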
