compiler/neuronx-cc/api-reference-guide/neuron-compiler-cli-reference-guide.rst (2 additions, 2 deletions)
@@ -122,7 +122,7 @@ Common parameters for the Neuron CLI:
- ``llm-training``: Enable the compiler to perform optimizations applicable to large language model (LLM) training runs that shard parameters, gradients, and optimizer states across data-parallel workers. This is equivalent to the previously documented option argument value of ``NEMO``, which will be deprecated in a future release.
- - :option:`--logical-nc-config <shard_degree>`: Instructs the compiler to shard the input graph across physical NeuronCore accelerators. Possible numeric values are {1, 2}. (only available on trn2; Default: ``2``)
+ - :option:`--logical-nc-config <shard_degree>`: Instructs the compiler to shard the input graph across physical NeuronCore accelerators. Possible numeric values are {1, 2}. (Only available on trn2; Default: ``2``)
Valid values:
@@ -141,7 +141,7 @@ Common parameters for the Neuron CLI:
- :option:`--enable-mixed-precision-accumulation`: Perform intermediate calculations of accumulation operators (such as softmax and layernorm) in FP32 and cast the result to the model-designated datatype. This improves the operator's resulting accuracy.
- - :option:`--enable-saturate-infinity`: Convert +/- infinity values to MAX/MIN_FLOAT for compiler-introduced matrix-multiply transpose computations that have a high risk of generating Not-a-Number (NaN) values. There is a potential performance impact during model execution when this conversion is enabled.
+ - :option:`--enable-saturate-infinity`: Convert +/- infinity values to MAX/MIN_FLOAT for compiler-introduced matrix-multiply transpose computations that have a high risk of generating Not-a-Number (NaN) values. There is a potential performance impact during model execution when this conversion is enabled. (Only needed on trn1; while the trn2 compiler will accept this flag for compatibility reasons, it has no effect on the compilation.)
- :option:`--enable-fast-context-switch`: Optimize for faster model switching rather than execution latency.
This option will defer loading some weight constants until the start of model execution. This results in overall faster system performance when your application switches between models frequently on the same Neuron Core (or set of cores).
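For illustration only, here is a hedged sketch of how options from this guide might be combined on a single ``neuronx-cc`` invocation, assembled in Python. The input file name ``model.hlo`` and the particular mix of flags are assumptions for the example, not a recommendation from this guide:

```python
# Hypothetical neuronx-cc command line combining options discussed above.
# "model.hlo" and the chosen flag values are illustrative assumptions.
flags = [
    "--logical-nc-config=2",                  # shard the graph across physical cores (trn2)
    "--enable-mixed-precision-accumulation",  # FP32 intermediate accumulations
    "--enable-saturate-infinity",             # clamp +/- inf in risky transposes (trn1)
]
cmd = ["neuronx-cc", "compile", "model.hlo", *flags]
print(" ".join(cmd))
```

Consult the full CLI reference for the options valid on your target before running a real compilation.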
conf.py (1 addition, 1 deletion)
@@ -195,7 +195,7 @@
#top_banner_message="<span>⚠</span><a class='reference internal' style='color:white;' href='https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/setup/setup-troubleshooting.html#gpg-key-update'> Neuron repository GPG key for Ubuntu installation has expired, see instructions how to update! </a>"
- top_banner_message="Neuron 2.21.1 is released! check <a class='reference internal' style='color:white;' href='https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release'> What's New </a> and <a class='reference internal' style='color:white;' href='https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html'> Announcements </a>"
+ top_banner_message="Neuron 2.22.0 is released! check <a class='reference internal' style='color:white;' href='https://awsdocs-neuron.readthedocs-hosted.com/en/latest/release-notes/index.html#latest-neuron-release'> What's New </a> and <a class='reference internal' style='color:white;' href='https://awsdocs-neuron.readthedocs-hosted.com/en/latest/general/announcements/index.html'> Announcements </a>"
You can easily get started with the multi-framework DLAMI through the AWS console by following this :ref:`setup guide <setup-ubuntu22-multi-framework-dlami>`. If you are looking to use the Neuron DLAMI in your cloud automation flows, Neuron also supports :ref:`SSM parameters <ssm-parameter-neuron-dlami>` to easily retrieve the latest DLAMI id.
@@ -109,15 +135,30 @@ Single Framework DLAMIs supported
     - Neuron Instances Supported
     - DLAMI Name

-  * - Tensorflow 2.10
+  * - PyTorch 2.5
     - Ubuntu 22.04
-    - Inf1, Inf2, Trn1, Trn1n, Trn2
-    - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
+    - Inf2, Trn1, Trn1n, Trn2
+    - Deep Learning AMI Neuron PyTorch 2.5 (Ubuntu 22.04)
+
+  * - PyTorch 2.5
+    - Amazon Linux 2023
+    - Inf2, Trn1, Trn1n, Trn2
+    - Deep Learning AMI Neuron PyTorch 2.5 (Amazon Linux 2023)

   * - Tensorflow 2.10
-    - Ubuntu 20.04
-    - Inf2, Trn1, Trn1n
-    - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04)
+    - Ubuntu 22.04
+    - Inf2, Trn1, Trn1n, Trn2
+    - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
+
+  * - Tensorflow 2.10 (Inf1)
+    - Ubuntu 22.04
+    - Inf1
+    - Deep Learning AMI Neuron TensorFlow 2.10 Inf1 (Ubuntu 22.04)
+
+  * - PyTorch 1.13 (Inf1)
+    - Ubuntu 22.04
+    - Inf1
+    - Deep Learning AMI Neuron PyTorch 1.13 Inf1 (Ubuntu 22.04)
-  * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
-    - tensorflow-neuronx
-    - /opt/aws_neuronx_venv_tensorflow_2_10
+  * - Deep Learning AMI Neuron PyTorch 2.5 (Ubuntu 22.04, Amazon Linux 2023)
+    - PyTorch 2.5 Torch NeuronX, NxD Core
+    - /opt/aws_neuronx_venv_pytorch_2_5

-  * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 20.04)
-    - tensorflow-neuronx
-    - /opt/aws_neuron_venv_tensorflow_2_10
+  * - Deep Learning AMI Neuron PyTorch 2.5 (Ubuntu 22.04, Amazon Linux 2023)
+    - PyTorch 2.5 NxD Training, Torch NeuronX
+    - /opt/aws_neuronx_venv_pytorch_2_5_nxd_training
+
+  * - Deep Learning AMI Neuron PyTorch 2.5 (Ubuntu 22.04, Amazon Linux 2023)
+    - PyTorch 2.5 NxD Inference, Torch NeuronX
+    - /opt/aws_neuronx_venv_pytorch_2_5_nxd_inference
+
+  * - Deep Learning AMI Neuron PyTorch 2.5 (Ubuntu 22.04, Amazon Linux 2023)
+    - Transformers NeuronX PyTorch 2.5
+    - /opt/aws_neuronx_venv_pytorch_2_5_transformers
+
+  * - Deep Learning AMI Neuron PyTorch 1.13 (Ubuntu 22.04)
+    - Pytorch Neuron (Inf1)
+    - /opt/aws_neuron_venv_pytorch_1_13_inf1
+
+  * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
+    - Tensorflow Neuronx
+    - /opt/aws_neuronx_venv_tensorflow_2_10

   * - Deep Learning AMI Neuron TensorFlow 2.10 (Ubuntu 22.04)
-    - tensorflow-neuron (Inf1)
+    - Tensorflow Neuron (Inf1)
     - /opt/aws_neuron_venv_tensorflow_2_10_inf1
You can easily get started with the single framework DLAMI through the AWS console by following one of the corresponding setup guides. If you are looking to use the Neuron DLAMI in your cloud automation flows, Neuron also supports :ref:`SSM parameters <ssm-parameter-neuron-dlami>` to easily retrieve the latest DLAMI id.
Neuron Base DLAMI
@@ -166,14 +224,14 @@ Base DLAMIs supported
     - Neuron Instances Supported
     - DLAMI Name

+  * - Amazon Linux 2023
+    - Inf1, Inf2, Trn1n, Trn1, Trn2
+    - Deep Learning Base Neuron AMI (Amazon Linux 2023)
Environment variables allow modifications to JAX NeuronX behavior without requiring code changes to the user script. It is recommended to set them in code or just before invoking the Python process, such as ``NEURON_RT_VISIBLE_CORES=8 python3 <script>``, to avoid inadvertently changing behavior for other scripts. Environment variables specific to JAX NeuronX are:

``NEURON_CC_FLAGS``

- Compiler options. Full compiler options are described in :ref:`mixed-precision-casting-options`.

``XLA_FLAGS``

- When set to ``"--xla_dump_hlo_snapshots --xla_dump_to=<dir>"``, this environment variable enables dumping snapshots in the ``<dir>`` directory. See the :ref:`torch-neuronx-snapshotting` section for more information. The snapshotting interfaces for JAX and PyTorch are identical.
- When set to ``"--xla_dump_hlo_as_text --xla_dump_hlo_as_proto --xla_dump_to=<dir> --xla_dump_hlo_pass_re='.*'"``, this environment variable enables dumping HLOs in proto and text formats after each XLA pass. The dumped ``*.hlo.pb`` files are in HloProto format.
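As a quick sketch of the snapshot-dump usage above, the flag string can be exported from Python; it should be set before JAX is imported, since XLA parses ``XLA_FLAGS`` at initialization. The dump directory ``/tmp/hlo_dumps`` is an arbitrary choice for illustration:

```python
import os

# Set before importing jax: XLA reads XLA_FLAGS when it initializes.
# "/tmp/hlo_dumps" is an arbitrary illustrative directory.
dump_dir = "/tmp/hlo_dumps"
os.environ["XLA_FLAGS"] = f"--xla_dump_hlo_snapshots --xla_dump_to={dump_dir}"

print(os.environ["XLA_FLAGS"])
```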
21
+
22
+
``NEURON_FORCE_PJRT_PLUGIN_REGISTRATION``
23
+
24
+
- When ``NEURON_FORCE_PJRT_PLUGIN_REGISTRATION=1``, the Neuron PJRT plugin will be registered in JAX regardless of the instance type.
25
+
26
+
``NEURON_RUN_TRIVIAL_COMPUTATION_ON_CPU``
27
+
28
+
- When ``NEURON_RUN_TRIVIAL_COMPUTATION_ON_CPU=1``, the Neuron PJRT plugin will compile and execute "trivial" computations on CPU instead of Neuron cores. A "trivial" computation is defined as an HLO program that does not contain any collective-compute instructions. The HLO program will be compiled by the XLA CPU compiler and outputs of the computation will be allocated on Neuron cores. The following HLO instructions are considered as collective-compute instructions.
29
+
30
+
- ``all-gather``
31
+
- ``all-gather-done``
32
+
- ``all-gather-start``
33
+
- ``all-reduce-done``
34
+
- ``all-reduce-start``
35
+
- ``all-to-all``
36
+
- ``collective-permute``
37
+
- ``partition-id``
38
+
- ``replica-id``
39
+
- ``recv``
40
+
- ``recv-done``
41
+
- ``reduce-scatter``
42
+
- ``send``
43
+
- ``send-done``
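The "trivial computation" rule above can be sketched as a small checker over textual HLO. This is not the plugin's actual implementation, only an illustration of the definition, assuming opcodes appear immediately before their argument list in the HLO text form:

```python
import re

# Collective-compute opcodes from the list above; an HLO module containing
# any of them is not "trivial" and is not offloaded to CPU.
COLLECTIVE_OPS = {
    "all-gather", "all-gather-done", "all-gather-start",
    "all-reduce-done", "all-reduce-start", "all-to-all",
    "collective-permute", "partition-id", "replica-id",
    "recv", "recv-done", "reduce-scatter", "send", "send-done",
}

def is_trivial(hlo_text: str) -> bool:
    """Heuristic: treat each token directly preceding '(' as an opcode."""
    opcodes = re.findall(r"([\w-]+)\(", hlo_text)
    return not any(op in COLLECTIVE_OPS for op in opcodes)
```

A real HLO parser would be more robust; the sketch only demonstrates how the opcode list defines triviality.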
``NEURON_PJRT_PROCESSES_NUM_DEVICES``

- Should be set to a comma-separated list stating the number of NeuronCores used by each worker process. It is used to construct a global device array whose size equals the sum of the list, which is reported to the XLA PJRT runtime when requested. Must be set for multi-process executions. It can be used in conjunction with ``NEURON_RT_VISIBLE_CORES`` to expose a limited number of NeuronCores to each worker process. If ``NEURON_RT_VISIBLE_CORES`` is not set, it should be set to the number of NeuronCores available on the host. Each entry in ``NEURON_PJRT_PROCESSES_NUM_DEVICES`` must be less than or equal to the number of cores in ``NEURON_RT_VISIBLE_CORES``.
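The relationship described above, where the global device count is the sum of the per-process list, can be sketched as follows (the helper name is mine for illustration, not part of the Neuron API):

```python
def global_device_count(processes_num_devices: str) -> int:
    """Sum the per-process NeuronCore counts, e.g. "2,2,2,2" -> 8 devices."""
    return sum(int(n) for n in processes_num_devices.split(","))

# Four worker processes, each exposing 2 NeuronCores:
count = global_device_count("2,2,2,2")
```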
``NEURON_PJRT_PROCESS_INDEX``

- An integer stating the index (or rank) of the current worker process. This is required for multi-process environments where all workers need information on all participating processes. Must be set for multi-process executions. The value should be between ``0`` and the number of worker processes minus 1.
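A sketch of the bounds implied above, assuming the worker-process count is the length of the ``NEURON_PJRT_PROCESSES_NUM_DEVICES`` list (the helper is hypothetical, not a Neuron API):

```python
def check_process_index(process_index: int, processes_num_devices: str) -> bool:
    """True if the rank is valid given the per-process device list."""
    num_processes = len(processes_num_devices.split(","))
    return 0 <= process_index < num_processes
```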
- Sets the seed for the random number generator used in stochastic rounding (see previous section). If this environment variable is not set, the seed is set to 0 by default. Please set ``NEURON_RT_STOCHASTIC_ROUNDING_SEED`` to a fixed value to ensure reproducibility between runs.
``NEURON_RT_VISIBLE_CORES`` **[Neuron Runtime]**
- Integer range of specific NeuronCores needed by the process (for example, 0-3 specifies NeuronCores 0, 1, 2, and 3). Use this environment variable when launching processes to limit the launched process to specific consecutive NeuronCores.
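The range syntax above ("0-3" meaning NeuronCores 0 through 3) can be expanded with a small helper (mine, for illustration; the runtime does its own parsing):

```python
def visible_cores(spec: str) -> list[int]:
    """Expand a NEURON_RT_VISIBLE_CORES-style range: "0-3" -> [0, 1, 2, 3]."""
    if "-" in spec:
        lo, hi = spec.split("-")
        return list(range(int(lo), int(hi) + 1))
    return [int(spec)]  # a single core index such as "2"
```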
Additional Neuron runtime environment variables are described in :ref:`nrt-configuration`.