@@ -93,7 +93,7 @@ To use them you need to either launch an interactive job or submit a batch job.
 .. code-block::

    #SBATCH -p gpu
-   #SBATCH --gpus: l40s:<number of GPUs>
+   #SBATCH --gpus=l40s:<number of GPUs>

 - for H100 GPUs (up to 2 GPU cards):

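Once a GPU allocation has been granted, it can be worth confirming from Python that the card is actually visible before running real work. A minimal sketch, assuming numba is available in the environment (the module names above are site-specific):

```python
# Minimal sketch: check from inside a job whether a CUDA GPU is visible.
# Assumes numba is installed/loaded; the guard lets it run anywhere.
gpu_ok = False
try:
    from numba import cuda
    gpu_ok = cuda.is_available()   # True only if a CUDA device is visible
    if gpu_ok:
        cuda.detect()              # prints a summary of the visible devices
except ImportError:
    pass

print("CUDA GPU visible:", gpu_ok)
```

If this prints `False` inside a job, check that the job actually requested a GPU (e.g. with `--gpus`) and that the CUDA-enabled modules are loaded.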
@@ -278,36 +278,39 @@ As before, we need a batch script to run the code. There are no GPUs on the logi

 .. tab:: UPPMAX

-   Running a GPU Python code interactively.
+   Running a GPU Python code interactively on Pelle. Note that this is currently not fully functional.

    .. code-block:: console

-      $ interactive -A uppmax2025-2-393 -n 1 -M snowy --gres=gpu:1 -t 1:00:01 --gres=gpu:1 -t 1:00:01
-      You receive the high interactive priority.
-
-      Please, use no more than 8 GB of RAM.
-
-      salloc: Pending job allocation 9697978
-      salloc: job 9697978 queued and waiting for resources
-      salloc: job 9697978 has been allocated resources
-      salloc: Granted job allocation 9697978
+      [bbrydsoe@pelle2 ~]$ salloc -A uppmax2025-2-393 -t 00:30:00 -n 2 -p gpu --gpus=l40s:1
+      salloc: Pending job allocation 406444
+      salloc: job 406444 queued and waiting for resources
+      salloc: job 406444 has been allocated resources
+      salloc: Granted job allocation 406444
       salloc: Waiting for resource configuration
-      salloc: Nodes s195 are ready for job
-       _ _ ____ ____ __ __ _ __ __
-      | | | | _ \| _ \| \/ | / \ \ \/ / | System: s195
-      | | | | |_) | |_) | |\/| | / _ \ \ / | User: bbrydsoe
-      | |_| | __/| __/| | | |/ ___ \ / \ |
-       \___/|_| |_| |_| |_/_/ \_\/_/\_\ |
-      ###############################################################################
+      salloc: Nodes p202 are ready for job
+      [bbrydsoe@p202 ~]$ module load numba/0.60.0-foss-2024a
+      [bbrydsoe@p202 ~]$ python add-list.py
+

-      User Guides: https://docs.uppmax.uu.se/
+.. tab:: UPPMAX: batch

-      Write to support@uppmax.uu.se, if you have questions or comments.
+   Running a GPU Python code on Pelle.
+
+   .. code-block:: bash
+
+      #!/bin/bash
+      # Remember to change this to your own project ID after the course!
+      #SBATCH -A uppmax2025-2-393
+      # We are asking for 5 minutes
+      #SBATCH --time=00:05:00
+      # Asking for one L40s GPU
+      #SBATCH -p gpu
+      #SBATCH --gpus=l40s:1

-      [bbrydsoe@s195 python]$ ml uppmax python/3.11.8 python_ML_packages/3.11.8-gpu
-      [bbrydsoe@s195 python]$ python add-list.py
-      CPU function took 35.272032 seconds.
-      GPU function took 1.324215 seconds.
+      module load numba/0.60.0-foss-2024a
+
+      python add-list.py

 .. tab:: HPC2N

@@ -340,13 +343,15 @@ As before, we need a batch script to run the code. There are no GPUs on the logi
       #SBATCH -A hpc2n2025-151       # HPC2N ID - change to your own
       # We are asking for 5 minutes
       #SBATCH --time=00:05:00
+      #SBATCH -n 1
       # Asking for one L40s GPU
       #SBATCH --gpus=1
       #SBATCH -C l40s

       # Remove any loaded modules and load the ones we need
       module purge > /dev/null 2>&1
-      module load GCC/12.3.0 Python/3.11.3 OpenMPI/4.1.5 SciPy-bundle/2023.07 CUDA/12.1.1 numba/0.58.1 CUDA/12.1.1
+      module load GCC/12.3.0 Python/3.11.3 OpenMPI/4.1.5 SciPy-bundle/2023.07 CUDA/12.1.1 numba/0.58.1

       # Run your Python script
       python add-list.py
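The batch scripts above all run `add-list.py`, which is not shown in this section. A minimal sketch of a CPU-versus-GPU element-wise addition benchmark in the same spirit, assuming numba's `@vectorize` with `target="cuda"` (the array size and timing format are illustrative, not the course's actual script):

```python
# Hypothetical sketch of an "add-list.py"-style benchmark:
# time element-wise addition on the CPU, then on the GPU via numba.
import time
import numpy as np

N = 1_000_000

def add_cpu(a, b):
    # plain element-wise addition on the CPU (NumPy vectorized)
    return a + b

a = np.ones(N, dtype=np.float32)
b = np.full(N, 2.0, dtype=np.float32)

t0 = time.perf_counter()
c = add_cpu(a, b)
print(f"CPU function took {time.perf_counter() - t0:.6f} seconds.")

# GPU version; guarded so the script still runs on nodes without
# numba or without a visible CUDA device.
try:
    from numba import vectorize

    @vectorize(["float32(float32, float32)"], target="cuda")
    def add_gpu(x, y):
        return x + y

    t0 = time.perf_counter()
    c_gpu = add_gpu(a, b)
    print(f"GPU function took {time.perf_counter() - t0:.6f} seconds.")
except Exception as exc:
    print(f"GPU path skipped: {exc}")
```

Note that the first GPU call includes JIT compilation and data transfer, so a single timing like this overstates the steady-state GPU cost; the speedup reported in the course output comes from much larger arrays.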