Description
I have an NVIDIA GeForce 4090Ti and I tried to enable GPU computation.
I created a test folder as described in the README.
I edited the StartProcess.py file as follows:
```python
# Options for training and inference on a GPU
USE_GPUS_NO = 1  # List of GPUs used for training (if there is more than one available)
USE_GPU_FOR_WHOLE_IMAGE_INFERENCE = True  # If set to False, inference of whole images (as opposed to image tiles) will be done on a CPU (slower, but generally necessary due to GPU memory restrictions). Has no effect if RUN_INFERENCE_ON_WHOLE_IMAGE=False
ALLOW_MEMORY_GROWTH = True  # Whether to pre-allocate all memory at the beginning or allow for memory growth
```
When I launch the script, it runs very slowly and the GPU does not seem to be used.
Can you tell me whether these options are correct and, if not, which values I should use?
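In case it helps with the diagnosis, here is a small check I can run (a sketch on my side, assuming the project runs on TensorFlow, which the ALLOW_MEMORY_GROWTH option suggests) to see whether a GPU is visible to the framework at all:

```python
# Diagnostic sketch: reports whether TensorFlow can see any GPU.
# Assumption: the project uses TensorFlow as its backend.
def gpu_status():
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        return "no GPU visible to TensorFlow (check CUDA/cuDNN install)"
    return f"{len(gpus)} GPU(s) visible: " + ", ".join(g.name for g in gpus)

print(gpu_status())
```

If this reports no visible GPU, the problem would be the CUDA/cuDNN setup rather than the options above.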
Thank you