
Conversation

@ZXYFrank

In `feature_extractor.py`, the input images are moved to `self.device` every time the extractor is called.

This matters because the images (the inputs) and the model parameters must be on the same device.

However, the current code cannot guarantee this with the default setting. The feature extractor is initialized with the default `device='cuda'`, which is ambiguous: `"cuda"` without a device index may resolve to different GPUs.

[Screenshot: "cuda" points to different devices during initialization and when the extractor is called]

I modified a line so that `self.device` is set with an explicit device index when none is provided.
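
A minimal sketch of the idea (not the actual patch): resolve a bare `"cuda"` device to an explicit index once, so that the model parameters and every later `.to(device)` call land on the same GPU.

```python
import torch

def resolve_device(device="cuda"):
    """Pin a bare "cuda" device to an explicit index (illustrative sketch only)."""
    dev = torch.device(device)
    if dev.type == "cuda" and dev.index is None:
        # torch.cuda.current_device() returns the index of the currently
        # selected GPU, i.e. what a bare "cuda" resolves to right now;
        # storing it keeps later .to(dev) calls on the same card.
        dev = torch.device("cuda", torch.cuda.current_device())
    return dev
```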

Add `**kwargs` optional arguments to `ImageDataManager`, which will ensure flexibility for custom and registered datasets.
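
As a hedged sketch of the motivation, following the documented custom-dataset workflow: the dataset name and the `split` argument below are hypothetical, and the `**kwargs` forwarding is what the open PR proposes rather than current behavior.

```python
import torchreid
from torchreid.data import ImageDataset

class MyDataset(ImageDataset):
    # Hypothetical dataset with an extra constructor option (`split`) that the
    # data manager would need to forward to the dataset class.
    dataset_dir = "my_dataset"

    def __init__(self, root="", split="clean", **kwargs):
        train, query, gallery = [], [], []  # populate from disk in real code
        super(MyDataset, self).__init__(train, query, gallery, **kwargs)

torchreid.data.register_image_dataset("my_dataset", MyDataset)

# With the proposed change, extra keyword arguments passed to ImageDataManager
# would reach MyDataset.__init__, e.g.
# datamanager = torchreid.data.ImageDataManager(sources="my_dataset", split="clean")
```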
@ZXYFrank (Author)

Besides, there is another modification involving `**kwargs`, which is still open in the PR list.

@KaiyangZhou (Owner) commented Jul 28, 2022

Sorry for the late reply.

> However, the current code cannot guarantee this with the default setting. The feature extractor is initialized with the default `device='cuda'`, which is ambiguous: `"cuda"` without a device index may resolve to different GPUs.

What if you set `device="cuda:0"`? (a specific GPU)
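
For reference, a hedged usage sketch along those lines, assuming the documented `FeatureExtractor` interface; the model name and paths are placeholders.

```python
from torchreid.utils import FeatureExtractor

# Placeholder model name and paths; the point is the explicit device index.
extractor = FeatureExtractor(
    model_name="osnet_x1_0",
    model_path="/path/to/model.pth.tar",
    device="cuda:0",  # an explicit index removes the ambiguity of a bare "cuda"
)
features = extractor(["/path/to/image.jpg"])
```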

> there's also another modification of `**kwargs`

What's the reason for this change?

