
Ignore pin_memory if CUDA is not available #331

Merged

Conversation

vbourgin (Contributor)


Summary: Ignore `pin_memory` in `get_pytorch_dataloader` if CUDA is not available. This replicates the behavior of the PyTorch dataloader (see `torch.utils.data.dataloader._BaseDataLoaderIter`) and avoids job failures when CUDA is not available.

Reviewed By: moto-meta

Differential Revision: D68357863
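For context, a minimal sketch of the guard described in the summary. The wrapper signature here is assumed for illustration; the merged diff (D68357863) is the authoritative change:

```python
import torch
from torch.utils.data import DataLoader, Dataset


def get_pytorch_dataloader(
    dataset: Dataset,
    pin_memory: bool = False,
    **dataloader_kwargs,
) -> DataLoader:
    """Build a DataLoader, ignoring pin_memory when CUDA is unavailable.

    Hypothetical sketch; only the function name comes from the PR summary.
    """
    if pin_memory and not torch.cuda.is_available():
        # Same fallback as torch.utils.data.dataloader._BaseDataLoaderIter:
        # pinning host memory only helps CPU-to-GPU transfers, so silently
        # disable it instead of failing the job on a CUDA-less machine.
        pin_memory = False
    return DataLoader(dataset, pin_memory=pin_memory, **dataloader_kwargs)
```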
facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) on Jan 23, 2025
facebook-github-bot (Contributor)

This pull request was exported from Phabricator. Differential Revision: D68357863

facebook-github-bot merged commit 686c00a into facebookresearch:main on Jan 23, 2025
17 of 32 checks passed
Labels: CLA Signed, fb-exported