
Avoid host allocations for by-offload arguments to GPU kernels #24970

Merged

Conversation

@e-kayrakli e-kayrakli commented May 2, 2024

This PR addresses a performance issue first noted by @stonea in the context of Coral. Today, I had a chat with @Guillaume-Helbecque, who was also partly affected by it.

This issue arises when:

  1. multiple GPUs in a node are used, and
  2. LICM (loop-invariant code motion) is not able to peel off the array metadata before we generate the GPU kernel

When (2) holds, we pass arrays by offload -- we copy the record onto GPU memory and pass that record as an argument to the kernel. This involves allocating memory for the record in device-accessible memory. Most likely because of an oversight (on my part), that allocation was done in page-locked host memory instead. (Probably because I was thinking that object instances should be host-accessible so that they can be initialized, but we're talking about an already-initialized record that we'll bit-copy anyway.)

Page-locked allocations are globally synchronizing: they block all other GPUs, or are blocked until all other GPUs are free. That is why this is especially problematic with (1).

This PR adjusts the memory allocation to be on device memory proper.
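The gist of the change can be sketched against the CUDA runtime API. This is an illustrative sketch only, not the actual Chapel runtime code: the wrapper function `launch_with_offloaded_record`, the `ArrayRecord` type, and the kernel are hypothetical stand-ins (the real runtime path goes through `chpl_gpu_mem_alloc`).

```cuda
#include <cuda_runtime.h>

// Hypothetical stand-in for an array's metadata record.
typedef struct { void *data; long size; } ArrayRecord;

// Hypothetical kernel that receives the record by offload.
__global__ void kernel(ArrayRecord *rec) { /* uses rec->data, rec->size */ }

void launch_with_offloaded_record(const ArrayRecord *hostRec) {
  ArrayRecord *devRec;

  // Before this PR (sketch): the record went into page-locked host
  // memory, which is device-accessible but whose allocation
  // synchronizes across all GPUs on the node:
  //   cudaMallocHost((void **)&devRec, sizeof(ArrayRecord));

  // After (sketch): allocate on device memory proper and bit-copy the
  // already-initialized record over. No global synchronization needed.
  cudaMalloc((void **)&devRec, sizeof(ArrayRecord));
  cudaMemcpy(devRec, hostRec, sizeof(ArrayRecord), cudaMemcpyHostToDevice);

  kernel<<<1, 1>>>(devRec);
  cudaFree(devRec);
}
```

Since the record is fully initialized on the host before the copy, host accessibility of the destination buffer is never needed, which is what makes the plain device allocation sufficient here.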

Resolves #24936

Test:

  • nvidia
  • amd
  • some EX testing, since I have seen this be problematic there in the past

Signed-off-by: Engin Kayraklioglu <e-kayrakli@users.noreply.github.com>
@e-kayrakli e-kayrakli force-pushed the gpu-offload-arg-no-host-alloc branch from 236242b to 8e4afeb on August 29, 2024 21:24
@stonea stonea left a comment


Assuming tests pass and work on EX then it LGTM.

So chpl_gpu_mem_alloc was allocating things on the host (as page-locked memory)? I suppose the idea is that this is the GPU (runtime) module's version of 'mem_alloc', which may or may not actually be on the GPU depending on context? Anyway, not something needed for this PR, but I wonder if there's a better name we can come up with for this function to avoid the confusion going forward.

@e-kayrakli

Some example directories, coral and jacobi, are clean on AMD GPUs on an EX. I am merging this.

@e-kayrakli e-kayrakli merged commit 9d058ee into chapel-lang:main Aug 29, 2024
7 checks passed
Successfully merging this pull request may close these issues.

Variable performance with multiple GPUs per node (probably because of unnecessary synchronization)