Commit

fix: multithreading issue with FINUFFT (temporary fix) & update installation instructions
remy-abergel committed Dec 3, 2024
1 parent 9db832e commit 4cb6340
Showing 6 changed files with 107 additions and 24 deletions.
13 changes: 12 additions & 1 deletion CHANGELOG.md
@@ -1,11 +1,19 @@
# Changelog

<!--next-version-placeholder-->

## v1.0.1 (under development)

### code

- temporary fix for a multithreading issue with FINUFFT (see
  [FINUFFT issue
  #596](https://github.com/flatironinstitute/finufft/issues/596)):
  introduced a decorator in [backends.py](src/pyepri/backends.py) that
  sets the default value of the `nthreads` keyword argument of the
  FINUFFT functions from the number of physical cores (or from the
  `OMP_NUM_THREADS` environment variable when it is set)

- increased the default maximal number of iterations (parameter
  `nitermax`) to `1E6` for optimization schemes and related functions

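The default-selection rule described in this entry can be sketched stand-alone as follows (the helper name `pick_nthreads` and its explicit parameters are hypothetical simplifications; the actual implementation in `backends.py` reads `os.environ` and `psutil.cpu_count(logical=False)` directly):

```python
def pick_nthreads(physical_cores, omp_num_threads=None):
    """Default `nthreads` value under the fix: `OMP_NUM_THREADS` when
    set, otherwise the physical core count minus one (never below 1)."""
    if omp_num_threads is not None:
        return int(omp_num_threads)
    return max(1, physical_cores - 1)

print(pick_nthreads(4, omp_num_threads="8"))  # → 8 (environment variable wins)
print(pick_nthreads(4))                       # → 3
print(pick_nthreads(1))                       # → 1 (never below one thread)
```

Leaving one physical core free keeps the machine responsive while FINUFFT runs multithreaded transforms.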
@@ -20,6 +28,9 @@

### documentation

- updated installation instructions to allow `cupy` installation using
  pip

- fixed bibtex reference [Bar21]

- fixed minor issues in demonstration examples
34 changes: 27 additions & 7 deletions README.md
@@ -12,7 +12,7 @@ code, don't hesitate to open a
[bug issue](https://github.com/remy-abergel/pyepri/issues).

## Installation
### Install latest stable version using pip (recommended)
### Install latest stable version using pip

Assuming you have a compatible system with `python3` and `pip`
installed, the following steps will create a virtual environment, and
@@ -35,11 +35,21 @@ pip install pyepri
# installed), you can enable `torch` and/or `cupy` backends by #
# executing the following commands #
################################################################

# enable `torch` backend support
pip install pyepri[torch] # for enabling `torch` backend support
pip install pyepri[cupy] # for enabling `cupy` backend support

# enable `cupy` backend support: you need to select the
# appropriate line depending on your system
#
# PLEASE BE CAREFUL NOT TO INSTALL MULTIPLE CUPY PACKAGES AT
# THE SAME TIME TO AVOID INTERNAL CONFLICTS
#
pip install pyepri[cupy-cuda12x] # For CUDA 12.x
pip install pyepri[cupy-cuda11x] # For CUDA 11.x
```

### Install latest version from sources
### Install latest stable version from sources

Assuming you have a compatible system with `python3`, `pip` and `git`
installed, the following steps will check out the current code release,
@@ -69,8 +79,18 @@ pip install -e .
# installed), you can enable `torch` and/or `cupy` backends by #
# executing the following commands #
################################################################
pip install -e ".[torch]" # for enabling `torch` backend support
pip install -e ".[cupy]" # for enabling `cupy` backend support

# enable `torch` backend support
pip install ".[torch]" # for enabling `torch` backend support

# enable `cupy` backend support: you need to select the
# appropriate line depending on your system
#
# PLEASE BE CAREFUL NOT TO INSTALL MULTIPLE CUPY PACKAGES AT
# THE SAME TIME TO AVOID INTERNAL CONFLICTS
#
pip install ".[cupy-cuda12x]" # For CUDA 12.x
pip install ".[cupy-cuda11x]" # For CUDA 11.x

################################################################
# If you want to compile the documentation by yourself, you #
@@ -91,8 +111,8 @@ installation of the package.

+ If the installation of the package or one of its optional dependencies
  fails, you may have better luck with
[conda](https://anaconda.org/anaconda/conda) (or
[miniconda](https://docs.anaconda.com/miniconda/miniconda-install/)).
[miniconda](https://docs.anaconda.com/miniconda/miniconda-install/)
(or [conda](https://anaconda.org/anaconda/conda)).

+ If you still encounter difficulties, feel free to open a [bug
issue](https://github.com/remy-abergel/pyepri/issues).
26 changes: 23 additions & 3 deletions docs/installation.md
@@ -24,8 +24,18 @@ pip install pyepri
# installed), you can enable `torch` and/or `cupy` backends by #
# executing the following commands #
################################################################

# enable `torch` backend support
pip install pyepri[torch] # for enabling `torch` backend support
pip install pyepri[cupy] # for enabling `cupy` backend support

# enable `cupy` backend support: you need to select the
# appropriate line depending on your system
#
# PLEASE BE CAREFUL NOT TO INSTALL MULTIPLE CUPY PACKAGES AT
# THE SAME TIME TO AVOID INTERNAL CONFLICTS
#
pip install pyepri[cupy-cuda12x] # For CUDA 12.x
pip install pyepri[cupy-cuda11x] # For CUDA 11.x
```

## Install latest version from sources
@@ -58,8 +68,18 @@ pip install -e .
# installed), you can enable `torch` and/or `cupy` backends by #
# executing the following commands #
################################################################
pip install -e ".[torch]" # for enabling `torch` backend support
pip install -e ".[cupy]" # for enabling `cupy` backend support

# enable `torch` backend support
pip install ".[torch]" # for enabling `torch` backend support

# enable `cupy` backend support: you need to select the
# appropriate line depending on your system
#
# PLEASE BE CAREFUL NOT TO INSTALL MULTIPLE CUPY PACKAGES AT
# THE SAME TIME TO AVOID INTERNAL CONFLICTS
#
pip install ".[cupy-cuda12x]" # For CUDA 12.x
pip install ".[cupy-cuda11x]" # For CUDA 11.x

################################################################
# If you want to compile the documentation by yourself, you #
22 changes: 18 additions & 4 deletions pyproject.toml
@@ -22,6 +22,7 @@ dependencies = [
"pyvista",
"finufft",
"jupyter",
"psutil",
]

[project.urls]
@@ -33,11 +34,24 @@ requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

[project.optional-dependencies]
cupy = [
"cupy",
"cudnn",
cupy-cuda12x = [
"cupy-cuda12x",
# "cutensor",
"cufinufft",
]
cupy-cuda11x = [
"cupy-cuda11x",
# "cutensor",
"cufinufft",
]
cupy-rocm-5-0 = [
"cupy-rocm-5-0",
# "cutensor",
"cufinufft",
]
cupy-rocm-4-3 = [
"cupy-rocm-4-3",
"cutensor",
"nccl",
"cufinufft",
]
torch = [
5 changes: 5 additions & 0 deletions src/pyepri/__init__.py
@@ -1,3 +1,8 @@
# temporary fix for FINUFFT issue #596: force OMP_NUM_THREADS = number of physical cores
#import os
#import psutil
#os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=False))
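For context, the commented-out approach relies on `OMP_NUM_THREADS` being exported before the first import of an OpenMP-backed library such as FINUFFT, since the thread pool is sized when that runtime initialises — hence its placement at the very top of `__init__.py`. A minimal stand-alone sketch (using `os.cpu_count()`, which counts logical cores, as a psutil-free stand-in for `psutil.cpu_count(logical=False)`):

```python
import os

# must run before `import finufft` for the setting to take effect;
# the commented-out snippet uses psutil to count physical cores instead
os.environ["OMP_NUM_THREADS"] = str(os.cpu_count() or 1)
print(os.environ["OMP_NUM_THREADS"])
```

The commit keeps this variant disabled in favour of the per-call decorator in `backends.py`.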

# read version from installed package
from importlib.metadata import version
__version__ = version("pyepri")
31 changes: 22 additions & 9 deletions src/pyepri/backends.py
@@ -38,8 +38,13 @@
"""
import functools
import numpy as np
import scipy as sp
import os
import psutil
import re
from psutil import cpu_count # temporary fix for FINUFFT issue #596:
                             # force OMP_NUM_THREADS = number of
                             # physical cores in FINUFFT calls for CPU
                             # backends

def create_numpy_backend():
"""Create a numpy backend.
@@ -514,6 +519,14 @@ def __init__(self, lib, device):
        # set minimal doc for the above defined lambda functions
        self.meshgrid.__doc__ = "return " + lib.__name__ + ".meshgrid(*xi, indexing=indexing)\n"
        self.exp.__doc__ = "return " + lib.__name__ + ".exp(arr, out=out)"

        # prepare decorator for temporary fix for FINUFFT issue #596
        def assign_finufft_nthreads(func):
            omp_num_threads = os.environ.get("OMP_NUM_THREADS")
            nthreads = int(omp_num_threads) if omp_num_threads is not None else max(1, cpu_count(logical=False) - 1)
            def assigned_func(*args, **kwargs):
                return func(*args, nthreads=nthreads, **kwargs) if 'nthreads' not in kwargs.keys() else func(*args, **kwargs)
            return assigned_func
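Applied to a stub in place of a FINUFFT function, the decorator's pass-through behaviour can be checked as below (a stand-alone replica: `fake_nufft` is hypothetical, and `os.cpu_count()` replaces `psutil.cpu_count(logical=False)` so the sketch runs without psutil):

```python
import os

def assign_finufft_nthreads(func):
    # compute the default once, at decoration time
    omp = os.environ.get("OMP_NUM_THREADS")
    cores = os.cpu_count() or 2  # logical cores; the real fix uses physical
    nthreads = int(omp) if omp is not None else max(1, cores - 1)
    def assigned_func(*args, **kwargs):
        kwargs.setdefault("nthreads", nthreads)  # only when caller omitted it
        return func(*args, **kwargs)
    return assigned_func

def fake_nufft(*args, **kwargs):  # stub standing in for finufft.nufft2d2
    return kwargs["nthreads"]

wrapped = assign_finufft_nthreads(fake_nufft)
print(wrapped(nthreads=2))  # → 2 (an explicit value passes through untouched)
print(wrapped())            # computed default (environment variable or core count)
```

Callers who set `nthreads` themselves are unaffected; only the default changes.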

        # set lib-dependent backends methods
        if lib.__name__ in ['numpy','cupy']:
@@ -688,10 +701,10 @@ def __init__(self, lib, device):
        # for GPU device)
        if lib.__name__ == "numpy":
            import finufft
            self.nufft2d = assign_finufft_nthreads(finufft.nufft2d2)
            self.nufft3d = assign_finufft_nthreads(finufft.nufft3d2)
            self.nufft2d_adjoint = assign_finufft_nthreads(finufft.nufft2d1)
            self.nufft3d_adjoint = assign_finufft_nthreads(finufft.nufft3d1)
        else:
            import cufinufft
            self.nufft2d = cufinufft.nufft2d2
@@ -817,10 +830,10 @@ def numpyfied_func(*args, **kwargs):
            return numpyfied_func

        # decorate finufft functions
        self.nufft2d = numpyfy(assign_finufft_nthreads(finufft.nufft2d2))
        self.nufft3d = numpyfy(assign_finufft_nthreads(finufft.nufft3d2))
        self.nufft2d_adjoint = numpyfy(assign_finufft_nthreads(finufft.nufft2d1))
        self.nufft3d_adjoint = numpyfy(assign_finufft_nthreads(finufft.nufft3d1))

        # add short documentation
        self.nufft2d.__doc__ = (
