Releases: rapidsai/gha-tools
v0.0.53
Update `RAPIDS_DATE_STRING` (#53) This PR updates how `RAPIDS_DATE_STRING` is computed: its value is now derived from the workflow run date. This helps ensure that RAPIDS packages are published on a nightly basis even if no changes to the source code have been made.
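The change above can be sketched roughly as follows. This is not the actual gha-tools implementation; `RUN_DATE` and `GITHUB_RUN_STARTED_AT` are illustrative names (substitute however your workflow exposes its run date), and `date -d` assumes GNU date:

```shell
# Hypothetical sketch: derive the date string from the workflow run date
# rather than the last commit date, so a nightly run always gets a fresh
# value even when the source tree is unchanged.
# GITHUB_RUN_STARTED_AT is an assumed variable, with a fallback to "now".
RUN_DATE="${GITHUB_RUN_STARTED_AT:-$(date -u +%Y-%m-%d)}"

# Format as YYMMDD (GNU date syntax).
RAPIDS_DATE_STRING=$(date -u -d "${RUN_DATE}" +%y%m%d)
echo "${RAPIDS_DATE_STRING}"
```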
v0.0.52
Use more robust method to launch twine (#52) Using the twine executable directly relies on the Python bin directory being added to the `PATH`, which is not reliably the case across environments. I've created a new copy of the script since I'm iterating very actively right now in the new wheels workflow. We can remove the old script once I've verified that everything is working as expected.
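A common way to avoid the `PATH` dependency is to invoke twine as a Python module. The sketch below illustrates that idea; it is not the actual rapids-twine script, and `twine_cmd` is a hypothetical helper name:

```shell
# Hedged sketch: launch twine through the interpreter (`python -m twine`)
# so the launch works even when Python's bin directory is not on PATH.
twine_cmd() {
  # "$@" are ordinary twine arguments, e.g.: upload --skip-existing dist/*
  python3 -m twine "$@"
}

# Usage (requires twine to be installed in the active environment):
# twine_cmd upload --skip-existing ./dist/*
```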
v0.0.51
Add scripts for assembling docker multiarch manifests from a local re…
v0.0.50
Remove rapids-get-rapids-version-from-git (#50) This tool is no longer used. Signed-off-by: Jordan Jacobelli <jjacobelli@nvidia.com>
v0.0.49
Remove conditional on `RAPIDS_DATE_STRING` (#49) We need to remove this conditional since all builds in RAPIDS (branch & release) require `RAPIDS_DATE_STRING` now. See: https://github.com/rapidsai/rmm/actions/runs/4678847102/jobs/8288079290#step:7:432
v0.0.48
Clean cache after `conda` segfault (#48) Based on the logs in the run below, it seems that a retry alone is not enough to resolve the segfault issues that have been appearing.

- https://github.com/rapidsai/cudf/actions/runs/4298659489/jobs/7493440631#step:6:403

We should also try cleaning the cache based on the `invalid tarball` messages just under the log line linked above.
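The cache-cleaning step might look roughly like the sketch below. `conda clean --all --yes` is a real conda subcommand (it removes index caches, lock files, and unused cached packages); the surrounding helper and its name are illustrative, not the actual gha-tools logic:

```shell
# Hedged sketch: after a failed attempt, clear the conda package cache so
# corrupt downloads (the "invalid tarball" messages) are re-fetched on the
# next retry. The guard makes the helper a no-op when conda is absent.
clean_conda_cache() {
  if command -v conda >/dev/null 2>&1; then
    conda clean --all --yes
  else
    echo "conda not found; skipping cache clean" >&2
  fi
}
```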
v0.0.47
fix typo
v0.0.46
Add additional output to `rapids-upload-artifacts-dir` (#47) This PR adds some additional helpful output to the `rapids-upload-artifacts-dir` command. Specifically it:

- Indicates whether additional artifacts were found in the `RAPIDS_ARTIFACTS_DIR` directory
- Adds a clickable URL to the page which contains all artifacts for a given workflow
v0.0.45
Retry conda commands if a segfault occurs. (#46) This PR makes `rapids-conda-retry` retry if the conda command segfaults. In discussion with @AyodeAwe and @stadlmax, we believe that the segfault is a temporary failure related to concurrent resource utilization (or perhaps a network hiccup?) that can be fixed by sleeping and retrying. Example:

```
/usr/local/bin/rapids-conda-retry: line 68: 155 Segmentation fault (core dumped) ${condaCmd} ${args} 2>&1
156 Done | tee "${outfile}"
[rapids-conda-retry] conda returned exit code: 139
[rapids-conda-retry] Exiting, no retryable mamba errors detected: 'ChecksumMismatchError:', 'ChunkedEncodingError:', 'CondaHTTPError:', 'CondaMultiError:', 'ConnectionError:', 'EOFError:', 'JSONDecodeError:', 'Multi-download failed', 'Timeout was reached'
[rapids-conda-retry] Error: Process completed with exit code 139.
```

https://github.com/rapidsai/cugraph-ops/actions/runs/4283919882/jobs/7460790452#step:6:387
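The retry-on-segfault idea can be sketched as below. This is a minimal illustration, not the actual `rapids-conda-retry` script; `retry_on_segfault` and its attempt/sleep parameters are illustrative:

```shell
# Hedged sketch: rerun a command when it exits with 139 (128 + SIGSEGV),
# sleeping between attempts; any other exit code is returned immediately.
retry_on_segfault() {
  local max_attempts=3 attempt=1 exit_code=0
  while true; do
    "$@"
    exit_code=$?
    if [ "${exit_code}" -ne 139 ] || [ "${attempt}" -ge "${max_attempts}" ]; then
      return "${exit_code}"
    fi
    echo "segfault detected (exit 139), sleeping before retry ${attempt}..." >&2
    sleep 5
    attempt=$((attempt + 1))
  done
}

# Usage: retry_on_segfault conda install ...
```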
v0.0.44
Modify rapids-twine to discover and upload all wheels (#45) This changes rapids-twine to more closely resemble the anaconda upload; all wheels are automatically discovered by looking for the `wheel_python` string in the S3 path. Then, all the wheels are uploaded with twine. This way, the wheel publish workflows don't need to be aware of the different wheel variants (python versions, architectures, CUDA toolkits, etc.) that are being built.
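The discovery step can be sketched as follows. The `wheel_python` marker comes from the release note above, but the helper name and the example S3 keys are illustrative, not real artifact paths:

```shell
# Hedged sketch: filter candidate artifact paths for the "wheel_python"
# marker; each match would then be downloaded and handed to twine, so the
# publish workflow never needs to enumerate wheel variants itself.
discover_wheel_paths() {
  # stdin: one artifact path per line; stdout: wheel-variant paths only
  grep 'wheel_python'
}

# Demo with made-up keys: only the first line survives the filter.
printf '%s\n' \
  'pr/1234/abc123/wheel_python_rmm_cp310_x86_64.tar.gz' \
  'pr/1234/abc123/conda_cpp_rmm.tar.gz' \
  | discover_wheel_paths
```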