[Refactor]: Refactor integration tests and add use_cfg flag to `run_diags()` (#747)

* Refactor integration tests (_tests/integration_)
  * Delete `test_dataset.py` because it is an outdated, incomplete test file for the legacy CDAT-based `Dataset` class
  * Delete `test_all_sets.py` and `all_sets_modified.cfg` because they test expected image counts, which `test_all_sets_image_diffs.py` already covers
  * Replace `subprocess` calls with direct calls to the Python API for real-time test results and easier debugging
  * Move `complete_run.py` to `tests/integration`
  * Move `test_run.py` to `tests/e3sm_diags` since it is more of a unit test file
* Refactor `Run` class (_run.py_)
  * Closes #735 
  * Add `use_cfg` boolean argument to `run_diags()` and `get_run_parameters()` (see the usage sketch after this list)
  * Add `self.cfg_path` attribute, which is set if `.cfg` file(s) are used for diagnostic runs
  * Add `is_cfg_file_arg_set()` property to check the parser for the `-d/--diags` argument
  * Rename `get_final_parameters()` to `get_run_parameters()` and refactor it into smaller methods
* Update CI/CD build workflow (_build_workflow.yml_)
  * Split the testing step into three steps: 1) run unit tests, 2) download integration test data, 3) run integration tests
  * This makes it easier to track runtime and results for each step
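
A minimal sketch of how the new flag might be used, assuming the `Run` class lives in `e3sm_diags.run` and that `run_diags()` accepts the parameter list plus the new `use_cfg` keyword (the exact signature is not shown in this diff):

```python
from e3sm_diags.parameter.core_parameter import CoreParameter
from e3sm_diags.run import Run

runner = Run()

param = CoreParameter()
param.sets = ["lat_lon"]  # limit the run to a single diagnostic set

# With use_cfg=False, no `-d/--diags` .cfg file is consulted; the run
# parameters come entirely from the Python objects passed in.
runner.run_diags([param], use_cfg=False)

# When a .cfg file drives the run instead, `runner.cfg_path` is set and the
# `is_cfg_file_arg_set` property reports whether `-d/--diags` was given.
```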
tomvothecoder authored and chengzhuzhang committed Feb 24, 2024
1 parent 059c71c commit ecc5f2a
Showing 18 changed files with 536 additions and 813 deletions.
12 changes: 10 additions & 2 deletions .github/workflows/build_workflow.yml
@@ -99,10 +99,18 @@ jobs:
mamba info
- if: ${{ steps.skip_check.outputs.should_skip != 'true' }}
name: Run Tests
name: Run Unit Tests
run: pytest tests/e3sm_diags

- if: ${{ steps.skip_check.outputs.should_skip != 'true' }}
name: Download Integration Test Data
run: python -m tests.integration.download_data

- if: ${{ steps.skip_check.outputs.should_skip != 'true' }}
name: Run Integration Tests
env:
CHECK_IMAGES: False
run: bash tests/test.sh
run: pytest tests/integration

publish-docs:
if: ${{ github.event_name == 'push' }}
8 changes: 8 additions & 0 deletions Makefile
@@ -55,6 +55,14 @@ clean-test: ## remove test and coverage artifacts
rm -fr .pytest_cache
rm -rf .mypy_cache

clean-test-int-res: ## remove integration test results and image check failures
rm -rf tests/integration/all_sets_results_test
rm -rf tests/integration/image_check_failures

clean-test-int-data: # remove integration test data and images (expected) -- useful when they are updated
rm -rf tests/integration/integration_test_data
rm -rf tests/integration/integration_test_images

# Quality Assurance
# ----------------------
pre-commit: # run pre-commit quality assurance checks
10 changes: 5 additions & 5 deletions docs/source/dev_guide/testing.rst
@@ -56,7 +56,7 @@ The unit and integration tests are run automatically as part of this.
Complete run test
-----------------

``tests/complete_run.py`` checks the images generated by all diagnostics to
``tests/integration/complete_run.py`` checks the images generated by all diagnostics to
see if any differ from expected.
This test is not run as part of the unit test suite, because it relies on a large
quantity of data found on LCRC (Anvil/Chrysalis).
@@ -78,7 +78,7 @@ Now that you have your changes on LCRC, enter your development environment. Then
.. code::
pip install . # Install your changes
python -m unittest tests/complete_run.py
pytest tests/integration/complete_run.py
If this test passes, you're done. If it fails however, that means
your code has changed what the output looks like.
@@ -108,11 +108,11 @@ then you need to update the expected images.
find . -type f -name '*.png' > ../image_list_all_sets.txt
cd ..
Run ``python -m unittest tests/complete_run.py`` again. Now, the test should pass.
Run ``pytest tests/integration/complete_run.py`` again. Now, the test should pass.

After merging your pull request, edit ``README.md``.
The version should be the version of E3SM Diags you ran
``python -m unittest tests/complete_run.py`` with,
the date should be the date you ran ``python -m unittest tests/complete_run.py`` on,
``pytest tests/integration/complete_run.py`` with,
the date should be the date you ran ``pytest tests/integration/complete_run.py`` on,
and the hash should be for the top commit shown by ``git log`` or on
https://github.com/E3SM-Project/e3sm_diags/commits/main.
7 changes: 5 additions & 2 deletions e3sm_diags/e3sm_diags_driver.py
@@ -347,15 +347,16 @@ def _collapse_results(parameters: List[List[CoreParameter]]) -> List[CoreParameter]:
return output_parameters


def main(parameters=[]):
def main(parameters=[]) -> List[CoreParameter]:
# Get the diagnostic run parameters
# ---------------------------------
parser = CoreParser()

# If no parameters are passed, use the parser args as defaults. Otherwise,
# create the dictionary of expected parameters.
if not parameters:
if len(parameters) == 0:
parameters = get_parameters(parser)

expected_parameters = create_parameter_dict(parameters)

if not os.path.exists(parameters[0].results_dir):
@@ -408,6 +409,8 @@ def main(parameters=[]):
)
raise Exception(message)

return parameters_results


if __name__ == "__main__":
main()
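
Because `main()` now returns the completed parameter objects, an integration test can invoke the driver in-process instead of shelling out through `subprocess`. A minimal sketch under that assumption (the attributes set on the parameter are placeholders, not the full required configuration):

```python
from e3sm_diags.e3sm_diags_driver import main
from e3sm_diags.parameter.core_parameter import CoreParameter

param = CoreParameter()
param.sets = ["lat_lon"]
# ... set test_data_path, reference_data_path, results_dir, etc. as needed.

# main() returns the List[CoreParameter] it ran, so a test can assert on the
# results directly rather than parsing subprocess output.
results = main([param])
assert len(results) > 0
```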
50 changes: 28 additions & 22 deletions e3sm_diags/parameter/core_parameter.py
@@ -7,6 +7,32 @@

logger = custom_logger(__name__)

# FIXME: There is probably a better way of defining default sets because most of
# this is repeated in SETS_TO_PARAMETERS and SETS_TO_PARSERS.
# Also integration tests will break if "mp_partition" is included because
# we did not take it into account yet.
DEFAULT_SETS = [
"zonal_mean_xy",
"zonal_mean_2d",
"zonal_mean_2d_stratosphere",
"meridional_mean_2d",
"lat_lon",
"polar",
"area_mean_time_series",
"cosp_histogram",
"enso_diags",
"qbo",
"streamflow",
"diurnal_cycle",
"arm_diags",
"tc_analysis",
"annual_cycle_zonal_mean",
"lat_lon_land",
"lat_lon_river",
"aerosol_aeronet",
"aerosol_budget",
]


class CoreParameter:
def __init__(self):
@@ -58,28 +84,8 @@ def __init__(self):
# 'model_vs_obs' (by default), 'model_vs_model', or 'obs_vs_obs'.
self.run_type: str = "model_vs_obs"

# A list of the sets to be run. Default is all sets
self.sets: List[str] = [
"zonal_mean_xy",
"zonal_mean_2d",
"zonal_mean_2d_stratosphere",
"meridional_mean_2d",
"lat_lon",
"polar",
"area_mean_time_series",
"cosp_histogram",
"enso_diags",
"qbo",
"streamflow",
"diurnal_cycle",
"arm_diags",
"tc_analysis",
"annual_cycle_zonal_mean",
"lat_lon_land",
"lat_lon_river",
"aerosol_aeronet",
"aerosol_budget",
]
# A list of the sets to be run, by default all sets.
self.sets: List[str] = DEFAULT_SETS

# The current set that is being ran when looping over sets in
# `e3sm_diags_driver.run_diag()`.
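
A short sketch of what the `DEFAULT_SETS` refactor means for callers: `CoreParameter.sets` still defaults to the full list, and a run can be narrowed by overwriting the attribute (import paths taken from the file shown above):

```python
from e3sm_diags.parameter.core_parameter import DEFAULT_SETS, CoreParameter

param = CoreParameter()
assert param.sets == DEFAULT_SETS  # every diagnostic set is enabled by default

# Narrow a run to specific sets by rebinding the attribute.
param.sets = ["lat_lon", "zonal_mean_xy"]
```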
1 change: 1 addition & 0 deletions e3sm_diags/plot/cartopy/arm_diags_plot.py
@@ -174,6 +174,7 @@ def plot_convection_onset_statistics(
var_time_absolute = cwv.getTime().asComponentTime()
time_interval = int(var_time_absolute[1].hour - var_time_absolute[0].hour)

# FIXME: UnboundLocalError: local variable 'cwv_max' referenced before assignment
number_of_bins = int(np.ceil((cwv_max - cwv_min) / bin_width))
bin_center = np.arange(
(cwv_min + (bin_width / 2)),
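
For context on the new FIXME, a generic illustration of that failure mode (names and numbers are illustrative, not the actual plotting code): a variable assigned only inside a conditional branch raises `UnboundLocalError` when referenced afterwards, so the bounds need to be computed on every path before the bin count is derived.

```python
import numpy as np


def convection_bin_centers(cwv: np.ndarray, bin_width: float = 0.3) -> np.ndarray:
    """Hypothetical helper: compute histogram bin centers for a CWV series."""
    if cwv.size == 0:
        # Guard first so cwv_min/cwv_max are always assigned before use below.
        raise ValueError("cwv is empty; cannot derive histogram bins")

    cwv_min = float(np.min(cwv))
    cwv_max = float(np.max(cwv))

    number_of_bins = int(np.ceil((cwv_max - cwv_min) / bin_width))
    # Centers of `number_of_bins` bins spanning [cwv_min, cwv_max).
    return cwv_min + (np.arange(number_of_bins) + 0.5) * bin_width
```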