Api unit tests #15048

Open · wants to merge 35 commits into master

Conversation

@benmartin-coforma (Collaborator) commented Feb 5, 2025

Description

This PR creates unit tests for the SEDS app-api service. There are three important things to note:

  1. My goal was to reach 100% coverage with zero changes to non-test files; this turned out to be impossible, because bugs exist. For each bug, I had to choose between leaving the affected lines untested (falling short of the 100% coverage goal) and fixing it (falling short of the zero-other-changes goal). I ended up fixing the 3 bugs I judged most important that were fixable with minimal changes, and leaving the rest as-is. Those fixes are in authorization.js, obtainUserByEmail.js, and obtainUserByUsername.js. In addition to code that was literally untestable, I also left some completely unused code uncovered, for example local-user.js, generateEnrollmentTotals.js, and response-lib.js. In the end, I was able to reach 90% coverage, although I did have to write tests for the currently-unused email notification code.

  2. This PR does not incorporate the new tests into the deployment pipeline; at the moment, the only way to run them is to invoke them yourself locally. It also does not incorporate the Code Climate diff coverage checks, so nothing will automatically stop you from merging a PR that deletes API tests, adds new API code without tests, or otherwise reduces coverage. We can and should add those workflow integrations, and I intend to do so in a future PR.

  3. The tests in this PR use the Vitest framework. It is notably not Jest, but there are very few differences once you get down to the level of writing test code. All the usual test methods like describe and expect work exactly the same way. Here's what is different:

    • Vitest was much easier for me to set up, in that I understand everything about its config file, which was never true with Jest (and I acknowledge that is at least partially my own fault). No additional plugins or Babel integration was required... for these API tests. I have not yet tried Vitest in the frontend, so there are still unknowns there, but I expect them to be similarly easy to deal with. (A minimal config sketch appears after this list.)
    • Everywhere you might refer to the jest global, you use vi instead, as in vi.fn() and vi.mock().
    • The test methods (describe(), expect(), and so on) must be imported in each file; they are not globally available.
      • (Unless you make them so through a Vitest config option, which I do not do here)
    • There was one actual syntactic difference, for mocking only parts of a file or module: jest.requireActual() has a cousin rather than a twin. See generateQuarterForms.test.js for an example of Vitest's importOriginal(), or the sketch after this list.
    • With Jest, yarn test runs all tests once and then exits by default; to have tests automatically re-run on file change, you invoke it with yarn test --watch. With Vitest, this is inverted: watching is the default, and in order to run all tests once and then exit, you must invoke yarn test --run.
      • I assume there are lots of other differences in the command line options, but I have to re-learn those every time I use them anyway, so I couldn't tell you what those differences are.
    • The VSCode plugin Jest Runner (which inserts a "Run" button in the editor directly above each individual test) doesn't work with Vitest, obviously. I ended up using the --testNamePattern CLI option when I wanted to run a single test. I have not yet tried to debug tests, nor have I looked for a similar VSCode plugin.
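
For reference, a minimal Vitest config for a Node API package looks roughly like the sketch below. This is an illustration rather than the exact file in this PR; in particular, the v8 coverage provider and the reporter list are assumptions on my part.

```js
// vitest.config.js — a minimal sketch, not the exact config in this PR.
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    environment: "node", // plain Node environment for API tests
    coverage: {
      provider: "v8", // assumption; Vitest also supports "istanbul"
      reporter: ["text", "lcov"],
    },
  },
});
```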
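
And here is a minimal sketch of a Vitest test file exercising the vi and importOriginal() points above. The module paths and names (notification-lib.js, sendEmail, notifyUser) are hypothetical stand-ins, not files from this repo; generateQuarterForms.test.js is the real example.

```js
// Hypothetical sketch; module paths and names are illustrative only.
import { describe, it, expect, vi } from "vitest";

// Partial mock: importOriginal() plays the role of jest.requireActual(),
// so everything except sendEmail keeps its real implementation.
vi.mock("./notification-lib.js", async (importOriginal) => {
  const actual = await importOriginal();
  return {
    ...actual,
    sendEmail: vi.fn().mockResolvedValue(true),
  };
});

import { sendEmail } from "./notification-lib.js";
import { notifyUser } from "./notifyUser.js";

describe("notifyUser", () => {
  it("sends exactly one email", async () => {
    await notifyUser("someone@example.com");
    expect(sendEmail).toHaveBeenCalledTimes(1);
  });
});
```

You could run just this one test with yarn test --run --testNamePattern "sends exactly one email".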

Related ticket(s)

N/A


How to test

  1. Check out the branch
  2. Run yarn test --run --coverage
  3. Additionally: In order to test the one significant code change (in authorization.js), try logging in to the app. If you're able to do so, then the new jwt-decode invocation is working (see the sketch below).
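
A rough illustration of that kind of jwt-decode invocation, to show what logging in exercises; the function name and claim field here are hypothetical, not copied from authorization.js.

```js
// Hypothetical sketch; not the actual code in authorization.js.
// jwt-decode v4 exposes a named jwtDecode export (older versions used a
// default export); it decodes the token payload without verifying it.
import { jwtDecode } from "jwt-decode";

const getEmailFromAuthHeader = (authHeader) => {
  const token = authHeader.replace(/^Bearer /, ""); // drop the scheme prefix, if present
  const payload = jwtDecode(token);
  return payload.email; // hypothetical claim name
};
```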

Notes


Pre-review checklist

  • I have added thorough tests, if necessary
  • I have updated relevant documentation, if necessary
  • I have performed a self-review of my code
  • I have manually tested this PR in the deployed cloud environment

Pre-merge checklist

Review

  • [ ] Design: This work has been reviewed and approved by design, if necessary
  • [ ] Product: This work has been reviewed and approved by product owner, if necessary

Security

If either of the following is true, notify the team's ISSO (Information System Security Officer).

  • [ ] These changes are significant enough to require an update to the SIA.
  • [ ] These changes are significant enough to require a penetration test.


codeclimate bot commented Feb 5, 2025

Code Climate has analyzed commit de8fc4b and detected 0 issues on this pull request.

The test coverage on the diff in this pull request is 100.0% (90% is the threshold).

This pull request will bring the total coverage in the repository to 82.5% (0.0% change).

View more on Code Climate.

@benmartin-coforma (Collaborator, Author) commented:

A fun way to look at all of the tests here is with Vitest's shiny GUI. If you run

yarn test --ui

then (perhaps after prompting you to install a package), Vitest will spin up a local server that hosts an interactive webpage that looks like this:
[screenshot: the Vitest UI report page]
