Commit a55ac33: big README update
GondekNP committed Dec 17, 2024 (1 parent: b8245d7)
Showing 2 changed files with 80 additions and 38 deletions.

.devcontainer/README.md (117 changes: 79 additions & 38 deletions)

## Local Development

### Base Dev Container

Within the `.devcontainer` directory, there are a few configurations used either by the local [DevPod](https://devpod.sh/) client or by the VS Code `Dev Containers` extension.

The former lets you deploy your dev container to cloud infrastructure without much hassle, a la GitPod. Both are based on the `devcontainer.json` standard. Either approach builds a Docker container containing the repo itself and the dependencies defined below. VS Code then SSHes into that container, treating it as a remote entity, which can feel a bit odd at first but is great for reproducing your environment across machines!
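For reference, a minimal `devcontainer.json` has roughly the following shape (the field values here are illustrative, not copied from this repo's configs):

```
{
  "name": "burn-backend (example)",
  "build": { "dockerfile": "Dockerfile" },
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python"]
    }
  },
  "postCreateCommand": ".devcontainer/burn_backend/init_runtime.sh"
}
```

The keys shown (`build`, `customizations.vscode.extensions`, `postCreateCommand`) are part of the `devcontainer.json` standard mentioned above.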

To run the repo using the `VSCode - Dev Containers` extension, first ensure you have it installed (along with Docker Desktop), and run:

```
Cmd + Shift + P > Dev Containers: Reopen in Container
```

This will take a while on the first pass, but later runs will use Docker's cached build layers to speed things up. Just avoid `Rebuild and Reopen in Container`, which does exactly what it says, unless you really want a from-scratch rebuild.

### Deploying to Cloud

If you want to deploy to the cloud, you can use OpenTofu to do so, but you first need to authenticate with both `AWS` and `GCP`:

```
aws configure sso
gcloud auth application-default login
```

_Note_: This SSO auth process must be performed periodically, as the authentication tokens generated are short-lived (deliberately, as the scope of this auth is broad for provisioning resources and could be abused by nefarious actors). So, if you run into a credentials-related issue running any `tofu` command, you may need to re-auth. Both commands will give you a URL to log in via SSO; you can accept all defaults.

#### Dev / Prod split

To ensure that we can safely develop in a live environment, without breaking existing functionality, we split dev and prod environments using tofu's `workspace`s.

To select the `prod` or `dev` environment:

```
tofu workspace select prod
tofu workspace select dev
```

By doing this, we avoid having to duplicate tofu source files, such that the deployments are more or less identical between environments.

For local development, you will typically want `dev`, until you are ready to push infrastructure changes to `prod`.
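Concretely, the workspace mechanism works because resource definitions can interpolate `terraform.workspace`, so one set of source files yields per-environment resources. A hypothetical sketch (the resource and bucket name are illustrative, mirroring the `-dev` suffix seen in the generated `.env`):

```
resource "aws_s3_bucket" "backend" {
  # resolves to "burn-severity-backend-dev" in the dev workspace,
  # "burn-severity-backend-prod" in prod
  bucket = "burn-severity-backend-${terraform.workspace}"
}
```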

### .env Generation

After authenticating with `aws` and `gcp`, you can run the bash script `.devcontainer/scripts/export_tofu_dotenv.sh` to get a valid `.env`:

```
ENV=LOCAL
DEPLOYMENT=DEV
DEBUG_SERVICE=BURN_BACKEND
S3_FROM_GCP_ROLE_ARN="arn:aws:iam::557418946771:role/aws_s3_from_gcp_dev"
S3_BUCKET_NAME="burn-severity-backend-dev"
GCP_SERVICE_ACCOUNT_S3_EMAIL=burn-backend-service-dev@dse-nps.iam.gserviceaccount.com
GCP_CLOUD_RUN_ENDPOINT_BURN_BACKEND="https://tf-rest-burn-backend-dev-ohi6r6qs2a-uc.a.run.app"
GCP_CLOUD_RUN_ENDPOINT_TITILER="https://tf-titiler-dev-ohi6r6qs2a-uc.a.run.app"
GCP_CLOUD_RUN_ENDPOINT_TITILER_POSSIBLE_ORIGINS="[\"https://tf-titiler-dev-113009620257.us-central1.run.app\",\"https://tf-titiler-dev-ohi6r6qs2a-uc.a.run.app\"]"
```

Note that `DEBUG_SERVICE` can be changed to `TITILER` to attach the VSCode debugger to titiler instead of the burn backend.
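If you need these variables in an ad-hoc shell session (outside the launch configurations), a small helper can export everything in the generated file. This is a sketch, not a script that ships with the repo:

```
# load_dotenv: source a KEY=VALUE .env file and export every variable it assigns
load_dotenv() {
  set -a        # mark all subsequently assigned variables for export
  . "$1"        # the generated .env is valid shell syntax, so just source it
  set +a
}
```

Usage: `load_dotenv .env && echo "$S3_BUCKET_NAME"`.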

### Debugging w/ VSCode

You can either run the services directly in the dev container, using the relevant conda environment, or run the Docker images exactly as they run server-side, for an essentially 1:1 replication of the server environment. The former is useful for development since you don't have to rebuild regularly, but the latter more or less ensures there are no build or runtime issues on the server side that you cannot observe locally.

#### Docker compose workflow

Once you are authenticated with the cloud services and have a valid `.env` generated, you can start the services using `docker compose`. Fear not, this is not docker-in-docker; the host's Docker daemon is passed through to the dev container. Simply run:

```
docker compose -f .devcontainer/docker-compose.dev.yml build
docker compose -f .devcontainer/docker-compose.dev.yml up
```
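For orientation, a compose file wired this way typically looks something like the following; the real `.devcontainer/docker-compose.dev.yml` is the source of truth, and every name and port below is a guess for illustration:

```
services:
  burn-backend:
    build:
      context: ..
      dockerfile: prod.Dockerfile   # hypothetical: build the same image as prod
    env_file:
      - ../.env                     # the generated .env described above
    ports:
      - "5050:5050"                 # hypothetical mapping; see .vscode/launch.json
```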

For local development (inside or outside of a dev container, though the former is recommended), VSCode should detect the launch configuration (`.vscode/launch.json`), allowing for local breakpoints/stepthrough. Port configuration can be found within the same file (default is port `5050`).

Navigate to the debug panel and select `Attach to Docker Burn Backend`. You should see `Debugger Attached` in the debug console. At this point, you can add breakpoints just as you would with a locally running application.
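An attach configuration of that name typically has roughly this shape (hypothetical values; the repo's actual `.vscode/launch.json` is authoritative):

```
{
  "name": "Attach to Docker Burn Backend",
  "type": "debugpy",
  "request": "attach",
  "connect": { "host": "localhost", "port": 5678 }
}
```

Port `5678` is the conventional debugpy default, shown here only as an example.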

#### Just-inside-DevContainer workflow

For this approach, simply run the `Burn Backend FastAPI` launch configuration in the debug panel. Everything should be more or less the same, but in this case you are running the application within the mambaforge image that serves as the OS for the 'base' dev container, which has some additional packages installed for development's sake. It is therefore good to try any major changes using the docker compose workflow described above before getting ready to merge.

### Cloud Deployment / Maintenance

#### Creating / updating deployments

```
tofu apply ".terraform/tfplan"
```

As alluded to above in the Dev Container section, cloud deployments use `/prod.Dockerfile` and `/prod_environment.yml` - it's best practice to avoid importing dev-related resources here (e.g., most visualization libraries and some pretty-printing tools) to keep deployments lighter.
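As an illustration of that split, a slim prod env file keeps only runtime dependencies (the names below are hypothetical examples, not the repo's actual list):

```
name: burn-severity-prod
dependencies:
  - python=3.11        # hypothetical pin
  - fastapi            # runtime API framework
  # dev-only tools (plotting, pretty-printing) stay in dev_environment.yml
```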

### API documentation

To regenerate the Sphinx API docs:

```
sphinx-apidoc -o .sphinx/source src
sphinx-build -M html .sphinx/source .sphinx
```

### Deprecated

#### Using the DevPod CLI (deprecated, for now)

NOTE: Since the cloud-side backend for DevPod is not working well for this repo, I recommend using the native `VSCode Dev Containers` extension instead.

```
devpod up git@github.com:SchmidtDSE/burn-severity-mapping-poc.git --devcontainer-path .devcontainer/burn_backend/devcontainer.json --ide vscode
```

Useful flags:

- `--devcontainer-path` - path to the `devcontainer.json` file, which varies depending on the service you're working on.
- `--debug` - get verbose output.
- `--recreate` and `--reset` - re-clone and rebuild the dev container (`--recreate`), or recreate while ignoring the cache (`--reset`). These are particularly useful when you're updating the `devcontainer.json` file or relevant build dependencies; make sure your changes still work after a `--reset` before merging anything, since manual changes within a running dev container aren't propagated.

Important files:

- `.devcontainer/[SERVICE]/devcontainer.json` contains some launch configurations, most notably:

- includes VSCode extensions that are installed **within the container** upon build (your other extensions, most of which will be installed on your 'local' side, should still be there).
- allows for a `postCreateCommand`, which lets runtime logic be performed after the container is created. The initial iteration of this repo downloaded BARC data on build from a public GCS bucket, but it could be anything. This is handled by `.devcontainer/[SERVICE]/init_runtime.sh`, which in turn executes files from `.devcontainer/[SERVICE]/runtime/`.

- `.devcontainer/[SERVICE]/Dockerfile` contains the build instructions for the dev container. Its primary responsibilities are to install necessary linux utilities (bash, curl, etc.), set up the cloud SDKs/CLIs and OpenTofu to manage those cloud resources, and build a conda environment to manage python dependencies. Note that **this Dockerfile builds the dev environment, not the prod environment - so if you need a package to be running in prod, you will add it to the `Dockerfile` found in the root dir as well as this one**.

- `.devcontainer/[SERVICE]/dev_environment.yml` contains the conda env requirements for the dev container. As above, **this is just for the dev environment; if you need a package in prod, you must add it to the `prod_environment.yaml` in the root dir**.

- by convention, files in `.devcontainer/[SERVICE]/prebuild` are run during the image build - this means that their effects are baked into the resultant docker image and can take advantage of caching on subsequent runs. This has the benefit of saving time for chunky installs, at the expense of a larger docker image.
.devcontainer/scripts/export_tofu_dotenv.sh (1 change: 1 addition & 0 deletions)

The added line selects the `dev` workspace before `tofu init`:

```
cd /workspace/.deployment/tofu
tofu workspace select dev
tofu init
tofu refresh
```
