Production Server Administration
Our production server supports opening a remote shell over SSH. (You must be connected to the Rensselaer network, either directly or over the VPN.) You can sign in by running `ssh -p 2222 [username]@srv1.webtech.union.rpi.edu`, where `[username]` is your username for the production server as was configured by the server administrator. Conventionally, this is set to your RCS ID, but it’s possible for it to be something else. Remember that when using `scp` to transfer files, the port must be specified with `-P 2222` (note the use of an uppercase letter “P” instead of a lowercase letter “p”).
As an administrator with `sudo` permissions, follow these instructions to add a new user account with SSH access on the production server:
1. Open an SSH connection to the production server.
2. Run `sudo adduser [rcs-id]`, where `[rcs-id]` is the RCS ID of the person to whom you want to grant SSH access.
3. Follow the prompts to set a temporary UNIX password for the new user account.
4. When you’re prompted to do so, enter the person’s full name, properly capitalized.
5. Leave all of the other biographical prompts blank.
6. Confirm that the information is correct.
7. Request that the person to whom you want to grant SSH access send you a public SSH key that they’ll use (together with the corresponding private key) to sign in to the server.
8. Run `cd /home/[rcs-id]`, where `[rcs-id]` is the RCS ID of the person to whom you want to grant SSH access.
9. Run `ls -a | grep .ssh`.
10. If `.ssh` was not printed in the output from step 9, then run `mkdir .ssh`.
    - Skip this step if `.ssh` was indeed printed in the output from step 9.
11. Run `nano .ssh/authorized_keys`.
12. Manually append the contents of the public SSH key that the person sent you on a new line in the text file.
13. Press Control-X to exit.
14. Press Y to save the file.
15. Run `chmod 700 .ssh`.
16. Run `chmod 600 .ssh/authorized_keys`.
17. Run `chown -R [rcs-id]:[rcs-id] .ssh`, where `[rcs-id]` is the RCS ID of the person to whom you want to grant SSH access.
18. If you want the person to have `sudo` permission, then run `sudo usermod -aG sudo [rcs-id]`, where `[rcs-id]` is the RCS ID of the person to whom you want to grant `sudo` permission.
19. If you want the person to have deployment permission, then follow these steps:
    1. Run `nano temp.pem`.
    2. Paste in the contents of the public SSH key that the person sent you.
    3. Press Control-X to exit.
    4. Press Y to save the file.
    5. Run `sudo dokku ssh-keys:add [rcs-id] temp.pem`, where `[rcs-id]` is the RCS ID of the person to whom you want to grant deployment permission.
    6. Run `rm temp.pem`.
20. Have the person to whom you want to grant SSH access sign in to the production server by following the instructions in the SSH Connections section.
21. Have the person to whom you want to grant SSH access run `passwd`.
22. Have the person to whom you want to grant SSH access follow the prompts to set a new, secure password.
    - The person doesn’t need to tell you or anyone else what their password is.
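The key-installation portion of the steps above (creating `.ssh`, appending the public key to `authorized_keys`, and fixing permissions and ownership) can be condensed into one shell function. This is an illustrative sketch, not an official script: the function name is made up, and the `home_root` and `owner` parameters exist only so the sketch is easy to try out locally; on the real server you’d use the defaults and run it in a root shell.

```shell
# Hypothetical helper: install a public SSH key for a new user account and
# lock down the permissions, mirroring the manual steps above.
install_ssh_key() {
  rcs_id="$1"                          # the person's RCS ID
  pubkey_file="$2"                     # path to the public key they sent you
  home_root="${3:-/home}"              # parent of the user's home directory
  owner="${4:-${rcs_id}:${rcs_id}}"    # owner (and optionally group) of .ssh
  ssh_dir="${home_root}/${rcs_id}/.ssh"

  mkdir -p "${ssh_dir}"
  cat "${pubkey_file}" >> "${ssh_dir}/authorized_keys"  # append the key
  chmod 700 "${ssh_dir}"
  chmod 600 "${ssh_dir}/authorized_keys"
  chown -R "${owner}" "${ssh_dir}"
}
```

Running `install_ssh_key jdoe jdoe_key.pub` in a root shell (with a hypothetical RCS ID `jdoe`) then reproduces the manual `mkdir`/`chmod`/`chown` sequence in one shot.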
As an administrator with `sudo` permissions, follow these instructions to delete an existing user account on the production server:
1. Open an SSH connection to the production server.
2. Run `sudo deluser [username]`, where `[username]` is the username of the user account that you want to delete.
    - Conventionally, `[username]` is set to the person’s RCS ID, but it’s possible for it to be something else.
3. Run `sudo rm -rf /home/[username]`, where `[username]` is the username of the user account that you want to delete.
The single physical production server (which is actually a virtual machine that’s running through Proxmox on top of a bare-metal physical server, but that’s beside the point) uses a virtual-server model, powered by Dokku, to host multiple virtual servers on the same machine. For clarity, we’ll refer to a virtual server as an “environment”. (Dokku calls them “apps”, but that can be very confusing because we also maintain several client apps that have nothing to do with Dokku.) Dokku uses the domain header in incoming HTTP requests to route those requests to the appropriate environments.
We share a production server with several other projects, but the two environments that are relevant for the modern Shuttle Tracker project are `shuttletracker-new` (production) and `shuttletracker-new-staging` (staging). Requests to the `shuttletracker.app` and `shuttles.rpi.edu` domains are routed to `shuttletracker-new`, and requests to the `staging.shuttletracker.app` domain are routed to `shuttletracker-new-staging`. Both the production environment and the staging environment run on the production server.
It’s safe to test in the staging environment, but don’t submit any test data to the production environment. Data in the production environment are shown to users, so these data must be accurate and valid.
If the server isn’t responding, then restarting the relevant environment is very likely to solve the problem. Reports from users will almost certainly refer to the production environment because very few users manually change the server base URL to that of the staging environment.
For reasons that are currently unknown, the Shuttle Tracker server software gradually increases in CPU utilization over many hours until it hits 100%, at which point performance drastically declines. As a stop-gap measure until we can diagnose the root cause, both the production and staging environments automatically shut themselves down after 6 hours. The production environment automatically restarts itself immediately after shutting down. (This shut-down-and-restart process is quick enough that it’s invisible to real-world users.) The staging environment, however, might or might not automatically restart itself, depending on its current configuration. If you need to test in the staging environment more than 6 hours after you last restarted it, then you might need to open an SSH connection to the production server and to restart it manually.
1. Open an SSH connection to the production server.
2. Run `dokku ps:restart shuttletracker-new`.
    - Replace `shuttletracker-new` with `shuttletracker-new-staging` for the staging environment.
To deploy a new version of the codebase to the production server, you need to push to a special Git remote. This can be done either on the command line (`git push` …) or in a local Git GUI of your choice. The following instructions apply to the command line.
1. Open a local terminal with the root of your local clone of this repository as the working directory.
2. Run `git remote add production ssh://dokku@srv1.webtech.union.rpi.edu:2222/shuttletracker-new`.
    - This adds the production environment as a Git remote with the local name `production`.
    - Skip this step if you don’t ever intend to deploy to the production environment (e.g., if you don’t have permission to do so).
3. Run `git remote add staging ssh://dokku@srv1.webtech.union.rpi.edu:2222/shuttletracker-new-staging`.
    - This adds the staging environment as a Git remote with the local name `staging`.
    - Skip this step if you don’t ever intend to deploy to the staging environment (e.g., if you don’t have permission to do so).
Note that you must have already been granted SSH access to the production server and be connected to the Rensselaer network, either directly or over the VPN, to push changes to the special Git remote.
1. Open a local terminal with the root of your local clone of this repository as the working directory.
2. Run `git push production main:master`.
    - Replace `production` with `staging` for the staging environment.
    - Locally, the default branch will be named `main`, but on the production server, it’s named `master`, hence the need for `main:master` to specify the branch correspondence.
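Because the `main:master` refspec is easy to forget, here’s a tiny, hypothetical helper (not part of the repository) that maps an environment name to the exact push command; it only prints the command rather than running it.

```shell
# Hypothetical helper: print the git push command for a named environment.
# Valid names match the Git remotes configured above: production or staging.
deploy_command() {
  case "$1" in
    production|staging) ;;
    *) echo "unknown environment: $1" >&2; return 1 ;;
  esac
  # The local default branch is main, but the server-side branch is master,
  # so the main:master refspec is always required.
  echo "git push $1 main:master"
}
```

For example, `deploy_command staging` prints `git push staging main:master`, which you can then run (or pipe to `sh` if you’re feeling trusting).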
If you make breaking changes to the public API, then make sure to increment the `apiVersion` static property in the `Constants` structure in `Utilities.swift`. Doing so signals to Shuttle Tracker clients that are configured to work with a prior version of the API to prompt their users to update to newer versions of those respective clients.
We use Let’s Encrypt as our certificate authority. Let’s Encrypt issues to us 90-day SSL certificates that secure connections to our server over HTTPS. Because of the 90-day validity period, the certificates must be renewed before they expire.
1. Open an SSH connection to the production server.
2. Run `dokku letsencrypt:auto-renew`.
    - This will renew the certificates for all environments on the server that use Let’s Encrypt. This is usually fine because there’s no harm in renewing a certificate early. However, if you’re still concerned, then reach out to the maintainers of the other environments on the server. Most likely, the maintainer for all of them will be the current Chair of the Web Technologies Group.
    - Alternatively, you can renew the certificate for a single environment by running `dokku letsencrypt:auto-renew [environment]`, where `[environment]` is the name of the environment (such as `shuttletracker-new`).
1. Open an SSH connection to the production server.
2. Run `dokku postgres:enter shuttletracker_new`.
    - Replace `shuttletracker_new` with `shuttletracker_new_staging` for the staging environment.
    - Note the use of underscores instead of hyphens.
3. Run `su postgres`.
4. Run `psql`.
5. Run `\c shuttletracker_new`.
    - Replace `shuttletracker_new` with `shuttletracker_new_staging` for the staging environment.
    - Note the use of underscores instead of hyphens.
6. Run your manual SQL queries.
With the introduction of our in-house versioned migrator, you might occasionally need to add migration-version records manually on the production server, such as when you encounter a deployment error that complains that a particular versioned database table already exists. You can use the following SQL query to do so:

`INSERT INTO migrationversions VALUES ('[uuid]', '[schema-name]', [schema-version]);`

- `[uuid]` is a universally unique identifier (UUID) that should be newly generated with the `uuidgen` shell utility for each new record.
- `[schema-name]` is the name of the relevant schema (`analyticsentries`, for example) exactly as is specified in the source code.
- `[schema-version]` is an unsigned integer that indicates the current schema version. Usually, if you’re manually creating a migration-version record, this should be the same schema version that’s specified in the source code.

Make sure to keep the semicolon!
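As a sketch, you can assemble the full statement in a shell before pasting it into `psql`. Everything below follows the query above; the schema name and version are example values, and `uuidgen` (with a Linux-only `/proc` fallback) supplies the UUID.

```shell
# Build a migration-version INSERT statement with a freshly generated UUID.
# The schema name and version here are example values; substitute the real ones.
uuid="$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)"
schema_name="analyticsentries"
schema_version=1
echo "INSERT INTO migrationversions VALUES ('${uuid}', '${schema_name}', ${schema_version});"
```

Generating the UUID this way avoids typos in the 36-character identifier and guarantees a fresh value for each record.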
We maintain a suite of internal utility apps that aid with server administration and testing by eliminating the need to open a remote shell on the server to perform common tasks like scheduling new announcements manually. You can find all of these apps in the Utilities repository. Note that these apps only work on macOS at this time.
Most of the useful functionality of these apps requires that you authenticate with the server via public-key cryptography. The apps maintain a shared key store that’s backed by your Mac’s Secure Enclave. You can access this key store in supported apps by opening the Key Manager function. You can generate new key pairs and export the public keys in those key pairs in the Key Manager. Upload the public keys (stored as PEM files) to the `/var/lib/dokku/data/storage/shuttletracker-new/keys/` directory on the production server. (Replace `shuttletracker-new` with `shuttletracker-new-staging` for the staging environment.) The convention is to name an uploaded PEM file as `[rcs-id].pem`, where `[rcs-id]` is the RCS ID of the person whose key it is. It’s recommended that you use a separate key pair for each environment.
The server rejects HTTP requests to protected endpoints unless they’re cryptographically signed with a private key whose corresponding public key is installed in the aforementioned keys directory for the relevant environment. The utility apps generate the proper cryptographic signatures automatically.
Note that you can’t export the private keys, and those private keys can’t be backed up with Time Machine or any other backup service. If you wipe or otherwise lose access to the data on your Mac, then you’ll need to create entirely new key pairs and to re-upload the public keys to the production server.
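The general shape of this sign-and-verify scheme can be illustrated with `openssl` on the command line. This is only an analogy, not the apps’ actual implementation: the real private key lives in the Secure Enclave and can’t be exported, and the exact key type, digest, and signature encoding that the server expects are assumptions here (P-256 ECDSA with SHA-256 is a plausible stand-in, since Secure Enclave keys are P-256).

```shell
# Illustrative only: mimic the sign-and-verify flow with a throwaway key pair.
cd "$(mktemp -d)"
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out private.pem
openssl pkey -in private.pem -pubout -out rcs-id.pem     # this is the file you'd upload
printf 'example request body' > body.bin
openssl dgst -sha256 -sign private.pem -out body.sig body.bin         # client side
openssl dgst -sha256 -verify rcs-id.pem -signature body.sig body.bin  # server side
```

The last command prints a verification result and succeeds only if the signature matches both the body and the public key, which is exactly the property the server relies on to authenticate requests.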