This is just a plain local instance that you can clone in order to add Cypress. Prior to running Cypress, verify that you can run this instance on localhost.
- Create a directory called `tests` in the root: `mkdir tests`
- `cd` into the `tests` directory: `cd tests`
- Run `npm init` from `tests` (you can accept all of the default settings): `npm init`
- Install Cypress in the `tests` directory: `npm install cypress --save-dev`
- Extend Cypress with custom commands from the Testing Library ecosystem: `npm install --save-dev @testing-library/cypress`
- Start your local instance from the root directory (if it's not already running): `docker compose up -d`
- Start Cypress from the `tests` directory: `npx cypress open`
- You should see your browser open. Follow the prompt to install any configuration files.
- Follow the prompt to create your first spec.
- Take a look at the test created in `user/cypress/e2e/spec.cy.js` in your IDE to verify that things are working correctly.
- Replace lines 1-5 in `spec.cy.js` with the following code and save:
```javascript
import "@testing-library/cypress/add-commands";

Cypress.session.clearAllSavedSessions();

Cypress.Commands.add("login", (username, password) => {
  cy.visit("http://localhost:3000/auth/login"); // Adjust this to your local app's login URL
  cy.get('input[placeholder="name@company.com"]').type(username); // Adjust the placeholder to match your username field
  cy.get('input[placeholder="Password"]').type(password); // Adjust the placeholder to match your password field
  cy.contains(/^Sign in$/).click(); // Click the Sign in button
});

describe("Login Test", () => {
  it("should log in successfully and navigate to the home page", () => {
    cy.login("username", "password"); // Replace with actual username and password
    cy.url().should("not.include", "/auth/login"); // A successful login should redirect away from the login page
  });
});
```
- Run the new test, and if everything is configured correctly, it should log you into your local instance: `npx cypress run --browser chrome`
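If you later want `cy.visit` to take relative paths, a minimal `cypress.config.js` for the `tests` directory might look like the following (a sketch assuming Cypress 10+, where this file replaced `cypress.json`; the `baseUrl` points at the local instance above):

```javascript
// cypress.config.js — minimal sketch, assuming Cypress 10+.
const { defineConfig } = require("cypress");

module.exports = defineConfig({
  e2e: {
    // Base URL of the local instance started via docker compose
    baseUrl: "http://localhost:3000",
  },
});
```

With a `baseUrl` set, `cy.visit("http://localhost:3000/auth/login")` in the spec can be shortened to `cy.visit("/auth/login")`.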
🐳 Docker Desktop: version 4.25+
In general settings:

- Choose `VirtioFS` so that the virtualization framework is enabled
- Enable `Use Rosetta for x86/amd64 emulation on Apple Silicon`

In resources:

- Allocate at least `4 CPU` and `8GB Memory`
🐙 Git installed, along with an SSH key configured for your GitHub account in the tryretool org.
💻 A Text Editor/IDE set up, and some comfort using a Terminal for CLI commands
- Use this template to create a new repo owned by your GitHub user, e.g. `my-local-docker`
- Clone that repo to your machine. If you don't already have one, create a folder in your home directory to help keep organized, e.g. `~/local-retool`
- Open up that repo in your text editor of choice
This template is made up of some Dockerfiles, some Compose files, and a dynamic configuration file for Temporal (don't worry too much about this one).
There are two separate Compose files: one for a deployment with Workflows, and one without. I recommend working mostly with a non-Workflows deployment and spinning up the one with Workflows when needed.
- Set an image tag in the `Dockerfile`, and optionally in `CodeExecutor.Dockerfile` if you're spinning up Workflows with Python support
- Rename the template env file to `docker.env`, and set the `JWT_SECRET` and `ENCRYPTION_KEY`. Optionally, you can replace the `LICENSE_KEY` with your own (which is useful for testing feature flags)
Run the default `compose.yaml`:

```shell
docker compose up -d --build
```
Pass a specific compose file to use:

```shell
docker compose -f compose-workflows.yaml up -d --build
```
It typically takes around one minute for the deployment to be ready to handle requests, after which you can access it via `localhost:3000`.
To spin things down, run `docker compose down`.
Spend some time getting familiar with the Docker Desktop UI, which will show the status of your services.
Primarily you'll use Docker Desktop to view container logs, see metrics/charts of your containers' CPU and Memory usage, and `exec` into containers themselves.
You can also install third-party tools/apps to extend Docker Desktop functionality. I primarily use two:
- ContainerWatch: Provides nice graphs for CPU and Memory usage for containers, and some nice logging.
- Docker Debug Tools: Makes `exec`ing into containers and running commands more pleasant.
One of the benefits of running locally is being able to easily inspect database state, and doing so is much easier with a GUI application (rather than running `psql` commands).
Install an app like Postico, TablePlus, or something from this list to better view your deployment data, see table structure, and run ad-hoc queries.
This template starts with a `postgres` container that keeps your data in a local volume via Docker Desktop. If you ever lose this volume (e.g. if you run `docker system prune`) then your data is gone.
Consider setting up a database on Neon, Render, Supabase, or another cloud service to manage your database for free or very cheap.
I personally pay for Neon (it ends up around $2 a month), primarily for their Branching feature, which comes in very handy for testing Retool things across versions where database state can potentially get messed up.
New versions of Retool may include changes to the schema of the postgres database, and these are implemented through DB Migrations.
It's important to keep these in mind when doing large jumps between Retool versions locally using the same postgres volume/instance, as Retool does not run down-migrations automatically when you downgrade.
- DB Migration info is stored in the single-column `SequelizeMeta` table (the beginning digits of each migration name roughly translate to the date it was first created)
- You can copy a migration's name and search for it in Retool's codebase to find out what the migration is actually doing, along with a link to the PR, which should give more info on why it was created and what version it should be run in.
Power up your local deployment by:

- Creating a GitHub or GitLab repo to set up Source Control
- Creating an Okta or Auth0 account to set up Custom SSO
- Adding NGINX to your configuration to set up SSL
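For the NGINX option, a reverse-proxy sketch to start from (certificate paths and `server_name` are placeholders; you'd also need to generate a self-signed certificate, e.g. with `mkcert` or `openssl`):

```nginx
# Illustrative sketch only; certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name localhost;

    ssl_certificate     /etc/nginx/certs/localhost.crt;
    ssl_certificate_key /etc/nginx/certs/localhost.key;

    location / {
        proxy_pass http://localhost:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Allow websocket connection upgrades
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```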