Monitoring Stack & Default Values in Docker Compose #18

Merged
merged 11 commits
Feb 18, 2025
57 changes: 57 additions & 0 deletions README.md
@@ -22,6 +22,10 @@ The JPO ITS utilities repository serves as a central location for deploying open
- [Configuration](#configuration)
- [Configure Kafka Connector Creation](#configure-kafka-connector-creation)
- [Quick Run](#quick-run-2)
- [5. Monitoring Stack](#5-monitoring-stack)
- [Configuration](#configuration-1)
- [Quick Run](#quick-run-3)
- [Scrape Configurations](#scrape-configurations)
- [Security Notice](#security-notice)


@@ -198,6 +202,59 @@ The following environment variables can be used to configure Kafka Connectors:

[Back to top](#toc)

## 5. Monitoring Stack

The monitoring stack consists of Prometheus for metrics collection and Grafana for visualization, along with several exporters that collect metrics from different services. The configuration is defined in [docker-compose-monitoring.yml](docker-compose-monitoring.yml).

Set the `COMPOSE_PROFILES` environment variable to one of the following values:

- `monitoring_full` - deploys all resources in the [docker-compose-monitoring.yml](docker-compose-monitoring.yml) file
- `prometheus` - deploys only the Prometheus service
- `grafana` - deploys only the Grafana service
- `node_exporter` - deploys only the Node Exporter service for system metrics
- `kafka_exporter` - deploys only the Kafka Lag Exporter service
- `mongodb_exporter` - deploys only the MongoDB Exporter service
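Profiles can also be combined as a comma-separated list. As a sketch (the profile names come from the list above; the particular combination is just an example):

```shell
# Deploy only the Prometheus and Grafana services.
# COMPOSE_PROFILES accepts a comma-separated list of profiles.
export COMPOSE_PROFILES=prometheus,grafana
echo "Deploying profiles: $COMPOSE_PROFILES"

# Then bring the stack up (not run here):
#   docker compose -f docker-compose-monitoring.yml up -d
```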

### Configuration

The following environment variables can be used to configure the monitoring stack:

| Environment Variable | Description |
|---|---|
| `PROMETHEUS_RETENTION` | Data retention period for Prometheus (default: 15d) |
| `GRAFANA_ADMIN_USER` | Grafana admin username (default: admin) |
| `GRAFANA_ADMIN_PASSWORD` | Grafana admin password (default: grafana) |
| `KAFKA_LAG_EXPORTER_ROOT_LOG_LEVEL` | Root log level for the Kafka Lag Exporter (default: WARN) |
| `KAFKA_LAG_EXPORTER_LOG_LEVEL` | Kafka Lag Exporter log level (default: INFO) |
| `KAFKA_LAG_EXPORTER_KAFKA_LOG_LEVEL` | Kafka log level for the Kafka Lag Exporter (default: ERROR) |
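These defaults are applied with Compose's `${VAR:-default}` substitution, which behaves like POSIX shell parameter expansion: the fallback after `:-` is used only when the variable is unset or empty. A quick illustration in plain shell:

```shell
# Unset variable -> the fallback applies.
unset PROMETHEUS_RETENTION
echo "retention=${PROMETHEUS_RETENTION:-15d}"   # prints: retention=15d

# Set variable -> the explicit value wins.
PROMETHEUS_RETENTION=30d
echo "retention=${PROMETHEUS_RETENTION:-15d}"   # prints: retention=30d
```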

### Quick Run

1. Create a copy of `sample.env` and rename it to `.env`.
2. Set the `COMPOSE_PROFILES` variable to `monitoring_full`.
3. Update any passwords in the `.env` file for security.
4. Run the following command: `docker compose up -d`
5. Access the monitoring interfaces:
- Grafana: `http://localhost:3000` (default credentials: admin/grafana)
- Prometheus: `http://localhost:9090`
6. The following metrics endpoints will be available:
- Node Exporter: `http://localhost:9100/metrics`
- Kafka Lag Exporter: `http://localhost:8000/metrics`
- MongoDB Exporter: `http://localhost:9216/metrics`

### Scrape Configurations

The scrape configurations for the monitoring stack are defined in the [prometheus.yml](monitoring/prometheus/prometheus.yml) file. To add a new scrape configuration, add a new job to the `scrape_configs` section. Note that this file does not support environment variable substitution, so it must be edited manually.
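As a sketch, a new job entry follows the standard Prometheus form (the job name and target below are hypothetical):

```yaml
scrape_configs:
  # ... existing jobs ...
  - job_name: "my_new_service"        # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets: ["my-service:9999"]  # hypothetical host:port
```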

The following scrape configurations are available:

- `prometheus` - scrapes the Prometheus metrics
- `node_exporter` - scrapes the Node Exporter metrics
- `kafka_exporter` - scrapes the Kafka Lag Exporter metrics
- `mongodb_exporter` - scrapes the MongoDB Exporter metrics

[Back to top](#toc)

## Security Notice

While default passwords are provided for development convenience, it is **strongly recommended** to:
32 changes: 16 additions & 16 deletions docker-compose-connect.yml
@@ -8,7 +8,7 @@ services:
build:
context: kafka-connect
dockerfile: Dockerfile
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
deploy:
resources:
limits:
@@ -29,14 +29,14 @@
condition: service_healthy
required: false
environment:
CONNECT_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS}
CONNECT_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: kafka-connect-group
# Topics are created with jikkou in the kafka-setup service
CONNECT_CONFIG_STORAGE_TOPIC: topic.KafkaConnectConfigs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: -1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_FLUSH_INTERVAL_MS: ${CONNECT_FLUSH_INTERVAL:-1000}
CONNECT_OFFSET_STORAGE_TOPIC: topic.KafkaConnectOffsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: -1
CONNECT_OFFSET_STORAGE_CLEANUP_POLICY: compact
@@ -47,8 +47,8 @@
CONNECT_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_LOG4J_ROOT_LOGLEVEL: ${CONNECT_LOG_LEVEL}
CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=${CONNECT_LOG_LEVEL},org.reflections=${CONNECT_LOG_LEVEL},com.mongodb.kafka=${CONNECT_LOG_LEVEL}"
CONNECT_LOG4J_ROOT_LOGLEVEL: ${CONNECT_LOG_LEVEL:-ERROR}
CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=${CONNECT_LOG_LEVEL:-ERROR},org.reflections=${CONNECT_LOG_LEVEL:-ERROR},com.mongodb.kafka=${CONNECT_LOG_LEVEL:-ERROR}"
CONNECT_PLUGIN_PATH: /usr/share/confluent-hub-components

kafka-connect-setup:
@@ -73,14 +73,14 @@
condition: service_healthy
required: false
environment:
CONNECT_URL: ${CONNECT_URL}
CONNECT_TASKS_MAX: ${CONNECT_TASKS_MAX}
CONNECT_CREATE_ODE: ${CONNECT_CREATE_ODE}
CONNECT_CREATE_GEOJSONCONVERTER: ${CONNECT_CREATE_GEOJSONCONVERTER}
CONNECT_CREATE_CONFLICTMONITOR: ${CONNECT_CREATE_CONFLICTMONITOR}
CONNECT_CREATE_DEDUPLICATOR: ${CONNECT_CREATE_DEDUPLICATOR}
CONNECT_CREATE_MECDEPOSIT: ${CONNECT_CREATE_MECDEPOSIT}
MONGO_CONNECTOR_USERNAME: ${MONGO_ADMIN_DB_USER}
MONGO_CONNECTOR_PASSWORD: ${MONGO_ADMIN_DB_PASS:?}
MONGO_DB_IP: ${MONGO_IP}
MONGO_DB_NAME: ${MONGO_DB_NAME}
CONNECT_URL: ${CONNECT_URL:-http://connect:8083}
CONNECT_TASKS_MAX: ${CONNECT_TASKS_MAX:-10}
CONNECT_CREATE_ODE: ${CONNECT_CREATE_ODE:-true}
CONNECT_CREATE_GEOJSONCONVERTER: ${CONNECT_CREATE_GEOJSONCONVERTER:-true}
CONNECT_CREATE_CONFLICTMONITOR: ${CONNECT_CREATE_CONFLICTMONITOR:-true}
CONNECT_CREATE_DEDUPLICATOR: ${CONNECT_CREATE_DEDUPLICATOR:-false}
CONNECT_CREATE_MECDEPOSIT: ${CONNECT_CREATE_MECDEPOSIT:-false}
MONGO_CONNECTOR_USERNAME: ${MONGO_ADMIN_DB_USER:-admin}
MONGO_CONNECTOR_PASSWORD: ${MONGO_ADMIN_DB_PASS:-replace_me}
MONGO_DB_IP: ${MONGO_IP:-mongo}
MONGO_DB_NAME: ${MONGO_DB_NAME:-CV}
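The diff above also swaps Compose's `${VAR:?}` (fail when unset or empty) for `${VAR:-default}` (fall back when unset or empty). In shell terms, as an illustration only (not the Compose code itself):

```shell
# Illustration: Compose's ${VAR:?} vs ${VAR:-default}, shown with shell expansion.
unset MONGO_ADMIN_DB_PASS

# ${VAR:?msg} aborts the expansion with an error when VAR is unset or empty:
if sh -c 'echo "${MONGO_ADMIN_DB_PASS:?password required}"' 2>/dev/null; then
  echo "unexpected: ':?' succeeded"
else
  echo "':?' failed as expected"
fi

# ${VAR:-fallback} substitutes the fallback instead of failing:
echo "':-' yields: ${MONGO_ADMIN_DB_PASS:-replace_me}"
```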
45 changes: 20 additions & 25 deletions docker-compose-kafka.yml
@@ -7,7 +7,7 @@ services:
- kafka
image: bitnami/kafka:3.8.0
hostname: kafka
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
healthcheck:
test: /opt/bitnami/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server kafka:9092 --version || exit 1
interval: 30s
@@ -29,14 +29,14 @@
KAFKA_CFG_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
KAFKA_CFG_LISTENERS: "PLAINTEXT://:9094,CONTROLLER://:9093,EXTERNAL://:9092"
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT"
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9094,EXTERNAL://${KAFKA_BOOTSTRAP_SERVERS}"
KAFKA_CFG_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9094,EXTERNAL://${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}"
KAFKA_BROKER_ID: "1"
KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: "1@kafka:9093"
ALLOW_PLAINTEXT_LISTENER: "yes"
KAFKA_CFG_NODE_ID: "1"
KAFKA_CFG_DELETE_TOPIC_ENABLE: "true"
KAFKA_CFG_LOG_RETENTION_HOURS: ${KAFKA_LOG_RETENTION_HOURS}
KAFKA_CFG_LOG_RETENTION_BYTES: ${KAFKA_LOG_RETENTION_BYTES}
KAFKA_CFG_LOG_RETENTION_HOURS: ${KAFKA_LOG_RETENTION_HOURS:-3}
KAFKA_CFG_LOG_RETENTION_BYTES: ${KAFKA_LOG_RETENTION_BYTES:-10737418240}
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: "false"
logging:
options:
@@ -64,22 +64,17 @@
condition: service_healthy
required: false
environment:
KAFKA_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS}
KAFKA_TOPIC_PARTITIONS: ${KAFKA_TOPIC_PARTITIONS}
KAFKA_TOPIC_REPLICAS: ${KAFKA_TOPIC_REPLICAS}
KAFKA_TOPIC_MIN_INSYNC_REPLICAS: ${KAFKA_TOPIC_MIN_INSYNC_REPLICAS}
KAFKA_TOPIC_RETENTION_MS: ${KAFKA_TOPIC_RETENTION_MS}
KAFKA_TOPIC_DELETE_RETENTION_MS: ${KAFKA_TOPIC_DELETE_RETENTION_MS}
KAFKA_TOPIC_CREATE_ODE: ${KAFKA_TOPIC_CREATE_ODE}
KAFKA_TOPIC_CREATE_GEOJSONCONVERTER: ${KAFKA_TOPIC_CREATE_GEOJSONCONVERTER}
KAFKA_TOPIC_CREATE_CONFLICTMONITOR: ${KAFKA_TOPIC_CREATE_CONFLICTMONITOR}
KAFKA_TOPIC_CREATE_DEDUPLICATOR: ${KAFKA_TOPIC_CREATE_DEDUPLICATOR}
KAFKA_TOPIC_CREATE_MECDEPOSIT: ${KAFKA_TOPIC_CREATE_MECDEPOSIT}

KAFKA_SECURITY_PROTOCOL: ${KAFKA_SECURITY_PROTOCOL:-PLAINTEXT}
KAFKA_SASL_MECHANISM: ${KAFKA_SASL_MECHANISM}
KAFKA_SASL_JAAS_CONFIG: ${KAFKA_SASL_JAAS_CONFIG}
KAFKA_SSL_ENDPOINT_ALGORITHM: ${KAFKA_SSL_ENDPOINT_ALGORITHM}
KAFKA_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}
KAFKA_TOPIC_PARTITIONS: ${KAFKA_TOPIC_PARTITIONS:-1}
KAFKA_TOPIC_REPLICAS: ${KAFKA_TOPIC_REPLICAS:-1}
KAFKA_TOPIC_MIN_INSYNC_REPLICAS: ${KAFKA_TOPIC_MIN_INSYNC_REPLICAS:-1}
KAFKA_TOPIC_RETENTION_MS: ${KAFKA_TOPIC_RETENTION_MS:-300000}
KAFKA_TOPIC_DELETE_RETENTION_MS: ${KAFKA_TOPIC_DELETE_RETENTION_MS:-3600000}
KAFKA_TOPIC_CREATE_ODE: ${KAFKA_TOPIC_CREATE_ODE:-true}
KAFKA_TOPIC_CREATE_GEOJSONCONVERTER: ${KAFKA_TOPIC_CREATE_GEOJSONCONVERTER:-true}
KAFKA_TOPIC_CREATE_CONFLICTMONITOR: ${KAFKA_TOPIC_CREATE_CONFLICTMONITOR:-true}
KAFKA_TOPIC_CREATE_DEDUPLICATOR: ${KAFKA_TOPIC_CREATE_DEDUPLICATOR:-false}
KAFKA_TOPIC_CREATE_MECDEPOSIT: ${KAFKA_TOPIC_CREATE_MECDEPOSIT:-false}
logging:
options:
max-size: "10m"
@@ -91,7 +86,7 @@
- kafka_schema_registry
image: confluentinc/cp-schema-registry:7.7.0
hostname: schema-registry
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
deploy:
resources:
limits:
@@ -105,7 +100,7 @@
- "8081:8081"
environment:
SCHEMA_REGISTRY_HOST_NAME: schema-registry
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS}
SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: ${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}
SCHEMA_REGISTRY_CONFLUENT_SUPPORT_METRICS_ENABLE: "false"
SCHEMA_REGISTRY_AVRO_COMPATIBILITY_LEVEL: "NONE"
healthcheck:
@@ -125,7 +120,7 @@
- kafka_ui
hostname: kafka-ui
image: ghcr.io/kafbat/kafka-ui:v1.1.0
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
deploy:
resources:
limits:
Expand All @@ -140,9 +135,9 @@ services:
environment:
DYNAMIC_CONFIG_ENABLED: "true"
KAFKA_CLUSTERS_0_NAME: local
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: ${KAFKA_BOOTSTRAP_SERVERS}
KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: ${KAFKA_BOOTSTRAP_SERVERS:-kafka:9092}
KAFKA_CLUSTERS_0_KAFKACONNECT_0_NAME: kafka-connect
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: ${CONNECT_URL}
KAFKA_CLUSTERS_0_KAFKACONNECT_0_ADDRESS: ${CONNECT_URL:-http://connect:8083}
logging:
options:
max-size: "10m"
80 changes: 41 additions & 39 deletions docker-compose-mongo.yml
@@ -7,7 +7,7 @@ services:
- mongo
image: mongo:8
hostname: mongo
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
deploy:
resources:
limits:
@@ -16,19 +16,19 @@
ports:
- "27017:27017"
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_DB_USER}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_DB_PASS:?}
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ADMIN_DB_USER:-admin}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ADMIN_DB_PASS:-replace_me}
MONGO_INITDB_DATABASE: admin
MONGO_DATABASE_STORAGE_COLLECTION_NAME: ${MONGO_DATABASE_STORAGE_COLLECTION_NAME}
MONGO_DATABASE_SIZE_GB: ${MONGO_DATABASE_SIZE_GB}
MONGO_DATABASE_SIZE_TARGET_PERCENT: ${MONGO_DATABASE_SIZE_TARGET_PERCENT}
MONGO_DATABASE_DELETE_THRESHOLD_PERCENT: ${MONGO_DATABASE_DELETE_THRESHOLD_PERCENT}
MONGO_DATABASE_MAX_TTL_RETENTION_SECONDS: ${MONGO_DATABASE_MAX_TTL_RETENTION_SECONDS}
MONGO_DATABASE_MIN_TTL_RETENTION_SECONDS: ${MONGO_DATABASE_MIN_TTL_RETENTION_SECONDS}
MONGO_ENABLE_STORAGE_RECORD: ${MONGO_ENABLE_STORAGE_RECORD}
MONGO_ENABLE_DYNAMIC_TTL: ${MONGO_ENABLE_DYNAMIC_TTL}
MONGO_DB_NAME: ${MONGO_DB_NAME}
MONGO_DB_KEYFILE_STRING: ${MONGO_DB_KEYFILE_STRING:?}
MONGO_DATABASE_STORAGE_COLLECTION_NAME: ${MONGO_DATABASE_STORAGE_COLLECTION_NAME:-storage}
MONGO_DATABASE_SIZE_GB: ${MONGO_DATABASE_SIZE_GB:-10}
MONGO_DATABASE_SIZE_TARGET_PERCENT: ${MONGO_DATABASE_SIZE_TARGET_PERCENT:-0.8}
MONGO_DATABASE_DELETE_THRESHOLD_PERCENT: ${MONGO_DATABASE_DELETE_THRESHOLD_PERCENT:-0.9}
MONGO_DATABASE_MAX_TTL_RETENTION_SECONDS: ${MONGO_DATABASE_MAX_TTL_RETENTION_SECONDS:-5184000}
MONGO_DATABASE_MIN_TTL_RETENTION_SECONDS: ${MONGO_DATABASE_MIN_TTL_RETENTION_SECONDS:-604800}
MONGO_ENABLE_STORAGE_RECORD: ${MONGO_ENABLE_STORAGE_RECORD:-true}
MONGO_ENABLE_DYNAMIC_TTL: ${MONGO_ENABLE_DYNAMIC_TTL:-true}
MONGO_DB_NAME: ${MONGO_DB_NAME:-CV}
MONGO_DB_KEYFILE_STRING: ${MONGO_DB_KEYFILE_STRING:-replacethisstring}
entrypoint:
- bash
- -c
@@ -41,7 +41,7 @@
dos2unix /etc/cron.d/manage-volume-cron
chmod 644 /etc/cron.d/manage-volume-cron
systemctl restart cron
echo "$MONGO_DB_KEYFILE_STRING" > /data/keyfile.txt
echo "${MONGO_DB_KEYFILE_STRING:-replacethisstring}" > /data/keyfile.txt
chmod 400 /data/keyfile.txt
chown 999:999 /data/keyfile.txt

@@ -53,7 +53,7 @@
- ./mongo/manage_volume.js:/data/manage_volume.js
healthcheck:
# Removal of replica set status check as the mongo-setup container is what actually configures the replica set
test: mongosh --quiet --username ${MONGO_ADMIN_DB_USER} --password ${MONGO_ADMIN_DB_PASS} --authenticationDatabase admin --eval "db.adminCommand('ping').ok"
test: mongosh --quiet --username ${MONGO_ADMIN_DB_USER:-admin} --password ${MONGO_ADMIN_DB_PASS:-replace_me} --authenticationDatabase admin --eval "db.adminCommand('ping').ok"
interval: 10s
timeout: 10s
retries: 10
@@ -77,26 +77,28 @@
cpus: '0.5'
memory: 1G
environment:
MONGO_ADMIN_DB_USER: ${MONGO_ADMIN_DB_USER}
MONGO_ADMIN_DB_PASS: ${MONGO_ADMIN_DB_PASS:?}
MONGO_DB_NAME: ${MONGO_DB_NAME}
MONGO_READ_WRITE_USER: ${MONGO_READ_WRITE_USER}
MONGO_READ_WRITE_PASS: ${MONGO_READ_WRITE_PASS:?}
MONGO_READ_USER: ${MONGO_READ_USER}
MONGO_READ_PASS: ${MONGO_READ_PASS:?}
MONGO_EXPORTER_USERNAME: ${MONGO_EXPORTER_USERNAME}
MONGO_EXPORTER_PASSWORD: ${MONGO_EXPORTER_PASSWORD:?}
MONGO_DATA_RETENTION_SECONDS: ${MONGO_DATA_RETENTION_SECONDS}
MONGO_ASN_RETENTION_SECONDS: ${MONGO_ASN_RETENTION_SECONDS}
CONNECT_CREATE_GEOJSONCONVERTER: ${CONNECT_CREATE_GEOJSONCONVERTER}
CONNECT_CREATE_CONFLICTMONITOR: ${CONNECT_CREATE_CONFLICTMONITOR}
CONNECT_CREATE_DEDUPLICATOR: ${CONNECT_CREATE_DEDUPLICATOR}
CONNECT_CREATE_MECDEPOSIT: ${CONNECT_CREATE_MECDEPOSIT}
MONGO_ADMIN_DB_USER: ${MONGO_ADMIN_DB_USER:-admin}
MONGO_ADMIN_DB_PASS: ${MONGO_ADMIN_DB_PASS:-replace_me}
MONGO_DB_NAME: ${MONGO_DB_NAME:-CV}
MONGO_READ_WRITE_USER: ${MONGO_READ_WRITE_USER:-ode}
MONGO_READ_WRITE_PASS: ${MONGO_READ_WRITE_PASS:-replace_me}
MONGO_READ_USER: ${MONGO_READ_USER:-read}
MONGO_READ_PASS: ${MONGO_READ_PASS:-replace_me}
MONGO_EXPORTER_USERNAME: ${MONGO_EXPORTER_USERNAME:-exporter}
MONGO_EXPORTER_PASSWORD: ${MONGO_EXPORTER_PASSWORD:-replace_me}

MONGO_DATA_RETENTION_SECONDS: ${MONGO_DATA_RETENTION_SECONDS:-5184000}
MONGO_ASN_RETENTION_SECONDS: ${MONGO_ASN_RETENTION_SECONDS:-86400}

MONGO_INDEX_CREATE_ODE: ${MONGO_INDEX_CREATE_ODE:-true}
MONGO_INDEX_CREATE_GEOJSONCONVERTER: ${MONGO_INDEX_CREATE_GEOJSONCONVERTER:-true}
MONGO_INDEX_CREATE_CONFLICTMONITOR: ${MONGO_INDEX_CREATE_CONFLICTMONITOR:-true}
MONGO_INDEX_CREATE_DEDUPLICATOR: ${MONGO_INDEX_CREATE_DEDUPLICATOR:-false}
entrypoint: ["/bin/bash", "setup_mongo.sh"]
volumes:
- ${MONGO_SETUP_SCRIPT_RELATIVE_PATH}:/setup_mongo.sh
- ${MONGO_CREATE_INDEXES_SCRIPT_RELATIVE_PATH}:/create_indexes.js
- ${MONGO_INIT_REPLICAS_SCRIPT_RELATIVE_PATH}:/init_replicas.js
- ${MONGO_SETUP_SCRIPT_RELATIVE_PATH:-./mongo/setup_mongo.sh}:/setup_mongo.sh
- ${MONGO_CREATE_INDEXES_SCRIPT_RELATIVE_PATH:-./mongo/create_indexes.js}:/create_indexes.js
- ${MONGO_INIT_REPLICAS_SCRIPT_RELATIVE_PATH:-./mongo/init_replicas.js}:/init_replicas.js

mongo-express:
profiles:
@@ -105,7 +107,7 @@
- mongo_express
image: mongo-express:1.0.2-18
hostname: mongo-express
restart: ${RESTART_POLICY}
restart: ${RESTART_POLICY:-on-failure:3}
deploy:
resources:
limits:
Expand All @@ -119,11 +121,11 @@ services:
required: false
environment:
ME_CONFIG_MONGODB_ENABLE_ADMIN: "true"
ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USER}
ME_CONFIG_BASICAUTH_PASSWORD: ${MONGO_EXPRESS_PASS:?}
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ADMIN_DB_USER}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ADMIN_DB_PASS:?}
ME_CONFIG_MONGODB_URL: mongodb://${MONGO_ADMIN_DB_USER}:${MONGO_ADMIN_DB_PASS}@${MONGO_IP}:27017/?authSource=admin&directConnection=true
ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USER:-admin}
ME_CONFIG_BASICAUTH_PASSWORD: ${MONGO_EXPRESS_PASS:-replace_me}
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ADMIN_DB_USER:-admin}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ADMIN_DB_PASS:-replace_me}
ME_CONFIG_MONGODB_URL: mongodb://${MONGO_ADMIN_DB_USER:-admin}:${MONGO_ADMIN_DB_PASS:-replace_me}@${MONGO_IP:-mongo}:27017/?authSource=admin&directConnection=true
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8081"]
interval: 30s