Debugging Host Inventory code requires a database, a Kafka broker, Kafka Zookeeper, a Debezium connector, and xjoin-search. These services are deployed to Kubernetes using Kubernetes operators and bonfire.
- Start Minikube with the following suggested configuration (a `minikube start` sketch follows the list):
    - disk-size: 200GB
    - driver: kvm2
    - memory: 16384
    - cpus: 6
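These values map onto `minikube start` flags; the following is only a sketch of the invocation (flag spellings and unit suffixes are assumptions, verify them against `minikube start --help` for your version):
# Start a local cluster sized for Kafka, Debezium, Elasticsearch, and the operators.
minikube start --driver=kvm2 --cpus=6 --memory=16384 --disk-size=200g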
- Connect to the image repository to generate an authentication token, which bonfire uses to pull images from the repository:
docker/podman login -u <username> quay.io
- Clone the xjoin-operator code repository and `cd` into the `xjoin-operator` directory.
- Deploy host-inventory:
./dev/setup-clowder.sh
- Install CRDs:
make install
- Run the xjoin-operator:
make run ENABLE_WEBHOOKS=false
- Create a new xjoin pipeline:
kubectl apply -f ./config/samples/xjoin_v1alpha1_xjoinpipeline.yaml -n test
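To confirm the pipeline was created and is reconciling, you can inspect the custom resource; the `xjoinpipeline` resource kind below is inferred from the sample manifest name and may differ in your version of the operator:
# List the XJoinPipeline resources in the test namespace and show their status.
kubectl get xjoinpipeline -n test
kubectl describe xjoinpipeline -n test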
- Log in to the Ephemeral cluster.
- Reserve a namespace:
bonfire namespace reserve -d xxh # the default reservation time is one hour
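Here `xxh` is a duration placeholder. For example, to reserve a namespace for four hours and then list reservations (assuming your bonfire version provides the `namespace list` subcommand):
# Reserve an ephemeral namespace for 4 hours, then show current reservations.
bonfire namespace reserve -d 4h
bonfire namespace list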
- Deploy host-inventory:
bonfire deploy host-inventory -n <namespace>
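Deployment can take several minutes; one way to watch progress is to follow the pods in the reserved namespace:
# Watch host-inventory and its dependencies come up in the reserved namespace.
kubectl get pods -n <namespace> -w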
- Expose the service ports by downloading the forward-ports-clowder.sh script and running:
forward-ports-clowder.sh <namespace>
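If the script is not an option, a similar effect can be approximated with `kubectl port-forward`; the local ports below match the ones used later in this guide (8000 for the API, 9200 for Elasticsearch, 5432 for Postgres), and the service name placeholders should be replaced with the names shown by `kubectl get svc -n <namespace>`:
# Forward the API, Elasticsearch, and database ports to localhost.
kubectl port-forward svc/<host-inventory-api-service> 8000:8000 -n <namespace> &
kubectl port-forward svc/<elasticsearch-service> 9200:9200 -n <namespace> &
kubectl port-forward svc/<host-inventory-db-service> 5432:5432 -n <namespace> &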
- In the host-inventory project directory, run:
pipenv install --dev
pipenv shell
- Get the Kafka broker address:
The `make run_inv_mq_service_test_producer` command complains that the Kafka broker is not available and provides its address, which should look like env-ephemeral-spxayh-509fc239-kafka-0.env-ephemeral-spxayh-509fc239-kafka-brokers.ephemeral-spxayh.svc
- Add the Kafka broker address to your local `/etc/hosts` file:
127.0.0.1 env-ephemeral-spxayh-509fc239-kafka-0.env-ephemeral-spxayh-509fc239-kafka-brokers.ephemeral-spxayh.svc
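One way to add the entry from a shell (the hostname below is the example value; use the address reported by the producer):
# Append the broker hostname to /etc/hosts so it resolves to the forwarded port.
echo '127.0.0.1 env-ephemeral-spxayh-509fc239-kafka-0.env-ephemeral-spxayh-509fc239-kafka-brokers.ephemeral-spxayh.svc' | sudo tee -a /etc/hosts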
- To connect to the database and Elasticsearch pods, get credentials by downloading the get_credentials.sh script and running:
get_credentials.sh <namespace>
get_credentials.sh provides the credentials to access the database and Elasticsearch.
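With those credentials and the forwarded ports you can connect to the database directly; the local port below assumes the standard Postgres port is forwarded to localhost:
# Connect to the host-inventory database through the forwarded port.
PGPASSWORD='<password from get_credentials>' psql -h localhost -p 5432 -U '<user from get_credentials>' -d '<DB name from get_credentials>'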
- Create a new host by running:
make run_inv_mq_service_test_producer
- Verify the newly created host is available:
curl --location 'http://localhost:8000/api/inventory/v1/hosts' \
  --header 'x-rh-identity: <base64-encoded identity>' \
  --header 'Content-Type: application/json' \
  --header 'x-rh-cloud-bulk-query-source: xjoin'
Though the API GET is used, the host is provided by `xjoin-search`, which gets it from the `elasticsearch` index.
- To get hosts directly from the Elasticsearch index, run:
Note: the index name "xjoin.inventory.hosts" may be different in your case.
curl --location 'http://localhost:9200/xjoin.inventory.hosts/_search' \
  --header 'Authorization: Basic <elastic-user value from get_credentials>='
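To narrow the search to a single host, a standard Elasticsearch match query can be posted to the same endpoint; the `display_name` field is an assumption about the index mapping:
# Search the xjoin index for a host by display name (field name assumed).
curl --location 'http://localhost:9200/xjoin.inventory.hosts/_search' \
  --header 'Authorization: Basic <elastic-user value from get_credentials>=' \
  --header 'Content-Type: application/json' \
  --data '{"query": {"match": {"display_name": "<host display name>"}}}'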
This section describes how to launch the VS Code debugger to execute the local API server, which in turn uses the resources deployed in the ephemeral namespace.
- Load the `host-inventory` project into VS Code.
- Add a launch configuration to run the api-server (`run.py`) with the following environment variables:
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "run.py",
            "console": "integratedTerminal",
            "justMyCode": false,
            "env": {
                "BYPASS_RBAC": "true",
                "INVENTORY_DB_USER": "<user provided by get_credentials>",
                "INVENTORY_DB_PASS": "<password provided by get_credentials>",
                "INVENTORY_DB_NAME": "<DB name provided by get_credentials>",
                "PROMETHEUS_MULTIPROC_DIR": "./temp/prometheus_multiproc_dir/",
                "prometheus_multiproc_dir": "./temp/prometheus_multiproc_dir/",
                "FLASK_ENV": "development"
            }
        }
    ]
}
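The configuration points `PROMETHEUS_MULTIPROC_DIR` at a local directory that typically needs to exist before the debugger starts; a quick way to create it from the project root:
# Create the directory referenced by PROMETHEUS_MULTIPROC_DIR in the launch config.
mkdir -p ./temp/prometheus_multiproc_dir/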
- Launch the api-server on a port different from the one used by the host-inventory-service
- Use the `curl` statements provided above to debug and step through the local code.
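For example, if the debugged api-server listens on port 8001 (a hypothetical choice; any free port other than 8000 works), the earlier host query becomes:
# Query the locally debugged API server instead of the forwarded host-inventory-service.
curl --location 'http://localhost:8001/api/inventory/v1/hosts' \
  --header 'x-rh-identity: <base64-encoded identity>' \
  --header 'Content-Type: application/json' \
  --header 'x-rh-cloud-bulk-query-source: xjoin'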