# provider-kafka

`provider-kafka` is a Crossplane Provider that manages Kafka resources.
### Getting Started

1. Create a provider credentials file (e.g. `kc.json`) containing JSON like the following:

   ```json
   {
     "brokers": ["kafka-dev-0.kafka-dev-headless:9092"],
     "sasl": {
       "mechanism": "PLAIN",
       "username": "user",
       "password": "<your-password>"
     }
   }
   ```

2. Create a Kubernetes secret containing the above config:

   ```console
   kubectl -n crossplane-system create secret generic kafka-creds --from-file=credentials=kc.json
   ```

3. Create a `ProviderConfig` referencing the secret.

4. Create a managed resource, for example a Kafka topic.
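Before creating the secret, it can be worth confirming that the credentials file is well-formed JSON, since the provider will fail later if it is not. A minimal sketch — the `/tmp/kc.json` path and the `example` password are illustrative placeholders, not part of provider-kafka:

```shell
# Write an example credentials file and confirm it parses as JSON before
# handing it to kubectl. Quoted 'EOF' prevents variable expansion here.
cat <<'EOF' > /tmp/kc.json
{
  "brokers": ["kafka-dev-0.kafka-dev-headless:9092"],
  "sasl": { "mechanism": "PLAIN", "username": "user", "password": "example" }
}
EOF
python3 -m json.tool /tmp/kc.json > /dev/null && echo "kc.json is valid JSON"
```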
### Developing locally

The following instructions set up a development environment with a locally running Kafka installation (SASL/PLAIN enabled). To further change the configuration of your instance, see the chart's available Helm parameters.
1. (Optional) Create a local kind cluster, unless you want to develop against an existing Kubernetes cluster.

2. Install the Kafka Helm chart:

   ```console
   helm repo add bitnami https://charts.bitnami.com/bitnami
   helm repo update bitnami
   helm upgrade --install kafka-dev -n kafka-cluster bitnami/kafka \
     --create-namespace \
     --version 31.0.0 \
     --set auth.clientProtocol=sasl \
     --set deleteTopicEnable=true \
     --set authorizerClassName="kafka.security.authorizer.AclAuthorizer" \
     --set controller.replicaCount=1 \
     --wait
   ```

   The username is `user1`; obtain the password with:

   ```console
   export KAFKA_PASSWORD=$(kubectl get secret kafka-dev-user-passwords -oyaml | yq '.data.client-passwords | @base64d')
   ```

3. Create the Kubernetes secret to be used by the `ProviderConfig`:

   ```console
   cat <<EOF > /tmp/creds.json
   {
     "brokers": [
       "kafka-dev-controller-headless.kafka-cluster.svc:9092"
     ],
     "sasl": {
       "mechanism": "PLAIN",
       "username": "user1",
       "password": "${KAFKA_PASSWORD}"
     }
   }
   EOF
   kubectl -n kafka-cluster create secret generic kafka-creds \
     --from-file=credentials=/tmp/creds.json
   ```

4. Install kubefwd.

5. Run `kubefwd` for the `kafka-cluster` namespace, which makes internal Kubernetes services locally accessible:

   ```console
   sudo kubefwd svc -n kafka-cluster -c ~/.kube/config
   ```

6. To run tests, use the `KAFKA_PASSWORD` environment variable from step 2.

7. (Optional) Install the `kcl` Kafka CLI and create a config file for it:

   ```console
   cat <<EOF > ~/.kcl/config.toml
   seed_brokers = ["kafka-dev-0.kafka-dev-headless:9092"]
   timeout_ms = 10000

   [sasl]
   method = "plain"
   user = "user1"
   pass = "${KAFKA_PASSWORD}"
   EOF
   ```

   Verify that the CLI can talk to the Kafka cluster:

   ```console
   export KCL_CONFIG_DIR=~/.kcl
   kcl metadata --all
   ```

8. (Optional) Or deploy the RedPanda console with:

   ```console
   kubectl create -f - <<EOF
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: rp-console
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: rp-console
     template:
       metadata:
         labels:
           app: rp-console
       spec:
         containers:
         - name: rp-console
           image: docker.redpanda.com/redpandadata/console:latest
           ports:
           - containerPort: 8001
           env:
           - name: KAFKA_TLS_ENABLED
             value: "false"
           - name: KAFKA_SASL_ENABLED
             value: "true"
           - name: KAFKA_SASL_USERNAME
             value: user1
           - name: KAFKA_SASL_PASSWORD
             value: ${KAFKA_PASSWORD}
           - name: KAFKA_BROKERS
             value: kafka-dev-controller-headless.kafka-cluster.svc:9092
   EOF
   ```
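Several of the steps above rely on unquoted here-docs to interpolate `KAFKA_PASSWORD` into a config file at write time. A standalone sketch of that pattern, using a temporary directory and an example password instead of the real `~/.kcl` path and secret:

```shell
# Demonstrate unquoted here-doc expansion: ${KAFKA_PASSWORD} is substituted
# when the file is written. Path and password below are placeholders.
export KAFKA_PASSWORD='example-password'
mkdir -p /tmp/kcl-demo
cat <<EOF > /tmp/kcl-demo/config.toml
seed_brokers = ["kafka-dev-0.kafka-dev-headless:9092"]
timeout_ms = 10000

[sasl]
method = "plain"
user = "user1"
pass = "${KAFKA_PASSWORD}"
EOF
grep '^pass' /tmp/kcl-demo/config.toml   # → pass = "example-password"
```

If the password were expanded at read time instead, quoting the delimiter (`<<'EOF'`) would keep the literal `${KAFKA_PASSWORD}` in the file.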
### Building and Running the provider locally
Run against a Kubernetes cluster:

```console
# Install CRDs and run the provider locally
make dev
```

Create a `ProviderConfig` pointing to the local Kafka cluster:

```console
kubectl apply -f - <<EOF
apiVersion: kafka.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    secretRef:
      key: credentials
      name: kafka-creds
      namespace: kafka-cluster
    source: Secret
EOF
```

Build, push, and install:

```console
make all
```

Build image:

```console
make image
```

Push image:

```console
make push
```

Build binary:

```console
make build
```