
# provider-kafka

provider-kafka is a Crossplane provider for managing Kafka resources.

## Usage

  1. Create a credentials file (for example kc.json) containing JSON like the following; see the expected schema here:

    {
      "brokers":[
        "kafka-dev-0.kafka-dev-headless:9092"
       ],
       "sasl":{
         "mechanism":"PLAIN",
         "username":"user",
         "password":"<your-password>"
       }
    }
  2. Create a Kubernetes secret containing the above config:

    kubectl -n crossplane-system create secret generic kafka-creds --from-file=credentials=kc.json
  3. Create a ProviderConfig; see this for an example (a minimal sketch also follows this list).

  4. Create a managed resource; see this for an example that creates a Kafka topic (the sketch after this list includes a Topic as well).
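
For orientation, here is a minimal sketch of both objects. The API group versions and field names below are assumptions based on the upstream crossplane-contrib/provider-kafka; check this repository's examples and installed CRDs for the authoritative schema.

```yaml
# Sketch only: verify group versions and fields against this provider's CRDs.
apiVersion: kafka.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: kafka-creds
      key: credentials
---
apiVersion: topic.kafka.crossplane.io/v1alpha1
kind: Topic
metadata:
  name: example-topic
spec:
  forProvider:
    replicationFactor: 1
    partitions: 1
  providerConfigRef:
    name: default
```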

## Development

### Setting up a Development Kafka Cluster

The following instructions set up a development environment with a locally running Kafka installation (SASL/PLAIN enabled). To further customize your instance, see the available Helm chart parameters here.

  1. (Optional) Create a local kind cluster unless you want to develop against an existing Kubernetes cluster (a kind sketch follows this list).

  2. Install the Kafka helm chart:

    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update bitnami
    helm upgrade --install kafka-dev -n kafka-cluster bitnami/kafka \
       --create-namespace \
       --version 31.0.0 \
       --set auth.clientProtocol=sasl \
       --set deleteTopicEnable=true \
       --set authorizerClassName="kafka.security.authorizer.AclAuthorizer" \
       --set controller.replicaCount=1 \
       --wait

    The username is user1; obtain the password with:

    export KAFKA_PASSWORD=$(kubectl -n kafka-cluster get secret kafka-dev-user-passwords -oyaml | yq '.data.client-passwords | @base64d')

    Create the Kubernetes secret to be used by the ProviderConfig with:

    cat <<EOF > /tmp/creds.json
    {
       "brokers": [
          "kafka-dev-controller-headless.kafka-cluster.svc:9092"
       ],
       "sasl": {
          "mechanism": "PLAIN",
          "username": "user1",
          "password": "${KAFKA_PASSWORD}"
       }
    }
    EOF
    
    kubectl -n kafka-cluster create secret generic kafka-creds \
       --from-file=credentials=/tmp/creds.json
  3. Install kubefwd (an install sketch follows this list).

  4. Run kubefwd for the kafka-cluster namespace, which makes the internal Kubernetes services locally accessible:

    sudo kubefwd svc -n kafka-cluster -c ~/.kube/config
  5. To run tests, use the KAFKA_PASSWORD environment variable exported in step 2 (a sketch follows this list).

  6. (Optional) Install the kcl Kafka CLI.

    1. Create a config file for the client with:

      mkdir -p ~/.kcl
      cat <<EOF > ~/.kcl/config.toml
      seed_brokers = ["kafka-dev-0.kafka-dev-headless:9092"]
      timeout_ms = 10000
      [sasl]
      method = "plain"
      user = "user1"
      pass = "${KAFKA_PASSWORD}"
      EOF
    2. Verify that the CLI can talk to the Kafka cluster:

      export KCL_CONFIG_DIR=~/.kcl
      kcl metadata --all
  7. (Optional) Alternatively, deploy the Redpanda Console with the manifest below; a port-forward sketch for reaching it follows this list:

    kubectl create -f - <<EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: rp-console
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: rp-console
      template:
        metadata:
          labels:
            app: rp-console
        spec:
          containers:
            - name: rp-console
              image: docker.redpanda.com/redpandadata/console:latest
              ports:
                - containerPort: 8001
              env:
                - name: KAFKA_TLS_ENABLED
                  value: "false"
                - name: KAFKA_SASL_ENABLED
                  value: "true"
                - name: KAFKA_SASL_USERNAME
                  value: user1
                - name: KAFKA_SASL_PASSWORD
                  value: ${KAFKA_PASSWORD}
                - name: KAFKA_BROKERS
                  value: kafka-dev-controller-headless.kafka-cluster.svc:9092
    EOF
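
If you take the kind route in step 1, a single-node cluster is enough; the cluster name below is only an illustration:

```console
# Creates a local cluster and switches the current kubectl context to it
kind create cluster --name provider-kafka-dev
```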
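For step 3, kubefwd can be installed via Homebrew, for example; see the kubefwd project for other platforms and binary releases:

```console
brew install txn2/tap/kubefwd
```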
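For step 5, a minimal sketch, assuming the tests are run with plain go test and pick up KAFKA_PASSWORD from the environment as described above:

```console
# Run the test suite with the broker password exported in step 2
KAFKA_PASSWORD="${KAFKA_PASSWORD}" go test ./...
```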
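To reach the Redpanda Console from step 7, a port-forward along the following lines should work. This assumes the Deployment was created in your current namespace and that the console listens on its default port 8080 (the containerPort declared above is informational only):

```console
# Then open http://localhost:8080
kubectl port-forward deploy/rp-console 8080:8080
```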

### Building and Running the provider locally

Run against a Kubernetes cluster:

```console
# Install CRDs and run the provider locally
make dev

# Create a ProviderConfig pointing to the local Kafka cluster
kubectl apply -f - <<EOF
apiVersion: kafka.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  credentials:
    secretRef:
      key: credentials
      name: kafka-creds
      namespace: kafka-cluster
    source: Secret
EOF
```

Build, push, and install:

```console
make all
```

Build image:

```console
make image
```

Push image:

```console
make push
```

Build binary:

```console
make build
```
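
Once make dev is running and the ProviderConfig above is applied, a quick end-to-end check is to create a Topic (see the sketch under Usage) and watch it reconcile. The resource name below assumes the upstream provider-kafka CRDs:

```console
# Watch the managed resource until it reports READY and SYNCED
kubectl get topics.topic.kafka.crossplane.io -w
```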
