I want to configure Kafka on my Kubernetes cluster so that it is accessible from outside the cluster. I cannot use a NodePort with the VM's IP address. Instead, I configured one Service of type `LoadBalancer` per broker and modified `init.sh` to use each ELB's external address:
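One Service of type LoadBalancer per broker might look like the following sketch (the name, namespace, labels, and ports are assumptions for illustration, not taken from the issue):

```yaml
# Hypothetical per-broker Service; one of these exists per broker ordinal.
apiVersion: v1
kind: Service
metadata:
  name: outside-0            # matches outside-${KAFKA_BROKER_ID} used in init.sh
  namespace: kafka-test
spec:
  type: LoadBalancer
  selector:
    statefulset.kubernetes.io/pod-name: kafka-0   # pin the ELB to a single broker pod
  ports:
    - name: outside
      port: 32400            # externally advertised port
      targetPort: 9094       # assumed OUTSIDE listener port inside the pod
```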
```shell
OUTSIDE_HOST=$(kubectl get svc outside-${KAFKA_BROKER_ID} -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```
I then created the ConfigMaps and started the Kafka StatefulSets. I can see that the `/etc/kafka/server.properties` file gets populated with the correct DNS entry for the OUTSIDE host:
```
advertised.listeners=OUTSIDE://a17c8eeeavcdefd1234566-12345678.us-east-1.elb.amazonaws.com:32400,PLAINTEXT://:9092
```
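The `init.sh` substitution can be sketched as a small shell function (the function name and ports are hypothetical; the real script writes the value into `server.properties`):

```shell
#!/bin/sh
# Compose the advertised.listeners value for one broker.
# outside_host is the ELB hostname fetched via kubectl in init.sh;
# outside_port is the externally reachable listener port (assumed 32400).
build_advertised_listeners() {
  outside_host="$1"
  outside_port="$2"
  echo "OUTSIDE://${outside_host}:${outside_port},PLAINTEXT://:9092"
}

build_advertised_listeners "a17c8eeeavcdefd1234566-12345678.us-east-1.elb.amazonaws.com" 32400
```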
However, the broker hostnames returned to clients outside the Kubernetes cluster are the internal cluster DNS names:
```
Metadata for all topics (from broker -1: a8eaabbccddeeff-123456.us-east-2.elb.amazonaws.com:9092/bootstrap):
 3 brokers:
  broker 2 at kafka-2.broker.kafka-test.svc.cluster.local:9092
  broker 1 at kafka-1.broker.kafka-test.svc.cluster.local:9092
  broker 0 at kafka-0.broker.kafka-test.svc.cluster.local:9092
```
As a result, the brokers are not reachable from outside the Kubernetes cluster. What other changes are needed for each broker's ELB address (the OUTSIDE address) to show up in the metadata?
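For reference, the metadata listing above matches the output format of kafkacat (librdkafka). A check like the following, run from outside the cluster against the bootstrap ELB (hostname taken from the listing above), shows which broker addresses clients actually receive:

```shell
# Query cluster metadata from outside the cluster; the advertised
# broker addresses appear in the "brokers" section of the output.
kafkacat -L -b a8eaabbccddeeff-123456.us-east-2.elb.amazonaws.com:9092
```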