
Installing a Kafka Broker on Ubuntu 18.04 Using a Docker Image


This post tells you how to install an Apache Kafka broker using a pre-built Docker image. It assumes that you already have Docker installed. See Installing Docker on Ubuntu 18.04 Using Snap for details.

Repository Setup #

Let’s start by creating a working directory.

mkdir -p ~/Workspace/docker
cd ~/Workspace/docker

I use the wurstmeister kafka-docker image on GitHub, which is well maintained and keeps track of the latest Kafka releases. Let’s clone the repository:

$ git clone https://github.com/wurstmeister/kafka-docker.git
$ cd kafka-docker

The master branch is now checked out. Let’s create a new branch to keep local modifications:

git checkout -b local

The kafka-docker repository provides two Docker Compose files that you can use to start the images: one that launches a single broker and another that launches a cluster of brokers that you can scale up and down as required. I go with the former, as I am interested in a basic setup of the Kafka broker for now.

Before you launch the Kafka and Zookeeper images, you need to edit the file docker-compose-single-broker.yml and change the advertised host name. It has to match the IP address of the host Ethernet interface on which you want the broker to become available. As stated in the documentation, don’t use localhost or 127.0.0.1, as these addresses prevent access to the broker from outside the container. This is what the single-broker Docker Compose file looks like in my setup.

$ cat docker-compose-single-broker.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    build: .
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.100
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
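
If you are unsure which address to use for KAFKA_ADVERTISED_HOST_NAME, you can list the host’s IP addresses first; 192.168.1.100 above is the address of my machine, and yours will most likely differ:

$ hostname -I
$ ip -4 addr show

Pick the address of the interface that your Kafka clients can reach.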

Launching the Docker Images #

You can start the docker images now:

$ docker-compose -f docker-compose-single-broker.yml up -d
Building kafka
Step 1/14 : FROM openjdk:8u201-jre-alpine
 ---> ce8477c7d086
...
...
success

Verify that the Docker images are running:

$ docker ps
CONTAINER ID  IMAGE                    COMMAND                  CREATED             STATUS         PORTS                                                NAMES
43774ba958df  wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   21 seconds ago      Up 18 seconds  22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp   kafka-docker_zookeeper_1
52e9ab098bba  kafka-docker_kafka       "start-kafka.sh"         21 seconds ago      Up 19 seconds  0.0.0.0:9092->9092/tcp                               kafka-docker_kafka_1

The Docker images are running now. The Kafka broker is listening on port 9092 and Zookeeper is listening on port 2181. As instructed in the Docker Compose file, the test topic is already available, created with a single partition and one replica (see the KAFKA_CREATE_TOPICS environment variable).
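
You can double-check the topic list without opening a Kafka shell by running kafka-topics.sh directly inside the running container. A minimal sketch, assuming the container name shown by docker ps above and that the image sets KAFKA_HOME, as the wurstmeister Dockerfile does:

$ docker exec -it kafka-docker_kafka_1 sh -c '$KAFKA_HOME/bin/kafka-topics.sh --list --zookeeper zookeeper:2181'

The output should include the test topic created at startup.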

If you need to restart the images later on, for instance after reconfiguring your Docker setup, you may run into “node already exists” error messages. When this happens, issue the following command to remove the existing containers and their state, and then restart the images.

docker-compose rm -svf
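
To complete the restart, bring the images back up with the same command as before:

docker-compose -f docker-compose-single-broker.yml up -d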

Testing #

You can test the installation by producing and consuming sample messages using the built-in Kafka tools available from within the Kafka container.

First, open a Kafka shell in the Kafka container using the following command:

./start-kafka-shell.sh 192.168.1.100 192.168.1.100:2181
bash-4.4#

The general format of this command is

start-kafka-shell.sh <DOCKER_HOST_IP> <ZK_HOST:ZK_PORT>

Once you run the command, you enter the container’s shell, which you can identify by the bash-4.4# prompt.

The Kafka Producer #

Run the Kafka producer inside the Kafka shell as follows:

$KAFKA_HOME/bin/kafka-console-producer.sh --topic=test --broker-list=`broker-list.sh`
>

The > character is the producer’s prompt. Anything you type at this point becomes a Kafka message in the test topic after you hit the Enter key.
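
If you prefer a non-interactive test, the console producer also reads messages from standard input, one per line. A small sketch run inside the same Kafka shell; messages.txt is a hypothetical file you would create yourself:

$KAFKA_HOME/bin/kafka-console-producer.sh --topic=test --broker-list=`broker-list.sh` < messages.txt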

You can create a different topic with the following command:

$KAFKA_HOME/bin/kafka-topics.sh --create --topic other-topic \
--partitions 2 --zookeeper $ZK --replication-factor 1
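
To verify that the new topic has the expected number of partitions and replicas, you can describe it from the same shell; a quick sketch using the $ZK variable that the Kafka shell already provides:

$KAFKA_HOME/bin/kafka-topics.sh --describe --topic other-topic --zookeeper $ZK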

The Kafka Consumer #

Open a different Linux terminal and launch a new Kafka shell there:

./start-kafka-shell.sh 192.168.1.100 192.168.1.100:2181
bash-4.4#

Run the Kafka consumer on this new shell as follows:

$KAFKA_HOME/bin/kafka-console-consumer.sh --topic=test --bootstrap-server=`broker-list.sh`

Now, go back to the producer’s shell and type something at the > prompt, followed by the Enter key. If everything works well, the consumer shell should show the producer’s messages with very little delay.
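
Note that the console consumer only shows messages produced after it starts. If you want to replay everything already stored in the topic, the console consumer accepts a --from-beginning flag; a sketch for the consumer’s shell:

$KAFKA_HOME/bin/kafka-console-consumer.sh --topic=test --bootstrap-server=`broker-list.sh` --from-beginning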

Closing #

The Kafka broker is now ready. You should be able to use producers and consumers running on any other servers with access to your Kafka Docker host.
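
As a quick remote check, any machine with the Kafka command-line tools installed and network access to the Docker host should be able to consume the test topic directly. A sketch; the path to the Kafka tools and the 192.168.1.100 address depend on your own setup:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.100:9092 --topic test --from-beginning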