Kafka connectivity
A large percentage of questions and issues opened against the kafka-docker project concern configuring Kafka networking. This is often a case of not understanding the Kafka requirements, or not understanding docker networking.

This page aims to explain some of the basic requirements to help people resolve any initial issues they may encounter. It is not an exhaustive guide to configuring Kafka and/or Docker.
Kafka requirements
There are three main requirements for configuring Kafka networking:

- each Broker must be able to talk to Zookeeper
- each Broker must be able to talk to every other Broker
- each Consumer/Producer must be able to talk to every Broker

The following diagram represents the different communication paths:

This means that for a complete working Kafka setup, each one of these components must be able to route to the others and have accessible ports.
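Each of these paths ultimately reduces to "can component X open a TCP connection to address Y, port Z". A minimal, self-contained Python sketch of such a reachability check (the function name and the example address are illustrative, not part of the kafka-docker project):

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# For example, a consumer on the host could check the broker path with:
# can_connect("localhost", 9092)
```

If any of the three paths fails such a check, the corresponding Kafka requirement is not met and the setup will not work end to end.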
Kafka in Docker
First, let's take a look at the simplest use-case - running a single Kafka broker.
docker-compose-single-broker.yml
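The file's contents are not reproduced here; as a rough sketch of what such a single-broker compose file typically looks like (the image names and the advertised host address are assumptions, not a verbatim copy of the project's file):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"          # publish Zookeeper directly on the host
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"          # publish the broker port directly on the host
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1    # assumption: single broker, clients on the host
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181  # resolved via the compose network's DNS
```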
Two containers are created which share the kafka-docker_default bridge network created by docker-compose. Here you can see both ports from the container are mapped directly to the host's network interface (2181 and 9092).

NOTE: When using docker-compose, all containers are generally started in the same networking namespace. I say 'generally' as you can configure multiple networks, but we're sticking to the simple use-case here.

In this setup all Kafka requirements are met:

- the Kafka container can talk to Zookeeper via the zookeeper DNS entry that exists within the kafka-docker_default network
- the Kafka container can talk to itself on localhost:9092 (within the container's networking namespace)
- Consumers and Producers can talk to the Kafka broker using the localhost:9092 address. This is because docker will generally bind the port to all interfaces (0.0.0.0) when it is published.

Next, let's look at the common use-case - running multiple Kafka brokers.
docker-compose.yml
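Again, a sketch of the shape of the multi-broker compose file rather than a verbatim copy (image names are assumptions; the advertised host name is the host interface address used in the diagrams):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"               # expose only: docker binds an ephemeral host port
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.2  # the host's interface address
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

Scaling the service (e.g. `docker-compose scale kafka=2`) then gives each broker container its own ephemeral host port, such as the 32000 and 32001 shown in the diagrams.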
Here, the key differences are two configurations in the docker-compose.yml file: the ports setting and the KAFKA_ADVERTISED_HOST_NAME environment variable.

Because it is only possible to bind to each unique port once on a single interface, we can no longer publish the Broker port (9092). Instead, we simply expose the port:

ports:
  - "9092"
This results in docker binding an ephemeral port on the host interface to the container port.
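The "one bind per port per interface" constraint is plain TCP socket behaviour rather than anything docker-specific. A self-contained Python sketch demonstrating it (the port is OS-assigned rather than Kafka's 9092, so the sketch runs anywhere):

```python
import socket

# The first socket grabs an OS-assigned ephemeral port on the loopback interface.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))
first.listen()
port = first.getsockname()[1]

# A second socket cannot bind the same (interface, port) pair - this is why
# two broker containers cannot both publish host port 9092.
second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    print("unexpected: second bind succeeded")
except OSError:
    print("second bind refused: address already in use")
finally:
    second.close()
    first.close()
```

Publishing each broker on an ephemeral host port sidesteps this conflict: docker, not the compose file, picks a distinct free port for every container.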
This should hopefully explain why we had to use the host's interface address in the KAFKA_ADVERTISED_HOST_NAME environment variable. Let's cement this understanding by adding consumers / producers to the diagram:

This explains why all Kafka requirements are met:

- the Kafka containers can still use the zookeeper DNS entry to talk to zookeeper in the kafka-docker_default network
- the Kafka containers can talk to each other by routing from the kafka-docker_default bridge network to the listening host interface (192.168.1.2:32000 and 192.168.1.2:32001)
- Consumers and Producers can also talk to the Kafka containers via the host interface (192.168.1.2:32000 and 192.168.1.2:32001)
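The advertised address matters because whatever the broker records as its advertised address is what clients are told to reconnect to after the initial bootstrap. As a sketch, assuming KAFKA_ADVERTISED_HOST_NAME is mapped onto the classic advertised.host.name / advertised.port broker properties (the exact property names depend on the Kafka version), the first broker above would end up with something like:

```
advertised.host.name=192.168.1.2
advertised.port=32000
```

Clients bootstrap against 192.168.1.2:32000, receive this advertised address back in the broker metadata, and use it for all subsequent connections - which is why advertising localhost or a container-internal address breaks external consumers and producers.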