1. Pull the two official images:
docker pull confluentinc/cp-zookeeper
docker pull confluentinc/cp-kafka
2. Create a docker-compose.yml with the following contents:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper
    container_name: zookeeper
    mem_limit: 1024M
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  kafka:
    image: confluentinc/cp-kafka
    container_name: kafka
    mem_limit: 1024M
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_HOST_NAME: 127.0.0.1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://127.0.0.1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_HEAP_OPTS: "-Xmx512M -Xms16M"
3. Start the containers with docker-compose:
docker-compose up -d
4. Open two new terminal windows and attach to the kafka container in each with:
docker exec -it kafka /bin/bash
5. In one window, create a topic and start a console producer:
kafka-topics --zookeeper zookeeper:2181 --create --replication-factor 1 --partitions 1 --topic kafkatest
kafka-console-producer --broker-list localhost:9092 --topic kafkatest
6. In the other window, run a console consumer:
kafka-console-consumer --bootstrap-server localhost:9092 --topic kafkatest --from-beginning
Now anything you type into the producer window will appear in the consumer window.
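The same round trip can be scripted non-interactively from the host instead of two attached shells (a sketch, assuming the containers from the compose file above are running; --max-messages and --timeout-ms are standard kafka-console-consumer flags that make it exit instead of waiting forever):

```shell
# Produce one message by piping it into the console producer via docker exec -i.
echo "hello kafka" | docker exec -i kafka kafka-console-producer \
  --broker-list localhost:9092 --topic kafkatest

# Read it back; exit after one message or after 10 seconds, whichever comes first.
docker exec kafka kafka-console-consumer \
  --bootstrap-server localhost:9092 --topic kafkatest \
  --from-beginning --max-messages 1 --timeout-ms 10000
```

This is handy for a quick smoke test of the broker in CI or after changing the compose file.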