Deploy a Kafka Cluster with Docker in 30 Minutes

Source: https://www.cnblogs.com/zer0Black/p/18195613

Docker network planning

docker network create kafka-net --subnet 172.20.0.0/16

docker network ls
  • zookeeper1 (172.20.0.11, 2183:2181)
  • zookeeper2 (172.20.0.12, 2184:2181)
  • zookeeper3 (172.20.0.13, 2185:2181)
  • kafka1 (172.20.0.14, internal 9093:9093, external 9193:9193)
  • kafka2 (172.20.0.15, internal 9094:9094, external 9194:9194)
  • kafka3 (172.20.0.16, internal 9095:9095, external 9195:9195)
  • kafka-manager (172.20.0.10, 9000:9000)
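
Before deploying anything, it is worth confirming that the network exists with the planned subnet (an optional check using the standard docker CLI):

docker network inspect kafka-net --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
# expected output: 172.20.0.0/16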

Preparing the configuration and authentication files

Prepare the following two files. They can be placed anywhere, as long as the paths referenced in the compose files below point at them.

  1. Create a JAAS authentication file shared by ZooKeeper and Kafka: server_jass.conf. This tutorial assumes it is placed at /root/kafka/kafka-sasl/server_jass.conf. The Server section authenticates clients connecting to ZooKeeper, Client is used by the brokers when they connect to ZooKeeper, and KafkaServer/KafkaClient hold the broker-side and client-side SASL/PLAIN accounts.
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="test"
    password="test@QWER";
};

Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="test"
    password="test@QWER"  
    user_admin="test@QWER"
    user_test="test@QWER"; // the account is test, the password is test@QWER
};

KafkaServer {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="test"
    password="test@QWER"
    user_test="test@QWER";
};

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="test"
    password="test@QWER";
};
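Once the ZooKeeper containers from the next section are running, the digest account defined in the Server section above can be exercised with zkCli.sh, which ships in the official zookeeper image (a hedged example, assuming the zook1 container defined below):

docker exec -it zook1 zkCli.sh -server localhost:2181
# inside the zkCli shell, authenticate with the digest account and browse:
#   addauth digest test:test@QWER
#   ls /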
  2. Create a patched kafka-run-class.sh script to work around a JMX port conflict (see the note after the script): kafka-run-class.sh. This tutorial assumes it is placed at /root/kafka/kafka-run-class.sh
#!/bin/bash
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

if [ $# -lt 1 ];
then
  echo "USAGE: $0 [-daemon] [-name servicename] [-loggc] classname [opts]"
  exit 1
fi

# CYGWIN == 1 if Cygwin is detected, else 0.
if [[ $(uname -a) =~ "CYGWIN" ]]; then
  CYGWIN=1
else
  CYGWIN=0
fi

if [ -z "$INCLUDE_TEST_JARS" ]; then
  INCLUDE_TEST_JARS=false
fi

# Exclude jars not necessary for running commands.
regex="(-(test|test-sources|src|scaladoc|javadoc)\.jar|jar.asc)$"
should_include_file() {
  if [ "$INCLUDE_TEST_JARS" = true ]; then
    return 0
  fi
  file=$1
  if [ -z "$(echo "$file" | egrep "$regex")" ] ; then
    return 0
  else
    return 1
  fi
}

base_dir=$(dirname $0)/..

if [ -z "$SCALA_VERSION" ]; then
  SCALA_VERSION=2.13.5
  if [[ -f "$base_dir/gradle.properties" ]]; then
    SCALA_VERSION=`grep "^scalaVersion=" "$base_dir/gradle.properties" | cut -d= -f 2`
  fi
fi

if [ -z "$SCALA_BINARY_VERSION" ]; then
  SCALA_BINARY_VERSION=$(echo $SCALA_VERSION | cut -f 1-2 -d '.')
fi

# run ./gradlew copyDependantLibs to get all dependant jars in a local dir
shopt -s nullglob
if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
  for dir in "$base_dir"/core/build/dependant-libs-${SCALA_VERSION}*;
  do
    CLASSPATH="$CLASSPATH:$dir/*"
  done
fi

for file in "$base_dir"/examples/build/libs/kafka-examples*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
  clients_lib_dir=$(dirname $0)/../clients/build/libs
  streams_lib_dir=$(dirname $0)/../streams/build/libs
  streams_dependant_clients_lib_dir=$(dirname $0)/../streams/build/dependant-libs-${SCALA_VERSION}
else
  clients_lib_dir=/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs
  streams_lib_dir=$clients_lib_dir
  streams_dependant_clients_lib_dir=$streams_lib_dir
fi


for file in "$clients_lib_dir"/kafka-clients*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

for file in "$streams_lib_dir"/kafka-streams*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

if [ -z "$UPGRADE_KAFKA_STREAMS_TEST_VERSION" ]; then
  for file in "$base_dir"/streams/examples/build/libs/kafka-streams-examples*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
else
  VERSION_NO_DOTS=`echo $UPGRADE_KAFKA_STREAMS_TEST_VERSION | sed 's/\.//g'`
  SHORT_VERSION_NO_DOTS=${VERSION_NO_DOTS:0:((${#VERSION_NO_DOTS} - 1))} # remove last char, ie, bug-fix number
  for file in "$base_dir"/streams/upgrade-system-tests-$SHORT_VERSION_NO_DOTS/build/libs/kafka-streams-upgrade-system-tests*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$file":"$CLASSPATH"
    fi
  done
  if [ "$SHORT_VERSION_NO_DOTS" = "0100" ]; then
    CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.8.jar":"$CLASSPATH"
    CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.6.jar":"$CLASSPATH"
  fi
  if [ "$SHORT_VERSION_NO_DOTS" = "0101" ]; then
    CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zkclient-0.9.jar":"$CLASSPATH"
    CLASSPATH="/opt/kafka-$UPGRADE_KAFKA_STREAMS_TEST_VERSION/libs/zookeeper-3.4.8.jar":"$CLASSPATH"
  fi
fi

for file in "$streams_dependant_clients_lib_dir"/rocksdb*.jar;
do
  CLASSPATH="$CLASSPATH":"$file"
done

for file in "$streams_dependant_clients_lib_dir"/*hamcrest*.jar;
do
  CLASSPATH="$CLASSPATH":"$file"
done

for file in "$base_dir"/shell/build/libs/kafka-shell*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

for dir in "$base_dir"/shell/build/dependant-libs-${SCALA_VERSION}*;
do
  CLASSPATH="$CLASSPATH:$dir/*"
done

for file in "$base_dir"/tools/build/libs/kafka-tools*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

for dir in "$base_dir"/tools/build/dependant-libs-${SCALA_VERSION}*;
do
  CLASSPATH="$CLASSPATH:$dir/*"
done

for cc_pkg in "api" "transforms" "runtime" "file" "mirror" "mirror-client" "json" "tools" "basic-auth-extension"
do
  for file in "$base_dir"/connect/${cc_pkg}/build/libs/connect-${cc_pkg}*.jar;
  do
    if should_include_file "$file"; then
      CLASSPATH="$CLASSPATH":"$file"
    fi
  done
  if [ -d "$base_dir/connect/${cc_pkg}/build/dependant-libs" ] ; then
    CLASSPATH="$CLASSPATH:$base_dir/connect/${cc_pkg}/build/dependant-libs/*"
  fi
done

# classpath addition for release
for file in "$base_dir"/libs/*;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done

for file in "$base_dir"/core/build/libs/kafka_${SCALA_BINARY_VERSION}*.jar;
do
  if should_include_file "$file"; then
    CLASSPATH="$CLASSPATH":"$file"
  fi
done
shopt -u nullglob

if [ -z "$CLASSPATH" ] ; then
  echo "Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=$SCALA_VERSION'"
  exit 1
fi

# JMX settings
if [ -z "$KAFKA_JMX_OPTS" ]; then
  KAFKA_JMX_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false  -Dcom.sun.management.jmxremote.ssl=false "
fi

# JMX port to use. Attach the remote JMX port only to the broker process
# (kafka.Kafka): CLI tools exec'd inside the same container inherit JMX_PORT
# and would otherwise try to bind the port the broker already holds.
ISKAFKASERVER="false"
if [[ "$*" =~ "kafka.Kafka" ]]; then
  ISKAFKASERVER="true"
fi
if [ "$JMX_PORT" ] && [ "$ISKAFKASERVER" = "true" ]; then
  KAFKA_JMX_OPTS="$KAFKA_JMX_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT "
fi

# Log directory to use
if [ "x$LOG_DIR" = "x" ]; then
  LOG_DIR="$base_dir/logs"
fi

# Log4j settings
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  LOG4J_DIR="$base_dir/config/tools-log4j.properties"
  # If Cygwin is detected, LOG4J_DIR is converted to Windows format.
  (( CYGWIN )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}")
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}"
else
  # create logs directory
  if [ ! -d "$LOG_DIR" ]; then
    mkdir -p "$LOG_DIR"
  fi
fi

# If Cygwin is detected, LOG_DIR is converted to Windows format.
(( CYGWIN )) && LOG_DIR=$(cygpath --path --mixed "${LOG_DIR}")
KAFKA_LOG4J_OPTS="-Dkafka.logs.dir=$LOG_DIR $KAFKA_LOG4J_OPTS"

# Generic jvm settings you want to add
if [ -z "$KAFKA_OPTS" ]; then
  KAFKA_OPTS=""
fi

# Set Debug options if enabled
if [ "x$KAFKA_DEBUG" != "x" ]; then

    # Use default ports
    DEFAULT_JAVA_DEBUG_PORT="5005"

    if [ -z "$JAVA_DEBUG_PORT" ]; then
        JAVA_DEBUG_PORT="$DEFAULT_JAVA_DEBUG_PORT"
    fi

    # Use the defaults if JAVA_DEBUG_OPTS was not set
    DEFAULT_JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=${DEBUG_SUSPEND_FLAG:-n},address=$JAVA_DEBUG_PORT"
    if [ -z "$JAVA_DEBUG_OPTS" ]; then
        JAVA_DEBUG_OPTS="$DEFAULT_JAVA_DEBUG_OPTS"
    fi

    echo "Enabling Java debug options: $JAVA_DEBUG_OPTS"
    KAFKA_OPTS="$JAVA_DEBUG_OPTS $KAFKA_OPTS"
fi

# Which java to use
if [ -z "$JAVA_HOME" ]; then
  JAVA="java"
else
  JAVA="$JAVA_HOME/bin/java"
fi

# Memory options
if [ -z "$KAFKA_HEAP_OPTS" ]; then
  KAFKA_HEAP_OPTS="-Xmx256M"
fi

# JVM performance options
# MaxInlineLevel=15 is the default since JDK 14 and can be removed once older JDKs are no longer supported
if [ -z "$KAFKA_JVM_PERFORMANCE_OPTS" ]; then
  KAFKA_JVM_PERFORMANCE_OPTS="-server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:MaxInlineLevel=15 -Djava.awt.headless=true"
fi

while [ $# -gt 0 ]; do
  COMMAND=$1
  case $COMMAND in
    -name)
      DAEMON_NAME=$2
      CONSOLE_OUTPUT_FILE=$LOG_DIR/$DAEMON_NAME.out
      shift 2
      ;;
    -loggc)
      if [ -z "$KAFKA_GC_LOG_OPTS" ]; then
        GC_LOG_ENABLED="true"
      fi
      shift
      ;;
    -daemon)
      DAEMON_MODE="true"
      shift
      ;;
    *)
      break
      ;;
  esac
done

# GC options
GC_FILE_SUFFIX='-gc.log'
GC_LOG_FILE_NAME=''
if [ "x$GC_LOG_ENABLED" = "xtrue" ]; then
  GC_LOG_FILE_NAME=$DAEMON_NAME$GC_FILE_SUFFIX

  # The first segment of the version number, which is '1' for releases before Java 9
  # it then becomes '9', '10', ...
  # Some examples of the first line of `java --version`:
  # 8 -> java version "1.8.0_152"
  # 9.0.4 -> java version "9.0.4"
  # 10 -> java version "10" 2018-03-20
  # 10.0.1 -> java version "10.0.1" 2018-04-17
  # We need to match to the end of the line to prevent sed from printing the characters that do not match
  JAVA_MAJOR_VERSION=$("$JAVA" -version 2>&1 | sed -E -n 's/.* version "([0-9]*).*$/\1/p')
  if [[ "$JAVA_MAJOR_VERSION" -ge "9" ]] ; then
    KAFKA_GC_LOG_OPTS="-Xlog:gc*:file=$LOG_DIR/$GC_LOG_FILE_NAME:time,tags:filecount=10,filesize=100M"
  else
    KAFKA_GC_LOG_OPTS="-Xloggc:$LOG_DIR/$GC_LOG_FILE_NAME -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=100M"
  fi
fi

# Remove a possible colon prefix from the classpath (happens at lines like `CLASSPATH="$CLASSPATH:$file"` when CLASSPATH is blank)
# Syntax used on the right side is native Bash string manipulation; for more details see
# http://tldp.org/LDP/abs/html/string-manipulation.html, specifically the section titled "Substring Removal"
CLASSPATH=${CLASSPATH#:}

# If Cygwin is detected, classpath is converted to Windows format.
(( CYGWIN )) && CLASSPATH=$(cygpath --path --mixed "${CLASSPATH}")

# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
  nohup "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else
  exec "$JAVA" $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp "$CLASSPATH" $KAFKA_OPTS "$@"
fi
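
The only functional change from the stock Apache Kafka script is the "JMX port to use" block: the stock version attaches -Dcom.sun.management.jmxremote.port=$JMX_PORT to every command, so once a broker holds the port, any CLI tool exec'd into the same container fails with "Port already in use". The patched block attaches the port only to the broker process (kafka.Kafka). A hedged illustration, once the kafka1 container from the next section is running:

# With the patched script mounted, CLI tools no longer inherit the broker's
# JMX port, so this runs cleanly even though the broker holds port 9999:
docker exec kafka1 kafka-topics.sh --list --bootstrap-server localhost:9093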

Deploying the images

  1. Create the ZooKeeper compose file: zk-docker-compose.yml
services:
  zook1:
    image: zookeeper:latest
    #restart: always # restart automatically
    hostname: zook1
    container_name: zook1 # container name, so a meaningful name shows up in Rancher
    ports:
    - 2183:2181 # map the container's default ZooKeeper port to the host
    volumes: # mounted volumes: host path on the left, container path on the right
    - "/Users/konsy/Development/volume/zkcluster/zook1/data:/data"
    - "/Users/konsy/Development/volume/zkcluster/zook1/datalog:/datalog"
    - "/Users/konsy/Development/volume/zkcluster/zook1/logs:/logs"
    - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/" # mount the JAAS credentials file
    environment:
        ZOO_MY_ID: 1  # unique id of this ZooKeeper node
        ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181 # quorum port 2888, leader-election port 3888, client port 2181
        ZOO_TLS_QUORUM_CLIENT_AUTH: need
        SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf # point the JVM at the JAAS credentials file
    networks:
        kafka-net:
            ipv4_address: 172.20.0.11

  zook2:   
    image: zookeeper:latest
    #restart: always # restart automatically
    hostname: zook2
    container_name: zook2 # container name, so a meaningful name shows up in Rancher
    ports:
    - 2184:2181 # map the container's default ZooKeeper port to the host
    volumes:
    - "/Users/konsy/Development/volume/zkcluster/zook2/data:/data"
    - "/Users/konsy/Development/volume/zkcluster/zook2/datalog:/datalog"
    - "/Users/konsy/Development/volume/zkcluster/zook2/logs:/logs"
    - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/"
    environment:
        ZOO_MY_ID: 2  # unique id of this ZooKeeper node
        ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181
        ZOO_TLS_QUORUM_CLIENT_AUTH: need
        SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf
    networks:
        kafka-net:
            ipv4_address: 172.20.0.12
            
  zook3:   
    image: zookeeper:latest
    #restart: always # restart automatically
    hostname: zook3
    container_name: zook3 # container name, so a meaningful name shows up in Rancher
    ports:
    - 2185:2181 # map the container's default ZooKeeper port to the host
    volumes:
    - "/Users/konsy/Development/volume/zkcluster/zook3/data:/data"
    - "/Users/konsy/Development/volume/zkcluster/zook3/datalog:/datalog"
    - "/Users/konsy/Development/volume/zkcluster/zook3/logs:/logs"
    - "/root/kafka/kafka-sasl/:/opt/zookeeper/secrets/"
    environment:
        ZOO_MY_ID: 3  # unique id of this ZooKeeper node
        ZOO_SERVERS: server.1=zook1:2888:3888;2181 server.2=zook2:2888:3888;2181 server.3=zook3:2888:3888;2181
        ZOO_TLS_QUORUM_CLIENT_AUTH: need
        SERVER_JVMFLAGS: -Djava.security.auth.login.config=/opt/zookeeper/secrets/server_jass.conf
    networks:
        kafka-net:
            ipv4_address: 172.20.0.13
networks:
  kafka-net:
    external: true
  2. Run Docker Compose to deploy ZooKeeper:
docker compose -p zookeeper -f ./zk-docker-compose.yml up -d
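Optionally verify that the ensemble formed; zkServer.sh is on the PATH in the official zookeeper image, and one node should report Mode: leader while the other two report Mode: follower:

for c in zook1 zook2 zook3; do
  echo "== $c =="
  docker exec "$c" zkServer.sh status
done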
  3. Create the Kafka cluster compose file: kafka-docker-compose.yml
services:
  kafka1:
    image: docker.io/wurstmeister/kafka
    #restart: always # restart automatically
    hostname: 172.20.0.14
    container_name: kafka1
    ports:
      - 9093:9093
    volumes:
      - /Users/konsy/Development/volume/kafka/kafka1/wurstmeister/kafka:/wurstmeister/kafka
      - /Users/konsy/Development/volume/kafka/kafka1/kafka:/kafka
      - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
      - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9093 # the Docker host's IP; adjust to your environment
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
      KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      JMX_PORT: 9999 # expose a JMX port so cluster metrics can be collected
      KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
    external_links:
      - zook1
      - zook2
      - zook3
    networks:
      kafka-net:
        ipv4_address: 172.20.0.14

  kafka2:
    image: docker.io/wurstmeister/kafka
    #restart: always # restart automatically
    hostname: 172.20.0.15
    container_name: kafka2
    ports:
      - 9094:9094
    volumes:
      - /Users/konsy/Development/volume/kafka/kafka2/wurstmeister/kafka:/wurstmeister/kafka
      - /Users/konsy/Development/volume/kafka/kafka2/kafka:/kafka
      - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
      - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://:9094
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
      KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      JMX_PORT: 9999 # expose a JMX port so cluster metrics can be collected
      KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
    external_links:
      - zook1
      - zook2
      - zook3
    networks:
      kafka-net:
        ipv4_address: 172.20.0.15

  kafka3:
    image: docker.io/wurstmeister/kafka
    #restart: always # restart automatically
    hostname: 172.20.0.16
    container_name: kafka3
    ports:
      - 9095:9095
    volumes:
      - /Users/konsy/Development/volume/kafka/kafka3/wurstmeister/kafka:/wurstmeister/kafka
      - /Users/konsy/Development/volume/kafka/kafka3/kafka:/kafka
      - /root/kafka/kafka-sasl/:/opt/kafka/secrets/
      - /root/kafka/kafka-run-class.sh:/opt/kafka_2.13-2.8.1/bin/kafka-run-class.sh
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://:9095
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.198.131:9095
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: PLAIN
      KAFKA_SASL_ENABLED_MECHANISMS: PLAIN
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
      KAFKA_ZOOKEEPER_CONNECT: zook1:2181,zook2:2181,zook3:2181
      ALLOW_PLAINTEXT_LISTENER: 'yes'
      JMX_PORT: 9999 # expose a JMX port so cluster metrics can be collected
      KAFKA_OPTS: -Djava.security.auth.login.config=/opt/kafka/secrets/server_jass.conf
    external_links:
      - zook1
      - zook2
      - zook3
    networks:
      kafka-net:
        ipv4_address: 172.20.0.16
networks:
  kafka-net:
      external: true
  4. Run Docker Compose to deploy Kafka:
docker compose -f ./kafka-docker-compose.yml up -d
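A quick smoke test (hedged: the Kafka CLI tools are on the PATH in the wurstmeister image, and localhost:9093 is kafka1's own listener) creates a replicated topic and describes it to confirm that all three brokers hold replicas:

docker exec kafka1 kafka-topics.sh --create --topic smoke-test \
  --bootstrap-server localhost:9093 --partitions 3 --replication-factor 3
docker exec kafka1 kafka-topics.sh --describe --topic smoke-test \
  --bootstrap-server localhost:9093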
  5. Create the kafka-manager compose file: kafka-manager-docker-compose.yml
services:
  kafka-manager:
    image: scjtqs/kafka-manager:latest
    restart: always
    hostname: kafka-manager
    container_name: kafka-manager
    ports:
      - 9000:9000
    external_links:  # link to containers outside this compose file
      - zook1
      - zook2
      - zook3
      - kafka1
      - kafka2
      - kafka3
    environment:
      ZK_HOSTS: zook1:2181,zook2:2181,zook3:2181
      KAFKA_BROKERS: 172.20.0.14:9093,172.20.0.15:9094,172.20.0.16:9095
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      kafka-net:
        ipv4_address: 172.20.0.10
networks:
  kafka-net:
    external: true
  6. Run Docker Compose to deploy kafka-manager:
docker compose -f ./kafka-manager-docker-compose.yml up -d
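Once the container is up, the UI should answer on the published port (assuming the same host IP used for the brokers' advertised listeners):

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.198.131:9000
# any 2xx/3xx status means kafka-manager is reachable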
  7. Configure the cluster: open the kafka-manager UI on port 9000 (e.g. http://192.168.198.131:9000) and add a cluster with the Zookeeper hosts zook1:2181,zook2:2181,zook3:2181.

