This article walks through building a Kafka cluster, together with the ZooKeeper ensemble it depends on, across three servers. Hopefully it serves as a useful reference.
1. Environment
| No. | Hostname/IP | Hardware | Installed Services |
| --- | --- | --- | --- |
| 1 | server1/172.16.101.181 | RAM: 2G, Disk: 50G, CPU: 4 single-core | JDK 1.8, ZooKeeper 3.6.3, Kafka 2.12-3.0.0 |
| 2 | server2/172.16.101.182 | RAM: 2G, Disk: 50G, CPU: 4 single-core | JDK 1.8, ZooKeeper 3.6.3, Kafka 2.12-3.0.0 |
| 3 | server3/172.16.101.183 | RAM: 2G, Disk: 50G, CPU: 4 single-core | JDK 1.8, ZooKeeper 3.6.3, Kafka 2.12-3.0.0 |
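The configurations below refer to the nodes by hostname, so all three servers must be able to resolve server1/server2/server3. A minimal sketch, assuming name resolution is done via /etc/hosts (adjust to your network):

# /etc/hosts (identical on all three nodes)
172.16.101.181 server1
172.16.101.182 server2
172.16.101.183 server3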
2. ZooKeeper Cluster Installation and Configuration
Upload apache-zookeeper-3.6.3-bin.tar.gz and extract it.
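For example, assuming the tarball was uploaded to /home/environment (the directory used throughout this article):

cd /home/environment
tar -zxvf apache-zookeeper-3.6.3-bin.tar.gz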
Modify the ZooKeeper configuration file.
Enter the conf directory, copy zoo_sample.cfg in the same directory, and rename the copy to zoo.cfg.
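For example:

cd /home/environment/apache-zookeeper-3.6.3-bin/conf
cp zoo_sample.cfg zoo.cfg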
Edit the configuration file with vim:
# Edit the file: vim zoo.cfg
----------------------------------------------------------------------------
# The number of milliseconds of each tick
# Basic time unit; all ZooKeeper times are configured as integer multiples of it (in milliseconds; 2 seconds here)
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# Maximum time a follower may take to sync the latest data from the leader at startup. Increase for larger clusters.
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Maximum time for the leader's heartbeat checks against all machines in the cluster. If a follower does not respond within this time, it is considered offline.
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Snapshot and transaction log directory
dataDir=/home/environment/apache-zookeeper-3.6.3-bin/zkdata
# the port at which the clients will connect
# Client connection port
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
# Number of snapshot files to retain (the default is 3)
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
# How often to automatically purge transaction logs and snapshots; 1 hour here
autopurge.purgeInterval=1
# Cluster servers. The numbers 1/2/3 must match each node's myid file. Of the two ports on the right, 2888 is for data sync and communication; 3888 is for leader election.
server.1=server1:2888:3888
server.2=server2:2888:3888
server.3=server3:2888:3888
# Create the data directory
mkdir -p /home/environment/apache-zookeeper-3.6.3-bin/zkdata
cd /home/environment/apache-zookeeper-3.6.3-bin/zkdata
# Create the myid file and set its content to 1
echo 1 > myid
# Copy the installation to server2, then change the myid value there to 2
scp -r apache-zookeeper-3.6.3-bin/ server2:$PWD
# Copy the installation to server3, then change the myid value there to 3
scp -r apache-zookeeper-3.6.3-bin/ server3:$PWD
# For passwordless SSH login between cluster hosts, see: https://www.cnblogs.com/luzhanshi/p/13369797.html
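If passwordless SSH is already in place, the myid values on the other two nodes can also be set remotely in one step; a sketch, assuming the paths above:

ssh server2 'echo 2 > /home/environment/apache-zookeeper-3.6.3-bin/zkdata/myid'
ssh server3 'echo 3 > /home/environment/apache-zookeeper-3.6.3-bin/zkdata/myid'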
# Start/stop ZooKeeper on each node (server1/server2/server3):
./zkServer.sh start
./zkServer.sh stop
# Check the ZooKeeper cluster status
./zkServer.sh status
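Once all three nodes are up, the status command should report exactly one leader and two followers, along these lines (output abbreviated):

./zkServer.sh status
# Mode: leader      <- on exactly one node
# Mode: follower    <- on the other two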
3. Kafka Cluster Installation and Configuration
Upload the Kafka tarball (kafka_2.12-3.0.0.tgz) and extract it.
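A minimal sketch, again assuming /home/environment and the tarball matching the kafka_2.12-3.0.0 directory used later in this article:

cd /home/environment
tar -zxvf kafka_2.12-3.0.0.tgz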
Modify the server.properties file under the config directory.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Unique ID of this broker within the cluster; must not be repeated
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
# Listener for this broker; replaces the legacy port/host.name settings found in older tutorials, which Kafka 3.x no longer honors
listeners=PLAINTEXT://server1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
# Number of threads the broker uses to handle network messages
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
# Number of threads the broker uses for disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
# Socket send buffer
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
# Socket receive buffer
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# Maximum size of a socket request
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Kafka data directories; separate multiple locations with commas
log.dirs=/home/environment/kafka_2.12-3.0.0/kafkaData/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of partitions per topic
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# Number of recovery threads
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
# Default replication factor for internal topics
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# Maximum retention time for message logs; 7 days here
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
# Size of each log segment file; 1 GB here
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
# How often to check log segments against the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper cluster addresses
zookeeper.connect=server1:2181,server2:2181,server3:2181

# Timeout in ms for connecting to zookeeper
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
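Only two of these settings actually differ per broker; everything else is identical across the cluster. As a checklist (hostnames as in the environment table):

# server1: broker.id=0  listeners=PLAINTEXT://server1:9092
# server2: broker.id=1  listeners=PLAINTEXT://server2:9092
# server3: broker.id=2  listeners=PLAINTEXT://server3:9092
# Identical on all nodes: log.dirs, zookeeper.connect, and the rest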
Create the data storage directory:
mkdir -p /home/environment/kafka_2.12-3.0.0/kafkaData/kafka-logs
# Distribute the installation to server2
scp -r kafka_2.12-3.0.0/ server2:$PWD
# Distribute the installation to server3
scp -r kafka_2.12-3.0.0/ server3:$PWD
# server2 node
cd /home/environment/kafka_2.12-3.0.0/config
vim server.properties
----------------------------------------------------
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1
listeners=PLAINTEXT://server2:9092

# server3 node
cd /home/environment/kafka_2.12-3.0.0/config
vim server.properties
----------------------------------------------------
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=2
listeners=PLAINTEXT://server3:9092
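Alternatively, the two per-broker edits can be applied remotely with sed; a sketch, assuming passwordless SSH and the paths above:

ssh server2 "sed -i 's/^broker.id=0/broker.id=1/; s|//server1:9092|//server2:9092|' /home/environment/kafka_2.12-3.0.0/config/server.properties"
ssh server3 "sed -i 's/^broker.id=0/broker.id=2/; s|//server1:9092|//server3:9092|' /home/environment/kafka_2.12-3.0.0/config/server.properties"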
# Run on each of the three nodes (server1/server2/server3)
## Start Kafka; -daemon runs it as a background service, followed by the configuration file to start with
./kafka-server-start.sh -daemon ../config/server.properties
## Stop the Kafka cluster
./kafka-server-stop.sh
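To verify that all three brokers registered with ZooKeeper, check the /brokers/ids znode using the ZooKeeper CLI from the ensemble installed above:

/home/environment/apache-zookeeper-3.6.3-bin/bin/zkCli.sh -server server1:2181
# then, inside the zkCli shell:
ls /brokers/ids
# a healthy cluster lists all three broker ids: [0, 1, 2]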
# List topics:
/home/environment/kafka_2.12-3.0.0/bin/kafka-topics.sh --list --bootstrap-server server1:9092,server2:9092,server3:9092

# Describe a specific topic:
/home/environment/kafka_2.12-3.0.0/bin/kafka-topics.sh --describe --bootstrap-server server1:9092,server2:9092,server3:9092 --topic oc_itheima_topic

# Create a topic
# --create: create the topic
# --bootstrap-server: the broker addresses (Kafka 3.x removed the old --zookeeper option from these tools)
# --replication-factor 1: the replication factor
# --partitions 1: the number of partitions
# --topic oc_itheima_topic: the topic name
/home/environment/kafka_2.12-3.0.0/bin/kafka-topics.sh --create --bootstrap-server server1:9092,server2:9092,server3:9092 --replication-factor 1 --partitions 1 --topic oc_itheima_topic

# Delete a topic
/home/environment/kafka_2.12-3.0.0/bin/kafka-topics.sh --delete --bootstrap-server server1:9092,server2:9092,server3:9092 --topic oc_itheima_topic

# Start a console producer and produce messages
/home/environment/kafka_2.12-3.0.0/bin/kafka-console-producer.sh --bootstrap-server server1:9092,server2:9092,server3:9092 --topic oc_itheima_topic

# Start a console consumer and consume messages:
/home/environment/kafka_2.12-3.0.0/bin/kafka-console-consumer.sh --bootstrap-server server1:9092,server2:9092,server3:9092 --topic oc_itheima_topic --consumer-property group.id=my-consumer-g --partition 0 --offset 0
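As an end-to-end smoke test that exercises replication across all three brokers (the topic name smoke_test is illustrative):

BIN=/home/environment/kafka_2.12-3.0.0/bin
BOOTSTRAP=server1:9092,server2:9092,server3:9092

# Create a topic whose partitions are replicated on every broker
$BIN/kafka-topics.sh --create --bootstrap-server $BOOTSTRAP --replication-factor 3 --partitions 3 --topic smoke_test

# Produce one message, then read it back
echo "hello kafka" | $BIN/kafka-console-producer.sh --bootstrap-server $BOOTSTRAP --topic smoke_test
$BIN/kafka-console-consumer.sh --bootstrap-server $BOOTSTRAP --topic smoke_test --from-beginning --max-messages 1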