“I cannot make you understand. I cannot make anyone understand what is happening inside me. I cannot even explain it to myself.”
― Franz Kafka, The Metamorphosis
Perhaps, as Lao Gao says, we are simply here to cultivate ourselves.
Today we continue from yesterday's simulation, in which one broker went down and then recovered. After recovery, two partition leaders ended up on the same broker, which defeats our goal of spreading traffic evenly across all brokers. So today we will briefly demonstrate how to manually trigger a partition leader re-election.
$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker
Topic: topicWithThreeBroker TopicId: BAocHAwHR_STmwAUlI3YMw PartitionCount: 3 ReplicationFactor: 2 Configs:
Topic: topicWithThreeBroker Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: topicWithThreeBroker Partition: 1 Leader: 2 Replicas: 2,1 Isr: 1,2
Topic: topicWithThreeBroker Partition: 2 Leader: 2 Replicas: 0,2 Isr: 2,0
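In the output above, the preferred leader of each partition is simply the first broker listed under Replicas; a partition is "imbalanced" when its current leader differs from that broker. A minimal sketch of this check (the data below just mirrors the describe output above):

```python
# Sketch: detect partitions whose current leader is not the preferred
# (first-listed) replica. Values mirror the describe output above.
partitions = [
    {"partition": 0, "leader": 1, "replicas": [1, 0]},
    {"partition": 1, "leader": 2, "replicas": [2, 1]},
    {"partition": 2, "leader": 2, "replicas": [0, 2]},
]

# Kafka's "preferred" leader is simply replicas[0].
imbalanced = [p["partition"] for p in partitions if p["leader"] != p["replicas"][0]]
print(imbalanced)  # partition 2 is led by broker 2 but prefers broker 0
```

Only partition 2 is flagged, which is exactly what the election below will fix.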
$ vim leader_election.json
Add the following content:
{
  "partitions": [
    { "topic": "topicWithThreeBroker", "partition": 0 },
    { "topic": "topicWithThreeBroker", "partition": 1 },
    { "topic": "topicWithThreeBroker", "partition": 2 }
  ]
}
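For a topic with many partitions, writing this file by hand gets tedious. A small sketch that generates the same JSON (the topic name and partition count are just the values from this demo):

```python
import json

# Sketch: generate the leader-election JSON for every partition of a topic.
# Topic name and partition count are taken from the demo above.
topic = "topicWithThreeBroker"
partition_count = 3

payload = {
    "partitions": [
        {"topic": topic, "partition": p} for p in range(partition_count)
    ]
}

with open("leader_election.json", "w") as f:
    json.dump(payload, f, indent=2)
```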
Re-elect the leaders with kafka-leader-election:
$ kafka-leader-election --path-to-json-file leader_election.json --election-type preferred --bootstrap-server :9092
Successfully completed leader election (PREFERRED) for partitions topicWithThreeBroker-2
Valid replica already elected for partitions topicWithThreeBroker-0, topicWithThreeBroker-1
Here you can see that only partition 2 actually went through a re-election.
$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker
Topic: topicWithThreeBroker TopicId: BAocHAwHR_STmwAUlI3YMw PartitionCount: 3 ReplicationFactor: 2 Configs:
Topic: topicWithThreeBroker Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: topicWithThreeBroker Partition: 1 Leader: 2 Replicas: 2,1 Isr: 1,2
Topic: topicWithThreeBroker Partition: 2 Leader: 0 Replicas: 0,2 Isr: 2,0
Broker2, which previously hosted two partition leaders, is back down to one after the re-election; leadership is spread evenly again, so no single machine carries a disproportionate share of the load.
If we run the election again at this point, the tool reports that there are no partitions left to re-elect, since every partition leader is already the preferred one:
$ kafka-leader-election --path-to-json-file leader_election.json --election-type preferred --bootstrap-server :9092
Valid replica already elected for partitions
In yesterday's simulation, when we shut down broker0, it was actually the KafkaController that automatically elected a new leader for partition 2, and it is also the controller that notifies every broker to refresh its metadataCache whenever the Isr changes. Likewise, when new partitions are added to a topic, the KafkaController handles the election and assignment automatically.
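The controller's failover choice can be sketched roughly as: when the current leader drops out of the Isr, pick the first replica (in assignment order) that is still in sync. This is a simplified model, not Kafka's actual implementation, which also tracks leader epochs and can optionally perform unclean election from out-of-sync replicas:

```python
# Simplified sketch of leader failover: choose the first assigned replica
# that is still in the ISR. Real Kafka also tracks leader epochs and can
# optionally do unclean election from non-ISR replicas.
def elect_leader(replicas, isr):
    for broker in replicas:   # assignment order = preference order
        if broker in isr:
            return broker
    return None               # no in-sync replica: partition goes offline

# partition 2 was assigned to brokers [0, 2]; when broker 0 dies the ISR
# shrinks to {2}, so broker 2 takes over as leader.
print(elect_leader([0, 2], {2}))     # 2
# after broker 0 recovers and rejoins the ISR, a *preferred* election
# moves leadership back to replicas[0]:
print(elect_leader([0, 2], {0, 2}))  # 0
```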
$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker
Topic: topicWithThreeBroker TopicId: BAocHAwHR_STmwAUlI3YMw PartitionCount: 3 ReplicationFactor: 2 Configs:
Topic: topicWithThreeBroker Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: topicWithThreeBroker Partition: 1 Leader: 2 Replicas: 2,1 Isr: 1,2
Topic: topicWithThreeBroker Partition: 2 Leader: 2 Replicas: 0,2 Isr: 2,0
$ kafka-topics --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker --alter --partitions 9
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
$ kafka-topics --describe --zookeeper 127.0.0.1:2181 --topic topicWithThreeBroker
Topic: topicWithThreeBroker TopicId: BAocHAwHR_STmwAUlI3YMw PartitionCount: 9 ReplicationFactor: 2 Configs:
Topic: topicWithThreeBroker Partition: 0 Leader: 1 Replicas: 1,0 Isr: 1,0
Topic: topicWithThreeBroker Partition: 1 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: topicWithThreeBroker Partition: 2 Leader: 2 Replicas: 0,2 Isr: 2,0
Topic: topicWithThreeBroker Partition: 3 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: topicWithThreeBroker Partition: 4 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: topicWithThreeBroker Partition: 5 Leader: 0 Replicas: 0,2 Isr: 0,2
Topic: topicWithThreeBroker Partition: 6 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: topicWithThreeBroker Partition: 7 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: topicWithThreeBroker Partition: 8 Leader: 0 Replicas: 0,2 Isr: 0,2
As you can see, the KafkaController's default assignment strategy spreads the new partitions evenly across the brokers.
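The even spread above comes from a round-robin style assignment. A rough sketch of the idea (real Kafka adds a random start index and rack awareness on top of this, so the exact layout differs between runs):

```python
# Rough sketch of round-robin replica assignment: the leader (first replica)
# of partition p lands on broker (p % len(brokers)), and follower replicas
# continue around the ring.
def assign(brokers, partitions, replication_factor):
    n = len(brokers)
    assignment = {}
    for p in range(partitions):
        assignment[p] = [brokers[(p + i) % n] for i in range(replication_factor)]
    return assignment

layout = assign(brokers=[0, 1, 2], partitions=9, replication_factor=2)
leaders = [replicas[0] for replicas in layout.values()]
# each broker leads 3 of the 9 partitions
print({b: leaders.count(b) for b in [0, 1, 2]})  # {0: 3, 1: 3, 2: 3}
```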