
This is an excerpt from Manning's book Kafka in Action MEAP V14.

In addition, any server that hosts a controller will also have a controller.log file. This log is worth a look to see the changes the controller observed and reacted to. You can also see when the controller moved off a broker, with a message such as the following: DEBUG [Controller id=0] Resigning (kafka.controller.KafkaController). New topic and partition creation messages will also pepper the log. While the controller usually takes care of itself, if you are restarting the cluster to apply patches or updates, the controller log is a good place to check the controller's actions if you run into any issues. For example, you should be able to see any broker failures and what actions were taken as a result of those failures.
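If you want to scan for these events quickly, a simple grep over the controller log works. A minimal sketch; the log directory and the second search pattern are assumptions, so substitute the path and messages your installation actually produces:

```shell
# Scan the controller log for controller resignations (the message quoted
# above) and for new-topic activity.
# /opt/kafka/logs is a hypothetical log directory; adjust for your install,
# and "New topics" is an example pattern for topic-creation messages.
grep -E "Resigning|New topics" /opt/kafka/logs/controller.log
```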

The state-change log is directly related to the actions in the controller log file. Every controller-to-broker interaction is modeled as a state change for one or more partitions. This log gives a view of the changes requested by the controller and what the broker did in response. In other words, any decision that came from the controller should be listed in this log.

6.5 What Controllers are for

While each partition has a broker that serves as its leader, it is important to note that a single broker can be the leader for multiple partitions. Within a cluster of brokers, one broker acts as the controller. The role of the controller is to manage the state of partitions and replicas. The controller also performs other administrative actions such as partition reassignment.

The controller leverages ZooKeeper to detect restarts and failures. If a broker is restarted, the controller sends the leader information as well as the ISR (in-sync replica) information to the broker rejoining the cluster. On a broker failure, the controller selects a new leader and updates the ISR, and these values are persisted to ZooKeeper. To coordinate the rest of the cluster, it also sends the new leader and ISR changes to all other brokers.
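One way to observe the leader and ISR state the controller maintains is to describe a topic. A sketch assuming a broker on localhost:9092 and a hypothetical topic name (older Kafka versions use --zookeeper localhost:2181 in place of --bootstrap-server):

```shell
# Show, per partition, the current leader broker and the in-sync replica
# set that the controller updates after failures and restarts.
# "kinaction-topic" is a hypothetical topic name for illustration.
kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --topic kinaction-topic
```

Each partition line of the output includes Leader and Isr fields, which should change when the controller reacts to a broker leaving or rejoining the cluster.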

When shutting down a cluster for an upgrade, it is important to know which broker is currently serving as the controller. You would not want to repeatedly shut down whichever broker holds that role: shutting down the controller causes a new controller to start on another broker, and if your next shutdown targets that broker, the controller you just moved has to move again. In other words, avoid the overhead of forcing the controller to move and start up on each broker if you can. The controller failover process is able to recover because its data is persisted in ZooKeeper.
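Before each shutdown in a rolling restart, you can look up the current controller so you leave that broker until last. A minimal sketch, assuming ZooKeeper on localhost:2181 and Kafka's bundled zookeeper-shell.sh on the PATH:

```shell
# Read the /controller znode and extract the brokerid field from its JSON
# value, so the operator knows which broker to restart last.
CONTROLLER_ID=$(zookeeper-shell.sh localhost:2181 get /controller 2>/dev/null \
  | grep '"brokerid"' \
  | sed 's/.*"brokerid":\([0-9]*\).*/\1/')
echo "Current controller is broker ${CONTROLLER_ID}; restart that broker last."
```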

Figure 6.2. Example Controller Output

To figure out which broker is the current controller, you can use the zookeeper-shell script to look up the broker's id, as shown in Listing 6.2. The path /controller exists in ZooKeeper, and we run one command to look at its current value. Running that command on my cluster showed the broker with id 0 as the controller. Figure 6.2 shows the full output from ZooKeeper and how the brokerid value is returned.

Listing 6.2. Finding the current controller
zookeeper-shell.sh localhost:2181  #1
get /controller  #2
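The shell also accepts the command directly after the connect string, so the same lookup can be done non-interactively. A sketch; the output comment shows the typical shape of the /controller znode value, with the timestamp elided:

```shell
# Non-interactive variant of Listing 6.2: connect, run one command, exit.
zookeeper-shell.sh localhost:2181 get /controller
# The value is JSON, e.g. {"version":1,"brokerid":0,"timestamp":"..."}
```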

There will also be a controller log file named controller.log, following the application log conventions we discussed in the previous section. This file is important when you are looking at broker actions and failures. The state-change.log is also useful, as it records the decisions a broker received from the controller.
