Sunday, December 9, 2018

Spring Boot and Kafka

Let's now play around with Spring Boot and Kafka. We will use Kafka as the messaging system. Here are the differences between ActiveMQ and Kafka.

ActiveMQ

In ActiveMQ we have topics and queues. A topic works like publish-subscribe, while a queue works like producer-consumer. In a topic, a message is published to many subscribers. After all subscribers, including durable subscribers, receive the message, the message is removed. In a queue, a message can only be consumed by a single consumer; it works point-to-point. Once a message is consumed, it is removed, and other consumers won't receive it.

Kafka

Kafka only supports topics, but a Kafka topic somehow looks like the combination of an ActiveMQ topic and queue. Messages in Kafka are kept for a configurable period of time, even after they have been consumed. A topic in Kafka consists of many partitions. A message sent to a topic is saved in one of the partitions, and a consumer then reads from the partitions. The difference is that Kafka has the concept of a consumer group. If one or more consumers share the same group id, we can see them as a single consumer that has multiple instances. This is better explained with the images shown below.


This is a topic that has one partition. There is a consumer group named Group 1 that has one consumer.


This is a topic with one partition and one consumer group. The group contains two consumers. Messages in Partition 1 will be sent to only one of the consumers. So will the other consumer be idle and do nothing? Yes: Kafka assigns each partition to at most one consumer in a group, so any consumers in excess of the number of partitions sit idle.


This topic has two partitions and a consumer group containing two consumers. A possible assignment is that messages in Partition 1 are sent to Consumer 1 and messages in Partition 2 are sent to Consumer 2.


This topic has two partitions and two consumer groups, each containing two consumers. A possible assignment is that messages in Partition 1 are sent to Consumer 1 of both Group 1 and Group 2, while messages in Partition 2 are sent to Consumer 2 of both groups.


This topic now has three partitions. A possible assignment is that messages in Partition 1 and Partition 3 are sent to Consumer 1 of each group, and messages in Partition 2 are sent to Consumer 2 of each group.

Having this knowledge, we can conclude that in Kafka:
  • The number of consumers in a group should be less than or equal to the number of partitions in the topic; otherwise, the extra consumers will be idle.
  • Each partition is consumed by exactly one consumer within a group.
  • Ordering of messages within a partition is guaranteed by Kafka, since each partition is consumed by only one consumer in a group.
Start Coding

Now we will use Kafka in our Spring Boot project. First we need to modify our pom.xml as shown below.
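The original pom.xml listing is an image that did not survive; as a sketch, the key addition is the spring-kafka dependency (the version is managed by the Spring Boot parent POM):

```xml
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```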


Then we will create a configuration class. We do this because we want to be able to send a Customer object to Kafka instead of a plain string. We annotate the configuration class with @Configuration to indicate that this is a Spring configuration class, and with @EnableKafka to enable detection of Kafka listener annotations.
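The configuration class itself is shown as an image in the original post; a minimal sketch (the class name KafkaConfiguration is an assumption) looks like this:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;

@Configuration
@EnableKafka
public class KafkaConfiguration {
    // The producer factory, consumer factory, Kafka template,
    // and listener container factory beans discussed below
    // are all defined inside this class.
}
```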


Next we create private fields whose values are defined in application.properties. The group id is the consumer group id; as discussed above, Kafka consumers can be grouped together and seen as one consumer with many instances, each instance responsible for a partition. Then we have the bootstrap server, which is the Kafka server host and port. Lastly, we have the trusted packages setting. This is required by the JSON deserializer: it lists the packages whose classes are trusted to be deserialized from incoming messages.
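The original field declarations are an image; a sketch using @Value (the property names are assumptions, not confirmed by the post) could look like this:

```java
import org.springframework.beans.factory.annotation.Value;

// Fields inside the Kafka configuration class,
// populated from application.properties.
@Value("${kafka.bootstrap-servers}")
private String bootstrapServers;

@Value("${kafka.consumer.group-id}")
private String groupId;

@Value("${kafka.consumer.trusted-packages}")
private String trustedPackages;
```

with matching entries in application.properties, for example `kafka.bootstrap-servers=localhost:9092`, `kafka.consumer.group-id=group1`, and `kafka.consumer.trusted-packages=*`.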



Next we have to create the producer factory bean. Here we supply the bootstrap server and the key and value serializer classes.
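The bean definition is an image in the original; a sketch, assuming a String key and a JSON-serialized Customer value, could be:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.support.serializer.JsonSerializer;

@Bean
public ProducerFactory<String, Customer> producerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    return new DefaultKafkaProducerFactory<>(props);
}
```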


Next we have to create the consumer factory bean. Here we supply the bootstrap server, the key and value deserializer classes, the consumer group id, and the trusted packages setting.
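Again the original bean is an image; a matching sketch with spring-kafka's JsonDeserializer could be:

```java
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Bean
public ConsumerFactory<String, Customer> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(JsonDeserializer.TRUSTED_PACKAGES, trustedPackages);
    return new DefaultKafkaConsumerFactory<>(props);
}
```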


The next step is to create a Kafka template bean as shown below. It takes the producer factory bean as a parameter. Also note that the key is a String and the value is a Customer.
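A sketch of this bean, following the producer factory described above:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.KafkaTemplate;

@Bean
public KafkaTemplate<String, Customer> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
```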


The last configuration to set up is the Kafka listener container factory. It takes the consumer factory bean as a parameter.
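A sketch of this bean, again standing in for the missing image:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Customer> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Customer> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
```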


The next things to look at are the sender and receiver classes. The KafkaSender class is annotated with @Component to indicate that it is a Spring component. It holds the Kafka template bean and has a method named send() to send a message to a topic. Note that the message is a Customer object.
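A sketch of KafkaSender (the method signature is an assumption based on the description):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaSender {

    @Autowired
    private KafkaTemplate<String, Customer> kafkaTemplate;

    // Publishes a Customer object to the given topic.
    public void send(String topic, Customer customer) {
        kafkaTemplate.send(topic, customer);
    }
}
```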


The KafkaReceiver class is responsible for receiving messages from a Kafka topic. It has a method annotated with @KafkaListener, where we define the topic name. Note that the message we receive is a Customer object.
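A sketch of KafkaReceiver, using the mq.request topic created in the setup steps below:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class KafkaReceiver {

    // Invoked by the listener container for each message on the topic.
    @KafkaListener(topics = "mq.request")
    public void receive(Customer customer) {
        System.out.println("Received customer: " + customer);
    }
}
```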


To send a message to Kafka we inject the Kafka sender bean into CustomerController. Then, when a new Customer is added, we send the new Customer data as a message to Kafka in CustomerController.addCustomer().
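The controller code is also an image in the original; a minimal sketch (the request mapping and persistence details are assumptions) could be:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomerController {

    @Autowired
    private KafkaSender kafkaSender;

    @PostMapping("/customers")
    public Customer addCustomer(@RequestBody Customer customer) {
        // Save the customer here, then publish it to the Kafka topic.
        kafkaSender.send("mq.request", customer);
        return customer;
    }
}
```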



To test it we need to run the Kafka server first. Apache provides a quickstart page here to install and set up Kafka for the first time. The steps are:

  • Download and unzip the Kafka files.
  • Start the Zookeeper server using the command:
bin\windows\zookeeper-server-start.bat config\zookeeper.properties
 
  • Start the Kafka server using the command:
bin\windows\kafka-server-start.bat config\server.properties
 
  • Create a topic. Provide the topic name and the number of partitions. In this example we create only one partition.
bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic mq.request
  • Check whether the topic has been created using command
bin\windows\kafka-topics.bat --list --zookeeper localhost:2181
 
  • Add more partitions using command
bin\windows\kafka-topics.bat --zookeeper localhost:2181 --alter --topic mq.request --partitions 3
 
  • Check whether the partitions have been added using command
bin\windows\kafka-topics.bat --describe --zookeeper localhost:2181 --topic mq.request

Now start our application and create a new customer. The following message will be printed to the console.

