
SPRING BOOT KAFKA MICROSERVICES

MESSAGING SYSTEM IN EVENT-DRIVEN ARCHITECTURE

KAFKA IN MICROSERVICES

• Kafka message broker structure:
• Message Producer (Microservice 1)
• Kafka Broker
• Message Consumer (Microservice 2)

• Event-Driven Architecture
• Allows for real-time communication between microservices: data is published as events and
can be consumed as soon as it is produced, before it is even requested.
KAFKA

• Message Producer
• Message Consumer
• Kafka Cluster
• Broker
• Topic
• Partition

• ZooKeeper
KAFKA PRODUCER AND CONSUMER

• Like other messaging tools such as RabbitMQ, Kafka follows the producer/consumer messaging
model. (Unlike RabbitMQ, however, Kafka does not implement AMQP, the Advanced Message Queuing
Protocol; it uses its own binary protocol over TCP.)
• A Kafka messaging system contains three main parts:
• The Kafka producer, which sends the messages;
• The Kafka consumer, which receives the messages;
• The Kafka broker, which receives the messages from the producers and distributes them to the
correct consumers.
• ZooKeeper: currently, a ZooKeeper ensemble is needed to coordinate a Kafka cluster.
KAFKA CLUSTER, BROKER, TOPIC AND PARTITION

• A Kafka cluster can contain multiple brokers.
• A broker can contain multiple topics.
• A topic can contain multiple partitions.
• A partition is the smallest storage unit that holds a subset of the records owned by a topic.
• Kafka splits the message logs into partitions to increase system scalability. It stores
messages as key-value pairs: the key determines the partition within a topic's log to which a
message gets appended, and the value is the actual payload of the message (see the sketch
below).
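
A minimal sketch of keyed production with the plain Java client (the topic name and keys are
illustrative assumptions; with the default partitioner, records that share a key always land in
the same partition):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class KeyedProducerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The default partitioner hashes the key (murmur2) to pick a partition, so
                // both records below are appended to the same partition of "demo-topic".
                producer.send(new ProducerRecord<>("demo-topic", "user-42", "first event"));
                producer.send(new ProducerRecord<>("demo-topic", "user-42", "second event"));
            }
        }
    }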
SET UP

• Make sure a Java 8+ environment is installed on your machine.
• Go to https://kafka.apache.org/downloads to download Kafka.
• Click on GET STARTED -> QUICKSTART and follow the steps to start both the ZooKeeper and
Kafka broker services. (You may continue to explore the guide, but it is not necessary.)

KAFKA PRODUCER

• We will create a Spring Boot application for the producer that will be fed data from the
Wikimedia stream: https://stream.wikimedia.org/v2/stream/recentchange
• This mimics an event-driven environment: the recentchange stream is continuously updated,
and our producer will listen to those changes and constantly send them to the Kafka
broker.
• To get started, we need three dependencies in our producer application, namely
<okhttp-eventsource> to retrieve the events from Wikimedia, and <jackson-core> and
<jackson-databind> to parse the event messages in JSON format (a dependency sketch follows).
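
A sketch of the corresponding Maven dependencies; the group IDs and the okhttp-eventsource
version are assumptions, so verify current coordinates on Maven Central (spring-kafka is also
needed for the producer itself):

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>com.launchdarkly</groupId>
        <artifactId>okhttp-eventsource</artifactId>
        <version>2.5.0</version>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-core</artifactId>
    </dependency>
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
    </dependency>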
KAFKA PRODUCER

• To make the Wikimedia Kafka producer work, we also need to create three additional files
(sketched after the properties below):
• KafkaConfig, which creates a Kafka topic and provides configuration for it.
• WikimediaChangesHandler, which implements EventHandler, listens to the event source for any new
changes, and sends them to the Kafka topic.
• KafkaProducerService, which specifies the URL of the event source and builds and starts the
WikimediaChangesHandler object.
• The driver class implements CommandLineRunner and calls the KafkaProducerService's sendMessage()
method to start the process.
• Add these three lines to the properties/yml file:
• spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
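
A minimal sketch of the three classes, assuming okhttp-eventsource 2.x and a topic named
wikimedia_recentchange (the topic name and class layout are assumptions; in a real project each
class lives in its own file):

    import java.net.URI;
    import java.util.concurrent.TimeUnit;
    import com.launchdarkly.eventsource.EventHandler;
    import com.launchdarkly.eventsource.EventSource;
    import com.launchdarkly.eventsource.MessageEvent;
    import org.apache.kafka.clients.admin.NewTopic;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.config.TopicBuilder;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.stereotype.Service;

    // KafkaConfig: registers the topic so it is created on startup
    @Configuration
    class KafkaConfig {
        @Bean
        NewTopic wikimediaTopic() {
            return TopicBuilder.name("wikimedia_recentchange").build();
        }
    }

    // WikimediaChangesHandler: forwards each server-sent event to the Kafka topic
    class WikimediaChangesHandler implements EventHandler {
        private final KafkaTemplate<String, String> kafkaTemplate;
        private final String topic;

        WikimediaChangesHandler(KafkaTemplate<String, String> kafkaTemplate, String topic) {
            this.kafkaTemplate = kafkaTemplate;
            this.topic = topic;
        }

        @Override
        public void onMessage(String event, MessageEvent messageEvent) {
            kafkaTemplate.send(topic, messageEvent.getData());
        }

        @Override public void onOpen() {}
        @Override public void onClosed() {}
        @Override public void onComment(String comment) {}
        @Override public void onError(Throwable t) {}
    }

    // KafkaProducerService: points the event source at the Wikimedia stream and starts it
    @Service
    class KafkaProducerService {
        private final KafkaTemplate<String, String> kafkaTemplate;

        KafkaProducerService(KafkaTemplate<String, String> kafkaTemplate) {
            this.kafkaTemplate = kafkaTemplate;
        }

        public void sendMessage() throws InterruptedException {
            String url = "https://stream.wikimedia.org/v2/stream/recentchange";
            EventHandler handler = new WikimediaChangesHandler(kafkaTemplate, "wikimedia_recentchange");
            EventSource eventSource = new EventSource.Builder(handler, URI.create(url)).build();
            eventSource.start();
            TimeUnit.MINUTES.sleep(10); // keep the stream open while events flow
        }
    }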
KAFKA CONSUMER

• We will create an application to be our Kafka consumer, which will receive the messages
from the Kafka broker and store them in the database.
• To get started, we need to add the <spring-kafka> dependency for Kafka support.
• Since the consumer application will store the messages in a MySQL database, we also
need <mysql-connector-java> and <spring-boot-starter-data-jpa> (a dependency sketch follows).
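
A sketch of the consumer's Maven dependencies (coordinates are assumptions to verify on Maven
Central; newer MySQL drivers are published as com.mysql:mysql-connector-j):

    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>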
KAFKA CONSUMER

• First, configure the properties file by adding the consumer properties sketched below.

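A minimal sketch of the consumer-side properties, mirroring the producer configuration (the
group ID and offset-reset policy are assumptions):

    spring.kafka.consumer.bootstrap-servers=localhost:9092
    spring.kafka.consumer.group-id=myGroup
    spring.kafka.consumer.auto-offset-reset=earliest
    spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
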
• To make the Kafka consumer work, we need to create a service that consumes the
messages sent by the producer.
• The consume method needs to be annotated with the @KafkaListener annotation, in which
we specify the topic name and the groupId, as in the sketch below.
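
A minimal sketch of such a service (the topic and group names are assumptions matching the
configuration above):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Service;

    @Service
    public class KafkaConsumerService {
        private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConsumerService.class);

        // Invoked for every record published to the subscribed topic
        @KafkaListener(topics = "wikimedia_recentchange", groupId = "myGroup")
        public void consume(String message) {
            LOGGER.info("Received message: {}", message);
            // hand the message off to the persistence layer here
        }
    }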
KAFKA CONSUMER

• Create an entity class and annotate it with the appropriate annotations: @Entity, @Table,
@Id, etc.
• Create a MySQL database that will store the messages.
• Use any database connectivity tool; in this demo we use Spring Data JPA's JpaRepository to
store the messages in the database.
• Add the properties for the database connection to the properties file (see the sketch below).
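
A minimal sketch of the entity, the repository, and the connection properties, assuming Spring
Boot 2.x (use jakarta.persistence imports on Boot 3); the class, column, database, and
credential names are all assumptions:

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Lob;
    import javax.persistence.Table;
    import org.springframework.data.jpa.repository.JpaRepository;

    @Entity
    @Table(name = "wikimedia_recentchange")
    class WikimediaData {
        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY)
        private Long id;

        @Lob // event payloads can exceed VARCHAR limits
        private String eventData;

        // getters and setters omitted for brevity
    }

    // Spring Data JPA generates the implementation at runtime
    interface WikimediaDataRepository extends JpaRepository<WikimediaData, Long> {
    }

And the connection properties:

    # assumed local database name and credentials
    spring.datasource.url=jdbc:mysql://localhost:3306/wikimedia
    spring.datasource.username=root
    spring.datasource.password=password
    spring.jpa.hibernate.ddl-auto=update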
FOOTNOTE

• This is a very simple Kafka demo to help you get started and familiarize yourself with the
technology.
