
Configuring Kafka for Martini

Apache Kafka is an open-source distributed streaming platform for building real-time applications. Producers publish messages to topics, which are divided into partitions to allow parallel consumption. Within each partition, messages are strictly ordered and identified by sequential offsets. Partitions are distributed across the brokers that make up a cluster, which makes the system redundant and scalable. Consumers subscribe to topics and track their position using offsets; when organized into consumer groups, they can read messages concurrently. Kafka provides fault tolerance through data replication, making it well suited for low-latency messaging and real-time processing.

Broker Configuration

To start things off, you need to configure your Kafka setup and verify all of its functionality. You can refer to the Kafka documentation for the initial setup.

  • Install Java: The bare minimum Java requirement for Kafka is Java 8. However, it is recommended to use Java 11 or higher for better performance and compatibility with newer features.

!!! info Java 8 support was deprecated in Kafka 3.0.0 and removed in Kafka 4.0.0, which requires Java 11 or higher.
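Before starting the broker, you can confirm which Java version is on your path:

```shell
# Check the installed Java version (Kafka requires 8+, ideally 11 or higher)
java -version
```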

For an on-premise instance of Kafka, follow these steps:

  • Install Kafka: You can find the latest version on the Apache Kafka website.
  • Configure Kafka for KRaft: Set up KRaft mode to run Kafka without ZooKeeper, allowing for a simplified architecture and efficient metadata management using the Raft consensus algorithm (a minimal sketch follows this list).
  • Start your Kafka Broker: Make sure that your broker initializes successfully and test its functionality.
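Exact file locations vary by Kafka version, but a minimal single-node KRaft setup, modeled on the standard Kafka quickstart, looks roughly like this (the node ID and listener addresses are illustrative):

```shell
# Key settings in config/kraft/server.properties for a single node running
# in combined mode (acting as both broker and controller); values are illustrative:
#   process.roles=broker,controller
#   node.id=1
#   controller.quorum.voters=1@localhost:9093
#   listeners=PLAINTEXT://:9092,CONTROLLER://:9093
#   controller.listener.names=CONTROLLER

# Generate a cluster ID, format the storage directory, then start the broker
KAFKA_CLUSTER_ID="$(bin/kafka-storage.sh random-uuid)"
bin/kafka-storage.sh format -t "$KAFKA_CLUSTER_ID" -c config/kraft/server.properties
bin/kafka-server-start.sh config/kraft/server.properties
```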

For a cloud-hosted instance of Kafka, you can provision one of the following:

  • Amazon Managed Streaming for Apache Kafka: A fully managed service from AWS that simplifies Apache Kafka cluster management, allowing automatic scaling, monitoring, and security integration with AWS services like IAM, S3, and CloudWatch.
  • Google Cloud Managed Service for Apache Kafka: A fully managed Kafka service from Google Cloud, providing native GCP integration for seamless scaling, monitoring, and security.
  • Confluent Cloud: A cloud-native, fully managed Kafka service from Confluent, supporting multi-cloud deployments (AWS, GCP, Azure) and providing additional tools for stream processing, security, and monitoring.

Web Interface

Kafka does not come with a built-in web UI for creating topics, viewing consumer groups, or visualizing streams of data. It is primarily employed as a message-passing system for high-throughput, low-latency data communication.

However, third-party tools such as AKHQ, Confluent Control Center, and Kafka Manager (CMAK) are available to provide a web-based UI for better visualization and management of Kafka clusters.

Kafka Functionalities

You can use the Kafka CLI tools, as well as your Kafka web UI, to publish and consume Kafka events. Many more tools are available depending on your needs and your dedicated web user interface; the console scripts below cover the basics, and you can refer to the official Kafka documentation for the rest.
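For instance (the topic name and broker address below are placeholders), the console scripts that ship with Kafka can create a topic, publish messages, and consume them:

```shell
# Create a topic (placeholder name and broker address)
bin/kafka-topics.sh --create --topic martini-test --bootstrap-server localhost:9092

# Publish messages: each line typed on stdin becomes one event
bin/kafka-console-producer.sh --topic martini-test --bootstrap-server localhost:9092

# Consume the topic from the beginning
bin/kafka-console-consumer.sh --topic martini-test --from-beginning --bootstrap-server localhost:9092
```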


Martini Configuration

Using Martini Designer

To establish connectivity to your Kafka Broker, you must install a package that provides Kafka-related functions, available from the Lonti Marketplace at https://console.lonti.com/marketplace. With this package, you can perform tasks such as creating Kafka producers and sending messages to a topic.

Configuration

  1. Configure the following properties of your Kafka Listener Trigger:

     • Bootstrap Servers: {broker-ip-address}:{port}
     • Topics: [Desired Topic Name]
     • Group Id: [Desired Consumer Group Name]
     • Key Deserializer: [chosen type]
     • Value Deserializer: [chosen type]

Info

Key Deserializer - The key deserializer is responsible for converting the byte representation of the key back into a specific data type (e.g., String, Integer, or a JSON object) that the consumer application can work with.

Value Deserializer - The value deserializer performs a similar function but for the value part of the message. It converts the byte array representing the value back into a specific data type that the consumer application can process.

This registers the Kafka Listener Trigger to subscribe to your desired topic and, with String deserializers configured, expects the message's key and value to be valid strings.
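For reference, a similar subscription can be reproduced outside Martini with Kafka's console consumer; this is a sketch assuming String deserialization (the console consumer's default), the placeholder broker address from above, and example topic and group names:

```shell
# Subscribe to the topic with the same group id as the Listener Trigger;
# the console consumer deserializes keys and values as strings by default
bin/kafka-console-consumer.sh --bootstrap-server {broker-ip-address}:{port} \
  --topic martini-test --group martini-group --from-beginning
```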

  2. Test the Configuration and Start the Kafka Listener Trigger:

Test the configuration to check that all of your settings are valid and that there are no errors in the Listener configuration. Once validated, you can start your specified Kafka Listener Trigger. You can also set the Trigger to start automatically by ticking the Auto Start checkbox.

!!! warning You may encounter a warning in the Designer logs regarding ConsumerConfig. This can be safely ignored, as the Trigger's status is unaffected and it should still start and function properly.

You can check your dedicated Kafka Web UI to confirm the creation of your Topic and Consumer Group. This means that your Kafka Listener Trigger is now subscribed to your Kafka Topic.


Kafka Events

Kafka events, or messages, are the core data units of Kafka, representing specific occurrences as key-value pairs, where the optional key is commonly used to determine the partition a message is written to. Produced and published into topics, these events are consumed by applications. Kafka's real-time streaming capability makes it suitable for log aggregation and analysis, with events stored on disk for durability. Its scalability allows multiple consumers to read concurrently, supporting diverse applications from simple tracking to complex stream processing and making Kafka a versatile solution for data-driven environments.

Using Martini Designer

The Kafka Listener Trigger enables developers to write applications that subscribe to Kafka topics and invoke a registered service when messages are received.

  1. Navigate to and configure your Kafka function: Within your Kafka Package, browse the pre-configured Kafka services and choose the one tailored to your needs.

For this example, we'll be using the sendString service. Edit the bootstrapServers property within the Kafka function to point to your broker's IP address and port, and change the key and value serializers to match the deserializers configured in your Kafka Listener Trigger.
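To verify the same publish outside Martini, Kafka's console producer can send a keyed message. This sketch assumes the example topic from this guide and a placeholder broker address, with the standard parse.key and key.separator properties enabled so a key:value pair can be typed on one line:

```shell
# Publish a keyed message; parse.key and key.separator are standard
# console-producer properties (values here mirror this guide's example)
bin/kafka-console-producer.sh --bootstrap-server {broker-ip-address}:{port} \
  --topic martini-test --property parse.key=true --property key.separator=:
# then type a line such as:
#   name:martini
```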

  2. Run the Service: Once all configurations are finished, start the service. You will be asked to input your desired values for two properties: key and value.

You should be able to see a message within your Designer Logs indicating that the operation was a success. In this case, the key was name, the value was martini, and the topic was named martini-test.

30/09/24 12:23:05.423 INFO  [Martini] ConsumerRecord(topic = martini-test, partition = 0, leaderEpoch = 0, offset = 0, CreateTime = 1727670185056, serialized key size = 4, serialized value size = 7, headers = RecordHeaders(headers = [], isReadOnly = false), key = name, value = martini)

The message should also be visible in your topic within your Kafka web UI.