Installing Martini Server Runtime on Kubernetes

Martini Runtime leverages the Fabric8.io Kubernetes client to establish connections with Kubernetes-managed clusters, enabling seamless interaction and management of resources within the cluster. This integration allows developers to easily deploy and manage microservices, utilize Kubernetes features like service discovery and scaling, and automate CI/CD processes, all while enhancing the overall developer experience.

Prerequisites

You can either set up an on-premises Kubernetes cluster or use a cloud-hosted service.

On-Premise Kubernetes Distributions

  • Official Kubernetes: The standard upstream version of Kubernetes, maintained by the Kubernetes community.
  • OpenShift: A distribution of Kubernetes by Red Hat, designed for enterprise use with a focus on developer experience and security.
  • Rancher: A complete container management platform that includes a Kubernetes distribution called RKE (Rancher Kubernetes Engine).
  • MicroK8s: A minimal, lightweight Kubernetes distribution developed by Canonical, the company behind Ubuntu.
  • K3s: A lightweight Kubernetes distribution designed for resource-constrained environments, such as edge computing and IoT.

Cloud-Hosted Kubernetes Services

  • Amazon Elastic Kubernetes Service (EKS): A managed Kubernetes service from Amazon Web Services.
  • Google Kubernetes Engine (GKE): A managed Kubernetes service from Google Cloud.
  • Azure Kubernetes Service (AKS): A managed Kubernetes service from Microsoft Azure.

Martini Configuration

  1. Configure Martini's connection properties in <martini-home>/conf/overrides/override.properties. There are two ways to connect: with a kubeconfig file or with the cluster's master URL. The properties below are the bare minimum needed to get started. Using kubeconfig:

    # The file path of the kube config.
    # By using "default" it uses the kube config from ~/.kube/config
    kubernetes.config-location=default

    Using URL:

    # The URL of your Kubernetes node (API server)
    kubernetes.master-url=https://[HOST]:6443

    For authentication in the Martini Server Runtime, you can choose from three methods: bearer token, client certificate, or basic authentication. These configurations apply when using the kubernetes.master-url property to connect to your Kubernetes cluster.

    # Bearer Token
    kubernetes.bearer-token=

    # Client Certificate
    kubernetes.client-certificate=
    kubernetes.client-key=
    kubernetes.trust-certificate=

    # Basic auth
    kubernetes.username=
    kubernetes.password=
  2. Restart Martini Server Runtime to apply the changes.

  3. Verify by checking the leases on your Kubernetes cluster. You should see a lease named martini-runtime-endpoint.
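Assuming you have kubectl access to the same cluster, one way to check is to query the Lease objects directly (the namespace is "default" unless kubernetes.namespace is overridden):

```shell
# List Lease objects in the namespace Martini connects to; look for
# martini-runtime-endpoint in the output.
kubectl get leases -n default

# Show details of the lease, including the current holder identity.
kubectl describe lease martini-runtime-endpoint -n default
```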

Leadership Election

Martini Server Runtime supports leadership election, allowing multiple instances to coordinate work. To use it, run two or more Martini Server Runtime instances, either inside the Kubernetes cluster or connected to it, ensuring that each instance has a unique identity. By default, each instance is assigned a random UUID string. You can replace this generated UUID with a specific identity name by setting the following property in your configuration:

# Defines the unique identity this instance will use when participating in leader election.
# If left empty, a random UUID string will be assigned to the instance.
kubernetes.identity=toroMartini-1

Defining a unique identity such as toroMartini-1 makes the leadership election process more reliable: each instance participates in the election under a stable name, which keeps coordination and task management consistent across your distributed environment.

To verify if the leadership election is functioning correctly, you can shut down the Martini instance that holds the Kubernetes lease. Upon shutting down this instance, you should observe that the other Martini instance seamlessly takes over the leadership role. This transition will be indicated by a log message similar to the following:

[KubernetesLeadershipElector] Started endpoint 'resource-sampler'

This message confirms that the newly elected instance has successfully assumed leadership, ensuring continued operation and coordination within your distributed environment. By performing this test, you can validate the resilience and effectiveness of the leadership election mechanism within the Martini Server Runtime setup.

Advanced Properties

These properties are advanced settings for configuring Martini Runtime. Adjust them according to the needs of your applications. They can be set in <martini-home>/conf/overrides/override.properties.

# Determines whether the application should connect to a specific Kubernetes instance and use settings that pertain only to that cluster.
# Set this property to "true" if the Martini Runtime is running inside the cluster itself.
# By default, this is set to "false".
kubernetes.localized-cluster=false

# Defines the Kubernetes namespace for your application, allowing you to isolate resources and manage permissions within that namespace.
# Martini connects to the "default" namespace if an override is not set.
kubernetes.namespace=toro-martini

# The amount of time non-leader candidates will wait after observing a leadership renewal until attempting to acquire leadership of a led but unrenewed leader slot.
# This effectively defines the maximum duration that a leader can be unresponsive before being replaced by another candidate.
# By default, this is set to 15 seconds.
kubernetes.lease-duration=15s

# The interval between attempts by the acting master to renew a leadership slot before it stops leading. This must be less than or equal to the lease duration.
# By default, this is set to 10 seconds.
kubernetes.renew-deadline=10s

# The duration that clients should wait between attempts to acquire and renew leadership. This is only applicable if leader election is enabled.
# By default, this is set to 2 seconds.
kubernetes.retry-period=2s

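How these timing properties interact can be sketched with a small, self-contained simulation. This is illustrative only, not Martini's actual implementation; the Lease class and try_acquire helper are hypothetical stand-ins for the Kubernetes Lease API:

```python
# Illustrative stand-in for lease-based leader election timing (hypothetical
# helper, not Martini's implementation). The value mirrors the default above.
LEASE_DURATION = 15.0  # kubernetes.lease-duration=15s

class Lease:
    """Minimal in-memory stand-in for a Kubernetes Lease object."""
    def __init__(self):
        self.holder = None
        self.renew_time = 0.0

def try_acquire(lease, candidate, now):
    """A candidate takes the lease only if it is unheld, expired, or its own."""
    expired = now - lease.renew_time >= LEASE_DURATION
    if lease.holder is None or expired or lease.holder == candidate:
        lease.holder = candidate
        lease.renew_time = now
        return True
    return False

lease = Lease()
assert try_acquire(lease, "toroMartini-1", now=0.0)      # first instance leads
assert not try_acquire(lease, "toroMartini-2", now=5.0)  # lease still fresh
# If the leader stops renewing (renew-deadline exceeded, pod killed, etc.),
# the other candidate acquires leadership once lease-duration has elapsed.
assert try_acquire(lease, "toroMartini-2", now=16.0)
print(lease.holder)  # -> toroMartini-2
```

In the same spirit, renew-deadline bounds how long the current leader keeps retrying a renewal before it stops leading, and retry-period is the pause between successive acquire/renew attempts.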
# Specifies label selectors that determine which Kubernetes resources (like pods or services) your application interacts with or registers itself under.
# You can define one or more selectors using the format: kubernetes.selectors.<label>=<value>
# Here are some examples.
kubernetes.selectors.environment=production
kubernetes.selectors.team=dev-team
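These selectors correspond to ordinary Kubernetes labels, so you can check which resources match them with a standard label-selector query (the namespace and label values here mirror the examples above):

```shell
# List pods and services carrying both example labels.
kubectl get pods,services -l environment=production,team=dev-team -n toro-martini
```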