
Installing Martini Server Runtime on Consul

Martini Runtime uses the Ecwid/consul-api library to integrate with Consul and offers rich service management features. This integration enables dynamic service registration and discovery, where components can locate each other through DNS queries. Martini Runtime can therefore use Consul as a DNS server to build fault-tolerant, highly scalable systems that respond to changes in the environment, providing reliable mechanisms for inter-service communication and allowing distributed systems to automatically discover and route to services as needed.
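As an illustration of the DNS side of this integration, the sketch below performs an SRV lookup against Consul's DNS interface (port 8600 by default) for a service registered under the name my-service. The service name and the agent address are placeholders for whatever your environment actually registers; this is a minimal example, not part of Martini itself.

import java.util.Hashtable;
import javax.naming.directory.Attribute;
import javax.naming.directory.Attributes;
import javax.naming.directory.InitialDirContext;

public class ConsulDnsLookup {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put("java.naming.factory.initial", "com.sun.jndi.dns.DnsContextFactory");
        // Consul's DNS interface listens on port 8600 by default.
        env.put("java.naming.provider.url", "dns://127.0.0.1:8600");

        InitialDirContext ctx = new InitialDirContext(env);
        // SRV records for a registered service follow <name>.service.consul.
        Attributes attrs = ctx.getAttributes("my-service.service.consul", new String[] {"SRV"});
        Attribute srv = attrs.get("SRV");
        if (srv == null) {
            System.out.println("No SRV records found for my-service.");
        } else {
            for (int i = 0; i < srv.size(); i++) {
                // Each SRV record is "priority weight port target".
                System.out.println(srv.get(i));
            }
        }
        ctx.close();
    }
}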

Installations

For an on-premise instance of Consul:

  • Consul Binary: Consul OSS (Open Source) can be installed directly using pre-compiled binaries for different platforms (Linux, macOS, Windows). It's the simplest way to set up Consul locally for testing or small-scale deployments.
  • Consul using Docker: You can run Consul using Docker containers locally, providing a lightweight and isolated environment. Ideal for development and testing.
  • Consul on a Local Kubernetes Setup: Consul can be installed on local Kubernetes clusters like Minikube or K3s using Helm charts, providing a local service mesh and discovery platform.
  • Consul Enterprise (Local): Consul Enterprise offers advanced features for larger, production-grade environments, but can be installed locally for development and evaluation.

For a cloud-hosted instance of Consul, you can provision one of the following:

  • HashiCorp Consul Cloud (Managed Service): Consul Cloud is a fully managed service provided by HashiCorp, allowing users to deploy and manage Consul clusters without the need to handle the underlying infrastructure. It offers scalability, auto-updates, and built-in integrations with other HashiCorp tools.
  • Consul on AWS: You can deploy Consul OSS or Enterprise on Amazon Web Services (AWS) using EC2 instances, ECS, or EKS. This allows you to leverage AWS services for scalability and availability while managing your own Consul instance.
  • Consul on Azure: Consul can also be deployed on Microsoft Azure, utilizing Azure Virtual Machines or Azure Kubernetes Service (AKS) for a cloud-native approach. This setup allows for seamless integration with other Azure services.
  • Consul on Google Cloud: Deploying Consul on Google Cloud Platform (GCP) enables you to manage service discovery and networking across GCP resources using Compute Engine.
  • Consul on Kubernetes (Cloud-based): Consul can be installed on managed Kubernetes services (EKS, AKS, GKE) for a cloud-native service mesh solution, providing features like service discovery and traffic management.

Connecting Consul with Martini Server Runtime

  1. Configure Martini's connection properties in the <martini-home>/conf/overrides/override.properties file.

    Configuration properties: The property below is the bare minimum required to get started with the integration. If a port is not specified, the default port 8500 is used.

    consul.url=<your-consul-server>
    
  2. Restart Martini Server Runtime to apply the changes.

  3. Verify that your Martini Server Runtime has logged the following:

    INFO  [ConsulLeadershipElector] This node has become the leader, any non-replicated endpoints will now be started
    

This means that your Martini Server Runtime is now connected to Consul and has gained leadership, allowing the use of Consul-specific functionalities.
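If you want to double-check the connection from the Consul side, a small sketch using the same Ecwid/consul-api library can list the services the local agent currently knows about. This is only an illustrative snippet, assuming a Consul agent reachable on localhost:8500; the exact service names Martini registers depend on your packages and setup.

import java.util.Map;

import com.ecwid.consul.v1.ConsulClient;
import com.ecwid.consul.v1.Response;
import com.ecwid.consul.v1.agent.model.Service;

public class ListAgentServices {
    public static void main(String[] args) {
        // Assumes a Consul agent on localhost:8500; change to match your consul.url.
        ConsulClient client = new ConsulClient("localhost", 8500);

        Response<Map<String, Service>> response = client.getAgentServices();
        response.getValue().forEach((id, service) ->
                System.out.println(id + " -> " + service.getService()
                        + " at " + service.getAddress() + ":" + service.getPort()));
    }
}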

Using Multiple Martini Server Runtime Instances

Consul’s leadership election is a key component that provides high availability and fault tolerance in Consul clusters. The leader is responsible for ensuring consistency and coordinating important tasks such as modifying the service catalog, applying changes to the key/value store, and managing sessions. The other servers in the cluster, referred to as followers, hold a copy of the leader’s state for backup purposes.

To have multiple instances compete for leadership, additional configuration is required. Each Martini Server Runtime instance must be connected to its respective Consul cluster node using the consul.url property. Afterwards, add this second property:

# Name of the session Martini creates for the purposes of leader election. All instances competing for leadership must use the same key.
# By default, this is set to martini-leader.
consul.leader-key=martini-leader

Once configured, if the Martini Server Runtime instance currently acting as leader goes down, one of the other instances is automatically elected as the new leader.
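If you want to see which instance currently holds leadership, one approach is to read the leader key from Consul's KV store and look at the session attached to it. Note that this assumes Martini follows Consul's usual session-and-lock leader-election pattern and uses the leader key as a KV entry; that is an assumption for illustration, not something guaranteed by this document.

import com.ecwid.consul.v1.ConsulClient;
import com.ecwid.consul.v1.kv.model.GetValue;

public class InspectLeaderKey {
    public static void main(String[] args) {
        // Assumes a Consul agent on localhost:8500 and the default leader key.
        ConsulClient client = new ConsulClient("localhost", 8500);

        GetValue value = client.getKVValue("martini-leader").getValue();
        if (value == null) {
            System.out.println("No leader key found (the key name or layout may differ).");
        } else {
            // If the key is lock-acquired in the standard Consul pattern, the
            // session holding it identifies the current leader.
            System.out.println("Session holding the key: " + value.getSession());
            System.out.println("Value: " + value.getDecodedValue());
        }
    }
}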


Service Discovery

Consul's service discovery allows services in distributed systems to discover and communicate with each other. Services register themselves with Consul by providing their name, IP address, port number, and health check details. Other services can then find them through either DNS queries or Consul's HTTP API. The system has integrated health checking that removes unhealthy instances and supports cross-datacenter discovery, making it ideal for cloud and containerized environments. To learn more about service discovery with Martini, you can refer here.
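To make the registration side concrete, the sketch below registers a hypothetical service with a name, address, port, and HTTP health check using the same Ecwid/consul-api library. Martini performs this registration for you automatically; the snippet only illustrates what such a registration looks like, and every name, address, and URL in it is a placeholder.

import com.ecwid.consul.v1.ConsulClient;
import com.ecwid.consul.v1.agent.model.NewService;

public class RegisterExampleService {
    public static void main(String[] args) {
        ConsulClient client = new ConsulClient("localhost", 8500);

        // Describe the service: name, address, and port (all placeholders).
        NewService service = new NewService();
        service.setId("orders-1");
        service.setName("orders");
        service.setAddress("10.0.0.5");
        service.setPort(8080);

        // Attach an HTTP health check so Consul can drop unhealthy instances.
        NewService.Check check = new NewService.Check();
        check.setHttp("http://10.0.0.5:8080/health");
        check.setInterval("10s");
        check.setDeregisterCriticalServiceAfter("60s");
        service.setCheck(check);

        client.agentServiceRegister(service);
        System.out.println("Registered " + service.getName());
    }
}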

Procedure

  1. Configure your Martini Server Runtime in a Cluster Setup
  2. Upload your packages via API Explorer or Martini CLI for each of your Martini Server Runtime Instances
  3. To verify service discovery, configure each node to run only its designated service while ensuring all other services are stopped. Then, query a service on one node and check if it receives responses from services on other nodes.

Whenever a node requires a certain service, it queries the Consul leader for the service's location. The leader responds with the IP address of the node hosting that service, and Martini connects the requesting node directly to it. This simple process makes it easy for services within the distributed system to communicate with one another.
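As an illustration of that lookup-then-connect flow, the sketch below asks Consul for a healthy instance of a hypothetical "orders" service and then calls it directly over HTTP. Martini does the equivalent internally; the service name, request path, and agent address here are placeholders only.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

import com.ecwid.consul.v1.ConsulClient;
import com.ecwid.consul.v1.QueryParams;
import com.ecwid.consul.v1.health.model.HealthService;

public class CallDiscoveredService {
    public static void main(String[] args) throws Exception {
        ConsulClient consul = new ConsulClient("localhost", 8500);

        // Ask Consul for passing (healthy) instances of the "orders" service.
        List<HealthService> instances =
                consul.getHealthServices("orders", true, QueryParams.DEFAULT).getValue();
        if (instances.isEmpty()) {
            System.out.println("No healthy instances registered.");
            return;
        }

        // Connect directly to the first healthy instance Consul returned.
        HealthService.Service svc = instances.get(0).getService();
        URI uri = URI.create("http://" + svc.getAddress() + ":" + svc.getPort() + "/health");

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(HttpRequest.newBuilder(uri).GET().build(),
                      HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " from " + uri);
    }
}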


Advanced properties

These properties are for advanced tuning of Martini; adjust them according to the needs of your applications.

# ACL token to use when making requests to Consul.
consul.acl=

# Read timeout in milliseconds: the maximum period of inactivity between two consecutive data packets while waiting for data.
# A timeout value of 0 is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default).
# By default this is set to 600000 milliseconds (10 minutes).
consul.read-timeout=600000

# Timeout in milliseconds until a connection is established. A timeout value of 0 is interpreted as an infinite timeout. A negative value is interpreted as undefined (system default).
# By default this is set to 10000 milliseconds (10 seconds).
consul.connection-timeout=10000

# How often Consul will check the health of registered services with Martini, in seconds.
# Consul will check whether the service is available by requesting <martini-url>/esbapi/service-registry/available/(soap|rest)/<service|api-namespace>.
# Martini will return with a 200 if the service is available, or a 404 if it's not.
# By default this is set to 10 seconds.
consul.services-interval=10

# How long Consul will wait before removing unavailable services.
# By default this is set to 60 seconds.
consul.services-deregister-after=60

# How often Martini will renew the session it created for the purposes of leader election, in seconds.
# By default this is set to 10 seconds.
consul.leader-interval=10

# Time to Live duration of the session Martini creates for the purposes of leader election (between 10 and 86400 seconds). The session is invalidated (and a new leader chosen) if it is not renewed before the TTL expires.
# By default this is set to 15 seconds.
consul.leader-ttl=15