Deploy Docker Compose Services to Swarm

Docker 1.13 introduced a new version of Docker Compose. The main feature of this release is that it allows services defined in Docker Compose files to be directly deployed to a Docker Engine with Swarm mode enabled. This simplifies the deployment of multi-container applications across multiple hosts.

This blog will use a simple Docker Compose file to show how services are created and deployed in Docker 1.13.

Here is a Docker Compose v2 definition for starting a Couchbase database node:
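    # a minimal sketch; the service name "db" and the stock
    # "couchbase" image are illustrative
    version: "2"
    services:
      db:
        image: couchbase
        ports:
          - 8091:8091
          - 8092:8092
          - 8093:8093
          - 11210:11210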

This definition can be started on a Docker Engine without Swarm mode as:
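    docker-compose up -d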

This will start a single replica of the service defined in the Compose file. This service can be scaled as:
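    # scale the "db" service defined above to two replicas
    docker-compose scale db=2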

If the ports are not exposed, then this would work fine on a single host. If Swarm mode is enabled on the Docker Engine, then docker-compose shows a warning along these lines:
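    WARNING: The Docker Engine you're using is running in swarm mode.

    Compose does not use swarm mode to deploy services to multiple nodes
    in a swarm. All containers will be scheduled on the current node.

    To deploy your application across the swarm, use `docker stack deploy`.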

Docker Compose gives us multi-container applications, but the applications are still restricted to a single host. And that host is a single point of failure.

Swarm mode allows you to create a cluster of Docker Engines. With 1.13, the docker stack deploy command can be used to deploy a Compose file to a swarm.

Here is a Docker Compose v3 definition:
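    # identical to the v2 definition above, except for the version value
    version: "3"
    services:
      db:
        image: couchbase
        ports:
          - 8091:8091
          - 8092:8092
          - 8093:8093
          - 11210:11210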

As you can see, the only change is the value of the version attribute. There are other changes in Docker Compose v3 as well; read about the different Docker Compose versions and how to upgrade from v2 to v3.

Enable swarm mode:
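    docker swarm init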

Other nodes can join this swarm cluster, which would easily allow the multi-container application to be deployed across multiple hosts as well.

Deploy the services defined in Compose file as:
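    # "couchbase" is used as the stack name here
    docker stack deploy --compose-file=docker-compose.yml couchbase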

A default value for the Compose file name would make the command a bit shorter; #30352 should take care of that.

List of services running can be verified using docker service ls command:
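    docker service ls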

The list of containers running within the service can be seen using docker service ps command:
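    docker service ps couchbase_db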

In this case, a single container is running as part of the service. The node is listed as moby, which is the default name of the Docker Engine when running Docker for Mac.

The service can now be scaled as:
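    # run two replicas of the service
    docker service scale couchbase_db=2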

The list of containers can then be seen again using the same docker service ps couchbase_db command.

Note that the containers are named using the format <service-name>_n. Both containers are running on the same host.

Also note that the two containers are independent Couchbase nodes and are not yet configured as a cluster. This has already been explained in Couchbase Cluster using Docker, and a refresh of the steps is coming soon.

A service will typically have multiple containers running, spread across multiple hosts. Docker 1.13 introduces a new command, docker service logs <service-name>, to stream the logs of a service across all containers on all hosts to your console. In our case, the logs can be seen using the following command:
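    docker service logs couchbase_db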

The preamble of each log statement uses the format <container-name>.<container-id>@<host>, followed by the actual log message from the container.

At first glance, attaching the container id may seem redundant. But Docker services are self-healing: if a container dies, the Docker Engine starts another container to maintain the specified number of replicas. This new container will have a new id, and the preamble thus ties each log message to the right container.

So a quick comparison of commands:

                  Docker Compose v2                           Docker Compose v3
 Start services   docker-compose up -d                        docker stack deploy --compose-file=docker-compose.yml <stack-name>
 Scale service    docker-compose scale <service>=<replicas>   docker service scale <service>=<replicas>
 Shutdown         docker-compose down                         docker stack rm <stack-name>
 Multi-host       No                                          Yes

Want to get started with Couchbase? Look at Couchbase Starter Kits.

Want to learn more about running Couchbase in containers?

  • Couchbase on Containers
  • Couchbase Forums
  • Couchbase Developer Portal
  • @couchbasedev and @couchbase

Source: https://blog.couchbase.com/2017/deploy-docker-compose-services-swarm

ZooKeeper for Microservice Registration and Discovery

In a microservice world, multiple services are typically distributed in a PaaS environment. Immutable infrastructure is provided by containers or immutable VM images. Services may scale up and down based upon pre-defined metrics, and the exact address of a service may not be known until the service is deployed and ready to be used.

This dynamic nature of the service endpoint address is handled by service registration and discovery. Each service registers with a broker and provides details about itself, such as its endpoint address. Consumer services then query the broker to find the location of a service and invoke it. There are several ways to register and query services, such as ZooKeeper, etcd, Consul, Kubernetes, Netflix Eureka, and others.

Monolithic to Microservice Refactoring showed how to refactor an existing monolith into a microservice-based application. The User, Catalog, and Order service URIs were defined statically there. This blog will show how to register and discover microservices using ZooKeeper.

Many thanks to Ioannis Canellos (@iocanel) for all the ZooKeeper hacking!

What is ZooKeeper?

ZooKeeper is an Apache project that provides a distributed, eventually consistent hierarchical configuration store.

 

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.

Apache ZooKeeper

So a service can register with ZooKeeper using a logical name, and the configuration information can contain the URI endpoint. It can contain other details as well, such as QoS.

ZooKeeper has a steep learning curve, as explained in Apache ZooKeeper Made Simpler with Curator. So, instead of using ZooKeeper directly, this blog will use Apache Curator.

Curator n ˈkyoor͝ˌātər: a keeper or custodian of a museum or other collection – A ZooKeeper Keeper.

Apache Curator

Apache Curator has several components, and this blog will use the Framework:

The Curator Framework is a high-level API that greatly simplifies using ZooKeeper. It adds many features that build on ZooKeeper and handles the complexity of managing connections to the ZooKeeper cluster and retrying operations.

Apache Curator Framework

ZooKeeper Concepts

ZooKeeper Overview provides a great overview of the main concepts. Here are some of the relevant ones:

  • Znodes: ZooKeeper stores data in a shared hierarchical namespace that is organized like a standard filesystem. The name space consists of data registers – called znodes, in ZooKeeper parlance – and these are similar to files and directories.
  • Node name: Every node in ZooKeeper’s name space is identified by a path. The exact name of a node is a sequence of path elements separated by a slash (/).
  • Client/Server: Clients connect to a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the client will connect to a different server.
  • Configuration data: Each node in a ZooKeeper namespace can have data associated with it, as well as children. ZooKeeper was originally designed to store coordination data, so the data stored at each node is usually small, in the byte-to-kilobyte range.
  • Ensemble: ZooKeeper itself is intended to be replicated over a set of hosts called an ensemble. The servers that make up the ZooKeeper service must all know about each other.
  • Watches: ZooKeeper supports the concept of watches. Clients can set a watch on a znode. A watch will be triggered and removed when the znode changes.

ZooKeeper is a CP system with regard to the CAP theorem: if there is a network partition, it stays consistent but may become unavailable. This can lead to the problems explained in Eureka! Why You Shouldn’t Use ZooKeeper for Service Discovery.

Nevertheless, ZooKeeper is one of the most popular service discovery mechanisms in the microservices world.

Let’s get started!

Start ZooKeeper

  1. Start a ZooKeeper instance in a Docker container.
  2. Verify the ZooKeeper instance using telnet, as shown below. Type the command “ruok” to verify that the server is running in a non-error state; the server will respond with “imok” if it is running. Otherwise it will not respond at all. ZooKeeper has other similar four-letter commands.
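A sketch of both steps, assuming the official zookeeper image from Docker Hub:

    # start ZooKeeper on its default client port
    docker run -d --name zookeeper -p 2181:2181 zookeeper

    # connect and type the "ruok" four-letter command;
    # a healthy server responds with "imok"
    telnet localhost 2181
    ruok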

Service Registration and Discovery

Each service (User, Catalog, and Order in our case) has an eagerly initialized bean that registers and unregisters the service in its lifecycle callback methods. Here is the code from CatalogService:
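A minimal sketch of such a bean, assuming an eagerly initialized @Startup singleton; the class name, the endpoint URI, and the unregisterService method name are illustrative:

    import java.net.URI;
    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;
    import javax.inject.Inject;

    // eagerly initialized when the application starts
    @Singleton
    @Startup
    public class CatalogServiceRegistration {

        @Inject
        @ZooKeeperRegistry
        ServiceRegistry registry;

        @PostConstruct
        void register() {
            // the endpoint URI is hardcoded for now (#24)
            registry.registerService("catalog",
                    URI.create("http://localhost:8080/everest-web/resources/catalog"));
        }

        @PreDestroy
        void unregister() {
            registry.unregisterService("catalog");
        }
    }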

The code is pretty simple: it injects the ServiceRegistry with the @ZooKeeperRegistry qualifier, which is then used to register and unregister the service. Multiple URIs, one for each instance of a stateless service, can be registered under the same logical name.

At this time, the qualifier comes from another Maven module. A cleaner Java EE way would be to move the @ZooKeeperRegistry qualifier to a CDI extension (#20). Then, specifying this qualifier on any REST endpoint would register the service with ZooKeeper (#22). For now, the service endpoint URI is hardcoded as well (#24).

What does the ZooKeeper class look like?

  1. The ZooKeeper class uses constructor injection, with the IP address and port hardcoded (#23). It does the following tasks (a combined sketch of the class follows this list):

    1. Loads ZooKeeper’s host/port from a properties file
    2. Initializes the Curator framework and starts it
    3. Initializes a hashmap to store the URI-name-to-zNode mapping. The zNode is deleted later to unregister the service.
  2. Service registration is done using the registerService method. The code is pretty straightforward:

    1. Create a parent zNode, if needed
    2. Create an ephemeral and sequential node
    3. Add metadata, including URI, to this node
  3. Service discovery is done using the discover method. Again, the code is simple:

    1. Find all children for the path registered for the service
    2. Get the metadata associated with this node, the URI in our case, and return it. The first such node is returned in this case. Different QoS parameters can be attached to the configuration data, which would allow the most appropriate service endpoint to be returned.
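Putting these pieces together, here is a minimal sketch of such a registry built on the Curator Framework. The class name, method signatures, /services path prefix, and properties file name are illustrative assumptions:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.URI;
    import java.nio.charset.StandardCharsets;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ConcurrentHashMap;

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.retry.ExponentialBackoffRetry;
    import org.apache.zookeeper.CreateMode;

    // would implement the ServiceRegistry interface from the services module
    public class ZooKeeperServiceRegistry {

        private final CuratorFramework curator;
        // service name -> zNode path, kept so the node can be deleted on unregister
        private final Map<String, String> servicePaths = new ConcurrentHashMap<>();

        public ZooKeeperServiceRegistry() {
            // load ZooKeeper's host/port from a properties file,
            // defaulting to localhost:2181 (#23)
            Properties props = new Properties();
            try (InputStream in = getClass().getResourceAsStream("/zookeeper.properties")) {
                if (in != null) {
                    props.load(in);
                }
            } catch (IOException e) {
                throw new RuntimeException(e);
            }
            String connectString = props.getProperty("zookeeper.url", "localhost:2181");
            // initialize the Curator framework and start it
            curator = CuratorFrameworkFactory.newClient(connectString,
                    new ExponentialBackoffRetry(1000, 3));
            curator.start();
        }

        public void registerService(String name, URI uri) {
            try {
                // create the parent zNode if needed, then an ephemeral, sequential
                // child whose payload is the service URI
                String path = curator.create()
                        .creatingParentsIfNeeded()
                        .withMode(CreateMode.EPHEMERAL_SEQUENTIAL)
                        .forPath("/services/" + name + "/instance-",
                                uri.toString().getBytes(StandardCharsets.UTF_8));
                servicePaths.put(name, path);
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public void unregisterService(String name) {
            try {
                String path = servicePaths.remove(name);
                if (path != null) {
                    curator.delete().forPath(path);
                }
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }

        public URI discover(String name) {
            try {
                // return the URI stored in the first child registered for the service
                for (String child : curator.getChildren().forPath("/services/" + name)) {
                    byte[] data = curator.getData().forPath("/services/" + name + "/" + child);
                    return URI.create(new String(data, StandardCharsets.UTF_8));
                }
                return null;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }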

Read the ZooKeeper Javadocs for the API.

ZooKeeper watches can be set up to inform the client about the lifecycle of the service (#27). ZooKeeper path caches can provide an optimized implementation for tracking the children nodes (#28).

Multiple Service Discovery Implementations

Our shopping cart application has two service discovery implementations – ServiceDiscoveryStatic and ServiceDiscoveryZooKeeper. The first one has all the service URIs defined statically, and the other one retrieves them from ZooKeeper.

Other means to register and discover services can easily be added by creating a new package in the services module and implementing the ServiceRegistry interface. For example: Snoop, etcd, Consul, or Kubernetes. Feel free to send a PR for any of those.

Run Application

  1. Make sure the ZooKeeper container is running as explained earlier.
  2. Download and run WildFly (a sketch of this and the next step follows this list).
  3. Deploy the application.
  4. Access the application at localhost:8080/everest-web/. Learn more about the application and its different components in the Monolithic to Microservices Refactoring for Java EE Applications blog.
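A sketch of steps 2 and 3, where the WildFly version and the use of the WildFly Maven plugin are assumptions:

    # download WildFly from wildfly.org, unpack it, and start it
    # (the version is illustrative)
    ./wildfly-10.1.0.Final/bin/standalone.sh

    # in another terminal, deploy the application from the project directory,
    # assuming the WildFly Maven plugin is configured in the project
    mvn wildfly:deploy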

Enjoy!