Category Archives: techtip

Automatic Restarting of Pods inside Replication Controller of Kubernetes Cluster


A key feature of Kubernetes is its ability to maintain the “desired state” using declared primitives. The Replication Controller is a key concept that helps achieve this state.

A replication controller ensures that a specified number of pod “replicas” are running at any one time. If there are too many, it will kill some. If there are too few, it will start more.

Replication Controller

Let’s take a look at how to spin up a Replication Controller with two replicas of a Pod. Then we’ll kill one pod and see how Kubernetes starts another Pod automatically.

Start Kubernetes Cluster

  1. The easiest way to start a Kubernetes cluster on Mac OS is using Vagrant (a sketch of both options follows this list):
  2. Alternatively, Kubernetes can be downloaded from github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz, and the cluster can be started as:
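The commands themselves are elided above; here is a minimal sketch, assuming the Vagrant provider and the v1.0.0 tarball mentioned in the post:

    # Option 1: let the installer script download and boot the cluster VMs
    export KUBERNETES_PROVIDER=vagrant
    curl -sS https://get.k8s.io | bash

    # Option 2: start from the downloaded release
    tar xzf kubernetes.tar.gz
    cd kubernetes
    export KUBERNETES_PROVIDER=vagrant
    cluster/kube-up.sh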

Start and Verify Replication Controller and Pods

  1. All configuration files required by Kubernetes to start the Replication Controller are in the kubernetes-java-sample project. Clone the workspace (a consolidated sketch of these steps follows the list):
  2. Start a Replication Controller that has two replicas of a pod, each with a WildFly container:
    The configuration file used is shown:
    The default WildFly Docker image is used here.
  3. Get status of the Pods:
    Notice -w refreshes the status whenever there is a change. The status changes from Pending to Running and then Ready to receive requests.
  4. Get status of the Replication Controller:
    If multiple Replication Controllers are running then you can query for this specific one using the label:
  5. Get name of the running Pods:
  6. Find IP address of each Pod (using the name):
    And of the other Pod as well:
  7. A Pod’s IP address is accessible only inside the cluster. Log in to the minion to access WildFly’s main page hosted by the containers:
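Here is a consolidated sketch of steps 1–7, assuming kubectl 1.0 syntax; the file name, label, and pod name are illustrative:

    kubectl create -f wildfly-rc.yaml   # start the Replication Controller
    kubectl get -w pods                 # watch status change from Pending to Running
    kubectl get rc                      # status of the Replication Controller
    kubectl get rc -l name=wildfly      # query a specific one using the label
    kubectl get pods                    # names of the running pods
    kubectl describe pod wildfly-rc-15xg5 | grep IP   # IP address of one pod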

Automatic Restart of Pods

Let’s delete a Pod and see how a new Pod is automatically created.
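A minimal sketch, using the pod name mentioned below:

    kubectl delete pod wildfly-rc-15xg5   # delete one of the two pods
    kubectl get pods                      # a replacement pod is already starting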

Notice how the Pod with name wildfly-rc-15xg5 was deleted and a new Pod with the name wildfly-rc-0xoms was created.

Finally, delete the Replication Controller:
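Assuming the Replication Controller is named wildfly-rc:

    kubectl delete rc wildfly-rc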

The latest configuration files and detailed instructions are at kubernetes-java-sample.

In the real world, you’ll typically wrap this Replication Controller in a Service and front it with a load balancer. But that’s a topic for another blog!

Enjoy!

Multi-container Applications using Docker Compose and Swarm

Docker Compose to Orchestrate Containers shows how to run two linked Docker containers using Docker Compose. Clustering Using Docker Swarm shows how to configure a Docker Swarm cluster.

This blog will show how to run a multi-container application created using Docker Compose in a Docker Swarm cluster.

Updated versions of Docker Compose and Docker Swarm were released with Docker 1.7.0.

Docker 1.7.0 CLI

Get the latest Docker CLI:

and check the version as:
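The download command is elided above; a sketch for OS X, assuming the get.docker.com build URL used at the time:

    curl -sSL https://get.docker.com/builds/Darwin/x86_64/docker-latest -o /usr/local/bin/docker
    chmod +x /usr/local/bin/docker
    docker version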

Docker Machine 0.3.0

Get the latest Docker Machine as:

and check the version as:
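Similarly, a sketch assuming the 0.3.0 release asset name on GitHub:

    curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_darwin-amd64 -o /usr/local/bin/docker-machine
    chmod +x /usr/local/bin/docker-machine
    docker-machine -v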

Docker Compose 1.3.0

Get the latest Docker Compose as:

and verify the version as:
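A sketch, following the install instructions from the Compose release page:

    curl -L https://github.com/docker/compose/releases/download/1.3.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
    docker-compose --version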

Docker Swarm 0.3.0

Swarm is run as a Docker container and can be downloaded as:
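Since Swarm ships as an image on Docker Hub, the download is a single pull:

    docker pull swarm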

You can learn about Docker Swarm at docs.docker.com/swarm or Clustering using Docker Swarm.

Create Docker Swarm Cluster

The key components of Docker Swarm are shown below:

and explained in Clustering Using Docker Swarm.

  1. The easiest way of getting started with Swarm is by using the official Docker image (a consolidated sketch of the cluster setup follows this list):
    This command returns a discovery token, referred to as <TOKEN> in this document, which is the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    The last line is the <TOKEN>.

    Make sure to note this cluster id now as there is no means to list it later. This should be fixed with #661.

  2. Swarm is fully integrated with Docker Machine, which is the easiest way to get started. Let’s create a Swarm master next:

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, --swarm-master configures the created machine to be Swarm master. Swarm master creation talks to the hosted service on Docker Hub and informs that a master is created in the cluster.

  3. Connect to this newly created master and find some more information about it:

    This will show the output as:

  4. Create a Swarm node:

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified using --swarm-discovery token://... with the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:

    Replace <TOKEN> with the cluster id obtained in the previous step.

  6. List all the nodes created so far:

    This shows the output similar to the one below:

    The machines that are part of the cluster have the cluster’s name in the SWARM column, blank otherwise. For example, “lab” and “summit2015” are standalone machines whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm nodes. There is a total of 4 containers running in this cluster – a Swarm agent on the master and on each node, and an additional swarm-agent-master running on the master.

  8. List nodes in the cluster with the following command:

    This shows the output as:
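Here is a consolidated sketch of the cluster setup above (the machine names match the output described; everything else is standard Docker Machine and Swarm usage):

    docker run --rm swarm create     # prints <TOKEN>, the unique cluster id

    docker-machine create -d virtualbox --swarm --swarm-master \
      --swarm-discovery token://<TOKEN> swarm-master

    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://<TOKEN> swarm-node-01
    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://<TOKEN> swarm-node-02

    docker-machine ls                                   # SWARM column shows cluster membership
    eval "$(docker-machine env --swarm swarm-master)"   # point the client at the cluster
    docker info                                         # nodes, containers, and resources
    docker run --rm swarm list token://<TOKEN>          # list nodes in the cluster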

Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Docker Compose to Orchestrate Containers explains how multi container applications can be easily started using Docker Compose.

  1. Use the docker-compose.yml file explained in that blog to start the containers (a sketch follows this list):
    The docker-compose.yml file looks like:
  2. Check the containers running in the cluster as:
    to see the output as:
  3. “swarm-node-02” is running three containers, so let’s look at the list of containers running there:
    and see the list of running containers as:
  4. The application can then be accessed again using:
    and shows the output as:
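A sketch of the whole sequence; the image names and the REST endpoint are assumptions carried over from the earlier blog:

    # docker-compose.yml
    mysqldb:
      image: mysql:latest
      environment:
        MYSQL_DATABASE: sample
        MYSQL_USER: mysql
        MYSQL_PASSWORD: mysql
        MYSQL_ROOT_PASSWORD: supersecret
    mywildfly:
      image: arungupta/wildfly-mysql-javaee7
      links:
        - mysqldb:db
      ports:
        - 8080:8080

    # start and inspect the containers on the Swarm cluster
    docker-compose up -d
    docker ps    # the NAMES column shows which node runs each container
    curl http://$(docker-machine ip swarm-node-02):8080/employees/resources/employees/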

Latest instructions for this setup are always available at: github.com/javaee-samples/docker-java/blob/master/chapters/docker-swarm.adoc.

Enjoy!

Docker 1.7.0, Docker Machine 0.3.0, Docker Compose 1.3.0, Docker Swarm 0.3.0

Docker 1.7.0 is released (change log) and so it’s time to update Docker Hosts, CLI, and other tools.

Docker 1.7.0

The Docker Host is running inside a Docker Machine and so the machine needs to be upgraded. If the machine is stopped, the upgrade fails with an error:

So start the machine as:

And then upgrade the machine as:
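Both commands together, assuming the machine is named mydocker as in the other posts:

    docker-machine start mydocker
    docker-machine upgrade mydocker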

The machine is anyway stopped to perform an upgrade, and so the need to start the machine seems superfluous (#1399).

Upgrading the host updates .docker/machine/cache/boot2docker.iso. Any previously created machines cache the boot2docker.iso in .docker/machine/machines/<MACHINE-NAME> and so they’ll continue to boot using the same version.

Docker CLI

Update the Docker CLI as:

Now docker version shows the following output:

Note, the client API version (1.7.0) and the server API version (1.7.0) are both shown here.

If you update only the CLI and not the Docker Host, then the following error message is shown:

This error message shows a version mismatch between the client CLI and the Docker Host running in the machine. This will typically happen if the active machine was created a few days ago using an older boot2docker.iso. There seems to be no straightforward way to find out the exact version currently being used (#1398).

There seems to be no way for a new client to talk to the old server (#14077), and thus the host needs to be upgraded. There is a proposal to override the API version of the client (#11486), but at this time there is no ETA for the fix. So the only option is to upgrade the Docker Machine, which will then upgrade to the latest version of Docker.

So upgrading the CLI requires upgrading the machine as well.

Here are the options supported by the Docker CLI:

Docker Machine 0.3.0

This was rather straightforward:

There are lots of new features, including an experimental provisioner for Red Hat Enterprise Linux 7.0.

The version is shown as:

The complete list of commands is:

Docker Compose 1.3.0

Docker Compose can be updated to 1.3.0 as:

The version is shown as:

Two important points to note:

  • At least Docker 1.6.0 is required
  • There are breaking changes from Compose 1.2, so you either need to remove and recreate your containers, or migrate them. Fortunately, docker-compose migrate-to-labels can be used to migrate pre-Compose 1.3.0 containers to the latest format. This will recreate the containers with labels added.

Learn more in Docker Compose to Orchestrate Containers.

Docker Swarm 0.3.0

As of this blog, Docker Swarm 0.3.0 RC3 is available. Clustering Using Docker Swarm provides a good introduction to Docker Swarm and can be used to get started with the latest Docker Swarm release.

34 issues have been fixed since 0.2.0, but the commit notifications for the release candidates since 0.2.0 seem to show no significant changes.

More details on each Docker component will be shared in subsequent blogs.

Enjoy!

ZooKeeper for Microservice Registration and Discovery

In a microservice world, multiple services are typically distributed in a PaaS environment. Immutable infrastructure is provided by containers or immutable VM images. Services may scale up and down based upon certain pre-defined metrics. The exact address of a service may not be known until the service is deployed and ready to be used.

This dynamic nature of the service endpoint address is handled by service registration and discovery. Each service registers with a broker and provides more details about itself, such as the endpoint address. Other consumer services then query the broker to find out the location of a service and invoke it. There are several ways to register and query services, such as ZooKeeper, etcd, consul, Kubernetes, Netflix Eureka, and others.

Monolithic to Microservice Refactoring showed how to refactor an existing monolith to a microservice-based application. User, Catalog, and Order service URIs were defined statically. This blog will show how to register and discover microservices using ZooKeeper.

Many thanks to Ioannis Canellos (@iocanel) for all the ZooKeeper hacking!

What is ZooKeeper?

ZooKeeper is an Apache project and provides a distributed, eventually consistent hierarchical configuration store.

 

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.

Apache ZooKeeper

So a service can register with ZooKeeper using a logical name, and the configuration information can contain the URI endpoint. It can consist of other details as well, such as QoS.

ZooKeeper has a steep learning curve, as explained in Apache ZooKeeper Made Simpler with Curator. So, instead of using ZooKeeper directly, this blog will use Apache Curator.

Curator n ˈkyoor͝ˌātər: a keeper or custodian of a museum or other collection – A ZooKeeper Keeper.

Apache Curator

Apache Curator has several components, and this blog will use the Framework:

The Curator Framework is a high-level API that greatly simplifies using ZooKeeper. It adds many features that build on ZooKeeper and handles the complexity of managing connections to the ZooKeeper cluster and retrying operations.

Apache Curator Framework

ZooKeeper Concepts

ZooKeeper Overview provides a great overview of the main concepts. Here are some of the relevant ones:

  • Znodes: ZooKeeper stores data in a shared hierarchical namespace that is organized like a standard filesystem. The name space consists of data registers – called znodes, in ZooKeeper parlance – and these are similar to files and directories.
  • Node name: Every node in ZooKeeper’s name space is identified by a path. Exact name of a node is a sequence of path elements separated by a slash (/).
  • Client/Server: Clients connect to a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the client will connect to a different server.
  • Configuration data: Each node in a ZooKeeper namespace can have data associated with it as well as children. ZooKeeper was originally designed to store coordination data, so the data stored at each node is usually small, in the less-than-a-KB range.
  • Ensemble: ZooKeeper itself is intended to be replicated over a set of hosts called an ensemble. The servers that make up the ZooKeeper service must all know about each other.
  • Watches: ZooKeeper supports the concept of watches. Clients can set a watch on a znode. A watch will be triggered and removed when the znode changes.

ZooKeeper is a CP system with regard to the CAP theorem. This means that if there is a partition failure, it will be consistent but not available. This can lead to problems that are explained in Eureka! Why You Shouldn’t Use ZooKeeper for Service Discovery.

Nevertheless, ZooKeeper is one of the most popular service discovery mechanisms used in the microservices world.

Let’s get started!

Start ZooKeeper

  1. Start a ZooKeeper instance in a Docker container (a sketch follows this list):
  2. Verify the ZooKeeper instance by using telnet as:
    Type the command “ruok” to verify that the server is running in a non-error state. The server will respond with “imok” if it is running:
    Otherwise it will not respond at all. ZooKeeper has other similar four-letter commands.
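A minimal sketch; the image name is an assumption, and the host address comes from docker-machine if you are on Boot2Docker:

    docker run -d --name zookeeper -p 2181:2181 jplock/zookeeper
    telnet $(docker-machine ip mydocker) 2181
    # type "ruok" at the telnet prompt; a healthy server answers "imok"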

Service Registration and Discovery

Each service, User, Catalog, and Order in our case, has an eagerly initialized bean that registers and unregisters the service as part of lifecycle initialization methods. Here is the code from CatalogService:

The code is pretty simple: it injects the ServiceRegistry class, with the @ZooKeeperRegistry qualifier. This is then used to register and unregister the service. Multiple URIs, one for each instance of a stateless service, can be registered under the same logical name.

At this time, the qualifier comes from another Maven module. A cleaner Java EE way would be to move the @ZooKeeperRegistry qualifier to a CDI extension (#20). This qualifier, when specified on any REST endpoint, would then register the service with ZooKeeper (#22). For now, the service endpoint URI is hardcoded as well (#24).

What does the ZooKeeper class look like?

  1. The ZooKeeper class uses constructor injection, hardcoding the IP address and port (#23):
    It does the following tasks:

    1. Loads ZooKeeper’s host/port from a properties file
    2. Initializes Curator framework and starts it
    3. Initializes a hashmap to store the URI name to zNode mapping. This node is deleted later to unregister the service.
  2. Service registration is done using the registerService method as:
    The code is pretty straightforward:

    1. Create a parent zNode, if needed
    2. Create an ephemeral and sequential node
    3. Add metadata, including URI, to this node
  3. Service discovery is done using the discover method as:
    Again, simple code:

    1. Find all children for the path registered for the service
    2. Get the metadata associated with this node, the URI in our case, and return it. The first such node is returned in this case. Different QoS parameters can be attached to the configuration data, which would allow returning the appropriate service endpoint.

Read the ZooKeeper Javadocs for the API.

ZooKeeper watches can be set up to inform the client about the lifecycle of the service (#27). ZooKeeper path caches can provide an optimized implementation of the children nodes (#28).

Multiple Service Discovery Implementations

Our shopping cart application has two service discovery implementations – ServiceDiscoveryStatic and ServiceDiscoveryZooKeeper. The first one has all the service URIs defined statically, and the other one retrieves them from ZooKeeper.

Other means to register and discover can be easily added by creating a new package in the services module and implementing the ServiceRegistry interface. For example, Snoop, etcd, Consul, and Kubernetes. Feel free to send a PR for any of those.

Run Application

  1. Make sure the ZooKeeper container is running as explained earlier.
  2. Download and run WildFly (a sketch of these steps follows the list):
  3. Deploy the application:
  4. Access the application at localhost:8080/everest-web/. Learn more about the application and different components in the Monolithic to Microservices Refactoring for Java EE Applications blog.
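A sketch, assuming the WildFly 8.2.0.Final distribution and the standard wildfly-maven-plugin deploy goal:

    curl -O http://download.jboss.org/wildfly/8.2.0.Final/wildfly-8.2.0.Final.zip
    unzip -q wildfly-8.2.0.Final.zip
    ./wildfly-8.2.0.Final/bin/standalone.sh &

    # from the application workspace
    mvn wildfly:deploy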

Enjoy!

Docker Tools in Eclipse

Upcoming Docker Tooling for Eclipse gave a preview of Docker Tooling coming in Eclipse. This Tech Tip will show how to get started with it.


NOTE: This is pretty bleeding edge and so some of the features may be half baked. But we are looking for all the feedback!

The Docker tooling is aimed at providing, at minimum, the same basic features as the command-line interface, while also providing some advantages from having access to a full-fledged UI.

Install Docker Tools Plugins

  • Download and install JBoss Developer Studio 9.0 Nightly, taking the defaults throughout the installation. Alternatively, download the latest Eclipse Mars build and configure the JBoss Tools plugin from the update site http://download.jboss.org/jbosstools/updates/nightly/mars/.
  • Open JBoss Developer Studio 9.0 Nightly or Eclipse Mars.
  • Add a new site using the menu items: Help > Install New Software… > Add…. Specify the Name: as “Docker Nightly” and Location: as http://download.eclipse.org/linuxtools/updates-docker-nightly/.
  • Expand Linux Tools, then select Docker Client and Docker Tooling.
  • Click on Next >, Next >, accept the license agreement, and click on Finish. This will complete the installation of the plugins. Restart the IDE for the changes to take effect.

Docker Explorer

The Docker Explorer provides a wizard to establish a new connection to a Docker daemon. This wizard can detect default settings if the user’s machine runs Docker natively (such as in Linux) or in a VM using Boot2Docker (such as in Mac or Windows). Both Unix sockets on Linux machines and the REST API on other OSes are detected and supported. The wizard also allows remote connections using custom settings.

  • Use the menu Window, Show View, Other…. Type “docker” to see the matching views.
  • Select Docker Explorer to open the explorer.
  • Click on the link in this window to create a connection to the Docker Host. Specify the settings as shown. Make sure to get the IP address of the Docker Host using the docker-machine ip command. Also, make sure to specify the correct directory for .docker on your machine.
  • Click on Test Connection to check the connection. Click on OK and Finish to exit the wizard.
  • The Docker Explorer itself is a tree view that handles multiple connections and provides users with a quick overview of the existing images and containers.
  • Customize the view by clicking on the arrow in the toolbar.
  • Built-in filters can show/hide intermediate and dangling images, as well as stopped containers.

Docker Images

The Docker Images view lists all images in the Docker host selected in the Docker Explorer view. This view allows the user to manage images, including:

  • Pull/push images from/to the Docker Hub Registry (other registries will be supported as well, #469306)
  • Build images from a Dockerfile
  • Create a container from an image

Let’s take a look at it.

  • Use the menu Window, Show View, Other…, and select Docker Images. It shows the list of images on the Docker Host.
  • Right-click on the image ending with wildfly:latest and click on the green arrow in the toolbar. This will show a launch wizard. By default, all exposed ports from the image are mapped to random ports on the host interface. This setting can be changed by unselecting the first checkbox and specifying the exact port mapping. Click on Finish to start the container.
  • When the container is started, all logs are streamed into the Eclipse Console.

Docker Containers

Docker Containers view lets the user manage the containers. The view toolbar provides commands to start, stop, pause, unpause, display the logs and kill containers.

  • Use the menu Window, Show View, Other…, and select Docker Containers. It shows the list of running containers on the Docker Host.
  • Pause the container by clicking on the pause button in the toolbar (#469310). Show the complete list of containers by clicking on the View Menu, Show all containers.

    Docker Eclipse Tools

  • Select the paused container, and click on the green arrow in the toolbar to restart the container.
  • Right-click on any running container and select Display Log to view the log for this container.

    Docker Eclipse Tools

Information and Inspect on Images and Containers

Eclipse Properties view is used to provide more information about the containers and images.

  • Just open the Properties View and click on a Connection, Container, or Image in any of the Docker Explorer View, Docker Containers View, or Docker Images View. This will fill in data in the Properties view.
  • Info view is shown as:

    Docker Eclipse Tools

  • Inspect view is shown as:

    Docker Eclipse Tools

The code is hosted in Linux Tools project.

File your bugs at: bugs.eclipse.org/bugs/enter_bug.cgi?product=Linux%20Tools and use “Docker” component. Talk to us on IRC.

Enjoy!

Java EE, Docker and Maven (Tech Tip #89)

Java EE applications are typically built and packaged using Maven. For example, github.com/javaee-samples/javaee7-docker-maven is a trivial Java EE 7 application and shows the Java EE 7 dependency:

And the two Maven plugins that compile the source and build the WAR file:

This application can then be deployed to a Java EE 7 container, such as WildFly, using the wildfly-maven-plugin:

Tests can be invoked using Arquillian, again using Maven. So if you were to package this application as a Docker image and run it inside a Docker container, there should be a mechanism to integrate this seamlessly in the Maven workflow.

Docker Maven Plugin

Meet docker-maven-plugin!

This plugin allows you to manage Docker images and containers from your pom.xml. It comes with predefined goals:

Goal            Description
docker:start    Create and start containers
docker:stop     Stop and destroy containers
docker:build    Build images
docker:push     Push images to a registry
docker:remove   Remove images from local docker host
docker:logs     Show container logs

Introduction provides a high level introduction to the plugin including building images, running containers and configuration.

Run Java EE 7 Application as Docker Container using Maven

TLDR;

  1. Create and Configure a Docker Machine as explained in Docker Machine to Setup Docker Host
  2. Clone the workspace as: git clone https://github.com/javaee-samples/javaee7-docker-maven.git
  3. Build the Docker image as: mvn package -Pdocker
  4. Run the Docker container as: mvn install -Pdocker
  5. Find out IP address of the Docker Machine as: docker-machine ip mydocker
  6. Access your application (a consolidated sketch of these steps follows)
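The same steps as a copy-paste script; the exact application URL is not shown in this post, so the final check just hits the server root:

    docker-machine create -d virtualbox mydocker
    eval "$(docker-machine env mydocker)"
    git clone https://github.com/javaee-samples/javaee7-docker-maven.git
    cd javaee7-docker-maven
    mvn package -Pdocker    # docker:build is bound to the package phase
    mvn install -Pdocker    # docker:start is bound to the install phase
    curl http://$(docker-machine ip mydocker):8080/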

Docker Maven Plugin Configuration

Let’s look a little deeper at our sample application.

pom.xml is updated to include docker-maven-plugin as:

Each image configuration has three parts:

  • Image name and alias
  • <build> that defines how the image is created. The base image, build artifacts and their dependencies, ports to be exposed, etc., to be included in the image are specified here. The assembly descriptor format is used to specify the artifacts to be included and is defined in the src/main/docker directory. assembly.xml in our case looks like:
  • <run> that defines how the container is run. Ports that need to be exposed are specified here.

In addition, package phase is tied to docker:build goal and install phase is tied to docker:start goal.

There are four competing docker-maven-plugins; you can read more details in the shootout on which one serves your purpose the best.

How are you creating your Docker images from existing applications?

Enjoy!

 

Deploying Java EE Application to Docker Swarm Cluster (Tech Tip #88)

What is Docker Swarm?

Docker Swarm provides native clustering to Docker. Clustering using Docker Swarm 0.2.0 provides a basic introduction to Docker Swarm, and how to create a simple three-node cluster. As a refresher, the key components of Docker Swarm are shown below:

In short, Swarm Manager is a pre-defined Docker Host, and is a single point for all administration. Additional Docker hosts are identified as Nodes and communicate with the Manager using TCP. By default, Swarm uses hosted Discovery Service, based on Docker Hub, using tokens to discover nodes that are part of a cluster. Each node runs a Node Agent that registers the referenced Docker daemon, monitors it, and updates the Discovery Service with the node’s status. The containers run on a node.

That blog provides complete details, but a quick summary to create the cluster is shown below:

Listing the cluster shows:

It has one master and two nodes.

Deploy a Java EE application to Docker Swarm

All hosts in the cluster are accessible using a single, virtual host. Swarm serves the standard Docker API, so any tool that communicates with a single Docker host can scale to multiple Docker hosts by communicating with this virtual host.

Docker Container Linking Across Multiple Hosts explains how to link containers across multiple Docker hosts. It deploys a Java EE 7 application to WildFly on one Docker host, and connects it with a MySQL container running on a different Docker host. We can deploy both of these containers using the virtual host, and they will then be deployed to the Docker Swarm cluster.

Let’s get started!

MySQL on Docker Swarm

  1. Start the MySQL container (a sketch of these steps follows the list):
  2. Status of the container can be seen as:
    It shows the container is running on swarm-node-01.

    Make sure you are connected to the Docker Swarm cluster using eval $(docker-machine env --swarm swarm-master).

  3. Find the IP address of the host where this container is started:

    Note the IP address of the node where the MySQL server is running. This will be used when starting the WildFly application server later.

    PS: Filtering by name seems to not return accurate results (#10897).
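A sketch of these steps; the environment values are assumptions:

    docker run -d --name mysqldb \
      -e MYSQL_DATABASE=sample -e MYSQL_USER=mysql \
      -e MYSQL_PASSWORD=mysql -e MYSQL_ROOT_PASSWORD=supersecret mysql
    docker ps                         # NAMES column shows e.g. swarm-node-01/mysqldb
    docker-machine ip swarm-node-01   # IP address of the node running MySQL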

WildFly on Docker Swarm

  1. Start the WildFly application server by passing the IP address of the host and the port on which the MySQL server is running (a sketch follows this list):

  2. Status of the container can be seen as:

    It shows the container is running on swarm-node-02. IP address of the host is also shown in the PORTS column.

    As explained in Tech Tip #69, the JDBC URL of the data source uses the specified IP address and port for connecting with the MySQL server. However, passing the IP address is very brittle as the MySQL server may restart on a different Docker host. This is filed as #773.

  3. Access the application at:

    This is using the IP address of the host where the container is started.
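A sketch; the image name, environment variable names, and application path are assumptions carried over from the container-linking blog:

    docker run -d -p 8080:8080 \
      -e MYSQL_HOST=<mysql-node-ip> -e MYSQL_PORT=3306 \
      arungupta/wildfly-mysql-javaee7
    docker ps
    curl http://$(docker-machine ip swarm-node-02):8080/employees/resources/employees/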

Enjoy!

 

JDK 9 REPL: Getting Started (Tech Tip #87)

Conferences are a great place to meet Java luminaries. Devoxx France was one such opportunity to meet the Java language architect, ex-colleague, and an old friend – Brian Goetz (@briangoetz). We talked about JDK 9 and he was raving about the REPL. He mentioned that even though there are a lot of significant features in Java SE 9, such as modularity and the HTTP/2 client, this tool is going to be talked about most often. The statement makes sense since it will make exploration of Java APIs, prototyping, demos in conferences, and similar tasks a lot simpler. This blog is coming out of our discussion there and his strong vote for the REPL!

Read-Evaluate-Print-Loop has been around in Lisp, Python, Ruby, Groovy, Clojure, and other languages for a while. The Unix shell is a REPL: it reads shell commands, evaluates them, prints the output, and goes back in the loop to do the same thing.

JDK 9 REPL

You can read all about REPL in JDK 9 in JEP 222. Summary from the JEP is:

Provide an interactive tool which evaluates declarations, statements, and expressions of the Java Programming Language: that is, provide a Read-Evaluate-Print Loop (REPL) for the Java Programming Language. Also, provide an API on which the tool is built, enabling external tools to supply this functionality.

JEP 222

The motivation is also clearly spelt out in the JEP:

Without the ceremony of class Foo { public static void main(String[] args) { … } }, learning and exploration is streamlined.

JEP 222

JEP 222 targets to ship the REPL with JDK 9, but openjdk.java.net/projects/jdk9 does not list it as “targeted” or “proposed to target”. Seems like a documentation bug 😉

 

As of JDK 9 build 61, REPL is not integrated, and needs to be built separately. Eventually, at some time before JDK 9 is released, this tool will be integrated in the build.

Let’s see what it takes to get it running on OSX. This blog followed the Java 9 REPL – Getting Started Guide to build and run the REPL. In addition, it provides complete log output from the commands, which might be helpful for some.

Let’s get started!

Install JDK 9

  1. Download the latest build, 61 at the time of this writing.
  2. Set up JAVA_HOME as shown in the sketch after this list:
    More details on setting JAVA_HOME on OSX are here.
  3. Verify the version:
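A sketch for OS X; the -v argument assumes the early-access JDK 9 registers as version 1.9:

    export JAVA_HOME=$(/usr/libexec/java_home -v 1.9)
    $JAVA_HOME/bin/java -version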

Checkout and Install jline2

jline2 is a Java library for handling console input. Check it out:

And then build it:
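A sketch of both steps, assuming Maven is installed:

    git clone https://github.com/jline/jline2.git
    cd jline2
    mvn install    # produces target/jline-2.x.jar, referenced as JLINE2LIB later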

Clone and Build JDK 9 REPL

The OpenJDK codename for the project is Kulla, which means “The God of Builders”. The planned name for the tool is jshell.

  1. Check out the workspace (a sketch of these steps follows the list):
  2. Get the sources:
  3. Edit the langtools/repl/scripts/compile.sh script such that it looks like:
    Notice, the only edits are #!/bin/sh for OSX and adding JLINE2LIB pointing to the location of your previously compiled jline2 workspace. javac is picked from JAVA_HOME, which is referring to JDK 9.
  4. Compile the REPL tool by invoking the script from langtools/repl directory:
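A sketch of the checkout and build, assuming the Mercurial repository layout the Kulla project used at the time:

    hg clone http://hg.openjdk.java.net/kulla/dev kulla
    cd kulla
    sh get_source.sh
    # edit langtools/repl/scripts/compile.sh as described above, then:
    cd langtools/repl
    sh scripts/compile.sh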

Run JDK 9 REPL

  1. Edit the langtools/repl/scripts/run.sh script such that it looks like:
    Notice, the only edits are #!/bin/sh for OSX and adding JLINE2LIB.
  2. Run REPL as:

JDK 9 REPL Hello World

Unlike the bouncing ball or dancing Duke that was used to introduce Java, we’ll just use the conventional Hello World for the REPL.

Run “Hello World” as:
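An illustrative session; the tool’s prompt is shown as ->:

    -> System.out.println("Hello World")
    Hello World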

Voila!

No public static void main, no class creation, no ceremony, just clean and simple Java code. The entered text is called a “snippet”.

The complete Java code can be seen using /list all and looks like:

This snippet can be saved to a file as:

Note this is not a Java file. The saved snippet is exactly what was entered:

And the tool can be exited as:

Or you can just hit Ctrl+C.

Complete list of commands can be easily seen:
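An illustrative recap of the commands used above; /help prints the authoritative list:

    /list all     -- show all snippets, including any implicit ones
    /save <file>  -- save the entered snippets to a file
    /help         -- complete list of commands
    /exit         -- leave the tool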

JDK 9 REPL Next Steps and Feedback

Follow the REPL Tutorial to learn more about the tool’s capability. Here is a quick overview:

  • Accepts Java statements, variable, method, and class definitions, imports, and expressions
  • Commands for settings and to display information, such as /list to display the list of snippets, /vars to display the list of variables, /save to save your snippets, /open to read them back in.
  • History of snippets is available, snippets can be edited by number, and much more

Here is an RFE that would be useful:

  • Export a snippet as full blown Java class

A subsequent blog will showcase how this could be used for playing with a Java EE application. How would you use REPL?

Discuss the project/issues on kulla-dev.

Keep Calm and REPL

Enjoy!

WildFly 9 on NetBeans, Eclipse, IntelliJ, OpenShift, and Maven (Tech Tip #86)


WildFly 9 CR1 was recently released. Lots of cool features are included:

  • Intelligent load balancing
  • HTTP/2 and SPDY support
  • A new offline CLI mode
  • Graceful single node shutdown
  • A new Servlet-only distribution

And this is above the usual Java EE 7 compliance!

This blog is a quick check to verify that it works in all three major IDEs and OpenShift.

WildFly 9 and NetBeans

Let’s start with NetBeans 8.0.x first. The screenshot shows WildFly 9 CR1 configured in NetBeans and started. The log is shown in the console.

WildFly 9 CR1 on NetBeans

Complete instructions to setup WildFly in NetBeans are in NetBeans 8 and WildFly 8.

WildFly 9 and Eclipse

Getting Started with JBoss Tools and WildFly 8 shows how to configure WildFly with JBoss Tools. Here are the series of snapshots that shows configuring WildFly 9 in JBoss Tools with Eclipse Mars M6.

A new experimental runtime …

WildFly 9 CR1 Experimental

Specify the directory …

WildFly 9 CR1 Eclipse New Runtime

Now WildFly 9 is configured as a Server in Eclipse …

WildFly 9 CR1 Eclipse Servers

And finally the server is up and running …

WildFly 9 CR1 Eclipse Console

Complete details, including download and update center coordinates, are explained at JBoss Tools Alpha 2 for Eclipse Mars.

WildFly 9 and IntelliJ

WildFly 8 and IntelliJ IDEA Screencast provides complete details on how to set up IntelliJ with WildFly. The snapshot below shows WildFly 9 configured in IntelliJ 14.1.2.

WildFly 9 CR1 with IntelliJ 14

WildFly 9 and OpenShift

Creating an OpenShift application is pretty straightforward as well:

This creates a new application and uses WildFly 9 as the underlying application server. Complete details about the OpenShift cartridge are at github.com/openshift-cartridges/openshift-wildfly-cartridge/tree/wildfly-9. You can also find out how to create an OpenShift application from an existing application and how to connect to this WildFly instance using JBoss CLI.

WildFly 8 CR1 on OpenShift also provides more details.

WildFly 9 and Maven

WildFly Maven Plugin provides the latest information about how to get started with the WildFly Maven plugin.

But you just need to fire up a WildFly server as:

And then deploy the Java EE 7 Movieplex application as:
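The two commands are elided above; a sketch using the plugin’s standard goals:

    mvn wildfly:start     # fires up a WildFly server, downloading it if needed
    mvn wildfly:deploy    # deploys the application to the running server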

And the plugin definition is very simple:

Enjoy!

Clustering Using Docker Swarm 0.2.0 (Tech Tip #85)

One of the key updates as part of Docker 1.6 is Docker Swarm 0.2.0. Docker Swarm solves one of the fundamental limitations of Docker where the containers could only run on a single Docker host. Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host.

This Tech Tip will show how to create a cluster across multiple hosts with Docker Swarm.

Docker Swarm

A good introduction to Docker Swarm is by @aluzzardi and @vieux from Container Camp:

Key Components of Docker Swarm

Docker Swarm Cluster

Swarm Manager: Docker Swarm has a Master or Manager, that is a pre-defined Docker Host, and is a single point for all administration. Currently only a single instance of the manager is allowed in the cluster. This is a SPOF for high-availability architectures, and additional managers will be allowed in a future version of Swarm with #598.

Swarm Nodes: The containers are deployed on Nodes that are additional Docker Hosts. Each Swarm Node must be accessible by the manager, and each node must listen on the same network interface (TCP port). Each node runs a node agent that registers the referenced Docker daemon, monitors it, and updates the discovery backend with the node’s status. The containers run on a node.

Scheduler Strategy: Different scheduler strategies (binpack, spread, and random) can be applied to pick the best node to run your container. The default strategy is spread, which optimizes for the node with the least number of running containers. There are multiple kinds of filters, such as constraints and affinity. Together these should allow for a decent scheduling algorithm.

Node Discovery Service: By default, Swarm uses a hosted discovery service, based on Docker Hub, using tokens to discover nodes that are part of a cluster. However, etcd, consul, and zookeeper can also be used for service discovery. This is particularly useful if there is no access to the Internet, or you are running the setup in a closed network. A new discovery backend can be created as explained here. It would be useful to have the hosted Discovery Service inside the firewall, and #660 will discuss this.

Standard Docker API: Docker Swarm serves the standard Docker API and thus any tool that talks to a single Docker host will seamlessly scale to multiple hosts. That means if you were using shell scripts with the Docker CLI to configure multiple Docker hosts, the same CLI can now talk to the Swarm cluster, and Docker Swarm will act as a proxy and run the commands on the cluster.

There are lots of other concepts but these are the main ones.

TL;DR Here is a simple script that will create a boilerplate cluster with a master and two nodes:
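A sketch of such a script; the machine names are illustrative:

    #!/bin/sh
    TOKEN=$(docker run --rm swarm create)
    docker-machine create -d virtualbox --swarm --swarm-master \
      --swarm-discovery token://$TOKEN swarm-master
    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://$TOKEN swarm-node-01
    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://$TOKEN swarm-node-02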

Let’s dig into the details now!

Create Swarm Cluster

Create a Swarm cluster as:

This command returns a token, which is the unique cluster id. It will be used when creating the master and nodes later. As mentioned earlier, this cluster id is returned by the hosted discovery service on Docker Hub.

Make sure to note this cluster id now as there is no means to list it later. #661 should fix this.

Create Swarm Master

Swarm is fully integrated with Docker Machine, which is the easiest way to get started on OSX.

  1. Create Swarm master as:
    --swarm configures the machine with Swarm, --swarm-master configures the created machine to be Swarm master. Make sure to replace cluster id after token:// with that obtained in the previous step. Swarm master creation talks to the hosted service on Docker Hub and informs that a master is created in the cluster.

    There should be an option to make an existing machine a Swarm master. This is reported as #1017.

  2. List all the running machines as:

    Notice, how swarm-master is marked as master.

    Seems like the cluster name is derived from the master’s name. There should be an option to specify the cluster name, likely during cluster creation. This is reported as #1018.

  3. Connect to this newly created master and find some more information about it:

Create Swarm Nodes

  1. Create a swarm node as:

    Once again, node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified using --swarm-discovery token://... with the cluster id obtained earlier.

  2. Create another Swarm node as:

  3. List all the existing Docker machines:

    The machines that are part of the cluster have the cluster’s name in the SWARM column, blank otherwise. For example, mydocker is a standalone machine whereas all other machines are part of the swarm-master cluster. The Swarm master is also identified by (master) in the SWARM column.

  4. Connect to the Swarm cluster and find some information about it:

    There are 3 nodes – one Swarm master and 2 Swarm nodes. There is a total of 4 containers running in this cluster – a Swarm agent on the master and on each node, and an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers:

  5. Configure the Docker client to connect to Swarm cluster and check the list of running containers:

    No application containers are running in the cluster, as expected.

  6. List the nodes in the cluster as:

A subsequent blog will show how to run multiple containers across hosts on this cluster, and also look into different scheduling strategies.

Scaling Docker with Swarm has good details.

Swarm is not fully integrated with Docker Compose yet. But what would be really cool is when I can specify all the Docker Machine descriptions in docker-compose.yml, in addition to the containers. Then docker-compose up -d would setup the cluster and run the containers in that cluster.

Docker 1.6 released – Docker Machine 0.2.0 (Tech Tip #84)

Docker 1.6 was released yesterday. The key highlights are:

  • Container and Image Labels allow attaching user-defined metadata to containers and images (blog post)
  • Docker Windows Client (blog post)
  • Logging Drivers allow you to send container logs to other systems such as Syslog or a third party. This is available as a new option to docker run, --log-driver, which has three options: json-file (the default, and same as the old functionality), syslog, and none. (pull request)
  • Content Addressable Image Identifiers simplifies applying patches and updates (docs)
  • Custom cgroups using --cgroup-parent allow defining custom resources for those cgroups and putting containers under a common parent group (pull request)
  • Configurable ulimit settings for all containers using --default-ulimit (pull request)
  • Applying Dockerfile instructions when committing or importing can be done using commit --change and import --change. It allows specifying standard changes to be applied to the new image (docs)
  • Changelog

In addition, Registry 2.0, Machine 0.2, Swarm 0.2, and Compose 1.2 are also released.

This blog will show how to get started with Docker Machine 0.2.0. Subsequent blogs will show how to use Docker Swarm 0.2.0 and Compose 1.2.

Download Docker Client

Docker Machine takes you from zero-to-Docker on a host with a single command. This host could be your laptop, in the cloud, or in your data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

It works with different drivers such as Amazon, VMWare, and Rackspace. The easiest to start on a local laptop is to use VirtualBox driver. More details on configuring Docker Machine in the next section. But in order for Docker commands to work without having to use SSH into the VirtualBox image, we need to install Docker CLI.

Let’s do that!

If you have installed Boot2Docker separately, then there is Docker CLI included in the VM. But this approach would allow you to directly call multiple hosts from your local machine.

Docker Machine 0.2.0

Learn more details about Docker Machine and how to get started with version 0.1.0. Docker Machine 0.2.0 was released with Docker 1.6. This section will talk about how to use it and configure it on Mac OS X.

  1. Download Docker Machine 0.2.0 (a consolidated sketch of these steps follows the list):
  2. Verify the version:
  3. Download and install the latest VirtualBox.
  4. Create a Docker host using VirtualBox provider:
  5. Setup client by giving the following command in terminal:
  6. List the Docker Machine instances running:
  7. List Docker images and containers:
    Note, there are no existing images or containers.
  8. Run a trivial Java EE 7 application on WildFly using arungupta/javaee7-hol image:
  9. Find IP address of the Docker host:
  10. Access the application at http://192.168.99.100:8080/movieplex7/ to see the output as shown.
  11. List the images again:
    And the containers:
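The same flow as a consolidated sketch; the asset URL and image behavior are assumed from the respective release and Docker Hub pages:

    curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_darwin-amd64 \
      -o /usr/local/bin/docker-machine
    chmod +x /usr/local/bin/docker-machine
    docker-machine -v

    docker-machine create -d virtualbox mydocker
    eval "$(docker-machine env mydocker)"
    docker-machine ls
    docker images; docker ps -a               # both empty on a fresh host
    docker run -d -p 8080:8080 arungupta/javaee7-hol
    docker-machine ip mydocker                # e.g. 192.168.99.100
    docker images; docker ps                  # image pulled, container running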

Enjoy!

Docker MySQL Persistence (Tech Tip #83)

One of the recipes in 9 Docker recipes for Java developers is using a MySQL container with WildFly. Docker containers are ephemeral, and so any state stored in them is gone after they are terminated and removed. So even though a MySQL container can be used as explained in the recipe, and DDL/DML commands can be used to persist data, that state is lost, or at least not accessible, after the container is terminated and removed.

This blog shows different approaches to Docker MySQL persistence – across container restarts and accessible from multiple containers.

Default Data Location of MySQL Docker Container

Let’s see the default location where the MySQL Docker container stores its data.

Start a MySQL container as:

And inspect as:

Then it shows the anonymous volumes:
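Here are those two commands as a sketch; the container name and password are illustrative:

    docker run -d --name mysqldb -e MYSQL_ROOT_PASSWORD=supersecret mysql
    docker inspect --format '{{ .Volumes }}' mysqldb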

If you are using Boot2Docker, then the /mnt/sda1 directory is used for storing images, containers, and data. This directory is from the Boot2Docker virtual machine filesystem. This is clarified in the Docker docs as well and is worth repeating here:

Note: If you are using Boot2Docker, your Docker daemon only has limited access to your OSX/Windows filesystem. Boot2Docker tries to auto-share your /Users (OSX) or C:\Users (Windows) directory – and so you can mount files or directories using docker run -v /Users/<path>:/<container path> ... (OSX) or docker run -v /c/Users/<path>:/<container path ... (Windows). All other paths come from the Boot2Docker virtual machine’s filesystem.

You can view this mounted directory on Boot2Docker by logging into the VM as:

And then view the directory listing as:

MySQL Data Across Container Restart – Anonymous Volumes

Anonymous volumes, i.e. volumes created by a container and which are not explicitly mounted, are container specific. They stay around unless explicitly deleted using the docker rm -v command. This means a new anonymous volume is mounted for a new container even though the previous volume may not be deleted. The volume still lives on the Docker host even after the container is terminated and removed. An anonymous volume created by one MySQL container is not accessible to another MySQL container. This means data cannot be shared between different data containers.

Let’s understand this using code; a consolidated sketch of the experiment follows, and the rest of this section walks through it step by step.
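Here is the experiment end to end; names and passwords are illustrative:

    # start a MySQL container (an anonymous volume backs /var/lib/mysql)
    docker run -d --name mysqldb -e MYSQL_ROOT_PASSWORD=supersecret mysql

    # log in to the container and create a table
    docker exec -it mysqldb bash
    mysql -u root -p
    # mysql> CREATE DATABASE sample; USE sample; CREATE TABLE t (id INT);

    docker stop mysqldb && docker start mysqldb   # restart: the table survives
    docker rm -f -v mysqldb                       # remove the container and its volume
    docker run -d --name mysqldb -e MYSQL_ROOT_PASSWORD=supersecret mysql
    # SHOW TABLES now comes back empty: a fresh anonymous volume was mounted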

Start a MySQL container as:

Log in to the container:

Connect to the MySQL instance, and create a table, as:

Stop the container:

Restart the container:

Now when you connect to the MySQL container, the database table is shown correctly. This shows that anonymous volumes can persist state across container restarts.

Inspect the container:

And it correctly shows the same anonymous volume from /mnt/sda1 directory.

Now let’s delete the container, and start a new MySQL container. First remove the container:

And start a new container using the same command as earlier:

Now when you try to see the list of tables, it’s shown as empty:

This is because anonymous volumes are visible across container restarts, but not visible to different containers. A new volume is mounted for a new run of the container. This is also verified by inspecting the container again:

A different directory is used to mount the anonymous volume.

So effectively, any data stored in the MySQL database by one container is not available to another MySQL container.

Docker Volume to Store MySQL Data

One option to share data between different MySQL containers is to mount directories on your Docker host as volumes in the containers, using the -v switch when running the Docker image. If you are using Boot2Docker, then there are two options:

  • Mount a directory from the Boot2Docker VM filesystem. This directory, if it does not exist already, would need to be created.
  • Mount a directory from your Mac host. For convenience, this needs to exist under /Users/arungupta, or whatever your corresponding directory is.

The first approach ties to the specific Boot2Docker VM image, and the second approach ties to a specific Mac host. We’ll look at how this can be fixed later.

We’ll discuss the first approach only here. Start the MySQL container as:

/var/lib/mysql is the default directory where the MySQL container writes its files. A directory mounted from the Boot2Docker VM filesystem is not persisted after a reboot unless it lives under /mnt/sda1, so the recommended option is to create a directory in /mnt/sda1 and map that instead. Make sure to create the directory /mnt/sda1/var/mysql_data, as is the case above.
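A sketch of this run; the password is illustrative:

    boot2docker ssh 'sudo mkdir -p /mnt/sda1/var/mysql_data'
    docker run -d --name mysqldb -e MYSQL_ROOT_PASSWORD=supersecret \
      -v /mnt/sda1/var/mysql_data:/var/lib/mysql mysql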

Now inspecting the container as:

Now any additional runs of the container can mount the same volume and will have access to the data.

Remember, multiple MySQL containers cannot access this shared mount together and instead will give the error:

So you need to make sure to stop an existing MySQL container, start a new MySQL container using the same volume, and the data would still be accessible.

This might be configured using master/slave configuration, where the master and slave have access to same volume. It’ll be great if somebody who has tried that configuration can share that recipe.

But as mentioned before, this approach is host-centric. It restricts MySQL to a particular Boot2Docker VM image. That means you once again lose the big benefit of portability offered by Docker.

Meet Docker data-only containers!

Docker Data-only Containers

Docker follows Single Responsibility Principle (SRP) really well. Docker Data-only containers are NoOp containers that perform a command that is not really relevant, and instead mount volumes that are used for storing data. These containers don’t even need to start or run, and so the command really is irrelevant, just creating them is enough.

Create the container as:

If you plan to use a MySQL container later, it’s recommended to use the mysql image to save the bandwidth and space of downloading another random image. You can adjust this command for whatever database container you are using.

If you intend to use MySQL, then this data-only container can be created as:

Dockerfile for this container is pretty simple and can be adopted for a database server of your choice.

Since this container is not running, it will not be visible with just docker ps. Instead you’ll need to use docker ps -a to view the container:

Docker allows mounting, or pulling in, volumes from other containers using the --volumes-from switch specified when running the container.

Let’s start our MySQL container to use this data-only container as:

Boot2Docker VM has /var/lib/mysql directory now populated:

If you stop this container, and run another container then the data will be accessible there.
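A sketch of the data-only pattern described above; the container names are illustrative:

    # data-only container: exits immediately, but its volume sticks around
    docker run --name mysql-data -v /var/lib/mysql mysql echo "mysql data-only container"

    # pull that volume into the actual MySQL container
    docker run -d --name mysqldb --volumes-from mysql-data \
      -e MYSQL_ROOT_PASSWORD=supersecret mysql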

Docker Data Containers

In a simple scenario, application server, database, and data-only container can all live on the same host. Alternatively, application server can live on a separate host and database server and data-only container can stay on the same host.

Hopefully this would be more extensive when Docker volumes can work across multiple hosts.

It would be nice if all of this, i.e. creating the data-only container and starting the MySQL container that uses the volume from data-only container can be easily done using Docker Compose. #1284 should fix this.

The usual mysqldump and mysql commands can be used to backup and restore from the volume. This can be achieved by connecting to MySQL using the CLI as explained here.

You can also look at docker-volumes to manage volumes on your host.

You can also read more about volumes may evolve in future at #6496.

Enjoy!

Minecraft Server on Google Cloud – Tech Tip #82


If you’ve not followed the Minecraft/Bukkit saga over the past few months, Bukkit and CraftBukkit downloads were taken down by DMCA because a developer (@wolvereness) wanted Mojang to open up. Mojang (@vubui) posted an official statement in their forums. The general feeling is that @wolvereness left the Bukkit community hanging, and Mojang is not responsible for this debacle.

One of my friends (@ryanmichela), and a contributor to Bukkit, prepared a slide deck explaining the unfortunate debacle:

Anyway, leaving all the gory details behind, this blog will show how to get started with Bukkit 1.8.3.

What?

You just said, Bukkit was shut down by DMCA.

Hail Spigot for reviving Bukkit, and updating it to 1.8.3!

It’s still not clear how Spigot got around the DMCA shutdown, but the binaries seem to be available again, at least for now.

As a refresher, Bukkit is the API used by developers to make plugins. CraftBukkit is the modified Minecraft server that can understand plugins made by the Bukkit API.

Minecraft Server Hosting on OpenShift already explained how to setup a Minecraft server on OpenShift. This Tech Tip will show how to get a Minecraft server running on Google Cloud.

Let’s get started!

Get Started with Google Cloud


  1. Sign up for a free trial at cloud.google.com. This gives you credit of $300, which should be pretty decent to begin with.

Create and Configure Google Compute Engine

  1. Go to console.developers.google.com and create a new project by specifying the values as shown.
  2. In console.developers.google.com, go to “Compute”, “Compute Engine”, “Networks”, “default”, “New firewall rule”, enter the values as shown, and click on “Create”.
  3. In the left menu bar, click on “VM Instances” under “Compute Engine”, “Create instance”. Take everything default except:
    1. Provide a name as “minecraft-instance”
    2. Change Image to Ubuntu 14.10.
    3. Change External IP to “New static IP address” and fill in the details. IP address is automatically assigned.

    Exact values are shown here:

    Google Cloud Create Instance

    And click on “Create”.

    Note down the IP address, this will be used later to connect from Minecraft launcher.

  4. Click on the newly created instance, “Add tags”, and specify “minecraft” tag. Exact same tag on the VM instance and Firewall rule ensures that the rule is applied to the appropriate instance.

Install JDK, Git, and Spigot

In console.developers.google.com, select the recently created instance, click on “SSH”, “Open in browser window”. The software is installed in the shell window.

Install JDK
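The exact commands are elided in this post; a sketch assuming the Oracle JDK 8 installer from the webupd8team PPA, which prompts for the license:

    sudo apt-get update
    sudo apt-get install -y software-properties-common
    sudo add-apt-repository -y ppa:webupd8team/java
    sudo apt-get update
    sudo apt-get install oracle-java8-installer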

Make sure to answer the questions and accept the license during the install. Using OpenJDK 8 to install Spigot gives the following exception:

Install Git

This is required for installing Spigot.
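A one-liner:

    sudo apt-get install -y git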

Install Spigot

Download and Install Spigot
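A sketch using Spigot’s BuildTools, which compiles the server from source:

    mkdir spigot && cd spigot
    wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
    java -jar BuildTools.jar --rev 1.8.3    # produces spigot-1.8.3.jar after a while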

A successful completion of this task shows the following message:

Start Minecraft Server on Google Cloud

Run the server as:

This will generate “eula.txt”. Accept the license agreement by giving the following command:

Run server as:

This will start the CraftBukkit 1.8 server in the background.
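Putting the three steps together; the jar name matches the BuildTools output above, and the flags are assumptions:

    java -jar spigot-1.8.3.jar                  # first run generates eula.txt and exits
    sed -i 's/eula=false/eula=true/' eula.txt   # accept the license agreement
    nohup java -jar spigot-1.8.3.jar &          # run the server in the background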

Connect to Minecraft Server from the Client

Launch the Minecraft client and create a new Minecraft server as:

Google Cloud Minecraft Multiplayer

Clicking on Done shows:

Google Cloud Multiplayer Minecraft Server

Now your client can connect to the Minecraft server running on Google Cloud.
Google Cloud Minecraft Client

The server is now live. Add 104.155.38.193 to your Minecraft launcher and put some Google resources to test :)

I was hoping to provide a script that can be run using Google Cloud SDK but the bundled CLI seems to have some issues creating the project. CLI equivalent for other commands can be easily seen from the console itself.

Enjoy and happy Minecrafting!

Configure JRebel with Docker containers – Tech Tip #81

JRebel allows you to skip build and redeploy process by instantly deploying your application to the application server of your choice. It is supported in all the major IDEs such as NetBeans, Eclipse, and IntelliJ. It is also supported in a wide variety of application servers such as JBoss EAP, WildFly, WebLogic, WebsFear (err, WebSphere), Tomcat, and many others.

You can easily get started with JRebel in JBoss Developer Studio or integrate JRebel with JBoss on your local desktop. It can also be easily used with JBoss Developer Studio and Ticket Monster on OpenShift.

This Tech Tip will explain how to set up JRebel with Docker containers. Specifically, we’ll use the sample application provided by the Java EE 7 Hands-on Lab (jrebel branch), JBoss Tools with Eclipse Mars M5, and run the sample application in a WildFly Docker container.

Many thanks to Adam Koblentz (@akoblentz) for helping me through the steps!

Let’s get started!

Install JRebel in Eclipse

JRebel runs in three modes:

  • Local: App server is running from inside the IDE
  • External: App server is running from outside the IDE, such as using CLI, but on the same machine
  • Remote: App server is running on a different machine, VM, container, or cloud

Docker containers need to be configured using the “remote” mode.

  1. Install JBoss Tools as explained at tools.jboss.org/downloads/. JRebel’s remote mode can only be enabled using the IDE. Install the JRebel plugin from the Eclipse Marketplace.

Package rebel.xml and rebel-remote.xml with the WAR

These files define the location of classes and resources in your archive.

  1. Clone the Java EE 7 HOL repo:
  2. Import the Maven project (from the solution directory) in the IDE, right-click on the project, select JRebel menu, and click on “Enable JRebel Nature”. This will generate rebel.xml in src/main/resources directory and would look something like:
  3. Right-click on the project again, and select “Enable Remoting”. This generates rebel-remote.xml, in src/main/resources directory again, and will look like:
    This needs to be done on the machine where JRebel will be used in the IDE. This will ensure that the public key is generated appropriately.
  4. Package your application with Maven, as shown in the sketch after this list.
    This will package rebel.xml and rebel-remote.xml in the WAR file.
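A minimal sketch of steps 1 and 4; the GitHub organization in the repo URL is an assumption:

    # Step 1: clone the repo and switch to the jrebel branch
    git clone https://github.com/javaee-samples/javaee7-hol.git
    cd javaee7-hol
    git checkout jrebel

    # Step 4: package the WAR; rebel.xml and rebel-remote.xml from
    # src/main/resources are included automatically
    mvn package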

Configure and Run the Application Server

The application server needs to know about the JRebel agent and the platform-specific library. Both of these files are available from Eclipse if JRebel was installed earlier. On Mac, these files are available in the eclipse/mars/m5/eclipse/plugins/org.zeroturnaround.eclipse.embedder_6.1.1.RELEASE-201503121801/jr6/jrebel/ directory. The exact name will very likely differ in your case.

  1. Build the image using a Dockerfile (a sketch appears after this list).
    The key parts in this image are:

    1. It uses the official jboss/wildfly Docker image
    2. It copies the JRebel agent and the platform-specific library to the image
    3. It configures the application server so that it knows about the “remote” mode and the platform-specific library
    4. It starts WildFly
    5. It downloads a pre-built WAR file from GitHub. This will not work for you; you’ll need to replace it with a COPY of your own WAR, the same one that was generated earlier (see the sketch after this list).
  2. Actually build the image, as shown in the sketch after this list.
  3. Run the container, also shown in the sketch after this list.
    The console output should include the JRebel banner and license information, which is a good sign that everything is configured properly.

    If you used Docker Machine to Setup Docker Host then the application should now be accessible at 192.168.99.100:8080/movieplex7/.
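Since the Dockerfile is not reproduced above, here is a minimal sketch of what the five points could look like, along with the build and run commands from steps 2 and 3; file names, paths, the JRebel flag, and the image/WAR names are all assumptions:

    # Dockerfile (sketch)
    FROM jboss/wildfly

    # 2. Copy the JRebel agent and the platform-specific native library,
    #    taken from the Eclipse plugin directory mentioned earlier
    COPY jrebel.jar /opt/jrebel/jrebel.jar
    COPY libjrebel64.so /opt/jrebel/lib/libjrebel64.so

    # 3. Tell the JVM about the agent and enable JRebel's "remote" mode
    ENV JAVA_OPTS -agentpath:/opt/jrebel/lib/libjrebel64.so -Drebel.remoting_plugin=true

    # 5. Use your locally built WAR instead of the pre-built download
    COPY movieplex7.war /opt/jboss/wildfly/standalone/deployments/

    # 4. Start WildFly, bound to all interfaces
    CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]

Then build and run the container:

    docker build -t wildfly-jrebel .
    docker run -it -p 8080:8080 wildfly-jrebel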

Configure Eclipse

The last step is to configure Eclipse so that it knows where the application is deployed.

  1. In Eclipse, right-click on the project, select JRebel, Advanced Properties
  2. Click on “Edit” next to “Deployment URLs”
  3. Click on Add and specify the URL of the application, 192.168.99.100:8080/movieplex7/ in our case.
  4. Click on Continue, Apply, OK.

Voila, the configuration is now complete.

Now changing any class, adding any method, updating any entity or HTML or JSF page will push the changes to the Docker container instantly. No need to redeploy the application.

Enjoy!

9 Docker recipes for Java EE Applications – Tech Tip #80

Cross-posted from www.voxxed.com/blog/2015/03/9-docker-recipes-for-java-ee-applications/

So, you’d like to start using Docker for Java EE applications?

A typical Java EE application consists of an application server, such as WildFly, and a database, such as MySQL. In addition, you might have a separate front-end tier, say Apache, for load balancing a number of application servers. A caching layer, such as Infinispan, may be used to improve overall application performance. A messaging system, such as ActiveMQ, may be used for processing queues. Both the caching and messaging components could be set up as clusters for further scalability.

This Tech Tip will show some simple Docker recipes to configure your containers that use an application server and a database. A subsequent blog will cover more advanced recipes that include the front end, caching, messaging, and clustering.

Lets get started!

Docker Recipe #1: Setup Docker using Docker Machine

If Docker is not already set up on your machine, then as a first step, you need to set it up. If you are on a recent version of Linux, then you may already have Docker. Otherwise, it can be installed as:
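(The standard install script; run it on the Linux host.)

    curl -sSL https://get.docker.com/ | sh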

On Mac and Windows, this means installing boot2docker, a Tiny Core Linux-based VM that comes with the Docker host. Then you need to configure ssh keys and certificates.

Fortunately, this is greatly simplified by Docker Machine. It takes you from zero-to-Docker on a host with a single command. This host could be your laptop, in the cloud, or in your data center. It creates servers, installs Docker on them, and then configures the Docker client to talk to them.
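For example, one command creates a VirtualBox-backed host and installs Docker on it, and a second points your client at it (the machine name is arbitrary):

    docker-machine create --driver virtualbox mydocker
    eval "$(docker-machine env mydocker)"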

This recipe is explained in detail in Docker Machine to Setup Docker Host.

Docker Recipe #2: Application Server + In-memory Database

One of the cool features of Java EE 7 is the default database resource. This means you do not have to create a JDBC resource in an application server-specific way before your application can access it. Any Java EE 7 compliant application server will map the default JDBC resource name (java:comp/DefaultDataSource) to an application server-specific resource in the bundled database server.

For example, WildFly comes bundled with the H2 in-memory database. This database is ready to be used as soon as WildFly is ready to accept requests. This simplifies your development efforts and allows rapid prototyping. The default JDBC resource is mapped to java:jboss/datasources/ExampleDS, which is in turn mapped to the JDBC URL jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE.

In such a case, the database server is just another application running inside the application server.

Docker Recipe for Java EE Application #1

Here is the command that runs a Java EE 7 application in WildFly:
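(A sketch, assuming the image published for the referenced hands-on lab.)

    docker run -it -p 8080:8080 arungupta/javaee7-hol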

If you want to run a typical Java EE 7 application using WildFly and H2 in-memory database, then this Docker recipe is explained in detail in Java EE 7 Hands-on Lab on WildFly and Docker.

Docker Recipe #3: Two Containers on Same Host Using Linking

The previous recipe gets you started rather quickly but soon becomes a limitation because the database is only in-memory. This means that any changes made to your schema and data are lost after the application server shuts down. In this case, you need to use a database server that resides outside the application server: for example, MySQL as the database server and WildFly as the application server.

To keep things simple, both the database server and application server can run on the same host.

Docker Recipe for Java EE Application #2

Docker Container Links are used to link the two containers. Creating a link between two containers creates a conduit between a source container and a target container and securely transfers information about the source container to the target container. In our case, the target container (WildFly) can see information about the source container (MySQL). The important part to understand here is that none of this information needs to be publicly exposed by the source container; it is only made available to the target container.

Here are the commands that start the MySQL and WildFly containers and link them:
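(A sketch; the container names, credentials, and the WildFly image are assumptions.)

    # Start MySQL with a sample database
    docker run --name mysqldb -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
      -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -d mysql

    # Start WildFly and link it to the MySQL container under the alias "db"
    docker run --name mywildfly --link mysqldb:db -p 8080:8080 \
      -d arungupta/wildfly-mysql-javaee7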

WildFly and MySQL linked on two Docker containers explains how to set up this recipe.

Docker Recipe #4: Two Containers on Same Host Using Fig

The previous recipe requires you to run the containers in a specific order. Running multi-container applications can quickly become challenging if each tier of your application is sitting in its own container. Fig (deprecated in favor of Docker Compose) is a Docker orchestration tool that can:

  • Define multiple containers in a single configuration file
  • Create dependencies between two containers by creating links between them
  • Start containers in the right sequence

Docker Recipe for Java EE Application #3

The entry point for Fig is a configuration file as shown:
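(A sketch of fig.yml, reusing the linked-containers setup from Recipe #3; names and credentials are assumptions.)

    mysqldb:
      image: mysql
      environment:
        MYSQL_DATABASE: sample
        MYSQL_USER: mysql
        MYSQL_PASSWORD: mysql
        MYSQL_ROOT_PASSWORD: supersecret
    mywildfly:
      image: arungupta/wildfly-mysql-javaee7
      links:
        - mysqldb:db
      ports:
        - 8080:8080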

and all the containers can be started as:
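(The -d flag runs the containers in detached mode.)

    fig up -d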

Docker orchestration using Fig explains this recipe in detail.

Fig is no longer receiving updates; its code base is used as the basis for Docker Compose, which is explained in the next recipe.

Docker Recipe #5: Two Containers on Same Host Using Compose

Docker Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up with a single command that does everything needed to get it running.

The application configuration file uses the same format as Fig’s. The containers can be started as:
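(Run in the directory containing the configuration file; -d runs the containers in the background.)

    docker-compose up -d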

This recipe is explained in detail in Docker Compose to Orchestrate Containers.

Docker Recipe #6: Two Containers on Different Hosts using IP Address

In the previous recipe, the two containers are running on the same host. These two could easily communicate using Docker linking. But simple container linking does not allow cross-host communication.

Running containers on the same host means you cannot scale each tier, database or application server, independently. This is where you need to run each container on a separate host.

Docker Recipe for Java EE Application #4

The MySQL container can be started as:
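(A sketch; publishing port 3306 makes MySQL reachable from the other host by the first host’s IP address. Credentials are placeholders.)

    docker run --name mysqldb -p 3306:3306 -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
      -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret -d mysql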

A JDBC resource can be created as:
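(A sketch using the jboss-cli data-source command; the IP address is the first host’s, and the MySQL JDBC driver is assumed to be installed already.)

    data-source add --name=mysqlDS --driver-name=mysql \
      --jndi-name=java:jboss/datasources/mysqlDS \
      --connection-url=jdbc:mysql://192.168.99.101:3306/sample \
      --user-name=mysql --password=mysql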

And the WildFly container can be started as:
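(A sketch; the image, configured with the JDBC resource above, is an assumption.)

    docker run --name mywildfly -p 8080:8080 -d arungupta/wildfly-mysql-javaee7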

Complete details for this recipe are explained in Docker container linking across multiple hosts.

Docker Recipe #7: Two Containers on Different Hosts using Docker Swarm


Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host. It picks up where Docker Machine leaves off by optimizing host resource utilization and providing failover services. Specifically, Docker Swarm allows users to create resource pools of hosts running Docker daemons and then schedule Docker containers to run on top, automatically managing workload placement and maintaining cluster state.

More details about this recipe are coming in a subsequent blog.

Docker Recipe #8: Deploy Java EE Application from Eclipse

This recipe deals with how to deploy your existing applications to a Docker container.

Lets say you are using JBoss Tools as your development environment and WildFly as your application server.


There are a couple of ways by which these applications can be deployed:

  • Use Docker volumes + Local deployment: Here, a directory on your local machine is mounted as a Docker volume. The WildFly Docker container is started by mapping that directory to the deployment directory, as shown in the first sketch after this list.
    Configure JBoss Tools to deploy WAR files to this directory.
  • Use WildFly management API + Remote deployment: Start the WildFly Docker container and additionally expose the management port 9990, as shown in the second sketch after this list.
    Configure JBoss Tools to use a remote WildFly server and deploy using management APIs.
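Sketches for both approaches; the host path and binding flags are assumptions:

    # Docker volumes + local deployment: mount a host directory as
    # WildFly's deployment directory
    docker run -it -p 8080:8080 \
      -v ~/deployments:/opt/jboss/wildfly/standalone/deployments/:rw jboss/wildfly

    # Management API + remote deployment: also expose port 9990 and bind
    # the management interface to all addresses
    docker run -it -p 8080:8080 -p 9990:9990 jboss/wildfly \
      /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0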

This recipe is explained in detail at Deploy to WildFly and Docker from Eclipse.

Docker Recipe #9: Test Java EE Applications using Arquillian Cube

Arquillian Cube allows you to control the lifecycle of Docker images as part of the test lifecycle, either automatically or manually. Cube uses the Docker REST API to talk to the container. It uses the remote adapter API (WildFly’s, in this case) to talk to the application server. Docker configuration is specified as part of the maven-surefire-plugin.

Complete details about this recipe are available on Run Java EE Tests on Docker using Arquillian Cube.

What other recipes are you using to deploy your Java EE applications using Docker?

Enjoy!