Tag Archives: docker

Vagrant with Docker provider, using WildFly and Java EE 7 image

What is Vagrant?

Vagrant is a simplified and portable way to create virtual development environments. It works with multiple virtualization providers such as VirtualBox, VMware, AWS, and more. It also works with multiple configuration management tools such as Ansible, Chef, Puppet, or Salt.

No more “works on my machine”!

The usual providers are, well, usual. Starting with version 1.6, Docker containers can be used as one of the backend providers as well. This allows your development environment to be based on Docker containers as opposed to full Virtual Machines. Read more about this at docs.vagrantup.com/v2/docker/index.html.

The complete development environment definition, such as the type of machine, the software that needs to be installed, networking, and other configuration information, is defined in a text file, typically called Vagrantfile. Based upon the provider, Vagrant creates the virtual development environment.

Read more about what can be defined in the file, and how, at docs.vagrantup.com/v2/vagrantfile/index.html.

Getting Started with Vagrant

The Getting Started Guide is really simple and easy to follow to get your feet wet with Vagrant. Once your basic definition is created, the environment can be started with a simple command:
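For reference, the minimal flow looks like this; the box name here is illustrative, not prescribed by this post:

```shell
# Generate a Vagrantfile with a base box (box name is illustrative)
vagrant init hashicorp/precise64

# Boot the environment defined in the Vagrantfile
vagrant up

# SSH into the running machine
vagrant ssh
```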

The complete set of commands is defined at docs.vagrantup.com/v2/cli/index.html.

The default provider for Vagrant is VirtualBox. An alternate provider can be specified at the CLI as:
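For example, to use the Docker provider instead of VirtualBox:

```shell
# Start the environment with Docker as the backend provider
vagrant up --provider=docker
```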

This will spin up the Docker container based upon the image specified in the Vagrantfile.

Packaging Format

Vagrant environments are packaged as Boxes. You can search the publicly available list of boxes to find the box of your choice, or even create your own box and add it using the following command:
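A sketch of that workflow, with illustrative file and box names (the exact packaging command depends on your provider):

```shell
# Package the current environment into a box file (name is illustrative)
vagrant package --output mybox.box

# Add the box locally under a name of your choice
vagrant box add my-box mybox.box
```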

Vagrant with WildFly Docker image

After learning the basic commands, let's see what it takes to start a WildFly Docker image using Vagrant.

The Vagrantfile is defined at github.com/arun-gupta/vagrant-images/blob/master/docker-wildfly/Vagrantfile and shown inline:
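As a rough sketch of what such a Vagrantfile contains (the image name and ports here are assumptions; see the linked file for the real definition):

```ruby
# Vagrantfile sketch using the Docker provider
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "jboss/wildfly"    # illustrative image name
    d.ports = ["8080:8080"]      # forward the application port
  end
end
```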

Clone the git repo and change to the docker-wildfly directory. The Vagrant image can be started using the following command:
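The commands would look like this (repo URL from above):

```shell
git clone https://github.com/arun-gupta/vagrant-images.git
cd vagrant-images/docker-wildfly

# Start the container using the Docker provider
vagrant up --provider=docker
```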

and shows the output as:

This will not work until #5187 is fixed, but at least this blog explained the main concepts of Vagrant.

Build Kubernetes on Mac OS X (Tech Tip #70)

Key Concepts of Kubernetes explained the basic fundamentals of Kubernetes. Binary distributions of Kubernetes for Linux can be downloaded from the Continuous Integration builds, but Kubernetes needs to be manually built on other platforms, for example Mac. Building Kubernetes on Mac is straightforward as long as you know the steps.

This Tech Tip explains how to build Kubernetes on Mac.

Let's get started!

  1. Kubernetes is written in the Go programming language, so you'll need to download the tools/compilers to build it. Install Go (golang.org/doc/install). For example, the Go 1.4 package for 64-bit Mac OS X can be downloaded from storage.googleapis.com/golang/go1.4.darwin-amd64-osx10.8.pkg.
  2. Configure Go. GOROOT is the directory where Go is installed and contains the compiler/tools. GOPATH is the directory for your Go projects and third-party libraries (downloaded with “go get”). Set up the environment variables GOPATH and GOROOT. For example, on my environment they are:

    Make sure $GOROOT/bin is in $PATH.
  3. Install Gnutar:

    Without this, the following message will be shown:
  4. Tech Tip #39 shows how to get started with Docker on Mac using boot2docker. Download boot2docker for Mac from github.com/boot2docker/osx-installer/releases and install it.
  5. Git clone Kubernetes repo:
  6. Build it. This needs to be done from within the boot2docker VM.
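The steps above can be sketched as a shell session; the paths, the Homebrew package name, and the build script are assumptions and may differ on your machine:

```shell
# Step 2: Go environment (example paths)
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOROOT/bin:$PATH

# Step 3: install gnutar (assumes Homebrew)
brew install gnu-tar

# Step 5: clone the Kubernetes repo
git clone https://github.com/GoogleCloudPlatform/kubernetes.git
cd kubernetes

# Step 6: enter the boot2docker VM and run the build from the repo,
# e.g. via the build scripts shipped in the repo's build/ directory
boot2docker ssh
```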


Subsequent blogs will show how to run a Kubernetes cluster of WildFly containers. WildFly will have a Java EE 7 application deployed and persist data to MySQL containers.

Key Concepts of Kubernetes

What is Kubernetes?


Kubernetes is an open source orchestration system for Docker containers. It manages containerized applications across multiple hosts and provides basic mechanisms for deployment, maintenance, and scaling of applications.

It allows the user to provide declarative primitives for the desired state, for example “need 5 WildFly servers and 1 MySQL server running”. Kubernetes' self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure this state is met. The user just defines the state, and Kubernetes ensures that it is met at all times on the cluster.

How is it related to Docker?

Docker provides the lifecycle management of containers. A Docker image defines a build-time representation of the runtime containers. There are commands to start, stop, restart, link, and perform other lifecycle methods on these containers. Containers can be manually linked as shown in Tech Tip #66 or orchestrated using Fig as shown in Tech Tip #68. Containers can run on multiple hosts as well, as shown in Tech Tip #69.

Kubernetes uses Docker to package, instantiate, and run containerized applications.

How does Kubernetes simplify containerized application deployment?

A typical application would have a cluster of containers across multiple hosts. For example, your web tier (Apache or Undertow) might run on a set of containers. Similarly, your application tier (WildFly) would run on a different set of containers. The web tier would need to delegate requests to the application tier. In some cases, or at least to begin with, you may have your web and application server packaged together in the same set of containers. The database tier would generally run on its own set of containers anyway. These containers need to talk to each other. Using any of the solutions mentioned above would require scripting to start the containers, and monitoring/bouncing if something goes down. Kubernetes does all of that for the user once the application state has been defined.

Kubernetes is cloud-agnostic. This allows it to run on public, private, or hybrid clouds, and on any cloud provider such as Google Cloud Engine. OpenShift v3 is going to be based upon Docker and Kubernetes. It can even run on a variety of hypervisors, such as VirtualBox.

Key concepts of Kubernetes

At a very high level, there are three key concepts:

  • Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application.
  • Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.
  • Minion is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. A minion provides an application-specific “virtual host” in a containerized environment.

A picture is always worth a thousand words, so this is a high-level logical block diagram of Kubernetes:


After the 50,000-feet view, let's fly a little lower at 30,000 feet and take a look at how Kubernetes makes all of this happen. A few key components at the Master and Minion are responsible for this.

  • Replication Controller is a resource at the Master that ensures that the requested number of pods are running on minions at all times.
  • Service is an object on the master that provides load balancing across a replicated group of pods.
  • Label is an arbitrary key/value pair, kept in a distributed watchable storage, that the Replication Controller uses for service discovery.
  • Kubelet: Each minion runs services to run containers and be managed from the master. In addition to Docker, Kubelet is another key service installed there. It reads container manifests as YAML files that describe a pod. Kubelet ensures that the containers defined in the pods are started and continue running.
  • Master serves the RESTful Kubernetes API that validates and configures Pods, Services, and Replication Controllers.

Kubernetes Design Overview provides a great summary of all the key components, as shown below.



Extensive docs are already available at github.com/GoogleCloudPlatform/kubernetes/tree/master/docs. A subsequent blog will explain a Kubernetes version of Tech Tip #66.

OpenShift v3 uses Kubernetes and Docker to provide the next level of PaaS platform.

As a fun fact, “Kubernetes” is actually a Greek word written as κυβερνήτης and means “helmsman of a ship”. In that sense, Kubernetes serves that role for your Docker containers.

Docker container linking across multiple hosts (Tech Tip #69)

Docker container linking is an important concept to understand, since any application in production will typically run on a cluster of containers across multiple hosts. But simple container linking does not allow cross-host communication.

What's the issue with Docker container linking?

Docker containers can communicate with each other by being manually linked, as shown in Tech Tip #66, or orchestrated using Fig, as shown in Tech Tip #68. Both of these use container linking, but that has an inherent disadvantage: it is restricted to a single host. Linking does not work if containers are running across multiple hosts.

What is the solution?

This Tech Tip will evolve the sample built in Tech Tip #66 and #68 and show how the containers can be connected if they are running across multiple hosts.

Docker container linking across multiple hosts can be easily done by explicitly publishing the host/port and using it from a container on a different host.

Let's get started!

  1. Start MySQL container as:

    The MySQL container is explicitly forwarding the port 3306 to port 5506.
  2. Git repo has customization/execute.sh that creates the MySQL data source. The command looks like:

    This command creates the JDBC resource for WildFly using jboss-cli. It uses the $DB_PORT_3306_TCP_ADDR and $DB_PORT_3306_TCP_PORT variables, which are defined per Container Linking Environment Variables. The scheme by which the environment variables for containers are created is rather weird: it exposes the port number in the variable name itself. I hope this improves in subsequent releases.

    This command needs to be updated such that an explicit host/port can be used instead.

    So update the command to:

    The only change in the command is to use the $MYSQL_HOST and $MYSQL_PORT variables. This command already exists in the file but is commented out. So just comment out the previous one and uncomment this one.

  3. Build the image and run it as:

    Make sure to substitute <IP_ADDRESS> with the IP address of your host. For convenience, I ran it on the same host. The IP address in this case can be easily obtained using boot2docker ip.

  4. A quick verification of the deployment can be done by accessing the REST endpoint:

With this, your WildFly and MySQL can run on two separate hosts, no special configuration required.
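Putting steps 1 and 3 together, the commands would look roughly like this; the image names and MySQL environment variables are assumptions based on the earlier tips, not quoted from this post:

```shell
# Host 1: start MySQL, publishing container port 3306 on host port 5506
docker run --name mysqldb \
  -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
  -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret \
  -p 5506:3306 -d mysql

# Host 2: start WildFly, pointing at the explicitly published host/port
docker run --name mywildfly \
  -e MYSQL_HOST=<IP_ADDRESS> -e MYSQL_PORT=5506 \
  -p 8080:8080 -d arungupta/wildfly-mysql-javaee7
```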


Docker allows cross-host container linking using Ambassador Containers, but that adds a redundant hop for the service to be accessed. A cleaner solution would be to use Kubernetes or Swarm; more on that later.

Marek also blogged about a more elaborate solution in Connecting Docker Containers on Multiple Hosts.


Docker orchestration using Fig (Tech Tip #68)

Tech Tip #66 showed how to run a Java EE 7 application using WildFly and MySQL in two separate containers. It required explicitly starting the two containers and linking them using --link. Defining and controlling a multi-container service this way is a common design pattern for getting an application up and running.

Meet Fig – Docker Orchestration Tool.

Fig allows you to:

  • Define multiple containers in a single configuration file
  • Create dependencies between two containers by creating links between them
  • Start containers in the right sequence

Let’s get started!

  1. Install Fig as:
  2. Entry point to Fig is a configuration file that defines the containers and their dependencies. The equivalent configuration file from Tech Tip #65 is:

    This YAML-based configuration file has:

    1. Two containers defined by the name “mysqldb” and “mywildfly”
    2. Image names are defined using “image”
    3. Environment variables for the MySQL container are defined in “environment”
    4. MySQL container is linked with WildFly container using “links”
    5. Port forwarding is achieved using “ports”
  3. All the containers can be started, in detached mode, by giving the command:

    The output is shown as:

    Fig commands allow you to monitor and update the status of the containers:

    1. Logs can be seen as:
    2. Container status can be seen by giving the command:

      to show the output as:
    3. Containers can be stopped as:
    4. Alternatively, containers can be started in foreground by giving the command:

      and the output is seen as:
  4. Find out the IP address using boot2docker ip and access the app as:

The complete list of Fig commands can be seen by typing fig:

Particularly interesting is the scale command; we'll take a look at it in a subsequent blog.
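For reference, a fig.yml with the structure described in step 2 might look like this; the image names and credentials are assumptions, not the exact file from the post:

```yaml
mysqldb:
  image: mysql
  environment:
    MYSQL_DATABASE: sample
    MYSQL_USER: mysql
    MYSQL_PASSWORD: mysql
    MYSQL_ROOT_PASSWORD: supersecret
mywildfly:
  image: arungupta/wildfly-mysql-javaee7
  links:
    - mysqldb:db
  ports:
    - "8080:8080"
```

With such a file in place, fig up -d starts both containers in dependency order, and fig stop shuts them down.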

File issues on github.


WildFly Admin Console in a Docker image (Tech Tip #67)

WildFly Docker image binds the application port (8080) to all network interfaces (using -b). If you want to view the feature-rich, lovely-looking web-based administration console, then the management port (9990) needs to be bound to all network interfaces as well, using the following command:
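Such a command would look roughly like this; -bmanagement is WildFly's switch for binding the management interfaces:

```shell
# Start WildFly, binding application and management ports to all
# network interfaces, with exposed ports published randomly (-P)
docker run -P -d jboss/wildfly \
  /opt/jboss/wildfly/bin/standalone.sh -b 0.0.0.0 -bmanagement 0.0.0.0
```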

This is overriding the default command in Docker file, explicitly starting WildFly, and binding application and management port to all network interfaces.

The -P flag maps any network ports inside the image to a random high port in the range 49153 to 65535 on the Docker host. The exact port can be verified by giving the docker ps command as shown:

In this case, port 8080 is mapped to 49161 and port 9990 is mapped to 49162. The IP address of the Docker container can be verified using the boot2docker ip command. The default web page and admin console can then be accessed on these ports.

Accessing the WildFly Administration Console requires a user in the administration realm. This can be done by using an image which creates that user. And since a new image is being created anyway, the Dockerfile can also take care of the network interface binding, to keep the actual command line simple. The Dockerfile is pretty straightforward:
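A sketch of such a Dockerfile, using the credentials mentioned later in this post (the exact file is in the linked repo):

```dockerfile
FROM jboss/wildfly

# Create a user in the administration realm
RUN /opt/jboss/wildfly/bin/add-user.sh admin Admin#007 --silent

# Bind application and management ports to all network interfaces
CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]
```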

This image has already been pushed to Docker Hub and source file is at github.com/arun-gupta/docker-images/tree/master/wildfly-admin.
So to have a WildFly image with Administration Console, just run the image as shown:

Then check the mapped ports as:

The application port is mapped to 49165 and the management port is mapped to 49166. Access the admin console on the mapped management port, which will then prompt for the username (“admin”) and the password (“Admin#007”).


If you don’t like random ports being assigned by Docker, then you can map them to specific ports as well using the following command:
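Assuming the image built from the repo above is named arungupta/wildfly-admin, the command would look like:

```shell
# Map application and management ports to fixed host ports
docker run -p 8080:8080 -p 9990:9990 -d arungupta/wildfly-admin
```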

In this case, application port 8080 is mapped to 8080 on the Docker host and management port 9990 is mapped to 9990 on the Docker host. So the admin console will then be accessible on port 9990 of the Docker host.

WildFly/JavaEE7 and MySQL linked on two Docker containers (Tech Tip #66)

Tech Tip #61 showed how to run Java EE 7 hands-on lab on WildFly Docker container. A couple of assumptions were made in that case:

  • WildFly bundles the H2 in-memory database. The Java EE 7 application uses the default database resource, which, in the case of WildFly, resolves to a JDBC connection to that in-memory database. This is a good way to start building your application, but pretty soon you want to start using a real database, like MySQL.
  • Typically, the application server and the database may not reside on the same host. This reduces risk by avoiding a single point of failure. And so WildFly and MySQL would be on two separate hosts.

There is a plethora of material available showing how to configure WildFly and MySQL on separate hosts. What are the design patterns, and anti-patterns, if you were to do that using Docker?

Let's take a look!

In simplified steps:

  1. Run the MySQL container as:
  2. Run the WildFly container, with MySQL JDBC resource pre-configured, as:
  3. Find the IP address of the WildFly container:

    If you are on a Mac, then use boot2docker ip to find the IP address.
  4. Access the application as:

    to see the output as:

    The application is a trivial Java EE 7 application that publishes a REST endpoint. Access it as:

    to see:

If you are interested in nitty gritty, read further details.
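As a concrete sketch of steps 1 and 2 above (the image names and MySQL credentials are assumptions, not quoted from this post):

```shell
# Step 1: start the MySQL container
docker run --name mysqldb \
  -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
  -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret \
  -d mysql

# Step 2: start the WildFly container, linked to MySQL with alias "db"
docker run --name mywildfly --link mysqldb:db -p 8080:8080 \
  -d arungupta/wildfly-mysql-javaee7
```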

Linking Containers

The first concept we need to understand is how Docker allows linking containers. Creating a link between two containers creates a conduit between a source container and a target container, and securely transfers information about the source container to the target container. In our case, the target container (WildFly) can see information about the source container (MySQL). The important part to understand here is that none of this information needs to be publicly exposed by the source container; it is only made available to the target container.

The magic switch to enable link is, intuitively, --link. So for example, if MySQL and WildFly containers are run as shown above, then --link mysqldb:db links the MySQL container named mysqldb with an alias db to the WildFly target container. This defines some environment variables, following the defined protocol, in the target container which can then be used to access information about the source container. For example, IP address, exposed ports, username, passwords, etc. The complete list of environment variables can be seen as:
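One way to dump those variables is to run env in a throwaway container linked against the same source container; the container names here match the ones used above:

```shell
# Print the environment injected by the link (DB_* variables)
docker run --rm --link mysqldb:db jboss/wildfly env | grep ^DB_
```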

So you can see there are DB_* environment variables providing plenty of information about the source container.

Linking only works if all the containers are running on the same host. A better solution will be shown in a subsequent blog, stay tuned.

Override default Docker command

The Dockerfile for this image inherits from jboss/wildfly:latest and starts the WildFly container. Docker containers run only one command, but we need to install the JDBC driver, create the JDBC resource using the correct IP address and port, and deploy the WAR file. So we override the command by inheriting from jboss/wildfly:latest and using a custom command. This command does everything that we want, and then starts WildFly as well.

The custom command does the following:

  • Add MySQL module
  • Add MySQL JDBC driver
  • Add the JDBC data source using IP address and port of the linked MySQL container
  • Deploy the WAR file
  • Start WildFly container

Note that WildFly is started with -b, which allows it to be bound to any IP address. Also, the command needs to run in the foreground so that the container stays active.

Customizing security

Ideally, you’ll poke holes in the firewall to enable connection to specific host/ports. But these instructions were tried on Fedora 20 running in Virtual Box. So for convenience, the complete firewall was disabled as:
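On Fedora 20 that amounts to stopping firewalld:

```shell
# Disable the firewall (convenient for experiments; not for production)
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```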

In addition, a Host-only adapter was added using Virtual Box settings and looks like:


That’s it, that should get you going to use WildFly and MySQL in two separate containers.

Also verified the steps on boot2docker, and it worked seamlessly there too:

Source code for the image is at github.com/arun-gupta/docker-images/tree/master/wildfly-mysql-javaee7.


Resolve “dial unix /var/run/docker.sock” error with Docker (Tech Tip #65)

I’ve played around with Docker configuration on Mac using boot2docker (#62, #61, #60, #59, #58, #57) and am starting to play with native support on Fedora 20. Boot2docker starts as a separate VM on Mac and everything is pre-configured. Booting a fresh Fedora VM and trying to run Docker commands there gives:

Debugging revealed that Docker daemon was not running on this VM. It can be easily started as:

And then enable it to start automatically with every restart of the VM as:
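The two commands on Fedora are:

```shell
# Start the Docker daemon now
sudo systemctl start docker

# Start it automatically on every boot
sudo systemctl enable docker
```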

Simple, isn’t it?


Run Java EE Tests on Docker using Arquillian Cube (Tech Tip #62)

Tech Tip #61 showed how to run the Java EE 7 Hands-on Lab using Docker. The Dockerfile used there can be adapted to create a new image that can deploy any Java EE 7 WAR file to the WildFly instance running in the container.

For example, github.com/arun-gupta/docker-images/blob/master/javaee7-test/Dockerfile can be copied to the root directory of javaee7-samples and used to deploy the jaxrs-client.war file to the container. Of course, you first need to build the sample as:
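Assuming the standard Maven layout of javaee7-samples (the module path is an assumption), that build step would be:

```shell
# Build the jaxrs-client sample WAR
cd javaee7-samples/jaxrs/jaxrs-client
mvn clean package
```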

The exact Dockerfile is shown here:

If you want to deploy another Java EE 7 application, then you need to do the following steps:

  • Create the WAR file of the sample
  • Change the Dockerfile
  • Build the image
  • Stop the previous container
  • Start the new container

Now, if you want to run tests against this instance, then mvn test alone will not do it, because you either need to bind the IP address of the Docker container statically, or dynamically find out the address and then patch it in at runtime. Anyway, the repeated cycle is a little too cumbersome. How do you solve it?

Meet Arquillian Cube!

Arquillian Cube allows you to control the lifecycle of Docker images as part of the test lifecycle, either automatically or manually.

That blog entry provides more details about getting started with Arquillian Cube, and this functionality has now been enabled in the “docker” branch of javaee7-samples. Arquillian Cube Extension Alpha2 was recently released and is used to provide the integration. Here are the key concepts:

  • A new “wildfly-docker-arquillian” profile is being introduced
  • The profile adds a dependency on:
  • Uses the Docker REST API to talk to the container. The complete API docs show the sample payloads and explain the query parameters and status codes.
  • Uses WildFly remote adapter to talk to the application server running within the container
  • Configuration for the Docker image is specified as part of maven-surefire-plugin:

    The username and password specified are for the WildFly in the arungupta/javaee7-samples-wildfly image. All the configuration values can be overridden by arquillian.xml for each test case, as explained here.

How do you try out this functionality?
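Using the profile named above, a run would look roughly like this:

```shell
# Run the tests with the Docker-aware profile
mvn test -Pwildfly-docker-arquillian
```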

Here is a complete log of running simple-servlet test: