Vagrant with Docker provider, using WildFly and Java EE 7 image

What is Vagrant?

Vagrant is a simplified and portable way to create virtual development environments. It works with multiple virtualization tools such as VirtualBox, VMWare, AWS, and more. It also works with multiple configuration management tools such as Ansible, Chef, Puppet, or Salt.

No more “works on my machine”!

The usual providers are, well, usual. Starting with version 1.6, Docker containers can be used as one of the backend providers as well. This allows your development environment to be based on Docker containers as opposed to full Virtual Machines. Read more about this at

The complete development environment definition, such as the type of machine, software that needs to be installed, networking, and other configuration information, is defined in a text file, typically called Vagrantfile. Based upon the provider, Vagrant creates the virtual development environment.

Read more about what can be defined in the file, and how, at

Getting Started with Vagrant

The Getting Started Guide is simple and easy to follow to get your feet wet with Vagrant. Once your basic definition is created, the environment can be started with a simple command:
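Assuming Vagrant is installed and a Vagrantfile exists in the current directory, that command is:

```shell
# Boot the environment described by ./Vagrantfile
vagrant up
```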

The complete set of commands are defined at

The default provider for Vagrant is VirtualBox. An alternate provider can be specified at the CLI as:
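For the Docker provider, that looks like:

```shell
# Use Docker instead of the default VirtualBox provider
vagrant up --provider=docker
```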

This will spin up the Docker container based upon the image specified in the Vagrantfile.

Packaging Format

Vagrant environments are packaged as boxes. You can search the publicly available list of boxes to find the box of your choice, or even create your own box and add it to the central repository using the following command:
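A sketch of packaging and adding a box (the box and file names are placeholders):

```shell
# Package the current environment as a box, then register it locally
vagrant package --output mybox.box
vagrant box add mybox mybox.box
```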

Vagrant with WildFly Docker image

After learning the basic commands, let's see what it takes to start a WildFly Docker image using Vagrant.

The Vagrantfile is defined at and shown inline:
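While the actual file is linked above, a minimal Vagrantfile for the Docker provider might look like this (the image name and port mapping are assumptions):

```ruby
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "jboss/wildfly"   # Docker image to start
    d.ports = ["8080:8080"]     # expose the application port on the host
  end
end
```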

Clone the git repo and change to the docker-wildfly directory. The Vagrant image can be started using the following command:

and shows the output as:

This will not work until #5187 is fixed, but at least this blog explained the main concepts of Vagrant.

Build Kubernetes on Mac OS X (Tech Tip #70)

Key Concepts of Kubernetes explained the basic fundamentals of Kubernetes. Binary distributions of Kubernetes for Linux can be downloaded from Continuous Integration builds. But it needs to be manually built on other platforms, for example Mac. Building Kubernetes on Mac is straightforward as long as you know the steps.

This Tech Tip explains how to build Kubernetes on Mac.

Let's get started!

  1. Kubernetes is written in the Go programming language, so you'll need to download the tools/compilers to build Kubernetes. Install Go; for example, the Go 1.4 package for 64-bit Mac OS X can be downloaded from
  2. Configure Go. GOROOT is the directory where Go is installed and contains the compiler/tools. GOPATH is the directory for your Go projects and third-party libraries (downloaded with “go get”). Set up the GOPATH and GOROOT environment variables. For example, on my environment they are:

    Make sure $GOROOT/bin is in $PATH.
  3. Install Gnutar:

    Without this, the following message will be shown:
  4. Tech Tip #39 shows how to get started with Docker on Mac using boot2docker. Download boot2docker for Mac from and install it.
  5. Git clone Kubernetes repo:
  6. Build it. This needs to be done from within the boot2docker VM.
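The environment pieces from steps 2 and 3 can be sketched as a shell snippet (paths are examples from my machine; adjust for yours):

```shell
# Example Go environment setup -- install locations are illustrative
export GOROOT=/usr/local/go     # where the Go toolchain lives
export GOPATH=$HOME/go          # workspace for Go projects and "go get"
export PATH=$PATH:$GOROOT/bin   # make the go tool available on the PATH

# GNU tar, needed by the Kubernetes build scripts, can be installed with:
#   brew install gnu-tar
echo "GOROOT=$GOROOT"
```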


Subsequent blogs will show how to run a Kubernetes cluster of WildFly containers. WildFly will have a Java EE 7 application deployed and persist data to MySQL containers.

Minecraft Modding with Forge: Pre-release of a New O’Reilly Book

Would you like to learn Minecraft Modding in a simple and easy-to-understand language?
Don’t have any technical background or previous programming experience?
Never programmed in Java?

This new O’Reilly book on Minecraft Modding with Forge is targeted at parents and kids who would like to learn how to mod the game of Minecraft. It can be read by parents or kids independently, and is more fun when they read it together. No prior programming experience is required; however, some familiarity with software installation would be very helpful.


Release Date: May 2015 (hopefully sooner)
Language: English
Pages: 200
Print ISBN: 978-1-4919-1889-0 | ISBN 10: 1-4919-1889-6
Early Release Ebook ISBN: 978-1-4919-1882-1 | ISBN 10: 1-4919-1882-9

It uses Minecraft Forge and shows how to build 26 mods. Here is the complete Table of Contents:

Chapter 1 Introduction
Chapter 2 Block Break Message
Chapter 3 Fun with Explosions
Chapter 4 Entities
Chapter 5 Movement
Chapter 6 New Commands
Chapter 7 New Block
Chapter 8 New Item
Chapter 9 Recipes and Textures
Chapter 10 Sharing Mods
Appendix A What is Minecraft?
Appendix B List of Forge Classes and Methods
Appendix C Eclipse Shortcuts and Correct Imports
Appendix D Downloading the Source Code from GitHub
Appendix E Devoxx4Kids

Each chapter also shares several ideas on what readers can try.

The game of Minecraft is commonly associated with “addiction”. This book hopes to leverage that passion by teaching kids how to do Minecraft modding, and in the process teach them some fundamental Java concepts. They pick up basic Eclipse skills along the way as well.

It has been an extremely joyful and rewarding experience to co-author the book with my 12-year-old son. Many thanks to O’Reilly for providing this opportunity of a lifetime to us.

The book is available as a pre-release and can be purchased from Any pre-release buyer will get a final copy of the book as well.

Scan the QR code to get the URL on your favorite device.


Happy modding and looking forward to your feedback!

Key Concepts of Kubernetes

What is Kubernetes?


Kubernetes is an open source orchestration system for Docker containers. It manages containerized applications across multiple hosts and provides basic mechanisms for deployment, maintenance, and scaling of applications.

It allows the user to provide declarative primitives for the desired state, for example “need 5 WildFly servers and 1 MySQL server running”. Kubernetes' self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure this state is met. The user just defines the state, and Kubernetes ensures that it is met at all times on the cluster.

How is it related to Docker?

Docker provides the lifecycle management of containers. A Docker image is a build-time representation of runtime containers. There are commands to start, stop, restart, link, and perform other lifecycle methods on these containers. Containers can be manually linked as shown in Tech Tip #66 or orchestrated using Fig as shown in Tech Tip #68. Containers can run on multiple hosts as well, as shown in Tech Tip #69.

Kubernetes uses Docker to package, instantiate, and run containerized applications.

How does Kubernetes simplify containerized application deployment?

A typical application would have a cluster of containers across multiple hosts. For example, your web tier (Apache or Undertow) might run on a set of containers. Similarly, your application tier (WildFly) would run on a different set of containers. In some cases, or at least to begin with, you may have your web and application server packaged together in the same set of containers. The database tier would generally run on a separate set of containers anyway. The web tier would need to delegate requests to the application tier, so these containers would need to talk to each other. Using any of the solutions mentioned above would require scripting to start the containers, and monitoring/bouncing if something goes down. Kubernetes does all of that for the user once the application state has been defined.

Kubernetes is cloud-agnostic, which allows it to run on public, private, or hybrid clouds, including any cloud provider such as Google Compute Engine. OpenShift v3 is going to be based upon Docker and Kubernetes. Kubernetes can even run on a variety of hypervisors, such as VirtualBox.

Key concepts of Kubernetes

At a very high level, there are three key concepts:

  • Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application.
  • Master is the central control point that provides a unified view of the cluster. A single master node controls multiple minions.
  • Minion is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. A minion provides an application-specific “virtual host” in a containerized environment.

A picture is always worth a thousand words and so this is a high-level logical block diagram for Kubernetes:


After the 50,000-feet view, let's fly a little lower at 30,000 feet and take a look at how Kubernetes makes all of this happen. There are a few key components at the master and minions that make this possible.

  • Replication Controller is a resource at the master that ensures that the requested number of pods are running on minions at all times.
  • Service is an object on the master that provides load balancing across a replicated group of pods.
  • Label is an arbitrary key/value pair, kept in a distributed watchable storage, that the Replication Controller uses for service discovery.
  • Kubelet: Each minion runs services to run containers and be managed from the master. In addition to Docker, Kubelet is another key service installed there. It reads container manifests as YAML files that describe a pod. Kubelet ensures that the containers defined in the pods are started and continue running.
  • Master serves the RESTful Kubernetes API, which validates and configures Pods, Services, and Replication Controllers.
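As an illustration, a container manifest for a single-container pod, using the v1beta1 schema current at the time of writing (field names may differ in later versions), could look like:

```yaml
id: wildfly-pod
kind: Pod
apiVersion: v1beta1
desiredState:
  manifest:
    version: v1beta1
    id: wildfly-pod
    containers:
      - name: wildfly
        image: jboss/wildfly
        ports:
          - containerPort: 8080
labels:
  name: wildfly
```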

Kubernetes Design Overview provides a great summary of all the key components, as shown below.



Extensive docs are already available at A subsequent blog will explain a Kubernetes version of Tech Tip #66.

OpenShift v3 uses Kubernetes and Docker to provide the next level of PaaS platform.

As a fun fact, “Kubernetes” is actually a Greek word written as κυβερνήτης and means “helmsman of a ship”. In that sense, Kubernetes serves that role for your Docker containers.

Hibernate OGM: NoSQL solutions for Java EE (Hanginar #4)

Hibernate OGM brings the power and simplicity of JPA to NoSQL datastores.

It provides one standard way to access a variety of NoSQL datastores such as Infinispan, Ehcache, MongoDB, and Neo4j, and support for others is coming. It even offers rich querying capabilities, converting queries into datastore-specific queries (where supported). You can even mix and match persistence units in persistence.xml for an RDBMS and a NoSQL datastore.
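A sketch of such a mixed persistence.xml (unit names, data source, and store choice are illustrative; the provider class is Hibernate OGM's JPA provider):

```xml
<persistence xmlns=""
             version="2.1">
  <!-- Relational unit, served by the default JPA provider -->
  <persistence-unit name="relational-pu">
    <jta-data-source>java:jboss/datasources/ExampleDS</jta-data-source>
  </persistence-unit>

  <!-- NoSQL unit, served by Hibernate OGM -->
  <persistence-unit name="nosql-pu">
    <provider>org.hibernate.ogm.jpa.HibernateOgmPersistence</provider>
    <properties>
      <property name="hibernate.ogm.datastore.provider" value="mongodb"/>
    </properties>
  </persistence-unit>
</persistence>
```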

This hanginar (#1, #2, #3) with Emmanuel Bernard (@emmanuelbernard) shows how to get started with Hibernate OGM. It specifically addresses the following questions:

  • What NoSQL datastores are supported?
  • Can I build support for other datastores?
  • What application servers can it run on?
  • Do I need Hibernate for it, or does it work with EclipseLink?
  • Can I have a SQL and NoSQL PU in persistence.xml and access them from Java EE application?
  • Can I use this in Java SE applications?

Learn everything about Hibernate OGM at

Source code used in the webinar is at


How to write effective and SEO-friendly blogs?

Everybody around you seems to be blogging and talking about effective and SEO-friendly content. You get all of this but don’t know how to write SEO-friendly content. Where do you start? What do you blog about? How do you structure content so that it shows up on Google SERPs and drives traffic to your blog?

If you are interested in details, read on!

Where to Blog?

The following websites are the easiest ways to start a new blog:

Each of them has a free offering, with a simple and easy-to-use interface.

If you don’t know which one to pick, start with. It gives you 3GB of space. My entire blog with all themes, images, media, and text is ~416 MB, so there is a good likelihood that you may not exceed 3GB. If you are feeling adventurous then you can host a WordPress instance on OpenShift. This is also fairly straightforward and gives you more control of the WordPress instance (customization, plugins, menus, etc.). But the basic one on is good to start with.

If you are one of those nerdy types, then you can even consider setting up Awestruct/Asciidoc-based blog as well.

GitHub Pages is another option. It is particularly useful if you are contributing on GitHub. Otherwise, one of the earlier options is easier and simpler to use.

Migrate from an existing blog?

Do you have an existing blog on a non-WordPress site? There are plenty of plugins to migrate from those platforms to WordPress:

Your blog is only relevant if it shows up in Google’s SERP. Several of the guidelines below will help get a higher SEO ranking for your blog entry, making it SEO-friendly. Note that it takes time for a blog to start showing up on the first page, so be patient!

Guidelines for an SEO-friendly blog

  • Blog Objective: Choose an appropriate blog objective to provide context to your readers. This will set the tone for your blog readers on what to expect, for example “This blog will talk about middleware solutions using JBoss technologies”. Some WordPress themes show the purpose in the header or sidebar all the time. Otherwise it's common to create the first entry as a “welcome entry”, for example:
  • Content Objective: Each blog entry can have a variety of content such as a Tech Tip, How To, product announcement, conference report, webinar announcement/rerun, cross-post from another blog, or some other purpose, and will be targeted at a different audience accordingly. It's important to clearly identify the style and audience for each blog entry, at least to begin with; readers will get used to the style as they keep coming back to your blog. Here are some samples:
    • “This blog will show you how to get started with JBoss Fuse on OpenShift”
    • “Have you ever felt the need for your applications to perform faster? This blog will show you how JBoss Data Grid enables that”
    • “Red Hat Summit Call for Papers is live. This blog will provide more detailed information on how to submit a paper to this awesome conference”.
    • “This webinar shows how to create a Java EE workflow on OpenShift using WildFly, JBoss Tools, Forge, Arquillian, and OpenShift.”
  • Blog Entry Title is the first introduction to the content. Some specific tips:
    • Make sure it's brief and conveys the purpose very clearly. A long and drawn-out title would make the reader bookmark it for later reading (which generally never happens) as opposed to reading it right away. There have been several occasions where I’ve started with a draft title, created the content, and completely rewrote the title to align with the keywords that highlight the content. Ask yourself “If I read this blog title and content, do they match?”. If not, change the title.
    • Make sure the title includes all the keywords that are relevant to the content. This allows for a better SEO optimization of your content. More on this later.
    • Google shows up to 70 characters (including spaces) of a page’s title in its search results. If a page title exceeds 70 characters, Google shows as many whole words as it can and replaces the rest with an ellipsis (…). So a good recommendation is to keep your title under 60 characters. This accounts for capital letters and letters like “m” and “w” that take more space than letters like “i” and “l”.
    • WordPress allows the title and URL to be separate. Avoid usage of “noisy” words such as conjunctions in the title, and definitely in the URL.
    • WordPress blog entries are effectively HTML pages. By default, WordPress takes the Blog Entry Title and appends the Blog Title to it to create the <title> tag of the HTML page. This is good because the words earlier in the title are given more emphasis. But be careful if you are using some other option. You can also use SEO plugins in WordPress that add relevant metadata to your blog for a possibly better ranking.
    • Change the Permalink so that the URL is “timeless”. By default, WordPress includes the month/day in the URL, which has no real benefit. This can be changed by going to Settings, Permalinks and changing the URL type to just include /%postname%/. More on this here.
  • Content
    • The key purpose of the blog entry should be highlighted at the beginning.
    • Write in short sentences. Try to format text using bullets, different level headings, tables, images.
    • Make sure the content has valid HTML/CSS. The WordPress visual editor will take care of this for you. If you are authoring the blog offline and copy/pasting into your blog, then make sure the visual editor can render it correctly. Otherwise a missing tag or incorrect CSS could mess up the whole site.
    • It's good to quote other blogs, articles, websites, and third-party content. Make sure to link back to them and give proper credit. If you are linking to any JBoss-related blogs, it's highly recommended to link from
    • Tables: Use Easy Table plugin for generating tables easily.
    • Code: Use Crayon Syntax Highlighter for syntax coloring code fragments. It supports multiple languages, themes, fonts, and is even integrated in the editor.
    • Multi-part blogs: If a particular blog entry has started becoming too big (say 2-3+ pages) then the audience may lose interest in it. It's better to break the blog into multiple parts and set the tone in the first part. This also allows you to build a cadence for your blog. Sometimes a bigger blog entry is fine as it provides better context and story; the decision is purely context dependent.
    • Media rich: Make your blog colorful by adding pictures of architecture diagrams, events, conference trips, the book front page, or whatever you are talking about. Upload your slides to and embed them. Several conferences have been sharing video replays of sessions and those are great candidates to be included. Consider adding Twitter cards for a more interactive tweetsphere. At the least, one or two pictures are always better than plain boring text.
    • Videos: Host your videos, such as recorded screencasts, on or It's highly recommended to embed them in the blog, as opposed to just linking to them. This not only gives the user a visual cue about the video, but also gives them an opportunity to play it inline.
  • Images (details on image SEO)
    • Google can crawl both the HTML page the image is embedded in, and the image itself.
    • Make sure the image is in one of the supported formats: BMP, GIF, JPEG, PNG, WebP, or SVG.
    • Make sure image filename is related to the image’s content.
    • The alt attribute of the image should describe the image in a human-friendly way.
    • It also helps if the HTML page’s textual contents, as well as the text near the image, are related to the image.
    • Google also looks at the image EXIF information, and there are tools to edit it. It doesn’t hurt to add even more keywords and information as part of the EXIF ImageDescription tag. Use ExifTool to edit EXIF information.
  • Hyperlinking
    • Include links to tutorials on or as appropriate. Make sure the anchor text is directly relevant to the link. For example “BPMS” should link to
    • Include links to the forums, issue tracker, and your Twitter handle. There should always be some means for readers to contact you or ask questions about the content being discussed.
    • Always include a link to download or a “try” URL on
  • Optimize page speed
    • Google provides PageSpeed Insights. It analyzes the content of a webpage and generates suggestions to make that webpage faster on mobile devices and desktops.
    • Use EWWW Image Optimizer plugin in WordPress to either Bulk Optimize previously loaded images, or automatically optimize any new image that is uploaded. At least PHP 5.3 is required for this.
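Coming back to the title-length guideline above, here is a quick way to measure a draft title from the shell:

```shell
# Count the characters in a draft blog title (aim for 60 or fewer)
title="Vagrant with Docker provider, using WildFly"
len=$(( $(printf %s "$title" | wc -c) ))
echo "Title is $len characters"
```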

Here is the complete set of WordPress plugins on my blog:

Plugin Name: Description
Akismet: Checks your comments against the Akismet Web service to see if they look like spam or not
Captcha: Allows you to implement a super-security captcha form into web forms
Crayon Syntax Highlighter: Syntax highlighter supporting multiple languages, themes, fonts, and highlighting from a URL, local file, or post text
Display PHP Version: Displays the current PHP version in the “At a Glance” admin dashboard widget
Easy Table: Create tables in posts, pages, or widgets in an easy way using CSV format. This can also display a table from a CSV file.
EWWW Image Optimizer: Reduce file sizes for images in WordPress using lossless/lossy methods and image format conversion.
FD Feedburner Plugin: Redirects the main feed and optionally the comments feed seamlessly and transparently to
Google XML Sitemaps: Generates a special XML sitemap which will help search engines like Google, Bing, and Yahoo to better index your blog
Quttera Web Malware Scanner: Scans your WordPress website for known and unknown malware and other suspicious activities
Share a Draft: Lets your friends preview one of your drafts, without giving them permission to edit posts in your blog
Smart 404: Automatically redirects to the content the user was most likely after, or shows suggestions, instead of showing an unhelpful 404 error.
Sociable: Enables simplified sharing of blogs/pages on social media
Tiny Google Analytics: Adds the Google Analytics tracking code using optimized code
Ultimate Tag Cloud Widget: Configurable tag cloud widget
UpdraftPlus – Backup/Restore: Backup and restoration made easy. Complete backups; manual or scheduled (backup to S3, Dropbox, Google Drive, Rackspace, FTP, SFTP, email + others).
WordPress SEO: Improve your WordPress SEO
WP Custom Search: Allows searching by custom post types on your website.
WP Open Graph: Adds the Facebook Open Graph protocol to your blog
Yet Another Related Posts Plugin: Displays a list of related posts on your site based on a powerful unique algorithm.

More details on WordPress SEO.

What are your tips for making your blog SEO-friendly?

Docker container linking across multiple hosts (Tech Tip #69)

Docker container linking is an important concept to understand, since any application in production will typically run on a cluster of containers across multiple hosts. But simple container linking does not allow cross-host communication.

What's the issue with Docker container linking?

Docker containers can communicate with each other by manually linking them as shown in Tech Tip #66 or by orchestrating them using Fig as shown in Tech Tip #68. Both of these use container linking, but that has an inherent disadvantage: it is restricted to a single host. Linking does not work if containers are running across multiple hosts.

What is the solution?

This Tech Tip will evolve the sample built in Tech Tips #66 and #68 and show how the containers can be connected if they are running across multiple hosts.

Docker container linking across multiple hosts can be easily done by explicitly publishing the host/port and using it from a container on a different host.

Let's get started!

  1. Start MySQL container as:

    The MySQL container's port 3306 is explicitly mapped to port 5506 on the host.
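    The command, with illustrative credentials (the official mysql image reads these environment variables), might look like:

    ```shell
    # Run MySQL, mapping container port 3306 to host port 5506
    docker run --name mysqldb \
      -e MYSQL_USER=mysql \
      -e MYSQL_PASSWORD=mysql \
      -e MYSQL_DATABASE=sample \
      -e MYSQL_ROOT_PASSWORD=supersecret \
      -p 5506:3306 \
      -d mysql
    ```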
  2. The Git repo has customization/ that creates the MySQL data source. The command looks like:

    This command creates the JDBC resource for WildFly using jboss-cli. It is using $DB_PORT_3306_TCP_ADDR and $DB_PORT_3306_TCP_PORT variables which are defined per Container Linking Environment Variables. The scheme by which the environment variables for containers are created is rather weird. It exposes the port number in the variable name itself. I hope this improves in subsequent releases.

    This command needs to be updated such that an explicit host/port can be used instead.

    So update the command to:

    The only change in the command is to use $MYSQL_HOST and $MYSQL_PORT variables. This command already exists in the file but is commented. So just comment the previous one and uncomment this one.
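    A sketch of the updated jboss-cli invocation (data source name, JNDI name, and credentials are illustrative):

    ```shell
    # Create the JDBC resource pointing at an explicit host/port
    # instead of the container-linking variables
    /opt/jboss/wildfly/bin/ --connect --command="data-source add \
      --name=mysqlDS \
      --driver-name=mysql \
      --jndi-name=java:jboss/datasources/ExampleMySQLDS \
      --connection-url=jdbc:mysql://$MYSQL_HOST:$MYSQL_PORT/sample \
      --user-name=mysql --password=mysql"
    ```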

  3. Build the image and run it as:

    Make sure to substitute <IP_ADDRESS> with the IP address of your host. For convenience, I ran it on the same host. The IP address in this case can be easily obtained using boot2docker ip.
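    The build and run might look like this (the image name is illustrative):

    ```shell
    # Build the image and pass the MySQL coordinates explicitly
    docker build -t wildfly-mysql-javaee7 .
    docker run --name mywildfly \
      -e MYSQL_HOST=<IP_ADDRESS> \
      -e MYSQL_PORT=5506 \
      -p 8080:8080 \
      -d wildfly-mysql-javaee7
    ```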

  4. A quick verification of the deployment can be done by accessing the REST endpoint:
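    For example (the endpoint path is hypothetical; use your application's own):

    ```shell
    curl http://$(boot2docker ip):8080/employees/resources/employees
    ```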

With this, your WildFly and MySQL can run on two separate hosts, no special configuration required.


Docker allows cross-host container linking using Ambassador Containers, but that adds a redundant hop for the service being accessed. A cleaner solution would be to use Kubernetes or Swarm; more on that later.

Marek also blogged about a more elaborate solution in Connecting Docker Containers on Multiple Hosts.


Reactive HTML presentations using Reveal.js, Gist, and OpenShift (Tech Tip #69)

Reveal.js is an HTML-based presentation framework. You just need a browser with support for CSS 3D transforms; that means Chrome, Firefox, Safari, Opera, and IE 10-11 are supported. It also provides a nice fallback for other legacy browsers. Check out a live demo yourself.

This Tech Tip will show how to create Reveal.js-based presentations easily using Gist and OpenShift.

Why Gist?

This allows you to separate the presentation layer (Node.js on OpenShift) and the data layer (HTML source on Gist), and also keep them distributed. You may not be able to show demos using this, at least yet, but at least you don’t need to worry about laptop crashes. You can certainly keep the source for your presentation anywhere you like, such as GitHub or some other repo, but you will need to change the templating framework accordingly.

Why OpenShift?

A full setup of Reveal.js requires installation of Node.js, Grunt, and some other dependencies. And even then your slides are only available locally. To keep it completely distributed, it's important to have Node.js and the other dependencies running in the cloud. OpenShift is an open-source polyglot PaaS from Red Hat that allows Node.js to run in the cloud. You can certainly choose any other Node.js hosting environment as well, but this is what I feel most comfortable with.


Ryan (@ryanj) created an open source slideshow templating service that makes it easy to create, edit, present, and share Reveal.js slides on the web.

A cool feature of this framework is how one browser can be configured as a broadcaster and all others as receivers. This allows the presenter (or broadcaster) to share the slides URL and lets the viewers (or receivers) follow the slides along on their own favorite device. This could be particularly useful in a conference setting.

Gist-powered Reveal.js slideshows provide a quick introduction to the setup. This Tech Tip will use + OpenShift and show you how to set up your own personal hosting environment for beautiful HTML slides.

Let's get started!

  • Sign up for OpenShift at No credit card is required, and a free account gives you 3 gears, where each gear has 1GB disk and 0.5GB memory. A free gear is enough to host your web front end for Reveal.js.
  • Sample slides are available at Click on the button on bottom-left to create a new OpenShift application. This application will clone the source code from and use that as the basis for the newly created application.
  • Create a new gist and copy the unique ID assigned to it. For example, for the gist created at the unique id is “9ac2cea40c302986a8e3”.
  • Register a new API key on GitHub at Note down the “Client ID” and “Client Secret”. Leave “Authorization callback URL” empty; everything else is straightforward.
  • Install the OpenShift CLI tools and set them up. Set up a few environment variables for the OpenShift application:
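    A sketch of setting those variables with the rhc CLI (variable names other than REVEAL_SOCKET_SECRET are taken from the framework's docs at the time and may differ):

    ```shell
    # Values in angle brackets come from the earlier steps
    rhc env set GH_CLIENT_ID=<CLIENT_ID> -a slides
    rhc env set GH_CLIENT_SECRET=<CLIENT_SECRET> -a slides
    rhc env set DEFAULT_GIST=<GIST_ID> -a slides
    rhc env set REVEAL_SOCKET_SECRET=<SOME_SECRET> -a slides
    ```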

    Replace <CLIENT_SECRET>, <CLIENT_ID>, and <GIST_ID> with your specific values. Also note, “slides” is the application that is used in this blog. If your OpenShift application name is different then use that instead.

    REVEAL_SOCKET_SECRET is an environment variable that is used by the templating framework to look for a special value to identify the broadcaster (or the presenter). This value needs to be kept secret and not shared with others. A browser can be made as a broadcaster by accessing the following URL
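    For example (application “slides”, domain “milestogo”; the setToken query parameter name is taken from the framework's docs and may differ):

    ```
    ```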

    Accessing this URL stores this token in browser’s local storage. Make sure to change the URL to reflect your particular application and domain on OpenShift. For example, application name in this case is “slides” and domain is “milestogo”.

Other configuration values are explained at Complete documentation about the framework is at


Thanks to Ryan (@ryanj) for helping me setup the environment.

I’m waiting to see which conference will be the first one to provide Gist-Reveal themes :-)

Docker orchestration using Fig (Tech Tip #68)

Tech Tip #66 showed how to run a Java EE 7 application using WildFly and MySQL in two separate containers. It required explicitly starting the two containers and linking them using --link. Defining and controlling a multi-container service like this is a common design pattern for getting an application up and running.

Meet Fig – Docker Orchestration Tool.

Fig allows you to:

  • Define multiple containers in a single configuration file
  • Create dependencies between two containers by creating links between them
  • Start containers in the right sequence

Let’s get started!

  1. Install Fig as:
  2. The entry point to Fig is a configuration file that defines the containers and their dependencies. The equivalent configuration file from Tech Tip #66 is:
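    A reconstruction sketch of that fig.yml (image names and credentials are assumptions):

    ```yaml
    mysqldb:
      image: mysql
      environment:
        MYSQL_DATABASE: sample
        MYSQL_USER: mysql
        MYSQL_PASSWORD: mysql
        MYSQL_ROOT_PASSWORD: supersecret
    mywildfly:
      image: arungupta/wildfly-mysql-javaee7
      links:
        - mysqldb:db
      ports:
        - "8080:8080"
    ```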

    This YML-based configuration file has:

    1. Two containers defined by the name “mysqldb” and “mywildfly”
    2. Image names are defined using “image”
    3. Environment variables for the MySQL container are defined in “environment”
    4. MySQL container is linked with WildFly container using “links”
    5. Port forwarding is achieved using “ports”
  3. All the containers can be started, in detached mode, by giving the command:
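    With Fig installed, that is:

    ```shell
    # Start all containers defined in fig.yml, detached
    fig up -d
    ```

    The related commands are fig logs, fig ps, fig stop, and plain fig up (foreground).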

    The output is shown as:

    Fig commands allow you to monitor and update the status of the containers:

    1. Logs can be seen as:
    2. Container status can be seen by giving the command:

      to show the output as:
    3. Containers can be stopped as:
    4. Alternatively, containers can be started in foreground by giving the command:

      and the output is seen as:
  4. Find out the IP address using boot2docker ip and access the app as:

Complete list of Fig commands can be seen by typing fig:

Particularly interesting is the scale command; we’ll take a look at it in a subsequent blog.

File issues on GitHub.


WildFly Admin Console in a Docker image (Tech Tip #67)

The WildFly Docker image binds the application port (8080) to all network interfaces (using -b). If you want to view the feature-rich, lovely-looking web-based administration console, then the management port (9990) needs to be bound to all network interfaces as well, using the command shown:
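Assuming the official jboss/wildfly image, whose launch script lives at the path below, that command is:

```shell
# Start WildFly, binding both application and management interfaces
docker run -P -d jboss/wildfly \
  /opt/jboss/wildfly/bin/ -b -bmanagement
```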

This is overriding the default command in Docker file, explicitly starting WildFly, and binding application and management port to all network interfaces.

The -P flag maps any network ports inside the image to a random high port from the range 49153 to 65535 on the Docker host. The exact port can be verified by giving the docker ps command as shown:

In this case, port 8080 is mapped to 49161 and port 9990 is mapped to 49162. The IP address of the Docker container can be verified using the boot2docker ip command. The default web page and admin console can then be accessed on these ports.

Accessing the WildFly Administration Console requires a user in the administration realm. This can be done by using an image which creates that user. And since a new image is created anyway, the Dockerfile can also take care of the network interface binding to keep the actual command line simple. The Dockerfile is pretty straightforward:
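It is likely along these lines (paths match the official jboss/wildfly image; the username/password are the ones mentioned below):

```dockerfile
FROM jboss/wildfly

# Create a user "admin" in the administration realm
RUN /opt/jboss/wildfly/bin/ admin Admin#007 --silent

# Bind both application and management interfaces to all addresses
CMD ["/opt/jboss/wildfly/bin/", "-b", "", "-bmanagement", ""]
```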

This image has already been pushed to Docker Hub and the source file is at
So to have a WildFly image with the Administration Console, just run the image as shown:

Then check the mapped ports as:

The application port is mapped to 49165 and the management port is mapped to 49166. Access the admin console on the mapped management port; it will then prompt for the username (“admin”) and the password (“Admin#007”).


If you don’t like random ports being assigned by Docker, then you can map them to specific ports as well using the following command:
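For example, assuming the same image as before:

```shell
docker run -d -p 8080:8080 -p 9990:9990 arungupta/wildfly-admin
```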

In this case, application port 8080 is mapped to 8080 on the Docker host and management port 9990 is mapped to 9990 on the Docker host. So the admin console will then be accessible on port 9990 of the Docker host.

WildFly/JavaEE7 and MySQL linked on two Docker containers (Tech Tip #66)

Tech Tip #61 showed how to run Java EE 7 hands-on lab on WildFly Docker container. A couple of assumptions were made in that case:

  • WildFly bundles H2 in-memory database. The Java EE 7 application uses the default database resource, which in case of WildFly, gets resolved to a JDBC connection to that in-memory database. This is a good way to start building your application but pretty soon you want to start using a real database, like MySQL.
  • Typically, the application server and the database would not reside on the same host. This reduces risk by avoiding a single point of failure. And so WildFly and MySQL would be on two separate hosts.

There is a plethora of material available on how to configure WildFly and MySQL on separate hosts. But what are the design patterns, and anti-patterns, if you were to do that using Docker?

Let’s take a look!

In simplified steps:

  1. Run the MySQL container as:
  2. Run the WildFly container, with MySQL JDBC resource pre-configured, as:
  3. Find the IP address of the WildFly container:

    If you are on a Mac, then use boot2docker ip to find the IP address.
  4. Access the application as:

    to see the output as:

    The application is a trivial Java EE 7 application that publishes a REST endpoint. Access it as:

    to see:
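
Put together, the steps above can be sketched as follows; the container and image names (mysqldb, arungupta/wildfly-mysql-javaee7) and the credentials are assumptions for illustration:

```shell
# 1. Run the MySQL container with a sample database
docker run --name mysqldb \
  -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
  -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret \
  -d mysql

# 2. Run the WildFly container, linked to MySQL, with the JDBC
#    resource pre-configured in the image
docker run --name mywildfly --link mysqldb:db -p 8080:8080 \
  -d arungupta/wildfly-mysql-javaee7

# 3. Find the IP address of the WildFly container
#    (on a Mac, use boot2docker ip instead)
docker inspect --format '{{ .NetworkSettings.IPAddress }}' mywildfly

# 4. Access the REST endpoint of the application
curl http://<ip-address>:8080/employees/resources/employees/
```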

If you are interested in the nitty gritty, read on for the details.

Linking Containers

The first concept we need to understand is how Docker allows linking containers. Creating a link between two containers creates a conduit between a source container and a target container, and securely transfers information about the source container to the target container. In our case, the target container (WildFly) can see information about the source container (MySQL). The important part to understand here is that none of this information needs to be publicly exposed by the source container; it is only made available to the target container.

The magic switch to enable linking is, intuitively, --link. So, if the MySQL and WildFly containers are run as shown above, then --link mysqldb:db links the MySQL container named mysqldb, with an alias db, to the target WildFly container. This defines a set of environment variables, following a defined naming protocol, in the target container, which can then be used to access information about the source container: IP address, exposed ports, username, password, and so on. The complete list of environment variables can be seen as:

So you can see there are DB_* environment variables providing plenty of information about the source container.

Linking only works if all the containers are running on the same host. A better solution will be shown in a subsequent blog, stay tuned.

Override default Docker command

The Dockerfile for this image inherits from jboss/wildfly:latest and starts the WildFly container. Docker containers can only run one command, but we need to install the JDBC driver, create a JDBC resource using the correct IP address and port, and deploy the WAR file. So we override the command by inheriting from jboss/wildfly:latest and using a custom command. This command does everything that we want, and then starts WildFly as well.

The custom command does the following:

  • Add MySQL module
  • Add MySQL JDBC driver
  • Add the JDBC data source using IP address and port of the linked MySQL container
  • Deploy the WAR file
  • Start WildFly container
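
A hedged sketch of such a startup script (file locations, resource names, and credentials are all assumptions; the DB_* variables are injected by --link as described above):

```shell
#!/bin/bash
# Illustrative sketch of the custom startup command; paths are assumptions.
JBOSS_HOME=/opt/jboss/wildfly

# Start WildFly in the background so jboss-cli can connect to it
$JBOSS_HOME/bin/standalone.sh -b 0.0.0.0 &
sleep 10

# Add the MySQL module and JDBC driver
$JBOSS_HOME/bin/jboss-cli.sh --connect --command="module add \
  --name=com.mysql --resources=/opt/jboss/mysql-connector-java.jar \
  --dependencies=javax.api,javax.transaction.api"
$JBOSS_HOME/bin/jboss-cli.sh --connect \
  --command="/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql)"

# Create the data source using the IP address and port of the
# linked MySQL container, taken from the --link environment variables
$JBOSS_HOME/bin/jboss-cli.sh --connect --command="data-source add \
  --name=mysqlDS --driver-name=mysql \
  --jndi-name=java:jboss/datasources/ExampleMySQLDS \
  --connection-url=jdbc:mysql://$DB_PORT_3306_TCP_ADDR:$DB_PORT_3306_TCP_PORT/sample \
  --user-name=mysql --password=mysql"

# Deploy the WAR, then keep the server process in the foreground
cp /opt/jboss/app.war $JBOSS_HOME/standalone/deployments/
wait
```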

Note that WildFly is started with -b 0.0.0.0, which binds it to all network interfaces. Also, the command needs to run in the foreground so that the container stays active.

Customizing security

Ideally, you’d poke holes in the firewall to enable connections to specific hosts/ports. But these instructions were tried on Fedora 20 running in VirtualBox. So, for convenience, the complete firewall was disabled as:
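On Fedora this amounts to stopping firewalld (do this only on a disposable development VM):

```shell
sudo systemctl stop firewalld
sudo systemctl disable firewalld
```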

In addition, a host-only adapter was added using the VirtualBox settings, and looks like:


That’s it, that should get you going to use WildFly and MySQL in two separate containers.

The steps were also verified on boot2docker, and worked seamlessly there too.

Source code for the image is at


Resolve “dial unix /var/run/docker.sock” error with Docker (Tech Tip #65)

I’ve played around with Docker configuration on Mac using boot2docker (#62, #61, #60, #59, #58, #57) and am starting to play with the native support on Fedora 20. Boot2docker starts as a separate VM on Mac and everything is pre-configured. Booting a fresh Fedora VM and trying to run Docker commands there gives:

Debugging revealed that the Docker daemon was not running on this VM. It can be easily started as:

And then enable it to start automatically with every restart of the VM as:

Simple, isn’t it?


Java EE Workflows on OpenShift (Tech Tip #64)

This webinar shows how to create a Java EE workflow on OpenShift using WildFly, JBoss Tools, Forge, and Arquillian. Specifically, it talks about:

  • How a Java EE application can be easily developed using JBoss Developer Studio and deployed directly to OpenShift
  • Set up Test and Production instances on OpenShift
  • Enable Jenkins to provide Continuous Integration
  • Run the tests on Test and push the WAR to Production

More detailed blog entries are at:

And a lot more at


Modular Java EE applications with OSGi (Hanginar #3)

This hanginar (#1, #2) with Paul Bakker (@pbakker) shows how to build modular Java EE applications.

Learn all about:

  • Why is it important to build modular Java EE applications?
  • How does OSGi enable modular applications?
  • See the bndtools plugin in action using Eclipse
  • Learn how to transform an existing Java EE application to be modular
  • How do you take a modular application to production?
  • Learn about Amdatu – open source OSGi components that enable modular Java EE applications

Many thanks to Paul Bakker (@pbakker) and Luminis for all the great work on enabling modular Java EE applications and participating in this series! Most of the work shown in this webinar is also explained in an O’Reilly book: Building Modular Cloud Apps with OSGi (co-authored with @BertErtman).

A tentative list of speakers for upcoming hanginars has been identified. Each speaker is assigned an issue, which allows you to ask questions. Feel free to file an issue for any other speaker that should be on the list.

The next hanginar will be advertised ahead of time so that anybody can participate!

Patching Weld 3 in WildFly 8.2 – First Experimental RI of Java EE 8 (Tech Tip #63)

Java EE 8 is moving along and several new component JSRs have been filed. JSR 365 will define the specification for CDI 2.0. Red Hat has already started working on the implementation prototype in Weld 3 and Alpha3 was released recently.

The Java EE 8 compliant application server from Red Hat will be WildFly where all the different technologies will be implemented. In the meanwhile, how do you try out these early experimental releases?

Tech Tip #29 showed how to patch WildFly 8.x from a previous release. This tip will leverage that mechanism to install Weld 3 Alpha3 in WildFly 8.2. You can also download Weld 3 Alpha3 Standalone or Weld 3 Alpha3 as patch to WildFly 9.0 Alpha1.

The instructions are rather simple:

  1. Download and unzip WildFly 8.2:
  2. Download Weld 3 Alpha3 Patch for WildFly 8.2:
  3. Apply the patch (instructions are also available in the README bundled with the patch):
  4. Start WildFly:
  5. Run a simple CDI test from javaee7-samples:

    and see output in the WildFly console as:

    Note that the Weld version of “3.0.0 (Alpha 3)” is shown appropriately in the logs.
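
Put together, the steps above look roughly like this (the archive and patch file names are assumptions based on the versions mentioned):

```shell
# 1. Download and unzip WildFly 8.2
unzip wildfly-8.2.0.Final.zip
cd wildfly-8.2.0.Final

# 3. Apply the downloaded Weld 3 Alpha3 patch
bin/jboss-cli.sh --command="patch apply /path/to/weld-3-alpha3-wildfly-8.2-patch.zip"

# 4. Start WildFly
bin/standalone.sh &

# 5. Run a simple CDI test from the javaee7-samples repository
cd /path/to/javaee7-samples/cdi
mvn test
```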

In terms of features, here is what is available so far:

  • Declarative ordering of observer methods using @Priority
  • Ability for an extension to veto and modify an observer method
  • Support for Java 8 repeatable annotations as qualifiers and interceptor bindings
  • Enhanced AnnotatedType API
  • Asynchronous events
  • Simplified configuration of Weld-specific properties
  • Guava is no longer used internally

More details, including code samples, are explained in “Weld 3.0.0 Alpha1 Released” and “An update on Weld 3”. All the prototyped APIs are in the org.jboss.weld.experimental package, indicating their experimental nature.

Here are some resources for you to look at:

  • Javadocs
  • Maven coordinates
  • Feedback at Weld forums or the cdi-dev mailing list.

I’ve created a Java EE 8 Samples repository and will start adding some CDI 2.0 samples there, stay tuned.