Minecraft Server on Google Cloud – Tech Tip #82

Minecraft Logo

Bukkit Logo

If you’ve not followed the Minecraft/Bukkit saga over the past few months: Bukkit and CraftBukkit downloads were taken down by a DMCA notice because a developer (@wolvereness) wanted Mojang to open up. Mojang (@vubui) posted an official statement in their forums. The general feeling is that @wolvereness left the Bukkit community hanging, and Mojang is not responsible for this debacle.

One of my friends (@ryanmichela), and a contributor to Bukkit, prepared a slide deck explaining the unfortunate debacle:

Anyway, leaving all the gory details behind, this blog will show how to get started with Bukkit 1.8.3.

What?

You just said Bukkit was shut down by a DMCA notice.

SpigotMC Logo

Hail Spigot for reviving Bukkit and updating it to 1.8.3!

It’s still not clear how Spigot got around the DMCA takedown, but the binaries seem to be available again, at least for now.

As a refresher, Bukkit is the API used by developers to make plugins. CraftBukkit is the modified Minecraft server that can understand plugins made with the Bukkit API.

Minecraft Server Hosting on OpenShift already explained how to set up a Minecraft server on OpenShift. This Tech Tip will show how to get a Minecraft server running on Google Cloud.

Let’s get started!

Get Started with Google Cloud

Google Cloud Platform logo

  1. Sign up for a free trial at cloud.google.com. This gives you $300 of credit, which should be plenty to begin with.

Create and Configure Google Compute Engine

  1. Go to console.developers.google.com and create a new project by specifying the values as shown:

    Create Project on Google Cloud

  2. In console.developers.google.com, go to “Compute”, “Compute Engine”, “Networks”, “default”, “New firewall rule”, enter the values as shown, and click on “Create”.

    Google Cloud Firewall Rule
  3. In the left menu bar, click on “VM Instances” under “Compute Engine”, then “Create instance”. Keep all the defaults except:
    1. Provide a name as “minecraft-instance”
    2. Change Image to Ubuntu 14.10.
    3. Change External IP to “New static IP address” and fill in the details. IP address is automatically assigned.

    Exact values are shown here:

    Google Cloud Create Instance

    And click on “Create”.

    Note down the IP address; it will be used later to connect from the Minecraft launcher.

  4. Click on the newly created instance, “Add tags”, and specify the “minecraft” tag. Using the exact same tag on the VM instance and the firewall rule ensures that the rule is applied to the appropriate instance.

Install JDK, Git, and Spigot

In console.developers.google.com, select the recently created instance and click on “SSH”, “Open in browser window”. The software is installed from this shell window.

Install JDK
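One way to do this on Ubuntu 14.10 is to install the Oracle JDK from the WebUpd8 PPA; this is a sketch, and the exact package used in the original setup may differ:

sudo apt-get update
sudo apt-get install -y software-properties-common
sudo add-apt-repository -y ppa:webupd8team/java
sudo apt-get update
# The installer prompts for the Oracle license; swap in oracle-java8-installer for JDK 8
sudo apt-get install oracle-java7-installer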

Make sure to answer the questions and accept the license during the install. Using OpenJDK 8 to install Spigot gives the following exception:

Install Git

This is required for installing Spigot.
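On Ubuntu this is a single command:

sudo apt-get install -y git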

Install Spigot

Download and Install Spigot
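Spigot is built from source with its BuildTools jar; a sketch (at the time of writing this produced the 1.8.3 server jar):

mkdir ~/spigot && cd ~/spigot
wget https://hub.spigotmc.org/jenkins/job/BuildTools/lastSuccessfulBuild/artifact/target/BuildTools.jar
java -jar BuildTools.jar    # builds Spigot/CraftBukkit from source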

A successful completion of this task shows the following message:

Start Minecraft Server on Google Cloud

Run the server as:
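For example (the jar name depends on what BuildTools produced):

java -Xms512M -Xmx1G -jar spigot-1.8.3.jar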

This will generate “eula.txt”. Accept the license agreement by giving the following command:
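One way to flip the flag from the shell:

sed -i 's/eula=false/eula=true/' eula.txt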

Run the server as:
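A sketch that keeps the server running after the SSH session closes (again, the jar name may differ):

nohup java -Xms512M -Xmx1G -jar spigot-1.8.3.jar &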

This will start the CraftBukkit 1.8 server in the background.

Connect to Minecraft Server from the Client

Launch Minecraft client and create a new Minecraft server as:

Google Cloud Minecraft Multiplayer

Clicking on Done shows:

Google Cloud Multiplayer Minecraft Server

Now your client can connect to the Minecraft server running on Google Cloud.
Google Cloud Minecraft Client

The server is now live. Add 104.155.38.193 to your Minecraft launcher and put some of Google’s resources to the test :)

I was hoping to provide a script that can be run using the Google Cloud SDK, but the bundled CLI seems to have some issues creating the project. The CLI equivalents for other commands can easily be seen from the console itself.

Enjoy and happy Minecrafting!

Minecraft Modding Course at Elementary School – Teach Java to Kids

Cross posted from weblogs.java.net/blog/arungupta/archive/2015/03/22/minecraft-modding-course-elementary-school-teach-java-kids

minecraft-logo

Exactly two years ago, I wrote a blog on Introducing Kids to Java Programming using Minecraft. Since then, Devoxx4Kids has delivered numerous Minecraft Modding workshops all around the world. The workshop material is all publicly accessible at bit.ly/d4k-minecraft. In these workshops, we teach attendees, typically 8 – 16 years of age, how to create Minecraft Mods. Given the excitement around Minecraft in this age range, these workshops are typically sold out very quickly.

One of the parents from our workshops in the San Francisco Bay Area asked us to deliver an 8-week course on Minecraft modding at their local public school. As an athlete, I’m always looking for new challenges and ways to break the rhythm. This felt like a good option, and so the game was on!

My son has been playing the game, and modding it, for quite some time, and helped me create the mods easily. We’ve also finished authoring our upcoming O’Reilly book on Minecraft Modding using Forge, so we had a decent idea of what needed to be done for these workshops.

Minecraft Modding Workshop Material

All the workshop material is available at bit.ly/d4k-minecraft.

Getting Started with Minecraft Modding using Forge shows the basic installation steps.

These classes were taught from 7:30am – 7:45am, before the start of school. Given the nature of the workshop, the enthusiasm and concentration in the kids were just amazing.

Minecraft Modding Course Outline

The 8-week course was delivered using the following lessons; the Java concepts covered are listed for each week:

Week 1: Watch the video and understand the software required for modding.
  • Familiarity with JDK, Forge, Eclipse

Week 2: Work through the installation and get the bundled sample mod running. This bundled mod, without any typing, allows explaining the basic concepts of Java such as classes, packages, and methods, running Minecraft from Eclipse, and seeing the output in the Eclipse panel.

Week 3: Chat Items mod shows how to create a stack of 64 potatoes when the word “potato” is typed in the chat window.
  • Creating a new class in Eclipse
  • Annotations and event-driven programming, introduced by listening for the event fired when a player types a message in the chat window
  • String variable types and how they are enclosed within quotes

Week 4: Continue with the Chat Items mod and a couple of variations: change the number of items to be generated, generate different items on different words, or multiple items on the same word.
  • Integer variables for changing the number of items
  • How Eclipse code completion allows scrolling through the list of items that can be generated
  • Multiple if/else blocks and the scope of a block

Week 5: Eclipse Tutorial for Beginners.
  • Some familiarity with Eclipse

Week 6: Ender Dragon Spawner mod spawns an Ender Dragon every time a dragon egg is placed.
  • == to compare objects
  • Accessing properties using . notation
  • Creating a new class
  • Calling methods on a class

Week 7: Creeper Spawn Alert mod alerts a player when a creeper is spawned.
  • instanceof operator
  • for loop
  • java.util.List
  • Enums
  • && and || operators
  • Parent/child classes

Week 8: Sharp Snowballs mod turns all snowballs into arrows.
  • Methods of 15-20 lines of code
  • ! operator
  • Basic math in Minecraft

Most of the kids in this 8-week course had no prior programming experience, and it was amazing to see them able to read Java code by week 7. Some kids who had prior experience finished the workshop in the first 3-4 weeks and were helping other kids.

Check out some of the pictures from the 8-week workshops:

 Minecraft Modding at Public Elementary School
 

Many thanks to the attendees, parents, volunteers, Parent Teacher Association, and school authorities for giving us a chance. The real benchmark was when all the kids raised their hands to continue the workshop for another 8 weeks … that was awesome!

Is Java too difficult as a first programming language for kids?

One of the common comments heard around these workshops is “Java is too difficult a language to start with”. Most of the time these comments are not based on any personal experience but more along the lines of my-friend-told-me-so or I-read-an-article-saying-so. My typical answer consists of the following parts:

  1. Yes, Java is a bit verbose, but it was designed to be readable by humans and computers. Ask somebody to read Scala or Clojure code at this age and they’ll probably never come back to programming again. These languages serve a niche purpose, and their concepts are getting integrated into mainstream languages anyway.
  2. Ruby, Groovy, and Python are decent alternative languages to start with. But do you really want to start teaching them fundamental programming using Hello World?
  3. Kids are already “addicted” to Minecraft. The game is written in Java, and modding can be done using Java. Let’s leverage that addiction and turn it into a passion for programming. Minecraft provides a perfect platform for gamification of the programming experience at this early age.
  4. There are 9 million Java developers. It is a very well adopted and understood language, with lots of help available in terms of books, articles, blogs, videos, tools, etc. And the language has been around for almost 20 years now. Other languages come and go, but this one is here to stay!

As Alan Kay said:

The best way to predict the future is to create it

Let’s create some young Java developers by teaching them Minecraft modding. This will give the kids bragging rights among their friends, give parents the satisfaction that their kids are learning a top-notch programming language, and give the industry some budding Java developers.

I dare you to pick up this workshop and run it in your local school :)

Minecraft Modding Course References

Sign up for an existing Devoxx4Kids chapter in your city, or open a new one.

If you are in the San Francisco Bay Area, then register for one of our upcoming workshops at meetup.com/Devoxx4Kids-BayArea/. There are several chapters in the USA (Denver, Atlanta, Seattle, Chicago, and others).

Would your school be interested in hosting a similar workshop? Devoxx4Kids can provide a train-the-trainer workshop. Let us know by sending an email to info@devoxx4kids.org.

Devoxx4Kids is a registered NPO and 501(c)(3) organization in the US, which allows us to deliver these workshops quite selflessly, fueled by our passion for teaching kids. But donations are always welcome :)

Configure JRebel with Docker containers – Tech Tip #81

JRebel allows you to skip the build and redeploy process by instantly deploying your application to the application server of your choice. It is supported in all the major IDEs such as NetBeans, Eclipse, and IntelliJ. It is also supported in a wide variety of application servers such as JBoss EAP, WildFly, WebLogic, WebsFear (err, WebSphere), Tomcat, and many others.

You can easily get started with JRebel in JBoss Developer Studio  or Integrate JRebel with JBoss on your local desktop. It can also be easily used with JBoss Developer Studio and Ticket Monster on OpenShift.

This Tech Tip will explain how to set up JRebel with Docker containers. Specifically, we’ll use the sample application provided by the Java EE 7 Hands-on Lab (jrebel branch), JBoss Tools with Eclipse Mars M5, and run the sample application in a WildFly Docker container.

Many thanks to Adam Koblentz (@akoblentz) for helping me through the steps!

Let’s get started!

Install JRebel in Eclipse

JRebel runs in three modes:

  • Local: App server is running from inside the IDE
  • External: App server is running from outside the IDE, such as using CLI, but on the same machine
  • Remote: App server is running on a different machine, VM, container, or cloud

Docker containers need to be configured using the “remote” mode.

  1. Install JBoss Tools as explained at tools.jboss.org/downloads/. JRebel’s remote mode can only be enabled using the IDE. Install the JRebel plugin from the Eclipse Marketplace.

Package rebel.xml and rebel-remote.xml with the WAR

These files define the location of classes and resources in your archive.

  1. Clone the Java EE 7 HOL repo (the clone and package commands are sketched after this list):
  2. Import the Maven project (from the solution directory) in the IDE, right-click on the project, select the JRebel menu, and click on “Enable JRebel Nature”. This will generate rebel.xml in the src/main/resources directory, and it would look something like:
  3. Right-click on the project again and select “Enable Remoting”. This generates rebel-remote.xml, again in the src/main/resources directory, and it will look like:

    This needs to be done on the machine where JRebel will be used in the IDE. This will ensure that the public key is generated appropriately.
  4. Package your application as:

    This will package rebel.xml and rebel-remote.xml in the WAR file.
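A sketch of the clone and package commands from steps 1 and 4; the repository URL, branch name, and directory layout are assumptions based on the description above:

git clone https://github.com/javaee-samples/javaee7-hol.git
cd javaee7-hol
git checkout jrebel            # the branch mentioned above
cd solution/movieplex7         # the Maven project imported in the IDE
mvn clean package              # packages rebel.xml and rebel-remote.xml in the WAR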

Configure and Run the Application Server

The application server needs to know about the JRebel agent and a platform-specific library. Both of these files are available from Eclipse if JRebel was installed earlier. On Mac these files are available in the eclipse/mars/m5/eclipse/plugins/org.zeroturnaround.eclipse.embedder_6.1.1.RELEASE-201503121801/jr6/jrebel/ directory. The exact name will very likely differ in your case.

  1. Build the image using the Dockerfile:

    The key parts in this image are:

    1. Using the official jboss/wildfly Docker image
    2. Copying the JRebel agent and platform-specific library to the image
    3. Configuring the application server so that it knows about the “remote” mode and the platform-specific library
    4. Starting WildFly
    5. Downloading the pre-built WAR file from GitHub. This will not work for you, and you’ll need to replace it with something like:

      This WAR file is the same that was generated earlier.
  2. Actually build the image (the command is sketched after this list).
  3. Run the container (also sketched after this list) as:

    and this should show something like:

    JRebel license information is a good sign that everything is configured properly.

    If you used Docker Machine to Setup Docker Host then the application should now be accessible at 192.168.99.100:8080/movieplex7/.
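A sketch of the build and run commands for steps 2 and 3; the image tag is arbitrary and the port mapping assumes the WildFly default of 8080:

docker build -t jrebel-wildfly .        # build the image from the Dockerfile in the current directory
docker run -it -p 8080:8080 jrebel-wildfly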

Configure Eclipse

Last step is to configure Eclipse so that it knows where the application is deployed.

  1. In Eclipse, right-click on the project, select JRebel, Advanced Properties
  2. Click on “Edit” next to “Deployment URLs”
  3. Click on Add and specify the URL of the application, 192.168.99.100:8080/movieplex7/ in our case.
  4. Click on Continue, Apply, OK.

Voila, the configuration is now complete.

Now changing any class, adding any method, updating any entity or HTML or JSF page will push the changes to the Docker container instantly. No need to redeploy the application.

Enjoy!

9 Docker recipes for Java EE Applications – Tech Tip #80

Cross-posted from www.voxxed.com/blog/2015/03/9-docker-recipes-for-java-ee-applications/

So, you’d like to start using Docker for Java EE applications?

A typical Java EE application consists of an application server, such as WildFly, and a database, such as MySQL. In addition, you might have a separate front-end tier, say Apache, for load balancing a number of application servers. A caching layer, such as Infinispan, may be used to improve overall application performance. A messaging system, such as ActiveMQ, may be used for processing queues. Both the caching and messaging components could be set up as a cluster for further scalability.

This Tech Tip will show some simple Docker recipes to configure your containers that use an application server and database. A subsequent blog will cover more advanced recipes that will include the front end, caching, messaging, and clustering.

Let’s get started!

Docker Recipe #1: Setup Docker using Docker Machine

If Docker is not already set up on your machine, then as a first step, you need to set it up. If you are on a recent version of Linux then you already have Docker. Otherwise it can optionally be installed as:
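For example, using the installation script from the Docker project (inspect scripts before piping them to a shell):

curl -sSL https://get.docker.com/ | sh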

On Mac and Windows, this means installing boot2docker, which is a Tiny Core Linux VM that comes with the Docker host. Then you need to configure ssh keys and certificates.

Fortunately, this is extremely simplified using Docker Machine. It takes you from zero-to-Docker on a host with a single command. This host could be your laptop, in the cloud, or in your data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

This recipe is explained in detail in Docker Machine to Setup Docker Host.

Docker Recipe #2: Application Server + In-memory Database

One of the cool features of Java EE 7 is the default database resource. This allows you to not worry about creating a JDBC resource in an application server-specific way before your application is accessible. Any Java EE 7 compliant application server will map the default JDBC resource name (java:comp/DefaultDataSource) to an application server-specific resource in the bundled database server.

For example, WildFly comes bundled with the H2 in-memory database. This database is ready to be used as soon as WildFly is ready to accept your requests. This simplifies your development efforts and allows you to do rapid prototyping. The default JDBC resource is mapped to java:jboss/datasources/ExampleDS, which is then mapped to the JDBC URL of jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE.

In such a case, the Database server is another application running inside the Application server.

Docker Recipe for Java EE Application #1

Here is the command that runs Java EE 7 application in WildFly:
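A minimal sketch, assuming the arungupta/javaee7-hol image used in the referenced hands-on lab:

docker run -it -p 8080:8080 arungupta/javaee7-hol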

If you want to run a typical Java EE 7 application using WildFly and H2 in-memory database, then this Docker recipe is explained in detail in Java EE 7 Hands-on Lab on WildFly and Docker.

Docker Recipe #3: Two Containers on Same Host Using Linking

The previous recipe gets you started rather quickly but soon becomes a bottleneck as the database is only in-memory. This means that any changes made to your schema and data are lost after the application server shuts down. In this case, you need to use a database server that resides outside the application server. For example, MySQL as the database server and WildFly as the application server.

To keep things simple, both the database server and application server can run on the same host.

Docker Recipe for Java EE Application #2

Docker Container Links are used to link the two containers. Creating a link between two containers creates a conduit between a source container and a target container and securely transfers information about the source container to the target container. In our case, the target container (WildFly) can see information about the source container (MySQL). The important part to understand here is that none of this information needs to be publicly exposed by the source container; it is only made available to the target container.

Here are the commands that start the MySQL and WildFly containers and link them:
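A sketch of the two commands; the MySQL environment variables are the official image’s, while the WildFly image name and the db alias are assumptions:

docker run --name mysqldb -d \
  -e MYSQL_DATABASE=sample -e MYSQL_USER=mysql \
  -e MYSQL_PASSWORD=mysql -e MYSQL_ROOT_PASSWORD=supersecret \
  mysql
docker run --name mywildfly -d -p 8080:8080 \
  --link mysqldb:db \
  arungupta/wildfly-mysql-javaee7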

WildFly and MySQL linked on two Docker containers explains how to set up this recipe.

Docker Recipe #4: Two Containers on Same Host Using Fig

The previous recipe requires you to run the containers in a specific order. Running multi-container applications can quickly become challenging if each tier of your application is sitting in a container. Fig (deprecated in favor of Docker Compose) is a Docker orchestration tool that lets you:

  • Define multiple containers in a single configuration file
  • Create dependencies between two containers by creating links between them
  • Start containers in the right sequence

Docker Recipe for Java EE Application #3

The entry point for Fig is a configuration file as shown:
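A sketch of such a fig.yml, written here from the shell with a heredoc; the WildFly image name is an assumption:

cat > fig.yml <<'EOF'
mysqldb:
  image: mysql
  environment:
    MYSQL_DATABASE: sample
    MYSQL_USER: mysql
    MYSQL_PASSWORD: mysql
    MYSQL_ROOT_PASSWORD: supersecret
mywildfly:
  image: arungupta/wildfly-mysql-javaee7
  links:
    - mysqldb:db
  ports:
    - 8080:8080
EOF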

and all the containers can be started as:
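With the file in place, a single command brings everything up (-d runs the containers detached):

fig up -d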

Docker orchestration using Fig explains this recipe in detail.

Fig itself is now only receiving updates; its code base is used as the basis for Docker Compose. This is explained in the next recipe.

Docker Recipe #5: Two Containers on Same Host Using Compose

Docker Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

The application configuration file uses the same format as Fig’s. The containers can be started as:
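For example, from the directory containing docker-compose.yml:

docker-compose up -d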

This recipe is explained in detail in Docker Compose to Orchestrate Containers.

Docker Recipe #6: Two Containers on Different Hosts using IP Address

In the previous recipe, the two containers are running on the same host. These two could easily communicate using Docker linking. But simple container linking does not allow cross-host communication.

Running containers on the same host means you cannot scale each tier, database or application server, independently. This is where you need to run each container on a separate host.

Docker Recipe for Java EE Application #4

MySQL container can start as:
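A sketch; publishing port 3306 makes the database reachable from the other host:

docker run --name mysqldb -d -p 3306:3306 \
  -e MYSQL_DATABASE=sample -e MYSQL_USER=mysql \
  -e MYSQL_PASSWORD=mysql -e MYSQL_ROOT_PASSWORD=supersecret \
  mysql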

JDBC resource can be created as:

And WildFly container can start as:

Complete details for this recipe are explained in Docker container linking across multiple hosts.

Docker Recipe #7: Two Containers on Different Hosts using Docker Swarm

Docker Machine

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual host. It picks up where Docker Machine leaves off by optimizing host resource utilization and providing failover services. Specifically, Docker Swarm allows users to create resource pools of hosts running Docker daemons and then schedule Docker containers to run on top, automatically managing workload placement and maintaining cluster state.

More details about this recipe are coming in a subsequent blog.

Docker Recipe #8: Deploy Java EE Application from Eclipse

This recipe deals with how to deploy your existing applications to a Docker container.

Let’s say you are using JBoss Tools as your development environment and WildFly as your application server.

eclipse-logo JBoss Tools Logo

There are a couple of ways by which these applications can be deployed:

  • Use Docker volumes + local deployment: Here a directory on your local machine is mounted as a Docker volume. The WildFly Docker container is started by mapping that directory to the deployment directory (see the sketch after this list) as:

    Configure JBoss Tools to deploy WAR files to this directory.
  • Use the WildFly management API + remote deployment: Start the WildFly Docker container and additionally expose the management port 9990 (also sketched after this list) as:

    Configure JBoss Tools to use a remote WildFly server and deploy using management APIs.
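Hedged sketches of the two docker run variants described above; the local directory path is an arbitrary example, and the in-container deployment path assumes the official jboss/wildfly image layout:

# Option 1: mount a local directory as the WildFly deployments directory
docker run -it -p 8080:8080 \
  -v ~/tmp/deployments:/opt/jboss/wildfly/standalone/deployments/:rw \
  jboss/wildfly

# Option 2: expose the management port (9990) in addition to 8080 and deploy
# over the management API; the image must have a management user configured
docker run -it -p 8080:8080 -p 9990:9990 jboss/wildfly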

This recipe is explained in detail at Deploy to WildFly and Docker from Eclipse.

Docker Recipe #9: Test Java EE Applications using Arquillian Cube

Arquillian Cube allows you to control the lifecycle of Docker images as part of the test lifecycle, either automatically or manually. Cube uses the Docker REST API to talk to the container. It uses the remote adapter API, for example WildFly in this case, to talk to the application server. The Docker configuration is specified as part of the maven-surefire-plugin as:

Complete details about this recipe are available on Run Java EE Tests on Docker using Arquillian Cube.

What other recipes are you using to deploy your Java EE applications using Docker?

Enjoy!

Deploy to WildFly and Docker from Eclipse – Tech Tip #79

Docker and WildFly Part 1 – Deployment via Volumes and Docker and WildFly Part 2 – Deployment over Management API show two approaches of how JBoss Tools can be configured to run any application on a WildFly server running as a Docker container.

The blogs provide detailed setup and the underlying background. This Tech Tip will provide a quick summary of how to deploy a Java EE 7 application to WildFly and Docker from Eclipse.

Let’s get started!

Configure Docker

  1. Configure Docker on your machine using Docker Machine.
  2. Find the IP address as:

    and add an entry in /etc/hosts as:
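A sketch of both steps; the machine name and the dockerhost alias are assumptions:

docker-machine ip mydocker          # prints something like 192.168.99.100
# then add a line like this to /etc/hosts:
# 192.168.99.100   dockerhost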

Deployment to WildFly Container using Docker Volumes

  1. Create a folder that will be mounted as a volume in the WildFly Docker container. In this case, the folder is /Users/arungupta/tmp/deployments. The WildFly Docker container can be started as:

    rw ensures that the Docker container can write to it.
  2. Create a new server adapter:
    WildFly Docker Server Adapter
  3. Assign or create a WildFly 8.x runtime:

    Docker WildFly Server Adapter

    Changed properties are highlighted.

  4. Setup the server properties as:

    Docker WildFly Adapter Properties

     

    Changed properties are highlighted. The two properties on the left are automatically propagated from the previous dialog. The two additional properties on the right side are required to keep the deployment scanners in sync with the server.

  5. Specify a custom deployment folder on Deployment tab of Server Editor:

    Docker WildFly Server Adapter

  6. Right-click on the newly created server adapter and click “Start”.

    Docker WildFly Server Synchronized

    Status quickly changes to “Started, Synchronized” as shown.

  7. Open up any Java EE 7 project (for example javaee7-simple-sample), right-click, Run on Server, and choose this server. The project runs and displays the page:

    Docker Java EE 7 Output

 

Deployment to WildFly Container using Management API

  1. Run WildFly management image as:

    This is only a convenience image to reduce the number of steps required to get started. Dockerfile for this image has more details, including admin credentials.

    Volume mapping is not required in this case, instead additional management port is exposed.

  2. Configure a remote server controlled by management operations:

    Docker WildFly Remote Server Configuration

    Changed properties are highlighted.

  3. Take the defaults:

    Docker WildFly Remote System Integration

  4. Set up server properties by specifying the admin credentials (Admin#70365). Note, you need to delete the existing password and use this instead:

    Docker WildFly Admin Credentials

  5. Right-click on the newly created server adapter and click “Start”. Status quickly changes to “Started, Synchronized” as shown.

    Docker WildFly Server Synchronized

  6. Open up any Java EE 7 project (for example javaee7-simple-sample), right-click, Run on Server, and choose this server. The project runs and displays the page:

    Docker Java EE 7 Output

Enjoy!

This blog showed how to deploy a Java EE 7 application to WildFly and Docker from Eclipse.

Is there any other way that you deploy to WildFly Docker container from Eclipse?

Docker Machine to Setup Docker Host – Tech Tip #78

Running Docker containers typically involve three components:

  • Docker Client is a binary that accepts commands from the user and communicates back and forth with host
  • Docker Daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers
  • Docker Registry is a SaaS platform for sharing and managing Docker images. Docker Hub is the public registry. Private registries can easily be set up as well, such as the one by Artifactory. More on this in a subsequent blog.

Docker Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host.

Docker Architecture

 

In a typical development environment setup, the Docker Client and Host/Daemon will be co-located on the same host machine. Even if they are on separate machines, it still requires logging in to the host and setting up the Docker Daemon for that OS.

docker-logo

Docker Machine takes you from zero-to-Docker on a host with a single command. This host could be your laptop, in the cloud, or in your data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

It downloads the boot2docker VM, sets up ssh keys, generates certificates, and starts the VM. It basically takes care of all the boring work so that you can focus on all the fun things.

This Tech Tip will show you how to get started with Docker Machine and use it to set up a Docker host on Mac. It does not work on Windows yet because of github.com/docker/machine/issues/742.

Let’s get started!

Install Docker Machine

  1. Download the appropriate binary from docs.docker.com/machine/#installation. The binary for Mac can be downloaded as shown in the sketch after this list.
  2. Verify the installation as:
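A sketch covering both steps; the release version in the URL is an assumption (check the releases page for the current file name) and newer releases should be preferred:

curl -L https://github.com/docker/machine/releases/download/v0.1.0/docker-machine_darwin-amd64 \
  -o /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine
docker-machine -v                   # verify the installation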

Setup Mac Host using Docker Machine

  1. Docker Machine can be configured to use multiple drivers, such as Amazon Web Services, Google Compute Engine, Microsoft Azure, and Oracle VirtualBox. On a developer laptop, VirtualBox is a convenient option. VirtualBox 4.3.20 is the minimum requirement, so make sure you have the correct version installed.
  2. Create a Docker host using the VirtualBox provider and call the machine “mydocker”. Make sure ssh-keygen is in the PATH before invoking this command. On Mac, this is already in /usr/bin/ssh-keygen. On Windows, this can be installed as part of Git Bash. This can be done as shown in the sketch after this list:

    This downloads boot2docker with the Docker daemon installed, and will create and start a VirtualBox VM with Docker running.
  3. Find IP address of the machine as:

    Note down this IP address; it will be used for accessing the application.
  4. Check the status of running machine as:

    The * in the ACTIVE column indicates this is an active host.
  5. Check the environment of newly created machine as:
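A recap of the commands used in steps 2–5 above, outputs omitted (the machine name “mydocker” follows the step above):

docker-machine create --driver virtualbox mydocker   # create the Docker host
docker-machine ip mydocker                           # find its IP address
docker-machine ls                                    # check the status of running machines
docker-machine env mydocker                          # show the client environment (in releases that support the env command)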

Setup Docker client to Communicate

  1. Setup your client to talk to this host as:
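With a release that supports the env command, the client can be pointed at the new host in one line:

eval "$(docker-machine env mydocker)"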

Run Java Application on Host

  1. Run Java EE 7 Application discussed in Java EE 7 Hands-on Lab on WildFly and Docker on this host as:
  2. Access the application at 192.168.99.101:8080/movieplex7/, which looks like:

    Docker Machine Output

Docker Machine Commands

Complete list of Docker Machine commands can be seen as:
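Running the binary with no arguments, or with --help, prints the command list:

docker-machine --help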

Learn more about Docker Machine, Swarm, and Compose in this video:

Why would you use anything other than Docker Machine to set up a Docker host? How do you set up your Docker host otherwise?

Some useful references …

Enjoy!

Docker Compose to Orchestrate Containers – Tech Tip #77

Docker Orchestration using Fig showed how to define and control a multi-container service using Fig. Since then, Fig has been renamed to Docker Compose, or Compose for short.

The first release of Compose was announced recently.

From github.com/docker/compose

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Docker Compose uses the same API used by other Docker commands and tools.

Docker Compose

This Tech Tip will rewrite Docker Orchestration using Fig blog to use Docker Compose. In other words, it will show how to run a Java EE 7 application that is deployed using MySQL and WildFly.

Let’s get started!

Install Docker Compose

Install Compose as:
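A sketch based on the standard curl-based install; the version number is an assumption and should be replaced with the latest release:

curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` \
  -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
docker-compose --version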

Docker Compose Configuration File

The entry point to Compose is docker-compose.yml. To begin with, the docker-compose tool also recognizes the fig.yml file name but shows the following message:

And if both fig.yml and docker-compose.yml are available in the directory then the following message is shown:

Use the same configuration file from the previous blog and rename it to docker-compose.yml:
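For completeness, the file can be recreated from the shell; a sketch where the WildFly image name is an assumption and the structure follows the description below:

cat > docker-compose.yml <<'EOF'
mysqldb:
  image: mysql
  environment:
    MYSQL_DATABASE: sample
    MYSQL_USER: mysql
    MYSQL_PASSWORD: mysql
    MYSQL_ROOT_PASSWORD: supersecret
mywildfly:
  image: arungupta/wildfly-mysql-javaee7
  links:
    - mysqldb:db
  ports:
    - 8080:8080
EOF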

This YML-based configuration file has:

  1. Two containers defined by the name “mysqldb” and “mywildfly”
  2. Image names are defined using “image”
  3. Environment variables for the MySQL container are defined in “environment”
  4. MySQL container is linked with WildFly container using “links”
  5. Port forwarding is achieved using “ports”

Start, Verify, Stop Docker Containers

  1. All the containers can be started, in detached mode, by giving the command (a recap of the commands in this section is sketched after the list):

    And that shows the output as:
  2. Verify the containers as:
  3. Logs for the containers can be seen as:

    And shows the output as:
  4. Find the IP address of the host as:

    And access the application as:

    To see the output as:

    Or in the browser as:

    Docker Compose Output

  5. Stop the containers as:

    to see the output as:
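For reference, a recap of the commands used in steps 1–5, outputs omitted (the Docker Machine name “mydocker” is an assumption):

docker-compose up -d             # start all containers in detached mode
docker-compose ps                # verify the containers
docker-compose logs              # see the logs
docker-machine ip mydocker       # find the IP address of the host
curl http://$(docker-machine ip mydocker):8080/movieplex7/   # access the application
docker-compose stop              # stop the containers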

Docker Compose Commands

The complete list of Docker Compose commands can be seen by typing docker-compose, which shows the output as:

A subsequent blog will likely play with scale command.

Help for each command is shown by typing -h after the command name. For example, help for run command is shown as:

Enjoy!

Jenkins to Nexus with Git Polling – Tech Tip #76

Build Binaries Only Once is a very important principle of Continuous Deployment (CD). However, that blog guides you to build and deploy binaries to Nexus from your development machine. This is fine as a starting step, where everything is locally contained on your laptop and you are just testing the setup to figure out how things work. But everybody in the team having a local Nexus repository defeats the purpose of a “shared repository”. It is also against Continuous Integration (CI), where the source code committed by different team members is checked out and built on a CI server. And CI is a fundamental requirement for Continuous Deployment. How do you set this up then?

You use a CI server to push binaries to Nexus.

There are a variety of CI servers in both the open source and commercial range. Jenkins, Travis, CruiseControl, and Go are some of the popular ones in the open source land. They all have a commercial edition as well. Bamboo and AnthillPro are a couple of popular commercial-only offerings. This blog will use the simplest, most popular, and easiest to use: the Jenkins CI server.

The overall flow is shown in the diagram and explained after that.

Jenkins, GitHub, Nexus setup

 

The flow is:

  • Developers push code from inside firewall to GitHub
  • Jenkins is polling GitHub for code updates
  • Build the binaries and push the artifacts to Nexus (inside firewall)

This tech tip will show how to get started with Jenkins and push binaries to Nexus by polling the GitHub workspace. While polling is inefficient, it may be the only, and probably the simplest, choice.

In this setup, Jenkins and Nexus are both set up inside your firewall. This is the more common scenario, as at least Nexus would typically be inside the firewall. However, Jenkins may be configured outside the firewall, in which case it will be able to archive artifacts but not directly push to Nexus. A proxy needs to be configured for Jenkins and Nexus to communicate in this case.

Let’s get started!

Download and Start Jenkins Server

All information about Jenkins can be found at jenkins-ci.org.

  1. Download the latest WAR file (the download and start commands are sketched after this list):

    The total time to download will differ based upon your network speed.
  2. Start Jenkins as

    Starting and Accessing Jenkins provide more details about starting Jenkins and different configuration options.
  3. Once Jenkins is started, it can be accessed at localhost:8080 and shown as:

    Default Jenkins Output
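The download and start commands for reference (the mirror URL may change over time; the port can be changed with --httpPort):

wget http://mirrors.jenkins-ci.org/war/latest/jenkins.war
java -jar jenkins.war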

Install Git Plugin

By default, Jenkins does not have the ability to handle a Git workspace. Hopefully this will be fixed in a subsequent release (see INFRA-253). In the meantime, let’s install the Git plugin.

  1. Click on “Manage Jenkins”, “Manage Plugins”
  2. Click on the “Available” tab, select “GIT Plugin” and click on “Install without restart”

    Git plugin installation in Jenkins
  3. Click on “Restart Jenkins …” to restart Jenkins.

Create a Jenkins Job

  1. Configure Maven at Configure System as explained here
  2. Create a new Jenkins job by going to localhost:8080/newJob
  3. Choose “Maven project” and give the name as shown:

    Jenkins New Job

    Click on “OK”.

  4. In “Source Code Management”, choose “Git” and specify the repository “https://github.com/javaee-samples/javaee7-simple-sample” as shown:
    Java EE 7 Simple Sample GitHub Repo in Jenkins
  5. In “Build Triggers”, choose “Poll SCM” and specify the schedule to poll the repo every 5 minutes as “H/5 * * * *”:

    techtip75-polling-schedule

  6. In “Build”, specify “deploy” target as shown:

    Maven target in Jenkins

Deploy SNAPSHOT to Nexus

Once the setup is done, deploying to Nexus is just a click or a poll away.

  1. Click on Build Now to build the job

    Jenkins Build Job

  2. Console output for the first job will show something like:

  3. The Git Polling Log will show the last poll of your workspace repo. If there are any commits to the workspace after the last job, then a new job will be started.

This blog showed how to push binaries from Jenkins to Nexus using Git Polling.

Enjoy!

Announcing JBoss Champions

JBoss Champion

JBoss Champions is a selected group of community members who are passionate advocates of JBoss technologies, under a program sponsored by Red Hat. It fosters the growth of individuals that promote adoption of JBoss Projects and/or JBoss Products and actively share their deep technical expertise about them in the community, within their company, and with their customers or partners. This could be done in forums, blogs, screencasts, tweets, conferences, social media, whitepapers, articles, books, and other means.

Founding JBoss Champions

Proud and excited to announce the first set of JBoss Champions:

  1. Adam Bien (@AdamBien)
  2. Alexis Hassler (@alexishassler)
  3. Antonin Stefanutti (@astefanut)
  4. Antonio Goncalves (@agoncal)
  5. Bartosz Majsak (@majson)
  6. Francesco Marchioni (@mastertheboss)
  7. Geert Schuring (@geertshuring)
  8. Guillaume Scheibel (@g_scheibel)
  9. Jaikiran Pai
  10. John Ament (@JohnAment)
  11. Mariano Nicolas De Maio (@marianbuenosayr)
  12. Paris Apostolopoulos (@javapapo)

Many congratulations to the first set of JBoss Champions! Make sure to wish them using email, a tweet, a blog, or any other means that is available on their jboss.org profile. Give them a hug when you meet them at a conference. Ask them a tough JBoss question, challenge them! Invite them to your local Java User Group to give a talk about JBoss.

Want to nominate a JBoss Champion?

Do you have it in you, and feel worthy of being a JBoss Champion?

Want to nominate yourself, or somebody else?

Send an email to champions@jboss.org.

Here are some likely candidates:

  • Senior developers, architects, consultants, academia who are using and promoting JBoss technologies using different means
    • Blogs and webinars
    • Publish articles on jboss.org, InfoQ, DZone, etc.
    • Social media
    • Talks at conferences and local JUGs/JBUGs
  • Implemented real-world projects using JBoss technologies
  • Actively answering questions in JBoss forums/StackOverflow
  • Authored a book on JBoss topic
  • Lead a JBoss User Group
  • Mentoring other community members and grooming champions

Make sure the nominee has a current jboss.org profile and has all the relevant details. Include any references that will highlight your value to the JBoss community. The complete list of criteria is clearly defined at jboss.org/champions.

Subscribe to the twitter feed of existing JBoss Champions.

Once again, many congratulations to the first set of JBoss Champions, and looking forward to many others. Submit your nomination today!

Bind WildFly to a different IP address, or all addresses on multihomed (Tech Tip #75)

An interface is a logical name, in WildFly parlance, for a network interface/IP address/host name to which sockets can be bound. There are two interfaces: “public” and “management”.

The “public” interface binding is used for all application related network communication (i.e. Web, Messaging, etc). The “management” interface is used for all components and services that are required by the management layer (i.e. the HTTP Management Endpoint).

By default, the “public” interface is configured to listen on the loopback address of 127.0.0.1. So if you start WildFly as:
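For example, from the WildFly installation directory:

./bin/standalone.sh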

Then the WildFly default page can be accessed at http://127.0.0.1:8080. Usually, /etc/hosts provides a mapping of 127.0.0.1 to localhost, so the same page is accessible at http://localhost:8080. 8080 is the port where all applications are accessed.

On a multihomed machine, you may want to start WildFly and bind the “public” interface to a specific IP address. This can be easily done as:
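A minimal example, binding the “public” interface to 192.168.1.1:

./bin/standalone.sh -b=192.168.1.1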

Now the applications can be accessed at http://192.168.1.1:8080.

For compatibility, -b 192.168.1.1 is also supported but -b=192.168.1.1 is recommended.

Or, if you want to bind to all available IP addresses, then you can do:
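Binding to 0.0.0.0 listens on all available addresses:

./bin/standalone.sh -b=0.0.0.0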

Similarly, by default, WildFly can be managed using Admin Console at http://127.0.0.1:9990. 9990 is the management port.

WildFly “management” interface can be bound to a specific IP address as:
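A minimal example using the -bmanagement switch:

./bin/standalone.sh -bmanagement=192.168.1.1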

Now Admin Console can be accessed at http://192.168.1.1:9990.

Or, bind “management” interface to all available IP addresses as:
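For example:

./bin/standalone.sh -bmanagement=0.0.0.0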

You can also bind to two specific addresses as explained here.

Of course, you can bind the WildFly “public” and “management” interfaces together as:
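For example, binding both interfaces to all available addresses:

./bin/standalone.sh -b=0.0.0.0 -bmanagement=0.0.0.0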

Learn more about this in Interface and Port Configuration in WildFly, and more about these switches in Controlling the Bind Address with -b.

Build Binaries Only Once for Continuous Deployment

What is Build Binaries Only Once?

One of the fundamental principles of Continuous Delivery is Build Binaries Only Once, or BBOO for short. This means that the binary artifacts should be built once, and only once. These artifacts should then be stored in a repository manager, such as a Nexus repository. Subsequent deploy, test, and release cycles should never attempt to build the binary again and should instead reuse it. This ensures that the exact same binary has gone through all the different test cycles and is delivered to the customer.

Often, binaries are rebuilt during each testing phase from a specific tag of the workspace and are considered to be the same. But they can still differ! They might turn out to be the same, but that is more incidental. More likely they are not the same because of different environment configurations. For example, the development team might be using JDK 8 on their machines while test/staging might be using JDK 7. There are a multitude of reasons why the binary artifacts could differ. So it is essential to build binaries only once, store them in a repository, and make them go through the different test, staging, and production cycles. This increases the overall confidence level of the delivery to the customer.

Build Binaries Only Once

This image shows how the binaries are built once during the Build stage and stored in the Nexus repository. Thereafter, the Deploy, Test, and Release stages only read the binary from Nexus.

The fact that dev, test, and staging environments differ is a different issue. And we’ll deal with that in a subsequent blog.

How do you setup Build Binaries Only Once?

For now, let’s look at the setup:

  1. A Java EE 7 application WAR file is built once
  2. Stored in a Nexus repository, or the local .m2 repository
  3. The same binary is used for smoke testing
  4. The same binary is used for running the full test suite

The smoke test in our case will be just a single test, and the full suite has four tests. Hopefully this is not your typical setup in terms of the number of tests, but at least you get to see how to set everything up.

There are also only two stages of testing, smoke and full, but the concept can be easily extended to add other stages. A subsequent blog will show a full-blown deployment pipeline.

Let’s get started!

  1. Check out a trivial Java EE 7 sample application from github.com/javaee-samples/javaee7-simple-sample. This is a typical Java EE application with REST endpoints, CDI beans, and JPA entities. (A recap of the commands in this section is sketched after this list.)
  2. Set up a local Nexus repository and deploy a SNAPSHOT of the application to it as:

    By default, Nexus repository is configured on localhost:8081/nexus. Note down the host/port if you are using a different combination. Also note down the exact version number that is deployed to Nexus. By default, it will be 1.0-SNAPSHOT.

    You can also deploy a RELEASE to this Nexus repository as:

    Note down whether you deployed SNAPSHOT or RELEASE.

    In either case, you can also specify -P release Maven profile and sources and javadocs will be attached with the deployment. So if RELEASE is deployed as:

    Then sources and javadocs are also attached.

  3. Check out the test workspace from github.com/javaee-samples/javaee7-simple-sample-test. Make the following changes in this project:
    1. Change the nexus-repo property to match the host/port of the Nexus repository. If you used the default installation of Nexus and deployed a RELEASE, then nothing needs to be changed. By default, Nexus has one repository for SNAPSHOTs and another for RELEASEs. The workspace is configured to use the RELEASE repository. If you deployed a SNAPSHOT, then “releases” in nexus-repo needs to be changed to “snapshots” to point to the appropriate repository.
    2. Change javaee7-sample-app-version property to match the version of the application deployed to Nexus.
  4. Start WildFly and run smoke tests as:

    This will run all files ending in “SmokeTest”. ShrinkWrap and Arquillian perform the heavy lifting of resolving the WAR file from Nexus and using it for running the tests:

    Running the smoke tests will show the results as:

  5. Run the full tests as:

    This will run all files included in your test suite and will show the results as:

    In both cases, smoke tests and full tests are using the binary that is deployed to Nexus.
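A hedged recap of the commands from the steps above; the Surefire test filter and Maven goals are assumptions, not necessarily the exact invocations used originally:

git clone https://github.com/javaee-samples/javaee7-simple-sample.git
(cd javaee7-simple-sample && mvn clean deploy)      # pushes 1.0-SNAPSHOT to the local Nexus repository

git clone https://github.com/javaee-samples/javaee7-simple-sample-test.git
cd javaee7-simple-sample-test
mvn test -Dtest=*SmokeTest                          # smoke tests only
mvn test                                            # full test suite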

Learn more about your toolset for creating this simple yet powerful setup:

arquillian-logo nexus-logowildfly-logo

 

Here are some other blogs coming in this series:

  • Use a CI server to deploy to Nexus
  • Run tests on WildFly running in a PaaS
  • Add static code coverage and code metrics in testing
  • Build a deployment pipeline

Enjoy!

Setup Local Nexus Repository and Deploying WAR File from Maven (Tech Tip #74)

Maven Central serves as the central repository where binary artifacts are uploaded by different teams/companies/individuals and shared with the rest of the world. Much like GitHub and other source code repositories are effective for source code control, these repository managers act as a deployment destination for your own generated binary artifacts.

Setting up a local repository manager has several advantages. The primary one is that it acts as a highly configurable proxy to Maven Central, so that everybody does not have to download all the dependencies from the central repo. Another primary reason is to keep your interim generated artifacts within your team. Reasons to Use a Repository Manager provides a detailed explanation of the complete set of benefits.

This Tech Tip will show how to set up a local Nexus repository manager and push artifacts to it – both snapshots and releases.

Let’s get started!

Install and Configure Local Nexus Repository

  1. Download and unzip the latest Nexus OSS. The default administrator login/password is admin/admin123. The default deployment login/password is deployment/deployment123.
  2. Start up Nexus (the commands are sketched after this list) as:

    The logs can then be seen as:

    Or you can start where the logs are displayed in the console itself:
  3. Configure the Maven settings file (~/.m2/settings.xml) to include the default deployment username and password as:
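A sketch of steps 2 and 3; the Nexus version in the path and the server id are assumptions (the id must match the repository id used in the project’s distributionManagement section):

cd nexus-2.11.2-03/bin
./nexus start                   # start Nexus in the background
tail -f ../logs/wrapper.log     # watch the logs
# or run it in the foreground with logs on the console:
# ./nexus console

# Merge a <server> entry like this into ~/.m2/settings.xml:
cat <<'EOF'
<servers>
  <server>
    <id>deployment</id>
    <username>deployment</username>
    <password>deployment123</password>
  </server>
</servers>
EOF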

Deploy Snapshot to Local Nexus Repository

  1. Check out a simple Java EE sample from github.com/javaee-samples/javaee7-simple-sample.
  2. Create and deploy the WAR file to the local Nexus repository as:

    The snapshot repository, after pushing a couple of builds, can be seen at localhost:8081/nexus/#view-repositories;snapshots~browsestorage and looks as shown:

    Nexus Snapshot Repository

    The actual repository storage is in the ../sonatype-work/nexus directory. This is created parallel to wherever the Nexus OSS bundle was unzipped.

Deploy Release to Local Nexus Repository

  1. Clean any previously performed release:

  2. Prepare for the next release:

  3. Perform the release:

    Notice how this command ends with an error. This is similar to what is reported here, but the strange thing is that the files are still uploaded to Nexus. Here is a snapshot from localhost:8081/nexus/#view-repositories;releases~browsestorage while trying to test multiple releases and wondering about these “spurious” error messages:

    Nexus Release Repository

    This error will require more debugging, but at least snapshot and release builds can now be stored in the local Nexus repository.

    UPDATE: Manfred Moser helped debug this error by sending pull requests. This error is now gone and instead should show something like:
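For reference, a sketch of the three Maven Release Plugin invocations used in steps 1–3 above (interactive version/tag prompts omitted):

mvn release:clean       # step 1: clean any previously performed release
mvn release:prepare     # step 2: bump versions, tag, and commit
mvn release:perform     # step 3: check out the tag, build, and deploy to Nexus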

     

You learned how to set up a local Nexus repository and push snapshot and release builds to it. Subsequent blogs will show how this repository can be used for CI/CD.

Enjoy!

Tips for Effective Session Submissions at Technology Conferences

Several of us go through the process of submitting talks to a technology conference. This requires thinking of a topic that you deem worthy of a presentation. Deciding on a topic can be a blog by itself, but once the topic is selected, it involves creating a title and an abstract that will hopefully get selected. The dreaded task of preparing the slides and demos after that is a secondary story, but this blog will talk about the dos and don’ts of an effective session submission that can improve your chances of acceptance.

What qualifies me to write this blog?

I’ve been speaking for 15+ years in ~40 countries around the world at a variety of technology conferences. In the early years, this involved submitting a session at a conference, getting rejected or accepted, and then speaking at some of the conferences. The sessions were reviewed by a Program Committee, which is typically a group of people, experts in their domain, who help shape the conference agenda. For the past several years, I’ve participated in the Program Committees of several conferences, either as an individual member or leading a track with multiple individual members.

Now, I’ve had my share of rejects, and still get rejected at conferences. There are multiple reasons for that, such as too many sessions by a speaker, a more compelling abstract by another speaker, the Program Committee looking for a real-life practitioner, and others. But the key part is that these rejects never let me down. I do miss the opportunity to talk to the attendees at that conference though. For example, I’ve had rejects from a conference three years in a row but got accepted in the fourth year. And hopefully I will get invited again this year ;)

Let’s see what I practice when writing a session title/abstract, and what I expect from other sessions when I’m part of the Program Committee!

Tips for Effective Session Submission

  1. No product pitches - In a technology-focused conference, any product, marketing or seemingly market-ish talk is put at the bottom of the list, or basically rejected right away. Most vendors have their product specific conference and such talks are better suited there.
  2. Catchy title – The title is typically 50-80 characters that explain what your talk is all about. Make sure your title is catchy and conveys the intention. The Program Committee will read through the entire submission but is more likely to look at yours first if the title is catchy. Attendees are more likely to read the abstract, and possibly attend the talk, if they like the title. Some more points on titles:
    1. Politically correct language – Don’t lean too far towards making the title arcane, and never use foul language. You must remember that the Program Committee has both male and female members and people from different cultures. Certain words may be appropriate in a particular culture but not so on a global level. So make sure you check the global political correctness of the title before picking the words.
    2. Use numbers, if possible – Instead of saying “Tips for Java EE 7”, use “50 Tips in 50 Minutes for Java EE 7”. This talk got me a JavaOne 2013 Rockstar Award. Now this was not entirely due to the title, but I’ve seen a few other talks with similar titles at JavaOne 2014. I guess the formula works ;) And there is something about numbers and how the human brain operates. If something is quantified, then you are more likely to pay attention to it!
  3. Coherent abstract – The abstract is typically 500-1500 characters, sometimes more, that describes what you are going to do in your session. Session abstracts can differ based upon what is being presented. But typically, as a submitter, I divide it into three parts – set up/define the problem space, show what will be presented (preferably with an outline), and then the lessons learned by the attendees. I also include any demos, case studies, and customer/partner participation that will be included in the talk.

    As a Program Committee member, I’m looking at similar points and how the title/abstract is going to fit in the overall rhythm of the conference. Some additional points about the abstract, since that is where most of the important information is available:

    1. WIIFM (What’s In It For Me) – Prepare an abstract that will allow the attendees to connect with you. Is this something that they may care about? Something that they face in their daily life? Think: if you were an attendee, would you be interested in attending this session after reading the abstract? Think WIIFM from the attendee’s perspective.
    2. Use all the characters – Conferences have different character limits for pitching your abstract. The reviewers may not know you or your product at all, and you get N characters to pitch your idea. Make sure to use all of them, to the last Nth character.
    3. Review your abstract – Make sure to read your abstract multiple times to ensure that you are giving all the relevant information. Think through your presentation and see if you are leaving out any important aspects. Also look at whether the abstract has any redundant information that is not required by the reviewers. You can also consider getting your abstract peer reviewed. I’m always happy to provide that service to my blog readers :-)
    4. Coordinate within the team – Make sure to coordinate within your team before the submission – multiple sessions from the same team or company do not ensure that the best speaker is picked. In such cases we rely upon your “google presence” and/or the review committee’s prior knowledge of the speaker. It’s unfortunate if the selected speaker is not the most appropriate one.

    Make sure you don’t write an essay here, or at least provide a TLDR; version. Just pick the three most important aspects of your session and highlight them.

  4. Hands-on labs: Hands-on labs are where attendees sit through a session of two to four hours and learn a tool, build/debug/test an application, practice some methodology, or do something else in a hands-on manner. Make sure you clearly highlight the flow of the lab, down to every 30 minutes if possible, and the end goal, such as “Attendees will build an end-to-end Java EE 7 application using X, Y, Z” or “Attendees will learn the tools and techniques for adopting DevOps in their team”. A broad outline of the content is still very important so that the Program Committee can understand what the attendees’ experience will be.
  5. Appropriate track – Conferences typically have multiple tracks, and as a submitter you typically pick one as a primary track, and possibly another as a secondary. Give yourself time to read through the track descriptions and choose the appropriate track for your talk. In some cases, the selected track may be inappropriate, either by accident or for some other reason. In that case, the Program Committee will try their best to recategorize the talk to an appropriate track, if it needs to. But please ensure that you are filing under the right track to have all the right eyeballs looking at it. It would be really unfortunate, for the speaker and the conference, if an excellent talk gets dropped because it was in an inappropriate track.
  6. Use tags – Some conferences have the ability to apply tags to a submission. Feel free to use the existing tags, or create something that is more likely to be searched by the Program Committee. This provides a different dissection of all the submissions, and possibly some more eyes on your submission.
  7. First-time speaker – If you are a newbie, or a first-time presenter, then pay close attention to the CFP sections which give you an opportunity to toot your own horn. Make sure to include a URL to a video of a presentation you have done elsewhere. If you have never presented at a public conference, or are speaking at this conference for the first time, then consider recording a technical presentation and uploading the video to YouTube or Vimeo. This will allow the Program Committee to get to know you slightly better. Links to a SlideShare profile are recommended as well in this case. Very often the Program Committee members will google the speaker, so make sure your social profiles, at least Twitter and LinkedIn, are up to date. Please don’t say “call me at xxx-xxx-xxxx to find out the details” :-)
  8. Run spell checker – Make sure to run spell checker in everything you submit as part of the session. Spelling mistakes turn off some of the Program Committee members, including myself ;-) This will generally never be a sole criteria of rejection but shows lack of attention, and only makes us wonder about the quality of session.

Never Give Up!

If your session does not get accepted, don’t give up and don’t take it personally. Each conference has a limited number of session slots, and typically the number of submissions is higher, sometimes way higher, than that. The Program Committee tries, to the best of their ability, to pick the sessions that fit the rhythm of the conference. You’ve already done the hard work of preparing a compelling title and abstract, so submit it to other conferences. At the very least, try giving the talk at a local Java User Group and get feedback from the attendees there. You can always try out Virtual JUG as well for a more global audience.

These tips are based upon my experience presenting at, and selecting sessions for, technology conferences, but most of them should be valid at other kinds of conferences as well.

If your talk does get approved and you go through the process of creating compelling slides and sizzling demos, the attendees will always be a mixed bunch ;)

Attendees in a Conference Session

Enjoy, good luck, and happy conferencing!

Any more tips to share?

Database Migrations in Java EE using Flyway (Hanginar #6)

flyway-logoThe database schema of any Java EE application evolves along with the business logic. This makes database migrations an important part of any Java EE application.

Do you still perform them manually, along with your application deployment? Is it still a lock-step process, or run as two separate scripts – one for application deployment and one for database migrations?

Learn how Flyway simplifies database migrations, and seamlessly integrates with your Java EE application in this webinar with Axel Fontaine (@axelfontaine).

You’ll learn about:

  • The need for a database migration tool in a Java EE application
  • Seamless integration with Java EE application lifecycle
  • SQL scripts and Java-based migrations (a minimal sketch follows this list)
  • Getting Started guides
  • Comparison with Liquibase
  • And much more!
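
As a taste of the Java-based migrations mentioned in the list above, here is a minimal sketch using Flyway’s Java API. The JDBC URL, credentials, and class name are hypothetical placeholders, not taken from the webinar:

    import org.flywaydb.core.Flyway;

    public class MigrationRunner {
        public static void main(String[] args) {
            // Point Flyway at the application database (hypothetical credentials)
            Flyway flyway = new Flyway();
            flyway.setDataSource("jdbc:mysql://localhost:3306/movies", "user", "password");

            // Apply any pending migrations found on the classpath, both SQL scripts
            // (e.g. V1__create_tables.sql) and Java-based migrations
            flyway.migrate();
        }
    }

In a Java EE application, one common pattern is to make the same migrate() call from a startup singleton so the schema is brought up to date when the application is deployed; the webinar covers how Flyway fits into that lifecycle.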

A fun fact about this hanginar, and the jOOQ one, is that both were conceived on the wonderful cruise that was part of JourneyZone. Happy to report that both are now complete!

Enjoy!

OpenShift v3: Getting Started with Java EE 7 using WildFly and MySQL (Tech Tip #73)

OpenShift OriginOpenShift is Red Hat’s open source PaaS platform. OpenShift v3 (due to be released this year) will provide a holistic experience for running your microservices using Docker and Kubernetes. In classic Red Hat fashion, all the work is done in the open at OpenShift Origin. This will also drive the next major release of OpenShift Online and OpenShift Enterprise.

OpenShift v3 uses a new platform stack built on community projects to which Red Hat contributes, such as Fedora, CentOS, Docker, Project Atomic, Kubernetes, and OpenStack. OpenShift v3 Platform Combines Docker, Kubernetes, Atomic and More explains this platform stack in detail.

OpenShift v3 Stack

This tech tip will explain how to get started with OpenShift v3.

Getting Started with OpenShift v3

Pre-built binaries for OpenShift v3 can be downloaded from Origin at GitHub. However, the simplest way to get started is to run OpenShift Origin as a Docker container.

OpenShift Application Lifecycle provides complete details on what it takes to run a sample application from scratch. This blog will use those steps and adapt them to run using the boot2docker VM on Mac. In the process we’ll also deploy a Java EE 7 application on WildFly that accesses a database in a separate MySQL container.

Here is our deployment diagram:

OpenShift v3 WildFly MySQL Deployment Strategy

  • WildFly and MySQL are running on separate pods.
  • Each of them is wrapped in a Replication Controller to enable simplified scaling.
  • Each Replication Controller is published as a Service.
  • WildFly talks to the MySQL service, as opposed to directly to the pod. This is important as Pods, and IP addresses assigned to them, are ephemeral.

Let’s get started!

Configure Docker Daemon

  1. Configure the Docker daemon on your host to trust the Docker registry service you’ll be starting. This registry will be used to push images for the build/test/deploy cycle. (A sketch of these steps follows this list.)
    • Log into boot2docker VM as:
    • Edit the file

      This will be an empty file.
    • Add the following name/value pair:

      Save the file, and quit the editor.

    This will instruct the docker daemon to trust any docker registry on the 172.30.17.0/24 subnet.
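
The commands and file contents for the steps above are not embedded in this post; the following is a plausible sequence, assuming boot2docker’s /var/lib/boot2docker/profile is the file being edited and Docker’s --insecure-registry daemon flag is what establishes the trust:

    # From the Mac host, log into the boot2docker VM
    boot2docker ssh

    # Edit the (initially empty) profile read by the Docker daemon at startup
    sudo vi /var/lib/boot2docker/profile

    # Add the following name/value pair, then save the file and quit the editor
    EXTRA_ARGS="--insecure-registry 172.30.17.0/24"

    # Back on the host, restart the VM so the Docker daemon picks up the flag
    exit
    boot2docker restart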

Check out OpenShift v3 and Java EE 7 Sample

  1. Download and install Go, and set up the GOPATH and PATH environment variables. Check out the OpenShift Origin repository (a sketch of the commands in this section follows the list):

    Note the directory where it’s checked out. In this case, it’s ~/workspaces/openshift.

    Build the workspace:

  2. Check out the javaee7-hol workspace that has been converted to a Kubernetes application:

    This is also done in ~/workspaces/openshift directory.
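
The checkout and build commands are not embedded above; a plausible sequence is shown below, assuming everything is done in ~/workspaces/openshift. The javaee7-hol repository URL is left as a placeholder for whichever fork carries the Kubernetes conversion, and the Origin build script may differ between releases:

    mkdir -p ~/workspaces/openshift
    cd ~/workspaces/openshift

    # 1. Check out OpenShift Origin (requires Go, with GOPATH and PATH set up)
    git clone https://github.com/openshift/origin.git

    # Build the workspace
    cd origin
    hack/build-go.sh
    cd ..

    # 2. Check out the javaee7-hol workspace converted to a Kubernetes application
    git clone <URL of the converted javaee7-hol repository>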

Start OpenShift v3 Container

  1. Start OpenShift Origin as a Docker container:

    Note that the ~/workspaces/openshift directory is mounted as the /workspaces/openshift volume in the container. Some additional volumes are mounted as well. (A sketch of the commands in this section follows the list.)

    Check that the container is running:

  2. Log into the container as:

  3. Install Docker registry in the container by giving the following command:

  4. Confirm that the registry is running by getting the list of pods:

    osc is the OpenShift client CLI and allows you to create and manage OpenShift projects. Some of the kubectl commands can also be run using this client.

  5. Confirm the registry service is running. Note the actual IP address may vary:

  6. Confirm that the registry service is accessible:

    And look for the output:
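
The commands for this section are not embedded above either, and neither is the expected registry output. Here is a plausible sequence, assuming the openshift/origin image and the osc client bundled inside it; the command for creating the Docker registry changed between early Origin builds, so it is only indicated as a comment, and the registry IP address is an example:

    # 1. Start OpenShift Origin as a Docker container, mounting the workspace
    docker run -d --name openshift-origin --net=host --privileged \
        -v ~/workspaces/openshift:/workspaces/openshift \
        openshift/origin start

    # Check that the container is running
    docker ps

    # 2. Log into the container
    docker exec -it openshift-origin bash

    # 3. Create the Docker registry (the exact command and configuration file
    #    varied across early Origin builds; see the sample-app instructions)

    # 4. Confirm that the registry pod is running
    osc get pods

    # 5. Confirm that the registry service is running (the IP address may vary)
    osc get services

    # 6. Confirm that the registry service is accessible, for example:
    curl http://172.30.17.3:5000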

Access OpenShift v3 Web Console

  1. The OpenShift Origin server is now up and running. Find out the host’s IP address using boot2docker ip and open https://<IP address of boot2docker host>:8444 to view the OpenShift Web Console in your browser. For example, the console is accessible at https://192.168.59.103:8444/ on this machine. (A quick sketch of these commands follows the list.)
    OpenShift Origin Browser Certificate

    You will need to have the browser accept the certificate at https://<host>:8444 before the console can consult the OpenShift API. Of course this would not be necessary with a legitimate certificate.

  2. OpenShift Origin login screen shows up. Enter the username/password as admin/admin:
    OpenShift Origin Login Screen

    and click on the “Log In” button. The default web console looks like:

    OpenShift v3 Web Console Default
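
A quick sketch of locating the console, assuming the default port shown above:

    # On the Mac host, find the IP address of the boot2docker VM
    boot2docker ip

    # Then open https://<that IP address>:8444/ in the browser, e.g.
    # https://192.168.59.103:8444/, and accept the self-signed certificate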

Create OpenShift v3 Project

  1. Use project.json from github.com/openshift/origin/blob/master/examples/sample-app/project.json in the OpenShift v3 container and create a test project as shown below (a sketch of the commands in this section follows the list):

    Refreshing the web console now shows:

    OpenShift Origin Test Project

    Clicking on “OpenShift 3 Sample” shows an empty project description:

    OpenShift v3 Empty Project

  2. Request creation of the application template:

  3. Web Console automatically refreshes and shows:

    OpenShift v3 Java EE 7 Default Project

    The list of services running can be seen as:

    OpenShift v3 Java EE 7 Project Services
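
The commands for this section are not embedded above; a plausible sequence, run from inside the OpenShift Origin container, is shown below. The project definition path assumes the Origin checkout mounted earlier, and the application template file name is a placeholder for the one shipped in the converted javaee7-hol repository:

    # 1. Create a test project from the sample-app project definition
    osc create -f /workspaces/openshift/origin/examples/sample-app/project.json

    # 2. Request creation of the application template: process it and create
    #    the resulting objects (the template file name is a placeholder)
    osc process -f /workspaces/openshift/javaee7-hol/<application-template>.json | osc create -f -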

Build the Project

  1. Trigger an initial build of your project (a sketch of the commands in this section follows the list):

  2. Monitor the builds and wait for the status to go to “complete” (this can take a few minutes):

    You can add the --watch flag to wait for updates until the build completes:

    Wait for the STATUS column to show Complete. It will take a few minutes as all the components (WildFly, MySQL, Java EE 7 application) are provisioned. Effectively, their new Docker images are created and pushed to the local registry that was started earlier.

    Hit Ctrl+C to stop watching builds after the status changes to Complete.

  3. Complete log of the build can be seen as:

  4. Check for the application pods to start:

    Note that the “frontend” and “database” pods are now running.

  5. Determine the IP address of the “frontend” service:

  6. Accessing the application at http://<IP address of “frontend”>:8080/movieplex7-1.0-SNAPSHOT should work. Note that the IP address may (most likely will) vary. In this case, it would be http://172.30.17.115:8080/movieplex7-1.0-SNAPSHOT. The app would not be accessible yet, as some further debugging is required to configure the firewall on Mac when OpenShift v3 is run as a Docker container. Until we figure that out, you can do docker ps in your boot2docker VM to see the list of all the containers:

    And then log in to the container associated with “frontend” as:

    This will log in to the Docker container where you can check that the application is deployed successfully by giving the following command:

    This will print the index.html page of the application, which has the license header at the top and the rest of the page after that.

    Once the firewall issue is resolved, this page will be accessible on the host Mac as well.
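
Again, the individual commands are not embedded above; the following is a plausible sequence. The build configuration name, build name, service IP, and container ID are all placeholders that depend on the template and on your environment:

    # 1. Trigger an initial build of the project
    osc start-build <build-config-name>

    # 2. Monitor the builds; --watch waits for updates until the build completes
    osc get builds
    osc get builds --watch

    # 3. See the complete log of a build
    osc build-logs <build-name>

    # 4. Check that the "frontend" and "database" pods are running
    osc get pods

    # 5. Determine the IP address of the "frontend" service
    osc get services

    # 6. From the boot2docker VM, find the "frontend" container, log into it,
    #    and verify that the application responds locally
    docker ps
    docker exec -it <frontend-container-id> bash
    curl http://localhost:8080/movieplex7-1.0-SNAPSHOT/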

Let’s summarize:

  • Cloned the OpenShift Origin and Java EE 7 sample repos
  • Started OpenShift v3 as Docker container
  • Loaded the OpenShift v3 Web Console
  • Created an OpenShift v3 project
  • Loaded Java EE 7 application template
  • Triggered a build, which deployed the application

Here are some troubleshooting tips if you get stuck.

Enjoy!