
Java EE, Docker and Maven (Tech Tip #89)

Java EE applications are typically built and packaged using Maven. For example, github.com/javaee-samples/javaee7-docker-maven is a trivial Java EE 7 application and shows the Java EE 7 dependency:
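
The exact dependency is in the sample's pom.xml; a minimal sketch of what the Java EE 7 dependency typically looks like (the sample may use the web profile API instead):

    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>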

And the two Maven plugins that compile the source and build the WAR file:
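
A sketch of how those two plugins are commonly configured for a Java EE 7 WAR without a web.xml (plugin versions are illustrative):

    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.1</version>
        <configuration>
            <source>1.7</source>
            <target>1.7</target>
        </configuration>
    </plugin>
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-war-plugin</artifactId>
        <version>2.4</version>
        <configuration>
            <failOnMissingWebXml>false</failOnMissingWebXml>
        </configuration>
    </plugin>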

This application can then be deployed to a Java EE 7 container, such as WildFly, using the wildfly-maven-plugin:
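
A minimal wildfly-maven-plugin configuration might look like this (version is illustrative); the application is then deployed with mvn wildfly:deploy:

    <plugin>
        <groupId>org.wildfly.plugins</groupId>
        <artifactId>wildfly-maven-plugin</artifactId>
        <version>1.0.2.Final</version>
    </plugin>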

Tests can be invoked using Arquillian, again using Maven. So if you were to package this application as a Docker image and run it inside a Docker container, there should be a mechanism to integrate this seamlessly into the Maven workflow.

Docker Maven Plugin

Meet docker-maven-plugin!

This plugin allows you to manage Docker images and containers from your pom.xml. It comes with predefined goals:

  Goal            Description
  docker:start    Create and start containers
  docker:stop     Stop and destroy containers
  docker:build    Build images
  docker:push     Push images to a registry
  docker:remove   Remove images from local docker host
  docker:logs     Show container logs

The plugin's Introduction provides a high-level overview, including building images, running containers, and configuration.

Run Java EE 7 Application as Docker Container using Maven

TLDR;

  1. Create and Configure a Docker Machine as explained in Docker Machine to Setup Docker Host
  2. Clone the workspace as: git clone https://github.com/javaee-samples/javaee7-docker-maven.git
  3. Build the Docker image as: mvn package -Pdocker
  4. Run the Docker container as: mvn install -Pdocker
  5. Find out IP address of the Docker Machine as: docker-machine ip mydocker
  6. Access your application

Docker Maven Plugin Configuration

Let's look a little deeper at our sample application.

pom.xml is updated to include docker-maven-plugin as:
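
The full configuration is in the sample's pom.xml; a sketch of the relevant pieces, using the org.jolokia coordinates of the plugin at the time (image name, base image, version, and ports are illustrative):

    <plugin>
        <groupId>org.jolokia</groupId>
        <artifactId>docker-maven-plugin</artifactId>
        <version>0.11.5</version>
        <configuration>
            <images>
                <image>
                    <name>javaee7-docker-maven</name>
                    <build>
                        <from>jboss/wildfly</from>
                        <assembly>
                            <descriptor>assembly.xml</descriptor>
                        </assembly>
                        <ports>
                            <port>8080</port>
                        </ports>
                    </build>
                    <run>
                        <ports>
                            <port>8080:8080</port>
                        </ports>
                    </run>
                </image>
            </images>
        </configuration>
        <executions>
            <execution>
                <id>docker:build</id>
                <phase>package</phase>
                <goals>
                    <goal>build</goal>
                </goals>
            </execution>
            <execution>
                <id>docker:start</id>
                <phase>install</phase>
                <goals>
                    <goal>start</goal>
                </goals>
            </execution>
        </executions>
    </plugin>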

Each image configuration has three parts:

  • Image name and alias
  • <build> that defines how the image is created. The base image, build artifacts and their dependencies, ports to be exposed, etc., to be included in the image are specified here. The assembly descriptor format is used to specify the artifacts to be included and is defined in the src/main/docker directory. assembly.xml in our case looks like the sketch shown after this list:
  • <run> that defines how the container is run. Ports that need to be exposed are specified here.
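
A sketch of what that assembly descriptor might contain, copying the generated WAR into the image using the standard maven-assembly-plugin descriptor format (the artifact coordinates are assumptions):

    <assembly>
        <id>javaee7-docker-maven</id>
        <dependencySets>
            <dependencySet>
                <includes>
                    <include>org.javaee7.sample:javaee7-docker-maven</include>
                </includes>
                <outputDirectory>.</outputDirectory>
                <outputFileNameMapping>javaee7-docker-maven.war</outputFileNameMapping>
            </dependencySet>
        </dependencySets>
    </assembly>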

In addition, the package phase is tied to the docker:build goal and the install phase is tied to the docker:start goal.

There are four docker-maven-plugins, and you can read more details in the shootout on which one serves your purpose best.

How are you creating your Docker images from existing applications?

Enjoy!

 

WildFly Swarm: Building Microservices with Java EE

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away

Antoine de Saint-Exupery

This quote by the French writer Antoine de Saint-Exupery captures the idea that less is often more. This holds true for architects, artists, designers, writers, runners, software developers, or any other profession. Simplicity, minimalism, and cutting down the cruft always go a long way and have several advantages over something bloated.

What is WildFly Swarm?

WildFly is a lightweight, flexible, feature-rich, Java EE 7 compliant application server. WildFly 9 even introduced a 27MB Servlet-only distribution. These are very well suited for your enterprise and web applications.

WildFly Swarm takes this a notch higher. From the announcement:

WildFly Swarm is a new sidecar project supporting WildFly 9.x to enable deconstructing the WildFly AS and pasting just enough of it back together with your application to create a self-contained executable jar.

WildFly Swarm

The typical application development model for a Java EE application is to create an EAR or WAR archive and deploy it in an application server. All the dependencies, such as Java EE implementations, are packaged in the application server and provide the functionality required by the application classes. Multiple archives can be deployed and they all share the same libraries. This is a well-understood model and has been used over the past several years.

WildFly Swarm turns the tables by creating a "fat jar" that has all the dependencies packaged in a single JAR file. This includes a minimalist version of WildFly, any required dependencies, and of course, the application code itself. The application can simply be run using java -jar.

Each fat jar could possibly be a microservice, which can then be independently upgraded, replaced, or scaled. Each fat jar would typically follow the single responsibility principle and thus have only the required dependencies packaged. Each JAR can use polyglot persistence and include only the persistence mechanism that is required.

Show me the code!

A Java EE application can be packaged as WildFly Swarm fat jar by adding a Maven dependency and a plugin. Complete source code for a simple JAX-RS sample is available at github.com/arun-gupta/wildfly-samples/tree/master/swarm.

WildFly Swarm Maven Dependency

Add the following Maven dependency in pom.xml:
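
A sketch of that dependency for a JAX-RS application; the artifact name and version are assumptions based on the 1.0.0.Alpha1 release:

    <dependency>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>wildfly-swarm-jaxrs</artifactId>
        <version>1.0.0.Alpha1</version>
    </dependency>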

WildFly Swarm Maven Plugin

Add the following Maven plugin in pom.xml:
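
A sketch of the plugin configuration; the goal name and version are assumptions based on the Alpha1 release:

    <plugin>
        <groupId>org.wildfly.swarm</groupId>
        <artifactId>wildfly-swarm-plugin</artifactId>
        <version>1.0.0.Alpha1</version>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>create</goal>
                </goals>
            </execution>
        </executions>
    </plugin>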

Create WildFly Swarm Fat Jar

The fat jar can be easily created by invoking the standard Maven target:
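
Presumably this is the usual packaging command:

    mvn clean package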

This generates a JAR file using the usual Maven conventions and appends -swarm at the end. The generated JAR file name in our sample is swarm-1.0-SNAPSHOT-swarm.jar.

The generated JAR is ~30MB, has 134 JARs (all in the m2repo directory), and 211 classes. The application code is bundled in app/swarm-1.0-SNAPSHOT.war.

Run WildFly Swarm Fat Jar

This fat jar can be run as:
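
Using the JAR generated above (in the standard target directory):

    java -jar target/swarm-1.0-SNAPSHOT-swarm.jar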

The response can be verified as:

WildFly Swarm Release blog refers to lots of blogs about Servlet, JAX-RS with ShrinkWrap, DataSource via Deployment, Messaging and JAX-RS, and much more.

WildFly Swarm Next Steps

This is only the 1.0.0.Alpha1 release, so feel free to try out the samples and give us feedback by filing an issue.

You have the power of all WildFly subsystems, and can even create an embeddable Java EE container as shown in the release blog:

Subsequent blogs will show how a microservice can be easily created using WildFly Swarm.

WildFly Swarm Stay Connected

You can keep up with the project through the WildFly HipChat room, @wildflyswarm on Twitter, or through GitHub Issues.

WildFly 9.0 CR1 is Released – HTTP/2, Intelligent Load Balancing, Graceful shutdown, Offline CLI, more

WildFly 9 Banner

WildFly 9.0 CR1 is now released, download from wildfly.org/downloads (zip)!

It builds upon the Java EE 7 compliance already achieved in earlier versions and adds the following features:

  • Intelligent load balancing
  • HTTP/2 and SPDY support
  • A new offline CLI mode
  • Server suspend, resume, and graceful shutdown
  • A new Servlet-only distribution

Release Notes provide complete details. 86 issues were fixed since 9.0.0 Beta 2.

How do you get started?

  • Download zip and unzip
  • Start as shown in the sketch after this list:
  • Access the main page at localhost:8080, as shown in the WildFly 9 Welcome screenshot.
  • Shutting down the server now shows the following log:
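
A sketch of the download-and-start commands from the first two steps; the archive name follows the 9.0.0.CR1 naming and may differ:

    unzip wildfly-9.0.0.CR1.zip
    cd wildfly-9.0.0.CR1
    ./bin/standalone.sh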

You can already configure this in JBoss Tools Alpha 2. It's currently not supported in NetBeans and has been filed as #252197.

Configure JRebel with Docker containers – Tech Tip #81

JRebel allows you to skip build and redeploy process by instantly deploying your application to the application server of your choice. It is supported in all the major IDEs such as NetBeans, Eclipse, and IntelliJ. It is also supported in a wide variety of application servers such as JBoss EAP, WildFly, WebLogic, WebsFear (err, WebSphere), Tomcat, and many others.

You can easily get started with JRebel in JBoss Developer Studio  or Integrate JRebel with JBoss on your local desktop. It can also be easily used with JBoss Developer Studio and Ticket Monster on OpenShift.

This Tech Tip will explain how to set up JRebel with Docker containers. Specifically, we'll use the sample application provided by the Java EE 7 Hands-on Lab (jrebel branch), JBoss Tools with Eclipse Mars M5, and run the sample application in a WildFly Docker container.

Many thanks to Adam Koblentz (@akoblentz) for helping me through the steps!

Let's get started!

Install JRebel in Eclipse

JRebel runs in three modes:

  • Local: App server is running from inside the IDE
  • External: App server is running from outside the IDE, such as using CLI, but on the same machine
  • Remote: App server is running on a different machine, VM, container, or cloud

Docker containers need to be configured using the “remote” mode.

  1. Install JBoss Tools as explained at tools.jboss.org/downloads/. JRebel's remote mode can only be enabled using the IDE. Install the JRebel plugin from Eclipse Marketplace.

Package rebel.xml and rebel-remote.xml with the WAR

These files define the location of classes and resources in your archive.

  1. Clone the Java EE 7 HOL repo (see the command sketch after this list):
  2. Import the Maven project (from the solution directory) in the IDE, right-click on the project, select the JRebel menu, and click on "Enable JRebel Nature". This will generate rebel.xml in the src/main/resources directory, and it will look something like:
  3. Right-click on the project again, and select "Enable Remoting". This generates rebel-remote.xml, again in the src/main/resources directory, and it will look like:

    This needs to be done on the machine where JRebel will be used in the IDE. This will ensure that the public key is generated appropriately.
  4. Package your application as

    This will package rebel.xml and rebel-remote.xml in the WAR file.
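
A sketch of the clone-and-package commands from steps 1 and 4; the repository URL, branch, and module path are assumptions:

    git clone https://github.com/javaee-samples/javaee7-hol.git -b jrebel
    cd javaee7-hol/solution/movieplex7
    mvn clean package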

Configure and Run the Application Server

The application server needs to know about the JRebel agent and the platform-specific library. Both of these files are available from Eclipse if JRebel was installed earlier. On Mac these files are in the eclipse/mars/m5/eclipse/plugins/org.zeroturnaround.eclipse.embedder_6.1.1.RELEASE-201503121801/jr6/jrebel/ directory. The exact name will very likely differ in your case.

  1. Build the image using the Dockerfile (a sketch is shown after this list):

    The key parts in this image are:

    1. Using the official jboss/wildfly Docker image
    2. Copying the JRebel agent and platform-specific library to the image
     3. Configuring the application server so that it knows about the "remote" mode and the platform-specific library
     4. Starting WildFly
     5. Downloading the pre-built WAR file from GitHub. This will not work for you, and you'll need to replace it with something like:

      This WAR file is the same that was generated earlier.
  2. Actually build the image as:
  3. Run the container as:

    and this should show something like:

    JRebel license information is a good sign that everything is configured properly.

    If you used Docker Machine to Setup Docker Host then the application should now be accessible at 192.168.99.100:8080/movieplex7/.
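
A sketch of the Dockerfile and the build/run commands referenced in the steps above; the JRebel file locations, the rebel.remoting_plugin property, and the image name are assumptions, and the WAR needs to be replaced with your own build:

    FROM jboss/wildfly

    # JRebel agent and platform-specific library copied from the Eclipse plugin directory
    COPY jrebel.jar /opt/jrebel/jrebel.jar
    COPY libjrebel64.so /opt/jrebel/lib/libjrebel64.so

    # Make WildFly load the JRebel agent and enable the remote (remoting) mode
    ENV JAVA_OPTS -agentpath:/opt/jrebel/lib/libjrebel64.so -Drebel.remoting_plugin=true -Djava.net.preferIPv4Stack=true

    # Deploy the WAR built earlier (replace with your own WAR)
    COPY movieplex7.war /opt/jboss/wildfly/standalone/deployments/

    # Start WildFly bound to all interfaces
    CMD ["/opt/jboss/wildfly/bin/standalone.sh", "-b", "0.0.0.0"]

Build and run it as:

    docker build -t wildfly-jrebel .
    docker run -it -p 8080:8080 wildfly-jrebel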

Configure Eclipse

Last step is to configure Eclipse so that it knows where the application is deployed.

  1. In Eclipse, right-click on the project, select JRebel, Advanced Properties
  2. Click on "Edit" next to "Deployment URLs"
  3. Click on Add and specify the URL of the application, 192.168.99.100:8080/movieplex7/ in our case.
  4. Click on Continue, Apply, OK.

Voila, the configuration is now complete.

Now changing any class, adding any method, updating any entity or HTML or JSF page will push the changes to the Docker container instantly. No need to redeploy the application.

Enjoy!

Deploy to WildFly and Docker from Eclipse – Tech Tip #79

Docker and WildFly Part 1 – Deployment via Volumes and Docker and WildFly Part 2 – Deployment over Management API show two approaches for configuring JBoss Tools to run any application on a WildFly server running as a Docker container.

The blogs provide detailed setup and the underlying background. This Tech Tip will provide a quick summary of how to deploy a Java EE 7 application to WildFly and Docker from Eclipse.

Let's get started!

Configure Docker

  1. Configure Docker on your machine using Docker Machine.
  2. Find the IP address as:

    and add an entry in /etc/hosts as:
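
A sketch of step 2, assuming the machine is named "mydocker" and "dockerhost" is the name used in /etc/hosts:

    docker-machine ip mydocker
    # e.g. 192.168.99.100

    # append an entry to /etc/hosts (host name is illustrative)
    echo "192.168.99.100  dockerhost" | sudo tee -a /etc/hosts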

Deployment to WildFly Container using Docker Volumes

  1. Create a folder that will be mounted as a volume in the WildFly Docker container. In this case, the folder is /Users/arungupta/tmp/deployments. The WildFly Docker container can then be started as shown in the sketch after this list:

    rw ensures that the Docker container can write to it.
  2. Create a new server adapter:
    WildFly Docker Server Adapter
  3. Assign or create a WildFly 8.x runtime:

    Docker WildFly Server Adapter

    Changed properties are highlighted.

  4. Setup the server properties as:

    Docker WildFly Adapter Properties

     

     Changed properties are highlighted. The two properties on the left are automatically propagated from the previous dialog. The two additional properties on the right side are required to keep the deployment scanners in sync with the server.

  5. Specify a custom deployment folder on Deployment tab of Server Editor:

    Docker WildFly Server Adapter

  6. Right-click on the newly created server adapter and click “Start”.

    Docker WildFly Server Synchronized

    Status quickly changes to “Started, Synchronized” as shown.

  7. Open up any Java EE 7 project (for example javaee7-simple-sample), right-click, select Run on Server, and choose this server. The project runs and displays the page:

    Docker Java EE 7 Output
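
A sketch of the docker run command from step 1, mounting the local folder into WildFly's deployment directory (paths follow the jboss/wildfly image layout):

    docker run -it -p 8080:8080 \
      -v /Users/arungupta/tmp/deployments:/opt/jboss/wildfly/standalone/deployments/:rw \
      jboss/wildfly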

 

Deployment to WildFly Container using Management API

  1. Run WildFly management image as:

    This is only a convenience image to reduce the number of steps required to get started. Dockerfile for this image has more details, including admin credentials.

     Volume mapping is not required in this case; instead, the additional management port is exposed.

  2. Configure a remote server controlled by management operations:Docker WildFly Remote Server Configuration

    Changed properties are highlighted.

  3. Take the defaults:

    Docker WildFly Remote System Integration

  4. Set up server properties by specifying the admin credentials (Admin#70365). Note, you need to delete the existing password and use this instead:

    Docker WildFly Admin Credentials

  5. Right-click on the newly created server adapter and click "Start". Status quickly changes to "Started, Synchronized" as shown.

    Docker WildFly Server Synchronized

  6. Open up any Java EE 7 project (for example javaee7-simple-sample), right-click, select Run on Server, and choose this server. The project runs and displays the page:

    Docker Java EE 7 Output

Enjoy!

This blog showed how to deploy a Java EE 7 application to WildFly and Docker from Eclipse.

Is there any other way that you deploy to WildFly Docker container from Eclipse?

Docker Machine to Setup Docker Host – Tech Tip #78

Running Docker containers typically involves three components:

  • Docker Client is a binary that accepts commands from the user and communicates back and forth with the host
  • Docker Daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers
  • Docker Registry is a SaaS platform for sharing and managing Docker images. Docker Hub is a public registry. Private registries can easily be set up as well, such as the one by Artifactory. More on this in a subsequent blog.

Docker Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the repository using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host.

Docker Architecture

 

In a typical development environment setup, the Docker Client and Host/Daemon are co-located on the same host machine. Even if they are on separate machines, you still need to log in to the host and set up the Docker Daemon for that OS.

Docker Machine takes you from zero-to-Docker on a host with a single command. This host could be your laptop, in the cloud, or in your data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

It downloads the boot2docker VM, sets up SSH keys, generates certificates, and starts the VM. It basically takes care of all the boring work so that you can focus on all the fun things.

This Tech Tip will show you how to get started with Docker Machine and use it to set up a Docker host on Mac. It does not work on Windows yet because of github.com/docker/machine/issues/742.

Let's get started!

Install Docker Machine

  1. Download the appropriate binary from docs.docker.com/machine/#installation. The binary for Mac can be downloaded as shown in the sketch after this list.
  2. Verify the installation as:
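
A sketch of the download-and-verify steps; the release version in the URL is an assumption:

    curl -L https://github.com/docker/machine/releases/download/v0.2.0/docker-machine_darwin-amd64 > /usr/local/bin/docker-machine
    chmod +x /usr/local/bin/docker-machine
    docker-machine -v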

Setup Mac Host using Docker Machine

  1. Docker Machine can be configured to use multiple drivers, such as Amazon Web Services, Google Compute Engine, Microsoft Azure, and Oracle VirtualBox. On a developer laptop, VirtualBox is a convenient option. VirtualBox 4.3.20 is the minimum requirement, so make sure you have the correct version installed.
  2. Create a Docker Host using the VirtualBox provider and call the machine "mydocker". Make sure ssh-keygen is in the PATH before invoking this command. On Mac, it is already available at /usr/bin/ssh-keygen. On Windows, it can be installed as part of Git Bash. This can be done as shown in the sketch after this list.

    This downloads boot2docker with the Docker daemon installed, and will create and start a VirtualBox VM with Docker running.
  3. Find IP address of the machine as:

    Note down this IP address, this will be used for accessing the application.
  4. Check the status of running machine as:

    The * in the ACTIVE column indicates this is an active host.
  5. Check the environment of newly created machine as:
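
A sketch of the commands behind steps 2 through 5, using the machine name "mydocker":

    docker-machine create --driver virtualbox mydocker
    docker-machine ip mydocker
    docker-machine ls
    docker-machine env mydocker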

Setup Docker client to Communicate

  1. Set up your client to talk to this host as:
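
Typically this is done by evaluating the environment printed in the previous step:

    eval "$(docker-machine env mydocker)"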

Run Java Application on Host

  1. Run Java EE 7 Application discussed in Java EE 7 Hands-on Lab on WildFly and Docker on this host as:
  2. Access the application at 192.168.99.101:8080/movieplex7/; it looks like the Docker Machine Output screenshot.

Docker Machine Commands

Complete list of Docker Machine commands can be seen as:

Learn more about Docker Machine, Swarm, and Compose in this video:

Why would you use anything else other than Docker Machine to setup Docker host? How do you setup Docker Host otherwise?

Some useful references …

  • Full documentation
  • GitHub Repo
  • Issues
  • #docker-machine on freenode

Enjoy!

Docker Compose to Orchestrate Containers – Tech Tip #77

Docker Orchestration using Fig showed how to define and control a multi-container service using Fig. Since then, Fig has been renamed to Docker Compose, or Compose for short.

The first release of Compose was announced recently.

From github.com/docker/compose

Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Docker Compose uses the same API used by other Docker commands and tools.

Docker Compose

This Tech Tip will rewrite Docker Orchestration using Fig blog to use Docker Compose. In other words, it will show how to run a Java EE 7 application that is deployed using MySQL and WildFly.

Let's get started!

Install Docker Compose

Install Compose as:
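
A sketch of the install command from the Compose documentation at the time; the release version in the URL is an assumption:

    curl -L https://github.com/docker/compose/releases/download/1.1.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
    docker-compose --version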

Docker Compose Configuration File

The entry point to Compose is docker-compose.yml. For now, the docker-compose tool also recognizes the fig.yml file name but shows the following message:

And if both fig.yml and docker-compose.yml are available in the directory then the following message is shown:

Use the same configuration file from the previous blog and rename it to docker-compose.yml:
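
The renamed file is not reproduced here; a sketch that matches the description below (image names and credentials are assumptions):

    mysqldb:
      image: mysql
      environment:
        MYSQL_DATABASE: sample
        MYSQL_USER: mysql
        MYSQL_PASSWORD: mysql
        MYSQL_ROOT_PASSWORD: supersecret
    mywildfly:
      image: arungupta/wildfly-mysql-javaee7
      links:
        - mysqldb:db
      ports:
        - 8080:8080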

This YML-based configuration file has:

  1. Two containers defined by the name “mysqldb” and “mywildfly”
  2. Image names are defined using “image”
  3. Environment variables for the MySQL container are defined in “environment”
  4. MySQL container is linked with WildFly container using “links”
  5. Port forwarding is achieved using “ports”

Start, Verify, Stop Docker Containers

  1. All the containers can be started, in detached mode, by giving the command (see the command sketch after this list):

    And that shows the output as:
  2. Verify the containers as:
  3. Logs for the containers can be seen as:

    And shows the output as:
  4. Find the IP address of the host as:

    And access the application as:

    To see the output as:

    Or in the browser as:

    Docker Compose Output

  5. Stop the containers as:

    to see the output as:
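
A sketch of the commands behind steps 1 through 5 (the machine name and application URL are assumptions):

    docker-compose up -d        # start all containers in detached mode
    docker-compose ps           # verify the containers
    docker-compose logs         # show the container logs
    docker-machine ip mydocker  # find the IP address of the host
    curl http://192.168.99.100:8080/   # illustrative URL; use your application's path
    docker-compose stop         # stop the containers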

Docker Compose Commands

The complete list of Docker Compose commands can be seen by typing docker-compose, which shows the output as:

A subsequent blog will likely play with scale command.

Help for each command is shown by typing -h after the command name. For example, help for run command is shown as:

Enjoy!

Build Binaries Only Once for Continuous Deployment

What is Build Binaries Only Once?

One of the fundamental principles of Continuous Delivery is Build Binaries Only Once, or BBOO for short. This means that the binary artifacts should be built once, and only once. These artifacts should then be stored in a repository manager, such as Nexus. Subsequent deploy, test, and release cycles should never attempt to build this binary again and should instead reuse it. This ensures that the exact same binary has gone through all the different test cycles and is delivered to the customer.

Binaries are often rebuilt during each testing phase from a specific tag of the workspace and considered to be the same. But they can still differ! They might turn out to be the same, but that is more incidental; more likely they are not the same because of different environment configurations. For example, the development team might be using JDK 8 on their machines while test/staging uses JDK 7. There are a multitude of reasons why the binary artifacts could differ. So it is essential to build binaries only once, store them in a repository, and take the same binary through the different test, staging, and production cycles. This increases the overall confidence level of delivery to the customer.

Build Binaries Only Once

This image shows how the binaries are built once during the Build stage and stored in the Nexus repository. Thereafter, the Deploy, Test, and Release stages only read the binary from Nexus.

The fact that dev, test, and staging environments differ is a different issue. And we’ll deal with that in a subsequent blog.

How do you setup Build Binaries Only Once?

For now, let's look at the setup:

  1. A Java EE 7 application WAR file is built once
  2. Store in a Nexus repository, or .m2 local repository
  3. Same binary is used for smoke testing
  4. Same binary is used for running full test suite

The smoke test in our case is just a single test, and the full suite has four tests. Hopefully this is not your typical setup in terms of the number of tests, but at least you get to see how to set everything up.

There are also only two stages of testing, smoke and full, but the concept can easily be extended to add other stages. A subsequent blog will show a full-blown deployment pipeline.

Let's get started!

  1. Check out a trivial Java EE 7 sample application from github.com/javaee-samples/javaee7-simple-sample. This is a typical Java EE application with REST endpoints, CDI beans, JPA entities.
  2. Set up a local Nexus repository and deploy a SNAPSHOT of the application to it, as shown in the command sketch after this list:

    By default, Nexus repository is configured on localhost:8081/nexus. Note down the host/port if you are using a different combination. Also note down the exact version number that is deployed to Nexus. By default, it will be 1.0-SNAPSHOT.

    You can also deploy a RELEASE to this Nexus repository as:

    Note down whether you deployed SNAPSHOT or RELEASE.

    In either case, you can also specify -P release Maven profile and sources and javadocs will be attached with the deployment. So if RELEASE is deployed as:

    Then sources and javadocs are also attached.

  3. Check out the test workspace from github.com/javaee-samples/javaee7-simple-sample-test. Make the following changes in this project:
     1. Change the nexus-repo property to match the host/port of the Nexus repository. If you used the default installation of Nexus and deployed a RELEASE, then nothing needs to be changed. By default, Nexus has one repository for SNAPSHOTs and another for RELEASEs. The workspace is configured to use the RELEASE repository. If you deployed a SNAPSHOT, then "releases" in nexus-repo needs to be changed to "snapshots" to point to the appropriate repository.
    2. Change javaee7-sample-app-version property to match the version of the application deployed to Nexus.
  4. Start WildFly and run smoke tests as:

    This will run all files ending in “SmokeTest”. ShrinkWrap and Arquillian perform the heavy lifting of resolving the WAR file from Nexus and using it for running the tests:

    Running the smoke tests will show the results as:

  5. Run the full tests as:

    This will run all files included in your test suite and will show the results as:

    In both cases, smoke tests and full tests are using the binary that is deployed to Nexus.
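
A sketch of the Maven commands behind steps 2, 4, and 5; the surefire test filter is an assumption:

    # deploy the binary to Nexus once; add -P release to attach sources and javadocs
    mvn clean deploy
    mvn clean deploy -P release

    # run only the smoke tests (classes ending in SmokeTest)
    mvn test -Dtest="*SmokeTest"

    # run the full test suite
    mvn test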

Learn more about your toolset for creating this simple yet powerful setup:

Arquillian, Nexus, and WildFly logos

 

Here are some other blogs coming in this series:

  • Use a CI server to deploy to Nexus
  • Run tests on WildFly running in a PaaS
  • Add static code coverage and code metrics in testing
  • Build a deployment pipeline

Enjoy!

OpenShift v3: Getting Started with Java EE 7 using WildFly and MySQL (Tech Tip #73)

OpenShift is Red Hat's open source PaaS platform. OpenShift v3 (due to be released this year) will provide a holistic experience for running your microservices using Docker and Kubernetes. In classic Red Hat fashion, all the work is done in the open at OpenShift Origin. This will also drive the next major release of OpenShift Online and OpenShift Enterprise.

OpenShift v3 uses a new platform stack built largely from community projects to which Red Hat contributes, such as Fedora, CentOS, Docker, Project Atomic, Kubernetes, and OpenStack. OpenShift v3 Platform Combines Docker, Kubernetes, Atomic and More explains this platform stack in detail.

OpenShift v3 Stack

This Tech Tip will explain how to get started with OpenShift v3. Let's get started!

Getting Started with OpenShift v3

Pre-built binaries for OpenShift v3 can be downloaded from Origin at GitHub. However the simplest way to get started is to run OpenShift Origin as a Docker container.

OpenShift Application Lifecycle provides complete details on what it takes to run a sample application from scratch. This blog will use those steps and adapt them to run using the boot2docker VM on Mac. In the process, we'll also deploy a Java EE 7 application on WildFly that accesses a database in a separate MySQL container.

Here is our deployment diagram:

OpenShift v3 WildFly MySQL Deployment Strategy

  • WildFly and MySQL are running on separate pods.
  • Each of them is wrapped in a Replication Controller to enable simplified scaling.
  • Each Replication Controller is published as a Service.
  • WildFly talks to the MySQL service, as opposed to directly to the pod. This is important as Pods, and IP addresses assigned to them, are ephemeral.

Let's get started!

Configure Docker Daemon

  1. Configure the docker daemon on your host to trust the docker registry service you'll be starting (a sketch of these edits is shown after this list). This registry will be used to push images for the build/test/deploy cycle.
    • Log into boot2docker VM as:
    • Edit the file

      This will be an empty file.
    • Add the following name/value pair:

      Save the file, and quit the editor.

    This will instruct the docker daemon to trust any docker registry on the 172.30.17.0/24 subnet.
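
A sketch of these edits, assuming the boot2docker profile file is used to pass extra arguments to the daemon (the file name and variable are assumptions):

    boot2docker ssh
    sudo vi /var/lib/boot2docker/profile

    # add this line, then save, quit, and restart the daemon
    EXTRA_ARGS="--insecure-registry 172.30.17.0/24"

    sudo /etc/init.d/docker restart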

Check out OpenShift v3 and Java EE 7 Sample

  1. Download and install Go, and set up the GOPATH and PATH environment variables. Check out the OpenShift Origin directory:

     Note the directory where it's checked out. In this case, it's ~/workspaces/openshift.

    Build the workspace:

  2. Check out javaee7-hol workspace that has been converted to a Kubernetes application:

    This is also done in ~/workspaces/openshift directory.

Start OpenShift v3 Container

  1. Start OpenShift Origin as Docker container:

    Note ~/workspaces/openshift directory is mounted as /workspaces/openshift volume in the container. Some additional volumes are mounted as well.

    Check that the container is running:

  2. Log into the container as:

  3. Install Docker registry in the container by giving the following command:

  4. Confirm that the registry is running by getting the list of pods:

     osc is the OpenShift Client CLI and allows you to create and manage OpenShift projects. Some of the kubectl commands can also be run using this script.

  5. Confirm the registry service is running. Note the actual IP address may vary:

  6. Confirm that the registry service is accessible:

    And look for the output:

Access OpenShift v3 Web Console

  1. The OpenShift Origin server is now up and running. Find out the host's IP address using boot2docker ip and open https://<IP address of boot2docker host>:8444 to view the OpenShift Web Console in your browser. For example, the console is accessible at https://192.168.59.103:8444/ on this machine.
    OpenShift Origin Browser Certificate

    You will need to have the browser accept the certificate at https://<host>:8444 before the console can consult the OpenShift API. Of course this would not be necessary with a legitimate certificate.

  2. OpenShift Origin login screen shows up. Enter the username/password as admin/admin:
    OpenShift Origin Login Screen

    and click on the “Log In” button. The default web console looks like:

    OpenShift v3 Web Console Default

Create OpenShift v3 Project

  1. Use project.json from github.com/openshift/origin/blob/master/examples/sample-app/project.json in the OpenShift v3 container and create a test project as:

    Refreshing the web console now shows:

    OpenShift Origin Test Project

    Clicking on “OpenShift 3 Sample” shows an empty project description:

    OpenShift v3 Empty Project

  2. Request creation of the application template:

  3. Web Console automatically refreshes and shows:

    OpenShift v3 Java EE 7 Default Project

    The list of services running can be seen as:

    OpenShift v3 Java EE 7 Project Services

Build the Project

  1. Trigger an initial build of your project:

  2. Monitor the builds and wait for the status to go to “complete” (this can take a few minutes):

     You can add the --watch flag to wait for updates until the build completes:

     Wait for the STATUS column to show Complete. It will take a few minutes as all the components (WildFly, MySQL, Java EE 7 application) are provisioned. Effectively, their new Docker images are created and pushed to the local registry that was started earlier.

    Hit Ctrl+C to stop watching builds after the status changes to Complete.

  3. Complete log of the build can be seen as:

  4. Check for the application pods to start:

    Note, that the “frontend” and “database” pods are now running.

  5. Determine IP of the “frontend” service:

  6. Accessing the application at http://<IP address of "frontend">:8080/movieplex7-1.0-SNAPSHOT should work. Note the IP address may (most likely will) vary. In this case, it would be http://172.30.17.115:8080/movieplex7-1.0-SNAPSHOT. The app will not be accessible yet, as some further debugging is required to configure the firewall on Mac when OpenShift v3 is used as a Docker container. Until we figure that out, you can do docker ps in your boot2docker VM to see the list of all the containers:

    And then login to the container associated with frontend as:

    This will log in to the Docker container where you can check that the application is deployed successfully by giving the following command: