Category Archives: javaee

Service Discovery with Java and Database application in Kubernetes

This blog will show how a simple Java application can talk to a database using service discovery in Kubernetes.


Service Discovery with Java and Database application in DC/OS explains why service discovery is an important aspect for a multi-container application. That blog also explained how this can be done for DC/OS.

Let’s see how this can be accomplished in Kubernetes with a single instance of an application server and a database server. This blog will use WildFly as the application server and Couchbase as the database.

This blog will use the following main steps:

  • Start Kubernetes one-node cluster
  • Kubernetes application definition
  • Deploy the application
  • Access the application

Start Kubernetes Cluster

Minikube is the easiest way to start a one-node Kubernetes cluster in a VM on your laptop. The binary needs to be downloaded first and then installed.

Complete installation instructions are available at github.com/kubernetes/minikube.

The latest release can be installed on OSX as:

It also requires kubectl to be installed. Installing and Setting up kubectl provides detailed instructions on how to set up kubectl. On OSX, it can be installed as:

Now, start the cluster as:

The kubectl version command shows more details about the kubectl client and minikube server version:

More details about the cluster can be obtained using the kubectl cluster-info command:
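
The exact commands were not captured above; as a rough sketch, a typical sequence on OSX at the time looked like the following (download URL, Homebrew formula, and versions are assumptions):

  # Install minikube (URL/version are assumptions; see the minikube releases page)
  curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
  chmod +x minikube && sudo mv minikube /usr/local/bin/

  # Install kubectl, e.g. via Homebrew
  brew install kubectl

  # Start the one-node cluster and verify it
  minikube start
  kubectl version
  kubectl cluster-info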

Kubernetes Application Definition

The application definition is at github.com/arun-gupta/kubernetes-java-sample/blob/master/service-discovery.yml. It consists of:

  • A Couchbase service
  • A Couchbase replica set with a single pod
  • A WildFly replica set with a single pod

The key part is that the value of the COUCHBASE_URI environment variable is the name of the Couchbase service. This allows the application deployed in WildFly to dynamically discover the service and communicate with the database.
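
One way to see this in action, once the pods are running, is to check the environment variable and DNS resolution from inside the WildFly pod. This is only a sketch; the actual pod and service names come from service-discovery.yml and are assumptions here:

  kubectl get pods                                                 # find the WildFly pod name
  kubectl exec <wildfly-pod> -- env | grep COUCHBASE_URI           # value is the Couchbase service name
  kubectl exec <wildfly-pod> -- getent hosts <couchbase-service>   # resolves via cluster DNS (if getent is in the image)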

The arungupta/couchbase:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/couchbase/Dockerfile.

The arungupta/wildfly-couchbase-javaee:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/Dockerfile.

The Java EE application waits for the database initialization to complete before it starts querying the database. This can be seen at github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java#L25.

Deploy Application

This application can be deployed as:

The list of services and replica sets can be shown using the command kubectl get svc,rs:

Logs for the single replica of Couchbase can be obtained using the command kubectl logs rs/couchbase-rs:

Logs for the WildFly replica set can be seen using the command kubectl logs rs/wildfly-rs:
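
The commands themselves are not shown above; a hedged sketch of the deploy-and-inspect sequence, assuming the file name from the repository linked earlier, is:

  kubectl create -f service-discovery.yml    # create the service and the two replica sets
  kubectl get svc,rs                         # list services and replica sets
  kubectl logs rs/couchbase-rs               # logs from the single Couchbase replica
  kubectl logs rs/wildfly-rs                 # logs from the WildFly replica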

Access Application

The kubectl proxy command starts a proxy to the Kubernetes API server. Let’s start a Kubernetes proxy to access our application:

Expose the WildFly replica set as a service using:

The list of services can be seen again using the kubectl get svc command:

Now, the application is accessible at:

A formatted output looks like:
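
The exact URL depends on the service name and the kubectl proxy path; a hedged sketch, assuming a service named wildfly-service and the couchbase-javaee REST path, is:

  kubectl expose rs wildfly-rs --port=8080 --name=wildfly-service   # expose the replica set as a service
  kubectl proxy &                                                   # proxy to the API server on localhost:8001
  curl http://localhost:8001/api/v1/namespaces/default/services/wildfly-service/proxy/airlines/resources/airline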

Now, new pods may be added as part of the Couchbase service by scaling the replica set. Existing pods may be terminated or rescheduled. But the Java EE application will continue to access the database service using its logical name.

This blog showed how a simple Java application can talk to a database using service discovery in Kubernetes.

For further information check out:

  • Kubernetes Docs
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on Couchbase Forums or Stack Overflow
  • Download Couchbase

Docker Services, Stack and Distributed Application Bundle


The first Release Candidate of Docker 1.12 was announced over two weeks ago. Several new features are planned for this release.

This blog will show how to create a Distributed Application Bundle from Docker Compose and deploy it as a Docker Stack in Docker Swarm Mode. Many thanks to @friism for helping me understand these concepts.

Let’s look at the features first:

  • Built-in orchestration: A typical application is defined using a Docker Compose file. This definition consists of multiple containers and is deployed on multiple hosts. This avoids a Single Point of Failure (SPOF) and keeps your application resilient. Multiple orchestration frameworks such as Docker Swarm, Kubernetes and Mesos allow you to orchestrate these applications. However, because this is such an important characteristic of the application, Docker Engine now has built-in orchestration. More details on this topic in a later blog.
  • Service: A replicated, distributed and load balanced service can be easily created using the docker service create command. A “desired state” of the application, such as “run 3 containers of Couchbase”, is provided, and the self-healing Docker Engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, containers on that node are started on a different node. More on this in a later blog.
  • Zero-configuration Security: Docker 1.12 comes with mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node participating in the swarm, out of the box. More on this in a later blog.
  • Docker Stack and Distributed Application Bundle: Distributed Application Bundle, or DAB, is a multi-services distributable image format. Read further for more details.

So far, you can take a Dockerfile and create an image from it using the docker build command. A container can be started using the docker run command. Multiple containers can be easily started by running that command multiple times. Or you can use a Docker Compose file and scale up your containers using the docker-compose scale command.


An image is a portable format for a single container. A Distributed Application Bundle, or DAB, is a new concept introduced in Docker 1.12: a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime.


Learn more about DAB at docker.com/dab.

For simplicity, here is an analogy that can be drawn:

Dockerfile -> Image -> Container

Docker Compose -> Distributed Application Bundle -> Docker Stack

Let’s use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.

It’s important to note that this is an experimental feature in 1.12-RC2.

Create a Distributed Application Bundle from Docker Compose

Docker Compose CLI adds a new bundle command. More details can be found:

Now, let’s take a Docker Compose definition and create a DAB from it. Here is our Docker Compose definition:

This Compose file starts a WildFly and a Couchbase server. A Java EE application is pre-deployed in the WildFly server; it connects to the Couchbase server and allows you to perform CRUD operations using the REST API.

The source for this file is at: github.com/arun-gupta/oreilly-docker-book/blob/master/hello-javaee/docker-compose.yml.

Generate an application bundle with it:
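
The bundle subcommand was experimental at the time; a minimal sketch, run from the directory containing docker-compose.yml, is:

  docker-compose bundle      # writes <project-name>.dsb in 1.12-RC2 (renamed to .dab in later RCs)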

depends_on only creates a dependency between two services and makes them start in a specific order. This only ensures that the Docker container is started; the application within the container may take longer to start. So this attribute only partially solves the problem. container_name gives a specific name to the container. Relying upon a specific container name is tight coupling and does not allow the container to scale. So both warnings can be ignored, for now.

This command generates a file using the Compose project name, which is the directory name. So in our case, hellojavaee.dsb file is generated. This file extension has been renamed to .dab in RC3.

The generated application bundle looks like:

This file provides a complete description of the services included in the application. I’m not entirely sure if Distributed Application Bundle is the most appropriate name; discuss this in #24250. It would be great if other container formats, such as rkt, or even VMs, could be supported here. But for now, Docker is the only supported format.

Initialize Swarm Mode in Docker

As mentioned above, “desired state” is now maintained by Docker Swarm. And this is now baked into Docker Engine already.

Docker Swarm concepts have evolved as well and can be read at Swarm mode key concepts. A more detailed blog on this will be coming later.

But for this blog, a new command docker swarm is now added:

Initialize a Swarm node (as a manager) in the Docker Engine:

More details about this node can be found using the docker node inspect self command.
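
A minimal sketch of these two commands, assuming a released Docker 1.12+ engine (the RC may have required extra flags):

  docker swarm init          # initialize this engine as a single-node Swarm manager
  docker node inspect self   # show details about the current node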

The detailed output is verbose but the relevant section is:

The output shows that the node is a manager. For a single-node cluster, this node will also act as a worker.

 

More details about the cluster can be obtained using the docker swarm inspect command.

AcceptancePolicy shows that other worker nodes can join this cluster, but a manager requires explicit approval.

Deploy a Docker Stack

Create a stack using the docker deploy command:

The command usage can certainly be simplified as discussed in #24249.

See the list of services:
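
A hedged sketch of the deploy and listing commands; the stack name is taken from the bundle file generated above, and the docker stack deploy form is the present-day equivalent rather than what the blog originally used:

  docker deploy hellojavaee      # experimental in 1.12-RC: deploys hellojavaee.dsb/.dab as a stack
  # On current Docker releases the equivalent is:
  # docker stack deploy -c docker-compose.yml hellojavaee
  docker service ls              # list the services created by the stack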

The output shows that two services, WildFly and Couchbase, are running. A service is also a new concept introduced in Docker 1.12. It is what captures the “desired state”, and Docker Engine works to maintain it.

docker ps shows the list of containers running:

The WildFly container starts before the Couchbase container is up and running. This means the Java EE application tries to connect to the Couchbase server and fails, so the application never boots successfully.

Self-healing Docker Service

Docker Service maintains the “desired state” of an application. In our case, the desired state is to ensure that one, and only one, container for the service is running. If we remove the container, not the service, then the service will automatically start the container again.

Remove the container as:

Note that you have to pass -f because the container is still running. Docker 1.12 self-healing mechanisms kick in and automatically restart the container. Now if you list the containers again:

This shows that a new container has been started.

Inspect the WildFly service:

Swarm assigns a random port to the service, or this can be updated manually using the docker service update command. In our case, port 8080 of the container is mapped to port 30004 on the host.
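
A sketch of how the published port can be inspected; the service name below (stack name plus Compose service name) is an assumption:

  docker service inspect hellojavaee_wildfly
  docker service inspect --format '{{json .Endpoint.Ports}}' hellojavaee_wildfly
  # e.g. PublishedPort 30004 -> TargetPort 8080, so the application is reachable on <host>:30004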

Verify the Application

Check that the application is successfully deployed:

Add a new book to the application:

Verify the books again:

Learn more about this Java EE application at github.com/arun-gupta/oreilly-docker-book/tree/master/hello-javaee.

This blog showed how to create a Distributed Application Bundle from Docker Compose and deploy it as Docker Stack in Docker Swarm Mode.

Docker Service and Stack References

  • Docker Service Create
  • FREE book from O’Reilly: Docker for Java Developers
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on @couchbasedev or Stackoverflow

Source: blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle

JBoss EAP 7 and NoSQL using Java EE and Docker

JBoss EAP 7 Beta is now released, many congratulations to Red Hat and particularly to the WildFly team!

There are plenty of improvements coming in this release as documented in Release Notes. One of the major themes is Java EE 7 compliance.

JBoss EAP 7 and Java EE 7

IBM and Oracle already provide commercially supported Java EE 7-compliant application servers, and now Red Hat will be joining this party soon as well. WildFly has supported Java EE 7 for more than two years, but commercial support is critical for open source to be adopted enterprise-wide. So this is good news!

You can learn all about different Java EE 7 APIs in the DZone Refcardz that I authored along with @alrubinger.


There are plenty of “hello world” Java EE 7 Samples that should all run with JBoss EAP. Hopefully somebody will update the pom.xml and add a new profile.

Why NoSQL?

If you are building a traditional enterprise application then you might be fine using an RDBMS. There are plenty of advantages to using an RDBMS, but a NoSQL database has a few advantages of its own:

  • No need for a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives you the flexibility to change the format of the data at any time without downtime or reduced service levels. Also, no joins happen on the server because there is no fixed structure and thus no relations between tables.
  • Scalability, agility and performance matter more than the entire set of functionality typically provided by an RDBMS. This class of databases provides eventual consistency and/or transactions restricted to single items, with more focus on CRUD.
  • NoSQL databases are designed to scale out (horizontally) instead of scale up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. An RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart. Queries requiring JOINs across shards are extremely inefficient.
  • RDBMSs have an impedance mismatch between the database structure and the domain classes. An Object Relational Mapping, such as the one provided by the Java Persistence API or Hibernate, is needed in that case.
  • NoSQL databases are designed for less management, and their simpler data models lead to lower administration costs as well.

So you are all excited about NoSQL now and want to learn more:

  • Why NoSQL?
  • Why do successful enterprises rely on NoSQL?
  • Top 10 Enterprise NoSQL Usecases

In short, there are four different types of NoSQL databases:

  • Document: Couchbase, Mongo, and others
  • Key/Value: Couchbase, Redis, and others
  • Graph: Neo4J, OrientDB, and others
  • Column: Cassandra and others

Java EE 7 provides the Java Persistence API, which does not offer any support for NoSQL. So how do you get started with NoSQL on JBoss EAP 7?

This blog will show how to query a Couchbase database using a simple Java EE application deployed on JBoss EAP 7 Beta.

What is Couchbase?

Couchbase is an open-source, NoSQL, document database. It allows you to access, index, and query JSON documents while taking advantage of integrated distributed caching for high-performance data access.

Developers can write applications for Couchbase in different languages (Java, Go, .NET, Node, PHP, Python, C) using multiple SDKs. This blog will show how you can easily create a CRUD application using the Java SDK for Couchbase.

Run JBoss EAP 7

There are two ways to start JBoss EAP 7.

Download and Run

  • Download JBoss EAP 7 Beta and unzip.
  • Start the application server as:
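
A minimal sketch of the download-and-run steps (archive and directory names are assumptions and depend on the exact build you download):

  unzip jboss-eap-7.0.0.Beta.zip
  ./jboss-eap-7.0/bin/standalone.sh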

Docker Run

In a containerized world, you just docker run your JBoss EAP instance. However, a JBoss EAP image does not exist on Docker Hub, so the image needs to be built explicitly. You still need to download JBoss EAP and then use the following Dockerfile to build the image:

The image is built as:

And then you can run the JBoss EAP 7 container as:
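
A hedged sketch of the build and run commands; the image tag is arbitrary, and it is assumed the Dockerfile’s CMD starts standalone.sh bound to all interfaces (-b 0.0.0.0 -bmanagement 0.0.0.0):

  docker build -t jboss-eap7 .                          # build the image from the Dockerfile above
  docker run -it -p 8080:8080 -p 9990:9990 jboss-eap7   # expose the application and management ports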

Notice how the application and management ports are bound to all network interfaces. This will simplify deploying the application to this JBoss EAP instance later.

Stop the server as we will show an easier way to start it later.

Start Application Server and Database

The Java EE application will provide an HTTP CRUD interface over JSON documents stored in Couchbase. The application itself will be deployed on JBoss EAP 7 Beta. So both Couchbase and JBoss EAP need to be started.

Use the Docker Compose file from github.com/arun-gupta/docker-images/blob/master/jboss-eap7-nosql/docker-compose.yml to start Couchbase and JBoss EAP 7 container:

The application is started as:

The started containers can be seen as:
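
A minimal sketch, assuming the Compose file has been saved locally as docker-compose.yml:

  docker-compose up -d    # start the Couchbase and JBoss EAP services in the background
  docker ps               # verify that both containers are running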

Configure Couchbase Server

Clone the couchbase-javaee application. This Java EE application uses the Couchbase Java SDK APIs to connect to the Couchbase server. The bootstrap code is:

and is invoked from Database abstraction.

Couchbase Server can be configured using its REST API. These REST calls are defined in a Maven profile in the application’s pom.xml. So configure the Couchbase server as:
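
A hedged sketch of the Maven invocation; the profile and property names are assumptions based on the couchbase-javaee sample:

  mvn install -Pcouchbase -Ddocker.host=<docker-host-ip>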

Deploy Java EE Application to JBoss

The Java EE application can be easily deployed to JBoss EAP 7 Beta using the WildFly Maven Plugin. This is defined as a Maven profile in pom.xml as well.

Deploy the application as:
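
A hedged sketch using the WildFly Maven Plugin’s standard properties; the exact profile name and credentials are assumptions:

  mvn install -Pwildfly \
      -Dwildfly.hostname=<docker-host-ip> \
      -Dwildfly.username=<admin-user> \
      -Dwildfly.password=<admin-password>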

Access the Application

As mentioned earlier, the application provides an HTTP CRUD API over JSON documents stored in Couchbase.

Access the application as:

CRUD operations (GET, POST, PUT, DELETE) can be performed on the Airline resource in the application. The complete CRUD API is documented at github.com/arun-gupta/couchbase-javaee.
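
For illustration, a couple of hedged curl invocations; the host, context root, and payload fields are assumptions based on the couchbase-javaee sample:

  # GET the list of airlines
  curl -v http://<docker-host-ip>:8080/airlines/resources/airline

  # POST a new airline (field names are illustrative)
  curl -v -X POST -H "Content-Type: application/json" \
       -d '{"name":"My Airline","country":"United States"}' \
       http://<docker-host-ip>:8080/airlines/resources/airline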

This blog explained how to access a NoSQL database from JBoss EAP 7.

Read more about Couchbase 4:

  • What’s New in Couchbase Server 4.1
  • Couchbase Server documentation
  • Talk to us on Couchbase Forums
  • Follow @couchbasedev or @couchbase

Learn more about Couchbase in this recent developer-focused webinar:

Docker Machine, Swarm and Compose for multi-container and multi-host applications with Couchbase and WildFly

This blog will explain how to create a multi-container application deployed on multiple hosts using Docker. This will be achieved using Docker Machine, Swarm and Compose.

Yes, all three tools together make this blog that much more interesting!

Docker Swarm Machine Compose

The diagram explains the key components:

  • Docker Machine is used to provision multiple Docker hosts
  • Docker Swarm will be used to create a multi-host cluster
  • Each node in Docker Swarm cluster is registered/discovered using Consul
  • Multi-container application will be deployed using Docker Compose
  • WildFly and Couchbase are provisioned on different hosts
  • Docker multi-host networking is used for WildFly and Couchbase to communicate

In addition, Maven is used to configure Couchbase and deploy application to WildFly.

Latest instructions at Docker for Java Developers.

No story, just pure code; let’s do it!

Create Discovery Service using Docker Machine

  1. Create a Machine that will host discovery service:
  2. Connect to this Machine:
  3. Run Consul service using the following Compose file:
    This Compose file is available at https://github.com/arun-gupta/docker-images/blob/master/consul/docker-compose.yml.
    Started container can be verified as:
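
A hedged sketch of these three steps, assuming the Compose file above has been downloaded into the current directory:

  docker-machine create -d virtualbox consul-machine     # Machine that hosts the discovery service
  eval "$(docker-machine env consul-machine)"            # point the Docker CLI at it
  docker-compose up -d                                   # start Consul from the Compose file
  docker ps                                              # verify the container is running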

Create Docker Swarm Cluster using Docker Machine

Swarm is fully integrated with Machine, and so this is the easiest way to get started.

  1. Create a Swarm Master and point to the Consul discovery service:
    A few options to look at here (a consolidated sketch of these commands follows this list):

    1. --swarm configures the Machine with Swarm
    2. --swarm-master configures the created Machine to be the Swarm master
    3. --swarm-discovery defines the address of the discovery service
    4. --cluster-advertise advertises the machine on the network
    5. --cluster-store designates a distributed k/v storage backend for the cluster
    6. --virtualbox-disk-size sets the disk size for the created Machine to 5GB. This is required so that the WildFly and Couchbase images can be downloaded on any of the nodes.
  2. Find some information about this machine:
    Note that the disk size is 5GB.
  3. Connect to the master by using the command:
  4. Find some information about the cluster:
  5. Create a new Machine to join this cluster:
    Notice that no --swarm-master is specified in this command. This ensures that the created Machines are worker nodes.
  6. Create a second Swarm node to join this cluster:
  7. List all the created Machines:
    The machines that are part of the cluster have the cluster’s name in the SWARM column, blank otherwise. For example, consul-machine is a standalone machine whereas all other machines are part of the swarm-master cluster. The Swarm master is also identified by (master) in the SWARM column.
  8. Connect to the Swarm cluster and find some information about it:

    Note that --swarm is specified to connect to the Swarm cluster. Otherwise the command will connect to the swarm-master Machine only.

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm worker nodes. There is a total of 4 containers running in this cluster – one Swarm agent on the master and on each node, and an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers.

  9. List nodes in the cluster with the following command:
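
As promised above, here is a consolidated, hedged sketch of the Machine commands for this cluster; the Consul address, network interface, and disk size are assumptions that follow the options described in step 1:

  CONSUL="consul://$(docker-machine ip consul-machine):8500"

  # Swarm master
  docker-machine create -d virtualbox \
    --swarm --swarm-master \
    --swarm-discovery="$CONSUL" \
    --cluster-store="$CONSUL" \
    --cluster-advertise="eth1:2376" \
    --virtualbox-disk-size "5000" \
    swarm-master

  # Worker nodes: same flags minus --swarm-master
  for node in swarm-node-01 swarm-node-02; do
    docker-machine create -d virtualbox \
      --swarm \
      --swarm-discovery="$CONSUL" \
      --cluster-store="$CONSUL" \
      --cluster-advertise="eth1:2376" \
      --virtualbox-disk-size "5000" \
      "$node"
  done

  docker-machine ls                                   # list all Machines
  eval "$(docker-machine env --swarm swarm-master)"   # connect to the Swarm cluster
  docker info                                         # nodes and containers in the cluster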

Start Application Environment using Docker Compose

Make sure you are connected to the cluster by giving the command eval "$(docker-machine env --swarm swarm-master)".

  1. List all the networks created by Docker so far:
    Docker creates three networks on each host automatically:

    • bridge: The default network that containers connect to. This is the docker0 network in all Docker installations.
    • none: A container-specific networking stack.
    • host: Adds a container to the host’s networking stack. Network configuration is identical to the host.

    This explains a total of nine networks, three for each node, as shown in this Swarm cluster.

  2. Use Compose file to start WildFly and Couchbase:

    In this Compose file:

    1. Couchbase service has a custom container name defined by container_name. This name is used when creating a new environment variable COUCHBASE_URI during WildFly startup.
    2. arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces, and in addition also exposes port 9990. This enables the WildFly Maven Plugin to be used to deploy the application. The source for this file is at https://github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml.

    This application environment can be started as:

    --x-networking creates an overlay network for the Swarm cluster. This can be verified by listing networks again:

    Three new networks are created:

    1. Containers connected to the multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.
    2. A new overlay network wildflycouchbasejavaee7 is created. Connect to different Swarm nodes and check that the overlay network exists on them.

      Let’s begin with the master:

      Next, with swarm-node-01:

      Finally, with swarm-node-02:

      As seen, the wildflycouchbasejavaee7 overlay network exists on all Machines. This confirms that the overlay network created for the Swarm cluster was added to each host in the cluster. docker_gwbridge only exists on Machines that have application containers running.

      Read more about Docker Networks.

  3. Verify that WildFly and Couchbase are running:

Configure Application and Database

  1. Clone https://github.com/arun-gupta/couchbase-javaee.git. This workspace contains a simple Java EE application that is deployed on WildFly and provides a REST API over the travel-sample bucket in Couchbase.
  2. The Couchbase server can be configured using its REST API. The application contains a Maven profile that configures the Couchbase server with the travel-sample bucket. This can be invoked as:
  3. Deploy the application to WildFly by specifying three parameters:
    1. Host IP address where WildFly is running
    2. Username of a user in WildFly’s administrative realm
    3. Password of the user specified in WildFly’s administrative realm

Access Application

Now that WildFly and the Couchbase server have started, let’s access the application. You need to specify the IP address of the Machine where WildFly is running:

The complete set of REST APIs for this application is documented at github.com/arun-gupta/couchbase-javaee.

Latest instructions at Docker for Java Developers.

Enjoy!

Docker Networking with Couchbase and WildFly

Docker Multi-Host networking allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. This blog will show how to use it with Docker Compose.

CRUD Java Application with Couchbase, Java EE, and WildFly explained how to use a Java EE application to provide a CRUD/REST interface on a data bucket in Couchbase. It required manually downloading and running WildFly. That blog also ran the Couchbase server using Docker and required manual configuration to load the travel-sample bucket.

Configure Couchbase Docker Container using REST API explained how to use Couchbase REST API to configure Couchbase Server.

Docker Multi-Host Networking

This blog will remove the explicit download of WildFly and manual configuration of Couchbase server:

  • Use Docker Compose to start WildFly and Couchbase (no download required)
  • Use a Maven profile to configure the Couchbase server (no manual configuration required)
  • Use Docker multi-host networking so that WildFly and the Couchbase server can talk to each other

Let’s get started!

Start Couchbase and WildFly using Docker Multi-Host Networking and Compose

  1. Start WildFly and Couchbase server using docker-compose.yml file from github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml:
    arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces, and in addition also exposes port 9990. This enables WildFly Maven Plugin to be used to deploy the application.

    container_name is specified for the Couchbase service and referenced in the WildFly service using COUCHBASE_URI. This is then used to connect to Couchbase from the Java EE application.

    The application environment is started as:

    --x-networking is an experimental switch added in Docker Compose 1.9 that allows you to create a bridge or an overlay network. By default, it creates a bridge network that works on a single host. The network created can be seen as:

    Issue 2221 provides more explanation about the default networks created. wildflycouchbasejavaee7 is the new bridge network created for our application. Issue #2345 provides some details about the incorrect driver name in the output message.

Configure Couchbase Server

  1. Clone couchbase-javaee repo:

  2. Configure Couchbase server:

    The exec-maven-plugin is used to invoke the REST API and configure the Couchbase server, and it is configured in a Maven profile. Make sure to set the docker.host property in pom.xml.

  3. Deploy the application to WildFly:

    Make sure to specify the correct host on the CLI. In this case, this is the IP address obtained using docker-machine ip default.

Invoke the Application

  1. Invoke the REST endpoint using cURL:

    The complete set of REST endpoints is documented at CRUD Java Application with Couchbase, Java EE and WildFly. They are listed here for convenience:

    1. GET a single airline:
    2. Create a new airline using POST:

    3. Update an existing airline using PUT:
    4. Delete an existing airline using DELETE:

Enjoy!

CRUD Java Application with Couchbase, Java EE and WildFly

Couchbase is an open-source, NoSQL, document database. It allows you to access, index, and query JSON documents while taking advantage of integrated distributed caching for high-performance data access.

Developers can write applications for Couchbase in different languages (Java, Go, .NET, Node, PHP, Python, C) using multiple SDKs. This blog will show how you can easily create a CRUD application using the Java SDK for Couchbase.

REST with Couchbase

The application will use curl to issue REST commands to a JAX-RS endpoint deployed on WildFly. These commands will then perform CRUD operations on the travel-sample bucket in Couchbase. N1QL (a SQL-like query language for JSON) will be used to communicate with Couchbase to retrieve results. Both the “builder pattern” and raw N1QL statements will be used.

Couchbase CRUD using WildFly and Curl

TL;DR

Complete source code and instructions for the sample are available at github.com/arun-gupta/couchbase-javaee.

Let’s get started!

Run Couchbase Server

The Couchbase server can be easily downloaded from the Couchbase Server Downloads page. In a containerized world, it’s a lot easier to fire up a Couchbase server using Docker.

If Docker is configured on your machine then the easiest way is to use Docker Compose for Couchbase:

Starting up the Couchbase server shows:

And then the logs can be seen as:

The database needs to be configured and is explained at Configure Couchbase Server. Make sure to install travel-sample bucket.

Deploy the Java EE Application on WildFly

  • Download WildFly 9.0.2, unzip, and start the WildFly application server as ./wildfly-9.0.2.Final/bin/standalone.sh.
  • Git clone the repo: git clone https://github.com/arun-gupta/couchbase-javaee.git
  • Change directory cd couchbase-javaee
  • Deploy the application to WildFly: mvn install -Pwildfly.

The application uses Java SDK for Couchbase by importing the following Maven coordinates:

Invoke the REST Endpoints Using cURL

GET Airline resources (limit to 10)

Let’s query the database to list 10 Airline resources.

Request

Response

The N1QL query for this is written as:

It can also be written alternatively as:

You may optionally update the code to include an ORDER BY clause, as shown in the N1QL Tutorial.
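
As a rough illustration, equivalent queries can be issued directly against the N1QL REST endpoint; the host and the type = 'airline' predicate are assumptions based on the travel-sample data model:

  curl http://<docker-host-ip>:8093/query/service \
       --data-urlencode "statement=SELECT * FROM \`travel-sample\` WHERE type = 'airline' LIMIT 10"

  # Variant with an ORDER BY clause
  curl http://<docker-host-ip>:8093/query/service \
       --data-urlencode "statement=SELECT * FROM \`travel-sample\` WHERE type = 'airline' ORDER BY name LIMIT 10"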

GET one Airline resource

Use the id attribute to query a single Airline resource.

Request

Response

POST a new Airline resource

Learn how to run N1QL queries from the CLI using the cbq tool and verify the existing sample data:

This query retrieves documents where the airline’s name is Airlinair. The count is shown in metrics.resultCount.
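
The same check can be made without the interactive shell by calling the N1QL REST endpoint; the host is an assumption:

  curl http://<docker-host-ip>:8093/query/service \
       --data-urlencode "statement=SELECT COUNT(*) AS count FROM \`travel-sample\` WHERE name = 'Airlinair'"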

Create a new document using POST.

Request

Response

Query again using cbq, and now the results are shown as:

Note that two JSON documents are returned instead of the one returned before the POST command was issued.

PUT an existing Airline resource

Update an existing resource using HTTP PUT.

The data model for the travel-sample bucket requires the “id” attribute to be included in the payload as well as in the URI.

Request

The name of the airline is updated from “Airlinair” to “Airlin Air”; all other attributes stay the same.

Response

The updated record is shown in the response.

Querying for Airlinair gives:

So the previously added record is now updated and thus does not appear in the query results. Querying for “Airlin Air” gives:

This shows the newly updated document.

DELETE an existing Airline resource

Query for a unique id:

Notice that one document is returned.

Let’s delete this document.

Request

Response

The deleted document is shown in the response.

Query again for the deleted id:

And no results are returned!

As mentioned earlier, the complete code base is at github.com/arun-gupta/couchbase-javaee.

Enjoy!

Kubernetes Application – Package Multiple Resources Together

Deploying an application in Kubernetes requires creating multiple resources such as Pods, Services, Replication Controllers, and others. Typically each resource is defined in its own configuration file and created using the kubectl CLI. But if multiple resources need to be created then you have to invoke kubectl multiple times. So if you need to create the following resources:

  • MySQL Pod
  • MySQL Service
  • WildFly Replication Controller

Then the commands would look like:

Or, for convenience, wrap these invocations in a shell script. But that is not very intuitive! There is a better, more natural and intuitive way.

Kubernetes allows multiple resources to be specified in a single configuration file. This makes it easy to create a “Kubernetes application” that consists of multiple resources.

The previous section showed how to deploy the Java EE application using multiple configuration files. This application can be deployed using a single configuration file as well.

An application, as discussed above, consisting of MySQL Pod, MySQL Service, and WildFly Replication Controller can be created using the following configuration file:

Notice that each section, one each for the MySQL Pod, MySQL Service, and WildFly Replication Controller, is separated by ---.

Such an application can be created as:
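
For comparison, a hedged sketch of the two approaches (file names are illustrative):

  # One kubectl invocation per resource ...
  kubectl create -f mysql-pod.yml
  kubectl create -f mysql-service.yml
  kubectl create -f wildfly-rc.yml

  # ... or a single invocation with all three resources in one file, separated by ---
  kubectl create -f javaee-app.yml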

Complete details about how to set up Kubernetes and run this application are available at github.com/arun-gupta/kubernetes-java-sample/#kubernetes-application.

More details about creating a Kubernetes application with multiple resources can be found in #12104.

You can learn about how to create Kubernetes resources for a Java application, or otherwise, at github.com/arun-gupta/kubernetes-java-sample/.

JBoss EAP gives 509% ROI over closed source application servers

A new study by IDC shows how Red Hat JBoss EAP customers are benefiting significantly compared with closed source commercial application servers:


The study describes a common paradigm among JBoss EAP customers three years ago …

There was uncertainty about whether JBoss EAP would scale as well as more expensive options and whether JBoss EAP was as feature rich, particularly for high-end projects. Those concerns prevented IT operations from going all in on a single software standard. Despite that, customers were pleased with the benefits they were able to achieve from their use of JBoss EAP, and customers were able to achieve an impressive return on investment by adopting this approach.

Sound familiar?
Does your company still think like that?
Do the closed source vendors still give you that pitch?

The study shares how customers’ perspectives have changed over these years …

Today, we’ve found that JBoss EAP customers are more systematic in their use of JBoss EAP or OpenShift by Red Hat as the standard application server or cloud application platform within a standardized environment. Customers are no longer worried that JBoss EAP is not as sophisticated as more expensive alternatives and now believe it’s at performance parity with its competitors.

There are much more fundamental benefits for developing in open source:

One customer said that because JBoss EAP comes from open source, it is built from a lot of good ideas from the community and the internal design is architected to make it simpler to use than non-community-based alternatives.

That’s why we love Community Powered Innovation!

The cost benefit cannot be overemphasized:

the cost benefit associated with JBoss EAP gave them an affordable opportunity to standardize, whereas they would have been too cost challenged using other options.

And the customers are saying:

We did a cost-benefit analysis. Many of our applications need a development platform for multiple environments … and if we compare JBoss EAP with other solutions … it’s a no-brainer. Night and day

Are customers using it for only new projects? Or migrating their existing mission critical projects to JBoss EAP as well?

Three years ago …

Then it was enough to begin using JBoss EAP for new projects.

And now …

we’ve found that customers made the decision to migrate production applications to the new environment in order to gain speed and compliance benefits from standardizing application operations and change management.

Register and download the report now!

What are you waiting for?

Download JBoss EAP (Java EE 6 compliant) and get full commercial support or WildFly 10 Beta1 to try Java EE 7 and lots of other cool features!

Getting Started with ELK Stack on WildFly

Your typical business application consists of a variety of servers such as WildFly, MySQL, Apache, ActiveMQ, and others. Each has its own log format, with minimal to no consistency across them. A log statement typically consists of some sort of timestamp (which can vary widely) and some text information. Logs can be multi-line. If you are running a cluster of servers then these logs are decentralized, in different directories.

How do you aggregate these logs? Provide a consistent visualization over them? Make this data available to business users?

This blog will:

  • Introduce ELK stack
  • Explain how to start it
  • Start a WildFly instance to send log messages to the ELK stack (Logstash)
  • View the messages using ELK stack (Kibana)

What is ELK Stack?

The ELK stack provides a powerful platform to index, search and analyze your data. It uses Logstash for log aggregation, Elasticsearch for searching, and Kibana for visualizing and analyzing data. In short, the ELK stack:

  • Collect logs and events data (Logstash)
  • Make it searchable in fast and meaningful ways (Elasticsearch)
  • Use powerful analytics to summarize data across many dimensions (Kibana)


Logstash is a flexible, open source data collection, enrichment, and transportation pipeline.

Logstash


Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.

Elasticsearch


Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics.

Kibana

How does ELK Stack work?

Logstash can collect logs from a variety of sources (using input plugins), process the data into a common format using filters, and stream data to a variety of sources (using output plugins). Multiple filters can be chained to parse the data into a common format. Together, they build a Logstash Processing Pipeline.

Logstash Processing Pipeline

Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.

Logstash can then store the data in Elasticsearch and Kibana provides a visualization of that data. Here is a sample pipeline that can collect logs from different servers and run it through the ELK stack.

ELK Stack

Start ELK Stack

You can download the individual components of the ELK stack and start them that way. There is plenty of advice on how to configure these components. But I like to start with KISS, and Docker makes it easy to KISS!

All the source code on this blog is at github.com/arun-gupta/elk.

  1. Clone the repo:
  2. Run the ELK stack:
    This will use the pre-built Elasticsearch, Logstash, and Kibana images. It is built upon the work done in github.com/nathanleclaire/elk.

    docker ps will show the output as:

    It shows all the containers running.
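
A minimal sketch of these two steps, using the repository mentioned above:

  git clone https://github.com/arun-gupta/elk.git
  cd elk
  docker-compose up -d     # starts the Elasticsearch, Logstash, and Kibana containers
  docker ps                # shows the three containers running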

WildFly and ELK

James (@the_jamezp) blogged about Centralized Logging for WildFly with ELK Stack. The blog explains how to configure WildFly to send log messages to Logstash. It uses the highly modular nature of WildFly to install the jboss-logmanager-ext library as a module. The configured log manager adds a @timestamp field to the log messages sent to Logstash. These log messages are then sent to Elasticsearch.

Instead of following those steps, let’s Docker KISS and use a pre-configured image to get you started.

Start the image as:

Make sure to substitute <DOCKER_HOST_IP> with the IP address of your Docker host. This can be easily found using docker-machine ip <MACHINE_NAME>.

View Logs using ELK Stack

Kibana runs on an embedded nginx and is configured to run on port 80 in docker-compose.yml. Let’s view the logs using that.

  1. Access http://<DOCKER_HOST_IP> in your browser and it should show the default page. The @timestamp field was created by the log manager configured in WildFly.
  2. Click on Create to create an index pattern and select the Discover tab to view the logs.

Try connecting other sources and enjoy the power of distributed logging consolidated by ELK!

Some more references …

  • Logstash docs
  • Kibana docs
  • Elasticsearch The Definitive Guide

Distributed logging and visualization is a critical component in a microservices world where multiple services would come and go at a given time. A future blog will show how to use ELK stack with a microservices architecture based application.

Enjoy!

Multi-container Applications using Docker Compose and Swarm

Docker Compose to Orchestrate Containers shows how to run two linked Docker containers using Docker Compose. Clustering Using Docker Swarm shows how to configure a Docker Swarm cluster.

This blog will show how to run a multi-container application created using Docker Compose in a Docker Swarm cluster.

Updated versions of Docker Compose and Docker Swarm were released with Docker 1.7.0.

Docker 1.7.0 CLI

Get the latest Docker CLI:

and check the version as:

Docker Machine 0.3.0

Get the latest Docker Machine as:

and check the version as:

Docker Compose 1.3.0

Get the latest Docker Compose as:

and verify the version as:

Docker Swarm 0.3.0

Swarm is run as a Docker container and can be downloaded as:

You can learn about Docker Swarm at docs.docker.com/swarm or Clustering using Docker Swarm.

Create Docker Swarm Cluster

The key components of Docker Swarm are shown below:

and explained in Clustering Using Docker Swarm.

  1. The easiest way of getting started with Swarm is by using the official Docker image:
    This command returns a discovery token, referred to as <TOKEN> in this document, which is the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    The last line is the <TOKEN>.

    Make sure to note this cluster id now as there is no means to list it later. This should be fixed with #661.

  2. Swarm is fully integrated with Docker Machine, and so this is the easiest way to get started. Let’s create a Swarm Master next:

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, and --swarm-master configures the created machine to be the Swarm master. Swarm master creation talks to the hosted service on Docker Hub and informs it that a master has been created in the cluster.

  3. Connect to this newly created master and find some more information about it:

    This will show the output as:

  4. Create a Swarm node

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified by --swarm-discovery token://... and specifying the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:

    Replace <TOKEN> with the cluster id obtained in the previous step.

  6. List all the nodes created so far:

    This shows the output similar to the one below:

    The machines that are part of the cluster have the cluster’s name in the SWARM column, blank otherwise. For example, “lab” and “summit2015” are standalone machines, whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm nodes. There is a total of 4 containers running in this cluster – one Swarm agent on the master and on each node, and an additional swarm-agent-master running on the master.

  8. List nodes in the cluster with the following command:

    This shows the output as:

Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Docker Compose to Orchestrate Containers explains how multi container applications can be easily started using Docker Compose.

  1. Use the docker-compose.yml file explained in that blog to start the containers as:
    The docker-compose.yml file looks like:
  2. Check the containers running in the cluster as:
    to see the output as:
  3. “swarm-node-02” is running three containers, so let’s look at the list of containers running there:
    and see the list of running containers as:
  4. Application can then be accessed again using:
    and shows the output as:

Latest instructions for this setup are always available at: github.com/javaee-samples/docker-java/blob/master/chapters/docker-swarm.adoc.

Enjoy!

ZooKeeper for Microservice Registration and Discovery

In a microservices world, multiple services are typically distributed in a PaaS environment. Immutable infrastructure is provided by containers or immutable VM images. Services may scale up and down based upon certain pre-defined metrics. The exact address of a service may not be known until the service is deployed and ready to be used.

This dynamic nature of service endpoint addresses is handled by service registration and discovery. Each service registers with a broker and provides details about itself, such as the endpoint address. Consumer services then query the broker to find out the location of a service and invoke it. There are several ways to register and query services, such as ZooKeeper, etcd, Consul, Kubernetes, Netflix Eureka and others.

Monolithic to Microservice Refactoring showed how to refactor an existing monolith to a microservice-based application. User, Catalog, and Order service URIs were defined statically. This blog will show how to register and discover microservices using ZooKeeper.

Many thanks to Ioannis Canellos (@iocanel) for all the ZooKeeper hacking!

What is ZooKeeper?

ZooKeeper is an Apache project that provides a distributed, eventually consistent hierarchical configuration store.

 

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.

Apache ZooKeeper

So a service can register with ZooKeeper using a logical name, and the configuration information can contain the URI endpoint. It can consist of other details as well, such as QoS.

ZooKeeper has a steep learning curve, as explained in Apache ZooKeeper Made Simpler with Curator. So, instead of using ZooKeeper directly, this blog will use Apache Curator.

Curator n ˈkyoor͝ˌātər: a keeper or custodian of a museum or other collection – A ZooKeeper Keeper.

Apache Curator

Apache Curator has several components, and this blog will use the Framework:

The Curator Framework is a high-level API that greatly simplifies using ZooKeeper. It adds many features that build on ZooKeeper and handles the complexity of managing connections to the ZooKeeper cluster and retrying operations.

Apache Curator Framework

ZooKeeper Concepts

ZooKeeper Overview provides a great overview of the main concepts. Here are some of the relevant ones:

  • Znodes: ZooKeeper stores data in a shared hierarchical namespace that is organized like a standard filesystem. The name space consists of data registers – called znodes, in ZooKeeper parlance – and these are similar to files and directories.
  • Node name: Every node in ZooKeeper’s name space is identified by a path. The exact name of a node is a sequence of path elements separated by a slash (/).
  • Client/Server: Clients connect to a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the client will connect to a different server.
  • Configuration data: Each node in a ZooKeeper namespace can have data associated with it as well as children. ZooKeeper was originally designed to store coordination data, so the data stored at each node is usually small (in the less-than-a-KB range).
  • Ensemble: ZooKeeper itself is intended to be replicated over a set of hosts called an ensemble. The servers that make up the ZooKeeper service must all know about each other.
  • Watches: ZooKeeper supports the concept of watches. Clients can set a watch on a znode. A watch will be triggered and removed when the znode changes.

ZooKeeper is a CP system with regards to CAP theorem. This means if there is a partition failure, it will be consistent but not available. This can lead to problems that are explained in Eureka! Why You Shouldn’t Use ZooKeeper for Service Discovery.

Nevertheless, ZooKeeper is one of the most popular service discovery mechanisms used in microservices world.

Let’s get started!

Start ZooKeeper

  1. Start a ZooKeeper instance in a Docker container:
  2. Verify ZooKeeper instance by using telnet as:
    Type the command “ruok” to verify that the server is running in a non-error state. The server will respond with “imok” if it is running:
    Otherwise it will not respond at all. ZooKeeper has other similar four-letter commands.
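
A hedged sketch of these two steps; the official zookeeper image is used here, which may differ from the image the blog originally used, and newer ZooKeeper versions may require whitelisting four-letter commands:

  docker run -d --name zookeeper -p 2181:2181 zookeeper   # start a single ZooKeeper instance
  echo ruok | nc <docker-host-ip> 2181                    # should print "imok"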

Service Registration and Discovery

Each service, User, Catalog, and Order in our case, has an eagerly initialized bean that registers and unregisters the service as part of its lifecycle methods. Here is the code from CatalogService:

The code is pretty simple: it injects the ServiceRegistry class with the @ZooKeeperRegistry qualifier. This is then used to register and unregister the service. Multiple URIs, one for each instance of a stateless service, can be registered under the same logical name.

At this time, the qualifier comes from another Maven module. A cleaner Java EE way would be to move the @ZooKeeperRegistry qualifier to a CDI extension (#20). Then this qualifier, when specified on any REST endpoint, would register the service with ZooKeeper (#22). For now, the service endpoint URI is hardcoded as well (#24).

What does ZooKeeper class look like?

  1. The ZooKeeper class uses constructor injection, hardcoding the IP address and port (#23):
    It does the following tasks:

    1. Loads ZooKeeper’s host/port from a properties file
    2. Initializes Curator framework and starts it
    3. Initializes a hashmap to store the URI name to zNode mapping. This node is deleted later to unregister the service.
  2. Service registration is done using registerService method as:
    The code is pretty straightforward:

    1. Create a parent zNode, if needed
    2. Create an ephemeral and sequential node
    3. Add metadata, including URI, to this node
  3. Service discovery is done using discover method as:
    Again, simple code:

    1. Find all children for the path registered for the service
    2. Get the metadata associated with this node, the URI in our case, and return it. The first such node is returned in this case. Different QoS parameters can be attached to the configuration data, which would allow the appropriate service endpoint to be returned.

Read the ZooKeeper Javadocs for the API.

ZooKeeper watches can be set up to inform the client about the lifecycle of the service (#27). ZooKeeper path caches can provide an optimized implementation of the children nodes (#28).

Multiple Service Discovery Implementations

Our shopping cart application has two service discovery implementations – ServiceDisccoveryStatic and ServiceDiscoveryZooKeeper. The first one has all the service URIs defined statically, and the other retrieves them from ZooKeeper.

Other means to register and discover services can easily be added by creating a new package in the services module and implementing the ServiceRegistry interface. For example: Snoop, etcd, Consul, or Kubernetes. Feel free to send a PR for any of those.

Run Application

  1. Make sure the ZooKeeper image is running as explained earlier.
  2. Download and run WildFly:
  3. Deploy the application:
  4. Access the application at localhost:8080/everest-web/. Learn more about the application and different components in Monolithic to Microservices Refactoring for Java EE Applications blog.

Enjoy!

Monolithic to Microservices Refactoring for Java EE Applications

Have you ever wondered what it takes to refactor an existing Java EE monolithic application into a microservices-based one?

This blog explains how a trivial shopping cart example was converted to a microservices-based application, and some of the concerns around doing so. The complete code base for the monolithic and microservices-based applications is at: github.com/arun-gupta/microservices.

Read on for full glory!

Java EE Monolith

A Java EE monolithic application is typically defined as a WAR or an EAR archive. The entire functionality of the application is packaged in a single unit. For example, an online shopping cart may consist of User, Catalog, and Order functionality. All web pages are in the root of the application, all corresponding Java classes are in the WEB-INF/classes directory, and resources are in the WEB-INF/classes/META-INF directory.

Let’s assume that your monolith is not designed as a distributed big ball of mud and that the application is built following good software architecture. Some of the common rules are:

  • Separation of concerns, possibly using Model-View-Controller
  • High cohesion and low coupling using well-defined APIs
  • Don’t Repeat Yourself (DRY)
  • Interfaces/APIs and implementations are separate, and following Law of Demeter. Classes don’t call other classes directly because they happen to be in the same archive
  • Using Domain Driven Design to keep objects related to a domain/component together
  • YAGNI or You Aren’t Going to Need It: Don’t build something that you don’t need now

Here is what a trivial shopping cart monolithic WAR archive might look like:

Java EE Monolithic

This monolithic application has:

  • Web pages, such as .xhtml files, for User, Catalog, and Order component, packaged in the root of the archive. Any CSS and JavaScript resources that are shared across different webpages are also packaged with these pages.
  • Classes for the three components are in separate packages in WEB-INF/classes directory. Any utility/common classes used by multiple classes are packed here as well.
  • Configuration files for each component are packaged in the WEB-INF/classes/META-INF directory. Any config files for the application, such as persistence.xml and load.sql to connect to and populate the data store respectively, are also packaged here.

It has the usual advantages of a well-known architecture: IDE-friendly, easy sharing, simplified testing, easy deployment, and others. But it also comes with disadvantages such as limited agility, being an obstacle to continuous delivery, being “stuck” with a technology stack, growing technical debt, and others.

Even though microservices are all the rage these days, monoliths are not bad. Even teams whose monoliths are not working well for them may not benefit much, or immediately, from moving to microservices. Other approaches, such as simply better software engineering and architecture, may help. Microservices are neither a free lunch nor a silver bullet, and they require significant investment to be successful: service discovery, service replication, service monitoring, containers, PaaS, resiliency, and a lot more.

don’t even consider microservices unless you have a system that’s too complex to manage as a monolith.

Microservice Premium

Microservice Architecture for Java EE

Alright, I’ve heard all of that, but I’d like to see a before and after, i.e. what a monolithic codebase looks like and what a refactored microservices codebase looks like.

First, let’s look at the overall architecture:

Java EE Microservices

The key pieces in this architecture are:

  • The application is functionally decomposed so that the User, Order, and Catalog components are packaged as separate WAR files. Each WAR file has the relevant web pages (#15), classes, and configuration files required for that component.
    • Java EE is used to implement each component but there is no long term commitment to the stack as different components talk to each other using a well-defined API (#14).
    • Different classes in this component belong to the same domain and so the code is easier to write and maintain. The underlying stack can also change, possibly keeping technical debt to a minimum.
  • Each archive has its own database, i.e. no sharing of data stores. This allows each microservice to evolve and choose whatever type of datastore – relational, NoSQL, flat file, in-memory or some thing else – is most appropriate.
  • Each component will register with a Service Registry. This is required because multiple stateless instances of each service might be running at a given time and their exact endpoint location will be known only at the runtime (#17).Netflix Eureka, Etcd, Zookeeper are some options in this space (more details).
  • If components need to talk to each other, which is quite common, then they would do so using a pre-defined API. REST for synchronous or Pub/Sub for asynchronous communication are the common means to achieve this.In our case, Order component discovers User and Catalog service and talks to them using REST API.
  • Client interaction for the application is defined in another application, Shopping Cart UI in our case. This application mostly discover the services from Service Registry and compose them together. It should mostly be a dumb proxy where the UI pages of different components are invoked to show the interface (#18).A common look-and-feel can be achieved by providing a standard CSS/JavaScript resources.
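
As an illustration of the last two points, a minimal JAX-RS client call from the Order component to the User service might look like the sketch below. The class and method names are hypothetical, and the endpoint URI is assumed to be resolved from the Service Registry at runtime rather than hard-coded:

    import javax.ws.rs.client.Client;
    import javax.ws.rs.client.ClientBuilder;
    import javax.ws.rs.core.MediaType;

    public class UserServiceClient {

        // Resolved from the Service Registry (Eureka, etcd, ZooKeeper, ...) at runtime,
        // never hard-coded, since instances come and go.
        private final String userServiceUri;

        public UserServiceClient(String userServiceUri) {
            this.userServiceUri = userServiceUri;
        }

        // Fetch a user by id over the pre-defined REST API of the User component.
        public String getUser(String id) {
            Client client = ClientBuilder.newClient();
            try {
                return client.target(userServiceUri)
                             .path("users")
                             .path(id)
                             .request(MediaType.APPLICATION_JSON)
                             .get(String.class);
            } finally {
                client.close();
            }
        }
    }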

This application is fairly trivial but at least highlights some basic architectural differences.

Monolith vs Microservice

Some of the statistics for the monolith and microservices-based applications are compared below:

  • Number of archives: Monolith 1; Microservice 5 (Contracts JAR ~4 KB, Order WAR ~7 KB, User WAR ~6 KB, Catalog WAR ~8 KB, Web UI WAR 27 KB)
  • Web pages: Monolith 8; Microservice 8 (see below)
  • Configuration files: Monolith 4 (web.xml, template.xhtml, persistence.xml, load.sql); Microservice 3 per archive (web.xml, persistence.xml, load.sql)
  • Class files: Monolith 12; Microservice 26 (adds service registration for each archive, service discovery classes, and an Application class for each archive)
  • Total archive size: Monolith 24 KB; Microservice ~52 KB (total)

Code base for the monolithic application is at: github.com/arun-gupta/microservices/tree/master/monolith/everest

Code base for the microservices-enabled application is at: github.com/arun-gupta/microservices/tree/master/microservice

Issues and TODOs

Here are the issues encountered, and the remaining TODOs, while refactoring the monolith into a microservices-based application:

  • Java EE already enables functional decomposition of an application using EAR packaging. Each component of an application can be packaged as a WAR file and bundled within an EAR file. They can even share resources that way. That is not a true microservices approach, but it could be an interim step to get you started (a packaging sketch is shown after this list). However, be aware that @FlowScoped beans are not activated correctly in an EAR (WFLY-4565).
  • Extract all template files using JSF Resource Library Templates.
    • All web pages are currently in everest module but they should live in each component instead (#15).
    • Resource Library Template should be deployed at a central location as opposed to packaged with each WAR file (#16).
  • Breaking up the monolithic database into multiple databases requires a separate persistence.xml and separate DDL/DML scripts for each application. Similarly, migration scripts, such as those using Flyway, would need to be created accordingly.
  • A REST interface had to be created for every component that needs to be accessed by another one.
  • The UI is still in a single web application. It should instead be included in the decomposed WARs (#15) and then composed again in the dumb proxy. Does that smell like portlets?
  • Deploy the multiple WAR files in a PaaS (#12)
  • Each microservice should be easily deployable in a container (#6)
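
As an illustration of the interim EAR step mentioned in the first item, the three WARs could be aggregated with the maven-ear-plugin roughly as below. The groupId and artifactIds are hypothetical:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-ear-plugin</artifactId>
      <version>2.10</version>
      <configuration>
        <!-- generate a Java EE 7 application.xml -->
        <version>7</version>
        <modules>
          <!-- each functional component stays a separate WAR inside the EAR -->
          <webModule>
            <groupId>org.example.shop</groupId>
            <artifactId>user</artifactId>
            <contextRoot>/user</contextRoot>
          </webModule>
          <webModule>
            <groupId>org.example.shop</groupId>
            <artifactId>catalog</artifactId>
            <contextRoot>/catalog</contextRoot>
          </webModule>
          <webModule>
            <groupId>org.example.shop</groupId>
            <artifactId>order</artifactId>
            <contextRoot>/order</contextRoot>
          </webModule>
        </modules>
      </configuration>
    </plugin>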

Here is the complete list of classes for the monolithic application:

Here is the complete list of classes for the microservices-based application:

Once again, the complete code base is at github.com/arun-gupta/microservices.

Future Topics

Some of the future topics in this series would be:

  • Are containers required for microservices?
  • How do I deploy multiple microservices using containers?
  • How can all these services be easily monitored?
  • A/B Testing
  • Continuous Deployment using microservices and containers

What else would you like to see?

Enjoy!

 

Java EE, Docker and Maven (Tech Tip #89)

Java EE applications are typically built and packaged using Maven. For example, github.com/javaee-samples/javaee7-docker-maven is a trivial Java EE 7 application and shows the Java EE 7 dependency:
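
In a Java EE 7 project this is usually the single, provided-scope javaee-api artifact, roughly as follows:

    <dependency>
      <groupId>javax</groupId>
      <artifactId>javaee-api</artifactId>
      <version>7.0</version>
      <scope>provided</scope>
    </dependency>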

And the two Maven plugins that compile the source and build the WAR file:
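
A minimal sketch of those two plugin entries (the versions shown are approximate for the Java EE 7 era):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.1</version>
      <configuration>
        <!-- compile for the Java version targeted by the application -->
        <source>1.7</source>
        <target>1.7</target>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <!-- Java EE 7 web applications do not need a web.xml -->
        <failOnMissingWebXml>false</failOnMissingWebXml>
      </configuration>
    </plugin>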

This application can then be deployed to a Java EE 7 container, such as WildFly, using the wildfly-maven-plugin:
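
A minimal sketch of that plugin entry; with no further configuration it deploys the WAR to a WildFly instance running on localhost when mvn wildfly:deploy is invoked:

    <plugin>
      <groupId>org.wildfly.plugins</groupId>
      <artifactId>wildfly-maven-plugin</artifactId>
      <version>1.0.2.Final</version>
    </plugin>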

Tests can be invoked using Arquillian, again using Maven. So if you were to package this application as a Docker image and run it inside a Docker container, there should be a mechanism to seamlessly integrate it into the Maven workflow.

Docker Maven Plugin

Meet docker-maven-plugin!

This plugin allows you to manage Docker images and containers from your pom.xml. It comes with predefined goals:

Goal Description
docker:start Create and start containers
docker:stop Stop and destroy containers
docker:build Build images
docker:push Push images to a registry
docker:remove Remove images from local docker host
docker:logs Show container logs

Introduction provides a high level introduction to the plugin including building images, running containers and configuration.

Run Java EE 7 Application as Docker Container using Maven

TLDR;

  1. Create and Configure a Docker Machine as explained in Docker Machine to Setup Docker Host
  2. Clone the workspace as: git clone https://github.com/javaee-samples/javaee7-docker-maven.git
  3. Build the Docker image as: mvn package -Pdocker
  4. Run the Docker container as: mvn install -Pdocker
  5. Find out IP address of the Docker Machine as: docker-machine ip mydocker
  6. Access your application

Docker Maven Plugin Configuration

Let’s look a little deeper at our sample application.

pom.xml is updated to include docker-maven-plugin as:
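
A trimmed sketch of that section is shown below. It assumes the org.jolokia coordinates the plugin was published under at the time (it later moved to io.fabric8); the image name, base image, and version number are placeholders to be adjusted for the actual project:

    <plugin>
      <groupId>org.jolokia</groupId>
      <artifactId>docker-maven-plugin</artifactId>
      <version>0.11.5</version>
      <configuration>
        <images>
          <image>
            <!-- image name and alias -->
            <name>javaee7-docker-maven</name>
            <alias>javaee7</alias>
            <!-- <build>: how the image is created -->
            <build>
              <from>jboss/wildfly:8.2.0.Final</from>
              <assembly>
                <!-- assembly descriptor, kept in src/main/docker -->
                <descriptor>assembly.xml</descriptor>
                <basedir>/opt/jboss/wildfly/standalone/deployments</basedir>
              </assembly>
            </build>
            <!-- <run>: how the container is started -->
            <run>
              <ports>
                <port>8080:8080</port>
              </ports>
            </run>
          </image>
        </images>
      </configuration>
      <executions>
        <!-- tie the image build to package and the container start to install -->
        <execution>
          <id>build</id>
          <phase>package</phase>
          <goals>
            <goal>build</goal>
          </goals>
        </execution>
        <execution>
          <id>start</id>
          <phase>install</phase>
          <goals>
            <goal>start</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

This plugin entry would typically sit inside the docker profile, so that mvn package -Pdocker and mvn install -Pdocker from the steps above trigger the image build and the container start respectively.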

Each image configuration has three parts:

  • Image name and alias
  • <build> that defines how the image is created. The base image, the build artifacts and their dependencies, the ports to be exposed, and anything else to be included in the image are specified here. The assembly descriptor format is used to specify the artifacts to be included; it is defined in the src/main/docker directory, and assembly.xml in our case looks roughly like the sketch shown after this list.
  • <run> that defines how the container is run. Ports that need to be exposed are specified here.
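
A sketch of such an assembly descriptor; the WAR file name is an assumption and would match the artifact produced by this build:

    <assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.2 http://maven.apache.org/xsd/assembly-1.1.2.xsd">
      <id>javaee7</id>
      <files>
        <file>
          <!-- copy the WAR produced by the build into the image -->
          <source>target/javaee7-docker-maven-1.0-SNAPSHOT.war</source>
          <outputDirectory>.</outputDirectory>
        </file>
      </files>
    </assembly>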

In addition, the package phase is tied to the docker:build goal and the install phase is tied to the docker:start goal, as shown in the <executions> section of the sketch above.

There are four docker-maven-plugins, and you can read more details in the shootout on which one serves your purpose best.

How are you creating your Docker images from existing applications?

Enjoy!

 

Deploying Java EE Application to Docker Swarm Cluster (Tech Tip #88)

What is Docker Swarm?

Docker Swarm provides native clustering to Docker. Clustering using Docker Swarm 0.2.0 provides a basic introduction to Docker Swarm and how to create a simple three-node cluster. As a refresher, the key components of Docker Swarm are shown below:

In short, the Swarm Manager is a pre-defined Docker host and is a single point for all administration. Additional Docker hosts are identified as Nodes and communicate with the Manager using TCP. By default, Swarm uses the hosted Discovery Service, based on Docker Hub, which uses tokens to discover the nodes that are part of a cluster. Each node runs a Node Agent that registers the referenced Docker daemon, monitors it, and updates the Discovery Service with the node’s status. The containers run on a node.

That blog provides complete details, but a quick summary of creating the cluster is shown below:
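
For reference, a condensed version of those steps, assuming the VirtualBox driver and the hosted token-based discovery (the token value is a placeholder):

    # Generate a cluster token using the hosted discovery service
    docker run swarm create

    # Create the Swarm master
    docker-machine create -d virtualbox --swarm --swarm-master \
      --swarm-discovery token://<TOKEN> swarm-master

    # Create two nodes that join the same cluster
    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://<TOKEN> swarm-node-01
    docker-machine create -d virtualbox --swarm \
      --swarm-discovery token://<TOKEN> swarm-node-02

    # Point the Docker client at the cluster
    eval "$(docker-machine env --swarm swarm-master)"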

Listing the cluster shows:

It has one master and two nodes.

Deploy a Java EE application to Docker Swarm

All hosts in the cluster are accessible through a single, virtual host. Swarm serves the standard Docker API, so any tool that communicates with a single Docker host can transparently scale to multiple Docker hosts by talking to this virtual host.

Docker Container Linking Across Multiple Hosts explains how to link containers across multiple Docker hosts. It deploys a Java EE 7 application to WildFly on one Docker host, and connects it with a MySQL container running on a different Docker host. We can deploy both of these containers using the virtual host, and they will then be deployed to the Docker Swarm cluster.

Lets get started!

MySQL on Docker Swarm

  1. Start the MySQL container (a sketch of these steps is shown after this list)
  2. Status of the container can be seen as:
    It shows the container is running on swarm-node-01.

    Make sure you are connected to the Docker Swarm cluster using eval $(docker-machine env --swarm swarm-master).

  3. Find the IP address of the host where this container is started:

    Note the IP address of the node where the MySQL server is running. This will be used when starting the WildFly application server later.

    PS: Filtering by name seems to not return accurate results (#10897).
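
A condensed sketch of these three steps; the image, credentials, and node name follow the example above and should be adjusted to your environment:

    # 1. Start the MySQL container somewhere in the cluster
    docker run -d --name mysqldb \
      -e MYSQL_USER=mysql -e MYSQL_PASSWORD=mysql \
      -e MYSQL_DATABASE=sample -e MYSQL_ROOT_PASSWORD=supersecret \
      -p 3306:3306 mysql

    # 2. Check the status and the node Swarm scheduled it on (swarm-node-01 above)
    docker ps

    # 3. Find the IP address of that node; it is needed when starting WildFly
    docker-machine ip swarm-node-01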

WildFly on Docker Swarm

  1. Start the WildFly application server by passing the IP address of the host and the port on which the MySQL server is running (see the sketch after this list):

  2. Status of the container can be seen as:

    It shows the container is running on swarm-node-02. The IP address of the host is also shown in the PORTS column.

    As explained in Tech Tip #69, the JDBC URL of the data source uses the specified IP address and port for connecting with the MySQL server. However, passing the IP address is brittle because the MySQL server may be restarted on a different Docker host. This is filed as #773.

  3. Access the application at:

    This is using the IP address of the host where the container is started.
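
A condensed sketch of these steps; the image name and the environment variable names are assumptions based on the container-linking blog referenced above, and the exact context path depends on the deployed application:

    # 1. Start WildFly, passing the IP address and port of the MySQL host found earlier
    docker run -d --name mywildfly \
      -e MYSQL_HOST=<ip-of-swarm-node-01> -e MYSQL_PORT=3306 \
      -p 8080:8080 arungupta/wildfly-mysql-javaee7

    # 2. Check the status, the node it landed on (swarm-node-02 above), and the published port
    docker ps

    # 3. Access the application on that node's IP address
    curl http://$(docker-machine ip swarm-node-02):8080/<context-root>/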

Enjoy!

 

JBoss EAP 6.4 – Java 8, JSR 356 WebSocket, Kerberos auth for management

JBoss Enterprise Application Platform 6.4, an update to the commercial release of Red Hat’s Java EE 6 compliant application server, is now available.

JBoss EAP Logo

Download JBoss EAP 6.4

For current customers with active subscriptions, the binaries can be downloaded from the Customer Support Portal. This also has Installer, Quickstarts, Javadocs, Maven repository, Source Code, and much more.

Bits are also available from jboss.org/products/eap under development terms & conditions, and questions can be posted to the EAP Forum.

New Features in JBoss EAP 6.4

The key new features are:

  • Java 8 Support
  • JSR 356 WebSockets 1.0 support
  • Kerberos auth for management API connections, EJB invocations, and selected database access
  • Hibernate Search included as a new feature
  • Support for nested expressions
  • Ability to read boot errors from the management APIs
  • Display of server logs in the admin console

Read the comprehensive list of new features in JBoss EAP 6.4.

Documentation

Complete documentation is available at Customer Support Portal, and here are quick links:

  • Release Notes
  • Documentation
    • Getting Started Guide
    • Installation Guide
    • Administration and Configuration Guide
    • Development Guide
    • Security Guide
    • Migration Guide

If you are looking for a Java EE 7 compliant application server, then download WildFly.