Category Archives: wildfly

Service Discovery with Java and Database application in Kubernetes

This blog will show how a simple Java application can talk to a database using service discovery in Kubernetes.


Service Discovery with Java and Database application in DC/OS explains why service discovery is an important aspect for a multi-container application. That blog also explained how this can be done for DC/OS.

Let’s see how this can be accomplished in Kubernetes with a single instance of application server and database server. This blog will use WildFly for application server and Couchbase for database.

This blog will use the following main steps:

  • Start Kubernetes one-node cluster
  • Kubernetes application definition
  • Deploy the application
  • Access the application

Start Kubernetes Cluster

Minikube is the easiest way to start a one-node Kubernetes cluster in a VM on your laptop. The binary needs to be downloaded first and then installed.

Complete installation instructions are available at github.com/kubernetes/minikube.

The latest release can be installed on OSX as:
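
A minimal sketch, assuming the download URL currently published on the minikube releases page:

  curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
  chmod +x minikube
  sudo mv minikube /usr/local/bin/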

It also requires kubectl to be installed. Installing and Setting up kubectl provides detailed instructions on how to set up kubectl. On OSX, it can be installed as:
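
A minimal sketch using the kubectl release bucket; Homebrew (brew install kubectl) is an equally valid alternative:

  curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
  chmod +x kubectl
  sudo mv kubectl /usr/local/bin/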

Now, start the cluster as:
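
The exact output varies by version; the basic command is:

  minikube start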

The kubectl version command shows more details about the kubectl client and minikube server version:

More details about the cluster can be obtained using the kubectl cluster-info command:

Kubernetes Application Definition

Application definition is defined at github.com/arun-gupta/kubernetes-java-sample/blob/master/service-discovery.yml. It consists of:

  • A Couchbase service
  • Couchbase replica set with a single pod
  • A WildFly replica set with a single pod
The key part is where the value of the COUCHBASE_URI environment variable is the name of the Couchbase service. This allows the application deployed in WildFly to dynamically discover the service and communicate with the database.

arungupta/couchbase:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/couchbase/Dockerfile.

arungupta/wildfly-couchbase-javaee:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/Dockerfile.

The Java EE application waits for database initialization to complete before it starts querying the database. This can be seen at github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java#L25.

Deploy Application

This application can be deployed as:
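
A minimal sketch, assuming the service-discovery.yml file from the repository mentioned above has been downloaded to the current directory:

  kubectl create -f service-discovery.yml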

The list of services and replica sets can be shown using the command kubectl get svc,rs:

Logs for the single replica of Couchbase can be obtained using the command kubectl logs rs/couchbase-rs:

Logs for the WildFly replica set can be seen using the command kubectl logs rs/wildfly-rs:

Access Application

The kubectl proxy command starts a proxy to the Kubernetes API server. Let’s start a Kubernetes proxy to access our application:

Expose the WildFly replica set as a service using:
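
A minimal sketch; the service name wildfly-service is an assumption made here for illustration:

  kubectl expose rs wildfly-rs --port=8080 --target-port=8080 --name=wildfly-service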

The list of services can be seen again using kubectl get svc command:

Now, the application is accessible at:
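
A hedged example; the exact proxy path depends on the Kubernetes version and on the service name chosen above, and the /airlines/resources/airline path is an assumption about how the couchbase-javaee application exposes its REST endpoint:

  curl http://127.0.0.1:8001/api/v1/proxy/namespaces/default/services/wildfly-service/airlines/resources/airline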

A formatted output looks like:

Now, new pods may be added to the Couchbase service by scaling the replica set. Existing pods may be terminated or get rescheduled. But the Java EE application will continue to access the database service using the logical name.

This blog showed how a simple Java application can talk to a database using service discovery in Kubernetes.

For further information check out:

  • Kubernetes Docs
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on Couchbase Forums or Stack Overflow
  • Download Couchbase

Microservice using Docker stack deploy – WildFly, Java EE and Couchbase

There is plenty of material on microservices, just google it! I gave a presentation on refactoring monolith to microservices at Devoxx Belgium a couple of years back and it has good reviews:

This blog will show how Docker simplifies creation and shutting down of a microservice.

All code used in this blog is at github.com/arun-gupta/couchbase-javaee.

Microservice Definition using Compose

Docker 1.13 introduced v3 of the Docker Compose format. The changes in the syntax are minimal, but the key difference is the addition of the deploy attribute. This attribute allows you to specify the number of replicas, the rolling update strategy, and the restart policy for the container.

Our microservice will start a WildFly application server with a Java EE application pre-deployed. This application will talk to a Couchbase database to perform CRUD operations on application data.

Here is the Compose definition:

In this Compose file:

  1. Two services in this Compose file are defined by the names db and web
  2. The image name for each service is defined using the image attribute
  3. The arungupta/couchbase:travel image starts the Couchbase server, configures it using the Couchbase REST API, and loads the travel-sample bucket with ~32k JSON documents.
  4. The arungupta/couchbase-javaee:travel image starts WildFly and deploys the application WAR file built from https://github.com/arun-gupta/couchbase-javaee. Clone that project if you want to build your own image.
  5. The environment attribute defines environment variables accessible by the application deployed in WildFly. COUCHBASE_URI refers to the database service. This is used in the application code as shown at https://github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java.
  6. Port forwarding is achieved using the ports attribute
  7. The depends_on attribute in the Compose file controls the container start-up order. But application-level start-up needs to be ensured by the applications running inside the containers. In our case, WildFly starts up rather quickly, but the database takes a few seconds to start. This means the Java EE application deployed in WildFly is initially not able to communicate with the database. This outlines a best practice when building microservices applications: you must code defensively and ensure in your application initialization that the microservices you depend on have started, without assuming startup order. This is shown in the database initialization code at https://github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java. It performs the following checks:

    1. Bucket exists
    2. Query service of Couchbase is up and running
    3. Sample bucket is fully loaded

This application can be started using the docker-compose up -d command on a single host, or on a cluster of Docker engines in swarm mode using the docker stack deploy command.

Setup Docker Swarm-mode

Initialize Swarm mode using the following command:
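
On a single Docker host this is simply:

  docker swarm init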

This starts a Swarm manager. By default, a manager node is also a worker, but it can be configured to be manager-only.

Find some information about this one-node cluster using the docker info command:

This cluster has one node, and it is a manager.

Alternatively, a multi-host cluster can be easily setup using Docker for AWS.

Deploy Microservice

The microservice can be started as:
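
A minimal sketch, assuming the Compose file shown earlier is in the current directory and the stack is named webapp (the name used in the commands below):

  docker stack deploy --compose-file=docker-compose.yml webapp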

This shows the output:

WildFly and Couchbase services are started on this node. Each service has a single container. If the Swarm mode is enabled on multiple nodes then the containers will be distributed across multiple nodes.

A new overlay network is created. This allows multiple containers on different hosts to communicate with each other.

Verify that the WildFly and Couchbase services are running using docker service ls:

Logs for the service can be seen using docker service logs -f webapp_web:

Make sure to wait for the last log statement to show.

Access Microservice

Get 10 airlines from the microservice:
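
A hedged example, assuming the REST endpoint exposed by the couchbase-javaee application and the port mapping from the Compose file:

  curl http://localhost:8080/airlines/resources/airline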

This shows the results as:


Get a single resource:

Create a new resource:

Update a resource:

Delete a resource:
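
Hedged examples of these commands, assuming the same endpoint; the document id and the payload fields are illustrative values only:

  # Read a single airline document
  curl http://localhost:8080/airlines/resources/airline/137

  # Create a new airline document
  curl -H "Content-Type: application/json" -X POST \
    -d '{"name":"Airlinair","iata":"RLA","icao":"RLA","callsign":"AIRLINAIR","country":"France","type":"airline"}' \
    http://localhost:8080/airlines/resources/airline

  # Update the document (the id in the payload and in the URI must match)
  curl -H "Content-Type: application/json" -X PUT \
    -d '{"id":"12345","name":"Airlin Air","iata":"RLA","icao":"RLA","callsign":"AIRLINAIR","country":"France","type":"airline"}' \
    http://localhost:8080/airlines/resources/airline/12345

  # Delete the document
  curl -X DELETE http://localhost:8080/airlines/resources/airline/12345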

Detailed output from each of these commands is at github.com/arun-gupta/couchbase-javaee.

Delete Microservice

The microservice can be removed using the command docker stack rm webapp:

Want to get started with Couchbase? Look at Couchbase Starter Kits.

Want to learn more about running Couchbase in containers?

  • Couchbase on Containers
  • Couchbase Forums
  • Couchbase Developer Portal
  • @couchbasedev and @couchbase

Source: https://blog.couchbase.com/2017/february/microservice-using-docker-stack-deploy-wildfly-javaee-couchbase

Docker Services, Stack and Distributed Application Bundle


First Release Candidate of Docker 1.12 was announced over two weeks ago. Several new features are planned for this release.

This blog will show how to create a Distributed Application Bundle from Docker Compose and deploy it as a Docker Stack in Docker Swarm Mode. Many thanks to @friism for helping me understand these concepts.

Let’s look at the features first:

  • Built-in orchestration: A typical application is defined using a Docker Compose file. This definition consists of multiple containers and is deployed on multiple hosts. This avoids a Single Point of Failure (SPOF) and keeps your application resilient. Multiple orchestration frameworks such as Docker Swarm, Kubernetes, and Mesos allow you to orchestrate these applications. However, because this is such an important characteristic of the application, Docker Engine now has built-in orchestration. More details on this topic in a later blog.
  • Service: A replicated, distributed, and load-balanced service can be easily created using the docker service create command. A “desired state” of the application, such as running 3 Couchbase containers, is provided and the self-healing Docker Engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, containers on that node are started on a different node. More on this in a later blog.
  • Zero-configuration Security: Docker 1.12 comes with mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node participating in the swarm, out of the box. More on this in a later blog.
  • Docker Stack and Distributed Application Bundle: Distributed Application Bundle, or DAB, is a multi-services distributable image format. Read further for more details.

So far, you can take a Dockerfile and create an image from it using the docker build command. A container can be started using the docker run command. Multiple containers can be easily started by giving that command multiple times. Or you can also use Docker Compose file and scale up your containers using the docker-compose scale command.


An image is a portable format for a single container. A Distributed Application Bundle, or DAB, a new concept introduced in Docker 1.12, is a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime.


Learn more about DAB at docker.com/dab.

For simplicity, here is an analogy that can be drawn:

Dockerfile -> Image -> Container

Docker Compose -> Distributed Application Bundle -> Docker Stack

Let’s use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.

It's important to note that this is an experimental feature in 1.12-RC2.

Create a Distributed Application Bundle from Docker Compose

Docker Compose CLI adds a new bundle command. More details can be found:

Now, let’s take a Docker Compose definition and create a DAB from it. Here is our Docker Compose definition:

This Compose file starts a WildFly and a Couchbase server. A Java EE application is pre-deployed in the WildFly server that connects to the Couchbase server and allows to perform CRUD operations using the REST API.

The source for this file is at: github.com/arun-gupta/oreilly-docker-book/blob/master/hello-javaee/docker-compose.yml.

Generate an application bundle with it:
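
A minimal sketch; the bundle is generated by the Compose CLI from the directory that contains docker-compose.yml:

  docker-compose bundle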

depends_on only creates a dependency between two services and makes them start in a specific order. This only ensures that the Docker container is started; the application within the container may take longer to start. So this attribute only partially solves the problem. container_name gives a specific name to the container. Relying upon a specific container name is tight coupling and does not allow the container to be scaled. So both warnings can be ignored, for now.

This command generates a file using the Compose project name, which is the directory name. So in our case, hellojavaee.dsb file is generated. This file extension has been renamed to .dab in RC3.

The generated application bundle looks like:

This file provides a complete description of the services included in the application. I'm not entirely sure if Distributed Application Bundle is the most appropriate name; discuss this in #24250. It would be great if other container formats, such as rkt, or even VMs could be supported here. But for now, Docker is the only supported format.

Initialize Swarm Mode in Docker

As mentioned above, “desired state” is now maintained by Docker Swarm. And this is now baked into Docker Engine already.

Docker Swarm concepts have evolved as well and can be read at Swarm mode key concepts. A more detailed blog on this will be coming later.

But for this blog, a new command docker swarm is now added:

Initialize a Swarm node (as a manager) in the Docker Engine:
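
A minimal sketch; additional flags such as --listen-addr may be needed depending on the release candidate and host setup:

  docker swarm init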

More details about this node can be found using docker node inspect self command.

The detailed output is verbose but the relevant section is:

The output shows that the node is a manager. For a single-node cluster, this node will also act as a worker.

 

More details about the cluster can be obtained using the docker swarm inspect command.

AcceptancePolicy shows that other worker nodes can join this cluster, but a manager requires explicit approval.

Deploy a Docker Stack

Create a stack using the docker deploy command:
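
A minimal sketch, assuming the hellojavaee bundle file generated above sits in the current directory:

  docker deploy hellojavaee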

The command usage can certainly be simplified as discussed in #24249.

See the list of services:

The output shows that two services, WildFly and Couchbase, are running. A service is also a new concept introduced in Docker 1.12. It is what gives you the “desired state”, and Docker Engine works to maintain it.

docker ps shows the list of containers running:

The WildFly container starts up before the Couchbase container is up and running. This means the Java EE application tries to connect to the Couchbase server and fails. So the application never boots successfully.

Self-healing Docker Service

Docker Service maintains the “desired state” of an application. In our case, the desired state is to ensure that one, and only one, container for the service is running. If we remove the container, not the service, then the service will automatically start the container again.

Remove the container as:
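
A minimal sketch; substitute the container id reported by docker ps:

  docker rm -f <CONTAINER_ID>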

Note, you have to pass -f because the container is already running. Docker 1.12 self-healing mechanisms kick in and automatically restart the container. Now if you list the containers again:

This shows that a new container has been started.

Inspect the WildFly service:

Swarm assigns a random port to the service; this can be manually updated using the docker service update command. In our case, port 8080 of the container is mapped to port 30004 on the host.

Verify the Application

Check that the application is successfully deployed:

Add a new book to the application:

Verify the books again:

Learn more about this Java EE application at github.com/arun-gupta/oreilly-docker-book/tree/master/hello-javaee.

This blog showed how to create a Distributed Application Bundle from Docker Compose and deploy it as Docker Stack in Docker Swarm Mode.

Docker Service and Stack References

  • Docker Service Create
  • FREE book from O’Reilly: Docker for Java Developers
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on @couchbasedev or Stackoverflow

Source: blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle

Microservices using WildFly Swarm, Docker and Couchbase

Containers, microservices, and NoSQL provide an awesome threesome for building your modern applications. These applications need to be agile, meet constantly evolving customer demands, be pervasive, and should work across mobile, web, and IoT platforms.

This blog will explain a simple microservices stack using WildFly Swarm, Docker, and Couchbase. Complete code and instructions in this blog are documented at: github.com/arun-gupta/wildfly-swarm-couchbase.

Let’s understand the key components of this stack first!


WildFly Swarm allows you to package and run Java EE applications by packaging them with just enough of the server runtime to java -jar your application. With built-in service discovery, single sign-on using Keycloak, monitoring using Hawkular, and many more features, WildFly Swarm provides all the necessary components to develop your microservice.


Docker for Mac provides native support for running Docker containers on Mac OSX. It relies upon Hypervisor.framework in OSX. The Docker engine runs in an Alpine Linux distribution on top of an xhyve virtual machine, and even the VM is managed by Docker. There is no need for Docker Machine or VirtualBox, and it integrates with the OSX security sandbox model. DockerCon 2016 removed the private beta restriction from Docker for Mac, so it's available for everybody now.


NoSQL provides the agility and flexibility of schema-less databases. This allows the application to evolve independently and rapidly without going through cumbersome database migrations. Couchbase offers true horizontal scaling with homogenous architecture, as opposed to non-scalable master/slave architecture. It also offers auto-sharding, SQL-like query language for JSON (N1QL), mobile database and synchronization with the backend server, and much more.

The complete sample application in this blog is at: github.com/arun-gupta/wildfly-swarm-couchbase.

WildFly Swarm Application

Let’s look at the Java EE REST endpoint:

It uses standard JAX-RS annotations to convert a POJO into a REST endpoint. The Couchbase Java API provides a fluent API, and a N1QL statement is used to query the documents and return the results. The N1QL statement returns the first 10 elements from the query result. Learn more about N1QL syntax in this interactive tutorial.

Database abstraction is defined as:

This is a singleton EJB that is eagerly initialized. It uses the Couchbase Java SDK to connect to Couchbase. The database endpoint can be specified using the COUCHBASE_URI environment variable.

Next up is pom.xml for configuring the WildFly Swarm and Couchbase Java Client:

It uses WildFly Swarm “bill of materials” to pull in all the dependencies. Only the specific dependencies needed for the build are specified in <dependencies>. These are then packaged in the “fat jar”.

WildFly Swarm Maven plugin is used to package and run the application:

COUCHBASE_URI is used to read the host of where Couchbase database server is running.

Run Couchbase Server

Run the Couchbase server using Docker for Mac:
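
A minimal sketch, assuming the standard Couchbase ports; the image name comes from the explanation below:

  docker run -d --name couchbase -p 8091-8093:8091-8093 -p 11210:11210 arungupta/couchbase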

The arungupta/couchbase image is built upon the standard Couchbase image and uses the Couchbase REST API to configure the server.

Wait for a couple of minutes for the sample bucket to be populated with the JSON documents.

Invoke the Couchbase CLI tool cbq to create a primary index on the sample bucket:
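
A minimal sketch, assuming the container started above was named couchbase:

  docker exec -it couchbase cbq
  cbq> CREATE PRIMARY INDEX ON `travel-sample`;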

This will show the output as:

This output shows that the index was created successfully.

One of the advantages of running Docker for Mac is that all the containers are accessible at localhost. This means Couchbase Web Console can be accessed at localhost:8091.


This screen ensures that Couchbase is configured correctly.

Run WildFly Swarm Microservice

 

Package and run the self-contained microservice as:
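
A minimal sketch using the WildFly Swarm Maven plugin configured in pom.xml; alternatively, run mvn package and then java -jar on the generated -swarm.jar:

  mvn wildfly-swarm:run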

If Couchbase is running on a different host, then the command will change to:
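
A hedged sketch, passing the COUCHBASE_URI environment variable read by the Database class:

  COUCHBASE_URI=<couchbase-host> mvn wildfly-swarm:run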

It shows the output as:

Now the application can be accessed as:
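
A hedged example; the resource path shown here is an assumption about how the JAX-RS endpoint is mapped:

  curl http://localhost:8080/resources/airline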

And a formatted output looks like:

So you built a simple microservice using WildFly Swarm accessing a Couchbase database running as a Docker container.

Now, ideally this WildFly Swarm service should be packaged as a Docker image and then that Docker image would serve as the service. A Maven profile with the name docker is already added to pom.xml but issue #3 is making that scenario fail.

Microservices References

  • docs.docker.com
  • WildFly Swarm
  • Getting Started with NoSQL
  • GitHub Repo: github.com/arun-gupta/wildfly-swarm-couchbase

Source: blog.couchbase.com/2016/june/microservices-wildfly-swarm-docker-couchbase

Docker Machine, Swarm and Compose for multi-container and multi-host applications with Couchbase and WildFly

This blog will explain how to create a multi-container application deployed on multiple hosts using Docker. This will be achieved using Docker Machine, Swarm, and Compose.

Yes, all three tools together make this blog that much more interesting!

Docker Swarm Machine Compose

The diagram explains the key components:

  • Docker Machine is used to provision multiple Docker hosts
  • Docker Swarm will be used to create a multi-host cluster
  • Each node in Docker Swarm cluster is registered/discovered using Consul
  • Multi-container application will be deployed using Docker Compose
  • WildFly and Couchbase are provisioned on different hosts
  • Docker multi-host networking is used for WildFly and Couchbase to communicate

In addition, Maven is used to configure Couchbase and deploy application to WildFly.

Latest instructions at Docker for Java Developers.

No story, just pure code, let's do it!

Create Discovery Service using Docker Machine

  1. Create a Machine that will host the discovery service (a sketch of the commands for these steps follows this list):
  2. Connect to this Machine:
  3. Run Consul service using the following Compose file:
    This Compose file is available at https://github.com/arun-gupta/docker-images/blob/master/consul/docker-compose.yml.
    Started container can be verified as:
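
    A minimal sketch of the commands for steps 1 through 3, assuming the VirtualBox driver and the Compose file linked above:

      docker-machine create -d virtualbox consul-machine
      eval "$(docker-machine env consul-machine)"
      docker-compose up -d
      docker ps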

Create Docker Swarm Cluster using Docker Machine

Swarm is fully integrated with Machine, and so this is the easiest way to get started.

  1. Create a Swarm Master and point to the Consul discovery service:
    A few options to note here (a full command sketch appears after this list):

    1. --swarm configures the Machine with Swarm
    2. --swarm-master configures the created Machine to be the Swarm master
    3. --swarm-discovery defines the address of the discovery service
    4. --cluster-advertise advertises the machine on the network
    5. --cluster-store designates a distributed k/v storage backend for the cluster
    6. --virtualbox-disk-size sets the disk size for the created Machine to 5GB. This is required so that the WildFly and Couchbase images can be downloaded on any of the nodes.
  2. Find some information about this machine:
    Note that the disk size is 5GB.
  3. Connect to the master by using the command:
  4. Find some information about the cluster:
  5. Create a new Machine to join this cluster:
    Notice that no --swarm-master is specified in this command. This ensures that the created Machines are worker nodes.
  6. Create a second Swarm node to join this cluster:
  7. List all the created Machines:
    The machines that are part of the cluster have the cluster's name in the SWARM column, blank otherwise. For example, consul-machine is a standalone machine whereas all other machines are part of the swarm-master cluster. The Swarm master is also identified by (master) in the SWARM column.
  8. Connect to the Swarm cluster and find some information about it:

    Note, --swarm is specified to connect to the Swarm cluster. Otherwise the command will connect to the swarm-master Machine only.

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm worker nodes. There is a total of 4 containers running in this cluster – one Swarm agent on master and each node, and there is an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers.

  9. List nodes in the cluster with the following command:
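
    Going back to step 1, a minimal sketch of the master-creation command with the options listed above, assuming the VirtualBox driver and the Consul machine created earlier; the worker nodes in steps 5 and 6 use the same command without --swarm-master and with names such as swarm-node-01:

      docker-machine create -d virtualbox \
        --swarm --swarm-master \
        --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
        --cluster-store="consul://$(docker-machine ip consul-machine):8500" \
        --cluster-advertise="eth1:2376" \
        --virtualbox-disk-size "5000" \
        swarm-master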

Start Application Environment using Docker Compose

Make sure you are connected to the cluster by giving the command eval "$(docker-machine env --swarm swarm-master)".

  1. List all the networks created by Docker so far:
    Docker creates three networks for each host automatically:

    bridge: Default network that containers connect to. This is the docker0 network in all Docker installations.
    none: Container-specific networking stack.
    host: Adds a container on the host's networking stack. Network configuration is identical to the host.

    This explains a total of nine networks, three for each node, as shown in this Swarm cluster.

  2. Use Compose file to start WildFly and Couchbase:

    In this Compose file:

    1. Couchbase service has a custom container name defined by container_name. This name is used when creating a new environment variable COUCHBASE_URI during WildFly startup.
    2. arungupta/wildfly-admin image is used as it binds WildFly's management to all network interfaces, and in addition also exposes port 9990. This enables the WildFly Maven Plugin to be used to deploy the application. Source for this file is at https://github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml.

    This application environment can be started as:

    --x-networking creates an overlay network for the Swarm cluster. This can be verified by listing networks again:

    Three new networks are created:

    1. Containers connected to the multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.
    2. A new overlay network wildflycouchbasejavaee7 is created. Connect to different Swarm nodes and check that the overlay network exists on them.

      Let's begin with the master:

      Next, with swarm-node-01:

      Finally, with swarm-node-02:

      As seen, wildflycouchbasejavaee7 overlay network exists on all Machines. This confirms that the overlay network created for Swarm cluster was added to each host in the cluster. docker_gwbridge only exists on Machines that have application containers running.

      Read more about Docker Networks.

  3. Verify that WildFly and Couchbase are running:

Configure Application and Database

  1. Clone https://github.com/arun-gupta/couchbase-javaee.git. This workspace contains a simple Java EE application that is deployed on WildFly and provides a REST API over travel-sample bucket in Couchbase.
  2. The Couchbase server can be configured using its REST API. The application contains a Maven profile that allows you to configure the Couchbase server with the travel-sample bucket. This can be invoked as:
  3. Deploy the application to WildFly by specifying three parameters:
    1. Host IP address where WildFly is running
    2. Username of a user in WildFly’s administrative realm
    3. Password of the user specified in WildFly’s administrative realm

Access Application

Now that the WildFly and Couchbase servers have started, let's access the application. You need to specify the IP address of the Machine where WildFly is running:
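
A hedged example, assuming the couchbase-javaee REST endpoint and using docker-machine to resolve the node that runs the WildFly container:

  curl http://$(docker-machine ip <node-running-wildfly>):8080/airlines/resources/airline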

Complete set of REST API for this application is documented at github.com/arun-gupta/couchbase-javaee.

Latest instructions at Docker for Java Developers.

Enjoy!

Docker Networking with Couchbase and WildFly

Docker Multi-Host networking allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. This blog will show how to use it with Docker Compose.

CRUD Java Application with Couchbase, Java EE, and WildFly explained how to use a Java EE application to provide a CRUD/REST interface on a data bucket in Couchbase. It required WildFly to be manually downloaded and run. That blog also ran the Couchbase server using Docker and required manual configuration to load the travel-sample bucket.

Configure Couchbase Docker Container using REST API explained how to use Couchbase REST API to configure Couchbase Server.

Docker Multi-Host Networking

This blog will remove the explicit download of WildFly and manual configuration of Couchbase server:

  • Use Docker Compose to start WildFly and Couchbase (no download required)
  • Use a Maven profile to configure Couchbase server (no manual configuration required)
  • Use Docker multi-host networking so that WildFly and the Couchbase server can talk to each other

Let's get started!

Start Couchbase and WildFly using Docker Multi-Host Networking and Compose

  1. Start WildFly and Couchbase server using docker-compose.yml file from github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml:
    arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces, and in addition also exposes port 9990. This enables WildFly Maven Plugin to be used to deploy the application.

    container_name is specified for Couchbase service and referred in WildFly service using COUCHBASE_URI. This is then used to connect to Couchbase from the Java EE application.

    The application environment is started as:

    --x-networking is an experimental switch added to Docker Compose 1.9 that allows you to create a bridge or an overlay network. By default, it creates a bridge network that works on a single host. The network created can be seen as:

    Issue #2221 provides more explanation about the default networks created. wildflycouchbasejavaee7 is the new bridge network created for our application. Issue #2345 provides some details about the incorrect driver name in the output message.

Configure Couchbase Server

  1. Clone couchbase-javaee repo:

  2. Configure Couchbase server:

    exec-maven-plugin is used to invoke the REST API and configure the Couchbase server, and is configured in a Maven profile. Make sure to set the docker.host property in pom.xml.

  3. Deploy the application to WildFly:

    Make sure to specify the correct host on the CLI. In this case, this is the IP address obtained using docker-machine ip default. A hedged sketch of these Maven commands is shown after this list.
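
    A hedged sketch of steps 2 and 3; the couchbase profile name and the wildfly.* user properties are assumptions made here for illustration, while -Pwildfly matches the deployment command used elsewhere for this project:

      mvn install -Pcouchbase -Ddocker.host=$(docker-machine ip default)
      mvn install -Pwildfly -Dwildfly.hostname=$(docker-machine ip default) -Dwildfly.username=admin -Dwildfly.password=admin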

Invoke the Application

  1. Invoke the REST endpoint using cURL:

    The complete set of REST endpoints is documented at CRUD Java Application with Couchbase, Java EE and WildFly. They are listed here for convenience:

    1. GET a single airline:
    2. Create a new airline using POST:

    3. Update an existing airline using PUT:
    4. Delete an existing airline using DELETE:

Enjoy!

CRUD Java Application with Couchbase, Java EE and WildFly

Couchbase is an open-source, NoSQL, document database. It allows you to access, index, and query JSON documents while taking advantage of integrated distributed caching for high-performance data access.

Developers can write applications for Couchbase in different languages (Java, Go, .NET, Node, PHP, Python, C) using multiple SDKs. This blog will show how you can easily create a CRUD application using the Java SDK for Couchbase.

REST with Couchbase

The application will use curl to issue REST commands to a JAX-RS endpoint deployed on WildFly. These commands will then perform CRUD operations on travel-sample bucket in Couchbase. N1QL (SQL query language for JSON) will be used to communicate with Couchbase to retrieve results. Both the “builder pattern” and raw N1QL commands will be used.

Couchbase CRUD using WildFly and Curl

TL;DR

Complete source code and instructions for the sample are available at github.com/arun-gupta/couchbase-javaee.

Let's get started!

Run Couchbase Server

The Couchbase server can be easily downloaded from the Couchbase Server Downloads page. In a containerized world, it's a lot easier to fire up a Couchbase server using Docker.

If Docker is configured on your machine then the easiest way is to use Docker Compose for Couchbase:

Starting up the Couchbase server shows:

And then the logs can be seen as:

The database needs to be configured and is explained at Configure Couchbase Server. Make sure to install travel-sample bucket.

Deploy the Java EE Application on WildFly

  • Download WildFly 9.0.2, unzip, and start the WildFly application server as ./wildfly-9.0.0.Final/bin/standalone.sh.
  • Git clone the repo: git clone https://github.com/arun-gupta/couchbase-javaee.git
  • Change directory cd couchbase-javaee
  • Deploy the application to WildFly: mvn install -Pwildfly.

The application uses Java SDK for Couchbase by importing the following Maven coordinates:

Invoke the REST Endpoints Using cURL

GET Airline resources (limit to 10)

Let's query the database to list 10 Airline resources.

Request

Response

The N1QL query for this is written as:
And can also be alternatively written as:
You may optionally update the code to include ORDER BY clause as shown in N1QL Tutorial.

GET one Airline resource

Use the id attribute to query a single Airline resource.

Request

Response

POST a new Airline resource

Learn how to run N1QL queries from the CLI using the cbq tool and verify the existing sample data:

This query retrieves documents where the airline's name is Airlinair. The count is shown in metrics.resultCount.
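
A hedged sketch of such a query from cbq, assuming the Couchbase container is named couchbase:

  docker exec -it couchbase cbq
  cbq> SELECT * FROM `travel-sample` WHERE name = "Airlinair";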

Create a new document using POST.

Request

Response

Query again using CBQ and now the results are shown as:
Note that two JSON documents are returned, instead of the one returned before the POST command was issued.

PUT an existing Airline resource

Update an existing resource using HTTP PUT.

The data model for the travel-sample bucket requires the “id” attribute to be included in the payload as well as in the URI.

Request

Name of the airline is updated from “Airlinair” to “Airlin Air”, all other attributes stay the same.

Response

The updated record is shown in the response.

Querying for Airlinair gives:

So the previously added record is now updated and thus does not appear in the query results. Querying for Airlin Air gives:

This shows the newly updated document.

DELETE an existing Airline resource

Query for a unique id:

Notice that one document is returned.

Let's delete this document.

Request

Response

The deleted document is shown in the response.

Query again for the deleted id:

And no results are returned!

As mentioned earlier, the complete code base is at github.com/arun-gupta/couchbase-javaee.

Enjoy!

WildFly Admin Console Updated – Feedback Requested

Red Hat JBoss Enterprise Application Platform (EAP) and WildFly have a symbiotic relationship. In short, Red Hat JBoss Enterprise Application Platform (JBoss EAP) retains all of the innovation of the WildFly community project (formerly known as JBoss Application Server). But only a subscription to JBoss EAP meets the demanding requirements for mission-critical applications and includes the assurance of service-level agreement (SLA)-based support, patches, updates, and multi-year maintenance policies. Read more details about the comparison between WildFly and JBoss EAP in this whitepaper.

JBoss EAP 6.4 is the latest version as of now; WildFly 10.0.0 Beta2 was released a few days ago. JBoss EAP 7 will be derived from WildFly 10.x. This allows developers to try out the latest features with WildFly (as opposed to other closed-source application servers) and then use EAP 7 for mission-critical applications when commercial support is required.

Over the past year, we have been working to improve the user experience of the WildFly Management Console.  You will find several improvements to the overall information architecture and navigation model that will make it easier to find and execute common management tasks. We invite you to try the new console application and tell us what you think.

Getting Started with WildFly Admin Console

  • Download WildFly-10.0.0.Beta2 and unzip
  • Add a user in admin realm as add-user.sh -u u1 -p p1.
  • Access web-based admin console at localhost:9990
  • Use the username as u1 and password as p1

If you don’t want to go through download and install, which is pretty simple BTW, then WildFly 10.0.0.Beta2 is also available in OpenShift (thanks @farahjuma).

WildFly Admin Console Highlights

Spend some time navigating through different sections to quickly learn the WildFly basics. Here are some highlights.

The new navigation makes the WildFly structure more visible. To find a subsystem to configure, simply move from the left to the right within the navigation. You can also get a quick overview about each subsystem before configuring it.

WildFly 10 Beta2 Configuration

Servers can be found through either hosts or server groups. In addition, you can search for the server group or server you are looking for. 

Adding and monitoring servers is easier. After adding a server to a server group or host and getting it running, you can choose a subsystem that you want to monitor from the same page.

WildFly10 Beta2 Runtime

Modifying the status of a server can be done on the same page. You can also remove or copy the server.

WildFly10 Beta2 Server

You can add deployments to server groups directly, which means that you don’t have to first upload it and then assign it. Searching for server groups and deployments will also help save your time.

WildFly10 Beta2 Server Group

We welcome your feedback as we continue to improve the user experience of WildFly. Feel free to leave comments here, file bugs, and let us know what you like and what can be further improved.

Getting Started with ELK Stack on WildFly

Your typical business application consists of a variety of servers such as WildFly, MySQL, Apache, ActiveMQ, and others. They each have their own log format, with minimal to no consistency across them. A log statement typically consists of some sort of timestamp (which can vary widely) and some text information. Logs can be multi-line. If you are running a cluster of servers then these logs are decentralized, in different directories.

How do you aggregate these logs? Provide a consistent visualization over them? Make this data available to business users?

This blog will:

  • Introduce ELK stack
  • Explain how to start it
  • Start a WildFly instance to send log messages to the ELK stack (Logstash)
  • View the messages using ELK stack (Kibana)

What is ELK Stack?

The ELK stack provides a powerful platform to index, search, and analyze your data. It uses Logstash for log aggregation, Elasticsearch for searching, and Kibana for visualizing and analyzing data. In short, the ELK stack:

  • Collect logs and events data (Logstash)
  • Make it searchable in fast and meaningful ways (Elasticsearch)
  • Use powerful analytics to summarize data across many dimensions (Kibana)


Logstash is a flexible, open source data collection, enrichment, and transportation pipeline.

Logstash


Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.

Elasticsearch


Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics.

Kibana

How does ELK Stack work?

Logstash can collect logs from a variety of sources (using input plugins), process the data into a common format using filters, and stream data to a variety of sources (using output plugins). Multiple filters can be chained to parse the data into a common format. Together, they build a Logstash Processing Pipeline.

Logstash Processing Pipeline

Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.

Logstash can then store the data in Elasticsearch and Kibana provides a visualization of that data. Here is a sample pipeline that can collect logs from different servers and run it through the ELK stack.

ELK Stack

Start ELK Stack

You can download the individual components of the ELK stack and start them that way. There is plenty of advice on how to configure these components. But I like to start with KISS, and Docker makes it easy to KISS!

All the source code on this blog is at github.com/arun-gupta/elk.

  1. Clone the repo:
  2. Run the ELK stack:
    This will use the pre-built Elasticsearch, Logstash, and Kibana images. It is built upon the work done in github.com/nathanleclaire/elk. A sketch of these commands is shown after this list.

    docker ps will show the output as:

    It shows all the containers running.
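
    A minimal sketch of these two steps:

      git clone https://github.com/arun-gupta/elk.git
      cd elk
      docker-compose up -d
      docker ps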

WildFly and ELK

James (@the_jamezp) blogged about Centralized Logging for WildFly with ELK Stack. The blog explains how to configure WildFly to send log messages to Logstash. It uses the highly modular nature of WildFly to install the jboss-logmanager-ext library and install it as a module. The configured logmanager adds a @timestamp field to the log messages sent to Logstash. These log messages are then sent to Elasticsearch.

Instead of following those steps, let's Docker KISS and use a pre-configured image to get you started.

Start the image as:

Make sure to substitute <DOCKER_HOST_IP> with the IP address of the machine where your Docker host is running. This can be easily found using docker-machine ip <MACHINE_NAME>.

View Logs using ELK Stack

Kibana runs on an embedded nginx and is configured to run on port 80 in docker-compose.yml. Let's view the logs using that.

  1. Access http://<DOCKER_HOST_IP> in your machine and it should show the default page (ELK Stack WildFly Pattern). The @timestamp field was created by the logmanager configured in WildFly.
  2. Click on Create to create an index pattern, and select the Discover tab to view the logs (ELK Stack WildFly Output).

Try connecting other sources and enjoy the power of distributed logs consolidated by ELK!

Some more references …

  • Logstash docs
  • Kibana docs
  • Elasticsearch The Definitive Guide

Distributed logging and visualization is a critical component in a microservices world where multiple services would come and go at a given time. A future blog will show how to use ELK stack with a microservices architecture based application.

Enjoy!

Multi-container Applications using Docker Compose and Swarm

Docker Compose to Orchestrate Containers shows how to run two linked Docker containers using Docker Compose. Clustering Using Docker Swarm shows how to configure a Docker Swarm cluster.

This blog will show how to run a multi-container application created using Docker Compose in a Docker Swarm cluster.

Updated versions of Docker Compose and Docker Swarm were released with Docker 1.7.0.

Docker 1.7.0 CLI

Get the latest Docker CLI:

and check the version as:

Docker Machine 0.3.0

Get the latest Docker Machine as:

and check the version as:

Docker Compose 1.3.0

Get the latest Docker Compose as:

and verify the version as:

Docker Swarm 0.3.0

Swarm is run as a Docker container and can be downloaded as:
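
A minimal sketch:

  docker pull swarm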

You can learn about Docker Swarm at docs.docker.com/swarm or Clustering using Docker Swarm.

Create Docker Swarm Cluster

The key components of Docker Swarm are shown below:

and explained in Clustering Using Docker Swarm.

  1. The easiest way of getting started with Swarm is by using the official Docker image:
    This command returns a discovery token, referred to as <TOKEN> in this document, which is the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    The last line is the <TOKEN>.

    Make sure to note this cluster id now as there is no means to list it later. This should be fixed with #661.

  2. Swarm is fully integrated with Docker Machine, and so this is the easiest way to get started. Let's create a Swarm Master next:

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, --swarm-master configures the created machine to be Swarm master. Swarm master creation talks to the hosted service on Docker Hub and informs that a master is created in the cluster.

  3. Connect to this newly created master and find some more information about it:

    This will show the output as:

  4. Create a Swarm node

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified by --swarm-discovery token://... and specifying the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:

    Replace <TOKEN> with the cluster id obtained in the previous step.

  6. List all the nodes created so far:

    This shows the output similar to the one below:

    The machines that are part of the cluster have the cluster's name in the SWARM column, blank otherwise. For example, “lab” and “summit2015” are standalone machines whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm nodes. There is a total of 4 containers running in this cluster – one Swarm agent on master and each node, and there is an additional swarm-agent-master running on the master.

  8. List nodes in the cluster with the following command:

    This shows the output as:

Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Docker Compose to Orchestrate Containers explains how multi container applications can be easily started using Docker Compose.

  1. Use the docker-compose.yml file explained in that blog to start the containers as:
    The docker-compose.yml file looks like:
  2. Check the containers running in the cluster as:
    to see the output as:
  3. “swarm-node-02” is running three containers, so let's look at the list of containers running there:
    and see the list of running containers as:
  4. Application can then be accessed again using:
    and shows the output as:

Latest instructions for this setup are always available at: github.com/javaee-samples/docker-java/blob/master/chapters/docker-swarm.adoc.

Enjoy!

Deploying Java EE Application to Docker Swarm Cluster (Tech Tip #88)

What is Docker Swarm?

Docker Swarm provides native clustering for Docker. Clustering using Docker Swarm 0.2.0 provides a basic introduction to Docker Swarm, and shows how to create a simple three-node cluster. As a refresher, the key components of Docker Swarm are shown below:

In short, Swarm Manager is a pre-defined Docker Host, and is a single point for all administration. Additional Docker hosts are identified as Nodes and communicate with the Manager using TCP. By default, Swarm uses hosted Discovery Service, based on Docker Hub, using tokens to discover nodes that are part of a cluster. Each node runs a Node Agent that registers the referenced Docker daemon, monitors it, and updates the Discovery Service with the node’s status. The containers run on a node.

That blog provides complete details, but a quick summary to create the cluster is shown below:

Listing the cluster shows:

It has one master and two nodes.

Deploy a Java EE application to Docker Swarm

All hosts in the cluster are accessible using a single, virtual host. Swarm serves the standard Docker API, so any tool that communicates with a single Docker host can scale to multiple Docker hosts by communicating with this virtual host.

Docker Container Linking Across Multiple Hosts explains how to link containers across multiple Docker hosts. It deploys a Java EE 7 application to WildFly on one Docker host, and connects it with a MySQL container running on a different Docker host. We can deploy both of these containers using the virtual host, and they will then be deployed to the Docker Swarm cluster.

Let's get started!

MySQL on Docker Swarm

  1. Start the MySQL container
  2. Status of the container can be seen as:
    It shows the container is running on swarm-node-01.

    Make sure you are connected to the Docker Swarm cluster using eval $(docker-machine env --swarm swarm-master).

  3. Find IP address of the host where this container is started:

    Note IP address of the node where MySQL server is running. This will be used when starting WildFly application server later.

    ps: Filtering by name seems to not return accurate results (#10897).

WildFly on Docker Swarm

  1. Start WildFly application server by passing the IP address of the host and the port on which MySQL server is running:

  2. Status of the container can be seen as:

    It shows the container is running on swarm-node-02. IP address of the host is also shown in the PORTS column.

    As explained in Tech Tip #69, the JDBC URL of the data source uses the specified IP address and port for connecting with the MySQL server. However, passing the IP address is very brittle as the MySQL server may restart on a different Docker host. This is filed as #773.

  3. Access the application at:

    This is using the IP address of the host where the container is started.

Enjoy!

 

WildFly Swarm: Building Microservices with Java EE

Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away

Antoine de Saint-Exupery

This quote by the French writer Antoine de Saint-Exupery was made to substantiate that often less is more. This is true for an architect, artist, designer, writer, runner, software developer, or any other profession. Simplicity, minimalism, cutting down the cruft always goes a long way and has several advantages as opposed to something bloated.

What is WildFly Swarm?

WildFly is a lightweight, flexible, feature-rich, Java EE 7 compliant application server. WildFly 9 even introduced a 27MB Servlet-only distribution. These are very well suited for your enterprise and web applications.

WildFly Swarm takes this a notch higher. From the announcement:

WildFly Swarm is a new sidecar project supporting WildFly 9.x to enable deconstructing the WildFly AS and pasting just enough of it back together with your application to create a self-contained executable jar.

WildFly Swarm

The typical application development model for a Java EE application is to create an EAR or WAR archive and deploy it in an application server. All the dependencies, such as Java EE implementations, are packaged in the application server and provide the functionality required by the application classes. Multiple archives can be deployed and they all share the same libraries. This is a well-understood model and has been used over the past several years.

WildFly Swarm turns the tables by creating a “fat jar” that has all the dependencies packaged in a JAR file. This includes a minimalist version of WildFly, any required dependencies, and of course, the application code itself. The application can simply be run using java -jar.

Each fat jar could possibly be a microservice, which can then be independently upgraded, replaced, or scaled. Each fat jar would typically follow the single responsibility principle and thus will have only the required dependencies packaged. Each JAR can use polyglot persistence, and use only the persistence mechanism that is required.

Show me the code!

A Java EE application can be packaged as WildFly Swarm fat jar by adding a Maven dependency and a plugin. Complete source code for a simple JAX-RS sample is available at github.com/arun-gupta/wildfly-samples/tree/master/swarm.

WildFly Swarm Maven Dependency

Add the following Maven dependency in pom.xml:

WildFly Swarm Maven Plugin

Add the following Maven plugin in pom.xml:

Create WildFly Swarm Fat Jar

The fat jar can be easily created by invoking the standard Maven target:
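
A minimal sketch of the standard Maven target:

  mvn package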

This generates a JAR file using the usual Maven conventions, and appends -swarm at the end. The generated JAR file name in our sample is swarm-1.0-SNAPSHOT-swarm.jar.

The generated JAR file is ~30MB, has 134 JARs (all in the m2repo directory), and 211 classes. The application code is bundled in app/swarm-1.0-SNAPSHOT.war.

Run WildFly Swarm Fat Jar

This fat jar can be run as:
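
A minimal sketch, using the JAR name generated above:

  java -jar target/swarm-1.0-SNAPSHOT-swarm.jar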

The response can be verified as:

WildFly Swarm Release blog refers to lots of blogs about Servlet, JAX-RS with ShrinkWrap, DataSource via Deployment, Messaging and JAX-RS, and much more.

WildFly Swarm Next Steps

This is only the 1.0.0.Alpha1 release, so feel free to try out the samples and give us feedback by filing an issue.

You have the power of all WildFly subsystems, and can even create an embeddable Java EE container as shown in the release blog:

Subsequent blogs will show how a microservice can be easily created using WildFly Swarm.

WildFly Swarm Stay Connected

You can keep up with the project through the WildFly HipChat room, @wildflyswarm on Twitter, or through GitHub Issues.

WildFly 9 on NetBeans, Eclipse, IntelliJ, OpenShift, and Maven (Tech Tip #86)


WildFly 9 CR1 was recently released. Lots of cool features are included:

  • Intelligent load balancing
  • HTTP/2 and SPDY support
  • A new offline CLI mode
  • Graceful single node shutdown
  • A new Servlet-only distribution

And this is above the usual Java EE 7 compliance!

This blog is a quick check to verify that it works in all three major IDEs and OpenShift.

WildFly 9 and NetBeans

Let's start with NetBeans 8.0.x first. The screenshot shows WildFly 9 CR1 configured in NetBeans and started. The log is shown in the console.

WildFly 9 CR1 on NetBeans

Complete instructions to setup WildFly in NetBeans are in NetBeans 8 and WildFly 8.

WildFly 9 and Eclipse

Getting Started with JBoss Tools and WildFly 8 shows how to configure WildFly with JBoss Tools. Here are the series of snapshots that shows configuring WildFly 9 in JBoss Tools with Eclipse Mars M6.

A new experimental runtime …

WildFly 9 CR1 Experimental

Specify the directory …

WildFly 9 CR1 Eclipse New Runtime

Now WildFly 9 is configured as a Server in Eclipse …

WildFly 9 CR1 Eclipse Servers

And finally the server is up and running …

WildFly 9 CR1 Eclipse Console

Complete details, including download and update center coordinates, are explained at JBoss Tools Alpha 2 for Eclipse Mars.

WildFly 9 and IntelliJ

WildFly 8 and IntelliJ IDEA Screencast provides complete details on how to set up IntelliJ with WildFly. The snapshot below shows WildFly 9 configured in IntelliJ 14.1.2.

WildFly 9 CR1 with IntelliJ 14

WildFly 9 and OpenShift

Creating an OpenShift application is pretty straightforward as well:

This creates a new application and uses WildFly 9 as the underlying application server. Complete details about the OpenShift cartridge are at github.com/openshift-cartridges/openshift-wildfly-cartridge/tree/wildfly-9. You can find out there how to create an OpenShift application from an existing application and how to connect to this WildFly instance using the JBoss CLI.

WildFly 8 CR1 on OpenShift also provides more details.

WildFly 9 and Maven

WildFly Maven Plugin provides the latest information about how to get started with the WildFly Maven plugin.

But you just need to fire up a WildFly server as:
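
A hedged sketch using the plugin's start goal:

  mvn wildfly:start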

And then deploy the Java EE 7 Movieplex application as:
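
A hedged sketch using the plugin's deploy goal:

  mvn wildfly:deploy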

And the plugin definition is very simple:

Enjoy!

WildFly 9.0 CR1 is Released – HTTP/2, Intelligent Load Balancing, Graceful shutdown, Offline CLI, more


WildFly 9.0 CR1 is now released, download from wildfly.org/downloads (zip)!

It builds upon the Java EE 7 compliance already achieved in earlier versions and adds the following features:

  • Intelligent load balancing
  • HTTP/2 and SPDY support
  • A new offline CLI mode
  • Server suspend, resume, and graceful shutdown
  • A new Servlet-only distribution

Release Notes provide complete details. 86 issues were fixed since 9.0.0 Beta 2.

How do you get started?

  • Download zip and unzip
  • Start as:
  • Access the main page at localhost:8080, as shown in the WildFly 9 Welcome screen
  • Shutting down the server now shows the following log:

You can already configure this in JBoss Tools Alpha 2. It's currently not supported in NetBeans and has been filed as #252197.

Configure JRebel with Docker containers – Tech Tip #81

JRebel allows you to skip build and redeploy process by instantly deploying your application to the application server of your choice. It is supported in all the major IDEs such as NetBeans, Eclipse, and IntelliJ. It is also supported in a wide variety of application servers such as JBoss EAP, WildFly, WebLogic, WebsFear (err, WebSphere), Tomcat, and many others.

You can easily get started with JRebel in JBoss Developer Studio  or Integrate JRebel with JBoss on your local desktop. It can also be easily used with JBoss Developer Studio and Ticket Monster on OpenShift.

This Tech Tip will explain how to set up JRebel with Docker containers. Specifically, we'll use the sample application provided by the Java EE 7 Hands-on Lab (jrebel branch), JBoss Tools with Eclipse Mars M5, and run the sample application in a WildFly Docker container.

Many thanks to Adam Koblentz (@akoblentz) for helping me through the steps!

Let's get started!

Install JRebel in Eclipse

JRebel runs in three modes:

  • Local: App server is running from inside the IDE
  • External: App server is running from outside the IDE, such as using CLI, but on the same machine
  • Remote: App server is running on a different machine, VM, container, or cloud

Docker containers need to be configured using the “remote” mode.

  1. Install JBoss Tools as explained at tools.jboss.org/downloads/. JRebel's remote mode can only be enabled using the IDE. Install the JRebel plugin from Eclipse Marketplace.

Package rebel.xml and rebel-remote.xml with the WAR

These files define the location of classes and resources in your archive.

  1. Clone the Java EE 7 HOL repo:
  2. Import the Maven project (from the solution directory) in the IDE, right-click on the project, select JRebel menu, and click on “Enable JRebel Nature”. This will generate rebel.xml in src/main/resources directory and would look something like:
  3. Right-click on the project again, and select “Enable Remoting”. This generates rebel-remote.xml, in src/main/resources directory again, and will look like:
    This needs to be done on the machine where JRebel will be used in the IDE. This will ensure that the public key is generated appropriately.
  4. Package your application as
    This will package rebel.xml and rebel-remote.xml in the WAR file.

Configure and Run the Application Server

The application server needs to know about the JRebel agent and the platform-specific library. Both of these files are available from Eclipse if JRebel was installed earlier. On Mac these files are available in the eclipse/mars/m5/eclipse/plugins/org.zeroturnaround.eclipse.embedder_6.1.1.RELEASE-201503121801/jr6/jrebel/ directory. The exact name will very likely differ in your case.

  1. Build the image using the Dockerfile:
    The key parts in this image are:

    1. Using the official jboss/wildfly Docker image
    2. Copying the JRebel agent and platform-specific library to the image
    3. Configuring application server such that it knows about the “remote” mode and platform-specific library
    4. Start WildFly
    5. Downloads the pre-built WAR file from GitHub. This will not work for you, and you’ll need to replace it with something like:
      This WAR file is the same that was generated earlier.
  2. Actually build the image as:
  3. Run the container as:
    and this should show something like:
    JRebel license information is a good sign that everything is configured properly.

    If you used Docker Machine to Setup Docker Host then the application should now be accessible at 192.168.99.100:8080/movieplex7/.

Configure Eclipse

Last step is to configure Eclipse so that it knows where the application is deployed.

  1. In Eclipse, right-click on the project, select JRebel, Advanced Properties
  2. Click on “Edit” on next to “Deployment URLs”
  3. Click on Add and specify the URL of the application, 192.168.99.100:8080/movieplex7/ in our case.
  4. Click on Continue, Apply, OK.

Voila, the configuration is now complete.

Now, changing any class, adding any method, or updating any entity, HTML, or JSF page will push the changes to the Docker container instantly. No need to redeploy the application.

Enjoy!