Category Archives: java

Creating Smaller Java Image using Docker Multi-stage Build

Two of the announcements at DockerCon 2017 directly relevant to Java developers are:

  • Docker Multi-stage build
  • Oracle JRE in Docker Store

This blog explains the purpose of Docker multi-stage builds and provides examples of how they help us generate smaller and more efficient Java Docker images.

Docker Multi-stage Build

Just show me the code: github.com/arun-gupta/docker-java-multistage.

What is the issue?

Building a Docker image for a Java application typically involves building the application and packaging the generated artifact into an image. A Java developer would likely use Maven or Gradle to build a JAR or WAR file. If you are using the Maven base image to build the application, then it will download the required dependencies from the configured repositories and keep them in the image. The number of JARs in the local repository could be significant, depending upon the number of dependencies in the pom.xml. This can leave a lot of cruft in the image.

Let’s take a look at a sample Dockerfile:
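
(The actual file is in the repo linked above; the sketch below is representative, with the WildFly version, URL, and paths as placeholders.)

  FROM maven:3.5-jdk-8

  # Copy the application source and build the WAR (downloads all Maven dependencies into the image)
  COPY . /usr/src/app
  WORKDIR /usr/src/app
  RUN mvn clean package

  # Download and unpack WildFly explicitly
  RUN curl -L https://download.jboss.org/wildfly/10.1.0.Final/wildfly-10.1.0.Final.tar.gz | tar xz -C /opt

  # Copy the generated artifact to WildFly's deployments directory
  RUN cp target/*.war /opt/wildfly-10.1.0.Final/standalone/deployments/

  # Start WildFly
  CMD ["/opt/wildfly-10.1.0.Final/bin/standalone.sh", "-b", "0.0.0.0"]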

In this Dockerfile:

  • maven:3.5-jdk-8 is used as the base image
  • Application source code is copied to the image
  • Maven is used to build the application artifact
  • WildFly is downloaded and installed
  • Generated artifact is copied to the deployments directory of WildFly
  • Finally, WildFly is started

There are several issues with this kind of flow:

  • Using maven as the base image restricts what functionality is available in the image. This requires WildFly to be downloaded and configured explicitly.
  • Building the artifact downloads all Maven dependencies. These stay in the image and are not needed at runtime. This causes unnecessary bloat in the image size.
  • A change in the WildFly version requires updating the Dockerfile. This would be much easier if we could use the jboss/wildfly base image by itself.
  • In addition, unit tests may run before packaging the artifact and integration tests after the image is created. The test dependencies and results do not need to live in the production image either.

There are other ways to build the Docker image, for example, splitting the Dockerfile into two files. The first file builds the artifact and copies it to a common location using volume mapping. The second file then picks up the generated artifact and uses a lean base image. This approach also has issues: multiple Dockerfiles need to be maintained separately, and there is an out-of-band hand-off between the two Dockerfiles.

Let’s see how these issues are resolved with multi-stage build.

What are Docker multi-stage builds?

A multi-stage build allows multiple FROM statements in a Dockerfile. The instructions following each FROM statement, up until the next one, create an intermediate image. The final FROM statement defines the final base image. Artifacts from intermediate stages can be copied using COPY --from=<image-number>, starting from 0 for the first base image. The artifacts not copied over are discarded. This keeps the final image lean and includes only the relevant artifacts.

The FROM syntax is updated to specify a stage name using as <stage-name>. For example:
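
(BUILD here is just an example stage name; it is the one used later in this post.)

  FROM maven:3.5-jdk-8 as BUILD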

This allows the stage name, instead of the number, to be used with the --from option.

Let’s take a look at a sample Dockerfile:
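
(Again, the actual file is in the repo; a representative sketch, with paths as placeholders, looks like this.)

  # Stage 1: build the WAR file using the Maven image
  FROM maven:3.5-jdk-8 as BUILD
  COPY . /usr/src/app
  WORKDIR /usr/src/app
  RUN mvn clean package

  # Stage 2: copy only the generated WAR into the standard WildFly image
  FROM jboss/wildfly:10.1.0.Final
  COPY --from=BUILD /usr/src/app/target/*.war /opt/jboss/wildfly/standalone/deployments/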

In this Dockerfile:

  • There are two FROM instructions. This means this is a two-stage build.
  • maven:3.5-jdk-8 is the base image for the first build. This is used to build the WAR file for the application. The first stage is named BUILD.
  • jboss/wildfly:10.1.0.Final is the second and final base image for the build. The WAR file generated in the first stage is copied over to this stage using the COPY --from syntax. The file is copied directly into the WildFly deployments directory.

Let’s take a look at some of the advantages of this approach.

Advantages of Docker multi-stage build

  • One Dockerfile has the entire build process defined. There is no need to have separate Dockerfiles and then coordinate the transfer of the artifact between the “build” Dockerfile and the “run” Dockerfile using volume mapping.
  • The base image for the final image can be chosen appropriately to meet the runtime needs. This helps reduce the overall size of the runtime image. Additionally, the cruft from build time is discarded with the intermediate stages.
  • The standard WildFly base image is used instead of downloading and configuring the distribution manually. This makes it a lot easier to update the image when a newer tag is released.

The size of the image built using a single Dockerfile is 816MB. In contrast, the size of the image built using the multi-stage build is 584MB.

Docker Multi-stage Java Image

So, using a multi-stage build helps create a much smaller image.

Is this a typical way of building a Docker image? Are there other ways by which the image size can be reduced?

Sure, you can use docker-maven-plugin, as shown at github.com/arun-gupta/docker-java-sample, to build/test the image locally and then push it to a repo. But the multi-stage mechanism allows you to generate and package the artifact without any other dependency, including Java.

Sure, the maven:jdk-8-alpine image can be used to create a smaller image. But then you’ll have to create or find a WildFly image built on jdk-8-alpine, or something similar, as well. And the cruft (the Maven repository, two Dockerfiles, sharing the artifact using volume mapping or some similar technique) would still be there.

There are other ways to craft your build cycle. But if you are using Dockerfile to build your artifact then you should seriously consider multi-stage builds.

Read more discussion in PR #31257.

As mentioned earlier, the complete code for this is available at github.com/arun-gupta/docker-java-multistage.

Sign up for Docker Online Meetup to get a DockerCon 2017 recap.

Service Discovery with Java and Database application in Kubernetes

This blog will show how a simple Java application can talk to a database using service discovery in Kubernetes.

 Kubernetes Logo WildFly Logo

Service Discovery with Java and Database application in DC/OS explains why service discovery is an important aspect for a multi-container application. That blog also explained how this can be done for DC/OS.

Let’s see how this can be accomplished in Kubernetes with a single instance of the application server and the database server. This blog will use WildFly as the application server and Couchbase as the database.

This blog will use the following main steps:

  • Start Kubernetes one-node cluster
  • Kubernetes application definition
  • Deploy the application
  • Access the application

Start Kubernetes Cluster

Minikube is the easiest way to start a one-node Kubernetes cluster in a VM on your laptop. The binary needs to be downloaded first and then installed.

Complete installation instructions are available at github.com/kubernetes/minikube.

The latest release can be installed on OSX as:

It also requires kubectl to be installed. Installing and Setting up kubectl provides detailed instructions on how to set up kubectl. On OSX, it can be installed as:

Now, start the cluster as:
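
Assuming the minikube binary is on the PATH:

  minikube start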

The kubectl version command shows more details about the kubectl client and minikube server version:

More details about the cluster can be obtained using the kubectl cluster-info command:

Kubernetes Application Definition

Application definition is defined at github.com/arun-gupta/kubernetes-java-sample/blob/master/service-discovery.yml. It consists of:

  • A Couchbase service
  • Couchbase replica set with a single pod
  • A WildFly replica set with a single pod

The key part is that the value of the COUCHBASE_URI environment variable is the name of the Couchbase service. This allows the application deployed in WildFly to dynamically discover the service and communicate with the database.
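
An abridged sketch of the relevant pieces (the Couchbase service and the WildFly replica set; labels, the Couchbase replica set, and other fields are assumptions here, and the exact definition is in the linked file):

  apiVersion: v1
  kind: Service
  metadata:
    name: couchbase-service
  spec:
    selector:
      app: couchbase-rs-pod
    ports:
      - port: 8091
  ---
  apiVersion: extensions/v1beta1
  kind: ReplicaSet
  metadata:
    name: wildfly-rs
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: wildfly-rs-pod
      spec:
        containers:
          - name: wildfly
            image: arungupta/wildfly-couchbase-javaee:travel
            env:
              - name: COUCHBASE_URI
                value: couchbase-service   # name of the Couchbase service defined above
            ports:
              - containerPort: 8080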

arungupta/couchbase:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/couchbase/Dockerfile.

arungupta/wildfly-couchbase-javaee:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/Dockerfile.

Java EE application waits for database initialization to be complete before it starts querying the database. This can be seen at github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java#L25.

Deploy Application

This application can be deployed as:
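
Assuming the configuration file has been downloaded locally:

  kubectl create -f service-discovery.yml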

The list of services and replica sets can be shown using the command kubectl get svc,rs:

Logs for the single replica of Couchbase can be obtained using the command kubectl logs rs/couchbase-rs:

Logs for the WildFly replica set can be seen using the command kubectl logs rs/wildfly-rs:

Access Application

The kubectl proxy command starts a proxy to the Kubernetes API server. Let’s start a Kubernetes proxy to access our application:
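
The command itself is simply:

  kubectl proxy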

Expose the WildFly replica set as a service using:
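
A sketch of the command; the service name is arbitrary and 8080 is assumed to be the WildFly HTTP port:

  kubectl expose rs wildfly-rs --name=wildfly-service --port=8080 --target-port=8080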

The list of services can be seen again using kubectl get svc command:

Now, the application is accessible at:

A formatted output looks like:

Now, new pods may be added as part of Couchbase service by scaling the replica set. Existing pods may be terminated or get rescheduled. But the Java EE application will continue to access the database service using the logical name.

This blog showed how a simple Java application can talk to a database using service discovery in Kubernetes.

For further information check out:

  • Kubernetes Docs
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on Couchbase Forums or Stack Overflow
  • Download Couchbase

Minikube – Rapid Dev & Testing for Kubernetes

One of the attendees from the Kubernetes for Java Developers training suggested trying minikube for simplified Kubernetes dev and testing. This blog will show how to get started with minikube using a simple Java application.

minikube-logo

Minikube starts a single node Kubernetes cluster on your local machine for rapid development and testing. Requirements lists the exact set of requirements for different operating systems.

This blog will show:

  • Start one node Kubernetes cluster
  • Run Couchbase service
  • Run Java application
  • View Kubernetes Dashboard

All Kubernetes resource description files used in this blog are at github.com/arun-gupta/kubernetes-java-sample/tree/master/maven.

Start Kubernetes Cluster using Minikube

Create a new directory with the name minikube.

In that directory, download kubectl CLI:

Download minikube CLI:

Start the cluster:

The list of nodes can be seen:

More details about the cluster can be obtained using the kubectl cluster-info command:

Behind the scenes, a VirtualBox VM is started.

The complete set of supported commands can be seen by using --help:

Run Couchbase Service

Create a Couchbase service:

This will start a Couchbase service. The service is using the pods created by the replication controller. The replication controller creates a single node Couchbase server.

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/couchbase-service.yml and looks like:
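
A representative sketch of such a file (a Service backed by a single-replica ReplicationController; the image name and labels are placeholders, and the exact definition is in the linked file):

  apiVersion: v1
  kind: Service
  metadata:
    name: couchbase-service
  spec:
    selector:
      app: couchbase-rc-pod
    ports:
      - port: 8091
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: couchbase-rc
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: couchbase-rc-pod
      spec:
        containers:
          - name: couchbase
            image: arungupta/couchbase   # placeholder for the pre-configured Couchbase image
            ports:
              - containerPort: 8091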

Run Java Application

Run the application:

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/bootiful-couchbase.yml and looks like:
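
A sketch of the job definition (the essential parts are the image and the COUCHBASE_URI environment variable; other fields are abridged):

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: bootiful-couchbase
  spec:
    template:
      metadata:
        name: bootiful-couchbase
      spec:
        containers:
          - name: bootiful-couchbase
            image: arungupta/bootiful-couchbase
            env:
              - name: COUCHBASE_URI
                value: couchbase-service   # name of the Couchbase service created earlier
        restartPolicy: Never               # run-once semantics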

This is a run-once job which runs a Java (Spring Boot) application and upserts (inserts or updates) a JSON document in Couchbase.

In this job, the COUCHBASE_URI environment variable is set to couchbase-service. This is the service name created earlier. The Docker image used for this job is arungupta/bootiful-couchbase and is created using fabric8-maven-plugin, as shown at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/webapp/pom.xml#L57-L68. Specifically, the command for the Docker image is:
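
Roughly (the JAR path inside the image is a placeholder):

  java -Dspring.couchbase.bootstrap-hosts=$COUCHBASE_URI -jar /maven/bootiful-couchbase.jar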

This ensures that the COUCHBASE_URI environment variable overrides the spring.couchbase.bootstrap-hosts property defined in application.properties of the Spring Boot application.

Kubernetes Dashboard

Kubernetes 1.4 included an updated dashboard. For minikube, this can be opened using the following command:
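
That command is:

  minikube dashboard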

 

The default view is shown below:
minikube-dashboard-1-4

But in our case, a few resources have already been created, so it will look as shown:

minikube-dashboard-couchbase

Notice, our Jobs, Replication Controllers and Pods are shown here.

Shutdown Kubernetes Cluster

The cluster can be easily shut down:

couchbase.com/containers provides more details about running Couchbase using different orchestration frameworks. Further references:

  • Couchbase Forums or StackOverflow
  • Follow us at @couchbasedev or @couchbase
  • Read more about Couchbase Server

Source: blog.couchbase.com/2016/september/minikube-rapid-dev–testing-kubernetes

Getting Started with Kubernetes 1.4 using Spring Boot and Couchbase

Kubernetes 1.4 was released earlier this week. Read the blog announcement and CHANGELOG. There are quite a few new features in this release but the key ones that I’m excited about are:

  • Install Kubernetes using the kubeadm command. This is in addition to the usual mechanism of downloading from https://github.com/kubernetes/kubernetes/releases. The kubeadm init and kubeadm join commands look very similar to docker swarm init and docker swarm join for Docker Swarm Mode.
  • Federated Replica Sets
  • ScheduledJob allows batch jobs to be run at regular intervals.
  • Constraining pods to a node and affinity and anti-affinity of pods
  • Priority scheduling of pods
  • Nice looking Kubernetes Dashboard (more on this later)

This blog will show:

  • Create a Kubernetes cluster using Amazon Web Services
  • Create a Couchbase service
  • Run a Spring Boot application that stores a JSON document in Couchbase

All the resource description files in this blog are at github.com/arun-gupta/kubernetes-java-sample/tree/master/maven.

Start Kubernetes Cluster

Download the binary from github.com/kubernetes/kubernetes/releases/download/v1.4.0/kubernetes.tar.gz and extract it.

Include kubernetes/cluster in PATH

Start a 2-node Kubernetes cluster:
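
A sketch of the command, assuming the AWS provider and the standard kube-up.sh environment variables:

  KUBERNETES_PROVIDER=aws NUM_NODES=2 ./kubernetes/cluster/kube-up.sh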

The log will be shown as:

This shows that the Kubernetes cluster has started successfully.

Deploy Couchbase Service

Create Couchbase service and replication controller:

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/couchbase-service.yml.

This creates a Couchbase service and the backing replication controller. Name of the service is couchbase-service. This will be used later by the Spring Boot application to communicate with the database.

Check the status of pods:

Note how the pod status changes from ContainerCreating to Running. The image is downloaded and started in the meantime.

Run Spring Boot Application

Run the application:

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/bootiful-couchbase.yml. In this service, COUCHBASE_URI environment variable value is set to couchbase-service. This is the service name created earlier.

Docker image used for this service is arungupta/bootiful-couchbase and is created using fabric8-maven-plugin as shown at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/webapp/pom.xml#L57-L68. Specifically, the command for the Docker image is:

This ensures that the COUCHBASE_URI environment variable overrides the spring.couchbase.bootstrap-hosts property defined in application.properties of the Spring Boot application.

Get the logs:

The main output statement to look for is:

This indicates that the JSON document is upserted (either inserted or updated) in the Couchbase database.

Kubernetes Dashboard

The Kubernetes Dashboard now looks more comprehensive and is claimed to have 90% parity with the CLI. Use the kubectl.sh config view command to view the configuration information about the cluster. It looks like:

The clusters.cluster.server property value shows the location of the Kubernetes master. The users property shows two users that can be used to access the dashboard. The second one uses basic authentication, so copy the username and password property values. In our case, the Dashboard UI is accessible at https://52.40.9.27/ui.

kubernetes-dashboard-1-4

All the Kubernetes resources can be easily seen in this fancy dashboard.

Shutdown Kubernetes Cluster

Finally, shut down the Kubernetes cluster:

couchbase.com/containers provides more details about running Couchbase using different orchestration frameworks.

Further references:

  • Couchbase Forums or StackOverflow
  • Follow us at @couchbasedev or @couchbase
  • Read more about Couchbase Server

Source: blog.couchbase.com/2016/september/kubernetes-1.4-spring-boot-couchbase

Deployment Pipeline using Docker, Jenkins, Java and Couchbase

This blog explains how to create a Deployment Pipeline using Jenkins and Docker for a Java application talking to a database.

Jenkins supports the creation of pipelines. They are built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language.

The script, typically called Jenkinsfile, defines multiple steps to execute both simple and complex tasks according to the parameters that you establish. Once created, pipelines can build code and orchestrate the work required to drive applications from commit to delivery.

A pipeline consists of steps, nodes, and stages. A pipeline is executed on a node, a computer that is part of the Jenkins installation. A pipeline often consists of multiple stages. A stage consists of multiple steps. Read Getting Started with Pipeline for more details.

For our application, here is the basic flow:

docker-pipeline-jenkins

Complete source code for the application used is at github.com/arun-gupta/docker-jenkins-pipeline.

The application is defined in the webapp directory. It opens a connection to the Couchbase database and stores a simple JSON document using Couchbase Java SDK. The application also has a test that verifies that the database indeed contains the document that was persisted.

Many thanks to @alexsotob for helping me with Jenkins configuration.

Let’s get started!

Download and Install Jenkins

  • Download Jenkins from jenkins.io. This was tested with Jenkins 2.21.
  • Start Jenkins:
    This command starts Jenkins by specifying the home directory where all the configuration information is stored. It also defines the port on which Jenkins is listening, 9090 in this case.
  • First start of Jenkins shows the following message in the console:
    Copy the password shown here. This will be used to unlock Jenkins.
  • Access the Jenkins console at localhost:9090 and paste the password:
    docker-pipeline-jenkins-unlock
    Click on Next.
  • Create the first admin user as shown:
    docker-pipeline-jenkins-create-admin-user
    Click on Save and Finish.
  • Click on Install suggested plugins:
    docker-pipeline-jenkins-install-suggested-plugins
    A bunch of default plugins are installed:
    docker-pipeline-jenkins-installing-suggested-plugins
    Found it surprising that Ant and Subversion are the default plugins.
  • Login screen is prompted.
    docker-pipeline-jenkins-login
    Enter the username and password specified earlier.
  • Finally, Jenkins is ready to use:
    docker-pipeline-jenkins-start-using

That’s quite a few steps to get started with basic Jenkins. Do I really have to jump through all these hoops to get started with Jenkins? Is there an easier, simpler, dumber, lazier way to start Jenkins? Follow Convention-over-Configuration and give me one-click pre-configured installation.

Install Jenkins Plugins

Install the required plugins in Jenkins.

  1. If your Java project is built using Maven, then you need to configure Maven in Jenkins. Click on Manage Jenkins, Global Tool Configuration, Maven installations, and specify the location of Maven.
    docker-pipeline-jenkins-configure-maven
    Name the tool Maven3 as that is the name used in the configuration later. Again a bit lame: why can’t Jenkins pick up the default location of Maven instead of expecting the user to specify a location?
  2. Click on Manage Jenkins, Manage Plugins, Available tab, search for docker pipe. Select CloudBees Docker Pipeline, click on Install without restart.
    docker-pipeline-jenkins-pipeline-plugin
    Click on Install without restart. The Docker Pipeline plugin understands the Jenkinsfile and executes the commands listed there.
  3. The next screen shows the list of plugins that are installed:
    docker-pipeline-jenkins-pipeline-plugin-restart-jenkins
    The last line shows that the CloudBees Docker Pipeline plugin is installed successfully. Select the Restart Jenkins checkbox. This will restart Jenkins as well.

Create Jenkins Job

Let’s create a job in Jenkins that will run the pipeline.

  1. After Jenkins restarts, it shows the login screen. Enter the username and password created earlier. This brings you back to the Installing Plugins/Upgrades page. Click on the Jenkins icon in the top left corner to see the main dashboard:
    docker-pipeline-jenkins-dashboard
  2. Click on create new jobs, give the name as docker-jenkins-pipeline, and choose the type as Pipeline:
    docker-pipeline-jenkins-create-project
    Click on OK.
  3. Configure Pipeline as shown:
    docker-pipeline-jenkins-configure-pipeline
    A local git repo is used in this case. You can certainly choose a repo hosted on GitHub. Further, this repo can be configured with a git hook or polled at a constant interval to trigger the pipeline.
    Click on Save to save the configuration.

Run Jenkins Build

Before you start the job, the Couchbase database needs to be explicitly started as:

This will be resolved after #9 is fixed.  Make sure you can access Couchbase at http://localhost:8091, use Administrator as the login and password as the password. Click on Data Buckets tab and see the books bucket created.

docker-pipeline-couchbase-books

Click on Build Now and you should see an output similar to:

docker-pipeline-jenkins-build-run

All green is good!

Let’s try to understand what happened behind the scenes.

Jenkinsfile describes how the pipeline is built. At the top level, it has four stages – Package, Create Docker Image, Run Application and Run Tests. Each stage is shown as a box in the Jenkins dashboard. The total time taken for each stage is shown in the box.
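
The actual Jenkinsfile is in the repo linked above; structurally it is roughly:

  node {
      stage 'Package'
      // build the fat JAR with Maven, skipping tests

      stage 'Create Docker Image'
      // build the application image, tagged with the build number

      stage 'Run Application'
      // start the application container against the Couchbase container

      stage 'Run Tests'
      // run mvn test against the running application; push the image on success
  }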

Let’s understand what happens in each stage.

  • Package – The application source code lives in the webapp directory. The Maven command mvn clean package -DskipTests is used to create a JAR file of the application. Note that the Maven project also includes the tests, which are explicitly skipped using -DskipTests. Typically, tests would be in a separate downstream project. The Maven project creates a fat JAR file of the application that includes all the dependencies.
  • Create Docker Image – The Docker image of the application is built using the Dockerfile in the webapp directory. The image simply includes the fat JAR and runs it using java -jar. Each image is tagged with the build number using ${env.BUILD_NUMBER}.
  • Run Application – Running the application involves running the application Docker container. The IP address of the database container is identified using the docker inspect command. The database container and the application container are both running in the default bridge network. This allows the two containers to communicate with each other. Another enhancement would be to run the pipeline in a Swarm mode cluster. This would require creating and using an overlay network.
  • Run Tests – Tests are run against the container using the mvn test command. If the tests pass, the image is pushed to Docker Hub; the test results are captured either way. This stage also shows the usage of a try/catch/finally block in the Jenkinsfile. The pushed image is available at hub.docker.com/r/arungupta/docker-jenkins-pipeline/tags/.

Some TODOs …

  • Move the tests to a downstream project (#7)
  • Use Git hook or poll to trigger pipeline (#8)
  • Automate database startup/shutdown (#9)
  • Run pipeline in a cluster of Docker Engines with Swarm mode (#10)
  • Show alternate configuration to push image to bintray (#11)

Another pain point is that global variables syntax does not seem to be documented anywhere. It is only available at <JENKINS-HOST>:<JENKINS-PORT>/job/docker-jenkins-pipeline/pipeline-syntax/globals. This is again slightly lame!

“not impossible, just not implemented yet” #sadpanda

Some further references to read:

  • Getting Started with the Jenkinsfile
  • CloudBees Docker Pipeline Plugin
  • CloudBees Docker Pipeline Plugin User Guide
  • Jenkinsfile DSL Reference
  • Jenkins Pipeline Talk from JavaZone 2016

More information about Couchbase:

  • Couchbase Developer Portal
  • Couchbase Forums
  • @couchbasedev or @couchbase

Feel free to file bugs at github.com/arun-gupta/docker-jenkins-pipeline/issues or send PR.

Source: blog.couchbase.com/2016/september/deployment-pipeline-docker-jenkins-java-couchbase

Docker Services, Stack and Distributed Application Bundle

docker-1.12

The first Release Candidate of Docker 1.12 was announced over two weeks ago. Several new features are planned for this release.

This blog will show how to create a Distributed Application Bundle from Docker Compose and deploy it as a Docker Stack in Docker Swarm Mode. Many thanks to @friism for helping me understand these concepts.

Let’s look at the features first:

  • Built-in orchestration: A typical application is defined using a Docker Compose file. This definition consists of multiple containers deployed on multiple hosts. This avoids a Single Point of Failure (SPOF) and keeps your application resilient. Multiple orchestration frameworks such as Docker Swarm, Kubernetes and Mesos allow you to orchestrate these applications. However, since this is such an important characteristic of the application, Docker Engine now has built-in orchestration. More details on this topic in a later blog.
  • Service: A replicated, distributed and load balanced service can be easily created using docker service create command. A “desired state” of the application, such as run 3 containers of Couchbase, is provided and the self-healing Docker engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, containers on that node are started on a different node. More on this in a later blog.
  • Zero-configuration Security: Docker 1.12 comes with mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node participating in the swarm, out of the box. More on this in a later blog.
  • Docker Stack and Distributed Application Bundle: Distributed Application Bundle, or DAB, is a multi-services distributable image format. Read further for more details.

So far, you can take a Dockerfile and create an image from it using the docker build command. A container can be started using the docker run command. Multiple containers can be easily started by giving that command multiple times. Or you can also use Docker Compose file and scale up your containers using the docker-compose scale command.

docker-lifecycle

An image is a portable format for a single container. A Distributed Application Bundle, or DAB, a new concept introduced in Docker 1.12, is a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime.

docker-stack-lifecycle

Learn more about DAB at docker.com/dab.

For simplicity, here is an analogy that can be drawn:

Dockerfile -> Image -> Container

Docker Compose -> Distributed Application Bundle -> Docker Stack

Let’s use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.

It’s important to note that this is an experimental feature in 1.12-RC2.

Create a Distributed Application Bundle from Docker Compose

Docker Compose CLI adds a new bundle command. More details can be found:

Now, let’s take a Docker Compose definition and create a DAB from it. Here is our Docker Compose definition:
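
The real file is linked right below; a representative sketch (service and image names here are placeholders) is:

  version: '2'
  services:
    db:
      container_name: db
      image: arungupta/couchbase                   # placeholder Couchbase image
      ports:
        - 8091:8091
    web:
      image: arungupta/wildfly-couchbase-javaee7   # placeholder WildFly + Java EE app image
      environment:
        - COUCHBASE_URI=db
      depends_on:
        - db
      ports:
        - 8080:8080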

This Compose file starts a WildFly and a Couchbase server. A Java EE application that connects to the Couchbase server is pre-deployed in the WildFly server and allows CRUD operations to be performed using the REST API.

The source for this file is at: github.com/arun-gupta/oreilly-docker-book/blob/master/hello-javaee/docker-compose.yml.

Generate an application bundle with it:
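
The new command is simply:

  docker-compose bundle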

depends_on only creates a dependency between two services and makes them start in a specific order. This only ensures that the Docker container is started; the application within the container may take longer to start. So this attribute only partially solves the problem. container_name gives a specific name to the container. Relying upon a specific container name is tight coupling and does not allow the container to be scaled. So both warnings can be ignored, for now.

This command generates a file using the Compose project name, which is the directory name. So in our case, hellojavaee.dsb file is generated. This file extension has been renamed to .dab in RC3.

The generated application bundle looks like:

This file provides a complete description of the services included in the application. I’m not entirely sure if Distributed Application Bundle is the most appropriate name; discuss this in #24250. It would be great if other container formats, such as rkt, or even VMs could be supported here. But for now, Docker is the only supported format.

Initialize Swarm Mode in Docker

As mentioned above, “desired state” is now maintained by Docker Swarm. And this is now baked into Docker Engine already.

Docker Swarm concepts have evolved as well and can be read at Swarm mode key concepts. A more detailed blog on this will be coming later.

But for this blog, a new command docker swarm is now added:

Initialize a Swarm node (as a manager) in the Docker Engine:
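
The basic form of the command (the exact flags may differ slightly in the release candidate) is:

  docker swarm init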

More details about this node can be found using docker node inspect self command.

The detailed output is verbose but the relevant section is:

The output shows that the node is a manager. For a single-node cluster, this node will also act as a worker.

 

More details about the cluster can be obtained using the docker swarm inspect command.

AcceptancePolicy shows that other worker nodes can join this cluster, but a manager requires explicit approval.

Deploy a Docker Stack

Create a stack using docker deploy command:
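
A rough sketch of the experimental syntax at the time, with the stack name matching the generated bundle file:

  docker deploy hellojavaee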

The command usage can certainly be simplified as discussed in #24249.

See the list of services:
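
That is:

  docker service ls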

The output shows that two services, WildFly and Couchbase, are running. A service is also a new concept introduced in Docker 1.12. It is what gives you the “desired state”, and Docker Engine works to maintain that state.

docker ps shows the list of containers running:

WildFly container starts up before the Couchbase container is up and running. This means the Java EE application tries to connect to the Couchbase server and fails. So the application never boots successfully.

Self-healing Docker Service

Docker Service maintains the “desired state” of an application. In our case, the desired state is to ensure that one, and only one, container for the service is running. If we remove the container, not the service, then the service will automatically start the container again.

Remove the container as:
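
For example (the container ID is a placeholder):

  docker rm -f <container-id>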

Note that you have to give -f because the container is already running. Docker 1.12’s self-healing mechanisms kick in and automatically restart the container. Now if you list the containers again:

This shows that a new container has been started.

Inspect the WildFly service:

Swarm assigns a random port to the service, or this can be manually updated using the docker service update command. In our case, port 8080 of the container is mapped to port 30004 on the host.

Verify the Application

Check that the application is successfully deployed:

Add a new book to the application:

Verify the books again:

Learn more about this Java EE application at github.com/arun-gupta/oreilly-docker-book/tree/master/hello-javaee.

This blog showed how to create a Distributed Application Bundle from Docker Compose and deploy it as Docker Stack in Docker Swarm Mode.

Docker Service and Stack References

  • Docker Service Create
  • FREE book from O’Reilly: Docker for Java Developers
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on @couchbasedev or Stackoverflow

Source: blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle

Docker 1.10, Machine 0.6.0, Compose 1.6.0 – better volumes and networking

Docker 1.10 is now released!

Docker Logo

Read about all the new features in Docker 1.10. A quick summary:

  • New Compose file format
  • Much better networking
  • Much better security
  • Swarm becomes 1.1, with Mesos integration

Read Docker 1.10 release notes.

Let’s look at some of the key components.

Docker Machine 0.6.0

Docker Machine makes it really easy to create Docker hosts on your computer, on cloud providers and inside your own data center. It creates servers, installs Docker on them, then configures the Docker client to talk to them.

Latest version can be installed as:

docker-machine now shows the Docker server version:

The latest server version is 1.10, so the docker-machine upgrade command can be used to fix that:

The updated list of Machines is now shown as:

Notice that Docker version is now 1.10.

Set up the environment variables such that Docker client can talk to it:
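
The usual pattern (substitute the name of your Machine):

  eval "$(docker-machine env <machine-name>)"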

Docker Client 1.10

Let’s download the latest client to connect to this Docker Engine.

Client and Server versions are shown separately.

Run Couchbase container as:
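
A sketch of the command; the image name is a placeholder for the pre-configured Couchbase image referenced below:

  docker run -d -p 8091:8091 arungupta/couchbase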

This starts up a fully-configured Couchbase server. It can be accessed at 192.168.99.100:8091 and looks as shown:

Docker 1.10 - Couchbase Console

Note, 192.168.99.100 is obtained using docker-machine ip <MACHINE-NAME>.

The Couchbase Developer Portal provides more details about the Couchbase Server.

Docker Compose 1.6.0

Docker Compose is a tool for defining and running complex applications with Docker. With Compose, you define a multi-container application in a single file, then spin your application up in a single command which does everything that needs to be done to get it running.

Learn more about Docker Compose 1.6.0.

Install the latest version:

The experimental flags --x-networking and --x-network-driver, introduced in Compose 1.5, have been removed. Networking is no longer experimental and is the recommended way to enable communication between containers.

Compose 1.6.0 requires Docker Engine 1.9.1 or later, or 1.10.0 if you’re using version 2 of the Compose File format.

Updating Compose File

Compose 1.6 introduces a new version of the Compose file. Read more details about Upgrading Compose File.

Compose 1.6 will continue to run older versions of Compose files. But now networking and volumes are first-class citizens.

Here is an example of version 1 of Compose file:
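
A minimal sketch of the version 1 format (the image name is a placeholder):

  mycouchbase:
    image: arungupta/couchbase
    ports:
      - 8091:8091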

Here is a version 2 of Compose file:
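
And the same service expressed in the version 2 format:

  version: '2'
  services:
    mycouchbase:
      image: arungupta/couchbase
      ports:
        - 8091:8091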

For simple use cases, the two main changes are:

  • Add a version: '2' line at the top of the file.
  • Indent the whole file by one level and put a services: key at the top.

The services in this Compose file are started as:
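
That is:

  docker-compose up -d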

This starts a fully configured Couchbase server based upon the image as explained at github.com/arun-gupta/docker-images/tree/master/couchbase-node.

Docker Swarm 1.1

Docker Swarm is native clustering for Docker. It allows you to create and access a pool of Docker hosts using the full suite of Docker tools. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

New experimental support for container rescheduling on node failure has been added.

Read more details about setting up Docker Swarm Cluster.

Finally, here are some useful links:

  • Docker Toolbox 1.10
  • Docker 1.10 Release Notes
  • Docker 1.10 Security Improvements
  • Docker for Java Developers

Enjoy!

JBoss EAP 7 and NoSQL using Java EE and Docker

JBoss EAP 7 Beta is now released, many congratulations to Red Hat and particularly to the WildFly team!

There are plenty of improvements coming in this release as documented in Release Notes. One of the major themes is Java EE 7 compliance.

JBoss EAP 7 and Java EE 7

IBM and Oracle already provide commercially supported Java EE 7-compliant application servers. And now Red Hat will be joining this party soon as well. Although WildFly has supported Java EE 7 for 2+ years, commercial support is critical for open source to be adopted enterprise-wide. So this is good news!

You can learn all about different Java EE 7 APIs in the DZone Refcardz that I authored along with @alrubinger.

Java EE 7 Refcardz

There are plenty of “hello world” Java EE 7 Samples that should all run with JBoss EAP. Hopefully somebody will update the pom.xml and add a new profile.

Why NoSQL?

If you are building a traditional enterprise application then you might be fine using an RDBMS. There are plenty of advantages of using RDBMS but using a NoSQL database instead has a few advantages:

  • No need to have a pre-defined schema, which makes them schema-less databases. Adding new properties to existing objects is easy and does not require ALTER TABLE. The unstructured data gives the flexibility to change the format of the data at any time without downtime or reduced service levels. Also, there are no joins happening on the server because there is no structure and thus no relations between documents.
  • Scalability, agility, and performance are more important than the entire set of functionality typically provided by an RDBMS. These databases provide eventual consistency and/or transactions restricted to single items, with more focus on CRUD.
  • NoSQL databases are designed to scale out (horizontally) instead of scale up (vertically). This is important knowing that databases, and everything else as well, are moving into the cloud. An RDBMS can scale out using sharding, but that requires complex management and is not for the faint of heart. Queries requiring JOINs across shards are extremely inefficient.
  • RDBMSs have an impedance mismatch between the database structure and the domain classes. An object-relational mapping, such as the one provided by the Java Persistence API or Hibernate, is needed in that case.
  • NoSQL databases are designed for less management, and simpler data models lead to lower administration costs as well.

So you are all excited about NoSQL now and want to learn more:

  • Why NoSQL?
  • Why do successful enterprises rely on NoSQL?
  • Top 10 Enterprise NoSQL Usecases

In short, there are four different types of NoSQL databases:

  • Document: Couchbase, Mongo, and others
  • Key/Value: Couchbase, Redis, and others
  • Graph: Neo4J, OrientDB, and others
  • Column: Cassandra and others

Java EE 7 provides the Java Persistence API, which does not provide any support for NoSQL. So how do you get started with NoSQL on JBoss EAP 7?

This blog will show how to query a Couchbase database using a simple Java EE application deployed on JBoss EAP 7 Beta.

What is Couchbase?

Couchbase is an open-source, NoSQL, document database. It allows you to access, index, and query JSON documents while taking advantage of integrated distributed caching for high-performance data access.

Developers can write applications for Couchbase in different languages (Java, Go, .NET, Node, PHP, Python, C) using multiple SDKs. This blog will show how you can easily create a CRUD application using the Java SDK for Couchbase.

Run JBoss EAP 7

There are two ways to start JBoss EAP 7.

Download and Run

  • Download JBoss EAP 7 Beta and unzip.
  • Start the application server as:
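
Assuming the distribution unpacked into a jboss-eap-7.0 directory (the exact directory name depends on the Beta build):

  ./jboss-eap-7.0/bin/standalone.sh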

Docker Run

In a containerized world, you just docker run your JBoss EAP. However, a JBoss EAP image does not exist on Docker Hub, so the image needs to be built explicitly. You still need to download JBoss EAP and then use the following Dockerfile to build the image:
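
A representative sketch of such a Dockerfile (the base image and directory names are placeholders; the key point is binding the application and management interfaces to all network addresses):

  FROM jboss/base-jdk:8

  # Copy the locally downloaded and unpacked JBoss EAP 7 distribution into the image
  COPY jboss-eap-7.0 /opt/jboss/jboss-eap-7.0

  EXPOSE 8080 9990

  # Bind the application and management interfaces to all network interfaces
  CMD ["/opt/jboss/jboss-eap-7.0/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]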

The image is built as:
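
For example (the tag is a placeholder):

  docker build -t jboss-eap7 .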

And then you can run the JBoss EAP 7 container as:
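
For example, mapping the application and management ports:

  docker run -d -p 8080:8080 -p 9990:9990 jboss-eap7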

Notice how the application and management ports are bound to all network interfaces. This will simplify deploying the application to this JBoss EAP instance later.

Stop the server as we will show an easier way to start it later.

Start Application Server and Database

The Java EE application will provide an HTTP CRUD interface over JSON documents stored in Couchbase. The application itself will be deployed on JBoss EAP 7 Beta. So both Couchbase and JBoss EAP need to be started.

Use the Docker Compose file from github.com/arun-gupta/docker-images/blob/master/jboss-eap7-nosql/docker-compose.yml to start Couchbase and JBoss EAP 7 container:

The application is started as:
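
That is:

  docker-compose up -d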

The started containers can be seen as:

Configure Couchbase Server

Clone the couchbase-javaee application. This Java EE application uses the Couchbase Java SDK APIs to connect to the Couchbase server. The bootstrap code is:
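
It is roughly along these lines (using the Couchbase Java SDK 2.x API; the exact code is in the repo):

  // com.couchbase.client.java.CouchbaseCluster / com.couchbase.client.java.Bucket
  String host = System.getenv("COUCHBASE_URI");   // host of the Couchbase server, e.g. set via Docker Compose
  CouchbaseCluster cluster = CouchbaseCluster.create(host);
  Bucket bucket = cluster.openBucket("travel-sample");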

and is invoked from Database abstraction.

Couchbase Server can be configured using its REST API. These REST API calls are defined in a Maven profile in the pom.xml of this application. So configure the Couchbase server as:

Deploy Java EE Application to JBoss

The Java EE application can be easily deployed to JBoss EAP 7 Beta using the WildFly Maven Plugin. This is also defined as a Maven profile in pom.xml.

Deploy the application as:
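
Using the wildfly profile defined in the project’s pom.xml:

  mvn install -Pwildfly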

Access the Application

As mentioned earlier, the application provides HTTP CRUD API over JSON documents stored in Couchbase.

Access the application as:

CRUD operations (GET, POST, PUT, DELETE) can be performed on Airline resource in the application. Complete CRUD API is documented at github.com/arun-gupta/couchbase-javaee.

This blog explained how to access a NoSQL database from JBoss EAP 7.

Read more about Couchbase 4:

  • What’s New in Couchbase Server 4.1
  • Couchbase Server documentation
  • Talk to us on Couchbase Forums
  • Follow @couchbasedev or @couchbase

Learn more about Couchbase in this recent developer-focused webinar:

Docker Machine, Swarm and Compose for multi-container and multi-host applications with Couchbase and WildFly

This blog will explain how to create a multi-container application deployed on multiple hosts using Docker. This will be achieved using Docker Machine, Swarm and Compose.

Yes, all three tools together make this blog that much more interesting!

Docker Swarm Machine Compose

The diagram explains the key components:

  • Docker Machine is used to provision multiple Docker hosts
  • Docker Swarm will be used to create a multi-host cluster
  • Each node in Docker Swarm cluster is registered/discovered using Consul
  • Multi-container application will be deployed using Docker Compose
  • WildFly and Couchbase are provisioned on different hosts
  • Docker multi-host networking is used for WildFly and Couchbase to communicate

In addition, Maven is used to configure Couchbase and deploy application to WildFly.

Latest instructions at Docker for Java Developers.

No story, just pure code, let’s do it!

Create Discovery Service using Docker Machine

  1. Create a Machine that will host discovery service:
  2. Connect to this Machine:
  3. Run Consul service using the following Compose file:
    This Compose file is available at https://github.com/arun-gupta/docker-images/blob/master/consul/docker-compose.yml.
    Started container can be verified as:

Create Docker Swarm Cluster using Docker Machine

Swarm is fully integrated with Machine, and so is the easiest way to get started.

  1. Create a Swarm Master and point to the Consul discovery service:
    A few options to look at here:

    1. --swarm configures the Machine with Swarm
    2. --swarm-master configures the created Machine to be the Swarm master
    3. --swarm-discovery defines the address of the discovery service
    4. --cluster-advertise advertises the machine on the network
    5. --cluster-store designates a distributed k/v storage backend for the cluster
    6. --virtualbox-disk-size sets the disk size for the created Machine to 5GB. This is required so that the WildFly and Couchbase images can be downloaded on any of the nodes.
  2. Find some information about this machine:
    Note that the disk size is 5GB.
  3. Connect to the master by using the command:
  4. Find some information about the cluster:
  5. Create a new Machine to join this cluster:
    Notice no --swarm-master is specified in this command. This ensures that the created Machines are worker nodes.
  6. Create a second Swarm node to join this cluster:
  7. List all the created Machines:
    The machines that are part of the cluster have the cluster’s name in the SWARM column, which is blank otherwise. For example, consul-machine is a standalone machine, whereas all other machines are part of the swarm-master cluster. The Swarm master is also identified by (master) in the SWARM column.
  8. Connect to the Swarm cluster and find some information about it:

    Note that --swarm is specified to connect to the Swarm cluster. Otherwise the command will connect to the swarm-master Machine only.

    This shows the output as:

    There are 3 nodes: one Swarm master and 2 Swarm worker nodes. There are a total of 4 containers running in this cluster: one Swarm agent on the master and on each node, and an additional swarm-agent-master running on the master. This can be verified by connecting to the master and listing all the containers.

  9. List nodes in the cluster with the following command:

Start Application Environment using Docker Compose

Make sure you are connected to the cluster by giving the command eval "$(docker-machine env --swarm swarm-master)".

  1. List all the networks created by Docker so far:
    Docker creates three networks for each host automatically:

    Network Name   Purpose
    bridge         Default network that containers connect to. This is the docker0 network in all Docker installations.
    none           Container-specific networking stack.
    host           Adds a container to the host's networking stack. Network configuration is identical to the host.

    This explains a total of nine networks, three for each node, as shown in this Swarm cluster.

  2. Use Compose file to start WildFly and Couchbase:

    In this Compose file:

    1. Couchbase service has a custom container name defined by container_name. This name is used when creating a new environment variable COUCHBASE_URI during WildFly startup.
    2. arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces, and in addition also exposes port 9990. This enables the WildFly Maven Plugin to be used to deploy the application. The source for this file is at https://github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml.

    This application environment can be started as:

    --x-networking creates an overlay network for the Swarm cluster. This can be verified by listing networks again:

    Three new networks are created:

    1. Containers connected to the multi-host network are automatically connected to the docker_gwbridge network. This network allows the containers to have external connectivity outside of their cluster, and is created on each worker node.
    2. A new overlay network wildflycouchbasejavaee7 is created. Connect to different Swarm nodes and check that the overlay network exists on them.

      Let’s begin with the master:

      Next, with swarm-node-01:

      Finally, with swarm-node-02:

      As seen, wildflycouchbasejavaee7 overlay network exists on all Machines. This confirms that the overlay network created for Swarm cluster was added to each host in the cluster. docker_gwbridge only exists on Machines that have application containers running.

      Read more about Docker Networks.

  3. Verify that WildFly and Couchbase are running:

Configure Application and Database

  1. Clone https://github.com/arun-gupta/couchbase-javaee.git. This workspace contains a simple Java EE application that is deployed on WildFly and provides a REST API over travel-sample bucket in Couchbase.
  2. The Couchbase server can be configured using the REST API. The application contains a Maven profile that allows the Couchbase server to be configured with the travel-sample bucket. This can be invoked as:
  3. Deploy the application to WildFly by specifying three parameters:
    1. Host IP address where WildFly is running
    2. Username of a user in WildFly’s administrative realm
    3. Password of the user specified in WildFly’s administrative realm

Access Application

Now that the WildFly and Couchbase servers have started, let’s access the application. You need to specify the IP address of the Machine where WildFly is running:

Complete set of REST API for this application is documented at github.com/arun-gupta/couchbase-javaee.

Latest instructions at Docker for Java Developers.

Enjoy!

JavaOne4Kids 2015 Wrapup – Devoxx4Kids and Oracle Academy Together!

JavaOne4Kids is focused on promoting technology to next generation of developers; kids who want to learn more about programming, robotics and engineering.

Oracle Academy collaborated with Devoxx4Kids to bring kids content that includes several topics like Minecraft Modding, Java, Python, Scratch, Raspberry Pi, Arduino, NAO robot, LEGO Mindstorms, Greenfoot, Alice, and others at JavaOne 2015.

The attendance grew 3x from last year and it was certainly very heartening to see that!

If you live in/around San Francisco Bay Area, and want a more continued experience through out the year, then its highly recommend to join meetup.com/Devoxx4Kids-BayArea/!

Here are some statistics from the event:

javaone4kids-2015-boys-girls

A survey was sent to the attendees and some of them responded back. 95% of responses rated were happy with the event:

javaone4kids-2015-rate

90%+ would recommend JavaOne4Kids to a friend:

javaone4kids-2015-recommend

Instructors seem to have done a good job with 97% presenting in good, very good, and excellent way:

javaone4kids-2015-clear-way

Minecraft Modding continues to be the top rated workshop:

javaone4kids-2015-course

Here are some pictures from the event:

 
 
 

Check out the complete album:

JavaOne4Kids 2015 Album

Picture is worth a thousand words, and a video is worth a million words. Check out kids in action from the event, and then subsequently in JavaOne Community Keynote:

It takes a village to run an event like this. This was certainly not possible without the impeccable support from Oracle team, instructors, and volunteers who helped us through out the event!

Do we expect these kids to come back to again next year? Yes, absolutely!

At least, 88% of them want to come back :)

javaone4kids-2015-another-event

Don’t forget to join the local meetup.com/devoxx4kids-bayarea for local events in Bay Area.

Docker Networking with Couchbase and WildFly

Docker Multi-Host networking allows you to create virtual networks and attach containers to them so you can create the network topology that is right for your application. This blog will show how to use it with Docker Compose.

CRUD Java Application with Couchbase, Java EE, and WildFly explained how to use a Java EE application to provide a CRUD/REST interface on a data bucket in Couchbase. It required WildFly to be manually downloaded and run. That blog also ran the Couchbase server using Docker and required manual configuration to load the travel-sample bucket.

Configure Couchbase Docker Container using REST API explained how to use Couchbase REST API to configure Couchbase Server.

Docker Multi-Host Networking

This blog will remove the explicit download of WildFly and manual configuration of Couchbase server:

  • Use Docker Compose to start WildFly and Couchbase (no download required)
  • Use a Maven profile to configure Couchbase server (no manual configuration required)
  • Uses Docker multi-host networking so that WildFly and Couchbase server can talk to each other

Let’s get started!

Start Couchbase and WildFly using Docker Multi-Host Networking and Compose

  1. Start WildFly and Couchbase server using docker-compose.yml file from github.com/arun-gupta/docker-images/blob/master/wildfly-couchbase-javaee7/docker-compose.yml:
    arungupta/wildfly-admin image is used as it binds WildFly’s management to all network interfaces, and in addition also exposes port 9990. This enables WildFly Maven Plugin to be used to deploy the application.

    container_name is specified for Couchbase service and referred in WildFly service using COUCHBASE_URI. This is then used to connect to Couchbase from the Java EE application.

    The application environment is started as:

    --x-networking is an experimental switch added in Docker Compose 1.5 that allows a bridge or an overlay network to be created. By default, it creates a bridge network that works on a single host. The created network can be seen as:

    Issue #2221 provides more explanation about the default networks created. wildflycouchbasejavaee7 is the new bridge network created for our application. Issue #2345 provides some details about the incorrect driver name in the output message.

Configure Couchbase Server

  1. Clone couchbase-javaee repo:

  2. Configure Couchbase server:

    exec-maven-plugin is used to invoke the REST API and configure the Couchbase server, and is configured in a Maven profile. Make sure to set up the docker.host property in pom.xml.

  3. Deploy the application to WildFly:

    Make sure to specify the correct host on CLI. In this case, this is the IP address obtained using docker-machine ip default.

Invoke the Application

  1. Invoke the REST endpoint using cURL:

    Complete set of REST endpoints are documented at CRUD Java Application with Couchbase, Java EE and WildFly. They are listed here for convenience:

    1. GET a single airline:
    2. Create a new airline using POST:

    3. Update an existing airline using PUT:
    4. Delete an existing airline using DELETE:

Enjoy!

CRUD Java Application with Couchbase, Java EE and WildFly

Couchbase is an open-source, NoSQL, document database. It allows you to access, index, and query JSON documents while taking advantage of integrated distributed caching for high-performance data access.

Developers can write applications for Couchbase in different languages (Java, Go, .NET, Node, PHP, Python, C) using multiple SDKs. This blog will show how you can easily create a CRUD application using the Java SDK for Couchbase.

REST with Couchbase

The application will use curl to issue REST commands to a JAX-RS endpoint deployed on WildFly. These commands will then perform CRUD operations on travel-sample bucket in Couchbase. N1QL (SQL query language for JSON) will be used to communicate with Couchbase to retrieve results. Both the “builder pattern” and raw N1QL commands will be used.

Couchbase CRUD using WildFly and Curl

TL;DR

Complete source code and instructions for the sample are available at github.com/arun-gupta/couchbase-javaee.

Lets get started!

Run Couchbase Server

The Couchbase server can be easily downloaded from the Couchbase Server Downloads page. In a containerized world, it’s a lot easier to fire up a Couchbase server using Docker.

If Docker is configured on your machine then the easiest way is to use Docker Compose for Couchbase:

Starting up the application server shows:

And then the logs can be seen as:

The database needs to be configured, as explained at Configure Couchbase Server. Make sure to install the travel-sample bucket.

Deploy the Java EE Application on WildFly

  • Download WildFly 9.0.2, unzip, and start the WildFly application server as ./wildfly-9.0.0.Final/bin/standalone.sh.
  • Git clone the repo: git clone https://github.com/arun-gupta/couchbase-javaee.git
  • Change directory cd couchbase-javaee
  • Deploy the application to WildFly: mvn install -Pwildfly.

The application uses Java SDK for Couchbase by importing the following Maven coordinates:
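
Roughly the following coordinates (the exact version is in the project’s pom.xml):

  <dependency>
    <groupId>com.couchbase.client</groupId>
    <artifactId>java-client</artifactId>
    <version>2.2.1</version>  <!-- placeholder version -->
  </dependency>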

Invoke the REST Endpoints Using cURL

GET Airline resources (limit to 10)

Let’s query the database to list 10 Airline resources.

Request

Response

The N1QL query for this is written as:
And can also be alternatively written as:
You may optionally update the code to include an ORDER BY clause, as shown in the N1QL Tutorial.
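
For reference, the raw form of such a query against the sample data is roughly:

  SELECT * FROM `travel-sample` WHERE type = 'airline' LIMIT 10;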

GET one Airline resource

Use id attribute to query a single Airline resource

Request

Response

POST a new Airline resource

Learn how to run N1QL queries from the CLI using CBQ tool and verify the existing sample data:

This query retrieves documents where the airline’s name is Airlinair. The count is shown in metrics.resultCount.

Create a new document using POST.

Request

Response

Query again using CBQ and now the results are shown as:

Note that two JSON documents are returned, instead of the one returned before the POST command was issued.

PUT an existing Airline resource

Update an existing resource using HTTP PUT.

The data model for the travel-sample bucket requires the “id” attribute to be included in the payload as well as in the URI.

Request

The name of the airline is updated from “Airlinair” to “Airlin Air”; all other attributes stay the same.

Response

The updated record is shown in the response.

Querying for Airlinair gives:

So the previously added record is now updated and thus does not appear in the query results. Querying for Airlin Air gives:

This shows the newly updated document.

DELETE an existing Airline resource

Query for a unique id:

Notice that one document is returned.

Let’s delete this document.

Request

Response

The deleted document is shown in the response.

Query again for the deleted id:

And no results are returned!

As mentioned earlier, the complete code base is at github.com/arun-gupta/couchbase-javaee.

Enjoy!

Couchbase at JavaOne 2015

JavaOne Logo

JavaOne 2015 is just a couple of weeks away. Why you should attend?

  • 450+ sessions over a wide variety of topics around Java
  • 5 days of geekgasm with toys, technology, and discussions
  • Best of the best developers gather here
  • Special 20th edition of the event is going to be cherished for ever
  • Venue is the most lovely city in the world – San Francisco!

Couchbase, one of the leading open-source NoSQL document datastore companies, is going to be there with a few talks. Here is where you’ll find us:

Saturday, Oct 24

JavaOne4Kids Day – Arun Gupta (@arungupta)
5pm – 8pm: Chinascaria (invite only)
7pm – 11:30pm: NetBeans Party (invite only)

Sunday, Oct 25

8am – 9:30am: Java Champions/JUG Leaders Brunch – Arun Gupta (@arungupta)
6:30pm – 7:30pm: WildFly, Hadoop, JavaFX and HTML5 in the Enterprise (UGF10306) – Arun Gupta (@arungupta)
8pm – 10pm: NetBeans, GlassFish, and Payara Party (Thirsty Bear)

Monday, Oct 26

8:30am – 10:30am: Docker and Kubernetes Recipes for Java Developers (TUT1708) – Arun Gupta (@arungupta)
12:30pm – 1:30pm: Refactor your Java EE Applications with Microservices and Containers (CON1700) – Arun Gupta (@arungupta)

Tuesday, Oct 27

12:30pm – 1:30pm: Build Scalable And Secure Mobiles with Java That Work Offline (CON11281) – Wayne Carter (@waynecarter), Ali LeClerc (@ali_leclerc)
5:15pm – 7:15pm: JavaOne Ignite – Arun Gupta (@arungupta)

Wednesday, Oct 28

4:30pm – 5:30pm: SQL for JSON: Rich, Declarative Querying for NoSQL Databases and Applications (CON11282) – Keshav Murthy (@rkeshavmurthy), Gerald Sangudi (@sangudi)

Of course, we’ll also be running the “hallway track” and so feel free to meet us there.

8 Things About Couchbase is a good place to start learning about Couchbase!

Couchbase Logo

If you’ve any doubts about San Francisco being the most beautiful city, go visit Golden Gate, Fisherman’s Wharf, Crooked Street, Alcatraz Island, Cable cars, Chinatown, MOMA, and much more :)

Kubernetes Application – Package Multiple Resources Together

Deploying an application in Kubernetes requires creating multiple resources such as Pods, Services, Replication Controllers, and others. Typically each resource is defined in a configuration file and created using the kubectl script. But if multiple resources need to be created, then you need to invoke kubectl multiple times. So if you need to create the following resources:

  • MySQL Pod
  • MySQL Service
  • WildFly Replication Controller

Then the commands would look like:
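
With one configuration file per resource, that is three invocations (the file names here are placeholders):

  kubectl create -f mysql-pod.yml
  kubectl create -f mysql-service.yml
  kubectl create -f wildfly-rc.yml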

Or, for convenience, wrap these invocations in a shell script. But that is not very intuitive! There is a better, more natural and intuitive way.

Kubernetes allows multiple resources to be specified in a single configuration file. This makes it easy to create a “Kubernetes Application” that consists of multiple resources.

The previous section showed how to deploy the Java EE application using multiple configuration files. This application can be deployed using a single configuration file as well.

An application, as discussed above, consisting of MySQL Pod, MySQL Service, and WildFly Replication Controller can be created using the following configuration file:
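
A sketch of such a combined file (image names and labels are placeholders; each resource keeps its usual definition):

  apiVersion: v1
  kind: Pod
  metadata:
    name: mysql-pod
    labels:
      name: mysql-pod
  spec:
    containers:
      - name: mysql
        image: mysql:latest
        env:
          - name: MYSQL_ROOT_PASSWORD
            value: mysecretpass
        ports:
          - containerPort: 3306
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: mysql-service
  spec:
    selector:
      name: mysql-pod
    ports:
      - port: 3306
  ---
  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: wildfly-rc
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          name: wildfly-rc-pod
      spec:
        containers:
          - name: wildfly
            image: arungupta/wildfly-mysql-javaee7   # placeholder application image
            ports:
              - containerPort: 8080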

Notice that the sections, one each for the MySQL Pod, MySQL Service, and WildFly Replication Controller, are separated by ---.

Such an application can be created as:
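
Assuming the combined file above is saved as javaee-application.yml:

  kubectl create -f javaee-application.yml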

Complete details about how to setup Kubernetes and run this application are available at github.com/arun-gupta/kubernetes-java-sample/#kubernetes-application.

More details about creating a Kubernetes application with multiple resources can be found in #12104.

You can learn about how to create Kubernetes resources for a Java application, or otherwise, at github.com/arun-gupta/kubernetes-java-sample/.

Docker and Kubernetes Workshops in Fall 2015

Docker and Kubernetes workshops are going to 4 continents and 9 countries this fall!

Let’s talk about:

  • Get started with Docker and Kubernetes for packaging your applications
  • Microservices using Docker and Kubernetes
  • Clustering architectures
  • Migrating existing applications to Docker and Kubernetes
  • Tooling
  • Debugging tips

I’ll share some of what I know and will learn a lot more from you!

Here is the complete circuit so far:

 Sep 9 – 10    JavaZone 2015
 Sep 15        GOTO London 2015
 Sep 17        Red Hat Forum London 2015
 Sep 29        Red Hat Forums, Argentina
 Oct 2         CodeStars Summit 2015
 Oct 24 – 29   JavaOne 2015
 Nov 5         Drukwerk (tentative)
 Nov 7         JavaDay Kiev 2015
 Nov 9 – 13    Devoxx Belgium 2015
 Nov 16 – 18   Devoxx Morocco 2015
 Nov 18 – 22   Build Stuff 2015

Where will I see you?

Would you like to run with me at any of these events? 5k, 10k, 10mile, half marathon, marathon … you pick the distance and we run together!