Tag Archives: kubernetes

Kubernetes Cluster on Azure and Expose Couchbase Service


This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.

  • Part 1 explained how to start Kubernetes cluster using Vagrant – Kubernetes on Vagrant
  • Part 2 did the same for Amazon Web Services – Kubernetes on Amazon Web Services
  • Part 3 did the same for Google Cloud – Kubernetes on Google Cloud

This fourth part will show:

  • How to set up and start the Kubernetes cluster on Azure
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shut down the cluster

Kubernetes Cluster on Azure with Couchbase

Many thanks to @colemickens  for helping me through this recipe. This blog content is heavily based upon the instructions at colemickens.github.io/docs/getting-started-guides/azure/.

Install and Configure Azure CLI

Azure CLI is a command-line interface to develop, deploy and manage Azure applications. This is needed in order to install Kubernetes cluster on Azure.

  1. Install Node (the commands for these CLI steps are sketched after this list).
  2. Install Azure CLI.
  3. Sign up for free trial at https://azure.microsoft.com/en-us/free/.
  4. Log in to Azure using the azure login command.
  5. Get account information using the azure account show command:

    Note the values shown instead of XXX and YYY. These will be used to configure the Kubernetes cluster.
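
A hedged sketch of steps 1, 2, 4, and 5 (the original snippets were not reproduced here; the Node install command assumes a Mac with Homebrew):

    brew install node            # step 1: install Node (platform-specific)
    npm install -g azure-cli     # step 2: the classic Azure CLI ships as an npm package
    azure login                  # step 4: sign in to the Azure account
    azure account show           # step 5: note the subscription and tenant ids (XXX and YYY)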

Start Kubernetes Cluster

  1. Download Kubernetes 1.2.4 and extract it.
  2. The Kubernetes cluster on Azure can be started as sketched after this list:

    Make sure to specify the appropriate values for XXX and YYY from the previous command. AZURE_SUBSCRIPTION_ID and AZURE_TENANT_ID are specific to Azure.

    These values can also be edited in cluster/azure/config-default.sh.

  3. Start Kubernetes cluster:

    It starts four nodes of Standard_A1 size. Each node gives you 1 core, 1.75 GB RAM, and 40GB HDD.
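
A hedged sketch of steps 2 and 3, assuming the standard cluster/kube-up.sh entry point of the downloaded release (the exact variable handling may differ):

    export KUBERNETES_PROVIDER=azure
    export AZURE_SUBSCRIPTION_ID=XXX    # value from `azure account show`
    export AZURE_TENANT_ID=YYY          # value from `azure account show`
    cluster/kube-up.sh                  # run from the extracted Kubernetes 1.2.4 directory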

Run Docker Container in Kubernetes Cluster on Azure

Now that the cluster is up and running, get a list of all the nodes:

Four instances are created as shown – one for master node and three for worker nodes.

Azure Portal shows all the created artifacts in the Resource Group:

Azure Portal: Kubernetes Resource Group

More details about the created nodes are available:

Azure Portal: Kubernetes Virtual Network

Create a Couchbase pod:
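
A sketch of the command (the image name comes from the text; with Kubernetes 1.2 this creates a Deployment named couchbase):

    kubectl run couchbase --image=arungupta/couchbase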

Notice how the image name can be specified on the CLI. Kubernetes pre-1.2 versions created a Replication Controller with this command. This is explained in Kubernetes on Amazon Web Services or Kubernetes on Google Cloud. Kubernetes 1.2 introduced Deployments and so this creates a Deployment instead. This enables simplified application deployment and management including versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability and rollback.

The pod uses arungupta/couchbase Docker image that provides a pre-configured Couchbase server. Any Docker image can be specified here.

Status of the pod can be watched:

Get more details about the pod:
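
Hedged commands for these two steps (the pod name is a placeholder; use the one printed by kubectl get po):

    kubectl get -w po                 # watch the pod until its status changes to Running
    kubectl describe po <pod-name>    # events, IP address, and node placement of the pod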

Expose Pod on Kubernetes as Service

Now that our pod is running, how do I access the Couchbase server? You need to expose the Deployment as a Service outside the Kubernetes cluster.

Typically, this will be exposed using the command:

But Azure does not support --type=LoadBalancer at this time. This feature is being worked on and will hopefully be available in the near future. So in the meantime, we'll expose the Service as:
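
A hedged sketch of both variants (the service name and ports are assumptions; 8091 is the Couchbase Web Console port):

    # typical, but --type=LoadBalancer was not supported on Azure at the time:
    kubectl expose deployment couchbase --port=8091 --target-port=8091 --type=LoadBalancer

    # workaround used here: a plain cluster-internal service
    kubectl expose deployment couchbase --port=8091 --target-port=8091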

Now proxy to this Service using kubectl proxy command:
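
A sketch of the proxy command, using port 9999 to match the URL below:

    kubectl proxy --port=9999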

And now this exposed Service is accessible at http://127.0.0.1:9999/api/v1/proxy/namespaces/default/services/couchbase/index.html. This shows the login screen of Couchbase Web Console:

Azure Kubernetes: Couchbase Web Console

Shutdown Kubernetes Cluster

Finally, shut down the cluster using the cluster/kube-down.sh script.

This script shuts down the cluster, but the Azure resource group needs to be explicitly removed. This can be done by selecting the Resource Group from portal.azure.com:

Azure Portal: Delete Kubernetes Resource Group

This is filed as #26601.

Enjoy!

Further references …

  • Couchbase Server Developer Portal
  • Couchbase on Containers
  • Questions on StackOverflow, Forums or Slack Channel
  • Follow us @couchbasedev
  • Couchbase 4.5 Beta

Source: http://blog.couchbase.com/2016/june/kubernetes-cluster-azure-couchbase-service

Kubernetes Namespaces, Resource Quota, and Limits for QoS in Cluster


By default, all resources in Kubernetes cluster are created in a default namespace. A pod will run with unbounded CPU and memory requests/limits.

A Kubernetes namespace allows you to partition created resources into a logically named group. Each namespace provides:

  • a unique scope for resources to avoid name collisions
  • policies to ensure appropriate authority to trusted users
  • ability to specify constraints for resource consumption

This allows the resources of a Kubernetes cluster to be shared by multiple groups while providing a different level of QoS to each group.

Resources created in one namespace are hidden from other namespaces. Multiple namespaces can be created, each potentially with different constraints.

Default Kubernetes Namespace

By default, each resource created by a user in a Kubernetes cluster runs in the default namespace, called default.

Any pod, service or replication controller will be created in this namespace. kube-system namespace is reserved for resources created by the Kubernetes cluster.

More details about the namespace can be seen:

This description shows resource quota (if present), as well as resource limit ranges.

So let’s create a Couchbase replication controller as:

Check the existing replication controller:

By default, only resources in the user's namespace are shown. Resources in all namespaces can be shown using the --all-namespaces option:
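
The corresponding kubectl invocations:

    kubectl get rc                     # resources in the current (default) namespace only
    kubectl get rc --all-namespaces    # also includes resources in kube-system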

As you can see, the arungupta/couchbase image runs in the default namespace. All other resources run in the kube-system namespace.

Let's check the context of this replication controller:
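
A sketch of the command whose output is discussed next:

    kubectl config view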

Look for contexts.context.name attribute to see the existing context. This will be manipulated later.

Create a Resource in New Kubernetes Namespace

Let's create a new namespace first. This can be done using the following configuration file:
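
A minimal sketch of such a configuration file (the namespace name development matches the rest of this post; labels are an assumption):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: development
      labels:
        name: development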

The namespace is created by passing this file to kubectl create -f.

Then querying for all the namespaces gives:

A new replication controller can be created in this new namespace by using --namespace option:
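
A sketch of the command (image name as before):

    kubectl --namespace=development run couchbase --image=arungupta/couchbase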

List of resources in all namespaces looks like:

As seen, there are two replication controllers with arungupta/couchbase image – one in default namespace and another in development namespace.

Set Kubernetes Namespace For an Existing Resource

If a resource is already created then it can be assigned a namespace.

For a previously created resource, a new context can be set with this namespace:
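
A hedged sketch; the context name dev is an assumption, and the cluster/user values come from the kubectl config view output above:

    kubectl config set-context dev --namespace=development \
        --cluster=<cluster-name-from-config-view> \
        --user=<user-name-from-config-view>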

Viewing the context now shows:

The second entry in the contexts array shows that a new context has been created. It also shows that the current context is still couchbase-on-kubernetes_kubernetes. Since no namespace is specified in that context, it belongs to the default namespace.

Change the context:
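
Assuming the context was named dev as above:

    kubectl config use-context dev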

See the list of replication controllers:

Obviously, no replication controllers are running in this context. Let's create a new replication controller in this new namespace:

And see the list of replication controllers in all namespaces:

Now you can see two arungupta/couchbase replication controllers running in two different namespaces.

Delete a Kubernetes Resource in Namespace

A resource can be deleted by fully-qualifying the resource name:

Similarly the other replication controller can be deleted as:

Finally, see the list of all replication controllers in all namespaces:

This confirms that all user created replication controllers are deleted.

Resource Quota and Limit using Kubernetes Namespace

Each namespace can be assigned resource quota.

By default, a pod will run with unbounded CPU and memory requests/limits. Specifying a quota allows you to restrict how much of the cluster's resources can be consumed across all pods in a namespace.

Resource quota can be specified using a configuration file:
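
A minimal sketch of such a file (the name and values are assumptions):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: quota
    spec:
      hard:
        cpu: "20"
        memory: 1Gi
        pods: "10"
        services: "5"
        replicationcontrollers: "20"
        resourcequotas: "1"
        secrets: "10"
        persistentvolumeclaims: "4"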

The following resources are supported by the quota system:

Resource                  Description
cpu                       Total requested CPU usage
memory                    Total requested memory usage
pods                      Total number of active pods (phase Pending or Running)
services                  Total number of services
replicationcontrollers    Total number of replication controllers
resourcequotas            Total number of resource quotas
secrets                   Total number of secrets
persistentvolumeclaims    Total number of persistent volume claims

This resource quota can be created in a namespace:

The created quota can be seen as:
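
Hedged commands for these two steps (the file name and namespace are assumptions):

    kubectl create -f quota.yaml --namespace=development
    kubectl describe quota quota --namespace=development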

Now, create the replication controller again; the command appears to work:

But describing the quota again shows:

We expected a new pod to be created as part of this replication controller, but it's not there. So let's describe our replication controller:

By default, a pod consumes all the CPU and memory available. With resource quotas applied, an explicit value must be specified. Alternatively, a default value for the pod can be specified using the following configuration file:
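
A minimal sketch of such a LimitRange (the name and values are assumptions):

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limits
    spec:
      limits:
      - type: Container
        default:
          cpu: 200m
          memory: 512Mi
        defaultRequest:
          cpu: 100m
          memory: 256Mi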

This restricts the CPU and memory that can be consumed by a pod. Let's apply these limits as:
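
Assuming the LimitRange above was saved as limits.yaml and the same namespace is used:

    kubectl create -f limits.yaml --namespace=development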

Now when you describe the replication controller again, it shows:

This shows successful creation of the pod.

And now when you describe the quota, it shows correct values as well:

Resource Quota provides more details about how to set/update these values.

Creating another quota gives the following error:

Specifying Limits During Pod Creation

Limits can be specified during pod creation as well:

If memory limit for each pod is restricted to 1g, then a valid pod definition would be:
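
A minimal sketch of such a pod definition (the pod name is an assumption; the 512Mi limit corresponds to the 0.5G mentioned below):

    apiVersion: v1
    kind: Pod
    metadata:
      name: couchbase-pod
    spec:
      containers:
      - name: couchbase
        image: arungupta/couchbase
        resources:
          limits:
            memory: 512Mi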

This is because the pod requests only 0.5G of memory. An invalid pod definition would be:

This is because the pod requests 2G of memory. Creating such a pod gives the following error:

Hope you can apply namespaces, resource quotas, and limits for sharing your clusters across different environments.

Source: http://blog.couchbase.com/2016/march/kubernetes-namespaces-resource-quota-limits-qos-cluster

Kubernetes Cluster on Google Cloud and Expose Couchbase Service


This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.

The first part (Couchbase on Kubernetes) explained how to start the Kubernetes cluster using Vagrant. The second part (Kubernetes on Amazon) explained how to run that setup on Amazon Web Services.

This third part will show:

  • How to set up and start the Kubernetes cluster on Google Cloud
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shut down the cluster

Here is a quick overview:

Kubernetes Cluster on Google Cloud

Let’s get into details!

Getting Started with Google Compute Engine provides detailed instructions on how to set up Kubernetes on Google Cloud.

Download and Configure Google Cloud SDK

There is a bit of setup required if you've never accessed Google Cloud on your machine. This was a bit overwhelming, and I wish it could be simplified.

  • Create a billable account on Google Cloud
  • Install Google Cloud SDK
  • Configure credentials: gcloud auth login
  • Create a new Google Cloud project and name it couchbase-on-kubernetes
  • Set the project: gcloud config set project couchbase-on-kubernetes
  • Set default zone: gcloud config set compute/zone us-central1-a
  • Create an instance: gcloud compute instances create example-instance --machine-type n1-standard-1 --image debian-8
  • SSH into the instance: gcloud compute ssh example-instance
  • Delete the instance: gcloud compute instances delete example-instance

Setup Kubernetes Cluster on Google Cloud

Kubernetes cluster can be created on Google Cloud as:
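
A hedged sketch, assuming the standard cluster/kube-up.sh entry point of the downloaded release:

    export KUBERNETES_PROVIDER=gce    # gce is also the default provider
    cluster/kube-up.sh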

Make sure KUBERNETES_PROVIDER is either set to gce or not set at all.

By default, this provisions a 4 node Kubernetes cluster with one master. This means 5 Virtual Machines are created.

If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all the values can be changed in cluster/gce/config-default.sh.

Starting Kubernetes on Google Cloud shows the following log. Google Cloud SDK was behaving a little weird, but taking the defaults seemed to work:

There are a couple of unbound variables and a WARNING message, but that didn’t seem to break the script.

Google Cloud Console shows:

Google Cloud Compute Instances On Kubernetes Cluster

Five instances are created as shown – one for master node and four for worker nodes.

Run Docker Container in Kubernetes Cluster on Google Cloud

Now that the cluster is up and running, get a list of all the nodes:

It shows four worker nodes.

Create a Couchbase pod:

Notice how the image name can be specified on the CLI. This command creates a Replication Controller with a single pod. The pod uses the arungupta/couchbase Docker image that provides a pre-configured Couchbase server. Any Docker image can be specified here.

Get all the RC resources:

This shows the Replication Controller that is created for you.

Get all the Pods:

The output shows the Pod that is created as part of the Replication Controller.

Get more details about the Pod:

Expose Pod on Kubernetes as Service

Now that our pod is running, how do I access the Couchbase server?

You need to expose it outside the Kubernetes cluster.

The kubectl expose command takes a pod, service or replication controller and exposes it as a Kubernetes Service. Let's expose the replication controller created previously:
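
A hedged sketch of the expose command and the follow-up describe (the replication controller and service names are assumed to be couchbase; 8091 is the Couchbase Web Console port):

    kubectl expose rc couchbase --port=8091 --target-port=8091 --type=LoadBalancer
    kubectl describe svc couchbase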

Get more details about Service:

The Loadbalancer Ingress attribute gives you the IP address of the load balancer that is now publicly accessible.

Wait for 3 minutes to let the load balancer settle down. Access it using port 8091 and the login page for Couchbase Web Console shows up:

Google Cloud Kubernetes Couchbase Login Page

Enter the credentials as “Administrator” and “password” to see the Web Console:

Google Cloud Kubernetes Couchbase Web Console

And so you just accessed your pod outside the Kubernetes cluster.

Shutdown Kubernetes Cluster

Finally, shut down the cluster using the cluster/kube-down.sh script.

Enjoy!

Source: http://blog.couchbase.com/2016/march/kubernetes-cluster-google-cloud-expose-service

Kubernetes Cluster on Amazon and Expose Couchbase Service

This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.

The first part (Couchbase on Kubernetes) explained how to start the Kubernetes cluster using Vagrant. That is a simple and easy way to develop, test, and deploy Kubernetes cluster on your local machine. But this could be of limited use rather soon as the resources are constrained by the local machine. So what do you do?

Kubernetes cluster can be installed on Amazon as well. This second part will show:

  • How to set up and start the Kubernetes cluster on Amazon Web Services
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shut down the cluster

Here is a quick overview:

Kubernetes Cluster on Amazon with Couchbase

Let’s dig into the details!

Setup Kubernetes Cluster on Amazon Web Services

Getting Started on AWS EC2 provides complete instructions to start a Kubernetes cluster on Amazon. Make sure to have the prerequisites (AWS account, AWS CLI, Full EC2 access) met before you follow these instructions.

Kubernetes cluster can be created on Amazon as:

By default, this provisions a new VPC and a 4 node Kubernetes cluster in us-west-2a (Oregon) with t2.micro instances running on Ubuntu. This means 5 instances (one for the master and 4 for the worker nodes) are created. Some properties that are worth updating:

  • Set NUM_MINIONS environment variable to whatever number of nodes are required in the cluster. Set it to 2 if you want only two worker nodes to be created.
  • The default instance size in 1.1.x is t2.micro. Set the MASTER_SIZE and MINION_SIZE environment variables to m3.medium, otherwise the nodes are going to crawl (see the sketch after this list).
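
Putting these together, a hedged sketch of starting the cluster with the suggested overrides (assuming the standard cluster/kube-up.sh entry point of the downloaded release):

    export KUBERNETES_PROVIDER=aws
    export NUM_MINIONS=2
    export MASTER_SIZE=m3.medium
    export MINION_SIZE=m3.medium
    cluster/kube-up.sh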

If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all the values can be changed in cluster/aws/config-default.sh.

Starting Kubernetes on Amazon shows the following log:

Amazon Console shows:

Kubernetes Cluster on Amazon

Three instances are created as shown – one for master node and two for worker nodes.

Username and password for the Kubernetes master are stored in /Users/arungupta/.kube/config. Look for a section like:

Run Docker container in Kubernetes Cluster on Amazon

Now that the cluster is up and running, get a list of all the nodes:

It shows two worker nodes.

Create a new Couchbase pod:

Notice how the image name can be specified on the CLI. This command creates a Replication Controller with a single pod. The pod uses the arungupta/couchbase Docker image that provides a pre-configured Couchbase server. Any Docker image can be specified here.

Get all the RC resources:

This shows the Replication Controller that is created for you.

Get all the Pods:

The output shows the Pod that is created as part of the Replication Controller.

Get more details about the Pod:

Expose Pod on Kubernetes as Service

Now that our pod is running, how do I access the Couchbase server?

You need to expose it outside the Kubernetes cluster.

The kubectl expose command takes a pod, service or replication controller and exposes it as a Kubernetes Service. Let's expose the replication controller created previously:

Get more details about the Service:

The LoadBalancer Ingress attribute gives you the address of the load balancer that is now publicly accessible.

Wait for 3 minutes to let the load balancer settle down. Access it using port 8091 and the login page for Couchbase Web Console shows up:

Kubernetes on Amazon - Couchbase Login Page

Enter the credentials as “Administrator” and “password” to see the Web Console:

Kubernetes on Amazon - Couchbase Web Console

And so you just accessed your pod outside the Kubernetes cluster.

Shutdown Kubernetes Cluster

Finally, shut down the cluster using the cluster/kube-down.sh script.

For a complete clean up, you still need to explicitly delete the S3 bucket where Kubernetes binaries are stored.

Enjoy!

Source: http://blog.couchbase.com/2016/march/kubernetes-cluster-amazon-expose-service

Couchbase on Kubernetes

This blog is possible because of this tweet!

Kubernetes is an open source orchestration system by Google for Docker containers. It manages containerized applications across multiple hosts and provides basic mechanisms for deployment, maintenance, and scaling of applications.

It allows the user to provide declarative primitives for the desired state, for example “need 5 Couchbase servers”. Kubernetes self-healing mechanisms, such as auto-restarting, re-scheduling, and replicating containers, then ensure that this state is met. The user just defines the state and Kubernetes ensures that the state is met at all times on the cluster.

Key Concepts of Kubernetes explains the key concepts of Kubernetes.

This multi-part blog series will show how to run Couchbase on Kubernetes in multiple ways. The first part starts with a simple setup using Vagrant.

Getting Started with Kubernetes

There are multiple ways to run Kubernetes, but I found the simplest (not necessarily the most predictable 😉) way is to run it using Vagrant.

  • Download the latest Kubernetes release, 1.1.8 as of this writing, and expand the archive.
  • Start the Kubernetes cluster as sketched after this list. The startup log shows output for each of the cluster components.
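
A hedged sketch of the startup commands, assuming the standard provider convention of the release:

    export KUBERNETES_PROVIDER=vagrant
    cluster/kube-up.sh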

Run Couchbase on Kubernetes Cluster

The easiest way to start running a Docker container in Kubernetes is using the kubectl run command.

The command usage is:

The command runs a particular image, possibly replicated. The image replication is handled by creating a Replication Controller to manage the created container(s).

Complete list of options to run this command can be seen using:

Couchbase Docker Container explains the different Docker containers for Couchbase. For this blog, we'll use the arungupta/couchbase image as that is pre-configured.

This shows the output:

The output confirms that a Replication Controller is created. Let's verify it:

Now, check the pods:

Let's check the status of the pod:

The fifth line of the output shows that the node's IP is 10.245.1.4. This will be used to access the Web Console later.

The last line in this output shows that the pod is now ready. Checking the status of the pod again shows:

Couchbase Web Console on Kubernetes Cluster

Now that your Couchbase container is running in Kubernetes cluster, you may like to view the Web Console.

Each pod is assigned a unique IP address, but this address is only accessible within the cluster. It can be exposed using the kubectl expose command.

This command takes a Replication Controller, Service or Pod and exposes it as a new Kubernetes Service. This can be done by giving the command:
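
A hedged sketch of the command, using the flags explained below (the replication controller name couchbase is an assumption):

    kubectl expose rc couchbase --port=8091 --target-port=8091 --external-ip=10.245.1.4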

In this command:

  • --target-port is the name or number for the port on the container that the service should direct traffic to
  • --port is the port that the service should serve on
  • --external-ip is the external IP address to set for the service. Note, this IP address was obtained with kubectl describe pod command earlier.

Now, you can access the Couchbase Web Console at http://10.245.1.4:8091, which looks like:

Couchbase Web Console on Kubernetes

Enter the password credentials as Administrator/password. These credentials are specified during Docker image creation at github.com/arun-gupta/docker-images/blob/master/couchbase/configure-cluster.sh#L9.

Couchbase Web Console in Kubernetes

Voila!

Discuss with us at StackOverflow or Couchbase Forums. You can also follow us at @couchbasedev and @couchbase.

Source: http://blog.couchbase.com/2016/couchbase-on-kubernetes

Couchbase Cloud Recipes – Pick your favorite!

Couchbase 4.x Quick Installation provides instructions to install Couchbase on your local machine.

Would you like to run Couchbase 4.x in the cloud? There are plenty of recipes available!

Couchbase Cloud Recipes

Looking for detailed instructions on Couchbase Cloud Recipes?

  • Digital Ocean
  • Jelastic
  • OpenShift
  • Docker
  • BigStep
  • Vagrant
  • Kubernetes
  • Joyent Triton
  • Amazon Web Services
  • Microsoft Azure
  • CoreOS and Kubernetes

Read more about it in Couchbase Cloud Deployments. Are there any other cloud/hosting solutions where you would like to see Couchbase running? Did I miss one where Couchbase already runs?

How is your experience of running Couchbase in the cloud?

Couchbase Partners provides a complete list of partners.

As always, talk to us using StackOverflow or Forums.

Couchbase on OpenShift 3

OpenShift is Red Hat’s open source PaaS platform. OpenShift 3 provides a holistic experience of running your applications using Docker and Kubernetes. In a classic Red Hat way, all the work is done in the open source at OpenShift Origin. This also drives the next major release of OpenShift Online and OpenShift Enterprise.

With Docker and Kubernetes for container orchestration, OpenShift 3 makes it really simple to run any product that has a Docker image with minimal effort. This blog explains how to get started with Couchbase on OpenShift 3.


Getting Started with OpenShift 3

  • Download the latest Vagrant box (1.1 as of this writing) and Vagrantfile from: openshift.org/vm/. Copy them in the same directory. Vagrantfile is configured for 2GB memory and can be updated if you need to run more containers. OpenShift Master, Node, Docker Registry, and other components run inside the VM. This blog was written using Vagrant 1.7.4 and VirtualBox 5.0.10r104061.
  • Add the Vagrant Box:
  • Start the Virtual Machine:

Download and Configure OpenShift 3 Client

  • Download Mac 64-bit client tools (gem install rhc is for v2 only) from openshift.org/vm/ and extract them in a directory. The listing looks like:
  • Verify the client version:
  • Remove ~/.kube/config or rename it to something else.
  • Login to OpenShift:

Create Couchbase Application in OpenShift 3

  • Create a new Couchbase instance (the oc commands for these steps are sketched after this list):

    arungupta/couchbase is used as it uses Couchbase REST API to preconfigure the Couchbase server with:

    • Memory and index quota
    • Query, Data, and Index service
    • Username and password credentials
    • Install travel-sample bucket

    This sample bucket will be used later for querying data.

  • Check the status of deployment:

  • Find the list of Pods:

  • Get more details about the Couchbase pod:
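
A hedged sketch of the oc commands for these steps (the pod name is a placeholder from the oc get pods output):

    oc new-app arungupta/couchbase    # create the application from the Docker image
    oc status                         # check the status of the deployment
    oc get pods                       # find the list of Pods
    oc describe pod <pod-name>        # details, including the Pod IP used in the next section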

Query Couchbase Sample Bucket

  • Log into the Vagrant box (the commands for this section are sketched after this list):

  • Find a list of all the running containers:

    Search for Couchbase container:

    Get the id for our container:

  • Get IP address of the Pod where Couchbase server is running:

  • Use the IP address shown above to start Couchbase Query CLI:

  • Query the sample bucket:
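
A hedged sketch of these steps (the container id, pod IP, and cbq path inside the image are placeholders/assumptions):

    vagrant ssh
    docker ps | grep couchbase         # find the Couchbase container and note its id
    docker exec -it <container-id> /opt/couchbase/bin/cbq -engine=http://<pod-ip>:8093/
    # at the cbq> prompt, query the sample bucket:
    #   SELECT * FROM `travel-sample` LIMIT 1;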

Enjoy!

This blog shows the very basics of getting started with Couchbase on OpenShift 3. Future blogs will show:

  • How to deploy an application to OpenShift and use this Couchbase
  • How to make this application accessible outside OpenShift
  • How to scale Couchbase in OpenShift
  • Possibly some other interesting items that come along

Do you have a suggestion on what you’d like to see?

Read more about Couchbase 4.1:

  • What’s New in Couchbase Server 4.1
  • Download Couchbase Server 4.1
  • Couchbase Server documentation
  • Talk to us on Couchbase Forums
  • Follow @couchbasedev or @couchbase

Docker, Kubernetes, and Microservices Replay from Devoxx 2015

Devoxx 2015 BE Arun

Java gives us Write Once Run Anywhere (WORA) because of the common abstraction provided by the Java Virtual Machine. The binary byte code produced by Java is understood by the JVM running on multiple operating systems. This allows Java code to run on any operating system. But deploying such applications typically requires tuning the JVM, setting up an application server, installing appropriate database drivers, and other similar configuration. Docker nicely complements WORA by defining a way to package Java applications and include all the configuration in an easy-to-describe format. This can be called Package Once Deploy Anywhere, or PODA.

Docker PODA

Do you want to learn more about how Docker helps with PODA?
How do you create multi-container applications using Docker Compose?
How do you deploy this multi-container application on multiple hosts running in a Docker Swarm cluster?
What new features Docker introduced in 1.9 release – particularly multi-host networking and persistent storage?
How do you deploy Docker containers using Kubernetes?
How does Kubernetes help with scaling applications?
How does Docker Swarm compare with Kubernetes?

I had the privilege of delivering a 3-hour university on Docker and Kubernetes followed by a 3-hour hands-on lab. The replay of the university session is now available:

Slides for the university are available at: github.com/javaee-samples/docker-java/tree/master/slides

The contents from the hands-on lab are available at bit.ly/dockerlab.

In addition, I also gave a 3-hour university on Refactor your Java EE Applications using Microservices and Containers. This session explained:

  • Basic characteristics of Microservices
  • Showed a simple shopping cart monolithic application, and how it was refactored to multiple microservices (github.com/arun-gupta/microservices)
  • Talked about transactions, event sourcing and CQRS
  • How KumuluzEE was used to create standalone JARs
  • Explained how such microservices can be deployed using Docker

Slides from this session are available at: github.com/arun-gupta/microservices/tree/master/slides

Replay is available at:

The response on twitter was quite positive:

And in case you are interested, watch a quick interview with Voxxed team:

Enjoy!

 

Kubernetes Application – Package Multiple Resources Together

Deploying an application in Kubernetes requires creating multiple resources such as Pods, Services, Replication Controllers, and others. Typically each resource is defined in a configuration file and created using the kubectl script. But if multiple resources need to be created then you need to invoke kubectl multiple times. So if you need to create the following resources:

  • MySQL Pod
  • MySQL Service
  • WildFly Replication Controller

Then the commands would look like:
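
For example, with one configuration file per resource (the file names are hypothetical):

    kubectl create -f mysql-pod.yaml
    kubectl create -f mysql-service.yaml
    kubectl create -f wildfly-rc.yaml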

Or for convenience, wrap these invocations in a shell script. But that is not very intuitive! There is a better, and more natural and intuitive way.

Kubernetes allows multiple resources to be specified in a single configuration file. This makes it easy to create a "Kubernetes Application" that consists of multiple resources.

The previous section showed how to deploy the Java EE application using multiple configuration files. This application can be deployed using a single configuration file as well.

An application, as discussed above, consisting of MySQL Pod, MySQL Service, and WildFly Replication Controller can be created using the following configuration file:
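
A minimal sketch of such a combined file (the images, labels, and values are illustrative assumptions, not the exact file from the original post):

    apiVersion: v1
    kind: Pod
    metadata:
      name: mysql-pod
      labels:
        name: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: mysqlpassword
        ports:
        - containerPort: 3306
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
    spec:
      selector:
        name: mysql-pod
      ports:
      - port: 3306
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: wildfly-rc
    spec:
      replicas: 1
      selector:
        name: wildfly
      template:
        metadata:
          labels:
            name: wildfly
        spec:
          containers:
          - name: wildfly
            image: arungupta/wildfly-mysql-javaee7
            ports:
            - containerPort: 8080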

Notice that each section, one each for MySQL Pod, MySQL Service, and WildFly Replication Controller, is separated by ---.

Such an application can then be created with a single kubectl create -f command against that one file.

Complete details about how to setup Kubernetes and run this application are available at github.com/arun-gupta/kubernetes-java-sample/#kubernetes-application.

More details about creating a Kubernetes application with multiple resources can be found in #12104.

You can learn about how to create Kubernetes resources for a Java application, or otherwise, at github.com/arun-gupta/kubernetes-java-sample/.

Docker and Kubernetes Workshops in Fall 2015

Docker and Kubernetes workshops are going to 4 continents and 9 countries this Fall!

Let's talk about:

  • Get started with Docker and Kubernetes for packaging your applications
  • Microservices using Docker and Kubernetes
  • Clustering architectures
  • Migrating existing applications to Docker and Kubernetes
  • Tooling
  • Debugging tips

I’ll share some of what I know and will learn a lot more from you!

Here is the complete circuit so far:

  • Sep 9 – 10: JavaZone 2015
  • Sep 15: GOTO London 2015
  • Sep 17: Red Hat Forum London 2015
  • Sep 29: Red Hat Forums, Argentina
  • Oct 2: CodeStars Summit 2015
  • Oct 24 – 29: JavaOne
  • Nov 5: Drukwerk (tentative)
  • Nov 7: JavaDay Kiev 2015
  • Nov 9 – 13: Devoxx Belgium 2015
  • Nov 16 – 18: Devoxx Morocco 2015
  • Nov 18 – 22: Build Stuff 2015

Where will I see you?

Would you like to run with me at any of these events? 5k, 10k, 10mile, half marathon, marathon … you pick the distance and we run together!

 

Kubernetes Design Patterns

14,000 commits and 400 contributors (including one tiny commit from me!) is what built Kubernetes 1.0. It is now available!

  • Download here
  • API Docs
  • Kubectl command tool
  • Getting Started Guide
  • Kubernetes Introduction Slides

This blog discusses some of the Kubernetes design patterns. All source code for the design patterns discussed below is available at kubernetes-java-sample.

Key Concepts of Kubernetes

At a very high level, there are three key concepts:

  • Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application.
  • Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.
  • Node is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. It provides an application-specific “virtual host” in a containerized environment.

Kubernetes Key Concepts

 

Some other concepts to be aware of:

  • Replication Controller is a resource at the Master that ensures that the requested number of pods is running on the nodes at all times.
  • Service is an object on master that provides load balancing across a replicated group of pods.
  • Label is an arbitrary key/value pair in a distributed watchable storage that the Replication Controller uses for service discovery.

Start Kubernetes Cluster

  1. The easiest way to start a Kubernetes cluster on Mac OS is using Vagrant:
  2. Alternatively, Kubernetes can be downloaded from github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz, and cluster can be started as:

Kubernetes Cluster Vagrant

A Pod with One Container

This section will explain how to start a Pod with one Container. WildFly base Docker image will be used as the Container.

Kubernetes One Pod

Pods, Replication Controllers, Services, etc. are all resources in Kubernetes. They can be created with kubectl using a configuration file.

The configuration file in this case is:
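
A minimal sketch of such a configuration file, assuming the official jboss/wildfly base image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: wildfly-pod
      labels:
        name: wildfly
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly
        ports:
        - containerPort: 8080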

Complete details on how to create a Pod are explained at github.com/arun-gupta/kubernetes-java-sample#a-pod-with-one-container

Java EE Application Deployed in a Pod with One Container

This section will show how to deploy a Java EE application in a Pod with one Container. WildFly, with an in-memory H2 database, will be used as the container.

Kubernetes Java EE 7 Application

Configuration file is:

Complete details at github.com/arun-gupta/kubernetes-java-sample#java-ee-application-deployed-in-a-pod-with-one-container-wildfly–h2-in-memory-database.

A Replication Controller with Two Replicas of a Pod

This section will explain how to start a Replication Controller with two replicas of a Pod. Each Pod will have one WildFly container.

Kubernetes Replication Controller

Configuration file is:

Complete details at github.com/arun-gupta/kubernetes-java-sample#a-replication-controller-with-two-replicas-of-a-pod-wildfly

Rescheduling Pods

Replication Controller ensures that a specified number of pod “replicas” are running at any one time. If there are too many, the replication controller kills some pods. If there are too few, it starts more.

Kubernetes Pod Rescheduling

Complete details at github.com/arun-gupta/kubernetes-java-sample#rescheduling-pods.

Scaling Pods

Replication Controller allows dynamic scaling up and down of Pods.

Kubernetes Scaling Pods

Complete details at github.com/arun-gupta/kubernetes-java-sample#scaling-pods.

Kubernetes Service

Pods are ephemeral. IP address assigned to a Pod cannot be relied upon. Kubernetes, Replication Controller in particular, create and destroy Pods dynamically. A consumer Pod cannot rely upon the IP address of a producer Pod.

Kubernetes Service is an abstraction which defines a set of logical Pods. The set of Pods targeted by a Service are determined by labels associated with the Pods.

This section will show how to run WildFly and MySQL containers in separate Pods. The WildFly Pod will talk to the MySQL Pod using a Service.

Kubernetes Service

Complete details at github.com/arun-gupta/kubernetes-java-sample#kubernetes-service.

Here are a couple of blogs that will help you get started:

The complete set of Kubernetes blog entries provides more details.

Enjoy!

Scaling Kubernetes Cluster

kubernetes-logo

Automatic Restarting of Pods inside Replication Controller of Kubernetes Cluster shows how Kubernetes reschedules pods in the cluster if one or more of the existing Pods disappear for some reason. This is a common usage pattern and one of the key features of Kubernetes.

Another common usage pattern of Replication Controller is scaling:

The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the replicas field.

Replication Controller#Scaling

This blog will show how a Kubernetes cluster can be easily scaled up and down.

All the code used in this blog is available at kubernetes-java-sample.

Start Replication Controller and Verify

  1. Start a Replication Controller as:
  2. Get status of the Pods:

    Make sure to wait for the status to change to Running.

    Note down the names of the Pods (for example, wildfly-rc-bgtkg).

  3. Get status of the Replication Controller:

    If multiple Replication Controllers are running then you can query for this specific one using the label:

Scaling Kubernetes Cluster Up

Replication Controller allows dynamic scaling up and down of Pods.

  1. Scale up the number of Pods (sketched after this list):
  2. Status of the Pods can be seen in another shell:

    Notice a new Pod with the name wildfly-rc-aqaqn is created.
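
A sketch of the scale command (the replication controller name wildfly-rc matches the pod names above; scaling back down in the next section uses the same command with --replicas=1):

    kubectl scale rc wildfly-rc --replicas=3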

Scale Kubernetes Cluster Down

  1. Scale down the number of Pods:
  2. Status of the Pods using -w is not correctly updated (#11338). But status of the Pods can be seen correctly as:

    Notice only one Pod is now running.

Kubernetes dynamically scales the Pods up and down using the scale --replicas command.

All code used in this blog is available at kubernetes-java-sample.

Enjoy!

Automatic Restarting of Pods inside Replication Controller of Kubernetes Cluster


A key feature of Kubernetes is its ability to maintain the “desired state” using declared primitives. The Replication Controller is a key concept that helps achieve this state.

A replication controller ensures that a specified number of pod “replicas” are running at any one time. If there are too many, it will kill some. If there are too few, it will start more.

Replication Controller

Let's take a look at how to spin up a Replication Controller with two replicas of a Pod. Then we'll kill one pod and see how Kubernetes will start another Pod automatically.

Start Kubernetes Cluster

  1. The easiest way to start a Kubernetes cluster on Mac OS is using Vagrant:
  2. Alternatively, Kubernetes can be downloaded from github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz, and cluster can be started as:

Start and Verify Replication Controller and Pods

  1. All configuration files required by Kubernetes to start the Replication Controller are in the kubernetes-java-sample project. Clone the workspace:
  2. Start a Replication Controller that has two replicas of a pod, each with a WildFly container:

    The configuration file used is shown:

    Default WildFly Docker image is used here.
  3. Get status of the Pods:

    Notice -w refreshes the status whenever there is a change. The status changes from Pending to Running and then Ready to receive requests.
  4. Get status of the Replication Controller:

    If multiple Replication Controllers are running then you can query for this specific one using the label:
  5. Get name of the running Pods:
  6. Find IP address of each Pod (using the name):

    And of the other Pod as well:
  7. A Pod’s IP address is accessible only inside the cluster. Log in to the minion to access WildFly’s main page hosted by the containers:

Automatic Restart of Pods

Let's delete a Pod and see how a new Pod is automatically created.
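
A sketch, using the pod name mentioned below (yours will differ):

    kubectl delete po wildfly-rc-15xg5    # delete one of the two replicas
    kubectl get po                        # the replication controller immediately starts a replacement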

Notice how the Pod with name wildfly-rc-15xg5 was deleted and a new Pod with the name wildfly-rc-0xoms was created.

Finally, delete the Replication Controller:

The latest configuration files and detailed instructions are at kubernetes-java-sample.

In the real world, you'll typically wrap this Replication Controller in a Service and front it with a Load Balancer. But that's a topic for another blog!

Enjoy!

OpenShift v3: Getting Started with Java EE 7 using WildFly and MySQL (Tech Tip #73)

OpenShift is Red Hat’s open source PaaS platform. OpenShift v3 (due to be released this year) will provide a holistic experience on running your microservices using Docker and Kubernetes. In a classic Red Hat way, all the work is done in the open source at OpenShift Origin. This will also drive the next major release of OpenShift Online and OpenShift Enterprise.

OpenShift v3 uses a new platform stack built on plenty of community projects to which Red Hat contributes, such as Fedora, CentOS, Docker, Project Atomic, Kubernetes, and OpenStack. OpenShift v3 Platform Combines Docker, Kubernetes, Atomic and More explains this platform stack in detail.

OpenShift v3 Stack

This tech tip will explain how to get started with OpenShift v3. Let's get started!

Getting Started with OpenShift v3

Pre-built binaries for OpenShift v3 can be downloaded from Origin at GitHub. However the simplest way to get started is to run OpenShift Origin as a Docker container.

OpenShift Application Lifecycle provides complete details on what it takes to run a sample application from scratch. This blog will use those steps and adapt them to run using the boot2docker VM on Mac. And in the process we'll also deploy a Java EE 7 application on WildFly which will be accessing a database in a separate MySQL container.

Here is our deployment diagram:

OpenShift v3 WildFly MySQL Deployment Strategy

  • WildFly and MySQL are running on separate pods.
  • Each of them is wrapped in a Replication Controller to enable simplified scaling.
  • Each Replication Controller is published as a Service.
  • WildFly talks to the MySQL service, as opposed to directly to the pod. This is important as Pods, and IP addresses assigned to them, are ephemeral.

Let's get started!

Configure Docker Daemon

  1. Configure the docker daemon on your host to trust the docker registry service you'll be starting (a boot2docker sketch follows this list). This registry will be used to push images for the build/test/deploy cycle.
    • Log into boot2docker VM as:
    • Edit the file

      This will be an empty file.
    • Add the following name/value pair:

      Save the file, and quit the editor.

    This will instruct the docker daemon to trust any docker registry on the 172.30.17.0/24 subnet.
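
A hedged sketch for boot2docker (the profile location is an assumption; the subnet comes from the text):

    boot2docker ssh                          # log into the boot2docker VM
    sudo vi /var/lib/boot2docker/profile     # the (initially empty) file mentioned above
    # add the following line, save, and quit:
    #   EXTRA_ARGS="--insecure-registry 172.30.17.0/24"
    sudo /etc/init.d/docker restart          # restart the docker daemon to pick up the change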

Check out OpenShift v3 and Java EE 7 Sample

  1. Download and install Go, and set up the GOPATH and PATH environment variables. Check out the OpenShift Origin repository:

    Note the directory where it's checked out. In this case, it's ~/workspaces/openshift.

    Build the workspace:

  2. Check out javaee7-hol workspace that has been converted to a Kubernetes application:

    This is also done in ~/workspaces/openshift directory.

Start OpenShift v3 Container

  1. Start OpenShift Origin as Docker container:

    Note ~/workspaces/openshift directory is mounted as /workspaces/openshift volume in the container. Some additional volumes are mounted as well.

    Check that the container is running:

  2. Log into the container as:

  3. Install Docker registry in the container by giving the following command:

  4. Confirm that the registry is running by getting the list of pods:

    osc is the OpenShift Client CLI and allows you to create and manage OpenShift projects. Some of the kubectl commands can also be run using this script.

  5. Confirm the registry service is running. Note the actual IP address may vary:

  6. Confirm that the registry service is accessible:

    And look for the output:

Access OpenShift v3 Web Console

  1. OpenShift Origin server is now up and running. Find out the host's IP address using boot2docker ip and open http://<IP address of boot2docker host>:8444 to view the OpenShift Web Console in your browser. For example, the console is accessible at https://192.168.59.103:8444/ on this machine.
    OpenShift Origin Browser Certificate

    You will need to have the browser accept the certificate at https://<host>:8444 before the console can consult the OpenShift API. Of course this would not be necessary with a legitimate certificate.

  2. OpenShift Origin login screen shows up. Enter the username/password as admin/admin:
    OpenShift Origin Login Screen

    and click on the “Log In” button. The default web console looks like:

    OpenShift v3 Web Console Default

Create OpenShift v3 Project

  1. Use project.json from github.com/openshift/origin/blob/master/examples/sample-app/project.json in the OpenShift v3 container and create a test project as:

    Refreshing the web console now shows:

    OpenShift Origin Test Project

    Clicking on “OpenShift 3 Sample” shows an empty project description:

    OpenShift v3 Empty Project

  2. Request creation of the application template:

  3. Web Console automatically refreshes and shows:

    OpenShift v3 Java EE 7 Default Project

    The list of services running can be seen as:

    OpenShift v3 Java EE 7 Project Services

Build the Project

  1. Trigger an initial build of your project:

  2. Monitor the builds and wait for the status to go to “complete” (this can take a few minutes):

    You can add the --watch flag to wait for updates until the build completes:

    Wait for the STATUS column to show Complete. It will take a few minutes as all the components (WildFly, MySQL, Java EE 7 application) are provisioned. Effectively, their new Docker images are created and pushed to the local registry that was started earlier.

    Hit Ctrl+C to stop watching builds after the status changes to Complete.

  3. Complete log of the build can be seen as:

  4. Check for the application pods to start:

    Note, that the “frontend” and “database” pods are now running.

  5. Determine IP of the “frontend” service:

  6. Accessing the application at http://<IP address of “frontend”>:8080/movieplex7-1.0-SNAPSHOT should work. Note the IP address may (most likely will) vary. In this case, it would be http://172.30.17.115:8080/movieplex7-1.0-SNAPSHOT. The app would not be accessible yet, as some further debugging is required to configure the firewall on Mac when OpenShift v3 is used as a Docker container. Until we figure that out, you can do docker ps in your boot2docker VM to see the list of all the containers:

    And then login to the container associated with frontend as:

    This will log in to the Docker container where you can check that the application is deployed successfully by giving the following command:

    This will print the index.html page from the application which has license at the top and rest of the page after that.

    Now once the firewall issue is resolved, this page will then be accessible on host Mac as well.

Let's summarize:

  • Cloned the OpenShift Origin and Java EE 7 sample repo
  • Started OpenShift v3 as Docker container
  • Loaded the OpenShift v3 Web Console
  • Created an OpenShift v3 project
  • Loaded Java EE 7 application template
  • Triggered a build, which deployed the application

Here are some troubleshooting tips if you get stuck.

Enjoy!

MySQL as Kubernetes Service, Access from WildFly Pod (Tech Tip #72)

Java EE 7 and WildFly on Kubernetes using Vagrant (Tech Tip #71) explained how to run a trivial Java EE 7 application on WildFly hosted using Kubernetes and Docker. The Java EE 7 application was the hands-on lab that has been delivered around the world. It uses an in-memory database that is bundled with WildFly and allows you to understand the key building blocks of Kubernetes. This is good to get you started with initial development efforts but quickly becomes a bottleneck as the database is lost when the application server goes down. This tech tip will show how to run another trivial Java EE 7 application and use MySQL as the database server. It will use Kubernetes Services to explain how MySQL and WildFly can be easily decoupled.

Let's get started!

Make sure to have a working Kubernetes setup as explained in Kubernetes using Vagrant.

The complete source code used in this blog is available at github.com/arun-gupta/kubernetes-java-sample.

Start MySQL Kubernetes pod

First step is to start the MySQL pod. This can be started by using the MySQL Kubernetes configuration file:

The configuration file used is at github.com/arun-gupta/kubernetes-java-sample/blob/master/mysql.json.

Check the status of MySQL pod:

Wait till the status changes to “Running”. It will look like:

It takes a few minutes for MySQL server to be in that state, so grab a coffee or a quick fast one miler!

Start MySQL Kubernetes service

Pods, and the IP addresses assigned to them, are ephemeral. If a pod dies then Kubernetes will recreate that pod because of its self-healing features, but it might recreate it on a different host. Even if it is on the same host, a different IP address could be assigned to it. And so any application cannot rely upon the IP address of the pod.

A Kubernetes service is an abstraction which defines a logical set of pods. A service is typically back-ended by one or more physical pods (associated using labels), and it has a permanent IP address that can be used by other pods/applications. For example, the WildFly pod cannot directly connect to a MySQL pod but can connect to the MySQL service. In essence, a Kubernetes service offers clients an IP and port pair which, when accessed, redirects to the appropriate backends.

Kubernetes Services

Let's start the MySQL service.

The configuration file used is at github.com/arun-gupta/kubernetes-java-sample/blob/master/mysql-service.json. In this case, only a single MySQL instance is started. But multiple MySQL instances can be easily started and WildFly Pod will continue to refer to all of them using MySQL Service.

Check the status/IP of the MySQL service:

Start WildFly Kubernetes Pod

WildFly Pod must be started after MySQL service has started. This is because the environment variables used for creating JDBC resource in WildFly are only available after the service is up and running. Specifically, the JDBC resource is created as:

$MYSQL_SERVICE_HOST and $MYSQL_SERVICE_PORT environment variables are populated by Kubernetes as explained here.

This is shown at github.com/arun-gupta/docker-images/blob/master/wildfly-mysql-javaee7/customization/execute.sh#L44.

Start WildFly pod:

The configuration file used is at github.com/arun-gupta/kubernetes-java-sample/blob/master/wildfly.json.

Check the status of pods:

Wait until the WildFly pod’s status has changed to Running. This could take a few minutes, so it may be time to grab another quick miler!

Once the container is up and running, you can check /opt/jboss/wildfly/standalone/configuration/standalone.xml in the WildFly container and verify that the connection URL indeed contains the correct IP address. Here is how it looks on my machine:

The updated status (after the container is running) would look like as shown:

Access the Java EE 7 Application

Note down the HOST IP address of the WildFly container and access the application as:

to see the output as:

Or viewed in the browser as:

Java EE 7 Application using WildFly, MySQL, and Kubernetes

Debugging Kubernetes and Docker

Login to the Minion-1 VM:

Log in as root:

Default root password for VM images created by Vagrant is “vagrant”.

List of Docker containers running on this VM can be seen as:

Last 10 lines of the WildFly log (after application has been accessed a few times) can be seen as:

Similarly, MySQL log is seen as:
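
Hedged commands for this debugging section (the container ids are placeholders taken from the docker ps output):

    docker ps                                      # list the containers running on this minion
    docker logs --tail=10 <wildfly-container-id>   # last 10 lines of the WildFly log
    docker logs --tail=10 <mysql-container-id>     # last 10 lines of the MySQL log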

Enjoy!