Tag Archives: containers

Gossip-based Kubernetes Cluster on AWS using Kops

Creating a Kubernetes cluster using Kops requires a top-level domain or a subdomain and setting up Route 53 hosted zones. This domain allows the worker nodes to discover the master and the master to discover all the etcd servers. It is also needed for kubectl to be able to talk directly to the master. This works well, but is an additional hassle for developers.

Kubernetes Logo

Kops 1.6.2 adds experimental support for gossip-based discovery of nodes, using Weave Mesh. This makes the process of setting up a Kubernetes cluster with Kops DNS-free and much simpler.

Let’s take a look! The commands behind each of these steps are sketched after the list below.

  1. Install or upgrade kops:
  2. Check the version:
  3. Create an S3 bucket as “state store”:
  4. Create a Kubernetes cluster:
    It shows the output as:
    Wait for a few minutes for the cluster to be created.
  5. Validate the cluster:
  6. Get the list of nodes using kubectl:
  7. Deleting a cluster is pretty straight forward as well:
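
The commands behind these steps are roughly as follows. This is a sketch: the bucket name, cluster name and zone are placeholders, and the install step assumes Homebrew on macOS. The key detail for gossip mode is that the cluster name must end in .k8s.local.

  # 1. Install or upgrade kops
  brew update && brew install kops        # or: brew upgrade kops

  # 2. Check the version
  kops version

  # 3. Create an S3 bucket as the "state store"
  aws s3 mb s3://kops-state-store-example
  export KOPS_STATE_STORE=s3://kops-state-store-example

  # 4. Create a gossip-based cluster (name ends in .k8s.local, so no DNS is needed)
  kops create cluster example.k8s.local --zones us-west-2a --yes

  # 5. Validate the cluster
  kops validate cluster

  # 6. Get the list of nodes using kubectl
  kubectl get nodes

  # 7. Delete the cluster
  kops delete cluster example.k8s.local --yes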

That’s it!

github.com/arun-gupta/kubernetes-java-sample provides several examples of getting started with Kubernetes.

File issues at github.com/kubernetes/kops/issues.

 

Getting Started with Oracle Container Cloud Service

Oracle Cloud Container Logo

Oracle Container Cloud Service is Oracle’s entry into the world of managed container services. There are plenty of existing options:

  • Docker for AWS or Azure
  • Amazon Elastic Container Service
  • Google Container Engine
  • Azure Container Service
  • DC/OS by Mesosphere
  • OpenShift by Red Hat

This blog will explain how to get started with Oracle Container Cloud Service. A comparison of the different managed services has been started at Managed Container Service.

Before we jump into all the details, let’s try to clarify a couple of things about this offering from Oracle.

First, a bit about the name. “Oracle Cloud Container Service” seems more natural and intuitive, since it’s a Container Service in Oracle Cloud. So why is it called “Oracle Container Cloud Service”? Is it because “Oracle Container” is Oracle’s container orchestration framework and it’s a Cloud Service? Could that mean other orchestration frameworks could be offered as a service as well?

Second, don’t confuse it with Oracle Application Container Cloud Service, which allows you to build cloud-native 12-factor applications using a polyglot platform. Now, that confuses me further. Can the Container Service not be used to build 12-factor apps? Are cloud-native and containers mutually exclusive?

Anyway, this is causing more confusion than clarification :) Let’s move on!

One last thing before we dig in. Many thanks to Bruno Borges (@brunoborges) for pushing the buttons for cloud service activation. I don’t know how long the free trial takes to be activated otherwise. And an even bigger thanks to Mike Raab (@mikeraab) for helping me understand the details of Container Service.

Let’s get started!

  1. Get a Free Trial for Oracle Cloud. It takes a few days for the trial to be activated. The trial expires after 30 days, so make sure you have time planned for the evaluation. Each free trial comes with 6 OC3 nodes. OC3 is one of the compute node types available on Oracle Cloud; an OC3 node has 1 OCPU (think vCPU on Amazon Web Services) and 7.5 GB RAM.
  2. Once the account is activated, you get an email as shown: oracle-cloud-welcome-email
    The important pieces of information are the username, temporary password, identity domain and My Services URL. The My Account URL link is only for account administration.
  3. Click on the My Services URL and log in using the values from the email: oracle-cloud-services-login
    You get an opportunity to change your password afterwards.
  4. The Oracle Cloud dashboard shows up after logging in: oracle-cloud-services-dashboard
    A default set of services and their status is shown. The dashboard can also be customized by clicking on the Customize Dashboard button on the top right.
  5. Getting to the Oracle Container Cloud Service Console is a bit non-intuitive, but you get it once you know it. Select the Container Cloud Service tab, click on the top-right corner and select Open Service Console: oracle-cloud-container-service-console-access
    Or you can directly click on the link for Oracle Container Cloud Service Console in the welcome email. The Service Console looks like: oracle-cloud-container-service-console
  6. Click on Create Service: oracle-cloud-container-service-definition
    Oracle Container Cloud Service Instance Details provides more details about each of the fields. What is a worker node? We’ll talk about it a bit later. But essentially this is where the container runs. We are asking for only one worker node.

    It’s worth noting the different capacities available for the worker node:
    Oracle Cloud CPUs

    Confirm all the settings:

    oracle-cloud-container-service-definition-confirmation

    and click on Create to start the service creation.

  7. Wait for about 30 minutes for the service to be created. After that the Service Console looks like:
    oracle-cloud-container-service-console-with-service
    Wait, we asked for one worker node, so how come two OCPUs are being consumed? Each Oracle Container Cloud Service has at least two nodes – a manager node and one or more worker nodes. The manager node is responsible for administering all the workers and orchestrating containers on the different worker nodes. Worker nodes can be organized in different resource pools to meet different workflow needs.

    And, so ~30 minutes are spent provisioning two nodes and installing container service components on each node. This is also evident in the service logs shown in Service Create and Delete History shown in the main Console page:

    The lack of timestamps in the activity log feels a bit too clean.

  8. One main question that I kept wondering all along is “when am I ready to deploy the containers?”. Apparently, not yet! A couple more steps, so hang in there …

    In your service, click on the top-right icon to select another menu:
    oracle-cloud-container-console-open

    Select Container Console.  So, now you are transitioning from Oracle Container Cloud Service Console to Container Console. Make sure to use the right terminology otherwise it gets confusing fast.

  9. This attempts to open the Container Console but prompts the usual warning: oracle-cloud-container-console-open-warning

    Just click on the Proceed link. A typical production setup would be configured correctly with certificates, and so this warning would not appear.

  10. This brings up a login screen:oracle-cloud-container-console-login
  11. Use the username and password specified during service creation earlier. Click on Login to see Container Console:oracle-cloud-container-console-default

Are we there yet?

Yes, now is the time to deploy containers. But we’ll cover that in a subsequent blog!

Just to recap on what is needed to get started with Oracle Container Cloud Service …

  1. Register for Oracle Cloud trial
  2. Login to main Oracle Cloud Dashboard
  3. Create an Oracle Container Cloud Service Instance
  4. Oracle Container Cloud Service Instance Console
  5. Container Console

All the steps need to be done only once, but a console inside a console inside a dashboard feels like Inception. The good thing is that the IP address of the Container Console is a public IP address served by Oracle Cloud and can be used from anywhere.

Oracle Container Cloud Service Docs have lot more details about building and deploying applications using this Console.

In the next blog, we’ll see what it takes to run a Couchbase container using this console, and possibly a cluster of Couchbase across multiple hosts.

Want to learn more about running Couchbase in containers?

  • Couchbase on Containers
  • Couchbase Forums
  • Couchbase Developer Portal
  • @couchbasedev and @couchbase

Source: https://blog.couchbase.com/2017/february/getting-started-oracle-container-cloud-service

Docker Container Anti Patterns

This blog will explain 10 container anti-patterns that I’ve seen over the past few months:

  1. Data or logs in containers – Containers are ideal for stateless applications and are meant to be ephemeral. This means no data or logs should be stored in the container, otherwise they’ll be lost when the container terminates. Instead, use volume mapping to persist them outside the containers. The ELK stack could be used to store and process logs. If managed volumes are used during the early testing process, then remove them using the -v switch with the docker rm command.
  2. IP addresses of a container – Each container is assigned an IP address. Multiple containers communicate with each other to create an application; for example, an application deployed on an application server will need to talk to a database. Existing containers are terminated and new containers are started all the time. Relying upon the IP address of the container will require constantly updating the application configuration, which makes the application fragile. Instead, create services. This provides a logical name that can be referred to independent of the growing and shrinking number of containers, and it also provides basic load balancing.
  3. Run a single process in a container – A Dockerfile can have only one CMD and ENTRYPOINT. Often, CMD will use a script that performs some configuration of the image and then starts the process. Don’t try to start multiple processes using that script. It’s important to follow the separation-of-concerns pattern when creating Docker images; running multiple processes makes managing your containers, collecting logs and updating each individual process that much harder. Instead, consider breaking up the application into multiple containers and managing them independently.
  4. Don’t use docker exec – The docker exec command starts a new command in a running container. This is useful for attaching a shell using docker exec -it {cid} bash. But other than that, the container is already running the process that it’s supposed to be running.
  5. Keep your image lean – Create a new directory and include the Dockerfile and other relevant files in that directory. Also consider using .dockerignore to exclude logs, source code, etc. before creating the image. Make sure to remove any downloaded artifacts after they are unzipped.
  6. Create image from a running container – A new image can be created using the docker commit command. This is useful when changes have been made in a running container, but images created this way are non-reproducible. Instead, make the changes in the Dockerfile, terminate existing containers and start a new container with the updated image.
  7. Security credentials in Docker image – Do not store security credentials in the Dockerfile. They are in clear text and checked into a repository. This makes them completely vulnerable. Use -e to specify passwords as runtime environment variables. Alternatively, --env-file can be used to read environment variables from a file. Another approach is to use CMD or ENTRYPOINT to specify a script. This script will pull the credentials from a third party and then configure your application.
  8. latest tag – Starting with an image like couchbase is tempting. If no tag is specified, then a container is started using the image couchbase:latest. This image may not actually be the latest and may instead refer to an older version. Taking an application into production requires a fully controlled environment with an exact version of the image. Read the Docker: The latest confusion post by fellow Docker Captain @adrianmouat. Make sure to always use a specific tag when running a container. For example, use couchbase:enterprise-4.5.1 instead of just couchbase.
  9. Impedance mismatch – Don’t use different images, or even different tags, in dev, test, staging and production environments. The image that is the “source of truth” should be created once and pushed to a repo. That image should be used for all environments going forward. In some cases, you may consider running your unit tests on the WAR file as part of the Maven build and then creating the image. But any system integration testing should be done on the image that will be pushed to production.
  10. Publishing ports – Don’t use -P to publish all the exposed ports. This allows you to run multiple containers and publish their exposed ports, but it also means that all the ports will be published. Instead use -p to publish specific ports. A combined example covering items 7, 8 and 10 follows this list.
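
Here is a sketch tying items 7, 8 and 10 together: an explicit tag instead of latest, a specific published port instead of -P, and credentials supplied from an environment file instead of being baked into the image. The file name and port are illustrative.

  # Explicit tag, specific port, credentials supplied at runtime
  docker run -d --name db \
    -p 8091:8091 \
    --env-file ./couchbase.env \
    couchbase:enterprise-4.5.1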

Adding more based upon discussion on twitter …

  1. Root user – Don’t run containers as the root user. The host and the container share the same kernel. If the container is compromised, a root user can do more damage to the underlying host. Use RUN groupadd -r couchbase && useradd -r -g couchbase couchbase to create a group and a user in it. Use the USER instruction to switch to that user. Each USER creates a new layer in the image. Avoid switching the user back and forth to reduce the number of layers. Thanks to @Aleksandar_78 for this tip!
  2. Dependency between containers – Often applications rely upon containers to be started in a certain order. For example, a database container must be up before an application can connect to it. The application should be resilient to such changes as the containers may be terminated or started at any time. In this case, have the application container wait for the database connection to succeed before proceeding further. Do not use wait-for scripts in Dockerfile for the containers to startup in a specific order. Particularly waiting for a certain number of seconds for a particular container to start is very fragile. Thanks to @ratnopam for this tip!

What other anti-patterns do you follow?

Docker for Java developers is a self-paced hands-on workshop that explains how to get started with Docker for Java developers.

Interested in a more deep dive tutorial? Watch this 2-hours tutorial from JavaOne!

couchbase.com/containers shows how to run Couchbase in a variety of frameworks.

Source: blog.couchbase.com/2016/october/docker-container-anti-patterns

Docker on Windows 2016 Server

This blog is the second part of a multi-part series. The first part showed how to set up Windows Server 2016 as a VirtualBox VM. This second part will show how to configure Docker on the Windows 2016 VM.

  1. Start an elevated PowerShell session:docker-windows-2016-22
  2. Run the script to install Docker:
    This will install the PowerShell module, enable containers feature and install Docker.

    docker-windows-2016-23

    The VM needs to be restarted in order for the containers to be enabled. Refer to Container Host Deployment – Windows Server for more detailed instructions.

  3. The VM reboots. Start a PowerShell session and check the Docker version using the docker version command: docker-windows-2016-24
    More details about Docker can be found using the docker info command: docker-windows-2016-25
  4. Run your first Docker container using the docker run -it -p 80:80 microsoft/iis command: docker-windows-2016-26
    This will download the Microsoft IIS server Docker image. This is going to take a while, so please be patient!
  5. Once the 8.9 GB image is downloaded (after a while), the IIS server is started for you. Check the list of images using the docker images command and the list of running containers using the docker ps command: docker-windows-2016-27
    More details about the container can be found using the docker inspect command:

  6. The exact IP address of the container can be found using the command shown in the sketch after this list:

    IIS main page is accessible at http://<container-ip>, as shown below:

    docker-windows-2016-28
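
A sketch of the command referenced in step 6, assuming the container is attached to the default nat network that Docker creates on Windows Server 2016 (replace the container id):

  docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" <container-id>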

The next part will show how to create your own Docker image on Windows Server 2016.

Source: blog.couchbase.com/2016/october/docker-on-windows-2016-server

Minikube – Rapid Dev & Testing for Kubernetes

One of the attendees from Kubernetes for Java Developers training suggested to try minikube for simplified Kubernetes dev and testing. This blog will show how to get started with minikube using a simple Java application.

minikube-logo

Minikube starts a single node Kubernetes cluster on your local machine for rapid development and testing. Requirements lists the exact set of requirements for different operating systems.

This blog will show:

  • Start one node Kubernetes cluster
  • Run Couchbase service
  • Run Java application
  • View Kubernetes Dashboard

All Kubernetes resource description files used in this blog are at github.com/arun-gupta/kubernetes-java-sample/tree/master/maven.

Start Kubernetes Cluster using Minikube

Create a new directory with the name minikube.

In that directory, download kubectl CLI:

Download minikube CLI:

Start the cluster:
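
A sketch of the download and start steps, assuming macOS; the release versions in the URLs are illustrative, so check the latest kubectl and minikube releases before copying them:

  # Download the kubectl CLI (version is an assumption)
  curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.4.0/bin/darwin/amd64/kubectl
  chmod +x kubectl

  # Download the minikube CLI (version is an assumption)
  curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.11.0/minikube-darwin-amd64
  chmod +x minikube

  # Optionally move both binaries to a directory on the PATH, then start the cluster
  ./minikube start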

The list of nodes can be seen:

More details about the cluster can be obtained using the kubectl cluster-info command:

Behind the scenes, a Virtual Box VM is started.

Complete set of commands supported can be seen by using --help:
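
The node listing, cluster information and help commands referenced above look roughly like:

  kubectl get nodes
  kubectl cluster-info
  minikube --help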

Run Couchbase Service

Create a Couchbase service:
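
A sketch of the command, assuming the configuration file linked below has been saved in the current directory:

  kubectl create -f couchbase-service.yml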

This will start a Couchbase service. The service is using the pods created by the replication controller. The replication controller creates a single node Couchbase server.

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/couchbase-service.yml and looks like:

Run Java Application

Run the application:
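
Again a sketch, assuming the job definition linked below is in the current directory:

  kubectl create -f bootiful-couchbase.yml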

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/bootiful-couchbase.yml and looks like:

This is a run-once job which runs a Java (Spring Boot) application and upserts (inserts or updates) a JSON document in Couchbase.

In this job, COUCHBASE_URI environment variable value is set to couchbase-service. This is the service name created earlier. Docker image used for this service is arungupta/bootiful-couchbase and is created using fabric8-maven-plugin as shown at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/webapp/pom.xml#L57-L68. Specifically, the command for the Docker image is:

This ensures that COUCHBASE_URI environment variable is overriding spring.couchbase.bootstrap-hosts property as defined in application.properties of the Spring Boot application.

Kubernetes Dashboard

Kubernetes 1.4 included an updated dashboard. For minikube, this can be opened using the following command:
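
Minikube has a built-in shortcut for this:

  minikube dashboard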

 

The default view is shown below:
minikube-dashboard-1-4

But in our case, a few resources have already been created, and so it will look as shown:

minikube-dashboard-couchbase

Notice, our Jobs, Replication Controllers and Pods are shown here.

Shutdown Kubernetes Cluster

The cluster can be easily shutdown:
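
A sketch of the shutdown commands; minikube delete additionally removes the VM:

  minikube stop
  # or, to remove the VM entirely:
  minikube delete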

couchbase.com/containers provides more details about running Couchbase using different orchestration frameworks. Further references:

  • Couchbase Forums or StackOverflow
  • Follow us at @couchbasedev or @couchbase
  • Read more about Couchbase Server

Source: blog.couchbase.com/2016/september/minikube-rapid-dev–testing-kubernetes

Getting Started with Kubernetes 1.4 using Spring Boot and Couchbase

Kubernetes 1.4 was released earlier this week. Read the blog announcement and CHANGELOG. There are quite a few new features in this release but the key ones that I’m excited about are:

  • Install Kubernetes using the kubeadm command. This is in addition to the usual mechanism of downloading from https://github.com/kubernetes/kubernetes/releases. The kubeadm init and kubeadm join commands look very similar to docker swarm init and docker swarm join for Docker Swarm Mode.
  • Federated Replica Sets
  • ScheduledJob allows batch jobs to be run at regular intervals.
  • Constraining pods to a node and affinity and anti-affinity of pods
  • Priority scheduling of pods
  • Nice looking Kubernetes Dashboard (more on this later)

This blog will show:

  • Create a Kubernetes cluster using Amazon Web Services
  • Create a Couchbase service
  • Run a Spring Boot application that stores a JSON document in Couchbase

All the resource description files in this blog are at github.com/arun-gupta/kubernetes-java-sample/tree/master/maven.

Start Kubernetes Cluster

Download the binary from github.com/kubernetes/kubernetes/releases/download/v1.4.0/kubernetes.tar.gz and extract it.

Include kubernetes/cluster in PATH

Start a 2-node Kubernetes cluster:
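
A sketch of these three steps, assuming the AWS provider; kube-up.sh reads the provider and node count from environment variables:

  # Download and extract the release
  curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.4.0/kubernetes.tar.gz
  tar xzf kubernetes.tar.gz

  # Include kubernetes/cluster in PATH
  export PATH=$PATH:$(pwd)/kubernetes/cluster

  # Start a 2-node cluster on AWS
  KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh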

The log will be shown as:

This shows that the Kubernetes cluster has started successfully.

Deploy Couchbase Service

Create Couchbase service and replication controller:

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/couchbase-service.yml.

This creates a Couchbase service and the backing replication controller. Name of the service is couchbase-service. This will be used later by the Spring Boot application to communicate with the database.

Check the status of pods:

Note how the pod status changes from ContainerCreating to Running. The image is downloaded and started in the meantime.

Run Spring Boot Application

Run the application:

The configuration file is at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/bootiful-couchbase.yml. In this service, COUCHBASE_URI environment variable value is set to couchbase-service. This is the service name created earlier.

Docker image used for this service is arungupta/bootiful-couchbase and is created using fabric8-maven-plugin as shown at github.com/arun-gupta/kubernetes-java-sample/blob/master/maven/webapp/pom.xml#L57-L68. Specifically, the command for the Docker image is:

This ensures that COUCHBASE_URI environment variable is overriding spring.couchbase.bootstrap-hosts property as defined in application.properties of the Spring Boot application.

Get the logs:

The main output statement to look for is:

This indicates that the JSON document is upserted (either inserted or updated) in the Couchbase database.

Kubernetes Dashboard

The Kubernetes Dashboard now looks more comprehensive and is claimed to have 90% parity with the CLI. Use the kubectl.sh config view command to view the configuration information about the cluster. It looks like:

The clusters.cluster.server property value shows the location of the Kubernetes master. The users property shows two users that can be used to access the dashboard. The second one uses basic authentication, so copy the username and password property values. In our case, the Dashboard UI is accessible at https://52.40.9.27/ui.

kubernetes-dashboard-1-4

All the Kubernetes resources can be easily seen in this fancy dashboard.

Shutdown Kubernetes Cluster

Finally, shutdown the Kubernetes cluster:
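
The shutdown uses the companion script from the same kubernetes/cluster directory:

  KUBERNETES_PROVIDER=aws kube-down.sh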

couchbase.com/containers provides more details about running Couchbase using different orchestration frameworks.

Further references:

  • Couchbase Forums or StackOverflow
  • Follow us at @couchbasedev or @couchbase
  • Read more about Couchbase Server

Source: blog.couchbase.com/2016/september/kubernetes-1.4-spring-boot-couchbase

Deployment Pipeline using Docker, Jenkins, Java and Couchbase

This blog explains how to create a Deployment Pipeline using Jenkins and Docker for a Java application talking to a database.

Jenkins supports the creation of pipelines. They are built with simple text scripts that use a Pipeline DSL (domain-specific language) based on the Groovy programming language.

The script, typically called Jenkinsfile, defines multiple steps to execute both simple and complex tasks according to the parameters that you establish. Once created, pipelines can build code and orchestrate the work required to drive applications from commit to delivery.

A pipeline consists of steps, nodes and stages. A pipeline is executed on a node – a computer that is part of the Jenkins installation. A pipeline often consists of multiple stages, and a stage consists of multiple steps. Read Getting Started with Pipeline for more details.

For our application, here is the basic flow:

docker-pipeline-jenkins

Complete source code for the application used is at github.com/arun-gupta/docker-jenkins-pipeline.

The application is defined in the webapp directory. It opens a connection to the Couchbase database and stores a simple JSON document using Couchbase Java SDK. The application also has a test that verifies that the database indeed contains the document that was persisted.

Many thanks to @alexsotob for helping me with Jenkins configuration.

Let’s get started!

Download and Install Jenkins

  • Download Jenkins from jenkins.io. This was tested with Jenkins 2.21.
  • Start Jenkins (a sketch of this command appears after this list):
    This command starts Jenkins by specifying the home directory where all the configuration information is stored. It also defines the port on which Jenkins is listening, 9090 in this case.
  • First start of Jenkins shows the following message in the console:
    Copy the password shown here. This will be used to unlock Jenkins.
  • Access the Jenkins console at localhost:9090 and paste the password:docker-pipeline-jenkins-unlockClick on Next.
  • Create the first admin user as shown:
    docker-pipeline-jenkins-create-admin-user
    Click on Save and Finish.
  • Click on Install suggested plugins:docker-pipeline-jenkins-install-suggested-plugins
    A bunch of default plugins are installed:docker-pipeline-jenkins-installing-suggested-plugins
    Found it surprising that Ant and Subversion are the default plugins.
  • Login screen is prompted.
    docker-pipeline-jenkins-login
    Enter the username and password specified earlier.
  • Finally, Jenkins is ready to use:
    docker-pipeline-jenkins-start-using
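
A sketch of the startup command from the first bullet above; the home directory is a placeholder and jenkins.war is assumed to be in the current directory:

  # Start Jenkins with a custom home directory, listening on port 9090
  JENKINS_HOME=~/tools/jenkins java -jar jenkins.war --httpPort=9090

  # The initial admin password printed in the console is also stored at
  # $JENKINS_HOME/secrets/initialAdminPassword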

That’s quite a few steps to get started with basic Jenkins. Do I really have to jump through all these hoops to get started with Jenkins? Is there an easier, simpler, dumber, lazier way to start Jenkins? Follow convention-over-configuration and give me a one-click, pre-configured installation.

Install Jenkins Plugins

Install the required plugins in Jenkins.

  1. If your Java project is built using Maven, then you need to configure Maven in Jenkins. Click on Manage Jenkins, Global Tool Configuration, Maven installations, and specify the location of Maven.docker-pipeline-jenkins-configure-maven
    Name the tool Maven3, as that is the name used in the configuration later. Again a bit lame: why can’t Jenkins pick up the default location of Maven instead of expecting the user to specify a location?
  2. Click on Manage Jenkins, Manage Plugins, Available tab, search for docker pipe. Select CloudBees Docker Pipeline, click on Install without restart.
    docker-pipeline-jenkins-pipeline-plugin
    Click on Install without restart. The Docker Pipeline plugin understands the Jenkinsfile and executes the commands listed there.
  3. Next screen shows the list of plugins that are installed:docker-pipeline-jenkins-pipeline-plugin-restart-jenkins
    The last line shows that the CloudBees Docker Pipeline plugin is installed successfully. Select the Restart Jenkins checkbox. This will restart Jenkins as well.

Create Jenkins Job

Let’s create a job in Jenkins that will run the pipeline.

  1. After Jenkins restarts, it shows the login screen. Enter the username and password created earlier. This brings you back to Installing Plugins/Upgrades page. Click on the Jenkins icon in the top left corner to see the main dashboard:docker-pipeline-jenkins-dashboard
  2. Click on create new jobs, give the name as docker-jenkins-pipeline and choose the type as Pipeline: docker-pipeline-jenkins-create-project
    Click on OK.
  3. Configure Pipeline as shown:
    docker-pipeline-jenkins-configure-pipeline
    A local git repo is used in this case. You can certainly choose a repo hosted on GitHub. Further, this repo can be configured with a git hook or polled at a constant interval to trigger the pipeline. Click on Save to save the configuration.

Run Jenkins Build

Before you start the job, the Couchbase database needs to be explicitly started as:

This will be resolved after #9 is fixed.  Make sure you can access Couchbase at http://localhost:8091, use Administrator as the login and password as the password. Click on Data Buckets tab and see the books bucket created.

docker-pipeline-couchbase-books

Click on Build Now and you should see an output similar to:

docker-pipeline-jenkins-build-run

All green is good!

Let’s try to understand what happened behind the scene.

Jenkinsfile describes how the pipeline is built. At the top level, it has four stages – Package, Create Docker Image, Run Application and Run Tests. Each stage is shown as a box in the Jenkins dashboard, and the total time taken for each stage is shown in the box. The CLI equivalents of these stages are sketched after the list below.

Let’s understand what happens in each stage.

  • Package – Application source code lives in the webapp directory. The Maven command mvn clean package -DskipTests is used to create a JAR file of the application. Note that the Maven project also includes the tests, which are explicitly skipped using -DskipTests. Typically, tests would be in a separate downstream project. The Maven project creates a fat JAR file of the application and includes all the dependencies.
  • Create Docker Image – The Docker image of the application is built using the Dockerfile in the webapp directory. The image simply includes the fat JAR and runs it using java -jar. Each image is tagged with the build number using ${env.BUILD_NUMBER}.
  • Run Application – Running the application involves running the application Docker container. The IP address of the database container is identified using the docker inspect command. The database container and the application container are both running in the default bridge network, which allows the two containers to communicate with each other. Another enhancement would be to run the pipeline in a swarm mode cluster. This would require creating and using an overlay network.
  • Run Tests – Tests are run against the container using the mvn test command. The test results are captured either way; this stage also shows the usage of a try/catch/finally block in the Jenkinsfile. If the tests pass, the image is pushed to Docker Hub. In this case, it is available at hub.docker.com/r/arungupta/docker-jenkins-pipeline/tags/.
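
These stages roughly map to the following CLI commands, which the Jenkinsfile drives through the Docker Pipeline plugin. This is a sketch of the equivalent commands, not the repository’s actual Jenkinsfile; paths are simplified and BUILD_NUMBER is supplied by Jenkins:

  # Package: build the fat JAR, skipping tests (run in the webapp directory)
  mvn clean package -DskipTests

  # Create Docker Image: tag the image with the Jenkins build number
  docker build -t arungupta/docker-jenkins-pipeline:$BUILD_NUMBER webapp

  # Run Application: start the application container on the default bridge network
  docker run -d --name webapp arungupta/docker-jenkins-pipeline:$BUILD_NUMBER

  # Run Tests: run the tests, then push the image if they pass
  mvn test
  docker push arungupta/docker-jenkins-pipeline:$BUILD_NUMBER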

Some TODOs …

  • Move the tests to a downstream project (#7)
  • Use Git hook or poll to trigger pipeline (#8)
  • Automate database startup/shutdown (#9)
  • Run pipeline in a cluster of Docker Engines with Swarm mode (#10)
  • Show alternate configuration to push image to bintray (#11)

Another pain point is that global variables syntax does not seem to be documented anywhere. It is only available at <JENKINS-HOST>:<JENKINS-PORT>/job/docker-jenkins-pipeline/pipeline-syntax/globals. This is again slightly lame!

“not impossible, just not implemented yet” #sadpanda

Some further references to read:

  • Getting Started with the Jenkinsfile
  • CloudBees Docker Pipeline Plugin
  • CloudBees Docker Pipeline Plugin User Guide
  • Jenkinsfile DSL Reference
  • Jenkins Pipeline Talk from JavaZone 2016

More information about Couchbase:

  • Couchbase Developer Portal
  • Couchbase Forums
  • @couchbasedev or @couchbase

Feel free to file bugs at github.com/arun-gupta/docker-jenkins-pipeline/issues or send PR.

Source: blog.couchbase.com/2016/september/deployment-pipeline-docker-jenkins-java-couchbase

Docker Service and Swarm Mode to Create Couchbase Cluster

Docker 1.12 introduced Services. A replicated, distributed and load balanced service can be easily created using docker service create command. A “desired state” of the application, such as run 3 containers of Couchbase, is provided and the self-healing Docker engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, containers on that node are started on a different node.

This blog will show how to setup a Couchbase cluster using Docker Services.

Many thanks to @marcosnils, another fellow Docker Captain, to help me debug the networking!

Couchbase Cluster

A cluster of Couchbase Servers is typically deployed on commodity servers. Couchbase Server has a peer-to-peer topology where all the nodes are equal and communicate to each other on demand. There is no concept of master nodes, slave nodes, config nodes, name nodes, head nodes, etc, and all the software loaded on each node is identical. It allows the nodes to be added or removed without considering their “type”. This model works particularly well with cloud infrastructure in general.

A typical Couchbase cluster creation process looks like:
  • Start Couchbase: Start n Couchbase servers
  • Create cluster: Pick any server, and add all other servers to it to create the cluster
  • Rebalance cluster: Rebalance the cluster so that data is distributed across the cluster
In order to automate using Docker Services, the cluster creation is split into a “master” and “worker” service.
docker-service-couchbase-cluster
The master service has only one replica. This provides a single reference point to start the cluster creation. This service also exposes port 8091. It allows the Couchbase Web Console to be accessible from outside the cluster.
The worker service uses the exact same image as the master service. This keeps the cluster homogeneous, which makes it easy to scale the cluster.
Let’s get started! A sketch of the docker commands for each group of numbered steps below follows the corresponding list.

Setup Swarm Mode on Ubuntu

  1. Launch an Ubuntu instance on Amazon. This blog used an m4.large instance size for the AMI.
  2. Install Docker:
  3. Docker Swarm mode is an optional feature and needs to be explicitly enabled. Initialize Swarm mode:
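
A sketch of the install and init steps referenced in the list; get.docker.com installed Docker 1.12 at the time of writing:

  # Install Docker
  curl -sSL https://get.docker.com/ | sh

  # Initialize Swarm mode on this node (it becomes the manager)
  docker swarm init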

Create Couchbase “master” Service

  1. Create an overlay network:
    This is required so that multiple Couchbase Docker containers in the cluster can talk to each other.
  2. Create a “master” service:
    This image is created using the Dockerfile here. This Dockerfile uses a configuration script to configure the base Couchbase Docker image. First, it uses the Couchbase REST API to set up the memory quota; set up the index, data and query services; set security credentials; and load a sample data bucket. Then, it invokes the appropriate Couchbase CLI commands to add the Couchbase node to the cluster, or to add the node and rebalance the cluster. This is based upon three environment variables:
    • TYPE: Defines whether the joining node is a worker or the master
    • COUCHBASE_MASTER: Name of the master service
    • AUTO_REBALANCE: Defines whether the cluster needs to be rebalanced
    For the master service, the TYPE environment variable is set to MASTER and so no additional configuration is done on the Couchbase image.

    This service also uses the previously created overlay network named couchbase. It exposes the port 8091 that makes the Couchbase Web Console accessible outside the cluster. This service contains only one replica of the container.

  3. Check status of the Docker service:

    It shows that the service is running. The desired and actual number of replicas is 1, and thus they match.

  4. Check the tasks in the service:

    This shows that the container is running.

  5. Access Couchbase Web Console using public IP address and it should look like:docker-service-couchbase-login
    The image used for the service is configured with the username Administrator and the password password. Enter the credentials to see the console:
    docker-service-couchbase-web-console
  6. Click on Server Nodes to see how many Couchbase nodes are part of the cluster. As expected, it shows only one node:docker-service-couchbase-one-active-server
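
A consolidated sketch of the commands for these steps; the network, service name and image follow the names used in this post:

  # 1. Create an overlay network for the Couchbase containers
  docker network create -d overlay couchbase

  # 2. Create the "master" service: one replica, TYPE=MASTER, port 8091 published
  docker service create \
    --name couchbase-master \
    --replicas 1 \
    --network couchbase \
    -p 8091:8091 \
    -e TYPE=MASTER \
    arungupta/couchbase:swarm

  # 3. Check the status of the Docker service
  docker service ls

  # 4. Check the tasks in the service ("docker service tasks" on some 1.12 builds)
  docker service ps couchbase-master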

Create Couchbase “worker” Service

  1. Create “worker” service:

    This service also creates a single replica of Couchbase using the same arungupta/couchbase:swarm image. The key differences here are:

    • TYPE environment variable is set to WORKER. This adds a worker Couchbase node to be added to the cluster.
    • COUCHBASE_MASTER environment variable is passed the name of the master service,  couchbase-master.couchbase in our case. This uses the service discovery mechanism built into Docker for the worker and the master to communicate.
  2. Check service:
  3. Checking the Couchbase Web Console shows the updated output:
    docker-service-couchbase-one-pending-server
    It shows that one server is pending rebalance. During the worker service creation, the AUTO_REBALANCE environment variable could have been set to true or false to enable or disable rebalancing. Setting it to false ensures that the node is only added to the cluster but the cluster itself is not rebalanced. Rebalancing the cluster requires re-distributing the data across multiple nodes of the cluster. The recommended way is to add multiple nodes, and then manually rebalance the cluster using the Web Console.
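
A sketch of the worker service creation, using the environment variables described above; AUTO_REBALANCE is set to false so that rebalancing remains a manual step:

  docker service create \
    --name couchbase-worker \
    --replicas 1 \
    --network couchbase \
    -e TYPE=WORKER \
    -e COUCHBASE_MASTER=couchbase-master.couchbase \
    -e AUTO_REBALANCE=false \
    arungupta/couchbase:swarm

  # Check the service
  docker service ls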

Add Couchbase Nodes by Scaling Docker Service

  1. Scale the service: 

  2. Check the service:
    This shows that 2 replicas of worker are running.
  3. Check the Couchbase Web Console:
    docker-service-couchbase-two-pending-servers
    As expected, two servers are now added to the cluster and pending rebalance.
  4. Optionally, you can rebalance the cluster by clicking on the Rebalance button, and it will show as: docker-service-couchbase-rebalancing
    After the rebalancing is complete, the Couchbase Web Console is updated as shown:
    docker-service-couchbase-rebalanced
  5. See all the running containers using docker ps:
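
A sketch of the scaling commands:

  # Scale the worker service to two replicas
  docker service scale couchbase-worker=2

  # Check the service and the running containers
  docker service ls
  docker ps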

In addition to creating a cluster, Couchbase Server supports a range of high availability and disaster recovery (HA/DR) strategies. Most HA/DR strategies rely on a multi-pronged approach of maximizing availability, increasing redundancy within and across data centers, and performing regular backups.

Now that your Couchbase cluster is ready, you can run your first sample application.

Learn more about Couchbase and Containers:

  • Couchbase on Containers
  • Follow us on @couchbasedev or @couchbase
  • Ask questions on Couchbase Forums

Source: http://blog.couchbase.com/2016/september/docker-service-swarm-mode-couchbase-cluster

Stateful Containers on Kubernetes using Persistent Volume and Amazon EBS

This blog will show how to create stateful containers in Kubernetes using Amazon EBS.

Couchbase is a stateful container. This means that state of the container needs to be carried with it. In Kubernetes, the smallest atomic unit of running a container is a pod. So a Couchbase container will run as a pod. And by default, all data stored in Couchbase is stored on the same host.

stateful containers

This figure is originally explained in Kubernetes Cluster on Amazon and Expose Couchbase Service. In addition, this figure shows storage local to the host.

Pods are ephemeral and may be restarted on a different host. A Kubernetes Volume outlives any containers that run within the pod, and data is preserved across container restarts. However the volume will cease to exist when a pod ceases to exist. This is solved by Persistent Volumes that provide persistent, cluster-scoped storage for applications that require long lived data.

Creating and using a persistent volume is a three step process:

  1. Provision: An administrator provisions networked storage in the cluster, such as AWS ElasticBlockStore volumes. This is called a PersistentVolume.
  2. Request storage: A user requests storage for pods by using claims. Claims can specify levels of resources (CPU and memory), specific sizes and access modes (e.g. can be mounted once read/write or many times write-only). This is called a PersistentVolumeClaim.
  3. Use claim: Claims are mounted as volumes and used in pods for storage.

Specifically, this blog will show how to use an AWS ElasticBlockStore as PersistentVolume, create a PersistentVolumeClaim, and then claim it in a pod.

stateful containers

Complete source code for this blog is at: github.com/arun-gupta/couchbase-kubernetes.

Provision AWS Elastic Block Storage

The following restrictions need to be met if Amazon ElasticBlockStore is used as a PersistentVolume with Kubernetes:

  • the nodes on which pods are running must be AWS EC2 instances
  • those instances need to be in the same region and availability-zone as the EBS volume
  • EBS only supports a single EC2 instance mounting a volume

Create an AWS Elastic Block Storage:
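
A sketch using the AWS CLI; the 5 GB size matches the PersistentVolume defined later in this post, and the gp2 volume type is an assumption:

  aws ec2 create-volume \
    --region us-west-2 \
    --availability-zone us-west-2a \
    --size 5 \
    --volume-type gp2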

The us-west-2 region and us-west-2a availability zone are used here, and so the Kubernetes cluster needs to start in the same region and availability zone as well.

This shows the output as:

Check if the volume is available as:

It shows the output as:

Note the unique identifier for the volume in VolumeId attribute. You can also verify the EBS block in AWS Console:

kubernetes-pv-couchbase-amazon-ebs

Start Kubernetes Cluster

Download Kubernetes 1.3.3, untar it and start the cluster on Amazon:
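
A sketch of these commands; the zone, node size and node count reflect the three points listed below:

  # Download and extract Kubernetes 1.3.3
  curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.3.3/kubernetes.tar.gz
  tar xzf kubernetes.tar.gz

  # Start the cluster on AWS in the same zone as the EBS volume
  KUBERNETES_PROVIDER=aws KUBE_AWS_ZONE=us-west-2a NODE_SIZE=m3.large NUM_NODES=3 ./kubernetes/cluster/kube-up.sh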

Three points to note here:

  • The zone in which the cluster is started is explicitly set to us-west-2a. This matches the zone where the EBS storage volume was created.
  • By default, each node size is m3.medium. Here it is set to m3.large.
  • By default, 1 master and 4 worker nodes are created. Here only 3 worker nodes are created.

This will show the output as:

Read more details about starting a Kubernetes cluster on Amazon.

Couchbase Pod w/o Persistent Storage

Let’s create a Couchbase pod without persistent storage. This means that if the pod is rescheduled on a different host then it will not have access to the data created on it.

Here are quick steps to run a Couchbase pod and expose it outside the cluster:
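
A sketch of those steps; kubectl run creates a Deployment in Kubernetes 1.3, and the image is the arungupta/couchbase image mentioned below:

  # Run a Couchbase pod
  kubectl.sh run couchbase --image=arungupta/couchbase --port=8091

  # Expose it outside the cluster through a load balancer
  kubectl.sh expose deployment couchbase --port=8091 --target-port=8091 --type=LoadBalancer

  # Show the service, including the ingress load balancer address
  kubectl.sh describe svc couchbase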

Read more details at Kubernetes cluster at Amazon.

The last command shows the ingress load balancer address. Access Couchbase Web Console at <ip>:8091.

kubernetes-pv-couchbase-amazon-elb

Login to the console using Administrator login and password password.

The main page of Couchbase Web Console shows up:

kubernetes-pv-couchbase-amazon-web-console

A default travel-sample bucket is already created by arungupta/couchbase image. This bucket is shown in the Data Buckets tab:

kubernetes-pv-couchbase-amazon-databucket

Click on Create New Data Bucket button to create a new data bucket. Give it a name k8s, take all the defaults, and click on Create button to create the bucket:

kubernetes-pv-couchbase-amazon-k8s-bucket

Created bucket is shown in the Data Buckets tab:

kubernetes-pv-couchbase-amazon-k8s-bucket-created

Check status of the pod:

Delete the pod:

Watch the new pod being created:
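
Sketches of the three commands above; the pod name is a placeholder taken from the kubectl.sh get po output:

  # Check status of the pod
  kubectl.sh get po

  # Delete the pod
  kubectl.sh delete po <couchbase-pod-name>

  # Watch the replacement pod being created
  kubectl.sh get -w po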

Access the Web Console again and see that the bucket does not exist:

kubernetes-pv-couchbase-amazon-k8s-bucket-gone

Let’s clean up the resources created:

Couchbase Pod with Persistent Storage

Now, let’s expose a Couchbase pod with persistent storage. As discussed above, let’s create a PersistentVolume and claim the volume.

Request storage

Like any other Kubernetes resources, a persistent volume is created by using a resource description file:

The important pieces of information here are:

  • Creating a storage of 5 GB
  • Storage can be mounted by only one node for reading/writing
  • specifies the volume id created earlier

Read more details about definition of this file at kubernetes.io/docs/user-guide/persistent-volumes/.

This file is available at: github.com/arun-gupta/couchbase-kubernetes/blob/master/pv/couchbase-pv.yml.

The volume itself can be created as:
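
A sketch, assuming the file above has been saved locally:

  kubectl.sh create -f couchbase-pv.yml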

and shows the output:

Use claim

A PersistentVolumeClaim can be created using this resource file:

In our case, both PersistentVolume and PersistentVolumeClaim are 5 GB but they don’t have to be.

Read more details about definition of this file at kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims.

This file is at github.com/arun-gupta/couchbase-kubernetes/blob/master/pv/couchbase-pvc.yml.

The claim can be created as:
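
Similarly, assuming the claim file has been saved locally:

  kubectl.sh create -f couchbase-pvc.yml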

and shows the output:

Create RC with Persistent Volume Claim

Create a Couchbase Replication Controller using this resource file:

Key parts here are:

  • Resource defines a Replication Controller using arungupta/couchbase Docker image
  • volumeMounts define which volumes are going to be mounted. /opt/couchbase/var is the directory where Couchbase stores all the data.
  • volumes define different volumes that can be used in this RC definition

Create the RC as:

and shows the output:

Check for pod as kubectl.sh get -w po to see:

Expose RC as a service:

Get all the services:

Describe the service as kubectl.sh describe svc couchbase to see:

Wait for ~3 mins for the load balancer to settle. Access the Couchbase Web Console at <ingress-lb>:8091. Once again, only travel-sample bucket exists. This is created by arungupta/couchbase image used in the RC definition.

Show Stateful Containers

Let’s create a new bucket. Give it the name kubernetes-pv, take all the defaults and click on the Create button to create the bucket.

kubernetes-pv-couchbase-amazon-kubernetes-pv-bucket

The bucket now shows up in the console:

kubernetes-pv-couchbase-amazon-kubernetes-pv-bucket-created

Terminate Couchbase pod and see the state getting restored.

Get the pods again:

Delete the pod:

Pod gets recreated:

And now when you access the Couchbase Web Console, the earlier created bucket still exists:

kubernetes-pv-couchbase-amazon-kubernetes-pv-bucket-still-there
That’s because the data was stored in the backing EBS storage.

Cleanup Kubernetes Cluster

Shutdown Kubernetes cluster:

And detach the volume:
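
Sketches of the shutdown and detach commands; the volume id comes from the create-volume output earlier:

  # Shut down the cluster
  KUBERNETES_PROVIDER=aws ./kubernetes/cluster/kube-down.sh

  # Detach the EBS volume
  aws ec2 detach-volume --region us-west-2 --volume-id <vol-id>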

Complete source code for this blog is at: github.com/arun-gupta/couchbase-kubernetes.

Enjoy!

Source: blog.couchbase.com/2016/july/stateful-containers-kubernetes-amazon-ebs

Couchbase Docker Container on Amazon ECS

This blog will explain how to run a Couchbase Docker container using Amazon EC2 Container Service (Amazon ECS).

Many thanks to @moviolone for helping me understand the concepts and get this setup running.

What is Amazon ECS?

Amazon ECS is a container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. Amazon ECS integrates well with rest of the AWS infrastructure and eliminates the need to operate your own cluster or configuration management systems.

ec2-container-service

An obvious question is how this is different from other container orchestration frameworks like Docker Swarm, Kubernetes, or Mesos. The first big difference is that each of these frameworks is open source, while Amazon uses a proprietary orchestration framework at this time.

A big advantage of ECS is that just like rest of the AWS infrastructure, this is a managed service. And so you only need to worry about deploying your containers without worrying about the infrastructure.

A better comparison of ECS is with Docker for AWS/Azure (backed by newly introduced Swarm Mode in Docker), Google Container Engine (backed by Kubernetes), DC/OS (backed by Mesos) as they are managed services as well.

Another advantage of ECS is that it seamlessly integrates with AWS infrastructure, such as deploying container instances using CloudFormation templates, scaling containers using an Auto Scaling group, port mapping using Security Groups, managing incoming container traffic using Elastic Load Balancer, viewing logs using CloudWatch, and others.

If you are already bought in the Amazon infrastructure, then ECS sounds like a good fit. Docker for AWS, announced at DockerCon, is also a similar offering in this space.

However, there are a couple of cons that you need to be aware of as well:

  • Portability – Applications designed for Docker Swarm, Kubernetes and Mesos can run on a variety of platforms, such as Amazon, Azure, GCE, OpenStack, on-prem, VMware, bare metal data centers, etc. But ECS is tied to Amazon only. Do you consider that a vendor lock-in?
    Amazon may release their orchestration platform or scheduler as a standalone product, but that’s not very typical.
  • Container format – ECS service is focused on Docker containers only. For all practical purposes, at least today, this may be perfectly fine. I’ve not heard or seen any deployments of Rkt or any other container formats. However, this may change once OCI-compliant runtimes start showing up in the future.

One last thing, before we dig in the concepts and code, there is no additional charge for Amazon EC2 Container Service. You pay for AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.

Amazon ECS Concepts

Here is an overview of the key concepts in ECS:

amazon-ecs-concepts

  • Container Instance: An EC2 instance that is primed for running containers. By default, each instance uses the Amazon ECS-Optimized Linux AMI. This is the recommended image to run the ECS container service. The key components of this base image are:
    • Amazon Linux AMI
    • Amazon ECS Container Agent – It manages containers lifecycle on behalf of ECS and allows them to connect to the cluster.
    • Docker Engine (as of this writing, this is version 1.11.1)

    Other images like CoreOS, SUSE or Ubuntu can be configured to meet the Container Instance AMI specification. This can be done because the ECS Agent code is available in open source.

  • Task: A task is defined as a JSON file and describes an application that contains one or more container definitions. This usually points to Docker images from a registry, port/volume mapping, etc.
  • Service: ECS maintains the “desired state” of your application. This is achieved by creating a service. A service specifies the number of instances of a task definition that need to run at a given time. If a task in a service becomes unhealthy or stops running, then the service scheduler will bounce the task. It ensures that the desired and actual state match. This is what provides resilience in ECS. New tasks within a service are balanced across Availability Zones in your cluster. The service scheduler figures out which container instances can meet the needs of a service and schedules it on a valid container instance in an optimal Availability Zone (one with the fewest tasks running).

Getting Started with Amazon EC2 Container Service

Login to your AWS EC2 console and click on the EC2 Container Service:

aws-ec2-container-1

Click on the Get started button to define your application.

Create ECS Task

In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.

Enter the values as shown:

aws-ec2-container-2

Few items specified in this step:

  • Task definition is description of an application that contains one or more container definitions.
  • Container name is the name that will be given to the container started as part of this task.
  • Image allows to specify one or more images that need to be started as containers as part of this application. The image specified here uses couchbase:latest as the base image and uses Couchbase REST API to configure the server. Dockerfile for this image provide more details about how this image is prepared.
  • Maximum memory is the memory that needs to be allocated for the container (equivalent to -m Docker CLI switch). Couchbase needs 1GB for running in dev and so that is specified here.
  • And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.

More details about these is available in Task Definition Parameters.

Create ECS Service

 

Click on Next step to configure a service.

aws-ec2-container-3

Give a service name. The desired state can be specified here. For now, we’ll keep it simple and launch a single-node Couchbase container. And since the desired state is to run a single container, no ELB is required.

More details about these is available in Service Definition Parameters.

Create ECS Cluster

Tasks run on a container instance, and these instances need to register in a cluster. This allows us to scale the cluster up/down later to accommodate for running more containers.

Click on Next step to configure the cluster.

aws-ec2-container-4

In this image:

  • Take the default cluster name
  • A homogenous cluster of container instances is created. m3.medium is a good size to run Couchbase node
  • Choose a previously created security key. This will allow to open a ssh connection to the container instance
  • A new IAM role will be created to allow ECS agent to communicate with ECS service

Container instances in a cluster can span multiple availability zones and be balanced with ELB.

Review all the specified options:

aws-ec2-container-5

Click on Launch instance & run service button to start the service.

The following status is shown after the service is created:

aws-ec2-container-6

The output shows that the cluster, service and task definitions are created. It takes a few minutes for the instances to be provisioned and initialized and for the tasks to run on them.

View ECS Service and Task

Click on View Service button to see the newly created service.

aws-ec2-container-7

Few things in this image:

  • The service shows the task definition couchbase:6. Each service is assigned a task definition and multiple versions are indicated by the trailing number at the end. In this case, a few versions were created earlier but otherwise the version number starts from 1.
  • Desired and Running count is shown as 1.
  • Minimum healthy percent and Maximum percent are used if a new version of task definition needs to be deployed. With 100% and 200% corresponding values, a new version of the task will be deployed first and then the older versions will be terminated. We’ll play with these numbers in a subsequent blog.
  • Running task is shown towards bottom of the screen. Click on the UUID to learn more about the running task.

aws-ec2-container-8

Task definition shows EC2 instance where it is running, current status, port mapping and several other useful information. The critical piece that we need to look at is the External Link. This URL is where our Couchbase Web Console will be accessible.

Couchbase Web Console

Clicking on this link will open a new tab with Couchbase Web Console:

aws-ec2-container-10

 

Enter the login as Administrator and password as password. These are configured in arungupta/couchbase image.

And here you see Couchbase Web Console in full glory!

aws-ec2-container-11

This blog explained how to run a Couchbase Docker container using Amazon ECS.

Future blogs will show …

  • Setup a Couchbase cluster using ECS
  • Deploy a multi-container application using Docker Compose (v2 is now supported)
  • Setup ECS cluster using CLI

Amazon ECS and Couchbase References

  • EC2 Container Service Docs
  • Getting Started with ECS Tutorial
  • ECS Application Architecture
  • Couchbase on Containers
  • Couchbase Server Portal

Source: blog.couchbase.com/2016/july/couchbase-docker-container-amazon-ecs

Getting Started with Docker for AWS and Scaling Nodes

This blog will explain how to get started with Docker for AWS and deploy a multi-host Swarm cluster on Amazon.

Docker Logo

amazon-web-services-logo

Many thanks to @friism for helping me debug through the basics!

boot2docker -> Docker Machine -> Docker for Mac

Are you packaging your applications using Docker and using boot2docker for running containers in development? Then you are really living under a rock!

It is highly recommended to upgrade to Docker Machine for dev/testing of Docker containers. It encapsulates boot2docker and allows you to create one or more lightweight VMs on your machine. Each VM acts as a Docker Engine and can run multiple Docker containers. Running multiple VMs allows you to set up a multi-host Docker Swarm cluster on your local laptop easily.

Docker Machine is now old news as well. DockerCon 2016 announced public beta of Docker for Mac. This means anybody can sign up for Docker for Mac at docker.com/getdocker and use it for dev/test of Docker containers. Of course, there is Docker for Windows too!

Docker for Mac is still a single host but has a swarm mode that allows to initialize it as a single node Swarm cluster.

What is Docker for AWS?

So now that you are using Docker for Mac for development, what would be your deployment platform? DockerCon 2016 also announced Docker for AWS and Azure Beta.

Docker for AWS and Azure both start a fleet of Docker 1.12 Engines with swarm mode enabled out of the box. Swarm mode means that the individual Docker engines form into a self-organizing, self-healing swarm, distributed across availability zones for durability.

Only AWS and Azure charges apply; Docker for AWS and Docker for Azure are free at this time. Sign up for Docker for AWS and Azure at beta.docker.com. Note that availability is restricted at this time.

Once your account is enabled, then you’ll get an invitation email as shown below:

docker-aws-invite

Docker for AWS CloudFormation Values

Click on Launch Stack to be redirected to the CloudFormation template page.

Take the defaults:
docker4aws-1

S3 template URL will be automatically populated, and is hidden here.

Click on Next. This page allows you to specify details for the CloudFormation template:
docker4aws-2

The following changes may be made:

  • Template name
  • Number of manager and worker nodes, 1 and 3 in this case. Note that only an odd number of managers can be specified. By default, containers are scheduled on the worker nodes only.
  • Instance type of manager and worker nodes
  • A key already configured in your AWS account

Click on Next and take the defaults:
docker4aws-3

Click on Next, confirm the settings:
docker4aws-4

docker4aws-5

Select the IAM resources checkbox and click on the Create button to create the CloudFormation stack.

It took ~10 mins to create a 4-node cluster (1 manager + 3 workers):
docker4aws-6

More details about the cluster can be seen in the EC2 Console:
docker4aws-7

Docker for AWS Swarm Cluster Details

The Outputs tab of the CloudFormation console shows more details about the cluster:
docker4aws-8

More details about the cluster can be obtained in two ways:

  • Log into the cluster using SSH
  • Create a tunnel and then configure local Docker CLI

Create SSH Connection to Docker for AWS

Log in using the command shown in the Value column of the Output tab.

Create an SSH connection as:
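
A hedged example; the key file and manager address are placeholders, and at the time of writing the manager accepted SSH as the docker user:

ssh -i ~/.ssh/my-aws-key.pem docker@<manager-public-dns>
docker ps   # run on the manager to list the containers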

Note that we are using the same key that was specified in the CloudFormation template. The list of containers can then be seen using the docker ps command:

Create SSH Tunnel to Docker for AWS

Alternatively, an SSH tunnel can be created as:
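
A hedged sketch of the tunnel (same placeholder key and manager address as above), forwarding the remote Docker socket to a local port:

ssh -i ~/.ssh/my-aws-key.pem -NL localhost:2374:/var/run/docker.sock docker@<manager-public-dns> &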

Set up DOCKER_HOST:
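
The local Docker CLI can then be pointed at the tunnel (the port must match the one used above):

export DOCKER_HOST=localhost:2374
docker ps     # lists the containers through the tunnel
docker info   # shows cluster-wide details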

The list of containers can be seen as above using the docker ps command. In addition, more information about the cluster can be obtained using the docker info command:

Here are some key details from this output:

  • 4 nodes in total with 1 manager, which means 3 worker nodes
  • All nodes are running Docker Engine version 1.12.0-rc3
  • Each VM is created using Alpine Linux 3.4

Scaling Worker Nodes in Docker for AWS

All worker nodes are configured in an AWS AutoScaling Group. The manager node is configured in a separate AWS AutoScaling Group.

docker4aws-9

This first release allows you to scale the worker count using the AutoScaling group. Docker will automatically join new instances to the Swarm or remove them from it. Changing the manager count live is not supported in this release.

Select the AutoScaling group for worker nodes to see complete details about the group:

docker4aws-10

Click on the Edit button to change the number of desired instances to 5, and save the configuration by clicking on the Save button:

docker4aws-11

It takes a few seconds for the new instances to be provisioned and automatically included in the Docker Swarm cluster. The refreshed AutoScaling group is shown as:

docker4aws-12

And now docker info command shows the updated output as:

This shows that there are a total of 6 nodes with 1 manager.

Docker for AWS References

  • Docker for AWS and Azure Announcement Blog
  • Docker for AWS and Azure
  • Docker for AWS Release Notes

Source: blog.couchbase.com/2016/july/docker-for-aws-getting-started-scaling-nodes

Couchbase Cluster on Docker Swarm using Docker Compose and Docker Machine

This blog post will explain how to create and scale a Couchbase Cluster using the full armor of Docker – Docker Machine, Docker Swarm, and Docker Compose.

Here is what we’ll do:

  • Create a 3-node Docker Swarm Cluster using Docker Machine
  • Run a Couchbase instance on two nodes
  • Create a cluster
  • Rebalance the cluster
  • Scale and rebalance the cluster again

couchbase-docker-swarm-0

Docker Swarm Cluster using Consul

Create a three-node Docker Swarm cluster using Docker Machine:
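
The script is not reproduced here; a condensed, hedged sketch of what it does, assuming the VirtualBox driver and the machine names used later in this post (consul-machine and swarm-master are assumptions, swarm-node-01 and swarm-node-02 appear in the docker ps output below):

# Machine running Consul for service discovery
docker-machine create -d virtualbox consul-machine
docker $(docker-machine config consul-machine) run -d -p 8500:8500 progrium/consul -server -bootstrap

# Swarm master
docker-machine create -d virtualbox --swarm --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-master

# Two worker nodes (repeat with swarm-node-02)
docker-machine create -d virtualbox --swarm \
  --swarm-discovery="consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul-machine):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  swarm-node-01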

Provision a Swarm cluster with Docker Machine provides more details about why and what’s done in this script. Here is a summary:

  • Create a Docker Machine and run Consul for service discovery
  • Create three Docker Machines – one for the master and two for worker nodes. Each machine is configured to be part of a Swarm cluster using --swarm, and uses the Consul service discovery specified using --swarm-discovery.

Couchbase Nodes on Docker Swarm

Create two instances of Couchbase using Docker Compose:
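
The Compose file is not reproduced here; a minimal sketch in the v1 Compose format (the service name couchbase is an assumption):

cat > docker-compose.yml <<EOF
couchbase:
  image: arungupta/couchbase
  net: "host"
EOF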

The arungupta/couchbase image is used here. This image is defined at Couchbase Docker Image. It uses the official Couchbase Docker image and configures it as explained:

  1. Sets up memory for the Index and Data services
  2. Configures the Couchbase server with the Index, Data, and Query services
  3. Sets up username and password credentials
  4. Loads the travel-sample bucket

The Compose file uses the host network. This is equivalent to using --net=host with the docker run CLI and allows the container to use the host networking stack. It also means that only a single Couchbase container can run on each Docker Machine, so our Couchbase cluster can scale only up to the number of Docker Machines – 3 in our case.

The exact command to use this Compose file is:
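
A hedged sketch (the swarm-master machine name and the couchbase service name follow from the sketches above; the scale step is what creates the second instance):

eval $(docker-machine env --swarm swarm-master)   # point the Docker CLI at the Swarm master
docker-compose up -d                              # start the first Couchbase container
docker-compose scale couchbase=2                  # scale to two containers across the Swarm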

There are three nodes in the Docker Swarm cluster. The default scheduler strategy is spread and so the containers will be spread on different hosts.

This is evident by docker ps:

Note that one Couchbase server is running on swarm-node-01 and another on swarm-node-02. Each server is configured with an administrator username Administrator and password password.

Find out the IP address of the Docker Machine:
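
For example, using the machine names from the Swarm setup sketched above:

docker-machine ip swarm-node-01
docker-machine ip swarm-node-02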

If you have jq installed, then the IP address can be conveniently found as:
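
One possible way, assuming the VirtualBox driver (the address lives under Driver.IPAddress in the inspect output):

docker-machine inspect swarm-node-01 | jq -r .Driver.IPAddress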

Couchbase Cluster on Docker Swarm

All Couchbase server nodes are created equal. This allows the Couchbase cluster to truly scale horizontally to meet your growing application demands. Independently running Couchbase nodes can be added to a cluster by invoking the server-add CLI command. This is typically a two step process. The first step is adding one or more nodes. The second step then rebalances the cluster where the data on the existing nodes is rebalanced across the updated cluster.

In our case, a single Couchbase container is running on each Docker Machine. Let’s pick the IP address of any one Couchbase node and add the IP address of the other node:
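
A hedged example using the couchbase-cli bundled in the image; 192.168.99.101 and 192.168.99.102 are placeholder addresses for the two Docker Machines, and the container name is whatever docker ps reported:

docker exec -it <couchbase-container> /opt/couchbase/bin/couchbase-cli server-add \
  -c 192.168.99.101:8091 -u Administrator -p password \
  --server-add=192.168.99.102:8091 \
  --server-add-username=Administrator \
  --server-add-password=password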

Couchbase Web Console for both the nodes will show a similar output:

couchbase-docker-swarm-1

couchbase-docker-swarm-2

This shows that the two nodes now form a cluster, which needs to be rebalanced.

Rebalance Couchbase Cluster

Now, let’s rebalance the cluster:
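
Again with couchbase-cli, against either node of the cluster (placeholders as before):

docker exec -it <couchbase-container> /opt/couchbase/bin/couchbase-cli rebalance \
  -c 192.168.99.101:8091 -u Administrator -p password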

Couchbase Web Console will be updated to show that rebalance is happening:

couchbase-docker-swarm-3

And finally you’ll see a rebalanced cluster:

couchbase-docker-swarm-4

Scale and Rebalance Couchbase Cluster

Scale the Couchbase cluster:
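
With the Compose service name assumed above, this is a one-liner:

docker-compose scale couchbase=3   # the third container lands on the remaining Docker Machine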

Check that the container is running on a different Docker Machine:

As mentioned earlier, scaling a Couchbase cluster is a two-step process. This is so because typically you’ll add multiple servers and then rebalance the cluster. However, in cases where you only need to add a single Couchbase node and then rebalance, the rebalance command can be used to achieve that in a single step.

In our case, this is done as shown:
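
A hedged example: couchbase-cli rebalance also accepts the --server-add flags, so the new node (placeholder 192.168.99.103) can be added and the cluster rebalanced in one go:

docker exec -it <couchbase-container> /opt/couchbase/bin/couchbase-cli rebalance \
  -c 192.168.99.101:8091 -u Administrator -p password \
  --server-add=192.168.99.103:8091 \
  --server-add-username=Administrator \
  --server-add-password=password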

The rebalanced cluster now looks like:

couchbase-docker-swarm-5

This blog showed how you can easily create and scale a Couchbase Cluster using Docker Swarm, Machine and Compose.

Enjoy!

Further reading …

  • Couchbase Server Developer Portal
  • Couchbase Server Concepts
  • Couchbase Cluster Setup
  • Questions on StackOverflow, Forums or Slack Channel
  • Follow us @couchbasedev
  • Couchbase 4.5 Beta

Source: http://blog.couchbase.com/2016/may/couchbase-cluster-docker-swarm-compose-machine


Couchbase on Mesos using DC/OS and Amazon

A Couchbase Docker container can easily run on a variety of orchestration platforms.

Docker container using Apache Mesos and Marathon explained how to set up Mesos and Marathon, and run a simple Docker image. The setup was quite involved and a bit flaky.

It required downloading and installing Mesos Master and Slave, ZooKeeper, Docker Engine, and Marathon. In some cases, the correct repo needs to be added first. These components need to talk to each other and so must be configured accordingly. Even if you get past that setup, how do you monitor the entire infrastructure as one entity?

Meet DC/OS – the Datacenter Operating System. It’s a distributed operating system that uses Apache Mesos as its kernel.

dcos-kernel

DC/OS can be installed in a variety of ways:

  • Local using Vagrant
  • Cloud using Amazon/CloudFormation, Microsoft Azure and Packet
  • On-prem using CentOS or CoreOS

This blog will show how to set up a DC/OS cluster using CloudFormation templates on Amazon and run a Couchbase Docker container.

Launch DC/OS cluster

Launch DC/OS cluster:
dcos-couchbase-1

Take the defaults:
dcos-couchbase-2

Give the template a name, select a previously created KeyPair, and change the number of slaves:
dcos-couchbase-3

Take the defaults:
dcos-couchbase-4

Verify the configuration:
dcos-couchbase-5

Click on “I acknowledge that …” and on Create to start the template creation.

The CloudFormation Stack Status page comes up:
dcos-couchbase-6

Make sure to choose the appropriate region.

After ~10-15 mins, the status changes:
dcos-couchbase-7

Wait for the status to change from CREATE_IN_PROGRESS to CREATE_COMPLETE.

Download and Configure DC/OS CLI

DC/OS CLI can be used to manage your cluster nodes, install DC/OS packages, inspect the cluster state, and administer the DC/OS service subcommands.

Install DC/OS CLI on your local machine.

On your CloudFormation Stack Status page, select the created stack, open the Outputs tab, and copy the address of the Mesos master.
dcos-couchbase-8

Configure the DC/OS CLI to use this cluster:
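
For example (the master address copied from the Outputs tab is a placeholder here):

dcos config set core.dcos_url http://<mesos-master-dns>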

Authenticate:
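
For example:

dcos auth login   # prints a URL to open in the browser and then prompts for the token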

Enter the URL in the browser, proceed to the unsafe URL:
dcos-couchbase-9

Copy the token to your clipboard:
dcos-couchbase-10

Paste the authentication token in the terminal window:

Mesos and Marathon UI

Mesos UI is available using the address of the Mesos master:
dcos-couchbase-11

Click on Services to see Marathon service already installed:
dcos-couchbase-12

Click on marathon to see the list of tasks:
dcos-couchbase-13

As expected for a freshly created cluster, no tasks have been assigned yet.

Click on Nodes to see the nodes:
dcos-couchbase-14

Install and Configure Marathon Load Balancer

DC/OS slave nodes are not directly exposed on the Internet. An “external” load balancer can be configured to expose the tasks running on the slaves.

Marathon-lb, short for Marathon Load Balancer, is a load balancer available as a Mesos service. It is based on HAProxy that provides proxying and load balancing for TCP and HTTP based applications, with features such as SSL support, HTTP compression, health checking and more. Marathon-lb subscribes to Marathon’s event bus and updates the HAProxy configuration in real time.
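
Marathon-lb is typically installed as a DC/OS package, e.g.:

dcos package install marathon-lb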

Marathon service UI will show the LB task running:
dcos-couchbase-15

The AWS load balancer allows ports 80 and 443 by default. We’ll run a Couchbase server that will be exposed at port 8091.

In the CloudFormation Stack Status page, copy the value from the Value column for PublicSlaveDnsAddress:
dcos-couchbase-16

In the AWS Console, select Load Balancers and add a new firewall rule to allow port 8091 on TCP:
dcos-couchbase-17

Run Couchbase Server Docker container on DC/OS

Run Couchbase Server Docker container on DC/OS using the following configuration file:
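
The configuration file is not reproduced here; a hedged reconstruction of such a Marathon app definition (the app id, resource values, and the marathon-lb wiring via servicePort 8091 and the HAPROXY_GROUP label are assumptions):

cat > couchbase.json <<EOF
{
  "id": "couchbase",
  "cpus": 1,
  "mem": 2048,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "arungupta/couchbase",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8091, "hostPort": 0, "servicePort": 8091 }
      ]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external"
  }
}
EOF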

This configuration file uses the arungupta/couchbase image, which configures the Couchbase Server using the pre-defined Couchbase REST API. More details about this image are at Couchbase Docker Image.

The cpus and mem attributes define the CPU and memory needed to run this task.

Run Couchbase in the DC/OS cluster with the following command:
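
With the DC/OS CLI configured above and the hypothetical file name couchbase.json from the sketch:

dcos marathon app add couchbase.json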

Use the previously copied value of PublicSlaveDnsAddress and access the Couchbase Web Console at http://<URI>:8091. In our case, the URL is http://couchbase-publicsl-vjzmwpa38k6d-429093455.us-west-1.elb.amazonaws.com:8091/index.html.

This shows the login page:
dcos-couchbase-18

Enter the login credentials as Administrator and password:
dcos-couchbase-19

Click on Sign In to see:
dcos-couchbase-20

Learn more about Couchbase Web Console.

Marathon UI is updated to show all the running services:
dcos-couchbase-21

Couchbase Docker image log can be seen in Log Viewer:
dcos-couchbase-22

And the standard output view:
dcos-couchbase-23

Mesos dashboard is updated to show the resources that are consumed:
dcos-couchbase-24

Finally, the complete stack can be deleted from the CloudFormation template page:
dcos-couchbase-25

Further reading:

  • Latest DC/OS docs
  • DC/OS Installation Guide
  • Get Started with DC/OS
  • Manage your DC/OS Cluster
  • Service discovery and load balancing with DC/OS
  • DC/OS Slack Channel
  • Get Started with Couchbase

Enjoy!

Now, you’ve seen Couchbase on Docker Swarm, Couchbase on Kubernetes, Couchbase on OpenShift 3. This blog showed how to run a Couchbase Docker image on Mesos and DC/OS.

Where else would you like Couchbase container to run?

Source: http://blog.couchbase.com/2016/may/couchbase-mesos-dcos-amazon

Docker container using Apache Mesos and Marathon

apache-mesos-logo apache-mesos-marathon-logo Docker Logo

Apache Mesos is an open source cluster manager developed at UC Berkeley. It provides resource isolation and sharing across distributed applications.

The figure shows the main components of Mesos. Mesos consists of a master daemon that manages slave daemons running on each cluster node. Mesos frameworks are applications that run on Mesos and run tasks on these slaves. Slaves are either physical or virtual machines, typically from the same provider.

mesos-architecture

Mesos uses a two-level scheduling mechanism where resource offers are made to frameworks. The Mesos master node decides how many resources to offer each framework, while each framework determines the resources it accepts and what application to execute on those resources.

Marathon is a container orchestration platform running on Mesos. Multiple container formats are supported and Docker is certainly the most common one!

This blog will show how to setup Mesos, Marathon, and run a simple Docker image. This setup is only for the brave of heart. I’m always interested in looking under the hood and that’s what motivated this post. But a future post will show a more seamless install.

Let’s get started!

Configure CentOS VM

Download CentOS and configure the VM as shown:

centos-7.1-install

Install Components

Install the different components required for this setup; a condensed sketch of the commands is shown after the list.

  1. Configure Mesos repo:
  2. Install Mesos and Marathon:
  3. Install ZooKeeper:
  4. Add Docker repo:
  5. Install Docker:
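
A condensed, hedged sketch of the commands for CentOS 7 (repo locations and package names as of mid-2016; they may have changed since):

# Mesos, Marathon, and ZooKeeper from the Mesosphere repo
sudo rpm -Uvh http://repos.mesosphere.com/el/7/noarch/RPMS/mesosphere-el-repo-7-1.noarch.rpm
sudo yum install -y mesos marathon mesosphere-zookeeper

# Docker Engine from the then-current Docker yum repo
sudo tee /etc/yum.repos.d/docker.repo <<'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/7/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
sudo yum install -y docker-engine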

Configure Hostname/IP address Mapping

Edit /etc/hosts and create a hostname to IP address mapping. Find the IP address using ifconfig, choosing the network interface enabled during the CentOS installation.
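
For example, assuming the VM’s hostname is mesos-centos and the interface address is 192.168.1.100 (both placeholders):

192.168.1.100   mesos-centos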

Start Services

Start all the services; a condensed sketch of the commands is shown after the list.

  1. Start Docker:
  2. Start ZooKeeper:
  3. Start Mesos master:
  4. Configure mesos and docker containerizers:
  5. Start Mesos slave:
  6. Start Marathon:
  7. Check for services:
    Mesos UI: http://127.0.0.1:5050
    Marathon UI: http://127.0.0.1:8080
    Logs: tail -f /var/log/messages
  8. Check Mesos master:
  9. Check Mesos slave:
  10. Check ZooKeeper:
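
A condensed, hedged sketch of the commands (CentOS 7 with systemd; unit names come from the packages installed above):

sudo systemctl start docker
sudo systemctl start zookeeper
sudo systemctl start mesos-master

# Enable the Docker containerizer on the slave
echo 'docker,mesos' | sudo tee /etc/mesos-slave/containerizers

sudo systemctl start mesos-slave
sudo systemctl start marathon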

Deploy Docker application to Mesos

A simple Docker-based application is defined using a configuration file. Marathon runs on port 8080, and so the updated configuration file looks like:
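
The configuration file is not reproduced here; a hedged reconstruction of such an app definition (the id, command, resource values, and the choice of port 8000 to stay clear of Marathon’s 8080 are all assumptions):

cat > app.json <<EOF
{
  "id": "simple-docker-app",
  "cmd": "python3 -m http.server 8000",
  "cpus": 0.5,
  "mem": 64,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "python:3",
      "network": "BRIDGE",
      "portMappings": [ { "containerPort": 8000, "hostPort": 0 } ]
    }
  }
}
EOF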

Deploy the application as:
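
Deployment goes through Marathon’s REST API, e.g. with the hypothetical app.json above:

curl -X POST http://127.0.0.1:8080/v2/apps \
  -H "Content-type: application/json" \
  -d @app.json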

The application will take some time to download the image and then run the container. This setup is slightly sensitive: multiple runs showed that the Docker image was not always downloaded successfully. In that case, the Docker image was manually downloaded using docker pull python:3, and then the application could be deployed successfully.

In our case, master and slave are running on the same machine, and so the list of Docker images and running containers can be easily seen:

The application is available at port 31669 and can be seen at http://127.0.0.1:31669 as:

mesos-marathon-app-output

Mesos UI (http://127.0.0.1:5050) shows:
marathon-ui-app-output

Marathon UI (http://127.0.0.1:8080) shows:
mesos-ui-app-output

As you can see, this is quite an involved setup. A future blog post will show how to use DC/OS and set this up more seamlessly.

Further reading …

  • Mesos Marathon
  • Ports in Mesos
  • Mesos CLI
  • Mesos – Under the Hood

The Mesos Slack channel is awesome! I learned a lot about Mesos particularly from @jgarcia.mesosphere, @akaplan.mesosphere, @harpreet.mesosphere, and @graham.mesosphere. Thanks guys, keep engaging with the community!

Enjoy!

Source: http://blog.couchbase.com/2016/may/docker-apache-mesos-marathon

Windows Server 2016 using VirtualBox for Docker Containers

Windows Server 2016 is adding support for Docker containers. Technology Preview 5 was recently released and provides basic support for Docker. This multi-part blog series will show how to configure, build, and run Docker containers on Windows.

The first part shows how to install Windows Server 2016 using VirtualBox. A couple of tweaks are required in order to make sure Docker containers can be started later on, so read on!

Download Windows Server 2016

Download Windows Server 2016 Technology Preview 5 from microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview.

Download the ISO and install it using VirtualBox.

Windows Server 2016 Virtual Box Configuration

Switch to Expert Mode as some of the default configuration settings need to be updated.

Here are the screenshots of configuration.

Change memory to ~8GB.
windows-2016-install-1

Change disk space to 40 GB. The default of 20 GB will not work for installing containers later.
windows-2016-install-2

Create a new virtual drive from the downloaded ISO:
windows-2016-install-3

Windows Server 2016 Configuration

Select the defaults:
windows-2016-install-4

Click on “Install Now” to start the install:
windows-2016-install-5

Enter the product key as specified in the “Preinstall Information” section at microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
windows-2016-install-6

Select the Desktop version, otherwise just a command shell is started:
windows-2016-install-7

Accept the license:
windows-2016-install-8

Choose “Custom” installation. “Upgrade” was getting stuck in an infinite loop:
windows-2016-install-9

Take defaults:
windows-2016-install-10

Click on “Next” to start installation:
windows-2016-install-11

Create an administrator account:
windows-2016-install-12

Click on Finish to come to the welcome screen:
windows-2016-install-13

Windows Server 2016 Login/Welcome

Use VirtualBox Input -> Keyboard menu to send Ctrl+Alt+Del to Windows to get the login screen:
windows-2016-install-14

Enter your password to see the opening screen:
windows-2016-install-15

As of this writing, it is in Technology Preview 5 and so the functionality and experience will evolve.

Stay tuned for subsequent blogs to actually configure, build and run Docker containers.

Source: blog.couchbase.com/2016/april/windows-server-2016-virtualbox-docker