Monthly Archives: July 2016

Stateful Containers on Kubernetes using Persistent Volume and Amazon EBS

This blog will show how to create stateful containers in Kubernetes using Amazon EBS.

Couchbase runs as a stateful container. This means the state of the container needs to be carried with it. In Kubernetes, the smallest atomic unit of running a container is a pod. So a Couchbase container will run as a pod. And by default, all data stored in Couchbase is stored on the same host.

[Figure: Couchbase pod with storage local to the host]

This figure was originally explained in Kubernetes Cluster on Amazon and Expose Couchbase Service. In addition, it shows storage local to the host.

Pods are ephemeral and may be restarted on a different host. A Kubernetes Volume outlives any containers that run within the pod, and data is preserved across container restarts. However, the volume ceases to exist when the pod ceases to exist. This is solved by Persistent Volumes, which provide persistent, cluster-scoped storage for applications that require long-lived data.

Creating and using a persistent volume is a three step process:

  1. Provision: An administrator provisions networked storage in the cluster, such as AWS ElasticBlockStore volumes. This is called a PersistentVolume.
  2. Request storage: A user requests storage for pods by using claims. Claims can specify levels of resources (CPU and memory), specific sizes, and access modes (e.g. can be mounted once read/write or many times read-only). This is called a PersistentVolumeClaim.
  3. Use claim: Claims are mounted as volumes and used in pods for storage.

Specifically, this blog will show how to use an AWS ElasticBlockStore as PersistentVolume, create a PersistentVolumeClaim, and then claim it in a pod.

[Figure: Couchbase pod using an EBS-backed PersistentVolume]

Complete source code for this blog is at: github.com/arun-gupta/couchbase-kubernetes.

Provision AWS Elastic Block Storage

The following restrictions need to be met if Amazon ElasticBlockStorage is used as a PersistentVolume with Kubernetes:

  • the nodes on which pods are running must be AWS EC2 instances
  • those instances need to be in the same region and availability-zone as the EBS volume
  • EBS only supports a single EC2 instance mounting a volume

Create an AWS Elastic Block Store volume:
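A minimal sketch of the command, assuming a 5 GB gp2 volume (matching the PersistentVolume size used later in this blog):

  aws ec2 create-volume --region us-west-2 --availability-zone us-west-2a --size 5 --volume-type gp2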

The us-west-2 region and us-west-2a availability zone are used here. And so the Kubernetes cluster needs to start in the same region and availability zone as well.

This shows the output as:

Check if the volume is available as:
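Presumably something like (the volume id placeholder is hypothetical):

  aws ec2 describe-volumes --region us-west-2 --volume-ids <volume-id>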

It shows the output as:

Note the unique identifier for the volume in the VolumeId attribute. You can also verify the EBS volume in the AWS Console:

[Screenshot: EBS volume in the AWS Console]

Start Kubernetes Cluster

Download Kubernetes 1.3.3, untar it and start the cluster on Amazon:
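A sketch of the commands, assuming the standard kube-up.sh workflow of that release (the environment variable values match the three points below):

  export KUBERNETES_PROVIDER=aws
  export KUBE_AWS_ZONE=us-west-2a
  export NODE_SIZE=m3.large
  export NUM_NODES=3
  ./kubernetes/cluster/kube-up.sh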

Three points to note here:

  • Zone in which the cluster is started is explicitly set to us-west-2a. This matches the zone where the EBS volume was created.
  • By default, each node size is m3.medium. Here it is set to m3.large.
  • By default, 1 master and 4 worker nodes are created. Here only 3 worker nodes are created.

This will show the output as:

Read more details about starting a Kubernetes cluster on Amazon.

Couchbase Pod w/o Persistent Storage

Let’s create a Couchbase pod without persistent storage. This means that if the pod is rescheduled on a different host then it will not have access to the data created on it.

Here are quick steps to run a Couchbase pod and expose it outside the cluster:
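A sketch of these steps, with the flags assumed:

  kubectl.sh run couchbase --image=arungupta/couchbase --port=8091
  kubectl.sh expose deployment couchbase --port=8091 --target-port=8091 --type=LoadBalancer
  kubectl.sh describe svc couchbase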

Read more details about running a Kubernetes cluster on Amazon.

The last command shows the ingress load balancer address. Access Couchbase Web Console at <ip>:8091.

[Screenshot: ingress load balancer address]

Log in to the console using the login Administrator and the password password.

The main page of Couchbase Web Console shows up:

[Screenshot: Couchbase Web Console]

A default travel-sample bucket is already created by the arungupta/couchbase image. This bucket is shown in the Data Buckets tab:

[Screenshot: travel-sample bucket in the Data Buckets tab]

Click on the Create New Data Bucket button to create a new data bucket. Give it the name k8s, take all the defaults, and click on the Create button to create the bucket:

[Screenshot: creating the k8s bucket]

The created bucket is shown in the Data Buckets tab:

[Screenshot: k8s bucket created]

Check status of the pod:
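Presumably:

  kubectl.sh get po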

Delete the pod:
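Presumably (the pod name is hypothetical):

  kubectl.sh delete po couchbase-<id>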

Watch the new pod being created:
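Presumably:

  kubectl.sh get -w po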

Access the Web Console again and see that the bucket does not exist:

[Screenshot: k8s bucket no longer exists]

Let’s clean up the resources created:
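A sketch, assuming the pod was created with kubectl run and expose as above:

  kubectl.sh delete svc couchbase
  kubectl.sh delete deployment couchbase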

Couchbase Pod with Persistent Storage

Now, let’s run a Couchbase pod with persistent storage. As discussed above, we’ll create a PersistentVolume and then claim it.

Provision

Like any other Kubernetes resource, a persistent volume is created using a resource description file:
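A sketch of such a file, based on the description below (the resource name and the volume id placeholder are assumptions):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: couchbase-pv
  spec:
    capacity:
      storage: 5Gi
    accessModes:
      - ReadWriteOnce
    awsElasticBlockStore:
      # the EBS volume created earlier
      volumeID: <volume-id>
      fsType: ext4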

The important pieces of information here are:

  • A storage of 5 GB is created
  • The storage can be mounted by only one node for reading/writing
  • The volume ID created earlier is specified

Read more details about definition of this file at kubernetes.io/docs/user-guide/persistent-volumes/.

This file is available at: github.com/arun-gupta/couchbase-kubernetes/blob/master/pv/couchbase-pv.yml.

The volume itself can be created as:
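Presumably:

  kubectl.sh create -f couchbase-pv.yml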

and shows the output:

Request storage

A PersistentVolumeClaim can be created using this resource file:
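A sketch of such a file (the claim name is an assumption based on the file name below):

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: couchbase-pvc
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi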

In our case, both PersistentVolume and PersistentVolumeClaim are 5 GB but they don’t have to be.

Read more details about definition of this file at kubernetes.io/docs/user-guide/persistent-volumes/#persistentvolumeclaims.

This file is at github.com/arun-gupta/couchbase-kubernetes/blob/master/pv/couchbase-pvc.yml.

The claim can be created as:
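Presumably:

  kubectl.sh create -f couchbase-pvc.yml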

and shows the output:

Create RC with Persistent Volume Claim

Create a Couchbase Replication Controller using this resource file:
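A sketch of such a file, matching the key parts listed below (labels and claim name are assumptions):

  apiVersion: v1
  kind: ReplicationController
  metadata:
    name: couchbase
  spec:
    replicas: 1
    template:
      metadata:
        labels:
          app: couchbase
      spec:
        containers:
        - name: couchbase
          image: arungupta/couchbase
          ports:
          - containerPort: 8091
          volumeMounts:
          # /opt/couchbase/var is where Couchbase stores all its data
          - name: couchbase-data
            mountPath: /opt/couchbase/var
        volumes:
        - name: couchbase-data
          persistentVolumeClaim:
            claimName: couchbase-pvc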

Key parts here are:

  • The resource defines a Replication Controller using the arungupta/couchbase Docker image
  • volumeMounts define which volumes are going to be mounted. /opt/couchbase/var is the directory where Couchbase stores all the data.
  • volumes define different volumes that can be used in this RC definition

Create the RC as:
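Presumably (the file name is assumed):

  kubectl.sh create -f couchbase-rc.yml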

and shows the output:

Check for the pod with kubectl.sh get -w po to see:

Expose RC as a service:
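A sketch, with the flags assumed:

  kubectl.sh expose rc couchbase --port=8091 --target-port=8091 --type=LoadBalancer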

Get all the services:
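Presumably:

  kubectl.sh get svc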

Describe the service as kubectl.sh describe svc couchbase to see:

Wait ~3 mins for the load balancer to settle. Access the Couchbase Web Console at <ingress-lb>:8091. Once again, only the travel-sample bucket exists. It is created by the arungupta/couchbase image used in the RC definition.

Show Stateful Containers

Let’s create a new bucket. Give it the name kubernetes-pv, take all the defaults, and click on the Create button to create the bucket.

[Screenshot: creating the kubernetes-pv bucket]

The bucket now shows up in the console:

[Screenshot: kubernetes-pv bucket created]

Terminate the Couchbase pod and see the state get restored.

Get the pods again:

Delete the pod:

Pod gets recreated:

And now when you access the Couchbase Web Console, the earlier created bucket still exists:

[Screenshot: the kubernetes-pv bucket still exists]

That’s because the data was stored in the backing EBS storage.

Cleanup Kubernetes Cluster

Shut down the Kubernetes cluster:
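Presumably, using the script bundled with the downloaded release:

  ./kubernetes/cluster/kube-down.sh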

And detach the volume:
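Presumably (the volume id placeholder is hypothetical):

  aws ec2 detach-volume --region us-west-2 --volume-id <volume-id>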

Complete source code for this blog is at: github.com/arun-gupta/couchbase-kubernetes.

Enjoy!

Source: blog.couchbase.com/2016/july/stateful-containers-kubernetes-amazon-ebs

Couchbase Connect 2016 Call for Papers

Couchbase Connect is an event where the best Dev and Ops minds in NoSQL get together!

Inviting all developers, architects, administrators, and CxOs to Save The Date for Couchbase Connect. This event is happening in the beautiful Bay Area and could be your excuse to visit Silicon Valley! And if you live nearby, then there is no excuse not to attend!

[Image: Connect 2016 Save the Date]

You will hear from the best speakers in the NoSQL industry. You’ll also get an opportunity to meet Couchbase Product Managers, the Engineers who build the technology, Developer Advocates, Community Champions and Experts, the Couchbase executive team, and many more. Who knows which discussion will help you replatform your application and get ready to meet the demands of the Digital Economy!

You’ll not only learn about some of the coolest features in Couchbase, but also get to hear about our product roadmap. There will be hands-on workshops, sessions by practitioners, quick tips, informal Birds of a Feather discussions, and a lot more. And of course, the hallway track will allow you to socialize with attendees and other Couchbasers.

Couchbase Connect 2016 Call for Papers

Are you using Couchbase and would like to share your success story? We’d love to have you submit a talk.

Some of the suggested topics are …

  • Build a polyglot application with microservices using Couchbase
  • Deploy Couchbase container using any of the orchestration frameworks like Docker Swarm, Kubernetes, Mesos, and AWS ECS
  • Using Couchbase Mobile in a creative way
  • Making Couchbase Mobile offline work for you in an interesting environment
  • Provisioning, managing, scaling, and monitoring Couchbase clusters
  • Integration with Big Data platforms such as Hadoop, Spark, and Kafka
  • Any interesting tools that you’ve built around Couchbase
  • Migration from RDBMS or other NoSQL databases to Couchbase
  • Using Couchbase in production with hundreds of nodes and multiple clusters
  • Connecting edge/IoT devices using Couchbase Mobile and Server

And these are only suggestions. Feel free to get creative and go crazy!

Submit your talks at: info.couchbase.com/Connect_2016_call_for_papers.html.

Each talk is usually 40 mins long, but you can submit a lightning talk of 10 or 20 mins as well. The Call for Papers ends on Sep 1, but don’t delay. Start thinking about those abstracts and submit a session today!

All speakers will be invited to a VIP dinner, get some exclusive swag, and a whole lot more.

Sessions from Connect 2015 can be viewed at connect15.couchbase.com/sessions/.

Check out some pictures from Connect 2015:

[Photos from Connect 2015]

If pictures are any indication, Connect will be a complete NoSQL geekgasm. You don’t want to miss the party and all the fun!

Couchbase Connect References

  • Website: couchbase.com/connect-16
  • Venue: Santa Clara Convention Center, California, USA
  • Dates: Nov 7-9, 2016
  • CFP ends: Sep 1, 2016
  • Call for Papers: info.couchbase.com/Connect_2016_call_for_papers.html

Source: blog.couchbase.com/2016/july/couchbase-connect-2016-call-for-papers

Labels and Constraints with Docker Daemon and Service

Metadata, such as labels, can be attached to the Docker daemon. A label is a key/value pair and allows a Docker host to be targeted by containers. The semantics of labels are completely defined by the application. A constraint can be specified during service creation to target the tasks at a particular host.

Let’s see how we can use labels and constraints in Docker for a real-world application.

Couchbase Multidimensional Scaling (MDS) allows the Index, Data, Query, and Full-text Search services to be split across multiple nodes. The needs of each service are different. For example, Query is CPU heavy, Index is disk intensive, and Data needs a mix of memory and fast read/write storage, such as SSDs.

MDS allows the hardware resources to be independently assigned and optimized on a per node basis, as application requirements change.

[Figure: Couchbase Multidimensional Scaling]

Read more about Multidimensional Scaling.

Let’s see how this can be easily accomplished in a three-node cluster using Docker swarm mode.

 

Start Ubuntu Instances

Start three instances on EC2 of Ubuntu Server 14.04 LTS (HVM) (AMI ID: ami-06116566). Take defaults in all cases except for the security group. Swarm mode requires the following three ports open between hosts:

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • TCP and UDP port 4789 for overlay network traffic

Make sure to create a new security group with these rules:

[Screenshot: security group with the swarm mode ports]

Wait for a few minutes for the instances to be provisioned.

Set up Docker on Ubuntu

Swarm mode is introduced in Docker 1.12. At the time of this writing, 1.12 RC4 is the latest candidate. Use the following script to install the RC4 release with experimental features:
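A minimal sketch of such a script, assuming the 2016-era experimental install channel and a hypothetical key file mykey.pem:

  # loop over the public IP address of each running instance
  for ip in $(aws ec2 describe-instances \
      --filters Name=instance-state-name,Values=running \
      --query 'Reservations[].Instances[].PublicIpAddress' --output text); do
    # install the latest Docker release with experimental features
    ssh -o StrictHostKeyChecking=no -i mykey.pem ubuntu@$ip \
      "curl -fsSL https://experimental.docker.com/ | sh"
    # allow Docker to be used as a non-root user
    ssh -i mykey.pem ubuntu@$ip "sudo usermod -aG docker ubuntu"
    # print the Docker version
    ssh -i mykey.pem ubuntu@$ip "docker version"
  done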

This script assumes that AWS CLI is already setup and performs the following configuration for all running instances in your configured EC2 account:

  • Gets the public IP address of each instance
  • For each instance:
    • Installs the latest Docker release with experimental features
    • Adds the ubuntu user to the docker group. This allows Docker to be used as a non-root user.
    • Prints the Docker version

This simple script will set up a Docker host on all three instances.

Assign Labels to Docker Daemon

Labels can be defined using DOCKER_OPTS. For Ubuntu, this is defined in the /etc/default/docker file.

Distinct labels need to be assigned to each node. For example, use the couchbase.mds key with the value index.
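For example, on the first node (the exact DOCKER_OPTS value is an assumption; the label values for the other two nodes, data and query, are assumptions as well):

  # /etc/default/docker on the first node
  DOCKER_OPTS="--label couchbase.mds=index"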

You also need to restart the Docker daemon. Finally, docker info displays system-wide information:

As you can see, labels are visible in this information.

For the second node, assign a different label:

Make sure to use the IP address of the second EC2 instance. The updated information about the Docker daemon in this case will be:

And finally, the last node:

The updated information about the Docker daemon for this host will show:

In our case, a homogeneous cluster is created where the machines are exactly alike, including their operating system, CPU, disk, and memory capacity. In the real world, you’ll typically have the same operating system, but instance capacity, such as disk, CPU, and memory, will differ based upon which Couchbase services you want to run on them. These labels would make perfect sense in that case, but they illustrate the point here.

Enable Swarm Mode and Create Cluster

Let’s enable Swarm Mode and create a cluster of 1 manager and 2 worker nodes. By default, managers are worker nodes as well.

Initialize Swarm on the first node:
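Presumably:

  docker swarm init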

This will show the output:

Add the other two nodes as workers:
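Along these lines (this is the join syntax of the final 1.12 release; the RC builds differed slightly):

  docker swarm join --token <worker-token> <manager-private-ip>:2377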

The exact commands, and output, in this case are:

Complete details about the cluster can now be obtained:
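Presumably:

  docker node ls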

And this shows the output:

This shows that we’ve created a 3-node cluster with one manager.

Run Docker Service with Constraints

Now we are going to run three Couchbase services with different constraints. Each service specifies a constraint in the --constraint engine.labels.<label> format, where <label> matches the labels defined earlier for the nodes.

Each service is given a unique name, as that allows the services to be scaled individually. All commands are directed at the Swarm manager:

The exact commands in our case are:
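They presumably follow this pattern (the service names, label values, and the couchbase image are assumptions):

  docker service create --name couchbase-index \
    --constraint 'engine.labels.couchbase.mds == index' couchbase
  docker service create --name couchbase-data \
    --constraint 'engine.labels.couchbase.mds == data' couchbase
  docker service create --name couchbase-query \
    --constraint 'engine.labels.couchbase.mds == query' couchbase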

The list of services can be verified as:
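Presumably:

  docker service ls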

This shows the output as:

And the list of tasks (essentially containers within that service) for each service can then be verified as:
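Presumably (docker service ps is the final 1.12 syntax; the RC builds used docker service tasks):

  docker service ps couchbase-index
  docker service ps couchbase-data
  docker service ps couchbase-query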

And the output in our case:

This shows the services are nicely distributed across different nodes. Feel free to verify that each task is indeed scheduled on the node with the right label.

All Couchbase instances can be configured in a cluster to provide a complete database solution for your web, mobile and IoT applications.

Want to learn more?

  • Docker Swarm Mode
  • Couchbase on Containers
  • Follow us on @couchbasedev or @couchbase
  • Ask questions on Couchbase Forums

Source: blog.couchbase.com/2016/july/labels-constraints-docker-daemon-service

Couchbase Docker Container on Amazon ECS

This blog will explain how to run a Couchbase Docker container using Amazon EC2 Container Service (Amazon ECS).

Many thanks to @moviolone for helping me understand the concepts and getting this setup running.

What is Amazon ECS?

Amazon ECS is a container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. Amazon ECS integrates well with the rest of the AWS infrastructure and eliminates the need to operate your own cluster or configuration management systems.


An obvious question is how this differs from other container orchestration frameworks like Docker Swarm, Kubernetes, or Mesos. The first big difference is that each of those frameworks is open source. Amazon uses a proprietary orchestration framework at this time.

A big advantage of ECS is that, just like the rest of the AWS infrastructure, this is a managed service. So you only need to worry about deploying your containers, without worrying about the infrastructure.

A better comparison for ECS is Docker for AWS/Azure (backed by the newly introduced Swarm Mode in Docker), Google Container Engine (backed by Kubernetes), and DC/OS (backed by Mesos), as they are managed services as well.

Another advantage of ECS is that it seamlessly integrates with AWS infrastructure: deploying container instances using CloudFormation templates, scaling containers using Auto Scaling Groups, port mapping using Security Groups, managing incoming container traffic using Elastic Load Balancer, viewing logs using CloudWatch, and more.

If you have already bought into the Amazon infrastructure, then ECS sounds like a good fit. Docker for AWS, announced at DockerCon, is a similar offering in this space.

However, there are a couple of cons that you need to be aware of as well:

  • Portability – Applications designed for Docker Swarm, Kubernetes, and Mesos can run on a variety of platforms, such as Amazon, Azure, GCE, OpenStack, on-prem, VMware, and bare metal data centers. But ECS is tied to Amazon only. Do you consider that a vendor lock-in?
    Amazon may release their orchestration platform or scheduler as a standalone product, but that’s not very typical.
  • Container format – The ECS service is focused on Docker containers only. For all practical purposes, at least today, this may be perfectly fine. I’ve not heard of or seen any deployments of Rkt or any other container format. However, this may change once OCI-compliant runtimes start showing up in the future.

One last thing before we dig into the concepts and code: there is no additional charge for Amazon EC2 Container Service. You pay for the AWS resources (e.g. EC2 instances or EBS volumes) you create to store and run your application.

Amazon ECS Concepts

Here is an overview of the key concepts in ECS:

[Figure: Amazon ECS concepts]

  • Container Instance: An AMI instance that is primed for running containers. By default, each Amazon instance uses Amazon ECS-Optimized Linux AMI. This is the recommended image to run ECS container service. The key components of this base image are:
    • Amazon Linux AMI
    • Amazon ECS Container Agent – It manages containers lifecycle on behalf of ECS and allows them to connect to the cluster.
    • Docker Engine (as of this writing, this is version 1.11.1)

    Other images like CoreOS, Suse or Ubuntu can be configured to meet Container Instance AMI specification. This can be done because ECS Agent code is available in open source.

  • Task: A task is defined as a JSON file and describes an application that contains one or more container definitions. This usually points to Docker images from a registry, port/volume mapping, etc.
  • Service: ECS maintains the “desired state” of your application. This is achieved by creating a service. A service specifies the number of instances of a task definition that need to run at a given time. If a task in a service becomes unhealthy or stops running, the service scheduler will bounce the task. It ensures that the desired and actual state match. This is what provides resilience in ECS. New tasks within a service are balanced across Availability Zones in your cluster. The service scheduler figures out which container instances can meet the needs of a service and schedules the task on a valid container instance in an optimal Availability Zone (one with the fewest tasks running).

Getting Started with Amazon EC2 Container Service

Login to your AWS EC2 console and click on the EC2 Container Service:

[Screenshot: EC2 Container Service in the AWS Console]

Click on the Get started button to define your application.

Create ECS Task

In ECS, Docker workloads are defined as tasks. A task can contain multiple containers. All containers for a task are co-located on the same machine.

Enter the values as shown:

[Screenshot: task definition values]

A few items are specified in this step:

  • Task definition is description of an application that contains one or more container definitions.
  • Container name is the name that will be given to the container started as part of this task.
  • Image allows you to specify one or more images that need to be started as containers as part of this application. The image specified here uses couchbase:latest as the base image and uses the Couchbase REST API to configure the server. The Dockerfile for this image provides more details about how the image is prepared.
  • Maximum memory is the memory that needs to be allocated for the container (equivalent to -m Docker CLI switch). Couchbase needs 1GB for running in dev and so that is specified here.
  • And finally the port mappings (-p on Docker CLI). Port 8091 is needed for Couchbase administration.

More details about these are available in Task Definition Parameters.

Create ECS Service

 

Click on Next step to configure a service.

[Screenshot: service configuration]

Give the service a name. The desired state can be specified here. For now, we’ll keep it simple and launch a single-node Couchbase container. And since the desired state is to run a single container, no ELB is required.

More details about these are available in Service Definition Parameters.

Create ECS Cluster

Tasks run on a container instance, and these instances need to register in a cluster. This allows us to scale the cluster up/down later to accommodate running more containers.

Click on Next step to configure the cluster.

[Screenshot: cluster configuration]

In this image:

  • Take the default cluster name
  • A homogeneous cluster of container instances is created. m3.medium is a good size to run a Couchbase node
  • Choose a previously created key pair. This will allow you to open an SSH connection to the container instance
  • A new IAM role will be created to allow the ECS agent to communicate with the ECS service

Container instances in a cluster can span multiple availability zones and be balanced with ELB.

Review all the specified options:

[Screenshot: review of the specified options]

Click on Launch instance & run service button to start the service.

The following status is shown after the service is created:

[Screenshot: launch status]

The output shows that the cluster, service, and task definitions are created. It takes a few minutes for the instances to be provisioned and initialized and for tasks to run on them.

View ECS Service and Task

Click on View Service button to see the newly created service.

[Screenshot: service details]

A few things in this image:

  • The service shows the task definition couchbase:6. Each service is assigned a task definition, and multiple versions are indicated by the trailing number at the end. In this case, a few versions were created earlier, but otherwise the version number starts from 1.
  • Desired and Running counts are shown as 1.
  • Minimum healthy percent and Maximum percent are used if a new version of the task definition needs to be deployed. With corresponding values of 100% and 200%, a new version of the task will be deployed first and then the older versions will be terminated. We’ll play with these numbers in a subsequent blog.
  • The running task is shown towards the bottom of the screen. Click on the UUID to learn more about the running task.

[Screenshot: task details]

The task definition shows the EC2 instance where it is running, the current status, port mappings, and other useful information. The critical piece to look at is the External Link. This URL is where our Couchbase Web Console will be accessible.

Couchbase Web Console

Clicking on this link will open a new tab with Couchbase Web Console:

[Screenshot: Couchbase Web Console login]

 

Enter the login Administrator and the password password. These are configured in the arungupta/couchbase image.

And here you see Couchbase Web Console in full glory!

[Screenshot: Couchbase Web Console]

This blog explained how to run a Couchbase Docker container using Amazon ECS.

Future blogs will show …

  • Setup a Couchbase cluster using ECS
  • Deploy a multi-container application using Docker Compose (v2 is now supported)
  • Setup ECS cluster using CLI

Amazon ECS and Couchbase References

  • EC2 Container Service Docs
  • Getting Started with ECS Tutorial
  • ECS Application Architecture
  • Couchbase on Containers
  • Couchbase Server Portal

Source: blog.couchbase.com/2016/july/couchbase-docker-container-amazon-ecs

Docker Daemon Log with Docker for Mac

Did you know that Docker for Mac is now in general beta?


What is Docker for Mac?

Docker for Mac is a native Mac application architected from scratch, with a native user interface and auto-update capability, deeply integrated with OS X native virtualization

If you are using Docker Machine, then you can ssh to the machine using docker-machine ssh <machine-name> command and find the logs at /var/log/docker.

As Docker for Mac provides native integration with the Mac, the logs can also be found using the native tools.

Mac Console for Docker Daemon Logs

Console is a utility available in Applications -> Utilities. It is the log viewer included with macOS. It allows users to search through all of the system’s logged messages and can alert the user when certain types of messages are logged. Console allows you to read the system logs, find specific ones, monitor them, and filter their contents.

File -> New System Log Query…

[Screenshot: New System Log Query]

Give the query a name and set Sender to docker. Click on OK to save the query:

[Screenshot: saved query with Sender set to docker]

Now the daemon logs can easily be seen here.

The saved log query can be used to search logs, filter the results in various ways, and create reports.

Docker Daemon Log using CLI

Not a GUI type of person and prefer a CLI approach? Then use the syslog CLI. The command to see the Docker daemon log is:

syslog -k Sender Docker

And it shows the output as:

Use syslog -help to find all the options for this CLI.

Docker Daemon Log File

If you really want to go the hard-core way, then the log files are available at:

~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log.

Check complete details at: docs.docker.com/docker-for-mac/troubleshoot/#/checking-the-logs.

What is holding you back from using Docker for Mac?

Enjoy!

Source: blog.couchbase.com/2016/july/docker-daemon-log-mac

Getting Started with Docker for AWS and Scaling Nodes

This blog will explain how to get started with Docker for AWS and deploy a multi-host Swarm cluster on Amazon.


Many thanks to @friism for helping me debug through the basics!

boot2docker -> Docker Machine -> Docker for Mac

Are you packaging your applications using Docker and using boot2docker for running containers in development? Then you are really living under a rock!

It is highly recommended to upgrade to Docker Machine for dev/testing of Docker containers. It encapsulates boot2docker and allows you to create one or more lightweight VMs on your machine. Each VM acts as a Docker Engine and can run multiple Docker containers. Running multiple VMs allows you to easily set up a multi-host Docker Swarm cluster on your local laptop.

Docker Machine is now old news as well. DockerCon 2016 announced public beta of Docker for Mac. This means anybody can sign up for Docker for Mac at docker.com/getdocker and use it for dev/test of Docker containers. Of course, there is Docker for Windows too!

Docker for Mac is still a single host but has a swarm mode that allows you to initialize it as a single-node Swarm cluster.

What is Docker for AWS?

So now that you are using Docker for Mac for development, what would be your deployment platform? DockerCon 2016 also announced Docker for AWS and Azure Beta.

Docker for AWS and Azure both start a fleet of Docker 1.12 Engines with swarm mode enabled out of the box. Swarm mode means that the individual Docker engines form into a self-organizing, self-healing swarm, distributed across availability zones for durability.

Only AWS and Azure charges apply; Docker for AWS and Docker for Azure are free at this time. Sign up for Docker for AWS and Azure at beta.docker.com. Note that availability is restricted at this time.

Once your account is enabled, then you’ll get an invitation email as shown below:

[Screenshot: Docker for AWS invitation email]

Docker for AWS CloudFormation Values

Click on Launch Stack to be redirected to the CloudFormation template page.

Take the defaults:
[Screenshot: CloudFormation template selection]

S3 template URL will be automatically populated, and is hidden here.

Click on Next. This page allows you to specify details for the CloudFormation template:
[Screenshot: CloudFormation template details]

The following changes may be made:

  • Template name
  • Number of manager and worker nodes, 1 and 3 in this case. Note that only an odd number of managers can be specified. By default, containers are scheduled on the worker nodes only.
  • AMI size of master and worker nodes
  • A key already configured in your AWS account

Click on Next and take the defaults:
[Screenshot: CloudFormation options]

Click on Next, confirm the settings:
[Screenshots: review settings]

Select the IAM resources checkbox and click on the Create button to create the CloudFormation template.

It took ~10 mins to create a 4-node cluster (1 manager + 3 workers):

[Screenshot: CloudFormation stack created]

More details about the cluster can be seen in the EC2 Console:

[Screenshot: EC2 instances for the swarm]

Docker for AWS Swarm Cluster Details

The Output tab of the EC2 Console shows more details about the cluster:

[Screenshot: Output tab]

More details about the cluster can be obtained in two ways:

  • Log into the cluster using SSH
  • Create a tunnel and then configure local Docker CLI

Create SSH Connection to Docker for AWS

Log in using the command shown in the Value column of the Output tab.

Create an SSH connection as:
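A sketch, assuming the key pair used in the CloudFormation template (Docker for AWS uses the docker SSH user):

  ssh -i <key>.pem docker@<manager-public-ip>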

Note that we are using the same key here that was specified in the CloudFormation template. The list of containers can then be seen using the docker ps command:

Create SSH Tunnel to Docker for AWS

Alternatively, an SSH tunnel can be created as:
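A sketch, following the Docker for AWS beta docs of the time:

  ssh -i <key>.pem -NL localhost:2374:/var/run/docker.sock docker@<manager-public-ip> &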

Set up DOCKER_HOST:
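Presumably:

  export DOCKER_HOST=localhost:2374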

The list of containers can be seen as above using the docker ps command. In addition, more information about the cluster can be obtained using the docker info command:

Here are some key details from this output:

  • 4 nodes and 1 manager, and so that means 3 worker nodes
  • All nodes are running Docker Engine version 1.12.0-rc3
  • Each VM is created using Alpine Linux 3.4

Scaling Worker Nodes in Docker for AWS

All worker nodes are configured in an AWS AutoScaling Group. Manager node is configured in a separate AWS AutoScaling Group.

[Screenshot: Auto Scaling groups]

This first release allows you to scale the worker count using the Auto Scaling group. Docker will automatically join new instances to the Swarm or remove them. Changing the manager count live is not supported in this release.

Select the AutoScaling group for worker nodes to see complete details about the group:

[Screenshot: worker Auto Scaling group details]

Click on the Edit button to change the number of desired instances to 5, and save the configuration by clicking on the Save button:

[Screenshot: editing the desired instance count]

It takes a few seconds for the new instances to be provisioned and automatically included in the Docker Swarm cluster. The refreshed Auto Scaling group is shown as:

[Screenshot: refreshed Auto Scaling group]

And now docker info command shows the updated output as:

This shows that there are a total of 6 nodes with 1 manager.

Docker for AWS References

  • Docker for AWS and Azure Announcement Blog
  • Docker for AWS and Azure
  • Docker for AWS Release Notes

Source: blog.couchbase.com/2016/july/docker-for-aws-getting-started-scaling-nodes

Docker-native CI/CD with Codeship Webinar – Part 1


Laura (@rhein_wein) and I are Docker Captains. This means we demonstrate a commitment to sharing our Docker knowledge with others. Amongst other languages, she talks Ruby and Postgres and I talk Java and Couchbase. But we talk to each other using the common language of Docker!

This multi-part interactive webinar will teach you how to build a Docker-native CI/CD pipeline using Codeship. The series uses the application from the Docker for Java Developers workshop, which in turn uses WildFly and Couchbase.

Codeship is a Docker-native SaaS platform for creating your CI/CD pipelines. SaaS means that you don’t need to manage setting up a CI/CD server and workers. It allows you to use existing Dockerfiles and images from any registry and enjoy full customizability of your dev environments. Learn more at Codeship Docs.

WildFly is a Java EE 7 compliant application server that allows you to build amazing web applications. A light memory footprint, a blazing fast startup, and customizable runtimes make it an ideal candidate for deploying in the cloud. Powerful administration features, an intuitive web console, and a REST API make it a breeze to manage.

Couchbase is an open-source NoSQL document database. It allows you to develop your applications with agility and operate at any scale. Agility comes from a flexible schema, a SQL-like query language, a rich Web Console, REST API and CLI, a mobile-to-backend solution, and much more. Unlike master/slave architectures, Couchbase scales linearly and can be deployed on a variety of clouds and on-prem.

Let’s learn the basic concepts of Codeship in this introductory webinar:

What CI/CD platform do you use for building your deployment pipelines?

Codeship References

  • Codeship Docs
  • Codeship Features
  • Codeship Github Repo
  • WildFly
  • Couchbase Developer Portal
  • Docker for Java Developers Workshop

Source: blog.couchbase.com/2016/july/docker-native-ci-cd-codeship-part-1


Docker Services, Stack and Distributed Application Bundle


First Release Candidate of Docker 1.12 was announced over two weeks ago. Several new features are planned for this release.

This blog will show how to create a Distributed Application Bundle from Docker Compose and deploy it as a Docker Stack in Docker Swarm Mode. Many thanks to @friism for helping me understand these concepts.

Let’s look at the features first:

  • Built-in orchestration: A typical application is defined using a Docker Compose file. This definition consists of multiple containers and is deployed on multiple hosts. This avoids a Single Point of Failure (SPOF) and keeps your application resilient. Multiple orchestration frameworks, such as Docker Swarm, Kubernetes, and Mesos, allow you to orchestrate these applications. However, orchestration is such an important characteristic of an application that Docker Engine now has it built in. More details on this topic in a later blog.
  • Service: A replicated, distributed and load balanced service can be easily created using docker service create command. A “desired state” of the application, such as run 3 containers of Couchbase, is provided and the self-healing Docker engine ensures that that many containers are running in the cluster. If a container goes down, another container is started. If a node goes down, containers on that node are started on a different node. More on this in a later blog.
  • Zero-configuration Security: Docker 1.12 comes with mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node participating in the swarm, out of the box. More on this in a later blog.
  • Docker Stack and Distributed Application Bundle: A Distributed Application Bundle, or DAB, is a multi-service distributable image format. Read on for more details.

So far, you can take a Dockerfile and create an image from it using the docker build command. A container can be started using the docker run command. Multiple containers can easily be started by running that command multiple times. Or you can use a Docker Compose file and scale up your containers using the docker-compose scale command.

[Figure: Dockerfile -> Image -> Container]

An image is a portable format for a single container. A Distributed Application Bundle, or DAB, a new concept introduced in Docker 1.12, is a portable format for multiple containers. Each bundle can then be deployed as a Stack at runtime.

[Figure: Docker Compose -> Distributed Application Bundle -> Docker Stack]

Learn more about DAB at docker.com/dab.

For simplicity, here is an analogy that can be drawn:

Dockerfile -> Image -> Container

Docker Compose -> Distributed Application Bundle -> Docker Stack

Let’s use a Docker Compose file, create a DAB from it, and deploy it as a Docker Stack.

It’s important to note that this is an experimental feature in 1.12-RC2.

Create a Distributed Application Bundle from Docker Compose

Docker Compose CLI adds a new bundle command. More details can be found:

Now, let’s take a Docker Compose definition and create a DAB from it. Here is our Docker Compose definition:
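The file is linked below; a sketch of its shape (service names, image names, and the environment variable are assumptions drawn from the linked repo):

  version: "2"
  services:
    db:
      container_name: "db"
      image: arungupta/oreilly-couchbase
      ports:
        - 8091:8091
    web:
      image: arungupta/oreilly-wildfly
      depends_on:
        - db
      environment:
        # hostname of the Couchbase service (assumed)
        - COUCHBASE_URI=db
      ports:
        - 8080:8080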

This Compose file starts a WildFly and a Couchbase server. A Java EE application is pre-deployed on the WildFly server; it connects to the Couchbase server and allows CRUD operations to be performed using the REST API.

The source for this file is at: github.com/arun-gupta/oreilly-docker-book/blob/master/hello-javaee/docker-compose.yml.

Generate an application bundle with it:
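Presumably:

  docker-compose bundle

The warnings about depends_on and container_name discussed next come from this command.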

depends_on only creates a dependency between two services and makes them start in a specific order. This only ensures that the Docker container is started; the application within the container may take longer to start. So this attribute only partially solves the problem. container_name gives a specific name to the container. Relying upon a specific container name is tight coupling and does not allow the container to be scaled. So both warnings can be ignored, for now.

This command generates a file using the Compose project name, which is the directory name. So in our case, hellojavaee.dsb file is generated. This file extension has been renamed to .dab in RC3.

The generated application bundle looks like:
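A sketch of the DAB format (image digests elided; the exact contents will differ):

  {
    "Services": {
      "db": {
        "Image": "arungupta/oreilly-couchbase@sha256:...",
        "Networks": ["default"],
        "Ports": [{"Port": 8091, "Protocol": "tcp"}]
      },
      "web": {
        "Image": "arungupta/oreilly-wildfly@sha256:...",
        "Networks": ["default"],
        "Ports": [{"Port": 8080, "Protocol": "tcp"}]
      }
    },
    "Version": "0.1"
  }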

This file provides a complete description of the services included in the application. I’m not entirely sure if Distributed Application Bundle is the most appropriate name; discuss this in #24250. It would be great if other container formats, such as Rkt, or even VMs could be supported here. But for now, Docker is the only supported format.

Initialize Swarm Mode in Docker

As mentioned above, “desired state” is now maintained by Docker Swarm. And this is now baked into Docker Engine already.

Docker Swarm concepts have evolved as well and can be read at Swarm mode key concepts. A more detailed blog on this will be coming later.

But for this blog, a new command docker swarm is now added:

Initialize a Swarm node (as a manager) in the Docker Engine:

More details about this node can be found using docker node inspect self command.

The detailed output is verbose but the relevant section is:

The output shows that the node is a manager. For a single-node cluster, this node will also act as a worker.

 

More details about the cluster can be obtained using the docker swarm inspect command.

AcceptancePolicy shows that other worker nodes can join this cluster, but a manager requires explicit approval.

Deploy a Docker Stack

Create a stack using docker deploy command:
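Presumably, with the stack name matching the generated bundle file:

  docker deploy hellojavaee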

The command usage can certainly be simplified as discussed in #24249.

See the list of services:

The output shows that two services, WildFly and Couchbase, are running. Services are also a new concept introduced in Docker 1.12. This is what gives you the “desired state”, and Docker Engine works to maintain it.

docker ps shows the list of containers running:

The WildFly container starts up before the Couchbase server is up and running. This means the Java EE application tries to connect to the Couchbase server and fails. So the application never boots successfully.

Self-healing Docker Service

Docker Service maintains the “desired state” of an application. In our case, the desired state is to ensure that one, and only one, container for the service is running. If we remove the container, not the service, then the service will automatically start the container again.

Remove the container as:
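Presumably (the container id is hypothetical):

  docker rm -f <container-id>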

Note that you have to give -f because the container is already running. Docker 1.12 self-healing mechanisms kick in and automatically restart the container. Now if you list the containers again:

This shows that a new container has been started.

Inspect the WildFly service:

Swarm assigns a random port to the service; this can be manually updated using the docker service update command. In our case, port 8080 of the container is mapped to port 30004 on the host.

Verify the Application

Check that the application is successfully deployed:

Add a new book to the application:

Verify the books again:

Learn more about this Java EE application at github.com/arun-gupta/oreilly-docker-book/tree/master/hello-javaee.

This blog showed how to create a Distributed Application Bundle from Docker Compose and deploy it as Docker Stack in Docker Swarm Mode.

Docker Service and Stack References

  • Docker Service Create
  • FREE book from O’Reilly: Docker for Java Developers
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on @couchbasedev or Stackoverflow

Source: blog.couchbase.com/2016/july/docker-services-stack-distributed-application-bundle