Tag Archives: kubernetes

Kubernetes Cluster on Azure and Expose Couchbase Service

Kubernetes Logo

This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It will use Couchbase, an open source NoSQL distributed document database, as the Docker container.

  • Part 1 explained how to start a Kubernetes cluster using Vagrant – Kubernetes on Vagrant
  • Part 2 did the same for Amazon Web Services – Kubernetes on Amazon Web Services
  • Part 3 did the same for Google Cloud – Kubernetes on Google Cloud

This fourth part will show:

  • How to set up and start the Kubernetes cluster on Azure
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shutdown the cluster

azure-kubernetes-couchbase-cluster

Many thanks to @colemickens for helping me through this recipe. This blog content is heavily based upon the instructions at colemickens.github.io/docs/getting-started-guides/azure/.

Install and Configure Azure CLI

Azure CLI is a command-line interface to develop, deploy and manage Azure applications. This is needed in order to install Kubernetes cluster on Azure.

  1. Install Node.js.
  2. Install the Azure CLI.
  3. Sign up for the free trial at https://azure.microsoft.com/en-us/free/.
  4. Login to Azure using the azure login command.
  5. Get account information using the azure account show command (see the sample commands after this list).

    Note the values shown instead of XXX and YYY. These will be used to configure the Kubernetes cluster.
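
A minimal sketch of these steps, assuming the Node-based azure-cli package of that era (exact package names and versions may have changed since):

```sh
# Install the Azure CLI (requires Node.js / npm)
npm install -g azure-cli

# Login to Azure; follow the device-login prompt in the browser
azure login

# Show the subscription details; note the ID (XXX) and Tenant ID (YYY)
azure account show
```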

Start Kubernetes Cluster

  1. Download Kubernetes 1.2.4 and extract it.
  2. Configure the Kubernetes cluster for Azure (see the sample commands after this list).

    Make sure to specify the appropriate values for XXX and YYY from the previous command. AZURE_SUBSCRIPTION_ID and AZURE_TENANT_ID are specific to Azure.

    These values can also be edited in cluster/azure/config-default.sh.

  3. Start the Kubernetes cluster with the cluster/kube-up.sh script.

    It starts four nodes of Standard_A1 size. Each node gives you 1 core, 1.75 GB RAM, and 40GB HDD.
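
A hedged sketch of steps 2 and 3, assuming the environment variables named above are exported before running the stock kube-up.sh script:

```sh
# Values noted from 'azure account show' (XXX = subscription ID, YYY = tenant ID)
export AZURE_SUBSCRIPTION_ID=XXX
export AZURE_TENANT_ID=YYY

# Use the Azure provider and bring up the cluster
export KUBERNETES_PROVIDER=azure
cluster/kube-up.sh
```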

Run Docker Container in Kubernetes Cluster on Azure

Now that the cluster is up and running, get a list of all the nodes:
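
For reference, the standard command is:

```sh
kubectl get nodes
```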

Four instances are created as shown – one for the master node and three for worker nodes.

Azure Portal shows all the created artifacts in the Resource Group:

azure-portal-kubernetes-resource-group

More details about the created nodes are available:

azure-portal-kubernetes-vnet

Create a Couchbase pod:
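
Something along these lines, with the image name given directly on the CLI (the name couchbase is used for the Deployment and Service throughout this post):

```sh
kubectl run couchbase --image=arungupta/couchbase
```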

Notice how the image name can be specified on the CLI. Kubernetes pre-1.2 versions created a Replication Controller with this command, as explained in Kubernetes on Amazon Web Services or Kubernetes on Google Cloud. Kubernetes 1.2 introduced Deployments, so this command creates a Deployment instead. Deployments enable simplified application deployment and management, including versioning, multiple simultaneous rollouts, aggregating status across all pods, maintaining application availability, and rollback.

The pod uses arungupta/couchbase Docker image that provides a pre-configured Couchbase server. Any Docker image can be specified here.

Status of the pod can be watched:
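
For example, until the pod reaches the Running state:

```sh
kubectl get pods --watch
```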

Get more details about the pod:
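
A sketch, assuming the run=couchbase label that kubectl run applies to the pod:

```sh
kubectl describe pods -l run=couchbase
```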

Expose Pod on Kubernetes as Service

Now that our pod is running, how do I access the Couchbase server? You need to expose the Deployment as a Service outside the Kubernetes cluster.

Typically, this will be exposed using the command:
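
On a provider with load balancer support, that would look roughly like this (8091 is the Couchbase Web Console port):

```sh
kubectl expose deployment couchbase --port=8091 --target-port=8091 --type=LoadBalancer
```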

But Azure does not support --type=LoadBalancer at this time. This feature is being worked on and will hopefully be available in the near future. So in the meantime, we’ll expose the Service as:
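
That is, a sketch of the same command without the --type flag, which creates a cluster-internal Service:

```sh
kubectl expose deployment couchbase --port=8091 --target-port=8091
```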

Now proxy to this Service using kubectl proxy command:
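
The proxy is started on port 9999 to match the URL below:

```sh
kubectl proxy --port=9999
```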

And now this exposed Service is accessible at http://127.0.0.1:9999/api/v1/proxy/namespaces/default/services/couchbase/index.html. This shows the login screen of Couchbase Web Console:

azure-kubernetes-couchbase-web-console

Shutdown Kubernetes Cluster

Finally, shutdown the cluster using cluster/kube-down.sh script.

This script shuts down the cluster, but the Azure resource group needs to be explicitly removed. This can be done by selecting the Resource Group in portal.azure.com:

azure-portal-kubernetes-delete-resource-group

This is filed as #26601.

Enjoy!

Further references …

  • Couchbase Server Developer Portal
  • Couchbase on Containers
  • Questions on StackOverflow, Forums or Slack Channel
  • Follow us @couchbasedev
  • Couchbase 4.5 Beta

Source: http://blog.couchbase.com/2016/june/kubernetes-cluster-azure-couchbase-service

Kubernetes Namespaces, Resource Quota, and Limits for QoS in Cluster

Kubernetes Logo

By default, all resources in Kubernetes cluster are created in a default namespace. A pod will run with unbounded CPU and memory requests/limits.

A Kubernetes namespace allows you to partition created resources into a logically named group. Each namespace provides:

  • a unique scope for resources to avoid name collisions
  • policies to ensure appropriate authority to trusted users
  • ability to specify constraints for resource consumption

This allows a Kubernetes cluster to be shared by multiple groups and provides a different level of QoS to each group.

Resources created in one namespace are hidden from other namespaces. Multiple namespaces can be created, each potentially with different constraints.

Default Kubernetes Namespace

By default, each resource created by a user in the Kubernetes cluster runs in a default namespace, called default.

Any pod, service or replication controller will be created in this namespace. The kube-system namespace is reserved for resources created by the Kubernetes cluster itself.

More details about the namespace can be seen:
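
For example:

```sh
kubectl get namespaces
kubectl describe namespace default
```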

This description shows resource quota (if present), as well as resource limit ranges.

So let’s create a Couchbase replication controller as:
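
With a pre-1.2 kubectl client, kubectl run creates a replication controller with a single pod; a sketch:

```sh
kubectl run couchbase --image=arungupta/couchbase
```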

Check the existing replication controller:
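
For example:

```sh
kubectl get rc
```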

By default, only resources in the current namespace are shown. Resources in all namespaces can be listed using the --all-namespaces option:
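
For instance:

```sh
kubectl get rc --all-namespaces
```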

As you can see, the arungupta/couchbase image runs in the default namespace. All other resources run in the kube-system namespace.

Let’s check the context of this replication controller:
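
The kubeconfig, including all contexts, can be viewed with:

```sh
kubectl config view
```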

Look for contexts.context.name attribute to see the existing context. This will be manipulated later.

Create a Resource in New Kubernetes Namespace

Let’s create a new namespace first. This can be done using the following configuration file:
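
A minimal sketch of such a file, written here to a hypothetical development-namespace.yaml (the namespace name development matches the rest of this walkthrough):

```sh
cat <<EOF > development-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development
EOF
```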

Namespace is created as:
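
For example:

```sh
kubectl create -f development-namespace.yaml
```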

Then querying for all the namespaces gives:
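
The listing should now include default, kube-system, and development:

```sh
kubectl get namespaces
```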

A new replication controller can be created in this new namespace by using --namespace option:
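
A sketch:

```sh
kubectl run couchbase --image=arungupta/couchbase --namespace=development
```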

List of resources in all namespaces looks like:

As seen, there are two replication controllers with the arungupta/couchbase image – one in the default namespace and another in the development namespace.

Set Kubernetes Namespace For an Existing Resource

If a resource is already created then it can be assigned a namespace.

On a previously created resource, new context can be set in the namespace:
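
A sketch, assuming a hypothetical context name dev; the cluster and user names below are assumptions that should be taken from the earlier kubectl config view output (which showed couchbase-on-kubernetes_kubernetes):

```sh
# 'dev' is a hypothetical context name; check 'kubectl config view' for the
# actual cluster and user entries created for your cluster
kubectl config set-context dev \
  --namespace=development \
  --cluster=couchbase-on-kubernetes_kubernetes \
  --user=couchbase-on-kubernetes_kubernetes
```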

Viewing the context now shows:

The second attribute in contexts.context array shows that a new context has been created. It also shows that the current context is still couchbase-on-kubernetes_kubernetes. Since no namespace is specified in that context, it belongs to the default namespace.

Change the context:
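
For example, switching to the hypothetical dev context created above:

```sh
kubectl config use-context dev
```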

See the list of replication controllers:

Obviously, no replication controllers are running in this context. Let’s create a new replication controller in this new namespace:

And see the list of replication controllers in all namespaces:

Now you can see two arungupta/couchbase replication controllers running in two different namespaces.

Delete a Kubernetes Resource in Namespace

A resource can be deleted by fully-qualifying the resource name:
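
A sketch, deleting the replication controller created earlier in the default namespace:

```sh
kubectl delete rc couchbase --namespace=default
```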

Similarly the other replication controller can be deleted as:
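
And for the one in the development namespace:

```sh
kubectl delete rc couchbase --namespace=development
```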

Finally, see the list of all replication controllers in all namespaces:

This confirms that all user created replication controllers are deleted.

Resource Quota and Limit using Kubernetes Namespace

Each namespace can be assigned resource quota.

By default, a pod will run with unbounded CPU and memory requests/limits. Specifying a quota allows you to restrict how much of the cluster’s resources can be consumed across all pods in a namespace.

Resource quota can be specified using a configuration file:
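
A sketch of such a file, written here to a hypothetical resource-quota.yaml; the hard limits are illustrative values, not necessarily the ones from the original post:

```sh
cat <<EOF > resource-quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota
spec:
  hard:
    cpu: "20"
    memory: 1Gi
    pods: "10"
    replicationcontrollers: "20"
    resourcequotas: "1"
    services: "5"
EOF
```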

The following resources are supported by the quota system:

  • cpu – Total requested CPU usage
  • memory – Total requested memory usage
  • pods – Total number of active pods (phase is pending or active)
  • services – Total number of services
  • replicationcontrollers – Total number of replication controllers
  • resourcequotas – Total number of resource quotas
  • secrets – Total number of secrets
  • persistentvolumeclaims – Total number of persistent volume claims

This resource quota can be created in a namespace:
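
For example, in the development namespace used above:

```sh
kubectl create -f resource-quota.yaml --namespace=development
```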

The created quota can be seen as:
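
For instance:

```sh
kubectl describe quota quota --namespace=development
```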

Now, if you try to create the replication controller, the command itself succeeds:

But describing the quota again shows:

We expected a new pod to be created as part of this replication controller, but it’s not there. So let’s describe our replication controller:

By default, a pod consumes all the CPU and memory available. With resource quotas applied, an explicit value must be specified. Alternatively, a default value for the pod can be specified using the following configuration file:
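
A sketch of such a LimitRange, written to a hypothetical limits.yaml; the default and defaultRequest values are illustrative:

```sh
cat <<EOF > limits.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - type: Container
    default:
      cpu: 200m
      memory: 512Mi
    defaultRequest:
      cpu: 100m
      memory: 256Mi
EOF
```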

This restricts the CPU and memory that can be consumed by a pod. Let’s apply these limits as:
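
For example:

```sh
kubectl create -f limits.yaml --namespace=development
```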

Now when you describe the replication controller again, it shows:

This shows successful creation of the pod.

And now when you describe the quota, it shows correct values as well:

Resource Quota provides more details about how to set/update these values.

Creating another quota gives the following error:

Specifying Limits During Pod Creation

Limits can be specified during pod creation as well:

If the memory limit for each pod is restricted to 1G, then a valid pod definition would be:
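
A sketch of such a definition, asking for 0.5G of memory (the file and pod names are hypothetical):

```sh
cat <<EOF > pod-valid.yaml
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-pod
spec:
  containers:
  - name: couchbase
    image: arungupta/couchbase
    resources:
      limits:
        memory: 512Mi
EOF
```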

This is because the pod requests only 0.5G of memory. An invalid pod definition would be:
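
And a sketch of one that would be rejected, asking for 2G:

```sh
cat <<EOF > pod-invalid.yaml
apiVersion: v1
kind: Pod
metadata:
  name: couchbase-pod-big
spec:
  containers:
  - name: couchbase
    image: arungupta/couchbase
    resources:
      limits:
        memory: 2Gi
EOF
```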

This is because the pod requests 2G of memory. Creating such a pod gives the following error:

Hope you can apply namespaces, resource quotas, and limits for sharing your clusters across different environments.

Source: http://blog.couchbase.com/2016/march/kubernetes-namespaces-resource-quota-limits-qos-cluster

Kubernetes Cluster on Google Cloud and Expose Couchbase Service

kubernetes-logo

This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It will use Couchbase, an open source NoSQL distributed document database, as the Docker container.

The first part (Couchbase on Kubernetes) explained how to start the Kubernetes cluster using Vagrant. The second part (Kubernetes on Amazon) explained how to run that setup on Amazon Web Services.

This third part will show:

  • How to set up and start the Kubernetes cluster on Google Cloud
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shutdown the cluster

Here is a quick overview:

Kubernetes Cluster on Google Cloud

Let’s get into details!

Getting Started with Google Compute Engine provides detailed instructions on how to set up Kubernetes on Google Cloud.

Download and Configure Google Cloud SDK

There is a bit of setup required if you’ve never accessed Google Cloud from your machine. This was a bit overwhelming and I wish it could be simplified.

  • Create a billable account on Google Cloud
  • Install Google Cloud SDK
  • Configure credentials: gcloud auth login
  • Create a new Google Cloud project and name it couchbase-on-kubernetes
  • Set the project: gcloud config set project couchbase-on-kubernetes
  • Set default zone: gcloud config set compute/zone us-central1-a
  • Create an instance: gcloud compute instances create example-instance --machine-type n1-standard-1 --image debian-8
  • SSH into the instance: gcloud compute ssh example-instance
  • Delete the instance: gcloud compute instances delete example-instance

Setup Kubernetes Cluster on Google Cloud

Kubernetes cluster can be created on Google Cloud as:
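
A sketch, using the stock scripts shipped with the Kubernetes release:

```sh
export KUBERNETES_PROVIDER=gce
cluster/kube-up.sh
```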

Make sure KUBERNETES_PROVIDER is either set to gce or not set at all.

By default, this provisions a 4 node Kubernetes cluster with one master. This means 5 Virtual Machines are created.

If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all the values can be changed in cluster/gce/config-default.sh.

Starting Kubernetes on Google Cloud shows the following log. The Google Cloud SDK was behaving a little weird, but taking the defaults seemed to work:

There are a couple of unbound variables and a WARNING message, but that didn’t seem to break the script.

Google Cloud Console shows:

Google Cloud Compute Instances On Kubernetes Cluster

Five instances are created as shown – one for the master node and four for worker nodes.

Run Docker Container in Kubernetes Cluster on Google Cloud

Now that the cluster is up and running, get a list of all the nodes:

It shows four worker nodes.

Create a Couchbase pod:

Notice how the image name can be specified on the CLI. This command creates a Replication Controller with a single pod. The pod uses the arungupta/couchbase Docker image that provides a pre-configured Couchbase server. Any Docker image can be specified here.

Get all the RC resources:

This shows the Replication Controller that is created for you.

Get all the Pods:

The output shows the Pod that is created as part of the Replication Controller.

Get more details about the Pod:

Expose Pod on Kubernetes as Service

Now that our pod is running, how do I access the Couchbase server?

You need to expose it outside the Kubernetes cluster.

The kubectl expose command takes a pod, service or replication controller and exposes it as a new Kubernetes Service. Let’s expose the previously created replication controller:
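
A sketch, assuming the replication controller is named couchbase and the Web Console port 8091 is the one exposed:

```sh
kubectl expose rc couchbase --port=8091 --target-port=8091 --type=LoadBalancer
```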

Get more details about Service:
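
For example:

```sh
kubectl describe service couchbase
```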

The LoadBalancer Ingress attribute gives you the IP address of the load balancer that is now publicly accessible.

Wait for 3 minutes to let the load balancer settle down. Access it using port 8091 and the login page for Couchbase Web Console shows up:

Google Cloud Kubernetes Couchbase Login Page

Enter the credentials as “Administrator” and “password” to see the Web Console:

Google Cloud Kubernetes Couchbase Web Console

And so you just accessed your pod outside the Kubernetes cluster.

Shutdown Kubernetes Cluster

Finally, shutdown the cluster using cluster/kube-down.sh script.

Enjoy!

Source: http://blog.couchbase.com/2016/march/kubernetes-cluster-google-cloud-expose-service

Kubernetes Cluster on Amazon and Expose Couchbase Service

This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It will use Couchbase, an open source NoSQL distributed document database, as the Docker container.

The first part (Couchbase on Kubernetes) explained how to start the Kubernetes cluster using Vagrant. That is a simple and easy way to develop, test, and deploy Kubernetes cluster on your local machine. But this could be of limited use rather soon as the resources are constrained by the local machine. So what do you do?

A Kubernetes cluster can be installed on Amazon as well. This second part will show:

  • How to set up and start the Kubernetes cluster on Amazon Web Services
  • Run Docker container in the Kubernetes cluster
  • Expose Pod on Kubernetes as Service
  • Shutdown the cluster

Here is a quick overview:

Kubernetes Cluster on Amazon with Couchbase

Let’s dig into the details!

Setup Kubernetes Cluster on Amazon Web Services

Getting Started on AWS EC2 provides complete instructions to start a Kubernetes cluster on Amazon. Make sure to have the prerequisites (AWS account, AWS CLI, full EC2 access) met before you follow these instructions.

Kubernetes cluster can be created on Amazon as:
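
A sketch, again with the stock scripts:

```sh
export KUBERNETES_PROVIDER=aws
cluster/kube-up.sh
```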

By default, this provisions a new VPC and a 4 node Kubernetes cluster in us-west-2a (Oregon) with t2.micro instances running Ubuntu. This means 5 instances (one for the master and 4 for the worker nodes) are created. Some properties that are worth updating:

  • Set the NUM_MINIONS environment variable to the number of worker nodes required in the cluster. Set it to 2 if you want only two worker nodes to be created.
  • The default instance size in 1.1.x is t2.micro. Set the MASTER_SIZE and MINION_SIZE environment variables to m3.medium, otherwise the nodes are going to crawl (see the sample exports after this list).
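
A sketch of those exports, set before running cluster/kube-up.sh:

```sh
export NUM_MINIONS=2
export MASTER_SIZE=m3.medium
export MINION_SIZE=m3.medium
```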

If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all the values can be changed in cluster/aws/config-default.sh.

Starting Kubernetes on Amazon shows the following log: