Monthly Archives: March 2017

Service Discovery with Java and Database application in Kubernetes

This blog will show how a simple Java application can talk to a database using service discovery in Kubernetes.


Service Discovery with Java and Database application in DC/OS explains why service discovery is an important aspect of a multi-container application. That blog also explains how this can be done in DC/OS.

Let’s see how this can be accomplished in Kubernetes with a single instance of an application server and a database server. This blog will use WildFly as the application server and Couchbase as the database.

This blog will use the following main steps:

  • Start Kubernetes one-node cluster
  • Kubernetes application definition
  • Deploy the application
  • Access the application

Start Kubernetes Cluster

Minikube is the easiest way to start a one-node Kubernetes cluster in a VM on your laptop. The binary needs to be downloaded first and then installed.

Complete installation instructions are available at github.com/kubernetes/minikube.

The latest release can be installed on OSX as:
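For example, the minikube README at the time suggested a direct download along these lines (check the releases page for the current URL):

    curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
    chmod +x minikube
    sudo mv minikube /usr/local/bin/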

It also requires kubectl to be installed. Installing and Setting up kubectl provides detailed instructions on how to set up kubectl. On OSX, it can be installed as:
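One option is Homebrew, using the kubernetes-cli formula that provides kubectl:

    brew install kubernetes-cli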

Now, start the cluster as:
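For example:

    minikube start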

The kubectl version command shows more details about the kubectl client and minikube server version:
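For example:

    kubectl version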

More details about the cluster can be obtained using the kubectl cluster-info command:
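For example:

    kubectl cluster-info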

Kubernetes Application Definition

Application definition is defined at github.com/arun-gupta/kubernetes-java-sample/blob/master/service-discovery.yml. It consists of:

  • A Couchbase service
  • A Couchbase replica set with a single pod
  • A WildFly replica set with a single pod

The key part is that the value of the COUCHBASE_URI environment variable is the name of the Couchbase service. This allows the application deployed in WildFly to dynamically discover the service and communicate with the database.
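The relevant fragment of the WildFly pod template looks roughly like this; the container and service names below are illustrative, the exact values are in the linked definition:

    containers:
    - name: wildfly
      image: arungupta/wildfly-couchbase-javaee:travel
      env:
      - name: COUCHBASE_URI
        value: couchbase-service  # logical name of the Couchbase service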

arungupta/couchbase:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/couchbase/Dockerfile.

arungupta/wildfly-couchbase-javaee:travel Docker image is created using github.com/arun-gupta/couchbase-javaee/blob/master/Dockerfile.

The Java EE application waits for database initialization to complete before it starts querying the database. This can be seen at github.com/arun-gupta/couchbase-javaee/blob/master/src/main/java/org/couchbase/sample/javaee/Database.java#L25.

Deploy Application

This application can be deployed as:
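Assuming the definition from the linked repo has been saved locally as service-discovery.yml, the resources can be created with:

    kubectl create -f service-discovery.yml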

The list of services and replica sets can be shown using the command kubectl get svc,rs:

Logs for the single replica of Couchbase can be obtained using the command kubectl logs rs/couchbase-rs:

Logs for the WildFly replica set can be seen using the command kubectl logs rs/wildfly-rs:

Access Application

The kubectl proxy command starts a proxy to the Kubernetes API server. Let’s start a Kubernetes proxy to access our application:
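For example, in a separate terminal:

    kubectl proxy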

Expose the WildFly replica set as a service using:
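A sketch of the expose command, assuming the replica set is named wildfly-rs as in the logs command above and the application listens on port 8080:

    kubectl expose rs wildfly-rs --name=wildfly-service --port=8080 --target-port=8080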

The list of services can be seen again using the kubectl get svc command:

Now, the application is accessible at:

A formatted output looks like:

Now, new pods may be added to the Couchbase service by scaling the replica set. Existing pods may be terminated or rescheduled. But the Java EE application will continue to access the database using the logical service name.

This blog showed how a simple Java application can talk to a database using service discovery in Kubernetes.

For further information check out:

  • Kubernetes Docs
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on Couchbase Forums or Stack Overflow
  • Download Couchbase

Service Discovery with Java and Database application in DC/OS

This blog will show how a simple Java application can talk to a database using service discovery in DC/OS.


Why Service Discovery?

An application typically consists of multiple components such as an application server, a database, a web server, and caching and messaging servers. Multiple replicas of each component run based upon the needs of your application. Deploying this application using a container orchestration framework means that each replica runs as a container. So, an application is typically deployed as a multi-container application.

Each container is assigned a unique IP address for its lifetime. But containers are ephemeral and may be terminated and rescheduled on a different host by the orchestration framework. In that case, the container is typically assigned a different IP address. This means an application deployed in the application server cannot rely upon the IP address of the database. This is where service discovery is required.

So, multiple replicas of a component are assigned a logical name. For example, web for all the application server containers and db for all the database containers. Now, an application can talk to the database containers using the logical service name. This allows the database containers to be rescheduled anywhere in the cluster, and to scale up and down dynamically.

Let’s see how this can be accomplished in DC/OS with a single instance of an application server and a database server. This blog will use WildFly as the application server and Couchbase as the database.

Couchbase Cluster on Mesos with DC/OS provides more details on how to set up a Couchbase cluster on DC/OS.

This blog will use the following main steps:

  • Setup DC/OS Cluster
  • Marathon application definition
  • Deploy the application

The complete source code used in this blog is at github.com/arun-gupta/dcos-java-database.

Many thanks to @unterstein for creating the Maven plugin and helping me understand the inner workings of DC/OS.

Setup DC/OS Cluster

A DC/OS cluster can be easily created using the CloudFormation template. Detailed instructions, including system requirements and setup screenshots, are available at Installing DC/OS on AWS.

CloudFormation output looks as shown:

DC/OS Cluster CloudFormation Output

Note the values shown for the keys DnsAddress and PublicSlaveDnsAddress. The value of the first key can be used to access the DC/OS GUI and looks like:

DC/OS Cluster Console Default Output

Configure the DC/OS CLI as explained in CLI. In short, the following commands are used:

  • dcos config set core.dcos_url http://${DnsAddress} Replace ${DnsAddress} with the corresponding value from the CloudFormation output.
  • dcos auth login
  • dcos config show core.dcos_acs_token. If not already done, clone the repo from github.com/arun-gupta/dcos-java-database. Create a new file .dcos-token and copy the output of this command into that file.
  • dcos package install marathon-lb

Marathon Application Definition

The Marathon framework is used to schedule containers in DC/OS. A Marathon application can be defined by providing an application definition.

As mentioned earlier, this blog shows how a simple Java application can talk to a database. We’ll use a Java EE application deployed in WildFly and Couchbase as the database. The application definition looks like:

What are the key points in this application definition?

  • The application has two containers: database and web. The web container has a dependency on the database container defined using the dependencies attribute.
  • The database container uses the arungupta/couchbase:travel Docker image. This image is created from github.com/arun-gupta/couchbase-javaee/tree/master/couchbase. It uses the Couchbase base image and the Couchbase REST API to pre-configure the database. A sample bucket is also loaded into the database.
  • The web container uses the arungupta/wildfly-couchbase-javaee:travel image. This image is created from github.com/arun-gupta/couchbase-javaee/blob/master/Dockerfile. This is a Java EE 7 application bundled in WildFly. The app uses the COUCHBASE_URI environment variable to connect to the Couchbase database. The value of this environment variable is configured to use DNS service discovery and is derived as explained in Virtual Networks.

Make sure to change the value of HAPROXY_0_VHOST to match the value of ${PublicSlaveDnsAddress} from the CloudFormation output. The label HAPROXY_0_VHOST instructs Marathon-LB to expose the Docker container, the WildFly application server in our case, on the external load balancer with a virtual host. The 0 in the label key corresponds to the servicePort index, beginning from 0. If you had multiple servicePort definitions, you would iterate them as 0, 1, 2, and so on. Deploying an internally and externally load-balanced app with marathon-lb provides more details about how to configure marathon-lb.

Service Discovery and Load Balancing provides more details about service discovery and load balancing in DC/OS.

Deploy the Application using Maven

The application can be deployed using dcos-maven-plugin.

The plugin configuration looks like:

Main points in this fragment are:

  • The plugin version is 0.2, which indicates that the plugin is still in the early stages of development.
  • dcosUrl is the value of ${DnsAddress} key from CloudFormation output. This address is used for deployment of the application.
  • The <deployable> element enables different types of deployment – app, group or pods. This element is a hint for the plugin and should likely go away in a future version as the Marathon API consolidates. Follow #11 for more details.

Other details about the plugin and its configuration are at dcos-maven-plugin.

Deploy the application:

The following output is shown:

Here is some of the updated output from the DC/OS console.

First updated Services tab:
DC/OS Cluster Web Application

Two applications in the service:
DC/OS Cluster Web Application

Database application has one task:
DC/OS Cluster Web Application

Status of the database task:
Database Service Discovery

Logs from the database task:
DC/OS Cluster Web Application

It shows the output from the Couchbase REST API for configuring the server.

Status of web task:
DC/OS Cluster Web Application

Logs from web task:
DC/OS Cluster WildFly Output

It shows that the Java EE application is deployed successfully.

Access the application:

The address is the value of the key ${PublicSlaveDnsAddress} from CloudFormation output. A formatted output, for example with jq, looks like:

That’s it!

As mentioned earlier, the complete source code used in this blog is at github.com/arun-gupta/dcos-java-database.

This blog showed how a simple Java application can talk to a database using service discovery in DC/OS.

For further information check out:

  • DC/OS Docs
  • Couchbase on Containers
  • Couchbase Developer Portal
  • Ask questions on Couchbase Forums or Stack Overflow
  • Download Couchbase

Source: blog.couchbase.com/service-discovery-java-database-dcos/

NoSQL Simplifies Database DevOps

Does your organization want to simplify Database DevOps?
Is your database becoming a bottleneck to innovate rapidly?
Do you want to save millions of $$ in database licensing costs?

Read on!

State of Database DevOps

State of Database DevOps is a survey on DevOps adoption rates among SQL Server professionals. Over 1000 SQL Server professionals responded to the survey. The respondents came from across the globe and represent a wide range of job roles, company sizes and industries.

There are some good findings in the survey results. A few worth highlighting here:

the greatest challenge with integrating database changes into a DevOps process would still be synchronizing application and database changes

Another one …

The greatest drawback identified with traditional siloed database development is the increased risk of failed deployments or downtime when introducing changes. This is closely followed by slow development and release cycles and the inability to respond quickly to changing business requirements

And another one …

Increasing the speed of delivery of database changes and freeing developers up to do more value added work are the key drivers for automating the delivery of database changes

The challenges highlighted here are not specific to SQL Server; they apply to any relational database. You may be using Oracle, Postgres, MySQL, MariaDB or any other relational database and would very much be facing these issues. Why?

Why is Relational not well suited for Database DevOps?

It’s common for an application to operate on data from multiple tables in an RDBMS. For example, placing an order may use the Customer, Order and Product tables. Each table has multiple columns with standard data types specific to the database. Tables may have primary key, reference and foreign key constraints. Developers building applications using a relational database typically use an Object Relational Mapper (ORM), such as Hibernate or the Java Persistence API for Java developers. There are similar ORMs for other languages as well. ORMs capture the underlying complex database structure and allow programmers to build applications naturally using their language.

ORMs also use a persistence provider, which allows your application to be independent of the underlying database. This persistence provider creates a binding between the language-specific classes and the database structure. For example, it maps a class to a table or multiple tables, binds the language data types to the types defined in the database, and captures the relationships between tables. Theoretically, a programmer can use a different persistence provider to use a different RDBMS for the application. But this is far from a practical reality!

Any database change requires the ORM classes to be updated, otherwise the application may not work. For example, adding a new table may mean a new Java class or updating an existing class. A change of a column’s data type requires the class definition to be updated, otherwise the application will not even compile. Adding a new column means adding a new field in the class. Any change requires the classes to be updated and the application to be repackaged.

Changes in database structure are required all the time to meet the evolving needs of the business. But if the DBAs make a database change and the ORM classes are not updated, then there is a disconnect. Application deployment needs to be coordinated with updating the database schema. There are tools like Flyway, Liquibase and others that integrate application and database deployment. But developers are often not allowed to make any direct changes to the production database. A disconnect results in your application not working and the business suffering. DevOps practices can definitely help solve these issues as they require close collaboration between the developers who are building applications and the DBAs who are updating database scripts.

But as the survey reports, more than 50% of the respondents do not have DevOps adopted today.

Database DevOps Adoption

There are challenges even if you were to integrate database changes into a DevOps process.

Database DevOps Challenges

Synchronizing application and database changes, where the ORM classes need to be kept in sync with the backend database structure, is the biggest challenge. DBAs may want to structure the database in a certain way which may not be optimal for application development. Applying consistency across application and database development is the next major challenge for ensuring seamless database DevOps.

Siloed development seriously impairs your ability to rapidly innovate and deliver value to your business.

Database DevOps Drawbacks

As shown in this image, failed deployments when introducing changes, slow development/release cycles and inability to respond to business needs account for over 60% of the drawbacks.

Speed of delivery of database changes is the biggest concern for database DevOps.

Database DevOps Driver

So what do you do?

How does NoSQL simplify Database DevOps?

A NoSQL document database helps to simplify database DevOps!

How does NoSQL simplify database DevOps?

  • Schema flexibility – Developers need a single database that can store rapidly changing structured, semi-structured and unstructured data. A NoSQL document database offers schema flexibility by allowing developers to operate directly on JSON data and derive meaning from it
  • No impedance mismatch – With no ORM for the application, there is no impedance mismatch between domain classes and the database structure. Only the application code needs to be updated, and no coordination with schema changes is required
  • Scalability – One of the drawbacks mentioned in the report is the inability to adapt to changing business requirements. This highlights scalability as a major DevOps challenge. If the volume of data, the number of queries, or the types of indexes required to support the application changes, the database needs to change to accommodate those changes. Not in weeks or months, but today! NoSQL databases run on commodity hardware and have a scale-out architecture, as opposed to the scale-up architecture of an RDBMS. Sharding can help with scalability in an RDBMS, but that’s extra complexity that now needs to be dealt with.

Learn more about why enterprises move to NoSQL.

Which NoSQL database is preferred by GE, Marriott, Verizon, United, LinkedIn, DIRECTV and many others?

What are some other advantages of Couchbase?

  • Homogeneous distributed architecture – no master/slave
  • SQL-like query language to query JSON documents
  • Auto-sharding using vBuckets
  • Memory-first architecture reduces the need for an additional caching layer
  • Cross-data center replication
  • Multiple SDKs
  • Database on server or mobile device, with a complete synchronization between them
  • Different container orchestration frameworks

NoSQL is not a panacea by any means. If you are building a system that needs complex transaction logic or real-time data warehousing, then an RDBMS may be a better fit. However, NoSQL addresses your scalability and agility concerns and simplifies database DevOps.

Here is a great video on migrating from relational database to NoSQL:

Here is another interesting video that shows why Marriott transitioned from Relational to NoSQL:

A lot more videos are available on Couchbase Connect 2016.

And some more relevant blogs:

  • Making the Shift from Relational to NoSQL: How to Get Started (whitepaper)
  • Moving from Oracle database to Couchbase
  • Moving from MySQL to Couchbase
  • Moving from SQL Server to Couchbase – Part 1 (data modeling), Part 2 (data migration), Part 3 (coming)
  • Moving from MongoDB to Couchbase
  • Migrate your MongoDB with Mongoose REST API to Couchbase with Ottoman
  • Migrating from Relational Database to Couchbase
  • Top 5 reasons companies replace Oracle with Couchbase

Talk to us:

  • Couchbase Developer Portal
  • Couchbase Forums
  • @couchbasedev and @couchbase

Source: blog.couchbase.com/nosql-simplifies-database-devops/

Stateful Containers using Portworx and Couchbase


Containers are meant to be ephemeral and so scale pretty well for stateless applications. Stateful containers, such as Couchbase, need to be treated differently. Managing Persistence for Docker Containers provides a great overview of how to manage persistence for stateful containers.

This blog will explain how to use Docker Volume Plugins and Portworx to create a stateful container.

Why Portworx?

Portworx is an easy-to-deploy container data service that provides persistence, replication, snapshots, encryption, secure RBAC and much more. Some of the benefits are:

  1. Container granular volumes – Portworx can take multiple EBS volumes per host, aggregate the capacity, and derive container-granular virtual (soft) volumes per container.
  2. Cross Availability Zone HA – Portworx protects the data, at the block level, across multiple compute instances and availability zones. As replication controllers restart pods on different nodes, the data will still be highly available on those nodes.
  3. Support for enterprise data operations – Portworx implements container granular snapshots, class of service, tiering on top of the available physical volumes.
  4. Ease of deployment and provisioning – Portworx itself is deployed as a container and integrated with the orchestration tools. DevOps can programmatically provision container granular storage with any property such as size, class of service, encryption key etc.

Setup AWS EC2 Instance

Portworx runs only on Linux or CoreOS. Set up an Ubuntu instance on AWS EC2:

  1. Start an Ubuntu 14.04 instance with the m3.medium instance type. Make sure to add port 8091 to the inbound security rules. This allows the Couchbase Web Console to be accessible afterwards.
  2. Login to the EC2 instance using the command: ssh -i ~/.ssh/arun-cb-west1.pem ubuntu@<public-ip>
  3. Update the Ubuntu instance: sudo apt-get update
  4. Install Docker: curl -sSL https://get.docker.com/ | sh. More detailed instructions are available at Get Docker for Ubuntu.
  5. Enable non-root access for the docker command: sudo usermod -aG docker ubuntu
  6. Logout from the EC2 instance and log back in

Create AWS EBS Volume

  1. Create a 10 GB EBS volume using the EC2 console as explained in the docs.
  2. Get the instance id from the EC2 console. Attach this volume to the EC2 instance using this instance id, and use the default device name /dev/sdf.
    Portworx EC2 Create Volume
  3. Use the lsblk command in the EC2 instance to verify that the volume is attached to the instance:

Portworx Container

  1. The physical storage makeup of each node, all the provisioned volumes in the cluster, and their container mappings are stored in an etcd cluster. Start an etcd cluster (a command sketch for steps 1–3 follows this list):
  2. By default, root mounted volumes are not allowed to be shared. Enable this using the command:
    This is explained more at Ubuntu Configuration and Shared Mounts.
  3. Running the PX-Developer (px-dev) container on a server with Docker Engine turns that server into a scale-out storage node. PX-Enterprise, on the other hand, provides multi-cluster and multi-cloud support, where the storage under management can be on-premise or in a public cloud like AWS.
    For this blog, we’ll start a px-dev container:
    Complete details about this command are available at Run PX with Docker.
  4. Look for logs using docker container logs -f px and watch out for the following statements:
  5. Check the status of attached volumes that are available to Portworx using sudo /opt/pwx/bin/pxctl status to see the output:
    It shows the total capacity available and used.
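A rough command sketch for steps 1–3 above, based on the etcd and Portworx px-dev documentation of that era; the etcd image tag, cluster name, etcd address and the exact px-dev flags and bind mounts are assumptions and may differ from the original post:

    # Step 1: single-node etcd for the Portworx key-value store
    docker run -d --name etcd -p 2379:2379 \
      quay.io/coreos/etcd:v2.3.7 \
      --advertise-client-urls http://0.0.0.0:2379 \
      --listen-client-urls http://0.0.0.0:2379

    # Step 2: allow root-mounted volumes to be shared (Ubuntu)
    sudo mount --make-shared /

    # Step 3: start the px-dev container; -k points at the etcd cluster,
    # -c is a cluster name of your choice, -s is the EBS device attached earlier
    docker run -d --name px --net=host --privileged \
      -v /run/docker/plugins:/run/docker/plugins \
      -v /var/lib/osd:/var/lib/osd:shared \
      -v /dev:/dev \
      -v /etc/pwx:/etc/pwx \
      -v /opt/pwx/bin:/export_bin:shared \
      -v /var/run/docker.sock:/var/run/docker.sock \
      portworx/px-dev -daemon \
      -k etcd://<etcd-host>:2379 -c my-px-cluster -s /dev/xvdf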

Docker Volume

  1. Let’s create a Docker volume (a command sketch follows this list):
    More details about this command are at Create Volumes with Docker.
  2. Check the list of volumes available using docker volume ls command:
    As shown, cbvol is created with the pxd driver.
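A sketch of the volume creation for step 1, assuming the pxd driver’s size option; replication and other options are omitted:

    docker volume create -d pxd --opt size=10G cbvol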

Couchbase with Portworx Volume

  1. Create a Couchbase container using the Portworx volume (a command sketch for steps 1, 5 and 6 follows this list):
    Notice how /opt/couchbase/var, where all Couchbase data is stored in the container, is mapped to the cbvol volume on the host. This volume is managed by Portworx.
  2. Log in to the Couchbase Web Console at http://<public-ip>:8091 with Administrator as the login and password as the password.
  3. Go to Data Buckets and create a new data bucket pwx:
    Couchbase Bucket with Portworx
  4. In the EC2 instance, see the list of containers:
    The etcd, px-dev and db containers are running.
  5. Kill the db container:
  6. Restart the database container as:
    Now, because cbvol is mapped to /opt/couchbase/var again, the data is preserved across restarts. This can be verified by accessing the Couchbase Web Console and checking the pwx bucket created earlier.
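A sketch of the commands for steps 1, 5 and 6; the arungupta/couchbase image matches the one used elsewhere in this post, but the exact image name and port list in the original may differ:

    # Step 1: map the Couchbase data directory onto the Portworx-backed volume
    docker run -d --name db \
      -v cbvol:/opt/couchbase/var \
      -p 8091-8094:8091-8094 -p 11210:11210 \
      arungupta/couchbase

    # Step 5: kill the container; cbvol and the data on it survive
    docker rm -f db

    # Step 6: restart with the same volume mapping; the pwx bucket is still there
    docker run -d --name db \
      -v cbvol:/opt/couchbase/var \
      -p 8091-8094:8091-8094 -p 11210:11210 \
      arungupta/couchbase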

Another interesting perspective is at Why Databases Are Not for Containers. Just because Docker exists does not mean all your database needs should be Dockerized. But if you need to, then there are plenty of options that can be used in production-grade applications.

Want to learn more about running Couchbase in containers?

  • Couchbase on Containers
  • Couchbase Developer Portal
  • @couchbasedev and @couchbase

Source: blog.couchbase.com/stateful-docker-containers-portworx-couchbase/

Start Couchbase Using Docker Compose

Couchbase Forums has a question Can’t use N1QL on docker-compose. This blog will show how to run Couchbase using Docker Compose and run an N1QL query.


What is Docker Compose?

Docker Compose allows you to define a multi-container application with all of its dependencies in a single file, then spin the application up with a single command.

Docker Compose introduced the v3 file format with Docker 1.13. How do you know what version of Docker you are running?

The docker version command gives you that information:

Couchbase Docker Compose File

Now, if you are running this version of Docker, then you can use the following Compose file:
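The original file is not reproduced here, but a minimal sketch consistent with the points below looks like this; the service name db is an assumption that matches the couchbase_db service seen later in the swarm-mode section:

    version: "3"
    services:
      db:
        image: arungupta/couchbase
        ports:
          - 8091:8091
          - 8092:8092
          - 8093:8093
          - 8094:8094
          - 11210:11210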

In this Compose file:

  • The v3 version of the Compose file format is used. If you are using an older version of Docker, then you can use the v2 format instead.
  • The arungupta/couchbase Docker image is used to start the Couchbase server. This image is created as explained at github.com/arun-gupta/docker-images/tree/master/couchbase. It uses the Couchbase REST API to pre-configure the Couchbase server.
  • Ports 8091, 8092, 8093, 8094 and 11210 are exposed.
  • Only a single replica of Couchbase server is started.

Couchbase can be started in a couple of ways using this Compose file.

Couchbase using Docker Compose on Single Docker Host

If you want to start Couchbase on a single host (such as one provisioned by Docker for Mac or a single Docker Machine), then use the command:
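Assuming the Compose file above is saved as docker-compose.yml in the current directory, this would be:

    docker-compose up -d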

This shows a warning message but starts the Couchbase server:

Check the status of the started service using the command docker-compose ps:

All the exposed ports are shown and Couchbase is accessible at http://localhost:8091. Use the credentials Administrator/password to access the web console.

Now you can create buckets, connect from cbq and run N1QL queries. For example:
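As one illustration (not from the original post), a query can also be sent to the N1QL REST endpoint that the query service exposes on port 8093; depending on the server configuration you may need to pass credentials with -u Administrator:password:

    curl -s http://localhost:8093/query/service \
      -d 'statement=SELECT * FROM system:keyspaces'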

Typically, you may be able to scale the services started by Docker Compose using the docker-compose scale command. But this will not be possible in our case because the ports are exposed; scaling the service would cause a port conflict.

The container can be brought down using the command docker-compose down.

Couchbase using Docker Compose on Multi-host Swarm-mode Cluster

Docker allows multiple hosts to be configured in a cluster using Swarm-mode. This can be configured using the command docker swarm init.

Once the cluster is initialized, the same Compose file can be used to start the application:
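Assuming the file above is saved as docker-compose.yml and the stack is named couchbase (which yields the couchbase_db service referenced below):

    docker stack deploy --compose-file=docker-compose.yml couchbase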

It shows the output:

This creates a Docker service and the status can be seen using the command docker service ls:

Check the tasks/containers running inside the service using the command docker service ps couchbase_db:

Here again, you can connect to the Couchbase server and run N1QL queries:

The service, and thus the container running in the service, can be terminated using the command docker service rm couchbase_db.

Any more questions? Catch us on Couchbase Forums.

You may also consider running a Couchbase cluster using Docker or read more about Deploying Docker Services to Swarm.

Want to learn more about running Couchbase in containers?

  • Couchbase on Containers
  • Couchbase Developer Portal
  • @couchbasedev and @couchbase

Source: blog.couchbase.com/couchbase-using-docker-compose/