All posts by arungupta

Multi-container Applications using Docker Compose and Swarm

Docker Compose to Orchestrate Containers shows how to run two linked Docker containers using Docker Compose. Clustering Using Docker Swarm shows how to configure a Docker Swarm cluster.

This blog will show how to run a multi-container application created using Docker Compose in a Docker Swarm cluster.

Updated versions of Docker Compose and Docker Swarm were released along with Docker 1.7.0.

Docker 1.7.0 CLI

Get the latest Docker CLI:

and check the version as:
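A sketch of both steps (the download URL follows the get.docker.com builds layout of that era and is an assumption; adjust the OS/architecture segment for your platform):

```shell
# Download the Docker 1.7.0 client binary (URL pattern is illustrative)
curl -L https://get.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
chmod +x /usr/local/bin/docker

# Verify the client version
docker version
```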

Docker Machine 0.3.0

Get the latest Docker Machine as:

and check the version as:
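Both steps can be sketched as follows (the binary name is illustrative; pick the release asset matching your OS/arch from the GitHub releases page):

```shell
# Download Docker Machine 0.3.0 from the GitHub releases page
curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_darwin-amd64 > /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine

# Verify the version
docker-machine -v
```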

Docker Compose 1.3.0

Get the latest Docker Compose as:

and verify the version as:
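A sketch of the install, following the official release URL pattern:

```shell
# Install Docker Compose 1.3.0 for the current OS and architecture
curl -L https://github.com/docker/compose/releases/download/1.3.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Verify the version
docker-compose --version
```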

Docker Swarm 0.3.0

Swarm is run as a Docker container and can be downloaded as:
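For example:

```shell
# Pull the official Swarm image from Docker Hub
docker pull swarm
```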

You can learn about Docker Swarm at docs.docker.com/swarm or Clustering using Docker Swarm.

Create Docker Swarm Cluster

The key components of Docker Swarm are shown below:

and explained in Clustering Using Docker Swarm.

  1. The easiest way of getting started with Swarm is by using the official Docker image:
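A sketch of the command; the last line it prints is the token:

```shell
# Request a new cluster id from the hosted discovery service on Docker Hub
docker run --rm swarm create
```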

    This command returns a discovery token, referred to as <TOKEN> in this document, which serves as the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    The last line is the <TOKEN>.

    Make sure to note this cluster id now, as there is no way to list it later. This should be fixed with #661.

  2. Swarm is fully integrated with Docker Machine, which makes that the easiest way to get started. Let’s create a Swarm Master next:
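A sketch of the command, assuming the virtualbox driver:

```shell
# Create a machine that runs as the Swarm master;
# replace <TOKEN> with the cluster id from the previous step
docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-master \
  --swarm-discovery token://<TOKEN> \
  swarm-master
```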

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, --swarm-master configures the created machine to be Swarm master. Swarm master creation talks to the hosted service on Docker Hub and informs that a master is created in the cluster.

  3. Connect to this newly created master and find some more information about it:
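For example:

```shell
# Point the Docker client at the master and inspect it
eval "$(docker-machine env swarm-master)"
docker info
```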

    This will show the output as:

  4. Create a Swarm node
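A sketch of the node-creation command, again assuming the virtualbox driver:

```shell
# Create the first node and join it to the cluster
docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-discovery token://<TOKEN> \
  swarm-node-01
```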

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified by --swarm-discovery token://... and specifying the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:
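For example:

```shell
# Create a second node and join it to the same cluster
docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-discovery token://<TOKEN> \
  swarm-node-02
```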

    Replace <TOKEN> with the cluster id obtained in the previous step.

  6. List all the nodes created so far:

    This shows the output similar to the one below:
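For example (the output is illustrative; the IP addresses, and machines other than the Swarm ones, will differ on your setup):

```shell
docker-machine ls
# NAME            ACTIVE   DRIVER       STATE     URL                         SWARM
# lab                      virtualbox   Running   tcp://192.168.99.100:2376
# summit2015               virtualbox   Running   tcp://192.168.99.101:2376
# swarm-master    *        virtualbox   Running   tcp://192.168.99.102:2376   swarm-master (master)
# swarm-node-01            virtualbox   Running   tcp://192.168.99.103:2376   swarm-master
# swarm-node-02            virtualbox   Running   tcp://192.168.99.104:2376   swarm-master
```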

    The machines that are part of the cluster have the cluster’s name in the SWARM column, which is blank otherwise. For example, “lab” and “summit2015” are standalone machines, whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    This shows the output as:
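A sketch of the command and an abbreviated, illustrative output (IP addresses will differ):

```shell
# The --swarm flag returns the environment for the whole cluster
eval "$(docker-machine env --swarm swarm-master)"
docker info
# Containers: 4
# Nodes: 3
#  swarm-master: 192.168.99.102:2376
#  swarm-node-01: 192.168.99.103:2376
#  swarm-node-02: 192.168.99.104:2376
```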

    There are 3 nodes – one Swarm master and 2 Swarm nodes. A total of 4 containers are running in this cluster – one Swarm agent on the master and on each node, plus an additional swarm-agent-master running on the master.

  8. List nodes in the cluster with the following command:

    This shows the output as:
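A sketch of the command and an illustrative output (the addresses will match the machines created earlier):

```shell
# Query the hosted discovery service for nodes in this cluster
docker run --rm swarm list token://<TOKEN>
# 192.168.99.102:2376
# 192.168.99.103:2376
# 192.168.99.104:2376
```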

Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Docker Compose to Orchestrate Containers explains how multi container applications can be easily started using Docker Compose.

  1. Use the docker-compose.yml file explained in that blog to start the containers as:

    The docker-compose.yml file looks like:
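A sketch of that file – it defines a MySQL container and a WildFly container linked to it (the image names and environment values are assumptions based on that blog series; verify against the original):

```yaml
mysqldb:
  image: mysql:latest
  environment:
    MYSQL_DATABASE: sample
    MYSQL_USER: mysql
    MYSQL_PASSWORD: mysql
    MYSQL_ROOT_PASSWORD: supersecret
mywildfly:
  image: arungupta/wildfly-mysql-javaee7
  links:
    - mysqldb:db
  ports:
    - 8080:8080
```

The containers are then started with `docker-compose up -d`.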
  2. Check the containers running in the cluster as:
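For example:

```shell
# With the client still pointed at the Swarm cluster,
# list containers across all nodes
docker ps
```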

    to see the output as:
  3. “swarm-node-02” is running three containers, so let’s look at the list of containers running there:
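For example:

```shell
# Point the client at that specific node and list its containers
eval "$(docker-machine env swarm-node-02)"
docker ps
```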

    and see the list of running containers as:
  4. Application can then be accessed again using:
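A sketch of the request (the port and path are assumptions based on the sample application used in that blog series):

```shell
curl http://$(docker-machine ip swarm-node-02):8080/employees/resources/employees/
```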

    and shows the output as:

Latest instructions for this setup are always available at: github.com/javaee-samples/docker-java/blob/master/chapters/docker-swarm.adoc.

Enjoy!

Microservices and DevOps Journey at Wix

Wix.com started their DevOps and microservices journey about two years ago and recently switched from a monolithic application to a microservices-based application. Yes, it took them a full two years to complete the transition from monolith to microservices!

I got connected with Aviran Mordo (@aviranm), head of backend engineering at Wix, on Twitter.

They migrated to microservices because the “system could not scale” and the requirements for functional components were varied. The journey took them from a WAR-based deployment on Tomcat to fat JARs with embedded Jetty. On a side note, take a look at WildFly Swarm if you are interested in a similar approach for Java EE applications.

Video Interview

I discussed this journey with him, and you can watch the interview too.

In this discussion, you’ll learn:

  • Why Continuous Delivery and DevOps are important requirements for microservices
  • How they migrated from a big monolith to smaller monoliths and then a full-blown microservices architecture
  • How database referential integrity constraints were moved from the database to the application
  • “micro” in microservices refers to the area of responsibility, and has nothing to do with lines of code
  • Neither REST nor messaging was used for communication between different services. Which protocol was used? JSON-RPC
  • How do services register and discover each other? Is that required during early phases?
  • Why YAGNI and KISS are important
  • Chef for configuration management and how to make it accessible for massive deployments
  • TeamCity for CI
  • Is 100% automation a requirement? Can 100% automation be achieved? Learn about Petri, Wix’s open source framework for A/B testing
  • Relevance of hybrid cloud (Google, Amazon, Private data center) and redundancy
  • Hardest part of migrating from monolith to microservice
  • How much code was repurposed during refactoring?
  • Where was the most effort spent during the two years of migration?
  • Distributed transactions
  • What was the biggest challenge in DevOps journey? Look out for a nice story towards the end that could be motivating for your team as well 😉

Additional Material

Watch the slides from DevoxxUK:

You can also learn more about their architecture in Scaling Wix to 60m Users.

Enjoy!

DevNation and Red Hat Summit 2015 Wrapup


Red Hat Summit and DevNation is a wrap!

It took two full nights of sleep and a long afternoon nap to fully recover from the excitement, stimulation, and exhaustion that set in after meeting awesome developers, customers, partners, colleagues, and geeks from around the world. The fact that I gave four talks and one hands-on lab, participated in two panels, ran a Devoxx4Kids event, talked to a lot of analysts, did a book signing, attended breakfasts/lunches/dinners/receptions, and ran every morning by the Charles River – all within 6 days – added to the exhaustion as well 😉

In the end, it was very rewarding and inspiring to see the work others are doing!

Complete set of slides are available at redhat.com/summit/2015/presentations. Here are links to the slides from my sessions:

  • Docker Recipes for Java developers – 11381
  • DevOps with Java EE – 11525
  • Docker for Java developers – 11536 (lab)
  • Use OpenShift & PaaS to accelerate DevOps & continuous delivery – 12218
  • How DevOps and microservices impact your application architecture and development – 13639

Watch the middleware keynote by Craig Muzilla:

Watch Burr Sutter’s geek show starting at ~19:00.

Learn/understand more about our middleware offerings using Accelerate, Integrate, and Automate.

Some pictures from the event …


Complete album …

Here are some other photo albums:

  • Official DevNation photo album
  • Red Hatters at Summit
  • Pictures from one of the Red Hat Summit official photographers

Enjoy!

 

Minecon 2015 – Minecraft Modding Workshop and Education Panel

The Devoxx4Kids Minecraft Modding workshop has been used all around the world as kids’ first introduction to Java programming – often their first introduction to programming at all. This workshop has been delivered at our meetups, at corporate events such as OSCON, Silicon Valley Code Camp, JavaOne, and Red Hat Summit, and at schools, libraries, workplaces, homes, and many other venues. And now it’s going to the most coveted and sought-after place for all minecrafters around the world – Minecon 2015!

Many thanks to Microsoft (now the parent company of Mojang) for extending the invitation!

Minecon is the biggest gathering of all minecrafters on this planet. About 10,000 attendees are expected to gather on Jul 4th and 5th in London this year. This will be our first time there. We are super excited and can’t wait to experience the phenomenon.

In addition, Microsoft has also asked us to lead a panel on “Using Mods for Teaching”. This panel will discuss the relevance of modding in education. We’ll try to answer questions such as:

  • Can modding be the right medium for first introduction to programming?
  • What is an appropriate age to start modding?
  • What would you like from Mojang to improve the modding experience?
  • What can be done to fundamentally change STEM education in schools?

Participate in Minecon 2015 Virtually

You can participate in Minecon virtually as well!

Are there any questions that you’d like to ask the panelists? Leave a comment in the blog and I’ll try to accommodate as many of them as possible.

As always, follow me @arungupta to watch out for comments/pictures from the event!

Devoxx4Kids Ancillary Event at Corporate

Running a Devoxx4Kids event as an ancillary to a main event is a great way to engage the local community. Such an event can consist of not just Minecraft modding, but several other topics such as Scratch, Greenfoot, and HTML5/CSS. Let us know if your corporate event is interested in running a Devoxx4Kids workshop the weekend before/after the main event!

Devoxx4Kids at Red Hat Summit 2015


~70 kids sprawled across a part of the Hynes Convention Center in Boston last Sunday. They were invited by Red Hat to participate in a variety of workshops that introduced them to technology in a fun way. The workshops were facilitated by Devoxx4Kids instructors and Red Hatters. This was right before the start of Red Hat Summit – a premier open source technology conference for the “big kids”.

The kids had an opportunity to learn the following technologies:

  • MIT App Inventor
  • Greenfoot
  • Minecraft Modding
  • Dr Racket
  • Scratch
  • Flappy Birds

The feedback from all the kids and parents has been very encouraging, and everybody seemed to have a great time. Here are some pictures from the event:


Complete set of pictures are:

Number of emails exchanged with the team: Hundreds
Support from the team/instructors/volunteers to run the event: Invaluable
Flight got delayed, stayed up until 3:30 ensuring machines work: Tiring
Kids enjoyed learning how to build the games: Priceless

Overall, it has been quite a joyful and rewarding experience to promote STEM to elementary and middle school kids!

Running a Devoxx4Kids event as an ancillary to the main event is a great way to engage the local community. Red Hat Summit is the fourth corporate event in the US (after OSCON for the 2nd year, JavaOne for the 2nd year, and Silicon Valley Code Camp) to partner with Devoxx4Kids and give back to the community. Let us know if your corporate event is interested in running a Devoxx4Kids workshop the weekend before/after the main event!

 

Docker 1.7.0, Docker Machine 0.3.0, Docker Compose 1.3.0, Docker Swarm 0.3.0

Docker 1.7.0 is released (change log) and so time to update Docker Hosts, CLI, and other tools.

Docker 1.7.0

The Docker Host runs inside a Docker Machine, and so the machine itself needs to be upgraded. The machine must not be stopped, otherwise you get an error:

So start the machine as:

And then upgrade the machine as:
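Both steps can be sketched as follows (“lab” is the machine name used elsewhere in this blog; substitute your own):

```shell
# The machine must be running before it can be upgraded
docker-machine start lab

# Upgrade the machine to the latest boot2docker.iso
docker-machine upgrade lab
```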

The machine is stopped anyway to perform the upgrade, so the need to start it first seems superfluous (#1399).

Upgrading the host updates .docker/machine/cache/boot2docker.iso. Any previously created machines cache the boot2docker.iso in .docker/machine/machines/<MACHINE-NAME> and so they’ll continue to boot using the same version.

Docker CLI

Update the Docker CLI as:
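A sketch of the update, reusing the same illustrative download URL pattern as before (adjust for your platform):

```shell
# Replace the existing client binary with the latest build
curl -L https://get.docker.com/builds/Darwin/x86_64/docker-latest > /usr/local/bin/docker
chmod +x /usr/local/bin/docker
```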

Now docker version shows the following output:

Note that both the client version (1.7.0) and the server version (1.7.0) are shown here.

If you update only the CLI and not the Docker Host, then the following error message is shown:

This error message shows a version mismatch between the client CLI and the Docker Host running in the machine. This will typically happen if the active machine was created a few days ago using an older boot2docker.iso. There seems to be no straightforward way to find out the exact version currently being used (#1398).

There seems to be no way for a new client to talk to the old server (#14077), and thus the host needs to be upgraded. There is a proposal to override the API version of the client (#11486), but at this time there is no ETA for the fix. So the only option is to upgrade the Docker Machine, which will then upgrade to the latest version of Docker.

So upgrading the CLI requires upgrading the machine as well.

Here are the options supported by the Docker CLI:

Docker Machine 0.3.0

This was rather straightforward:
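A sketch of the update (release asset name is illustrative; pick the one for your OS/arch):

```shell
# Replace the docker-machine binary with the 0.3.0 release
curl -L https://github.com/docker/machine/releases/download/v0.3.0/docker-machine_darwin-amd64 > /usr/local/bin/docker-machine
chmod +x /usr/local/bin/docker-machine
```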

There are lots of new features, including an experimental provisioner for Red Hat Enterprise Linux 7.0.

The version is shown as:

The complete list of commands is:

Docker Compose 1.3.0

Docker Compose can be updated to 1.3.0 as:
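A sketch of the update, using the official release URL pattern:

```shell
# Replace the docker-compose binary with the 1.3.0 release
curl -L https://github.com/docker/compose/releases/download/1.3.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

docker-compose --version
```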

The version is shown as:

Two important points to note:

  • At least Docker 1.6.0 is required
  • There are breaking changes from Compose 1.2, so you either need to remove and recreate your containers, or migrate them. Fortunately, docker-compose migrate-to-labels can be used to migrate pre-Compose 1.3.0 containers to the latest format. This will recreate the containers with labels added.

Learn more in Docker Compose to Orchestrate Containers.

Docker Swarm 0.3.0

As of this blog, Docker Swarm 0.3.0 RC3 is available. Clustering Using Docker Swarm provides a good introduction to Docker Swarm and can be used to get started with the latest Docker Swarm release.

34 issues have been fixed since 0.2.0, but the commit notifications for the Release Candidates since 0.2.0 seem to show no significant changes.

More detailed blogs on each Docker component will be shared in subsequent blogs.

Enjoy!

ZooKeeper for Microservice Registration and Discovery

In a microservices world, multiple services are typically distributed in a PaaS environment. Immutable infrastructure is provided by containers or immutable VM images. Services may scale up and down based upon certain pre-defined metrics. The exact address of a service may not be known until the service is deployed and ready to be used.

This dynamic nature of the service endpoint address is handled by service registration and discovery. Each service registers with a broker and provides details about itself, such as its endpoint address. Consumer services then query the broker to find out the location of a service and invoke it. There are several ways to register and query services, such as ZooKeeper, etcd, Consul, Kubernetes, Netflix Eureka, and others.

Monolithic to Microservice Refactoring showed how to refactor an existing monolith to a microservice-based application. User, Catalog, and Order service URIs were defined statically. This blog will show how to register and discover microservices using ZooKeeper.

Many thanks to Ioannis Canellos (@iocanel) for all the ZooKeeper hacking!

What is ZooKeeper?

ZooKeeper is an Apache project that provides a distributed, eventually consistent, hierarchical configuration store.

 

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications.

Apache ZooKeeper

So a service can register with ZooKeeper using a logical name, and the configuration information can contain the URI endpoint. It can consist of other details as well, such as QoS.

ZooKeeper has a steep learning curve, as explained in Apache ZooKeeper Made Simpler with Curator. So, instead of using ZooKeeper directly, this blog will use Apache Curator.

Curator n ˈkyoor͝ˌātər: a keeper or custodian of a museum or other collection – A ZooKeeper Keeper.

Apache Curator

Apache Curator has several components, and this blog will use the Framework:

The Curator Framework is a high-level API that greatly simplifies using ZooKeeper. It adds many features that build on ZooKeeper and handles the complexity of managing connections to the ZooKeeper cluster and retrying operations.

Apache Curator Framework

ZooKeeper Concepts

ZooKeeper Overview provides a great overview of the main concepts. Here are some of the relevant ones:

  • Znodes: ZooKeeper stores data in a shared hierarchical namespace that is organized like a standard filesystem. The name space consists of data registers – called znodes, in ZooKeeper parlance – and these are similar to files and directories.
  • Node name: Every node in ZooKeeper’s name space is identified by a path. Exact name of a node is a sequence of path elements separated by a slash (/).
  • Client/Server: Clients connect to a single ZooKeeper server. The client maintains a TCP connection through which it sends requests, gets responses, gets watch events, and sends heart beats. If the TCP connection to the server breaks, the client will connect to a different server.
  • Configuration data: Each node in a ZooKeeper namespace can have data associated with it as well as children. ZooKeeper was originally designed to store coordination data, so the data stored at each node is usually small, in the less-than-a-KB range.
  • Ensemble: ZooKeeper itself is intended to be replicated over a set of hosts called an ensemble. The servers that make up the ZooKeeper service must all know about each other.
  • Watches: ZooKeeper supports the concept of watches. Clients can set a watch on a znode. A watch will be triggered and removed when the znode changes.

ZooKeeper is a CP system with regard to the CAP theorem. This means that in the event of a partition failure, it will be consistent but not available. This can lead to problems, as explained in Eureka! Why You Shouldn’t Use ZooKeeper for Service Discovery.

Nevertheless, ZooKeeper is one of the most popular service discovery mechanisms used in the microservices world.

Let’s get started!

Start ZooKeeper

  1. Start a ZooKeeper instance in a Docker container:
  2. Verify ZooKeeper instance by using telnet as:

    Type the command “ruok” to verify that the server is running in a non-error state. The server will respond with “imok” if it is running:

    Otherwise it will not respond at all. ZooKeeper has other similar four-letter commands.
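The two steps above can be sketched as follows; the jplock/zookeeper image and the docker-machine host lookup are assumptions – use whichever ZooKeeper image and Docker host address apply to your setup:

```shell
# 1. Start ZooKeeper, exposing the standard client port
docker run -d -p 2181:2181 jplock/zookeeper

# 2. Verify with telnet; type "ruok" after connecting and expect "imok"
telnet $(docker-machine ip lab) 2181
```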

Service Registration and Discovery

Each service – User, Catalog, and Order in our case – has an eagerly initialized bean that registers and unregisters the service as part of its lifecycle initialization methods. Here is the code from CatalogService:

The code is pretty simple: it injects the ServiceRegistry class with the @ZooKeeperRegistry qualifier. This is then used to register and unregister the service. Multiple URIs, one for each stateless instance of a service, can be registered under the same logical name.

At this time, the qualifier comes from another Maven module. A cleaner Java EE way would be to move the @ZooKeeperRegistry qualifier to a CDI extension (#20). Then this qualifier, when specified on any REST endpoint, would register the service with ZooKeeper (#22). For now, the service endpoint URI is hardcoded as well (#24).

What does ZooKeeper class look like?

  1. The ZooKeeper class uses constructor injection, and hardcodes the IP address and port (#23):

    It does the following tasks:

    1. Loads ZooKeeper’s host/port from a properties file
    2. Initializes Curator framework and starts it
    3. Initializes a hashmap to store the URI name to zNode mapping. This node is deleted later to unregister the service.
  2. Service registration is done using registerService method as:

    The code is pretty straightforward:

    1. Create a parent zNode, if needed
    2. Create an ephemeral and sequential node
    3. Add metadata, including URI, to this node
  3. Service discovery is done using discover method as:

    Again, simple code:

    1. Find all children for the path registered for the service
    2. Get the metadata associated with this node, the URI in our case, and return it. The first such node is returned in this case. Different QoS parameters can be attached to the configuration data, which would allow the appropriate service endpoint to be returned.

Read ZooKeeper Javadocs for API.

ZooKeeper watches can be set up to inform the client about the lifecycle of the service (#27). ZooKeeper path caches can provide an optimized implementation of tracking the children nodes (#28).

Multiple Service Discovery Implementations

Our shopping cart application has two service discovery implementations – ServiceDisccoveryStatic and ServiceDiscoveryZooKeeper. The first one defines all the service URIs statically, and the other retrieves them from ZooKeeper.

Other means to register and discover services can easily be added by creating a new package in the services module and implementing the ServiceRegistry interface – for example, Snoop, etcd, Consul, or Kubernetes. Feel free to send a PR for any of those.

Run Application

  1. Make sure the ZooKeeper image is running as explained earlier.
  2. Download and run WildFly:
  3. Deploy the application:
  4. Access the application at localhost:8080/everest-web/. Learn more about the application and different components in Monolithic to Microservices Refactoring for Java EE Applications blog.
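Steps 2 and 3 above can be sketched as follows (the WildFly version, download URL, and Maven deployment are assumptions based on this blog series):

```shell
# Download and unpack WildFly
curl -O http://download.jboss.org/wildfly/8.2.0.Final/wildfly-8.2.0.Final.zip
unzip wildfly-8.2.0.Final.zip

# Start the server in the background
./wildfly-8.2.0.Final/bin/standalone.sh &

# Deploy the application from the source checkout
# (assumes the wildfly-maven-plugin is configured in the project)
mvn wildfly:deploy
```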

Enjoy!

Monolithic to Microservices Refactoring for Java EE Applications

Have you ever wondered what it takes to refactor an existing Java EE monolithic application into a microservices-based one?

This blog explains how a trivial shopping cart example was converted to a microservices-based application, and discusses some of the concerns around it. The complete code base for both the monolithic and microservices-based applications is at: github.com/arun-gupta/microservices.

Read on for full glory!

Java EE Monolith

A Java EE monolithic application is typically defined as a WAR or an EAR archive. The entire functionality of the application is packaged in a single unit. For example, an online shopping cart may consist of User, Catalog, and Order functionalities. All web pages are in the root of the application, all corresponding Java classes are in the WEB-INF/classes directory, and resources are in the WEB-INF/classes/META-INF directory.

Let’s assume that your monolith is not designed as a distributed big ball of mud and that the application is built following good software architecture practices. Some of the common rules are:

  • Separation of concerns, possibly using Model-View-Controller
  • High cohesion and low coupling using well-defined APIs
  • Don’t Repeat Yourself (DRY)
  • Interfaces/APIs and implementations are separate, following the Law of Demeter. Classes don’t call other classes directly just because they happen to be in the same archive
  • Using Domain Driven Design to keep objects related to a domain/component together
  • YAGNI or You Aren’t Going to Need It: Don’t build something that you don’t need now

Here is how a trivial shopping cart monolithic WAR archive might look:

Java EE Monolithic

This monolithic application has:

  • Web pages, such as .xhtml files, for the User, Catalog, and Order components, packaged in the root of the archive. Any CSS and JavaScript resources that are shared across different web pages are also packaged with these pages.
  • Classes for the three components are in separate packages in the WEB-INF/classes directory. Any utility/common classes used by multiple classes are packaged here as well.
  • Configuration files for each component are packaged in the WEB-INF/classes/META-INF directory. Any config files for the application, such as persistence.xml and load.sql to connect to and populate the data store respectively, are also packaged here.

It has the usual advantages of a well-known architecture: IDE-friendly, easy sharing, simplified testing, easy deployment, and others. But it also comes with disadvantages such as limited agility, being an obstacle to continuous delivery, being “stuck” with a technology stack, growing technical debt, and others.

Even though microservices are all the rage these days, monoliths are not bad. Even teams whose monolith is not working for them may not benefit much, or immediately, from moving to microservices. Other approaches, such as simply better software engineering and architecture, may help. Microservices are neither a free lunch nor a silver bullet, and they require significant investment to be successful: service discovery, service replication, service monitoring, containers, PaaS, resiliency, and a lot more.

don’t even consider microservices unless you have a system that’s too complex to manage as a monolith.

Microservice Premium

Microservice Architecture for Java EE

Alright, I’ve heard all of that, but I’d like to see a before and after, i.e. what a monolith code base looks like and what a refactored microservices code base looks like.

First, let’s look at the overall architecture:

Java EE Microservices

The key pieces in this architecture are:

  • The application should be functionally decomposed, with the User, Order, and Catalog components packaged as separate WAR files. Each WAR file should have the relevant web pages (#15), classes, and configuration files required for that component.
    • Java EE is used to implement each component, but there is no long-term commitment to the stack, as different components talk to each other using a well-defined API (#14).
    • Different classes in a component belong to the same domain, so the code is easier to write and maintain. The underlying stack can also change, possibly keeping technical debt to a minimum.
  • Each archive has its own database, i.e. no sharing of data stores. This allows each microservice to evolve and choose whatever type of datastore – relational, NoSQL, flat file, in-memory, or something else – is most appropriate.
  • Each component will register with a Service Registry. This is required because multiple stateless instances of each service might be running at a given time, and their exact endpoint locations will be known only at runtime (#17). Netflix Eureka, etcd, and ZooKeeper are some options in this space (more details).
  • If components need to talk to each other, which is quite common, then they do so using a pre-defined API. REST for synchronous or Pub/Sub for asynchronous communication are the common means to achieve this. In our case, the Order component discovers the User and Catalog services and talks to them using a REST API.
  • Client interaction for the application is defined in another application, the Shopping Cart UI in our case. This application mostly discovers the services from the Service Registry and composes them together. It should mostly be a dumb proxy, where the UI pages of different components are invoked to show the interface (#18). A common look-and-feel can be achieved by providing standard CSS/JavaScript resources.

This application is fairly trivial but at least highlights some basic architectural differences.

Monolith vs Microservice

Some of the statistics for the monolith and microservices-based applications are compared below:

Characteristic       | Monolith                                               | Microservice
Number of archives   | 1                                                      | 5 – Contracts (JAR, ~4 KB), Order (WAR, ~7 KB), User (WAR, ~6 KB), Catalog (WAR, ~8 KB), Web UI (WAR, 27 KB)
Web pages            | 8                                                      | 8 (see below)
Configuration files  | 4 – web.xml, template.xhtml, persistence.xml, load.sql | 3 per archive – persistence.xml, load.sql, web.xml
Class files          | 12                                                     | 26 – adds service registration and discovery classes, and an Application class, for each archive
Total archive size   | 24 KB                                                  | ~52 KB (total)

Code base for the monolithic application is at: github.com/arun-gupta/microservices/tree/master/monolith/everest

Code base for the microservices-enabled application is at: github.com/arun-gupta/microservices/tree/master/microservice

Issues and TODOs

Here are the issues encountered, and TODOs, during refactoring of the monolith to a microservices-based application:

  • Java EE already enables functional decomposition of an application using EAR packaging. Each component of an application can be packaged as a WAR file and bundled within an EAR file. They can even share resources that way. Now, that is not a true microservices approach, but it could be an interim step to get you started. However, be aware that a @FlowScoped bean is not activated correctly in an EAR (WFLY-4565).
  • Extract all template files using JSF Resource Library Templates.
    • All web pages are currently in everest module but they should live in each component instead (#15).
    • Resource Library Template should be deployed at a central location as opposed to packaged with each WAR file (#16).
  • Breaking up the monolithic database into multiple databases requires a separate persistence.xml and separate DDL/DML scripts for each application. Similarly, migration scripts, such as those using Flyway, would need to be created accordingly.
  • A REST interface had to be created for each component that needs to be accessed by another one.
  • The UI is still in a single web application. It should instead be included in the decomposed WARs (#15) and then composed again in the dumb proxy. Does that smell like portlets?
  • Deploy the multiple WAR files in a PaaS (#12)
  • Each microservice should be easily deployable in a container (#6)

Here is the complete list of classes for the monolithic application:

Here is the complete list of classes for the microservices-based application:

Once again, the complete code base is at github.com/arun-gupta/microservices.

Future Topics

Some of the future topics in this series would be:

  • Are containers required for microservices?
  • How do I deploy multiple microservices using containers?
  • How can all these services be easily monitored?
  • A/B Testing
  • Continuous Deployment using microservices and containers

What else would you like to see?

Enjoy!

 

Microservices at Red Hat Summit 2015


Red Hat Summit is just a few days away. You’ll get the opportunity to learn about a plethora of technologies and products in 277 sessions by 402 speakers. The complete agenda shows the list of all sessions, hands-on labs, tracks, and everything else.

When? Jun 21-25, 2015 (DevNation) and Jun 23-26, 2015 (Summit)
Where? Hynes Convention Center, Boston, MA
Agenda: Summit and DevNation
Register: Summit and DevNation

Here is the list of 12 sessions that are talking about Microservices:

  • Taming microservices testing with Docker & Arquillian Cube
  • Immutable infrastructure, containers, & the future of microservices
  • Microservices & you: Practical introduction for real developers
  • Integrating microservices with Apache Camel & Fabric8
  • Microservices on OpenShift: Where we’ve been & where we’re going
  • Making the case for microservices: Faster application development & repair
  • How DevOps and microservices impact your application architecture and development
  • Why real integration developers ride Camels
  • Service federation & the cluster environment
  • Containerizing applications, existing and new
  • Applying practical manufacturing skills to DevOps
  • Development architecture for application-centric microservices deployment in the Intercloud

You’ll learn about:

  • Microservices technical and business aspects
  • Development, Testing, Production of microservices
  • Their relationship with PaaS, containers, DevOps, Camel, and Fabric8

Of course, you can catch any of the speakers in the hallway. You can also stop by at any of the booths in the exhibitor hall, and talk to us about what Red Hat offers for microservices.

I’m giving a few sessions as well, and really looking forward to seeing you there!

 

Docker Tools in Eclipse

Upcoming Docker Tooling for Eclipse gave a preview of Docker Tooling coming in Eclipse. This Tech Tip will show how to get started with it.

docker-logoeclipse-logo

NOTE: This is pretty bleeding edge and so some of the features may be half baked. But we are looking for all the feedback!

The Docker tooling aims to provide, at minimum, the same basic features as the command-line interface, while also offering the advantages of a full-fledged UI.

Install Docker Tools Plugins

  • Download and install JBoss Developer Studio 9.0 Nightly, taking the defaults throughout the installation. Alternatively, download the latest Eclipse Mars build and configure the JBoss Tools plugin from the update site http://download.jboss.org/jbosstools/updates/nightly/mars/.
  • Open JBoss Developer Studio 9.0 Nightly or Eclipse Mars.
  • Add a new site using the menu items Help > Install New Software… > Add…. Specify the Name: as “Docker Nightly” and the Location: as http://download.eclipse.org/linuxtools/updates-docker-nightly/. Eclipse Docker Tooling
  • Expand Linux Tools, select Docker Client and Docker Tooling: Eclipse Docker Tooling
  • Click on Next >, Next >, accept the license agreement, and click on Finish. This completes the installation of the plugins. Restart the IDE for the changes to take effect.

Docker Explorer

The Docker Explorer provides a wizard to establish a new connection to a Docker daemon. This wizard can detect default settings if the user’s machine runs Docker natively (such as in Linux) or in a VM using Boot2Docker (such as in Mac or Windows). Both Unix sockets on Linux machines and the REST API on other OSes are detected and supported. The wizard also allows remote connections using custom settings.

  • Use the menu Window, Show View, Other…. Type “docker” to see the output as: Docker Eclipse Tools
  • Select Docker Explorer to open the explorer. Docker Eclipse Tools
  • Click on the link in this window to create a connection to the Docker Host. Specify the settings as shown: Docker Eclipse Tools Make sure to get the IP address of the Docker Host using the docker-machine ip command. Also, make sure to specify the correct directory for .docker on your machine.
  • Click on Test Connection to check the connection. This should show the output as: Docker Eclipse Tools Click on OK and Finish to exit the wizard.
  • The Docker Explorer itself is a tree view that handles multiple connections and provides users with a quick overview of the existing images and containers. Docker Eclipse Tools
  • Customize the view by clicking on the arrow in the toolbar: Docker Eclipse Tools
  • Built-in filters can show/hide intermediate and dangling images, as well as stopped containers. Docker Eclipse Tools
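The wizard’s auto-detection boils down to probing two candidate endpoints, which can be sketched in the shell as follows. This is only an illustration: the machine name dev and the fallback IP address are assumptions, not values from your setup.

```shell
# Linux: Docker listens on the native Unix socket
SOCKET_URI="unix:///var/run/docker.sock"

# Mac/Windows with Boot2Docker: Docker listens on TCP with TLS.
# Ask docker-machine for the IP; fall back to a typical Boot2Docker address.
IP=$(docker-machine ip dev 2>/dev/null || echo "192.168.99.100")
TCP_URI="tcp://${IP}:2376"

echo "Unix socket: ${SOCKET_URI}"
echo "REST API:    ${TCP_URI}"
```

Whichever endpoint responds is the one the wizard pre-fills in the connection settings.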

Docker Images

The Docker Images view lists all images in the Docker host selected in the Docker Explorer view. This view allows the user to manage images, including:

  • Pull/push images from/to the Docker Hub Registry (other registries will be supported as well, #469306)
  • Build images from a Dockerfile
  • Create a container from an image

Let’s take a look at it.

  • Use the menu Window, Show View, Other…, select Docker Images. It shows the list of images on the Docker Host: Docker Eclipse Tools
  • Right-click on the image ending with wildfly:latest and click on the green arrow in the toolbar. This will show the following wizard: Docker Eclipse Tools By default, all exposed ports from the image are mapped to random ports on the host interface. This setting can be changed by unselecting the first checkbox and specifying an exact port mapping. Click on Finish to start the container.
  • When the container is started, all logs are streamed into the Eclipse Console: Docker Eclipse Tools
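The wizard’s defaults can be reproduced from the command line. A sketch, where the container name wildfly-demo is purely illustrative:

```shell
# -P publishes every exposed port onto a random host port; -d detaches
docker run -d -P --name wildfly-demo jboss/wildfly

# Find out which random host port WildFly's 8080 was mapped to
docker port wildfly-demo 8080/tcp            # e.g. 0.0.0.0:32768
HOST_PORT=$(docker port wildfly-demo 8080/tcp | cut -d: -f2)
echo "WildFly is reachable on host port ${HOST_PORT}"

# The same log stream that Eclipse pipes into its Console view
docker logs wildfly-demo
```

Unselecting the wizard’s first checkbox corresponds to replacing -P with explicit -p host:container mappings.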

Docker Containers

Docker Containers view lets the user manage the containers. The view toolbar provides commands to start, stop, pause, unpause, display the logs and kill containers.

  • Use the menu Window, Show View, Other…, select Docker Containers. It shows the list of running containers on the Docker Host: Docker Eclipse Tools
  • Pause the container by clicking on the pause button in the toolbar (#469310). Show the complete list of containers by clicking on the View Menu, Show all containers.

    Docker Eclipse Tools

  • Select the paused container, and click on the green arrow in the toolbar to restart the container.
  • Right-click on any running container and select Display Log to view the log for this container.

    Docker Eclipse Tools
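The toolbar buttons map one-to-one onto familiar CLI commands. A sketch, with a hypothetical container name:

```shell
CONTAINER="wildfly-demo"            # hypothetical container name

docker pause   "$CONTAINER"         # pause button
docker unpause "$CONTAINER"         # unpause button
docker restart "$CONTAINER"         # green (re)start arrow
docker logs    "$CONTAINER"         # Display Log menu item
docker kill    "$CONTAINER"         # kill button
docker ps -a                        # "Show all containers", stopped included
```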

Information and Inspect on Images and Containers

Eclipse Properties view is used to provide more information about the containers and images.

  • Just open the Properties View and click on a Connection, Container, or Image in any of the Docker Explorer View, Docker Containers View, or Docker Images View. This will fill in data in the Properties view.
  • Info view is shown as:

    Docker Eclipse Tools

  • Inspect view is shown as:

    Docker Eclipse Tools
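Everything the Inspect tab renders comes from docker inspect. A quick sketch of the CLI equivalents, with a hypothetical container name:

```shell
# Full JSON document, the same data the Inspect tab shows
docker inspect wildfly-demo

# A Go template extracts a single field instead of scrolling through JSON
docker inspect --format '{{.NetworkSettings.IPAddress}}' wildfly-demo

# Plain grep works on the raw JSON output too
docker inspect wildfly-demo | grep -m1 '"IPAddress"'
```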

The code is hosted in Linux Tools project.

File your bugs at bugs.eclipse.org/bugs/enter_bug.cgi?product=Linux%20Tools using the “Docker” component. Talk to us on IRC.

Enjoy!

Devoxx4Kids workshops at Red Hat Summit and DevNation – Register Now!

RedHat Summit LogoDevNation Logo Devoxx4Kids

Red Hat Summit and DevNation are asking you to Bring Your Kids to the Conference, aka BYKC!

Are you speaking at or attending Red Hat Summit or DevNation? Do you live in or around the Boston area? Interested in having your kids attend a workshop?

Register here!

Coordinates

What? Two rooms, six workshops
When? Sunday, Jun 21
Where? Hynes Convention Center, Boston, MA
How do I register? eventbrite.com/e/devoxx4kids-tickets-16296685826

Schedule

Time  Room 1 Room 2
10am – 12pm App Inventor (8 yrs+) Greenfoot (12 yrs+)
12pm – 1pm Lunch Lunch
1pm – 3pm Minecraft Modding (8 yrs+) Dr Racket (10 yrs+)
3pm – 5pm Scratch (8 yrs+) Flappy Bird (12 yrs+)

The age limits are only suggestions; you know your kid’s capabilities best! Parents should stay with their kids for the duration of the workshop.

Abstracts

Greenfoot: Let’s make some games!

Hey you – yes you – do you like playing video or computer games? Well then how about learning to make your own? Come join us as we hang out for two hours and make some computer games with Greenfoot. Along the way you will learn about computer programming – make your computer do things you want. We will wrap up by showing you some places to go to talk with other people making games and more places to become a more powerful wizard! Be sure to bring a laptop – this will definitely be hands on.

App Inventor

The traditional way to create an application is write code. You type commands, one after another, telling the computer or the smartphone what to do next. When you type commands on a keyboard, it’s easy to make mistakes. What’s more, at a glance, the commands look nothing like the things that you’re getting the computer or smartphone to do. Taken together, the commands are just a bunch of text.

But there’s another way to create an application. With App Inventor, you build instructions by fitting building blocks together with one another on the computer screen. The blocks fit together like jigsaw puzzle pieces. When it’s finished, the whole jigsaw puzzle describes all the commands in your application.

In this workshop, you learn to use App Inventor. You can run your applications on the computers in the lab. You can also install your applications on real Android phones.

Dr. Racket: Don’t dictate, evaluate!

This workshop introduces the basics of functional programming using the Dr. Racket programming environment.

In the first part of the workshop you will learn how to evaluate expressions in the Dr. Racket interpreter, including expressions like <if html were available to display the abstract I’ld show here a racket expression that included an actual devox4 kids image>.

In the second part of the workshop we will work on developing a very simple animation. If time permits we will show how we can make our animations interactive, i.e., have them respond to key presses and mouse clicks.

The workshop is constructed to allow roughly equal amounts of time for experimentation and instruction.

Build your own Flappy Bird

In this workshop you will learn about HTML5, CSS, and JavaScript by creating a game. Together we’re going to build our very own Flappy Bird. We’ll do it step by step and it will be a lot of fun!

Minecraft Modding

Minecraft is a multi-player game about building and placing blocks in a three-dimensional environment. The game allows modifications (known as “mods”) that can change the game from what it was originally written. These mods can add content to the game to alter gameplay. For example, new blocks, mobs, and abilities of player can be added.

Have you always wondered what it takes to write these mods? This workshop is for you!

In this workshop we’ll teach kids how to build Minecraft mods. In the process, they’ll also pick up some basic Java concepts. No programming experience is required.

Scratch

Scratch (scratch.mit.edu) makes it easy to create interactive stories, animations, games, music, and art, and share these creations on the web. You will create several fun and interesting Scratch programs. No programming experience or typing skill is required.

Volunteer

Are you living in/around Boston area? Interested in becoming a volunteer?

No programming experience is required, some familiarity with computers would be useful. You’ll be required to listen to the instructor and help the attendees follow along.

Sign up using the form below.

Once again, don’t forget to register if you or your kid would like to attend the workshop!

Gilt and Microservices: Why and How

Gilt.com provides instant insider access to top designer labels, at up to 70% off retail. The company went through growing pains when its monolithic application could not keep up with the demands of a growing business. They turned to microservices for a scalable solution and are a live example of using them effectively to keep up with the velocity and agility of the business.

  • What was the need to refactor the monolithic application into microservices?
  • What is the technology stack?
  • Is JVM relevant?
  • Relevance of containers
  • Is DevOps a requirement for microservices?
  • Should the microservices be fully contained or can the data be shared?

Watch the answers to these questions in this interview with Adrian Trenaman (@adrian_trenaman), software engineer at Gilt:

Some other details are at:

  • tech.gilt.com
  • Scaling Microservices at Gilt with Scala, Docker, and AWS
  • Scaling Gilt: From Monolithic Ruby Application to Distributed Scala Microservices Architecture

Many thanks to Daniel Bryant (@danielbryantuk) for the introduction!

Cloud, Devops, Microservices Track Program Committee at JavaOne 2015

javaone-logo

The JavaOne 2015 Program Committee wheels are churning to make sure we provide the best content for you this year. There are a total of 9 tracks covering the entire Java landscape, and all the track leads and program committee members are reviewing and voting on the submissions. This is not an easy task, especially when there are so many submissions and their quality is top notch.

Here is the list of program committee members for Cloud and DevOps track:

  • @danielbryantuk
  • @myfear
  • @frankgreco
  • @mikekeith
  • @jbaruch
  • @wakaleo
  • Stijn (@stieno)
  • James Turnbull (@kartar)

And I am (@arungupta) leading the track along with @brunoborges!

 

The complete list of all the program committee members across all the tracks is published here.

Many thanks for all the wonderful submissions, and to all the program committee members for reviewing the proposals!

Just to give you an idea, here is the tag cloud generated from the titles of all submissions for Cloud and DevOps track:

 

JavaOne 2015 Title Tag Cloud for Cloud/DevOps Track

 

And a tag cloud from all the abstracts of the same track:

JavaOne 2015 Abstract Tag Cloud for Cloud/DevOps Track

 

And in case you are wondering, here is the tag cloud of titles from submissions across all the tracks:

JavaOne 2015 Tag Cloud Title

 

And the tag cloud of abstracts from submissions across all the tracks:

JavaOne 2015 Tag Cloud Abstract

 

Stay tuned, we are working hard to make sure to provide an excellent content at JavaOne!

Here are a couple of additional links:

  • Register for JavaOne
  • Justify to your boss

And, you can always find all the details at oracle.com/javaone.

Java EE, Docker and Maven (Tech Tip #89)

Java EE applications are typically built and packaged using Maven. For example, github.com/javaee-samples/javaee7-docker-maven is a trivial Java EE 7 application and shows the Java EE 7 dependency:

And the two Maven plugins that compile the source and build the WAR file:

This application can then be deployed to a Java EE 7 container, such as WildFly, using the wildfly-maven-plugin:

Tests can be invoked using Arquillian, again using Maven. So if you were to package this application as a Docker image and run it inside a Docker container, there should be a mechanism to seamlessly integrate in the Maven workflow.

Docker Maven Plugin

Meet docker-maven-plugin!

This plugin allows you to manage Docker images and containers from your pom.xml. It comes with predefined goals:

Goal Description
docker:start Create and start containers
docker:stop Stop and destroy containers
docker:build Build images
docker:push Push images to a registry
docker:remove Remove images from local docker host
docker:logs Show container logs

Introduction provides a high level introduction to the plugin including building images, running containers and configuration.
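The goals in the table above are invoked like any other Maven goal. The tiny DRY_RUN wrapper below is purely an illustration (it is not part of the plugin), and mvn is assumed to be on the PATH:

```shell
# Helper: with DRY_RUN=1, print the Maven invocation instead of running it
run_goal() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "mvn docker:$1"
  else
    mvn "docker:$1"
  fi
}

DRY_RUN=1 run_goal build    # prints: mvn docker:build
DRY_RUN=1 run_goal start    # prints: mvn docker:start
DRY_RUN=1 run_goal logs     # prints: mvn docker:logs
DRY_RUN=1 run_goal stop     # prints: mvn docker:stop
```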

Run Java EE 7 Application as Docker Container using Maven

TL;DR

  1. Create and Configure a Docker Machine as explained in Docker Machine to Setup Docker Host
  2. Clone the workspace as: git clone https://github.com/javaee-samples/javaee7-docker-maven.git
  3. Build the Docker image as: mvn package -Pdocker
  4. Run the Docker container as: mvn install -Pdocker
  5. Find out IP address of the Docker Machine as: docker-machine ip mydocker
  6. Access your application
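The six steps above can be sketched as a single script. The port 8080 and the context path are assumptions derived from the project name, and the fallback IP address is illustrative:

```shell
# Steps 2-4: clone the workspace, build the image, run the container
git clone https://github.com/javaee-samples/javaee7-docker-maven.git
cd javaee7-docker-maven
mvn package -Pdocker     # builds the Docker image
mvn install -Pdocker     # creates and starts the Docker container

# Steps 5-6: find the Docker Machine IP and derive the application URL
IP=$(docker-machine ip mydocker 2>/dev/null || echo "192.168.99.100")
URL="http://${IP}:8080/javaee7-docker-maven"
echo "Application URL: ${URL}"
```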

Docker Maven Plugin Configuration

Let’s look a little deeper into our sample application.

pom.xml is updated to include docker-maven-plugin as:

Each image configuration has three parts:

  • Image name and alias
  • <build> that defines how the image is created. The base image, the build artifacts and their dependencies, the ports to be exposed, and everything else to be included in the image are specified here. The assembly descriptor format is used to specify the artifacts to be included and is defined in the src/main/docker directory. assembly.xml in our case looks like:
  • <run> that defines how the container is run. Ports that need to be exposed are specified here.

In addition, the package phase is tied to the docker:build goal and the install phase to the docker:start goal.
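That binding looks roughly like the following <executions> fragment inside the plugin’s declaration in pom.xml. The phase and goal names follow the post; the <id> values are illustrative:

```xml
<executions>
  <!-- mvn package also builds the Docker image -->
  <execution>
    <id>docker-build</id>
    <phase>package</phase>
    <goals>
      <goal>build</goal>
    </goals>
  </execution>
  <!-- mvn install also creates and starts the container -->
  <execution>
    <id>docker-start</id>
    <phase>install</phase>
    <goals>
      <goal>start</goal>
    </goals>
  </execution>
</executions>
```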

There are four docker-maven-plugin alternatives; you can read more details in the shootout to see which one serves your purpose best.

How are you creating your Docker images from existing applications?

Enjoy!

 

Microservices with JBoss EAP 6 Reference Architecture

Microservices follow the divide-and-conquer methodology for technology and teams to create software that is easier to develop, understand, and maintain. The overall quality of the software is much higher because of the CI/CD practices that are an imperative part of using such an architecture.

A new paper is released by the Red Hat Reference Architecture Team titled “Microservice Architecture: Building Microservices with JBoss EAP 6“.

The purpose of the reference architecture is explained in the summary section:

This reference architecture recites the basic tenets of a microservice architecture and analyzes some of the advantages and disadvantages of this approach. This paper expressly discourages a one size fits all mentality, instead envisioning various levels of modularity for services and deployment units.

Microservice Reference Architecture

The reference architecture also provides a sample application that shows how to build a microservice application using JBoss Developer Studio and JBoss EAP 6:

The sample application provided with this reference architecture demonstrates Business Driven Microservices. The design and development of this system is reviewed at length and the steps to create the environment are documented.

Are you building and deploying microservices using JBoss EAP 6? Share your thoughts with us!

Previously published reference architecture documents are available at red.ht/1uFlHpw.

Provide feedback at refarch-feedback@redhat.com.