Minecraft Modding at Schools and Libraries

Introducing programming languages to kids needs to be gamified. They typically seem to like it, and in the process they also develop a liking for the language. They also gain a better understanding of the typical code-and-run cycle that is common in a software developer’s life.

Getting Started with Java for Kids

As part of Devoxx4Kids, my son and I have been teaching Minecraft Modding workshops for 2+ years now. We’ve personally reached 2000+ kids in the Bay Area and other parts of the world, and helped them write their first Hello World program in Java. And it’s not the conventional public static void main; it’s a Minecraft mod. This workshop has also been used at several other Devoxx4Kids chapters around the world.

As part of our workshops, “code” to kids means copy/pasting the text from the website and understanding it. “Build” is just clicking a button in Eclipse that brings up the Minecraft launcher. Once the game comes up, they know how to play it and see instant modifications. They don’t have to wait for Hello World to appear on their screen. They type “potato” and see a stack of 64 potatoes in their inventory, they spawn an Ender Dragon from a dragon egg, or they make skeletons fight each other. This is the new Hello World!

The installation of the JDK, Eclipse or NetBeans, and Forge typically takes ~45 minutes. But after that initial hump, kids are able to build 3-4 mods in a total of ~2 hours. Our philosophy is more along the lines of teaching them how to think about modding. Yes, we do teach them a few mods, but we enable them to create their own mods as well.

Java is often criticized for being a verbose language. But the Forge-decompiled Java code makes it much easier to read, and helps kids understand how to change it to make modifications.

Because kids love playing the game, the workshop can simply leverage their passion and help them on their journey toward becoming Java programmers. It really helps lower the average age of Java developers!

Minecraft Modding over Summer

Over the Summer, we delivered Minecraft Modding workshops at:

We were also able to coach some additional volunteers and hopefully spread the Java fever to a wider audience.

Here are some pictures from the event delivered at West Valley San Jose Public Library over the weekend:

 
   
 
   

Check out complete photos at meetup.com/devoxx4kids-bayarea.

Minecraft Modding and You

Would you be interested in delivering this workshop at your local school, library, or corporate event?

Would you like teachers in your school to be trained for Minecraft Modding?

If you are in Bay Area, how about make this a social activity at your workplace for a weekend? Invite us to deliver this workshop.

If you are not in Bay Area, send us an email at info@devoxx4kids.org and we’ll hook you up with the local Devoxx4Kids chapter lead who can then help you get started.

Devoxx4Kids is a 501(c)(3) non-profit organization, and we love to ignite that spark for technology in kids!

Minecraft Modding Resources

  • Workshop instructions are available at minecraftmodding.org.
  • Minecraft Modding with Forge book from O’Reilly. This book is targeted at kids 8 years and older with no prior programming experience. Parents with no technical background have also found this book to be a great resource.
     
  • Minecraft Modding Video Tutorial
  • Course Curriculum for School

 

 

WildFly Admin Console Updated – Feedback Requested

Red Hat JBoss Enterprise Application Platform (EAP) and WildFly have a symbiotic relationship. In short, Red Hat JBoss Enterprise Application Platform (JBoss EAP) retains all of the innovation of the WildFly community project (formerly known as JBoss Application Server). But only a subscription to JBoss EAP meets the demanding requirements for mission-critical applications and includes the assurance of service-level agreement (SLA)-based support, patches, updates, and multi-year maintenance policies. Read more details about the comparison between WildFly and JBoss EAP in this whitepaper.

JBoss EAP 6.4 is the latest version as of now, and WildFly 10.0.0 Beta2 was released a few days ago. JBoss EAP 7 will be derived from WildFly 10.x. This allows developers to try out the latest features with WildFly (as opposed to other closed source application servers) and then use EAP 7 for mission-critical applications when commercial support is required.

Over the past year, we have been working to improve the user experience of the WildFly Management Console.  You will find several improvements to the overall information architecture and navigation model that will make it easier to find and execute common management tasks. We invite you to try the new console application and tell us what you think.

Getting Started with WildFly Admin Console

  • Download WildFly 10.0.0.Beta2 and unzip it (a command sketch for these steps follows this list)
  • Add a user in the management realm using add-user.sh -u u1 -p p1
  • Access the web-based admin console at localhost:9990
  • Use u1 as the username and p1 as the password
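
A minimal sketch of these steps on the command line (the archive name is assumed from the Beta2 release mentioned above, and the server has to be started before the console is reachable):

# Unzip the distribution and move into it
unzip wildfly-10.0.0.Beta2.zip
cd wildfly-10.0.0.Beta2

# Add a management user with the credentials used in this post
./bin/add-user.sh -u u1 -p p1

# Start the server, then open http://localhost:9990 and log in as u1/p1
./bin/standalone.sh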

If you don’t want to go through download and install, which is pretty simple BTW, then WildFly 10.0.0.Beta2 is also available on OpenShift (thanks @farahjuma).

WildFly Admin Console Highlights

Spend some time navigating through different sections to quickly learn the WildFly basics. Here are some highlights.

The new navigation makes the WildFly structure more visible. To find a subsystem to configure, simply move from the left to the right within the navigation. You can also get a quick overview about each subsystem before configuring it.

WildFly 10 Beta2 Configuration

Servers can be found through either hosts or server groups. In addition, you can search for the server group or server you are looking for. 

Adding and monitoring servers is easier. After adding a server to a server group or host and getting it running, you can choose a subsystem that you want to monitor from the same page.

WildFly10 Beta2 Runtime

Modifying the status of a server can be done on the same page. You can also remove or copy the server.

WildFly10 Beta2 Server

You can add deployments to server groups directly, which means that you don’t have to first upload them and then assign them. Searching for server groups and deployments will also help save time.

WildFly10 Beta2 Server Group

We welcome your feedback as we continue to improve the user experience of WildFly. Feel free to leave comments here, file bugs, and let us know what you like and what can be further improved.

Silicon Valley Code Camp for Kids 2015 – Submit and Register

SVCC 10

Silicon Valley Code Camp is reaching its 10th anniversary this year!

When? Oct 2nd, 3rd & 4th, 2015 (Friday through Sunday)
Where? Evergreen Valley College (3095 Yerba Buena Rd, San Jose, California 95135)
What? Paid workshops on Friday, FREE sessions on Saturday/Sunday, Kids Sunday

What can you do?

  • Register for the event
  • Submit a session for FREE sessions or Kids workshops
  • What kind of sessions can be submitted? 

    Each session is 75 minutes long. For adults, look at the existing list of sessions and think about what would make your session unique.

    For kids, any hands-on session would be great. The facility will only provide classroom-style seating with power adapters for laptops to be plugged in. The kids typically bring their own laptops. Any software installation instructions for the laptops need to be included in the abstract and will be shared with the attendees.

    Internet is typically unreliable at such facilities. DO NOT rely upon it!

  • How do I sign up as a volunteer?

    Make sure to select the “Volunteer to Help” checkbox on www.siliconvalley-codecamp.com/Register/IndexStep2. The exact volunteer jobs will be posted closer to the event and you’ll be notified. You’ll need to pick the exact job at that time.

  • How do I sign up my kid for a workshop?

    Each kid requires a parent/guardian to sign up. Parents need to stick around for the entire duration of the event.

    The kid signs up for the entire day and then can sign up for a particular workshop once the schedule is available. Each kid registration costs $50. This mostly goes toward the logistics of the kids event.

    Make sure to register a parent/guardian first as that will be needed during kid’s registration.

  • When will kids workshop and schedule be available?
    In the next few days!

Any other questions? Ask them at service2015@siliconvalley-codecamp.com or ask here.

Kubernetes Application – Package Multiple Resources Together

Deploying an application in Kubernetes requires creating multiple resources such as Pods, Services, Replication Controllers, and others. Typically each resource is defined in a configuration file and created using the kubectl script. But if multiple resources need to be created, then you need to invoke kubectl multiple times. So if you need to create the following resources:

  • MySQL Pod
  • MySQL Service
  • WildFly Replication Controller

Then the commands would look like:
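
Assuming each resource is defined in its own configuration file (the file names here are illustrative), that means three separate invocations:

kubectl create -f mysql-pod.yaml
kubectl create -f mysql-service.yaml
kubectl create -f wildfly-rc.yaml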

Or, for convenience, wrap these invocations in a shell script. But that is not very intuitive! There is a better, more natural and intuitive way.

Kubernetes allows multiple resources to be specified in a single configuration file. This makes it easy to create a “Kubernetes application” that consists of multiple resources.

The previous section showed how to deploy the Java EE application using multiple configuration files. This application can be deployed using a single configuration file as well.

An application, as discussed above, consisting of MySQL Pod, MySQL Service, and WildFly Replication Controller can be created using the following configuration file:

Notice that each section, one each for the MySQL Pod, MySQL Service, and WildFly Replication Controller, is separated by ---.

Such an application can be created as:
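
A sketch of the whole flow, with the combined configuration written inline for convenience (image names, labels, passwords, and ports are illustrative; the complete, tested file is in the repository linked below):

cat > javaee-app.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
spec:
  containers:
  - name: mysql
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpassword
    ports:
    - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  selector:
    name: mysql-pod
  ports:
  - port: 3306
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 1
  selector:
    name: wildfly
  template:
    metadata:
      labels:
        name: wildfly
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly
        ports:
        - containerPort: 8080
EOF

# A single invocation creates all three resources
kubectl create -f javaee-app.yaml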

Complete details about how to set up Kubernetes and run this application are available at github.com/arun-gupta/kubernetes-java-sample/#kubernetes-application.

More details about creating a Kubernetes application with multiple resources can be found in #12104.

You can learn about how to create Kubernetes resources for a Java application, or otherwise, at github.com/arun-gupta/kubernetes-java-sample/.

Docker and Kubernetes Workshops in Fall 2015

The Docker and Kubernetes workshops are going to 4 continents and 9 countries this Fall!

Let’s talk about:

  • Get started with Docker and Kubernetes for packaging your applications
  • Microservices using Docker and Kubernetes
  • Clustering architectures
  • Migrating existing applications to Docker and Kubernetes
  • Tooling
  • Debugging tips

I’ll share some of what I know and will learn a lot more from you!

Here is the complete circuit so far:

  • Sep 9 – 10: JavaZone 2015
  • Sep 15: GOTO London 2015
  • Sep 17: Red Hat Forum London 2015
  • Sep 29: Red Hat Forums, Argentina
  • Oct 2: CodeStars Summit 2015
  • Oct 24 – 29: JavaOne 2015
  • Nov 5: Drukwerk (tentative)
  • Nov 7: JavaDay Kiev 2015
  • Nov 9 – 13: Devoxx Belgium 2015
  • Nov 16 – 18: Devoxx Morocco 2015
  • Nov 18 – 22: Build Stuff 2015

Where will I see you?

Would you like to run with me at any of these events? 5k, 10k, 10mile, half marathon, marathon … you pick the distance and we run together!

 

JBoss EAP gives 509% ROI over closed source application servers

A new study by IDC shows how Red Hat JBoss EAP customers are benefitting significantly compared to closed source commercial application servers:

JBoss EAP IDC 2015

The study describes a common paradigm among JBoss EAP customers three years ago …

There was uncertainty about whether JBoss EAP would scale as well as more expensive options and whether JBoss EAP was as feature rich, particularly for high-end projects. Those concerns prevented IT operations from going all in on a single software standard. Despite that, customers were pleased with the benefits they were able to achieve from their use of JBoss EAP, and customers were able to achieve an impressive return on investment by adopting this approach.

Sounds familiar?
Does your company still think like that?
Do the closed source vendors still give you that pitch?

The study shares how customers’ perspectives have changed over these years …

Today, we’ve found that JBoss EAP customers are more systematic in their use of JBoss EAP or OpenShift by Red Hat as the standard application server or cloud application platform within a standardized environment. Customers are no longer worried that JBoss EAP is not as sophisticated as more expensive alternatives and now believe it’s at performance parity with its competitors.

There are much more fundamental benefits for developing in open source:

One customer said that because JBoss EAP comes from open source, it is built from a lot of good ideas from the community and the internal design is architected to make it simpler to use than non-community-based alternatives.

That’s why we love Community Powered Innovation!

The cost benefit cannot be overemphasized either:

the cost benefit associated with JBoss EAP gave them an affordable opportunity to standardize, whereas they would have been too cost challenged using other options.

And the customers are saying:

We did a cost-benefit analysis. Many of our applications need a development platform for multiple environments … and if we compare JBoss EAP with other solutions … it’s a no-brainer. Night and day

Are customers using it for only new projects? Or migrating their existing mission critical projects to JBoss EAP as well?

Three years ago …

Then it was enough to begin using JBoss EAP for new projects.

And now …

we’ve found that customers made the decision to migrate production applications to the new environment in order to gain speed and compliance benefits from standardizing application operations and change management.

Register and download the report now!

What are you waiting for?

Download JBoss EAP (Java EE 6 compliant) and get full commercial support or WildFly 10 Beta1 to try Java EE 7 and lots of other cool features!

Docker Toolbox

One of the new features introduced in Docker 1.8 is Docker Toolbox. What is this toolbox?

Docker Toolbox

The Docker Toolbox is an installer to quickly and easily install and setup a Docker environment on your computer. Available for both Windows and Mac, the Toolbox installs Docker Client, Machine, Compose (Mac only), Kitematic and VirtualBox.

Docker Toolbox is the fastest way to get up and running with Docker in development. In short, it provides the different tools required to get started with Docker:

  • Docker Client docker binary
  • Docker Machine docker-machine binary
  • Docker Compose docker-compose binary
  • Kitematic – Desktop GUI for Docker
  • Docker Quickstart Terminal app

If you have Docker CLI, Machine, Compose, and other tools installed in the /usr/local/bin directory then this would just overwrite them.

Specifically, Docker Toolbox 1.8.0a installs:

  • Docker Client 1.8.0
  • Docker Machine 0.4.0
  • Docker Compose 1.4.0
  • Docker Quickstart Terminal App
  • Kitematic 0.8.1
  • VirtualBox 5.0.0

After the installation completes, the versions are shown as:
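
The console output is not reproduced here, but the installed versions can be verified from a terminal (the values in the comments are the ones listed above):

docker --version          # Docker version 1.8.0, build ...
docker-machine --version  # docker-machine version 0.4.0
docker-compose --version  # docker-compose version: 1.4.0
VBoxManage --version      # 5.0.0r...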

If an older version of VirtualBox is already running then it will show a message as shown:

This flow needs to be slightly cleaned up (#63).

Read more details in DockerToolbox blog.

UPDATE: VirtualBox 5.0.0 prevents the Kubernetes cluster from starting. Not sure if Docker 1.8.0 will work with VirtualBox 4.3.30, but I downgraded VirtualBox 5.0.0 to 4.3.30 and also downgraded Docker 1.8.0 to 1.7.0 as explained in #12614.

Docker Quickstart Terminal

It also created a new Docker category in Applications with links to Docker Quickstart Terminal and Kitematic. Clicking on the terminal app creates a default Machine instance and shows the following output:

The configured Docker environment variables are:
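
The exact values depend on your setup, but they typically look like this (the IP address and path are examples):

env | grep DOCKER
# DOCKER_HOST=tcp://192.168.99.100:2376
# DOCKER_TLS_VERIFY=1
# DOCKER_CERT_PATH=/Users/<USER>/.docker/machine/machines/default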

VirtualBox is also updated to 5.0.0 r101573.

The Quickstart Terminal is mostly a regular shell, but it also creates a default machine. It can be used to connect to other machines as well:
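
For example, pointing the shell at a different machine is just a matter of re-evaluating the environment (the machine name is illustrative):

docker-machine env mymachine
eval "$(docker-machine env mymachine)"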

Update existing Docker scripts to Docker 1.8

If you’d like to update your existing Docker tools to 1.8, they are available at:

Upgrade Docker CLI:

Upgrade Docker Machine:

Upgrade Docker Compose:

VirtualBox can be downloaded from virtualbox.org.

Upgrade Docker VMs

The Docker version of existing machines can be found as:

This can only be done after the machine is running though.

So start an existing machine as:

And then upgrade it as:
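
A sketch of that sequence with docker-machine (the machine name default is an assumption):

# Start an existing machine
docker-machine start default

# Check the Docker version running inside it
docker-machine ssh default docker version

# Upgrade the machine to the latest Docker release
docker-machine upgrade default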

Java Applications using Docker

Ready to start deploying your Java applications to Docker?

Get started with github.com/javaee-samples/docker-java.

Getting Started with ELK Stack on WildFly

Your typical business application consists of a variety of servers such as WildFly, MySQL, Apache, ActiveMQ, and others. They each have their own log format, with minimal to no consistency across them. Each log statement typically consists of some sort of timestamp (which can vary widely) and some text information. Logs can be multi-line. If you are running a cluster of servers, then these logs are decentralized, in different directories.

How do you aggregate these logs? Provide a consistent visualization over them? Make this data available to business users?

This blog will:

  • Introduce ELK stack
  • Explain how to start it
  • Start a WildFly instance to send log messages to the ELK stack (Logstash)
  • View the messages using ELK stack (Kibana)

What is ELK Stack?

The ELK stack provides a powerful platform to index, search, and analyze your data. It uses Logstash for log aggregation, Elasticsearch for searching, and Kibana for visualizing and analyzing data. In short, the ELK stack:

  • Collect logs and events data (Logstash)
  • Make it searchable in fast and meaningful ways (Elasticsearch)
  • Use powerful analytics to summarize data across many dimensions (Kibana)

logstash-logo

Logstash is a flexible, open source data collection, enrichment, and transportation pipeline.

elasticsearch-logo

Elasticsearch is a distributed, open source search and analytics engine, designed for horizontal scalability, reliability, and easy management.

kibana-logo

Kibana is an open source data visualization platform that allows you to interact with your data through stunning, powerful graphics.

How does ELK Stack work?

Logstash can collect logs from a variety of sources (using input plugins), process the data into a common format using filters, and stream data to a variety of sources (using output plugins). Multiple filters can be chained to parse the data into a common format. Together, they build a Logstash Processing Pipeline.

Logstash Processing Pipeline

Inputs and outputs support codecs that enable you to encode or decode the data as it enters or exits the pipeline without having to use a separate filter.

Logstash can then store the data in Elasticsearch and Kibana provides a visualization of that data. Here is a sample pipeline that can collect logs from different servers and run it through the ELK stack.

ELK Stack

Start ELK Stack

You can download individual components of the ELK stack and start them that way. There is plenty of advice on how to configure these components. But I like to start with KISS, and Docker makes it easy to KISS!

All the source code on this blog is at github.com/arun-gupta/elk.

  1. Clone the repo:
  2. Run the ELK stack (a command sketch for these two steps follows this list):

    This will use the pre-built Elasticsearch, Logstash, and Kibana images. It is built upon the work done in github.com/nathanleclaire/elk.

    docker ps will show the output as:

    It shows all the containers running.
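
A sketch of these two steps (the repository is the one linked above; the docker-compose command is an assumption based on the docker-compose.yml mentioned later in this post):

git clone https://github.com/arun-gupta/elk.git
cd elk

# Start the Elasticsearch, Logstash, and Kibana containers in the background
docker-compose up -d

# Verify that all the containers are running
docker ps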

WildFly and ELK

James (@the_jamezp) blogged about Centralized Logging for WildFly with ELK Stack. The blog explains how to configure WildFly to send log messages to Logstash. It uses the highly modular nature of WildFly to install the jboss-logmanager-ext library as a module. The configured log manager adds a @timestamp field to the log messages sent to Logstash. These log messages are then sent to Elasticsearch.

Instead of following those steps, let’s KISS with Docker and use a pre-configured image to get you started.

Start the image as:

Make sure to substitute <DOCKER_HOST_IP> with the IP address of the host where your Docker host is running. This can be easily found using docker-machine ip <MACHINE_NAME>.
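
The general shape of that command is sketched below; the image name is a placeholder for the pre-configured WildFly image referenced in the original post, and the environment variable name is an assumption:

docker run -d -p 8080:8080 -e LOGSTASH_HOST=<DOCKER_HOST_IP> <WILDFLY_LOGSTASH_IMAGE>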

View Logs using ELK Stack

Kibana runs on an embedded nginx and is configured to run on port 80 in docker-compose.yml. Let’s view the logs using that.

  1. Access http://<DOCKER_HOST_IP> in your browser and it should show the default page. The @timestamp field was created by the log manager configured in WildFly.
  2. Click on Create to create an index pattern and select the Discover tab to view the logs.

Try connecting other sources and enjoy the power of distributed logs consolidated by ELK!

Some more references …

Distributed logging and visualization is a critical component in a microservices world where multiple services would come and go at a given time. A future blog will show how to use ELK stack with a microservices architecture based application.

Enjoy!

Kubernetes Design Patterns

14,000 commits and 400 contributors (including one tiny commit from me!) are what built Kubernetes 1.0. It is now available!

This blog discusses some of the Kubernetes design patterns. All source code for the design patterns discussed below is available at kubernetes-java-sample.

Key Concepts of Kubernetes

At a very high level, there are three key concepts:

  • Pods are the smallest deployable units that can be created, scheduled, and managed. A Pod is a logical collection of containers that belong to an application.
  • Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.
  • Node is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. A node provides an application-specific “virtual host” in a containerized environment.

Kubernetes Key Concepts

 

Some other concepts to be aware of:

  • Replication Controller is a resource at the Master that ensures the requested number of pods is running on the nodes at all times.
  • Service is an object on the master that provides load balancing across a replicated group of pods.
  • Label is an arbitrary key/value pair, kept in a distributed watchable storage, that the Replication Controller uses for service discovery.

Start Kubernetes Cluster

  1. The easiest way to start a Kubernetes cluster on Mac OS is using Vagrant:
  2. Alternatively, Kubernetes can be downloaded from github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz, and the cluster can be started as sketched below:
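
A sketch of both options (KUBERNETES_PROVIDER and cluster/kube-up.sh were the standard mechanism for Kubernetes 1.0; exact paths may differ):

# Option 1: start the cluster with Vagrant from a Kubernetes checkout/release
export KUBERNETES_PROVIDER=vagrant
./cluster/kube-up.sh

# Option 2: download the 1.0.0 release and start the cluster
curl -LO https://github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz
tar xzf kubernetes.tar.gz
cd kubernetes
./cluster/kube-up.sh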

Kubernetes Cluster Vagrant

A Pod with One Container

This section will explain how to start a Pod with one Container. The WildFly base Docker image will be used as the Container.

Kubernetes One Pod

Pod, Replication Controller, Service, etc. are all resources in Kubernetes. They can be created with kubectl using a configuration file.

The configuration file in this case:
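
A minimal sketch of such a Pod configuration, written inline here for convenience (the names and label are assumptions; the tested file is in the repository linked below):

cat > wildfly-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wildfly-pod
  labels:
    name: wildfly
spec:
  containers:
  - name: wildfly
    image: jboss/wildfly
    ports:
    - containerPort: 8080
EOF

kubectl create -f wildfly-pod.yaml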

Complete details on how to create a Pod are explained at github.com/arun-gupta/kubernetes-java-sample#a-pod-with-one-container

Java EE Application Deployed in a Pod with One Container

This section will show how to deploy a Java EE application in a Pod with one Container. WildFly, with an in-memory H2 database, will be used as the container.

Kubernetes Java EE 7 Application

Configuration file is:

Complete details at github.com/arun-gupta/kubernetes-java-sample#java-ee-application-deployed-in-a-pod-with-one-container-wildfly–h2-in-memory-database.

A Replication Controller with Two Replicas of a Pod

This section will explain how to start a Replication Controller with two replicas of a Pod. Each Pod will have one WildFly container.

Kubernetes Replication Controller

Configuration file is:
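
A sketch of a Replication Controller with two replicas (the name wildfly-rc is consistent with the Pod names shown in the later posts; labels are assumptions):

cat > wildfly-rc.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wildfly-rc
spec:
  replicas: 2
  selector:
    name: wildfly
  template:
    metadata:
      labels:
        name: wildfly
    spec:
      containers:
      - name: wildfly
        image: jboss/wildfly
        ports:
        - containerPort: 8080
EOF

kubectl create -f wildfly-rc.yaml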

Complete details at github.com/arun-gupta/kubernetes-java-sample#a-replication-controller-with-two-replicas-of-a-pod-wildfly

Rescheduling Pods

A Replication Controller ensures that the specified number of pod “replicas” are running at any one time. If there are too many, the replication controller kills some pods. If there are too few, it starts more.

Kubernetes Pod Rescheduling

Complete details at github.com/arun-gupta/kubernetes-java-sample#rescheduling-pods.

Scaling Pods

Replication Controller allows dynamic scaling up and down of Pods.

Kubernetes Scaling Pods

Complete details at github.com/arun-gupta/kubernetes-java-sample#scaling-pods.

Kubernetes Service

Pods are ephemeral. The IP address assigned to a Pod cannot be relied upon. Kubernetes, the Replication Controller in particular, creates and destroys Pods dynamically. A consumer Pod cannot rely upon the IP address of a producer Pod.

A Kubernetes Service is an abstraction which defines a logical set of Pods. The set of Pods targeted by a Service is determined by the labels associated with the Pods.

This section will show how to run WildFly and MySQL containers in separate Pods. The WildFly Pod will talk to the MySQL Pod using a Service.

Kubernetes Service

Complete details at github.com/arun-gupta/kubernetes-java-sample#kubernetes-service.

Here are a couple of blogs that will help you get started:

The complete set of Kubernetes blog entries provide more details.

Enjoy!

Scaling Kubernetes Cluster

kubernetes-logo

Automatic Restarting of Pods inside Replication Controller of Kubernetes Cluster shows how Kubernetes reschedules pods in the cluster if one or more of the existing Pods disappear for some reason. This is a common usage pattern and one of the key features of Kubernetes.

Another common usage pattern of Replication Controller is scaling:

The replication controller makes it easy to scale the number of replicas up or down, either manually or by an auto-scaling control agent, by simply updating the replicas field.

This blog will show how a Kubernetes cluster can be easily scaled up and down.

All the code used in this blog is available at kubernetes-java-sample.

Start Replication Controller and Verify

  1. Start a Replication Controller (a command sketch for these steps follows this list):
  2. Get the status of the Pods:

    Make sure to wait for the status to change to Running.

    Note down the names of the two Pods shown in the output.

  3. Get status of the Replication Controller:

    If multiple Replication Controllers are running then you can query for this specific one using the label:
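
A sketch of these steps (the file name and label are assumptions consistent with the Replication Controller used in the related posts):

# Start the Replication Controller
kubectl create -f wildfly-rc.yaml

# Watch the Pods until their status changes to Running
kubectl get pods -w

# Status of the Replication Controller, optionally filtered by label
kubectl get rc
kubectl get rc -l name=wildfly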

Scaling Kubernetes Cluster Up

Replication Controller allows dynamic scaling up and down of Pods.

  1. Scale up the number of Pods:
  2. Status of the Pods can be seen in another shell:

    Notice a new Pod with the name wildfly-rc-aqaqn is created.

Scale Kubernetes Cluster Down

  1. Scale down the number of Pods:
  2. Status of the Pods using -w is not correctly updated (#11338). But status of the Pods can be seen correctly as:

    Notice only one Pod is now running.

Kubernetes dynamically scales the Pods up and down using the scale --replicas command.
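
A sketch of the commands behind this, assuming the Replication Controller is named wildfly-rc (matching the Pod name prefix in the output):

# Scale up from 2 to 3 replicas; a new Pod is scheduled
kubectl scale rc wildfly-rc --replicas=3
kubectl get pods

# Scale back down to a single replica
kubectl scale rc wildfly-rc --replicas=1
kubectl get pods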

All code used in this blog is available at kubernetes-java-sample.

Enjoy!

Automatic Restarting of Pods inside Replication Controller of Kubernetes Cluster

kubernetes-logo

A key feature of Kubernetes is its ability to maintain the “desired state” using declared primitives. The Replication Controller is a key concept that helps achieve this state.

A replication controller ensures that a specified number of pod “replicas” are running at any one time. If there are too many, it will kill some. If there are too few, it will start more.

Let’s take a look at how to spin up a Replication Controller with two replicas of a Pod. Then we’ll kill one pod and see how Kubernetes starts another Pod automatically.

Start Kubernetes Cluster

  1. The easiest way to start a Kubernetes cluster on Mac OS is using Vagrant:
  2. Alternatively, Kubernetes can be downloaded from github.com/GoogleCloudPlatform/kubernetes/releases/download/v1.0.0/kubernetes.tar.gz, and the cluster can be started as:

Start and Verify Replication Controller and Pods

  1. All configuration files required by Kubernetes to start the Replication Controller are in the kubernetes-java-sample project (a condensed command sketch for these steps follows this list). Clone the workspace:
  2. Start a Replication Controller that has two replicas of a pod, each with a WildFly container:

    The configuration file used is shown:

    Default WildFly Docker image is used here.
  3. Get status of the Pods:

    Notice -w refreshes the status whenever there is a change. The status changes from Pending to Running and then Ready to receive requests.
  4. Get status of the Replication Controller:

    If multiple Replication Controllers are running then you can query for this specific one using the label:
  5. Get name of the running Pods:
  6. Find IP address of each Pod (using the name):

    And of the other Pod as well:
  7. A Pod’s IP address is accessible only inside the cluster. Log in to the minion to access WildFly’s main page hosted by the containers:
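
A condensed sketch of steps 1-7 (the file, label, Pod, and minion names are assumptions; the repository linked at the end of this post has the exact files):

# 1. Clone the workspace
git clone https://github.com/arun-gupta/kubernetes-java-sample.git
cd kubernetes-java-sample

# 2. Start the Replication Controller with two replicas
kubectl create -f wildfly-rc.yaml

# 3. Watch the Pods until they are Running
kubectl get pods -w

# 4. Status of the Replication Controller, optionally filtered by label
kubectl get rc
kubectl get rc -l name=wildfly

# 5. Names of the running Pods
kubectl get pods

# 6. Find the IP address of a Pod (repeat for the other Pod)
kubectl describe pod <POD_NAME> | grep IP

# 7. Log in to the minion (name depends on the Vagrant setup) and access
#    WildFly's welcome page served by the container
vagrant ssh minion-1
curl http://<POD_IP>:8080/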

Automatic Restart of Pods

Let’s delete a Pod and see how a new Pod is automatically created.
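
For example (the Pod name comes from the output described below and will differ in your cluster):

# Delete one of the Pods managed by the Replication Controller
kubectl delete pod wildfly-rc-15xg5

# The controller immediately schedules a replacement Pod
kubectl get pods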

Notice how the Pod with name wildfly-rc-15xg5 was deleted and a new Pod with the name wildfly-rc-0xoms was created.

Finally, delete the Replication Controller:
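
Assuming the Replication Controller is named wildfly-rc:

kubectl delete rc wildfly-rc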

The latest configuration files and detailed instructions are at kubernetes-java-sample.

In the real world, you’ll typically wrap this Replication Controller in a Service and front it with a load balancer. But that’s a topic for another blog!

Enjoy!

Minecon 2015 Wrapup

minecon2015-logo

From a gathering of ~50 people in 2010, Minecon 2015 with 10,000 attendees created a new world record for the biggest convention for a single game.

Minecon 2015 Experience

Do you want to know what it feels like to be at Minecon?

Minecraft Modding Workshop

Devoxx4Kids was fortunate to give Minecraft Modding workshops to ~200 kids at Minecon 2015. Feedback from all the parents and kids was quite outstanding. Glad we were able to ignite a spark in some kids and get them excited about programming and open source tools like Java, Eclipse, and Minecraft Forge.

Here are a couple of tweets:

All the instructions for minecraft modding are at minecraftmodding.org.

Many thanks to Mark Little and his son Adam, and my son, for helping make this a successful workshop. It’s very important that kids feel comfortable playing with open source tools, and are willing to hack!

Using Mods for Teaching Panel

I also got the opportunity to lead a panel on Using Mods for Teaching with @DorineFlies, @YouthDigital, and @_moonlapse.

Here are some of the questions we addressed:

  1. How are you involved with modding?
  2. How many students/kids have you reached so far?
  3. What languages/platforms do you use?
  4. Can modding be the right medium for first introduction to programming?
  5. What is an appropriate age to start modding?
  6. What can be done to fundamentally change STEM education in schools?
  7. What would you like from Mojang to improve the modding experience?

The panel was recorded and should be made available at youtube.com/user/TeamMojang/videos. I’ll update this blog when the exact link is available.

Minecraft Youtubers

One of the big crazes, and a genuine one, in the Minecraft world is YouTubers who produce videos of gameplay; many of them have 1M+ subscribers. Several of them were attending Minecon and we were fortunate to meet a few of them:

 

As Lydia walked around the main hall, most of the kids were super excited to meet their favorite youtubers!

Minecon 2015 Cape

Every Minecon attendee also gets a cape that their in-game character can wear to show off the fact that you attended this big celebration! The theme this year was the Iron Golem, and it looks as shown:

Minecon 2015 Cape

Minecraft Characters with Snaak

We also met the team behind Snaak and played around with creating some of the Minecraft characters using it.

 

Minecraft and HoloLens

A re-run of the HoloLens and Minecraft video was also shown during one of the keynotes, and a preview is available here:

Here is the complete opening ceremony animation:

Minecon 2015 Photo Album

Check out some pictures from our trip:

 
   
 
 

And the complete photo album:

To me the highlight of the conference was meeting @SeargeDP. If there is one name that is responsible for starting modding in Minecraft, that would be him! Many thanks to him for giving us a chance to deliver minecraft modding workshops at Minecon.

And then, of course, meeting @lexmanos, who is the lead developer of Minecraft Forge. We’ve authored an O’Reilly book (targeted at kids 8 years and older) and a video on this topic. Several Devoxx4Kids chapters around the world have delivered workshops using the instructions based on Minecraft Forge and explained at minecraftmodding.org.

Check out a nice testimonial about the book from one of the parents we met:

And last, but not least, many thanks to the Mojang team for keeping up the release cadence and supporting the different modding communities.

Minecraft is truly a revolutionary game and makes it possible to introduce Java programming to kids at a very early age!

Hopefully we get invited to Minecon 2016 again :)

 

Multi-container Applications using Docker Compose and Swarm

Docker Compose to Orchestrate Containers shows how to run two linked Docker containers using Docker Compose. Clustering Using Docker Swarm shows how to configure a Docker Swarm cluster.

This blog will show how to run a multi-container application created using Docker Compose in a Docker Swarm cluster.

Updated versions of Docker Compose and Docker Swarm were released with Docker 1.7.0.

Docker 1.7.0 CLI

Get the latest Docker CLI:

and check the version as:

Docker Machine 0.3.0

Get the latest Docker Machine as:

and check the version as:

Docker Compose 1.3.0

Get the latest Docker Compose as:

and verify the version as:
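
For all three tools above, the installed versions can be verified from a terminal (the expected values in the comments match the headings):

docker --version          # Docker version 1.7.0, build ...
docker-machine --version  # docker-machine version 0.3.0
docker-compose --version  # docker-compose version: 1.3.0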

Docker Swarm 0.3.0

Swarm is run as a Docker container and can be downloaded as:
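
Swarm ships as the official swarm image on Docker Hub, so pulling it is enough:

docker pull swarm

# Optionally verify the version of the swarm binary inside the image
docker run --rm swarm --version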

You can learn about Docker Swarm at docs.docker.com/swarm or Clustering using Docker Swarm.

Create Docker Swarm Cluster

The key components of Docker Swarm are shown below:

and explained in Clustering Using Docker Swarm.

  1. The easiest way of getting started with Swarm is by using the official Docker image (a command sketch for all of these steps follows this list):

    This command returns a discovery token, referred to as <TOKEN> in this document, which is the unique cluster id. It will be used when creating the master and nodes later. This cluster id is returned by the hosted discovery service on Docker Hub.

    It shows the output as:

    The last line is the <TOKEN>.

    Make sure to note this cluster id now as there is no means to list it later. This should be fixed with #661.

  2. Swarm is fully integrated with Docker Machine, so this is the easiest way to get started. Let’s create a Swarm master next:

    Replace <TOKEN> with the cluster id obtained in the previous step.

    --swarm configures the machine with Swarm, --swarm-master configures the created machine to be Swarm master. Swarm master creation talks to the hosted service on Docker Hub and informs that a master is created in the cluster.

  3. Connect to this newly created master and find some more information about it:

    This will show the output as:

  4. Create a Swarm node

    Replace <TOKEN> with the cluster id obtained in an earlier step.

    Node creation talks to the hosted service at Docker Hub and joins the previously created cluster. This is specified by --swarm-discovery token://... and specifying the cluster id obtained earlier.

  5. To make it a real cluster, let’s create a second node:

    Replace <TOKEN> with the cluster id obtained in the previous step.

  6. List all the nodes created so far:

    This shows the output similar to the one below:

    The machines that are part of the cluster have the cluster’s name in the SWARM column, and it is blank otherwise. For example, “lab” and “summit2015” are standalone machines, whereas all other machines are part of the “swarm-master” cluster. The Swarm master is also identified by (master) in the SWARM column.

  7. Connect to the Swarm cluster and find some information about it:

    This shows the output as:

    There are 3 nodes – one Swarm master and 2 Swarm nodes. There are a total of 4 containers running in this cluster – one Swarm agent on the master and on each node, plus an additional swarm-agent-master running on the master.

  8. List nodes in the cluster with the following command:

    This shows the output as:
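
A command sketch for these steps (the VirtualBox driver and the machine names are assumptions, except swarm-node-02 which is referenced in the next section):

# 1. Create a cluster and note the returned discovery token (<TOKEN>)
docker run --rm swarm create

# 2. Create the Swarm master
docker-machine create -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery token://<TOKEN> \
  swarm-master

# 3. Point the Docker CLI at the master and inspect it
eval "$(docker-machine env swarm-master)"
docker info

# 4. and 5. Create two Swarm nodes that join the same cluster
docker-machine create -d virtualbox --swarm \
  --swarm-discovery token://<TOKEN> swarm-node-01
docker-machine create -d virtualbox --swarm \
  --swarm-discovery token://<TOKEN> swarm-node-02

# 6. List all the machines created so far
docker-machine ls

# 7. Connect to the Swarm cluster (note the --swarm flag) and inspect it
eval "$(docker-machine env --swarm swarm-master)"
docker info

# 8. List the nodes registered in the cluster
docker run --rm swarm list token://<TOKEN>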

Deploy Java EE Application to Docker Swarm Cluster using Docker Compose

Docker Compose to Orchestrate Containers explains how multi container applications can be easily started using Docker Compose.

  1. Use the docker-compose.yml file explained in that blog to start the containers (a command sketch for these steps follows this list):

    The docker-compose.yml file looks like:
  2. Check the containers running in the cluster as:

    to see the output as:
  3. “swarm-node-02” is running three containers, so let’s look at the list of containers running there:

    and see the list of running containers as:
  4. Application can then be accessed again using:

    and shows the output as:
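
A sketch of these steps (the application endpoint in the last step is left generic because it depends on the sample application in the linked repository):

# 1. Point the CLI at the Swarm cluster and start the application
eval "$(docker-machine env --swarm swarm-master)"
docker-compose up -d

# 2. Containers scheduled across the cluster
docker ps

# 3. Containers running on a specific node
eval "$(docker-machine env swarm-node-02)"
docker ps

# 4. Access the application on the node where the WildFly container landed
curl http://$(docker-machine ip swarm-node-02):8080/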

Latest instructions for this setup are always available at: github.com/javaee-samples/docker-java/blob/master/chapters/docker-swarm.adoc.

Enjoy!

Microservices and DevOps Journey at Wix

Wix.com started their journey on DevOps and microservices about two years ago and recently switched from a monolithic application to a microservices-based application. Yes, it took them a full two years to complete the transition from monolith to microservices!

I got connected with Aviran Mordo (@aviranm), head of backend engineering at Wix, on Twitter.

They migrated to microservices because the “system could not scale” and the requirements for functional components were varied. The journey took their WAR-based deployment on Tomcat to a fat JAR with embedded Jetty. On a side note, take a look at WildFly Swarm if you are interested in a similar approach for Java EE applications.

Video Interview

I discussed some points with him about this journey, and you can watch it too.

In this discussion, you’ll learn:

  • Why are Continuous Delivery and DevOps important requirements for microservices?
  • How they migrated from a big monolith to smaller monoliths and then a full-blown microservices architecture
  • How were database referential integrity constraints moved from the database to the application?
  • “micro” in microservices refers to the area of responsibility, nothing to do with LOC
  • Neither REST nor messaging was used for communication between different services. Which protocol was used? JSON-RPC
  • How do services register and discover each other? Is that required during early phases?
  • Why are YAGNI and KISS important?
  • Chef for configuration management and how to make it accessible for massive deployments
  • TeamCity for CI
  • Is 100% automation a requirement? Can 100% automation be achieved? Learn about Petri, Wix’s open source framework for A/B testing
  • Relevance of hybrid cloud (Google, Amazon, Private data center) and redundancy
  • Hardest part of migrating from monolith to microservice
  • How much code was repurposed during refactoring?
  • Where was the most effort spent during the two years of migration?
  • Distributed transactions
  • What was the biggest challenge in DevOps journey? Look out for a nice story towards the end that could be motivating for your team as well 😉

Additional Material

Watch the slides from DevoxxUK:

You can also learn more about their architecture in Scaling Wix to 60m Users.

Enjoy!

DevNation and Red Hat Summit 2015 Wrapup

RedHat Summit Logo DevNation Logo

Red Hat Summit and DevNation is a wrap!

It took two full nights of sleep and a long afternoon nap to fully recover from the excitement, stimulation, and exhaustion that sets in after meeting awesome developers, customers, partners, colleagues, and geeks from around the world. The fact that I gave four talks and one hands-on lab, participated in two panels, ran a Devoxx4Kids event, talked to lots of analysts, signed books, attended breakfasts/lunches/dinners/receptions, and ran every morning by the Charles River – all within 6 days – added to the exhaustion as well 😉

In the end, it was very rewarding and inspiring to see the work others are doing!

The complete set of slides is available at redhat.com/summit/2015/presentations. Here are links to the slides from my sessions:

Watch the middleware keynote by Craig Muzilla:

Watch Burr Sutter’s geek show starting at ~19:00.

Learn/understand more about our middleware offerings using Accelerate, Integrate, and Automate.

Some pictures from the event …

 
 
   
   
 

Complete album …

Here are some other photo albums:

Enjoy!