Miles to go …

August 12, 2009

TOTD #92: Session Failover for Rails applications running on GlassFish

Filed under: glassfish, rails, totd — Tags: , — arungupta @ 3:00 am

GlassFish High Availability allows you to set up a cluster of GlassFish instances and achieve a highly scalable architecture using in-memory session state replication. Such a cluster can be created and tested very easily using the "clusterjsp" sample bundled with GlassFish. Here are some clustering-related entries published on this blog so far:

  • TOTD #84 shows how to setup Apache + mod_proxy balancer for Ruby-on-Rails load balancing
  • TOTD #81 shows how to use nginx to front end a cluster of GlassFish Gems
  • TOTD #69 explains how a GlassFish cluster can be front-ended using Sun Web Server and Load Balancer Plugin
  • TOTD #67 shows the same thing using Apache httpd + mod_jk

#67 & #69 use a web application "clusterjsp" (bundled with GlassFish) that uses JSP to demonstrate in-memory session state replication. This blog creates a similar application, "clusterrails", this time using Ruby-on-Rails, and deploys it on GlassFish v2.1.1. The idea is to demonstrate how Rails applications can leverage the in-memory session replication feature of GlassFish.

Rails applications can be easily deployed as a WAR file on GlassFish v2, as explained in TOTD #73. This blog will guide you through the steps of creating the Controller and View to mimic "clusterjsp" and configuring the Rails application for session replication.

  1. Create a template Rails application and create/migrate the database. Add a Controller/View as:
    ~/samples/jruby/session >~/tools/jruby/bin/jruby script/generate controller home index
    JRuby limited openssl loaded. gem install jruby-openssl for full support.

    http://wiki.jruby.org/wiki/JRuby_Builtin_OpenSSL

    exists  app/controllers/
    exists  app/helpers/
    create  app/views/home
    exists  test/functional/
    create  test/unit/helpers/
    create  app/controllers/home_controller.rb
    create  test/functional/home_controller_test.rb
    create  app/helpers/home_helper.rb
    create  test/unit/helpers/home_helper_test.rb
    create  app/views/home/index.html.erb

  2. Edit the controller in “app/controllers/home_controller.rb” and change the code to (explained below):
    class HomeController < ApplicationController
      include Java

      def index
        @server_served = servlet_request.get_server_name
        @port = servlet_request.get_server_port
        @instance = java.lang.System.get_property("com.sun.aas.instanceName")
        @server_executed = java.net.InetAddress.get_local_host.get_host_name
        @ip = java.net.InetAddress.get_local_host.get_host_address
        @session_id = servlet_request.session.get_id
        @session_created = servlet_request.session.get_creation_time
        @session_last_accessed = servlet_request.session.get_last_accessed_time
        @session_inactive = servlet_request.session.get_max_inactive_interval

        if params[:name] != nil
          servlet_request.session[params[:name]] = params[:value]
        end

        @session_values = ""
        value_names = servlet_request.session.get_attribute_names
        unless value_names.has_more_elements
          @session_values = "<br>No parameter entered for this request"
        else
          @session_values << "<UL>"
          while value_names.has_more_elements
            param = value_names.next_element
            unless param.starts_with?("__")
              value = servlet_request.session.get_attribute(param)
              @session_values << "<LI>" + param + " = " + value + "</LI>"
            end
          end
          @session_values << "</UL>"
        end
      end

      def adddata
        servlet_request.session.set_attribute(params[:name], params[:value])
        render :action => "index"
      end

      def cleardata
        servlet_request.session.invalidate
        render :action => "index"
      end
    end

    The "index" action initializes several instance variables using the "servlet_request" variable, mapped from the "javax.servlet.http.HttpServletRequest" class. The "servlet_request" provides access to different properties of the received request, such as server name/port, host name/address, and others. It also uses an application server specific property, "com.sun.aas.instanceName", to fetch the name of the particular instance serving the request. In this blog we'll create a cluster with 2 instances. The action then prints the session attribute name/value pairs entered so far.

    The "adddata" action takes the name/value pair entered on the page and stores it in the servlet session. The "cleardata" action clears any data stored in the session.
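The attribute filtering done in the "index" action (internal attributes whose names start with "__" are skipped) can be sketched in plain Ruby; a Hash stands in for the servlet session here, which is an assumption made purely for testability:

```ruby
# A plain-Ruby sketch of the session-attribute filtering in the "index"
# action above: container-internal attributes (names starting with "__")
# are skipped, and the rest are rendered as an HTML list.
def format_session_values(session)
  visible = session.reject { |name, _| name.start_with?("__") }
  return "<br>No parameter entered for this request" if visible.empty?
  "<UL>" + visible.map { |name, value| "<LI>#{name} = #{value}</LI>" }.join + "</UL>"
end

puts format_session_values("aaa" => "111", "__private" => "internal")
# prints <UL><LI>aaa = 111</LI></UL>
```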

  3. Edit the view in “app/views/home/index.html.erb” and change to (explained below):
    <h1>Home#index</h1>
    <p>Find me in app/views/home/index.html.erb</p>
    <B>HttpSession Information:</B>
    <UL>
    <LI>Served From Server:   <b><%= @server_served %></b></LI>
    <LI>Server Port Number:   <b><%= @port %></b></LI>
    <LI>Executed From Server: <b><%= @server_executed %></b></LI>
    <LI>Served From Server instance: <b><%= @instance %></b></LI>
    <LI>Executed Server IP Address: <b><%= @ip %></b></LI>
    <LI>Session ID:    <b><%= @session_id %></b></LI>
    <LI>Session Created:  <%= @session_created %></LI>
    <LI>Last Accessed:    <%= @session_last_accessed %></LI>
    <LI>Session will go inactive in  <b><%= @session_inactive %> seconds</b></LI>
    </UL>
    <BR>
    <% form_tag "/session/home/index" do %>
    <label for="name">Name of Session Attribute:</label>
    <%= text_field_tag :name, params[:name] %><br>

    <label for="value">Value of Session Attribute:</label>
    <%= text_field_tag :value, params[:value] %><br>

    <%= submit_tag "Add Session Data" %>
    <% end %>
    <% form_tag "/session/home/cleardata" do %>
    <%= submit_tag "Clear Session Data" %>
    <% end %>
    <% form_tag "/session/home/index" do %>
    <%= submit_tag "Reload Page" %>
    <% end %>
    <BR>
    <B>Data retrieved from the HttpSession: </B>
    <%= @session_values %>

    The view dumps the property values initialized in the action. It then contains forms to enter session name/value pairs, clear the session, and reload the page. The application is now ready; let's configure it for WAR packaging.

  4. Generate a template “web.xml” and copy it to “config” directory as:
    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble war:webxml
    mkdir -p tmp/war/WEB-INF
    ~/samples/jruby/session >cp tmp/war/WEB-INF/web.xml config/
    1. Edit “tmp/war/WEB-INF/web.xml” and change the first few lines from:
      <!DOCTYPE web-app PUBLIC
      "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
      "http://java.sun.com/dtd/web-app_2_3.dtd">
      <web-app>

      to

      <web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">

      This is required because the element to be added next is introduced in the Servlet 2.4 specification.

    2. Add the following element:
      <distributable/>

      as the first element, right after “<web-app>”. This element marks the web application to be distributable across multiple JVMs in a cluster.
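Putting the two edits together, the top of the modified "config/web.xml" should look roughly like this (a sketch; everything after the comment stays exactly as Warbler generated it):

```xml
<web-app version="2.4" xmlns="http://java.sun.com/xml/ns/j2ee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/j2ee http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd">
  <distributable/>
  <!-- context-param, filter, listener, etc. elements generated by Warbler follow here -->
</web-app>
```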

  5. Generate and configure "config/warble.rb" as described in TOTD #87. This configuration is an important step; otherwise you'll encounter JRUBY-3789. Create a WAR file as:
    ~/samples/jruby/session >~/tools/jruby/bin/jruby -S warble
    mkdir -p tmp/war/WEB-INF/gems/specifications
    cp /Users/arungupta/tools/jruby-1.3.0/lib/ruby/gems/1.8/specifications/rails-2.3.2.gemspec tmp/war/WEB-INF/gems/specifications/rails-2.3.2.gemspec

    . . .

    mkdir -p tmp/war/WEB-INF
    cp config/web.xml tmp/war/WEB-INF
    jar cf session.war  -C tmp/war .

  6. Download the latest GlassFish v2.1.1, install/configure GlassFish, and create/configure/start a cluster using the script described here. Make sure to change the download location and filename in the script. This script creates a cluster "wines" with two instances: "cabernet" running on port 58080 and "merlot" running on port 58081.
  7. Deploy the application using the command:
    ~/samples/jruby/session >asadmin deploy --target wines --port 5048 --availabilityenabled=true session.war

Now, the screenshots from the two instances are shown and explained below. The two (or more) instances are front-ended by a load balancer, so none of this is typically visible to the user, but it helps in understanding what is going on.
Here is a snapshot of this application deployed on “cabernet”:

The instance name and the session id are highlighted in the red box. It also shows the time when the session was created in the "Session Created" field.

And now the same application from "merlot":

Notice that the session id exactly matches the one from the "cabernet" instance. Similarly, "Session Created" matches, but "Last Accessed" does not because the same session is accessed from a different instance.

Let's enter some session data in the "cabernet" instance and click on the "Add Session Data" button as shown below:

The session attribute name is "aaa" and the value is "111". Also, the "Last Accessed" time is updated. In the "merlot" page, click on the "Reload Page" button and the same session name/value pairs are retrieved, as shown below:

Notice that the "Last Accessed" time is after the time shown in the "cabernet" instance. The session information added in "cabernet" is automatically replicated to the "merlot" instance.

Now, let's add a new session name/value pair in the "merlot" instance as shown below:

The "Last Accessed" time is updated and the new session name/value pair ("bbb"/"222") is shown on the page. Click on "Reload Page" in the "cabernet" instance as shown below:

This time the session information added to "merlot" is replicated to "cabernet".

So any session information added in “cabernet” is replicated to “merlot” and vice versa.

Now, let's stop the "cabernet" instance as shown below:

and click on "Reload Page" in the "merlot" instance to see the following:

Even though the instance on which the session data was added is stopped, the replicating instance continues to serve both the session values.

As explained earlier, these two instances are typically front-ended by a load balancer running at port 80. So the user makes a request to port 80, and the correct session values are served even if one of the instances goes down, thereby providing high availability through in-memory session replication.

Please leave suggestions on other TOTD that you’d like to see. A complete archive of all the tips is available here.

Technorati: totd glassfish clustering rubyonrails jruby highavailability loadbalancer


April 30, 2009

TOTD #81: How to use nginx to load balance a cluster of GlassFish Gem ?

Filed under: glassfish, rails, totd — Tags: , , — arungupta @ 4:00 am
nginx (pronounced "engine-x") is an open-source and high-performance HTTP server. It provides common features such as reverse proxying with caching, load balancing, a modular architecture using filters (gzipping, chunked responses, etc.), virtual servers, flexible configuration, and much more.

nginx is known for its high performance and low resource consumption. It's a fairly popular front-end HTTP server in the Rails community, along with Apache, Lighttpd, and others. This TOTD (Tip Of The Day) will show how to install/configure nginx for load-balancing/front-ending a cluster of Rails applications running on the GlassFish Gem.

  1. Download, build, and install nginx using the simple script (borrowed from dzone):

    ~/tools > curl -L -O http://sysoev.ru/nginx/nginx-0.6.36.tar.gz
    ~/tools > tar -xzf nginx-0.6.36.tar.gz
    ~/tools > curl -L -O http://downloads.sourceforge.net/pcre/pcre-7.7.tar.gz
    ~/tools > tar -xzf pcre-7.7.tar.gz
    ~/tools/nginx-0.6.36 > ./configure --prefix=/usr/local/nginx --sbin-path=/usr/sbin --with-debug --with-http_ssl_module --with-pcre=../pcre-7.7
    ~/tools/nginx-0.6.36 > make
    ~/tools/nginx-0.6.36 > sudo make install
    ~/tools/nginx-0.6.36 > which nginx
    /usr/sbin/nginx

    OK, nginx is now roaring and can be verified by visiting “http://localhost” as shown below:

  2. Create a simple Rails scaffold as:
    ~/samples/jruby >~/tools/jruby/bin/jruby -S rails runner
    ~/samples/jruby/runner >~/tools/jruby/bin/jruby script/generate scaffold runlog miles:float minutes:integer
    ~/samples/jruby/runner >sed s/'adapter: sqlite3'/'adapter: jdbcsqlite3'/ <config/database.yml >config/database.yml.new
    ~/samples/jruby/runner >mv config/database.yml.new config/database.yml
    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S rake db:migrate
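The sed substitution above can equivalently be done in Ruby; the snippet below shows the same edit on an inline string (a trimmed example of the generated file, not its full contents), whereas in practice you would read and rewrite "config/database.yml":

```ruby
# Switch the sqlite3 adapter to the JDBC-backed variant, exactly as the
# sed command does. The YAML below is a trimmed stand-in for the file.
yml = <<YML
development:
  adapter: sqlite3
  database: db/development.sqlite3
YML

patched = yml.gsub("adapter: sqlite3", "adapter: jdbcsqlite3")
puts patched
```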
  3. Run this application using GlassFish Gem on 3 separate ports as:
    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish
    Starting GlassFish server at: 192.168.1.145:3000 in development environment…
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    The default port is 3000. Start the second one by explicitly specifying the port using the "-p" option ..

    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish -p 3001
    Starting GlassFish server at: 192.168.1.145:3001 in development environment…
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    and the last one on port 3002 …

    ~/samples/jruby/runner >~/tools/jruby/bin/jruby -S glassfish -p 3002
    Starting GlassFish server at: 192.168.1.145:3002 in development environment…
    Writing log messages to: /Users/arungupta/samples/jruby/runner/log/development.log.
    Press Ctrl+C to stop.

    On Solaris and Linux, you can run GlassFish as a daemon as well.

  4. nginx's built-in upstream module uses a simple round-robin algorithm. Other balancing modules, such as nginx-upstream-fair (fair proxy) and nginx-ey-balancer (maximum connections), are also available. The built-in algorithm will be used for this blog. Edit "/usr/local/nginx/conf/nginx.conf" to specify an upstream module which provides load balancing:
    1. Create a cluster definition by adding an upstream module (configuration details) right before the “server” module:

      upstream glassfish {
          server 127.0.0.1:3000;
          server 127.0.0.1:3001;
          server 127.0.0.1:3002;
      }

      The cluster specifies a bunch of GlassFish Gem instances running at the backend. Each server can be weighted differently, as explained here. The port numbers must exactly match those specified at startup. The modified "nginx.conf" looks like:

      The changes are highlighted on lines #35 through #39.
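As an aside, per-server weights (and markers such as "backup") go directly on the server lines of the upstream block. For example, to send roughly twice as much traffic to the first instance (a sketch, not used in this setup):

```nginx
upstream glassfish {
    server 127.0.0.1:3000 weight=2;  # receives ~2x the requests
    server 127.0.0.1:3001;
    server 127.0.0.1:3002 backup;    # used only when the others are unavailable
}
```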

    2. Configure load balancing by specifying this cluster using “proxy_pass” directive as shown below:
      proxy_pass http://glassfish;

      in the “location” module. The updated “nginx.conf” looks like:

      The change is highlighted on line #52.

  5. Restart nginx by using the following commands:
    sudo kill -15 `cat /usr/local/nginx/logs/nginx.pid`
    sudo nginx

    Now “http://localhost” shows the default Rails page as shown below:

    “http://localhost/runlogs” now serves the page from the deployed Rails application.

    Now let's configure logging so that the upstream server IP address and port are printed in the log files. In "nginx.conf", uncomment the "log_format" directive and add the "$upstream_addr" variable as shown:

        log_format  main  '$remote_addr - [$upstream_addr] $remote_user [$time_local] $request '
                          '"$status" $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

        access_log  logs/access.log  main;

    Also change the log format to “main” by uncommenting “access_log logs/access.log main;” line as shown above (default format is “combined”). Accessing “http://localhost/runlogs” shows the following lines in “logs/access.log”:

    127.0.0.1 - [127.0.0.1:3000] - [29/Apr/2009:15:27:57 -0700] GET /runlogs/ HTTP/1.1 "200" 3689 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
    127.0.0.1 - [127.0.0.1:3001] - [29/Apr/2009:15:27:57 -0700] GET /favicon.ico HTTP/1.1 "200" 0 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
    127.0.0.1 - [127.0.0.1:3002] - [29/Apr/2009:15:27:57 -0700] GET /stylesheets/scaffold.css?1240977992 HTTP/1.1 "200" 889 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"

    The browser makes multiple requests (3 in this case) to load resources on a page, and they are nicely load-balanced across the cluster. If the instance running on port 3002 is killed, then the access log shows entries like:

    127.0.0.1 - [127.0.0.1:3000] - [29/Apr/2009:15:28:53 -0700] GET /runlogs/ HTTP/1.1 "200" 3689 "-" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
    127.0.0.1 - [127.0.0.1:3002, 127.0.0.1:3000] - [29/Apr/2009:15:28:53 -0700] GET /favicon.ico HTTP/1.1 "200" 0 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"
    127.0.0.1 - [127.0.0.1:3001] - [29/Apr/2009:15:28:53 -0700] GET /stylesheets/scaffold.css?1240977992 HTTP/1.1 "200" 889 "http://localhost/runlogs/" "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_5_6; en-us) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/3.2.1 Safari/525.27.1" "-"

    The second log line shows that the server running on port 3002 did not respond, so nginx automatically fell back to the instance on port 3000. Nice!
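The bracketed "$upstream_addr" field is easy to pull out of the log programmatically; here is a small Ruby sketch, assuming the "main" format shown above where the first bracketed field is the upstream address:

```ruby
# Return the upstream address(es) that served a request, i.e. the first
# bracketed field of an access-log line written with the "main" format.
# On failover nginx records both addresses, comma-separated.
def upstream_addr(line)
  line[/\[([^\]]*)\]/, 1]
end

ok_line   = '127.0.0.1 - [127.0.0.1:3000] - [29/Apr/2009:15:27:57 -0700] GET /runlogs/ HTTP/1.1 "200" 3689'
fail_line = '127.0.0.1 - [127.0.0.1:3002, 127.0.0.1:3000] - [29/Apr/2009:15:28:53 -0700] GET /favicon.ico HTTP/1.1 "200" 0'

puts upstream_addr(ok_line)    # prints 127.0.0.1:3000
puts upstream_addr(fail_line)  # prints 127.0.0.1:3002, 127.0.0.1:3000
```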

    But this is inefficient because a back-end trip is made even to serve static files ("/favicon.ico" and "/stylesheets/scaffold.css?1240977992"). This can be easily solved by enabling Rails page caching, as described here and here.

    More logging options are described in NginxHttpLogModule, and upstream module variables are defined in NginxHttpUpstreamModule.

Here are some nginx resources:

  • nginx Website
  • nginx Forum (very useful)
  • nginx Wiki
  • IRC #nginx

Are you using nginx to front-end your GlassFish cluster?

Apache + JRuby + Rails + GlassFish = Easy Deployment! shows similar steps if you want to front-end your Rails application running on JRuby/GlassFish with Apache.

Hear all about it in the Develop with Pleasure, Deploy with Fun: GlassFish and NetBeans for a Better Rails Experience session at RailsConf next week.

Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: rubyonrails glassfish v3 gem jruby nginx loadbalancing clustering


February 12, 2009

TOTD #69: GlassFish High Availability/Clustering using Sun Web Server + Load Balancer Plugin on Windows Vista

Filed under: general, totd — Tags: , , , , , , , , — arungupta @ 12:15 am

TOTD #67 shows how to configure GlassFish High Availability using Apache httpd + mod_jk on Mac OS X. Even though that's a standard and supported configuration, there are several advantages to replacing Apache httpd with Sun Web Server and mod_jk with the Load Balancer plugin that comes with GlassFish.

This Tip Of The Day (TOTD) shows how to configure Clustering and Load Balancing using GlassFish v2.1, Sun Web Server, and the Load Balancer plugin on Windows Vista. This blog is using JDK 6 U7, GlassFish v2.1 (cluster profile), Sun Web Server 7 U4, and the Load Balancer plug-in that comes with Sun GlassFish Enterprise Server 2.1 Enterprise Profile (with HADB link).

Let's get started!

  1. Install the required software
    1. Download JDK (if not already installed).
    2. Download and install GlassFish v2.1. Make sure to configure it using "ant -f setup-cluster.xml". This will ensure that the created domain is capable of creating clusters and can perform in-memory session replication for applications deployed on the cluster.
    3. Download and install Sun Web Server. The process is very simple: unzip the downloaded bundle, click on "setup.exe", and take all the defaults.
    4. Download GlassFish Enterprise Profile for the Load Balancer plugin bits. Start the install by clicking on the downloaded file and select the options as shown below:
    5. Copy the following "loadbalancer.xml" into the "https-<host>" directory (replace <host> with the host name of your machine) of the Sun Web Server installation directory:
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE loadbalancer PUBLIC
      "-//Sun Microsystems Inc.//DTD Sun Java System Application Server 9.1//EN"
      "file:///C:/Sun/WebServer7/https-LH-KRKZDW6CJE1V/config/sun-loadbalancer_1_2.dtd">

      <loadbalancer>
        <cluster name="cluster1" policy="round-robin" policy-module="">
          <instance name="instance1" enabled="true"
              disable-timeout-in-minutes="60" listeners="http://localhost:38080" weight="100"/>
          <instance name="instance2" enabled="true"
              disable-timeout-in-minutes="60" listeners="http://localhost:38081" weight="100"/>
          <web-module context-root="/clusterjsp"
              disable-timeout-in-minutes="30" enabled="true" error-url=""/>
          <health-checker interval-in-seconds="7" timeout-in-seconds="5" url="/"/>
        </cluster>
        <property name="response-timeout-in-seconds" value="120"/>
        <property name="reload-poll-interval-in-seconds" value="7"/>
        <property name="https-routing" value="false"/>
        <property name="require-monitor-data" value="false"/>
        <property name="active-healthcheck-enabled" value="false"/>
        <property name="number-healthcheck-retries" value="3"/>
        <property name="rewrite-location" value="true"/>
      </loadbalancer>

      The parameters to be changed are highlighted in bold and explained below:

      1. Sun Web Server installation directory
      2. HTTP ports of the instances created in the cluster. The ports specified are the defaults and can be found by clicking on the instance as shown below:
      3. Context root of the application that will be deployed to the cluster. The Domain Administration Server (DAS) can be configured to populate this file whenever an application is deployed to the cluster.
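A stray quote in a hand-edited "loadbalancer.xml" can silently break the plugin, so a quick well-formedness check is worthwhile; for example, with JRuby's bundled REXML (a sketch using a trimmed inline document rather than the full file above):

```ruby
require "rexml/document"

# Parse the load balancer config and list each instance's listeners.
# REXML raises a ParseException on malformed XML, so quoting mistakes
# are caught before the Web Server ever reads the file.
def instance_listeners(xml)
  doc = REXML::Document.new(xml)
  doc.get_elements("//instance").map { |i| i.attributes["listeners"] }
end

xml = <<XML
<loadbalancer>
  <cluster name="cluster1" policy="round-robin">
    <instance name="instance1" enabled="true" listeners="http://localhost:38080"/>
    <instance name="instance2" enabled="true" listeners="http://localhost:38081"/>
  </cluster>
</loadbalancer>
XML

puts instance_listeners(xml).inspect
```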
  2. Create the cluster as explained in TOTD #67. The admin console shows the following screenshot after the cluster is created and all instances are created/started:

    and the following for 2 instances:

  3. Deploy "clusterjsp" as explained in TOTD #67. The admin console shows the following screenshot after "clusterjsp" is deployed:
  4. Start Sun Web Server using "startserv.bat" in the "https-<host>" directory.

This concludes the installation and configuration steps; now it's show time!

Accessing "http://localhost/clusterjsp" shows:

The Sun Web Server is running on port 80 and uses "loadbalancer.xml" to serve requests from the instances configured in the <loadbalancer> fragment. This particular page is served by "instance1", as indicated in the image. Let's add session data with property name "aaa" and value "111". The value is shown as:

The instance serving the data, "instance1" in this case, and the session data are highlighted.

Now let's stop "instance1" using the admin console, and it looks like:

Click on "RELOAD PAGE" and it looks like:

Exactly the same session data is served, this time by "instance2".

The sequence above proves that the session data created by the user is preserved even if the instance serving the data goes down. This is possible because of GlassFish High Availability. The session data is served by the "replica partner" where it has already been copied using in-memory session replication.

The following articles are also useful:

  • Load balancing for GlassFish v2 deployments using BIG-IP System
  • Configure the Cluster/Load Balancer with GlassFish v2

Please leave suggestions on other TOTD (Tip Of The Day) that you'd like to see. A complete archive of all tips is available here.

Technorati: totd glassfish highavailability clustering loadbalancing lbplugin sunwebserver windows vista


The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.