Adding Java EE 7 Batch Addon to JBoss Forge? – Part 7 (Tech Tip #41)

This is the seventh part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree), George Gastaldi (@gegastaldi) and I are interactively building a Forge addon to add Java EE 7 Batch functionality. So far, here is what the different parts have shown:

  • Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.
  • Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.
  • Part 3 showed how parameters can be made required, how templates for the reader, processor, and writer were created, and how the specified parameters were validated.
  • Part 4 added a new test for the command and showed how Forge can be used in debug mode.
  • Part 5 fixed a bug reported by a community member and started work to make processor validation optional.
  • Part 6 upgraded from 2.6.0 to 2.7.1 and started work on reader, processor, and writer template files.

This part shows:

  • Merged George's pull request into the workspace
  • Created the reader, processor, and writer source files if they do not already exist


As always, the evolving source code is available at

Next episode will add a new test for this functionality.

Posted in jboss, techtip | Leave a comment

Red Hat JBoss Data Grid 6.3 is now available!

Red Hat’s JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high performance, highly available and to scale linearly.

JBoss Data Grid is accessible to both Java and non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk, and easily accessible using the REST, Memcached, and Hot Rod protocols, or directly in-process through a traditional Java Map API.
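In library (embedded) mode, that Map-style access really is plain java.util.Map usage. A minimal sketch, using a ConcurrentHashMap as a local stand-in for a Data Grid cache (the real cache object implements java.util.Map):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapStyleAccess {
    public static void main(String[] args) {
        // A Data Grid cache implements java.util.Map; a plain concurrent map
        // stands in here to show the in-process access pattern.
        Map<String, String> cache = new ConcurrentHashMap<>();

        cache.put("user:42", "Duke");        // store a (k,v) entry
        String value = cache.get("user:42"); // read it back in-process

        System.out.println(value); // prints "Duke"
    }
}
```

With a real grid, the same put/get calls would transparently distribute and replicate the entry across the cluster.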

The key features of JBoss Data Grid are:

  • Schema-less key/value store for storing unstructured data
  • Querying to easily search and find objects
  • Security to store and restrict access to your sensitive data
  • Multiple access protocols with data compatibility for applications written in any language, using any framework
  • Transactions for data consistency
  • Distributed execution and map/reduce API to perform large scale, in-memory computations in parallel across the cluster
  • Cross-datacenter replication for high availability, load balancing and data partitioning

What’s new in 6.3?

  • Expanded security for your data
    • User authentication via Simple Authentication and Security Layer (SASL)
    • Role based authorization and access control to Cache Manager and Caches
    • New nodes are required to authenticate before joining a cluster
    • Encrypted communication within the cluster
  • Deploy into Apache Karaf and WebLogic
    • Use as an embedded or distributed cache in Red Hat JBoss Fuse integration flows
  • Enhanced map/reduce
    • Improved scalability by storing computation results directly in the grid instead of pushing them back to the application
    • Takes advantage of the hardware’s parallel processing power for greater computing efficiency
  • New JPA cache store that preserves data schema
  • Improved remote query and C# Hot Rod client in technology preview
  • JBoss Data Grid modules for JBoss Enterprise Application Platform (JBoss EAP)
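The map/reduce item above can be illustrated with a local word count. This is plain Java as a stand-in for the idea, not the Data Grid distributed-execution API:

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class LocalMapReduce {
    public static void main(String[] args) {
        // Map phase: split each value into words.
        // Reduce phase: count occurrences per word.
        // On the grid, each node runs the map phase over its own entries,
        // and since 6.3 the combined results can stay in the grid instead
        // of being pushed back to the application.
        Map<String, Long> counts = Arrays.stream("data grid data store".split(" "))
                .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));

        System.out.println(counts.get("data")); // prints 2
    }
}
```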

The complete list of new and updated features is described here.

How can this be installed on JBoss EAP?

JBoss Data Grid has two deployment modes:

  • Library mode (embedded distributed caches)
  • Client-Server mode (remote distributed cache) – Install the Hot Rod client JARs in EAP, and have applications reference these JARs to use the Hot Rod protocol to connect to the JBoss Data Grid Server (remote cache).

Why a new C# client?

The remote Hot Rod client is aware of the cluster topology and hashing scheme on the server and can get to a (k,v) entry in a single hop. In contrast, REST and memcached usually require an extra hop to get to an entry. As a result, the Hot Rod protocol has higher performance and is the preferred protocol (in Client-Server mode). JBoss Data Grid 6.1 only had a Java Hot Rod client – for all other languages, customers had to use memcached or REST. JBoss Data Grid 6.2 added a C++ Hot Rod client. And now JBoss Data Grid 6.3 adds a Tech Preview of a C# client.
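The single-hop behavior can be sketched in a few lines: because the client caches the cluster topology and knows the hash function, it computes the owning node locally and talks to it directly. This is an illustrative sketch, not the actual Hot Rod consistent-hashing algorithm:

```java
import java.util.Arrays;
import java.util.List;

public class SingleHopRouting {
    // Pick the node that owns a key, using the topology the client has cached.
    static String ownerOf(String key, List<String> nodes) {
        int bucket = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(bucket);
    }

    public static void main(String[] args) {
        List<String> topology = Arrays.asList("node-a", "node-b", "node-c");
        // The client computes the owner itself -- no extra hop through an
        // arbitrary node, unlike REST or memcached in front of the grid.
        System.out.println(ownerOf("user:42", topology));
    }
}
```

A REST or memcached front end, by contrast, typically lands on an arbitrary node, which then forwards the request to the owner – the extra hop.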

Infinispan has a lot more Hot Rod clients.

How would somebody use JBoss Data Grid with JBoss Fuse?

The primary purpose is caching in integration workflows.

For example, remote JBoss Data Grid can be used with Fuse to cache search results.
REST can be used to communicate with a remote cache, and starting with JBoss Data Grid 6.3, Hot Rod can be used as well.

Fuse currently has the camel-cache component, which is based on EHCache. A new camel-infinispan component has also been released in the community.

We plan to productize the camel-infinispan component in a future release.

Why would somebody use JBoss Data Grid on WebLogic?

Customers who run the WebLogic stack and eventually want to migrate to the JBoss stack can start the migration by replacing Oracle Coherence with JBoss Data Grid. Here is a comparison between the two offerings:

The complete documentation is available here and quick references are below:

Some useful references:


Posted in jboss, redhat | Leave a comment

Adding Java EE 7 Batch Addon to JBoss Forge? – Part 6 (Tech Tip #40)

This is the sixth part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, how templates for the reader, processor, and writer were created, and how the specified parameters were validated.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

Part 5 fixed a bug reported by a community member and started work to make processor validation optional.

This part shows:

  • Upgrade from Forge 2.6.0 to 2.7.1
  • Fix the failing test
  • Reader, processor, and writer files are now templates instead of source files
  • Reader, processor, and writer are injected appropriately in test’s temp project


As always, the evolving source code is available at

The debugging will continue in the next episode.

Posted in jboss, techtip | Tagged , , | Leave a comment

Shape the future of JBoss EAP and WildFly Web Console

Are you using WildFly?

Any version of JBoss EAP?

Would you like to help us define what the Web Console for future versions should look like?


Help the Red Hat UX Design team shape the future of JBoss EAP and WildFly!

We are currently working to improve the usability and information architecture of the web-based admin console. By taking part in a short exercise you will help us better understand how users interpret the information and accomplish their goals.

You do not need to be an expert user of the console to participate in this study. The activity shouldn’t take longer than 10 to 15 minutes to complete.

To start participating in the study, click on the link below and follow the instructions.

I completed the study in about 12 mins and was happy that my clicking around helped shape the future of JBoss EAP and WildFly!

Just take a quick detour from your routine for 10-15 mins and take the study.

Thank you in advance for taking the time to complete the study.

Posted in jboss, redhat | Tagged , | Leave a comment

Getting Started with Docker (Tech Tip #39)

If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators is taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it did, but apparently it won’t. But it does solve one problem really well!

Let’s hear it from @solomonstre – the creator of the Docker project!

In short, Docker simplifies software delivery by making it easy to build and share images that contain your application’s entire environment, or application operating system.

What is meant by application operating system?

Your application typically requires a specific version of the operating system, application server, JDK, and database server, may require tuning configuration files, and similarly has multiple other dependencies. The application may need binding to specific ports and a certain amount of memory. The components and configuration required to run your application are together what is referred to as the application operating system.

You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component.
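Such an image is typically described in a Dockerfile. A hypothetical, minimal example (the base image, package, and file names here are illustrative, not taken from any official image):

```dockerfile
# Start from a base OS image and layer the application environment on top.
FROM fedora:20

# Install the JDK the application needs (illustrative package name).
RUN yum install -y java-1.7.0-openjdk && yum clean all

# Add the application and document the port it binds to.
COPY myapp.jar /opt/myapp/myapp.jar
EXPOSE 8080

# The command a container created from this image will run.
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Building this with docker build yields an image that carries the application and its application operating system as one unit.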

What are the main components of Docker?

Docker has two main components:

  • Docker: the open source container virtualization platform
  • Docker Hub: SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, constraining resources, snapshotting and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details on this.

Images are the “build component” of Docker – a read-only template of the application operating system. Containers are the runtime representation created from images; they are the “run component” of Docker. Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the “distribution component” of Docker.

Docker in turn contains two components:

  • The daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
  • The client is a Docker binary that accepts commands from the user and communicates back and forth with the daemon.

How do these work together?

The client communicates with the daemon, either co-located on the same host or on a different host. It requests the daemon to pull an image from the repository using the pull command. The daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the daemon host.


The client can then start the container using the run command. The complete list of client commands can be seen here.

The client communicates with the daemon using sockets or the REST API.

Because Docker uses Linux kernel features, does that mean I can use it only on Linux-based machines?

Docker daemon and client for different operating systems can be installed from As you can see, it can be installed on a wide variety of platforms, including Mac and Windows.

For non-Linux machines, a lightweight virtual machine needs to be installed and the daemon runs within it. A native client is then installed on the machine and communicates with the daemon. Here is the log from booting the Docker daemon on Mac:

~> bash
~> mkdir -p ~/.boot2docker
~> if [ ! -f ~/.boot2docker/boot2docker.iso ]; then cp /usr/local/share/boot2docker/boot2docker.iso ~/.boot2docker/ ; fi
~> /usr/local/bin/boot2docker init 
2014/07/16 09:57:13 Virtual machine boot2docker-vm already exists
~> /usr/local/bin/boot2docker up && export DOCKER_HOST=tcp://$(/usr/local/bin/boot2docker ip 2>/dev/null):2375
2014/07/16 09:57:13 Waiting for VM to be started...
2014/07/16 09:57:35 Started.
2014/07/16 09:57:35 To connect the Docker client to the Docker daemon, please set:
2014/07/16 09:57:35     export DOCKER_HOST=tcp://
~> docker version
Client version: 1.1.1
Client API version: 1.13
Go version (client): go1.2.1
Git commit (client): bd609d2
Server version: 1.1.1
Server API version: 1.13
Go version (server): go1.2.1
Git commit (server): bd609d2

For example, Docker Daemon and Client can be installed on Mac following the instructions at

The VM can be stopped from the CLI as:

boot2docker stop

And then restarted again as:

boot2docker boot

And logged in as:

boot2docker ssh

The complete list of boot2docker commands is available in the help:

~> boot2docker help
Usage: boot2docker [<options>] <command> [<args>]

boot2docker management utility.

    init                    Create a new boot2docker VM.
    up|start|boot           Start VM from any states.
    ssh [ssh-command]       Login to VM via SSH.
    save|suspend            Suspend VM and save state to disk.
    down|stop|halt          Gracefully shutdown the VM.
    restart                 Gracefully reboot the VM.
    poweroff                Forcefully power off the VM (might corrupt disk image).
    reset                   Forcefully power cycle the VM (might corrupt disk image).
    delete|destroy          Delete boot2docker VM and its disk image.
    config|cfg              Show selected profile file settings.
    info                    Display detailed information of VM.
    ip                      Display the IP address of the VM's Host-only network.
    status                  Display current state of VM.
    download                Download boot2docker ISO image.
    version                 Display version information.

Enough talk, show me an example!

Some of the JBoss projects are available as Docker images at and can be installed following the commands explained on that page. For example, the WildFly Docker image can be installed as:

~> docker pull jboss/wildfly
Pulling repository jboss/wildfly
2f170f17c904: Download complete 
511136ea3c5a: Download complete 
c69cab00d6ef: Download complete 
88b42ffd1f7c: Download complete 
fdbe853b54e1: Download complete 
bc93200c3ba0: Download complete 
0daf76299550: Download complete 
3a7e1274035d: Download complete 
e6e970a0db40: Download complete 
1e34f7a18753: Download complete 
b18f179f7be7: Download complete 
e8833789f581: Download complete 
159f5580610a: Download complete 
3111b437076c: Download complete

The image can be verified using the command:

~> docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jboss/wildfly       latest              2f170f17c904        8 hours ago         1.048 GB

Once the image is downloaded, the container can be started as:

docker run jboss/wildfly

By default, Docker containers do not provide an interactive shell or accept input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes the container interactive and the -t option allocates a pseudo-TTY.

In addition, we’d also like to make port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080, where 80 is the host port and 8080 is the container port.

So we’ll run the container as:

docker run -i -t -p 80:8080 jboss/wildfly

  JBoss Bootstrap Environment

  JBOSS_HOME: /opt/wildfly

  JAVA: java

  JAVA_OPTS:  -server -Xms64m -Xmx512m -XX:MaxPermSize=256m -Djboss.modules.system.pkgs=org.jboss.byteman -Djava.awt.headless=true


22:08:29,943 INFO  [org.jboss.modules] (main) JBoss Modules version 1.3.3.Final
22:08:30,200 INFO  [org.jboss.msc] (main) JBoss MSC version 1.2.2.Final
22:08:30,297 INFO  [] (MSC service thread 1-6) JBAS015899: WildFly 8.1.0.Final "Kenny" starting
22:08:31,935 INFO  [] (Controller Boot Thread) JBAS015888: Creating http management service using socket-binding (management-http)
22:08:31,961 INFO  [org.xnio] (MSC service thread 1-7) XNIO version 3.2.2.Final
22:08:31,974 INFO  [org.xnio.nio] (MSC service thread 1-7) XNIO NIO Implementation Version 3.2.2.Final
22:08:32,057 INFO  [] (ServerService Thread Pool -- 31) WFLYIO001: Worker 'default' has auto-configured to 16 core threads with 128 task threads based on your 8 available processors
22:08:32,108 INFO  [] (ServerService Thread Pool -- 32) JBAS010280: Activating Infinispan subsystem.
22:08:32,110 INFO  [] (ServerService Thread Pool -- 40) JBAS011800: Activating Naming Subsystem
22:08:32,133 INFO  [] (ServerService Thread Pool -- 45) JBAS013171: Activating Security Subsystem
22:08:32,178 INFO  [] (ServerService Thread Pool -- 38) JBAS012615: Activated the following JSF Implementations: [main]
22:08:32,206 WARN  [] (ServerService Thread Pool -- 46) JBAS010153: Node identifier property is set to the default value. Please make sure it is unique.
22:08:32,348 INFO  [] (MSC service thread 1-3) JBAS013170: Current PicketBox version=4.0.21.Beta1
22:08:32,397 INFO  [] (ServerService Thread Pool -- 48) JBAS015537: Activating WebServices Extension
22:08:32,442 INFO  [] (MSC service thread 1-13) JBAS010408: Starting JCA Subsystem (IronJacamar 1.1.5.Final)
22:08:32,512 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-9) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,512 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017502: Undertow 1.0.15.Final starting
22:08:32,570 INFO  [] (ServerService Thread Pool -- 27) JBAS010403: Deploying JDBC-compliant driver class org.h2.Driver (version 1.3)
22:08:32,660 INFO  [] (MSC service thread 1-10) JBAS010417: Started Driver service with driver-name = h2
22:08:32,736 INFO  [org.jboss.remoting] (MSC service thread 1-7) JBoss Remoting version 4.0.3.Final
22:08:32,836 INFO  [] (MSC service thread 1-15) JBAS011802: Starting Naming Service
22:08:32,839 INFO  [] (MSC service thread 1-15) JBAS015400: Bound mail session [java:jboss/mail/Default]
22:08:33,406 INFO  [org.wildfly.extension.undertow] (ServerService Thread Pool -- 47) JBAS017527: Creating file handler for path /opt/wildfly/welcome-content
22:08:33,540 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017525: Started server default-server.
22:08:33,603 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-8) JBAS017531: Host default-host starting
22:08:34,072 INFO  [org.wildfly.extension.undertow] (MSC service thread 1-13) JBAS017519: Undertow HTTP listener default listening on /
22:08:34,599 INFO  [] (MSC service thread 1-11) JBAS015012: Started FileSystemDeploymentService for directory /opt/wildfly/standalone/deployments
22:08:34,619 INFO  [] (MSC service thread 1-9) JBAS010400: Bound data source [java:jboss/datasources/ExampleDS]
22:08:34,781 INFO  [] (MSC service thread 1-13) JBWS022052: Starting JBoss Web Services - Stack CXF Server 4.2.4.Final
22:08:34,843 INFO  [] (Controller Boot Thread) JBAS015961: Http management interface listening on
22:08:34,844 INFO  [] (Controller Boot Thread) JBAS015951: Admin console listening on
22:08:34,845 INFO  [] (Controller Boot Thread) JBAS015874: WildFly 8.1.0.Final "Kenny" started in 5259ms - Started 184 of 233 services (81 services are lazy, passive or on-demand)

Container’s IP address can be found as:

~> boot2docker ip

The VM's Host only interface IP address is:

The started container can be verified using the command:

~> docker ps
CONTAINER ID        IMAGE                  COMMAND                CREATED             STATUS              PORTS                NAMES
b2f8001164b0        jboss/wildfly:latest   /opt/wildfly/bin/sta   46 minutes ago      Up 12 minutes       8080/tcp, 9990/tcp   sharp_pare

And now the WildFly server can be accessed on your local machine at and looks as shown:

Finally, the container can be stopped by hitting Ctrl + C, or by giving the command:

~> docker stop b2f8001164b0

The container id obtained from “docker ps” is passed to the command here.

More detailed instructions to use this image, such as booting in domain mode, deploying applications, etc. can be found at

What else would you like to see in the WildFly Docker image? File an issue at

Other images that are available at are:


Did you know that Red Hat is one of the top contributors to Docker, with 5 Red Hatters from Project Atomic working on it?

Posted in redhat, techtip, wildfly | Tagged , | 1 Comment

Adding Java EE 7 Batch Addon to JBoss Forge? – Part 5 (Tech Tip #38)

This is the fifth part (part 1, part 2, part 3, part 4) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, how templates for the reader, processor, and writer were created, and how the specified parameters were validated.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

This part shows:

  • Fixed a bug reported by a community member
  • Started work on another issue to make processor validation optional


As always, the evolving source code is available at

The debugging will continue in the next episode.

Posted in javaee, redhat, techtip | Tagged , , | Leave a comment

Defaults in Java EE 7 (Tech Tip #37)

The Java EE 7 platform added a few new specifications:

  • Java API for WebSocket 1.0
  • Batch Applications for Java 1.0
  • Java API for JSON Processing 1.0
  • Concurrency Utilities for Java EE 1.0

This is highlighted in the pancake diagram shown below:


Several of the existing specifications were updated to fill the gaps and provide a more cohesive platform. Some small, but rather significant additions, were made to the platform to provide defaults for different features. These defaults would lower the bar for application developers to build Java EE applications.

Let’s take a look at them.

  • Default CDI: Java EE 6 required “beans.xml” in an archive to enable CDI. This was mostly a marker file, so you could bundle a completely empty “beans.xml” in the archive and that would enable injection. Of course, you could specify a lot of other elements in this file, such as interceptors, decorators, and alternatives, but basic dependency injection was enabled by the mere inclusion of this file. This was one of the biggest sources of confusion about why beans were not getting injected in a Java EE 6 archive, and was asked about on several forums and other channels.

    Java EE 7 made “beans.xml” optional and provided a default behavior. Now, if this file is not bundled, all CDI-scoped beans are available for injection – that is, any bean with an explicitly specified scope is available for injection. Scopes defined by the CDI specification are listed at Specifically, here are the scopes defined by CDI:

    • @ApplicationScoped
    • @ConversationScoped
    • @Dependent
    • @NormalScope
    • @RequestScoped
    • @SessionScoped

    In addition, two new scopes are introduced in Java EE 7:

    • @FlowScoped
    • @TransactionScoped

    So, any bean with these scopes will be available for injection, in other beans only, without the presence of “beans.xml”.

    Check it out in action at

  • Default data source: A Java EE runtime, a.k.a. an application server, is required to package a database with it. If you are building a Java EE application, you will likely need some sort of data store or RDBMS, so this makes perfect sense. For example, WildFly bundles the in-memory H2 database. Now, you can certainly use another JDBC-compliant database, but bundling one makes it convenient to get started. However, Java EE 6 still required you to create JDBC resources in an application server-specific way, which meant understanding app server-specific tools.

    Java EE 7 simplified this by providing a default data source with a pre-defined JNDI name. This means you can inject a data source as:

    @Resource
    DataSource ds;

    Also, your persistence.xml can look like:

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="2.1"
                 xmlns="http://xmlns.jcp.org/xml/ns/persistence">
        <persistence-unit name="myPU" transaction-type="JTA"/>
    </persistence>

    Note, no <jta-data-source>.

    In both of these circumstances, a default data source with JNDI name java:comp/DefaultDataSource is bound to your application-server specific JDBC resource.

    The exact data source in WildFly can be verified using jboss-cli script as:

    wildfly-8.1.0.Final> ./bin/jboss-cli.sh --connect --command="/subsystem=datasources:read-resource"
    {
        "outcome" => "success",
        "result" => {
            "data-source" => {"ExampleDS" => undefined},
            "jdbc-driver" => {"h2" => undefined},
            "xa-data-source" => undefined
        }
    }

    Check it out in action at

  • Create JMS connection factory, queues, and topics: An application using JMS topics and queues in Java EE 6 would require a deployment script to create the connection factory and queues/topics. This would again be done in an application server-specific way. Java EE 7 provides the @JMSConnectionFactoryDefinition and @JMSConnectionFactoryDefinitions annotations, which are read by the Java EE 7 runtime to ensure that the connection factory specified by these annotations is provisioned in the operational environment.

    Similarly, @JMSDestinationDefinition and @JMSDestinationDefinitions can be used to create topics/queues as part of application deployment. So no more deployment scripts – just include the annotations in your code.

    Check it out in action at

  • Default JMS connection factory: Just like the default data source, a default JMS resource allows you to avoid creating a JMS connection factory in an appserver-specific way to deploy an application using JMS resources. Injecting a JMS producer or consumer in Java EE 6 required getting an instance of an application-managed or container-managed connection factory, which had to be manually created in an application server-specific way. Providing a default connection factory simplifies this step further.

    JMS 2.0 also introduced JMSContext as entry point to the simplified API, and it can be injected simply as:

    @Inject
    JMSContext context;

    Not specifying a ConnectionFactory means the default one will be used, and it has the JNDI name jms/DefaultJMSConnectionFactory.

    The JNDI name may be mapped to the appserver-specific JMS provider. For example, in case of WildFly it is defined as:

    ./bin/jboss-cli.sh -c --command="/subsystem=messaging/hornetq-server=default/pooled-connection-factory=hornetq-ra:read-resource"
    {
        "outcome" => "success",
        "result" => {
            "auto-group" => false,
            "block-on-acknowledge" => false,
            "block-on-durable-send" => true,
            "block-on-non-durable-send" => false,
            "cache-large-message-client" => false,
            "call-failover-timeout" => -1L,
            "call-timeout" => 30000L,
            "client-failure-check-period" => 30000L,
            "client-id" => undefined,
            "compress-large-messages" => false,
            "confirmation-window-size" => -1,
            "connection-load-balancing-policy-class-name" => "org.hornetq.api.core.client.loadbalance.RoundRobinConnectionLoadBalancingPolicy",
            "connection-ttl" => 60000L,
            "connector" => {"in-vm" => undefined},
            "consumer-max-rate" => -1,
            "consumer-window-size" => 1048576,
            "discovery-group-name" => undefined,
            "discovery-initial-wait-timeout" => undefined,
            "dups-ok-batch-size" => 1048576,
            "entries" => ["java:/JmsXA", "java:jboss/DefaultJMSConnectionFactory"],
            "failover-on-initial-connection" => false,
            "failover-on-server-shutdown" => undefined,
            "group-id" => undefined,
            "ha" => false,
            "initial-connect-attempts" => 1,
            "initial-message-packet-size" => 1500,
            "jndi-params" => undefined,
            "max-pool-size" => -1,
            "max-retry-interval" => 2000L,
            "min-large-message-size" => 102400,
            "min-pool-size" => -1,
            "password" => undefined,
            "pre-acknowledge" => false,
            "producer-max-rate" => -1,
            "producer-window-size" => 65536,
            "reconnect-attempts" => -1,
            "retry-interval" => 2000L,
            "retry-interval-multiplier" => 1.0,
            "scheduled-thread-pool-max-size" => 5,
            "setup-attempts" => undefined,
            "setup-interval" => undefined,
            "thread-pool-max-size" => 30,
            "transaction" => "xa",
            "transaction-batch-size" => 1048576,
            "use-auto-recovery" => true,
            "use-global-pools" => true,
            "use-jndi" => undefined,
            "use-local-tx" => undefined,
            "user" => undefined
        }
    }

    Check it out in action at

  • Default executors: Concurrency Utilities for Java EE introduced four different managed objects:
    • ManagedExecutorService
    • ManagedScheduledExecutorService
    • ContextService
    • ManagedThreadFactory

    These objects allow users to create application threads that are managed by the Java EE server runtime. Once again, a default, pre-configured managed object with a well-defined JNDI name is made available for each one of them.

    This allows a user to inject a ManagedExecutorService as:

    @Resource
    ManagedExecutorService myExecutor;

    instead of:

    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    ManagedExecutorService myExecutor;

    Default ManagedExecutorService in WildFly can be found as:

    ./bin/jboss-cli.sh -c --command="/subsystem=ee/managed-executor-service=default:read-resource"
    {
        "outcome" => "success",
        "result" => {
            "context-service" => "default",
            "core-threads" => 5,
            "hung-task-threshold" => 60000L,
            "jndi-name" => "java:jboss/ee/concurrency/executor/default",
            "keepalive-time" => 5000L,
            "long-running-tasks" => false,
            "max-threads" => 25,
            "queue-length" => 0,
            "reject-policy" => "ABORT",
            "thread-factory" => undefined
        }
    }

    Similarly other default managed objects can be found.

    Check out different executors in action at

With so many simplifications, why would you not want to use the Java EE 7 platform?

And WildFly is a fantastic application server too :-)

Download WildFly now, and get started!

Posted in javaee, wildfly | Tagged , | 2 Comments

Schedule Java EE 7 Batch Jobs (Tech Tip #36)

Java EE 7 added the capability to perform Batch jobs in a standard way using JSR 352.

<job id="myJob" xmlns="" version="1.0">
  <step id="myStep">
    <chunk item-count="3">
      <reader ref="myItemReader"/>
      <processor ref="myItemProcessor"/>
      <writer ref="myItemWriter"/>
    </chunk>
  </step>
</job>
This code fragment is the Job Specification Language defined as XML, a.k.a. Job XML. It defines a canonical job with a single step using item-oriented, or chunk-oriented, processing. A chunk can have a reader, an optional processor, and a writer. Each of these is identified using the corresponding element in the Job XML and is a CDI bean packaged in the archive.
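The chunk-processing loop this Job XML describes – read an item, optionally process it, and write the accumulated items at every item-count boundary – can be sketched in plain Java. This illustrates the semantics only; in a real job the loop is driven by the Batch runtime, which also commits a transaction at each chunk boundary:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class ChunkLoopSketch {
    public static void main(String[] args) {
        List<Integer> source = Arrays.asList(1, 2, 3, 4, 5); // what the reader reads
        List<String> written = new ArrayList<>();            // what the writer writes
        int itemCount = 3;                                   // item-count="3"

        Iterator<Integer> reader = source.iterator();
        List<String> chunk = new ArrayList<>();
        while (reader.hasNext()) {
            // processor: transform the item (optional in a real chunk)
            chunk.add("item-" + (reader.next() * 10));
            // writer: flush once a full chunk is accumulated (or input ends)
            if (chunk.size() == itemCount || !reader.hasNext()) {
                written.addAll(chunk);
                chunk.clear();
            }
        }
        System.out.println(written.size()); // prints 5
    }
}
```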

This job can be easily started using:

BatchRuntime.getJobOperator().start("myJob", new Properties());

A typical question asked in different forums and at conferences is how to schedule these jobs in a Java EE runtime. The Batch 1.0 API itself does not offer anything to schedule these jobs. However, the Java EE platform offers three different ways to schedule them:

  1. Use the @javax.ejb.Schedule annotation in an EJB.
    Here is sample code that triggers the execution of the batch job at 11:59:59 PM every day.

    @Stateless
    public class MyEJB {
      @Schedule(hour = "23", minute = "59", second = "59")
      public void myJob() {
        BatchRuntime.getJobOperator().start("myJob", new Properties());
      }
    }

    Of course, you can change the parameters of @Schedule to start the batch job at the desired time.

  2. Use ManagedScheduledExecutorService using javax.enterprise.concurrent.Trigger as shown:

    @Stateless
    public class MyStatelessEJB {
        @Resource
        ManagedScheduledExecutorService executor;

        // Future returned by schedule(), kept so the job can be cancelled later
        Future<?> future;

        public void runJob() {
            future = executor.schedule(new MyJob(), new Trigger() {
                public Date getNextRunTime(LastExecution lastExecutionInfo, Date taskScheduledTime) {
                    Calendar cal = Calendar.getInstance();
                    cal.add(Calendar.DATE, 1);
                    return cal.getTime();
                }

                public boolean skipRun(LastExecution lastExecutionInfo, Date scheduledRunTime) {
                    return null == lastExecutionInfo;
                }
            });
        }

        public void cancelJob() {
            future.cancel(false);
        }
    }

    Call runJob to initiate job execution and cancelJob to terminate it. In this case, a new job is started a day after the previous one, and it is not started until the previous one has terminated. You will need more error checks for proper execution.
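    The next-run-time computation inside the Trigger is plain SE code and can be sanity-checked outside a container. Here is a minimal sketch (the NextRun class name and the fixed date are made up for illustration; unlike the Trigger above, which adds a day to "now", this variant adds a day to the passed-in time so the result is deterministic):

    ```java
    import java.util.Calendar;
    import java.util.Date;

    public class NextRun {
        // Next run is one day after the given time
        static Date nextRunTime(Date taskScheduledTime) {
            Calendar cal = Calendar.getInstance();
            cal.setTime(taskScheduledTime);
            cal.add(Calendar.DATE, 1);
            return cal.getTime();
        }

        public static void main(String[] args) {
            Calendar cal = Calendar.getInstance();
            cal.clear();
            cal.set(2014, Calendar.JULY, 1, 0, 0, 0);
            Date scheduled = cal.getTime();
            Date next = nextRunTime(scheduled);
            // Difference in milliseconds: one full day
            System.out.println(next.getTime() - scheduled.getTime());
        }
    }
    ```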

    MyJob is very trivial:

    public class MyJob implements Runnable {
        public void run() {
            BatchRuntime.getJobOperator().start("myJob", new Properties());
        }
    }

    Of course, you can schedule it automatically by calling this code in a @PostConstruct method.

  3. A slight variation of the second technique runs the job after a fixed delay, as shown:

    public void runJob2() {
        executor.scheduleWithFixedDelay(new MyJob(), 2, 3, TimeUnit.HOURS);
    }

    The first task is executed 2 hours after runJob2 is called, and each subsequent execution starts 3 hours after the previous one completes.
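    Since ManagedScheduledExecutorService extends the standard java.util.concurrent.ScheduledExecutorService, the fixed-delay semantics can be sketched with the plain SE analog, no container required (the FixedDelayDemo class and the millisecond-scale delays are made up for illustration):

    ```java
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class FixedDelayDemo {
        public static void main(String[] args) throws InterruptedException {
            ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
            CountDownLatch threeRuns = new CountDownLatch(3);
            // First execution after 50 ms; each subsequent execution starts
            // 100 ms after the previous one completes (fixed *delay*, not rate)
            executor.scheduleWithFixedDelay(threeRuns::countDown, 50, 100, TimeUnit.MILLISECONDS);
            boolean completed = threeRuns.await(5, TimeUnit.SECONDS);
            executor.shutdownNow();
            System.out.println(completed);
        }
    }
    ```

    In a container you would inject the managed variant with @Resource instead of creating the executor yourself, but the scheduling semantics are the same.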

This support is available to you within the Java EE platform. In addition, you can invoke BatchRuntime.getJobOperator().start("myJob", new Properties()); from any of your Quartz-scheduled methods as well.

You can try all of this on WildFly.

And there are a ton of Java EE 7 samples at

This particular sample is available at

How are you scheduling your Batch jobs?

Posted in javaee, techtip, wildfly | 2 Comments

Markus Eisele joining Red Hat JBoss Middleware

Markus Eisele is a Java Champion, Oracle ACE Director, Java EE Expert Group member, Java community leader of German DOAG, founder of JavaLand, reputed speaker at Java conferences around the world, and a very well known figure in the Enterprise Java world. Now he is joining as Developer Advocate in JBoss Middleware team at Red Hat.

You’ve known and seen him at different conferences, JUGs, meetups, blogs, and social media talking about middleware for many years. And you’ll continue to hear him talk about that going forward. He will still be focused on educating developers about the latest in enterprise technology and anything around the ~100 projects at Red Hat.

I had the honor of presenting his Java Champion jacket during the inaugural JavaLand conference earlier this year. And that lovely moment is captured below (photo from his blog):

Read more about his farewell message here.

Subscribe to his blog at or follow him at @myfear.

Red Hat is hiring, see more at Are you interested?

Posted in redhat | Leave a comment

Eclipse Luna and JBoss Tools (Tech Tip #35)

Eclipse Luna (4.4) was released a few days ago; download it at the usual location: The big feature, of course, is full support for Java 8, but there are tons of other features, as listed here.

JBoss Tools is a set of plugins for Eclipse that complements, enhances and goes beyond the support that exists for JBoss and related technologies in the default Eclipse distribution. If you use JBoss Tools, then a compatible release is already available. Download 4.2.0 Beta 2 here.

The installation of the plugins is rather simple as shown on the web page:


After downloading, participate in the Community Acceptance Testing by following the instructions at What’s your incentive?

  • The JBoss Tools team will be paying close attention to the bugs filed by CAT members and ensuring they are responded to
  • Your name will be included in the JBoss Tools release notes
  • Help us decide if JBoss Tools is ready for release

I filed JBIDE-17773 and JBIDE-17774.

Also see the welcome message from Max Andersen (@maxandersen).

Looking forward to your bugs!

Posted in jboss, techtip | Leave a comment