All posts by arungupta

JBoss Middleware: Accelerate, Integrate, Automate


Ever wondered how to make sense of the multiple Red Hat JBoss middleware offerings ?

Mike Piech (@mpiech), GM of the JBoss Middleware Business Unit, explains them in a succinct video:

It's structured in three buckets:

Accelerate – Building things faster

  • JBoss Enterprise Application Platform provides an innovative modular, cloud-ready architecture, powerful management and automation, and world class developer productivity. It is Java™ EE 6 certified and features powerful yet flexible management, improved performance and scalability, and many new features to maximize developer productivity.
  • Red Hat JBoss Web Server is the only enterprise-class web server solution you need for large-scale websites and lightweight web applications. It provides a more secure and a more stable environment of open source web software, like Apache and Tomcat.
  • Red Hat JBoss Developer Studio provides superior support for your entire development lifecycle. It includes a broad set of tooling capabilities and support for multiple programming models and frameworks, including Java™ Enterprise Edition 6, HTML5, and many other popular technologies. It provides developer choice in supporting multiple JVMs, productivity with Maven, and in testing with Arquillian. It is fully tested and certified to ensure that all its plug-ins, runtime components, and their dependencies are compatible with each other.
  • Red Hat JBoss Data Grid gives you a straightforward approach to overcoming data obstacles, providing the ability to quickly access accurate, real-time information, meet high uptime requirements, streamline interaction with complex and rigid data tiers, and handle unprecedented transaction volumes.
  • Red Hat JBoss Portal is a proven solution for building high-impact, self-service applications.

Integrate – Pull the pieces together

  • Red Hat JBoss Fuse is a flexible, small-footprint enterprise service bus (ESB) that enables rapid integration across the extended enterprise. Integration everywhere – on-premise or in the cloud.
  • Red Hat JBoss A-MQ is a flexible, high-performance messaging platform that delivers information reliably, enabling real-time integration and the Internet of Things (IoT).
  • Red Hat JBoss Fuse Service Works is a platform that creates reusable, changeable, and flexible business services that hide the complexity of connecting to different applications in your enterprise. It sets the stage for faster and easier cloud applications, mobile applications, and business process development projects.
  • Red Hat JBoss Data Virtualization is a lean, virtual data integration solution that unlocks trapped data and delivers it as easily consumable, unified, and actionable information. JBoss Data Virtualization makes data spread across physically diverse systems—such as multiple databases, XML files, and Hadoop systems—appear as a set of tables in a local database.

Automate – Automate significant parts of your business

  • Red Hat JBoss BRMS is a comprehensive platform for business rules management and complex event processing. Organizations can incorporate sophisticated decision logic into line-of-business applications and quickly update underlying business rules as market conditions change.
  • Red Hat JBoss BPM Suite is a comprehensive platform for business process management. It includes all the business rules and complex event processing (CEP) capabilities of Red Hat JBoss BRMS [business rules management system], along with advanced tools and runtime support for Business Process Model and Notation v2.0 (BPMN2)-compliant business processes.

And this is also shown in the following image:

[Image: Accelerate – Integrate – Automate]

Learn all about it at:

  • http://www.redhat.com/products/jbossenterprisemiddleware/
  • Get a quick overview in this technology brochure
  • Why choose JBoss Middleware over IBM Web’s fear ?
  • Why choose JBoss Middleware over Oracle conFusion ?

WildFly Administration Course by Alexis Hassler

Alexis Hassler (@alexishassler) is a software developer, specialized in Java and Java EE. He has been using JBoss since version 2.0, for more than twelve years. His business is to code for other companies or to help them improve the way they develop and deploy Java applications. He is co-leader of LyonJUG and helps organize Mix-IT, an annual conference in Lyon.

He recently concluded a WildFly administration course. Here is a brief Q&A with him:

Q. You recently concluded a 4-day WildFly administration course. Tell us more about that.
A. This course helps you understand the operation and configuration principles of WildFly. The core of the course is 3 days long. It deals with installation, deployment, administration tools, security, and tuning. The fourth day is on clustering and is optional. The recently concluded session was the first one and was conducted on-site in Brussels.

Q. Who is this course targeted for ?
A. The course is designed for application server administrators. But it is useful for developers and software architects too. In fact, it is useful for anybody who has to work with WildFly.

Q. Where can users register for this course ?
A. The schedule of the course is on my company web site: http://www.sewatech.fr/formation-wildfly.html. Registration can be done by e-mail: formation -at- sewatech.fr.

Q. How much of this course would be relevant for JBoss EAP users/customers ?
A. JBoss EAP 6 is based on JBoss AS 7, but a lot of the administration features of the latest EAP have been added in WildFly 8. So if you have EAP 6.2 or 6.3, this course matches about 80% of what you need. If you have an older EAP, a JBoss AS 7 course would be better (http://www.sewatech.fr/formation-jboss-7.html). JBoss EAP users are not the main target of this course. Red Hat has some great courses on JBoss Middleware Development and JBoss Middleware Administration.

Q. How is your experience with JBoss / WildFly management tools ?
A. Compared with older versions of JBoss (I mean before JBoss AS 6), the WildFly management tools are a huge step ahead. You can choose your tool:

  • Web console
  • Command line
  • JMX
  • HTTP API
  • Java API

jboss-cli is the most useful one. It allows an admin to automate the whole setup of the application server. The HTTP API is great too, as it allows you to build custom tools in any language. For example, you can build your own simplified admin console in pure JavaScript. I don't really like the Java API because it's a detyped API and it doesn't fit well with a compile-time typed language. Maybe a Groovy guy could make a nice DSL on top of it. All in all, the greatest thing is that the different tools are really consistent: same data and same logic to manipulate that data. If you learn jboss-cli, it's really easy to understand the HTTP API.
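For example, a minimal jboss-cli session might look like this (paths, host, and port are the WildFly defaults; the output shown is representative):

    $ ./bin/jboss-cli.sh --connect
    [standalone@localhost:9990 /] :read-attribute(name=server-state)
    {
        "outcome" => "success",
        "result" => "running"
    }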

Q. What tips would you give for users coming new to WildFly ?
A. If you know JBoss AS 7, it will be easy. WildFly 8 is just the next version. Read the changelog and everything will be fine. If you know older versions of JBoss AS, forget everything you know, and start from the beginning with a good book. If you know Tomcat, be prepared to discover the power to “fly”. And for everybody, learn the CLI tool.

Q. Which application server should developers choose, for both development and deployment ?
A. It mainly depends on what you want to do. Tomcat can be great, because it's really simple. When it comes to Java EE, TomEE is a very nice application server but it lacks administration tools. For GlassFish, we will see with the next versions whether the current quality is maintained or whether it goes back to being a simple reference implementation. So, OK, WildFly is my favorite choice: great module system, great admin tools, and a great community.

Q. What resources do you generally refer to when looking for help on WildFly ?
A. For me, the main resources are the official documentation (https://docs.jboss.org/author/display/WFLY8/) and Francesco Marchioni's blog (http://www.mastertheboss.com/). For a beginner, Francesco's book WildFly 8 Administration Guide would be easier. The last resource I would recommend is the WildFly section of my wiki (http://jtips.info/index.php?title=Cat%C3%A9gorie:WildFly), but it's in French.

 

JBoss EAP 6.3 now available!

JBoss EAP 6.3 is now available!


This release brings continued progress on the road to making EAP the most manageable and secure Java EE 6-compliant Application Server for traditional and cloud based workloads. It also continues the core themes of the EAP 6 major version family of better user experience, improved manageability, and enhanced performance.

Where to download ?

For current customers with active subscriptions, the release can be downloaded from the customer support portal.

For community users developing applications that will be deployed on a supported EAP, the bits will soon be made available from www.jboss.org/products/eap under development terms & conditions, and questions can be posed to the EAP Forum.

Where are docs ?

Complete documentation is available on the customer support portal, and here are quick links:

  • Release Notes
  • Administration and Configuration Guide
  • Development Guide
  • Getting Started Guide
  • Installation Guide
  • Migration Guide
  • Security Guide

What’s new ?

  • PicketLink enhancements
  • Domain recovery improvements
  • Support for PKCS11 keystores
  • Web management console
    • Patching available
    • Testing DataSources
    • Unified navigation labels
    • Opt-in analytics collection
  • Deployment overlay enhancement
  • Support for Microsoft Windows Server 2012 R2 and Red Hat Enterprise Linux 7

The complete list of new features is described here.

Some of the features are available in Tech Preview, such as WebSockets, adding/removing modules using the JBoss CLI, multi-JSF, and many more.

Features like JBoss OSGi, STOMP with HornetQ, mod_jk and mod_cluster with Apache on RHEL 7, and some others are not supported.

If you are looking for a Java EE 7-compliant Application Server, then download WildFly.

Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 7 (Tech Tip #41)

This is the seventh part (part 1, part 2, part 3, part 4, part 5, part 6) of a multi-part video series where Lincoln Baxter (@lincolnthree), George Gastaldi (@gegastaldi), and I are interactively building a Forge addon to add Java EE 7 Batch functionality. So far, here is what the different parts have shown:

  • Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.
  • Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.
  • Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters.
  • Part 4 added a new test for the command and showed how Forge can be used in debug mode.
  • Part 5 fixed a bug reported by a community member and started work to make processor validation optional.
  • Part 6 upgraded from 2.6.0 to 2.7.1 and started work on reader, processor, and writer template files.

This part shows:

  • Merged George's request into the workspace
  • Reader, processor, and writer source files are created if they do not exist

Enjoy!

As always, the evolving source code is available at github.com/javaee-samples/forge-addons.

Next episode will add a new test for this functionality.

Red Hat JBoss Data Grid 6.3 is now available!

Red Hat’s JBoss Data Grid is an open source, distributed, in-memory key/value data store built from the Infinispan open source software project. Whether deployed in client/server mode or embedded in a Java Virtual Machine, it is built to be elastic, high performance, highly available and to scale linearly.

JBoss Data Grid is accessible for both Java and non-Java clients. Using JBoss Data Grid, data is distributed and replicated across a manageable cluster of nodes, optionally written to disk, and easily accessible using the REST, Memcached, and Hot Rod protocols, or directly in-process through a traditional Java Map API.

  • Download bits
  • Supported configurations
  • Component details

The key features of JBoss Data Grid are:

  • Schema-less key/value store for storing unstructured data
  • Querying to easily search and find objects
  • Security to store and restrict access to your sensitive data
  • Multiple access protocols with data compatibility for applications written in any language, using any framework
  • Transactions for data consistency
  • Distributed execution and map/reduce API to perform large scale, in-memory computations in parallel across the cluster
  • Cross-datacenter replication for high availability, load balancing and data partitioning

What’s new in 6.3 ?

  • Expanded security for your data
    • User authentication via Simple Authentication and Security Layer (SASL)
    • Role-based authorization and access control for the Cache Manager and Caches
    • New nodes required to authenticate before joining a cluster
    • Encrypted communication within the cluster
  • Deploy into Apache Karaf and WebLogic
    • Use as an embedded or distributed cache in Red Hat JBoss Fuse integration flows
  • Enhanced map/reduce
    • Improved scalability by storing computation results directly in the grid instead of pushing them back to the application
    • Takes advantage of the hardware's parallel processing power for greater computing efficiency
  • New JPA cache store that preserves data schema
  • Improved remote query and C# Hot Rod client in technology preview
  • JBoss Data Grid modules for JBoss Enterprise Application Platform (JBoss EAP)

The complete list of new and updated features is described here.

How can this be installed on JBoss EAP ?

JBoss Data Grid has two deployment modes:

  • Library mode (embedded distributed caches)
  • Client-Server mode (remote distributed cache) – Install the Hot Rod client JARs in EAP, and have the application reference these JARs to use the Hot Rod protocol to connect to the JBoss Data Grid Server (remote cache). A minimal client sketch is shown below.
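Here is a minimal sketch of a Java Hot Rod client (host, port, and key/value types are illustrative; the RemoteCache API shown is the Infinispan/JBoss Data Grid client API):

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class HotRodClient {
        public static void main(String[] args) {
            // point the client at the JBoss Data Grid server (11222 is the default Hot Rod port)
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("localhost").port(11222);
            RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());

            // obtain the default cache and use it like a Map
            RemoteCache<String, String> cache = cacheManager.getCache();
            cache.put("key", "value");
            System.out.println(cache.get("key"));

            cacheManager.stop();
        }
    }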

Why a new C# client ?

The remote Hot Rod client is aware of the cluster topology and hashing scheme on the server and can get to a (k,v) entry in a single hop. In contrast, REST and memcached usually require an extra hop to get to an entry. As a result, the Hot Rod protocol has higher performance and is the preferred protocol (in Client-Server mode). JBoss Data Grid 6.1 only had a Java Hot Rod client – for all other languages, customers had to use memcached or REST. JBoss Data Grid 6.2 added a C++ Hot Rod client. And now JBoss Data Grid 6.3 adds a Tech Preview of a C# client.

Infinispan has a lot more Hot Rod clients.

How would somebody use JBoss Data Grid with JBoss Fuse ?

The primary purpose is caching in integration workflows.

For example, a remote JBoss Data Grid can be used with Fuse to cache search results. REST can be used to communicate with a remote cache, and starting with JBoss Data Grid 6.3, Hot Rod can be used as well.

Fuse currently has a camel-cache component, which is based on EHCache. A new camel-infinispan component has also been released in the community.

We plan to productize the camel-infinispan component in a future release.

Why would somebody use JBoss Data Grid on WebLogic ?

Customers who run the WebLogic stack and eventually want to migrate to the JBoss stack can start the migration by replacing Oracle Coherence with JBoss Data Grid. And here is a comparison between the two offerings:

The complete documentation is available here and quick references are below:

  • Release Notes
  • Getting Started Guide
  • Administration and Configuration Guide
  • API Documentation
  • Developer Guide
  • Infinispan Query Guide
  • Feature Support Document

Some useful references:

  • Getting started with Infinispan Refcard
  • Infinispan 6.x user guide

 

Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 6 (Tech Tip #40)

This is the sixth part (part 1, part 2, part 3, part 4, part 5) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

Part 5 fixed a bug reported by a community member and started work to make processor validation optional.

This part shows:

  • Upgrade from Forge 2.6.0 to 2.7.1
  • Fix the failing test
  • Reader, processor, and writer files are now templates instead of source files
  • Reader, processor, and writer are injected appropriately in test’s temp project

Enjoy!

As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.

Shape the future of JBoss EAP and WildFly Web Console

Are you using WildFly ?

Any version of JBoss EAP ?

Would you like to help us define what the Web Console for future versions should look like ?

[Screenshot: WildFly 8.1 web admin console]

Help the Red Hat UX Design team shape the future of JBoss EAP and WildFly!

We are currently working to improve the usability and information architecture of the web-based admin console. By taking part in a short exercise you will help us better understand how users interpret the information and accomplish their goals.

You do not need to be an expert on the console to participate in this study. The activity shouldn't take longer than 10 to 15 minutes to complete.

To start participating in the study, click on the link below and follow the instructions.

http://ows.io/tj/12t0qr48

I completed the study in about 12 mins and was happy that my clicking around helped shape the future of JBoss EAP and WildFly!

Just take a quick detour from your routine for 10-15 mins and take the study.

Thank you in advance for taking the time to complete the study.

Getting Started with Docker (Tech Tip #39)

If the number of articles, meetups, talk submissions at different conferences, tweets, and other indicators are taken into consideration, then it seems like Docker is going to solve world hunger. It would be nice if it did, but apparently not. But it does solve one problem really well!

Let's hear it from @solomonstre – creator of the Docker project!

In short, Docker simplifies software delivery by making it easy to build and share images that contain your application’s entire environment, or application operating system.

What is meant by application operating system ?

Your application typically requires a specific version of the operating system, application server, JDK, and database server, may require tuning of configuration files, and has multiple other dependencies. The application may need to bind to specific ports and require a certain amount of memory. The components and configuration together required to run your application are what is referred to as the application operating system.

You can certainly provide an installation script that will download and install these components. Docker simplifies this process by allowing you to create an image that contains your application and infrastructure together, managed as one component. These images are then used to create Docker containers, which run on the container virtualization platform provided by Docker.

What are the main components of Docker ?

Docker has two main components:

  • Docker: the open source container virtualization platform
  • Docker Hub: SaaS platform for sharing and managing Docker images

Docker uses Linux Containers to provide isolation, sandboxing, reproducibility, resource constraints, snapshotting, and several other advantages. Read this excellent piece at InfoQ on Docker Containers for more details.

Images are the “build component” of Docker and a read-only template of the application operating system. Containers are the runtime representation of, and are created from, images; they are the “run component” of Docker. Containers can be run, started, stopped, moved, and deleted. Images are stored in a registry, the “distribution component” of Docker.

Docker in turn contains two components:

  • Daemon runs on a host machine and does the heavy lifting of building, running, and distributing Docker containers.
  • Client is a Docker binary that accepts commands from the user and communicates back and forth with the Daemon.

How do these work together ?

The Client communicates with the Daemon, either co-located on the same host or on a different host. It requests the Daemon to pull an image from the registry using the pull command. The Daemon then downloads the image from Docker Hub, or whatever registry is configured. Multiple images can be downloaded from the registry and installed on the Daemon host.

[Figure: Docker architecture]

The Client can then start the container using the run command. The complete list of client commands can be seen here.

The Client communicates with the Daemon using sockets or the REST API.

Because Docker uses Linux Kernel features, does that mean I can use it only on Linux-based machines ?

Docker daemon and client for different operating systems can be installed from docs.docker.com/installation/. As you can see, it can be installed on a wide variety of platforms, including Mac and Windows.

For non-Linux machines, a lightweight Virtual Machine needs to be installed, and the Daemon runs inside it. A native client installed on the machine then communicates with the Daemon.

For example, the Docker Daemon and Client can be installed on Mac following the instructions at docs.docker.com/installation/mac.

The VM can be stopped from the CLI as:
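With boot2docker (the VM manager used by the Mac instructions above), this is typically:

    boot2docker stop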

And then restarted again as:
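Again with boot2docker:

    boot2docker up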

And logged in as:
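A representative command:

    boot2docker ssh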

The complete list of boot2docker commands is available in help:
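For example:

    boot2docker help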

Enough talk, show me an example ?

Some of the JBoss projects are available as Docker images at www.jboss.org/docker and can be installed following the commands explained on that page. For example, the WildFly Docker image can be installed as:
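A representative pull command (assuming the jboss/wildfly image name listed on that page):

    docker pull jboss/wildfly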

The image can be verified using the command:
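For example:

    docker images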

Once the image is downloaded, the container can be started as:
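Something like:

    docker run jboss/wildfly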

By default, Docker containers do not provide an interactive shell or input from STDIN. So if the WildFly Docker container is started using the command above, it cannot be terminated using Ctrl + C. Specifying the -i option makes it interactive, and the -t option allocates a pseudo-TTY.

In addition, we'd also like to make port 8080 accessible outside the container, i.e. on our localhost. This can be achieved by specifying -p 80:8080, where 80 is the host port and 8080 is the container port.

So we’ll run the container as:
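Putting the -i, -t, and -p options together:

    docker run -i -t -p 80:8080 jboss/wildfly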

Container’s IP address can be found as:
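With the boot2docker setup above, for example:

    boot2docker ip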

The started container can be verified using the command:
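For example:

    docker ps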

And now the WildFly server can be accessed on your local machine at http://192.168.59.103, showing the WildFly welcome page.

Finally, the container can be stopped by hitting Ctrl + C, or by giving the command:
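For example (using an illustrative container id):

    docker stop CONTAINER_ID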

The container id obtained from “docker ps” is passed to the command here.

More detailed instructions to use this image, such as booting in domain mode, deploying applications, etc. can be found at github.com/jboss/dockerfiles/blob/master/wildfly/README.md.

What else would you like to see in the WildFly Docker image ? File an issue at github.com/jboss/dockerfiles/issues.

Other images that are available at jboss.org/docker are:

  • KeyCloak
  • TorqueBox
  • Immutant
  • LiveOak
  • AeroGear

 

Did you know that Red Hat is among the top contributors to Docker, with 5 Red Hatters from Project Atomic working on it ?

Adding Java EE 7 Batch Addon to JBoss Forge ? – Part 5 (Tech Tip #38)

This is the fifth part (part 1, part 2, part 3, part 4) of a multi-part video series where Lincoln Baxter (@lincolnthree) and I are interactively building a Forge addon to add Java EE 7 Batch functionality.

Part 1 showed how to get started with creating an addon, add relevant POM dependencies, build and install the addon using the Forge shell, add a new command batch-new-jobxml, and add --reader, --processor, and --writer parameters to the newly added command.

Part 2 showed how to identify classes for each CLI parameter that already honor the contract required by the Batch specification.

Part 3 showed how parameters can be made required, created templates for the reader, processor, and writer, and validated the specified parameters.

Part 4 added a new test for the command and showed how Forge can be used in debug mode.

This part shows:

  • Fixed a bug reported by a community member
  • Started work on another issue to make processor validation optional

Enjoy!

As always, the evolving source code is available at github.com/javaee-samples/forge-addons. The debugging will continue in the next episode.

Defaults in Java EE 7 (Tech Tip #37)

Java EE 7 platform added a few new specifications to the platform:

  • Java API for WebSocket 1.0
  • Batch Applications for Java 1.0
  • Java API for JSON Processing 1.0
  • Concurrency Utilities for Java EE 1.0

This is highlighted in the pancake diagram shown below:

[Figure: Java EE 7 pancake diagram]

Several of the existing specifications were updated to fill the gaps and provide a more cohesive platform. Some small, but rather significant additions, were made to the platform to provide defaults for different features. These defaults would lower the bar for application developers to build Java EE applications.

Let's take a look at them.

  • Default CDI: Java EE 6 required "beans.xml" in an archive to enable CDI. This was mostly a marker file, so you could bundle a completely empty "beans.xml" in the archive and that would enable injection. Of course, you could specify a lot of other elements in this file, such as interceptors, decorators, and alternatives, but basic dependency injection was enabled by the mere inclusion of this file. This was one of the biggest sources of confusion about why beans were not getting injected in a Java EE 6 archive, and it was asked about on several forums and other channels.

    Java EE 7 made that “beans.xml” optional and provided a default behavior. Now if this file is not bundled, all CDI-scoped beans are available for injection. So any bean with an explicitly specified scope is available for injection. Scopes defined by the CDI specification are listed at docs.oracle.com/javaee/7/api/javax/enterprise/context/package-summary.html. Specifically, here are the scopes defined by CDI:

    • @ApplicationScoped
    • @ConversationScoped
    • @Dependent
    • @NormalScope
    • @RequestScoped
    • @SessionScoped

    In addition, two new scopes are introduced in Java EE 7:

    • @FlowScoped
    • @TransactionScoped

    So, any bean with these scopes will be available for injection into other beans without the presence of "beans.xml", as sketched below.

    Check it out in action at github.com/javaee-samples/javaee7-samples/tree/master/cdi/nobeans-xml.
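    A minimal sketch of what this looks like (class and method names are illustrative):

      import javax.enterprise.context.RequestScoped;
      import javax.inject.Inject;

      @RequestScoped
      class Greeter {
          String greet() {
              return "hello";
          }
      }

      @RequestScoped
      public class GreetingResource {
          @Inject
          Greeter greeter;     // injected even though the archive has no beans.xml

          public String hello() {
              return greeter.greet();
          }
      }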

  • Default data source: A Java EE runtime, a.k.a. an application server, is required to package a database with it. If you are building a Java EE application, you likely need some sort of data store or RDBMS to store the data, so this makes perfect sense. For example, WildFly bundles the in-memory H2 database. Now, you can certainly use another JDBC-compliant database, but bundling one makes it convenient to get started. However, in order to get started, Java EE 6 still required you to create JDBC resources in an application server-specific way. This would mean understanding app server-specific tools.

    Java EE 7 simplified this by providing a default data source with a pre-defined JNDI name. This means you can inject a data source as:
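    For example (bean and field names are illustrative):

      import javax.annotation.Resource;
      import javax.ejb.Stateless;
      import javax.sql.DataSource;

      @Stateless
      public class OrderBean {
          // bound to the platform default data source
          @Resource(lookup = "java:comp/DefaultDataSource")
          DataSource myDS;
      }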

    Also, your persistence.xml can look like:
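    A sketch of such a persistence.xml (the persistence unit name is illustrative):

      <?xml version="1.0" encoding="UTF-8"?>
      <persistence version="2.1"
                   xmlns="http://xmlns.jcp.org/xml/ns/persistence">
          <persistence-unit name="myPU" transaction-type="JTA">
              <!-- no <jta-data-source>: the default data source is used -->
              <properties>
                  <property name="javax.persistence.schema-generation.database.action"
                            value="drop-and-create"/>
              </properties>
          </persistence-unit>
      </persistence>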

    Note, no <jta-data-source>.

    In both of these circumstances, a default data source with JNDI name java:comp/DefaultDataSource is bound to your application-server specific JDBC resource.

    The exact data source in WildFly can be verified using jboss-cli script as:
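    For example (command and output are representative of WildFly 8):

      $ ./bin/jboss-cli.sh --connect
      [standalone@localhost:9990 /] /subsystem=ee/service=default-bindings:read-attribute(name=datasource)
      {
          "outcome" => "success",
          "result" => "java:jboss/datasources/ExampleDS"
      }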

    Check it out in action at github.com/javaee-samples/javaee7-samples/tree/master/jpa/default-datasource.

  • Create JMS connection factory, queues, and topics: An application using JMS topics and queues in Java EE 6 would require a deployment script to create the Connection Factory and Queues/Topics. These would again be done in an application server-specific way. Java EE 7 provides the annotations @JMSConnectionFactoryDefinition and @JMSConnectionFactoryDefinitions that are read by the Java EE 7 runtime and ensure that the ConnectionFactory specified by these annotations is provisioned in the operational environment.

    Similarly, @JMSDestinationDefinition and @JMSDestinationDefinitions can be used to create Topics/Queues as part of application deployment. So no more deployment scripts, just include the annotations in your code, as sketched below.

    Check it out in action at github.com/javaee-samples/javaee7-samples/tree/master/jms/send-receive.
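    A sketch of how such an annotation might look (the JNDI and destination names are illustrative):

      import javax.ejb.Stateless;
      import javax.jms.JMSDestinationDefinition;

      @JMSDestinationDefinition(
              name = "java:global/jms/myQueue",      // JNDI name created at deployment
              interfaceName = "javax.jms.Queue",
              destinationName = "myQueue")
      @Stateless
      public class QueueSetupBean {
          // the queue java:global/jms/myQueue is provisioned when this archive is deployed
      }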

  • Default JMS connection factory: Just like the default data source, a default JMS resource allows you to avoid creating a ConnectionFactory in an appserver-specific way in order to deploy an application using JMS resources. Injection of a JMS producer or consumer in Java EE 6 required you to get an instance of an application-managed or container-managed ConnectionFactory. This factory had to be manually created in an application server-specific way. Providing a default JMS connection factory simplifies this step further.

    JMS 2.0 also introduced JMSContext as entry point to the simplified API, and it can be injected simply as:
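    A minimal sketch (the class name is illustrative; the queue lookup reuses the illustrative name from the previous example):

      import javax.annotation.Resource;
      import javax.ejb.Stateless;
      import javax.inject.Inject;
      import javax.jms.JMSContext;
      import javax.jms.Queue;

      @Stateless
      public class MessageSender {
          @Inject
          JMSContext context;     // uses the default JMS connection factory

          @Resource(lookup = "java:global/jms/myQueue")
          Queue queue;

          public void send(String text) {
              context.createProducer().send(queue, text);
          }
      }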

    Not specifying a ConnectionFactory means the default one will be used, and it has the JNDI name java:comp/DefaultJMSConnectionFactory.

    The JNDI name may be mapped to an appserver-specific JMS provider. For example, in the case of WildFly it is defined as:
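    For example, the mapping can be read with jboss-cli (command and output are representative of WildFly 8):

      [standalone@localhost:9990 /] /subsystem=ee/service=default-bindings:read-attribute(name=jms-connection-factory)
      {
          "outcome" => "success",
          "result" => "java:jboss/DefaultJMSConnectionFactory"
      }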

    Check it out in action at github.com/javaee-samples/javaee7-samples/tree/master/jms/send-receive.

  • Default executors: Concurrency Utilities for Java EE introduced four different managed objects:
    • ManagedExecutorService
    • ManagedScheduledExecutorService
    • ContextService
    • ManagedThreadFactory

    These objects allow a user to create application threads that are managed by the Java EE server runtime. Once again, a default, pre-configured managed object, with a well-defined JNDI name, is made available for each one of them.

    This allows a user to inject a ManagedExecutorService as:
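    For example, using the platform's pre-defined JNDI name (a sketch):

      import javax.annotation.Resource;
      import javax.enterprise.concurrent.ManagedExecutorService;

      @Resource(lookup = "java:comp/DefaultManagedExecutorService")
      ManagedExecutorService executor;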

    instead of:
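    an application server-specific lookup such as this one (WildFly's default binding, shown purely as an illustration):

      @Resource(lookup = "java:jboss/ee/concurrency/executor/default")
      ManagedExecutorService executor;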

    Default ManagedExecutorService in WildFly can be found as:
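    For example (representative jboss-cli output on WildFly 8):

      [standalone@localhost:9990 /] /subsystem=ee/service=default-bindings:read-attribute(name=managed-executor-service)
      {
          "outcome" => "success",
          "result" => "java:jboss/ee/concurrency/executor/default"
      }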

    Similarly other default managed objects can be found.

    Check out different executors in action at github.com/javaee-samples/javaee7-samples/tree/master/concurrency.

With so many simplifications, why would you not want to use the Java EE 7 platform ?

And WildFly is a fantastic application server too :-)

Download WildFly now, and get started!

Schedule Java EE 7 Batch Jobs (Tech Tip #36)

Java EE 7 added the capability to perform Batch jobs in a standard way using JSR 352.
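Here is a representative Job XML for such a job (the ids and bean references are illustrative):

    <?xml version="1.0" encoding="UTF-8"?>
    <job id="myJob" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
        <step id="myStep">
            <chunk item-count="3">
                <reader ref="myItemReader"/>
                <processor ref="myItemProcessor"/>
                <writer ref="myItemWriter"/>
            </chunk>
        </step>
    </job>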

This code fragment is the Job Specification Language defined as XML, a.k.a. Job XML. It defines a canonical job, with a single step, using item-oriented, or chunk-oriented, processing. A chunk can have a reader, an optional processor, and a writer. Each of these elements is identified using the corresponding element in the Job XML, and they are CDI beans packaged in the archive.

This job can be easily started using:
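For example (where "myJob" refers to the Job XML file, e.g. META-INF/batch-jobs/myJob.xml):

    BatchRuntime.getJobOperator().start("myJob", new Properties());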

A typical question asked in different forums and at conferences is how to schedule these jobs in a Java EE runtime. The Batch 1.0 API itself does not offer anything to schedule these jobs. However, the Java EE platform offers three different ways to schedule them:

  1. Use the @javax.ejb.Schedule annotation in an EJB.
    Here is a sample code that will trigger the execution of the batch job at 11:59:59 PM every day.
    Of course, you can change the parameters of @Schedule to start the batch job at the desired time.
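    A sketch of such an EJB (the class name is illustrative):

      import java.util.Properties;
      import javax.batch.runtime.BatchRuntime;
      import javax.ejb.Schedule;
      import javax.ejb.Singleton;
      import javax.ejb.Startup;

      @Singleton
      @Startup
      public class BatchJobScheduler {

          // fires at 11:59:59 PM every day
          @Schedule(hour = "23", minute = "59", second = "59", persistent = false)
          public void startBatchJob() {
              BatchRuntime.getJobOperator().start("myJob", new Properties());
          }
      }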
  2. Use a ManagedScheduledExecutorService with a javax.enterprise.concurrent.Trigger, as shown:
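    A sketch of what this might look like (the runJob and cancelJob methods referred to below are shown here; the MyJob runnable is shown a little further below):

      import java.util.Date;
      import java.util.concurrent.Future;
      import java.util.concurrent.TimeUnit;
      import javax.annotation.Resource;
      import javax.ejb.Singleton;
      import javax.enterprise.concurrent.LastExecution;
      import javax.enterprise.concurrent.ManagedScheduledExecutorService;
      import javax.enterprise.concurrent.Trigger;

      @Singleton
      public class JobScheduler {

          @Resource
          ManagedScheduledExecutorService executor;

          private Future<?> future;

          public void runJob() {
              future = executor.schedule(new MyJob(), new Trigger() {
                  @Override
                  public Date getNextRunTime(LastExecution last, Date taskScheduledTime) {
                      // first run is based on the scheduling time; subsequent runs start
                      // a day after the previous task has terminated
                      Date base = (last == null) ? taskScheduledTime : last.getRunEnd();
                      return new Date(base.getTime() + TimeUnit.DAYS.toMillis(1));
                  }

                  @Override
                  public boolean skipRun(LastExecution last, Date scheduledRunTime) {
                      return false;
                  }
              });
          }

          public void cancelJob() {
              if (future != null) {
                  future.cancel(true);
              }
          }
      }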
    Call runJob to initiate job execution and cancelJob to terminate it. In this case, a new job is started a day later than the previous task, and it is not started until the previous one has terminated. You will need more error checks for proper execution.

    MyJob is very trivial:
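    For example:

      import java.util.Properties;
      import javax.batch.runtime.BatchRuntime;

      public class MyJob implements Runnable {
          @Override
          public void run() {
              BatchRuntime.getJobOperator().start("myJob", new Properties());
          }
      }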

    Of course, you can automatically schedule it by calling this code in @PostConstruct.

  3. A slight variation of the second technique allows you to run the job after a fixed delay, as shown:
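    A sketch (reusing the executor resource and MyJob class from the sketches above; the delays match the description below):

      public void runJob2() {
          // first run 2 hours after this call, then a 3 hour delay after each run completes
          executor.scheduleWithFixedDelay(new MyJob(), 2, 3, TimeUnit.HOURS);
      }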

    The first task is executed 2 hours after the runJob2 method is called, and then with a 3-hour delay between subsequent executions.

This support is available to you within the Java EE platform. In addition, you can also invoke BatchRuntime.getJobOperator().start("myJob", new Properties()); from any of your Quartz-scheduled methods.

You can try all of this on WildFly.

And there are a ton of Java EE 7 samples at github.com/javaee-samples/javaee7-samples.

This particular sample is available at github.com/javaee-samples/javaee7-samples/tree/master/batch/scheduling.

How are you scheduling your Batch jobs ?

Markus Eisele joining Red Hat JBoss Middleware

Markus Eisele is a Java Champion, Oracle ACE Director, Java EE Expert Group member, Java community leader of the German DOAG, founder of JavaLand, reputed speaker at Java conferences around the world, and a very well known figure in the Enterprise Java world. Now he is joining the JBoss Middleware team at Red Hat as a Developer Advocate.

You've known and seen him at different conferences, JUGs, meetups, blogs, and social media talking about middleware for many years. And you'll continue to hear him talk about that going forward as well. It will still be focused on educating about the latest in enterprise technology and anything around the ~100 projects at Red Hat.

I had the honor of presenting his Java Champion jacket during the inaugural JavaLand conference earlier this year. And that lovely moment is captured below (photo from his blog):

Read his farewell message here.

Subscribe to his blog at blog.eisele.net or follow him at @myfear.

Red Hat is hiring, see more at jobs.redhat.com. Are you interested ?

Eclipse Luna and JBoss Tools (Tech Tip #35)

Eclipse Luna (4.4) was released a few days ago; download it at the usual location: eclipse.org/downloads. The big feature of course is full support for Java 8, but there are tons of other features, as listed here.

JBoss Tools is a set of plugins for Eclipse that complements, enhances and goes beyond the support that exists for JBoss and related technologies in the default Eclipse distribution. If you use JBoss Tools, then a compatible release is already available. Download 4.2.0 Beta 2 here.

The installation of the plugins is rather simple as shown on the web page:

[Screenshot: installing JBoss Tools into Eclipse Luna]

After downloading, participate in the Community Acceptance Testing by following the instructions at tools.jboss.org/cat/. What’s your incentive ?

  • The JBoss Tools team will be paying close attention to the bugs filed by CAT members and ensuring they are responded to
  • Your name will be included in the JBoss Tools release notes
  • Help us decide if JBoss Tools is ready for release

I filed JBIDE-17773 and JBIDE-17774.

Also see the welcome message from Max Andersen (@maxandersen).

Looking forward to your bugs!

Testable Java EE 7 Maven Archetype, using Arquillian (Tech Tip #34)

There is a Maven archetype to create a Java EE 7 application:
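For example, one commonly used option is the webapp-javaee7 archetype (the coordinates below are an assumption, and the groupId/artifactId values are illustrative):

    mvn archetype:generate \
        -DarchetypeGroupId=org.codehaus.mojo.archetypes \
        -DarchetypeArtifactId=webapp-javaee7 \
        -DgroupId=org.example \
        -DartifactId=myapp \
        -DinteractiveMode=false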

It generates a simple "pom.xml" with the Java EE 7 API <dependency>. It does the job for getting started with building the application. But how do you test this app ?

Of course, you write unit and integration tests. But how do you run these tests, especially in a container-independent manner ?

That’s where Arquillian comes in!

Arquillian guides explain how to write real tests, but you still need to figure out Maven dependencies, create profiles, figure out container dependencies, and more. That’s still too much work :)

Meet a new Maven archetype that generates a Java EE 7 app, with profiles pre-configured for WildFly and GlassFish.

The four profiles are:

  1. wildfly-remote-arquillian
  2. wildfly-managed-arquillian
  3. glassfish-remote-arquillian
  4. glassfish-embedded-arquillian

The first profile is the most natural to start with. It requires you to download WildFly 8.1, unzip it, and start it using ./bin/standalone.sh. Then you can run the test as:
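For example:

    mvn test -Pwildfly-remote-arquillian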

to see the result as:

This is useful if tests need to be executed multiple times on the same WildFly instance.

The second profile is the easiest to start with and does not require any manual downloading. Using this profile downloads WildFly (8.0.0 at this time) to the Maven repository, installs it in the "target" directory, starts the server, deploys the WAR file, runs the tests, and stops the server.

“glassfish-remote-arquillian” profile is like “wildfly-remote-arquillian” where an instance of GlassFish is started externally and tests are run in the usual manner. This profile does not work at this moment because of ARQ-1596.

“glassfish-embedded-arquillian” is like “wildfly-managed-arquillian”, where the GlassFish container is downloaded transparently using the Maven dependencies; it then starts the container, deploys the app, runs the test, and stops the container.

Archetype source code is at: github.com/javaee-samples/javaee7-archetypes/tree/master/javaee7-archetype and the archetype is published at search.maven.org/#search%7Cga%7C1%7Cjavaee7-arquillian-archetype.

Many thanks to @aslakknutsen for publishing this archetype!

A complete working sample can be checked out from github.com/arun-gupta/wildfly-samples/tree/master/arquillian.

Let us know if you find this useful and how you would use it.

Java EE 7 Hands-on Lab on WildFly and OpenShift (Tech Tip #33)

Thanks to @dmueller for inspiring this blog entry and @FarahJuma for keeping WildFly cartridge continuously updated!

Java EE 7 hands-on lab has been delivered at several conferences, meetups, Java User Groups, and other venues around the world. It provides instructions for a typical 3-tier application using several technologies in the Java EE 7 platform, such as WebSocket 1.0 (JSR 356), Batch Applications (JSR 352), JSON-P 1.0 (JSR 353), JAX-RS 2.0 (JSR 339), JMS 2.0 (JSR 343), CDI 1.1 (JSR 346), JPA 2.1 (JSR 338), and many more. The self-paced instructions allow the attendees to learn the design patterns in Java EE 7 and be productive right away.

This lab can be built using NetBeans, JBoss Tools/Eclipse, or IntelliJ. The deployment can be done on WildFly or GlassFish.

Do you want to get a taste of the application without trying out all the steps ? You can download the solution and deploy it on the application server of your choice.

Don’t have time for downloading and installing your application server ? OpenShift is your answer!

OpenShift is Red Hat's open source hybrid cloud application platform. It enables polyglot applications to be deployed on public, private, and hybrid clouds very easily. It provides an extensible, cartridge-based architecture that allows a wide range of functionality, such as frameworks, databases, monitoring services, or connectors to external backends, to be easily added. The WildFly cartridge allows you to start a WildFly instance in OpenShift Online.

This Tech Tip shows how to deploy Java EE 7 hands-on lab solution easily on WildFly cartridge on OpenShift.

  1. Register for a free OpenShift account.
  2. Login to OpenShift Console.
  3. Create a new WildFly application using the quickstart. Take the defaults, or change the name to whatever you want, and click on “Create Application”. The page that follows shows the application credentials, and the default application page confirms that WildFly is running.
  4. Clone the workspace using the credentials shown for your application.
  5. Delete the generated “src” directory and copy the “src” from solution.
  6. Commit and push the changes to restart the cartridge:
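    For example (the commit message is illustrative):

      git add src
      git commit -m "Java EE 7 hands-on lab solution"
      git push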

Refreshing the page at javaee7lab-milestogo.rhcloud.com now shows the application deployed successfully:

[Screenshot: Java EE 7 hands-on lab deployed on OpenShift]

Simple, isn’t it ?

Try it and let us know your feedback!

Tech Tip #21 also talks about how to get started with WildFly in OpenShift and JBoss Developer Studio.