Developer Interview (#DI 7) Matthias Wessendorf (@mwessendorf) about Openshift, Aerogear and how to bring Java EE to Mobiles

Welcome to another episode of my developer interviews. It is my pleasure to introduce Matthias Wessendorf (@mwessendorf, blog), whom I have known for a long time already. We've been working in and around Java EE for quite a while and finally met again at Red Hat.

Matthias is working at Red Hat where he is leading the AeroGear project. Previously, he was the PMC Chair of the Apache MyFaces project. Matthias is a regular conference speaker.

We talked about all kinds of mobile topics, his project AeroGear, OpenShift, and how to connect Java EE backends to mobile devices of any kind.

Sit back, relax and get a #Coffee+++! Thanks Matthias for taking the time!

Virtual JBUG at JavaOne – Infinispan, Java EE 7, Hibernate, CDI, Ceylon and Arquillian

You have heard about the Virtual JBoss User Group before. It is your unique chance to catch up with all kinds of technologies and methodologies presented by well-known Red Hatters and community members from all around the world. Just a short week ago the vJBUG was live-streaming the Red Hat mini-booth sessions from JavaOne, and we made all of them available for you to watch in case you weren't able to catch them live.
Keep an eye on our YouTube channel, join the Google+ page, follow us on Twitter @vJBUG and make sure to register on our Meetup group.


Test ride of the Arquillian Universe
Aslak Knutsen (@aslakknutsen, Blog, GitHub) & Bartosz Majsak (@majson, Blog, GitHub)
Learn about Arquillian's unknown features and how to use the testing related services which are provided by Drone and other extensions.


Ceylon's fast-growing ecosystem
Stéphane Épardaud (@UnFroMage, Blog, GitHub)
Learn all about Ceylon's fast-growing ecosystem. Ceylon is a new modern language for the JVM and JavaScript VMs with a nice blend of functional and object-oriented styles, modularity, and great tooling, designed first and foremost for teamwork.


Java and Mongo for a fun mapping experience
Steve Citron-Pousty (@TheSteve0, Blog, GitHub)
You have a great idea for a quick and interesting mapping application with pins on a map and some basic search. Then you lose interest because of all the pieces to install. NOT ANYMORE! In this workshop we are going to use one command to spin up all our infrastructure (EAP and MongoDB). Then we write some quick and easy Java EE code to build a REST service on MongoDB. To wrap it up, we will use a simple Leaflet application to display the results. You will witness the transformation from idea to cool pinny map in this quick session.


Going Native: Bringing FFI to the JVM
Charles Nutter (@headius, Blog, GitHub)
How to make it easier to call native code on the JVM and what the future might be.


The Path to CDI 2.0
Antoine Sabot-Durand (@antoine_sd, Blog, GitHub)
CDI has proven itself to be a great asset for Java. The many features it provides (dependency injection, contextual lifecycle, configuration, interception, event notification, and more) and the innovative way it provides them (through the use of meta-annotations) explain its rapid adoption. This session reviews the features introduced in CDI 1.1 and 1.2 and discusses improvements planned for CDI 2.0.


Automatically scaling Java applications in the cloud
Steve Citron-Pousty (@TheSteve0, Blog, GitHub)
Steve shows how to automatically scale Java EE applications on a PaaS environment using JBoss EAP and OpenShift. In a live demo he deploys an application to the cloud and then turns up the heat by running a load test with thousands of users.


Building Java EE Applications FAST
George Gastaldi (@gegastaldi, Blog, GitHub) & Lincoln Baxter (@lincolnthree, Blog, GitHub)
George and Lincoln will demonstrate the power of JBoss Forge, while creating an end-to-end Java EE application in mere minutes.


Mythbusters: ORMs and SQL - Good or Bad?
Emmanuel Bernard (@emmanuelbernard, Blog, GitHub)
Java is an object-oriented kingdom where ORMs have flourished. This episode explores key myths and preconceptions about ORMs in a NoSQL and polyglot era. Join this journey to challenge these myths and find out if they are busted, plausible, or confirmed.


Developing Modern Mobile Applications
Sébastien Blanc (@sebi2706, Blog, GitHub)
This live coding session, driven by Java and using a familiar development environment, goes step by step through building a complete mobile, hybrid, multiplatform application ready to be distributed on different vendors’ stores, such as the Apple store or Google Play.


Develop Modern Java Web Applications with Java EE 7
Grant Shipley (@gshipley, Blog, GitHub)
Grant showcases how easily you can develop Java EE 7 applications with OpenShift. A live coding session building a Java EE 7 application on the WildFly application server using MongoDB as a database.


Scaling Your Database With Infinispan
Mircea Markus (@mirceamarkus, Blog, GitHub)
Ways of scaling out database systems using the Infinispan data grid.

Come and learn about OpenShift, JBoss Fuse, Fabric8 and HawtIO

Attend Red Hat's complimentary, hands-on technical workshop and experience Red Hat JBoss Fuse and OpenShift. Learn how middleware and cloud solutions bridge theory with reality. Come and learn about OpenShift, JBoss Fuse, Fabric8 and HawtIO, and how these technologies can help you implement a successful DevOps strategy with automation, continuous delivery, and a deep understanding of your middleware.


After attending this FREE 1-day workshop you’ll be able to:
  • Learn how an integration Platform-as-a-Service (iPaaS) connects on-premise and cloud solutions, and reap the operational efficiencies that OpenShift brings combined with the messaging and routing/mediation/transformation of Red Hat JBoss Fuse.
  • Work with hands-on labs based on real-world case studies led by experienced solution architects.
  • Learn how to use open source integration and messaging software safely and securely in your enterprise.
Locations:
October 22, 2014 – San Francisco
October 23, 2014 – Los Angeles
October 28, 2014 – Chicago
October 29, 2014 – New York
November 4, 2014 – Houston
December 2, 2014 – Boston
December 3, 2014 – Atlanta

Please make sure to bring a laptop with a minimum of 2GB of RAM (4GB if using a virtual machine) and register at the official event website.

Developer Interview (#DI 6) Geert Schuring (@geertschuring) about Fuse, Tinkerforge with Apache Camel and Open Source

I'm happy to welcome you to another episode of my developer interviews. This time it happened in the same timezone, and a wide-awake Markus got the chance to talk to Geert Schuring (@geertschuring, blog), who works for the Dutch company Luminis. We talked about his latest project with JBoss Fuse and why he loves it a lot. Besides that, he gave a very impressive demo which will make my daughters jealous: he controlled a Tinkerforge LED panel with the help of Apache Camel routes (find the source code for the Camel component in his Bitbucket repository).

Geert has been working as a Java developer since 2006. He has always loved the open source concept, particularly anything that has to do with messaging. For him, working on messaging systems and service-oriented applications is like playing Transport Tycoon. He specializes in the Camel/ActiveMQ combination, particularly in the Red Hat Fuse product. At home he likes to install computers and attach a bunch of sensors to them to collect all kinds of data. So he is monitoring temperature, humidity, lighting levels, and motion in every room.

As usual, time to grab a coffee+++ and lean back while listening! Thank you Geert, for taking the time!!

The Heroes of Java: Dan Allen

The "Heroes of Java" series took a long break. Honestly, I thought it might end in the middle of nowhere, even if there are still so many people I would love to include here. One of them is Dan. The first time I asked him to contribute is almost one and a half year back and with everything which happened in the meantime, I made my peace with not getting an answer anymore. But the following arrived in my inbox during JavaOne and was basically a birthday present for me. So, I open the Heroes of Java book again today and add another chapter to it! Thank you Dan! It is very good to call you a friend!

Dan Allen
Dan Allen is an open source and standards advocate and innovator. He worked at Red Hat as a Principal Software Engineer. In that role, he served as the Arquillian community manager, contributed to various open source projects (including Arquillian, Asciidoctor, Awestruct and JBoss Forge) and participated in the JCP. He helped a variety of open source projects become wildly successful. He's also the author of Seam in Action (Manning, 2008), has written technical articles for various publications and is an internationally recognized speaker.

General
Who are you?
I’m an open source advocate and developer, community catalyst, author, speaker and business owner. Currently, I’m working to improve the state of documentation by leading the Asciidoctor project, advocating for better software quality by advocating for Arquillian, and, generally, doing whatever I can to make the open source projects to which I contribute, and their communities, wildly successful. After a long conference day, you’ll likely find me geeking out with fellow community members over a Trappist beer.

Your official job title at your company?
Vice President, Open Source Hacker and Community Strategist at OpenDevise, a consulting firm I founded with Sarah White.

Do you care about it?
I care more about this title, compared to titles I’ve had in the past, primarily because I got to define it.

In general, though, titles can be pretty meaningless. Take my previous title, Middleware Principal Software Engineer. All a title like that really accomplishes is communicating an employee's pay grade. The honorific that follows “Principal” is “Senior Principal”. Then what next? “Principal Principal?” What was I before? A Junior Insignificant Engineer? We might as well just use number grades like in the US government (e.g. GS-10). At least that's a logical system.

Like many of my peers, I’ve always sought to define my own title for my role. To me, the purpose of a title is to help others know your specialty and focus. That way, they know when you’re the one they need to seek out. That’s why I chose the title “Open Source Hacker and Community Strategist”.

I live and breathe open source, so the “Open Source” part of the title fits. If you want to discuss anything about open source, I’m always game.

I also love community, especially passionate ones. I’m always thinking about it and how to make it work better. That’s where the term “community strategist” comes in.

I enjoy getting people excited about a technology and then being there to help get them going when they find their passion to improve or innovate on it. It’s such a thrilling and proud experience for both sides. To me, that feeling is called open source. I simply work to reproduce it over and over as an “Open Source Hacker and Community Strategist”. Maybe one day people will recognize me as a “Serial Community Creator” ;)

Those of us in open source also identify ourselves by the projects we lead or help manage, if any. Currently, I’m the Asciidoctor project lead—​and it’s about as much as I can handle.

Do you speak foreign languages? Which ones?
I wish. I studied French in high school, but consider that experience purely academic. I’m challenging myself to read tweets in French to brush up on what I once knew.

My real life experience with foreign languages comes from interacting with open source community members from around the globe and spending time in other countries. Even though I cannot understand other languages, I enjoy taking in the sounds and rhythms like music. There’s a certain amount of enjoyment I get from listening without the distraction of comprehension.

My favorite foreign language experience was working with the translations—​and their translators—​of the Arquillian User Guides. Not only did it expose me to a lot of languages (over a dozen), it gave me a first-hand appreciation for how much language plays into a person’s identity and the feeling of pride for one’s country.

The experience also pushed me to understand Unicode and fonts. I’m proud to say that I get the whole point of Unicode and how it works (at least from a programming standpoint).

I look forward to working more with translations, rethinking how translations are managed and continuing to take in the sounds and rhythms of languages. One day, perhaps, I will be fluent in at least one of them.

How long is your daily "bootstrap" process?
A more interesting question might be “when?” since I keep some pretty odd hours. My daily goal is usually to get to bed before the sun comes up. That makes my breakfast and bootstrap process your lunch. That all depends on timezone, of course. As one of my colleagues pointed out, I’m surprisingly non-Vampirish at conferences.

You may be wondering what’s with the crazy schedule. The thing about managing an open source project is that you never know when someone is going to be ready to participate. When someone shows up ready to participate, you need to jump on the opportunity. It could be a while (if ever) before they have time again. And that person could be in any time zone in the world.

Truth be told, I like the night just as much as the day anyway. There’s a solitude at night that I enjoy and I often do some of my best work then. Other times, I just enjoy the silence. I look forward to the day too, especially when the view of the Colorado Rockies is clear. I do some of my best work against the backdrop of their purple or white peaks. You might say that I draw inspiration from both the day and night to feed my creativity.

I only do coffee first thing in my “morning”, but I do the other bootstrap activities (like Twitter) several times a day. It takes me about an hour or two to sift through my e-mail and Twitter, with a pit stop at Google+.

Twitter
You have a twitter handle? Why?
For sure. It’s @mojavelinux.
I have a Twitter account:

  • to be open
  • to connect
  • to discover
  • to report
  • to keep in touch

When I first started using Twitter (over 6 years ago), many people thought it was ridiculous and pointless. I was drawn to it because it offered a way to communicate without any prior arrangements. It’s sort of like a global IRC channel with a contextual filter applied to it.

Twitter has changed the way I do business, and the way I interact with my colleagues and community. Rather than try to explain it, I’ll give two examples.

When we were growing the Seam 3 community, we didn’t just wait for people to come join the mailing list. We looked for people talking about JSF and Java EE on Twitter. One of the more vocal people at that time was Brian Leathem. When he posted feedback or a complaint about JSF, we would engage him by responding to him directly. That turned his post into the start of a conversation or design session. When it came time to hire someone for a related position, he was already a top candidate, and has since become a top employee. There are more stories like Brian’s.

It’s easy to conclude that we “hired someone we met on Twitter”. That misses the whole point. Twitter’s public channel gave us an opportunity to find someone who has deep interest and experience with a particular technology or platform. So public that we don’t even have to know where to look for each other (except on Twitter). The meetup is inevitable.

Twitter has also eliminated the overhead of communicating with associates in your own company or even other companies. You just put out a broadcast on Twitter, usually planting a few trigger words or tags, and that person will see it, or someone will pass it on to that person. Either way, you cut out the whole hassle of an employee directory. There’s a global conversation happening on Twitter and we’re all a part of it. Now that’s open.

Whom are you following in general?
First and foremost, my fellow community members. As I mentioned, Twitter is how I keep the pulse on my community and communicate with them throughout the day. I follow a few company and project feeds, such as GitHub and Java EE, but mostly I like to know there is a person behind the account.

I’m hesitant about following anyone I haven’t met, either in person or through a conversation online. I follow the same policy for LinkedIn and Google+ as well.

Do you have a personal "policy" for twitter?
One policy is to stay dialed in. I plow through my timeline at least once a day and try to respond to any questions I’m asked. As a community leader, it’s important to be present and participate in the global conversation. Some days, I iron out my agenda only after consulting my stream.

I do make sure to not let it take over (sort of). When I find myself only reading or retweeting, but not sharing, I realize I need to get back to creating so that I have something to share (or just take a break).

I’m very careful to post and retweet useful information. That’s an important part of my personal policy. I use tools like Klout, the Twitter mentions tab and the new Twitter analytics to learn what people consider useful or interesting and focus on expanding on those topics. I dialing down topics that get little response because I respect the time of my followers.

Does your company restrict or encourage your Twitter usage?
The company policy is, use your own judgment.

Public social networks have had a tremendously positive impact on open source, primarily because open source is both public and social. That makes Twitter pretty central to my position. We often discover new contributors (and vice-versa) on Twitter. We also use it as a 140 character limit mailing list at times (which, trust me, is a relief from the essays that are often found on real mailing lists).

Simply put, I couldn’t do my job (in this day and age) without Twitter (or something like it).

Work
What’s your daily development setup?
A tabbed terminal with lots of Vim and a web browser. Nearly all the work I do happens in these environments. Since I’ve been heavily involved in AsciiDoc and writing content in general, many of my Vim sessions have an AsciiDoc document queued up.

I do all my Ruby development in Vim. I rely on syntax highlighting and my own intuition as my Ruby IDE. If you saw the number of times I split the window, it would frighten you. Don’t mimic what I do, it’s probably terribly inefficient, but somehow it works for me.

When I need to do some Java hacking, I absolutely must fire up an IDE. Editing Java in Vim (without any additional plugins) is just a waste of time. I’m most comfortable in Eclipse because that’s what I used first in my career. However, I’ve been firing up IntelliJ IDEA more often lately, and I do like NetBeans on occasion. When I have to edit XML in the project, I flip back to Vim because copy-paste is much more efficient :)

The development tools in the browser are a life and time saver when editing CSS. I like to work out the CSS rules I want in a live session, then transfer them to the stylesheet in the project. It all begins with “Inspect element”.

Which is the tool providing most productivity to your work?
Vim. I’ve used Vim every single day I’ve been at a computer for the last decade. I couldn’t imagine life without it. Vim is my hammer.

Your preferred way of interacting with co-workers?

Primarily async communication, with a few face-to-face meetups a year.

The async communication is a mix of mailinglists, social networks, emails and (on and off) IRC. Most personal emails with my close colleagues have been replaced by Google+ and Twitter private messages, since we all have too much email. You’d be amazed how much more effective those private messages are. Something certainly worth noting.

We usually get face time at conferences like Devoxx and JavaOne. This time is so important because it’s when we form the impression of the person behind the screen name. After you’ve met someone and heard their voice, you’ll never read an email from them the same way again. You’ll hear it coming from them, with their voice and expressions. Those impressions, and the bonds you form in person, are what make the virtual relationships work. You also discover some other things to talk about besides tech (or your tech in particular).

Occasionally, I get put on these teams that like to do phone meetings. First, will someone please kill conference lines? They are horrible and a buzz kill. Besides that, phone calls in a global company simply don’t work. No time is a good time for someone. When we finally do manage to get (most) everyone on the phone, no one knows when to talk (or shut up). It’s a circus. Return me to my async communication.

If I do need to be “on the phone”, I prefer Google Hangout (when it works). I’m not exaggerating when I say it’s almost as good as being in person.

What’s your favorite way of managing your todo’s?
I did a lot of research in this area and decided on an online application named Nirvana. It adheres to David Allen’s GTD method more faithfully than any other one I evaluated. When I’m good about sticking to it, it serves me well.

When I’m not so good, I fall back to my two anchors, a text file named WORKLOG and my email inbox.

One trick I’ve used for years, which works great for context switching, is maintaining a WORKLOG file in each project that I work on. The tasks in this file aren’t perk pressing, but do remind me of what I want to do next when I have time to work on the project. It’s especially useful when you return to a project after a long break.

If you could make a wish for a job at your favorite company, what would that be?
I’m at the point now where my ideal job isn’t at someone else’s company, but at my own. One of the main reasons I love open source is the autonomy it grants. I don’t have problems finding ways to create value, but I do sometimes have problems convincing my employer to pursue that value creation.

In my ideal job, which I’m now pursuing, I can create value anyway I want, I can judge when I’ve succeeded and when I’ve failed for myself, I can decide when growth is necessary and when it isn’t and I can defend the principles important to me. That’s why my wife and I took the step to create our own business. Our goals are pretty simple: survive, be happy & healthy, create value, work in open source and help clients be wildly successful.

Java
You’re programming in Java. Why?
I’m a strong believer in portability and choice. And I believe the JVM provides us that freedom. The fact it’s one of the most optimized and efficient runtimes is just icing on the cake.

I use Java because it’s the default language on the JVM. If another language replaced it as the default, I’d probably use that instead. Java is a means to and end to run and integrate code on the common runtime of the JVM. There are some compelling features that have made Java enjoyable, such as annotations and now lambdas and streams. However, if I have my choice, I prefer other languages, such as Ruby, Groovy and Clojure…​as long as the language runs well on the JVM :)

What’s least fun with Java?
The ceremony and verbosity. It’s too much to type. I like code that can get a lot done in a little amount of space, but still be readable and intuitive. Java requires a lot of space.

Java is also missing some really key features from the standard library that you find in most other languages. A good example is a single function that can read all the content from a file or URL. It’s a simple concept. It should have a simple function. Not so with Java.

Also, getters and setters are dumb.

If you could change one thing with Java, what would that be?
Less ceremony for imports. I know, that’s not the first thing that comes to a lot of people’s minds…​that is unless you’ve done a lot of work in a dynamic language.

One of the biggest differences between Java and dynamic languages not often mentioned is the number of types in the default language set and the number of import statements you need to get more.

It may not seem such a big deal, especially since IDEs help manage the import statements, but you’d be surprised how much they still slow you down, and outright paralyze development without the help of an IDE. In Ruby (and to some extent, Groovy), you can write most simple programs without a single import (require) statement. That means you can just keep plugging away.

Ruby also let’s you import a whole library so it’s accessible to all the files in your application with a single statement (a RubyGem). In Java, you have to import every single type you use (or at least every package that contains them) in every single file. That’s a huge number of extra lines to manage.

My hope is that this improvement comes along with Java modularity. You can import a module into your application, then use the types from it anywhere. That would be game changing for me. Combined with the language improvements in Java 8, my efficiency in Java just might be able to catch up to my efficiency in Ruby.

What’s your personal favorite in dynamic languages?
Ruby. I’ve now written more code in Ruby than in any other programming language (https://www.openhub.net/accounts/mojavelinux/languages). (I’ve also explored the Ruby and Java interop extensively). I can attest that Ruby is very natural, just as the language designer intended it to be.

I’m also a fan of Groovy and Clojure. I like Groovy for the reasons I like Ruby, with the added benefit that it integrates seamlessly with Java.

Clojure is my “challenge yourself language”. I wouldn’t say it feels natural to me yet, but it pushes me to write better code. It’s true what they say about a LISP. It does expand your thinking.

Which programming technique has moved you forwards most and why?
Functional programming, no doubt. This is a popular response, but for good reason. It’s more than just a trend.

From my experience working with Java EE, Seam and CDI, I believe I’m qualified to say that managing state in a shared context is difficult in the best cases and usually fallible or impossible. As isolated processes become increasingly rare, we must change our approach to development.

Functional programming gives us the necessary tools. Higher-order functions allow us to compose logic without having to rely on class hierarchies and the temptation of shared state. Persistent collections and the absence of side effects let us write code that is thread-safe by default and, better yet, prepared to be optimized for multi-core and even distributed environments.

Don’t take my word for it, though. Just listen to a few of Rich Hickey’s talks, then grab a book or tutorial on Clojure and start studying it. Your mind will convince you.

What was the biggest project you’ve ever worked on?
It was a J2EE web application that facilitated mortgage lending and automated appraisal services. The application was written in a somewhat obscure component-based framework that predated JSF and talked to an EJB2 backend and webMethods services. It had to be loaded on the boot classpath of WebLogic in order to run, for reasons I’ll never understand. In my time working there, the test suite never completed successfully and no one could figure out how to fix the behemoth. Debugging was a nightmare. It wasn’t pretty. Let’s just say I appreciated the need for a lightweight framework like Spring and changed my career path once I lost the stomach to work on this system.

The nice part about that job was that I got experience using the XP development methodology (story cards, pair programming, continuously failing integration, etc). It’s probably the only reason the application was staying afloat and moving forward at all.

Which was the worst programming mistake you did?
Not documenting (and not testing).

I’m always getting on myself for not documenting. We think of programming mistakes as logic or syntax errors, but the worst crimes we can commit are not passing on knowledge and stability. It’s like spreading land mines around a property, forgetting about them and then turning the property into a park. The mistakes are going to be made by the next person who isn’t aware of all those things you need to know to keep the system running securely.

I’ll end with a variation on the most popular Tweet at this year’s OSCON to help encourage you to be a more disciplined programmer.
Always [write documentation] as if the [person] who ends up maintaining your code will be a violent psychopath who knows where you live.
— John Woods (source)

The future is Micro Service Architectures on Apache Karaf

This is a guest blog post by Jamie Goodyear (blog, @icbts). He is an open source advocate, Apache developer, and computer systems analyst with Savoir Technologies; he has designed, critiqued, and supported architectures for large organizations worldwide. He holds a Bachelor of Science degree in Computer Science from Memorial University of Newfoundland.

Jamie has worked in systems administration, software quality assurance, and senior software developer roles for businesses ranging from small start-ups to international corporations. He has attained committer status on Apache Karaf, ServiceMix, and Felix, is a Project Management Committee member on Apache Karaf, and is an Apache Software Foundation member. His first printed publication was co-authoring Instant OSGi Starter, Packt Publishing, with Johan Edstrom, followed by Learning Apache Karaf, Packt Publishing, with Johan Edstrom and Heath Kesler. His third and latest publication is Apache Karaf Cookbook, Packt Publishing, with Johan Edstrom, Heath Kesler, and Achim Nierbeck.

I like Micro Service Architectures.

There are many descriptions of what constitutes a micro service, and many specifications that could be described as following the pattern. In short, I tend to describe them as the smallest unit of work that an application can do as a service for others. Bringing these services together, we’re able to build larger architectures that are modular, lightweight, and resilient to change.

From the point of view of modern systems architecture, the ability to provision small applications with full lifecycle control is our ideal platform. Operators need only deploy the services they need, updating them in place and spinning up additional instances as required. Another way of describing this is Applications as a Service (AaaS). Take particular small services, such as Apache Camel routes or Apache CXF endpoints, and bring them up and down without destroying the whole application. Apache Karaf IS the platform to do micro services.

To make micro services easier, Karaf provides many helpful features right out of the box:

  • A collection of well-tested libraries and frameworks to help take the guesswork out of assembling a platform for your applications.
  • Provisioning of libraries or applications via a variety of mechanisms, such as Apache Maven.
  • Feature descriptors to allow deployment of related services and resources together (a minimal sketch follows this list).
  • Console and web-based commands to help make fine-grained control easy.
  • Simplified integration testing via Pax Exam.
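The feature descriptors mentioned in the list above deserve a tiny illustration. A minimal sketch of one - the names, version, and Maven coordinates are made up for the example - could look like this:

<features name="hello-features" xmlns="http://karaf.apache.org/xmlns/features/v1.2.0">
  <feature name="hello-service" version="1.0.0">
    <!-- Pull in a required feature and the application bundle in one go. -->
    <feature>camel-core</feature>
    <bundle>mvn:org.example/hello-service/1.0.0</bundle>
  </feature>
</features>

Installing such a feature from the console then provisions the bundle together with everything it depends on.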

One of my favourite micro service patterns is to use Apache Camel with a Managed Service Factory (MSF) on Apache Karaf. Camel provides a simple DSL for wiring together Enterprise Integration Patterns, moving data from endpoint A to endpoint B, for example. A Managed Service Factory is a modular pattern for configuration-driven deployments of your micro services - it ties together ConfigAdmin, the OSGi Service Registry, and our application code.

For instance, a user could create a configuration to wire their Camel route; using an MSF, a unique PID will be generated per configuration. This pattern is truly powerful. Create 100 configurations, and 100 corresponding micro services (Camel routes) will be instantiated. Only one set of code, however, requires maintenance.

Let’s take a close look at the implementation of the Managed Service Factory. The ManagedServiceFactory is responsible for managing instantiations (configurationPid), creating or updating values of instantiated services, and finally, cleaning up after service instantiations. Read more on the ManagedServiceFactory API.

import java.util.Dictionary;

import org.apache.camel.CamelContext;
import org.osgi.framework.BundleContext;
import org.osgi.service.cm.ConfigurationException;
import org.osgi.service.cm.ManagedServiceFactory;

public class HelloFactory implements ManagedServiceFactory {

    private String configurationPid;
    private BundleContext bundleContext;
    private CamelContext camelContext;

    @Override
    public String getName() { return configurationPid; }

    @Override
    public void updated(String pid, Dictionary dict) throws ConfigurationException {
        // Create a dispatching engine for the given configuration.
    }

    @Override
    public void deleted(String pid) {
        // Delete the corresponding dispatch engine for the given configuration.
    }

    // Lifecycle and dependencies are wired in via Blueprint.
    public void init() {}
    public void destroy() {}
    public void setConfigurationPid(String configurationPid) { this.configurationPid = configurationPid; }
    public void setBundleContext(BundleContext bContext) { this.bundleContext = bContext; }
    public void setCamelContext(CamelContext camelContext) { this.camelContext = camelContext; }
}

We implement the ManagedServiceFactory interface to work with DispatchEngines. The DispatchEngine is a simple class that contains the code for instantiating a Camel route using a given configuration.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;

public class HelloDispatcher {

    // The CamelContext, 'greeting' and 'name' are supplied from the managed configuration.
    private CamelContext camelContext;
    private String greeting;
    private String name;

    public void start() throws Exception {
        // Create a RouteBuilder from the configuration values and add it to the CamelContext.
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer://helloTimer?fixedRate=true&period=1000")
                    .routeId("Hello " + name)
                    .log(greeting + " " + name);
            }
        });
    }

    public void stop() throws Exception {
        // Remove the route from the CamelContext again.
        camelContext.stopRoute("Hello " + name);
        camelContext.removeRoute("Hello " + name);
    }
}



When we deploy these classes as a bundle into Karaf, we obtain a particularly powerful Application as a Service. Each configuration we provision to the service instantiates a new Camel route (these configuration files quite simply consist of a greeting and a name). Camel’s Karaf commands allow for fine-grained control over these routes, providing the operator with simple management.
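To make the "one configuration per route" idea concrete, here is a hedged sketch of such a configuration on Karaf. The factory PID and file name are made up for the illustration; only the greeting and name keys come from the example above. Dropping a file like this into Karaf's etc/ folder spins up one route, a second file spins up another, and so on:

# etc/org.example.hello-bonjour.cfg  (factory PID and file name are hypothetical)
greeting = Bonjour
name = Karaf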

Complete code for the above example is available via github, and is explored in detail in Packt Publishing’s Apache Karaf Cookbook.

Micro Service Architectures such as the above unleash the power of OSGi for common applications such as a Camel route or CXF endpoint. These are not the only applications that benefit, however. I’d like to share one of our Karaf success stories that highlights how Apache Karaf helped bring structure to an existing large-scale micro service based project.

Imagine having hundreds of bundles distributed over dozens of interconnected projects, essentially being deployed into a plain OSGi core and left to luck to boot properly. This is the situation that OpenDaylight, a platform for SDN and NFV, found itself in a few months ago.


Using Karaf feature descriptors, each project was able to organize its dependencies, bundles, and other resources into coherent structures. Custom commands were developed to interact with their core services. Integration testing of each project into the whole was automated. Finally, all of these projects have been integrated into their own custom distribution.

Their first Karaf-based release, Helium, is due out very soon. We’re all looking forward to welcoming the SDN & NFV community to Karaf.

While the Apache Karaf 3.0.x line is maintained as our primary production target, the community has been as busy as ever developing the next generation of Karaf containers.

The 4.0.x line will ship with OSGi Rev5 support via Felix 4.4.1 and Equinox 3.9.1-v20140110-1610, and a completely refactored internal framework based on Declarative Services instead of Blueprint. From a user's point of view, these changes will yield a smaller, more efficient Karaf core. There will be a Blueprint feature present in Karaf so that you can easily install Blueprint-based applications. You will always be capable of using Blueprint in Karaf. So the main difference from a user perspective is that you'd need to depend on the Blueprint service if you need it.

This has been a very brief overview of Micro Service Architectures on Apache Karaf and Karaf's future direction. I'd suggest that anyone interested in micro services visit the OSGi Alliance website and join the Apache Karaf community. For those who would like to dive into an advanced custom Karaf distribution, have a look at Aetos. Apache Karaf is also part of JBoss Fuse.

Virtual JBoss User Group (Virtual:JBUG)

Do you know what the number one stop for all kinds of great JBoss developer sessions is? No? It is the Virtual JBoss User Group. It was launched at the beginning of this year and has gained more and more traction. The idea behind it is pretty simple: make it easy for all interested developers around the world to attend a user group and take advantage of the knowledge that is shared in those meetings. With all our days becoming busier and busier, it is just a bit like Netflix. Instead of having to walk or drive to attend a session you're interested in, you can either join the live stream or watch a recording of the session. Some pretty awesome ones have already been held since its launch. But this is only the beginning. We're looking for speakers and content and of course appreciate any ideas and hints to improve the meetings.

So, if you are interested in the latest and greatest from the JBoss ecosystem, just tune in to the meetings or subscribe to the YouTube channel.

If you want to give a session, send your suggestions to Paul Robinson (@Pfrobinson) or myself (@myfear) by reaching out to us on Twitter, leaving a comment on this blog post, or sending me an email to markus at jboss dot org.

Review: "Java EE 7 Performance Tuning and Optimization" by Osama Oransa

The latest Packt Publishing Java EE 7 books are all about performance and tuning. I had the pleasure of reviewing another one, "Java EE 7 Performance Tuning and Optimization" by Osama Oransa.

Abstract
With the expansion of online enterprise services, the performance of an enterprise application has become a critical issue. Even the smallest change to service availability can severely impact customer satisfaction, which can cause the enterprise to incur huge losses. Performance tuning is a challenging topic that focuses on resolving tough performance issues.
In this book, you will explore the art of performance tuning from all perspectives using a variety of common tools, while studying many examples.
This book covers performance tuning in Java enterprise applications and their optimization in a simple, step-by-step manner. Beginning with the essential concepts of Java, the book covers performance tuning as an art. It then gives you an overview of performance testing and different monitoring tools. It also includes examples of using plenty of tools, both free and paid.

Book: "Java EE 7 Performance Tuning and Optimization"
Language : English
Paperback: 478 pages
Release Date: June 23, 2014
ISBN-10: 178217642X
ISBN-13: 978-1782176428

About the Author
Osama Oransa (blog) is an IT solution architect with more than 12 years of technical experience in Java EE. He is a certified Java enterprise architect and an SME in web services technology. He is currently working with the Vodafone Group as a solution architect. He has a diploma in IT from the Information Technology Institute (ITI) and a diploma in CS from the Arab Academy for Science, Technology and Maritime Transport (AASTM). He is currently working towards a Master's degree in CS. In 2010, one of his projects in Pulse Corp, "Health Intact", won Oracle Duke's Choice Award. He is the founder of more than 12 open source projects hosted on SourceForge.

The Content
Chapter 1, Getting Started with Performance Tuning, takes you through the art of performance tuning with its different components and shows you how to think when we face any performance issue. It focuses on preparing you to deal with the world of performance tuning and defining the handling tactics.
Chapter 2, Understanding Java Fundamentals, lays the foundation of required knowledge of the new features in Java Enterprise Edition 7 and different important Java concepts, including the JVM memory structure and Java concurrency. It also focuses on the different Java Enterprise Edition concurrency capabilities.
Chapter 3, Getting Familiar with Performance Testing, discusses performance testing with its different components, defines useful terminologies that you need to be aware of, and then gives hands-on information about using Apache JMeter to create your performance test plans for different components and get the results.
Chapter 4, Monitoring Java Applications, dissects the different monitoring tools that will be used in performance tuning, starting from the operating system tools, different IDE tools, JDK tools, and standalone tools. It covers JProfiler as an advanced profiling tool with its offline profiling capabilities.
Chapter 5, Recognizing Common Performance Issues, discusses the most common performance issues, classifies them, describes the symptoms, and analyzes the possible root causes.
Chapter 6, CPU Time Profiling, focuses on the details of getting the CPU and time profiling results, ways to interpret the results, and ways to handle such issues. It discusses the application logic performance and ways to evaluate different application logics. It provides the initial performance fixing strategy.
Chapter 7, Thread Profiling, discusses thread profiling with details on how to read and interpret thread profiling results and how to handle threading issues. It also highlights the ways to get, use, and read the thread dumps.
Chapter 8, Memory Profiling, discusses how to perform memory profiling, how to read and interpret the results, and how to identify and handle possible issues. It also shows how to read and query memory heap dumps and analyze the different out of memory root causes. The chapter finishes your draft performance fixing strategy.
Chapter 9, Tuning an Application's Environment, focuses on tuning the application environment, starting from the JVM and passing through other elements such as the application servers, web servers, and OS. We will focus on selected examples for each layer and discuss the best practices for tuning them.
Chapter 10, Designing High-performance Enterprise Applications, discusses design and architecture decisions and the performance impact. This includes SOA, REST, cloud, and data caching. It also discusses the performance anti-patterns.
Chapter 11, Performance Tuning Tips, highlights the performance considerations when using the Agile or Test-driven Development methodologies. This chapter also discusses some performance tuning tips that are essential during the designing and development stages of the Java EE applications, including database interaction, logging, exception handling, dealing with Java collections, and others. The chapter also discusses the javap tool that will help you to understand the compiled code in a better way.
Chapter 12, Tuning a Sample Application, includes hands-on, step-by-step tuning of a sample application that has some performance issues. We will measure the application performance and tune the application issues, and re-evaluate the application performance.

Writing and Style
The language is clear and easy to follow. Illustrations and tables make the written word easier to understand. Even a non-native speaker can follow easily.

Conclusion and Recommendation
80 percent of the book covers general performance tuning, monitoring, and profiling. Only the last three chapters cover additional information for enterprise applications. The sample application walkthrough is helpful and gives beginners a decent idea of what to look at if you have never done this kind of thing before. The title is highly confusing and I would have picked a more general name. This book is not intended to cover Java EE features; it simply highlights some essential fundamentals we should be aware of while dealing with Java performance tuning, and it applies more generally to Java SE based applications.

Exploring the SwitchYard 2.0.0.Alpha2 Quickstarts

In one of my last posts I explained how to get started with SwitchYard on WildFly 8.1. In the meantime the project has been busy and released another Alpha2. A very good opportunity to explore the quickstarts and refresh your memory about it. Besides the version change, you can still use the earlier blog post to set up your local WildFly 8 server with the latest SwitchYard. As with all frameworks, there is plenty of stuff to explore, and a prerequisite for doing this is a working development environment to make it easier.

Setting up JBoss Developer Studio
First things first. Download a copy of the latest JBoss Developer Studio (JBDS) 7.1.1.GA for your operating system and install it. You should already have a JDK in place so a simple

java -jar jbdevstudio-product-eap-universal-7.1.1.GA-v20140314-2145-B688.jar

will work. A simple 9-step installer will guide you through the necessary steps. Make sure to select a suitable JDK installation. JBDS works and has been tested with Java SE 6.x and 7.x. If you like, install the complete EAP, but it's not a requirement for this little how-to. A basic setup without EAP requires roughly 400 MB of disc space and shouldn't take longer than a couple of minutes. If you're done with that part, launch the IDE and go on to configure the tooling. We need the JBoss Tools Integration Stack (JBTIS). Configure it by visiting "Help -> Install New Software" and adding a new update site with the "Add" button. Call it SY-Development and point it to: "http://download.jboss.org/jbosstools/updates/development/kepler/integration-stack/"
Wait for the list to refresh, expand "JBoss Integration and SOA Development", and select all three SwitchYard entries. Click your way through the wizards and you're ready for a restart.

SY Tooling 2.0.0
Please make sure to disable "Honour all XML schema locations" in Preferences, XML -> XML Files -> Validation, after installation. This will prevent spurious XML validation errors from appearing on switchyard.xml files.

Preventing erroneous XML validation
That's it. Go ahead and import the bean-service example from the earlier blog post (Import -> Maven -> Existing Maven Projects).

General Information about SwitchYard Projects
Let's find out more about the general SwitchYard project layout before we dive into the bean-service example. A SwitchYard project is a Maven-based project with the following characteristics:
  • a switchyard.xml file in the project's META-INF folder
  • one or more SwitchYard runtime dependencies declared in the pom.xml file
  • org.switchyard:switchyard-plugin mojo configured in the pom.xml file
Generally, a SwitchYard project may also contain a variety of other resources used to implement the application, for example Java, BPMN2, DRL, BPEL, WSDL, XSD, and XML files. The tooling supports you in creating, changing, and developing your SY projects. You can also add SY capabilities to existing Maven projects. More details can be found in the documentation for the Eclipse tooling.
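To make those characteristics a bit more concrete, here is a hedged sketch of the relevant pom.xml pieces. The artifact names follow the SwitchYard conventions, but the version property and the plugin executions are assumptions - check the quickstart poms for the authoritative values:

<!-- The runtime dependency for a bean service (version is an assumption) -->
<dependency>
  <groupId>org.switchyard.components</groupId>
  <artifactId>switchyard-component-bean</artifactId>
  <version>${switchyard.version}</version>
</dependency>

<!-- The switchyard-plugin mojo mentioned above -->
<plugin>
  <groupId>org.switchyard</groupId>
  <artifactId>switchyard-plugin</artifactId>
  <version>${switchyard.version}</version>
  <!-- executions and goals as configured in the quickstart poms -->
</plugin>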

Exploring the Bean-Service Example
The Bean-Service example is one of the simpler ones and gives a good first impression of SY. All of the example applications in the Quickstarts repository are included in the quickstarts/ directory of your installation and are also available on GitHub. The bean-service quickstart demonstrates the usage of the bean component. The scenario is easy: an OrderService, which is provided through the OrderServiceBean, and an InventoryService, which is provided through the InventoryServiceBean implementation, take care of orders. Orders are submitted through OrderService.submitOrder, and the OrderService then looks up items in the InventoryService to see if they are in stock and the order can be processed. Up to here it is basically a simple CDI-based Java EE application. In this application the simple process is invoked through a SOAP gateway binding (which is indicated by the little envelope).
Bean Service Quickstart Overview
Let's dive into the implementation a bit. Looking at the OrderServiceBean reveals some more details. It is the implementation of the OrderService interface, which defines the operations. The OrderServiceBean is just a bean class with a few extra CDI annotations. Most notable is the
@org.switchyard.component.bean.Service(OrderService.class)
The @Service annotation allows the SwitchYard CDI extension to discover your bean at runtime and register it as a service. Every bean service must have an @Service annotation with a value identifying the service interface for the service. In addition to providing a service in SwitchYard, beans can also consume other services. Those references need to be injected. In this example the InventoryService is injected:
@Inject
@org.switchyard.component.bean.Reference
private InventoryService _inventory;
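Putting the two annotations together, the bean service ends up looking roughly like this. The type and method names follow the quickstart, but the body and the OrderAck handling are simplified assumptions - the real class does a bit more:

@org.switchyard.component.bean.Service(OrderService.class)
public class OrderServiceBean implements OrderService {

    @Inject
    @org.switchyard.component.bean.Reference
    private InventoryService _inventory;

    @Override
    public OrderAck submitOrder(Order order) {
        // Use the injected InventoryService reference to check the stock level,
        // then build an acknowledgement for the caller (details simplified).
        Item item = _inventory.lookupItem(order.getItemId());
        OrderAck ack = new OrderAck();
        ack.setOrderId(order.getOrderId());
        ack.setAccepted(item != null);
        return ack;
    }
}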
Finally, all you need is the switchyard.xml configuration file, where your services, components, types, and implementations are described:
<composite name="orders">
  <component name="OrderService">
    <implementation.bean class="org.switchyard.quickstarts.bean.service.OrderServiceBean"/>
    <service name="OrderService">
      <interface.java interface="org.switchyard.quickstarts.bean.service.OrderService"/>
    </service>
  </component>
</composite>

That was a very quick rundown. We haven't touched the web service endpoints, the WSDL, or the Transformer configuration and implementation. Have a look at the SwitchYard tutorial published by mastertheboss and take the chance to read more about SY at the following links:



Developer Interview (#DI 5) Jeff Genender (@jgenender) about Apache, Karaf, Data and Integration

In my evening hours I had the chance to talk to Jeff Genender (@jgenender), which resulted in a new episode of my developer interviews. We talked about all things Apache and integration, and also a bit about Java EE and microservices and what today's customers want from a modern architecture.

Jeff Genender is a Java Champion, Apache Member, and Java Open Source consultant specializing in SOA and enterprise service implementation. Jeff has over 23 years of software architecture, team lead, and development experience in multiple industries. He is a frequent speaker at such events as TheServerSide Symposium, JavaZone, Java In Action, JavaOne, JFokus, and numerous Java User Groups on topics pertaining to Enterprise Service Bus (ESBs), Service Oriented Architectures (SOA), and application servers.
Jeff is an active committer and Project Management Committee (PMC) member for Apache ServiceMix, CXF, and Geronimo, a committer on OpenEJB and Mina, and author of several very popular Mojos (Maven plugins). He is the author of Enterprise Java Servlets (Addison Wesley Longman, 2001), co-author of Professional Apache Geronimo (2006, Wiley), and co-author of Professional Apache Tomcat (2007, Wiley). Jeff also serves as a member of the Java Community Process (JCP) expert group for JSR-342 (Java Platform, Enterprise Edition 7 (Java EE 7) Specification) as a representative of the Apache Software Foundation.

As usual, time to grab a coffee+++ and lean back while listening! Thank you Jeff, for taking the time!

 

Red Hat to Acquire FeedHenry – It takes a Village to raise an app

Red Hat, Inc. today announced that it has signed a definitive agreement to acquire FeedHenry, a leading enterprise mobile application platform provider. This will expand Red Hat's portfolio of application development, integration, and Platform-as-a-Service (PaaS) solutions and support our customers with mobile application development in public and private environments.
A warm welcome to the FeedHenry Team at Red Hat!

What is FeedHenry?
It is a cloud-based mobile application platform to design, develop, deploy, and manage mobile applications. The platform provides specific services for security, notification, and data synchronization. You can build hybrid apps not only for iOS, Android, BlackBerry, and Windows Phone mobile devices but also as web apps accessible from any browser. On top of that, it enables developers to build access to corporate data and applications into those apps and to build backend logic that supports their mobile applications.

What are the technical components?
The open and extensible architecture is based on Node.js for client- and server-side mobile app development. It supports a wide variety of popular toolkits, including native SDKs, hybrid Apache Cordova, HTML5, and Titanium, as well as frameworks such as Xamarin, Sencha Touch, and other JavaScript frameworks. The out-of-the-box Node.js plugins are a set of best-in-class Node.js modules that have been tested and curated, ready for developers to cut and paste into their app projects. These exist for things like Dropbox, Facebook, Google APIs, EC2, remote databases, SaaS connectors, and more.

Why and where does it fit in?
FeedHenry is going to be aligned with the open hybrid cloud strategy and will enable enterprises to accelerate mobile app development and backend integration via private clouds, public clouds, and on-premises. So this is an important addition to Red Hat's JBoss xPaaS for OpenShift strategy. Learn more about xPaaS in a recent blog entry. Mobile application services are a key part of that vision, and FeedHenry provides the security, policy management, synchronization, and integration features to support mobile applications.


Join the Webcast
Craig Muzilla, senior vice president, Application Platform Business, Red Hat, and Cathal McGloin, chief executive officer, FeedHenry, will host a webcast to discuss this announcement tomorrow, Sept. 19, 2014, at 11 a.m. EDT. Following remarks, press and analysts are invited to participate in a live question and answer session. Join the webcast or view the replay after the event.

Further Readings
FAQ – Red Hat acquisition of FeedHenry
Read the complete press-release on the official Red Hat website.
@feedhenry
Announcement on the FeedHenry website.
Some real-life demos and tutorials on Vimeo
Blog post from Craig Muzilla

WildFly 9 – Don’t cha wish your console was hawt like this!

Everybody has probably heard the news: the first WildFly 9.0.0.Alpha1 release came out Monday. You can download it from the wildfly.org website. The biggest changes are that it is built by a new feature provisioning tool, which is layered on the now separate core distribution, and that it also contains a new servlet distribution (only a 25 MB ZIP) which is based on it. It is called "web lite" until there's a better name.
The architecture now supports a server suspend mode, also known as graceful shutdown. Only Undertow and EJB3 use this so far; additional subsystems still need to be updated. The management APIs also got notification support. Overall, 256 fixes and improvements were included in this release. But let's put all the awesomeness aside for a second and talk about what this post should be about.

Administration Console
WildFly 9 got a brushed-up admin console. After you have downloaded, unzipped, and started the server, you only need to add a user (bin/add-user.sh/.bat) and point your browser to http://localhost:9990/ to see it.

With some minor UI tweaks this is looking pretty hot already. BUT there's another console out there called hawtio! And what is extremely hot is that it already has some very first support for WildFly and EAP, and here are the steps to make it work.

Get Hawtio!
You can use hawtio from a Chrome extension or in many different containers - or outside a container as a standalone executable jar. If you want to deploy hawtio as a console on WildFly, make sure to look at the complete how-to written by Christian Posta. The easiest way is to just download the latest executable 1.4.19 jar and start it on the command line:
java -jar hawtio-app-1.4.19.jar --port 8090
The port parameter lets you specify on which port you want the console to run. As I'm going to use it with WildFly, which also uses hawtio's default port, I'm simply using another free port.
The next thing to do is to install the JMX-to-JSON bridge on which hawtio relies to connect to remote processes. Instead of using JMX directly, which is blocked on most networks anyway, the Jolokia project bridges JMX MBeans to JSON and hawtio operates on them. Download the latest Jolokia WAR agent and deploy it to WildFly. Now you're almost ready to go. Point your browser to the hawtio console (http://localhost:8090/hawtio/) and switch to the connect tab. Enter the following settings:
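The screenshot with the exact values isn't reproduced here; essentially you point hawtio at the Jolokia agent you just deployed - typically host localhost, the default WildFly HTTP port 8080, and the path jolokia, assuming the WAR kept its default context. If you want to sanity-check the agent before connecting, a plain HTTP request against its read endpoint does the trick (a hedged example under the same assumptions):

curl http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage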
And press the "Connect to remote server" button below. As of today there is not much to see here. Besides some very basic server information, you have the deployment overview and the connector status page.
But the good news is: Hawtio is open source and you can fork it from GitHub and add some more features to it. The WildFly/EAP console is in a hawtio-web subproject. Make sure to check out the contributor guidelines.

JBoss Fuse Component Details and Versions

JBoss Fuse combines several technologies, like core enterprise service bus capabilities (based on Apache Camel, Apache CXF, and Apache ActiveMQ), Apache Karaf, and Fabric8, in a single integrated distribution. The best part is that no matter what your skill level is, contributing to JBoss Fuse can be very rewarding and a great learning experience. You'll meet lots of smart, passionate developers who are all driven to create the best middleware possible in open source! A good way to start is to look at the individual technologies and versions that make up JBoss Fuse. Here's a short overview of the most important ones:

Component          FuseSource 7.1    JBoss Fuse 6.0             JBoss Fuse 6.1
Apache Camel       2.10.0            2.10.2                     2.12.3
Apache ActiveMQ    5.7.0             5.8.0                      5.9.0
Apache CXF         2.6.0             2.6.0                      2.7.0
Apache Karaf       2.3.0             2.3.0                      2.3.0
Fuse Fabric        7.1.0             7.2.0                      -
Fabric8            -                 -                          1.0.0
Spring Framework   3.0.7             3.1.3                      3.2.4
Fuse IDE           7.1.60            6.0 with latest updates    -

If you want to see a bit more about what Fuse can do for you, here's a short introduction:

API Management in WildFly 8.1 with Overlord

I gave a brief introduction to the Overlord project family yesterday. Today it's time to test-drive it a bit. The API Management sub-project released 1.0.0.Alpha1 two days ago, introducing the first set of features from the 18-month roadmap.

What is APIMan exactly?
It is an API management system which can either be embedded in existing frameworks or applications or run as a separate system. So far, so good. But what is API management and why should you care about it? Fact is, today's applications grow in size and complexity and get distributed more widely. Add more consumers to the mix, like mobile devices, TVs, or the whole bunch of upcoming IoT devices, and think about how you would implement access control or usage policies consistently across that many applications. A nightmare candidate. But don't worry too much: this is where API management comes in. APIMan provides flexible, policy-based runtime governance for your APIs. It allows API providers to offer the same API through multiple plans, providing different levels of service to different API consumers. Still sounds complicated? Let's give it a try.

The Library REST-Service
Imagine that a public library has a nice RESTful service which lists books. It's running somewhere and is usually not really access restricted. Now someone came up with the idea to build an amazing mobile app which can find out whether a book is in the library or not. The next step would be to add the option to reserve a book for a couple of hours, which the old system can't do for now. Instead of heavily tweaking the older library application, we're going to use APIMan to provide a consistent API to the mobile application and let it handle authentication for now. The API I'm using here is a simple RESTEasy example. You can use whatever web service endpoint you have to play around with.

Getting Started on WildFly 8.1
The project can be built and deployed on a variety of runtime platforms, but if you want to see it in action as quickly as possible you just need to fork and clone the APIMan GitHub repository and build it with Maven 3.x. If you use the "run-all-wildfly8" profile (something like mvn clean install -Prun-all-wildfly8), you're ready to test-drive it instantly, because it not only builds the project but also downloads, configures, and finally starts the latest WildFly 8.1 for you. It takes a while to build and start up, so bring some patience.
So, all you have to do to explore it is fire up the admin console at http://localhost:8080/apiman-dt-ui/ and use one of the following users to log in (the "!" is part of the password, btw):
  • admin/admin123!
  • bwayne/bwayne123!
  • ckent/ckent123!
  • dprince/dprince123!

Test-Driving The Quickstart
The documentation is a bit weak for now, so I will give you a short walk-through of the console.
Open the console and log in with the admin user. Now you can "Create a new Organization"; let's call it "Public Library" for now. The newly created organization shows you some tabs (Applications, Services, Plans, Members). Switch to the Services tab and click the "New Service" button. Enter "BookListing" as the name, leave 1.0 as the version, and optionally give it a description for informational purposes.
After you click the "Create Service" button you are redirected to the overview page. Switch to the "Implementation" tab and fill in the final API endpoint. In my case this would be: http://localhost:9080/jaxb-json/resteasy/library/books/badger (note: it is deployed on a different WildFly instance). Click "Save" when you're done.

If you switch back to the overview page, you see that the service is in the "Created" state and the Publish button is still grayed out. To be able to publish, we need to add some more information to APIMan. The next step is to add a so-called Plan to the organization. Switch back to it, select the Plans tab, and click the "New Plan" button. Plans basically allow you to group individual policies and assign them to services. Call it "InternetBlackList" and create it by clicking the accompanying button. From the Plan overview select "Policies" and click the "Add Policy" button. Define an "IP Blacklist Policy" and enter a potentially malicious IP address from which you don't want the service to be accessed.


To be able to publish our service, we need to link the newly created plan to the BookListing service. Navigate back there and select the Plans tab. Select the "InternetBlackList" plan and click "Save". Reviewing the "Overview" page of the service now finally shows the "Ready" state and lets us publish it.


Now that it is published, we can actually use it. But we'll take one additional step here and link the service to an application via a contract. Creating a Contract allows you to connect an Application to a Service via a particular Plan offered by the Service. You would want to do this so that your Application can invoke the Service successfully.
Create an application by navigating back to the Public Library organization and clicking the "New App" button. Call it "Munich", leave 1.0 as the version, and enter a description if you like; click "Create Application". The one step left is to link the service and the application, which is done via a contract. Select the "Contracts" page and create a "New Contract" with the button. Enter "book" in the "Find a Service" field to search for our BookListing service. Select it, and now you can create the contract.


The last step is to register the newly created application in the "Overview" page.

That was it. We now have a published service and a registered application. If you navigate to the API page of the application you can see the managed endpoints for the application. If you hover over the service, you get a "copy" button which lets you copy the URL of the managed endpoint funneled through the APIMan gateway.


If you try to access the service from the blacklisted IP address, you will now get an error; otherwise, the gateway proxies your call through to the service.

Notice the apikey query string? This is the key with which the gateway locates your service and proxies your call to the managed endpoint. If you don't want to send it as part of the query string, you can also use a custom HTTP header called X-API-Key.
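Just to illustrate the header variant, here is a minimal Java sketch of calling the managed endpoint through the gateway; the endpoint URL and the key are placeholders you would copy from the application's API page.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ManagedEndpointClient {
    public static void main(String[] args) throws Exception {
        // Placeholder managed endpoint, copied from the application's API page
        URL url = new URL("http://localhost:8080/apiman-rt/gateway/PublicLibrary/BookListing/1.0");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestProperty("X-API-Key", "your-api-key-here"); // instead of ?apikey=...
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            in.lines().forEach(System.out::println);
        }
    }
}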

What's Next?
That was a very quick and incomplete walk-through, but you hopefully got an idea of the basic concepts behind it. APIMan and the other Overlord sub-projects are evolving quickly. They are happy to receive contributions, and if you like what you've seen or have other feedback, don't hesitate to get in touch with the project. If you want to see the more API-like approach, you can also watch and listen to the following screencast. It is a bit outdated, but still helpful.

Overlord – The One Place To Rule And Manage your APIs

We're living in a more and more distributed world today. Instead of individual, departmental projects running on some hardware below a random desk, today's computer systems run at large scale, centralized or even distributed. The need for monitoring and managing never changed, but it has become far more complex over time. If you put all those cross-functional features into one bucket, it would most likely be called "governance". This can happen on many levels: people, processes, and of course infrastructure components.

What is Overlord?
Overlord is a set of sub-projects which deal with different aspects of system governance. All four sub-projects are so-called "upstream" projects for JBoss Fuse Service Works. But Service Works is even more than that, so let's just focus on the four for now.

S-RAMP
Overlord S-RAMP is a full-featured artifact repository composed of a common data model, a powerful query language, multiple rich interfaces, flexible integration, and useful tools. It aims to provide a full implementation of the OASIS S-RAMP specification.

Developer Links:

DTGov
This component provides the capability to manage the lifecycle of systems from inception through deployment and subsequent change management. A flexible, workflow-driven approach enables organizations to customize governance to fit the way they work.

Developer Links:

Runtime Government (RTGov)
This component provides the infrastructure to capture service activity information and then correlate, analyze, and finally present that information in a form a business can use to police business/service level agreements and optimize its operations.

Developer Links:

API Management
If you want to centralize the governance of your APIs, this is the project for you! The API Management project provides a rich management layer used to configure the governance policies you want applied to your APIs. Once configured, the API Management runtime Policy Engine can run as part of a standard Gateway or embedded in any application.

Developer Links:

What's going on lately?
Overlord just got a brand new website up and running. Have a look at it and don't forget to give feedback, or work on it yourself: as it is also open source, you are free to fork it and send a pull request. Make sure to look at the contributor guidelines beforehand.

Developer Interviews (#DI 4) Stan Lewis (@gashcrumb) about #hawtio

Already the fourth edition of my pod- and screencast crossover. Today it was Red Hatter Stan Lewis (@gashcrumb) who took some time to talk about his work on the hot web console that is the new front-end for all things JBoss Fabric8/Fuse. He is a Principal Software Engineer at Red Hat and came on board with the FuseSource acquisition at the end of 2012. He is one of the primary developers of the hawtio web console, an AngularJS web application written in TypeScript for managing JVMs, and he also works closely with the Fabric8 project to develop a poly-container deployment and management platform.

Time to grab a coffee+++ and watch the roughly 20-minute recording. Thank you, Stan, for taking the time!



If you can't get enough and want to know more, take a look at the recording from this year's DevNation conference, where Stan gave a complete overview of how to extend hawtio.

Inside JBoss Data Virtualization – iPaaS Demystified (Part 1)

This is another blog post in the ongoing series about the Red Hat xPaaS solutions, where I am trying to demystify the acronyms a bit and give you more information about the projects and products behind them. After the initial overview, this post focuses on the first aspect of the iPaaS solution: JBoss Data Virtualization.

What is Data Virtualization and why should I care?
Think of data virtualization as a distinct layer between your business applications and your data sources; it can also be described as an integration layer for data. Instead of pulling different data sources into your business application and following a polyglot persistence approach, you not only take advantage of the data-access aspects but also get a consistent view of your distributed data models. All perspectives are encapsulated: data abstraction, federation, integration, transformation, and delivery capabilities combine data from one or multiple sources into reusable and unified logical data models.



To implement such an approach successfully, you need to follow three steps (a small Java consumption sketch follows the list):
  • Connect: Access Data From Multiple Data Sources
  • Compose: Create a Business Friendly Virtual Data Model
  • Consume: Make the Data Model Available to Consumers
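To make the "Consume" step a bit more concrete, here is a minimal Java sketch of querying a composed virtual view over JDBC. The VDB name, view name, credentials, and port are assumptions, and the Teiid JDBC driver has to be on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class VirtualViewConsumer {
    public static void main(String[] args) throws Exception {
        Class.forName("org.teiid.jdbc.TeiidDriver"); // requires the Teiid JDBC driver on the classpath
        // Placeholder VDB name, credentials, and port (31000 is the usual Teiid JDBC port)
        String url = "jdbc:teiid:MyVirtualDb@mm://localhost:31000";
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM CustomerView")) { // a composed, virtual view
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}

The application only sees the unified CustomerView; whether the data behind it comes from a relational database, a web service, or a flat file is decided in the virtual data model.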
Sounds complicated - How Do I Get Started?
There are a couple of different ways to get some first-hand experience. In no particular order:
The Community Projects
Behind the supported Red Hat solution are:
A short, seven-minute video introduction by Blaine Mincey:

Web Based SSH Access your OpenShift Applications

I recently came across KeyBox, an Apache-licensed SSH console for applications in an OpenShift domain. The cool thing is that it is completely web-based. And even cooler: the client is written entirely in JavaScript (using term.js) and connects to JSch (a Java implementation of SSH2) running as a web application on the JBoss Enterprise Web Server (EWS 2.0).
This is a quick and easy way to get a hands-on shell for your applications if you can't use a native SSH client, and it is a great tool in your xPaaS developer toolbox.
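To give you an idea of what the JSch side of such a connection looks like under the hood, here is a minimal, hypothetical sketch; the user, host, and key path are placeholders, and KeyBox's actual implementation naturally looks different.

import com.jcraft.jsch.ChannelShell;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class SshShellSketch {
    public static void main(String[] args) throws Exception {
        JSch jsch = new JSch();
        jsch.addIdentity("/path/to/private_key"); // the key pair KeyBox manages for you
        // On OpenShift the SSH user is the gear UUID and the host is your app URL (placeholders here)
        Session session = jsch.getSession("gear-uuid", "app-namespace.rhcloud.com", 22);
        session.setConfig("StrictHostKeyChecking", "no"); // demo only - verify host keys in real code
        session.connect();
        ChannelShell channel = (ChannelShell) session.openChannel("shell");
        channel.setInputStream(System.in);   // in KeyBox this is wired to term.js instead
        channel.setOutputStream(System.out);
        channel.connect();
    }
}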

Prerequisites
There's not a whole lot needed to get started, but you obviously need a free OpenShift account first. After that, install the OpenShift client tools (aka rhc); they require Ruby 1.8.7 or higher. If you want to get the most out of it, make sure to install Git for your system, too.

Installing
Installing is just a one-liner in the terminal:
rhc app create keybox jbossews-2.0 --from-code git://github.com/skavanagh/KeyBox-OpenShift.git
It might take a while, but after the command has finished you can access KeyBox via:
https://keybox-<namespace>.rhcloud.com
All members of the domain can login with their OpenShift account.

Now you can open an SSH session for every application in your domain. KeyBox generates an SSH key pair and associates the public key with the user account on every login.



Make sure to follow Sean Kavanagh on Twitter (@spkavanagh6) and star the KeyBox-OpenShift repository if you like it!

Start your xPaaS Journey with OpenShift.

After having hopefully read the short introduction to xPaaS, you're excited to try out all the new features and just want to get started without further reading? That is easy. The only true prerequisite for everything you do around xPaaS is an OpenShift account. And believe it or not, it is free. As in free. If you don't believe me, follow a few simple steps to get yours today.


First and Only Step
is to visit http://www.openshift.com. You're presented with three choices: "Online", "Enterprise", and "Origin". Feel free to look around at what OpenShift has to offer, but what you are looking for is the "Online" version, which is Red Hat's public cloud application development and hosting platform.

Click the red "Signup for Free" button and simply enter your email address, a password of at least six characters (plus its confirmation), and the number/word from the captcha. When you're done, click "Signup".

What's next?
Check your inbox for an email confirming your account. You must click the link in the email to complete the registration process. If you do not receive an email within a few minutes, check your spam folder to ensure it was not incorrectly moved. If you still run into problems you might consult the FAQ, send an email to the OpenShift team, or see them on IRC (freenode/#openshift).
The link in the email sends you to a website where you have to validate and accept the terms and conditions. Now you're all set. No credit card, no mailing address, no nothing. You have your own OpenShift account ready.


Getting Started with OpenShift Online
You basically have three ways to continue your journey: via the web-based console, via the command-line tools, or via Eclipse/JBoss Developer Studio. Whichever way you decide to go, the quickstarts are a very good place to start. You will be overwhelmed by the polyglot nature and the variety you can find there.

As next steps you might want to find out about:

Bootstrapping Apache Camel in Java EE7 with WildFly 8

Since Camel version 2.10 there is support for CDI (JSR-299) and DI (JSR-330). This offers new opportunities to develop and deploy Apache Camel projects in Java EE containers as well as in standalone Java SE or CDI containers. Time to try it out and get familiar with it.

What exactly is Camel?
Camel is an integration framework. Some like to call it an ESB-lite, but in the end it is a very developer- and component-focused way of being successful at integration projects. You have more than 80 pre-built components to pick from, and with that it basically provides complete coverage of the well-known Enterprise Integration Patterns, which are state of the art to use. With all that in mind, it is not easy to come up with a single answer. If you need one, it could be something like this: it is messaging-technology glue with routing. It joins messaging start and end points together, allowing the transfer of messages from different sources to different destinations.
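To make that a bit more concrete, here is a minimal sketch of a Camel route in the Java DSL, using nothing but camel-core: it picks up files from an inbox folder and moves them to an outbox folder. The folder names are arbitrary.

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class FileMoveRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // move every file dropped into data/inbox over to data/outbox
                from("file:data/inbox")
                    .log("Transferring ${file:name}")
                    .to("file:data/outbox");
            }
        });
        context.start();
        Thread.sleep(10000); // let the route run for a bit before shutting down
        context.stop();
    }
}

Swap the file endpoints for JMS, HTTP, FTP, or any of the other components and the route itself stays the same; that is the appeal of the EIP-based DSL.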

Why Do I Care?
I'm obviously excited about enterprise-grade software, but I have always been a fan of more pragmatic solutions. There have been some good blog posts about when to use Apache Camel, and with the growing need to integrate different systems across very heterogeneous platforms it is always handy to have a mature solution at hand. Most of the samples out there start by bootstrapping the complete Camel magic, including the XML-based Spring DSL and with it the mandatory dependencies. That blows everything up to an extent I don't want to accept. Knowing that there has to be a lightweight way of doing it (camel-core is about 2.5 MB at version 2.13.2), I looked into how to bootstrap it myself and use some of its CDI magic.

The Place to Look for Ideas first
Is obviously the Java EE samples project on GitHub. Some restless community members have collected an awesome number of examples for you to get started with. The ultimate goal is to be a reference for how to use the different specifications under the Java EE umbrella. Some first extra bits have already been included as well, showcasing examples from areas like NoSQL, Twitter, Quartz scheduling, and last but not least Camel integration. If you run it as-is on the latest WildFly 8.1, it does not work. The CDI extension of Camel makes this a bit tricky, but as mentioned in the corresponding issue, there is a way to get rid of the ambiguous CDI dependency by creating a custom veto extension. The issue is filed with Camel and I heard that they are looking into improving the situation. If you want to try out the example, go to my GitHub repository and look for the CamelEE7 project.

How Did I Do It?
Bootstrap.java is a @Singleton EJB which is loaded on application startup (remember, there are different ways to start things up in Java EE), and by @Inject-ing an org.apache.camel.cdi.CdiCamelContext you get access to Camel. The tiny example uses another HelloCamel bean to show how to work with the payload in the CDI integration.
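To give you an idea of the shape of that class, here is a minimal sketch (not the exact code from the repository); the timer endpoint and the bean name are illustrative assumptions.

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.cdi.CdiCamelContext;

@Singleton
@Startup
public class Bootstrap {

    @Inject
    private CdiCamelContext context;

    @PostConstruct
    public void init() throws Exception {
        // wire a tiny route into the CDI-managed CamelContext and start it
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("timer://hello?period=5000")
                    .to("bean:helloCamel?method=sayHello"); // assumes a CDI bean named "helloCamel"
            }
        });
        context.start();
    }
}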
Make sure to look at CamelCdiVetoExtension.java and how it is configured in the META-INF folder. Now you're ready to go. Happy coding.
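For completeness, a CDI veto extension generally looks like the following minimal sketch; the package filter is purely illustrative, so check the actual CamelEE7 sources for the types that really need to be vetoed. The extension is registered by listing its fully qualified class name in a META-INF/services/javax.enterprise.inject.spi.Extension file.

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.ProcessAnnotatedType;

public class CamelCdiVetoExtension implements Extension {

    <T> void vetoAmbiguousTypes(@Observes ProcessAnnotatedType<T> pat) {
        // Illustrative filter: veto whatever the Camel CDI extension would otherwise register twice
        String name = pat.getAnnotatedType().getJavaClass().getName();
        if (name.startsWith("org.apache.camel.cdi.internal")) {
            pat.veto();
        }
    }
}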

And The Best For Last
Camel 2.14 is already on the horizon, scheduled for release in September. If you have issues or wishes you want to see addressed in it, now is the time to speak up!
An excerpt of the awesome new features that are upcoming:


Time to get excited!