JavaOne sessions now online

Sun has recently posted MP3s, PDFs, and combo slideshows (slides in sequence with the audio) from this year’s JavaOne conference. I wanted to post links to some of the sessions that I enjoyed the most.

You need a Sun Developer Network account (free) in order to access the presentations.

There were a number of sessions that I was not able to attend at the conference that I look forward to watching now. I’d highly recommend taking a stroll through http://developers.sun.com/learning/javaoneonline/ to see what else might pique your interest.

A theme at JavaOne – Beyond Java

I noticed a few themes at JavaOne this year. One of the big ones was JavaFX. It had sessions galore, and plenty of stage time at the general sessions. But, another theme I picked up on was the number of sessions dedicated to non-Java programming languages; perhaps a bit odd considering this was JavaOne.

JRuby and Groovy were all over the place. There was also a session on Scala. These other languages bring new ways of solving problems to the table. In addition to their expressiveness, JRuby and Groovy bring the power of metaprogramming to the Java world. Scala also brings a more expressive syntax, and the power and flexibility of two programming paradigms: object oriented and functional.
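
As a tiny, contrived illustration of the expressiveness argument (my own example, not one from a session), here is a collection filter in Groovy that would take a loop and an accumulator in pre-closures Java:

// find the even numbers; the closure is passed straight to findAll
def evens = [1, 2, 3, 4, 5, 6].findAll { it % 2 == 0 }
assert evens == [2, 4, 6]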

These were the only non-Java programming languages that had sessions associated with them. At the CommunityOne session I attended, presenter Charlie Nutter said that virtually every language out there is capable of running on the JVM, usually via a sub-project. I think this is a very profound statement. It shows that the language implementers, or those strongly associated with a language, realize the benefits of running on the JVM, and of the ability to integrate with the billions of lines of existing Java code out there. Granted, each one of these projects varies in how deeply it can interact with existing Java code. However, Sun appears to be making a strong effort to work with the implementers of these languages to make integration with Java easier.

I think this is great. I’m sure that Sun realizes how powerful and mature the Java platform has become. And, as a Java programmer, I love the idea of being able to pick a language that best fits the problem I am trying to solve, while still maintaining interoperability with my existing Java code.

The Java language itself appears to be stalling. No widely adopted programming language lives forever. In order for a language to be successful, it must maintain some sort of backward compatibility. Maintaining backward compatibility, while necessary, slows the evolution of the language, and prevents the language from adopting vastly different programming models that may be a better fit for new problems that developers may be facing. Newer languages, or less widely adopted languages, do not face this dilemma and can change more rapidly. I’m also unconvinced of the effectiveness of the whole JCP process. In some situations I feel it is best to have a small group of intelligent individuals at the helm of a project, making all of the decisions. Usually, the more people involved, the longer it takes to get things done.

I think that making the JVM a more attractive environment for non-Java programming languages will only benefit the platform. I think it will also prevent some developers from jumping platforms to use a different programming language that better suits the problem they are trying to solve. I think this is a win for the Java community.

JavaOne 2008 – Day 4

Sun General Session: Extreme Innovation

The last general session of JavaOne 2008 consisted of James Gosling inviting several people on stage to showcase what they have been using Java to create. There were several presentations, but I’m going to only talk about a few that interested me.

First up was VisualVM. VisualVM is a free JVM monitoring tool that can look under the covers of applications running Java 1.4.6 (I think) or greater. The stats gathered include memory usage, thread usage, CPU usage, and more. It also has a number of nice features, like getting a thread dump of your application by simply clicking a button. Best of all, it does this with almost no overhead on the application. The tool looked very nice, and worth checking out in further detail.

Second was by far THE coolest thing that I have seen here this week; a pen. Yep, a pen…but a VERY smart pen. This pen can record your voice (or somebody else’s) as you write, and when you tap on an item that you wrote with the tip of the pen, it will play back just the portion of the audio that was recorded while you wrote that particular item. The pen also has several tools, like a translator that will take a word written in English and translate it (verbally) into a number of different languages when you simply tap the word you wrote. It also stores images of everything you write. You can basically toss the paper you wrote on in the trash, because you can easily transfer the images to your computer through the USB port on the pen. The text in the notes is searchable via the pen software, and captured audio can also be accessed via that same software.

JMars is an open source application that contains loads of detailed images of Mars, collected during the several NASA missions to that planet. It is fully interactive, letting you view several types of maps of a given terrain (and combine particular maps), and basically navigate the planet as you wish.

Complex Event Processing at Orbitz

Matt O’Keefe and Doug Barth did a great job presenting our event processing framework here at Orbitz. They took the audience step by step through our API and library that collect the data (ERMA), the commercial third-party tool that we use to aggregate and route that data (Streambase), and our graphing tool that visually represents that data (Graphite). The event processing framework offers very high throughput, adds little overhead to the running application, and requires very little code in the application.

At the end of the presentation, Doug announced that we would be open sourcing the two pieces of the framework that we own, ERMA and Graphite, and that we were looking into bundling an open source data aggregation tool (since we can’t open source Streambase) to provide a complete event processing solution. The audience applauded this announcement, which, judging by the look on his face, took Doug a bit by surprise :)

Good job guys!

Improving the Engineering Process Through Automation by Hudson

Hudson is a continuous integration (CI) tool that can be used to automate building, testing, and deploying your code. PCs are cheap and getting cheaper by the day. Developers, on the other hand, are not. Hudson is advertised as a cheap “team member” that can take on some of the easier-to-automate activities.

One of Hudson’s obvious goals is ease of use. And, this is a goal that it has reached in all areas. Even something as complex as distributed builds is a snap in Hudson (more on that later). It installs quickly, and can run either by itself or within another web container. It was designed to be extensible, allowing the community to continue development of the tool through plugins. These plugins provide integration with several popular version control systems and bug tracking systems.

Several best practices were suggested:

  • Componentize builds to reduce the time needed to get feedback. Replace that one monolithic build with several smaller builds. And, only build what changed.
  • Run tests in parallel, or run groups of related tests in parallel.
  • Hudson can build, test, promote, do some QA, deploy, integrate, and more. Take advantage of its power and flexibility to automate whatever can be automated. Set it up to do as much as possible.

Hudson also makes it very easy to do distributed builds. The master machine hands build requests to the slave machines, and stores all of the information collected from the builds. The slave machines are the ones that do the actual building. The nice thing about distributed builds in Hudson is that slave boxes can come and go as they please. At the beginning of a build, the master checks to see which slaves are available, and delegates the build to one of them. And, slave configuration is easy. A client needs to run on the slave, and some basic configuration is needed on the server for each slave. After that, Hudson takes care of the rest. This feature is great for building and testing on multiple environments and operating systems.

Oh, did I mention that Hudson is free? We use it on the Transaction Services team at Orbitz, even though the company has standardized on Quickbuild. The fact that we put up with maintaining two CI tools tells you 1) how much we like Hudson and 2) how easy it must be to get running, and keep running.

Automated Heap Dump Analysis for Developers, Testers, and Support Employees

This session focused on the use of an open source tool, Memory Analyzer, to help track down memory leaks in a Java application. Finding memory leaks in Java has always been difficult. Who in their right mind wants to wade through a heap dump? Memory Analyzer is a sweet little tool that analyzes the heap dump for you, and provides you with easy-to-decipher diagnostic information pulled from the dump.

Since Memory Analyzer doesn’t understand your application, it can’t really tell you where a leak is. However, it can tell you which classes are holding the majority of the heap, how many instances of a given class have been created and how much memory those instances occupy, and more. This is usually enough to point you in the right direction. Memory Analyzer can also give you the stack trace to a specific memory allocation, helping you further track down the leak.
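
To make that concrete, here is a contrived example (mine, not the presenter’s) of the kind of leak such a report points to: a long-lived static map that is only ever added to, so the dump would show one class holding the majority of the heap:

class RequestCache {
    // entries are added per request but never evicted, so they can never be
    // garbage collected; a heap dump analysis would show this map's owner
    // holding an ever-growing share of the heap
    static final Map<String, byte[]> CACHE = [:]

    static void remember(String requestId, byte[] payload) {
        CACHE[requestId] = payload
    }
}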

The reports generated by Memory Analyzer are very comprehensive, and provide tons of useful information about your application’s memory usage. These reports can be used by all areas. They can help support employees track down production issues. They can help developers fix and test memory related issues. And, they can help testers verify that the memory usage for a given application doesn’t dramatically vary from release to release. The reports have several useful features to track down leaks, like the ability to group the results by classloader, in an effort to further isolate the problem.

Memory Analyzer also does a static analysis of your code to look for common memory-related anti-patterns. This can help find a bug before it is introduced into the codebase. This tool has a lot of promise. I hope I never have to use it, but it’s comforting to know that it’s out there to use, just in case.

Top 10 Patterns for Scaling Out Java Technology-Based Applications

Scalability seems to be one of the industry’s biggest buzzwords these days. And, I don’t think that many people really know what it means. During a Q&A session after one talk, it was pointed out to me how many people were asking scalability questions about topics that had nothing to do with scalability. “Does JAXB scale?” was one of these. Scalability != Performance.

Scalability is the ability to handle an ever-increasing number of requests gracefully. This could include adding servers to your server farm, or upgrading some key components. If you can add capacity to your system, or tweak your system, to handle increasing numbers of requests, you can scale. If there is a bottleneck in your system that maxes out your capacity, and you can’t easily fix that bottleneck, you can’t scale. Linear scalability, the ability to handle increased traffic with increased hardware while keeping latency at its normal rate, is the goal. Any sort of upward trend in latency indicates that there will come a point where latency hits unacceptable levels and causes a scalability bottleneck.

Scalability is not limited by a technology, a programming language, or an operating system. People have built scalable systems on every possible combination of these. The design and architecture of your system is what will determine if your system will scale or not.

Availability and reliability must be baked into the design of your system. It cannot be an afterthought. Refactoring your code to deal with scalability issues after launch is very difficult, as major design changes are often necessary.

The speaker then went on to discuss some things to consider when thinking about availability.

Latency is not always predictable. Network IO and other tasks performed outside of the application can be unpredictable. Reduce or eliminate these where possible. Remote messaging brings its own set of challenges. If the order of messages is important, how do you control it? How do you make sure the message will get there? How can you make sure you get a response quickly? How can you make sure that subsequent executions (a retry) won’t cause repercussions? Managing these complexities can be difficult, and if done improperly, can limit scalability.

Durability, the ability to survive a failure, is also a major challenge. Writing data to disk or to a DB takes time, and the coordination of writing and reading data necessary to perform a failover can be difficult to manage.

The speaker went on to identify some key areas to focus on when trying to build a system that can scale.

  • Routing – Reliable routing is essential for scalability. You must be able to reliably send requests to components that can process those requests in a timely manner.
  • Partitioning – Spreading out the responsibilities of your system into different components enables scalability. If you notice that one area of the system is becoming a bottleneck, you can always add more capacity to that area, without touching the other areas.
  • Replication – The replication of data is necessary for surviving a failure in the middle of a transaction. The routing layer of a system must also be able to recognize failure, and route the request to another component that can handle it.

The presentation also covered how to handle load on a system. Some common ways to deal with load:

  • Load balancing – Send the requests to a component that has the capacity to process them.
  • Partitioning – Partition your system so that you can add capacity to stressed areas.
  • Queue – Queue requests for processing when the system has available capacity.
  • Parallelization – Execute requests, or parts of requests, in parallel where possible.

There are also strategies for when your system is overloaded (the first two are sketched in code below):

  • Turn away requests
  • Queue requests to be processed when the system has capacity
  • Add capacity to the system (usually in the form of hardware)
  • Relax any kind of read/write consistency that you are enforcing
  • Increase the size of any batch jobs that you run
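
Here is a minimal sketch (my own, not from the talk) of how the first two strategies combine: a bounded buffer queues requests while there is capacity, and turns them away once it fills up:

import java.util.concurrent.ArrayBlockingQueue

class RequestBuffer {
    // a bounded queue: at most 1000 requests wait for processing
    private final queue = new ArrayBlockingQueue<String>(1000)

    // offer() returns false instead of blocking when the buffer is full,
    // so the caller can turn the request away gracefully
    boolean accept(String request) {
        queue.offer(request)
    }

    // workers drain the queue as the system regains capacity
    String next() {
        queue.take()
    }
}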

The speaker also spent some time talking about failure recovery. You should plan for failures at the component level (a piece of the system) and at the system level (the entire system). Build some redundancy into your system so you have the ability to fail over to a redundant component if one component fails. If you can’t fail over to another component, then you need to build recoverability into your component, so that it can handle problems by itself. Critical data should be replicated so one component can pick up where another left off in the event of a failure. This, however, adds overhead to the system. So, only the critical data should be replicated.

At the end of the talk, the speaker left us with the “secret” to scalability: simplification. The simpler your system is, the easier it will be to scale. If you can’t get it to work on the whiteboard, then there is no way it will work in production.

Spring Framework 2.5: New and Notable

I’ve been working with Spring for quite a while now, and we are currently using 2.0. This presentation gave a 10,000-foot overview of what is coming in version 2.5 of the Spring framework.

  • As always, 2.5 will be backwards compatible with previous 2.x releases.
  • Greater annotation support.
  • Enhancements to the testing framework provided by Spring. Using the test framework, you can easily test your configuration, your database connections, and even the database transactions you plan on executing.
  • Support for Java 6, Java EE 5, and OSGi.
  • This release of Spring will be the last release to support Java 1.4.
  • The OSGi support allows for greater modularization.

JavaOne 2008 – Day 3

Groovy and Grails: Changing the Landscape of Java Platform, Enterprise Edition (Java EE Platform) Patterns

Why did I sign up for ANOTHER Groovy and Grails talk? That’s a great question. I have no idea. Nothing new was presented here that wasn’t already presented elsewhere.

JRuby on Rails Deployment: What They Didn’t Tell You

This was a pretty comprehensive presentation on what it takes to deploy a Rails application via JRuby. The talk started out with a brief history of how Rails apps have been deployed on the traditional Ruby implementation. CGI/FastCGI was always slow and kind of buggy. The Mongrel Ruby web server is widely used, but large clusters need to be deployed for high-traffic sites, due to Rails not being thread-safe. These large clusters have proven difficult to manage. There are tools that help with Mongrel cluster management, but they don’t change the fact that your cluster size is huge.

Deploying Rails apps on JRuby allows you to take advantage of JVM threading, shrinking the number of real processes you need running to handle requests. It is possible to configure JRuby on Rails to use a fixed number of JVM runtimes to process requests, so you can scale it to fit the needs of your web application.

There are tools available to help with the deployment of a Rails app on a Java application server. Warbler can be used to package the Rails application into a war file that can be easily deployed to the application server. Warbler is pretty easy to set up and use, and only requires you to list the Ruby gems that your application depends on (which can be done in a configuration file).

However, Rails apps on JRuby still require a bit of fiddling to get up and running. You can use JNDI for database connections, but Rails does not close the connections at the end of each request (by design…this is just how Rails works). So, workarounds are needed to get the application to behave a little more like a Java app, which usually opens a connection at the beginning of a transaction and closes it at the end. Several configuration modifications are needed to get a Rails app to run optimally in a production Java environment. And, there are several areas where Rails and the application server don’t mesh too well together.

The speaker has just released a library, called JRuby Rack, which attempts to bridge the gap between your Rails application and the application server. Among other things, it allows Rails to use JSPs to render the view, makes the Java servlet context available to the Rails app, and passes servlet request attributes from Java to the Rails application.

Programming with Functional Objects in Scala

Martin Odersky, the creator of the Scala language, presented a good overview of Scala. Scala is another programming language that runs on the JVM, and it boasts full Java interoperability. It was designed to be scalable across all areas of development, which is where it got its name. It can do everything from powering the smallest scripts to running the largest applications.

Martin considers Scala to be the Java of the future. It can do everything that Java can, and more. Unlike Groovy, another fully Java-interoperable language on the JVM that is getting a lot of attention, Scala is fast. Martin says it benchmarks just as fast as Java. It is also much more expressive than Java, letting you write less code. Martin stated that the average Scala project contains half (or less) the code of an equivalent Java project. This is due to the syntax of the language, and to the better abstractions that Scala provides.

Scala was designed to be extensible. Users can easily add to the language to fit their needs. This is nice, as it keeps the language itself trim. There is no need for the language itself to solve everybody’s issues. You can take it upon yourself to solve your own problems :)

I’m not going to list out all of the features of the language that were identified during the presentation. Check out http://www.scala-lang.org if you are interested. I will, however, mention some of the features that piqued my interest.

First, Scala supports mixins. You can think of a mixin as a Java interface with a default implementation. You can “mix in” functionality to your class without clouding the object hierarchy the way multiple inheritance does. After working with mixins in Ruby, I’ve really started to miss them in Java. It’s nice to see them here.

Second is the concept of actors. Actors add Erlang-like concurrency to Scala. Erlang is well known for the way it handles concurrency, using many lightweight processes that interact with each other via simple asynchronous messages. This is great to see in a language with full Java interoperability.

Another thing I like about Scala is the fact that it is an object oriented language AND a functional language. You can code using the paradigm that best suits the problem.

Scala is quickly growing in popularity. Though not as easy to adopt as Groovy, due to its radically different syntax, it is without a doubt a very strong language.

Defective Java Code: Turning WTF Code into a Learning Experience

I was a little disappointed with this talk. I was expecting a lecture on ways to best learn from existing bugs. There was a little bit of that, but the majority of the session simply showcased some not-so-obvious bugs that the presenter has run into in the past. There was, however, some good advice that I took away from this session.

First, never assume that a bug is so stupid, or so unique, that it doesn’t exist elsewhere in your code or in somebody else’s code. The speaker showed several bugs fitting this description that he was able to find in several different code bases using FindBugs. And, these weren’t obscure projects he found these bugs in, either. He found several of these bugs in the Java codebase, the Eclipse codebase, and the codebases of several other large, well-respected projects.

Make use of every available tool you have to flush out bugs as early in the development cycle as possible. This includes things like the @Override annotation. The speaker demonstrated a bug where a subclass was supposedly overriding a method of a parent class, but used a class in the method signature with the same name as the one in the original method…but from a different package. So, simply looking at the method, it appeared to be a perfectly valid override. You had to compare the import statements of the parent class and the child class to figure out what was happening. Using the @Override annotation would have caught this bug at compile time.
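
Here is a sketch of the kind of bug he described (my reconstruction, using JDK classes): java.sql.Date extends java.util.Date, so the child method below is an overload, not an override, and only the annotation exposes it at compile time:

import java.sql.Date   // oops: this shadows java.util.Date in the signatures below

class Scheduler {
    void schedule(java.util.Date when) { }
}

class NightlyScheduler extends Scheduler {
    @Override                       // compile-time error: this does NOT override the
    void schedule(Date when) { }    // parent method; Date here is java.sql.Date
}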

Design Patterns Reconsidered

Design patterns have been getting a lot of negative press lately. This is due to a number of factors:

  • They result in people copying/pasting code to implement the patterns without actually knowing what is really going on.
  • They are accused of being workarounds for shortcomings in the language.
  • They are overused.

Each of these criticisms has its valid points. However, design patterns still play an important role in software development today.

  • They form a vocabulary that developers can use to talk about problems and solutions.
  • They help expose real design issues.
  • They help compare alternative design choices.

The speaker took us through three design patterns documented in the original Gang of Four Design Patterns book, and how they might be applied differently today.

First up was Singleton. The speaker acknowledged that there is sometimes a valid need to ensure that there is only one instance of a specific object. However, the Singleton design pattern causes the following problems:

  • It makes testing difficult by carrying state from one test run to the next. This could affect how the test behaves, and can make test results unpredictable.
  • It creates “hidden” dependencies. It is hard to tell, without digging through the code, that an object may have a hidden dependency on a Singleton. This also makes the Singleton hard to mock for testing.
  • There is not really just one instance per JVM. There could be one per classloader.
  • They can create memory leaks, since Singletons never fall out of scope.
  • Subclassing is difficult, and ugly.

What do we do about this? Simple: don’t use Singletons. Define an interface, and use a dependency injection framework to handle the creation of the object and to inject it where it needs to go. This way you can have your one instance of the class, and avoid all of the problems mentioned above.
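
Here is a minimal sketch of that alternative (hypothetical names, hand-rolled rather than tied to any particular DI framework): the client codes to an interface, and whatever wires the application together supplies the single instance:

interface PriceCalculator {
    BigDecimal price(String sku)
}

class StandardPriceCalculator implements PriceCalculator {
    BigDecimal price(String sku) { BigDecimal.ONE /* real lookup goes here */ }
}

class CheckoutService {
    private final PriceCalculator calculator

    // the dependency is explicit, and a test can inject a mock instead
    CheckoutService(PriceCalculator calculator) {
        this.calculator = calculator
    }

    BigDecimal total(List<String> skus) {
        skus.collect { calculator.price(it) }.sum()
    }
}

// a DI container would normally do this wiring, holding the one shared instance
def service = new CheckoutService(new StandardPriceCalculator())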

Next up was the Template Method design pattern, which has the following problems:

  • Algorithms in the base class(es) can become very complex, and hard to follow.
  • This design pattern usually leads to an explosion of base classes.
  • Inheritance cuts off many of your design options for the subclasses.
  • No way to combine functionality in sibling sub classes. You may have a FastCar, and a CheapCar, which are both subclasses of Car…but there’s no easy way to have a FastCheapCar without an additional subclass.
  • It is hard, without exhaustively comprehensive documentation, to communicate the intent of your framework to your users.

How do we address these issues? Favor composition over inheritance. Move the logic that would exist in the subclasses into a series of command objects, which you can then inject into the framework. Inheritance is a very strong form of coupling, and should be avoided unless it makes absolute sense.
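
Returning to the car example from the list above, here is a rough sketch of the composition approach (hypothetical names): the varying behavior lives in small injected objects, so a “fast cheap car” is just a new combination rather than a new subclass:

interface Engine { int topSpeed() }
interface PricingPolicy { BigDecimal basePrice() }

class TurboEngine implements Engine { int topSpeed() { 250 } }
class BudgetPricing implements PricingPolicy { BigDecimal basePrice() { 9999.99 } }

class Car {
    Engine engine          // injected command/strategy objects replace
    PricingPolicy pricing  // the subclass method overrides

    String describe() { "tops out at ${engine.topSpeed()} for ${pricing.basePrice()}" }
}

// "fast" and "cheap" combine freely, with no FastCheapCar subclass needed
def fastCheapCar = new Car(engine: new TurboEngine(), pricing: new BudgetPricing())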

Lastly, the speaker took a look at the Visitor design pattern, which has the following issues:

  • You can easily expand the list of visiting methods, but you can’t easily expand the data structure you visit.
  • It is difficult to return a value while visiting.
  • It is difficult to throw an exception while visiting.

The speaker offered the following suggestions for the Visitor pattern:

  • Put the data structure navigation code in its own visitor, which would take the logic visitor along for the ride, applying it to each node it visits.
  • Hold onto return values and exceptions in the navigation visitor (see the sketch after this list).
  • Closures, when they are available in Java, may be able to improve upon this pattern further.
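
A rough sketch of the first suggestion (my interpretation, with made-up node types): the navigating visitor owns the traversal, applies the logic closure at each node, and captures the result and any exception:

class Node {
    def value
    List<Node> children = []
}

class NavigatingVisitor {
    def result    // return values accumulate here instead of being threaded
    def failure   // through visit(), and exceptions are captured here

    void visit(Node node, Closure logic) {
        if (failure) return   // stop navigating once something has gone wrong
        try {
            result = logic(node, result)
        } catch (Exception e) {
            failure = e
            return
        }
        node.children.each { visit(it, logic) }
    }
}

// usage: sum every value in a small tree
def tree = new Node(value: 1, children: [new Node(value: 2), new Node(value: 3)])
def nav = new NavigatingVisitor()
nav.visit(tree) { node, acc -> (acc ?: 0) + node.value }
assert nav.result == 6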

The speaker finished with a list of general design principles:

  • Code to interfaces, not implementations, and use dependency injection to reduce coupling.
  • Favor composition over inheritance.
  • Don’t rely on object identity.
  • Separate parts of the design that change at different rates.
  • When in Java, take advantage of the strong, static typing system. It is one thing Java does very well. So, use it to your advantage.

Developing Service-Oriented Architecture Applications with OSGi

This was a Q&A session with some OSGi experts. The experts discussed how they have used OSGi on past projects, what they liked about it, and what they felt it could do better.

What is OSGi?

  • OSGi is a framework that manages the services in a service oriented architecture.
  • It has the ability to add services at runtime, and makes it very easy to do so.
  • It provides total control over what services you expose, and what services you consume.
  • Service “bundles” have a life cycle that is independent of the JVM, meaning you can start and stop services without starting and stopping the JVM.
  • OSGi bundles make the dependencies of a service very clear, making sure there are no “surprises”.
  • It allows you to deploy separate components of a SOA independent of one another. So, if a service has a bug, you can simply upgrade that service without touching the rest.
  • There is no need to restart the application server to deploy a new version of a service.
  • Service bundles are platform independent. Your service can run in a web container, a cell phone, a Xerox machine…pretty much anything that supports OSGi.

I’ve been hearing a lot of buzz about OSGi the past few weeks leading up to JavaOne, which is why I attended this talk. I have to say, I like what I heard. I think that Orbitz may be able to benefit from such a framework, as the majority of our architecture is service oriented. It could possibly help with the deployment of some of our services. I will definitely be reading more on this topic, and thinking about how we might be able to use this powerful new technology.

Testing in Groovy

The last session I attended for the day was one on testing using Groovy. Groovy provides many features that make it an ideal language for writing tests for your Java code (a small example follows the list):

  • Groovy is much more expressive than Java, and has the ability to dramatically reduce the amount of test code you have to write.
  • Groovy is fully interoperable with Java, giving you access to all of your Java code that you want to test.
  • You can re-use existing Java test frameworks.
  • Groovy’s metaprogramming capabilities make mocking objects and stubbing methods a breeze.
  • Groovy has several features built specifically for testing, including a GroovyTestCase class and built-in mocking and stubbing capabilities.
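
Here is a small, self-contained example of the last two points (my own, with hypothetical classes): a GroovyTestCase that stubs a collaborator’s method through the metaclass, with no mocking library needed:

class RateSource {
    def currentRate() { 1.0 /* imagine a slow remote call here */ }
}

class PriceService {
    RateSource rates
    def quote(amount) { amount * rates.currentRate() }
}

class PriceServiceTest extends GroovyTestCase {
    void testQuoteUsesStubbedRate() {
        // replace the real method for this test run
        RateSource.metaClass.currentRate = { -> 2.0 }
        assert new PriceService(rates: new RateSource()).quote(10.0) == 20.0
    }
}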

I was pretty convinced going into JavaOne that I wanted to start writing tests in Groovy. Everything I have seen here at JavaOne regarding Groovy has reinforced this decision.

JavaOne 2008 – Day 2

Oracle General Session: Enterprise Application Platform

Here is my summary of this session:

Buzzword, buzzword, buzzword. Look at this fancy GUI tool you can use to create applications without writing code. Buzzword, buzzword, buzzword. Look how well this integrates with Excel. Buzzword, buzzword, buzzword. Give us lots of money to make your life as a developer as mundane as possible. Bleh.

Developers like writing code. We WANT to know what is going on behind the scenes, and we WANT the ability to completely control that behavior. I think the presenters forgot where it was that they were actually presenting.

Groovy, the Red Pill: Metaprogramming–How to Blow the Mind of Developers on the Java Platform

Sorry Charlie (Nutter), but you just lost the “Best of JavaOne” title to Scott Davis (although, I still REALLY liked yours). This presentation was fantastic. Not only is Scott a very engaging, enthusiastic, and funny presenter, but the material was great. In the space of an hour, he was able to shoot through many of Groovy’s strongest features.

For those new to Groovy, it is a dynamic programming language that runs on the JVM and has FULL interoperability with Java. Groovy IS Java in the sense that Groovy is compiled to Java bytecode before it is run. Groovy can use Java libraries, and Java can use compiled Groovy libraries. Groovy is also able to run valid, syntactically correct Java code. This enables you to start using Groovy with your existing Java coding style. Then, gradually, you can start to use some of the features that make Groovy such a remarkable language. This ability to “ease into” the language is one of Groovy’s greatest strengths.

What are some of these “cool features” you ask? Method pointers, closures, dynamic addition of getters and setters, operator overloading, the ability to dynamically add methods at runtime (even to Java standard library classes), the ability to handle calls to methods that don’t exist, and much, much more.

These capabilities allow you to create domain specific languages that let you code much closer to the problem domain, increasing the readability and maintainability of your code. Scott took us through an IPod example, where an IPod object lets you load songs onto it. Say we have an ArrayList of songs, named songs. “songs.add(new Song())” doesn’t really express what we want to do. We want to “load” a song onto the IPod, not add a song to a list of songs on the IPod object. So, we can create a method pointer, named load, that points to the add method:

def load = songs.&add   // a method pointer: calling load() now invokes songs.add()

This lets us code much closer to the domain, loading songs onto the IPod by calling load(new Song()). Very cool.

But, the true power of Groovy, as demonstrated in Grails, is its metaprogramming capabilities. Want to add a new method, shout, to java.lang.String? No problem:

String.metaClass.shout = { ->
    delegate.toUpperCase()   // 'delegate' is the String the method was called on
}
assert 'groovy'.shout() == 'GROOVY'

Methods can be re-defined in a similar way. Re-defining methods gives you a way to easily stub out methods for testing.

Groovy also provides hooks that let you step in before a method on a class is invoked, via invokeMethod(), or handle calls to non-existent methods on a class, via methodMissing(). These capabilities allow you to intercept method invocations, add dynamic behavior based on a method’s name and/or parameters, and more.
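
Here is a toy example of methodMissing() (mine, echoing the Grails-style dynamic finders it enables): calls to undefined findBy* methods are synthesized on the fly:

class PersonFinder {
    def people = [[name: 'Scott'], [name: 'Charlie']]

    def methodMissing(String name, args) {
        // intercept calls like findByName('Scott') that match no real method
        if (name.startsWith('findBy')) {
            def property = (name - 'findBy').toLowerCase()
            return people.find { it[property] == args[0] }
        }
        throw new MissingMethodException(name, PersonFinder, args)
    }
}

assert new PersonFinder().findByName('Scott').name == 'Scott'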

Groovy also adds TONS of syntactic sugar on top of Java. Easy iteration over collections, overloaded operators that just make sense, additional methods on the Java standard library classes, etc. All of this, and more, is what makes Groovy such an attractive programming language. I’m looking forward to learning more.

Introduction to Web Beans

What is a web bean you ask? That’s a good question. Even after attending this presentation, I’m not quite sure that I know the answer. Web Beans appear to be a way to make services available to a client in a way that is non-invasive, and that prevents tight coupling. Service implementations can be injected into a client for use in any layer of the stack, without the need to go through layers upon layers of middleware. I’m sorry to report that this is all I got out of this presentation. I will likely review the slides and read up on the subject after JavaOne, as I am disappointed that I sat through this hour-long presentation and still do not know what a web bean is.

It’s All About the SOA: RESTful Service-Oriented Architecture at Overstock.com

The folks at Overstock.com gave a good presentation about their use of REST in their service oriented architecture. They moved to an SOA from a monolithic, self-contained application (written in a single C file!) to eliminate business logic duplication and the drifting of that logic into other areas. SOA also helped with the isolation of sensitive data, reduced coupling (which enabled easier cross-team development via interfaces), and provided a service-level domain focus.

Before choosing REST, Overstock.com evaluated some competing SOA technologies, including SOAP/WS-*, JINI, and a few others. REST won out for a couple of reasons. First, they liked that it provided a uniform interface (which is basically HTTP). Also, they wanted to take advantage of their experience as web application developers who already had detailed knowledge of the HTTP protocol. Since they were already familiar with the technology (HTTP), it enabled them to get up and running quickly, which was a requirement due to an approaching holiday shopping season.

They developed a web framework, built upon the open source Restlet REST framework for Java, to take care of some routine operations that they did not want to burden their developers with. For example, since they didn’t want to muck with the XML directly, their web framework took care of marshalling/unmarshalling the XML to/from their object model via JAXB. It also took care of some logging, and other common concerns. In addition to developing their own framework on top of Restlet, they also contributed code to the Restlet project, and to the JAXB project.
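
As a rough illustration of the JAXB piece (my sketch, not their framework), marshalling an annotated model object to XML takes only a few calls; I’m using field access here to keep JAXB away from Groovy’s synthetic metaClass property:

import javax.xml.bind.JAXBContext
import javax.xml.bind.annotation.XmlAccessType
import javax.xml.bind.annotation.XmlAccessorType
import javax.xml.bind.annotation.XmlRootElement

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class Product {
    String name
    BigDecimal price
}

def context = JAXBContext.newInstance(Product)
def writer = new StringWriter()
context.createMarshaller().marshal(new Product(name: 'lamp', price: 19.99), writer)
println writer   // something like <product><name>lamp</name><price>19.99</price></product>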

One of their goals was to keep things as simple as possible. Their solution was developed in J2SE, and is deployed via Tomcat web containers, with standard HTTP load balancing. They are able to perform rolling deployments because they always maintain backward compatibility with at least the last version of the application.

Dealing with Asynchronicity in Java Technology-Based Web Services

This presentation covered a technique that was developed to deal with asynchronicity in web services. I must admit, one of the reasons I signed up for this session was to find out what “asynchronicity” meant :) Using asynchronous web services adds complexity, but also provides some benefits. For one, you don’t have to sit there and wait for long-running services to complete. You can call the service and immediately return so that you can do something else. Second, using asynchronous web services gives you the ability to cancel requests at any time.

There are two ways to implement asynchronous web services. The first involves polling. The client will call a web service, which will queue the request for processing, and immediately return to the client. With polling, the client will periodically check back with the server, via another web service call, to see if their request has finished processing and to retrieve the result. This has many issues. It is chatty, especially when you’re dealing with multiple clients issuing multiple requests. It also adds unnecessary latency, as there may be a period of time that passes after the request finishes processing, but before you make the call to retrieve the results.

The second technique, the one covered in detail during the presentation, involves a callback from the server. With this technique, the server exposes its web service as normal, AND the client exposes a callback web service. When the client calls the server’s web service, the request is thrown into a queue for processing, and the server immediately returns to the client. The request carries a unique ID and information about the callback service. When a worker thread finishes processing the request, the server calls the callback web service on the client, passing the unique ID from the original request along with the response, so the client knows which request was processed.
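
Here is a stripped-down sketch of that flow (hypothetical names, with a thread pool standing in for the web service plumbing): the server queues the work, returns an ID immediately, and invokes the client’s callback when the work completes:

import java.util.concurrent.Executors

interface Callback {
    void completed(String requestId, String result)
}

class AsyncOrderService {
    // per the speakers' caution below, a real system would persist this
    // pending work rather than keep it only in memory
    private final pool = Executors.newFixedThreadPool(4)

    String submit(String payload, Callback client) {
        def id = UUID.randomUUID().toString()
        pool.submit {
            def result = payload.toUpperCase()   // stand-in for the real work
            client.completed(id, result)         // the "callback web service"
        }
        id   // the client correlates the eventual callback using this ID
    }
}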

The speakers then proceeded to show how the callback technique would be implemented, step by step. However, I think the main value in this presentation was the description of the technique. The implementation was pretty straightforward and easy to understand, especially if you have experience dealing with web services.

One item the speakers did caution us about was the necessity to persist the queue of requests (either to disk or to a database). Keeping the queue in memory is not sufficient, because all of the requests will be lost if the application crashes or is suddenly terminated.

JRuby at ThoughtWorks

This was a very interesting talk about JRuby, and how ThoughtWorks uses JRuby for both internal projects and client projects. ThoughtWorks is a consulting company that is well known for its agile methods.

Those who have worked with Ruby know that it is great for agile development because it is very readable and expressive, it leads the industry when it comes to testing innovation, and it allows for easy, quick prototyping.

But, the standard Ruby implementation has many issues. Ruby does not use native threads, which presents scheduling issues for multithreaded applications. It has weak Unicode support. It has poor performance; even the new version, 1.9, leaves much to be desired. The garbage collector doesn’t work that well, stopping all processing while it collects garbage. The C language extensions to the library have caused many issues, especially in the area of cross-platform compatibility. It is also a challenge, politically, to get a Ruby application deployed to production environments in some companies.

JRuby addresses all of these issues. Java threading is used, which addresses all of Ruby’s threading issues. Java has full Unicode support. JRuby performs better than even the new version of Ruby (in long-haul testing). The Java garbage collector is used, providing quick, dependable garbage collection. All language extensions are written in Java instead of C, solving the portability issue. And, companies that already have a Java production environment in place usually don’t have an issue with deploying JRuby applications, since they can be packaged and deployed just like Java applications. In fact, some may not even know that they are deploying a Ruby application.

Another major strength of JRuby is the ability to fall back on Java in cases where Ruby is not a good fit, like a performance-intensive part of the application, or connecting to a database that does not have a Ruby adapter.

The speaker then described some of the projects that have been going on at ThoughtWorks, and demonstrated one of the web applications they use internally to manage projects.