JavaOne 2008 – Day 1

General Session: Java + You

The opening session of JavaOne 2008 was pretty much a showcase of what’s new and upcoming in the world of Java development. There was a preview of several Java-enabled devices, including a couple of mobile phones, a mobile book reader from Amazon (with web integration), and a Blu-ray Disc demo presented by Neil Young, who is just about the last person I would have expected to see at JavaOne. All of the products demoed were developed with Java as a core component.

The first part of the keynote talked about how the RFIDs embedded in our JavaOne IDs are used to validate us for the sessions that we have registered for. In addition to that basic usage, the RFIDs are also used to track people as they enter/leave the sessions. This information can be displayed on a graph to see how well the speakers are keeping their audience throughout their presentation.

The speakers also demonstrated how Java can interact with industrial tools that are used to track energy efficiency in buildings. The data collected via these tools can be fed to any number of applications for processing or display. During the demo, they showed a JavaFX application that visually represented the data, and a graphing tool that created a series of graphs based on the data. Sun has hooked up this monitoring to its own data centers, and releases information to the public regarding how much energy those data centers are using. The speakers stressed the goal of using technology to improve society.

There was a pretty cool JavaFX demo that brought up a portlet-like component on a web page. They demonstrated how a user can drag that component from the web page, out of the browser, and drop it on the desktop. The user can then choose to “save” that component, essentially installing a local copy of the app on their PC. They also demonstrated how that same application was easily accessible via a mobile device. This was the SAME copy of the app. It was not written differently for these different environments.

Overview of the JavaFX Script Programming Language

This presentation, given by the creator of JavaFX, demonstrated the features of the JavaFX language. JavaFX is an object-oriented scripting language, built on top of Java, which allows for easier GUI development. It supports closures, first-class functions, multiple inheritance, and other syntactic sugar currently not available in the Java programming language. Although JavaFX was created with GUI development in mind, it is a complete language on its own, and can be used for other tasks.

One of the strengths of JavaFX appears to be in the way that properties can be specified for GUI components. An example (this is going off memory, may not be syntactically correct):

Rectangle {
    x : 10
    y : 10
    width : 50
    height : 50
    color : Color.RED
}

The above example draws a 50px x 50px red square at position 10,10. Instead of hard-coding the values for the properties, you can bind them to variables. Binding the properties to variables allows JavaFX to dynamically re-draw your component when the values of those variables change. For example, if you bind the width property to “w” and the height property to “h”, then in an event handler that responds to, say, a mouse-drag event, you can dynamically resize the component based on how the user dragged the mouse.
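A rough sketch of what binding looks like, again from memory (the variable names here are mine, and the syntax may not be exact):

var w = 50;
var h = 50;

Rectangle {
    x : 10
    y : 10
    width : bind w
    height : bind h
    color : Color.RED
}

Whenever w or h is reassigned, such as from a mouse-drag handler, the rectangle is automatically re-drawn at its new size, without any explicit repaint code.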

I don’t do any GUI development, but I’ve heard a lot about JavaFX lately, and wanted to see what it had to offer. The demos during this presentation were simple, but we have also seen some pretty cool animated, 3D demos using JavaFX here at JavaOne. Some of those demos were very impressive.

Enterprise-Level Testing Strategies

This presentation, given by some fellow Chicagoans from Navteq, described how they went about solving some performance and maintainability problems they were having with their enormous unit test suite.

Some general pointers they offered:

  • Know, in detail, what it is that you need to test. Code coverage doesn’t mean squat if your tests don’t exercise your real use cases.
  • Build your test framework in such a way that it can grow with your team.
  • Poorly written unit tests are viral, and spread quickly. Don’t tolerate poor unit tests. Treat test code like production code, and enforce best practices.
  • Grow your test framework with your software. As you refactor your software, refactor your tests and test framework as well.

More specifically, they offered a few things you can do to increase test performance and reduce test maintenance (or at least make it easier to do):

  • Standardize your test data, and standardize access to that test data. Share standard test data among all of your tests. If you do this, and standardize access to that data, you can cache that data to avoid re-creating it, or re-fetching it from the data store.
  • Use long-lived objects to avoid the repeated instantiation of expensive objects.
  • Analyze your dependencies and componentize your builds. Break your builds up into smaller parts, so that you only need to run the tests for the component that changed.
  • Use a continuous integration tool. Using a CI tool helps you catch problems with your code much earlier in the development cycle, making it easier and less costly to fix.
  • Have a configurable and deployable database schema.
  • Apply “best practices” to test code, just as you would to production code. Bad tests are always copied more, and propagate faster, than good tests.

With regard to caching test data, the speakers said it is necessary to hand each test a COPY of that test data. This way, the test can modify that data as it needs to without affecting the tests that run after it. Copying the test data for each test is much simpler than implementing a rollback mechanism to be executed after each test. Since this copy would need to be a deep copy, the speakers suggested using your objects’ copy constructors, if they have them, or to serialize the test data when it is created, and de-serialize it into a new copy at the beginning of each test.
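The serialization approach the speakers suggested can be sketched with a small helper like this (the class and method names are mine, not theirs):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Deep-copy cached test data by round-tripping it through Java
// serialization, so each test gets its own mutable copy of the
// shared fixture without touching the original.
public class TestDataCopier {

    @SuppressWarnings("unchecked")
    public static <T extends Serializable> T deepCopy(T original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(original);   // serialize the cached fixture
            out.close();

            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return (T) in.readObject();  // de-serialize into a fresh copy
        } catch (Exception e) {
            throw new RuntimeException("deep copy of test data failed", e);
        }
    }
}
```

A test would call deepCopy() on the cached fixture in its setup method and mutate the copy freely. Note this only works if the entire object graph is Serializable; otherwise you are back to copy constructors.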

Implementing these changes cut Navteq’s test time in half when running the entire suite (from 3 hours to 1.5 hours). However, they pointed out that they rarely need to run the entire suite anymore, since they have componentized their build. Now, they only run the tests for the component that was modified, which takes 20 minutes for their largest component.

Introducing eBay’s Scalable Innovation Platform

This was by far the most interesting presentation of the day. eBay took us on a whirlwind tour of their application platform.

First up was the UI layer. With eBay, everything is in Java. And everything means everything. They have a home-grown DOM implementation to build the web pages. They have home-grown, reusable, self-contained UI components which have their own CSS, JavaScript, and content. In addition, the UI components declare dependencies on other UI components, and have a sample data model to aid in testing. One very nice thing about their UI components is their level of documentation. eBay keeps a Component Catalog that fully documents each of the components, how they can be configured, how they can be used, and more. This fosters re-use among the developers.

The platform is aware of all uses of web resources (images, links, external CSS, external JavaScript, etc). eBay has tools that will verify the resources referenced in code at build time. If a resource is missing, or not specified correctly in the code (typo), the build will break. This prevents broken links and images from making it onto the site. They have also developed a set of Eclipse plug-ins to aid in resource management.

All CSS is also defined exclusively in Java. Eclipse plug-ins have been developed to help with this area as well. Code completion is available for all CSS properties for the specific element that they are styling. Build tools have been created to verify that only valid properties are used, and valid values are used for those properties where possible. This helps prevent many common CSS errors.

eBay has created a set of content creation tools that allow in-browser, in-page content editing. While testing, if you see a content error on the site, you can edit that content right in the browser, and immediately start the patch process to get that content change onto the site. To ease internationalization, tools highlight content that does not come from their content management system. This makes it very easy to spot content that will not be translated when viewed in international locales.

As I mentioned before, eBay’s UI is composed of a series of components. A particular keystroke in your browser will enable the highlighting of all of the individual components on that page. Another custom keystroke displays the code for that particular component in Eclipse. This enables you to quickly find and easily fix bugs in components.

eBay’s platform was built with scalability in mind. Monitoring and operations were not an afterthought. What to monitor, and how to monitor it, was carefully considered when designing the platform. Because of this, they have great visibility into all areas of the site, which enables them to easily find bottlenecks so they can be addressed.

The data access layer was built with performance in mind. It consists of 500+ databases, which store different sets of data. The Data Access Layer (DAL) does caching, partial object persistence (hydration), and routing to a specific database based on the data that is being accessed or stored. eBay has a templatized SQL framework that will dynamically insert column names and table names, and add an SQL JOIN to the query if necessary. Like the UI layer, the data access layer also provides a set of Eclipse plug-ins to aid in the development of DAL components.

As for connection pooling, eBay was not happy with any of the open source options, so they rolled their own. Among other features, their connection pool provides easy configuration and complete transparency, supports throttling, and has a storm suppression system that throttles the creation of new DB connections in times of exceptionally high traffic…to keep the database from falling over.

The platform supports a fail fast architecture that determines when to “cut the cord” to another component to prevent that problem from propagating through the platform and affecting other areas.

According to eBay, the Java-everywhere approach works great. It enables them to quickly develop and deploy new features to the site (they do a build every week), which was a major goal of the architecture. It does, however, require that everybody, even the web designers, know a little bit of Java. But this does not appear to have caused a problem.

Real-Time Specification for Java (JSR 1): The Revolution Continues

This session attracted my interest, as I was not aware of any effort to provide real-time support in Java. I had always considered real time systems to be based in a language that was sitting a little closer to the operating system (preferably a real time operating system), and not running in any type of virtual machine. But, this effort has been going on for quite some time. After all, it is in fact JSR numero uno.

Currently in Java, there is no way to specify the real priority of a thread. You can “suggest” a priority to the JVM, but there is no guarantee that the thread will actually run at that priority. Also, there is no real way to specify an exact “wake up” time for a thread that needs to wake up every x seconds to perform some task.
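The hint-only nature of thread priorities is visible in the standard API (the class and method names below are my own illustration, not from the talk):

```java
// A thread stores and reports the priority you request, but the JVM
// is free to map it to any underlying OS priority -- or to ignore it
// entirely. Nothing here guarantees actual scheduling behavior.
public class PriorityHint {

    public static Thread newWorker(Runnable task, int priority) {
        Thread worker = new Thread(task);
        worker.setPriority(priority); // a suggestion, not a contract
        return worker;
    }
}
```

Even though getPriority() will dutifully report MAX_PRIORITY, a real-time system needs a hard guarantee about when the thread runs, which is exactly what plain Java cannot give.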

JSR-1 aims to add some of these capabilities. It isolates the application from pauses caused by garbage collection…by essentially disabling garbage collection. It also works around other features of the JVM that may delay processing, like JIT compiling.

However, the fact that it disables garbage collection brings up other issues. The app is now responsible for maintaining a tighter grip on its memory usage. There are two ways to do this: use “Immortal Memory”, which is just fancy talk for letting your objects live forever, or use the scope-based memory approach provided as part of the JSR.

JSR-282 expands upon JSR-1 by allowing you to specify a processor for your real time processing to use, to specify data to be included along with the time-based events that are fired, and by providing an alternative to the memory management model suggested by JSR-1. JSR-282 is moving slowly (though apparently not as slowly as JSR-1), and the development of this JSR is not funded. So, don’t count on seeing it implemented anytime soon.

JSR-50 contains the specification for distributed real time support. This includes the ability to provide end-to-end time constraints across processors.

JSR-302 is not specifically a real time JSR, but it is related. JSR-302 is the specification for safety-critical Java applications. Safety-critical applications, like the applications that run the flight instruments of an airplane, are required to go through a rigorous certification process. Every line of code that makes up such an application, including the JVM that runs it, must be thoroughly inspected. Because of this, these applications are often stripped down to the bare necessities. Developing these apps requires a different mindset than what most web developers are used to. Instead of a “failures will happen…learn to handle them” mindset, one needs to adopt an “all failures must be prevented at all costs…but handled gracefully when they happen” mindset.

This talk didn’t quite live up to what I was expecting, but it did provide some insight into an area of Java development that I have little knowledge of.

Let’s Resync: What’s New for Concurrency on the Java Platform, Standard Edition

This presentation covered the new concurrency features of the upcoming Java 7 release. CPU clock speeds have been holding steady for the past several years. Chip manufacturers are providing faster chips by adding cores instead of by increasing clock speed. This presents an interesting challenge to developers, since taking advantage of those multiple cores is not an option if your application is single threaded. Coarse-grained threading (e.g., a thread-per-user-request model) works well with a small number of cores. However, once the number of cores starts to dramatically increase, as it is predicted to do, this threading model will be unable to keep all of those cores busy.

Java 7 will provide a fork/join threading framework to offer a finer-grained threading solution, which aims to take advantage of these newer multi-core systems. Fork/join involves taking a data set, chopping it up into chunks, having a series of threads process those chunks, and combining the results of that processing at the end. Fork/join frameworks are a good way of solving common divide-and-conquer problems, like sorting or searching. However, some experimentation with the number of threads in a fork/join pool and the size of the chunk each thread operates on is required to get the best performance out of this model.
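As a sketch of the model, here is a recursive array sum written against the fork/join API that eventually shipped in java.util.concurrent (the threshold value is an arbitrary tuning knob, not a recommendation from the talk):

```java
import java.util.concurrent.RecursiveTask;

// Split the array in half until a chunk falls under the threshold,
// sum the chunks in parallel, and combine the partial results.
public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // tune for your data and core count

    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) sum += data[i];
            return sum;                       // small chunk: sum directly
        }
        int mid = (from + to) >>> 1;
        SumTask left = new SumTask(data, from, mid);
        left.fork();                          // process the left half asynchronously
        long rightSum = new SumTask(data, mid, to).compute();
        return rightSum + left.join();        // combine the partial results
    }
}
```

Submitting the root task to a ForkJoinPool runs the whole tree across the pool’s worker threads, each of which keeps its own queue of pending subtasks.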

A common problem with the fork/join model is that some threads work through their data sets more quickly than others. When this happens, the threads that finish early just sit there waiting for the others to finish. This is not a good use of their time. Enter “work stealing”. Work stealing allows threads that have completed their work early to “steal” items from the tail of another thread’s work queue. Since worker threads only take items from the heads of their own queues, there is little chance of concurrent access. One common strategy for implementing work stealing is to put the more costly work at the end of your queue. That way, if another thread steals an item off of your queue, at least it will grab something that will keep it busy for a while.

Java 7 will come with a ParallelArray class that abstracts away much of the detail around the fork/join model. It lets you filter and map the data set it operates on to reduce the data set before parallelizing the work. Parallelizing with ParallelArray makes it very clear what each thread is doing, as opposed to parallelizing work with a series of arbitrary thread objects that simply join to one another. Anonymous inner classes, or Java 7 closures, can provide the filtering and mapping logic for ParallelArray to use when working on the data set.

Although the advantage of processing data like this is clear, I still struggle to find examples of data processing at my job that can benefit from this type of multi-threaded processing. Maybe this requires a different way of thinking about our current set of problems.
