The Performance Dragnet

By Dave Martin

For any organization, the two fundamental requirements for ensuring that enterprise applications meet high standards for performance are the ability to monitor the application with near-zero overhead and the ability to determine the root cause of problems quickly when they arise, regardless of whether the application is in QA, staging, or deployment.

The stories you are about to read are true. The names have been changed to protect the (sometimes not-so) innocent.

This article is the documented drama of actual Java performance problems. It begins with a summary of Wily's Enterprise Java Application Management solution and the role it plays in pre- and post-deployment performance analysis. Then I'll look at true stories of performance gone bad. You will learn - the easy way - why skipping Java application management doesn't pay.

The Wily 4 Solution
As a healthy organization matures, it will naturally develop a set of strategies and processes for reducing the turmoil it has experienced in fighting performance problems associated with its mission-critical Java applications. These organizations also understand the heavy cost of poor application performance: diminished returns on IT investments, risk to the organization's revenue stream, and, ultimately, damage to customer satisfaction.

The Wily 4 solution from Wily Technology allows enterprises to monitor, improve, and manage enterprise Java applications in each stage of the application life cycle. It provides a common language that each department within an IT organization can use to quickly identify and fix Java application performance problems when they arise.

With Wily 4, IT teams can monitor the business functions of their production applications 24/7, isolating performance bottlenecks in fund transfers, bill payments, purchases of goods, and other core use cases - all of which can be mapped, without source code changes, onto the underlying methods of the servlets, JSPs, EJBs, and custom code that implement them. Customizable dashboards and a wide variety of integration options make Wily 4 a perfect complement to popular systems management solutions. It allows organizations to publish Introscope Alerts with information about the availability of the production application's customer functions right into their existing operations' problem resolution pipeline (see Figure 1).

Key elements of the Wily 4 solution include Introscope, its flagship Java application monitor; Introscope LeakHunter for identifying the potential sources of memory leaks in production systems; and Introscope Transaction Tracer for solving performance problems associated with individual transactions. The Wily 4 solution also includes Introscope SQL Agent and Introscope PowerPacks for identifying application performance problems associated with the connections between Java applications and supporting back-end systems. Together, these solutions offer a "whole application" view into the entire Java environment.

The Performance Dragnet: Setting and Achieving Performance Goals
As a best practice, organizations use Wily's Introscope to benchmark and record the responsiveness of an application's key components to achieve performance goals. Successful development and deployment of enterprise Java applications should include continual baseline measurements of core use cases even as they are developed. As developers add and commit changes to an application, these baselines provide hard, statistical means by which each subsequent build of the application can be evaluated. If you fail to watch carefully through each step of the development process, you can end up with bugs or, far worse, architectural bottlenecks that are far more costly to fix in staging or production.

Like a good detective, a good baseline or benchmark just keeps asking the same questions until it gets the answers it wants. This kind of process encourages a "measure twice, cut once" approach that has proven itself time and again as the most effective means of improving software performance.

To successfully load-test all the key use cases of an application, the application must be fully functional; a good set of performance load-generation scripts can double as an excellent sanity check for the functional availability of core application features.

Once the application is under load, organizations need to be able to continuously monitor the application's performance - at a feature level - to identify bottlenecks, especially during the first few months of production. Wily's Introscope can abstract the details of your Java metrics into views and reports that actually describe the application's features. In the ubiquitous PetStore example, these would be metrics for purchasing goods, returning receipts, viewing catalog items, searching inventory, and so on. Taking this approach, you're not just tuning a component because it is slow, but because you have observed that it makes returning a customer's receipt take unacceptably long. Without this kind of view, you risk wasting valuable development time tuning features that may not be as important to the application's overall success.

Wily's Introscope reports the call volume, mean, minimum, and maximum response time of all the key components of the customer's applications. Introscope can also measure concurrency (a measure of the threads busy in each method or component of business logic), memory statistics, file and socket I/O statistics, pooled resource utilization statistics (e.g., MQ connections), environmental statistics, and more.

In the prototypical J2EE application, the controller servlet, model EJBs, back-end connector APIs (e.g., JDBC, MQ Series, etc.), and view JSPs would be measured. Organizations often take the additional, simple step of customizing Introscope so that they can monitor, without source code changes, their own proprietary business logic implementations or back-end connectors. For example, Introscope has been used to monitor SABRE transactions at major airlines, mainframe callbacks at legacy telecommunications companies, and a myriad of other custom back-end systems and business logic.

Data from the application can be recorded to CSV flat files or the most popular databases, or integrated directly into many load-generation tools. Each customer chooses the means of data recording and analysis that best suits their needs. Data written to flat files is automatically archived at configurable file size limits, making for easy scripted automation of both reporting and cleanup. Example data archiving scripts are also available for some of the most common databases.

The remainder of this article focuses on a number of real-world instances in which Introscope was used.

"The Big Sleep"
It was a Monday in Philadelphia, overcast and cold. I was working the day shift. The customer was an insurance company with an application in staging.

The application we were monitoring - we'll call it AUDIT - was a Web front end to a legacy green-screen application. Underwriters for the customer used the application to create new policy quotes. AUDIT uses CICS primarily for its back-end data, issuing incoming transactions against the same legacy stored procedures it has used for the last 30 years, then parsing the results into JSPs. Pages were designed to load in 1-2 seconds, but requests were taking 7-8 seconds per click. Data from CICS indicated that all requests were being honored quickly. We needed to get to the bottom of the situation fast.

Wily Introscope Custom Tracers implemented on the application's data object parsing procedures showed response times of exactly 100 milliseconds for each call, with 50-60 calls being made per transaction. Quick investigation into the data object construction revealed a developer's placeholder logic. Each object required a unique ID. A 100-millisecond sleep was programmed into the constructor in order to ensure a unique timestamp from each subsequent call to the system clock.
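A minimal sketch of what such placeholder logic can look like - the class and field names here are hypothetical, not taken from the customer's code. Each construction sleeps 100 milliseconds so that the next read of the system clock is guaranteed to return a fresh value to use as the "unique" ID.

    public class PolicyDataObject {

        private final long id;

        public PolicyDataObject() {
            try {
                // Placeholder "unique ID" scheme: sleep long enough that the next
                // call to the system clock cannot return the same timestamp.
                Thread.sleep(100);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            this.id = System.currentTimeMillis();
        }

        public long getId() {
            return id;
        }
    }

At 50-60 such constructions per transaction, the sleeps alone account for 5-6 seconds of every click, which lines up with the observed 7-8 second response times against a 1-2 second design target.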

Epilogue: The developer who controlled the knob for remotely adjusting the sleep time was never found.

"The Adventure of the Speckled Bean"
It was a Tuesday afternoon in Texas. I was at a customer who had already logged several months trying to solve their application performance problem. When I asked Dr. Watson's opinion, he replied with a core dump.

In production, the STORE application was losing about 30% of customer purchases due to slow response times. Some requests inexplicably took longer than two minutes, then timed out. Introscope quickly isolated the performance bottleneck to a method that was being used to calculate sales tax for each item of a purchase. A single line of Introscope configuration was enough to trace the business logic interface implemented by all of STORE's business logic components.

Further investigation revealed that the method used Runtime.exec to invoke a new JVM. Out of process, this new JVM calculated a single item's sales tax, printing its result to standard out. The application server patiently waited for the printout, then did the same thing again for each item purchased. Under modest load, as many as 50-60 JVMs were being started up and shut down simultaneously, requiring 20-40 seconds of wait time for each. The transactions being lost were those with the most purchased items.

Why? The person tasked with using the EJB interface included with the third-party sales tax calculation software had been unable to load it into the server's context. Frustrated, he fell back on the software's example code, which was easily reworked into a script that took the appropriate command-line parameters and printed the desired result to standard out.
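A minimal sketch of the resulting anti-pattern - the class names, jar path, and arguments below are invented for illustration, not taken from the customer's system:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.math.BigDecimal;

    public class SalesTaxCalculator {

        // Launches a brand-new JVM for this one line item, runs the vendor's example
        // code as a command-line script, and blocks until the answer appears on
        // standard out.
        public BigDecimal calculateItemTax(String sku, BigDecimal price, String state) throws Exception {
            Process p = Runtime.getRuntime().exec(new String[] {
                    "java", "-cp", "/opt/taxvendor/examples.jar",
                    "com.taxvendor.examples.TaxExample", sku, price.toPlainString(), state
            });
            BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String answer = out.readLine();   // wait for the printed result
            p.waitFor();                      // wait for the child JVM to exit
            out.close();
            return new BigDecimal(answer.trim());
        }
    }

Every call pays the full cost of JVM startup and shutdown, and under load dozens of child JVMs compete for the same CPUs; calling the vendor's library in process removes process creation from the request path entirely.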

Epilogue: The suspect pleaded Perl and was dismissed. Case closed.

"And Then There Were None (JDBC Cursors)"
It's Wednesday and I am steps away from the historic Boston Common. The customer was consistently running out of JDBC cursors and MQ Series connections during the course of their lengthy load tests. Wily's Introscope monitored the allocate and de-allocate semantics of these resources. With Introscope's Blame feature, the team was able to quickly isolate the guilty EJB: under heavy load the application was throwing exceptions that skipped the usual close calls, which should have been enclosed in finally blocks to ensure release.
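A minimal sketch of the pattern, with hypothetical class and table names: if the query throws under load, every close call after it is skipped and the cursor leaks; moving cleanup into a finally block guarantees release.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.sql.DataSource;

    public class OrderLookupDao {

        private final DataSource dataSource;

        public OrderLookupDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Leaky version: if executeQuery throws, none of the close calls below it run.
        public int countOpenOrdersLeaky() throws SQLException {
            Connection conn = dataSource.getConnection();
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM ORDERS WHERE STATUS = 'OPEN'");
            rs.next();
            int count = rs.getInt(1);
            rs.close();
            stmt.close();
            conn.close();
            return count;
        }

        // Fixed version: cleanup runs in finally whether or not an exception is thrown.
        public int countOpenOrders() throws SQLException {
            Connection conn = null;
            Statement stmt = null;
            ResultSet rs = null;
            try {
                conn = dataSource.getConnection();
                stmt = conn.createStatement();
                rs = stmt.executeQuery("SELECT COUNT(*) FROM ORDERS WHERE STATUS = 'OPEN'");
                rs.next();
                return rs.getInt(1);
            } finally {
                if (rs != null) try { rs.close(); } catch (SQLException ignored) { }
                if (stmt != null) try { stmt.close(); } catch (SQLException ignored) { }
                if (conn != null) try { conn.close(); } catch (SQLException ignored) { }
            }
        }
    }

The same discipline applies to MQ Series connections and any other pooled resource: the release call belongs in a finally block, not on the happy path.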

Epilogue: The missing "close" is now safely behind finally block bars.

"Dial S for Synchronized"
Thursday, and my dogs are barking. I drink my fifth cup of joe and head out the door.

Wily's Introscope measured the average response time of calls to a proprietary back-end system at 10 seconds. In a load test of 10 concurrent users, however, the average response times of transactions that used this back-end system were in the 90-100 second range. This case was getting more and more interesting. A concurrency metric on the call to the back-end system quickly isolated the discrepancy: nine concurrent user threads were sitting on a call to the back-end system while one thread actually transacted work.

Problems such as a misplaced synchronized keyword that locks an entire class rather than a single method, a resource pool with a single instance where 10 should have been available, or a remotely accessed server that becomes unavailable in fits and starts can be - and have been - caught with these Wily concurrency metrics (see Figure 2).
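A minimal sketch of the first of these problems, with hypothetical names: because the gateway method is static synchronized, every caller in the JVM contends for the same class-level lock, so nine of ten concurrent threads sit waiting while one transacts.

    public class BackEndGateway {

        // static synchronized acquires the BackEndGateway.class monitor, so only
        // one thread in the entire JVM can be inside sendRequest at a time.
        public static synchronized String sendRequest(String payload) throws InterruptedException {
            Thread.sleep(10000);   // stand-in for a roughly 10-second proprietary back-end call
            return "OK:" + payload;
        }
    }

With 10 concurrent users, the unlucky last caller queues behind nine others and waits roughly 10 x 10 seconds, which matches the 90-100 second range seen in the load test; a concurrency metric on sendRequest makes the nine waiting threads visible immediately.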

In this case, the application was simply incapable of dealing with unexpected failures of key back-end systems, taking minutes or even hours to time out a call that customers were expecting to come back in seconds. Concurrency is a quick and easy way to identify such spots in the application, making it simple to handle similar failures in production more gracefully.

Epilogue: The application and its owners are now doing hard time in a production facility.

"The Maltese ResultSet"
It's Friday and I'm in Atlanta. Peachtree was everywhere: the street, not so much the tree.

This customer had a problem with large database result sets. By neglecting to architect a strategy by which search queries could be limited and doled out in reasonable increments, they had created a "pig in the python" memory problem for their application server. The wrong kind of search or lookup sent the JVM into a thrash on the GC heap as the application server struggled to page in the result set handed off by the database. The ugliest side effect was the utter destruction of responsiveness for every other transaction in the application.

Wily's Introscope SQL Agent immediately diagnosed the problem by showing the difference between the database queries' response time and their "roundtrip" response time. The latter metric included any clock time spent before close was called on the ResultSet, while the former included only the time taken to recover an answer from the database. Roundtrip response times showed extremely large discrepancies from their counterpart metric. This metric was, of course, recoverable only from inside the application server.
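A minimal sketch of the distinction, with hypothetical table and column names: the query time stops when executeQuery returns an answer from the database, while the roundtrip time keeps the clock running while the application server drags the full result set across the wire, and only stops at close.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class ResultSetTimer {

        // Returns {queryMillis, roundtripMillis} for a hypothetical catalog search.
        public static long[] timeSearch(Connection conn, String term) throws SQLException {
            long start = System.currentTimeMillis();
            PreparedStatement ps = conn.prepareStatement("SELECT NAME FROM CATALOG WHERE NAME LIKE ?");
            ps.setString(1, "%" + term + "%");
            ResultSet rs = ps.executeQuery();
            long queryMillis = System.currentTimeMillis() - start;      // time to get an answer from the database
            try {
                while (rs.next()) {
                    rs.getString("NAME");   // materializing every row is what inflates heap use and roundtrip time
                }
            } finally {
                rs.close();
                ps.close();
            }
            long roundtripMillis = System.currentTimeMillis() - start;  // time until close on the ResultSet
            return new long[] { queryMillis, roundtripMillis };
        }
    }

A large gap between the two numbers points at result sets that should have been bounded - for example with Statement.setMaxRows or a paging clause in the query - rather than at a slow database.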

Epilogue: The development team made the necessary corrections and the application is now a fully reformed and contributing member of society.

"Murder on the JDBC Express"
I volunteered to work a Saturday up in San Francisco. After that last customer visit to Vegas, I needed the overtime.

Little did I know that my JDBC woes were far from over. The customer had generally figured out how to tune the response time of their key database interactions - JDBC statements were measured with a rough mean of 10 milliseconds, and most were in the 1 millisecond range.

What about the transactions that fell outside that range? I slapped Introscope Transaction Tracer on the application and recorded a host of 30-second transactions during the standard load test. Previously, no amount of profiling had helped the customer find the "guilty" method. The Transaction Trace call graphs, automatically weeded of all extraneous data by Introscope, allowed the customer to see the forest for all of its trees: nearly 4,000 JDBC statements were being issued by a single transaction. Even with these fast responses from the database, that many queries were enough to sink a click.

Analysis of the Transaction Trace revealed a single validation method responsible for six JDBC statements per call - and it was called often enough to account for almost 40% of the JDBC statements in each transaction. With a trivial number of changes from the development team - less than a day's work - these validation checks were reimplemented with a caching strategy. CPU burn and transaction time plunged; throughput leapt.
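A minimal sketch of that kind of fix - the names are hypothetical, and the customer's actual caching strategy is not described here: validation lookups whose answers rarely change are served from an in-memory cache instead of issuing the same JDBC statements on every call.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ValidationCache {

        // Hypothetical DAO that performs the single JDBC lookup.
        public interface CodeDao {
            boolean codeExists(String code);
        }

        private final Map<String, Boolean> validCodes = new ConcurrentHashMap<String, Boolean>();
        private final CodeDao dao;

        public ValidationCache(CodeDao dao) {
            this.dao = dao;
        }

        // Before: every validation call issued several JDBC statements.
        // After: only the first lookup for a given code touches the database;
        // repeat lookups are answered from memory.
        public boolean isValidCode(String code) {
            Boolean cached = validCodes.get(code);
            if (cached == null) {
                cached = Boolean.valueOf(dao.codeExists(code));
                validCodes.put(code, cached);
            }
            return cached.booleanValue();
        }
    }

A production cache would also need an eviction or expiry policy; the sketch omits it to keep the shape of the change visible.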

Epilogue: The database made a full recovery but the development team has since been sentenced to Swing.

"The Vector Who Knew Too Much"
It was Saturday night, the end of a long week. The city lights filtered through the fog rolling off the bay. The phone rang just as I was tilting a tall glass of rye.

The customer had a J2EE-based trading infrastructure with calls made directly into their EJB remote interfaces. Months before, they had swapped their JMS implementation and were now experiencing intense CPU burn and reduced throughput. The "GC Heap Bytes in Use" metric rocketed with each subsequent transaction, eventually resulting in an out-of-memory exception. The team rounded up all the usual profiling tools, but none of them could identify the source of the leak (see Figure 3).

Wily's Introscope LeakHunter was installed on the application in minutes and immediately flagged anomalous growth in a single Vector - one of the 45,000 data structures used by the application. The Vector in question contained transaction identifiers, meant to be cached only temporarily for quicker rollback. Instead, the Vector grew one-to-one with the number of transactions made against the application. The new JMS implementation answered questions about its transactions in a slightly different way, so calls to contains() on the Vector returned unexpected false answers and a new identifier was added with every transaction.
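A minimal sketch of how such a leak can arise - the names are hypothetical, and the exact identifier semantics of the customer's two JMS providers are not described in the story: if the identifier objects handed back by the new provider never compare equal to the ones already cached, contains() always returns false and the rollback cache grows without bound.

    import java.util.Vector;

    public class RollbackCache {

        // Hypothetical identifier class. Without equals() and hashCode() overrides,
        // contains() falls back to reference equality and never finds a match.
        public static class TxId {
            private final String value;

            public TxId(String value) {
                this.value = value;
            }
            // Missing: equals() and hashCode() based on value.
        }

        // Meant to hold only the identifiers of transactions awaiting rollback.
        private final Vector<TxId> pending = new Vector<TxId>();

        public void remember(TxId id) {
            // Intended as de-duplication; with identifiers that never compare
            // equal, this branch is always taken and the Vector grows one-to-one
            // with the transaction count.
            if (!pending.contains(id)) {
                pending.add(id);
            }
        }
    }

Calling remember(new TxId("42")) twice leaves two entries in the Vector; restoring value-based equals() and hashCode() - or keying a bounded map on the identifier's string value - restores the intended de-duplication and caps the cache.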

Epilogue: The offending Vector now goes by the name of Victor and is living in Florida under the Witness Protection Program.

After a long goodbye, I headed off into the night, not knowing where my next case would take me.

About the Author

Dave Martin is a System Engineer for Wily Technology. He has worked extensively with dozens of enterprise customers to identify and correct Java application performance issues. A former software developer for Wily Technology, Dave has comprehensive knowledge of J2EE architecture and software systems design and implementation.
