J2EE Performance Optimization

J2EE applications must run efficiently even when demand on the system is high. Improving system performance is a complex task: it spans the core application, its use of the J2EE services, and the interaction with front-end Web servers and back-end database systems.

This article walks through an actual case study of optimizing a J2EE application to illustrate best practices developers can follow to obtain high performance.

The major system performance counter areas are as follows:
Disk: Hardware and software improvements allowed us to compare some configurations at higher injection rates against the baseline data, and we found that increasing the injection rate caused an increase in disk activity. This information alone only shows the symptom of a problem. We combined it with other information to pinpoint the source, which turned out to be insufficient cache size for some EJBs. The performance impact of disk I/O can range from 5% to 50%, depending on its severity.

Memory: Monitor physical memory (used and available), page faults, and pages/sec; paging should be minimized by providing sufficient physical memory for the workload. The performance impact can be orders of magnitude if excessive paging is observed.
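On the JVM side, heap consumption can be sampled with the standard Runtime API as a first-order check; this is only a sketch, and OS-level counters such as page faults and pages/sec still require PERFMON or an equivalent tool:

```java
// Sample JVM heap usage via the standard Runtime API. This complements,
// but does not replace, the OS-level memory counters from PERFMON.
public class HeapSample {
    public static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();  // currently used heap
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap (MB):  " + rt.maxMemory() / (1024 * 1024));
        System.out.println("used heap (MB): " + usedBytes() / (1024 * 1024));
    }
}
```

Sampling this periodically alongside the OS counters helps distinguish JVM heap pressure from OS-level paging.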
System: Minimizing context switches allows the processors to spend their time on real work. System calls and the number of threads should also be monitored. We found that performance was quite reasonable with about 100 threads running on the application server.

Processor: CPU utilization (user/privileged, individual processors, and total) and interrupts/sec; among all the PERFMON counters, total CPU utilization is the single most important indicator. If CPU utilization peaks at, for instance, 30% while the injection rate is gradually increased, there is likely a bottleneck somewhere outside the scope of the application server. In such a case, further application software, application server, and JVM performance improvements may not be realizable, and we can expect a low return on that investment. We must find ways to increase the CPU utilization on the application server in order to continue the performance analysis. The problem can be related to the network, OS configuration, or interference from anti-virus checkers that inherently limit system performance.

Database Configuration: A common way for database systems to reduce the impact of I/O is to avoid it where possible (through the use of memory as a buffer cache) and to make the remaining I/O fast (through the use of multiple disks in a RAID array). Failing to optimize the use of memory can result in a performance penalty of orders of magnitude. Once that is done and a high buffer-cache hit ratio is obtained (e.g., >99% for SPECjAppServer2002), we found that using RAW partitions on disk arrays yielded a further gain of about 10%.
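The buffer-cache hit ratio mentioned above is simply the fraction of logical reads satisfied without physical I/O. A sketch of the computation follows; the underlying counters come from the database's statistics views, whose names vary by vendor:

```java
// Buffer-cache hit ratio: the fraction of logical reads served from memory
// rather than disk. The counters (logical reads, physical reads) come from
// the database's statistics views; their names vary by vendor.
public class HitRatio {
    public static double hitRatio(long logicalReads, long physicalReads) {
        if (logicalReads <= 0) {
            throw new IllegalArgumentException("logicalReads must be positive");
        }
        return 1.0 - (double) physicalReads / logicalReads;
    }

    public static void main(String[] args) {
        // e.g., 1,000,000 logical reads with 5,000 physical reads -> 99.5%
        System.out.printf("hit ratio: %.1f%%%n", 100 * hitRatio(1_000_000, 5_000));
    }
}
```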

Our focus here is to remove the database system as a bottleneck so that we can concentrate on tuning application-server performance. Therefore, once a reasonable database configuration has been found, further tuning of the database system may yield a low return on investment.

Application-Server Considerations: We have covered in some detail the importance of, and the mechanisms for, preventing the network, the database system, or the driver/emulator from becoming the bottleneck. Meeting that condition is necessary before we can optimize performance on the application server.

Application-server tuning and optimizations can come from several areas of improvement:

Deployment descriptors: Different application servers expose different tuning options to the application through deployment descriptors. We found that for our workload, the max-beans-in-cache values for stateful session beans needed to be monitored for optimal performance. Trading memory for a reduced number of EJB passivations is usually worthwhile. Relationship caching and nested relationship caching for entity beans can reduce the number of round trips to the database servers.
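As an illustration, in a WebLogic-style weblogic-ejb-jar.xml descriptor the stateful-session cache size might be raised as follows; the bean name and the value shown are hypothetical, and element names vary by vendor and version:

```xml
<weblogic-ejb-jar>
  <weblogic-enterprise-bean>
    <!-- hypothetical bean name -->
    <ejb-name>OrderSessionEJB</ejb-name>
    <stateful-session-descriptor>
      <stateful-session-cache>
        <!-- sized to hold the peak number of concurrent sessions,
             so beans are not passivated to disk under load -->
        <max-beans-in-cache>1000</max-beans-in-cache>
      </stateful-session-cache>
    </stateful-session-descriptor>
  </weblogic-enterprise-bean>
</weblogic-ejb-jar>
```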

Choice of JDBC drivers: The choice of JDBC drivers is important. While a type 2 driver in general tends to shift work from the database server to a native client on the application server, the relationship is complex enough that measurement is often required to be sure. Using a type 2 driver may help multiple application servers share a single database, but we found it not to be ideal for our workload analysis, since we needed to max out CPU utilization on the application server. Early measurements also indicated approximately a 5-10% performance gain for one configuration after changing the JDBC driver from type 2 to type 4.

The choice of the database back-end system can also restrict the choice of JDBC drivers within the same type (e.g., type 4 thin drivers). An application server can support multiple third-party JDBC drivers, allowing the one with optimal performance for a particular application to be chosen. Our measurements indicated that the performance difference between two very competitive JDBC drivers can reach 25%.
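Driver comparisons of this kind come down to running the same workload under each candidate configuration and comparing latencies. A minimal sketch of such a harness is below; the stand-in workload is illustrative, and a real comparison would wrap "get connection, execute representative query, close" for each candidate driver's DataSource:

```java
// Minimal harness for comparing JDBC driver configurations: run the same
// workload against each candidate and compare average latency per operation.
public class DriverBenchmark {
    // Returns the average wall-clock time in nanoseconds per iteration.
    public static long averageNanos(Runnable workload, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) {
            workload.run();              // warm up JIT and driver-side caches
        }
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            workload.run();
        }
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        // Illustrative stand-in workload; a real comparison would obtain a
        // connection from each driver's DataSource and issue typical queries.
        Runnable workload = () -> {
            long sum = 0;
            for (int i = 0; i < 10_000; i++) sum += i;
            if (sum < 0) throw new IllegalStateException();
        };
        System.out.println("avg ns/op: " + averageNanos(workload, 100, 1_000));
    }
}
```

The warm-up phase matters: without it, JIT compilation and driver-side caching inflate the first measurements and can make two drivers look further apart than they are.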

Application-server configurations and run-time parameters: Different application servers may provide specific optimizations for a platform. Simply increasing every thread queue, however, increases the total number of threads on the system and generally has an adverse impact on performance. Adjusting thread queues often demands a careful design of experiments. We found that performance tends to suffer when the total number of these threads falls outside the range of 50 to 100 on the Windows system. It is important to have a one-to-one mapping between the worker threads and the JDBC connections for optimal performance.
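The one-to-one mapping can be sketched as a fixed worker pool whose size equals the connection-pool size, so that no running worker ever blocks waiting for a connection. In this sketch the sizes are illustrative and a Semaphore stands in for the connection pool's checkout logic:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

// Sketch: size the worker thread pool to match the JDBC connection pool,
// so each running worker can always obtain a connection without blocking.
public class PoolSizing {
    static final int POOL_SIZE = 50; // illustrative; tuned per workload

    // A Semaphore stands in for the JDBC connection pool's checkout logic.
    static final Semaphore connections = new Semaphore(POOL_SIZE);
    static final ExecutorService workers = Executors.newFixedThreadPool(POOL_SIZE);

    static Future<?> submit(Runnable request) {
        return workers.submit(() -> {
            try {
                connections.acquire();   // never blocks: workers == connections
                try {
                    request.run();       // would use the checked-out connection
                } finally {
                    connections.release();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Future<?> f = submit(() -> System.out.println("handled request"));
        f.get();
        workers.shutdown();
    }
}
```

If the worker pool were larger than the connection pool, the surplus workers would queue on the semaphore, adding context switches without adding throughput.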

The parameter “StatementCacheSize” avoids recompiling statements that are already in the cache. We found that performance is relatively insensitive to this parameter as long as it stays within the range of 50-300.
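Conceptually, such a statement cache behaves like a small LRU map keyed by SQL text: a hit returns the already-compiled statement, while a miss compiles one and may evict the least recently used entry once the size limit is reached. A sketch of that behavior, with arbitrary values standing in for prepared statements:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Conceptual model of a prepared-statement cache: an LRU map keyed by SQL
// text, capped at a fixed size (the StatementCacheSize analogue).
public class StatementCache<S> {
    private final Map<String, S> cache;

    public StatementCache(final int maxSize) {
        // accessOrder=true gives LRU ordering; removeEldestEntry enforces the cap
        this.cache = new LinkedHashMap<String, S>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, S> eldest) {
                return size() > maxSize;
            }
        };
    }

    public synchronized S get(String sql, Function<String, S> compile) {
        S stmt = cache.get(sql);
        if (stmt == null) {
            stmt = compile.apply(sql);   // "compile" only on a miss
            cache.put(sql, stmt);        // may evict the LRU entry
        }
        return stmt;
    }

    public synchronized int size() { return cache.size(); }

    public static void main(String[] args) {
        StatementCache<String> cache = new StatementCache<>(100);
        System.out.println(cache.get("SELECT 1", sql -> "compiled: " + sql));
    }
}
```

This also explains the insensitivity noted above: once the cache is large enough to hold the workload's distinct statements, further increases buy nothing.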

Conclusion: Optimizing the performance of a J2EE application is a complex task because the workload involves a network of connected computers in multiple tiers. The performance characteristics of the workload can be unexpected unless the relevant statistics are adequately monitored. When the software or hardware systems are upgraded, the performance bottlenecks may shift from one place to another. It is important to apply a top-down, data-driven approach to identify and remove bottlenecks outside the focus of application-server performance.

The same tools and methodology can be used to detect performance issues for different workloads. It is very important, however, to use tools that are as non-intrusive as possible, in order to provide a true analysis of the real environment. One simple way to improve performance of your application is to use a JVM that is already optimized for your platform. Understanding the JVM and additional tuning can yield additional benefits.
