Java Memory Problems

Memory leaks and other memory-related problems are among the most prominent performance and scalability problems in Java applications. Reason enough to discuss this topic in more detail.

The Java memory model, or more specifically the garbage collector, has solved many memory problems. At the same time, it has created new ones. Especially in Java EE environments with a large number of parallel users, memory is increasingly becoming a critical resource. In times of cheap memory, 64-bit JVMs and modern garbage-collection algorithms, this might sound strange at first sight.

So let us now take a closer look at Java memory problems. They can be categorized into four groups:

  • Memory leaks in Java are created by keeping references to objects that are no longer used. This easily happens when multiple references to an object exist and developers forget to clear them once the object is no longer needed (see the sketch after this list).
  • Unnecessarily high memory usage is caused by implementations that consume too much memory. This is very often a problem in web applications where a large amount of state information is managed for “user comfort”. When the number of active users increases, memory limits are reached very quickly. Unbounded or inefficiently configured caches are another source of constantly high memory usage.
  • Inefficient object creation easily results in a performance problem when user load increases, as the garbage collector must constantly clean up the heap. This leads to unnecessarily high CPU consumption by the garbage collector. As the CPU is blocked by garbage collection, application response times increase, often already under moderate load. This behavior is also referred to as GC thrashing.
  • Inefficient garbage-collector behavior is caused by missing or wrong configuration of the garbage collector. The garbage collector takes care that objects are cleaned up, but how and when this should happen must be configured by the programmer or system architect. Very often people simply “forget” to properly configure and tune the garbage collector. I was involved in a number of performance workshops where a “simple” parameter change resulted in a performance improvement of up to 25 percent.
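
To make the first category more concrete, here is a minimal sketch of how such a leak typically looks in code. ListenerRegistry and OrderListener are made-up names used purely for illustration:

import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a typical Java memory leak: objects are added to a
// long-lived collection and nobody ever removes them again.
public class ListenerRegistry {

    // The static list lives as long as the class is loaded, so every
    // listener it references is kept alive as well.
    private static final List<OrderListener> LISTENERS = new ArrayList<OrderListener>();

    public static void register(OrderListener listener) {
        LISTENERS.add(listener);
    }

    // If callers forget to invoke this, every registered listener
    // (and everything it references) leaks for the lifetime of the JVM.
    public static void unregister(OrderListener listener) {
        LISTENERS.remove(listener);
    }
}

interface OrderListener {
    void onOrder(String orderId);
}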

In most cases memory problems affect not only performance but also scalability. The higher the amount of memory consumed per request, user or session, the fewer parallel transactions can be executed. In some cases memory problems also affect availability: when the JVM runs out of memory, or is close to its memory limits, it will fail with an OutOfMemoryError. This is when management enters your office and you know you are in serious trouble.

Memory problems are often difficult to resolve for two reasons. First, the analysis can get complex and difficult, especially if you are missing the right methodology. Second, the problems are often rooted in the architecture of the application, so simple code changes will not resolve them.

To make life easier, I present a couple of memory antipatterns that are frequently found in real-world applications. These patterns should help you avoid memory problems already during development.

HTTP Session as Cache

This antipattern refers to the misuse of the HttpSession object as a data cache. The session object serves as a means to store information which should “survive” a single HTTP request. This is also referred to as conversational state, meaning that data is stored across a number of requests until it is finally processed. This approach can be found in any non-trivial web application. Web applications have no other means than storing this information on the server. Well, some information can be put into a cookie, but this has a number of other implications.

It is important to keep as little data as possible in the session, for as short a time as possible. It can easily happen that the session contains megabytes of object data. This immediately results in high heap usage and memory shortages, and at the same time the number of parallel users is severely limited; the JVM will respond to an increasing number of users with an OutOfMemoryError. Large user sessions have other performance penalties as well: in the case of session replication in clusters, the increased serialization and communication effort results in additional performance and scalability problems.

In some projects the answer to this kind of problem is to increase the amount of memory and switch to 64-bit JVMs; people cannot resist the temptation of simply increasing the heap size to several gigabytes. However, this often only hides the symptoms rather than curing the real problem. This “solution” is only temporary and also introduces a new problem: bigger and bigger heaps make it more difficult to find the “real” memory problems. Memory dumps of very large heaps (greater than 6 gigabytes) cannot be processed by most available analysis tools. We at Dynatrace invested a lot of R&D effort to be able to efficiently analyze large memory dumps. As this problem is gaining more and more importance, a new JSR specification is also addressing it.

Session caching problems often arise because the application architecture has not been clearly defined. During development, data is simply put into the session whenever it is convenient. This very often happens in an “add and forget” manner, so nobody ensures that the data is removed when it is no longer needed. Normally, unneeded session data should be cleaned up by the session timeout. In enterprise applications that are constantly under heavy use, however, relying on the session timeout will not work. Additionally, very high session timeouts are often used, up to 24 hours, to provide additional “comfort” to users so that they do not have to log in again.

A practical example is putting selection choices for lists, which have to be fetched from the database, into the session. The intention is to avoid unnecessary database queries (smells like premature optimization, doesn’t it?). This results in several kilobytes being put into the session object for every single user. While it is reasonable to cache this information, the user session is definitely the wrong place for it; a cache shared by all users is the better choice, as sketched below.
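
A minimal sketch of the better alternative, assuming the selection choices are read-only reference data. CountryChoicesCache and CountryDao are made-up names used purely for illustration:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of caching reference data (e.g. selection choices for a
// drop-down list) once per application instead of once per user session.
public class CountryChoicesCache {

    // One shared copy for all users instead of one copy per HttpSession.
    private static final Map<String, List<String>> CACHE =
            new ConcurrentHashMap<String, List<String>>();

    public static List<String> getChoices(String listName, CountryDao dao) {
        List<String> choices = CACHE.get(listName);
        if (choices == null) {
            // Worst case a few threads load the same list concurrently,
            // which is harmless for read-only reference data.
            choices = dao.loadChoices(listName);
            CACHE.put(listName, choices);
        }
        return choices;
    }
}

interface CountryDao {
    List<String> loadChoices(String listName);
}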

Another example is abusing the Hibernate session to manage conversational state. The Hibernate session object is simply put into the HTTP session to have fast access to the data. However, this results in much more data being stored than necessary, and the memory consumption per user rises significantly.

In modern AJAX applications, conversational state can also be managed on the client side. Ideally, this leads to a stateless, or nearly stateless, server application which also scales significantly better.

ThreadLocal Memory Leak

ThreadLocal variables are used in Java to bind a variable to a specific thread. This means every thread gets its own instance. This approach is used to handle state information within a thread, for example the credentials of the current user. The lifecycle of a ThreadLocal variable is, however, tied to the lifecycle of the thread: ThreadLocal values are only cleaned up by the garbage collector when the thread terminates, unless the programmer removes them explicitly.

Forgotten ThreadLocal variables can easily result in memory problems, especially in application servers. Application servers use thread pools to avoid constantly creating and destroying threads. An HttpServletRequest, for example, gets a free thread assigned at runtime, and that thread is returned to the pool after execution. If the application logic uses ThreadLocal variables and forgets to explicitly remove them, the memory will not be freed up, as illustrated in the sketch below.
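
A minimal sketch of the safe pattern in a servlet environment: the value is cleared in a finally block before the pooled thread returns to the pool. UserContextFilter and the “user” request attribute are made up for illustration:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Minimal sketch: the ThreadLocal is cleared in a finally block so the
// value cannot stay attached to a pooled worker thread.
public class UserContextFilter implements Filter {

    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<String>();

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        CURRENT_USER.set((String) request.getAttribute("user"));
        try {
            chain.doFilter(request, response);
        } finally {
            // Without this call the value survives the request and is
            // never garbage collected as long as the thread lives.
            CURRENT_USER.remove();
        }
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}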

Depending on the pool size (in production systems this can be several hundred threads) and the size of the objects referenced by the ThreadLocal variable, this can lead to serious problems. A pool of 200 threads and 5 MB referenced per ThreadLocal will in the worst case lead to 1 GB of unnecessarily occupied memory. This immediately results in high GC activity, leading to bad response times and potentially to an OutOfMemoryError.

A practical example was a bug in jbossws version 1.2.0, fixed in version 1.2.1 (“DOMUtils doesn’t clear thread locals”). The problem was a ThreadLocal variable which referenced a parsed document with a size of 14 MB.

Large Temporary Objects

Large temporary objects can, in the worst case, also lead to OutOfMemoryErrors, or at least to high GC activity. This will, for example, happen if very big documents (XML, PDF, images, …) have to be read and processed. In one specific case the application was unresponsive for a couple of minutes, or its performance was so limited that it was practically unusable. The root cause was the garbage collector going crazy. A detailed analysis led to the following code for reading a PDF document.

// bis is an input stream over the PDF document (e.g. a BufferedInputStream)
byte[] tmpData = new byte[1024];
int offs = 0;
do {
    int readLen = bis.read(tmpData, offs, tmpData.length - offs);
    if (readLen == -1)
        break;
    offs += readLen;
    if (offs == tmpData.length) {
        // buffer full: grow it by only 1 KB and copy everything read so far
        byte[] newres = new byte[tmpData.length + 1024];
        System.arraycopy(tmpData, 0, newres, 0, tmpData.length);
        tmpData = newres;
    }
} while (true);

The documents read using this method had a size of several megabytes. They were read into the byte array and then sent to the user’s browser. Several parallel requests rapidly led to a full heap. The problem was made even worse by the highly inefficient algorithm used for reading the document: an initial byte array of one kilobyte is created, and whenever this array is full, a new array which is one kilobyte larger is created and the old array is copied into it. This means that a new array is created and copied for every kilobyte read. The result is a huge number of temporary objects and a memory consumption of twice the size of the actual data, as the data is permanently being copied.
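
A minimal sketch of a less memory-hungry alternative for the case where the document only has to be forwarded to the browser: copy it in fixed-size chunks directly to the output stream instead of accumulating it in an ever-growing array. StreamCopy is a made-up helper name:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Minimal sketch: copy the document to the response in fixed-size chunks
// instead of buffering the whole document in memory.
public final class StreamCopy {

    public static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8 * 1024];   // one reusable 8 KB buffer
        int readLen;
        while ((readLen = in.read(buffer)) != -1) {
            out.write(buffer, 0, readLen);    // no growing arrays, no copying
        }
    }
}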

When working with large amounts of data, optimizing the processing logic is crucial for performance. In this case a simple load test would have shown the problem.

Bad Garbage Collector Configuration

In the scenarios presented so far, the problem was caused by the application code. In a lot of cases, however, the root cause is a wrong, or missing, configuration of the garbage collector. I frequently see people trusting the default settings of their application servers, believing that the application server vendors know best what is ideal for their application. The configuration of the heap, however, strongly depends on the application and the actual usage scenario. Depending on the scenario, parameters have to be adapted to get a well-performing application. An application processing a high number of short-lived requests has to be configured completely differently from a batch application executing long-running tasks. The actual configuration additionally depends on the JVM used: what works fine for the Sun JVM might be a nightmare for the IBM JVM (or at least not ideal).
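
As a purely illustrative example, a starting point for tuning a Sun/HotSpot JVM of that generation might look like the following command line. The sizes are placeholders derived from measuring the actual application, not recommendations, and myapp.jar is a made-up name:

java -Xms2g -Xmx2g \
     -XX:MaxPermSize=256m \
     -XX:NewRatio=3 \
     -XX:+UseConcMarkSweepGC \
     -verbose:gc -XX:+PrintGCDetails -Xloggc:gc.log \
     -jar myapp.jar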

Misconfigured garbage collectors are often not immediately identified as the root cause of a performance problem (unless you monitor garbage collector activity anyway). Often the visible manifestation of the problem is bad response times, and understanding the relation between garbage collector activity and response times is not obvious. If garbage collection times cannot be correlated to response times, people find themselves hunting a very complex performance problem: response time and execution time problems will manifest across the application, at different places, without an obvious pattern behind the phenomenon.

I have seen cases where a change to the garbage collector settings solved, within minutes, performance problems that people had been hunting for weeks.

ClassLoader Leaks

When talking about memory leaks, most people primarily think about objects on the heap. Besides objects, however, classes and constants are also managed on the heap. Depending on the JVM, they are put into specific areas of the heap; the Sun JVM, for example, uses the so-called permanent generation, or PermGen. Classes are very often put on the heap several times, simply because they have been loaded by different classloaders. In modern enterprise applications, the memory occupied by loaded classes can add up to several hundred megabytes.

The key is to avoid unnecessarily increasing the size of classes. A good example is the definition of a large number of String constants, for example in GUI applications, where all texts are often stored in constants. While using constants for Strings is in principle a good design approach, the memory consumption should not be neglected. In a real-world case, all constants of an internationalized application were defined in one class per language. A coding error that was not obviously visible resulted in all of these classes being loaded. The result was a JVM crash with an OutOfMemoryError in the PermGen of the application.

Application servers suffer from an additional problem with classloader leaks. These leaks are caused when a classloader cannot be garbage collected because an object of one of the classes loaded by this classloader is still alive. As a result, the memory occupied by these classes will not be freed up. While this problem is handled well by today’s Java EE application servers, it seems to appear more often in OSGi-based application environments. The sketch below shows how such a leak typically comes about.
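
A minimal, made-up sketch of the mechanism, assuming a registry class loaded by the server’s (parent) classloader and a listener class loaded by the web application’s classloader:

import java.util.ArrayList;
import java.util.List;

// Loaded by the server's (parent) classloader and therefore long-lived.
public class SharedListenerRegistry {
    public static final List<Object> LISTENERS = new ArrayList<Object>();
}

// Loaded by the web application's classloader.
class AppShutdownListener {
    void register() {
        // The long-lived registry now references an AppShutdownListener
        // instance. Its class keeps a reference to the webapp classloader,
        // so neither the class nor the classloader (nor any other class it
        // loaded) can be garbage collected after the application is
        // undeployed, unless the listener is removed again.
        SharedListenerRegistry.LISTENERS.add(this);
    }
}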

Conclusion

Memory problems in Java applications are manifold and easily lead to performance and scalability problems. Especially in Java EE applications with a high number of parallel users, memory management must be a central part of the application architecture.

While the garbage collector takes care that unreferenced objects are cleaned up, the developer is still responsible for proper memory management. In addition to application design, memory management is also a central part of application configuration.

Your Experience

These are the anti-patterns I have found in real-world applications. If you know additional anti-patterns or common problems, I am happy to learn more about them.

Credits

This article is based on the Performance Antipatterns Series I am working on together with Mirko Novakovic of codecentric.

Interesting Other Blogs

As performance anti-patterns are kind of a hobby of mine, I post regularly about them. Here is a selection of posts you might be interested in as well.

If you want more information about how to solve memory problems like these and other issues in your Java environment, you might also be interested in our eBook Java enterprise performance.