Manage Your Memory Consumption

21/05/2018

In this blog I want to stress how important it is to manage the data that you fetch and load into your ADF application. I blogged on this subject earlier; in my opinion it is still underestimated. Recently I was involved in troubleshooting performance problems in two different ADF projects. They had one thing in common: their servers frequently became unavailable, and they fetched far too many rows from the database. Fetching that much data easily leads to memory over-consumption, ‘stop the world’ garbage collections that run far too long, a much slower application, or, in the worst case, servers that run into an OutOfMemoryError and become unavailable.

Developing a plan to manage and monitor fetched data during the whole lifetime of your ADF application is an absolute must. Keeping your sessions small is indispensable to your performance success. This blog shows a few examples of what can happen if you do not do that.

Normal JVM Heap and Garbage Collection

First, just for reference, let’s have a look at what a normal, ‘healthy’ JVM heap and garbage collection pattern looks like (bottom left). The ADF Performance Monitor shows real-time or historic heap usage and garbage collection times. The heap space (purple) over time forms a saw-tooth shaped line – the sign of a healthy JVM. There are many small and just a few big garbage collections (pink), because there are basically two types of garbage collectors: frequent, short collections of the young generation and occasional, longer full collections of the whole heap. The big garbage collections do not run longer than 5 seconds:
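For reference, the heap and garbage collection figures in these charts are standard JVM metrics. The sketch below is not how the ADF Performance Monitor collects its data; it is just a minimal, self-contained way to read the same numbers (heap usage, collection counts and times) from a running JVM:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Minimal snapshot of heap usage and garbage-collection totals.
// Illustration only; a monitoring tool charts equivalent metrics over time.
public class HeapAndGcSnapshot {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB max%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));

        // Typically one young-generation collector (many small, fast runs)
        // and one old-generation/full collector (few, longer runs).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```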

Server Just Survives Extreme High Fetch

Now let’s have a look at a server that only just survives an extremely high fetch. In this example we see an expanded JVM chart. Suddenly a long-running garbage collection of around 15 seconds occurred. We see that the JVM heap space of managed server 6 suddenly grows from 5-6 GB to over 7 GB. This server was configured with a maximum Java heap space of 8 GB. It was lucky and just survived this high fetch; after some time, the heap space decreased back to a normal level of around 5 to 6 GB:

Root Cause in ADF Callstack

We can see the root cause in an ADF callstack. It turned out that more than two billion rows (!) were loaded into the JVM memory of our server:

Runtime Fetched Rows in ADFBC Memory Analyzer

We can also see this in the ADF BC Memory Analyzer of the ADF Performance Monitor, filtered on managed server 6. The ADF BC Memory Analyzer detects how many database rows are fetched by ADF ViewObjects. We see the extremely high load of two billion rows (blue):

Server Unavailable after OutOfMemoryError

Another day, another server – managed server 3 – was less lucky; it did not survive a high load of, again, around two billion rows. The heap space (purple) over time evolves into a horizontal line rather than the saw-tooth shaped line that characterizes a healthy JVM. In this case an OutOfMemoryError occurred, and the server needed to be restarted. We also see a lot of red in the top chart from 09:00 onwards – indicating a problem with availability and response times:

This should be a trigger to investigate whether the JVM heap space is set too low or whether the ADF application over-consumes memory. Very frequently the latter is the case.

A Report’s High Fetch Causes the Server to Become Unavailable

At a different customer the very same thing happened. One end-user ran an Excel report, fetching more than thirty thousand rows. We see a garbage collection that ran for 458 seconds (!). The server – with a heap space of just 1 GB – did not survive this high load and ran into an OutOfMemoryError. We can see in the top chart that the server was unavailable from 10:00 to 11:00. After a restart the application ran ‘fine’ again – still with the vulnerability that the same thing will happen as soon as someone runs such a report:

Develop a Plan

Developing a plan to manage and monitor this fetched data during the whole lifetime of an application is an absolute must. It is indispensable to your performance success. The first step is to measure the current number of rows fetched by the ADF application and determine the appropriate maximum fetch sizes for your data collections. With the ADFBC Memory Analyzer you can measure, in real time and historically, how many rows the ViewObjects load into Java memory. Limit all ViewObjects that regularly fetch more than 500 rows with a maximum fetch size, or use extra bind variables and ViewCriteria to reduce the number of rows. See also this blog for more details on this subject: limiting the JVM memory consumption.
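As a sketch of what limiting a single ViewObject can look like in code – the view object name OrdersVO and the bind variable pCustomerId are hypothetical examples, and the same limit can also be set declaratively in the ViewObject’s Tuning section:

```java
import oracle.jbo.ApplicationModule;
import oracle.jbo.ViewObject;

// Sketch: cap the rows a single ViewObject instance may fetch and narrow the
// query with a bind variable. "OrdersVO" and "pCustomerId" are hypothetical
// names; pCustomerId must be defined as a bind variable on the view object.
public class FetchLimitExample {
    public static void queryOrdersWithLimit(ApplicationModule am) {
        ViewObject vo = am.findViewObject("OrdersVO");

        vo.setMaxFetchSize(500);                          // never load more than 500 rows into memory
        vo.setNamedWhereClauseParam("pCustomerId", 4711); // restrict the result set instead of filtering in memory
        vo.executeQuery();

        // Rows fetched into ADF BC memory so far for this row set
        System.out.println("Rows fetched: " + vo.getFetchedRowCount());
    }
}
```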

Set at Least a Global Maximum Fetch Limit:

This property is not that well known (I never see it set at customers), and where it is known I think it is underestimated. It is very important to at least set this property. Set it to, for instance, 1000 or preferably 500 rows. Then you can be far more confident that your server will not run into an OutOfMemoryError and become unavailable.
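For reference, this global limit is configured in adf-config.xml; in JDeveloper it is the Row Fetch Limit field on the Business Components page of the adf-config.xml overview editor. Below is a minimal sketch of the resulting configuration – element and attribute names as I recall them, so verify against your JDeveloper/ADF version:

```xml
<!-- adf-config.xml: global row fetch limit for ADF Business Components.
     Names as I recall them; preferably set via the overview editor. -->
<adf-adfm-config xmlns="http://xmlns.oracle.com/adfm/config">
  <defaults rowLimit="500"/>
</adf-adfm-config>
```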

You can still – if really needed – override this property on a ViewObject instance in its Tuning section.

After setting this global fetch limit, our daily server-unavailability problems did not occur anymore – we prevented any server from becoming unavailable. First, we set it to 20,000. That is still a lot, but it already prevented servers from becoming unavailable. Later we gradually lowered it to 10,000, 5,000 and 1,000. It would have been better if this had been done at the start of the project.

Later we analyzed the problem further and found the root cause in the ADF callstacks of the ADF Performance Monitor: on a particular ViewObject instance, bind variables and ViewCriteria were missing at runtime after an ApplicationModule activation.
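When such state is applied programmatically rather than declared on the ViewObject, it is not restored automatically on activation; one common remedy is to passivate and re-apply it yourself. Below is a hedged sketch of that general ADF custom-state pattern in an ApplicationModuleImpl subclass – OrdersAMImpl, OrdersVO, pCustomerId and customerFilter are hypothetical names, not the actual objects from this project:

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

import oracle.jbo.ViewObject;
import oracle.jbo.server.ApplicationModuleImpl;

// Hedged sketch: state that is set programmatically (not declared on the
// ViewObject) is lost on activation unless it is passivated explicitly.
// All names below are hypothetical.
public class OrdersAMImpl extends ApplicationModuleImpl {

    private String customerFilter; // set elsewhere in application code

    @Override
    protected void passivateState(Document doc, Element parent) {
        super.passivateState(doc, parent);
        if (customerFilter != null) {
            // Store the custom value in the passivation snapshot
            Node node = doc.createElement("customerFilter");
            node.appendChild(doc.createTextNode(customerFilter));
            parent.appendChild(node);
        }
    }

    @Override
    protected void activateState(Element elem) {
        super.activateState(elem);
        if (elem == null) {
            return;
        }
        NodeList nodes = elem.getElementsByTagName("customerFilter");
        if (nodes.getLength() > 0 && nodes.item(0).getFirstChild() != null) {
            customerFilter = nodes.item(0).getFirstChild().getNodeValue();
            // Re-apply the bind variable that would otherwise be missing
            ViewObject vo = findViewObject("OrdersVO");
            vo.setNamedWhereClauseParam("pCustomerId", customerFilter);
        }
    }
}
```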

Conclusion

It is an absolute must to set this global row fetch limit to prevent servers from becoming unavailable. In addition, it keeps sessions small, prevents memory over-consumption, and improves performance a lot.
