
Posted: 25 August 2022

How to do a Bad Trial

Companies frequently ask for our seven-day trial to try out the capabilities of the ADF Performance Monitor on their ADF application. This trial is meant only for companies that are interested in purchasing an ADFPM license. We have often had trials where everything that could go wrong did go wrong – with disappointing results. In this blog I will describe what frequently went wrong and how to prevent it.

(Read more..)


Posted: 29 March 2022

New Introduction Video

We have a new introduction video of the ADF Performance Monitor (3:40 minutes)! It gives a quick introduction to the product.

(Read more..)

Posted: 2 December 2021

Thread Wait and Blocked Time

Last week we had a new version of the ADF Performance Monitor available – version 9.5.

In this blog I will write about one of the new features: thread wait and thread blocked time of requests. Sometimes we cannot explain poor performance, disruptions, or hiccups – but if we dive into the world of Java threads, we often can. It may be that some threads were waiting on resources or were being blocked, or that JVM garbage collection (which freezes all threads) ran during the request. We can now see all of this in the monitor, in detail, for each HTTP request. We have much more insight into time gaps that were sometimes hard to explain before.
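For readers who want to experiment with these JVM-level metrics themselves: the standard java.lang.management API exposes per-thread blocked and waited times, which is the raw material behind metrics like these. The sketch below is only illustrative and is not the monitor's own code; class and method names are mine.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadContention {

    // Returns {blockedMs, waitedMs} for a thread, or null if the thread is gone.
    // Contention monitoring is off by default and must be enabled first.
    public static long[] blockedAndWaited(long threadId) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        if (bean.isThreadContentionMonitoringSupported()
                && !bean.isThreadContentionMonitoringEnabled()) {
            bean.setThreadContentionMonitoringEnabled(true);
        }
        ThreadInfo info = bean.getThreadInfo(threadId);
        if (info == null) {
            return null;
        }
        // -1 means contention monitoring is not enabled/supported on this JVM.
        return new long[] { info.getBlockedTime(), info.getWaitedTime() };
    }

    public static void main(String[] args) {
        long[] t = blockedAndWaited(Thread.currentThread().getId());
        System.out.println("blocked ms: " + t[0] + ", waited ms: " + t[1]);
    }
}
```

A monitoring agent would sample these values at the start and end of a request and report the deltas.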
(Read more..)


Posted: 29 September 2021

New Version 9.5

We again have a major new version of the ADF Performance Monitor available – version 9.5! We have added many valuable new features and improvements. Many overview screens have been given a facelift and new charts. I will write about them in several blogs.

This blog is about one of those new features: automatic SLA and health KPI warnings. The monitor automatically interprets the metrics and shows warnings if the ADF application does not meet the configured SLA thresholds (KPIs), or if configured JVM and system health thresholds are not met – such as JVM garbage collection, JVM CPU load, system CPU load, OS memory, database, webservice, application server, network, and browser. From now on, interpreting the metrics is even faster and simpler. You do not have to be a performance expert/engineer – the monitor already shows the (type of) problems!
(Read more..)


Posted: 15 July 2020

Major New Version 9.0 (Part 2)

Last week, in part 1, I blogged about our major new version of the ADF Performance Monitor – version 9.0. It was about monitoring the CPU load of the JVM process and of the whole underlying operating system, the total used and free physical (RAM) memory of the whole system, and the Linux load averages that provide an excellent view of the system load.

This blog (part 2) describes more new features. The CPU execution time of individual HTTP requests and click actions is now available. “Which request/click action in the application is responsible for burning that CPU?” – that question you can now answer with the monitor. The monitor gives a clear indication of how expensive certain HTTP requests and click actions are in terms of CPU cost. Furthermore, we added browser (user-agent) metrics for each request, and we improved the ADF callstacks (a snapshot that gives visibility into which ADF method caused other methods to execute, organized by the sequence of their execution and their execution times).
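The distinction between CPU time and wall-clock time that makes this metric useful can be demonstrated with the standard JDK alone. A minimal illustrative sketch (class and method names are mine, not the monitor's API):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class RequestCpuTimer {

    private static final ThreadMXBean BEAN = ManagementFactory.getThreadMXBean();

    // Measures the CPU time (not wall-clock time) the current thread
    // spends executing a task. A sleeping or blocked task burns ~0 CPU ms
    // even though its wall-clock time may be long.
    public static long cpuMillis(Runnable task) {
        long start = BEAN.getCurrentThreadCpuTime(); // nanoseconds
        task.run();
        return (BEAN.getCurrentThreadCpuTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long ms = cpuMillis(() -> {
            double x = 0;
            for (int i = 0; i < 10_000_000; i++) x += Math.sqrt(i);
        });
        System.out.println("CPU ms burned: " + ms);
    }
}
```

In a server, the same measurement would be taken around the handling of one HTTP request on its worker thread.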
(Read more..)


Posted: 8 July 2020

Major New Version 9.0 (Part 1)

I’m very excited to announce that we have a major new version of the ADF Performance Monitor – version 9.0!

We have added many valuable new features: metrics that can detect and help explain poor performance, disruptions, and hiccups, and that help in troubleshooting ADF applications. Among them are operating system metrics: the CPU usage of the ADF application, the total CPU usage of the whole underlying operating system, the total used and free physical (RAM) memory of the whole system, and the Linux load averages. A high CPU usage rate and high memory usage may indicate a poorly tuned or designed application; optimizing the application can lower CPU utilization. Generic APM tools have these kinds of metrics too in some way, but the combination of system metrics with the ADF-specific metrics of the ADF Performance Monitor makes it even easier to correlate performance problems.

Another reason to pay attention to system metrics is that nowadays more and more applications are deployed in the cloud. Very likely there will be shared virtual machines and resources (CPU, memory, network). Applications and processes can influence each other if other processes frequently consume a large share of the available CPU or memory capacity.
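As a taste of what the operating system already exposes to a Java process, the standard OperatingSystemMXBean reports the 1-minute load average and processor count. The monitor collects far more than this; the snippet is only illustrative, and the class name is mine:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class SystemLoad {

    // 1-minute load average of the whole machine; returns -1.0 on platforms
    // (e.g. Windows) where it is not available.
    public static double loadAverage() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        return os.getSystemLoadAverage();
    }

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("available processors: " + cpus);
        // Rule of thumb on Linux: a load average persistently above the
        // number of processors means the system is saturated.
        System.out.println("1-min load average: " + loadAverage());
    }
}
```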

This blog (part 1) describes the first part of these new features. Part 2 describes the CPU execution time of individual HTTP requests and click actions. It answers the question: “Which request/click action in the application is responsible for burning that CPU?” (Read more..)


Posted: 19 March 2019

New Server Infrastructure Halves Server Process Time

Recently I was analyzing and troubleshooting the performance of an ADF application. Much had already been improved before I arrived. Thanks to a very recent new hardware/infrastructure environment, the server and database process time was nearly 50% faster after migration. In this blog I want to show you the impact it had on the total server process time of HTTP requests. Such a sudden improvement is visible at a glance in the ADF Performance Monitor, and in the week and month trend analysis overviews. Maybe you need to investigate your hardware/infrastructure as well, and consider an upgrade: if your hardware/infrastructure is relatively old, if your machines are full, or if virtualization software is not implemented efficiently.

(Read more..)


Posted: 1 October 2018

Top 10 Typical Bottlenecks

In this blog I will discuss the top 10 typical performance problems I generally see in Oracle ADF projects – and their solutions.

Top 10 Typical Bottlenecks – Illustrated by ADF Callstacks

I will illustrate the top 10 typical bottlenecks with ‘ADF callstacks’ – a feature of the ADF Performance Monitor. An ADF callstack, a kind of snapshot of the ADF framework, gives visibility into which ADF methods caused other methods/operations to be executed, organized by the sequence of their execution. A complete breakdown of the HTTP request is shown by actions in the ADF framework (Fusion lifecycle phases, model (BindingContainer) and ADF BC executions, start and end of taskflows, etc.), with elapsed times in milliseconds and a view of what happened when. The parts of the ADF request that consume a lot of time are highlighted and indicated with an alert signal.

Nr 1: Slow ViewObject SQL Queries

The number one bottleneck is – as in many web applications – SQL queries of ViewObjects to the database. This can be caused by many things: the SQL query is written in a suboptimal way, the data model is not efficient, the datasets in the database are far too large, indexes are lacking, indexes are not working as expected, etc.

The first step is to get visibility and see which SQL queries are slow – and with what runtime parameter values:

This blog describes in detail how you can instrument your ViewObject to get visibility into slow ViewObject queries.

It is always good to be able to analyze the runtime-generated SQL (including applied ViewCriteria and runtime bind parameter values) that is executed in the database, so that you can reproduce problematic slow queries:
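The usual instrumentation pattern is to override executeQueryForCollection() in a custom ViewObjectImpl base class, so every query execution passes through your code. The sketch below is illustrative only – the threshold and logging are mine, and the ADF Performance Monitor uses its own instrumentation:

```java
// Framework-dependent sketch: requires the ADF BC libraries on the classpath.
public class MyViewObjectImpl extends oracle.jbo.server.ViewObjectImpl {

    @Override
    protected void executeQueryForCollection(Object qc, Object[] params,
                                             int noUserParams) {
        long start = System.currentTimeMillis();
        super.executeQueryForCollection(qc, params, noUserParams);
        long elapsed = System.currentTimeMillis() - start;
        if (elapsed > 1000) { // log only queries slower than 1 second (arbitrary)
            System.out.println("Slow query (" + elapsed + " ms) on " + getName()
                    + ": " + getQuery()
                    + " bind values: " + java.util.Arrays.deepToString(params));
        }
    }
}
```

Configure this class as the base class of your ViewObjects (or generate it via the project's base-class settings) so the logging applies application-wide.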

Nr 2: Too Frequent and Slow ApplicationModule Pooling

A very typical and big bottleneck in ADF is too frequent and too slow ApplicationModule pooling. ApplicationModule pooling is a mechanism in ADF that enables multiple users to share several ApplicationModule instances. It involves saving and retrieving session state data to and from the database or a file. This mechanism is provided to make the application scalable and becomes very important under high load with many concurrent users. The default values of the ApplicationModule pools are far too small, especially if you have more than 10 end-users.

I think this is one of the most important things to tune in ADF applications in general. Activations and passivations are the root cause of many very slow click actions, and of errors after incomplete activations. In general, in my experience it is better to try to turn off the whole ApplicationModule pooling mechanism – as far as possible.

Look at an example – a callstack showing what can happen when passivation/activation is not configured well, and too many ViewObject rows and attributes are held in memory and passivated/activated:

We can see in the callstack:

To avoid the expensive passivations/activations, we can increase the size of the ApplicationModule pools. In this way we make the application more scalable and avoid the passivations/activations. I usually increase the following parameters – depending on the usage:


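As an indication, these are the kinds of pool properties involved (set in bc4j.xcfg, or overridden as -D system properties). The values below are illustrative for roughly 100 concurrent users and must be tuned to your own load:

```properties
# Pre-create instances at startup so the first users do not pay the creation cost
jbo.ampool.initpoolsize=10
# Keep enough instances alive for peak concurrent usage
jbo.ampool.maxavailablesize=120
# Recycling (and thus passivation) only starts above this number of instances
jbo.recyclethreshold=120
# Only disable failover passivation if you accept losing state on a node failure
jbo.dofailover=false
```

With the recycle threshold at the pool maximum and enough instances for all concurrent users, instances are rarely recycled and passivation/activation mostly disappears.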
If you want to know more about ApplicationModule pools and tuning, watch the video I made on ADF performance tuning a few years ago (a big part of the video is about pooling parameters).

Nr 3: Redundant ViewAccessor Processing

The default value of the Row Level Bind Values property on a ViewObject's ViewAccessor is true. Set this value explicitly to false if you have no bind variables that really depend on other attribute values in the row. This property is meant for lookup view objects whose bind variables can have a different value for each row. The first time, the query is really executed; for subsequent rows the framework still calls executeQueryForCollection() on the ViewObject, but internally recognizes that the query has already been executed and does not execute it against the database again.

However, it is still a big inefficiency that the executeQueryForCollection() method is executed and internally processed in the ADF framework for all rows. As we can see in the example below (callstack metrics of the ADF Performance Monitor in the JDeveloper console log), this whole process still takes around 200 extra milliseconds for 35 rows (check the execution timeline on the left).

Now we set it to false:

After we set it to false, there is far less internal processing in the ADF framework. The query is executed once, and the whole ViewAccessor is no longer evaluated for each row. The whole HTTP request is around 200 milliseconds faster:
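Declaratively, this property lives on the view accessor definition in the ViewObject's XML file. A hypothetical fragment (the accessor and ViewObject names are invented for illustration):

```xml
<!-- Lookup accessor whose bind variables do NOT depend on per-row values,
     so row-level bind evaluation can safely be switched off. -->
<ViewAccessor
  Name="JobsLookup"
  ViewObjectName="model.JobsLookupVO"
  RowLevelBinds="false"/>
```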

Nr 4: Memory Overconsumption (Fetching Too Many Database Rows and Attributes)

ADF applications potentially use a lot of JVM memory. Often, the root cause of high memory usage is that the application data retrieved from the database into memory is not properly bounded: too many rows are fetched and held in ADF BC memory. This can lead to memory over-consumption, very long-running JVM garbage collections, a freeze of all current requests, or even OutOfMemoryErrors. To make matters worse, these rows and their attributes are frequently passivated and activated for an unnecessarily long period of time. The solution to this problem is to reduce the size of sessions by decreasing the amount of data loaded and held in the session.

Developing a plan to manage and monitor this fetched data during the whole lifetime of an application is an absolute must – it is indispensable to your performance success. The first step is to measure the current number of rows fetched by the ADF application and determine the appropriate maximum fetch sizes for your data collections. You can measure, in real time and historically, how many rows the ViewObjects load into Java memory with the ADF BC Memory Analyzer. Limit all ViewObjects that regularly fetch more than 500 rows with a maximum fetch size, or use extra bind variables and ViewCriteria to limit the number of rows.
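Declaratively, the cap goes on the ViewObject definition. A hypothetical fragment (the name and value are illustrative; 500 matches the rule of thumb above):

```xml
<!-- Never fetch more than 500 rows into ADF BC memory for this ViewObject,
     regardless of how many rows the query would return. -->
<ViewObject
  Name="EmployeesVO"
  MaxFetchSize="500"
  FetchMode="FETCH_AS_NEEDED"/>
```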

Read more in two of my previous blogs on managing data fetched from the database, and on limiting the memory consumption of an ADF application:

In the screenshot we can see that nearly two million rows (!), to be precise 1,855,223, were loaded into the JVM memory of our server. This loading process took 186,538 milliseconds (186 seconds).

Nr 5: Slow PL/SQL Executions from the ADF Application

Just like SQL queries from ViewObjects, PL/SQL procedures/functions executed from the ADF application can be very slow if not implemented well:

In the screenshot we see a PL/SQL procedure call hr_main.sleep() with a bind parameter value of 4. The whole execution takes 4004 milliseconds (more than 4 seconds).

In this case the PL/SQL procedure/function is the root cause, not the code in the ADF application. The problem must be resolved in the database, and can be anything: suboptimally written SQL queries in the procedure/function, an inefficient data model, datasets in the procedure/function that are far too large, etc.

Nr 6: ‘Too Rich’ Pages Result in Slow Browser Load Time

We shouldn’t make our ADF pages too ‘rich’ – we should limit the number of ADF Faces components to some extent, especially the content of ADF Faces container components. Rendering too many table columns – in combination with sending too many rows to the browser – causes a long browser load time. The same applies to af:listView, af:tree, and af:treeTable components with hundreds of rows, and also to other container components like af:iterator: when too many child components are rendered, the browser needs seconds to do the hard work.

For example: the loading of a table component (lazy loading of an af:table component showing Locations) takes around 1 to 1.5 seconds (grey color represents browser load time):

A page should spend no more than half a second loading in the browser.

Some general tips:


Read more on this subject in one of my previous blogs on slow browser load time.

Nr 7: End-Users on Slow Internet Explorer

Many companies and organizations have complete control over which browser their end-users use, because end-users work in a Citrix environment or on an internal network. Often, Internet Explorer is still the only browser that is installed and allowed.

Web applications in general, and ADF applications in particular, perform very poorly in Internet Explorer (regardless of the version), because of its inefficient JavaScript engine and very slow browser load time. Google Chrome and Firefox are currently the most performance-friendly browsers. This is very well known among developers, yet many companies that do have the possibility to change to Chrome/Firefox insist on Internet Explorer/Edge. My estimate is that changing to Chrome/Firefox will improve performance in general by 20% or more. It is good practice to encourage end-users to install and use Google Chrome or Firefox for ADF applications.

On the next screenshot we can see in which layer processing time has been spent; database (yellow), webservice (pink), application server (blue), network (purple), and browser load time (grey).

In this example from a field report, more than one third of the time spent is grey, meaning that more than one third of the process time is spent in the browser (!). This is the process time spent by the browser, after receiving the response from the server, to build the DOM tree and to render/load the content. It turned out that all end-users were using Internet Explorer 11.

Nr 8: Slow SOAP/REST Webservice Calls

SOAP/REST webservice calls can consume a lot of time as well – depending on the network time and the response time of the webservice request. It is good to have insight into this, especially if the response time is very long:

In the above screenshot we can see three very slow SOAP webservice calls (executed in 4085, 5347, and 15321 (!) milliseconds).

Nr 9: Inefficient ViewObject fetchsize

Many developers do not set the fetchSize on a ViewObject when they create it. The default size of 1 is very inefficient. Take, for example, a page with a table based only on the EmployeesViewObject. With a fetchSize of 1, the ADF application makes a complete roundtrip to the database for each row (very inefficient):

Fetching the 107 employee rows takes 56 milliseconds. Not that much in itself, but it can be done far more efficiently, for example with a fetchSize of 120 (one roundtrip to the database):

Now the fetch action takes only 11 milliseconds (a gain of 45 milliseconds). This seems minimal, but if you do this on each ViewObject, it adds up to much better performance!
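In the ViewObject's Tuning section ('in Batches of') this corresponds to the FetchSize attribute in the XML. A hypothetical fragment; a common rule of thumb is the expected row count plus a small margin, so the framework can detect the end of the result set within the same roundtrip:

```xml
<!-- 107 expected rows plus margin: fetched in a single database roundtrip. -->
<ViewObject
  Name="EmployeesVO"
  FetchMode="FETCH_AS_NEEDED"
  FetchSize="120"/>
```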

Nr 10: ChangeEventPolicy is set to PPR

From ADF release 11g R2 onwards, the default value of the global ChangeEventPolicy property is ppr. This is not good for performance, as this property introduces a lot of overhead: too many iterators and components are automatically refreshed on each HTTP request.

In my opinion it is better to change it to none:

It is better to use manual partialTriggers in the page to refresh components. Be aware that if you change this in an existing application, you need to add the manual partialTriggers and test that everything still works.
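In the page definition (pageDef) this is set per iterator, combined with explicit partialTriggers on the components that really need refreshing. A hypothetical fragment (all ids and names are invented for illustration):

```xml
<!-- pageDef: the iterator no longer refreshes automatically on each request -->
<iterator Binds="EmployeesView" DataControl="AppModuleDataControl"
          id="EmployeesViewIterator" ChangeEventPolicy="none"/>

<!-- page: refresh the table explicitly, only when the search button is pressed -->
<af:table value="#{bindings.EmployeesView.collectionModel}" var="row"
          id="empTable" partialTriggers="::searchBtn"/>
```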


Of course, there are many, many more possible causes of poor performance (a badly configured JVM, a JVM heap that is too small, long-running JVM garbage collections, a slow network, hardware, etc.), but these are the top ten typical bottlenecks we frequently see at companies that use the ADF Performance Monitor.


Posted: 9 July 2018

Monitoring with Percentiles

What is the best metric in performance monitoring – averages or percentiles? Statistically speaking, there are many methods to determine just how good an overall experience your application is providing. Averages are widely used; they are easy to understand and calculate – however, they can be misleading.

This blog is about percentiles. Percentiles are part of our recent new 7.0 version of the ADF Performance Monitor. I will explain what percentiles are and how they can be used to understand your ADF application's performance better. Percentiles, compared with averages, tell us how consistent our application response times are. Percentiles make good approximations and can be used for trend analysis, SLA monitoring, and daily performance evaluation/troubleshooting. (Read more..)
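To make the average-versus-percentile difference concrete, here is a small nearest-rank percentile sketch (illustrative only, not the monitor's implementation). Note how a single slow outlier drags the average far above the median:

```java
import java.util.Arrays;

public class Percentiles {

    // Nearest-rank percentile: the smallest sample such that at least
    // p percent of all samples are less than or equal to it.
    public static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // Hypothetical response times in milliseconds; one request is an outlier.
        double[] responseTimesMs = {120, 150, 180, 200, 250, 300, 450, 800, 1200, 4000};
        // Average = 765 ms, yet half of all requests finished within 250 ms.
        System.out.println("p50 = " + percentile(responseTimesMs, 50)); // 250.0
        System.out.println("p95 = " + percentile(responseTimesMs, 95)); // 4000.0
    }
}
```

The p95/p99 values are what SLA monitoring usually cares about: they describe the experience of the slowest users, which the average hides.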
