
Posted: 25 August 2022

How to do a Bad Trial

Companies frequently ask for our seven-day trial to try out the capabilities of the ADF Performance Monitor on their own ADF application. The trial is intended only for companies that are interested in purchasing an ADFPM license. We have often had trials where everything that could go wrong did go wrong, with disappointing results. In this blog I describe what frequently went wrong and how to prevent it.

(Read more..)



Posted: 21 June 2022

New Whitepaper Published

We are happy to announce a new whitepaper on the ADF Performance Monitor. This blog publishes the whitepaper, which gives more information about the architecture, features and implementation of the product. It has been updated with the many features of our new major version 9.5. Recently we also made a quick introduction video on the product.

(Read more..)



Posted: 2 December 2021

Thread Wait and Blocked Time

Last week a new version of the ADF Performance Monitor became available – version 9.5.

In this blog I will write about one of the new features: thread wait and thread blocked time of requests. Sometimes we cannot explain poor performance, disruptions or hiccups. If we dive into the world of Java threads, we often can. Some threads may have been waiting on a resource or been blocked, or JVM garbage collection may have run during the request (freezing all threads). We can now see all of this in the monitor for each HTTP request in detail, giving much more insight into time gaps that were previously hard to explain.
(Read more..)
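The monitor collects these timings for each request automatically; for readers who want to experiment with the underlying JVM facility, the standard java.lang.management API exposes per-thread blocked and waited time. A minimal standalone sketch (my own illustration, not the monitor's implementation):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadContentionSample {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // Contention monitoring is disabled by default; without it,
        // blocked/waited times are reported as -1.
        if (threads.isThreadContentionMonitoringSupported()) {
            threads.setThreadContentionMonitoringEnabled(true);
        }

        long id = Thread.currentThread().getId();
        ThreadInfo before = threads.getThreadInfo(id);

        // ... the HTTP request (or any other unit of work) would run here ...
        Thread.sleep(100); // stand-in for real work

        ThreadInfo after = threads.getThreadInfo(id);
        // The JMX values are cumulative per thread, so a per-request figure is a delta.
        System.out.println("Blocked (ms): " + (after.getBlockedTime() - before.getBlockedTime()));
        System.out.println("Waited  (ms): " + (after.getWaitedTime() - before.getWaitedTime()));
    }
}
```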



Posted: 15 July 2020

Major New Version 9.0 (Part 2)

Last week, in part 1, I blogged about our major new version of the ADF Performance Monitor – version 9.0. That post was about monitoring the CPU load of the JVM process and of the whole underlying operating system, the total used and free physical (RAM) memory of the system, and the Linux load averages, which provide an excellent view of the system load.

This blog (part 2) describes more new features. The CPU execution time of individual HTTP requests and click actions is now available. "Which request/click action in the application is responsible for burning that CPU?" – that question you can now answer with the monitor. The monitor gives a clear indication of how expensive certain HTTP requests and click actions are in terms of CPU cost. Furthermore, we added browser (user-agent) metrics for each request, and we improved the ADF call stacks (a snapshot that shows which ADF method caused other methods to execute, organized by the sequence of their execution and their execution times).
(Read more..)
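Per-request CPU time can be measured with the standard ThreadMXBean JVM facility. Below is a minimal, hypothetical servlet-filter sketch that illustrates the idea; it is my own illustration, not the monitor's implementation:

```java
import java.io.IOException;
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical filter (illustration only): measures the CPU time burned by the
// thread that handles the request, as opposed to its elapsed (wall-clock) time.
public class RequestCpuTimeFilter implements Filter {

    private final ThreadMXBean threads = ManagementFactory.getThreadMXBean();

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // getCurrentThreadCpuTime() returns nanoseconds, or -1 if CPU time
        // measurement is disabled on this JVM.
        long cpuBefore = threads.getCurrentThreadCpuTime();
        try {
            chain.doFilter(request, response);
        } finally {
            long cpuMillis = (threads.getCurrentThreadCpuTime() - cpuBefore) / 1_000_000;
            System.out.println("Request CPU time: " + cpuMillis + " ms");
        }
    }

    @Override
    public void destroy() {
    }
}
```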



Posted: 2 April 2020

Error Diagnostics

Application errors are often hard to track down, or take a lot of time to resolve. When you are suffering from errors and lack clarity about when and why they happen, you want useful error diagnostics for analysis.

The ADF Performance Monitor automatically captures detailed diagnostics for each and every error/exception occurrence. You can view your errors to see the highest priority issues your team should focus on. This blog shows the renewed error overview of our newest version of the ADF Performance Monitor – with real production metrics.

(Read more..)



Posted: 7 January 2019

Performance Improvements and Insight at Intris

Intris is the leading Belgian provider of freight forwarding, customs and warehousing management solutions. Headquartered in Antwerp, Intris provides its integrated software and cloud-based solutions to logistics services providers in Belgium and the Netherlands.

Ben Rombouts is Chief Operating Officer at Intris. Recently he wrote a detailed review of the ADF Performance Monitor – a tool Intris uses to monitor the performance of their large Oracle ADF application.

What the ADF Performance Monitor is used for, and how

The ADF Performance Monitor is used within our development team as an extra quality check when building new functionality. After developing the code, the developers carry out their test scenarios and check the results based on the metrics generated by the tool. With this, non-performing queries are instantly removed, and we get better insight into where we need to work on additional performance improvements.

Since our standard application consists of several modules, our customers do not use all functionalities in the same way, or equally frequently. That is why the tool is also used for many LIVE customers in production, considering the following parameters:

Our account managers use the data in different ways and rely mainly on the dashboard:

1 – Average Response Time

General information about the average response time gives a correct indication of the performance during steering committee meetings. Previously there was much more subjectivity here (along the lines of "every action takes seconds in your application"). This has ensured that those discussions are over and that we can focus on the real issues.

2 – Errors that are reported

These are split into genuine technical errors and errors of a more functional nature. This also makes it clear that some users keep making the same mistakes, so there is a need for additional training. The technical errors are turned into issues that are passed on to the development team and, depending on their importance, included in new releases.

3 – Discussions about what the performance issues are related to

Since we enable the tool for different clients on different platforms, we can also compare across environments. For example, we can see that the database time is always fairly constant, but that there are variations in network and browser time. These can then be raised with the customer's system administrator.

4 – Click Actions

From the ADF Click Actions overview we also get very useful information about the specific use of our application:

This makes it much easier to focus on the real problems and to report clearly to the customer why we focus on certain matters and give other things a lower priority.

5 – Addressing Technical Problems

At regular intervals we also go through several environments with a senior developer to look for technical problems that can be improved in the application. For example, we sometimes notice at a customer that certain actions take longer and longer, which points to a problem in the queries. Other customers do not have problems with this yet, for example because they have less data, but they will not run into this type of problem in the future because we can proactively remove it from the application.

How the ADF Performance Monitor helped

1 – View things in an objective way

The tool mainly helped us to view things in an objective way. For example, some actions in the application can take quite a long time for an end-user but are only executed two or three times a week. If we put this in perspective against actions that are carried out 100 times a day, it becomes much clearer where you need to focus.

2 – Quickly Troubleshoot problems

When customers report certain errors via our support, we can consult logging much faster because we can see very quickly which actions were performed by which user at that specific moment.

3 – ADFBC Memory Overview

From the ADFBC memory overview you can quickly find out where there are problems in queries. These are issues that customers sometimes do not notice, but where you can prevent problems in a proactive way.

4 – Objective Insight in Use of the Application

The tool also gave us a much clearer and more objective insight into the use of the application. This is rather a 'side effect' of using the tool, but it gives a quick and clear overview when preparing reports for steering committees.

How the ADF Performance Monitor saved a lot of time (and money)

Expressing this in time/money is quite difficult, but you can safely say that you can save a lot of time in the following areas:

Read all our customer reviews on our reviews page.




Posted: 7 August 2018

ADF Performance Tuning: A Field Report

Last week I did an extensive performance analysis / health check on a large ADF project with the newest version of our ADF Performance Monitor product. In this performance assessment I focused at a high level on the most important performance bottlenecks. We could see in the ADF Performance Monitor that end-users were experiencing very slow page load times; they were waiting much longer than necessary. This ADF application needed attention; it could run more efficiently, as nearly all ADF applications can. In this blog I describe some of my findings, which may be interesting for other ADF projects as well.

Complete overview

The first thing I always do is configure the ADF Performance Monitor on all WebLogic managed servers (in this case four) to get a complete overview of the performance:

In this case a typical daily performance summary looked like this (top left section):

What is already striking here is that the average total time end-users need to wait (0.57 sec) is more than double the average processing time of the application server (0.25 sec)!

Problem 1: Very Slow Browser Load Time

The chart at the bottom right explains this. In this chart we see at a glance in which layer processing time has been spent: database (yellow), webservice (pink), application server (blue), network (purple), and browser load time (grey).

More than one third of the chart is grey, meaning that more than one third of the processing time is spent in the browser! This is the time the browser spends, after receiving the response from the server, building the DOM tree and rendering/loading the content. Also, the purple color representing the time spent in the network (HTTP request network time, HTTP response network time) is relatively high: around one sixth of a request on average. This is far more than in other ADF projects.

This is the biggest 'bottleneck'. It turned out that Internet Explorer 11 was the current (and only installed) browser of all their end-users, installed on their Citrix workstations. Web applications in general, and ADF applications in particular, perform very poorly in Internet Explorer (regardless of the version), because of its inefficient JavaScript engine and very slow browser load time. My first recommendation: install and use Google Chrome or Firefox, as these are very performance-friendly browsers with a very fast browser load time. My estimate is that this will improve the performance in general by at least 25% for this project. Read more here on browser load time in ADF apps. In general it is good practice to encourage end-users to install and use Google Chrome or Firefox for ADF applications.

Click Actions Analysis

The next analysis was an ADF click action analysis. A click action is the trigger event that starts an HTTP request from the browser, caused by an action a user takes within the UI. These are most often physical clicks of end-users on UI elements such as buttons, links, icons, charts, and tabs, but they can also be scrolling and selection events on tables, rendering of charts, polling events, auto-submits of input fields and much more. With monitoring by click action you get insight into the click actions that have the worst performance.

I go to this overview very frequently to see which click action has the worst performance (is responsible for the most total processing time, and thus where we can gain the most in terms of performance):

We see here that a poll event (ADF Faces component of type oracle.adf.RichPoll, with id 'p1') is by far responsible for the most total processing time (!). On this day there were in total 106,855 poll requests. That is more than one third of all HTTP requests (296,435)!

Problem 2: Far Too Often Polling

The application implemented a mechanism to ensure that an end-user could be logged in at most once. The way this was implemented was very bad for the server load: every minute a poll (HTTP request) was sent to the WebLogic server, which called Java code that updated a database table. It also had the side effect that many end-user sessions were kept alive on the server for many hours (even for the many inactive users who never closed their browser window). The poll was responsible for the most time-consuming action in the application in terms of server processing time. For now, as we could not change this whole functionality quickly, we reduced the number of calls to one third: we kept the same polling mechanism but now poll every three minutes, avoiding two thirds of all the polling and reducing the server load as well (a sketch of the change is shown below). Of course, we should later find an alternative solution.
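As an illustration of how small the change is (hypothetical names, not the project's actual code): if the af:poll component's interval attribute is bound to a managed-bean property, the polling frequency becomes a one-line change.

```java
// Hypothetical managed bean; the af:poll component's interval attribute would be
// bound to #{pollSettings.intervalMillis} in the page.
public class PollSettings {

    // 3 minutes instead of 1 minute: two thirds fewer poll requests hit the server.
    private static final int POLL_INTERVAL_MILLIS = 3 * 60 * 1000;

    public int getIntervalMillis() {
        return POLL_INTERVAL_MILLIS;
    }
}
```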

We saw that the poll caused many very slow HTTP requests that included very slow database queries, frequent expensive ApplicationModule pooling, and other slow executions because pages were being restored after passivation. It was responsible for one third of all the processing time of the most frequent actions:

Problem 3: Memory Overconsumption

The third problem – a typical bottleneck in ADF – was an increase in response time (and decline in performance) because of huge memory usage. The cause of this huge memory usage is that the application data retrieved from the database into memory is not properly limited; too many rows (thousands) are fetched. To make matters worse, these rows and their attributes were retained in the session for an unnecessarily long time (by very frequent, expensive ApplicationModule pooling). In the ADFBC Memory Analyzer we can see the total number of rows fetched by ViewObjects at runtime, and the maximum number of fetched rows. In this case we saw many ViewObjects fetching thousands of rows during an HTTP request:

The solution to this main problem lies in reducing the size of sessions by decreasing the amount of data loaded and held in the session (setting maximum fetch sizes, adding bind parameters, fixing ViewCriteria). We have already identified all the locations in the source code and solved the most important ones on this list (a sketch of the fetch-size approach follows below). Read more on this subject here.
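As an illustration of the fetch-size part of this fix (hypothetical class name, not the project's actual code): a ViewObject's fetching can be capped programmatically in its implementation class, alongside the declarative fetch limit in the ViewObject's tuning settings.

```java
import oracle.jbo.server.ViewObjectImpl;

// Hypothetical ViewObject implementation class; the name and the limit are illustrative.
public class EmployeesVOImpl extends ViewObjectImpl {

    @Override
    protected void create() {
        super.create();
        // Never fetch more than 500 rows into memory for this ViewObject,
        // regardless of how many rows the query would return.
        setMaxFetchSize(500);
    }
}
```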

Problem 4: Too Frequent 'Expensive' ApplicationModule Passivations & Activations

As you know, ApplicationModule pooling is a mechanism in ADF that enables multiple users to share several ApplicationModule instances. It involves saving session state data to, and retrieving it from, the database or a file. This mechanism is provided to make the application scalable and becomes very important under high load with many concurrent users. The default values of the ApplicationModule pools are far too small, especially if you have more than 10 end-users.

I think this is one of the most important things to 'tune' in ADF applications in general. Activations and passivations are the root cause of many very slow click actions, and of errors after incomplete activations. In my opinion it is better to try to turn off the whole ApplicationModule pooling mechanism – as far as that is possible.

To do this we increased the size of all the ApplicationModule pools. In this way we make the application more scalable and avoid very expensive passivations and activations. We increased the following parameters, depending on the usage (typical settings are sketched below).
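The concrete values from this assessment are not repeated here; as an illustration only, these are the standard ADF BC pool properties that are typically raised. They are normally configured in bc4j.xcfg or as -Djbo.* JVM arguments; the values below are assumptions, not the project's actual settings.

```java
// Illustrative values only; the property names are the standard ADF BC pool
// parameters. Setting them via System.setProperty is just a runnable way to
// list them – in a real deployment use bc4j.xcfg or -D JVM arguments.
public class AmPoolSettingsExample {
    public static void main(String[] args) {
        // Number of AM instances created when the pool is initialized.
        System.setProperty("jbo.ampool.initpoolsize", "50");
        // The pool monitor does not shrink the pool below this number of instances.
        System.setProperty("jbo.ampool.minavailablesize", "50");
        // Ideal maximum number of available (idle) instances kept in the pool.
        System.setProperty("jbo.ampool.maxavailablesize", "200");
        // Number of instances the pool tries to keep session-affine; above this
        // threshold instances are recycled, which triggers passivation/activation.
        System.setProperty("jbo.recyclethreshold", "200");
    }
}
```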

Further:

If you want to know more about ApplicationModule pools and tuning, watch the video I made on ADF performance tuning a few years ago (a big part of the video is about pooling parameters).

Problem 5: Too many UIShell Tabs Could be Opened Simultaneously

This was a UIShell application. To avoid resource (and memory) overconsumption, we reduced the maximum number of open tabs from 10 to 5. This reduces the resource and memory consumption of the server and forces end-users to close unused tabs (freeing up resources).

Conclusion

We found many other bottlenecks as well, but by addressing these five big bottlenecks we have already put a smile on the faces of many end-users!



Posted: 9 July 2018

Monitoring with Percentiles

What is the best metric in performance monitoring – averages or percentiles? Statistically speaking there are many methods to determine just how good an overall experience your application is providing. Averages are widely used; they are easy to understand and calculate, but they can be misleading.

This blog is about percentiles. Percentiles are part of our recent new 7.0 version of the ADF Performance Monitor. I will explain what percentiles are and how they can be used to better understand your ADF application's performance. Percentiles, compared with averages, tell us how consistent our application response times are. They make good approximations and can be used for trend analysis, SLA monitoring and day-to-day performance evaluation and troubleshooting. (Read more..)
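To make the idea concrete: one simple way to compute a percentile is the nearest-rank method – sort the observed response times and take the value at the corresponding rank. A minimal sketch with made-up sample data (not the monitor's own implementation):

```java
import java.util.Arrays;

public class PercentileExample {

    // Nearest-rank percentile: the value below which roughly p percent of samples fall.
    static long percentile(long[] responseTimesMillis, double p) {
        long[] sorted = responseTimesMillis.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);
        return sorted[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        long[] samples = {120, 150, 160, 180, 200, 240, 300, 450, 800, 3000};
        // The average (560 ms) is dragged up by one outlier...
        System.out.println("avg = " + Arrays.stream(samples).average().getAsDouble() + " ms");
        // ...while the median and 90th percentile describe what most users actually see.
        System.out.println("p50 = " + percentile(samples, 50) + " ms");
        System.out.println("p90 = " + percentile(samples, 90) + " ms");
    }
}
```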



Posted: 23 April 2018

Major New Version 7.0

We are very happy to announce that a major new version 7.0 of the ADF Performance Monitor will be available from May 2018. There are many improvements and major new features. This blog describes one of the new features: usage statistics and performance metrics of end-user click actions.

A click action is the trigger event that starts an HTTP request from the browser, caused by an action a user takes within the UI. These are most often physical clicks of end-users on UI elements such as buttons, links, icons, charts, and tabs, but they can also be scrolling and selection events on tables, rendering of charts, polling events, auto-submits of input fields and much more. With monitoring by click action you get insight into, for example, the click actions that have the worst performance, cause the most errors, or are used most frequently. You can see in which layer (database, webservice, application server, network, browser) the total execution time has been spent, and you can monitor the SLAs of the business functions behind the click actions – from the perspective of the end-user. (Read more..)

