ADF Performance Monitor – Major New Version 7.0
We are very happy to announce that major new version 7.0 of the ADF Performance Monitor will be available from May 2018. It brings many improvements and major new features. This blog describes one of them: usage statistics and performance metrics of end-user click actions.
A click action is the trigger event of an HTTP request by the browser – an action that a user takes within the UI. These are most often physical clicks of end-users on UI elements such as buttons, links, icons, charts, and tabs. But they can also be scrolling and selection events on tables, rendering of charts, polling events, auto-submits of input fields, and much more. With monitoring by click action you gain insight into the click actions that perform worst, cause the most errors, and are used most frequently. You can see in which layer (database, webservice, application server, network, browser) the total execution time has been spent. And you can monitor SLA compliance of the business functions behind the click actions – from the perspective of the end-user.
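As an illustration of this layer breakdown – using hypothetical names, not the monitor's actual API – the total processing time of a single click-action request can be thought of as the sum of the time spent in each layer:

```java
// Illustrative sketch (not the monitor's actual API): the total
// processing time of one click-action request decomposed per layer.
public class ClickActionTiming {
    final long databaseMs, webserviceMs, appServerMs, networkMs, browserMs;

    ClickActionTiming(long databaseMs, long webserviceMs, long appServerMs,
                      long networkMs, long browserMs) {
        this.databaseMs = databaseMs;
        this.webserviceMs = webserviceMs;
        this.appServerMs = appServerMs;
        this.networkMs = networkMs;
        this.browserMs = browserMs;
    }

    // total execution time as the end-user experiences it
    long totalMs() {
        return databaseMs + webserviceMs + appServerMs + networkMs + browserMs;
    }

    public static void main(String[] args) {
        // e.g. a fetch event on a table with a heavy browser load time
        ClickActionTiming fetch = new ClickActionTiming(400, 0, 250, 100, 350);
        System.out.println("total = " + fetch.totalMs() + " ms"); // total = 1100 ms
    }
}
```

This is only a mental model; the monitor collects these per-layer timings automatically for every request.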
Worst Performing Click Actions Overview
The ADF Performance Monitor has a new overview of the worst performing click actions – based on ADF click history, ordered by total (sum) processing time. The overview shows:
- Component ID (ADF Faces component ID that started the request)
- Component Type (ADF Faces component Java Class)
- Display name (label/text if present on component)
- Event type (action, query, fetch, valueChange, selection, popupRemove, dvtImageView, etc.)
- Total (sum) processing time (split by time spent in database, webservice, application server, network, browser)
- AVG Server processing time
- AVG End-user time (exactly as the end-user experiences it in the browser)
- Total requests (split by SLA: error rate, normal, slow, and very slow requests)
Let’s analyze the top 3 in this click actions overview:
- The component with the ID loc_t1 consumes the most total (sum) execution time (1.8 minutes, around 110 seconds). We see that the component's Java class is RichTable, displayed as LocationsTable, and that the event is a fetch event. All 13 HTTP requests are very slow (red), and almost one third of the request time is spent in the browser (grey). This should be a trigger to investigate further.
- The component with the ID qryEmp consumes the second-most total (sum) time (1.8 minutes, around 110 seconds). We see that the component's Java class is RichQuickQuery, with the label value Search, and that the event type is a query event. We can see that 7 out of 8 requests have been very slow (red), and that nearly half of the execution time has been spent in the database.
- The component with the ID DepartmentSaveButton, Java class RichCommandButton, label Save, and an action event, apparently has a big problem. We can see that there are ten errors (!) related to this component. This should be a trigger to investigate further immediately. We see that most of the time related to this component is spent in the database (yellow).
It is useful to take time during the development phase to give ‘click action components’ – buttons, links, icons, menu tabs, charts, input fields with auto-submit, etc. – a unique, recognizable name across the whole application.
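For example, in an ADF Faces page the component id attribute is what ends up in the click actions overview, so a meaningful value pays off. The fragment below is illustrative only (the bindings and ids are assumptions, not from the monitor's documentation):

```xml
<!-- Illustrative ADF Faces fragment: unique, recognizable component ids
     make the click-action overview readable at a glance. -->
<af:button id="DepartmentSaveButton" text="Save"
           actionListener="#{bindings.Commit.execute}"/>
<af:table id="LocationsTable" var="row"
          value="#{bindings.LocationsView.collectionModel}"/>
```

A generic auto-generated id such as b12 or t3 tells you nothing in the overview; DepartmentSaveButton immediately identifies the business function.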
Get Insight Into Which Layer Time Is Spent
To see at a glance in which layer the processing time of click actions has been spent, the monitor shows a status-meter gauge: database (yellow), webservice (pink), application server (blue), network (purple), and browser load time (grey). It can also show the exact processing time spent in these layers:
Usage and SLA Metrics
To gain insight into usage metrics, the monitor shows a status-meter gauge for the total requests started by each click action, split by SLA: error rate (black), normal (green), slow (yellow), and very slow (red) requests. It can also show the exact numbers. What is considered normal, slow, and very slow is configurable – according to your SLA.
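A minimal sketch of how such SLA classification could work – the threshold values here are assumptions for illustration; in the monitor they are configurable:

```java
// Illustrative sketch of SLA classification per request. The real
// thresholds are configurable in the monitor; these values and names
// are assumptions, not the product's actual API.
public class SlaClassifier {
    static final long SLOW_MS = 2000;       // assumed "slow" threshold
    static final long VERY_SLOW_MS = 5000;  // assumed "very slow" threshold

    static String classify(long serverMs, boolean error) {
        if (error) return "error";                        // black
        if (serverMs >= VERY_SLOW_MS) return "very slow"; // red
        if (serverMs >= SLOW_MS) return "slow";           // yellow
        return "normal";                                  // green
    }

    public static void main(String[] args) {
        System.out.println(classify(800, false));  // normal
        System.out.println(classify(3000, false)); // slow
        System.out.println(classify(6000, false)); // very slow
        System.out.println(classify(100, true));   // error
    }
}
```

Tuning these thresholds to your SLA determines how the green/yellow/red split in the gauge is computed.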
End-User Usage Statistics
There is more – you can drill down to specific end-users. This is handy, for example, when an end-user calls the support team with an urgent problem and you need metrics for this person immediately. Select a user ID from the select list (in this case Klaas), and the overview shows the click actions of this end-user:
Drill Down to HTTP request Occurrences
There is even more – you can drill down to all the HTTP request occurrences of a click action, for further analysis and to inspect ADF callstacks. You can analyze all the individual HTTP requests that were started by the same component ID – for example, all 13 very slow requests of the LocationsTable with ID loc_t1, where the event is a fetch. We can see that nearly all of these requests have a slow browser load time (grey), and that one request spent a lot of time in the database (yellow):
Export to Excel or CSV File
You can export all overviews of the monitor to an Excel or CSV file. This way you can save them for later analysis, or do trend analysis after new releases to check whether performance has improved:
Monitor Click Actions During Development
In the development phase the ADF Performance Monitor is used to detect performance problems during development; it prints an ADF callstack/snapshot to JDeveloper’s console log with the key ADF executions, organized by the Fusion lifecycle. Now it also prints the click action that started the HTTP request:
Version 7.0 available in May 2018
Version 7.0 will be available from May 2018. See the full list of exciting new features by version on our product page.