LoadRunner and Performance Center Blog

Explore the new online screen enhancements in Performance Center 12.55


hilale, Honored Contributor

This post was written by Roy Sheaffer from Performance Center R&D.


The online screen was one of our main focal points when developing Performance Center 12.55. We wanted to add powerful new capabilities that would make Performance Center a more effective tool for performance testers worldwide. This blog discusses these new features:


Automatic Anomaly Detection

Before discussing this feature, we first need to understand what is meant by the term “anomaly”: something that deviates from what is standard, normal, or expected. Based on this definition, how should we decide what is standard, normal, or expected for a measurement?

In Performance Center 12.55, a continuous weighted statistical analysis is performed for each measurement during the running of a test. This analysis produces a ‘sleeve’, which is then considered the expected range of a measurement.

To create this sleeve, we start by calculating the mean and standard deviation of a measurement. This is done for a learning period of 30 points (90 seconds). Then, we transition to a weighted algorithm to continuously update the ‘weighted mean’ and ‘weighted standard deviation’.

This algorithm gives higher priority to more recent points, and to points that are within the sleeve. This weighting keeps the sleeve flexible enough to change quickly along with the measurement. The sleeve is then centered at the weighted mean, with a width of 6 weighted standard deviations (3 above, and 3 below).
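The exact update rule is not published, but the description above can be sketched roughly as follows. The exponential weighting factor `alpha`, and the halved weight given to points outside the sleeve, are illustrative assumptions, not Performance Center's actual implementation:

```python
def build_sleeve(points, learn_count=30, alpha=0.1, width=3.0):
    """Compute a (lower, upper) sleeve for each point after the learning
    period: a plain mean/std for the first `learn_count` points, then an
    exponentially weighted update (the weighting scheme is an assumption)."""
    learn = points[:learn_count]
    mean = sum(learn) / len(learn)
    var = sum((p - mean) ** 2 for p in learn) / len(learn)
    sleeves = []
    for p in points[learn_count:]:
        std = var ** 0.5
        sleeves.append((mean - width * std, mean + width * std))
        # Recent points get more weight; points outside the sleeve get
        # less (approximated here by halving alpha), so a brief spike
        # does not drag the sleeve away from the measurement.
        w = alpha if abs(p - mean) <= width * std else alpha / 2
        mean = (1 - w) * mean + w * p
        var = (1 - w) * var + w * (p - mean) ** 2
    return sleeves
```

With 30 learning points at a 3-second interval, the first sleeve value appears 90 seconds into the run, matching the learning period described above.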

The output of our algorithm looks something like this:

[Image: sleeve.png]

Now that we have the expected behavior, we still need to decide what constitutes an anomaly. Is the deviation of a single point from the sleeve enough to be considered an anomaly? Would you consider that the above example contains an anomaly?

In the case of the above example, our algorithm will not alert that an anomaly has occurred, because such small deviations are far too common to justify notification. To demonstrate this point, let's look at another example:

[Image: Anomaly.png]

This second example (from the ‘CPU Utilization’ graph) shows a clear anomaly. The CPU is initially between 0 and 20 percent. While there are several deviations from the sleeve, they are too small and brief to be considered significant. Then a sharp spike occurs in which the CPU exceeds 50 percent. (Note: the plot band indicates the area in which the anomaly occurred.)
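The blog does not state the exact rule that separates a brief deviation from a reportable anomaly. One plausible sketch is to flag only runs of consecutive out-of-sleeve points; the `min_run` threshold here is an illustrative assumption:

```python
def find_anomalies(points, sleeves, min_run=3):
    """Return (start, end) index pairs where at least `min_run` consecutive
    points fall outside their (lower, upper) sleeve. The actual threshold
    used by Performance Center is not published; min_run is an assumption."""
    anomalies, run_start = [], None
    for i, (p, (lo, hi)) in enumerate(zip(points, sleeves)):
        outside = p < lo or p > hi
        if outside and run_start is None:
            run_start = i          # a deviation run begins
        elif not outside and run_start is not None:
            if i - run_start >= min_run:
                anomalies.append((run_start, i - 1))
            run_start = None       # run was too short, or was recorded
    if run_start is not None and len(points) - run_start >= min_run:
        anomalies.append((run_start, len(points) - 1))
    return anomalies
```

Under this rule, the single-point deviations in the first example are ignored, while a sustained spike like the CPU example would be reported as one anomaly interval.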

Let’s suppose that a user runs a long test and, after a couple of hours, opens the online screen and discovers that some of the graphs had anomalies, indicated by the following icon: [Image: Icon3.png]

To see the anomalies, the user would have to use the ‘whole load test’ granularity. But in this granularity, the level of detail is very low, and a 30 second spike might not even be noticeable.

To solve this issue and offer additional clarity, we developed the history view.

The History View

For users to really benefit from the anomaly detection feature, it was necessary to create the history view. In this view, the user can inspect the data of any time range during the test. This is much more powerful than the granularity settings in Performance Center 12.53.

To open the history view, click the options icon for a graph and select History.

[Image: history.png]

The control displays the whole load test granularity, and the chart displays the time range that is selected in the control. Notice how you can immediately see when anomalies occurred throughout the load test, as indicated by the plot bands in the control. Furthermore, focusing on them is easy and intuitive and only requires changing the selected time range.

The selection can be changed by:

  1. Clicking the selection area and dragging it. This method changes the time range while maintaining the granularity.
  2. Clicking the left/right border of the selection and dragging it. This method will make the selection larger/smaller, and will therefore change the granularity.
  3. Creating a new selection by clicking an area in the control that is not within the existing selection, moving the mouse, and then releasing the mouse button.
  4. Clicking an anomaly plot band. This will select a time range that contains the anomaly.

Note: The highest granularity of the data is three minutes. You can select an area smaller than three minutes, but this will only result in fewer points displayed, not in a higher granularity.

While the history view was created with the anomaly detection feature in mind, it is a useful feature in its own right. Even users that have no interest in our automatic anomaly detection can still benefit from the history view.

Currently, the existing granularity settings are not flexible. As previously mentioned, viewing the first minutes of a test that was started several hours ago would require setting the granularity level to the whole load test. However, the degree of detail at such a granularity is very low. With the history view, users can now view the first minutes of the test at high granularity.

To allow the user to view any time range of any measurement, it was necessary to create a new database. This database is created at the start of a run, and is filled with data points throughout the duration of the run.

Now that such a database exists, what else can we do with it?  This leads us to our third and final feature.


The Offline Screen

The offline screen allows you to view the data of runs that have finished.

[Image: Offline.png]

The offline screen is very similar to the online screen, but it displays runs that have completed. All of the graph-related actions that can be performed in the online screen are also available in the offline screen. For example, merged graphs can be created and measurements can be selected as favorites. Note: merged graphs that were created and settings that were applied online will also be available offline.

There are also several differences, due to the fact that the run that is being viewed has already finished. For example, the graphs in the offline screen automatically appear in the history view, and the runtime view is not available. And of course, none of the Vuser/load generator/Controller-related actions are available.

The offline screen consists of the following parts:

  1. A summary section that provides general data about the run.
  2. A global control that allows setting the time range for all visible graphs.
  3. The graph area and graph tree (same as in the online screen).

It can be opened from the following locations:

[Image: open.png]

Note: To avoid issues with disk space, a process is responsible for deleting the databases that are created for the history view and the offline screen. This process is based on two parameters:

  1. The last modification date. This field is updated every time that a run is viewed in the offline screen. The default setting is 30 days.
  2. The size limitation placed on the folder in which the run databases are stored. The default size is 10 GB.

Both of these parameters can be configured in the Performance Center Administration site to meet your requirements.
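The two-rule retention policy can be sketched as below. The flat folder of database files, the file names, and the eviction order are illustrative assumptions; only the two parameters and their defaults (30 days, 10 GB) come from the description above:

```python
import os
import time

def prune_run_databases(folder, max_age_days=30, max_total_bytes=10 * 1024**3):
    """Sketch of the retention policy: delete run databases whose last
    modification date (updated whenever the run is viewed offline) is too
    old, then evict oldest-first until the folder fits under the size cap.
    Assumes `folder` contains only flat run-database files."""
    now = time.time()
    entries = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        st = os.stat(path)
        entries.append((st.st_mtime, st.st_size, path))
    # Rule 1: age limit based on the last modification date.
    survivors = []
    for mtime, size, path in entries:
        if now - mtime > max_age_days * 86400:
            os.remove(path)
        else:
            survivors.append((mtime, size, path))
    # Rule 2: total-size limit, evicting least recently viewed first.
    survivors.sort()  # oldest mtime first
    total = sum(size for _, size, _ in survivors)
    for mtime, size, path in survivors:
        if total <= max_total_bytes:
            break
        os.remove(path)
        total -= size
    return total
```

Viewing a run in the offline screen refreshes its modification date, so actively used runs survive rule 1 and are the last candidates for rule 2.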


You can read more about these features in the Performance Center Help.


So what are you waiting for? Try these performance testing features for yourself today.

Comments

Very impressive! This will help answer the frequently asked questions:

Is there a performance problem?
Where is the problem?

I know my clients (and myself) will have the following questions:

Does the overhead of anomaly detection affect performance?
Are all monitored measurements checked for anomalies?  
Are anomalies checked if using SiteScope monitors?