HPE Software is now Micro Focus

Micro Focus Operations Orchestration – How to effectively determine and interpret workload


Lakshmikanth_Mathur

In this blog, I will provide an overview of Micro Focus Operations Orchestration (OO) workload, explain how to determine OO workload, and finally, show how to interpret OO workload data.

Overview of OO workload

Operations Orchestration is IT process automation software that helps enterprises automate IT tasks, operations, and processes. Enterprises often leverage the runbook automation capabilities of Operations Orchestration in:

  • Virtualization
  • Cloud service automation
  • Event management
  • Incident management
  • Change management
  • Request management
  • Other areas of IT Service Management

As automation needs increase, the workload demands on Operations Orchestration also increase. Understanding the current workload of Operations Orchestration is recommended in the following situations:

  • Introducing a new process work stream that uses the runbook automation functionality of OO
  • Troubleshooting OO performance-related issues
  • Addressing OO architecture and sizing issues
  • Observing trends in OO workload and predicting future workload

How to determine OO workload

Operations Orchestration workload is characterized by the average number of flow executions and the average number of step executions performed by OO in a given time period. OO stores these timestamps in its database as epoch time (milliseconds since January 1, 1970, UTC): the flow start time in the START_TIME_LONG column and the flow end time in the END_TIME_LONG column of the OO_EXECUTION_SUMMARY table, the step start time in the START_TIME column of the OO_STEP_LOG_STARTED table, and the step end time in the END_TIME column of the OO_STEP_LOG_ENDED table.
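As a quick sanity check outside the database, an epoch-milliseconds value pulled from one of these columns can be converted to a readable timestamp. A minimal Python sketch (the sample value is hypothetical, not taken from a real OO database):

```python
from datetime import datetime, timezone

def epoch_millis_to_utc(millis):
    """Convert an OO epoch-milliseconds value (e.g. START_TIME_LONG) to a UTC datetime."""
    return datetime.fromtimestamp(millis / 1000.0, tz=timezone.utc)

# Hypothetical sample value, as it might appear in OO_EXECUTION_SUMMARY.START_TIME_LONG
sample_start_time_long = 1514764800000
print(epoch_millis_to_utc(sample_start_time_long).isoformat())  # 2018-01-01T00:00:00+00:00
```

This division by 1000 is the same conversion the SQL queries below perform inside the database.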

OO supports the following databases: Oracle, Microsoft SQL Server (MS SQL), PostgreSQL, and MySQL. The following steps describe how to determine the average number of flow executions and the average number of step executions on Oracle and MS SQL. Note: SQL query syntax varies from one database vendor to another. If you have a PostgreSQL or MySQL database, adapt these SQL queries accordingly.

Oracle:

1. Log in to the OO database using SQL*Plus or any other SQL client. Use the same database login account that the OO application uses to connect to the OO database.

2. Run the following query to get the average number of flows run by OO per day over the past 30 days:

select count(*)/30 from OO_EXECUTION_SUMMARY where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME_LONG)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);

3. Run the following query to get the average number of flows run by OO per hour over the past 30 days:

select count(*)/(30*24) from OO_EXECUTION_SUMMARY where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME_LONG)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);

4. Run the following query to get the average number of flows run by OO per minute over the past 30 days:

select count(*)/(30*24*60) from OO_EXECUTION_SUMMARY where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME_LONG)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);

5. Run the following query to get the average number of flow steps started by OO per day over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/30 from OO_STEP_LOG_STARTED where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);

6. Run the following query to get the average number of flow steps started by OO per hour over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/(30*24) from OO_STEP_LOG_STARTED where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);

7. Run the following query to get the average number of flow steps started by OO per minute over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/(30*24*60) from OO_STEP_LOG_STARTED where (to_date('19700101', 'YYYYMMDD') + ( 1 / 24 / 60 / 60 / 1000) * START_TIME)  between sys_extract_utc(systimestamp)-30 and sys_extract_utc(systimestamp);
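The averaging logic these queries apply (count the rows whose epoch start time falls in the last 30 days, then divide by the number of periods) can be exercised outside Oracle as well. The following self-contained SQLite sketch uses a toy table mimicking OO_EXECUTION_SUMMARY; the table contents are fabricated for illustration and do not represent real OO data:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("create table OO_EXECUTION_SUMMARY (START_TIME_LONG integer)")

# Toy data: 90 flow starts spread evenly over the past 30 days (epoch milliseconds).
now_ms = int(time.time() * 1000)
day_ms = 24 * 60 * 60 * 1000
rows = [(now_ms - (i % 30) * day_ms,) for i in range(90)]
conn.executemany("insert into OO_EXECUTION_SUMMARY values (?)", rows)

# Average flows per day over the past 30 days, mirroring the Oracle query:
# filter on the epoch window, then divide the count by 30.
(avg_per_day,) = conn.execute(
    "select count(*) / 30.0 from OO_EXECUTION_SUMMARY "
    "where START_TIME_LONG between ? and ?",
    (now_ms - 30 * day_ms, now_ms),
).fetchone()
print(avg_per_day)  # 3.0
```

Note that SQLite can compare the raw epoch values directly, so no date conversion is needed; the Oracle queries convert to DATE only so the window can be expressed with systimestamp.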

 

MS SQL:

1. Log in to the OO database using SQL Server Management Studio or any other SQL client. Use the same database login account that the OO application uses to connect to the OO database.

2. Run the following query to get the average number of flows run by OO per day over the past 30 days:

select count(*)/30.0 from OO_EXECUTION_SUMMARY where dateadd(S,(START_TIME_LONG)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();

3. Run the following query to get the average number of flows run by OO per hour over the past 30 days:

select count(*)/(30.0*24) from OO_EXECUTION_SUMMARY where dateadd(S,(START_TIME_LONG)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();

4. Run the following query to get the average number of flows run by OO per minute over the past 30 days:

select count(*)/(30.0*24*60) from OO_EXECUTION_SUMMARY where dateadd(S,(START_TIME_LONG)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();

5. Run the following query to get the average number of flow steps started by OO per day over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/30.0 from OO_STEP_LOG_STARTED where dateadd(S,(START_TIME)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();

6. Run the following query to get the average number of flow steps started by OO per hour over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/(30.0*24) from OO_STEP_LOG_STARTED where dateadd(S,(START_TIME)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();

7. Run the following query to get the average number of flow steps started by OO per minute over the past 30 days (based on step start time only; it does not take into account when a step ended):

select count(*)/(30.0*24*60) from OO_STEP_LOG_STARTED where dateadd(S,(START_TIME)/1000,'1970-01-01') between dateadd(day,-30, getutcdate()) and getutcdate();
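For PostgreSQL, which the steps above leave as an exercise, the daily flow-count query can be adapted roughly as follows. This is an untested sketch based on PostgreSQL's to_timestamp() function, which takes seconds (hence the division by 1000.0); verify it against your own OO schema before relying on it:

```sql
-- Sketch: average number of flows per day over the past 30 days (PostgreSQL).
-- to_timestamp() expects seconds, so the millisecond epoch is divided by 1000.0.
select count(*) / 30.0
from OO_EXECUTION_SUMMARY
where to_timestamp(START_TIME_LONG / 1000.0)
      between now() - interval '30 days' and now();
```

The hourly and per-minute variants follow the same pattern, dividing by (30 * 24) and (30 * 24 * 60) respectively.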

 

How to interpret OO workload

Operations Orchestration workload data collected over a period of time can provide insights into the existing environment. There are many different ways to interpret OO workload data. Below are some sample graphs of OO workload under various scenarios, along with their potential interpretations:

1. Holidays/Low usage: a sudden dip in OO workload, followed by a return to its original level.

[Image: OO average flows.JPG]

2. New process work stream: a sudden jump in OO workload that then remains at the new level.

[Image: OO new process workstream.JPG]

3. Large number of incidents/requests/change requests: a sudden jump in OO workload, followed by a return to its original level.

[Image: OO number of incidents.JPG]

4. Some OO flows may not have completed, or may have ended prematurely: the number of step executions does not correlate with the number of flow executions. OO workload drops in terms of step executions while the number of flows started remains at the same level. Among other possibilities, this may be caused by changes in the environment (or process): for example, flows that run sanity checks early in their execution which can no longer be satisfied in the changed environment.

 

[Image: OO flows.JPG]
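One simple way to spot this scenario numerically rather than visually is to track the ratio of step executions to flow executions per day: a falling ratio at a flat flow count suggests flows are terminating early. A minimal sketch, assuming you have already pulled daily counts with queries like those above (the sample numbers are hypothetical):

```python
def steps_per_flow(flow_counts, step_counts):
    """Return the steps-per-flow ratio for each period.

    flow_counts and step_counts are parallel lists of daily counts."""
    return [s / f if f else 0.0 for f, s in zip(flow_counts, step_counts)]

# Hypothetical daily counts: flow volume stays flat while step volume drops.
daily_flows = [100, 100, 100, 100]
daily_steps = [2000, 1900, 1200, 600]
print(steps_per_flow(daily_flows, daily_steps))  # [20.0, 19.0, 12.0, 6.0]
```

A healthy environment tends to show a fairly stable steps-per-flow ratio, since the flow content itself rarely changes day to day.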

5. OO performance tuning/scale-up: OO workload has increased over a period of time and performance has become sluggish. Current and historic OO workload data alone are not sufficient to decide whether to scale OO Central servers and/or OO workers up or down; workload data has to be used in conjunction with JVM tuning parameters, HTTP sessions, and other sizing-related parameters to determine sizing requirements.

[Image: OO Performance.JPG]

 

 

Learn more about Micro Focus Operations Orchestration at the product page.

 

 
