Oracle® Enterprise Manager Administration 11g Release 1 (11.1.0.1) Part Number E16790-03
Oracle Enterprise Manager 11g Grid Control has the ability to scale for hundreds of users and thousands of systems and services on a single Enterprise Manager implementation.
This chapter describes techniques for achieving optimal performance using the Oracle Enterprise Manager application. It can also help you with capacity planning, sizing, and maximizing Enterprise Manager performance in a large scale environment. By maintaining routine housekeeping and monitoring performance regularly, you ensure that you have the required data to make accurate forecasts of future sizing requirements. Establishing good baseline values for the Enterprise Manager Grid Control vital signs and setting reasonable warning and critical thresholds on those baselines allows Enterprise Manager to monitor itself for you.
This chapter also provides practical approaches to backup, recovery, and disaster recovery topics while addressing different strategies when practical for each tier of Enterprise Manager.
This chapter contains the following sections:
Oracle Enterprise Manager Grid Control Architecture Overview
Enterprise Manager Grid Control Sizing and Performance Methodology
Oracle Enterprise Manager Backup, Recovery, and Disaster Recovery Considerations
The architecture for Oracle Enterprise Manager 11g Grid Control exemplifies two key concepts in application performance tuning: distribution and parallelization of processing. Each component of Grid Control can be configured to apply both these concepts.
The components of Enterprise Manager Grid Control include:
The Management Agent - A process that is deployed on each monitored host and that is responsible for monitoring all services and components on the host. The Management Agent is also responsible for communicating that information to the middle-tier Management Service and for managing and maintaining the system and its services.
The Management Service - A J2EE Web application that renders the user interface for the Grid Control Console, works with all Management Agents to process monitoring and jobs information, and uses the Management Repository as its data store.
The Management Repository - A schema within an Oracle Database that contains all available information about administrators, services, and applications managed within Enterprise Manager.
Figure 9-1 Overview of Enterprise Manager Architecture Components
For more information about the Grid Control architecture, see the Oracle Enterprise Manager 11g documentation:
The Oracle Enterprise Manager 11g documentation is available at the following location on the Oracle Technology Network (OTN):
http://otn.oracle.com/documentation/oem.html
An accurate predictor of capacity at scale is the actual metric trend information from each individual Enterprise Manager Grid Control deployment. This information, combined with an established, rough, starting host system size and iterative tuning and maintenance, produces the most effective means of predicting capacity for your Enterprise Manager Grid Control deployment. It also assists in keeping your deployment performing at an optimal level.
Here are the steps to follow to enact the Enterprise Manager Grid Control sizing methodology:
If you have not already installed Enterprise Manager Grid Control 11g, choose a rough starting host configuration as listed in Table 9-1.
Periodically evaluate your site's vital signs (detailed later).
Eliminate bottlenecks using routine DBA/Enterprise Manager administration housekeeping.
Eliminate bottlenecks using tuning.
Extrapolate linearly into the future to plan for future sizing requirements.
Step one need only be done once for a given deployment. Steps two, three, and four must be done on a regular basis for the life of the deployment, regardless of whether you plan to grow your Enterprise Manager Grid Control site. These steps are essential to an efficient Enterprise Manager Grid Control site regardless of its size or workload, and they must be completed before you continue on to step five. Step five is only required if you intend to grow the deployment size in terms of monitored targets; however, evaluating these trends regularly can be helpful in evaluating any other changes to the deployment.
If you have not yet installed Enterprise Manager Grid Control on an initial platform, this step helps you choose a rough approximation based on experiences with real world Enterprise Manager Grid Control deployments. If you have already installed Enterprise Manager Grid Control, proceed to Step 2. Three typical deployment sizes are defined: small, medium, and large. The number and type of systems (or targets) monitored largely define the size of an Enterprise Manager Grid Control deployment. Table 9-1 represents Intel-based platforms.
| Deployment Size | Hosts | CPUs/Host | Memory/Host (GB) |
|---|---|---|---|
| Small (100 monitored targets) | 1 | 1 (3 GHz) | 4 |
| Medium (1,000 monitored targets) | 1 | 2 (3 GHz) | Greater than or equal to 4 |
| Large (10,000 monitored targets) | 2 | 2 (3 GHz) | Greater than or equal to 6 |
The following table lists the minimum required sizing information for other platforms and operating systems.
Table 9-2 Sizing Requirements for Other Platforms
| Platform and Operating System | Physical Memory | Processor | Number of CPUs |
|---|---|---|---|
| Solaris SPARC 64 -- SunOS dscgaa03-3 5.10 Generic_137137-09 sun4v sparc SUNW,SPARC-Enterprise-T5220 | 12,288 MB | SUNW,UltraSPARC-T2 - 1415 MHz | 12 |
| HP IA -- HP-IA 64 B.11.23 U ia64 | 8,161 MB | Intel(R) Itanium 2 Processor | 4 |
| Microsoft Windows NT -- Windows Server 2003 Service Pack 2 | 4 GB | AMD Opteron™ Processor 248 | 2 |
Any OMS host also runs OPMN, Administration Server, and Node Manager processes, and possibly database processes, so the minimum memory requirement is 4 GB per OMS host.
Table 9-3 Management Repository
| Deployment Size | Hosts | CPUs/Host | Memory/Host (GB) |
|---|---|---|---|
| Small | Shares host with Management Server | Shares host with Management Server | Shares host with Management Server |
| Medium | 1 | 2 | 4 |
| Large | 2 | 4 | 6 |
Table 9-4 Total Management Repository Storage
| Deployment Size | SYSTEM** | MGMT_TABLESPACE | MGMT_ECM_DEPOT_TS | MGMT_AD4J_TS | TEMP |
|---|---|---|---|---|---|
| Small | 600 MB | 50 GB | 1 GB | 100 MB | 10 GB |
| Medium | 600 MB | 200 GB | 4 GB | 200 MB | 20 GB |
| Large | 600 MB | 300 GB | Greater than 4 GB | 400 MB | 40 GB |

*All values are minimum tablespace sizes and are intended as rough guidelines only. The actual size of the MGMT_TABLESPACE could vary widely from deployment to deployment due to variations in target type distribution, user customization, and several other factors. These tablespaces are defined with AUTOEXTEND set to ON by default to help mitigate space constraint issues. On raw file systems, Oracle recommends using more than the minimum size to help prevent space constraint issues.

**The SYSTEM and TEMP tablespace sizes are minimums for Enterprise Manager-only repositories. If Enterprise Manager is sharing the repository database with other applications, these minimums may be too low.

Note: In version 11g of Enterprise Manager you cannot monitor tablespaces through alerts when their files are set to autoextend. If you want greater control over the management of your tablespaces, turn off autoextend and set up TABLESPACE FULL alerts; otherwise, allow Oracle to grow the database through the AUTOEXTEND feature without alerting you.
The previous tables show the estimated minimum hardware requirements for each deployment size. Management Servers running on more than one host, as portrayed in the large deployment above, will divide work amongst themselves.
Deploying multiple Management Servers also provides basic fail-over capabilities, with the remaining servers continuing to operate in the event of the failure of one. Use of a Server Load Balancer, or SLB, provides transparent failover for Enterprise Manager UI clients in the event of a Management Server host failure, and it also balances the request load between the available Management Servers. SLBs are host machines dedicated for load balancing purposes. SLBs can be clustered to provide fail-over capability.
Using multiple hosts for the Management Repository assumes the use of Oracle Real Application Clusters (RAC). Doing so allows the same Oracle database to be accessible on more than one host system. Beyond the storage required for the Management Server, Management Repository storage may also be required. Management Server storage is less impacted by the number of management targets. The numbers suggested in the Enterprise Manager Grid Control documentation should be sufficient in this regard.
A critical consideration when deploying Enterprise Manager Grid Control is network performance between tiers. Enterprise Manager Grid Control ensures tolerance of network glitches, failures, and outages between application tiers through error tolerance and recovery. The Management Agent in particular is able to handle a less performant or reliable network link to the Management Service without severe impact to the performance of Enterprise Manager as a whole. A single Management Agent's data being delayed by network issues is not likely to be noticed at the Enterprise Manager Grid Control system-wide level.
The impact of even slightly higher network latency between the Management Service and Management Repository, however, will be substantial. Implementations of Enterprise Manager Grid Control have experienced significant performance issues when the network link between the Management Service and Management Repository is not of sufficient quality. Figure 9-2 displays the Enterprise Manager components and the network link performance requirements between them. These are minimum requirements based on larger real world Enterprise Manager Grid Control deployments and testing.
You can see in Figure 9-2 that the bandwidth and latency minimum requirements of network links between Enterprise Manager Grid Control components greatly impact the performance of the Enterprise Manager application.
This is the most important step of the five. Without some degree of monitoring and understanding of trends or dramatic changes in the vital signs of your Enterprise Manager Grid Control site, you are placing site performance at serious risk. Every monitored target sends data to the Management Repository for loading and aggregation through its associated Management Agent. This adds up to a considerable volume of activity that requires the same level of management and maintenance as any other enterprise application.
Enterprise Manager has "vital signs" that reflect its health. These vital signs should be monitored for trends over time as well as against established baseline thresholds. You must establish realistic baselines for the vital signs when performance is acceptable. Once baselines are established, you can use built-in Oracle Enterprise Manager Grid Control functionality to set baseline warning and critical thresholds. This allows you to be notified automatically when something significant changes on your Enterprise Manager site. The following table is a point-in-time snapshot of the Enterprise Manager Grid Control vital signs for two sites:
| Module | Metrics | EM Site 1 | EM Site 2 |
|---|---|---|---|
| Site URL | | emsite1.acme.com | emsite2.acme.com |
| Target Counts | Database Targets | 192 (45 not up) | 1218 (634 not up) |
| | Host Targets | 833 (12 not up) | 1042 (236 not up) |
| | Total Targets | 2580 (306 not up) | 12293 (6668 not up) |
| Loader Statistics | Loader Threads | 6 | 16 |
| | Total Rows/Hour | 1,692,000 | 2,736,000 |
| | Rows/Hour/loader thread | 282,000 | 171,000 |
| | Rows/second/loader thread | 475 | 187 |
| | Percent of Hour Run | 15 | 44 |
| Rollup Statistics | Rows per Second | 2,267 | 417 |
| | Percent of Hour Run | 5 | 19 |
| Job Statistics | Job Dispatchers | 2 | 4 |
| | Job Steps/second/dispatcher | 32 | 10 |
| Notification Statistics | Notifications per Second | 8 | 1 |
| | Percent of Hour Run | 1 | 13 |
| Alert Statistics | Alerts per Hour | 536 | 1,100 |
| Management Service Host Statistics | Average % CPU (Host 1) | 9 (emhost01) | 13 (emhost01) |
| | Average % CPU (Host 2) | 6 (emhost02) | 17 (emhost02) |
| | Average % CPU (Host 3) | N/A | 38 (em6003) |
| | Average % CPU (Host 4) | N/A | 12 (em6004) |
| | Number of CPUs per host | 2 X 2.8 (Xeon) | 4 X 2.4 (Xeon) |
| | Memory per Host (GB) | 6 | 6 |
| Management Repository Host Statistics | Average % CPU (Host 1) | 12 (db01rac) | 32 (em6001rac) |
| | Average % CPU (Host 2) | | |
| | Average % CPU (Host 3) | | |
| | Average % CPU (Host 4) | | |
| | Number of CPUs per host | | |
| | Buffer Cache Size (MB) | | |
| | Memory per Host (GB) | 6 | 12 |
| | Total Management Repository Size (GB) | 56 | 98 |
| | RAC Interconnect Traffic (MB/s) | 1 | 4 |
| | Management Server Traffic (MB/s) | 4 | 4 |
| | Total Management Repository I/O (MB/s) | 6 | 27 |
| Enterprise Manager UI Page Response (sec) | Home Page | 3 | 6 |
| | All Host Page | 3 | 30+ |
| | All Database Page | 6 | 30+ |
| | Database Home Page | 2 | 2 |
| | Host Home Page | 2 | 2 |
The two Enterprise Manager sites are at the opposite ends of the scale for performance.
EM Site 1 is performing very well with high loader rows/sec/thread and high rollup rows/sec. It also has a very low percentage of hours run for the loader and the rollup. The CPU utilization on both the Management Server and Management Repository Server hosts is low. Most importantly, the UI page response times are excellent. To summarize, Site 1 is doing substantial work with minimal effort. This is how a well configured, tuned, and maintained Oracle Enterprise Manager Grid Control site should look.
Conversely, EM Site 2 is having difficulty. The loader and rollup are working hard and not moving many rows. Worst of all are the UI page response times. There is clearly a bottleneck on Site 2, possibly more than one.
The following table outlines metric guidelines for the different modules based on tests that were run with the configurations outlined. It can serve as a reference point for you to extrapolate information and data based on the metrics and test environment used in the specified environment.
Table 9-5 Metric Guidelines for Modules Based On Test Environments
The guidelines in this table come from two test environments:

Test Environment A -- OMS: 2 hosts, 4 Intel Xeon CPUs and 6 GB memory per host. Repository: 2 nodes, 4 Intel Xeon CPUs and 6 GB memory per node. EM: shared recv directory, 867 Agents, 867 hosts, 1803 total targets. The metrics were collected for 5 hours after the 2 OMSs were started, with 50 MB of upload backlog files on each agent.

Test Environment B -- OMS: 1 host, 4 Intel Xeon CPUs and 6 GB memory. Repository: 1 node, 4 Intel Xeon CPUs and 6 GB memory. EM: 1 OMS, 1 repository node, 2474 Agents, 2474 hosts, 8361 DB total targets.

| Module | Metrics | Value | Test Environment |
|---|---|---|---|
| Loader Statistics | Loader Threads | 10 | A |
| | Total Rows/Hour | 4,270,652 | A |
| | Rows/Hour/loader thread | 427,065 | A |
| | Rows/second/loader thread | 120 | A |
| Rollup Statistics | Rows per second | | |
| Job Statistics | Job Dispatchers | 1 x Number of OMSs | |
| | Job Steps/second/dispatcher | | |
| Notification Statistics | Notifications per second | 16 | B |
| Alert Statistics | Alerts per hour | 7200 | B |
| Management Service Host Statistics | Average % CPU (Host 1) | 31% | A |
| | Average % CPU (Host 2) | 34% | A |
| | Number of CPUs per host | 4 (Xeon) | A |
| | Memory per Host (GB) | 6 | A |
| Management Repository Host Statistics | Average % CPU (Host 1) | 32% | A |
| | Average % CPU (Host 2) | 26% | A |
| | Number of CPUs per host | 4 | A |
| | SGA Target | 2 GB | A |
| | Memory per Host (GB) | 6 | A |
| | Total Management Repository Size (GB) | 94 | A |
| | RAC Interconnect Traffic (MB/s) | 1 | A |
| | Management Server Traffic (MB/s) | | |
| | Total Management Repository I/O (MB/s) | | |
| Enterprise Manager UI Page Response (sec) | Home Page | 9.1 secs | B |
| | All Host Page | 9.8 secs | B |
| | All Database Page | 5.7 secs | B |
| | Database Home Page | 1.7 secs | B |
| | Host Home Page | < 1 sec | B |
These vital signs are all available from within the Enterprise Manager interface. Most values can be found on the All Metrics page for each host, or the All Metrics page for Management Server. Keeping an eye on the trends over time for these vital signs, in addition to assigning thresholds for warning and critical alerts, allows you to maintain good performance and anticipate future resource needs. You should plan to monitor these vital signs as follows:
Take a baseline measurement of the vital sign values seen in the previous table when the Enterprise Manager Grid Control site is running well.
Set reasonable thresholds and notifications based on these baseline values so you can be notified automatically if they deviate substantially. This may require some iteration to fine-tune the thresholds for your site. Receiving too many notifications is not useful.
On a daily (or weekly at a minimum) basis, watch for trends in the 7-day graphs for these values. This will not only help you spot impending trouble, but it will also allow you to plan for future resource needs.
The next step provides some guidance of what to do when the vital sign values are not within established thresholds. Also, it explains how to maintain your site's performance through routine housekeeping.
It is critical to note that routine housekeeping helps keep your Enterprise Manager Grid Control site running well. The following housekeeping tasks should be performed at the intervals noted.
Analyze the three major tables in the Management Repository: MGMT_METRICS_RAW, MGMT_METRICS_1HOUR, and MGMT_METRICS_1DAY. If your Management Repository is in an Oracle 11g database, then these tables are automatically analyzed weekly and you can skip this task. If your Management Repository is in an Oracle version 9 database, then you will need to ensure that the following commands are run weekly:
exec dbms_stats.gather_table_stats('SYSMAN', 'MGMT_METRICS_RAW', null, .2, false, 'for all indexed columns', null, 'global', true, null, null, null);
exec dbms_stats.gather_table_stats('SYSMAN', 'MGMT_METRICS_1HOUR', null, .2, false, 'for all indexed columns', null, 'global', true, null, null, null);
exec dbms_stats.gather_table_stats('SYSMAN', 'MGMT_METRICS_1DAY', null, .2, false, 'for all indexed columns', null, 'global', true, null, null, null);
exec dbms_stats.gather_table_stats('SYSMAN', 'MGMT_STRING_METRIC_HISTORY', null, .2, false, 'for all indexed columns', null, 'global', true, null, null, null);
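If you prefer to automate the weekly run, the following is a minimal sketch that wraps these gathers in a procedure and schedules it with DBMS_JOB (available in Oracle9i); the procedure name and job are illustrative, not part of the Enterprise Manager product, and on Oracle Database 10g or later you could use DBMS_SCHEDULER instead:

-- Illustrative only: run as a suitably privileged user on the repository database.
CREATE OR REPLACE PROCEDURE sysman.em_weekly_stats AS
BEGIN
  FOR t IN (SELECT table_name FROM dba_tables
             WHERE owner = 'SYSMAN'
               AND table_name IN ('MGMT_METRICS_RAW', 'MGMT_METRICS_1HOUR',
                                  'MGMT_METRICS_1DAY', 'MGMT_STRING_METRIC_HISTORY'))
  LOOP
    DBMS_STATS.GATHER_TABLE_STATS('SYSMAN', t.table_name, null, .2, false,
      'for all indexed columns', null, 'global', true, null, null, null);
  END LOOP;
END;
/

VARIABLE jobno NUMBER
BEGIN
  -- Run every 7 days, starting tomorrow at midnight.
  DBMS_JOB.SUBMIT(job       => :jobno,
                  what      => 'sysman.em_weekly_stats;',
                  next_date => TRUNC(SYSDATE) + 1,
                  interval  => 'TRUNC(SYSDATE) + 7');
  COMMIT;
END;
/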
Enterprise Manager Administrators should monitor the database built-in Segment Advisor for recommendations on Enterprise Manager Repository segment health. The Segment Advisor advises administrators which segments need to be rebuilt/reorganized and provides the commands to do so.
For more information about Segment Advisor and issues related to system health, refer to notes 242736.1 and 314112.1 in the My Oracle Support Knowledge Base.
The most common causes of performance bottlenecks in the Enterprise Manager Grid Control application are listed below (in order of most to least common):
Housekeeping that is not being done (far and away the biggest source of performance problems)
Hardware or software that is incorrectly configured
Hardware resource exhaustion
When the vital signs are routinely outside of an established threshold, or are trending that way over time, you must address two areas. First, you must ensure that all previously listed housekeeping is up to date. Secondly, you must address resource utilization of the Enterprise Manager Grid Control application. The vital signs listed in the previous table reflect key points of resource utilization and throughput in Enterprise Manager Grid Control. The following sections cover some of the key vital signs along with possible options for dealing with vital signs that have crossed thresholds established from baseline values.
When you are asked to evaluate a site for performance and notice high CPU utilization, there are a few common steps you should follow to determine what resources are being used and where.
The Management Server is typically a very minimal consumer of CPU. High CPU utilization in Enterprise Manager Grid Control almost always manifests as a symptom at the Management Repository.
Use the Processes display on the Enterprise Manager Host home page to determine which processes are consuming the most CPU on any Management Service or Management Repository host that has crossed a CPU threshold.
Once you have established that Enterprise Manager is consuming the most CPU, use Enterprise Manager to identify what activity is the highest CPU consumer. Typically this manifests itself on a Management Repository host where most of the Management Service's work is performed. It is very rare that the Management Service itself is the source of the bottleneck. Here are a few typical spots to investigate when the Management Repository appears to be using too many resources.
Click on the CPU Used database resource listed on the Management Repository's Database Performance page to examine the SQL that is using the most CPU at the Management Repository.
Check the Database Locks on the Management Repository's Database Performance page looking for any contention issues.
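If you prefer to confirm the top consumers directly in the repository database, the following minimal sketch lists the statements that have accumulated the most CPU time using the standard V$SQLSTATS view (available in Oracle Database 10g Release 2 and later); this is a generic database query, not part of the Grid Control console flow described above:

-- Illustrative only: the ten statements with the most accumulated CPU time (in seconds).
SELECT *
  FROM (SELECT sql_id,
               ROUND(cpu_time / 1000000, 1) AS cpu_seconds,
               executions,
               SUBSTR(sql_text, 1, 80)      AS sql_text_fragment
          FROM v$sqlstats
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;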
High CPU utilization is probably the most common symptom of any performance bottleneck. Typically, the Management Repository is the biggest consumer of CPU, which is where you should focus. A properly configured and maintained Management Repository host system that is not otherwise hardware resource constrained should average roughly 40 percent or less total CPU utilization. A Management Server host system should average roughly 20 percent or less total CPU utilization. These relatively low average values should allow sufficient headroom for spikes in activity. Allowing for activity spikes helps keep your page performance more consistent over time. If your Enterprise Manager Grid Control site interface pages happen to be responding well (approximately 3 seconds) while there is no significant (constant) loader backlog, and it is using more CPU than recommended, you may not have to address it unless you are concerned it is part of a larger upward trend.
The recommended path for tracking down the root cause of high Management Repository CPU utilization is captured under step 3.b above. This allows you to start at the Management Repository Performance page and work your way down to the SQL that is consuming the most CPU in its processing. This approach has been used very successfully on several real world sites.
If you are running Enterprise Manager on Intel-based hosts, the Enterprise Manager Grid Control Management Service and Management Repository will both benefit from Hyper-Threading (HT) being enabled on the host or hosts on which they are deployed. HT is a function of certain later models of Intel processors that allows some CPU instructions to execute in parallel, giving the appearance of double the number of CPUs physically available on the system. Testing has shown that HT provides approximately 1.5 times the CPU processing power of the same system without HT enabled, which can significantly improve system performance. The Management Service and Management Repository both frequently have more than one process executing simultaneously, so they can benefit greatly from HT.
The vital signs for the loader indicate exactly how much data is continuously coming into the system from all the Enterprise Manager Agents. The most important items here are the percent of hour runs and rows/second/thread. The (Loader) % of hour run indicates whether the loader threads configured are able to keep pace with the incoming data volume. As this value approaches 100%, it becomes apparent that the loading process is failing to keep pace with the incoming data volume. The lower this value, the more efficiently your loader is running and the less resources it requires from the Management Service host. Adding more loader threads to your Management Server can help reduce the percent of hour run for the loader.
Rows/Second/Thread is a precise measure of each loader thread's throughput per second. The higher this number, the better. Rows/Second/Thread as high as 1200 have been observed on some smaller, well configured and maintained Enterprise Manager Grid Control sites. If you have not increased the number of loader threads and this number is trending down, it may indicate a problem later. One way to overcome a decreasing rows/second/thread is to add more loader threads.
The number of Loader Threads is always set to one by default in the Management Server configuration file. Each Management Server can have a maximum of 10 loader threads. Adding loader threads to a Management Server typically increases the overall host CPU utilization by 2% to 5% on an Enterprise Manager Grid Control site with many Management Agents configured. Customers can change this value as their site requires. Most medium size and smaller configurations will never need more than one loader thread. Here is a simple guideline for adding loader threads:
Max total (across all Management Servers) loader threads = 2 X number of Management Repository host CPUs
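For example (the numbers are purely illustrative), a Management Repository host with four CPUs would support a maximum of eight loader threads in total across all Management Servers.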
There is a diminishing return when adding loader threads. You will not yield 100% capacity from the second, or higher, thread. There should be a positive benefit, however. As you add loader threads, you should see rows/second/thread decrease, but total rows/hour throughput should increase. If you are not seeing significant improvement in total rows/hour, and there is a constantly growing loader file backlog, it may not be worth the cost of the increase in loader threads. You should explore other tuning or housekeeping opportunities in this case.
To add more loader threads, you can change the following configuration parameter where n is a positive integer [1-10]:
em.loader.threadPoolSize=n
The default is 1, and any value outside [1-10] causes the thread pool size to default to 1. This property file is located in the {ORACLE_HOME}/sysman/config directory. Changing this parameter requires a restart of the Management Service for the new value to be loaded.
The following two parameters are set for the Receiver module which receives files from agents.
em.loader.maxDataRecvThreads=n (Default 75)
Where n is a positive integer and the default value is 75. This parameter limits the maximum number of concurrent data file receiver threads, so at peak load only 75 receiver threads will be receiving files; any additional requests are rejected with a Server Busy error and are resent by the agent after the default retry time.
Care should be taken when setting this value: too high a value puts an increased load on the OMS machine and the shared receiver directory host, while too low a value reduces data file receive throughput.
oracle.sysman.emRep.dbConn.maxConnForReceiver=n (Default 25)
Where n is a positive integer and the default value is 25. This parameter sets the maximum number of Repository Database connections for the receive threads. Oracle recommends that you set this value equal to em.loader.maxDataRecvThreads, so that each receiver thread gets one database session and never has to wait for a connection.
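Putting the three loader and receiver parameters together, here is a minimal sketch of the relevant property entries (the values shown are illustrative, not recommendations, and assume the Management Service property file is emoms.properties, as referenced later in this chapter), followed by the Management Service restart that makes them take effect:

# {ORACLE_HOME}/sysman/config/emoms.properties -- illustrative values only
em.loader.threadPoolSize=4
em.loader.maxDataRecvThreads=75
oracle.sysman.emRep.dbConn.maxConnForReceiver=75

# Restart the Management Service so the new values are read:
#   $ORACLE_HOME/bin/emctl stop oms
#   $ORACLE_HOME/bin/emctl start oms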
The rollup process is the aggregation mechanism for Enterprise Manager Grid Control. Once an hour, it processes all the new raw data loaded into the Management Repository table MGMT_METRICS_RAW, calculates averages and stores them in the tables MGMT_METRICS_1HOUR and MGMT_METRICS_1DAY. The two vital signs for the rollup are the rows/second and % of hour run. Due to the large volume of data rows processed by the rollup, it tends to be the largest consumer of Management Repository buffer cache space. Because of this, the rollup vital signs can be great indicators of the benefit of increasing buffer cache size.
Rollup rows/second shows exactly how many rows are being processed, or aggregated and stored, every second. This value is usually around 2,000 (+/- 500) rows per second on a site with a decent size buffer cache and reasonably speedy I/O. A downward trend over time for this value may indicate a future problem, but as long as % of hour run is under 100 your site is probably fine.
If rollup % of hour run is trending up (or is higher than your baseline), and you have not yet set the Management Repository buffer cache to its maximum, it may be advantageous to increase the buffer cache setting. Usually, if there is going to be a benefit from increasing buffer cache, you will see an overall improvement in resource utilization and throughput on the Management Repository host. The loader statistics will appear a little better. CPU utilization on the host will be reduced and I/O will decrease. The most telling improvement will be in the rollup statistics. There should be a noticeable improvement in both rollup rows/second and % of hour run. If you do not see any improvement in any of these vital signs, you can revert the buffer cache to its previous size. The old Buffer Cache Hit Ratio metric can be misleading. It has been observed in testing that Buffer Cache Hit Ratio will appear high when the buffer cache is significantly undersized and Enterprise Manager Grid Control performance is struggling because of it. There will be times when increasing buffer cache will not help improve performance for Grid Control. This is typically due to resource constraints or contention elsewhere in the application. Consider using the steps listed in the High CPU Utilization section to identify the point of contention. Grid Control also provides advice on buffer cache sizing from the database itself. This is available on the database Memory Parameters page.
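If you decide to test a larger buffer cache, a minimal SQL sketch follows (the sizes are illustrative; if the repository database uses automatic SGA management, raise SGA_TARGET rather than DB_CACHE_SIZE):

-- Illustrative only: check the current setting, then raise it.
SHOW PARAMETER db_cache_size
ALTER SYSTEM SET db_cache_size = 2G SCOPE=BOTH;
-- With automatic SGA management, adjust the overall target instead:
-- ALTER SYSTEM SET sga_target = 4G SCOPE=BOTH;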
One important thing to note when considering increasing buffer cache is that there may be operating system mechanisms that can help improve Enterprise Manager Grid Control performance. One example of this is the "large memory" option available on Red Hat Linux. The Linux OS Red Hat Advanced Server™ 2.1 (RHAS) has a feature called big pages. In RHAS 2.1, bigpages is a boot up parameter that can be used to pre-allocate large shared memory segments. Use of this feature, in conjunction with a large Management Repository SGA, can significantly improve overall Grid Control application performance. Starting in Red Hat Enterprise Linux™ 3, big pages functionality is replaced with a new feature called huge pages, which no longer requires a boot-up parameter.
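On current Linux kernels, huge pages are reserved through the vm.nr_hugepages kernel parameter; a minimal sketch follows (the page count is illustrative and must be sized to cover the repository SGA, typically at 2 MB per huge page on x86):

# /etc/sysctl.conf -- illustrative value only
vm.nr_hugepages = 2048

# Apply without a reboot, then confirm the reservation:
#   sysctl -p
#   grep Huge /proc/meminfo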
The rollup process introduces the concept of a rollup participating instance, where rollup processing is distributed among all participating instances. To add a candidate instance to the participating EMROLLUP group, set the instance_groups parameter at the instance level as follows (a SQL sketch of these settings appears after this list):
Add EMROLLUP_1 to the instance_group parameter for node 1
Add EMROLLUP_2 to the instance_group parameter for node 2
Introduce the PQ and PW parallel processing modes where:
PQ is the parallel query/parallel DML mode. In this mode, each participating instance will have one worker utilizing the parallel degree specified.
PW is the parallel worker mode. In this mode, each participating instance will have a number of worker jobs equal to the parallel level specified.
Distribute the work load for all participating RAC instances as follows:
Each participating instance is allocated an equal number of targets. So for (n) participating instances with a total workload of (tl), each instance is allocated (tl/n).
Each worker on a participating instance is allocated an equal share of that instance's workload. So for (il) targets per instance and (w) workers, each worker is allocated (il/w).
For each worker, the load is further divided into batches to control the number of times the rollup SQL is executed. The number of rows per batch will be the total number of rows allocated for the worker divided by the number of batches.
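Here is the SQL sketch referenced above for assigning the participating instance groups on a two-node RAC repository (the instance names are illustrative):

-- Illustrative only: run as a privileged user against the repository database.
ALTER SYSTEM SET instance_groups = 'EMROLLUP_1' SCOPE=SPFILE SID='emrep1';
ALTER SYSTEM SET instance_groups = 'EMROLLUP_2' SCOPE=SPFILE SID='emrep2';
-- instance_groups is not a dynamic parameter, so restart the instances
-- for the new values to take effect.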
Use the following recommendations as guidelines during the Rollup process:
Use the parallel worker (PW) mode and the participating EMROLLUP_xx instance group; the parallel worker mode is the recommended configuration.
Splitting the work among more workers will improve performance and scalability up to the point where diminishing returns set in, which depends on the number of CPUs available on each RAC node. In this test case, running with 10 workers was the optimal configuration, balancing response time against machine CPU and I/O utilization.
It is important to set a proper batch size (10 is recommended). The optimal run was the one with 10 batches, which balanced the number of executions of the main SQL (calling EMD_1HOUR_ROLLUP) against the sort space needed for each individual execution.
Start by setting the number of batches to 10, bearing in mind that the number of batches can be changed based on the data distribution.
The recommendations above will yield the following results. Using the multi-instance parallel worker (8 PW) mode (with the redesigned code described earlier) improves the performance by a factor of 9-13 when utilizing two participating RAC instances.
Rollup row count (in millions) in MGMT_METRICS_1HOUR | Time (min) | Workers | Batch Size |
---|---|---|---|
29.5 | 30 | 8 | 1 |
9.4 | 5 | 8 | 10 |
** For the entire test there were 15779 distinct TARGET_GUIDs.
** The test produced 29.5 million new rollup rows in MGMT_METRICS_1HOUR.
Run ** | Rows/Workers | Batches/Workers | Rows/Batch | Time (min) |
---|---|---|---|---|
8 PW /1 instance | 3945 | 3945 | 1 | 40 |
8 PW /2 instances | 1973 | 1973 | 1 | 30 |
Jobs, notifications, and alerts are indicators of the processing efficiency of the Management Service(s) on your Enterprise Manager Grid Control site. Any negative trends in these values are usually a symptom of contention elsewhere in the application. The best use of these values is to measure the benefit of running with more than one Management Server. There is one job dispatcher in each Management Server. Adding Management Servers will not always improve these values. In general, adding Management Servers will improve overall throughput for Grid Control when the application is not otherwise experiencing resource contention issues. Job, Notification, and Alert vital signs can help measure that improvement.
Monitoring the I/O throughput of the different channels in your Enterprise Manager Grid Control deployment is essential to ensuring good performance. At minimum, there are three different I/O channels on which you should have a baseline and alert thresholds defined:
Disk I/O from the Management Repository instance to its data files
Network I/O between the Management Server and Management Repository
RAC interconnect (network) I/O (on RAC systems only)
You should understand the potential peak and sustained throughput I/O capabilities for each of these channels. Based on these and the baseline values you establish, you can derive reasonable thresholds for warning and critical alerts on them in Grid Control. You will then be notified automatically if you approach these thresholds on your site. Some Grid Control site administrators can be unaware or mistaken about what these I/O channels can handle on their sites. This can lead to Enterprise Manager Grid Control saturating these channels, which in turn cripples performance on the site. In such an unfortunate situation, you would see that many vital signs would be impacted negatively.
To discover whether the Management Repository is involved, you can use Grid Control to check the Database Performance page. On the Performance page for the Management Repository, click on the wait graph showing the largest amount of time spent. From this you can continue to drill down into the actual SQL code or sessions that are waiting. This should help you to understand where the bottleneck is originating.
Another area to check is unexpected I/O load from non-Enterprise Manager Grid Control sources like backups, another application, or a possible data-mining co-worker who engages in complex SQL queries, multiple Cartesian products, and so on.
Total Repository I/O trouble can be caused by two factors. The first is a lack of regular housekeeping. Some of the Grid Control segments can be very badly fragmented causing a severe I/O drain. Second, there can be some poorly tuned SQL statements consuming much of the site I/O bandwidth. These two main contributors can cause most of the Grid Control vital signs to plummet. In addition, the lax housekeeping can cause the Management Repository's allocated size to increase dramatically.
One important feature to take advantage of is asynchronous I/O. Enabling asynchronous I/O can dramatically improve overall performance of the Grid Control application. The Sun Solaris™ and Linux operating systems have this capability, but it may be disabled by default. The Microsoft Windows™ operating system uses asynchronous I/O by default. Oracle strongly recommends enabling this operating system feature on the Management Repository hosts and on Management Service hosts as well.
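On the database side, asynchronous I/O for the Management Repository is typically governed by the DISK_ASYNCH_IO and FILESYSTEMIO_OPTIONS initialization parameters; the following is a minimal sketch (verify the values appropriate for your platform and storage before changing them):

-- Illustrative only: check the current settings, then enable asynchronous
-- (and direct) I/O for file system datafiles. Both parameters are static,
-- so the change requires a database restart.
SHOW PARAMETER disk_asynch_io
SHOW PARAMETER filesystemio_options
ALTER SYSTEM SET filesystemio_options = 'SETALL' SCOPE=SPFILE;
ALTER SYSTEM SET disk_asynch_io = TRUE SCOPE=SPFILE;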
Automatic Storage Management (ASM) is recommended for Enterprise Manager Grid Control repository database storage.
There may be occasions when Enterprise Manager user interface pages are slow in the absence of any other performance degradation. The typical cause for these slowdowns will be an area of Enterprise Manager housekeeping that has been overlooked. The first line of monitoring for Enterprise Manager page performance is the use of Enterprise Manager Beacons. This functionality is also useful for web applications other than Enterprise Manager.
Beacons are designed to be lightweight page performance monitoring targets. After defining a Beacon target on a Management Agent, you can then define UI performance transactions using the Beacon. These transactions are a series of UI page hits that you will manually walk through once. Thereafter, the Beacon will automatically repeat your UI transaction on a specified interval. Each time the Beacon transaction is run, Enterprise Manager will calculate its performance and store it for historical purposes. In addition, alerts can be generated when page performance degrades below thresholds you specify.
When you configure the Enterprise Manager Beacon, you begin with a single predefined transaction that monitors the home page you specify during this process. You can then add as many transactions as are appropriate. You can also set up additional Beacons from different points on your network against the same web application to measure the impact of WAN latency on application performance. This same functionality is available for all Web applications monitored by Enterprise Manager Grid Control.
After you are alerted to a UI page that is performing poorly, you can then use the second line of page performance monitoring in Enterprise Manager Grid Control. This new end-to-end (or E2E) monitoring functionality in Grid Control is designed to allow you to break down processing time of a page into its basic parts. This will allow you to pinpoint when maintenance may be required to enhance page performance. E2E monitoring in Grid Control lets you break down both the client side processing and the server side processing of a single page hit.
The next page down in the Middle Tier Performance section breaks out the processing time by tier for the page. By clicking on the largest slice of the Processing Time Breakdown pie chart, which in this example is JDBC time, you can get the SQL details. By clicking on the SQL statement, you break out the performance of its execution over time.
The JDBC page displays the SQL calls the system is spending most of its page time executing. This SQL call could be an individual DML statement or a PL/SQL procedure call. In the case of an individual SQL statement, you should examine the segments (tables and their indexes) accessed by the statement to determine their housekeeping (rebuild and reorg) needs. The PL/SQL procedure case is slightly more involved because you must look at the procedure's source code in the Management Repository to identify the tables and associated indexes accessed by the call.
Once you have identified the segments, you can then run the necessary rebuild and reorganization statements for them with the Management Server down. This should dramatically improve page performance. There are cases where page performance will not be helped by rebuild and reorganization alone, such as when excessive numbers of open alerts, system errors, and metric errors exist. The only way to improve these calls is to address (for example, clean up or remove) the numbers of these issues. After these numbers are reduced, then the segment rebuild and reorganization should be completed to optimize performance. These scenarios are covered in Step 3: Use DBA and Enterprise Manager Tasks To Eliminate Bottlenecks Through Housekeeping. If you stay current, you should not need to analyze UI page performance as often, if at all.
Determining future storage requirements is an excellent example of effectively using vital sign trends. You can use two built-in Grid Control charts to forecast this: the total number of targets over time and the Management Repository size over time.
Both of the graphs are available on the All Metrics page for the Management Service. It should be obvious that there is a correlation between the two graphs. A straight line applied to both curves would reveal a fairly similar growth rate. After a target is added to Enterprise Manager Grid Control for monitoring, there is a 31-day period where Management Repository growth will be seen because most of the data that will consume Management Repository space for a target requires approximately 31 days to be fully represented in the Management Repository. A small amount of growth will continue for that target for the next year because that is the longest default data retention time at the highest level of data aggregation. This should be negligible compared with the growth over the first 31 days.
When you stop adding targets, the graphs will level off in about 31 days. When the graphs level off, you should see a correlation between the number of targets added and the amount of additional space used in the Management Repository. Tracking these values from early on in your Enterprise Manager Grid Control deployment process helps you to manage your site's storage capacity proactively. This history is an invaluable tool.
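For example (the figures are illustrative only): if the Management Repository grew by 30 GB in the 31 days after 500 similar targets were added, and the growth graph has since leveled off, you can plan on roughly 60 MB of repository storage for each additional target of that mix.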
The same type of correlation can be made between CPU utilization and total targets to determine those requirements. There is a more immediate leveling off of CPU utilization as targets are added. There should be no significant increase in CPU cost over time after adding the targets beyond the relatively immediate increase. Introducing new monitoring to existing targets, whether new metrics or increased collections, would most likely lead to increased CPU utilization.
Enterprise Manager incorporates a portable browser-based interface to the Grid Control console, as well as the Oracle application server technology, to serve as the middle-tier Management Service tool. The foundation of the tool remains rooted in database server technology to manage both the Management Repository and historical data. This section provides practical approaches to these high availability topics and discusses different strategies when practical for each tier of Enterprise Manager.
For the Oracle database, the best backup practice is to use the standard database tools and do the following:
Have the database in archivelog mode
Perform regular online backups using the Oracle Suggested Backup strategy option available through Grid Control. This strategy uses Recovery Manager (RMAN).
This strategy creates a full backup and then creates incremental backups on each subsequent run. The incremental changes are then rolled up into the baseline, creating a new full backup baseline.
Using the Oracle Suggested Backup strategy also takes advantage of the capabilities of Grid Control to execute the backups. Backup jobs are automatically scheduled through the Grid Control Job subsystem. The history of the backups is available for review and the status of the backup displays in the Job Activity section of the database target's home page.
Use of this job along with archiving and flashback technologies provides a restore point in the event of the loss of any part of the Management Repository. This backup, along with archive and online logs, allows the Management Repository to be recovered to the last completed transaction.
To enable archiving and flashback technologies, use the Recovery Settings page to enable:
Archive Logging (bounce the database and restart all Management Service processes after enabling it)
Flashback Database (bounce the database and restart all Management Service processes after enabling it)
The Block Change Tracking feature, to speed up backup operations
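The same settings can also be applied from SQL*Plus; the following is a minimal sketch (the recovery area size and file paths are illustrative, and the database must be mounted but not open to switch into ARCHIVELOG mode):

-- Illustrative only.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
-- A recovery area must be configured before enabling Flashback Database.
ALTER SYSTEM SET db_recovery_file_dest_size = 100G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE=BOTH;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/oradata/emrep/bct.f';
ALTER DATABASE OPEN;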
A summary of how to configure backups using Enterprise Manager is available in the Oracle Database 2 Day DBA manual.
For additional information on database high availability best practices, review the Oracle Database High Availability Architecture and Best Practices manual.
You can set the frequency of the backup job depending on how much data is generated in the Grid Control environment and how much outage time you can tolerate if a restore is required. If the outage window is small and the Service Level Agreement cannot be satisfied by restoring the database, consider additional strategies for Management Repository availability such as a Real Application Clusters (RAC) or Data Guard database. Additional High Availability options for the Management Repository are documented in the Configuring Enterprise Manager for High Availability paper available from the Maximum Availability Architecture (MAA) page on the Oracle Technology Network (OTN) at http://www.oracle.com/technology/deploy/availability/htdocs/maa.htm
For the Oracle Database, the best practice for recovery is to be prepared. Because Grid Control itself may be unavailable in some recovery situations involving the Management Repository, Management Service, or Management Agents, you will need to use the RMAN command-line interface to enter the recovery commands.
If something happens to affect the Management Repository, Grid Control will not be available to provide the management interface to RMAN.
A sample syntax for database recovery using RMAN follows. For detailed information, review the information on database recovery in the Oracle Database Backup and Recovery User's Guide.
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
When considering recovery of the Management Repository, there are two cases to consider:
Full recovery of the Management Repository is possible
There are no special considerations for Enterprise Manager. When the database is recovered, restart the database and Management Service processes. Management Agents will then upload pending files to the Management Repository.
Only point in time and incomplete recovery is possible
Management Agents will be unable to communicate with the Management Repository correctly until they are reset. You must perform the following steps manually:
Shut down the Management Agent.
Delete the agntstmp.txt and lastupld.xml files in the $AGENT_HOME/sysman/emd directory.
Go to the /state and /upload subdirectories and clear their contents.
Restart the Management Agent.
You must repeat these steps for each Management Agent.
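A minimal shell sketch of that per-agent reset follows (it assumes AGENT_HOME points at the agent installation and that emctl is used to stop and start the agent):

# Illustrative only -- repeat on every monitored host.
$AGENT_HOME/bin/emctl stop agent
rm -f  $AGENT_HOME/sysman/emd/agntstmp.txt \
       $AGENT_HOME/sysman/emd/lastupld.xml
rm -rf $AGENT_HOME/sysman/emd/state/* \
       $AGENT_HOME/sysman/emd/upload/*
$AGENT_HOME/bin/emctl start agent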
In the case of incomplete recovery, Management Agents may not be able to upload data until the previous steps are completed. Additionally, there is no immediate indication in the interface that the Management Agents cannot communicate with the Management Service after this type of recovery. This information would be available from the Management Agent logs or command line Management Agent status. If incomplete recovery is required, you must perform this procedure for each Management Agent.
Because the Management Service is stateless, the task is to restore the binaries and configuration files in the shortest time possible. There are two alternatives in this case.
Back up the entire software directory structure. You can restore the directory structure to the same directory path should a Management Service failure occur. At the same time, back up the Management Agent associated with this Management Service installation. You will need to restore this Management Agent should a restore of the Management Service be required.
Reinstall from the original media.
For any highly available Management Service install, it is a recommended practice that you protect the /recv directory with a mirroring technology. The /recv directory is where the Management Service stages the files it receives from Management Agents before writing their contents to the Management Repository database. After a Management Agent finishes transmitting its XML files to the Management Service, the Management Agent deletes its local copy, so if the /recv directory is lost before its contents are loaded, that metric data is lost.
The recovery of the Management Agent is similar to the Management Service recovery except that the Management Agent is not stateless. There are two strategies that can be used:
If the host name has changed, and you are using an SLB to manage connections, you have to modify the connection pools in the SLB to drop the old host name and add the new name. If you are not using an SLB, each agent that previously pointed to the old OMS host must have its emd.properties file modified to point to the new OMS host name. You can use this procedure to handle a case where you need to bring up a new OMS on a new host because the former machine has crashed.
Assuming the host name has not changed, a disk backup and restore is sufficient.
Delete the agntstmp.txt and lastupld.xml files from the /sysman/emd directory.
Clear the /state and /upload subdirectories of all entries before restarting the Management Agent.
Start the Management Agent. This step forces a rediscovery of the targets on the host.
Reinstall the Management Agent from the original media.
As with the Management Service, it is recommended that you protect the /state and /upload directories with a mirroring technology.
You can switch from a non-shared file system to a shared file system without data loss or downtime by switching each OMS to the shared file system in rolling fashion and ensuring that there is no backlog in the receive directory. To prevent data loss, follow these steps for each OMS:
Shutdown the http server on the OMS.
Wait for the existing backlog to get processed. To determine whether the existing backlog has been processed, continue to monitor the Loader receive directory. Wait until all the files in the receive directory are uploaded.
Run emctl config oms loader -shared yes -dir <sharedfs>. If there is any backlog, this command prompts you to clear the backlog.
Bounce the OMS.
In the event of a node failure, you can restore the database using RMAN or OS commands. To speed this process, implement Data Guard to replicate the Management Repository to a different hardware node.
If you are restoring the Management Repository to a new host, restore a backup of the database and manually modify the emoms.properties file for each Management Service to point to the new Management Repository location. In addition, you must update the targets.xml file for each Management Service to reflect the new Management Repository location. If there is a data loss during recovery, see Recovering the Management Repository for information.
To speed Management Repository reconnection from the Management Service in the event of a single Management Service failure, configure the Management Service with a Transparent Application Failover (TAF) aware connect string. You can configure the Management Service with a TAF connect string in the emoms.properties file that will automatically redirect communications to another node using the FAILOVER syntax. An example follows:
EM=
  (description=
    (failover=on)
    (address_list=
      (failover=on)
      (address=(protocol=tcp)(port=1522)(host=EMPRIM1.us.oracle.com))
      (address=(protocol=tcp)(port=1522)(host=EMPRIM2.us.oracle.com)))
    (address_list=
      (failover=on)
      (address=(protocol=tcp)(port=1522)(host=EMSEC1.us.oracle.com))
      (address=(protocol=tcp)(port=1522)(host=EMSEC2.us.oracle.com)))
    (connect_data=(service_name=EMrep.us.oracle.com)))
Preinstall the Management Service and Management Agent on the hardware that will be used for Disaster Recovery. This eliminates the step of restoring a copy of the Enterprise Manager binary files from backup and modifying the Management Service and Management Agent configuration files.
Note:
In the event of a disaster, do not restore the Management Service and Management Agent binaries from an existing backup to a new host because there are host name dependencies. Always do a fresh install.