
RSORAISQN - Index Storage Quality Analysis



Purpose

This report performs mass storage quality analysis and defragmentation of nonpartitioned B*TREE indexes on the local database or on a remote database.

Prerequisites

Before using this report, check SAP Note 970538 for prerequisites, latest news, and so on. In particular, the side effects of Oracle online index rebuilds and index coalesces need to be known.

Workingsets

A workingset is a collection of indexes to be processed together. It is identified by an ID, so that actions on these indexes can be triggered by specifying only the workingset ID and the action.

A new workingset is defined by choosing an ID for the workingset, restricting the set of indexes on the selection screen, and choosing the action 'Start Workingset'.
If no index, table, tablespace, storage quality or index size is specified, the workingset is not restricted by these criteria. In particular, if all fields are left empty, ALL indexes are processed.
When a workingset is started, all indexes matching the selection criteria on the selection screen are collected into the workingset (workingset status 'Initializing') and then processed (workingset status 'Active').

When the analysis and/or defragmentation of an index has been executed successfully, the index is removed from the workingset and history information about the successful action(s) is generated. Indexes with failed operations remain in the workingset and do not generate history information.

If all indexes of a workingset are processed without any error, the workingset is empty. Empty workingsets are deleted automatically. The history information of each index action contains the workingset ID and the start time of the workingset, so that history information can be grouped by indexes processed together.

If at least one operation on one index failed, the workingset continues to exist and can be restarted after the root cause of the failed operation has been eliminated. All indexes still belonging to the workingset are reprocessed when the workingset is restarted. Alternatively, the workingset can be cleaned up, ignoring that some indexes have not been processed as intended.

It is also possible to stop a workingset and restart it at a later point in time. Restarting a stopped workingset processes all indexes that were not processed, or were processed with an error, in the stopped run.
Stopping a workingset only sets a flag signalling the currently running process(es) to terminate. The ongoing operation(s) on the index(es) currently in process are finished normally. Depending on the index size and operation, it may take a while until the workingset status changes from 'Terminating' to 'Stopped'.

Before reusing the ID of a workingset, the old workingset with this ID must have been processed successfully or manually cleaned up.

Selection Screen

Depending on the selected action, the data specified under Workingset Selection and Object Restriction is either evaluated or ignored:

Action                          Workingset   Object        Blank Connection
                                Selection    Restriction   means
Show History                    optional     optional      *
Start Workingset                necessary    optional      DEFAULT
Show Status of Workingsets      optional     connection    *
Stop Workingset                 necessary    ---           ---
Restart Workingset              necessary    ---           ---
Cleanup Workingset              necessary    ---           ---
Set Stop Status for Workingset  necessary    ---           ---
Cleanup History                 ---          ---           ---

If no explicit connection name is specified, or DEFAULT is specified as the connection name, the indexes of user sapr3/sap on the local DB are processed.
To process the indexes of a user in a remote DB, this remote DB must be maintained in transaction DBCO.

If no index, table, tablespace, storage quality or index size is specified, the workingset to be analyzed is not restricted by these criteria. In particular, if all fields are left empty, ALL indexes are processed.

It is recommended to process only indexes larger than 10 MB to keep runtime, data volume, and so on small. The algorithm for calculating the storage quality is optimized for large indexes. Small indexes can have low storage qualities although they are not unbalanced. Low storage qualities for small indexes, or for indexes on tables with few rows but a large initial extent, can be ignored.

Actions in Detail

Show History

Use this action to get information about successful index storage quality analyses and defragmentations in the past. Only successful operations are tracked in the history.
The output can be restricted using the Object Restriction fields on the selection screen. If only the latest action of each type should be displayed for each index, the output can be restricted further by flagging 'Only last Action'. This flag is evaluated after all other selection criteria.

Column description of the history view

Action
Possible values are 'Defragment' or 'Analyze'

Method
Possible values for action 'Defragment' are 'Coalesce' or 'Rebuild' and for 'Analyze' either 'Fast' or 'Exact'

Startdate, Starttime, Enddate, Endtime, Duration
Timing information showing when the action on the index was started, when it ended, and how long it took

Storage Quality
For action 'Defragment' this column always contains '-1' because defragmentation does not calculate a storage quality. For action 'Analyze' the storage quality in percent is shown.

KB
Size of the index when the action was started

Workingset, WS Startdate, WS Starttime
ID and start timestamp of the workingset this action belonged to. These columns can be used to group together indexes processed in one run of a workingset.

Start Workingset

Use this action to define and process a new set of indexes.
If both Defragment and Analyze are selected, the storage quality analysis of each index is performed directly after its defragmentation, before the defragmentation of the next index is started.

Defragmentation

To defragment the indexes, either a coalesce or an online rebuild can be used.
If option 'Check Locks' is chosen, the index rebuild is not started if there are currently open transactions on the corresponding table. This reduces (but does not exclude) the probability of complete table locks at the beginning of the index rebuild. The rebuild of the index is attempted again at a later point in time (after at least 120 seconds). If the rebuild cannot be started after 10 attempts, this is tracked as an error.
If option 'Compute Statistics' is chosen on Oracle 9, the command 'alter index # rebuild online compute statistics' is executed, performing an implicit statistics calculation during the rebuild. On Oracle 8.1 this option is ignored because the command extension is not available. With Oracle 10, statistics are always calculated implicitly.
If option 'Tablespace Coalesce' is chosen, the free space in a tablespace is coalesced after indexes with a summed size of more than 100 MB have been rebuilt in that tablespace.
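For illustration, the Oracle statements behind these options correspond roughly to the following sketch. The owner, index, and tablespace names (sapr3, "VBAP~0", PSAPBTABI) are placeholders, and the report may add further clauses or perform additional checks (such as the lock check) that are not shown here.

-- Defragment by coalesce: merge adjacent free space within the index
alter index sapr3."VBAP~0" coalesce;
-- Defragment by online rebuild; on Oracle 9 optionally with implicit statistics
alter index sapr3."VBAP~0" rebuild online;
alter index sapr3."VBAP~0" rebuild online compute statistics;
-- Coalesce the free space of a tablespace after more than 100 MB of rebuilt indexes
alter tablespace PSAPBTABI coalesce;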

Analyze

For calculating the storage quality, the number of index blocks, the number of rows in the table, and the average length of an index entry have to be determined. This information can either be obtained from a current snapshot, which leads to an index fast full scan on the index (Exact), or from the Oracle statistics (Fast).
The runtime of the exact method depends on the size of the index. The fast method has a constant runtime, but may be less exact. Usually the fast method is sufficient. Because of the significant runtime difference, the fast method is recommended.
If statistics are not in place when using the fast method, the exact method is automatically taken as a fallback. Table monitoring is taken into account by the fast method if it is switched on.
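As an illustration of the two methods, the required figures could be obtained with statements like the ones below. These are only a sketch with placeholder owner, table, and index names; the report's actual statements and its exact storage quality formula are not reproduced here.

-- Fast method: read the figures from the existing Oracle statistics
select i.leaf_blocks, t.num_rows
from dba_indexes i, dba_tables t
where i.owner = 'SAPR3' and i.index_name = 'VBAP~0'
and t.owner = i.table_owner and t.table_name = i.table_name;

-- Exact method: take a current snapshot via an index fast full scan
select /*+ index_ffs(v "VBAP~0") */ count(*)
from sapr3.vbap v;

-- The storage quality then relates the space a densely packed index would need
-- (number of rows * average entry length) to the space currently allocated to the index.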

Show Status of Workingsets

As long as a workingset has not been processed completely successfully, it can be displayed by choosing this action. By doubleclicking the appropriate field of the output, the indexes currently in process, still to be processed, processed successfully, or processed with errors can be displayed.
If a workprocess is working on a workingset but is currently not defragmenting or analyzing an index (for example, while waiting to take the next index), no entry is displayed for this workprocess.
'Show Status of Workingsets' can be used to check the success of processing a workingset and to see which errors came up.

Column description of the workingset header view

Workingset, WS Startdate, WS Starttime
ID and start timestamp of the workingset. Every (re)start of a workingset sets a new timestamp.

Last Update Date, Last Update Time
Timestamp of the last time an action on an index was completed in this workingset. If the timestamp is far in the past and the status of the workingset is 'Active', this may indicate a hard termination of the workingset (see also 'Set Stop Status for Workingset' below).

Status
Possible values are
- 'Initializing': The workingset is being created by reading data from indexes matching the conditions of the selection screen. The initialization can take up to several minutes if the number of indexes to be processed is large.
- 'Active': The workingset is currently being processed.
- 'Terminating': The workingset should be stopped (see also 'Stop Workingset' below). Indexes currently in process are finished normally. Once all processes working on the workingset have finished their current index, the workingset status changes to 'Stopped'.
- 'Stopped': No process is currently working on the workingset. There are indexes in the workingset that need to be (re)processed. This status indicates either that the workingset was manually stopped or that processing of all indexes was attempted and at least one error occurred.
If all indexes of a workingset are successfully processed the workingset is automatically deleted and its name can be reused.

Active Processes
The number of workprocesses working on the workingset. The first workprocess is assigned to a workingset by 'Start Workingset'. Further workprocesses can be assigned to the workingset by 'Increase Workingsetparallelity'.
As long as the workingset is in status 'Initializing', the number of active processes is set to one, regardless of how many processes were started using 'Increase Workingsetparallelity'.
Doubleclicking this field when the workingset is already initialized shows which indexes the processes are working on. If some of the processes are currently not processing an index, fewer lines than indicated by 'Active Processes' are returned.

Indexes Total
Number of indexes assigned to the workingset when it was (re)started.

Indexes to do
Number of indexes for which processing has not yet started or which are currently being processed. Doubleclicking this field returns detailed information about the indexes.

Indexes ok
Number of indexes that have been successfully processed. Doubleclicking this field returns the history rows for those indexes.

Indexes error
Number of indexes that have been processed with errors. Doubleclicking this field shows the Oracle error code.

KB Total, KB To Do, KB OK, KB Error
Shows the volume-based progress of processing the workingset. This is an alternative to the progress based on the number of indexes.

Action
Possible values are: Defragment(Coalesce) or Defragment(Rebuild) and/or Analyze(Fast) or Analyze(Exact)

Options
Further selection screen options like 'Lock Check', 'Tablespace Coalesce', 'Rebuild Options'

Increase Workingsetparallelity

With this action, another instance of the report can be started to work on a workingset that is currently being processed. In other words, the number of processes working in parallel on one workingset can be increased by one.

In general, three levels of parallelism are available:

Parallelism between workingsets with different IDs

Several workingsets can be processed in parallel. It is recommended to assign each index to only one workingset, to avoid the possibility of parallel processing of the same index.
This kind of parallelism can be used, for example, if one set of indexes should be processed daily and another set weekly. During the weekly execution, the daily set of indexes can also be processed.

Parallelism within a workingset (this can be increased by 'Increase Workingsetparallelity')

Several workprocesses can work on the same workingset. To do this, the processing of a workingset is started in the normal way, and afterwards the workingset parallelism is increased by scheduling further jobs with the action 'Increase Workingsetparallelity'.

Stopping the processing of a workingset stops all processes working on this workingset.
This kind of parallelism can be used if the parallel degree should be increased dynamically, for example if it turns out after starting a workingset that workprocess/CPU/memory/IO resources can still cope with the load of additional processes.

Parallelism within a command on an index of a workingset

For rebuilding or exactly analyzing an index, a parallel degree can be specified. The parallel degree is handed over to Oracle for the rebuild command (rebuild) or the index fast full scan (exact analysis), as sketched below.
This kind of parallelism can be used if only one workprocess should be allocated. This option cannot be used to run the fast storage analysis in parallel.
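As a sketch, a parallel degree of 4 handed over to Oracle could look as follows (again with placeholder names; the statements actually generated by the report may differ):

-- Parallel online rebuild
alter index sapr3."VBAP~0" rebuild online parallel 4;
-- Parallel index fast full scan for the exact analysis
select /*+ index_ffs(v "VBAP~0") parallel_index(v, "VBAP~0", 4) */ count(*)
from sapr3.vbap v;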

Stop Workingset

Active workingsets can be stopped with this option. The remaining indexes can be processed later using 'Restart Workingset' or cleaned up using 'Cleanup Workingset'.

Stopping a workingset just sets a flag signalling the processes working on the workingset to terminate after the processing of their current index is done. If the current index is large, it can take a while until the status of the workingset changes from 'Terminating' to 'Stopped'.

Restart Workingset

Stopped workingsets can be restarted with this option. A restart may be necessary after stopping a workingset or to reprocess indexes for which errors occurred during processing.

Cleanup Workingset

If the remaining indexes in a workingset should not be processed anymore, the workingset can be deleted with this option. Afterwards, the ID of the workingset can be reused.

Set Stop Status for Workingset

This functionality is needed after a hard termination during processing.
A hard termination of a workingset is a termination that cannot be handled by the report itself. Usually hard terminations result in a short dump. Hard terminations can be caused, for example, by stopping the database that is currently being accessed, or by killing the workprocess working on a workingset.
The problem with a hard termination, compared with a controlled stop of a workingset, is that the report can no longer adjust the status of the workingset to 'Stopped'. Using the option 'Stop Workingset' after a hard termination would change the status to 'Terminating', but this status would never be left. As a result, the workingset can neither be restarted nor cleaned up, nor can the ID of the workingset be reused.
To get out of this situation, the administrator needs to check whether there are still processes working on this workingset. If so, they need to be terminated. Once it is ensured that no workprocesses are working on the workingset anymore, the option 'Set Stop Status for Workingset' can be used to enforce the 'Stopped' status. Afterwards the workingset can be restarted or cleaned up.
To check which workprocesses are working on a workingset, choose 'Show Status of Workingsets', doubleclick the 'Active Workprocesses' field of the workingset, and read the number of the workprocesses in the column 'Status Description'.

Cleanup History

With this action, all history information about successful defragmentations and analyses older than the specified number of days is deleted. '0' means the complete history is cleaned up.

Example Usage

Cleanup Job

Before you start using this report, define a cleanup batch job ISQ_CLEANUP with the following variant, so that the history written by future actions is cleaned up regularly.
1. Variant name: CLEANUP:
Keep the last 100 days.
Run the job once a week.

Complete DB check scenario

To avoid unnecessary IO and buffer cache usage caused by fragmented indexes, define a batch job ISQ_ with the following two variants.
1. Variant: ALL__STEP1:
Name the workingset ID like the variant. Analyze (fast) all objects larger than 10000 KB.
2. Variant: ALL__STEP2:
Name the workingset ID like the variant. Rebuild (online) and analyze (fast) all objects larger than 10000 KB and with a storage quality of less than 60%. Use the options Lockcheck, Tablespace Coalesce and Compute Statistics.
Schedule the batch job on demand or periodically during times of low load, ideally after a statistics calculation.
Check the job log afterwards. If errors occurred, handle them and restart the workingset, or clean up the workingset to enable reuse of the workingset ID.
To force reuse of a workingset ID, automatic cleanup steps can be added to the batch job BEFORE the above steps are executed.






This documentation is copyright by SAP AG.
