05156 - Record deletion and index performance?

This documentation is copyright by SAP AG.

Record deletion and index performance?

Hi Jim, Volker,

I'm no expert on archiving either but have done a fair amount of archiving
testing with production orders and sales orders. A couple of points:

1) The archiving process will delete the records as part of the process
if/when you tell it to. If your production archive variants are set to
delete immediately after archive, a delete job is dropped off after each
archive file is built - the way to go if you have most of the system
resources to yourself. Otherwise you will have a "delete button" to do this
after you're satisfied with the archive run.

2) Not only do your journal receivers consume a lot of space (if you aren't
running savdltrcv - this is a good time to turn it on), your IFS grows in
almost direct proportion to the data archived as well. System storage used
always goes UP after an archiving run until you reorg (compress out deleted
records).
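If the receivers aren't being saved and deleted automatically, the usual manual sequence is to detach the attached receiver, save it, and delete it. A minimal CL sketch - the library, journal, and receiver names here are hypothetical (SAP installations typically journal the data library to a QSQJRN journal):

```
/* detach the current receiver and attach a new, system-generated one */
CHGJRN JRN(R3PRDJRN/QSQJRN) JRNRCV(*GEN)

/* save the detached receiver to tape, then delete it to free the space */
SAVOBJ OBJ(QSQJRN0001) LIB(R3PRDJRN) OBJTYPE(*JRNRCV) DEV(TAP01)
DLTJRNRCV JRNRCV(R3PRDJRN/QSQJRN0001)
```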

3) I know of no single archive object to clean up COEP. You can use the
archiving tools (in SARA) to find all of the archiving objects that will hit
COEP. (it's a pretty big list)

4) COEP is one of the files you can almost never reorg with the SAP system
up as it is "always" locked for update by someone.

5) An old "programmer's tale" says when you are reorganizing huge physical
files, it is more efficient to drop the views (logical dependencies) first,
then reorg the physical and re-create the logicals manually (to rebuild the
alternate indices). Personally, I'm usually "lazy" - I let the DBMS find all
the dependent indices and take care of rebuilding them.

6) DB2/400 does a pretty darn good job of handling index fragmentation all
by itself without much intervention. The reorg gets back some "usable
space" on the WRKSYSSTS screen, but all tables in SAP are defined
REUSEDLT(*YES) (reuse deleted record space), so you "get it back" with new
inserts anyway - you are talking about a lot of new inserts though, so I
think you should do the reorg.
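You can check the reuse-deleted-records attribute for any table with DSPFD; the library name below is hypothetical:

```
/* TYPE(*ATR) lists the file attributes, including the line     */
/* "Reuse deleted records . . . :   *YES"                       */
DSPFD FILE(R3PRDDATA/COEP) TYPE(*ATR)
```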

7) A fairly common "alternative reorg" is a copy file out with
COMPRESS(*YES) (compress out deleted records), a clear physical file member
of the real file, and a copy file back. Why bother? The reorg process was
fixed years ago (and probably does that under the covers anyway).
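For reference, that copy-out/copy-back sequence looks roughly like this in CL. WORKLIB is a hypothetical work library with enough free space; COMPRESS(*YES) drops the deleted records during the copy:

```
/* copy only the live records to a work file, creating it on the fly */
CPYF FROMFILE(R3PRDDATA/COEP) TOFILE(WORKLIB/COEPTMP) +
     MBROPT(*REPLACE) CRTFILE(*YES) COMPRESS(*YES)

/* empty the real member, then copy the live records back */
CLRPFM FILE(R3PRDDATA/COEP)
CPYF FROMFILE(WORKLIB/COEPTMP) TOFILE(R3PRDDATA/COEP) MBROPT(*ADD)
```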

Rick

-----Original Message-----
From: Gueldenpfennig, Volker [mailto:volker.gueldenpfennigZs...]
Sent: Thursday, February 21, 2002 1:31 AM
To: 'jdollZp...'
Cc: Sap400 (E-mail)
Subject: RE: Record deletion and index performance?


Hi Jim,

Unfortunately I don't know the SAP archiving process.
I thought that SAP would archive the old stuff out and then automatically
delete every archived row from the table. In your case this would
"automatically" result in a table with 50 GB of used records and 50 GB of
deleted records. You won't have issues with index fragmentation on the
AS/400. But as long as you don't use the command RGZPFM on this table, the
table and the indexes won't shrink and you won't get an improvement in
access times, because DB2/400 still has to look through all the records -
although it shouldn't get any worse than it is at the moment.
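The reorganize itself is a single command; note that RGZPFM needs an exclusive lock on the member, which is why it normally has to run during downtime (library name hypothetical):

```
/* compress out the deleted records and rebuild the access paths */
RGZPFM FILE(R3PRDDATA/COEP)
```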

If the SAP archiving process only "copies" the data out and leaves it up to
you how to get rid of the old records, your 3 choices are fully correct.
> 1 - OS/400 reorganizes it in place--pretty dangerous,
> probably not done
advantage:
- can be done while the system is active
disadvantage:
- use native SQL WITHOUT commitment control ("WITH NC"!) for this big
DELETE statement, because otherwise it would result in a terrible locking
situation!
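Such a delete might look like the following. The library name and the WHERE predicate are purely illustrative (GJAHR is the fiscal-year column of COEP); the important part is the WITH NC clause, which runs the statement without commitment control:

```
-- delete CO line items up to a (hypothetical) cutoff fiscal year;
-- WITH NC = no commitment control: each deleted row is final
-- immediately, so row locks don't pile up waiting for a commit
DELETE FROM R3PRDDATA/COEP
  WHERE GJAHR <= '1999'
  WITH NC
```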

> 2 - OS/400 copies everything out, and copies only the good
> stuff back--seems inefficient
advantage:
disadvantage:
- can't be done while the system is active
- very inefficient

> 3 - OS/400 copies just the good stuff out, then renames tables
advantage:
disadvantage:
- can't be done while the system is active
- you have to keep track of the indexes, because they would be renamed as
well!

I think approach #1 is the absolute best idea. As everything is journaled
to the journal receiver, you would have the possibility to recover, if
really necessary. This is the way it is always done within SAP: just delete
the rows you want to remove with SQL. You needn't care about other things.
You can do it during uptime, although I would pick a time when the system is
under low usage, for performance reasons. You only have to take care that
the SQL is really correct and doesn't by any chance select exactly the
opposite set of rows! And it has one more big advantage: if you see that it
is taking too long to complete, you can just kill the delete job. As you are
running without commitment control, the deletion simply stops there, all
records deleted so far stay deleted, and you can issue the same statement
again afterwards. You repeat that as often as needed until you are
complete.
Run it in batch instead of dialog! (You could do this with RUNSQLSTM or
STRQM.) Otherwise you might see big performance degradation because of
CFINT.
Please keep in mind that, as everything is journaled, your journal receiver
will grow by about the size you are deleting - in your case about 50 GB!
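A sketch of submitting such a delete to batch - the source file, member, and job names are all hypothetical, and the DELETE statement itself would live in the DELCOEP source member:

```
/* run the statement from a source member, in batch, without commit */
SBMJOB CMD(RUNSQLSTM SRCFILE(WORKLIB/QSQLSRC) SRCMBR(DELCOEP) +
           COMMIT(*NONE) NAMING(*SYS)) +
       JOB(DELCOEP) JOBQ(QBATCH)
```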

Hope this helps.

Volker

> -----Original Message-----
> From: Jim Doll [mailto:jdollZp...]
> Sent: Mittwoch, 20. Februar 2002 21:48
> To: sap400Zm...
> Cc: Margie Teppo
> Subject: Record deletion and index performance?
>
>
> I'm going to state this as clearly as I can, but I don't
> understand it completely myself:
>
> We've got an archive consultant in who initially raised the
> question. His concern is what happens when many records of a
> table are deleted? In the past (in, ...ahem... a non-AS/400
> shop), he's witnessed a problem where many records get
> deleted, which fragments the corresponding index.
> Apparently, with a "fragmented index", the optimizer
> eventually starts processing sequentially. Never heard of
> such a thing on the AS/400, but I've never deleted half of a
> 100GB table before, either! (Working on COEP, by the way, in
> a 4.6C environment.)
>
> Related question: assuming that we delete half of a 100GB
> table and try to reorganize it. I see three choices:
>
> 1 - OS/400 reorganizes it in place--pretty dangerous,
> probably not done
> 2 - OS/400 copies everything out, and copies only the good
> stuff back--seems inefficient
> 3 - OS/400 copies just the good stuff out, then renames tables
>
> If I had to guess, I would think it follows #3. My question
> comes down to this: if I'm reorganizing a 100GB table that's
> got half of the records deleted, what do I need for free
> space in order to reorganize--100GB or 50GB?
>
> Thanks for any replies!
>
> Jim Doll, Perrigo SAP Technical Support
