01850 - Database Size affecting Performance



There are two parts to your question. The RGZ and deleted-row issues
concern how much disk IO is necessary. The disk rebalancing is concerned
with making sure that your data is spread across all drives evenly so that
one drive doesn't become a bottleneck for system performance.

RGZPFM will help when you have a lot of deleted rows. Not so much when
there aren't any. Here's why. Usually you'll use a key through the index
to access an individual row. The OS gets the page (or pages) of data from
disk where that row lives. Say there are 10 rows per page. If most
of them are deleted, then you are more likely to have to do another disk IO
next time you want a row from this file. After a while, you've paged in a
lot of pages from disk to get only a few "good" rows. Eventually you'll
need to start throwing the old pages out to make room for new ones to come
in. If no rows are deleted, then you have a better chance that the row
you want has already been paged in by a previous disk access, and your hit
ratio goes up.
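
As a minimal sketch (the library and file names below are just examples,
not anything from Mike's system), getting rid of the deleted rows is a
single command:

    RGZPFM FILE(R3PRDDATA/VBAK)      /* example lib/file names */

RGZPFM needs an exclusive lock on the member, so run it when R/3 isn't
using the file.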

Reusing deleted rows does have a little overhead in terms of CPU. However, it is
worth it because it "fills in" the "holes" in files caused by deleted rows,
and makes disk IO (paging) more efficient as in the example I gave above.
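
For reference, reuse is a per-file attribute, so it's one CHGPF per table
(example names again):

    CHGPF FILE(R3PRDDATA/VBAK) REUSEDLT(*YES)   /* example names */

From then on, new inserts fill the deleted-row slots instead of always
extending the member, which is what keeps the "holes" from piling up.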

When reorganizing files, you might want to have them reorganized by key.
It is a little more likely that the "next" row you'll want is the next one
in the primary index rather than one in random order, so keyed order can
help the paging.
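
The KEYFILE parameter controls that. A sketch, with example names again;
KEYFILE(*FILE) assumes the physical file itself is keyed - if the primary
key lives in a separate keyed file, name that file on KEYFILE instead:

    RGZPFM FILE(R3PRDDATA/VBAK) KEYFILE(*FILE)  /* example names */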

Disk rebalancing deals with the data on the actual disks. Normally, you
shouldn't have to mess with it because OS/400 automatically tries to keep
the %used on each disk equal as it goes. If you do WRKDSKSTS and see some
units with a %used very different from the others (say more than 10 points
apart), you might want
to run STRASPBAL with the *CAPACITY option. Make sure you do this with R/3
down to get an effective spread.
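
In command form, the check-then-balance looks like this (assuming the
system ASP, number 1):

    WRKDSKSTS                            /* eyeball the % used per unit */
    STRASPBAL TYPE(*CAPACITY) ASP(1) TIMLMT(*NOMAX)

TIMLMT(*NOMAX) lets it run to completion; ENDASPBAL stops it early if you
need the system back sooner.
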
Another option is to do TRCASPBAL first. This lets the OS collect stats
about which disk locations get accessed the most, and then STRASPBAL can
be used with the *USAGE option to spread the highly accessed data evenly
across the arms. Here in the lab I haven't noticed that TRCASPBAL
causes any noticeable performance difference, but you might notice it if
your disks are already really busy... a little pain for a day might get you
better performance though.
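
The whole sequence would look something like this (ASP 1 assumed again):

    TRCASPBAL SET(*ON) ASP(1) TIMLMT(*NOMAX)       /* start collecting  */
    /* ...let a representative day's workload run...                    */
    TRCASPBAL SET(*OFF) ASP(1)                     /* stop collecting   */
    STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)   /* move hot data     */
    TRCASPBAL SET(*CLR) ASP(1)                     /* clear the stats   */
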
STRDSKRGZ can help too, but it mostly just collects up the unused parts of
each disk and makes them contiguous. STRASPBAL does the same and more.
The help for these commands is pretty good. Type the command name, then
PF4, then PF1 on the top line to read all about them.
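
For completeness, the disk reorganize version takes the same sort of
parameters (ASP 1 assumed):

    STRDSKRGZ TIMLMT(*NOMAX) ASP(1)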

Ron Schmerbauch 507-253-4880 rschmerbZu...
iSeries 400 - ERP Development - Rochester, MN


Rick Githens <RGithensZg...> on 04/24/2001 01:33:36 PM

To: "'SAP400 ListServer'" <sap400Zm...>
cc:
Subject: RE: Database Size affecting Performance



As to:
1) We typically reorg all physical files once a quarter (archived or not)
just because it's easier - one CL pgm submits a zillion jobs (really about
12K, I think). The CL pgm is another quick-and-dirty that is a modified
version of the RTVCLSRC for RCLSPACE. We used to do it by member and check
for deleted records first, but the outfile was 8x as large and the jobs
didn't run any faster (we also didn't have thousands of joblogs to delete
either, but ...). As to the benefits, I have no statistics, but face time
tells me reorging the top 20 space-eaters (deleted-records-wise)
accomplishes the same thing as the "one swell foop" approach.

2) The command is STRDSKRGZ - Start Disk Reorganization. We have only used
it when adding disk, so I'm not sure what the gain is on existing systems.
You could always use the old way - back up your system, clear your system,
reload your system, and let the restore balance the disk arms - just
kidding, I don't ever want to go back there.

Rick
-----Original Message-----
From: Mike Martin, IS, Sousa [mailto:MMartinZs...]
Sent: Tuesday, April 24, 2001 1:32 PM
To: sap400Zm...
Subject: RE: Database Size affecting Performance



Thanks to all for their replies. Excuse the pun, but it appears that size
does matter. 🙂

I do have a follow-up question, though:

As an SAP database library grows (let's say it doubles from 120GB to 240GB),
what are the performance benefits of:

1. Table reorganization?
2. Disk rebalancing?

For part 1, we typically reorganize tables where we have archived data. For
this, we normally pick just the obvious, large tables associated with that
item (e.g. for SD_VBAK, we will reorg VBAK and VBAP).

a.) What would be the added benefit of reorganizing any table that has
deleted records?
b.) In addition, we have the tables reuse deleted records. Will this
degrade performance? Will reorganizing the tables help abate any
performance degradation, or is there minimal gain, since DB2/400's
algorithms for searching indexes are independent of the tuple order?

c.) Lastly, are there any benefits to reorganizing tables w/out deleted
records? (i.e. in Oracle land, a reorganization of a table w/out deleted
records is helpful to recreate the table in contiguous space, combining all
the extents that were created due to growth).

As for part 2, I am uncertain what command is used for this function
(perhaps a part of DDM?). Our former AS/400 administrator explained its use
as mainly when adding more disks to an ASP. But my question is, will this
help performance as well, even when no disks were added? In other words,
since some tables are now 10x larger than they were at initial installation
and conversion, will a rebalancing help performance by optimizing the
spread of data, at its current size, over several disks/arms?

Sorry to be such a windbag, but inquiring minds want to know. Also, I hope
this is a good topic for the group to discuss.

Thanks in advance for your input.

Regards,
Mike D. Martin
SAP Basis Administrator
SOLA Optical, USA
707-763-9911 x6106
mmartinZs...


-----Original Message-----
From: Ron Schmerbauch [mailto:rschmerbZu...]
Sent: Monday, April 23, 2001 2:07 PM
To: sap400Zm...
Subject: Re: Database Size affecting Performance



You guys are good... you've got all the answers before I have a chance to
reply...

Yes, it will matter some. How much? It depends. Here are some of the
factors that would come into play.
On the negative side,
- Memory... small files might tend to naturally become memory resident if
they are getting used a lot. If they are big, they might not fit anymore
and would get swapped to disk.
- SAP buffers... the same memory comment, but with SAP buffers instead of
DB files. Of course, DB files are sometimes in the buffers...
- An index is a binary tree. Searches are "fast", but the bigger the tree,
the longer it may take to traverse it.
- If the extra space is eaten up by deleted rows, you are just making the
system wade through garbage.

On the positive side,
- A bigger DB means you will probably have more disk drives, and thus a
better spread across more arms.

In general, a 10x increase across a DB lib doesn't mean you need 10x more
CPU and memory, but a modest increase in needs could be expected.
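
To put a rough number on the index point above: a balanced index over n
keys is about log2(n) levels deep, so a table growing 10x - say from 20
million to 200 million rows - adds only about log2(10), or 3.3, levels
to each key lookup. Slower, yes, but nowhere near 10x slower.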

Ron Schmerbauch 507-253-4880 rschmerbZu...
iSeries 400 - ERP Development - Rochester, MN


"Betsy Strebe" <bstrebeZt...> on 04/23/2001 02:14:33 PM

To: "Mike Martin, IS, Sousa" <MMartinZs...>
cc: sap400Zm...
Subject: Re: Database Size affecting Performance




Mike,

Surely this has to be so; it only makes sense. But I would also say that
in addition to file growth you have to look at the file organization,
because a file sorted by its primary key and purged of deleted records
will perform much better than one left to expand and contract on its own.
A file that grows in an orderly fashion with minimal deletion won't be as
much of a problem child as one that sees heavy churn. Periodically, I run
the deleted-rows analysis in DB02 and look for files with large numbers
of deleted rows and/or a large amount of storage in use. I reorganize
these on the primary key. For example, this helped tremendously after the
initial loading of our CO-PA files, where they were throwing in and
deleting large amounts of history data - and it gained a lot of disk
space, around 5GB.

Is anything performance related ever simple? Ah that it would be.

regards,
Betsy


~~~~~~~~~~~~~~~~~~~~~~
Betsy Strebe
bstrebeZt...
Systems Technical Manager
Trinchero Family Estates
Sutter Home Wines, Inc.
(707) 963-3104 ext. 2439
www.tfewines.com
~~~~~~~~~~~~~~~~~~~~~~~





"Mike Martin,
IS, Sousa" To: "SAP - AS/400 List
(E-mail)" <sap400Zm...>
<MMartinZsola cc:
.com> Subject: Database Size
affecting Performance

04/23/01
10:02 AM








SAPers,

I have a (hopefully) simple and generic question for the group. As
database size grows on DB2/400, will performance degrade?

For example, if an SAP database grows from 150GB to 300GB over two years,
will this correlate into degraded performance? If a table grows from 20MB
to 20GB will it take longer on reads, writes, and deletes? Will more
memory/CPU cycles be used for the index search?

If not, this tells me that a 10GB system will perform the same as a 10TB
system (all other machine specs being equal) and the only difference is
that one system houses a lot more disk. However, my belief is the
opposite: that DB size does affect system performance, and that simply
having larger tables will lead to longer reads, inserts, changes, and
deletes. But that's just my opinion, I could be wrong... What's your
opinion?

Thanks in advance,
Mike D. Martin
SAP Basis Administrator
SOLA Optical, USA
707-763-9911 x6106
mmartinZs...










Length: 12794 Date: 20240603 Time: 133222     sap01-206 ( 3 ms )