See http://www.statspackanalyzer.com
There's even a sample report - which
happens to have a link to a Burleson article
on average about every third line of text.
Here are a couple of details from the report:
---------------------------------------------------------------
Parses: 63/s
Hard parses: 0/s
You are performing more than 63 SQL parses per second. A parse
is the process of executing your SQL, checking for proper security
authorization, checks for the existence of tables, columns, and other
referenced objects, and generating an execution plan. Your high parses
suggest that your system has many incoming unique SQL statements or
that your SQL is not reentrant (i.e. literal values in the WHERE
clause, not using bind variables)
------------------------------------------------------------------
Interesting - so if that's what "Parses" are, what are "Hard Parses" ?
Parses that were for really difficult SQL statements perhaps ?
And if a "parse" executes your SQL, what does an "execute" do ?
(Actually, DDL can be executed on a parse call, and statements
can be optimized on the execute call - but that's just one of those
little details).
For the less well informed, "Parses" which are not "Hard Parses"
are re-using shared cursors which have not been explicitly held open
for re-execution by some layer of the client software. And if you have
your session_cached_cursors set, then a parse IS nearly as efficient
as re-using a held cursor.
If you want to ANALYZE parses, you have to look at three or four
statistics at once - then check the activity and impact of the relevant
latches.
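For example - statistic and latch names as they appear in v$sysstat
and v$latch in 9i/10g, so treat this as a sketch rather than a script:
select name, value
from   v$sysstat
where  name in ('parse count (total)', 'parse count (hard)',
                'execute count', 'session cursor cache hits');
select name, gets, misses, sleeps
from   v$latch
where  name in ('shared pool', 'library cache');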
----------------------------------------------------------------------
Top Timed Events:
db file sequential read 22%
Moving your indexes to solid state disks can reduce the
amount of time spent waiting for this event.
CPU time 21%
Solid state disks help to increase the CPU time by reducing
IO related wait events. If this is the main wait event, tuning
SQL statements and/or increasing server CPU resources
will provide the greatest performance improvement.
------------------------------------------------------------------------
I wonder what the other 60% was ? Neither of these figures seems too
extreme - after all you've got to spend your time somewhere. But
Solid State Disk for both as a 'solution' ? Is it a coincidence that
the other name on the website is Texas Memory ?
Of course, single block reads from tables are also registered
as db file sequential reads, and funnily enough indexes are
more likely to be buffered in the database cache because they
are a lot smaller than tables and have a different pattern of use.
So why is the advice 'move the indexes' and not 'move the tables' ?
(Maybe because moving the tables is clearly not feasible in most
cases, but moving the indexes sounds as if it might be affordable ?)
Possibly, to ANALYZE the statspack report this bit of code
should have jumped to the Segment Statistics at this point to see
if any indexes really needed to be on SSD.
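A quick look at v$segment_statistics (available from 9.2 onwards)
would at least show which indexes, if any, account for the physical
reads - a sketch:
select owner, object_name, value
from   v$segment_statistics
where  statistic_name = 'physical reads'
and    object_type    = 'INDEX'
order by value desc;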
By the way - Solid state disks help to INCREASE CPU time,
but if this is the main wait (!!!) event then tuning the SQL will
provide the greatest improvement. (Or adding more CPU !!!)
So if we combine the two pieces of advice:
Add Solid State Disk until CPU time is the highest "wait" event,
then tune the SQL.
And let's not forget that if you add CPU, you are probably going to
use the same amount of CPU, but spread over more CPUs, so that
"wait" time won't drop. In fact, with more CPUs, you are likely to
get more competition for the Bus, which means more CPU stall time,
which could mean spending much more CPU time to do the same
amount of work.
------------------------------------------------------------
Physical writes: 1,947/s
You are performing more than 1,947 disk writes per second.
Check to ensure that you are using direct I/O (non buffer) on
your database and perform an IO balance and an IO timing
analysis to determine if a disk or array is undergoing IO related
stress, and verify your DBWR optimization. Check you average
disk read speed later in this report and ensure that it is under
7 milliseconds. If not, consider SSD for faster physical reads.
------------------------------------------------------------
Changing the I/O patterns or configuration doesn't change
the number of writes you do. And we didn't see (apparently)
any indication that we might have a write TIMING problem
in the sample Top Timed Events. And why is this ANALYZER
telling us to go and analyze the rest of the statspack file - shouldn't
it be cross-referencing related bits of information for us ?
At least it does tell us to look at disk speeds. But why is it
talking about read speeds when the statistic is about writes ?
Oh, look - it's that bit about solid state disc again.
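If we did want to check for a write TIMING problem, a first
approximation (9i/10g; time_waited is in centiseconds, hence the
multiplication) would be something like:
select event, total_waits,
       round(time_waited * 10 / nullif(total_waits, 0), 1) avg_ms
from   v$system_event
where  event in ('db file parallel write', 'log file parallel write',
                 'free buffer waits', 'write complete waits');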
=========================================
Competition for OraPerf ? Maybe not; still, as the home page
says:
This website is not a substitute for your favorite Oracle consultant
or even to replace the well-liked OraPerf.com website.
Now, Oraperf isn't a miracle worker - but it's been around so many
years you would have thought that the next person who seriously wanted
to try something similar would have produced something a little better.
=========================================
--
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
Author: Cost Based Oracle: Fundamentals
http://www.jlcomp.demon.co.uk/cbo_book/ind_book.html
The Co-operative Oracle Users' FAQ
http://www.jlcomp.demon.co.uk/faq/ind_faq.html
Thanks for sharing the write-up. I can only wonder how confused a
person who is unfamiliar with Oracle would be after reading portions of
the report.
Some of the suggestions offered by the sample report would have been
better if they were lifted directly from the Oracle documentation.
Just a couple of examples:
-------------------------------
"Your average wait time for SQL*Net message from client events is 241
milliseconds. Check your application to see if it might benefit from
bulk collection (using PL/SQL "forall" or "bulk collect" operators. In
addition, optimize your TNS settings in your tnsnames.ora file and
investigate consolidating your Oracle requests into larger TNS
packets."
-------------------------------
Maybe I am wrong, but SQL*Net message from client wait events are
entirely client side wait events, possibly indicating that the client
computer is waiting for the user to press the Any key, or actively
processing the data returned by Oracle. PL/SQL runs server side. From
an accuracy standpoint, should SQL*Net message from client wait events
even be interpreted from a Statspack report? The events may have
taken place immediately before or immediately after the database
instance processing activity that was of concern. Would you
tune/optimize settings in the tnsnames.ora file to fix SQL*Net message
from client wait events? Probably not.
-------------------------------
"You have excessive buffer busy waits with 3.1 per transaction. Buffer
busy waits are most commonly caused by segment header contention and
can be remedied by increasing the value of the tables & index freelists
or freelist_groups parameters, tuning your database writer (DBWR
process, or by using Automatic Segment Storage Management (ASSM) in the
tablespace definition. Using super-fast SSD will also reduce buffer
busy waits because transactions are completed many times faster."
-------------------------------
From the Oracle Database Performance Tuning Guide 10g Release 2:
"10.3.2 buffer busy waits
This wait indicates that there are some buffers in the buffer cache
that multiple processes are attempting to access concurrently. Query
V$WAITSTAT for the wait statistics for each class of buffer. Common
buffer classes that have buffer busy waits include data block, segment
header, undo header, and undo block."
Without examining V$WAITSTAT, you could very well be throwing
hardware/disk storage at a problem that is caused by contention for the
same block that is already in memory - this hardware/disk storage
improvement may adversely affect the performance of a critical
application process, if it removes the bottleneck that prevented a less
critical application process from consuming a greater percentage of
system resources (Cary Millsap discusses this in his tuning book, and
it seems as though it was also mentioned in Tales of the Oak Table).
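The breakdown by buffer class is a single query - a rough sketch:
select class, count, time
from   v$waitstat
order by count desc;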
Trying to learn how to best utilize Oracle by reading online articles,
print magazines, and some books is a bit like taking a stroll through a
mine field at night.
Charles Hooper
PC Support Specialist
K&M Machine-Fabricating, Inc.
> -------------------------------
> "Your average wait time for SQL*Net message from client events is 241
> milliseconds. Check your application to see if it might benefit from
> bulk collection (using PL/SQL "forall" or "bulk collect" operators. In
> addition, optimize your TNS settings in your tnsnames.ora file and
> investigate consolidating your Oracle requests into larger TNS
> packets."
> -------------------------------
> Maybe I am wrong, but SQL*Net message from client wait events are
> entirely client side wait events, possibly indicating that the client
> computer is waiting for the user to press the Any key, or actively
> processing the data returned by Oracle. PL/SQL runs server side. From
> an accuracy standpoint, should SQL*Net message from client wait events
> even be interpretted from a Statspack report? The events may have
> taken place immediately before or immediately after the database
> instance processing activity that was of concern. Would you
> tune/optimize settings in the tnsnames.ora fie to fix SQL*Net message
> from client wait events? Probably not.
Over a 2-hour period, an average of 241 milliseconds for
SQL*Net message from client events tends to suggest a
client program performing a massive load one row at a
time. This does suggest the need for array processing,
but as you say, PL/SQL is (barring Forms) server-side. I think
you would only look at SQL*Net tuning if you were seeing
lots of 'SQL*Net more data to client' (or 'from client') waits -
because there might be some mileage in a larger TDU or SDU.
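The right fix depends on the client layer, but the effect of array
fetching is easy to demonstrate from SQL*Plus - a sketch using a view
everyone has:
set autotrace traceonly statistics
rem small array size: lots of round trips
set arraysize 2
select * from all_objects;
rem larger array size: far fewer "SQL*Net roundtrips to/from client"
set arraysize 100
select * from all_objects;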
>
> -------------------------------
>
> "You have excessive buffer busy waits with 3.1 per transaction. Buffer
> busy waits are most commonly caused by segment header contention and
> can be remedied by increasing the value of the tables & index freelists
> or freelist_groups parameters, tuning your database writer (DBWR
> process, or by using Automatic Segment Storage Management (ASSM) in the
> tablespace definition. Using super-fast SSD will also reduce buffer
> busy waits because transactions are completed many times faster."
> -------------------------------
>
> Without examining V$WAITSTAT, you could very well be throwing
> hardware/disk storage at a problem that is caused by contention for the
> same block that is already in memory - this hardware/disk storage
> improvement may adversely affect the performance of a critical
> application process, if it removes the bottleneck that prevented a less
> critical application process from consuming a greater percentage of
> system resources (Cary Millsap discusses this in his tuning book, and
> it seems as though it was also mentioned in Tales of the Oak Table).
>
And in this particular case, given the constant table-scanning going
on, the buffer busy waits might simply be the result of sessions
waiting for other sessions to finish a scattered read. So in a small
sense, the suggestion of buying SSD is correct - it might reduce the
wait times for other reads: but (a) it would probably be cheaper to
stop doing the excess tablescans, and (b) the reason given for buying
the SSD is not the right reason.
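Finding the offenders isn't hard either - a rough cut (sql_id is 10g;
use address/hash_value on earlier versions, and pick your own threshold):
select sql_id, executions, disk_reads, buffer_gets
from   v$sql
where  disk_reads > 100000
order by disk_reads desc;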
Thanks for the reply to my comments.
When will the other two books in the series that includes "Cost-Based
Oracle Fundamentals" be available? I am looking for another Oracle
book or two that attacks an Oracle feature with as much detailed
analysis as the "Cost-Based Oracle Fundamentals" book.
Lots of high-end systems, apparently, that one of the Cherry Sisters
works on.
63 parses a second is too much.
Maybe just ask the people to use the system less often?
I'm hoping to finish writing by the end of Dec,
but probably closer to mid Jan. Volume 2 is
going to be about execution plans.
Not necessarily - if all 63 parse calls result in a
hit on the session cursor cache, then they might
as well be held cursors for the amount of impact
they have on the library cache and related latches.
Even if every single one of them results in a library
cache seek, their impact is likely to be pretty insignificant
compared to the hammering that's going on with the
tablescans and the relentless client round-tripping.
Sorry I was only kidding about the 63 parses a second being too much.
My error - I should have realised that from
the second sentence.
It wasn't so long ago that people would quote
some paper about buffer busy waits needing to be close
to zero, and worry what to do because they'd
had a few hundred in the last 6 weeks.
Hi Jonathan
All depends on who the next person might be ...
This tool reminds me of those awful remakes of Hollywood classics where the
originals are sooo much better and you wonder why they bothered.
There are sooo many errors, inconsistencies, ambiguities and so much downright
misleading advice in the sample report that it's hard to know where to
begin. Some of the things that instantly spring to mind however include:
As you say, only 43% of the wait events are accounted for - the
report appears to simply ignore more than half of all database waits. It's
actually quite "normal" for CPU and db file sequential reads to be among the
top timed events, although it's a little unusual for them to account for just
43% of activity. There are likely to be at least 3 or 4 other significant
events the report just ignores which could be of more interest than the two
listed.
With CPU contributing just 21% of time, one of the key recommendations is to
increase the speed or number of CPUs !!
There are at least a dozen recommendations for the use of Solid State
Disks (SSD) with little justification. For example it recommends SSD for
caching frequently accessed small tables/indexes and then immediately
recommends storing them on SSD !! As you mention, it recommends storing just
indexes on SSD to reduce db file sequential read waits when indexes are
likely to contribute only a tiny proportion of these waits, as indexes are
more likely to be cached than their tables and are only accessed a small
percentage of times in comparison to table blocks for any significant range
scan. Also, db file sequential read waits are only 2ms on average, and yet
SSD is still recommended. Indeed, even if SSD were already in use, the
recommendation to move to SSD would presumably still appear in the report
!!
Indeed it recommends SSD to tune the small table scans without considering
that the large block size, the large db_file_multiblock_read_count (64),
missing indexes and missing system statistics could all be the culprit/cure.
In the Load Profile part of the report, it recommends dealing with physical
writes by moving to SSD to increase the physical read performance. What the
?
It totally confuses and misinterprets the meaning of parses and hard parses,
and provides inappropriate advice as a result.
It recommends a large block size for indexes even though the block size is
already at 16K and there's no indication that index performance is an issue
and no mention of the disadvantages of this approach.
In the Instance activity stats part of the report, it recommends dealing
with the so-called high physical I/O by increasing the buffer cache or
moving to SSD (of course), not one mention of perhaps tuning the code that
is driving the high PIO demand.
It totally confuses the distinction between a migrated and a chained row: the
two don't always cause double I/Os, they aren't necessarily both fixed by
reorganising tables, neither is necessarily best fixed with
dbms_redefinition, pctfree may not need to be readjusted, and it's migrated
rows (not chained rows) that might be fixed and prevented with a better
pctfree. Oh, and pctfree may actually need to be reduced during the reorg.
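If you want to know whether you actually have chained or migrated rows,
the old-fashioned checks still work (the table name below is just an
example, and you need the CHAINED_ROWS table from utlchain.sql first):
analyze table my_big_table list chained rows into chained_rows;
select count(*) from chained_rows;
select name, value
from   v$sysstat
where  name = 'table fetch continued row';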
Tablespace I/O stats appear to miss a few rather important tablespaces,
namely SYSTEM, UNDO, TEMP. Perhaps their performance is not so important or
perhaps they're already on SSD !!
A little odd that tablespace I/O stats is concerned about the speed of
tablespace TABLE_C at 11.4 ms average but not TABLE_B which is only a little
faster at 9.6 ms (somewhat slower than the 7ms recommended at the start of
the report) or TABLE_G at 24.8 ms or TABLE_H at 31.5 ms. Especially
considering existing hardware is capable of supplying 50% of I/O at 1 ms as
per TABLE_A (perhaps it's already on SSD :) Also, low usage doesn't
necessarily mean low importance.
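A simple sanity check of average read times per file (readtim is in
centiseconds, hence the multiplication) would have told the whole story -
a sketch:
select d.name, f.phyrds,
       round(f.readtim * 10 / nullif(f.phyrds, 0), 1) avg_read_ms
from   v$filestat f, v$datafile d
where  f.file# = d.file#
order by avg_read_ms desc;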
Latch activity lists Gets Requests statistics for the redo copy latch
despite it being a Nowait Request latch (no wonder it's 0 !!). It also
complains about cache buffer related latch performance but in the next
section recommends doubling the size of the buffer cache from 10G to 20G,
potentially putting more stress on these latches. Also, decreasing the
compactness of tables by reorganising them with a higher pctfree is a rather
expensive method to fix a potentially minor problem (0.2 percent latch
misses) and could introduce much much bigger ones. Perhaps reducing LIO
might be a slightly more effective strategy !! It also recommends reducing
redo allocation contention by increasing the redo log size although this may
also have the opposite effect.
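The redo copy mix-up is easy to see for yourself - a quick sketch:
select name, gets, misses, immediate_gets, immediate_misses
from   v$latch
where  name in ('redo copy', 'redo allocation', 'cache buffers chains');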
In the shared pool advisory section, the report recommends doubling the
shared_pool and in the next section warns that the shared pool is already
"unusually large" !!
I could go on and on but life is too short. This is nothing more than a
rather silly and embarrassingly blatant SSD marketing exercise. At the end
of the day, what does the report actually achieve, what are the performance
issues that have been resolved, what were the database problems that needed
addressing ?
I think it's all best summed up with the following quote from the disclaimer
on the home page "Even a perfect interpretation of your Statspack could lead
to inaccurate recommendations that do not improve Oracle database
performance". In other words, if we happened to guess a real problem, we may
not be able to guess our way to a real solution !!
Enough said !!
Richard
> At the end of the day, what does the report actually achieve,
I'd say it achieves its purpose.
I suspect that purpose is to provide marketing and credibility (in the
eyes of decision makers) for a technology by providing a document by a
credible source (one recognized in the industry).
In North America (at the least), credibility is equated to popularity and
recognition.
Popularity implies trust, which lowers apparent risk, which increases
credibility, which results in revenue.
In North America (at the least), McDonald's and Microsoft have long been
considered credible in their fields. Both are extremely heavily marketed
and therefore have recognition. Both are popular. Therefore both are
credible, even though some experts question whether either is truly 'good'.
Note that technical accuracy is NOT a requirement.
--
Hans Forbrich (mailto: Fuzzy.GreyBeard_at_gmail.com)
*** Feel free to correct me when I'm wrong!
*** Top posting [replies] guarantees I won't respond.
> With CPU contributing just 21% of time, one of the key recommendations is to
> increase the speed or number of CPUs !!
Actually, I strongly disagree with this recommendation. If your machine
spends most of the time waiting for the IO, new CPU board(s) will not do
much for the performance. In that case Don Burleson's recommendation to
move things to SSD would actually make sense.
I would say that with the complexities of modern Oracle databases and the
cost of people who know how to tune them, SSD devices for redo logs are
starting to make much more sense. A consultant like me would cost you
$/hour and I am on the lower end. Mind you, I'm not Anjo Kolk, Cary
Millsap, Jonathan Lewis, Steve Adams or Howard Rogers, they're much more
expensive. An 8GB flash disk like Ritek CFR8G-80X-G costs $150 and will
probably do a lot for your transaction rate, if you put redo logs there.
It's all cost-benefit. With the complexities of present-day Oracle
databases, spending some money to speed up your I/O is always a good
decision.
Personally, I've never seen a database server with CPU usage more than 40%
on average. Databases are almost always stuck on I/O. While it is
important to write SQL properly, it is getting harder and harder to do.
The Oracle optimizer is increasingly complex and hard to use. Analyzing 10053
and 10046 traces is really a hard thing to do and also a very
expensive one. Buying a few SSD devices for a few grand will probably do
you more good than having Tom Kyte on site for a day, and for the same
price. SSD devices are affordable, a cigarette lighter-sized USB memory
stick with 1GB ram costs $30 at Staples. It's a perfect thing for
industrial espionage, a gadget coming directly from 007 movies. Why not
take advantage of it?
> In North America (at the least), credibility is equated to popularity and
> recognition.
>
> Popularity implies trust, which lowers apparent risk, which increases
> credibility, which results in revenue.
>
> In North America (at the least), MacDonalds and Microsoft have long been
> considered credible in their fields. Both are extremely heavily marketed
> and therefore have recognition. Both are popular. Therefore both are
> credible, even though some experts question whether either is truly 'good'.
>
>
> Note that technical accuracy is NOT a requirement.
Very good essay about McDonald's and Microsoft, without much relevance here.
Actually, with falling prices of flash memory, SSD devices are becoming
extremely affordable, while CBO has become more complex and harder to know.
Fast disk devices are definitely a good option which is, in many cases,
cheaper than a top-notch consultant or a DBA needed to tune an application
system. Supersize me, but in this case Don Burleson's recommendation makes
very good sense.
> Actually, with falling prices of flash memory, SSD devices are becoming
> extremely affordable, while CBO has become more complex and harder to know.
> Fast disk devices are definitely a good option which is, in many cases,
> cheaper than a top-notch consultant or a DBA needed to tune an application
> system.
I absolutely agree that SSDs are an option. It's definitely worth keeping
as a possibility in the toolkit.
Not sure I'd be willing to stake the performance and availability of my
system on a removable device on an externally shared bus. Especially one
that tends to be restricted to FAT.
But it's worth investigating. Especially if it speeds up the redo enough
to delay using the async option of the commit statement.
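(For reference, the asynchronous commit I mean is the 10gR2 syntax
along the lines of:
commit write batch nowait;
It trades the durability guarantee for fewer log file sync waits, so
it's not something to use casually.)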
The thing that worries me is simple: is it a problem resolution? or is it
sweeping a problem under the rug, to fix later?
Is flash (drive) memory synonymous with SSD? Flash drive memory,
riding in a USB2 slot is not fast, especially when compared with a RAID
controller with 256MB or more cache memory that is connected to a PCI
Express bus. Compare the burst and throughput of a 6, 8, 10, 12, or
more drive RAID 10 array with the performance of one of these 8GB flash
drives:
http://www.pcmall.com/pcmall/shop/detail~dpno~7036586.asp
From that website:
"Speed: Read 8M bit/sec, Write 6.4M bit/sec (Max)"
Read speed on this 8MB flah drive is 1MB per second maximum, and write
speed is 0.8MB per second maximum. Flash drives are also prone to
corruption.
I am hoping that flash drives and SSD are not one and the same. My bet
is that SSD plugs into an internal PCI slot and is essentially battery
backed RAM. While fast, it would still be constrained to the maximum
speed of the PCI bus, which is likely shared with several other
devices, possibly the hard drive RAID controller. If this is the case,
the addition of the SSD device could slow the maximum throughput of the
hard drive RAID controller as it must now compete on the PCI bus with
the SSD. This of course leads to the question, could adding an SSD
device slow performance? Some servers with PCI buses actually have two
independent PCI buses, so maybe this is not a concern.
Correction: The line reading "Read speed on this 8MB flah drive is 1MB
per second maximum"
Should appear as: "Read speed on this 8GB flash drive is 1MB per second
maximum"
Charles Hooper
> Charles Hooper wrote:
>> Is flash (drive) memory synonymous with SSD? Flash drive memory,
>> riding in a USB2 slot is not fast, especially when compared with a RAID
You will note that Mladen referred to "8GB flash disk like Ritek
CFR8G-80X-G". That unit uses the CompactFlash specification ... the
larger of the 'digital camera' removable cards. Fits in the universal
card adapter that connects to the USB bus on my PC. I suspect you can get
a variant of the adapter that fits in a PCI slot.
He's absolutely correct - it's a solid state disk device. Admittedly not
quite like the SCSI or IDE SSDs from M-Systems or SolidData, which are the
more common interpretation of 'SSD' and still generally run in the $10K
range last time I checked (about 2 months ago).
And, theoretically the USB spec takes one up to very fast transfer rates.
As long as one has USB 2, single device and no conflicts on the bus.
And, under Linux the USB disk mounts as /dev/sda[0-9], so one could even
use it as a raw device.
What I've found with the flash drives is that write speed tends to be a bit on
the low side, even compared to read speed.
> He's absolutely correct - it's a solid state disk device. Admittedly not
> quite like the SCSI or IDE SSDs from M-Systems or SolidData, which are the
> more common interpretation of 'SSD' and still generally run in the $10K
> range last time I checked (about 2 months ago).
I am not sure, but I think I've seen one with SCSI3 interface for $6000.
Capacity was rather small, something like 20GB. I haven't checked that
for a while. That can still do wonders for "log file sync" waits, which
are a major problem on any high intensity OLTP database.
>
>
> And, theoretically the USB spec takes one up to very fast transfer rates.
> As long as one has USB 2, single device and no conflicts on the bus.
>
> And, under Linux the USB disk mounts as /dev/sda[0-9], so one could even
> use it as a raw device.
And you can only make it into a FAT file system, which is very @#$%%^
annoying.
>
>
> I've found with the flash drives is that write speed tends to be a bit on
> the low side, even compared to read speed.
I have only personal impressions, which are unreliable at best.
On the other hand, RamSan devices with FC/AL interface are rather
expensive and, after reading the specification, I don't believe they are meant
for laptops. Power consumption is in the dishwasher range (350W) and the
contraption weighs 80lbs. You would need a big laptop for that.
http://www.superssd.com/products/ramsan-400/indexb.htm
Hi Mladen,
Because it's one thing to ensure you have sufficient hardware for your
requirements, whatever the requirements and hardware might be.
But it's quite another thing to simply throw hardware at a problem; it
invariably doesn't work unless the root issue is addressed.
However it's much much much worse, and quite another thing again, to throw
hardware at a problem that doesn't actually exist !!
Give me Tom Kyte any day ...
Cheers
Richard
Unfortunately it's worse than this: it isn't that it invariably doesn't
work - if that were the case the charlatans would have been exposed by
now, and by the people who matter most, those who sign the cheques - no,
the problem is that it variably works :) So if you get your problem
resolved by accident then the investment in hardware was good, and if
you don't get your problem resolved, also by accident, then you have to
find someone to fire and talk to the charlatans again :(
> However it's much much much worse and quite another thing again to throw
> hardware at a problem that doesn't actually exist !!
There, of course we are agreed.
Cheers
Niall Litchfield
Oracle DBA
http://www.orawin.info/
Hi Niall
I guess it kinda depends on one's definition of "works". If there's some
improvement, if things run a little faster, a little better, for a little
while, but nowhere near the potential of the application and nowhere near
the capability of the hardware infrastructure and not within the
expectations of the users, then I'm not sure that it's really "worked".
Poor logical design, poor application design, poor SQL, poor database
design, poor definition of user requirements, etc, etc. are generally not
suddenly made hunky dory by throwing hardware at these problems. SSD would
likely have as negligible an impact in addressing these issues as it would
in addressing those issues that don't actually exist.
However, if one could just get Tom Kyte for a day ...
Cheers ;)
Richard
> But it's quite another thing to simply throw hardware at a problem, it
> invariably doesn't work unless the root issue is addressed.
Unless you throw enough hardware. There's a famous 80:20 rule: 80% of the
possible improvement is achieved by investing 20% of the effort. I've seen
hardware thrown at the problem, with good results. It wasn't SSD, it was
EMC cache, pumped up to an incredible proportion, but the intention was
the same and the result was, generally speaking, satisfactory. In other
words, it's a myth that throwing hardware at the problem doesn't work. It
does work, provided you throw enough of it.
On the other hand, with SQL generators, Java ORM (object-relational
mappers), cheap outsourced developers and unrealistic project deadlines
you cannot get any quality most of the time. Companies will move 250
developers to Elbonia, save 80% of the IT payroll, which amounts to
several millions per year, and then buy a decent piece of hardware to run
the systems. It usually works and it rids them of unbearable and
expensive geeks like me. Companies work just fine, Dow-Jones reached
record heights. It's us, the consultants, who should be worried.
From my experience USB 2 is slower than GigE, which I would interpret as
meaning that writing to the cache on a NetApp or EMC is going to be very
substantially faster. I don't know of too many people whose hardware
writes directly to disk anymore.
--
Daniel A. Morgan
University of Washington
damo...@x.washington.edu
(replace x with u to respond)
Puget Sound Oracle Users Group
www.psoug.org
Good point.
This thread continues on with many people "wondering about" the
performance characteristics of solid state disk and getting confused
with flash memory and USB devices.
Flash memory and USB drives aren't intended to be used as SSD. Flash
memory is not terrible at read performance but doesn't work well for
write performance. In addition flash memory gets destroyed eventually
with enough writes. Not exactly a stable situation for redo logs or
any other Oracle files.
Real SSD can/would use a high speed data transfer technology ( SCSI
interface or SATA etc ) and contains high quality memory chips ( not
flash ).
Wonder no more.
>
> I think it's all best summed up with the following quote from the disclaimer
> on the home page "Even a perfect interpretation of your Statspack could lead
> to inaccurate recommendations that do not improve Oracle database
> performance". In other words, if we happened to guess a real problem, we may
> not be able to guess our way to a real solution !!
>
That sounds like a chorus line from a song performed by the Cherry
Sisters. Of course Cary Millsap believes in predicting improvements
in Oracle performance, but that involves real work and fixing
problems that yield a tangible benefit to the customer.
On the other hand ... had they fixed the problem ... they would have had
better architecture and likely spent the same amount of money.
I haven't seen a problem, in many years, where throwing hardware at it
was the best solution.
Hi Mladen
I hate 80-20 rules, 80 out of 100 are invalid on 80 out of 100 occasions
;)
I'm a consultant and I'm not the slightest bit worried. That's because I
don't rely on reports that recommend throwing hardware at problems that
don't exist and solve problems that really do exist by determining what's
actually wrong ...
Weird eh ;)
Cheers
Richard
From a white paper on the site:
"The Only Solid State Disk with Active Backup™. The RamSan-400
Active Backup™ mode constantly backs up data from memory to the
internal hard disks, helping mission critical data survive even
catastrophic problems."
I'm still wondering whether putting redo logs on such a device is a
joke regarding instance crash recovery. Or is the joke that an SSD is
really hard disks with a big write-behind cache? Slap me with a
sturgeon, I don't get it.
jg
--
@home.com is bogus.
Dissing the dead:
http://www.signonsandiego.com/uniontrib/20061105/news_1n5unkind.html
"Don Burleson
Don Burleson is one of the world's most prolific Oracle authors and
experts. A retired adjunct professor emeritus, he has authored more
than 30 books on Oracle database management, published hundreds of
articles in national magazines, and is a popular lecturer at
international database conferences. Don crafted many of the original
rules that form the foundation of StatspackAnalyzer.com."
Explains why the product is so good ... at throwing hardware at
problems instead of solving them. Don admitted in print (and I can't
remember where, at the moment, although a thread on asktom.oracle.com
might be where I saw that) that he'd readily throw hardware at a
problem (and has done so) rather than take the time to actually solve
it and provided some tale of 'the client had budgetary constraints'
(but, gee, they could spend $550 an hour for old Don and $200,000+ for
equipment).
David Fitzjarrell
Obviously Richard you just don't understand the American concept of
"shop 'till you drop" rampant materialism. Why learn something and tune
when you can spend someone else's money?
> I'm a consultant and I'm not the slightest bit worried. That's because I
> don't rely on reports that recommend throwing hardware at problems that
> don't exist and solve problems that really do exist by determining what's
> actually wrong ...
Richard, such a statement would hold water if the world was clearly cut
into right and wrong things but, alas, it is not. I thought it was us
Americans who tend to over-simplify things, but I see that Aussies are not
immune to that, either. I have to ask you, just out of curiosity, have
you recently taken a brush-clearing course in Crawford, TX? In the Real
World(TM) there are applications that you don't have source code for,
legacy applications, departments in turmoil, company acquisitions and such
things. Throwing hardware at the problem at hand frequently is the right
thing to do. There are situations in which investing significant time and
effort to learn and develop is the right thing to do and there are
situations in which it is not the right thing to do. I was once working
for a medium sized HMO which, at one point, acquired a small HMO. The small
HMO had its members database on a single PC, using SQL Server. I used
Oracle's migration workbench to stuff things into an Oracle database and
the programmers concocted something that emulated the application that was
previously running against the SQL server. The result was unacceptably
slow. This application had to run only until each member was properly
entered into the buying HMO's Oracle database. After that, the whole thing
was quietly extinguished. So, Oracle on a Win2k box wasn't good enough.
There were two options:
1) To tune the application properly
2) Create a tablespace and two schemas on the UAT machine, a much
more powerful IBM midrange computer with EMC storage attached to it
and let the thing perform through the sheer power of the hardware.
I recommended the second solution and everybody was happy. In 4 months,
the whole thing went away and I was told to drop the tablespace, which I
gladly did. That was throwing hardware at the problem and it was justified.
In principle, there are situations in which I will gladly recommend
throwing hardware at the problem.
Hi Mladen
Ummmm, such a statement holds all the water I need in this desert called
Oracle consulting because it's exactly what I do.
It's not a simplification, it's a fact. Perhaps a rather simple fact but a
statement of fact nonetheless ...
Cheers
Richard
Perhaps you guys are talking at cross-purposes here because of the
different ways one can determine a problem statement. It is certainly
near-trivial to come up with realistic examples of how a dumb profiler
can spit out inanity - the example on the web site is ironically one of
those, as Richard demonstrated. And I think Mladen's example is a good
one - it demonstrates that short-circuiting a proper tuning methodology
by someone who knows what he is doing can be more cost-effective to a
business. Will it always be? Of course not, but that is only a
problem when it is always used by the business, which removes the
"knows what he is doing" from the solution. Will it _never_ be?
Also, of course not, there is implicit value to experience, the problem
domain allows intuition to extrapolate correctly from past experience.
Will it be the optimal solution to the problem? Most likely not - but
then, when you insert business valuation into the process, optimal no
longer necessarily equals best. Or maybe the other way round.
I often see people (yes, including myself) making extrapolations that
are ridiculously wrong. When it blows up in some huge project, that
can be both educational and entertaining. On the other hand, many
incorrect assumptions may be innocuous or lead to correct results
through fallacies. With the limited resources we all have, it has to
be a judgement call about many things. The important thing is that we
make an effort to make things more rational, while accepting that some
things will be difficult to change. Doing nothing invites
irrationality. Limiting the solution domain too severely can also be
destructive. The trick is to know how and when to use observation to
feedback control to go in the right direction.
Sometimes it's good to throw the hardware at it first, because that
doesn't rule out tuning it properly later, whereas if you got it going
good enough, you suffer when you do need the capacity and can't get it
due to arcane capital expenditure rules. So fight BS with BS :-)
jg
--
@home.com is bogus.
"Television is a vast wasteland.
A desert is a vast wasteland.
A television show about a desert is the vastest wasteland of all." -
Mad Magazine
Cary Millsap's book is currently the state of the art in regards to how
to approach this area.
I have a feeling that he wouldn't exactly agree with the proposition of
throwing hardware first into any of the scenarios that this thread is
discussing.
"Doing nothing invites irrationality" ... sorry I can't buy that
statement at all.
Often doing nothing is the best decision that can be made in many
situations. It all centers around whether you will provide a tangible
benefit to a business function that yields a better rate of return than
doing nothing, as well as the cost of the resources and hardware ( if
any ) needed to implement some kind of change.
Eliminating workload that's unnecessary and fixing needed SQL so that
it uses fewer resources ... that's what should be driving any competent
DBA.
I agree, and yet, even it has some shortcomings. Sometimes ordering
everything by business priority misses technical problems (for example,
rating the 15th most important problem as identified by the
methodology, when in actuality it should be a topper), and there are
perhaps some unspoken assumptions in the methodology. Unless I am
mistaken (quite possible, as I haven't looked at it too closely), one
assumption is that there are a few issues that cause most of the
problems. This is often true, but not always, especially in newbie
systems and systems that have many different apps. So any testimonials
would be suspect.
>
> I have a feeling that he wouldn't exactly agree with the proposition of
> throwing hardware first into any of the scenarios that this thread is
> discussing.
But would that be for a technical reason, or a not invented here
reason?
>
> "Doing nothing invites irrationality" ... sorry I can't buy that
> statement at all.
>
> Often doing nothing is the best decision that can be made in many
> situations. It all centers around whether you will provide a tangible
> benefit to a business function that yields a better rate of return than
> doing nothing, as well as the cost of the resources and hardware ( if
> any ) needed to implement some kind of change.
Ah, sorry, I wasn't clear that I was referring in general to trying to
make things better, as the sort of scientific inquiry Richard and
others champion, not in evaluating a specific situation. Absolutely,
doing nothing is often correct in the latter. I should have made a new
paragraph at "The important thing..." and explained the transition.
>
> Eliminating workload that's un-necessary and fixing needed SQL so that
> it uses less resources ... that's what should be driving any competent
> DBA.
Agreed. But that needs to exist within the greater business context,
where overall business considerations can be at odds with tuning
specific workloads.
What do you think Cary would say about tuning an arbitrary SQL that is
one of 10000 bad SQL's? Of course his methodology orders the badness
so you may never get there. The "wrong" throwing of hardware works
when it solves a more global problem (including the cost of hardware v.
cost of people, and postponing proper fixes until they can be made).
They are not necessarily mutually exclusive, there's only a problem
when people say they are mutually exclusive, or use one when the other
is appropriate. Throwing hardware at an untuned newbie system would be
inappropriate - I've seen some pretty old systems that could be
described that way (maybe every Oracle system put together by people
who think it is another db, too...), and I think the Competition for
OraPerf would be particularly egregious for those.
jg
--
@home.com is bogus.
"Actually, I thought we were going to do fine yesterday, shows what I
know. But I thought we were going to be fine in the election. My point
to you is that, win or lose, Bob Gates was going to become the
nominee." - Bush
"A week ago, Bush told The Associated Press and other reporters in an
interview that he expected Rumsfeld and Cheney to stay through the end
of his last two years in the White House. Asked Wednesday about that
comment, Bush acknowledged he intentionally misled reporters because he
want to avoid a change at the Pentagon during a hotly contested
election. " - AP
Umm no, that's all covered in chapter 1, which documents what Method R is
and how it operates.
If that's the level of understanding that you have regarding the book, I
suggest re-reading the preface and chapters 1 thru 4 at a minimum.
> This is often true, but not always, especially in newbie
> systems and systems that have many different apps. So any testimonials
> would be suspect.
What testimonials are you referring to? Sorry you completely lost me
with the last 2 sentences.
>
> >
> > I have a feeling that he wouldn't exactly agree with the proposition of
> > throwing hardware first into any of the scenarios that this thread is
> > discussing.
>
> But would that be for a technical reason, or a not invented here
> reason?
I don't think Cary is a big believer in either technical reasons or
the not invented here reasons.
Again it's all covered in the chapters cited above.
>
> >
> > "Doing nothing invites irrationality" ... sorry I can't buy that
> > statement at all.
> >
> > Often doing nothing is the best decision that can be made in many
> > situations. It all centers around whether you will provide a tangible
> > benefit to a business function that yields a better rate of return than
> > doing nothing, as well as the cost of the resources and hardware ( if
> > any ) needed to implement some kind of change.
>
> Ah, sorry, I wasn't clear that I was referring in general to trying to
> make things better, as the sort of scientific inquiry Richard and
> others champion, not in evaluating a specific situation. Absolutely,
> doing nothing is often correct in the latter. I should have made a new
> paragraph at "The important thing..." and explained the transition.
Sorry you lost me again. Richard who?
>
> >
> > Eliminating workload that's un-necessary and fixing needed SQL so that
> > it uses less resources ... that's what should be driving any competent
> > DBA.
>
> Agreed. But that needs to exist within the greater business context,
> where overall business considerations can be at odds with tuning
> specific workloads.
>
> What do you think Cary would say about tuning an arbitrary SQL that is
> one of 10000 bad SQL's? Of course his methodology orders the badness
> so you may never get there. The "wrong" throwing of hardware works
> when it solves a more global problem (including the cost of hardware v.
> cost of people, and postponing proper fixes until they can be made).
> They are not necessarily mutually exclusive, there's only a problem
> when people say they are mutually exclusive, or use one when the other
> is appropriate. Throwing hardware at an untuned newbie system would be
> inappropriate - I've seen some pretty old systems that could be
> described that way (maybe every Oracle system put together by people
> who think it is another db, too...), and I think the Competition for
> OraPerf would be particularly egregious for those.
Are you postulating that sometimes throwing hardware is viable?
Or that it's only bad when it doesn't work? ( You have me worried with
the sentence that reads "The "wrong" throwing of hardware ... then you
say "use one when the other is appropriate" ).
Well hindsight is always 20-20 once a problem is solved. Cary's
approach guides you to the correct solution from the beginning if it is
followed correctly.
The other approach sounds suspiciously like Method C except that you
front end it by throwing hardware first?
>> Cary Millsap's book is currently the state of the art in regards to how
>> to approach this area.
>
> I agree, and yet, even it has some shortcomings. Sometimes ordering
> everything by business priority misses technical problems (for example,
> rating the 15th most important problem as identified by the
> methodology, when in actuality it should be a topper), and there are
> perhaps some unspoken assumptions in the methodology.
I think Tom Kyte has addressed the issue of trusting the experts on
more than a few occasions. Cary's book is definitely one of the most
valuable resources available. But synapses must be applied to his
advice in order to best use it.
Joel is incorrect in stating that there are perhaps some unspoken
assumptions in the methodology. The methodology is documented clearly,
at length, in the first part of the book.
It does depend on properly scoped diagnostic data to succeed.
Sometimes getting that data is easier than in other circumstances.
Read those sections again if you still think there are unspoken
assumptions.
As far as Mr. Morgan's valuable contribution goes, where in the book does
Cary ever suggest that you should trust what he wrote? The whole book
is centered on proofs.
With a comment like that I wonder if he has even read the book.
Thought must be applied to succeed in doing anything with any
computer-based system; there's nothing unique to Oracle in that
respect.
I stated that 'in my opinion' Cary's book is state of the art for
"Optimizing Oracle Performance".
If you disagree then provide specifics.
Mladen Gogala <mgogala.s...@verizon.net> wrote:
> Power consumption is in the dishwasher range (350W) and the
> contraption weighs 80lbs. You would need a big laptop for that.
> http://www.superssd.com/products/ramsan-400/indexb.htm
Not to mention a big lap!
Paul...
--
plinehan __at__ yahoo __dot__ __com__
XP Pro, SP 2,
Oracle, 9.2.0.1.0 (Enterprise Ed.)
Interbase 6.0.1.0;
When asking database related questions, please give other posters
some clues, like operating system, version of db being used and DDL.
The exact text and/or number of error messages is useful (!= "it didn't work!").
Thanks.
Furthermore, as a courtesy to those who spend
time analysing and attempting to help, please
do not top post.