I was wondering if somebody could critique my query tuning technique.
I am primarily a developer and only a part-time DBA. I was trained by
a great DBA, but that was back in the SQL Server 6.5 and 7.0 days. I
concentrate pretty heavily on logical reads, and I am wondering whether
my technique is still valid or whether I should be concentrating elsewhere.
We use SQL Profiler to find queries that take longer than 15 seconds
to run. I take the SQL, run it in Query Analyzer with STATISTICS IO
turned on, and look at the logical reads. I keep tuning the SQL to get
the logical reads down. My goal is to get them under 10,000 logical
reads, but if I get a query from 4,000,000 reads down to 200,000, I
believe I have made a good improvement.
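For anyone following along, the workflow described above looks roughly
like this in Query Analyzer; the table and column names are hypothetical,
just to make the sketch concrete:

```sql
-- Report per-statement I/O for this session; the figures appear on the
-- Messages tab in Query Analyzer.
SET STATISTICS IO ON;
GO

-- Hypothetical suspect query pulled from a Profiler trace. For each
-- table touched, the output lists logical reads (8 KB pages read from
-- the buffer cache), physical reads (pages read from disk), and
-- read-ahead reads.
SELECT o.OrderID, o.OrderDate, c.CompanyName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20040101';
GO

SET STATISTICS IO OFF;
GO
```

Since a data page is 8 KB, 10,000 logical reads works out to roughly
80 MB of page traffic per execution.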
Some of our code calls multiple SQL statements in sequence, so I
analyze the code and have a pretty good idea which tables will probably
be cached. If I see that the tables will be cached because of previous
SQL statements, then I can live with a query that has higher logical
reads.
I pretty much don't understand the graphical execution plan. Am I
crazy to ignore it, or is my technique above pretty good?
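If the graphical view is off-putting, the same information is available
as plain text. A minimal sketch, reusing the same hypothetical table as
above:

```sql
-- With SHOWPLAN_TEXT on, SQL Server returns the estimated execution
-- plan as rows of text and does NOT actually execute the query.
SET SHOWPLAN_TEXT ON;
GO

SELECT o.OrderID
FROM dbo.Orders AS o          -- hypothetical table
WHERE o.OrderDate >= '20040101';
GO

SET SHOWPLAN_TEXT OFF;
GO
```

Scanning the output for Table Scan or Clustered Index Scan operators on
large tables is a quick way to see where the logical reads are coming
from.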
The goal is always to get the logical reads to a minimum, but there is
no way to tell whether a particular number is as low as it can be
without looking at the query plan to some degree. I don't know what
kind of queries you are doing, but if you're only looking for queries
over 15 seconds and think 10K reads is great, you guys must have a lot
of time on your hands to wait for results<g>. Usually anything over
150ms is too long for me. Hopefully these are some type of reporting
queries and the number of reads is justified.
Andrew J. Kelly SQL MVP
"Bill Ray" <rinfo@.mail.com> wrote in message
news:63b44ea.0407020726.156a0c50@.posting.google.com...