
Monday, March 19, 2012

Login Failed

Can anyone explain why, each time I try to connect to SQL Server, I get this error: Login Failed "WEBCOMPUTER\ASPNET"? (WEBCOMPUTER is my computer name.)

Because you are using integrated security, your ASP.NET pages run, by default, under the security context of the ASPNET Windows user. You need to add the ASPNET Windows user as a login in SQL Server, or alternately use SQL Server security and provide a username and password.

Wednesday, March 7, 2012

Logical reads to tune sql

I was wondering if somebody could critique my query tuning technique.
I am primarily a developer and only a part time dba. I was trained by
a great dba, but it was back in the sql 6.5 and 7.0 days. I
concentrate pretty heavily on logical reads, and I am wondering if my
technique is still valid or if I should be concentrating elsewhere?
We use sql profiler and find queries that are taking longer than 15
seconds to run. I get the sql and run it in query analyzer and turn
statistics io on and look at logical reads. I keep tuning the sql and
try to get logical reads down. My goal is to get them to less than
10,000 logical reads. But if I get a query from 4,000,000 read to
200,000, I believe I have made a good improvement.
Some of our code calls multiple sql statements in sequence, so I
analyze the code and have a pretty good idea what tables will probably
be cached. If I see that the tables will be cached because of previous
sql statements, then I can live with a query that has higher logical
reads.
I pretty much don't understand the graphical Execution Plan. Am I
crazy ignoring this? Or is my above technique pretty good?

The goal is always to get the logical reads to a minimum, but there is no way to tell whether a particular number is as low as it can be without looking at the query plan to some degree. I don't know what kind of queries you are running, but if you're only looking for queries over 15 seconds and think 10K reads is great, you guys must have a lot of time on your hands to wait for results <g>. Usually anything over 150ms is too long for me. Hopefully these are some kind of reporting queries and the amount of reads is justified.
--
Andrew J. Kelly SQL MVP
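For reference, the workflow described above boils down to something like this sketch (the table and column names are hypothetical; substitute the query under test):

-- Measure I/O and timing for a candidate query in Query Analyzer.
SET STATISTICS IO ON     -- logical/physical reads per table
SET STATISTICS TIME ON   -- parse/compile and execution times
GO
SELECT c.CustomerName, SUM(o.Total) AS Total
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20040101'
GROUP BY c.CustomerName
GO
SET STATISTICS IO OFF
SET STATISTICS TIME OFF
GO
-- Per the advice above, also look at the plan: SET STATISTICS PROFILE ON
-- executes the query and returns the actual plan as an extra resultset,
-- while SET SHOWPLAN_TEXT ON shows the estimated plan without executing.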
"Bill Ray" <rinfo@.mail.com> wrote in message
news:63b44ea.0407020726.156a0c50@.posting.google.com...
> I was wondering if somebody could critique my query tuning technique.
> I am primarily a developer and only a part time dba. I was trained by
> a great dba, but it was back in the sql 6.5 and 7.0 days. I
> concentrate pretty heavily on logical reads, and I am wondering if my
> technique is still valid or if I should be concentrating elsewhere?
> We use sql profiler and find queries that are taking longer than 15
> seconds to run. I get the sql and run it in query analyzer and turn
> statistics io on and look at logical reads. I keep tuning the sql and
> try to get logical reads down. My goal is to get them to less than
> 10,000 logical reads. But if I get a query from 4,000,000 read to
> 200,000, I believe I have made a good improvement.
> Some of our code calls multiple sql statements in sequence, so I
> analyze the code and have a pretty good idea what tables will probably
> be cached. If I see that the tables will be cached because of previous
> sql statements, then I can live with a query that has higher logical
> reads.
> I pretty much don't understand the graphical Execution Plan. Am I
> crazy ignoring this? Or is my above technique pretty good?

Logical Design Question

Sorry for the long post, but there is no short way to explain this. So read only if you have the time. If you take the time, I really, really appreciate it.
Bill
I have a logical design situation that seems not to fit well in the relational model (at least the way I am doing it – with no help from triggers). I work for a group of commercial photographers who are contracted to photograph events (graduations, races, fraternity socials, etc.). They offer Packages to their customers. A Package may consist of {4-5x7's, 2-8x10's, 16-Wallets}. The customer can choose the package as is, or apply an Option to the package for a small additional price. An option is Black & White, Sepia (antique), or Colorize. There are many options, but the example I just gave is an Option Set from which only one selection can be made. Another Option Set might consist of border styles to be applied around each photo.
An option set and all its subsets are possible ways to construct the options that will be presented to the customer. For example, the example I just gave you has 3 members, but there are 6 additional subsets of this set, not counting the empty set. So the maximum set of options and all its subsets form an OptionGroup. Only one OptionSet from within an OptionGroup is allowed to be set up as an option for a Package.
Furthermore, an option can belong to only one OptionGroup. But, as explained above, an option can belong to many OptionSets.
Here is the problem:
Given a Package table consisting of Package_ID as the PK, I need to associate one or more OptionSets. I can associate multiple option sets to the same package as long as they are from different OptionGroups.
If that associative table consists of Package_ID, OptionGroup_ID, OptionSet_ID, and Price columns with the first three columns as the PK, I violate the business rule that only one OptionSet from the same OptionGroup can be associated to the same package.
If I make Package_ID, OptionGroup_ID the primary key, I can enforce this rule, but the other relationships look weird. Here is what I came up with (if you model these tables you will see the weirdness):
OptionGroup (table)
OptionGroup_ID (PK)
OptionGroupName
OptionSet (table)
OptionSet_ID (PK)
OptionGroup_ID (FK) (non-identifying relationship with OptionGroup table)
Option_OptionSet (table)
OptionSet_ID (PK, FK) (identifying relationship with OptionSet table)
Option_ID (PK, FK) (identifying relationship with Option table)
OptionGroup_ID (FK) (non-identifying relationship with OptionGroup table)
Option (table)
Option_ID (PK)
OptionName
Package (table)
Package_ID (PK)
PackageName
PackageOptionSet (table)
Package_ID (PK, FK) (identifying relationship with Package table)
OptionGroup_ID (PK, FK) (identifying relationship with OptionGroup table)
OptionSet_ID (FK) (non-identifying relationship with OptionSet table)
Price

Have you ever been to this site:
http://www.datamodel.org/
ML
p.s. sometimes it's up, sometimes it's down...

Well, apparently it is down now. In any case, I prefer MSDN Groups.
"ML" wrote:

> Have you ever been to this site:
> http://www.datamodel.org/
>
> ML
> p.s. sometimes it's up, sometimes it's down...|||This might also help:
http://www.agiledata.org/essays/dataModeling101.html
ML

I know how to model data. This is a particularly hard one to model; that is why I am asking the question. I am looking for another modeler's perspective on this particular problem, not advice on how to model data.
"ML" wrote:

> This might also help:
> http://www.agiledata.org/essays/dataModeling101.html
>
> ML|||I'd choose an identifying relationship:
OptionSet (table)
OptionGroup_ID (PK, FK) (identifying relationship with OptionGroup table)
OptionSet_ID (PK)

You need to isolate relationships in your proposed Option_OptionSet and
PackageOptionSet tables. It seems you started designing tables before you
laid out the entire logical model.
Try something like this (<-- = 'one to many'):
1)
Package <-- Photo <-- OptionSet --> OptionGroup
OptionSet <-- Option
or:
Package <-- Photo
Package <-- OptionSet --> OptionGroup
OptionSet <-- Option
2)
Photographer <-- Customer (don't know if you really need this)
Customer <-- Package
ML

ML, AK,
Using your notation (< is the one side and -- is the many side), the correct
relationships are:
OptionGroup<--OptionSet <-- Option
Package <-- PackageOptionSet --> OptionGroup
The problem with the first relationship is that you can have groups 1 and 2
which is OK. You can have options A and B, which is also OK. You can have
1,A and 2,A in OptionSet, which is not OK. An option can only belong to one
group.
Assuming we have Options assigned to only one group we still have a problem
with the second relationship. For example, suppose we have the OptionSet
{1,A; 1,B; 1,C} called OS1 and OptionSet {1,A; 1,B} called OS2 (the first number is the group ID and the second is the option ID). This is OK.
The 2nd relationship permits the record -- P1,1,OS1 and the record -- P1,1,OS2 in PackageOptionSet, which is not OK. Only one option set from a
group can be assigned to a Package (P1 is the Package ID, 1 is the Group ID,
and OS1 or OS2 is the assigned option set.)
The above situation is an example of offering the Options of Black & White,
Sepia, and Colorize for one package but only offering Black & White and Sepia for another package. You don't want the possibility of both option sets being assigned to a package.
Bill
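One way to capture both rules declaratively, without triggers, is to key PackageOptionSet on (Package_ID, OptionGroup_ID) and lean on composite foreign keys. A sketch (the table names come from the thread; column types are guesses):

CREATE TABLE OptionGroup (
    OptionGroup_ID  int NOT NULL PRIMARY KEY,
    OptionGroupName varchar(50) NOT NULL
)

CREATE TABLE [Option] (        -- OPTION is reserved in T-SQL, hence brackets
    Option_ID      int NOT NULL PRIMARY KEY,
    OptionName     varchar(50) NOT NULL,
    OptionGroup_ID int NOT NULL REFERENCES OptionGroup (OptionGroup_ID),
    UNIQUE (OptionGroup_ID, Option_ID)  -- an option lives in exactly one group
)

CREATE TABLE OptionSet (
    OptionGroup_ID int NOT NULL REFERENCES OptionGroup (OptionGroup_ID),
    OptionSet_ID   int NOT NULL,
    PRIMARY KEY (OptionGroup_ID, OptionSet_ID)  -- identifying relationship
)

CREATE TABLE Option_OptionSet (
    OptionGroup_ID int NOT NULL,
    OptionSet_ID   int NOT NULL,
    Option_ID      int NOT NULL,
    PRIMARY KEY (OptionGroup_ID, OptionSet_ID, Option_ID),
    FOREIGN KEY (OptionGroup_ID, OptionSet_ID)
        REFERENCES OptionSet (OptionGroup_ID, OptionSet_ID),
    FOREIGN KEY (OptionGroup_ID, Option_ID)              -- members must come
        REFERENCES [Option] (OptionGroup_ID, Option_ID)  -- from the set's group
)

CREATE TABLE Package (
    Package_ID  int NOT NULL PRIMARY KEY,
    PackageName varchar(50) NOT NULL
)

CREATE TABLE PackageOptionSet (
    Package_ID     int NOT NULL REFERENCES Package (Package_ID),
    OptionGroup_ID int NOT NULL,
    OptionSet_ID   int NOT NULL,
    Price          money NOT NULL,
    PRIMARY KEY (Package_ID, OptionGroup_ID),  -- one set per group per package
    FOREIGN KEY (OptionGroup_ID, OptionSet_ID)
        REFERENCES OptionSet (OptionGroup_ID, OptionSet_ID)
)

The composite foreign key from PackageOptionSet to OptionSet keeps the (Package_ID, OptionGroup_ID) key honest: the chosen set must actually belong to the group named in the key.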
"ML" wrote:

> You need to isolate relationships in your proposed Option_OptionSet and
> PackageOptionSet tables. It seems you started designing tables before you
> laid out the entire logical model.
> Try something like this (<-- = 'one to many'):
> 1)
> Package <-- Photo <-- OptionSet --> OptionGroup
> OptionSet <-- Option
> or:
> Package <-- Photo
> Package <-- OptionSet --> OptionGroup
> OptionSet <-- Option
> 2)
> Photographer <-- Customer (don't know if you really need this)
> Customer <-- Package
>
> ML|||In general, requesting solutions for data design problems in newsgroups is
somewhat risky, since there is a fair chance that several aspects of the business model are misunderstood or miscommunicated. Since your exact requirements are not fully transparent to others in this group, the suggested solutions can sometimes be misguided as well. If this is a critical project, I would suggest you seek the help of an experienced and trained professional who can have direct access to your business model and data requirements.
With that preface, can you explain your tables a bit more elaborately? Based
on what I understood, OptionGroup is just a derivable resultset out of
existing data, right? Why not use a view for displaying the data to the
customers? Is the set of options time-varying -- if not, based on your
descriptions, aren't the options really attributes rather than entities?
Anith

I know it is very hard to understand business rules communicated in this way
and just as hard to figure out a way to present the rules. While it is
obvious to me, that is because I have worked this business so long.
An OptionGroup is not a derivable resultset. Options exist independently, and some are mutually exclusive; for example, you can't have Black & White and Sepia on the same photo.
For example, one OptionGroup (call it the Tone Group) consists of these OptionSets:
{BW, Sep, Cz}
{BW, Sep}
{BW, Cz}
{Sep, Cz}
{BW}
{Sep}
{Cz}
Only one of these sets can be assigned to a Package. The concept of an OptionGroup came into being solely as a way to facilitate this rule. If all these OptionSets belong to the same group, the application and database have a way to know that only one option set from a group can be assigned to a package. The OptionGroup also facilitates the user being able to offer different combinations of the same options.
The concept of an OptionSet came into being as a way for the application to know that, when presenting a set of options from the same option set, only one choice is allowed.
Suppose you have another Option Group called Border consisting of:
{FC, BN}
{FC}
{BN}
It is permissible to assign {BW, Sep} and {FC} as option sets. The application knows these are from different groups. At order-taking time, the application knows to present BW and Sep as one choice and FC as another choice. So the customer can choose (BW and FC) or (Sep and FC) or (just BW or just FC), but not (BW and Sep).
I hope this helps.
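To make that concrete against the DDL sketch earlier in the thread (all IDs here are invented), the Tone group and its assignment to a package would look like:

INSERT INTO OptionGroup VALUES (1, 'Tone')
INSERT INTO [Option] VALUES (10, 'BW', 1)
INSERT INTO [Option] VALUES (11, 'Sep', 1)
INSERT INTO [Option] VALUES (12, 'Cz', 1)

-- Two of the seven possible subsets, as option sets 1 = {BW, Sep, Cz}
-- and 2 = {BW, Sep}:
INSERT INTO OptionSet VALUES (1, 1)
INSERT INTO OptionSet VALUES (1, 2)
INSERT INTO Option_OptionSet VALUES (1, 1, 10)
INSERT INTO Option_OptionSet VALUES (1, 1, 11)
INSERT INTO Option_OptionSet VALUES (1, 1, 12)
INSERT INTO Option_OptionSet VALUES (1, 2, 10)
INSERT INTO Option_OptionSet VALUES (1, 2, 11)

INSERT INTO Package VALUES (100, 'Graduation Basic')

-- Assign {BW, Sep, Cz} to the package:
INSERT INTO PackageOptionSet VALUES (100, 1, 1, 5.00)
-- Also assigning {BW, Sep} from the same Tone group violates the primary
-- key (Package_ID, OptionGroup_ID) and is rejected, which is exactly the
-- business rule:
INSERT INTO PackageOptionSet VALUES (100, 1, 2, 4.00)  -- fails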
"Anith Sen" wrote:

> In general, requesting solutions for data design problems in newsgroups is
> somewhat risky since there is a fair chance that several aspects of the
> business model is misunderstood and miscommunicated. Since your exact
> requirements are transparent to others in this group the suggested solutio
ns
> can sometimes be misguided as well. If this is a critical project, I would
> suggest you s the help of an experienced and trained professional who c
an
> have direct access to your business model and data requirements.
> With that preface, can you explain your tables a bit more elaborately? Bas
ed
> on what I understood, Optiongroup is just a derivable resultset out of
> existing data, right? Why not use a view for displaying the data to the
> customers? Is the set of options time-varying -- if not, based on your
> descriptions, aren't the options really attributes rather than entities?
> --
> Anith
>
>

Friday, February 24, 2012


Logging use of stored procedures

Hello,
I would like to know if it is possible to log the use of stored procedures in SQL 2000 (and later in SQL 2005). We want to log every time a user makes use of a stored procedure, without adding an entry to each stored procedure.
Is this possible? If yes, how?
Thanks,
Frank Wijten

Afraid not - you would need to collect the data via a profiling session and
then take a look at the results to see your procedure usage.
"Frank Wijten" <f.wijten@.info.umcn.nl_NO_SPAM> wrote in message
news:Ox$65tQdHHA.4616@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I would like to know if it is possible to log the use of stored procedures
> in SQL 2000 (and later in SQL 2005)
> We want to log every time if a user makes use of a stored procedure
> without adding a entry in each Stored procedure.
> Is this possible ? if yes, how ?
> thanks,
> Frank Wijten
You can use SQL Trace to implement your custom tracing.
Look in Books Online under the path:
SQL Server Database Engine -> Administering the Database Engine -> Monitoring and Tuning for Performance -> Monitoring Events -> Introducing SQL Trace.
You should find it really interesting, and it may be a way to satisfy your requirements.
Gilberto
"Frank Wijten" wrote:

> Hello,
> I would like to know if it is possible to log the use of stored procedures
> in SQL 2000 (and later in SQL 2005)
> We want to log every time if a user makes use of a stored procedure withou
t
> adding a entry in each Stored procedure.
> Is this possible ? if yes, how ?
> thanks,
> Frank Wijten
>
>|||"Frank Wijten" <f.wijten@.info.umcn.nl_NO_SPAM> wrote in message
news:Ox$65tQdHHA.4616@.TK2MSFTNGP03.phx.gbl...
> Hello,
> I would like to know if it is possible to log the use of stored procedures
> in SQL 2000 (and later in SQL 2005)
> We want to log every time if a user makes use of a stored procedure
> without adding a entry in each Stored procedure.
> Is this possible ? if yes, how ?
> thanks,
> Frank Wijten
>
As others have pointed out, this IS possible using SQL trace or Profiler.
HOWEVER, it can potentially be very costly in terms of performance.
Just be aware.
--
Greg Moore
SQL Server DBA Consulting Remote and Onsite available!
Email: sql (at) greenms.com http://www.greenms.com/sqlserver.html
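For a flavour of the server-side approach, here is a minimal sketch of a trace that records stored procedure executions (the file path is an example; event 43 is SP:Completed, and columns 11, 34, and 14 are LoginName, ObjectName, and StartTime):

DECLARE @traceid int, @maxsize bigint, @on bit
SET @maxsize = 50   -- megabytes per rollover file
SET @on = 1

-- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is added automatically.
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\traces\proc_usage', @maxsize
EXEC sp_trace_setevent @traceid, 43, 11, @on   -- SP:Completed, LoginName
EXEC sp_trace_setevent @traceid, 43, 34, @on   -- SP:Completed, ObjectName
EXEC sp_trace_setevent @traceid, 43, 14, @on   -- SP:Completed, StartTime
EXEC sp_trace_setstatus @traceid, 1            -- start the trace

-- Later, load the captured file for analysis:
-- SELECT * FROM ::fn_trace_gettable(N'C:\traces\proc_usage.trc', DEFAULT)

As noted above, keep the event and column list tight to limit the performance cost.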
