
Friday, March 9, 2012

Logical XOR operator in T-SQL (SQL Server 2000)

Hi all....

I was wondering why SQL Server 2000 lacks a logical XOR operator. Do you have a ready-to-use function that implements it?

How can I program one myself? I know that A XOR B is equivalent to NOT A * B + A * NOT B. So I tried to create this function:

CREATE FUNCTION XOR
(
    -- Add the parameters for the function here
    @A BIT,
    @B BIT
)
RETURNS BIT
AS
BEGIN
    RETURN NOT @A AND @B OR @A + NOT @B
END

But it didn't compile.

How can I pass boolean values as parameters? Any other hint?

Thanks

Jaime

There is an XOR operator -- it is the caret (^). In the Books Online, look up "operators (symbols), bitwise".

|||

As you said, that is a bitwise operator, not a logical one.

For example, using ^ I couldn't do something like:

SELECT *
FROM TABLE
WHERE (A = 10) ^ (B = 5)

which should return records when either A = 10 or B = 5; when both conditions are satisfied, or neither is, I don't want records returned.

Another suggestion to accomplish that? Of course this is a simplified scenario. In the case of 2 conditions I can perfectly well write:

WHERE (A <> 10 AND B = 5) OR (A = 10 AND B <> 5)

The only solution I see is to implement a XOR function myself and to nest the calls, but I couldn't (or don't know how to) pass a boolean value as a function parameter. If you know how, please tell me.
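For what it's worth, a scalar function along these lines does compile on SQL Server 2000, since the bitwise ^ operator is defined for BIT operands; this is a sketch, not taken from the thread:

```sql
-- Sketch: a scalar XOR over BIT parameters. BIT stands in for boolean,
-- which T-SQL lacks; on 0/1 values, bitwise ^ behaves as logical XOR.
CREATE FUNCTION dbo.XOR
(
    @A BIT,
    @B BIT
)
RETURNS BIT
AS
BEGIN
    RETURN @A ^ @B
END
```

Callers still have to fold each condition into a BIT first, e.g. `SELECT dbo.XOR(1, 0)` returns 1, and in a WHERE clause you'd write something like `dbo.XOR(CASE WHEN A = 10 THEN 1 ELSE 0 END, CASE WHEN B = 5 THEN 1 ELSE 0 END) = 1`.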

Thanks

Jaime

|||

There is no boolean data type in T-SQL, and there are only a few system built-ins, like CONTAINS, that can be used directly in a WHERE clause. So you need to use the XOR operator and test the return value of an expression like:

WHERE (case A when 10 then 1 else 0 end)^(case B when 5 then 1 else 0 end) = 1

|||

The reason your UDF does not work is that BIT is not the same as BOOLEAN. There is no Boolean type in SQL Server, so you can't do:

SELECT * FROM Table WHERE Column

Instead (assuming that Column is typed as BIT) you have to do:

SELECT * FROM Table WHERE Column = 1

Anyway, it's quite easy to do XOR inline:

SELECT *
FROM Table
WHERE (Column1 = 1 AND Column2 <> 1)
OR (Column1 <> 1 AND Column2 = 1)

-- Adam Machanic
Pro SQL Server 2005, available now
http://www.apress.com/book/bookDisplay.html?bID=457

Logical XOR

How do I write this in T-SQL?

Topic Is Null XOR TopicKey Is Null

Currently I write this,

(Topic Is Null OR TopicKey Is Null) AND (Topic Is NOT Null OR TopicKey Is NOT Null)

but that starts to get really long when I have three or four values.

Jonathan Allen

Of dubious value…

CREATE TABLE dbo.XOR
(
    id int
    ,this varchar(10)
    ,that varchar(10)
    ,what varchar(10)
    ,how varchar(10)
)
GO
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(1,'a','b','c','d')
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(2,'a','b','c',NULL)
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(3,'a','b',NULL,'d')
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(4,'a','b','c',NULL)
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(5,NULL,'b','c','d')
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(6,'a',NULL,'c','d')
INSERT INTO dbo.XOR (id,this,that,what,how) VALUES(7,'a',NULL,'c',NULL)

SELECT *
FROM dbo.XOR
WHERE LEN(this) ^ LEN(that) IS NULL
OR LEN(what) ^ LEN(how) IS NULL

id   this  that  what  how
--   ----  ----  ----  ----
2    a     b     c     NULL
3    a     b     NULL  d
4    a     b     c     NULL
5    NULL  b     c     d
6    a     NULL  c     d
7    a     NULL  c     NULL

See SQL Server 2005 Books Online topic:

^ (Bitwise Exclusive OR) (Transact-SQL)

|||

You can do below:

case when Topic is null then 1 else 0 end ^ case when TopicKey is null then 1 else 0 end = 1

Note however that this rewrite will not be able to use any index on the Topic/TopicKey columns efficiently (no seeks), so it may or may not work well depending on your schema.
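For the three-or-four-value case Jonathan mentions, one way to avoid the exploding AND/OR rewrite (a sketch; the table and the third column TopicGroup are invented for illustration) is to add up CASE results and require the total to be exactly 1:

```sql
-- "Exactly one of the three columns is NULL" over hypothetical columns.
-- Each CASE yields 1 when its condition holds, 0 otherwise, so the sum
-- counts how many conditions are true.
SELECT *
FROM Topics
WHERE (CASE WHEN Topic      IS NULL THEN 1 ELSE 0 END)
    + (CASE WHEN TopicKey   IS NULL THEN 1 ELSE 0 END)
    + (CASE WHEN TopicGroup IS NULL THEN 1 ELSE 0 END) = 1
```

Comparing the sum to 1 gives "exactly one". A chained XOR over three or more inputs is odd parity instead, which would be the same sum tested with `% 2 = 1`. The index-seek caveat above applies here as well.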

Logical Tables in Trigger?

Hi,
What are the Logical Tables in triggers?
Is there anything like that is available?
Please help me in this regard.
Thanks,
Shhyam
While you are in a trigger you have access to two tables, named inserted and
deleted. They are read-only and have the same structure as the table the
trigger is based on.
Wayne Snyder, MCDBA, SQL Server MVP
Mariner, Charlotte, NC
www.mariner-usa.com
(Please respond only to the newsgroups.)
I support the Professional Association of SQL Server (PASS) and its
community of SQL Server professionals.
www.sqlpass.org
|||Hi Wayne,
I get your point, but it would be helpful for me if you can provide more
details on this. Any URL/helpfile would be appreciated.
Regards,
shyam
"Wayne Snyder" wrote:

> WHile you are in a trigger you have access to 2 tables, named inserted and
> deleted. They are read-only and have the same structure as the table the
> trigger is based on.
> --
> Wayne Snyder, MCDBA, SQL Server MVP
> Mariner, Charlotte, NC
> www.mariner-usa.com
> (Please respond only to the newsgroups.)
> I support the Professional Association of SQL Server (PASS) and it's
> community of SQL Server professionals.
> www.sqlpass.org
> "Shyam" <Shyam@.discussions.microsoft.com> wrote in message
> news:9F7FB9AF-6953-426C-9142-3F3BBEC61899@.microsoft.com...
>
>
|||Hi Shyam
The best helpfile is SQL Server's own documentation. Look up CREATE TRIGGER
in the Books Online and that should get you started, as there are quite a
few examples under that topic.
HTH
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
|||Thanks all for the quick help.
Regards,
shyam
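To make the inserted and deleted tables concrete, here is a minimal sketch of an audit trigger that reads both. The table and column names are made up for illustration, not from the thread:

```sql
-- Hypothetical tables, invented for this example.
CREATE TABLE dbo.Employee (EmployeeID int PRIMARY KEY, Salary money)
CREATE TABLE dbo.EmployeeSalaryAudit
    (EmployeeID int, OldSalary money, NewSalary money, ChangedAt datetime)
GO
-- Inside the trigger, deleted holds the before-image of the updated rows
-- and inserted holds the after-image; both are read-only.
CREATE TRIGGER trg_Employee_AuditSalary
ON dbo.Employee
AFTER UPDATE
AS
BEGIN
    INSERT INTO dbo.EmployeeSalaryAudit (EmployeeID, OldSalary, NewSalary, ChangedAt)
    SELECT d.EmployeeID, d.Salary, i.Salary, GETDATE()
    FROM deleted d
    JOIN inserted i ON i.EmployeeID = d.EmployeeID
    WHERE d.Salary <> i.Salary
END
```

Joining deleted to inserted on the primary key is what lets the trigger pair each row's old and new values, and it works correctly for multi-row UPDATE statements.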



Logical Scan Fragmentation..issue!

Hello,
I have an issue or problem, I don't know. I have tried DBCC SHOWCONTIG on
all tables in my database to see the condition of index fragmentation.
For example, here is one of those tables:
DBCC SHOWCONTIG scanning 'studenti' table...
Table: 'studenti' (1635536910); index ID: 0, database ID: 5
TABLE level scan performed.
- Pages Scanned........................: 63
- Extents Scanned.......................: 11
- Extent Switches.......................: 10
- Avg. Pages per Extent..................: 5.7
- Scan Density [Best Count:Actual Count]......: 72.73% [8:11]
- Extent Scan Fragmentation ...............: 36.36%
- Avg. Bytes Free per Page................: 492.7
- Avg. Page Density (full)................: 93.91%
It's not a big table.. so far 1209 rows.
If you noticed, there is no value for Logical Scan Fragmentation. All my
tables in the database have that value (45 of them) and just 4 of them do not.
Executing DBCC INDEXDEFRAG on those 4 tables, with any fill factor,
doesn't change any of the values in the example above.
Other tables, with Logical Scan Fragmentation values shown, are OK, and DBCC
INDEXDEFRAG does just fine.. as expected.
What are possible reasons for this issue or problem?
Thanx
It's because the table is a heap (has no clustered index). You can tell
because the index ID is 0. If you check the other tables you should see that
you don't get Logical Scan Fragmentation for any heaps. Here's a quick query
that will determine tables with no clustered index:

select [name] from sysobjects
where objectproperty(id,'IsUserTable')=1
and objectproperty(id,'TableHasClustIndex')=0

Logical Scan Fragmentation is not relevant for heaps.
HTH
Jasper Smith (SQL Server MVP)
http://www.sqldbatips.com
I support PASS - the definitive, global
community for SQL Server professionals -
http://www.sqlpass.org
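If you do want DBCC SHOWCONTIG to report Logical Scan Fragmentation for those 4 tables, one option (a sketch; the key column name is hypothetical) is to give each heap a clustered index:

```sql
-- A heap (index ID 0) has no logical page order to measure. After a
-- clustered index exists, SHOWCONTIG scans index ID 1 and its output
-- includes the Logical Scan Fragmentation line.
CREATE CLUSTERED INDEX CIX_studenti ON dbo.studenti (student_id)
DBCC SHOWCONTIG ('studenti')
```

DBCC INDEXDEFRAG only reorders index leaf pages, which is why it has nothing to do on a heap.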



Logical Scan fragmentation ?

Can someone explain this statement ?
"An out of order page is one for which the next page indicated in an IAM is
a different page than the page pointed to by the next page pointer in the
leaf page."
Does this mean that DBCC CHECKTABLE may report some allocation errors?

|||Hi Hassan
No, this is not an error. The IAM indicates the next physical page and the
pointers indicate the next logical page. It just means that a page is
out of order. :-)
So let's say your table is on pages 82, 86 and 95. The IAM would indicate
them in that physical order. But it may be that the first rows in the index
are on page 95 (for example, values A-G), then the next logical values are on
page 86 (values H-S) and the next are on page 82 (values T-Z). When you
are on page 86, the pointers will say that the next page is 82, but the IAM
will say that the next page is 95. This is just fragmentation, not an
error.
--
HTH
--
Kalen Delaney
SQL Server MVP
www.SolidQualityLearning.com
|||Read the whitepaper at:
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/prodtechnol/sql/maintain/optimize/ss2kidbp.asp
which should explain everything for you.
--
Paul Randal
DBCC Technical Lead, Microsoft SQL Server Storage Engine
This posting is provided "AS IS" with no warranties, and confers no rights.

Logical Records

The organisation I'm working for has SQL Server 2005 merge replication up and running on 11 servers in 3 different countries.

Occasionally we experience problems caused by rows in a logical record relationship being replicated between servers in the wrong order, i.e. an attempt is made to insert child rows before the parent. I am not able to reproduce this behaviour on my test servers.

Has anyone else experienced this problem?

It's entirely possible for a child row to be inserted before a parent row, but it should be greatly reduced in SQL 2005. What kind of errors are you having?|||

I'm sorry, I misspoke. The whole point of logical records is that the whole logical record relationship is committed as one transaction.

Can you tell me what the errors are that you're seeing, and can you verify that the article that's failing is marked for logical records?

|||

We have a master database filtered and published to 2 separate databases on the same cluster (overlapping partitions).

Each of these 2 databases is republished to another 4 servers, where it is republished again.

Users update a local SQL Express database which is programmatically synchronised (anonymous pull subscription, via RMO) with one of the 4 servers when a transaction completes.

Very occasionally, if a user updates an existing parent row and then adds a new parent row with all its children, and both programmatic synchronisations occur within the 60 second polling interval on the server, then the data is passed up to the cluster as a parent update, child inserts and a parent insert, in that order. As a result the child rows in the server and user databases are deleted. I have not been able to reproduce the problem in the test environment but I have enough logging in place to know exactly what's happening.

I know that the logical row relationships exist because I can see them in sysmergesubsetfilters but sp_helpmergefilter won't show them to me unless they're filter_type 3 (filter and logical record relationship).

|||Sorry but it's not clear to me what the problem is. If you update parent row with ID1, and insert new parent row with ID2 and child row with FK=ID2, why would the child rows in the server and user dbs get deleted? And for which ID are they getting deleted, ID1? Are you saying the updates are moving the row out of partition?|||

The child rows for ID2 get deleted.

Inserting the child rows into the 3rd level database is not a problem because we have NOT FOR REPLICATION on the foreign keys.

When the 2nd level merge agent runs it receives the child rows before the parent and therefore appears not to be able to determine which partition they belong to, so deletes them.

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted before the parent.

Currently all article filters are type 3 (filter and logical record). Changing filter type on the 2nd and 3rd levels to 2 (logical record only) appears to make no difference in my test environment because I can't reproduce the problem to start with.

|||

That should be:

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted *after* the parent.

|||

Books Online says NFR (Not For Replication) cannot be placed on an FK constraint involved with logical records. Are you sure the articles are involved in a logical record relationship? That's what you seem to be implying above. Logical records should be committing the whole thing as one txn, so even if the child goes before the parent, it shouldn't matter in a single transaction. Can you verify that the articles involved are indeed part of logical records?

|||OK, I read sp_addmergearticle in BOL and understand what option 3 is; I guess this would allow you to add NFR to your FK constraint. May I ask why you have it set to 3 instead of 2?|||

It wasn't intentional.

I scripted the sp_addmergearticle definitions from the top-level database and used them. It seemed to work, at least to start with....

|||

Can I have the URL for that BOL reference please?

I believe the articles are involved in a logical record relationship because I can see them in sysmergesubsetfilters and because sp_helpmergefilter says so.

However, I take your point that the relationship might not be working at all as opposed to working intermittently although I'm surprised at how rarely it causes a problem.

|||

See topic "Grouping Changes to Related Rows with Logical Records", http://msdn2.microsoft.com/en-us/library/ms152507.aspx.

The easiest way to fix the problem is to set @filter_type = 2; this will ensure your relational changes get committed all at once. Other than that, the fact that a child row goes before a parent row is by design (as long as @filter_type <> 2); it's possible for this to happen. Note that this doesn't cause any real data loss, it just seems that way. This usually happens with some weird timing issue, or if your connection gets broken mid-sync. What happens is the child row insert gets to the publisher; the publisher will see the child row originated from the subscriber, but knows there's no parent row to match the filter criteria, and thus sends a delete back down to the subscriber. But as soon as the parent row makes it to the publisher, it will recognize the child row doesn't exist at the subscriber and send the child insert back down to the subscriber. This works only if NFR is enabled, which I believe is what you have; without it you'd probably get conflicts.
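Assuming the articles from the thread (Stint as parent, Lap as child; the publication name and join clause here are invented for illustration), a logical-record-only relationship could be declared along these lines:

```sql
-- Sketch: @filter_type = 2 declares a logical record relationship only
-- (no join filter), so parent and child changes are applied as a single
-- transaction. Names below are hypothetical.
EXEC sp_addmergefilter
    @publication       = N'TeamPub',
    @article           = N'Lap',
    @filtername        = N'Stint_Lap',
    @join_articlename  = N'Stint',
    @join_filterclause = N'[Stint].[StintID] = [Lap].[StintID]',
    @join_unique_key   = 1,
    @filter_type       = 2
```

Changing @filter_type on an existing publication generally requires invalidating the snapshot and reinitializing subscriptions, which matches the scheduling constraint mentioned below.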

|||

Thanks.

We have this change scheduled for 29/5 (due to the need to re-initialise subscriptions) and I'll let you know what happens.

|||

Problem not solved.

We completely rebuilt the replication infrastructure from scratch using filter_type 2 and with no NFR clause on the foreign keys supporting the logical record relationships.

What we're now seeing is very clearly a bug in SQL Server.

If the republisher begins synchronising with the publisher and, before it finishes, a client synchronises its anonynous pull subscription with the republisher, then some of the client changes are lost (present on the republisher but never uploaded to the publisher)

Here are the messages captured during programmatic synchronisation on the client:

06/06/2007 11:15:32 Initializing
06/06/2007 11:15:32 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:32 Retrieving publication information
06/06/2007 11:15:32 Retrieving subscription information.
06/06/2007 11:15:32 Uploading data changes to the Publisher
06/06/2007 11:15:33 Enumerating deletes in all articles
06/06/2007 11:15:33 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Outing' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Stint' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Lap' (1 insert): 1 total
06/06/2007 11:15:33 Downloading data changes to the Subscriber
06/06/2007 11:15:34 Merge completed after processing 3 data change(s) (1 insert(s), 2 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:15:34 Synchronisation complete

06/06/2007 11:15:59 Initializing
06/06/2007 11:15:59 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:59 Retrieving publication information
06/06/2007 11:15:59 Retrieving subscription information.
06/06/2007 11:16:00 Uploading data changes to the Publisher
06/06/2007 11:16:01 Enumerating deletes in all articles
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Event'
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:16:01 Uploaded 1 change(s) in 'EventTyreset' (1 update): 1 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Outing' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 27 change(s) in 'Outing_AeroComponent' (27 inserts): 27 total
06/06/2007 11:16:01 Uploaded 8 change(s) in 'Outing_CofGComponent' (8 inserts): 8 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Stint' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 16 change(s) in 'Outing_BallastComponent' (16 inserts): 16 total
06/06/2007 11:16:01 Downloading data changes to the Subscriber
06/06/2007 11:16:02 Merge completed after processing 54 data change(s) (53 insert(s), 1 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:16:02 Synchronisation complete

And here's what happened on the republisher (in reverse chronological order):

No data needed to be merged.

06/06/2007 11:19:54

No data needed to be merged.

06/06/2007 11:19:41

No data needed to be merged.

06/06/2007 11:18:30

No data needed to be merged.

06/06/2007 11:18:18

Uploaded 1 change(s) in 'EventTyreset' (1 update): 11 total

06/06/2007 11:18:17

Uploading data changes to the Publisher

06/06/2007 11:16:48

No data needed to be merged.

06/06/2007 11:16:32

Uploaded 1 change(s) in 'Lap' (1 insert): 4 total

06/06/2007 11:16:27

Uploaded 1 change(s) in 'Stint' (1 update): 13 total

06/06/2007 11:16:25

Uploaded 1 change(s) in 'Outing' (1 update): 13 total

06/06/2007 11:16:19

Uploading data changes to the Publisher

06/06/2007 11:14:50

No data needed to be merged.

I am desperate to get a solution to this problem. Are there any post-SP2 hot fixes affecting replication?

Thanks in advance.

|||We are seeing something very similar. Updates from a CE database (SDF file) are coming up to the republisher database, but not getting to the Master database. Results are inconsistent at best. The replication seems to be working properly if a direct insert is done at either the republisher level or the Master level. We have not seen a failure in that scenario. We would also like an answer to the problem.

Logical Records

The organisation I'm working for has SQL Server 2005 merge replication up and running on 11 servers in 3 different countries.

Occasionally we experience problems caused by rows in a logical record relationship being replicated between servers in the wrong order, i.e. an attempt is made to insert child rows before the parent. I am not able to reproduce this behaviour on my test servers.

Has anyone else experienced this problem?

It's entirely possible for a child row to be inserted before its parent row, but this should happen far less often in SQL 2005. What kind of errors are you seeing?|||

I'm sorry, I misspoke. The whole point of logical records is that the entire logical record is committed as one transaction.

Can you tell me what the errors are that you're seeing, and can you verify that the article that's failing is marked for logical records?

|||

We have a master database filtered and published to 2 separate databases on the same cluster (overlapping partitions).

Each of these 2 databases is republished to another 4 servers, where it is republished again.

Users update a local SQL Express database which is programmatically synchronised (anonymous pull subscription, via RMO) with one of the 4 servers when a transaction completes.

Very occasionally, if a user updates an existing parent row and then adds a new parent row with all its children, and both programmatic synchronisations occur within the 60 second polling interval on the server, then the data is passed up to the cluster as a parent update, child inserts and a parent insert, in that order. As a result the child rows in the server and user databases are deleted. I have not been able to reproduce the problem in the test environment but I have enough logging in place to know exactly what's happening.

I know that the logical row relationships exist because I can see them in sysmergesubsetfilters but sp_helpmergefilter won't show them to me unless they're filter_type 3 (filter and logical record relationship).
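For reference, this is roughly how those two views of the metadata can be compared — a sketch with a hypothetical publication name, using the sysmergesubsetfilters column names as documented in BOL for SQL 2005:

```sql
-- Hypothetical publication name; run in the published database.
-- What sp_helpmergefilter reports:
EXEC sp_helpmergefilter @publication = N'MyPublication';

-- What the underlying metadata records, including the logical record joins:
SELECT art.name AS article,
       f.join_articlename,
       f.join_filterclause,
       f.filter_type        -- 1 = join filter, 2 = logical record, 3 = both
FROM dbo.sysmergesubsetfilters AS f
JOIN dbo.sysmergearticles      AS art ON art.artid = f.artid;
```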

|||Sorry, but it's not clear to me what the problem is. If you update the parent row with ID1, and insert a new parent row with ID2 and a child row with FK=ID2, why would the child rows in the server and user dbs get deleted? And for which ID are they getting deleted, ID1? Are you saying the updates are moving the row out of the partition?|||

The child rows for ID2 get deleted.

Inserting the child rows into the 3rd level database is not a problem because we have NOT FOR REPLICATION on the foreign keys.

When the 2nd level merge agent runs it receives the child rows before the parent and therefore appears not to be able to determine which partition they belong to, so deletes them.

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted before the parent.

Currently all article filters are type 3 (filter and logical record). Changing filter type on the 2nd and 3rd levels to 2 (logical record only) appears to make no difference in my test environment because I can't reproduce the problem to start with.
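For context, the NOT FOR REPLICATION clause mentioned above looks like this on a foreign key — a sketch with hypothetical table and column names:

```sql
-- Hypothetical schema: Child rows reference Parent. The constraint is only
-- enforced for user connections; the merge agent may apply rows in either
-- order without tripping it.
ALTER TABLE dbo.Child
ADD CONSTRAINT FK_Child_Parent
    FOREIGN KEY (ParentID)
    REFERENCES dbo.Parent (ParentID)
    NOT FOR REPLICATION;
```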

|||

That should be:

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted *after* the parent.

|||

Books Online says NFR (Not For Replication) cannot be placed on an FK constraint involved with logical records. Are you sure the articles are involved in a logical record relationship? That's what you seem to be implying above. Logical records should be committing the whole thing as one txn, so even if the child goes before the parent, it shouldn't matter in a single transaction. Can you verify that the articles involved are indeed part of logical records?

|||OK, I read sp_addmergearticle in BOL and understand what option 3 is, I guess this would allow you to add NFR to your FK constraint. May I ask why you have it set to 3 instead of 2?|||

It wasn't intentional.

I scripted the sp_addmergearticle definitions from the top-level database and used them. It seemed to work, at least to start with....

|||

Can I have the URL for that BOL reference please?

I believe the articles are involved in a logical record relationship because I can see them in sysmergesubsetfilters and because sp_helpmergefilter says so.

However, I take your point that the relationship might not be working at all as opposed to working intermittently although I'm surprised at how rarely it causes a problem.

|||

See topic "Grouping Changes to Related Rows with Logical Records", http://msdn2.microsoft.com/en-us/library/ms152507.aspx.

The easiest way to fix the problem is to set @filter_type = 2; this will ensure your related changes get committed all at once. Other than that, the fact that a child row goes before a parent row is by design (as long as @filter_type <> 2), it's possible for this to happen. Note that this doesn't cause any actual data loss, it just seems that way. This usually happens with some weird timing issue, or if your connection gets broken mid-sync. What happens is the child row insert gets to the publisher; the publisher sees the child row originated from the subscriber, but knows there's no parent row to match the filter criteria, and thus sends a delete back down to the subscriber. But as soon as the parent row makes it to the publisher, it recognizes that the child row doesn't exist at the subscriber and sends the child insert back down to the subscriber. This works only if NFR is enabled, which I believe is what you have; without it you'd probably get conflicts.
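The suggested change can be sketched as follows (hypothetical publication and article names) — @filter_type on sp_addmergefilter controls whether the join filter, the logical record relationship, or both are used:

```sql
-- Hypothetical names. @filter_type values:
--   1 = join filter only
--   2 = logical record relationship only
--   3 = join filter and logical record relationship
EXEC sp_addmergefilter
    @publication       = N'MyPublication',
    @article           = N'Child',
    @filtername        = N'Child_Parent',
    @join_articlename  = N'Parent',
    @join_filterclause = N'[Parent].[ParentID] = [Child].[ParentID]',
    @filter_type       = 2;
```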

|||

Thanks.

We have this change scheduled for 29/5 (due to the need to re-initialise subscriptions) and I'll let you know what happens.

|||

Problem not solved.

We completely rebuilt the replication infrastructure from scratch using filter_type 2 and with no NFR clause on the foreign keys supporting the logical record relationships.

What we're now seeing is very clearly a bug in SQL Server.

If the republisher begins synchronising with the publisher and, before it finishes, a client synchronises its anonymous pull subscription with the republisher, then some of the client changes are lost (present on the republisher but never uploaded to the publisher).

Here are the messages captured during programmatic synchronisation on the client:

06/06/2007 11:15:32 Initializing
06/06/2007 11:15:32 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:32 Retrieving publication information
06/06/2007 11:15:32 Retrieving subscription information.
06/06/2007 11:15:32 Uploading data changes to the Publisher
06/06/2007 11:15:33 Enumerating deletes in all articles
06/06/2007 11:15:33 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Outing' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Stint' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Lap' (1 insert): 1 total
06/06/2007 11:15:33 Downloading data changes to the Subscriber
06/06/2007 11:15:34 Merge completed after processing 3 data change(s) (1 insert(s), 2 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:15:34 Synchronisation complete

06/06/2007 11:15:59 Initializing
06/06/2007 11:15:59 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:59 Retrieving publication information
06/06/2007 11:15:59 Retrieving subscription information.
06/06/2007 11:16:00 Uploading data changes to the Publisher
06/06/2007 11:16:01 Enumerating deletes in all articles
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Event'
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:16:01 Uploaded 1 change(s) in 'EventTyreset' (1 update): 1 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Outing' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 27 change(s) in 'Outing_AeroComponent' (27 inserts): 27 total
06/06/2007 11:16:01 Uploaded 8 change(s) in 'Outing_CofGComponent' (8 inserts): 8 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Stint' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 16 change(s) in 'Outing_BallastComponent' (16 inserts): 16 total
06/06/2007 11:16:01 Downloading data changes to the Subscriber
06/06/2007 11:16:02 Merge completed after processing 54 data change(s) (53 insert(s), 1 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:16:02 Synchronisation complete

And here's what happened on the republisher (in reverse chronological order):

No data needed to be merged.

06/06/2007 11:19:54

No data needed to be merged.

06/06/2007 11:19:41

No data needed to be merged.

06/06/2007 11:18:30

No data needed to be merged.

06/06/2007 11:18:18

Uploaded 1 change(s) in 'EventTyreset' (1 update): 11 total

06/06/2007 11:18:17

Uploading data changes to the Publisher

06/06/2007 11:16:48

No data needed to be merged.

06/06/2007 11:16:32

Uploaded 1 change(s) in 'Lap' (1 insert): 4 total

06/06/2007 11:16:27

Uploaded 1 change(s) in 'Stint' (1 update): 13 total

06/06/2007 11:16:25

Uploaded 1 change(s) in 'Outing' (1 update): 13 total

06/06/2007 11:16:19

Uploading data changes to the Publisher

06/06/2007 11:14:50

No data needed to be merged.

I am desperate to get a solution to this problem. Are there any post-SP2 hot fixes affecting replication?

Thanks in advance.

|||We are seeing something very similar. Updates from a CE database (SDF file) are coming up to the republisher database, but not getting to the Master database. Results are inconsistent at best. The replication seems to be working properly if a direct insert is done at either the republisher level or the Master level. We have not seen a failure in that scenario. We would also like an answer to the problem.

Wednesday, March 7, 2012

Logical reads to tune sql

I was wondering if somebody could critique my query tuning technique.
I am primarily a developer and only a part time dba. I was trained by
a great dba, but it was back in the sql 6.5 and 7.0 days. I
concentrate pretty heavily on logical reads, and I am wondering if my
technique is still valid or if I should be concentrating elsewhere?
We use sql profiler and find queries that are taking longer than 15
seconds to run. I get the sql and run it in query analyzer and turn
statistics io on and look at logical reads. I keep tuning the sql and
try to get logical reads down. My goal is to get them to less than
10,000 logical reads. But if I get a query from 4,000,000 read to
200,000, I believe I have made a good improvement.
Some of our code calls multiple sql statements in sequence, so I
analyze the code and have a pretty good idea what tables will probably
be cached. If I see that the tables will be cached because of previous
sql statements, then I can live with a query that has higher logical
reads.
I pretty much don't understand the graphical Execution Plan. Am I
crazy ignoring this? Or is my above technique pretty good?

The goal is always to get the logical reads to a minimum. But there is no
way to tell if a particular # is as low as it can be without looking at the
query plan to some degree. I don't know what kind of queries you are doing,
but if you're only looking for queries over 15 seconds and think 10K reads is
great, you guys must have a lot of time on your hands to wait for
results<g>. Usually anything over 150ms is too long for me. Hopefully
these are some type of reporting queries and the amount of reads is
justified.
--
Andrew J. Kelly SQL MVP
"Bill Ray" <rinfo@.mail.com> wrote in message
news:63b44ea.0407020726.156a0c50@.posting.google.com...
> I was wondering if somebody could critique my query tuning technique.
> I am primarily a developer and only a part time dba. I was trained by
> a great dba, but it was back in the sql 6.5 and 7.0 days. I
> concentrate pretty heavily on logical reads, and I am wondering if my
> technique is still valid or if I should be concentrating elsewhere?
> We use sql profiler and find queries that are taking longer than 15
> seconds to run. I get the sql and run it in query analyzer and turn
> statistics io on and look at logical reads. I keep tuning the sql and
> try to get logical reads down. My goal is to get them to less than
> 10,000 logical reads. But if I get a query from 4,000,000 read to
> 200,000, I believe I have made a good improvement.
> Some of our code calls multiple sql statements in sequence, so I
> analyze the code and have a pretty good idea what tables will probably
> be cached. If I see that the tables will be cached because of previous
> sql statements, then I can live with a query that has higher logical
> reads.
> I pretty much don't understand the graphical Execution Plan. Am I
> crazy ignoring this? Or is my above technique pretty good?
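The workflow described above — run the query with I/O statistics enabled and compare logical reads between rewrites — looks roughly like this in Query Analyzer (hypothetical table and column names):

```sql
-- Hypothetical table. Compare 'logical reads' in the Messages pane
-- before and after each rewrite of the query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- Sample Messages output:
-- Table 'Orders'. Scan count 1, logical reads 484, physical reads 0, ...
```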

logical read vs actual row count

Hi

Can someone tell me the difference between the logical reads from STATISTICS IO and the actual row count from the actual execution plan? I have a query that shows the following:

statistics io
Scan count 1, logical reads 484

actual execution plan
Clustered Index Seek
Actual Row Count : 40067

Does this mean that SQL has to read 484 pages from disk, and that those 484 pages hold over 40067 rows?

I wrote my query another way, and numbers becomes

statistics io
Scan count 1, logical reads 1207

actual execution plan
Clustered Index Seek
Actual Row Count : 133

The 1st query took about 2 seconds, but the second query took about 4 seconds. Does this mean the logical read count is more important?

thanks

Paul


See SQL Server 2005 Books Online topics:
Reading Pages
http://msdn2.microsoft.com/en-us/library/ms191475.aspx

SET STATISTICS IO (Transact-SQL)
http://msdn2.microsoft.com/en-US/library/ms184361.aspx
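In short, the two numbers measure different things: logical reads count 8 KB pages touched in the buffer cache, while an operator's actual row count is the number of rows it produced — and many rows fit on one page. A sketch with a hypothetical table:

```sql
-- Hypothetical table. A clustered index seek that returns 40,067 rows
-- while touching 484 pages averages roughly 80 rows per 8 KB page.
SET STATISTICS IO ON;

SELECT *
FROM dbo.Trades
WHERE TradeDate = '20070601';  -- leading column of the clustered index

-- Messages pane:           Table 'Trades'. Scan count 1, logical reads 484, ...
-- Actual execution plan:   Clustered Index Seek, Actual Row Count 40067
```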

Logical read and DBCC dropcleanbuffers

Hi! I posted a question last week but didn't get any responses, so I'm trying
again.
Basically, I am in the process of tuning a query, and the number of logical
reads changes after I run DBCC DROPCLEANBUFFERS.
I understand that physical reads will be different after the cache is flushed
out, but why do logical reads change after flushing the cache, with the exact
same execution plan?
SQL 2000 SP4
Hi James
The Reads reported in Profiler also includes reads performed in the
procedure cache, which is variable depending on activity levels & dbcc
commands.
Profiler's reads data is useful for zeroing in on problems, but you should use
SET STATISTICS IO ON for a more accurate report on the logical reads
performed by an individual query in the Query Analyser / Management Studio.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"james" <kush@.brandes.com> wrote in message
news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
> Hi! I post a question last week but didn't get any responses, so trying it
> again.
> Basically, I am in a process of tuning a query and number of logical reads
> gets change after I run dbcc dropcleanbuffers.
> I understand that physical reads will be different after cache is flushed
> out but why logical read get changed after flushing the cache but with
> exact same execution plan.
> Sql 200 sp4
>
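For reference, a common cold-cache measurement pattern when tuning — note that DBCC DROPCLEANBUFFERS only evicts *clean* pages, so a CHECKPOINT is usually issued first to flush dirty pages (run this on a test server only):

```sql
-- Test server only: flush dirty pages to disk, then empty the buffer cache,
-- so the next run measures physical reads against a cold cache.
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

SET STATISTICS IO ON;
-- ... run the query being tuned here and compare logical/physical reads ...
```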
|||Hi Greg,
Thanks for the reply. I am already using SET STATISTICS IO ON to get the
logical reads; it's not from Profiler.
Can you think of any reason why the logical reads would be different for the
same query and dataset, with the exact same execution plan, when the only
difference is that I flushed the cache with DBCC DROPCLEANBUFFERS?
Any input is greatly appreciated.
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
> Hi James
> The Reads reported in Profiler also includes reads performed in the
> procedure cache, which is variable depending on activity levels & dbcc
> commands.
> Profiler's reads data is useful for zero'ing on problems, but you should
> use SET STATISTICS IO ON for more accurate report on the logical reads
> performed by an individual query in the Query Analyser / Management
> Studio.
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "james" <kush@.brandes.com> wrote in message
> news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
>
|||Hi James
Not off the top of my head, unfortunately. How much difference are you seeing? Are you
seeing only small differences or large ones?
Is your scenario reproducible via a script that you could post? If so, I'd
be happy to spend some time looking into it. If not, any chance you could
post the set statistics io output & profiler showplan all event's BinaryData
output for each call?
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"james" <kush@.brandes.com> wrote in message
news:enD3PXz5GHA.1860@.TK2MSFTNGP04.phx.gbl...
> Hi Greg,
> Thanks for the reply. I am already using Set statistics io on to get the
> logical reads, its not from Profiler.
> Do you think of any reason why logical read would be different for same
> qyery, same dataset with exact same execution plan and only difference is
> I flushed the cache with DBCC dropcleanbuffer.
> Any input is greatly appreciated.
> "Greg Linwood" <g_linwood@.hotmail.com> wrote in message
> news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
>
|||Hi Greg,
I have replied to your email address with the actual query, STATISTICS IO
output, and the Profiler trace output that you asked for. Forgive me for
sending it to your email rather than the entire group; I thought with all the
attachments it would be better to send it directly. Let me know if there's any
problem with that. Thanks once again.
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:u5NBAjz5GHA.4112@.TK2MSFTNGP04.phx.gbl...
> Hi James
> Not off the top unfortunately. How much difference are you seeing? Are you
> seeing only small differences or large differences/
> Is your scenario reproducable via a script that you could post? If so, I'd
> be happy to spend some time looking into it. If not, any chance you could
> post the set statistics io output & profiler showplan all event's
> BinaryData output for each call?
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "james" <kush@.brandes.com> wrote in message
> news:enD3PXz5GHA.1860@.TK2MSFTNGP04.phx.gbl...
>

Logical read and DBCC dropcleanbuffers

Hi! I post a question last week but didn't get any responses, so trying it
again.
Basically, I am in a process of tuning a query and number of logical reads
gets change after I run dbcc dropcleanbuffers.
I understand that physical reads will be different after cache is flushed
out but why logical read get changed after flushing the cache but with exact
same execution plan.
Sql 200 sp4Hi James
The Reads reported in Profiler also includes reads performed in the
procedure cache, which is variable depending on activity levels & dbcc
commands.
Profiler's reads data is useful for zero'ing on problems, but you should use
SET STATISTICS IO ON for more accurate report on the logical reads
performed by an individual query in the Query Analyser / Management Studio.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"james" <kush@.brandes.com> wrote in message
news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
> Hi! I post a question last week but didn't get any responses, so trying it
> again.
> Basically, I am in a process of tuning a query and number of logical reads
> gets change after I run dbcc dropcleanbuffers.
> I understand that physical reads will be different after cache is flushed
> out but why logical read get changed after flushing the cache but with
> exact same execution plan.
> Sql 200 sp4
>|||Hi Greg,
Thanks for the reply. I am already using Set statistics io on to get the
logical reads, its not from Profiler.
Do you think of any reason why logical read would be different for same
qyery, same dataset with exact same execution plan and only difference is I
flushed the cache with DBCC dropcleanbuffer.
Any input is greatly appreciated.
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
> Hi James
> The Reads reported in Profiler also includes reads performed in the
> procedure cache, which is variable depending on activity levels & dbcc
> commands.
Logical read and DBCC dropcleanbuffers

Hi! I posted a question last week but didn't get any responses, so I'm trying
again.
Basically, I am in the process of tuning a query, and the number of logical reads
changes after I run DBCC DROPCLEANBUFFERS.
I understand that physical reads will be different after the cache is flushed,
but why do logical reads change after flushing the cache when the execution
plan stays exactly the same?
SQL 2000 SP4|||Hi James
The Reads reported in Profiler also includes reads performed in the
procedure cache, which is variable depending on activity levels & dbcc
commands.
Profiler's reads data is useful for zeroing in on problems, but you should use
SET STATISTICS IO ON for a more accurate report on the logical reads
performed by an individual query in the Query Analyser / Management Studio.
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"james" <kush@.brandes.com> wrote in message
news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
> Hi! I posted a question last week but didn't get any responses, so trying it
> again.
> Basically, I am in the process of tuning a query, and the number of logical reads
> changes after I run DBCC DROPCLEANBUFFERS.
> I understand that physical reads will be different after the cache is flushed,
> but why do logical reads change after flushing the cache when the
> execution plan stays exactly the same?
> SQL 2000 SP4
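
Greg's suggestion can be sketched like this. All table and column names below are hypothetical, and DBCC DROPCLEANBUFFERS should only be run on a dev/test server:

```sql
-- Flush clean data pages from the buffer pool so the next run does physical I/O.
-- (Dev/test only -- this hurts everyone else's performance on a shared server.)
DBCC DROPCLEANBUFFERS;

SET STATISTICS IO ON;

-- Hypothetical query being tuned:
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = 42;

SET STATISTICS IO OFF;
-- The Messages tab then reports, per table:
--   logical reads  = pages read from the buffer cache
--   physical reads / read-ahead reads = pages actually pulled from disk
```

Unlike Profiler's Reads column, this output counts only the data-page reads of the one statement, which is why it gives a steadier number for comparing runs.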
>|||Hi Greg,
Thanks for the reply. I am already using SET STATISTICS IO ON to get the
logical reads; it's not from Profiler.
Can you think of any reason why the logical reads would be different for the same
query and the same data set, with exactly the same execution plan, when the only
difference is that I flushed the cache with DBCC DROPCLEANBUFFERS?
Any input is greatly appreciated.
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
> Hi James
> The Reads reported in Profiler also includes reads performed in the
> procedure cache, which is variable depending on activity levels & dbcc
> commands.
> Profiler's reads data is useful for zeroing in on problems, but you should
> use SET STATISTICS IO ON for a more accurate report on the logical reads
> performed by an individual query in the Query Analyser / Management
> Studio.
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "james" <kush@.brandes.com> wrote in message
> news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
>> Hi! I posted a question last week but didn't get any responses, so trying
>> it again.
>> Basically, I am in the process of tuning a query, and the number of logical
>> reads changes after I run DBCC DROPCLEANBUFFERS.
>> I understand that physical reads will be different after the cache is flushed,
>> but why do logical reads change after flushing the cache when the
>> execution plan stays exactly the same?
>> SQL 2000 SP4
>|||Hi James
Not off the top of my head, unfortunately. How much difference are you seeing? Are you
seeing only small differences or large differences?
Is your scenario reproducible via a script that you could post? If so, I'd
be happy to spend some time looking into it. If not, any chance you could
post the SET STATISTICS IO output & the Profiler Showplan All event's BinaryData
output for each call?
Regards,
Greg Linwood
SQL Server MVP
http://blogs.sqlserver.org.au/blogs/greg_linwood
"james" <kush@.brandes.com> wrote in message
news:enD3PXz5GHA.1860@.TK2MSFTNGP04.phx.gbl...
> Hi Greg,
> Thanks for the reply. I am already using SET STATISTICS IO ON to get the
> logical reads; it's not from Profiler.
> Can you think of any reason why the logical reads would be different for the same
> query and the same data set, with exactly the same execution plan, when the only
> difference is that I flushed the cache with DBCC DROPCLEANBUFFERS?
> Any input is greatly appreciated.
> "Greg Linwood" <g_linwood@.hotmail.com> wrote in message
> news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
>> Hi James
>> The Reads reported in Profiler also includes reads performed in the
>> procedure cache, which is variable depending on activity levels & dbcc
>> commands.
>> Profiler's reads data is useful for zeroing in on problems, but you should
>> use SET STATISTICS IO ON for a more accurate report on the logical reads
>> performed by an individual query in the Query Analyser / Management
>> Studio.
>> Regards,
>> Greg Linwood
>> SQL Server MVP
>> http://blogs.sqlserver.org.au/blogs/greg_linwood
>> "james" <kush@.brandes.com> wrote in message
>> news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
>> Hi! I posted a question last week but didn't get any responses, so trying
>> it again.
>> Basically, I am in the process of tuning a query, and the number of logical
>> reads changes after I run DBCC DROPCLEANBUFFERS.
>> I understand that physical reads will be different after the cache is
>> flushed,
>> but why do logical reads change after flushing the cache when the
>> execution plan stays exactly the same?
>> SQL 2000 SP4
>>
>|||Hi Greg,
I have sent to your email address the actual query, the STATISTICS IO output,
and the Profiler trace output that you asked for. Forgive me for sending it to your
email rather than to the entire group; I thought that with all the attachments
it would be better to send it directly. Let me know if there's any problem with
that. Thanks once again.
"Greg Linwood" <g_linwood@.hotmail.com> wrote in message
news:u5NBAjz5GHA.4112@.TK2MSFTNGP04.phx.gbl...
> Hi James
> Not off the top of my head, unfortunately. How much difference are you seeing? Are you
> seeing only small differences or large differences?
> Is your scenario reproducible via a script that you could post? If so, I'd
> be happy to spend some time looking into it. If not, any chance you could
> post the SET STATISTICS IO output & the Profiler Showplan All event's
> BinaryData output for each call?
> Regards,
> Greg Linwood
> SQL Server MVP
> http://blogs.sqlserver.org.au/blogs/greg_linwood
> "james" <kush@.brandes.com> wrote in message
> news:enD3PXz5GHA.1860@.TK2MSFTNGP04.phx.gbl...
>> Hi Greg,
>> Thanks for the reply. I am already using SET STATISTICS IO ON to get the
>> logical reads; it's not from Profiler.
>> Can you think of any reason why the logical reads would be different for the same
>> query and the same data set, with exactly the same execution plan, when the only
>> difference is that I flushed the cache with DBCC DROPCLEANBUFFERS?
>> Any input is greatly appreciated.
>> "Greg Linwood" <g_linwood@.hotmail.com> wrote in message
>> news:OB$Xvum5GHA.2536@.TK2MSFTNGP06.phx.gbl...
>> Hi James
>> The Reads reported in Profiler also includes reads performed in the
>> procedure cache, which is variable depending on activity levels & dbcc
>> commands.
>> Profiler's reads data is useful for zeroing in on problems, but you should
>> use SET STATISTICS IO ON for a more accurate report on the logical reads
>> performed by an individual query in the Query Analyser / Management
>> Studio.
>> Regards,
>> Greg Linwood
>> SQL Server MVP
>> http://blogs.sqlserver.org.au/blogs/greg_linwood
>> "james" <kush@.brandes.com> wrote in message
>> news:u51QEfm5GHA.3560@.TK2MSFTNGP03.phx.gbl...
>> Hi! I posted a question last week but didn't get any responses, so trying
>> it again.
>> Basically, I am in the process of tuning a query, and the number of logical
>> reads changes after I run DBCC DROPCLEANBUFFERS.
>> I understand that physical reads will be different after the cache is
>> flushed,
>> but why do logical reads change after flushing the cache when the
>> execution plan stays exactly the same?
>> SQL 2000 SP4
>>
>>
>

logical read

Hi

I am trying to tune a SQL query. The query returns about 300 rows of uniqueidentifier values, but in the STATISTICS IO output I see 7,228 logical reads, which does not seem right. I tried rebuilding the index, but the logical reads did not change. However, when I set up the database on another computer, the logical reads came down to 10. Is there something wrong with the computer that shows 7,228 logical reads?

thanks

Pauli

There may not be enough memory available to SQL Server, and the data cache is being flushed.

What is the memory/load differences between the two computers?

|||Are the indexes the same in both databases? It really sounds like you are missing an index in the first database. Even on a machine with memory pressure, that much of a difference in logical IOs points to an index or statistics problem. Are you seeing physical reads when you run the query in the first database?
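
One way to follow up on that suggestion is to run the same diagnostic batch on both servers and compare. This is a sketch with hypothetical table, column, and index names:

```sql
SET STATISTICS IO ON;

-- Hypothetical version of the slow query:
SELECT RowGuid
FROM dbo.Documents
WHERE CategoryId = 7;      -- returns ~300 rows

SET STATISTICS IO OFF;

-- Are the same indexes present on both servers?
EXEC sp_helpindex 'dbo.Documents';

-- Are the statistics comparable? (index name is hypothetical)
DBCC SHOW_STATISTICS ('dbo.Documents', 'IX_Documents_CategoryId');
```

If one server shows an index or up-to-date statistics that the other lacks, that alone can explain a 7,228-versus-10 logical-read gap, regardless of memory.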

Logical Processing of SQL

I just glanced at the logical processing flow of SQL statements in Itzik's
T-SQL Querying book and wanted to know if this applies to the SQL of any RDBMS
such as Oracle, DB2, Sybase, MySQL, etc., or is it strictly for SQL Server?
Also, what are the different ANSI standards? I've heard of 92, 99, 2003... but
can someone refer me to what part of SQL conforms to 92, 99, 2003, or whatever
else is out there.
Thanks|||>I just glanced at the logical processing flow of SQL statements in Itzik's T-SQL Querying book and
>wanted to know if this applies to the SQL of any RDBMS such as Oracle, DB2, Sybase, MySQL, etc., or is it
>strictly for SQL Server?
It is how the SQL language is constructed, so it applies to all products (that conform at least a
little bit to the SQL language standard). Of course, stuff not in other products doesn't apply (like
TOP, GROUP BY ALL, etc.).
> Also, what are the different ANSI standards? I've heard of 92, 99, 2003... but can someone refer me to
> what part of SQL conforms to 92, 99, 2003, or whatever else is out there.
There is no such document out there (to the best of my knowledge). The standard is *huge*, and so is
any of the major players' implementations. Creating such a document would be a huge amount of work. The closest
thing I've seen was a Word document for the SQL Server 2000 T-SQL grammar (in BNF format) where it
syntax-colored the non-ANSI parts of the language. I believe it was in the SQL Server 2000 Resource
Kit.
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Hassan" <Hassan@.hotmail.com> wrote in message news:e1yJi3RKHHA.3936@.TK2MSFTNGP02.phx.gbl...
>I just glanced at the logical processing flow of SQL statements in Itzik's T-SQL Querying book and
>wanted to know if this applies to the SQL of any RDBMS such as Oracle, DB2, Sybase, MySQL, etc., or is it
>strictly for SQL Server?
> Also, what are the different ANSI standards? I've heard of 92, 99, 2003... but can someone refer me to
> what part of SQL conforms to 92, 99, 2003, or whatever else is out there.
> Thanks
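
To make the logical processing order concrete: phases are evaluated logically as FROM, WHERE, GROUP BY, HAVING, SELECT, ORDER BY, which is why a column alias defined in SELECT is visible to ORDER BY but not to WHERE. A small T-SQL illustration (table and column names are made up):

```sql
-- Fails: WHERE is processed logically before SELECT, so the alias
-- 'Total' does not exist yet at that point.
SELECT Price * Qty AS Total
FROM dbo.OrderLines
WHERE Total > 100;

-- Works: repeat the expression...
SELECT Price * Qty AS Total
FROM dbo.OrderLines
WHERE Price * Qty > 100;

-- ...or push it into a derived table, which is fully processed first:
SELECT Total
FROM (SELECT Price * Qty AS Total FROM dbo.OrderLines) AS t
WHERE t.Total > 100;

-- ORDER BY is processed after SELECT, so the alias IS visible there:
SELECT Price * Qty AS Total
FROM dbo.OrderLines
ORDER BY Total;
```

This visibility rule follows from the standard's definition of a query expression, so the same behavior shows up in Oracle, DB2, and the other conforming products.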
>|||How can there be anything like a 'logical processing flow' if everything
happens 'all at once'?
-:)
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:uqsM$JTKHHA.1240@.TK2MSFTNGP03.phx.gbl...
> >I just glanced at the logical processing flow of SQL Statements in Itziks
> >TSQL Querying Book and wanted to know if this applies to SQL of any RDBMS
> >such as Oracle, DB2,Sybase,MySQL,etc or is it strictly for SQL Server ?
> It is how the SQL language is constructed, so it applies to all products
> (that conform at least a little bit to the SQL language standard). Of
> course, stuff not in other products doesn't apply (like TOP, GROUP BY ALL,
> etc.).
>
>> Also what are the different ANSI Standards. I've heard of 92,99,2003..but
>> can someone refer me to what part of SQL conforms to 92,99,2003 or
>> whatever else is out there.
> There is no such document out there (to the best of my knowledge). The
> standard is *huge*, and so is any of the major players' implementations.
> Creating such a document would be a huge amount of work. The closest thing I've seen
> was a Word document for the SQL Server 2000 T-SQL grammar (in BNF format)
> where it syntax-colored the non-ANSI parts of the language. I believe it
> was in the SQL Server 2000 Resource Kit.
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Hassan" <Hassan@.hotmail.com> wrote in message
> news:e1yJi3RKHHA.3936@.TK2MSFTNGP02.phx.gbl...
>>I just glanced at the logical processing flow of SQL statements in Itzik's
>>T-SQL Querying book and wanted to know if this applies to the SQL of any RDBMS
>>such as Oracle, DB2, Sybase, MySQL, etc., or is it strictly for SQL Server?
>> Also, what are the different ANSI standards? I've heard of 92, 99, 2003... but
>> can someone refer me to what part of SQL conforms to 92, 99, 2003, or
>> whatever else is out there.
>> Thanks
>|||"Logical Processing"?
Do you mean the Query Optimizer? Or perhaps the parser?
Either way, you're talking about internals and not about how you use things.
However, I would assume that the base theory is going to be the same.
IMHO the Query Optimizer is written by a group of programmers that are
locked into a room and raw meat is tossed in once, or twice a day to keep
them fed.
"Hassan" wrote:
> I just glanced at the logical processing flow of SQL statements in Itzik's
> T-SQL Querying book and wanted to know if this applies to the SQL of any RDBMS
> such as Oracle, DB2, Sybase, MySQL, etc., or is it strictly for SQL Server?
> Also, what are the different ANSI standards? I've heard of 92, 99, 2003... but
> can someone refer me to what part of SQL conforms to 92, 99, 2003, or whatever
> else is out there.
> Thanks
>
>