Showing posts with label situation. Show all posts

Friday, March 30, 2012

Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

Hello,
First let me describe the situation.
I have a Win2003 server (name: SRV) where I have SQL 2000 Reporting
Services installed, a Win2003 server with SQL 2000 Server (name: DBSRV)
where I have a database, and a workstation where I create reports in
Visual Studio.
When I am creating reports, everything is OK - I can connect to the
database, get the data, etc.
When I publish the report to http://SRV/ReportServer and try to access it
with a web browser from my workstation, it throws this message:
An error has occurred during report processing. (rsProcessingAborted)
Cannot create a connection to data source 'Database'.
(rsErrorOpeningConnection)
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
Of course that user doesn't have access to the database.
But how do I tell Reporting Services to use MY user name, the one I use
to log on to Windows?
Thanks,
Armands Brants
The funniest thing is that I can open the report from my workstation if I
do it this way:
* open the report from server http://localhost...
* write the same address (http://localhost...) in my workstation's Internet
Explorer - the page cannot be displayed, of course
* replace localhost with srv and everything works fine!
Can't understand why...
Armands
If you are trying to authenticate in a scenario which involves more than two
endpoints, e.g. a client connects to a remote RS web service which connects
to a remote database (3 endpoints), NTLM authentication will not work. You
will have to use delegation with Kerberos authentication, which supports
more than one network hop. Look here for help with setting up an ASP.NET
app for delegation:
http://support.microsoft.com/default.aspx?scid=kb;en-us;810572
To test that this is your problem, first access the report from the
Reporting Services console; this seems to cache your credential locally.
Now, from your remote client, try the same report - it should work until
you log off the RS server.
HTH,
Dan

Friday, March 23, 2012

Login Failed for user '(null)' - Multiple Servers

Greets,
I realize that this is a common issue--however, the situation seems a bit
different. Or maybe I'm naive :) Anyway, I have two servers, one hosting
the data. The other only has what it takes to run SRS.
I loaded Visual Studio on the SRS server for ease of use. From there, I can
create, preview, and deploy any report I like. However, even from the local
IE, I can't seem to get past the authentication. I have removed anonymous
access to the reporting website--hoping to force authentication. I have
looked at all the Web.config files; all are set for impersonation = 'true'.
What am I missing here?
Thanks!
Hi Joseph,
Thank you for your posting!
My understanding of this issue is: you cannot view the report via URL and
you get the Login failed for user '(null)' error. If I misunderstood your
concern, please feel free to let me know.
It seems that the data source credential is not set correctly. Please try
the following:
1. Open Report Manager via URL.
2. Find the report and click it.
3. Click Properties, and then click Data Sources.
4. If you are using a custom data source, please click "Credentials
supplied by the user running the report".
5. If you are using a shared data source, go to that data source and
click it. Likewise, click "Credentials supplied by the user running the
report".
6. If the login is a Windows credential, please check "Use as Windows
credentials when connecting to the data source".
Please let me know the result so that I can provide further assistance.
Thank you!
Sincerely,
Wei Lu
Microsoft Online Community Support
==================================================
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
==================================================
This posting is provided "AS IS" with no warranties, and confers no rights.
Thanks for the response,
However, I do not believe that this is the issue. I am using Windows
authentication only--so as far as I can determine, no credentials can be
specified. Also, the data source works FINE inside Visual Studio.
This is an IIS issue, probably having to do with one of two things:
1. The SRS server's computer account needs to be Trusted for Delegation
2. I need to be running the SQL services as a Domain User account... this
second one is a little suspect, because I am using the database with an
Access 2003 frontend, and all the Windows authentication works well.
Any other thoughts?
Hi Joseph,
Thank you for the information and clarification.
Please let me know some information:
1. Can you access Report Manager correctly?
2. Can you access the Report Server correctly?
3. If you can access the above sites correctly, I don't think you need to
change the SQL Reporting Services startup account. Please try the method I
provided in the previous post, and you could supply the Windows credential
you use to connect to your database. (I think it will be your domain
account or your Windows login account.)
4. If you cannot access the above sites correctly, please let me know the
error message so that I can provide further assistance. Thank you!
Sincerely,
Wei Lu
Microsoft Online Community Support
Hi Joseph,
Have you had a chance to collect the information? Please feel free to let
me know if you have any questions. Thank you!
Sincerely,
Wei Lu
Microsoft Online Community Support
I've had the same problem with my reports, and my solution seems to have
been the simplest:
When you deploy a report from VS, the data source sometimes gets messed up.
I created a new data source in Report Manager and changed my report to use
this one instead. Both are using Windows Integrated Authentication. My
report didn't work with the data source generated by VS, but it worked
without any problems with the new data source.
You can still upload reports to the server, and they will use the new data
source, so you only have to do this once (at least once with every new
report).
I don't know why the data source gets messed up. But this seems to fix the
problem. I found the clue in a newsgroup post, so it has probably solved
the problem for someone else as well.
Kaisa M. Lindahl Lervik

Friday, February 24, 2012

Logic problem - Please help :-(

Hi all,
I have a situation that I need some help with as soon as possible. If
you're good at logic puzzles then please help as soon as you have a
minute :-)
As some of you know I have a situation in which I need to copy data from
a table in QuickRes, to an identical table in a new database (called
Arrivals). The databases are on physically separate machines. For
operational reasons, the query that copies the data over from QuickRes
to Arrivals will be kicked off by a Web Service, which will then return
the data as XML.
I only want to copy changed rows. Each time a row is changed, its
LastUpdated field is changed to represent the time the update took
place. The idea is simply that each time the synchronisation process
occurs, it will copy across any rows changed since the last time the
sync process ran.
I'm worried about this though because it seems to me that there might be
the possibility of missing some changes under the following
circumstances (or some permutation of them anyway):
Consider:
Time Event
10:46:10 The query to copy all changed rows begins. It starts
returning any rows that have been updated since, say, 10:31:00.
10:46:11 Under a separate thread, an update statement is executed
that changes a row in the target table. Its LastUpdated field is
changed to 10:46:11. The synchronisation query does *not* detect this
row change because it is just about to complete.
10:46:12 The synchronisation process completes at 10:46:12
The next time the sync process executes, it's going to request any rows
changed since 10:46:12.
This will *not* include the row changed in step two because its
LastUpdated value is 10:46:11.
In other words, this update is lost and won't be brought over.
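The timeline above can be sketched as a quick simulation. This is only a model of the logic (hypothetical names, Python standing in for the actual T-SQL process), not the real sync code:

```python
from datetime import datetime

# Model: each row carries a LastUpdated value; a sync pass copies rows
# whose LastUpdated is later than the watermark left by the previous pass.
def changed_since(rows, watermark):
    return {rid: ts for rid, ts in rows.items() if ts > watermark}

rows = {"A": datetime(2006, 5, 17, 10, 41, 0)}

# 10:46:10 - the sync pass starts, copying rows changed since 10:31:00
copied = changed_since(rows, datetime(2006, 5, 17, 10, 31, 0))

# 10:46:11 - a separate thread updates row B while the pass is finishing,
# too late for this pass to pick it up
rows["B"] = datetime(2006, 5, 17, 10, 46, 11)

# 10:46:12 - the pass completes and records its END time as the watermark
end_watermark = datetime(2006, 5, 17, 10, 46, 12)

# Next pass: B's LastUpdated (10:46:11) is not later than 10:46:12,
# so the change to row B is silently skipped - exactly the lost update
# described above.
next_copied = changed_since(rows, end_watermark)
```

So the worry is well founded: nothing in this scheme closes the window by itself; the fix has to come from how the watermark is chosen.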
Questions:
1. Have I identified a real problem here, or am I worried about nothing
- perhaps because SQL Server somehow "magically" takes care of stuff
like this. It seems like a real problem to me, but I'm not very good at
logic stuff like this!
2. Assuming there is a real problem - what do I need to do to fix it?
Would putting the sync query in a transaction prevent the problem? I've
thought about that one and it seems that even in a transaction, because
the computer is so fast, the times entered into the last updated field
could still cause this problem.
I did think about using the actual Timestamp data type, but I remember
reading somewhere that the timestamp is based on the machine that it was
generated on. I was worried that it might not work because of there
being two machines involved in the process.
I know how to deal with the synchronisation process in several different
ways but for the moment we're assuming that typical sync methods such as
transactional replication are off the table.
Any help or advice would be very greatly appreciated :-)
Many thanks
Kindest Regards
Simon
Just thinking out loud - why don't you start the sync process next time
with records having a timestamp > 10:46:10?
Simon,
why don't you use replication?
Alexander Kuznetsov wrote:
> Simon,
> why don't you use replication?
>
Hi Alex,
I would like to use replication but my boss has a thing about web
services! He wants it done that way.
I may have to kick up a stink if there's no sure fire way to do it the
webservice way...
Thanks
Ahhhh...
The old "newer technology is better" solution. I could go on and on about
how the database is built for replication, tried and tested, and web
services only add extra, unnecessary layers of complexity and ten times the
code (not to mention an extra server and all sorts of extra network
traffic), but I'll let other folks who are more knowledgeable about
replication do that. A separate post on the benefits of replication VS a
web service should get all sorts of expert responses that you can show to
your boss as justification for doing it the right way.
"Simon Harvey" <nothanks@.hotmail.com> wrote in message
news:OgJ4EfbeGHA.1208@.TK2MSFTNGP02.phx.gbl...
> Alexander Kuznetsov wrote:
>
> Hi Alex,
> I would like to use replication but my boss has a thing about web
> services! He wants it done that way.
> I may have to kick up a stink if there's no sure fire way to do it the
> webservice way...
> Thanks
Simon,
I would
1. lock the table.
2. insert getdate() into some other table, say replication_batch
3. release the lock
4. copy the rows changed between the last 2 values in replication_batch
Good luck!
If you are concerned with missing updates due to timing issues, and if you
ALWAYS want to overwrite the data, just change your date to the start of the
last run, rather than the end of it (as I think Omni suggested). Yes, you
will copy over more rows than is needed, but you will have the accuracy you
are looking for. Although replication is likely a much simpler, and
dependable solution.
"Alexander Kuznetsov" <AK_TIREDOFSPAM@.hotmail.COM> wrote in message
news:1147882907.441123.246700@.38g2000cwa.googlegroups.com...
> Simon,
> I would
> 1. lock the table.
> 2. insert getdate() into some other table, say replication_batch
> 3. release the lock
> 4. copy the rows changed between the last 2 values in replication_batch
> Good luck!
>

Logic problem - Please help :-(

Hi all,
I have a situation that I need some help with as soon as possible. If
you're good at logic puzzles then please help as soon as you have a
minute :-)
As some of you know I have a situation in which I need to copy data from
a table in QuickRes, to an identical table in a new database (called
Arrivals). The databases are on physically separate machines. For
operational reasons, the query that copies the data over from QuickRes
to Arrivals will be kicked off by a Web Service, which will then return
the data as XML.
I only want to copy changed rows. Each time a row is changed, its
LastUpdated field is changed to represent the time the update took
place. The idea is simply that each time the synchronisation process
occurs, it will copy across any rows changed since the last time the
sync process ran.
I'm worried about this though because it seems to me that there might be
the possibility of missing some changes under the following
circumstances (or some permutation of them anyway):
Consider:
Time Event
10:46:10 The query to copy all changed rows begins. It starts returning
any rows that have been updated since, say, 10:31:00.
10:46:11 Under a separate thread, an update statement is executed that
changes a row in the target table. Its LastUpdated field is changed to
10:46:11. The synchronisation query does *not* detect this row change
because it is just about to complete.
10:46:12 The synchronisation process completes at 10:46:12
The next time the sync process executes, it's going to request any rows
changed since 10:46:12.
This will *not* include the row changed in step two because its
LastUpdated value is 10:46:11.
In other words, this update is lost and won't be brought over.
Questions:
1. Have I identified a real problem here, or am I worried about nothing
- perhaps because SQL Server somehow "magically" takes care of stuff
like this. It seems like a real problem to me, but I'm not very good at
logic stuff like this!
2. Assuming there is a real problem - what do I need to do to fix it?
Would putting the sync query in a transaction prevent the problem? I've
thought about that one and it seems that even in a transaction, because
the computer is so fast, the times entered into the last updated field
could still cause this problem.
I did think about using the actual Timestamp data type, but I remember
reading somewhere that the timestamp is based on the machine that it was
generated on. I was worried that it might not work because of there
being two machines involved in the process.
I know how to deal with the synchronisation process in several different
ways but for the moment we're assuming that typical sync methods such as
transactional replication are off the table.
Any help or advice would be very greatly appreciated :-)
Many thanks
Kindest Regards
Simon
Hi Simon,
Why don't you try an Update trigger and an Insert trigger?
If you have inserted new rows or updated several rows, you can insert
them into a temporary table and later transfer them into the destination
table. What you need to do is truncate that temporary table after
transferring the data.
Regards,
Muthu kumaran .D
(Unknown Is Ocean)
Hi Muthu
Hmmm... That's an interesting idea.
I'll definitely have to have a think about that one - it might just do
the trick! :-)
Many thanks for your advice
Simon
Simon.
Something else you may consider is using the max last update date from the
synchronised rows instead of the completion time for the thread.
On Wed, 17 May 2006 11:54:42 +0100, Simon Harvey wrote:
(snip)
>Questions:
>1. Have I identified a real problem here, or am I worried about nothing
>- perhaps because SQL Server somehow "magically" takes care of stuff
>like this. It seems like a real problem to me, but I'm not very good at
>logic stuff like this!
Hi Simon,
If you wrote the process to identify changed rows and export them as
XML, then SQL Server will do nothing magically - it will do exactly what
you tell it to do. (Anything else would qualify as a bug.)
Would it be a major problem if a changed row can be exported twice (in
rare cases)? If not, then there's a simple change - have the export
process select all rows changed since the START, rather than the end, of
the previous execution.
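That change can be sketched as a small model (hypothetical names, Python standing in for the T-SQL process): record the moment a pass *starts* and use it as the next watermark, accepting that a row changed mid-pass may occasionally be copied twice.

```python
from datetime import datetime

def changed_since(rows, watermark):
    return {rid: ts for rid, ts in rows.items() if ts > watermark}

rows = {"A": datetime(2006, 5, 17, 10, 41, 0)}
target = {}

# 10:46:10 - the pass begins; remember the START time, not the end time
pass_start = datetime(2006, 5, 17, 10, 46, 10)
target.update(changed_since(rows, datetime(2006, 5, 17, 10, 31, 0)))

# 10:46:11 - row B changes mid-pass and is missed by the running pass
rows["B"] = datetime(2006, 5, 17, 10, 46, 11)

# The next pass uses pass_start as its watermark, so B (10:46:11 is
# later than 10:46:10) is picked up. A row changed just before the pass
# started may be exported twice, but the overwrite makes that harmless.
target.update(changed_since(rows, pass_start))
```

The trade-off is deliberate: at-least-once delivery with idempotent overwrites, instead of the exactly-once scheme that the end-time watermark fails to deliver.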

>I did think about using the actual Timestamp data type
Don't. The name is misleading. A timestamp column has nothing to do with
timestamps. It's only intended to be used for optimistic concurrency
control.
Hugo Kornelis, SQL Server MVP
On 17 May 2006 07:25:54 -0700, mkumaran wrote:

>Hi Simon,
>Why didnt you try Update trigger and Insert trigger '
>If you have Inserted new rows or updated several rows, you can insert
>in to a temporary table and later you can transfer into a destination
>table. What you want to do is You have to truncate that temporary table
>after transfering the table.
Hi Muthu,
That would work, but only if Simon makes sure that the temporary table
is locked as soon as the export process starts and that this lock is
held until the TRUNCATE TABLE command has been executed. Otherwise,
there's still the possibility that a row is inserted into the temp table
after the actual export but before the TRUNCATE TABLE.
Of course, such locking would seriously hamper concurrency...
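That window can be sketched like this (hypothetical names; a plain Python list stands in for the trigger-filled change table):

```python
# The insert/update triggers have logged two changes so far.
change_log = ["row1", "row2"]

# The export process reads the change table ...
exported = list(change_log)

# ... a trigger fires for a new change between the read and the TRUNCATE ...
change_log.append("row3")

# ... and TRUNCATE TABLE wipes the log, row3 included.
change_log.clear()

# row3 was never exported and is gone from the log: that change is lost.
# Holding a lock from the read through the TRUNCATE closes the window.
```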
Hugo Kornelis, SQL Server MVP
Hi Hugo Kornelis,
you are correct. But what I thought is that the actual transfer time
is very fast if we use an INSERT INTO command. Even if new rows are
inserted while we are transferring data, they also get transferred, I
think. If not, we can check for the existence of such rows and then
insert or update them.
Please tell me, am I correct?
--
Muthu Kumaran.D
Hi Guys
Thanks so much for your help!
I think I'm going to go for the trigger option on this one. It seems a bit
simpler to understand and implement - and I can be reasonably confident
that it's working because I actually understand it!
In terms of performance, I'm not very concerned - this refresh process
is going to be running frequently enough that there will only ever be a
few rows in the temp tables, so locking it will be for a very short
amount of time.
Many thanks again for all your help
Simon

Logic problem - Please help :-(

Hi all,
I have a situation that I need some help with as soon as possible. If
you?re good at logic puzzles then please help as soon as you have a
minute :-)
As some of you know I have a situation in which I need to copy data from
a table in QuickRes, to an identical table in a new database (called
Arrivals). The databases are on physically separate machines. For
operational reasons, the query that copies the data over from QuickRes
to Arrivals will be kicked off by a Web Service, which will then return
the data as XML.
I only want to copy changed rows. Each time a row is changed, its
LastUpdated field is changed to represent the time the update took
place. The idea is simply that each time the synchronisation process
occurs, it will copy across any rows changed since the last time the
sync process ran.
I'm worried about this though because it seems to me that there might be
the possibility of missing some changes under the following
circumstances (or some permutation of them anyway):
Consider:
Time Event
10:46:10 The query to copy all changed rows begins. It starts returning
any rows that have been updated since, say, 10:31:00.
10:46:11 Under a separate thread, an update statement us executed that
changes a row in the target table. It's LastUpdated field is changed to
10:46:11. The synchronisation query does *not* detect this row change
because it is just about to complete.
10:46:12 The synchronisation process completes at 10:46:12
The next time the sync process executes, its going to request any rows
changed since 10:46:12.
This will *not* include the row changed in step two because it's
LastUpdated value is 10:46:11
In other words, this update is lost and won't be brought over.
Questions:
1. Have I identified a real problem here, or am I worried about nothing
- perhaps because SQL Server somehow "magically" takes care of stuff
like this. It seems like a real problem to me, but I'm not very good at
logic stuff like this!
2. Assuming there is a real problem - what do I need to do to fix it?
Would putting the sync query in a transaction prevent the problem? I've
thought about that one and it seems that even in a transaction, because
the computer is so fast, the times entered into the last updated field
could still cause this problem.
I did think about using the actual Timestamp data type, but I remember
reading somewhere that the timestamp is based on the machine that it was
generated on. I was worried that it might not work because of there
being two machines involved in the process.
I know how to deal with the synchronisation process in several different
ways but for the moment we?re assuming that typical sync methods such as
transactional replication are off the table.
Any help or advice would be very greatly appreciated :-)
Many thanks
Kindest Regards
SimonHi Simon,
Why didnt you try Update trigger and Insert trigger '
If you have Inserted new rows or updated several rows, you can insert
in to a temporary table and later you can transfer into a destination
table. What you want to do is You have to truncate that temporary table
after transfering the table.
Regards,
Muthu kumaran .D
(Unknown Is Ocean)|||Hi Mutha
Hmmm... Thats an interesting idea.
I'll definately have to have a think about that one - it might just do
the trick! :-)
Many thanks for your advice
Simon|||Simon.
Something else you may consider is using the max last update date from the
synchronised rows instead of the completion time for the thread.
"Simon Harvey" <nothanks@.hotmail.com> wrote in message
news:%23bHjNBaeGHA.3468@.TK2MSFTNGP03.phx.gbl...
> Hi all,
> I have a situation that I need some help with as soon as possible. If you?re
> good at logic puzzles then please help as soon as you have a minute :-)
> As some of you know I have a situation in which I need to copy data from a
> table in QuickRes, to an identical table in a new database (called
> Arrivals). The databases are on physically separate machines. For
> operational reasons, the query that copies the data over from QuickRes to
> Arrivals will be kicked off by a Web Service, which will then return the
> data as XML.
> I only want to copy changed rows. Each time a row is changed, its
> LastUpdated field is changed to represent the time the update took place.
> The idea is simply that each time the synchronisation process occurs, it
> will copy across any rows changed since the last time the sync process
> ran.
> I'm worried about this though because it seems to me that there might be
> the possibility of missing some changes under the following circumstances
> (or some permutation of them anyway):
> Consider:
> Time Event
> 10:46:10 The query to copy all changed rows begins. It starts returning
> any rows that have been updated since, say, 10:31:00.
> 10:46:11 Under a separate thread, an update statement us executed that
> changes a row in the target table. It's LastUpdated field is changed to
> 10:46:11. The synchronisation query does *not* detect this row change
> because it is just about to complete.
> 10:46:12 The synchronisation process completes at 10:46:12
> The next time the sync process executes, its going to request any rows
> changed since 10:46:12.
> This will *not* include the row changed in step two because it's
> LastUpdated value is 10:46:11
> In other words, this update is lost and won't be brought over.
> Questions:
> 1. Have I identified a real problem here, or am I worried about nothing -
> perhaps because SQL Server somehow "magically" takes care of stuff like
> this. It seems like a real problem to me, but I'm not very good at logic
> stuff like this!
> 2. Assuming there is a real problem - what do I need to do to fix it?
> Would putting the sync query in a transaction prevent the problem? I've
> thought about that one and it seems that even in a transaction, because
> the computer is so fast, the times entered into the last updated field
> could still cause this problem.
> I did think about using the actual Timestamp data type, but I remember
> reading somewhere that the timestamp is based on the machine that it was
> generated on. I was worried that it might not work because of there being
> two machines involved in the process.
> I know how to deal with the synchronisation process in several different
> ways but for the moment we're assuming that typical sync methods such as
> transactional replication are off the table.
> Any help or advice would be very greatly appreciated :-)
> Many thanks
> Kindest Regards
> Simon|||On Wed, 17 May 2006 11:54:42 +0100, Simon Harvey wrote:
(snip)
>Questions:
>1. Have I identified a real problem here, or am I worried about nothing
>- perhaps because SQL Server somehow "magically" takes care of stuff
>like this. It seems like a real problem to me, but I'm not very good at
>logic stuff like this!
Hi Simon,
If you wrote the process to identify changed rows and export them as
XML, then SQL Server will do nothing magically - it will do exactly what
you tell it to do. (All else would qualify as a bug).
Would it be a major problem if a changed row can be exported twice (in
rare cases)? If not, then there's a simple change - have the export
process select all rows changed since the START, rather than the end, of
the previous execution.
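Hugo's fix can be sketched the same way: carry the *start* time of the previous run forward as the watermark, accepting that a row may occasionally be exported twice but never missed. Again a sqlite3 sketch with invented names; the `<= started_at` bound models what the query could see while it ran:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Bookings (Id INTEGER PRIMARY KEY, LastUpdated TEXT)")
con.execute("INSERT INTO Bookings VALUES (1, '10:45:00'), (2, '10:46:11')")

def sync(con, watermark, started_at):
    """Export rows changed since `watermark`; the new watermark is this
    run's START time, not its end time."""
    rows = [r[0] for r in con.execute(
        "SELECT Id FROM Bookings WHERE LastUpdated > ? AND LastUpdated <= ?",
        (watermark, started_at))]
    return rows, started_at

# First run starts at 10:46:10; the row stamped 10:46:11 is not yet visible.
rows1, watermark = sync(con, "10:31:00", "10:46:10")
# Next run uses the previous START as its watermark, so nothing is lost.
rows2, watermark = sync(con, watermark, "10:50:00")

print(rows1)  # [1]
print(rows2)  # [2] -- the 10:46:11 row is picked up this time
```

The cost is that a row updated in the overlap window can appear in two consecutive exports, which is harmless if the import side upserts.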
>I did think about using the actual Timestamp data type
Don't. The name is misleading. A timestamp column has nothing to do with
timestamps. It's only intended to be used for optimistic concurrency
control.
--
Hugo Kornelis, SQL Server MVP|||On 17 May 2006 07:25:54 -0700, mkumaran wrote:
>Hi Simon,
>Why didn't you try an UPDATE trigger and an INSERT trigger?
>If you have inserted new rows or updated several rows, you can insert
>them into a temporary table and later transfer them into the destination
>table. You then have to truncate that temporary table after transferring
>the rows.
Hi Muthu,
That would work, but only if Simon makes sure that the temporary table
is locked as soon as the export process starts and that this lock is
held until the TRUNCATE TABLE command has been executed. Otherwise,
there's still the possibility that a row is inserted in the temp table
after the actual export but before the TRUNCATE TABLE.
Of course, such locking would seriously hamper concurrency...
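For concreteness, here is a sqlite3 sketch of the trigger-plus-staging-table idea (sqlite trigger syntax; a SQL Server version would use CREATE TRIGGER ... ON the table FOR INSERT, UPDATE with the `inserted` pseudo-table, and names here are invented). Doing the read and the clear inside one transaction is what addresses Hugo's point about a row landing between the export and the TRUNCATE:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Bookings (Id INTEGER PRIMARY KEY, Info TEXT);
CREATE TABLE Staging  (Id INTEGER);

CREATE TRIGGER trg_ins AFTER INSERT ON Bookings
BEGIN INSERT INTO Staging VALUES (NEW.Id); END;

CREATE TRIGGER trg_upd AFTER UPDATE ON Bookings
BEGIN INSERT INTO Staging VALUES (NEW.Id); END;
""")

con.execute("INSERT INTO Bookings VALUES (1, 'a'), (2, 'b')")
con.execute("UPDATE Bookings SET Info = 'c' WHERE Id = 1")

# Export and clear in ONE transaction, so a change landing between the
# read and the delete cannot be silently discarded.
with con:
    changed = [r[0] for r in con.execute("SELECT DISTINCT Id FROM Staging")]
    con.execute("DELETE FROM Staging")

print(sorted(changed))  # [1, 2]
```

The trade-off Hugo notes still applies: while that transaction holds the staging table, concurrent writers will block on it.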
--
Hugo Kornelis, SQL Server MVP|||Hi Hugo Kornelis,
You are correct. But what I thought is that the transfer is fast enough
if we use an INSERT INTO command. Even if new rows are inserted while we
are transferring data, I think they also get transferred. If not, we can
check for the existence of such rows and then insert or update them.
Please tell me, am I correct?
--
Muthu Kumaran.D|||Hi Guys
Thanks so much for your help!
I think I'm going to go for the trigger option on this one. Seems a bit
simpler to understand and implement - and I can be reasonably confident
that it's working cos I actually understand it!
In terms of performance, I'm not very concerned - this refresh process
is going to be running frequently enough that there will only ever be a
few rows in the temp tables, so locking it will be for a very short
amount of time.
Many thanks again for all your help
Simon