Friday, February 24, 2012

Logic problem - Please help :-(

Hi all,
I have a situation that I need some help with as soon as possible. If
you're good at logic puzzles then please help as soon as you have a
minute :-)
As some of you know I have a situation in which I need to copy data from
a table in QuickRes, to an identical table in a new database (called
Arrivals). The databases are on physically separate machines. For
operational reasons, the query that copies the data over from QuickRes
to Arrivals will be kicked off by a Web Service, which will then return
the data as XML.
I only want to copy changed rows. Each time a row is changed, its
LastUpdated field is changed to represent the time the update took
place. The idea is simply that each time the synchronisation process
occurs, it will copy across any rows changed since the last time the
sync process ran.
I'm worried about this though because it seems to me that there might be
the possibility of missing some changes under the following
circumstances (or some permutation of them anyway):
Consider:
Time Event
10:46:10 The query to copy all changed rows begins. It starts returning
any rows that have been updated since, say, 10:31:00.
10:46:11 Under a separate thread, an update statement is executed that
changes a row in the target table. Its LastUpdated field is changed to
10:46:11. The synchronisation query does *not* detect this row change
because it is just about to complete.
10:46:12 The synchronisation process completes at 10:46:12
The next time the sync process executes, it's going to request any rows
changed since 10:46:12.
This will *not* include the row changed in step two because its
LastUpdated value is 10:46:11.
In other words, this update is lost and won't be brought over.
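To make the race concrete, here is a toy Python simulation of the timing above (illustrative only: the row IDs and timestamps are invented, and the real system would of course be T-SQL, not Python):

```python
# Toy simulation of the race: the sync reads rows changed since the last
# watermark, but records the *completion* time as the next watermark.
# A row updated during the sync window is then never picked up.

rows = {1: "10:31:05", 2: "10:40:00"}  # row id -> LastUpdated (made-up data)

def sync(watermark):
    """Return rows changed since `watermark` (exclusive)."""
    return {rid: ts for rid, ts in rows.items() if ts > watermark}

# 10:46:10 - sync starts, reading rows changed since 10:31:00
exported = sync("10:31:00")            # picks up rows 1 and 2

# 10:46:11 - a concurrent update lands while the sync is finishing
rows[3] = "10:46:11"

# 10:46:12 - sync completes; completion time becomes the new watermark
next_watermark = "10:46:12"

# Next run: row 3 (LastUpdated 10:46:11) is not > 10:46:12, so it is lost
missed = sync(next_watermark)
assert 3 not in exported and 3 not in missed   # the update is never exported
```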
Questions:
1. Have I identified a real problem here, or am I worried about nothing
- perhaps because SQL Server somehow "magically" takes care of stuff
like this? It seems like a real problem to me, but I'm not very good at
logic stuff like this!
2. Assuming there is a real problem - what do I need to do to fix it?
Would putting the sync query in a transaction prevent the problem? I've
thought about that one and it seems that even in a transaction, because
the computer is so fast, the times entered into the last updated field
could still cause this problem.
I did think about using the actual Timestamp data type, but I remember
reading somewhere that the timestamp is based on the machine that it was
generated on. I was worried that it might not work because of there
being two machines involved in the process.
I know how to deal with the synchronisation process in several different
ways, but for the moment we're assuming that typical sync methods such as
transactional replication are off the table.
Any help or advice would be very greatly appreciated :-)
Many thanks
Kindest Regards
Simon|||Hi Simon,
Why don't you try an UPDATE trigger and an INSERT trigger?
Whenever rows are inserted or updated, the triggers can insert them
into a temporary (staging) table, and later you can transfer them into
the destination table. Then you truncate that temporary table after
transferring the rows.
Regards,
Muthu kumaran .D
(Unknown Is Ocean)|||Hi Muthu,
Hmmm... That's an interesting idea.
I'll definitely have to have a think about that one - it might just do
the trick! :-)
Many thanks for your advice
Simon|||Simon.
Something else you may consider is using the max last update date from the
synchronised rows instead of the completion time for the thread.
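As a rough sketch of that idea (Python standing in for the real T-SQL, with made-up row data): the next watermark becomes the maximum LastUpdated value actually exported, so an update that lands mid-run is still above the watermark on the next pass:

```python
# Sketch: use the maximum LastUpdated value among the rows actually
# exported as the next watermark, instead of the wall-clock completion
# time of the sync run.

rows = {1: "10:31:05", 2: "10:40:00"}   # row id -> LastUpdated (made-up data)

def sync(watermark):
    exported = {rid: ts for rid, ts in rows.items() if ts > watermark}
    # New watermark = max timestamp we have actually seen, not "now"
    new_watermark = max(exported.values(), default=watermark)
    return exported, new_watermark

exported, wm = sync("10:31:00")    # wm is now "10:40:00", not a clock time
rows[3] = "10:46:11"               # concurrent update lands after the read
exported2, wm2 = sync(wm)          # row 3 *is* picked up on the next run
assert 3 in exported2
```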
"Simon Harvey" <nothanks@.hotmail.com> wrote in message
news:%23bHjNBaeGHA.3468@.TK2MSFTNGP03.phx.gbl...
> (snip)
> Simon|||On Wed, 17 May 2006 11:54:42 +0100, Simon Harvey wrote:
(snip)
>Questions:
>1. Have I identified a real problem here, or am I worried about nothing
>- perhaps because SQL Server somehow "magically" takes care of stuff
>like this. It seems like a real problem to me, but I'm not very good at
>logic stuff like this!
Hi Simon,
If you wrote the process to identify changed rows and export them as
XML, then SQL Server will do nothing magically - it will do exactly what
you tell it to do. (Anything else would qualify as a bug.)
Would it be a major problem if a changed row were exported twice (in
rare cases)? If not, then there's a simple change - have the export
process select all rows changed since the START, rather than the end, of
the previous execution.
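A rough sketch of this approach (Python standing in for the T-SQL, with invented data): the watermark is the time the previous run started, and the filter uses >=, so a mid-run update is caught on the next pass, at the cost of occasionally exporting a row twice:

```python
# Sketch: remember when the previous run *started*, and export all rows
# changed since then. Rows updated mid-run may be exported twice, but
# none are ever lost.

rows = {1: "10:31:05", 2: "10:40:00"}     # row id -> LastUpdated (made up)

def sync(since):
    start_time = "10:46:10"                # captured *before* reading rows
    exported = {rid: ts for rid, ts in rows.items() if ts >= since}
    return exported, start_time            # start time is the next watermark

exported, wm = sync("10:31:00")            # picks up rows 1 and 2
rows[3] = "10:46:11"                       # lands while run 1 completes
exported2, _ = sync(wm)                    # since "10:46:10": row 3 included
assert 3 in exported2                      # nothing lost; rows may repeat
```

Because rows can repeat, the importing side needs to be idempotent - e.g. insert-or-update by primary key.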
>I did think about using the actual Timestamp data type
Don't. The name is misleading. A timestamp column has nothing to do with
timestamps. It's only intended to be used for optimistic concurrency
control.
--
Hugo Kornelis, SQL Server MVP|||On 17 May 2006 07:25:54 -0700, mkumaran wrote:
>Hi Simon,
>Why don't you try an UPDATE trigger and an INSERT trigger?
>Whenever rows are inserted or updated, the triggers can insert them
>into a temporary (staging) table, and later you can transfer them into
>the destination table. Then you truncate that temporary table after
>transferring the rows.
Hi Muthu,
That would work, but only if Simon makes sure that the temporary table
is locked as soon as the export process starts and that this lock is
held until the TRUNCATE TABLE command has been executed. Otherwise,
there's still the possibility that a row is inserted in the temp table
after the actual export but before the TRUNCATE TABLE.
Of course, such locking would seriously hamper concurrency...
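One way around that race, sketched here in Python as a stand-in for the T-SQL trigger/staging-table logic (keys and values invented): instead of TRUNCATE, delete only the rows the export actually read, so anything queued mid-export survives for the next run:

```python
# Sketch: drain the staging table by deleting only the keys that were
# actually exported. A row queued between the read and the delete is
# left in place, whereas TRUNCATE would silently wipe it.

queue = {101: "row A", 102: "row B"}       # staging table filled by triggers

def drain(queue, mid_export_insert=None):
    snapshot = dict(queue)                 # rows read by the export
    if mid_export_insert:                  # a trigger fires during the export
        key, val = mid_export_insert
        queue[key] = val
    for key in snapshot:                   # delete only what we copied
        queue.pop(key, None)               # (TRUNCATE would remove 103 too)
    return snapshot

exported = drain(queue, mid_export_insert=(103, "row C"))
assert 103 in queue and 103 not in exported   # late row waits for next run
```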
--
Hugo Kornelis, SQL Server MVP|||Hi Hugo Kornelis,
you are correct. But what I thought is that the actual transfer is
fast enough if we use an INSERT INTO command. Even if new rows are
inserted while we are transferring the data, they will also get
transferred, I think. If not, we can check for the existence of such
rows and then insert or update them accordingly.
Please tell me, am I correct?
--
Muthu Kumaran.D|||Hi Guys
Thanks so much for your help!
I think I'm going to go for the trigger option on this one. It seems a
bit simpler to understand and implement - and I can be reasonably
confident that it's working because I actually understand it!
In terms of performance, I'm not very concerned - this refresh process
is going to be running frequently enough that there will only ever be a
few rows in the temp tables, so locking it will be for a very short
amount of time.
Many thanks again for all your help
Simon
mkumaran wrote:
> (snip)
