Showing posts with label working. Show all posts

Monday, March 26, 2012

login failed for user '(null)'| trusted connection

I had a report that was working and then this error started coming up whenever I tried to view it in the preview window.

login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection

When I upload the report rdl to our server and use an established data source the report works fine.

I have two questions. 1) Why would this error occur and 2) how do I fix it?

My guess is that I have to re-establish a connection to a working data source but I don't know how to do it.

Any help would be greatly appreciated.

Thanks.

Is your datasource set to use integrated security? If so then you are probably hitting the double-hop problem.

http://blogs.msdn.com/jgalla/archive/2006/03/16/553314.aspx

|||I'm looking into this now, but since some reports work and I can remake this report so that it functions, I think the problem might be the report itself.

This report uses a private data source and I suspect that there might be a problem with it. How can I change a private data source without editing the RDL manually?

Again thank you for the help.|||I suspect that the reports that are functioning are using different credential options. You can view how the data sources are configured in Report Manager from the "Data Sources" tab for each report.|||I've received the same error but under different circumstances:

Login failed for user ''. The user is not associated with trusted SQL Server connection.

The datasource is connecting to the database using SQL Server Authentication. First time round this report previews fine. I then copy and paste the report to obtain an identical one (with the name of "Copy of report1") and then try and preview this new report, but get the above error. The original report still works fine. In the copy, all the dataset queries work fine when run individually; the error only comes when I try and preview the report. Any ideas why this is happening?

Guy|||Further to the previous post...

I later closed the original report and reopened it to find the original gave the same error as above. After much investigation it appeared that the datasets had somehow changed their datasources. They were all pointing at datasources with the correct names, but not shared ones - I don't recall setting up any that were not shared, and they certainly are not showing in the solution explorer.

On re-pointing the sets to the correct shared sources it now all works fine, and copy and paste works fine too. But why should they have changed during a copy and paste, and even during a save and reopen?


Login failed for user (null) error when trying to use Windows authentication

I'm working on an ASP.NET project, and we're attempting to switch from SQL Server authentication to Windows authentication to connect to the database.

It works fine locally when I debug in Visual Studio, but when it hits the web server, I get the error: "Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection."

IIS has anonymous access disabled, and Integrated Windows Authentication is checked. The web.config file has:

<authentication mode="Windows"/>
<identity impersonate="true"/>

The database I'm attempting to access is on a different machine than IIS, and the connection string doesn't contain a username or password, and has "Trusted_Connection=Yes", and "Integrated Security=SSPI".

From what I've been able to find online, I'm guessing the problem is due to IIS and SQL Server residing on different machines, but I'm not sure how to fix it.

Any ideas?
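For reference, "Trusted_Connection=Yes" and "Integrated Security=SSPI" are effectively synonyms: both request Windows authentication, so no UID/PWD is sent and SQL Server must receive a Windows token. That token is exactly what gets lost across the double hop, which is why the server reports user '(null)'. A minimal sketch in Python (the server and database names are hypothetical) of how the two connection-string styles differ:

```python
# Sketch (not from the thread): classifying a SQL Server connection
# string by authentication style. DBSERVER/AppDb are hypothetical names.

def auth_mode(conn_str: str) -> str:
    """Return 'windows' for integrated security, 'sql' for SQL logins."""
    parts = {}
    for item in conn_str.split(";"):
        if "=" in item:
            k, v = item.split("=", 1)
            parts[k.strip().lower()] = v.strip().lower()
    if parts.get("integrated security") in ("sspi", "true") or \
       parts.get("trusted_connection") in ("yes", "true"):
        # No UID/PWD is sent; the server must get a Windows token.
        # If delegation fails (the double hop), it sees '(null)'.
        return "windows"
    if "user id" in parts or "uid" in parts:
        return "sql"
    return "unknown"

integrated = "Server=DBSERVER;Database=AppDb;Trusted_Connection=Yes;Integrated Security=SSPI"
sql_auth = "Server=DBSERVER;Database=AppDb;User Id=appuser;Password=s3cret"

print(auth_mode(integrated))  # windows
print(auth_mode(sql_auth))    # sql
```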

You might find this blog post from ranganh to be helpful: You may receive the error "Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection" - Part II|||Correct me if I'm wrong, but won't the solution in that blog prevent SQL Server from differentiating between different Windows users?

I need to actually pass the Windows user information to SQL, as different users have different permissions to certain stored procedures.|||A follow-up:

I read this article:

http://www.awprofessional.com/articles/article.asp?p=350387&rl=1

As per its advice, I checked the web server in Active Directory, and enabled it for delegation. We're waiting for the changes to propagate, but maybe this will solve the problem.

Friday, March 23, 2012

Login failed for user '(null)' - Not associated with a trusted connection

We had a DTS package that was working fine until yesterday; since
today it does not work, and we get the following error message:
[Microsoft][ODBC SQL Server Driver][SQL Server] Login failed for user
'(null)'. Reason: not associated with a trusted SQL Server connection.
Please clarify how we can fix this problem.
Thanks
Karen

Hi
Did you change the authentication mode of SQL Server?
<karenmiddleol@yahoo.com> wrote in message
news:1118220803.108259.111330@f14g2000cwb.googlegroups.com...
> We had a DTS package that was working fine until yesterday and since
> today it does not work and we get the following error message:
>
> [Microsoft][ODBC SQL Server Driver][SQL Server] Login failed for user
> '(null)'. Reason not associated with a trusted SQL Server connection.
>
> Please clarify how do we fix this problem.
> Thanks
> Karen
>

|||Hey Karen!
Generally when you see the "login failed for user NULL" error, it's a Kerberos issue.
I know that sounds far fetched, but it's true.
If you get "login failed for user BOB" that's a different story. But the
fact that we are passing a NULL value to SQL instead of a user token means
that Kerberos is failing.
Sometimes this is called a double hop scenario, but you can get that error
even when there is only a single hop.
Follow this article to resolve that error:
How to troubleshoot the "Cannot generate SSPI context" error message
This step-by-step article describes how to troubleshoot the most typical
sources of the "Cannot generate SSPI context" error message. You may receive
this error message under the following conditions: You are connecting to SQL
Server.You are using...
http://support.microsoft.com/default.aspx?scid=kb;en-us;811889
Basically you will be using the setspn utility, dropping the old SPN and
recreating a new SPN (rebooting might be necessary).
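For reference, the SPN that setspn drops and re-registers for SQL Server follows a fixed format, MSSQLSvc/<FQDN>:<port>. A small sketch (Python; the host and account names below are hypothetical):

```python
# Sketch (hypothetical host/account names): the SPN format that the
# setspn steps in the KB article drop and re-register for SQL Server.

def sql_server_spn(host: str, port: int = 1433) -> str:
    """Build the SPN SQL Server registers: MSSQLSvc/<FQDN>:<port>."""
    return f"MSSQLSvc/{host}:{port}"

spn = sql_server_spn("sqlbox.corp.example.com")
# The corresponding setspn calls would be roughly:
#   setspn -D MSSQLSvc/sqlbox.corp.example.com:1433 OLDACCOUNT
#   setspn -A MSSQLSvc/sqlbox.corp.example.com:1433 DOMAIN\sqlservice
print(spn)  # MSSQLSvc/sqlbox.corp.example.com:1433
```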
Donna Lambert
"karenmiddleol@yahoo.com" wrote:
> We had a DTS package that was working fine until yesterday and since
> today it does not work and we get the following error message:
>
> [Microsoft][ODBC SQL Server Driver][SQL Server] Login failed for user
> '(null)'. Reason not associated with a trusted SQL Server connection.
>
> Please clarify how do we fix this problem.
> Thanks
> Karen
>|||You may also want to check that the account logging in hasn't been
locked out because of invalid password entries exceeding the limit set.
Usually this scenario takes place when you are forced to change your
password, but services are running using old passwords and try to login
multiple times. If the account is locked out because of this, you will
see this error message.
Simon Worth
Donna Lambert wrote:
> Hey Karen!
> Generally when you see the "login failed for user NULL" it's a kerberos issue.
> I know that sounds far fetched, but it's true.
> If you get "login failed for user BOB" that's a different story. But the
> fact that we are passing a NULL value to SQL instead of a user token means
> that Kerberos is failing.
> Sometimes this is called a double hop scenario, but you can get that error
> even when there is only a single hop.
> Follow this article to resolve that error:
> How to troubleshoot the "Cannot generate SSPI context" error message
> This step-by-step article describes how to troubleshoot the most typical
> sources of the "Cannot generate SSPI context" error message. You may receive
> this error message under the following conditions: You are connecting to SQL
> Server.You are using...
> http://support.microsoft.com/default.aspx?scid=kb;en-us;811889
> Basically you will be getting the utility setspn, dropping the old spn and
> recreating a new SPN (rebooting might be necessary).
> Donna Lambert
> "karenmiddleol@yahoo.com" wrote:
>
>>We had a DTS package that was working fine until yesterday and since
>>today it does not work and we get the following error message:
>>
>>[Microsoft][ODBC SQL Server Driver][SQL Server] Login failed for user
>>'(null)'. Reason not associated with a trusted SQL Server connection.
>>
>>Please clarify how do we fix this problem.
>>Thanks
>>Karen
>>


Login failed for user (null)

Hi,

I am using MS SQL 2000 Personal Edition on my Windows XP Professional machine. It is working fine with Enterprise Manager and SQL Query Analyzer.

To use the db in my web application I configured a system DSN using the "SQL Server" driver. I used the "sa" user with my password and tested the connection. It said it was OK. But when I use it in my web page it throws this exception:

ODBC Error Code = 28000 (Invalid authorization specification)

[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.

Can anyone help me figure out this problem?|||Can you post the connection string you are using? To the best of my knowledge, if you are using a DSN connection, you still need to supply the user name and password.

Regards,

Hugh Scott

Originally posted by karthiknataraj
Hi,

I am using MS SQL 2000 Personal Edition on my Windows XP Professional machine. It is working fine with Enterprice manager and SQL Query analyser.

To use the db in my web application I configured a system dsn of "SQL Server" driver. I used "sa" user with my pass and tested the connection. It said it was ok. When I used in my web page it throws this exception

ODBC Error Code = 28000 (Invalid authorization specification)

[Microsoft][ODBC SQL Server Driver][SQL Server]Login failed for user '(null)'. Reason: Not associated with a trusted SQL Server connection.

Any one could help me figure this prob.|||I use ColdFusion. So I created a system DSN and in my <cfquery> tag I simply give the DSN name. Another site which is live uses SQL Server, and I checked the code; it's the same.

<CFQUERY NAME="Q1" DATASOURCE="Test">
Select * from emp
</CFQUERY>|||I don't have any experience with Cold Fusion, and I try to stay away from DSN connections as much as possible. I am digging through the cobwebs of my memory to try to come up with an answer.

I would:
1. Re-verify that the password is saved in your DSN config
2. Try specifying the UID and password in the CFQUERY; your production web server may be using a Windows Trusted connection to your SQL Server.
3. See if Cold Fusion can make DSN-less connections to source databases.

HTH,

Hugh Scott

Originally posted by karthiknataraj
I use Cold fusion. So I created a system DSN and in my <cfquery> tag I simply give the dsn name. Another one site which is live uses sql server and I checked the code it the same.

<CFQUERY NAME="Q1" DATASOURCE="Test">
Select * from emp
</CFQUERY>|||I had the same problem. Microsoft had me change the AllowInProcess registry key to 0 for the provider type I was having issues with (in my case MADASQL). This will move these connections out of the MemToLeave area that SQL uses. However, you will have to use SQL authentication after this.|||Even if you use a predefined DSN for your connection, you still HAVE to supply the user="sa" and the pass="". I don't know how you can do this with ColdFusion, but here's a guess:

"DATASOURCE=DSN2;USER=SA;PASSWORD="
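To summarise the thread, here is a sketch (Python; the DSN, server, and credential values are hypothetical) of the three ODBC connection-string styles discussed: DSN with explicit credentials, DSN-less with explicit credentials, and a trusted connection. A DSN alone does not carry SQL credentials, so UID/PWD must still be supplied unless a trusted (Windows) connection is intended.

```python
# Sketch (hypothetical names): the three ODBC connection-string styles.

def dsn_connection(dsn: str, uid: str, pwd: str) -> str:
    # DSN supplies driver/server/db; credentials still travel here.
    return f"DSN={dsn};UID={uid};PWD={pwd}"

def dsnless_connection(server: str, db: str, uid: str, pwd: str) -> str:
    # Everything spelled out, no DSN registration needed.
    return f"DRIVER={{SQL Server}};SERVER={server};DATABASE={db};UID={uid};PWD={pwd}"

def trusted_connection(server: str, db: str) -> str:
    # Uses the caller's Windows identity; no UID/PWD is sent.
    return f"DRIVER={{SQL Server}};SERVER={server};DATABASE={db};Trusted_Connection=Yes"

print(dsn_connection("Test", "sa", "secret"))
```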

Wednesday, March 21, 2012

Login failed for user '<<domain>>\<<server>>$'.

I am working on an ASP application which basically is an interface to the report server. I am currently unable to determine the source of the problem, so it may have nothing to do with Reporting Services.

Currently the back end of this application consists of a report and web server on a machine named polo and a SQL database server on a machine named tango.

The SQL logs show a failed login for user organization\polo$ (organization is a made-up domain, as I don't want to give away where I work). There is no login in SQL for the user polo$ and, as far as I can see, there is no user polo$ in the domain.

Has anyone seen this before? Can anyone account for the user polo$?

Could this be caused by Reporting Services trying to log in to the two databases (ReportServer, ReportServerTempDB) that Reporting Services needs? These two databases are not on the report server. They are on tango.

Thanks in advance

Is there a computer in the domain with the name "polo"? Network resources are accessed using the ASP.NET machine account on Windows Server 2003(BUILTIN\Network Service). The machine account name is DOMAIN\ComputerName$. The $ at the end hides it in normal views.

|||polo is the web/report server for the application.|||DOMAIN\polo$ is the name of the machine account for the web/report server. You need to grant access to ReportServer and ReportServerTempDB to DOMAIN\polo$.|||
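A sketch of what that grant amounts to, as Python generating the T-SQL (the domain and database names come from the thread; any role membership, e.g. via sp_addrolemember, would still need to be added separately):

```python
# Sketch: generate the T-SQL that grants a Windows machine account
# (DOMAIN\ComputerName$) access to the report databases.

def grant_script(account: str, databases) -> str:
    lines = [f"CREATE LOGIN [{account}] FROM WINDOWS;"]
    for db in databases:
        lines.append(f"USE [{db}];")
        lines.append(f"CREATE USER [{account}] FOR LOGIN [{account}];")
    return "\n".join(lines)

script = grant_script(r"ORGANIZATION\polo$",
                      ["ReportServer", "ReportServerTempDB"])
print(script)
```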

Where can I get more information on this ASP.NET machine account? I don't understand why it would be trying to access the database.

Secondly, when I give it access to the database, other user accounts stop working. It is like the ASP.NET machine account took precedence over the permissions I had set for a normal user.

|||This article (http://www.awprofessional.com/articles/article.asp?p=357694&seqNum=2&rl=1) explains the accounts that various Reporting Services components run under on Windows 2000 Server, Windows XP, and Windows Server 2003. I suspect that you selected service account for accessing the SQL Server hosting the Reporting Services databases. You can change that option to a domain account using the rsconfig utility from the command line.|||

I was after information on SQL and the ASP.NET machine account.

Because as I give the ASP.NET machine account access to the databases, other user accounts stop working. It is like the ASP.NET machine account took precedence over the permissions I had set for other users.

Is this behavior normal for the ASP.NET machine account?

|||I've honestly never seen other logins stop working when you enable the machine account. SQL Server just sees the incoming access token (DOMAIN\polo$, DOMAIN\MyName, or whatever) and grants the appropriate permissions. Adding a user shouldn't affect the permissions for other users. Can you query using Query Analyzer and/or Enterprise Manager? Both those tools would be running under your current account. What exactly do you mean when you say the other user accounts stop working? What is the exact error message that you receive both before and after adding the DOMAIN\polo$ account?|||After giving the machine account access to all stored procedures and databases...
Our application threw a "stored procedure cannot be executed: user has insufficient privileges" error, or something like that. Looking at the stored procedure, the user did have permission to execute it.

What I'm interested in finding out, though, is why, when I create a connection in my ASP.NET code, I get two logins: one for the ASP.NET machine account and one for the user I am impersonating. In the SQL error logs I get a login failure for the ASP.NET machine account. I want to know why the ASP.NET machine account is trying to log on to SQL in the first place when the ASP.NET application is impersonating a different user.

I see you have read my other post at http://forums.microsoft.com/msdn/ShowPost.aspx?PostID=66810, which pretty much explains what I have just said. Please write all posts concerning this problem in the above-mentioned thread. Thanks for your help so far.

I don't believe this to be a Reporting Services error anymore.

Regards

Dean Dalby

Friday, March 9, 2012

Logical Records

The organisation I'm working for has SQL Server 2005 merge replication up and running on 11 servers in 3 different countries.

Occasionally we experience problems caused by rows in a logical record relationship being replicated between servers in the wrong order, i.e. an attempt is made to insert child rows before the parent. I am not able to reproduce this behaviour on my test servers.

Has anyone else experienced this problem?

It's entirely possible for a child row to be inserted before parent row, but it should be greatly reduced in SQL 2005. What kind of errors are you having?|||

I'm sorry, I misspoke. The whole point of logical records is that the whole logical record is committed as one transaction.

Can you tell me what the errors are that you're seeing, and can you verify that the article that's failing is marked for logical records?

|||

We have a master database filtered and published to 2 separate databases on the same cluster (overlapping partitions).

Each of these 2 databases is republished to another 4 servers, where it is republished again.

Users update a local SQL Express database which is programmatically synchronised (anonymous pull subscription, via RMO) with one of the 4 servers when a transaction completes.

Very occasionally, if a user updates an existing parent row and then adds a new parent row with all its children, and both programmatic synchronisations occur within the 60 second polling interval on the server, then the data is passed up to the cluster as a parent update, child inserts and a parent insert, in that order. As a result the child rows in the server and user databases are deleted. I have not been able to reproduce the problem in the test environment but I have enough logging in place to know exactly what's happening.

I know that the logical row relationships exist because I can see them in sysmergesubsetfilters but sp_helpmergefilter won't show them to me unless they're filter_type 3 (filter and logical record relationship).

|||Sorry but it's not clear to me what the problem is. If you update parent row with ID1, and insert new parent row with ID2 and child row with FK=ID2, why would the child rows in the server and user dbs get deleted? And for which ID are they getting deleted, ID1? Are you saying the updates are moving the row out of partition?|||

The child rows for ID2 get deleted.

Inserting the child rows into the 3rd level database is not a problem because we have NOT FOR REPLICATION on the foreign keys.

When the 2nd level merge agent runs it receives the child rows before the parent and therefore appears not to be able to determine which partition they belong to, so deletes them.

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted before the parent.

Currently all article filters are type 3 (filter and logical record). Changing filter type on the 2nd and 3rd levels to 2 (logical record only) appears to make no difference in my test environment because I can't reproduce the problem to start with.

|||

That should be:

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted *after* the parent.

|||

Books Online says NFR (Not For Replication) cannot be placed on FK constraint involved with logical records. Are you sure the articles are involved in logical record relationship, that's what it seems you're implying above. Logical records should be commiting the whole thing as one txn, so even if the child goes before the parent, it shouldn't matter in a single transaction. Can you verify that the articles involved are indeed part of logical records?

|||OK, I read sp_addmergearticle in BOL and understand what option 3 is, I guess this would allow you to add NFR to your FK constraint. May I ask why you have it set to 3 instead of 2?|||

It wasn't intentional.

I scripted the sp_addmergearticle definitions from the top-level database and used them. It seemed to work, at least to start with....

|||

Can I have the URL for that BOL reference please?

I believe the articles are involved in a logical record relationship because I can see them in sysmergesubsetfilters and because sp_helpmergefilter says so.

However, I take your point that the relationship might not be working at all as opposed to working intermittently although I'm surprised at how rarely it causes a problem.

|||

See topic "Grouping Changes to Related Rows with Logical Records", http://msdn2.microsoft.com/en-us/library/ms152507.aspx.

The easiest way to fix the problem is to set @filter_type = 2; this will ensure your related changes get committed all at once. Other than that, the fact that a child row goes before a parent row is by design (as long as @filter_type <> 2); it's possible for this to happen. Note that this doesn't cause any actual data loss; it just seems that way. This usually happens with some weird timing issue, or if your connection gets broken mid-sync. What happens is the child row insert gets to the publisher; the publisher sees the child row originated from the subscriber, but knows there's no parent row to match the filter criteria, and thus sends a delete back down to the subscriber. But as soon as the parent row makes it to the publisher, it will recognize that the child row doesn't exist at the subscriber and send the child insert back down to the subscriber. This works only if NFR is enabled, which I believe is what you have; without it you'd probably get conflicts.

|||

Thanks.

We have this change scheduled for 29/5 (due to the need to re-initialise subscriptions) and I'll let you know what happens.

|||

Problem not solved.

We completely rebuilt the replication infrastructure from scratch using filter_type 2 and with no NFR clause on the foreign keys supporting the logical record relationships.

What we're now seeing is very clearly a bug in SQL Server.

If the republisher begins synchronising with the publisher and, before it finishes, a client synchronises its anonymous pull subscription with the republisher, then some of the client changes are lost (present on the republisher but never uploaded to the publisher).

Here are the messages captured during programmatic synchronisation on the client:

06/06/2007 11:15:32 Initializing
06/06/2007 11:15:32 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:32 Retrieving publication information
06/06/2007 11:15:32 Retrieving subscription information.
06/06/2007 11:15:32 Uploading data changes to the Publisher
06/06/2007 11:15:33 Enumerating deletes in all articles
06/06/2007 11:15:33 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Outing' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Stint' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Lap' (1 insert): 1 total
06/06/2007 11:15:33 Downloading data changes to the Subscriber
06/06/2007 11:15:34 Merge completed after processing 3 data change(s) (1 insert(s), 2 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:15:34 Synchronisation complete

06/06/2007 11:15:59 Initializing
06/06/2007 11:15:59 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:59 Retrieving publication information
06/06/2007 11:15:59 Retrieving subscription information.
06/06/2007 11:16:00 Uploading data changes to the Publisher
06/06/2007 11:16:01 Enumerating deletes in all articles
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Event'
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:16:01 Uploaded 1 change(s) in 'EventTyreset' (1 update): 1 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Outing' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 27 change(s) in 'Outing_AeroComponent' (27 inserts): 27 total
06/06/2007 11:16:01 Uploaded 8 change(s) in 'Outing_CofGComponent' (8 inserts): 8 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Stint' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 16 change(s) in 'Outing_BallastComponent' (16 inserts): 16 total
06/06/2007 11:16:01 Downloading data changes to the Subscriber
06/06/2007 11:16:02 Merge completed after processing 54 data change(s) (53 insert(s), 1 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:16:02 Synchronisation complete

And here's what happened on the republisher (in reverse chronological order):

No data needed to be merged.

06/06/2007 11:19:54

No data needed to be merged.

06/06/2007 11:19:41

No data needed to be merged.

06/06/2007 11:18:30

No data needed to be merged.

06/06/2007 11:18:18

Uploaded 1 change(s) in 'EventTyreset' (1 update): 11 total

06/06/2007 11:18:17

Uploading data changes to the Publisher

06/06/2007 11:16:48

No data needed to be merged.

06/06/2007 11:16:32

Uploaded 1 change(s) in 'Lap' (1 insert): 4 total

06/06/2007 11:16:27

Uploaded 1 change(s) in 'Stint' (1 update): 13 total

06/06/2007 11:16:25

Uploaded 1 change(s) in 'Outing' (1 update): 13 total

06/06/2007 11:16:19

Uploading data changes to the Publisher

06/06/2007 11:14:50

No data needed to be merged.

I am desperate to get a solution to this problem. Are there any post-SP2 hot fixes affecting replication?

Thanks in advance.

|||We are seeing something very similar. Updates from a CE database (SDF file) are coming up to the republisher database, but not getting to the Master database. Results are inconsistent at best. The replication seems to be working properly if a direct insert is done at either the republisher level or the Master level. We have not seen a failure in that scenario. We would also like an answer to the problem.

Logical Records

The organisation I'm working for has SQL Server 2005 merge replication up and running on 11 servers in 3 different countries.

Ocasionally we experience problems caused by rows in a logical record relationship being replicated between servers in the wrong order, i.e. an attempt is made to insert child rows before the parent. I am not able to reproduce this behaviour on my test servers.

Has anyone else experienced this problem?

It's entirely possible for a child row to be inserted before parent row, but it should be greatly reduced in SQL 2005. What kind of errors are you having?|||

i'm sorry, i misspoke. the whole point of logical records is that a logical record relationship is committed in one transaction, the whole logical record should be committed as one transaction.

Can you tell me what the errors are that you're seeing, and can you verify that the article that's failing is marked for logical records?

|||

We have a master database filtered and published to 2 separate databases on the same cluster (overlapping partitions).

Each of these 2 databases is republished to another 4 servers, where it is republished again.

Users update a local SQL Express database which is programmatically synchronised (anonymous pull subscription, via RMO) with one of the 4 servers when a transaction completes.

Very occasionally, if a user updates an existing parent row and then adds a new parent row with all its children, and both programmatic synchronisations occur within the 60 second polling interval on the server, then the data is passed up to the cluster as a parent update, child inserts and a parent insert, in that order. As a result the child rows in the server and user databases are deleted. I have not been able to reproduce the problem in the test environment but I have enough logging in place to know exactly what's happening.

I know that the logical row relationships exist because I can see them in sysmergesubsetfilters but sp_helpmergefilter won't show them to me unless they're filter_type 3 (filter and logical record relationship).

|||Sorry but it's not clear to me what the problem is. If you update parent row with ID1, and insert new parent row with ID2 and child row with FK=ID2, why would the child rows in the server and user dbs get deleted? And for which ID are they getting deleted, ID1? Are you saying the updates are moving the row out of partition?|||

The child rows for ID2 get deleted.

Inserting the child rows into the 3rd level database is not a problem because we have NOT FOR REPLICATION on the foreign keys.

When the 2nd-level merge agent runs, it receives the child rows before the parent and therefore appears unable to determine which partition they belong to, so it deletes them.

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted before the parent.

Currently all article filters are type 3 (filter and logical record). Changing filter type on the 2nd and 3rd levels to 2 (logical record only) appears to make no difference in my test environment because I can't reproduce the problem to start with.

|||

That should be:

Incidentally, we have the NOT FOR REPLICATION clause to prevent the converse problem, where child rows get deleted *after* the parent.

|||

Books Online says NFR (Not For Replication) cannot be placed on an FK constraint involved with logical records. Are you sure the articles are involved in a logical record relationship? That's what you seem to be implying above. Logical records should be committing the whole thing as one txn, so even if the child goes before the parent, it shouldn't matter in a single transaction. Can you verify that the articles involved are indeed part of logical records?

|||OK, I read sp_addmergearticle in BOL and understand what option 3 is, I guess this would allow you to add NFR to your FK constraint. May I ask why you have it set to 3 instead of 2?|||

It wasn't intentional.

I scripted the sp_addmergearticle definitions from the top-level database and used them. It seemed to work, at least to start with....

|||

Can I have the URL for that BOL reference please?

I believe the articles are involved in a logical record relationship because I can see them in sysmergesubsetfilters and because sp_helpmergefilter says so.

However, I take your point that the relationship might not be working at all as opposed to working intermittently although I'm surprised at how rarely it causes a problem.

|||

See topic "Grouping Changes to Related Rows with Logical Records", http://msdn2.microsoft.com/en-us/library/ms152507.aspx.

Easiest way to fix the problem is to set @filter_type = 2; this will ensure your relational changes get committed all at once. Other than that, the fact that a child row goes before a parent row is by design (as long as @filter_type <> 2); it's possible for this to happen. Do know that this doesn't cause any actual data loss, it just seems that way. This usually happens with some weird timing issue, or if your connection gets broken mid-sync. What happens is: the child row insert gets to the publisher; the publisher sees the child row originated from the subscriber, but knows there's no parent row to match the filter criteria, so it sends a delete back down to the subscriber. But as soon as the parent row makes it to the publisher, it recognizes that the child row doesn't exist at the subscriber and sends the child insert back down to the subscriber. This works only if NFR is enabled, which I believe is what you have; without it you'd probably get conflicts.

|||

Thanks.

We have this change scheduled for 29/5 (due to the need to re-initialise subscriptions) and I'll let you know what happens.

|||

Problem not solved.

We completely rebuilt the replication infrastructure from scratch using filter_type 2 and with no NFR clause on the foreign keys supporting the logical record relationships.

What we're now seeing is very clearly a bug in SQL Server.

If the republisher begins synchronising with the publisher and, before it finishes, a client synchronises its anonymous pull subscription with the republisher, then some of the client changes are lost (present on the republisher but never uploaded to the publisher).

Here are the messages captured during programmatic synchronisation on the client:

06/06/2007 11:15:32 Initializing
06/06/2007 11:15:32 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:32 Retrieving publication information
06/06/2007 11:15:32 Retrieving subscription information.
06/06/2007 11:15:32 Uploading data changes to the Publisher
06/06/2007 11:15:33 Enumerating deletes in all articles
06/06/2007 11:15:33 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Outing' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Stint' (1 update): 1 total
06/06/2007 11:15:33 Uploaded 1 change(s) in 'Lap' (1 insert): 1 total
06/06/2007 11:15:33 Downloading data changes to the Subscriber
06/06/2007 11:15:34 Merge completed after processing 3 data change(s) (1 insert(s), 2 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:15:34 Synchronisation complete

06/06/2007 11:15:59 Initializing
06/06/2007 11:15:59 Connecting to Publisher 'TT-SQL1\SQL2005'
06/06/2007 11:15:59 Retrieving publication information
06/06/2007 11:15:59 Retrieving subscription information.
06/06/2007 11:16:00 Uploading data changes to the Publisher
06/06/2007 11:16:01 Enumerating deletes in all articles
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Event'
06/06/2007 11:16:01 Enumerating inserts and updates in article 'Team_Session'
06/06/2007 11:16:01 Uploaded 1 change(s) in 'EventTyreset' (1 update): 1 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Outing' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 27 change(s) in 'Outing_AeroComponent' (27 inserts): 27 total
06/06/2007 11:16:01 Uploaded 8 change(s) in 'Outing_CofGComponent' (8 inserts): 8 total
06/06/2007 11:16:01 Uploaded 1 change(s) in 'Stint' (1 insert): 1 total
06/06/2007 11:16:01 Uploaded 16 change(s) in 'Outing_BallastComponent' (16 inserts): 16 total
06/06/2007 11:16:01 Downloading data changes to the Subscriber
06/06/2007 11:16:02 Merge completed after processing 54 data change(s) (53 insert(s), 1 update(s), 0 delete(s), 0 conflict(s)).
06/06/2007 11:16:02 Synchronisation complete

And here's what happened on the republisher (in reverse chronological order):

No data needed to be merged.

06/06/2007 11:19:54

No data needed to be merged.

06/06/2007 11:19:41

No data needed to be merged.

06/06/2007 11:18:30

No data needed to be merged.

06/06/2007 11:18:18

Uploaded 1 change(s) in 'EventTyreset' (1 update): 11 total

06/06/2007 11:18:17

Uploading data changes to the Publisher

06/06/2007 11:16:48

No data needed to be merged.

06/06/2007 11:16:32

Uploaded 1 change(s) in 'Lap' (1 insert): 4 total

06/06/2007 11:16:27

Uploaded 1 change(s) in 'Stint' (1 update): 13 total

06/06/2007 11:16:25

Uploaded 1 change(s) in 'Outing' (1 update): 13 total

06/06/2007 11:16:19

Uploading data changes to the Publisher

06/06/2007 11:14:50

No data needed to be merged.

I am desperate to get a solution to this problem. Are there any post-SP2 hot fixes affecting replication?

Thanks in advance.

|||We are seeing something very similar. Updates from a CE database (SDF file) are coming up to the republisher database, but not getting to the Master database. Results are inconsistent at best. The replication seems to be working properly if a direct insert is done at either the republisher level or the Master level. We have not seen a failure in that scenario. We would also like an answer to the problem.

Wednesday, March 7, 2012

Logical Error: using expressions & package variables

Hi,

I have encountered some strange output when testing an SSIS package I've been working on.

Please see this thread for background information: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=1590716&SiteID=1. The previous thread was resolved and I have used the tips listed there to create a package with Data Flow Tasks that use expressions to create 'dynamic' queries.

The Data Flow Tasks use two package-scope variables to determine the date range that they're operating on - this allows me to script the from / to dates (usually a 'last week' range) or, in the event of an error, 'back-date' the job easily without having to update each individual component.

However, something weird is happening - when I have two Data Flow Tasks (DFTs) in the same Sequence Container (tasks that are not 'connected' or configured to operate in a flow), only one DFT picks up the package variables & returns rows. The other DFT returns no rows & seems to be using the default values of the variables.

I have tested this by simply enabling DFT A and disabling DFT B - debugging shows only one DFT executes correctly. The same point is illustrated when I disable DFT A and enable DFT B. However, if I configure DFT A to run after DFT B then both pick up the correct variable values and return the correct row counts!!

I want Data Flow Tasks that pull data from the same remote source to be in the same sequence container so that, when an error occurs, I get container-level transaction support.

Note:

I have thoroughly checked the 'evaluated' expressions in the IDE and in Management Studio - this is an SSIS-specific quirk (or me!).

Questions:

What's happening here? Have I made a conceptual error about how this should work? On the same variable topic: is there a way to set values for package-level variables at execution time, e.g. via command-line execution / dtexecui?

Let me know if you need any extra explanation / diagrams etc. to understand this; the other (closed) thread I have listed above should provide more context for my problem.

Thanks for your support!!

You can set any property on any object in the package from the command line, using the same syntax as you see with configurations. See the /SET syntax, e.g.

/SET \package\DataFlowTask.Variables[User::MyVariable].Value;newValue

I'll think about the other bit....

|||

Hi Darren,

Thanks for your response & useful command-line syntax. Slight update on my posting: this happens not only with Data Flow Tasks in a single Sequence Container but across separate Sequence Containers.

For example, I have 5 sequence containers with 2-3 data flow tasks in each; unless all the sequence containers 'flow'/'connect' to each other (on success), the variable values never seem to be read properly. I see no reason why the containers should have to be linked to execute correctly.

Admittedly, the impact of this is minimal: if one sequence container fails then the whole package has failed & it should back out & notify an admin - it's still a weird occurrence though!

Question:

I've seen 'container level' transaction management - my package is 'read only' in the sense that it reads from remote endpoints and writes to a local db; it never alters the remote DBs. Container-level transactions aren't an option for me as the package seems to want transaction control over the remote data sources (I guess that's important if you've done remote db inserts and updates). I can't get network access to the remote machine's DTC service.

Is there a global 'on package error' event that I can hook into my package, where I can run some SQL to truncate all local data that was created during a failed/partial run?

Thanks for your help

Friday, February 24, 2012

Logic problem

Hi,

This might just be my brain not working after the weekend, but I'm having problems working out just how to do this.

The table:

CREATE TABLE [dbo].[tblQuiz] (
[id] [int] IDENTITY (1, 1) NOT NULL ,
[q1] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q2] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q3] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q4] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q5] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q6] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q7] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[q8] [nvarchar] (100) COLLATE Latin1_General_CI_AS NULL ,
[quizdate] [datetime] NULL ,
[ipaddress] [nvarchar] (50) COLLATE Latin1_General_CI_AS NULL ,
[sessionid] [nvarchar] (50) COLLATE Latin1_General_CI_AS NULL ,
[score] [int] NULL
)

The field [q5] has one of three values in it: "Happy", "Unhappy" or "Neither".

I have a list of about 50 session ID's. If a record in [tblQuiz] has a [sessionid] that matches one in this list, then:
if [q5]= 'Happy' then change it to 'Unhappy'
if [q5]= 'Unhappy' then change it to 'Happy'

The only way I can think of is to change all of one matching record type to some other value, then change all the other type, then change all the original type back to the other type. Which doesn't make much sense even when I've written it down, much less when I'm doing it and have to remember where I'm at. Is there a better way?|||Is this an abstraction of a real world problem?
Anyway - check out CASE in BoL - it is just the ticket for you.|||Is this an abstraction of a real world problem?
Anyway - check out CASE in BoL - it is just the ticket for you.

I'm not sure what you mean by abstraction? It IS a real-world problem: "Real" as in "my boss is swearing at me". :eek:

But you're right: CASE does the job perfectly. Thankyou very much :)|||Generally it's better to use a lookup table to store your Happy/Unhappy/Neither strings and refer to them through fk ids in tblQuiz. That way you don't waste space storing the same string over and over.

If there are truly only 3 possible values, you could be using a tinyint in the q5 column instead of nvarchar(100), which is a pretty big space savings. This begins to matter pretty quickly in large databases with millions of rows.
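For reference, the CASE approach suggested above can be sketched as follows. This is only an illustration using Python's built-in sqlite3 module as a stand-in for SQL Server (the table is a cut-down tblQuiz and the session IDs are made-up placeholders), but the CASE expression itself reads the same in T-SQL:

```python
import sqlite3

# In-memory stand-in for tblQuiz, keeping only the columns relevant to the swap.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tblQuiz (id INTEGER PRIMARY KEY, q5 TEXT, sessionid TEXT)")
conn.executemany(
    "INSERT INTO tblQuiz (q5, sessionid) VALUES (?, ?)",
    [("Happy", "s1"), ("Unhappy", "s2"), ("Neither", "s3"), ("Happy", "s4")],
)

# Swap Happy <-> Unhappy in a single pass for the matching sessions;
# 'Neither' and non-matching sessions are left untouched.
session_ids = ["s1", "s2", "s3"]  # in the real case, the list of ~50 IDs
placeholders = ",".join("?" * len(session_ids))
conn.execute(
    f"""UPDATE tblQuiz
        SET q5 = CASE q5
                   WHEN 'Happy'   THEN 'Unhappy'
                   WHEN 'Unhappy' THEN 'Happy'
                   ELSE q5
                 END
        WHERE sessionid IN ({placeholders})""",
    session_ids,
)

print(conn.execute("SELECT sessionid, q5 FROM tblQuiz ORDER BY id").fetchall())
# -> [('s1', 'Unhappy'), ('s2', 'Happy'), ('s3', 'Neither'), ('s4', 'Happy')]
```

Because the CASE is evaluated per row within one UPDATE, there is no need for the three-step "temporary value" shuffle described in the question.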
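The lookup-table suggestion in the last reply could look something like this minimal sketch, again using sqlite3 purely for illustration (tblQ5Answer and the q5_id column are invented names, not from the original thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The lookup table stores each answer string exactly once; tblQuiz then
# holds a small integer foreign key instead of repeating the string.
conn.executescript("""
    CREATE TABLE tblQ5Answer (id INTEGER PRIMARY KEY, label TEXT UNIQUE);
    INSERT INTO tblQ5Answer (id, label) VALUES (1, 'Happy'), (2, 'Unhappy'), (3, 'Neither');
    CREATE TABLE tblQuiz (id INTEGER PRIMARY KEY, q5_id INTEGER REFERENCES tblQ5Answer(id));
    INSERT INTO tblQuiz (q5_id) VALUES (1), (2), (3), (1);
""")

# Resolve the label with a join when reading, rather than storing it per row.
rows = conn.execute("""
    SELECT q.id, a.label
    FROM tblQuiz q JOIN tblQ5Answer a ON a.id = q.q5_id
    ORDER BY q.id
""").fetchall()
print(rows)
# -> [(1, 'Happy'), (2, 'Unhappy'), (3, 'Neither'), (4, 'Happy')]
```

In SQL Server the q5_id column would be a tinyint, which is the space saving the reply is describing.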