
Monday, January 30, 2012

Do you know - Accessing a linked server using a synonym results in error 7359

Accessing a linked server through a synonym results in an error after indexes are rebuilt on the target server in SQL Server 2005.
The error message looks like this:
Msg 7359, Level 16, State 1, Line 1
The OLE DB provider "SQLNCLI" for linked server "Domain\User" reported a change in schema version between compile time ("165944653572647") and run time ("170368469612430") for table ""Adventureworks"."dbo"."address"".
You will hit this issue in the following scenario:
1. On SQL Server instance A create a linked server to SQL Server instance B.
2. On SQL Server instance A, run a SELECT query like “select * from SQLServer2.AdventureWorks.dbo.address”. This should return a valid result set.
3. On SQL instance A, create a synonym by running the following command in a query window.
Create Synonym syn_name For SQLServer2.AdventureWorks.dbo.address
4. On SQL instance A, create a view on the synonym by running the following command
Create View vw_name
As
Select * from syn_name
5. On SQL instance A, run a SELECT query on the synonym like “Select * from syn_name”.
This should return the same results that you got in Step 2.
6. Now on SQL Instance B rebuild an index on the table “Address”
Alter index PK_Address_AddressID on address rebuild
7. Now on SQL instance A, select from the view: “Select * from vw_name”.
This will fail with error 7359.

To work around this error, run sp_refreshview before executing the view:
exec sp_refreshview vw_name
Select * from vw_name
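If more than one module references the synonym, sp_refreshsqlmodule can refresh each of them the same way (a sketch; the module name is illustrative):

```sql
-- Refresh the metadata of any view, procedure or function that references the synonym.
-- 'dbo.vw_name' is illustrative; repeat for each dependent module.
EXEC sp_refreshsqlmodule N'dbo.vw_name';
```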

How to: Restrict access to public on server/database objects? What are its implications? What will happen to ownership chains?

I've been asked many questions related to SQL Server security where DBAs need to harden their SQL Server and prevent the public/guest users from having access to certain information, such as server- or database-level management views. This hardening is required by company standards, policies or, more generally, security best practice.




Here are a few common questions that I've been asked multiple times:


1. I want to remove permissions for the guest/public user from viewing server- and database-level views such as sys.databases.
2. Whenever I disable the guest account, Windows users are not able to access the databases.
3. I want to revoke all access from the public role and grant it to other user roles.
4. Even after revoking SELECT permission on a table, why am I still able to access the rows using the same SELECT statement inside a stored procedure?




Let's first talk about the GUEST account. Here is some information on it that may come as a surprise to some of you; I've put it together as an FAQ.


Question 1:
What happens when I disable the ‘guest’ account?
Answer: By default, the GUEST account is disabled in each user database. You cannot drop the guest account.

To grant access to guest account from a sysadmin login,
          USE <DBNAME TO ENABLE GUEST>
          GRANT CONNECT TO GUEST

To revoke access to guest account from a sysadmin login,
         USE <DBNAME TO DISABLE GUEST>
         REVOKE CONNECT FROM GUEST


Once the GUEST account is disabled in a database, only logins that have a database user mapping will be able to access (USE) that database. The guest user exists to permit access to a database for logins that are not mapped to a specific database user.

If a login that does not have a mapping (a user defined in the database) tries to access the database, it will fail with the following error:

Msg 916, Level 14, State 1, Line 1
The server principal "USERNAME" is not able to access the database "DBNAME" under the current security context.
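To find out in which databases guest is currently enabled, a query like the following can help (a sketch using the undocumented sp_MSforeachdb procedure; run it as sysadmin):

```sql
-- Print every database where the guest user has been granted CONNECT.
EXEC sp_MSforeachdb '
USE [?];
IF EXISTS (SELECT 1
           FROM sys.database_permissions dp
           JOIN sys.database_principals pr ON dp.grantee_principal_id = pr.principal_id
           WHERE pr.name = ''guest''
             AND dp.permission_name = ''CONNECT''
             AND dp.state_desc = ''GRANT'')
    PRINT ''guest is enabled in: ?'';';
```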


Question 2: I want to revoke all access from the public role and grant it to other user roles.
Answer: By default, every login belongs to the fixed public server role and every database user belongs to the public database role. This cannot be changed, as it is hard-coded internally.


You can use the following query to check server-level permissions for the public role:

SELECT State_Desc, Permission_Name, class_desc,
       COALESCE(OBJECT_NAME(major_id), DB_NAME(major_id)) SecurableName,
       SCHEMA_NAME(O.schema_id) [Schema],
       Grantees.Name GranteeName, Grantees.Type_Desc GranteeType
FROM sys.server_permissions Perms
INNER JOIN sys.server_principals Grantees ON Perms.Grantee_Principal_Id = Grantees.Principal_Id
LEFT OUTER JOIN sys.all_objects O ON Perms.major_id = O.object_id
WHERE Grantees.Name = 'public'
ORDER BY SecurableName


You can use the following query to identify permissions for public in every database:

sp_msforeachdb 'BEGIN
PRINT ''Permissions for PUBLIC role in database: ?''
SELECT a.principal_id AS [PrincipalID], a.name AS [UserName],
       b.major_id AS [ObjectID], OBJECT_NAME(b.major_id) AS [ObjectName],
       b.permission_name AS [PermissionDesc], b.state_desc AS [PermissionState]
FROM ?.sys.database_principals a
INNER JOIN ?.sys.database_permissions b ON a.principal_id = b.grantee_principal_id
WHERE a.principal_id = 0 -- filtering on the PUBLIC role
END'


From the above output you can identify which permissions are required for your application and then move them to other user-created roles. Keep in mind that once guest is disabled in a database, only logins that have a user mapping will be able to use the PUBLIC database role.
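For example, moving a SELECT grant from public to a dedicated role could look like this (a sketch; the table, role and user names are illustrative):

```sql
-- Move a permission from the public role to a user-created role.
REVOKE SELECT ON dbo.SomeTable FROM public;
CREATE ROLE app_readers;
GRANT SELECT ON dbo.SomeTable TO app_readers;
EXEC sp_addrolemember 'app_readers', 'SomeUser';
```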

Question 3: Okay, I got it, but I still want to remove permissions from the public role to make SQL Server more secure. What would be the implications?
Answer: All right, here are some implications of revoking access from public:


1. Regular users cannot use SQL Server Management Studio to connect to SQL Server,
e.g. if you deny CONNECT SQL to public:
       use master
       go
       DENY CONNECT SQL TO PUBLIC


Or you can specifically revoke the CONNECT permission from the guest account in each database:

      USE <DBNAME TO DISABLE GUEST>
      go
      REVOKE CONNECT FROM GUEST


2. If you revoke permissions from public, you will break functionality in features like replication, SQL Server Agent and log shipping. There is no definitive list of what will break.
But specifically regarding system objects, the documentation warns about a possible functionality break:
http://msdn.microsoft.com/en-us/library/ms178634(SQL.90).aspx


3. Catalog views have restricted visibility for a normal user, so the SELECT grants to public do not impact security. Likewise, system stored procedures with EXECUTE granted to public have internal checks for role membership or possession of some additional permission.

However, we can prevent users from viewing server information by denying VIEW DEFINITION (database scope) and VIEW ANY DEFINITION (server scope) on the catalog views.

Reference: http://technet.microsoft.com/en-us/library/ms190785.aspx (apply the DENY to the objects mentioned here)


use databasename
go
DENY VIEW DEFINITION TO testlogin;
-> The above restricts the user from getting information from the INFORMATION_SCHEMA views and any other database-scoped views.


-> Note: This does NOT mean the user will get an access-denied error; instead they get a "restricted view" of the metadata, which means results may be empty or show only rows visible to the public principal.


use master
go
DENY VIEW ANY DEFINITION TO testlogin;
-> The above restricts the user from getting information from server-scoped views like sys.databases.

As an additional hardening step, we can run the above commands even for the public role.

use master
DENY VIEW ANY DEFINITION TO public;

use <dbname>
DENY VIEW DEFINITION TO public;


4. In the master database, you can deny EXECUTE on xp_cmdshell to public. This is additional hardening; it is up to you to decide whether you want it.

use master
go
DENY EXECUTE ON sys.xp_cmdshell TO public;


Question 4: I denied SELECT permission for a user on a table, but I am still able to access the rows using the same SELECT statement inside a stored procedure. Is this a bug?
Answer: No! Here is why I say that...

Reviewing this article http://msdn.microsoft.com/en-us/library/ms188332(SQL.90).aspx tells me this
<snippet>
Permissions are not required to run the EXECUTE statement. However, permissions are required on the securables that are referenced within the EXECUTE string. For example, if the string contains an INSERT statement, the caller of the EXECUTE statement must have INSERT permission on the target table. Permissions are checked at the time EXECUTE statement is encountered, even if the EXECUTE statement is included within a module.
</snippet>
I did a repro on my local SQL instance to test this behavior. Here are the steps I followed:

1. Created a new TEST login and gave it access to the TEST database. By default it took the dbo schema in the TEST database.

2. Logged in as the test login and ran the query "select * from test"; I was able to query the table.
3. Denied SELECT on table TEST to login test.

DENY SELECT ON OBJECT::dbo.test TO [test]

Msg 229, Level 14, State 5, Line 1
The SELECT permission was denied on the object 'Test', database 'test', schema 'dbo'.
4. Created a view that selects from the test table: "SELECT * FROM TEST.dbo.test".
    I get the same access-denied error when running the view.

5. Created a stored procedure that selects from the test table: "SELECT * FROM TEST.dbo.test".
    I get the same access-denied error when EXECUTING the SP.

6. Granted EXECUTE permission on the new stored procedure to user TEST.

Now I am able to select the rows, even though a direct SELECT fails!
Hmmm, strange behavior. Did some more testing: I checked the owner of the table and the SP.

EXEC sp_help 'testproc'
EXEC sp_help 'test'

Name      Owner  Type
--------- ------ ----------------
testproc  dbo    stored procedure

Name      Owner  Type
--------- ------ ----------------
Test      dbo    user table


In this case, the owner of the table and the SP were the same, because I created the SP while logged in as sa. So what seems to be happening is that when the SELECT on the table actually runs, SQL Server does NOT check permissions as long as the schema/owner of the procedure and the table match.
To verify this theory I changed the owner of the SP/table to something else (either one of the below will do):

EXEC sp_changeobjectowner 'dbo.testproc', 'guest';
EXEC sp_changeobjectowner 'dbo.test', 'Domain\username';

I got this warning when changing the owner:

Caution: Changing any part of an object name could break scripts and stored procedures.

Now when I tried to execute the stored procedure, it failed because the owning schema differs between the table and the SP. So this is ownership chaining. A Bing search brought up this gem of an article; I would highly recommend you read it, as it is a very simple, pictorial explanation of security chaining in SQL Server.

Ownership Chains
http://msdn.microsoft.com/en-us/library/ms188676(SQL.90).aspx
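The repro steps above can be condensed into one script (a sketch; it assumes the TEST database and the test login from the steps already exist, and all object names are illustrative):

```sql
-- Ownership-chaining repro: a DENY on the table is bypassed when the
-- procedure and the table share the same owner (dbo).
USE TEST;
GO
CREATE TABLE dbo.test (id int);
INSERT dbo.test VALUES (1);
GO
CREATE PROCEDURE dbo.testproc AS SELECT * FROM dbo.test;
GO
DENY SELECT ON OBJECT::dbo.test TO [test];
GRANT EXECUTE ON dbo.testproc TO [test];
-- Connected as login 'test':
--   SELECT * FROM dbo.test;   -- fails with Msg 229
--   EXEC dbo.testproc;        -- succeeds: the ownership chain is intact
```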


Other Useful Documentation regarding SQL Server Security

Security Best Practices Document
http://download.microsoft.com/download/8/5/e/85eea4fa-b3bb-4426-97d0-7f7151b2011c/SQL2005SecBestPract.doc

Troubleshooting Metadata Visibility
http://technet.microsoft.com/en-us/library/ms190785.aspx

Ownership Chains
http://msdn.microsoft.com/en-us/library/ms188676(SQL.90).aspx

Does using the Bulk-Logged recovery model for bulk operations reduce the size of transaction log backups?

I've been working on SQL Server for more than 5 years and I keep coming across various SQL Server misconceptions (call them DBA myths). In this post I'll demystify a common DBA myth around log shipping: "In a log shipping setup, to ship smaller T-log backups to the secondary, switch to the BULK_LOGGED recovery model before performing any bulk operation."

SCENARIO:

- You have configured log shipping for a high-volume OLTP database
- On the primary server, to optimize database performance, a daily re-indexing job is scheduled
- Every day, after the re-indexing job runs, the T-log grows huge and the subsequent T-log backup is large. It takes a lot of time (and, of course, network resources!) to ship T-log backups to the secondary server
- So, to resolve this issue, the DBA came up with the plan below:

1. Change the recovery model of database to BULK_LOGGED
2. Perform RE-INDEXING (which qualifies to be minimally logged operation)
3. Change the recovery model back to FULL


- The DBA believes that following these steps will reduce the size of the T-log backup and thereby minimize network resource usage. This is "THE MYTH".


EXPLANATION:

- Backups under the BULK_LOGGED recovery model include both log records and the data pages changed by bulk operations, which can actually make a log backup very large.

Here’s an extract from Books Online (http://technet.microsoft.com/en-us/library/ms190692(SQL.90).aspx). Note in particular the second paragraph:

Compared to the full recovery model, which fully logs all transactions, the bulk-logged recovery model minimally logs bulk operations, although fully logging other transactions. The bulk-logged recovery model protects against media failure and, for bulk operations, provides the best performance and least log space usage.

Under the bulk-logged recovery model, if a log backup covers any bulk operations, the log backup contains both log records and the data pages that were changed by bulk operations. This is necessary to capture the results of the bulk-logged operations. The incorporated data extents can make a log backup very large. Additionally, backing up the log requires access to the data files that contain the bulk-logged transactions. If any affected database file is inaccessible, the transaction log cannot be backed up and all operations committed in that log are lost.

LET’S BREAK THIS MYTH – With some testing!!

For the purpose of demonstration, I will be using the AdventureWorks sample database.

--1. Verify the recovery model of the AdventureWorks database


sp_helpdb 'AdventureWorks' 
Go 
AdventureWorks Recovery=FULL

--2. Verify current T-log usage

dbcc sqlperf(logspace) 

Database Name Log Size (MB) Log Space Used (%) Status
------------------ ------------- ------------------ -----------
AdventureWorks 257.9922 1.920442 0

--3. Perform re-indexing on AdventureWorks database

Use AdventureWorks 
Go 
exec sp_MSforeachtable "DBCC DBREINDEX ('?')" 
Go 

--4. Verify T-log usage after Re-Indexing
dbcc sqlperf(logspace) 
Database Name Log Size (MB) Log Space Used (%) Status
----------------- ------------- ------------------ -----------
AdventureWorks 257.9922 41.09579 0


--5. Backup T-Log in FULL recovery model

BACKUP LOG [AdventureWorks] TO  DISK = N'C:\Backups\LOG_BCKUP_IN_FULL_RECOV.trn' 
GO 
--6. Verify the size of T-log Backup in FULL recovery

Size = 102.9 MB

--7. Change the Recovery Model to BULK_LOGGED

USE [master] 
GO 
ALTER DATABASE [AdventureWorks] SET RECOVERY BULK_LOGGED WITH NO_WAIT 
GO 

--8. Verify the recovery model of the AdventureWorks database

sp_helpdb 'AdventureWorks' 
Go 
AdventureWorks Recovery=BULK_LOGGED

--9. Verify current T-log usage

dbcc sqlperf(logspace) 
Database Name Log Size (MB) Log Space Used (%) Status
----------------- ------------- ------------------ -----------
AdventureWorks 257.9922 2.983337 0

--10. Perform re-indexing on AdventureWorks database

Use AdventureWorks 
Go 
exec sp_MSforeachtable "DBCC DBREINDEX ('?')" 
Go 

--11. Verify T-log usage after Re-Indexing

dbcc sqlperf(logspace) 
Database Name Log Size (MB) Log Space Used (%) Status
----------------- ------------- ------------------ -----------
AdventureWorks 257.9922 4.773189 0

--12. Backup T-Log in BULK_LOGGED recovery model

BACKUP LOG [AdventureWorks] TO DISK = N'C:\Backups\ADW_REINDX_IN_BULK.trn' 
GO 

--13. Verify the size of T-log Backup in BULK_LOGGED

Size = 109.7 MB

CONCLUSION: Myth Busted!!

- Bulk operations (like re-indexing) under the BULK_LOGGED recovery model are minimally logged, which reduces T-log file growth. However, this will NOT reduce the size of the subsequent T-log backups[1]. So this approach does not help in the scenario in question.

NEXT STEPS:
- Go and test this yourself :)

REFERENCE READ:    

Backup Under the Bulk-Logged Recovery Model
: http://technet.microsoft.com/en-us/library/ms190692(SQL.90).aspx

Overview of the Recovery Models : http://msdn.microsoft.com/en-us/library/ms189275(SQL.90).aspx

[1] A T-log backup under the Bulk-Logged recovery model contains both the modified data pages and the log records themselves. As such, there are two additional factors that can influence the number of data pages being backed up and consequently the overall size of a T-log backup in bulk-logged recovery:
Page fullness
– Derived from 'Avg. Bytes Free per Page' and 'Avg. Page Density (full)'. The more free bytes per page, the less full the pages, so the same amount of data spans more pages.
Index fill factor
– The percentage of space in an index page that is used when an index is created or rebuilt. The purpose is to leave some free space in the index page to accommodate future inserts/updates and avoid page splits.
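Page fullness can be inspected with sys.dm_db_index_physical_stats (a sketch; the database and table names assume the AdventureWorks demo used above):

```sql
-- Average page density per index: the lower the percentage, the more pages
-- a given amount of data occupies (and the more pages a bulk-logged backup may include).
SELECT index_id, avg_page_space_used_in_percent, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('AdventureWorks'), OBJECT_ID('Person.Address'),
         NULL, NULL, 'SAMPLED');
```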

Thursday, January 26, 2012

How to: Set up Database mirroring in SQL Server 2008 using T-SQL when the database is encrypted using Transparent Data Encryption

Consider the following scenario: you are using SQL Server 2008 Enterprise Edition RTM or a higher version, and you are using Transparent Data Encryption (a new feature introduced in SQL Server 2008) to encrypt the database to be mirrored. The database encryption key has been encrypted using a server certificate, which in turn is encrypted using the database master key of the master database (a symmetric key).
When trying to set up database mirroring using SSMS you might get the exception below:
<Exception>
TITLE: Database Properties
------------------------------
An error occurred while starting mirroring.
------------------------------
ADDITIONAL INFORMATION:
Alter failed for Database '<DB_NAME>'. (Microsoft.SqlServer.Smo)
------------------------------
Alter failed for DatabaseEncryptionKey 'Microsoft.SqlServer.Management.Smo.ObjectKeyBase'. (Microsoft.SqlServer.Smo)
------------------------------
An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
------------------------------
Database '<DB_NAME>' cannot be opened. It is in the middle of a restore. (Microsoft SQL Server, Error: 927)
------------------------------
BUTTONS:
OK
------------------------------
</Exception>
This seems to be an issue with the SSMS GUI. To work around this problem you can use T-SQL to set up database mirroring of an encrypted database in SQL Server 2008.
Here, I am going to explain the step-by-step procedure on how to encrypt the database using TDE and then mirror the encrypted database using T-SQL.
On the Principal Site
======================
/* Create a database master key in the master database, secured by a password. If you omit the encryption mechanism (password) while creating the MASTER KEY, the database master key gets encrypted by the SERVICE MASTER KEY (the default behavior). In that scenario you might need to export the service master key and copy/import it on the mirror site.
The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys that are present in the database.
Information about the database master key is visible in the sys.symmetric_keys catalog view.
If the database master key already exists and is not in use, or you want to change the encryption algorithm or key-protection mechanism, you can either alter the database master key or drop and recreate it.
In my test scenario, I'm dropping the existing master key and creating a new master key encrypted with a password */
--Check if the master key already present.
USE Master
go
select * from sys.symmetric_keys
--Drop the existing Master Key
USE master
GO
DROP MASTER KEY
Go
--Create Master Key in Master Database
USE MASTER
GO
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<TypeStrongPassword>';
go
**Note: <TypeStrongPassword> should remain the same throughout the setup.
--Create a server certificate in the master database, encrypted with the master key created above; it will be used to create the user database encryption key.
USE Master;
GO
CREATE CERTIFICATE <MyDB_Mirror_Server_Cert> WITH SUBJECT = 'SQL TDE CERT'
Go
**Note: Replace <MyDB_Mirror_Server_Cert> with the name of the certificate. You can specify any name of your choice. You can also change the SUBJECT to a more meaningful description.

-- Now, in the user database, create a database encryption key. In my test scenario, I'm dropping the existing database encryption key if it already exists and is not in use.
-- Information about the database encryption keys is stored in sys.dm_database_encryption_keys.
USE <User Database>
go
DROP DATABASE ENCRYPTION KEY
go
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE <MyDB_Mirror_Server_Cert>
GO

--Enabling Transparent Database Encryption for the USER Database
USE master;
GO
ALTER DATABASE <User Database> SET ENCRYPTION ON
GO
-- Now Backup master key immediately
USE master;
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<TypeStrongPassword>';
BACKUP MASTER KEY TO FILE = '<Full path and exportmasterkey filename>'
ENCRYPTION BY PASSWORD = '<TypeStrongPassword>';
GO


**Note: Replace <Full path and exportmasterkey filename> with the full path and export file name. Also replace <TypeStrongPassword> with the actual password used to encrypt the master key.
-- Now Backup Server certificate as well
BACKUP CERTIFICATE <MyDB_Mirror_Server_Cert> TO FILE = '<Full path and export cert filename>'
WITH PRIVATE KEY ( FILE = '<Full path and export filename _key>' ,
ENCRYPTION BY PASSWORD = '<TypeStrongPassword>');
GO
-- Perform Full database backup of the Principal database
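The full backup itself might look like this (a sketch; the path and file name placeholders are illustrative and match the restore step on the mirror):

```sql
-- Full backup of the principal database, to be restored WITH NORECOVERY on the mirror.
BACKUP DATABASE <User Database>
TO DISK = N'C:\Backups\<Backup_FileName>.bak'
WITH INIT;
GO
```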




On the Mirrored Site
==============

/* On the mirror server, restore the master key from the backup taken on the principal site. The database master key is a symmetric key used to protect the private keys of certificates and asymmetric keys present in the database. Information about the database master key is visible in the sys.symmetric_keys catalog view.
If a database master key already exists and is not in use, drop it and restore the master key from the backup taken on the principal site.
In my test scenario, I'm dropping the existing master key and restoring the master key from the backup taken on the principal site */
use master
go
drop master key
go
RESTORE MASTER KEY
FROM FILE = '<Full path and exportmasterkey filename>'
DECRYPTION BY PASSWORD = '<TypeStrongPassword>'
ENCRYPTION BY PASSWORD = '<TypeStrongPassword>';
GO


-- Create server certificate on the mirror site using the PRIVATE KEY backed up from principal site
USE Master;
GO
DROP CERTIFICATE <MyDB_Mirror_Server_Cert>
go
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<TypeStrongPassword>'
GO 
-- Add another encryption mechanism to encrypt the database master key with service master key.
ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
GO
CREATE CERTIFICATE <MyDB_Mirror_Server_Cert>
FROM FILE = '<Full path and export cert filename>'
WITH PRIVATE KEY (FILE = '<Full path and export filename _key>',
DECRYPTION BY PASSWORD = '<TypeStrongPassword>');
GO
-- Restore the database from backup with NORECOVERY
RESTORE DATABASE <User Database>
FROM disk = 'C:\Program Files\Microsoft SQL Server\MSSQL10.x\MSSQL\Backup\<Backup_FileName>.bak'
WITH NORECOVERY,
MOVE '<Primary FileGroup>' TO
'C:\Program Files\Microsoft SQL Server\MSSQL10.y\MSSQL\DATA\<PrimaryDB_File>.mdf',
MOVE '<Logical File name of LogFile>'
TO 'C:\Program Files\Microsoft SQL Server\MSSQL10.y\MSSQL\DATA\<Log_File>.ldf'
Where x = Instance ID of the Principal server and y = Instance ID of the Mirror server
-- On the Mirrored Site, drop the existing mirroring endpoint and create the database mirroring endpoint as follows
DROP ENDPOINT <endpoint_mirroring>
CREATE ENDPOINT <endpoint_mirroring>
STATE = STARTED
AS TCP ( LISTENER_PORT = 7023 )
FOR DATABASE_MIRRORING (ROLE=PARTNER);
GO
-- Verify that the endpoint is properly configured and is in state "STARTED"
select * from sys.database_mirroring_endpoints
--On the primary site, drop the existing mirroring endpoint and create the database mirroring endpoint as follows
DROP ENDPOINT <endpoint_mirroring>
CREATE ENDPOINT <endpoint_mirroring>
STATE = STARTED
AS TCP ( LISTENER_PORT = 7022 )
FOR DATABASE_MIRRORING (ROLE=PARTNER);
GO
-- Verify that the endpoint is properly configured and is in state "STARTED"
select * from sys.database_mirroring_endpoints
-- First set the principal server as partner on the mirror database
ALTER DATABASE <User Database> SET PARTNER = 'TCP://<FQDN of the Principal Server>:7022'
-- Now set the Mirror server as partner on the principal database
ALTER DATABASE <User Database> SET PARTNER = 'TCP://<FQDN of the Mirror Server>:7023'





Database mirroring has now been successfully set up with Transparent Data Encryption in SQL Server 2008.

You can test and verify that failover works fine using either SSMS or T-SQL.
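For example, a manual failover via T-SQL is a single statement run on the current principal (the database name placeholder matches the setup above):

```sql
-- Manual failover: run in master on the current principal.
USE master;
GO
ALTER DATABASE <User Database> SET PARTNER FAILOVER;
```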