Tuesday, 31 January 2012

Global Vs Local Temporary Table

 

Temporary tables are an important tool for SQL developers. In this article we focus on local and global temporary tables.

A global temporary table is visible to any session while in existence. It lasts until all users that are referencing the table disconnect.

A global temporary table is created by prefixing the table name with ##.

CREATE TABLE ##sample_Table
      (emp_code DECIMAL(10) NOT NULL,
       emp_name VARCHAR(50) NOT NULL)

A local temporary table, like #sample_Table below, is visible only to the session that created it and to child batches of that session (for example, dynamic SQL run through sp_executesql). It is dropped automatically when the session disconnects.

A local temporary table is created by prefixing the table name with #.

CREATE TABLE #sample_Table
      (emp_code DECIMAL(10) NOT NULL,
       emp_name VARCHAR(50) NOT NULL)

 

If a local temporary table is created inside a BEGIN TRANSACTION and COMMIT TRANSACTION/ROLLBACK TRANSACTION block, its creation is part of the transaction: a ROLLBACK removes the table along with the rest of the transaction's work, as sketched below.
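A minimal sketch of that behaviour (the table name #tran_scope is made up for illustration):

BEGIN TRANSACTION

CREATE TABLE #tran_scope (id INT NOT NULL)  -- created inside the transaction
INSERT INTO #tran_scope VALUES (1)

ROLLBACK TRANSACTION

-- The CREATE TABLE was rolled back with everything else,
-- so this SELECT fails: #tran_scope no longer exists.
SELECT * FROM #tran_scope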

I hope the article is informative. Thank you for giving your valuable time.

Posted by: MR. JOYDEEP DAS

 

 

Monday, 30 January 2012

DECIMAL (Precision, Scale)

 

 

There are some common mistakes in configuring the PRECISION and SCALE of the DECIMAL data type. I sometimes see people pick whatever numbers come to mind, without thinking about data storage. If we have a confusion, we should clear it up, however tiny it is.

 

In my article I am going to illustrate this part. Hope you like it.

 

Numeric data types have fixed precision and scale.

 

DECIMAL (p, s)

 

 PRECISION:

 

The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

When maximum precision is used, valid values are from - 10^38 +1 through 10^38 - 1.

 

SCALE:

 

The maximum number of decimal digits that can be stored to the right of the decimal point.

The default scale is 0. Scale can range from 0 up to the precision.

 

The storage size varies based on precision, as described below:

 

Precision    Storage bytes
1 - 9        5
10 - 19      9
20 - 28      13
29 - 38      17
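A quick sketch of how precision and scale behave in practice (the variable name is made up for illustration):

-- DECIMAL(5, 2): at most 5 digits in total, 2 of them to the right of the decimal point
DECLARE @amount DECIMAL(5, 2)
SET @amount = 123.456

SELECT @amount  -- returns 123.46 (rounded to scale 2)

-- SET @amount = 12345.6  -- would fail with an arithmetic overflow error:
--                           5 digits are needed left of the point, but only 3 are allowed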

 

I hope this is informative. Thank you for providing your valuable time.

 

Posted by: MR. JOYDEEP DAS

 

PRIMARY Vs UNIQUE key

This article is for beginners. I find there are a lot of misconceptions about the PRIMARY key and the UNIQUE key, and here I want to clarify those points.

1.    The PRIMARY key and the UNIQUE key both enforce uniqueness on their columns.

2.    A PRIMARY key does not allow any NULL values, while a UNIQUE key allows exactly one NULL value.

3.    So PRIMARY KEY = UNIQUE + NOT NULL.

4.    So UNIQUE KEY = UNIQUE + one NULL value allowed.

5.    By default a PRIMARY key creates a CLUSTERED index.

6.    By default a UNIQUE key creates a NONCLUSTERED index (see the sketch below).
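A minimal sketch of these points (the table and columns are hypothetical):

CREATE TABLE Employee
      (emp_code  INT         NOT NULL PRIMARY KEY,  -- clustered index by default
       emp_email VARCHAR(50) NULL     UNIQUE)       -- nonclustered index by default

INSERT INTO Employee VALUES (1, NULL)  -- OK: a UNIQUE key allows one NULL
INSERT INTO Employee VALUES (2, NULL)  -- fails: violation of the UNIQUE KEY constraint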

I think the article is informative. Thank you for providing your valuable time to it.

 

Posted by: MR. JOYDEEP DAS

 

Authentication mode of SQL Server

I am collecting some interesting facts related to Authentication mode of SQL server.

 

When we install SQL Server, the setup asks about the authentication mode. We have to choose between two options:

 

1.    Windows Authentication mode.

2.    Windows and SQL Authentication mode.

 

In my article, I am going to provide some short description related to it.

 

Windows Authentication mode

This is the default option during setup because it's the most secure.  Using this method allows you to control all of your database security through Windows and Active Directory.  This method allows you to place users into groups if you so desire, and it means you do not have to manage passwords for your users.  It also keeps your environment secure by either eliminating generic accounts, or at least allowing you to keep tighter control over their passwords.

This scenario is simple.  You've got an application that needs to connect to SQL Server 2008, and you don't want the username and password sitting out there in a file somewhere.  So the easiest way to protect against this is to use a Windows account with a password you control and run the application under that user.  This way you can also rotate the password as you see fit and there's nothing out on the server that reveals it.  Having passwords sitting out in .ini files or web.config files, etc., is one of the worst management nightmares in security, because any of the developers or anyone else can just go out and get the password and connect under the generic account.  And this makes chasing down problems much harder.

Windows and SQL Authentication mode

This one seems redundant, but it's really not.  In the old days, you had 3 choices for security; you had Windows, SQL, and Both.  To a degree that made sense, but in retrospect it really doesn't.  The logic here is simple.  Since Windows is more secure, there's no reason to disallow it.  All you're really doing here is allowing SQL authentication along with the most secure method.  So you're always going to be able to connect with Windows authentication, the only thing we're arguing about is whether or not you're going to allow SQL as well.

If Kerberos is misconfigured, Windows authentication can run into trouble; SQL authentication does not depend on Kerberos, so you can still connect to the database to troubleshoot the server, or merely continue working until you get the Kerberos problem worked out.  So having a SQL backdoor can be useful in a very practical sense.  And the really cool thing about using SQL authentication in SQL Server 2005 and above is that you can take advantage of your Windows security model and force your SQL passwords to adhere to your Windows password policies.  So you can force SQL passwords to expire, and to honor your Windows password complexity requirements.

You can also have external customers whom you don't want to have Windows accounts.  In this case, SQL authentication is a good choice too.  And it doesn't have to be external customers.  You could just have non-Windows domains and Windows authentication simply isn't an option.


While Windows authentication is the most secure, it's simply not practical as your sole authentication method.  Don't get me wrong though.  You'll still want to insist on using Windows every chance you get.  You should work really hard to insist on running applications and websites through specific Windows accounts.  Your developers will usually fight you on this because they have a generic SQL account and they want to use it.  They love being able to bypass their personal accounts and connect to the production database to do things.  So while enabling SQL authentication is always a good idea, it's best to keep it restricted to admin usage and only hand out SQL accounts if there's absolutely no other way around a connection problem.  It's also important to note that your decision during install isn't final.  You can easily change the authentication mode at any time.  You have to restart the SQL Server service once you do, though.
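You can check the current mode from T-SQL with the documented SERVERPROPERTY function:

SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly
-- 1 = Windows Authentication mode
-- 0 = Windows and SQL Authentication (mixed) mode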

 

 

Posted by: MR. JOYDEEP DAS

 

Wednesday, 25 January 2012

ORDER BY clause and PERFORMANCE of Query

 

Some of my friends told me that using the ORDER BY clause decreases the performance of SQL statements. Do you believe it? I do: under certain conditions the ORDER BY clause really does decrease query performance. But you have to understand the situations in which to use, or not use, the ORDER BY clause.

In my article, I am trying to illustrate those points.

In Microsoft SQL Server 2005/2008, you create a table. The table contains a column of a non-Unicode data type. Additionally, the collation of the column is an SQL tertiary collation. Then, you create an index that is not unique on the column. You run a query against the table by specifying the ORDER BY clause on the column.

In this scenario, the performance of the query is slow. Additionally, if the query is executed from a cursor, the cursor is converted implicitly to a keyset cursor.   

Now I will illustrate the cause of the problem.

This problem occurs because the query optimizer does not use the index. In the execution plan of the query, the SORT operator appears first. Then, the Compute Scalar operator that contains the TERTIARY_WEIGHTS function appears.

Some other things we must consider:

In a tertiary collation definition, certain characters are considered equal for comparison. However, the weights of these characters are considered different when you order these characters. For example, a string comparison is not case-sensitive. However, when you use the ORDER BY clause to order these strings, uppercase characters appear before lowercase characters.

The following data types are non-Unicode data types:

  • char
  • text
  • varchar

The following data types are Unicode data types:

  • nchar
  • ntext
  • nvarchar

To perform the SORT operation on non-Unicode string expressions that are defined in an SQL tertiary collation, SQL Server 2005 requires a binary string of weights for each character in the string expression.
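A minimal sketch that reproduces the scenario (the table is hypothetical; SQL_Latin1_General_Pref_CP1_CI_AS is one of the SQL 'Pref' collations, which are tertiary):

-- Non-Unicode column with an SQL tertiary collation
CREATE TABLE SortDemo
      (name VARCHAR(50) COLLATE SQL_Latin1_General_Pref_CP1_CI_AS NOT NULL)

CREATE NONCLUSTERED INDEX ix_SortDemo_name ON SortDemo (name)

-- The optimizer does not use ix_SortDemo_name for this ORDER BY; the plan
-- shows a SORT operator and a Compute Scalar containing TERTIARY_WEIGHTS.
SELECT name
FROM   SortDemo
ORDER  BY name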

Posted by: MR. JOYDEEP DAS

 

 

Tuesday, 24 January 2012

Delete and Truncate


 
A common question is: what are the differences between the DELETE and TRUNCATE statements in SQL Server?

Yes, there are a lot of differences between them, but I would like to mention the most important ones in this short article.

1.    First, let us understand what type of command each is: DELETE is a DML command and TRUNCATE is a DDL command.

2.    TRUNCATE takes no WHERE condition, so it affects the entire table.

3.    TRUNCATE does not fire any DELETE trigger on the table, because it does not delete data row by row.

4.    TRUNCATE drops the object's statistics and markers such as the high-water mark, frees the extents, and leaves the object truly empty: zero pages are left in the table.

5.    If the table contains an identity column, the counter for that column is reset to the seed value defined for the column (see the sketch below).

6.    Restrictions: you cannot use TRUNCATE on tables that:

a.    Are referenced by a FOREIGN KEY constraint

b.    Participate in an indexed view.

c.    Are published by using transactional replication or merge replication.

Now, what is the advantage of TRUNCATE over DELETE?

1.    It is very fast.

2.    TRUNCATE TABLE removes the data by deallocating the data pages used to store the table data.
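A minimal sketch of the identity reset and the difference in behaviour (the table is hypothetical):

CREATE TABLE DemoRows (id INT IDENTITY(1,1), val CHAR(1))

INSERT INTO DemoRows (val) VALUES ('a'), ('b')

DELETE FROM DemoRows                     -- removes rows one by one; identity NOT reset
INSERT INTO DemoRows (val) VALUES ('c')  -- id = 3

TRUNCATE TABLE DemoRows                  -- deallocates the pages; identity reset to seed
INSERT INTO DemoRows (val) VALUES ('d')  -- id = 1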

 

I hope the article is informative. Thank you for giving your valuable time to it.

 

Posted by: MR. JOYDEEP DAS

 


Thursday, 19 January 2012

Comparing Two Image Fields in a Table Object

 
In this article my main objective is to compare two image fields (an old image and a new image) in a table object. For this I use the extended stored procedure xp_md5 from the master database.

I provide the complete scenario for a test environment; it is up to you to adapt it to your own situation.

Step-1

Extract or build the DLL file xp_md5.dll and place it in C:\Program Files\Microsoft SQL Server\MSSQL\Binn (or wherever appropriate).

A precompiled DLL is in the Release directory of the source distribution.

Step-2

Create an Extended Stored Procedure called xp_md5 in the "master" database.

Right-click "Extended Stored Procedures" under the master database in the Server Manager and click "New Extended Stored Procedure...". Enter xp_md5 for the "Name" and for the "Path", enter the full path to xp_md5.dll.

 

Adding the extended stored procedure:

 

USE master;

EXEC sp_addextendedproc 'xp_md5', 'xp_md5.dll'

Step-3

Create a user-defined function

-- Wrapper around xp_md5: returns the MD5 hash of @data as a 32-character string.
-- Passing @len = -1 hashes the entire value.
CREATE FUNCTION [dbo].[fn_md5x] (@data IMAGE, @len INT = -1)
RETURNS CHAR(32) AS
BEGIN
    DECLARE @hash CHAR(32)
    EXEC master.dbo.xp_md5 @data, @len, @hash OUTPUT
    RETURN @hash
END

Step-4

Compare old and new Image

SELECT *

FROM   tablename

WHERE  dbo.fn_md5x(oldimage, datalength(oldimage))

               <> dbo.fn_md5x(newimage, datalength(newimage))
 
 

I think the article is quite informative. Thank you for giving time to it.

 

 

Posted by: MR. JOYDEEP DAS


Data Compression in SQL Server 2008

 

 

In this article I try to illustrate the concepts of data compression in SQL Server 2008. It covers only the concepts behind compression; for a detailed level of learning you should follow MSDN or other Microsoft notes.

 

Where can we use data compression?

 

Data compression is available only in SQL Server 2008, and not in all of its editions. It requires:
 

1.    SQL Server 2008 Enterprise Edition

2.    SQL Server 2008 Developer Edition

 

What is the purpose of Data Compression?

 

Data compression has two purposes:

1.    It reduces disk usage by decreasing the size of the database.

2.    It improves I/O performance.

However, implementing data compression incurs extra CPU cost.

 

Types of compression:

 

SQL Server provides two types of data compression:

 

1.    PAGE Compression

2.    ROW Compression

 

ROW compression is the lower-level compression. It stores fixed-length character strings in a variable-length format, not storing the blank padding characters. NULL and 0 values across all data types are optimized and take no bytes.

 

PAGE compression is the higher-level compression, and like row compression it can be applied per table partition or index partition. Page compression uses two further techniques:

 

A.    Prefix compression

B.    Dictionary compression

 

Prefix compression works on common value patterns across all rows on the page. It looks for a common pattern at the beginning of the values of a given column across all rows on the page.

 

Dictionary compression works on exact value matches. After prefix compression, it looks for exact value matches across all the columns and rows on each page.
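Compression is enabled per object (or per partition) with the DATA_COMPRESSION option; a minimal sketch on a hypothetical table:

-- Create a table with row compression
CREATE TABLE SalesArchive
      (order_id INT       NOT NULL,
       notes    CHAR(200) NULL)
WITH (DATA_COMPRESSION = ROW)

-- Rebuild an existing table with page compression instead
ALTER TABLE SalesArchive
REBUILD WITH (DATA_COMPRESSION = PAGE)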

 

 

This simple SQL statement shows which objects are compressed and which compression is used:

 

SELECT *

FROM sys.partitions

WHERE data_compression_desc != 'NONE'

 

Planning:

There is a lot of work involved in planning a compression strategy: for example, estimating the space savings, the application workload, the workspace requirements, and above all deciding what to compress.
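For the space-saving estimate, SQL Server 2008 ships the documented procedure sp_estimate_data_compression_savings; for example, against the hypothetical table above:

-- Estimate how much space page compression would save
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'SalesArchive',
     @index_id         = NULL,   -- NULL = all indexes
     @partition_number = NULL,   -- NULL = all partitions
     @data_compression = 'PAGE'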

Disadvantages:

One of the biggest disadvantages of data compression is that a database with ROW/PAGE compression cannot be restored, attached, or otherwise used on editions that do not support compression.

I think the article is quite informative. Thank you for providing time to it.

 

 

Posted by: MR. JOYDEEP DAS

 

 

 

Wednesday, 18 January 2012

SQL Tips with OUTPUT options

 

This is a simple tip about the basic INSERT and DELETE statements.

When we insert or delete records in a table, SQL Server shows the number of rows affected. But suppose we want to know the actual values of the rows affected by the INSERT or DELETE statement.

As you know, there are two virtual tables named inserted and deleted that hold the information related to an insert or delete. Using them, a few simple steps show what is actually going on when INSERT and DELETE statements fire.

 

Step-1 [ Just Create the table objects ]

IF NOT EXISTS(SELECT * FROM sys.sysobjects WHERE xtype='U' AND name='TestTable')
BEGIN
    CREATE TABLE TestTable
          (Roll    DECIMAL(1)  NOT NULL IDENTITY(1,1) PRIMARY KEY,
           Sname   VARCHAR(50) NOT NULL,
           Class   DECIMAL(1)  NULL,
           Section VARCHAR(1))
END
GO

Step-2 [ Now fire some insert statement like this ]

-- Insert Statement with OUTPUT Options

INSERT INTO TestTable(Sname, Class, Section)   

OUTPUT inserted.roll, inserted.sname, inserted.Class, inserted.Section

VALUES ('Tufan', 5, 'A'),('Joydeep',1,'A'),('Palash',1,'A'),('Sangram',1,'A')

It shows the rows it inserted, not just the number of rows affected:

roll   sname     Class   Section
1      Tufan     5       A
2      Joydeep   1       A
3      Palash    1       A
4      Sangram   1       A

 

Step-3 [ Now fire the delete statement like this ]

DELETE TestTable  

OUTPUT deleted.roll, deleted.sname, deleted.Class, deleted.Section

WHERE Roll=1

It shows the rows it deleted, not just the number of rows affected:

 

roll   sname   Class   Section
1      Tufan   5       A
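OUTPUT can also capture the affected rows into a table variable for later use; a minimal sketch continuing the example (the variable name is made up):

DECLARE @removed TABLE
      (Roll  DECIMAL(1),
       Sname VARCHAR(50))

DELETE TestTable
OUTPUT deleted.Roll, deleted.Sname INTO @removed
WHERE  Roll = 2                 -- removes Joydeep

SELECT * FROM @removed          -- shows the row the DELETE removed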

 

I think this article is quite informative. Thank you for giving time to it.

 

Posted by: MR. JOYDEEP DAS

 

 

 

 

Tuesday, 17 January 2012

Physical defragmentation NOT Needed After SQL Server defragmentation

 

 

Many of us run physical defragmentation to improve SQL Server performance, and think physical defragmentation is still needed after SQL Server level defragmentation. This is an absolutely wrong concept, and to illustrate my point I have collected some facts.

 

 

Physical disk fragmentation

 

Physical disk fragmentation is likely what comes to mind when fragmentation is first discussed. Physical fragmentation is a side effect of how hard drives and Windows work. It is common knowledge that regular disk defragmentation is required to achieve optimal performance from your PC. Windows even includes a basic defragmentation utility.

Physical fragmentation slows down your PC because reading data is interrupted by head seek delay. Windows fits files into free space, often breaking the file into segments stored apart from one another. A hard drive's head relocates to read each individual segment. As it moves to each segment the head 'seeks' - often at a cost of 3-4 times the time it takes to read the segment itself. Physical fragmentation primarily affects desktop or laptop PCs containing one hard drive. The single drive must sequentially gather data – so on a fragmented disk it seeks, reads, seeks, reads - these 4 operations are performed one after another. Defragmented, the operation ends up as seek, read, read. We reduce the total cost of 24ms to 15ms in our simple example.

Physical defragmentation products such as Windows defrag, Power Defrag™, Page Defrag™ (another Microsoft tool), or the granddaddy of them all, Diskeeper 2011™ work very well when repairing segmented files.

 

Diskeeper's technology is licensed to Microsoft as the defragmentation tool internal to Windows. In fact, Diskeeper's latest innovations bring physical defragmentation capabilities to a completely new level. All of these products reorder the data on your disk, consolidating files into fewer segments to minimize "head seeks" – providing faster boot times, quicker file reads, and a more responsive system overall.

 

SQL fragmentation and Physical fragmentation
 

However, physical disk fragmentation is not the same as SQL Server defragmentation! SQL Server is different. SQL Servers use advanced storage systems with multiple drives working in tandem, changing the way files are read. Physical fragmentation is something solved with hardware – not with defragmentation scripts or tools.

The fault-tolerance in database storage overcomes the vast majority of physical disk fragmentations' impact. Best practices universally prescribe multi-drive storage subsystems for production SQL Servers. Most SQL Servers use multi-drive storage such as RAID arrays, SANs, and NAS devices; there are always multiple drives acting in tandem. Hard disk controllers supporting drive arrays are aware of the alternate seek/read dynamic and tailor communications with the array for maximum I/O.

As a result, files are distributed across many drives inherently becoming segmented. Working in tandem, however, allows one drive to seek while the others read. With the common configuration of 5 drives, a seek delay of 9ms per drive allows 2 drives reading for 3ms with no seek delay impact at all. Data storage drives are generally much faster than workstation drives, so seek times of 4ms and read times of 1.5ms are not unusual.

There are many DBAs who run a traditional physical defragmentation program in tandem with their intelligent drive controller which results in limited improvement. Physically defragmenting a file in an array implicitly leaves the file segmented across the virtual unison of tandem drives. It's by design. The goal is to gain the most performance while incurring the least overhead – so don't run physical defrags if they slow the storage by 50% while running, and ultimately improve read speeds 1-2%.

 

The most important concept to understand is that the controller, physical defragmentation programs, and multi-drive arrays are unaware of what SQL Server is doing with the file data internally. By focusing on SQL Server's representation of data - how SQL Server has laid out the database itself, how full each page is, and how effectively we're utilizing available SQL Server resources - we can optimize to the 'next level' of SQL Server performance, solidly trumping any benefit of physical defragmentation by orders of magnitude. In a nutshell, SQL Server's performance can be most improved by focusing on its internals. In fact, once you start focusing on defragmentation at the SQL Server level - whether with manual defragmentation or with the automated defragmentation provided with SQL defrag manager - you may decide that physical defragmentation is no longer needed!

 
 

Posted by: MR. JOYDEEP DAS

 

Friday, 13 January 2012

Related to Extended Stored Procedure (xp_)

Related to extended stored procedures, I have found some very good articles. I think these published resources can help you build solid knowledge of the topic.
If you've worked with versions of SQL Server prior to SQL Server 2005, you are probably familiar with extended stored procedures. These stored procedures reside in the master database and have names that begin with xp_. Extended stored procedures are invoked and managed similarly to regular stored procedures. You can grant and revoke permissions on extended stored procedures as you do for normal stored procedures. However, although extended stored procedures reside in the master database, unlike system procedures the procedure name has to be fully qualified with the master database name when it is invoked from a database other than master, as in the following example:
exec master..xp_fixeddrives
Extended stored procedures are not built with T-SQL commands; instead, they map to a function stored in a DLL. Historically, extended stored procedures were the mechanism available to extend SQL Server functionality. However, the introduction of CLR procedures provides a much easier, safer way to extend the functionality of SQL Server 2008.
Extended stored procedures are typically written in Microsoft C or Visual C++, using the Microsoft Extended Stored Procedure API, and coding them can be quite complex. In addition, extended stored procedures run under the same security context as SQL Server and within the same address space. A poorly written extended stored procedure could bring down the SQL Server service. CLR procedures, in contrast, are written in .NET code that is type safe and runs within the App domain boundary so it cannot access random SQL Server memory locations. In other words, it is much easier and safer to create and deploy CLR procedures than extended stored procedures.
Because of the unsafe nature of extended stored procedures, and the greater security and capabilities of CLR stored procedures, extended stored procedures are a feature that will very likely be removed in some future version of Microsoft SQL Server. For new development efforts, you should use CLR procedures instead of extended stored procedures. In addition, you should make plans to convert any existing applications that currently use extended stored procedures to use CLR procedures instead.
Adding Extended Stored Procedures to SQL Server
If you happen to have a DLL that contains one or more extended stored procedures you need to add to SQL Server, you can use the sp_addextendedproc system stored procedure. Only SQL Server system administrators can add extended stored procedures to SQL Server. The syntax is as follows:
sp_addextendedproc [ @functname = ] 'procedure' , 
                   [ @dllname = ] 'dll'              
Extended stored procedures are added only in the master database. The sp_addextendedproc procedure adds an entry for the extended stored procedure to the system catalogs and registers the DLL with SQL Server. You must provide the complete path for the DLL when registering it with SQL Server.
To remove an extended procedure from SQL Server, you use sp_dropextendedproc:
sp_dropextendedproc [ @functname = ] 'procedure'
Because extended stored procedure DLLs and SQL Server share the same address space, poorly written extended procedure code can adversely affect SQL Server functioning. Any memory access violations or exceptions thrown by an extended stored procedure could possibly damage SQL Server data areas. For this reason, it is strongly recommended that CLR procedures be considered as an alternative to extended stored procedures. If there is some compelling reason to use extended stored procedures, they should be thoroughly tested and verified before they are installed.
Obtaining Information on Extended Stored Procedures
To obtain information on the extended stored procedures in SQL Server, you use sp_helpextendedproc as follows:
sp_helpextendedproc [ [@funcname = ] 'procedure' ]
If the procedure name is specified, sp_helpextendedproc lists the procedure name along with the DLL invoked when the extended stored procedure is executed. If no procedure name is passed in, sp_helpextendedproc lists all extended stored procedures defined in SQL Server and their associated DLLs.
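For example (the DLL name returned depends on your server):

-- Show the DLL behind a single extended stored procedure
EXEC sp_helpextendedproc 'xp_cmdshell'

-- List all registered extended stored procedures and their DLLs
EXEC sp_helpextendedproc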
Extended Stored Procedures Provided with SQL Server
Most of the extended stored procedures that ship with SQL Server are undocumented. All extended stored procedures (or rather, the references to them) are stored in the master database. You can display them in SSMS under the master database. To do so, you open the Programmability folder for the master database and then open the Extended Stored Procedures folder. The provided extended stored procedures are listed in the System Extended Stored Procedures folder.
If you plan to use an undocumented extended stored procedure, be careful. First, you have to find out what it does and what parameters it takes. You should also be aware that Microsoft does not support the use of undocumented extended stored procedures. Moreover, an undocumented procedure might not be included in a later version of SQL Server, or if it is included, it might behave differently than it does now.
The following table lists the general categories of extended stored procedures.
Extended Stored Procedures Categories
General extended procedures: Provide general functionality. Perhaps the most useful is xp_cmdshell, which executes external programs and returns the output from them as a result set.

SQL Mail extended procedures: Enable you to perform email operations from within SQL Server.

SQL Server Profiler extended procedures: Are used by SQL Server Profiler. They can also be used directly, for instance, to create a trace queue and start the trace from within a stored procedure.

OLE automation procedures: Allow SQL Server to create and use OLE automation objects.

API system stored procedures: Are undocumented extended stored procedures used by the API libraries. The server cursor functionality, for instance, is implemented as a set of extended stored procedures.
 xp_cmdshell
One of the most useful, and potentially dangerous, extended stored procedures provided with SQL Server 2008 is xp_cmdshell. xp_cmdshell can execute any operating system command or program available on the SQL Server system, as long as it is a console program that doesn't require user input. xp_cmdshell accepts a varchar(8000) (or nvarchar(4000)) value as the command string to be executed, and it returns the results of the command as a single nvarchar(255) column. The full syntax of xp_cmdshell is as follows:
xp_cmdshell { 'command_string' } [ , no_output ]
If the no_output option is specified, the results from the command are not displayed. The following example uses xp_cmdshell to list the files in a directory on the SQL Server computer's hard disk:
EXEC xp_cmdshell 'DIR c:\*.*'
xp_cmdshell runs synchronously. Control is not returned to the SQL Server user session until the shell command completes. This is why you have to ensure that the shell command invoked via xp_cmdshell does not prompt for user input. Commands invoked via xp_cmdshell do not run interactively, so there is no way to respond to the user input prompt. The SQL Server session waits indefinitely for a command invoked via xp_cmdshell to return.
After SQL Server passes off the xp_cmdshell command to the operating system, SQL Server cannot interact with the command. If the command requires user input, the process waits indefinitely, and it usually doesn't go away without a fight. Killing the process in SQL Server usually just leaves it in a KILLED/ROLLBACK state. Closing the session that invoked the xp_cmdshell statement doesn't help either. Sometimes, you might have to stop and restart SQL Server to make the process finally go away. Alternatively, you may be able to use Task Manager on the system where SQL Server is running to identify the system process that corresponds to the process invoked by xp_cmdshell and end the process.
If xp_cmdshell is invoked from another database, it has to be fully qualified as master..xp_cmdshell. Unlike with system procedures, SQL Server doesn't automatically look for extended stored procedures in the master database.
Because of the potentially dangerous nature of xp_cmdshell (it essentially allows a user to run operating system–level commands on the SQL Server machine), it is disabled by default. To enable xp_cmdshell, you must run the following commands:
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
-- To enable the feature.
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
As an additional security measure in SQL Server 2008, by default, permission to execute xp_cmdshell is limited to users with CONTROL SERVER permission. The Windows process spawned by xp_cmdshell runs within the security context of the account under which the SQL Server service is running. Essentially, it has the same security rights as the SQL Server service account.
When xp_cmdshell is invoked by a user who is not a member of the sysadmin fixed server role, it fails unless a proxy account has been set up. A proxy account is a Windows account that a system administrator defines and sets a security context for within the Windows environment. When a user who is not a member of the sysadmin group runs xp_cmdshell, the commands are run within the security context of the defined proxy account.
The proxy account for xp_cmdshell can be created by executing sp_xp_cmdshell_proxy_account. The syntax of this command is as follows:
sp_xp_cmdshell_proxy_account [ NULL | { 'account_name' , 'password' } ]
For example, the following command creates a proxy credential for the Windows domain user Developer\tom that has the Windows password ss2k5Unl:
sp_xp_cmdshell_proxy_account 'Developer\tom' , 'ss2k5Unl'
If NULL is passed as account_name, the proxy credential is deleted.
Because of the potential havoc that could be wreaked on your database server if xp_cmdshell got into the wrong hands, it is recommended that the capability to run xp_cmdshell be left disabled. If you must use xp_cmdshell, be very careful about who has access to it by limiting it to only those with sysadmin permissions if at all possible. If for some reason xp_cmdshell must be made available to all users, be sure that the permissions granted to the proxy account are restricted to the minimum permissions required to perform the commands that need to be invoked via xp_cmdshell.
Posted by: MR. JOYDEEP DAS