Thursday 31 March 2016

How to release or remove a lock on a table in SQL Server



We can release or remove locks on SQL Server 2008, 2012 and 2014 objects such as tables and views by killing the session (process id) that is holding the locks.

For example, let us assume there is a table named tblStudent, and a user or application has executed the following SQL statements:

BEGIN TRANSACTION
DELETE TOP(1) FROM tblStudent

but has not committed or rolled back the transaction.

If we then try to select the records without NOLOCK,

SELECT * FROM tblStudent

the query will be blocked and will not return, since the table tblStudent is locked by the open transaction.
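As a quick illustration (a sketch using the example table above), adding the NOLOCK table hint lets the read proceed immediately, at the cost of a dirty read that already reflects the uncommitted DELETE:

-- Dirty read: returns rows without waiting for the open transaction to finish
SELECT * FROM tblStudent WITH (NOLOCK)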

Solution:

Step 1: Get the request session id by executing the following SQL statement:

SELECT
    OBJECT_NAME(P.object_id) AS TableName,
    Resource_type,
    request_session_id
FROM
    sys.dm_tran_locks L
    join sys.partitions P
ON L.resource_associated_entity_id = p.hobt_id
WHERE   OBJECT_NAME(P.object_id) = 'tblStudent'

We will get the output something like this:

TableName     Resource_type   request_session_id
tblStudent    RID             54
tblStudent    RID             54
tblStudent    RID             55
tblStudent    PAGE            55
tblStudent    PAGE            54
tblStudent    RID             54
tblStudent    RID             54
tblStudent    RID             54
tblStudent    RID             54
tblStudent    RID             54
tblStudent    RID             54


Step 2: Kill the request session id that is holding the lock on the table tblStudent. Assume its request_session_id is 54. Execute the following query:

KILL 54
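Before killing a session it is worth confirming who owns it and what it last executed. A hedged sketch (session id 54 is the example value from the lock query above):

-- Inspect the session that holds the locks before killing it
SELECT s.session_id, s.login_name, s.host_name, s.program_name, t.text AS last_sql_text
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON c.session_id = s.session_id
CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) t
WHERE s.session_id = 54

Note that KILL rolls back any open transaction owned by the session, so the uncommitted DELETE above is undone.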


We can find all blocking and locking on database tables with the following script:

SELECT
     blocking_session_id AS BlockingSessionID,
     session_id AS VictimSessionID,
    
     (SELECT [text] FROM sys.sysprocesses
      CROSS APPLY sys.dm_exec_sql_text([sql_handle])
      WHERE spid = blocking_session_id) AS BlockingQuery,
    
     [text] AS VictimQuery,
     wait_time/1000 AS WaitDurationSecond,
     wait_type AS WaitType,
     percent_complete AS BlockingQueryCompletePercent
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text([sql_handle])
WHERE blocking_session_id > 0

Sample output:

BlockingSessionID: 54
VictimSessionID: 60
BlockingQuery: BEGIN TRANSACTION  DELETE TOP(1) FROM tblStudent
VictimQuery: SELECT * FROM tblStudent
WaitDurationSecond: 655
WaitType: LCK_M_S
BlockingQueryCompletePercent: 0


Wednesday 30 March 2016

SQL Server Profiler: Trace events from one database



I always seem to forget how to configure SQL Server Profiler to trace only a specific database, hence this post.
  • Open SQL Server Profiler
  • Click on New Trace.
  • Connect to the database server you want to trace.
  • The Trace Properties window is shown. Go to the Events Selection tab.
  • On the Events Selection tab select Show all columns.
  • Now you have an extra column DatabaseName.
  • Click on Column Filters… to add a filter. Specify the name of the database in the Like section.
  • That’s it!
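If a trace was already captured to a file without the filter, it can still be restricted to one database afterwards with fn_trace_gettable. A hedged sketch; the file path and database name are placeholders:

-- Read an existing trace file and keep only the events from one database
SELECT TextData, ApplicationName, LoginName, StartTime, Duration
FROM fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT)
WHERE DatabaseName = N'MyDatabase'
ORDER BY StartTime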

Thursday 17 March 2016

Explain XP_READERRORLOG or SP_READERRORLOG

Xp_readerrorlog or sp_readerrorlog has 7 parameters.
Xp_readerrorlog <Log_FileNo>, <Log_Type>, <Keyword-1>, <Keyword-2>, <Date1>, <Date2>, <'Asc'/'Desc'>
Log_FileNo: -1: All logs
0: Current log file
1: Archived log file number 1, and so on
Log_Type: 1: SQL Server
2: SQL Agent
KeyWord-1: Search for the keyword
KeyWord-2: Search for combination of Keyword 1 and Keyword 2
Date1 and Date2: Retrieves data between these two dates
‘Asc’/’Desc’: Order the data
Examples:
EXEC Xp_readerrorlog 0 -- Current SQL Server log
EXEC Xp_readerrorlog 0, 1 -- Current SQL Server log
EXEC Xp_readerrorlog 0, 2 -- Current SQL Agent log
EXEC Xp_readerrorlog -1 -- Entire log file
EXEC Xp_readerrorlog 0, 1, 'dbcc' -- Current SQL Server log with dbcc in the string
EXEC Xp_readerrorlog 1, 1, 'dbcc', 'error' -- Archived SQL Server log 1 with dbcc and error in the string
EXEC Xp_readerrorlog -1, 1, 'dbcc', 'error', '2012-02-21', '2012-02-22', 'desc'
-- Searches the entire SQL Server log for the strings 'dbcc' and 'error' between the given dates and returns the results in descending order.
Note: to increase the number of log files, add a new registry value "NumErrorLogs" (REG_DWORD) under the location below.
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSSQL.X\MSSQLServer\
By default, this value is absent. Set it to the number of logs that you want to maintain.
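For reference, the same registry value can also be set from T-SQL with xp_instance_regwrite, which resolves the instance-specific part of the key automatically. A hedged sketch; 12 is just an example number of logs:

-- Keep 12 error log files instead of the default 6
EXEC xp_instance_regwrite
     N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs',
     REG_DWORD,
     12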

Question and Answer

Q. What port do you need to open on your server firewall to enable named pipes connections?
Ans:
Port 445. Named pipes communicate across TCP port 445.
Q. What are the different log files and how do you access them?
Ans:
  • SQL Server Error Log: The Error Log, the most important log file, is used to troubleshoot system problems. SQL Server retains backups of the previous six logs, naming each archived log file sequentially. The current error log file is named ERRORLOG and is located in the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory. To view the error log, open SSMS, expand a server node, expand Management, and click SQL Server Logs.
  • SQL Server Agent Log: SQL Server’s job scheduling subsystem, SQL Server Agent, maintains a set of log files with warning and error messages about the jobs it has run, written to the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory. SQL Server will maintain up to nine SQL Server Agent error log files. The current log file is named SQLAGENT.OUT, whereas archived files are numbered sequentially. You can view SQL Server Agent logs by using SQL Server Management Studio (SSMS). Expand a server node, expand Management, click SQL Server Logs, and select the check box for SQL Server Agent.
  • Windows Event Log: An important source of information for troubleshooting SQL Server errors, the Windows Event log contains three useful logs. The application log records events in SQL Server and SQL Server Agent and can be used by SQL Server Integration Services (SSIS) packages. The security log records authentication information, and the system log records service startup and shutdown information. To view the Windows Event log, go to Administrative Tools, Event Viewer.
  • SQL Server Setup Log: You might already be familiar with the SQL Server Setup log, which is located at %ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Summary.txt. If the Summary.txt log file shows a component failure, you can investigate the root cause by looking at the component’s log, which you’ll find in the %ProgramFiles%\Microsoft SQL Server\90\Setup Bootstrap\LOG\Files directory.
  • SQL Server Profiler Log: SQL Server Profiler, the primary application-tracing tool in SQL Server, captures the system’s current database activity and writes it to a file for later analysis. You can find the Profiler logs in .trc files in the %ProgramFiles%\Microsoft SQL Server\MSSQL.1\MSSQL\LOG directory.
Given these contents of the Customers table:
Id  Name           ReferredBy
1   John Doe       NULL
2   Jane Smith     NULL
3   Anne Jenkins   2
4   Eric Branford  NULL
5   Pat Richards   1
6   Alice Barnes   2
Here is a query written to return the list of customers not referred by Jane Smith:
SELECT Name FROM Customers WHERE ReferredBy <> 2;
It returns only Pat Richards: the comparison ReferredBy <> 2 evaluates to UNKNOWN for the rows where ReferredBy is NULL, so those customers are filtered out along with the ones actually referred by Jane Smith. A correct version must also test ReferredBy IS NULL.
What will be the result of the query below? Explain your answer and provide a version that behaves correctly.
select case when null = null then 'Yup' else 'Nope' end as Result;
Answer: Nope. NULL is not equal to anything, not even to another NULL; the comparison evaluates to UNKNOWN, so the ELSE branch is taken. Corrected versions of both queries are sketched below.
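Corrected versions of both queries, sketched against the example table and column names above:

-- Customers not referred by Jane Smith, including those with no referrer at all
SELECT Name
FROM Customers
WHERE ReferredBy <> 2 OR ReferredBy IS NULL;

-- NULL must be tested with IS NULL rather than with the = operator
SELECT CASE WHEN NULL IS NULL THEN 'Yup' ELSE 'Nope' END AS Result;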

Why Shrink file/ Shrink DB/ Auto Shrink is really bad?

In the SHRINKFILE command, SQL Server isn’t especially careful about where it puts the pages it moves from the end of the file into free pages towards the beginning of the file.
  • The data becomes fragmented, potentially up to 100% fragmentation, which is a performance killer for your database;
  • The operation is slow: all pointers to and from the pages and rows being moved have to be fixed up, and the SHRINKFILE operation is single-threaded, so it can be really slow (the single-threaded nature of SHRINKFILE is not going to change any time soon)
Recommendations:
  • Shrink the file by using TRUNCATEONLY: it releases only the unused space at the end of the file to the operating system without moving any pages (for a log file, this is the inactive portion at the end of the log), so it does not introduce fragmentation; see the sketch after this list
  • Rebuild / reorganize the indexes once a regular shrink is done so that the fragmentation level is reduced
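A hedged sketch of these recommendations; the logical file name and the table name are placeholders:

-- Release only the unused space at the end of the file; no pages are moved
DBCC SHRINKFILE (N'MyDatabase_Data', TRUNCATEONLY);
-- If a regular shrink did move pages, rebuild the indexes afterwards
ALTER INDEX ALL ON dbo.MyTable REBUILD;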

What are the Hotfixes and Patches?

Hotfixes are software patches applied to live, i.e. still running, systems. A hotfix is a single, cumulative package that includes one or more files used to address a problem in a software product (i.e. a software bug).
In a Microsoft SQL SERVER context, hotfixes are small patches designed to address specific issues, most commonly to freshly-discovered security holes.
Ex: if a SELECT query with aggregations returns duplicate rows, the result may be wrong; a hotfix would be released to address such a defect.

Why is SQL Server better than other databases?

I am not going to say one is better than another; it depends on the requirements. There are a number of products on the market, but if I had the chance to choose one of them I would choose SQL Server because:
  • According to the 2005 Wintercorp survey, the largest SQL Server data warehouse database is 19.5 terabytes, belonging to a European bank
  • High security: it offers a high level of security
  • Speed and concurrency: a SQL Server 2005 system is able to handle 5,000 transactions per second and 100,000 queries a day, and can scale up to 8 million new rows of data per day
  • Finally, more technical people are available for SQL Server compared with other databases
So we can say SQL Server is more than enough for any type of application.

Some Third party application or tools

I have used some of these third-party tools:
  • SQL CHECK – Idera – Monitoring server activities and memory levels
  • SQL DOC 2 – RedGate – Documenting the databases
  • SQL Backup 5 – RedGate – Automating the Backup Process
  • SQL Prompt – RedGate – Provides IntelliSense for SQL SERVER 2005/2000,
  • LiteSpeed 5.0 – Quest Software – Backup and Restore
Benefits of using third-party tools:
  • Faster backups and restores
  • Flexible backup and recovery options
  • Secure backups with encryption
  • Enterprise view of your backup and recovery environment
  • Easily identify optimal backup settings
  • Visibility into the transaction log and transaction log backups
  • Timeline view of backup history and schedules
  • Recover individual database objects
  • Encapsulate a complete database restore into a single file to speed up restore time
  • When we need to improve upon the functionality that SQL Server offers natively
  • Save time, better information or notification

SQL Server is not responding. What is the action plan?

Connect using the DAC via CMD or SSMS.
Connect via CMD:
SQLCMD -A -U myadminlogin -P mypassword -S MyServer -d master
Once you connect to the master database, run diagnostic queries to find the problem.
Correct the issue and restart the server.
Find the errors from the SQL log using:
SQLCMD -A -S MyServer -q "EXEC xp_readerrorlog" -o "C:\logout.txt"
A long-running query is blocking all processes and not allowing new connections
Write a query and save the script file to disk, e.g. D:\Scripts\BlockingQuery.sql
use master;
select p.spid, t.text
from sysprocesses p
CROSS APPLY sys.dm_exec_sql_text (sql_handle) t
where p.blocked = 0
and p.spid in
( select p1.blocked
from sysprocesses p1
where p1.blocked > 0
and p1.waittime > 50 )
From the command prompt, run the script against the SQL Server instance and send the result to a text file:
SQLCMD -A -S MyServer -i "D:\Scripts\BlockingQuery.sql" -o "D:\Scripts\blockers.txt"
Recently added some data files to tempdb and after that SQL Server is not responding
This can occur when you specify new files in a directory to which the SQL Server service account does not have access.
Start SQL Server in minimal configuration mode using the startup parameter -f. When -f is specified, SQL Server creates new tempdb files at the default file locations and ignores the current tempdb data file configuration. Take care when using -f as it keeps the server in single-user mode.
Once the server is started, change the tempdb configuration settings (as sketched below) and restart the server in full mode by removing the -f flag.
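A hedged sketch of the tempdb fix; the logical file name tempdev and the target path are placeholders, and the new location takes effect on the next normal restart:

-- Point the tempdb data file at a folder the service account can access
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = N'D:\SQLData\tempdb.mdf');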
A database stays in a SUSPECT or RECOVERY_PENDING State
Try to resolve this using DBCC CHECKDB and other DBCC commands if you can.
The last and final option is to put the database in emergency mode and run CHECKDB with REPAIR_ALLOW_DATA_LOSS.
(Note: avoid this unless you have no other option, as you may lose large amounts of data.)

To recover a database that is in the SUSPECT state

ALTER DATABASE test_db SET EMERGENCY
After you execute this statement SQL Server will shut down the database and restart it without recovering it. This will allow you to view and query database objects, but the database will be in read-only mode. Any attempt to modify data will result in an error similar to the following:
Msg 3908, Level 16, State 1, Line 1 Could not run BEGIN TRANSACTION in database 'test_db' …..etc
ALTER DATABASE test_db SET SINGLE_USER
GO
DBCC CHECKDB ('test_db', REPAIR_ALLOW_DATA_LOSS)
GO
If the DBCC CHECKDB statement above succeeds, the database is brought back online (but you’ll have to place it in multi-user mode before your users can connect to it). Before you turn the database over to your users, you should run other statements to ensure its transactional consistency. If DBCC CHECKDB fails, there is no way to repair the database and you must restore it from a backup.
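Once CHECKDB succeeds and the database is back online, user access can be restored; a sketch using the same example database name:

ALTER DATABASE test_db SET MULTI_USER
GO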

Tuesday 15 March 2016

How To Restore a Database in a SQL Server AlwaysOn Availability Group

Background
There are two clustered servers running SQL Server 2014.  The servers host production databases as well as databases used for QA and testing.  One AlwaysOn Availability Group has been created for the production databases and one for the QA databases.  The same server serves as the primary for both AlwaysOn Availability Groups.
One of the production databases needs to be restored from backup.  Backups are taken from the secondary server, not the primary.  The backups should be restored to the same server from which they were taken.
The following tasks need to be completed in order to restore the database:
  • Make the secondary server from which the backups were taken the primary server
  • Remove the database to be restored from the AlwaysOn Availability Group
  • Restore the database
  • Add the database back into the AlwaysOn Availability Group
Following are detailed instructions for completing these tasks.
Task 1: Switch the Primary and Secondary Servers
1) Connect to both servers in the cluster in SQL Server Management Studio.
2) On the Secondary server, expand the "Availability Groups" folder under the "AlwaysOn High Availability" folder.
3) Right-click on the availability group containing the database to be restored and select "Failover…" from the context menu.  Click “Next >”.
4) Select the new primary server.  Click “Next >”.
5) Verify the choices and click “Finish”.
6) Repeat steps 3-5 for the remaining availability group.
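The same failover can also be issued with T-SQL on the secondary replica that is about to become the primary. A hedged sketch; the availability group name AG_Production is a placeholder:

-- Run on the secondary replica that should become the new primary
ALTER AVAILABILITY GROUP [AG_Production] FAILOVER;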
Task 2: Remove the Database from the Availability Group
A restore operation cannot be performed on a database that is part of an availability group, so the next task is to remove the database from the group.
1) On the new primary server, expand the list of Availability Databases for the availability group.
2) Right-click the database to be restored and select "Remove Database from Availability Group…" from the context-menu. 
3) Click “OK” to remove the database from the availability group. 
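Equivalently, the database can be removed from the group with T-SQL on the primary. A hedged sketch; the group and database names are placeholders:

-- Run on the primary replica
ALTER AVAILABILITY GROUP [AG_Production] REMOVE DATABASE [MyProductionDB];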
Task 3: Restore the Database
1) In the “Databases” folder on the primary server, right-click on the database to be restored and select "Properties" to open the “Database Properties” dialog.  Select the “Options” page, scroll down to the “Restrict Access” option, and change the value from MULTI_USER to SINGLE_USER.  Click “OK”.
2) In the “Databases” folder on the primary server, right-click on the database to be restored and select Tasks->Restore->Database… from the context menu.
3) On the “General” page of the “Restore Database” dialog, select the last Full backup and all Transaction log backups.
4) Select the “Options” page of the “Restore Database” dialog and click the "Overwrite the existing database (WITH REPLACE)" option.  Click "OK".
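For reference, a hedged T-SQL sketch of the same restore; the database name and backup paths are placeholders, and a real backup chain may contain several log backups:

-- Run on the primary after the database has been removed from the group
ALTER DATABASE [MyProductionDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [MyProductionDB]
    FROM DISK = N'\\BackupShare\MyProductionDB_Full.bak'
    WITH REPLACE, NORECOVERY;
RESTORE LOG [MyProductionDB]
    FROM DISK = N'\\BackupShare\MyProductionDB_Log.trn'
    WITH RECOVERY;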
Task 4: Add the Database Back to the Availability Group
After the restore of the database to the new primary server is complete, it can be put back into the availability group.
1) In the “Database” folder on the secondary server, right-click the database and select "Delete" from the context menu.  Click “OK”.
2) Right-click “Availability Databases” in the availability group on the primary server and select "Add Database…" from the context menu.  Click “Next >”.
3) Select the database to be added to the group and click "Next >".
4) Select "Full" as the data synchronization preference.  This will take a full backup of the database on the primary and restore it on the secondary server(s).  Specify a network location accessible to the primary and all secondary servers in which to place the backup files.  Click "Next >".
5) Use the “Connect…” button to establish a connection to the secondary server(s).  Click "Next >".
6) The "Add Database to Availability Group" wizard will validate all of the settings for the new availability group database.  When it completes, click "Next >".
7) Verify the choices and click "Finish" to add the database to the availability group.
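The wizard's Full synchronization can also be scripted. A hedged sketch; the group and database names are placeholders, and it assumes the copy on each secondary has been restored WITH NORECOVERY (which is what the wizard's backup and restore accomplishes):

-- On the primary: add the database back into the availability group
ALTER AVAILABILITY GROUP [AG_Production] ADD DATABASE [MyProductionDB];
-- On each secondary, after restoring the full and log backups WITH NORECOVERY:
ALTER DATABASE [MyProductionDB] SET HADR AVAILABILITY GROUP = [AG_Production];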
Final Tasks
The restore of a database in an AlwaysOn Availability Group is now complete.
At this point it is recommended to immediately perform a backup of the restored database on the secondary server.  This will establish a new backup chain for the database.
The backup files created during synchronization of the primary and secondary server(s) can be deleted.  The location of those files was specified in Step 4 of the “Add the Database Back to the Availability Group” task.
Note that the restored database should now be back in MULTI_USER mode.  Recall that it had been set to SINGLE_USER in Step 1 of the “Restore the Database” task.

Wednesday 9 March 2016

Troubleshooting High-CPU Utilization for SQL Server

This post covers the basic steps for troubleshooting high CPU utilization on a server hosting a SQL Server instance.
The first and most common step if you suspect high CPU utilization (or are alerted to it) is to log in to the physical server and check the Windows Task Manager. The Performance tab will show the high utilization.
Next, we need to determine which process is responsible for the high CPU consumption. The Processes tab of the Task Manager will show this information.
Note that to see all processes you should select Show processes from all users.
In this case, SQL Server (sqlserver.exe) is consuming 99% of the CPU (a normal benchmark for maximum CPU utilization is about 50-60%).
Next we examine the scheduler data. The scheduler is a component of SQLOS which evenly distributes load amongst the CPUs. The query below returns the important columns for CPU troubleshooting.
Note: if your server is under severe stress and you are unable to log in with SSMS, you can use another machine’s SSMS to connect to the server through the DAC – Dedicated Administrator Connection (see http://msdn.microsoft.com/en-us/library/ms189595.aspx for details on using the DAC).
SELECT  scheduler_id
        ,cpu_id
        ,status
        ,runnable_tasks_count
        ,active_workers_count
        ,current_tasks_count
        ,load_factor
        ,yield_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255
See below for the BOL definitions for the above columns.
scheduler_id – ID of the scheduler. All schedulers that are used to run regular queries have ID numbers less than 1048576. Those schedulers that have IDs greater than or equal to 1048576 are used internally by SQL Server, such as the dedicated administrator connection scheduler.
cpu_id – ID of the CPU with which this scheduler is associated.
status – Indicates the status of the scheduler.
runnable_tasks_count – Number of workers, with tasks assigned to them that are waiting to be scheduled on the runnable queue.
active_workers_count – Number of workers that are active. An active worker is never preemptive, must have an associated task, and is either running, runnable, or suspended.
current_tasks_count - Number of current tasks that are associated with this scheduler.
load_factor – Internal value that indicates the perceived load on this scheduler.
yield_count – Internal value that is used to indicate progress on this scheduler.
                                                               
Now to interpret the above data. There are four schedulers, each assigned to a different CPU. All the CPUs are ready to accept user queries as they are all ONLINE.
There are 294 active tasks in the output as per the current_tasks_count column. This count indicates how many activities are currently associated with the schedulers; when a task completes, the number is decremented. 294 is quite a high figure and indicates all four schedulers are extremely busy.
When a task is enqueued, the load_factor value is incremented. This value is used to determine whether a new task should be put on this scheduler or another scheduler; SQLOS allocates the new task to the least loaded scheduler. The very high value of this column indicates all the schedulers have a high load.
There are 268 runnable tasks, which means all these tasks have been assigned a worker and are waiting to be scheduled on the runnable queue.
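As an additional hedged check (not part of the original walkthrough), the proportion of signal wait time in sys.dm_os_wait_stats gives a rough indication of CPU pressure: a high signal-wait percentage means tasks keep waiting for a CPU even after their resource waits have completed.

-- Rough CPU-pressure indicator: percentage of wait time that is signal (CPU) wait
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms) AS DECIMAL(5, 2)) AS signal_wait_percent
FROM sys.dm_os_wait_stats;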
The next step is to identify which queries are demanding a lot of CPU time. The query below is useful for this purpose (note that, in its current form, it only shows the top 10 records).
SELECT TOP 10 st.text
               ,st.dbid
               ,st.objectid
               ,qs.total_worker_time
               ,qs.last_worker_time
               ,qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC
This query uses total_worker_time as the measure of CPU load and sorts in descending order of total_worker_time, so the most expensive queries and their plans appear at the top.
Note the BOL definitions for the important columns:
total_worker_time - Total amount of CPU time, in microseconds, that was consumed by executions of this plan since it was compiled.
last_worker_time - CPU time, in microseconds, that was consumed the last time the plan was executed.

I re-ran the same query a few seconds later. This time the stored procedure dbo.TestProc1 appeared in fourth place, and once again its last_worker_time was the highest. This means the procedure TestProc1 consumes significant CPU time every time it executes.

In this case, the primary cause of the high CPU utilization was a stored procedure. You can view the execution plan by clicking on the query_plan column to investigate why it is causing a high CPU load.
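A hedged supplemental check: sys.dm_exec_procedure_stats (available from SQL Server 2008 onwards) aggregates CPU time per stored procedure, which makes it easy to confirm a procedure-level suspect such as TestProc1:

-- CPU time aggregated per stored procedure, most expensive first
SELECT TOP 10
       OBJECT_NAME(ps.object_id, ps.database_id) AS ProcedureName,
       ps.execution_count,
       ps.total_worker_time,
       ps.total_worker_time / ps.execution_count AS avg_worker_time
FROM sys.dm_exec_procedure_stats ps
ORDER BY ps.total_worker_time DESC;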
I have used SQL Server 2008 (SP1) to test all the queries used in this article.