SQL Server – breaking bad habits, improving knowledge and awareness

The hardest part of being a DBA is keeping up with everything that appears in connection with our job.

  • SQL Server new features
  • SQL Server best practices
  • SQL Server new DMVs, DMFs
  • SQL Server procedures specific to each company
  • …

Although this information is available by checking Microsoft web sites, SQL Server specific blogs, your corporate e-mail or corporate intranet, most of the time we miss it due to the daily workload or other distractions. The answer to this would be a reminder, or even better a notification. I found the notification a better option because I have always liked the way Microsoft Outlook notifies you when a new e-mail arrives. While trying to duplicate that behavior I found out that it is very hard without a good understanding of .NET programming. In order to still have what I wanted I had to accept that

  • the notification will still consume resources; it keeps running even when you cannot see it. The idea was to check whether you have more than one monitor and, if yes, whether any application is running in full screen mode or covers a specific portion of the last monitor. If no application window covers the lower left corner of that screen the notification will appear; otherwise it will not, since you cannot see it anyway.
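A minimal sketch of that visibility check, assuming a .NET-capable PowerShell session on Windows. `WindowFromPoint` and `GetDesktopWindow` are real user32.dll functions, but the corner offset and the desktop-window comparison are my illustrative assumptions, not necessarily what the original script does:

```powershell
Add-Type -AssemblyName System.Windows.Forms
Add-Type @"
using System;
using System.Runtime.InteropServices;
public static class Win32 {
    [StructLayout(LayoutKind.Sequential)]
    public struct POINT { public int X; public int Y; }
    [DllImport("user32.dll")]
    public static extern IntPtr WindowFromPoint(POINT p);
    [DllImport("user32.dll")]
    public static extern IntPtr GetDesktopWindow();
}
"@

$screens = [System.Windows.Forms.Screen]::AllScreens
if ($screens.Count -gt 1) {
    # Working area of the last monitor
    $last = $screens[-1].WorkingArea
    $corner = New-Object Win32+POINT
    $corner.X = $last.Left + 10
    $corner.Y = $last.Bottom - 10
    # If a window other than the bare desktop owns that point,
    # something covers the lower left corner, so skip the popup
    $hwnd    = [Win32]::WindowFromPoint($corner)
    $covered = ($hwnd -ne [Win32]::GetDesktopWindow())
    if (-not $covered) {
        # ...show the notification form here...
    }
}
```

This is a rough heuristic: the taskbar or wallpaper host can also own that point, so a production script would inspect the owning process too.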

Because these days, especially in the office, we have at least 2 displays, the idea of a notification on the last one that runs every 10 seconds and stays there for another 10 seconds, reminding me about a new DMV or about something else, is beneficial. Most of the time it will be ignored, but from time to time, while contemplating the screen or waiting for a task to complete, we can peek at that notification, and in time we will learn the new feature or whatever else you want your notification to display.

Below is an example of how it looks.

To close the notification we need to double click it.

The script can be downloaded from here

Compare 2 SQL Server instances

During SQL Server installations, or migrations from old instances to new ones, we need to gather information quickly in order to be sure that we are installing or configuring the instances in the same way. Sometimes, when it comes to performance issues, we need to compare instance settings to spot any differences that might cause them. One method of spotting differences while comparing 2 instances is to put them side by side and check the property that you want using one of the built-in tools – SSMS, T-SQL code, SQL Server Configuration Manager, the Services management console, SQL Server Facets, or anything else that might expose the setting you want. Unfortunately this method is sometimes

  • time consuming
  • ineffective when the properties that you want to check are accessible using different tools
  • not appropriate when you want to share your results with someone else

Fortunately we can overcome all of the above using PowerShell and SMO to build a basic GUI (inspired by the script "A GUI WMI explorer and WMI Method Help generator" available on www.ThePowerShellGuy.com) that displays some of the information that we need. The script attached to this post displays properties for

  • server collections
  • jobs collections
  • logins collections
  • users collections
  • databases collections

For other collections you can check the SMO Object Model Diagram and add them to the script together with the modifications that might be required.
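As an illustration of the SMO approach (not the attached script itself), a side-by-side comparison of the server-level properties of two instances can be sketched like this; the instance names are placeholders:

```powershell
# Load SMO (installed with SSMS or the SqlServer module)
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')

$a = New-Object Microsoft.SqlServer.Management.Smo.Server 'SERVER1\INST1'
$b = New-Object Microsoft.SqlServer.Management.Smo.Server 'SERVER2\INST1'

# Walk the server-level property bag and report only the differences
foreach ($p in $a.Properties) {
    $other = $b.Properties[$p.Name]
    if ($other -and ("$($p.Value)" -ne "$($other.Value)")) {
        [pscustomobject]@{ Property = $p.Name; InstanceA = $p.Value; InstanceB = $other.Value }
    }
}
```

The same pattern applies to the other collections (Jobs, Logins, Databases and so on): grab the matching objects from both servers and diff their `Properties` bags.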

Below is an example of the script output

Some of the properties are in their turn collections, and for those, when selected, the output will sometimes show only the PowerShell object type and not the collection contents.

As with almost all of the scripts provided here, this script is only something basic that can help us get specific types of answers faster.

The script can be downloaded from here

Tracking activity to a Word document for audit purposes or to prove compliance

More than once we need to provide proof of some of the steps, if not all, that we took while implementing a change or solving a ticket. Sometimes this can be accomplished easily using screen capture software or the built-in Windows print screen, while in other cases, depending on the length of the activity we need to track, it becomes heavily intrusive.

Because of this I started to search for an easier way to do it by means of automation and PowerShell. The problems/questions to solve were:

  • how often I should take a capture
  • what the capture size should be
  • which capture I should write to the document

The answers to those questions were

  • every second I check if a mouse movement occurred, and if yes a capture is taken at the position where the mouse was
  • I chose a size that shows me enough area surrounding the mouse. This area can be changed to fit your requirements, but it will affect the document size.
  • all captures taken when the mouse moves – movement of the mouse means we did something. I am not using the keyboard too much while performing my day to day tasks.

We need of course to have Word installed, and before saving the document we need to stop the script execution.
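The capture loop can be sketched roughly as below; the capture size, temp file name and the Word COM calls are my assumptions about the approach, not the attached script:

```powershell
Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing

$word = New-Object -ComObject Word.Application    # Word must be installed
$doc  = $word.Documents.Add()

$size = 400                                       # capture area around the mouse, in pixels
$last = [System.Windows.Forms.Cursor]::Position
while ($true) {
    Start-Sleep -Seconds 1
    $pos = [System.Windows.Forms.Cursor]::Position
    if ($pos -eq $last) { continue }              # no mouse movement, no capture
    $last = $pos
    # Grab a $size x $size block centered on the cursor
    $bmp = New-Object System.Drawing.Bitmap $size, $size
    $g   = [System.Drawing.Graphics]::FromImage($bmp)
    $g.CopyFromScreen($pos.X - $size / 2, $pos.Y - $size / 2, 0, 0, $bmp.Size)
    $file = Join-Path $env:TEMP 'capture.png'
    $bmp.Save($file)
    $word.Selection.InlineShapes.AddPicture($file) | Out-Null
    $g.Dispose(); $bmp.Dispose()
}
```

Stop the loop (Ctrl+C or close the console) before saving the document from Word, as noted above.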

Below is how the document looks.

The script can be downloaded from here

SQL Server SPID – tracking how a spid affects SQL Server

SQL Server Management Studio offers a lot of tools to track what a spid is doing and how it affects SQL Server and other spids. Some of the tools offer information at a very granular level, others at a wider level. Because sometimes we want something in between, we need to use what Microsoft offers and use it appropriately. To have an overview of a specific spid we can use:

  • sp_who2
  • sp_lock
  • built-in DMVs
  • Activity Monitor
  • extended events

Or we can use a custom script that aggregates information using Microsoft DMVs and SPs and presents it. The biggest advantage of using custom scripts is that we can choose

  • to display results to text
  • to display columns as rows
  • to create an Include Client Statistics look-alike

Microsoft is already doing this when we choose Query – Include Client Statistics, which is why I decided to duplicate that and show: status, blocking_session_id, wait_type, wait_duration_ms, cpu_time, memory_usage, total_scheduled_time, reads, writes, logical_reads, arithabort, transaction_isolation_level, lock_timeout, deadlock_priority.
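A sketch of such a custom script, assuming the SqlServer module's Invoke-Sqlcmd and placeholder instance/spid values; the columns above come from sys.dm_exec_sessions, sys.dm_exec_requests and sys.dm_os_waiting_tasks:

```powershell
$spid = 57                         # the session to track (placeholder)
$q = @"
SELECT s.status, r.blocking_session_id, w.wait_type, w.wait_duration_ms,
       s.cpu_time, s.memory_usage, s.total_scheduled_time,
       s.reads, s.writes, s.logical_reads, s.arithabort,
       s.transaction_isolation_level, s.lock_timeout, s.deadlock_priority
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r ON r.session_id = s.session_id
LEFT JOIN sys.dm_os_waiting_tasks AS w ON w.session_id = s.session_id
WHERE s.session_id = $spid
"@
# Poll every second so the spid's behavior can be followed over time
while ($true) {
    Invoke-Sqlcmd -ServerInstance 'SERVER1\INST1' -Query $q | Format-Table -AutoSize
    Start-Sleep -Seconds 1
}
```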

The output looks like below, and I want to mention before anything else that the script provides a broad overview and was created on the spot only to test how the output would look. Because of this

  • The output should be read and understood from this perspective.
  • The script, and the way it was written, should be seen from this perspective also.

The values that I chose to show are not the most important ones; they were chosen only to show that by displaying columns like status we decrease the amount of info that can fit, but we can encode some of the columns using numeric values – 1 for running, 2 for sleeping and so on. The biggest advantage of this type of display is that we can see how a specific spid behaves over time with only one look across the whole execution.

Depending on the things that you want to track you can create your own script using the script below as an example. The only thing you need to do is enable Results to Text in SSMS.

You can download this script from here

Migrate, re-create login(s) in SQL Server using Powershell and script method


To generate the SQL Server login with a single click, in case we want to migrate it or just want to see what permissions it has on our instance.

My way of achieving it

Using PowerShell and the work of other people from the internet :). My desire was to have the code generated while right clicking on the Logins folder or on a Login in SSMS, then starting PowerShell and pasting a script. Although I found a lot of examples on the internet, the one that caught my eye was this post (http://jongurgul.com/blog/sql-server-instance-security-scripting-permissions/). It seems that the author also wanted the same thing but changed his approach: "This script was written initially to allow permissions to be scripted out with a single click."

Because I still wanted to make a script for that approach, I took inspiration from it and came up with a script that gives me just that information. Far from being a final script, this version has the advantage that you can generate the info for all of the logins or only for one login, and the information comes in this order

  • create login
  • server roles
  • server permissions
  • object permissions at server level
  • create user for that login for the mapped databases
  • database roles
  • database permissions for that login
  • object permissions for that login
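With SMO the first items in the list above can be approximated as below; `Script()`, `ListMembers()` and `EnumServerPermissions()` are real SMO members, but the database-level part and the exact T-SQL formatting are left out of this sketch:

```powershell
[void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'SERVER1\INST1'

$loginName = $null          # set to a specific login name, or leave $null for all
$logins = if ($loginName) { @($server.Logins[$loginName]) } else { $server.Logins }

foreach ($login in $logins) {
    $login.Script()                              # CREATE LOGIN statement(s)
    foreach ($role in $login.ListMembers()) {    # server roles the login belongs to
        "ALTER SERVER ROLE [$role] ADD MEMBER [$($login.Name)];"   # 2012+ syntax
    }
    foreach ($perm in $server.EnumServerPermissions($login.Name)) {
        "$($perm.PermissionState) $($perm.PermissionType) TO [$($login.Name)];"
    }
}
```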

An output of running the script looks like this:

The script can be downloaded from here.

Renaming a computer that serves as a SQL Server – how much work does it require


To know up front the work that needs to be performed when we rename a computer that has SQL Server installed on it.

My way of achieving it

Using some of the code that other people already wrote, modifying it and combining it with the Microsoft recommendations for renaming a computer. I tried to build this overview in order to estimate how long or how hard it will be to change the computer name, depending on how many instances are installed and which features are already enabled. The script is far from complete, but as it is built on top of other scripts we can use it as a base for other checks and tests.
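For the rename itself, Microsoft's documented steps for a stand-alone instance boil down to sp_dropserver/sp_addserver plus a service restart. A sketch of the check, assuming Invoke-Sqlcmd and a placeholder instance name:

```powershell
$inst = 'SERVER1'            # placeholder instance
# Compare the name SQL Server thinks it has with the actual machine name
$r = Invoke-Sqlcmd -ServerInstance $inst -Query @'
SELECT @@SERVERNAME AS current_name,
       CAST(SERVERPROPERTY('MachineName') AS sysname) AS machine_name
'@
if ($r.current_name -ne $r.machine_name) {
    # Generate (not execute) the documented rename statements
    "EXEC sp_dropserver '$($r.current_name)';"
    "EXEC sp_addserver '$($r.machine_name)', local;"
    '-- then restart the SQL Server service'
}
```

This covers only the default-instance metadata; things like replication, mirroring or hard-coded server names in jobs add to the work the post is trying to estimate.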

The output of the script will look similar to this:


The script can be downloaded from here

Balance tables across SQL Server database files in the primary filegroup or other filegroups


To have an overview of the tables in a database that could be moved across the database files, and to see whether they can be moved or not.

My way of achieving it

Using T-SQL. The script checks the edition of SQL Server and the biggest tables in our database, and whether these tables contain LOB data – how much of each table is LOB and how much is not. Tests showed that even if you have a column of a LOB type, if the value fits in the page then you can still move that page to other locations in the database files. I tried to find a way to map a table to the database files, but I could not find a solution that runs in a reasonable time. Searching the internet revealed that I could use:

  • DBCC IND(0, 'tablename', 1)
  • SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS rowlocation FROM tablename

The problem with the first one is that for huge tables it takes a lot of time, which makes it unusable, while the second one, although faster, still takes minutes to run for a 100 GB table. I would have wanted this information in order to estimate how much space a table occupies in each of the database files. The script also ignores tables that are already partitioned.
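A faster, coarser alternative I can sketch (an assumption on my side, not the attached procedure) is to read sys.allocation_units, which splits each table into in-row, row-overflow and LOB pages without touching the data:

```powershell
# Placeholder instance/database names; sys.allocation_units is metadata only,
# so this runs in seconds even for very large tables.
Invoke-Sqlcmd -ServerInstance 'SERVER1\INST1' -Database 'MyDb' -Query @'
SELECT o.name AS table_name, au.type_desc,
       SUM(au.total_pages) * 8 / 1024 AS size_mb
FROM sys.allocation_units AS au
JOIN sys.partitions AS p
  ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)   -- in-row / row-overflow
  OR (au.type = 2       AND au.container_id = p.partition_id)  -- LOB
JOIN sys.objects AS o ON o.object_id = p.object_id
WHERE o.is_ms_shipped = 0
GROUP BY o.name, au.type_desc
ORDER BY SUM(au.total_pages) DESC;
'@
```

This gives the LOB versus non-LOB split per table, though not the per-file placement that DBCC IND or %%physloc%% would reveal.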

The script is nothing more than a procedure that can help us quickly answer the question – can I move some tables to other files or not?

As with everything else from the internet, please try it first and check whether the information provided is accurate or not. Below is an example of the script output.

The script can be downloaded from here

Initializing, labeling, formatting and mounting disks in Windows 2012 R2 for a SQL Server installation


To check the available disks on the server/node and to choose the ones that will be used for the SQL Server instance.

My way of achieving it

Using PowerShell. The script requires 2 arguments: the instance name and the installation type. I am using Get-Disk to find all available disks that have a partition style of RAW or that don't have any partitions on them. After the discovery the script prompts the user to choose the disks and informs him how the disks will be used.

Select the disks for your instance:

– first disk will be used for sql_data_drive

– second disk for the Data01 MP

– third one for Log01 MP

– the fourth one for the Data02 MP

– the fifth one for the Data03 MP and so on

– Example: 1,2,3,4

: 2,3,4,5

Disk 2 with the size of 0.5 GB will be used for SQL Data Drive

Disk 3 with the size of 0.9765625 GB will be used for Data01 MP

Disk 4 with the size of 4.8828125 GB will be used for Log01 MP

Disk 5 with the size of 6.8359375 GB will be used for Data02 MP

After that the disks are formatted and configured as mount points. The script checks for the first available letter and uses the first disk as the base drive for all the other disks, which are mounted on that drive.
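The initialize/format/mount sequence can be sketched with the Storage-module cmdlets; the disk numbers, labels and the 64 KB allocation unit size are illustrative choices, not the attached script:

```powershell
# Base drive: initialize, partition with a drive letter, format NTFS/64KB
Initialize-Disk -Number 2 -PartitionStyle GPT
$part = New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter
Format-Volume -Partition $part -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel 'SQLData' -Confirm:$false

# Second disk mounted as a mount point under the base drive
$mp = "$($part.DriveLetter):\Data01"
New-Item -ItemType Directory -Path $mp | Out-Null
Initialize-Disk -Number 3 -PartitionStyle GPT
$p2 = New-Partition -DiskNumber 3 -UseMaximumSize
$p2 | Add-PartitionAccessPath -AccessPath $mp
$p2 | Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 `
    -NewFileSystemLabel 'Data01' -Confirm:$false
```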

The script and a log of an execution are attached here and here

SQL Server troubleshooting – first steps in troubleshooting performance of some TSQL statements (part 4)


To split the code of an SP or ad-hoc query into pieces in order to find out the number and names of the temporary tables used, table variables, dynamic code, stored procedure calls, dynamic T-SQL, usage of IN, parameters used in the stored procedures, and whether the code uses index hints, cursors, outer joins or SET commands.

My way of achieving it

My approach is to split the code into one word per line, loop through the lines, and search for specific SQL keywords. It is not 100 percent accurate, but it gives me a 50% or better view of how the code was written, how we can improve it, and where we can spot the bottlenecks. Each table has some comment columns that can be used to note the things we must be aware of when using SPs, dynamic code, DDL statements (CREATE) mixed with DML statements (INSERT) in an SP, table variables, the IN operator and so on.
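A toy version of the split-and-count idea (a deliberate simplification of the attached script, with hypothetical function and property names):

```powershell
function Get-TsqlKeywordCounts {
    param([string]$Code)
    # one word per element, lower-cased, empty entries removed
    $words = $Code.ToLower() -split '\s+' | Where-Object { $_ }
    [pscustomobject]@{
        TempTables = @($words | Where-Object { $_ -match '^#\w' } | Sort-Object -Unique).Count
        Cursors    = @($words | Where-Object { $_ -eq 'cursor' }).Count
        InUsage    = @($words | Where-Object { $_ -eq 'in' }).Count
        OuterJoins = @($words | Where-Object { $_ -eq 'outer' }).Count
        SetCmds    = @($words | Where-Object { $_ -eq 'set' }).Count
    }
}

Get-TsqlKeywordCounts @"
CREATE TABLE #t (id int)
INSERT INTO #t SELECT id FROM dbo.big_table WHERE id IN (1, 2, 3)
"@
```

The real script also records the line number of each hit (the locationincode mentioned below), which a simple counter like this does not keep.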

Below is an example of the output


The first result set is the query displayed as one word per row. The second result set is the actual SP or ad-hoc query that we troubleshoot, because here we check the code pointed to by the other result sets through locationincode. For example, the fifth result set shows us that our stored procedure uses parameters and that a parameter is used on line 37: select @dbname = parsename(@objname, 3). This can help us find out whether a conversion will take place or not, which is helpful in the overall troubleshooting process. We can also see some of the tables that are used, and if by experience, or because we are familiar with the server and database, we know that a table has millions of rows, then we can start troubleshooting from there.

The script can be downloaded from here

Mapping databases to their backup files that are still on disk


To obtain a list of the backups for the databases on our instance which are still on disk, and to count them.

My way of achieving it

Using T-SQL and xp_cmdshell. From time to time we end up having free space issues even after the backup cleanup job that we have in place executed successfully. This situation can happen because of how our backup cleanup logic works: most backup cleanup logic uses dates to delete files, and not the number of backups or a combination of the two. Because of this I created the script below in order to know how many backup files are still on disk for each of my databases. If we know that we should keep only 2 full backups and we have 3 on disk, then we can delete one. The same applies to differential backups. The script works well for the environment for which I created it, but with some small modifications I believe it will suit the needs of other environments. The script expects the backup file name in this form


  • the path contains the database name
  • the path has a timestamp in the form above
  • the path contains the type of the backup
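To illustrate the counting idea in PowerShell (the actual script uses T-SQL and xp_cmdshell, and the real naming convention is the one shown above; the pattern below is purely a hypothetical example):

```powershell
function Get-BackupCounts {
    param([string[]]$FileNames)
    # hypothetical name pattern: DBNAME_full|diff_YYYYMMDD_HHMMSS.bak
    $pattern = '^(?<db>.+)_(?<type>full|diff)_\d{8}_\d{6}\.bak$'
    $FileNames | ForEach-Object {
        if ($_ -match $pattern) {
            [pscustomobject]@{ Database = $Matches.db; Type = $Matches.type }
        }
    } | Group-Object Database, Type | Select-Object Name, Count
}

# e.g. against a placeholder backup folder:
# Get-BackupCounts (Get-ChildItem 'X:\Backup' -Recurse -Filter *.bak).Name
Get-BackupCounts @('AdventureWorks_full_20240101_010000.bak',
                   'AdventureWorks_full_20240102_010000.bak',
                   'AdventureWorks_diff_20240102_130000.bak')
```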

By default the script will output

  • the database name and how many backups exist on disk for that database
  • the database names, their paths and their backup dates for all the databases that have more than 2 backups on disk
  • the delete commands for any backup file that was taken for a database that is no longer on the instance
  • some information in case the backup path contains also a folder named ‘restore’

The script provides only information; the decision to delete something is in the hands of the person who interprets the results and knows how many full backups we should have, how many differentials, and so on.

Below you can see the output of the script for diff.bak and a local path.



The script can be downloaded from here