Windows 2012 disk information – displaying disk information

Goal

To obtain disk information such as disk number, size, partition style, drive label and drive letter

My way of achieving it

Using PowerShell. I must start by saying that this requires Windows Server 2012. The need for something like this comes from my desire to create a SQL installation script that will eventually install SQL Server from one end to the other. The only parameters the script will require are the account used for the services and the instance name. Everything else will be discovered automatically. The script from this post is my attempt at discovering the disks available on a server and formatting them accordingly. So far the script has been tested only against simple volumes. The script contains a few functions: one that discovers all the disks, another that discovers the available (RAW) disks, another that prepares a disk for a stand-alone installation of SQL Server, and a last one that prepares a disk for a clustered SQL Server installation.

Below is the output generated by the script.

[Screenshot: disk_config]

As with any action that changes data, the format functions should be used with caution. The steps for running the script in a safe manner are to execute the script and call the functions in the following order (a sketch of the underlying cmdlets follows the list):

  • all_disks
  • av_disks
  • the stand-alone format function or formatclusterdisk, with the -WhatIf parameter
  • the stand-alone format function or formatclusterdisk, without -WhatIf, only after checking that the disk is indeed RAW
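
This is not the script itself; it is a minimal sketch of the same idea using the Windows Server 2012 storage cmdlets. The SQLData label and the 64 KB allocation unit size are my own assumptions, not values taken from the script.

# List every disk with its number, size and partition style
Get-Disk | Select-Object Number, FriendlyName, PartitionStyle, @{Name='SizeGB';Expression={[math]::Round($_.Size/1GB,2)}}

# List only the available (RAW, not yet initialized) disks
$rawDisks = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }

# Prepare one RAW disk for a stand-alone SQL Server installation.
# Remove the -WhatIf switches only after confirming this really is the disk you want to format.
$rawDisks | Select-Object -First 1 | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT -WhatIf
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter -WhatIf |
        Format-Volume -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel 'SQLData' -WhatIf
}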

The script can be downloaded from here

Windows cluster – checking if services or applications are running on preferred node

Goal

To obtain information about where clustered services or applications are currently running and whether they are running on their preferred node.

My way of achieving it

Using PowerShell. From time to time we want to know if everything is running on the preferred node of our cluster. Although the best practice would be to already have a PowerShell script ready to run on the server, in some cases the script is not there yet, creating scripts on the server is forbidden, or it is simply easier to copy and paste the code into a PowerShell console, so I decided to create something that might be considered a one-liner for this job.
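
This is not the exact one-liner, just a minimal sketch of the same idea, assuming the FailoverClusters module is available on the node where it runs.

Import-Module FailoverClusters

# For every cluster group: where it runs now, its preferred owners,
# and whether the current owner is one of the preferred owners
Get-ClusterGroup | Select-Object Name, OwnerNode,
    @{Name='PreferredOwners'; Expression={ (Get-ClusterOwnerNode -Group $_.Name).OwnerNodes.Name -join ',' }},
    @{Name='OnPreferredNode'; Expression={
        $preferred = (Get-ClusterOwnerNode -Group $_.Name).OwnerNodes.Name
        if ($preferred) { $preferred -contains $_.OwnerNode.Name } else { 'no preference set' }
    }} |
    Format-Table -AutoSize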

The output of the script is below

[Screenshot: preferrednode_oneliner]

The script can be downloaded from here

SQL Server troubleshooting – tempdb utilization

Goal

To obtain information about the tempdb database from a utilization perspective

My way of achieving it

Using some of the DMVs/DMFs that come with SQL Server and general advice from peers and colleagues on the internet (scripts taken from different sources, all of which refer to SQL Server DMVs in Action: Better Queries with Dynamic Management Views by Ian W. Stirk), customized to fit my needs and my own way of troubleshooting. For me the most important thing while troubleshooting is to have the result of the investigation side by side with what it means, since in most cases I am not dealing with this every day and my memory of the topic is only as good as the number of occurrences I have had throughout my working experience. In simple terms, for the tempdb database, as for other user databases, we can find usage information using specific DMVs and DMFs. The tempdb utilization information provided by SQL Server comes in the form of the items below (a sketch of the DMVs involved follows the list):

  • present utilization
    • space currently occupied by all the objects that use tempdb – user objects, internal objects, version store, mixed extents
    • space each session is currently using in tempdb because of running tasks
    • space each session is currently using in tempdb because of row versioning
  • past utilization
    • space used in tempdb by each session since it was established
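
This is not the script itself; it is a minimal sketch of the DMVs behind those numbers, wrapped in Invoke-Sqlcmd so it can be pasted into a PowerShell console. The instance name "." is an assumption.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
Import-Module SQLPS -DisableNameChecking

# Overall tempdb usage per category: user objects, internal objects, version store, mixed extents
$fileUsage = @"
SELECT  SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS user_objects_mb,
        SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS internal_objects_mb,
        SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS version_store_mb,
        SUM(mixed_extent_page_count)             * 8 / 1024.0 AS mixed_extent_mb
FROM    sys.dm_db_file_space_usage;
"@

# Per-session usage: pages still held by running tasks plus what the session allocated since it connected
$sessionUsage = @"
SELECT  s.session_id,
        SUM(ISNULL(t.user_objects_alloc_page_count - t.user_objects_dealloc_page_count, 0))         * 8 / 1024.0 AS task_user_mb,
        SUM(ISNULL(t.internal_objects_alloc_page_count - t.internal_objects_dealloc_page_count, 0)) * 8 / 1024.0 AS task_internal_mb,
        MAX(s.user_objects_alloc_page_count - s.user_objects_dealloc_page_count)                    * 8 / 1024.0 AS session_user_mb,
        MAX(s.internal_objects_alloc_page_count - s.internal_objects_dealloc_page_count)            * 8 / 1024.0 AS session_internal_mb
FROM    sys.dm_db_session_space_usage AS s
LEFT JOIN sys.dm_db_task_space_usage AS t ON t.session_id = s.session_id
GROUP BY s.session_id
ORDER BY session_user_mb DESC;
"@

Invoke-Sqlcmd -ServerInstance '.' -Database tempdb -Query $fileUsage    | Format-Table -AutoSize
Invoke-Sqlcmd -ServerInstance '.' -Database tempdb -Query $sessionUsage | Format-Table -AutoSize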

The output of the script is ordered by the space still in use, and the columns of the table give me all the information I need to decide the next steps or to say who, when and why in regards to tempdb utilization.

[Screenshot: tempdbtroubleshooting]

The script can be downloaded from here

SQL Server database backup – generating backup commands to multiple locations

Goal

To generate the backup commands for a full backup or a differential backup

My way of achieving it

Using xp_cmdshell and PowerShell. More often than we would like, due to the growth of existing databases and the creation of new ones, the backup of our databases fails because of insufficient free space. Although most of the time the solution is to increase the disk where we save our backups, there are exceptions to this rule:

  • we really need to take a full backup or a differential one because we need to create a new log chain.
  • we really need to take a backup as a prerequisite of a change
  • we really need to take a full backup in order to free the space occupied on disk by the other backups needed to recover the database. Some backup retention implementations keep on disk all the backups taken since the last full backup.
  • we really need to take the backup according to the configured schedule, since that schedule was created in such a way that it allows us to recover the database within specific terms.

The script generates the backup commands for the locations found in the backup history, but also for the locations where SQL Server keeps its database files.

The script requires

  • database name
  • backup type, full or differential
  • native or another type of backup; the script has syntax for LiteSpeed, but it can be modified to generate the commands using the syntax of your backup tool

The script identifies the size of the last full or differential backup and uses that size as an estimate for the next one.
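
This is not the script itself; it is a minimal sketch of the same idea for a native full backup. The instance name ".", the database name AdventureWorks and the free-space check based on the drive letter only (no mount point handling) are assumptions of the sketch.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
$dbName = 'AdventureWorks'

# Size of the last full backup, used as an estimate for the next one
$estimate = Invoke-Sqlcmd -ServerInstance '.' -Query @"
SELECT TOP (1) backup_size / 1048576.0 AS size_mb
FROM   msdb.dbo.backupset
WHERE  database_name = '$dbName' AND type = 'D'
ORDER BY backup_finish_date DESC;
"@

# Candidate folders: where previous backups went, plus where the database files live
$folders = Invoke-Sqlcmd -ServerInstance '.' -Query @"
SELECT DISTINCT LEFT(mf.physical_device_name, LEN(mf.physical_device_name) - CHARINDEX('\', REVERSE(mf.physical_device_name))) AS folder
FROM   msdb.dbo.backupset bs
JOIN   msdb.dbo.backupmediafamily mf ON mf.media_set_id = bs.media_set_id
WHERE  bs.database_name = '$dbName' AND mf.device_type = 2
UNION
SELECT DISTINCT LEFT(physical_name, LEN(physical_name) - CHARINDEX('\', REVERSE(physical_name)))
FROM   sys.master_files
WHERE  database_id = DB_ID('$dbName');
"@

# Keep only folders whose drive can accommodate the estimate, then build the commands
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
foreach ($f in $folders) {
    $drive = Get-PSDrive -Name $f.folder.Substring(0, 1) -ErrorAction SilentlyContinue
    if ($drive -and ($drive.Free / 1MB) -gt $estimate.size_mb) {
        "BACKUP DATABASE [$dbName] TO DISK = N'$($f.folder)\$($dbName)_manual_$stamp.bak' WITH STATS = 10;"
    }
}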

The output of the script is shown below.

[Screenshot: gbcmd]

The script can be downloaded from here

SQL Server Instance – database files location, disk, mount point, size, free space, an xp_fixeddrives look-alike

Goal

To obtain information about the locations where the database files are (disk, mount point) and to display the capacity, free space and space occupied on that drive by the database files

My way of achieving it

Using xp_cmdshell and PowerShell. The reason behind this is that although xp_fixeddrives is great, sometimes we need more information. This script is nothing more than an attempt to create something that can be used when we have to answer questions like the ones below (a sketch of the idea follows the list):

  • where are the files for my databases stored
  • is the location a mount point or not
  • what is the size of the drive where my database files are
  • what percentage of that drive do the database files occupy
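
This is not the script itself; it is a minimal sketch of the idea that matches each database file to its volume (Win32_Volume also covers mount points, unlike xp_fixeddrives). The instance name "." is an assumption.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
# Local volumes, including mount points
$volumes = Get-WmiObject -Class Win32_Volume |
    Where-Object { $_.DriveType -eq 3 -and $_.Name -notlike '\\?\*' }

# Database files with their sizes
$dbFiles = Invoke-Sqlcmd -ServerInstance '.' -Query @"
SELECT DB_NAME(database_id) AS database_name, physical_name, size * 8 / 1024.0 AS size_mb
FROM   sys.master_files;
"@

# For each file pick the volume whose mount path is the longest prefix of the file path
$dbFiles | ForEach-Object {
    $file = $_
    $vol  = $volumes |
        Where-Object { $file.physical_name -like "$($_.Name)*" } |
        Sort-Object { $_.Name.Length } -Descending |
        Select-Object -First 1
    [pscustomobject]@{
        Database     = $file.database_name
        File         = $file.physical_name
        MountPath    = $vol.Name
        IsMountPoint = ($vol.Name -notmatch '^[A-Z]:\\$')
        CapacityGB   = [math]::Round($vol.Capacity / 1GB, 2)
        FreeGB       = [math]::Round($vol.FreeSpace / 1GB, 2)
        FilePctOfVol = [math]::Round(($file.size_mb * 1MB) / $vol.Capacity * 100, 2)
    }
} | Format-Table -AutoSize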

The output of the script looks like below

[Screenshot: dltompandsizes]

For me this type of information is useful when we have to show other teams that SQL Server is actually not the major consumer of that location, or when we have to identify the disk number where the database files are in case we have to ask the storage team for performance logs for it.

The script can be downloaded from here

SQL Server Information – backups not compliant with the backup naming convention

Goal

To obtain a list of backups whose names do not follow the backup naming convention, and the space occupied by these backups

My way of achieving it

Using T-SQL, xp_cmdshell and xp_fileexists. From time to time, at least in my case, I take backups as part of a ticket or a request, and in 100% of the cases I name them something different from what the scheduled backups use. In time I realized that my footprint is quite big, and if other colleagues on the team are doing the same, the footprint is even bigger. Because of this I created a script that tells us which backups look like they are NOT compliant, that is, backups not taken by the scheduled jobs or not using the right naming convention, and provides information about those files such as:

  • path to the backup
  • if the file is still on disk
  • size of the file
  • delete commands

The assumptions I made were that we usually take full backups, differential backups and tlog backups, and on some servers we might have log shipping configured. Because of this we might have 4 naming conventions for our backups, or more. The script starts by creating a table containing the naming conventions discovered, based on the backup name length, the count of digits in the backup name and the number of occurrences. Based on this table, and on the rule that fewer than 10 occurrences means an ad hoc name, it decides whether a backup is compliant or not.
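
This is not the script; it is a minimal sketch of the grouping idea, assuming the instance name ".", the same 14-day window and the same threshold of 10 occurrences.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
# Backup files from the last 14 days of history
$backups = Invoke-Sqlcmd -ServerInstance '.' -Query @"
SELECT bs.database_name, mf.physical_device_name, bs.backup_finish_date
FROM   msdb.dbo.backupset bs
JOIN   msdb.dbo.backupmediafamily mf ON mf.media_set_id = bs.media_set_id
WHERE  mf.device_type = 2 AND bs.backup_finish_date >= DATEADD(DAY, -14, GETDATE());
"@

# Group by a rough naming-convention signature: file name length and count of digits in the name
$groups = $backups | Group-Object {
    $name = Split-Path $_.physical_device_name -Leaf
    '{0}_{1}' -f $name.Length, ([regex]::Matches($name, '\d').Count)
}

# Signatures seen fewer than 10 times are treated as not compliant with the naming convention
$groups | Where-Object { $_.Count -lt 10 } | ForEach-Object {
    $_.Group | Select-Object database_name, physical_device_name, backup_finish_date
} | Format-Table -AutoSize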

By default the script deals with the backup history for the last 14 days. If you want to go back more than 14 days, you must modify the line below

SET @days = 14 -- enter here the number of days for which you want to search among backup history

in the script.

Please be aware that this setting, together with how often you take backups, affects the execution time.

The output of the script is below

[Screenshot: backupsnotcompliant1]

The output of the script from a server where we have some backup jobs configured looks like the one below

[Screenshot: backupsnotcompliant2]

The only thing I want to mention here is that this script provides information only; the decision to do something with the output is entirely yours and depends very much on each infrastructure environment. All the result sets were left in the output because by seeing them you can easily confirm that those types of backups are indeed not compliant with the backup policy of your environment.

The script can be downloaded from here

SQL Server troubleshooting – filegroup warning alerts

Goal

To obtain information that will help me troubleshoot and solve this type of alert

My way of achieving it

Using T-SQL and xp_cmdshell. From time to time, and depending on the monitoring solution in place, filegroup alerts for a database are something common throughout the day of a DBA, and I wanted to have something that would help me while solving this type of alert. The script I use for solving these alerts does not mean that we should not take further action; it is just a way to tell me whether a deeper investigation or look should be made. The output of the script is below; the script requires the database name and the threshold for which you want it to calculate the new values. In most cases the threshold is 80, but some implementations configure it at 90, which means the alerts fire when the used percentage of the database is >= 90.

These are the places that need to be modified to reflect your database and your threshold:

SET @dbname = 'AdventureWorks' -- database name
SET @threshold = 50 -- threshold

As you can see below,

[Screenshot: fg_warning]

the script shows the current info and increases the maxsize of the database files belonging to the PRIMARY filegroup by at least 2%. The main goal of the increase is to reach a maxsize that makes the used space, in percent, smaller than the threshold. When the comment for a database file specifies that the maxsize is bigger than the free space on that drive, we need to have a look and possibly start the disk increase procedure for that location.
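
This is not the script; it is a minimal sketch of the maxsize calculation for the files of the PRIMARY filegroup. The instance name "." and the 2% increase being applied over the current file size as the lower bound are assumptions of the sketch.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
$dbName    = 'AdventureWorks'   # database name
$threshold = 80                 # alert threshold in percent

# Current size and used space for every file in the PRIMARY filegroup
$files = Invoke-Sqlcmd -ServerInstance '.' -Database $dbName -Query @"
SELECT f.name, f.size * 8 / 1024 AS size_mb,
       FILEPROPERTY(f.name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM   sys.database_files f
JOIN   sys.filegroups fg ON fg.data_space_id = f.data_space_id
WHERE  fg.name = 'PRIMARY';
"@

foreach ($f in $files) {
    # Smallest maxsize that brings the used percentage under the threshold,
    # but never less than a 2% increase over the current size
    $newMaxMb = [math]::Ceiling([math]::Max($f.used_mb * 100 / $threshold, $f.size_mb * 1.02))
    "ALTER DATABASE [$dbName] MODIFY FILE (NAME = N'$($f.name)', MAXSIZE = $($newMaxMb)MB);"
}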

The script can be downloaded from here

SQL Server troubleshooting – logical disk free space and transaction log backup alerts

Goal

To obtain information that will help me troubleshoot and solve this type of alert

My way of achieving it

Using T-SQL and xp_cmdshell. Sometimes, when I have to troubleshoot why we ran out of free space on the drive hosting the transaction log files of our databases, I start by checking what other files are stored in the same path and which of them (only transaction log files) can return their free space to the OS in order to get rid of this alert. This is handy when, for some reason, one of the databases' tlog files grew because of an ad hoc query or because of the usual activity against the database and now it takes all the space. This does not mean that we should not size the drive hosting our tlog files correctly; it only offers a workaround for quickly creating space so that other databases can grow if they need to. Along with logical disk free space alerts, most monitoring implementations also create backup alerts if the default location cannot accommodate the size of the backup, which means we have to find space somewhere else to take that backup and then truncate the log file to return the free space to the operating system.

Because in some cases I connect to the instance remotely, having everything in one place, without also needing remote disk management, Windows Explorer through administrative shares and so on alongside SSMS, becomes a must when those other tools cannot be used due to the firewall configuration between the server from which we use SSMS and that instance. The script output displays what we can do to free some space, information about the log space usage of our database and how the tlog file is configured, along with information about the places where the temporary backup can be stored. Because someone might have configured log shipping for our database, and because the script considers the default path to be the one where most of last week's backups went, there is no WARNING or logic saying that the database CANNOT be backed up to the other locations that the script provides. This means that every time I use the script I have to check whether the database is part of a log shipping configuration and is a primary database.

The output of the script is similar to the one below, depending on what you entered and on the database options.

[Screenshot: ldfsandtlogb]

If you entered a path in the script, it generates commands for backing up the tlog of our databases and shrinking the files; if the shrink is not successful, it displays all open transactions to help you find the transaction that prevents the shrink.

If you entered a database name in the script, it generates the commands for backing up the tlog and shows you the tlog usage and its configuration, together with the places where you can take the backup.
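
This is not the script; it is a minimal sketch of the checks and of the kind of commands it generates. The instance name ".", the database name AdventureWorks and the temporary backup path T:\TempBackups are assumptions of the sketch.

# Invoke-Sqlcmd is provided by the SQLPS (or SqlServer) module
$dbName     = 'AdventureWorks'
$backupPath = 'T:\TempBackups'   # a location that has enough free space for the log backup

# How full the log is and what is preventing log reuse
Invoke-Sqlcmd -ServerInstance '.' -Query "DBCC SQLPERF(LOGSPACE);" |
    Where-Object { $_.'Database Name' -eq $dbName } | Format-Table -AutoSize
Invoke-Sqlcmd -ServerInstance '.' -Query "SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases WHERE name = '$dbName';" |
    Format-Table -AutoSize

# Commands to back up the log to the temporary location and return free space to the OS
$stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
@"
BACKUP LOG [$dbName] TO DISK = N'$backupPath\$($dbName)_log_$stamp.trn' WITH STATS = 10;
USE [$dbName];
DBCC SHRINKFILE (2, 1024);   -- file_id 2 is usually the first log file; 1024 MB is the target size
-- if the shrink does not release space, look for the oldest open transaction:
DBCC OPENTRAN ('$dbName');
"@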

The script can be downloaded from here