Patrick Purviance – SQL Server DBA Blog
Blogging impactful solutions for SQL Server DBAs worldwide

Archive for the ‘Tricks and Tips’ Category

SQL PASS Summit 2011 – A First Timers View – Day 3

Thursday, October 27th, 2011

This post is part 3 of an at-least-four-post series on my First Timer view of the SQL PASS Summit as I experienced it personally.  If you missed the earlier posts, click here.

Day 3

Day 3 for me, Day 2 of the Summit (Thursday), rolls around, and today I got up a little earlier to make the shuttle over to the convention center for the free breakfast at the Summit.  Yesterday I mentioned in my Day 2 summation that the hot breakfast from the Marriott was the better choice.  Don’t get me wrong, the breakfast in the Summit dining hall was OK, with typical cereal, yogurt, and continental items, but it just wasn’t as good as the hot, fresh breakfast items at the hotel.  So to me, in the end, the 10-block walk is worth it if your hotel also provides a hot breakfast for free…

Anyway, on to the Summit Day 2 keynote.  Plus, it’s SQL Kilt Day, so Bill Graziano, PASS Executive VP, is in a kilt, like 30 or so other attendees, to introduce Day 2 and the keynote speaker, Quentin Clark (Corporate Vice President, SQL Server Database Systems Group, Microsoft Corp).

Quentin provided a second keynote that was essentially a sales pitch, this time focused mostly on what he called the “Fantastic 12”, the top 12 features in SQL Server 2012.  Quentin is an entertaining speaker and brought out quite a few Microsoft technologists and experts to cover the “Fantastic 12” in the keynote:

#1 Required 9’s & Protection – SSIS is a server; AlwaysOn; HA for StreamInsight
#2 Blazing-Fast Performance – improvements to the SQL Engine, SSIS, and SSAS (plus the new ColumnStore index, really sweet for DWH)
#3 Rapid Data Exploration
#4 Managed, Self-Service BI – Power View; PowerPivot
#5 Credible, Consistent Data – Data Quality Services; Master Data Services
#6 Organizational Compliance – Auditing; user-defined server roles
#7 Peace of Mind – Distributed Replay Utility; System Center Advisor / MPs; Premier Mission Critical support
#8 Scalable Data Warehousing – optimized, pre-tuned appliances from multiple vendors
#9 Fast Time to Solution
#10 Extend Any Data, Anywhere – drivers for PHP, Java, Hadoop; ODBC drivers for Linux; CDC for SSIS & Oracle
#11 Optimized Productivity – SQL Server Data Tools (“Juneau”) for both OLTP and BI; deployment + targeting “freedom”
#12 Scale on Demand – AlwaysOn; elastic scale & federation

Again, lots of demos and a lot of time spent talking about the Parallel Data Warehouse turn-key data warehousing hardware platform.  Still, a good amount of information was packed into the time.

To see the entire keynote for yourself, click this link.

The rest of the day was just like the first day: filled with sessions, Expert Pods, and a quick stop into the SQL Server Clinic, put on by Microsoft and staffed with escalation team and SQLCAT team members to answer the tough questions you brought with you.  Unfortunately, I had tough questions about merge and transactional replication issues, and there was no one in the clinic to answer them.  They recommended I post them to the MS forums or Connect.  I did get to take a bag full of SWAG for my SQL PASS chapter, though, so that was a #win.

Thursday evening was the night of the Microsoft-sponsored Community Appreciation Party, held across the street from the convention center at GameWorks: a huge sports bar/arcade covering a massive amount of square footage.  This event was awesome.  There was free food and drinks again, of course.  And then, to spur networking, there was a contest where you had to find 5 new people, get to know them a bit, and have them put their name and email address on a card.  I met some neat people, played a bunch of games, and found out the next day that I’d won that contest.  A new Xbox 360 with Kinect!  #winning #sqlpass rocks!

Then it was back to the hotel instead of SQL Karaoke, where a lot of people go, but it was a great night anyhow.  The walk back to the hotel was chilly, but I called my wife and we chatted the entire way.  #morewinning

A little work for the company to wind down and some shut-eye to prepare for the final day… I was almost sad it was this close to the end…


SQL PASS Summit 2011 – A First Timers View

Monday, October 24th, 2011

So, as promised, I’m lightly documenting my experience as a first timer at the SQL PASS Summit. In the next few posts, I’ll be specifically covering the 2011 Summit, held in Seattle, WA. I’ll break it up by day, beginning with travel and what to expect.

Also, there are plenty of other articles out there from experienced PASS Summit attendees, so I encourage you to read all you can prior to your first trip to what is truly the largest, best-supported, and best-run technical conference I’ve ever attended. Without further ado…

Day 1 of my First Timer experience of SQL PASS Summit 2011

Travel to the Summit
Day 1 was the Tuesday before the first day of the Summit. It began with a two-leg flight from Wichita to Seattle, with a stop in Denver. No hassles anywhere along the way there. It was when I arrived at SEATAC that things began to get interesting. I had read and heard that Seattle had a Light Rail train system and that the cost was only a few dollars to get from the airport to just about anywhere between SEATAC and downtown Seattle, so I immediately set off to find the Light Rail. That was a bit of a trek, but nothing deadly ;). It was on the third level, accessed from a walkway connecting the airport parking garage to the Light Rail, and there were plenty of signs along the way. Once you get to the Light Rail, there’s no one posted to talk to or ask questions of, which you’re sure to have if this is your first time. You walk through the gates and must purchase a ticket to ride the train based upon your destination. For the Summit, the very last stop (Westlake Center) was what I wanted, but the maps are difficult to read and figure out when there’s a line of people behind you at the ticket kiosk, so I chose the University destination. I then went up the escalator to the train loading areas and luckily didn’t have to wait, since a train was there already. (SEATAC is the southernmost end of the Light Rail system, so any train is going where you want to go.) I jumped on, took a seat, waited 5 minutes, and the train took off northbound. I’ve spent all this time on the Light Rail only to highlight my mistake on the destination (which forced me to walk a lot more, bags in tow, to get to my hotel, the Marriott SpringHill Suites) and to let you know that it took the train about 50 minutes to get from SEATAC to Westlake.

Summit Registration
Once I got to the hotel, I checked in, dropped off my bags, and headed over to the Washington State Convention Center, the site of the Summit in 2011. Registration was easy. It was on the 4th floor, up two sets of escalators in the main hall of the center. Lots of people were posted on the 4th and 6th floors (both PASS volunteers and convention center staff) to direct you to what you needed. I registered (4th floor thru the doors on the left to enter the Summit site) around 1:15PM Tuesday afternoon and there was zero wait. The registration volunteers were cheerful and very helpful, explaining what everything was for. Not sure if that would be the case if there were lines, but it was nice for me.

PASS Chapter Leader Mtg
I then grabbed lunch at the Subway out in the main hall on the 4th floor and read thru some of the program guide and mini-schedule. For those of us who are PASS Chapter Leaders, there was a pre-conference meeting that I attended Tuesday @ 2:30PM. I met a few other chapter leaders and we chatted for a while before the meeting. Finding people I already had something in common with right off the bat was nice.

First Timer Orientation
After the chapter leader meeting, it was already time for the first timers meeting. This was something new this year. Thomas LaRock (Twitter: @SQLRockStar) and other PASS leaders and volunteers coordinated a Big Brother/Big Sister program to address the problem of attendees not returning to the Summit because they felt they arrived alone, ate alone, spent the evenings alone, and left without making any real contacts. Honestly, I don’t know whether the Big Brother/Big Sister program worked well, because it is very difficult to stay in contact with the people in your group after you initially meet. Our group never got back together, and with all the different sessions, experience levels, and goals for the Summit, I can understand how that would be difficult. It would have been nice to have some more group lunches or breakfasts or something. In the end, I did meet one person I really hit it off with, and we managed to hang out a few times during the Summit. So that was a win.

Speed Networking
After the first timers met their group and Big Brother/Big Sister, we all went into a huge room where all 800 first timers sat in on a 45-minute Speed Networking session by Don Gabor. We learned the importance of introducing and selling ourselves within a couple of minutes, in a session that ran like a massive 800-person speed-dating party. Every 3 or 4 minutes you moved down a chair and met someone new. Interesting for sure.

First Timers Red Carpet Welcome
At the end of the Speed Networking session, the two large displays showed the inside of the even larger conference room next door (the main conference room), where the current PASS President (Rushabh Mehta) was on stage welcoming everyone to the 2011 PASS Summit. At the end of that welcome, he introduced the first timers, a huge black curtain between the two rooms fell, and all 800 first timers entered the Welcome Reception through lots of fog, lights, and a red carpet. Pretty sweet way to start the Summit. We were all rock stars!

Welcome Reception
Alright, so on to the lowdown on the Welcome Reception. It was around 4,000 people in a huge ballroom: dimly lit, free food and drinks, and lots of movement and noise. That’s where I have some complaints. There weren’t any signs or indications of where things were. As a first timer, I had no idea what was going on and decided that if I didn’t just grab some food, I wasn’t going to get any (it turned out that wasn’t the case). With 4,000 people and cuisine-specific buffet tables in different areas of the room, getting around was tough, and you didn’t know what was available unless you got around the whole room. I was also disappointed that although drinks were free, there was nothing non-alcoholic aside from water.

During the reception party, a group of SQL Server community “celebrities” was brought on stage along with 4 attendees, and 4 teams played Quiz Bowl, a team-based version of Jeopardy with answers consisting of funny combinations of SQL Server technology and pop culture. Entertaining, and those attendees walked away with prizes.

First Afterhours Event
As the Welcome Reception was drawing to a close, many people began heading out to various after-hours events, and this is where the PASS Summit, to me, is vastly different from other technology conferences. As a first timer I wasn’t sure what to expect, but I had heard that getting yourself invited to after-hours stuff was critical to experiencing the Summit the right way, because you get to hang out with other SQL Server professionals in a mostly social setting. Sure, there was still a lot of shop talk, but it was usually surrounded by a party. This night, as a chapter leader, I attended an invitation-only event sponsored by PASS for volunteers and speakers at the Garage, around 10 blocks from the convention center. I caught a cab with a group of people, so it was only a couple of bucks each. Most were already blitzed, and I don’t drink, but I can have a good time without, so it was all good. It was a little more difficult finding people to talk to at this party, but I think that was because most of them weren’t first timers. They were reconnecting, which is understandable. I did meet some people and have some cool conversations just by sitting down and introducing myself. I shared a cab with some other leaders back to my hotel around midnight, then did some work and hit the sack. The next day (Wednesday) was the first conference day and started at 8AM, so I needed some sleep.

At the end of the 1st day, I’d already met a handful of people I really liked and would chat with off and on during the rest of the Summit. And that’s honestly what the Summit is all about: realizing you’re all there with the same passion for SQL Server, overcoming fear, and putting yourself out there. You’ll be accepted here…

As the PASS mantra goes, Connect/Share/Learn.

Well, that’s it for today. Tomorrow, Day 2.


Procedure to Validate SQL Server Replication Convergence

Friday, July 22nd, 2011

Today I’m posting a turn-key solution for those of you out there supporting SQL Server replication. The purpose of the stored procedure created by the script provided below is simply to leverage a little-known, but very useful, tool provided by Microsoft for the sake of convergence validation (validate that all of your table based articles are identical between the publisher and subscriber(s)).

The execution of the SP (once compiled) simply generates syntax in text format that can be saved as a batch file or run as-is (carefully) from a command prompt to check for convergence issues between a published DB and its subscriber’s DB tables.

The underlying tool, called TableDiff.exe, again, is provided by Microsoft and can be found with the SQL Server binaries. Because of this, the SP should be executed on the server upon which you will be executing the resulting script so that the correct binary location will be provided.

I typically compile the SP in a DBA “tools” database, execute it providing all of the required parameters, save the resulting text as .bat file on the desktop of the publisher’s host server, and then execute the .bat. The batch file will create a log report and, optionally, a fix-it T-SQL script for each article to bring the table(s) back into convergence if differences are found.
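For reference, a single line of the generated output looks roughly like the sample below. One such line is emitted per article; the server names, table name, and output directory here are made-up placeholders, and the tablediff path varies by SQL Server version:

```
"C:\Program Files\Microsoft SQL Server\100\COM\tablediff" -sourceserver "PUBSRV" -sourcedatabase "CUSTOMER" -sourcetable "Orders" -destinationserver "SERVER_A" -destinationdatabase "CUSTOMER" -destinationtable "Orders" -f "C:\Make_SERVER_A_CUSTOMER_LookLike_PUBSRV_CUSTOMER_For_Orders.sql" >> "C:\Tablediff_Log_SERVER_A_CUSTOMER_vs_PUBSRV_CUSTOMER_20110722_1305.txt"
```

The `-f` portion only appears when @Provide_Fix = 1; without it, the command just appends its difference report to the log file.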

Note: There currently is NO error checking done for the location of TableDiff.exe. It can be added if needed.

Example Usage:

EXEC DBA_TOOLS.dbo.Repl_Gen_TableDiff_Code_usp
	  @Pub_SvrName = @@servername
	, @Sub_SvrName = 'SERVER_A'
	, @Pub_DBName = 'CUSTOMER'
	, @Sub_DBName = 'CUSTOMER'
	, @Provide_Fix = 1
	, @Outputdir = 'C:\'

Where:
	@Pub_SvrName is the name of the publisher server
	@Sub_SvrName is the name of the subscriber server
	@Pub_DBName is the name of the published database
	@Sub_DBName is the name of the subscriber database

This has been tested on SQL Server 2005/2008/2008R2, but only with Transactional & Snapshot Replication. Merge Replication uses some alternative tables for configuration and lookups and often includes parameterized filters for subscribers, so I won’t claim this solution is appropriate for Merge or Peer-to-Peer replication, but am open to comments on the topic.

USE [DBA_Tools]
GO
IF OBJECT_ID('Repl_Gen_TableDiff_Code_usp') IS NOT NULL
	DROP PROCEDURE dbo.Repl_Gen_TableDiff_Code_usp
GO
CREATE PROCEDURE [dbo].[Repl_Gen_TableDiff_Code_usp] 
	  @Pub_SvrName	SYSNAME
	, @Sub_SvrName	SYSNAME
	, @Pub_DBName	SYSNAME
	, @Sub_DBName	SYSNAME
	, @Provide_Fix  BIT = 0
	, @OutputDir	VARCHAR(4000)
	, @DebugFlag	BIT = 0
AS
--==  Procedure:		dbo.Repl_Gen_TableDiff_Code_usp  
--==  Author:			Patrick Purviance
--==  Description:		Simple SP to generate syntax that can be saved as a batch
--==					file or run as-is (carefully) from a command prompt to 
--==					check for convergence issues between a published DB and
--==					its subscriber's DB tables.  TableDiff.exe is provided
--==					with the SQL Server binaries and therefore this SP should
--==					be executed on the server upon which you will be executing
--==					the resulting script so that the correct binary location
--==					will be selected.  
--==					Note:  There currently is NO error checking done for the 
--==					location of TableDiff.exe.  It can be added if needed.
--==  Usage:			EXEC DBA_TOOLS.dbo.Repl_Gen_TableDiff_Code_usp 
--==  								@Pub_SvrName	= @@servername
--==  								, @Sub_SvrName	= 'SERVER_A'
--==  								, @Pub_DBName	= 'CUSTOMER'
--==  								, @Sub_DBName	= 'CUSTOMER'
--==								, @Provide_Fix = 1
--==								, @Outputdir = 'C:\'
--==					Where @Pub_SvrName is the name of the publisher server,
--==							@Sub_SvrName is the name of the subscriber server,
--==							@Pub_DBName is the name of the published database,
--==							@Sub_DBName is the name of the subscriber database
--==  Version history:
--==					v1.0 Created, PPurviance, 11/29/2010
--==					v1.1 Modified to use replication database syspublications to be sure 
--==						 to pick up all articles. I found a bad publication_id in 
--==						 distribution..mspublications causing some articles not to 
--==						 get picked up for convergence checking.
DECLARE	  @MSG VARCHAR(500)
		, @SQL VARCHAR(4000)
		, @LogDate VARCHAR(13)
		, @Version VARCHAR(4)
		, @TablediffLoc VARCHAR(256)
		, @FixFile VARCHAR(256)
DECLARE @SysArticles TABLE (pubid int, artid int, artname SYSNAME)
DECLARE @Publications TABLE (pubname SYSNAME, pubid int)

--== Determine the current SQL version:
SELECT @Version = LEFT(CONVERT(VARCHAR, SERVERPROPERTY('ProductVersion')), 2)
SELECT @Version = 
	CASE @Version 
		WHEN '7.' THEN '7'
		WHEN '8.' THEN '2000'
		WHEN '9.' THEN '2005'
		WHEN '10' THEN '2008'
	END

--== Check all supplied values:
--== First check the publisher server name. This must be run on the publisher.
IF RTRIM(@Pub_SvrName) <> @@SERVERNAME
BEGIN
	SET @MSG = 'This procedure must be run on the publisher to work correctly.  The servername provided does not match the local server name.'
	RAISERROR(@MSG, 16, 1)
	RETURN
END
IF NOT EXISTS (SELECT 1 FROM master.sys.databases WHERE name = RTRIM(@Pub_DBName))
BEGIN
	SET @MSG = 'Supplied Publisher Database cannot be found.'
	RAISERROR(@MSG, 16, 1)
	RETURN
END
--==  End check all supplied values

--== Collect the articles and publications from the published database:
SET @SQL = 	'SELECT pubid, artid, name FROM [' + RTRIM(@Pub_DBName) + '].[dbo].[sysarticles]'
INSERT @SysArticles	EXEC (@SQL)
IF @DebugFlag = 1
	PRINT @SQL
SET @SQL = 	'SELECT name, pubid FROM [' + RTRIM(@Pub_DBName) + '].[dbo].[syspublications]'
INSERT @Publications EXEC (@SQL)
IF @DebugFlag = 1
	PRINT @SQL

--== Build the date and time and tablediff location portion for the log name
SET @LogDate = CONVERT(VARCHAR(8), GETDATE(), 112) + '_' + REPLACE(CONVERT(VARCHAR(5), GETDATE(), 108), ':', '')
SELECT @TablediffLoc = CASE @Version 
	WHEN '2000' THEN '"C:\Program Files\Microsoft SQL Server\80\COM\tablediff" '
	WHEN '2005' THEN '"C:\Program Files\Microsoft SQL Server\90\COM\tablediff" '
	WHEN '2008' THEN '"C:\Program Files\Microsoft SQL Server\100\COM\tablediff" '
	END
SET @FixFile = ''

--== Now build the statement
SELECT @TablediffLoc 
	+ '-sourceserver "' + RTRIM(@Pub_SvrName)
	+ '" -sourcedatabase "' + RTRIM(@Pub_DBName)
	+ '" -sourcetable "' +  sa.artname
	+ '" -destinationserver "' + RTRIM(@Sub_SvrName)
	+ '" -destinationdatabase "' + RTRIM(@Sub_DBName)
	+ '" -destinationtable "' + sa.artname 
	+ CASE WHEN @Provide_Fix <> 0
		THEN '" -f "' + RTRIM(@OutputDir) + '\Make_' + RTRIM(@Sub_SvrName) + '_' + RTRIM(@Sub_DBName) + '_LookLike_' + RTRIM(@Pub_SvrName) + '_' + RTRIM(@Pub_DBName) + '_For_' + RTRIM(sa.artname) + '.sql"'
	  ELSE '"'
	  END
	+ ' >> "' + RTRIM(@OutputDir) + '\Tablediff_Log_' + RTRIM(@Sub_SvrName) + '_' + RTRIM(@Sub_DBName) + '_vs_' + RTRIM(@Pub_SvrName) + '_' + RTRIM(@Pub_DBName) +  '_' + RTRIM(@LogDate) +  '.txt"'
FROM @SysArticles sa 
	INNER JOIN @Publications pub ON pub.pubid = sa.pubid
ORDER BY sa.artname 
GO


Hey DBAs, how do you provide passwords to end users?

Monday, June 6th, 2011

So an end user has decided they need a new login created on your SQL Server to complete a project/assignment. Immediately a question explosion happens in your mind, right? What will the login name be? What will this login need access to? What will be done with it? Who will have access to the password? How complicated should the password be? You get the point.

Well, today I’m offering up a solution for one of those questions, one that may seem menial but is a pet peeve of mine: how to provide the password to an end user in a simple way without putting it in an email, typing it in an IM, or writing it on a sticky note (you wouldn’t do that, would you??? hmmmm). Yes, a phone call is a perfectly good way of controlling the situation, but I find people always want to copy and paste, and sometimes I’d rather not make myself instantly accessible to an end user; this is a nice buffer. Below is a process I’ve used for years to provide this information, and it’s treated me well.

Secure-Table Method
Once you have the login and password defined in your SQL Server and have granted the appropriate permissions:
1) Create a table (often I just do this in master, as every login should have public access to that DB) with two columns: one for the login name and one for the password
2) Insert the login name and the password into the table you created
3) Grant SELECT permissions ON the table you created TO the end user who needs to know (only you, the end user you granted permissions to, and any other System Admins should then have access to this table)
4) Once the user has confirmed they have the login name and password, simply drop the table
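The steps above can be sketched in T-SQL like this (the table, login, and password names here are all hypothetical placeholders):

```sql
USE [master]
GO
-- 1) & 2) create the holding table and insert the credentials
CREATE TABLE dbo.NewLoginInfo (LoginName SYSNAME, Pwd NVARCHAR(128));
INSERT dbo.NewLoginInfo (LoginName, Pwd)
VALUES (N'AppUser01', N'hypothetical-password');

-- 3) let only the requesting end user (plus sysadmins) read it
GRANT SELECT ON dbo.NewLoginInfo TO [DOMAIN\EndUser];

-- 4) once the user confirms they have the credentials, remove it
DROP TABLE dbo.NewLoginInfo;
```

The GRANT is what makes this work: only the grantee and members of the sysadmin role can ever SELECT from the table, and dropping it afterward leaves nothing behind.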

Additionally, you could create a database specifically for this purpose and add a layer of encryption, but that’s up to you.

I realize this may be a bit of overkill for some. But I like security, and I like sleeping at night. Putting a password into any insecure medium, accessible by who knows whom and for who knows how long, makes me question how long I should expect to keep my job.

Anyway, just a solution I thought I’d document. If it helps anyone, great. If not, only cost me about 30 minutes of my life 😉

SQL Agent Jobs owned by individual users – Quick Fix

Thursday, June 2nd, 2011

Today I’m sharing a “quickly whipped script” to change the owner of SQL Agent Jobs to SA (or another of your choosing), so if the individual job owner were to leave the company and their AD account dropped, the job wouldn’t just begin failing. 

Feel free to use/modify/redistribute for any reason.

SELECT 'EXEC msdb.dbo.sp_update_job ' + CHAR(13) +
'@job_name = ' + CHAR(39) + j.[name] + CHAR(39) + ',' + CHAR(13) + 
'@owner_login_name = ' + CHAR(39) + 'sa' + CHAR(39) + ', @description = ' + CHAR(39) + 
CASE WHEN j.[description] = 'No description available.'
	THEN 'Originally owned by ' + + '.' + CHAR(39)
	ELSE j.[description] + CHAR(13) + '.  Originally owned by ' + + '.' + CHAR(39)
END
FROM msdb.dbo.sysjobs j
INNER JOIN master.dbo.syslogins l
	ON j.owner_sid = l.sid
ORDER BY j.[name]

Open Transactions Stuck in the Log of a Database in Simple Recovery Mode

Thursday, November 4th, 2010

The situation:

I found I had a transaction log that had grown somewhat out of control on a SQL Server 2008 server and was pressing hard on what little disk space I had left on the system.  I needed to shrink that log file ASAP, before I had a critical outage. 

I’m not going to focus on the long drawn out discussion that could be had with regards to how the log got to this condition.  I only plan to document the condition of said database and how I corrected the problem.  So…..YMMV

The facts:

  • The database was in SIMPLE RECOVERY Mode
  • The log was not automatically truncating (marking the inactive VLFs to be re-used) 
  • DBCC OPENTRAN showed NO Active Transactions
  • DBCC LOGINFO showed 187 VLFs with STATUS = 2 and 2 VLFs with STATUS = 0
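For anyone checking a log in the same state, the diagnostics above look like this (substitute your own database name; the log_reuse_wait_desc column in sys.databases, available in 2005+, additionally reports why the engine thinks the log can't truncate):

```sql
USE Database_Name
GO
DBCC OPENTRAN;   -- reports the oldest active transaction, if any
DBCC LOGINFO;    -- one row per VLF; STATUS = 2 means the VLF is still active
-- why the log can't truncate, per the engine:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'Database_Name';
```

In my case OPENTRAN came back empty while LOGINFO still showed a wall of STATUS = 2 VLFs, which is exactly the contradiction described below.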

Essentially, I had active transactions stuck in the log with no open transactions to correlate to them.  Again, I’m not going to cover how that could have occurred here.  I simply needed the VLFs containing those transactions to become inactive so I could clear an unruly transaction log and get going again.

What I did:

Prior to SQL Server 2008, in an emergency situation like this, I would have leveraged the BACKUP LOG WITH TRUNCATE_ONLY command, but Microsoft removed that from 2008, and BOL recommends you simply change your database recovery model from FULL to SIMPLE, then back to FULL, and take a full backup to restart your backup/log chain.  Great, but what if my database was already in SIMPLE mode?  Well, I found the same advice helps in this situation. 

The Big Warning
Although there is little chance of corruption or damage to the database, performing the steps below on a production system should only be done in an emergency situation. What I’m documenting here is a modified attack from what is found in BOL (Books Online) for a SQL 2008 system. For 2000/2005, you have the option of using the BACKUP LOG WITH TRUNCATE_ONLY to essentially do the same thing, but again, proceed with extreme caution.

Executed the following (flipping the recovery model out of and back into SIMPLE, then forcing a checkpoint; substitute your own database name):

USE [master]
GO
ALTER DATABASE Database_Name SET RECOVERY FULL
GO
ALTER DATABASE Database_Name SET RECOVERY SIMPLE
GO
USE Database_Name
GO
CHECKPOINT
GO

This effectively cleared all the uncommitted/active transactions from the log, and I was then able to shrink the log back down to its original size.
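For completeness, the shrink itself can be sketched like this (the logical log file name and target size below are hypothetical; check sys.database_files for yours):

```sql
USE Database_Name
GO
-- identify the logical name and current size of the log file
SELECT name, type_desc, size / 128 AS size_mb
FROM sys.database_files;

-- shrink the log back down (target size in MB; logical name is a placeholder)
DBCC SHRINKFILE (N'Database_Name_log', 1024);
```

If the shrink doesn't reclaim space, re-check DBCC LOGINFO: the VLFs at the tail of the file must be inactive before SHRINKFILE can release them.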

Hope that helps someone out there.  But again, be sure you understand what you’re doing here.  And if you’re not sure, seek additional advice.


SSMS 2008 Bug Displaying SQL Server 2005 Database Objects

Wednesday, October 27th, 2010

Just another quick post to help anyone dealing with a rather annoying bug found in the 2008 SSMS. It turns out there is a permissions/join bug in the 2008 SSMS when users who have lesser privileges attempt to view objects with the SSMS Object Explorer. If you ever find a user coming to you and declaring, “I used to be able to see all the tables in database XYZ, but after we upgraded to the 2008 tools, I can’t anymore. I don’t have the problem with the 2008 SQL Servers, just the old 2005 SQL Server. What gives?”…check out this link:

Non-dbo schemas have been the root cause for all of my affected users so far. Seems they all want to see the metadata on stuff I’m trying to hide from them 😉

The only legitimate fix, to date, is to grant the user VIEW DEFINITION permissions at the database level (I’ve tried granting the permission on individual schemas and objects to no avail :( ).  If it’s a developer using 2008 SSMS on a 2005 database server, maybe VIEW ANY DEFINITION is the solution (they should probably have sufficient privileges already on a dev server). If it’s an end user on a production machine, however, VIEW DEFINITION on the particular database is more appropriate. Either way, a role with these permissions would be the best practice and easiest to manage.
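The role-based grant I'm describing looks something like this in T-SQL (the role and user names are made up; sp_addrolemember is the 2005-era way to add a member):

```sql
USE [XYZ]
GO
-- a role makes the permission easy to manage across multiple users
CREATE ROLE [db_ViewDefinition];
GRANT VIEW DEFINITION TO [db_ViewDefinition];
EXEC sp_addrolemember @rolename = 'db_ViewDefinition', @membername = 'DOMAIN\SomeUser';
```

For the developer case, the server-level equivalent would be GRANT VIEW ANY DEFINITION TO [DOMAIN\SomeDev] run in master instead.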


Remote SQL Server Configuration Manager Access

Tuesday, October 26th, 2010

Today, while updating some CMS (Centralized Management Server) registrations, I stumbled upon something rather useful.  The finding was that you can remotely access the SQL Server Configuration Manager for any 2005+ SQL Server.  The CMS doesn’t support management of prior versions, but it does support many other functions for those previous versions…see the link above for more information.  But getting back to today’s topic, the Configuration Manager, follow the steps below to avoid having to RDP or physically log onto your SQL Servers in the field just to manage and configure the underlying services:

Remotely Accessing the SQL Server Configuration Manager

1)  Make sure you have a CMS configured (again, Centralized Management Server).

2)  Using SSMS, connect to your CMS via the Registered Servers view and navigate to a server instance you would like to manage remotely via the Configuration Manager console.

3)  Right-click on the server instance and choose SQL Server Configuration Manager.

That’s all there is to it.  Now you can modify configurations for, and manage the state of, any of the SQL Server services on the remote servers without ever leaving the comfort of your own desktop :)

Keep in mind that you can only access this feature through the Registered Servers view of SQL Server Management Studio, not the Object Explorer view.  If you find or already know of a better way, feel free to post a comment and I’ll be sure to update this post.


I didn’t find a better way, but I did find that there appears to be a bug in the 2008 CMS GUI within SSMS, at least through SP1 (build 2531). When you have SQL Server 2005 instances registered and attempt to use the above technique to manage the services on that remote server, you’ll get the following error message:

Cannot connect to WMI provider. You do not have permission or the server is unreachable. Note that you can only manage SQL Server 2005 and later servers with SQL Server Configuration Manager.
Invalid namespace [0x8004100e]

I found this KB Article, but it’s not particularly useful:

If you find a solution, please post a comment and let me know.  Based on the error message, this shouldn’t be occurring when attempting to manage 2005 instances, but maybe there’s just some sort of incompatibility built into the 2008 SSMS, just as when attempting to manage 2005 SSRS report objects with 2008 BIDS.


“Indexes? We don’t need no stinking indexes!”

Tuesday, September 28th, 2010

Yeah, right… Alright, now that I’ve got your attention (or at least I’d better have, if you’re a DBA worth your weight in any known unit of measure), below is a quick query for finding, AND generating a script to create, missing indexes on SQL Server 2005+. Now, this is just my version of a way to do it, and you may have opinions, which you’re welcome to share if they’ll improve the query in some meaningful way for me and everyone else, but I’ll throw out the normal disclaimer: You MUST review the recommended indexes and determine for yourself whether they are appropriate for your database/workload. They may make sense, they may not, so TEST, TEST, TEST!

Have fun and good luck optimizing!

UPDATED! Now includes better formatting in the output to more closely mimic the missing index feature recommendation in SSMS execution plans.

SELECT + '.' + AS [Table Name]
	, (migs.avg_total_user_cost * migs.avg_user_impact) * (migs.user_seeks + migs.user_scans) AS [Relative Impact]
	, 'CREATE NONCLUSTERED INDEX [IX_' + + 
		CASE WHEN mid.equality_columns IS NULL THEN ''
			ELSE '_' + REPLACE(REPLACE(REPLACE(REPLACE(ISNULL(mid.equality_columns, ''), ',', '_'), '[', ''), ']', ''), ' ', '') 
		END + 
		CASE WHEN mid.inequality_columns IS NULL THEN ''
			ELSE '_' + REPLACE(REPLACE(REPLACE(REPLACE(ISNULL(mid.inequality_columns, ''), ',', '_'), '[', ''), ']', ''), ' ', '') 
		END + 
		'] ON [' + + '].[' + COLLATE DATABASE_DEFAULT + '] (' + 
		CASE WHEN mid.equality_columns IS NULL THEN '' 
			ELSE mid.equality_columns 
		END + 
		CASE WHEN (mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL) THEN ', '
			ELSE ''
		END + 
		CASE WHEN mid.inequality_columns IS NULL THEN ') '
			ELSE mid.inequality_columns + ') ' 
		END + 
		CASE WHEN mid.included_columns IS NULL THEN ''  
			ELSE 'INCLUDE (' + mid.included_columns + ')' 
		END + ';' 
		AS [Create Index Statement]
	, mid.equality_columns AS [Equality Columns]
	, mid.inequality_columns AS [Inequality Columns]
	, mid.included_columns AS [Included Columns]
FROM sys.dm_db_missing_index_group_stats AS migs 
	INNER JOIN sys.dm_db_missing_index_groups AS mig ON migs.group_handle = mig.index_group_handle 
	INNER JOIN sys.dm_db_missing_index_details AS mid ON mig.index_handle = mid.index_handle AND mid.database_id = DB_ID() 
	INNER JOIN sys.objects Obj ON mid.OBJECT_ID = Obj.OBJECT_ID 
	INNER JOIN sys.schemas Sch ON Obj.schema_id = Sch.schema_id
WHERE migs.group_handle IN 
		(SELECT TOP (500) group_handle 
		 FROM sys.dm_db_missing_index_group_stats WITH (NOLOCK) 
		 ORDER BY (avg_total_user_cost * avg_user_impact) * (user_seeks + user_scans) DESC)
	AND OBJECTPROPERTY(Obj.OBJECT_ID, 'isusertable') = 1 

Another SSWUG mention found…

Wednesday, July 14th, 2010

Seems I was quoted in another SSWUG email daily newsletter a while back. The topic of conversation for quite a few days was index fragmentation and how to handle it.

The link above may not work well for everyone as I can’t find a way to get a bookmark to take you directly to the quotation, but below is the original topic and my response quoted in the newsletter on 07/07/2010.

The Question:

Index Defragmentation – How Often Do You See the Need?
Once again, on the list servers here on SSWUG there is a solid discussion about index defragmentation and an application that sets up maintenance plans.

First, a question – please take just a minute and let me know – how often (on average, yes I realize it is variable) do you find yourself needing to defragment your indexes on your production databases?

My response:

…and here’s Patrick’s experience – “We have a variety of systems that we support (120 servers/150 instances (2000/2005/2008)) and my experience shows that for many systems we are able to simply ignore the fragmentation levels because the performance of the system is not negatively impacted enough by the fragmentation to become a squeaky wheel. That said, however, some of our largest systems (0.5TB to 0.8TB in size with tables approaching 120GB) require outages and defragmentation every 6 months to keep performance at an acceptable level. We have a custom-built SProc that we run on a weekly basis on the majority of the systems that sends us an email-based report on the fragmentation levels of the indexes and also has internal logic to identify what actions should be taken based on best practices (defrag/rebuild/drop & recreate). This rarely gives us any grief and I feel we’ve effectively addressed fragmentation from a performance standpoint enterprise-wide.

Yes, you will likely find some heavily fragmented tables/indexes here and there, but by and large we have very few performance complaints that can be tracked down specifically to fragmentation (I feel due to the process we have and use). 9 times out of 10 it’s an I/O bottleneck or someone running a report where it shouldn’t be run.”
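The weekly report SProc mentioned above isn't published here, but on SQL 2005+ the heart of any such check is sys.dm_db_index_physical_stats. A minimal sketch (the 30% rebuild threshold follows the commonly cited best-practice guidance):

```sql
-- Indexes in the current database above the common 30% rebuild threshold
SELECT OBJECT_NAME(ips.object_id) AS [Table],
	i.name AS [Index],
	ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
	INNER JOIN sys.indexes AS i
		ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

The 'LIMITED' scan mode keeps the check cheap enough to run during production hours; switch to 'SAMPLED' or 'DETAILED' if you need page-density numbers.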

I got SSWUG Published ;)

Wednesday, July 14th, 2010

A recent request from the SSWUG email newsletter asked how we as DBAs approach and handle application/service consolidation on our SQL Servers. I have a pretty strong feeling and approach on this topic and emailed my experience to SSWUG…..and to my surprise (for the 2nd time, as I just heard from another DBA friend of mine), I was “published” in the daily SSWUG newsletter. Even more to my surprise, my email reply was included in its entirety. Thanks SSWUG!!

And just in case the link doesn’t work someday, here’s the text of the originally posed topic and my response.

Original SSWUG Topic Question:

Who Shares Your Database Server?

Many small businesses and often times large businesses find it difficult to have servers deployed for a single purpose. They may be constrained by hardware and software costs, or not have the necessary skills to configure servers according to best practices.

For example, Microsoft encourages the use of integrated security for Sql Server. Best practices teach us that your Domain Controller and Sql Server should reside on different instances of Windows Server. Already we have two servers.

IIS running on Windows, or even Apache, are often best when run on a different server. This is especially true if the web server faces the Internet. In that case, you will most likely have a firewall between your web server and all your internal hardware, and therefore can’t share windows with Sql Server. Even web servers that are for internal purposes only should not share resources with your database.

Other products also compete for resources that are often used in conjunction with Sql Server. SSIS, SSAS, and SSRS all may require significant memory, disk or CPU resources. Many businesses also have Mail servers, application servers, etc.

Let’s hear from you. You live in the real world and have to balance resources with business needs on a daily basis. Why not share things like what you are currently doing, what you like about it, what you would like to change, and the factors that caused you to choose your current configuration. Don’t worry if your system doesn’t follow best practices. The goal here is to share with each other the issues we all face. Send your experience to

My published response

Patrick Says…

We have a decent number of dedicated SQL Server instances (hundreds) to support all of the different business applications utilized in our environment. Server sprawl and SQL Server “Silos”, as they’ve been labeled by the Systems Admins, have become more prevalent over the last few years, and to accommodate, most of our web/application/support service servers are run in VMs. We also run our SSRS and SSAS installations on dedicated servers because of the resources they require to do what they’re intended to do without causing performance grief.

I’ve always been a firm believer that you must hold your ground on providing the most appropriate environment for your SQL Server databases to perform within if you have any chance of meeting rigid SLAs (written or not, expressed or implied), and rarely, if ever, allow any other applications that consume even remotely moderate amounts of memory to be run on my SQL Servers. I’ve seen too many departmental SQL Servers struggle to breathe with IIS\Exchange on them and just don’t allow it unless there’s no other way, and I document my stance early on.

SQL Server requires certain things: CPU, memory, storage. I rarely run into CPU issues with today’s (and even yesterday’s) processors. I rarely spec a server with less than 4GB of RAM and usually much more, but there are a ton of configuration options/needs there. The I/O subsystem usually is where I have to compromise most, but that’s a whole different conversation. But right now we’re talking about role/application consolidation, and to me it comes down to how much RAM and CPU is available for SQL Server with at least 30% overhead (with a little flexibility). If there’s anything left in a departmental/dev/test server, we’ll talk about it. If it’s production, fuggeddaboutit!


Check “Long Running Operations” Stored Procedure

Monday, June 7th, 2010

I threw together a SProc for returning useful information about long-running operations on SQL Server 2005+ and thought I’d share it. The procedure uses the dynamic management objects dm_exec_requests, dm_exec_connections, dm_exec_sessions, and dm_exec_sql_text to tell you how much longer you may have to wait on processes that SQL Server categorizes as “long running”. Typically I use this SP to find out how much longer a backup or restore is going to take, and it’s usually within a few minutes of being correct. That’s worlds better than starting a restore on a SQL 2000 box and just telling everyone it will be done when it’s done, with little way to tell other than experience. 😉
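A minimal sketch of the core query (the actual SProc adds formatting and joins to the session/connection DMVs):

```sql
-- Sketch: percent_complete and estimated_completion_time (in ms) are
-- populated for operations like BACKUP, RESTORE, and DBCC commands
SELECT r.session_id,
	r.command,
	r.percent_complete,
	r.estimated_completion_time / 60000.0 AS [Est. Minutes Remaining],
	t.text AS [Statement Text]
FROM sys.dm_exec_requests AS r
	CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.percent_complete > 0;
```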

Let me know what you think…


SQL Server Version/Build Lookup site

Thursday, June 3rd, 2010

Just posting a fantastic resource link today. The site keeps a nearly always up-to-date listing of SQL Server versions/builds, complete with links to KB articles and the MS product pages. This one is my personal favorite of all the SQL Server build lookup pages online.

SQL Server Version/Build Lookup site

From the site authors:

This is a database of SQL Server versions for those of us who want to know what possible vulnerabilities may exist in unpatched SQL Server systems.  This makes it easier for those of us tasked with securing those environments to prepare the proper documentation outlining the threat.  Special thanks to Ken Klaft for helping maintain this area of the site.  With the seemingly endless stream of PSS-only releases out there this gets to be really time-consuming.


SQL Server 2008 & the SQL Server Surface Area Configuration Tool

Thursday, May 6th, 2010

SQL Server 2008 does away with the dedicated Surface Area Configuration Tool included with SQL Server 2005 in favor of a new facet within SSMS (SQL Server Management Studio) that exposes the properties needed to change options not available through the 2008 SQL Server Configuration Manager.

The new facet is called Surface Area Configuration and is accessed via SSMS by right-clicking on the instance name and choosing Facets.  You then select Surface Area Configuration from the Facet drop-down box and there you go: access to feature options that you might have thought you lost after upgrading to 2008.  Lo and behold, Microsoft still has you covered!  Interestingly enough, this works as a single replacement for the 2005 Surface Area Configuration Tool, as it works with SQL Server 2005 instances as well (if you’re using SSMS 2008).

Make sure you dig around a little in the View Facets form as there are some more interesting nuggets hiding in there for you to discover….
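The options surfaced by the facet are regular configuration options under the hood, so they can also be changed in T-SQL via sp_configure (assuming sysadmin rights). For example, enabling Database Mail:

```sql
-- Surface-area features are ordinary (advanced) configuration options
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
```

Handy when you need to script the same change across many instances instead of clicking through the facet on each one.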


Locate Startup Stored Procedures

Wednesday, April 28th, 2010

A quick check for stored procedures set to run at startup.  I use this quite often (in modified form), leveraging SQL Server CMS (Centralized Management Server), to locate SQL Servers in the domain running the SQL Blackbox trace.

Locate Startup SProcs:

Exec master..sp_MSforeachdb
'Select ''?'' [Database Name],
     name as [Startup Proc Name]
From [?]..sysobjects
Where type = ''p''
     and OBJECTPROPERTY(id, ''ExecIsStartup'') = 1;'

Tested to run on SQL 2000/2005/2008
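For the reverse direction, sp_procoption flags (or unflags) a procedure for startup execution; the procedure must live in master, and the proc name below is hypothetical:

```sql
-- Mark a master-database proc to run each time the instance starts
EXEC sp_procoption @ProcName = N'usp_MyStartupProc',
	@OptionName = 'startup',
	@OptionValue = 'on';
```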

Changing SQL Server Agent Jobs Owners

Tuesday, April 27th, 2010

Wrote a quick fix-it today for changing the owner of a SQL Agent Job from one login to another…..

Ever get annoyed by SQL Server Agent Jobs, whether created via a Maintenance Plan or any other method, defaulting the owner to the creator of the job?  I know I do.  The script below generates the T-SQL change statements needed to modify the jobs, without having to open and modify each job manually.  Big time saver!!!

Script to change job owner:

Select 'EXEC msdb.dbo.sp_update_job 
     @job_name = N''' + replace(name, '''', '''''' ) + ''', 
     @owner_login_name = N''new_owner'''
From msdb..sysjobs
Where owner_sid = 
     (Select sid From master..syslogins 
      Where name like '%old_owner%')
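Substitute old_owner/new_owner with real logins before running. Each row of output is a ready-to-run statement like the following (job and login names here are hypothetical):

```sql
-- Example of one generated statement
EXEC msdb.dbo.sp_update_job 
     @job_name = N'DatabaseBackup - FULL', 
     @owner_login_name = N'sa';
```

Paste the generated rows into a new query window, review them, and execute.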