SQL Saturday #39 New York

March 23, 2010 at 12:21 pm (PASS)

New York, New York, it’s a hell of a town. The Bronx is up and the Battery’s down. The people ride in a hole in the ground…

Anyway, my abstract for SQL Saturday #39 in New York was accepted a while ago, but I just got all my reservations set, so I’m going for sure. This should be a pretty exciting event based on the people and the schedule. Hope to see you there.



Undocumented Virtual Column: %%lockres%%

March 18, 2010 at 8:43 am (SQL Server 2005, SQL Server 2008, TSQL)

One of my development teams needed a mechanism for identifying the value of a key that was part of a lock (don’t ask). I’d never tried doing that before. Obviously if you hit the DMV sys.dm_tran_locks you can see the hash of the key in the resource_description column. But how to pull the value back? After some research, I first found this excellent article by the late, great Ken Henderson (I really wish he were still around). The article outlined, among other things, the use of an undocumented “virtual” column called %%lockres%%. Some more searching then uncovered this great article by James Rowland-Jones, AKA Claypole. He described how, in a very high-volume system, he used %%lockres%% to identify the source of a deadlock as the internal mechanism that SQL Server uses to manage locks, the hash of the key. Oh, and he opened an incident on Connect, which seems to be closed, but vote on it anyway, I did. %%lockres%% is also covered in Kalen Delaney’s excellent book on SQL Server 2008 Internals and even warrants a bit of discussion in Professional SQL Server 2008, but that was written by James Rowland-Jones, so I’m not sure it counts.

In the meantime, while I was investigating this stuff, evidently the development team was looking into it on their own. They came to the same set of resources and decided to use the virtual column as part of their real-time, transactional application. Yeah, an undocumented “virtual” column going into a major application. Since I would probably be unable to do anything about this, I decided to at least look into how this thing behaves so I can be aware of what types of problems I might run into.

First, a simple query:

SELECT a.City
--,%%lockres%%
FROM Person.Address AS a
WHERE a.AddressID = 432;

If you run this query and take a look at the execution plan, you’ll see a nice clean clustered index seek, just as you would suspect. If you take away the comment and run it again, the execution plan is identical. On the version of AdventureWorks2008 currently installed on my machine, I get two page reads, regardless of whether or not I include %%lockres%%. With the comment removed, it returns the hash of the primary key: (b0004e040bc2). This looks pretty painless, free even.
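If you want to verify those page reads for yourself, STATISTICS IO will report them; a minimal sketch, assuming the AdventureWorks2008 sample database:

SET STATISTICS IO ON;

SELECT a.City
,%%lockres%%
FROM Person.Address AS a
WHERE a.AddressID = 432;

SET STATISTICS IO OFF;

The logical reads show up on the Messages tab in Management Studio.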

If we want to see %%lockres%% in action, it’s not too difficult:

BEGIN TRAN
UPDATE Person.Address
SET City = 'dude'
WHERE AddressID = 432;
--ROLLBACK TRAN

Obviously this will put a key lock on that row in the table. If I just select against sys.dm_tran_locks, the data returned looks like this:

resource_type   resource_description   resource_associated_entity_id   request_mode
KEY             (b0004e040bc2)         72057594043564032               X
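For reference, the columns above come straight from the locks DMV; a minimal query along these lines, filtered to key locks only, returns them:

SELECT dtl.resource_type
,dtl.resource_description
,dtl.resource_associated_entity_id
,dtl.request_mode
FROM sys.dm_tran_locks AS dtl
WHERE dtl.resource_type = 'KEY';

Run it from a second connection while the transaction above is still open.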

The original request from the development team was for a way to get the key value back when you know that a table is locked, such as the case here. I wrote this simple query to make that happen:

SELECT a.AddressID
FROM Person.Address AS a WITH (NOLOCK)
JOIN sys.dm_tran_locks AS dtl
ON a.%%lockres%% = dtl.resource_description
WHERE dtl.resource_type = 'KEY';

This query works and returns our key value of 432 just as you would want. But, take a look at the execution plan:

Yes, that’s a clustered index (or table, same thing) scan followed by a Sort followed by a Merge Join, processing 19,614 rows to return one. But hey, it was only 341 reads. To say the least, I’m not excited about seeing this in a production system. This was explicitly cautioned against in Kalen Delaney’s book. While it appears that the Remote Scan operator, which is how the DMV is accessed in this case, is 59% of the operation, that’s the estimated cost and, as has been pointed out before, isn’t the best measure of real cost in the system.

The development team went off and developed their own query. They had said they were looking for the key value, but evidently they were looking for who was holding the lock on a particular key value:

SELECT s.nt_user_name
FROM sys.dm_tran_locks AS l
JOIN sys.dm_exec_sessions AS s
ON l.request_session_id = s.session_id
JOIN sys.partitions AS p
ON l.resource_associated_entity_id = p.hobt_id
WHERE OBJECT_NAME(p.object_id) = 'Address'
AND l.resource_description IN (SELECT %%lockres%%
    FROM Person.Address AS a WITH (NOLOCK)
    WHERE a.AddressID = 432);

I actually had to adjust their query just a bit to get it to work correctly, but basically they had the right idea. Here’s the final execution plan:

This was still not a terribly inspiring thing to think about running in a production system, although it only required one scan and seven reads. Whether or not putting this in a transactional system is a good idea, it certainly adds yet another tool, albeit an undocumented one, to the tool belt.


SNESSUG March Meeting

March 15, 2010 at 6:17 pm (SNESSUG, SQL Server 2005, SQL Server 2008)

Aaron Bertrand showed up to teach us tips and tricks for SQL Server Management Studio. We had to move our meeting night because of a conflict at our wonderful host, New England Tech, but we still had 12 people show up. For SNESSUG, that was a good turnout. I gave away some swag that I had received from Microsoft and some stuff that we had purchased. Bribery works (at least that’s my theory, so feel free to bribe me, whenever).

Aaron’s presentation was great. He showed nothing but meat. There was no fluff. He walked through a series of tips & tricks in SSMS and explained why you want to use them. First revelation: -nosplash has no effect whatsoever on load time. He called it a placebo. It just kept going from there. Aaron’s stated goal was to make everyone in the audience say “wow” or “cool” at some point during the presentation. I’m pretty sure he succeeded. The first one that got a lot of people was when he demonstrated setting the connection color so you can track different connections visually on your screen. My personal favorite was the Registered Servers import list, so you can maintain a common list, move copies around, share registered server lists within your team… I love learning stuff at a good presentation.

Oh yeah, and everyone said “wow” or “cool” at least once.


Location of the PASS Summit Follow-up

March 11, 2010 at 1:33 pm (PASS)

I got a little distracted after lunch and was reading through some of the various bloggers’ reactions to the decision to keep the Summit in Seattle for the foreseeable future. I enjoyed Brent Ozar’s take on the situation, but the thing that struck me square in the eyes and inspired me to add one more post of my own was a comment on Brent’s post by Aaron. Scroll down and read it. Here’s the part that really made an impact:

This whole situation is making me less interested in supporting the organization. As a relative newcomer to PASS and having never attended a Summit, I’m turned off by the “come talk to Microsoft employees” stance. I’d rather them say come and talk to (or sing Karaoke with?) cool guys like Brent Ozar and others in the community who deal with real world issues day to day.

The arguments for keeping the Summit in Seattle largely boiled down to cost & access to Microsoft. Cost has been beat about the face & neck by me & others, and I can’t add anything else even marginally intelligent to the conversation, so I’ll shut up on that. Microsoft. Yeah, having access to Microsoft makes the Summit pretty cool (not to mention useful; a couple of my favorite sessions were from Microsoft presenters). PASS stands for the Professional Association of SQL Server users. The Summit is produced by PASS in order to meet its own goals, which are “dedicated to supporting, educating, and promoting the Microsoft SQL Server community.” (Yeah, silly me, I go and read the organization’s web site.)

So here’s my comment. Is the community PASS is trying to support, educate and promote better represented by a bunch of Microsoft developers, or, to quote Aaron, “Brent Ozar and others in the community who deal with real world issues day to day?”

And, lest I take an inappropriate beating, I’m neither knocking Microsoft developers, nor saying that they’re not a part of the community. My job and, to a small degree, my life, wouldn’t be the same without those people. I just want to make sure any rocks tossed my way are thrown for the right reasons.


Location of the PASS Summit

March 10, 2010 at 12:12 pm (PASS)

There has been some discussion recently around the location of the PASS Summit. The debate was centered on the results from a recent survey hosted by PASS. Today’s Community Connector has an editorial by the PASS President, Rushabh Mehta, explaining why those of us on the East Coast will be flying to the other side of the continent for the next two years, and supplying the results of the survey.

I get why they’re doing this. Microsoft really will commit more resources to an event that is in their back yard. I get it. I also understand that those of us who consider the PASS Summit a big part of our “community” are actually in the minority. Most people attending the Summit aren’t involved in the community, aren’t interested in networking, and go to the conference to learn something and then go back to their hotel room. They want to see Microsoft developers, not community members. I understand. I also know that the excellent support team provided through the management company is also headquartered out there. We might see fewer of them at the conference, and those few will cost more to fly in. I get it.

But…

With the economy shrinking, and no end in sight, budgets are getting tightened. Travel expenses are being examined closely where I work, and justification for a trip is more difficult than it was previously. Cutting a few corners here & there, including reducing the cost of a plane flight, might make a difference. For example, doing a quick search on Travelocity, no details, accepting defaults, meeting half-way in Dallas would cost $216 instead of $399. That’s almost $200 in savings. Even if Rushabh is right and we’d have to increase the cost, let’s say $150/attendee, that’s still offset by the flight.

Still, those are savings at the margins; would that offset it enough to prevent people from travelling? Maybe, some people. But there’s also the flight itself to consider. Not everyone is Gail Shaw, prepared to cross continents, oceans, raging rivers, burning deserts, and French strike lines to get to the Summit. For some people, that hike out to Seattle, ignore the cost, Microsoft, the community, is too much. Would moving it to Atlanta or Dallas or wherever guarantee a larger percentage of attendees? Nope, probably not, but I’ll bet you you’d see a different set of attendees, and I’ll bet you the attendance wouldn’t drop. Because remember, it’s not just the flying time or the travel costs or the time away from work (yay). It’s time away from the family. Based on the results of the survey, 800 (51%) of the 1500-plus think a short flight is very or somewhat important, whereas only about 380 (25%) thought it wasn’t. The vast majority of people responding to the survey are in the Eastern & Central time zones (585 & 458, compared to 331 in the Western zone). I could actually be wrong about the number of attendees.

And let’s just mention, Microsoft is holding some sort of get-together in June. It seems to be fairly well attended by Microsoft people and, oh, look at that, it’s in New Orleans, not Seattle. I guess it is possible to get some Microsoft involvement in other places if Microsoft wants to.

I’m not on the board, so it’s easy for me to snipe from the sidelines. But based on the noise level, and the fact that 588 people thought having a conference on the East Coast would make it more likely that they would attend, with only 405 making it less likely, and the fact that that number goes up to 639 more likely if the Summit were in the center of the country, I’m not alone in thinking that the PASS Community Summit should move around a bit more than we’ve been doing lately.

On another note, the release of the survey results was… poorly handled. The board, probably for good reasons, tends to play things very close to the vest. I think, at least in this case, too close. I appreciate the need to keep valuable information away from the competition. However, since this is a community organization, and one that is largely run by volunteers, I think the board really ought to err on the side of too much communication instead of too little.

Finally, assuming anyone has made it this far, I want to thank the board and Rushabh for releasing this information. I think explaining how they made their decision and providing the basis for that in the results is absolutely the right thing to do. Did it apparently, or even evidently, require poking from people outside the board? Maybe, but they still did it and deserve the credit for taking the right action.

That’s it. End of the pointless, wandering diatribe. Go about your lives, citizens. Hopefully, I’ll get another session or two accepted this year and I’ll see you all in Seattle (again).


Blog Anniversary

March 9, 2010 at 8:26 am (Misc)

Two years old. In March of 2008 I received a whopping 96 visits. I’m up to 1900 so far this month. I’d call that a positive growth trend. Thanks for stopping by, especially if you’ve been here more than once.


MacGyver?

March 8, 2010 at 8:00 am (Misc)

Who the heck is MacGyver? Television program, you say? Hang on. I need to visit imdb.com.

Oh, the late eighties… Yeah, I wasn’t watching TV in the late eighties. Actually I didn’t own a TV in the late eighties. But reading a few of the plot lines (you guys watched this?) I get the idea.

Nuts. I don’t think I’m MacGyver. Can’t we just buy something to fix the problem?

Honestly, the only thing that comes to mind was the time when I needed to get alerts when jobs failed, but I couldn’t install DBMail on the server because our admin types didn’t want mail clients on our machines. What to do? Use event forwarding. Instead of setting up the mail client on a server, I got it installed on a different machine, then forwarded events to it and let it take care of sending emails. Yeah, I know, almost as exciting as a MacGyver episode (based on reading a couple of synopses).

So, I don’t have a good MacGyver story. But what I do have is a ready set of skills, just in case I need to be MacGyver. What skills, you may ask? Let me ask you: do you run backups from TSQL or from Management Studio? I don’t mean every time, but most of the time. For that matter, how much do you run CREATE or ALTER or DROP through the TSQL window, and how much do you know the right-click pop-up menu of most of the database objects by heart? If you’re only using the SSMS window to administer your databases, then your MacGyver skills are going to be missing when the time comes to put them to work.

I came into the database world through programming. My skills are rusty, almost to the point of immobility, but I can actually write functional (mostly) C# code and I’m working on my PowerShell skills. Is the only language you know T-SQL? Best get started on, at least, PowerShell so when your MacGyver moment comes along, you’ll be ready.

Slightly harder to measure, but are you open to new ideas, especially if they seem a bit crazy or weird or MacGyverish? Assuming we’re not putting our production data at risk, I’ll try any scheme or methodology you want to take a swing at. I’ve done Agile, SCRUM, Feature-Driven Development, Test-Driven Development, all on the database. I’m willing to try things. I call it being a bit of a cowboy, but you could call it being a little bit MacGyver (and I will in this case, just to keep with the theme). In addition to all this, I read, a lot, and study, a lot, to try to learn new things and keep my brain facile (as much as possible). Do you have twenty years of experience, or one year of experience repeated twenty times?
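To make the script-first habit concrete, here’s a minimal backup sketch (the database name and backup path are hypothetical, adjust for your own system):

BACKUP DATABASE MyDatabase
TO DISK = 'C:\Backups\MyDatabase.bak'
WITH INIT, CHECKSUM;

Knowing the syntax cold means you can run, schedule, and script backups without ever touching the right-click menu.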

There’s a self-help saying, control the mind and the body will follow (in Kenpo, we say control the head and the body will follow, same thing, just outwardly directed). You can’t be MacGyver if you don’t have that MacGyver mind-set and the skills it takes to use it. So, while I’m waiting for my great MacGyver story, I’ll keep practicing my skills.

Tagging… I’m going with the one person that I suspect may have never seen the show before, Gail Shaw.


Powershell for Batch Operations

March 1, 2010 at 7:30 am (PowerShell)

I’m right in the middle of moving one of our databases from Oracle to SQL Server (and I just love saying that, over & over). Evidently the most common practice in Oracle is to leave all the tables as heaps. That’s a bit of a problematic approach for SQL Server. We moved the structure from Oracle to SQL Server with almost no changes, and now we’re cleaning up the mess, fixing the structure, iterating the design. I needed to drop & recreate 250-plus indexes.

Remember, I’m lazy. A few minutes bouncing around the web and looking at the wonderful book “SQL Server 2008 Administration with Windows Powershell”, and I put together a simple little script that generates a script for dropping and recreating all the primary keys on the tables and, as a bonus, changes them to clustered (yes, I know, not all primary keys have to be clustered and there are possibly other good candidate keys… to misquote someone, compromises were made). Here’s the script:

param
([string] $Server, [string] $Database, [string] $filepath)

# Connect to the server
[reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$srv = New-Object "Microsoft.SqlServer.Management.Smo.Server" "$Server"
$db = $srv.Databases["$Database"]

# Define the output
$filepath = $filepath + "test.sql"
$scrcontent = "use [$Database]" + "`r`n" + "GO" + "`r`n"

# Options for scripting the drops; the default options script the creates
$scriptingOptions = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$scriptingOptions.ScriptDrops = $true

foreach ($Table in $db.Tables)
{
    foreach ($Index in $Table.Indexes)
    {
        if ($Index.IndexKeyType -eq "DriPrimaryKey")
        {
            # Script the drop, then the recreate
            $scrcontent = $scrcontent + $Index.Script($scriptingOptions) + "`r`n" + "GO" + "`r`n"
            $scrcontent = $scrcontent + $Index.Script() + "`r`n" + "GO" + "`r`n"
        }
    }
}

# Write the generated script out to disk
$scrcontent | Out-File $filepath

The only issue I ran into was that the check to see whether the index was a primary key didn’t work when I tried passing the property type, so I had to use a string. No big deal, but it’s pretty cool how what might have been a hard task was reduced to nothing. I’m digging PowerShell.
