PASS Summit, Kilt Wednesday

April 27, 2010 at 11:43 am (PASS)

Last year at the PASS Summit we held a silly little event called Kilt Wednesday. Only three people took part, but it was very popular nonetheless. Here’s a sample of what it looked like. This year is looking to be a lot bigger. Keep an eye on Twitter for updates under the hashtag #sqlkilt.

If you’re going to the 2010 Summit, bring your kilt for Wednesday. Ladies, you’re invited too.

This is an unofficial event and has nothing to do with the PASS organization. We’re just having a little fun. Remember, Seattle is home to Utilikilts, so you can pick up a kilt while you’re there.



PASS Summit Content Survey Results

April 22, 2010 at 2:41 pm (PASS, SQLServerPedia Syndication)

The results of a survey conducted by the PASS organization have been posted (thanks to the Board for all their work, again). Since getting to speak at PASS is a competition, I really shouldn’t be pointing this out, because I’d like to speak again. However, if you’re trying to decide whether a detailed discussion of Windows Server 2008 Collation would be more interesting to attendees than a session on Filtered Indexes (it wouldn’t), go check the survey. It should help you make better choices about what the attendees want to see. Of course, if everyone runs off and submits sessions on the same four or five topics, that’s going to open things up for other subjects. Regardless, the survey is a service to the attendees, because it should lead to more interesting sessions, and a service to potential speakers, because it shows us where to focus our efforts.

But you know what, I really don’t know what I’m talking about, so why don’t you just ignore me and get back to working on your slide deck on Collation. Lots of people want to see it, really.


SQL Server Standard: Volume 7, Issue 3

April 21, 2010 at 2:07 pm (sql server standard)

FINALLY!

It’s not like Don Gabor had the article done in January or anything…oh wait. He did have the article done in January. However, it looks like we might be breaking the log jam and we’ll be publishing a number of SQL Server Standard issues.

Anyway, do you want to learn how to talk techie to non-techies? You do? That’s excellent, because I’ve got a fantastic article by Don Gabor (blog), just for you. Please go and read it.


SQL Saturday #39, New York, New York

April 20, 2010 at 11:18 am (PASS)

A town so big they named it twice.

If you’re not excited about SQL Saturday in NYC this weekend… why not? Take a look at the schedule. There are some excellent speakers presenting there. This is going to be a great opportunity to learn a lot of stuff, network with your peers, and possibly pick up a bit of free swag. What’s not to like? See old friends, meet new friends, learn stuff and all for free. I’m presenting a session called MUQT (pronounced “MUCK” the T is silent because we shouldn’t be doing this).


Powershell Script for Verifying Space

April 19, 2010 at 8:00 am (PowerShell, SQLServerPedia Syndication)

First let me say, I know my Powershell skills are sub-par. I’m working on it. Slowly but surely. That said, I had a problem to solve. It’s one that I could have done with TSQL, but it would be very ugly TSQL, probably involving dynamic queries, and even for admin scripts, I try to avoid that. So, I went for SMO and WMI wrapped by Powershell to solve the problem.

What was the problem you ask? We automate as many of our processes as we can. One process we do is resetting databases from production or other sources. Our processes work very well, but we occasionally run into a problem where the source system db has grown and the target system doesn’t have the required disk space. So, we needed a method for validating it. This is my first pass at that method. I know it needs work, but it’s functional, so I thought I’d share.

param($sourceserver,$targetserver,$databasename)
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

$source = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "$sourceserver"
$target = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "$targetserver"

$sourcedb = $source.Databases[$databasename]
$targetdb = $target.Databases[$databasename]

## strip any instance name so WMI gets just the machine name
$server = if ($targetserver.Contains("\")) {$targetserver.Substring(0, $targetserver.IndexOf("\"))} else {$targetserver}

$sourcelogfiles = $sourcedb.LogFiles
$targetlogfiles = $targetdb.LogFiles

## walk through all the log files
foreach ($slf in $sourcelogfiles)
{
    $tlf = $targetlogfiles[$slf.Name]
    ## see if the target is smaller than the source
    if ($slf.Size -gt $tlf.Size)
    {
        ## if the target is smaller, check the drive for free space
        ## note: SMO reports Size in KB, WMI reports FreeSpace in bytes
        $drive = Split-Path $tlf.FileName -Qualifier
        $driveinfo = Get-WmiObject win32_logicaldisk -ComputerName $server | Where-Object {$_.Name -like "$drive"} | Select-Object Name,FreeSpace

        if (($slf.Size - $tlf.Size) * 1KB -gt $driveinfo.FreeSpace)
        {
            Write-Output "Drive: $drive has insufficient space. $databasename Source: $($slf.Size)KB, Target: $($tlf.Size)KB with $($driveinfo.FreeSpace) bytes free"
        }
    }
}

$sourcedatagroups = $sourcedb.FileGroups
$targetdatagroups = $targetdb.FileGroups

## walk through all the data files
foreach ($sdg in $sourcedatagroups)
{
    $tdg = $targetdatagroups[$sdg.Name]
    foreach ($sdf in $sdg.Files)
    {
        $tdf = $tdg.Files[$sdf.Name]

        if ($sdf.Size -gt $tdf.Size)
        {
            $drive = Split-Path $tdf.FileName -Qualifier
            $driveinfo = Get-WmiObject win32_logicaldisk -ComputerName $server | Where-Object {$_.Name -like "$drive"} | Select-Object Name,FreeSpace
            if (($sdf.Size - $tdf.Size) * 1KB -gt $driveinfo.FreeSpace)
            {
                Write-Output "Drive: $drive has insufficient space. $databasename Source: $($sdf.Size)KB, Target: $($tdf.Size)KB with $($driveinfo.FreeSpace) bytes free"
            }
        }
    }
}

It’s pretty straightforward. It gets a connection to each SQL instance it’s passed, goes to the database in question (which in our situation will always have the same name), and walks the log files and data files (which, again, will always have the same logical names and the same filegroups). If there’s insufficient space, it kicks out a message. That’s it. Seems to work.
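The comparison at the heart of the script boils down to a few lines. Here’s a minimal sketch in Python, just to make the logic explicit; the function and parameter names are my own, and I’m assuming file sizes in KB (as SMO reports them) against free space in bytes (as WMI reports it):

```python
KB = 1024

def has_insufficient_space(source_kb, target_kb, free_bytes):
    """True when growing the target file to the source file's size
    would need more free space than the target drive has."""
    if source_kb <= target_kb:
        return False  # target is already big enough; nothing to grow
    needed_bytes = (source_kb - target_kb) * KB
    return needed_bytes > free_bytes

# Example: source file is 10 GB, target is 4 GB, drive has 5 GB free.
# The 6 GB of required growth won't fit in 5 GB of free space.
print(has_insufficient_space(10 * 1024 * 1024, 4 * 1024 * 1024, 5 * 1024**3))
```

The same check runs for every log file and every data file; only the file pair changes.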


Powershell SMO Problem

April 16, 2010 at 10:14 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

We’ve been running the Enterprise Policy Management tools available from Codeplex for a few months now (thanks to Buck Woody’s (blog | twitter) session at the PASS Summit). They’re honestly great. It’s a fantastic way to use Policy Based Management on 2000 and 2005 servers. We did hit some issues with timeouts. Looking at the script, it made a call to Invoke-Sqlcmd but didn’t pass the -Querytimeout value, which means it defaulted to 30 seconds, and the import-to-database process was taking more than a minute for some of our queries. I did a little looking around and decided to just disable the timeout by passing a value of zero (0). But I still got timeouts. Finally, after a bit of searching around, I found a closed (because it was posted in the wrong place) Connect item. It’s pretty simple to test. If you want to see a good run, do this:

Invoke-Sqlcmd "waitfor delay '00:00:29'" -Database master -ServerInstance SomeServer -Querytimeout 0

It’ll work fine. Change it to this:

Invoke-Sqlcmd "waitfor delay '00:00:31'" -Database master -ServerInstance SomeServer -Querytimeout 0

You’ll get a timeout. I don’t know if this is a bug or by design, but it’s a bit of a pain that you can’t simply bypass the timeout. There is a max value (a huge max value) of 65535, but what happens if I run a SQL command that runs longer than that? Please go and vote on the new Connect item.
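For scale, it’s worth doing the arithmetic on that cap; a quick check (Python here, just for the numbers) shows 65535 seconds is a bit over 18 hours, which a very long restore or index rebuild could conceivably exceed:

```python
# Convert the -Querytimeout ceiling (65535 seconds) into hours.
max_timeout_seconds = 65535
hours = max_timeout_seconds / 3600
print(round(hours, 1))  # just over 18 hours
```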


Confio Ignite: Part II

April 16, 2010 at 8:00 am (Tools)

I’m continuing to evaluate Confio’s Ignite database monitoring tool. I’ve had it collecting data on a couple of production servers for about a week now. Based on what I’ve seen so far, it’s looking like a pretty good piece of software.

Breaking with the usual tradition, I’m going to talk about the things I’m not crazy about in the software before I start singing its praises. The first thing, which I thought was missing but is actually just hard to find, is the ability to look at the query information that Ignite collects broken down by database. It looks like you should be able to get to it from the Databases tab, but instead you have to first drill down into a time period, then select specific databases within that time period, which will show you the queries by database. I know that in my environment, this database listing of queries is probably going to get used a lot. Tracking it down required help from the Confio team, which they quickly provided. They also showed me a way I could run queries against their database to get this information so that I could create a report.

Speaking of reports: because Confio supports Oracle and DB2 as well as SQL Server through the same interface, there’s no Reporting Services. OK, I recognize that I’m more than a bit Microsoft-centric when I say this, but I’d like to see SSRS reports so that I can manipulate things the way I want to. Again, not the end of the world, but it’s just something I don’t like. Because the data store is available, though, I can get in there and create my own reports to my heart’s content and, if I like one of their reports, I can always run a trace to capture the query and see how they built it so that I can build one of my own that mirrors it. I think they also provide a mechanism for customizing reports by building some XML that adds them to the interface, which is a bit of a pain, but it shows they’re on top of things.

One other thing that bothered me as I worked with Ignite is that, in the real-time monitoring section, it was hard to find my way to the list of locks. The list is there, but it just wasn’t obvious how to get to it, and it’s something I’m used to looking at when I’m worrying about real-time performance issues.

Right, enough talking about things I don’t like. See this? I love this:

That’s a breakdown, by database, by day, of the cumulative waits on the system. Yeah, that little pink database is causing all sorts of problems on one of my production systems. I actually already knew this was a problematic database, but I wasn’t precisely aware of when and how it had issues. You can drill down to see the same thing for a given day:

And it’s not just the pretty pictures, showing that most of our production load is in the morning with some odd spikes at midnight, 4 AM, and 7 PM; there’s data available behind the graph. If you drill down and hover over the graphs, pop-ups like this one appear:

(Names have been blacked out to protect my job)

And it’s the focus on wait times and types, provided through the trending views and reports, that makes this a very strong tool. Ignite also collects pretty standard performance metrics (buffer cache hit ratio, memory paging, etc.), and you can customize which of those metrics it collects and which ones you display, all on your own. But almost any decent monitoring tool does that. I use a tool that does that, and it might even do it a bit better. No, what separates this tool from the pack is the ability to identify the wait states and the query that is causing them. That’s what will get you excited when you’re using Ignite.

It also has a little ping mechanism that shows response time on the server, a helpful and friendly little reminder that all performance isn’t around what’s happening on SQL Server, but what’s happening across the enterprise.

Big chunks of the interface are customizable and you can add or drop things using the little icons on the right side of the picture above.

I can keep going with all the stuff that’s good, because it’s a long list, but I’ll summarize what I like so far. I like the focus on wait states, a lot. I like the focus on query performance. Between those two, I feel like I’m getting the majority of what I need from the tool, and more than I get from other, similar monitoring tools. The fact that it does all the other standard data collection on top feels like gravy. The problems I have with the software so far are either personal preferences (Reporting Services) or fairly minor. I have another monitoring product running against the same servers as Ignite and I haven’t seen any impact from Ignite’s monitoring showing up there, so it looks like this is a benign piece of monitoring software.

For my final installment, I’m going to look up the skirt of Ignite and see what the underlying data structure looks like. The ability to gather and present data is all well and good, but more important is the ability to take that data and make serious use of it through reports or by building other data structures out of it for long term storage and reporting.


SNESSUG 4/14/2010

April 14, 2010 at 7:32 pm (SNESSUG, Visual Studio)

Tonight’s Southern New England SQL Server Users group is sponsored by Idera. Our presenter is Scott Abrants of Iron Mountain. He’s talking about deploying databases using Visual Studio Team System:Database Edition. We have a good turnout with 12 people (yeah, we’re small).

Scott’s presentation was a lot of fun and very informative. He has automated his deployments to a fare-thee-well. He really has Visual Studio dancing and singing. It was a very thorough overview of the VSTS:DBE solution. Other user groups should be jealous that we got to see this presentation.


The SQL Server Community

April 14, 2010 at 8:53 am (PASS, SQLServerPedia Syndication)

I attended, and spoke at, the inaugural meeting of the Seacoast SQL Server Users Group last night. There were about 60 people in attendance. An excellent turnout, and congratulations go out to Mike Walsh (blog | twitter) and the other organizers.

I was curious about something after watching Mike present the PASS monthly slide deck. He asked how many people were PASS members. Approximately a third of the audience raised their hands. When it was my turn to speak, I asked how many people had heard of Buck Woody (blog | twitter). I was honestly shocked when only about six people raised their hands. Then I asked how many had heard of Paul Randal (blog | twitter). This time about 9-12 people did. Finally, I asked about Brent Ozar (blog | twitter), and only about 4-6 people raised their hands.

Today I was reading the minutes from the PASS Board meeting from March. Oh, as an aside, well done, thank you, and hearty congratulations to the board for performing this act of openness. In it, they were talking about, what else, the SQL Server Community.

It got me thinking. When I say “community” in referring to the people that use SQL Server, a lot of the time I’m talking about the vocal and visible people: the PASS board, Brent Ozar, Buck Woody, Paul Randal, Denny Cherry (blog | twitter), I can keep going, all the bloggers I read, all the tweeters/twitterers/whatever that I follow, all the posters at SQL Server Central (especially those on The Thread) and at Ask.SQLServerCentral.com… You get the point. Even with that little list there, I’m leaving out people that I like and admire and learn from. But you know what, most of those people know who Buck Woody is. Most of those people know who Paul Randal is. Yeah, most of them even know who Brent Ozar is (probably). But, based on my completely unscientific survey, that’s only about 10-15% of all the SQL Server users out there, at the most 20%.

On the one hand, you can say, “Oh crud. We’re only hitting 10-15% of the users despite busting our behinds writing blog posts, tweeting, answering questions on forums, presenting at user groups, SQL Saturday events, PASS Summits, Connections. I might as well get a case of botulism.” And it could be disheartening. On the other hand, you could say, “Holy crud, we can grow this community three or four times and still not even be hitting half of all the SQL Server users out there. Oh boy, I’m going to blog more, tweet more, write more books…” because our growth potential is HUGE!

So, to the board of PASS I say, again, thanks for posting the minutes, and thank you for your hard work. You guys have fantastic opportunities in front of you. Good luck. To all the bloggers, tweeters, posters, presenters & authors, and my friends that fit many or all those categories, what are you doing right now? We’ve got a market to penetrate. Stop lolly-gagging and get to work.


Southern New England SQL Server Users Group

April 13, 2010 at 11:02 am (SNESSUG)

Tomorrow, Wednesday April 14th, is the next SNESSUG meeting. We’re going to get a great presentation from Scott Abrants on using Visual Studio Team System for database deployments. I saw Scott presenting this at SQL Saturday:Boston to a packed room. If you didn’t get to see it then, come on down to Rhode Island tomorrow evening. You won’t be sorry.

