Chapter 2: Graphical Execution Plans

December 23, 2008 at 7:58 am (TSQL)

Simple-Talk has posted Chapter 2 of the book online. The one thing missing from the chapter (and the book, for that matter) that you really need is sys.dm_exec_query_plan. That dynamic management function is a must-have for working with execution plans.
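For example, here's a minimal sketch of the usual pattern for pulling plans out of the cache with it (the TOP and the ordering are just illustrative):

-- A minimal sketch: most expensive cached statements by CPU, with their plans.
SELECT TOP 10
    qs.execution_count,
    qs.total_worker_time,
    st.text AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;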


2008 Index Fragmentation

December 18, 2008 at 1:26 pm (SQL Server 2008)

I forgot all about this, but a script I wrote that uses the dynamic management views & functions to drive index defragmentation and rebuilds got published over at SQL Server Central.

It could stand a bit of tweaking, but it gets the job done on the several systems I’ve tested it on so far.
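The general shape of the approach, as a minimal sketch rather than the published script (the 10 and 30 percent thresholds are the common rule of thumb, not necessarily the exact values the script uses):

-- Sketch only: use the physical stats DMF to generate REORGANIZE or REBUILD
-- commands; the 10/30 percent thresholds are the usual rule of thumb.
SELECT 'ALTER INDEX ' + QUOTENAME(i.name)
       + ' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ps.object_id))
       + '.' + QUOTENAME(OBJECT_NAME(ps.object_id))
       + CASE WHEN ps.avg_fragmentation_in_percent > 30
              THEN ' REBUILD;'
              ELSE ' REORGANIZE;'
         END AS maintenance_command
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
    ON i.object_id = ps.object_id
   AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 10
  AND ps.index_id > 0; -- skip heaps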


Profiler Research

December 18, 2008 at 8:21 am (SQL Server 2005, SQL Server 2008)

A question came up on SQL Server Central as to why or how Profiler could bring down a production server. I was aware that Profiler caused problems, and that’s why you should always use a server-side trace on production systems. Duh! Of course… But why? I didn’t have an answer. I just “knew” what everyone told me. So I set out to do some research. Initially I hit my favorite source of information on SQL Server (and it should be yours too): the Books Online.

Here is what I was reading on the architecture of Profiler. Good, but not enough.

I still want more information, so I’ve gone out a’wanderin’ through the interwebs. First, Linchi Shea has a great set of tests comparing Profiler to server-side traces. Fantastic information, and he proves what lots of people “know,” but he doesn’t explain why it happens, simply that it does. By and large, that’s good enough for me most of the time, but my interest was piqued.

I still haven’t read Inside Microsoft SQL Server 2005: Query Tuning and Optimization (I’m trying very hard not to plagiarize, even by accident; that’s about done, and I can’t wait to read this), but one chapter, just on trace and Profiler, is available online. That’s a scary read (read it to find out what happens if the file buffer fills). But there is a mechanism that flushes the file reader queue every four seconds, apparently a trick to reduce load on systems with a high number of transactions. That, plus the general speed of writing to the file system compared to going through SMO, seems to be why the server-side trace performs better. They go on to cite Linchi Shea’s article above, which still doesn’t answer why Profiler is not only slower, but actually slows the system down.

Ah, finally, here’s the answer. Profiler, as opposed to a server-side trace, requires memory latches, and exclusive latches at that, in order for sessions on SQL Server to write each event out to Profiler. Then you factor in the network, and the fact that the machine consuming all this is probably not terribly powerful… and you get Profiler slowing down the server… Cool!
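For the record, the server-side alternative boils down to the sp_trace_* procedures. A minimal sketch (event 12 is SQL:BatchCompleted; the columns, file path, and max file size here are placeholders, not recommendations):

-- Sketch of a server-side trace: SQL:BatchCompleted written to a rollover file.
DECLARE @TraceID int, @maxfilesize bigint = 50, @on bit = 1;

EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\MyTrace', @maxfilesize, NULL;

EXEC sp_trace_setevent @TraceID, 12, 1, @on;   -- TextData
EXEC sp_trace_setevent @TraceID, 12, 12, @on;  -- SPID
EXEC sp_trace_setevent @TraceID, 12, 13, @on;  -- Duration

EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace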

UPDATE: Missed some links. Thanks Gail.


PASS Evaluation Results

December 15, 2008 at 3:05 pm (PASS)

I received my evaluation results this afternoon (thanks so much, Marcella). I had about 35 people attend the session, and 11 took the time to respond. Overall, the session seems to have been well received. I had one person ding me for finishing early. I did finish early, but then we actually went over time on the Q&A, so I think it was a wash. For my first time presenting in this type of environment, I find this to be useful feedback. I can see where Andy Warren is coming from in suggesting that some questions specifically helpful to the speaker would be nice; the existing questions are primarily aimed at determining whether the topic will be useful next year and whether the speaker will be accepted again. I especially enjoyed the written feedback: “You used the word glorious too much.” HA! I’ll work on that.

Here are the results (typos, if any, are mine). Values are on a scale of 1 to 5:

What is your overall Evaluation of the Session? 4.64
How would you rate the usefulness of the information presented in your day-to-day environment? 4.45
How would you rate the Speaker’s presentation skils? 4.82
How would you rate the Speaker’s knowledge of the subject? 4.82
How would you rate the accuracy of the session title, description and experience level to the actual session? 4.82
How would you rate the amount of time allocated to cover the topic/session? 4.27
How would you rate the quality of the presentation materials? 4.64


New England Data Camp v1.0

December 11, 2008 at 10:13 am (PASS, SNESSUG)

Adam Machanic of the New England SQL Server Users Group (among other things) has contacted the Southern New England SQL Server Users Group to ask us to take part in a one-day SQL Server code camp. Of course we said yes.

It’s taking place at the Microsoft facility in Waltham on Saturday, January 24th. You can register here. If you’re interested in speaking, speaker registration is here. I’ll be listing the sponsors as they become available, but it will be a PASS event.

As information gels around this, I’ll continue posting updates.


Whine

December 10, 2008 at 5:12 pm (SQL Server 2008, TSQL)

I’m working on the chapter on cursors. My editors thought my plan for the chapter was less than optimal. I wanted the chapter heading to be:

Cursor Cost Analysis

And then this would be the entire chapter:

They cost too much. DO NOT USE THEM!

Ah well. Guess I have to go through it and create all the silly examples.
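Something along these lines, presumably; a hedged sketch with hypothetical table and column names, showing the classic row-by-row loop next to the set-based statement that replaces it:

-- Hypothetical example: a cursor doing row-by-row updates...
DECLARE @OrderID int;
DECLARE order_cursor CURSOR FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 1;
OPEN order_cursor;
FETCH NEXT FROM order_cursor INTO @OrderID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 2 WHERE OrderID = @OrderID;
    FETCH NEXT FROM order_cursor INTO @OrderID;
END;
CLOSE order_cursor;
DEALLOCATE order_cursor;

-- ...versus the single set-based statement that does the same work.
UPDATE dbo.Orders SET Status = 2 WHERE Status = 1;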

End of whine.


Tagged Again, Two Mistakes

December 10, 2008 at 8:58 am (Misc)

OK. I really do need to take a look at incoming links more often. This one was pointed out to me. Now I’m to provide two mistakes… Only two? This is another idea originating with Chris Shaw, who apparently spends all day providing a dual service: he torments fellow DBAs and provides blogging fodder. I’m not sure which is more useful to the public.

1) HUGE mistake. Way back, much earlier in my career, I thought I could do no wrong and could learn anything I needed to learn within a week or so. The first time I learned that wasn’t true was when I claimed to be a database expert at the consulting company I was working for. They sent me off to a client, where I proceeded to trash a database server. I mean, I mucked it up totally, with the gleeful abandon that comes from total ignorance. Next day, I come back and, of course, nothing is working. My fault, but I didn’t know it at the time. Instead, I ask around to see if anyone else had been on the system. I find out that their local DBA had been in it. I go over and start berating this woman, who was apparently a saint. I finish ripping her a new one and she patiently walks over to the machine and begins my education as a DBA. She goes for an hour straight, just listing my mistakes. Then she suggests a couple of books and some classes, calls my boss, and asks them to ship me back to the office. I leave there humbled and totally beaten. But I worked for a great company (except for their tendency to send out unqualified people, which they did quite a lot). They felt I had potential, so they provided me with a bit of training and off I ran. Lesson learned: try to learn what you don’t know before screaming that lack of knowledge at the planet.

2) I was still learning my chops around databases when I decided that one of the tables I was working on needed two clustered indexes. Yeah, I know now that a table can only have one. It was a much older version of Sybase and I was using a third-party tool; I’m not sure which one. Anyway, I did it. I got two clustered indexes onto the table. Oh, and did I mention it was a production table? Needless to say, that part of the database was offline. I was getting really weird errors. I struggled and fought and finally called Sybase support. I’ve got the high-mucky-mucks of the company standing behind me, moistening my neck with their breath, as I try to solve the problem. A few minutes with Sybase tech support and they provide a solution, fixing the allocation errors on the table. Then, of course, the VPs, or whatever they were, ask how it happened so it won’t happen again. I turn to the Sybase support guys. They ask me what I did. Nothing, says I, I was just putting another clustered index on the table when… Once the guy on the phone stopped laughing, he explained my error and I had to turn around and explain it to the VPs. I thought about trying to shade it, but instead I just told the truth. “I screwed up. I did something wrong.” That worked. They were satisfied. It’s something I’ve carried forward. If I screw up, I own it.

Also, TEST stuff offline in non-production environments for crying out loud!

Now, to tag. I’m going for SQLBatman & Gail again.


SQL Quiz Part 2a

December 10, 2008 at 8:03 am (Misc)

I guess I should check my incoming links more often. I was tagged prior to TJay tapping me. Sorry Jeremiah.


SQL Quiz Part 2

December 10, 2008 at 7:52 am (Misc)

Gee, thanks TJay. I have apparently been tagged to take part in a quiz started by Chris Shaw. It’s an experiment in learning where a series of people answer the same question, hopefully spreading some useful knowledge. The question:

What are the largest challenges that you have faced in your career and how did you overcome those?

This is tough. You could answer based purely on technical issues: what was my toughest technical challenge? You could talk about career issues: tough bosses, ignorant co-workers, incompetent subordinates, clueless clients. You could talk about shifting jobs as companies fail or are mismanaged or you move. You could also talk about missed opportunities, when you faced the large-scale challenges and didn’t quite rise to the occasion. Anyway… my answers:

1) My first challenge is ongoing, but about five years old at this point. Five years ago, I was acting as the gatekeeper on development machines. I would vet applications prior to their release to ensure they met our company’s minimum standards. The first part of this challenge was actually the easiest to fix. Checking applications right before they’re due to release is pretty bad scheduling; we now monitor apps as they’re developed, rather than attempting to make changes at the end of the process. However, back then, that was how it was getting done. I received word that a major application was about three weeks away from release, so I needed to take a look at it. I checked it out. It was beautiful. You’ve never seen so crystalline a structure in all your life. The developers were geniuses. Evil and misguided, but geniuses. They had built an entirely object-oriented mechanism within the database, with user-defined functions (UDFs) calling UDFs that called other UDFs, all in this incredible chain. It was amazing. And it worked, very well… as long as you only had about 10 rows in the tables and no more than one user at a time. Since this was going to support millions of rows and hundreds of simultaneous users… So, I went back to them and said that they couldn’t do this, that the entire database needed to be rewritten. “Why?” was the response. Well, because it won’t work. “Why?” Because these things behave poorly. “Why?” Well, because they do. “Why?” I didn’t have THE answer (which is that, since multi-statement UDFs have no statistics, the optimizer builds their execution plans as if they return a single row of data, which is fine as long as there really is only a single row, but stinks on toast when there are more). Their argument was that if the database let them do it, it was OK, and any performance problems were on my end, not in the TSQL code they had written. The challenge, and it continues to this day, was to find THE answer and be able to supply it… on demand. Our developers are somehow under the impression that if you have to go and look something up or research something, it means you don’t actually know what you’re talking about. I don’t know how they came by that impression, but there it is. So I am daily trying to get in front of whatever it is they’re doing that day (ORM, nHibernate, LINQ, SCRUM, Agile, whatever the latest development buzzword might be) so that when they come to the database and declare the next PERFECT solution (like the UDFs), I can answer the “WHY” instantly. It’s an incredible challenge that I have to struggle with constantly.
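As an aside, the UDF problem is easy to demonstrate. A hedged sketch with a hypothetical dbo.Orders table: the multi-statement version returns a table variable the optimizer has no statistics for, so it estimates one row; the inline version is expanded like a view and uses the statistics on the underlying table.

-- Hypothetical multi-statement UDF: the optimizer assumes one row comes back.
CREATE FUNCTION dbo.OrdersByCustomer_MultiStatement (@CustomerID int)
RETURNS @Result TABLE (OrderID int, OrderDate datetime)
AS
BEGIN
    INSERT INTO @Result (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO

-- The inline equivalent: expanded into the calling query, so the optimizer
-- can use the statistics on dbo.Orders to estimate row counts properly.
CREATE FUNCTION dbo.OrdersByCustomer_Inline (@CustomerID int)
RETURNS TABLE
AS
RETURN
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
GO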

2) I worked at a Dot Com that I probably shouldn’t name (DASH.COM) for a bunch of guys who graduated from… TOP SCHOOLS. These guys were IVY LEAGUE! You have to use all caps and exclamation points because that’s the way they talked about their education. They really looked down on anyone who didn’t go to a TOP SCHOOL and tended to get really pissed when those of us who didn’t go to TOP SCHOOLS opened our mouths. The core challenge was not strangling these little twerps with their frigging Harvard Business School diplomas, but there were other challenges. Such as the time they came to the DBA group and declared that we had spent too much time doing maintenance and we were slowing down the development process. Therefore, no more maintenance on the databases; just concentrate on development to the exclusion of all else. OK. We told them things weren’t going to go well, but these guys didn’t know enough to trust us, despite the TOP SCHOOL education. What they needed was more education, and it was up to us, the DBAs, to provide it. Unfortunately, at the time, I didn’t have enough experience to realize that. The new method worked great for almost a month. Then one day the log backups failed, the same day a new piece of software was implemented in the production system. We hadn’t completed our monitoring processes, so we didn’t find out about the log failure. This was back in 7.0 and the system actually could fill the drive. Our drive filled and the entire server locked up. We were down. Two co-workers and I spent three days straight, no sleep, trying to recover the system, failing, and finally rebuilding it, losing one day of data. Herculean effort. I wrote up full documentation on what went wrong and why. A couple of days later we received word that we should spend as much time as we needed ensuring that proper database maintenance was completed. While it took a catastrophe to pound it home, the one thing I learned from this is that documentation of why you’re doing something is the best thing you can provide to management. Assuming they know enough to trust you is a bad assumption. You need to make sure you communicate what can go wrong ahead of time. By the way, they don’t teach apologies at TOP SCHOOLS.

Not great answers I suppose, but both helped push me down the track a bit farther.

Now, my tags… Ah, first, of course, GilaMonster at SQL In the Wild. Take it away Gail. Next… Tim Mitchell at Bucket of Bits.


SQL Server 2008 Upgrade Whitepaper

December 8, 2008 at 7:33 am (SQL Server 2008)

One of my tasks for the coming year is to evaluate each and every SQL Server 2000 system, identify all the databases and the applications they belong to, and provide a cost estimate for upgrading to SQL Server 2008. I’ve only started reading this new white paper from Microsoft, but I can already tell it’s going to be a huge help.
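The inventory step itself is mostly a matter of walking each instance and listing what’s there. A hedged starting point (on SQL Server 2000 the catalog table is master.dbo.sysdatabases; from 2005 on it would be sys.databases):

-- Sketch only: list databases and compatibility levels on a SQL Server 2000 instance.
SELECT name,
       cmptlevel AS compatibility_level
FROM master.dbo.sysdatabases
ORDER BY name;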

