Andy Warren posted a summary of how he saw the PASS Board Meeting that recently took place. If you’re a volunteer for PASS, I’d strongly suggest going over and reading it.
SQLBatman also posted on this a few days ago. It’s absolutely worth a read too.
As a volunteer, I have to say, I really enjoy having some knowledge of what’s going on, the processes behind the decisions, and the intent of those decisions. This knowledge makes it easier to maintain a level of enthusiasm that will help to keep me involved. I’m sure it’ll work the same way for others.
By the way, helping the community is one reason to get involved, but an even better reason is the great people you’ll have an opportunity to work with.
To anyone reading this who attended the New England Data Camp and filled out an eval for any of the sessions, thanks. I received 63 evals between my two sessions. Here are the aggregates on my sessions:
Using Visual Studio Team System Database Edition:
| Measure | Score |
| --- | --- |
| Average of Knowledge | 8.344827586 |
| Average of Presentation | 8.482758621 |
| Average of Preparation | 8.103448276 |
| Average of Interesting | 8.172413793 |
| Average of Overall | 8.275862069 |
| Number of Submissions | 29 |
Understanding Execution Plans:

| Measure | Score |
| --- | --- |
| Average of Knowledge | 8.647058824 |
| Average of Presentation | 8.617647059 |
| Average of Preparation | 8.705882353 |
| Average of Interesting | 8.529411765 |
| Average of Overall | 8.625 |
| Number of Submissions | 34 |
These are all on a scale of 1-9. I’m really quite happy with the results. Here are the average results for all the speakers and all the sessions at the Data Camp:
| Measure | Score |
| --- | --- |
| Total Average of Knowledge | 8.407843 |
| Total Average of Presentation | 7.912109 |
| Total Average of Preparation | 8.130859 |
| Total Average of Interesting | 7.962891 |
| Total Average of Overall | 8.096004 |
| Total Number of Submissions | 515 |
Overall, both sessions beat the average. My knowledge score was marked down a bit on the Visual Studio session, and I attribute that (mostly) to a lack of rehearsal and preparation. I changed that slide deck just the week before the Data Camp and it showed. The preparation score on the Visual Studio session suffered for the same reason. What practice and rehearsal I had done were on my desktop at work. I found out that morning that my laptop didn't have the GDR release installed, so I had to RDP to my desktop, which created several technical issues. I'm glad that people picked up on it. It really does keep me honest. I guess the session on execution plans was well received (despite the fact that I kept saying page when I meant leaf when referring to an index structure, bleh).
There were some really nice comments, thanks everyone. A couple of the comments on the Visual Studio session talked about market penetration and the readiness of the tool set. I had about 60 people in the audience and only three (3!) were using the tool. More were using Team Foundation Server, but not to the extent we use it where I work. I don't think that's because the tool isn't ready (although I think it has a few shortcomings in & around deployments, especially incremental deployments) but rather the fact that it costs a bloody fortune. Few individuals can afford it and not that many companies are going to be willing to pay for it, especially in this economy. Other than that, no suggestions for improving the presentation, despite the fact that I got marked down a bit on this one. I'll take the preparation more seriously next time.
I only got one negative on the Understanding Execution Plans session and, unfortunately, it's only marginally useful. One person gave me a 2 on "Interesting" (in a sea of 9's, a few 8's, and two 7's). This person wanted to see a session on query tuning and optimization. But that's just not what the session is about, at all. So it's hard to take this as a mechanism for improving my presentation on what an execution plan is and how to read one. However, it does let me know that I should probably try to come up with some kind of performance tuning & tips session that I can give from the new book. Unfortunately, this is such a full field with great presenters like Gail Shaw already showing exactly what I'd show (except better) that I'm not sure what to do about it. I need some idea to drive the session, a hook like Gail's "Dirty Dozen" (fantastic name). I'm thinking about this one.
Anyway, there are the results, all out in the open. Thank you again for sending in your evals (even the 2 was very helpful) and your comments. The compliments were extremely nice to read, thank you.
UPDATED: Typed Gail’s name wrong AND forgot to link to her site.
I believe that the very first New England Data Camp was a success. We had about 185 attendees. There were 18 sessions from 16 speakers. Both the sessions I gave and the one I sat in on were full. Credit goes to Adam Machanic, who did 90% of the work pulling this together. Amazing job, Adam. My personal thanks to our sponsors. First, Microsoft, who provided us with a magnificent facility, nice swag, a full AV suite, coffee and donuts in the morning, and a lot of help. It wouldn't have come out as well as it did without you guys. Next, the Professional Association of SQL Server Users (PASS), who supplied us with money, without which we could not have eaten lunch, a few posters to decorate the place, and a nice PowerPoint template. Good job, guys. Finally, Red Gate, those t-shirts were very handy. Thanks again.
A special thank you to the speakers. You guys rock, and from the evaluations I saw, others think so too. You volunteered to come in on a Saturday to share with others. That’s pretty special.
Thanks to Dave Mulanaphy from SNESSUG. He did a ton of work before the event and was a huge help that day. It wouldn’t have been a success without him. Thanks Dave.
I saw about 1/4 of the evals, and except for getting dinged on food (more on that in a moment), the Data Camp was very well received and I saw many requests that we do another.
Food. Yes, pizza is not the healthiest choice. Yes, I like pepperoni too. But guys, you're getting first-class training and breakfast and lunch, all for free. You need to cut us some slack because we're doing the best we can to get as much together as quickly as possible. Pizza is easy. We spent, are you ready, $1400 on pizza. We only had $1200 in donations. We spent another $100 on drinks. That's $300 that came out of the two NOT FOR PROFIT user groups that hosted the event. We did the best we could (or Adam did, I just pitched in) and, as someone else pointed out, the door wasn't locked and you came there in a car. If you have special dietary needs, run out for lunch.
The two sessions I presented seemed to be very well received. I could have done with a bit more preparation on the Visual Studio Team System Database Edition session. I hadn't rehearsed the new version of the presentation enough and it showed in a couple of places. The execution plan session went well, I thought. The big "ooh" moment in that presentation surprised me. Most of the audience didn't know about the little plus sign in the lower right of the Management Studio execution plan window (it's in 2005 and 2008) that lets you scroll around in an execution plan. When I get the full set of aggregated results from the evaluations, I'll post them. I saw some positive feedback (thank you) and some interesting criticisms (thank you too).
Overall it was a great day. I hope Adam recovers and decides to put on another, but next time he should delegate more to others.
I’ve been advocating that our company use composite projects for our deployments using the VSTSDBE GDR (Visual Studio Team System Database Edition, General Distribution Release for those not instantly geeky). In a nutshell, VSTSDBE offers two mechanisms for deployment across multiple environments. Both of these work wonderfully well for automation when you are doing a full tear-down and rebuild. When you’re doing incremental deployments, they both fail.
Option 1: Use SQLCMD variables to set environment-specific values such as file locations, etc., and post-deployment scripts to set security. This works. It's the method we used prior to the GDR. Unfortunately, security and other environment-specific information is hidden inside scripts rather than visible to a given configuration directly within the VS interface.
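To make the trade-off concrete, here's a minimal sketch of what Option 1 can look like in a post-deployment script. The variable name `$(EnvLogin)` and the role assignment are made up for illustration; in the GDR the actual per-environment values live in the build configuration's variables file, not in anything visible in the VS designer.

```sql
-- Script.PostDeployment.sql (sketch; $(EnvLogin) is a hypothetical
-- SQLCMD variable whose value differs per build configuration).
-- Environment-specific security ends up buried in this script rather
-- than being visible in the VS interface:
IF NOT EXISTS (SELECT 1 FROM sys.database_principals
               WHERE name = '$(EnvLogin)')
    CREATE USER [$(EnvLogin)] FOR LOGIN [$(EnvLogin)];

EXEC sp_addrolemember 'db_datareader', '$(EnvLogin)';
```

The script deploys cleanly to every environment, but you only find out what security each environment gets by reading the script, which is exactly the drawback described above.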
Option 2: Create a composite project. It stores the common objects, the stuff on its way to production, in one project, and the environment-specific stuff, such as security, in a second project. Some of the Microsoft guys are even suggesting this approach. You can then store everything inside of configurations and in project objects, visible to the VS GUI, easy to maintain, easy to build…
Ah, but there's the rub. Easy to build when you're rebuilding every time. Incremental changes require a database comparison between the project and the database. Ah, but which project? In a composite environment, you have to deploy each project independently. What happens when a change requires data loss? You have to create that script manually. Fine, but how do you then get it into the rest of the build in an automated fashion?
I finally punted and posted it on the MSDN Forums.
It’s kind of scary to see someone else put down thought processes that could have been your own. That’s what Gail did with this post. It’s worth a close read because it’s offering very good advice and supplying the reasoning behind that advice.
During my recent visit to the Microsoft Technology Center in Waltham, Rich Crane gave me a tour of the facility. It included a room I think he called the Concept Center. It was a little theatre-style arrangement around a series of work areas or work styles. Microsoft uses the room for demos that go WAY beyond some silly PowerPoint slide show. Here are a few pictures I took while I was there.
First, you receive a very explicit set of prerequisites. You need to install the SQL Server Upgrade Assistant, a tool that Microsoft licensed Scalability Experts to create for them. You have to run this against a small database, under 25 GB. The tool backs up all the databases from the server (so you need to put it onto a test box rather than try to move an entire production system's worth of databases). It then starts a trace that captures all the calls made to the database. I spent two days working with one of my application teams to get a server set up, the app connected, and a good set of tests run on the server to capture about an hour's worth of trace data. It was at no point hard to meet the requirements; it just took time to get everything set up just right. They recommend you single-thread the trace, meaning just have one user run all the tests. This is because, when run without any extra work, Profiler, which replays traces, is single-threaded. Replaying a multi-user trace single-threaded can lead to unrealistic errors, especially blocking and deadlocks.
Once I had everything, I went to Waltham (a two-hour commute… the less said the better) to run the tests. The lab setup was impressive. They had a separate room for each of the four companies that sent someone to the testing facility. We had a solid workstation (running Windows 7 by the way, fun stuff) and a set of servers on a separate LAN inside each room. The servers were running on Hyper-V, Microsoft's virtual server software. Unfortunately, we did run into a snag here. Each server was supposed to have 100 GB of space to accommodate the copy of the database as well as a restore of it and some more room besides. The virtual machines were configured to run on a system that only had 140 GB of storage to start with. I filled that with my database during the initial setup (I ran the processes on three servers simultaneously). That put us out of commission while the MS techs went about fixing everything (which they did, quickly). It was just the pain of being first.
The documentation on the labs was very complete. Few steps were left to the imagination. Anywhere there was ambiguity, a second set of documentation cleared it up nicely. With one exception. They did want you to restore the system DBs. It made sense to do it, but I checked both sets of documentation and it wasn't there, so I thought, hey, what do I know, MS is on top of this… Wrong. Had to restart, again.
Once all the initial configuration issues were resolved, it was simply a matter of walking through the lab. The first step was to establish a baseline, so I played back the trace on a 2000 server. Then I did an in-place upgrade to 2008 and ran the trace, and an upgrade to a new install using a restore and ran the trace there. All the results could then be compared.
Overall, it was a good session. Rich Crane, Rob Walters and Sumi Dua from Microsoft were very helpful. I picked up a few tips on doing upgrade testing and got to do it away from managers and developers, making quite a few mistakes along the way. Now maybe I can do it in front of them with fewer mistakes. I liked the Upgrade Assistant tool since I'm pretty lazy, but it didn't do anything earth-shattering that you couldn't do on your own.
One tip worth repeating: if you're using the Upgrade Assistant to capture a trace, it doesn't put any filtering in place. You can open the trace file, filter out the databases (by ID) that you don't need, and then save a new copy of the trace file just for the database you're interested in. Thanks for that one, Rich.
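You can also do the filtering in T-SQL rather than in the Profiler GUI. This is just a sketch, not anything from the lab docs: the file path, table name, and database ID below are made up, but `fn_trace_gettable` is the standard way to read a .trc file in T-SQL, and Profiler can replay from a trace table as well as from a file.

```sql
-- Load the captured trace and keep only the events for the database
-- under test. Path, target table, and ID are hypothetical; use the
-- database's ID from the server where the trace was captured.
SELECT *
INTO dbo.UpgradeTrace_Filtered
FROM fn_trace_gettable('C:\Traces\UpgradeCapture.trc', DEFAULT)
WHERE DatabaseID = 5;
```

From there you can point Profiler at `dbo.UpgradeTrace_Filtered` for the replay, or export it back out to a file.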
Gentlemen, you did a nice job. I appreciate your time and your help. Sumi, nice to meet you. Rob, good to see you again. Rich, thanks again for everything, great chatting and good to see you again as well. Rich and I used to work at a dot com “back in the day.”
Red Gate has compiled a bunch of its Cribsheets into a single e-book, the SQL Server Cribsheet Compendium. It's pretty cool. I've got two entries in there, performance tuning and backups & restores, along with great articles from Robyn Page, Phil Factor, Robert Sheldon and Amirthalingam Prasanna, pretty heady company. It's worth a look.