I published an article describing an attempt my company made at using virtual servers as a full-fledged development environment. It didn’t work very well. I’m not a systems guy. I know a bit. I pay attention and learn things. I certainly listen to the people around me who are systems experts. When several of them commented on this great assessment of virtual drives, I went and read it. It sounds like some really cool technology, but with the ability to make our lives considerably harder. Like the article says, how often do you check to see if the drive has slowed down?
Using the Team Edition of Visual Studio for Databases (VSDB) enables you to build a database out of source control. This means you can treat your database like code, at least as much as you can. The fact is, because of persistence, a database just isn’t code. Because you have to keep all the data previously entered, when you deploy a new version of your database to production, you don’t simply get to replace the database the way you do the code. You have to run scripts that alter whatever can be altered, while preserving the existing data everywhere. That’s just how it is in production. You have to do the work necessary to protect your data.
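To make that concrete, a production deployment script typically alters in place rather than dropping and recreating. Here’s a minimal sketch of that pattern; the table and column names are hypothetical, not from any real system:

```sql
-- Hypothetical upgrade step: add a column without touching existing rows.
-- The existence check makes the script safe to re-run.
IF NOT EXISTS (SELECT 1
               FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customer')
                 AND name = 'MiddleName')
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName NVARCHAR(50) NULL;
END
```

Every production change gets this kind of treatment: check first, alter carefully, never drop anything that holds data.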
Not so in Development. Development (and QA, Test, Financial Test, Performance Test) is the place where you can, for a time, treat your database like code. Do a full tear down and replace. I’m pushing this because I’m working with two extremes these days. Most of our new projects have been using tear down and rebuild from day one. Because of this, we know everything there is to know about the objects inside the database. There are no integrity issues, old code, missing keys, etc. We know this because every time we deploy, we get to rebuild the database. I also have a few older databases. These have not been rebuilt from scratch, ever. Ever.
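In development, by contrast, the tear down and replace can be as blunt as this sketch (the database name is made up, and obviously this must never point at production, since it destroys everything):

```sql
-- Dev-only rebuild: drop the database entirely and recreate it,
-- then redeploy every object from the scripts in source control.
IF DB_ID('MyAppDev') IS NOT NULL
BEGIN
    ALTER DATABASE MyAppDev SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DROP DATABASE MyAppDev;
END;
CREATE DATABASE MyAppDev;
-- ...followed by running the object-creation scripts from source control.
```

Because every object is recreated from source on every deployment, anything broken or orphaned fails loudly instead of lurking.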
I’ve just started going through and rebuilding one of them for the first time. There are broken stored procedures everywhere. These are procedures that got deployed to production years ago. Over time the structures they referenced were changed or even dropped entirely, yet the procedures were left in place. Because they had already been stored by SQL Server, and because they’re no longer referenced by application code (replaced completely by a later deployment, after all), they’re just sitting out there, waiting for the unwary. There are missing constraints on tables, found during data loads. There are all kinds of problems that would have been identified quickly and easily if the database had been rebuilt with each deployment.
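If you can’t rebuild from scratch yet, one way to smoke out some of these zombie procedures is to ask SQL Server to re-validate each module and report the failures. This is only a sketch, and because procedures use deferred name resolution it won’t catch everything (a reference to a completely dropped table can still slip through):

```sql
-- Attempt to refresh every non-schema-bound stored procedure;
-- print the ones that no longer parse against current structures.
DECLARE @name SYSNAME;
DECLARE proc_cursor CURSOR FOR
    SELECT QUOTENAME(SCHEMA_NAME(schema_id)) + '.' + QUOTENAME(name)
    FROM sys.procedures;
OPEN proc_cursor;
FETCH NEXT FROM proc_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sp_refreshsqlmodule @name;
    END TRY
    BEGIN CATCH
        PRINT @name + ' failed: ' + ERROR_MESSAGE();
    END CATCH;
    FETCH NEXT FROM proc_cursor INTO @name;
END;
CLOSE proc_cursor;
DEALLOCATE proc_cursor;
```

It’s a poor substitute for a full rebuild from source control, but it can shorten the list of surprises.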
It’s just reinforcing for me how useful tools like VSDB can become.
I’m going to have this tattooed inside the eyelids of most of our development staff:
I’m not kidding. Drastic measures are called for.
When I wrote the book “Dissecting SQL Server Execution Plans” I knew I was going to get things wrong. Several people have pointed out things over the last couple of months. They’ve all been in the details. None of them were serious errors of fact. Andy Warren just found a huge one.
In the section on Table Hints I detail how to apply an INDEX() hint. It’s on page 123 in the electronic version or 124 of the first print version. I state that the index numbering starts at 0 with the clustered index. That’s just flat wrong. A clustered index is always 1. A 0 indicates a heap. Other indexes will have values greater than 1. If you supply a 0 to the INDEX() hint, as shown in the book, it forces either a clustered index scan or a table scan. I go on to suggest that you use index names in order to be sure of the index used by your query, and I show that in the example. Good thing, too. Hopefully people will listen more to that than to the statement preceding it.
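To illustrate the corrected numbering, here’s a sketch against AdventureWorks (the index name below is the primary key as shipped in that sample database; verify it against your copy, since AdventureWorks changes between releases):

```sql
-- INDEX(1) forces the clustered index;
-- INDEX(0) would force a clustered index scan, or a table scan on a heap.
SELECT soh.SalesOrderID, soh.OrderDate
FROM Sales.SalesOrderHeader AS soh WITH (INDEX(1))
WHERE soh.SalesOrderID = 43662;

-- Safer: name the index, so there's no doubt which one you're forcing.
SELECT soh.SalesOrderID, soh.OrderDate
FROM Sales.SalesOrderHeader AS soh
    WITH (INDEX(PK_SalesOrderHeader_SalesOrderID))
WHERE soh.SalesOrderID = 43662;
```

The named form also fails loudly if the index is ever dropped, instead of silently forcing whatever index happens to hold that number.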
I’m not sure if I’ll get a chance to do edits prior to more print runs, but that will definitely be included. Thanks Andy.
No sooner had I let Tony Davis, my editor, know about the issue than he got the electronic version updated. Plus, he’s at home sick. Tony, you’ve earned TWO beverages on me at the PASS Summit this year.
Every week the Database Weekly Update comes out from SQL Server Central. There are always good things to read there. Links to interesting tidbits of information posted by really smart people. One of them that jumped out at me this week, to the point that I read it out of order, was a blog entry by Andy Warren. If you ever get the opportunity to listen to Andy speak, jump on it. He’s great. His writing is wonderful too.
Anyway, he wrote an entry giving advice on how to break into the national scene as a speaker. I know I dwell on this WAY too much, but I’ve been given the opportunity to speak at PASS this year. Notice I said “been given.” I don’t think I earned it quite the way I should have. Regardless, based on my experience so far, Andy’s advice is very good. If you too want to break in at that level, reading what this man (read his About page if you need incentive) has to say is well worth your time.
This is the complete code listing from the book “Dissecting SQL Server Execution Plans.” You need a copy of AdventureWorks. Please note, AdventureWorks changes. It changed three times while I wrote the book. These changes can be very subtle, causing variations in statistics that will make some of the queries generate execution plans in a different manner than what was published in the book. Some of these changes can be pretty radical, causing the queries to not work at all. Also, the book went through quite a few edits, including rearranging the order in which sections appear. This listing is the order in which things were written, and it might vary from the book. In other words, caveat emptor, your mileage may vary, keep your hands and feet inside the vehicle at all times… You get the idea.
I see a lot of searches from people apparently trying to find out what having a Constant Scan in their execution plan means. I can understand why. Here’s the definition from the Books Online:
The Constant Scan operator introduces one or more constant rows into a query. A Compute Scalar operator is often used to add columns to a row produced by a Constant Scan operator.
OK. Very precise and yet, unless you know what the sentence means, reading it can be pretty confusing. The key is to see what Compute Scalar means:
The Compute Scalar operator evaluates an expression to produce a computed scalar value. This may then be returned to the user, referenced elsewhere in the query, or both. An example of both is in a filter predicate or join predicate.
What this usually means is, when you have something like GETDATE() in your query, you’ll see a Compute Scalar. What does that have to do with Constant Scan? You’ll see situations where the query has to create a row to hold its data before it can access data from tables. The following example is taken from Dissecting SQL Server Execution Plans:
‘1313 Mockingbird Lane’
In this example, the execution plan generated starts off with a Constant Scan where the engine creates a row that it begins to populate in order to generate the rest of the data for the insert. You’ll see Constant Scans created in other situations for the same purpose. I’ve seen them a lot when querying against XML or creating XML.
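You can see the same pair of operators without any insert at all. A SELECT with no FROM clause has no table to read, so the engine has to manufacture the one row it returns; checking the plan for a trivial query like this should show a Constant Scan feeding a Compute Scalar:

```sql
-- No table access: the row itself comes from a Constant Scan,
-- and the GETDATE() expression is evaluated by a Compute Scalar.
SELECT GETDATE() AS Today,
       'a constant' AS Label;
```

Once you’ve seen it in isolation like this, it’s easier to recognize what the operator is doing when it shows up at the front of a bigger plan.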
Another instance of Constant Scans is when you have partitioned tables. As this explanation from Microsoft points out, the Constant Scans in these situations represent the partitions.
It’s usually not a big deal or a major performance bottleneck. Spend time worrying about other operators.
I’m pretty jazzed to see that the PASS Community Summit has finally put up the list of sessions and the abstracts. However, I’m more than a bit nervous to see my name at the very top of that list. I couldn’t have been buried somewhere in the middle? It’ll probably change. I don’t doubt I’ll be delivering during the final session on Friday.
This sure makes it more real. I’ll have to get to work on the slides and demos now.