PowerShell is Really Easy… If you know what you’re doing

October 18, 2010 at 8:00 am (PowerShell, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication)

Sigh… I just spent almost a full work day trying to come up with what turned out to be a single-line script. The requirements were simple. After patching of servers is completed and everything is rebooted, sometimes, not often, but sometimes, SQL Server Agent doesn’t restart correctly. I was asked to put together a script that would check a list of servers to see if Agent is running. Any PowerShell gurus reading this are either laughing or crying already.

So, I started exploring how to connect to the server using some type of PS-Drive, but I quickly figured out that I’d have to loop through the servers that way. Plus, I don’t think there’s a way to check Agent status through the SQL drive. So that went out the window. Then it was on to Invoke-Sqlcmd. After all, I can call multiple servers through the -ServerInstance parameter, so I just needed to find a good T-SQL statement to check Agent status… Googling & Binging didn’t turn up a good T-SQL statement to check Agent status (not saying there isn’t one, but I couldn’t find it easily; plus, this is a story about PowerShell). On to SMO. All I had to do was figure out how to connect to the server and the SQL Agent object through SMO. I’d have to loop through the servers, but for such a simple script, that shouldn’t be hard. Ever tried to connect directly to the Agent through SMO? Me neither. It’s a pain.

Finally, I referred back to Don Jones’ (blog|twitter) book, Windows PowerShell 2.0: TFM (and a more appropriately named book there isn’t), and noticed a cmdlet, Get-Service… Hey! Isn’t SQL Agent basically a service? Yes. Yes it is. Here’s the final script, in all its simplistic glory:

Get-Service -DisplayName "SQL Server Agent*" -ComputerName (Get-Content "servers.txt") | Select-Object MachineName,Status

Yep, it was that easy. Kids, don’t try this at home. Take the time to learn what you’re doing. Don’t assume you know.



How to Tell if Execution Plans are Reused

October 4, 2010 at 8:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I try to watch the search phrases that point people to the blog, because sometimes you get a sense of what problems people are running into. One question I’ve seen a lot lately is along the lines of “how do you know if an execution plan is being reused?”

Since compiling an execution plan can be an extremely expensive operation, it’s worth your time to understand how well a given plan is getting reused. If you’ve seen me present, you’ve probably heard me talk about the application that had a query with an 86-table join. Recompiles on that thing were frequent and extremely costly. The only good news was that they were recompiles, meaning the plan was at least being reused between them; if we hadn’t been getting any plan reuse at all, it would have been an even worse system than it was.

There are a number of ways you can see if a plan is being reused. One of the easiest is to hit the DMOs that look into the plan cache: sys.dm_exec_procedure_stats or sys.dm_exec_query_stats. These DMOs are somewhat different and somewhat the same; actually, they sort of cross each other. sys.dm_exec_procedure_stats gives you aggregate information about stored procedures, while sys.dm_exec_query_stats gives you aggregate information about queries, which may be run within stored procedures. So either or both could be useful depending on what you’re attempting to measure. However, both return a very useful counter, execution_count. Quite simply, that tells you how often the plan, whether for the statement or the procedure, is being reused. To get maximum gain out of using either of these, you’ll want to combine them with other DMOs. Something along these lines will show you the plans for procedures that have been running against a system, ordered by the number of times they’ve been reused:

SELECT deps.execution_count,
       OBJECT_NAME(deps.object_id, deps.database_id) AS 'Procedure',
       deqp.query_plan
FROM sys.dm_exec_procedure_stats AS deps
CROSS APPLY sys.dm_exec_query_plan(deps.plan_handle) AS deqp
ORDER BY deps.execution_count DESC;

The only thing wrong with using the DMO like this is that you can only see what’s currently in cache. This means no history, depending on how volatile the cache is on your system.
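
If you just want a quick look at reuse counts for everything currently in cache, statement, procedure, or ad hoc, sys.dm_exec_cached_plans exposes a usecounts column. Here’s a minimal sketch (it shares exactly the same limitation, of course; it only sees what’s in cache right now):

SELECT cp.usecounts,
       cp.objtype,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;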

Another way to tell if a plan is being reused is to set up a server-side trace and capture the SP:CacheMiss or SP:CacheHit events. This is pretty straightforward: if the plan wasn’t in cache, you get a miss event; if it was there, you get a hit. But the trace has to be running in order for you to know whether you had a hit or a miss. The good news is that if it is running, you’ve got historical information, since it captures the events as they occur.

You can also catch recompile events using SQL:StmtRecompile. Yes, you can use SP:Recompile if you’re only ever dealing with procedures, but if you’ve got any kind of ad hoc querying or triggers going on in the system, you’ll miss recompile events. Basically, because recompiles are at the statement level starting in SQL Server 2005, Books Online recommends only using SQL:StmtRecompile.
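
If you’d rather script the trace by hand instead of exporting it from Profiler, the outline below is roughly what it looks like. Treat it as a sketch only: the event IDs (34 = SP:CacheMiss, 38 = SP:CacheHit, 166 = SQL:StmtRecompile), the column IDs, and the output path are my reading of the sp_trace_setevent documentation, so verify them against Books Online before running it anywhere that matters.

-- Create a server-side trace; the path is made up, the folder must exist, and no .trc extension is supplied.
DECLARE @TraceID int;
DECLARE @maxfilesize bigint;
SET @maxfilesize = 5;
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\PlanCacheEvents', @maxfilesize, NULL;

DECLARE @on bit;
SET @on = 1;
-- SP:CacheMiss (34): capture TextData, SPID, StartTime
EXEC sp_trace_setevent @TraceID, 34, 1, @on;
EXEC sp_trace_setevent @TraceID, 34, 12, @on;
EXEC sp_trace_setevent @TraceID, 34, 14, @on;
-- SP:CacheHit (38)
EXEC sp_trace_setevent @TraceID, 38, 1, @on;
EXEC sp_trace_setevent @TraceID, 38, 12, @on;
EXEC sp_trace_setevent @TraceID, 38, 14, @on;
-- SQL:StmtRecompile (166)
EXEC sp_trace_setevent @TraceID, 166, 1, @on;
EXEC sp_trace_setevent @TraceID, 166, 12, @on;
EXEC sp_trace_setevent @TraceID, 166, 14, @on;

-- Start the trace and return its ID so you can stop it later with sp_trace_setstatus.
EXEC sp_trace_setstatus @TraceID, 1;
SELECT @TraceID AS TraceID;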

That’s pretty much it. Using these methods in the appropriate place will let you know if the plan is being reused or not.


Parameter Name Size And Performance

September 27, 2010 at 8:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I saw an odd statement the other day, “The size of the name of the parameter does not affect performance.” My first thought was, “Well, duh!” But then, I had one of those, “Ah, but are you sure” thoughts. And you know what, I wasn’t sure.

If the size of the parameter name did affect performance, I figured, the one sure place it would be evident is the size of the execution plan. Right? I mean, if there were an impact on memory, and hence on performance, that’s probably where you’d see evidence of it. I wrote two queries:

 DECLARE @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance int
SET @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance = 572
SELECT soh.SalesOrderID
,sod.SalesOrderDetailID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.SalesOrderID = @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance

DECLARE @v int
SET @v = 572
SELECT soh.SalesOrderID
,sod.SalesOrderDetailID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.SalesOrderID = @v 

If you run this against AdventureWorks2008R2 you’ll get two distinct, but identical, execution plans:

You can see that they look identical, but how do I know they’re distinct? If you run this query:

 SELECT deqs.creation_time,
deqs.query_hash,
deqs.query_plan_hash
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text LIKE '%SELECT soh.SalesOrderID%' 

You’ll get this back as a result:

creation_time             query_hash          query_plan_hash
2010-09-23 18:18:09.347   0x8D0FB9D524B8DD4D  0x13707445560737BA
2010-09-23 18:18:16.223   0x8D0FB9D524B8DD4D  0x13707445560737BA

Two distinct queries (note the two different creation times), but with identical hash values, so the plans generated are essentially the same, yet they’re cached separately because they were built from different query text, including the monster parameter name. So, how do we see if there’s a difference in the generated plans that could affect performance? How about the execution plan properties? First, the property sheet for the SELECT operator for the query with the long parameter name:

Of particular note is the Cache Plan Size. Let’s compare it to the same property sheet for the small parameter name:

If you compare the two, you’ll see that they’re the same. In fact, if you look at almost all the values, you’ll see that the Compile CPU, Compile Memory, and Compile Time are all identical. Based on all this information, I have to conclude that no, the size of the name of the parameter doesn’t affect performance, positively or negatively. But why?

I’m actually not 100% sure, but based on some things I know, here’s what I think. The algebrizer, which runs as part of query compilation just ahead of the optimizer, breaks down all the objects referred to within a query. It assigns them all values and identifiers for that plan, as part of getting the information together to feed into the mathematical part of the optimizer. I’ll bet it simply assigns parameters an internal identifier of the same type, if not the exact same value, regardless of the name, and that identifier is the same size from one execution plan to the next.

This means you don’t save memory by naming parameters @a, @b, @c when you really mean @ReferenceCount, @MaxRetries, @BeginDate. Do I think you should use parameter names of the silly length I used above? No, of course not, because it makes the T-SQL code less clear. But so does using equally silly, cryptically short parameter names.
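
If you want to check this on your own system without opening the graphical plan properties, the plan cache will also tell you how big each plan is. This is just a sketch; the LIKE filter is only there to narrow the results down to the two test queries above:

SELECT cp.size_in_bytes,
       cp.usecounts,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SELECT soh.SalesOrderID%';

If the parameter name mattered, you’d expect to see different size_in_bytes values for the two plans.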

Don’t make your TSQL code hard to read. It doesn’t help performance.


Viva Las Vegas!

August 11, 2010 at 11:00 am (SQL Server 2005, SQL Server 2008, TSQL)

I won’t be going to Las Vegas, but I will be presenting to the Las Vegas SQL Server Users Group, S3OLV. The session will be “Introduction to Execution Plans.” Please swing by if you’re in the area.

I’m not sure if they’ll make the LiveMeeting available to the public or if they’ll record the session. But if they do, please attend that way too.


SQL University: Introduction to Indexes, Part the Third

July 23, 2010 at 5:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Nice to see most of you have managed to fight your way through the shoggoths outside to attend another lecture at the Miskatonic branch of SQL University. This will be the third and final part of the introduction to indexes lecture. Please, if you’re going mad, step out into the hall. Our previous two lectures introduced the concept of indexes and then talked about two types of indexes, clustered and nonclustered. This lecture will cover the concept of statistics as they relate to indexes.

If you followed the previous lecture then you know that indexes are stored in a Balanced Tree or B-Tree structure. You know that this storage mechanism is intended to provide fast retrieval of data. But, how can the query engine inside SQL Server know which index has the information it wants? How can the query engine know that a given index will provide a mechanism to speed up the query that is being run? Statistics.

Statistics are information that SQL Server generates describing the selectivity of a column or columns in the database. They may or may not be part of an index: columns alone can have statistics, and indexes can have statistics. The query optimizer uses statistics to determine how it will access data. You may have the greatest index in the world, but if your statistics are wrong, that index won’t get used. By default, statistics are automatically created and automatically updated. The automatic update of statistics is based on a formula that depends on changes to the data and to the structure, as outlined in Microsoft’s documentation. In general, it’s a very good practice to leave this alone because SQL Server does a great job of maintaining statistics.
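
If you want to confirm that those automatic options are actually turned on for a given database, sys.databases exposes them directly. A minimal sketch (swap in your own database name):

SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'AdventureWorks2008R2';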

But what are statistics really?

Statistics are basically a sampling of the data, stored in such a way as to make it possible for the optimizer to understand what the data looks like without actually accessing it. They are, absolutely, metadata: data about data. The sampling is usually a scan across the data that reads a certain percentage of the rows in the table, but it can also be a full scan, where SQL Server looks at all the data in the table. The metadata that is collected is broken down into several pieces. Statistics are represented through:

  • The histogram, which provides a picture of the data in a column
  • The header, which provides information about the statistics or meta-data about meta-data
  • Density information, which stores the distribution of the data across the columns being tracked by the statistics

In short, a lot of information. To start understanding what this all means, let’s see how to retrieve a set of statistics:

DBCC SHOW_STATISTICS ("Sales.SalesOrderDetail", IX_SalesOrderDetail_ProductID);

This query uses tables stored in the AdventureWorks2008R2 database to display statistics information. Just as a side note: so many DBCC commands have been replaced by dynamic management views that I honestly expected one for statistics too, but it’s not there. Running the query returns the following set of information:

The first result set at the top is the header. Most of the information you see there should make sense to you based on what we’ve been talking about. A couple of points worth noting: the “Rows Sampled” column shows how many of the rows were looked at while the stats were being built; Steps gives you an indication of the size of the histogram; and Density shows the selectivity of the statistics, although that column is not the one used by the optimizer. The main thing I use the header information for is to see when the statistics were last updated.

The second result set shows the density of the columns used to define the index. You can see that each column is measured as it gets added to the index. This shows you how a compound index makes the data more and more selective. If those numbers are at all big, as opposed to vanishingly small like the ones presented here, you have a selectivity issue with your index. The less selective the index is, the less likely it will be used.

Finally, the last result set is the histogram. What I have displayed is actually just the first seven of two hundred rows. A histogram is basically a small snapshot of the data in your index; specifically, the data in the first column of your index. That’s worth knowing. If you choose, for some crazy reason, to build an index with a bit column as the leading edge (that means the first column) and you’re surprised that your index isn’t getting used, this is where you can go to understand why. The histogram is one of the main tools SQL Server uses to determine which operations it will use to retrieve your data. Here’s how it works. First you get the RANGE_HI_KEY, which shows the upper bound column value, the top of the range of rows that this entry in the histogram represents. Then you see RANGE_ROWS, which displays the number of rows within the range represented by this entry in the histogram (except, for this first entry, there is no range; it’s a single value). Next is EQ_ROWS, which tells you how many rows are equal to the value represented by RANGE_HI_KEY. That number is large here because there is no range of rows; this value is represented by approximately 2838.166 rows inside the data. Following that is DISTINCT_RANGE_ROWS, which shows you the distinct number of rows within each stepped range of values. In this case, again, because we’re not dealing with a range, that number is zero. Finally, you can see AVG_RANGE_ROWS displaying the average number of duplicate values within the range.

Scrolling down within that set of data you can see what a range of values looks like:

In this case the range covers two values with the upper limit being 805 and the previous upper limit being 801.
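
As a side note, if you only want one of the three result sets, DBCC SHOW_STATISTICS accepts the STAT_HEADER, DENSITY_VECTOR, and HISTOGRAM options. For example, to return just the histogram:

DBCC SHOW_STATISTICS ("Sales.SalesOrderDetail", IX_SalesOrderDetail_ProductID) WITH HISTOGRAM;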

In short, you can see what the optimizer sees and determine whether you should have more up-to-date statistics. For example, if I query the table and retrieve ProductID values between 802 and 805, which corresponds to step 72 in the histogram, I get 246 rows, not the 442 the histogram suggests I should see. Since the statistics on the table are a little old (they haven’t been updated since June 23rd), I’ll update them. You can use sp_updatestats, which will sample the data and generate a histogram with some guesses, or you can do this:

UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN

Now when I run DBCC SHOW_STATISTICS, the header information shows that the rows sampled equal the rows in the table. My new histogram has almost exactly the same distribution, but the row counts are now very accurate. In my case, RANGE_ROWS for that step is 200 and EQ_ROWS is 46, which adds up to the 246 rows we got from querying the data directly. That represents a very accurate set of statistics.
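
For reference, the 246 I keep quoting comes from counting the rows directly, something along these lines:

SELECT COUNT(*) AS RowsInStep
FROM Sales.SalesOrderDetail AS sod
WHERE sod.ProductID BETWEEN 802 AND 805;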

So how do you know when to update your statistics? It’s really hard to say. If you start seeing different execution plans than you did previously, or if you note that the date on the statistics seems old, you might want to update stats. How do you determine whether to use a full scan or sampled statistics? Again, hard to say. In most circumstances a sampled set of statistics should work, but in some cases, if the data distribution is skewed, you may want to run a full scan.
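
To check those dates without running DBCC SHOW_STATISTICS against every index, STATS_DATE works nicely. A quick sketch against the same table (substitute your own):

SELECT s.name AS statistics_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail')
ORDER BY last_updated;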

There is clearly a lot more to discuss about statistics and how to maintain them, and a lot more to discuss about indexes; I didn’t even get to index fragmentation. For today, you should remember:

  • Statistics are used to determine which indexes work well for a query
  • Statistics are composed of multiple sets of data: the header, density information, and the histogram
  • Histograms have a maximum of 200 steps
  • Histograms show the data distribution
  • Despite automatic maintenance, statistics can be out of date
  • The sampled update of statistics doesn’t always create the best set of data

For more information on statistics, I recommend Microsoft’s Books On Line for SQL Server. It’s a great resource.

But I hear the whippoorwills kicking up and I see a funny-looking fog developing outside. It’s getting awfully dark for mid-day. One of the other professors stepped on some squealing white… thing… on his way in today, and one of the TAs went stark raving mad at something they saw in the rectory. Time to end class. Best of luck on your journey back to your dorms. I’m locking myself in my office.

While I’m there, I might churn out one or two more entries on indexes. Look for them in the SQL University E-Book (no doubt to be published posthumously).


SQL University: Introduction to Indexes, Part the Second

July 21, 2010 at 5:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Welcome once more to the Miskatonic branch of SQL University. Please try to concentrate. I realize the whippoorwills singing outside the window in a coordinated fashion that sounds almost like laboured breathing can be distracting, but we’re talking about indexes here.

We left the last class with a general idea of what an index is; now it’s time for some specifics. There are several different kinds of indexes, as we talked about last class, but the two you’re probably going to work with the most are clustered and non-clustered. Each of these indexes is stored in a structure called a B-Tree, a balanced tree, not a binary tree. That’s a very important distinction.

A B-Tree is a doubly linked list that is defined by the keys of the index on the root and intermediate pages, and at the leaf level by the data itself in the case of clustered indexes. Some of you no doubt think I’m quoting from De Vermis Mysteriis. Basically, for our purposes, a B-Tree consists of a series of pages. There is a top page, or root page, that defines the beginning of the index key. It points to a series of intermediate pages. Each intermediate page contains a range, a previous and a next value, and these all point to each other, hence doubly linked. The idea is that SQL Server can quickly identify which intermediate page has the pointers down to the leaf node, the final node in the stack. The values of these pointers are defined by the key of the index, the column or columns that you define when you create the index. There are always at least two levels, leaf and root, but there can be more, depending on the amount of data and the size of the keys. Just remember: the size of the key, which refers both to the data types in the key and the number of columns, determines how many key values fit on a page. The more key values on a page, the faster access will be; the fewer key values, the more pages that have to be read, and therefore the slower the performance.

In general, the purpose is to be able to quickly navigate to a leaf or set of leaf pages. When a B-Tree is used and the query engine is able to navigate quickly down to the leaf needed, that is an index seek. But when the B-Tree has to be moved through, in whole or in part, scanning for the values, you’re looking at an index scan. Obviously, in most cases a seek will be faster than a scan because it accesses fewer pages to get to the leaf needed to satisfy the query. Just remember, that’s not always true.

Let’s get on to the indexes. It’s already been mentioned, but it bears repeating: the principal difference between a clustered and non-clustered index is what is at the leaf level. In a non-clustered index, it’s simply the key values, any values added through the use of the INCLUDE option, and a lookup value pointing to either the clustered index key or an identifier within the table. In a clustered index, the data itself is stored at the leaf. This is why people will frequently refer to a clustered index as being “better” than a non-clustered index: you’re always going directly to the data when you’re looking information up within a clustered index. But, as with the scan vs. seek argument, this is not always true either.

I mentioned that a non-clustered index refers back to the clustered index, if there is one on the table. Because the data is stored at the leaf level of the clustered index, when you need to retrieve other columns after performing a seek on a non-clustered index, you must go and get those columns from the clustered index. This is known as a key lookup, or in older parlance, a bookmark lookup. This operation is necessary when data is needed that the non-clustered index doesn’t supply, and it can be very expensive because you’ve just added extra reads to your query.

What if there isn’t a clustered index on the table? What does the non-clustered index use to find other columns? If the table doesn’t have a clustered index, then that table is referred to as a heap. It’s called a heap because the data is simply stored in a pile, with no logical or physical ordering whatsoever. With a heap, SQL Server takes it upon itself to identify the leaf-level storage and creates a row ID value for all the rows in the table. This row ID can be used by the non-clustered index to find the data. That is referred to by the completely arcane and incomprehensible term, RID lookup. You might be thinking, hey, that means I don’t have to create a clustered index because SQL Server will create one for me. You’d be wrong. Maintaining the row ID is an expensive operation and it doesn’t help in retrieving the data in an efficient manner. It’s just necessary for SQL Server to get the data back at all. In general, this is something to be avoided.

A non-clustered index doesn’t necessarily have to perform a lookup. If all the columns referred to in a query are stored within a non-clustered index, either as part of the key or as INCLUDE columns at the leaf, you get what is called a “covering” query: a query where no lookup is needed. Indexes that can provide a query with everything it needs are referred to as covering indexes. A covering query is frequently one of the fastest ways to get at data. This is because, again, depending on the size of the keys and any INCLUDE columns, a non-clustered index will fit more rows on a page than the clustered index will, so fewer pages have to be read, making the operation faster.
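
To make that concrete, here’s a sketch against AdventureWorks. The index name and the INCLUDE columns are just examples I picked to cover this particular query; they aren’t from an existing index:

-- Key on ProductID, carry OrderQty and UnitPrice at the leaf level.
CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID_Covering
ON Sales.SalesOrderDetail (ProductID)
INCLUDE (OrderQty, UnitPrice);

-- Every column this query touches lives in the index, so no key lookup is needed.
SELECT sod.ProductID,
       sod.OrderQty,
       sod.UnitPrice
FROM Sales.SalesOrderDetail AS sod
WHERE sod.ProductID = 772;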

By and large, a good guideline is to put a clustered index on all tables. SQL Server works extremely well with clustered indexes, and they provide you with a good access mechanism to your data. If you don’t put a clustered index on the table, SQL Server will create and maintain a row ID anyway, but as I said before, that doesn’t save the server much work and it doesn’t provide you with any performance enhancement.

That’s a basic introduction to the three concepts of the clustered index, the non-clustered index and the heap. The points I’d like you to remember are:

  • Indexes are stored in Balanced Trees
  • Balanced Trees have, generally, three levels, root page, intermediate page, and leaf page
  • In clustered indexes, data is stored at the leaf page
  • In non-clustered indexes, a pointer is maintained back to the clustered index or the row id
  • A heap is a table without a clustered index

Remember those things and you can really begin to dig down on how indexes work. Understanding how they work will assist you in designing them for your database and your queries.

Next class we’ll go over statistics.

I wouldn’t walk back to your dorm by way of the shore. I’ve seen some rather odd looking people near the docks lately that didn’t give me a good feeling. See you next time… maybe.


SQL University: Introduction to Indexes, Part the First

July 19, 2010 at 5:00 am (PowerShell, spatial data, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Right, all eldritch tomes are to be closed and Elder Signs are to be put away during this course.

Welcome to the History department here at the Miskatonic branch of SQL University. Why the History department? Well, first, because I like history and have frequently thought I would enjoy teaching it. Second, because I needed a hook upon which to hang part of the story I want to tell. What story is that you ask? Why, the story of the Dewey Decimal System. We are interested in studying history and historians must classify our subjects carefully. For advanced students we’ll be covering the Library of Congress Classification System and the…

Right, I give, this is the introductory class on indexes. If you thought we were covering something exciting and sexy like PowerShell, you’re in the wrong room.

Indexes… indexes… There are, of course, different kinds of indexes. I’m sure that some of you, glancing ahead in your books, are already thinking, “yeah, two.” And you would, of course, be ABSOLUTELY WRONG! That’s why you’re in this class: because you don’t know. There are a large number of different kinds of indexes. Most people think of the standard indexes, of which there are two, clustered and non-clustered. When pressed they can usually come up with the Full-Text index and possibly even the XML index. But that leaves out Spatial indexes, Filtered indexes… and more. Microsoft’s documentation lists eight different kinds of index:

  • clustered
  • non-clustered
  • unique
  • indexes with included columns
  • Full-Text
  • Spatial
  • Filtered
  • XML

But I’ve seen other people count them in other ways and arrive at different numbers. Is a compound index a different kind of index? If it’s not, is unique really a different kind of index? Things to think about.

Why so many? What the heck is an index good for? They must be useful critters or Microsoft wouldn’t have put so many different sorts (however many that is) into SQL Server. I started off talking about the Dewey Decimal System for a reason. An index, any of the indexes we’re going to talk about, is primarily meant, like the DDS, as a mechanism to make finding things easier. That’s all it is. Pretty simple, right? Wrong. You clearly haven’t spent time with SQL Server indexes or the DDS. It’s really complicated. But, just like the DDS, learning how indexes work will make using them much easier.

Remember, the main purpose of a database, despite what your DBA may secretly feel in his or her heart, is not to keep, store, and protect data. No, the main purpose of a database is to feed that data up to your business users, whoever they may be, in a timely and accurate fashion. That’s where indexes come in. They will help your queries get the data out to your users faster. Think about your database like a really huge library and your data like a bunch of books. The index acts like the DDS: a mechanism to speed you through the library so you can quickly and easily retrieve the book that you want.

Enough comparisons. Since this is introductory, I just wanted to get the idea of indexes into your head. In the next installment I’ll take on two (or four, depending on how you count them) different kinds of indexes, starting with the standard two that you expected me to cover: clustered and non-clustered indexes. I’ll also introduce the concept of a heap, and we’ll talk about what the heck a B-Tree is.

See you next class, probably. Be careful crossing the quad, I’ve heard Wilbur Whately is back on campus and we all remember what happened last time.


Red Gate SQL Source Control

July 13, 2010 at 9:23 pm (SQL Server 2005, SQL Server 2008, Tools, TSQL)

You just have to love Red Gate tools. They find the small area that they want to cover and then they cover it extremely well. I rave regularly about SQL Prompt, SQL Compare, and SQL Search (the free one, btw). I’ve got SQL Data Compare and SQL Data Generator open & working on my desk regularly. I’m dabbling in their other tools fairly often as well. I just like Red Gate tools. I guess my constant & consistent praise is why I’m a “Friend of Red Gate.” I like to mention that before I start praising their tools some more, just so no one thinks I’m hiding it. Why would I hide it? I’m proud to say it. I am a Friend of Red Gate! … anyway… where was I… right, new software. I took a small part (a very small part) in the beta for their new software, SQL Source Control. I thought it was pretty cool even when it wasn’t quite working right. Well, now it’s out, working very well, and it’s pretty slick.

Basically, Red Gate has created a nice, tight coupling between source control and your database. They currently support Apache Subversion and Microsoft’s Team Foundation Server (TFS). It lets you keep track of your databases in the same way that you track your code. I honestly believe this is a must for any reasonably sized development team (read: more than two people). I could expound on why, but instead I’ll just talk some more about SQL Source Control.

First thing you need to know is that it’s hooked into Management Studio. After you do the install, you get some extra windows in SSMS that look something like this:

I’ve scratched out my own server & database names, but you get the idea. The description summarizes it very well. Lots of people can work on the database, save the scripts into source control, and then they can pull that common set of scripts back out to do more work, just like working with code. It really is the best way to develop.

You just have to connect up the database following the directions and you’ll see something like this:

If you can see that, that’s a database (name hidden) that’s been hooked up to source control. Actually, that and the change to the set-up screen are about your only indications that this tool is running. I love the lack of intrusion.

Better still, each time you reconnect the database, as it goes and checks to see if there are updates in source control, you get a little spinning… looks like a yin/yang symbol.

Enough about pretty graphics. How does it work? Extremely well. I started adding new database objects and editing existing objects, and all it ever did was put one of its little symbols on the object that I had created or edited, marking it as a change. When I was ready to move the changes to source control, I just clicked on the Commit Changes tab. All the changes are listed and you see before-and-after scripts comparing the code in the database with the code in source control.

It just works. Same thing going the other way. A database already connected can just pull changes out and apply them. Nothing I did in all my testing hit a snag (granted, I was just working on pretty traditional tables, procedures, indexes, etc.).

The one thing I’ve found that I don’t like is that there doesn’t seem to be a facility for deploying the databases automatically. Instead, I had to create a blank database, hook that to the existing database in TFS and then pull down all the “missing” objects. Hopefully they’ll go to work on a way to automate that soon.

Just to reiterate, the point of the exercise is to get your code (and while you’re developing, a database is as much code as anything written in C#) into source control. Once you’re in source control, you manage your databases just like code, label, version, branch, whatever you need to do to maintain a tight coupling with the rest of the code for the app. SQL Source Control acts as a very fast and simple tool to enable that coupling for you.


Slick New Software

May 24, 2010 at 12:52 pm (SQL Server 2005, SQL Server 2008)

I’m lazy. And frankly, I’m not ashamed to admit it. When software comes along that can do the work for me, even work I can do just fine on my own, I’m interested. When that software is inexpensive, even better. When it’s free…

I just got word that Confio is putting together a free version of Ignite, their performance monitoring software. It’s basically going to be the current view of performance and not have all the historical tracking and nifty trend reports. Yeah, it’s a tease, but it’s a tease on a product that focuses on monitoring wait states, something you should be doing. Just remember, reference above, I’m lazy.

The thing is, they need some people to help them beta test. Here’s the word I received:

If you would like to participate in the beta test, you can download and get the full copy at www.ignitefree.com

As a measure of our appreciation, we will send you a $5 Starbucks card once you install and provide feedback at our IgniteFree forum  www.ignitefree.com/forum

IgniteFree will be the only free response time based performance tool. It uses the same agentless, low load monitoring technique as Ignite 8. The difference is that IgniteFree is focused on real-time monitoring, showing only the last hour of response time, server resources, query and session statistics. Longer history and advanced features are not included in the freeware.

Do some beta testing on a slick piece of software, help make it better, and get a cup of coffee (I’m assuming that’s all that $5 at Starbucks will get you). What’s not to like?


Confio Ignite: First Impressions

April 6, 2010 at 8:28 am (SQL Server 2005, SQL Server 2008)

One of the national sponsors for SQL Saturday is Confio Software. Their sponsorship of SQL Saturday alone makes their product worth checking out, but they also host the speakers’ dinner at these events. So, having eaten off their dime, I felt obligated to take a look. I’m glad I did.

Confio Ignite is monitoring software that keeps real-time and historical track of the performance of your SQL Server (and Oracle and DB2) database servers. Its focus is on wait states and queues, a very common method for troubleshooting performance.

You can get a trial download from their web site to run for a couple of weeks yourself. Everyone’s first impression of a piece of software, after the web site of course, is the install. The install routine for Ignite was very easy. It’s also a bit of a surprise: it actually uses web pages for configuration. This makes sense when you consider that the software originally comes from the Oracle and Unix world. Anyway, despite the bit of weirdness of using a web page to configure software, it’s a very easy install. Once the install is complete, it’s also really simple to add servers for the software to monitor.

I put it to work on a number of development servers to start (I’m not plugging some unknown software into Production without a shake-down. I may be stupid, but I’m not crazy). It’s agentless and it began running queries against the systems immediately. It’s storing the data to my local desktop. That data is available in real-time, a historical view, or a trending view, showing how performance is changing over time. It’s all through a web site, not dissimilar to Quest’s Foglight. You can see a list of servers, their immediate state, action choices and some other general info.

Clicking a server, after it’s had time to gather data, puts you into the Trend screen. It’s laid out very nicely. You get to break down performance by SQL, Waits, Programs, Databases, Machines, or Users. Then it shows information by Total Wait, Average Wait, or Typical Day. The chart gives you a visual representation of queries running against the system, by default numbered and assigned by an internal hashing system that Ignite uses (no, I don’t know what the internals look like; I’m inferring that from the information presented). Down at the bottom of the screen you can see Top Query Problems, which shows you queries that are running slowly or using too many resources according to configurable settings; Resources, your standard breakdown of CPU, memory paging, disk queue, and the rest; or SQL Text, literally the queries that have been running on your system.

Almost everything has a drill down that takes you to a lot more information, and there are pop-ups that show you bits of detail.

I’ve only been running it for a few days, but I can already see how it could be very useful. I’m very happy with the focus on waits and on queries. So often when you’re looking at these tools, queries are either an afterthought or ignored completely. You frequently have to spend time monitoring your queries on your own, while that very expensive monitoring software is tracking page life expectancy and buffer cache hit ratio, both useful measures, but you still need to know which query is causing the problems. Ignite gets to that information right away and still collects the other measures as well.

So far, I’m pretty pleased with the software. I’m going to switch this over to a production system in the next few days and see what I can see with Ignite running against a real system. More to come.

