SQL University: Introduction to Indexes, Part the Third

July 23, 2010 at 5:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Nice to see most of you have managed to fight your way through the shoggoths outside to attend another lecture at the Miskatonic branch of SQL University. This will be the third and final part of the introduction to indexes lecture. Please, if you’re going mad, step out into the hall. Our previous two lectures introduced the concept of indexes and then talked about two types of indexes, clustered and nonclustered. This lecture will cover the concept of statistics as they relate to indexes.

If you followed the previous lecture then you know that indexes are stored in a Balanced Tree or B-Tree structure. You know that this storage mechanism is intended to provide fast retrieval of data. But, how can the query engine inside SQL Server know which index has the information it wants? How can the query engine know that a given index will provide a mechanism to speed up the query that is being run? Statistics.

Statistics are information generated to describe the selectivity of a column or columns in the database. They may or may not be part of an index: columns on their own can have statistics, and indexes can have statistics. The query optimizer uses statistics to determine how it will access data. You may have the greatest index in the world, but if your statistics are wrong, that index won't get used. By default, statistics are automatically created and automatically updated. The automatic update of statistics is based on a formula that depends on changes in the data and changes in structure, as outlined in the Microsoft documentation. In general, it's a very good practice to leave this alone because SQL Server does a great job of maintaining statistics.
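
Just to be clear about what "by default" means here, you can check whether a given database has those options turned on. A minimal sketch (the database name is just an example):

-- Check the automatic statistics settings for a database.
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = 'AdventureWorks2008R2';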

But what are statistics really?

Statistics are basically a sampling of the data, stored in such a way that the optimizer can understand what the data looks like without actually accessing it. It is, absolutely, meta-data: data about data. The sampling is usually a scan across the data that drops in on every certain number of rows, a percentage of the data in the table, but it can also be a full scan, where it looks at all the data in the table. The meta-data that is collected is broken down into several pieces. Statistics are represented through:

  • The histogram, which provides a picture of the data in a column
  • The header, which provides information about the statistics or meta-data about meta-data
  • Density information, which describes the selectivity of the column, or combination of columns, being tracked by the statistics

In short, a lot of information. To start understanding what this all means, let’s see how to retrieve a set of statistics:

DBCC SHOW_STATISTICS ("Sales.SalesOrderDetail", IX_SalesOrderDetail_ProductID);

This query uses the AdventureWorks2008R2 database to display statistics information for one of its indexes. Just a side note: so many DBCC functions have been replaced by dynamic management views that I honestly expected one for statistics too, but it's not there. Running the query results in the following set of information:

The first result set at the top is the header. Most of the information you see there should make sense to you based on what we've been talking about. A couple of interesting points are worth noting: the "Rows Sampled" column shows how many of the rows were looked at while the stats were being built; Steps gives you an indication of the size of the histogram; and Density shows the selectivity of the statistics, although that column is not the one used by the optimizer. The main thing I use the header information for is to see when the statistics were last updated.
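
If you only want the last-updated date, you don't even need DBCC SHOW_STATISTICS. Here's a quick sketch using STATS_DATE against the same AdventureWorks table used above:

-- When was each set of statistics on the table last updated?
SELECT s.name AS StatisticsName,
       STATS_DATE(s.object_id, s.stats_id) AS LastUpdated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');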

The second result set shows the density of the columns used to define the index. You can see that each column is measured as it gets added to the index. This shows you how a compound index makes the data more and more selective. If those numbers are at all big, as opposed to vanishingly small like those presented here, you have a selectivity issue with your index. The less selective the index is, the less likely it will be used.

Finally, the last result set is the histogram. What I have displayed is actually just the first seven of two hundred rows. A histogram is basically a small snapshot of the data in your index, specifically the data in the first column of your index. That's worth knowing. If you choose, for some crazy reason, to build an index with a bit column as the leading edge (that means the first column) and you're surprised that your index isn't getting used, this is where you can go to understand why. The histogram is one of the main tools that SQL Server uses to determine which operations it will use to retrieve your data. Here's how it works. First you get RANGE_HI_KEY, which shows the upper bound column value, the top of the range of rows, that this entry in the histogram represents. Then you see RANGE_ROWS, which displays the number of rows within the range represented by this entry in the histogram (except, for this entry, there is no range; it's a single value). Next is EQ_ROWS, which tells you how many rows are equal to the value represented by RANGE_HI_KEY. That number is large here because there is no range of rows; this value is represented by approximately 2838.166 rows inside the data. Following that is DISTINCT_RANGE_ROWS, which shows the distinct number of values within each stepped range. In this case, again, because we're not dealing with a range, that number is zero. Finally, you can see AVG_RANGE_ROWS displaying the average number of duplicate values within the range.

Scrolling down within that set of data you can see what a range of values looks like:

In this case the range covers two values with the upper limit being 805 and the previous upper limit being 801.

In short, you can see what the optimizer sees and determine if you should have more up-to-date statistics. For example, if I query the table and retrieve ProductID values that are between 802 and 805, which would represent step 72 in the histogram, I get 246 rows, not 442, which is what I should see. Since the statistics on the table are a little old (they haven't been updated since June 23rd), I'll update them. You can use sp_updatestats, which will sample the data and generate a histogram with some guesses, or you can do this:

UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN

Now when I run DBCC SHOW_STATISTICS, the header information shows that the rows sampled equal the rows in the table. My new histogram has almost exactly the same distribution, but the row counts are incredibly accurate. In my case the number of RANGE_ROWS is equal to 200 and the number of EQ_ROWS is equal to 46, which adds up to the 246 rows that we got from querying the data directly. That represents a very accurate set of statistics.
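
If you want to run that check against the data yourself, the query I'm describing is nothing more than a count over the histogram step's range (a sketch against AdventureWorks2008R2):

-- Count the rows that fall into the histogram step with RANGE_HI_KEY = 805
-- and a previous RANGE_HI_KEY of 801.
SELECT COUNT(*) AS RowsInStep
FROM Sales.SalesOrderDetail AS sod
WHERE sod.ProductID BETWEEN 802 AND 805;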

So how do you know when to update your statistics? It’s really hard to say. If you start seeing different execution plans than you did previously, or if you note that the date on the statistics seems old, you might want to update stats. How do you determine if you should use a full scan or sampled statistics? Again, hard to say. In most circumstances a sampled set of statistics should work, but in some areas, if the data distribution is off, you may want to run a full scan.

There is clearly more to discuss about statistics and how to maintain them, and a lot more to discuss about indexes. I didn't even get to index fragmentation. For today, you should remember:

  • Statistics are used to determine which indexes work well for a query
  • Statistics are composed of multiple sets of data: the header, density information, and the histogram
  • Histograms have a maximum of 200 steps
  • Histograms show the data distribution
  • Despite automatic maintenance, statistics can be out of date
  • The sampled update of statistics doesn’t always create the best set of data

For more information on statistics, I recommend Microsoft’s Books On Line for SQL Server. It’s a great resource.

But I hear the whippoorwills kicking up and I see a funny looking fog developing outside. It's getting awfully dark for mid-day. One of the other professors stepped on some squealing white… thing… on his way in today, and one of the TAs went stark raving mad at something they saw in the rectory. Time to end class. Best of luck on your journey back to your dorms. I'm locking myself in my office.

While I'm there, I might churn out one or two more entries on indexes. Look for them in the SQL University E-Book (no doubt to be published posthumously).



SQL University: Introduction to Indexes

July 16, 2010 at 11:41 am (SQL Server 2008, SQLServerPedia Syndication, TSQL)

For some crazy reason (probably everyone is on vacation), Jorge Segarra (blog|twitter) who runs SQL University has asked me to submit some material. I’ve got three posts, all coming up next week, that give a very, very basic introduction to indexes in SQL Server.


Characters

October 21, 2009 at 2:42 pm (SCOM, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

No, I’m not talking about a Dickens novel. I’m talking about the number of characters in a string. I had a painful time recently because of the word “characters.” 

If you take a look at the dynamic management view sys.dm_exec_sql_text, you can get the queries that have been run on your system that are still in the cache. It's a great utility. Better still, you can get specific statements from code that is actively running through sys.dm_exec_requests, or from code that has run through sys.dm_exec_query_stats. Doing this is very simple. Each of these DMVs has a pair of columns, statement_start_offset and statement_end_offset. These columns, and I'm quoting directly from Books Online, measure the "number of characters" offset from the beginning of the SQL string and from the end of the SQL string. Using these values you can retrieve an individual statement out of a stored procedure that has multiple statements.

But… Here’s where things get tricky. Try this on your machine:

SELECT SUBSTRING(dest.text, (der.statement_start_offset) + 1,
           (der.statement_end_offset - der.statement_start_offset) + 1),
       LEN(dest.text) AS CharLength,
       der.statement_start_offset,
       der.statement_end_offset
FROM sys.dm_exec_query_stats AS der
CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest
WHERE der.statement_end_offset > -1;

You might get an error, or you might get a bunch of really odd looking statements in the first column, starting partway into the T-SQL and cutting off before or after the statement actually ends. It'll look odd. But what's the deal? The SUBSTRING function should work; logically it's configured correctly. Here's the problem.

The [text] column in sys.dm_exec_sql_text is of the datatype NVARCHAR(MAX). Unicode. If you look at the length of the text, it'll tell you exactly how many characters are in the string that was sent to your server. But statement_start_offset and statement_end_offset are measuring something different. They're not measuring characters, they're measuring bytes, and NVARCHAR stores two bytes per character. Try this query instead:

SELECT SUBSTRING(dest.text, (der.statement_start_offset / 2) + 1,
           (der.statement_end_offset - der.statement_start_offset) / 2 + 1),
       LEN(dest.text) AS CharLength,
       DATALENGTH(dest.text) AS DLength,
       DATALENGTH(dest.text) / 2 AS HalfDLength,
       der.statement_start_offset,
       der.statement_end_offset
FROM sys.dm_exec_query_stats AS der
CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest
WHERE der.statement_end_offset > -1;

You can see that the character length is whatever it's supposed to be, but the DATALENGTH is twice that much. Unicode, as stored in an NVARCHAR column, uses two bytes per character, and statement_start_offset and statement_end_offset are measured in those bytes, not in characters. You need to take that into account when dealing with these "characters."
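
One more wrinkle worth noting: statement_end_offset comes back as -1 when the statement runs to the end of the batch, which is why the queries above filter those rows out. If you'd rather include them, a common pattern (a sketch, not the only way to write it) substitutes the full length of the text:

-- Handle statement_end_offset = -1 (statement runs to the end of the batch)
-- instead of filtering those rows out.
SELECT SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
           ((CASE WHEN deqs.statement_end_offset = -1
                  THEN DATALENGTH(dest.text)
                  ELSE deqs.statement_end_offset
             END - deqs.statement_start_offset) / 2) + 1) AS StatementText
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest;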


Table Variables Are Only in Memory: Fact or Myth

October 13, 2009 at 8:40 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I keep seeing concepts that have long been disproven posted again and again on newsgroups as if they were valid information. The latest? Table variables are better because they're only in memory, whereas temporary tables write to disk through tempdb. This one is abjectly wrong. I'm not even going to hedge with "it depends." From a macro point of view, there are only a few differences between temporary tables and table variables, statistics being the biggest: temporary tables have 'em and table variables don't. Other than that, both will reside completely in memory or will swap out to the disk through tempdb, depending on their size. Among the minor differences, and the reasons you might want to use table variables over temporary tables: table variables won't cause a statement recompile while temporary tables will, and table variables don't participate in transactions, so log data for a rollback is not maintained for them the way it is for temp tables. There are a number of other uses and functions around table variables, but they're not applicable to this discussion. The point here is that table variables do not live only in memory.

Some of you don’t believe me, of course. So let’s prove this one, again. Here’s a very simple query. It creates a temporary table and loads one hundred thousand rows of data into it. The query does all this within an open transaction so that we can investigate where the data is stored, at our leisure:

BEGIN TRAN
SELECT TOP 100000
IDENTITY (INT,1,1) AS n
INTO #Nums
FROM master.dbo.syscolumns AS sc1
,master.dbo.syscolumns AS sc2

This will leave the temp table created, inside the still-open transaction. The only thing remaining is to determine where the data is stored. To do this, we can query sys.dm_db_session_space_usage. This DMV shows the pages allocated and deallocated in tempdb by each session. If I run a simple select against the DMV, I'll see 233 pages in user_objects_alloc_page_count for the session I'm currently connected as. That represents the data from the temporary table, stored within tempdb. I'll roll back the transaction and close the connection. Then I'll add the following code to the end of the query, in a new window with a whole new connection. I do this to avoid any possibility that the temporary table is still within context. Here's the addition:

DECLARE @temp TABLE(
ID INT )
INSERT INTO @temp
(ID)
SELECT n
FROM #Nums;

Now I'll run the whole query, creating two tables with identical data, one temporary table and one table variable. When I look at sys.dm_db_session_space_usage, I now see 466 pages in user_objects_alloc_page_count for the new connection. That's because tempdb is now holding two tables' worth of data instead of one.
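
For reference, the check I keep referring to is nothing fancier than this (a minimal sketch; run it on the same connection as the test):

-- Pages allocated in tempdb by the current session.
SELECT session_id,
       user_objects_alloc_page_count,
       user_objects_dealloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;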

Or, in other words, table variables are stored in tempdb like temporary tables.

EDIT: Modified the description of the differences between table variables & temp tables.


Identifying Frequently Running Queries

September 21, 2009 at 9:13 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

It's not enough to look at the longest running query. What if you have two queries, one that runs for 5 minutes and one that runs for 5 seconds? Which do you tune? The first one, of course. But let's add a little information to our thought experiment. The one that runs for 5 minutes is called at 2AM, your local time, when there are almost no users on the system, and the one that runs for 5 seconds is called 40 times a minute, all day long. Now, which one do you tune? That's right, suddenly that 5 second query takes on a HUGE significance based on how often it's called.

But how do you know how often a query is called? The easiest way to gather that information, though not necessarily the best as a long-term methodology, is a server-side trace that captures RPC and SQL Batch completion events. The beauty of the server-side trace is how easy it is to set up and how much information you can gather, so quickly. You can immediately start getting every query run on the server along with how long it took to run, how much CPU it used, and various other metrics. You can run this for short periods or long, then gather the data into a table in a database and run aggregation queries against it to generate reports showing the most frequently called queries. Simple, quick, easy. But there are a couple of problems with it. First, while you can run it 24×7, you're going to see quite a bit of data generated from the events, depending on the load on your system, of course. Second, the queries that come through will likely have all kinds of stuff associated with them, various parameter values and whatnot, that will make aggregation hard. You'll have to put together a means of cleaning and maintaining the data, or purchase one of the many third-party tools for doing this. You'll have to store and manage large amounts of data as it comes into the system. It's going to be a lot of work.

Another option, one that will require a degree of setup, is a little less accurate, since it's unlikely to capture every single execution of every single query, but it will get the majority: sys.dm_exec_query_stats. This handy little dynamic management view carries aggregate data about all the queries currently in the cache. It will show how much accumulated processing time a query has taken, the last time, the longest time, the shortest time, and lots of other information, but the value we're most interested in here is the usage count. If you just run a SELECT statement against sys.dm_exec_query_stats, you'll get interesting data back, but you won't see your query text or your execution plan. To make that information available, you'll need to combine the DMV with other DMVs. Here's a sample query that pulls back information that could be useful for identifying badly performing queries on your system, including how often they're called.

SELECT dest.text
    ,deqp.query_plan
    ,deqs.execution_count
    ,deqs.creation_time
    ,deqs.query_hash
    ,deqs.query_plan_hash
    ,deqs.total_elapsed_time
    ,deqs.total_clr_time
    ,deqs.total_physical_reads
    ,deqs.total_worker_time
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp;

There’s a lot you could do with this query, but the real power is going to come from gathering this data on a regular basis. It’s going to depend on how volatile the cache is on your system. If most of your queries sit in the cache for weeks (and that’s possible), then you could run a query once a week to gather the information. If your cache is getting flushed clean once an hour, you’d need to run this query more often. Regardless of how often you run the query, understand that, because you’re dependent on the cache, it is possible to miss data. Once you get the query running on your system, you can use the information to determine which of the queries needs to be tuned, not just on how long it runs, but also how often it runs, or even using other combinations of the information returned.

One other point, you may see ad hoc queries on your system. Here’s a simple example:

SELECT TOP (5) sod.*
FROM Sales.SalesOrderDetail sod
WHERE sod.ProductID = 772
ORDER BY sod.SalesOrderID ASC ;

If the value passed to sod.ProductID is changed to 773, or any other value, you will get a different entry in cache for this query. That's where you can put the query_hash or query_plan_hash values to work. These values are hashes that SQL Server creates for each query that it stores in cache. You can group by these values to aggregate ad hoc queries that would, if they were stored procedures, be identical executions, and arrive at an aggregated count.
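
For example, a rough sketch of that kind of aggregation, just grouping the cached stats by hash, might look like this:

-- Aggregate ad hoc executions of what is logically the same query.
SELECT deqs.query_hash,
       SUM(deqs.execution_count) AS total_executions,
       SUM(deqs.total_worker_time) AS total_worker_time
FROM sys.dm_exec_query_stats AS deqs
GROUP BY deqs.query_hash
ORDER BY total_executions DESC;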

So, if you need precise information, you can use trace events, but you pay for that with a lot of data that you have to manage.

EDIT: Changed deqs.total_elapsed_date to deqs.total_elapsed_time as per comments below.


More Free Training

September 15, 2009 at 9:59 am (SQL Server 2005, SQL Server 2008, TSQL)

Quest Connect 2009, taking place on October 21 for 24 hours, looks like it's going to have 64 different sessions, live and recorded, by a variety of names in the industry. It's another chance to dig in and learn the details on a variety of topics from some of the top names in the business. Can you say Tom LaRock? How about Tim Ford? I know you want to hear from Brent Ozar. Those are just some of the featured speakers. There's a whole slew of others. It's worth perusing, and did I mention, the price is right.

I recorded a session for them last night. It’s on the basics of understanding execution plans.


No Join Predicate

September 15, 2009 at 8:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

You could be looking at an execution plan on a query and see this message: Warning, No Join Predicate. With a very few exceptions (very few), that's not a warning, like "Mind the gap" or "Your Mileage May Vary" or "Never fight a land war in Asia." No. It's a screaming shout saying "BY ALL THE GODS IN VALHALLA, YOU DON'T HAVE ANY JOIN PREDICATES IN THIS QUERY!" Seriously, that's exactly what it says.

But, you might be asking yourself, that's what it says, but what does it mean? Please allow me to explain. The tables in your system, whether it's a classic BI star schema, or a highly normalized OLTP system, or even (shudder) ORM objects, are related to one another. Usually they're related through the use of primary and foreign keys. The primary key uniquely identifies a row, like a particular value in a dimension table. This is then related to some number of rows in another table, like a fact table, through the foreign key. The foreign key acts to protect the integrity of the data, ensuring that if you insert something that's supposed to relate to another row, then that other row really exists, and if you try to delete that other row, the foreign key will remind you that, hey, that row has things that depend on it existing in another table. All of this can be quite complicated or quite simple. Heck, you may be using an ORM system and not have a single foreign key in sight.

But when you go to retrieve data from two related tables in a TSQL query, you need to define that relationship in the TSQL itself. SQL Server 2008 relies on what was defined as the ANSI standard back in 1992. But if you learned your SQL coding skills back in the day, or on Oracle, you may be used to simply listing tables and then defining their relationships through the WHERE clause. Just a side note: unless you learned your SQL skills in Oracle, if you were born AFTER the 1992 standard was enacted, you don't have an excuse. Anyway, the old way that code looked was this:

SELECT…
FROM TableA AS a, TableB as b, TableC as c
WHERE a.MyID = b.MyID
AND b.YourID *= c.YourID
AND….

What you have there is an inner join, or in the old parlance an equi-join because of the use of the equals sign, between TableA and TableB, and a left outer join, defined through the use of the '*', between TableB and TableC. Any other criteria that defined the WHERE clause then followed.

Flat out, in SQL Server 2008, you can't do this any more. It will not support that style of outer join. The newfangled method, remember, new as of 1992, is this:

SELECT …
FROM TableA as a
INNER JOIN TableB as b
ON a.MyID = b.MyID
LEFT JOIN TableC as c
ON b.YourID = c.YourID
WHERE….

This forces you to separate the relations between the tables, as defined by the JOINs, from the filtering supplied through the WHERE clause. And yes, it's a lot more complicated than I'm making it out to be, but I'm trying to establish the ground rules here. Where was I? Oh yeah, missing join criteria.

What happens is, someone who's used to writing things in the ANSI '89 syntax can either forget to define the JOIN criteria or accidentally remove it while editing or something. Regardless of how it happened, you've ended up with what is known as a Cartesian product: shortest possible explanation, everything joined on everything else. For example, here's a real query against the AdventureWorks2008 database:

SELECT *
FROM Sales.SalesOrderHeader AS soh
,Sales.SalesOrderDetail AS sod
,Production.Product AS p
WHERE soh.SalesOrderID = 43659

That query does not define join criteria between any of the tables involved. This results in the following estimated execution plan (estimated because I got sick of waiting for the query to finish):

[Image: MissingJoin, the estimated execution plan]

If you look at the final Nested Loop operation right before the SELECT statement operator, you'll notice there is a little exclamation point inside a yellow triangle. This is an indication of a warning in your query. You can hover over the operator to get the tooltip, which looks like this:

[Image: NoJoinPredicate, the operator tooltip]

You can see at the bottom the warning that no join predicate is in use. If you constructed the query appropriately it would look more like this:

SELECT *
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID
JOIN Production.Product AS p
ON sod.ProductId = p.ProductID
WHERE soh.SalesOrderID = 43659;

Which, instead of the 800,000 rows the first query had returned before I interrupted it, returned only 12 rows and had a very clean execution plan:

[Image: Join, the corrected execution plan]

I would never consider “No Join Predicate” as a warning. It’s a blaring klaxon of a problem and you need to do something about it, right away.


No really. When did this statement start?

July 31, 2009 at 8:07 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication)

I thought I had an interesting answer to the question. Unfortunately Adam Machanic, who has been working in this specific area quite a bit longer than I have, and, let’s face it, is a bit smarter, pointed out (in the comments) the possibility that I didn’t have this quite right.

I ran all his tests, and sure enough, it wasn't working the way I had seen it work. First, I tried modifying his query so that it ran the SELECT statements against the DMVs between the operations, but this didn't change the results; start_time and last_request_start_time were the same. From a scientific standpoint, if no one can replicate your results, the experiment failed.

So I went back and tried again. To be sure I got things right and wasn’t, somehow, querying the wrong server or looking at the wrong spid, I added a couple of checks, specifically I queried the spid I was working with from my first query: SELECT @@SPID.

Unlike Adam's tests, I continued running mine with two connections, because I wanted to see multiple entries while the test was running. The results from sys.dm_exec_requests showed what I had seen before. There were 25 rows, all of them showing the exact same start_time. The command, sql_handle, wait_time, cpu_time, and all the other values that you would have expected to change, did change. Elapsed time and logical_reads, for example, showed a steadily increasing value, as you would expect.
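
This is roughly the kind of check I was running from the second connection (a sketch; 52 stands in for whatever spid SELECT @@SPID returned on the first connection):

-- Watch the request for the test spid while the load query runs.
SELECT der.session_id,
       der.start_time,
       der.status,
       der.cpu_time,
       der.total_elapsed_time,
       der.logical_reads
FROM sys.dm_exec_requests AS der
WHERE der.session_id = 52;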

Turning to sys.dm_exec_sessions, it showed more data, 40 rows instead of 25, simply because it continued to record a session the whole time I was querying the view, since the spid wasn't dropped. Then I finally saw the problem. It was the way I tested. I started the query against the DMVs before I started the query that was generating the load. Because of this, the initial queries against sys.dm_exec_sessions showed a different start and end time, unrelated to the test. Then the test ran and a second set of values was generated when it completed. What I had attributed to individual statements completing and being registered was nothing of the sort. It was just the different batches finishing, not the statements.

Nuts. 100% wrong. That’ll make Tim Ford happy though.

Unfortunately I’m back to where I was. Is there a way to tell when a statement started? I don’t know of a dynamic method for doing so. Obviously you could run a trace to get the statement start event, but you’ll be collecting a lot of data depending on how you set up that trace, so it wouldn’t be a good approach for everyday troubleshooting. I’ll keep looking.


MS Field Engineers on Performance Troubleshooting

July 20, 2009 at 7:01 am (SQL Server 2005, SQL Server 2008, TSQL)

Do you want to get a glimpse into how the Microsoft Field Engineers would go about troubleshooting performance issues on your server? Then go and read this blog entry by Denzil Ribeiro. Not only is this an excellent how-to on troubleshooting performance problems, but Mr. Ribeiro provides multiple links that describe the concepts he’s dealing with further, making it a great piece of documentation.

The MS Field Engineer blog is not terribly active, but what gets posted there is worth reading. If you don’t have it on your feed list, you should.


Execution Plan Compile Termination

July 17, 2009 at 7:22 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Recently I've been seeing a lot of people with bad execution plans, desperately trying to tune them, but unable to explain why they had such bad plans. More often than not these were larger queries, with a number of derived tables, CTEs, CROSS APPLYs, etc. In most cases the statistics appeared to be fine (this is usually checked by comparing estimated and actual rows within the operations in the execution plan) and the queries themselves didn't look crazy, but the execution plans were, flat out, not good.

If you're looking at a plan and it doesn't make much sense, there's one option that most people don't check: SQL Server didn't have enough time to complete optimization. The optimizer is a pretty amazing bit of code. The scary volume of work it does in frighteningly short periods of time is quite awesome. However, it doesn't always get done. To quickly check this situation you need to determine the "Reason For Early Termination Of Statement Optimization." In the graphical execution plans this is found by going to the final operator in the plan, usually an INSERT, UPDATE, DELETE, or SELECT operator. Most people know to hover the mouse over the operator and get interesting little facts, but most people forget about checking the properties. Right-click on this, or any other operator, and select Properties from the context menu. This will open the properties window, which looks something like this:

[Image: Properties, the operator properties window]

Right near the bottom you can see that this execution plan is showing an early termination reason of "Time Out." That means that the optimizer didn't finish optimizing. When the optimizer is running, it tries a whole series of different joins, join orders, filters, what have you, all through some mathematical process that I'm not even remotely qualified to describe. But if it runs out of time, it goes with the last plan it had. The last plan it had might be the worst possible plan for this query, but because it hit a time out…

You can also find this right in the XML for the execution plan. In the Batch/Statements/StmtSimple element, find the StatementOptmEarlyAbortReason attribute. It will show the same thing, "Time Out," when the optimizer has run out of time.
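
If you want to hunt for this across the cache rather than one plan at a time, you can query the plan XML directly. Here's a sketch (it can be expensive on a busy server, so treat it as a diagnostic, not something to run constantly):

-- Pull the early termination reason out of cached plans that have one.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT dest.text,
       deqp.query_plan.value('(//StmtSimple/@StatementOptmEarlyAbortReason)[1]',
                             'VARCHAR(50)') AS EarlyAbortReason
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
WHERE deqp.query_plan.exist('//StmtSimple[@StatementOptmEarlyAbortReason]') = 1;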

You can get other early abort reasons, but the best one is "GoodEnoughPlanFound." That means that the optimizer was able to complete its job. That doesn't mean you have a good plan, but it means that you don't have some interim pile of junk that is doing you more harm than good.

What can you do to fix this? Write simpler queries.

OK. Maybe that’s not helpful. Try to simplify the query. You may not be able to. If not, you might try rearranging the query, changing the join order. Yes, as everyone says, join order doesn’t matter, but the full statement is “Join order doesn’t normally matter.” Sometimes it can make a real difference. You may also have to supply some query hints, join hints, etc., to try to nudge the optimizer along.
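
As one example of the kind of nudge I mean, here's a sketch against the AdventureWorks2008 tables used earlier; the OPTION (FORCE ORDER) hint tells the optimizer to keep the joins in the order they're written, which takes some of the work, and some of the choices, away from it:

-- Force the optimizer to join the tables in the order written.
SELECT soh.SalesOrderID,
       p.Name
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
JOIN Production.Product AS p
    ON sod.ProductID = p.ProductID
WHERE soh.SalesOrderID = 43659
OPTION (FORCE ORDER);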

