No really. When did this statement start?

July 31, 2009 at 8:07 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication)

I thought I had an interesting answer to the question. Unfortunately Adam Machanic, who has been working in this specific area quite a bit longer than I have, and, let’s face it, is a bit smarter, pointed out (in the comments) the possibility that I didn’t have this quite right.

I ran all his tests and, sure enough, it wasn’t working the same way that I saw it work. First, I tried modifying his query so that it ran the SELECT statements from the DMV’s between the operations, but this didn’t change the results: start_time and last_request_start_time were the same. From a scientific standpoint, if no one can replicate your results, the experiment failed.

So I went back and tried again. To be sure I got things right and wasn’t, somehow, querying the wrong server or looking at the wrong spid, I added a couple of checks. Specifically, I retrieved the spid I was working with from my first connection with SELECT @@SPID.

Unlike Adam’s tests, I continued running mine with two connections. I wanted to see multiple entries while the test was running. The results from sys.dm_exec_requests showed what I had seen before. There were 25 rows, all of them showing the exact same start_time. The command, sql_handle, wait_time, cpu_time and all the other values that you would have expected to change, did. Elapsed time and logical_reads, for example, showed steadily increasing values, as you would expect.
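For reference, here’s a minimal sketch of the kind of monitoring query I was running from the second connection; the session_id of 53 is just a placeholder for whatever SELECT @@SPID returned on the first connection:

SELECT der.start_time
,der.command
,der.cpu_time
,der.total_elapsed_time
,der.logical_reads
FROM sys.dm_exec_requests AS der
WHERE der.session_id = 53; --the value returned by SELECT @@SPID on the first connection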

Turning to sys.dm_exec_sessions, it showed more data, 40 rows instead of 25, simply because it continued to record the session for the whole time I was querying the view, since the spid hadn’t been dropped. Then I finally saw the problem: it was the way I tested. I started the query against the DMV’s before I started the query that was generating the load. Because of this, the initial queries against sys.dm_exec_sessions showed a different start and end time, unrelated to the test. Then the test ran, and a second set of values was generated when it completed. What I had attributed to individual statements completing and being registered was nothing of the kind. It was just the different batches finishing, not the statements.

Nuts. 100% wrong. That’ll make Tim Ford happy though.

Unfortunately I’m back to where I was. Is there a way to tell when a statement started? I don’t know of a dynamic method for doing so. Obviously you could run a trace to get the statement start event, but you’ll be collecting a lot of data depending on how you set up that trace, so it wouldn’t be a good approach for everyday troubleshooting. I’ll keep looking.
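There is one partial workaround worth noting, though it’s a sketch, not a solution: sys.dm_exec_requests exposes statement_start_offset and statement_end_offset, so you can at least see which statement within the batch is running right now, even though you still can’t see when it started:

SELECT der.session_id
,SUBSTRING(dest.text, (der.statement_start_offset / 2) + 1,
((CASE der.statement_end_offset
WHEN -1 THEN DATALENGTH(dest.text)
ELSE der.statement_end_offset
END - der.statement_start_offset) / 2) + 1) AS current_statement
FROM sys.dm_exec_requests AS der
CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest;

That tells you what is executing, just not when it began.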

Permalink 13 Comments

MS Field Engineers on Performance Troubleshooting

July 20, 2009 at 7:01 am (SQL Server 2005, SQL Server 2008, TSQL)

Do you want to get a glimpse into how the Microsoft Field Engineers would go about troubleshooting performance issues on your server? Then go and read this blog entry by Denzil Ribeiro. Not only is this an excellent how-to on troubleshooting performance problems, but Mr. Ribeiro provides multiple links that describe the concepts he’s dealing with further, making it a great piece of documentation.

The MS Field Engineer blog is not terribly active, but what gets posted there is worth reading. If you don’t have it on your feed list, you should.

Permalink Leave a Comment

Execution Plan Compile Termination

July 17, 2009 at 7:22 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Recently I’ve been seeing a lot of people with bad execution plans, desperately trying to tune them, but unable to explain why they had such bad plans. More often than not, these were larger queries, with a number of derived tables, CTE’s, CROSS APPLY, etc. In most cases the statistics appeared to be fine (this is usually checked by comparing estimated and actual rows within the operations in the execution plan) and the plans themselves didn’t look crazy, but the execution plans were, flat out, not good.

If you’re looking at a plan and it doesn’t make much sense, there’s one option that most people don’t check… SQL Server didn’t have enough time to complete optimization. The optimizer is a pretty amazing bit of code. The scary volume of work it does in frighteningly short periods of time is quite awesome. However, it doesn’t always get done. To quickly check for this situation you need to determine the “Reason For Early Termination Of Statement Optimization.” In the graphical execution plans this is found by going to the final operator in the plan, usually an INSERT, UPDATE, DELETE, or SELECT operator. Most people know to hover the mouse over the operator to get interesting little facts, but most people forget about checking the properties. Right-click on this, or any other operator, and select Properties from the context menu. This will open the Properties window, which looks something like this:


Right near the bottom you can see that this execution plan is showing an early termination reason of “Time Out.” That means that the optimizer didn’t finish optimizing. When the optimizer runs, it tries a whole series of different joins, join orders, filters, what have you, all through some mathematical process that I’m not even remotely qualified to describe. But if it runs out of time, it goes with the last plan it had. The last plan it had might be the worst possible plan for this query, but because it hit a time out…

You can also find this right in the XML for the execution plan. In the Batch/Statements/StmtSimple element, find the StatementOptmEarlyAbortReason attribute. It will show the same thing, “TimeOut,” when the optimizer has run out of time.
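If you want to hunt for these across the plan cache rather than one plan at a time, something like this sketch works; the attribute name comes from the showplan schema, but be aware that interrogating the whole cache with XQuery is not free, so test it before pointing it at a busy server:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT dest.text
,deqp.query_plan
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_query_plan(decp.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE deqp.query_plan.exist('//StmtSimple[@StatementOptmEarlyAbortReason="TimeOut"]') = 1;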

You can get other early abort reasons, but the best one is “GoodEnoughPlanFound.” That means that the optimizer was able to complete its job. That doesn’t mean you have a good plan, but it means that you don’t have some interim pile of junk that is doing you more harm than good.

What can you do to fix this? Write simpler queries.

OK. Maybe that’s not helpful. Try to simplify the query. You may not be able to. If not, you might try rearranging the query, changing the join order. Yes, as everyone says, join order doesn’t matter, but the full statement is “Join order doesn’t normally matter.” Sometimes it can make a real difference. You may also have to supply some query hints, join hints, etc., to try to nudge the optimizer along.
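As an illustration only, with hypothetical tables, this is the kind of nudge I mean; OPTION (FORCE ORDER) tells the optimizer to keep the joins in the order you wrote them:

SELECT bt.Col1
,st.Col2
FROM dbo.BigTable AS bt
JOIN dbo.SmallTable AS st
ON bt.ID = st.BigTableID
OPTION (FORCE ORDER);

Treat this sort of thing as a last resort; you’re taking choices away from the optimizer.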

Permalink 14 Comments

Re-evaluating Execution Plans (again)

June 26, 2009 at 7:23 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I keep thinking I’ve got a handle on the way execution plans are dealt with in SQL Server. I had a pretty simplistic view of things: there’s the estimated plan, which comes out of the optimizer, and there’s the actual plan, which comes out of the data engine. The one shows the best guess for what will happen based on the query and the underlying statistics. The other shows what actually happened based on the query and the real data, load on the server, available memory, you get the idea. This model is easy to explain and understand. Too bad it’s not quite the way things work.

If you query the dynamic management function sys.dm_exec_query_plan, you can see a query plan. Once you drill down on the XML, or browse through the GUI, you’ll see that this is an estimated plan, with no execution statistics at all. The safe assumption here is that SQL Server takes the execution plan created by the optimizer and stores it. Ah, but is that exactly how it works? Can you generate an estimated execution plan for this batch statement:

CREATE TABLE dbo.TempTable (id INT IDENTITY(1, 1)
,val VARCHAR(50) ) ;
INSERT INTO dbo.TempTable
(val)
VALUES ('dude') ;
SELECT tt.id
,tt.val
FROM dbo.TempTable AS tt ;
DROP TABLE dbo.TempTable ;

No. You can’t. If you try to generate an estimated plan you’ll get the error “Invalid object name ‘dbo.TempTable’.” This is because the algebrizer, the part of the process that generates the estimated plan, can’t resolve the table dbo.TempTable, because it doesn’t exist yet. However, if you execute the batch above and then run this query:

SELECT deqp.query_plan
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_query_plan(decp.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE dest.text LIKE 'create table%' ;

You can see the estimated plan for executing this batch. So… if the optimizer creates estimated plans, but it can’t create an estimated plan for this query since the algebrizer can’t resolve the object name… where did this estimated plan come from? Good question.

Here’s some more interesting info. You know that you can capture both estimated & actual execution plans through trace events, right? Watch what happens if you run the batch statement above and capture both the “Showplan XML” and “Showplan XML Statistics Profile” events, estimated and actual, respectively. You’ll get the estimated plan that is being stored in sys.dm_exec_query_plan and you’ll get an actual execution plan including actual rows vs. estimated, etc. Where did that execution plan come from? Another good question.

A better man than I, Tibor Karaszi, first noticed that there was, other than the execution statistics, no difference between “Showplan XML” and “Showplan XML Statistics Profile.” In a forum post, he asked whether we should ever expect to see a difference. I jumped in with both feet: of course you’ll see a difference. After all, one is the estimated plan and one is the actual plan. Mr. Karaszi corrected a couple of my assumptions and suddenly, there I was, not understanding execution plans, again. I didn’t know where those estimated plans were coming from, not for sure. I assumed the optimizer, but then how were they getting past the algebrizer?

I had a theory and a mechanism for testing it. If I set up a trace, capture the recompile events using SQL:StmtRecompile, and return the EventSubClass column, I can see what caused a recompile. When I run this query and check the trace information, I find that the EventSubClass resolves to 3 - Deferred Compile. I’m not 100% certain that I’m correct here, but the following is what I think occurs.
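As a side note, you don’t have to memorize the EventSubClass values; they’re available from the trace catalog views. A minimal sketch:

SELECT tsv.subclass_value
,tsv.subclass_name
FROM sys.trace_events AS te
JOIN sys.trace_subclass_values AS tsv
ON te.trace_event_id = tsv.trace_event_id
JOIN sys.trace_columns AS tc
ON tsv.trace_column_id = tc.trace_column_id
WHERE te.name = 'SQL:StmtRecompile'
AND tc.name = 'EventSubClass';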

Because of the CREATE statement, the batch is going to fail in the optimizer. But, because it’s a DDL statement, it goes on to execute anyway. Then, a recompile puts the batch back through the optimizer. There, a table is now available where none was previously. An estimated execution plan is created. This plan, previously non-existent, gets stored and can be accessed by us.

Given the facts available, this seems to explain the behavior. In fact, it’s basically supported by Conor Cunningham, one of the MS SQL Server developers, in some blog posts over at Conor vs. SQL.

Permalink Leave a Comment

More Spatial Headaches

June 11, 2009 at 2:38 pm (spatial data, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I keep thinking I’ve got a handle on spatial data and I keep finding out I’m wrong. I rewrote the cursor that my co-worker used into a query that joined the list of test criteria onto our list of locations. I used an OUTER APPLY to more or less run a check for each of the test criteria since, except for the proximity to the locations, there’s no actual relationship between the test criteria and the location data for me to join on.

The query looked something like this:

--reconstructed; the opening SELECT list and the OUTER APPLY line were lost, so a.AlertId and the APPLY's column list are illustrative
SELECT a.AlertId
,l.[Location].Lat AS LocationLatitude
,l.[Location].Long AS LocationLongitude
FROM dbo.Alert AS a
OUTER APPLY (SELECT x.[Location]
		FROM dbo.MyLocs x
		WHERE x.OrgID = 42
		AND x.[Location].STDistance(a.AlertLocation) < 50000) AS l

The cursor was taking almost a full minute to return just under 400 rows worth of data. This query took about 25 seconds. SCORE!

Unfortunately when I looked at the execution plan, the optimizer was not able to recognize that my spatial index would increase the performance. So I modified the query to provide a hint:

FROM dbo.MyLocs x WITH (INDEX(xsHigh))

Now the query returned in 5 seconds. BIG SCORE! Then I remembered my co-worker and his low density index. So I quickly rebuilt it and changed the INDEX hint to use the low density index. It ran in just over 2 seconds. Yeah, it’s a score, but why? I thought the high density index was the way to go with point locations. I decided I just wanted to see the middle behavior, so I built a medium density index, expecting it to land at about 3.5 seconds, mid-way between the low and high density indexes… It ran in just over 1 second. WHAT THE F…

I then ran a whole series of experiments with my query. I kept changing the density and the cells per object value for the three indexes and recording times. In every test, the medium density index outperformed the low and high density indexes. It never outperformed the low density index by a whole lot, usually running ~50% faster as opposed to running 5 times faster than the high density index. But it really did run better.
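For reference, this is a hedged sketch of the kind of index I was rebuilding between tests; the table and index names are from my environment, and the GRIDS and CELLS_PER_OBJECT settings are the knobs I kept turning:

CREATE SPATIAL INDEX xsMedium ON dbo.MyLocs
([Location])
USING GEOGRAPHY_GRID
WITH (GRIDS = (LEVEL_1 = MEDIUM, LEVEL_2 = MEDIUM, LEVEL_3 = MEDIUM, LEVEL_4 = MEDIUM)
,CELLS_PER_OBJECT = 16);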

My current working theory, and I’m trying to figure out how I can validate this, is that the low and medium density indexes were more efficient at eliminating the negative results, the places where the criteria weren’t in proximity to my locations, and that the medium density index was then simply better at retrieving the data than the low density index. I base this on the fact that the primary filter efficiency on the medium density index is better than on the low (although not as good as on the high).

Permalink 2 Comments

Spatial Indexes and a Lack of Data

June 9, 2009 at 1:44 pm (spatial data, SQL Server 2008, SQLServerPedia Syndication, TSQL)

I was feeling quite confident about my new-found abilities with spatial indexes, so I did a presentation for my team to share what I had learned. I had also been sharing with one co-worker as I developed my knowledge of spatial indexes. While I was preparing my presentation, he was preparing his. I had focused on finding the set of data within a given proximity of a test location and then showing how the spatial index made retrieving that set faster. He took a different approach. He took the idea of saying: here’s a list of different test locations, let’s see which of our internal locations meet the proximity test. At the same time, he tried three different spatial indexes: one with high granularity, one with medium, and a final one with low granularity.

The day came for presentations. I showed how the spatial index improved performance and how you could read the output from sp_help_spatial_geography_index to see how efficient the index was for the sample data. It all went without a hitch. Then my co-worker showed his tests, in which the low density index outperformed the high and medium density indexes, despite having a primary_filter at about 5% efficiency. I was floored. I couldn’t explain it, and neither could my co-worker. But as far as he was concerned, this whole spatial index thing was for the birds.
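For anyone who hasn’t used it, this is roughly how that procedure gets called; a sketch, with the table, index, and sample point standing in for the real values:

DECLARE @sample GEOGRAPHY = GEOGRAPHY::STPointFromText('POINT(-70.5 43.6)', 4326);
EXEC sp_help_spatial_geography_index
@tabname = 'dbo.MyLocs'
,@indexname = 'xsHigh'
,@verboseoutput = 1
,@query_sample = @sample;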

I went away and I started testing. Finally, after feeling like I had a handle on things, I asked my co-worker for his scripts. He had a cursor (which he acknowledged was a lousy way to write the query) that walked through the test locations, one at a time, and ran my query against our data. Sure enough, when I carefully built out the indexes and ran the test, the low density indexes worked better than the high density indexes. I set up a trace event and captured all the statement completion events so I could compare the high & low density runs. That’s when I spotted the issue.

The majority of the test locations didn’t find any matches in our location data. When the query ran against the low density index, it would usually take a few hundred reads and a duration around 20ms to return no data. On the other hand, the same query, returning no data, on the high density index took about 5000 reads and about 34ms. With each one of these cumulative queries running faster against the low density index, it really appeared to be better. But when you got a query where data was returned, the low density index had more reads and a much longer duration, about 500ms, compared to a consistent 34ms for the high density index.

There was my explanation. The cursor masked the fact that over and over again, the query was running but no data was being returned. I’m in the process now of rewriting the query to use the same set of test locations, but JOIN it against our location data to see which of the indexes is faster.
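A sketch of the direction that rewrite is headed, with hypothetical table and column names; the point is to run the proximity test across the whole set at once instead of once per cursor loop:

SELECT t.TestLocationId
,l.LocationId
FROM dbo.TestLocations AS t
CROSS APPLY (SELECT ml.LocationId
FROM dbo.MyLocs AS ml
WHERE ml.[Location].STDistance(t.[Location]) < 50000) AS l;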

Permalink 3 Comments


June 3, 2009 at 10:26 am (SQL Server 2008, SQLServerPedia Syndication, TSQL)

I answered a question on SSC with the comment that, while an INT would perform better than a DATETIME in an ORDER BY query, assuming each has a viable index, the difference wouldn’t be all that terribly substantial. Then I realized, maybe that’s not true. So I ran up a quick test, just to see.

First I created a little test table with the right indexes and loaded it with data:


--reconstructed setup; the CREATE TABLE and SELECT lines were lost, so names and the row count are inferred from the output below
CREATE TABLE dbo.IntDate (IntCol INT, DateCol DATETIME);
CREATE INDEX ixIntCol
ON dbo.IntDate(IntCol);
CREATE INDEX ixDateCol
ON dbo.IntDate(DateCol);

SELECT TOP 10000 IDENTITY(INT, 1, 1) AS n
INTO #Tally
FROM Master.dbo.SysColumns sc1,
Master.dbo.SysColumns sc2;

INSERT INTO dbo.IntDate (IntCol, DateCol)
SELECT t.n,
DATEADD(dd, -t.n, GETDATE())
FROM #Tally AS t;


Then I ran these two queries. Each one correctly accessed the index, doing a scan, to retrieve the data:

SELECT id.DateCol
FROM dbo.IntDate AS id
ORDER BY id.DateCol;

SELECT id.IntCol
FROM dbo.IntDate AS id
ORDER BY id.IntCol;

Here are the execution times:

Table 'IntDate'. Scan count 1, logical reads 26, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 70 ms.
(10000 row(s) affected)
Table 'IntDate'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 59 ms.

But what happens if I reverse the order?

SELECT id.DateCol
FROM dbo.IntDate AS id
ORDER BY id.DateCol;

SELECT id.IntCol
FROM dbo.IntDate AS id
ORDER BY id.IntCol;

The results are:

Table 'IntDate'. Scan count 1, logical reads 26, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 52 ms.
(10000 row(s) affected)
Table 'IntDate'. Scan count 1, logical reads 21, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 5 ms.

So, there can be a somewhat significant difference, although at least part of that is explained by the differences in the number of reads. Not exactly earth-shattering, but good to know.

Permalink 2 Comments

Spatial Index & Performance & Brain Pain

May 28, 2009 at 10:36 am (spatial data, SQL Server 2008, TSQL)

In my previous post, Spatial Data Hurts My Brain, I showed how a query used the spatial index but hurt performance. Several people jumped in to try to help out. Thanks to Bob Beauchemin, Isaac Kunen, Gail Shaw, and Valerie Yakich (via Twitter).

I’ve definitely learned a bit more than I knew previously about spatial data and spatial indexes. I’ve posted about it before, but it bears repeating: if you’re just starting out, I can’t recommend Alistair Aitchison’s book, Beginning Spatial with SQL Server 2008, enough. Here’s where I’m at. I think I was confusing/confused/confounded/something about what to expect from a spatial index. I’m going to present two queries and two indexes and try to explain why each works well, or not, together, mainly as a means of cementing my own understanding, not so much teaching anyone else. The first query is simply a retrieval of the data from a database where the distance between a supplied point and the points stored in the geography column is less than a certain value:

--SELECT list lost in transcription; these columns are illustrative
SELECT ebe.[InterestId]
,ebe.[Location]
FROM dbo.ebe
WHERE ebe.[Location].STDistance(@location) < 15000

When this is run against the table with only a clustered index on the PK, this is the execution plan:


About what you would expect. The query returns almost 5000 rows of the 500000 in the table and runs for 14 seconds on my machine. Now I’m going to create this index and run the query again.

CREATE SPATIAL INDEX [ebe_High] ON [dbo].[ebe]
([Location])
USING GEOGRAPHY_GRID
WITH (GRIDS = (LEVEL_1 = HIGH, LEVEL_2 = HIGH, LEVEL_3 = HIGH, LEVEL_4 = HIGH)
,CELLS_PER_OBJECT = 16); --grid settings reconstructed from the index name; treat as illustrative

When the query is run a second time, here’s the execution plan.


This time the query ran in under a second to return the same rows. Clearly, spatial indexes do work to speed up execution. The execution plan is hard to see here, I realize. I can’t upload zipped files or .sqlplan files, so if you really want a copy of the plan, I’ve pasted both into a Word doc: spatial_plans

I’m not going to try to explain everything happening with the plan, because I’m not entirely sure what all of it is doing. I’m still figuring that out. Suffice it to say that the Table Valued Function and Constant Scan operators feed data into a seek against the clustered index (which is missing statistics for some reason), where it’s fed to a Compute Scalar and then finally loop joined to the clustered index in what I assume is the equivalent of a key lookup. Again, I’m still figuring this part out, and I’ve skipped some steps.

Now, I want to modify the query because the goal is to not simply get all locations near this point of interest, but the locations of a specified organization:

--again, the SELECT list is illustrative
SELECT ebe.[InterestId]
,ebe.[Location]
FROM dbo.ebe
WHERE ebe.[Location].STDistance(@location) < 15000
AND ebe.[OrgId] = 4464

When I run this query, I get the following execution plan:


The query ran in 96ms and returned 148 rows. It’s all you could ask for. Or is it? What if I add this index:

--reconstructed; the index name and key column were lost, but this is the shape the text implies
CREATE NONCLUSTERED INDEX ixOrgId ON dbo.ebe
(OrgId)
INCLUDE (InterestId,Location)

Now when I run the query, I get the following execution plan:


Much, much less complex and quite easy to explain. I have a covering index that does a simple seek to retrieve the initial set of data and then it’s filtered to remove the locations that are too far from the point of interest. But, here’s where it gets interesting. The execution time was 126ms. The spatial index shaved about 30ms, or about 25%, off the time of what would normally be the fastest access path, a covering non-clustered index. That is a substantial savings.

I’ve done a lot of testing over the last few days and I didn’t always see this kind of performance benefit. Several times I’ve seen worse performance. Also, I’m not sure why a brand new index has missing statistics, as seen in the execution plans where the spatial index was used. Further, the optimizer is choosing the nonclustered index over the spatial index, clearly because of the estimated costs, when the spatial index does, in fact, perform better in this instance. I’m not sure what to do about that, apart from dropping the nonclustered index. That’s OK. It just means more to learn. Starting with explaining exactly what’s happening in the spatial index execution plan. That may be the next post.

I forgot to mention: I ran the sp_help_spatial_geography_index procedure to see what information it could give, but when I supplied the parameter value listed above, I received a filter efficiency of zero. I’m clearly not understanding how to apply this procedure.

Permalink 3 Comments

Unpacking the View

April 24, 2009 at 9:16 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

A view is simply a query that behaves something like a table. Most people know this. Most people also know that a view is simply a mask on top of what might be a very complex query. It all seems really simple. You call the view inside a simple query, the view runs the underlying complex query. What fewer people know is that when a view is called, the optimizer unpacks it and binds the component parts of the query in order to create an execution plan that will return the data requested. What I didn’t know until recently was that the optimizer is VERY smart. Not only does it unpack the query of the view, but it will change the query that the view uses. Let’s take an example (using AdventureWorks2008):

CREATE VIEW dbo.vTest
AS
SELECT soh.AccountNumber
,(SELECT COUNT(pcc.BusinessEntityId)
FROM Sales.PersonCreditCard AS pcc
WHERE pcc.CreditCardID = soh.CreditCardID
) AS PersonCreditCard
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID ;

Whether or not this query is a good one (I’d say it isn’t), and whatever you might think about it, the view definition here is pretty straightforward. If you run a SELECT against this view and take a look at the execution plan, it looks like this:


As you can see, the behavior is completely as defined above. The view has been resolved out to its component parts in order to build the query and arrive at the data requested. What happens if, instead of selecting all the columns, only a couple are selected? Well, the query that defines the view is not changing, so the optimizer will arrive at the same execution plan as that shown in Figure 1, right? Run this query and get the execution plan:

SELECT vt.AccountNumber
FROM dbo.vTest AS vt;


What the heck just happened?

I told you, the optimizer is smart. Compare that execution plan with the one generated from this query:

 SELECT soh.AccountNumber
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID ;

You’ll find that they’re identical. As stated before, views are just a query stored on the side. The optimizer is smart enough to figure out that it only needs to generate a plan as if that simpler query were the one requested. But what happens when you start nesting views? Yes, yes, Yeah… I… Ok… But… YES! That’s not a good idea. So? When has that slowed people down for a New York second? I’ve seen views nested three and four layers deep. I’ve seen table valued user defined functions nested six and seven layers deep. The question is, how well does the optimizer handle something like this? This script sets up a new situation:

CREATE VIEW dbo.vSalesInfo
AS
SELECT soh.AccountNumber
,soh.CreditCardID --reconstructed: vTest3 below references vsi.CreditCardID, so the view must expose it
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID ;

CREATE VIEW dbo.vTest3
AS
SELECT vsi.AccountNumber
,(SELECT COUNT(pcc.BusinessEntityId)
FROM Sales.PersonCreditCard AS pcc
WHERE pcc.CreditCardID = vsi.CreditCardID) AS PersonCreditCard
,(SELECT COUNT(v.BusinessEntityId)
FROM Purchasing.Vendor v) AS VendorCount
,(SELECT COUNT(a.AddressId)
FROM Person.Address AS a) AS AddressCount
,(SELECT COUNT(be.BusinessEntityID)
FROM Person.BusinessEntity be) AS BusinessEntityCount
FROM dbo.vSalesInfo AS vsi

From the script you can see that I’ve nested one view inside another and complicated the outer view a bit. What happens when you run the query that would eliminate tables from the view this time?

 SELECT vt.AccountNumber
FROM dbo.vTest3 AS vt


See. The optimizer is very smart. Ah, but before you get all comfy and crack a bottle of some frothy beverage, we need to look at the compile times. Compare the STATISTICS TIME output from a query that pulls from the full view, the limited pull from the view and the pull from the tables:

--View 1
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 6 ms.

SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 6 ms.

--View 2
SQL Server parse and compile time:
CPU time = 8 ms, elapsed time = 8 ms.
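
For reference, this is roughly how those numbers were gathered; a sketch, since the exact harness isn’t shown above:

SET STATISTICS TIME ON;

--full pull from the simple view
SELECT vt.AccountNumber
,vt.PersonCreditCard
FROM dbo.vTest AS vt;

--limited pull through the nested views
SELECT vt.AccountNumber
FROM dbo.vTest3 AS vt;

--straight from the tables
SELECT soh.AccountNumber
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
ON soh.SalesOrderID = sod.SalesOrderID;

SET STATISTICS TIME OFF;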

These are small views and easy queries, so the compile times are not large. But with one consistently 30% larger than the others in elapsed time, and actually registering CPU time where the others don’t, you begin to see that, despite the extreme intelligence of the optimizer, there is a cost. TANSTAAFL still applies. Keep all this in mind if you think that nested views are a nifty way to write your code, or you’re trying to convince someone that nested views are not a nifty way to write their code.

Permalink 12 Comments

Execution Plan Estimated Operator Cost

March 19, 2009 at 1:09 pm (SQL Server 2008, SQLServerPedia Syndication)

I’ve said it over and over again: the costs on operators in execution plans, even in actual execution plans, are estimates. You need to understand that when looking at your execution plans. It’s vital, because you need to be able to distinguish between the truly costly parts of a plan and the less costly parts. Don’t believe me? Take a look at this picture and see if you can spot the discrepancy:


Spot it yet?

Add up the costs for the operators visible in this part of the plan…

Yep, 125%. And there’s more to the plan that I’m not showing. I think this one must total near 200%. The statistics are up to date and there are no consistency errors in the database. These estimates are just off sometimes.
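If you’d rather add the numbers up programmatically than hover over each operator, a sketch like this pulls the estimated cost attributes straight out of the cached plan XML; the attribute names come from the showplan schema:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT ops.operator.value('@PhysicalOp', 'VARCHAR(50)') AS PhysicalOp
,ops.operator.value('@EstimatedTotalSubtreeCost', 'FLOAT') AS EstimatedCost
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_query_plan(decp.plan_handle) AS deqp
CROSS APPLY deqp.query_plan.nodes('//RelOp') AS ops (operator);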

This is a rather costly query being run against Microsoft Dynamics CRM. There were two missing indexes identified by the optimizer and about four key lookup operations, which I fixed with two index changes. This jumped out and I wanted to share. The fact is, right now, the query is working well. That particular table gets one scan and a handful of reads, as seen in the STATISTICS IO output, and it’s not even close to 100% of the cost of the query, but I would sure be in a panic if I believed the estimated operator cost.

Permalink 14 Comments
