SQL Rally: Performance Tuning Abstract

October 28, 2010 at 8:00 am (PASS)

I get the call, you get the call, everyone gets the call. “Hey, my app/procedure/query/report is running slow.” Now what do you do? You go to my full day session at SQL Rally, that’s what. Assuming you vote for it.

I didn’t post the abstract I submitted for SQL Rally before because I thought it would be redundant. However, since it’s not visible right from the voting page (unless they’ve updated it since I voted), here’s what I thought I would do for a day, if you’re interested. If it sounds good to you, please go here and vote for it.

One of the most common problems encountered in SQL Server is the slow running query. Once a query is identified as performing poorly, DBAs and developers frequently don’t understand how to diagnose it and often struggle to fix it. This one-day seminar focuses exclusively on these two topics. Attendees will learn how to identify the queries that are performing badly and how to fix them. We will start by learning how to gather performance metrics, both server and query metrics, using tools available directly from Microsoft such as Performance Monitor, DMVs and Profiler. From there we’ll move into learning how the optimizer works and how it uses statistics to determine which indexes and other database objects can assist the performance of a query. The session takes considerable time to show exactly how to generate and read execution plans, the single best mechanism for observing how the optimizer works. We’ll then look at other DMVs that can assist you when performance tuning queries. With all this knowledge gathered, we’ll move into looking at common performance problems, how they evidence themselves in the metrics and execution plans, and how to address them. Finally, we’ll explore advanced methods for solving some of the more difficult query performance problems, introducing such concepts as query hints, plan guides and plan forcing. Through all of this, best practices and common techniques will be reviewed. Attendees will go home with a working knowledge of query performance tuning, a set of methods for identifying poorly performing queries, scripts to assist in these processes and the knowledge of how to fix performance problems in their own systems.

To see the other sessions go here:
BI
DBA
Developer
Misc

Although I would prefer that you voted for me, it’s more important that you vote at all (same thing as in real life). Please go here and select the sessions that you want to see.



nHibernate, First Look at TSQL

April 5, 2010 at 10:19 am (nHibernate)

I’ve blogged in the past about the nHibernate project that has been going on at my company for the last eighteen months. Prior to today, I have only seen the database generated by nHibernate. But today I finally started seeing some TSQL code. My first impressions… oy!

There are two levels of concern coming out of the gate. First, it appears that some of the programming decisions, completely independent of nHibernate, are going to cause problems. Second, it appears we’re going to be hitting issues with nHibernate.

First, the programming decision. I’ve been using Confio to monitor this server for a few days now (look for upcoming posts on my eval of Confio). Only one day has captured any real activity from the nHibernate team (and yes, I’m basically spying on these guys because they are absolutely insistent that my team stay out of their way). The longest running query was two calls to this (names have been changed to protect my job, there are no innocents):

SELECT BookID
FROM Books WITH (updlock,rowlock)
WHERE BookID = 42

What they’re doing is locking the row so that no other user can get at it while it’s being edited. Total lock time for the first day was 38 seconds for two calls. I’ve posted about this previously, so I won’t go into it again, but this is potentially a disaster.
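For contrast, the usual alternative to holding UPDLOCK across a user’s entire edit session is optimistic concurrency. This is just a sketch, not what this team is doing; the RowVer and Title columns and the @-parameters are invented for illustration:

```sql
-- Hypothetical sketch: optimistic concurrency instead of holding locks
-- while a user edits. Assumes a rowversion column has been added:
--   ALTER TABLE Books ADD RowVer rowversion;

-- 1. Read the row and its version; no locks are held after the read.
SELECT BookID, RowVer
FROM Books
WHERE BookID = 42;

-- 2. On save, update only if the row hasn't changed since it was read.
UPDATE Books
SET Title = @NewTitle
WHERE BookID = 42
  AND RowVer = @OriginalRowVer;

IF @@ROWCOUNT = 0
    RAISERROR('The row was modified by another user.', 16, 1);
```

No lock is held while the user stares at the edit screen; the worst case is an occasional “somebody else changed this” message instead of 38 seconds of blocking.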

On to nHibernate. The next query was pretty benign:

SELECT OrgID
,OfficeID
,StartDate
,EndDate
,OtherID
FROM SpecialList
WHERE OfficeID = 42

Unfortunately this table, as created by nHibernate, has no primary key, no clustered index, nor any other type of index, so this is just a table scan. But it’s only 109 rows… in dev, not production, and most of the 109 rows have a null value for OfficeID, but it’s all good I suppose… until I notice that this query also had 38 seconds of wait time, but it was called 394,652 times… oy (and no, I’m not even one of the chosen, but that’s a wonderful expression for this situation). Looking at the data in cache, this query has been called, since the plan was created in cache on the 2nd, 598,351 times, with a total elapsed time on the server of 5,429,689ms. The average, then, is 9ms, but the max was 29,296ms, or 29 seconds.
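Those cache numbers come straight out of the plan cache DMVs. Something along these lines will surface them; this is a sketch using sys.dm_exec_query_stats, whose elapsed-time columns are in microseconds, hence the division:

```sql
-- Pull call counts and elapsed times for cached query plans.
-- sys.dm_exec_query_stats reports times in microseconds.
SELECT TOP 10
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text,
    qs.execution_count,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
    qs.max_elapsed_time / 1000 AS max_elapsed_ms,
    qs.creation_time
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;
```

Sorting by execution_count is what floated this little 109-row scan to the top; sort by total_elapsed_time instead to find the individually slow queries.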

The next query up looked like this:

SELECT col1
,col2
,col3
FROM Table1
INNER JOIN Table2
ON Table1.ChildID = Table2.OtherID
WHERE Table2.ParentID IN (@p1, @p2, @p3 … @p99)

Yep. 99 parameters passed and used against an IN statement. I can’t post the execution plans on this stuff without talking it over with the boss, but suffice to say, it’s two table scans and a merge join to get the data back. Oh, and this was called 1,138 times with a minimum elapsed time of 976ms. Yeah, minimum time to call this proc is 1 second and it’s been called over a thousand times.
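For comparison, the usual way around a giant IN list on SQL Server 2008 is a table-valued parameter joined like a table. This is a hypothetical sketch; the type and procedure names are invented:

```sql
-- Hypothetical alternative to a 99-parameter IN list: pass the IDs as a
-- table-valued parameter (SQL Server 2008+) and join against it.
CREATE TYPE dbo.IdList AS TABLE (Id INT PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetByParents
    @ParentIds dbo.IdList READONLY
AS
SELECT col1, col2, col3
FROM Table1
INNER JOIN Table2
    ON Table1.ChildID = Table2.OtherID
INNER JOIN @ParentIds AS p
    ON Table2.ParentID = p.Id;
GO
```

One plan, one parameter, and the optimizer gets a join against a keyed table instead of 99 OR conditions.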

It went downhill from there. I’m looking at other queries, one that hits seven tables, but instead of using JOINs uses a bunch of sub-selects in the WHERE clause. It has a minimum run time of 9 seconds. Luckily it’s only been called 64 times.
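To illustrate the pattern (with invented tables, not the actual seven-table query), here’s the sub-select style next to the equivalent join form, which generally gives the optimizer far more options:

```sql
-- Hypothetical illustration: correlated sub-selects in the WHERE clause,
-- each one a separate probe into another table...
SELECT o.OrderID
FROM Orders AS o
WHERE o.CustomerID IN (SELECT c.CustomerID FROM Customers AS c
                       WHERE c.Region = 'East')
  AND o.ShipperID IN (SELECT s.ShipperID FROM Shippers AS s
                      WHERE s.Active = 1);

-- ...versus the same logic written as joins (equivalent here because the
-- subqueries return keys), which the optimizer can reorder freely:
SELECT o.OrderID
FROM Orders AS o
INNER JOIN Customers AS c ON c.CustomerID = o.CustomerID
INNER JOIN Shippers AS s ON s.ShipperID = o.ShipperID
WHERE c.Region = 'East'
  AND s.Active = 1;
```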

This is a horror show.

A lot of benefit can be reaped if we’re able to go in and create some indexes on these tables, but that’s going to be an issue anyway because I’m seeing lots of implicit data conversions on the parameters being passed in, as I noted a couple of years ago. At this point in time, I would say, at best, I’m very underwhelmed by what nHibernate is producing. To be perfectly honest, I’m concerned for the health of the project.
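For what it’s worth, the sort of fixes I have in mind look like this. It’s a sketch only; I’m assuming OrgID uniquely identifies a row in SpecialList, which I’d have to verify first:

```sql
-- Hypothetical fixes. First, give SpecialList a key and an index to
-- support the OfficeID lookup (assumes OrgID is unique):
ALTER TABLE SpecialList
    ADD CONSTRAINT PK_SpecialList PRIMARY KEY CLUSTERED (OrgID);
CREATE NONCLUSTERED INDEX IX_SpecialList_OfficeID
    ON SpecialList (OfficeID);

-- Second, the implicit conversion problem: if OfficeID is INT but the
-- app sends NVARCHAR parameters, the predicate becomes non-sargable:
--   WHERE OfficeID = @p0  -- @p0 NVARCHAR: CONVERT_IMPLICIT on the column
-- The fix is on the app side: declare each parameter with the column's
-- actual data type so the index above can actually be used.
```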


Gila Monster on Estimated vs. Actual Row Counts

September 22, 2009 at 6:47 am (SQL Server 2005, SQL Server 2008, TSQL)

I don’t generally do lots of blog aggregation and cross-post linking & stuff. It’s just not something I’m that into. However, this time is an exception. Gail Shaw, Gila Monster to those who hang out on SQL Server Central, has posted an excellent explanation of times when estimated and actual row counts vary for a reason. I’m one of those who emphasizes that a difference between estimated & actual is an indication of… something. It could be out-of-date or missing statistics, or it could be caused by query constructs like multi-statement table valued functions, but it’s usually an indication of a problem. Except when it’s not. Read Gail’s explanation for more.


LaRock on Monitoring

September 11, 2009 at 7:34 am (SQL Server 2005, SQL Server 2008, TSQL)

Since I just spent a bit more than half of my 24 Hours of PASS presentation on tuning queries talking about monitoring performance, you could probably tell that I think that the two are inextricably linked. I’m not alone. Tom LaRock has put a post on why it’s important to understand the extent of your problem prior to attempting to fix it. It’s absolutely spot-on and a must read.

Remember, if someone says performance on a system you manage is slow, you have to ask, as compared to what? You need to understand what “normal” performance is on your system in order to arrive at the ability to identify a particular process as performing in an abnormal manner. That’s not to say that you couldn’t tune a query in complete isolation to a baseline of performance. Of course you could. The question that Tom is answering is, are you tuning the right query for the right reasons?

Oh, and for those that are interested, some of the presentations made during 24 Hours of PASS are now available for download. The rest of the sessions will be available after the Summit. If you haven’t made plans to go, I’d get on over there now and register.


Execution Plan Compile Termination

July 17, 2009 at 7:22 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Recently I’ve been seeing a lot of people with bad execution plans, desperately trying to tune them, but unable to explain why they had such bad plans. More often than not these were larger queries, with a number of derived tables, CTEs, CROSS APPLY, etc. In most cases the statistics appeared to be fine (this is usually checked by comparing estimated & actual rows within the operations in the execution plan) and the plans themselves didn’t look crazy, but the execution plans were flat-out not good.

If you’re looking at a plan and it doesn’t make much sense, there’s one possibility most people don’t check: SQL Server didn’t have enough time to complete optimization. The optimizer is a pretty amazing bit of code. The scary volume of work it does in frighteningly short periods of time is quite awesome. However, it doesn’t always get done. To quickly check for this situation you need to find the “Reason For Early Termination Of Statement Optimization.” In the graphical execution plan, this is found on the final operator, usually an INSERT, UPDATE, DELETE, or SELECT operator. Most people know to hover the mouse over an operator to get interesting little facts, but most forget about checking the properties. Right click on this, or any other operator, and select Properties from the context menu. This opens the properties window, which looks something like this:

[Screenshot: the Properties window for the SELECT operator]

Right near the bottom you can see that this execution plan is showing an early termination reason of “Time Out.” That means that the optimizer didn’t finish optimizing. When the optimizer is running, it tries a whole series of different joins, join order, filters, what have you, all through some mathematic process that I’m not even remotely qualified to describe. But if it runs out of time, it goes with the last one it had. The last one it had might be the worst possible plan for this query, but because it hit a time out…

You can also find this right in the XML for the execution plan. In the Batch/Statements/StmtSimple element, find the StatementOptmEarlyAbortReason attribute. It will show the same thing, “Time Out,” when the optimizer has run out of time.
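In fact, since the reason is sitting right there in the plan XML, you can hunt the plan cache for timed-out plans. A sketch using the standard DMVs and the showplan XML namespace:

```sql
-- Find cached plans whose optimization hit the "TimeOut" early
-- termination reason. Can be expensive on a large plan cache.
WITH XMLNAMESPACES (DEFAULT
    'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text,
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qp.query_plan.exist(
    '//StmtSimple[@StatementOptmEarlyAbortReason="TimeOut"]') = 1;
```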

You can get other early abort reasons, but the best one is “GoodEnoughPlanFound.” That means the optimizer was able to complete its job. That doesn’t mean you have a good plan, but it means you don’t have some interim pile of junk that is doing you more harm than good.

What can you do to fix this? Write simpler queries.

OK. Maybe that’s not helpful. Try to simplify the query. You may not be able to. If not, you might try rearranging the query, changing the join order. Yes, as everyone says, join order doesn’t matter, but the full statement is “Join order doesn’t normally matter.” Sometimes it can make a real difference. You may also have to supply some query hints, join hints, etc., to try to nudge the optimizer along.
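For the record, the hints I’m talking about look like this. The tables are invented for illustration; treat these as a last resort, not a first step, because they tie the optimizer’s hands permanently:

```sql
-- Invented tables, for illustration only.
-- OPTION (FORCE ORDER) keeps the joins in the order they were written:
SELECT o.OrderID, c.Name
FROM Orders AS o
INNER JOIN Customers AS c
    ON c.CustomerID = o.CustomerID
OPTION (FORCE ORDER);

-- A join hint pins the physical join algorithm for a single join:
SELECT o.OrderID, c.Name
FROM Orders AS o
INNER LOOP JOIN Customers AS c
    ON c.CustomerID = o.CustomerID;
```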
