SQL Server Standard Update

September 30, 2009 at 1:00 pm (PASS, sql server standard, SQLServerPedia Syndication)

Just to keep people who might be interested up to date on what's happening with the SQL Server Standard relaunch: I've received some fantastic abstracts on a wide range of topics from a diverse group of SQL Server DBAs and developers. From those, we've got a lot of articles in the hopper going through technical edits and second drafts. Three articles have made it all the way to copy edit. They should be done soon. I'm positive you'll see at least one of them, maybe two, before the PASS Summit.

I need even more abstracts so I can publish even more articles. If you meet the qualifications to write for us, please submit an abstract to my PASS email address: grant.fritchey – at – sqlpass -dot- org (unobfuscate the obvious). Any topic that relates to SQL Server will do: Reporting Services, Integration Services, Analysis Services, ORM to SQL Server, PowerShell to SQL Server, monitoring, tuning, designing, coding… There's an audience out there that wants to hear from you.


Pre-Compiled Stored Procedures: Fact or Myth

September 30, 2009 at 9:01 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

There's a very common belief among DBAs: stored procedures are pre-compiled. I had someone tell me all about it just last week. There are lots of sources that say what I'm getting ready to say, so this really shouldn't come as news to anyone, but it will. No, they're not. Stored procedures are not pre-compiled.

When you create or update a stored procedure on the server, it gets a syntax check. But there is no compile process, let alone a pre-compile process. That stored procedure exists exactly as it is named, a procedure that is stored on the server. It’s just a collection of TSQL code, possibly with some parameters, waiting to be called. Until it’s called, it is not ready to be executed. Don’t believe me? I’m not a huge sports fan (except for MMA), but I’m willing to use a common sports expression. Let’s go to the slow motion instant replay, also known as trace events.

I'm going to run each of the following statements, one at a time, and capture a set of trace events, also listed below. First the code:

-- DDL: create the procedure. Nothing but a syntax check happens here.
CREATE PROCEDURE dbo.MyTest
AS
SELECT *
FROM sys.dm_exec_query_stats AS deqs ;
GO
-- First execution: this is where the plan is compiled and cached
EXEC dbo.MyTest ;
GO
-- Second execution: should reuse the cached plan
EXEC dbo.MyTest ;
GO
-- Clean up, which also removes the plan from cache
DROP PROCEDURE dbo.MyTest ;
GO

The events I'm capturing are:

  • SQL:BatchCompleted
  • SP:CacheMiss
  • SP:CacheHit
  • SP:CacheInsert
  • SP:CacheRemove

After I run the script, here is what is visible in the trace:

[Screenshot: trace output showing the cache events]

With the first statement, creating the procedure, you see that there was a CacheMiss event, since the DDL statement isn't cached. You'll get a miss event every time this runs. Next, the execute statement runs and you can see another CacheMiss event followed immediately by a CacheInsert event. This is the creation of the execution plan and its addition to the cache. The statement completes and the second execute statement fires. This gets a CacheHit event, since the last execute created a plan, and then it completes. The final statement is, again, DDL, so you see a CacheMiss event and then the CacheRemove event as the procedure is dropped.
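
Incidentally, if you don't have access to trace events, you can corroborate all of this by probing the plan cache directly between each step. Here's a minimal sketch; the NOT LIKE filter just keeps the probe query from matching itself:

-- Is there a plan for dbo.MyTest in cache? Run this between each step above.
-- After CREATE: no row. After the first EXEC: one row, usecounts = 1.
SELECT  decp.usecounts
        ,decp.cacheobjtype
        ,dest.text
FROM    sys.dm_exec_cached_plans AS decp
        CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE   dest.text LIKE '%MyTest%'
        -- keep this probe query from matching itself
        AND dest.text NOT LIKE '%dm_exec_cached_plans%' ;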

Nothing was added to the cache until the procedure was called. There was no pre-compile process. None. To quote a popular television show, myth busted.
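
One more piece of evidence, no trace required: deferred name resolution. The procedure and table names below are made up for illustration. If any real compile happened at CREATE time, this couldn't possibly succeed. Yet it does; the error only shows up when you execute it.

-- This procedure references a table that does not exist.
-- CREATE only performs a syntax check, so it succeeds anyway.
CREATE PROCEDURE dbo.NoCompileHere
AS
SELECT *
FROM dbo.TableThatDoesNotExist ;
GO
-- Only now, at execution time, do you get the "Invalid object name" error
EXEC dbo.NoCompileHere ;
GO
DROP PROCEDURE dbo.NoCompileHere ;
GO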


Minor Celebration

September 30, 2009 at 7:36 am (Misc)

A little cheap thrill for me. I passed 50,000 views on the blog yesterday. Thanks to everyone who has stopped by to read my fumbling attempts to understand SQL Server better. I hope some of the posts have proved useful to you. I learn a lot from putting them together.


Vote for Mr. LaRock

September 29, 2009 at 10:21 am (PASS)

It is rapidly coming up on election time at the Professional Association of SQL Server users (PASS). Once more, a person I respect and admire is running for office. Tom LaRock has been a volunteer with the organization longer than I have (I'm approaching four years), and for the entire time I've been involved he's been in the thick of things, working for the organization and working with the other volunteers. He does great work. He's presented at the Summit, three times I think, and he's presenting again this year. He's one of the people chiefly responsible for the new Virtual Chapters. He's been serving on the Board as an appointee. I think it's finally time to make it official and elect this great guy to the board. He'll do a good job, as he has done, and he'll be able to do more because, if he gets elected, rather than worrying about running for office again, he can strap in & work hard for two years.

Tom’s going to be hosting live chats where he’s going to answer your questions about his plans for his time on the board. If you have any doubts that he’s the right guy (and he is) or you don’t know Tom, get on over there and listen to what he has to say.

To see the other candidates (not a slouch in the bunch, but I’m backing Tom), go on over to the campaign space and see what they have to say for themselves.


Recap of Cape Cod .NET User’s Group

September 24, 2009 at 6:28 am (PASS)

I presented last night at the Cape Cod .NET User's Group. What a great bunch of people. About 20 people showed up at the nice facility they're using, Venture Think Together. A little pizza and a little chat and the meeting got started. Marcia McLean, their president, gave me a nice little introduction and I went through my slides. As I did, I kept thinking about some of the advice Brent Ozar offered up yesterday. Suddenly my slides seemed so wordy. It started to throw me off, but I just ignored it & plowed on. Overall, I think the presentation went well. I hit a couple of snags on some of the examples that I'm going to tweak before I present this at the PASS Summit in just a few weeks.

Thanks for having me, guys. You're invited to come see us at the Southern New England SQL Server User's Group any time. I'll post the slides and sample code over here after I present at PASS.


PASS Virtual Chapters

September 23, 2009 at 9:00 am (PASS, SQLServerPedia Syndication)

If you're not a member of the Professional Association of SQL Server users, PASS, why not? No, I'm sorry. I meant to say, if you're not a member of PASS, you may not have heard about the change from special interest groups to the new Virtual Chapters. What used to be the SIGs has been repurposed, rebuilt and, let's say it, reborn as Virtual Chapters. There's a whole new energy and all new web sites with content and presentations and more free training than you can shake a stick at. There are great opportunities to begin networking, and we all know how important that is as a means of career building. It's like having a really vibrant, active, useful user's group available 24/7. Get on over and check out the offerings and get involved. You learn more and grow more by actively participating, and the PASS Virtual Chapters are a great place to do it.


Presentation Skills from Brent Ozar

September 23, 2009 at 8:08 am (Misc)

If you present at any level, read this. If you’re thinking about presenting, read this. If you’d like to start thinking about presenting, read this. That is all. Carry on.


Cape Cod .NET Users Group

September 22, 2009 at 10:26 am (PASS)

Tomorrow night, Wednesday, September 23, I’m going to present on “Best Practices for Working with SQL Server Execution Plans” for the Cape Cod .NET User’s Group.  It’s basically a rehearsal for the presentation that I’ll be doing at the 2009 PASS Summit in just 5 weeks. Can you believe it’s already almost here? I’m so jazzed. This year is going to be a great summit. I can’t wait to see everyone again… I… Where was I, oh yeah, come down to Hyannis if you can make it. It should be a fun evening. Not quite as good as the Summit, but a good time.


Gila Monster on Estimated vs. Actual Row Counts

September 22, 2009 at 6:47 am (SQL Server 2005, SQL Server 2008, TSQL)

I don't generally do lots of blog aggregation and cross-post linking & stuff. It's just not something I'm that into. However, this time is an exception. Gail Shaw, Gila Monster to those who hang out on SQL Server Central, has posted an excellent explanation of times when Estimated and Actual row counts vary for a reason. I'm one of those who emphasizes that a difference between estimated & actual is an indication of… something. It could be out-of-date or missing statistics, or it could be caused by query constructs like multi-statement table valued functions, but it's usually an indication of a problem. Except when it's not. Read Gail's explanation for more.
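
As a quick, made-up illustration of the multi-statement table valued function case Gail covers: the optimizer has no statistics for the table variable such a function returns, so in SQL Server 2005 and 2008 it uses a fixed estimate of 1 row, regardless of how many rows actually come back. The function below (against AdventureWorks) is just a sketch to demonstrate it:

-- A multi-statement table valued function: the optimizer can't see inside it
CREATE FUNCTION dbo.AllProducts ()
RETURNS @Products TABLE (ProductID INT)
AS
BEGIN
    INSERT  INTO @Products (ProductID)
    SELECT  ProductID
    FROM    Production.Product ;
    RETURN ;
END
GO
-- Estimated rows: 1. Actual rows: everything in Production.Product.
SELECT  p.ProductID
FROM    dbo.AllProducts() AS p ;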


Identifying Frequently Running Queries

September 21, 2009 at 9:13 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

It's not enough to look at the longest running query. What if you have two queries: one runs for 5 minutes and the other runs for 5 seconds. Which do you tune? The first one, of course. But let's add a little information to our thought experiment. The one that runs for 5 minutes is called at 2AM, your local time, when there are almost no users on the system, and the one that runs for 5 seconds is called 40 times a minute, all day long. Now which one do you tune? That's right, suddenly that 5 second query takes on a HUGE significance based on how often it's called.

But how do you know how often a query is called? The easiest way to gather that information, though not necessarily the best as a long-term methodology, is a server-side trace that captures RPC and SQL batch completion events. The beauty of the server-side trace is how easy it is to set up and how much information you can gather, so quickly. You immediately start getting every query run on the server along with how long each took to run, how much CPU it used and various other metrics. You can run this for short periods or long, then gather the data together into a table in a database and run aggregation queries against it to generate reports showing the most frequently called queries. Simple, quick, easy. But there are a couple of problems with it. First, while you can run it 24×7, you're going to see quite a bit of data generated from the events, depending on the load on your system, of course. Second, the queries that come through will likely have all kinds of stuff associated with them, various parameter values and what not, that will make aggregation hard. You'll have to put together a means of cleaning the data and maintaining it, or purchase one of the many third party tools for doing this. You'll have to store and manage large amounts of data as it comes into the system. It's going to be a lot of work.
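
For example, once you've got the trace into a file, a rough aggregation might look something like this. It's only a sketch; the file path is a placeholder, and cleaning out those parameter values before grouping is the hard part I just warned you about:

-- Load a server-side trace file and do a naive aggregation by query text
SELECT  CONVERT(NVARCHAR(4000), t.TextData) AS QueryText
        ,COUNT(*) AS TimesCalled
        ,SUM(t.Duration) AS TotalDuration   -- microseconds in SQL Server 2005 and up
        ,SUM(t.CPU) AS TotalCpu             -- milliseconds
FROM    sys.fn_trace_gettable('C:\Traces\MyTrace.trc', DEFAULT) AS t
WHERE   t.EventClass IN (10, 12)            -- RPC:Completed, SQL:BatchCompleted
GROUP BY CONVERT(NVARCHAR(4000), t.TextData)
ORDER BY TimesCalled DESC ;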

Another option, one that requires a degree of setup, is sys.dm_exec_query_stats. It's a little less accurate, unlikely to capture every single execution of every single query, but it will get the majority. This handy little dynamic management view carries aggregate data about all the queries currently in the cache. It will show how much accumulated processing time a query has taken, the last time, the longest time, the shortest time and lots of other information, but the value we're most interested in here is the execution count. If you just run a SELECT statement against sys.dm_exec_query_stats, you'll get interesting data back, but you won't see your query text or your execution plan. To get at those, you'll need to combine it with other DMVs. Here's a sample query that pulls back information that could be useful for identifying badly performing queries on your system, including how often they're called:

-- Pull the text, plan and aggregate stats for everything in the plan cache
SELECT  dest.text               -- the query text
        ,deqp.query_plan        -- the execution plan, as XML
        ,deqs.execution_count   -- how many times this plan has been used
        ,deqs.creation_time     -- when the plan entered the cache
        ,deqs.query_hash
        ,deqs.query_plan_hash
        ,deqs.total_elapsed_time
        ,deqs.total_clr_time
        ,deqs.total_physical_reads
        ,deqs.total_worker_time
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
        CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp ;

There’s a lot you could do with this query, but the real power is going to come from gathering this data on a regular basis. It’s going to depend on how volatile the cache is on your system. If most of your queries sit in the cache for weeks (and that’s possible), then you could run a query once a week to gather the information. If your cache is getting flushed clean once an hour, you’d need to run this query more often. Regardless of how often you run the query, understand that, because you’re dependent on the cache, it is possible to miss data. Once you get the query running on your system, you can use the information to determine which of the queries needs to be tuned, not just on how long it runs, but also how often it runs, or even using other combinations of the information returned.
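
What might that regular collection look like? Here's a rough sketch; the table name and schema are made-up placeholders, not a prescribed design:

-- One-time setup: a history table for periodic snapshots (hypothetical schema)
CREATE TABLE dbo.QueryStatsHistory
    (
     CaptureTime DATETIME NOT NULL
     ,QueryHash BINARY(8)
     ,QueryText NVARCHAR(MAX)
     ,ExecutionCount BIGINT
     ,TotalWorkerTime BIGINT
     ,TotalElapsedTime BIGINT
    ) ;
GO
-- Run on a schedule (a SQL Agent job, for example) to snapshot the cache
INSERT  INTO dbo.QueryStatsHistory
        (CaptureTime
         ,QueryHash
         ,QueryText
         ,ExecutionCount
         ,TotalWorkerTime
         ,TotalElapsedTime)
SELECT  GETDATE()
        ,deqs.query_hash
        ,dest.text
        ,deqs.execution_count
        ,deqs.total_worker_time
        ,deqs.total_elapsed_time
FROM    sys.dm_exec_query_stats AS deqs
        CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest ;

Keep in mind that the counts in the DMV are cumulative from the moment the plan entered the cache, so the useful numbers come from comparing deltas between captures, not from the raw values.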

One other point, you may see ad hoc queries on your system. Here’s a simple example:

SELECT TOP (5) sod.*
FROM Sales.SalesOrderDetail sod
WHERE sod.ProductID = 772
ORDER BY sod.SalesOrderID ASC ;

If the value compared to sod.ProductID is changed to 773, or any other value, you will get a different entry in cache for this query. That's where you can put the query_hash or query_plan_hash values to work. These are hashes that SQL Server creates for each query it stores in cache. You can group by these values to aggregate ad hoc queries that would, if they were in stored procedures, be identical executions, and arrive at an aggregated count.
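
A sketch of what that grouping might look like (remember, query_hash and query_plan_hash are only available starting in SQL Server 2008):

-- Aggregate ad hoc variations of the "same" query by its hash
SELECT  deqs.query_hash
        ,SUM(deqs.execution_count) AS total_executions
        ,SUM(deqs.total_worker_time) AS total_worker_time
FROM    sys.dm_exec_query_stats AS deqs
GROUP BY deqs.query_hash
ORDER BY total_executions DESC ;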

So, if you need precise information, you can use trace events, but you'll pay for that with a lot of data that you have to manage.

EDIT: Changed deqs.total_elapsed_date to deqs.total_elapsed_time as per comments below.

