I saw an odd statement the other day, “The size of the name of the parameter does not affect performance.” My first thought was, “Well, duh!” But then, I had one of those, “Ah, but are you sure” thoughts. And you know what, I wasn’t sure.
If the size of the parameter name did affect performance, I figured, the one place that would surely be evident is in the size of the execution plan. Right? I mean, if there was an impact on memory, and hence on performance, that’s probably where you’d see evidence of it. I wrote two queries:
DECLARE @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance int
SET @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance = 572

SELECT soh.SalesOrderID
      ,sod.SalesOrderDetailID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.SalesOrderID = @ThisIsAVeryVeryLongParameterNameThatIsTrulyRidiculousButItIllustratesThePointThatParameterLengthDoesNotAffectPerformance

DECLARE @v int
SET @v = 572

SELECT soh.SalesOrderID
      ,sod.SalesOrderDetailID
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON soh.SalesOrderID = sod.SalesOrderID
WHERE soh.SalesOrderID = @v
If you run this against AdventureWorks2008R2 you’ll get two distinct, but identical, execution plans:
You can see that they look identical, but how do I know they’re distinct? If you run this query:
SELECT deqs.creation_time
      ,deqs.query_hash
      ,deqs.query_plan_hash
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.text LIKE '%SELECT soh.SalesOrderID%'
You’ll get this back as a result:
creation_time query_hash query_plan_hash
2010-09-23 18:18:09.347 0x8D0FB9D524B8DD4D 0x13707445560737BA
2010-09-23 18:18:16.223 0x8D0FB9D524B8DD4D 0x13707445560737BA
Two distinct plans, but with identical hash values. Because the query text differs, including the monster parameter name, each query compiled as its own plan, yet the plans they produced hash out the same. So, how to see if there is a difference in the plans that could affect performance? How about the execution plan properties? First, the property sheet for the SELECT operator for the query with the long parameter name:
Of particular note is the Cache Plan Size. Let’s compare it to the same property sheet for the small parameter name:
If you compare the two, you’ll see that they’re the same. In fact, if you look at almost all the values, you’ll see that the Compile CPU, Compile Memory, and Compile Time are all identical. Based on all this information, I have to conclude that no, the size of the name of the parameter doesn’t affect performance, positively or negatively. But why?
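If you don’t want to rely on the property sheet, you can also pull the cached plan sizes straight out of the plan cache. A quick sketch (this assumes both plans are still in cache on your server; the size_in_bytes values should come back identical for the two plans):

```sql
-- Compare the cached plan sizes for the two queries directly.
SELECT decp.size_in_bytes
      ,decp.usecounts
      ,dest.text
FROM sys.dm_exec_cached_plans AS decp
CROSS APPLY sys.dm_exec_sql_text(decp.plan_handle) AS dest
WHERE dest.text LIKE '%SELECT soh.SalesOrderID%';
```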
I’m actually not 100% sure, but based on some things I know, here’s what I think. The Algebrizer within the Query Optimizer breaks down all the objects referred to within a query. It assigns each of them an identifier for that plan, as part of gathering the information to feed into the mathematical part of the Optimizer. I’ll bet parameters simply get internal identifiers of the same type, and the same size, regardless of the name, from one execution plan to the next.
This means that you don’t save memory by declaring parameters @a, @b, @c when in fact you mean @ReferenceCount, @MaxRetries, @BeginDate. Do I think you should put in parameters of the silly length I used above? No, of course not, because it makes the TSQL code less clear. But so do equally silly, cryptically short parameter names.
Don’t make your TSQL code hard to read. It doesn’t help performance.
I was privileged to be able to attend and present at SQL Saturday 46 in Raleigh, NC, this last weekend. It was a great collection of people presenting some amazing stuff. I want to say, right off, I think this is the best SQL Saturday event I’ve been to. I say that despite the fact that I’ve helped put on a SQL Saturday. I also say that despite the fact that my sample size on SQL Saturdays is fairly low. I’ve only been to three (including the one I put on).
You have to understand, the people who put on #sqlsat46, the Triangle SQL Server Users Group, did an absolutely outstanding job. They had clearly done the early work of getting sponsorships and organizing. This weekend, all that early work was in evidence. They had speaker shirts AND they had volunteer shirts. You could always tell who to talk to when you had questions. There was excellent signage, including signs on every door for all the sessions that were taken down as sessions finished, so you could tell which sessions were coming up and didn’t have to try to figure out what time it was or anything. The speaker dinner was at an EXCELLENT restaurant called The Pit in downtown Raleigh. Sandra, the amazingly hard-working volunteer responsible for the speakers, did a simply wonderful job of making sure we had everything we needed to get our presentations off without a hitch. On top of that, she was really funny and fun to hang out with. There was a shuttle to get the speakers from our hotels to the speaker dinner & back. The food was excellent at breakfast & lunch and there was plenty of it. They even had an afternoon snack. They gave away a ton of excellent prizes. It was just a very well run event and a real pleasure to take part in it.
I can offer up only a couple of criticisms, and they’re pretty minor. The facilities were a little bit weak. First off, they were rather confusing to get around in, and at one point Tom LaRock (blog|twitter) and I got locked into a hallway that we couldn’t get out of. A little pounding on a door got someone’s attention and we were rescued (before I had to kill & eat Tom). The rooms that most of the sessions were in had an orientation such that entering or leaving the room required you to walk right in front of the speaker, so it was hard to show up late or leave early without being very disruptive. But, that’s it. Other than those two minor weaknesses, the facilities were nice, clean, well appointed, comfortable… you get the point.
As to people… “Wow!” is the best thing I can say. I went to excellent sessions, one each by Andrew Kelly (blog|twitter) and Aaron Nelson (blog|twitter). I got to talk to and hang out with Andy Leonard (blog|twitter), Tom LaRock, Allen White (blog|twitter), Tim Chapman (blog|twitter), Kevin Boles (twitter), Geoff Hiten (blog|twitter), Jessica Moss (blog|twitter), Eric Humphrey (blog|twitter)… yeah, look at that list. I’m not dropping names, I’m just in awe of who I got to talk to, and I’m not listing everyone that was there. These guys at Triangle SQL pulled together an amazing group of people to present. The networking opportunities were just excellent. I got to meet a lot of new people too. Special shout out to Eli Weinstock-Herman (blog|twitter), who I met for the first time, ever, and had a great conversation with at the after party (along with Allen and a bunch of other guys).
I presented two things. The first was a session on Red Gate’s excellent new piece of software, SQL Source Control. The room was full, the people were engaged and I had a great time. I hope everyone enjoyed the presentation. I also presented a preview of one of my presentations for the 2010 PASS Summit. Unfortunately I had spent most of my rehearsal time getting ready for 24 Hours of PASS, so I didn’t rehearse adequately for this session. I just didn’t do as good a job as I’m capable of. I’ll work on it some more and get it polished up for the Summit. But it was well received, so hopefully people got some good from it. That’s sure the goal.
To sum up: great people, great place, great opportunity. Thank you very much to Jimmy, Brent, Sandra and all the rest of the magnificent people at Triangle SQL who put this show on. You guys should be damned proud of a job well done.
Normally, I try to stick to posting technical info or community stuff on the blog, but there were a couple of links from Twitter today that are too good not to share.
First, an interesting take from Tom LaRock on the issue around the lack of quality DBAs. He suggests that it’s actually a lack of quality managers. Go read it & comment there.
Second, this is Not Safe For Work (NSFW). Please, please, please understand that before you click on this link. It’s a hilarious discussion about NoSql. Put on headphones & give it a listen.
Back to your regularly scheduled blog posts…
I am not a Reporting Services guru, nor do I play one on TV. I am, however, forced to be all things Microsoft Data where I work. So I frequently find myself stretching way beyond my abilities. I just had to get a report running that feeds from a web service and has a recursive hierarchy with customized aggregation on multiple fields, with drill down to a different set of details. Yeah, through the internet I can see the SSRS monsters rolling their eyes at the ease of this task. But for us mere mortals it was work. Since I spent so much time learning how to do it, I thought I’d share.
XML as a Source
First, because we have a very paranoid (and appropriately so) group of PeopleSoft administrators, I couldn’t get direct access to the Oracle database. Instead, they provided me with a web service. Easy enough to consume, but it comes back as XML. Good news is Reporting Services can consume XML through a URL. Bad news is that it has a sort of proprietary XQuery language that is extremely obtuse (or I find it so, but then I’ve had trouble with SQL Server’s XQuery as well).
Setting up the Data Source is extremely simple. When you select XML from the Type dialogue, it’s going to ask you for a Connection String. Supply the URL. Done.
The work comes when you need to set up the DataSet. When you set the Data Source, the Query Type will change to Text. No options. And you’ll be looking at a big blank box of nothing. My initial XML data set was this stacked hierarchy that had nested departments, accurately portraying the structure of the data. To query this XML you can do one of two things: set up the XML path as described in this excellent Microsoft white paper, or allow SSRS to parse the XML for you. I tried working through the path, but I kept excluding parts of the structure. Basically I needed a method to recursively union the data within the XML and, frankly, that was too hard. So I tried the automatic route. What’s the query look like for the automatic route?
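Nothing at all: as far as I can tell, for the automatic route you simply leave the query text blank and SSRS walks the XML itself. For comparison, a hand-written query for this kind of data source uses the ElementPath syntax from that white paper; something like this sketch (the element names match my data, but treat the exact path and field list as an assumption on my part):

```xml
<Query>
  <ElementPath IgnoreNamespaces="true">
    ROOT_SEGMENT/SUMMARY/DETAIL{EMPLID,NAME,TERMINATED,RETIRED}
  </ElementPath>
</Query>
```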
That was tough. But, the same problem occurred. According to the white paper referenced above, letting SSRS figure out how to parse the XML means it will walk through and identify the first repeating group within the XML and that will be the structure it uses for the rest of the data. So, in my example, I have Departments and Personnel. The Personnel are inside the Department and Departments are inside Departments which have other Personnel… etc. It looks something like this:
<?xml version="1.0"?>
<ROOT_SEGMENT>
  <REPORT_TITLE>Monster Hunters Status</REPORT_TITLE>
  <SUMMARY>
    <DEPTID>997</DEPTID>
    <PARENT_DEPTID></PARENT_DEPTID>
    <DETAIL>
      <EMPLID>000001</EMPLID>
      <NAME>Shackleford, Julie</NAME>
      <TERMINATED>N</TERMINATED>
      <RETIRED>N</RETIRED>
    </DETAIL>
    <DETAIL>
      <EMPLID>000002</EMPLID>
      <NAME>Jones, Trip</NAME>
      <TERMINATED>Y</TERMINATED>
      <RETIRED>N</RETIRED>
    </DETAIL>
    <SUMMARY>
      <DEPTID>998</DEPTID>
      <PARENT_DEPTID>997</PARENT_DEPTID>
      <DETAIL>
        <EMPLID>000003</EMPLID>
        <NAME>Pitt, Owen</NAME>
        <TERMINATED>N</TERMINATED>
        <RETIRED>N</RETIRED>
      </DETAIL>
      <DETAIL>
        <EMPLID>000003</EMPLID>
        <NAME>Newcastle, Holly</NAME>
        <TERMINATED>N</TERMINATED>
        <RETIRED>N</RETIRED>
      </DETAIL>
      <SUMMARY>
        <DEPTID>342</DEPTID>
        <PARENT_DEPTID>998</PARENT_DEPTID>
        <DETAIL>
          <EMPLID>000022</EMPLID>
          <NAME>Harbinger, Earl</NAME>
          <TERMINATED>Y</TERMINATED>
          <RETIRED>Y</RETIRED>
        </DETAIL>
      </SUMMARY>
    </SUMMARY>
  </SUMMARY>
</ROOT_SEGMENT>
The problem is, the first repeating group didn’t include the nesting. The nested SUMMARY elements were a deviation from that structure, so they didn’t get read in the same way. What I had to do, in order to use the automated parsing, was flatten the structure, moving the SUMMARY areas outside of each other. With the new structure, the query returned all the data. Now the trick was to get the department hierarchy into the report.
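Flattened, the same data looks something like this (my reconstruction, trimmed down; the key point is that every SUMMARY becomes a sibling at the same level, each still carrying its PARENT_DEPTID):

```xml
<?xml version="1.0"?>
<ROOT_SEGMENT>
  <REPORT_TITLE>Monster Hunters Status</REPORT_TITLE>
  <SUMMARY>
    <DEPTID>997</DEPTID>
    <PARENT_DEPTID></PARENT_DEPTID>
    <DETAIL>
      <EMPLID>000001</EMPLID>
      <NAME>Shackleford, Julie</NAME>
      <TERMINATED>N</TERMINATED>
      <RETIRED>N</RETIRED>
    </DETAIL>
  </SUMMARY>
  <SUMMARY>
    <DEPTID>998</DEPTID>
    <PARENT_DEPTID>997</PARENT_DEPTID>
    <DETAIL>
      <EMPLID>000003</EMPLID>
      <NAME>Pitt, Owen</NAME>
      <TERMINATED>N</TERMINATED>
      <RETIRED>N</RETIRED>
    </DETAIL>
  </SUMMARY>
  <SUMMARY>
    <DEPTID>342</DEPTID>
    <PARENT_DEPTID>998</PARENT_DEPTID>
    <DETAIL>
      <EMPLID>000022</EMPLID>
      <NAME>Harbinger, Earl</NAME>
      <TERMINATED>Y</TERMINATED>
      <RETIRED>Y</RETIRED>
    </DETAIL>
  </SUMMARY>
</ROOT_SEGMENT>
```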
Thankfully, after a bit of searching, I found this in the documentation on SSRS. It shows exactly what I needed: the incredibly simple method for creating a recursive hierarchy. The only trick was to have the Parent field stored with the child records. You can see that in the XML above, but the original didn’t have it. Once that modification was in place, it was simple. Follow the directions. In my case, DEPTID became the grouping field. To support other functions I also changed the name of the group so it could be referenced in functions.
Once it was created, simply going into the Advanced tab in the Row Groups property window and setting PARENT_DEPTID as the recursive parent was all that was needed.
Way too easy. But, how to get the drill down and the aggregates?
Drill Down & Aggregates
With that in place, the query will return hierarchical data, grouping on the DEPTID and keeping the parent-child relationships in order. To establish drill down, it’s just a matter of going into the Row Group properties for the Department group again. In the Visibility tab, you set the visibility to Hide and check “Display can be toggled by this report item:”
Once that’s done, the recursive groups are only displayed as the little plus signs expand and contract the groups. It works great. You can even get fancy and add an indent function as shown in this bit of the documentation. But how do you get the totals to display recursively? Not tricky at all. In fact, pretty easy. Since the data coming out has a set of flags that I have to check for positive or negative values, I have to use an expression to check them anyway. Something like this:
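For instance, counting only the employees flagged as terminated might use an expression along these lines (a sketch; the field names come from the XML above, and the exact flag logic is my guess at what the report needed):

```
=Sum(IIF(Fields!TERMINATED.Value = "Y", 1, 0))
```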
Luckily, built right into the aggregate functions is a method to make them work recursively, so that you get totals of the children displayed with the parent. All that’s necessary is to supply the group, which I named earlier (Department), and tell it to total this stuff in a recursive manner, like this:
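With a hypothetical terminated-employee count as the expression (the flag logic is my assumption; Department is the group name I assigned earlier), that looks like this:

```
=Sum(IIF(Fields!TERMINATED.Value = "Y", 1, 0), "Department", Recursive)
```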
Put one of these with the appropriate field and you have a nice neat report.
To finish up, none of this is rocket science. It’s just a question of knowing where to go and how to put it all together. Being a newb when it comes to Reporting Services, I spent a lot of time struggling. Now, you won’t have to.