Switching Off Parameter Sniffing

November 15, 2010 at 8:00 am (SQL Server 2008, SQLServerPedia Syndication, TSQL)

Or, another way to put it, in most cases, shooting yourself in the foot.

I was not aware that the cumulative update for SQL Server 2008 back in June included a switch that allows you to turn parameter sniffing off within SQL Server. Thanks to Kendra Little (blog|twitter) for letting me know about it (although she let me know by “stumping the chump” during my lightning talk at the Summit, thanks Kendra!).

When I first saw the switch, I thought about the places where turning off parameter sniffing could be helpful. But the more I thought about it, the more I realized that removing parameter sniffing is an extremely dangerous switch. Why? Because most people only ever hear about parameter sniffing when they run into a problem. Someone says “parameter sniffing” and you see people cringe. Too many people will take this information in and go, “Hey, I can just switch parameter sniffing off and I’ll have a much faster system, all the time.” But… even when you’re not hitting a problem with parameter sniffing, you’re still getting parameter sniffing. That’s where I see a problem. Let’s discuss what parameter sniffing actually is.

Parameter sniffing is applicable to stored procedures and parameterized queries. What happens is, when a value is passed to a parameter, the optimizer has the ability to read, or “sniff,” the value of that parameter. It can do this because it knows exactly what the value is when the proc/query is called. This is not applicable to local variables, because the optimizer can’t really know what those values might be, whereas it knows exactly what the values of parameters are going in. Why does it do this? One word: statistics. Statistics are what the optimizer uses to determine how queries will be executed. If the optimizer is given a specific value, it can compare that value to the statistics on the index or table in question and get as good an answer as is possible from those statistics as to how selective that value may be. That information determines how the optimizer will run the query, and because it is using specific values, it’s looking at specific information within the stats. If the parameters are not sniffed, the statistics are sampled and a generic value is assumed, which can result in a different execution plan.
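To make that concrete, here’s a minimal sketch showing the difference between a parameter the optimizer can sniff and a local variable it can’t. The table and procedure names are hypothetical, just for illustration:

-- Hypothetical table & procs, purely to illustrate sniffing vs. no sniffing
CREATE PROCEDURE dbo.GetReviewsByMovie
    @MovieId INT
AS
BEGIN
    -- The optimizer sniffs @MovieId at compile time and builds the plan
    -- against the histogram values for that specific parameter value.
    SELECT r.ReviewId, r.ReviewDate, r.Rating
    FROM dbo.Reviews AS r
    WHERE r.MovieId = @MovieId;
END;
GO

CREATE PROCEDURE dbo.GetReviewsByMovie_NoSniff
    @MovieId INT
AS
BEGIN
    -- Copying the parameter into a local variable hides the value from the
    -- optimizer; it falls back to an averaged, sampled selectivity estimate.
    DECLARE @LocalMovieId INT = @MovieId;

    SELECT r.ReviewId, r.ReviewDate, r.Rating
    FROM dbo.Reviews AS r
    WHERE r.MovieId = @LocalMovieId;
END;

The second procedure behaves the way the whole server would behave with sniffing switched off: every execution gets the same generic plan, whether that plan is appropriate for the value or not.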

The problem with parameter sniffing occurs when you have out of date statistics or data skew (certain values which return a wildly different set of results compared to the rest of the data within the table). The bad statistics or skew can result in an execution plan that is not consistent with most of the data that the stats represent. However, most of the time, in most situations, this is an edge case. Notice that hedging though. When parameter sniffing goes bad, it hurts.

Most of the time we’re going to gain huge benefits from parameter sniffing because the use of specific values leads to more accurate, not less accurate, execution plans. Sampled data, basically an average of the data in the statistics, can lead to a more stable execution plan, but a much less accurate one. Switching parameter sniffing off means that all queries will use sampled data, which creates a serious negative impact on performance. Most of the time, most of us are benefitting wildly from the strengths of parameter sniffing and only occasionally are we seeing the problems.

Unless you know, and I mean know, not suspect, that your system has major and systematic issues with parameter sniffing, leave this switch alone and let the optimizer make these choices for you. If you don’t, it’s very likely that you’ll see a performance hit on your system.
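For reference, if memory serves the switch in question is trace flag 4136, and it gets turned on like any other trace flag. Treat this as a sketch to verify on a test instance, not a recommendation:

-- Assumption: trace flag 4136 is the parameter-sniffing switch from the June CU.
-- This disables sniffing instance-wide, which is exactly the foot-shooting
-- described above. Test it, measure it, and be ready to turn it back off.
DBCC TRACEON (4136, -1);   -- -1 applies the flag globally

-- To re-enable parameter sniffing:
DBCC TRACEOFF (4136, -1);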


The New Path To MCM

November 12, 2010 at 5:46 pm (Uncategorized)

Microsoft has announced changes to the MCM program. This page shows all the ways that you can become an MCM without having to spend a month at Microsoft.

I’m interested in this for two reasons. First, I’d love to have the opportunity to try to become an MCM. Making it easier to make the attempt, hopefully without dumbing down the curriculum, is a great idea. Second, scroll down to the books section. That’s right, my book is one of the suggested books for learning enough so that you can pass the MCM test. I’m gob-smacked and honored and jazzed and I’ll stop gushing now.


PASS Summit 2010, Day 3 Keynote

November 11, 2010 at 11:48 am (PASS, SQLServerPedia Syndication)

Today is Dr. DeWitt.

The ballroom, where the keynotes are held, is filled with extra chairs. The Summit organizers expect extra attendance today, and well they should. Dr. DeWitt was amazing last year. I suspect this year will be more of the same.

Rick Heiges is introducing the day (waiting for Dr. DeWitt). Lynda Rab is leaving the board. Sad. I started volunteering for the PASS organization working for Lynda. She’s great. The new board members are Douglas McDowell, Andy Warren and Allen Kinsel.

The spring SQL Rally event was announced. I’ll be presenting a full-day session on query performance, Query Performance Tuning, Start to Finish. Look for (a lot) more blog posts on this. The Summit next year has been moved to mid-October. WHOOP! This is great because I was going to miss it next year. Oct 11-15 will be the dates in 2011. Of course, it’ll be in Seattle.

Dr. DeWitt is finally on stage. From this point forward, I’ll just be posting his words & some comments. This is my best attempt to capture the information. There will be typos.

Query optimization is a really hard problem. Dr. DeWitt says, “I’m running out of ideas.” Yeah, right. His “Impress Index” is basically an arrow going down. He’s cracking jokes about his delivery, asking, How Can I Possibly Impress You? He’s showing this strange picture that has 240 separate colors that each represent an execution plan in the optimizer. We’ll be back to that. This session was voted on. I’m glad optimization won. He says the optimizer developers live in fear of regressions.

The 100,000-foot view: magic happens. He’s working off of TPC-H benchmark query 8. There are 22 million ways of executing this query. The optimizer has to spend a few seconds to pick the correct plan from this full set. It’s still possible to pick bad plans. Cost-based optimization came from System R & a lady named Pat Selinger at IBM. Optimization is still the hardest part of building a DBMS, even after 30 years. The situation is further complicated by advances in hardware and functionality within the DBMS.

The goal of the optimizer is to transform SQL queries into an efficient execution plan. The parser turns out a logical operator tree, which then goes to the optimizer, and a physical operator tree is sent to the execution engine. He’s showing a simple table, based on movie reviews. The query is a SELECT with an AVG. Two possible plans. In the first, a scan occurs, then a filter is applied to pull out the right movie, and then an aggregate occurs. With this you get a scan, meaning the I/O corresponds to the number of pages in the table. Plan 2 uses an index to pull pages from the non-clustered index. This means random disk access that will look up the movies and then pass them on to the aggregate. The optimizer then has to figure out which is faster. The optimizer estimates the cost based on the statistics it has in hand. It has to estimate how many rows match the movie. So it estimates the selectivity of the predicate, then it calculates the cost of the plans in terms of CPU and I/O time.
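This wasn’t on his slides, but a rough T-SQL rendering of the example, against a hypothetical schema of my own invention, would look something like this:

-- Hypothetical movie-review schema: which plan should the optimizer pick?
SELECT AVG(r.Rating)
FROM dbo.Reviews AS r
WHERE r.MovieId = 932;

-- Plan 1: scan every page of dbo.Reviews, filter on MovieId, then aggregate.
--         I/O cost is proportional to the number of pages in the table.
-- Plan 2: seek a nonclustered index on MovieId, look up the matching rows
--         (random I/O), then aggregate.
-- The optimizer chooses between them based on how many rows it *estimates*
-- the predicate will return.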

So there are equivalence rules, such as for the select & join operators. Join operators are associative, meaning you can regroup the joins between multiple tables without changing the results. The select operator distributes over joins, so there are multiple ways of getting back the same information, all evaluated by the optimizer.

With a more complicated query, it could start with a selection of customers, then a selection of reviews, join them together, then join to the movies table, and then project out the columns wanted. But with equivalence rules, you can get other plans. The “selects distribute over joins” rule gets a different plan, or the “selects commute” rule can change the plan. He showed five different plans, then four more plans & said he could have done another 20. For this simple query, he came up with 9 logically equivalent plans. All nine will produce the same data. For each of the 9 plans there is a large number of alternate physical plans that the optimizer can choose.

Assume the optimizer has three join strategies: nested loops, sort-merge & hash. He’s also assuming two selection strategies, sequential scan or index scan. Obviously, this is simplified. So, using these three join & two select methods, there are 36 possible physical alternatives for one logical plan. So with 9 logical plans there are 9*36 = 324 possible physical plans. And that’s for a VERY simple query.

Selectivity estimation is the task of estimating how many rows will satisfy a predicate like MovieId = 932. Plan quality is highly dependent on the quality of the estimates that the optimizer makes.

I just sent in a question.

So the histogram shows the distribution of the data within the table. There isn’t enough space within the db to store detailed statistical info, so the solution is histograms, and you can have different kinds. The equi-width histogram divides the range of values into equal-sized buckets and then figures out how many rows fall into each bucket. So, for one value, the actual selectivity might be .059, but the estimated value is .050. That’s extremely close. But another value he shows has an actual selectivity of .011 but the histogram says .082, which is a HUGE error. Hello, bad execution plan.

Another approach is equi-height histograms. These divide the ranges so that all buckets contain roughly the same number of rows, as opposed to an equal range of values. In equi-height, the second example is .033 instead of .082, which is pretty good, but still skewed. The first example comes out at .167. He’s basically showing that errors can be introduced all over the place.

Histograms are the critical tool for estimating selectivity factors for selection predicates. But errors still occur. The deal is, there’s just a limited amount of space for these. Other statistics include row counts, page counts, etc.
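In SQL Server terms, you can look at this histogram yourself (it tops out at 200 steps, which is exactly that limited amount of space he’s talking about). A quick sketch, with hypothetical table and statistics names:

-- Hypothetical object names; returns header, density vector, and histogram
DBCC SHOW_STATISTICS ('dbo.Reviews', 'IX_Reviews_MovieId');

-- The third result set is the histogram: RANGE_HI_KEY, RANGE_ROWS, EQ_ROWS,
-- DISTINCT_RANGE_ROWS, AVG_RANGE_ROWS. Those row counts are what feed the
-- selectivity estimates he's describing.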

To estimate costs, the optimizer considers I/O time and CPU time. Actual values are highly dependent on the CPU and I/O subsystem on which the query will be run. For a parallel database system, such as PDW (plug), the problem also takes in network traffic. So back to the two alternative physical plans… You have to determine which plan is cheaper. Assuming that the optimizer gets it right, we know that there are 100 matching rows out of 100k pages. The rows are sorted on date, but we’re going after MovieID, so that means random reads. The optimizer doesn’t know what system it’s on, but it makes a guess that a scan will take 8 seconds. The filter will work at .1 microsecond/row & the aggregate at .1 microsecond/row, for .00001 seconds, for a total of 9 seconds. Plan two will use the index. Since the rows are sorted on date, random seeks are going to occur: .003 seconds/seek, for a total time of .3 seconds, plus the same time for the aggregate. This means plan two is the winner.

But what if the estimates are wrong? On a log plot, you start to see how each plan performs better or worse depending on the number of rows returned. More rows make plan 1 better, but fewer make plan 2 better.

That was just to get the data out of a table. When you add in JOIN costs, things get worse. The first example is a sort-merge join. This sorts each data set being returned, and then merges the results through a simple scan. The I/O cost is 5R + 5M. A nested loop works by scanning one table and, row by row, scanning the other table. The cost is R + R * M, where R is rows and M is pages.

With the example, you can see that with an index in place and a highly selective predicate, loop joins can be cheap. But it’s the cardinalities that affect things, so getting the histogram right is the key trick. With a log plot, again, you see how the various operations vary as the row counts grow. A sort-merge join is very expensive at a low number of rows, but at a large number of rows it still returns in about the same amount of time. So as large sets of data are accessed, merge gets good. But at lower numbers of rows, the nested loop works better. So if the cardinality estimate is off, you could get a huge error in performance, especially at the larger sets of data. The optimizer has to pick the right join method, and that choice is based on the number of rows in each set of data being joined.
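If you want to see that trade-off on your own SQL Server box, you can force each join method with query hints and compare the estimated and actual plans. Hypothetical tables again; the hints are for experimentation, not production:

-- Same logical join, two forced physical join methods
SELECT m.Title, r.Rating
FROM dbo.Movies AS m
JOIN dbo.Reviews AS r ON r.MovieId = m.MovieId
WHERE m.MovieId = 932
OPTION (LOOP JOIN);    -- force nested loops

SELECT m.Title, r.Rating
FROM dbo.Movies AS m
JOIN dbo.Reviews AS r ON r.MovieId = m.MovieId
WHERE m.MovieId = 932
OPTION (MERGE JOIN);   -- force sort-merge

Compare the costs of the two plans at different row counts and you’re looking at exactly the curves he’s describing.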

He then moves on to talk about how much space these things take up. The space depends on the “shape” of the query. He shows a type called a “star” join and a type called a “chain” join. Whoa! As you increase tables, the likely number of plans increases a lot. I knew this, but I haven’t seen it written down like this. But these shapes are extremes.

Every query optimizer starts off with left-deep plans first, instead of bushy plans. For the example, a bushy tree would have 645k equivalents for the star join as opposed to 10k for left-deep plans. With 3 join methods and n joins in a query, there will be 3 to the power of n possible physical plans. Uh… wow. Instead, the optimizer uses dynamic programming. Sometimes heuristics will cause the best plan to be missed.

One method of optimization is bottom-up. Optimization is performed in N passes (if N relations are joined). First pass: find the best 1-relation plan for each relation. Pass 2: find the best way to join the result of each 1-relation plan to another relation to generate all 2-relation plans. Pass n: find the best way to join… can’t see it. It keeps the lowest cost plans & plans with interesting row orders. In spite of pruning the plan space, this approach is still exponential in the # of tables. Costs are calculated, then pruning occurs. I’ve stopped taking notes on this part. You’ll have to see how this works in the slide deck (I’ll post the location at the end).

So that’s the theory. But the problem is, bad plans can get picked: when the statistics are missing or out of date, when cardinality estimates are made against skewed data, when attribute values are correlated, and then regressions and hardware changes mess stuff up.

Opportunities to improve: Jayant Haritsa has the Picasso Project. Bing this: Picasso Haritsa. There’s actually software there that helps with this. He’s back to TPC-H query 8, and using the tool, it will show the plan space for the query; this is the painting of the cool picture at the start of the talk. With this, you can see how sensitive plan generation is to the input parameters. So the cardinality estimates are the key.

This animation shows how the estimated costs for a query start low, peak, and then, instead of continuing up, go back down. And the optimizer team doesn’t know why. This is his example of how QO is, indeed, harder than rocket science.

What can you do better? Well, indexed nested loops look good, but they’re not stable across the range of selectivity factors. If the optimizer went conservative and always picked sort-merge, it would be more stable. So, picking slower operations could make things more stable, just slower. Robustness is tied to the number of plans, and he says even the QO team doesn’t fully understand this.

At QO time, have the QO annotate compiled query plans with statistics and check operators. Then you can see how this stuff works. They use this in two ways: a learning optimizer and dynamic reoptimization. With the learning optimizer, the observed stats go back to a statistics tracker and get fed back through to the catalog, so the next query will be better. Dynamic reoptimization takes the idea that when the actual stats don’t match the estimated stats, you truncate the operation, pause the execution, write the intermediate results out to tempdb, store that, and then use it with the rest of the query to re-optimize using real values. Cool!

Key points: Query optimization is harder than rocket science. There are three phases of QO: enumeration of the logical plan space, enumeration of alternate physical plans, and selectivity estimation. The QO team of every DB vendor lives in fear of regressions, but it’s going to happen, so cut the optimizer some slack.

“Microsoft Jim Gray Systems Lab” on Facebook is the source for the slides. Available here.


PASS Summit: Day 2 Keynote

November 10, 2010 at 12:26 pm (PASS, Uncategorized)

Today is Kilt Day at the PASS Summit. We’re going to try to arrange a group photo at lunch time.

The network connection is extremely slow. I suspect the tweeting about the kilts.

Bill Graziano is leading the keynote and he started off by having all the kilted stand. Only about 12-15 of us, but that’s five times better than last year. Then it was time for the volunteers to stand up. It was excellent to see so many people. The Outstanding Volunteer of the Year was Lorie Edwards. The PASSion award went to Wendy Pastrick, who really earned it.

Unfortunately the next segment was on governance… blech! But necessary. Everyone here is a member, so they should know how the money is spent. Luckily Bill is not digging in a lot. He’s covering the things he has to. Yes, it’s a boring topic, but this is a not-for-profit organization and it needs to be transparent. I’ve always been happy to see the numbers, even when it bored the heck out of me.

An Xbox Kinect was given out to a lucky winner. Cool! I was too busy yesterday to take advantage of the contests… ah well.

Today is also the Women In Technology Luncheon.

The first speaker of the day is Quentin Clark of Microsoft. Mr. Clark is introducing Denali. Today we should get some meat. The goal is shifting user expectations and shifting business expectations. Sadly, I was extremely excited about this presentation, but, instead of getting into the product, we got quite a lot of sales pitch. I do want to see what they think is the most important functionality, but I want to see it, not hear about it. That’s important. I think vendors frequently don’t think about the audience. The Twitter stream started to get pretty abusive, just like last year during the “I can’t mention the major hardware vendor that supports PASS because we really appreciate it” presentation.

Finally, after 40 years in the wilderness, we got a demo of SQL Server Always On. He started right into Management Studio, which is the first time I’ve seen it in the last two days during any of the Denali demos. That’s an indication of something. This is pretty neat. Automatic failover with multiple secondaries, so you can have more than one data center around the country and have synchronous data in multiple sites. THAT will be useful. This without shared disk. Yes, you can still use it, but now you don’t have to. That’s a huge improvement over what we’ve had in the past. And, he got an ovation during the demo. When you have a collection of nerds as big as this clapping for you, you did something right. Thank you Microsoft. The data syncs occur in near real time, behind the scenes, with HA setups that you can put together, for individual databases or groups of databases, in about five minutes. Huzzah! Oh, and the secondaries can be set to be readable and you can move your backups to the secondary… WOW! Again, thank you Microsoft.

The breakdown of the goals is the same as outlined yesterday, of course: Mission Critical, which they just showed, then IT Pro & Developer Productivity, and Pervasive Insight. Then Mr. Clark mentioned DAC and there was a low rumble around the blogger table. That is not a popular set of functionality. There are going to be enhancements in spatial within Denali, modifying the abilities to run queries and moving all the way through the BI stack. We’re finally getting a sequence generator, paging, and enhanced error handling.
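For the curious, here’s a rough sketch of what the sequence and paging syntax looks like it will be, based on what’s been announced; the CTP syntax may well change before release, and the object names here are made up:

-- Sequence generator (Denali syntax as announced; verify against the CTP)
CREATE SEQUENCE dbo.OrderNumbers AS INT START WITH 1 INCREMENT BY 1;
SELECT NEXT VALUE FOR dbo.OrderNumbers;

-- Paging without the ROW_NUMBER() wrapper we have to write today
SELECT OrderId, OrderDate
FROM dbo.Orders
ORDER BY OrderDate
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;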

FileTable, a whole new integration of FileStream technology, is being demoed next. This should be good too. The key takeaway is “every Windows application that generates files can now store files within SQL Server without a single modification to the app.” I’m not so sure this is a good thing, and what about SharePoint? Still, the technology is cool and I’m geek enough that I’m going to enjoy it. So, to a degree, this works like FileStream, but it’s file management through the database. The demo showed a set of files getting inserted into SQL Server through a command prompt. Oooh… That’s cool. The demo is impressive. You can update the documents from the file system or from the database. That’s pretty neat. I’m just not sure exactly where this goes within the enterprise. I’ll have to read some more about it.

The next set of functionality is Project Juneau. I’ve heard a lot about this. It’s likely to hurt some of the 3rd party tools. We went right to the demo this time. Thanks. We’re in the VS 2010 shell now, along with BIDS and everything else. They’re not retiring SSMS, but it’s clear that it’s on the way out; it must be. I like the improved TSQL completion. The table designer is good too, because you can sync the visuals & TSQL as you create the table. That’s great! I think I said this yesterday, but there are a lot of people that will not enjoy moving to Visual Studio. I’m a fan, but others will not like it. Still, it looks good. It’s working better than it ever did, and that’s a good thing.


PASS Summit: Day 1 Keynote, Part 3

November 9, 2010 at 12:51 pm (PASS)

Ted Kummert is still talking.

For the cloud, of course, they’re talking about SQL Azure. Microsoft really is throwing themselves into the cloud, completely. The emphasis is that they offer both a cloud and an on-premises solution. I don’t mind saying, I’m still trying to get the full business proposition for an old school, fat business like the one I work for. What should we be doing with the cloud? I just haven’t seen the magic. I see where smaller businesses, or start-ups, or businesses that need temporary surge capacity can use the cloud, but… for traditional work, it just doesn’t seem to jibe yet.

We’re going to see some made-up scenarios for how Azure can manage Contoso Bikes. He shows how you can pull data from the cloud and deploy reports from the cloud, in order to deliver to people on the road. But we can do that already in other ways. The ability to link your data with the Data Market data is pretty cool. I can see that being useful. You will have to purchase access to these data sets. You can query against them, but, similar to the PDW demo, we’re not in SSMS any more. I wonder what Microsoft’s long-term plans are for SSMS based on the ways we’re seeing it being bypassed.

What’s next for SQL Server? Denali. The CTP is getting handed out after the keynote tomorrow. We’ll be seeing the demo of Denali tomorrow. The idea that Mr. Kummert is communicating is that Denali represents client requests. The targets are Mission Critical, IT Pro & Developer Productivity, and Pervasive Insight. They’ve focused on manageability and upgrade capacity. That should be good. They’re going to work on performance, which is interesting. They’re unifying the experience into Visual Studio… I’m OK with this, but I know that a LOT of DBAs are not OK with this. It’ll be interesting to see how it breaks out. Denali is the largest release of Integration Services ever. Full life cycle development on SSIS. That will be good. They’re also talking about expansion on the PowerPivot type of work. Project Crescent is a new reporting tool that is coming out with Denali, a new way of showing business information. Sounds good. Finally, a demo. We’re seeing the 100 million row demo, again. I’d like to see the new stuff, please. So, they pulled the data out of Excel and directly into Analysis Services. That’s good. They’re showing how it works within VS, which gives you source control, etc., and then you also get to use the server, which gets past the memory limits within PowerPivot. And he’s showing it working against over 2 billion rows. This is a great demo. We’re seeing a trillion rows per minute, filtered & reported on. It’s very slick. This is good. The same technology is also in the database engine. We’re seeing fantastic performance. I might be out of a job. It’s based on the columnar data store technology. It’s a very good thing.

Come back for more tomorrow!


PASS Summit: Day 1 Keynote, Part 2

November 9, 2010 at 12:21 pm (PASS)

Mark Souza from the SQL CAT Team, some of the smartest & most capable of MS consultants in SQL Server, is presenting how his team is offering a health check for people’s SQL Server systems.

They’re going to actually be using some technology to do this little event called SQL PASS It On, using Twitter. Twitter is becoming more and more a major part of the event. If you’re not at least monitoring Twitter, you’re missing out.

It’s a busy day with the SQL Clinic, the Exhibit Hall, Community Learning Center, Birds of a Feather Lunch, Regional Mentors, Book Signing and Exhibitor Reception. That’s not mentioning all the sessions.

The keynotes will be Ted Kummert today, Quentin Clark tomorrow, and David DeWitt (YAY!) on Thursday, where he will talk about query optimization. I will be taking notes!

We’re seeing a history of how Microsoft split the code from Sybase for the SQL Server 7.0 release. They built a brand-new database platform in 2.5 years. That’s pretty amazing.

They started off with SQL Server 7.0 for ease of use. Ted Kummert is emphasizing how important Total Cost of Ownership is to Microsoft and their plans. He’s also talking about how important it is that SQL Server is integrated, including Analysis Services and Cloud. His final focus is on large scale, high availability systems. This is the history of what they’ve built. Now, he’s going to focus on the future, starting with mission critical, then covering the cloud, and finally what is going to happen with SQL Server Next.

For mission critical, they’re releasing the Parallel Data Warehouse, which will allow for 100s of terabytes in what is basically an appliance. That’s right, a toaster for SQL Server. Seriously, this is a big deal. The demo is already fascinating. He’s showing how you create tables with the distribution and partitioning in place. It comes with a special PDW loader, which will load up to 1TB an hour of data. It can even be integrated with SSIS. This is pretty amazing. On the tweet stream I saw Michelle Ufford mention that she’s looking at it for GoDaddy, so this is viable. They then showed how they could move 800 billion (yes, that is a “b”) rows into the system in 19 seconds. Interesting point from Brent Ozar: what they were doing was not in SSMS. Paulo Resende from Bank of America came out to give a customer testimonial on how they implemented PDW. Then Dave Mariani of Yahoo gave another testimonial on how they manage user data & analytics for… well… spam. They’re running through 1.2TB a day and 50GB an hour… uh… WOW! The fascinating thing is, they’re moving that data into a cube for the queries and are able to pull data out in less than 10 seconds. That’s great. Microsoft is also announcing “Atlanta,” which is a service that assesses the configuration of your 2008 and 2008 R2 systems, through the cloud. Bob Ward, cool, is out to show how Atlanta works. This is extremely cool stuff. I’d like to think that I keep most of my servers up to date, but a service like this could still be extremely useful.


PASS Summit: Day 1 Keynote, Part 1

November 9, 2010 at 11:35 am (PASS)

Sitting at the big kids table at the PASS Summit, ready to rock and roll. The Summit has not officially started yet, but it’s been a fantastic ride already. I’m getting to meet a bunch of great and amazing people. I made my very first trip out to the Microsoft campus yesterday. Last night was the SQL Server Central party. This is just a great organization and a great event.

Right at the start, the tweeting is hot & heavy. Hmmm… OK, starting off with a Tina Turner impersonator. She’s extremely good, but I have to ask, what were they thinking? Her name is Truly Tina. She was outstanding. Just a bit odd.

Rushabh Mehta is introducing the PASS organization. He’s showing off the board of directors and the executive committee. He’s also showing what else PASS has besides the Summit, which includes 24 Hours of PASS, SQL Saturday and the European Summit. The organization also includes the chapters and the virtual chapters. The organization reaches thousands of people around the world through all these events and organizations. The goal this year is to try to get to 250,000 members.

This year the Summit has 3,807 registrations from 48 countries. The keynote is streaming live, with 40 people blogging and tweeting away. If you want to follow the tweets, make sure you use the hash tag #sqlpass. There are 191 speakers, 44 of them MVPs.


Pass Summit Sunday Morning

November 7, 2010 at 11:06 am (Uncategorized)


I’m sitting in Top Pot Donut. I’m having a fantastic apple fritter. I’m also trying out the capabilities of the WordPress Android app. So far so good. I wouldn’t want to try doing hard core technical posts from this but it works. I can even add photos.


SQLServerPedia Award Votes

November 3, 2010 at 1:08 pm (SQLServerPedia Syndication)

This is just another reminder to please vote for my blog post on using PowerShell Remoting with SQL Server. It’s a post I’m proud of.

Also, I think that Gail Shaw (blog|twitter) is one heck of a great blogger. She has multiple posts in several categories. She’s extremely deserving of your vote. She might even get the most votes overall, another thing I think she deserves. I’ve learned tons and tons from all the information she puts out there. The least she deserves is a little chunk of plexiglass for all that hard work. So vote for all her posts.


Kilt Day

November 3, 2010 at 8:00 am (PASS, SQLServerPedia Syndication)

A week from now will be Kilt Day at the PASS Summit. It’s probably way too late to order a kilt at this point. But don’t despair. You can still take part. Just a short walk from the Summit is the headquarters of Utilikilt. These are not classic tartan wraps with sporrans and socks. They’re the modern equivalent: they come in fun fabrics & colors and are actually pretty practical. So if you still want to participate in Kilt Day, and we’d love to have you, plan a trip to Utilikilt.

And no, they’re not sponsoring me or anything (more’s the pity). I just like them.

