PASS Summit 2010, Day 3 Keynote

November 11, 2010 at 11:48 am (PASS, SQLServerPedia Syndication)

Today is Dr. DeWitt.

The ballroom, where the keynotes are held, is filled with extra chairs. The Summit organizers expect extra attendance today, and well they should. Dr. DeWitt was amazing last year. I suspect this year will be more of the same.

Rick Heiges is introducing the day (waiting for Dr. DeWitt). Lynda Rab is leaving the board. Sad. I started volunteering for the PASS organization working for Lynda. She's great. The new board members are Douglas McDowell, Andy Warren and Allen Kinsel.

The spring SQL Rally event was announced. I'll be presenting a full-day session on query performance, Query Performance Tuning, Start to Finish. Look for (a lot) more blog posts on this. The Summit next year has been moved to mid-October. WHOOP! This is great because I was going to miss it next year. Oct 11-15 will be the dates in 2011. Of course, it'll be in Seattle.

Dr. DeWitt is finally on stage. From this point forward, I'll just be posting his words and some comments. This is my best attempt to capture the information. There will be typos.

Query optimization is a really hard problem. Dr. DeWitt says, "I'm running out of ideas." Yeah, right. His "Impress Index" is basically an arrow going down. He's cracking jokes about his delivery, asking how he can possibly impress us. He's showing a strange picture made up of 240 separate colors, each representing an execution plan in the optimizer. We'll be back to that. This session's topic was voted on, and I'm glad optimization won. He says the optimizer developers live in fear of regressions.

The 100,000 foot view: magic happens. He's working off of TPC-H benchmark query 8. There are 22 million ways of executing this query. The optimizer has to spend a few seconds to pick the correct plan from that full set, and it's still possible to pick bad plans. Cost-based optimization came from System R and a lady named Pat Selinger at IBM. Even after 30 years, optimization is the hardest part of building a DBMS. The situation is further complicated by advances in hardware and in functionality within the DBMS.

The goal of the optimizer is to transform SQL queries into an efficient execution plan. The parser turns out a logical operator tree, which then goes to the optimizer, and a physical operator tree is sent to the execution engine. He's showing a simple table, based on movie reviews. The query is a SELECT with an AVG. Two possible plans. In the first, a scan occurs, then a filter is applied to pull out the right movie, and then an aggregate runs. With this you get a scan, meaning the I/O corresponds to the number of pages in the table. Plan 2 uses an index to pull pages from the non-clustered index. That means random disk access to look up the movies, and the rows then pass on to the aggregate. The optimizer has to figure out which is faster. It estimates the cost based on the statistics it has in hand. It has to estimate how many rows match the movie, so it estimates the selectivity of the predicate, then it calculates the cost of the plans in terms of CPU and I/O time.

So there are equivalence rules for the select and join operators. Join operators are associative, meaning a series of joins across multiple tables can be regrouped without changing the results. The select operator distributes over joins. So there are multiple ways of getting back the same information, all evaluated by the optimizer.
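
In relational algebra terms (my notation, not his slides), those rules look roughly like this; the last one only holds when the predicate p touches columns of R alone:

```latex
R \bowtie S \;\equiv\; S \bowtie R
\qquad
(R \bowtie S) \bowtie T \;\equiv\; R \bowtie (S \bowtie T)
\qquad
\sigma_{p}(R \bowtie S) \;\equiv\; \sigma_{p}(R) \bowtie S
```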

With a more complicated query, it could start with a selection of customers, then a selection of reviews, join them together, then join to the movies table, and then project out the columns wanted. But with the equivalence rules, you can get other plans. The "selects distribute over joins" rule gets a different plan, and the "selects commute" rule can change the plan again. He showed five different plans, then four more, and said he could have done another 20. For this simple query, he came up with 9 logically equivalent plans. All nine will produce the same data. And for each of the 9 plans there is a large number of alternate physical plans that the optimizer can choose from.

Assume the optimizer has three join strategies: nested loops, sort-merge, and hash. He's also assuming two selection strategies: sequential scan or index scan. Obviously, this is simplified. So, using these three join methods and two select methods, there are 36 possible physical alternatives for each logical plan. With 9 logical plans, that's 9 * 36 = 324 possible physical plans. And that's for a VERY simple query.
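
Working the arithmetic out (my assumption: the example query has two joins and two selections, which is what makes the numbers land on 36 and 324):

```latex
\underbrace{3^{2}}_{\text{join methods}} \times \underbrace{2^{2}}_{\text{select methods}} = 36 \ \text{physical plans per logical plan}
\qquad
9 \times 36 = 324 \ \text{physical plans in total}
```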

Selectivity estimation is the task of estimating how many rows will satisfy a predicate like MovieId = 932. Plan quality is highly dependent on the quality of the estimates the optimizer makes.
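
In other words (my formula, and I'm calling the table Reviews based on his movie-review example): the selectivity is just the fraction of the table the predicate keeps, and the row estimate falls straight out of it.

```latex
sel(\text{MovieId}=932) \;=\; \frac{\left|\sigma_{\text{MovieId}=932}(\text{Reviews})\right|}{\left|\text{Reviews}\right|}
\qquad\qquad
\widehat{rows} \;=\; sel \times \left|\text{Reviews}\right|
```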

I just sent in a question.

A histogram describes the distribution of the data within the table. There isn't enough space within the database to store detailed statistical info about every value, so histograms are the compromise, and you can have different kinds. The equi-width histogram divides the range of values into equal-sized buckets and then counts how many rows fall into each bucket. So, for one value the actual selectivity might be .059 while the estimate is .050. That's extremely close. But another value he shows has an actual selectivity of .011 while the histogram says .082, which is a HUGE error. Hello, bad execution plan.

Another approach is the equi-height histogram. These divide the ranges so that all buckets contain roughly the same number of rows, as opposed to covering an equal range of values. With equi-height, the estimate for the second example drops from .082 to .033, which is pretty good, but still skewed; the first example comes out at .167. He's basically showing that errors can be introduced all over the place.
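
Here's a toy Python sketch of the two histogram flavors. The data, the bucket count, and the crude "assume uniform within a bucket" estimator are all mine, so the numbers won't match his slides, but it shows the pattern: both histograms handle the well-behaved value fine, and the equi-height one cuts (without eliminating) the error on the badly skewed one.

```python
import random
from bisect import bisect_left

random.seed(1)

# Toy data, invented by me: 100k review rows where MovieId 7 is wildly popular
# (heavy skew) and everything else is roughly uniform over 1..1000.
rows = [7 if random.random() < 0.4 else random.randint(1, 1000) for _ in range(100_000)]
BUCKETS = 10

def estimate_equi_width(values, target, buckets=BUCKETS):
    # Equal-width value ranges; assume rows are spread evenly across each range.
    lo, hi = min(values), max(values)
    width = (hi - lo + 1) / buckets
    counts = [0] * buckets
    for v in values:
        counts[min(int((v - lo) / width), buckets - 1)] += 1
    b = min(int((target - lo) / width), buckets - 1)
    return counts[b] / width / len(values)          # selectivity estimate

def estimate_equi_height(values, target, buckets=BUCKETS):
    # Boundaries chosen so every bucket holds ~the same number of rows;
    # popular values squeeze their bucket's range down, which helps.
    ordered = sorted(values)
    step = len(ordered) // buckets
    uppers = [ordered[min((i + 1) * step, len(ordered)) - 1] for i in range(buckets)]
    b = min(bisect_left(uppers, target), buckets - 1)
    lower = (uppers[b - 1] + 1) if b > 0 else ordered[0]
    width = max(uppers[b] - lower + 1, 1)
    return step / width / len(values)               # selectivity estimate

for movie_id in (7, 500):
    actual = rows.count(movie_id) / len(rows)
    print(f"MovieId={movie_id:4d}  actual={actual:.4f}  "
          f"equi-width={estimate_equi_width(rows, movie_id):.4f}  "
          f"equi-height={estimate_equi_height(rows, movie_id):.4f}")
```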

Histograms are the critical tool for estimating selectivity factors for selection predicates. But errors still occur. The deal is, there's just a limited amount of space to store them. Other statistics include row counts, page counts, etc.

When estimating costs, the optimizer considers I/O time and CPU time. Actual values are highly dependent on the CPU and I/O subsystem on which the query will be run. For a parallel database system, such as PDW (plug), the cost model also has to account for network traffic. So, back to the two alternative physical plans… You have to determine which plan is cheaper. Assuming that the optimizer gets it right, we know that there are 100 matching rows in a table of 100k pages. The rows are sorted on date, but we're filtering on MovieID, so it's random reads. The optimizer doesn't know what system it's on, but it guesses that a scan will take 8 seconds. The filter works at .1 microsecond/row and the aggregate at .1 microsecond/row (about .00001 seconds for the 100 rows), for a total of roughly 9 seconds. Plan two uses the index. Since the rows are stored sorted on date, random seeks are going to occur at .003 seconds per seek, for a total of about .3 seconds, plus the same tiny aggregate cost. This means plan two is the winner.
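
Here's the arithmetic as I scribbled it down, in Python. The 8-second scan, the 0.1 microsecond per-row CPU cost, the 0.003-second seek, and the 100 matching rows are from my notes; the 10-million-row table size is my own assumption (roughly 100 rows per page across 100k pages) just to make the numbers hang together.

```python
# Plan 1: sequential scan + filter + aggregate
SCAN_SECONDS = 8.0            # his estimate for scanning the 100k pages
ROWS = 10_000_000             # my assumption: ~100 rows per page over 100k pages
CPU_PER_ROW = 0.1e-6          # 0.1 microseconds per row for the filter/aggregate
MATCHING_ROWS = 100           # rows for the one movie we care about

plan1 = SCAN_SECONDS + ROWS * CPU_PER_ROW + MATCHING_ROWS * CPU_PER_ROW
# ~ 8 + 1 + 0.00001 ~ 9 seconds

# Plan 2: index seek per matching row (table stored sorted on date, so seeks are random)
SEEK_SECONDS = 0.003
plan2 = MATCHING_ROWS * SEEK_SECONDS + MATCHING_ROWS * CPU_PER_ROW
# ~ 0.3 seconds -- the index wins, *if* the 100-row estimate is right

print(f"plan 1 (scan):  {plan1:.2f} s")
print(f"plan 2 (index): {plan2:.4f} s")

# Where the plans cross over: once the real row count climbs past roughly
# plan1 / SEEK_SECONDS rows, the random seeks cost more than scanning everything.
print(f"crossover at roughly {plan1 / SEEK_SECONDS:,.0f} matching rows")
```

That last line is exactly the log-plot point that comes next: the index plan only stays the winner while the row count stays small.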

But what if the estimates are wrong? On a log plot, you start to see how each plan's cost changes with the number of rows returned: more rows make plan 1 (the scan) better, fewer rows make plan 2 (the index) better.

That was just to get the data out of a table. Add in JOIN costs and things get worse. The first example is a sort-merge join. This sorts each data set being joined and then merges the results with a simple scan. The I/O cost is 5R + 5M. A nested loop join works by scanning one table and, row by row, scanning the other table. The cost is R + R * M, where R is rows and M is pages.
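
Putting rough numbers to those formulas: the cost formulas are as I captured them (and my notes may blur rows versus pages); the table sizes, rows-per-page, and the 3-page index probe below are all numbers I invented to show the shape of the curves, not anything from his slides.

```python
def sort_merge_io(r_pages, m_pages):
    # Sort both inputs (multiple passes), then merge with a single scan: ~5R + 5M.
    return 5 * r_pages + 5 * m_pages

def nested_loop_io(r_pages, r_rows, m_pages):
    # Scan the outer once; re-scan the inner for every outer row.
    return r_pages + r_rows * m_pages

def indexed_nested_loop_io(r_pages, r_rows, probe_pages=3):
    # My assumption: an index on the inner turns each re-scan into a ~3 page probe.
    return r_pages + r_rows * probe_pages

M_PAGES = 1_000        # inner table pages, made up
ROWS_PER_PAGE = 100    # made up

for outer_rows in (10, 1_000, 100_000):
    r_pages = max(outer_rows // ROWS_PER_PAGE, 1)
    print(f"{outer_rows:>7} outer rows: "
          f"sort-merge={sort_merge_io(r_pages, M_PAGES):>8,}  "
          f"nested loop={nested_loop_io(r_pages, outer_rows, M_PAGES):>12,}  "
          f"indexed loop={indexed_nested_loop_io(r_pages, outer_rows):>9,}")
```

The output makes the next point for him: the indexed loop join is dirt cheap for a handful of outer rows and ruinous for a hundred thousand, while sort-merge barely moves.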

With the example, you can see that, with an index in place and a highly selective predicate, loop joins can be cheap. But it's the cardinalities that affect things, so getting the histogram right is the key trick. With a log plot, again, you see how the cost of the various join methods varies with the number of rows. A sort-merge is comparatively expensive at a low number of rows, but at a large number of rows it still finishes in about the same amount of time. So as large sets of data are accessed, merge gets good. But at lower numbers of rows, the nested loop works better. So if the cardinality estimate is off, you could get a huge error in performance, especially on the larger sets of data. The optimizer has to pick the right join method, and that choice is based on the number of rows in each set of data being joined.

He then moves on to talk about how much space these plan searches take up. The space depends on the "shape" of the query. He shows a type called a "star" join and a type called a "chain" join. Whoa! As you increase the number of tables, the number of likely plans increases enormously. I knew this, but I haven't seen it written down like this. But these shapes are the extremes.

Every query optimizer starts off by considering left-deep plans first, instead of bushy plans. For the example, a bushy tree would have 645k equivalents for the star join as opposed to 10k for left-deep plans. And with 3 join methods and n joins in a query, there will be 3 to the power of n possible physical plans for each of those. Uh… wow. Instead of enumerating all of that, the optimizer uses dynamic programming, and sometimes the heuristics will cause the best plan to be missed.
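
To get a feel for the blow-up, here's my own rough counting: these are the standard formulas that allow cross products, so don't expect them to match his shape-specific 10k and 645k figures.

```python
from math import comb, factorial

def left_deep_orders(n):
    # One left-deep tree per permutation of the n tables.
    return factorial(n)

def bushy_trees(n):
    # Permutations times the number of binary tree shapes (a Catalan number).
    return factorial(n) * comb(2 * (n - 1), n - 1) // n

for n in (4, 6, 8, 10):
    join_ops = n - 1
    print(f"{n:>2} tables: left-deep={left_deep_orders(n):>12,}  "
          f"bushy={bushy_trees(n):>16,}  "
          f"x 3^{join_ops} = {3 ** join_ops:>6,} physical variants per tree")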

One method of optimization is bottom up. Optimization is performed in N passes (if N relations are joined). In the first pass, find the best 1-relation plan for each relation. In pass 2, find the best way to join the result of each 1-relation plan to another relation, generating all 2-relation plans. In pass N, find the best way to join… I can't see the rest of the slide. It keeps the lowest-cost plan for each set of relations, plus plans that produce interesting row orders. In spite of pruning the plan space, this approach is still exponential in the number of tables. Costs are computed, then pruning occurs. I've stopped taking notes on this part. You'll have to see how this works in the slide deck (I'll post the location at the end).
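
Here's a stripped-down Python sketch of that bottom-up, dynamic-programming idea: the best plan for each single relation first, then the best plan for every larger set of relations built out of its subsets, pruning as you go. Everything in it (left-deep only, the toy cost model, the made-up cardinalities and selectivities, no interesting-order bookkeeping) is my simplification, not his slides.

```python
from itertools import combinations

# Made-up cardinalities and join selectivities for the movie-review example.
CARD = {"customers": 10_000, "reviews": 1_000_000, "movies": 5_000}
SEL = {
    frozenset(("customers", "reviews")): 1e-4,
    frozenset(("reviews", "movies")): 2e-4,
    # no predicate between customers and movies -> cross product (selectivity 1.0)
}

def best_left_deep_plan(tables):
    # Pass 1: the best (only) plan for each single relation: zero cost, base cardinality.
    best = {frozenset([t]): (0.0, float(CARD[t]), (t,)) for t in tables}
    # Pass k: extend every best (k-1)-relation plan by one more relation, keep the cheapest.
    for k in range(2, len(tables) + 1):
        for subset in (frozenset(c) for c in combinations(tables, k)):
            for last in subset:
                rest = subset - {last}
                cost_so_far, rows_so_far, order = best[rest]
                sel = 1.0
                for prev in rest:  # independence assumption: multiply applicable predicates
                    sel *= SEL.get(frozenset((prev, last)), 1.0)
                out_rows = rows_so_far * CARD[last] * sel
                # Toy cost: rows touched on both sides plus rows produced.
                total = cost_so_far + rows_so_far + CARD[last] + out_rows
                if subset not in best or total < best[subset][0]:
                    best[subset] = (total, out_rows, order + (last,))  # prune the rest
    return best[frozenset(tables)]

cost, rows, order = best_left_deep_plan(list(CARD))
print(" -> ".join(order), f"| est. cost {cost:,.0f} | est. rows {rows:,.0f}")
```

Run it and the order that starts with the customers-movies cross product never wins; its subset plan is so expensive that it loses in the final pass, which is the whole point of the pruning.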

So that's the theory. But the problem is, bad plans can still be picked: when the statistics are missing or out of date, when cardinality estimates are made against skewed data, or when attribute values are correlated; and then regressions and hardware changes mess stuff up further.

Opportunities to improve: Jayant Haritsa has the Picasso Project. Bing this: Picasso Haritsa. There's actual software there that helps visualize this stuff. He's back to TPC-H Query 8, and using the tool, it will show the plan space for the query; this is the painting of the cool picture from the start of the talk. With this, you can see how sensitive plan generation is to the input parameters. So the cardinality estimates are the key.

This animation shows how the estimated costs for a query start low, peak, and then, instead of continuing up, go back down. And the optimizer team doesn't know why. This is his example of how QO is, indeed, harder than rocket science.

What can you do better? Well, indexed nested loops look good, but they're not stable across the range of selectivity factors. If the optimizer went conservative and always picked sort-merge, it would be more stable. So, picking slower operations could make things more stable, just slower. Robustness is tied to the number of plans, and he says the QO teams don't fully understand that relationship.

At optimization time, have the QO annotate compiled query plans with its statistics and with check operators. Then you can see how well the estimates held up. They use this in two ways: a learning optimizer and dynamic re-optimization. In the learning optimizer, the observed stats go back to a statistics tracker, which feeds them into the catalog, and the next query will be better. Dynamic re-optimization takes the idea that when the actual stats don't match the estimated stats, you truncate the operation, pause the execution, write the intermediate results out to tempdb, store that, and then use it with the rest of the query to re-optimize using real values. Cool!
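
To make the check-operator idea concrete, here's a toy Python sketch; the class, the threshold, and the mechanics are all my invention, meant only to show the shape of the feedback loop, not anything actually shipped or built.

```python
class CheckOperator:
    """Toy 'check operator': wrap a plan step, count the actual rows flowing
    through it, and flag a re-optimization when the optimizer's estimate turns
    out to be badly wrong. Threshold and structure are my invention."""

    def __init__(self, child_rows, estimated_rows, tolerance=10.0):
        self.child_rows = child_rows          # iterator producing the step's rows
        self.estimated_rows = estimated_rows
        self.tolerance = tolerance
        self.actual = 0

    def __iter__(self):
        for row in self.child_rows:
            self.actual += 1
            yield row
        ratio = self.actual / max(self.estimated_rows, 1)
        if ratio > self.tolerance or ratio < 1 / self.tolerance:
            # In the real proposal: pause, spool the rows produced so far to tempdb,
            # and re-optimize the rest of the query using the actual count.
            print(f"re-optimize! estimated {self.estimated_rows}, saw {self.actual}")

# Tiny usage example: the optimizer guessed 100 rows, the filter actually returns 5,000.
rows = ({"MovieId": 932, "Stars": i % 5} for i in range(5_000))
checked = CheckOperator(rows, estimated_rows=100)
consumed = sum(1 for _ in checked)   # downstream operator just drains the rows
```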

Key points: Query optimization is harder than rocket science. There are three phases of QO: enumeration of the logical plan space, enumeration of alternate physical plans, and selectivity estimation. The QO team of every DB vendor lives in fear of regressions, but it's going to happen, so cut the optimizer some slack.

“Microsoft Jim Gray Systems Lab” on FaceBook is the source for the slides. Available here.

Permalink 4 Comments

PASS Summit: Day 2 Keynote

November 10, 2010 at 12:26 pm (PASS, Uncategorized)

Today is Kilt Day at the PASS Summit. We’re going to try to arrange a group photo at lunch time.

The network connection is extremely slow. I suspect it's all the tweeting about the kilts.

Bill Graziano is leading the keynote, and he started off by having all the kilted stand. Only about 12-15 of us, but that's five times better than last year. Then it was time for the volunteers to stand up. It was excellent to see so many people. The Outstanding Volunteer of the Year was Lorie Edwards. The PASSion award went to Wendy Pastrick, who really earned it.

Unfortunately the next segment was on governance… blech! But necessary. Everyone here is a member, so they should know how the money is spent. Luckily Bill is not digging in a lot. He’s covering the things he has to. Yes, it’s a boring topic, but this is a not-for-profit organization and it needs to be transparent. I’ve always been happy to see the numbers, even when it bored the heck out of me.

An Xbox Kinect was given out to a lucky winner. Cool! I was too busy yesterday to take advantage of the contests… ah well.

Today is also the Women In Technology Luncheon.

The first speaker of the day is Quentin Clark of Microsoft. Mr. Clark is introducing Denali. Today we should get some meat. The goal is shifting user expectations and shifting business expectations. Sadly, I was extremely excited about this presentation, but, instead of getting into the product, we got quite a lot of sales pitch. I do want to see what they think is the most important functionality, but I want to see it, not hear about it. That’s important. I think vendors frequently don’t think about the audience. The Twitter stream started to get pretty abusive, just like last year during the “I can’t mention the major hardware vendor that supports PASS because we really appreciate it” presentation.

Finally, after 40 years in the wilderness, we got a demo of SQL Server AlwaysOn. He started right in Management Studio, which is the first time I've seen it in the last two days during any of the Denali demos. That's an indication of something. This is pretty neat. Automatic failover with multiple secondaries, so you can have more than one data center around the country and have synchronous data in multiple sites. THAT will be useful. And this is without shared disk. Yes, you can still use it, but now you don't have to. That's a huge improvement over what we've had in the past. And he got an ovation during the demo. When you have a collection of nerds as big as this clapping for you, you did something right. Thank you, Microsoft. The data syncs occur in near real time, behind the scenes, with HA setups that you can put together, for individual databases or groups of databases, in about five minutes. Huzzah! Oh, and the secondaries can be set to be readable, and you can move your backups to the secondary… WOW! Again, thank you, Microsoft.

The breakdown of the goals is the same as outlined yesterday, of course: Mission Critical, which they just showed, then IT Pro & Developer Productivity, and Pervasive Insight. Then Mr. Clark mentioned DAC and there was a low rumble around the blogger table. That is not a popular set of functionality. There are going to be enhancements in spatial within Denali, modifying the abilities to run queries and moving all the way through the BI stack. We're finally getting a Sequence Generator, Paging, and enhanced Error Handling.

FileTable, a whole new integration of FileStream technology, is being demoed next. This should be good too. The key takeaway is "Every Windows application that generates files can now store files within SQL Server without a single modification to the app." I'm not so sure this is a good thing, and what about SharePoint? Still, the technology is cool and I'm geek enough that I'm going to enjoy it. So, to a degree, this works like FileStream, but it's file management through the database. The demo showed a set of files getting inserted into SQL Server through a command prompt. Oooh… That's cool. The demo is impressive. You can update the documents from the file system or from the database. That's pretty neat. I'm just not sure exactly where this goes within the enterprise. I'll have to read some more about it.

The next set of functionality is Project Juneau. I've heard a lot about this. It's likely to hurt some of the 3rd party tools. We went right to the demo this time. Thanks. We're in the VS 2010 shell now, along with BIDS and everything else. They're not retiring SSMS, but it's clear that it's on the way out; it must be. I like the improved T-SQL completion. The table designer is good too, because you can sync the visuals and the T-SQL as you create the table. That's great! I think I said this yesterday, but there are a lot of people who will not enjoy moving to Visual Studio. I'm a fan, but others will not like it. Still, it looks good. It's working better than it ever did, and that's a good thing.

Permalink 3 Comments

PASS Summit: Day 1 Keynote, Part 3

November 9, 2010 at 12:51 pm (PASS)

Ted Kummert is still talking.

For the cloud, of course, they're talking about SQL Azure. Microsoft really is throwing themselves into the cloud, completely. The emphasis is that they offer both a cloud and an on-premises solution. I don't mind saying, I'm still trying to get the full business proposition for an old-school, fat business like the one I work for. What should we be doing with the cloud? I just haven't seen the magic. I see where smaller businesses, or start-ups, or businesses that need temporary surge capacity can use the cloud, but… for traditional work, it just doesn't seem to jibe yet.

We're going to see some made-up scenarios for how Azure can manage Contoso Bikes. He shows how a report can pull data from the cloud and be deployed from the cloud, in order to deliver to people on the road. But we can do that already in other ways. The ability to link your data with the Data Market data is pretty cool. I can see that being useful. You will have to purchase access to these data sets. You can query against them, but, similar to the PDW demo, we're not in SSMS any more. I wonder what Microsoft's long term plans are for SSMS, based on the ways we're seeing it being bypassed.

What's next for SQL Server? Denali. The CTP is getting handed out tomorrow after the keynote. We'll be seeing the demo on Denali tomorrow. The idea that Mr. Kummert is communicating is that Denali represents client requests. The targets are Mission Critical, IT Pro & Developer Productivity, and Pervasive Insight. They've focused on manageability and upgrade capabilities. That should be good. They're going to work on performance, which is interesting. They're unifying the experience into Visual Studio… I'm OK with this, but I know that a LOT of DBAs are not OK with this. It'll be interesting to see how it breaks out. Denali is the largest release of Integration Services ever. Full life cycle development on SSIS. That will be good. They're also talking about expansion on the PowerPivot type of work. Project Crescent is a new reporting tool that is coming out with Denali, a new way of showing business information. Sounds good. Finally, a demo. We're seeing the 100 million row demo, again. I'd like to see the new stuff, please. So, they pulled the data out of Excel and directly into Analysis Services. That's good. Showing how it works within VS, which gives you source control, etc., and then you also get to use the server, which gets past the memory limits within PowerPivot. And he's showing it handling over 2 billion rows. This is a great demo. We're seeing a trillion rows per minute, filtered & reported on. It's very slick. This is good. The same technology is also in the database engine. We're seeing fantastic performance. I might be out of a job. It's based on the columnar data store technology. It's a very good thing.

Come back for more tomorrow!

Permalink 3 Comments

PASS Summit: Day 1 Keynote, Part 2

November 9, 2010 at 12:21 pm (PASS)

Mark Souza from the SQL CAT Team, some of the smartest and most capable MS consultants in SQL Server, is presenting how his team is offering a health check for people's SQL Server systems.

They're actually going to be using some technology for a little event called SQL PASS It On, using Twitter. Twitter is becoming more and more of a major part of the event. If you're not at least monitoring Twitter, you're missing out.

It’s a busy day with the SQL Clinic, the Exhibit Hall, Community Learning Center, Birds of a Feather Lunch, Regional Mentors, Book Signing and Exhibitor Reception. That’s not mentioning all the sessions.

The keynotes will be Ted Kummert today, Quentin Clark tomorrow, and David DeWitt (YAY!) on Thursday, where he will talk about query optimization. I will be taking notes!

We’re seeing a history of how Microsoft split the code from Sybase for the SQL Server 7.0 release. They built a brand-new database platform in 2.5 years. That’s pretty amazing.

They started off with SQL Server 7.0 for ease of use. Ted Kummert is emphasizing how important Total Cost of Ownership is to Microsoft and their plans. He’s also talking about how important it is that SQL Server is integrated, including Analysis Services and Cloud. His final focus is on large scale, high availability systems. This is the history of what they’ve built. Now, he’s going to focus on the future, starting with mission critical, then covering the cloud, and finally what is going to happen with SQL Server Next.

For mission critical, they're releasing the Parallel Data Warehouse, which will allow for 100s of terabytes in what is basically an appliance. That's right, a toaster for SQL Server. Seriously, this is a big deal. The demo is already fascinating. He's showing how you create tables with the distribution and partitioning in place. And it comes with a special PDW loader, which will load up to 1 TB an hour of data. It can even be integrated with SSIS. This is pretty amazing. On the tweet stream I saw Michelle Ufford mention that she's looking at it for GoDaddy, so this is viable. They then showed how they could move 800 billion (yes, that is a "b") rows into the system in 19 seconds. Interesting point from Brent Ozar: what they were doing was not in SSMS. Paulo Resende from Bank of America came out to give a customer testimonial on how they implemented PDW. Now Dave Mariani of Yahoo is giving another testimonial on how they manage user data & analytics for… well… spam. They're running through 1.2 TB a day and 50 GB an hour… uh… WOW! The fascinating thing is, they're moving that data into a cube for the queries and are able to pull out data in less than 10 seconds. That's great. Microsoft is also announcing "Atlanta," which is a service that assesses the configuration of your 2008 and 2008 R2 systems, through the cloud. Bob Ward (cool!) is out to show how Atlanta works. This is extremely cool stuff. I'd like to think that I keep most of my servers up to date, but a service like this could still be extremely useful.

Permalink 1 Comment

PASS Summit: Day 1 Keynote, Part 1

November 9, 2010 at 11:35 am (PASS)

Sitting at the big kids table at the PASS Summit, ready to rock and roll. The Summit has not officially started yet, but it’s been a fantastic ride already. I’m getting to meet a bunch of great and amazing people. I made my very first trip out to the Microsoft campus yesterday. Last night was the SQL Server Central party. This is just a great organization and a great event.

Right at the start, the tweeting is hot & heavy. Hmmm… OK, starting off with a Tina Turner impersonator. She’s extremely good, but I have to ask, what were they thinking? Her name is Truly Tina. She was outstanding. Just a bit odd.

Rushabh Mehta is introducing the PASS organization. He's showing off the Board of Directors and the executive committee. He's also showing what else PASS has besides the Summit, which includes 24 Hours of PASS, SQL Saturday and the European Summit. The organization also includes the chapters and the virtual chapters. The organization reaches thousands of people around the world through all these events and organizations. The goal this year is to try to get to 250,000 members.

This year the summit has 3807 registrations from 48 countries. The keynote is streaming live, as well as 40 people blogging and tweeting away. If you want to follow the tweets, make sure you use the hash tag #sqlpass. There are 191 speakers with 44 of them MVPs.

Permalink Leave a Comment

Kilt Day

November 3, 2010 at 8:00 am (PASS, SQLServerPedia Syndication)

A week from now will be Kilt Day at the PASS Summit. It's probably way too late to order a kilt at this point. But don't despair. You can still take part. Just a short walk from the Summit is the headquarters of Utilikilt. These are not classic tartan wraps with sporrans and socks. They're the modern equivalent, come in fun fabrics & colors, and are actually pretty practical. So if you still want to participate in Kilt Day, and we'd love to have you, plan a trip to Utilikilt.

And no, they’re not sponsoring me or anything (more’s the pity). I just like them.

Permalink Leave a Comment

PASS Summit Blogging

October 28, 2010 at 1:46 pm (PASS, SQLServerPedia Syndication)

During the PASS Summit I have again been given the opportunity to keep my laptop plugged in… as long as I blog about the keynotes. So I'm going to do it; power is hard to come by in that place. Once again I can regale you, in near real time, with what's occurring in the keynote addresses at the PASS Summit. Once more I'll have the opportunity to jump onto the table while wearing a kilt.

But this year, you may not want to read me. Instead, you might want to tune into the keynotes yourself. PASS is going to transmit them live. You can go to this link to watch them. Now, I can hear you, literally, thinking to yourself, "Right, just what I need in my life, to listen to some sales hack tell me about some semi-functional bit of software." Most of the time, you'd be right. But this is PASS. We don't just listen to sales hacks stumbling through presentations. We're getting to learn from Dr. DeWitt again this year. I'm jazzed and you should be too. Dr. DeWitt's presentation last year was simply amazing. In terms of sheer geek fun, it's hard to beat. This year should be as good, or better.

I’ll also be tweeting all week. Follow hash tag #sqlpass to find out what’s happening from me and all the other Twitterati.

Permalink 2 Comments

New Book on Query Optimizer

October 26, 2010 at 10:00 am (Uncategorized)

Benjamin Nevarez (blog) has been working really hard on a book on the Query Optimizer. It just got finished in time so that there will be copies available at the PASS Summit. I strongly recommend you track it down. There’s a lot to learn between the covers. How do I know since the book just got finished and isn’t in anyone’s hands yet? Because I’ve been watching it get built. I put in my small efforts as the technical editor. I feel bad about that because I’ve always learned from my technical editors and I’m fairly sure I learned more from Benjamin than he did from me. Get a copy of the book. You won’t be disappointed.

Permalink 1 Comment

Who You Learn From

October 25, 2010 at 9:01 am (PASS)

Less than two weeks to go until the PASS Summit. I'm excited. I've managed to cram a ton of activities into this Summit, more than ever before. But I'm still going to try to go to a few sessions. The question was asked: which sessions are you going to? Who do you want to learn from? Who can you learn from?

I've got a pretty simple answer. Everybody. There's not a single person that I work with on my current team that I haven't learned something from. Sure, there are those who teach you tons and tons. For example, we have a fantastic SSIS guy on our team who has taught me quite a lot, faster than I could have picked it up on my own.

So, you're going to the PASS Summit. Is your plan to hit just the big-name people? If so, you're messing up. You can learn from everybody. I'm not saying don't go to the big-name sessions, heck I will, but I'm saying you need to look around at more than just names. Now, that said, before I tell you the people whose sessions I'm going to, I want to give you one important piece of advice. If you go into a session and within 5-10 minutes you can tell that session isn't for you, get up & leave. Go to another one, or start chatting people up out in the hallway or down at the PASS booth. Don't waste your time.

I already listed a number of sessions that I thought were must-sees. Unfortunately, I won't be making it to many of them. I'm pretty busy, presenting on Tuesday & Wednesday and at one of the Lightning Rounds on Thursday. I'm also going to work the Ask the Experts area for the first time ever (please, don't stop by to play "Stump the Chump." I know you guys know more than I do. I'm just trying to help) Thursday afternoon. Here are some other sessions that should have made my list, that I plan on attending.

Tuesday afternoon I'm absolutely going to make it to Aaron Nelson's (blog|twitter) session on PowerShell, The Dirty Dozen. I saw him present at SQL Saturday in Raleigh. This guy is good. You may not know his name, but I promise, if you're getting started in PowerShell, or even if you've been working with it for a while, you're going to learn from him. I'll probably hit a couple of other sessions on Tuesday too.

On Wednesday afternoon there's a total embarrassment of riches. I want to go to four different sessions right after lunch. I'm leaning towards the one on Professional Development Plans, but I'm not sure I want to miss the one on Troubleshooting SSRS Performance or the Incredible Shrinking Execution Plan. After that, probably, because of a new emphasis on SSRS where I work, Cooking with SSRS. The last session of the day is easy: Kimberly Tripp's (blog) Tales From the Trenches.

Thursday morning is open, assuming I'm still on my feet. I'll probably hit DBA MythBusters. That's also assuming that after listening to Dr. DeWitt my brain isn't completely stuffed full. If you only make one keynote, make it Thursday morning's.

This is going to be an excellent summit. For the names I left out, for the sessions I didn’t mention, I could just list the entire summit schedule and tell you to go to all of them. I’d be willing to bet there are very few, if any, that you won’t learn from. Like I said, everyone can teach you something. Figure out which ones are best for you and go to them. See you there.

Permalink Leave a Comment

Kilt Day at the PASS Summit

October 20, 2010 at 4:52 pm (PASS)

Last year, with the infinite power at my disposal (read, zero), I declared Wednesday Kilt Wednesday at the PASS Summit. It took off… a little ways. Three people wore kilts. Now, you'd think that three out of 3000 would almost not get noticed, but the three people wearing them… well, each for different reasons, we stand out in a crowd. Heck, I was even told one of us looked good in the kilt (wasn't me, of course). Anyway, where was I? Oh yeah, we were noticed (and it might be because I jumped up on the bloggers' table during one of the keynotes…) and now, this year, LOTS of people are planning on wearing kilts on Wednesday, November 10th, 2010.

If you don’t have a kilt, don’t panic. You can always run down the street to Utilikilt, who has their headquarters right there in Seattle. There are lots of other sources. You don’t want to miss out, this year. It’s going to be fun. Follow the hashtag #sqlkilt on Twitter to keep up to date on what’s happening.

Also, Wednesday is the Women In Technology lunch. So, if you want to get extremely creative and supportive of WIT, you should track down Jenn McCown (blog|twitter) and get one of her cool t-shirts.

Permalink 4 Comments
