PASS Summit 2010, Day 3 Keynote

November 11, 2010 at 11:48 am (PASS, SQLServerPedia Syndication)

Today is Dr. DeWitt.

The ballroom, where the keynotes are held, is filled with extra chairs. The Summit organizers expect extra attendance today, and well they should. Dr. DeWitt was amazing last year. I suspect this year will be more of the same.

Rick Heiges is introducing the day (while we wait for Dr. DeWitt). Lynda Rab is leaving the board. Sad. I started volunteering for the PASS organization working for Lynda. She's great. The new board members are Douglas McDowell, Andy Warren and Allen Kinsel.

The spring SQL Rally event was announced. I'll be presenting a full-day session on query performance, "Query Performance Tuning, Start to Finish." Look for (a lot) more blog posts on this. The Summit next year has been moved to mid-October. WHOOP! This is great because I was going to miss it next year. Oct 11-15 will be the dates in 2011. Of course, it'll be in Seattle.

Dr. DeWitt is finally on stage. From this point forward, I'll just be posting his words & some comments. This is my best attempt to capture the information. There will be typos.

Query optimization is a really hard problem. Dr. DeWitt says, "I'm running out of ideas." Yeah, right. His "Impress Index" is basically an arrow going down. He's cracking jokes about his delivery, asking how he can possibly impress us. He's showing this strange picture made of 240 separate colors, each representing an execution plan in the optimizer. We'll be back to that. This session's topic was voted on; I'm glad optimization won. Talking about the optimizer developers, he says they live in fear of regressions.

The 100,000 foot view: magic happens. He's working off of TPC-H benchmark query 8. There are 22 million ways of executing this query, and the optimizer has only a few seconds to pick the correct plan from that full set. It's still possible to pick bad plans. Cost-based optimization came from System R and Pat Selinger at IBM. After 30 years, optimization is still the hardest part of building a DBMS, and the situation is further complicated by advances in hardware and functionality within the DBMS.

The goal of the optimizer is to transform SQL queries into an efficient execution plan. The parser turns out a logical operator tree, which goes to the optimizer, and a physical operator tree is sent to the execution engine. He's showing a simple table, based on movie reviews. The query is a SELECT with an AVG. Two possible plans. In plan 1, a scan occurs first, then a filter is applied to pull out the right movie, and then an aggregate occurs. With this you get a table scan, meaning the I/O corresponds to the number of pages in the table. Plan 2 uses an index to pull pages from the non-clustered index. This means random disk access to look up the movies, which are then passed on to the aggregate. The optimizer then has to figure out which is faster. It estimates the cost based on the statistics it has in hand. It has to estimate how many rows match the movie, so it estimates the selectivity of the predicate, then it calculates the cost of the plans in terms of CPU and I/O time.

So there are equivalence rules for the select & join operators. Join operators are associative and commutative, meaning the optimizer can rearrange the order in which tables are joined without changing the results. The select operator distributes over joins. So there are multiple ways of getting back the same information, all evaluated by the optimizer.

With a more complicated query, it could start with a selection on customers, then a selection on reviews, join them together, then join to the movies table, and then project out the columns wanted. But with the equivalence rules, you can get other plans: the "selects distribute over joins" rule gets a different plan, and the "selects commute" rule can change the plan again. He showed five different plans, then four more plans, & said he could have done another 20. For this simple query, he came up with 9 logically equivalent plans. All nine will produce the same data. For each of the 9 plans there is a large number of alternative physical plans that the optimizer can choose.
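Just to make that rule concrete, here's a minimal sketch of "selects distribute over joins" in Python. The tuple-based plan trees, table names, and predicate are all mine, invented for illustration; this is not how any real optimizer represents plans.

```python
# Toy plan trees as nested tuples: ("table", name),
# ("select", predicate, child), or ("join", left, right).

def tables(plan):
    """Return the set of base tables under a plan node."""
    if plan[0] == "table":
        return {plan[1]}
    if plan[0] == "select":
        return tables(plan[2])
    return tables(plan[1]) | tables(plan[2])  # join node

def push_select(pred_table, pred, plan):
    """Rewrite select(p, join(L, R)) as join(select(p, L), R), or the
    mirror image, when the predicate only references one input."""
    if plan[0] != "join":
        return ("select", pred, plan)
    _, left, right = plan
    if pred_table in tables(left):
        return ("join", push_select(pred_table, pred, left), right)
    if pred_table in tables(right):
        return ("join", left, push_select(pred_table, pred, right))
    return ("select", pred, plan)

plan = ("join", ("table", "customers"), ("table", "reviews"))
print(push_select("customers", "city = 'Boston'", plan))
# ('join', ('select', "city = 'Boston'", ('table', 'customers')),
#  ('table', 'reviews'))
```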

Assume the optimizer has three join strategies: nested loops, sort-merge & hash. He's also assuming two selection strategies: sequential scan or index scan. Obviously, this is simplified. So, using these three join & two select methods, there are 36 possible physical alternatives for one logical plan. And with 9 logical plans there are 9 * 36 = 324 possible physical plans. That's for a VERY simple query.
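The arithmetic is easy to sanity-check. A back-of-the-envelope sketch using the talk's simplifying assumptions (two joins and two selections, per the example above):

```python
JOIN_STRATEGIES = 3  # nested loops, sort-merge, hash
ACCESS_PATHS = 2     # sequential scan, index scan

def physical_per_logical(num_joins, num_selections):
    """Each join independently picks a strategy; each selection an access path."""
    return JOIN_STRATEGIES ** num_joins * ACCESS_PATHS ** num_selections

per_logical = physical_per_logical(num_joins=2, num_selections=2)
print(per_logical)      # 36
print(9 * per_logical)  # 324 physical plans across the 9 logical plans
```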

Selectivity estimation is the task of estimating how many rows will satisfy a predicate like MovieID = 932. Plan quality is highly dependent on the quality of the estimates the optimizer makes.

I just sent in a question.

So, a histogram describes the distribution of the data within the table. There isn't enough space within the db to store detailed statistical info about every value, so histograms are the compromise, and you can build different kinds. The equi-width histogram divides the rows into equal-sized buckets and then figures out how many values match each range of values. So, for one value the actual selectivity might be .059 while the estimate is .050. That's extremely close. But another value he shows has an actual selectivity of .011 while the histogram says .082, which is a HUGE error. Hello, bad execution plan.

Another approach is equi-height histograms. These divide the ranges so that all buckets contain roughly the same number of rows, as opposed to an equal distribution of values. In equi-height, the second example comes out at .033 instead of .082, which is pretty good but still skewed. (The first example comes out at .167.) He's basically showing that errors can be introduced all over the place.
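Here's a minimal sketch of how both estimates fall out of a uniformity assumption inside each bucket. The bucket layouts and counts below are invented; the actual slide data isn't in my notes.

```python
def equi_width_selectivity(buckets, total_rows, value):
    """buckets: (lo, hi, row_count) triples over equal-width value ranges.
    Rows are assumed to spread evenly over the distinct values in a
    bucket -- the assumption that blows up on skewed data."""
    for lo, hi, rows in buckets:
        if lo <= value <= hi:
            return (rows / (hi - lo + 1)) / total_rows
    return 0.0

def equi_height_selectivity(buckets, total_rows, value):
    """buckets: (lo, hi, distinct_count) triples; by construction each
    bucket holds roughly total_rows / len(buckets) rows."""
    rows_per_bucket = total_rows / len(buckets)
    for lo, hi, distinct in buckets:
        if lo <= value <= hi:
            return (rows_per_bucket / distinct) / total_rows
    return 0.0

# Invented example: 10,000 rows over MovieID values 1..100, with a hot
# bucket in the middle that equi-width averages away.
width = [(1, 25, 1000), (26, 50, 6000), (51, 75, 2000), (76, 100, 1000)]
print(equi_width_selectivity(width, 10_000, 42))  # 0.024 for anything in 26..50
```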

Histograms are the critical tool for estimating selectivity factors for selection predicates. But errors still occur. The deal is, there's just a limited amount of space for these. Other statistics include row counts, page counts, etc.

To estimate costs, the optimizer considers I/O time and CPU time. The actual values are highly dependent on the CPU and I/O subsystem on which the query will be run. For a parallel database system, such as PDW (plug!), the problem also includes network traffic. So, back to the two alternative physical plans… You have to determine which plan is cheaper. Assuming that the optimizer gets it right, we know that 100 rows qualify out of 100k pages. The rows are sorted on date, but we're going after a MovieID, so random reads. The optimizer doesn't know what system it's on, but it guesses that a scan will take 8 seconds. The filter works at .1 microseconds/row, and the aggregate at .1 microseconds/row (about .00001 seconds for the 100 rows), for a total of about 9 seconds. Plan two will use the index. Since the rows are sorted on date, random seeks are going to occur, at .003 seconds per seek, for a total of about .3 seconds, plus the same tiny aggregate cost. This makes plan two the winner.
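Here's that arithmetic as a sketch. One hedge: the table's total row count isn't in my notes, so the 10-million-row figure below is my assumption, picked because it makes the filter cost land near the quoted 9-second total.

```python
MATCHING_ROWS = 100          # rows satisfying the predicate (slide figure)
TOTAL_ROWS    = 10_000_000   # assumed -- not in my notes
SCAN_SECONDS  = 8.0          # guessed cost of scanning 100k pages (slide figure)
CPU_PER_ROW   = 0.1e-6       # 0.1 microseconds per row, filter or aggregate
SEEK_SECONDS  = 0.003        # one random seek per matching row

# Plan 1: scan every page, filter every row, aggregate the survivors.
plan1 = SCAN_SECONDS + TOTAL_ROWS * CPU_PER_ROW + MATCHING_ROWS * CPU_PER_ROW
# Plan 2: one random index seek per matching row, then the same aggregate.
plan2 = MATCHING_ROWS * SEEK_SECONDS + MATCHING_ROWS * CPU_PER_ROW

print(f"plan 1 (scan + filter): {plan1:.2f} s")  # ~9.00 s
print(f"plan 2 (index seeks):   {plan2:.2f} s")  # ~0.30 s
```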

But what if the estimates are wrong? On a log plot, you start to see how each plan performs as the number of rows returned changes: more rows make plan 1 the better choice, fewer rows favor plan 2.

That was just to get the data out of a table. Adding in JOIN costs, things get worse. The first example is a sort-merge join: sort each data set being returned, then merge the results with a simple scan. The I/O cost is 5R + 5M. A nested loop join scans one table and, row by row, scans the other table. Its cost is R + R * M, where R is in rows and M is in pages.

In the example, you can see that with an index in place and a highly selective predicate, loop joins can be cheap. But it's the cardinalities that affect things, so getting the histogram right is the key trick. With a log plot, again, you see how the various join operations vary with the number of rows. A sort-merge is comparatively expensive at a low number of rows, but at a large number of rows it still returns in about the same amount of time; so as large sets of data are accessed, merge gets good. At lower numbers of rows, the nested loop works better. If the cardinality estimate is off, you can get a huge error in performance, especially at the larger sets of data. The optimizer has to pick the right join method, based on the number of rows in each set of data being joined.
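A sketch of that crossover using the formulas above, with an index on the inner table. The 3-I/Os-per-lookup, 100-rows-per-page, and inner-table-size figures are mine, picked just to make the shape visible; they aren't the slide's numbers.

```python
def sort_merge_io(r_pages, m_pages):
    """Sort both inputs and merge: roughly 5(R + M) page I/Os (the talk's figure)."""
    return 5 * (r_pages + m_pages)

def indexed_nested_loop_io(r_pages, r_rows, lookup_ios=3):
    """Scan the outer input, then one index lookup per outer row.
    3 I/Os per lookup (root, intermediate, leaf) is my assumption."""
    return r_pages + r_rows * lookup_ios

M_PAGES = 1_000  # inner table size in pages, invented
for r_rows in (10, 1_000, 100_000):
    r_pages = max(1, r_rows // 100)  # assume ~100 rows per page
    nl = indexed_nested_loop_io(r_pages, r_rows)
    sm = sort_merge_io(r_pages, M_PAGES)
    print(f"{r_rows:>7} outer rows: nested loop {nl:>7} I/Os, sort-merge {sm:>6} I/Os")
# At 10 rows the loop join wins by orders of magnitude; at 100,000 rows
# the sort-merge wins -- get the cardinality wrong and you pay either way.
```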

He then moves on to talk about how much space these things take up. The space depends on the "shape" of the query. He shows a type called a "star" join and a type called a "chain" join. Whoa! As you increase the number of tables, the number of likely plans increases a lot. I knew this, but I haven't seen it written down like this. But these shapes are the extremes.

Every query optimizer starts off with left-deep plans first, instead of bushy plans. For the example, a bushy tree would have 645k equivalents for the star join, as opposed to 10k for left-deep plans. With 3 join methods and n joins in a query, there will be 3 to the power of n possible physical plans for each join order. Uh… wow. So instead of brute force, the optimizer uses dynamic programming, and sometimes the heuristics will cause the best plan to be missed.
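For a feel for those numbers, here's a sketch that counts raw join orders and tree shapes. It deliberately ignores which orders the query's join graph actually permits (the reason the star and chain shapes differ), so it won't reproduce the 645k/10k slide figures, just the explosion.

```python
from math import factorial

def left_deep_orders(n):
    """n! ways to arrange n relations along a left-deep chain."""
    return factorial(n)

def bushy_plans(n):
    """Catalan(n-1) binary tree shapes, times n! leaf orderings."""
    catalan = factorial(2 * (n - 1)) // (factorial(n - 1) * factorial(n))
    return catalan * factorial(n)

def with_join_methods(orders, num_joins):
    """3 join methods chosen independently at each join: 3^n per order."""
    return orders * 3 ** num_joins

for n in (4, 6, 8):
    print(n, left_deep_orders(n), bushy_plans(n),
          with_join_methods(left_deep_orders(n), n - 1))
# 8 relations: 40,320 left-deep orders, ~17.3M bushy plans, and
# 40,320 * 3^7 = ~88M physical left-deep plans.
```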

One method of optimization is bottom-up. Optimization is performed in N passes (if N relations are joined). Pass 1: find the best 1-relation plan for each relation. Pass 2: find the best way to join the result of each 1-relation plan to another relation, generating all 2-relation plans. Pass N: find the best way to join… (can't see the rest of the slide). It keeps the lowest-cost plans & plans with interesting row orders. In spite of pruning the plan space, this approach is still exponential in the number of tables. Costs are computed, then pruning occurs. I've stopped taking notes on this part. You'll have to see how this works in the slide deck (I'll post the location at the end).
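Since I bailed on the notes, here's a heavily simplified, runnable sketch of the bottom-up (System R / Selinger-style) idea, restricted to left-deep plans. The cardinalities, the flat join selectivity, and the cost formula are toys I made up; interesting orders and real pruning are elided.

```python
from itertools import combinations

CARD = {"customers": 10_000, "reviews": 500_000, "movies": 5_000}  # invented
SELECTIVITY = 1e-4  # invented flat join selectivity

def selinger(relations):
    """best[S] = (plan, est. rows out, est. cost) for each relation set S,
    built by joining one more relation onto the best smaller plans."""
    best = {frozenset([r]): (r, CARD[r], CARD[r]) for r in relations}
    for size in range(2, len(relations) + 1):
        for subset in map(frozenset, combinations(relations, size)):
            for r in subset:  # r joins last; the rest use their best plan
                plan, rows, cost = best[subset - {r}]
                out = rows * CARD[r] * SELECTIVITY    # toy cardinality estimate
                total = cost + rows * CARD[r] * 1e-3  # toy join cost
                if subset not in best or total < best[subset][2]:
                    best[subset] = (f"({plan} JOIN {r})", out, total)
    return best[frozenset(relations)]

print(selinger(["customers", "reviews", "movies"]))
```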

So that's the theory. But the problem is, bad plans can still be picked: statistics can be missing or out of date, cardinality estimates run up against skewed data, attribute values are correlated, and regressions and hardware changes mess stuff up.

Opportunities to improve: Jayant Haritsa has the Picasso project. Bing this: Picasso Haritsa. There's actual software there to play with. He's back to TPC-H query 8, and using the tool, he shows the plan space for the query; this is the painting of the cool picture from the start of the talk. With this, you can see how sensitive the plan choice is to the input parameters. So the cardinality estimates are the key.

An animation shows how the estimated cost for a query starts low, peaks, and then, instead of continuing up, goes back down. And the optimizer team doesn't know why. This is his example of how QO is, indeed, harder than rocket science.

What can be done better? Well, indexed nested loops look good, but they're not stable across the range of selectivity factors. If the optimizer went conservative and always picked sort-merge, it would be more stable; picking slower operations can make things more stable, just slower. Robustness is tied to the number of plans, and he says this is something the QO teams don't yet fully understand.

At QO time, have the QO annotate compiled query plans with its statistics and insert check operators; then you can see how well the estimates held up. They use this in two ways: a learning optimizer and dynamic re-optimization. In the learning optimizer, the observed statistics go back to a statistics tracker and feed into the catalog, so the next query gets a better plan. Dynamic re-optimization takes the idea that the actual statistics may not match the estimated statistics: when there are differences, it pauses execution, writes the intermediate results out to tempdb, and then re-optimizes the rest of the query using the real values. Cool!
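As a minimal sketch of that feedback loop as I understood it (the class name, the 20% error threshold, and the whole API are invented for illustration, not anything SQL Server or the research prototypes actually expose):

```python
class StatsTracker:
    """Collect estimate-vs-actual row counts from check operators and feed
    corrected selectivities back to the next optimization pass."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold   # invented: only record big misses
        self.corrections = {}        # predicate -> observed selectivity

    def observe(self, predicate, estimated, actual, input_rows):
        error = abs(actual - estimated) / max(estimated, 1)
        if error > self.threshold:
            self.corrections[predicate] = actual / max(input_rows, 1)

    def selectivity(self, predicate, default):
        """Consulted by the optimizer instead of trusting the histogram alone."""
        return self.corrections.get(predicate, default)

# Reusing the histogram example's numbers: estimated .082, actual .011.
tracker = StatsTracker()
tracker.observe("MovieID = 932", estimated=8_200, actual=1_100, input_rows=100_000)
print(tracker.selectivity("MovieID = 932", default=0.082))  # 0.011
```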

Key points: query optimization is harder than rocket science. There are three phases of QO: enumeration of the logical plan space, enumeration of the alternative physical plans, and selectivity estimation. The QO team of every DB vendor lives in fear of regressions, but regressions are going to happen, so cut the optimizer some slack.

The slides are available from the "Microsoft Jim Gray Systems Lab" page on Facebook.


PASS Summit: Day 2 Keynote

November 10, 2010 at 12:26 pm (PASS, Uncategorized)

Today is Kilt Day at the PASS Summit. We’re going to try to arrange a group photo at lunch time.

The network connection is extremely slow. I suspect the tweeting about the kilts.

Bill Graziano is leading the keynote, and he started off by having all the kilted stand. Only about 12-15 of us, but that's five times better than last year. Then it was time for the volunteers to stand up; it was excellent to see so many people. The Outstanding Volunteer of the Year was Lorie Edwards. The PASSion Award went to Wendy Pastrick, who really earned it.

Unfortunately the next segment was on governance… blech! But necessary. Everyone here is a member, so they should know how the money is spent. Luckily Bill is not digging in a lot. He’s covering the things he has to. Yes, it’s a boring topic, but this is a not-for-profit organization and it needs to be transparent. I’ve always been happy to see the numbers, even when it bored the heck out of me.

An Xbox Kinect was given out to a lucky winner. Cool! I was too busy yesterday to take advantage of the contests… ah well.

Today is also the Women In Technology Luncheon.

The first speaker of the day is Quentin Clark of Microsoft. Mr. Clark is introducing Denali. Today we should get some meat. The goal: shifting user expectations and shifting business expectations. Sadly, I was extremely excited about this presentation, but instead of getting into the product, we got quite a lot of sales pitch. I do want to see what they think is the most important functionality, but I want to see it, not hear about it. That's important. I think vendors frequently don't think about the audience. The Twitter stream started to get pretty abusive, just like last year during the "I can't mention the major hardware vendor that supports PASS because we really appreciate it" presentation.

Finally, after 40 years in the wilderness, we got a demo of SQL Server AlwaysOn. He started right in Management Studio, which is the first time I've seen it in the last two days during any of the Denali demos. That's an indication of something. This is pretty neat: automatic failover with multiple secondaries, so you can have more than one data center around the country and have synchronous data in multiple sites. THAT will be useful. And this without shared disk. Yes, you can still use shared disk, but now you don't have to. That's a huge improvement over what we've had in the past. And he got an ovation during the demo; when you have a collection of nerds this big clapping for you, you did something right. Thank you, Microsoft. The data syncs occur in near real time, behind the scenes, with HA setups that you can put together, for individual databases or groups of databases, in about five minutes. Huzzah! Oh, and the secondaries can be set to be readable, and you can move your backups to a secondary… WOW! Again, thank you, Microsoft.

The breakdown of the goals is the same as outlined yesterday, of course: Mission Critical, which they just showed, then IT Pro & Developer Productivity, and Pervasive Insight. Then Mr. Clark mentioned DAC and there was a low rumble around the blogger table; that is not a popular set of functionality. There are going to be enhancements to spatial support within Denali, modifying the ability to run queries and moving all the way through the BI stack. We're finally getting a sequence generator, paging, and enhanced error handling.

FileTable, a whole new integration of FileStream technology, is being demoed next. This should be good too. The key takeaway is: "Every Windows application that generates files can now store files within SQL Server without a single modification to the app." I'm not so sure this is a good thing, and what about SharePoint? Still, the technology is cool and I'm geek enough to enjoy it. So, to a degree, this works like FileStream, but with file management through the database. The demo showed a set of files getting inserted into SQL Server through a command prompt. Oooh… That's cool. The demo is impressive. You can update the documents from the file system or from the database. That's pretty neat. I'm just not sure exactly where this fits within the enterprise. I'll have to read some more about it.

The next set of functionality is Project Juneau. I've heard a lot about this; it's likely to hurt some of the 3rd-party tools. We went right to the demo this time. Thanks. We're in the VS 2010 shell now, along with BIDS and everything else. They're not retiring SSMS, but it's clear that it's on the way out, must be. I like the improved TSQL completion. The table designer is good too, because you can sync the visuals & TSQL as you create the table. That's great! I think I said this yesterday, but there are a lot of people who will not enjoy moving to Visual Studio. I'm a fan, but others will not like it. Still, it looks good. It's working better than it ever did, and that's a good thing.


PASS Summit: Day 1 Keynote, Part 3

November 9, 2010 at 12:51 pm (PASS)

Ted Kummert is still talking.

For the cloud, of course, they're talking about SQL Azure. Microsoft really is throwing themselves into the cloud, completely. The emphasis is that they offer both a cloud and an on-premises solution. I don't mind saying, I'm still trying to get the full business proposition for an old-school, fat business like the one I work for. What should we be doing with the cloud? I just haven't seen the magic. I see where smaller businesses, start-ups, or businesses needing temporary surge capacity can use the cloud, but for traditional work it just doesn't seem to jibe yet.

We're going to see some made-up scenarios for how Azure can manage Contoso Bikes. He shows how you can pull data from the cloud and deploy reports from the cloud, in order to deliver them to people on the road. But we can do that already in other ways. The ability to link your data with the Data Market data is pretty cool; I can see that being useful. You will have to purchase access to these data sets. You can query against them, but, similar to the PDW demo, we're not in SSMS any more. I wonder what Microsoft's long-term plans are for SSMS, based on the ways we're seeing it bypassed.

What's next for SQL Server? Denali. The CTP is getting handed out tomorrow after the keynote. We'll be seeing the demo of Denali tomorrow. The idea Mr. Kummert is communicating is that Denali represents client requests. The targets are Mission Critical, IT Pro & Developer Productivity, and Pervasive Insight. They've focused on manageability and upgrade capability. That should be good. They're going to work on performance, which is interesting. They're unifying the experience into Visual Studio… I'm OK with this, but I know that a LOT of DBAs are not. It'll be interesting to see how it breaks out. Denali is the largest release of Integration Services ever, with full life-cycle development on SSIS. That will be good. They're also talking about expanding on the PowerPivot type of work. Project Crescent, a new reporting tool coming out with Denali, is a new way of showing business information. Sounds good. Finally, a demo. We're seeing the 100-million-row demo again. I'd like to see the new stuff, please. So, they pulled the data out of Excel and directly into Analysis Services. That's good. He's showing how it works within VS, which gives you source control, etc., and then you also get to use the server, which beats the memory limits within PowerPivot. And he's showing it working over 2 billion rows. This is a great demo. We're seeing a trillion rows per minute, filtered & reported on. It's very slick. This is good. The same technology is also in the database engine. We're seeing fantastic performance. I might be out of a job. It's based on the columnar data store technology. It's a very good thing.

Come back for more tomorrow!


PASS Summit: Day 1 Keynote, Part 2

November 9, 2010 at 12:21 pm (PASS)

Mark Souza from the SQL CAT team, some of the smartest & most capable Microsoft consultants in SQL Server, is presenting how his team is offering a health check for people's SQL Server systems.

They're actually going to be using some technology to do this little event called SQL PASS It On, using Twitter. Twitter is becoming more and more of a major part of the event. If you're not at least monitoring Twitter, you're missing out.

It’s a busy day with the SQL Clinic, the Exhibit Hall, Community Learning Center, Birds of a Feather Lunch, Regional Mentors, Book Signing and Exhibitor Reception. That’s not mentioning all the sessions.

The keynotes will be Ted Kummert today, Quentin Clark tomorrow, and David DeWitt (YAY!) on Thursday, where he will talk about query optimization. I will be taking notes!

We’re seeing a history of how Microsoft split the code from Sybase for the SQL Server 7.0 release. They built a brand-new database platform in 2.5 years. That’s pretty amazing.

They started off with SQL Server 7.0 for ease of use. Ted Kummert is emphasizing how important Total Cost of Ownership is to Microsoft and their plans. He’s also talking about how important it is that SQL Server is integrated, including Analysis Services and Cloud. His final focus is on large scale, high availability systems. This is the history of what they’ve built. Now, he’s going to focus on the future, starting with mission critical, then covering the cloud, and finally what is going to happen with SQL Server Next.

For mission critical, they're releasing the Parallel Data Warehouse, which will allow for hundreds of terabytes in what is basically an appliance. That's right, a toaster for SQL Server. Seriously, this is a big deal. The demo is already fascinating. He's showing how you create tables with the distribution and partitioning in place. But it comes with a special PDW loader, which will load up to 1 TB of data an hour. It can even be integrated with SSIS. This is pretty amazing. On the tweet stream I saw Michelle Ufford mention that she's looking at it for GoDaddy, so this is viable. They then showed how they could move 800 billion (yes, that is a "b") rows into the system in 19 seconds. Interesting point from Brent Ozar: what they were doing was not in SSMS. Paulo Resende from Bank of America came out to give a customer testimonial on how they implemented PDW. Then Dave Mariani of Yahoo gave another testimonial on how they manage user data & analytics for… well… spam. They're running through 1.2 TB a day and 50 GB an hour… uh… WOW! The fascinating thing is, they're moving that data into a cube for the queries and are able to pull out data in less than 10 seconds. That's great. Microsoft is also announcing "Atlanta," a service that assesses the configuration of your 2008 and 2008 R2 systems through the cloud. Bob Ward (cool!) is out to show how Atlanta works. This is extremely cool stuff. I'd like to think that I keep most of my servers up to date, but a service like this could still be extremely useful.


PASS Summit: Day 1 Keynote, Part 1

November 9, 2010 at 11:35 am (PASS)

Sitting at the big kids table at the PASS Summit, ready to rock and roll. The Summit has not officially started yet, but it’s been a fantastic ride already. I’m getting to meet a bunch of great and amazing people. I made my very first trip out to the Microsoft campus yesterday. Last night was the SQL Server Central party. This is just a great organization and a great event.

Right at the start, the tweeting is hot & heavy. Hmmm… OK, starting off with a Tina Turner impersonator. She’s extremely good, but I have to ask, what were they thinking? Her name is Truly Tina. She was outstanding. Just a bit odd.

Rushabh Mehta is introducing the PASS organization. He's showing off the Board of Directors and the Executive Committee. He's also showing what else PASS has besides the Summit, which includes 24 Hours of PASS, SQL Saturday, and the European Summit. The organization also includes the chapters and the virtual chapters. PASS reaches thousands of people around the world through all these events and organizations. The goal this year is to try to get to 250,000 members.

This year the Summit has 3,807 registrations from 48 countries. The keynote is streaming live, with 40 people blogging and tweeting away. If you want to follow the tweets, make sure you use the hash tag #sqlpass. There are 191 speakers, 44 of them MVPs.


Kilt Day at the PASS Summit

October 20, 2010 at 4:52 pm (PASS)

Last year, with the infinite power at my disposal (read: zero), I declared Wednesday Kilt Wednesday at the PASS Summit. It took off… a little ways. Three people wore kilts. Now, you'd think that three out of 3000 would almost not get noticed, but the three people wearing them… well, each for different reasons, we stand out in a crowd. Heck, I was even told one of us looked good in the kilt (wasn't me, of course). Anyway, where was I? Oh yeah, we were noticed (and it might be because I jumped up on the bloggers' table during one of the keynotes…) and now, this year, LOTS of people are planning on wearing kilts on Wednesday, November 10th, 2010.

If you don't have a kilt, don't panic. You can always run down the street to Utilikilts, which has its headquarters right there in Seattle. There are lots of other sources too. You don't want to miss out this year. It's going to be fun. Follow the hashtag #sqlkilt on Twitter to keep up to date on what's happening.

Also, Wednesday is the Women In Technology lunch. So, if you want to get extremely creative and supportive of WIT, you should track down Jenn McCown (blog|twitter) and get one of her cool t-shirts.


Working the Door

October 14, 2010 at 8:00 am (PASS)

You know those guys that work the door at clubs, separating the wheat from the chaff, culling the herd, Choosing the Slain, sifting the gold from the dross, telling the difference between the sheep and the goats, winnowing out the weak, tipping the scales of justice… you know, the guys taking tickets. Well, this year, I get to do that job at the SQL Server Central party at the PASS Summit.

That’s right, I’ve been given absolute power, the keys to the kingdom, control of the list… you get the point. And best of all, while I was offered money, fame, power & women, I bargained for more and I got it. I’ll be wearing one of those nifty Hawaiian shirts (Friday shirts) we always see Steve & Andy sporting. HA! And my wife won’t let me negotiate with used car salesmen. Now I’ll finally get some respect.

Anyway, see you at the SQL Server Central party, opening night at the PASS Summit, right after the official Welcome shindig.

Remember, registration code SSC2010 or $30 at the door.


PASS Summit Birds of a Feather Lunch

October 13, 2010 at 11:18 am (PASS, SQLServerPedia Syndication)

The absolute biggest part of the PASS Summit is the one thing that most people don't take advantage of: networking. And no, I don't mean glad-handing everyone you meet, remembering all their names (although that is a good skill to have) and saying over & over again, "Rush Chairman, damn glad to meet you." I mean taking advantage of the fact that you can talk to people who have already solved the problem you're facing at work, or who just might have some insight into that issue; or maybe you can give them insight into a problem they're facing. I mean, talking to people.

Yeah, I know, we're all geeks, and worse than that, data geeks. That means we like to sit in dark little caves & grumble about our fellow man having WAY too much access to the data we've been sworn to protect. I'm with you. But you've made the decision to go to the PASS Summit. You're there. All over the place are your fellow data geeks. And look at that, some of them are talking to each other. You can too. In fact, you should.

So, how do you break the ice? Here’s a suggestion. When you go to lunch on Tuesday, look for the Birds of a Feather tables. Each one will have a different topic, hosted by someone who knows at least a little about that topic, or is just really excited about discussing that topic. Sit down (you don’t need permission, it’s implied), introduce yourself and dive into the topic. Ask questions. Answer questions. At least say hi before you sit there and listen. You’re in. You’ve just made the leap. Welcome to networking. Now, find out where the party is for Tuesday night and you can do some more.

I’ll be hosting a table on the topic “T-SQL Tuning & Optimization.” If you’re interested in that topic, please, sit down & talk. Oh, I might be a minute or two late. I’m presenting right before lunch. Save me a chair, just in case.


24 Hours of PASS: Summit Preview

August 11, 2010 at 8:58 am (PASS, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Registration is open for the second 24 Hours of PASS this year. This one is going to be a preview of the Summit itself, so all the sessions are tied, in some manner, to sessions being given at the Summit. Here's a link to go and register.

I’m very excited to be able to say that I’ll be presenting in this 24HOP. One of my presentations at the Summit this year is Identifying and Fixing Performance Problems Using Execution Plans. It covers pretty much what it says, methods for fixing performance problems by exploring the information available within execution plans. But, how do you know you have a performance problem? That’s where my preview session comes in. Identifying Costly Queries will show you several ways to gather metrics on your system so that you can understand which queries are causing you the most pain. Once you know which queries need tuning, you can use execution plans to tune them. Whether you’ll be attending the PASS Summit or not, and whether or not you’ll go to my session once you’re there, I think this 24HOP session will be useful to help you understand where the pain points are within your own systems. I hope you’ll attend.

More importantly though, check out all the other great sessions. This is an excellent collection of presenters and presentations. For anyone who has ever said “PASS doesn’t do anything for me,” I want you especially to take a look at the amazing training opportunities being offered by PASS, for free. The volunteers that run PASS do amazing things and this is just one of them. Take advantage of this opportunity and, hopefully, recognize that PASS is doing things for you. This just barely scratches the surface of all that PASS offers.


PASS Summit 2009 Day 3

November 6, 2009 at 1:47 am (PASS, SQLServerPedia Syndication)

The day started off with a mixed bag. First we had an honestly tearful farewell, with Wayne Snyder saying goodbye to Kevin Kline, who is leaving the board for the first time since PASS was founded. This was followed by a painfully dull session with Dell, all about their commitment to bread & butter DBA concerns. That was followed by Dr. DeWitt doing a deep dive into the history and the future of computing, showing and teaching in ways that only the very best can achieve. It was a fantastic performance: entertaining, enlightening, amazing… just flat-out incredible. It's the kind of understanding you wish you could get about most things, most of the time. Unfortunately, it came to an end.

Today I finally got to hit a lot of sessions. First I saw Andrew Kelly give a session on "Capturing and Analyzing File & Wait Stats." It was great. Andrew Kelly is a good presenter and he knows this topic forwards and backwards, which makes it very easy to sit and learn from him. It's the kind of useful information you can really take advantage of in your job. For lunch I went to a book signing, only to find out that both my books were sold out. A few people, including @sqlbelle, stopped by to get books signed anyway; it was a real honor and a privilege. After that I went to two Buck Woody sessions, back-to-back. After the session yesterday I couldn't have missed them. The first session was on "SQL Server Automation on Steroids." The slide deck was laid out to look like a Zune. It was great stuff on fundamentals, like how to configure SQL Agent, and drill-downs on mechanisms for working with PowerShell, or POSH as Buck calls it. He showed several different scripts, and I'm pretty jazzed to continue my pursuit of POSH skills after his session and Allen White's earlier in the week. Yes, this sort of reinforcement, session on session, with different people giving different views of the same tools used in varying ways, is something you can only get at the PASS Summit. His second session was on "Performance Tuning with SQL Server 2008." While I didn't find it as technically useful as the previous two sessions I'd seen him do, it was every bit as entertaining and enlightening. He made my list of must-see presenters. I finished out the day, and the PASS Summit, at Gail Shaw's "Lies, damned lies and statistics." Gail presented fantastic information in her clear, informative style. If you needed to know something about statistics, she laid it out for you in this session. Things were a bit subdued, this being the end of the Summit (not counting the post-conference), but Gail got the audience up and awake with some great demos and explanations of how statistics work inside SQL Server.

After hours it was off to the Friends of Red Gate party. I'm a Friend of Red Gate because I sing the praises of their products, which are absolutely praiseworthy. But I'll tell you, I might be inspired to sing at least one praise more because of the meal we had: nice food at a nice restaurant with great, impassioned people, excited about what they do. It's hard to enjoy things more.

So that’s the end of the Summit proper for me. I’ll be staying in Seattle through Friday because of a series of events that Microsoft is holding, but I won’t be blogging about them here. This has been one of the best PASS conferences I’ve been to, out of the five that I’ve attended.

