I’m trying to come up with some ideas for an article (or six) on SQL Server 2008. I know I’m getting about 40 views a day on this blog. That must translate to about 20 users (the statistics on this site aren’t clear, so I’m guessing). Do any of you have any good questions about new functionality in SQL Server 2008? I’ve been thinking about looking at sparse columns and filtered indexes, or maybe the spatial data types. Those are the topics I’m most excited about as a database developer and designer. Looking at it as an admin, policy-based management, the Data Collector, and that sort of new functionality are very interesting. I’m not even sure what’s new in the BI space.
So, if you’re one of the twenty people reading this, leave a comment. What interests you? What concerns you? If you’re interested in something, I’m pretty sure more people are.
Peter Ward, the editor at SQL Server Performance, has published an article of mine on Breaking Down Complex Execution Plans. I go way beyond the blog entry below and show how the estimated costs in execution plans can mess you up, how to use the XML in execution plans to search them for costly operations or operations with mismatched estimated and actual rows, and some other tips and tricks. Hopefully it’s worth a read.
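To give a flavor of the XML-searching idea (this is my own minimal sketch, not the query from the article), you can shred cached plan XML with XQuery to surface the operators the optimizer thinks are most expensive. The DMVs and the showplan namespace below are real; note that cached plans only carry estimates, so comparing estimated to actual rows requires a saved actual plan instead.

```sql
-- Sketch: find the costliest relational operators in the plan cache.
-- Cached plans hold only estimated figures; use an actual .sqlplan
-- file's XML the same way to compare EstimateRows to ActualRows.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP 10
    op.value('@PhysicalOp', 'varchar(128)') AS PhysicalOp,
    op.value('@EstimateRows', 'float') AS EstimateRows,
    op.value('@EstimatedTotalSubtreeCost', 'float') AS EstSubtreeCost,
    qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS r (op)
ORDER BY op.value('@EstimatedTotalSubtreeCost', 'float') DESC;
```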
There aren’t any. Well, that’s not entirely true. There is the SQL Server Management Pack Guide from Microsoft, which gives you the basics. After that you have to fall back on generic documentation and help for the most part. If you haven’t already, get a copy of System Center Operations Manager 2007 Unleashed. It doesn’t cover the SQL Server Management Pack in any detail, but it gives you a lot of what you need to understand management packs in general. The next best source of information is the newsgroup microsoft.public.opsmgr.sql. From there, you have to go to System Center Forum. Watch for articles by Tom LaRock; he seems to know as much about it as anyone. He’s presenting at the PASS Summit this year, so you might want to attend. There are occasional articles and posts over at SQL Server Central. You might keep an eye on the MS Team Blog, but I don’t recall seeing much that was specific to the SQL Server Management Pack. That’s all I’ve got. If anyone has anything more, please post it in the comments and I’ll add updates as I get them. Specifically, I’d like to see some good resources on creating custom reports against Operations Manager data.
If you go around the various forums, you’ll see postings, including some I’ve put up, saying that using subqueries, especially correlated subqueries, in the SELECT list of a query is bad because it effectively acts as a cursor.
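For anyone unsure what I mean, here’s the pattern in question (a made-up example using the standard AdventureWorks sample tables). The inner query references the outer row, so logically it runs once per outer row, which is where the “it’s a cursor” argument comes from, though the optimizer can often transform it into a join:

```sql
-- Correlated subquery in the SELECT list: the inner query references
-- the outer row (c.CustomerID), so it is logically evaluated per row.
-- Assumes the AdventureWorks sample database.
SELECT  c.CustomerID,
        (SELECT COUNT(*)
         FROM   Sales.SalesOrderHeader AS soh
         WHERE  soh.CustomerID = c.CustomerID) AS OrderCount
FROM    Sales.Customer AS c;
```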
I got called on it. So I had to do a bit of research.
Books Online talks about it right off when describing subqueries. That’s not proof though.
I did some more looking around. Adam Machanic does a nice job of slicing up functions, which behave in basically the same way. But it’s not exactly the same situation.
I kept looking. This blog entry is just flat wrong, but the second comment points out the fallacy. Jeff Moden would also like this example since it shows the disparity between the actual cost of queries and the estimated cost used by the graphical execution plans to display results. But this isn’t in the SELECT statement. Moving on.
Of course. The real answers always come from the SOURCE: Itzik
The new functionality coming out with 2008 includes the Data Collector, which feeds the Performance Data Warehouse. I presented this topic at the SNESSUG Heroes Launch event. It’s pretty slick functionality. I posted about it once before. A new article by Derek Comingore is available now over at SQL Mag. It’s worth the read.
The beauty of the new tool is the fact that it’s not all that new. It’s doing all this work using tools that we know and are comfortable with. This makes it very easy to implement and maintain. I agree with Mr. Comingore and I hope they expand the client to cover SQL Server 2000 and 2005.
I finally finished my initial set of tests with the GDR CTP release. It’s great! They’ve solved so many of the deployment problems that we had been experiencing that it’s now hard to wait until they actually release the product. The bad news is, it completely changes my presentation at PASS. I’m still going to cover 2005/2008, but now I’m going to cover the GDR as well. I have one hour to hit both processes… That’s not going to be easy.
Anyway, they’ve broken everything down to either work locally only or as part of the larger project. It all gets checked into source control. It all comes back out. No manual processes (like checking in the .user file) necessary.
Thank you Gert Drapers, wherever you are.
Following on from my adventures in creating a multi-environment deployment process with the new version of Data Dude (DBPro, VSTS Database Edition, whatever we’re calling it this week).
I’ve created a new configuration, copying all the settings from the Debug configuration. I’m adding a new deployment configuration file and making a change. The deploy worked. Woo hoo! Now to get really funky. I’ll create a new “Sql command variables file:” and add a variable for setting the data directory. Now to deploy and… Urk! Failed. It’s not recognizing my variable.
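For context, the variable in the SQLCMD variables file gets referenced in scripts with the $(Name) syntax. A hedged sketch of what I’m doing (DataDir is my own variable name, not a built-in, and the file path is illustrative):

```sql
-- Post-deployment script fragment using a SQLCMD variable.
-- $(DataDir) is defined in the project's "Sql command variables file"
-- and can be given a different value per configuration (Dev, QA, etc.).
ALTER DATABASE [$(DatabaseName)]
ADD FILE
(
    NAME = N'Archive',
    FILENAME = N'$(DataDir)\Archive.ndf'
);
```

The point of splitting the value out into the variables file is that each configuration can carry its own data directory without touching the script itself.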
Now I’m stuck. I’ve checked the syntax. It’s right. I double checked it all and reran deploy. Now it works… Color me confused. Whatever. Successful test. Time to create another configuration, simulating a QA server… Got that working too. I don’t know what went south yesterday, but it’s all working now. Now to quantify it all and begin to create some standards.
I’m also going to check this all into source control to see how the connection information is stored. In 2005/2008 (not GDR), the connection info was stored in the .user file. We finally started checking our .user files in with the rest of the project, to the abject horror of every MS developer that heard about it. More on this tomorrow.
Yesterday I was having so many problems after I tried doing everything at once, creating multiple deployments to multiple servers with a variety of different settings, and it didn’t work. So today I went back to square one. I created a blank project. Yesterday I used the schema compare to capture the database, but today I right-clicked on the project and found “Import Database Objects and Settings…”, which did it all for me. Yee Ha!
Anyway, after getting all the objects in, I started trying to deploy. This time I got errors on foreign keys: it had somehow changed the column order on the compound keys. While editing, I found that if I messed up the syntax and then saved the file, I didn’t get an instantaneous error. That bummed me out; I got used to that in the original Visual Studio. Anyway, I fixed all the FK errors and got the deploy to work. I’m now going to walk through creating custom configurations one step at a time to see where the deployment stops working.
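To show why the column order matters (hypothetical tables, not from my project): on a compound foreign key, the column list must line up positionally with the referenced key, so a tool silently reordering the columns changes, or breaks, the constraint.

```sql
-- Compound key: (RegionID, CustomerID) in the FK must match the
-- referenced primary key's columns position by position. Swapping
-- RegionID and CustomerID in either list changes the constraint.
CREATE TABLE dbo.Parent
(
    RegionID    int NOT NULL,
    CustomerID  int NOT NULL,
    CONSTRAINT PK_Parent PRIMARY KEY (RegionID, CustomerID)
);

CREATE TABLE dbo.Child
(
    ChildID     int NOT NULL PRIMARY KEY,
    RegionID    int NOT NULL,
    CustomerID  int NOT NULL,
    CONSTRAINT FK_Child_Parent
        FOREIGN KEY (RegionID, CustomerID)
        REFERENCES dbo.Parent (RegionID, CustomerID)
);
```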
TJ Belt wrote a nice article over at SQL Server Central on how to do Sarbanes-Oxley compliant auditing using SQL Server 2000 and/or 2005. It’s worth a read.
Tonight we have Rob Walters of Microsoft speaking on Visual Studio 2008 Reporting Services. This should be a good one. Rob also recently published a book, Accelerated SQL Server 2008. I’m going to a party at the MS building in Waltham next week to celebrate the release of Rob’s book. Hopefully that means I can finagle a copy too.