New SCOM Book

March 28, 2008 at 12:04 pm (SCOM)

I got a new SCOM book in the mail yesterday, System Center Operations Manager Unleashed. It’s friggin’ huge. I’ve just started reading through it and it looks pretty good. So far, it’s much more thorough than the only other book available for SCOM, Mastering System Center Operations Manager 2007. I’m going to read through more of it before I post a review to Amazon. I think I’ll write up a review for the PASS book reviews too. Hey, if you have time on your hands, get on over there, request a book and write up your own review.


SQL Server 2008 Free Instance

March 28, 2008 at 11:42 am (SQL Server 2008)

If you don’t want to risk blowing up your laptop (like I did) and you’re not interested in setting up and maintaining a virtual server, but you still want to test SQL Server 2008, then PASS (the Professional Association for SQL Server), Dell & MaximumASP have a solution. Just go to SQL Server Beta. You can set up a private instance and it will persist between sessions.


SCOM As a Code Source

March 25, 2008 at 2:28 pm (SCOM)

A friend has had an article posted over on SQL Server Central that’s worth a read. Scott Abrants outlines the type of code that SCOM uses for SQL Server auditing and how you can leverage that outside of SCOM for other work you may be doing. As noted in the comments, it’s also worth taking a look at how Microsoft set up its rules and monitors as a method for setting up your own.


Top vs. Max

March 21, 2008 at 2:56 pm (TSQL)

The company I work for has a very well defined need for versioned data. In a lot of instances, we don’t do updates, we do inserts. That means that you have to have mechanisms for storing the data that enables you to pull out the latest version of all the data or a particular version of all the data, or the data at a particular moment in time, regardless of version.

That means maintaining a version table and a series of inserts into various tables. Some tables will have pretty much a new row for each version, some tables may only have one or two versions out of a chain. With the help of a very smart Microsoft consultant, Bill Sulcius, we have a mechanism that works very well. However, questions about the ultimate tuning of the procedures remain. So we may have a query that looks like this:

SELECT *
FROM dbo.Document d
INNER JOIN dbo.Version v
    ON d.DocumentId = v.DocumentId
    AND v.VersionId = (SELECT TOP(1) v2.VersionId
                       FROM dbo.Version v2
                       WHERE v2.DocumentId = v.DocumentId
                       ORDER BY v2.DocumentId DESC, v2.VersionId DESC)

There’s a clustered index on the Version table that has DocumentId & VersionId in it. This query works great.

But you can also write the same query to get the same results using MAX or ROW_NUMBER(). What’s more, those work well too, all nice clean clustered index seeks. You can also use CROSS APPLY rather than JOINs. All these appear to work well, but in different circumstances, some work better than others. That makes establishing a nice clean “if it looks like this, do this” pattern for developers to emulate difficult. I’m creating a series of tests to outline as much of the differences as I can. I’ll write it up and submit it all to Chuck over at SQL Server Standard first. If he doesn’t like it, it’s Steve’s. I’ll also post a few tidbits here.
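For reference, the alternative forms look roughly like this. This is just a sketch against the same two tables; the exact queries in my tests vary, and the column lists are trimmed to * here for brevity:

```sql
-- MAX in a correlated subquery instead of TOP(1)
SELECT *
FROM dbo.Document d
INNER JOIN dbo.Version v
    ON d.DocumentId = v.DocumentId
    AND v.VersionId = (SELECT MAX(v2.VersionId)
                       FROM dbo.Version v2
                       WHERE v2.DocumentId = v.DocumentId);

-- ROW_NUMBER() partitioned by document, keeping the first row per partition
SELECT *
FROM dbo.Document d
INNER JOIN (SELECT v.DocumentId,
                   v.VersionId,
                   ROW_NUMBER() OVER (PARTITION BY v.DocumentId
                                      ORDER BY v.VersionId DESC) AS RowNum
            FROM dbo.Version v) v
    ON d.DocumentId = v.DocumentId
    AND v.RowNum = 1;

-- CROSS APPLY instead of a JOIN with a correlated subquery
SELECT *
FROM dbo.Document d
CROSS APPLY (SELECT TOP(1) v.VersionId
             FROM dbo.Version v
             WHERE v.DocumentId = d.DocumentId
             ORDER BY v.VersionId DESC) v;
```

All three return the latest version per document; the interesting part is how the optimizer treats each one against that clustered index, which is what the tests are meant to tease out.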


Virtual Servers

March 21, 2008 at 2:38 pm (Misc)

The dev teams that we support have long believed that if only they had an infinite number of environments then the deployment and integration issues that they, and we, wrestle with would go away. Never mind pointing out that if you have problems integrating, then spreading yourself onto even more servers with even more isolation of individual development teams would only radically increase the problem. I tried that. I was shut down. Anyway, we had been using a tool set from one of the heavy hitter virtual environment companies. Supposedly it was going to make it possible for us to implement as many virtual environments, not just servers, but sets of servers, as we wanted. Months of work have gone by. I just got the word through back channels… project scrapped. I documented the story in an article that’ll appear on SQL Server Central at some point in the future. What a pain.


Slick New Tool from RedGate

March 20, 2008 at 12:16 pm (Tools)

I have no intention of this becoming “tool of the day” or anything, but I can’t help tooting the horn for a tool that I’ve been using a lot from RedGate. It’s new and in beta right now, but it’s going to be pretty good. It’s a data generation tool called, are you ready, SQL Data Generator. Who saw that coming? Ok. I know. I’m not funny. Anyway, this is a great little tool. I’ve been using it to quickly slap large amounts of data into small sets of tables to test queries that I’m writing or for checking answers that I’m posting over at SQL Server Central. The tool lets you pick which tables and the columns inside those tables that you want filled with data. It has a number of data generation schemes built in to get either randomly generated data or data that looks like something (names from a list for example). It’ll let you pull data from other sources, and unlike the data load tool that comes with DataDude, this will let you do two things at once: ignore a table but still use data from it. Is that confusing? We have a build process using DataDude that we use to build our databases. Additionally, we load the standard look-up data into the database as part of the build by including it in a post deployment script. Works great. Unfortunately, when we try to use these tables in the MS data loader, it wants to replace the data or completely ignore the table (and associated foreign keys…). This tool from RedGate allows me to skip loading data into the table because it’s already there, but it lets me use that table as a look-up for the foreign key data that it is generating. Great stuff. The beta is pretty stable if you want to try it out and I think it’s going to be released soon.


SCOM Troubles

March 19, 2008 at 5:23 pm (SCOM)

I inherited the SCOM project when one of our Admin people left the company. This person had done a ton of work setting things up but ran into some problems. One of the problems was identified as databases missing from particular instances. I investigated further and found that the database list was “distinct” across instances. Meaning, if two instances shared the same database name, only one of the databases was listed. After getting an email from PSS, I did more investigation today. Now it’s getting weirder. Some servers aren’t showing up in Explorer, and some are missing from both Explorer and the console. Not good. I’ve documented it all and shipped it off to the PSS guys to see what they think.


SQL Inspect

March 18, 2008 at 5:46 pm (Tools)

I was asked if I would look at a new tool from a company I hadn’t heard of. It’s SQL Inspect from Yohz Software.

Nominally this is a SQL query editor. However, strictly as a query editor, especially when compared to what’s available in SQL 2008 or what tools like Red Gate’s SQLPrompt or SQLRefactor can do for you, it’s not so hot. Luckily, that’s not where its strengths lie. Instead, it’s all about performance tuning your queries and it does this very well indeed. It takes a query and returns a tree structured execution plan, looking a bit like the old text plans, but much more sophisticated with roll-ups, etc. It shows you all the details of the operations, just like an execution plan, but immediately accessible on the screen, like having the properties window open and expanded. You can record the results of the query and it will store the plan, the I/O, etc., off on the side. Then you can go back and tweak the query, rerun it to get new results and it will begin comparing these results to the original as a baseline. You can see the cost, duration, read & scan differences up or down, as you try out different queries. It shows the differences in the script and the differences in the query plan, side by side. For such a simple little tool, doing things you’ve got available elsewhere, it’s really great!

I’ve only scratched the surface trying it out, but I’m pretty sure I’ve got a new tool in the tool belt.

They’ve got a trial period (who doesn’t these days) so you can try it out. It’s worth a look.


SCOM SQL Agent Job Defaults

March 18, 2008 at 5:19 pm (SCOM)

I’m working through implementing SCOM, specifically the SQL Server Management Packs. I’ve been tweaking and tuning and for the most part I’m really impressed with how MS configured the defaults. Until today. Since I’ve hit all the really big and important monitors & rules (at least the ones that have caused problems) I’m starting to drill down a bit. I just hit the Duration monitor for SQL Agent Jobs. The run time threshold for a warning state is 1 minute and for an error state is 2 minutes. OK. So any backup of a database over about 2GB in size is going to go into a warning state? I don’t think so. What the heck were these guys thinking of?


Presentation Last Week

March 17, 2008 at 1:51 pm (SNESSUG, SQL Server 2005, SQL Server 2008, TSQL)

Last week I presented at the Southern New England SQL Server User Group (SNESSUG), where I’m the Program Director. I presented from the book, Dissecting Execution Plans, which is supposed to be published any day now. I kept it basic and it flowed very well. The audience seemed receptive and there were a few questions, mostly around figuring out what to do when you saw certain problems within the execution plans. Since that was the goal of the presentation, I was ready for most of the questions. I got stumped once on a Constant Scan operation.

I finally looked it up this morning and feel silly. Here’s the query:

INSERT INTO [AdventureWorks].[Person].[Address]
     ([AddressLine1]
     ,[AddressLine2]
     ,[City]
     ,[StateProvinceId]
     ,[PostalCode]
     ,[rowguid]
     ,[ModifiedDate])
VALUES ('1313 Mockingbird Lane'
     ,'Basement'
     ,'Springfield'
     ,24
     ,'02134'
     ,NEWID()
     ,GETDATE());

Silly simple right? It creates an execution plan that starts with the Constant Scan operator. For those who aren’t snickering behind their hands because I forgot this simple fact, the Constant Scan operator creates a blank row, or rows, for the query in question to begin filling in. Simple. I still feel goofy that I didn’t remember it.
