PowerShell Remoting with SQL Server

July 27, 2010 at 8:00 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

One of the best things to come out with PowerShell V2 is remoting and asynchronous calls. Between the two of these, you can basically send commands simultaneously to a number of SQL Server instances. BUT… and there always seems to be one of those, there is a lot of work required to get this running right. I’m going to outline what I did recently to test the ability of PowerShell to begin administering my servers remotely. Hopefully this provides the basis for a checklist and a how-to. I’ll update this post over time so that I get things right.

Enable remoting on the machines you wish to call

This requires admin privileges, but it’s pretty simple unless you need to modify which ports are available, etc. But to get it going the easiest way:

Enable-PSRemoting

You’ll get a warning outlining what this will do to the system and asking if you want to continue:

Running command “Set-WSManQuickConfig” to enable this machine for remote management through WinRM service.
 This includes:
    1. Starting or restarting (if already started) the WinRM service
    2. Setting the WinRM service type to auto start
    3. Creating a listener to accept requests on any IP address
    4. Enabling firewall exception for WS-Management traffic (for http only).

Do you want to continue?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is “Y”):

There may be other prompts about the operations listed above (in fact, there will be). In general, work through the help files to understand what all of that means.

Configure Sessions

We’re talking about working with SQL Server here, so you know that you need to load the snapin. Easy, right? In fact, you probably already have it loading by default through your configuration. Ah, but here’s the issue. When you’re running remotely, the session on that other machine is not running under your context, so it’s not going to use your configuration at all. Instead you need to define a session. There are several ways you could do this. Steven Murawski (blog|twitter) has a mechanism for forcing configurations to the remote machines. I haven’t tried this yet. I did the brute force approach.

First you create a script that holds the session information you want on each of the machines, in this case:

add-pssnapin SqlServerCmdletSnapin100
add-pssnapin SqlServerProviderSnapin100

That loads the necessary snapins for SQL Server. Once that’s done, you have to register this as a named configuration on the system:

Register-PSSessionConfiguration -Name SQLShell -StartupScript C:\scripts\sql.ps1

Again, this is run on each machine that you wish to remote to. You can do this through remote sessions, funny enough. Once you’ve enabled remoting and set up the sessions, they’ll stay in place, as is, even after rebooting the system. This means you only have to do this once.
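In fact, if the startup script is already in place on each machine, a single call can handle the registration everywhere. This is just a sketch, assuming sql.ps1 has already been copied to C:\scripts on every server in the list (-Force suppresses the confirmation prompts):

Invoke-Command -ComputerName (Get-Content "c:\scripts\serverlist.txt") -ScriptBlock {
    Register-PSSessionConfiguration -Name SQLShell -StartupScript C:\scripts\sql.ps1 -Force
}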

Remoting to SQL Server

You can create multiple sessions and then call them asynchronously, but instead, I tried a different tack. This is my full script:


# Map each server name to the instance I want to query on it
$instances = @{"Server1"="Server1\InstanceA"; "Server2"="Server2\InstanceB"}

# Send the query to every server in the list as a set of remote jobs,
# using the SQLShell configuration registered earlier
$job = Invoke-Command -ComputerName (Get-Content "c:\scripts\serverlist.txt") -ScriptBlock {param($rinstances) Invoke-Sqlcmd -ServerInstance $rinstances.$env:computername -Query "SELECT * FROM sys.dm_exec_requests WHERE session_id > 50"} -JobName tsql -ConfigurationName SQLShell -ArgumentList $instances

# Block until all the remote jobs have finished
Wait-Job -Name tsql

# Collect the output from every server and display it
$results = Receive-Job -Name tsql
$results

# Clean up the jobs
Stop-Job -Name tsql
Remove-Job -Name tsql

I created a list of servers and put it in a file, serverlist.txt. Because I’m dealing with instances, I need a way to match the list of servers to the instances. I did this with the associative array (aka hash table), $instances. I’m calling Invoke-Command and passing it a list of computer names through the serverlist.txt file. I pass that a script block, more about that in a minute, and I set a JobName as tsql. This makes the Invoke-Command into a remote job, but in my case, a set of remote jobs. I pass it the configuration we created earlier with -ConfigurationName.

The ScriptBlock is pretty straightforward from there. The one extra wrinkle that I had to get right, and thanks to Steven Murawski, who got me through this little roadblock as well as several others (all through Twitter by the way), is that you have to get your local parameter into the script block through the use of param and -ArgumentList. Then I just called it based on the server name I was running on (yes, what if I had more than one instance, I didn’t, OK?).

Then, because I wanted to see it and didn’t want to go through a process of collecting information as it finished, I just told it to wait on the job, by name. Then I collected the results, displayed them through the default output, and shut down the job and removed it.
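One handy detail: remoting stamps every result row with a PSComputerName property, so you can always tell which server a row came from. A quick sketch of summarizing the output by server:

$results | Group-Object PSComputerName | ForEach-Object { "{0}: {1} active requests" -f $_.Name, $_.Count }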

So this is one, admittedly less than perfect, approach to getting remoting working with SQL Server in PowerShell. I’m going to spend more time on this to refine the process.

For a completely different view of the exact same problem, go check out what Aaron Nelson (blog|twitter) did. We were working on this at roughly the same time and exchanging notes. He helped me, I hope I helped him a bit.


SQL University: Introduction to Indexes, Part the Third

July 23, 2010 at 5:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Nice to see most of you have managed to fight your way through the shoggoths outside to attend another lecture at the Miskatonic branch of SQL University. This will be the third and final part of the introduction to indexes lecture. Please, if you’re going mad, step out into the hall. Our previous two lectures introduced the concept of indexes and then talked about two types of indexes, clustered and nonclustered. This lecture will cover the concept of statistics as they relate to indexes.

If you followed the previous lecture then you know that indexes are stored in a Balanced Tree or B-Tree structure. You know that this storage mechanism is intended to provide fast retrieval of data. But, how can the query engine inside SQL Server know which index has the information it wants? How can the query engine know that a given index will provide a mechanism to speed up the query that is being run? Statistics.

Statistics are information generated defining the selectivity of a column or columns in the database. They may or may not be part of an index. Columns alone can have statistics and indexes can have statistics. The query optimizer uses statistics to determine how it will access data. You may have the greatest index in the world, but if your statistics are wrong, that index won’t get used. By default statistics are automatically created and automatically updated. The automatic update of statistics is triggered by the volume of change in the data (roughly 500 rows plus 20% of the table, for tables of any real size), as outlined in this Microsoft documentation. In general, it’s a very good practice to leave this alone because SQL Server does a great job of maintaining statistics.
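If you want to confirm that those settings are on for your database, the catalog view makes it a one-line check:

SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases
WHERE name = 'AdventureWorks2008R2';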

But what are statistics really?

Statistics are basically a sampling of data stored in such a way as to make it possible for the optimizer to understand what the data looks like without actually accessing it. It is, absolutely, meta-data, data about data. The sampling is, usually, a scan across the data that reads every so many rows, a percentage of the rows in the table, but it can be a full scan, where it looks at all the data in the table. The meta-data that is collected is broken down into several pieces. Statistics are represented through:

  • The histogram, which provides a picture of the data in a column
  • The header, which provides information about the statistics or meta-data about meta-data
  • Density information, which measures the selectivity of the column or columns being tracked by the statistics

In short, a lot of information. To start understanding what this all means, let’s see how to retrieve a set of statistics:

DBCC SHOW_STATISTICS ("Sales.SalesOrderDetail", IX_SalesOrderDetail_ProductID);

This query uses tables stored in the AdventureWorks2008R2 database to display statistics information. Just a side note, so many DBCC functions have been replaced by dynamic management views that I honestly expected one for statistics too, but it’s not there. Running the query results in the following set of information:

The first result set at the top is the header. Most of the information you see there should make sense to you based on what we’ve been talking about. A few points are worth noting: the “Rows Sampled” column, which shows how many of the rows were looked at while the stats were being built; Steps, which gives you an indication of the size of the histogram; and Density, which shows the selectivity of the statistics, although that column is not the one used by the optimizer. The main thing I use header information for is to see when the statistics were last updated.
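By the way, you don’t have to run DBCC SHOW_STATISTICS just to get that date. A quick sketch using the STATS_DATE function pulls it for every set of statistics on the table:

SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Sales.SalesOrderDetail');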

The second set of data shows the density of the columns used to define the index. You can see that each column is measured as it gets added to the index. This shows you how a compound index makes the data more and more selective. If those numbers are at all big, as opposed to vanishingly small like those presented, you have a selectivity issue with your index. The less selective the index is, the less likely it will be used.

Finally, the last data set is the histogram. What I have displayed is actually just the first seven of two hundred rows. A histogram is basically a small snapshot of the data in your index, specifically, the data in the first column of your index. That’s worth knowing. If you choose, for some crazy reason, to build an index with a bit column as the leading edge (that means the first column) and you’re surprised that your index isn’t getting used, this is where you can go to understand why. The histogram is one of the main tools that SQL Server uses to determine which operations it will use to retrieve your data. Here’s how it works:

  • RANGE_HI_KEY shows the upper bound column value, the top of the range of rows, that this entry in the histogram represents.
  • RANGE_ROWS displays the number of rows within the range represented by this entry in the histogram (except, for this entry, there is no range, it’s a single value).
  • EQ_ROWS tells you how many rows are equal to the value represented by the RANGE_HI_KEY. That number is large here because, with no range of rows, this value alone is represented by approximately 2838.166 rows inside the data.
  • DISTINCT_RANGE_ROWS shows you the distinct number of rows within each stepped range of values. In this case, again, because we’re not dealing with a range, that number is zero.
  • AVG_RANGE_ROWS displays the average number of duplicate values within the range.

Scrolling down within that set of data you can see what a range of values looks like:

In this case the range covers two values with the upper limit being 805 and the previous upper limit being 801.

In short, you can see what the optimizer sees and determine if you should have more up-to-date statistics. For example, if I query the table and retrieve ProductID values that are between 802 and 805, which would represent step 72 in the histogram, I get 246 rows, not the 442 that the statistics say I should see. Since the statistics are a little old on the table, they haven’t been updated since June 23rd, I’ll update them. You can use sp_updatestats, which will sample the data and generate a histogram with some guesses, or you can do this:

UPDATE STATISTICS Sales.SalesOrderDetail WITH FULLSCAN

Now when I run DBCC SHOW_STATISTICS, the header information shows that the rows sampled equal the rows in the table. My new histogram has almost exactly the same distribution, but the row counts are dead on. In my case the number of RANGE_ROWS is equal to 200 and the number of EQ_ROWS is equal to 46, which adds up to the 246 rows that we got from querying the data directly. That represents a very accurate set of statistics.
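For reference, the check itself is nothing fancy, just a straight count across the range that the histogram step covers (a sketch against AdventureWorks2008R2):

SELECT COUNT(*) AS actual_rows
FROM Sales.SalesOrderDetail
WHERE ProductID BETWEEN 802 AND 805;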

So how do you know when to update your statistics? It’s really hard to say. If you start seeing different execution plans than you did previously, or if you note that the date on the statistics seems old, you might want to update stats. How do you determine if you should use a full scan or sampled statistics? Again, hard to say. In most circumstances a sampled set of statistics should work, but in some areas, if the data distribution is off, you may want to run a full scan.
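If you do opt for sampled statistics but want more control, you can set the sample size explicitly rather than taking the default; the percentage here is just an example:

UPDATE STATISTICS Sales.SalesOrderDetail WITH SAMPLE 50 PERCENT;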

There is clearly much more to discuss about statistics and how to maintain them, and a lot more to discuss about indexes. I didn’t even get to index fragmentation. Today you should remember:

  • Statistics are used to determine which indexes work well for a query
  • Statistics are composed of multiple sets of data, the header, density information, and the histogram
  • Histograms have a maximum of 200 steps
  • Histograms show the data distribution
  • Despite automatic maintenance, statistics can be out of date
  • The sampled update of statistics doesn’t always create the best set of data

For more information on statistics, I recommend Microsoft’s Books On Line for SQL Server. It’s a great resource.

But I hear the whippoorwills kicking up and I see a funny looking fog developing outside. It’s getting awfully dark for mid-day. One of the other professors stepped on some squealing white… thing… on his way in today and one of the TAs went stark raving mad at something they saw in the rectory. Time to end class. Best of luck on your journey back to your dorms. I’m locking myself in my office.

While I’m there, I might churn out one or two more entries on indexes. Look for them in the SQL University E-Book (no doubt to be published posthumously).


SQL University: Introduction to Indexes, Part the Second

July 21, 2010 at 5:00 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Welcome once more to the Miskatonic branch of SQL University. Please try to concentrate. I realize the whippoorwills singing outside the window in a coordinated fashion that sounds almost like laboured breathing can be distracting, but we’re talking about indexes here.

We left last class with a general idea what an index is, now it’s time for some specifics. There are several different kinds of indexes, as we talked about last class. But the two you’re probably going to work with the most are clustered and non-clustered. Each of these indexes is stored in a structure called a B-Tree, a balanced tree, not a binary tree. That’s a very important distinction.

A B-Tree is a doubly-linked list that is defined by the keys of the indexes on the top and intermediate pages, and at the leaf level by the data itself in the case of clustered indexes. Some of you no doubt think I’m quoting from De Vermis Mysteriis. Basically, for our purposes, a B-Tree consists of a series of pages. There is a top page, or root page, that defines the beginning of the index key. It points to a series of intermediate pages. Each intermediate page contains a range, a previous and a next value. These all point to each other, hence, doubly linked. The idea is that SQL Server can quickly identify which intermediate page has the pointers down to the leaf node, the final node in the stack. The values of these pointers are defined by the key of the index, the column or columns that you define when you create the index. There are always at least two levels, leaf & root, but there can be more, depending on the amount of data and the size of the keys. Just remember, the size of the key, which refers both to the data types in the key and the number of columns, determines how many key values can get on a page. The more key values on a page, the faster access will be; the fewer key values, the more pages that have to be read, and therefore, the slower the performance.
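If you’re curious how many levels a particular index actually has, you can look for yourself. This is just a sketch against AdventureWorks2008R2, and be aware that the DETAILED mode reads the entire index, so save it for off-hours on big tables:

SELECT index_id, index_level, page_count, record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Sales.SalesOrderDetail'), NULL, NULL, 'DETAILED');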

In general the purpose is to be able to quickly navigate to a leaf or set of leaf pages. When a B-Tree is used and the query engine is able to navigate quickly down to the leaf needed, that is an index seek. But when the B-Tree has to be moved through, in whole or in part, scanning for the values, you’re looking at an index scan. Obviously, in most cases, a seek will be faster than a scan because it’s going to be accessing fewer pages to get to the leaf needed to satisfy the query. Just remember, that’s not always true.

Let’s get on to the indexes. It’s already been mentioned, but it bears repeating, the principal difference between a clustered and non-clustered index is what is at the leaf level. In a non-clustered index, it’s simply the key values and any values added through the use of the INCLUDE option, along with a lookup value to either the clustered index key or an identifier within the table. In a clustered index, the data is stored down at the leaf. This is why people will frequently refer to a clustered index as being “better” than a non-clustered index, because you’re always going directly to the data when you’re looking information up within a clustered index. But, as with the scans vs. seeks argument, this is not always true either.

I mentioned that a non-clustered index refers back to the clustered index, if there is one on the table. Because the data is stored at the leaf level of the clustered index, when you need to retrieve other columns after performing a seek on a non-clustered index, you must go and get those columns from the clustered index. This is known as a key lookup, or in older parlance, a bookmark lookup. This operation is necessary when the non-clustered index doesn’t supply all the data needed, and it can be very expensive because you’ve just added extra reads to your query.

What if there isn’t a clustered index on the table? What does the non-clustered index use to find other columns? If the table doesn’t have a clustered index, then that table is referred to as a heap. It’s called a heap because the data is simply stored in a pile, with no logical or physical ordering whatsoever. With a heap, SQL Server takes it upon itself to identify the leaf level storage and creates a row id value for all the rows in the table. This row id can be used by the non-clustered index to find the data. That is referred to by the completely arcane and incomprehensible term, row id lookup. You might be thinking, hey, that means I don’t have to create a clustered index because SQL Server will create one for me. You’d be wrong. Maintaining the row id is an expensive operation and it doesn’t help in retrieving the data in an efficient manner. It’s just necessary for SQL Server to get the data back at all. In general, this is something to be avoided.
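Incidentally, finding the heaps in a database is a one-line check, since heaps show up in sys.indexes with an index_id of 0:

SELECT OBJECT_NAME(object_id) AS table_name
FROM sys.indexes
WHERE index_id = 0;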

A non-clustered index doesn’t necessarily have to perform a lookup. If all the columns referred to in a query are stored within a non-clustered index, either as part of the key or as INCLUDE columns at the leaf, it’s possible to get what is called a “covering” query. This is a query where no lookup is needed. Indexes that can provide a covering query with everything it needs are referred to as covering indexes. A covering query is frequently one of the fastest ways to get at data. This is because, again, depending on the size of the keys and any INCLUDE columns, a non-clustered index will have more information stored on the page than a clustered index will and so fewer pages will have to be read, making the operation faster.
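To make that concrete, here’s a sketch against AdventureWorks2008R2. The index name and INCLUDE columns are my own invention, but with an index like this in place, the query below it never has to touch the clustered index:

CREATE NONCLUSTERED INDEX IX_SalesOrderDetail_ProductID_Covering
ON Sales.SalesOrderDetail (ProductID)
INCLUDE (OrderQty, UnitPrice);

SELECT ProductID, OrderQty, UnitPrice
FROM Sales.SalesOrderDetail
WHERE ProductID = 772;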

By and large, a good guideline is to put a clustered index on all tables. SQL Server works extremely well with clustered indexes, and it provides you with a good access mechanism to your data. If you don’t put a clustered index on the table, SQL Server will create and maintain a row ID anyway, but as I said before, this doesn’t save much work on the server and it doesn’t provide you with any performance enhancement.

That’s a basic introduction to the three concepts of the clustered index, the non-clustered index and the heap. The points I’d like you to remember are:

  • Indexes are stored in Balanced Trees
  • Balanced Trees have, generally, three levels, root page, intermediate page, and leaf page
  • In clustered indexes, data is stored at the leaf page
  • In non-clustered indexes, a pointer is maintained back to the clustered index or the row id
  • A heap is a table without a clustered index

Remember those things and you can really begin to dig down on how indexes work. Understanding how they work will assist you in designing them for your database and your queries.

Next class we’ll go over statistics.

I wouldn’t walk back to your dorm by way of the shore. I’ve seen some rather odd looking people near the docks lately that didn’t give me a good feeling. See you next time… maybe.


SQL University: Introduction to Indexes, Part the First

July 19, 2010 at 5:00 am (PowerShell, spatial data, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Right, all eldritch tomes are to be closed and Elder Signs are to be put away during this course.

Welcome to the History department here at the Miskatonic branch of SQL University. Why the History department? Well, first, because I like history and have frequently thought I would enjoy teaching it. Second, because I needed a hook upon which to hang part of the story I want to tell. What story is that you ask? Why, the story of the Dewey Decimal System. We are interested in studying history and historians must classify our subjects carefully. For advanced students we’ll be covering the Library of Congress Classification System and the…

Right, I give, this is the introductory class on indexes. If you thought we were covering something exciting and sexy like PowerShell, you’re in the wrong room.

Indexes… indexes…. There are, of course, different kinds of indexes. I’m sure that some of you, glancing ahead in your books, are already thinking, “yeah, two.” And you would, of course, be ABSOLUTELY WRONG! That’s why you’re in this class, because you don’t know. There are a large number of different kinds of indexes. Most people think of the standard indexes, of which there are two, clustered and non-clustered. But when pressed they can usually come up with the Full-Text index and possibly even the XML index. But that leaves out Spatial indexes, filtered indexes… more. Microsoft’s documentation lists eight different indexes:

  • clustered
  • non-clustered
  • unique
  • indexes with included columns
  • Full-Text
  • Spatial
  • Filtered
  • XML

But I’ve seen other people count them other ways and arrive at different numbers. Is a compound index a different kind of index? If it’s not, is unique really a different kind of index? Things to think about.

Why so many? What the heck is an index good for? They must be useful critters or Microsoft wouldn’t have put so many different sorts (however many that is) into SQL Server. I started off talking about the Dewey Decimal System for a reason. An index, any of the indexes we’re going to talk about, is primarily meant, like the DDS, as a mechanism to make finding things easier. That’s all it is. Pretty simple, right? Wrong. You clearly haven’t spent time with SQL Server indexes or the DDS. It’s really complicated. But, just like the DDS, learning how indexes work will make using them much easier.

Remember, the main purpose of a database, despite what your DBA may secretly feel in his or her heart, is not to keep, store and protect data. No, the main purpose of a database is to feed that data up to your business users, whoever they may be, in a timely and accurate fashion. That’s where indexes come in. They will help your queries get the data out to your users faster. Think about your database like a really huge library and your data like a bunch of books. The index acts like the DDS as a mechanism to speed you through the library and quickly and easily retrieve the book that you want.

Enough comparisons, since this is introductory, I just wanted to get the idea of indexes into your head. In the next installment I’ll take on two (or four, depending on how you count them) different kinds of indexes, starting with the standard two that you expected me to cover, clustered and non-clustered indexes. I’ll also introduce the concept of a heap and we’ll talk about what the heck a B-Tree is.

See you next class, probably. Be careful crossing the quad, I’ve heard Wilbur Whately is back on campus and we all remember what happened last time.


Steve Jones to Run for PASS Board

July 16, 2010 at 2:24 pm (PASS)

I couldn’t hide the lead. Steve Jones (blog|twitter) has announced he’s running for the PASS board. I’m excited. I’m almost as excited as if I were going to run. Steve is not simply a major influencer in what we call the SQL Community. He brings two things to the party that I think are going to make him very successful on the board. First, he gets things done. He just does. He’s one of those guys that seems to keep it together, just enough, to accomplish stuff. Second, he has real ideas. He’s not someone who is simply going to iron out a wrinkle or three at the Summit. He’s going to propose stuff that will make a difference. Put those two things together and I think we’ve got reason to celebrate.

Help me out here. Spread the word on this. We need people to be aware of who Steve is and what he does.


SQL University: Introduction to Indexes

July 16, 2010 at 11:41 am (SQL Server 2008, SQLServerPedia Syndication, TSQL)

For some crazy reason (probably everyone is on vacation), Jorge Segarra (blog|twitter) who runs SQL University has asked me to submit some material. I’ve got three posts, all coming up next week, that give a very, very basic introduction to indexes in SQL Server.


In the Clouds

July 14, 2010 at 7:15 am (SNESSUG)

The July meeting for the Southern New England SQL Server Users Group is tonight. Andrew Novick will be talking about SQL Azure. It’ll be a great meeting. Our sponsor is Red Gate. They bought pizza.


Red Gate SQL Source Control

July 13, 2010 at 9:23 pm (SQL Server 2005, SQL Server 2008, Tools, TSQL)

You just have to love Red Gate tools. They find the small area that they want to cover and then they cover it extremely well. I rave regularly about SQL Prompt and SQL Compare and SQL Search (free one, btw). I’ve got SQL Data Compare and SQL Data Generator open & working on my desk regularly. I’m dabbling in their other tools fairly often as well. I just like Red Gate tools. I guess my constant & consistent praise is why I’m a “Friend of Red Gate.” I like to mention that before I start praising their tools some more, just so no one thinks I’m hiding it. Why would I hide it? I’m proud to say it. I am a Friend of Red Gate! … anyway… where was I… right, new software. I took a small part (a very small part) in the beta for their new software, SQL Source Control. I thought it was pretty cool when it wasn’t quite working right. Well, now it’s out, working very well, and it’s pretty slick.

Basically Red Gate has created a nice tight coupling between source control and your database. They currently support Apache Subversion and Microsoft’s Team Foundation Server (TFS). It lets you create a mechanism for keeping track of your databases in the same way that you track your code. I honestly believe this is a must for any reasonably sized development team (read, more than two). I can expound on why, but instead I’ll just talk some more about SQL Source Control.

First thing you need to know is that it’s hooked into Management Studio. After you do the install, you get some extra windows in SSMS that look something like this:

I’ve scratched out my own server & database names, but you get the idea. The description summarizes it very well. Lots of people can work on the database, save the scripts into source control, and then they can pull that common set of scripts back out to do more work, just like working with code. It really is the best way to develop.

You just have to connect up the database following the directions and you’ll see something like this:

If you can see that, that’s a database (name hidden) that’s been hooked up to source control. Actually, that and the change to the set-up screen are about your only indications that this tool is running. I love the lack of intrusion.

Better still, each time you reconnect the database, as it goes and checks to see if there are updates in source control, you get a little spinning… looks like a yin/yang symbol.

Enough about pretty graphics. How does it work? Extremely well. I started adding new database objects, editing existing objects, and all it ever did was put one of its little symbols on the object that I had created or edited, marking it as a change. When I was ready to move the changes to source control, I just clicked on the Commit Changes tab. All the changes are listed and you see scripts showing before & after between the code in the database and the code in source control.

It just works. Same thing going the other way. A database already connected can just pull changes out and apply them. Nothing I did in all my testing hit a snag (granted, I was just working on pretty traditional tables, procedures, indexes, etc.).

The one thing I’ve found that I don’t like is that there doesn’t seem to be a facility for deploying the databases automatically. Instead, I had to create a blank database, hook that to the existing database in TFS and then pull down all the “missing” objects. Hopefully they’ll go to work on a way to automate that soon.

Just to reiterate, the point of the exercise is to get your code (and while you’re developing, a database is as much code as anything written in C#) into source control. Once you’re in source control, you manage your databases just like code, label, version, branch, whatever you need to do to maintain a tight coupling with the rest of the code for the app. SQL Source Control acts as a very fast and simple tool to enable that coupling for you.


Delivering the Bad News

July 9, 2010 at 2:36 pm (sql server standard)

It’s a Friday, the day governments & companies traditionally deliver bad news. I received the bad news earlier in the week, but I’m passing it on now:

The Standard is dead.

Let me first say, a couple of authors are right in the middle of finishing up articles. Those will be completed and published and you’ll get paid.

Andy sums up some of the reasons why the Standard failed very nicely in his blog post. I agree with them, if not where the responsibility lies. Andy takes most of it on himself because, well, he’s that kind of guy, may the gods bless him. But, the fact is, I took on the job and just wasn’t prepared for what it would entail. The “editing” part of the job was hard. If you think it’s easy to tell people, “No, we’re not going to make you famous and give you $500,” think again. It was tough. But, actually, that was the easier part of what was needed.

The real difficulties were two-fold. First, pushing the documents through the pipeline. Frankly, that was a pain. Sometimes, I was the bottleneck, sometimes it was other people. But it required a great deal of attention and diligence and I wasn’t always giving it my all. Second, and this is the biggie, it really needed to be marketed, constantly, and widely. It needed to be up in people’s faces, all the time. I needed to be the one pushing that, hard. I blogged about it occasionally and I tweeted about it a few more times, but, here again, I didn’t give it the real attention it needed.

Yeah, I’ve got excuses for the shortcomings, some are valid, some aren’t. I’m not going to bother with them because frankly, they only really matter to me and Andy. Suffice to say, I did the job I could do and it wasn’t adequate.

So, the SQL Server Standard is dead, again. I think that makes its third death, depending on how you count them. Who knows, the thing keeps coming back like Dracula in the old Hammer films, we could see it again.

Thanks to everyone who wrote for it. Thank you, the few people who clicked through and logged in to get the chance to read it. Thanks to all the editors and photographers and everyone else involved. Thanks, a lot, for magnificent work as the head technical editor, to Brad McGehee (blog|twitter). Thanks, most of all, to Andy Warren (blog|twitter) for giving me the opportunity. Sorry I dropped the ball on this.


Small PowerShell Script

July 6, 2010 at 9:59 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

I’m still trying to learn PowerShell better. The opportunity to answer simple questions and problems with the tool is hard to pass up. We had a need to clean up data directories where data files were left behind or people put inappropriate files, so I wrote the following PowerShell script:

param([string]$filelocation="",[string]$sqlinstance="(local)")

# Move to the directory that needs cleaning
Set-Location $filelocation

foreach($file in get-childitem)
{
    $base = $file.Name;

    # Ask the instance whether any database uses a file with this name;
    # if none does, the query returns a single 'NoDb' row instead
    $result = Invoke-Sqlcmd -ServerInstance $sqlinstance -Query "SELECT DB_NAME(mf.database_id) AS db FROM sys.master_files mf WHERE RIGHT(mf.physical_name,LEN('$Base')) = '$Base' UNION ALL SELECT 'NoDb' AS db WHERE NOT EXISTS (SELECT DB_NAME(mf.database_id) AS db FROM sys.master_files mf WHERE RIGHT(mf.physical_name,LEN('$Base')) = '$Base');" ;

    # Delete the file if no database claims it (certificate files get a pass)
    if($result.DB -eq "NoDb" -and $file.Extension -ne ".cer"){Remove-Item $base}
}

It’s a very simple script. It takes a UNC and a server instance and then walks through the files in the UNC and validates whether or not those files exist within databases on the server. If they don’t exist, it deletes them. That’s it.
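Assuming you saved it as, say, CleanDataDirectory.ps1 (the file name and paths here are just examples), calling it looks like this:

.\CleanDataDirectory.ps1 -filelocation "\\FileServer\SQLData" -sqlinstance "Server1\InstanceA"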

I’ve published this to the Technet Script Center Repository right over here. I’m going to work on making it a bit better, so for updates, go there.
