Powershell is Really Easy… If you know what you’re doing

October 18, 2010 at 8:00 am (PowerShell, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication)

Sigh… I just spent almost a full work day trying to come up with what turned out to be a single-line script. The requirements were simple. After patching of servers is completed, and everything is rebooted, sometimes, not often, but sometimes, SQL Server Agent doesn’t restart correctly. I was asked to put together a script that will check a list of servers to see if Agent is running. Any PowerShell gurus reading this are either laughing or crying already.

So, I started exploring how to connect to the server using some type of PS-Drive, but I quickly figured out that I’d have to loop through the servers that way. Plus, I don’t think there’s a way to check Agent status through the SQL drive. So that went out the window. Then I moved on to Invoke-SqlCmd. After all, I can call multiple servers through the -ServerInstance property, so I just needed to find a good T-SQL statement to check Agent status… Googling & Binging didn’t turn up a good T-SQL statement to check Agent status (not saying there isn’t a way, but I couldn’t find one easily; plus, this is a story about PowerShell). On to SMO. All I had to do was figure out how to connect to the server and the SQL Agent object through SMO. I’d have to loop through the servers, but for such a simple script, that shouldn’t be hard. Ever tried to connect directly to the Agent through SMO? Me neither. It’s a pain.

Finally, I referred back to Don Jones’ (blog|twitter) book, Windows Powershell 2.0, TFM (and a more appropriately named book there isn’t), and noticed a statement, Get-Service… Hey! Isn’t SQL Agent basically a service? Yes. Yes it is. Here’s the final script, in all its simplistic glory:

Get-Service -DisplayName "SQL Server Agent*" -ComputerName (Get-Content "servers.txt") | Select-Object MachineName,Status

Yep, it was that easy. Kids, don’t try this at home. Take the time to learn what you’re doing. Don’t assume you know.
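For what it’s worth, the same pipeline can do the fixing as well as the checking. This is a sketch, assuming the account running it has admin rights on the remote machines:

```powershell
# Check Agent status across all servers listed in servers.txt,
# then restart any instances of the service that are stopped
Get-Service -DisplayName "SQL Server Agent*" -ComputerName (Get-Content "servers.txt") |
    Where-Object { $_.Status -eq "Stopped" } |
    ForEach-Object { $_.Start() }
```

The ServiceController objects that Get-Service returns operate against the machine they came from, so calling Start() on each one acts on the remote service, not the local box.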

Permalink 13 Comments

Powershell Remoting with SQL Server

July 27, 2010 at 8:00 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

One of the best things to come out with Powershell V2 is remoting and asynchronous calls. Between the two of these, you can basically send commands simultaneously to a number of SQL Server instances. BUT… and there always seems to be one of those, there is a lot of work required to get this running right. I’m going to outline what I did recently to test the ability of PowerShell to begin administering my servers remotely. Hopefully this provides the basis for a checklist and a how-to. I’ll update this post over time so that I get things right.

Enable remoting on the machines you wish to call

This requires admin privileges, but it’s pretty simple unless you need to modify which ports are available, etc. But to get it going the easiest way:

Enable-PSRemoting

You’ll get a warning outlining what this will do to the system and asking if you want to continue:

Running command “Set-WSManQuickConfig” to enable this machine for remote management through WinRM service.
 This includes:
    1. Starting or restarting (if already started) the WinRM service
    2. Setting the WinRM service type to auto start
    3. Creating a listener to accept requests on any IP address
    4. Enabling firewall exception for WS-Management traffic (for http only).

Do you want to continue?
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is “Y”):

There may be other prompts about the operations that are listed above (in fact, “will be” is a better way to phrase it). In general, work through the help files to understand what all that means.
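Before going further, it’s worth confirming that a target machine is actually listening. A quick sanity check (Server1 is a placeholder name):

```powershell
# Confirm the WinRM listener on a target machine is responding
Test-WSMan -ComputerName Server1

# Or just run a trivial command remotely and see what comes back
Invoke-Command -ComputerName Server1 -ScriptBlock { $env:COMPUTERNAME }
```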

Configure Sessions

We’re talking about working with SQL Server here, so you know that you need to load the snapin. Easy, right? In fact, you probably already have it loading by default through your configuration. Ah, but here’s the issue: when you’re running remotely, the session on that other machine is not running under your context, so it’s not going to use your configuration at all. Instead you need to define a session configuration. There are several ways you could do this. Steven Muraski (blog|twitter) has a mechanism for forcing configurations to the remote machines. I haven’t tried this yet. I did the brute force approach.

First you create a script that holds the session information you want on each of the machines, in this case:

add-pssnapin SqlServerCmdletSnapin100

add-pssnapin SqlServerProviderSnapin100

That loads the necessary snapins for SQL Server. Once that’s done, you have to register this as a named configuration on the system:

Register-PSSessionConfiguration -Name SQLShell -StartupScript C:\scripts\sql.ps1

Again, this is run on each machine that you wish to remote to. You can do this through remote sessions, funny enough. Once you’ve enabled remoting and set up the sessions, they’ll stay in place, as is, even after rebooting the system. This means you only have to do this once.
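To confirm the registration took, you can open an interactive session against the named configuration (Server1 is a placeholder):

```powershell
# Open an interactive remote session that runs the SQLShell startup script
Enter-PSSession -ComputerName Server1 -ConfigurationName SQLShell

# Once inside, the SQL snapins should already be loaded
Get-PSSnapin SqlServer*
```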

Remoting to SQL Server

You can create multiple sessions and then call them asynchronously, but instead, I tried a different tack. This is my full script:


$instances = @{"Server1"="Server1\InstanceA"; "Server2"="Server2\InstanceB"}

$job = Invoke-Command -ComputerName (Get-Content "c:\scripts\serverlist.txt") -ScriptBlock {param($rinstances) Invoke-Sqlcmd -ServerInstance $rinstances.$env:computername -Query "SELECT * FROM sys.dm_exec_requests WHERE session_id > 50"} -JobName tsql -ConfigurationName SqlShell -ArgumentList $instances

Wait-Job tsql

$results = Receive-Job -Name tsql
$results

Stop-Job -Name tsql
Remove-Job -Name tsql

I created a list of servers and put it in a file, serverlist.txt. Because I’m dealing with instances, I need a way to match the list of servers to the instances. I did this with the associative array (aka hash table), $instances. I’m calling Invoke-Command and passing it a list of computer names through the serverlist.txt file. I pass that a script block, more about that in a minute, and I set a JobName of tsql. This makes the Invoke-Command into a remote job, but in my case, a set of remote jobs. I pass it the configuration we created earlier with -ConfigurationName.

The ScriptBlock is pretty straightforward from there. The one extra wrinkle that I had to get right, and thanks to Steven Muraski, who got me through this little roadblock as well as several others (all through Twitter, by the way), is that you have to get your local parameter into the script block through the use of param and -ArgumentList. Then I just called it based on the server name I was running on (yes, what if I had more than one instance? I didn’t, OK?).

Then, because I wanted to see it and didn’t want to go through a process of collecting information as it finished, I just told it to wait on the job, by name. Then I collected the results, displayed them through the default and shut down the job and removed it.
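As a side note, the wait/receive/stop/remove sequence at the end can be collapsed by piping the finished job straight into Receive-Job — the same behavior, a little tighter:

```powershell
# Wait-Job blocks until the job completes, then emits the job object,
# which Receive-Job picks up off the pipeline to collect the results
Wait-Job -Name tsql | Receive-Job

# The job is already finished, so it can just be removed
Remove-Job -Name tsql
```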

So this is one, admittedly less than perfect, approach to getting remoting working with SQL Server in PowerShell. I’m going to spend more time on this to refine the process.

For a completely different view of the exact same problem, go check out what Aaron Nelson (blog|twitter) did. We were working on this at roughly the same time and exchanging notes. He helped me, I hope I helped him a bit.

Permalink 4 Comments

SQL University: Introduction to Indexes, Part the First

July 19, 2010 at 5:00 am (PowerShell, spatial data, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, TSQL)

Right, all eldritch tomes are to be closed and Elder Signs are to be put away during this course.

Welcome to the History department here at the Miskatonic branch of SQL University. Why the History department? Well, first, because I like history and have frequently thought I would enjoy teaching it. Second, because I needed a hook upon which to hang part of the story I want to tell. What story is that you ask? Why, the story of the Dewey Decimal System. We are interested in studying history and historians must classify our subjects carefully. For advanced students we’ll be covering the Library of Congress Classification System and the…

Right, I give, this is the introductory class on indexes. If you thought we were covering something exciting and sexy like PowerShell, you’re in the wrong room.

Indexes… indexes…. There are, of course, different kinds of indexes. I’m sure that some of you, glancing ahead in your books, are already thinking, “yeah, two.” And you would, of course, be ABSOLUTELY WRONG! That’s why you’re in this class, because you don’t know. There are a large number of different kinds of indexes. Most people think of the standard indexes, of which there are two, clustered and non-clustered. But when pressed they can usually come up with the Full-Text index and possibly even the XML index. But that leaves out Spatial indexes, filtered indexes… more. Microsoft’s documentation lists eight different indexes:

  • clustered
  • non-clustered
  • unique
  • indexes with included columns
  • Full-Text
  • Spatial
  • Filtered
  • XML

But I’ve seen other people count them other ways and arrive at varying numbers. Is a compound index a different kind of index? If it’s not, is unique really a different kind of index? Things to think about.

Why so many? What the heck is an index good for? They must be useful critters or Microsoft wouldn’t have put so many different sorts (however many that is) into SQL Server. I started off talking about the Dewey Decimal System for a reason. An index, any of the indexes we’re going to talk about, is primarily meant, like the DDS, as a mechanism to make finding things easier. That’s all it is. Pretty simple, right? Wrong. You clearly haven’t spent time with SQL Server indexes or the DDS. It’s really complicated. But, just like the DDS, learning how indexes work will make using them much easier.

Remember, the main purpose of a database, despite what your DBA may secretly feel in his or her heart, is not to keep, store and protect data. No, the main purpose of a database is to feed that data up to your business users, whoever they may be, in a timely and accurate fashion. That’s where indexes come in. They will help your queries get the data out to your users faster. Think about your database like a really huge library and your data like a bunch of books. The index acts like the DDS as a mechanism to speed you through the library and quickly and easily retrieve the book that you want.

Enough comparisons, since this is introductory, I just wanted to get the idea of indexes into your head. In the next installment I’ll take on two (or four, depends on how you count them) different kinds of indexes, starting with the standard two that you expected me to cover, clustered and non-clustered indexes. I’ll also introduce the concept of a heap and we’ll talk about what the heck a B-Tree is.

See you next class, probably. Be careful crossing the quad, I’ve heard Wilbur Whately is back on campus and we all remember what happened last time.

Permalink 4 Comments

Small PowerShell Script

July 6, 2010 at 9:59 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

I’m still trying to learn PowerShell better. The opportunity to answer simple questions and problems with the tool is hard to pass up. We had a need to clean up data directories where data files were left behind or people put inappropriate files, so I wrote the following Powershell script:

param([string]$filelocation="",[string]$sqlinstance="(local)")

Set-Location $filelocation

foreach($file in get-childitem)
{
    $base = $file.Name;
    $result = Invoke-Sqlcmd -ServerInstance $sqlinstance -Query "SELECT DB_NAME(mf.database_id) AS db FROM sys.master_files mf WHERE RIGHT(mf.physical_name,LEN('$base')) = '$base' UNION ALL SELECT 'NoDb' AS db WHERE NOT EXISTS (SELECT DB_NAME(mf.database_id) AS db FROM sys.master_files mf WHERE RIGHT(mf.physical_name,LEN('$base')) = '$base');" ;
    if($result.DB -eq "NoDb" -and $file.Extension -ne ".cer"){Remove-Item $base}
}

It’s a very simple script. It takes a UNC and a server instance and then walks through the files in the UNC and validates whether or not those files exist within databases on the server. If they don’t exist, it deletes them. That’s it.
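For reference, invoking it looks something like this (the script name and the paths here are placeholders, not the names I published under):

```powershell
# Check a data directory against a named instance; anything not attached
# to a database on that instance (and not a .cer file) gets deleted,
# so test against a copy of the directory first
.\Clean-DataFiles.ps1 -filelocation "\\FileServer\SQLData" -sqlinstance "Server1\Instance1"
```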

I’ve published this to the Technet Script Center Repository right over here. I’m going to work on making it a bit better, so for updates, go there.

Permalink 1 Comment

One of these things is not like the other

June 30, 2010 at 11:40 am (PowerShell, SQLServerPedia Syndication)

I’m working with PowerShell, and digging it. I decided that I wanted to create a new script (blog post later, if I get it to work) and I wanted to try out different scripting tools to build it. I started with PrimalScript from Sapien. It’s slick. It’s powerful. It has very nice code completion, a great GUI, integration with source control out of the box.

I started scripting and everything was fine. Then, I needed to run invoke-Sqlcmd so I decided to change the profile on the shell so it would load the SQL Server cmdlets automagically. I started getting the error:

Get-ItemProperty : Cannot find path ‘HKLM:\SOFTWARE\Microsoft\PowerShell\1\ShellIds\Microsoft.SqlServer.Management.PowerShell.sqlps’ because it does not exist

Hmmm… weird. I’m pretty sure I used the same command on my workstation as on my laptop. So I checked it out. Sure enough, same command. So I navigated through the path to ensure everything was in place. I opened up a separate shell on the laptop and on the workstation. They both showed the path just fine. I couldn’t figure it out. So I opened up the registry and browsed through directly. Yep, the path was there. I opened a shell directly from PrimalScript. I got the error again. Then I tried to navigate to my path. No love. Why?

Then I noticed, in the path in the window that opened from PrimalScript, two little numbers, 32. It’s running the 32 bit shell. Funny enough, the install of SQL Server is 64 bit. It’s visible to the 64 bit shell, the one I ran, but it’s not there to the 32 bit shell. Now I get to track down how to configure SQL Server cmdlets to be visible to both the 32 and the 64 bit shell. Fun.
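If you ever need to check which flavor of shell an editor has launched you into, the pointer size gives it away:

```powershell
# 8 means a 64-bit shell, 4 means a 32-bit shell
[IntPtr]::Size

# The architecture variable tells a similar story:
# a 32-bit process reports x86 even on a 64-bit OS
$env:PROCESSOR_ARCHITECTURE
```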

Idera’s PowerShell Plus has a 64-bit version. I’m just trying to get Quest’s PowerGUI installed, but it looks like it’s 32-bit. This is something to keep an eye out for when choosing and configuring your editor.

Permalink 6 Comments

Powershell Fundamentals

June 25, 2010 at 11:13 am (PowerShell, SQLServerPedia Syndication)

I’ve had the opportunity this week to take class with Don Jones (blog | twitter), PowerShell guru and author of PowerShell 2.0 TFM (makes me laugh, each and every time). I’m not even going to attempt to try to teach you some of the amazing information Don has spent the last week pouring into my ears (since most of  it spilled on the floor). But, I do want to pass on some high points, gotchas, tips, insights, that Don communicated to us during the class because they might help you pick up on the fundamentals of PowerShell.

That’s the most important lesson I got out of this week, you need the fundamentals to really start to put PowerShell to work. It’s very easy to make the mistake that I did. I went into PowerShell with a programmer’s background. I have built applications with Paradox Application Language (PAL & OPAL), Basic, Visual Basic, Java, and C# and I’ve been working with the VB-Script scripting language for years including work setting up custom rules in Operations Manager. So, I attacked PowerShell like it was just another language. Problem is, PowerShell isn’t completely just another language. It really does have some fundamental differences that you need to be aware of or you will get side-tracked, as I did. Getting side-tracked will not only keep you from properly learning the abilities of PowerShell, but you’ll miss out on major benefits that the proper construction of PowerShell scripts will convey to you.

What constructs? The one key point that I think I was missing was the fundamental importance of the pipe. It’s not simply a neat way to pass information from one function to the next, it’s how you should be doing a lot of the work. I wasn’t, but I will.

Here, largely taken from my notes, so these are not Don’s precise words (all the time), are some things you might want to know (any & all errors are mine and mine alone):

  • No one is going to be bleeding from the eyes at the end of the course. Great news since I usually bleed from the ears after drinking from the Itzik Ben Gan firehose of information during his classes.
  • Don is convinced that it is Microsoft’s intention to move to a two tier structure of haves & have-nots consisting of the first class citizen of PowerShell users and the second tier citizen that uses only GUI’s. I commented on that at length here.
  • Law #1: Thou shalt ask for help. By this he didn’t just mean asking him for help during the class, but learning how to use PowerShell help. He’s right. The more you know how to access the help system, the better.
  • Use Consolas instead of the raster font. It really did help.
  • If you find yourself messing with text, you’re doing it wrong. Which pretty much invalidates half the scripts I’ve written so far in PowerShell.
  • F-7 to get a quick list of commands. Works wonderfully.
  • Set up a commandlet of the day calendar. He suggested tossing the Dilbert daily cartoon calendar and setting up a commandlet calendar so that you look at a commandlet in detail each day as a learning tool. I’ll be working on mine.
  • If an object binds by PSObject (or psvalue, can’t quite read my notes) it will always only bind by psobject despite what the documentation says. Ok.
  • Invoke-Command and remoting are too flipping cool. Parallel execution of the scripts.
  • Security is built into the product:
    • script exec is prohibited by default
    • Ps1 is not associated with the shell
    • must provide a path to a script
    • underlying security is not bypassed
  • Get a certificate then use and generate nothing but signed scripts. This makes sense and I’ll go to work on it. It’ll change how I post scripts here, but I’ll cope.
  • Don’t modify things out of scope. This is actually a fundamental bad behavior. You can modify stuff out of scope, but that doesn’t mean you should.
  • PowerShell GUI’s are possible, but if you need a multi-form app, there is a wonderful product called visual studio.
  • He had a function he called a “ghetto breakpoint” he used in v1. I just liked the term.
  • If you’re using FOREACH all over the place, you’re doing it wrong. And there go the other half of my scripts….
  • If all you’re doing is writing C# code in PowerShell, then use C#.
  • While we can do a ton of cool stuff with SQL Server through PowerShell, it’s actually exposed a very stunted set of methods & processes into PowerShell. As a matter of fact, running a query is the one thing it really exposed, which is why I’ve been using SMO so frequently.
  • My words, not Don’s, the ISE stinks, get an editor
  • Trying out Sapien‘s PrimalScript because of the class… it might replace Idera‘s PowerShellPlus in my tool box. We’ll see.

I’m sure there were better & more important things that I should have picked up, but this is the list of items that stood out. It was a great opportunity to get to learn this from Don and I heartily recommend you track down his classes and attend them. You won’t be sorry.

Permalink 6 Comments

Learning Powershell

June 23, 2010 at 8:35 am (PowerShell)

I’ve been attending a Powershell fundamentals class with Don Jones (blog|twitter). If you read my blog you might be aware of the fact that I’ve posted a few PowerShell scripts in the past.  So why was I attending a fundamentals class? Because I didn’t know what I was doing. I knew going into the class that I needed a better grounding in the fundamentals of Posh, but after the first day of Don’s excellent class, I realized that I had been working with PowerShell and didn’t have a clue how it really worked.

Don’s class is excellent and I could spend a lot of time talking about just that (which I’m sure would make Don happy). However I want to concentrate on something that he said during class that really resonated because I think it’s true. You don’t hear “true” things all the time, so when one jumps up and bites you, it’s worth paying attention. I don’t have his exact quote written down, so this will be more than a bit of a paraphrase. Don laid down the argument that Microsoft is creating a two tier structure where low level admins will have a GUI and the real experts will be using PowerShell. He showed how some of the functionality available in AD is already split and he said that more and more products coming out of Microsoft are going to be showing this same split.

So why do I see this as important? Well, it’s simple really. I don’t want to be in the low level, second tier, LOWER PAID, MORE EASILY REPLACED, set of knowledge workers. I want to be in the group that really can make things go. I want to ensure my employability into the future (at least until I win the lottery… which will happen right after I start playing it). You may not like PowerShell. You may not like Microsoft, SQL Server, Windows or any of that stuff. You may be a devoted Mac-head (like Don, Brent, Aaron..). But if your job is related to managing Windows servers, SQL Server, Operations Manager, SharePoint, Exchange… you get the idea, regardless of how you feel about PowerShell and the rest, if you want to be better, more powerful, and more employable, you need to learn PowerShell.

The good news is, it’s not that hard. The better news is, you can contact Don and get into one of his classes. The even better news is, if you’re a lazy, good for nothing lay-about, like me, learning PowerShell will make you more efficient, which means more time to lay around doing nothing while still looking good for the boss. So for those of you who believed that they just didn’t need to learn PowerShell… well, you’re wrong. Go get started. I’d suggest learning the fundamentals first, not just hacking away at it like I did. Don’s book might be a good place to start (TFM…. [snicker]).

Permalink 4 Comments

Buy vs. Build

May 28, 2010 at 8:52 am (PowerShell, SCOM)

It’s the classic question faced by everyone in Information Services. I know how to do this and I could build software to do it, but I’m a lazy b_____d so I’d rather just pick up a piece of software that does it for me. I love working for large companies because I can usually get them to purchase stuff so that I can loll around stuffing my face all day instead of doing actual work. Unfortunately, not everyone can afford to pick up Microsoft’s Operations Manager or Idera’s Diagnostic Manager. But you still need to monitor your servers. With buy eliminated, that leaves build.

Which is where this excellent blog post by Laerte Junior comes in. He lays out how to build a wrapper around calls to get Performance Counter information using PowerShell. It’s pretty slick and worth a read. Because the thing is, when you need to build your monitoring products, you want to use a language that you know. Since everyone is learning Powershell (right?) this provides a good foundation for beginning your monitoring project.

Permalink 4 Comments

Powershell Script for Verifying Space

April 19, 2010 at 8:00 am (PowerShell, SQLServerPedia Syndication)

First let me say, I know my Powershell skills are sub-par. I’m working on it. Slowly but surely. That said, I had a problem to solve. It’s one that I could have done with TSQL, but it would be very ugly TSQL, probably involving dynamic queries, and even for admin scripts, I try to avoid that. So, I went for SMO and WMI wrapped by Powershell to solve the problem.

What was the problem you ask? We automate as many of our processes as we can. One process we do is resetting databases from production or other sources. Our processes work very well, but we occasionally run into a problem where the source system db has grown and the target system doesn’t have the required disk space. So, we needed a method for validating it. This is my first pass at that method. I know it needs work, but it’s functional, so I thought I’d share.

param($sourceserver,$targetserver,$databasename)
[reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | out-null

$source = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "$sourceserver"
$target = New-Object ("Microsoft.SqlServer.Management.Smo.Server") "$targetserver"

$sourcedb = $source.Databases[$databasename]
$targetdb = $target.Databases[$databasename]

## assumes a named instance, so the machine name is everything before the "\"
$server = $targetserver.Substring(0,$targetserver.IndexOf("\"))

$sourcelogfiles = $sourcedb.LogFiles
$targetlogfiles = $targetdb.LogFiles

## walk through all the log files
foreach ($slf in $sourcelogfiles)
{
    $tlf = $targetlogfiles[$slf.Name]
    ## see if the target is smaller than the source
    if ($slf.Size -gt $tlf.Size)
    {
        ## if the target is smaller, check the drive for free space
        $drive = Split-Path $tlf.FileName -Qualifier
        $driveinfo = gwmi win32_logicaldisk -ComputerName $server | Where-Object {$_.name -like "$drive"} | select name,freespace

        ## SMO sizes are in KB, WMI freespace is in bytes
        if (($slf.Size * 1KB) -gt ($driveinfo.freespace + ($tlf.Size * 1KB)))
        {
            Write-Output "Drive: $drive has insufficient space. $databasename Source: $($slf.Size)KB, Target: $($tlf.Size)KB, Free: $($driveinfo.freespace) bytes"
        }
    }
}

$sourcedatagroups = $sourcedb.FileGroups
$targetdatagroups = $targetdb.FileGroups

## walk through all the data files
foreach ($sdg in $sourcedatagroups)
{
    foreach ($sdf in $sdg.Files)
    {
        $tdg = $targetdatagroups[$sdg.Name]
        $tdf = $tdg.Files[$sdf.Name]

        if ($sdf.Size -gt $tdf.Size)
        {
            $drive = Split-Path $tdf.FileName -Qualifier
            $driveinfo = gwmi win32_logicaldisk -ComputerName $server | Where-Object {$_.name -like "$drive"} | select name,freespace

            ## SMO sizes are in KB, WMI freespace is in bytes
            if (($sdf.Size * 1KB) -gt ($driveinfo.freespace + ($tdf.Size * 1KB)))
            {
                Write-Output "Drive: $drive has insufficient space. $databasename Source: $($sdf.Size)KB, Target: $($tdf.Size)KB, Free: $($driveinfo.freespace) bytes"
            }
        }
    }
}

It’s pretty straightforward. It gets a connection to each SQL instance it’s passed, goes to the database in question, which in our situation will always have the same name, and walks the log files and data files, which again, will always have the same logical names and the same file groups. If there’s insufficient space, it kicks out a message. That’s it. Seems to work.
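Calling it looks something like this (the script name and server names here are placeholders; note the instances need the machine\instance form, since the script splits on the backslash to get the machine name):

```powershell
# Compare MyDatabase file sizes between production and the test target,
# reporting any drives on the target that can't absorb the growth
.\Verify-Space.ps1 -sourceserver "ProdServer\Instance1" -targetserver "TestServer\Instance1" -databasename "MyDatabase"
```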

Permalink Leave a Comment

Powershell SMO Problem

April 16, 2010 at 10:14 am (PowerShell, SQL Server 2008, SQLServerPedia Syndication)

We’ve been running the Enterprise Policy Management tools available from Codeplex for a few months now (thanks to Buck Woody’s (blog | twitter) session at the PASS Summit). They’re honestly great. It’s a fantastic way to use Policy Based Management on 2000 and 2005 servers. We did hit some issues with timeouts, and looking at the script, it made a call to Invoke-Sqlcmd but didn’t pass the -QueryTimeout value. That means it defaults to 30 seconds, and the import-to-database process was taking more than a minute for some of our queries. I did a little looking around and decided to just disable the timeout by passing a value of zero (0). But, I still got timeouts. Finally, after a bit of searching around, I found a closed (because it was posted in the wrong place) Connect item. It’s pretty simple to test. If you want to see a good run, do this:

Invoke-Sqlcmd "waitfor delay '00:00:29'" -Database master -ServerInstance SomeServer -QueryTimeout 0

It’ll work fine. Change it to this:

Invoke-Sqlcmd "waitfor delay '00:00:31'" -Database master -ServerInstance SomeServer -QueryTimeout 0

You’ll get a timeout. I don’t know if this is a bug or by design, but it’s a bit of a pain that you can’t simply bypass the timeout. There is a max value (a huge max value) of 65535, but what happens if I run a SQL command that runs longer than that? Please go and vote on the new Connect item.
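Until that’s resolved, one workaround is to pass the documented maximum explicitly instead of zero — 65535 seconds is a little over 18 hours, which covers most queries:

```powershell
# Zero doesn't actually disable the timeout, so pass the max instead;
# anything longer than ~18 hours would need SMO or System.Data directly
Invoke-Sqlcmd "waitfor delay '00:00:31'" -Database master -ServerInstance SomeServer -QueryTimeout 65535
```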

Permalink 6 Comments
