Red Gate SQL Source Control

July 13, 2010 at 9:23 pm (SQL Server 2005, SQL Server 2008, Tools, TSQL)

You just have to love Red Gate tools. They find the small area that they want to cover and then they cover it extremely well. I rave regularly about SQL Prompt and SQL Compare and SQL Search (free one, btw). I’ve got SQL Data Compare and SQL Data Generator open & working on my desk regularly. I’m dabbling in their other tools fairly often as well. I just like Red Gate tools. I guess my constant & consistent praise is why I’m a “Friend of Red Gate.” I like to mention that before I start praising their tools some more, just so no one thinks I’m hiding it. Why would I hide it? I’m proud to say it. I am a Friend of Red Gate! … anyway… where was I… right, new software. I took a small part (a very small part) in the beta for their new software, SQL Source Control. I thought it was pretty cool when it wasn’t quite working right. Well, now it’s out, working very well, and it’s pretty slick.

Basically Red Gate has created a nice tight coupling between Source Control & your database. They currently support Apache Subversion and Microsoft’s Team Foundation Server (TFS). It lets you create a mechanism for keeping track of your databases in the same way that you track your code. I honestly believe this is a must for any reasonably sized development team (read, more than two). I can expound on why, but instead I’ll just talk some more about SQL Source Control.

First thing you need to know is that it’s hooked into Management Studio. After you do the install, you get some extra windows in SSMS that look something like this:

I’ve scratched out my own server & database names, but you get the idea. The description summarizes it very well. Lots of people can work on the database, save the scripts into source control, and then they can pull that common set of scripts back out to do more work, just like working with code. It really is the best way to develop.

You just have to connect up the database following the directions and you’ll see something like this:

If you can see that, that’s a database (name hidden) that’s been hooked up to source control. Actually, that and the change to the set-up screen are about your only indications that this tool is running. I love the lack of intrusion.

Better still, each time you reconnect the database, as it goes and checks to see if there are updates in source control, you get a little spinning icon that looks like a yin/yang symbol.

Enough about pretty graphics. How does it work? Extremely well. I started adding new database objects, editing existing objects, and all it ever did was put one of its little symbols on the object that I had created or edited, marking it as a change. When I was ready to move the changes to source control, I just clicked on the Commit Changes tab. All the changes are listed and you see before & after scripts comparing the code in the database with the code in source control.

It just works. Same thing going the other way. A database already connected can just pull changes out and apply them. Nothing I did in all my testing hit a snag (granted, I was just working on pretty traditional tables, procedures, indexes, etc.).

The one thing I’ve found that I don’t like is that there doesn’t seem to be a facility for deploying the databases automatically. Instead, I had to create a blank database, hook that to the existing database in TFS and then pull down all the “missing” objects. Hopefully they’ll go to work on a way to automate that soon.

Just to reiterate, the point of the exercise is to get your code (and while you’re developing, a database is as much code as anything written in C#) into source control. Once you’re in source control, you manage your databases just like code, label, version, branch, whatever you need to do to maintain a tight coupling with the rest of the code for the app. SQL Source Control acts as a very fast and simple tool to enable that coupling for you.


Confio Ignite: Part II

April 16, 2010 at 8:00 am (Tools)

I’m continuing to evaluate Confio’s Ignite database monitoring tool. I’ve had it collecting data on a couple of production servers for about a week now. Based on what I’ve seen so far, it’s looking like a pretty good piece of software.

Breaking with the usual tradition, I’m going to talk about the things I’m not crazy about with the software before I start singing its praises. The first thing, which I thought was missing but is actually just hard to find, is the ability to look at the query information that Ignite collects broken down by database. It looks like you should be able to get to it by looking at the Databases tab, but instead you have to first drill down into a time period, then select specific databases within that time period, which will show you the queries by database. I know that in my environment, it’s this database listing of queries that is probably going to get used a lot. Tracking it down required help from the Confio team, which they quickly provided. They also showed me a way I could run queries to get this from their database so that I could create a report.

Speaking of reports, because they support Oracle & DB2 as well as SQL Server through their interface, there’s no Reporting Services. OK, I recognize that I’m more than a bit Microsoft-centric when I say this, but I’d like to see SSRS reports so that I can manipulate things the way I want to. Again, not the end of the world, but it’s just something I don’t like. Because the data store is available, though, I can get in there and create my own reports to my heart’s content and, if I like one of their reports, I can always run a trace to capture the query and see how they built it so that I can build one of my own that mirrors it. I think they provide a mechanism for customizing reports by building some XML that will add them to the interface, which is a bit of a pain, but shows they’re on top of things.

One other thing that bothered me as I worked with Ignite is that, in the real time monitoring section, it was hard to find my way to the list of locks. It had the list, it just wasn’t obvious how to get to it, and this is something I’m used to looking at when I’m worrying about real time performance issues.

Right, enough talking about things I don’t like. See this? I love this:

That’s a breakdown, by database, by day, of the cumulative waits on the system. Yeah, that little pink database is causing all sorts of problems on one of my production systems. I actually already knew this was a problematic database, but I wasn’t precisely aware of when and how it had issues. You can drill down to see the same thing for a given day:

And it’s not just the pretty pictures, showing that most of our production load is in the morning but that there are some odd spikes at midnight, 4AM & 7PM; there’s data available behind the graph. If you drill down and hover over the graphs, pop-ups like this one appear:

(Names have been blacked out to protect my job)

And it’s the focus on wait times and types, provided by the trending views and reports, that makes this a very strong tool. Ignite also collects pretty standard performance metrics: buffer cache hit ratio, memory paging, etc. And you can customize which of those metrics it collects and which ones you display, all on your own. But almost any decent monitoring tool does that. I use a tool that does that, and might even do that a bit better. No, what separates this tool from the pack is the ability to identify the wait states and the query that is causing them. That’s what will get you excited when you’re using Ignite.
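
To be clear about why that’s exciting: you can get a snapshot of current waits and the statements behind them yourself from the DMVs with something like the sketch below. This is only a rough manual equivalent, not anything to do with how Ignite actually works, and the session_id cutoff is just a crude habit of mine. What the DMVs don’t give you for free is the history, trending and correlation over time, and that’s the part Ignite is selling.

-- A rough, manual snapshot of what is waiting right now and the statement behind it.
-- Illustrative only; this is not how Ignite gathers its data.
SELECT  wt.session_id,
        wt.wait_type,
        wt.wait_duration_ms,
        st.text AS statement_text
FROM    sys.dm_os_waiting_tasks AS wt
        JOIN sys.dm_exec_requests AS r
            ON wt.session_id = r.session_id
        CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE   wt.session_id > 50; -- crude filter to skip most system sessions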

It also has a little ping mechanism that shows response time on the server, a helpful and friendly little reminder that all performance isn’t around what’s happening on SQL Server, but what’s happening across the enterprise.

Big chunks of the interface are customizable and you can add or drop things using the little icons on the right side of the picture above.

I can keep going with all the stuff that’s good, because it’s a long list, but I’ll summarize what I like so far. I like the focus on wait states, a lot. I like the focus on query performance. Between those two I feel like I’m getting the majority of what I need from the tool, and more than I get from other, similar, monitoring tools. The fact that it does all the other standard data collection on top feels like gravy. The problems I have with the software so far are either personal issues, like the Reporting Services thing, or fairly minor. I have a monitoring product running against the same servers as Ignite and I haven’t seen any impact from Ignite’s monitoring showing up there, so it looks like this is benign monitoring software.

For my final installment, I’m going to look up the skirt of Ignite and see what the underlying data structure looks like. The ability to gather and present data is all well and good, but more important is the ability to take that data and make serious use of it through reports or by building other data structures out of it for long term storage and reporting.


Database Design Process

January 18, 2010 at 11:35 am (Misc, Tools)

Buck Woody recently asked a question: how do you design a database? He outlined the process he follows and asked four questions about how each of us does our work:

  1. What process do you follow?
  2. How important are the business requirements?
  3. What tool do you use to create the design, do you need it to diagram, do you even care about diagrams?
  4. What’s your biggest pain-point about designing?

Funny enough, I haven’t done a full-on database design in over a year. My company just finished about six years of very hard-core engineering work, designing and building, or redesigning and rebuilding, the majority of our data input and collection systems. Back then, I was doing lots of design work. Now we’re either largely in maintenance mode for most of those systems, or the few new major projects we’re working on are using CRM with no database design, or they’re hoping that database design will never, ever matter and using nHibernate to build the database on the fly, based on the object model (this, by the way, is still in super-double-secret probation development mode; I haven’t seen what they’re producing). All that means we’re doing very little design work now. That will change.

Process

The process I follow isn’t radically different from Buck Woody’s. I get the business requirements, which are hopefully written on something more substantial than a cocktail napkin, and with more detail than will fit on one, and I read them. Or I meet with the business people and ask lots and lots of questions, which I’ll probably do even if I have written requirements. Sometimes, especially at my current company, I’ll get a full logical diagram from another team. I’ll still basically do the same design work, even if I have a logical model, but I’ll use it as a reference point to double-check my understanding of the requirements. From there:

  1. Identify the nouns. Figure out what kinds of objects, or things, or widgets, we’re talking about needing to store.
  2. Figure out the relationships between the widgets. What’s a parent and what’s a child and which of the widgets is related to many other widgets or to one other widget, etc.
  3. Lay out the attributes for the widget, meaning the columns in the table. Big points here include getting the data type correctly identified and figuring out which of the attributes are required and which are not. Further, which of the attributes come from pick lists, which means look-up tables.
  4. Identify the natural key. I’m with Buck, most of the time I use an alternate, artificial key (yeah, frequently an identity column), but I want to know the natural key so that I can put a unique constraint on the table, in addition to the primary key. This is a vital, but often missed, step in terms of the business processes being modeled.
  5. Figure out where I’m going to put the clustered index. Usually, but not always, this will be the primary key, but I decide as a fundamental part of the design. That means I think about the most common access path for the data, because that’s where the cluster needs to be. (There’s a quick T-SQL sketch of steps 4 and 5 just after this list.)
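
To make steps 4 and 5 concrete, here’s a minimal T-SQL sketch. Every name in it is invented for illustration; the point is simply the artificial primary key, the unique constraint on the natural key, a look-up table behind the pick-list attribute, and the clustered index placed on the most common access path instead of defaulting to the primary key.

-- Look-up table for the pick-list attribute (step 3). All names are hypothetical.
CREATE TABLE dbo.WidgetType
    (WidgetTypeID INT IDENTITY(1, 1) NOT NULL,
     WidgetTypeDesc VARCHAR(50) NOT NULL,
     CONSTRAINT PK_WidgetType PRIMARY KEY CLUSTERED (WidgetTypeID),
     CONSTRAINT UQ_WidgetType_Desc UNIQUE (WidgetTypeDesc));
GO

-- Step 4: artificial key as the primary key, unique constraint on the natural key.
CREATE TABLE dbo.Widget
    (WidgetID INT IDENTITY(1, 1) NOT NULL,
     WidgetName VARCHAR(50) NOT NULL,   -- the natural key in this made-up case
     WidgetTypeID INT NOT NULL,         -- pick list, hence the look-up table above
     CreateDate DATETIME NOT NULL,
     CONSTRAINT PK_Widget PRIMARY KEY NONCLUSTERED (WidgetID),
     CONSTRAINT UQ_Widget_WidgetName UNIQUE (WidgetName),
     CONSTRAINT FK_Widget_WidgetType FOREIGN KEY (WidgetTypeID)
         REFERENCES dbo.WidgetType (WidgetTypeID));
GO

-- Step 5: in this imaginary system most queries hit the table by date range,
-- so the cluster goes on the access path rather than on the primary key.
CREATE CLUSTERED INDEX CX_Widget_CreateDate ON dbo.Widget (CreateDate);
GO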

How Important Are the Business Requirements

Buck, I respect you and I like you and I hate to do this, but you’re kidding with this question, right? No requirements, no database. The requirements are all. The most important reason you want the requirements written down is that it means the business at least thought them through, all the way, one time. It’s your best bet that you’re going to deliver something that slightly resembles what the business wants and needs. I live and breathe by the requirements. When someone from the business corrects my design, “Oh, that widget is related to this thingie, not that watchamacallit,” I get them to change the requirements to reflect that new information. This way, when someone wonders why I did what I did, they’ll always be able to see the requirements the way I saw them.

Tools

I use Embarcadero’s ERStudio. I know there are other ER tools on the market, but I fell in love with this one on the day I needed to change the data type on a column that was in about 30 different tables, and I did it in a couple of minutes using their scripting tool. I’ve been using it ever since, and we’re talking well more than ten years now. It’s just great. I only use it for the initial design and for documentation after the design. Once the design is done the first time and a physical database is constructed, I don’t work from the ER diagram to do builds and deployments; I work from source control. Do I have to do it like this? No, but I really enjoy the power of an ER tool while I’m doing the design because it lets you make a lot of changes quickly and easily without having to rebuild a database over & over.

Biggest Pain Point

The largest pain point has to be changing requirements. Change happens. I embrace it. I’ve worked on agile projects and I like the general approach and mind set. And yet, changing databases is hard. It’s not so bad when you’re in the strict, isolated, ER diagram only stage, but as soon as you’ve built the database, even one time, change gets difficult. It’s not so bad if you work closely with the design, test & development teams and get their buy-in, early, that test data must be tossed & rebuilt with each fundamental design change. But that’s a hard agreement to get and so you end up spending a lot of time trying to retain the test data AND fundamentally redesign the data structure. This is not an enjoyable combination.

That’s about it. I would say the one thing I try to do, and it’s not easy, is be open to doing silly stuff. I try, and I don’t always succeed, to let the business or developers or logical modelling team make silly choices after I carefully tell them why they’re silly and the likely outcome. I do this because fighting them on every single silly decision is a losing proposition. Plus, it saves you for the fights against the stupid, as opposed to silly, things that come along occasionally.


How do -You- use SQL Server

December 1, 2009 at 10:57 am (nHibernate, SCOM, SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, Tools, TSQL, Visual Studio)

I’ve been tagged by a misplaced yankee, uh, New Englander, whatever. The question is, how do I/we use SQL Server where I work? That’s a tough one. It would make a much shorter, and easier, blog post to describe the things we don’t use it for. However, keeping with the spirit of these tags, I’ll try to lay it out.

For those that don’t know, I work for a rather large insurance company. This means that we have lots and lots of databases, but not much data. We are also cheap. That means we’ll run an app into the ground rather than spend the money & time to replace it. We have apps still running from the 70’s and 80’s, propped up by ancient men with pocket protectors, spit, baling wire and happy thoughts. This also means that we have apps running on SQL Server 7, 2000, 2005 and 2008. Give me a couple of weeks and I’m sure I can get an R2 app deployed. There are also a few Oracle databases, our warehouse and PeopleSoft in particular. We even have a DB2 and, I think, one Sybase database somewhere.

I don’t want to go into lots of details about the type of data we store, lest I get in trouble, but basically, think insurance and you’ll get a pretty good idea of a lot of it. Add in the fact that my company prides itself on engineering to avoid risk and you’ll know that we gather quite a bit of information about the things that we insure. There are lots and lots of very small databases. Our largest DBs are just breaking 100GB, but most are in the 20-60GB range. We have a ton of OLTP systems gathering all the different data. These have been developed in just about every possible way. We even have a couple of systems using nHibernate under development. We move, mostly, pretty standard structured data. We have a few processes that are using XML, mostly from third party sources, to import data, so we’ve learned how to shred that stuff into the database. Spatial data (insurance, remember) is the big new thing on the block. We’re seeing lots more implementations taking advantage of this. We don’t see much in the way of unstructured data, but some of the reports from the engineers fall into this realm. We also get quite a few photos that they want us to store. We’re working on using FileStream to keep those in sync with the db rather than storing them within the database itself.
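
As a side note on the XML shredding, it’s the nodes() method that makes those imports workable. Here’s a minimal, hypothetical sketch; the element and attribute names are entirely made up and have nothing to do with any real feed we receive.

-- Hypothetical third-party XML feed; element and attribute names are invented.
DECLARE @feed XML;
SET @feed = N'<Policies>
  <Policy Number="P-1001" Premium="1250.00" />
  <Policy Number="P-1002" Premium="975.50" />
</Policies>';

-- Shred the XML into relational rows, ready to insert into a staging table.
SELECT  p.value('@Number', 'VARCHAR(20)') AS PolicyNumber,
        p.value('@Premium', 'NUMERIC(10, 2)') AS Premium
FROM    @feed.nodes('/Policies/Policy') AS shredded (p);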

Currently, and I hate this, the overwhelming majority of our OLTP data is collected in SQL Server. All our datamarts used for reporting are in SQL Server. BUT, in the middle sits our Oracle warehouse. So we have to convert our data from SQL Server into Oracle and then back into SQL Server. It’s fun. Swapping data types over & over, shrinking column names only to re-expand them into something a human being can read… It’s lots of fun.

We use SSIS for almost all our ETL processes, although we have a few DTS packages still chugging away on some of the 2000 servers. We’re running a bit of replication, but nothing major. We have several fail-over clusters in production. We’re running virtual machines in lots of places. We’re expanding our Reporting Services implementation pretty rapidly. After attending Buck Woody’s sessions at PASS this year we’re getting Central Management Servers and Policy Based Management implemented.

Most of my time is spent working with development teams. We do most of our deployment work using Visual Studio. I do database design, stored procedure writing and tuning, data access methods… you name it and I’m involved with it. The level of our involvement varies from team to team, but by & large, we get involved early in most development projects and are able to influence how databases are developed.

For monitoring we’re using Microsoft System Center Operations Manager (or whatever it’s called this week). We’ve modified it somewhat, adding special rules & monitors to keep a better eye on the systems. We also use Idera’s SQL Diagnostic Manager to help keep a more detailed eye on the systems. I already mentioned that we use Visual Studio for most of our development & deployments. We also use several Red Gate products, Compare, Data Compare, Prompt, pretty much every day.

That’s pretty much it. We keep busy and keep the systems up, running & happy most of the time.

Because I think they’ll have interesting answers, I’m going to pass this on to Jeremiah Peschka and Tim Ford.


Snags With Profiler GUI

October 28, 2009 at 7:59 am (SQL Server 2005, SQL Server 2008, SQLServerPedia Syndication, Tools, TSQL)

Running the Profiler GUI against a production server is not something you should do. I’ve outlined my research into exactly why in the past. But I hit another little issue with the Profiler GUI as part of work I’m doing on a Microsoft PSS call (more on that in another post). We have a procedure on one of our systems that is erroring out, but only on a particular server and only when called from the application, not when it’s called, on the same server, from SQL Server Management Studio.

I needed to capture some trace events and do it quickly, so I decided to use the GUI, just this once. I put filters on it so that I would only collect certain data, and the database I was collecting from was not in active use, so I knew the only commands I’d be capturing were my own. I just captured SQL:BatchCompleted and SQL:BatchStarting. I needed to run the query one time to validate the behavior. Under these circumstances, with full knowledge of what the GUI can do to a production system, I figured I was ok.

I kicked off the trace through the GUI and ran my proc. Here’s what I saw:

Skipped

This is not evidence of the problems outlined above. You know how TextData normally shows the query that is being run? In this case the query being run involves a rather large XML parameter being passed in as part of the query. This is so large that the Profiler GUI skipped over it and didn’t bother clogging the memory of the system or my network with capturing it. Don’t panic though. If I run the exact same trace as a server side trace, I get the full text of the call in the TextData field. And yes, if you’re wondering, I’m pretty positive this is by design (although the only reference I can see in BOL is to events greater than 1GB, which we are not into here).
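
For comparison, here’s roughly what that same capture looks like as a server-side trace. The file path and database name are placeholders, and the event and column IDs (13 = SQL:BatchStarting, 12 = SQL:BatchCompleted, 1 = TextData, and so on) come from the sp_trace_setevent documentation, so double-check them before relying on this sketch.

-- Minimal server-side trace: SQL:BatchStarting (13) and SQL:BatchCompleted (12).
-- The output path is a placeholder; sp_trace_create appends .trc to it.
DECLARE @TraceID INT,
        @maxfilesize BIGINT,
        @on BIT;
SET @maxfilesize = 50;
SET @on = 1;

EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\BatchCapture', @maxfilesize, NULL;

-- Capture TextData (1), SPID (12) and StartTime (14) for both events.
EXEC sp_trace_setevent @TraceID, 13, 1, @on;
EXEC sp_trace_setevent @TraceID, 13, 12, @on;
EXEC sp_trace_setevent @TraceID, 13, 14, @on;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 14, @on;

-- Optional filter to one database (column 35 = DatabaseName, 0 = AND, 6 = LIKE).
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'MyDatabase';

-- Start the trace; status 0 stops it, status 2 closes and deletes the definition.
EXEC sp_trace_setstatus @TraceID, 1;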

One more reason to avoid the Profiler GUI as a data collection mechanism.


Editorial on SQL Server Central

September 14, 2009 at 6:46 am (Misc, Tools)

My first one over there. It discusses whether or not you should do two things: build your own monitoring tool, or come out in particular favor of one tool or suite of tools from a single vendor. Please read it and watch the video. And, even more importantly, leave a comment in the discussion.


Pay for it!

March 3, 2009 at 1:52 pm (Tools)

For those searching for the string “sql prompt 3.8 | rapidshare” who somehow came by my blog, twice: I just looked it up on the web site. SQL Prompt is only $195. If you’re professional enough to need it, surely you can afford to pay the licensing fee. Seriously now. It’s not like Red Gate is some gigantic corporate entity that would be incapable of noticing that you just robbed them of a single license. Red Gate is a company so small and personal that I, who have never worked for them a day in my life, have met the CEO. Great guy too.

I do understand where you’re coming from. Long, long ago I worked in tech support and only did programming on the side.  I “borrowed” a copy of the programming language I was using back then because I flat out couldn’t afford it and I needed time at night to practice. I appreciate that. But this tool is not an entire programming language, it’s a utility designed to make your full-time programming life easier. I understand why you want it, because, frankly, it works really well. But, since I know, if you need this tool, you can afford to pay for it, just pony up the cash. It’s not going to break the bank. Skip buying a copy of Halo Wars this week and pick it up used next week when you can save about $15.

Just stop trying to steal from a hard working bunch of people who’ve earned the right to charge you one heck of a lot more than they’re charging. Loser.


VSTS:DB Composite Projects

February 4, 2009 at 11:16 am (SQLServerPedia Syndication, Tools, Visual Studio)

I’ve seen several searches go by from people who are looking for how to create composite projects. Here’s a quick run-down. In the example, I’m going to create a main database project called BaseDB, a server project called MyServer, and a composite project that combines the two called Comp. The composite project is just another database project, as you can see in Figure 1:

New Database Project

Once the new project is created, you need to right click on the References object in the Solution Explorer. Figure 2:


From the pop-up menu select Add Database Reference. The window in Figure 3 will pop up:

Add Database Reference

You can select the other projects for a dynamic view into their changes, or for a static look you can select a .dbschema file. Unfortunately, the .dbschema file is very static. You won’t get changes as they’re made. Usually I use the projects.

You can also set Database Reference Variables for the project, but usually this isn’t necessary. If there are indirect references you can suppress them. Again, this isn’t something I’ve had to use so far.

Once you’ve created the references to other projects or .dbschema files, you can see these on the References tab of the project Properties as shown in Figure 4:


You now have a composite project. There are a number of other options you should consider; for example, how the deployments are done. One common approach that I’ve taken is to have the base database project do a full rebuild of the database and then have the other two projects do incremental deployments. You’ll need to make sure that all the same Configurations are set up between the projects so that things like the connection strings are the same between projects. You’ll also want to make sure that the base project and the composite project are deploying to the same database name.

There’s more, but that should get you started.


New England Data Camp Evals

January 27, 2009 at 8:01 am (PASS, SQL Server 2008, Tools, TSQL)

Anyone reading this who attended the New England Data Camp and filled out an eval, for any of the sessions, thanks. For the 63 evals I received between my two sessions, thanks again. Here are the aggregates on my sessions:

Using Visual Studio Team System Database Edition:

Average of Knowledge 8.344827586
Average of Presentation 8.482758621
Average of Preparation 8.103448276
Average of Interesting 8.172413793
Average of Overall 8.275862069
Number of Submissions 29

Understanding Execution Plans

Average of Knowledge 8.647058824
Average of Presentation 8.617647059
Average of Preparation 8.705882353
Average of Interesting 8.529411765
Average of Overall 8.625
Number of Submissions 34

These are all on a scale of 1-9. I’m really quite happy with the results. Here are the average results for all the speakers and all the sessions at the Data Camp:

Total Average of Knowledge 8.407843
Total Average of Presentation 7.912109
Total Average of Preparation 8.130859
Total Average of Interesting 7.962891
Total Average of Overall 8.096004
Total Number of Submissions 515

Overall, both sessions beat the average.  My knowledge level was marked down a bit on the Visual Studio session and I attribute that (mostly) to a lack of rehearsal and preparation. I changed that slide deck just the week before the Data Camp and it showed.  Same problem with the Visual Studio session regarding preparation. What practices and rehearsals I had done were on my desktop at work. I found out that morning that my laptop didn’t have the GDR release installed, so I had to RDP to my desktop. It created several technical issues. I’m glad that people picked up on it. It really does keep me honest. I guess the session on execution plans was well received (despite the fact that I kept saying page when I meant leaf when referring to an index structure, bleh).

There were some really nice comments, thanks everyone. A couple of the comments on the Visual Studio session talked about market penetration and the readiness of the tool set. I had about 60 people in the audience and only three (3!) were using the tool. More were using Team Foundation Server, but not to the extent we use it where I work. I don’t think that’s because the tool isn’t ready (although I think it has a few shortcomings in & around deployments, especially incremental deployments) but rather the fact that it costs a bloody fortune. Few individuals can afford it and not that many companies are going to be willing to pay for it, especially in this economy. Other than that, no suggestions for improving the presentation, despite the fact that I got marked down a bit on this one. I’ll take the preparation more seriously next time.

I only got one negative on the Understanding Execution Plans session and, unfortunately, it’s only marginally useful. One person gave me a 2 on “Interesting” (in a sea of 9’s a few 8’s and two 7’s). This person wanted to see a session on query tuning and optimization. But, that’s just not what the session is about, at all. So it’s hard to take this as a mechanism for improving my presentation on what is an execution plan and how do you read one. However, it does let me know that I should probably try to come up with some kind of performance tuning & tips session that I can give from the new book. Unfortunately, this is such a full field with great presenters like Gail Shaw already showing exactly what I’d show (except better) that I’m not sure what to do about it. I need some idea to drive the session, a hook like Gail’s “Dirty Dozen” (fantastic name). I’m thinking about this one.

Anyway, there are the results, all out in the open. Thank you again for sending in your evals (even the 2 was very helpful) and your comments. The compliments were extremely nice to read, thank you.

UPDATED: Typed Gail’s name wrong AND forgot to link to her site.


SQL Server 2008 Management Studio Trick

January 22, 2009 at 12:57 pm (SQL Server 2008, SQLServerPedia Syndication, Tools)

I just had a Tremors moment. “Everybody knows about ’em Earl, we just never told you.” Except that no one I showed it to has ever seen it before. So maybe this is something a little new.

I had a database selected in the Object Explorer window and I had the Object Explorer Details window open. I noticed a little icon at the bottom of the screen:


Then I saw that the bar above it was a moveable bar. So I moved it and saw this:


Whoa! So then I tried a table, HumanResources.Department from AdventureWorks2008:


Which caused me to check a procedure:


Each line has a little icon on the side that lets you copy it, line by line. It’s really just a way to display the basic properties about the object in a readily accessible format. It’s not going to change my life, or yours, but it sure is handy to know a quick way to access some basic information.

