MSDN Blog on Data Management in VSTS:DB
Another excellent post over on Barclay Hill’s blog. This time he’s showing how to use pre- and post-deployment scripts to manage data when a deployment might result in data loss. I’m jazzed for part 2 of this series, to see what I should have been doing all along.
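I haven’t seen part 2 yet, so this is just my guess at the pattern, with made-up table and column names: stash the at-risk data in a pre-deployment script, let the deployment change the schema, then move the data back in a post-deployment script.

    /* Pre-deployment sketch: preserve the column that's about to change.
       dbo.Customer and PhoneNumber are made-up names for illustration. */
    IF OBJECT_ID(N'dbo.Customer_Stash', N'U') IS NOT NULL
        DROP TABLE dbo.Customer_Stash;

    SELECT CustomerID, PhoneNumber
    INTO   dbo.Customer_Stash
    FROM   dbo.Customer;
    GO

    /* Post-deployment sketch: move the data into the new structure, then clean up. */
    UPDATE c
    SET    c.Phone = s.PhoneNumber
    FROM   dbo.Customer AS c
           JOIN dbo.Customer_Stash AS s
               ON s.CustomerID = c.CustomerID;

    DROP TABLE dbo.Customer_Stash;
    GO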
Finally, the GDR is Released
From the Data Dude himself.
This is great news. I’ve been working with the CTPs for several months now, telling the other DBAs on my team that they had to wait until it was completely ready for release. It’s been a long wait, but I’m sure it’s worth it.
My congratulations to the team. I met several of you at the PASS Summit. I really appreciate the work you’ve put into this great tool. It really makes a difference in how we develop and deploy databases. The changes in the GDR are making a great utility even better. Thanks, Mr. Drapers. Thanks also to Jamie Laflen, especially for helping validate some of the ideas I presented at PASS. Thanks to all the rest of the team, whose names I don’t recall, especially the guy who took so long to explain to me what the “literal” was for on the reference page. Excellent work everyone. You guys should be proud.
VSTSDBE RC0 Post 3
Second round of testing. Instead of associating with a project, I tried creating a reference to a .dbschema file. Same error. This time, I’m going to clear everything out, since I had initially created it on top of the code from CTP 17.
So, another chance to try out the reverse-engineering process of “Import Database Schema.” Works great. New AdventureWorks database inside Visual Studio with a tested deployment faster than you can spit.
Created a new server project. Added a login, just to give it something to do. It deployed fine.
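For the record, the login script was nothing fancy; something along these lines, with a placeholder name and password:

    -- The one object in the server project, just to give the deployment something to do.
    -- The login name and password here are placeholders.
    CREATE LOGIN [SampleAppLogin]
        WITH PASSWORD = N'ChangeMe!42',
             CHECK_POLICY = ON;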
On to the compound project. Let’s see what happens. Just to see, I did a build and deploy before I added references or objects or anything. It worked great. Whatever that means. I’m taking it a step at a time, to try to see where it breaks, if it breaks. So, I’ve added just the AdventureWorks database project (not the .dbschema file). Trying a build & deploy now. Build went through. No issues. Deploying… Damn. I’m hitting a collation error. However, that’s good because I never got that far before… It worked! On to adding the server project as a second reference and then deploying that… Whoops, there it is. Failure. Remove, reset, and redeploy without the server project.
Time for some more experimentation…
It looks like I was getting an error in the server project. I rearranged a few things and now I have the whole thing working again.
There is joy in Mudville!
VSTS Database Edition GDR CTP 16 II
I just finished setting up a new automated build process including TFS! I’m loving this new version. There’s so much more for me to learn and I’ve got to get it all in before PASS in November.
I was reading how they’ve changed the differential mechanism. It sounds like a huge improvement. Time for more testing.
Benefits of a Tear Down & Rebuild in a Database
Using the Team Edition of Visual Studio for Databases (VSDB) enables you to build a database out of source control. This means you can treat your database like code, at least as much as possible. The fact is, because of persistence, a database just isn’t code. Because you have to keep all the data previously entered, when you deploy a new version of your database to production, you don’t simply get to replace the database the way you do with the code. You have to run scripts that alter whatever can be altered while preserving the existing data everywhere. That’s just how it is in production. You have to do the work necessary to protect your data.
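To make that concrete, adding a NOT NULL column in production has to arrive as an ALTER that protects the rows already sitting in the table, something like this (table and column names made up):

    -- Incremental, data-preserving change of the kind a production deployment runs.
    -- The default constraint fills the new column for every existing row.
    ALTER TABLE dbo.Orders
        ADD OrderSource varchar(20) NOT NULL
            CONSTRAINT DF_Orders_OrderSource DEFAULT ('Web');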
Not so in Development. Development (and QA, Test, Financial Test, Performance Test) is the place where you can, for a time, treat your database like code. Do a full tear down and replace. I’m pushing this because I’m working with two extremes these days. Most of our new projects have been using tear down and rebuild from day one. Because of this, we know everything there is to know about the objects inside the database. There are no integrity issues, old code, missing keys, etc. We know this because every time we deploy, we get to rebuild the database. I also have a few older databases. These have not been rebuilt from scratch, ever. Ever.
I’ve just started going through and rebuilding one of them for the first time. There are broken stored procedures everywhere. These are procedures that got deployed to production years ago. Over time, the structures they referenced were changed or even dropped entirely, yet the procedures were left in place. Because they had already been stored by SQL Server and they’re not being referenced by application code (which gets replaced completely by a deployment, after all), they’re just sitting out there, waiting for the unwary. There are missing constraints on tables, found during data loads. There are all kinds of problems, exactly the sort that get identified quickly and easily when the database is rebuilt with each deployment.
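If you’re facing the same kind of cleanup and the database is on SQL Server 2008, a query along these lines (my own sketch, not something VSDB generates) will list the modules that reference objects that no longer resolve:

    -- Procedures, functions, views, etc. that reference something that can't be resolved.
    -- Cross-server and cross-database references are excluded; they resolve at run time.
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
           OBJECT_NAME(d.referencing_id)        AS referencing_object,
           d.referenced_entity_name             AS unresolved_reference
    FROM   sys.sql_expression_dependencies AS d
    WHERE  d.referenced_id IS NULL
      AND  d.referenced_server_name IS NULL
      AND  d.referenced_database_name IS NULL
    ORDER BY referencing_schema, referencing_object;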
It’s just reinforcing for me how useful tools like VSDB can become.
VSTS 2008 Database Edition GDR: Still More First Impressions
Following up on my adventures in creating multi-environment deployment processes with the new version of Data Dude (DBPro, VSTS Database Edition, whatever we’re calling it this week).
I’ve created a new configuration, copying all the settings from the Debug configuration. I’m adding a new Deployment configuration file and making a change. The deploy worked. Woo hoo! Now to get really funky. I’ll create a new “Sql command variables file” and add a variable for setting the data directory. Now to deploy and… Urk! Failed. It’s not recognizing my variable.
Now I’m stuck. I’ve checked the syntax. It’s right. I double checked it all and reran deploy. Now it works… Color me confused. Whatever. Successful test. Time to create another configuration, simulating a QA server… Got that working too. I don’t know what went south yesterday, but it’s all working now. Now to quantify it all and begin to create some standards.
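For anyone trying the same thing, here’s roughly how the variable gets consumed once it’s wired up. DataPath is a name I made up; the value comes from whichever variables file the selected configuration points at, and the sketch assumes a SECONDARY filegroup already exists. As I understand it, $(DatabaseName) comes predefined from the deployment engine.

    -- Sketch of SQLCMD variable usage inside a project script. $(DataPath) is my
    -- made-up variable for the data directory; the filegroup is assumed to exist.
    ALTER DATABASE [$(DatabaseName)]
        ADD FILE
        (
            NAME = N'UserData02',
            FILENAME = N'$(DataPath)\UserData02.ndf',
            SIZE = 100MB
        )
        TO FILEGROUP [SECONDARY];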
I’m also going to check this all into source control to see how the connection information is stored. In 2005/2008 (not GDR), the connection info was stored in the .user file. We finally started checking our .user files in with the rest of the project, to the abject horror of every MS developer that heard about it. More on this tomorrow.
Microsoft Connect and DBPro Enhancement
I’ve mentioned it before and I think it’s worth mentioning again: Microsoft Connect really works. I’ve seen bugs and enhancements listed there receive enough attention that they made it into the next release or service pack of the product in question.
That brings me to DBPro. We use DBPro for all our database development. It’s a great tool. However, it’s still a bit too geared toward the individual user and not the team, despite its moniker (Visual Studio Team Edition for Database Professionals). One thing that really does bother me is how it stores some settings, such as Target Connection, in the .user file within a project. This means that each individual sets the connection for the project each time they check it out after another user has had it. Another option is to actually check in the .user file with the project, checking it out and overwriting your own .user file each time you need to work on the project. This apparently gives the MS guys fits, but it’s what we’re doing and it works. However, I’d like to see these project-level settings stored with the project itself. So I’ve put forward a Connect enhancement request. Here’s the link. Please go and vote for it, and rate it high. The more votes we can get on this, the more likely it’ll be in the next SP or the next release. Thanks.