Because I do. I’m working on a set of tests for an article comparing TOP, MAX & ROW_NUMBER. I have a simple data structure and I need a bunch of data in order to create my tests. I wanted that data to be distributed a certain way, to mimic some production system behavior I’ve seen in the past. Last night I got it all set by mucking about with the seed values of the various columns to get it just right, loading up millions of rows in only a few minutes, and doing this all on my laptop. Great tool!
One little potential performance problem that we’ve seen comes out of how nHibernate generates its parameterized queries. It limits the length of any parameter to the length of the column, but if the length of the value passed in is less than the column’s, it uses that smaller length when declaring the variable. This results in a query that looks like this:
exec sp_executesql N'INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)',N'@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(8)',@p0=N'Ted Cool',@p1=N'abc123',@p2=N'email@example.com',@p3='2008-04-29 14:10:44:000',@p4=N'ted_cool'
Note the parameter @p4, which is the parameter mapped to the primary key of the little sample table. In this query it’s declared as nvarchar(8) because ‘ted_cool’ is composed of eight characters. But if we change it to ‘ted_coolish’:
exec sp_executesql N'INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)',N'@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(11)',@p0=N'Ted Cool',@p1=N'abc123',@p2=N'firstname.lastname@example.org',@p3='2008-04-29 14:13:30:000',@p4=N'ted_coolish'
Now that same parameter is declared as nvarchar(11). So if we look at the procedure cache to see what’s inside for this query:
SELECT p.objtype, p.size_in_bytes, t.text
FROM sys.dm_exec_cached_plans AS p
OUTER APPLY sys.dm_exec_sql_text(p.plan_handle) AS t
WHERE t.text LIKE '%INSERT INTO dbo.users%'
We get the following results (I ran the test a few times, so I have more than two rows):
Prepared 40960 (@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(11))INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)
Prepared 40960 (@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(10))INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)
Prepared 40960 (@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(12))INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)
Prepared 40960 (@p0 nvarchar(8),@p1 nvarchar(6),@p2 nvarchar(12),@p3 datetime,@p4 nvarchar(8))INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)
For what should have been a single plan, I have four (or more) clogging up the cache and causing unnecessary compiles, etc. I’m not completely enamored with what this tool is going to do to (or for) me as a DBA.
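For comparison, here’s a hand-written sketch (not nHibernate output) of the same insert with every parameter declared at its column’s full length, taken from the table definition. Declared this way, ‘ted_cool’ and ‘ted_coolish’ produce an identical parameter list and would reuse one cached plan:

```sql
-- Sketch: parameters declared at the columns' full lengths (nvarchar(40),
-- nvarchar(20), etc., per the users table DDL), so the parameter list,
-- and therefore the cached plan, stays the same regardless of value length.
exec sp_executesql
    N'INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId)
      VALUES (@p0, @p1, @p2, @p3, @p4)',
    N'@p0 nvarchar(40),@p1 nvarchar(20),@p2 nvarchar(40),@p3 datetime,@p4 nvarchar(20)',
    @p0 = N'Ted Cool', @p1 = N'abc123', @p2 = N'email@example.com',
    @p3 = '2008-04-29 14:10:44:000', @p4 = N'ted_coolish'
```

Run twice with the two different LogonId values, the procedure cache would show a single prepared plan with a usecount of two instead of two separate entries.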
I’ve been using Red Gate products for years and I’ve turned into a bit of a cheerleader for them. I can’t help it. They have good products.
One of the tools that I’ve been using for a while (truth be told, since before it was owned by Red Gate) is SQL Prompt. A new version, 3.8, has recently been released, and I’ve been using it for a few days now. It really is an improvement over the previous version. I haven’t done system measurements or anything, but it feels faster. It’s picked up the schemas from the systems I normally access quite well. It failed when I put it against the SCOM data mart, but so did the prior version. There are quite a few cosmetic enhancements. Some of the pop-ups look a bit cleaner to my eye, but I’m not sure what they’ve changed. I was looking through the options and noticed direct support for compound keys. That may not have been added in 3.8, but it’s relatively new. Most everything else is familiar. The functionality is sweet and goes well beyond what’s available in SQL Server 2008 (although that’s pretty nice too). The startup time seems to be a bit faster.
The big new thing comes with the Pro version: Layout. It looks like they took the layout function straight out of SQL Refactor (as a side note, I loved Refactor, but this is most of what I used it for, so it might fall off my machine). The options window is pretty different, so maybe this is new code. Either way, it’s great functionality. With Prompt catching most of the keywords and formatting them correctly, a lot of the time only laying out the code is left. A quick couple of keystrokes and it walks through getting everything into the right place. Having all your code formatted the same way makes it much easier to read for coding, troubleshooting and general maintenance. This ease increases your productivity, even if only a little bit.
Another cool thing I found was that you can hover over a table or a column and a menu appears. Clicking on the objects in the menu shows a definition for that table. Pretty slick stuff.
I’m sure there’s more to it (here are the release notes), but that’s what I’ve found so far. Another win from Red Gate.
Building a community is hard. I’m one of the leaders of the Southern New England SQL Server Users Group, SNESSUG, a PASS affiliate. We’ve been going for just over a year. We were launched by the drive and initiative of one person, who got us going and then had to leave. We’ve been on our own for about four months now. It’s been difficult without the drive of the founder, but we’re getting the job done & moving things forward. Andy Leonard, MVP, Solid Quality Mentor, nice guy (for a DBA), has a series of posts that, while short, are very helpful guide-posts for those attempting to start a community or keep one running (read them bottom to top). If you’re in that position or thinking about volunteering at your local community group (and if you haven’t volunteered, do it, they need the help), this is worth a read (or two).
I did a little bit, and I mean a little bit, of looking through the documentation on nHibernate and located a spot for the schema, actually a couple of spots. It can be added to the Hibernate Mapping definition, which will make it a default for all classes within the definition, and by extension all the tables in the database you connect to. You can also add it to the class definition, specifying a particular schema for a given table. So now the query looks like this:
exec sp_executesql N'INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)',N'@p0 nvarchar(9),@p1 nvarchar(6),@p2 nvarchar(13),@p3 datetime,@p4 nvarchar(9)',@p0=N'Jane Cool',@p1=N'abc123',@p2=N'email@example.com',@p3='2008-04-25 11:11:48:000',@p4=N'jane_cool'
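For reference, the two spots look something like this in the .hbm.xml mapping file. The schema attribute is standard NHibernate mapping syntax; the class, assembly and property names here are my assumptions based on the Quick Start sample, not taken from it verbatim:

```xml
<!-- schema="dbo" on hibernate-mapping sets the default for every class
     in this file; schema="dbo" on a class overrides it for that table.
     Class/assembly/property names below are assumptions for illustration. -->
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2" schema="dbo">
  <class name="QuickStart.User, QuickStart" table="users" schema="dbo">
    <id name="Id" column="LogonId" type="String" length="20">
      <generator class="assigned" />
    </id>
    <property name="Name" column="Name" type="String" length="40" />
  </class>
</hibernate-mapping>
```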
On to the data length problem.
If I’m going to have to support it, I want to understand it. So I got going yesterday, installing nHibernate 2.0 and walking through the Quick Start Guide. My C# is a bit rusty, to say the least, but I managed to squeak by. The Guide is meant for an older version of nHibernate, and there have been a few changes made that affect the code displayed. That means I had to do more than simply type up what was presented to me. Which was actually good, because it forced me to do a bit more learning in order to get everything to work.
What I found was interesting. I can see why developers like this. It really does let you treat the database as just another object to program against. More than that, with pretty minimal work (some of which could be eliminated by code generation), you don’t have to think about databases at all. It’s slick, no denying it. More good news: the TSQL queries it generates are very consistent and appropriately formatted parameterized queries. This gives me a great deal of comfort that we’ll get consistent results with this product in place. I also didn’t find it terribly chatty, meaning I didn’t see lots of extraneous calls made to the database. That’s all the good news. Now for the bad. Here’s a sample of the code:
exec sp_executesql N'SELECT user0_.LogonId as LogonId0_0_, user0_.Name as Name0_0_, user0_.Password as Password0_0_, user0_.EmailAddress as EmailAdd4_0_0_, user0_.LastLogon as LastLogon0_0_ FROM users user0_ WHERE user0_.LogonId=@p0',N'@p0 nvarchar(8)',@p0=N'joe_cool'
Yes, that’s hard to read. Sorry, but that’s how it comes out, so that’s part of what you’ll be living with. First off (usually a minor nit, but it can be a serious problem), the generated code from the quick start guide did not provide a schema for the table, ‘users’. In tiny little procs like this, that’s neither a performance hit nor is it likely to cause expensive recompiles. If the tool generates larger, more complex queries, that is potentially an issue. Second, and much more important, here’s the DDL to create the ‘users’ table.
CREATE TABLE [dbo].[users](
[LogonID] [nvarchar](20) NOT NULL DEFAULT ('0'),
[Name] [nvarchar](40) NULL DEFAULT (NULL),
[Password] [nvarchar](20) NULL DEFAULT (NULL),
[EmailAddress] [nvarchar](40) NULL DEFAULT (NULL),
[LastLogon] [datetime] NULL DEFAULT (NULL),
PRIMARY KEY CLUSTERED
(
[LogonID] ASC
) ON [PRIMARY]
) ON [PRIMARY]
Nothing fancy. But can you spot the discrepancy? The clustered primary key is on an NVARCHAR(20) column. The code that calls it declares each parameter not at the size of the column, but at the size of the data supplied, NVARCHAR(8) in this case. A query like this could require implicit data conversions and cause indexes not to be used. That isn’t a problem here, but it’s something I’m going to be keeping a close eye on.
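As an aside, the classic version of the index problem (a hypothetical here, since LogonID in this table really is nvarchar) shows up when a Unicode parameter is compared against a varchar column:

```sql
-- Hypothetical illustration: if LogonID were varchar(20) rather than
-- nvarchar(20), the Unicode parameter below would force an implicit
-- conversion on the column side of the comparison, which can turn a
-- clustered index seek into a scan.
exec sp_executesql
    N'SELECT Name FROM dbo.users WHERE LogonId = @p0',
    N'@p0 nvarchar(20)',
    @p0 = N'joe_cool'
```

Checking the actual execution plan for a CONVERT_IMPLICIT on the column is the quickest way to spot this.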
Ultimately, it looks like this is something I can support. I’m going to spend more time with it to see if I can work out these minor kinks and to introduce joins and more complex data structures to see how well it deals with things. This should make things interesting for a while. I still wonder how we’re going to deal with this in production.
Steve Jones’ editorial today questioned the use of LINQ. His focus was on the security aspects, but the discussion went past that. This is a bit of a circular reference, since I posted over there and linked back to my ORM Concerns post below. There are a lot of interesting points being made. Some of it, from a DBA standpoint, is quite repetitive, although I’d prefer to think of it as reinforcing. Steve’s editorial is worth a read and the discussion is excellent.
We’ve been getting a lot of alerts saying “Performance Module could not find a performance counter.” It’s been making me nuts. I finally did a search on the web and found an entry over on the Operations Manager Product Team blog. It’s a known issue with SP1. I’m posting this not because I’m complaining. I’m not. SP1 solved several issues that I had and I’m very happy to have installed it. Unfortunately, it introduced problems. Just not as many as it solved.
I’ve said it before and I’ll say it again, Microsoft Connect really works. There has been a campaign going throughout the SQL Server community to get a new service pack out for SQL Server 2005. Microsoft had, at the very least, suggested that they were not going to release any more service packs for 2005. That’s all changed now. I saw the announcement in Kalen Delaney’s editorial from SQL Server Magazine UPDATE. That’s worth a read all on its own.
And consider this, 704 votes made this happen. That’s 704 people who took the time & trouble to click their mouse a few times. If that’s all it takes to influence Redmond, then you should all be over there using Connect as often as possible to get the fixes, changes or improvements into the SQL Server product that you want or need most.