Thoughts on ORM

April 9, 2010 at 8:00 am (nHibernate, Object Relational Mapping, SQLServerPedia Syndication)

I've posted before about issues I've had, both with the behavior of nHibernate and with the behavior of teams using nHibernate, but I don't think I've made my overall thoughts on ORM clear. Let me do that now.

I think some form of ORM is here to stay. There are lots of different ORM tools out there, and acceptance of them is absolutely growing. Further, it should grow. Developing software is hard, and if a tool lets you reduce the overall amount of code you have to write, I'm in favor of it. I'm not convinced that the current crop of tools is quite as good as it ought to be, but most of them seem very flexible, which should mean that implementing them can be, overall, beneficial to your project.

That's all to the good. The problem is, and I don't know if this is intentional marketing, poor understanding or just a general lack of ability, these tools are being hyped, or are perceived to be hyped, as a way to completely ignore and hide the unfortunate fact that there's this dirty, tainted, completely un-object-oriented relational data engine persisting our information (or storing our data, if you will). Somehow, the idea persists that with an ORM tool you can, and should, completely ignore the very existence of the database. Don't believe me? Read through this excellent post by Daniel Auger. This guy is not in the enemy camp when it comes to ORM tools. He's the very epitome of an ORM booster. But read that post. Understand what it says. You need to take into account that you have a database, not a persistence layer, and that it is storing data, not information, in a relational data engine, not an object model. If you don't, your ORM project will fail.

That's all I'm after. I'm not advocating for the elimination of ORM tools. I think that's silly. I see their benefit (conceptually; in my own experience to date, I haven't seen any actual benefit at all). I'm in favor of them. Let me say that again, just so we're clear: I am in favor of implementing ORM tools. But I think that if you're implementing an ORM tool and there's no database developer or DBA involved with your project… you're looking at trouble. Remember what ORM stands for: Object Relational Mapping. Relational is still a piece of the puzzle. Pretending otherwise doesn't make the problem go away, it exacerbates it.

As an aside for those who are still reading, I wrote this whole thing after being inspired by reading Mr. Auger’s great post. That’s a developer I’d love to work with and learn from.

Permalink 2 Comments

Object Database Editorial

June 17, 2009 at 7:57 am (nHibernate, Object Relational Mapping, SQL Server 2005, SQL Server 2008)

I never used to read editorials. Not in emails, magazines, newspapers, whatever. Now I make a point of always reading them. You can learn as much from an editorial as you can from the technical articles within, sometimes more. Tony Davis has just posted a guest editorial over at SQL Server Central. Tony is normally the editor at Simple-Talk, where he also writes interesting editorials. This one is not to be missed. It makes a very clear and concise case for why object databases have a fundamental flaw for most business needs (not all, not always, but a pretty hefty majority). It's worth a read.

Permalink 1 Comment

Sign Me Up!

November 25, 2008 at 3:02 pm (nHibernate, Object Relational Mapping, PASS, SCOM, SQL Server 2005, SQL Server 2008, Tools, TSQL, Visual Studio)

I am joining the battle. It’s the Battle Against Lawless Database Design (BALD-D or baldy). Because, after all, enough is enough.

I encourage you, too, to join the battle. Cross your arms, join the battle cry! Enough is enough!

Permalink 2 Comments

Great Article on DBA-Developer Conflict

November 3, 2008 at 10:12 am (nHibernate, Object Relational Mapping, TSQL)

What a great way to phrase the issue. I love the concept of the people-people impedance mismatch. We're going through it pretty regularly where I work. Our developers are convinced that by using an ORM tool, in this case nHibernate, they're eliminating all the problems with the database, because they're taking complete control of the database through nHibernate. All code will be on their side of the fence, no more messy stored procedures. All data structures will look like their objects, no more having to figure out those silly JOINs. Best of all, by setting all this up, no more messing with those stupid and obnoxious DBAs.

Unfortunately, they're still planning on object persistence (don't call it data storage) inside of a SQL Server database… Um, guys, you haven't eliminated a single problem. You're storing your data in a less efficient manner and you're using lowest common denominator TSQL to access it… Performance is going to be a BIG issue. Scalability will also be a serious problem. All the problems that have been encountered before are still there, plus new ones. They seriously believe that not worrying about the idea of set-based operations will just make the issues around data sets go away.

I understand why they want to do what they're doing. I support the concept. Making development more efficient is a good and worthy goal. My problem isn't with developers or their needs. It is hard to learn two different languages, TSQL and C# or VB, and understanding set-based operations is a pain in the bottom. I really don't have any issues with the business and its needs. We need to create more flexible applications that respond faster to changing business requirements. My problem is with the notion that by ignoring the database's functionality it will suddenly, somehow, function better… It's nuts.
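To make the set-based point concrete, here's a trivial, entirely hypothetical sketch (the table and columns are made up, not from any of our systems): the same change expressed row by row, the way object-at-a-time persistence encourages you to think, and then as a single set operation.

-- Row-by-row: one statement, one round trip, one tiny transaction per order.
DECLARE @id int;
SET @id = 1;
WHILE @id <= 10000
BEGIN
    UPDATE dbo.Orders SET ShipDate = GETDATE() WHERE OrderId = @id;
    SET @id = @id + 1;
END;

-- Set-based: the same work in a single statement the optimizer can plan once.
UPDATE dbo.Orders SET ShipDate = GETDATE() WHERE OrderId <= 10000;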

Permalink 4 Comments

Buggy Whips

May 21, 2008 at 7:40 am (nHibernate, Object Relational Mapping)

I just spent two days learning about project management and the Feature Driven Development methodology from Jeff De Luca. He's a fascinating and informative guy. He's actually going to be running a project and mentoring a bunch of people where I work. These are going to be interesting times. I expect to learn a lot.

Why buggy whips? What the heck do they have to do with FDD? Nothing, directly. A big part of FDD is the development of business models. These models can, and usually do, directly correlate to objects/classes in code. Because of this, object-oriented methods are not an inherent part of FDD, but they are certainly easily automated and used by those designing and developing systems in FDD. Buggy whips? I'm getting to it. Mr. De Luca has spent a lot of time working in object-oriented languages, primarily Java, and working with lots of object-oriented development tools. Identifying automation methodologies to assist in development, with or without these objects, is an inherent part of FDD and of any intelligent developer's approach. Buggy whips? Hang on. One major area of automation is around what the object-oriented developer thinks of as the persistence layer. Others might refer to it as a database. Mr. De Luca very clearly stressed that writing TSQL and designing data storage were, for him and true adherents to object-oriented approaches, a thing of the past. Sure, data warehouses and operational data stores were still necessary for historical storage and ad hoc reporting, but the days of designing a database alongside the application were over.

Buggy whips. A long time ago, when I was just getting started in IT, I was working with desktop publishing software. I knew a bunch of people that did hot & cold typesetting and other types of traditional publishing. They were all convinced that desktop publishing was a niche or a flash in the pan. But someone I knew back then pointed out that these guys were manufacturing buggy whips as the Model T drove by. In other words, they were about to be out of a job and had better wake up and smell the coffee.

Buggy whips. I've worked as a developer and a DBA and finally landed in the somewhat odd position of being a full-time development DBA. That means I spend a healthy chunk of every day thinking about database design, writing better TSQL code, and trying to train developers in the same. I'm sort of wondering if I just saw a Model T drive by?

I was thinking about this and composing this post when I read Steve Jones' editorial this morning. DBAs are becoming a more in-demand skill set. Of course, that's the generic DBA. It doesn't specify whether that's someone to design a BI system, a warehouse, run your backups, set up your DR plan, or help tune queries and design tables that aren't coming out of an ORM tool.

Buggy whips. I think I agree with Steve that the constant growth of data means more and more demand for people to manage it, but now I'm wondering if that's just the management side and not the development and design side. I'm wondering if I need to look into moving back into development if I want to do more than simply manage systems. I'm thinking maybe I need to spend more time learning BI. I'm wondering if anyone else saw that Model T drive by?

Permalink 1 Comment

Easy Fix To Problem #1

April 25, 2008 at 4:42 pm (nHibernate, Object Relational Mapping, Tools)

I did a little bit, and I mean a little bit, of looking through the documentation on nHibernate and located a spot for the schema, actually a couple of spots. It can be added to the Hibernate Mapping definition, which makes it the default for all classes within that definition, and by extension all the tables in the database you connect to. You can also add it to the class definition, specifying a particular schema for a given table. So now the query looks like this:

exec sp_executesql N'INSERT INTO dbo.users (Name, Password, EmailAddress, LastLogon, LogonId) VALUES (@p0, @p1, @p2, @p3, @p4)',N'@p0 nvarchar(9),@p1 nvarchar(6),@p2 nvarchar(13),@p3 datetime,@p4 nvarchar(9)',@p0=N'Jane Cool',@p1=N'abc123',@p2=N'jane@cool.com',@p3='2008-04-25 11:11:48:000',@p4=N'jane_cool'

On to the data length problem.

Permalink Leave a Comment

nHibernate First Impressions

April 25, 2008 at 11:37 am (Object Relational Mapping, Tools)

If I'm going to have to support it, I want to understand it. So I got going yesterday, installing nHibernate 2.0 and walking through the Quick Start Guide. My C# is a bit rusty, to say the least, but I managed to squeak by. The Guide is meant for an older version of nHibernate, and there have been a few changes made that affect the code displayed. That means I had to do more than simply type up what was presented to me, which was actually good because it forced me to do a bit more learning in order to get everything to work.

What I found was interesting. I can see why developers like this. It really does let you treat the database as just another object to program against. More than that, with pretty minimal work (some of which could be eliminated by code generation), you don't have to think about databases at all. It's slick, no denying it. More good news: the TSQL queries it generates are consistent, appropriately formatted, parameterized queries. This gives me a great deal of comfort that we'll get consistent results with this product in place. I also didn't find it terribly chatty, meaning I didn't see lots of extraneous calls made to the database. That's all the good news. Now for the bad. Here's a sample of the code:

exec sp_executesql N'SELECT user0_.LogonId as LogonId0_0_, user0_.Name as Name0_0_, user0_.Password as Password0_0_, user0_.EmailAddress as EmailAdd4_0_0_, user0_.LastLogon as LastLogon0_0_ FROM users user0_ WHERE user0_.LogonId=@p0',N'@p0 nvarchar(8)',@p0=N'joe_cool'

Yes, that's hard to read. Sorry, but that's how it comes out, so that's part of what you'll be living with. First off, and this is usually a minor nit but can be a serious problem, the generated code from the Quick Start Guide did not provide a schema for the table, 'users'. In tiny little queries like this, that's neither a performance hit nor is it likely to cause expensive recompiles. If the tool generates larger, more complex queries, that is potentially an issue. Second, and much more important, here's the DDL to create the 'users' table:

CREATE TABLE [dbo].[users](
        [LogonID] [nvarchar](20) NOT NULL DEFAULT ('0'),
        [Name] [nvarchar](40) NULL DEFAULT (NULL),
        [Password] [nvarchar](20) NULL DEFAULT (NULL),
        [EmailAddress] [nvarchar](40) NULL DEFAULT (NULL),
        [LastLogon] [datetime] NULL DEFAULT (NULL),
PRIMARY KEY CLUSTERED
(
        [LogonID] ASC
) ON [PRIMARY]
) ON [PRIMARY]

Nothing fancy. But can you spot the discrepancy? The clustered primary key is on an NVARCHAR(20) column. The code that calls it sizes each parameter not to the size of the column, but to the length of the data supplied, NVARCHAR(8) in this case. That very simple query could end up with implicit data conversions and cause the indexes not to be used. That isn't a problem here, but it's something I'm going to be keeping a close eye on.
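Since I plan to keep that eye on it, here's a minimal sketch of the kind of check I'd run from SSMS, assuming the quick start dbo.users table above (these statements are my own, mimicking the generated pattern, not captured output): capture the actual plan for the statement with the parameter sized the way nHibernate sends it, and again with it sized to the column, then compare the plans for a CONVERT_IMPLICIT or a clustered index scan where a seek was expected.

SET STATISTICS XML ON;

EXEC sp_executesql
    N'SELECT LogonId, Name FROM dbo.users WHERE LogonId = @p0',
    N'@p0 nvarchar(8)', @p0 = N'joe_cool';   -- sized to the value, the way nHibernate sends it

EXEC sp_executesql
    N'SELECT LogonId, Name FROM dbo.users WHERE LogonId = @p0',
    N'@p0 nvarchar(20)', @p0 = N'joe_cool';  -- sized to the column definition, for comparison

SET STATISTICS XML OFF;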

Ultimately, it looks like this is something I can support. I’m going to spend more time with it to see if I can work out these minor kinks and to introduce joins and more complex data structures to see how well it deals with things. This should make things interesting for a while. I still wonder how we’re going to deal with this in production.

Permalink 1 Comment

LINQ and (by extension) ORM Discussion

April 22, 2008 at 3:03 pm (Object Relational Mapping)

Steve Jones' editorial today questioned the use of LINQ. His focus was on the security aspects, but the discussion went past that. This is a bit of a circular reference, since I posted over there and linked back to my ORM Concerns post below. There are a lot of interesting points being made. Some of it, from a DBA standpoint, is quite repetitive, although I'd prefer to think of it as reinforcing. Steve's editorial is worth a read and the discussion is excellent.

Permalink Leave a Comment

ORM Concerns

April 17, 2008 at 11:49 am (Object Relational Mapping)

Object Relational Mapping (ORM) software is a great idea. You can't deny that the mismatch between objects and relational data has to be dealt with. Instead of all the time, money and effort being spent there, why not get a tool that does most of the work for you? But… one direction this can lead is toward dumb databases. After all, if putting a piece of software between the object and the db makes things easier, how much easier would it be if the db and the object looked exactly the same? Ta-da! Even less code to write and maintain. Unfortunately, TANSTAAFL (There Ain't No Such Thing As A Free Lunch) still applies. What you save in initial coding you will pay for in reporting, data cleanup, integrity issues, data integration issues… Anyway, I've been researching this since, as I mentioned before, my company is looking to implement ORM and the architects in charge of the project are really excited by the idea of making the database a reflection of the object. Here is the list of concerns and potential issues that I've come up with regarding ORM. Any comments or suggestions around this would be useful. In no particular order:

  • I/O increase due to "chattier" applications
    Most documentation indicates a lot more, and smaller, transactions, not to mention the possibility of frequent requests to verify structure (the app checking to see if the database has changed) prior to generating and running queries.
    Mitigation is to ensure appropriate configuration and use of nHibernate. Monitoring can be done with Profiler.
  • I/O increase due to loading larger data sets more frequently
    Ensure the use of "lazy" collections to reduce data moved.
    No other mitigation possible.
  • General performance issues due to "generic" procedures using less efficient access methods
    Generic data types used in queries can lead to indexes not being used.
  • Zero possibility to tune queries in a production setting
    Any and all changes require re-coding and re-deployment. There are no methods available for database-only tuning except applying an index or forcing parameterization.
  • Reporting
    This includes transactional-level reporting as well as moving data between a model-driven design and a more flexible design (normalized or star schema or whatever) that better supports reporting. Coding time reduced on the front end is, to some degree, tacked on to the back end.
    A data cleansing mechanism may be required.
    Redundancy of data and authoritative sources for data may require some refinement.
  • Data integrity
    Generated structures are dependent on discovery to determine the proper constraints required on the data, or all constraints are assumed to be in the code.
    Without integrity maintained with the data, the possibility of "dirty" data is increased ("USA", "U.S.A.", "US" all entered as values through the app); see the sketch after this list.
  • Security
    We will have to give the application full read/write privileges at the table level. I think it's possible we'd have to give it 'dbo' in production. Based on a few statements in some of the research, it's even possible we'd have to give it 'sa' (although that is completely unproven currently).
  • Integration with other systems at the data level
    Depending on the application, this may not be required. But if it is required at any point, it will entail a larger than normal effort to convert the data to a more normalized structure.
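To make the data integrity concern concrete, here's a minimal, entirely hypothetical sketch (these tables are made up for illustration, not part of our system): when the rules live only in application code, nothing in the database stops "USA", "U.S.A." and "US" from piling up in the same column, whereas a lookup table and a foreign key close that hole no matter which application writes the row.

-- Reference table holding the one acceptable spelling per country.
CREATE TABLE dbo.Country
(
    CountryCode char(3) NOT NULL PRIMARY KEY  -- e.g. 'USA'
);

-- Any row written here must reference a known country code.
CREATE TABLE dbo.CustomerAddress
(
    AddressId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Street nvarchar(100) NOT NULL,
    CountryCode char(3) NOT NULL
        CONSTRAINT FK_CustomerAddress_Country
        REFERENCES dbo.Country (CountryCode)
);

-- An INSERT with 'US' or 'U.S.A.' now fails outright instead of silently accumulating as a variant spelling.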

Permalink 4 Comments

nHibernate Database Benefits and Costs?

April 11, 2008 at 12:38 pm (Object Relational Mapping, SQL Server 2005, SQL Server 2008)

I posted this question over at SQL Server Central; just like my last post, I'm also posting it here. I need some help. I've been trying to research this and I can't find good, hard facts. Any help would be deeply appreciated.

It looks like we might be facing a large project shifting over to using ORM methods through nHibernate. I'm trying to get a read from the database community on what exactly I should expect in terms of issues, challenges and headaches during the development process. I'm also interested in any long-term maintenance issues, troubleshooting problems, etc. If your developers implemented ORM all the way down to storing object data in the database in a non-normalized/object-oriented fashion, how did that affect you? Did it muck up reporting? What benefits did you realize on the database side of the house? I'm really as interested in benefits as I am in costs.

I'm really looking for real-world, hands-on information. Complaints or speculation about how stupid a lowest common denominator set of dynamic queries might be… well, I've got that complaint well in hand. I need as much hard data as I can collect so that I can communicate enough information to my boss, his boss and his boss in order for them to make informed decisions about this and go into it with their eyes fully pinned open.

Permalink 4 Comments