Stored Procedures Versus LINQ to SQL for Dynamic WHERE Clauses

I was recently discussing the best way to solve the “bad plan gets cached for super-complex WHERE clause with a large number of @column IS NULL OR T.Column = @column conjuncts” problem with colleagues at work. This is another variant of the age-old “stored procedures versus code-generated SQL” (including ORMs here) debate.

I consider myself a T-SQL expert but I am also a firm believer in the “right tool for the right job.” LINQ to SQL (or even old-fashioned, manually-generated SQL) is clearly superior to stored procedures for UI-driven, single SELECT queries with large numbers of WHERE and/or ORDER BY options. In fact, I favor code-based solutions over stored procedures unless performance dictates otherwise (which, in practice, is not very often). But for the specific case I’m discussing here, I don’t think it’s even arguable that the stored procedure is the right way to go.

To make this point, I’m going to provide examples of all four scenarios and compare them. The base query is simplified to make the examples easier to follow. Obviously, this problem only becomes intractable in T-SQL when you have more than 3 search criteria, two tables, and two sort options. If your requirements were very stable (yeah, right) and that’s all the variability you had to deal with, you could write a separate “standard” stored procedure for each combination of supplied criteria and sort order (with 3 optional criteria and 2 sort options, that’s already 2³ × 2 = 16 of them) and call the appropriate one from code based on the mix of non-null parameters passed (or embed them all in one procedure inside a bunch of IF statements). I’ve been the unfortunate victim (read “maintainer”) of this type of code, and I still curse the person who left me that procedure to this day. Requirements usually aren’t that stable, and by the time they are, you should have solved the problem another way already.

But I digress. Here’s the “standard” stored procedure in all its glory:

CREATE PROCEDURE StandardWhere 
	@LastName varchar(50) = null, 
	@FirstName varchar(50) = null,
	@PostingDt datetime = null,
	@OrderBy varchar(50) = null
AS
BEGIN
	SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount
	FROM AccountTransaction T (nolock)
	JOIN Account A (nolock) ON A.AccountNoId = T.AccountNoId
	WHERE (@LastName IS NULL OR A.LastName LIKE @LastName + '%')
	AND (@FirstName IS NULL OR A.FirstName LIKE @FirstName + '%')
	AND (@PostingDt IS NULL OR T.PostingDt = @PostingDt)
	-- One CASE expression per sort column so the varchar name columns and the datetime
	-- column are never mixed in a single CASE (which would force a type conversion error)
	ORDER BY
		CASE WHEN @OrderBy = 'LastName' THEN A.LastName END,
		CASE WHEN @OrderBy = 'FirstName' THEN A.FirstName END,
		CASE WHEN @OrderBy IS NULL OR @OrderBy NOT IN ('LastName', 'FirstName') THEN T.PostingDt END
END

The beauty of the standard stored procedure is that you get compile-time syntax checking and the stored procedure is only parsed once. But…you also get *very* bad query plans if the first call to the procedure passes parameter values that aren’t representative of the way the procedure is usually called: the plan compiled for those first sniffed values is cached and reused for every subsequent call, no matter how different its parameters are.

“Standard” stored procedures potentially have four advantages: minimizing round trips, reducing network traffic (even for a single call), compile-time syntax checking, and a single parse. Only the first two advantages survive once you introduce dynamic SQL (or use WITH RECOMPILE, which often fails to solve the problem anyway). If you only have one statement, round trips are no longer an issue, and, with modern networks, the fact that the entire SQL statement has to be sent over the wire instead of just the stored procedure name and the actual parameters is probably undetectable in all but the highest-volume situations.

Here’s the dynamic SQL version of the stored procedure (note that I’m not even going to bother discussing EXECUTE on a concatenated string, which completely sucks and should be avoided at all costs):

CREATE PROCEDURE DynamicWhere 
	@LastName varchar(50) = null,
	@FirstName varchar(50) = null,
	@PostingDt datetime = null,
	@OrderBy varchar(50) = null
AS
BEGIN
	DECLARE @where nvarchar(max)
	SELECT @where = '1 = 1'

	IF @LastName IS NOT NULL
		SELECT @where = @where + ' AND A.LastName LIKE @LastName + ''%'''

	IF @FirstName IS NOT NULL
		SELECT @where = @where + ' AND A.FirstName LIKE @FirstName + ''%'''

	IF @PostingDt IS NOT NULL
		SELECT @where = @where + ' AND T.PostingDt = @PostingDt'

	DECLARE @orderBySql nvarchar(max)
	SELECT @orderBySql = CASE
		WHEN @OrderBy = 'LastName' THEN 'A.LastName'
		WHEN @OrderBy = 'FirstName' THEN 'A.FirstName'
		ELSE 'T.PostingDt'
	END

	DECLARE @sql nvarchar(max)
	SELECT @sql = '
	SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount
	FROM AccountTransaction T (nolock)
	JOIN Account A (nolock) ON A.AccountNoId = T.AccountNoId
	WHERE ' + @where + '
	ORDER BY ' + @orderBySql

	exec sp_executesql @sql,  N'@LastName varchar(50), @FirstName varchar(50), @PostingDt datetime', 
		@LastName, @FirstName, @PostingDt
END

Even though I’ve formatted this to look nice, if you’ve ever worked with procedures like this, you realize they really suck to maintain and debug. All of the important SQL is in string variables and doesn’t get parsed and compiled until run-time. So when you compile this stored procedure, the only thing you know is that the T-SQL that generates the SQL which will actually run is itself valid. This is exactly the same problem you have with SQL dynamically generated on the client. If I’m doing standard programming work like fancy string manipulation, I’d much rather do it in C# or Java than T-SQL. Particularly since I really need to unit test the hell out of this to make sure valid SQL is generated for all the different combinations of parameters. Oh, and now I get Intellisense and Resharper, at least for the SQL-generating part of the code. If I’m using LINQ, I get them for everything, including the query itself! But I’m getting ahead of myself.

Before we move on to manually-generated dynamic SQL on the client, I want to address plan caching and SQL injection, because stored procedure zealots often bring these up. It turns out that neither is an issue with properly parameterized dynamic SQL, whether it’s generated in a stored procedure or off-server in C#, Java, or any other modern language. So don’t let these two canards deter you.
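To make “properly parameterized” concrete, here’s an illustrative sketch (the variable names are mine): the user-supplied value never becomes part of the SQL text, so it can’t inject anything, and because the text is identical from call to call, the server can reuse the cached plan.

// Vulnerable and cache-hostile: the value is spliced into the SQL text itself.
string bad = "SELECT * FROM Account WHERE LastName LIKE '" + lastName + "%'";

// Properly parameterized: the text is constant; the value travels separately as a parameter.
var cmd = new SqlCommand("SELECT * FROM Account WHERE LastName LIKE @LastName + '%'", cn);
cmd.Parameters.AddWithValue("@LastName", lastName);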

Now let’s see the manually-generated-in-C# version, which is just a translation of the T-SQL code above into C#. For demo purposes, the function is self-contained, but you could (and probably should) have a base class with common functions (like handling the ugly DBNull.Value); a sketch of one such helper follows the code.

public DataTable GetTransactions(string lastName, string firstName, DateTime? postingDt, string orderBy)
{
	string sql = "SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount " 
		+ "FROM AccountTransaction T (nolock) " 
		+ "JOIN Account A (nolock) ON A.AccountNoId = T.AccountNoId " 
		+ "WHERE 1 = 1 ";
	if (lastName != null) sql += " AND A.LastName LIKE @LastName + '%' ";
	if (firstName != null) sql += " AND A.FirstName LIKE @FirstName + '%' ";
	if (postingDt != null) sql += " AND T.PostingDt = @PostingDt ";
	sql += "ORDER BY ";
	sql += orderBy == "LastName"
		? "A.LastName "
		: orderBy == "FirstName"
			? "A.FirstName "
			: "T.PostingDt ";
	DataTable ret;
	using ( var cn = new SqlConnection(_connStr) )
	{
		var cmd = new SqlCommand(sql, cn);
		cmd.Parameters.AddWithValue("@LastName", (object) lastName ?? DBNull.Value);
		cmd.Parameters.AddWithValue("@FirstName", (object) firstName ?? DBNull.Value);
		cmd.Parameters.AddWithValue("@PostingDt", (object) postingDt ?? DBNull.Value);
		var adapter = new SqlDataAdapter(cmd);
		ret = new DataTable();
		adapter.Fill(ret);
	}
	return ret;
}
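As a sketch of the kind of base-class helper I have in mind (AddParameter is just an illustrative name), something like this keeps the DBNull noise out of each query method:

protected static void AddParameter(SqlCommand cmd, string name, object value)
{
	// Translate null (including an empty Nullable<T>) into DBNull.Value so ADO.NET sends a NULL parameter.
	cmd.Parameters.AddWithValue(name, value ?? DBNull.Value);
}

The three AddWithValue lines above would then collapse to AddParameter(cmd, "@LastName", lastName); and so on.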

Like I said above, if I’m not getting any advantages from working in T-SQL, I’d rather work in C#. But, from a language-neutral perspective, the code above is not clearly easier to maintain and debug than the corresponding T-SQL.

Here’s essentially the same code using LINQ to SQL and a couple of extension methods (sketched after the query):

using (new TransactionScope(TransactionScopeOption.Required,
	new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
	var query =
		from t in Context.AccountTransaction
		join a in Context.Account on t.AccountNoId equals a.AccountNoId
		select new { t.AccountNoId, a.LastName, a.FirstName, t.Postingdt, t.Billingamount };
	query = query.AddWhere(lastName, t => t.LastName.StartsWith(lastName));
	query = query.AddWhere(firstName, t => t.FirstName.StartsWith(firstName));
	query = query.AddWhere(postingDt, t => t.Postingdt == postingDt);
	query = query.OrderBy(orderBy);
	query.ToList();
}
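AddWhere and OrderBy(string) aren’t part of LINQ to SQL; they’re the small helpers I mentioned. Here’s a minimal sketch of what they might look like (the names and signatures are my assumptions, and the usual System, System.Linq, and System.Linq.Expressions namespaces are assumed): AddWhere applies a predicate only when the corresponding criterion was actually supplied, and OrderBy builds the key selector from a column name at run time.

public static class QueryExtensions
{
	// Apply the predicate only when the search criterion was supplied (null means "don't filter").
	public static IQueryable<T> AddWhere<T, TCriterion>(this IQueryable<T> query,
		TCriterion criterion, Expression<Func<T, bool>> predicate)
	{
		return criterion == null ? query : query.Where(predicate);
	}

	// Sort by a property named at run time by building the key-selector expression by hand.
	public static IQueryable<T> OrderBy<T>(this IQueryable<T> query, string propertyName)
	{
		var parameter = Expression.Parameter(typeof(T), "x");
		var property = Expression.Property(parameter, propertyName);
		var keySelector = Expression.Lambda(property, parameter);

		var orderByCall = Expression.Call(typeof(Queryable), "OrderBy",
			new[] { typeof(T), property.Type }, query.Expression, Expression.Quote(keySelector));

		return query.Provider.CreateQuery<T>(orderByCall);
	}
}

A real version would presumably validate the column name, fall back to a default sort, and support descending order, but this is enough to show the idea.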

I mentioned the benefits of Intellisense above. Since this is all “code”, I get to access the table and field names using Intellisense (and likewise, they are checked by the compiler/Resharper). This is a huge advantage in my book.

I prefer to use known (as opposed to anonymous) types, but I wanted to keep the sample code as self-contained as possible. I end up converting anything in a DataTable to a generic List of T anyway, so the fact that LINQ returns objects is a big bonus for me. If you like/want DataTables (you’re a masochist?), you could write an efficient reflection-based conversion routine.
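If you do want a DataTable back, a minimal sketch of that reflection-based conversion might look like this (ToDataTable is my name for it; a truly efficient version would cache compiled property getters instead of reflecting per row):

public static DataTable ToDataTable<T>(IEnumerable<T> items)
{
	var table = new DataTable(typeof(T).Name);
	var props = typeof(T).GetProperties();

	// One column per public property; unwrap Nullable<T> so the DataTable accepts the column type.
	foreach (var prop in props)
		table.Columns.Add(prop.Name, Nullable.GetUnderlyingType(prop.PropertyType) ?? prop.PropertyType);

	foreach (var item in items)
	{
		var row = table.NewRow();
		foreach (var prop in props)
			row[prop.Name] = prop.GetValue(item, null) ?? (object) DBNull.Value;
		table.Rows.Add(row);
	}
	return table;
}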

The SQL generated by LINQ to SQL is quite good and you can see what it’s generating very easily. Once you understand how LINQ implements OUTER JOINS, it’s really quite easy to start thinking in LINQ instead of SQL. If you are persuaded by this article to give LINQ a try, I highly recommend LINQPad which provides an iterative and interactive way to develop LINQ queries similar to how I develop SQL queries in SSMS.
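For reference, the outer-join idiom I’m alluding to is the join … into … DefaultIfEmpty() pattern, which LINQ to SQL translates into a LEFT OUTER JOIN. A quick sketch using the same tables as above:

var query =
	from a in Context.Account
	join t in Context.AccountTransaction on a.AccountNoId equals t.AccountNoId into txns
	from t in txns.DefaultIfEmpty()   // accounts with no transactions still produce a row
	select new { a.AccountNoId, a.LastName, PostingDt = (DateTime?) t.Postingdt };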


Expressions versus Delegates in LINQ to SQL Performance

Recently, I was working on a class that stored a delegate and later used that delegate to retrieve a LINQ to SQL object from either the database or an EntitySet, depending on the situation. Unfortunately, the performance for the first case was terrible — I believe it was returning the entire table and then applying the delegate to each row in memory.

Here’s the relevant code (original version):

public abstract Func<D, bool> GetDbKeyQueryFunction(E entity);
//...
D dbEntity;
if ( isRoot ) {
	dbEntity = context.GetTable<D>().SingleOrDefault(GetDbKeyQueryFunction(entity));
} else {
	dbEntity = DbParentSet.Invoke(dbParent).SingleOrDefault(GetDbKeyQueryFunction(entity));
}

Here’s a typical implementation of GetDbKeyQueryFunction:

public override Func<SomeTable, bool> GetDbKeyQueryFunction(SomeEntity entity) {
	return dbEntity => dbEntity.Id == entity.Id;
}

Now before I created this abstract class, I had specific code that used a (seemingly) identical lambda expression to retrieve a row from the database quite quickly:

int id = 11;
dataContext.SomeTable.SingleOrDefault(dbEntity => dbEntity.Id == id);

When I hovered over SingleOrDefault in the IDE, I noticed that there were two overloads, one that took an Expression<Func<T, bool>> and another that took a plain Func<T, bool>. On the surface, these don’t seem much different. In fact, the reason I ended up using Func in my abstract class was that Expression is abstract and I couldn’t just wrap my lambda expression in new Expression().
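To see why the distinction matters, the same lambda text becomes two very different things depending on the declared type (a sketch reusing the earlier example):

int id = 11;

// A compiled delegate: opaque IL that can only be executed in memory, row by row.
Func<SomeTable, bool> asDelegate = dbEntity => dbEntity.Id == id;

// An expression tree: data describing the predicate, which LINQ to SQL can translate into a WHERE clause.
Expression<Func<SomeTable, bool>> asExpression = dbEntity => dbEntity.Id == id;

// Binds to Queryable.SingleOrDefault and sends a parameterized WHERE to the server.
dataContext.SomeTable.SingleOrDefault(asExpression);

// Binds to Enumerable.SingleOrDefault: the whole table comes back and the filter runs in memory.
dataContext.SomeTable.SingleOrDefault(asDelegate);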

In practice, the difference is huge, as I mentioned above regarding the performance. LINQ to SQL can translate an expression tree into SQL, but a delegate is opaque compiled code, so the query falls back to LINQ to Objects and filters the whole table in memory. The tricky part is that all I had to do was change:

public abstract Func<D, bool> GetDbKeyQueryFunction(E entity);

to:

public abstract Expression<Func<D, bool>> GetDbKeyQueryFunction(E entity);

Surprisingly (to me, anyway), you don’t have to do anything in the implementing code other than modify the method signature to match. The body of the method remains unchanged, but like magic, the compiler is now creating an Expression instead of a Delegate for us.
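In other words, the implementation from earlier compiles as-is with only the return type changed:

public override Expression<Func<SomeTable, bool>> GetDbKeyQueryFunction(SomeEntity entity) {
	// Same body as before; the compiler now builds an expression tree instead of a delegate.
	return dbEntity => dbEntity.Id == entity.Id;
}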

Performance for the first case (querying the database) was back up to par. However, there was another minor hurdle to overcome: with the new signature, the second branch below no longer compiled.

public abstract Expression<Func<D, bool>> GetDbKeyQueryFunction(E entity);
//...
D dbEntity;
if ( isRoot ) {
	dbEntity = context.GetTable<D>().SingleOrDefault(GetDbKeyQueryFunction(entity));
} else {
	dbEntity = DbParentSet.Invoke(dbParent).SingleOrDefault(GetDbKeyQueryFunction(entity));
}

In the code above, DbParentSet.Invoke(dbParent) is returning an EntitySet instance. Interestingly enough, you cannot directly apply an Expression to an EntitySet, as I found documented here. However, you can compile an Expression, and in my scenario, the type of the value returned by Compile just happened to be Func (it all seems so obvious now, hehe). So here’s the final version of the code that works well for both scenarios:

public abstract Expression<Func<D, bool>> GetDbKeyQueryFunction(E entity);
//...
D dbEntity;
if ( isRoot ) {
	dbEntity = context.GetTable<D>().SingleOrDefault(GetDbKeyQueryFunction(entity));
} else {
	dbEntity = DbParentSet.Invoke(dbParent).SingleOrDefault(GetDbKeyQueryFunction(entity).Compile());
}

So remember, always favor Expressions over Delegates when using LINQ. If you have to have a Delegate, you can always use Compile to turn your Expression into a Delegate as needed. But you can’t turn a Delegate into an Expression (that I’m aware of).
