SQL Server Table Design Rules of Thumb

These are not meant to be comprehensive (and don’t get into advanced normalization or intentional denormalization), but they will produce a decent table design without too much thought in most situations. A case can be made for natural primary keys, but going the surrogate route is easier and works well too. A worked example tying the rules together follows the list.

  1. Always have a primary key. Typically a surrogate key using an identity column and not necessarily clustered.
  2. Almost always have at least one unique constraint if you used a surrogate primary key.
    • consider making this/one of these the clustered index as they may be the most common way rows are selected
    • these will frequently save you from bugs/errors that would result in duplicate data
    • name and abbreviation in a state table are a good example — only one state should have the abbreviation TX and only one should have the name Texas
  3. Always include create date and created by columns. If rows can be updated, include updated date and updated by columns as well.
  4. Consider effective and expiration date columns in place of “active” flags.
  5. Consider a timestamp column (the type now called rowversion; not datetime) if multiple simultaneous edits are possible.
  6. Avoid nullable columns (they breed ISNULL/COALESCE calls everywhere); use defaults instead, e.g. '' (empty string) for varchar columns, 0 or -1 for ints, etc. A large number of necessarily nullable columns suggests the table may need to be split.
  7. Avoid nullable foreign keys; they result in outer joins.
  8. Consider indexes on any foreign keys.
  9. Do not store compound values in one column.
  10. If you know in advance how your table will be queried and none of the indexes created from rules 1, 2, or 8 meet requirements, consider creating indexes on those columns if they are selective (have a wide distribution of values).
  11. You can break any rule except number 1 with good reason after very careful consideration.
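
To make these concrete, here is a minimal sketch of a pair of tables that follows the rules above. All table, column, and constraint names are hypothetical, and it uses the timestamp keyword of the era (the type now called rowversion):

CREATE TABLE dbo.State (
	StateId int IDENTITY(1,1) NOT NULL,
	Abbreviation char(2) NOT NULL,
	Name varchar(50) NOT NULL,
	EffectiveDt datetime NOT NULL CONSTRAINT DF_State_EffectiveDt DEFAULT (GETDATE()),     -- rule 4
	ExpirationDt datetime NOT NULL CONSTRAINT DF_State_ExpirationDt DEFAULT ('99991231'),  -- rule 4
	CreateDt datetime NOT NULL CONSTRAINT DF_State_CreateDt DEFAULT (GETDATE()),           -- rule 3
	CreatedBy varchar(50) NOT NULL CONSTRAINT DF_State_CreatedBy DEFAULT (SUSER_SNAME()),  -- rule 3
	RowVer timestamp NOT NULL,                                                             -- rule 5
	CONSTRAINT PK_State PRIMARY KEY NONCLUSTERED (StateId),            -- rule 1: surrogate key
	CONSTRAINT UQ_State_Abbreviation UNIQUE CLUSTERED (Abbreviation),  -- rule 2: only one TX
	CONSTRAINT UQ_State_Name UNIQUE (Name)                             -- rule 2: only one Texas
)
GO

CREATE TABLE dbo.Address (
	AddressId int IDENTITY(1,1) NOT NULL,
	Line1 varchar(100) NOT NULL,
	Line2 varchar(100) NOT NULL CONSTRAINT DF_Address_Line2 DEFAULT (''),  -- rule 6: default, not NULL
	StateId int NOT NULL,                                                  -- rule 7: non-nullable FK
	CreateDt datetime NOT NULL CONSTRAINT DF_Address_CreateDt DEFAULT (GETDATE()),
	CreatedBy varchar(50) NOT NULL CONSTRAINT DF_Address_CreatedBy DEFAULT (SUSER_SNAME()),
	CONSTRAINT PK_Address PRIMARY KEY CLUSTERED (AddressId),
	CONSTRAINT FK_Address_State FOREIGN KEY (StateId) REFERENCES dbo.State (StateId)
)
GO

CREATE INDEX IX_Address_StateId ON dbo.Address (StateId)  -- rule 8: index the foreign key
GO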

Stored Procedures Versus LINQ to SQL for Dynamic WHERE Clauses

I was recently discussing the best way to solve the “bad plan gets cached for super-complex WHERE clause with a large number of @column IS NULL OR T.Column = @column conjuncts” problem with colleagues at work. This is another variant of the age-old “stored procedures versus code-generated SQL” (including ORMs here) debate.

I consider myself a T-SQL expert but I am also a firm believer in the “right tool for the right job.” LINQ to SQL (or even old-fashioned, manually-generated SQL) is clearly superior to stored procedures for UI-driven, single SELECT queries with large numbers of WHERE and/or ORDER BY options. In fact, I favor code-based solutions over stored procedures unless performance dictates otherwise (which, in practice, is not very often). But for the specific case I’m discussing here, I don’t think the stored procedure is even arguably the right way to go.

To make this point, I’m going to provide examples of all four scenarios and compare them. The base query is simplified to make the examples easier to follow. Obviously, this problem only becomes intractable in T-SQL when you have more than three search criteria, two tables, and two sort options. If your requirements were very stable (yeah, right) and that’s all the variability you had to deal with, you could write (3 x 2 =) 6 different “standard” stored procedures and call the appropriate one from code based on the mix of non-null parameters passed (or embed them all in one procedure within a bunch of IF statements). I’ve been the unfortunate victim (read “maintainer”) of this type of code, and I still curse the person that left me that procedure to this day. Requirements aren’t usually that stable, and by the time they are, you should have solved the problem another way already.

But I digress. Here’s the “standard” stored procedure in all its glory:

CREATE PROCEDURE StandardWhere 
	@LastName varchar(50) = null, 
	@FirstName varchar(50) = null,
	@PostingDt datetime = null,
	@OrderBy varchar(50) = null
AS
BEGIN
	SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount
	FROM AccountTransaction T WITH (NOLOCK)
	JOIN Account A WITH (NOLOCK) ON A.AccountNoId = T.AccountNoId
	WHERE (@LastName IS NULL OR A.LastName LIKE @LastName + '%')
	AND (@FirstName IS NULL OR A.FirstName LIKE @FirstName + '%')
	AND (@PostingDt IS NULL OR T.PostingDt = @PostingDt)
	-- One CASE per sort column: a single CASE mixing varchar and datetime
	-- branches would force a type conversion and blow up at run-time.
	ORDER BY 
		CASE WHEN @OrderBy = 'LastName' THEN A.LastName END,
		CASE WHEN @OrderBy = 'FirstName' THEN A.FirstName END,
		CASE WHEN @OrderBy IS NULL OR @OrderBy NOT IN ('LastName', 'FirstName') THEN T.PostingDt END
END

The beauty of the standard stored procedure is that you get compile-time syntax checking and the stored procedure is only parsed once. But…you also get *very* bad query plans if the first call to the procedure includes actual parameters that aren’t representative of the way the procedure is usually called.
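
A hypothetical pair of calls makes the trap concrete. If the first call after the plan has been flushed from cache is the first line below, a plan suited to a date-only search gets cached and is then reused for the name search that follows, which a different plan would serve far better:

EXEC StandardWhere @PostingDt = '20090101'  -- compiles and caches a plan for a date-only search
EXEC StandardWhere @LastName = 'Smith'      -- reuses that cached plan, however poor the fit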

“Standard” stored procedures potentially have four advantages: minimizing round trips, reducing network traffic (even for a single call), compile-time syntax checking, and a single parse. Only the first two are maintained when you introduce dynamic SQL (or use WITH RECOMPILE, which often fails to solve the problem anyway). If you only have one statement, round trips are no longer an issue and, with modern networks, the fact that the entire SQL statement has to be sent over the wire versus just the stored procedure name and the actual parameters is probably undetectable in all but the highest-volume situations.

Here’s the dynamic SQL version of the stored procedure (note that I’m not even going to bother discussing the use of EXECUTE, which completely sucks and should be avoided at all costs):

CREATE PROCEDURE DynamicWhere 
	@LastName varchar(50) = null,
	@FirstName varchar(50) = null,
	@PostingDt datetime = null,
	@OrderBy varchar(50) = null
AS
BEGIN
	DECLARE @where nvarchar(max)
	SELECT @where = '1 = 1'

	IF @LastName IS NOT NULL
		SELECT @where = @where + ' AND A.LastName LIKE @LastName + ''%'''

	IF @FirstName IS NOT NULL
		SELECT @where = @where + ' AND A.FirstName LIKE @FirstName + ''%'''

	IF @PostingDt IS NOT NULL
		SELECT @where = @where + ' AND T.PostingDt = @PostingDt'

	-- @OrderBy is resolved to a known column name here, so raw user input
	-- is never concatenated into the statement.
	DECLARE @orderBySql nvarchar(max)
	SELECT @orderBySql = CASE
		WHEN @OrderBy = 'LastName' THEN 'A.LastName'
		WHEN @OrderBy = 'FirstName' THEN 'A.FirstName'
		ELSE 'T.PostingDt'
	END

	DECLARE @sql nvarchar(max)
	SELECT @sql = '
	SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount
	FROM AccountTransaction T WITH (NOLOCK)
	JOIN Account A WITH (NOLOCK) ON A.AccountNoId = T.AccountNoId
	WHERE ' + @where + '
	ORDER BY ' + @orderBySql

	-- The search values still travel as real parameters; only the shape of
	-- the statement was built as a string.
	EXEC sp_executesql @sql, N'@LastName varchar(50), @FirstName varchar(50), @PostingDt datetime', 
		@LastName, @FirstName, @PostingDt
END

Even though I’ve formatted this to look nice, if you’ve ever worked with procedures like this you realize they really suck to maintain and debug. All of the important SQL is actually in string variables and doesn’t get parsed and compiled until run-time. So when you compile this stored procedure, the only thing you know is that the T-SQL that generates the SQL that actually runs is valid. This is exactly the same problem you have with SQL dynamically generated on the client. If I’m doing standard programming work like fancy string manipulation, I’d much rather do it in C# or Java than T-SQL. Particularly since I really need to unit test the hell out of this to make sure valid SQL is generated for all the different combinations of parameters. Oh, and now I get Intellisense and Resharper, at least for the SQL-generating part of the code. If I’m using LINQ, I get them for everything, including the query itself! But I’m getting ahead of myself.

Before we move on to manually generated dynamic SQL on the client, I want to address plan caching and SQL injection, because stored procedure zealots often bring these up. It turns out that neither of these is an issue with properly parameterized dynamic SQL, whether it’s generated in a stored procedure or off-server in C#, Java, or any other modern language. So don’t let these two canards deter you.
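
To see why, compare the two forms below (a sketch against the Account table from these examples). In the parameterized form, the search value never becomes part of the statement text, so it can neither inject SQL nor flood the cache with one plan per distinct literal:

-- Parameterized: one cached plan, and the value cannot alter the statement
EXEC sp_executesql 
	N'SELECT A.AccountNoId, A.LastName FROM Account A WHERE A.LastName LIKE @LastName + ''%''',
	N'@LastName varchar(50)',
	@LastName = 'Smith'

-- Concatenated: a new plan per distinct literal and a wide-open injection hole
-- EXEC ('SELECT A.AccountNoId, A.LastName FROM Account A WHERE A.LastName LIKE ''Smith%''')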

Now let’s see the version manually generated in C#, which is just a translation of the T-SQL code above. For demo purposes, the function is self-contained, but you could (and probably should) have a base class with common functions (like handling the ugly DBNull.Value).

public DataTable GetTransactions(string lastName, string firstName, DateTime? postingDt, string orderBy)
{
	string sql = "SELECT T.AccountNoId, A.LastName, A.FirstName, T.PostingDt, T.BillingAmount " 
		+ "FROM AccountTransaction T (nolock) " 
		+ "JOIN Account A (nolock) ON A.AccountNoId = T.AccountNoId " 
		+ "WHERE 1 = 1 ";
	if (lastName != null) sql += " AND A.LastName LIKE @LastName + '%' ";
	if (firstName != null) sql += " AND A.FirstName LIKE @FirstName + '%' ";
	if (postingDt != null) sql += " AND T.PostingDt = @PostingDt ";
	sql += "ORDER BY ";
	sql += orderBy == "LastName"
		? "A.LastName "
		: orderBy == "FirstName"
			? "A.FirstName "
			: "T.PostingDt ";
	DataTable ret;
	using ( var cn = new SqlConnection(_connStr) )
	using ( var cmd = new SqlCommand(sql, cn) )
	using ( var adapter = new SqlDataAdapter(cmd) )
	{
		cmd.Parameters.AddWithValue("@LastName", (object) lastName ?? DBNull.Value);
		cmd.Parameters.AddWithValue("@FirstName", (object) firstName ?? DBNull.Value);
		cmd.Parameters.AddWithValue("@PostingDt", (object) postingDt ?? DBNull.Value);
		ret = new DataTable();
		adapter.Fill(ret); // Fill opens and closes the connection itself
	}
	return ret;
}

Like I said above, if I’m not getting any advantages from working in T-SQL, I’d rather work in C#. But, from a language-neutral perspective, the code above is not clearly easier to maintain and debug than the corresponding T-SQL.

Here’s essentially the same code using LINQ to SQL and a couple of extension methods:

using (new TransactionScope(TransactionScopeOption.Required,
	new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
{
	var query =
		from t in Context.AccountTransaction
		join a in Context.Account on t.AccountNoId equals a.AccountNoId
		select new { t.AccountNoId, a.LastName, a.FirstName, t.Postingdt, t.Billingamount };
	query = query.AddWhere(lastName, t => t.LastName.StartsWith(lastName));
	query = query.AddWhere(firstName, t => t.FirstName.StartsWith(firstName));
	query = query.AddWhere(postingDt, t => t.Postingdt == postingDt);
	query = query.OrderBy(orderBy);
	var transactions = query.ToList();
}

I mentioned the benefits of Intellisense above. Since this is all “code”, I get to access the table and field names using Intellisense (and likewise, they are checked by the compiler/Resharper). This is a huge advantage in my book.

I prefer to use known (as opposed to anonymous) types, but I wanted to keep the sample code as self-contained as possible. I end up converting anything in a DataTable to a generic List of T anyway, so the fact that LINQ returns objects is a big bonus for me. If you like/want DataTables (you’re a masochist?), you could write an efficient reflection-based conversion routine.

The SQL generated by LINQ to SQL is quite good, and you can see what it’s generating very easily. Once you understand how LINQ expresses OUTER JOINs, it’s really quite easy to start thinking in LINQ instead of SQL. If you are persuaded by this article to give LINQ a try, I highly recommend LINQPad, which provides an iterative and interactive way to develop LINQ queries similar to how I develop SQL queries in SSMS.
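
For example, with lastName supplied and orderBy set to LastName, the generated statement looks roughly like the sketch below (illustrative of the shape LINQ to SQL produces, not verbatim output):

SELECT [t0].[AccountNoId], [t1].[LastName], [t1].[FirstName], [t0].[PostingDt], [t0].[BillingAmount]
FROM [AccountTransaction] AS [t0]
INNER JOIN [Account] AS [t1] ON [t0].[AccountNoId] = [t1].[AccountNoId]
WHERE [t1].[LastName] LIKE @p0
ORDER BY [t1].[LastName]
-- @p0 is sent as 'Smith%', so the query is fully parameterized, just like the hand-built versions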
