Unit Testing the Data Access Layer

By Tim Stall
Unit testing is becoming very popular among .NET projects, but unit testing the Data Access Layer (DAL) remains a common obstacle. The problem stems from the very nature of databases: they are slow, shared, persistent, external, and relational. This conflicts with the nature of unit tests, which should be quick, independent, and repeatable. Looking at the criteria for good unit tests (from Pragmatic Unit Testing in C# with NUnit, by Andrew Hunt and David Thomas), it's easy to see why. The table below lists the goals of unit tests on the left along with the problems that databases pose for unit testing on the right.
|Unit Testing Criteria|Problem with Database Unit Testing|
|---|---|
|Automatic|Testing the database first requires that the database has been set up, so we need ways to automatically create the schema and load the base data.|
|Thorough|Thorough testing should cover objects like stored procedures and functions. However, unlike business entities that exist in the code and are directly referenced with Intellisense support, data objects are only referenced indirectly through data access code.|
|Repeatable|A database is designed for persistent storage, yet unit tests should have no persistent side-effects. Therefore you need to ensure that data from one test doesn't interfere with data from another test.|
|Independent|Unlike business entities, databases are not instantiated; there is typically just one database shared by the entire application. Furthermore, even a simple stored procedure can be intertwined with other objects, such as custom functions or lookups from other tables. The database is therefore a shared, interdependent set of objects, making it harder to write independent tests.|
|Professional|Professional tests must perform well, and databases are slow. This is especially problematic when repeatedly running hundreds of database tests.|
While mocking out the data layer can be useful when testing business objects, there is still tremendous value in having the real Data Access Layer thoroughly tested. For example, stored procedures can contain complicated search logic, or database functions can perform logic that must be done at the database level.
This article will show how to resolve these problems so that you can reliably unit test your Data Access Layer. First we need to establish two prerequisites:
- Run both large and small SQL scripts, and
- Ensure that a base-line schema is set up before running any of the tests.
Using these techniques, we can test database objects like stored procedures, functions, triggers, and the DAL that uses them. Finally, we'll explain how to add diagnostic tests in NUnit to ensure that a developer's environment is set up correctly for unit tests, and how to handle configuration values like changing the database name or user connection information.
(This article assumes that the reader is familiar with unit testing and already has a data-access utility class to run stored procedures and execute SQL statements, such as Microsoft's Data Access Application Block (DAAB). For more information on unit testing refer to Muthukamar's article Test Driven Development Using NUnit in C#; for information on Microsoft's DAAB, see Examining the Data Access Application Block or Working with the Enterprise Library's Data Access Application Block.)
In a nutshell, testing the database can be broken down into the following three steps:
- Set the state of the database to a "before" condition (this may include changes to both the schema and data)
- Execute a data object to be tested
- Check that the actual "after" state matches the expected state.
For example, suppose you have a stored procedure `EMPLOYEE_INSERT` that inserts an employee record and returns the newly-generated primary key for use in the application object's `ID` field. In order to adequately test this functionality, you'll need a data-access utility class that can access the database with at least stored procedure and SQL calls. This is standard, and already covered by many other articles.
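To make the three steps concrete, here is a minimal, self-contained sketch of an `EMPLOYEE_INSERT`-style test. `FakeEmployeeStore` is a hypothetical stand-in for the database purely so the example runs anywhere; in the real test, step 1 would run a SQL setup script, and steps 2 and 3 would call the DAL and query the database.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the database so this example is self-contained. A real test
// would instead run SQL scripts and call the DAL's insert method, which is
// backed by the EMPLOYEE_INSERT stored procedure.
public class FakeEmployeeStore
{
    private readonly List<string> _rows = new List<string>();

    // Returns the newly-generated "primary key", like EMPLOYEE_INSERT does.
    public int Insert(string name) { _rows.Add(name); return _rows.Count; }
    public int Count() { return _rows.Count; }
    public void Clear() { _rows.Clear(); }
}

public static class EmployeeInsertTest
{
    public static void Run(FakeEmployeeStore db)
    {
        db.Clear();                          // 1. set the "before" state
        int before = db.Count();

        int newId = db.Insert("Tim Stall");  // 2. execute the object under test

        int after = db.Count();              // 3. verify the "after" state
        if (newId <= 0 || after != before + 1)
            throw new Exception("Insert did not add exactly one row with a valid key.");
    }
}
```

The same before/execute/verify shape applies regardless of whether the object under test is a stored procedure, function, or trigger.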
The other prerequisites are that you have the ability to run SQL scripts, and reset the database schema. Running scripts allows us to both set the before state, as well as verify the after state. Resetting the database schema ensures that all the data objects that we'll call actually exist.
This article includes a downloadable code sample. The sample has three projects:
- `ApplicationDAL`: A sample DAL for an application. This also includes a `DataHelper` utility class with a method `ExecuteSql` to run SQL statements. Normally the `DataHelper` would be refactored into another assembly to encourage reuse, but we put it in this assembly for simplicity.
- `UnitTests`: A console application that contains the test-driving code used by NUnit. This also contains its own `App.Config`.
- `TestUtilities`: A class library that contains the prerequisite methods discussed above. This is used exclusively by the `UnitTests` project.

Installing the sample requires creating the database referenced in `App.Config`, and creating the database user from your connection string in `App.Config`. Note that the `DiagnoseEnvironment` tests are designed to fail if the database does not exist (which initially it does not), so you have two options for installing the database:

- Just run the tests twice. The `DiagnoseEnvironment` tests will fail the first time, and then pass from then on, or
- If you want all tests to pass the first time, manually run the create script.
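The idea behind a `DiagnoseEnvironment`-style test can be sketched as follows. The class and member names here are illustrative; the real check would try to open a connection and select from a known table, while `CanConnect` is a stubbed probe so the example runs without a database.

```csharp
using System;

// Sketch of an environment-diagnostic test: fail fast, with an actionable
// message, when a developer's machine is not set up for the database tests.
public static class DiagnoseEnvironment
{
    // Stub probe (assumed); the real version would open a SqlConnection
    // using the App.Config connection string.
    public static Func<bool> CanConnect = () => true;

    public static void AssertDatabaseExists()
    {
        if (!CanConnect())
            throw new InvalidOperationException(
                "Test database not found. Run the create script, or run the test suite twice.");
    }
}
```

Putting the fix instructions in the failure message saves other developers from digging through documentation when their environment is broken.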
Prerequisite: Run SQL Scripts
There are two types of SQL scripts that we'll need to call: short CRUD (Create, Read, Update, Delete) statements like `Select count(*) from Employee`, and large batch scripts. Batch scripts could contain hundreds of short CRUD statements, or actual object-creation statements, and are used to set the schema and data to a "before" state.

The short CRUD statements are easy to execute: we can simply run them through the `DataHelper` class's `ExecuteSql` method.
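A minimal sketch of that idea follows. The real `DataHelper` in the sample wraps ADO.NET (a `SqlConnection`/`SqlCommand` built from the `App.Config` connection string); here the database call is stubbed out to a recording list, purely an assumption for illustration, so the example is self-contained.

```csharp
using System;
using System.Collections.Generic;

// Sketch of the sample's DataHelper. The real class executes the statement
// against SQL Server; this stub records it instead so the example runs anywhere.
public static class DataHelper
{
    // Stub: records each statement instead of executing it.
    public static readonly List<string> ExecutedSql = new List<string>();

    public static void ExecuteSql(string sql)
    {
        if (string.IsNullOrEmpty(sql))
            throw new ArgumentException("sql must be a non-empty statement");
        // Real implementation (assumed):
        //   using (var cn = new SqlConnection(connectionString))
        //   using (var cmd = new SqlCommand(sql, cn)) { cn.Open(); cmd.ExecuteNonQuery(); }
        ExecutedSql.Add(sql);
    }
}
```

A test would then call, for example, `DataHelper.ExecuteSql("Delete from Employee")` to reset state before exercising a stored procedure.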
While this is great for simple DML (Data Manipulation Language), it has some shortcomings:
- The SQL is hard-coded instead of being in an external text file, making it hard to maintain and update.
- This approach can't handle batch separators like `GO`, which are used by many standard SQL tools (such as the Create Scripts of Enterprise Manager).
- We can't just copy and run the SQL script in a database tool because the SQL is intertwined with non-SQL code, like the surrounding C# class and method calls.

The solution is a `RunScript()` method that accepts a script path relative to a configured root directory (such as `Folder1\MyScript1.sql`). That method gets the Root Script Directory and database connection info from the `App.Config` file, assembles the OSQL command line, and then runs the physical script file with OSQL in a hidden console window.
To implement this, let's start at the bottom. Running a script with OSQL requires, at minimum, the server, username, password, and absolute script file path. For example, to run a batch script against the database server `localhost` with the password `pw1`, we'd invoke OSQL from the command line, passing the server with `-S`, the username with `-U`, the password with `-P`, and the script file with `-i`.
When you examine the code available at the end of this article, you'll find that these connection parameters are stored in `App.Config`. A method in the `TestUtilities` project retrieves these settings, storing them in a `SqlAdmin` structure. This structure is then handed off to `CreateOsqlArguments()` to create the OSQL arguments for the command line.
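A sketch of the `SqlAdmin` structure and `CreateOsqlArguments()` is below. The field names and exact argument format are assumptions, but `-S`, `-U`, `-P`, and `-i` are OSQL's actual switches for server, user, password, and input file.

```csharp
using System;

// Holds the connection settings read from App.Config (fields assumed).
public struct SqlAdmin
{
    public string Server;   // e.g. "localhost"
    public string User;
    public string Password;
}

public static class OsqlHelper
{
    // Builds the OSQL command-line arguments. -n suppresses numbering and
    // prompts in the output; -b aborts the batch when an error occurs.
    public static string CreateOsqlArguments(SqlAdmin admin, string absoluteScriptPath)
    {
        return string.Format("-S {0} -U {1} -P {2} -i \"{3}\" -n -b",
            admin.Server, admin.User, admin.Password, absoluteScriptPath);
    }
}
```

Quoting the script path means paths containing spaces still reach OSQL as a single argument.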
The script file path is the only remaining parameter. While OSQL needs the absolute path, it is more convenient for testing purposes that our `RunScript()` method take a relative path. Therefore we can have a public and a private `RunScript()` method. The private method takes the absolute path, while the public one takes just the relative path, combines it with the root directory (stored in `App.Config`) to create the full absolute path, and then passes that full path to the private method.
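The public/private pair described above can be sketched as follows. The root directory value is an assumption (it would really be read from `App.Config`), and the private method is stubbed to record the resolved path rather than launch OSQL, so the example is self-contained.

```csharp
using System;

public static class ScriptRunner
{
    public static string RootScriptDirectory = @"C:\Scripts"; // assumed; from App.Config
    public static string LastAbsolutePath;                    // stub: captures what would run

    // Public: takes a path relative to the root script directory.
    public static void RunScript(string relativePath)
    {
        // Plain concatenation keeps the example deterministic; the real code
        // could equally use Path.Combine, since OSQL only runs on Windows.
        string absolute = RootScriptDirectory.TrimEnd('\\') + "\\" + relativePath;
        RunScriptAbsolute(absolute);
    }

    // Private: takes the absolute path and, in the real code, shells out to
    // OSQL (e.g. via RunHiddenConsole with the generated arguments).
    private static void RunScriptAbsolute(string absolutePath)
    {
        LastAbsolutePath = absolutePath;
    }
}
```

Tests can then refer to scripts by short relative names like `Folder1\MyScript1.sql` while the plumbing handles the rest.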
The `RunHiddenConsole()` method wraps `System.Diagnostics.Process` to run OSQL and wait for it to exit. The implementation behind this method is outside the scope of this article, but is discussed further in my blog entry on running external processes with `System.Diagnostics`.
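While the article defers the full implementation to the blog entry, one plausible minimal version of `RunHiddenConsole()` (method shape assumed) looks like this:

```csharp
using System;
using System.Diagnostics;

public static class ConsoleRunner
{
    // Starts the process with no visible window, waits for it to finish,
    // and returns the exit code so callers can detect script failures.
    public static int RunHiddenConsole(string fileName, string arguments)
    {
        var startInfo = new ProcessStartInfo(fileName, arguments)
        {
            CreateNoWindow = true,   // hide the console window
            UseShellExecute = false  // run the executable directly
        };
        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();   // block until OSQL finishes the script
            return process.ExitCode;
        }
    }
}
```

Checking the returned exit code (OSQL returns non-zero on failure when run with `-b`) lets the test utilities surface broken scripts immediately.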
Now that we've seen how to run an arbitrary set of SQL commands, we're ready to examine how to use this functionality to reset the database schema to an acceptable "before" state. We'll tackle this in Part 2.