
My Own Fluent Argument Validation Library
The last couple of days I had some spare time. What does a workaholic do with spare time? Exactly: he builds his own library. :-)
Download: The CuttingEdge.Conditions library and source code can be downloaded from CodePlex.com. Visit the homepage at conditions.codeplex.com or go directly to the releases tab.
Warning: This post is based on a pre-release of CuttingEdge.Conditions. While most of the concepts and behavior of the library are the same, the final release has some changes, most notably the removal of the extension method behavior of the Requires() and Ensures() methods. Please note that the following syntax isn’t supported anymore: c.Requires().IsNotNull(). Instead the proposed syntax is Condition.Requires(c).IsNotNull(). Please keep that in mind while reading this article.
Recently, I got very inspired by Fredrik Normén’s and Roger Alsing’s blogs. They discussed a fluent way of writing the validation of method preconditions. It all started here and Roger came up with a specification here. A couple of weeks later Roger even came up with his own little framework containing a couple of things, one of them being his Fluent Argument Validation Specification.
By now it should be clear that the thing I’m currently building isn’t exactly my own idea. Roger and Fredrik deserve a lot of credit here.
Why Am I Building My Own?
Like I said, I was very inspired by Roger’s idea. I even downloaded and browsed through his code. However, I saw a couple of things I didn’t like, the main one being the use of a class for the main type (Roger’s Validation class) instead of a struct. My main argument for using structs was performance: structs have one advantage over classes in that they don’t allocate memory on the heap. I already commented about this on Roger’s blog, but I also observed that using structs instead of classes was actually slower, despite the memory advantage. Still, I found the idea of no extra memory allocations more pleasing, believing that the performance penalty of using structs would soon vanish.
Now, a couple of days, 1900 lines of code, 1700 lines of comments, 37 different validations over 70 different method overloads, and almost 450 unit tests later, I have to admit that the actual reason to start my own library isn’t valid anymore. So, I too am using classes now, but more on this later.
The Requirements
Before I actually wrote all this, I tried to figure out what I wanted to build and, especially, which requirements it should meet. So I grabbed a pen and a piece of paper and started writing down some requirements. This is what I came up with:
- The API must differentiate between pre- and postcondition checks.
- The API must be as intuitive and easy to use as possible.
- The API must use the same terminology as Spec# does.
- The API should not throw unexpected exceptions.
- The API must be extendable.
- The library must have good performance.
- The library must be standalone.
- Each method should have correct (xml) documentation.
- Each method should have supporting unit tests.
I’ll try to explain why I think these requirements are important and describe how I intend to achieve them.
1. The API must differentiate between pre- and postcondition checks
I think this makes sense. Violation of a precondition is always caused by the method’s caller, and an ArgumentException should always be thrown in that circumstance (the Framework Design Guidelines are very clear about this). The violation of a postcondition, however, has a purely internal cause; it can be considered a bug. Throwing an ArgumentException in that case would clearly confuse the user.
Because of this difference, I wanted the user to be able to explicitly state whether he is doing a precondition check or a postcondition check. All precondition checks should throw an ArgumentException or one of its descendants. For a postcondition violation there is actually no suitable exception in the .NET framework. So I defined my own and gave it the obvious name: PostconditionException.
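In its simplest form, such an exception type could look something like this (a sketch only; the library’s actual PostconditionException may differ):
using System;

// Sketch only; the library's actual PostconditionException may look different.
public class PostconditionException : Exception
{
    public PostconditionException() : base("Postcondition failed.") { }

    public PostconditionException(string message) : base(message) { }
}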
2. The API must be as intuitive and easy to use as possible
To make the API as easy and intuitive as possible, I defined the following characteristics that should hold:
- The library should use extension methods to allow fluent code.
This is of course pretty obvious. When you look at Roger’s specification, you’ll understand that this is the easiest way to get code that reads much more naturally.
- The API must have as few entry point methods as possible that enable the user to validate.
When we do not require this and allow the validation methods to show up on every single type of object, we will be faced with two problems. Firstly, all validation methods will always show up as instance methods on every type in the IntelliSense list, even when the user doesn’t want to validate. Secondly, a lot of methods will show up that can’t be used for validation when the user actually does want to validate something. I believe this would be so annoying that it would eventually prevent programmers from using such a library.
- IntelliSense should only show methods the user can actually use, within a given context.
To make the API easy to use, we shouldn’t bother the user by showing methods which can’t actually be used on the type the user is validating and would throw an exception at runtime. To achieve this, most methods should be written as extension methods. Type inference within the C# IDE will then help us do the required filtering. Still, we’ll have to think carefully about which generic type constraints we want to define on our methods; these type constraints will also help with the filtering (a small sketch of this idea follows after this list).
- The API should be constructed in such a way that the actual user code becomes as readable as possible.
Extension methods and entry point methods already help here. But I’m also thinking about naming the methods in such a way that they become specification-like, readable and fluent; e.g., we should rather name a method IsNull() than Null().
- Prevent the user from having to program more complex statements.
It is perhaps difficult to determine what a ‘complex’ statement is for a developer. The code analysis tool FxCop could actually help here. For instance, it defines a rule stating that it is better to take a Type parameter than to force the caller to explicitly supply a generic type argument. Therefore we should prefer the method IsOfType(Type) over IsOfType<T>().
- Allow the user to access every check in a single step.
We should prefer not to group certain checks, and instead make every check directly accessible. So we don’t want the user to write this:
col.Precondition().Collections().Contains(x);
Rather, I’d like to see this:
col.Precondition().Contains(x);
- Allow users to just call the entry point method once for every argument.
The user should be able to chain the validation methods like this:
c.Precondition().IsNotNull().IsOfType(typeof(X));
rather than forcing the user to do this:
c.Precondition().IsNotNull();
c.Precondition().IsOfType(typeof(X));
Again, this is not something new. This is exactly what Roger is doing in his specification.
- Prevent implicit conversions.
The consequence of using entry point methods is that they must return a type that wraps the validated value. The validation extension methods can then be hooked onto this type. It could be tempting to allow implicit casting from the wrapper back to the original type, because that allows you to do a postcondition check and a return statement in a single line of code. This all seems okay, but it saves just a single line of code while making it less readable. Besides that, it doesn’t always work as expected. Therefore it would be bad to include such a feature.
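To make the entry point and IntelliSense-filtering ideas above a bit more concrete, here is a minimal sketch of how this could look. The entry point is called Requires here (the name the library ends up using, see requirement 3 below), but the types, members and signatures are simplified and are not the library’s actual code:
using System;
using System.Collections;

// Simplified sketch; not the library's actual implementation.
public class Validator<T>
{
    public readonly T Value;
    public readonly string ArgumentName;

    public Validator(T value, string argumentName)
    {
        this.Value = value;
        this.ArgumentName = argumentName;
    }
}

public static class ValidatorExtensions
{
    // The single entry point method: it wraps the value that must be validated.
    public static Validator<T> Requires<T>(this T value, string argumentName)
    {
        return new Validator<T>(value, argumentName);
    }

    // Overload without an argument name.
    public static Validator<T> Requires<T>(this T value)
    {
        return new Validator<T>(value, null);
    }

    // The 'class' constraint hides this check from IntelliSense for value types.
    public static Validator<T> IsNotNull<T>(this Validator<T> validator) where T : class
    {
        if (validator.Value == null)
        {
            throw new ArgumentNullException(validator.ArgumentName);
        }

        // Returning the validator allows chaining the next check.
        return validator;
    }

    // The IEnumerable constraint makes this check show up for collections only.
    public static Validator<T> IsNotEmpty<T>(this Validator<T> validator) where T : IEnumerable
    {
        if (validator.Value == null || !validator.Value.GetEnumerator().MoveNext())
        {
            throw new ArgumentException("The collection should not be empty.", validator.ArgumentName);
        }

        return validator;
    }
}
With these definitions a chain like collection.Requires("collection").IsNotNull().IsNotEmpty() compiles, while IsNotEmpty simply doesn’t show up on, say, an int argument.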
3. The API must use the same terminology as Spec# does
I believe Spec#, or rather the lack of Spec#, is probably the whole reason Roger and I are building these validation frameworks. Spec# is a language that provides method contracts in the form of pre- and postconditions. Spec# is currently a research project and it will probably not be released in the near future. We also shouldn’t expect any validation support like that of Spec# within our mainstream C# language.
Because of my interest in Spec#, I’d like to use the same terminology as Spec# uses. Spec# uses the requires keyword to define preconditions and the ensures keyword to define postconditions. With my library I will stay true to these keywords, and therefore:
- precondition checks can be performed using the .Requires() extension method;
- postcondition checks are performed using the .Ensures() extension method.
With respect to all the requirements explained so far, a simple use case could look like this:
value.Requires().IsInRange(1, 1000);
a.Requires("a").IsGreaterThan(0);
collection.Ensures("collection").IsEmpty();
4. The API should not throw unexpected exceptions
The user expects a precondition check to fail with an ArgumentException or simply to succeed, even if the checked value is null. The user doesn’t expect the library to throw a NullReferenceException; on the contrary, the user uses the library to prevent NullReferenceExceptions from being thrown. Therefore the library must always check for null and act appropriately.
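To give an example of what I mean, consider the following snippet (the StartsWith check is used here purely as an illustration):
string name = null;

// Even though 'name' is null, this check should fail with an ArgumentException (or one
// of its descendants), never with a NullReferenceException thrown from inside the library.
name.Requires("name").StartsWith("abc");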
5. The API must be extendable
Users must be able to extend the library by writing code in their own project, without altering or recompiling the validation library itself. Extension methods make this easy.
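To illustrate, a user could define his own check in his own project without touching the library. The following sketch builds on the hypothetical Validator<T> type from requirement 2; the library’s actual types may differ:
using System;

public static class CustomValidations
{
    // A user-defined check, living in the user's own assembly rather than in the library.
    public static Validator<int> IsEven(this Validator<int> validator)
    {
        if (validator.Value % 2 != 0)
        {
            throw new ArgumentException("Value should be an even number.", validator.ArgumentName);
        }

        return validator;
    }
}
The new check then chains just like the built-in ones: someNumber.Requires("someNumber").IsEven();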
6. The library must have good performance
In my opinion, performance isn’t as important as correctness, but philosophizing about performance (even if it’s just theoretical) is really fun. Besides that, I also want to make sure users of my library don’t have to worry about performance. That’s why I will try to keep performance as good as possible. To give an example, I tried to implement the extension methods in such a way that most of them can be inlined by the JIT compiler. This is the reason I currently chose to use classes instead of structs, as I mentioned at the beginning of this article. After the release of the upcoming .NET 3.5 SP1 I will reinvestigate whether the use of structs would improve performance. For now, I’ll stick to classes.
7. The library must be standalone
The library must be a small package that doesn’t depend on other libraries or assemblies (besides, of course, the usual .NET assemblies). It would be annoying for developers to have to add a large number of assemblies when the only thing they want to do is use the validation library.
8. Each method should have correct (xml) documentation
No matter how good the library is, we still need some documentation ;-).
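As an indication, the XML documentation on the IsNotNull check from the earlier sketch could look something like this (sketch only):
/// <summary>
/// Checks whether the given value is not null. An exception is thrown otherwise.
/// </summary>
/// <typeparam name="T">The type of the value to check.</typeparam>
/// <param name="validator">The validator that wraps the value and its argument name.</param>
/// <returns>The same validator, so that additional checks can be chained.</returns>
/// <exception cref="ArgumentNullException">Thrown when the wrapped value is null.</exception>
public static Validator<T> IsNotNull<T>(this Validator<T> validator) where T : class
{
    if (validator.Value == null)
    {
        throw new ArgumentNullException(validator.ArgumentName);
    }

    return validator;
}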
9. Each method should have supporting unit tests
We don’t want to release something before we’re pretty sure it works as designed.
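To give an impression of what such tests look like, here are two tests for the IsNotNull check from the earlier sketch (MSTest is used purely as an example; the library’s actual test suite may look different):
using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class IsNotNullTests
{
    [TestMethod]
    [ExpectedException(typeof(ArgumentNullException))]
    public void IsNotNullShouldThrowOnNullArgument()
    {
        object argument = null;

        // Extension methods can be called on null references, so this reaches the check.
        argument.Requires("argument").IsNotNull();
    }

    [TestMethod]
    public void IsNotNullShouldPassOnValidArgument()
    {
        object argument = new object();

        argument.Requires("argument").IsNotNull();
    }
}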
Wrapping it up
Hopefully, I’ll soon be able to publish my library. Once again I have to credit Roger Alsing for his influence. This work is all based on his idea. Currently I’m still working on some minor things. I’ll keep you posted on the progress.
Hi,
Thanks for all the creds :-)
Even though I did include implicit casts for return statements, I do agree with you that it is a bad idea.
E.g. it will behave incorrectly if you have a method that returns "object", and it is pretty much bad design to throw exceptions from implicit casts.
So I agree 100% with you on that one.
How are you deciding what kind of exception to throw?
Are you storing some info in the "Validation of T" that gets set up in the Require / Ensure methods?
//Roger
Roger (URL) - 11 07 08 - 22:46
Roger,
The problem with implicit casting is that it's just half a solution. It works fine in a simple use case like this:
string GetString()
{
string s = "string";
return s.Ensures("s").IsNotNull();
}
We'd expect it to always work, though. But look at the following example:
string GetString()
{
object s = "string";
return (string)s.Ensures("s").IsNotNull();
}
We'd expect this to work, because we'd expect Validator to do an implicit cast to object and after that we cast the object to String and return it. But the code actually doesn't compile, because the C# compiler tries to cast a Validator to System.String. The next example doesn’t compile either:
static ICollection GetSome()
{
ArrayList c = new ArrayList();
return c.Ensures("c").IsNotNull();
}
And I even missed the issue you pointed out. You're absolutely right about returning an object. The next example will actually return the Validator itself instead of the String object:
static object GetSome()
{
string s = "string";
return s.Ensures("s").IsNotNull();
}
>> How are you deciding what kind of exception to throw?
>> Are you storing some info in the "Validation of T"
>> that get setup in the Require / Ensure methods?
With structs this had to be the case, but since I'm using classes now, the answer is rather simple. Validator is abstract and contains an abstract 'Throw' method. The objects returned from the .Requires and .Ensures methods are called RequiresValidator and EnsuresValidator. Both have their own implementation of the abstract Throw method.
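In outline, the idea looks roughly like this (a simplified sketch, not the exact code):
using System;

public abstract class Validator<T>
{
    public readonly T Value;
    public readonly string ArgumentName;

    protected Validator(T value, string argumentName)
    {
        this.Value = value;
        this.ArgumentName = argumentName;
    }

    // Each subclass decides which exception to throw when a check fails.
    public abstract void Throw(string condition);
}

public class RequiresValidator<T> : Validator<T>
{
    public RequiresValidator(T value, string argumentName) : base(value, argumentName) { }

    public override void Throw(string condition)
    {
        // Precondition violations are the caller's fault: throw an ArgumentException.
        throw new ArgumentException(condition, this.ArgumentName);
    }
}

public class EnsuresValidator<T> : Validator<T>
{
    public EnsuresValidator(T value, string argumentName) : base(value, argumentName) { }

    public override void Throw(string condition)
    {
        // Postcondition violations are internal bugs: throw a PostconditionException.
        throw new PostconditionException(condition);
    }
}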
Steven (URL) - 11 07 08 - 22:48
Hi! Did you ever publish your source? I'd be very interested in seeing the code....
Louis Berman - 21 04 09 - 04:19
Yes I sure did Louis. It's on CodePlex: http://conditions.codeplex.com/
Also read more here: http://www.cuttingedge.it/blogs/steven/p.. and here: http://www.cuttingedge.it/blogs/steven/p..
Steven (URL) - 21 04 09 - 12:44