Wednesday, September 3, 2014

Test Ignorance

Every unit test framework I can think of comes with a way to ignore tests; usually it's as simple as adding an attribute to the test. While I was thinking about what syntax to use for ignore in AAATest, I started wondering whether it should be a feature at all.
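
In nUnit, for instance, ignoring a test is a one-attribute change (the fixture and test below are invented purely for illustration):

    using NUnit.Framework;

    [TestFixture]
    public class CustomerServiceTests {

        [Test]
        [Ignore("Broken since the discount refactor - will fix later")]
        public void Applies_Discount_To_Preferred_Customers() {
            // The runner skips this test and reports it as ignored,
            // but nothing ever forces anyone to remove the attribute.
        }
    }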

The biggest problem with ignored unit tests is that there is nothing compelling anyone to turn them back on. Once they are ignored they have a tendency to stay ignored forever. A test can stay in an ignored state for years without anyone noticing or caring. The person who decided to ignore it in the first place could be long gone.

At the moment I am leaning towards not providing a mechanism to ignore tests (and not just because it's the easiest thing to do :p). That would force the user to either delete or comment out the test.

Deleting would obviously force you to consider whether the test will really be needed in the future. Deleting is something we don't do lightly, only when we are absolutely sure. Commenting is often used as a soft delete, and many a person has argued that if code is commented out then it should be deleted. Does that rule hold for unit tests? I'm still not sure, but I'm leaning toward yes.

On a side note, I wonder if it's possible to create an NDepend rule that fails if code is commented out.
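
I haven't tried it, but a CQLinq query along these lines might get close. The metric name is from memory and would need checking, and a comment-line metric can't tell commented-out code from ordinary comments, so treat this as a rough sketch rather than a working rule:

    // <Name>Methods containing comments (rough proxy for commented-out code)</Name>
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfComment > 0
    select new { m, m.NbLinesOfComment }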

The only functional difference between commenting and ignoring is that commented code does not have any maintenance overhead. If you're ignoring the test, is paying the maintenance cost for it worthwhile? Only if you plan to enable it again. If you're ignoring a test for long enough that it gets out of sync with the code, should it be deleted? Absolutely.

The project I'm working on at the moment has a failing test. It is expected to fail and is something I intend to work on soon. In this case I should have been working in a branch and wasn't. By the time the branch is ready to be pulled back in, the test should be passing.

A branch is the correct place for incomplete code and is therefore allowed to contain failing tests. If your code is in an incomplete state for long enough that you want to ignore tests then you really should be working on a branch anyway.

My current, though not staunchly held, opinion is that tests should be deleted rather than ignored and that being able to ignore tests is a hack for poor SCM usage. Until someone can provide me with a better argument, AAATest will not be able to ignore tests.

Tuesday, September 2, 2014

Appveyor Impressions

So I wanted to set up a CI process for AAATest. I don't have a server lying around, so I thought I'd give this newfangled cloud CI a try. Technically I do have a machine lying around that I could use, but it's on the other side of the room, and I can reach the cloud from here. The cloud CI of choice was Appveyor.


First Impressions


My first impression wasn't great. My project is set up to use a git submodule for the wiki, which is just how github works. I created a project file to hold the documentation, and the main solution references this. Personally I'd prefer the documentation to live in the same repo, but that is an issue for another day.

The build configured through the UI, as far as I could tell, did not allow for submodules, so straight away I had to resort to the Appveyor build tool, configured via a yaml file in the root of the repository. This is when the problems with Appveyor became immediately apparent.

A while ago I wrote about avoiding spaghetti builds. Appveyor violates every one of those rules. I go into detail on the biggest issues below.


Who's the Boss?


Even Abed would be confused by this. Does the source contain the build or does the build control the source? It seems to be both. The checkout process goes like this:

1. Appveyor checks out the code.
2. Appveyor executes the build found in the repository.
3. The build checks out more code.

There is a circular relationship between the source and the build, bound to end in tears. This circular relationship leads to the next issue.


No Local Builds


One of the golden rules of a build system is that it has to be able to run locally. Without this you can't try a change without committing it. If the build takes several minutes or more (integration tests, deployments, etc.) then your downtime between iterations is just as long, which is not great for productivity.

Imagine if you had to commit your code and wait for a cloud service to compile it before seeing any errors. This is essentially what Appveyor forces upon your build.


Build Configuration == Build Process


The other major issue is that there is no real way to configure a build. The configuration is either global or tied to a branch. This limits you to a single build process and there are many good reasons to have more than one.

I generally want to compile and test on every check-in, but integration tests take much longer and I'm happy for them to run less often. I might want performance tests to be compiled in release mode. I might want the deployment to be run only manually, and so on.

These scenarios are impossible with the way configuration is handled by Appveyor.


What is Appveyor


Is it a CI server, competing with TeamCity? Is it a build framework, competing with NAnt/MSBuild? Is it a deployment server, competing with Octopus?

Unfortunately the answer seems to be all three. The only good news is that the CI server is what shows the most promise, and it's the only part of it I'm interested in using.


Conclusion


Will it work as a CI server for AAATest? At the moment I think it will, just barely. AAATest is a very simple project; build, test and deploy (NuGet) are the only build steps required, and I think Appveyor will manage.

For a more complex project, with complex configuration scenarios? I think you would drive yourself mad.

I'm hoping they really focus on the CI server part in the future and leave the building and deployment to better tools.

Introducing AAATest

I've had a bit of time on my hands lately and was determined to finish one of my projects, or at least make enough progress that I have something to show for it, something that can be improved upon later.

The project I decided on was a unit test framework. I had started work on it several months ago but never got much past the exploratory phase. It was just an experiment to see how far I could push the boundaries of C# and the .NET framework, to see how much could be done in a simpler and more expressive way.

The framework is quickly approaching its 0.1 BFW (barely works) milestone, and unlike most of my projects I'm quite pleased with the direction. So I thought now would be a good time to start writing about it. Today will be about why it was created, and future posts will go into more depth on some design decisions and my experiences as I set up publishing for a new library.

Warning: Some of the comments below might seem like a criticism of nUnit. My intention isn't to criticize it but to contrast it with my own effort. I've happily used nUnit for the best part of a decade and will probably use it for many years to come.


Unit Testing Evolution


Test frameworks have barely evolved over the last decade. In that time the .NET community (and Microsoft itself) has changed quite dramatically. MVC has been embraced as the way to build web apps. NuGet and the countless OSS tools it provides have been adopted. Continuous integration and deployment are no longer foreign words.

But our test frameworks are largely the same. If I had to use a release of nUnit that is ten years old, I doubt I would notice. This isn't because nUnit is bad; quite the opposite is true. It hasn't changed because it works, it works well, and we've all just learned to accept the warts as the way things are.

The other part of the problem, I think, is that our test frameworks are general test frameworks. nUnit works well for pure unit testing and it works well for integration tests. Being versatile is a good thing, but many have gone down the wrong path with unit testing because of that lack of direction.

On the other hand, integration testing tools have evolved quite a bit. There was nothing like SpecFlow a decade ago. Selenium barely worked at the time. Of course, with the largely static pages of the period there wasn't as much need for browser-driven testing.

So why did integration testing continue to evolve while unit testing did not? I believe it's because new tools were developed whose focus was entirely on creating a better experience for integration tests. In contrast, our tools for unit testing were stuck in their generalist philosophy.


Narrowing Focus


When I started work on AAATest I wanted to see what would happen if I created a test framework purely for unit testing. I was sick of adding arrange, act and assert comments in every single test. AAATest would be more expressive and have this baked in.

My IoC containers know what dependencies my classes require and work it out just fine at run time. But my tests require me to create the class in every test fixture, and not just the class but all of its dependencies. AAATest would automatically manage these dependencies.
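
To make that concrete, here is the kind of boilerplate I mean, sketched with nUnit and Moq. The service, its dependencies and the test itself are invented purely for illustration:

    using Moq;
    using NUnit.Framework;

    public class Order { public int Id { get; set; } }

    public interface IOrderRepository { void Save(Order order); }
    public interface IMailer { void SendConfirmation(Order order); }

    public class OrderService {
        private readonly IOrderRepository repository;
        private readonly IMailer mailer;

        public OrderService(IOrderRepository repository, IMailer mailer) {
            this.repository = repository;
            this.mailer = mailer;
        }

        public void Submit(Order order) {
            repository.Save(order);
            mailer.SendConfirmation(order);
        }
    }

    [TestFixture]
    public class OrderServiceTests {
        [Test]
        public void Submit_Sends_A_Confirmation_Email() {
            // Arrange: every test builds the class under test and a mock
            // for each of its dependencies, by hand.
            var repository = new Mock<IOrderRepository>();
            var mailer = new Mock<IMailer>();
            var sut = new OrderService(repository.Object, mailer.Object);

            // Act
            sut.Submit(new Order { Id = 1 });

            // Assert
            mailer.Verify(m => m.SendConfirmation(It.IsAny<Order>()));
        }
    }

Add a constructor parameter to OrderService and every fixture like this has to change, which is exactly the overhead AAATest is meant to remove.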

I wanted it to work with idiomatic code and to push people down the path of writing idiomatic, modern C# code. It is an opinionated framework, though the exact extent of those opinions I don't know yet.


Presenting


See the AAATest github page. I could go on, but github has the majority of the content and is much better than blogger for code samples.

I also put together a tutorial on TDD with AAATest. I'm planning to extend it in the future with more examples.

So far it can only run the tests included in the example test project, but you have to start somewhere.