Unit Testing and tight timescales

I’ve worked on a number of projects where we started with the goal of 100% unit test coverage, but as soon as deadlines get closer, unit tests are the first thing to go.

What are the reasons for this?

There are several reasons why this happens. As the deadline for a project gets closer, your developers are put under more pressure to get all the features completed, so they focus on that; the time that would have been spent updating or creating unit tests is spent getting those features delivered.

If the project being developed is an internal one, and the stakeholder sees that the release deadline is getting closer, they are probably wondering why their developers are spending time writing tests instead of building that new feature they have just asked for. I’ve seen this several times: the project manager or business owner is under pressure to deliver, and when they hear a lot of talk from developers saying they need to write tests for a feature, to the PM that seems like a waste of time. On one project we were asked to stop writing unit tests because we needed to get a version of the website ‘out the door’.

I’ve also been on projects where, as a team, we decided to ‘come back and write the tests later’ as a way of still reaching the high level of quality the project needed. This wasn’t a good idea: we never went back to write the tests, and eventually the code coverage on the project was very poor.

Unit tests and test coverage are sometimes seen as nice things to have. We know as good developers we should have them; we should have tests for every line of code we write. There are many talks, books and podcasts where ‘experts’ tell us developers that we should be writing tests. It’s like when you are trying to eat healthily: you know you should make a salad for lunch, but those crisps and cheese rolls are just so much easier and quicker to make.

How can this problem be solved?

The central cause of this problem is time: projects simply run out of time to complete all the features and still write the tests. One solution is better planning up front. When scoping out a feature, developers need to see unit testing as an integral part of the development process. If the team uses story points in Scrum to estimate the effort involved in creating a new feature, writing the unit tests needs to be counted as part of that effort, and it is up to the senior members of the team to remind all the developers to include test development in their estimates.

Developers also need to show project managers and stakeholders that maintaining good test coverage matters not only for the project as it is now, but a year or two down the line, when a new feature is added to the site/app. When those changes start to be developed, the PM/stakeholder needs to understand how important good tests are during that ‘adding a new feature’ phase, in order to make sure bugs aren’t introduced into the existing site/app.

As well as developers not including this effort in their Scrum estimates, and PMs not being aware of the importance of keeping tests up to date, another issue is that writing tests takes a long time, especially if you are just starting out. I’ve worked on projects where I wasn’t convinced that the unit tests I was writing were actually ‘testing’ my code; they were probably just checking that a value was present. The quality of the tests was not very good.
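To make that difference concrete, here is a minimal sketch (the `calculateTotal` function and its discount rule are hypothetical, invented for illustration) contrasting a test that merely checks a value is present with tests that actually pin down behaviour:

```javascript
// Hypothetical price calculator: sums item prices and applies a 10%
// discount when the subtotal is over 100.
function calculateTotal(items) {
  const subtotal = items.reduce((sum, item) => sum + item.price, 0);
  return subtotal > 100 ? subtotal * 0.9 : subtotal;
}

// Weak test: only checks that *a* value comes back, so it still passes
// even if the discount logic is completely wrong.
console.assert(calculateTotal([{ price: 50 }]) !== undefined);

// Meaningful tests: check concrete expected values, including the
// discount boundary, so a broken rule actually fails.
console.assert(calculateTotal([{ price: 50 }]) === 50);                  // no discount
console.assert(calculateTotal([{ price: 60 }, { price: 60 }]) === 108);  // 120 * 0.9
console.assert(calculateTotal([]) === 0);                                // empty order
```

The weak assertion gives a false sense of coverage; the meaningful ones document the intended behaviour at its edge cases.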

As developers, we are shown all the features of a new framework or library: how it does the latest thing, how fast it runs, and how it is better than framework X. But there is very little in the framework documentation on how to plan your tests, or what approach developers should take when writing them. We see many examples where a test suite is set up and the test checks the value of a <span> in a template, or tests an HTTP request for a static file, but there isn’t much information about the ‘theory’ of writing good unit tests. Have you tried writing tests for AngularJS directives? It’s easy to check the HTML a directive produces, but testing the inner workings of your directive? Nearly impossible.
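One workaround I’d suggest (my own approach, not something the framework docs push) is to pull the ‘inner workings’ out of the directive into a plain function, so the logic can be tested without `$compile` or the DOM at all. The `toggleState` helper below is a hypothetical example:

```javascript
// Hypothetical example: instead of burying toggle logic inside a
// directive's link function, extract it into a plain function that
// needs no DOM, no $compile and no test harness.
function toggleState(state) {
  return {
    open: !state.open,
    toggleCount: (state.toggleCount || 0) + 1,
  };
}

// The directive's link function would then just delegate to it
// (AngularJS sketch, shown as a comment only):
//   link: function (scope) {
//     scope.toggle = function () { scope.state = toggleState(scope.state); };
//   };

// The inner workings are now trivially unit-testable:
console.assert(toggleState({ open: false }).open === true);
console.assert(toggleState({ open: true, toggleCount: 3 }).toggleCount === 4);
```

The directive itself shrinks to glue code, and the part worth testing becomes a pure function.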

If developers spend time learning how to write tests, and if the authors of these frameworks/libraries show others how to write good tests for apps built on them, the time developers spend writing tests will be reduced. The more we practice something, the quicker and better we get.

If PMs and stakeholders can see unit tests and coverage as an investment in the quality and long-term support of a project, they will allow developers the extra time needed to write tests.

Creating a Definition of Done

Today in our project planning sessions we discussed the idea of a Definition of Done. Now this is the first agile project I’ve worked on where we’ve actually defined what we, as a team, think a definition of done is.

As a developer I can really see the benefit that having a DoD set out from the beginning has. It gives us a checklist to go through when we have finished a task so we know that we can ‘officially’ say it’s done.

I think there are two main reasons why the DoD is important for a developer. One is that I can be happy to say I have finished a task when I have met all the criteria the DoD sets out. The second is that the quality of the project stays at a higher level as the project goes along.

If your Definition of Done says that a task can only be finished if it has 100% code coverage and has been checked over by a designer to make sure the work matches what they designed, then the quality of the project remains at a consistently high level, because you can’t say something is finished until all the quality checks have been made.
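The checklist idea can even be made mechanical. A minimal sketch (the criteria names and the `isDone` helper are hypothetical, invented for illustration, not a real tool) of gating ‘done’ on the DoD:

```javascript
// Hypothetical Definition of Done checklist: a task only counts as
// "done" when every criterion passes. Criteria here are illustrative.
const definitionOfDone = [
  { name: "100% code coverage", check: (task) => task.coverage === 100 },
  { name: "Design review passed", check: (task) => task.designApproved },
  { name: "Code reviewed", check: (task) => task.codeReviewed },
];

// Returns the names of the failing criteria; an empty list means done.
function isDone(task) {
  return definitionOfDone.filter((c) => !c.check(task)).map((c) => c.name);
}

// A task with full coverage but no design sign-off is not done yet:
console.assert(
  isDone({ coverage: 100, designApproved: false, codeReviewed: true })
    .includes("Design review passed")
);
```

Even as a plain list pinned to the team wall rather than code, the point is the same: ‘done’ is defined once, up front, and checked the same way for every task.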