Shaun Mccran

My digital playground

07 NOV 2009

Testing methodologies - Regression testing

One of the more overlooked forms of testing (you do test, don't you?) is regression testing. I'm a big fan of scripted testing in both senses: automated tests that actually run against your code base (think cfUnit or JUnit), and scripted manual testing, as in a basic Word document of testing instructions.
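On the automated side, a scripted test can be as small as a single assertion. Here is a minimal JUnit 4 sketch; the BasketCalculator class and the 15% VAT rate are hypothetical stand-ins for whatever your application actually does:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Minimal scripted test: BasketCalculator is a hypothetical class
    // in the application under test, not a real library API.
    public class BasketCalculatorTest {

        @Test
        public void grossTotalIncludesVat() {
            BasketCalculator calc = new BasketCalculator();
            // 100.00 net at a hypothetical 15% VAT rate should give 115.00 gross
            assertEquals(115.00, calc.grossTotal(100.00), 0.001);
        }
    }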

On the manual side, the Word document can be as simple as 'click button N' - what displayed on screen? You can literally just list the actions, the expected consequences and the actual consequences.
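A couple of hypothetical entries might look like this:

    Action                         Expected consequence            Actual consequence
    Click the 'Save' button        'Record saved' message shown    'Record saved' message shown
    Submit the form with no name   Validation message displayed    Server error (bug raised)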

Regression testing is the practice of going back after a release and testing the functionality that was already present, i.e. did you break anything by releasing your new functionality? Often the business and IT focus is on the shiny new development, not the integrity of the existing application.

Developers in particular are guilty of zeroing in on the specific area that they are directly involved with. This can sometimes lead to other areas suffering, especially if you have an OO application layer. In just how many places is each individual object referenced? A change to it may work in one area, but have devastating consequences in another.
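A contrived sketch of the problem, with a hypothetical DateFormatter object shared across the application:

    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Hypothetical shared object, referenced from several areas of the app.
    public class DateFormatter {
        // Changed from "dd/MM/yyyy" to "MMM d, yyyy" to suit one new screen...
        public String format(Date d) {
            return new SimpleDateFormat("MMM d, yyyy").format(d);
        }
    }

    // ...but a CSV export elsewhere still parses dates as "dd/MM/yyyy",
    // so the change quietly breaks a feed that nobody retested.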

I've seen cases of this where it's been months before an error has reared its head, and without an accurate change log it can be difficult to track down the root cause. Needless error tracking and bug fixing take developers away from actually developing, and essentially cost the business money due to bad practice.

I mentioned scripted testing above because it has unforeseen beneficial consequences. If you have done anything like this in the past, your regression testing will be very easy. You will have a handy library of repeatable scripted tests, so it is very easy to measure the previous results against any new tests you might perform, making it instantly obvious whether your functionality is still behaving as it was before the release.
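In JUnit 4 terms, that library can be bundled into a single regression pack and re-run wholesale after every release. A minimal sketch, assuming LoginTests, CheckoutTests and ReportingTests are hypothetical test classes left over from earlier scripted testing:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // Regression pack: re-runs the existing scripted tests in one go.
    // The three test classes listed are hypothetical examples.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        LoginTests.class,
        CheckoutTests.class,
        ReportingTests.class
    })
    public class RegressionSuite { }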

15 APR 2009

Post Implementation Reviews – why bother?

Often a project's success is measured by whether or not it was successfully delivered, and whether the release was relatively painless.

People see the finish line of a project and start making compromises, or try to expedite delivery because the end is in sight. It is this race for the finish that can compromise a project's completion.

In my experience one of the most important parts of the project life cycle is the Post Implementation Review. It is also the part most often overlooked, for the reasons mentioned above, or indeed omitted altogether if a company isn't used to measuring the success of a project in more depth than 'did it go live successfully?' (which is the poorest measure of success).

Essentially this process answers the question "Did we manage to deliver what we set out to do?". It tries to accurately gauge whether or not the business case for a project has been met, and if not, where the shortfalls are.

The Post Implementation Review is an ideal opportunity to validate several aspects of the project, and feed that data back into your PM process to the benefit of future projects.

It can be as simple as a short document asking several basic questions, such as:

  1. Which of the initial business objectives did we meet?
  2. What did we learn from this project?
  3. What was different than expected?
  4. Were there any unexpected issues during the project?
  5. Would we manage any elements of the project differently if repeated?
  6. How do our initial estimates relate to our actual delivery timescales?
  7. How did we manage the unforeseen?

Several important sets of data should be included in it, though. For example, if you have any sort of estimation and/or time tracking process, this is the ideal place to correlate that data with the actual project time frames, as you now know exactly how long it took to deliver the project. This is an important step in refining your estimation analysis, and can greatly improve the accuracy of future projects. After all, what is the point of time tracking a project if you don't actually match up the estimated figures to the actual figures?
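The comparison itself is trivial; the value is in recording it consistently. A minimal sketch with hypothetical figures:

    // Hypothetical figures: compare the estimated effort against the
    // actual effort recorded by time tracking, and capture the variance.
    public class EstimateVariance {
        public static void main(String[] args) {
            double estimatedDays = 40.0; // from the original project estimate
            double actualDays = 52.0;    // from the time-tracking data
            double variancePct = (actualDays - estimatedDays) / estimatedDays * 100;
            System.out.printf("Delivered in %.0f days against an estimate of %.0f (%.1f%% over)%n",
                    actualDays, estimatedDays, variancePct);
        }
    }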

This is also a good opportunity to examine the release management process implemented, and assess whether all potential risks to the project and the existing infrastructure were accurately foreseen. This is easily answered by a brief comparison against any release documentation you may have, as you should quickly be able to see whether the risks highlighted in it were mitigated.

It is also a very good opportunity to explain any release shortfalls to interested stakeholders. Often if a project misses a deadline, or is badly implemented, it is never actually explained to the business why; they simply assume that there was a problem. By explaining it here you are keeping them informed, and on-side. You are also documenting it for future projects.

If you did not foresee a risk that affected your project's release this time, then it surely needs documenting, and a greater level of understanding about it reached for next time. In this way you build a more complete picture of your overall architecture.

Just because a project has been released does not mean you cannot learn anything else from it; often it is entirely the opposite. Haste to proceed onto the next project can cause this step to be skipped entirely, but once you have tried it a few times its value will become more than apparent, both as a project analysis tool and as a vehicle for shaping real process change inside the business.

I will include a template at the bottom of this article that I have found useful in the past. Hopefully this brief explanation will give you the incentive to look at introducing a Post Implementation Review into your projects.

An example Post Implementation Review document.
