Deployment and Testing Automation

In Application Lifecycle Management (ALM) consulting engagements with customers, we usually cover the automation of various activities in the development process. Most of the development teams I work with have some sort of automated build system in place. The automated build usually goes as far as compiling the solution(s) and then running some unit tests to verify the build. Build automation adds a lot of value, but the true value of development process automation can only be realised when we take that final step and automate the deployment and testing process too. This is because an automated build (without deployment) cannot tell us whether all of the system components are going to work together to meet the requirements. It is also very hard (if not impossible) to run any integration tests in such an environment: without a deployment, the tests we can run are limited to those that target individual units or rely on mocking frameworks.

When I ask development teams why they don’t take that final step, this is what I usually hear back:

  • Our environment and deployment process are very complex, so we can’t automate them.
  • We have a shared database server we use for development, and it is constantly changing, so we don’t have a reliable baseline to test against.
  • We don’t store our configuration files in source control because they contain sensitive information.
  • We have different environments for development and testing, and each one has its own settings, so which one should we keep in source control?
  • We need some data to be available in the database for testing, and since the database is changing, the base data set is changing too.
  • If we want to include deployment and testing in the process, it is going to take n (n > 5) hours.

There are various solutions to each of these issues and, to be honest, none of them is a good excuse for not automating deployment and testing. Points like the ones listed above highlight fundamental problems elsewhere in the development process, so even if we don’t want to automate deployment and testing, we are still going to be affected by those inefficiencies and problems.

For example, if our argument is that the database schema is constantly being changed (by a different team, and probably on a different release cycle), this means that when we decide to release, we have no idea whether the latest version of the code is in sync with the database schema and scripts at that point. We may be lucky and manage to release in a few hours, but next time we may end up spending a day or two sorting out inconsistencies. The typical solution is to make sure the database artifacts are always in sync with the source code and are stored together with the code in source control. Historically, maintaining the database scripts (especially the DDL change scripts) in source control has been a huge problem, but today we have great support from development tools in the market for managing the database artifacts (and their changes). Many of the teams I talk to are simply unaware of the progress that has been made in this area.
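To make the idea concrete, here is a minimal sketch: versioned DDL change scripts live in the repository, and a small script applies the ones that haven’t run yet, in order, recording what it has applied in a version table. The file layout, naming convention and version table are my own assumptions for illustration (and I am using SQLite purely to keep the example self-contained); in practice a database project or migration tool would do this job.

```python
# migrate.py - sketch: apply versioned database change scripts kept in
# source control. Paths, naming convention and the version table are
# hypothetical; a real project would use a proper migration tool.
import sqlite3
from pathlib import Path

def apply_migrations(db_path: str, scripts_dir: str) -> None:
    conn = sqlite3.connect(db_path)
    try:
        # Track which change scripts have already been applied.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_version (script TEXT PRIMARY KEY)"
        )
        applied = {row[0] for row in conn.execute("SELECT script FROM schema_version")}

        # Scripts carry a sortable numeric prefix, e.g. 001_create_orders.sql.
        for script in sorted(Path(scripts_dir).glob("*.sql")):
            if script.name in applied:
                continue
            conn.executescript(script.read_text())
            conn.execute(
                "INSERT INTO schema_version (script) VALUES (?)", (script.name,)
            )
            conn.commit()
            print(f"applied {script.name}")
    finally:
        conn.close()

if __name__ == "__main__":
    apply_migrations("dev.db", "db/migrations")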

The point about having a dependency on the base test data is interesting too. If we don’t have a reliable set of test data for the code we are working with, we will end up copying a database from a previous release and then making changes to that database to bring it in line with the latest version of the schema. And guess what? Every developer needs to go through this error-prone process. As the testing team and other developers start testing, they will discover new sets of data that allow them to test new scenarios, but since all of the developers have spent considerable time creating their “precious” test data, they will hang on to what they have. I am sure this is one of those experiences we have all had, and we can spend a lot of time maintaining our test data instead of implementing features for our product. The easiest solution to this challenge is to maintain a set of base test data and store it together with the source code in source control. Yes, somebody needs to maintain this, but we are in a better position because fewer people need to worry about maintaining that data, and other developers will get the latest test data when they get the latest from source control. This base test data can also be used by the automated processes for testing purposes, as sketched below.
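As a rough illustration (again with hypothetical file names, and SQLite standing in for the real database server), a script like this can rebuild a local test database from the schema and the versioned base test data, so every developer and every automated run starts from the same baseline:

```python
# reset_test_db.py - sketch: rebuild a local test database from the
# schema and base test data stored in source control. File names and
# folder layout are hypothetical.
import csv
import sqlite3
from pathlib import Path

def reset_test_database(db_path: str, schema_file: str, data_dir: str) -> None:
    Path(db_path).unlink(missing_ok=True)              # start from a clean database
    conn = sqlite3.connect(db_path)
    try:
        conn.executescript(Path(schema_file).read_text())  # create the schema

        # One CSV of base test data per table, e.g. db/testdata/customers.csv,
        # where the first row holds the column names.
        for csv_file in sorted(Path(data_dir).glob("*.csv")):
            table = csv_file.stem
            with csv_file.open(newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                placeholders = ", ".join("?" for _ in header)
                conn.executemany(
                    f"INSERT INTO {table} ({', '.join(header)}) "
                    f"VALUES ({placeholders})",
                    reader,
                )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    reset_test_database("test.db", "db/schema.sql", "db/testdata")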

Implementing an automated deployment and testing process is not trivial, but it is certainly achievable, and there are things you can do to make it simpler. If you ask me for the #1 guideline in this area, I will say: put all of the artifacts required for deployment in source control, but even more importantly, make sure they are in the right location! Nobody questions why we need to put the source code under source control, but people usually miss other artifacts such as database artifacts and build and deployment scripts. The interesting point is that many teams actually put these artifacts somewhere under source control, but they put them in the wrong location, which makes them less effective and leaves them unusable when we want to automate deployment and testing. So what is the right location for the artifacts?

The basic idea is that all of the artifacts required for deployment should follow our branching hierarchy.

[Image: deployment artifacts following the branching hierarchy]

For example, if we have three branches (DEV, MAIN and REL), each branch needs its own copy of the artifacts. Why? Because we want to be able to deploy and test any of those branches independently. As you can see, this is a fairly obvious (and simple) solution, but many teams don’t do this for various reasons, and that makes automated deployment and testing very hard (or impossible).

[Image: DEV, MAIN and REL branches, each with its own copy of the deployment artifacts]
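To make the layout concrete, one possible structure looks like this (the folder names are only an illustration; the important point is that each branch carries everything needed to build, deploy and test it):

```
$/MyProduct
  DEV
    src/       <- application source code
    db/        <- schema, change scripts and base test data
    deploy/    <- build and deployment scripts, config templates
  MAIN
    src/
    db/
    deploy/
  REL
    src/
    db/
    deploy/
```

With this layout, branching MAIN to create a new release branch automatically gives that branch a consistent snapshot of the database artifacts and deployment scripts as well as the code.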

As I said earlier, even if we are not going to automate deployment and testing, we will still benefit from following this structure and making sure all of the artifacts under each branch are consistent and in sync with one another. Once this structure is in place, the next step is to make sure we have the capability of creating and managing the target environments, as we will be deploying and testing over and over again. [I am writing the next few sentences with my “good corporate citizen” hat on…] This is where tools such as Visual Studio Lab Management can help. Many developers and testers I talk to still haven’t heard of this great product or don’t know what it does, so if you fall into that category, you may want to have a look at the resources (product team blog, videos) to find out more about it. Please note that at the time of writing this post, Visual Studio Lab Management is still in pre-release status, so make sure you read the pre-release license terms (rtf) before installing it.

You will be amazed when you see how easy it is to automate the process of managing the target environments (which usually consist of one or more virtual machines) as part of the build process in TFS 2010.

[Image: out-of-the-box lab workflow activities in the TFS 2010 build workflow editor]

The diagram above shows the list of activities provided out of the box, and you can use the build workflow editor to customise the end-to-end process.
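Conceptually, the end-to-end process those activities implement looks something like the sketch below. This is plain Python standing in for the workflow, and every script name in it is hypothetical; it is only meant to show the shape of the automated deploy-and-test stage, not the actual TFS activities.

```python
# pipeline.py - conceptual sketch of the deploy-and-test stage that the
# lab workflow automates. Each step stands in for a workflow activity;
# the script names and arguments are hypothetical.
import subprocess
import sys

def run(step: str, *cmd: str) -> None:
    print(f"=== {step} ===")
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(f"{step} failed with exit code {result.returncode}")

if __name__ == "__main__":
    # 1. Restore the target environment to a known-good baseline.
    run("restore environment", "python", "deploy/restore_snapshot.py", "--env", "test-lab")
    # 2. Deploy the build output and bring the database up to date.
    run("deploy build", "python", "deploy/deploy_build.py", "--env", "test-lab")
    run("migrate database", "python", "migrate.py")
    # 3. Load the base test data, then run the automated tests.
    run("load test data", "python", "reset_test_db.py")
    run("run tests", "python", "-m", "pytest", "tests/")
```

Because every artifact the steps depend on lives under the branch being built, the same pipeline can run unchanged against DEV, MAIN or REL.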

Originally posted by Mehran Nikoo on 6th June 2010 here https://mnikoo.net/2010/06/06/deployment-and-testing-automation/