I’ve been delivering quite a few technical trainings during the past year, and one of the most discussed topics is the setup of development environments for large-scale projects. Especially large ISVs are really interested in the practicalities of utilizing TFS as the continuous integration (CI) and/or application lifecycle management (ALM) platform. For standard .NET projects this has been the way to manage large projects, and it’s natural to want to utilize the same investment and practices for SharePoint-based development as well.
Since SharePoint development differs quite a lot from standard ASP.NET development, this has not been that straightforward. The following scenario is an example done using Visual Studio and TFS, but the principles and practices can easily be adapted for other continuous integration solutions as well, like CruiseControl.NET (CCNet).
Setting up the Visual Studio solution for TFS
Before continuous integration can be set up on the TFS side, we need to configure the Visual Studio project correctly, so that whenever a build is initialized, a newly compiled solution package (.wsp) is created. There are numerous blog entries available on the Internet with detailed steps for this.
Basically the idea is to configure the Visual Studio solution in such a way that each assembly is first compiled and then the solution package is built using MakeCab.exe. For a VS solution with multiple projects, make sure that you have defined the project dependencies in such a way that the actual solution package project (the one whose output is the .wsp file) depends on the assembly projects (which output DLLs). This ensures that the assembly projects are compiled before the .wsp package is generated.
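As a minimal sketch of this setup (all file and assembly names below are placeholders, not from any real project), the packaging project can call MakeCab.exe from its post-build event with a diamond directive file (.ddf) that lists the package contents:

```
; solution.ddf -- hypothetical directive file; file names are placeholders
.OPTION EXPLICIT
.Set CabinetNameTemplate=MySolution.wsp
.Set DiskDirectoryTemplate=CDROM
.Set CompressionType=MSZIP
.Set Cabinet=on
.Set DiskDirectory1=Package

; files to include in the solution package
manifest.xml
bin\MyWebParts.dll
TEMPLATE\FEATURES\MyFeature\feature.xml
TEMPLATE\FEATURES\MyFeature\elements.xml
```

The post-build event of the packaging project then simply runs `makecab.exe /f solution.ddf`, producing Package\MySolution.wsp on every build.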
Creating the auto build project for TFS
When the auto build process of TFS has finished compiling the Visual Studio solution, we have fully packaged solution package(s), which are ready to be deployed to any SharePoint server. Since TFS is not aware of these kinds of file types, it does not copy the .wsp package to the drop location by default. This is not an issue, since we can modify the build project slightly to initiate the portal recreation. By opening the build project file (by default TFSBuild.proj, located in the TeamBuildType/[build name]/ folder in TFS source control) and adding the following XML elements, we make sure that the .wsp package is also copied to the drop location and that an additional batch file (in this case rebuild.bat) is executed.
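A hedged sketch of such a TFSBuild.proj addition could look like the following, using the Team Build AfterDropBuild extensibility target (project and file names here are placeholders):

```xml
<!-- Hypothetical additions to TFSBuild.proj; paths and names are placeholders -->
<Target Name="AfterDropBuild">
  <!-- Copy the compiled solution package to the drop location -->
  <Copy SourceFiles="$(SolutionRoot)\PortalPackage\Package\MySolution.wsp"
        DestinationFolder="$(DropLocation)\$(BuildNumber)" />
  <!-- Execute the batch file that recreates the portal -->
  <Exec Command="$(DropLocation)\$(BuildNumber)\rebuild.bat" ContinueOnError="true" />
</Target>
```

AfterDropBuild runs after Team Build has copied the standard outputs to the drop location, so the .wsp file and the build outputs end up side by side before the batch file is launched.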
Note: The above example of using rebuild.bat assumes that SharePoint is located on the same server where the build happens, which most of the time is not the case. An alternative solution is described in the following chapter.
A really nice feature of the auto build is also the fact that operations and actions logged by MakeCab are automatically included in the TFS auto build report, which is generated for each executed auto build. If there’s anything wrong with your solution files (manifest, DDF etc.), the errors will be automatically logged here. Each executed build has its own detailed information, from which you can access the build log, as we can see in the following image.
The build log (BuildLog.txt) contains a huge amount of detail concerning the actions taken in a particular build. All the information logged by MakeCab is also included in the log, allowing detailed analysis of the SharePoint solution package compilation.
Adding a rebuild of the portal to the scenario
When the .wsp package has been created, it of course has to be deployed to the portal before it can be tested. This can be accomplished manually, but it can also be automated, so that the portal is recreated automatically as part of the auto build process.
Personally I have done this in a few different ways. Initially I created a console application, which was executed as a scheduled task by the Windows OS. A more convenient way to do the same is to create a few new extensions to stsadm that are responsible for setting up the environment, so that project members can access the latest version without any manual intervention. If the build server and SharePoint server are different servers (most likely the case), you can schedule the execution of the stsadm commands from a batch file located on the drop location server.
The following describes one approach I have used. The tasks depend on the type of development and can be customized based on your requirements.
- Redeploy the new solution package to the farm, removing any previous versions if they exist
- Recreate the portal hierarchy using portal site definitions
- Define access to the newly created hierarchy for the project managers and testers
For these objectives, I created the following stsadm extensions, which are processed sequentially. These commands access the farm using the object model. Out of the box, stsadm already provides similar functionality, but by creating my own commands I can easily improve them and/or add any actions to be executed as part of the auto build process.
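Custom stsadm extensions are registered by dropping an XML file into the 12\CONFIG folder that maps each command name to the class implementing it. A hedged sketch (the file, class and assembly names below are placeholders for the actual implementation) could look like this:

```xml
<?xml version="1.0" encoding="utf-8" ?>
<!-- Hypothetical stsadmcommands.projectname.xml, deployed to the
     12\CONFIG folder; class and assembly names are placeholders -->
<commands>
  <command name="deploysolutionadv"
           class="MyCompany.Deployment.DeploySolutionAdv, MyCompany.Deployment" />
  <command name="recreatesitecollection"
           class="MyCompany.Deployment.RecreateSiteCollection, MyCompany.Deployment" />
  <command name="assignuserstogroup"
           class="MyCompany.Deployment.AssignUsersToGroup, MyCompany.Deployment" />
</commands>
```

Each referenced class implements the ISPStsadmCommand interface, which is how stsadm discovers and runs the custom operations.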
| Command | Description |
| --- | --- |
| deploysolutionadv | Responsible for deploying the new solution package to the farm. Retracts and removes any previous versions from the farm, if they exist. The command is used to redeploy the solution package as part of the daily builds. |
| recreatesitecollection | Recreates a site collection using a specific template given as a parameter. If the site collection already exists in the farm, it is deleted first. The command is used to recreate the site collection for the daily builds. Portal site definitions are a great way of providing full hierarchies immediately for the newly created site collection. |
| assignuserstogroup | Grants access to the defined site collection for the users given as a parameter. The command is used to give access to the newly created site to the persons responsible for verification tasks. |
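Put together, the rebuild.bat executed as part of the auto build could chain these commands. Note that this is only a sketch: since the three operations are custom extensions, their parameter names, the portal URL, the template name and the accounts below are all hypothetical placeholders.

```bat
@echo off
rem Hypothetical rebuild.bat; all parameters, URLs and accounts are placeholders
set STSADM="%ProgramFiles%\Common Files\Microsoft Shared\web server extensions\12\BIN\stsadm.exe"

rem 1. Redeploy the solution package, removing any previous version
%STSADM% -o deploysolutionadv -filename MySolution.wsp

rem 2. Recreate the site collection from the portal site definition
%STSADM% -o recreatesitecollection -url http://devportal -template MYPORTAL#0

rem 3. Grant access to the project members responsible for verification
%STSADM% -o assignuserstogroup -url http://devportal -group Testers -users DOMAIN\pm;DOMAIN\tester1
```

Because the commands run sequentially, a failure in the deployment step leaves the old site collection untouched, which keeps the previous build available for testing.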
Full scenario for continuous integration
The following image illustrates the key steps of continuous integration in SharePoint development. This model can be considered the development-time process for the project.
The following table describes the steps and phases one by one.
Phase / Element
Developers develop individual features and functionalities based on the module plan (part of the technical specification) using their independent virtualized environments, which have access to the TFS server for work items, source control etc.
The TFS server is used to store source code and other project-related information. TFS is scheduled to build the integrated version of the package using its build automation functionality.
Developers can also sync their environment using the artifacts stored in TFS.
The development integration server is used to set up the outputs from TFS. If required, this server environment can be utilized by multiple projects, as long as each has a separate application to which the solution is automatically deployed (often the case in ISV environments).
Project members (for example the project manager, testers and even customer representatives in some cases) can follow the progress of the project and give feedback based on the deployed builds.
The possibilities to test and verify the provided functionalities on the development integration server depend heavily on the type of solution being built. In the case described above, complete custom site definitions with initial configuration of custom web parts are included, and therefore, when the portal site definition is used to create the structures, the new functionalities are directly visible in the portal.
On the other hand, it’s quite common that you are developing functionalities that are associated with out-of-the-box site definitions using feature stapling techniques. In these kinds of projects, the new functionalities become available as soon as you create sites based on the definitions to which the stapling has been applied.
Even if you are developing only a few custom web parts, by utilizing the deployment model described above you can verify the deployment packages for your project and test the web parts in the environment. If you are only adding a few new web parts to an out-of-the-box portal, you might want to consider automated activation of your custom features, which deploy the .webpart files to the portal. This way the tester(s) can verify the functionalities by adding the new web parts to pages using the standard web part picker.
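That activation step can itself be scripted with the out-of-the-box stsadm operations and appended to the rebuild batch file; as a sketch (the feature name and URL below are placeholders):

```bat
rem Hypothetical activation step; feature name and URL are placeholders
stsadm -o activatefeature -name MyWebParts -url http://devportal
rem Flush any pending administrative timer jobs so the activation completes
stsadm -o execadmsvcjobs
```

After activation, the .webpart files provisioned by the feature appear in the web part gallery, ready for the testers to pick.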
SharePoint artifact development
One of the things to consider when setting up the project is the storage location of the SharePoint artifacts. Even though SharePoint provides version control for the artifacts it stores, it cannot be considered an actual source control system. Especially if the development is done by an ISV, which is the most common case, it’s good to keep the source code and the artifacts in sync in the source control system (like TFS) so that you can label the actual releases of the developed features. Consider the practicalities of updating your customizations from version 1.0 to 2.0 (I’ll write later about version handling practices for your SharePoint project).
Artifact development on the ISV side can of course utilize the standard SharePoint tools, like SharePoint Designer, which increase productivity during the initial creation of the functionalities. There is, however, no easy way to sync the artifacts from SharePoint to the Visual Studio project responsible for encapsulating the solution packages. Whenever the development of a particular artifact is finalized, it can, however, be copied into the package manually. This way, for example, a master page developer can first finalize and verify the functionality in her/his own virtualized environment and provide versions to the official solution package when appropriate.
Real life experiences
I initially created this process and the necessary configurations for one enterprise project, which started in July 2007 and where I acted as the technical lead for the infrastructure architecture and the customized development (at the time the project started, the developers from the ISV didn’t have that much experience with the customization models). The overall number of developers in the project was up to seven, and since the development happened at the customer’s premises, the daily builds provided an easy way for the customer representatives to follow the progress and give instant feedback whenever required.
A similar setup would, however, be extremely useful for any ISV doing SharePoint development. Since the recommended deployment method for any customizations in the SharePoint landscape is to use solution packages, this process is useful for any development project, no matter the amount of customization (from a single web part to enterprise projects with tens of developers).
One additional advantage of the automation came as a surprise during the project; it was not initially foreseen. One of the guidelines we kept in the project was to immediately take a fresh copy of the virtualized environments if unexplained errors were encountered during development that could not be solved in a timely fashion. By utilizing the same process as for the auto build, we could recreate the full portal for the development environment from scratch (a huge WCM portal) just by running the predefined batch files. This decreased the time required to set up the development environments with the latest build and thereby saved project resources for the actual activities to be completed.
Summary & more information
Utilizing continuous integration practices in SharePoint development projects provides a fairly easy way to increase the quality of the delivered functionalities. The process might seem difficult at first, but once the initial configurations and actions have been completed, it can easily be reproduced for any number of projects.
Links to the concepts defined in this blog post
- Overview of Team Foundation Build
- How to: Extend the STSADM Utility
- SharePoint Solutions Overview
- Automating Solution Package Creation for Windows SharePoint Services by Using MSBuild
I’ll write more guidelines concerning ALM (Application Lifecycle Management) and other project practices for SharePoint development in upcoming posts.