Better approaches to unit testing PowerShell scripts that call SharePoint cmdlets


If you have come across my blog before you may have seen my recent post on how to unit test PowerShell scripts that call SharePoint cmdlets. After I posted it, a conversation started between myself and Jakub Jareš (PowerShell MVP and Pester owner) around the approach I had taken and some of its flaws. What came out of that was a much better approach to what I was doing, as well as a massive refactor of my entire test suite for the xSharePoint DSC resources – so I wanted to capture the approach here for anyone else who is interested.

So to start with, what was wrong with my previous approach:

  • Everything was dynamic – the execution of the SP cmdlets was done through runtime creation of a script block. This meant there was no IntelliSense when editing, and there was no way for a test to validate that parameters were of the right type or that all required parameters were included
  • Testing multiple versions of SharePoint – it was next to impossible to run the Pester tests in a way that proved the scripts would work against both SharePoint 2013 and 2016, the issue being the slight differences in parameters between versions. Without a full set of cmdlets from each version available to test against, I was only ever going to get out of the tests what I mocked
  • Complexity of solution – the previous solution wasn’t necessarily straightforward; you needed to understand how it worked to be able to read and contribute to it

So with those things in mind, Jakub suggested that instead of doing it the way we had, we look at mocking the entire cmdlet set for SharePoint and using that for our tests. He even went so far as to provide a bit of script that would iterate through the cmdlets and generate the stubs, which I adjusted very slightly to work with SharePoint:

function Write-xSharePointStubFiles() {
    param
    (
        [parameter(Mandatory = $true)] [System.String] $SharePointStubPath
    )

    Add-PSSnapin Microsoft.SharePoint.PowerShell

    $SPStubContent = ((Get-Command | Where-Object { $_.Source -eq "Microsoft.SharePoint.PowerShell" } ) | ForEach-Object -Process {
        $signature = $null
        $command = $_
        $metadata = New-Object -TypeName System.Management.Automation.CommandMetaData -ArgumentList $command
        $definition = [System.Management.Automation.ProxyCommand]::Create($metadata)
        foreach ($line in $definition -split "`n")
        {
            if ($line.Trim() -eq 'begin')
            {
                break
            }
            $signature += $line
        }
        "function $($command.Name) { `n $signature `n } `n"
    }) | Out-String

    foreach ($line in $SPStubContent.Split([Environment]::NewLine)) {
        $line = $line.Replace("[System.Nullable``1[[Microsoft.Office.Server.Search.Cmdlet.ContentSourceCrawlScheduleType, Microsoft.Office.Server.Search.PowerShell, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]", "[object]")
        $line = $line.Replace("[System.Collections.Generic.List``1[[Microsoft.SharePoint.PowerShell.SPUserLicenseMapping, Microsoft.SharePoint.PowerShell, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c]], mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]", "[object]")
        $line = $line -replace "\[System.Nullable\[Microsoft.*]]", "[System.Nullable[object]]"
        $line = $line -replace "\[Microsoft.*.\]", "[object]"

        $line | Out-File $SharePointStubPath -Encoding utf8 -Append
    }
}

There are a few things going on here. Firstly, we iterate through all of the cmdlets from the SharePoint snap-in and generate an empty stub for each one, adding them to a giant string of text which we save to a file later on. Before we commit that though, we need to deal with one more issue: custom data types. Generating the stubs gets us to a much better place, but if the machine I’m executing tests on doesn’t have a clue about data types like SPAssignmentCollection, the mocks are useless. So through some find-and-replace work with regular expressions, we change any of the data types starting with “Microsoft.” in the parameter declarations to generic objects. What we lose from this is that we won’t know for sure that our tests are passing objects of the correct data type, but we gain the ability to run them anywhere. We explored generating some stub types to go with the stub cmdlets, but this became more complex when we looked at implementing some of the behaviours in the stubs (think of the “PipeBind” types used in the SharePoint PowerShell cmdlets – in reality these will let you pass in things like strings, GUIDs and other specific object types, and they handle the translation of that for the cmdlet, so they weren’t going to be as simple to stub out as the cmdlets were).
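To make the result of that replacement concrete, here is a hand-written sketch of what a generated stub might look like. This is an illustrative example only – the real stubs come out of Write-xSharePointStubFiles and their exact parameter sets vary between SharePoint versions – but it shows the shape: a full param block for validation, with the SharePoint-specific types (like the PipeBind and SPAssignmentCollection parameters) already swapped for [object], and no body at all.

```powershell
# Hypothetical example of a generated stub after the type replacements.
# The function exists purely so Pester can validate parameters and mock the call.
function Get-SPWebApplication {
    [CmdletBinding()]
    param(
        [Parameter(Position = 0, ValueFromPipeline = $true)]
        [object]
        ${Identity},                        # was a PipeBind type before replacement

        [Parameter()]
        [switch]
        ${IncludeCentralAdministration},

        [Parameter(ValueFromPipeline = $true)]
        [object]
        ${AssignmentCollection}             # was SPAssignmentCollection before replacement
    )
    # Intentionally empty - behaviour comes from Pester mocks at test time
}
```

Because the stub declares real parameters, calling it with a misspelled or missing parameter fails at test time, which is exactly the validation the old dynamic script blocks could never give us.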

Now that we had a way to generate the stubs from SharePoint, this opened up a much more interesting scenario – what if we generated the stubs from multiple SharePoint versions? Then we could run our tests against the 2013 cmdlets, and run them again against the 2016 cmdlets, giving us a much more realistic view of how our parameter use was going to work on actual deployments. This was a big win for us. It also means that where PowerShell cmdlets change between things like cumulative updates, it’s going to be possible to generate a new set of stubs from that build and then execute the tests against those as well. The next step was to integrate this into Pester so we only had to write the tests once and have the framework run them against each cmdlet set. A quick chat with Jakub and a review of the Pester documentation later revealed the “Script” parameter on Invoke-Pester, which gives us the ability to pass an array of hashtables containing the path and parameters for each test run – we built this into a quick test harness script which calls both of our stub modules.

function Invoke-xSharePointTests() {
    param
    (
        [parameter(Mandatory = $false)] [System.String] $testResultsFile
    )

    $repoDir = Join-Path $PSScriptRoot "..\" -Resolve

    $testCoverageFiles = @()
    Get-ChildItem "$repoDir\modules\xSharePoint\**\*.psm1" -Recurse | ForEach-Object { $testCoverageFiles += $_.FullName }

    $testResultSettings = @{ }
    if ([string]::IsNullOrEmpty($testResultsFile) -eq $false) {
        $testResultSettings.Add("OutputFormat", "NUnitXml" )
        $testResultSettings.Add("OutputFile", $testResultsFile)
    }
    Import-Module "$repoDir\modules\xSharePoint\xSharePoint.psd1"

    $results = Invoke-Pester -Script @(
        @{
            'Path' = "$repoDir\Tests"
            'Parameters' = @{
                'SharePointCmdletModule' = (Join-Path $repoDir "\Tests\Stubs\SharePoint\15.0.4693.1000\Microsoft.SharePoint.PowerShell.psm1")
            }
        },
        @{
            'Path' = "$repoDir\Tests"
            'Parameters' = @{
                'SharePointCmdletModule' = (Join-Path $repoDir "\Tests\Stubs\SharePoint\16.0.4316.1217\Microsoft.SharePoint.PowerShell.psm1")
            }
        }
    ) -CodeCoverage $testCoverageFiles -PassThru @testResultSettings

    return $results
}

You can see this script does a few things – it generates a list of files we want to calculate code coverage for, sets up some test output parameters, and then calls Invoke-Pester with our hashtables pointing to two different sets of the SharePoint stub files. Putting the stubs in a folder named for the build number also let me mock up my calls that test for which version of SharePoint is installed – if I have a resource which needs to know what build of SharePoint is installed I call one of my utility cmdlets, Get-xSharePointInstalledProductVersion, so my mock for that just needs to know the name of the folder that the module was loaded from.

$versionBeingTested = (Get-Item $Global:CurrentSharePointStubModule).Directory.BaseName
$majorBuildNumber = $versionBeingTested.Substring(0, $versionBeingTested.IndexOf("."))

Mock Get-xSharePointInstalledProductVersion { return @{ FileMajorPart = $majorBuildNumber } }

The above mock returns the major build number from the folder name, meaning that each time the tests run it will return either 15 (for SharePoint 2013) or 16 (for SharePoint 2016), which is exactly how it would behave on an actual server. Any logic in my scripts that changes based on the installed version of SharePoint can now be tested in a scenario far closer to how it will run for real when the scripts are deployed.
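For completeness, this is roughly how a test file wires those pieces together on its end. The param name matches the 'SharePointCmdletModule' key the harness passes via Invoke-Pester; the Describe name, the default path and the It block here are hypothetical placeholders rather than the actual xSharePoint test code.

```powershell
# Sketch of the top of a test file that consumes the stub module from the harness.
param(
    [string] $SharePointCmdletModule = (Join-Path $PSScriptRoot "\Stubs\SharePoint\15.0.4693.1000\Microsoft.SharePoint.PowerShell.psm1")
)

Describe "xSPExampleResource" {
    # Load the stubs so the SP cmdlets exist and can be mocked by Pester,
    # and record which stub module is in use for the version mock below.
    Import-Module $SharePointCmdletModule -WarningAction SilentlyContinue
    $Global:CurrentSharePointStubModule = $SharePointCmdletModule

    $versionBeingTested = (Get-Item $Global:CurrentSharePointStubModule).Directory.BaseName
    $majorBuildNumber = $versionBeingTested.Substring(0, $versionBeingTested.IndexOf("."))

    Mock Get-xSharePointInstalledProductVersion { return @{ FileMajorPart = $majorBuildNumber } }

    It "reports the version matching the stub folder name" {
        # (call the resource's methods here and assert on their results)
    }
}
```

Because the harness runs this same file twice with different SharePointCmdletModule values, every test in the file automatically exercises both the 15.x and 16.x cmdlet signatures.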

There were a few things along the way which didn’t come across so easily though. The first was distributed cache (as if that wasn’t already the bane of every SharePoint administrator’s life). I tried to use the same script as we did for the SharePoint cmdlets to generate the mocks for that, but I continued to see errors around the metadata for each function. At the end of the day I only needed 3 functions, so I simply chose to manually type out the stubs for this one. The second was an offshoot of taking out all the custom types when we generated the stubs. There are times when we need to call a specific method on an object that is returned from a SharePoint cmdlet – think about when you call Get-SPWebApplication and make some changes to the object: you call the .Update() method on it to commit the updates back to SharePoint. So we needed to mock these calls up as well, which in the tests we did like this:

Mock Get-SPServiceApplication { return @(
    New-Object Object |
        Add-Member NoteProperty ID ([Guid]::Parse("21946987-5163-418f-b781-2beb83aa191f")) -PassThru |
        Add-Member NoteProperty TypeName "User Profile Service Application" -PassThru |
        Add-Member ScriptMethod SetSynchronizationMachine {
            param($computerName, $syncServiceID, $FarmUserName, $FarmPassword)
        } -PassThru
)}

The above example is from the tests for the user profile sync service, where we need to call the SetSynchronizationMachine() method on the service app object. What we are doing here is building up an object at runtime, adding properties to it, and then using the ScriptMethod member type to add a function to that object. In this case I don’t really care what that function does, so again it’s just an empty stub, and I’ve let all the parameters be objects – but I could (and really should) put types on each parameter so that when it gets called in my test I know I’m passing the expected parameter types. Either way, this lets me test the main scripts without having to do obscure things to them in order to separate pieces out for testing.
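A typed variant of that ScriptMethod looks like the sketch below. The parameter types here are my assumptions about what SetSynchronizationMachine expects, not taken from the real object model, and the $Global:SetSyncMachineCalled flag is a hypothetical helper added so a test can later assert the method was actually invoked.

```powershell
# Hypothetical variant of the mock object with typed script method parameters,
# so a call with the wrong argument types fails inside the test run.
$serviceApp = New-Object Object |
    Add-Member NoteProperty TypeName "User Profile Service Application" -PassThru |
    Add-Member ScriptMethod SetSynchronizationMachine {
        param(
            [string] $computerName,
            [Guid]   $syncServiceID,
            [string] $FarmUserName,
            [string] $FarmPassword
        )
        # Record the call so an assertion can check it happened
        $Global:SetSyncMachineCalled = $true
    } -PassThru

# Exercise the typed method as a resource script would
$serviceApp.SetSynchronizationMachine("SYNC01", [Guid]::NewGuid(), "DOMAIN\svcAccount", "password")
```

Passing, say, a string where the [Guid] parameter is expected would now throw a conversion error rather than silently succeeding, which is exactly the kind of mistake the untyped stub lets through.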

The last piece of the rework was around my approach to testing: I had really been writing tests with the goal of “seeing passes on the CI server”, which was not the right approach to be taking. Now we are making much better use of the Context containers in tests to describe the different states SharePoint could be in, and then testing my methods within those. For the xSPCreateFarm resource the contexts are “no farm exists locally”, “a farm exists locally” and “no farm exists but an unsupported version of SharePoint is installed” – exactly the three scenarios we could expect to see this script run in (at least as far as the scenarios we have scripted for so far) – and we run the methods within each context, returning different values from the mocks to match. This resulted in a much more comprehensive view of how our scripts could be expected to run, and in adjusting for it I picked up a heap of little bugs and saw the code coverage of the solution jump from 70% to 87% at the same time, which will help us protect the integrity of the scenarios we know we test for. All of these changes are currently live in the dev branch of xSharePoint if you want to see what it looks like.
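The Context layout for a resource like that ends up looking roughly like this. The Context names mirror the scenarios above, but the mocked cmdlets, the return values and the empty It blocks are my assumptions to illustrate the structure, not the exact xSPCreateFarm test code.

```powershell
# Sketch of state-based Context containers, one per SharePoint state.
Describe "xSPCreateFarm" {
    Context "no farm exists locally" {
        Mock Get-SPFarm { return $null }
        It "returns absent from the get method" {
            # call Get-TargetResource here and assert on the result
        }
    }

    Context "a farm exists locally" {
        Mock Get-SPFarm { return @{ Name = "SP_Config" } }
        It "returns present from the get method" { }
    }

    Context "no farm exists but an unsupported version of SharePoint is installed" {
        Mock Get-SPFarm { return $null }
        Mock Get-xSharePointInstalledProductVersion { return @{ FileMajorPart = 14 } }
        It "throws from the set method" { }
    }
}
```

Each Context re-declares only the mocks that differ for its scenario, so the state SharePoint is in reads directly off the test file and the assertions inside each It block stay small.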

So looking at the initial issues from the previous approach and how they have been addressed:

  • Everything was dynamic – now we are back to calling the SP cmdlets directly, so with a quick call to load one of our stub modules we get intellisense and syntax validation
  • Testing multiple versions was almost impossible – now we have way better validation of how SharePoint expects to be called in different versions, and it took very little to set up
  • Solution was complex – the new approach lets us write scripts the ways we have always written scripts, and we can model the tests around it, instead of writing our scripts specifically to let us test things

There you have it – so if you are serious about unit testing stuff that calls SharePoint, this is absolutely a better approach to take!
