This morning I spent about an hour with one of our partners and their customer discussing how we ensure quality in the design and development of Microsoft Dynamics CRM. The request for my input was not surprising (nor unusual) given the highly customizable and extensible nature of our product, as well as the stringent requirements around security, reliability, data integrity, and performance that are necessary for mission-critical line-of-business applications. During that conversation, it occurred to me that this topic would be interesting and potentially useful to the broader MS CRM community. After all, ensuring quality in an end-to-end solution doesn’t end with the initial installation of the core software. With that said, there are many facets to our internal engineering systems, processes, tools, and approaches. Thus, I will be blogging over time about various aspects of our quality assurance approaches in bite-sized chunks.
While there are many testing models, levels, and techniques used in the industry, our overarching testing strategy is Scenario Based Testing. This is not meant to imply that we do not utilize or employ aspects of other testing models (which we do), but rather that the primary determination of our product’s readiness for general usage is rooted in a “scenario” approach. I should note that Scenario Based Testing is not a new testing model or concept. However, as with many such models, the depth at which they are employed varies greatly within the industry. While there are various definitions for Scenario Based Testing, the one I currently use is as follows:
“Scenario Based Testing is a holistic approach rooted in component interdependence & realized through focus on end-to-end usage scenarios.”
Okay, I would agree that the definition sounds a bit wordy. What it really suggests is a testing model that exercises a product’s functionality in the ways in which real partners and customers would actually use it. The point about component interdependence is simply a recognition of the fact that verifying that an individual component behaves as specified does not mean that an end-to-end system (with various dependencies and interconnection points) will function as designed. This is even more important for a product like MS CRM because our platform relies heavily on (and interacts with) many components that we do not directly develop or test.
Our move toward Scenario Based Testing within our R&D organization started with educating and socializing the concept within our test community. This was not hard, as our test engineers uniformly feel personally responsible when bugs are discovered in the component areas they are assigned to. Because many of the most critical bugs are found at the integration points or grey areas between components, it’s not hard to understand why Scenario Based Testing resonates with them. Beyond this, our internal testing processes and tool infrastructures are designed to ensure scenario test case development. For example, our top-tier test suites are in fact our SVTs (Scenario Verification Tests) and are used to determine general product ship readiness. Finally, we have organized our testing groups to optimize for scenarios. For example, within our current project, one test team is responsible for all of our customization and extensibility functionality so that they are accountable for complete end-to-end VAR and ISV scenarios. Incidentally, they call themselves the ICE team (ISV Customization & Extensibility). I think it is a very “cool” acronym for the team (slight pun intended).
Before I close out, let me make this discussion a bit more grounded by giving you an example of what one of our test scenarios might look like. To set the stage, a scenario is defined as a series of steps that are taken by a user (or a set of users) in a meaningful sequence to leverage product functionality. In other words, a complete business problem is solved with this series of steps. For the purposes of this example, the scenario is not simple functionality within the bounds of a single feature, but rather deep functionality interacting across multiple features:
Test Case Scenario: Linking/promoting in Outlook:
1. Create a new Outlook contact, click the Create in CRM button, and save
2. Update the contact, adding an email address (as another user)
3. Click the View in CRM button on the Outlook contact, and create a new opportunity for the contact
4. Create a new email in Outlook, click the To button, and add the contact to the email from the CRM ABP
5. Click the Create in CRM button for the email, and also add the opportunity as the regarding object
6. Check that the email was sent, the activity was created, and the email is in the Sent Items folder
7. Reply to the email from the contact's mailbox
8. Wait for the email received by the CRM user to be auto-tagged
9. Click View in CRM on the email, and verify that the regarding field is set to the opportunity
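To show how a scenario like this might drive an automated test, here is a minimal sketch in Python. The `Contact`, `Opportunity`, and `Email` classes, the `create_in_crm` and `auto_tag` helpers, and the tracking-token format are all invented stand-ins for illustration; they are not the real Microsoft CRM object model or APIs. The point is the shape of the test: a single end-to-end walk through the steps, with verification at the integration points rather than per component.

```python
# A hypothetical in-memory model of the scenario above. Every class and
# helper here is an invented stand-in, not the actual CRM object model.

class Contact:
    def __init__(self, name, email=None):
        self.name = name
        self.email = email
        self.in_crm = False          # set by the "Create in CRM" action

class Opportunity:
    def __init__(self, contact, topic):
        self.contact = contact
        self.topic = topic

class Email:
    def __init__(self, to, subject, regarding=None):
        self.to = to
        self.subject = subject
        self.regarding = regarding   # linked CRM record, if any
        self.tracked = False

def create_in_crm(item):
    """Simulates the "Create in CRM" button: promotes an Outlook item."""
    if isinstance(item, Contact):
        item.in_crm = True
    elif isinstance(item, Email):
        item.tracked = True
        item.subject += " CRM:0001"  # simulated tracking token

def auto_tag(reply, sent_items):
    """Simulates auto-tagging an inbound reply via its tracking token."""
    for original in sent_items:
        if original.tracked and original.subject in reply.subject:
            reply.tracked = True
            reply.regarding = original.regarding
            return reply
    return reply

# --- Walk the scenario end to end ---
vanessa = Contact("Vanessa Jones")
create_in_crm(vanessa)                               # create Outlook contact, promote to CRM
vanessa.email = "vanessa@example.com"                # second user fills in the email address
opp = Opportunity(vanessa, "Strategic deal")         # new opportunity for the contact
mail = Email(vanessa, "Catching up", regarding=opp)  # email with regarding object
create_in_crm(mail)                                  # track the email (appends token)
sent_items = [mail]
reply = Email(None, "RE: " + mail.subject)           # reply arrives from the contact
auto_tag(reply, sent_items)                          # auto-tagging resolves the token
assert reply.regarding is opp                        # regarding carried across the round trip
```

Note that the final assertion checks a property that spans four features (contact promotion, opportunity linking, email tracking, and auto-tagging); no single component-level test would catch a break in that chain.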
To ground this further, here is the same scenario expressed as a user narrative. Nancy discovers that an ex-colleague of hers, Vanessa Jones, is now working in a strategic position for a large company. Nancy has a short phone conversation with Vanessa and realizes that Vanessa’s company is a potential customer.
Nancy adds Vanessa’s contact information as a new Outlook contact (File + New + Contact). While filling out the contact information Nancy clicks on the “Create in CRM” button. She saves the Outlook contact form.
When she saves the form she realizes she didn’t get Vanessa’s email address. Nancy asks her assistant (also a CRM user) to call Vanessa back and ask for her email address. Nancy’s assistant calls Vanessa immediately and updates the email address for Vanessa (Nancy’s assistant uses a separate PC / CRM account). Twenty minutes later Nancy opens the Outlook contact record for Vanessa and notices the email address has been updated.
Nancy clicks on the “View in CRM” button from the Outlook contact record. This opens the CRM web form. Nancy creates a new opportunity with Vanessa as the contact.
Nancy decides to send an email to Vanessa. She clicks on the “To…” button in the Outlook email window. She expects to choose Vanessa from the Outlook Address Book \ Contacts section, but she notices there is a new entry in the Address Book called Microsoft CRM \ Contacts. She selects that part of the address book and notices that Vanessa is one of the contacts listed. She selects Vanessa from the list and clicks OK to add Vanessa as the email recipient. Next, Nancy clicks on the “Create in CRM” button. She also clicks on the “Regarding” button and sets the value to the opportunity that she just created. Nancy fills out the body of the email and sends the email to Vanessa.
After sending the email, Nancy notices the item is included in her sent items folder. She also notices a special icon for the item (signifying the item is tracked in CRM).
About an hour later Nancy receives a response from Vanessa. She notices the subject is slightly different than the original mail – it now contains the CRM tracking token information at the end of the subject. Shortly after the email is received Nancy notices the icon for the item changes (signifying the item is tracked in CRM). She opens the email item. On the toolbar she notices a “View in CRM” button (which allows her to view the web-based email activity form). She also notices the item is linked to the opportunity she created (there is a button that says “Regarding:” plus the name of the opportunity she created).
So, in summary, while we do extensive component-level testing using a variety of methodologies, our overall testing strategy is scenario based, as it offers the most comprehensive quality gates for catching those really tricky bugs that cross component or system boundaries. As a takeaway, these techniques are equally applicable when testing and validating full end-to-end customer deployments of Microsoft CRM.