ALM Rangers Dogfooding … proving the new process and clearing up the last confusion (we hope) – Part 7

Backed by a successful dogfooding story!

The vsarGuidance – TfsBranchTool team was the first team to start and finish a project using the latest revision of our Ruck process. Apart from a minor burn-down hiccup, resulting from the team taking on more work mid-sprint (scope creep), the three sprint burn-down charts pleased even our most critical and pedantic Ruck Masters.
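For readers who have not watched a burn-down chart wobble, here is a minimal sketch of what that "hiccup" looks like. The numbers are invented for illustration only, not the team's actual sprint data: remaining work should fall steadily toward zero, but accepting extra work mid-sprint bumps the line back up.

```python
# Hypothetical burn-down illustration; all figures are invented.

def burn_down(initial_work, daily_velocity, added_work=None, days=10):
    """Return the remaining work at the end of each day of a sprint.

    added_work maps day -> extra effort accepted mid-sprint (scope creep).
    """
    added_work = added_work or {}
    remaining = initial_work
    history = []
    for day in range(1, days + 1):
        remaining += added_work.get(day, 0)   # scope creep bumps the line up
        remaining = max(0, remaining - daily_velocity)
        history.append(remaining)
    return history

# Ideal sprint: 40 units of work, burning 4 units/day, hits zero on day 10.
ideal = burn_down(40, 4)
print(ideal)  # [36, 32, 28, 24, 20, 16, 12, 8, 4, 0]

# Same sprint, but 8 extra units accepted on day 5: the chart rises on
# day 5 and the team no longer reaches the zero line by day 10.
creep = burn_down(40, 4, added_work={5: 8})
print(creep)  # [36, 32, 28, 24, 28, 24, 20, 16, 12, 8]
```

The uptick on day 5 is exactly the kind of blip a Ruck Master spots on the chart and asks the team to explain.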

[Three sprint burn-down charts, one per sprint]

… a huge THANK YOU to a phenomenal team, and sorry for always pushing back on your ideas, which I regarded as scope creep!


So, what are the new process innovations and why have we introduced them?

Related posts you can (should) read first:

A colleague recently told me that the new ALM Rangers process is confusing, which prompted this blog post. Here are some of the innovations we introduced with the latest version of our Ruck (practitioners' Scrum) process:

Innovation 1: Quarterly triage, smaller project waves

Last year we had a pattern of two triages and two large waves of ALM Ranger projects, as shown below. The Visual Studio 2012 ALM Readiness "wave" even reached a total of 20 projects, pushing the Rangers, their ecosystem, the supporting infrastructure and the Ruck process to the limits.

[diagram: two triages with two large project waves]

We introduced a quarterly triage, resulting in four smaller waves of time-boxed ALM Ranger projects. The second wave has an odd shape because we block out December as a low-bandwidth, little-to-zero development, maximum-focus-on-family month.

[diagram: quarterly triage with four smaller project waves]

Why? We are aligning ourselves with a quarterly product cycle, delivering less quantity and better quality, and reducing the pressure on our part-time volunteers. Time-boxing each project forces the teams to define their objectives carefully and constantly balance features against capacity. Using the motto "we can always do better (and add details) tomorrow", the teams strive to deliver a release and then let the community decide whether an additional release is needed, or whether the team can re-focus on another adventure.

Findings: The time-boxed approach made the team think carefully about what features to deliver and how, resulting in a focused team that worked as one. At the start of the third sprint, development was complete, and thanks to continuous reviewer feedback and close alignment with the product group, it was possible to decide that the optional Epic would deliver no short- to long-term value. Instead of "wasting" capacity on a low-value feature, the team could re-group and focus on raising the quality bar.

Innovation 2: Smaller teams

We have switched from large (>10-member) project teams consisting of contributors and reviewers, to smaller (5+1+1) teams optimally consisting of four contributors and one project lead, plus one product owner and one Ruck Master.

Why? We believe that teams made up of part-time, geographically distributed and culturally diverse members work better when they are small, and the pressure on the project lead is dramatically reduced. Reviewers should not be committed from the start, which leaves committed yet idle resources, but should instead be looped in as needed.

Findings: This team was simply "fantastic" … stars! Team members supported and motivated each other, and in the end even the product owner was surprised by the final deliverable, which he was "forced" to mention and demo in his latest interview on Channel 9. The only real challenge was pushing back against scope creep, keeping the team focused on what had to be (not could be) delivered … but that was a great challenge to have!

Innovation 3: A deliverable after every sprint

We began to enforce a deliverable after each sprint, whereby the teams can deliver a show-what-we-have video, a tool or guidance deliverable for review, or the final product.

Why? Delivering something is a great motivator, and getting early feedback is crucial to adjusting to continuously changing requirements. Sharing ideas, features and/or guidance and harvesting feedback is more important than polishing the content, especially in the early sprints of a project.

Findings: Releasing the BETA tooling early allowed reviewers to help us test the features, raise the code quality and adjust to the changing business requirements. Feedback from the testing and review influenced the subsequent sprints and the final product … positively!

Innovation 4: One Epic per Sprint

We encourage teams to focus on one Epic in a Sprint, rather than starting all Epics in parallel.

Why? Focusing on one Epic at a time allows the team to collaborate on that Epic, rather than working in isolation and "hoping" that everything will come together at the end.

Findings: For us it was a natural way to work through the backlog, but it is not a silver bullet and should not be used as a de-facto model.

Innovation 5: New guidance, templates and quality gates

We introduced new documentation and tooling guidance, templates and quality gate validations.

Why? First impressions last! Delivering a handful of top-quality features resonates far better with users than 1,001 mediocre features.

Findings: Thanks to a phenomenal quality reviewer and mentor (Michael Fourie), the final product not only delivers the expected features, but makes us really proud. Feedback on the recently shipped ALM Rangers Branching and Merging Guide v2.1, based on the new documentation templates, was really supportive of this initiative. We are convinced the improved quality of the latest quick response sample code will be appreciated as well.

Any more questions or concerns? If yes, add a comment to this post and let’s start a chat!