Building Books at patterns & practices

Book building is part art, part science. I've built a few books over the years at patterns & practices. In this post, I'll share a behind-the-scenes look at what it takes. I'll save the project management piece for another day and focus on the core of book building.

Book Examples
Before we get into the details, here's a quick look at my past books, including Improving Web Application Security and Improving .NET Application Performance and Scalability.

If you're familiar with the books, particularly Improving Web Application Security and Improving .NET Application Performance and Scalability, you'll know that they aren't like typical books. They're optimized to be executed rather than read. The expectation is that you'll use them to improve your effectiveness on the job. That's why you can get the books in the bookstore, online, or in Visual Studio ... in print, PDF, or HTML.

Competitive Assessments
The books are targeted at real-world problems and real-world solutions, and they've been used in competitive assessments.

Book Approach
At a high level, you can think of the approach as five main workstreams:

  1. Researching and analysis.
  2. Designing.
  3. Building.
  4. Testing.
  5. Releasing.

It's a "test-driven" approach, meaning we start with tests (questions and tasks) that our prescriptive guidance needs to pass. The bulk of the work is building "nuggets" that can be used standalone.  We then assemble an end-to-end guide.   Throughout the process we verify with test cases, lab repros, internal and external reviews, both with subject matter experts and every day users.

Researching and Analysis
This workstream is about getting clarity on the problem space.  It includes:

  • Identify the Key Resources in the Domain.  This is about finding all the relevant blogs, sites, articles, code, slides ... etc. for a given domain or problem area.  Here's an example of our Visual Studio Team System Resource List.
  • Identify the Key People in the Domain.   This includes enumerating internal folks (product support, field, product teams, experts) as well as external folks (partners, MVPs, customers, experts, ... etc.)
  • Identify the Key Categories in the Domain.  Knowing the terms in the domain speeds up our research efforts.  For example, if we know the names of the features for a product, we can quickly explore the product docs.  If we know how the community is tagging the information, we can quickly parse the broader "Web community KB."  Reviewing the key blogs in the space is a fast way to figure out the folksonomy that people use.  This helps us build information models for the space.
  • Identify the Key Questions, Tasks, and Scenarios.   This is a process of making tickler lists of the questions that users ask, the tasks they perform, and the key scenarios they face.
  • Identify Key Contributors.   This includes finding folks that want to participate and actually create content.  A lot of folks raise their hands, but only a few deliver.  That's the nature of the beast.
  • Identify Key Reviewers.   Finding reviewers is pretty easy since lots of folks want to steer.  The trick is finding the folks that inject insights and dramatically raise the quality of the guidance.  We cherish our key reviewers.
  • Identify Key Reference Examples.  This is about gathering all the working examples we know of from support, the field, and customers.  I like to be able to point to working instances of the guidance in practice.  I also like to reverse engineer success (success leaves clues).
  • Build data points.  If you're into information, these would rock your world.  These are dense collections of insights that we share among the team.  Sometimes we use a Wiki, but we always use Word docs to capture and share them, since they get very dense and we track the sources of the information.

For more information on researching, see my related posts: Analyzing a Problem Space and How To Research Efficiently.

Designing
This workstream is an iterative process of spiraling down on solutions.  It includes:

  • Create a scenario frame.  This is where we frame out the guidance.  It includes organizing scenarios into key categories and sub-categories.  This is where the folksonomy and information models help.  Here are examples of our Visual Studio Scenario Frames.
  • Review the scenario frame.   This is painful but rewarding.  We review the scenario frames with customers and experts, internally and externally.  It's a spiral down process.
  • Identify the priorities.   Once we have scenario frames and we've spent the time reviewing them, we're at a great vantage point to separate the forest from the trees.  One technique we use to prioritize is asking product team members to think of the top 10 issues they'd like to make go away from their inbox ;)
  • Identify candidate nugget types and titles.   This is where we figure out whether something should be a "How To" set of steps or an "Explained" article, such as an explanation of how something works or its intended usage patterns.  We hammer on the titles so that they represent compelling information, set the right expectations, and are findable on the Web.
  • Identify the objectives for each nugget.   We identify the key "tests for success" in each nugget by listing the specific tasks, questions, or objectives you'll accomplish by using that particular nugget.  For an example, see the "Objectives" section in How To Protect from Injection Attacks in ASP.NET.
  • Prototype the Table of Contents (TOC).   I do this on the whiteboard and paint the broad strokes first.  I factor the guide into two main parts: a fast path through the key strategies and concepts, and an actionable reference of nuggets (principles, patterns, and practices).  I also come up with a story I can tell in the hall.  For example, the story behind Improving Web Application Security was "how to build a hack-proof app."  That set the stage and kept things on track when things got complex (as they always do).
  • Create conceptual frameworks.   This involves figuring out the most effective ways to look at and tackle the problem space.  For examples, see Performance Fast Track and Security Fast Track.

For more information, see my related posts: Guidance 2.0, Scenarios in Practice, Scenario Frames for Guidance, and Driver's Guide vs. Owner's Manual.

Building
This workstream is where we do the bulk of our solution engineering.  It includes:

  • Prototype nuggets.  This is where we frame out and create skeletal examples of nuggets.  We incrementally render the final nugget by adding information and details as we learn, build, and test.
  • Create problem repros.   Reproducing problems helps us understand the problem space and makes it easier to share the problems and get more eyes on them.  We also need to be able to test the solutions against the problems.  We typically capture problem repros in Notepad.
  • Create solution repros.   This is where we create the leanest, most stripped-down example of solving the problem.  We capture these as steps in Notepad, including copy+pastable code or configuration steps as needed (see the sketch after this list).  Each person on the team should be able to quickly run through the solution repro.  This helps us find the sticking spots and reduce the friction.
  • Create key figures.   This is a painful process.  The goal is to find the fastest way to convey the information visually.   I think my favorite all-time examples are our "cartoons" for showing our security scenarios and solutions.
  • Write nuggets.   This involves fleshing out the details.   I try to keep the text as tight as possible throughout the process.  It's more work to make it dense, but it pays off wherever we spend the time.
  • Write chapters.  This is similar to writing nuggets, but at a different zoom level; the chapters are where we show the forest.
  • Modify the TOC.  This is a continuous process.  As we learn more about the space, it gets easier to organize the chapters and nuggets into parts and a story that we can stand behind.  For an example TOC, see the Landing Page for Improving Web Application Security.
  • Refactor nuggets and chapters.   As the TOC evolves, we rework the nuggets and chapters to match.
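
To make the repro pair concrete, here's a minimal sketch of the kind of copy+pastable problem/solution pair we capture.  It borrows the SQL injection scenario from How To Protect from Injection Attacks in ASP.NET; the Users table, field names, and method names here are hypothetical, for illustration only.

    // Minimal problem/solution repro pair (hypothetical Users table).
    using System.Data;
    using System.Data.SqlClient;

    class InjectionRepro
    {
        // Problem repro: user input is concatenated into the SQL statement,
        // so input such as "'; DROP TABLE Users--" changes the query itself.
        static string FindEmailUnsafe(SqlConnection conn, string userName)
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT Email FROM Users WHERE UserName = '" + userName + "'", conn);
            return (string)cmd.ExecuteScalar();
        }

        // Solution repro: the same query with a typed parameter, so the input
        // is treated as data rather than as executable SQL.
        static string FindEmailSafe(SqlConnection conn, string userName)
        {
            SqlCommand cmd = new SqlCommand(
                "SELECT Email FROM Users WHERE UserName = @userName", conn);
            cmd.Parameters.Add("@userName", SqlDbType.NVarChar, 50).Value = userName;
            return (string)cmd.ExecuteScalar();
        }
    }

The point of capturing both halves is that anyone on the team can paste the pair into a test harness, confirm the problem, and confirm the fix.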

Testing
This workstream is about verifying the solutions from both a technical and user experience perspective.  It includes:

  • Test solutions in the lab.  Since the bulk of our books are executable, it's fairly straightforward to test the solutions in the lab to make sure they work.  Scenarios and scoping are important, so this is where our scenario frames and the objectives for each nugget really help.
  • Test solutions in the field.  This is where we try to find customers that share the problems we've solved and put our solutions into practice.   Usually, this involves leveraging our field or product support teams, who act as a proxy for the customer.  We also have a few customers running alongside us who can help test in their scenarios.
  • Dog Fooding.  One rule I have on our team is eating our own dog food, so we have first-hand experience with the usability of the guidance and can directly use it to solve problems.  See Eating One's Own Dog Food on Wikipedia.
  • Review solutions.   This includes reviews from internal and external stakeholders.  We do this incrementally.  First our immediate team has to approve, then our key contributors and reviewers (the ones with more bandwidth and commitment), and then a broader audience (such as on CodePlex).  For an example of the sets of reviewers, see the Landing Page for Improving .NET Application Performance and Scalability.
  • Improve solutions.  Rather than just finding issues or proving that something technically works, this is about raising the bar in terms of user experience, or finding a more effective technique or solution.  We use timeboxing to help scope the work.  See How To Use Time Boxing for Getting Results.

For more information, see my related post: Test-Driven Guidance.

Releasing
This workstream is about making the guidance available to customers.  It's incremental and iterative, and we stabilize over time.  It includes:

  • Create an online community KB.  This is where we create an online site to share the nuggets.  For an example, see our Visual Studio Team System Online Community KB on CodePlex.  By sharing the nuggets here, we can test and vet them before baking them into more durable form factors such as the PDF and print versions.
  • Create a PDF.   We do this once we've stabilized enough that the PDF is a good example of what the final book will be.  While details may change, the overall frame is durable, and we use the PDF as a simple way to share the book before we're baked.  For an example PDF, see patterns & practices Team Development with Visual Studio Team Foundation Server.
  • Create the book.   This includes creating a fit-and-finish PDF and print-quality graphics, as well as coordinating with our MS Press team to create the final print version of the book and get it on the shelf.
  • Port to MSDN.   We port the guidance to MSDN so that we get better reach and because customers expect it there.  In the ideal scenario, we host the PDF and HTML versions and have a print version as well.

Keep in mind that it's a stabilization process over time of various form factors and channels.  We do our agile guidance on CodePlex, then stabilize and port to MSDN and a book when we're baked.  For more information, see my related post CodePlex, GE and MSDN.

Key Concepts
I walked through the process first so that you have a good idea of the end-to-end approach.  Here I'll highlight some of the key concepts that underlie my approach:

  • Solution Engineering.  This captures the heart of what we do.  Rather than "writing a book", we're really engineering solutions for problems.  That's why the team includes architects, testers, developers ... etc.
  • Question-Driven.  I'm a fan of using questions to prioritize, drive the scope, and figure out when we're done.  I know we're done when we've addressed enough of the right questions.
  • Principles, Patterns, and Practices.  The bulk of the book is a collection of principle-based recommendations.  Although we don't always formally write up patterns or name the patterns, each book contains pattern-based solutions.
  • Life Cycle Approach.  I find using a life cycle makes the solution stronger.  If I just give you technical guidance, that only helps you so far.  If I give you a system for results, it changes the game.
  • Test-Driven.     I'm a fan of using explicit test cases for content.  See Test-Driven Guidance.
  • Conceptual Frameworks.   This is the backbone of the guides.  This is where innovation happens the most.  These are mental models, lenses and perspectives, and systems for results.  For an example, see Fast Track - A Guide for Getting Started and Applying the Guidance.
  • Scenario-Based Approach.  You can't do a design review in a vacuum.  We use scenarios to evaluate against.
  • Scenario Frames.  Scenario Frames are organized sets of usage scenarios.   We use these to frame out the problem space.   See Scenario Frames for Guidance.
  • Executable Content.    Use the content to execute your tasks.
  • Action vs. Reference.   One key to building executable content is factoring reference information from action.
  • Guidance Types.  Another key to building executable content is using specific guidance types.  For "reference" information, we use "Explained" nuggets.  For "action" information, we use "How Tos", "Checklists" ... etc.   Our guidance nugget types include: App Scenarios, At a Glance, Checklist Items, Code Examples, Explained, Guidelines, How Tos, Inspection Questions, Patterns, Practices, Principles, Roadmaps, Techniques, and Test Cases.  You can browse these collections using patterns & practices Guidance Explorer.
  • Context-Precision.    This is a term I coined to get more specific about the scenario.  This improves both the modularity of our nuggets and their usability.  Rather than a single nugget spread over many contexts, we factor and use precision.  This avoids a spaghetti problem and helps reusability and maintainability.  See Context-Precision.
  • Holistic Over Piecemeal.   The guides are meant to get you up and running.  The example I use is teaching you to drive.  I show you how to go forward, reverse, steer, brake, and shift to get you going, and then drill into shifting or braking as needed, rather than show you how to go forward today, how to reverse another day, and some day how to steer.  This means compressing and distilling knowledge that's been spread out over time and space.
  • User Effectiveness Over User Sat.  To do my job well, I focus on customer effectiveness over customer satisfaction.  Ideally, it's both, but there's a lot of FUD in our industry and satisfaction is very subjective.  I find it more reliable to focus on measuring and testing customer effectiveness.
  • Criteria for Effectiveness.  Part of what makes this successful is having criteria for the guidance, which include: compliance with proven practice, complexity, quality, user competence, and time to implement.
  • Incremental Adoption.  Each guide is designed to be incrementally adopted.  While the sum is better than the parts, I can't ignore the reality that incremental adoption is more likely than monolithic adoption.  That's why the guides can be read end-to-end or mined for just the parts you need.
  • What To Do, Why, and How.  While this pattern directly applies to writing prescriptive guidelines, it really exemplifies the overall guide.  Rather than lose you in endless enumerations of options, you should be able to quickly figure out what to do, why, and how throughout the guide.
  • Entry Points.    How does a user find the specific guidance they're looking for?  I tend to focus on "question-based" entry points and "task-based" entry points.  You're either asking a question or trying to figure out how to do something.  For an example of question-based entry points, see VSTS Source Control Questions and Answers.  For an example of task-based entry points, see ASP.NET 2.0 Security Practices at a Glance.
  • Team Guidance.   Few problems can withstand sustained thinking.  No problem can withstand sustained collective thinking.  To get an idea of the team-based approach, see the list of contributors and reviewers at the bottom of the Landing Page for Improving .NET Application Performance and Scalability.
  • Customer-Connected Engineering.   Simply put, this means involving customers throughout the process.  How would we know we have the right problems or the right solutions without involving the people it's for?

Feedback
How do you build books?  If you have thoughts, questions, or feedback on my book-building approach, feel free to share them here or drop me a mail.  While this approach has proven effective over time, there's always room for improvement.  I'd like to hear what works for you.  If you're a fellow book builder, please share your approach.

My Related Posts

  • Analyzing a Problem Space
  • How To Research Efficiently
  • Guidance 2.0
  • Scenarios in Practice
  • Scenario Frames for Guidance
  • Driver's Guide vs. Owner's Manual
  • Test-Driven Guidance
  • How To Use Time Boxing for Getting Results
  • CodePlex, GE and MSDN
  • Context-Precision