In June 2011, Microsoft released Debugger Canvas on DevLabs, the result of a year-long collaboration between Microsoft Research, the Microsoft Visual Studio product team, and Brown University. Debugger Canvas transforms how software developers use and experience their programming environments.
In a traditional programming environment, a developer views code like most people view the web: by hopping from document to document, following link after link, with many documents opened in tabs across the top of the screen. Just like hyperactive web surfers, developers often get “lost in the tabs,” struggling to find (and re-find!) the information that is relevant to their tasks. Debugger Canvas replaces these tabbed documents with a pan-and-zoom presentation of only the source code relevant to the task. This keeps all of the necessary pieces together in one place and eliminates many disorienting navigation steps.
Debugger Canvas is the result of a bit of serendipity. At last summer’s International Conference on Software Engineering, two separate teams—one from Microsoft Research and one from Brown University—each presented a paper about redesigning programming environments. The two teams quickly discovered each other, found many points of overlap between their designs, and decided to join forces and combine the best of both designs. With support from Microsoft Research Connections, we pulled together a team from Microsoft Research, Brown University, and the Visual Studio product team. The goal was to create a “power tool” (that is, an experimental extension) for Visual Studio that enables professional developers across the world to try out these new ideas. The result: Debugger Canvas.
The initial public reaction to Debugger Canvas has been overwhelmingly positive, both on Twitter and in the comments on blog posts discussing the tool. (One of my favorite tweets: “Thank you Debugger Canvas http://bit.ly/ls7zgn I found the error in secs after I installed you.”)
Up next: the team is adding enhancements based on user feedback and scheduling interviews with active users to learn how they are using the tool. That feedback, along with other input and our own observations, will inform the next release of the tool.