Software is usually created by typing source code into a text editor, yet the prevailing approach to understanding complex software systems is to generate a diagram* or visualization* that exists independently of the original source code text editor. This approach has several drawbacks, such as:
- The level of detail in the diagram does not match the level of detail in the text editor.
- The spatial layout of items in the diagram usually does not match the spatial layout in the text editor.
- It is hard to correlate each part of the diagram with the source code location(s) it represents.
- Even if an application provides a quick way to switch back and forth between a diagram and the source code text editor, the entire spatial and relational context of the diagram is lost since it is not visible within the context of the text editor.
*The term “diagram” includes things like UML-type diagrams, sequence diagrams, flow charts, etc.
*The term “visualization” includes things like SeeSoft/Augur, CodeCity, Software Terrain Maps, etc.
To summarize, the current solution of switching between the user’s native environment (the source code editor) and a view that shows structure and relationships (such as a diagram) is disorienting and difficult.
Our solution is to combine the source code text editor and the structural/relational diagram into a single user interface by hosting multiple source code text editors on an infinitely large two-dimensional surface, which can be “zoomed out” to see the actual source code text in the context of the larger structural/relational diagram. When the user is all the way “zoomed in” to a source code file, it looks and behaves identically to how it did in their original source code text editor environment. The user can then “zoom out” so that the source code text becomes smaller (yet still interactive) and the context of the rest of the diagram becomes visible around it. This is analogous to a mapping application like Virtual Earth: when the user is all the way “zoomed in” they can see only a single house, but as they “zoom out” they begin to see the buildings and cities around it.
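The zoomable surface described above can be realized with a simple world-to-screen transform: each editor occupies a fixed rectangle in “world” (diagram) coordinates, and a pan offset plus a zoom factor maps it to the screen. The sketch below is a minimal illustration under assumed names (`Camera`, `zoom_about`, etc.), not the actual implementation:

```python
# Minimal world-to-screen transform for a zoomable 2-D surface.
# All names here are illustrative assumptions, not a real API.

class Camera:
    def __init__(self, pan_x=0.0, pan_y=0.0, zoom=1.0):
        self.pan_x = pan_x   # world coordinate shown at the screen origin
        self.pan_y = pan_y
        self.zoom = zoom     # 1.0 = editing at 100%; < 1.0 = zoomed out

    def world_to_screen(self, wx, wy):
        return ((wx - self.pan_x) * self.zoom,
                (wy - self.pan_y) * self.zoom)

    def zoom_about(self, sx, sy, factor):
        """Zoom by `factor` while keeping the world point under the
        screen point (sx, sy) fixed -- the familiar map behavior."""
        wx = sx / self.zoom + self.pan_x
        wy = sy / self.zoom + self.pan_y
        self.zoom *= factor
        self.pan_x = wx - sx / self.zoom
        self.pan_y = wy - sy / self.zoom

cam = Camera()
cam.zoom_about(100, 100, 0.5)         # zoom out, anchored at screen (100, 100)
print(cam.world_to_screen(100, 100))  # the anchored point stays put: (100.0, 100.0)
```

Anchoring the zoom at the cursor position is what makes the surface feel like a map rather than a scaled screenshot.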
We also use “semantic zoom” for both usability and scalability. When the user zooms out far enough that the source code text becomes too small to read, we begin to show labels and landmarks for semantically meaningful parts of the code, much as a mapping application shows names for lakes, forests, and cities when the streets become too small to see. We use various and disparate sources of data to determine which labels and landmarks are semantically meaningful:
- When the user first starts to zoom out, we show the names of all functions and properties within the source code text.
- As the user continues to zoom out, we show only the names of files and the type definitions within those files.
- Zooming out further, we show only file names and begin to show the names of the projects containing those files.
- At the farthest zoom levels, we show only project names and begin combining items into clusters (or groups) that represent the architectural layers of the system.
Returning to the mapping analogy, this is similar to showing cities, states, countries, and finally continents as the user zooms out. This not only provides the user with valuable high-level information as they zoom in and out; it also allows our implementation to limit the number of objects it must display on the screen at any given time.
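The progression above amounts to a lookup from zoom factor to the kinds of labels worth rendering. The sketch below illustrates the idea; the threshold values and label names are illustrative placeholders, not values from the actual implementation:

```python
# Semantic-zoom level-of-detail selection. Thresholds and label names
# are illustrative assumptions, not from any shipping implementation.

# (minimum zoom factor, label kinds to show), most to least detailed.
LOD_TABLE = [
    (0.50, {"function_names", "property_names"}),
    (0.20, {"file_names", "type_names"}),
    (0.08, {"file_names", "project_names"}),
    (0.00, {"project_names", "layer_clusters"}),
]

def labels_for_zoom(zoom):
    """Return the set of label kinds to draw at this zoom factor."""
    for min_zoom, labels in LOD_TABLE:
        if zoom >= min_zoom:
            return labels
    return LOD_TABLE[-1][1]

print(labels_for_zoom(0.3))   # file and type names
print(labels_for_zoom(0.01))  # projects clustered into architectural layers
```

Because only the labels for the current level of detail are instantiated, the renderer never has to hold every function name in the system on screen at once, which is what makes the approach scale.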
We are also able to show directional relationships directly within the source code, even when editing at 100% zoom, because the source code is embedded within the large-scale diagram surface. For example, when the user writes code that calls some other function, we draw an arrow from the code the user just typed to the function definition that exists elsewhere on the diagram. This not only tells the user which function they are calling, but also where it lies in the context of the software diagram. This technique leverages the user’s innate spatial memory and keeps the user oriented in the large-scale diagram without zooming out.
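Because both endpoints live on the same surface, drawing such a call arrow reduces to connecting two world-coordinate rectangles. A minimal sketch, assuming axis-aligned editor rectangles and arrows anchored at the midpoints of the facing vertical edges (the function name and the anchoring rule are illustrative choices):

```python
# Sketch: anchor a call arrow between two axis-aligned rectangles on
# the diagram surface. The edge-midpoint anchoring is an assumption
# made for illustration.

def arrow_between(caller, callee):
    """Each rect is (x, y, width, height) in world coordinates.
    Returns (start, end) points on the facing vertical edges."""
    caller_mid_y = caller[1] + caller[3] / 2
    callee_mid_y = callee[1] + callee[3] / 2
    caller_center_x = caller[0] + caller[2] / 2
    callee_center_x = callee[0] + callee[2] / 2
    if callee_center_x >= caller_center_x:
        # Callee is to the right: leave from the caller's right edge.
        start = (caller[0] + caller[2], caller_mid_y)
        end = (callee[0], callee_mid_y)
    else:
        # Callee is to the left: leave from the caller's left edge.
        start = (caller[0], caller_mid_y)
        end = (callee[0] + callee[2], callee_mid_y)
    return start, end

start, end = arrow_between((0, 0, 100, 40), (300, 80, 100, 40))
print(start, end)  # (100, 20.0) (300, 100.0)
```

Since the endpoints are in world coordinates, the same arrow remains correct at every zoom level once passed through the surface’s world-to-screen transform.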