Testing Office’s ODF Implementation

In this blog post, I’m going to cover some of the details of how we approached the challenges of testing our ODF 1.1 implementation that was released in Office 2007 SP2.

Adding support for a new document format such as ODF to Office is a large and complex project.  Office has a very broad range of functionality, and we had to map that functionality to the structures defined in ODF.  This mapping then needed to be rigorously tested, in isolation and also in rich documents that reflect typical usage of various combinations of features, to ensure that our generated documents are conformant to the specification and to maximize interoperability with other implementations.

High-Level Planning

When we began work on our ODF 1.1 implementation, we started by developing a set of high-level guiding principles that we would follow.  I covered those in a blog post last year, as well as a recent post that explained how we see the relationship between standards and interoperability.

After we had reached agreement on these principles, the various feature teams began designing the details.  A “feature team” here at Microsoft is made up of three groups of people: program managers (PMs), developers, and testers.  In broad simple terms, PMs are responsible for writing down the specifications, developers are responsible for implementing those specifications, and testers are responsible for verifying that everything works as intended.  Since there was a specification for ODF in hand already, the main job of the feature team was to write down the details of how we would implement it.  In this post I’ll be focusing on the work of the testers, although inevitably that will include some discussion of the work of the PMs and developers, because the three disciplines work very closely together in an iterative manner.

Most of the people who planned and executed our ODF implementation are members of the same teams that are responsible for other aspects of the design, development and testing of the Office clients.  We created an “ODF virtual-team” that included specific individuals from each of the relevant product teams – Word, Excel, PowerPoint, and graphics, primarily – and the v-team approached the project with the same management structure and business processes that we use for other work on Office.  Attendees of the DII workshop in Redmond last summer had a chance to meet several key members of the ODF v-team, who gave presentations and participated in the roundtable discussions at that event.

In addition to these people in Redmond, we have other teams that we can call on for projects like this one.  For the testing work on our ODF implementation, we pulled in people from the Office group in four countries, as well as people who worked on Office years ago but have since moved on to other roles, for their expertise in older features that we wanted to verify are supported correctly in our ODF implementation.

Mapping Between ODF and Open XML

Office’s internal representation of documents is very closely aligned with the Open XML formats, so one of the first steps in planning our ODF implementation was to do detailed mapping between the Open XML structures that Office already supported, and the ODF structures that we would be saving and loading to/from in ODF 1.1 documents.

The PMs had primary responsibility for this, and they created sets of spreadsheets to capture the mappings between every ODF and Open XML element and attribute.  This mapping needed to be defined in both directions: OXML->ODF for File/Save operations, and ODF->OXML for File/Open operations.

As a simple example of how that worked, here is part of the spreadsheet for the concept of bold text, as mapped from OXML to ODF:

[Image: mapping spreadsheet excerpt – OXML to ODF for bold text]

This excerpt is just a subset of what was captured in the mapping; the PMs also identified required/optional status, default values, and other information.

And here’s the converse mapping for bold text, going from ODF to OXML:

[Image: mapping spreadsheet excerpt – ODF to OXML for bold text]

I’ve used a very simple example here, and yet as you can see there are many details involved.  There were thousands of details like this in the mapping spreadsheets, and collectively these spreadsheets served two roles:

  • they were the spec for the developers
  • they defined the scope of the test plan for the testers

The process of creating the mapping spreadsheets is interesting unto itself, due to the many places where ODF and Open XML had different approaches or different capabilities.  I’ll cover the mapping spreadsheets in more detail in a future blog post.
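In the meantime, here’s a rough sketch of what a couple of rows of the bold-text mapping might look like if expressed as code.  The markup names are the real ones (w:b in Open XML; fo:font-weight on style:text-properties in ODF), but the structure is purely my illustration, not the format of the actual spreadsheets:

    # Illustrative only: two "rows" of the bold-text mapping expressed as data.

    # OXML -> ODF (File/Save): <w:b/> inside a run's <w:rPr> becomes
    # fo:font-weight="bold" on the ODF <style:text-properties> element.
    SAVE_MAP = {
        ("w:rPr", "w:b"): ("style:text-properties", "fo:font-weight", "bold"),
    }

    # ODF -> OXML (File/Open): the converse direction.  fo:font-weight also
    # allows numeric weights (100-900), so a real mapping must also decide
    # which of those count as bold when loading.
    LOAD_MAP = {
        ("style:text-properties", "fo:font-weight"): {
            "bold": "<w:b/>",
            "normal": '<w:b w:val="false"/>',
        },
    }

Multiply that by every element and attribute in both formats, plus required/optional status and default values, and you have a sense of the scale of the mapping exercise.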

Test Tools and Test Documents

Like any professional test team, the Office testers have a wide variety of tools they’ve built to help automate their work.  Here are a few examples of the tools that were used to test Office’s ODF implementation:

  • Verifying conformance to the schemas in the standard was a high priority, and we used Jing (called by an internal tool we call ODE) to validate against ODF’s RNG schemas.
  • The Excel team used an internal tool named Trippy to automate round-tripping.  They ran this tool against a test library of over 700,000 test documents, each of which was saved as an ODS file and then validated against the reference schemas.
  • The Word team used a tool called OHarness, which can run the same operation on each file in a batch.  They used a library of over 100,000 documents, saving each one as an ODT file, logging bugs for the developers, and repeating the tests until they drove the number of non-conformant documents to zero.  (A simplified sketch of this kind of save-and-validate loop follows this list.)
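Neither of these tools is public, but the basic shape of a save-and-validate loop is straightforward.  Here’s a minimal sketch in Python: the Jing command line shown is the validator’s standard invocation, while the step that drives the application to save each source document as ODF is internal automation, so it’s represented here only by an output directory.  All file and path names are assumptions:

    import subprocess
    import tempfile
    import zipfile
    from pathlib import Path

    JING_JAR = "jing.jar"                        # assumed path to the Jing validator
    ODF_SCHEMA = "OpenDocument-schema-v1.1.rng"  # assumed path to the ODF 1.1 schema

    # The XML parts of an ODF package governed by the document schema.
    SCHEMA_PARTS = ["content.xml", "styles.xml", "meta.xml", "settings.xml"]

    def validate_odf_package(odf_file: Path) -> list:
        """Extract each schema-governed part and validate it with Jing.
        Returns a list of diagnostics; an empty list means conformant."""
        errors = []
        with zipfile.ZipFile(odf_file) as pkg, tempfile.TemporaryDirectory() as tmp:
            for part in SCHEMA_PARTS:
                if part not in pkg.namelist():
                    continue
                extracted = pkg.extract(part, tmp)
                # Standard Jing invocation: java -jar jing.jar <schema> <file>
                result = subprocess.run(
                    ["java", "-jar", JING_JAR, ODF_SCHEMA, extracted],
                    capture_output=True, text=True)
                if result.returncode != 0:
                    errors.append(f"{odf_file.name}/{part}: {result.stdout.strip()}")
        return errors

    def run_corpus(odf_dir: Path) -> int:
        """Validate every saved ODF file in a directory; return the failure count."""
        failures = 0
        for odf_file in sorted(odf_dir.glob("*.od[ts]")):  # .odt and .ods
            for diag in validate_odf_package(odf_file):
                print(diag)
                failures += 1
        return failures

    if __name__ == "__main__":
        count = run_corpus(Path("saved_odf_output"))
        print(f"{count} validation failures")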

These tools, and others developed by the test teams, all work against large collections of documents.  These test documents came from a variety of sources:

  • Test documents that have been used in the past for the binary formats documentation and other purposes.
  • Real-world documents which have been given to us by customers for the purpose of helping us see how they use our products and seeing the problems they have run into.
  • Documents from test libraries created by other organizations, such as the test documents from the University of Central Florida atomic test suite and the test documents that Dialogika has created based on their work in developing the European Commission’s corporate style package for official and legislative documents.
  • Documents manually created by the testers to cover every element, attribute and attribute value defined in the ODF schemas (a sketch after this list shows how that inventory can be enumerated from the schemas themselves).
  • Public documents collected from the internet.
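On the fourth of those sources: the inventory of elements and attributes to cover can be enumerated mechanically, because the RELAX NG schema is itself an XML document.  Here’s a rough sketch of that enumeration (it glosses over name classes such as rng:choice and rng:anyName, and the schema file name is an assumption):

    import xml.etree.ElementTree as ET

    RNG = "{http://relaxng.org/ns/structure/1.0}"  # the RELAX NG namespace

    def inventory(schema_path):
        """Collect the element and attribute names declared in an RNG schema."""
        elements, attributes = set(), set()
        for node in ET.parse(schema_path).iter():
            if node.tag in (RNG + "element", RNG + "attribute"):
                # A name appears either as a name="..." attribute or a <name> child.
                name = node.get("name")
                if name is None:
                    child = node.find(RNG + "name")
                    name = child.text if child is not None else None
                if name:
                    (elements if node.tag == RNG + "element" else attributes).add(name)
        return elements, attributes

    elements, attributes = inventory("OpenDocument-schema-v1.1.rng")
    print(f"{len(elements)} distinct elements, {len(attributes)} distinct attributes")

Each entry in an inventory like that then became one or more hand-built test documents.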

Our libraries of test documents are dynamic and constantly growing.  As a recent example, we found that the latest Committee Draft of the ODF 1.2 specification uses styles in a way that exposed a bug in Word’s implementation.  (Rick Jelliffe has blogged about this bug.)  So we’ve added that document to our test library going forward.  (We’ve also fixed that bug and tested the fix, which will appear in a future update.)

Verifying Mapping

After the developers had written code to handle the mappings as defined in the spreadsheets (which were essentially the specs for their work), the testers got to work testing this code.

One aspect of testing was the small documents for verifying specific elements and attributes.  These were handled in an automated manner using tools such as Trippy and OHarness, as mentioned above.

Another aspect of this testing was the creation of complex “real-world documents” that contained combinations of functionality to test various scenarios that we’ve found typically occur in actual use of Word, Excel, or PowerPoint.

For example, many Excel users create spreadsheet documents that contain a large worksheet of raw data like this one:

[Image: worksheet containing raw data]

… and that data is often summarized in pivot tables and/or formatted reports like these:

[Image: PivotTable and formatted reports summarizing the data]

The test team would create documents like this one, then manually verify that the document could be saved as either an ODS or XLSX file without change in appearance or functionality.  In this particular case, the test team verified that a variety of details were handled the same in Open XML and ODF, including the following (a sketch of one way to automate the formula check appears after the list):

  • Formatting of cell content, including conditional formatting
  • Data with Autofilter on data sheet
  • PivotTable in Pivot sheet based on above data
  • Results of formula calculations
  • Data validation
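Checks like the formula one lend themselves to partial automation, because ODF spreadsheet cells carry both the formula and a cached copy of its last calculated result.  Here’s a hedged sketch that pulls those pairs out of an .ods file so they can be compared against the values in the original workbook; the file name is an assumption, and string-valued cells (which use office:string-value or the cell text) are glossed over:

    import xml.etree.ElementTree as ET
    import zipfile

    TABLE = "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}"
    OFFICE = "{urn:oasis:names:tc:opendocument:xmlns:office:1.0}"

    def formula_cells(ods_path):
        """Yield (formula, cached_value) for every formula cell in an .ods file."""
        with zipfile.ZipFile(ods_path) as pkg:
            root = ET.fromstring(pkg.read("content.xml"))
        for cell in root.iter(TABLE + "table-cell"):
            formula = cell.get(TABLE + "formula")
            if formula is not None:
                # office:value is the producer's cached numeric result.
                yield formula, cell.get(OFFICE + "value")

    for formula, value in formula_cells("report.ods"):
        print(f"{formula} -> {value}")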

Verifying Conformance

As I mentioned earlier, the product teams each have a large corpus of test documents that are used for automated testing of conformance.  Binary documents and Open XML documents are opened and then saved as ODF, and each of these documents is validated against the ODF schemas.  By analyzing the results of these tests, the testers can identify problems that need to be corrected, and then the tests are re-run.

The goal of this process is simple: to drive the number of non-conformant documents to zero.  We reached that goal for the Office 2007 SP2 implementation of ODF, and as of this writing I don’t know of a way to make Word, Excel or PowerPoint write a non-conformant ODF document.  It may theoretically be possible to do so – and if anyone happens to come across such a scenario please let me know – but we have verified that the hundreds of thousands of documents in our test libraries can be saved as fully conformant ODF 1.1 files from Office 2007 SP2.  By conformant, I mean fully schema-valid and also conformant with our reading of the text of the ODF 1.1 spec.

Security Testing

When we add support for a new format, one area that requires intensive testing is security.  Does our implementation of the new format create any new security risks that need to be mitigated?  Is there any way that an ODF document can be corrupted (deliberately or accidentally) that could cause a security problem?  The test teams were responsible for answering these questions.

The key tool used for this aspect of the test plan was Distributed File Fuzzing (DFF).  The basic concept is that thousands of documents are corrupted in random ways, and these documents are opened on large numbers of PCs in a distributed environment.  Data is collected on the ways in which these corrupted files fail to open, and this data is used to verify that there are no security problems caused by bad error handlers, buffer overruns, integer overflows, or other issues.
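The corruption step of such a fuzzer is conceptually simple; the hard part is the distributed infrastructure that opens each mutant on many machines and collects the crash data, which I won’t attempt to sketch.  Here’s what the mutation step alone might look like (file names and parameters are assumptions):

    import random
    from pathlib import Path

    def mutate(source: Path, out_dir: Path, count: int, flips: int = 8) -> None:
        """Write `count` corrupted copies of a document, each with a few
        randomly chosen bytes overwritten by random values."""
        data = source.read_bytes()
        out_dir.mkdir(exist_ok=True)
        for i in range(count):
            mutant = bytearray(data)  # fresh copy for each mutant
            for _ in range(flips):
                mutant[random.randrange(len(mutant))] = random.randrange(256)
            (out_dir / f"{source.stem}_fuzz{i:05}{source.suffix}").write_bytes(mutant)

    # Each mutant is then opened by the application under test, with failures
    # and crashes logged centrally across the distributed machines.
    mutate(Path("sample.odt"), Path("fuzzed"), count=1000)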

When issues are found in security testing, the process is the same as in the other types of testing: the testers log bugs, the developers determine whether the problem lies in the design or the implementation, and based on those findings we either revise the design and re-code, or correct the code.  The tests are then repeated, and this process continues until the number of open security issues reaches zero.

Testing Interoperability

The final piece of the testing puzzle is interoperability testing: verifying that documents created in Office can be opened in other implementations, and vice versa.

This type of testing is nothing new for the test teams, because we do it every time we add a feature to Office.  In the past, we focused primarily on interoperability between various versions of Office, but now that test matrix has been expanded to include the latest versions of major ODF implementations.

To verify interoperability with other ODF implementations, the test teams created documents from scratch in OpenOffice.org and Symphony, and then opened those documents in Office.  They also created documents in Office and opened them in the other implementations.

In addition to these types of simple tests, we also wanted to verify that our implementation was not dependent on details of other implementations that aren’t actually standardized in the specification.

A good example of this sort of issue is the question of how parts are named and where they’re stored in the ZIP package that makes up an ODF document.  I’ve blogged in the past about this same issue in Open XML – an implementation of the Open XML standard shouldn’t assume that the document start part is word/document.xml, just because Word happens to use that name and location.
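The robust approach for a consumer is to resolve the start part through the package-level relationships instead of assuming a fixed name.  Here’s a short sketch of that lookup; the relationship type URI is the one defined by the Open XML specification, and the file name is illustrative:

    import xml.etree.ElementTree as ET
    import zipfile

    RELS = "{http://schemas.openxmlformats.org/package/2006/relationships}"
    MAIN_DOC = ("http://schemas.openxmlformats.org/officeDocument"
                "/2006/relationships/officeDocument")

    def start_part(oxml_path):
        """Find the main document part of an Open XML package by reading
        the package relationships, not by assuming 'word/document.xml'."""
        with zipfile.ZipFile(oxml_path) as pkg:
            rels = ET.fromstring(pkg.read("_rels/.rels"))
        for rel in rels.iter(RELS + "Relationship"):
            if rel.get("Type") == MAIN_DOC:
                return rel.get("Target")
        raise ValueError("no main document relationship found")

    print(start_part("example.docx"))  # often 'word/document.xml', but don't assume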

In ODF, some of those details are standardized – the start part is always named content.xml, for example – but others are not.  So the testers used ODE to manually modify documents that had been created by OpenOffice.org, to change certain details such as the name of the folder containing embedded images.  They then opened these documents in Office, to verify that our implementation will be able to interoperate with implementations that have made different design decisions within the range of options that the ODF standard allows.
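As a simplified stand-in for what a tool like ODE does, here’s a sketch that repacks an ODF file with its embedded-image folder renamed, fixing up the manifest and content references so the package remains valid; the folder names and the blunt textual fixup are illustrative only:

    import zipfile

    OLD, NEW = "Pictures/", "Graphics/"  # hypothetical alternative folder name

    def rename_image_folder(src, dst):
        """Repack an ODF file with its image folder renamed, updating the
        manifest and content.xml references to match."""
        with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
            for info in zin.infolist():  # preserves part order
                data = zin.read(info.filename)
                name = info.filename
                if name.startswith(OLD):
                    name = NEW + name[len(OLD):]
                # Part references (xlink:href, manifest:full-path) are plain
                # text in these parts; a textual fixup suffices for a test doc.
                if name in ("content.xml", "styles.xml", "META-INF/manifest.xml"):
                    data = data.replace(OLD.encode(), NEW.encode())
                if name == "mimetype":
                    # The mimetype part must remain first and uncompressed.
                    zout.writestr(name, data, compress_type=zipfile.ZIP_STORED)
                else:
                    zout.writestr(name, data, compress_type=zipfile.ZIP_DEFLATED)

    rename_image_folder("from_openoffice.odt", "renamed_images.odt")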

Summary

As you can see, there are many things to consider when creating and executing a test plan for support of a new document format in Office.  At an abstract level, it’s just another test plan – we design, then code, then test, with ongoing revisions to all three as needed to reach our design goals.  But the specifics of the ODF implementation test plan were geared toward the details of the ODF standard, as outlined above.

Due to the work our test teams did on the ODF 1.1 implementation in Office 2007 SP2, we are very confident that the implementation we produced adheres to the details of the design we had created, as documented on the implementer notes web site.  I realize that some people may disagree with some of the design decisions we made in our implementation, and we welcome constructive debate of those details.

I’m posting this from The Hague, where I will be attending the ODF plugfest today and tomorrow.  My colleague Peter Amstein – who led the technical work on our ODF implementation – is also here, and we’re looking forward to learning about how other implementers approach document format interoperability testing, and discussing how we can all work together on ODF interoperability going forward.

[Photo: Parliament building, The Hague]