Hi, I’m Andy Rich, and I’m a Software Development Engineer in Test on the Visual C++ Compiler Front End. My testing focus during the Orcas product cycle thus far has been compiler conformance.
Conformance work usually breaks down into two categories: positive and negative. Implementing a positive conformance feature typically means we enable a scenario that previously issued an error. Implementing negative conformance means we take something that works (but is not conformant) and cause it to issue a diagnostic instead of silently accepting it.
Customers have various reasons for desiring higher compiler conformance, but by far the biggest is code portability across compilers. Positive conformance is especially important for people who use other compilers as their primary development environment and port their code to the Visual C++ compiler. If a code construct works with other compilers and fails to work with ours, it causes extra work for that customer, and can fragment codebases as the customer uses macros to work around the issue.
With negative conformance, however, the opposite is the case. Typically, this is a problem for the customer who is developing cross-platform code on the Visual C++ compiler. When we don’t implement a bit of negative conformance, that customer may discover that code which works on our C++ compiler fails to compile on other compilers, because Visual C++ was permitting a technically illegal construct (and the other compiler is correctly issuing an error). This is also frustrating, and costs our customers a lot of time.
Ideally, with respect to our conformance, we would like to fully support both of these customer scenarios, and give people as much conformance as we can, to make the job of porting as painless as possible.
However, there is a third customer we are also trying to support at the same time: a customer who has already written a lot of code using a previous version of the compiler. These customers are often not interested in porting their code to other platforms, and are more concerned with rapid development. These customers would like to take advantage of compelling new features in a more recent version of the compiler, but do not want to expend a lot of resources updating their code to comply with restrictions of a new compiler (especially where the previous compiler was happily accepting code, and doing the right thing with it). For these customers, implementing negative conformance can be a huge adoption blocker.
Testing for conformance parallels implementation: you need to test both positive and negative conformance. Once again, positive conformance is fairly straightforward: look at the relevant C++ Standard sections and write testcases that exercise and stress the scenarios the Standard permits. For negative conformance, however, you need to find interesting cases where the compiler should issue an error. The Standard calls out the canonical cases, but there are often sticky, dark corners where the implications of three divergent sections lead to the conclusion that a program is ill-formed. The issue is further muddied when you recall my earlier point that not all customers want this negative conformance.
It is often helpful to think of negative conformance as a knob: turned one way, we’re completely conformant, and turned the other direction, we silently allow a nonconforming construct. Between these two endpoints are varying levels of diagnostic, from warnings to errors which can be turned off, to errors which can be turned off and only issue when running under the “extensions disabled” switch (/Za), and even so far as to create whole new switches that change the compiler behavior (as in the case of -Zc:forScope and -Zc:wchar_t).
The problem is that determining where to set this knob depends heavily on how prevalent a particular construct is “in the wild.” For these purposes, we maintain what we refer to as “real-world” code: branches of huge codebases of Microsoft products, such as Windows, SQL Server, and Office. We build these codebases on a regular schedule with the very latest compiler in an attempt to find potentially breaking changes. (We will also build these codebases prior to checking in any feature we feel has a high risk of introducing breaking changes.) If code that previously compiled without error now fires errors in a number of places, we know that we’ve found a breaking change. These sorts of issues are also raised to us through customer feedback.
Many factors come into play when determining whether a breaking change is acceptable. First, we determine how common the code is. Second, we determine whether the previous code was incorrect, and how incorrect it was. (For example, if the previously-compiling code causes bad codegen, it’s probably better to issue an error than to silently generate bad code.) Third, we determine how hard it is to fix all of the places where the code breaks.
These factors, combined with many others, influence our decision on how to make these changes in a way that pleases as many customers as possible. To some degree, we consider these sorts of issues with every bug we fix.
Any time our customers face issues like these, we like to hear about them, as it provides us more data that is useful in making these sorts of judgments now, and in the future. Feedback is especially important for beta products, where new features and breaking changes are being seen for the first time. Finally, as always, the best way to provide us with this sort of feedback is through the Connect website: http://connect.microsoft.com/.
Visual C++ Compiler QA