What are the ACT Compatibility Evaluators Really Good For?

I receive a number of questions on the compatibility evaluators in ACT that revolve around one central question: what are they actually good for?

Seems kind of a harsh question, eh? Well, I’m not intending to be rude; I just try to help people avoid assumptions that will end up making them sad. But I have since discovered that, in my attempt to keep you from spending a lot of money and ending up sad, I’ve erred in the direction of leaving you sad but with your money still sitting in your pocket. I guess that’s slightly better, but I’d rather you weren’t sad.

You see, a lot of people approach the Application Compatibility Toolkit with a perspective of reverence. I mean, look at the name! It’s made by the Windows team! It has to be all I need to get the job done! (In fact, if you are an ACF partner, I believe it’s even mandatory to use it.) But, if I choose to use the compatibility evaluators, what can I then do with that data?

Well, people initially assumed that they could run the evaluators and it would tell them which applications are broken, and which are not. They could then use that data to project costs for the project. Like so:

1. Run Evaluators
2. Project Costs
3. Fix all issues the agents flagged
4. Ready to deploy!

And, if you do that, you end up sad, because we’re going to let you down. We don’t find all of the apps that have problems, and we don’t find all of the issues in the apps we do flag. We are runtime evaluators, so we have to be concerned with performance. Even if we could look for every bug (hint: we can’t), your users in production would hate us for making their apps miserably slow by checking in that many places. So, unless your app just so happens to have a bug that’s extremely common, we won’t even notice it.

So, why do we have these evaluators that don’t help you either project costs or find all of your issues? Is it because we don’t know how to write programs? Nope. (Not this time at least.) You see, there is a really good use for this data, and if you pick this use, then you not only end up unsad, you may even end up happy.

Issues detected by the compatibility evaluators come with a priority automatically set. We only ever set Priority 2 or Priority 3; setting Priority 1 (critical to fix) is left for you. A Priority 2 issue means an application bug that is probably not automatically fixed by the OS, in a bit of code that somebody actually ran, so you probably want to fix it. A Priority 3 issue is still a bug in code somebody was actually running as part of their job, but one that is probably fixed automatically. For example, UACCE will flag file writes. If we predict that UAC virtualization will fix the write automatically, we classify it as Priority 3 (nice to fix); if we predict that virtualization will not fix it, we classify it as Priority 2 (must fix), because you should consider fixing it yourself.
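That priority rule can be sketched as a tiny function. This is purely illustrative: the function name and the auto_mitigated flag are my own invention, not ACT’s actual implementation.

```python
# Hypothetical sketch of the priority rule described above (not ACT code).
# "auto_mitigated" means the OS (e.g. UAC file virtualization) is
# predicted to fix the issue automatically at runtime.
def classify_issue(auto_mitigated: bool) -> int:
    """Return an ACT-style priority for a runtime-detected issue."""
    # Priority 3: nice to fix, the OS papers over it automatically.
    # Priority 2: must fix, nothing will repair it for you.
    return 3 if auto_mitigated else 2

print(classify_issue(True))   # → 3 (e.g. a file write UAC virtualizes)
print(classify_issue(False))  # → 2 (e.g. a write virtualization won't catch)
```

Priority 1 stays out of the function entirely, which mirrors the point above: criticality is a business judgment the evaluators leave to you.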

So, I’m not so much interested in seeing the original estimate (since we miss so much stuff), but the data does come in handy down the line. For example, here is a segment of an application testing workflow that incorporates this data:

1. Perform Install Testing
2. Any Priority 2 Issues? (yes -> Remediation)
3. Perform smoke testing, user testing, etc.
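The branch in that workflow can be expressed as a short routing function. This is a sketch under my own assumptions: the issue records here are plain dicts with a priority field, not ACT’s actual database schema.

```python
# Hypothetical sketch of the workflow branch above: applications with any
# evaluator-flagged Priority 2 (must fix) issue go to remediation before
# any manual testing hours are spent on them.
def route_app(issues):
    """Pick the next step for an app after install testing.

    issues: list of dicts like {"priority": 2}, as flagged by the
    runtime evaluators (shape is illustrative, not ACT's schema).
    """
    if any(issue["priority"] == 2 for issue in issues):
        return "remediation"
    return "smoke_testing"

print(route_app([{"priority": 2}]))  # → remediation
print(route_app([{"priority": 3}]))  # → smoke_testing
```

The point is simply that the filter is cheap: a known must-fix bug short-circuits the expensive manual testing step.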

Now I’m using this data in a productive way to save manual effort. You know that some user ran into this problem while performing their actual work, so the data fidelity is very high. Why send a known-broken application over to testing and waste manual testing hours discovering a bug you can find with nothing more than a few mouse clicks (or may outright miss if you don’t have a good test script)?

ACT agent data is relatively inexpensive to collect if you need the inventory anyway. But you need to avoid being tricked by overly optimistic salespeople into believing that this data is everything you could ever want, while at the same time making sure you don’t ignore valuable data. Runtime data is great because you know for a fact that the bad thing actually happened, and if it was collected in production, you know it happened as part of doing real work (and those are the only bugs you care about).

Feed your workflow, save manual effort, and reduce your risk. Now that is what ACT agent data is good for.

And, of course, we certainly do wish that we could highlight all busted apps for your organization (to help you better estimate project cost), as well as discover every individual app issue. Static analysis tends to do a better job at the app level (is the app broken – yes or no?) simply because it can perform WAY more tests without interrupting somebody’s work, but it doesn’t do nearly as well at the issue level. In the end, a balance between runtime tools, static tools, and manual effort is what most people use to build the plan that really works for them. Bringing it all together, you can find the optimal mix of low cost and reduced risk for an app compat project. Don’t ignore a component of your solution just because it isn’t perfect. Because, alas, none of it is perfect. There are no silver bullets. But we do have a few lead ones.

Comments (4)

  1. One option may be to analyze the EXE/DLL for known issues, in addition to the runtime analysis?

    At the moment the tool doesn’t seem to separate ‘no issues’ and ‘no data collected’, and that distinction would be useful. It seems that applications can be picked up in an inventory, but if the user never runs one, it will show ‘no issues’. It would be nice if we could distinguish between ‘no issues’ and ‘no information’.

    So far, having done some deployments of the tool, I’ve certainly been able to collect some useful information…but I hope the next release comes soon and can offer some functionality that will further increase accuracy. Ideally it could pick up the known support status of more applications, and I know this may rely on vendors – for example, for many Adobe products no vendor assessment shows up, but the Adobe site does publish a Windows 7 assessment. It would be nice if the tool had this vendor information for more products…even if vendors could just report to MS the minimum version of the product they support on Windows 7, and everything less than that would automatically get flagged by the toolkit as not supported by the vendor, with a pointer to that version for upgrade information.

  2. Ram says:

    Hi Jackson,


    We need to check compatibility issues for 32-bit applications to be deployed on Windows 7 64-bit using ACT 5.5.

    Steps taken so far:

    1. Created a DCP and deployed it on Windows XP.

    2. Analysed the Windows 7 compatibility issues for the installed applications using ACT 5.5, but we were unable to figure out the exact compatibility issues for the Windows 7 64-bit operating system.

    Questions:

    1. Do we need to query any table to retrieve compatibility information for the Windows 7 64-bit OS?

    2. Is there any other way to achieve the same?

    Please let us know.

  3. George says:

    Hi Chris,

    Is that second app testing workflow diagram (or any workflow diagram, for that matter) part of something bigger that you’d be able to post or show (to help direct collection, testing, and remediation process efforts)?


  4. cjacks says:

    @George – I have an article in TechNet magazine from June of last year that walks through the process, but it doesn’t have complete workflow diagrams.
