It's entirely likely that more than a few people who commented on that post will point to this survey as an example of how open source development projects can obtain useful information about usability and users' needs. I'd argue, however, that it makes my case more than it argues against it.
Take a close look at the two surveys. Try to figure out how the task-based questions will provide insight into real-world user problems. Then look at the "Guidelines" questions. Even after reading the "more info" link for "GNOME designs dialog to yield closure," it's not at all clear what they're asking for, particularly if you're not well-versed in the terminology of software usability.
Folks, this is an example of how not to conduct a usability study. If you really want useful information, you plunk real users down in front of your product, preferably an instrumented version of the product, and you ask them to perform a number of high-level tasks.
And that's the easy part. Before you even get to the point of plunking real users down in front of your software, you have to figure out whether the tasks you're asking them to perform have anything to do with what most users will need to do to get their work done. To have that information, you need a well-done market segmentation that identifies your major groups of users and what their jobs are. You need to understand why they'll even want to use your program in the first place.
Now, anyone who does any kind of work in human-computer interaction knows all of this, which makes me wonder what UMBC hopes to achieve with this survey. Perhaps they're seeking to demonstrate the inefficacy of usability surveys compared to other methods of studying usability?
Currently playing in iTunes: Rockin' Horse by The Allman Brothers Band