Marketing are Bad

Todd Bishop’s story in yesterday’s Seattle Post-Intelligencer about Professor Sandeep Krishnamurthy’s fisking of Word’s grammar checker is making the rounds. It even got a blurb on NPR’s Morning Edition today.

Word’s grammar checker is a by-product of the Natural Language Processing group at Microsoft Research. I’m not an expert in natural language processing. My job is to take the work they do and figure out how to interface that work with Word. So, you can take what I say with an appropriately sized grain of salt.

I can say, however, that natural language processing is “hard.” In computer science, the word “hard” has a very technical meaning in terms of computational complexity. Problems that are “hard” don’t lend themselves to being broken down into a tractable set of well-defined steps using a handful of basic operations.

Computers are stupid. They really only know how to do a very limited set of things. They can add, subtract, multiply and divide. They can move a value from one location to another. And they can compare two numbers. Everything that your computer does is built up from these basic operations. Everything.
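To make that concrete, here’s a toy sketch in Python (nothing like Word’s actual code) of a “find the word in the text” feature written entirely in the machine’s vocabulary: move a value, compare two numbers, add one.

```python
def find(text, word):
    """Locate word in text using only moves, compares, and adds."""
    for i in range(len(text) - len(word) + 1):        # move an index along the text
        j = 0
        while j < len(word) and ord(text[i + j]) == ord(word[j]):
            j += 1                                    # compare two numbers, then add
        if j == len(word):                            # one more comparison
            return i                                  # move the answer back to the caller
    return -1

print(find("natural language processing", "language"))  # prints 8
```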

Well, some pedant might pipe up and say that the granularity is even smaller; that even these operations are constructs built up from basic logical operations (“and,” “or,” “exclusive or,” etc.), but that’s a digression not worth exploring for now. Let’s just content ourselves with the operations that are provided by the processor’s machine language.
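For the pedants, though, here’s a taste of that before we move on: a little sketch of addition for non-negative integers built from nothing but logical operations. “Exclusive or” adds the bits without carrying, “and” picks out the carries, and a shift moves each carry into place.

```python
def add(a, b):
    """Add two non-negative integers using only and, xor, and shift."""
    while b:
        carry = a & b     # "and" finds the bit positions that carry
        a = a ^ b         # "exclusive or" adds without carrying
        b = carry << 1    # shift moves each carry one place left
    return a

print(add(19, 23))  # prints 42
```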

The job of the folks in our NLP research group is to take the general problem of parsing natural language sentences, like “Marketing are bad,” and break that problem down into these six basic operations. What makes the problem difficult is ambiguity.

Take the subject of this post. The word “Marketing” is obviously a noun, but is it a gerund? Or, am I referring to the marketing group as a collection of individuals? If the former, then the sentence I wrote is grammatically incorrect. If the latter, then it’s grammatically correct. How might the computer know? How would you know without some further context for that sentence?
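To see why this stumps a program, here’s an illustrative sketch (my own toy lexicon, not anything like how Word’s grammar checker actually works) in which both readings of “marketing” are recorded. Subject-verb agreement then has no single answer.

```python
# Both entries are plausible readings: the activity (a singular mass noun)
# and the group of people (a collective noun that can take a plural verb).
LEXICON = {
    "marketing": [{"pos": "NOUN", "number": "sing"},   # the activity
                  {"pos": "NOUN", "number": "plur"}],  # the people in the group
    "is":  [{"pos": "VERB", "number": "sing"}],
    "are": [{"pos": "VERB", "number": "plur"}],
}

def agreement_verdicts(subject, verb):
    """Collect every verdict some reading supports; two verdicts = ambiguity."""
    return {s["number"] == v["number"]
            for s in LEXICON[subject]
            for v in LEXICON[verb]}

print(agreement_verdicts("marketing", "are"))  # {False, True}: no way to decide
```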

By the way, American sports writers are notorious for getting this one wrong. They’ll often write something like, “Seattle is on a pace to win the American League West.” As written, the sentence seems correct, but substitute “The Mariners” for “Seattle.” That substitution shouldn’t change the plurality of the verb, because, semantically, the subject hasn’t changed. The correct verb for either wording of the subject is “are.”

Prof. Krishnamurthy’s fisking of Word’s grammar checker consists of a cobbling-together of a number of these ambiguous sentences. Even in context, it’s difficult to tell if, say, “Gates” is a singular proper noun or a plural noun that simply has incorrect capitalization. If you can’t figure out the plurality of a noun in a sentence, how can you decide whether the plurality of the associated verb matches its subject?

Which brings me to an ancillary facet of the overall problem. What should software do in response to this kind of ambiguity? If the grammar checker is unable to figure out whether a sentence is correct or incorrect, should Word err on the side of accepting the sentence as correct, or should Word err on the side of flagging it as an error? No matter how we answer that question, a non-negligible group of users won’t be happy.
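The dilemma is easy to state in code. Continuing the toy sketch from above (and with flag_when_unsure as a made-up knob, not any actual Word preference), somebody has to pick a default:

```python
def checker_verdict(verdicts, flag_when_unsure=False):
    """Turn a set of possible agreement verdicts into a single decision."""
    if verdicts == {True}:
        return "accept"   # every reading agrees: clearly fine
    if verdicts == {False}:
        return "flag"     # no reading agrees: clearly an error
    return "flag" if flag_when_unsure else "accept"  # ambiguous: policy decides

print(checker_verdict({True, False}))                         # accept
print(checker_verdict({True, False}, flag_when_unsure=True))  # flag
```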

“Well, so add a preference,” you say. At first, this seems like a simple answer, but it doesn’t always work that way. Word’s auto-formatter is probably the classic example. The primary reason people curse it is ambiguity, and we already have preference-related issues with the auto-formatter. And, yes, we’ve heard the complaints. We’re working on a solution.

But, ambiguity is our bane. It’s the heart of a number of problems we’re trying to solve, and its very existence means that we aren’t going to find complete solutions to those problems (at least not short of figuring out how to get a computer to mimic the human brain using just six basic operations). Given this limitation on solving problems like effective grammar checking, should we, as Prof. Krishnamurthy suggests (demands?), scrap the feature entirely? Or, is it better to offer a feature that, while less than perfect, still retains some utility?

If a feature has potential for solving real user problems, then I tend to shade toward adding it even if the feature is limited in its ability to solve the problem. One very significant benefit of putting even a partial solution into users’ hands is the feedback you get about the feature’s limitations. That’s why dialogue is important. As The Cluetrain Manifesto says, markets are conversations. It’s hard to have a conversation about a feature that isn’t there.

 

Rick

Currently playing in iTunes: From Now On by Supertramp