Cheating at Scrabble with LINQ to Objects

I read an interesting post on Eric Lippert’s blog recently where he was using LINQ to Objects to find possible words from a set of letters in Scrabble (he wasn’t actually cheating – that just makes for a more interesting title!). Eric made a great follow-up post that highlights the need both for defining performance goals and for profiling when working to improve performance. I thoroughly recommend reading both of Eric’s posts – plus they set the scene for this one ;-). In his posts, Eric noted that he’d met his performance goals for the code, but having taken a copy of it and started to play with it, I became the new user. I guess I can be a little impatient at times, and I wanted it to feel as though the results were returned instantly! I stuck with Eric’s decision not to simply cache the dictionary in memory – after all, I may want to use a dictionary that won’t fit into memory at some point.

Running the profiler against the code on my machine shows that, even before the modifications in Eric’s second post, reading the data from disk is the bottleneck on my machine. This highlights another issue with profiling your application: you need to perform the profiling on representative hardware. In my case I’m running the application on a laptop, so it may be that my hard drive is slower than Eric’s. Having said that, the changes from Eric’s second post still made a noticeable difference to performance:

Method      Search time (ms)
Original    2700
Updated     1658

Having determined that reading from disk is the culprit (at least on my machine!), I looked through the code again. In the original code, the SearchDictionary method is called once with the original rack and then 26 further times as a result of adding an extra letter to the rack (to allow matching to existing tiles on the board).
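To make the 27 searches concrete, they might be generated along these lines – this is an illustrative sketch of the behaviour described above, not Eric’s actual code, and the names are my own:

```csharp
using System.Collections.Generic;
using System.Linq;

static class RackExpansion
{
    // Illustrative sketch (not Eric's actual code): the original rack
    // plus 26 virtual racks, each formed by appending one letter that
    // might already be on the board.
    public static IEnumerable<string> AllRacks(string originalRack) =>
        new[] { originalRack }
            .Concat(from letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                    select originalRack + letter);
}
```

In the original code, each of these 27 racks triggers its own pass over the dictionary file.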

First off, I refactored the code slightly and introduced a SearchResult class:

class SearchResult
{
    public string Rack { get; set; }
    public IEnumerable<string> Words { get; set; }
}

This allows my Search method to return an IEnumerable<SearchResult> that includes the original rack plus the virtual racks formed by using a letter from the board. My implementation of the SearchDictionary method now handles searching with the additional letters. With this change in place I can update the search method so that it only hits the file once.

Without boring you with the details too much: I could have just added the extra items to the HashSet that Eric added in his second post, but I wanted to be able to correlate the matches with the original rack when outputting the results (essentially I constrained myself to producing the same output as the original method). So I created a type to correlate the original and expanded/canonicalised rack info (called RackInfo) and set up a dictionary keyed on the rack value after placeholder expansion and canonicalisation. This dictionary then replaces the HashSet.
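The post doesn’t show RackInfo or the dictionary construction, so here is a minimal sketch of the shape described above – the member and method names are my guesses, not the original implementation:

```csharp
using System.Collections.Generic;
using System.Linq;

// The names below are guesses at the shape described in the post,
// not the original implementation.
class RackInfo
{
    public string OriginalRack { get; set; }   // rack as entered, plus any board letter
    public string CanonicalRack { get; set; }  // after placeholder expansion and canonicalisation
}

static class RackLookup
{
    // Replaces the HashSet<string> with a dictionary keyed on the
    // canonical rack, so that each matched word can be traced back to
    // every original rack that could have produced it.
    public static Dictionary<string, List<string>> Group(IEnumerable<RackInfo> racks) =>
        racks.GroupBy(r => r.CanonicalRack)
             .ToDictionary(g => g.Key,
                           g => g.Select(r => r.OriginalRack).ToList());
}
```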

The original LINQ to Objects query to perform the matching was:

from line in FileLines(DictionaryFilename)
where line.Length == originalRack.Length
where racks.Contains(Canonicalize(line))
select line;
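The Canonicalize helper isn’t shown here; its job is to map any two anagrams to the same key, and a minimal version (my sketch, which may differ in detail from Eric’s) just sorts the letters:

```csharp
using System.Linq;

static class Canon
{
    // Sketch of the canonicalisation step: sort the letters so that any
    // two anagrams ("STOP", "POTS", "OPTS") map to the same key.
    public static string Canonicalize(string word) =>
        new string(word.ToUpperInvariant().OrderBy(c => c).ToArray());
}
```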

Since we’re now working with both the original rack and the racks extended with an additional letter, the line-length clause needs a minor tweak, and racks.Contains becomes a ContainsKey lookup on the new dictionary. Other than those tweaks, the query barely changes:

from line in FileLines(DictionaryFilename)
where line.Length == originalRack.Length || line.Length == originalRack.Length + 1
let canonical = Canonicalize(line)
where groupedRackInfos.ContainsKey(canonical)
select new { Word = line, OriginalRacks = groupedRackInfos[canonical] };

Notice that the result of this query is an IEnumerable that returns each matched word along with the racks that produced a match for that word. The results need to be re-shaped back into the matches for each rack, but even with this small amount of extra work (done with LINQ to Objects, naturally!) the timings speak for themselves:

Method            Search time (ms)
Original          2700
Updated           1658
Single file pass  108

Not a bad result. In fact, it feels near enough to instantaneous to me so I’m calling it a day on this one!
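For completeness, the re-shaping step could look something like this sketch, reusing the SearchResult class from earlier – the tuple here is my stand-in for the anonymous type produced by the query, and the method name is hypothetical:

```csharp
using System.Collections.Generic;
using System.Linq;

static class ResultShaper
{
    // 'matches' stands in for the output of the single-pass query:
    // each matched word together with the original racks that produced
    // it. Flipping that around recovers one SearchResult per rack,
    // matching the shape of the original method's output.
    public static IEnumerable<SearchResult> Reshape(
        IEnumerable<(string Word, List<string> OriginalRacks)> matches) =>
        from match in matches
        from rack in match.OriginalRacks
        group match.Word by rack into wordsForRack
        select new SearchResult
        {
            Rack = wordsForRack.Key,
            Words = wordsForRack.ToList()
        };
}
```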

I should probably point out that this post isn’t a dig at the method that Eric used. He was the only consumer and decided (perfectly validly) that the performance was sufficient. On my machine I decided that it wasn’t (hmm, maybe I need to become more patient!). The key things about this for me are that you need to define your performance goals, measure the performance against those goals and then profile your app to determine where the performance bottlenecks are. Oh, and that LINQ to Objects is pretty powerful – I was able to make this change very easily and with minimal change to the original query.

Comments (4)

  1. Eric Lippert says:

    Nice result!

    After my initial stab at the problem, clearly there are three ways to proceed if you want better time performance:

    1) Read the dictionary into a memory structure optimized for fast lookup.  Cost becomes the cost of iterating over the dictionary once plus the cost of eating up all that memory.

    2) Your approach — since iterating over the dictionary is the expensive part, do all your queries against the dictionary at once instead of over and over again. Cost is again the cost of iterating the dictionary once, no additional memory cost.

    3) Do a pre-processing pass that rewrites the on-disk dictionary into a new format that is much faster to search. For example, if the dictionary file is a b-tree of canonical forms instead of an alphabetical list of non-canonical forms then you can search the dictionary extremely rapidly without ever reading most of it even once.

  2. stuartle says:

    Hi Eric,

Thanks for your comment. I’d ruled out the first option because I wanted to allow for dictionaries that wouldn’t fit into memory (either because the dictionary was much larger, or because less memory was available e.g. on a mobile device). Similarly, I wanted to leave the file format as it is – I like the simplicity and the idea that it could easily be updated (although looking back, I didn’t mention this in the post).

    Having said that, I was very pleased with how simple it was to update the query to perform multiple queries in a single pass of the file. I did have one other slight twist on this, but didn’t have time to fully investigate a slight inconsistency in the results – who knows, I may well get round to looking at it again at some point (for interest’s sake).

