Some thoughts on the RoutedCommand design

I think we have conflated several concepts with RoutedCommands:

1)  The ICommand part with its Execute/CanExecute methods … basically this is just a form of method dispatch that lends itself to declarative systems

2)  Common Verbs or Actions — these are what we now call built-in commands

3)  The mapping of basic InputGestures (Ctrl-C) to these Commands, i.e. the InputGestures collection on RoutedCommand
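To make concept (1) concrete: the ICommand shape really is just Execute/CanExecute plus a change-notification event (the interface below mirrors System.Windows.Input.ICommand), and a plain delegate-based implementation (what the community often calls a RelayCommand) shows there is no routing or input knowledge in it at all. The DelegateCommand class is a sketch of mine, not a WPF type:

```csharp
using System;

// The ICommand shape as it exists in System.Windows.Input:
// just Execute/CanExecute plus a change-notification event.
public interface ICommand
{
    event EventHandler CanExecuteChanged;
    bool CanExecute(object parameter);
    void Execute(object parameter);
}

// A minimal delegate-based implementation: pure method dispatch,
// with no routing or input mapping mixed in.
public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    private readonly Func<object, bool> _canExecute;

    public DelegateCommand(Action<object> execute, Func<object, bool> canExecute = null)
    {
        _execute = execute;
        _canExecute = canExecute;
    }

    // No-op event: this sketch never raises requery notifications.
    public event EventHandler CanExecuteChanged { add { } remove { } }

    public bool CanExecute(object parameter)
        => _canExecute == null || _canExecute(parameter);

    public void Execute(object parameter) => _execute(parameter);
}
```

Everything RoutedCommand adds on top of this (routing, gesture mapping) is what the rest of this post tries to tease apart.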

To understand this, imagine for the moment that we had a slightly different design.

First, we define something called Verbs.  A Verb is just a thing with a name, like “Paste”; it almost could be a string, but type-safety is nice.  In this design we would have ApplicationVerbs.Paste instead of ApplicationCommands.Paste.  These could also be called Actions or Intents (and I have another name I’ll reveal below).

Second, we define a way to map Verbs to ICommands.

class VerbBinding
{
     public ICommand Command { get; set; }
     public Verb Verb { get; set; }
}
Any UIElement could define VerbBindings just like it can define CommandBindings and InputBindings today.
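Concretely, declaring such a binding might look like the sketch below. None of these types exist in WPF today — Verb, ApplicationVerbs, VerbBinding, and the VerbBindings collection are all hypothetical, and ICommand is reduced to its two methods:

```csharp
using System.Collections.Generic;

public interface ICommand
{
    bool CanExecute(object parameter);
    void Execute(object parameter);
}

// Hypothetical: a Verb is just a named token, compared by identity.
public sealed class Verb
{
    public Verb(string name) { Name = name; }
    public string Name { get; }
}

// Hypothetical stand-in for ApplicationCommands.
public static class ApplicationVerbs
{
    public static readonly Verb Paste = new Verb("Paste");
}

public class VerbBinding
{
    public ICommand Command { get; set; }
    public Verb Verb { get; set; }
}

// Hypothetical: imagine UIElement exposing this alongside CommandBindings.
public class SomeElement
{
    public List<VerbBinding> VerbBindings { get; } = new List<VerbBinding>();
}
```

Wiring then reads just like today's CommandBindings: `element.VerbBindings.Add(new VerbBinding { Verb = ApplicationVerbs.Paste, Command = viewModel.PasteCommand });` where PasteCommand is whatever ICommand your ViewModel exposes.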

Third, we have ways to map input to Verbs. 

class InputToVerbBinding
{
   public InputGesture InputGesture { get; set; }
   public Verb Verb { get; set; }
}

These could be defined “globally” in the input system, or scoped to tree elements.

In this design, the View maps basic input like keystrokes and mouse and touch gestures (all InputGestures) either to ICommands directly on the ViewModel, or maps them to generic Verbs like Copy and Paste.  Verbs in turn act like input and route through the visual tree until they find a binding that maps them to an ICommand on the ViewModel.  Imagine we had a VerbBinding which took a Verb and an ICommand and called Execute on the ICommand whenever the Verb was handled.  So for example, a menu might contain Verb=”ApplicationVerbs.Paste”, and there would also be a default key binding that would map Ctrl-V to ApplicationVerbs.Paste, and the developer might also decide to map TwoFingerTouch to ApplicationVerbs.Paste.  Whenever the menu was hit or the Ctrl-V key was pressed, the Paste Verb would be fired and route just like input until it was handled by a VerbBinding and directed to the ViewModel.  (One nuance is that TextBox and other controls may also handle common Verbs like Paste…but let’s set that aside for a moment.)
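The whole walkthrough above can be simulated in a few lines. Everything here is a toy of my own invention — Element stands in for UIElement and the loop stands in for event bubbling — but it shows the one idea that matters: a Verb travels up the tree like input until a VerbBinding hands it to an ICommand:

```csharp
using System;
using System.Collections.Generic;

public interface ICommand
{
    bool CanExecute(object parameter);
    void Execute(object parameter);
}

public sealed class Verb
{
    public Verb(string name) { Name = name; }
    public string Name { get; }
}

public class VerbBinding
{
    public Verb Verb { get; set; }
    public ICommand Command { get; set; }
}

// Tiny ICommand so the demo is self-contained.
public class DelegateCommand : ICommand
{
    private readonly Action<object> _execute;
    public DelegateCommand(Action<object> execute) { _execute = execute; }
    public bool CanExecute(object parameter) => true;
    public void Execute(object parameter) => _execute(parameter);
}

// Toy stand-in for UIElement: a parent pointer plus verb bindings.
public class Element
{
    public Element Parent { get; set; }
    public List<VerbBinding> VerbBindings { get; } = new List<VerbBinding>();

    // Bubble the verb toward the root, like a routed event,
    // until some binding maps it to a command that can run.
    public bool RaiseVerb(Verb verb, object parameter = null)
    {
        for (Element e = this; e != null; e = e.Parent)
            foreach (VerbBinding b in e.VerbBindings)
                if (b.Verb == verb && b.Command.CanExecute(parameter))
                {
                    b.Command.Execute(parameter);
                    return true;  // handled; stop routing
                }
        return false;             // reached the root unhandled
    }
}
```

In this sketch, both a menu hit and the Ctrl-V key binding would end up calling `focusedElement.RaiseVerb(paste)` — neither input source knows or cares which ICommand ultimately runs.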

If you squint at this design, you start to realize that Verbs act just like InputGestures.  And funnily enough, if you look in the input system you find we already have precedent for taking one input event and turning it into another:  we turn Stylus input gestures into Mouse gestures so that applications that are not specifically programmed to handle a Stylus will still work.  Similarly, in the future we will make dragging a finger across a touch screen fire not only TouchMove but MouseMove gestures, so that apps written before Touch was supported will still work (with limitations).  So InputToVerbBinding could just be a way to extend the input system to map one set of InputGestures to another generically.  More abstractly, if we introduce a Touch gesture that means Paste, and the system just adds a global InputToVerbBinding, then any app that handles the Paste Verb will be future-proofed.
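In code, that extension point could be as small as a translation table the input system consults before routing — the same shape as today's Stylus-to-Mouse promotion. All of the names below are invented for illustration:

```csharp
using System.Collections.Generic;

public abstract class InputGesture { }

// Toy gestures, compared by identity.
public sealed class KeyGesture : InputGesture
{
    public KeyGesture(string keys) { Keys = keys; }
    public string Keys { get; }
}

public sealed class Verb : InputGesture
{
    public Verb(string name) { Name = name; }
    public string Name { get; }
}

// Hypothetical: a global registry mapping one gesture to another.
public static class InputToVerbBindings
{
    private static readonly Dictionary<InputGesture, Verb> _map =
        new Dictionary<InputGesture, Verb>();

    public static void Register(InputGesture from, Verb to) => _map[from] = to;

    // The input system would call this on every raw gesture: the raw
    // gesture still routes, and any mapped Verb routes after it, so apps
    // that only handle the Verb keep working when new input hardware appears.
    public static IEnumerable<InputGesture> Expand(InputGesture raw)
    {
        yield return raw;
        if (_map.TryGetValue(raw, out Verb verb))
            yield return verb;
    }
}
```

Shipping a new Paste-meaning touch gesture would then be one `Register` call by the platform, with no app changes.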

Hmmm…does that mean Verbs are just InputGestures?  I mentioned I
had another name for Verb in mind.  How about “AbstractGesture”?
AbstractGesture would just be a peer to KeyGesture and MouseGesture
(lousy name though…VerbGesture?).  If Verbs are InputGestures, then
we no longer need a special VerbBinding; InputBinding is sufficient.  I
also mentioned that there was a nuance that controls need to handle
common Verbs.  Well, controls can handle InputGestures and if Verbs are
a type of InputGesture…so we’re done.  Alternatively and more
abstractly, TextBox can be thought of as a ViewModel for a string
property on your model…but I don’t blame you if your head starts
spinning now.

In the final design, we get rid of RoutedCommand and add a new
sub-class of InputGesture called Verb.  CommandBinding goes away in
favor of reusing InputBinding.  The InputGestures collection on
RoutedCommand is replaced by a new input extensibility that allows us
to map one InputGesture to another.  ApplicationCommands,
EditingCommands etc. become collections of common verbs and their
default mappings from other InputGestures.  I’d probably invent a new
thing like the InputToVerbBinding I mentioned, but I don’t have a good
name for it.
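Summing up as a sketch: Verb becomes a peer of KeyGesture under InputGesture, InputBinding does the job CommandBinding would have, and ApplicationVerbs carries the common verbs plus their default gesture mappings. Every type below is invented for illustration except the two-method ICommand core:

```csharp
using System.Collections.Generic;

public interface ICommand
{
    bool CanExecute(object parameter);
    void Execute(object parameter);
}

public abstract class InputGesture { }

public sealed class KeyGesture : InputGesture
{
    public KeyGesture(string keys) { Keys = keys; }
    public string Keys { get; }
}

// The new subclass: a Verb is just an abstract, named gesture.
public sealed class Verb : InputGesture
{
    public Verb(string name) { Name = name; }
    public string Name { get; }
}

// InputBinding is now sufficient: it maps ANY gesture, Verb included,
// to an ICommand, so separate CommandBinding/VerbBinding types go away.
public class InputBinding
{
    public InputGesture Gesture { get; set; }
    public ICommand Command { get; set; }
}

// Common verbs plus their default mappings from other gestures
// (the "InputToVerbBinding-like thing" without a good name yet).
public static class ApplicationVerbs
{
    public static readonly Verb Paste = new Verb("Paste");

    public static readonly IReadOnlyDictionary<InputGesture, Verb> DefaultMappings =
        new Dictionary<InputGesture, Verb>
        {
            { new KeyGesture("Ctrl+V"), Paste },
        };
}
```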

Feedback appreciated.

Comments (8)

  1. Rob says:

    John, can you provide some mocked up code and talk in more detail about what types of problems this would solve – aside from mapping of multiple gestures to a "concept."

  2. Neil Mosafi says:

    I like this idea.  When you write a ViewModel and you expose an ICommand, it would be nice to be able to natively wire up the input event to the command and still have the benefits of the existing RoutedCommand framework.  Your proposal seems to solve that problem by segmenting the command behaviour from the command execution.  Whilst RoutedCommand uses inheritance to solve this, verbs would use composition to do it, meaning you could get the routing of commands with any ICommand implementation.

    I think this simplifies things – as you said commands are just a special type of input gesture so why treat them differently?

  3. Stefan Olson says:


    I agree with Rob that it would be good to have some example code to try and work through in my head exactly how this works.

    Your post inspired me to write up on my blog some of the issues that I had with routed commands and how I solved them by creating what I’ve called targeted commands, which is similar to what is in Blend but on a much more simplified scale.  I will be making the source code to this system available in the near future, but it is just another of the many different ways of handling commands that are out there in the community.

    Your indication in this post is that it would still continue to route the same way routed commands do, which, as I described in my article, is not always suitable.

    One thing I’d really like to see solved with commands is the issue of indicating your checked state.  Right now there is no easy way to check a toggle button from your CanExecute, for example when the selected text is bold.  Are there some plans to improve that?  I haven’t yet checked out the ribbon control to see if they have come up with a solution in their code.

    I do have to add how nice it was in WPF that the CommandManager works with ICommands, so with my TargetedCommand class I get the same CanExecute behaviour as routed commands.  I had not expected it to be quite so simple!


  4. There’s a lot to be said for allowing failure if one wants to have a high probability of success.  The problem with the WPF Command subsystem is the same as the problem with the man who is 200 lbs overweight with broken health — there may not be any good solutions today, only  some that are worse than others.  The good solutions were only available 5, 10, 20, and 30 years ago.

    Be careful.  WPF’s biggest flaw is that it’s not purely OO, but rather some hybrid of OO and architect’s-kitchen-sink.  You want to get rid of RoutedCommand, but fail to explain to the layman why RoutedCommand is a poor idea for a public API, and an especially poor API for a supposedly OO UI framework.

    RoutedCommand’s fundamental flaw is that it doesn’t allow a delegation model: any external information the object needs to get the task done should be passed to it.  RoutedCommand doesn’t do this.  Rather than being given routing instructions, it does the routing itself.  You also can’t pass additional details, such as whether this command gets logged (perhaps to more easily implement an Undo stack).  OO systems should feature objects that never ask other objects for information to do something; instead, the object should ask the peer object that already has the information to do the work.

    It turns out that my comments are correct both in theory and in practice.  These OO principles are a really common concept, as Stefan has roughly created such a solution with his ICommandReporter argument to his TargetedCommandManager.

    But how much more theoretical can I make my argument and still be attached to sound practice?  It turns out routing is a general concept in distributed, real-time systems.  In particular, you want a distributed event bus with subscription-based routing and policy-driven management of discovery and federation.  A TargetedCommand, and even Josh Smith’s RelayCommand, are just different routing protocols for building distributed, discoverable, federated systems.  The interfaces for these commands also create adapters to stabilize what kind of subscription you want: Pruning what objects or sub-tree of objects to send commands to is also theoretical, and known as quenching (activation/deactivation of event sources).  A command by itself should never know its routing protocol.  That’s the job of the sender: picking first class mail over 27th class mail, etc.  Delegate the routing protocol, and also have CommandManager keep several journals, allowing journaling to be delegated to various journaling algorithms.  The journaling algorithms can also, in turn, be event-based and give birth to event monitoring tools.

    Similar stuff could be done to fix the silly design of the journaling system for page navigation, which is also non-OO, and completely architect’s-kitchen-sink.  All my comments are backed up by theory, and not from-the-gut thought as to ideal abstractions.  Your Verb concept didn’t lose me; I’ve done similar designs in Swing based on Swing guidance, though your design seems iffy and non-OO.  It seems very inheritance-based, underutilizing delegation?  Verbs also need semantics, not syntax.  Am I cutting on a dime, escaping a linebacker, or am I cutting text from a buffer?  Seems like what you really want support for is Multi-Events, the event analog to multi-methods.  And I’d agree.  Writing Tk code in Lisp is far more expressive than other languages for this reason.  Unfortunately, .NET languages don’t support multi-methods or multi-events; they are based on the type binding system of the CTS.  To do what you seem to truly want, you’d need to talk to the CLR folks about changing the design of type binding.  Sounds like a problem for MS Research?  Even then, as a MS Architect, you need to expose an API usable from all .NET languages…

    Also, I feel I’ve had a hard time convincing other programmers of sane design in my life, so I’ll settle for a religious victory: FrameworkGesture fits better.  AbstractGesture is a little too artistic for me, and also seems to go against WPF naming convention for ABC’s.

  5. @Verbs in turn act like input and route through the visual tree until they find a Binding that maps them to an ICommand on the ViewModel.

    I’m not 100% sure how you’d do this, but my reaction is, Why make Binding decide?  Does DataContext’s separation from Binding guarantee a Binding+DataContext will map correctly? Probably not… Sometimes you want to tightly bind these together, which explains all the cruft on the Binding/MultiBinding interfaces, such as ElementName. Something I’d have to think further about, as standalone Binding could create a nasty schroedinbug.  I suspect most bugs in WPF apps to be schroedinbugs from the inheritance context mechanism.

  6. johnzabroski says:


    @Whilst RoutedCommand uses inheritance to solve this, verbs would use composition to do it meaning you could get the routing of commands with any ICommand implementation.

    Using composition still doesn’t solve the underlying problem that the design is non-OO.

    Composition (through interfaces) doesn’t equal OO.  What composition gives you is resiliency to bugs resulting from "fragile base classes", or, as recent research points out, fragile middle classes.  In a nutshell, good use of composition frees you from making assumptions about implementation details.  However, good use of composition can be done with structured programming and functional programming.

    It’s delegation that is missing, in many places, from WPF.  Examples include navigation and ControlTemplates.  Without delegation, WPF apps still feel very much like VB6.  RoutedCommands actually map directly to the VB6 way of doing things: the Frame/Controller paradigm, where the controller ends up knowing too much.

  7. Karl says:

    Sounds interesting. A number of people have been critical of RoutedCommands as they are implemented.

    You may want to check out the Composite Command concept over in the Prism project.