Humility and handwriting


Designing the actual experience of handwriting in OneNote 2003 was quite a challenge. We didn’t want to just replicate paper, since that didn’t seem to be adding enough value. So we got quite interested in the idea of trying to determine the structure of ink. That is very hard and the subject of ongoing research, so we decided to experiment with a different approach. We’d of course let you write anywhere, but we would also show you special places to write (called “affordances” in the jargon) where we would interpret your ink as part of an outline or list structure. If any of you remember the first public beta of OneNote and had a TabletPC to try it on, there were “writing guides” visible on the page to show you where you could write to tell OneNote to link two items together, or to continue a line. You can turn these on in OneNote 2003 even now using Tools/Options – they really help if you care about structure.




We thought we were pretty clever when we designed that. But as some of us had feared, people came to think that they MUST write in the writing guides. Even if they knew they didn’t have to, writing outside the guides seemed wrong somehow, and it was distracting to cross a visual boundary you don’t have on paper. No matter how dim we made the guides, some people found them distracting, and the rest couldn’t even see them. And then the more we thought about it, the more this design seemed backward. People already knew where to write to continue a line or a list – they didn’t need us to tell them. In fact all these guides did was tell you where NOT to write if you DIDN’T want to continue a list or a line. And even that most people were pretty good at – they naturally wrote far enough away from existing stuff.




So we went back to fundamentals. What’s important with handwriting? Well, just getting the ink on the page is a big one (maybe 80% of the value at least is achieved just with that). Let’s make sure that is easy. Next might be searching the ink (say that’s 10% more). Next might be recognition to text (“reco”). About 8%? And then there is structure of the ink. Maybe that is 2%. Somehow we had got all excited as a team (me included) about the least important thing. It turns out that technology people do that all the time – it’s endemic.




We did what we could to fix up the ink to get the “80%” working as best we could. We also got the searching and reco of course. And there is even structure. But the whole experience is not the way we want it to be, so when we get a chance to really sit down and do it the way we want to, you’ll see great things.




BTW, one of the difficult things about finding bugs with ink (both in design and code) is being able to reproduce the problem in a developer’s office. Most bugs you can describe in a few steps, but we found very few bugs were being logged for ink even late in the product cycle. It turns out this was because the testers were embarrassed to write “draw some ink and it doesn’t work the way you expect” in the bug report. That’s not really reproducible, even with a picture of the results. So we implemented a way to capture the “wet” ink and send that directly to the developer so they could replay the ink being drawn. After a while things got better. But then we noticed that while the ink seemed to be behaving well in terms of code errors being few, every person not familiar with “Scribbler” who tried to use it complained it was hard to use the ink. We couldn’t see it too well, but those fresh eyes could.




We organized events we called “ink-o-ramas”. We would collect about 10 people (actually Microsoft employee guinea pigs) who had never seen “Scribbler”, put them in a room, give them Tablets, and ask them to take notes on a videotaped lecture we played for them for an hour (and we gave them pizza – which didn’t go too well with the Tablets but we made do). Once people got used to Scribbler’s idiosyncratic ink interface, we had to change them out and get fresh “normal” people. We had 13 ink-o-ramas in total, with each of the later ones having a different focus. We had one where we asked only native writers of East Asian languages to come, since those languages have a different interface for inking (Japanese, Korean, Simplified and Traditional Chinese). We got a lot of great, raw feedback, and we were able to see people having trouble in person – the devs were in the room too. By the time we finished all this, we had the ink working well enough to ship it, and it is what you see today. But we definitely know it can be better. Much better. We have plans…

Comments (16)

  1. John Porcaro says:

    Excellent blogging, best blog I’ve seen from Microsoft (my own included, I think). I love the peek into the thought process behind tools we use everyday. Thanks, and please keep it up.

  2. Tejas Patel says:

    Nice one again. Yours is the second blog from Microsoft, after Scoble’s, that I look forward to reading now. The reason is that it shows your thought process behind a product, as John said before – you are showing us how you felt the need for the product and what other users are using the product for as well.

  3. Jason Salas says:

    Hi Chris,

    Great thoughts. During my first Microsoft interview (of 2), I was asked about what feature(s) I’d like to see in future versions of Word.

    Long story short, the PM said you guys have quite a daunting challenge, because were it not for customer requirements and hardware limitations of the vast majority of consumers, you guys could build the best program the world has ever seen. He told me this, and said the bummer of realism at the end of the day is that they have to keep reminding themselves that all they’re doing is creating an electronic typewriter. 🙂

    Great blog!

  4. mike says:

    These entries are fascinating. Looking forward to more!

  5. Well, I have to disagree about electronic typewriter! Today a Word document can "know" it is an invoice and direct you to pick from your accounting or ERP program the customer you are sending the invoice to, and automatically fill out the invoice for you (thanks to XML markup). This is the sort of thing that helps people be more productive. It is hardly a typewriter!

  6. "…just getting the ink on the page is a big one (maybe 80% of the value at least is achieved just with that)."

    I have to disagree with this a bit. Getting the ink on the page is important, yes. But if you can’t search it, OneNote would have the same disadvantages as paper. I would say the ink input experience and ability to search are equally important.

    Once you can search ink doesn’t recognition to text pretty much come for free?

  7. I hope one of the ink things you’re looking at is gesture recognition; not being able to scribble out a mistake is a real pain for me. As you say, Journal is the other main ink tool, and it’s the poster child for ink, yet you two work very differently; the gestures should be OS consistent (and the OS should let me ink into dialog boxes directly, but that’s another rant, and I think Longhorn will Do It Right (or else, of course)) 😉

  8. Kartik, believe me, we also started out thinking that ink on a page was not too valuable without structure (that includes determining text vs. drawing, so that you can do reco, and also determining what words are connected into sentences). But as time went by we were forced to admit that getting the ink down was not only fundamental (as we had always assumed), but a large part of the value was that you could simply have your paper notebook on your PC. Even without searching, people liked that you could flip through your notes and see them. Journal, for example, although it allows searching of ink does not use notebooks – you create files and save them, as you would with Word. That makes searching across multiple files very difficult. So in effect Journal only has electronic paper, yet many Journal users see it as very valuable.

  9. Mary, we’re looking at gestures for next time. "Scratch out" is not very controversial, but most others are. Both the Tablet and OneNote teams looked at doing many more gestures, but the problem is that most of them are not natural like scratch-out is. If people don’t know a gesture, most will never use it, since they’d have to realize that such a thing as gestures might exist and then go teach themselves. What we found was that if you enable gestures by default, people unaware of them execute them by accident once in awhile, and that is very disturbing – when text disappears because you moved your pen, people stop trusting the application. Making them optional would mean that they would be used only by a small niche of users, so they did not make the cut for version 1.

  10. William Dowell says:

    Great read as always from you! The "re:…" comments are a valuable addition too!

  11. Peter Torr says:

    Supporting the PocketPC gestures for newline, space, etc. would be cool. (Not that I’ve ever actually been able to use text recognition with OneNote… I need to flatten my Tablet for that to happen).

  12. Those of you who have a TabletPC are probably familiar with the built-in note-taking application called…

  13. That’s a very interesting note on the handwritting. Those old typewriters do the trick for me every time! 😀

  14. I promised some time ago to write about the Tablet experience in OneNote 12 and how it has changed. This
