Avalon’s Media Integration Layer


There is a new article on MSDN describing Avalon’s Media Integration Layer. The article provides insight into some of the key architectural features that differentiate it from other technologies. Earlier this month, Ian Griffiths also wrote about this layer in his Graphical Composition in Avalon article, focusing on composition. Ian’s article led to a discussion on the differences between Apple’s Quartz model and the Avalon model. The MSDN article provides more insight into the differences that Ian described. Given my experience in Mac development and my limited experience with Avalon, a couple of differences jump out at me.


 


One of the key differences is that Quartz renders to bitmaps and then does compositing and transformations on those bitmaps, while Avalon retains the drawing instructions so that it can re-render content as needed with full fidelity. For example, the Macintosh has an accessibility feature that zooms in on the screen, but because it magnifies the already-rendered pixels, you just get a blown-up, blocky desktop. I can imagine that on Longhorn you could get a full-fidelity zoom (does this already exist?). Retaining the instructions also means that things like animations can be handled completely within the MIL – the client code does not have to install a timer and re-render each frame.
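
To make the retained-instructions idea concrete, here is a toy C sketch (not actual MIL or Quartz code, just an illustration) of replaying a retained list of drawing commands at an arbitrary zoom factor. Because the geometry is recomputed rather than a bitmap being stretched, a retained system can zoom with full fidelity:

```c
/* Toy illustration only: a "retained" list of drawing commands can be
 * replayed at any scale with full fidelity, whereas a rasterized bitmap
 * can only be stretched after the fact. */
#include <stdio.h>

typedef struct { const char *shape; double x, y, w, h; } Command;

/* Replay the retained commands at a given zoom factor. */
static void render(const Command *cmds, int count, double zoom) {
    for (int i = 0; i < count; i++) {
        printf("%s at (%.1f, %.1f) size %.1f x %.1f\n",
               cmds[i].shape,
               cmds[i].x * zoom, cmds[i].y * zoom,
               cmds[i].w * zoom, cmds[i].h * zoom);
    }
}

int main(void) {
    Command scene[] = {
        { "rect",    10.0, 10.0, 100.0, 20.0 },
        { "ellipse", 40.0, 50.0,  30.0, 30.0 },
    };
    render(scene, 2, 1.0);  /* normal size */
    render(scene, 2, 4.0);  /* zoomed 4x: geometry is recomputed, so
                               edges stay crisp in a real renderer */
    return 0;
}
```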


 


The other key difference I find interesting is deeper support throughout the OS for resolution independence. At the lowest layer, Quartz is resolution independent, but if you were to run Mac OS X on a 200 DPI monitor, everything would be tiny. There’s no way for an application to know that it’s running on a 200 DPI monitor, and there’s no way it can make things like window titles or standard controls larger. The sizes of Avalon controls are all specified in real-world measurements, not pixels. I remember being disappointed that Apple did not solve this problem when moving to Mac OS X. I think it would have been the perfect opportunity to make this jump and update the APIs. If you’ve ever worked with small Japanese text on today’s monitors, you’d really appreciate the need for higher-resolution monitors.
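
As a back-of-the-envelope illustration (not Avalon or Mac API code), this C sketch shows why specifying control sizes in real-world units matters as pixel density climbs:

```c
/* Toy illustration: sizing UI in real-world units versus pixels.
 * A control specified as 1 inch stays 1 inch at any pixel density,
 * while a control specified as 72 pixels shrinks as DPI rises. */
#include <stdio.h>

static double inches_to_pixels(double inches, double dpi) {
    return inches * dpi;
}

int main(void) {
    const double dpis[] = { 72.0, 96.0, 200.0 };
    for (int i = 0; i < 3; i++) {
        printf("At %3.0f DPI: a 1-inch button is %3.0f px wide; "
               "a 72-px button is %.2f inches wide\n",
               dpis[i], inches_to_pixels(1.0, dpis[i]), 72.0 / dpis[i]);
    }
    return 0;
}
```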


Comments (11)

  1. So did you read all the way to the end of the Mac discussion/rantfest? (The forum system crops comments after a certain level of nesting and you have to click on a ‘View’ link to see the rest of the thread.)

    I was arguing from a disadvantage in the discussion forum, because my Mac OS X experience is somewhat limited – I bought myself a Mac a while ago because I wanted to learn more about its graphics system. I played around a lot, and learned a fair bit, but I’ve not written any serious software on it.

    So I’m wondering whether I made any gaffes in what I said about OS X… How did I do?

    (I’m also wondering whether everything the mweiher guy said was quite correct… He seemed convinced that OS X really has solved the resolution independence problem. It doesn’t look that way to me, but he seems so sure…)

    Oh, and to answer your question, you can get a full-fidelity zoom in the PDC build of Longhorn within a single window. But the PDC build doesn’t have the desktop composition engine – in that build the Avalon compositor isn’t being used for the whole desktop, it’s only used inside windows that choose to use it. So as far as I know you can’t do a full-fidelity zoom on the whole screen in the version of Longhorn that’s currently publicly available. (But you can easily test how your application will fare when enlarged, because you can zoom the entire contents of any given window.)

  2. Chris Hanson says:

    Dan, measurements in Quartz are actually specified in real-world units, "points." Just like in PostScript, points are 1/72 of an inch. Now, Quartz currently makes the simplifying assumption – when drawing to the screen, not to printers! – that one point is one pixel, but there’s no reason in the Quartz architecture that this has to be the case. All it takes is a different device matrix concatenated to the current transformation matrix to change the effective resolution.

    You also have the bitmaps backwards. Quartz rendering flows through the same sort of pipeline that PostScript rendering does. So even when rendering to a bitmap context like the display, you aren’t bound to screen coordinates at any point before the final render and composite. You can set up any transform you want before drawing, and it affects that drawing.

    The central disconnect between people talking about Quartz versus Avalon is that Quartz is an immediate-mode API and Avalon is a retained-mode API. That is, Avalon declares shapes and lets you manipulate them, while Quartz lets you issue commands that make shapes. You can easily build a retained-mode API on Quartz, and I’m sure that there’s also an immediate-mode API in Avalon. As was determined many years ago in the 3D graphics world, the two types of API are complementary: OpenGL is an immediate-mode API on which scene graph APIs like OpenInventor/Coin are built. The one doesn’t cancel out the other.

    Marcel Weiher really, really knows what he’s talking about when it comes to imaging on Mac OS X. He’s a long-time NeXT developer who has implemented his own PostScript interpreter, and who writes high-end tools for the graphics and publishing industry. It’s generally wise to pay attention to what he has to say.
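
    A minimal sketch of Chris’s point about the device matrix, assuming a plain Core Graphics bitmap context (the calls are from memory, and details such as the exact bitmap flags may differ by OS version): scaling the CTM maps the same point-based drawing code onto a higher-resolution device without changing that code.

    ```c
    /* Compile on Mac OS X with: cc demo.c -framework ApplicationServices */
    #include <stdlib.h>
    #include <ApplicationServices/ApplicationServices.h>

    static void draw_one_inch_square(CGContextRef ctx) {
        /* Drawing code thinks purely in points: 72 points = 1 inch. */
        CGContextSetRGBFillColor(ctx, 0.0, 0.0, 1.0, 1.0);
        CGContextFillRect(ctx, CGRectMake(0, 0, 72, 72));
    }

    int main(void) {
        /* A 144 x 144 pixel bitmap standing in for a 144 DPI device. */
        void *buffer = calloc(144 * 144, 4);
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(buffer, 144, 144, 8, 144 * 4,
                                                 space,
                                                 kCGImageAlphaPremultipliedLast);

        /* The "device matrix" step: map 72 points onto 144 pixels (2x). */
        CGContextScaleCTM(ctx, 2.0, 2.0);

        draw_one_inch_square(ctx);  /* same code, now covers the whole bitmap */

        CGContextRelease(ctx);
        CGColorSpaceRelease(space);
        free(buffer);
        return 0;
    }
    ```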

  3. Dan Crevier says:

    Historically, all Macintosh displays were 72 dpi, and Apple was very strict about this. Over the years, they have finally started shipping higher resolution displays. So, one point is no longer equal to 1/72 of an inch on the screen. There is no way on the Mac for me to create a button that’s 1 inch wide without figuring out the resolution of the screen and scaling it.

    I completely agree that the Quartz *architecture* is resolution independent. However, I just wish Apple had built Cocoa and Carbon on top of it to be resolution independent. Both APIs just take integer "points" in their view systems. If they used floating point values instead (like the Quartz subsystem does), then you could get some true resolution independence in the UI. The key difference is in the comparison of Avalon to Carbon+Quartz or Cocoa+Quartz. It’s too bad that the frameworks built on top of Quartz lose this valuable Quartz feature. Who knows, maybe Apple is adding this already and it’ll be out before Longhorn.

    For the bitmap/rendering issue, I think it’s a terminology issue. Once you "flush" drawing on a context, it ends up as a bitmap, and that’s all that the *window* compositing engine deals with from then on.

    I don’t think I said anything that contradicts what Marcel Weiher said.

    In summary:

    1. Quartz is resolution independent, but the layers built on top of it are not, removing the ability to have a resolution-independent UI (including controls, window title bars, etc.). This is fixable, although it requires changes throughout Carbon and Cocoa.

    2. Avalon uses a retained mode, while Quartz 2D rasterizes drawing immediately. I think this leads to some interesting stuff, like the animations handled by the MIL. I did not claim it was novel, just interesting.

  4. You can take Marcel’s statements on Quartz’s capabilities as authoritative. He’s been deep in the guts of PostScript, PDF, and Quartz for at least a decade. Check out his products at metaobject.com.

    Ian, regarding your statement: "There’s no way for an application to know that it’s running on a 200 DPI monitor, and there’s no way it can make things like window titles or standard controls larger."

    Actually, there *is* a way for an app to know the DPI of the device it’s currently drawing into, but most of the time a Cocoa or Carbon app doesn’t *care*.

    Usually, we only care whether we’re drawing to a screen or a printer, since we may make different choices about what to draw for hard copy (not printing the background color of a web page, for example).

    Chris, just a minor correction: Quartz doesn’t assume that one point is one pixel; it’s the clients of Quartz, Cocoa and Carbon, that make that assumption. Quartz doesn’t care; it will render what you tell it to render through whatever device transform you set.

    -jcr
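
    A hedged sketch of "there is a way for an app to know the DPI" at the CoreGraphics level (calls from memory; CGDisplayScreenSize is a 10.3-era API that depends on the monitor reporting its physical size and can return zero): an application could derive pixels-per-inch and size a 1-inch control itself, which is exactly the manual work Dan was describing.

    ```c
    /* Query the main display's physical size and derive its DPI.
     * Compile with: cc dpi.c -framework ApplicationServices */
    #include <stdio.h>
    #include <ApplicationServices/ApplicationServices.h>

    int main(void) {
        CGDirectDisplayID display = CGMainDisplayID();
        CGSize mm = CGDisplayScreenSize(display);   /* physical size, mm */
        size_t pixelsWide = CGDisplayPixelsWide(display);

        if (mm.width > 0) {
            double dpi = pixelsWide / (mm.width / 25.4);  /* 25.4 mm/inch */
            printf("Main display: ~%.0f DPI, so a 1-inch button is %.0f px\n",
                   dpi, dpi * 1.0);
        } else {
            /* Some displays don't report a physical size. */
            printf("Display did not report its physical size\n");
        }
        return 0;
    }
    ```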

  5. John C. Randolph says:

    "Both APIs just take integer "points" in their view systems. "

    No, NSPoint is defined as a pair of floats. Apple supports the legacy QuickDraw imaging model, which uses 16-bit integer coordinates, but that has been deprecated for some time now.

    NSWindow, NSBezierPath, NSView, and Carbon’s HIView all express their coordinate spaces in floating point values, and always have.

    -jcr
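
    For reference, the Cocoa geometry types are plain C structs of floats; roughly this shape (paraphrased from memory, not the verbatim Foundation header):

    ```c
    #include <stdio.h>

    /* Roughly what Foundation's NSGeometry.h declares (paraphrased):
     * structs of floats, unlike QuickDraw's 16-bit-integer Point/Rect. */
    typedef struct _NSPoint { float x;     float y;      } NSPoint;
    typedef struct _NSSize  { float width; float height; } NSSize;
    typedef struct _NSRect  { NSPoint origin; NSSize size; } NSRect;

    int main(void) {
        /* Fractional coordinates are representable; nothing forces ints. */
        NSRect r = { { 10.25f, 4.5f }, { 100.0f, 20.75f } };
        printf("origin (%.2f, %.2f), size %.2f x %.2f\n",
               r.origin.x, r.origin.y, r.size.width, r.size.height);
        return 0;
    }
    ```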

  6. Chris Hanson says:

    Dan: Cocoa and CoreGraphics both use floating-point coordinates, not integer coordinates.

    Thanks for the correction, John. I guess Quartz just renders what it’s given into wherever it’s told…

  7. Dan Crevier says:

    Oops, you are right, they are floats in Cocoa. The fact still stands that 1.0 maps to one pixel on all shipping Macs, no matter what the screen resolution is. Is that not true?

  8. John C. Randolph says:

    On the *displays* of all currently shipping Macs, that is correct. It’s worth pointing out, however, that any NSView that can display itself can also print or fax itself, which necessarily involves rendering at a different resolution.

    -jcr

  9. Dan Crevier says:

    True. To further clarify: what I wish the Mac had is the ability to sensibly handle *screens* of different resolutions. Can everyone agree on that?

  10. John C. Randolph says:

    If you want to handle screens of differing resolutions, it’s just a matter of writing the appropriate display driver.

    -jcr