I want to talk about the XNA Framework Content Pipeline! I do, I do, I do!
But apparently Michael is working on an overview post, so I'm going to wait for that before diving into any details. In the meantime, I decided to talk about some of the background assumptions that went into our design.
Game content (by which I mean graphics, sounds, physics settings, AI data – basically everything that isn't your actual code) is created in a DCC (Digital Content Creation) tool. This could be Photoshop, 3D Studio Max, Maya, Milkshape, Paint Shop Pro, MSPaint, the Visual Studio XML editor, Notepad, or perhaps even a custom editor written just for your game.
After it has been created, this content is then used by your game.
So where's the problem? Why do we need a pipeline here at all? In fact, what even is a "pipeline"? Should environmentalists be worried about potential impacts on the Alaskan wilderness?
The fundamental issue here is that DCC tools tend not to create content in the right format for games. For instance:
- 2D paint programs usually create images with a 32-bit color depth. But for efficient rendering on a graphics card, most textures should be using the compressed DXT format.
- 3D modeling programs often create meshes using a mixture of triangles, quadrilaterals, and complex polygons with large numbers of vertices. But graphics cards usually only support triangles.
- Intel processors are little-endian. The Xbox CPU is big-endian. This means data created on an Intel PC needs to have the byte order swapped before it can be used in an Xbox game.
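To make those conversions concrete, here is a rough sketch of each one in Python. The function names are mine, not part of any XNA or DirectX API, and the triangulation assumes a convex polygon (real pipelines handle concave cases too):

```python
import struct

def dxt1_size(width, height):
    """DXT1 stores each 4x4 pixel block in 8 bytes (4 bits per pixel),
    an 8:1 saving over uncompressed 32-bit color."""
    return max(1, width // 4) * max(1, height // 4) * 8

def triangulate_fan(polygon):
    """Split a convex polygon (a list of vertex indices) into a
    triangle fan, since graphics cards only draw triangles."""
    return [(polygon[0], polygon[i], polygon[i + 1])
            for i in range(1, len(polygon) - 1)]

def swap_endian_floats(data):
    """Re-pack a buffer of little-endian 32-bit floats as big-endian,
    as needed when targeting a big-endian CPU like the Xbox's."""
    count = len(data) // 4
    values = struct.unpack("<%df" % count, data)
    return struct.pack(">%df" % count, *values)

# A 256x256 texture: 256*256*4 = 262144 bytes uncompressed,
# versus 32768 bytes as DXT1.
print(dxt1_size(256, 256))                    # → 32768

# A quad becomes two triangles sharing the first vertex.
print(triangulate_fan([0, 1, 2, 3]))          # → [(0, 1, 2), (0, 2, 3)]

# Two floats written on a little-endian PC, re-packed big-endian.
little = struct.pack("<2f", 1.0, 2.0)
print(struct.unpack(">2f", swap_endian_floats(little)))   # → (1.0, 2.0)
```

None of this is hard individually; the pipeline problem is that every asset needs some combination of steps like these applied, reliably, before the game can use it.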
Those examples are just off the top of my head: there are many more. I'm sure you get the point that lots of conversion work is needed before a game can use data from a DCC tool.
So tell me, I hear you cry, how and where should this work be done?
There are really only three possible options.
We could do the conversion inside the editing tool, by writing a custom exporter that saves directly into a game format. The Photoshop plugin that exports to DirectX .dds files is a good example of this approach. The disadvantage is that you'd have to re-export all your data any time the conversion requirements changed. For instance if you were making a game for more than one platform, you'd have to export everything several times using different options for each target platform.
We could do the conversion directly inside your game, as a side effect of loading content. The D3DX Mesh class and CreateTextureFromFile methods are good examples of this approach. The disadvantage is that we'd have to repeat the same conversion work every time you loaded content, and if the processing was at all complicated, this would slow down your game loading. A pet peeve of mine is games with long load times. I don't like having to wait around before I get to play!
We could do the conversion in between the editing tool and the game, using an independent program. The dxops utility (part of the DirectX SDK) is a good example of this approach. The disadvantage is that if you aren't careful, it can be pretty confusing remembering which converter utilities you have to run on each file, and it is easy to forget to re-convert something after you change the source asset.
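The usual guard against that forgetting-to-reconvert problem is timestamp comparison, the same idea build tools like make use: an output is stale if it is missing or older than its source. A minimal sketch (the function name is mine, purely illustrative):

```python
import os

def needs_rebuild(source, output):
    """A converted asset is stale if the output doesn't exist yet,
    or if the source was modified more recently than the output."""
    if not os.path.exists(output):
        return True
    return os.path.getmtime(source) > os.path.getmtime(output)
```

A converter driven by a check like this only redoes work when an artist actually touches the source asset, which is exactly the bookkeeping that is easy to get wrong when you run each utility by hand.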
Anyone care to venture a guess which approach we chose for XNA?