“Holy Grail or Fools’ Errand”, Indeed!

There is a lot of discussion and buzz around Intel, Larrabee, and ray tracing on the GPU.

So I decided to investigate. First, we have some required reading. :)

Resources to get your head wrapped around the current debate and the questions it raises:

Wikipedia on “what is Larrabee”

https://en.wikipedia.org/wiki/Larrabee_%28GPU%29

Deano Calver on Real-Time Ray Tracing

https://beyond3d.com/content/articles/94/5

PC Perspective articles on Daniel Pohl’s work converting Quake 3 and Quake 4 to ray-traced rendering engines.

Ray Tracing and Gaming - Quake 4: Ray Traced Project

https://www.pcper.com/article.php?aid=334

Rendering Games with Raytracing Will Revolutionize Graphics

https://www.pcper.com/article.php?aid=455

Ray Tracing and Gaming - One Year Later

https://www.pcper.com/article.php?aid=506

Intel briefing on Larrabee and Visual Computing

https://www.gamasutra.com/php-bin/news_index.php?story=17898

nVidia comments

https://www.pcper.com/article.php?aid=530

Carmack comments

https://games.slashdot.org/article.pl?sid=08/03/12/1918250

https://www.pcper.com/article.php?aid=532&type=overview

Next, some fundamental research papers:

Ray Tracing

Turner Whitted’s seminal 1980 paper “An Improved Illumination Model for Shaded Display”

https://delivery.acm.org/10.1145/360000/358882/p343-whitted.pdf?key1=358882&key2=3584595021&coll=portal&dl=ACM&CFID=33609274&CFTOKEN=12069724

Streaming and Computation

Accelerator

ftp://ftp.research.microsoft.com/pub/tr/TR-2005-184.pdf

Brook

https://graphics.stanford.edu/papers/brookgpu/brookgpu.pdf

Sh

https://www.gamasutra.com/features/20040716/mccool_01.shtml

https://libsh.org/

Tracing on GPUs

Ray Tracing on a Stream Processor, PhD Dissertation

https://graphics.stanford.edu/papers/tpurcell_thesis/tpurcell_thesis.pdf

Ray Tracing on Programmable Graphics Hardware

https://graphics.stanford.edu/papers/rtongfx/rtongfx.pdf

Ray Tracing on GPU, slides

https://www.cs.unc.edu/~lastra/comp870/Slides/Ray%20Tracing%20on%20GPU.ppt

Ray Tracing on GPU, paper with implementation

https://gpurt.sourceforge.net/DA07_0405_Ray_Tracing_on_GPU-1.0.5.pdf

Stackless KD-Tree Traversal for High Performance GPU Ray Tracing

https://graphics.cs.uni-sb.de/Publications/TR/2007/StatelessTrav.pdf

Hybrid Ray Tracing

https://www.iam.unibe.ch/~robert/doc/hybrid-rt-2007.pdf

A Hybrid CPU-GPU Renderer

https://www.stadtwald21.de/mcbeister/gpu-cpu/paper.pdf

Ray Tracing fully implemented on programmable graphics hardware, Master’s Thesis at Chalmers

https://www.ce.chalmers.se/edu/proj/raygpu/downloads/raygpu_thesis.pdf

GPU Performance

Understanding GPUs Through Benchmarking

https://www.gpgpu.org/sc2006/slides/10.houston-understanding.pdf

https://graphics.stanford.edu/projects/gpubench/

Paradigm Shift?

Interactive Rendering In The Post-GPU Era

https://www.graphicshardware.org/previous/www_2006/presentations/pharr-keynote-gh06.pdf

“What’s it all about, Alfie?”

Now with that out of the way, what do I think?

Examining the stream programming model in all of this, especially in the Purcell dissertation and the SourceForge-hosted paper, it is clear that streams are a powerful and expressive concept. The hybrid paper shows how to split a ray-tracing algorithm between the CPU and the GPU, an approach that shows a lot of promise because it could easily be adapted to Larrabee: the GPU parts run on the Larrabee GPU and the CPU parts run on the Larrabee CPU. Without a hybrid approach, the acceleration techniques that rely on spatial data structures cannot run efficiently on today’s GPUs. This is where Larrabee does present interesting possibilities.
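To make that hybrid split concrete, here is a minimal sketch in CUDA of one way the division of labor could look: the CPU owns the spatial data structure and hands the GPU a flat batch of candidate triangles, and the GPU streams every ray in the batch through intersection. This is my own illustration, not code from any of the papers above; the names (intersectBatch, Ray, Tri) and the brute-force per-batch loop are assumptions made for the sketch.

// Hypothetical hybrid split: the CPU traverses the spatial data structure,
// the GPU streams rays through the candidate triangles of each batch.
#include <cstdio>
#include <cfloat>
#include <cuda_runtime.h>

struct Ray { float3 o, d; };       // origin, direction
struct Tri { float3 v0, v1, v2; }; // triangle vertices

__device__ float3 sub3(float3 a, float3 b) {
    return make_float3(a.x - b.x, a.y - b.y, a.z - b.z);
}
__device__ float3 cross3(float3 a, float3 b) {
    return make_float3(a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x);
}
__device__ float dot3(float3 a, float3 b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// Moller-Trumbore ray/triangle test; returns hit distance, or FLT_MAX on miss.
__device__ float hitDistance(Ray r, Tri t) {
    float3 e1 = sub3(t.v1, t.v0), e2 = sub3(t.v2, t.v0);
    float3 p  = cross3(r.d, e2);
    float det = dot3(e1, p);
    if (fabsf(det) < 1e-8f) return FLT_MAX;  // parallel to the triangle plane
    float inv = 1.0f / det;
    float3 s  = sub3(r.o, t.v0);
    float u   = dot3(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return FLT_MAX;
    float3 q  = cross3(s, e1);
    float v   = dot3(r.d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return FLT_MAX;
    float d   = dot3(e2, q) * inv;
    return d > 0.0f ? d : FLT_MAX;
}

// GPU half: one thread per ray, streaming through this batch's candidates.
__global__ void intersectBatch(const Ray* rays, int nRays,
                               const Tri* tris, int nTris, float* tHit) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nRays) return;
    float best = FLT_MAX;
    for (int j = 0; j < nTris; ++j)
        best = fminf(best, hitDistance(rays[i], tris[j]));
    tHit[i] = best;
}

int main() {
    // CPU half: this is where kd-tree / BVH traversal would select the
    // candidate triangles for a coherent ray packet. One triangle and one
    // ray stand in for that here.
    Tri tri = { make_float3(-1,-1,-5), make_float3(1,-1,-5), make_float3(0,1,-5) };
    Ray ray = { make_float3(0,0,0), make_float3(0,0,-1) };

    Ray* dRays; Tri* dTris; float* dHit;
    cudaMalloc(&dRays, sizeof(Ray));
    cudaMalloc(&dTris, sizeof(Tri));
    cudaMalloc(&dHit,  sizeof(float));
    cudaMemcpy(dRays, &ray, sizeof(Ray), cudaMemcpyHostToDevice);
    cudaMemcpy(dTris, &tri, sizeof(Tri), cudaMemcpyHostToDevice);

    intersectBatch<<<1, 32>>>(dRays, 1, dTris, 1, dHit);

    float t = 0.0f;
    cudaMemcpy(&t, dHit, sizeof(float), cudaMemcpyDeviceToHost);
    printf("hit distance = %f\n", t); // expect 5.0
    cudaFree(dRays); cudaFree(dTris); cudaFree(dHit);
    return 0;
}

The interesting twist on Larrabee is that both halves would live on the same chip, so the traffic between the traversal code and the intersection kernel would not have to cross a bus on every batch.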

For ray tracing, though, there are still some problems to be solved. The scene aliasing problem can be solved by jittering the primary rays, which simply costs more rays and thus more performance. The static-versus-dynamic-scene problem can be solved by hybrid software techniques or hybrid hardware (Larrabee). The memory issues affecting overall algorithm throughput can be solved with hybrid hardware (Larrabee) or upgraded hardware (traditional rasterization hardware is already moving this way). These are the easy problems. :)
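For the aliasing point, here is what jittering the primary rays looks like in the same CUDA sketch style: each of the spp samples per pixel shoots through a random sub-pixel offset, so the anti-aliasing is bought directly with more primary rays. The hash-based sampler and the pinhole camera model are illustrative assumptions, not anything taken from the articles above.

// Jittered primary rays: each sample gets a random sub-pixel offset,
// trading spp-times the primary-ray work for anti-aliasing.
#include <cuda_runtime.h>

struct Ray { float3 o, d; }; // same layout as in the sketch above

// Cheap integer hash mapped to [0,1); a stand-in for a real sampler.
__device__ float rnd01(unsigned s) {
    s = s * 747796405u + 2891336453u;
    s = ((s >> ((s >> 28) + 4u)) ^ s) * 277803737u;
    return ((s >> 22) ^ s) * (1.0f / 4294967296.0f);
}

// One thread per pixel; spp jittered rays per pixel for a pinhole camera
// at the origin looking down -z (the camera model is illustrative).
__global__ void genPrimaryRays(Ray* rays, int w, int h, int spp) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;
    float aspect = (float)w / (float)h;
    for (int s = 0; s < spp; ++s) {
        unsigned id = ((unsigned)(y * w + x)) * spp + s;
        float jx = rnd01(id * 2u);      // sub-pixel jitter in [0,1)
        float jy = rnd01(id * 2u + 1u);
        float u = ((x + jx) / w * 2.0f - 1.0f) * aspect;
        float v = (y + jy) / h * 2.0f - 1.0f;
        Ray r;
        r.o = make_float3(0.0f, 0.0f, 0.0f);
        r.d = make_float3(u, v, -1.0f); // unnormalized view ray
        rays[id] = r;
    }
}

int main() {
    const int w = 640, h = 480, spp = 4;
    Ray* dRays;
    cudaMalloc(&dRays, (size_t)w * h * spp * sizeof(Ray));
    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    genPrimaryRays<<<grid, block>>>(dRays, w, h, spp);
    cudaDeviceSynchronize();
    cudaFree(dRays);
    return 0;
}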

The cognitive gap problem, teaching the masses how to do stream programming, is perhaps the hardest problem to solve. Closing it takes a new toolkit, a dedicated evangelism team, and the runway of a multi-year investment in both the hardware and the software toolkit. It took DirectX four-plus years (DX3 in 1996 to DX8 in 2000) to reach its place as a solid, well-respected, and widely used standard. Will Intel stay in the game that long?

So the ray tracing idea is certainly interesting; however, it is not clear that all the problems are solved well enough to gain critical mass, so we’ll just have to wait. If the CPU+GPU hardware that is Larrabee allows the hybrid approach, where a tracing algorithm is implemented as a mix of rasterization phases on the GPU and algorithm phases on the CPU, and Intel ships an API that makes that path approachable for the average game programmer, that might be a win.

More generally, I think the move to a new abstraction around stream processing provides many benefits; the biggest is that it explicitly enables the expression of many more algorithms without performing unnatural acts with a graphics API. If we could perform both rasterization and stream computation in the same frame, that would be dangerously close to nirvana. Of course, I have had a copy of the Sh book for almost two years, downloaded CUDA as soon as it was available, and am always alert for paradigm shifts, so my opinion may be suspect. :)
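A tiny example of what “without unnatural acts” means: a general computation such as y = a*x + y, which under a graphics API means packing arrays into textures and drawing screen-sized quads, is just a plain kernel in a stream model like CUDA. A minimal, complete sketch (none of it from the articles above):

// y = a*x + y expressed directly as a stream kernel: no textures, quads,
// or render targets standing in for arrays.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float* hx = new float[n];
    float* hy = new float[n];
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    float *dx, *dy;
    cudaMalloc(&dx, bytes);
    cudaMalloc(&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]); // expect 3*1 + 2 = 5
    cudaFree(dx); cudaFree(dy);
    delete[] hx; delete[] hy;
    return 0;
}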

With all of that said, the point that stream processing is fundamental does not necessarily portend “doom and gloom” for the current IHVs: nVidia and ATI both have stream-processor designs and can certainly shore up the weak links in their respective food chains if Intel really does prove to have a rabbit in its hat. ATI may have an easier time of that, since they and AMD are now one company. However, nVidia is not to be discounted; they are smart and aggressive and know how to play to win, so it would not surprise me one iota if they pulled a second rabbit out of their hat if and when Intel shows its cards as a strong hand.

Let the debate begin. :)