All of these terms are overloaded, even in the context of parallel computing. However, we’ve used them extensively to describe how well our parallel algorithms and demo applications work. And sometimes, we throw them around carelessly on the blog, forums, etc., so here are our general definitions.

**Performance** is an attribute that refers to the total elapsed time of an algorithm’s execution. Less elapsed time means higher performance.

**Speedup** is a metric that quantifies performance by comparing two elapsed time values. In parallel computing, these two values are usually generated by the execution of a serial algorithm and a parallelized version of the same algorithm. Speedup is then calculated using the following equation:

Speedup = Serial Execution Time / Parallel Execution Time

So if a serial algorithm takes 100 seconds to complete, and the parallel version takes 40 seconds, the speedup is “2.5x”.
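For concreteness, here's a minimal Python sketch of that calculation (the function name and numbers are just illustrative):

```python
def speedup(serial_time, parallel_time):
    """Speedup = serial execution time / parallel execution time."""
    return serial_time / parallel_time

# The example from the text: 100 s serial, 40 s parallel.
print(speedup(100, 40))  # 2.5
```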

**Efficiency** is a metric that builds on top of speedup by adding awareness of the underlying hardware. It is usually calculated using the following equation:

Efficiency = Speedup / # of cores

So if speedup is “2.5x” on a 4-core machine, efficiency is 0.625 or 62.5%.
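The same example in Python (again, just an illustrative sketch):

```python
def efficiency(speedup, num_cores):
    """Efficiency = speedup / number of cores."""
    return speedup / num_cores

# The example from the text: 2.5x speedup on a 4-core machine.
print(efficiency(2.5, 4))  # 0.625, i.e. 62.5%
```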

**Scalability** is an attribute that refers to how an algorithm's speedup changes as the number of cores/processors changes. The efficiency metric is good for quantifying scalability, because if efficiency holds constant as the number of cores changes, we have linear scaling (or awesome scalability).
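To see what "efficiency holds constant" looks like, here's a small sketch with made-up timings (the numbers below are hypothetical, not measured):

```python
def efficiency(serial_time, parallel_time, num_cores):
    """Efficiency = (serial time / parallel time) / number of cores."""
    return (serial_time / parallel_time) / num_cores

serial = 100.0
# Hypothetical parallel timings: cores -> seconds.
timings = {2: 52.0, 4: 26.0, 8: 13.0}

# Efficiency is the same at every core count, so this
# (imaginary) algorithm scales linearly.
for cores, t in timings.items():
    print(f"{cores} cores: efficiency = {efficiency(serial, t, cores):.3f}")
```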
