Sorting Concepts

Before we look at individual sorting algorithms, let's start by talking about what makes a good one.

Time Complexity

Good sorting algorithms are quick. In general, a fast sorting algorithm runs in O(n log n) time, while a slow one runs in O(n²). We can see this here, comparing quick sort with bubble sort.
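As a rough illustration of that gap (a sketch, not the article's visualization), here's a comparison-counting bubble sort in Python. With this version, the comparison count is always n(n−1)/2, so doubling n roughly quadruples the work, while an O(n log n) sort would only a bit more than double it:

```python
import math
import random

def bubble_sort_comparisons(arr):
    """Bubble sort on a copy of arr; returns the number of comparisons made."""
    a = list(arr)
    count = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

for n in (100, 200, 400):
    data = [random.random() for _ in range(n)]
    # Bubble sort: quadratic comparisons vs. the n log n an efficient sort needs.
    print(n, bubble_sort_comparisons(data), round(n * math.log2(n)))
```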

Space Complexity

Ideally, a sort won’t need any extra memory to run (beyond a few variables). However, some algorithms require extra space to run. Here’s heap sort (which has O(1) space complexity) and merge sort (which has O(n) space complexity).
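To make the merge sort case concrete, here's a minimal Python sketch (my own, not from the article). Notice that every merge step builds an auxiliary list, which is where the O(n) extra space comes from:

```python
def merge_sort(arr):
    """Merge sort. The recursion and the merge buffer use O(n) extra space."""
    if len(arr) <= 1:
        return list(arr)
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # each slice allocates a new list
    right = merge_sort(arr[mid:])
    merged = []                     # auxiliary buffer for the merge step
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Heap sort, by contrast, rearranges elements inside the original array and needs only a few index variables.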

Adaptivity

A good sorting algorithm is able to take advantage of semisorted data, using it to speed up its sort. Insertion sort is adaptive and can do this, while heap sort is not and cannot. Here's a case where the two are dealing with random data:

But here's semisorted data, which insertion sort can exploit:
(Note: this variant of insertion sort is very good at taking advantage of semisorted data, but is weaker with random data.)
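You can see adaptivity directly by counting comparisons. In this Python sketch (an illustration of the general idea, not the exact variant shown above), insertion sort makes only n−1 comparisons on already-sorted data, but far more on shuffled data:

```python
import random

def insertion_sort_comparisons(arr):
    """Insertion sort on a copy of arr; returns (sorted list, comparison count)."""
    a = list(arr)
    count = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            count += 1              # one comparison of a[j] against key
            if a[j] > key:
                a[j + 1] = a[j]     # shift larger element right
                j -= 1
            else:
                break               # already in place: the adaptive shortcut
        a[j + 1] = key
    return a, count

_, sorted_cost = insertion_sort_comparisons(list(range(100)))
shuffled = list(range(100))
random.shuffle(shuffled)
_, random_cost = insertion_sort_comparisons(shuffled)
print(sorted_cost, random_cost)   # sorted input is far cheaper
```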

Stability

In a stable sort, the order of two tied elements is determined by their order before the sort took place. In an unstable sort, their final order is often independent of how they were originally ordered. Here, the bars have letters so that you can keep track of their pre- and post-sort orders. The first bar of a certain size is "A", the second is "B", and so on. Insertion sort keeps tied bars in their original order, while quick sort does not.
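The same lettering trick works in code. In this Python sketch, bars are (height, letter) pairs; Python's built-in sort is guaranteed stable, so when we sort by height alone, tied bars keep their A, B, C order:

```python
# Bars tagged as in the text: the first bar of height 3 is "A",
# the second is "B", the third is "C".
bars = [(3, "A"), (1, "A"), (3, "B"), (2, "A"), (3, "C")]

# Python's built-in sorted() is stable: ties keep their input order.
by_height = sorted(bars, key=lambda bar: bar[0])
print(by_height)

# The three bars of height 3 still read A, B, C after the sort.
print([letter for height, letter in by_height if height == 3])
```

An unstable sort could legally emit those three bars as C, A, B, or any other order, since it only promises to order the heights.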

Simplicity

Although secondary to other concerns, simplicity and ease of coding should not be ignored. Bubble sort is a much easier sort to understand than heap sort: it compares adjacent elements and gradually floats large values to the top, while heap sort makes comparisons across a heap structure in a way that is much harder to follow.
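That simplicity shows in the code itself. A complete bubble sort fits in a few lines of Python (one common formulation; the next section presumably walks through its own version):

```python
def bubble_sort(arr):
    """Repeatedly compare neighbours, floating the largest value to the top."""
    a = list(arr)
    for end in range(len(a) - 1, 0, -1):   # the top 'end' slots are settled
        for j in range(end):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]  # swap out-of-order neighbours
    return a
```

A comparable heap sort needs a heapify routine and careful parent/child index arithmetic before it sorts anything.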

Let's start with bubble sort...