Before we look at individual sorting algorithms, let's start by talking about what makes a good one.
Good sorting algorithms are fast. Broadly speaking, a fast sorting algorithm runs in O(n log n) time, while a slow one runs in O(n²). We can see this here, comparing quick sort with bubble sort.
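If you'd like to see that gap outside the animation, here's a rough Python sketch (it's not the code behind the demo, just an illustration) that counts element comparisons for a simple bubble sort and a simple quick sort as the input grows. The exact counts depend on the implementation, but the n² versus n log n shape comes through.

```python
import random


def bubble_sort_comparisons(data):
    """Sort a copy of `data` with bubble sort and return the comparison count."""
    a = list(data)
    comparisons = 0
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons


def quick_sort_comparisons(data):
    """Sort a copy of `data` with a simple quick sort and return the comparison count."""
    comparisons = 0

    def sort(a):
        nonlocal comparisons
        if len(a) <= 1:
            return a
        pivot = a[len(a) // 2]
        less, equal, greater = [], [], []
        for x in a:
            comparisons += 1  # roughly one comparison per element per partition
            if x < pivot:
                less.append(x)
            elif x > pivot:
                greater.append(x)
            else:
                equal.append(x)
        return sort(less) + equal + sort(greater)

    sort(list(data))
    return comparisons


for n in (250, 1_000, 4_000):
    data = [random.random() for _ in range(n)]
    print(f"n={n}: bubble={bubble_sort_comparisons(data):,}"
          f"  quick={quick_sort_comparisons(data):,}")
```

Each time n grows by a factor of 4, bubble sort's count grows by roughly 16x, while quick sort's grows by only a little more than 4x.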
Ideally, a sort won’t need any extra memory beyond a few variables. In practice, though, some algorithms require extra space proportional to the input. Here’s heap sort (which has O(1) space complexity) and merge sort (which has O(n) space complexity).
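As a sketch of what that difference looks like in code (these are minimal illustrative versions, not the ones driving the animation), heap sort rearranges the input list in place, while merge sort's merge step builds a temporary list the size of its input.

```python
def heap_sort(a):
    """In-place heap sort: O(1) extra space beyond a few index variables."""
    n = len(a)

    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return

    # Build a max-heap, then repeatedly move the largest element to the end.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)


def merge_sort(a):
    """Merge sort: returns a new sorted list; the merge step needs O(n) extra space."""
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged = []  # the extra O(n) space lives here
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Note that this merge sort hands back a fresh list rather than sorting in place; that extra buffer is exactly where its O(n) space cost comes from.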
A good sorting algorithm can take advantage of semisorted data, using the existing order to speed up its run. Insertion sort is adaptive and can do this, while heap sort cannot. Here's a case where the two are dealing with random data:
But here's semisorted data, which insertion sort can exploit: (Note: this variant of insertion sort is very good at taking advantage of semisorted data, but is weaker with random data.)

In a stable sort, the order of two tied elements is determined by their order before the sort took place. In an unstable sort, their final order is often independent of how they were originally ordered. Here, the bars have letters so that you can keep track of their pre- and post-sort orders: the first bar of a certain size is "A", the second is "B", and so on. Insertion sort keeps tied elements in their original order, while quick sort does not.
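To make "adaptive" a bit more concrete, here's a plain Python insertion sort (again just a sketch, not the exact variant in the animation) that counts its comparisons. On random input it does roughly n²/4 comparisons; on nearly sorted input its inner loop bails out almost immediately, so the count stays close to n.

```python
import random


def insertion_sort_comparisons(data):
    """Sort a copy with insertion sort and return how many comparisons it made."""
    a = list(data)
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]  # shift larger elements right
                j -= 1
            else:
                break            # already in order: the inner loop stops early
        a[j + 1] = key
    return comparisons


n = 2_000
random_data = random.sample(range(n), k=n)
nearly_sorted = list(range(n))
for i in range(0, n - 1, 100):   # lightly scramble a sorted list
    nearly_sorted[i], nearly_sorted[i + 1] = nearly_sorted[i + 1], nearly_sorted[i]

print(insertion_sort_comparisons(random_data))    # roughly n^2 / 4, about 1,000,000
print(insertion_sort_comparisons(nearly_sorted))  # close to n, about 2,000
```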
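And here's a small sketch of stability in code, using made-up (size, label) pairs to stand in for the lettered bars. The stable insertion sort keeps tied sizes in their original A, B, C order; a textbook in-place quick sort (a Lomuto partition, chosen here for illustration and not necessarily the quick sort in the animation) can shuffle them.

```python
# Hypothetical bars: (size, label), where the label records the original order
# of bars that share the same size ("A" came before "B", and so on).
bars = [(3, "A"), (1, "A"), (3, "B"), (2, "A"), (1, "B"), (3, "C")]


def insertion_sort_by_size(items):
    """Stable: the strict '>' means equal sizes are never moved past each other."""
    a = list(items)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j][0] > key[0]:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a


def quick_sort_by_size(items):
    """Unstable: the partition's long-distance swaps can reorder tied sizes."""
    a = list(items)

    def partition(lo, hi):
        pivot = a[hi][0]
        i = lo - 1
        for j in range(lo, hi):
            if a[j][0] <= pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[i + 1], a[hi] = a[hi], a[i + 1]
        return i + 1

    def sort(lo, hi):
        if lo < hi:
            p = partition(lo, hi)
            sort(lo, p - 1)
            sort(p + 1, hi)

    sort(0, len(a) - 1)
    return a


print(insertion_sort_by_size(bars))  # the size-3 bars come out A, B, C
print(quick_sort_by_size(bars))      # the size-3 bars come out B, A, C
```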
Let's start with bubble sort...