**Bubble Sort** looks at each adjacent pair in turn, swapping
them so that those two are in order. The largest element rises to the
top during each pass, like a bubble. It is an **O(n²)**
sort.
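A minimal Python sketch of the idea (the function name and the early-exit check are my additions; the early exit isn't required, but it lets the sort stop as soon as a pass makes no swaps):

```python
def bubble_sort(a):
    """Return a sorted copy by repeatedly swapping adjacent out-of-order pairs."""
    a = list(a)  # sort a copy, leave the input alone
    n = len(a)
    for i in range(n - 1):
        swapped = False
        # after pass i, the i largest elements are already at the top
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return a
```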

**Shaker Sort** improves on Bubble Sort by alternating forward
and backward passes. That way a small element that starts near the top
is more quickly shifted back closer to where it belongs, so that it
doesn't slow down the larger elements. It usually performs
significantly better than Bubble Sort, though it is still
an **O(n²)** sort.
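The alternating passes might look like this in Python (a sketch; the shrinking `lo`/`hi` bounds are my way of tracking the already-sorted ends):

```python
def shaker_sort(a):
    """Return a sorted copy using alternating forward and backward passes."""
    a = list(a)
    lo, hi = 0, len(a) - 1
    while lo < hi:
        # forward pass: bubble the largest element up to a[hi]
        for j in range(lo, hi):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        hi -= 1
        # backward pass: bubble the smallest element down to a[lo]
        for j in range(hi, lo, -1):
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
        lo += 1
    return a
```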

**Selection Sort** doesn't exchange every pair of elements it
finds out of order. Instead it makes n passes through the data, each
time *selecting* the largest element remaining. When a pass finishes, it
swaps that element with the one at the end of the unsorted region. It
saves time by doing only n swaps, but it is often less efficient on
semisorted data. It is also an **O(n²)** sort.
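A sketch of those n passes in Python (naming is mine; as described above, each pass finds the largest remaining element and swaps it to the end):

```python
def selection_sort(a):
    """Return a sorted copy: one swap per pass, largest-to-the-end."""
    a = list(a)
    for end in range(len(a) - 1, 0, -1):
        # find the index of the largest element in a[0..end]
        largest = 0
        for j in range(1, end + 1):
            if a[j] > a[largest]:
                largest = j
        # the only swap this pass makes
        a[largest], a[end] = a[end], a[largest]
    return a
```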

**Insertion Sort** adds successive elements to a slowly growing
in-order list. For each element, it first searches in the previous elements
(which are already sorted) for the proper location in which to insert it. Then
it shifts the element back element-by-element, until it’s in the right place.
With the right search pattern, it can be the fastest sort on semisorted data.
It is an **O(n²)** sort.
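A simple Python version, using the common variant where the search and the shifting happen in the same backward scan (the function name is illustrative):

```python
def insertion_sort(a):
    """Return a sorted copy by inserting each element into the sorted prefix."""
    a = list(a)
    for i in range(1, len(a)):
        x = a[i]
        j = i
        # shift larger elements one slot right until x's spot opens up
        while j > 0 and a[j - 1] > x:
            a[j] = a[j - 1]
            j -= 1
        a[j] = x  # drop x into its proper place
    return a
```

On semisorted data the inner loop exits almost immediately, which is why this sort does so well there.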

**Heapsort** is a staged sort. First it quickly organizes the
data into a *heap*, where each element is larger than two other
elements that lie about twice as far from the start as it is. From
there, it is very easy to swap the top element to the end, and then
reform the heap. The process repeats until the whole series is
sorted. Heapsort is an **O(n log n)** sort.
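The two stages can be sketched in Python as follows (a minimal version, assuming the usual array layout where the children of index i sit at 2i+1 and 2i+2 — the "about twice as far from the start" relationship described above):

```python
def heapsort(a):
    """Return a sorted copy: build a max-heap, then repeatedly pop the top."""
    a = list(a)
    n = len(a)

    def sift_down(root, end):
        # push a[root] down until it is at least as large as both children
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and a[child + 1] > a[child]:
                child += 1  # pick the larger child
            if a[root] >= a[child]:
                break
            a[root], a[child] = a[child], a[root]
            root = child

    for i in range(n // 2 - 1, -1, -1):  # stage 1: heapify, bottom-up
        sift_down(i, n)
    for end in range(n - 1, 0, -1):      # stage 2: swap max to the end,
        a[0], a[end] = a[end], a[0]      # then re-form the shrunken heap
        sift_down(0, end)
    return a
```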

**Mergesort** is a “divide-and-conquer” sort that
very quickly splits the problem into two subproblems, then splits those
two, and so on. It also needs more memory than the other algorithms to
run, because it has a backup buffer that is half the length of the array
being sorted. At every stage, the algorithm first recurses on each
side—sorting them via a smaller mergesort. Then, it merges the two
already-sorted halves. First it copies the left half into the buffer.
Then it steps forward through each half, taking the smallest element
(which is at the beginning) and moving it to the end of the growing
merged array. Mergesort is an
**O(n log n)** sort.
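The recursion and the buffer-backed merge described above might be sketched like this (function names are mine; the buffer here is allocated per merge rather than once up front, to keep the sketch short):

```python
def merge_sort(a):
    """Return a sorted copy of a, merging with a half-length buffer."""
    a = list(a)
    _msort(a, 0, len(a))
    return a

def _msort(a, lo, hi):
    if hi - lo < 2:
        return
    mid = (lo + hi) // 2
    _msort(a, lo, mid)   # recurse on the left half
    _msort(a, mid, hi)   # recurse on the right half
    buf = a[lo:mid]      # copy the left half into the buffer
    i, j, k = 0, mid, lo
    # repeatedly take the smaller front element of the two halves
    while i < len(buf) and j < hi:
        if buf[i] <= a[j]:
            a[k] = buf[i]
            i += 1
        else:
            a[k] = a[j]
            j += 1
        k += 1
    # leftover buffered elements; the right half's leftovers are
    # already sitting in their final positions
    a[k:j] = buf[i:]
```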

**Quicksort** is another divide-and-conquer sort that splits
its elements based on the first element, which is the *pivot*.
It quickly swaps elements so that the pivot is in the right place,
those elements before it are all less than it (though they are not yet
in order), and the ones after it are all larger. It then repeats the
process on each side, until the entire thing is sorted. It is
*usually* a very efficient **O(n log n)**
sort. However, if the choice of pivot is consistently bad (as when the
data is already semisorted), it devolves to being an **O(n²)**
sort.
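A compact Python sketch using the first element as the pivot, as described above (this is the simple partitioning scheme, not an optimized production quicksort):

```python
def quicksort(a, lo=0, hi=None):
    """Return a sorted copy, partitioning around the first element."""
    if hi is None:
        a = list(a)
        hi = len(a) - 1
    if lo < hi:
        pivot = a[lo]  # the first element is the pivot
        i = lo
        # swap smaller elements toward the front
        for j in range(lo + 1, hi + 1):
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[lo], a[i] = a[i], a[lo]  # the pivot lands in its final spot
        quicksort(a, lo, i - 1)    # repeat on the smaller side...
        quicksort(a, i + 1, hi)    # ...and on the larger side
    return a
```

Note that on already-sorted input this pivot choice makes every partition maximally lopsided, which is exactly the O(n²) worst case mentioned above.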

**Bogosort** is little more than a bad joke. It just keeps on shuffling the array, until it’s sorted. This makes it an **O((n+1)!)** sort. To sort an array of size 10, this program would take (on average) about two weeks. To sort an array of size 50, it would average 1.64×10⁵⁷ years! Clearly, some sorts are better than others.
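For completeness, the joke fits in a few lines of Python (do not run this on anything larger than a handful of elements):

```python
import random

def bogosort(a):
    """Return a sorted copy by shuffling until the order happens to be right."""
    a = list(a)
    # keep shuffling as long as any adjacent pair is out of order
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a
```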

Let's take a look at them all together.