I 'get' the standard deviation, and understand why it is useful, and *almost* why it is calculated like it is:

It is designed to show the average departure from the mean, so why is it not calculated more simply?

My formula would be 'mean of the absolute difference between each x and mean(X)' (the absolute value is needed, otherwise the positive and negative deviations cancel out to zero). That would logically be a good way to describe the spread of data, no?

I see the standard deviation comes out with a similar answer to my method, so what makes squaring the deviations (and then taking the square root) more useful than just averaging the absolute deviations? I just want to fully understand why it is done the way it is.
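To make the comparison concrete, here is a small Python sketch (the data set is made up for illustration) computing both my 'mean absolute deviation' idea and the usual population standard deviation:

```python
import statistics

# Made-up sample data for illustration
data = [2, 4, 4, 4, 5, 5, 7, 9]
m = statistics.mean(data)  # mean(X) = 5.0

# My method: mean of the absolute differences from the mean
mad = statistics.mean(abs(x - m) for x in data)

# The standard deviation (population version): square root of the
# mean of the *squared* differences from the mean
sd = statistics.pstdev(data)

print(mad)  # 1.5
print(sd)   # 2.0
```

So on this data the two measures are close (1.5 vs 2.0) but not equal, which is exactly what prompts my question.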

Thanks in advance.