Let's say I have 3 different normal populations: population A has a mean of 10 and a standard deviation of 1, population B has a mean of 30 and a standard deviation of 3, and population C has a mean of 60 and a standard deviation of 6. I choose a data point of 11 from A, 36 from B, and 66 from C, and I want to see how these compare. The points from A and C are both 1 standard deviation above their means, so each is higher than about 84% of its population, while the point from B is 2 standard deviations above its mean, so it's higher than about 98% of its population.
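
To make the setup concrete, here's a quick sketch in Python using scipy (the dictionary names are just labels I made up for this post):

```python
from scipy.stats import norm

# Hypothetical labels for the three populations: (mean, standard deviation)
populations = {"A": (10, 1), "B": (30, 3), "C": (60, 6)}
points = {"A": 11, "B": 36, "C": 66}

for name, x in points.items():
    mean, sd = populations[name]
    z = (x - mean) / sd
    pct = norm.cdf(z)  # fraction of the population the point is higher than
    print(f"{name}: x={x}, z={z:.0f}, higher than {pct:.1%} of the population")
```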


It's easy enough to show what each value would look like if it were mapped onto one of the other populations: 36, being 2 standard deviations above its mean, would be equivalent to 12 in population A and 72 in population C. Comparing all the data points this way would take 9 different values to give the full picture (each actual data point plus what it would look like in the other 2 populations). It would take 16 values for 4 populations, 25 for 5, and so on: N^2 values to fully compare N populations.
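
Here's a rough sketch of that N^2 comparison (again, the variable names are just placeholders):

```python
populations = {"A": (10, 1), "B": (30, 3), "C": (60, 6)}
points = {"A": 11, "B": 36, "C": 66}

# For each point, find the equivalent value (same z-score) in every
# population, its own included -- N^2 values for N populations.
for src, x in points.items():
    mean_src, sd_src = populations[src]
    z = (x - mean_src) / sd_src
    equivalents = {dst: m + z * s for dst, (m, s) in populations.items()}
    print(f"{x} from {src}: {equivalents}")
# 36 from B comes out as 12 in A and 72 in C, matching the numbers above.
```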


Here's what I was thinking. 66 is 500% higher than 11. Since a point one standard deviation above the mean is higher than about 84% of its population, versus the 50% the mean itself is higher than, one standard deviation is about 68% "better" than the mean on that scale (0.84/0.50 ≈ 1.68). Why not scale each data point by how much better it is than its mean in this sense? So 66*1.68 = 110.88 and 11*1.68 = 18.48. These values aren't indicative of their populations, but they let the different populations be compared much more easily, since 110.88/18.48 = 66/11 = 6. Two standard deviations would be about 96% better than the mean, which is 1.96/1.68 = 1.167, about 16.7% better than 1 standard deviation. 36 is 227.3% higher than 11, and it's 16.7% better relative to their standard deviations, so you'd expect 327.3%*116.7% = 381.8%, i.e. 281.8% higher in the comparative value, and indeed (36*1.96)/(11*1.68) = 3.818.
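
And here's a sketch of the scaling idea itself; comparative_value is just a name I made up for multiplying each point by its percentile relative to the mean's 50%:

```python
from scipy.stats import norm

populations = {"A": (10, 1), "B": (30, 3), "C": (60, 6)}
points = {"A": 11, "B": 36, "C": 66}

def comparative_value(x, mean, sd):
    """Scale x by how much of its population it beats, relative to the
    50% that the mean itself beats: multiplier = cdf(z) / 0.5."""
    z = (x - mean) / sd
    return x * (norm.cdf(z) / 0.5)

scaled = {name: comparative_value(x, *populations[name]) for name, x in points.items()}
print(scaled)                      # roughly {'A': 18.5, 'B': 70.4, 'C': 111.1}
print(scaled["C"] / scaled["A"])   # exactly 6.0, the same ratio as 66/11
print(scaled["B"] / scaled["A"])   # about 3.8, close to the 381.8% figure above
```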


Would this be a good way to compare points from different populations, and if so, what's the name for this kind of analysis?