# Thread: Meaning of Inverse CDF

1. ## Meaning of Inverse CDF

Hi,

There is one theorem in my Applied Probability course that I am having trouble understanding. It has to do with how to derive various types of random variables from the transformation X = g(U).

It says: Let U be a uniform (0,1) random variable and let F(x) denote a cumulative distribution function with an inverse F^-1(u) defined for 0 < u < 1. The random variable X = F^-1(U) has CDF F_X(x) = F(x).

I think my lack of understanding of what F^-1(u) is is hindering my understanding of this theorem. Would you be able to explain?

From what I understand and the examples we looked at, this theorem allows us to derive random variables of different types using the uniform random variable.

2. ## Meaning of Inverse CDF

You're correct in that this allows you to simulate random variables of different distributions. It may help to explain with an example.

Suppose I'm simulating a two-parameter Pareto distribution, with a cumulative distribution function of F(X) = 1 - [ B / (B + X) ]^A, where A = 2 and B = 5.

Here's how the cumulative distribution function of this Pareto distribution would look for a few different values of X:

| X | F(X) |
|----|-------|
| 0 | 0.0% |
| 1 | 30.6% |
| 2 | 49.0% |
| 3 | 60.9% |
| 4 | 69.1% |
| 5 | 75.0% |
| 6 | 79.3% |
| 7 | 82.6% |
| 8 | 85.2% |
| 9 | 87.2% |
| 10 | 88.9% |
| 11 | 90.2% |
| 12 | 91.3% |
| 13 | 92.3% |
| 14 | 93.1% |
| 15 | 93.8% |

All the above does is plug each X from 0 to 15 into the CDF and solve the equation with A = 2 and B = 5.

Now we want to simulate a pareto value using the uniform distribution. So we ask for a random, uniform number between 0 and 1 and let's suppose that we get 0.872. Based on the above, the random number from the pareto distribution that corresponds to the uniform random number 0.872 is 9, since the value of 9 corresponds to the 87.2% point on the pareto cumulative distribution function.

So in this case, U is the random number generated by the uniform random number generator (0.872), and X = F^-1(U) is the Pareto value that has a cumulative distribution function value corresponding to it (9).
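The lookup above can also be done algebraically: setting u = 1 - [B / (B + x)]^A and solving for x gives x = B * ((1 - u)^(-1/A) - 1). Here's a small sketch of that (the function name is just for illustration):

```python
import random

def pareto_inverse_cdf(u, a=2.0, b=5.0):
    """Inverse of F(x) = 1 - (b / (b + x))**a, solved for x.

    Setting u = F(x) and rearranging:
        (b / (b + x))**a = 1 - u
        x = b * ((1 - u)**(-1 / a) - 1)
    """
    return b * ((1.0 - u) ** (-1.0 / a) - 1.0)

# The uniform draw 0.872 from the example maps to roughly 9,
# matching the 87.2% row of the table.
print(pareto_inverse_cdf(0.872))  # ~8.98

# To simulate: draw U ~ Uniform(0, 1), then transform it.
u = random.random()
x = pareto_inverse_cdf(u)
```

Using the exact inverse avoids rounding X to whole numbers the way the table does.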

Does that help?

- Steve J

3. Thanks Steve! That does make sense and does help a lot.

However, I don't really see how this is useful or why you would want to simulate random variables like this... Is there a reason why this is used?

Thanks again

4. ## Meaning of Inverse CDF

If you start with the premise that people will want to simulate random variables from a lot of different types of distributions, the question is how to come up with random variables that fit those distributions - especially when there are so many distributions that can be simulated, each with a virtually unlimited supply of parameter values that would lead to different sets of simulation requirements. So the cleanest way to set up a simulation for a specific distribution/parameter set is to "map" what you want to simulate to a common base. And the common base most often used (as far as I've seen) is the uniform distribution between 0 and 1.
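To illustrate the "common base" idea, here's a sketch where one stream of uniform draws feeds two different distributions through their inverse CDFs - the thread's Pareto, plus an exponential as a second example (the exponential inverse, -ln(1 - u)/rate, is a standard textbook result, not something from this thread):

```python
import math
import random

def exponential_inverse_cdf(u, rate=1.0):
    # Inverse of F(x) = 1 - exp(-rate * x)
    return -math.log(1.0 - u) / rate

def pareto_inverse_cdf(u, a=2.0, b=5.0):
    # Inverse of F(x) = 1 - (b / (b + x))**a, the CDF from post #2
    return b * ((1.0 - u) ** (-1.0 / a) - 1.0)

# One uniform generator serves every distribution you can invert.
for _ in range(3):
    u = random.random()
    print(u, exponential_inverse_cdf(u), pareto_inverse_cdf(u))
```

Swapping in a different distribution only means swapping in a different inverse CDF; the uniform generator never changes.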

- Steve

Thanks! I think I'm getting the hang of this. Also, can this concept be extended to joint distributions? For example, given a joint CDF F_{X,Y}, can we generate samples of (x, y)? I was thinking of getting the marginals; however, I don't see where to go from there, as to how to ensure there is a relationship between the two... I hope you understand what I mean.

Thanks again

6. Sorry, I'm not sure I follow. I apologize, but hopefully someone else might be able to help out with this one.

- Steve J

I think I made it a bit too complicated... It's just asking: given a joint density function f(x, y), how can you generate samples for the pair (x, y) using the same method?

My thought is to find the conditional densities (f_X|Y=y and f_Y|X=x). That shouldn't be hard, as each is just the joint density divided by the appropriate marginal density. Then I guess it would be essentially the same type of format: find the inverse for both and generate the samples?
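That's close to the standard chain-rule approach: sample X from its *marginal* via the inverse CDF, then sample Y from the *conditional* F_{Y|X=x} given the x you just drew, so only one conditional is needed. Here's a sketch using a toy density I made up for illustration (f(x, y) = x + y on the unit square, not something from this thread), where both inverses work out in closed form:

```python
import math
import random

# Toy joint density (hypothetical example): f(x, y) = x + y on [0,1] x [0,1].
# Marginal:    f_X(x) = x + 1/2,         so F_X(x) = (x**2 + x) / 2
# Conditional: F_{Y|X=x}(y) = (x*y + y**2 / 2) / (x + 1/2)

def inv_marginal_x(u):
    # Solve u = (x**2 + x) / 2 for x in [0, 1] (quadratic formula)
    return (-1.0 + math.sqrt(1.0 + 8.0 * u)) / 2.0

def inv_conditional_y(v, x):
    # Solve v = (x*y + y**2 / 2) / (x + 1/2) for y in [0, 1]
    return -x + math.sqrt(x * x + 2.0 * v * (x + 0.5))

def sample_pair():
    u, v = random.random(), random.random()
    x = inv_marginal_x(u)        # step 1: X from its marginal
    y = inv_conditional_y(v, x)  # step 2: Y from F_{Y|X=x}
    return x, y
```

Because Y is drawn from the conditional given the realized x, the dependence between the two coordinates is preserved automatically.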