I know what they are, but I don't understand how to derive them from the probability mass function. Any help/hints would be much appreciated!
Start by reading this thread: http://www.mathhelpforum.com/math-he...ion-62210.html
All right, thanks. I read through that and it shed some light; at least now I understand where the normalizing constant comes from! But I tried solving again using similar methods and I'm still stuck. The problem for me is that the solution in that thread works in the gamma->beta direction, but I have to solve (or at least I think?) in the opposite direction, and I can't seem to wrap my head around it. I'm in the mindset of E[X] = integral of xf(x), and I don't know where I'm supposed to put the x or how to proceed from there. If you could give me a push in the right direction (i.e. where do I start?), I'd be very thankful.
I will do $\displaystyle \mu = E(X)$:
$\displaystyle E(X) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \, \Gamma(\beta)} \int_0^1 x \cdot x^{\alpha - 1} (1 - x)^{\beta - 1} \, dx$
$\displaystyle = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \, \Gamma(\beta)} \int_0^1 x^{\alpha} (1 - x)^{\beta - 1} \, dx$
$\displaystyle = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \, \Gamma(\beta)} \cdot \frac{\Gamma(\alpha + 1) \, \Gamma(\beta)}{\Gamma(\alpha + \beta + 1)}$ (using the proven result at the link I gave you)
$\displaystyle = \frac{\Gamma(\alpha + \beta)}{ \Gamma(\alpha + \beta + 1)} \cdot \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha)}$
$\displaystyle = \frac{\alpha}{\alpha + \beta}$ (using the well known property of the Gamma function, $\displaystyle \Gamma(z + 1) = z \, \Gamma(z)$, which gives $\displaystyle \frac{\Gamma(\alpha + 1)}{\Gamma(\alpha)} = \alpha$ and $\displaystyle \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha + \beta + 1)} = \frac{1}{\alpha + \beta}$).
The calculation of $\displaystyle E(X^2)$ and hence $\displaystyle Var(X) = E(X^2) - [E(X)]^2$ is left for you.
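If you want to convince yourself numerically, here is a quick Python sketch that checks both the mean formula above and the variance formula $\displaystyle Var(X) = \frac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}$ by direct integration of the pdf. The parameter values a = 2, b = 3 are just an arbitrary example:

```python
from math import gamma

# Sanity check of E(X) = a/(a+b) and Var(X) = a*b / ((a+b)**2 * (a+b+1))
# for Beta(a, b) by numerical integration of the pdf on [0, 1].
a, b = 2.0, 3.0
const = gamma(a + b) / (gamma(a) * gamma(b))  # normalizing constant 1/B(a, b)

def pdf(x):
    return const * x**(a - 1) * (1 - x)**(b - 1)

def integrate(f, n=200_000):
    # Simple midpoint rule on [0, 1].
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

mean = integrate(lambda x: x * pdf(x))
second_moment = integrate(lambda x: x**2 * pdf(x))
var = second_moment - mean**2

print(mean, a / (a + b))                        # both ≈ 0.4
print(var, a * b / ((a + b)**2 * (a + b + 1)))  # both ≈ 0.04
```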
Dear Mr Fantastic.
Would you please be so kind as to give me a hint about how you solved the last step in the proof?
The one with the comment: "using the well known property of the Gamma function"
Thanks a lot,
Ziguri
BTW: I already read through all the links you mentioned in this post.
I don't think that is what I am searching for. Let me try to explain the problem more clearly...
The standard beta distribution is defined on the interval from 0 to 1 and has the pdf_1 =
f(x;a,b) = [1 / B(a,b)] * x^(a - 1) * (1 - x)^(b - 1)
-where 'a' and 'b' are the alpha and beta parameters respectively
but the generalized beta distribution has the pdf_2 =
f(x;a,b,c,d) = [1 / B(a,b)] * [1 / (d - c)^(a + b - 1)] * (x - c)^(a - 1) * (d - x)^(b - 1)
-where 'c' and 'd' are the min and max scale parameters respectively. I believe they are supposed to scale the original distribution (having 0 to 1 range) to the specified c to d range distribution.
The question I have is in two parts...
1) How is the pdf_2 derived?
2) How to derive the expected value using pdf_2?
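Regarding both questions: pdf_2 looks like what you get by applying the linear change of variables y = c + (d - c)x to pdf_1 (this is my reading of the formulas above, not something stated in the thread), which would predict E[Y] = c + (d - c) * a / (a + b). A quick numerical sketch checking that pdf_2 integrates to 1 and has that mean (the values a = 2, b = 3, c = 1, d = 5 are arbitrary examples):

```python
from math import gamma

# Check that pdf_2 (the beta pdf scaled to [c, d]) integrates to 1 and
# has mean c + (d - c) * a / (a + b), as the change of variables
# y = c + (d - c) * x predicts.
a, b, c, d = 2.0, 3.0, 1.0, 5.0
const = gamma(a + b) / (gamma(a) * gamma(b))  # 1 / B(a, b)

def pdf2(y):
    return const * (y - c)**(a - 1) * (d - y)**(b - 1) / (d - c)**(a + b - 1)

def integrate(f, lo, hi, n=200_000):
    # Simple midpoint rule on [lo, hi].
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

total = integrate(pdf2, c, d)
mean = integrate(lambda y: y * pdf2(y), c, d)

print(total)                              # ≈ 1.0
print(mean, c + (d - c) * a / (a + b))    # both ≈ 2.6
```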
The link you gave is broken, I cannot open it, so I could not understand how you got from this integral of the beta function
$\displaystyle = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \, \Gamma(\beta)} \int_0^1 x^{\alpha} (1 - x)^{\beta - 1} \, dx$
to this Gamma function result
$\displaystyle = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \, \Gamma(\beta)} \cdot \frac{\Gamma(\alpha + 1) \, \Gamma(\beta)}{\Gamma(\alpha + \beta + 1)}$ (using the proven result at the link I gave you)
Use repeated integration by parts (assuming $\displaystyle \beta$ is a positive integer, do it $\displaystyle \beta - 1$ times):
$\displaystyle \displaystyle u = (1 - x)^{\beta - 1} \Rightarrow du = -(\beta - 1) (1 - x)^{\beta - 2} \, dx$
$\displaystyle \displaystyle dv = x^{\alpha} \, dx \Rightarrow v = \frac{1}{\alpha + 1} x^{\alpha + 1}$
etc.
until you end up integrating $\displaystyle \displaystyle x^{\alpha + \beta - 1}$ (with appropriate factors out the front). It is a standard technique found in many subject-appropriate textbooks. And I'm sure a Google search will also turn up a proof.
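You can also check the identity that the integration by parts leads to numerically. This Python sketch compares the integral against the Gamma function expression used in the derivation above (a = 2.5, b = 4 are arbitrary example values):

```python
from math import gamma

# Numerical check of the identity used in the E(X) derivation:
#   integral_0^1 x**a * (1 - x)**(b - 1) dx = gamma(a+1)*gamma(b)/gamma(a+b+1)
a, b = 2.5, 4.0

def integrate(f, n=200_000):
    # Simple midpoint rule on [0, 1].
    h = 1.0 / n
    return sum(f((i + 0.5) * h) for i in range(n)) * h

lhs = integrate(lambda x: x**a * (1 - x)**(b - 1))
rhs = gamma(a + 1) * gamma(b) / gamma(a + b + 1)
print(lhs, rhs)  # should agree to several decimal places
```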