# Thread: 0!=1

1. Originally Posted by ebaines
By that argument if you let $\displaystyle n = 0$ you get $\displaystyle 0! = 0\cdot(-1)! = 0$. Doesn't seem to work....
maybe
$\displaystyle (-1)! = \frac 10$ ...i'm kidding! i'm kidding!

Good observation, kid!

2. Originally Posted by Krizalid
Well, having let $\displaystyle n=1$ in $\displaystyle n! = n(n-1)!$, the conclusion follows.
By that argument if you let $\displaystyle n = 0$ you get $\displaystyle 0! = 0\cdot(-1)! = 0$. Doesn't seem to work....
Just define the given formula for all $\displaystyle n\geq 1$ and it works. The problem is that the formula only shows that $\displaystyle 0!=1!$; it does not by itself prove that $\displaystyle 1!=1$. So using the fact that $\displaystyle 1!=1$ (since $\displaystyle n!=n(n-1)\cdots 1$), it works.

3. Originally Posted by arbolis
Just define the given formula for all $\displaystyle n\geq 1$ and it works. The problem is that the formula only shows that $\displaystyle 0!=1!$; it does not by itself prove that $\displaystyle 1!=1$. So using the fact that $\displaystyle 1!=1$ (since $\displaystyle n!=n(n-1)\cdots 1$), it works.
yes, i think that's best. things start going weird for negative factorials. i don't even think (-1)! exists, so plugging it in is probably invalid. the formula always works for n >= 1, so define it that way...

The Gamma function is not defined on the non-positive integers.
And the Gamma function plays the role of the factorial function, thus there is no reasonable way to define the factorial for negative integers.

5. Originally Posted by ThePerfectHacker
The Gamma function is not defined on the non-positive integers.
And the Gamma function plays the role of the factorial function, thus there is no reasonable way to define the factorial for negative integers.
yeah, i thought so

6. Originally Posted by ThePerfectHacker
The Gamma function is not defined on the non-positive integers.
And the Gamma function plays the role of the factorial function, thus there is no reasonable way to define the factorial for negative integers.
Is this because if you put negative values in for the gamma function you run into an infinite discontinuity at 0?

7. Originally Posted by Mathstud28
Is this because if you put negative values in for the gamma function you run into an infinite discontinuity at 0?
Just look at how the Gamma function is defined: $\displaystyle \Gamma (x) = \int_0^{\infty} e^{-t} t^{x-1} dt$. It can be proven that this integral converges when $\displaystyle x>0$. It cannot be extended to $\displaystyle x=0$, because the function becomes unbounded as $\displaystyle x\to 0^+$. What about negative values? We cannot use the integral anymore, but we have a way around that: the property $\displaystyle \Gamma (x+1) = x\Gamma (x)$. Using it we can extend the function to negative values as well; for example, $\displaystyle \Gamma (1/2) = (-1/2)\Gamma (-1/2)$ defines $\displaystyle \Gamma (-1/2) = -2\Gamma (1/2)$. The reason we do it this way is to preserve that property. The only problem is at $\displaystyle x=0$: the recurrence would require dividing by zero, so we cannot define the function at $\displaystyle x=0,-1,-2,-3,\dots$ either.

(In complex analysis it is possible to extend the Gamma function everywhere on the complex plane except the non-positive integers).
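The recurrence-based extension described above can be checked numerically; a quick sketch using Python's standard-library `math.gamma` (which implements the Gamma function and raises an error at the non-positive integers):

```python
import math

# Check the recurrence Gamma(x+1) = x * Gamma(x) at x = -1/2:
# Gamma(1/2) = (-1/2) * Gamma(-1/2), so Gamma(-1/2) = -2 * Gamma(1/2) = -2*sqrt(pi)
print(math.gamma(0.5))   # sqrt(pi) ~ 1.7724538509
print(math.gamma(-0.5))  # -2*sqrt(pi) ~ -3.5449077018

# At x = 0, -1, -2, ... the recurrence would force division by zero,
# so no finite value exists; math.gamma raises ValueError there.
for x in (0.0, -1.0, -2.0):
    try:
        math.gamma(x)
    except ValueError:
        print(f"Gamma({x}) is undefined")
```

Note that $\Gamma(-1/2)$ comes out negative: the recurrence, not the (divergent) integral, is what defines it there.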

Just out of curiosity... did the gamma function originate from considerations other than extensions of the factorial? I ask because I always wondered why $\displaystyle \Gamma(n+1)=n!$ rather than $\displaystyle \Gamma(n)=n!$ (the Pi function). It always trips me up because the latter seems more natural.

9. Originally Posted by sleepingcat
Just out of curiosity... did the gamma function originate from considerations other than extensions of the factorial? I ask because I always wondered why $\displaystyle \Gamma(n+1)=n!$ rather than $\displaystyle \Gamma(n)=n!$ (the Pi function). It always trips me up because the latter seems more natural.
That is a good question. I think it comes from a historical mistake (this needs a reference). I forget who made this mistake, either Gauss or Euler; I think it was Euler. He used the Gamma function just as defined above, and therefore we have $\displaystyle \Gamma (n+1) = n!$ rather than $\displaystyle \Gamma (n) = n!$. There are many times in math where a mistake is kept for historical reasons.

10. ## Very basic

So why is 0! = 1?
This is an assumption. But now the question arises: why was this assumption made?
Its answer is very basic.
1) We know that the number of ways of arranging r different things out of n different things is nPr = n!/(n-r)!
2) From the fundamental principle of counting, we know that the number of ways of arranging n different things is n!
But the number of ways of arranging n different things must also equal nPn (replacing r by n, since all the things are included).
Therefore nPn = n!, or
n!/(n-n)! = n!
n!/0! = n!
1/0! = 1
which is only possible if we assume that 0! = 1. Hence the assumption.
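The counting argument above can be checked directly; a small sketch using Python's `math.factorial` (the helper name `nPr` is illustrative):

```python
import math

def nPr(n, r):
    # number of ways of arranging r different things out of n different things
    return math.factorial(n) // math.factorial(n - r)

n = 5
# Arranging all n things: nPn must equal n!, which works only because 0! = 1.
print(nPr(n, n))          # 120
print(math.factorial(n))  # 120
print(math.factorial(0))  # 1 -- the convention in question
```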

11. Originally Posted by nikhil
So why is 0! = 1?
This is an assumption. But now the question arises: why was this assumption made?
It is not an assumption; it is a definition.

RonL

I should have said why it is defined instead of saying why it's assumed! Thanks.

13. Originally Posted by nikhil
I should have said why it is defined instead of saying why it's assumed! Thanks.
For consistency in a number of important situations where otherwise the zeroth term would have to be an exception and handled differently.

RonL
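One concrete instance of that consistency: with 0! = 1, the k = 0 term of the exponential series needs no special case. A minimal sketch in Python (the function name `exp_series` is illustrative):

```python
import math

def exp_series(x, terms=20):
    # Taylor series e^x = sum over k >= 0 of x**k / k!
    # The k = 0 term is x**0 / 0! = 1/1 = 1 -- no exception needed.
    return sum(x**k / math.factorial(k) for k in range(terms))

print(exp_series(1.0))  # very close to math.e
```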

14. Originally Posted by nikhil
I should have said why it is defined instead of saying why it's assumed! Thanks.
This has probably been said already but I can't be bothered wading through the whole thread .....

How many ways can you choose zero objects from n objects ....? Therefore, what do you require the value of 0! to be ....?
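The answer to that rhetorical question is 1 (there is exactly one way to choose nothing), and the binomial formula $\binom n0 = \frac{n!}{0!\,n!}$ delivers that answer only if 0! = 1. A quick check (the helper name `nCr` is illustrative; `math.comb` is the library version):

```python
import math

def nCr(n, r):
    # n! / (r! * (n-r)!) -- the r = 0 case divides by 0!
    return math.factorial(n) // (math.factorial(r) * math.factorial(n - r))

n = 7
print(nCr(n, 0))        # 1 -- relies on 0! = 1 in the denominator
print(math.comb(n, 0))  # 1 -- the library agrees
```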
