1. Originally Posted by Drexel28

Here is quite a beautiful one (in my opinion).

Problem: Let $\displaystyle \Omega\subseteq\mathbb{R}^n$ be the set of all $\displaystyle (x_1,\cdots,x_n)$ such that $\displaystyle x_j\geqslant 0,\text{ }j\in[n]$ and $\displaystyle \displaystyle \sum_{j=1}^{n}x_j\leqslant 1$. Compute

$\displaystyle \displaystyle \int\cdots\int_{\Omega}x^{\alpha_1-1}\cdots x^{\alpha_n-1}\text{ }dx_1\cdots dx_n$

where $\displaystyle \text{Re }\alpha_k>0,\text{ }k\in[n]$.
the integrand should be $\displaystyle x_1^{\alpha_1-1} \cdots x_n^{\alpha_n - 1}.$

call the integral $\displaystyle I_n$ and substitute $\displaystyle x_n=y_n$ and $\displaystyle x_i = (1-y_n)y_i$ for $\displaystyle i < n$. then the Jacobian would be $\displaystyle (1-y_n)^{n-1}$ and thus

$\displaystyle \displaystyle I_n=I_{n-1}\int_0^1 y_n^{\alpha_n -1}(1-y_n)^{\alpha_1 + \cdots + \alpha_{n-1}} dy_n = I_{n-1} B(\alpha_n, \alpha_1 + \cdots + \alpha_{n-1}+1),$

where $\displaystyle B(-,-)$ is, as usual, the beta function. It's easy now to see that

$\displaystyle \displaystyle I_n = \frac{\Gamma(\alpha_1) \Gamma(\alpha_2) \cdots \Gamma(\alpha_n)}{\Gamma(\alpha_1 + \cdots + \alpha_n +1)}.$
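A quick numerical sanity check of the $\displaystyle n=2$ case (plain Python, standard library only; here I read $\displaystyle \Omega$ as the solid simplex $\displaystyle x_1+\cdots+x_n\leqslant 1$, which is the region the recursion actually integrates over):

```python
from math import gamma

# n = 2 case of the Dirichlet integral over the solid simplex x + y <= 1:
#   I_2 = integral of x^(a-1) y^(b-1)  =  Gamma(a)Gamma(b)/Gamma(a+b+1).
# The inner y-integral is exact: int_0^(1-x) y^(b-1) dy = (1-x)^b / b,
# leaving a one-dimensional integral we do with a midpoint rule.
a, b = 2.0, 3.0
N = 100_000
h = 1.0 / N
numeric = 0.0
for k in range(N):
    x = (k + 0.5) * h
    numeric += x ** (a - 1) * (1 - x) ** b / b * h
closed_form = gamma(a) * gamma(b) / gamma(a + b + 1)  # = 1/60 here
```

The two values agree to many digits, as the formula predicts.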

2. Originally Posted by Drexel28
$\displaystyle \displaystyle \sum_{j=1}^{n}j{n\choose j}$
Consider the more general: $\displaystyle \sum_{k=0}^n {\binom{n}{k}\cdot \binom{k}{j}}$

First note that what we are doing is choosing subsets of size $\displaystyle j$ by the following procedure:

Fix $\displaystyle k\in\{0,..,n\}$

1. Choose a subset of size $\displaystyle k$ .

2. Now choose a subset of size $\displaystyle j$ from the subset chosen in 1 - if possible.

It is clear that we are overcounting, but by how much? Well, any subset of size $\displaystyle j$ is contained in exactly $\displaystyle \binom{n-j}{k-j}$ of the subsets of size $\displaystyle k$ (that's the number of ways of completing the set); that is, $\displaystyle \binom{n}{k}\cdot \binom{k}{j} = \binom{n-j}{k-j}\cdot \binom{n}{j}$

So in fact we have $\displaystyle \sum_{k\geq 0}{\binom{n}{k}\cdot \binom{k}{j}} = \sum_{k\geq 0}{\binom{n}{j}\cdot \binom{n-j}{k-j}} = \binom{n}{j}\cdot 2^{n-j}$

Now plug $\displaystyle j = 1$ and we get $\displaystyle \sum_{k=0}^{n}{\binom{n}{k}\cdot k} = n\cdot 2^{n-1}$
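A brute-force check of both the general identity and the $\displaystyle j=1$ special case, sketched in Python:

```python
from math import comb

# Brute-force check of  sum_k C(n,k)*C(k,j) = C(n,j) * 2^(n-j)
# for small n, plus the j = 1 special case  sum_k k*C(n,k) = n * 2^(n-1).
for n in range(12):
    for j in range(n + 1):
        lhs = sum(comb(n, k) * comb(k, j) for k in range(n + 1))
        assert lhs == comb(n, j) * 2 ** (n - j)

# The special case j = 1 at n = 10 should equal 10 * 2^9 = 5120.
special = sum(k * comb(10, k) for k in range(11))
```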

EDIT: I am going to add a couple of problems for you to enjoy.

1. $\displaystyle e = \prod_{k = 1}^{\infty}{\left(\frac{2^k}{2^k-1}\right)^{\frac{\phi(k)}{k}}}$ where $\displaystyle \phi$ is Euler's totient function.

2. This one is beautiful too. Let $\displaystyle \pi \in S_n$ $\displaystyle (*)$; we define $\displaystyle \text{inv}(\pi)$ to be the number of pairs of integers $\displaystyle (i,j)$ satisfying $\displaystyle 1\leq i < j \leq n$ and $\displaystyle \pi(j) < \pi ( i)$; such a pair is called an inversion.

Show that: $\displaystyle \sum_{\pi \in S_n}{q^{\text{inv}(\pi)}}=(1+q)\cdot ... \cdot (1+q+...+q^{n-1})$

What is the average number of inversions ?

$\displaystyle (*)$ See here

3. Originally Posted by PaulRS

2. This one is beautiful too. Let $\displaystyle \pi \in S_n$ $\displaystyle (*)$; we define $\displaystyle \text{inv}(\pi)$ to be the number of pairs of integers $\displaystyle (i,j)$ satisfying $\displaystyle 1\leq i < j \leq n$ and $\displaystyle \pi(j) < \pi ( i)$; such a pair is called an inversion.

Show that: $\displaystyle \sum_{\pi \in S_n}{q^{\text{inv}(\pi)}}=(1+q)\cdot ... \cdot (1+q+...+q^{n-1})$
See if this suffices:

Spoiler:

This is fairly easy if you know the (common?) combinatorial fact that the number of permutations of $\displaystyle [n]$ with exactly $\displaystyle j$ inversions is the $\displaystyle (j,n)^{\text{th}}$ Mahonian number, which is the coefficient of $\displaystyle x^j$ in $\displaystyle \displaystyle \prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r$; denote this by $\displaystyle \displaystyle \text{coeff}_j\left(\prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r\right)$. Thus, note that for any polynomial $\displaystyle p$ of degree $\displaystyle d$ we have $\displaystyle \displaystyle p(x)=\sum_{k=0}^{d}x^k \text{coeff}_k(p)$ (clearly). Note though that the degree of $\displaystyle \displaystyle (1+x)\cdots(1+x+\cdots+x^{n-1})$ is $\displaystyle \displaystyle (n-1)+(n-2)+\cdots+1=\frac{n(n-1)}{2}={n\choose 2}$. That said, it's also clear that a permutation of $\displaystyle [n]$ may have exactly $\displaystyle j$ inversions (and for each such $\displaystyle j$ at least one does) for each $\displaystyle \displaystyle j=0,\cdots,{n\choose 2}$. Thus,

\displaystyle \displaystyle \begin{aligned}\sum_{\pi\in S_n}q^{\text{inv}(\pi)} &= \sum_{j=0}^{{n\choose 2}}q^j\,\text{coeff}_j\left(\prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r\right)\\ &= \prod_{k=1}^{n}\sum_{r=0}^{k-1}q^r\end{aligned}

as required.

Let me know if that identity I stated is too obscure.

4. Originally Posted by PaulRS
1. $\displaystyle e = \prod_{k = 1}^{\infty}{\left(\frac{2^k}{2^k-1}\right)^{\frac{\phi(k)}{k}}}$ where $\displaystyle \phi$ is Euler's totient function.
Let $\displaystyle F(x) = \sum_{k=1}^\infty \phi(k) \frac{x^k}{1-x^k}.$

We have $\displaystyle F(x) = \sum_{k=1}^\infty \sum_{n=1}^\infty \phi(k)x^{nk} = \sum_{m=1}^\infty \sum_{d|m}\phi(d)x^m = \sum_{m=1}^\infty mx^m = x\left(\frac{1}{1-x}\right)^2$.

Now consider

$\displaystyle G(t)=\int_{0}^t\frac{F(x)}{x}dx = \sum_{k=1}^\infty \phi(k) \int_0^t \frac{x^{k-1}}{1-x^k} dx = \sum_{k=1}^\infty \frac{\phi(k)}{k} \int_0^{t^k} \frac{du}{1-u} = -\sum_{k=1}^\infty \frac{\phi(k)}{k} \log(1-t^k).$

We also have $\displaystyle G(t) = \int_{0}^t\left(\frac{1}{1-x}\right)^2 dx = \frac{1}{1-t} - 1$

Hence $\displaystyle G(1/2) = 1 = -\sum_{k=1}^\infty \frac{\phi(k)}{k} \log(1-2^{-k}) = \sum_{k=1}^\infty \frac{\phi(k)}{k} \log(\frac{2^k}{2^k-1}).$

Taking the exponential, we get the desired equality.
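A quick Python sketch to watch the partial products converge (the totient is implemented by hand here, so it's only meant for small $\displaystyle k$; the factors decay like $\displaystyle 2^{-k}$, so a few dozen suffice):

```python
from math import exp, log1p

def phi(n):
    # Euler's totient via trial-division factorization (fine for small n)
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# Log of the partial product; note log(2^k/(2^k-1)) = -log(1 - 2^(-k)),
# which log1p evaluates accurately for tiny arguments.
log_prod = sum(-phi(k) / k * log1p(-0.5 ** k) for k in range(1, 60))
product = exp(log_prod)
```

Consistent with the derivation above, the log of the product is $\displaystyle G(1/2)=1$, so the product itself is $\displaystyle e$.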

5. Originally Posted by Drexel28
See if this suffices:

Spoiler:

This is fairly easy if you know the (common?) combinatorial fact that the number of permutations of $\displaystyle [n]$ with exactly $\displaystyle j$ inversions is the $\displaystyle (j,n)^{\text{th}}$ Mahonian number, which is the coefficient of $\displaystyle x^j$ in $\displaystyle \displaystyle \prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r$; denote this by $\displaystyle \displaystyle \text{coeff}_j\left(\prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r\right)$. Thus, note that for any polynomial $\displaystyle p$ of degree $\displaystyle d$ we have $\displaystyle \displaystyle p(x)=\sum_{k=0}^{d}x^k \text{coeff}_k(p)$ (clearly). Note though that the degree of $\displaystyle \displaystyle (1+x)\cdots(1+x+\cdots+x^{n-1})$ is $\displaystyle \displaystyle (n-1)+(n-2)+\cdots+1=\frac{n(n-1)}{2}={n\choose 2}$. That said, it's also clear that a permutation of $\displaystyle [n]$ may have exactly $\displaystyle j$ inversions (and for each such $\displaystyle j$ at least one does) for each $\displaystyle \displaystyle j=0,\cdots,{n\choose 2}$.

\displaystyle \displaystyle \begin{aligned}\sum_{\pi\in S_n}q^{\text{inv}(\pi)} &= \sum_{j=0}^{{n\choose 2}}q^j\,\text{coeff}_j\left(\prod_{k=1}^{n}\sum_{r=0}^{k-1}x^r\right)\\ &= \prod_{k=1}^{n}\sum_{r=0}^{k-1}q^r\end{aligned}

as required.

Let me know if that identity I stated is too obscure.

I feel as though it's a little cheap to prove this theorem via the other one (the fact that the number of $\displaystyle j$-inversions is $\displaystyle \text{coeff}_j\left((1+x)\cdots(1+x+\cdots+x^{n-1})\right)$) without proving it first. So, I'll modify the proof of that theorem to prove this problem directly.

Spoiler:

Claim: $\displaystyle \displaystyle \sum_{\pi\in S_n}q^{\text{inv}(\pi)}=\prod_{k=1}^{n}\sum_{r=0}^{k-1}q^r$ for $\displaystyle n\geqslant 2$.
Proof: We proceed by induction. For $\displaystyle n=2$ this is clear since $\displaystyle S_2=\left\{\text{id}_{[2]},(1,2)\right\}$ and so

$\displaystyle \displaystyle \sum_{\pi\in S_2}q^{\text{inv}(\pi)}=q^{\text{inv}(\text{id}_{[2]})}+q^{\text{inv}((1,2))}=q^0+q^1=1+q$

as claimed. Assume now that this is true for $\displaystyle n$, and notice that $\displaystyle S_{n+1}$ is partitioned into two blocks, call them $\displaystyle B_1$ and $\displaystyle B_2$: namely, those permutations which fix $\displaystyle n+1$ and those that don't. Identify now each $\displaystyle \pi\in S_{n+1}$ with its associated $\displaystyle (n+1)$-tuple. Then, $\displaystyle S_{n+1}$ is partitioned into the two blocks of the form $\displaystyle (\ast,\cdots,\ast,n+1)$ and $\displaystyle (\ast,\cdots,n+1,\cdots,\ast)$. Note that if $\displaystyle \pi$ is of the second form, say $\displaystyle \pi=(\ast,\cdots,\ast,n+1,\underbrace{\ast,\cdots,\ast}_{\ell})$ with $\displaystyle \ell\geqslant 1$ entries to the right of $\displaystyle n+1$, and if $\displaystyle \sigma$ is the element of $\displaystyle S_n$ which results from removing the slot containing $\displaystyle n+1$, then $\displaystyle \text{inv}(\pi)=\text{inv}(\sigma)+\ell$, since $\displaystyle n+1$ forms an inversion with exactly the $\displaystyle \ell$ entries to its right. Thus, it's clear from this that

$\displaystyle \displaystyle \sum_{\pi\in B_2}q^{\text{inv}(\pi)}=\sum_{\ell=1}^{n}q^\ell\sum_{\pi\in S_n}q^{\text{inv}(\pi)}$

Noticing then that each $\displaystyle \pi\in B_1$ is just a permutation of $\displaystyle [n]$ with the extra condition that $\displaystyle n+1$ maps to $\displaystyle n+1$ (and in particular this adds no inversions), we see that

$\displaystyle \displaystyle \sum_{\pi\in B_1}q^{\text{inv}(\pi)}=\sum_{\pi\in S_n}q^{\text{inv}(\pi)}$

Thus,

\displaystyle \displaystyle \begin{aligned}\sum_{\pi\in S_{n+1}}q^{\text{inv}(\pi)} &= \sum_{\pi\in B_1}q^{\text{inv}(\pi)}+\sum_{\pi\in B_2}q^{\text{inv}(\pi)}\\ &=\sum_{\pi\in S_n}q^{\text{inv}(\pi)}+\sum_{\ell=1}^{n}q^{\ell}\sum_{\pi\in S_n}q^{\text{inv}(\pi)}\\ &= \sum_{\pi\in S_n}q^{\text{inv}(\pi)}\left(1+q+\cdots+q^n\right)\\ &= \left(\prod_{k=1}^{n}\sum_{r=0}^{k-1}q^r\right)\sum_{r=0}^{n}q^r\\ &= \prod_{k=1}^{n+1}\sum_{r=0}^{k-1}q^r\end{aligned}

and the induction is complete.
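The identity is also easy to confirm by brute force for small $\displaystyle n$; here's a throwaway Python sketch comparing the inversion distribution of $\displaystyle S_5$ against the coefficient list of $\displaystyle (1+q)\cdots(1+q+\cdots+q^4)$:

```python
from itertools import permutations

def inv_count(p):
    # number of pairs i < j with p[j] < p[i]
    n = len(p)
    return sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))

def poly_mul(a, b):
    # multiply two polynomials given as coefficient lists
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

n = 5
# Left side: coefficient of q^j counts permutations of [n] with j inversions.
lhs = [0] * (n * (n - 1) // 2 + 1)
for p in permutations(range(n)):
    lhs[inv_count(p)] += 1

# Right side: (1)(1+q)(1+q+q^2)...(1+q+...+q^(n-1)) as a coefficient list.
rhs = [1]
for k in range(1, n + 1):
    rhs = poly_mul(rhs, [1] * k)
```

The two coefficient lists come out identical, and their common sum is of course $\displaystyle 5!=120$.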

6. Originally Posted by Drexel28
I feel as though it's a little cheap to prove this theorem via the other one (the fact that the number of $\displaystyle j$-inversions is $\displaystyle \text{coeff}_j\left((1+x)\cdots(1+x+\cdots+x^{n-1})\right)$) without proving it first. So, I'll modify the proof of that theorem to prove this problem directly.
Hahaha, that's true. When I saw the post above I thought: "But that result you are mentioning is the whole problem itself!" But now it's ok ;P

7. Here are a few sums for you to calculate. However these may be too easy...

$\displaystyle \displaystyle\sum_{n=1}^\infty \frac{\text{H}_n}{2^nn}$

$\displaystyle \displaystyle\sum_{n=1}^\infty \frac{\text{H}_n}{n^2}$

and one that involves much more algebra:

$\displaystyle \displaystyle\sum_{n=0}^\infty \frac{(-1)^n}{3n+1}$

Where $\displaystyle \text{H}_n$ is the $\displaystyle n$'th harmonic number.

8. Originally Posted by Unbeatable0
Here are a few sums for you to calculate. However these may be too easy...

$\displaystyle \displaystyle\sum_{n=1}^\infty \frac{\text{H}_n}{2^nn}$

Where $\displaystyle \text{H}_n$ is the $\displaystyle n$'th harmonic number.
Spoiler:

First consider $\displaystyle \displaystyle \sum_{n=1}^\infty H_n x^n$:

$\displaystyle \displaystyle \sum_{n=1}^\infty H_n x^n = \sum_{n=1}^\infty\sum_{i=1}^n \frac1i x^n = \sum_{i=1}^\infty \frac1i \sum_{n=i}^\infty x^n = \sum_{i=1}^\infty \frac1i \frac{x^i}{1-x} = -\frac{\log(1-x)}{1-x}$
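A quick numeric spot-check of this generating function at a point inside the unit disk (plain Python):

```python
from math import log

# Spot-check  sum_{n>=1} H_n x^n = -log(1-x)/(1-x)  at x = 0.3.
# The terms decay like 0.3^n, so 200 terms are far more than enough.
x = 0.3
H = 0.0
series = 0.0
for n in range(1, 200):
    H += 1.0 / n       # H is now the n-th harmonic number
    series += H * x ** n
closed = -log(1 - x) / (1 - x)
```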

-------

Now define $\displaystyle \displaystyle F(x) = \sum_{n=1}^\infty \frac{H_n}n x^n$. We're after $\displaystyle F(\tfrac12)$.

From the work above, we see $\displaystyle \displaystyle F(x) = -\int_0^x\frac{\log(1-t)}{t(1-t)} dt = -\int_0^x \frac{\log(1-t)}t dt + \frac12 \log^2(1-x)$.

Let $\displaystyle \displaystyle G(x) = -\int_0^x \frac{\log(1-t)}t dt$, the first of the two integrals above.

Thus $\displaystyle \displaystyle F(\tfrac12) = G(\tfrac12) + \frac12\log^2 2 = \frac1{12}\pi^2$.

-------

We can obtain $\displaystyle G(\tfrac12)$ by deriving the identity $\displaystyle \displaystyle G(x) + G(1-x) = \frac16\pi^2 - \log(x)\log(1-x)$ for $\displaystyle 0 < x < 1$, which can be shown directly.

Edit: I see $\displaystyle G(\tfrac12)$ has already been tackled earlier in this thread!
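For what it's worth, the numerics agree; a short Python check of $\displaystyle F(\tfrac12)=\pi^2/12$:

```python
from math import pi

# Check  sum_{n>=1} H_n / (2^n * n) = pi^2 / 12;  terms decay like 2^(-n),
# so 200 terms pin the sum down to full double precision.
H = 0.0
total = 0.0
for n in range(1, 200):
    H += 1.0 / n
    total += H / (2.0 ** n * n)
target = pi ** 2 / 12
```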

9. Here's my rebuttal:

For $\displaystyle \alpha > 1$, evaluate $\displaystyle \displaystyle \lim_{t\to\infty} \frac t{\log t}\sum_{n=0}^\infty \frac1{\alpha^n+t}$.

10. Originally Posted by Unbeatable0
Here are a few sums for you to calculate. However these may be too easy...

$\displaystyle \displaystyle\sum_{n=1}^\infty \frac{\text{H}_n}{2^nn}$
Spoiler:

Begin by recalling that the Cauchy product of two power series $\displaystyle \displaystyle \sum_{n\in\mathbb{N}}a_n z^n$ and $\displaystyle \displaystyle \sum_{n\in\mathbb{N}}b_n z^n$ is $\displaystyle \displaystyle \sum_{n\in\mathbb{N}}\sum_{k=0}^{n}a_k b_{n-k}z^n$. So, in particular

$\displaystyle \displaystyle \left(\sum_{n\in\mathbb{N}}a_n z^n\right)\left(\sum_{n\in\mathbb{N}}z^n\right) = \sum_{n\in\mathbb{N}}\sum_{k=0}^{n}a_k z^n$

wherever both series are defined. Thus, since $\displaystyle \displaystyle -\ln(1-z)=\sum_{n\in\mathbb{N}}\frac{z^n}{n}$, the above implies that

\displaystyle \displaystyle \begin{aligned}\frac{-\ln(1-z)}{1-z} &= \left(\sum_{n\in\mathbb{N}}\frac{z^n}{n}\right)\left(\sum_{n\in\mathbb{N}}z^n\right)\\ &= \sum_{n\in\mathbb{N}}\sum_{k=1}^{n}\frac{1}{k}z^n\\ &= \sum_{n\in\mathbb{N}}H_n z^n\end{aligned}

Therefore,

\displaystyle \displaystyle \begin{aligned}\sum_{n\in\mathbb{N}}H_n \frac{\left(\frac{1}{2}\right)^n}{n} &= \sum_{n\in\mathbb{N}}H_n\int_0^{\frac{1}{2}} z^{n-1}\text{ }dz\\ &= \int_0^{\frac{1}{2}} \sum_{n\in\mathbb{N}}H_n z^{n-1}\text{ }dz\\ &= \int_0^{\frac{1}{2}} \frac{-\ln(1-z)}{z(1-z)}\text{ }dz\end{aligned}

Thus, it suffices to compute this integral. To do this we merely note that

\displaystyle \displaystyle \begin{aligned}\int_0^{\frac{1}{2}}\frac{-\ln(1-z)}{z(1-z)}\text{ }dz &= -\int_0^1 \frac{\ln(1-\frac{\xi}{2})}{\xi(1-\frac{\xi}{2})}\text{ }d\xi\\ &= -\int_0^1\frac{\ln(1-\frac{\xi}{2})}{\xi}\text{ }d\xi-\frac{1}{2}\int_0^1\frac{\ln(1-\frac{\xi}{2})}{1-\frac{\xi}{2}}\text{ }d\xi\end{aligned}

This second integral easily comes out to $\displaystyle \frac{1}{2}\ln^2(2)$. To do the first one merely note that

\displaystyle \displaystyle \begin{aligned}-\int_0^1\frac{\ln(1-\frac{\xi}{2})}{\xi}\text{ }d\xi &= \int_0^1 \sum_{n=1}^{\infty}\frac{\xi^{n-1}}{2^n n}\text{ }d\xi\\ &= \sum_{n=1}^{\infty}\frac{1}{2^n n^2}\end{aligned}

But this is a fairly well-known series (EDIT: You actually did it in the second post!) and it sums to $\displaystyle \frac{\pi^2}{12}-\frac{1}{2}\ln^2(2)$. Thus, adding these together we get

$\displaystyle \displaystyle \sum_{n=1}^{\infty}\frac{H_n}{2^n n}=\frac{\pi^2}{12}$

$\displaystyle \displaystyle\sum_{n=1}^\infty \frac{\text{H}_n}{n^2}$
Spoiler:

We first note that in our previous solution we derived that

$\displaystyle \displaystyle \frac{-\ln(1-z)}{1-z}=\sum_{n\in\mathbb{N}}H_n z^n$

so that

$\displaystyle \displaystyle \frac{1}{2}\ln^2(1-z)=\sum_{n\in\mathbb{N}}\frac{H_n}{n+1}z^{n+1}$

and thus

$\displaystyle \displaystyle \sum_{n\in\mathbb{N}}\frac{H_n}{(n+1)^2}=\frac{1}{ 2}\int_0^1\frac{\ln^2(1-z)}{z}\text{ }dz$

Note though that letting $\displaystyle x=-\ln(1-z)$ transforms our integral into

$\displaystyle \displaystyle \int_0^{\infty}\frac{x^2}{1-e^{-x}}e^{-x}\text{ }dx=\int_0^{\infty}\frac{x^2}{e^x-1}\text{ }dx=2\zeta(3)$

where we've used the well-known identity $\displaystyle \displaystyle \Gamma(s)\zeta(s)=\int_0^{\infty}\frac{x^{s-1}}{e^x-1}\text{ }dx$.

Proof of identity:

Spoiler:

We recall that $\displaystyle \displaystyle \Gamma(s)=\int_0^{\infty}t^{s-1}e^{-t}\text{ }dt$. Thus, it's simple to check that

$\displaystyle \displaystyle \frac{\Gamma(s)}{n^s}=\int_0^{\infty}e^{-n t}t^{s-1}\text{ }dt$

where the result follows by summing both sides over $\displaystyle \mathbb{N}$ and recalling that $\displaystyle \displaystyle \sum_{n\in\mathbb{N}}e^{-nt}=\frac{1}{e^t-1}$
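One can also confirm the $\displaystyle s=3$ instance numerically; a crude midpoint-rule sketch in Python:

```python
from math import exp

# Check the s = 3 case:  int_0^inf x^2/(e^x - 1) dx = Gamma(3)*zeta(3) = 2*zeta(3).
# zeta(3) by direct summation; the integral by a midpoint rule on [0, 50]
# (the integrand decays like x^2 e^(-x), so the tail beyond 50 is negligible).
zeta3 = sum(1.0 / k ** 3 for k in range(1, 200_000))
h = 0.001
integral = 0.0
for k in range(50_000):
    x = (k + 0.5) * h
    integral += x * x / (exp(x) - 1.0) * h
```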

Thus,

$\displaystyle \displaystyle \sum_{n=1}^{\infty}\frac{H_n}{(n+1)^2}=\zeta(3)$

Note lastly then that we have

\displaystyle \displaystyle \begin{aligned}\sum_{n=1}^{\infty}\frac{H_n}{n^2} &= 1+\sum_{n=2}^{\infty}\frac{H_n}{n^2}\\ &= 1+\sum_{n=2}^{\infty}\frac{H_{n-1}+\frac{1}{n}}{n^2}\\ &= \sum_{n=2}^{\infty}\frac{H_{n-1}}{n^2}+\zeta(3)\\ &= \sum_{n=1}^{\infty}\frac{H_n}{(n+1)^2}+\zeta(3)\\ &= 2\zeta(3)\\ &= 2\sum_{n=1}^{\infty}\frac{H_n}{(n+1)^2}\end{aligned}

Remark: This last equality wasn't necessary; it's just an astounding fact! I haven't thought about it too deeply, but I wonder if one could classify the set of $\displaystyle f:\mathbb{N}\to\mathbb{N}$ such that $\displaystyle \displaystyle \sum_{n\in\mathbb{N}}\frac{f(n)}{n^2}=2\sum_{n\in\mathbb{N}}\frac{f(n)}{(n+1)^2}$, or any of the obvious generalizations.
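The closed form checks out numerically; a plain Python sketch (the series converges slowly, so many terms are needed):

```python
# Check  sum_{n>=1} H_n / n^2 = 2*zeta(3).  The tail is ~ log(N)/N,
# so we need quite a few terms for even modest accuracy.
zeta3 = 1.2020569031595943  # zeta(3), Apery's constant
H = 0.0
s = 0.0
for n in range(1, 500_001):
    H += 1.0 / n
    s += H / (n * n)
```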

and one that involves much more algebra:

$\displaystyle \displaystyle\sum_{n=0}^\infty \frac{(-1)^n}{3n+1}$
Spoiler:

This is just simple algebra since

\displaystyle \displaystyle \begin{aligned}\sum_{n=0}^{\infty}\frac{(-1)^n}{3n+1} &=\sum_{n=0}^{\infty}\int_0^1 (-1)^n x^{3n}\text{ }dx\\ &=\int_0^1 \sum_{n=0}^{\infty}(-1)^n x^{3n}\text{ }dx\\ &=\int_0^1\frac{dx}{1+x^3}\\ &= \frac{1}{18}\left(2\sqrt{3}\pi+\ln(64)\right)\end{aligned}

Where the last part is obtained by factoring $\displaystyle x^3+1=(x+1)(x^2-x+1)$, performing partial fractions, and then integrating the good old-fashioned way.
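A numeric confirmation in Python (pairing consecutive terms so the alternating tail shrinks quadratically):

```python
from math import pi, log, sqrt

# Pair terms n = 2m and n = 2m+1:
#   sum_{n>=0} (-1)^n/(3n+1) = sum_{m>=0} [ 1/(6m+1) - 1/(6m+4) ],
# and each pair is O(1/m^2), so the truncated sum converges quickly.
s = sum(1.0 / (6 * m + 1) - 1.0 / (6 * m + 4) for m in range(200_000))
closed = (2 * sqrt(3) * pi + log(64)) / 18
```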

11. Originally Posted by Drexel28
\displaystyle \displaystyle \begin{aligned}\sum_{n=0}^{\infty}\frac{(-1)^n}{3n+1} &=\sum_{n=0}^{\infty}\int_0^1 (-1)^n x^{3n}\text{ }dx\\ &=\int_0^1 \sum_{n=0}^{\infty}(-1)^n x^{3n}\text{ }dx\\ &=\int_0^1\frac{dx}{1+x^3}\\ &= \frac{1}{18}\left(2\sqrt{3}\pi+\ln(64)\right)\end{aligned}
Nice work! From this, we can see in general for $\displaystyle a>0$, $\displaystyle \displaystyle \sum_{n=0}^\infty \frac{(-1)^n}{an+1} = \int_0^1 \frac{dx}{x^a+1}$.

12. Originally Posted by chiph588@
Nice work! From this, we can see in general for $\displaystyle a>0$, $\displaystyle \displaystyle \sum_{n=0}^\infty \frac{(-1)^n}{an+1} = \int_0^1 \frac{dx}{x^a+1}$.
Unfortunately those integrals are damn near impossible to compute even when $\displaystyle a>5$ and $\displaystyle a\in\mathbb{N}$! Since clearly the key is to see the cyclotomic factorization and then use partial fractions, but the non-linear terms become intractable, as you can clearly guess.

13. Originally Posted by chiph588@
Here's my rebuttal:

For $\displaystyle \alpha > 1$, evaluate $\displaystyle \displaystyle \lim_{t\to\infty} \frac t{\log t}\sum_{n=0}^\infty \frac1{\alpha^n+t}$.
Tough one!

Let $\displaystyle F(t) = \sum_{n=0}^\infty \frac{1}{\alpha^n+t}$.

We have, with $\displaystyle t=\alpha^m, m>1$, $\displaystyle \frac t{\log t} F(t) = \frac{\alpha^m}{m\log \alpha} \sum_{n=0}^\infty \frac1{\alpha^n+\alpha^m} = \frac{1}{m \log \alpha}\sum_{n=0}^\infty \frac1{\alpha^{n-m}+1}.$

We can write this as $\displaystyle \frac{1}{m \log \alpha}\left(\sum_{n=1}^m \frac1{\alpha^{-n}+1}+F(1)\right)$. Now $\displaystyle F(1)/m \to 0$ as $\displaystyle m \to \infty$ hence to find the limit we can just look at $\displaystyle \frac{1}{m \log \alpha} \sum_{n=1}^m \frac{\alpha^n}{1+\alpha^{n}}$.

Now we have $\displaystyle \lim_{m\to \infty}\frac{1}{m}\sum_{n=1}^m \frac{\alpha^n}{1+\alpha^n} = 1$ because $\displaystyle \frac{1}{m}\sum_{n=1}^m \frac{1}{1+\alpha^n} < \frac{1}{m}\sum_{n=1}^m \frac{1}{\alpha^n} \to 0$ as $\displaystyle m \to \infty$.

Hence the desired limit is $\displaystyle \frac{1}{\log \alpha}$.
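The convergence is visible numerically; a small Python sketch along the subsequence $\displaystyle t=\alpha^m$ used above (by the decomposition in the proof, the error should shrink roughly like $\displaystyle 1/m$):

```python
from math import log

alpha = 2.0

def g(m):
    # (t / log t) * sum_{n>=0} 1/(alpha^n + t)  evaluated at t = alpha^m;
    # terms with n much larger than m are negligible, so truncate the sum.
    t = alpha ** m
    s = sum(1.0 / (alpha ** n + t) for n in range(m + 400))
    return t / log(t) * s

limit = 1.0 / log(alpha)  # the claimed limit, about 1.4427 for alpha = 2
```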

14. Originally Posted by Bruno J.
Tough one!

Let $\displaystyle F(t) = \sum_{n=0}^\infty \frac{1}{\alpha^n+t}$.

We have, with $\displaystyle t=\alpha^m, m>1$, $\displaystyle \frac t{\log t} F(t) = \frac{\alpha^m}{m\log \alpha} \sum_{n=0}^\infty \frac1{\alpha^n+\alpha^m} = \frac{1}{m \log \alpha}\sum_{n=0}^\infty \frac1{\alpha^{n-m}+1}.$

We can write this as $\displaystyle \frac{1}{m \log \alpha}\left(\sum_{n=1}^m \frac1{\alpha^{-n}+1}+F(1)\right)$. Now $\displaystyle F(1)/m \to 0$ as $\displaystyle m \to \infty$ hence to find the limit we can just look at $\displaystyle \frac{1}{m \log \alpha} \sum_{n=1}^m \frac{\alpha^n}{1+\alpha^{n}}$.

Now we have $\displaystyle \lim_{m\to \infty}\frac{1}{m}\sum_{n=1}^m \frac{\alpha^n}{1+\alpha^n} = 1$ because $\displaystyle \frac{1}{m}\sum_{n=1}^m \frac{1}{1+\alpha^n} < \frac{1}{m}\sum_{n=1}^m \frac{1}{\alpha^n} \to 0$ as $\displaystyle m \to \infty$.

Hence the desired limit is $\displaystyle \frac{1}{\log \alpha}$.
I did roughly the same thing, except I proved that the limit exists. Namely, one can prove that if $\displaystyle \displaystyle G(t)=\frac{t}{\log(t)}\sum_{n=0}^{\infty}\frac{1}{\alpha^n+t}$ then for sufficiently large $\displaystyle t$ one has $\displaystyle G'(t)\leqslant 0$. Moreover, one can show that $\displaystyle \frac{1}{\log(\alpha)}\leqslant G(t)$, and thus it suffices to prove that $\displaystyle G(\alpha^m)\overset{m\to\infty}{\longrightarrow}\frac{1}{\log(\alpha)}$.

15. Originally Posted by Drexel28
Have fun with this one (you won't, unless you 'see it')

Problem 3: Compute

$\displaystyle \displaystyle \lim_{n\to\infty}(n+1)\int_0^1 x^n \Gamma^3(x+1)\zeta^3(x+3)\text{ }dx$
How about a little hint on this one?
