I came across a fun problem recently that gave me a good opportunity to exercise my approximation muscles.

Problem: Compute $\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} n^{1/k}$, if it exists.

The basic approach to such sums is that the first few terms contribute to the sum because they are large and the rest of the terms contribute to the sum because there are a lot of them, so it makes sense to approximate the two parts of the sum separately. This is an important idea, for example, in certain estimates in functional analysis.

Since the $k = 1$ term is exactly $n$ and each of the remaining $n - 1$ terms is at least $1$, it follows that the limit, if it exists, is at least $2$. In fact, this is the precise value of the limit. We’ll show this by giving progressively sharper estimates of the quantity

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2.$$

In the discussion that follows I’m going to ignore a lot of error terms to simplify the computations.
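Before diving in, here is a quick numerical sanity check (my own, not part of the argument) that the averages really do creep toward $2$, and slowly:

```python
# Numerical sanity check: compute (1/n) * sum_{k=1}^n n^(1/k) for growing n.
# The values approach 2, but slowly, which is why sharper error estimates
# are interesting.

def average(n):
    """The quantity whose limit we want: (1/n) * sum_{k=1}^n n^(1/k)."""
    return sum(n ** (1.0 / k) for k in range(1, n + 1)) / n

for n in (10, 100, 1000, 10000):
    print(n, average(n))
```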

**First parameter**

Suppose we split the sum after $f(n)$ terms, where $f(n)$ is a parameter to be optimized later. Setting aside the $k = 1$ term, which contributes exactly $1$ to the average, the sum of the first $f(n)$ terms is bounded above by $f(n) \cdot n^{1/2}$, and the sum of the rest of the terms is bounded above by $n \cdot n^{1/f(n)}$. It’s clear that in order for this latter term (divided by $n$) to even stay bounded, $f(n)$ needs to be at least on the order of $\ln n$. However, the former term (divided by $n$) will go to zero as long as $f(n)$ grows slower than $n^{1/2}$. And as one term gets larger, the other gets smaller. So ideally, we want them to have the same growth rate for large $n$, and this should occur when $f(n)$ has growth rate somewhere between $\ln n$ and $n^{1/2}$.

Well, if we suppose that $f(n)$ grows fast enough so that $\frac{\ln n}{f(n)}$ goes to zero, the latter term is asymptotic to $n \left( 1 + \frac{\ln n}{f(n)} \right)$, so setting $\frac{f(n)}{n^{1/2}} = \frac{\ln n}{f(n)}$ gives $f(n) = n^{1/4} (\ln n)^{1/2}$ as our optimum. (We don’t need to worry about rounding since disregarding finitely many terms of the sum doesn’t affect the limit.) This in turn gives

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2 = O\!\left( \frac{(\ln n)^{1/2}}{n^{1/4}} \right).$$

How sharp is this estimate? Well, we know that the error is at least about $\frac{1}{n^{1/2}}$, since the $k = 2$ term alone contributes $\frac{n^{1/2}}{n}$ (at least approximately), so there’s certainly room for improvement.
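For what it’s worth, a quick computation (mine, not from the post) shows the true error sitting between the trivial floor $\frac{1}{n^{1/2}}$ and the one-parameter upper bound, with plenty of daylight on the upper side:

```python
import math

# Compare the actual error (1/n) * sum n^(1/k) - 2 against the trivial
# lower bound 1/sqrt(n) and the one-parameter upper bound sqrt(ln n)/n^(1/4).
# Constants are ignored throughout, so this is only a plausibility check.

def error(n):
    return sum(n ** (1.0 / k) for k in range(1, n + 1)) / n - 2

for n in (100, 1000, 10000):
    lower = 1 / math.sqrt(n)
    upper = math.sqrt(math.log(n)) / n ** 0.25
    print(n, lower, error(n), upper)
```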

**Second parameter**

Now suppose we split the sum **twice**, first after $f(n)$ and then after $g(n)$ terms. Then the sum of the first $f(n)$ terms is bounded above by $f(n) \cdot n^{1/2}$, the sum of the next $g(n) - f(n)$ terms is bounded above by $g(n) \cdot n^{1/f(n)}$, and the sum of the rest of the terms is bounded above by $n \cdot n^{1/g(n)} \sim n \left( 1 + \frac{\ln n}{g(n)} \right)$, again under the assumption that $g(n)$ grows faster than $\ln n$. Now we should compare the three growth rates

$$\frac{f(n)}{n^{1/2}}, \qquad \frac{g(n) \, n^{1/f(n)}}{n}, \qquad \frac{\ln n}{g(n)}.$$

It’s somewhat less obvious what to do here. Under the assumption that $f(n) = c$ is a fixed constant and that $g(n)$ grows, equating the latter two growth rates gives

$$\frac{g(n) \, n^{1/c}}{n} = \frac{\ln n}{g(n)},$$

hence $g(n) = n^{(c-1)/2c} (\ln n)^{1/2}$, with corresponding growth rate $\frac{(\ln n)^{1/2}}{n^{(c-1)/2c}}$. By choosing larger and larger (but fixed) values of $c$, it follows that

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2 = O\!\left( \frac{1}{n^{1/2 - \epsilon}} \right)$$

for every $\epsilon > 0$. But this isn’t optimal, since the $\frac{f(n)}{n^{1/2}}$ term is smaller than it could be. If we allow $f(n)$ to depend on $n$ then we have to be a little careful, since the effect of adding a constant to $f(n)$ is to modify the middle growth rate considerably. With that in mind, thinking of the middle growth rate as $\frac{g(n)}{n^{1 - 1/f(n)}}$ suggests that we try $f(n) = \alpha \ln n$, since then $n^{1/f(n)} = e^{1/\alpha}$ and we’ll get a constant in the numerator. This gives

$$\frac{e^{1/\alpha} \, g(n)}{n} = \frac{\ln n}{g(n)},$$

hence $g(n) = e^{-1/2\alpha} (n \ln n)^{1/2}$, giving us a final growth rate of

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2 = O\!\left( \frac{\ln n}{n^{1/2}} \right)$$

for any choice of $\alpha$, since the first growth rate $\frac{\alpha \ln n}{n^{1/2}}$ is now the bottleneck. This suggests that we can do better. At this point the calculations get a little tedious, so I’m just going to tell you that an even better choice is

$$f(n) \approx \frac{\ln n}{\ln \ln n}$$

and that, after the appropriate calculations, it gives

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2 = O\!\left( \frac{\ln n}{n^{1/2} \ln \ln n} \right).$$
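Numerically the improvement is already visible at moderate $n$; here is an illustrative check (my own script, with all constants ignored, so it only gauges plausibility) comparing the actual error against the successive bounds:

```python
import math

# Compare the actual error against the one-parameter bound sqrt(ln n)/n^(1/4)
# and the simpler form of the two-parameter bound, ln(n)/sqrt(n).

def error(n):
    return sum(n ** (1.0 / k) for k in range(1, n + 1)) / n - 2

for n in (100, 1000, 10000):
    one_param = math.sqrt(math.log(n)) / n ** 0.25
    two_param = math.log(n) / math.sqrt(n)
    print(n, error(n), one_param, two_param)
```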

**A tight bound?**

Of course, if we suspect that the error is very close to $\frac{1}{n^{1/2}}$, then we shouldn’t be using the $n^{1/2}$ term to do our estimates at all. In other words, we should really be estimating $\frac{1}{n}\sum_{k=1}^{n}n^{1/k} - 2 - \frac{1}{n^{1/2}}$, i.e. starting from the $n^{1/3}$ term. This gives, essentially, the three growth rates

$$\frac{f(n) \, n^{1/3}}{n}, \qquad \frac{g(n) \, n^{1/f(n)}}{n}, \qquad \frac{\ln n}{g(n)}.$$

The fact that the first term is now much smaller opens up the possibility that $f(n)$ can grow like a power of $n$. Trying out $f(n) = n^{\beta}$ for a small fixed $\beta > 0$ gives $n^{1/f(n)} \to 1$, hence $g(n) = (n \ln n)^{1/2}$ and

$$\frac{1}{n}\sum_{k=1}^{n}n^{1/k} = 2 + \frac{1}{n^{1/2}} + O\!\left( \left( \frac{\ln n}{n} \right)^{1/2} \right)$$

and it’s not hard to see that this is essentially the best we can do.
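As a sanity check on this final estimate (again my own numerics, with constants ignored), the residual after subtracting the $\frac{1}{n^{1/2}}$ term does stay below $\left( \frac{\ln n}{n} \right)^{1/2}$:

```python
import math

# Check the refined estimate: after subtracting the leading 1/sqrt(n) term,
# the remaining error should be dominated by sqrt(ln(n)/n).

def residual(n):
    avg = sum(n ** (1.0 / k) for k in range(1, n + 1)) / n
    return avg - 2 - 1 / math.sqrt(n)

for n in (100, 1000, 10000):
    print(n, residual(n), math.sqrt(math.log(n) / n))
```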

Given that adding additional parameters isn’t likely to help much because the behavior of the sum smooths out considerably after the first batch of terms, I’m inclined to think this bound is reasonably tight, but I’m curious if a more sophisticated method can do better. Any takers?

on April 16, 2013 at 1:33 pm | xpaul

I have a simple solution for this limit. Note that

$$ \frac{1}{n}\sum_{k=1}^nn^{1/k}=1+\frac{1}{n}\sum_{k=2}^{n}n^{1/k}. $$

Now we will show that

\begin{eqnarray}

\tag{1}

\lim_{n\to\infty}\frac{1}{n}\sum_{k=2}^{n}n^{1/k}=1.

\end{eqnarray}

Let $a_n=\sum_{k=2}^{n}n^{1/k}$ and $b_n=n$. By the Stolz-Cesaro theorem,

\begin{eqnarray*}

\lim_{n\to\infty}\frac{1}{n}\sum_{k=2}^{n}n^{1/k}&=&\lim_{n\to\infty}\frac{a_n}{b_n}\\

&=&\lim_{n\to\infty}\frac{a_{n+1}-a_n}{b_{n+1}-b_n}\\

&=&\lim_{n\to\infty}(a_{n+1}-a_n)\\

&=&\lim_{n\to\infty}\sum_{k=2}^{n+1}(n+1)^{1/k}-\sum_{k=2}^{n}n^{1/k}\\

&=&\lim_{n\to\infty}\left(\sum_{k=2}^{n}\left[(n+1)^{1/k}-n^{1/k}\right]+(n+1)^{1/(n+1)}\right)\\

&=&\lim_{n\to\infty}\sum_{k=2}^{n}\left[(n+1)^{1/k}-n^{1/k}\right]+1\\

&=&\lim_{n\to\infty}\sum_{k=2}^{n}\frac{1}{\sum_{i=0}^{k-1}(n+1)^{(k-1-i)/k}n^{i/k}}+1.

\end{eqnarray*}

Note that

$$ \sum_{i=0}^{k-1}(n+1)^{(k-1-i)/k}n^{i/k}\ge k n^{(k-1)/k}\ge kn^{1/2} $$

and hence

\begin{eqnarray*}

0<\sum_{k=2}^{n}\frac{1}{\sum_{i=0}^{k-1}(n+1)^{(k-1-i)/k}n^{i/k}}\le\sum_{k=2}^{n}\frac{1}{kn^{1/2}}=\frac{1}{n^{1/2}}\sum_{k=2}^n\frac{1}{k}=\frac{1}{n^{1/2}}(\ln n-1+\gamma+o(1))\to 0

\end{eqnarray*}

as $n\to\infty$. So (1) is true. Thus

$$ \lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^nn^{1/k}=2. $$
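The key step above is that the Stolz-Cesaro differences $a_{n+1} - a_n$ tend to $1$; that is easy to confirm numerically (a short script of mine, not part of the original comment):

```python
# Verify numerically that a_{n+1} - a_n -> 1, where a_n = sum_{k=2}^n n^(1/k).
# By Stolz-Cesaro with b_n = n, this limit equals lim a_n / n.

def a(n):
    return sum(n ** (1.0 / k) for k in range(2, n + 1))

for n in (100, 1000, 10000):
    print(n, a(n + 1) - a(n))
```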

on February 9, 2010 at 6:51 am | Vishal Lama

Perhaps using Stolz-Cesaro in the beginning might help in simplifying the solution a bit. This is not to suggest you hadn’t thought along that line.

on February 9, 2010 at 8:58 am | Qiaochu Yuan

A friend of mine tried this, and it does let you compute the limit correctly, but you have to apply Stolz-Cesaro countably many times(!), at least the way he did it, and I don’t know of a version of Stolz-Cesaro that tells you what happens to the rate of convergence.

on February 8, 2010 at 9:21 pm | Ali

Hi, why not bound this sum above and below by integrals? You get $\sum_{k=1}^n n^{1/k} \le \int_0^n n^{1/x}\,dx$ as an upper bound. Then plugging in $n^{1/x} = e^{\ln n/x} = \sum_{i=0}^{\infty} \frac{(\ln n/x)^i}{i!}$, and integrating each term gives a nice closed form formula.

on February 8, 2010 at 10:25 pm | Qiaochu Yuan

Have you carried out this strategy? Someone tried something very much like it on AoPS and it appears to force you to attempt an exchange of a sum and integral that isn’t valid.