Parseval’s identity and Liouville’s theorem

Unlike polynomials, power series need not grow uniformly. For instance, if $n \geq 1$,
\[
P(z) = a_n z^n + \cdots + a_1 z + a_0,
\] and $a_n \neq 0$, then $P(z) \sim a_n z^n$ as $|z| \to \infty$. In particular, $|P(z)| \to \infty$ as $|z| \to \infty$. But a function given by a power series, such as sine or cosine, can have zeros tending to $\infty$. However, power series do grow on average. We explore this below.

Let $f(z) = \sum_{n = 0}^{\infty} a_n (z-a)^n$, $z \in D(a, R)$. Then for $0 \leq r < R$ we have
\begin{align*}
\int_{0}^{2 \pi} |f(a + r e^{i \theta})|^2 \, d \theta &= \int_{0}^{2 \pi} \sum_{n = 0}^{\infty} \sum_{m = 0}^{\infty} a_n \overline{a_m} r^{n + m} e^{i(n- m) \theta} \, d \theta \\ &= \sum_{n = 0}^{\infty} \sum_{m = 0}^{\infty} a_n \overline{a_m} r^{n + m} \int_{0}^{2 \pi } e^{i (n-m) \theta} \, d \theta \\ &= 2 \pi \sum_{n = 0}^{\infty} |a_n|^2 r^{2n}
\end{align*} Note that we can interchange the order of summation and integration since the double sum converges absolutely and uniformly in $\theta$ (by the Weierstrass $M$-test). Thus we have
\[
\frac{1}{2 \pi} \int_{0}^{2 \pi} |f(a + r e^{i \theta})|^2 \, d \theta = \sum_{n = 0}^{\infty} |a_n|^2 r^{2n}.
\] This is called Parseval’s identity. We now employ this formula to give a proof of an important result in complex analysis.
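As a quick numerical sanity check (an illustration, not part of the original argument), we can test Parseval’s identity for $f(z) = e^z$, where $a_n = 1/n!$, taking $a = 0$ and $r = 1/2$; all choices here are ours and purely illustrative.

```python
import cmath
import math

# Check Parseval's identity for f(z) = e^z = sum z^n / n!, i.e. a_n = 1/n!,
# centered at a = 0 with radius r = 1/2.
r = 0.5
N = 4096  # sample points for the Riemann sum over [0, 2*pi]

# Left-hand side: (1/2pi) * integral of |f(r e^{i theta})|^2 d theta
lhs = sum(abs(cmath.exp(r * cmath.exp(2j * math.pi * k / N))) ** 2
          for k in range(N)) / N

# Right-hand side: sum of |a_n|^2 r^{2n}
rhs = sum(r ** (2 * n) / math.factorial(n) ** 2 for n in range(40))

assert abs(lhs - rhs) < 1e-9
```

The Riemann sum on the left converges extremely fast here since the integrand is smooth and periodic.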

Theorem. (Liouville’s theorem) Let $f$ be bounded and entire. Then $f$ must be constant.

Proof. Let $f$ be a bounded entire function with $|f(z)| \leq M$ for every $z \in \mathbb{C}$. Since $f$ is entire, it can be expressed as a power series converging on all of $\mathbb{C}$:
\[
f(z) = \sum_{n = 0}^{\infty} a_n z^n.
\] It now follows by Parseval’s identity that
\[
\sum_{n = 0}^{\infty} |a_n|^2 r^{2n} = \frac{1}{2 \pi} \int_{0}^{2 \pi } |f(re^{i \theta})|^2 \, d \theta \leq M^2
\] for every $r > 0$. Therefore we have
\[
|a_n|^2 \leq \frac{M^2}{r^{2n}}
\] for each $n \geq 1$. Letting $r$ tend to $\infty$ now gives $a_n = 0$ for every $n \geq 1$, and hence $f$ must be constant.


Evaluation of integrals using Cauchy’s residue theorem

Let $R$ be a rational function of two variables over $\mathbb{C}$, i.e., $R \in \mathbb{C}(x, y)$. Our goal is to evaluate integrals of the form
\[
\int_{0}^{2 \pi} R(\cos \theta, \sin \theta) \, d\theta.
\] Let $z = e^{i \theta}$. Then
\[
\cos \theta = \frac{1}{2}(z + z^{-1}), \qquad \sin \theta = \frac{1}{2i}(z- z^{-1})
\] and $d\theta = dz/(iz)$. Therefore
\[
\int_{0}^{2 \pi} R(\cos \theta, \sin \theta) \, d \theta = \int_{C(0, 1)} R \left( \frac{1}{2}(z + z^{-1}), \frac{1}{2 i} (z- z^{-1}) \right) \, \frac{dz}{iz}.
\]

Example. Let $0 < a < 1$. We will evaluate
\[
\int_{0}^{2\pi} \frac{d\theta}{1- 2 a \cos \theta + a^2}.
\] Let $z = e^{i \theta}$. Then
\begin{align*} \int_{0}^{2 \pi} \frac{d \theta}{1- 2 a \cos \theta + a^2} &= \int_{C(0, 1)} \frac{1}{1- a (z + z^{-1}) + a^2} \frac{dz}{iz} \\ &= -i\int_{C(0, 1)} \frac{dz}{(1 + a^2)z- a z^2- a} \\ &= -i \int_{C(0, 1)} \frac{dz}{(z- a)(1- az)}. \end{align*} Let
\[
f(z) = \frac{-i}{(z- a)(1- az)}.
\] Then $f$ has simple poles at $z = a$ and $z = a^{-1}$. Since $0 < a < 1$, only $z = a$ is in the interior of $C(0, 1)$. We have
\[
\text{Res}(f, a) = \lim_{z \to a} (z- a) f(z) = \frac{-i}{1- a^2}.
\] Therefore, by Cauchy’s residue theorem we have
\[
\int_{0}^{2 \pi} \frac{d \theta}{1- 2 a \cos \theta + a^2} = 2 \pi i \, \text{Res}(f, a) = \frac{2 \pi}{1- a^2}.
\]
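The value just obtained can be confirmed numerically (a sketch for illustration; the choice $a = 0.3$ is arbitrary):

```python
import math

# Check that the integral of d(theta) / (1 - 2a cos(theta) + a^2) over [0, 2*pi]
# equals 2*pi / (1 - a^2), here with a = 0.3.
a = 0.3
N = 100000
h = 2 * math.pi / N
integral = h * sum(1 / (1 - 2 * a * math.cos(k * h) + a * a) for k in range(N))
assert abs(integral - 2 * math.pi / (1 - a * a)) < 1e-8
```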

If $f$ is a rational function, i.e., $f \in \mathbb{C}(z)$, such that $f(z) = o(|z|^{-1})$ as $|z| \to \infty$ and $f$ has no singularities on the real line, then we can evaluate integrals of the form
\[
\int_{-\infty}^{\infty} f(x) \, dx.
\] Here the integral converges in the principal value sense. Pick $R$ sufficiently large so that all the poles of $f$ in the upper half plane lie in the interior of the contour $C_R$ consisting of the line segment $[-R, R]$ and the semicircle of radius $R$ in the upper half plane centered at the origin. If $z_1, \dots, z_n$ are the poles of $f$ in the upper half plane, then
\[
\int_{-\infty}^{\infty} f(x) \, dx = 2 \pi i \sum_{k = 1}^{n} \text{Res}(f, z_k).
\]

Example. We will now show that
\[
\int_{-\infty}^{\infty} \frac{x^2}{(1 + x^2)^2} \, dx = \frac{\pi}{2}.
\] Consider
\[
\int_{C_R} f(z) \, dz,
\] where
\[
f(z) = \frac{z^2}{(1 + z^2)^2}.
\] Clearly, $f$ has poles of order $2$ at $z = \pm i$, but only $i$ lies in the upper half plane. Note that $f(z) \ll |z|^{-2}$. Moreover, we have
\[
\text{Res}(f, i) = \lim_{z \to i} \frac{d}{dz} (z- i)^2 f(z) = \lim_{z \to i} \frac{2z(z + i)^2- 2z^2 (z + i)}{(z + i)^4} = -\frac{i}{4}.
\] Hence, we find that
\[
\int_{-\infty}^{\infty} \frac{x^2}{(1 + x^2)^2} \, dx = 2 \pi i \left( -\frac{i}{4} \right) = \frac{\pi}{2}.
\]
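Again, a numerical confirmation of this value (purely illustrative; the truncation point is our choice):

```python
import math

# Numerically check that the integral of x^2/(1+x^2)^2 over the real line is pi/2,
# truncating at |x| = T; the discarded tails contribute about 2/T.
def f(x):
    return x * x / (1 + x * x) ** 2

T, N = 1000.0, 200000  # truncation point, number of Simpson subintervals (even)
h = 2 * T / N
s = f(-T) + f(T)
for k in range(1, N):
    s += f(-T + k * h) * (4 if k % 2 else 2)
integral = s * h / 3

assert abs(integral - math.pi / 2) < 5e-3
```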

Consider integrals
\[
\int_{-\infty}^{\infty} f(x) \cos nx \, dx, \qquad \int_{-\infty}^{\infty} f(x)\sin nx \, dx,
\] where $n$ is a positive integer. Such integrals are very important in Fourier analysis. In order to evaluate such integrals we consider
\[
\int_{C_R} f(z) e^{i n z} \, dz,
\] where $C_R$ consists of the line segment $[-R, R]$, together with the semicircle of radius $R$ in the upper half plane centered at the origin, taken in the positive orientation. If $|f(z)| \leq M/|z|$ for $|z|$ large, then we can show that
\[
\lim_{R \to \infty} \int_{S_R} f(z) e^{inz} \, dz = 0,
\] where $S_R$ is the semicircle. Note that
\[
\left| \int_{S_R} f(z) e^{i n z} \, dz \right| = \left| \int_{0}^{\pi} f(R e^{i \theta}) e^{i n R e^{i \theta}} i R e^{i \theta} \, d\theta \right| \leq M\int_{0}^{\pi} e^{- n R \sin \theta} \, d \theta = 2 M \int_{0}^{\pi/2} e^{-nR \sin \theta} \, d\theta.
\] Using the inequality $(\sin \theta)/\theta \geq 2 /\pi$ for $0 < \theta \leq \pi/2$ we find that
\[
\int_{0}^{\pi/2} e^{-n R \sin \theta} \, d \theta \leq \int_{0}^{\pi/2} e^{-2n R \theta/\pi} \, d\theta = \frac{\pi}{2 n R} (1- e^{-nR}) \leq \frac{\pi}{2 n R}.
\] Hence, we have
\[
\left| \int_{S_R} f(z) e^{i n z} \, dz \right| \leq \frac{M \pi}{n R}
\] and so the assertion follows.
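The bound just proved is easy to test numerically (an illustration, not part of the argument; the sampled values of $nR$ are arbitrary):

```python
import math

# Check the bound: the integral of exp(-nR sin(theta)) over [0, pi/2] is at most
# pi / (2 n R), for several values of the product nR.
for nR in (1.0, 5.0, 50.0):
    N = 100000
    h = (math.pi / 2) / N
    # midpoint rule
    integral = h * sum(math.exp(-nR * math.sin((k + 0.5) * h)) for k in range(N))
    assert integral <= math.pi / (2 * nR) + 1e-9
```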

Example. We will evaluate
\[
\int_{-\infty}^{\infty} \frac{\cos x}{1 + x^2} \, dx.
\] Consider
\[
f(z) = \frac{e^{iz}}{1 + z^2}
\] and
\[
\int_{C_R} f(z) \, dz,
\] where $C_R$ is as above. Clearly, $f$ has simple poles at $z = \pm i$ and only $i$ lies in the interior of $C_R$. We have
\[
\text{Res}(f, i) = \lim_{z \to i} (z- i) f(z) = \lim_{z \to i} \frac{e^{iz}}{z + i} = \frac{1}{2 i e}.
\] By Cauchy’s residue theorem,
\[
\int_{-R}^{R} \frac{e^{ix}}{1 + x^2} \, dx + \int_{S_R} f(z) \, dz = 2 \pi i \cdot \frac{1}{2 i e} = \frac{\pi}{e}.
\] Since $1/(1 + z^2) \ll |z|^{-2}$, it follows that
\[
\lim_{R \to \infty}\int_{S_R} f(z) \, dz = 0.
\] Hence, we find that
\[
\int_{- \infty}^{\infty} \frac{e^{ix}}{1 + x^2} \, dx = \frac{\pi}{e}
\] and so
\[
\int_{-\infty}^{\infty} \frac{\cos x}{1 + x^2} \, dx = \frac{\pi}{e}.
\]
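A numerical confirmation (illustrative; the truncation point is ours):

```python
import math

# Numerically check that the integral of cos(x)/(1+x^2) over the real line is pi/e,
# truncating at |x| = T (the tails are bounded by 2/T).
def f(x):
    return math.cos(x) / (1 + x * x)

T, N = 1000.0, 400000  # Simpson's rule with N subintervals on [-T, T]
h = 2 * T / N
s = f(-T) + f(T)
for k in range(1, N):
    s += f(-T + k * h) * (4 if k % 2 else 2)
integral = s * h / 3

assert abs(integral - math.pi / math.e) < 5e-3
```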

We now consider integrals of the form
\[
\int_{-\infty}^{\infty} f(x) \, dx,
\] where $f$ is a rational function with some simple poles on the real line and satisfying $|f(z)| \leq M /|z|^2$ for $|z|$ large. For this we need the following lemma.

Lemma. (small arc lemma) Let $f$ have a simple pole at $z_0$ and let $\gamma_{\epsilon} \subset C(z_0, \epsilon)$ be a circular arc of angle $\alpha$ (signed according to the orientation of $\gamma_{\epsilon}$). Then
\[
\lim_{\epsilon \to 0^{+}} \int_{\gamma_{\epsilon}} f(z) \, dz = i \alpha \, \text{Res}(f, z_0).
\]

Proof. In a deleted neighborhood $D'(z_0, r)$, we have
\[
f(z) = \frac{a_{-1}}{z- z_0} + g(z),
\] where $g$ is holomorphic on $D(z_0, r)$. If $\gamma_{\epsilon} \subset D(z_0, r)$, then we have
\[
\int_{\gamma_{\epsilon}} f (z) \, dz = a_{-1} \int_{\gamma_{\epsilon}} \frac{dz}{z- z_0} + \int_{\gamma_{\epsilon}} g (z) \, dz = i \alpha a_{-1} + \int_{\gamma_{\epsilon}} g (z) \, dz.
\] Let $|g(z)| \leq M$ for $z \in \overline{D(z_0, \delta)}$, where $0 < \delta < r$. Then
\[
\left| \int_{\gamma_{\epsilon}} g(z) \, dz \right| \leq M \alpha \epsilon
\] for $\epsilon < \delta$. Hence,
\[
\lim_{\epsilon \to 0^+} \int_{\gamma_{\epsilon}} g(z) \, dz = 0
\] and so
\[
\lim_{\epsilon \to 0^{+}} \int_{\gamma_{\epsilon}} f(z) \, dz = i \alpha a_{-1}.
\]

Example. We evaluate the integral
\[
\int_{0}^{\infty} \frac{\sin x}{x} \, dx.
\] Consider the function
\[
f(z) = \frac{e^{iz}}{z}
\] which has a simple pole at $z = 0$. Let $S_R$ denote the semicircle centered at the origin and of radius $R$ in the positive orientation and let $\gamma_{\epsilon}$ denote the semicircular arc of radius $\epsilon$ in the negative orientation. Then by Cauchy’s theorem we have
\[
\int_{-R}^{- \epsilon} f(x) \, dx + \int_{\gamma_{\epsilon}} f(z) \, dz + \int_{\epsilon}^{R} f(x) \, dx + \int_{S_R} f(z) \, dz = 0.
\] By a simple change of variables we have
\[
\int_{\epsilon}^{R} \frac{e^{ix} -e^{-ix}}{x} \, dx + \int_{\gamma_{\epsilon}} f(z) \, dz + \int_{S_{R}} f(z) \, dz = 0.
\] By the small arc lemma
\[
\lim_{\epsilon \to 0^+} \int_{\gamma_{\epsilon}} f = -i \pi \text{Res}(f, 0) = -i \pi.
\] On the other hand, it can be shown that
\[
\lim_{R \to \infty} \int_{S_R} f(z) \, dz = 0
\] using an argument similar to the one we used in the previous example. Consequently, if we let $\epsilon \to 0^+$, $R \to \infty$ we obtain
\[
2 i \int_{0}^{\infty} \frac{\sin x}{x} \, dx = i\pi
\] or equivalently
\[
\int_{0}^{\infty} \frac{\sin x}{x} \, dx = \frac{\pi}{2}.
\]
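This classical value can be confirmed numerically as well (an illustration; the truncation at $X = 2000\pi$ is chosen so that the tail, of size roughly $\cos(X)/X$, is small):

```python
import math

# Numerically check that the integral of sin(x)/x over [0, infinity) is pi/2.
X = 2000 * math.pi  # truncation point; the tail is about cos(X)/X = 1/X
N = 800000          # Simpson subintervals (even)
h = X / N

def f(x):
    return math.sin(x) / x if x else 1.0  # value 1 at x = 0 (the limit)

s = f(0.0) + f(X)
for k in range(1, N):
    s += f(k * h) * (4 if k % 2 else 2)
integral = s * h / 3

assert abs(integral - math.pi / 2) < 1e-3
```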


Homotopy version of Cauchy’s theorem

Let $\gamma_0$ and $\gamma_1$ be piecewise smooth curves defined on the interval $[a, b]$ with the same end points, i.e., $\gamma_0(a) = \gamma_1(a)$ and $\gamma_0(b) = \gamma_1(b)$. If $U \subset \mathbb{C}$ is an open set, then $\gamma_0$ and $\gamma_1$ are said to be homotopic in $U$ if there exists a continuous function $F : [0, 1] \times [a, b] \to U$ such that
\[
F(s, a) = \gamma_0(a) = \gamma_1(a), \qquad F(s, b) = \gamma_0(b) = \gamma_1(b)
\] for every $s \in [0, 1]$ and
\[
F(0, t) = \gamma_0(t), \qquad F(1, t) = \gamma_1(t)
\] for every $t \in [a, b]$. In this case we call $F$ a homotopy between $\gamma_0$ and $\gamma_1$ and $F(s, t)$ is usually denoted by $\gamma_s(t)$.

Theorem. Let $f$ be holomorphic on an open set $U$. If $\gamma_0$ and $\gamma_1$ are piecewise smooth curves that are homotopic in $U$, then
\[
\int_{\gamma_0} f = \int_{\gamma_1} f.
\]

Proof. Let $F$ be a homotopy between $\gamma_0$ and $\gamma_1$ as above. Then $F$ is continuous and so its image is a compact set, since the image of a compact set under a continuous map is compact. Let us denote the image of $F$ by $K$. Note that we have $d(K, U^c) > 0$. Moreover, if $0 < \epsilon < d(K , U^c)$, then $D(z, \epsilon) \subset U$ for every $z \in K$. By uniform continuity of $F$ there is a $\delta > 0$ such that
\[
|\gamma_{s_1} (t) -\gamma_{s_2}(t)| < \epsilon
\] for all $s_1, s_2 \in [0, 1]$ with $|s_1 -s_2| < \delta$ and $t \in [a, b]$. This yields
\[
\sup_{t \in [a, b]} |\gamma_{s_1}(t) -\gamma_{s_2}(t)| \leq \epsilon.
\] for all $s_1, s_2 \in [0, 1]$ with $|s_1 -s_2| < \delta$.
We now fix $s_1, s_2 \in [0, 1]$ with $|s_1- s_2| < \delta$ and let $a = t_0 < t_1 < \cdots < t_n = b$ be a uniform partition of $[a, b]$, i.e., $t_{i}- t_{i-1} = (b- a)/n$ for every $1 \leq i \leq n$. Because $\gamma_{s_1}$ and $\gamma_{s_2}$ are uniformly continuous, there exists an $\eta > 0$ such that
\[
|\gamma_{s_1}(t)- \gamma_{s_1}(t')| < \epsilon \qquad \text{and} \qquad |\gamma_{s_2}(t)- \gamma_{s_2}(t')| < \epsilon
\] whenever $t, t' \in [a, b]$ and $|t- t'| < \eta$. We choose $n$ so large that $(b- a)/n < \eta$. Let $z_i = \gamma_{s_1}(t_i)$ and $w_i = \gamma_{s_2}(t_i)$ for $0 \leq i \leq n$. Let $D_i = D(z_i, 2 \epsilon)$ for each $0 \leq i < n$. We claim that $\gamma_{s_1}([t_i, t_{i + 1}]) \cup \gamma_{s_2}([t_i, t_{i + 1}]) \subset D_i$ for each $0 \leq i < n$.


Jensen’s inequality and Borel-Carathéodory lemma

Jensen's inequality bounds the number of zeros of an analytic function in a small disc in terms of the size of the function on a slightly larger disc. Although Jensen's inequality is a simple consequence of Jensen's formula, we will provide a different proof based on Blaschke factors.

Theorem. Let $f$ be a function that is analytic on an open set containing the closed disc $\overline{D(0, R)}$. If $|f(z)| \leq M$ for $z \in \overline{D(0, R)}$ and $f(0) \neq 0$, then for $r < R$ the number of zeros of $f$ counted with multiplicity in the disc $\overline{D(0, r)}$ does not exceed
\[
\frac{\log (M/|f(0)|)}{\log (R/r)}.
\]

Proof. Let $z_1, \dots, z_K$ be the zeros of $f$ in the open disc $|z| < R$ counted with multiplicity. There can only be finitely many such zeros: otherwise they would accumulate in the compact set $\overline{D(0, R)}$, and a complex analytic function that is not identically zero cannot have infinitely many zeros in a compact subset of its domain. We now define
\[
g(z) = f(z) \prod_{k = 1}^{K} \frac{R^2- z\overline{z}_k}{R(z- z_k)}.
\] Note that $g$ is analytic on the domain of $f$. Each factor in the product defining $g$ has a pole at $z_k$, which cancels the corresponding zero of $f$, and has modulus $1$ whenever $|z| = R$. The latter can be seen by observing that
\[ \left| \frac{R^2- z \overline{z}_k}{R(z- z_k)} \right|^2 = \frac{R^4- 2 R^2 \text{Re}(z \overline{z}_k) + |z|^2|z_k|^2}{R^2 |z|^2- 2 R^2 \text{Re}(z \overline{z}_k) + R^2 |z_k|^2} = 1 \] when $|z| = R$. Thus for $|z| = R$ we have
\[
|g(z)| = |f(z)| \prod_{k = 1}^{K} \left| \frac{R^2- z \overline{z}_k}{R(z- z_k)} \right| = |f(z)| \leq M.
\] Hence by the maximum modulus principle we have $|g(0)| \leq M$ and consequently
\[
|g(0)| = |f(0)| \prod_{k = 1}^{K} \frac{R}{|z_k|} \leq M.
\] Each factor in the product above is $\geq 1$ since $|z_k| < R$. Moreover, if $|z_k| \leq r$, then $R/|z_k| \geq R/r$. Let $L$ be the number of zeros of $f$ in the disc $\overline{D(0, r)}$ counted with multiplicity. Then we obtain $M \geq |g(0)| \geq |f(0)| (R/r)^L$, and taking logarithms leads to the desired bound.
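Here is a small numerical illustration of the bound (not part of the original proof; the polynomial and the radii are chosen arbitrarily):

```python
import cmath
import math

# Jensen's inequality check for f(z) = z^2 - 1/4, which has the two zeros
# z = +-1/2 in the closed disc of radius r = 1, with f(0) = -1/4.
def f(z):
    return z * z - 0.25

R, r = 2.0, 1.0
# Estimate M = max |f| on |z| = R by sampling the circle.
M = max(abs(f(R * cmath.exp(2j * math.pi * k / 10000))) for k in range(10000))

bound = math.log(M / abs(f(0))) / math.log(R / r)
assert bound >= 2  # the bound must dominate the true zero count, here 2
```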

Using Jensen’s inequality it can be easily proven that a polynomial of degree $n$ with complex coefficients has at most $n$ roots counted with multiplicity.

We now proceed to prove a result which provides bounds for a complex analytic function and its derivatives based on one-sided bounds for its real or imaginary part in a slightly larger region. The key idea is to bound the Taylor coefficients, whose moduli can be expressed as linear combinations of the integrals appearing in Cauchy’s integral formulæ.

Theorem. (Borel-Carathéodory lemma) Suppose that $h$ is analytic in an open set containing the disc $\overline{D(0, R)}$, that $h(0) = 0$, and that $\text{Re}( h(z)) \leq M$ for $z \in \overline{D(0, R)}$. If $r < R$, then for $z \in \overline{D(0, r)}$ we have
\[
|h(z)| \leq \frac{2Mr}{R- r}
\] and
\[
|h'(z)| \leq \frac{2MR}{(R- r)^2}.
\]

Proof. Note that it suffices to show that
\[
\left| \frac{h^{(k)}(0)}{k!} \right| \leq \frac{2M}{R^k}
\] for $k \geq 1$, for then we have
\[
|h(z)| \leq \sum_{k = 1}^{\infty} \left| \frac{h^{(k)}(0)}{k!} \right| r^k \leq 2M \sum_{k = 1}^{\infty} \left( \frac{r}{R} \right)^k = 2M \left( \frac{r/R}{1- r/R} \right) = \frac{2Mr}{R- r}
\] and
\[
|h'(z)| \leq \sum_{k = 1}^{\infty} \frac{|h^{(k)}(0)|kr^{k-1}}{k!} \leq \frac{2M}{R} \sum_{k = 1}^{\infty} k \left( \frac{r}{R} \right)^{k-1} = \frac{2M}{R} \frac{1}{(1- r/R)^2} = \frac{2MR}{(R -r)^2}.
\] Here and in what follows we write $e(\theta) = e^{2 \pi i \theta}$. By Cauchy’s integral formula we have
\[
\int_{0}^{1} h(Re(\theta)) \, d \theta = \frac{1}{2 \pi i} \oint_{C(0, R)} \frac{h(z)}{z} \, dz = h(0) = 0. \tag{1}
\] If $k > 0$, then we have
\[
\int_{0}^{1} h(Re(\theta))e(k \theta) \, d\theta = \frac{R^{-k}}{2 \pi i} \oint_{C(0, R)} h(z) z^{k- 1} \, dz = 0
\] by Cauchy’s theorem as $h(z) z^{k-1}$ is analytic on the disc $\overline{D(0, R)}$ and
\[
\int_{0}^{1} h(R e (\theta)) e(-k \theta) \, d \theta = \frac{R^{k}}{2 \pi i} \oint_{C(0, R)} h(z) z^{-k-1} \, dz = \frac{R^k h^{(k)}(0)}{k!}.
\] For $k > 0$ and for any real $\phi$ we have
\[
\int_{0}^{1} h(R e(\theta)) e(k\theta + \phi) \, d \theta = e(\phi) \int_{0}^{1} h(Re(\theta)) e(k \theta) \, d \theta = 0 \tag{2}
\] and
\[
\int_{0}^{1} h(Re(\theta)) \overline{e(k \theta + \phi)} \, d\theta = e(- \phi)\int_{0}^{1} h(Re(\theta)) e(-k \theta) \, d\theta = \frac{R^k e(-\phi) h^{(k)}(0)}{k!}. \tag{3}
\] Adding twice (1) to the sum of (2) and (3) we end up with
\[
2\int_{0}^{1} h(Re(\theta))(1 + \cos 2 \pi( k \theta + \phi)) \, d\theta = \frac{R^k e(-\phi) h^{(k)}(0)}{k!}.
\] Now taking real parts and using $\text{Re}(h(z)) \leq M$ together with the nonnegativity of $1 + \cos 2\pi(k \theta + \phi)$, we obtain
\[
\text{Re}\left(\frac{R^k e(- \phi) h^{(k)}(0)}{k!} \right) \leq 2M \int_{0}^{1} (1 + \cos 2\pi (k \theta + \phi)) \, d\theta = 2M.
\] Choosing $\phi$ to be such that $e(-\phi) h^{(k)}(0) = |h^{(k)}(0)|$ we see that $|h^{(k)}(0)|/k! \leq 2M R^{-k}$.
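A quick numerical illustration of the Borel–Carathéodory bound (our own example; the function $h$ and the radii are chosen arbitrarily):

```python
import cmath
import math

# Borel-Caratheodory check for h(z) = e^z - 1, which satisfies h(0) = 0 and
# Re h(z) <= e^R - 1 on |z| <= R (since Re e^z <= |e^z| = e^{Re z}).
def h(z):
    return cmath.exp(z) - 1

R, r = 1.0, 0.5
M = math.exp(R) - 1

# Maximum of |h| on |z| = r, estimated by sampling the circle.
max_h = max(abs(h(r * cmath.exp(2j * math.pi * k / 10000))) for k in range(10000))
assert max_h <= 2 * M * r / (R - r)
```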


Landau’s theorem on Dirichlet series

Let $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ be a Dirichlet series with abscissa of convergence $\sigma_c$. It is natural to think that $\alpha(s)$ must have some kind of singularity on the line $\sigma = \sigma_c$ which causes the Dirichlet series $\alpha(s)$ to diverge to the left of $\sigma_c$. It turns out that this is partially correct. Landau showed that if the coefficients $a_n$ are nonnegative (for all sufficiently large $n$), then $\alpha(s)$ has a singularity at the point $s = \sigma_c$. However, if not all $a_n$ are nonnegative, then in some cases $\alpha(s)$ can in fact be analytically continued to (an open set containing) $\sigma_c$. For instance, if $q > 1$ and $\chi$ is a non-principal character modulo $q$, then the Dirichlet $L$-function
\[
L(s, \chi) = \sum_{n = 1}^{\infty} \chi(n) n^{-s}
\] can be analytically continued to the entire complex plane. We now state and prove Landau’s amazing theorem.

Theorem. (Landau) Let $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ be a Dirichlet series with finite abscissa of convergence $\sigma_c$. If $a_n \geq 0$ for every $n$, then $\sigma_c$ is a point of singularity of $\alpha(s)$.

We remark here that it suffices to assume that $a_n \geq 0$ for all sufficiently large $n$ since $\sum_{n \leq N} a_n n^{-s}$ is an entire function for a fixed $N$.

Proof. Suppose for the sake of contradiction that $\sigma_c$ is not a point of singularity of $\alpha(s)$, i.e., there exists an analytic continuation of $\alpha(s)$ to an open set $U$ containing $\sigma_c$. Let us denote this analytic continuation by $A(s)$ and let $\delta > 0$ be such that $D(\sigma_c, \delta) \subset U$. Take $\sigma_0 = \sigma_c + \delta/3$. Then we have $D(\sigma_0, 2 \delta/3) \subset D(\sigma_c, \delta)$. It then follows that $A(s)$ has a power series expansion at $\sigma_0$:
\[
A(s) = \sum_{k = 0}^{\infty} \frac{A^{(k)}(\sigma_0)}{k!} (s- \sigma_0)^k \tag{1}
\] for $s \in D(\sigma_0, 2 \delta/3)$. Because $\sigma_0 > \sigma_c$ we have
\[
A^{(k)}(\sigma_0) = \alpha^{(k)}(\sigma_0) = (-1)^k\sum_{n = 1}^{\infty} a_n (\log n)^k n^{-\sigma_0}.
\] Plugging this into (1) we find that
\begin{align*} A(s) &= \sum_{k = 0}^{\infty} \frac{(-1)^k}{k!} \sum_{n = 1}^{\infty} a_n (\log n)^k n^{-\sigma_0} (s- \sigma_0)^k \\ &= \sum_{k = 0}^{\infty} \sum_{n = 1}^{\infty} \frac{1}{k!} a_n (\log n)^k n^{-\sigma_0} (\sigma_0- s)^k \end{align*} for $s \in D(\sigma_0, 2 \delta/3)$. If $s = \sigma$ and $\sigma_c- \delta/3 < \sigma < \sigma_0$, then each summand is nonnegative and so we can interchange the order of summation to obtain
\begin{align*} A(\sigma) &= \sum_{n = 1}^{\infty} a_n n^{-\sigma_0} \sum_{k = 0}^{\infty} \frac{(\log n)^k (\sigma_0- \sigma)^k}{k!} \\ &= \sum_{n = 1}^{\infty} a_n n^{-\sigma_0} e^{(\log n)(\sigma_0- \sigma)} = \sum_{n = 1}^{\infty} a_n n^{-\sigma_0} n^{\sigma_0- \sigma} = \alpha(\sigma). \end{align*} This implies that $\alpha(\sigma)$ converges for $\sigma_c- \delta/3 < \sigma < \sigma_0$, contradicting the definition of $\sigma_c$.

One can obtain an analogous result for power series (see Exercise 7 of Section 1.2 of Multiplicative Number Theory I: Classical Theory by Montgomery and Vaughan). The above result helps us find the abscissa of convergence of Dirichlet series with nonnegative coefficients. Let us illustrate this with an example. Let $d_k^*(n)$ denote the number of factorizations of $n$ as an ordered product of $k$ positive integers each greater than $1$. Then $d_k^{*}$ is the $k$-fold Dirichlet convolution of the characteristic function of the integers greater than $1$. Thus we have
\[
\sum_{n = 1}^{\infty} d_k^*(n) n^{-s} = (\zeta(s)- 1)^k
for $\sigma > 1$. By Landau’s theorem the abscissa of convergence of $\sum_{n = 1}^{\infty} d_k^*(n) n^{-s}$ is therefore $1$, since $(\zeta(s)- 1)^k$ is analytic everywhere except for the singularity at $s = 1$.
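The identity above can be tested numerically (an illustrative check; the choice $s = 3$, $k = 2$ and the truncation points are ours):

```python
# Check the identity sum of d_2^*(n) n^{-s} = (zeta(s) - 1)^2 numerically at
# s = 3, where d_2^*(n) counts ordered factorizations n = ab with a, b > 1.
def d2_star(n):
    # divisors a of n with 1 < a < n; then b = n // a > 1 automatically
    return sum(1 for a in range(2, n) if n % a == 0)

zeta3_minus_1 = sum(n ** -3.0 for n in range(2, 10 ** 6))
lhs = sum(d2_star(n) / n ** 3 for n in range(2, 2000))
assert abs(lhs - zeta3_minus_1 ** 2) < 1e-4
```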


Integral representation of Dirichlet series and Kronecker’s lemma

Let $\alpha(s) = \sum_{n =1}^{\infty} a_n n^{-s}$ be a Dirichlet series and let $A(x) = \sum_{n \leq x} a_n$. In this article we will establish a relationship between $\alpha(s)$ and $A(x)$.

Theorem. Let $\alpha(s)$ and $A(x)$ be as above. If $\sigma > \max(\sigma_c, 0)$, then
\[
\alpha(s) = s \int_{1}^{\infty} A(x) x^{-s-1} \, dx. \tag{1}
\] Moreover, if $\sigma_c \geq 0$, then
\[
\limsup_{x \to \infty} \frac{\log |A(x)|}{\log x} = \sigma_c. \tag{2}
\]

Proof. Note that by Abel’s summation by parts formula we have
\[
\sum_{n \leq x} a_n n^{-s} = A(x)x^{-s} + s \int_{1}^{x} A(u) u^{-s-1} \, du
\] for every $s$. Now let $\phi$ denote the left-hand side of (2). If $\theta > \phi$, then we have $A(x) \ll x^{\theta}$. Suppose $\sigma > \theta$. Then $A(x) x^{-s}$ tends to $0$ as $x \to \infty$ and the integral is absolutely convergent. Thus $\alpha(s)$ converges for $\sigma > \theta$ and so we have $\sigma_c \leq \theta$. Because $\theta > \phi$ was arbitrary, we conclude that $\sigma_c \leq \phi$. Moreover, the formula (1) holds for every $\sigma > \phi$.

If $\sigma_c < 0$, then $A(x) \ll 1$ as $\alpha(0)$ converges, and so $\phi \leq 0$. Thus in this case (1) holds for every $\sigma > 0$. Now suppose that $\sigma_c \geq 0$. We will show that $\phi = \sigma_c$ in this case. We already have the inequality $\sigma_c \leq \phi$ so it suffices to show that $\sigma_c \geq \phi$. Let $\sigma > \sigma_c$. Then by Abel’s summation by parts formula we have
\[
A(x) = \left( \sum_{n \leq x} a_n n^{- \sigma} \right) x^{\sigma}- \sigma \int_{1}^{x} \left( \sum_{n \leq u} a_n n^{-\sigma} \right) u^{\sigma- 1} \, du.
\] Since $\alpha(\sigma)$ converges we have $\sum_{n \leq x} a_n n^{- \sigma} \ll 1$ and so we obtain
\[
A(x) \ll x^{\sigma} + \sigma \int_{1}^{x} u^{\sigma- 1} \, du \ll x^{\sigma},
\] where the implicit constant depends on $\sigma$. This implies that $\phi \leq \sigma$. Because $\sigma$ was taken to be arbitrary we have $\sigma_c \geq \phi$. This completes the proof.

It follows from what we showed above that $A(x) = o(x^{\sigma})$ for every $\sigma > \max(\sigma_c, 0)$. We now show that if the Dirichlet series $\alpha(s)$ converges at some $s$ with $\sigma > 0$, then $A(x) = o(x^{\sigma})$. This is slightly stronger in the case when $\alpha(\sigma_c)$ converges and $\sigma_c > 0$, for then we have $A(x) = o(x^{\sigma_c})$, which does not follow from the above.

Theorem. (Kronecker’s lemma) If $\alpha(s)$ converges at a point $s$ with $\sigma > 0$, then $A(x) = o(x^{\sigma})$.

Proof. Let us define $B(x) = \sum_{n \leq x} a_n n^{-s}$. It now follows by Abel’s summation by parts formula that
\begin{align*}
A(x) &= B(x) x^{s}- s \int_{1}^{x} B(u) u^{s- 1}\, du \\
&= s \int_{0}^{x} B(x) u^{s- 1} \, du- s \int_{0}^{x} B(u) u^{s- 1} \, du \\
&= s \int_{0}^{x} (B(x)- B(u)) u^{s- 1} \, du.
\end{align*} Here we used that $B(u) = 0$ for $u < 1$ and that $x^s = s \int_{0}^{x} u^{s- 1} \, du$. Let $\delta(x) = \sup_{u, v \geq x} |B(u)- B(v)|$. Then $\delta(x) = o(1)$ since $B(x) \to \alpha(s)$ as $x \to \infty$. Let $0 < y < x$. Then we have
\[
A(x) \ll \int_{0}^{y} u^{\sigma- 1} \, du + \delta(y) \int_{y}^{x} u^{\sigma- 1} \, du \ll y^{\sigma} + \delta(y) x^{\sigma},
\] where the implicit constant depends on $s$; this is harmless since $s$ is fixed. Taking $y = \sqrt{x}$ it immediately follows that $A(x) = o(x^{\sigma})$, as $y^{\sigma} = o(x^{\sigma})$ and $\delta(\sqrt{x}) = o(1)$.

Kronecker’s lemma does not hold for $\sigma < 0$. Consider the Dirichlet series
\[
\alpha(s) = \sum_{n = 1}^{\infty} n^{-2} n^{-s} = \zeta(s + 2)
\] which converges for $\sigma > -1$. Note that
\[
A(x) = \sum_{n \leq x} n^{-2}
\] converges to $\zeta(2)$ and $A(x) \neq o(x^{\sigma})$ for any $-1 < \sigma < 0$.


Absence of zeros of $\zeta(s)$ on the line $\sigma = 1$ under the prime number theorem

In this article we show that the prime number theorem implies the nonvanishing of $\zeta(s)$ on the line $\sigma = 1$. The argument we present here closely follows the approach taken in Ingham’s book The Distribution of Prime Numbers. The key tool is the formula
\[
-\frac{\zeta'}{\zeta}(s) = s \int_{1}^{\infty} \frac{\psi(x)}{x^{s + 1}}\, dx,
which holds for $\sigma > 1$. This formula follows from the following general fact about Dirichlet series: If $\alpha(s) = \sum_{n= 1}^{\infty} a_n n^{-s}$ is a Dirichlet series with abscissa of convergence $\sigma_c$ and $A(x) = \sum_{n \leq x} a_n$, then
\[
\alpha(s) = s \int_{1}^{\infty} \frac{A(x)}{x^{s + 1}} \, dx
\] for $\sigma > \max(\sigma_c, 0)$. Since
\[
\int_{1}^{\infty} \frac{dx}{x^s} = \frac{1}{s-1}
\] for $\sigma > 1$ we obtain
\[
\int_{1}^{\infty} \frac{\psi(x) -x}{x^{s + 1}} \, dx = -\frac{1}{s} \frac{\zeta'}{\zeta}(s)- \frac{1}{s-1} \tag{1}
for $\sigma > 1$. It is natural to consider the above integral, which we call $\phi(s)$ for the rest of our discussion, since we want to use $\psi(x) = x + o(x)$ to glean information about the zeros of $\zeta(s)$. Let $\varepsilon > 0$ be fixed and take $M > 1$ such that $|\psi(x)- x| \leq \varepsilon x$ for $x \geq M$. It now follows that
\[
|\phi(s) | \leq \int_{1}^{\infty} \frac{|\psi(x)- x|}{x^{\sigma + 1}} \, dx \leq \int_{1}^{M} \frac{|\psi(x)-x|}{x^2} \, dx + \varepsilon \int_{M}^{\infty} \frac{dx}{x^{\sigma}} \leq I_M + \frac{\varepsilon}{\sigma- 1}.
\] where $I_M = \int_{1}^{M} |\psi(x)- x| x^{-2} \, dx$. Now let $\delta > 1$ be such that $(\delta- 1) I_M < \varepsilon$. Then for $1 < \sigma < \delta$ we have $|(\sigma- 1) \phi(s)| \leq 2 \varepsilon$. This shows that $(\sigma- 1) \phi(s) \to 0$ as $\sigma \to 1^{+}$, uniformly in $t$.

Suppose for the sake of contradiction that $\zeta(1 + i t_0) = 0$ for some $t_0 \neq 0$. Then $\frac{\zeta'}{\zeta}(s)$ has a simple pole at $s = s_0 = 1 + i t_0$ and so we have
\[
\lim_{\scriptstyle \sigma \to 1^+ \atop \scriptstyle t = t_0} (\sigma- 1) \left( -\frac{1}{s} \frac{\zeta'}{\zeta}(s)- \frac{1}{s-1} \right) = -\frac{1}{ s_0} \underset{s = s_0}{\operatorname{Res}} \frac{\zeta'}{\zeta}(s) \neq 0.
\] But this is a contradiction due to (1).


Bounds for the Riemann zeta function in the critical strip

In this article we obtain upper bounds for $\zeta(s)$ in the strip $\delta \leq \sigma \leq 2$, where $\delta > 0$. We will first show that for a fixed $0 < \varepsilon < \delta \leq 1$ we have
\[
\zeta(s) \ll \tau^{1- \delta + \varepsilon}
\] for $\delta \leq \sigma \leq 2$ and $|t| \geq 1$, where $\tau = |t| + 2$ and the implicit constant depends on $\delta$ and $\varepsilon$. Later we will establish a slightly stronger bound.

Note that for $\sigma > 1$ we have
\begin{align*}
\zeta(s) = \sum_{n = 1}^{\infty} \frac{1}{n^s} &= \sum_{n = 1}^{\infty} \left( \frac{1}{n^s}- \int_{n}^{n + 1} \frac{du}{u^{s}} \right) + \int_{1}^{\infty} \frac{du}{u^s} \\ &= \sum_{n = 1}^{\infty} \left( \frac{1}{n^s}- \int_{n}^{n + 1} \frac{du}{u^{s}} \right) + \frac{1}{s-1}.
\end{align*}
Now let
\[
\varphi_n(s) = \frac{1}{n^s}- \int_{n}^{n + 1} \frac{du}{u^s}.
\] Then we have
\[
\zeta(s)- \frac{1}{s- 1} = \sum_{n = 1}^{\infty} \varphi_n(s).
\] Observe that
\[
|\varphi_n(s)| = \left|\int_{n}^{n + 1} \left( \frac{1}{n^s}- \frac{1}{u^s} \right) \, du \right| = \left|\int_{n}^{n + 1} \int_{n}^{u} \frac{s}{v^{s + 1}} dvdu \right| \leq \frac{|s|}{n^{\sigma + 1}}
\] for $n \in \mathbb{N}$ and $\sigma > 0$. Due to this inequality the series $\sum_{n = 1}^{\infty} \varphi_n$ converges uniformly on compact subsets of the half plane $\sigma > 0$. Hence, $\sum_{n = 1}^{\infty} \varphi_n$ is holomorphic in the half plane $\sigma > 0$ as each $\varphi_n$ is an entire function.
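Since the series $1/(s-1) + \sum_n \varphi_n(s)$ converges for $\sigma > 0$, it gives a concrete way to evaluate $\zeta(s)$ inside the critical strip. As an illustration (not part of the original post), we can compare it against the known value $\zeta(1/2) \approx -1.4603545$:

```python
# Evaluate zeta(s) = 1/(s-1) + sum of phi_n(s) at s = 1/2, using
# phi_n(s) = n^{-s} - ((n+1)^{1-s} - n^{1-s})/(1-s).
# The partial sum to N has error on the order of N^{-1/2}.
s = 0.5
N = 10 ** 5
total = 1 / (s - 1)
for n in range(1, N):
    total += n ** -s - ((n + 1) ** (1 - s) - n ** (1 - s)) / (1 - s)
assert abs(total - (-1.4603545)) < 0.01
```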

We will now combine the inequality $|\varphi_n(s)| \leq |s|/ n^{\sigma + 1}$ with the trivial inequality $|\varphi_n(s)| \leq 2/n^{\sigma}$: the first helps with convergence in the critical strip, whereas the second does not involve $s$ and hence helps us obtain a power of $|s|$ smaller than $1$ in the bound. Let $0 < \lambda < 1$. Then we have
\[
|\varphi_n(s)| = |\varphi_n(s)|^{\lambda} |\varphi_n(s)|^{1-\lambda} \leq \left( \frac{|s|}{n^{\sigma + 1}} \right)^{\lambda} \left( \frac{2}{n^{\sigma}} \right)^{1- \lambda} \leq \frac{2 |s|^{\lambda}}{n^{\sigma + \lambda}}.
\] If $\sigma + \lambda > 1$, then we have
\[
\left| \sum_{n = 1}^{\infty} \varphi_n(s) \right| \leq 2 |s|^{\lambda} \sum_{n = 1}^{\infty} \frac{1}{n^{\sigma + \lambda}}.
\] Let $0 < \varepsilon < \delta \leq 1$. We take $\lambda = 1 -\delta + \varepsilon$ so that $0 < \lambda < 1$ and for $\sigma \geq \delta$ the series to the right converges. Thus for $\sigma \geq \delta$ we have
\[
\left| \sum_{n = 1}^{\infty} \varphi_n(s) \right| \leq 2 |s|^{1 -\delta + \varepsilon} \sum_{n = 1}^{\infty} \frac{1}{n^{1 + \varepsilon}} \ll |s|^{1- \delta + \varepsilon},
\] where the implicit constant depends on $\delta$ and $\varepsilon$. Hence, it follows that
\[
\zeta(s) \ll \frac{1}{|s- 1|} + |s|^{1- \delta + \varepsilon} \ll |s|^{1- \delta + \varepsilon}
\] for $\sigma \geq \delta$, $|t| \geq 1$. For $\delta \leq \sigma \leq 2$ we have $|s| \ll \tau$ and so for fixed $0 < \varepsilon < \delta \leq 1$ we find that
\[
\zeta(s) \ll \tau^{1- \delta + \varepsilon}
\] for $ \delta \leq \sigma \leq 2, |t| \geq 1$, where the implicit constant depends on $\delta$ and $\varepsilon$.

Let $\delta > 0$ be fixed. We now show that
\[
\zeta(s) \ll (1 + \tau^{1- \sigma}) \min \left( \frac{1}{|\sigma- 1|}, \log \tau \right)
\] uniformly for $\delta \leq \sigma \leq 2$, $|t| \geq 1$. It is easily seen by Abel’s summation by parts formula that
\[
\zeta(s) = \sum_{n \leq x} n^{-s} + \frac{x^{1-s}}{s-1} + \{x\} x^{-s} -s \int_{x}^{\infty} \{u\} u^{-s-1} \, du
\] for $\sigma > 0$ and $s \neq 1$. We can bound the integral to the right as
\[
\int_{x}^{\infty} \{u\} u^{-s-1} \, du \ll \int_{x}^{\infty} u^{- \sigma- 1} \, du = \frac{x^{- \sigma}}{\sigma}
\] for $\sigma > 0$. Thus we find that
\[
\zeta(s) = \sum_{n \leq x} n^{-s} + \frac{x^{1-s}}{s-1} + O(\tau x^{-\sigma}) \tag{1}
\] uniformly for $\sigma \geq \delta > 0$, $s \neq 1$. The key idea behind obtaining our next bound is to split the ranges of $s$ depending on the proximity of $\sigma$ to $1$.

Note that we have
\[
\left| \sum_{n \leq x} n^{-s} \right| \leq \sum_{n \leq x} n^{- \sigma} \leq 1 + \int_{1}^{x} u^{-\sigma} \, du
\] for $\sigma \geq 0$ and $x \geq 2$. If $0 \leq \sigma \leq 1- 1/\log x$, then
\[
\sum_{n \leq x} n^{-s} \ll 1 + \frac{x^{1- \sigma}}{1- \sigma} \ll \frac{x^{1- \sigma}}{1- \sigma},
\] where the constant $1$ gets absorbed as $x^{1- \sigma}/(1- \sigma) \geq 1$ for $0 \leq \sigma \leq 1- 1/\log x$. If $|\sigma- 1| \leq 1/ \log x$, then $1- 1/\log x \leq \sigma \leq 1 + 1/\log x$ and so we have
\[
e^{-1} u^{-1} \leq u^{-1 -1/\log x} \leq u^{-\sigma} \leq u^{-1 + 1/\log x} \leq e u^{-1}
\] for $1 \leq u \leq x$. Thus $u^{-\sigma} \asymp u^{-1}$ uniformly for $1 \leq u \leq x$. Using this we find that
\[
\sum_{n \leq x} n^{-s} \ll 1 + \int_{1}^{x} u^{-1} \, du \ll \log x.
\] If $1 + 1/\log x \leq \sigma \leq 2$, then
\[
\sum_{n \leq x} n^{-s} \ll 1 + \int_{1}^{\infty} u^{-\sigma} \, du = 1 + \frac{1}{\sigma- 1} \ll \frac{1}{\sigma- 1}.
\]

We now need to show that in each of the ranges of $\sigma$ the desired estimate holds. If $0 \leq \sigma \leq 1- 1/\log x$, then $|\sigma- 1| \geq 1/ \log x$ and so $1/|\sigma- 1| \leq \log x$. Thus $x^{1- \sigma}/(1- \sigma) \ll (1 + x^{1- \sigma}) \min (1/|\sigma- 1|, \log x)$ for $0 \leq \sigma \leq 1 -1/\log x$.

If $|\sigma- 1| \leq 1/\log x$, then $\log x \leq 1/|\sigma- 1|$ and so $\log x \ll (1 + x^{1- \sigma}) \min (1/|\sigma- 1|, \log x) $ for $|\sigma- 1| \leq 1/\log x$.

Finally, if $1 + 1/\log x \leq \sigma \leq 2$, then $|\sigma -1| \geq 1/\log x$ and $1/|\sigma- 1| \leq \log x$. Hence, $1/|\sigma- 1| \ll (1 + x^{1- \sigma}) \min (1/|\sigma- 1|, \log x)$ for $1 + 1/\log x \leq \sigma \leq 2$.

Now suppose that $|t| \geq 1$ and $\delta \leq \sigma \leq 2$. By taking $x = \tau$ in (1) we find that
\[
\zeta(s) \ll \left| \sum_{n \leq x} n^{-s} \right| + \tau^{1- \sigma} \ll (1 + \tau^{1- \sigma}) \min \left( \frac{1}{|\sigma- 1|}, \log \tau \right)
\] as $1/|s- 1| \leq 1$ due to $|t| \geq 1$, the error term $O(\tau x^{-\sigma}) = O(\tau^{1- \sigma})$ for $x = \tau$, and $\min \left( 1/|\sigma- 1|, \log \tau \right) \gg 1$ for $0 \leq \sigma \leq 2$.
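As a numerical sanity check of (1) (a sketch only, not part of the argument; $\tau \approx |t| + 4$ per the usual convention), two truncations of the formula at different values of $x$ should agree to within the stated error terms:

```python
# Sanity check of the truncated formula (1): for integer x,
#   zeta(s) = sum_{n <= x} n^{-s} + x^{1-s}/(s-1) + O(tau * x^{-sigma}),
# so truncations at two different x must agree within the combined error.
def zeta_trunc(s, x):
    return sum(n ** (-s) for n in range(1, x + 1)) + x ** (1 - s) / (s - 1)

s = 0.5 + 10j                  # sigma = 1/2, t = 10
coarse = zeta_trunc(s, 10**3)  # crude truncation
fine = zeta_trunc(s, 10**5)    # much smaller error; serves as reference
error = abs(coarse - fine)
```

The rigorous error bound $|s| x^{-\sigma}/\sigma$ for each truncation gives $|{\rm coarse} - {\rm fine}| < 0.8$ here; in practice the oscillation of $\{u\}$ makes the difference far smaller.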

Posted in Analytic Number Theory, Riemann zeta function

Bounds for divisor and Euler’s totient function

The divisor function $d(n)$ counts the number of divisors of an integer $n$. It is a multiplicative function and so can be written as
\[
d(n) = \prod_{p^a || n} (a + 1).
\] We will now show that $d(n) \ll_{\varepsilon} n^{\varepsilon}$ for every $\varepsilon > 0$. Let $\varepsilon > 0$ be fixed. Then we have
\[
\frac{d(n)}{n^{\varepsilon}} = \prod_{p^a || n} \frac{a + 1}{p^{a \varepsilon}} \leq \prod_{\scriptstyle p^a || n \atop \scriptstyle p < 2^{1/\varepsilon}} \frac{a + 1}{p^{a \varepsilon}}
\] for if $p \geq 2^{1/{\varepsilon}}$, then $p^{\varepsilon} \geq 2$ and so $p^{a \varepsilon} \geq 2^{a} \geq a + 1$ which gives $(a + 1)/p^{a \varepsilon} \leq 1$. Now observe that
\[
\frac{d(n)}{n^{\varepsilon}} \leq \prod_{\scriptstyle p^a || n \atop \scriptstyle p < 2^{1/\varepsilon}} \frac{a + 1}{2^{a \varepsilon}} \leq \prod_{\scriptstyle p^a || n \atop \scriptstyle p < 2^{1/\varepsilon}} \frac{a + 1}{{a \varepsilon \log 2}}
\] as $2^{a \varepsilon} = e ^{a \varepsilon \log 2} \geq a \varepsilon \log 2$. Finally we have
\[
\frac{d(n)}{n^{\varepsilon}} \leq \prod_{p < 2^{1/\varepsilon}} \frac{2}{\varepsilon \log 2} \leq \left( \frac{2}{\varepsilon \log 2} \right)^{\pi(2^{1/\varepsilon})}.
\] Because the right-hand side depends only on $\varepsilon$ we conclude that $d(n) \ll_{\varepsilon} n^{\varepsilon}$.
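The proof in fact yields the explicit constant $(2/(\varepsilon \log 2))^{\pi(2^{1/\varepsilon})}$, which is easy to test numerically. The following Python sketch checks it for $\varepsilon = 1/2$, where $2^{1/\varepsilon} = 4$ and $\pi(4) = 2$:

```python
import math

def d(n):
    # divisor count by trial division up to sqrt(n)
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

eps = 0.5
C = (2 / (eps * math.log(2))) ** 2  # pi(2^(1/eps)) = pi(4) = 2 primes: 2 and 3
ok = all(d(n) <= C * n ** eps for n in range(1, 10**4 + 1))
```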

One can readily obtain a lower bound for $\varphi(n)$ using the above upper bound for $d(n)$. Note that
\[
\varphi(n) = n\prod_{p \, | \, n} \left(1- \frac{1}{p} \right) \geq \frac{n}{2^{\omega(n)}} \geq \frac{n}{d(n)} \gg_{\varepsilon} n^{1- \varepsilon}
\] as $1- 1/p \geq 1/2$ for every prime $p$ and $d(n) \geq 2^{\omega(n)}$ for every $n \in \mathbb{N}$.

I present another proof of the bound $\varphi(n) \gg_{\varepsilon} n^{1 -\varepsilon}$ which I discovered by chance. This bound is effective in the sense that the constant is explicit. The idea is to show that $\prod_{p \, | \, n} \left( 1 -1/p \right) \gg_{\varepsilon} n^{-\varepsilon}$ for every $\varepsilon > 0$. Taking the logarithm we obtain
\[
\log \prod_{p \, | \, n} \left( 1 -\frac{1}{p} \right) = \sum_{p \, | \, n} \log \left( 1 -\frac{1}{p} \right) = -\sum_{p \, | \, n} \sum_{k = 1}^{\infty} \frac{1}{k p^k} = -\sum_{k = 1}^{\infty} \frac{1}{k} \sum_{p \, | \, n} \frac{1}{p^k}.
\] Now observe that for any $k \in \mathbb{N}$ and $U \geq 1$ we have
\[
\sum_{p \, | \, n} \frac{1}{p^k} \leq \sum_{p \leq U} \frac{1}{p^k} + \frac{\omega(n)}{U^k}.
\] Thus we obtain
\[ \begin{align} \sum_{k = 1}^{\infty} \frac{1}{k} \sum_{p \, | \, n} \frac{1}{p^k} &\leq \sum_{k = 1}^{\infty} \frac{1}{k} \sum_{p \leq U} \frac{1}{p^k} + \omega(n)\sum_{k = 1}^{\infty} \frac{1}{k U^k} \\ &= -\sum_{p \leq U} \log \left( 1- \frac{1}{p}\right)-\omega(n) \log \left( 1- \frac{1}{U} \right). \end{align}
\] Using the inequality $\omega(n) \leq \log_2 n$ we get
\[
\log \prod_{p \, | \, n} \left( 1- \frac{1}{p} \right) \geq \sum_{p \leq U} \log \left( 1- \frac{1}{p} \right) + \frac{\log n}{\log 2} \log \left( 1- \frac{1}{U} \right).
\] If we denote the sum to the right by $C_U$ and $\varepsilon_U = -\frac{1}{\log 2} \log \left( 1- \frac{1}{U} \right)$, then we get
\[
\log \prod_{p \, | \, n} \left( 1- \frac{1}{p} \right) \geq C_U -\varepsilon_U \log n.
\] Finally, taking exponentials we find that
\[
\prod_{p \, | \, n} \left( 1- \frac{1}{p} \right) \geq e^{C_U} n^{-\varepsilon_U}.
\] Since $\varepsilon_U \to 0$ as $U \to \infty$ we obtain the desired conclusion.

One can obtain a stronger lower bound for $\varphi(n)$ without much effort as follows. Observe that
\[\begin{align} \varphi(n)\sigma(n) &= \prod_{p^a || n} \varphi(p^a) \sigma(p^a) = \prod_{p^a || n} p^{a-1} (p- 1) \left( \frac{p^{a + 1}- 1}{p- 1} \right) \\ &= \prod_{p^{a} || n} p^{a- 1}(p^{a + 1}- 1) = n^2 \prod_{p^a || n} \left( 1- \frac{1}{p^{a + 1}} \right). \end{align}
\] Since each of the factors is $\leq 1$ we obtain the inequality $\varphi(n) \sigma(n) \leq n^2$. Moreover, note that
\[
\prod_{p^a || n} \left( 1- \frac{1}{p^{a + 1}} \right) \geq \prod_{p} \left( 1- \frac{1}{p^2} \right) = \frac{1}{\zeta(2)} = \frac{6}{\pi^2}.
\] Thus we have
\[
\frac{6n^2}{\pi^2} \leq \varphi(n) \sigma(n) \leq n^2. \tag{1}
\] By definition of $\sigma(n)$ we have
\[
\sigma(n) = n \sum_{d \, | \, n} \frac{1}{d} \leq n (\log n + 1).
\] This combined with (1) leads to
\[
\varphi(n) \geq \frac{6n^2}{\pi^2 \sigma(n)} \geq \frac{6n}{\pi^2 (\log n + 1)},
\] i.e., $\varphi(n) \gg n/\log n$.
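The sandwich (1) is easy to confirm numerically; here is a short Python sketch that computes $\varphi$ and $\sigma$ from the prime factorization, as in the product formulas above:

```python
import math

def phi_sigma(n):
    # Euler phi and sum-of-divisors sigma via the prime factorization of n
    phi, sigma, m, p = 1, 1, n, 2
    while p * p <= m:
        if m % p == 0:
            a = 0
            while m % p == 0:
                m //= p
                a += 1
            phi *= p ** (a - 1) * (p - 1)
            sigma *= (p ** (a + 1) - 1) // (p - 1)
        p += 1
    if m > 1:  # leftover prime factor
        phi *= m - 1
        sigma *= m + 1
    return phi, sigma

# check 6 n^2 / pi^2 <= phi(n) sigma(n) <= n^2 for small n
ok = all(6 * n * n / math.pi**2 <= p * s <= n * n
         for n in range(1, 3000) for p, s in [phi_sigma(n)])
```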

Posted in Analytic Number Theory, Divisor function, Euler's totient function

Little Picard theorem

In this article we prove the little Picard theorem assuming the existence of a nonconstant holomorphic function $\lambda : \mathbb{C} \backslash \{0, 1\} \to \mathbb{C}$ which satisfies $\text{Re} \, \lambda(z) \leq 0$.

In the proof below we will repeatedly use the following fact: If $f$ is holomorphic on a domain (open and connected subset) $D$ of $\mathbb{C}$ and $K$ is a compact subset of $D$ such that $f(z) = 0$ for infinitely many $z \in K$, then $f \equiv 0$ (the zeros of $f$ then have an accumulation point in $D$, so the identity theorem applies).

Theorem. (Little Picard theorem) If $f$ is a nonconstant entire function, then the image of $f$ must be the entire complex plane minus at most one point.

Proof. Let $f$ be a nonconstant entire function. Assume that there exist two distinct points $\alpha$ and $\beta$ that do not lie in the range of $f$. We rescale $f$ by defining a function $g$ as
\[
g(z) = \frac{f(z) -\alpha}{\beta -\alpha}.
\] Note that $g$ is also an entire function and $0, 1 \not\in g(\mathbb{C})$. It now follows that $\lambda \circ g$ is a constant function: since $\text{Re} \, \lambda \leq 0$, the entire function $e^{\lambda \circ g}$ satisfies $|e^{\lambda \circ g}| \leq 1$, hence is constant by Liouville's theorem, and therefore $\lambda \circ g$ is constant as well. This implies that
\[
(\lambda \circ g)'(z) = \lambda'(g(z)) g'(z) = 0 \tag{1}
for every $z \in \mathbb{C}$. Observe that $g(\overline{\mathbb{D}})$ is a compact subset of $\mathbb{C} \backslash \{0, 1\}$. If $g(\overline{\mathbb{D}})$ is finite, then $g$ must be a constant function and hence so is $f$, a contradiction. Now suppose that $g(\overline{\mathbb{D}})$ is infinite. Furthermore, if $g'(z) = 0$ for infinitely many $z \in \overline{\mathbb{D}}$, then $g' \equiv 0$ and as a result $g$ must be constant. This is again a contradiction as $f$ is nonconstant. We now assume that $g'(z) = 0$ for only finitely many $z \in \overline{\mathbb{D}}$. Due to (1) it follows that $\lambda'(w) = 0$ for infinitely many $w \in g(\overline{\mathbb{D}})$. As a consequence $\lambda' \equiv 0$ and hence $\lambda$ must be constant, a contradiction. Thus we conclude that our initial supposition about $\alpha$ and $\beta$ was false.

Posted in Complex Analysis, Little Picard theorem

A Möbius function formulation of prime number theorem

The prime number theorem states that
\[
\pi(x) \sim \frac{x}{\log x}.
\] It is equivalent to $\psi(x) \sim x$ or $\theta(x) \sim x$. Let $M(x) = \sum_{n \leq x} \mu(n)$. In this article we will show that the prime number theorem is also equivalent to the estimate $M(x) = o(x)$. We will do so by showing that $M(x) = o(x)$ if and only if $\psi(x) \sim x$. The proof of this equivalence is not entirely obvious.

First we define an associated sum
\[
L(x) = \sum_{n \leq x} \mu(n) \log n.
\] It follows by Abel’s summation by parts formula that
\[
L(x) = M(x) \log x- \int_{1}^{x} \frac{M(u)}{u} \, du.
\] Now using the trivial estimate $M(x) \ll x$ we have
\[
L(x) = M(x) \log x + O(x)
\] which we rewrite as
\[
\frac{L(x)}{x \log x} = \frac{M(x)}{x} + O \left( \frac{1}{\log x} \right).
\] Thus we have $M(x) = o(x)$ if and only if $L(x) = o(x \log x)$. Note that
\[
\Lambda(n) = -\sum_{d \, | \, n} \mu(d) \log d.
\] It follows by Möbius inversion that
\[
\mu(n) \log n = -\sum_{qd = n} \mu(d) \Lambda (q).
\] Using this we write $L(x)$ as
\[
L(x) = -\sum_{n \leq x} \sum_{qd = n} \mu(d) \Lambda(q) = -\sum_{qd \leq x} \mu(d) \Lambda(q) = -\sum_{d \leq x} \mu(d) \psi \left( \frac{x}{d} \right).
\] Now suppose that $\psi(x) \sim x$. Let us write $\psi(x) = x(1 + \delta(x))$, where $\delta(x) = o(1)$. We now take $1 < y < x$ to be some quantity that will depend on $x$, the choice of which will be made later. We split the sum for $L(x)$ as
\[
L(x) = -\sum_{n \leq y} \mu(n) \psi \left( \frac{x}{n} \right) -\sum_{y < n \leq x} \mu(n) \psi \left( \frac{x}{n} \right)
\] and bound each term separately. First note that
\[\begin{align} \left|\sum_{n \leq y} \mu(n) \psi \left( \frac{x}{n} \right) \right| &= \left|\sum_{n \leq y} \mu(n) \left( \frac{x}{n} + \psi \left( \frac{x}{n} \right) -\frac{x}{n} \right) \right| \\ &\leq x \left|\sum_{n \leq y} \frac{\mu(n)}{n}\right| + \sum_{n \leq y} \left| \psi \left( \frac{x}{n} \right) -\frac{x}{n} \right|. \end{align}
\] Let $\delta^+(x) = \sup_{u \geq x} |\delta(u)|$. Note that $\delta^+(x) = o(1)$. Using the bound $\left| \sum_{n \leq y} \mu(n)/n \right| \leq 1$ from an earlier post it follows that
\[
\sum_{n \leq y} \mu(n) \psi \left( \frac{x}{n} \right) \ll x + x\sum_{n \leq y} \frac{1}{n} \left| \delta \left( \frac{x}{n} \right) \right| \ll x + x \delta^+ \left( \frac{x}{y} \right) \log y .
\] For the other sum we observe that
\[
\sum_{y < n \leq x} \mu(n) \psi \left( \frac{x}{n} \right) \ll x \psi \left( \frac{x}{y} \right).
\] Taking $y = x/ \log \log x$ we see that
\[
\sum_{n \leq y} \mu(n) \psi \left( \frac{x}{n} \right) \ll x + x \log x \, \delta^{+}(\log \log x),
\] which implies that
\[
\sum_{n \leq y} \mu(n) \psi \left( \frac{x}{n} \right) = o(x \log x)
\] as $\delta^+ (\log \log x) = o(1)$. For the other sum, using the estimate $\psi(x) \ll x$ (which follows from the assumption $\psi(x) \sim x$) we find that
\[
\sum_{y < n \leq x} \mu(n) \psi \left( \frac{x}{n} \right) \ll x \log \log x
\] and so this sum is also $o(x \log x)$. Hence, we have $L(x) = o(x \log x)$.
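The identity $L(x) = -\sum_{d \leq x} \mu(d) \psi(x/d)$ that drives this argument can be verified directly for small $x$; here is a Python sketch with naive implementations of $\mu$, $\Lambda$, and $\psi$:

```python
import math

def mu(n):
    # Möbius function by trial factorization
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            res = -res
        p += 1
    return -res if n > 1 else res

def Lambda(n):
    # von Mangoldt: log p if n is a prime power p^k, else 0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def psi(x):
    return sum(Lambda(n) for n in range(1, int(x) + 1))

x = 300
L = sum(mu(n) * math.log(n) for n in range(1, x + 1))
R = -sum(mu(d) * psi(x // d) for d in range(1, x + 1))
```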

Now suppose that $M(x) = o(x)$. Note that
\[
\psi(x)- \lfloor x \rfloor + 2 \gamma = \sum_{n \leq x}( \Lambda(n) -1(n) + 2 \gamma e(n) ).
\] Using the identities $\Lambda = \mu * \log$, $1 = \mu * d$, and $e = 1 * \mu$ we get
\[\begin{align} \psi(x)-\lfloor x \rfloor + 2 \gamma &= \sum_{n \leq x} \sum_{qd = n} \mu(d) (\log q -d(q) + 2 \gamma ) \\ &= \sum_{dq \leq x} \mu(d) \left( \log q -d(q) + 2 \gamma \right) \\ &= \sum_{q \leq x} M \left( \frac{x}{q} \right) (\log q -d(q) + 2 \gamma). \end{align}
\] We write $M(x) = x \epsilon(x)$, where $\epsilon (x) = o(1)$. Again we take $1 < y < x$ to be some quantity depending on $x$ which will be chosen later. We now split the sum as
\[ \psi(x) -\lfloor x \rfloor + 2 \gamma = \sum_{q \leq y} M \left( \frac{x}{q} \right) (\log q- d(q) + 2 \gamma) + \sum_{\scriptstyle d \leq x/y \atop \scriptstyle y < q \leq x/d } \mu(d) (\log q -d(q) + 2\gamma).
\] Note that we can bound the first sum as
\[ \left| \sum_{q \leq y} M \left( \frac{x}{q} \right) (\log q- d(q) + 2 \gamma) \right| \leq x\sum_{q \leq y} \frac{1}{q} \left|\epsilon \left( \frac{x}{q} \right)\right| |\log q -d(q) + 2 \gamma|. \] Let $\epsilon^+ (x) = \sup_{u \geq x} |\epsilon(u)|$. Note that $\epsilon^+ (x) = o(1)$. It then follows that the first sum is
\ll x y \log y \, \epsilon^+ \left( \frac{x}{y} \right) \ll xy^2 \, \epsilon^+ \left( \frac{x}{y} \right).
\] Let
\[\Delta(x) = \sum_{n \leq x} (d(n)- \log n - 2 \gamma),
\] Observe that for the other sum we have
\[\begin{align} \left| \sum_{\scriptstyle d \leq x/y \atop \scriptstyle y < q \leq x/d } \mu(d) (\log q -d(q) + 2\gamma) \right| &\leq \sum_{d \leq x/y} \left| \sum_{y < q \leq x/d} \log q- d(q) + 2 \gamma \right| \\ &= \sum_{d \leq x/y} |\Delta(x/d) -\Delta(y)| \\ &\ll \sum_{d \leq x/y} \left( \frac{\sqrt{x}}{\sqrt{d}} + \sqrt{y} \right) \ll \frac{x}{\sqrt{y}}.
\end{align}\] It is easy to see that $\epsilon^+$ is a non-increasing function and $\epsilon^+(x) > 0$ for every $x \geq 1$: if $\epsilon^+(M) = 0$ for some $M \geq 1$, then $\epsilon(x) = 0$ for $x \geq M$, which in turn implies that $\mu(n) = 0$ for all $n > M$, a contradiction. We now take $y = \min \left( \sqrt{x}, \epsilon^+(\sqrt{x})^{-1/3} \right)$ and note that
\[
xy^2 \, \epsilon^+ \left( \frac{x}{y} \right) \leq x y^2 \, \epsilon^+(\sqrt{x}) \leq x \, \epsilon^+(\sqrt{x})^{1/3} = o(x).
\] Since $\epsilon^+(\sqrt{x}) = o(1)$ we obtain that $y \to \infty$ as $x \to \infty$ and so $x/\sqrt{y} = o(x)$. Thus we have shown that
\[
\psi(x) -\lfloor x \rfloor + 2 \gamma = o(x),
\] which implies $\psi(x) \sim x$.


Posted in Analytic Number Theory, Möbius function, Prime number theorem

Lifting of units and Menon’s identity

Let $d$ be a divisor of $n$. It is natural to ask the following question: Does a unit $a$ modulo $d$ lift to a unit modulo $n$, i.e., if $a$ is a unit modulo $d$, does there exist a unit $b$ modulo $n$ such that $b \equiv a \pmod{d}$? Note that if $a$ is a unit modulo $n$, then it must also be a unit modulo $d$ since $(a, n) = 1$ implies $(a, d) = 1$. Thus $a$ being a unit modulo $d$ is a necessary condition. Consider the canonical projection map $\pi : \mathbb{Z}/n\mathbb{Z} \to \mathbb{Z}/d\mathbb{Z}$ defined as $\pi(a \pmod{n}) = a \pmod{d}$. It is easy to verify that $\pi$ is a ring homomorphism which maps the identity element to the identity element. Thus $\pi$ induces a group homomorphism $\pi^{\times} : (\mathbb{Z}/n\mathbb{Z})^{\times} \to (\mathbb{Z}/d\mathbb{Z})^{\times}$. The question of lifting of units can be reformulated as: Is the induced map $\pi^{\times}$ surjective? We now show that the answer to this question is yes using the Chinese Remainder Theorem.

Let $b \in (\mathbb{Z}/d\mathbb{Z})^{\times}$ and let $q$ be the product of the prime divisors of $n$ that do not divide $d$. Then $(q, d) = 1$ and so by the Chinese Remainder Theorem there exists $a$ such that
\[a \equiv b \pmod{d} \quad \text{and} \quad a \equiv 1 \pmod{q}. \] It now follows that $(a, n) = 1$ and so $a \in (\mathbb{Z}/n\mathbb{Z})^{\times}$ and $\pi^{\times}(a \pmod{n}) = b \pmod{d}$. Hence, $\pi^{\times}$ is surjective and consequently
\[
(\mathbb{Z}/n\mathbb{Z})^{\times} / \ker \pi^{\times} \cong (\mathbb{Z}/d\mathbb{Z})^{\times}.
\] As a result $|\ker \pi^{\times}| = \varphi(n)/\varphi(d)$. We can write this result in summation notation as
\[
\sum_{\scriptstyle a = 1 \atop {\scriptstyle (a, n) = 1 \atop \scriptstyle a \equiv 1 \pmod{d}}}^{n} 1 = \frac{\varphi(n)}{\varphi(d)}.
\]
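A brute-force Python check of this counting formula over all divisor pairs $d \mid n$ for small $n$:

```python
import math

def phi(n):
    # Euler's totient by direct count
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

ok = True
for n in range(1, 80):
    for d in (d for d in range(1, n + 1) if n % d == 0):
        # count units mod n that are congruent to 1 mod d
        count = sum(1 for a in range(1, n + 1)
                    if math.gcd(a, n) == 1 and a % d == 1 % d)
        ok = ok and count * phi(d) == phi(n)
```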

As an application we evaluate the sum
\[
\sum_{\scriptstyle a =1 \atop \scriptstyle (a, n) = 1}^{n} (a- 1, n).
\] Note that the above sum is
\[
\sum_{\scriptstyle a =1 \atop \scriptstyle (a, n) = 1}^{n} \sum_{d \, | \, (a- 1, n)} \varphi(d) = \sum_{\scriptstyle a =1 \atop \scriptstyle (a, n) = 1}^{n} \sum_{\scriptstyle d \, | \, n \atop \scriptstyle d \, | \, a- 1} \varphi(d).
\] Interchanging the order of summation we find that the sum is
\[
\sum_{d \, | \, n} \varphi(d) \sum_{\scriptstyle a = 1 \atop {\scriptstyle (a, n) = 1 \atop \scriptstyle a \equiv 1 \pmod{d}}}^{n} 1 = \sum_{d \, | \, n} \varphi(d) \frac{\varphi(n)}{\varphi(d)} = \varphi(n) d(n).
\]
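This is Menon's identity, $\sum_{(a, n) = 1} (a - 1, n) = \varphi(n) d(n)$, and it admits a direct numerical verification in Python:

```python
import math

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

# Menon's identity: sum of gcd(a - 1, n) over units a mod n equals phi(n) d(n)
ok = all(sum(math.gcd(a - 1, n) for a in range(1, n + 1) if math.gcd(a, n) == 1)
         == phi(n) * d(n)
         for n in range(1, 200))
```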

Posted in Euler's totient function

Problems about Ramanujan’s sum

Below we discuss some problems about Ramanujan's sum.

Problem 1. Let us denote $e(\alpha) = e^{2 \pi i \alpha}$. Show that
\[
\frac{1}{q}\sum_{a = 1}^{q} e \left( \frac{an}{q} \right) =
\begin{cases}
1 & \text{if $q \, | \, n$}, \\
0 & \text{otherwise}.
\end{cases}
\]

Solution. Note that if $q \, | \, n$, then $e(an/q) = 1$ for every $1 \leq a \leq q$ and so we have
\[
\frac{1}{q} \sum_{a = 1}^{q} e(an/q) = \frac{1}{q} \sum_{a = 1}^{q} 1 = 1.
\] Now suppose that $q \nmid n$. Then we have $e(n/q) \neq 1$ and so
\[
\frac{1}{q} \sum_{a = 1}^{q} e(an/q) = \frac{1}{q} \sum_{a = 1}^{q} e(n/q)^a = \frac{1}{q} \left( \frac{e(n/q)^{q+1} -1}{e(n/q) -1} -1 \right) = 0
\]as $e(n/q)^{q+1} = e(n/q)$.

Problem 2. The Ramanujan sum $c_q(n)$ is defined as
\[
c_q(n) = \sum_{\scriptstyle a = 1 \atop \scriptstyle (a, q) = 1}^{q} e \left( \frac{an}{q} \right).
\] Show that
\[
c_q(n) = \sum_{d \, | \, (n, q)} d \mu\left( \frac{q}{d} \right).
\] Deduce that
\[
\mu(q) = \sum_{\scriptstyle a =1 \atop \scriptstyle (a, q) = 1}^{q} e \left( \frac{a}{q} \right).
\]

Solution. Observe that
\[ \begin{align} c_q(n) &= \sum_{\scriptstyle a = 1 \atop \scriptstyle (a, q) = 1}^{q} e \left(\frac{an}{q} \right) = \sum_{a = 1}^{q} e\left(\frac{an}{q} \right) \sum_{d \, | \, (a, q)} \mu(d) = \sum_{a = 1}^{q} e \left(\frac{an}{q} \right) \sum_{\scriptstyle d \, | \, a \atop \scriptstyle d \, | \, q} \mu(d). \end{align}
\]Changing the order of summation we get
\[ \begin{align} c_q(n) &= \sum_{d \, | \, q} \mu(d) \sum_{\scriptstyle a = 1 \atop \scriptstyle d \, | \, a}^{q} e\left(\frac{an}{q} \right) = \sum_{d \, | \, q} \mu(d) \sum_{r = 1}^{q/d} e\left(\frac{rdn}{q} \right) = \sum_{d \, | \, q} \mu(d) \sum_{r = 1}^{q/d} e\left(\frac{rn}{q/d} \right). \end{align} \] Thus we can rewrite $c_q(n)$ as
\[
c_q(n) = \sum_{d \, | \, q} \mu \left( \frac{q}{d} \right) \sum_{r = 1}^{d} e \left( \frac{rn}{d} \right).
\] Using the identity $\frac{1}{q}\sum_{a = 1}^{q} e(an/q) = \mathbf{1}_{q \, | \, n}$ we end up with
\[
c_q(n) = \sum_{\scriptstyle d \, | \, q \atop \scriptstyle d \, | \, n} d\mu \left( \frac{q}{d} \right) = \sum_{d \, | \, (q, n)} d \mu \left( \frac{q}{d} \right).
\] Finally, note that
\[
\mu(q) = c_q(1) = \sum_{\scriptstyle a = 1 \atop \scriptstyle (a, q) = 1}^{q} e \left( \frac{a}{q} \right).
\]

Problem 3. Let $\ell =(n, q)$. Show that
\[
c_q(n) = \mu \left( \frac{q}{\ell} \right) \varphi(q) \varphi \left( \frac{q}{\ell} \right)^{-1}.
\]

Solution. Note that
\[
c_q(n) = \sum_{d \, | \, \ell} d \mu \left( \frac{q}{d} \right) = \sum_{d e = \ell} d \mu \left( \frac{qe }{\ell} \right).
\] Let $q_1 = q/\ell$. Then observe that $\mu(q_1 e) = 0$ if $(q_1, e) \neq 1$ and $\mu(q_1 e) = \mu(q_1) \mu(e)$ if $(q_1, e) = 1$. Using this we get
\[
c_q(n) = \sum_{d e = \ell} d \mu(q_1 e) = \sum_{\scriptstyle de = \ell \atop \scriptstyle (q_1, e) = 1} d \mu(q_1) \mu(e) = \mu(q_1) \sum_{\scriptstyle de = \ell \atop \scriptstyle (q_1, e) = 1} d \mu(e) = \mu(q_1) \ell \sum_{\scriptstyle de = \ell \atop \scriptstyle (q_1, e) = 1} \frac{\mu(e)}{e}.
\] It is easy to see that
\[\begin{align} \sum_{\scriptstyle e \, | \, \ell \atop \scriptstyle (e, q_1) = 1} \frac{\mu(e)}{e} &= \prod_{\scriptstyle p \, | \, \ell \atop \scriptstyle p \, \nmid \, q_1} \left( 1 -\frac{1}{p} \right) = \prod_{\scriptstyle p \, | \, q \atop \scriptstyle p \, \nmid \, q_1} \left( 1 -\frac{1}{p} \right) \\ &= \prod_{p \, | \, q} \left( 1 -\frac{1}{p} \right) \cdot \prod_{p \, | \, q_1} \left( 1- \frac{1}{p} \right)^{-1} \ \\ &= \frac{\varphi(q)}{q} \cdot \frac{q_1}{ \varphi(q_1)} = \frac{\varphi(q)}{\ell \varphi(q_1)}. \end{align}
\] Hence we have
\[
c_q(n) = \mu(q_1) \ell \cdot \frac{\varphi(q)}{\ell \varphi(q_1)} = \mu(q_1) \varphi(q) \varphi(q_1)^{-1}.
\]
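A Python sketch comparing this closed form against the defining exponential sum for small $q$ and $n$:

```python
import cmath
import math

def mu(n):
    # Möbius function by trial factorization
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def phi(n):
    return sum(1 for a in range(1, n + 1) if math.gcd(a, n) == 1)

def c(q, n):
    # Ramanujan sum straight from the definition as an exponential sum
    s = sum(cmath.exp(2j * math.pi * a * n / q)
            for a in range(1, q + 1) if math.gcd(a, q) == 1)
    return round(s.real)

# check c_q(n) * phi(q/l) == mu(q/l) * phi(q) with l = (n, q)
ok = all(c(q, n) * phi(q // math.gcd(n, q)) == mu(q // math.gcd(n, q)) * phi(q)
         for q in range(1, 50) for n in range(1, 50))
```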

Problem 4. Prove that
\[
\sigma(n) = \frac{\pi^2 n}{6} \sum_{q = 1}^{\infty} \frac{c_q(n)}{q^2}.
\]

Solution. We rewrite $\sigma(n)$ as
\[
\sigma(n) = n\sum_{d \, | \, n} \frac{1}{d} = n\sum_{d = 1}^{n} \frac{1}{d} \left( \frac{1}{d} \sum_{a = 1}^{d} e(an/d) \right) = n \sum_{d = 1}^{n} \frac{1}{d^2} \sum_{a = 1}^{d} e(an/d)
\] since $\frac{1}{d}\sum_{a = 1}^{d} e(an/d)$ is the characteristic function of the divisors of $n$. Because $\sum_{a = 1}^{d} e(an/d) = 0$ for $d > n$, we can extend the above finite sum to an infinite sum as
\[
\sigma(n) = n \sum_{d = 1}^{\infty} \frac{1}{d^2} \sum_{a = 1}^{d} e(an/d). \tag{1}
\] Observe that
\[\begin{align} \sum_{a = 1}^{d} e(an/d) &= \sum_{q \, | \, d} \sum_{\scriptstyle a = 1 \atop \scriptstyle (a, d) = q}^{d} e(an/d) = \sum_{q \, | \, d} \sum_{\scriptstyle r = 1 \atop \scriptstyle (r, d/q) = 1}^{d/q} e(rqn/d) \\ &= \sum_{q \, | \, d} \sum_{\scriptstyle r = 1 \atop \scriptstyle (r, d/q) = 1}^{d/q} e(rn/(d/q)) = \sum_{q \, | \, d} c_{d/q}(n) = \sum_{q \, | \, d} c_q(n). \end{align}
\] Substituting this into (1) we obtain
\[
\sigma(n) = n \sum_{d = 1}^{\infty} \frac{1}{d^2} \sum_{q \, | \, d} c_q(n) = n \sum_{d = 1}^{\infty} \sum_{q \, | \, d} \frac{c_q(n)}{d^2}.
\] Changing the order of summation, we get
\[\begin{align} \sigma(n) &= n \sum_{q = 1}^{\infty} \sum_{\scriptstyle d = 1 \atop \scriptstyle q \, | \, d}^{\infty} \frac{1}{d^2} c_q(n) = n \sum_{q = 1}^{\infty} \sum_{\ell = 1}^{\infty} \frac{c_q(n)}{(q \ell)^{2}} \\ &= n \sum_{q = 1}^{\infty} \sum_{\ell = 1}^{\infty} \frac{1}{\ell^{2}} \frac{c_q(n)}{q^{2}} = n \left( \sum_{\ell = 1}^{\infty} \frac{1}{\ell^{2}} \right) \sum_{q = 1}^{\infty} \frac{c_q(n)}{q^2} \\ &= \frac{n \pi^2}{6} \sum_{q = 1}^{\infty} \frac{ c_q(n)}{q^{2}}, \end{align}
\] where we use $\sum_{k = 1}^{\infty} 1/k^2 = \pi^2/6$ in the final equality. The change of the order of summation is justified by the fact that $|c_q(n)| \leq \sigma(n) \ll_n 1$, so the double sum $\sum_{d} \sum_{q \, | \, d} |c_q(n)|/d^2 \leq \sigma(n) \sum_{d} d(d)/d^2$ converges.
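Numerically the series converges rather slowly, but a modest truncation already lands close to $\sigma(n)$. A Python sketch for $n = 12$, computing $c_q(n)$ via the divisor-sum formula of Problem 2:

```python
import math

def mu(n):
    # Möbius function by trial factorization
    if n == 1:
        return 1
    res, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            res = -res
        p += 1
    return -res if n > 1 else res

def c(q, n):
    # c_q(n) = sum over d | (q, n) of d * mu(q/d)
    g = math.gcd(q, n)
    return sum(d * mu(q // d) for d in range(1, g + 1) if g % d == 0)

n = 12  # sigma(12) = 1 + 2 + 3 + 4 + 6 + 12 = 28
approx = (math.pi**2 * n / 6) * sum(c(q, n) / q**2 for q in range(1, 10001))
```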

Posted in Analytic Number Theory, Divisor function, Ramanujan's sum

Chebyshev functions

The Chebyshev $\psi$-function and the Chebyshev $\theta$-function are defined as
\[
\psi(x) = \sum_{p^k \leq x} \log p, \qquad \theta(x) = \sum_{p \leq x} \log p.
We can rewrite $\psi(x)$ in terms of the von Mangoldt function as
\[
\psi(x) = \sum_{n \leq x} \Lambda(n).
\] The Chebyshev functions are related to each other via Möbius inversion as shown in the following result.

Theorem. For $x > 0$ we have
\[\begin{align}
\psi(x) &= \sum_{k} \theta(x^{1/k}), \tag{1} \\
\theta(x) &= \sum_{k} \mu(k) \psi(x^{1/k}) \tag{2}
\end{align}\]

Proof. Note that
\[
\psi(x) = \sum_{p^k \leq x} \log p = \sum_{k} \sum_{p \leq x^{1/k}} \log p = \sum_{k} \theta(x^{1/k}).
\] The identity (2) follows either by substituting (1) for the sum or by using divisor sum of $\mu$ as
\[\begin{align} \theta(x) &= \sum_{p \leq x} \log p = \sum_{p^k \leq x} \log p \sum_{d | k} \mu(d) = \sum_{d} \mu(d) \sum_{\scriptstyle p^k \leq x \atop \scriptstyle d | k} \log p \\ &= \sum_{d} \mu(d) \sum_{p^{d\ell} \leq x} \log p = \sum_{d} \mu(d) \sum_{p^{\ell} \leq x^{1/d}} \log p = \sum_{d} \mu(d) \psi(x^{1/d}). \end{align}\]

Theorem. For $x \geq 2$ we have
\[
\psi(x) = \theta(x) + O( \sqrt{x} \log x). \tag{3}
\]

Proof. Note that
\[
\psi(x) = \sum_{k} \theta(x^{1/k}) = \sum_{k \leq \log_2 x} \theta(x^{1/k})
\] as $x^{1/k} < 2$ for $k > \log_2 x$ in which case $\theta(x^{1/k}) = 0$. Now using the crude estimate $\theta(x) \ll x \log x$ we see that
\[
\psi(x) -\theta(x) = \sum_{2 \leq k \leq \log_2 x} \theta(x^{1/k}) \ll \sum_{2 \leq k \leq \log_2 x} x^{1/k} \log x^{1/k} \ll \sqrt{x} \log x,
\] where the last estimate follows as the first term is of order $\sqrt{x} \log x$ and the sum of the rest of the terms is $\ll x^{1/3} \log^2 x $.

The Chebyshev functions are closely related to the prime counting function $\pi(x)$. For instance, the prime number theorem
\[
\pi(x) \sim \frac{x}{\log x}
\] is equivalent to both $\psi(x) \sim x$ and $\theta(x) \sim x$. The equivalence of the latter two is obvious from the estimate (3). Note that by Abel’s summation by parts formula we have
\[
\theta(x) = \pi(x) \log x -\int_{2}^{x} \frac{\pi(u)}{u} \, du.
\] If prime number theorem holds, then we have
\[
\int_{2}^{x} \frac{\pi(u)}{u} \, du \ll \int_{2}^{x} \frac{du}{\log u} \ll \frac{x}{ \log x}
\] and so
\[
\theta(x) = \pi(x) \log x + O \left( \frac{x}{\log x} \right).
\] This now immediately leads to $\theta(x) \sim x$. As for the other direction we can write $\pi$ in terms of $\theta$ as
\[
\pi(x) = \frac{\theta(x)}{\log x} + \int_{2}^{x} \frac{\theta(u)}{u (\log u)^2} \, du.
\] If we now assume $\theta(x) \sim x$, then we have
\[
\int_{2}^{x} \frac{\theta(u)}{u (\log u)^2} \, du \ll \int_{2}^{x} \frac{du}{(\log u)^2} \ll \frac{x}{(\log x)^2}
\] and hence
\[
\pi(x) = \frac{\theta(x)}{\log x} + O\left( \frac{x}{(\log x)^2} \right).
\] The prime number theorem now follows immediately.

We now show that $\theta(x) \ll x$. The proof makes use of the properties of the central binomial coefficient. First we note that
\[
\prod_{n < p \leq 2n} p \, | \, \binom{2n}{n}
\] and so in particular we have $\prod_{n < p \leq 2n} p \leq 4^n$. Taking the logarithm we get
\[
\theta(2n) -\theta(n) \leq 2n \log 2.
\] Now suppose that $2^{m} \leq n < 2^{m + 1}$. Then we have
\[\begin{align} \theta(n) \leq \theta(2^{m + 1}) &= \sum_{k = 0}^{m} (\theta(2^{k + 1}) -\theta(2^k)) \leq \sum_{k = 0}^{m} 2^{k + 1} \log 2 = (2^{m + 2} -2) \log 2. \end{align}
\] Finally note that $2^{m + 2} = 4 \cdot 2^m \leq 4n$. Thus we have $\theta(n) \leq 4n \log 2$ for every $n \in \mathbb{N}$. If $x \geq 1$, then $\theta(x) = \theta(\lfloor x \rfloor) \leq 4 \lfloor x \rfloor \log 2 \leq 4x \log 2$. Hence, we obtain the estimate $\theta(x) \ll x$. This coupled with the estimate (3) yields $\psi(x) \ll x$.
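The explicit bound $\theta(x) \leq 4x \log 2$ just obtained can be tested directly:

```python
import math

def theta(x):
    # sum of log p over primes p <= x (primality by trial division)
    total = 0.0
    for n in range(2, x + 1):
        if all(n % p for p in range(2, math.isqrt(n) + 1)):
            total += math.log(n)
    return total

ok = all(theta(x) <= 4 * x * math.log(2) for x in (10, 100, 1000, 5000))
```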

Using the bound $\psi(x) \ll x$ we now obtain some estimates involving the von Mangoldt function and primes. First observe that by our all-time favorite Abel's summation by parts formula we have
\[
\sum_{n \leq x} \log n = \lfloor x \rfloor \log x -\int_{1}^{x} \frac{\lfloor u \rfloor}{u} \, du = x \log x -x + O(\log x). \tag{4}
\] But note that we can write $\log$ in terms of $\Lambda$ to get
\[
\sum_{n \leq x} \log n = \sum_{n \leq x} \sum_{d | n} \Lambda(d) = \sum_{n \leq x} \Lambda(n) \left\lfloor \frac{x}{n} \right\rfloor = x \sum_{n \leq x} \frac{\Lambda(n)}{n } + O (x),
\] where we use the estimate $\psi(x) \ll x$ to get the error term. Using the estimate (4) we find that
\[
\sum_{n \leq x} \frac{\Lambda(n)}{n} = \log x + O(1).
\] Since higher prime powers do not contribute much we should have
\[
\sum_{p \leq x} \frac{\log p}{p} = \log x + O(1). \tag{5}
\] We can justify this by noting that
\[
\sum_{\substack{p^k \leq x \\ k \geq 2}} \frac{\log p}{p^k} \leq \sum_{p} \log p \sum_{k \geq 2} \frac{1}{p^k} = \sum_{p} \frac{\log p}{p(p -1)},
\] where the series to the right converges.
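Estimate (5) says the sum tracks $\log x$ to within a bounded error (in fact the error tends to a constant). A quick check with a sieve:

```python
import math

def primes_upto(x):
    # simple sieve of Eratosthenes
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

# |sum_{p <= x} log p / p  -  log x| stays bounded as x grows
errs = [abs(sum(math.log(p) / p for p in primes_upto(10**k)) - math.log(10**k))
        for k in (3, 4, 5)]
```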

Theorem. If the limit
\[
\lim_{x \to \infty} \frac{\pi(x) \log x}{x}
\] exists, then it must be $1$.

Proof. Using Abel’s summation by parts formula we obtain
\[
\sum_{p \leq x} \frac{\log p}{p} = \frac{\pi(x) \log x}{x} -\int_{2}^{x} \pi(u) \left( \frac{1- \log u}{u^2} \right) \, du.
\] Employing the estimate (5) we see that
\[
\log x + O(1) = \frac{\pi (x) \log x}{x} + \int_{2}^{x} \frac{\pi(u) \log u}{u} \left(\frac{1}{u}- \frac{1}{u \log u} \right) \, du.
\] Hence we obtain
\[
\int_{2}^{x} \frac{\pi(u) \log u}{u} \left(\frac{1}{u} -\frac{1}{u \log u} \right) \, du = \log x + O(1) \tag{6}
\] as $\pi(x) (\log x)/x \ll 1$ by assumption. Now let
\[
L = \lim_{x \to \infty} \frac{\pi(x)\log x }{x}
\] and let $\varepsilon > 0$ be fixed. Let $M > 1$ be such that $\pi(x) (\log x)/x < L + \varepsilon$ for $x > M$. Note that
\[\begin{align} \int_{2}^{M} \frac{\pi(u) \log u}{u} \left(\frac{1}{u} -\frac{1}{u \log u} \right) \, du \ll 1
\end{align}\] as $M$ is fixed and
\[
\int_{M}^{x} \frac{\pi(u) \log u}{u} \left(\frac{1}{u} -\frac{1}{u \log u} \right) \, du \leq (L + \varepsilon) \log x.
\] Hence, we have
\[
\int_{2}^{x} \frac{\pi(u) \log u}{u} \left(\frac{1}{u} -\frac{1}{u \log u} \right) \, du \leq (L + \varepsilon) \log x + O(1),
\] where the implicit constant depends on $\varepsilon$. Combining this with (6) we immediately deduce that $L \geq 1$. Similarly we have
\[
\int_{2}^{x} \frac{\pi(u) \log u}{u} \left(\frac{1}{u} -\frac{1}{u \log u} \right) \, du \geq (L- \varepsilon) \log x + O(1).
\] Again combining this with (6) it follows that $L \leq 1$. Thus we conclude that $L = 1$.

Posted in Chebyshev functions, Von Mangoldt function

Dirichlet series

A Dirichlet series is a series of the form
\[
\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}.
\] It is a general theme in analytic number theory to study a sequence (an arithmetic function) by means of its Dirichlet series. By studying analytic properties of the Dirichlet series $\alpha(s)$ we can glean information about the sequence $\{a_n\}$. The theory of Dirichlet series parallels the theory of power series in many ways. We highlight two such similarities below:

  • The points where a power series converges are separated from the points where it does not by a circle, i.e., if $P(z) = \sum_{n = 1}^{\infty} a_n z^n$ is a power series, then there exists a radius of convergence $r_c \in [0, \infty]$ such that $P(z)$ converges for $|z| < r_c$ and diverges for $|z| > r_c$. As for the Dirichlet series, the points where it converges are separated from the points it does not by a vertical line, i.e., given a Dirichlet series $\alpha(s)$ there exists a $\sigma_c \in [-\infty, \infty]$ which we call the abscissa of convergence such that $\alpha(s)$ converges for $\sigma > \sigma_c$ and diverges for $\sigma < \sigma_c$.
  • Just as a power series is analytic inside its disc of convergence, a Dirichlet series is analytic to the right of its abscissa of convergence.

We also note two key differences between power series and Dirichlet series.

  • A power series converges absolutely inside its disc of convergence but the analogous statement does not hold in the case of Dirichlet series; if $\sigma > \sigma_c$, then the Dirichlet series $\alpha(s)$ need not converge absolutely. There is however a vertical line $\sigma = \sigma_a$ which separates the points where $\alpha(s)$ converges absolutely from those points where it does not. We shall see later that $\sigma_a$ and $\sigma_c$ cannot be too far apart.
  • An important property of power series is that they converge until a singularity is encountered. Precisely speaking, if $r_c$ is the radius of convergence of a power series $P(z) = \sum_{n = 0}^{\infty} a_n z^n$, then $P(z)$ cannot be analytically continued to an open set containing the closed disc of radius $r_c$. This fails however for Dirichlet series, i.e., a Dirichlet series $\alpha(s)$ can possibly be analytically continued to the left of the line $\sigma = \sigma_c$ and in some cases to the entire complex plane. We will later see that if however $a_n \geq 0$ for every (sufficiently large) $n$, then $\alpha(s)$ must have a singularity at $\sigma_c$.

It is easy to see that there exists a $\sigma_a \in [-\infty, \infty]$ such that $\alpha(s)$ converges absolutely for $\sigma > \sigma_a$ and does not converge absolutely for $\sigma < \sigma_a$. Note that if $\alpha(s)$ converges absolutely at the point $s = s_0$, then for $\sigma \geq \sigma_0$ we have
\[
\sum_{n = 1}^{\infty} |a_n n^{-s}| = \sum_{n = 1}^{\infty} |a_n| n^{- \sigma} \leq \sum_{n = 1}^{\infty} |a_n|n^{-\sigma_0} = \sum_{n = 1}^{\infty} |a_n n^{-s_0}|.
\] Now $\sigma_a$ is just the infimum of all $\sigma \in \mathbb{R}$ for which $\alpha(\sigma)$ converges absolutely. Note that if $a_n \ll 1$, then $\sigma_a \leq 1$.

We now show that the tail of a Dirichlet series decays exponentially in $\sigma$. Observe that if $N \in \mathbb{N}$ and $\sigma_0 > \sigma_a$ is fixed, then for $\sigma \geq \sigma_0$ we have
\[
\left| \sum_{n \geq N} a_n n^{-s} \right| \leq \sum_{n \geq N} |a_n| n^{-\sigma} = \sum_{n \geq N} |a_n| n^{-\sigma_0} n^{\sigma_0 - \sigma} \leq N^{\sigma_0 - \sigma} \sum_{n =1}^{\infty} |a_n| n^{-\sigma_0}.
\] We can write this concisely as
\[
\sum_{n = N}^{\infty} a_n n^{-s} \ll_{N} N^{-\sigma}
\] uniformly for $\sigma \geq \sigma_0$, where the implied constant depends on $N$ and $\sigma_0$. An interesting consequence of this bound is that the first coefficient of a Dirichlet series is uniquely determined. This is because
\[
a_1 = \lim_{\sigma \to \infty} \alpha(\sigma).
\] By induction it is not difficult to show that all of the coefficients of a Dirichlet series are uniquely determined, but we prove this in a slightly different and more general way as follows:

Theorem. Let $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ and $\beta(s) = \sum_{n = 1}^{\infty} b_n n^{-s}$ be Dirichlet series both of which converge absolutely for $\sigma > \sigma_0$. Let $\{s_k\}_{k = 1}^{\infty}$ be a sequence of complex numbers with $\sigma_k = \text{Re}(s_k) > \sigma_0$ for every $k$ and $\sigma_k \to \infty$ as $k \to \infty$. If $\alpha(s_k) = \beta(s_k)$ for every $k$, then $a_n = b_n$ for every $n$.

Proof. Let $c_n = a_n - b_n$. Then $\gamma(s) = \sum_{n = 1}^{\infty} c_n n^{-s}$ converges absolutely for $\sigma > \sigma_0$ and $\gamma(s_k) = 0$ for every $k$. Our goal is to show that $c_n = 0$ for every $n$. Suppose for the sake of contradiction that this is not the case and take $N$ to be the smallest $n$ such that $c_n \neq 0$. Now note that if $\gamma(s) = 0$, then
\[
c_N = -N^s \sum_{n > N} c_n n^{-s}.\] Without loss of generality assume that $\sigma_0 > \sigma_a(\gamma)$ (for if $\sigma_0 = \sigma_a(\gamma)$, then we can take $\sigma_0$ to be slightly bigger). Then it follows from the above result about the decay of the tail of a Dirichlet series that if $\gamma(s) = 0$ and $\sigma \geq \sigma_0$, then
\[
|c_N| \leq N^{\sigma} (N +1)^{\sigma_0 - \sigma} \sum_{n = 1}^{\infty} |c_n|n^{-\sigma_0} \ll \left( \frac{N}{N + 1} \right)^{\sigma},
\] where the implicit constant depends on $\sigma_0$ and $N$ but is independent of $s$. Since $\gamma(s_k) = 0$ for every $k$ we have
\[
|c_N| \ll \left( \frac{N}{N + 1} \right)^{\sigma_k}.
\] The quantity on the right tends to $0$ as $k \to \infty$, and so we conclude that $c_N = 0$, a contradiction. Thus we must have $c_n = 0$ for every $n$.
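The limit formula $a_1 = \lim_{\sigma \to \infty} \alpha(\sigma)$ from earlier can also be watched numerically. A minimal sketch in Python, with an arbitrary illustrative coefficient sequence (my choice, not taken from the discussion above):

```python
def alpha(a, sigma, N=10_000):
    """Truncated Dirichlet series sum_{n <= N} a(n) n^{-sigma}."""
    return sum(a(n) * n ** (-sigma) for n in range(1, N + 1))

# An arbitrary bounded coefficient sequence with a_1 = 2.
a = lambda n: 2.0 if n == 1 else 1.0 / n

for sigma in (2, 5, 10, 20):
    print(sigma, alpha(a, sigma))
# the printed values approach a_1 = 2 as sigma grows
```

The tail is dominated by the $n = 2$ term, so the error shrinks roughly like $2^{-\sigma}$, matching the bound $\ll_N N^{-\sigma}$ above.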

It follows from the above result that if $\{a_n\}$ is not identically zero and $\sigma_a < \infty$, then there exists a half plane $\sigma > \sigma_0$ in which the Dirichlet series $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ does not vanish. We now show that the sequence $\{a_n\}$ of coefficients of a Dirichlet series $\alpha(s)$ is completely determined by the values of $\alpha(s)$ along a vertical line $\sigma = \sigma_0$ to the right of the abscissa of absolute convergence ($\sigma_0 > \sigma_a$). That $\{a_n\}$ is determined uniquely by the values of $\alpha(s)$ along such a line already follows from the previous result together with analytic continuation; the next theorem gives an explicit formula.

Theorem. Let $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ be a Dirichlet series with abscissa of absolute convergence $\sigma_a$, and set $a_x = 0$ for $x > 0$ not an integer. Then we have
\[
a_x = \lim_{T \to \infty} \frac{1}{2 T} \int_{-T}^{T} \alpha(\sigma + it) x^{\sigma + it} \, dt
\] for $\sigma > \sigma_a$.

Proof. Note that
\[
\int_{-T}^{T} \alpha(\sigma + it) x^{\sigma + it} \, dt = x^{\sigma} \int_{-T}^{T} \sum_{n = 1}^{\infty} a_n n^{-\sigma} \left( \frac{x}{n} \right)^{it} \, dt.
\] Since the Dirichlet series converges uniformly on the vertical line $\text{Re}(s) = \sigma$, we can interchange the order of summation and integration to obtain
\[
\int_{-T}^{T} \alpha(\sigma + it) x^{\sigma + it} \, dt = x^{\sigma} \sum_{n = 1}^{\infty} a_n n^{-\sigma} \int_{-T}^{T} e^{it \log (x/n)} \, dt.
\] Note that if $n = x$, then the integral is $2T$. However, if $n \neq x$, then we have
\[
\int_{-T}^{T} e^{it \log(x/n)} \, dt = \frac{2 \sin(T \log (x/n))}{\log (x/n)}.
\] Observe that $|\log (x/n)| \gg 1$ for $n \neq x$, where the implicit constant depends on $x$. It thus follows that for a fixed $x$ we have
\[
\int_{-T}^{T} e^{it \log (x/n)} \, dt \ll 1
\] for $n \neq x$. This now yields the estimate
\[
\int_{-T}^{T} \alpha(\sigma + it) x^{\sigma + it} \, dt = 2Ta_x + O \left( x^{\sigma} \sum_{n = 1}^{\infty} |a_n| n^{-\sigma} \right),
\] where the implicit constant depends only on $x$. Dividing by $2T$ and letting $T \to \infty$ yields the desired formula.

Let $\alpha(s) = \sum_{n =1}^{\infty} a_n n^{-s}$ and $\beta(s) = \sum_{n = 1}^{\infty} b_n n^{-s}$ be two Dirichlet series and let $\gamma(s) = \alpha(s) \beta(s)$. If $\alpha(s)$ and $\beta(s)$ both converge absolutely, then
\[
\gamma(s) = \sum_{n = 1}^{\infty} a_n n^{-s} \sum_{m = 1}^{\infty} b_m m^{-s} = \sum_{n = 1}^{\infty} \sum_{m = 1}^{\infty} a_n b_m (nm)^{-s}.
\] Because of absolute convergence we can rearrange the terms and obtain
\[
\gamma(s) = \sum_{k = 1}^{\infty} \left(\sum_{nm = k} a_n b_m \right) k^{-s}.
\] Thus $\gamma(s)$ is a Dirichlet series, $\gamma(s) = \sum_{k =1}^{\infty} c_k k^{-s}$, with coefficients $c_k = \sum_{nm = k} a_n b_m$. Moreover, $\gamma(s)$ also converges absolutely.
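A quick numerical sanity check of this multiplication rule (a sketch; the coefficient choices below are mine, purely illustrative): multiplying two truncated Dirichlet series should agree with the single truncated series built from the convolved coefficients $c_k = \sum_{nm = k} a_n b_m$, up to truncation error.

```python
def dirichlet_sum(coeff, s, N):
    """Truncated Dirichlet series sum_{n <= N} coeff(n) n^{-s}."""
    return sum(coeff(n) * n ** (-s) for n in range(1, N + 1))

def convolve(a, b, k):
    """c_k = sum over factorizations k = n*m of a(n) * b(m)."""
    return sum(a(n) * b(k // n) for n in range(1, k + 1) if k % n == 0)

a = lambda n: 1.0        # coefficients of a zeta-like series
b = lambda n: 1.0 / n    # an arbitrary second absolutely convergent series

s, N = 3.0, 2000
lhs = dirichlet_sum(a, s, N) * dirichlet_sum(b, s, N)
rhs = dirichlet_sum(lambda k: convolve(a, b, k), s, N)
print(abs(lhs - rhs))  # small: only truncation error remains
```

The discrepancy comes from pairs $(n, m)$ with $n, m \leq N$ but $nm > N$, whose total contribution at $s = 3$ is tiny.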

Let $\{a_n\}$ be such that $a_1 \neq 0$ and let $\{b_n\}$ be the Dirichlet inverse of $\{a_n\}$, i.e., $\sum_{d \, | \, n} a_d b_{n/d} = e(n)$, where $e(1) = 1$ and $e(n) = 0$ for $n > 1$. If the Dirichlet series $\alpha(s) = \sum_{n = 1}^{\infty} a_n n^{-s}$ and $\beta(s) = \sum_{n = 1}^{\infty} b_n n^{-s}$ both converge absolutely for $\sigma > \sigma_0$, then $\alpha(s) \beta(s) = 1$ for $\sigma > \sigma_0$. In particular, $\alpha(s) \neq 0$ for $\sigma > \sigma_0$. If $\{a_n\}$ represents a completely multiplicative arithmetic function, then $\{b_n\} = \{\mu(n) a_n\}$ is the Dirichlet inverse of $\{a_n\}$ and so $\beta(s)$ converges absolutely for $\sigma > \sigma_a$, where $\sigma_a$ is the abscissa of absolute convergence of $\alpha(s)$. Hence, $\alpha(s) \neq 0$ for $\sigma > \sigma_a$ and
\[
\beta(s) = \sum_{n = 1}^{\infty} b_n n^{-s} = \sum_{n = 1}^{\infty} \mu(n) a_n n^{-s} = \alpha(s)^{-1}
\] for $\sigma > \sigma_a$. This shows that the Dirichlet series of a completely multiplicative function does not vanish in its half plane of absolute convergence. As an example, the Riemann zeta function
\[
\zeta(s) = \sum_{n = 1}^{\infty} n^{-s}
\] does not vanish for $\sigma > 1$ and
\[
\sum_{n = 1}^{\infty} \mu(n) n^{-s} = \frac{1}{\zeta(s)}.
\]
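As a numerical sketch (mine, not part of the argument above), one can check the last identity at $s = 2$, where $1/\zeta(2) = 6/\pi^2$:

```python
from math import pi

def mobius(n):
    """Mobius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # n has a square factor
            result = -result
        p += 1
    return -result if n > 1 else result

N = 20_000
partial = sum(mobius(n) / n ** 2 for n in range(1, N + 1))
print(partial, 6 / pi ** 2)  # agree up to the tail, roughly 1/N
```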


Abel’s summation by parts formula

Abel's summation by parts formula is one of the most important and ubiquitous results in analytic number theory; it is frequently employed to estimate partial sums of an arithmetic function weighted by some smooth function.

Theorem. (Abel’s summation by parts formula) Let $a : \mathbb{N} \to \mathbb{C}$ be an arithmetic function, let $0 < x < y$ be real numbers and let $f : [x, y] \to \mathbb{C}$ be a continuously differentiable function. Then we have
\[
\sum_{x < n \leq y} a(n)f(n) = A(y)f(y) -A(x)f(x) -\int_{x}^{y} A(u) f'(u) \, du,
\] where $A(u) = \sum_{n \leq u} a(n)$.

Proof. Let $m = \lfloor x \rfloor$ and $M = \lfloor y \rfloor$. We can rewrite the weighted sum as
\[
\sum_{x < n \leq y} a(n) f(n) = \sum_{n = m + 1}^{M} a(n) f(n).
\] By definition $a(n) = A(n) -A(n-1)$ so we can replace $a(n)$ to get
\[
\begin{align}
\sum_{n = m + 1}^{M} a(n) f(n) &= \sum_{n = m + 1}^{M} (A(n) -A(n-1)) f(n) \\
&= \sum_{n = m + 1}^{M} A(n) f(n) -\sum_{n = m}^{M -1} A(n)f(n+1) \\
&= A(M)f(M) -A(m) f(m + 1) -\sum_{n = m + 1}^{M -1} A(n)(f(n+1) -f(n)) \label{sum by parts 1} \tag{1}
\end{align} \] Since $f(n+1) -f(n) = \int_{n}^{n+1}f'(u) \, du$ and $A(u) = A(n)$ for all $u \in [n, n+1)$, we get
\[\begin{align}
\sum_{n = m + 1}^{M -1} A(n)(f(n+1) -f(n)) &= \sum_{n = m+1}^{M-1} A(n) \int_{n}^{n+1} f'(u) \, du \\
&= \sum_{n = m+1}^{M-1} \int_{n}^{n+1} A(u) f'(u) \, du \\
&= \int_{m + 1}^{M} A(u) f'(u) \, du. \tag{2}
\end{align} \] Substituting (2) into (1), we get
\[
\sum_{n = m + 1}^{M} a(n)f(n) = A(M) f(M) -A(m)f(m + 1) -\int_{m + 1}^{M} A(u) f'(u) \, du. \tag{3}
\] Using the Fundamental Theorem of Calculus and observing that $A(u) = A(x) = A(m)$ for $u \in [x, m +1)$, we get
\[
\begin{align}
\int_{x}^{m + 1} A(u)f'(u) \, du &= A(x)f(m + 1) -A(x)f(x) \\
&= A(m)f(m + 1) -A(x)f(x). \tag{4}
\end{align} \] Doing a similar calculation for $\int_{M}^{y}A(u)f'(u) \, du$ yields
\[
\int_{M}^{y} A(u)f'(u) \, du = A(y)f(y) -A(M)f(M). \tag{5}
\] Using (4) and (5), one can easily turn (3) into the required form.

Corollary. Let $a : \mathbb{N} \to \mathbb{C}$ be an arithmetic function and let $f : [1, x] \to \mathbb{C}$ be a continuously differentiable function where $x \geq 1$. Then we have
\[
\sum_{n \leq x} a(n)f(n) = A(x)f(x) -\int_{1}^{x} A(u) f'(u) \, du.\]
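Since $A$ is a step function, constant on each $[n, n+1)$, the integral in the corollary can be evaluated exactly piece by piece. A small Python sketch of the corollary (the weights $a(n) = 1$, $f = \log$ are an arbitrary illustrative choice):

```python
from math import log

def abel_sum(a, f, x):
    """sum_{n <= x} a(n) f(n) computed as A(x)f(x) - int_1^x A(u) f'(u) du."""
    N = int(x)
    A = 0.0          # running partial sum A(n)
    integral = 0.0   # exact value, since A is constant on [n, n+1)
    for n in range(1, N + 1):
        A += a(n)
        upper = min(n + 1, x)
        integral += A * (f(upper) - f(n))  # int_n^upper A(u) f'(u) du
    return A * f(x) - integral

a = lambda n: 1.0
f = log
x = 57.3
direct = sum(a(n) * f(n) for n in range(1, int(x) + 1))
print(abel_sum(a, f, x), direct)  # agree up to floating-point rounding
```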

We now estimate harmonic sums.

Theorem. If $x \geq 1$, then we have
\[
\sum_{n \leq x} \frac{1}{n} = \log x + \gamma + O\left(\frac{1}{x}\right), \tag{6}
\]where $\gamma$ is the Euler-Mascheroni constant.

Proof. Taking $a(n) = 1$ and $f(x) = 1/x$ in the summation by parts formula, we get
\[
\sum_{n \leq x} \frac{1}{n} = \frac{\lfloor x \rfloor}{x} + \int_{1}^{x} \frac{\lfloor u \rfloor}{u^2} \, du.
\] Substituting $\lfloor x \rfloor = x -\{x\}$, we get
\[
\begin{align} \sum_{n \leq x} \frac{1}{n} &= 1 -\frac{\{x\}}{x} + \int_{1}^{x} \frac{du}{u}-\int_{1}^{x} \frac{\{u\}}{u^2} \, du \\
&= 1 + O\left( \frac{1}{x} \right) + \log x -\int_{1}^{x} \frac{\{u\}}{u^2} \, du \\
&= 1 + O\left( \frac{1}{x} \right) + \log x -\int_{1}^{\infty} \frac{\{u\}}{u^2} \, du + \int_{x}^{\infty} \frac{\{u\}}{u^2} \, du. \end{align}\]
Taking $C = 1 -\int_{1}^{\infty} \{u\}u^{-2} \, du$, we obtain
\[
\sum_{n \leq x} \frac{1}{n} = \log x + C + O\left( \frac{1}{x} \right) + \int_{x}^{\infty} \frac{\{u\}}{u^2} \, du.\] We can bound the improper integral as
\[
\int_{x}^{\infty} \frac{\{u\}}{u^2} \, du \leq \int_{x}^{\infty} \frac{du}{u^2} = \frac{1}{x}
\] and so
\[
\int_{x}^{\infty} \frac{\{u\}}{u^2} \, du = O\left( \frac{1}{x} \right).
\] It thus follows that
\[
\sum_{n \leq x} \frac{1}{n} = \log x + C + O\left( \frac{1}{x} \right).
\] It can easily be seen, by letting $x$ tend to $\infty$, that $C = \gamma$.
Note that in the above proof we obtained the following integral expression for $\gamma$:
\[
\gamma = 1 -\int_{1}^{\infty} \frac{\{u\}}{u^2} \, du.
\]
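As a sanity check (a sketch, using a known decimal value of $\gamma$ as the reference), the $O(1/x)$ error in (6) is visible numerically:

```python
from math import log

# Reference decimal value of the Euler-Mascheroni constant.
gamma = 0.5772156649015329

x = 1_000_000
harmonic = sum(1.0 / n for n in range(1, x + 1))
print(harmonic - log(x) - gamma)  # on the order of 1/x, as (6) predicts
```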

Theorem. If $x \geq 1$, then for any $s \in \mathbb{C}$ with $s \neq 1$ and $\sigma = \text{Re}(s) > 0$ we have
\[
\sum_{n \leq x} \frac{1}{n^{s}} = \frac{x^{1-s}}{1-s} + \frac{s}{s-1} -s \int_{1}^{\infty} \frac{\{u\}}{u^{s + 1}} \, du + O(x^{-\sigma}),
\] where the implicit constant depends on $s$.

Proof. We apply Abel's summation by parts formula with $a(n) = 1$ and $f(x) = x^{-s}$. For $x \geq 1$ we then get
\[
\sum_{n \leq x} \frac{1}{n^s} = \frac{\lfloor x \rfloor}{x^{s}} + s \int_{1}^{x} \frac{\lfloor u \rfloor}{u^{s+1}} \, du.
\] Substituting $\lfloor x \rfloor = x -\{x\}$, we obtain
\[ \begin{align}
\sum_{n \leq x} \frac{1}{n^{s}} &= x^{1-s} -\frac{\{x\}}{x^{s}} + s\int_{1}^{x} \frac{du}{u^{s}} -s \int_{1}^{x} \frac{\{u\}}{u^{s+1}} \, du \\
&= x^{1-s} + s \left( \frac{x^{1-s}}{1-s} -\frac{1}{1-s} \right) -s \int_{1}^{x} \frac{\{u\}}{u^{s+1}} \, du + O(x^{-\sigma}) \\
&= \frac{x^{1-s}}{1-s} + \frac{s}{s -1} -s\int_{1}^{\infty} \frac{\{u\}}{u^{s+1}} \, du + s \int_{x}^{\infty} \frac{\{u\}}{u^{s+1}} \, du + O(x^{-\sigma}).
\end{align} \] Finally note that
\[
\left|\int_{x}^{\infty} \frac{\{u\}}{u^{s + 1}} \, du \right| \leq \int_{x}^{\infty} \frac{du}{u^{\sigma+1}} = \frac{x^{-\sigma}}{\sigma}. \]
This leads to the desired estimate.

As a corollary we obtain that if $x \geq 1$ and $\sigma > 1$, then we have
\[
\sum_{n > x} \frac{1}{n^s} = O(x^{1-\sigma}),
\] where the implicit constant depends on $s$.

Theorem. (estimate of power sums) If $x \geq 1$, then for any $s \in \mathbb{C}$ with $\sigma > 0$ we have
\[ \sum_{n \leq x} n^s = \frac{x^{1 + s}}{1 + s} + O(x^{\sigma}),
\] where the implicit constant depends on $s$.

Proof. Applying the Abel summation by parts formula with $a(n) = 1$ and $f(x) = x^s$, we get
\[
\sum_{n \leq x} n^s = \lfloor x \rfloor x^s -s \int_{1}^{x} \lfloor u \rfloor u^{s -1} \, du.
\] Substituting $\lfloor x \rfloor = x -\{x\}$, we obtain
\[
\begin{align}
\sum_{n \leq x} n^s &= x^{s + 1} -\{x\}x^{s} -s \int_{1}^{x} u^s \, du + s \int_{1}^{x} \{u\} u^{s -1} \, du \\
&= x^{s + 1} + O(x^{\sigma}) -s \left( \frac{x^{1 + s}}{1 + s} -\frac{1}{1 + s} \right) + s \int_{1}^{x} \{u\} u^{s -1} \, du. \tag{7}
\end{align} \] Now note that
\[
\left| s\int_{1}^{x} \{u\} u^{s -1} \, du \right| \leq |s| \int_{1}^{x} u^{\sigma -1} \, du = \frac{|s|}{\sigma}( x^{\sigma} -1) = O_{s}(x^{\sigma}).
\] Hence (7) simplifies to
\[
\sum_{n \leq x} n^s = \frac{x^{1 + s}}{1 + s} + \frac{s}{1 + s} + O(x^{\sigma}).
\] Since $x \geq 1$, the error term $O(x^\sigma)$ absorbs the constant $s/(1 + s)$, and we get the desired result.
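A numerical sketch of the power-sum estimate (the particular complex $s$ below is an arbitrary choice of mine): the error $\sum_{n \leq x} n^s - x^{1+s}/(1+s)$ stays within a fixed multiple of $x^{\sigma}$, the multiple being roughly $|s|/\sigma$ as in the proof.

```python
s = 0.5 + 2.0j      # any s with positive real part
sigma = s.real
x = 100_000
power_sum = sum(n ** s for n in range(1, x + 1))
main_term = x ** (1 + s) / (1 + s)
ratio = abs(power_sum - main_term) / x ** sigma
print(ratio)  # stays bounded as x grows
```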


Dirichlet product and multiplicative functions

The Dirichlet product (or Dirichlet convolution) of two arithmetic functions $f$ and $g$ is defined as
\[
(f * g)(n) = \sum_{d | n} f(d)g(n/d).
\] The Dirichlet product arises when multiplying two Dirichlet series, that is, if two Dirichlet series
\[
\sum_{n = 1}^{\infty} f(n) n^{-s} \quad \text{and} \quad \sum_{n = 1}^{\infty} g(n) n^{-s}
\] converge absolutely, then their product is the Dirichlet series
\[
\sum_{n = 1}^{\infty} (f * g)(n) n^{-s}.
\] It is easily verified that the Dirichlet product is both commutative and associative and that $e * f = f * e = f$ for every arithmetic function $f$. Thus the set of all arithmetic functions forms a commutative monoid under the Dirichlet product. The next result characterizes the arithmetic functions that are invertible under the Dirichlet product.

Theorem. If $f$ is an arithmetic function with $f(1) \neq 0$, then there is a unique arithmetic function $f^{-1}$ such that $f^{-1} * f = f * f^{-1} = e$. The function $f^{-1}$ is given by
\[
f^{-1}(1) = \frac{1}{f(1)}, \qquad f^{-1}(n) = -\frac{1}{f(1)} \sum_{\scriptstyle d |
n \atop \scriptstyle d < n} f^{-1}(d) f(n/d) \quad \text{for } n > 1.
\]
The above result shows that the set of all arithmetic functions $f$ satisfying $f(1) \neq 0$ forms an abelian group under the Dirichlet product.
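The recursion in the theorem translates directly into code. A sketch (with memoization added for speed): taking $f$ to be the constant function $1$ recovers the Möbius function as the inverse.

```python
from functools import lru_cache

def dirichlet_inverse(f):
    """Dirichlet inverse of f (assuming f(1) != 0), via the recursion above."""
    @lru_cache(maxsize=None)
    def f_inv(n):
        if n == 1:
            return 1 / f(1)
        return -sum(f_inv(d) * f(n // d)
                    for d in range(1, n) if n % d == 0) / f(1)
    return f_inv

mu = dirichlet_inverse(lambda n: 1)   # inverse of the constant function 1
print([int(mu(n)) for n in range(1, 11)])
# [1, -1, -1, 0, -1, 1, -1, 0, 0, 1] -- the Mobius function
```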

The Dirichlet multiplication provides a convenient notation for writing relations among different arithmetic functions in a compact fashion:
\[
\mu * 1 = e, \quad \varphi * 1 = I, \quad \varphi = \mu * I.
\]

Theorem. (Möbius inversion formula) Let $f$ and $g$ be arithmetic functions. Then
\[
f(n) = \sum_{d | n} g(d) \quad \text{for } n \in \mathbb{N}
\] if and only if
\[
g(n) = \sum_{d | n} f(d) \mu (n/d) \quad \text{for } n \in \mathbb{N}.
\]

Corollary. If $n \geq 1$, then
\[
\Lambda(n) = -\sum_{d | n} \mu(d) \log d.
\]

Proof. Since $\log n = \sum_{d | n} \Lambda(d)$, Möbius inversion gives
\[
\Lambda(n) = \sum_{d | n} \mu(d) \log \frac{n}{d} = \log n \sum_{d | n} \mu(d)- \sum_{d | n} \mu(d) \log d = e(n) \log n -\sum_{d | n} \mu(d) \log d.
\] This completes the proof as $e(n) \log n = 0$ for every $n$.
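The corollary is easy to test by brute force. A sketch (the helper functions are mine, computing $\Lambda$ and $\mu$ straight from their definitions):

```python
from math import log, isclose

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def von_mangoldt(n):
    """log p if n is a power of a single prime p, else 0."""
    if n > 1:
        p = next(q for q in range(2, n + 1) if n % q == 0)
        m = n
        while m % p == 0:
            m //= p
        if m == 1:
            return log(p)
    return 0.0

for n in range(1, 200):
    rhs = -sum(mobius(d) * log(d) for d in range(1, n + 1) if n % d == 0)
    assert isclose(von_mangoldt(n), rhs, abs_tol=1e-9)
print("Lambda(n) = -sum_{d|n} mu(d) log d checked for n < 200")
```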

An arithmetic function $f$ is called multiplicative if $f$ is not identically zero and $f(mn) = f(m) f(n)$ whenever $(m, n) = 1$. A multiplicative function $f$ is called completely multiplicative (or totally multiplicative) if $f$ is not identically zero and $f(mn) = f(m) f(n)$ for all $m, n$. We note some common examples of multiplicative functions.

  • The power function $I_{\alpha}$ is completely multiplicative.
  • The identity function $e$ is completely multiplicative.
  • The Möbius function $\mu$ is multiplicative but not completely multiplicative as $\mu(4) = 0 \neq 1 = \mu(2)^2$.
  • The Euler totient function $\varphi$ is multiplicative but not completely multiplicative as $\varphi(4) = 2 \neq 1 = \varphi(2)^2$.
  • The Liouville function $\lambda$ is completely multiplicative.

Note that if $f$ is multiplicative, then $f(1) = 1$. From this it immediately follows that $\Lambda$ is not multiplicative as $\Lambda(1) = 0$.


Theorem. Let $f$ be an arithmetic function with $f(1) = 1$.
(i) $f$ is multiplicative if and only if
\[
f(p_1^{a_1} \cdots p_k^{a_k}) = f(p_1^{a_1}) \cdots f(p_k^{a_k}),
\] where $p_1, \dots, p_k$ are distinct primes.
(ii) If $f$ is multiplicative, then $f$ is completely multiplicative if and only if
\[
f(p^a) = f(p)^a
\] for all primes $p$ and all integers $a \geq 1$.

The above result shows that a multiplicative function is uniquely determined by its values on prime powers, and a completely multiplicative function is uniquely determined by its values on primes.

Theorem. If $f$ and $g$ are multiplicative, then so is their Dirichlet product $f * g$.

Proof. Let $m$ and $n$ be relatively prime integers. Then observe that
\[\begin{align*}
(f * g)(mn) &= \sum_{d | mn} f(d) g\left( \frac{mn}{d} \right) = \sum_{\scriptstyle a | m \atop \scriptstyle b | n} f(ab) g \left( \frac{mn}{ab} \right)
\end{align*}
\] as every divisor of $mn$ can be uniquely written as $ab$, where $a | m$ and $b | n$. Using the multiplicativity of $f$ and $g$ we obtain
\[\begin{align*}
(f * g)(mn) &= \sum_{\scriptstyle a | m \atop \scriptstyle b | n}f(a)f(b) g \left( \frac{m}{a} \right) g \left( \frac{n}{b} \right) = \sum_{a | m} \sum_{b | n} f(a) f(b) g \left( \frac{m}{a} \right) g \left( \frac{n}{b} \right) \\
&= \sum_{a | m} f(a) g \left( \frac{m}{a} \right) \sum_{b | n} f(b) g \left( \frac{n}{b} \right) = (f * g)(m) (f * g)(n).
\end{align*}
\]

The Dirichlet product of two completely multiplicative functions need not be completely multiplicative. For instance, the divisor function $d = 1 * 1$ is not completely multiplicative as $d(4) = 3 \neq 4 = d(2)^2$ whereas $1$ clearly is.

Theorem. If $f$ is multiplicative, then so is its Dirichlet inverse $f^{-1}$.

First Proof. Suppose for the sake of contradiction that $f^{-1}$ is not multiplicative. Then there exist positive integers $m$ and $n$ with $(m, n) = 1$ such that
\[
f^{-1}(mn) \neq f^{-1}(m) f^{-1}(n).
\] We choose such a pair $m$ and $n$ for which the product $mn$ is the smallest. Since $f$ is multiplicative therefore $f^{-1}(1) = 1/f(1) = 1$ and hence neither $m$ nor $n$ can be $1$. In particular, $mn > 1$. By the construction of the product $mn$, $f(ab) = f(a)f(b)$ for all positive integers $a$ and $b$ with $(a, b) = 1$ and $ab < mn$. It now follows that
\[
\begin{align*} f^{-1}(mn) = -\sum_{\scriptstyle a | m \atop {\scriptstyle b | n \atop \scriptstyle ab < mn}} f^{-1}(a b) f \left( \frac{mn}{ab} \right) = -\sum_{\scriptstyle a | m \atop {\scriptstyle b | n \atop \scriptstyle ab < mn}} f^{-1}(a) f^{-1}(b) f \left( \frac{m}{a} \right) f\left( \frac{n}{b} \right)
\end{align*}\]
Splitting the sum we obtain
\[\begin{align*} f^{-1}(mn) &= -f^{-1}(n) \sum_{\scriptstyle a | m \atop \scriptstyle a < m} f^{-1}(a) f\left( \frac{m}{a} \right) -f^{-1}(m) \sum_{\scriptstyle b | n \atop \scriptstyle b < n} f^{-1}(b) f\left( \frac{n}{b} \right) \\ & -\sum_{\scriptstyle a | m \atop \scriptstyle a < m} \sum_{\scriptstyle b | n \atop \scriptstyle b < n} f^{-1}(a) f^{-1}(b) f \left( \frac{m}{a} \right) f\left( \frac{n}{b} \right) \\ &= f^{-1}(n)f^{-1}(m) + f^{-1}(m)f^{-1}(n) - f^{-1}(m)f^{-1}(n) \\ &= f^{-1}(m)f^{-1}(n). \end{align*} \] Here we used that $\sum_{\scriptstyle a | m \atop \scriptstyle a < m} f^{-1}(a) f(m/a) = (f^{-1} * f)(m) - f^{-1}(m) = -f^{-1}(m)$ since $m > 1$, and similarly for $n$. This contradiction proves the result.

Second Proof. Let $g$ be an arithmetic function defined as
\[
g(n) = \prod_{p^a || n} f^{-1}(p^a).
\] Then $g$ is a multiplicative function by definition and so it suffices to show that $f^{-1} = g$. Note that
\[
\begin{align*}
(g * f)(p^k) &= \sum_{d | p^k} g(d) f(p^k/d) = \sum_{i = 0}^{k} g(p^i)f(p^{k-i}) \\
&= \sum_{i = 0}^{k} f^{-1}(p^i) f(p^{k-i}) = \sum_{d | p^k} f^{-1}(d) f(p^k/d) = (f^{-1} * f)(p^k) = e(p^k).
\end{align*}\] Because $g * f$ and $e$ are both multiplicative functions and agree on prime powers, it follows that $g * f = e$ and so $g = f^{-1}$.

The next result allows us to characterize completely multiplicative functions.

Theorem. Let $f$ be multiplicative. Then $f$ is completely multiplicative if and only if $f^{-1} = \mu f$.

Proof. Suppose $f$ is completely multiplicative. Then observe that
\[
(f * \mu f)(n) = \sum_{d | n} \mu(d) f(d) f \left( \frac{n}{d} \right) = f(n) \sum_{d | n} \mu(d) = f(n) e(n) = e(n).
\] Conversely, assume that $f^{-1} = \mu f$. Then observe that
\[
\sum_{d | n} \mu(d) f(d) f \left( \frac{n}{d} \right) = 0
\] for $n > 1$. Let $n = p^a$, where $a \geq 1$. Then, we get
\[
\mu(1)f(1)f(p^{a}) + \mu(p)f(p)f(p^{a -1}) = 0.
\] It then follows that
\[
f(p^{a}) = f(p)f(p^{a -1}).
\] This implies that $f(p^{a}) = f(p)^{a}$. Thus $f$ is completely multiplicative.


Euler’s totient function

Euler's totient function, denoted $\varphi$ (or $\phi$), is defined at $n$ to be the number of positive integers not exceeding $n$ that are relatively prime to $n$. We can rewrite $\varphi(n)$ in summation notation as
\[
\varphi(n) = \sum_{\scriptstyle k =1 \atop \scriptstyle (k, n) = 1}^{n} 1.
\] The function $\varphi$ is ubiquitous in number theory, particularly in prime number theory. For instance, the degree of the irreducible polynomial of $e^{2\pi i/n}$ over $\mathbb{Q}$ is $\varphi(n)$ or equivalently, $[\mathbb{Q}(e^{2 \pi i/n}) : \mathbb{Q}] = \varphi(n)$. This can be used to show that the torsion subgroup of the group of units in a number field is finite. More elementarily, the number of generators in a cyclic group of order $n$ is $\varphi(n)$. Next, we find the divisor sum of $\varphi$.

Theorem. If $n \geq 1$, then
\[
\sum_{d | n} \varphi(d) = n.
\]

Proof. We partition the set $\{1, \dots, n\}$ into the subsets $A_d = \{1 \leq k \leq n : (k, n) = d\}$, where $d$ runs over the divisors of $n$, and note that there is a bijection between the elements of $A_d$ and the integers $1 \leq r \leq n/d$ satisfying $(r, n/d) = 1$. This then implies that
\[
n = \sum_{d | n} | A_d| = \sum_{d | n} \varphi(n/d) = \sum_{d | n} \varphi(d).
\]
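A brute-force check of the divisor-sum identity (a sketch, with $\varphi$ computed straight from its definition):

```python
from math import gcd

def phi(n):
    """Euler's totient, straight from the definition."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(1, 300):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n
print("sum_{d|n} phi(d) = n holds for all n < 300")
```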

There is a relationship between $\varphi$ and $\mu$ which can be deciphered easily from the divisor sum of $\varphi$ once one knows the Möbius inversion formula, but we derive it directly using what we have at the moment.

Theorem. If $n \geq 1$, then
\[
\varphi(n) = \sum_{d | n} \mu(d) \frac{n}{d}.
\]

Proof. We use the formula for the divisor sum of $\mu$ to obtain
\[
\varphi(n) = \sum_{k = 1}^{n} e((k, n)) = \sum_{k = 1}^{n} \sum_{d | (k, n)} \mu(d) = \sum_{k = 1}^{n} \sum_{\scriptstyle d | n \atop \scriptstyle d | k} \mu(d).
\] Changing the order of summation we find that
\[
\varphi(n) = \sum_{d | n} \sum_{\scriptstyle k = 1 \atop \scriptstyle d | k}^{n} \mu(d)
= \sum_{d | n} \mu(d) \sum_{\scriptstyle k = 1 \atop \scriptstyle d | k}^{n} 1 = \sum_{d | n} \mu(d) \frac{n}{d}.
\]

Next we have a nice product formula for $\varphi$.

Theorem. For $n \geq 1$ we have
\[
\varphi(n) = n \prod_{p | n} \left( 1- \frac{1}{p} \right).
\]

Proof. If $n = 1$, then the product on the right hand side is empty and so the formula trivially holds. Now let $p_1, \dots, p_k$ be the prime divisors of $n$. Then expanding the product, we get
\[\begin{align} \prod_{i = 1}^{k} \left( 1- \frac{1}{p_i} \right) &= \sum_{I \subset [k]} \prod_{i \in I} \left(-\frac{1}{p_i} \right) = \sum_{I \subset [k]} \frac{(-1)^{|I|}}{\prod_{i \in I} p_i} = \sum_{d | n} \frac{\mu(d)}{d} = \frac{\varphi(n)}{n}. \end{align} \]

Corollary. If $m$ and $n$ are positive integers, then
\[
\varphi(mn) = \varphi(m) \varphi(n) \frac{d}{\varphi(d)},
\] where $d = (m, n)$. In particular, if $m$ and $n$ are coprime, then $\varphi(mn) = \varphi(m) \varphi(n)$.


Proof. The assertion follows from the product formula for $\varphi$ as
\[\begin{align} \frac{\varphi(mn)}{mn} &= \prod_{p | mn} \left( 1- \frac{1}{p} \right) \\ &= \prod_{p | m} \left( 1- \frac{1}{p} \right) \prod_{\scriptstyle p | n \atop \scriptstyle p \hspace{1pt} \nmid \hspace{1pt} m} \left( 1- \frac{1}{p} \right) \\ &= \prod_{p | m} \left( 1- \frac{1}{p} \right) \prod_{\scriptstyle p | n} \left( 1- \frac{1}{p} \right)\prod_{\scriptstyle p | n \atop \scriptstyle p | m} \left( 1- \frac{1}{p} \right)^{-1} \\ &= \frac{\varphi(m)}{m} \frac{\varphi(n)}{n} \prod_{p | (n, m)} \left( 1- \frac{1}{p} \right)^{-1} \\ &= \frac{\varphi(m)}{m} \frac{\varphi(n)}{n} \frac{d}{\varphi(d)}. \end{align}\]
From the multiplicativity of $\varphi$ it follows that if $n$ divides $m$, then $\varphi(n)$ divides $\varphi(m)$: multiplicativity reduces the statement to prime powers, where it holds since $\varphi(p^a) = p^{a-1}(p-1)$ divides $\varphi(p^b) = p^{b-1}(p-1)$ whenever $1 \leq a \leq b$.

For a positive integer $k$ let $A(k)$ denote the number of solutions to the equation $\varphi(n) = k$. It is easy to see that $A(k)$ is finite for every $k$ since $\varphi(n) \to \infty$. In 1999, Kevin Ford (UIUC) used sieve theory to prove Sierpiński's conjecture, which states that every integer greater than $1$ lies in the image of the function $A$. It is not known whether $A(k) = 1$ for some $k$. The answer seems to be no, and this is known as Carmichael's conjecture (Robert Carmichael originally stated it as a theorem in his book, a claim he later retracted).

It is easy to see that $\varphi(n) \to \infty$ as $n \to \infty$. To see this let $M > 0$ be fixed and consider all integers $n$ for which $\varphi(n) \leq M$. Let $n = \prod_{i = 1}^{k} p_i^{a_i}$ be such that $\varphi(n) \leq M$. Then we have $\prod_{i = 1}^{k} p_i^{a_i - 1}(p_i - 1) \leq M$. This implies that $p_i \leq M + 1$ and $a_i \leq \log_2(2M)$ for every $1 \leq i \leq k$. Thus there are only finitely many choices of primes $p_i$ and exponents $a_i$, and consequently there are only finitely many $n$ for which $\varphi(n) \leq M$. Hence, $\lim_{n \to \infty} \varphi(n) = \infty$.


Möbius function

The Möbius function is one of the most important functions in number theory. It is defined as
$$ \mu(n) = \begin{cases}
1 & \text{if } n = 1, \\
(-1)^k & \text{if $n = p_1 \cdots p_k$, where $p_i$ are distinct primes}, \\
0 & \text{otherwise}.
\end{cases} $$ It is the signed characteristic function of squarefree integers. The definition above may seem somewhat out of the blue and unmotivated but once one knows about Dirichlet product and Möbius inversion it is easy to see where the definition comes from (actually it is the inverse of the unit function under Dirichlet product). The Möbius function is intimately connected to the Riemann zeta function, $\zeta(s)$, which is one of the most important functions in analytic number theory. For instance, the estimate
$$
M(x) = \sum_{n \leq x} \mu(n) \ll x^{1/2 + \varepsilon}
$$ is equivalent to the Riemann Hypothesis, one of the notoriously difficult problems in analytic number theory (or arguably in all of mathematics), which says that all of the nontrivial (interesting) zeros of $\zeta(s)$ lie on the line $\text{Re}(s) = 1/2$. This equivalence was shown by Littlewood. The Riemann Hypothesis is also equivalent to the convergence of the Dirichlet series
$$
\sum_{n = 1}^{\infty} \mu(n) n^{-s}
$$ for every $s$ with $\text{Re}(s) > 1/2$. It is known that the above Dirichlet series converges for every $s$ with $\text{Re}(s) \geq 1$ and that the limit is $0$ when $s = 1$. The convergence at $s = 1$ alone (without regard for the value of the limit) implies the prime number theorem, which is one of the celebrated results in analytic number theory.

In 1885, Stieltjes claimed that $|M(x)| < \sqrt{x}$ for every $x > 1$. Strangely, the statement is known as Mertens's conjecture (Mertens published it in 1897), and it was proven false by Odlyzko and te Riele in 1985. Their proof did not produce a counterexample, but they showed that
$$
\limsup_{x \to \infty} \frac{M(x)}{\sqrt{x}} > 1.06 \quad \text{and} \quad \liminf_{x \to \infty} \frac{M(x)}{\sqrt{x}} < -1.009.$$ It is not known, but seems probable, that $\limsup_{x \to \infty} |M(x)|/ \sqrt{x} = \infty$. Although this already follows from the above, it is also easy to see directly that one cannot have $M(x) \ll x^{\theta}$ for any $\theta < 1/2$, since this would imply that $\zeta(s)$ has no zeros in the half plane $\sigma > \theta$, contradicting the existence of zeros on the critical line.
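As a small numerical sketch (mine): computing $M(x)$ for $x \leq 10^4$ shows $|M(x)|/\sqrt{x}$ staying comfortably below $1$; the failure guaranteed by Odlyzko and te Riele occurs only at some astronomically large (and still unknown) $x$.

```python
def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

N = 10_000
M, worst = 0, 0.0
for n in range(1, N + 1):
    M += mobius(n)
    if n > 1:  # the conjecture concerns x > 1
        worst = max(worst, abs(M) / n ** 0.5)
print(worst)  # the largest value of |M(n)|/sqrt(n) for 1 < n <= N
```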

Next we evaluate the divisor sum of $\mu$. It is precisely this result that connects $\mu$ to the Riemann zeta function.

Theorem. If $n \geq 1$, then $$ \sum_{d | n} \mu(d) = e(n).$$

Proof. If $n = 1$, then the equality is obvious. Now suppose that $n > 1$ and let $n = p_1^{a_1} \cdots p_k^{a_k}$. Because $\mu(d) = 0$ if $d$ is not squarefree, we can restrict the sum to the squarefree divisors of $n$. Note that
\[\sum_{d | n} \mu(d) = \sum_{I \subset [k]} \mu \left( \prod_{i \in I} p_i \right) = \sum_{I \subset [k]} (-1)^{|I|} = \sum_{i = 0}^{k} \sum_{\scriptstyle I \subset [k] \atop \scriptstyle |I| = i} (-1)^{|I|} = \sum_{i = 0}^{k} (-1)^{i} \sum_{\scriptstyle I \subset [k] \atop \scriptstyle |I| = i} 1.\] It thus follows that
\[
\sum_{d | n} \mu(d) = \sum_{i = 0}^{k} \binom{k}{i} (-1)^{i} = (1 -1)^{k} = 0.
\]
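The divisor sum of $\mu$ is easy to verify by brute force (a sketch, with $\mu$ computed by trial division):

```python
def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

for n in range(1, 500):
    divisor_sum = sum(mobius(d) for d in range(1, n + 1) if n % d == 0)
    assert divisor_sum == (1 if n == 1 else 0)
print("sum_{d|n} mu(d) = e(n) checked for n < 500")
```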

As remarked earlier, it is not so easy to show that the series \[
\sum_{n = 1}^{\infty} \frac{\mu(n)}{n}\] converges. However, using the above result about the divisor sum of $\mu$ we can easily bound the partial sums of this series by $1$.

Corollary. If $x \geq 1$, then
\[
\left| \sum_{n \leq x} \frac{\mu(n)}{n} \right| \leq 1.\]

Proof. Note that
\[ 1 = \sum_{n \leq x} e(n) = \sum_{n \leq x} \sum_{d | n}
\mu(d) = \sum_{qd \leq x} \mu(d) = \sum_{d \leq x}\mu(d) \left \lfloor \frac{x}{d} \right \rfloor. \] Writing $\lfloor x/n \rfloor = x/n- \{x/n\}$ we obtain
\[
1 = x\sum_{n \leq x} \frac{\mu(n)}{n}- \sum_{n \leq x} \mu(n) \left\{ \frac{x}{n} \right\}.\] Observe that
\[ \sum_{n \leq x} \left\{ \frac{x}{n} \right\} = \{x\} + \sum_{2 \leq n \leq x} \left\{ \frac{x}{n} \right\} \leq \{x\} + \lfloor x \rfloor -1 = x-1. \] Thus we have
\[
\left|x \sum_{n \leq x} \frac{\mu(n)}{n} \right| \leq 1 + \sum_{n \leq x} \left\{\frac{x}{n} \right\} \leq x. \] Dividing by $x$ completes the proof.
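Numerically, the partial sums indeed never leave $[-1, 1]$; the maximum, $1$, is attained at $n = 1$, and after that the sums stay small (a sketch, consistent with the convergence of the series to $0$ mentioned earlier):

```python
def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

N = 10_000
partial, worst = 0.0, 0.0
for n in range(1, N + 1):
    partial += mobius(n) / n
    worst = max(worst, abs(partial))
print(worst, partial)  # worst-case partial sum (which is 1), final value
```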
