Motivation from compound interest
Increase the number of times that the interest is compounded each year ($n$), so as to increase the final amount of money ($A$):
$$A = P\left(1 + \frac{r}{n}\right)^{nt},$$
where $P$ is the principal, and $t$ and $r$ are the number of years and the interest rate per annum. As $n$ becomes large, we can approximate the amount by
$$A \approx P e^{rt}.$$
The value of $A$ in the above formula is the amount compounded continuously.
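As a quick numerical illustration of this convergence (the principal, rate, and duration below are arbitrary sample values I picked, not figures from the text):

```python
import math

# Hypothetical sample figures: principal P, annual rate r, duration t years.
P, r, t = 1000.0, 0.05, 10.0

# A = P * (1 + r/n)**(n*t): interest compounded n times per year.
for n in (1, 4, 12, 365, 10_000):
    A = P * (1 + r / n) ** (n * t)
    print(f"n = {n:>6}: A = {A:.6f}")

# Continuous compounding: A = P * e**(r*t).
print(f"limit   : A = {P * math.exp(r * t):.6f}")
```

The printed amounts increase with $n$ and approach the continuously compounded value from below.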
Questions
- What is (the definition of) the constant $e$?
- In practice, $r$ is a rational number (i.e. $r \in \Q$). What if $r$ is irrational? Then we would run into irrational indices.
Goal
- To store the technical details somewhere on the web.
- To give a logical introduction to the concept of two elementary functions: exponential function and logarithms.
- To fill the gap between rational and irrational indices.
Raison d’être
In exam-oriented secondary education, students are required to make calculations using their scientific calculators. It is convenient for school teachers to extrapolate from the laws of indices on $\Q$ to their counterparts on $\R$, and to ask students to write down the calculator’s output (say, for $2^\pi$). Math teachers aren’t necessarily math majors (like me), so some of them might not even know how these two elementary functions are defined and developed. Their job is to pass the classic formulae on to students. As a result, students resort to rote learning or muscle memory. The former is prone to mistakes, especially under exam stress. This leads to low achievement.
This article aims to fill this gap for interested students. In the first few big proofs, I only use integer exponents, so that knowing the basic definition of indices suffices.
Facts needed
- Product rule of limits: $\lim (A \cdot B) = \left(\lim A \right) \left(\lim B \right)$, provided both limits on the RHS exist.
- Binomial Theorem:
  $$(a+b)^n = \sum_{k = 0}^n \binom{n}{k} a^{n-k} b^{k}.$$
- Geometric sum formula (finite version):
  $$\sum_{i = 0}^n ar^i = \frac{a(1-r^{n+1})}{1-r}, \; r \ne 1.$$
- Limit exists implies boundedness: if $\lim_{n\to+\infty} a_n$ exists, then there exists a constant $M>0$ such that for all $n$, $\lvert a_n \rvert \le M$.
- Monotone Convergence Theorem and monotonicity of limits:
  $$\forall n: a_n \le b_n \implies \lim_n a_n \le \lim_n b_n.$$
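The two algebraic facts in this list can be spot-checked numerically; the sample values of $a$, $b$, $r$, and $n$ below are arbitrary choices of mine.

```python
import math

# Binomial Theorem: (a + b)**n == sum of C(n, k) * a**(n-k) * b**k.
a, b, n = 1.5, 2.0, 8
lhs = (a + b) ** n
rhs = sum(math.comb(n, k) * a ** (n - k) * b ** k for k in range(n + 1))
assert abs(lhs - rhs) < 1e-9 * lhs

# Finite geometric sum: sum of a*r**i for i = 0..n equals a*(1 - r**(n+1))/(1 - r).
a, r, n = 3.0, 0.5, 10
lhs = sum(a * r ** i for i in range(n + 1))
rhs = a * (1 - r ** (n + 1)) / (1 - r)
assert abs(lhs - rhs) < 1e-12
```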
Recall the definition of exponential function
For any real number $s$, the limit $\lim_{n\to+\infty} (1+s/n)^n$ exists, and it’s denoted by $e^s$, or $\exp(s)$.
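Before the proof, here is a numerical glimpse of this convergence for a sample exponent (the value $s = 1.7$ is an arbitrary choice of mine):

```python
import math

s = 1.7  # arbitrary sample exponent
for n in (10, 100, 10_000, 1_000_000):
    approx = (1 + s / n) ** n
    print(n, approx)
print("exp(s) =", math.exp(s))  # the limit the sequence approaches
```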
Proof:
We’ll show that the sequence with the general term $(1 + s/n)^n$ is bounded, then strictly increasing for $s > 0$, so that we can apply the Monotone Convergence Theorem to establish the existence of the desired limit for $s > 0$. After that, we’ll use the quotient rule for limits to finish.
Boundedness
The main idea is that the factorial function $k \mapsto k!$ grows much faster than the exponential function (with a fixed base $m$) $k \mapsto m^k$.
- Our definition of exponential function is a limit of a binomial,
- which can be expanded into a sum involving binomial coefficients,
- which is a fraction with $k!$ as the denominator.
- By bounding $1/k!$ above by a term from a well-chosen exponential function (with a base strictly smaller than $1$), we can apply the geometric sum formula (for finitely many terms, then discard the negative term in the numerator) to set up an upper bound independent of $n$.
The above argument only works for $\lvert s \rvert < 2$. For larger $\lvert s \rvert$, we need to change ‘$2$’ to a larger integer $M$, so that the geometric sum formula (for finitely many terms) can be applied. We choose this $M \in \N$ so that $2 \le \lvert s \rvert < M$. It’s clear that $M \ge 3$.
For the case $\lvert s \rvert \ge 2$, in the sum (4), we discard the fraction on the right (by bounding it above by $1$), and we divide the sum into two parts
We’ll only focus on the last term (the “tail”).
It took me a while to work out the tedious details.
Strictly increasing
Suppose that $s > 0$.
Negative exponents
For any $s > 0$, we’re going to show that
We claim that the last term tends to zero as $n \to +\infty$.
This shows the desired limit in the beginning of this subsection. Apply this together with the case that $s>0$ (that we’ve just proved) and the quotient rule for limits to see that
Exercise: try to find a gap in the above argument.
From the above proof, we have the identity $e^s \cdot e^{-s} = 1$.
Remark: Here the condition that $s > 0$ is needed. You may experiment with large negative numbers for $s$ (say $s = -100$) and a short array of large consecutive values of $n$ (say $n \in \lbrace 61, 62, \dots, 80 \rbrace$) to see the oscillation of the sequence $\left((1+s/n)^n\right)_n$.
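The experiment suggested in the remark can be run directly; for $n < 100$ the base $1 + s/n$ is negative, so integer powers alternate in sign with the parity of $n$:

```python
s = -100.0
# For n in 61..80, the base 1 + s/n is negative, so the integer power
# (1 + s/n)**n alternates in sign with the parity of n.
vals = [(1 + s / n) ** n for n in range(61, 81)]
for n, v in zip(range(61, 81), vals):
    print(n, v)
```

Odd $n$ gives a negative value, even $n$ a positive one, so the sequence oscillates rather than converges over this range.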
Main result
The well-known basic identity $e^{s+t} = e^s \cdot e^t$.
Derivation
We’re going to see that only the first term in the above sum is relevant.
Claim:
Proof:
The trick is to find an upper bound which is a term of a geometric sequence.
We used the fact (that allows the definition of the exponential function) that $\lim_{n\to+\infty} (1+(s+t)/n)^n$ exists.
From the above inequality, we have
Apply this limit to the equation before the claim to see that only the first term of the sum has to be kept.
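The identity can be checked numerically against the limit definition; $s$, $t$, and $n$ below are arbitrary sample values of mine.

```python
import math

s, t, n = 0.3, 1.1, 10_000_000
approx_sum = (1 + (s + t) / n) ** n                # approximates e**(s+t)
approx_prod = (1 + s / n) ** n * (1 + t / n) ** n  # approximates e**s * e**t
print(approx_sum, approx_prod, math.exp(s) * math.exp(t))
```

For large $n$ the two approximations agree with each other and with $e^s \cdot e^t$ to several decimal places.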
Definition of log function
To see that the exponential function $\exp:\R\to\R_{+}$ is strictly increasing, we apply the above multiplicative formula. Suppose that $s < t$, and we want to compare $e^s$ with $e^t$. We have $e^t = e^s \cdot e^{t-s}$. Once we can show that $e^{t-s}>1$, then we’re done. Note that the exponent $t-s>0$, so from the first four lines in the subsection Boundedness, we see that $e^{t-s} \ge 1+(t-s)>1$ (only the first two terms in the fourth line are taken).
Thus, the exponential function is injective; granting that it is also surjective onto $\R_{+}$ (which needs a continuity argument that we omit), it is bijective, and it has an inverse function, which is called the natural log and is denoted by $\ln$. It’s obvious that $e^s = a \iff s = \ln(a)$. Two useful forms of this “$\iff$” statement:
- $\ln(e^s) (= \ln(a)) = s$ (i.e. $e^s \overset{\ln}{\mapsto} s$)
- $e^{\ln(a)} (= e^s) = a$ (i.e. $\ln(a) \overset{\exp}{\mapsto} a$).
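These two inverse relationships correspond to `math.exp` and `math.log` in Python; the values $s = 2.5$ and $a = 7.0$ below are arbitrary samples of mine.

```python
import math

s, a = 2.5, 7.0
assert abs(math.log(math.exp(s)) - s) < 1e-12  # ln(e^s) = s
assert abs(math.exp(math.log(a)) - a) < 1e-12  # e^(ln a) = a
```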
Corollary
Proof: write $a = e^s$ and $b = e^t$. Then $ab = e^{s + t}$. Take log on both sides to see that $\ln(ab) = s + t = \ln(a) + \ln(b)$.
General definition of exponents
Let $a > 0$ and $s \in \R$. We define $a^s := e^{s\ln(a)}$. It’s read as “$a$ to the power $s$”.
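This definition is exactly how one can evaluate something like $2^\pi$ from the motivating question; the comparison with Python's built-in power operator below is only a floating-point sanity check, not part of the definition.

```python
import math

a, s = 2.0, math.pi
via_exp = math.exp(s * math.log(a))  # our definition: a**s := e**(s * ln a)
print(via_exp, a ** s)               # Python's a ** s agrees to machine precision
```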
Rediscovery of law of indices
Let $a > 0$, and $s,t \in \R$.
Zero power: $a^0 = 1$
The last equality is from our definition of the exponential function with $s=0$, so that the expression on the right of the $\lim$ symbol becomes the constant $1$.
Exercise: What is $a^1$?
Negative power: $a^{-s} = 1 / a^s$
Recall: at the end of the subsection Negative Exponents, we have the identity $e^t \cdot e^{-t} = 1$, so $e^{-t} = \left(e^t\right)^{-1} = 1 / e^t$ for all $t \in \R$. Put $t = s\ln(a)$, which is the exponent on the RHS of $a^s = e^{s\ln(a)}$ to see that $a^{-s} = 1 / e^{s\ln(a)} = 1 / a^s$.
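A one-line numerical check of this law, with arbitrary sample values $a = 3.0$ and $s = 1.8$:

```python
a, s = 3.0, 1.8
# Negative power law: a**(-s) equals 1 / a**s.
assert abs(a ** (-s) - 1 / a ** s) < 1e-12
```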
Multiplication: $a^s \cdot a^t = a^{s + t}$.
Division: $a^s / a^t = a^{s - t}$
Power of a power: $(a^s)^t = a^{st}$
Multiplication with same index: $(ab)^s = a^s \cdot b^s$
Rediscovery of law of logarithms
Let $a,b > 0$ and $s \in \R$ such that $a^s = b$. A textbook defines $\log_a b$ to be $s$. Using our definition of exponents, $e^{s\ln(a)} = b$, so $s\ln(a) = \ln(b)$, and we can easily retrieve the change of base formula
$$\log_a b = \frac{\ln b}{\ln a}.$$
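The change of base formula is also what Python's two-argument `math.log(b, a)` computes; the sample bases below are my own choices.

```python
import math

a, b = 2.0, 10.0
log_a_b = math.log(b) / math.log(a)  # change of base: log_a(b) = ln(b) / ln(a)
print(log_a_b, math.log(b, a))       # math.log(b, a) gives the same value
```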
Here’re two conventions.
| base $a$ | $\log_a$ written as | name |
|---|---|---|
| $e$ | $\ln$ | natural log |
| $10$ | $\log$ | common log |
I’ll leave the derivation of the remaining laws as an exercise, so that you can practice your understanding of the above definitions and formulae.
Here’s a proof of Minkowski’s inequality. Suppose that $a,b>0$. If $a = b$, we have equality, so WLOG, we assume that $a > b$.
We want to show that $(a-b)^{(n-1)/n} \le \sum_{k=0}^{n-1} \sqrt[n]{\mathstrut a}^k \sqrt[n]{\mathstrut b}^{n-1-k}$. We make a change of variables:
The above inequality becomes $h^{n-1} \le \left(\sum_{k=0}^{n-1} \sqrt[n]{\mathstrut a+h}^k \sqrt[n]{\mathstrut a}^{n-1-k}\right)^n$. On RHS, consider the term with $k = n - 1$:
Here we use the strict monotonicity of the function $x \mapsto x^n$ on $[0,\infty)$. We have strict inequalities since we assume that $a > b$, so $h > 0$. Hence,
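As a sanity check, here is a numerical spot check of the target inequality from the start of this proof, with arbitrary sample values $a > b > 0$ and $n$ of my choosing:

```python
# Check: (a - b)**((n-1)/n) <= sum over k of a**(k/n) * b**((n-1-k)/n),
# which is equivalent to a**(1/n) - b**(1/n) <= (a - b)**(1/n).
a, b, n = 5.0, 2.0, 7
lhs = (a - b) ** ((n - 1) / n)
rhs = sum(a ** (k / n) * b ** ((n - 1 - k) / n) for k in range(n))
print(lhs, rhs)
assert lhs <= rhs
assert a ** (1 / n) - b ** (1 / n) <= (a - b) ** (1 / n)
```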