A cute proof that makes e natural (poshenloh.com)
dawnofdusk 1 day ago [-]
The arxiv preprint linked in this is really good. I'm American so I got my education on e from the compound interest limit which isn't natural at all, as Loh points out. Why should it matter how many times I "split up" my compounding?

IMO exponentials should just not be taught at all without basic notions of calculus (slopes of tangent lines suffice, as Po Shen Loh does here). The geometric intuition matters more than how to algebraically manipulate derivatives. The differential equation is by far the most natural approach, and it deserves to be taught earlier to students, as is apparently done in France and Russia.

seanhunter 14 hours ago [-]
I think the reason the compound interest limit is used may well be the history - that was how Jacob Bernoulli derived e initially[1] - and that around the time in your mathematics education when you might be learning the exponential and natural log functions is probably about the right time to teach series, and it's a lovely example.

[1] This is why it's named Euler's number - because it was discovered by Bernoulli. Many of the things that Euler discovered (like Lambert's W function etc) are named after other people too in the same tradition.

pests 12 hours ago [-]
Your footnote, wut?
seanhunter 11 hours ago [-]
It's a thing in maths that stuff gets named after whoever people decide at the time deserves it, not necessarily the person who discovered it.

General Taylor series were discovered by James Gregory (long after the first Taylor series for sine and cosine etc were written down by Madhava of Sangamagrama) who taught them to Maclaurin who taught them to Taylor.

Lambert's W function (also known as the product log function) was the function that Euler discovered that solved a problem that Lambert couldn't solve.

Gauss' law in physics was discovered by Lagrange. In turn, Lagrange's notation for derivatives is named after him, but was invented far earlier by Euler.

"Feynman's Trick" in calculus of parameterizing and then differentiating under the integral was also discovered by Euler. Like yeah. 250 years isn't enough to stop someone stealing the name of something you discovered. I think Euler discovered so many things people just decided at some point they couldn't name everything after Euler so started giving other people a chance.

The Gaussian distribution was discovered by de Moivre. Gaussian elimination was already in textbooks in Gauss's time, so in his own work he calls it "common elimination".

Arabic numerals were invented by Indian mathematicians.

Practically the only thing we know for absolute certain about Pythagoras is that he didn't discover Pythagoras' theorem (that had been known to the Babylonians centuries earlier).

Bayes never published his paper during his lifetime; it involves a very important thought experiment in probability but not the equation that everyone knows as Bayes' theorem, which was actually written down by Laplace after reading Bayes' paper.

Cantor didn't discover the Cantor set.

etc etc. There are hundreds or possibly thousands of examples. This is known as Stigler's law. https://en.wikipedia.org/wiki/Stigler's_law_of_eponymy

There are two more fun examples, then I'll stop. Kuiper published a paper stating that a ring of asteroids didn't exist in the solar system. So when such a ring was discovered, naturally it was named the Kuiper belt after him. Not maths, but in the same vein, in chess an early theorist called Damiano published an analysis showing that 1. e4 e5 2. Nf3 f5 was losing for black, so now that's called "Damiano's Defence".

srean 6 hours ago [-]
Funniest bit is that Stigler was not the first to discover Stigler's law.

The Fibonacci series goes far, far further back in time than Leonardo of Pisa.

sebastiennight 5 hours ago [-]
That was great reading, thank you! TIL
munchler 24 hours ago [-]
> Why should it matter how many times I "split up" my compounding

It doesn’t, but the limit as the number of splits approaches infinity is obviously an interesting (i.e. “natural”) result.

The perimeter of a polygon with an infinite number of sides is also interesting for the same reason.

thaumasiotes 17 hours ago [-]
>> Why should it matter how many times I "split up" my compounding

> It doesn’t, but the limit as the number of splits approaches infinity is obviously an interesting (i.e. “natural”) result.

Except that the limit as the number of splits approaches infinity is just the declared rate of interest. The computation that ultimately yields e is a mistake, not a natural quantity to calculate.

yorwba 12 hours ago [-]
It is a natural mistake to make, so spending some time showing that it gives the wrong result is probably appropriate. Of course giving that wrong result a special name (especially something short like "e") or even calculating its value to a high degree of precision are pointless at this stage.

Then later when you have formally introduced sequences and how to prove convergence, you can show that (1+1/n)^n is monotonically increasing and bounded above, hence convergent. This is no longer a mistake, but closer to a fun (and quite difficult) mathematical puzzle than anything practical. Naming it "e" is still premature at this point.
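
A quick numeric illustration of that behaviour (plain Python, standard library only; checking a few n is of course an illustration, not the proof):

    import math

    prev = 0.0
    for n in (1, 10, 100, 1_000, 10_000, 100_000):
        a_n = (1 + 1/n) ** n          # the sequence (1 + 1/n)^n
        assert prev < a_n < math.e    # increasing, and bounded above by e
        print(n, a_n)
        prev = a_n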

Then even later when you've introduced differentiation, it's time to talk about the derivative of arbitrary exponential functions, which is where that sequence reappears, and giving e a special name finally becomes appropriate.

It seems like American math curricula are typically so excited to talk about e that they try to skip over all the intermediate steps?

thaumasiotes 10 hours ago [-]
Well, I don't have my calculus textbook to hand, but I can tell you what I took from the class.

1. e is the exponential base for which f'(x) = f(x).

2. ln is the logarithm base e, and when f(x) = ln x, f'(x) = 1/x.

3. e^x is the sum of the series x^n / n! (so e itself is the sum of 1/n!).

4. The textbook did specifically cover the fact that e is the limit of (1 + 1/n)^n as n goes to infinity, and it also specifically tied this in to the idea of computing interest by an obviously incorrect method. You could only call this a "natural mistake to make" in the same sense that it's "natural" to assume the square root of 10 must be 5, or that the geometric mean of two numbers is necessarily equal to the arithmetic mean.

5. However, the limit is important in that it illustrates that one to an infinite power is an indeterminate form.

6. As detailed in points (1) and (2), and hinted by the name "natural logarithm", we measure exponentials and logarithms by reference to e for the same reason we measure angles in radians.

It's possible that this particular definition of e is important to a proof of one of the properties of e^x or ln x, but if so I don't remember reading about it in the textbook and it wouldn't have been covered in class. In my real analysis class, we used the Maclaurin series for e; (1 + 1/n)^n was never mentioned.

(It's really easy to show that that series is monotonically increasing.)

> Then later when you have formally introduced sequences and how to prove convergence

This is not material you'd expect at all in a calculus class. If sequences are mentioned, it would only be in passing as you move to series. Several methods of testing infinite series for convergence are covered. What it means for a sequence to converge is not. Limits are not defined in terms of sequences. Infinite series would be covered after, not before, differential and integral calculus.

You have to have a name for e because otherwise it would be impossible to work with. But it is interesting and the wrong way to compute interest isn't; there's no point in trying to motivate something important with something unimportant.

ogogmad 22 hours ago [-]
Looking at it in terms of compound interest seems random. That said, I think the expression lim((1+x/n)^n) is better motivated within Lie theory, since every Lie group admits a faithful linear representation in which the expression lim((1+x/n)^n) makes sense (even if infinite-dimensional Hilbert spaces might be needed, as in the metaplectic group). Then the subexpression 1+x/n approximates a tangent vector to 1.
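
As a concrete finite-dimensional illustration of that limit (a sketch assuming numpy is available; the so(2) rotation generator stands in for a general tangent vector at the identity):

    import numpy as np

    A = np.array([[0.0, -1.0],
                  [1.0,  0.0]])      # tangent vector at 1: the so(2) rotation generator
    n = 100_000
    approx = np.linalg.matrix_power(np.eye(2) + A / n, n)   # (1 + A/n)^n in the representation
    exact = np.array([[np.cos(1.0), -np.sin(1.0)],
                      [np.sin(1.0),  np.cos(1.0)]])         # exp(A): rotation by one radian
    print(np.max(np.abs(approx - exact)))                   # small, shrinking like 1/n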
dawnofdusk 17 minutes ago [-]
A nice approach (which is also in Po Shen Loh's preprint) is that this limit can be seen as the Euler scheme for solving the differential equation f' = f. This is related to your viewpoint about Lie theory... maybe slightly more digestible for high school students.
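
A minimal sketch of that correspondence (plain Python; the function name is just illustrative): forward Euler for f' = f on [0, 1] with n steps of size 1/n multiplies by (1 + 1/n) at every step, so the computed f(1) is exactly (1 + 1/n)^n.

    import math

    def euler_for_f_prime_equals_f(n):
        """Forward Euler for f' = f, f(0) = 1, on [0, 1] with n steps of size 1/n."""
        f, h = 1.0, 1.0 / n
        for _ in range(n):
            f += h * f                # f_{k+1} = f_k + h*f_k = (1 + 1/n) * f_k
        return f

    for n in (10, 100, 10_000):
        print(n, euler_for_f_prime_equals_f(n), (1 + 1/n) ** n, math.e)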
btilly 1 day ago [-]
While this may convince students, you haven't actually proved that any exponential function has a slope. The usual presentation doesn't even demonstrate that such functions are defined at irrational numbers.

That said, it is worthwhile to go through the algebra exercise to convince yourself that, for large n, (1+x/n)^n expands out to approximately 1 + x + x^2/2 + x^3/6 + ...

Hint. The x^k terms come out to (x/n)^k (n choose k). This will turn out to be x^k/k! + O(x^k/n). As n goes to infinity, the error term drops out, and we're just left with the series that we want.
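
A quick numeric check of the hint (Python standard library only): the coefficient of x^k in (1 + x/n)^n is (n choose k)/n^k, which settles down to 1/k! as n grows.

    from math import comb, factorial

    n = 1_000_000
    for k in range(6):
        coeff = comb(n, k) / n**k     # coefficient of x^k in (1 + x/n)^n
        print(k, coeff, 1 / factorial(k))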

ogogmad 23 hours ago [-]
> for large n, (1+x/n)^n expands out to approximately 1 + x + x^2/2 + x^3/6 + ...

The rigorous version of this argument uses the Dominated Convergence Theorem in the special case of infinite series.

btilly 21 hours ago [-]
There are several ways to make this rigorous.

An explicit epsilon-delta style proof is not that hard to produce, it's just a little messy. What you have to do is, for a given x and ε > 0, pick N large enough that you can bound the tail from x^N/N! onwards by a geometric series adding up to at most ε/2. Now pick n large enough that the sum of errors in the terms up to N is also bounded by ε/2. From that n on, (1+x/n)^n is within ε of the power series for e^x.

LegionMammal978 24 hours ago [-]
How my high-school calculus textbook did it was to first define ln(x) so that ln(1) = 0 and d/dx ln(x) = 1/x, then take exp(x) as the inverse function of ln(x), and finally set e = exp(1). It's definitely a bit different from the exp-first formulation, but it does do a good job connecting the natural logarithm to a natural definition. (It's an interesting exercise to show, using only limit identities and algebraic manipulation, that this is equivalent to the usual compound-interest version of e.)
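
One way the exercise can go, sketched: ln((1 + 1/n)^n) = n ln(1 + 1/n) = (ln(1 + 1/n) - ln(1)) / (1/n), which is a difference quotient for ln at 1, so by the defining property d/dx ln(x) = 1/x it tends to 1, and by continuity the limit of (1 + 1/n)^n is exp(1) = e. A symbolic spot-check (assuming sympy is available; this is an illustration, not the textbook's argument):

    import sympy as sp

    n = sp.symbols('n', positive=True)
    print(sp.limit(n * sp.log(1 + 1/n), n, sp.oo))   # 1
    print(sp.limit((1 + 1/n)**n, n, sp.oo))          # E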
jcranmer 23 hours ago [-]
That's how my textbook did it as well (well, it defined e as ln(e) = 1, but only because it introduced e before exp).

The problem with this approach is that, since we were already introduced to exponents and logarithms in algebra but via different definitions, it always left this unanswered question in my head about how we knew these two definitions were the same, since everyone quickly glossed over that fact.

LegionMammal978 23 hours ago [-]
I suppose the method would be to derive ln(xy) = ln(x) + ln(y) and the corresponding exp(x + y) = exp(x)exp(y), then see how this lets exp(y ln(x)) coincide with repeated multiplication for integer y. Connecting this to the series definition of exp(x) would also take some work, but my textbook wasn't very big on series definitions in general.
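
Sketching the first step: for fixed y, d/dx ln(xy) = y/(xy) = 1/x, so ln(xy) - ln(x) has zero derivative and is constant; evaluating at x = 1 shows the constant is ln(y). A tiny numeric check of both points (plain Python; the numbers are arbitrary):

    import math

    x, y = 2.5, 3.7
    print(math.log(x * y), math.log(x) + math.log(y))   # ln(xy) = ln(x) + ln(y)
    print(math.exp(3 * math.log(x)), x * x * x)         # exp(3 ln x) = x^3 = 15.625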
ogogmad 23 hours ago [-]
Did you eventually realise that the expression a^b should be understood to "really" mean exp(b * ln(a)), at least in the case that b might not be an integer?

I think even in complex analysis, the above definition a^b := exp(b ln(a)) makes sense, since the function ln() admits a Riemann surface as its natural domain and the usual complex numbers as its codomain.

[EDIT] Addressing your response:

> Calculus glosses over the case when a is negative

The Riemann surface approach mostly rescues this. When "a" is negative, and b is 1/3 (for instance), choose "a" = (r, theta) = (|a|, 3 pi). This gives ln(a) = ln |a| + i (3 pi). Then a^b = exp((ln |a| + i 3 pi) / 3) = exp(ln |a|/3 + i pi) = -|a|^(1/3), as desired.

Notice though that I chose to represent "a" using theta=3pi, instead of let's say 5pi.

LegionMammal978 21 hours ago [-]
I see what GP's point is: high-school-level calculus generally restricts itself to real numbers, where the logarithm is simply left undefined for nonpositive arguments. After all, complex analysis has much baggage of its own, and you want to have a solid understanding of real limits, derivatives, integrals, etc. before you start looking into limits along paths and other such concepts.

Even then, general logarithms become messy. It's easy to say "just take local segments of the whole surface" in the abstract, but any calculator will have to make some choice of branch cuts. E.g., clearly (−1)^(1/3) = −1 for any sane version of exponentiation on the reals, but many calculators will spit out the equivalent of (−1)^(1/3) = −e^(4πi/3) instead.
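
A concrete example of that branch choice (plain Python; complex powers use the principal branch of log, and math.cbrt needs Python 3.11+):

    import cmath, math

    z = complex(-1.0)
    print(z ** (1/3))                        # principal value: ~0.5 + 0.866j, i.e. exp(i*pi/3)
    print(cmath.exp((1/3) * cmath.log(z)))   # same thing, written as exp(b * ln(a))
    print(math.cbrt(-1.0))                   # the real cube root: -1.0 (Python 3.11+)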

(Just in general, analytic continuation only makes sense in the abstract realm. If you try doing it numerically to extend a series definition, you'll quickly find out how mind-bogglingly unstable it is. I think there was one paper that showed you need an exponential number of terms and exponentially many bits of accuracy w.r.t. the number of steps. Not even "it's 2025, we can crank it out to a billion bits" can save you from that.)

selimthegrim 20 hours ago [-]
I was once a contractor for TI (and we wrote the same subroutines for Casio) so I can actually answer this. See my story here: https://news.ycombinator.com/item?id=6017670
selimthegrim 16 hours ago [-]
And of course by Muphry's law I managed to confuse real and principal roots in that answer. -1 is not the principal cube root of -1.
jcranmer 22 hours ago [-]
The problem is a^b := exp(b ln(a)) sort of breaks down when a is negative, which is a case that is covered in algebra class but glossed over in calculus.
ogogmad 23 hours ago [-]
I think this approach is the most logically "efficient". You can phrase it as defining ln(x) to be the integral of 1/t from 1 to x. Maybe not the most intuitive, though.

Interestingly, a similar approach gives the shortest proof that exp(x) and ln(x) are computable functions (since integration is a computable functional, thanks to interval arithmetic), and therefore that e = exp(1) is a computable real number.
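
A rough numeric sketch of that approach (plain Python; midpoint quadrature stands in for genuine interval arithmetic, so this illustrates the idea rather than giving a certified computation, and the function names are mine):

    def ln(x, steps=50_000):
        """ln(x) as the integral of 1/t from 1 to x, by the midpoint rule."""
        h = (x - 1.0) / steps
        return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

    def e_from_ln(tol=1e-9):
        """Bisect for the x with ln(x) = 1, i.e. e = exp(1)."""
        lo, hi = 2.0, 3.0
        while hi - lo > tol:
            mid = (lo + hi) / 2.0
            if ln(mid) < 1.0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    print(ln(2.718281828459045))   # ~1.0
    print(e_from_ln())             # ~2.718281828...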

LegionMammal978 23 hours ago [-]
Yeah, the hairiest part is probably the existence and uniqueness of the antiderivative, followed by the existence of an inverse for ln (so that exp(1) is defined). In fact, I can't quite recall whether the book defined it as a Riemann integral or an antiderivative, but of course it had a statement of the FTC which would connect the two. (It was just a high-school textbook, so it tended to gloss over the finer points of existence and uniqueness.)
pwdisswordfishz 13 hours ago [-]
And here I thought it was irrational.
nathan_douglas 21 hours ago [-]
That was lovely; I really enjoyed it. Thank you.
ogogmad 23 hours ago [-]
Tangential fact (har har): The Taylor series for e^x, combined with the uniqueness of representing a real number in base-factorial, immediately shows that e is irrational.
jeremyscanvic 7 hours ago [-]
I know that the irrationality of e can be proven using the theory of continued fractions or by showing that finite Taylor sums for e give out rational approximations that are too good for e to be rational. What you're saying does not ring a bell though. Can you elaborate on that? This sounds really interesting!
analog31 1 day ago [-]
It also makes f flat.
nayuki 15 hours ago [-]
Explaining the joke: In standard Western music's 12-tone equal temperament scale, the pitch class E-natural is exactly the same as the pitch class F-flat. Putting it another way, look at an F key on a piano, flatten it by one semitone by moving left, and you get an E key.
johnp314 1 day ago [-]
At first your comment fell flat with me but then I realized it was pretty sharp. You are a natural.

But the cute proof was pretty cute. I recommend calc teachers try to work it into their lecture on e.
