Tuesday, December 11, 2007

Totally Clueless Challenge #2 - By All Means!

It's been a while, but something this good is always worth waiting for!
TC has sent me some fascinating challenge problems for our readers. If you are now sick of watching amateur videos on the Arithmetic and Geometric Mean Inequality, it's time to raise the bar. The following involves a well-known generalization of these means but the results are worth your efforts, particularly parts (c) and (d) below.

If a and b are positive, we can define their generalized mean to be:

GNM = ((a^k + b^k)/2)^(1/k)

This would look far prettier in LaTeX but I'm hoping it's readable. In words, we're looking at:
The kth root of the arithmetic mean of the kth powers of a and b.

(a) What is another name for the result when k = 1? (we're starting off easy here!)
(b) What is another name for the result when k = -1? (slightly harder algebraically)
(c) Ok, now for the real challenge for you Calculus lovers:
What is the limit of GNM as k-->0? The result is totally cool!
(d) TC's Super Bonus: Show that the limit of GNM as k-->∞ is the maximum of a and b.
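Before reading on, it can help to see where parts (c) and (d) are heading numerically. A minimal sketch (not part of TC's original challenge; the sample values a = 2, b = 8 are my own choice, picked so the geometric mean comes out to a nice number):

```python
def gnm(a, b, k):
    """Generalized (power) mean of a and b with exponent k (k != 0)."""
    return ((a**k + b**k) / 2) ** (1 / k)

a, b = 2.0, 8.0  # sample values; sqrt(a*b) = 4, max(a, b) = 8

# Part (c): as k --> 0, GNM approaches the geometric mean sqrt(a*b).
for k in (1.0, 0.1, 0.01, 0.001):
    print(f"k = {k:>6}: GNM = {gnm(a, b, k):.6f}")

# Part (d): as k --> infinity, GNM approaches max(a, b).
# (k is kept moderate to avoid floating-point overflow of b**k.)
for k in (1, 10, 50, 100):
    print(f"k = {k:>6}: GNM = {gnm(a, b, k):.6f}")
```

The first loop drifts down toward 4 and the second climbs toward 8, which is exactly what (c) and (d) claim.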

Note: These have been slightly edited from TC's original problems, but they are essentially the same. Solutions may be posted in a couple of days, although the notations will be hard to render. I might just have to do another video or wait for that special technology I mentioned earlier! We're hoping some of you will tackle the harder ones and comment!

14 comments:

Anonymous said...

Hi dave:
The answer to c is:
sqrt(a*b)

Dave Marain said...

Nice, Cecil!
Now prove it using L'Hopital's Rule!

Anonymous said...

Hi Cecil,

If you obtained that answer without using L'Hospital's rule, maybe you could tell us how.

The answer to (c) is the geometric mean of a & b, as you point out. What I find cool is how we started out with an additive mean and ended up with a multiplicative one!!!

TC

Dave Marain said...

Of course whenever we see
additive relationships turn into multiplicative ones, we know logs are lurking! Students who are preparing for the BC Calculus Exam need to know how to handle this more difficult application of L'Hopital's (they may not see this topic for a few months yet).

And speaking of the theoretical development of Calculus, say hello today to our mystery mathematician #3!

Eric Jablow said...

TC,

I can't prove it without l'Hôpital's rule, but I can explain the result.

For t small, a^t ≈ 1 + t log a, by the first-order Taylor approximation. Similarly, b^t ≈ 1 + t log b, and so (a^t + b^t)/2 ≈ (2 + t log a + t log b)/2 = 1 + t log√(ab).

Now, raise that expression to the 1/t power. You'd probably recognize it better if we let 1/t = s; now s →+∞, and we are looking for (1+log√(ab)/s)^s, and this tends to e^(log √(ab)) = √(ab).

To do this right, I would need to show that the error in the approximation doesn't matter. I tried to avoid resorting to l'Hôpital's rule when I was teaching mathematics; you don't learn anything from l'Hôpital's rule.
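[Eric's first-order argument is easy to check numerically. A small sketch (the sample values a = 2, b = 8 are mine, not Eric's) comparing (a^t + b^t)/2 against his approximation 1 + t log√(ab), and then watching the full expression tend to √(ab) = 4:]

```python
import math

a, b = 2.0, 8.0           # sample values; sqrt(a*b) = 4
gm = math.sqrt(a * b)     # geometric mean

# Compare (a^t + b^t)/2 with the first-order approximation 1 + t*log(sqrt(ab)).
for t in (0.1, 0.01, 0.001):
    exact = (a**t + b**t) / 2
    approx = 1 + t * math.log(gm)
    print(f"t = {t:>5}: exact = {exact:.8f}, approx = {approx:.8f}")

# The full expression [(a^t + b^t)/2]^(1/t) then tends to sqrt(ab).
for t in (0.1, 0.01, 0.001):
    print(f"t = {t:>5}: mean = {((a**t + b**t) / 2) ** (1 / t):.6f}")
```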

Eric Jablow said...

Those were supposed to be approximation symbols, not equals symbols. Sorry.

Dave Marain said...

Eric--
Your students were very fortunate.
What you're objecting to of course is teaching procedurally as opposed to conceptually. I hope you know that I share those sentiments completely. Whenever possible, I avoided teaching algorithms as if they were a black box.

When introducing L'Hopital's, we would first investigate numerical values of ratios like (x^2)/(x^3+1) as x→∞. I would ask, "How is it that both the numerator and denominator are increasing without bound, yet the ratio approaches zero?" Invariably, someone would reply, "Because x^3+1 is increasing at a faster rate!" At that moment, I would feel like Archimedes running into the street, yelling "Eureka!" This motivated L'Hopital's approach. If the numerator and denominator both evaluated to zero, I would use linear approximation just as you suggested (Taylor's came later):
(f(a) + f'(a)(x-a))/(g(a) + g'(a)(x-a)), which, since f(a) = g(a) = 0 in the 0/0 case, reduces to f'(a)/g'(a). This always made sense to them before starting a more formal proof or just applying the method. Of course, Eric, this discussion applies to EVERY math concept we present, doesn't it?
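[Dave's numerical investigation is a few lines of code; the particular x values below are my own sample choices:]

```python
# The ratio x^2 / (x^3 + 1): numerator and denominator both increase
# without bound, yet the ratio tends to 0 -- the cubic grows faster.
for x in (10, 100, 1000, 10**6):
    print(f"x = {x:>7}: x^2/(x^3+1) = {x**2 / (x**3 + 1):.10f}")
```

Watching the printed values shrink toward 0 is precisely the "Eureka!" moment Dave describes, and it motivates comparing rates of growth before any formal statement of L'Hopital's Rule.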

Anonymous said...

Thank you, Eric, for that nice explanation.

I knew there was a limit form of e^x lurking there somewhere but did not pursue it more.

A couple of extensions if anyone is interested:
(1) Simple: The limit of the generalized mean as k-->-∞
(2) Not easy (IMHO, I haven't been able to complete a proof yet): Show that the generalized mean of two numbers a & b is an increasing function of k.

I have seen this behavior in experiments, and we certainly know that the results for k = -1, 0, and 1 are in increasing order.
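[One such experiment, as a sketch (the sample values a = 2, b = 8 and the list of exponents are my own choices; this is evidence, not a proof):]

```python
def gnm(a, b, k):
    """Generalized (power) mean of a and b with exponent k (k != 0)."""
    return ((a**k + b**k) / 2) ** (1 / k)

a, b = 2.0, 8.0  # sample values with a != b
ks = [-10, -2, -1, -0.5, 0.5, 1, 2, 10]
vals = [gnm(a, b, k) for k in ks]

for k, v in zip(ks, vals):
    print(f"k = {k:>5}: GNM = {v:.6f}")

# If the conjecture holds, the values should be strictly increasing in k.
print(all(x < y for x, y in zip(vals, vals[1:])))
```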


TC

Anonymous said...

Dave:
Using L'Hospital's rule (or theorem) is the way I did it. Part (d) is done the same way, but first decide which of a or b is larger.

Dave Marain said...

Cecil--
Thanks for the clarification. I agree both are handled the same way, but the max part seemed more complicated. I actually replaced a and b by t and 1 where t = a/b and assumed, WLOG, that a is greater than b. This simplified the L'Hopital's work considerably. I may be posting TC's detailed solutions rendered in pretty Texify in a day or so.

TC-- do we have another TCC (TC Conjecture) or TCT (TC Theorem) here with the increasing piece?

Anonymous said...

Hi Dave,

In the absence of a proof, it is still a conjecture, and I would really, really want it to be true.


TC

Anonymous said...

So, not related to this posting.

I have an entry to the mathematician contest, but I can't seem to find your email address anywhere. Help!

Eric Jablow said...

I figured out how to finish that argument without using l'Hôpital's rule.

We are trying to find the limit of [(a^t + b^t)/2]^(1/t), which can be expressed as [1+log√(ab) t + q(t)]^(1/t),
where q is a power series starting with the t^2 term. Just use the full Taylor series for a^t and b^t. Note that q converges everywhere.

Now, write 1 + log√(ab) t + q(t) as [1 + log√(ab) t] [1 + r(t)] for another power series r. You can find r by division, or by multiplying by 1/[1 + log√(ab) t]. The resulting power series has radius of convergence 1/|log√(ab)| > 0. Furthermore, r(t) also begins with the quadratic term. So, we are looking for

[1 + log√(ab) t]^(1/t) [1 + r(t)]^(1/t). By convergence, for t small enough, the second factor is dominated by [1+c t^2]^(1/t) for some c (greater than the t^2 coefficient of r). But the base goes to 1 faster than the exponent goes to infinity!

In fact, we'd get that [1+c t^2]^(1/t^2) goes to e^c, and so [1+c t^2]^(1/t) has the same limit as e^(ct). But that goes to 1.
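[The key claim, that [1 + c t^2]^(1/t) tends to 1, is easy to check numerically; c = 3 below is an arbitrary sample constant:]

```python
import math

c = 3.0  # arbitrary sample constant
for t in (0.1, 0.01, 0.001):
    val = (1 + c * t * t) ** (1 / t)
    # e^(ct) is the limit Eric identifies for comparison; both tend to 1.
    print(f"t = {t:>5}: (1 + c t^2)^(1/t) = {val:.8f}, e^(ct) = {math.exp(c * t):.8f}")
```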

Manipulations of infinite series are a somewhat old-fashioned mathematical technique, but they tell you what is going on. Now, I don't have that much disdain for l'H. Certainly the professional engineer needs it in its own right. Students need to learn it. But mathematics is more than simple calculation.

Anonymous said...

Never mind my last post. I just didn't look carefully enough. Oops!