
January 5, 2007

Add so as to multiply (part 2)

"Add so as to multiply" looked at differentiable solutions to the functional equation
g(xy) = g(x) + g(y)
and showed that they were all of the form
g(x) = k \ln x
i.e. they are all logarithms in some base. But now what if we drop differentiability and just look for a function on the positive real numbers, proceeding from first principles? (We stick to positive numbers for now because g has a problem at 0. If we set y to 0 in the functional equation, we get
g(0) = g(x \cdot 0) = g(x) + g(0)
so g(x) = 0 for every x, and if g ever takes on a non-zero value we have a contradiction, unless g is undefined at 0.) We pursue an exploratory argument that could be made much more compact, but for exploration's sake I won't do that. Perhaps I will write a very compact version as a later blog article.

First consider the functional equation when we set x and y to 1.
g(1) = g(1 \cdot 1) = g(1) + g(1)
So
g(1) = 0
Now what if we set y to be x to the power n-1, where n is a positive integer? Then
g(x^n) = g(x \cdot x^{n-1}) = g(x) + g(x^{n-1})
so we may decrement the exponent of x at the cost of an added g(x). Decrementing n times, all the way down to exponent 0, gives
g(x^n) = n \, g(x)
Note that this also works when n is 0, because g(1) is 0. Call this the natural exponent rule. So if x is a positive natural number and we factor it into a product of powers of distinct primes
x = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}
then we may use the original functional equation and the natural exponent rule to find that
g(x) = a_1 g(p_1) + a_2 g(p_2) + \cdots + a_k g(p_k)
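To make this concrete, here is a minimal Python sketch (with hypothetical helper names `prime_factorize` and `g`) that assigns arbitrary values to g at a few primes and computes g elsewhere via the factorization formula; the functional equation still holds on products of those primes:

```python
import math

def prime_factorize(n):
    """Return the prime factorization of n as {prime: exponent}."""
    factors = {}
    p = 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def g(n, g_at_primes):
    """g(n) = sum of a_i * g(p_i) over the factorization n = prod p_i^a_i."""
    return sum(a * g_at_primes[p] for p, a in prime_factorize(n).items())

# An arbitrary (decidedly non-logarithmic) assignment at the primes:
values = {2: 1.0, 3: -5.0, 5: 42.0}
# The functional equation still holds on products of these primes:
assert abs(g(12, values) - (g(4, values) + g(3, values))) < 1e-9
assert abs(g(90, values) - (g(6, values) + g(15, values))) < 1e-9
```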
So the value of g at each positive natural number x is determined by its values at the prime numbers. In fact, as we will see, we may define g(2), g(3), g(5), g(7), g(11) and so forth quite arbitrarily, so unless we impose some constraint upon g it can be quite nasty. However, we will now show that if we insist that g is a continuous function, then fixing one value of g fixes all the others ... ok, back to the details. Suppose we set y to be the reciprocal of x in the original functional equation. This gives
g(1) = g(x \cdot 1/x) = g(x) + g(1/x)
and since g(1) is zero we have
g(1/x) = -g(x)
Call the last rule the quotient rule. So now we have the rule
g(x^a) = a \, g(x)
for any integer a. Now what about rational exponents? Let's start with an exponent of 1/n, where n is a natural number.
g(x) = g\left((x^{1/n})^n\right) = n \, g(x^{1/n}), \quad \text{so} \quad g(x^{1/n}) = \tfrac{1}{n} \, g(x)
So now we know that the rule
g(x^a) = a \, g(x)
works when a is the reciprocal of a positive natural number. What about any positive rational exponent, m/n ? Applying the rules we know so far, we get
g(x^{m/n}) = g\left((x^{1/n})^m\right) = m \, g(x^{1/n}) = \tfrac{m}{n} \, g(x)
So now we know that the rule
g(x^a) = a \, g(x)
works for any positive rational exponent. What about any negative rational exponent -m/n ?
g(x^{-m/n}) = g\left(1 / x^{m/n}\right) = -g(x^{m/n}) = -\tfrac{m}{n} \, g(x)
Putting all these together we know that the rule
g(x^a) = a \, g(x)
applies whenever a is rational. Call this the rational exponent rule, and note that there's nothing here forcing x to be rational, just positive by our current convention.
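As a numerical sanity check (not part of the argument), the rational exponent rule can be tested in Python with g taken to be a logarithm, which we already know solves the functional equation:

```python
import math
from fractions import Fraction

# g = log satisfies g(x^a) = a * g(x); check it for a few rational
# exponents a and positive (not necessarily rational) bases x.
g = math.log
for a in [Fraction(3, 7), Fraction(-5, 2), Fraction(0, 1)]:
    for x in [0.5, 2.0, math.pi]:
        assert abs(g(x ** float(a)) - float(a) * g(x)) < 1e-9
```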

What about g(2), g(3), g(5), g(7), g(11) and so forth? Roughly speaking these are independent in general, because no prime is a rational power of another: the primes 2, 3, 5, 7, 11 and so on are related only by irrational (real) exponents. And unless g is required to be continuous, we do not know that the rule
g(x^a) = a \, g(x)
applies for all real exponents. If we knew this, then we could conclude for example that
g(3) = g(2^{\lg 3}) = \lg 3 \cdot g(2)
(where lg denotes the base 2 logarithm, as usual) and so only one g value would be arbitrary, say g(2), and then the rules would determine g on any rational argument (you might like to check this). So suppose we fix g(2) and assert that g must be continuous. Let p be an odd prime (i.e. not 2). We know that
p = 2^{\lg p}
but we need to approximate this arbitrarily closely with rational exponents, so we can use the rational exponent rule. Here's how we may play that game! Let n be an arbitrarily large natural number, then
2^{\lfloor n \lg p \rfloor / n} < p < 2^{(\lfloor n \lg p \rfloor + 1) / n}
Notice that the two exponents have numerators differing by 1, and we may use strict inequalities because lg p is irrational. The denominators of both exponents are n, so the error in each exponent is at most 1/n. So by choosing n sufficiently large we may approximate the genuine but irrational exponent as closely as we please. That is to say
\lim_{n \to \infty} 2^{\lfloor n \lg p \rfloor / n} = p
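Here is a small Python sketch of that approximation game (the helper name `approx` is hypothetical); it exhibits the strict underestimate and the 1/n error bound for p = 3:

```python
import math

def approx(p, n):
    """2 raised to floor(n * lg p) / n: a rational-exponent
    underestimate of p, usable with the rational exponent rule."""
    return 2.0 ** (math.floor(n * math.log2(p)) / n)

p = 3
for n in [10, 100, 1000, 10_000]:
    assert approx(p, n) < p            # strict, since lg 3 is irrational
    assert p - approx(p, n) < 3.0 / n  # the error shrinks at least like 1/n
```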
If g is continuous (which informally means approximable, i.e. g of an approximation to p is approximately g of p, and this can be made as accurate as you like by choosing better and better approximations to p), then applying g to both sides gives
g(p) = g\left(\lim_{n \to \infty} 2^{\lfloor n \lg p \rfloor / n}\right) = \lim_{n \to \infty} g\left(2^{\lfloor n \lg p \rfloor / n}\right)
where continuity allows us to move the limit. (Notice how this is just a limit of approximations as in the parenthesised statement above.) Now using rational exponent rule for g gives
g(p) = \lim_{n \to \infty} \frac{\lfloor n \lg p \rfloor}{n} \, g(2)
and as we take the limit the floor makes a vanishing difference, so
\lim_{n \to \infty} \frac{\lfloor n \lg p \rfloor}{n} = \lg p
So we may conclude that
g(p) = \lg p \cdot g(2)
if g is continuous, for any odd prime p. So now if x is any natural number, g(x) is determined from its factorization as before
g(x) = a_1 g(p_1) + a_2 g(p_2) + \cdots + a_k g(p_k)
but now we know that for each i
g(p_i) = \lg p_i \cdot g(2)
and so
g(x) = (a_1 \lg p_1 + a_2 \lg p_2 + \cdots + a_k \lg p_k) \, g(2) = \lg x \cdot g(2)
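As a quick check, fixing an arbitrary value for g(2) and defining g(x) = lg x · g(2) reproduces the factorization formula and satisfies the functional equation; a Python sketch (the value 0.37 is chosen arbitrarily):

```python
import math

g2 = 0.37  # an arbitrary value for g(2); continuity then forces the rest
def g(x):
    return math.log2(x) * g2

# g matches the factorization formula, e.g. 360 = 2^3 * 3^2 * 5:
assert abs(g(360) - (3 * g(2) + 2 * g(3) + g(5))) < 1e-12
# and satisfies the functional equation on arbitrary positive reals:
assert abs(g(math.pi * math.e) - (g(math.pi) + g(math.e))) < 1e-12
```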
Now the quotient rule gives g at any positive rational number x = m/n, because
g(m/n) = g(m) - g(n)
and because g is continuous, this defines g on all positive real numbers as well (by rational approximation, in the spirit of continuity as exemplified above). So we conclude that the only continuous solutions of the functional equation on the positive reals are of the form
g(x) = k \lg x, \quad \text{where } k = g(2)
i.e. all continuous solutions are in fact differentiable, and there are no more than before.

What happens for negative x? Well, if we set both x and y to -1 in the original functional equation we get
g(1) = g((-1) \cdot (-1)) = g(-1) + g(-1) = 2 \, g(-1), \quad \text{so} \quad g(-1) = 0
So if we just set y to -1 in the original functional equation we get
g(-x) = g(-1) + g(x) = g(x)
So the final conclusion is
g(x) = k \lg |x| \quad (x \neq 0)
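This final form is easy to test numerically; here is a Python sketch with an arbitrarily chosen constant k, confirming that the functional equation survives on negative arguments too:

```python
import math

k = 1.7  # arbitrary constant for this sketch
def g(x):
    return k * math.log2(abs(x))  # defined for every x except 0

# The functional equation holds whatever the signs of x and y:
for x, y in [(-2.0, -3.0), (-2.0, 3.0), (0.5, -8.0)]:
    assert abs(g(x * y) - (g(x) + g(y))) < 1e-12
```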

December 15, 2006

Add so as to multiply

In a discussion of entropy it may be pointed out that when physical systems are paired, the total number of microstates multiplies, since a microstate of the composite system is a pair of microstates of the separate systems. And yet the entropy (which is a measure of our ignorance of the exact state of a system) adds under these conditions. Boltzmann took this to mean that the entropy is proportional to the logarithm of the number of microstates, a famous result that endures today. This is because any logarithm satisfies the following functional equation, mapping multiplication of numbers to addition of their logarithms.
g(xy) = g(x) + g(y)
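For a concrete instance, Boltzmann's S = k ln W turns the multiplication of microstate counts into addition of entropies; a short Python sketch (the microstate counts are made up for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K

def entropy(microstates):
    """Boltzmann entropy S = k * ln(W)."""
    return K_B * math.log(microstates)

w1, w2 = 10**6, 10**9
# Pairing the systems multiplies the microstate counts ...
w_total = w1 * w2
# ... while the entropies add:
assert abs(entropy(w_total) - (entropy(w1) + entropy(w2))) < 1e-35
```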
But now we may ask which functions g actually satisfy this equation! If g is differentiable, then we may proceed as follows. First, partially differentiate both sides with respect to y.
x \, g'(xy) = g'(y)
Now set y to 1, and divide by x.
g'(x) = \frac{g'(1)}{x}
Look familiar? Integrating both sides gives a logarithm.
g(x) = g'(1) \ln x + C
And we may write k for the coefficient of the logarithm, and note that if this is to satisfy the original functional equation, the constant of integration C must be zero. So the general differentiable solution is
g(x) = k \ln x
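As a numerical check of the derivation (a rough sketch, not part of the original argument), a central-difference approximation confirms that g'(x) = g'(1)/x for g(x) = k ln x, and that g satisfies the functional equation:

```python
import math

def num_deriv(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

k = 2.5  # an arbitrary constant for this sketch
g = lambda x: k * math.log(x)

# The derived relation g'(x) = g'(1) / x holds numerically:
for x in [0.3, 1.0, 7.0]:
    assert abs(num_deriv(g, x) - num_deriv(g, 1.0) / x) < 1e-4

# and g satisfies the original functional equation:
assert abs(g(6.0) - (g(2.0) + g(3.0))) < 1e-12
```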
But what if we care only about continuous functions: are there any more solutions? More on this soon!