## January 5, 2007

### Add so as to multiply (part 2)

Add so as to multiply looked at differentiable solutions to the functional equation

$$g(xy) = g(x) + g(y)$$

and showed that they were all of the form

$$g(x) = k \ln x$$

i.e. they are all logarithms in some base. But now what if we drop differentiability and just look for a function on the positive real numbers, and proceed from first principles? (We stick to positive numbers for now because g has a problem at 0. If we set y to 0 in the functional equation, we get

$$g(0) = g(x \cdot 0) = g(x) + g(0)$$

so if g ever takes on a non-zero value we have a contradiction, unless g is undefined at 0.) We pursue an exploratory argument that could be made a lot more compact, but I won't do that, for exploration's sake. Perhaps I will write a very compact version as a later blog article.
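
As a quick numerical sanity check (mine, not from the original post), any logarithm satisfies the functional equation; base 10 below is an arbitrary illustrative choice.

```python
import math

# g is a logarithm in an arbitrary base (10 here, purely illustrative);
# any base satisfies g(x*y) = g(x) + g(y).
def g(x):
    return math.log10(x)

# Sample several positive x and y and check the functional equation.
for x in (0.5, 2.0, 3.7, 100.0):
    for y in (0.25, 1.0, 9.9):
        assert math.isclose(g(x * y), g(x) + g(y))
```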

First consider the functional equation when we set x and y to 1.

$$g(1) = g(1 \cdot 1) = g(1) + g(1)$$

So

$$g(1) = 0.$$

Now what if we set y to be x to the power n-1, where n is a positive integer? Then

$$g(x^n) = g(x) + g(x^{n-1})$$

so we may decrement the exponent of x by adding g(x). Decrementing it n times gives

$$g(x^n) = n \, g(x).$$

Note that this also works when n is 0, because g(1) is 0. Call this the natural exponent rule. So if x is a positive natural number and we factor it into a product of distinct prime powers

$$x = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$$

then we may use the original functional equation and the natural exponent rule to find that

$$g(x) = a_1 g(p_1) + a_2 g(p_2) + \cdots + a_k g(p_k).$$

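
The factorization identity can be checked numerically with g taken to be the natural logarithm; the trial-division helper below is my own illustrative sketch, not from the original post.

```python
import math
from collections import Counter

def factorize(n):
    """Prime factorization by trial division: returns {prime: exponent}."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

# With g = natural log, g(x) should equal the sum of a_i * g(p_i).
x = 360  # = 2^3 * 3^2 * 5
total = sum(a * math.log(p) for p, a in factorize(x).items())
assert math.isclose(math.log(x), total)
```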
So the value of g at each positive natural number x depends upon its values at all of the prime numbers. In fact, as we will see, we may define g(2), g(3), g(5), g(7), g(11) and so forth quite arbitrarily. So unless we impose some constraint upon g, it can be quite nasty. However, we will now show that if we insist that g is a continuous function, then fixing one value of g fixes all the others ... ok, back to the details. Suppose we set y to be the reciprocal of x in the original functional equation. This gives

$$g(1) = g(x) + g(1/x)$$

and since g(1) is zero we have

$$g(1/x) = -g(x).$$

Call the last rule the quotient rule. So now we have the rule

$$g(x^a) = a \, g(x)$$

for any integer a. Now what about rational exponents? Let's start with an exponent of 1/n, where n is a positive natural number.

$$g(x) = g\left( (x^{1/n})^n \right) = n \, g(x^{1/n}), \quad \text{so} \quad g(x^{1/n}) = \frac{1}{n} \, g(x)$$

So now we know that the rule

$$g(x^a) = a \, g(x)$$

works when a is the reciprocal of a positive natural number. What about any positive rational exponent m/n? Applying the rules we know so far, we get

$$g(x^{m/n}) = m \, g(x^{1/n}) = \frac{m}{n} \, g(x).$$

So now we know that the rule

$$g(x^a) = a \, g(x)$$

works for any positive rational exponent. What about a negative rational exponent -m/n? The quotient rule gives

$$g(x^{-m/n}) = -g(x^{m/n}) = -\frac{m}{n} \, g(x).$$

Putting all these together we know that the rule

$$g(x^a) = a \, g(x)$$

applies whenever a is rational. Call this the rational exponent rule, and note that there's nothing here forcing x to be rational, just positive by our current convention.
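
A numerical illustration of the rational exponent rule (my sketch, with g taken to be the natural logarithm): x may be any positive real, while the exponent runs over positive, negative, and zero rationals.

```python
import math
from fractions import Fraction

def g(x):
    return math.log(x)  # g = natural log, one concrete solution

# Rational exponent rule: g(x**a) == a * g(x) for rational a, positive real x.
exponents = (Fraction(3), Fraction(1, 7), Fraction(5, 3), Fraction(-2, 9), Fraction(0))
for a in exponents:
    for x in (2.0, math.pi, 0.3):
        assert math.isclose(g(x ** float(a)), float(a) * g(x), abs_tol=1e-12)
```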

What about g(2), g(3), g(5), g(7), g(11) and so forth? Roughly speaking these are independent in general, because the primes 2, 3, 5, 7, 11, etcetera are not mutually related by rational exponents, only by real ones. And unless g is required to be continuous, we do not know that the rule

$$g(x^a) = a \, g(x)$$

applies for all real exponents. If we knew this, then we could conclude for example that

$$g(3) = g(2^{\lg 3}) = \lg 3 \cdot g(2)$$

(where lg denotes the base 2 logarithm as usual) and so only one g value would be arbitrary, say g(2), and then the rules would determine g on any rational argument (you might like to check this). So suppose we fix g(2) and assert that g must be continuous. Let p be an odd prime (i.e. not 2). We know that

$$p = 2^{\lg p}$$

but we need to approximate this arbitrarily closely with rational exponents, so we can use the rational exponent rule. Here's how we may play that game! Let n be an arbitrarily large natural number; then

$$2^{\lfloor n \lg p \rfloor / n} < p < 2^{(\lfloor n \lg p \rfloor + 1)/n}.$$

Notice that the two exponents have numerators that differ by 1, and we can definitely use strict inequality because lg p is irrational (if lg p were m/n, then 2^m = p^n, contradicting unique factorization). But the denominators of the exponents are both n, so the error in each exponent is at most 1/n. So by choosing sufficiently large n we may approximate the genuine but irrational exponent as closely as we please. That is to say

$$p = \lim_{n \to \infty} 2^{\lfloor n \lg p \rfloor / n}.$$

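
Here is a small numerical check (mine, not from the original post) of this approximation for p = 3: the rational-exponent power of 2 stays strictly below p and converges to it as n grows.

```python
import math

p = 3  # an odd prime
lg_p = math.log2(p)

prev_error = float("inf")
for n in (10, 100, 10_000, 1_000_000):
    approx = 2 ** (math.floor(n * lg_p) / n)
    assert approx < p            # strict, since lg 3 is irrational
    error = p - approx
    assert error < prev_error    # the approximation improves as n grows
    prev_error = error
assert prev_error < 1e-5         # very close by n = 1,000,000
```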
If g is continuous (which informally means approximable, i.e. g of an approximation to p is approximately g of p, and this can be made as accurate as you like by choosing better and better approximations of p), then applying g to both sides gives

$$g(p) = g\left( \lim_{n \to \infty} 2^{\lfloor n \lg p \rfloor / n} \right) = \lim_{n \to \infty} g\left( 2^{\lfloor n \lg p \rfloor / n} \right)$$

where continuity allows us to move the limit outside g. (Notice how this is just a limit of approximations, as in the parenthesised statement above.) Now using the rational exponent rule for g gives

$$g(p) = \lim_{n \to \infty} \frac{\lfloor n \lg p \rfloor}{n} \, g(2)$$

and as we take the limit the floor makes a vanishing difference, so

$$\lim_{n \to \infty} \frac{\lfloor n \lg p \rfloor}{n} = \lg p.$$

So we may conclude that

$$g(p) = \lg p \cdot g(2)$$

if g is continuous, for any odd prime p (and trivially for p = 2, since lg 2 = 1). So now if x is any positive natural number, g(x) is determined from its factorization as before

$$g(x) = a_1 g(p_1) + a_2 g(p_2) + \cdots + a_k g(p_k)$$

but now we know that for each i

$$g(p_i) = \lg p_i \cdot g(2)$$

and so

$$g(x) = g(2) \left( a_1 \lg p_1 + \cdots + a_k \lg p_k \right) = g(2) \lg x.$$

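
To illustrate this numerically (my own sketch): pick an arbitrary value for g(2), define g on primes by g(p) = lg p · g(2), extend multiplicatively over factorizations, and compare with g(2) lg x.

```python
import math

c = 0.31  # an arbitrary choice of g(2); any value works

def g(x):
    """g on positive naturals, extended multiplicatively from g(p) = lg(p) * c."""
    total, d, n = 0.0, 2, x
    while d * d <= n:
        while n % d == 0:           # each prime factor d contributes c * lg(d)
            total += c * math.log2(d)
            n //= d
        d += 1
    if n > 1:                       # leftover prime factor
        total += c * math.log2(n)
    return total

# The factorization-based value agrees with g(2) * lg x for every natural x.
for x in (2, 12, 360, 1001, 65536):
    assert math.isclose(g(x), c * math.log2(x))
```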
Now the quotient rule gives g for x any positive rational number m/n, because

$$g(m/n) = g(m) - g(n) = g(2)(\lg m - \lg n) = g(2) \lg(m/n)$$

and because g is continuous, this defines g on all positive real numbers as well (by rational approximation, in the spirit of continuity as exemplified above). So we conclude that the only continuous solutions of the functional equation on the positive reals are of the form

$$g(x) = k \lg x \quad \text{where } k = g(2)$$

i.e. all continuous solutions are in fact differentiable, and there are no more than before.

What happens for negative x? Well, if we set both x and y to -1 in the original functional equation we get

$$g(1) = g(-1) + g(-1) = 2 g(-1), \quad \text{so} \quad g(-1) = 0.$$

So if we just set y to -1 in the original functional equation we get

$$g(-x) = g(x) + g(-1) = g(x).$$

So the final conclusion is

$$g(x) = k \lg |x| \quad \text{for all nonzero } x.$$

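
As a final numerical check (mine, with an arbitrary constant k), the concluded form satisfies the functional equation for nonzero reals of either sign.

```python
import math

k = 2.5  # arbitrary constant, purely illustrative

def g(x):
    return k * math.log2(abs(x))  # the concluded form, defined for all nonzero x

# g(x*y) == g(x) + g(y) holds for any mix of signs.
for x in (-3.0, -0.5, 2.0, 7.5):
    for y in (-4.0, 1.5):
        assert math.isclose(g(x * y), g(x) + g(y))
```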
## December 15, 2006

### Add so as to multiply

In a discussion of entropy it may be pointed out that when physical systems are paired, the total number of microstates multiplies, since a microstate of the composite system is a pair of states of the separate systems. And yet the entropy (which is a measure of our ignorance of the exact state of a system) adds under these conditions. Boltzmann took this to mean that the entropy is proportional to the logarithm of the number of microstates, a famous result that endures today. This is because any logarithm satisfies the following functional equation, mapping multiplication of numbers to addition of their logarithms.

$$g(xy) = g(x) + g(y)$$

But now we may ask which functions g actually satisfy this equation! If g is differentiable, then we may proceed as follows. First, partially differentiate with respect to y.

$$x \, g'(xy) = g'(y)$$

Now set y to 1, and divide by x.

$$g'(x) = \frac{g'(1)}{x}$$

Look familiar? Integrating both sides gives a logarithm.

$$g(x) = g'(1) \ln x + C$$

And we may write k for the coefficient of the logarithm, and note that if this is to satisfy the original functional equation, the constant of integration C must be zero. So the general differentiable solution is

$$g(x) = k \ln x.$$

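
The differentiation step can be checked numerically (my sketch): for a logarithm in any base, a finite-difference estimate of g'(x) matches g'(1)/x.

```python
import math

def g(x):
    return math.log(x, 7)  # a logarithm in an arbitrary base (7 is illustrative)

def deriv(f, x, h=1e-6):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

k = deriv(g, 1.0)  # g'(1); this becomes the constant k in g(x) = k ln x
for x in (0.5, 2.0, 10.0):
    assert math.isclose(deriv(g, x), k / x, rel_tol=1e-5)
```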
But what if we care only about continuous functions: are there any more solutions? More on this soon!