### Add so as to multiply (part 2)

*Add so as to multiply* looked at differentiable solutions to the functional equation

$$g(xy) = g(x) + g(y)$$

and showed that they were all of the form

$$g(x) = \log_b(x)$$

i.e. they are all logarithms in some base. But now what if we drop differentiability and just look for a function of the positive real numbers, and proceed from first principles? (We stick to positive numbers for now because $g$ has a problem at $0$. If we set $y$ to $0$ in the functional equation, we get

$$g(0) = g(x) + g(0)$$

so if $g$ ever takes on a non-zero value we have a contradiction, unless $g$ is undefined at $0$.) We pursue an exploratory argument that could be made a lot more compact, but I won't do that for exploration's sake. Perhaps I will write a very compact version as a later blog article.
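As a quick sanity check, here is a small Python sketch (not part of the argument; the base 7.3 is an arbitrary, hypothetical choice) confirming numerically that a logarithm in any base satisfies the functional equation:

```python
import math

def g(x, b=7.3):
    """A logarithm in an arbitrarily chosen base b."""
    return math.log(x, b)

# Check g(xy) = g(x) + g(y) on a few positive pairs.
for x, y in [(2.0, 3.0), (0.5, 8.0), (1.0, 9.9)]:
    assert math.isclose(g(x * y), g(x) + g(y))
```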

First consider the functional equation when we set $x$ and $y$ to $1$.

$$g(1) = g(1 \cdot 1) = g(1) + g(1)$$

So

$$g(1) = 0$$

Now what if we set $y$ to be $x$ to the power $n-1$, where $n$ is a positive integer. Then

$$g(x^n) = g\!\left(x \cdot x^{n-1}\right) = g(x) + g\!\left(x^{n-1}\right)$$

so we may decrement the exponent of $x$ by adding $g(x)$. Decrementing it by $n$ gives

$$g(x^n) = n\,g(x)$$

Note that this also works when $n$ is $0$, because $g(1)$ is $0$. Call this the natural exponent rule. So if $x$ is a positive natural number and we factor it into a product of distinct prime powers

$$x = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$$

then we may use the original functional equation and the natural exponent rule to find that

$$g(x) = a_1\,g(p_1) + a_2\,g(p_2) + \cdots + a_k\,g(p_k)$$
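To make the dependence on the primes concrete, here is a Python sketch that builds a $g$ on the positive naturals purely from the factorization rule, using arbitrarily invented values at the first few primes (the values 1.7, −0.4 and so on are hypothetical):

```python
import math

# Hypothetical, arbitrarily chosen values at the first few primes.
PRIME_VALUES = {2: 1.7, 3: -0.4, 5: 2.2, 7: 0.0, 11: 3.14}

def factorize(x):
    """Return {prime: exponent} for a positive natural x by trial division."""
    factors, d = {}, 2
    while d * d <= x:
        while x % d == 0:
            factors[d] = factors.get(d, 0) + 1
            x //= d
        d += 1
    if x > 1:
        factors[x] = factors.get(x, 0) + 1
    return factors

def g(x):
    """g(x) = a_1*g(p_1) + ... + a_k*g(p_k) over the factorization of x."""
    return sum(a * PRIME_VALUES[p] for p, a in factorize(x).items())

# Whatever values we picked at the primes, the functional
# equation holds on the naturals built from those primes:
assert math.isclose(g(6 * 10), g(6) + g(10))
assert math.isclose(g(12), 2 * PRIME_VALUES[2] + PRIME_VALUES[3])
```

Any assignment of values at the primes yields a solution on the naturals, which is why some extra constraint will be needed to pin $g$ down.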

So the value of $g$ at each positive natural number $x$ depends upon its values at all of the prime numbers. In fact as we will see, we may define $g(2), g(3), g(5), g(7), g(11)$ and so forth quite arbitrarily. So unless we impose some constraint upon $g$, it can be quite nasty. However, we will now show that if we insist that $g$ is a continuous function, then fixing one value of $g$ fixes all others ... ok, back to the details. Suppose we set $y$ to be the reciprocal of $x$ in the original functional equation. This gives

$$g(1) = g(x) + g(1/x)$$

and since $g(1)$ is zero we have

$$g(1/x) = -g(x)$$

Call the last rule the quotient rule. So now we have the rule

$$g(x^a) = a\,g(x)$$

for any integer $a$. Now what about rational exponents? Let's start with an exponent of $1/n$, where $n$ is a natural number.

$$g(x) = g\!\left(\left(x^{1/n}\right)^{n}\right) = n\,g\!\left(x^{1/n}\right) \quad\text{so}\quad g\!\left(x^{1/n}\right) = \frac{1}{n}\,g(x)$$

So now we know that the rule

$$g(x^a) = a\,g(x)$$

works when $a$ is the reciprocal of a positive natural number. What about any positive rational exponent, $m/n$? Applying the rules we know so far, we get

$$g\!\left(x^{m/n}\right) = g\!\left(\left(x^{1/n}\right)^{m}\right) = m\,g\!\left(x^{1/n}\right) = \frac{m}{n}\,g(x)$$

So now we know that the rule

$$g(x^a) = a\,g(x)$$

works for any positive rational exponent. What about any negative rational exponent $-m/n$?

$$g\!\left(x^{-m/n}\right) = -\,g\!\left(x^{m/n}\right) = -\frac{m}{n}\,g(x)$$

(the first step is the quotient rule, since $x^{-m/n}$ is the reciprocal of $x^{m/n}$).

Putting all these together we know that the rule

$$g(x^a) = a\,g(x)$$

applies whenever $a$ is rational. Call this the rational exponent rule, and note that there's nothing here forcing $x$ to be rational, just positive by our current convention.
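Here is a Python sketch of how far this freedom goes: take arbitrarily chosen (hypothetical) values at a few primes, extend to the positive rationals via the quotient rule, and both the functional equation and the exponent rule (for exponents that keep the argument rational, here integers) still hold:

```python
import math
from fractions import Fraction

# Hypothetical, arbitrarily chosen values at the first few primes.
PRIME_VALUES = {2: 1.7, 3: -0.4, 5: 2.2, 7: 0.0}

def factorize(n):
    """Return {prime: exponent} for a positive natural n by trial division."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def g(q):
    """Quotient rule: g(m/n) = g(m) - g(n), each part via its factorization."""
    q = Fraction(q)

    def total(n):
        return sum(a * PRIME_VALUES[p] for p, a in factorize(n).items())

    return total(q.numerator) - total(q.denominator)

x, y = Fraction(8, 5), Fraction(9, 14)
assert math.isclose(g(x * y), g(x) + g(y))   # functional equation
assert math.isclose(g(x ** 3), 3 * g(x))     # exponent rule, positive integer
assert math.isclose(g(x ** -2), -2 * g(x))   # ... and negative integer
```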

What about $g(2), g(3), g(5), g(7), g(11)$ and so forth? Roughly speaking these are independent in general, because the primes $2, 3, 5, 7, 11$ etcetera are not mutually related by rational exponents, only by real ones. And unless $g$ is required to be continuous, we do not know that the rule

$$g(x^a) = a\,g(x)$$

applies for all real exponents. If we knew this, then we could conclude for example that

$$g(3) = g\!\left(2^{\lg 3}\right) = \lg(3)\,g(2)$$

(where $\lg$ denotes the base $2$ logarithm as usual) and so only one $g$ value would be arbitrary, say $g(2)$, and then the rules would determine $g$ on any rational argument (you might like to check this). So suppose we fix $g(2)$ and assert that $g$ must be continuous. Let $p$ be an odd prime (i.e. not $2$). We know that

$$p = 2^{\lg p}$$

but we need to approximate this arbitrarily closely with rational exponents, so we can use the rational exponent rule. Here's how we may play that game! Let $n$ be an arbitrarily large natural number, then

$$\frac{\lfloor n \lg p \rfloor}{n} < \lg p < \frac{\lfloor n \lg p \rfloor + 1}{n}$$

Notice that the two exponents have numerators that differ by $1$, and we can definitely use strict $<$ because the log of $p$ is irrational (if $\lg p$ were some $m/n$ we would have $2^m = p^n$, which is impossible by unique factorization). But the denominators of the exponents are both $n$, so the error in each exponent is at most $1/n$. So by choosing sufficiently large $n$ we may approximate the genuine but irrational exponent as closely as we please. That is to say

$$\lim_{n \to \infty} 2^{\lfloor n \lg p \rfloor / n} = p$$
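Here is a numerical sketch of that squeeze for $p = 3$ (the choice of prime is arbitrary): the rational exponents $\lfloor n \lg p \rfloor / n$ sit strictly below $\lg p$, are within $1/n$ of it, and $2$ raised to them closes in on $p$.

```python
import math

p = 3                    # an arbitrary odd prime
lg_p = math.log2(p)      # the irrational exponent to be approximated

errors = []
for n in [10, 100, 10_000, 1_000_000]:
    a = math.floor(n * lg_p) / n   # rational approximation from below
    assert a < lg_p < a + 1 / n    # the squeeze, with error at most 1/n
    errors.append(abs(2 ** a - p))

# The approximations 2**a approach p as n grows.
assert errors == sorted(errors, reverse=True)
assert errors[-1] < 1e-4
```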

If $g$ is continuous (which informally means approximable, i.e. $g$ of an approximation to $p$ is approximately $g$ of $p$, and this can be made as accurate as you like by choosing better and better approximations of $p$), then applying $g$ to both sides gives

$$g(p) = \lim_{n \to \infty} g\!\left(2^{\lfloor n \lg p \rfloor / n}\right)$$

where continuity allows us to move the limit. (Notice how this is just a limit of approximations as in the parenthesised statement above.) Now using the rational exponent rule for $g$ gives

$$g(p) = \lim_{n \to \infty} \frac{\lfloor n \lg p \rfloor}{n}\,g(2)$$

and as we take the limit the floor makes a vanishing difference, so

$$g(p) = \lg(p)\,g(2)$$

So we may conclude that

$$g(p) = \lg(p)\,g(2)$$

if $g$ is continuous, for any odd prime $p$ (and trivially for $p = 2$ as well, since $\lg 2 = 1$). So now if $x$ is any natural number, $g(x)$ is determined from its factorization as before

$$g(x) = a_1\,g(p_1) + a_2\,g(p_2) + \cdots + a_k\,g(p_k)$$

but now we know that for each $i$

$$g(p_i) = \lg(p_i)\,g(2)$$

and so

$$g(x) = g(2)\,\bigl(a_1 \lg p_1 + a_2 \lg p_2 + \cdots + a_k \lg p_k\bigr) = g(2)\,\lg(x)$$
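As a sketch check of this chain, take $g$ to be the natural logarithm (one concrete continuous solution, with $g(2) = \ln 2$); the claim $g(x) = g(2)\,\lg(x)$ is then just the change-of-base identity:

```python
import math

g = math.log  # natural logarithm: a concrete continuous solution
# g(x) = g(2) * lg(x) should hold at every natural number.
for x in [2, 3, 45, 1024, 9699690]:
    assert math.isclose(g(x), g(2) * math.log2(x))
```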

Now the quotient rule gives

$$g(x) = g(2)\,\lg(x)$$

for any positive rational number $x = m/n$, because

$$g(m/n) = g(m) - g(n) = g(2)\,(\lg m - \lg n) = g(2)\,\lg(m/n)$$

and because $g$ is continuous, this defines $g$ on all positive real numbers as well (by rational approximation in the spirit of continuity as exemplified above). So we conclude that the functional equation has continuous solutions on the positive reals only of the form

$$g(x) = g(2)\,\lg(x) = \log_b(x)$$

(with base $b = 2^{1/g(2)}$ when $g(2) \neq 0$; the choice $g(2) = 0$ gives the identically zero solution), i.e. all solutions are differentiable, and there are no more than before.

What happens for negative $x$? Well if we set both $x$ and $y$ to $-1$ in the original functional equation we get

$$g(1) = g(-1) + g(-1)$$

and since $g(1) = 0$ this forces $g(-1) = 0$. So if we just set $y$ to $-1$ in the original functional equation we get

$$g(-x) = g(x) + g(-1) = g(x)$$

So the final conclusion is

$$g(x) = \log_b \lvert x \rvert$$

for all non-zero $x$.
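Finally, a sketch check that the concluded form $g(x) = \log_b \lvert x \rvert$ does satisfy the functional equation for non-zero arguments of either sign (base 10 is an arbitrary choice here):

```python
import math

B = 10.0  # an arbitrary base

def g(x):
    """The concluded form: defined for all nonzero x, of either sign."""
    return math.log(abs(x), B)

# The signs of x and y multiply, but |xy| = |x||y| regardless.
for x, y in [(2.0, 3.0), (-2.0, 3.0), (2.0, -3.0), (-0.5, -8.0)]:
    assert math.isclose(g(x * y), g(x) + g(y))
```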