This post is a continuation of my previous one on branch cuts. My usual status update will again come on Friday, and this time I actually have something to show off.

As before, the aim of this post is two-fold: both to clarify my thoughts, and to invite suggestions for solutions.

It turns out my previous treatment of branched functions was a bit short-sighted (who would have guessed, huh? :D). In order to make progress, I have so far been ignoring problems related to branches/arguments; but by now this is the last big problem before my branch can finally become “pull request ready” (more on that in Friday’s post). Let me first describe the problem(s) at hand. I have picked five (six) representative examples.

- (IFT stands for inverse Fourier transform) As explained previously, this comes out as (roughly) . The must not be simplified to here.
- As also explained previously, the G-function algorithm cannot compute this without realising that is unbranched, i.e. that the argument of this function may indeed be converted to .
- . (MT stands for Mellin transform) This is curious. To understand what is going on, we have to realise that the function is actually represented as for some constant independent of , and a function . Thus and *together* determine the branch of the square root taken. Thus putting a negative in the above expression yields , where for once denotes the principal branch. This is surely (?) not what the user had in mind when entering the above expression (and, by the way, we cannot compute the answer for this argument anyway).
- matches with . This is wrong for the same reason as the example above, but this time the problem is that the matching code thinks .
- This Mellin transform only exists for in the left half-plane. The unsimplified result has a term (roughly) like , except that it is more complicated, so that powdenest() doesn’t put the into the exponent. This only happens when calling simplify. The end result is that when we substitute the real for the dummy, an important factor of is lost. (In case this explanation is too confusing, look at this session log. Crucially, the last equation has a factor of right before the gamma function, whereas the first does not.)
- (ILT stands for inverse Laplace transform) In this case a double minus sign is cancelled, for reasons similar to the above. (Here is a log.)
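Several of the examples above boil down to the same trap: on the principal branch, cancelling a double minus sign silently switches branches of the square root. A tiny numeric illustration (plain Python, using cmath for the principal branch; this is my own sketch, not code from the branch under discussion):

```python
import cmath

# Principal-branch square root: the argument of the result lies in (-pi/2, pi/2].
sqrt = cmath.sqrt

# Cancelling the two minus signs first gives the "expected" answer...
a = sqrt((-1) * (-1))     # sqrt(1) = 1

# ...but distributing sqrt over the factors does not:
b = sqrt(-1) * sqrt(-1)   # i * i = -1

# On the Riemann surface of the logarithm the two computations would agree:
# the arguments add honestly (pi + pi = 2*pi), sqrt halves the argument to pi,
# and the result is e^{i*pi} = -1 on both sides. It is the naive reduction of
# (-1)*(-1) to 1 that loses the branch information.
```

This is exactly the kind of factor of -1 that goes missing in the ILT example.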

I think the following conclusions can be drawn from these examples:

- powdenest(force=True) is not a reliable way to propagate exponents and thus not sufficient to be sure that we can safely substitute for a dummy. [It has other problems as well, e.g. powdenest(abs(x), force=True) == x, and that’s probably not even a bug.]
- It is (thus) not tenable to try to hide all branch computations from the user in the way I have tried so far (i.e. to return a result with fully computed branches, where -s can be cancelled in , etc.).
- There needs to be a function similar to arg() and unbranched_argument() that can return reduced arguments if this is known to be correct.
- It would be desirable to have functions somewhat like powdenest(force=True) that do all simplifications valid on the Riemann surface of the logarithm. Note that force=True works by pretending all symbols are positive; this is not necessarily the same thing.
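To see why pretending all symbols are positive is not the same as working on the Riemann surface of the logarithm, here is a quick numeric counterexample (plain Python; it shows that the identity sqrt(exp(z)) == exp(z/2), which force=True would happily apply, fails off the principal strip, and likewise sqrt(x**2) == x fails for negative x):

```python
import cmath

# Im(z) lies outside (-pi, pi], so exp(z) has wrapped around once.
z = 3j * cmath.pi

lhs = cmath.sqrt(cmath.exp(z))  # sqrt(exp(3*pi*i)) = sqrt(-1) = i  (principal branch)
rhs = cmath.exp(z / 2)          # exp(3*pi*i/2) = -i

# The two differ by a factor of -1: pushing the exponent inside lost a sign.
assert abs(lhs - rhs) > 1

# Same story for the powdenest(abs(x), force=True) == x remark: sqrt(x**2)
# is |x|, which only equals x if x really is positive.
x = -2.0
assert (x ** 2) ** 0.5 == 2.0   # |x|, not x
```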

**I’m looking for an unintrusive way to solve the above four problems.**

I’m toying with the idea of adding a new assumption “polar” to sympy. I think almost no code needs to know about it, and it seems like it could solve my problems. I’m not very experienced with the sympy core, though, so I may be wrong. I imagine that this assumption can be set on various objects to make them behave like numbers on the Riemann surface of the logarithm. In particular, exp(z, polar=True) stops exp(2*pi*I) from evaluating to 1. We could also allow this on e.g. I or -1, so that exp(pi*I, polar=True) can return -1 with the polar=True flag set. Multiplying two such factors together would propagate back to exp. Also, polar=True should stop exp from being expanded trigonometrically (since adding polar numbers does not really make sense, as far as I can see; adding shouldn’t be disallowed, though, since it can be useful formal notation). Matching code could propagate this flag (I’m fuzzy on the details). Similarly, powdenest etc. could watch out for it.
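A minimal sketch of the data model behind such polar objects (pure Python, every name here is hypothetical; this is just to make the idea concrete, not SymPy code): a polar number stores its modulus together with an *unreduced* argument, multiplication adds arguments, and nothing ever reduces the argument mod 2*pi unless explicitly asked to project onto the principal branch.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Polar:
    """A number r * e^{i*theta} on the Riemann surface of the logarithm.
    theta is deliberately NOT reduced modulo 2*pi."""
    r: float
    theta: float

    def __mul__(self, other):
        # Arguments add honestly; no wrap-around into (-pi, pi].
        return Polar(self.r * other.r, self.theta + other.theta)

    def root(self, n):
        # Well-defined on the surface: just divide the argument.
        return Polar(self.r ** (1.0 / n), self.theta / n)

    def principal(self):
        """The only lossy step: project onto an ordinary complex number."""
        t = math.remainder(self.theta, 2 * math.pi)  # into [-pi, pi]
        return complex(self.r * math.cos(t), self.r * math.sin(t))

# exp(2*pi*I) as a polar number is NOT 1: the argument 2*pi is remembered.
e2pii = Polar(1.0, 2 * math.pi)
assert e2pii != Polar(1.0, 0.0)

# (-1)*(-1) on the surface has argument 2*pi, so its square root is
# e^{i*pi} = -1, consistent with sqrt(-1)*sqrt(-1):
minus_one = Polar(1.0, math.pi)
assert (minus_one * minus_one).root(2) == Polar(1.0, math.pi)
```

Keeping theta unreduced is exactly what stops exp(2*pi*I) from collapsing to 1; everything else (multiplication, roots) stays branch-safe for free.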

This solves all but (3), for which I could just make a new function. My existing machinery for handling branch cuts should be adapted, obviously (actually, most of it could go now). argument_dummy can go, of course.

It might even be advantageous to have an assumption “period” which can have a numerical value (where the old behaviour means period=None, and the above behaviour means period=oo); but I think assumptions must be flags, so this cannot work. If it did, it might allow for a unified treatment of (1–4), but that’s probably not worth it.

Your example #2 is reminiscent of the reason that the atan2 function exists.

From what you explain, exp(polar=True) wouldn’t be an assumption in the sense of the assumptions system (maybe this is just a poor choice of words on your part). This would just be an option to exp(). If that is indeed the case, and I am not just misunderstanding your idea, then it could be anything, not just a boolean.

Also, consider the (API) advantages of just making specific functions, like exp_polar(2*pi*I) (this is similar to the approach for atan2, except in the latter case it is essential to have a separate function).

Well, it’s not really an assumption, that’s true. I just mean “something to be tacked onto objects as auxiliary information”. The point is that I would really like to have this on (negative) rationals and ImaginaryUnit as well, because I want to avoid having silly factors like e^{pi i} everywhere. Also by using essentially the same objects, existing simplification functions will keep working.

In any case I will clean up the rest of my code now, and keep thinking about this problem ^^.