This post will elaborate on the recent problems I’m having with “multivalued” functions. My usual status update will come tomorrow. I shall try to be almost pedantically clear in my description, both in order to clear my own thoughts and to increase the chance of someone suggesting a way out. In what follows, $\mathcal{S}$ will denote the Riemann surface of the logarithm (this only means infinitely many copies of the complex line, glued together appropriately so that $\log$ becomes an entire, single-valued function on $\mathcal{S}$). Zero is not an element of $\mathcal{S}$.
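As an aside for readers who want to experiment: SymPy models points of the Riemann surface just described using `exp_polar` (a real SymPy function, though the snippet below is my own illustration, not part of this post's debugging session). The difference from the ordinary `exp` is exactly that the winding number is remembered:

```python
from sympy import exp, exp_polar, log, I, pi

# On C, exp collapses the argument modulo 2*pi: the winding is forgotten,
# and exp(2*I*pi) auto-evaluates to 1.
assert exp(2*I*pi) == 1
print(log(exp(2*I*pi)))        # 0

# exp_polar(2*I*pi) is a point on the Riemann surface of the logarithm,
# where log is single-valued, so the winding number survives.
print(log(exp_polar(2*I*pi)))  # 2*I*pi
```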

The problem we will study is how to compute a particular inverse Fourier transform. First of all, it is easy to see that the integral converges only for real, non-zero $t$. Also, using contour integration it is easily established that the answer involves the Heaviside step function $\theta(t)$.
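The original formulas have not survived here, so for concreteness, here is a transform of the same flavour (my own example, not necessarily the integrand of the post): a one-sided exponential, whose Fourier pair is where the Heaviside factor comes from. The inverse direction is exactly where the branching trouble discussed below appears.

```python
from sympy import (fourier_transform, exp, Heaviside, I, pi,
                   simplify, symbols)

x, k = symbols('x k', real=True)

# SymPy's convention: FT[f](k) = Integral(f(x)*exp(-2*pi*I*x*k), (x, -oo, oo)).
# The Heaviside factor restricts the integration to the positive half-line,
# so the result is 1/(1 + 2*pi*I*k).
ft = fourier_transform(exp(-x)*Heaviside(x), x, k)
print(simplify(ft))
```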

How does the Meijer G-function algorithm compute this integral? First of all, it is split up into two parts: an integral over the positive half-line and one over the negative half-line. Let’s concentrate on the first part for now. The integral exists if the exponential has a decaying component, or if it is oscillatory. That is to say, it converges for $t$ in the closed upper half plane, excluding zero. But let’s be a bit more specific: all Meijer G-functions are defined on $\mathcal{S}$, and may or may not descend to $\mathbb{C}$. Thus define $F$ to be our integral, regarded as a function on its region of convergence $D \subseteq \mathcal{S}$. Of course $F(t)$ agrees with the value of the integral whenever the integral converges (note that this equation is not vacuous: the integrand still makes sense for $t \in \mathcal{S}$). We thus find:

The integral converges if $2\pi n \le \arg t \le (2n+1)\pi$, for some integer $n$.
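As a concrete (and deliberately trivial) illustration of working with G-functions in SymPy, independent of the post's actual integral: the simplest Meijer G-function is just an exponential, and `hyperexpand` recovers it.

```python
from sympy import meijerg, hyperexpand, exp, symbols

z = symbols('z')

# G^{1,0}_{0,1}(z | -; 0) is the Meijer G representation of exp(-z).
g = meijerg([], [], [0], [], z)
print(hyperexpand(g))  # exp(-z)
```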

Having established this, let’s see what the algorithm computes. I should explain the output. Note that debugging output is enabled. The first piecewise result is what would ordinarily be presented to the user: there is a closed expression, holding under various conditions. And there is the “otherwise” part of the piecewise, which just says that if the conditions do not hold, we don’t know the answer. Consider the conditions. unbranched_argument(t) means the argument of $t$ as a point of $\mathcal{S}$ (whereas arg(t) would mean some value between $-\pi$ and $\pi$, and that function is not continuous either). Thus the conditions are just a funny way of saying $0 \le \arg t \le \pi$. Now consider the claimed result. It looks a bit daunting, but this is because the functions in which it is written are defined on $\mathcal{S}$, not $\mathbb{C}$. The debugging output line “Result before branch substitutions …” shows what the result would be on $\mathbb{C}$. I have computed this in output [8] (notice that what looks like $i\pi$ here is a dummy, to stop evaluation of $e^{i\pi}$ etc). Of course the exponential is an unbranched function, so the first two factors are uninteresting. But the last factor is not: we see that the result is related to the upper incomplete gamma function. This is a branched function, i.e. it is properly defined on $\mathcal{S}$ and it does not descend to $\mathbb{C}$. If $t$ is a positive real number, then the incomplete gamma function is to be evaluated at argument $-\pi$. *This* is where the extra terms (like argument_period()) come from: on $\mathcal{S}$, argument $-\pi$ means negative reals. But with the branch cut conventions as implemented (in mpmath, effectively, but also enforced throughout sympy), negative reals mean argument $+\pi$. So the answer cannot just be the incomplete gamma function evaluated at a negative real, because this would end up on the wrong branch. Thus we conclude (or rather the algorithm did for us):

Define $G$ by the closed-form expression just derived. Then our integral equals $G(t)$ for $0 \le \arg t \le \pi$.
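The branch behaviour of the upper incomplete gamma function can be seen directly in SymPy through `expint` (recall $\Gamma(0, z) = E_1(z)$); this is my own minimal sketch of the monodromy, not the algorithm's actual output. Continuing once around the origin shifts the value by $-2\pi i$:

```python
from sympy import expint, exp_polar, I, pi, symbols

z = symbols('z')

# E_1 = Gamma(0, .) is branched: winding the argument once around the
# origin (tracked by exp_polar) shifts the value by -2*pi*I, and SymPy
# converts the polar winding into that explicit shift.
e = expint(1, z*exp_polar(2*I*pi))
print(e)
```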

Notice in particular that our original integral has two natural extensions to $\mathcal{S}$: the naive definition (the integral itself, which is periodic in $\arg t$ with period $2\pi$), and the analytic continuation (which is not periodic). The algorithm **has to** return some subset of the first definition, and **not** the second. In a sense this is why the convergence condition is necessary (although technically it is there because the integral representation of the G-functions used to establish this formula diverges outside the region specified by the convergence conditions).

We could now do a similar analysis for the second integral. The story is essentially the same, with the upper half plane replaced by the lower half plane ($-\pi \le \arg t \le 0$) throughout. So when can we compute the combined integral? We already noted that this should be the case exactly when $t$ is real and non-zero, and working in $\mathbb{C}$ this is easy to see: the intersection of the (closed) upper and lower half planes is the real axis. But notice how this **fails** on $\mathcal{S}$: the intersection of $0 \le \arg t \le \pi$ (where we can evaluate the first integral) and $-\pi \le \arg t \le 0$ is $\arg t = 0$, i.e. only *positive* $t$.

To further amplify the confusion, computing said Fourier transform with argument $-t$, where $t$ is declared positive, works. This is because before “switching” to working on $\mathcal{S}$, the algorithm carelessly cancels some minus signs … which is fine for entire functions, of course.

Here ends this rather lengthy explanation. It remains to figure out how we can teach the algorithm that the integrals are periodic in $\arg t$, instead of knowing them only on one half-sheet. I have some vague ideas for this, but input is appreciated. In any case, I’ll think about this for a while. For now there is enough other work to do.

This may or may not help, but it might be instructive to temporarily disable automatic simplification of `exp(n*I*pi)`. Just add `return` to line 58 of sympy/functions/elementary/exponential.py. I get: … I don’t even know if that’s right or not. If it’s wrong, it could just be because some part of your code relies on the automatic simplification for correctness.

This is exactly the expected behaviour (you can either disable the simplification, or simulate everything using dummies). The first transform not being evaluated is correct. [I did not explain this above, but if you use exp(-I*pi)*t, then the *first* integral cannot be done. The thing is that the first has to be done with argument $\pi$, and the second with argument $-\pi$; no matter which one we pass, one of them will not work.]
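A non-invasive sketch of both alternatives (my illustration, using machinery that exists in SymPy): keep `exp` as it is, and either use `exp_polar`, which never collapses $e^{n i \pi}$, or substitute a dummy for $i\pi$, which is the dummy trick mentioned in the post.

```python
from sympy import exp, exp_polar, Dummy, I, pi

# exp auto-simplifies at integer multiples of I*pi ...
assert exp(I*pi) == -1

# ... but exp_polar does not: the polar argument is kept as-is.
print(exp_polar(I*pi))        # exp_polar(I*pi), unevaluated

# Alternatively, freeze evaluation with a dummy standing in for I*pi:
p = Dummy('ipi')
frozen = exp(2*p)             # stands for exp(2*I*pi); no simplification
print(frozen.subs(p, I*pi))   # evaluates to 1 once substituted back
```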

The second equation is again the analytic continuation and not what we want.

I think I know a simple sufficient condition for a G-function to be entire, which I believe is also fairly close to necessary. It does apply to all functions currently in the table (which is admittedly not many). Building this in should make these examples work.

But I will instead extend the tables tomorrow (finally), and sleep on this for a few more days.