This is a little earlier than my usual status update, but I think now is a good time: I just committed some new code, and many interesting new things work now, so it is a good moment to show off. On the other hand, there are quite a number of bugs in my code, so that’s probably what I will be spending next week on…

Anyway. First, I added the Lerch transcendent (lerchphi) to hyperexpand(). Here are a few examples. There is not a lot to be said here, but it is good to see many common sums working now.
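To give a flavour of the kind of thing this enables, here is one classical case: the dilogarithm is a ₃F₂ in disguise, and hyperexpand() now recognises such sums. (This is a sketch; the exact form of the output may differ between versions.)

```python
from sympy import hyperexpand, hyper, symbols, Rational

z = symbols('z')

# Li_2(z) = z * 3F2(1, 1, 1; 2, 2; z), so this 3F2 should expand
# into polylog/lerchphi form rather than stay as an unevaluated hyper.
expr = hyperexpand(hyper([1, 1, 1], [2, 2], z))
print(expr)
```

Numerically, at z = 1/2 this should agree with 2·Li₂(1/2) = π²/6 − ln²2 ≈ 1.16448.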

Next I improved hyperexpand() to handle some expansions at “special points”. This means evaluating hypergeometric (or Meijer G) functions at, say, z = 1, even if we don’t know any closed-form expressions for general z. There is a vast literature on such “hypergeometric identities”, and my code is a very humble start at best: it basically just implements Gauss’ and Kummer’s summation theorems for 2F1 and nothing else, but this is fairly effective. Then I improved one of the convergence conditions—it turns out that in addition to what is listed on the Wolfram Functions site, the Russian book from which they took the conditions contains a crucial extra part. After finding someone to translate it for me, I could implement it; now we can do some more integrals. The upshot of this is that the Mellin transform of a product of Bessel functions can now be *derived* by the code, instead of having to be put into the table.
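For instance, Gauss’ summation theorem says 2F1(a, b; c; 1) = Γ(c)Γ(c−a−b)/(Γ(c−a)Γ(c−b)) (when the series converges), and hyperexpand() can now apply it with fully symbolic parameters. A sketch:

```python
from sympy import hyperexpand, hyper, symbols, Rational

a, b, c = symbols('a b c')

# Gauss' summation theorem: 2F1(a, b; c; 1), expanded into gamma functions.
res = hyperexpand(hyper([a, b], [c], 1))
print(res)
```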

Let me put this into perspective. There are (at least) the Bessel functions J, Y, I and K. The general products of these can be expressed as G-functions, and many special products can be expressed as well. I had previously put a few of the identities of the first kind into the table. Now almost all of them can be derived from just the entries for the single functions. (There are some problems with the functions of the second kind, which tend to be singular and/or rapidly growing, so that they don’t really have Mellin transforms; deriving formulae for these is difficult in the current setup.) On the other hand, deriving can be somewhat slow; for this reason I only commented out the old formulae instead of removing them. Here are a few timings. These are evidently not great, and I’ll have to see what can be done. My guess is that hyperexpand() is relatively slow, but I haven’t looked into this further. [Note also that running this with the cache off is much slower, since the algorithm internally uses caching.]
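One concrete way to exercise the derived product formulae is through a definite integral of a Bessel-function product, which goes through the Meijer G machinery; the classical result ∫₀^∞ J₁(x)²/x dx = 1/2 is a sketch of the kind of thing that should now come out (how fast, and in which SymPy versions, may vary):

```python
from sympy import integrate, besselj, oo, symbols

x = symbols('x', positive=True)

# Product of Bessel functions of the first kind:
# integral_0^oo J_1(x)^2 / x dx = 1/2.
result = integrate(besselj(1, x)**2 / x, (x, 0, oo))
print(result)
```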

Finally, I improved the integration heuristics to handle some more integrals [with a few additional factors, the last integral is a representation of besselj]. Again, I don’t know what makes this so slow.
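As a small illustration of the kind of definite integral this machinery targets (a hedged sketch; I picked a textbook Laplace-transform identity, not one of the exact examples above): ∫₀^∞ e^{−x} J₀(x) dx = 1/√2.

```python
from sympy import integrate, exp, besselj, oo, symbols

x = symbols('x', positive=True)

# Laplace transform of J_0 at p = 1:
# integral_0^oo exp(-x) * J_0(x) dx = 1/sqrt(1 + p^2) with p = 1.
result = integrate(exp(-x) * besselj(0, x), (x, 0, oo))
print(result)
```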

In closing, let’s look at some more fun definite integrals (I played around with each of them long enough to find a variation that can be done in closed form 😉). Again there is not much to say here (except that (9) shows a sign bug in the hyperexpand table); the numerical computations are for comparison with Wolfram Alpha.

Cool. Should lerchphi be pretty printed as a capital Phi?

For timing, check out kernprof/line_profiler. It’s pretty awesome: you just decorate the functions you want to profile with @profile (you don’t even have to import it; kernprof injects it into __builtins__), create a script with the problematic code, say problem.py, then run “kernprof.py -l problem.py”. Then run “python -m line_profiler problem.py.lprof” to view the results.
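Something like this (a minimal, hypothetical example; the function name is made up). The small shim at the top makes the same script also runnable with plain python, where kernprof has not injected @profile:

```python
# When run under "kernprof.py -l script.py", kernprof injects a `profile`
# decorator into builtins. This shim keeps the script runnable normally too.
try:
    profile  # noqa: B018 -- exists only under kernprof
except NameError:
    def profile(func):  # no-op fallback for plain `python` runs
        return func

@profile
def slow_sum(n):
    # A deliberately naive loop, so line_profiler has something to show.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    print(slow_sum(100000))
```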

I wonder how long it takes Mathematica to compute these integrals. integrate(erf(x)*besselj(0,x)**2/sqrt(x), (x, 0, oo)) requires extra time in WolframAlpha, and the 900 ms of your function doesn’t seem that slow. Maybe they have very strict time limits (and they are also computing more than just the symbolic integral).

Capital Phi is the standard notation, and indeed that is what is done in LaTeX. The Unicode capital Phi does not look very pretty on my machine, and in any case I figured pretty printing a slightly esoteric function might lead to more confusion than good.

If you think it should be pretty printed I can add it without any trouble.

You may be right. Maple prints Phi, and it always confuses me (they do the same thing with other functions too).

You can decide. I don’t particularly care either way on this one.