This is a little earlier than my usual status update, but I think now is a good time. I just committed some new code, and many interesting new things work now, hence it is a good time to show off :-). On the other hand, there are quite a number of bugs in my code, so that’s probably what I will be spending next week on…
Anyway. First I added lerch phi (lerchphi) to hyperexpand(). Here are a few examples. There is not a lot to be said here, but it is good to see many common sums work now.
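The original session isn’t reproduced here, but roughly this is the kind of thing I mean (illustrative examples, and the exact output form depends on your SymPy version):

```python
from sympy import hyperexpand, hyper, symbols, S

z = symbols('z')

# A classical case: 2F1(1, 1; 2; z) sums to -log(1 - z)/z.
r1 = hyperexpand(hyper([1, 1], [2], z))

# The sum of z**n / (n + 1)**2 is a Lerch-type series; written as a
# hypergeometric function it is 3F2(1, 1, 1; 2, 2; z), and hyperexpand()
# now expands it in terms of polylog (a special case of lerchphi).
r2 = hyperexpand(hyper([1, 1, 1], [2, 2], z))
```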
Next I improved hyperexpand() to handle some expansions at “special points”. This means evaluating hypergeometric (or Meijer G) functions at, say, z = 1, even if we don’t know any closed-form expressions for general z. There is a vast literature on such “hypergeometric identities”, and my code is a very humble start at best: it basically just implements Gauss’ and Kummer’s summation theorems for 2F1 and nothing else, but this is fairly effective. Then I improved one of the convergence conditions. It turns out that, in addition to what is listed on the Wolfram Functions site, the Russian book from which they took the conditions contains a crucial extra part. After finding someone to translate it for me I could implement this; now we can do some more integrals. The upshot is that the Mellin transform of a product of Bessel functions can now be derived by the code, instead of having to be put into the table.
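To give a flavour of the special-point evaluation (again an illustrative session, not the original one): Gauss’ summation theorem evaluates 2F1(a, b; c; 1) as a ratio of gamma functions, valid for re(c - a - b) > 0, and hyperexpand() can now apply it:

```python
from sympy import hyperexpand, hyper, gamma, symbols, simplify, S

a, b, c = symbols('a b c')

# Gauss' summation theorem:
#   2F1(a, b; c; 1) = gamma(c)*gamma(c - a - b) / (gamma(c - a)*gamma(c - b)).
r = hyperexpand(hyper([a, b], [c], 1))

# A concrete instance: 2F1(1, 2; 5; 1) = gamma(5)*gamma(2)/(gamma(4)*gamma(3)) = 2.
r_num = hyperexpand(hyper([1, 2], [5], S.One))
```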
Let me put this into perspective. There are (at least) the Bessel functions J, Y, I and K. The general products of functions of the first kind can be expressed as G-functions, and similarly for the modified functions; many special products can be expressed as well. I had previously put a few of the identities of the first kind into the table. Now almost all of them can be derived from just the entries for the single functions. (There are some problems with functions of the second kind, which tend to be singular and/or rapidly growing, so that they don’t really have Mellin transforms; deriving formulae for these is difficult in the current setup.) On the other hand, this derivation can be somewhat slow; for this reason I have only commented out the formulae instead of removing them. Here are a few timings. These are evidently not great, and I’ll have to see what can be done. My guess is that hyperexpand() is relatively slow, but I haven’t looked into this further. [Note also that running this with the cache off is much slower, since the algorithm internally uses caching.]
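As a small taste of what derivation from the single-function entries buys us (an illustrative example, not one of the timed runs): the s = 0 case of the Mellin transform of besselj(1, x)**2 is the classical integral of J1(x)**2/x over (0, oo), which equals 1/2, and the code can now work this out itself:

```python
from sympy import besselj, integrate, oo, symbols, simplify, S

x = symbols('x', positive=True)

# int_0^oo J_1(x)**2 / x dx = 1/2.  Internally this goes through the
# Mellin transform of a product of two Bessel functions of the first kind.
r = integrate(besselj(1, x)**2 / x, (x, 0, oo))
```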
Finally, I improved the integration heuristics to be able to do some more integrals [with a few additional factors, the last integral is a representation of besselj]. Again, I don’t know what makes this so slow.
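The integral in question isn’t reproduced here, but a related Poisson-type representation of besselj(0, x) gives the flavour (illustrative; depending on the SymPy version the result may come back as pi*besselj(0, x)/2 or an equivalent hypergeometric expression):

```python
from sympy import cos, sqrt, integrate, symbols

x = symbols('x', positive=True)
t = symbols('t')

# Poisson's representation:
#   int_0^1 cos(x*t)/sqrt(1 - t**2) dt = pi*besselj(0, x)/2.
r = integrate(cos(x*t)/sqrt(1 - t**2), (t, 0, 1))
```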
In closing, let’s look at some more fun definite integrals (in each case I played around long enough until I found a variation that can be done in closed form). Again there is not much to say here, except that (9) shows a bug in the hyperexpand table: the minus sign should really be a plus. The numerical computations are there for comparison with Wolfram Alpha.
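In the same spirit, here are two integrals of this flavour that the meijerg code handles, with numerical values for checking against Wolfram Alpha (illustrative examples, not the numbered ones above):

```python
from sympy import sin, cos, exp, integrate, oo, pi, symbols

x = symbols('x', positive=True)

# The Dirichlet integral: int_0^oo sin(x)/x dx = pi/2.
r1 = integrate(sin(x)/x, (x, 0, oo))

# int_0^oo exp(-x**2)*cos(x) dx = sqrt(pi)/2 * exp(-1/4).
r2 = integrate(exp(-x**2)*cos(x), (x, 0, oo))

# Numerical values, for comparison with Wolfram Alpha:
n1 = r1.evalf()
n2 = r2.evalf()
```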