>>> nsum(lambda i,j: 2**(-i-j), [0,inf], [0,inf])
4.0
>>> nsum(lambda i,j,k: 2**(-i-j-k), [0,inf], [0,inf], [0,inf])
8.0
>>> nsum(lambda i,j,k,l: i/2**j*(i+l)/2**k, [1,2], [0,inf], [1,inf], [2,3])
50.0
>>> nsum((lambda i,j,k,l: 1/(2**(i**2+j**2+k**2+l**2))), *([[-inf,inf]]*4))
20.5423960756379
Since the summand factorizes over the four indices, this value is just the fourth power of a one-dimensional theta function:
>>> jtheta(3,0,0.5)**4
20.5423960756379
One could of course also make nested calls to nsum, but having a direct syntax is much more convenient (a nested version is sketched below for comparison). Nested evaluation is also usually inefficient for convergence acceleration, so this is not what nsum does internally. Instead, it combines all the infinite summations into a single summation over growing hypercubes. The distinction is very important for conditionally convergent series, where the order of summation affects the result.
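For illustration, here is the nested form of the first double sum above (a quick sketch, assuming the same mpmath session; each level is accelerated separately, which is fine here since the series converges absolutely):
>>> nsum(lambda i: nsum(lambda j: 2**(-i-j), [0,inf]), [0,inf])
4.0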
As an example of a conditionally convergent case, nsum can now directly evaluate the Madelung constant in three dimensions (ignore=True tells nsum to ignore the singular term at the origin):
>>> nsum(lambda i,j,k: (-1)**(i+j+k)/(i**2+j**2+k**2)**0.5,
... [-inf,inf], [-inf,inf], [-inf,inf], ignore=True)
-1.74756459463318
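To make explicit what ignore=True does here (a sketch, not from the original session), one can equivalently guard the origin term by hand; the terms, and hence the result, are the same:
>>> nsum(lambda i,j,k: 0 if i == j == k == 0 else (-1)**(i+j+k)/(i**2+j**2+k**2)**0.5,
... [-inf,inf], [-inf,inf], [-inf,inf])
-1.74756459463318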
While this evaluation takes several seconds, it is somewhat remarkable that a precise value can be obtained at all. A better way to compute the Madelung constant is to use the following rapidly convergent 2D series:
>>> f = lambda i,j: -12*pi*sech(0.5*pi*sqrt((2*i+1)**2+(2*j+1)**2))**2
>>> nsum(f, [0,inf], [0,inf])
-1.74756459463318
>>> mp.dps = 100
>>> nsum(f, [0,inf], [0,inf])
-1.74756459463318219063621203554439740348516143662474175815282535076504062353276117989075836269460789
Another nice application is to evaluate Eisenstein series directly from the definition:
>>> mp.dps = 15
>>> tau = 1j
>>> q = qfrom(tau=tau)
>>> nsum(lambda m,n: (m+n*tau)**(-4), [-inf,inf], [-inf,inf], ignore=True)
(3.1512120021539 + 0.0j)
>>> 0.5*sum(jtheta(n,0,q)**8 for n in [2,3,4])*(2*zeta(4))
(3.1512120021539 + 0.0j)
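One more check worth trying (a sketch, not part of the original session; the numerical output is not reproduced here): the corresponding weight-6 sum should come out very close to zero, since the Eisenstein series of weight 6 vanishes at tau = i:
>>> nsum(lambda m,n: (m+n*tau)**(-6), [-inf,inf], [-inf,inf], ignore=True)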