
NUFFT with Julia


The Julia language offers an interesting alternative to Python for crunching numbers. Python has several ways to mitigate its inherently low performance, such as numpy, cython, or numba.
In an excellent post, Jake Vanderplas discusses how to use numba to achieve performance similar to Fortran, without writing any cython or Fortran code. The results are amazing.
Because the philosophies behind Julia and numba are similar, I wanted to see how Julia would perform. As I am new to Julia, this was also a way to learn the language, and I am sure I have made several mistakes, which I would appreciate readers pointing out.

We’ll start with a Direct Fourier Transform (DFT). We’ll import some Python packages into Julia so we can compare results.

using PyCall
@pyimport nufft as nufft_fortran
@pyimport numpy as np

Next we define three versions of the DFT. The first one calls numpy from Julia and is as close as I could get to In[1] in Jake’s post. The second uses Julia’s whole-array operations, and the third uses a comprehension with element-wise operations.

function nufftfreqs(M,df=1.0)
    """Compute the frequency range used in nufft for M frequency bins"""
    df * [-fld(M, 2): M - fld(M, 2)-1]
end

function nudftpy(x, y, M, df=1.0, iflag=1)
    """Non-Uniform Direct Fourier Transform. Using numpy"""
    sign = iflag < 0 ? -1 : 1
    (1 / length(x)) * np.dot(y, np.exp(sign * 1im * x*transpose(nufftfreqs(M, df))))
end

function nudft(x, y, M, df=1.0, iflag=1)
    """Non-Uniform Direct Fourier Transform. Using whole array operations"""
    sign = iflag < 0 ? -1 : 1
    (1 / length(x)) * *(transpose(y''), exp(sign * 1im * x*transpose(nufftfreqs(M, df))))
end

function nudft2(x::Vector, y::Vector, M::Int, df=1.0, iflag=1)
    """Non-Uniform Direct Fourier Transform. Using comprehensions"""
    freqs = nufftfreqs(M, df)
    sign = iflag < 0 ? -1 : 1
    n = size(x,1)
    m = size(freqs, 1)
    r = (1 / n) * [y[i]*exp(sign*1im*x[i]*freqs[j]) for i=1:n, j=1:m]
    sum(r,1)
end

Now we test them:

x = 100 * rand(1000)
y = sin(x)
Y0 = @time nudftpy(x, y, 1000) 
Y1 = @time nudft(x, y, 1000)
Y2 = @time nudft2(x, y, 1000)
Yf = @time nufft_fortran.nufft1(x, y, 1000)
print([np.allclose(Y0, Yf), np.allclose(Y1, Yf), np.allclose(Y2, Yf)])
elapsed time: 0.272123779 seconds (32289652 bytes allocated)
elapsed time: 0.211703597 seconds (32309564 bytes allocated, 26.77% gc time)
elapsed time: 0.14629479 seconds (32874852 bytes allocated)
elapsed time: 0.207123126 seconds (32918144 bytes allocated, 28.28% gc time)
Bool[true,true,true]

The results agree! But these functions are quite inefficient, and they are also slower than the equivalent numpy code run directly from Python (which is about twice as fast). There is probably some overhead in calling Python from Julia. Besides, the @time macro in Julia works differently from the %timeit magic in IPython…
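
One simple way to make the Julia numbers a bit more comparable is to warm up each function before timing it, since the first call includes JIT compilation. This is just a sketch of the idea, not part of the original benchmark:

# Warm-up call compiles the method; the second, timed call measures only the work.
nudft2(x, y, 1000)
@time nudft2(x, y, 1000)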

Our real interest is comparing Julia with numba, so I’ll go on. Here is an FFT-based implementation of the nufft_python function from Jake’s post (In[4]). To keep things easily identifiable, I’ve kept the same docstrings and function names, which kind of make no sense here…

function _compute_grid_params(M, epsilon)
    # Choose Msp & tau from eps following Dutt & Rokhlin (1993)
    if epsilon <= 1E-33 || epsilon >= 1E-1
        error(@sprintf("eps = %f; must satisfy 1e-33 < eps < 1e-1.", epsilon))
    end
    ratio = epsilon > 1E-11 ? 2 : 3
    Msp = itrunc(-log(epsilon) / (pi * (ratio - 1) / (ratio - 0.5)) + 0.5)
    Mr = max(ratio * M, 2 * Msp)
    lambda_ = Msp / (ratio * (ratio - 0.5))
    tau = pi * lambda_ / M^2
    Msp, Mr, tau
end

function nufft_python(x, c, M, df=1.0, epsilon=1E-15, iflag=1)
    """Fast Non-Uniform Fourier Transform with Python"""
    Msp, Mr, tau = _compute_grid_params(M, epsilon)
    N = length(x)

    # Construct the convolved grid
    ftau = zeros(typeof(c[1]), Mr)
    Mr = size(ftau,1)
    hx = 2pi / Mr
    mm = [-Msp:Msp-1]
    for i=1:N
        xi = (x[i] * df) % (2 * pi)
        m = 1 + div(xi,hx)
        spread = exp(-0.25 * (xi - hx * (m + mm)).^2 / tau)
        ftau[1+mod(m + mm, Mr)] += c[i] * spread
    end
    # Compute the FFT on the convolved grid
    if iflag < 0
        Ftau = (1 / Mr) * fft(ftau)
    else
        Ftau = ifft(ftau)
    end
    Ftau = [Ftau[end-div(M,2)+1:end], Ftau[1:div(M,2)+M%2]]
    # Deconvolve the grid using convolution theorem
    k = nufftfreqs(M)
    (Ftau.*(1 / N) * sqrt(pi / tau)).* exp(tau * k.^2)
end

Following Jake’s post again, I write a function to test our implementations, checking the results against the DFT and timing them. It is a fairly literal translation of the Python code.

function test_nufft(nufft_func, M=1000, Mtime=100000)
    # Test vs the direct method
    print(repeat("-",30), "\n")
    print("testing ",nufft_func, "\n")
    x = 100 * rand(M + 1)
    y = sin(x)
    for df in [1.0, 2.0]
        for iflag in [1, -1]
            F1 = nudft(x, y, M, df, iflag)
            F2 = nufft_func(x, y, M, df, 1E-15, iflag)
            assert(all(x -> isapprox(x...), zip(F1, F2)))
        end
    end
    print("- Results match the DFT\n")
    
    # Time the nufft function
    x = 100 * rand(Mtime)
    y = sin(x)
    times = Float64[]
    for i = 1:5
        tic()
        F = nufft_func(x, y, Mtime)
        t1 = toq()
        push!(times,t1)
    end
    @printf("- Execution time (M=%d): %.2f sec\n",Mtime, median(times))
end

Let’s test it:

test_nufft(nufft_python)
test_nufft(nufft_fortran.nufft1)
------------------------------
testing nufft_python
- Results match the DFT
- Execution time (M=100000): 1.07 sec
------------------------------
testing fn
- Results match the DFT
- Execution time (M=100000): 0.12 sec

The results are an order of magnitude slower than the Fortran code, but about three times faster than the pure-Python nufft_python code on my computer (3.7 sec). Good! So pure Julia is faster than plain Python!

Python spends most of the time in the loop. In numpy this can be improved with the add.at function, which accumulates values at repeated indices in a single call. As there is no direct equivalent in Julia, I used a loop instead (see the sketch below). The resulting function is pretty cumbersome, but it was just a game to see if I could get something close to the numpy version.
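
To show the accumulation step in isolation, here is a hedged sketch (the names add_at!, idx, and vals are illustrative, not part of the translated function below). numpy’s np.add.at(ftau, idx, vals) adds every value into ftau even when idx contains repeated indices, and the straightforward Julia equivalent is an explicit loop:

# Adds vals[k] into ftau[idx[k]] for every k, accumulating repeated indices,
# which is what numpy's np.add.at(ftau, idx, vals) does in a single call.
function add_at!(ftau, idx, vals)
    for (i, v) in zip(idx, vals)
        ftau[i] += v
    end
    ftau
end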

function nufft_numpy(x, y, M, df=1.0, epsilon=1E-15, iflag=1)
    """Fast Non-Uniform Fourier Transform"""
    Msp, Mr, tau = _compute_grid_params(M, epsilon)
    N = length(x)
    # Construct the convolved grid
    ftau = zeros(typeof(y[1]), Mr)
    hx = 2pi / Mr
    xmod = map(mod2pi, x*df)
    m = 1+ int(xmod/hx)
    mm = [-Msp:Msp-1]
    mpmm = broadcast(+, transpose(m), mm)
    spread = broadcast(*, exp(-0.25 * (transpose(xmod).- hx*mpmm).^ 2 / tau), transpose(y))
    for (i,s) in zip(map(xi->1+mod(xi, Mr), mpmm), spread)
        ftau[i] += s
    end
    # Compute the FFT on the convolved grid
    if iflag < 0
        Ftau = (1 / Mr) * fft(ftau)
    else
        Ftau = ifft(ftau)
    end
    Ftau = [Ftau[end-div(M,2)+1:end], Ftau[1:div(M,2)+M%2]]
    # Deconvolve the grid using convolution theorem
    k = nufftfreqs(M)
    (Ftau.*(1 / N) * sqrt(pi / tau)).* exp(tau * k.^2)
end

test_nufft(nufft_numpy)
test_nufft(nufft_fortran.nufft1)
------------------------------
testing nufft_numpy
- Results match the DFT
- Execution time (M=100000): 4.58 sec
------------------------------
testing fn
- Results match the DFT
- Execution time (M=100000): 0.12 sec

So our attempt to emulate the numpy setup was a disaster. This could have been expected: in Python we were trying to remove a for loop, but loops are not inherently slow in Julia, so the complicated broadcasting only degraded performance. In a way, it’s reassuring that the more convoluted code performs worse!

Let’s see if the numba code results in more efficient Julia code. This is the line-by-line translation of the numba code (In[11]):

function build_grid(x, c, tau, Msp, ftau)
    Mr = size(ftau,1)
    hx = 2pi / Mr
    for i=1:size(x,1)
        xi = mod2pi(x[i])
        m = 1 + int(xi/hx)
        for mm=-Msp:Msp-1
            ftau[1 + mod((m + mm) , Mr)] += c[i] * exp(-0.25 * (xi - hx * (m + mm))^2 / tau)
        end
    end
    ftau
end

function nufft_numba(x, c, M, df=1.0, eps=1E-15, iflag=1)
    """Fast Non-Uniform Fourier Transform from Python numba code"""
    Msp, Mr, tau = _compute_grid_params(M, eps)
    N = length(x)

    # Construct the convolved grid
    ftau = build_grid(x * df, c, tau, Msp, zeros(typeof(c[1]), Mr))

    # Compute the FFT on the convolved grid
    if iflag < 0
        Ftau = (1 / Mr) * fft(ftau)
    else
        Ftau = ifft(ftau)
    end
    Ftau = [Ftau[end-div(M,2)+1:end], Ftau[1:div(M,2)+M%2]]

    # Deconvolve the grid using convolution theorem
    k = nufftfreqs(M)
    (1 / N) * sqrt(pi / tau) .* exp(tau * k.^2).*Ftau
end

test_nufft(nufft_numba)
test_nufft(nufft_fortran.nufft1)
------------------------------
testing nufft_numba
- Results match the DFT
- Execution time (M=100000): 0.32 sec
------------------------------
testing fn
- Results match the DFT
- Execution time (M=100000): 0.12 sec

This is much better performance! Finally, we can potentially gain some more speed by pre-computing the exponentials. Again, this is a direct translation of Jake’s code.

function build_grid_fast(x, c, tau, Msp, ftau, E3)
    Mr = size(ftau,1)
    hx = 2pi / Mr
    # precompute some exponents
    for j=0:Msp
        E3[j+1] = exp(-(pi * j / Mr)^2 / tau)
    end
    # spread values onto ftau
    for i=1:size(x,1)
        xi = mod2pi(x[i])
        m = 1 + int(xi/hx)
        xi = xi - hx * m
        E1 = exp(-0.25 * xi^2 / tau)
        E2 = exp((xi * pi) / (Mr * tau))
        E2mm = 1
        for mm=0:Msp-1
            ftau[1+mod((m + mm) , Mr)] += c[i] * E1 * E2mm * E3[mm+1]
            E2mm *= E2
            ftau[1+mod((m - mm - 1) , Mr)] += c[i] * E1 / E2mm *E3[mm+2]
        end
    end
    ftau
end

function nufft_numba_fast(x, c, M, df=1.0, eps=1E-15, iflag=1)
    """Fast Non-Uniform Fourier Transform from Python numba code"""
    Msp, Mr, tau = _compute_grid_params(M, eps)
    N = length(x)

    # Construct the convolved grid
    ftau = build_grid_fast(x * df, c, tau, Msp,
                           zeros(typeof(c[1]), Mr), zeros(typeof(x[1]), Msp+1))

    # Compute the FFT on the convolved grid
    if iflag < 0
        Ftau = (1 / Mr) * fft(ftau)
    else
        Ftau = ifft(ftau)
    end
    Ftau = [Ftau[end-div(M,2)+1:end], Ftau[1:div(M,2)+M%2]]

    # Deconvolve the grid using convolution theorem
    k = nufftfreqs(M)
    (1 / N) * sqrt(pi / tau) .* exp(tau * k.^2).*Ftau
end

test_nufft(nufft_numba_fast)
test_nufft(nufft_fortran.nufft1)
------------------------------
testing nufft_numba_fast
- Results match the DFT
- Execution time (M=100000): 0.70 sec
------------------------------
testing fn
- Results match the DFT
- Execution time (M=100000): 0.12 sec

Here I am surprised to see that the performance is worse than before. I don’t see an obvious reason for it, and of course a profiler should be the next step. Whereas nufft_numba_fast in Python is almost as efficient as the Fortran code (0.14 sec vs. 0.11 sec), in Julia it is about twice as slow as the simpler nufft_numba.
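
As a starting point, here is a minimal sketch of how one could look into it with Julia’s built-in sampling profiler (xt and yt are illustrative test arrays, not from the post):

# Generate test data, sample the slow call, and print where the time is spent.
xt = 100 * rand(100000)
yt = sin(xt)
@profile nufft_numba_fast(xt, yt, 100000)
Profile.print()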

Conclusion? Julia is easy and powerful, but for those used to Python, numba is a great alternative that can produce even faster code with less effort.

As I am new to Julia, I may have made several mistakes, and I would appreciate it if readers could point them out.

You can find a very similar version of this post as a python notebook.

Python for Fortran programmers 8: Looking ahead


This series of posts was not a Python tutorial, just some tips for Fortran programmers who are learning Python. Once you know the basics of Python, you can focus on the extensions useful to scientists.

The most essential one is Numpy, which gives Python the ability to work efficiently with arrays. If you install Numpy, it is worth also installing Scipy. Scipy includes a wealth of algorithms that you probably need but don’t want to code yourself, from Fourier transforms and splines to minimization and numerical integration. There are excellent tutorials for both Numpy and Scipy, and a good place to start is http://www.scipy.org/.
Remember that although you can translate a Fortran code almost line by line into Python, the resulting code will be optimal neither for clarity nor for efficiency. Learn to be Pythonic:
http://www.cafepy.com/article/be_pythonic/
http://blog.startifact.com/posts/older/what-is-pythonic.html
Use dictionaries, use sets, use list comprehensions, and even consider using classes! Remember that almost everything is iterable in Python.
This is my last post in the series Python for Fortran programmers, but I will continue writing about Python tools that I find useful for my research. I hope they will also help other computational chemists and biophysicists.

Python for Fortran programmers II: why Python?


Python is a general-purpose language, used in very different fields. Take a look at http://wiki.python.org/moin/PythonProjects. Many of the projects are available in the Python package repository, PyPI. That means the language is active and suitable for many applications. But of course, we also want it to be good at number crunching and data visualization.

For that you need some packages. Packages are extensions of the core language, somewhat like libraries in Fortran, and they need to be imported before they are used. Some packages are a must for scientists: numpy, matplotlib and, possibly, scipy. Installing Python packages is easy. I will explain how in the future, but these three packages are in most Linux repositories (certainly in Ubuntu), and that is the simplest way to install them.

Because Python is an interpreted language (it’s gonna be very slow!! Wait, wait…), you can use different ‘shells’. I recommend IPython. That, together with the previous packages, turns Python into a powerful scientific development tool. If you have time (I promise to keep this post short), watch this amazing talk by Fernando Perez, the creator of IPython:

http://www.youtube.com/watch?feature=player_embedded&v=F4rFuIb1Ie4#!

If you are still not convinced, take a look at this survey which compares Python to Fortran:

http://hammerprinciple.com/therighttool/items/python/fortran

Convinced? Then start by typing import this and absorbing the Zen of Python. Then impress your colleagues by defying gravity with import antigravity. Aha! You look more Pythonic now…

If you are new to Python and want to install it, you will have to decide whether to use Python 2 or Python 3. In the next post we will see how to make this decision. The short answer is ‘use Python 3’.