Jupyter on GCE

I was recently inspired to set up Jupyter to run remotely on a GCE instance. I have access to a lot of computing resources for work, so it’s silly to run things locally on my laptop, but running interactive Python sessions remotely can be painful due to latency and the vagaries of terminals. Running Jupyter seems like a perfect fit here, since the editing is done locally – no lag – and Jupyter can be nicely self-documenting for moderate-sized projects.[1]

Jeff Delaney has a helpful post on setting up Jupyter on GCE, and the Jupyter docs on running a public server also have some useful information. However, the solutions for exposing Jupyter to the web were either not terribly secure, painful to implement, or both. Since I’m only interested in being able to run the server myself, a simple, relatively secure solution is to use ssh tunneling. So rather than exposing ports publicly on GCE, just start the Jupyter server on your GCE instance with the --no-browser option.

jupyter notebook --no-browser

Then, on your local machine run the following (here nnet-inference is the name of my instance, and the ssh flags forward local port 9999 to the notebook’s port 8888 on the instance):

gcloud compute ssh nnet-inference \
                       --ssh-flag="-L" \
                       --ssh-flag="9999:localhost:8888"

And point your browser to http://localhost:9999.

That’s it. Now you can use Jupyter remotely without opening up public ports on your GCE instance.[2]

[1] Once projects hit a certain size though, Jupyter becomes inscrutable and really needs to be modularized.
[2] A couple of minor notes: I run my notebook inside tmux so that it stays alive if my connection drops. And if the connection drops you’ll need to restart the tunnel.

This is (mostly) a Test

The primary purpose of this post is to test using IPython notebooks as part of WordPress blog posts. I already did some testing of this (see my previous post), but figured more couldn’t hurt. If someone finds it interesting, so much the better.

We transition from regular WordPress to a notebook just below:

IPython Notebooks in WordPress

Animating Weather Station Data

Every time there’s a big storm here, which is admittedly not that often, I think that there should be a better way to visualize the rainfall data that exists. The Flood Control District of Maricopa County maintains an extensive network of rainfall and streamflow gauges. The data is available for download, but the only forms I’ve seen it in are a static map or a table. As a little project, I decided to download the data from the big storm we had here on September 8th, 2014 and see if I could produce an interesting animation.

Due to some prodding from friends, the scope of this project crept: I ended up adding weather radar data as well as sound, and the whole thing took much longer than I’d planned on. The result is below: the blue circles are the rainfall gauges – circle area is proportional to the previous hour’s rainfall at that location – the red is the weather radar composite reflectivity, and the rainfall sound at a given time is proportional to the total rain at all stations.[1] The animation was produced using Python, primarily with the matplotlib package. If you are interested in how this is done, I’ve described the process of animating just the rainfall data in an IPython notebook, which can be viewed at nbviewer.ipython.org[2] or downloaded from github.com. I plan to add notebooks that show how to produce the rest of the animation shortly.
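
To give a rough idea of what the notebook covers, here is a minimal sketch of animating gauge markers with matplotlib’s FuncAnimation. The station locations and rainfall values below are made up for illustration – the linked notebook does the real version with the downloaded gauge data.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Hypothetical data: 50 stations with 24 hourly rainfall readings each.
n_stations, n_hours = 50, 24
lon = np.random.uniform(-113.3, -111.6, n_stations)
lat = np.random.uniform(33.0, 33.9, n_stations)
rain = np.random.gamma(1.0, 0.1, size=(n_hours, n_stations))  # inches per hour

fig, ax = plt.subplots()
# Marker *area* proportional to the previous hour's rainfall at each gauge.
scat = ax.scatter(lon, lat, s=rain[0] * 500, color='blue', alpha=0.5)
ax.set_xlabel('longitude')
ax.set_ylabel('latitude')

def update(hour):
    scat.set_sizes(rain[hour] * 500)
    ax.set_title('hour {}'.format(hour))
    return scat,

anim = FuncAnimation(fig, update, frames=n_hours, interval=200)
plt.show()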

[1] I just add up the rainfall at all of the stations. It would be more accurate to scale the contribution of each station based on the local density of stations, since rain in areas with more stations is overrepresented in the current scheme.
[2] I just discovered this and I really like it.

A Partially Successful Adventure in RPython

Before I talk about RPython, I need to say at least a little bit about PyPy. PyPy is a Python interpreter, written in Python, that outperforms the standard C implementation of Python (CPython) by about a factor of six.[1] Given that Python is generally thought to be much slower than C, one might think that was impossible. However, it is in fact faster, for most things at least, as one can see on the PyPy benchmark page.

This is possible because PyPy is only nominally written in Python. Sure, it can run (very slowly) on top of the standard CPython interpreter, but it’s really written in a restricted subset of Python referred to as RPython. The exact definition of RPython is a little vague – in fact the documentation says “RPython is everything that our translation toolchain can accept” – but essentially it removes the most dynamic and difficult-to-optimize portions of Python. The resulting language is static enough that the aforementioned toolchain can translate RPython code into native code, resulting in a tremendous speed-up. Not only that – and this is the truly amazing part – if the RPython code looks like an interpreter, the toolchain will automagically add a JIT compiler to it.

That sounds impressive, but I wanted to try out the RPython toolchain to get a better feel for what is involved in translating an interpreter and how large a performance gain one could expect from a straightforward, not highly tuned implementation. To this end I looked around and found lis.py, a minimal Scheme interpreter written by Peter Norvig. I tried translating this with RPython and quickly discovered that the implementation is much too dynamic to be successfully translated. So, using the basic structure of lis.py, I rewrote the interpreter so that RPython could translate it. The result, named skeem.py (pronounced skimpy), was roughly five times longer and at least several times uglier, but it did translate to a native-code version named skeem-c. (This is RPython’s choice of name and I believe it results from the fact that RPython first translates the Python code to C, then compiles it.)
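
If you haven’t seen the RPython toolchain before, the overall shape of a translatable program looks something like the sketch below – a made-up example, not skeem.py itself. The translator looks for a target function that hands back the program’s entry point:

# targetcount.py -- an illustrative (hypothetical) RPython program.
# Translated with something like:  rpython targetcount.py   (add --opt=jit for the JIT)

def entry_point(argv):
    # Only the static RPython subset is allowed here: no keyword-argument
    # tricks, no changing a variable's type, no fancy metaprogramming.
    total = 0
    i = 0
    while i < 10000000:
        total += i
        i += 1
    print(total)
    return 0

def target(*args):
    # The translation toolchain calls target() to find the program's entry point.
    return entry_point, None

if __name__ == '__main__':
    # Still runs (slowly) as plain Python for testing.
    import sys
    entry_point(sys.argv)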

To get an idea of the speedup, I wrote a Lisp program to write out the Mandelbrot set in PGM format. This will not run with the standard lis.py; I had to add a couple of commands to make generating the set feasible. Running the Mandelbrot code using skeem.py takes 45,000 seconds under Python 2.7. Using PyPy it takes 532 seconds, a considerable improvement. Finally, using the translated skeem-c reduced the run time to 98 seconds. That’s a whopping 450-times improvement over the original!
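
(For the record: 45,000 s / 532 s ≈ 85×, so PyPy alone buys nearly two orders of magnitude, and 45,000 s / 98 s ≈ 460×, which is where the roughly 450× figure comes from.)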

Unfortunately, adding a JIT was not a success. I managed to get skeem.py to translate with --opt=jit, but the resulting interpreter was five times slower than the interpreter without the JIT. I suspect that this is related to the main “loop” of the interpreter being recursive rather than iterative, but I didn’t dig into that.
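
For context, the “automagic” JIT isn’t entirely hands-off: the interpreter’s main loop has to be annotated with a JitDriver so the tracer knows where one iteration of the interpreted program begins. A rough sketch of the kind of annotation involved, on a toy interpreter rather than skeem.py:

from rpython.rlib.jit import JitDriver

# 'greens' identify the position in the interpreted program (constant within a
# trace); 'reds' are the values that change as the interpreted program runs.
jitdriver = JitDriver(greens=['pc', 'program'], reds=['acc'])

def interpret(program):
    # A toy interpreter: the program is a string of '+' and '-' instructions.
    pc = 0
    acc = 0
    while pc < len(program):
        jitdriver.jit_merge_point(pc=pc, program=program, acc=acc)
        if program[pc] == '+':
            acc += 1
        else:
            acc -= 1
        pc += 1
    return acc

A recursive evaluator has no such loop to hang the merge point on, which is presumably part of why my first attempt came out slower with the JIT than without it.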

Later, I will post some results from a second, uglier but faster, interpreter I wrote, which got a factor-of-two speed boost from --opt=jit. For reference, the Mandelbrot program is below:

(define x-centre -0.5)
(define y-centre 0.0)
(define width 4.0)
(define i-max 1600)
(define j-max 1200)
(define n 100)
(define r-max 2.0)
(define colour-max 255)
(define pixel-size (/ width i-max))
(define x-offset (- x-centre (* 0.5 pixel-size (+ i-max 1))))
(define y-offset (+ y-centre (* 0.5 pixel-size (+ j-max 1))))
 
(define (*inside? z-0 z n)
  (and (< (magnitude z) r-max)
       (or (= n 0)
           (*inside? z-0 (+ (* z z) z-0) (- n 1)))))

(define (inside? z)
  (*inside? z 0 n))

(define (boolean->integer b)
  (if b colour-max 0))

(define (pixel i j)
  (boolean->integer
   (inside?
    (make-rectangular (+ x-offset (* pixel-size i))
                      (- y-offset (* pixel-size j))))))

(define plot
  (lambda ()
    (begin (display (quote P2)) (display nl)
           (display i-max) (display nl)
           (display j-max) (display nl)
           (display colour-max) (display nl)
           (do ((j 1 (+ j 1))) ((> j j-max))
             (do ((i 1 (+ i 1))) ((> i i-max))
               (begin (display (pixel i j)) (display nl)))))))
 
(plot)

[1] And, if Python in Python is six times faster, would Python in Python in Python be 36 times faster? One can only wish!

NumPy on PyPy with a side of Psymeric

I’ve been playing around with PyPy lately (more on that later) and decided I’d take a look to see how the NumPy implementation on PyPy (NumPyPy[1]) is coming along. NumPyPy is potentially very interesting. Because the JIT can remove most of the Python overhead, more of the code can be moved to the Python level. This in turn opens up all sorts of interesting avenues for optimization, including doing the things that Numexpr does, only better. Therefore, I was very excited when I saw this post from 2012, describing how NumPyPy was running “relatively real-world” examples over twice as fast as standard NumPy.

I downloaded and installed the NumPyPy code from https://bitbucket.org/pypy/numpy. This went smoothly except I had to spend a bit of time messing with permissions. I’m not sure if this was something I did on my end, or if the permissions of the source are odd. In either event, installation was pretty easy. I first tested the speed of NumPyPy using the micro-optimization code from last week – this was my first indication that this wasn’t going to be as impressive as I’d hoped. NumPyPy was over 10× slower than standard NumPy when running this code!

I did some more digging around and found an email chain that described how the NumPyPy developers are focusing on completeness before speed. That’s understandable, but certainly isn’t as exciting as a very fast, if incomplete, version of NumPy.

I tried another, very simple example using timeit; in this case standard NumPy was more than 10× faster than NumPyPy:

$ python -m timeit -s "import numpy as np; a = np.arange(100000.0); b=a*7"  "x = a + b"
10000 loops, best of 3: 71.3 usec per loop
$ pypy-2.3.1-osx64/bin/pypy -m timeit -s "import numpy as np; a = np.arange(100000.0); b=a*7"  "x = a + b"
1000 loops, best of 3: 953 usec per loop
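
(That’s 953 µs versus 71.3 µs, or roughly a 13× gap.)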

Just for fun, I dusted off Psymeric.py, a very old replacement for Numeric.array that I wrote to see what kind of performance I could get using Psyco plus Python.  There is a copy of Psymeric hosted at https://bitbucket.org/dblank/pure-numpy/src, although I had to tweak that version slightly to ignore Psyco and run under both Python 2 and 3.  Running the equivalent problem with Psymeric using both CPython and PyPy gives an interesting result:

$ python -m timeit -s "import psymeric as ps; a = ps.Array(range(100000), ps.Float64); b=a*7"  "x = a + b"
10 loops, best of 3: 30.7 msec per loop
$ pypy-2.3.1-osx64/bin/pypy -m timeit -s "import psymeric as ps; a = ps.Array(range(100000), ps.Float64); b=a*7"  "x = a + b"
1000 loops, best of 3: 510 usec per loop

Running with CPython this is, predictably, pretty terrible (note the units are ms in this case versus µs in the other cases). However, when run with PyPy, this is actually faster than NumPyPy. Keep in mind that Psymeric is pure Python and we are just relying on PyPy’s JIT to speed it up.
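
(Roughly: the JIT takes Psymeric from 30.7 ms down to 510 µs – about a 60× speedup – and that 510 µs also beats NumPyPy’s 953 µs on the same problem by nearly 2×.)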

These results made me suspect that NumPyPy was also written in Python, but that appears not to be quite right. It appears that the core of NumPyPy is written in RPython, the same subset of Python that PyPy itself is written in. This allows the core to be translated into C. However, as I understand it, in order for this to work, the core needs to be part of PyPy proper, not a separate module. And that does in fact appear to be the case: the core parts of numpy are contained in the module _numpy, defined in the PyPy source directory micronumpy, and they are imported by the NumPyPy package, which is installed separately. If all that sounds wishy-washy, it’s because I’m still very unsure of how this works, but this is my best guess at the moment.

This puts NumPyPy in an odd position. One of the main attractions of PyPy from my perspective is that it’s quite fast. However, NumPyPy is still too slow for most of the applications I’m interested in. From comments on the mailing list, it sounds like their funding sources for NumPy are more interested in completeness than speed, so the speed situation may not improve soon. I should put my time where my mouth is and figure out how to contribute to the project, but I’m not sure if I’ll have the spare cycles soon.

[1] This was a name used for NumPy on PyPy for a while. I’m not sure if it’s still considered legit, but I can’t go around writing “NumPy on PyPy” over and over.

NumPy Micro-Optimization and Numexpr

Let’s suppose I wanted to compute the sine of one of the angles of a right triangle given the lengths of two of its sides (don’t ask me why, that’s just the first example that popped into my head). In Python, I might start with a function like so:

import numpy as np

def sin(x, y):
    # sine of the angle opposite side y, given perpendicular sides x and y
    return y / np.sqrt(x**2 + y**2)

Now suppose I want to calculate a whole lot of sine values?  My first instinct is always to use NumPy. In this case, all I need to do is pass sin a pair of arrays instead of a pair of Python floats. If the resulting performance is good enough, we are done. If not, things get more complicated.  There are several tools for speeding up this type of numerical code such as Cython, Weave, Numexpr, etc. I’ll discuss Numexpr in a bit, but first let’s see if there’s anything we can do to speed up the above function using Numpy itself.
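
For example, with a couple of made-up right triangles:

x = np.array([3.0, 5.0])
y = np.array([4.0, 12.0])
sin(x, y)   # array([0.8, 0.92307692]) -- the 3-4-5 and 5-12-13 triangles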

The first step, of course, is to quantify the performance of our initial function. For this we throw together some simple benchmark code:

import time

def make_x_y(n):
    # Random test arrays of length n.
    return (np.random.random([n]),
            np.random.random([n]) + 1.0)

def test_timing(func, sizes, repeats=10):
    for n in sizes:
        x, y = make_x_y(n)
        t = 1e99
        for i in range(repeats):
            t0 = time.time()
            func(x, y)
            t1 = time.time()
            t = min(t, t1 - t0)   # keep the best (minimum) time over the repeats
        print("{0:e} elements: {1} ns per element".format(n, 1e9 * t / n))

When I plot the time per element it takes to execute sin, several things are apparent. First, below about 10³ elements, the result is dominated by setup overhead. Second, near both 10⁶ and 10⁷ elements, the time per element jumps significantly. Finally, above 2×10⁹ elements, RAM starts to swap out to SSD and performance craters. The jump at 10⁶ elements was easy to associate with the 6 MiB L3 cache starting to overflow, but the jump at 10⁷ was puzzling since, as far as I knew, the Haswell CPU in my MacBook doesn’t have an L4 cache. However, I found out with a little digging that some of the Haswell CPUs, including mine, have 128 MiB of GPU eDRAM that functions as a joint cache for the GPU and CPU. The overflow of what is effectively an L4 cache is likely the source of this second jump in time per element.

[Figure: sin_basic – time per element versus number of elements for the basic implementation]
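
As a rough check on the L3 explanation: a float64 array of 10⁶ elements occupies 8 MB, so x and y together take 16 MB – already more than the 6 MiB L3 – before counting the temporaries allocated for x**2, y**2, their sum, and the square root.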

It is clear that part of our optimization effort should be directed at reducing these memory bottlenecks. I’ll show two approaches to this: the first uses in-place operations and the second uses a technique known as blocking. The first, in-place operations, takes advantage of the somewhat obscure out argument that most NumPy functions have, as shown below. This significantly reduces the allocation of temporary arrays and thus should reduce both allocation overhead and the memory bottlenecks.

def sin_inplace(x, y):
    temp1 = np.empty_like(x)
    temp2 = np.empty_like(y)
    np.multiply(x, x, out=temp1)
    np.multiply(y, y, out=temp2)
    np.add(temp1, temp2, out=temp1)
    np.sqrt(temp1, out=temp1)
    np.divide(y, temp1, out=temp1)
    return temp1

The second approach is to break the calculation into smaller chunks so as not to overwhelm the cache. This is referred to as blocking although I sometimes still refer to it as chunking for obvious reasons.

CHUNK_SIZE = 4096

def sin_blocks(x, y):
    [n] = x.shape
    result = np.empty_like(x)
    for i in range(0, n, CHUNK_SIZE):
        x1 = x[i:i+CHUNK_SIZE]
        y1 = y[i:i+CHUNK_SIZE]
        result[i:i+CHUNK_SIZE] = y1 / np.sqrt(x1**2 + y1**2)
    return result

Finally, these approaches can be combined:

def sin_both(x, y):
    result = np.empty_like(x)
    [n] = x.shape
    buffer_size = min(CHUNK_SIZE, n)
    tmp_base = np.empty([buffer_size])
    for i in range(0, n, CHUNK_SIZE):
        x1 = x[i:i+CHUNK_SIZE]
        y1 = y[i:i+CHUNK_SIZE]
        [n1] = x1.shape
        tmp = tmp_base[:n1]          # reuse one scratch buffer for every chunk
        r = result[i:i+CHUNK_SIZE]   # use the output array as the second temporary
        np.multiply(x1, x1, tmp)
        np.multiply(y1, y1, r)
        np.add(tmp, r, r)
        np.sqrt(r, r)
        np.divide(y1, r, r)
    return result

So far, I have taken a simple, clear function and turned it into three successively more incomprehensible ones. However, these new versions have one redeeming feature: as shown in the next plot, all three of these optimizations increase performance significantly. The in-place version is over 40% faster above 2×10⁶ elements, and the other two versions offer even greater speedups (at least for large element counts) in exchange for their lack of readability.

[Figure: sin_numpy_opt – time per element for the in-place, blocked, and combined versions]

These techniques might be useful in a situation where you need to squeeze a little more performance out of a specific bit of code, although I would recommend using them sparingly. Remember:

premature optimization is the root of all evil – Knuth

There are other approaches to speeding up Python and, as promised, I’ll now say a bit about Numexpr. Internally, Numexpr automagically applies the blocking technique discussed above to something that looks a lot like a NumPy expression. The most impressive part of Numexpr is not necessarily the speedup it delivers, but how clean the interface is:

import numexpr as ne

def sin_numexpr(x, y):
    return ne.evaluate("y / sqrt(x**2 + y**2)")

This takes the quoted expression, parses it, converts it to byte code, and then uses a specialized interpreter to perform the operation in chunks to improve performance. I would like to go into more detail on this, but this post is getting too long as it is and, more importantly, I haven’t looked at the internals of Numexpr in quite a while. One interesting, but superficial, thing to notice is that there is some sys._getframe() trickery going on here that allows evaluate to know the values of x and y without having them passed in.
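
If the frame inspection feels too magical, evaluate also accepts the variables explicitly through its local_dict argument:

def sin_numexpr_explicit(x, y):
    # Pass the operands explicitly instead of letting numexpr pull them
    # out of the calling stack frame.
    return ne.evaluate("y / sqrt(x**2 + y**2)", local_dict={'x': x, 'y': y})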

Numexpr offers a significant improvement in speed with very little loss of readability. Although the hand-optimized code still performs better in the single-threaded case, Numexpr does a good job of taking advantage of multiple cores, and with 8 threads enabled it handily outperforms the hand-optimized code.

[Figure: sin_opt_vs_numexpr – hand-optimized NumPy versus Numexpr, single- and multi-threaded]
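
For anyone who wants to reproduce the comparison, the thread count is set at runtime with numexpr’s set_num_threads; roughly:

ne.set_num_threads(1)   # single-threaded, comparable to the hand-optimized NumPy versions
test_timing(sin_numexpr, [10**6])
ne.set_num_threads(8)   # let numexpr split each chunk across 8 worker threads
test_timing(sin_numexpr, [10**6])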

Numexpr is limited primarily to vector operations, although it does have some limited support for the sum and product reduction operations. It supports most of the vector operations one would consider a core part of NumPy, including where, which can be leveraged to compute some expressions that would normally require a conditional. I refer you to the manual for the full list of supported expressions. While Numexpr has its limitations, it’s a fairly painless way to speed up certain classes of expressions in NumPy.
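
As a quick, made-up illustration of where:

x = np.random.random(10**6) - 0.5
# Replace negative entries with zero without writing an explicit Python conditional.
clamped = ne.evaluate("where(x < 0, 0, x)")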

Next week: perhaps something about RPython? Or perhaps something simpler as this entry took longer than I’d planned.