Archive for July, 2008

GPL v3 loophole

Thursday, July 31st, 2008

For the most part I like the v3 GPL - it makes some things clearer and closes a few loopholes that allowed people to violate the spirit of GPLv2 while sticking to its letter.

The one part I don't like is the anti-Tivoization clause. Not because I think it's draconian, but because I think it's ineffective. Suppose you are Tivo and you want your users to be able to run only your signed version of some particular piece of GPLv3 software on your hardware. All you have to do is split into two companies (Tivo Hardware and Tivo Software).

Tivo Hardware doesn't distribute any GPLv3 code at all - it just ships boxes which download software from some internet site, check that it is signed correctly and then install it. Tivo Hardware is therefore not bound by GPLv3. Tivo Software just modifies GPLv3 software the way they want to, signs it with some private key and then uploads it to some internet site (the same one the Tivo boxes happen to download from). Tivo Software are not "conveying an object code with or specifically for use in a User Product" - they're just distributing signed code as many (if not all) GNU/Linux distributions do. That it happens to be the same key that Tivo Hardware boxes accept is irrelevant - that's not Tivo Software's fault; Tivo Hardware could do that with anyone's public key.

If there were a way that the Tivoization clause could work, someone could really mess with the FSF by releasing a piece of hardware that didn't include any GNU software but would only run GNU software signed by the FSF.

Given that this clause is so easily circumvented, it might as well be removed to simplify the license. While I appreciate what the FSF are trying to do with this clause, I don't think there is any way to make it work reliably without being overly draconian. The "distribution to business" compromise is also weird and could be thought of as a licensing "code smell".

The other minor gripe I have with GPLv3 is that it is much longer and more difficult to understand than GPLv2. I guess that's an inevitable (and unfortunate) consequence of making it more lawyer-proof, though.

Bloggers' remorse

Wednesday, July 30th, 2008

Sometimes I'll be really proud of a blog post after having written it, but when it comes time to actually post it I'll cringe a bit to remember it (especially if there's anything at all controversial in it). The feeling usually goes away (at least mostly) when I re-read the post and it isn't as confrontational or embarrassing as I remembered, but sometimes I just have to grit my teeth and post anyway. There have been times when I have pulled posts altogether, though - either because I no longer agreed with what I wrote or because I wanted to find a better form for those thoughts. Perhaps someday I'll come back to those posts and see if they can be resurrected in some form or other.

The story of the sprite, the string and the castle

Tuesday, July 29th, 2008

Many years ago I wrote an ASCII/ANSI/text-mode design program called SCRDES (for screen designer). It was probably the most sophisticated thing I had ever written at the time - it had many features (circles, lines, rectangles etc.) and little text mode windows that would pop up over your drawing to do things like select different tools and colours and save or load your drawing. I mostly used it for designing levels for my game "Cool Dude".

A bug in this program spawned an interesting little puzzle. The bug appeared right after I had added the user-friendly interface with the text mode windows. These worked by saving the part of the screen behind the window and restoring it to make the window disappear again. However, the first time I tried to remove a window that was on top of a picture, I got some strange garbage instead of the picture I wanted. For example, the image:

would be turned into this:

Each time I created and destroyed the window, the garbage changed. I quickly realized that the bug was that I was saving the image into a string one way (scanning horizontally left to right and then top to bottom) and restoring it another way (scanning vertically top to bottom and then left to right). So I was putting back the same data, just in the wrong order.

The patterns in the garbage were interesting, so I wrote another program, called SPRITE (small bitmaps are often called sprites) which took an image and transformed it in this way repeatedly and which could be made to go forwards or backwards. I quickly discovered the following things:

  • After a certain number of iterations in one direction, the picture becomes the same as the original. This number depends on the dimensions of the picture. A square picture, for example 8x8 or 25x25, would be restored to the original after only 2 iterations but anything more complicated takes many more. The 80x25 castle image above takes 999 iterations to return to the original.
  • Some of the intermediate images consisted of smaller copies of the original arranged in a rectangular grid:

    Sometimes these smaller images would be upside-down, sideways, or even diagonal (straddling the edges in a toroidal fashion):

  • Sometimes, instead of being shrunk, the image appears to be enlarged and interleaved with the other parts of itself:

After thinking about this for a while one can sort of see why it happens. Consider what happens to the point (1,0). After some number of iterations it will end up at some point (a,b), and the point (2,0) will end up at (2a,2b) (modulo wrapping at the edges). The transformation is linear in that sense, so what you see are affine transformations of the original image.

To figure out how many iterations it will take to return an x*y image back to its original self, I think one can just consider the trajectory of the point which starts off at (1,0) under the repeated applications of the transformation (a,b)->(⌊(bx+a)/y⌋,(bx+a) mod y) or, equivalently, the point 1 under the transformation c->(c mod y)*x+⌊c/y⌋ - once that ends up back where it started the rest will fall into place by linearity. I think the answer is the multiplicative order of x modulo xy-1, based on some tinkering with the OEIS, but I'm not sure how to go about proving it.
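This is easy to check numerically. Below is a minimal Python sketch (my reconstruction, not the original SPRITE program): it applies the wrong-order restore to a whole x-wide, y-tall grid, counts iterations until the grid returns to itself, and compares against the multiplicative order of x modulo xy-1.

```python
def transform(flat, x, y):
    # One buggy save/restore round trip: the image was saved scanning rows
    # (left-to-right, top-to-bottom) but is restored scanning columns
    # (top-to-bottom, then left-to-right), so cell n of the restored image
    # receives cell (n % x) * y + n // x of the saved string.
    return [flat[(n % x) * y + n // x] for n in range(x * y)]

def period(x, y):
    # Number of iterations before an x-wide, y-tall image is restored.
    start = list(range(x * y))
    current, n = transform(start, x, y), 1
    while current != start:
        current, n = transform(current, x, y), n + 1
    return n

def multiplicative_order(a, m):
    # Smallest k > 0 with a^k == 1 (mod m); assumes gcd(a, m) == 1.
    k, v = 1, a % m
    while v != 1:
        v, k = v * a % m, k + 1
    return k
```

With this, period(8, 8) == 2 and period(80, 25) == 999, matching the observations above, and both agree with multiplicative_order(x, x*y - 1). The congruence also shows why: writing an index as c = qy + r, the transformation sends c to rx + q, and rx + q ≡ cx (mod xy-1) because xy ≡ 1.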

The trajectory can contain every point in the grid (except the top left and bottom right, which remain fixed) but does not necessarily do so. Some grids (for example 78x66) do, but some only contain a fraction (<0.5) of the points. Of the 1998 remaining points in the 80x25 grid, for example, half are in the trajectory - the white ones in this image:

As you can see, this image is symmetrical under the transformation of rotating by 180° and switching black with white.

The following image was made by looking at all of the possible grids from 2x2 to 65x65 inclusive, calculating the number of iterations required in each case, dividing by the number of points in the grid minus 2 and converting the resulting fraction into a shade of grey (white=1, black=0). So the white points correspond to grids that take the maximum number of iterations (xy-2), the next most common shade (mid-grey) corresponds to grids that take (xy-2)/2 iterations and so on.

I ran the program for grids up to 2001x2002 but the resulting image looks pretty homogeneous. White pixels seem to appear more often on lines corresponding to (some) prime numbers, but don't seem to get rare as quickly as prime numbers do. Rational numbers <0.5 with small denominators seem to show up more often.

Another open question is, given a x*y grid and a number of transformations n, how many castles will there be in the resulting image and what angles will they be at? (Or, dually, what fraction of a castle is magnified up to the entire sprite?).

Edit 14th July 2013:

This seems to be closely related (although not quite identical) to Arnold's cat map.

Pot or not-pot

Monday, July 28th, 2008

This post is based on a maths lesson I had as a very young child. It was in my first primary school so I couldn't have been more than 9. I had some fantastic maths lessons at that school - one fun one that I remember was my first introduction to the Fibonacci series, via a population of breeding rabbits. Another involved colouring squares of the multiplication table according to whether each number could be divided by some other number and seeing what patterns emerged.

The idea behind this lesson was this. You have a (bizarre) snooker table which is a rectangle whose width and length are integers, and whose only pockets are in the corners. You start with the ball in one corner and hit it at a 45 degree angle from one of the sides, very hard. It bounces off the various edges some number of times and eventually ends up in one of the other 3 corner pockets (it can't end up in the pocket where it started because it would have had to retrace its steps, which must have meant that it bounced off some edge perpendicular to its path, but its paths are always at 45 degree angles to the only edges).

If it ends up in the pocket opposite where it started, that is a "pot" and if it ends up in one of the other two corners, that is a "not pot". Can you find the rule for determining whether an x by y table results in a "pot" or a "not pot"? What fraction of tables result in a "pot"? As I recall we spent a happy time drawing rectangles and tracing out the patterns made by the ball. We noticed that square tables (plus tables whose width was an odd multiple of their length) were always "pot"s and tables whose width was an even multiple of their length were "not pot"s. I think we noticed that given any integer n, an x by y table had the same result as an nx by ny table. We may even have statistically determined that about 1/3 of the tables were "pot"s, but as I recall we didn't find the general pattern.

Today (at least 20 years later!) I finally got around to doing some more investigation on this. I wrote a program to perform the algorithm for different sized tables and plotted the results on a graph (red is "pot" and the green and blue are the other two corners):
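A minimal sketch of such a simulation (in Python, not my original program - the corner names are my own labels): start the ball at (0,0), step it diagonally one unit at a time, reflect off the edges, and report which corner pocket swallows it.

```python
def pocket(x, y):
    # Roll a ball from corner (0, 0) of an x-by-y table at 45 degrees
    # and return which corner pocket it falls into.
    px, py, dx, dy = 0, 0, 1, 1
    while True:
        px, py = px + dx, py + dy
        if (px, py) == (x, y):
            return 'opposite'        # a "pot"
        if (px, py) in ((x, 0), (0, y)):
            # a "not pot": one of the two adjacent corners
            return 'adjacent-x' if py == 0 else 'adjacent-y'
        if px in (0, x):
            dx = -dx                 # bounce off a vertical edge
        if py in (0, y):
            dy = -dy                 # bounce off a horizontal edge
```

Squares and odd multiples come out as "pot"s and even multiples as "not pot"s, matching what we found at school, and counting over all tables up to some size shows each of the three pockets claiming roughly a third.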

There is an interesting fractal structure here. If you have the picture for tables up to size 2^n by 2^n you can get the picture for tables up to 2^(n+1) by 2^(n+1) by repeating the 2^n by 2^n picture 4 times and then fixing up the 2^n by 2^(n+1) and 2^(n+1) by 2^n tables. From the recurrence relation thus found one can prove that given a random table, the three pockets are equally likely.

The patterns here make me suspect that there might be some simple algorithm for finding the pocket based on the binary expansions of the table dimensions. I'm not sure what it is though.

One simple (non-binary) method for finding the pocket given the table dimensions is described here.


Sunday, July 27th, 2008

I think that of all the features of C++, destructors are probably my favorite. They seem simple enough (code that is run when an object is destroyed, ensuring resources are cleaned up) but when you look into the consequences of that, they change the entire game and mean that C++, while it looks at first glance quite similar to C, is a very different language altogether.

RAII gets talked about a lot but the real meat of it is RRID - the destruction part.

For one thing, they mean that some things can now happen implicitly, rather than having to be explicitly coded. That's a good thing - it means the higher-level code can be terser, and terser code is less likely to contain bugs. A common complaint of C programmers learning C++ for the first time is that they can't see exactly what's going on by looking at the code any more - but what they haven't yet realized is that that is a good thing.

For another thing, they make exceptions possible, with all the benefits they bring, and without the overhead, non-orthogonality and non-determinism of garbage collection.

Because destructors are not like normal functions, there are some things you should not do with them (like throw exceptions from them). One way to think about this is that normal functions (including constructors) do "forward" work and destructors do "reverse" work (undoing things, releasing resources etc.). The reverse work should never be able to fail, because that would mean that neither forward progress nor reverse progress can be made - the program is irretrievably broken and must be terminated immediately. Hacks like uncaught_exception() should not be used - if you're tempted to use it, that's a sign that you're trying to make forward progress in a destructor and should re-think your design.

Another way of writing "reverse direction" code in some languages is "finally". It's too bad that C++ doesn't have this (you either have to catch and rethrow or write a special object) - there are occasions when you want to do some one-off cleanup and don't want to have to write a whole class just to get a destructor, and the "finally" notation makes it clearer than the catch/rethrow notation. I wouldn't give up destructors for finally, though, as some languages do.

Homogeneous computer

Saturday, July 26th, 2008

I was thinking about FPGAs recently, and that got me wondering about the possibility of designing a computer architecture that is completely homogeneous - that is, composed of a number of identical cells arranged in a square grid. Some of the cells (presumably those on the edges or even just a corner or two) would be connected to IO devices (perhaps directly, perhaps via a bus for a more general machine).

Each cell would have some kind of logic and a small amount of memory (enough to remember how it is connected to adjacent cells and to use as actual user memory).

We would like to be able to have different pieces of the machine doing different things at the same time, so cells should be independent (except where we explicitly connect them to adjacent cells).

We also want to be able to have the hardware reconfigure itself (so while group-of-cells-A is doing something else, group-of-cells-B can be reprogramming group-of-cells-C). In order to do this, we need to make it possible for each cell to address any other cell that it's connected to.

If we have some string of cells connected together in a line, how do we differentiate between data sent all the way to the end of the line and programming instructions sent to a particular cell somewhere along the line? For a chip with 2^n cells we need to have at least n+1 bits connecting each cell to its neighbours - n data bits (x) and one "program" bit (p). If a cell in "interconnect" mode sees an input of (p,x)==(0,x) then it passes (0,x) out the other side. If it sees (1,x>0) then it passes (1,x-1) out, and if it sees (1,0) then it is reprogrammed. I guess we'd also need some additional data bits to tell the cell what mode it should be in after reprogramming. A cell with 4m input bits and m output bits requires m*2^(4m) bits to completely describe its truth table, and this is clearly going to be much larger than m, so obviously we can't program it with any arbitrary truth table in one go - we'd either have to upload it in pieces or limit the possible cell modes to some useful subset of the possible truth tables.
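To make the addressing scheme concrete, here is a toy Python model (my own sketch, not a hardware design) of a line of cells in interconnect mode passing along a (p, x) pair:

```python
def send(line_length, p, x):
    # Walk a (p, x) message down a line of interconnect-mode cells.
    # Returns the index of the cell that gets reprogrammed, or None if
    # the message is pure data (or falls off the end of the line).
    for i in range(line_length):
        if p == 0:
            continue      # (0, x): data, forwarded unchanged
        if x == 0:
            return i      # (1, 0): this cell consumes the message and is reprogrammed
        x -= 1            # (1, x>0): forwarded to the next cell as (1, x-1)
    return None
```

So send(8, 1, 3) singles out cell 3 for reprogramming, while send(8, 0, 3) carries the data bits to the far end untouched; with n data bits any of 2^n cells along the line can be addressed this way.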

This is probably an incredibly inefficient way to build a computer, since for any given state a large number of the cells are likely to be acting as just interconnects or just memory and only using a tiny proportion of their capacity. But having the cells be identical might mean that doesn't matter so much if we can build a very large number of them very cheaply.

A variation on this architecture might be to have the cells arranged three-dimensionally in a cube rather than on a flat 2D surface. Current photolithographic fabrication technologies aren't quite up to this yet (the fattest chips are only maybe 10 layers thick) but I expect that the nanoassembly and microfluidic channels for integrated cooling that will be required to make fully 3D chips aren't too far off. When that happens the chips will be so complex that we'll have to use concepts like repetition and fractal-like scale invariance to a greater extent to make their designs tractable. In other words, chips will start to look more biological - more like living tissue. Perhaps a more homogeneous computer design will come into its own in that era.

Concrete data type

Friday, July 25th, 2008

A handy thing to have around (and something I want to put into the UnityALFE language) is a data type for representing physical quantities. This keeps track of a real number plus powers of metres, seconds, kilograms, amps, Kelvins (and maybe others, including user-defined ones). Two values of type Concrete can always be multiplied or divided but can only be added or subtracted if their dimensions match, and can only be converted to other types if they are dimensionless.

Common units of measurement and physical constants can be given as constants. Because the units are tracked automatically you can do things like:


and get the right answer.

Usually the compiler should be able to check the dimensions at compile time and elide them like other type information, or give a compile error for expressions like:


Along similar lines, the log() function could be considered to return a value of type Concrete with some special non-zero dimension. Then you can (and indeed must) specify the base of the logarithm by dividing by another logarithm (e.g. log(x)/log(e), log(x)/log(10) or log(x)/log(2)). This syntax is rather more verbose than the usual method of having separate functions for common bases (log(), lg(), ln() etc.) but I find that this is more than made up for by the fact that one doesn't have to remember which function corresponds to which base - it's self-describing.
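Since the original code samples are images, here is a hedged Python sketch of the idea (the names Quantity, metre and second are mine, not UnityALFE's): dimensions ride along as a map of exponents, multiplication and division combine them, addition demands an exact match, and log() gets its own pseudo-dimension so that only ratios of logarithms are dimensionless.

```python
import math

class Quantity:
    # A number with dimensions, e.g. {'m': 1, 's': -2} for an acceleration.
    def __init__(self, value, dims=None):
        self.value = value
        self.dims = {k: v for k, v in (dims or {}).items() if v != 0}

    def _combine(self, other, sign):
        dims = dict(self.dims)
        for k, v in other.dims.items():
            dims[k] = dims.get(k, 0) + sign * v
        return dims

    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))

    def __add__(self, other):
        # The real language would reject this at compile time.
        if self.dims != other.dims:
            raise TypeError('dimension mismatch')
        return Quantity(self.value + other.value, self.dims)

def log(q):
    # Logarithms carry a special 'log' dimension, so log(x)/log(10)
    # is dimensionless but a bare log(x) is not.
    if q.dims:
        raise TypeError('log of a dimensioned quantity')
    return Quantity(math.log(q.value), {'log': 1})

metre = Quantity(1, {'m': 1})
second = Quantity(1, {'s': 1})
```

For example, Quantity(10) * metre / second has dims {'m': 1, 's': -1}; adding it to a bare number raises the sketch's version of a compile error; and log(Quantity(100)) / log(Quantity(10)) comes out dimensionless with value 2.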

Another useful type for scientific/engineering work would be a value with confidence interval (e.g. 5±1 meaning "distributed normally with a mean of 5 and a standard deviation of 1"). There are well-defined rules for doing arithmetic with these. A generalization of this to other distribution functions might also be useful.
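For sums and differences the well-defined rule is that independent standard deviations add in quadrature; a tiny sketch of that (my own, assuming independent normally distributed errors):

```python
import math

class Approx:
    # value +/- sd, assuming independent, normally distributed errors.
    def __init__(self, mean, sd):
        self.mean, self.sd = mean, sd

    def __add__(self, other):
        # Variances of independent variables add, so sds add in quadrature.
        return Approx(self.mean + other.mean, math.hypot(self.sd, other.sd))

    def __sub__(self, other):
        return Approx(self.mean - other.mean, math.hypot(self.sd, other.sd))
```

So (5±1) + (3±2) gives 8±√5; multiplication works similarly, with relative errors adding in quadrature to first order.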

Twenty years of programming

Thursday, July 24th, 2008

I recently came across an archive of old files that I hadn't looked at in a while, and found what I think are the first programs I ever wrote. In particular, the oldest program I can find that I'm sure I wrote all by myself (that wasn't written by a relative or typed in from a book) is dated 24th July 1988, 5:23pm. It's possible that I started it earlier than that or that other programs written at the same time predate it - the only timestamp supported by the DOS filing system was "last modified".

"More or Less" was written in Locomotive BASIC2 which ran under GEM on our Amstrad PC1512. GEM and BASIC2 (and therefore "More or Less") can still be run on a modern PC with the help of the instructions here and DOSBox (when run under Windows XP the instruction pointer seems to end up pointing to non-code data for some reason, and Vista graphics drivers don't support graphical DOS programs).

"More or Less" is a rough clone of a piece of educational software of the same name that ran on the BBC micros at school. I found a copy of the original - the copyright information says:
MLESS / More or Less
Microelectronics Education Programme
MICROPRIMER Software Pack 4
(c) CET 1982 /ISBN 0-86184-085-2
Program idea: A Straker, I Drysdale, A J Curtis
Programmed at C.E.C.C.
Editor: Bob Coates & Five Ways Software
Version 1.0 / 1 November 1982
Works with BBC Micro 32k (480Z and SPECTRUM versions also available)

The program displays two numbers and you have to say whether the left one is more than, less than or the same as the right one.

Not particularly challenging (even for 9 year olds) but I think the BBC version at least kept track of how long you took. I intended to add more features but (like so many of my programs since) I never got around to finishing it.

Another interesting feature of the BBC version was the intro/title screen. This printed "More or Less" in letters that grew and shrank in height so that the line formed by the tops of the letters bulged in and out:

(kind of a primitive version of texture mapping I suppose). I wanted to reproduce this effect, but my programming skills were not up to it at the time. But I recall being quite proud of what I came up with instead - the title was centered, printed in a large font, double-spaced, and the letters appeared one by one, each one accompanied by a beep (without a third-party driver, which I wouldn't come across for several years, BASIC2 was only capable of generating that one sound programmatically).

My version did have a few interesting features - it asked for the player's name and addressed him/her by name for right and wrong answers. The program maintained a table of scores in a disk file (not sorted, though, and not necessarily "high scores" - I didn't know about arrays at that point).

I'm sure I had written some other programs around the same time or earlier, but I guess I wasn't sufficiently proud of them to save them. In my last year at Goring primary school (which ended around the same time I wrote "More or Less") I also wrote some Logo programs to draw letters of the alphabet with turtle graphics. These were not particularly sophisticated (just sequences of forward and turn instructions, plus loops for the curved bits) but were probably the first actual programs I wrote. I don't have any record of them, though.

My first made-up language

Wednesday, July 23rd, 2008

Years ago (more than a dozen), I tried to write (in C) an interpreter/compiler for a dialect of BASIC that I made up. It was based on BBC BASIC and the BBC's graphics facilities (which I was very familiar with at the time) but it would have run on the PC and had some additional features:

  • Operators and control structures from C
  • More string and array processing facilities
  • More graphics commands
  • Interactive editor and debugger
  • Commands for accessing PC facilities (interrupts, calling native code etc.)
  • Built-in help
  • Facilities for storing chunks of code as a library
  • Complex numbers
  • Self-modification and introspection facilities

The last of these is what I want to write about today.

As a child I used many of the 8-bit BASIC variants which had line numbers, and it always irked me that there were some things you could do in immediate mode that you couldn't do from a running program, such as editing the program. Why was it that typing:

PRINT "HELLO"

printed "HELLO" immediately and:

10 PRINT "HELLO"

printed "HELLO" when the program was run but

10 10 PRINT "HELLO"

didn't create a program that replaced itself with the program '10 PRINT "HELLO"' when it was run? While doing so didn't seem terribly useful, it seemed to me that an interpreter would be just as easy (if not easier) to write with this facility than without it, and that it was an unnatural omission.

Along similar lines, my dialect had an "INTERPRET" command which took a string and ran it as BASIC code and a "PROGRAM$" array which contained the lines of the program.
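The flavour of this can be sketched in a few lines of Python (my reconstruction of the idea, not the original dialect): the program is just a mutable table of numbered lines, a statement that begins with a line number edits that table, and running the program again picks up the edit.

```python
def run(program):
    # program: dict mapping line numbers to statement strings.
    output = []
    for num in sorted(program):
        stmt = program[num]
        if stmt[:1].isdigit():
            # Self-modification: the statement is itself a numbered line,
            # so executing it edits the program table.
            target, _, rest = stmt.partition(' ')
            program[int(target)] = rest
        elif stmt.startswith('PRINT '):
            output.append(stmt[6:].strip('"'))
    return output

# Line 10 replaces itself with a PRINT statement when executed, so the
# first run prints nothing and the second prints HELLO.
program = {10: '10 PRINT "HELLO"'}
run(program)
```

With the program stored as ordinary data like this, the PROGRAM$ array comes for free, and INTERPRET is just running a string through the same statement dispatch.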

I got some way into writing a parser for it, but as I didn't know how to write a parser back then I got horribly tangled trying to write code for each possible combination of current state and next piece of input.

The similarities between this and the UnityALFE language that I'm in the (very slow and gradual) process of writing haven't escaped me.


Tuesday, July 22nd, 2008

I recently reorganized a big box of junk and put it into various smaller boxes. Behold my amazing junk classification scheme:

  • Broken electronics, for spare parts or recycling
  • Things that work but which I don't currently have a use for
  • Spare Ikea parts
  • Ikea tools
  • Stuff that I don't know what it is.
  • String
  • Nails
  • Screws
  • Nuts and washers
  • Screw eyes and pins
  • Stickers
  • Picture hooks
  • Screw hooks
  • Rawl plugs (or wall anchors as they are called here)
  • Anti-static bags
  • Small plastic toys (I'm sure Alexander will have a lot of fun with this box when he's old enough not to swallow the parts)
  • Junk (stuff that doesn't fit into any of the other categories)

It occurs to me that the category system on my blog works a similar way (though the category corresponding to "junk" is called "random").