Archive for August, 2008

Different kinds of truth

Monday, August 11th, 2008

I used to think that the truth was just that - The Truth, singular. That there was just one "Platonic" set of true mathematical facts. I no longer subscribe to this point of view - what's true depends on who you ask.

First there are some basic truths that we have to agree on to have a discussion about anything, like "if A is true, and if A implies B, then B is also true". If we don't accept these basic logical principles as true, the consequences are simply that we can't deduce anything, or that we have to accept that everything is true, or that nothing is true. We accept these truths because if we didn't, what we would get is a rather limited and boring set of mathematics, useless for doing anything interesting (like modelling the real world). Those who would deny them can't be disproven, but they can't be reasoned with either. So these truths just have to be admitted as axioms.

Next there are empirical truths like "the sky is blue" and "2+2=4". These can be thought of as facts about the universe we live in. We know they are true because we can see that they are. One could in principle do mathematics without such facts (just using pure logic) but most mathematicians generally accept these truths as well as it makes mathematics more interesting (and definitely more useful).

Sometimes mathematicians envisage mathematical objects which cannot exist in our universe - objects which are infinite in some sense (not necessarily infinitely big - a perfect sphere is infinitely smooth, for example, and the real number line contains infinitely many points). Infinity is a very slippery thing to deal with precisely because infinities are never directly observed in the universe. How can we say anything about infinity then? Well, mathematicians have developed techniques like "epsilon delta" arguments (for every epsilon you can name, no matter how small, I can name a delta with such and such a property). These arguments break down in physics (nothing can be measured at scales smaller than the Planck length, because the concentration of energy required to confine a particle to such an interval would create a black hole) so they are purely mathematical in nature. Nevertheless they form a consistent and beautiful theory, and they do turn out to be useful for approximating physics, so we accept them.
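To make the "epsilon delta" game concrete, here is the standard definition of a limit, which is exactly an argument of this shape - the challenger names an epsilon, and we must answer with a delta:

```latex
\lim_{x \to a} f(x) = L
\quad\iff\quad
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x :
\; 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon
```

No matter how small an $\varepsilon$ is named, a suitable $\delta$ exists - so the statement says something about infinitely many cases without ever exhibiting an infinite object.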

But when infinities start to get involved, things get very weird - you start to find that there are multiple different versions of mathematics (multiple different sets of "true facts") which are internally consistent, consistent with our universe and interesting. Two of these arise from accepting or denying the "Axiom of Choice" (AC). If we accept the AC it allows us to prove things about infinities without actually constructing or defining them. This has some very weird results (like being able to disassemble a sphere into 5 pieces, move and rotate them and end up with 2 identical spheres the same size as the original with no gaps, or the fact that every set can be well-ordered). But denying the AC also gives you some weird results (for example, there can be a vector space with no basis, or an infinite set with no countably infinite subset). Each is just as "true" but gives a different set of mathematics. Currently mathematics including the AC is more popular as it seems to provide greater richness of intellectual territory.

As mathematics develops, it seems likely that more of these "interesting" axioms will be discovered (some of which might already have been assumed in some proofs) and that mathematics will fracture into increasing numbers of "branches" depending on which axioms one chooses to accept and which to deny. In fact, Gödel's Incompleteness Theorem says that for any consistent axiomatic system rich enough to describe arithmetic there will be true statements that can't be proved from those axioms - in other words, the "bulk of mathematics" (though not necessarily the bulk of interesting mathematics) is found at the leaves of this metamathematical tree.

There are other branches of mathematics whose "truth value" is currently unknown to human mathematicians. For example, many theorems have been proven under the assumption that the Riemann hypothesis is true. We think it probably is but nobody has been able to prove it yet. The volume of work which assumes it makes it one of the most important unsolved problems.

Computer algebra system

Sunday, August 10th, 2008

At some point in the future I'd like to write a computer algebra system, like Maple or Mathematica, just because I think it would be a really fun piece of software to write. The basic idea is that you can tell the computer things you know (e.g. "x+2=3") and ask it questions (like "x?") and the computer would attempt to give the simplest possible answer by searching its database of facts. When printing formulae on the screen it would use algorithms from TeX to give attractive output.
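The tell/ask loop can be sketched in a few lines. This is only a toy, handling facts of the shape "variable+constant=constant"; the function names `tell` and `ask` are made up for illustration, and a real system would need a proper parser and a much richer rewriting engine:

```python
# Toy sketch of the "tell facts, ask questions" idea.
# Facts look like "x+2=3"; queries look like "x?".

def tell(db, fact):
    """Parse a fact of the form '<var>+<const>=<const>' into the database."""
    lhs, rhs = fact.replace(" ", "").split("=")
    var, const = lhs.split("+")
    db[var] = float(rhs) - float(const)   # solve for the variable

def ask(db, query):
    """Answer '<var>?' by looking the variable up in the fact database."""
    var = query.rstrip("?")
    if var in db:
        return f"{var} = {db[var]:g}"
    return "unknown"

facts = {}
tell(facts, "x+2=3")
print(ask(facts, "x?"))   # x = 1
```

The interesting part of the real program would be everything this sketch leaves out: simplification, searching chains of facts, and the TeX-quality rendering.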

Another nice feature would be the ability to directly manipulate formulae, for example rearranging terms of an equation by dragging them with the mouse or expanding multiplications by clicking on them (the program, of course, would prevent manipulations that aren't mathematically correct). These kinds of manipulations can be very tedious to do by hand.

Yet another feature that I want is the ability to create animated, interactive proofs. Rather than just presenting a static sequence of "X implies Y implies Z" on a page, the program could actually create an animation of X turning into Y. And if, at some stage, a derivation is unclear, the user could right-click on it, select "Why?" and the program would attempt to explain. That sounds difficult to do but I think much of this is really quite mechanical. When studying mathematics at university, I often wished that the proofs were presented like this - it would have made things much easier.

As well as an interactive algebra and mathematical presentation system, this program would also contain a big database of mathematical facts, both to aid in the program's own proofs and as an (interactive) mathematics textbook in its own right. Mathematicians using the software could contribute their discoveries to this database/textbook in addition to (or even instead of) the traditional distribution method of writing papers.

Lost: The most complex story ever told?

Saturday, August 9th, 2008

I am a big fan of Lost. It has beautiful scenery, wonderful, well-paced storytelling, an incredibly compelling plot and some terrific acting. It always keeps me guessing and often surprises me. Many of its concepts ("The Others", "Constants") seem to be destined to become cultural symbols like "Big Brother" and "Room 101" from 1984 (it always used to amuse me when the TV shows of these names were on at the same time in the UK - what would George Orwell have thought about that?).

Lost's most notable feature, though, might be that it is (as far as I can tell) the most complex story ever told. Most fictional television shows (like Star Trek and Buffy the Vampire Slayer) tell a story over the course of an episode or two - the only point of the continuity is to avoid having to introduce all the characters for each story. But you can't really watch a single episode of Lost outside the context of the show and have it make much sense. Nothing ever seems to happen on the show without a reason - every detail seems to have some purpose (even if it isn't revealed until much later in the story).

Most novels seem to take 2-4 hours to tell on screen, and even multi-novel series like Harry Potter or the Belgariad/Malloreon are at most only 8-10 times as complex as this. "War and Peace" was made into a 7 hour film. "Lord of the Rings" was a little over 11 hours in total for the extended editions. Wagner's "Ring Cycle" is about 15 hours. Lost will be about 85 hours (without adverts) by the time it is finished. The only thing that even comes close is Babylon 5 at 77 hours, and much of that consists of standalone episodes.

I'm not counting soap operas or the Xanth series (a guilty pleasure of my youth) here because these are really separate stories that happen to occur in the same setting, involving overlapping subsets of characters, without an overall story arc.

English as a language for programming

Friday, August 8th, 2008

Programmers write programs in computer languages but the comments and identifiers (which are important, but not meaningful to the computer) are written in a human language.

Usually this human language is English, but not always - I have occasionally run across pieces of source code in French, German and Hebrew. I guess it makes sense for a programmer to write code in their first language if they are not expecting to collaborate with someone who doesn't speak that language (or if that piece of code is very specific to that language - like a natural language parser).

On the other hand, it seems kind of short-sighted to write a program in anything other than English these days. There can't be many programmers who don't speak some amount of English (since most of the technical information they need to read is written in English), and it seems likely that all but the most obscure hobby programs will eventually be examined or modified by someone who doesn't speak the first language of the original author (if that language isn't English).

There are other advantages to standardizing on English - a common vocabulary can be developed for particular programming constructs which makes programs easier to understand for those who are not familiar with their internal workings. The aim is, of course, that any programmer should be able to understand and work on any program.

That there is a particular subset of the English language that is used by programmers is already evident to some extent - I think it will be interesting in the next few years and decades to see how this subset solidifies into a sub-language in its own right.

I should point out that I'm not advocating putting legal or arbitrary technical barriers to prevent programs being written in other languages - more that it might be useful to have tools which can help out with programming tasks for programs written in English.

Having said all that, I think that in years to come a higher proportion of programming will be done to solve particular one-off problems rather than to create lasting programs - there's no reason why these throw-away programs shouldn't be written in languages other than English. Tool support for this can be very minimal, though - perhaps just treating the UTF-8 bytes 0x80-0xbf and 0xc2-0xf4 as alphabetic characters and the sequence 0xef, 0xbb, 0xbf (the byte order mark) as whitespace.
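That minimal tool support is small enough to sketch. This hypothetical helper classifies raw UTF-8 bytes exactly as described - ASCII letters plus the continuation bytes 0x80-0xbf and lead bytes 0xc2-0xf4 count as word characters, and a leading byte order mark is skipped - so non-ASCII identifiers come out as whole words without the tool understanding Unicode at all:

```python
# Byte-level word scanner: treat UTF-8 continuation/lead bytes as alphabetic.

BOM = b"\xef\xbb\xbf"   # UTF-8 byte order mark, treated as whitespace

def is_word_byte(b):
    return (0x41 <= b <= 0x5a or 0x61 <= b <= 0x7a or b == 0x5f  # A-Z, a-z, _
            or 0x80 <= b <= 0xbf    # UTF-8 continuation bytes
            or 0xc2 <= b <= 0xf4)   # UTF-8 lead bytes

def words(src: bytes):
    """Split raw source bytes into identifier-like words."""
    if src.startswith(BOM):
        src = src[len(BOM):]
    tokens, word = [], bytearray()
    for b in src:
        if is_word_byte(b):
            word.append(b)
        elif word:
            tokens.append(word.decode("utf-8"))
            word = bytearray()
    if word:
        tokens.append(word.decode("utf-8"))
    return tokens

print(words("größe = wert".encode("utf-8")))   # ['größe', 'wert']
```

A real tool would also need to handle digits and keywords, but the point stands: the scanner never has to decode code points to keep multi-byte identifiers intact.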

Economy in ubiquity

Thursday, August 7th, 2008

I think that at some point in the future we will develop nano-replication technology (a la Star Trek: The Next Generation) and there will be perfect open-source recipes for these machines for everything from roast beef to nano-replication machines.

Shortly after that will come the complete collapse of society, because we won't need anyone else for our basic necessities any more. We will just replicate power generators (solar, wind or both depending on climate) and devices for collecting rainwater and making it drinkable. Our waste will be recycled into the raw materials the replicators need (as disgusting as that sounds, I'm sure we'll learn to deal with it) and our internet connections will be made out of wi-fi meshes and long-range point-to-point radio connections.

How can you have any sort of economy with such an absence of scarcity? Well, presumably there will still be some things that are scarce and desirable (like land in nice locations). And presumably there will still be people doing particularly creative things that improve peoples' lives (making movies or inventing new recipes) - we'd just need some way to connect the two.

I'm not advocating for intellectual property as such here (that makes the accounting far more complicated than it really needs to be and introduces artificial barriers to creativity) - instead I'm imagining something more like Nielsen ratings (but for everything) to determine who gets the scarce wealth.

I guess we'll probably always have some form of money, and I'm sure the arguments about how it is accounted for will be as heated as ever. How do you decide on the relative value of the recipe your dinner was made from versus the movie you watched?

Animal testing

Wednesday, August 6th, 2008

Live animal testing seems to be a particularly contentious issue, given the number of people who protest against it and the lengths that they go to to attempt to prevent it.

The first question is, I suppose, whether we should be using live animals for testing in the first place. I think we should - a living animal is an extremely complicated thing, so we can't get the same information with a computer model. If it were possible to get the same results without using live animals, you can be sure that the scientists would, because live animals are messy and expensive (and dangerous if you factor in the increased risk of being victimized by animal rights terrorists). Testing on humans would be even more contentious. And not doing the testing at all would pretty much bring a stop to the advances in medicine and macrobiology that we have been making for centuries, and which account for much of our life expectancy and quality of life today.

The second question is whether animal testing is unnecessarily cruel - i.e. more cruelty is involved than is actually necessary to perform the experiment. Certainly there are documented examples of such cruelty. I suppose this is inevitable - as long as there are some people who get a kick out of hurting animals, and no matter how good the screening processes and safeguards are, some of them will be drawn to careers in animal testing, and when they do these things will happen. But I very much doubt that these isolated incidents are indicative of systemic failure of process - if they were then the animal rights people would not need to distort the evidence so much to make it look that way.

The number of animals killed for testing purposes is tiny compared to the number of animals killed for food, and I suspect that animals used in science are better looked after on average. Yet there don't seem to be quite so many protests about that. Perhaps this is because monkeys seem more human than cows and chickens. Or that farming, being more common, is seen as less mysterious and therefore less threatening. Or that extracting sustenance from animals is somehow nobler than extracting information from them.

Webifying old games

Tuesday, August 5th, 2008

One of these days I'd like to put some of my old DOS programs on the web to make them a bit more accessible. I used to think that I would do that in Java but these days Flash seems to be the most popular way to do interactive thingies on the web so maybe I should learn that. Or even just plain old HTML with Javascript - if someone can implement Lemmings in DHTML then I'm sure Cool Dude and probably some of the VGA demos could be implemented that way as well.

Digger adventure

Monday, August 4th, 2008

Lately I haven't really had any interest in improving Digger, but I still occasionally think it might be fun to write a sequel. "Digger 2" or "Digger's Adventure" would have a 2D continuously scrolling playfield, impenetrable walls, new monsters, bosses, new items to collect, more weapons, shields, warps, speed-boosters, and maybe the possibility to dig to the surface where it would be possible to jump.

Another 2D game

Sunday, August 3rd, 2008

Another 2D game I'd like to write (which might even be the same game as that one) is a bouncing ball game.

The player would be in control of a ball which could be accelerated by means of the direction keys. So holding down right would give a velocity increase in the positive x direction proportional to the length of time the key was held down (there have been a few games that work like this, but in my experience most non-simulation games give the player only one or two speeds, with instantaneous acceleration between them). Wind resistance would be implemented to avoid the possibility of the ball reaching unlimited speeds.
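The control scheme above amounts to one line of physics per axis: key thrust minus drag proportional to velocity, which gives a terminal velocity of thrust/drag. A minimal sketch, with made-up constants:

```python
# Ball control: thrust from held keys, drag proportional to velocity.
# Terminal velocity = THRUST / DRAG.

THRUST = 50.0   # acceleration while a key is held (units/s^2)
DRAG = 0.5      # drag coefficient (1/s)
DT = 0.01       # simulation timestep (s)

def step(vx, vy, ax, ay):
    """Advance velocity one timestep: key acceleration minus drag."""
    vx += (ax - DRAG * vx) * DT
    vy += (ay - DRAG * vy) * DT
    return vx, vy

# Hold "right" for 50 simulated seconds: speed approaches 50/0.5 = 100.
vx, vy = 0.0, 0.0
for _ in range(5000):
    vx, vy = step(vx, vy, THRUST, 0.0)
print(round(vx, 1))   # 100.0
```

Linear drag is the simplest choice; quadratic drag (proportional to speed squared) would feel more like real air resistance but still caps the speed the same way.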

Once the player has got used to these controls the next complication is maze traversal. While you could in principle just manoeuvre around the maze slowly, it is much more fun to play at speeds close to the ball's terminal velocity. At these speeds, the ball's turning radius will be much larger than the scale of features in the maze, so the ball will hit the walls. When it does so, the obvious thing to happen is that it should bounce off (unless the wall is dangerous or special in some way).
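The bounce itself is just a reflection of the velocity about the wall's unit normal, v' = v - 2(v·n)n, optionally scaled by a restitution factor so the ball loses a little energy per hit:

```python
# Reflect a velocity off a wall with unit normal (nx, ny).

def bounce(vx, vy, nx, ny, restitution=1.0):
    """Return the velocity after bouncing: v' = v - 2(v.n)n, scaled."""
    dot = vx * nx + vy * ny
    vx -= 2 * dot * nx
    vy -= 2 * dot * ny
    return vx * restitution, vy * restitution

# Ball moving right and down hits the floor (normal pointing straight up):
print(bounce(3.0, -4.0, 0.0, 1.0))   # (3.0, 4.0)
```

A restitution slightly below 1 keeps high-speed play from degenerating into endless pinballing between parallel walls.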

Another possible complication is areas of the maze with gravity fields. The strength of the gravity could be smaller than the control acceleration (in which case it just changes the controls a bit) or could be greater (in which case entire directions become impossible to travel in). Gravity fields can be constant or central (or may have even more complicated configurations).

Then there could be places in the maze where the drag acts relative to some constant wind velocity rather than relative to the ground (i.e. windy places).

Some parts of the maze could be flooded with various different kinds of fluid, giving the opportunity to show off fun 2D fluid dynamics simulations.

Then of course there are the usual game elements - things to collect, power ups, space warps, spikes, enemies with various degrees of artificial intelligence and so on.

Emulation for fun and profit

Saturday, August 2nd, 2008

There's much more that you can do with an emulator than just play old computer games. In fact, I think that the usefulness of emulators is seriously underrated. Here are some useful things I can think of doing with an emulator that has some appropriate extensibility hooks:

  • Debugging. A debugger with an integrated emulator might be able to do the following:
    • Debug a program without the program being able to tell that it is running in a debugger - handy for investigating malware like viruses and DRM.
    • Save (delta) states at each step to make it possible to undo steps or perform backwards-in-time debugging.
    • Debug multi-threaded programs deterministically by simulating multiple threads on a single thread and allowing the user to decide when to context switch.
  • Reverse engineering. The problem of finding the actual code in a binary is undecidable in general (it is equivalent to the halting problem), but if you can find most of the important code by actually running it you can get most of the way there.
  • Static analysis. Finding bugs in code after it's been compiled by running it and (as it runs) checking things that would be difficult to check at compile time (code invariants). For example, assertions might not be compiled into an optimized binary but could be provided as metadata that could be understood by the analyzer. This would be a great help for tracking down those tricky bugs that disappear when you switch to debug binaries.
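The delta-state idea from the debugging list is simple enough to sketch: at each step, record only the old values of the locations that step writes, and undoing a step is just restoring its delta. Here the "machine" is a toy dictionary of registers standing in for real emulator state:

```python
# Delta-state recording for backwards-in-time debugging (toy machine).

class ReversibleMachine:
    def __init__(self):
        self.state = {"pc": 0, "acc": 0}
        self.deltas = []   # one {name: old_value} dict per executed step

    def step(self, writes):
        """Execute one step: save old values of written cells, then apply."""
        self.deltas.append({k: self.state[k] for k in writes})
        self.state.update(writes)

    def undo(self):
        """Travel one step back in time by restoring the last delta."""
        self.state.update(self.deltas.pop())

m = ReversibleMachine()
m.step({"pc": 1, "acc": 42})
m.step({"pc": 2})
m.undo()
print(m.state)   # {'pc': 1, 'acc': 42}
```

For a real emulator the deltas would cover registers and touched memory pages, and periodic full snapshots would keep the undo chain from growing without bound.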