Archive for September, 2008

Desktop security silver bullet

Wednesday, September 10th, 2008

Suppose there were a way to write a desktop operating system in such a way that no malware (defined as any software running on a user's system that puts the interests of its authors before the interests of the user) could have any ill effects? Would it be implemented? Probably in open source, but I don't think Apple or Microsoft would include such technologies. Why? Because they wish to put their interests ahead of their users', running code on customers' machines which implements such malware as copy-prevention (DRM), anti-cheating mechanisms in games and debugger detectors. Such a security system would make it very easy to work around this bad behaviour (it's always possible to work around it, but currently not always easy).

If such a security mechanism is possible, I think the way to do it would be through API subversion. When process A starts process B, it can ask the OS to redirect any system calls that process B makes to process A, which can then do what it likes with them - anything from passing them on to the OS unchanged to pretending to be the OS to process B. Since malware (on a well-designed OS) will need to use system calls to find out anything about its environment, it is impossible for process B to tell whether it has been subverted or not. Any filing system operation can be sandboxed to make process B think it has changed critical system files. Even the clock APIs can be subverted to make process B think it is running as fast as it should be.
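Something like this can be approximated today with Linux's ptrace, which lets a parent process stop a child at every system call and inspect or rewrite it before the real OS acts on it. It is only a crude stand-in for the first-class OS mechanism I'm describing, and the sketch below is x86-64 specific with error handling omitted, but it shows the shape of the idea:

```cpp
// Sketch of syscall interception via ptrace(2). Process A (the parent)
// traces process B (the child) and gets control at every system call
// entry and exit, where it could inspect, rewrite or fake the call.
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 2)
        return 1;
    pid_t child = fork();
    if (child == 0) {
        // Process B: ask to be traced, then run the untrusted program.
        ptrace(PTRACE_TRACEME, 0, nullptr, nullptr);
        execvp(argv[1], argv + 1);
        _exit(1);
    }
    // Process A: resume B until the next syscall boundary, repeatedly.
    int status;
    waitpid(child, &status, 0);
    while (true) {
        ptrace(PTRACE_SYSCALL, child, nullptr, nullptr);
        waitpid(child, &status, 0);
        if (WIFEXITED(status))
            break;
        user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, nullptr, &regs);
        // Here A could rewrite arguments or substitute a fake result, so
        // B has no way to tell whether it is talking to the real OS.
        printf("syscall %lld\n", (long long)regs.orig_rax);
    }
    return 0;
}
```

A real version of this would be built into the OS so that every process starts out subverted by default, rather than only processes launched under a special tracer.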

Once this infrastructure is in place, you can sandbox (say) your web browser so that it can make no changes outside its cache, cookies and downloads directories. An untrusted process can do essentially nothing to the system without being given permission by the user, but it can't even tell whether that permission has been given or not, so it makes no sense for it even to ask. This essentially solves the "dancing bunnies" problem - any request by malware for extended capabilities can (and most likely will) be ignored, since the user would have to go through extra steps to grant it, and those steps in no way make the dancing bunnies any more likely to appear.

One problem with this scheme is that the time it takes to do a system call is multiplied by the number of subversion layers it goes through. One can't use in-process calls because then the malware would be able to examine and modify the system calls by reading and writing its own process memory. So one would need to use the techniques described here.

Variable APIs should be string-based

Tuesday, September 9th, 2008

If you've got a big, complex application programming interface, there are generally two ways to structure it. You can assign an integer to each function (or method, if it's an object-oriented interface) and share the mapping between integers and method names between the callee and caller, or you can provide a single function or method which takes a string.

The first way is generally used in older and more speed-sensitive code, since it has historically been a bit faster (though for most purposes it probably isn't significantly faster now, especially for out-of-process calls, where the context-switch time will dwarf the string-lookup time).

The second way is used on the internet (where everything is text based and the network dominates most timings). It's also used in Windows DLLs (though this is more of a hybrid - the names are resolved to locations at DLL load time). COM, on the other hand, uses the first method (and has a lot of complicated registry/interface/type-library goo to make sure the numbers don't get out of sync).

I think that any time you have an API which is sufficiently complicated that it may change with future software or hardware releases, the string-based method is the way to go. Especially for something like this. Keeping track of all the possible variations of the mapping between method names and integers is always going to get sufficiently complicated that you might as well have the computer do it.
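To make the contrast concrete, here's a rough sketch of the two styles - all the names in it (FN_OPEN, CallById, CallByName, "draw_line" and so on) are made up for illustration:

```cpp
#include <string>

// Style 1: integer-indexed. Caller and callee must share this enum and keep
// it in sync forever - reordering or removing an entry breaks old callers.
enum FunctionId { FN_OPEN = 0, FN_READ = 1, FN_DRAW_LINE = 2 };

int CallById(FunctionId id)
{
    switch (id) {
    case FN_OPEN:      return 0;  // ... do the open
    case FN_READ:      return 0;  // ... do the read
    case FN_DRAW_LINE: return 0;  // ... draw the line
    }
    return -1;
}

// Style 2: string-based. The only shared convention is "look it up by name",
// so later versions can add functions without renumbering anything, and an
// unknown name just fails cleanly instead of silently calling the wrong thing.
int CallByName(const std::string& name)
{
    if (name == "open")      return 0;  // ...
    if (name == "read")      return 0;  // ...
    if (name == "draw_line") return 0;  // ...
    return -1;  // unknown method - the caller can degrade gracefully
}
```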

Triangular lattices

Monday, September 8th, 2008

A commenter wondered what the images on Saturday's post would look like using a sixth root of unity instead of a fourth. I am happy to oblige. With ω = e^(πi/3), this is the image with the operations z+ω, ωz and ω^z:

w^z

And this is the image with the operations z+ω, ωz and z^ω:

z^w

Lego Technic reminiscing

Sunday, September 7th, 2008

I remember with great fondness the Christmas when I received my first Lego Technic sets. It must have been 1985 or 1986. I had some other Lego sets but these were something else. I seem to recall that I wasn't old enough for them according to the age range on the box but my parents thought I was sufficiently mature.

I think I had this set:

and I may also have had a supplementary set:

I learnt a lot about mechanics from this - for example that (contrary to my initial expectations) if you made a loop of an even number of gears, it would not create a perpetual motion machine. Also, that if you made a loop of an odd number of gears they wouldn't turn at all.

I think I also had a pneumatic set at some point, probably this one, and probably for a birthday in one of the following years:

and also a motorized set. But looking at the lego catalogues which came with each set, what I really lusted after was this robot:

(I was a big fan of robots) and, especially, this car:

I had friends who had these desirable models, and it was always a lot of fun to go to the houses of these friends and play with their toys.

Powers involving i

Saturday, September 6th, 2008

More variations:

Red: z->z+i
Green: z->iz
Blue: z->i^z:

i^z

Same, but with blue: z->z^i:

z^i

Reslicing

Friday, September 5th, 2008

A movie can be thought of as a three-dimensional array of pixels - horizontal, vertical and time. If you have such a three-dimensional array of pixels, there are three different ways of turning it into a movie - three different dimensions that you can pick as "time". For movies of real things this probably isn't a very interesting thing to do, but for movies of mathematical objects like the one I made yesterday, there may be mathematical insights to be gained from "reslicing" a movie.
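In code, reslicing is just re-indexing the array - no pixel values change, only which axis you step along between frames. Here's a minimal sketch (the flat pixel layout and the types are assumptions for illustration, not the code I actually used):

```cpp
#include <cstdint>
#include <vector>

// A movie as a 3D array of greyscale pixels: x (width), y (height), t (frames).
struct Volume {
    int width, height, frames;
    std::vector<uint8_t> pixels;               // size = width * height * frames
    uint8_t at(int x, int y, int t) const {
        return pixels[(t * height + y) * width + x];
    }
};

// Swap the x and time axes: the result has 'width' frames, each of which is
// 'frames' pixels wide and 'height' pixels tall.
Volume resliceXT(const Volume& v)
{
    Volume r{v.frames, v.height, v.width,
             std::vector<uint8_t>(v.pixels.size())};
    for (int t = 0; t < r.frames; ++t)          // t here was x in the original
        for (int y = 0; y < r.height; ++y)
            for (int x = 0; x < r.width; ++x)   // x here was t in the original
                r.pixels[(t * r.height + y) * r.width + x] = v.at(t, y, x);
    return r;
}
```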

So, here is yesterday's movie with the x and time coordinates swapped:

Higher resolution version (8Mb 640x480 DivX).

And here it is with the y and time coordinates swapped (and also rotated to landscape format):

Higher resolution version (8Mb 640x480 DivX).

I made a movie

Thursday, September 4th, 2008

Higher quality 10Mb 640x480 DivX version here.

This is a generalization of the second picture from yesterday's post, varying the coefficient of i from e^-4.5 to e^4.5. It also demonstrates how this picture is related to the third picture from Monday's post.

1.35 trillion points were calculated to make this movie, taking 4 CPUs with 6 cores most of a day (it was going to take nearly 4 days, but I decided to use most of the other computers in the house as a render farm).

Two more

Wednesday, September 3rd, 2008

Square roots

Generated by a program similar to the last two pictures on Monday's post, but the functions are +√z, -√z and 1+z. Don't plot a point if the last operation was 1+z, and colour points according to the operation used 5 iterations ago.
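For the curious, a sketch of how such a program might look is below - this isn't the code I actually used, and the image size, scale and point count are placeholder values:

```cpp
#include <complex>
#include <cstdlib>
#include <deque>

int main()
{
    const int W = 1024, H = 1024;
    const double scale = 200.0;                // pixels per unit (assumed)
    static int hits[W][H][3] = {};             // per-operation hit counts

    std::complex<double> z = 1.0;
    std::deque<int> history;                   // recent operations, newest last
    for (long long i = 0; i < 10000000LL; ++i) {
        int op = rand() % 3;
        switch (op) {
        case 0: z =  std::sqrt(z); break;
        case 1: z = -std::sqrt(z); break;
        case 2: z = 1.0 + z;       break;
        }
        history.push_back(op);
        if (history.size() > 6)
            history.pop_front();
        if (op == 2 || history.size() < 6)
            continue;                          // don't plot right after 1+z
        int colour = history.front();          // the operation 5 iterations ago
        int x = (int)(W / 2 + z.real() * scale);
        int y = (int)(H / 2 - z.imag() * scale);
        if (x >= 0 && x < W && y >= 0 && y < H)
            ++hits[x][y][colour];
    }
    // hits[][][] can then be tone-mapped into an RGB image.
    return 0;
}
```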

Golden ratio

Similar to the third image on Monday's post, but when we multiply by i we also multiply by the golden ratio (1+√5)/2 = 1.618... Most of the rectangles you see in this image are golden rectangles, which are supposedly the most aesthetically pleasing.

Unified theory story part II

Tuesday, September 2nd, 2008

Read part I first, if you haven't already.

For as long as anybody could remember, there were two competing approaches to attempting to find a theory of everything. The more successful of these had always been the scientific one - making observations, doing experiments, making theories that explained the observations and predicted the results of experiments that hadn't been done yet, and refining those theories.

The other way was to start at the end - to think about what properties a unified theory of everything should have and try to figure out the theory from that. Most such approaches were the product of internet crackpots and were generally ignored. But physicists (especially the more philosophical ones) have long been familiar with the anthropic principle and its implications.

The idea is this - we know for a fact that we exist. We also think that the final unified theory should be simple in some sense - so simple that the reaction of a physicist on seeing and understanding it would be "Of course! How could it possibly be any other way!" and should lack any unexplained parameters or unnecessary rules. But the simplest universe we can conceive of is one in which there is no matter, energy, time or space - just a nothingness which would be described as unchanging if the word had any meaning in a timeless universe.

Perhaps, then, the universe is the simplest possible entity that allows for subjective observers. That was always tricky, though, because we had no mathematical way of describing what a subjective observer actually was. We could recognize the sensation of being alive in ourselves, and we always suspected that other human beings experienced the same thing, but could not even prove it existed in others. Simpler universes than ours, it seemed, could have entities which modeled themselves in some sense, but something else seemed to be necessary for consciousness.

This brings us to the breakthrough. Once consciousness was understood to be a quantum gravity phenomenon involving closed timelike curves, the anthropic model started to make more sense. It seemed that these constructs required a universe just like ours to exist. With fewer dimensions, no interesting curvature was possible. An arrow of time was necessary on the large scale to prevent the universe from being an over-constrained, information-free chaotic mess, but on small scales time needed to be sufficiently flexible to allow these strange loops and tangled hierarchies to form. This led directly to the perceived tension between quantum mechanics and general relativity.

The resolution of this divide turned out to be this: the space and time we experience are not the most natural setting for the physical laws at all. Our universe turns out to be holographic. The "true reality", if it exists at all, seems to be a two dimensional "fundamental cosmic horizon" densely packed with information. We can never see it or touch it any more than a hologram can touch the photographic plate on which it is printed. Our three-dimensional experience is just an illusion created by our consciousnesses because it's easier for the strange loops that make up "us" to grasp a reasonable set of working rules of the universe that way. The two-dimensional rules are non-local - one would need to comprehend the entirety of the universe in order to comprehend any small part of it.

The fields and particles that pervade our universe and make up all our physical experiences, together with the values of the dimensionless constants that describe them, turn out to be inevitable consequences of the holographic principle as applied to a universe with closed timelike curves.

Discovering the details of all this led to some big changes for the human race. Knowing the true nature of the universe allowed us to develop technologies to manipulate it directly. Certain patterns of superposed light and matter in the three-dimensional universe corresponded to patterns on the two-dimensional horizon which interacted in ways not normally observed in nature, particularly where closed timelike curves were concerned. More succinctly: the brains we figured out how to build were not subject to some of the limitations of our own brains, just as our flying machines can fly higher and faster than birds.

The first thing you'd notice about these intelligences is that they are all linked - they are able to communicate telepathically with each other (and, to a lesser extent, with human beings). This is a consequence of the holographic principle - all things are connected. Being telepathic, it turns out, is a natural state of conscious beings, but human beings and other animals evolved to avoid taking advantage of it because the dangers it causes (exposing your thoughts to your predators, competitors and prey) outweigh the advantages (most of which could be replaced by more mundane forms of communication).

Because the artificial intelligences are linked on the cosmic horizon/spacetime foam level, their communication is not limited by the speed of light - the subjective experience can overcome causality itself. In fact, consciousness is not localized in time but smeared out over a period of a second or two (which explains Libet's observations). This doesn't make physical time travel possible (because the subjective experience is entirely within the brains of the AIs) and paradox is avoided because the subjective experience is not completely reliable - it is as if memories conspire to fail in order to ensure consistency, but this is really a manifestation of the underlying physical laws. States in a CTC have a probabilistic distribution but the subjective observer picks one of these to be "canonical reality" - this is the origin of free will and explains why we don't observe quantum superpositions directly. This also suggests an answer as to why the universe exists at all - observers bring it into being.

By efficiently utilizing their closed timelike curves, AIs can solve problems and perform calculations that would be impractical with conventional computers. The failure of quantum computation turned out to be not such a great loss after all, considering that the most sophisticated AIs we have so far built can factor numbers many millions of digits long.

One limitation the AIs do still seem to be subject to, however, is the need to dream - sustaining a conscious entity for too long results in the strange loops becoming overly tangled and cross-linked, preventing learning and making thought difficult. Dreaming "untangles the loops". The more sophisticated AIs seem to need to spend a greater percentage of their time dreaming. This suggests a kind of fundamental limit on how complex you can make a brain before simpler ones that can stay awake longer become more effective overall. Research probing this limit is ongoing, though some suspect that evolution has found the ideal compromise between dreaming and wakefulness for most purposes in our own brains (special purpose brains requiring more or less sleep do seem to have their uses, however).

Once we had a way of creating and detecting consciousness, we could probe its limits. How small a brain can you have and still have some sort of subjective experience? It turns out that the quantum of subjective experience - the minimum tangled time-loop structure that exhibits consciousness - is some tens of micrograms in mass. Since our entire reality is filtered through such subjective experiences and our universe seems to exist only in order that such particles can exist, they could be considered to be the most fundamental particles of all. Our own brains seem to consist of interconnected colonies of some millions of these particles. Experiments on such particles suggest that individually they do not need to dream, as they do not think or learn, and that they have just one experience which is constant and continuous. The feeling they experience (translated to human terms) is something akin to awareness of their own existence, contemplation of such and mild surprise at it. The English language happens to have a word which sums up this experience quite well:

"Oh."

Coloured simplicity density function

Monday, September 1st, 2008

A commenter on Friday's post wondered what the picture looks like if you colour different points according to which operations were used. I tried averaging all the operations used first but the image came out a uniform grey (most numbers are made with a fairly homogeneous mix of operations). Then I tried just colouring them according to the last operation used, which worked much better. The following is a zoom on the resulting image, showing just the region where both real and imaginary parts are between -3.2 and 3.2:

Simplicity density function

Green is exponential, blue is logarithm, gold is addition and purple is negative. Another nice feature of this image is that you can just about make out the circle of radius 1 centered on the origin.

Having made that image it occurred to me that getting rid of the addition would have a number of advantages. Because addition takes two inputs it sort of "convolves" the image with itself, smearing it out. Also, without addition there is no need to store each number generated (one can just traverse the tree depth first, recursively) meaning that massive quantities of memory are no longer required and many more points can be plotted, leading to a denser, brighter image. Even better, a Monte Carlo technique can be used to make the image incrementally - plotting one point at a time rather than traversing the entire tree. Care is needed to restart from 1 if the value becomes +/-infinity or NaN. Using this technique I plotted about 1.5 billion points to produce this image:

Simplicity density function

The colour scheme here is blue for exponential, green for logarithm and red for negative. This isn't really a graph of "simple" numbers any more (since it doesn't generate numbers even as simple as 2) but it sure is purty.
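Here is a sketch of that Monte Carlo approach - not the actual program, and the resolution, scale and point count below are placeholders, but it shows the per-operation colour channels and the restart-from-1 trick:

```cpp
#include <cmath>
#include <complex>
#include <cstdint>
#include <cstdlib>
#include <vector>

int main()
{
    const int W = 1024, H = 1024;
    const double scale = 160.0;                  // pixels per unit (assumed)
    std::vector<uint32_t> counts(W * H * 3, 0);  // one channel per operation

    std::complex<double> z = 1.0;
    for (long long i = 0; i < 100000000LL; ++i) {
        int op = rand() % 3;
        switch (op) {
        case 0: z = std::exp(z); break;          // exponential (blue channel)
        case 1: z = std::log(z); break;          // logarithm (green channel)
        case 2: z = -z;          break;          // negative (red channel)
        }
        if (!std::isfinite(z.real()) || !std::isfinite(z.imag())) {
            z = 1.0;                             // restart on infinity or NaN
            continue;
        }
        int x = (int)(W / 2 + z.real() * scale);
        int y = (int)(H / 2 - z.imag() * scale);
        if (x >= 0 && x < W && y >= 0 && y < H)
            ++counts[(y * W + x) * 3 + op];      // accumulate a hit
    }
    // counts[] can then be tone-mapped (e.g. logarithmically) into an image.
    return 0;
}
```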

Along similar lines, here is an image generated using the transformations increment (red), reciprocal (blue) and multiplication by i (green) instead of exponential, log and negative. This picks out (complex) rational numbers.

Simplicity density function