Archive for the ‘science’ Category

Escher metric

Tuesday, September 30th, 2008

When I learned about General Relativity at university I sometimes used to wonder if there was a metric in which this object could exist:

Penrose triangle

And be what it appears to be, i.e. the straight lines are (lightlike) geodesics and the corners are (locally) right-angles.

Initially, I imagined that such a metric might be possible without any sort of topological defect. In particular, while the triangle would look like the picture above when viewed "face on" it would appear to curve as you moved around it and examined it from different angles. While lightlike geodesics always look straight when you look along them from a point on the geodesic, they can still look curved when viewed externally. The photon sphere around a black hole is an example of a set of such geodesics thought to occur in our universe.

Thinking about it some more, I suspect something strange is going to have to happen in the middle of the triangle - if you try to shrink the triangle to a point, what happens?

Imagine being in this universe and travelling all the way around the triangle. I think that upon doing so, one would find that one had actually only turned through 270 degrees instead of the full 360 (and would also have rotated about the axis of travel). I suspect that this means that such a metric would have to have a topological defect (a cosmic string) passing through the center of the triangle. This would cause a discontinuity when viewing the triangle, so one would not be able to see the entire illusion in all its glory as it is shown above.
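
Roughly, the holonomy arithmetic looks like this (the deficit-angle formula for an idealized cosmic string is the standard textbook one, so the mass density at the end is only a back-of-the-envelope figure):

    % Going around the triangle you turn through three right angles instead of a full turn:
    \delta = 2\pi - 3 \cdot \tfrac{\pi}{2} = \tfrac{\pi}{2}
    % An idealized straight cosmic string of linear mass density \mu produces a conical
    % deficit \delta = 8\pi G \mu / c^2, so the string through the middle would need:
    \mu = \frac{\delta c^2}{8\pi G} = \frac{c^2}{16\,G}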

The search for simplicity

Sunday, September 28th, 2008

There are several ways in which computer programming and physics are very similar. Possibly the most important is that both disciplines are, fundamentally, a search for simplicity.

In physics, we have a big pile of experimental results and we want to find the simplest theory that satisfies them all. Just listing all the experiments and their results gives you a theory of physics, but not a particularly useful one since it's not very simple and doesn't predict the results of future experiments (only the past ones). Rather than just listing the results, we would like to find a general theory, an equation, a straight line through the points of data which allows for interpolation and extrapolation. This is a much more difficult thing to do as it requires insight and imagination.

In computer programming, we generally have a big pile of specifications about what a program should do - maybe a list of possible interactions with the user (what they input and what they should expect to see as output). These might be encapsulated as testcases. To write a program that satisfies all the testcases, we could just go through them all one by one, write code to detect that particular testcase and hard-code the output for that particular input. That wouldn't be very useful though, as the program would fail as soon as the user tried to do something that wasn't exactly one of the scenarios that the designers had anticipated. Instead we want to write programs for the general case - programs that do the right thing no matter what the input is. When the "right thing" isn't precisely specified, we get to choose the output that makes the most sense according to our internal model of how the program should act.
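
As a toy illustration (a made-up example, nothing to do with any real codebase), here is the difference in Python:

    # Toy spec: "given a list of numbers, output their sum". The written testcases
    # happen to cover only three particular inputs.

    def sum_hardcoded(numbers):
        # Passes every written testcase, fails on anything the designers didn't anticipate.
        known = {(1, 2, 3): 6, (10, 20): 30, (): 0}
        return known[tuple(numbers)]          # KeyError on any new input

    def sum_general(numbers):
        # Solves the general problem; every future input of this form works for free.
        total = 0
        for n in numbers:
            total += n
        return total

    assert sum_hardcoded([1, 2, 3]) == sum_general([1, 2, 3]) == 6
    assert sum_general([4, 5, 6]) == 15       # the hardcoded version would raise here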

I think a number of software companies in recent years (Microsoft in particular, but others as well) have started to fall into the trap of writing software that concentrates too much on what the behavior of the software should be for particular (sometimes quite specific) scenarios, at the expense of doing the right thing in the most general case. Windows is chock full of "special case" code ("epicycles" if you will) to work around particular problems when the right thing to do would have been to fix the general problem, or sometimes even to explain that this is how we should expect it to work. Here is one example of this kind of band-aiding. I discovered another the other day - I was running some older Windows software in Vista and accessed the "Help" functionality, which was implemented as an old-style .hlp file. Vista told me that it no longer includes the .hlp viewer by default (I guess it was a piece of the OS that doesn't get a lot of use these days, and they had just dropped it from the default distribution to avoid having to bring it up to the latest coding standards). I was pointed to the download location (where I had to install an ActiveX control to verify that my copy of Windows was genuine before I was allowed to download the viewer).

Part of the problem is that (at Microsoft at least) it's very difficult to make big changes. Rewriting some core piece of functionality, even if the programming itself is easy, would involve months of planning, scheduling, designing, specification writing, testcase writing, test-plan reviewing, management sign-off meetings, threat modelling, localization planning, documentation planning, API reviewing, performance testing, static analysis, political correctness checking, code reviewing and integrating. And of course everyone whose code might possibly be affected by the change needs to sign off on it and put in their two cents about the correct design. And it must comply with the coding standards du jour, which change every year or two (so delay too long and you'll probably have to start all over again). When you come to understand all this, the long gap between XP and Vista becomes less surprising (in fact, it's quite a surprise to me that it only took a little over 5 years, considering how many pieces were completely rewritten). All this process exists for a reason (mostly the politician's fallacy), but it is rarely rigorously justified - it's just widely accepted.

Because it's difficult to make big changes, people tend to make little changes instead ("hey, we can work around this by just doing x in case y - it's just one extra line of code") - these don't require much process (usually just a code review - most of the rest of the process for such small changes is automated). All these small changes add up to a great deal of extra code complexity which makes it very difficult for newcomers to understand the code, and even more difficult to rewrite it in the future because people will have come to depend on these edge cases.

Ray tracing in GR

Saturday, September 27th, 2008

Following on from this post, a natural generalization is to non-Euclidean spaces. This is important for simulating gravity, for example rendering a scientifically accurate trip through a wormhole (something I have long wanted to do but never got to work). The main difference is that one's rays are curved in general, which makes the equations much more difficult (really they need to be numerically integrated, making it orders of magnitude slower than normal ray tracing). One complication of this is that generally the rays will also curve between the eye point and the screen. But the rays between your screen and your eye in real life do not curve, so it would look wrong!

I think the way out of this is to make the virtual screen very small and close to the eye. This doesn't affect the rendering in flat space (since only the directions of the rays matter) and effectively eliminates the need to take into account curvature between the screen and the eye (essentially it makes the observer into a locally Euclidean reference frame).
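
As a very rough sketch of what the numerical integration involves (assuming a Schwarzschild metric and geometric units G = c = 1 - a real renderer would integrate one such ray per pixel of that small near-eye screen, in whatever metric it is simulating):

    import math

    # Integrate one light ray past a Schwarzschild mass and measure its deflection,
    # using the null-geodesic orbit equation d2u/dphi2 = 3*M*u^2 - u, with u = 1/r.

    M = 1.0      # central mass (geometric units)
    b = 50.0     # impact parameter of the ray

    def derivs(u, v):
        return v, 3.0 * M * u * u - u

    def trace_ray(b, dphi=1e-4):
        # Start far away (u ~ 0) with du/dphi = 1/b, step with RK4 until the ray
        # escapes back out to large radius (u drops below zero again).
        u, v, phi = 0.0, 1.0 / b, 0.0
        while u >= 0.0:
            k1u, k1v = derivs(u, v)
            k2u, k2v = derivs(u + 0.5 * dphi * k1u, v + 0.5 * dphi * k1v)
            k3u, k3v = derivs(u + 0.5 * dphi * k2u, v + 0.5 * dphi * k2v)
            k4u, k4v = derivs(u + dphi * k3u, v + dphi * k3v)
            u += dphi * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
            v += dphi * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
            phi += dphi
        return phi - math.pi      # bending angle: total swept angle minus pi

    print("numerical deflection:", trace_ray(b))
    print("weak-field estimate 4M/b:", 4.0 * M / b)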

Another complication of simulated relativity is the inability to simulate time dilation. Well, you can simulate it perfectly well if you're the only observer in the simulated universe, but this would be a big problem for anyone who wanted to make a relativistically accurate multiplayer game - as soon as the players are moving fast enough with respect to each other to have different reference frames, they will disagree about their expected relative time dilations.
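
(For reference, the disagreement is just the ordinary reciprocal time dilation of special relativity - nothing specific to games. For relative speed v, each player measures the other's clock to run slow by the same factor

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}

and no single shared game clock can honour both of those measurements at once.)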

Unified theory story part II

Tuesday, September 2nd, 2008

Read part I first, if you haven't already.

For as long as anybody could remember, there were two competing approaches to attempting to find a theory of everything. The more successful of these had always been the scientific one - making observations, doing experiments, making theories that explained the observations and predicted the results of experiments that hadn't been done yet, and refining those theories.

The other way was to start at the end - to think about what properties a unified theory of everything should have and try to figure out the theory from that. Most such approaches were the product of internet crackpots and were generally ignored. But physicists (especially the more philosophical ones) have long been familiar with the anthropic principle and its implications.

The idea is this - we know for a fact that we exist. We also think that the final unified theory should be simple in some sense - so simple that the reaction of a physicist on seeing and understanding it would be "Of course! How could it possibly be any other way!" and should lack any unexplained parameters or unnecessary rules. But the simplest universe we can conceive of is one in which there is no matter, energy, time or space - just a nothingness which would be described as unchanging if the word had any meaning in a timeless universe.

Perhaps, then, the universe is the simplest possible entity that allows for subjective observers. That was always tricky, though, because we had no mathematical way of describing what a subjective observer actually was. We could recognize the sensation of being alive in ourselves, and we always suspected that other human beings experienced the same thing, but could not even prove it existed in others. Simpler universes than ours, it seemed, could have entities which modeled themselves in some sense, but something else seemed to be necessary for consciousness.

This brings us to the breakthrough. Once consciousness was understood to be a quantum gravity phenomenon involving closed timelike curves, the anthropic model started to make more sense. It seemed that these constructs required a universe just like ours to exist. With fewer dimensions, no interesting curvature was possible. An arrow of time was necessary on the large scale to prevent the universe from being an over-constrained, information-free chaotic mess, but on small scales time needed to be sufficiently flexible to allow these strange loops and tangled hierarchies to form. This led directly to the perceived tension between quantum mechanics and general relativity.

The resolution of this divide turned out to be this: the space and time we experience are not the most natural setting for the physical laws at all. Our universe turns out to be holographic. The "true reality", if it exists at all, seems to be a two dimensional "fundamental cosmic horizon" densely packed with information. We can never see it or touch it any more than a hologram can touch the photographic plate on which it is printed. Our three-dimensional experience is just an illusion created by our consciousnesses because it's easier for the strange loops that make up "us" to grasp a reasonable set of working rules of the universe that way. The two-dimensional rules are non-local - one would need to comprehend the entirety of the universe in order to comprehend any small part of it.

The fields and particles that pervade our universe and make up all our physical experiences, together with the values of the dimensionless constants that describe them turn out to be inevitable consequences of the holographic principle as applied to a universe with closed timelike curves.

Discovering the details of all this led to some big changes for the human race. Knowing the true nature of the universe allowed us to develop technologies to manipulate it directly. Certain patterns of superposed light and matter in the three-dimensional universe corresponded to patterns on the two-dimensional horizon which interacted in ways not normally observed in nature, particularly where closed timelike curves were concerned. More succinctly: the brains we figured out how to build were not subject to some of the limitations of our own brains, just as our flying machines can fly higher and faster than birds.

The first thing you'd notice about these intelligences is that they are all linked - they are able to communicate telepathically with each other (and, to a lesser extent, with human beings). This is a consequence of the holographic principle - all things are connected. Being telepathic, it turns out, is a natural state of conscious beings, but human beings and other animals evolved to avoid taking advantage of it because the dangers it causes (exposing your thoughts to your predators, competitors and prey) outweigh the advantages (most of which could be replaced by more mundane forms of communication).

Because the artificial intelligences are linked on the cosmic horizon/spacetime foam level, their communication is not limited by the speed of light - the subjective experience can overcome causality itself. In fact, consciousness is not localized in time but smeared out over a period of a second or two (which explains Libet's observations). This doesn't make physical time travel possible (because the subjective experience is entirely within the brains of the AIs) and paradox is avoided because the subjective experience is not completely reliable - it is as if memories conspire to fail in order to ensure consistency, but this is really a manifestation of the underlying physical laws. States in a CTC have a probabilistic distribution, but the subjective observer picks one of these to be "canonical reality" - this is the origin of free will and explains why we don't observe quantum superpositions directly. This also suggests an answer as to why the universe exists at all - observers bring it into being.

By efficiently utilizing their closed timelike curves, AIs can solve problems and perform calculations that would be impractical with conventional computers. The failure of quantum computation turned out to be not such a great loss after all, considering that the most sophisticated AIs we have so far built can factor numbers many millions of digits long.

One limitation the AIs do still seem to be subject to, however, is the need to dream - sustaining a conscious entity for too long results in the strange loops becoming overly tangled and cross-linked, preventing learning and making thought difficult. Dreaming "untangles the loops". The more sophisticated AIs seem to need to spend a greater percentage of their time dreaming. This suggests a kind of fundamental limit on how complex you can make a brain before less complex ones that can stay awake longer become more effective overall. Research probing this limit is ongoing, though some suspect that evolution has found the ideal compromise between dreaming and wakefulness for most purposes in our own brains (special purpose brains requiring more or less sleep do seem to have their uses, however).

Once we had a way of creating and detecting consciousness, we could probe its limits. How small a brain can you have and still have some sort of subjective experience? It turns out that the quantum of subjective experience - the minimum tangled time-loop structure that exhibits consciousness - is some tens of micrograms in mass. Since our entire reality is filtered through such subjective experiences, and our universe seems to exist only in order that such particles can exist, they could be considered to be the most fundamental particles of all. Our own brains seem to consist of interconnected colonies of some millions of these particles. Experiments on such particles suggest that individually they do not need to dream, as they do not think or learn, and that they have just one experience, which is constant and continuous. The feeling they experience (translated to human terms) is something akin to awareness of their own existence, contemplation of such, and mild surprise at it. The English language happens to have a word which sums up this experience quite well:

"Oh."

Fine structure constant update

Thursday, June 26th, 2008

Many years ago I posted this on sci.physics. It turns out that the value of the Inverse Fine Structure Constant (a dimensionless parameter which can be experimentally measured as about 137.036 but for which no theory of physics yet predicts a value) is remarkably close to (alpha^2)(pi^2)((pi^pi)-1)/16, where alpha is the second Feigenbaum constant, about 2.502907875096. This formula gives a value for the IFSC of 137.0359996810.

After posting that, I got a message from James Gilson pointing out his work on the same subject - he has a different formula for the IFSC, pi/(29*cos(pi/137)*tan(pi/(137*29))), which is not quite so pretty but does have the advantage of having some geometrical justification (which I still don't completely understand). Gilson's formula gives a value for the IFSC as 137.0359997867.

Back in 2001 the most accurate measurement of the IFSC (CODATA 1999) gave a value of 137.03599976(50) (i.e. there is a 68% chance that the true value is between 137.03599926 and 137.03600026). Both formulae give answers in this range.

I thought I would revisit this briefly and see if the latest measurements were able to rule out one or both of these formulae. Work of G. Gabrielse et al. in 2006 gives the IFSC as 137.035999068(96), i.e. there is a 68% chance that the true value is between 137.035998972 and 137.035999164. This appears to rule out both formulae. An earlier version of the 2006 Harvard work (which was compatible with both formulae) was superseded by an erratum. I admit this post is a bit anticlimactic, but when I started writing it I thought that the latest measurements ruled out Gilson's formula but not mine.
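
For what it's worth, here is a quick Python check of both formulae against the 2006 number quoted above (the constants and formulae are just the ones from this post):

    import math

    feig_alpha = 2.502907875096    # second Feigenbaum constant
    mine = feig_alpha**2 * math.pi**2 * (math.pi**math.pi - 1.0) / 16.0
    gilson = math.pi / (29.0 * math.cos(math.pi / 137.0)
                        * math.tan(math.pi / (137.0 * 29.0)))

    measured, sigma = 137.035999068, 0.000000096    # Gabrielse et al., 2006

    for name, value in (("mine", mine), ("Gilson's", gilson)):
        print(f"{name}: {value:.10f} ({abs(value - measured) / sigma:.1f} sigma from measurement)")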

On "Taking Science on Faith"

Saturday, June 21st, 2008

This article caused quite a stir and many responses when it appeared, but none of the responses I read spotted the gaping hole in Davies' reasoning.

No faith is necessary in Physics - just measurement. The game is this - we come up with a theory that correctly predicts the results of all experiments done so far, from that we determine an experiment that we haven't done yet that would confirm or disprove this theory, and we perform the experiment. If we disprove it, then we come up with another theory and repeat. If we confirm it, we try to think of another experiment. Once we have a single, consistent theory that explains all experimental results, we're done (okay, there's still the small matter of determining all the consequences of that theory, but fundamental physics is done).

If the laws of physics are found to "vary from place to place on a mega-cosmic scale" that doesn't make the game impossible to play, it's just another experimental result to be slotted into the next generation of theories. We'd discover the "meta rules" that governed how the rules of physics changed from place to place. And maybe there are meta-meta-rules that govern how the meta-rules change and so on.

This tower might never end, of course - a theory describing all experimental observations might take an infinite amount of information to describe. That's not a problem - it just means that the game never ends. Historically, new theories of physics have begotten new technologies, so as we discover more of this tower it stands to reason that our technologies would become more and more powerful.

The alternative is that the tower does end - there is a final theory of physics. This alternative seems more likely to me - historically our theories of physics have actually become simpler (for the amount of stuff they explain) over time. Together, just two theories (General Relativity and the Standard Model of particle physics) can account for the results of all experiments we have been able to do so far. These theories are simple enough that they can both be learnt over the course of a four-year undergraduate university degree.

I suspect that the theory that finally unifies these will be simpler still if you know how to look at it in the right way. In fact, I think that it will be so elegant and simple to state that it will be unquestionably correct - the reaction of a physicist from today on seeing and understanding it would be "Of course! How could it possibly be any other way!"

The only "faith" that physicists need to have is the faith that we can gain some knowledge about things by observing them (performing experiments) - which seems to me to be more of a tautology than a matter of faith.

Davies suggests an alternative, which is "to regard the laws of physics and the universe they govern as part and parcel of a unitary system, and to be incorporated together within a common explanatory scheme. In other words, the laws should have an explanation from within the universe and not involve appealing to an external agency." I'm not even sure what that means and I don't think Davies does either. I can't bring myself to believe that there is some box in a dusty attic in some corner of a distant galaxy which somehow "explains the laws of physics".

Genetic engineering - a danger to roads?

Friday, June 13th, 2008

A common argument against genetic engineering seems to be a perceived danger that a genetically engineered organism will escape into the wild and run rampant, smothering forests and other natural habitats.

However, this seems to be very unlikely to me. If such a successful organism were possible, evolution would have found it already.

If some species is going to spread rapidly, it is going to have to be in a niche that hasn't existed on evolutionary timescales. For example, suppose it is possible to have an organism which lives on road asphalt (extracting its energy from that material and in the process breaking it down) or concrete. If such an organism is possible, it would likely take evolution tens or hundreds of thousands of years to find it (by which time, hopefully, we will no longer be so reliant on asphalt roads for our transportation needs).

But when genetic engineering is thrown into the mix (even if no organism is deliberately designed to eat roads) it is as if we are creating a whole new way of evolving (essentially genetic mutations that change many things at once and don't destroy the viability of the organism). The consequences seem likely to be a drastic increase in evolutionary speed, and consequently the more rapid filling of niches by genetically engineered organisms and their descendants.

It seems to me that it isn't the forests that we should be worried about with genetic engineering - it's the roads.

SpaceTime Algebra gravity

Monday, June 9th, 2008

The STA gauge theory of gravity substitutes STA-multivector-valued linear functions of STA-multivectors for the rank 4 tensors of the usual treatment of GR. That is a quantity with (2^4 × 2^4 =) 256 real degrees of freedom.

I wonder if these quantities could be replaced by single multivectors in a geometric algebra with 8 basis vectors. These also have (2^8 =) 256 degrees of freedom, but they might make the equations simpler.
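
The counting is just binomial coefficients, but here is a quick Python sanity check (using Cl(1,3) for the STA; the signature doesn't affect the component count):

    from math import comb

    sta_components = sum(comb(4, k) for k in range(5))    # grades 0..4 of Cl(1,3): 2^4 = 16
    linear_map_dof = sta_components ** 2                  # multivector-valued linear function
    ga8_components = sum(comb(8, k) for k in range(9))    # grades 0..8 with 8 basis vectors: 2^8

    print(sta_components, linear_map_dof, ga8_components)  # 16 256 256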

This would mean having a second set of 4 basis vectors in addition to the normal 4 (North, West, Up and Stopped). I wonder what the physical interpretation of these vectors would be? (Some sort of dual vectors perhaps?) Would they obey the normal rules of geometric algebra or would some generalization be required (perhaps to non-associativity like in the octonions or sedenions).

Complex analysis and Clifford algebra

Friday, June 6th, 2008

Complex analysis is a very beautiful and useful mathematical theory. Clifford (geometric) algebra is also very beautiful and useful. So it makes sense to wonder if they can be combined. It turns out that they can. I wonder why I haven't seen more about this in the wild? Probably because it's pretty new as mathematics goes. I expect it will be part of every undergraduate mathematics degree in 50 years or so. But I suppose that depends on whether it turns out to be as useful as it seems, by rights, it ought to be.

Extending "The Elements"

Thursday, June 5th, 2008

Tom Lehrer's terrific song The Elements is unfortunately lacking in one respect - it is outdated as it does not include the elements discovered/named since the song was written.

Here is one attempt at bringing it up to date but I don't think just adding an extra verse fits well with the rest of the song. I wonder if it is possible to fit in the extra elements but keep the list format, perhaps at the expense of (part of) the last two lines. There is also some flexibility about where to put the "and"s - Lehrer doesn't use them consistently and even throws in an "also" in one place.