Archive for the ‘physics’ Category

Black holes are weird

Friday, September 30th, 2011

Assuming they exist (which seems likely according to our current best theories), black holes are very strange things indeed. In particular, the fate of something falling through the black hole horizon seems to be different depending on whether you cross the horizon yourself or watch from outside.

In the frame of reference of somebody passing through the horizon, the horizon doesn't seem very special at all - you could cross the horizon of a sufficiently large black hole without even noticing. After passing the horizon, however, escape is impossible and you would inevitably end up spaghettified by the massive tidal forces near the central singularity.

In the frame of reference of somebody outside the black hole, the picture is very different. From that perspective, the gravitational time dilation at the horizon means that the progress of the falling object becomes slower and slower, grinding to a complete stop at the horizon itself. The red-shifting of any photons emitted from close to the horizon also means that the object looks redder and redder, its light passing through infra-red, terahertz radiation, microwaves, and radio waves of ever-increasing wavelength until the photons are too low in energy to detect - but they never quite stop arriving altogether.
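
As a rough illustration of how quickly that red-shifting runs away (a minimal sketch of my own, not part of the original argument): for light emitted by a source hovering at radius r outside a Schwarzschild black hole with horizon radius rs, the received wavelength is stretched by a factor of 1/√(1 − rs/r), which diverges as the source approaches the horizon.

```python
import math

def redshift_factor(r_over_rs):
    """Wavelength stretch factor for light emitted by a source hovering at
    radius r = r_over_rs * rs (Schwarzschild radius rs) and received far
    from the black hole: 1 / sqrt(1 - rs/r)."""
    return 1.0 / math.sqrt(1.0 - 1.0 / r_over_rs)

# Green light (~500 nm) emitted from closer and closer to the horizon:
for r_over_rs in (2.0, 1.1, 1.01, 1.0001, 1.000001):
    observed_nm = 500 * redshift_factor(r_over_rs)
    print(f"emitted at r = {r_over_rs} rs -> observed wavelength ~ {observed_nm:.3g} nm")
```

Green light emitted at twice the horizon radius arrives merely reddened; emitted at a ten-thousandth of a horizon radius above the horizon it already arrives as far infra-red.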

In the meantime, the black hole evaporates by the process of Hawking radiation. This takes an unimaginably long time - as long as 10^100 years for the largest black holes - but if you wait there long enough it will happen. Supposing that you could somehow detect the infalling observer during the entire period of the evaporation, you'd see them crossing the horizon at the exact moment that the black hole disappeared altogether in a bright flash of energy (the smaller the black hole, the more it radiates). But of course, at that moment the infalling observer has zero mass-energy (as does the black hole as a whole), so how can it be the same observer in its own frame of reference (where its mass is the same as it was before the experiment began)?
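
For a sense of the timescales, here's a back-of-the-envelope sketch (my own, not from the original post) using the standard estimate for the Hawking evaporation time, t ≈ 5120 π G² M³ / (ħ c⁴):

```python
import math

G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
HBAR  = 1.055e-34   # reduced Planck constant, J s
C     = 2.998e8     # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
YEAR  = 3.156e7     # seconds per year

def evaporation_time_years(mass_kg):
    """Standard estimate t = 5120 * pi * G^2 * M^3 / (hbar * c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

print(f"1 solar mass:       {evaporation_time_years(M_SUN):.1e} years")
print(f"10^11 solar masses: {evaporation_time_years(1e11 * M_SUN):.1e} years")
```

A solar-mass black hole takes of order 10^67 years to evaporate; the very largest (around 10^11 solar masses) take of order 10^100 years.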

Clearly our current theories of physics are insufficient here - we don't yet have a consistent theory of quantum gravity so we just don't know what happens near a tiny black hole when the spacetime curvature is high enough to be in the quantum regime.

One way out of the apparent paradox is simply that the two observers can never compare notes because whatever information the infalling observer collects is inevitably destroyed at the singularity, and no information from inside the black hole can ever be transmitted to the outside universe. The interior of the black hole is causally disconnected from the outside - it can be considered to exist infinitely far in the future (long after the black hole evaporates) or can even be considered to be an entirely different universe in its own right. If the two observers can never get back together to compare notes (and find that they disagree) there isn't really any disagreement. It's a bit like the Many Worlds Interpretation of quantum mechanics - observers in different parallel universes can't subsequently interact, so they can't disagree about the observable outcome of some experiment.

But in both of these cases there is a philosophical problem - if the universe is qualitatively different for different observers then it seems to make a mockery of the very idea of an objective reality. Just as the IO Monad theory of subjective experience has no problems with observers disagreeing on matters that make no difference in an objective sense, it seems like general relativity may require that we have no problems with disagreements between causally disconnected objective realities.

The mathematical universe

Wednesday, September 7th, 2011

Max Tegmark's Mathematical Universe Hypothesis is extremely compelling - it's the only idea I've ever seen that comes close to the holy grail of theories of everything: "Of course! How could it possibly be any other way!". And yet it seems unsatisfying in a way, perhaps because its conclusion is (when you think about it for a bit) completely obvious. Of course the universe is a mathematical structure containing self-aware substructures - what else could it be?

Also, if it is true then it leaves a lot of questions unanswered:

  • What is the mathematical structure that describes our universe?
  • What is the mathematical definition of a self-aware substructure (SAS)?
  • If any mathematical (or even computable, or finite) structure that contains SASs exists in the same way for those SASs as our universe exists for us, why is our universe as simple and explicable as it is? (Tegmark calls this the "measure problem".)

My feeling is (as I've said on this blog before) that it's extremely likely that our universe is in some sense the simplest possible universe that supports SASs (i.e. the "measure" punishes complexity to the maximum possible extent). I have no a priori justification for this, though - it just seems to me to be the most likely explanation for the third point above. While it may seem unnecessary to have three generations of leptons and quarks, I strongly believe that when we have a more complete theory of physics we'll discover that they are completely indispensable - a universe like ours but with only a single generation of these particles would either be more complex (mathematically speaking) or make it impossible for SASs to exist. I suppose it is possible, however, that when we do find the theory of everything (and/or a theory of SASs) we'll be able to think of a simpler structure in which SASs are possible.

The other thing about MUH is that I'm not convinced that it really does make any predictions at all, because it seems like whatever we discover about our universe with respect to the other possible universes in the Level IV multiverse can be made consistent with MUH by an appropriate choice of measure.

The slow intelligence explosion

Monday, August 29th, 2011

Each new technology that we invent has improved our ability to create the next generation of technologies. Assuming the relationship is a simple proportional one, our progress can be modelled as \displaystyle \frac{dI}{dt} = \frac{I}{\tau} for some measure of "intelligence" or computational power I. This differential equation has the solution \displaystyle I = I_0e^\frac{t}{\tau} - exponential growth, which matches closely what we see with Moore's law.

The concept of a technological singularity is a fascinating one. The idea is that eventually we will create a computer with a level of intelligence greater than that of a human being, which will quickly invent an even cleverer computer, and so on. Suppose an AI of cleverness I can implement an AI of cleverness kI in time \displaystyle \frac{1}{I}. Then the equation of progress becomes \displaystyle \frac{dI}{dt} = I^2(k-1), which has the solution \displaystyle I = \frac{1}{(k-1)(t_s-t)} for some constant t_s. But that means that at time t = t_s we get infinite computational power and infinite progress, at which point all predictions break down - it's impossible to predict anything about what will happen post-singularity from any pre-singularity time.
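
A quick numerical sanity check of the two models (a minimal sketch in arbitrary units, using naive Euler integration - my own illustration): the proportional model just keeps growing exponentially, while the self-improvement model blows up at a finite time.

```python
def integrate(rate, i0=1.0, dt=1e-4, t_max=2.0, i_cap=1e9):
    """Forward-Euler integration of dI/dt = rate(I); stop at t_max or
    when I exceeds i_cap (a stand-in for 'infinite' progress)."""
    t, i = 0.0, i0
    while t < t_max and i < i_cap:
        i += rate(i) * dt
        t += dt
    return t, i

tau, k = 1.0, 2.0

# dI/dt = I / tau: exponential growth, never diverges in finite time.
t_exp, i_exp = integrate(lambda i: i / tau)

# dI/dt = (k - 1) * I^2: hyperbolic growth; the exact solution diverges at
# t_s = 1 / ((k - 1) * I0) = 1.0 with these parameters.
t_sing, i_sing = integrate(lambda i: (k - 1) * i**2)

print(f"proportional model:   I({t_exp:.2f}) = {i_exp:.3g}")
print(f"self-improving model: passed I = {i_sing:.1e} at t = {t_sing:.3f}")
```

The self-improving run races past the cap just after t = 1, as the analytic solution predicts; the proportional run is still at a modest value at the end of the simulation.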

Assuming human technology reaches a singularity at some point in the future, every human being alive at that time will have a decision to make - will you augment and accelerate your brain with the ever-advancing technology, or leave it alone? Paradoxically, augmentation is actually the more conservative choice - if your subjective experience is being accelerated at the same rate as normal progress, what you experience is just the "normal" exponential increase in technology - you never actually get to experience the singularity because it's always infinitely far away in subjective time. If you leave your brain in its normal biological state, you get to experience the singularity in a finite amount of time. That seems like it's the more radical, scary and dangerous option. You might just die at some point immediately before the singularity as intelligences which make your own seem like that of an ant decide that they have better uses for the atoms of which you are made. Or maybe they'll decide to preserve you but you'll have to live in a universe with very different rules - rules which you might never be able to understand.

The other interesting thing about this decision is that if you do decide to be augmented, you can always change your mind at any point and stop further acceleration, at which point you'll become one of those for whom the singularity washes over them instead of one of those who are surfing the wave of progress. But going the other way is only possible until the singularity hits - then it's too late.

Of course, all this assumes that the singularity happens according to the mathematical prediction. But that seems rather unlikely to me. The best evidence we have so far strongly suggests that there are physical limits to how much computation you can do in finite time, which means that I will level off at some point and progress will drop to zero. Or maybe growth will ultimately end up being polynomial - this may be a better fit to our physical universe where in time t we can access O(t^3) computational elements.

To me, a particularly likely scenario seems to be that, given intelligence I it always takes the same amount of time to reach kI - i.e. we'll just keep on progressing exponentially as we have been doing. I don't think there's any reason to suppose that putting a human-level AI to work on the next generation of technology would make it happen any faster than putting one more human on the task. Even if the "aha moments" which currently require human ingenuity are automated, there are plenty of very time-consuming steps which are required to double the level of CPU performance, such as building new fabrication facilities and machines to make the next generation of ICs. Sure, this process becomes more and more automated each time but it also gets more and more difficult as there are more problems that need to be solved to make the things work at all.

In any case, I think there are a number of milestones still to pass before there is any chance we could get to a singularity:

  • A computer which thinks like a human brain albeit at a much slower rate.
  • A computer which is at least as smart as a human brain and at least as fast.
  • The development of an AI which can replace itself with smarter AI of its own design without human intervention.

Special relativity game

Friday, August 19th, 2011

I think it would be awesome if somebody made a 3D, first person computer game where the speed of light was significantly slower (perhaps 30mph as in the Mr Tompkins books) and did relativistically correct rendering, so that you could see the geometric distortions and Doppler shifts as you walked around. It might be necessary to map the visible spectrum onto the full electromagnetic spectrum in order to continue to be able to see everything (albeit with reddish or bluish hues) when you're moving quickly.

It would have to be a single player game because (in the absence of time travel) there is no way to simulate the time dilation that would occur between players moving around at different speeds.

It appears that I'm not the first person to have this idea, though the game mentioned there (Relativity) doesn't seem to be quite what I'm looking for.

I do realize that it might be quite difficult to do this with current graphics engines, but I'm sure it could be done in real time, perhaps with the aid of suitable vertex and pixel shaders for the geometric/chromatic distortions respectively.
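
For anyone curious what those shaders would actually compute, here's a minimal sketch (my own illustration, not a real renderer) of the two per-direction quantities involved: the relativistic aberration of each line of sight and the Doppler factor that shifts its colour.

```python
import math

def relativistic_view(theta_scene, beta):
    """For an observer moving at speed beta (in units of c): take the
    scene-frame angle between the line of sight to an object and the
    direction of motion, and return (apparent_angle, doppler_factor).
    The Doppler factor multiplies the observed frequency: > 1 means
    blue-shifted, < 1 means red-shifted."""
    gamma = 1.0 / math.sqrt(1.0 - beta * beta)
    cos_t = math.cos(theta_scene)
    doppler = gamma * (1.0 + beta * cos_t)                 # frequency ratio
    cos_apparent = (cos_t + beta) / (1.0 + beta * cos_t)   # aberration
    return math.acos(cos_apparent), doppler

beta = 0.9   # e.g. cycling at 27 mph in a world where light goes 30 mph
for deg in (0, 45, 90, 135, 180):
    apparent, d = relativistic_view(math.radians(deg), beta)
    print(f"scene angle {deg:3d} deg -> apparent {math.degrees(apparent):6.1f} deg, "
          f"Doppler factor {d:.2f}")
```

At 90% of the (reduced) speed of light, objects that are actually beside you appear squeezed to within about 26 degrees of straight ahead and are strongly blue-shifted, while everything behind you is deeply red-shifted - exactly the distortions the game would show.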

The physics of a toroidal planet

Friday, July 16th, 2010

I love the idea of futuristic or alien civilizations using advanced technology to create planetary scale art.

I wonder if such an advanced civilization would be able to create a toroidal planet. It would have to be spinning pretty fast for it not to fall into the central hole - fast enough that the effective gravitational field at the outermost ring of the surface would be quite weak and the gravitational field at the innermost ring would be quite strong. The gravitational "down" direction would vary across the surface of the planet and wouldn't be normal to the average surface except on those two rings. I think if you slipped you probably wouldn't fall off (unless the effective gravity was very low) but you would generally fall towards the outermost ring. Any oceans and atmosphere would also be concentrated at the outermost ring. It would be as if there was a planet-sized mountain with its top at the innermost ring and its bottom at the outermost. It would therefore have to be made of something very strong if the minor radius was reasonably large - rock (and probably even diamond) tends to act as a liquid at distance scales much larger than mountains.
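
Here's a crude numerical sketch of that variation in effective gravity (entirely my own back-of-the-envelope, with made-up numbers): it treats all of the planet's mass as a thin ring along the torus's central circle - which exaggerates the pull close to the surface - and adds the centrifugal term from the spin.

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def ring_gravity_radial(mass, ring_radius, d, n=20_000):
    """Radial component (positive = away from the spin axis) of the
    gravitational acceleration at distance d from the centre, in the
    plane of a thin ring of the given mass and radius, approximated
    by n point masses."""
    m = mass / n
    total = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * i / n
        dx = ring_radius * math.cos(phi) - d
        dy = ring_radius * math.sin(phi)
        r3 = (dx * dx + dy * dy) ** 1.5
        total += G * m * dx / r3
    return total

# Made-up illustrative numbers: an Earth-mass torus, major radius 6000 km,
# minor radius 1000 km, spinning with a 1.5-hour "day".
M, R_major, r_minor = 5.97e24, 6.0e6, 1.0e6
omega = 2.0 * math.pi / (1.5 * 3600.0)

for label, d in (("innermost ring", R_major - r_minor),
                 ("outermost ring", R_major + r_minor)):
    g_eff = ring_gravity_radial(M, R_major, d) + omega**2 * d  # gravity + centrifugal
    print(f"{label}: effective radial acceleration {g_eff:+.1f} m/s^2 "
          f"(positive = away from the spin axis)")
```

The printed numbers are only as good as the thin-ring approximation and the arbitrary parameters, but the qualitative picture is the one described above: on the innermost ring gravity and the centrifugal term add up, while on the outermost ring they partly cancel.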

I need to try to remember to google my ideas before writing about them. Having written this, I've just discovered that lots of people have thought about this before.

Neutral particles in E8 theory

Tuesday, November 4th, 2008

I didn't pay too much attention to "surfer dude" Garrett Lisi's Exceptionally Simple Theory of Everything when the paper first appeared, but since his TED talk about it was posted last month I have found myself fascinated.

Please excuse any errors in the following - I haven't studied physics at or beyond graduate level, so my understanding may be flawed in some respects.

Here's the game. You have 8 numbers (quantum numbers) - let's call them s, t, u, v, w, x, y and z for the sake of argument. Each of these numbers can be either -1, -1/2, 0, 1/2 or 1. So there are 5^8 = 390625 possible sets of numbers. However, there are certain rules eliminating some combinations:

  1. They must be all half-integers (-1/2, 1/2) or all integers (-1, 0, 1).
  2. They must add up to an even number.
  3. If they are integers, exactly six of them must be 0.

With these rules, there are only 240 possible sets (128 with half-integers and 112 with integers).
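
The counting is easy to check mechanically - here's a little Python sketch of my own that just transcribes the three rules above and counts the surviving sets:

```python
from itertools import product

def e8_roots():
    """Enumerate the 240 allowed sets of quantum numbers (s,t,u,v,w,x,y,z)."""
    roots = []
    # All integers in {-1, 0, 1}: exactly six zeros, even sum.
    for combo in product((-1, 0, 1), repeat=8):
        if combo.count(0) == 6 and sum(combo) % 2 == 0:
            roots.append(combo)
    # All half-integers (+1/2 or -1/2): even sum.
    for combo in product((-0.5, 0.5), repeat=8):
        if sum(combo) % 2 == 0:
            roots.append(combo)
    return roots

roots = e8_roots()
integer_sets = [r for r in roots if abs(r[0]) != 0.5]
print(len(roots), len(roots) - len(integer_sets), len(integer_sets))  # 240 128 112
```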

If you take these sets as the corners of an 8-dimensional shape, you get the pretty pictures that you might have seen associated with E8 theory.

Each of the 240 possible sets corresponds to a fundamental particle in a model that includes all the standard model particles and gravity.

  • 144 of the particles are quarks (up, down, strange, charm, top, bottom)×(spin up, spin down)×(red, green, blue)×(left-handed, right-handed)×(particle, antiparticle). These are the particles that make up the bulk of the mass of atoms (plus some variants).
  • 24 of the particles are charged leptons (electron, muon, tau)×(spin up, spin down)×(left-handed, right-handed)×(particle, antiparticle). These are the particles that determine the shapes, textures, colours and chemistry of things (plus some variants).
  • 24 of the particles are neutrinos (electron, muon, tau)×(spin up, spin down)×(left-handed, right-handed)×(particle, antiparticle). These don't have much of an effect - trillions of them pass right through your body each second.
  • 6 of the particles are gluons (orange, chartreuse, azure, aquamarine, indigo, violet [1]), responsible for the "strong force" that holds atomic nuclei together.
  • 2 of the particles are W bosons (particle, anti-particle). These cause the "weak force" which is the only way that neutrinos can interact with the other particles. They are quite heavy so never go very far.
  • 16 of the particles are frame Higgs bosons (2 types)×(spin up, spin down)×(left-handed, right-handed)×(particle, anti-particle). Interactions with these are what gives particles their mass.
  • 4 of the particles are spin connection bosons (spin up, spin down)×(left-handed, right-handed). These are responsible for gravity.

[1] These aren't the names that physicists usually use, but I prefer them.

That adds up to 220 - what about the other 20? Lisi's theory predicts some new particles which we haven't seen (because they're too heavy) but which might turn up in the Large Hadron Collider. These particles fall into two classes:

  • 2 of the particles are Pati-Salam W' bosons (particle, anti-particle). These cause a force similar to the weak force but even weaker.
  • 18 of the particles are coloured Higgs bosons (3 generations)×(red, green, blue)×(particle, anti-particle). These are like the other Higgs bosons except that they can also interact via the strong force.

Lisi's paper contains several tables which show the values of s, t, u, v, w, x, y and z for each of these particles.

Different combinations of the quantum numbers correspond to the conserved charges that physicists are more familiar with:

  • electric charge is (3v+x+y+z)/3.
  • colour charges are combinations of x, y and z.
  • weak and weaker isospins are (u+v)/2 and (v-u)/2.
  • weak hypercharge is (3(v-u)+2(x+y+z))/6.
  • two gravitational/spin charges are (s+t)/2 and (t-s)/2.
  • a charge corresponding to fermion generations (i.e. electron/muon/tau leptons, up/charm/top quarks and down/strange/bottom quarks) is w.

(Two of these are new charges predicted by E8 theory).
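
Transcribing that list into code (my own sketch; the input below is just an arbitrary allowed set of quantum numbers chosen for illustration, not a row from Lisi's tables):

```python
def charges(s, t, u, v, w, x, y, z):
    """Direct transcription of the combinations listed above (the colour
    charges, which are combinations of x, y and z, are omitted)."""
    return {
        "electric charge":   (3 * v + x + y + z) / 3,
        "weak isospin":      (u + v) / 2,
        "weaker isospin":    (v - u) / 2,
        "weak hypercharge":  (3 * (v - u) + 2 * (x + y + z)) / 6,
        "spin charge 1":     (s + t) / 2,
        "spin charge 2":     (t - s) / 2,
        "generation charge": w,
    }

# Purely illustrative input - one of the allowed half-integer sets:
print(charges(0.5, -0.5, 0.5, -0.5, 0.5, -0.5, 0.5, -0.5))
```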

As well as these 240 particles, there are 8 particles with all-zero quantum numbers. The 8 neutral bosons are (I think):

  1. The photon (electromagnetic radiation: light, radio waves etc.)
  2. The Z boson (weak force)
  3. A seventh gluon (strong force)
  4. An eighth gluon (strong force)
  5. A fifth spin connection boson (gravity)
  6. A sixth spin connection boson (gravity)
  7. The Z' boson (weaker force)
  8. The w particle (generations)

As well as describing all the particles, this theory also describes their interactions. Two particles can combine to form a third (or appear as a result of the third decaying) if and only if you can add up their corresponding quantum numbers and obtain the quantum numbers of the third particle. If one of the three is a neutral boson, it must be a neutral boson corresponding to a charge which is non-zero for the other two particles.

If you know the quantum numbers for a particle, you can obtain the quantum numbers for the corresponding anti-particle by multiplying them all by -1. So, neutral bosons are their own anti-particles and a non-neutral particle can always interact with its own anti-particle to produce a neutral boson.
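
In code, those two rules are one-liners (again my own sketch; the example set of quantum numbers is just one of the 240 allowed by the rules above, with no claim about which particle it represents):

```python
def anti(p):
    """Anti-particle: all eight quantum numbers multiplied by -1."""
    return tuple(-q for q in p)

def can_combine(a, b, c):
    """True if particles a and b can combine to form c, i.e. their quantum
    numbers add up component by component to those of c."""
    return all(qa + qb == qc for qa, qb, qc in zip(a, b, c))

neutral = (0,) * 8
example = (0.5, -0.5, 0.5, -0.5, 0.5, 0.5, -0.5, -0.5)   # an arbitrary allowed set
print(can_combine(example, anti(example), neutral))       # -> True
```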

E8 theory does have some problems:

  1. It seems to imply a very large cosmological constant.
  2. Some of the higher generations of fermions have the wrong charges.

But these problems may be surmountable with more work and experimental data. Even if E8 theory is not correct in its current form, the pattern is so compelling that there is surely some nugget of truth in there somewhere that will form part of the final theory. It's not the end of physics and probably not even the beginning of the end, but it might just be the beginning of the end of the beginning.

Learning about all this stuff makes me want to create an E8 screensaver that simulates a soup of E8 particles all colliding with each other and interacting as they would at very high energies where the symmetries are unbroken. I wrote something similar many years ago when I learnt about quantum electrodynamics (a soup of electrons and photons) but this one would be much more colourful and interesting.

Chemically controlled radioactivity

Sunday, October 26th, 2008

Most of the properties of materials (including all of chemistry) are modelled by assuming atomic nuclei to be indivisible particles. You can derive almost all chemistry and properties of materials from the laws of quantum mechanics and the masses and charges of electrons and nuclei.

Nuclear physics, on the other hand, is all about the internals of the nuclei and (except under extreme conditions) a nucleus is unaffected by electrons and other nuclei (even in electron capture the particular chemical configuration of an atom doesn't affect its decay rate significantly).

This is useful in some ways - you can replace some nuclei with different isotopes (for example in nuclear medicine) without changing the chemistry. Of course, once a nucleus decays (other than by gamma decay), its charge changes and the chemistry will be different.

Suppose that nuclear decay rates did depend significantly on chemical configuration - what would the consequences be? The obvious consequence is that it would be possible to make nuclear weapons that are more difficult to detect, since they could be made non-radioactive until they undergo a chemical reaction.

More subtly, there would be a whole new spectrum of chemical processes made possible by the storage of energy in (and release of energy from) the atomic nuclei. This could lead to new (cleaner, safer) forms of nuclear power, and all sorts of other interesting applications.

This is all wild speculation of course, but this suggests there is still much that we don't understand about atomic decay processes.

New technologies from new physics

Sunday, October 19th, 2008

Almost every fundamental new discovery in physics so far has yielded great advances in technology. The exception seems to be general relativity - probably because gravity is such a weak force, it's difficult to make consumer items out of it.

I like to wonder what new technologies we could hope (in our wildest dreams) to obtain with a complete theory of physics. It might take a while, because we don't yet know of any practical way of even getting experimental evidence for a grand unified theory, let alone making technology from those experimental results.

One possibility is new particles. Many promising theories predict various new particles. Unfortunately most particles other than the ones that make us up tend to be very short-lived and therefore don't yield any new materials. But if we do find a new long lived particle (and it doesn't cause a phase transition that swallows us all up) there is a possibility of new materials heavier, lighter, stronger or with better information storage abilities than the ones we have.

Another possibility is gravitational engineering. Particularly if we can find a way to violate the weak energy condition, we might be able to build stable, traversable wormholes, time machines and other such time/space abominations.

Even more far-fetched (but also possible) would be more ways to manipulate matter and energy, as in The Trigger and Ed stories.

Escher metric

Tuesday, September 30th, 2008

When I learned about General Relativity at university I sometimes used to wonder if there was a metric in which this object could exist:

Penrose triangle

And be what it appears to be, i.e. the straight lines are (lightlike) geodesics and the corners are (locally) right-angles.

Initially, I imagined that such a metric might be possible without any sort of topological defect. In particular, while the triangle would look like the picture above when viewed "face on" it would appear to curve as you moved around it and examined it from different angles. While lightlike geodesics always look straight when you look along them from a point on the geodesic, they can still look curved when viewed externally. The photon sphere around a black hole is an example of a set of such geodesics thought to occur in our universe.

Thinking about it some more, I suspect something strange is going to have to happen in the middle of the triangle - if you try to shrink the triangle to a point, what happens?

Imagine being in this universe and travelling all the way around the triangle. I think that upon doing so, one would find that one had actually only turned through 270 degrees instead of the full 360 (and would also have rotated about the axis of travel). I suspect that this means that such a metric would have to have a topological defect (a cosmic string) passing through the center of the triangle. This would cause a discontinuity when viewing the triangle, so one would not be able to see the entire illusion in all its glory as it is shown above.

The search for simplicity

Sunday, September 28th, 2008

There are several ways in which computer programming and physics are very similar. Possibly the most important is that both disciplines are, fundamentally, a search for simplicity.

In physics, we have a big pile of experimental results and we want to find the simplest theory that satisfies them all. Just listing all the experiments and their results gives you a theory of physics, but not a particularly useful one since it's not very simple and doesn't predict the results of future experiments (only the past ones). Rather than just listing the results, we would like to find a general theory, an equation, a straight line through the points of data which allows for interpolation and extrapolation. This is a much more difficult thing to do as it requires insight and imagination.

In computer programming, we generally have a big pile of specifications about what a program should do - maybe a list of possible interactions with the user (what they input and what they should expect to see as output). These might be encapsulated as testcases. To write a program that satisfies all the testcases, we could just go through them all one by one, write code to detect that particular testcase and hard-code the output for that particular input. That wouldn't be very useful though, as the program would fail as soon as the user tried to do something that wasn't exactly one of the scenarios that the designers had anticipated. Instead we want to write programs for the general case - programs that do the right thing no matter what the input is. When the "right thing" isn't precisely specified, we get to choose the output that makes the most sense according to our internal model of how the program should act.

I think a number of software companies in recent years (Microsoft in particular but others as well) have started to fall into the trap of writing software that concentrates too much on what the behavior of the software should be for particular (sometimes quite specific) scenarios, at the expense of doing the right thing in the most general case. Windows is chock full of "special case" code ("epicycles" if you will) to work around particular problems when the right thing to do would have been to fix the general problem, or sometimes even to explain that this is how we should expect it to work. Here is one example of this kind of band-aiding. I discovered another the other day - I was running some older Windows software in Vista and accessed the "Help" functionality, which was implemented as an old-style .hlp file. Vista told me that it no longer includes the .hlp viewer by default (I guess it was a piece of the OS that doesn't get a lot of use these days, and they had just dropped it from the default distribution to avoid having to bring it up to the latest coding standards). I was pointed to the download location (where I had to install an ActiveX control to verify that my copy of Windows was genuine before I was allowed to download the viewer).

Part of the problem is that (at Microsoft at least) it's very difficult to make big changes. Rewriting some core piece of functionality, even if the programming itself is easy, would involve months of planning, scheduling, designing, specification writing, testcase writing, test-plan reviewing, management sign off meetings, threat modelling, localization planning, documentation planning, API reviewing, performance testing, static analysis, political correctness checking, code reviewing and integrating. And of course everyone whose code might possibly be affected by the change needs to sign off on it and put in their two cents about the correct design. And it must comply with the coding standards du jour, which change every year or two (so delay too long and you'll probably have to start all over again.) When you come to understand all this, the long gap between XP and Vista becomes less surprising (in fact, it's quite a surprise to me that it only took a little over 5 years, considering how many pieces were completely rewritten). All this process exists for a reason (mostly the politician's fallacy) but is rigorously justified and widely accepted.

Because it's difficult to make big changes, people tend to make little changes instead ("hey, we can work around this by just doing x in case y - it's just one extra line of code") - these don't require much process (usually just a code review - most of the rest of the processes for such small changes is automated). All these small changes add up to a great deal of extra code complexity which makes it very difficult for newcomers to understand the code, and even more difficult to rewrite it in the future because people will have come to depend on these edge cases.