Archive for October, 2009

Does the human brain tap into a third form of computing?

Wednesday, October 21st, 2009

There are two forms of computing currently thought to be possible in our universe. One is the classical, deterministic computing that we all know and love. Many people think the human brain is a kind of (very large and complicated) classical computer. However, it is still unknown whether (and if so, how) a classical computer can give rise to consciousness and subjective experience.

The second form of computing is quantum computing, where you essentially run a pile of classical computers in superposition and allow their outputs to interfere in order to obtain the result. Anything quantum computers can do can also be done by classical computers (albeit much more slowly). The human brain might be a quantum computer, but (unless there's something about quantum computing that we don't yet understand) that still doesn't solve the problem of consciousness.

A third form of computing is possible if you have a time machine. I've speculated before that the human brain could be a time travelling computer. These computers are faster still than quantum computers, but still can't compute anything that can't in principle (given long enough) be computed by a classical computer, so this still doesn't solve the consciousness problem.

Could it be that by accident of evolution the human brain has tapped into a form of computing that is qualitatively different from classical computing, much as birds and bees have tapped into a qualitatively different method of flying (flapping) than the method used in our aeroplanes? While this smells of dualism, I think it's a possibility that can't be fully discounted without a complete theory of physics.

One such qualitatively different form of computing is the infinity machine. This can verify true things in finite time even if there is no finite proof that those things are true. Thus it can find completely new truths that are not provable by conventional mathematics.

It seems rather unlikely that the infinity machine is possible in our universe (quantum mechanics puts an absolute limit on clock speed) but there could be other forms of computation that we've just never thought of.

Penrose's Orchestrated Objective Reduction theory is one such possibility.

Why should healthcare be a right anyway?

Tuesday, October 20th, 2009

Ron Paul says healthcare isn't a right. He's correct, technically - the US constitution guarantees no such right. However, unlike Dr Paul, I think healthcare should be a right - at least, some basic level of healthcare.

There are several reasons for this. One is a basic moral imperative - enshrining such a right would save tens of thousands of lives each year.

Another is that healthcare doesn't work like other goods. I have a choice about whether to buy a new car or a new computer. If I am in need of life-saving healthcare - if my "choice" is to get the healthcare I need, or to die - that isn't really a choice at all. Faced with such a need, my only rational response would be to obtain the healthcare I needed by any means necessary, even if that means using all my savings and selling everything I own. No-one "shops around" for the cheapest cancer treatments - people go to the doctor, get fixed, and then worry about the cost afterwards. Until the bill comes, they probably have no idea how much it's going to cost.

Food and shelter are also essential goods, but they don't bankrupt people like medical bills do (at least in the US) because they are very predictable costs - one isn't going to suddenly find oneself faced with a food bill of hundreds of thousands of dollars due to circumstances beyond one's control. A "basic minimum for survival" level of food is also very cheap (we spend so much on food because we like it to be tasty and interesting as well as nutritious) but healthcare is very expensive because doctors are very highly trained professionals.

Yet another reason is that universal healthcare benefits everyone, even the healthy. Preventative medicine and quick treatment of disease (as opposed to people avoiding going to the doctor because they can't afford it) mean a healthier population overall, less disease, fewer sick days and greater productivity.

One other reason that doesn't seem to be mentioned anywhere is peace of mind. Unless you work for a very large corporation, health insurance isn't reliable. Even if you have insurance, you might still be bankrupted by deductibles, copays and coinsurance. If you have an individual insurance plan, your insurance company will look for any excuse to drop you as a customer if you develop a particularly expensive condition - woe betide you if you left an i undotted or a t uncrossed on your application form! If you work for a small company, the insurance company may pressure your employer to fire you (with the threat of cripplingly higher rates) if you get too sick. All these add up to worries (and stress) that those who are guaranteed healthcare by their governments simply don't have.

What would constitute a proof of God?

Monday, October 19th, 2009

One question that theists sometimes bring up in arguments with atheists is "What proof would convince you of the existence of God?", the implication being that the existence of the universe and all the wonderful things in it is proof enough.

Well, if you define God as just being the things we don't understand (currently the big bang and the mysteries of the human mind) then that's a valid argument, but the resulting God is just the "God of the gaps" who has been shrinking rapidly as science improves. Pretty much everything else in the universe (excepting only a few relatively minor details) we have good working scientific theories about. I think eventually we'll come to understand scientifically both the human mind and the big bang as well (in fact, I think it will not be possible to understand either without the other).

Most theists don't seem to believe in just a God who created the universe at the beginning and then left it alone - they believe things like "praying works". This suggests a simple test involving praying for (for example) heads in a coin toss and then seeing if there is any statistically measurable effect. Once an effect is found, the experiment could be refined to determine which religion and sect has the most effective prayers. Theology would become a science. Theists will usually claim that prayers don't work that way, but ultimately they either work or they don't, and if they do work then that effect can be observed and experimented on. I understand some such experiments have been done, and have shown no statistically significant effects with the possible exception of medical patients who know they are being prayed for. This can be attributed to the placebo effect.
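
As a rough illustration of the statistics such an experiment would involve, here is a minimal sketch in Python (the numbers are made up purely for illustration): flip a fair coin many times while praying for heads, then ask how likely the observed excess of heads would be by chance alone.

```python
from math import comb

def p_value_at_least(heads, flips):
    """Probability of getting at least `heads` heads in `flips` fair coin
    tosses purely by chance (a one-sided binomial test against p = 0.5)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

# Hypothetical results: 1,000 prayed-for tosses, 527 of them heads.
flips, heads = 1_000, 527
print(f"{heads}/{flips} heads; probability by chance alone = "
      f"{p_value_at_least(heads, flips):.3f}")
# A consistently tiny value here (with pre-registration and replication)
# would be the kind of statistically measurable effect described above.
```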

Another example of such a possible proof comes from the observation that "if God is so great, why does he keep needing money to fix church roofs?" I would find it a very compelling piece of evidence towards God's existence if consecrated buildings did not suffer the same kinds of wear and tear that unconsecrated buildings do.

XKCD standard creepiness rule

Sunday, October 18th, 2009

I think the XKCD standard creepiness rule is a good idea - it makes much more sense than the usual age of consent rules (especially when one or more of the people involved is under the age of majority). I built a quick web calculator to allow you to figure out if your relationship is (or would be) creepy, and when it stops (or would have stopped) being so.
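
For reference, the rule itself fits in a couple of lines. Here's a minimal sketch (not the code behind the calculator linked above), assuming the usual statement of the rule: the younger person should be at least half the older person's age plus seven.

```python
def is_creepy(age_a, age_b):
    """XKCD standard creepiness rule: the younger person should be at least
    half the older person's age plus seven."""
    older, younger = max(age_a, age_b), min(age_a, age_b)
    return younger < older / 2 + 7

def youngest_non_creepy_partner(age):
    """The youngest partner the rule allows for someone of the given age."""
    return age / 2 + 7

print(is_creepy(30, 22))                # False: 22 >= 30/2 + 7 = 22
print(is_creepy(30, 21))                # True
print(youngest_non_creepy_partner(40))  # 27.0
```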

CGA: Why the 80-column text mode requires the border color to be set

Saturday, October 17th, 2009

The original IBM Color Graphics Adapter has a curious quirk - it won't by default display colour on the composite output in 80-column text mode. By looking at the schematics, I've figured out why this is, and what the CGA's designers could have done differently to avoid this bug. The following diagram illustrates the structure of the various horizontal and vertical sync pulses, overscan and visible areas in the CGA.

There are two horizontal sync pulses - there's the one generated by the 6845 (the 160-pixel wide red/grey/yellow band in the diagram) and there's the one output to the monitor (the 64-pixel wide grey band within it). The CGA takes the 6845's hsync pulse and puts it through various flip flops to generate the output hsync pulse (delayed by 2 LCLKs and with a width of 4 LCLKs) and also the color burst pulse (in yellow, delayed by 7 LCLKs and with a width of 2 LCLKs).

The 6845 can generate an hsync pulse anywhere from 1 to 16 clock ticks in width. The IBM's BIOS sets it up at 10 ticks (as shown in the diagram). However, in 80-column text mode those ticks are only half as wide, so the 6845's pulse only extends 3/4 of the way through the output hsync pulse. It ends before the color burst pulse gets a chance to start, so the burst never happens and the display will show a monochrome image.
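
To make the timing concrete, here's a quick sketch of the arithmetic in Python. The figures are simply the ones from the description above, with positions measured in LCLKs from the start of the 6845's hsync pulse; in 80-column mode each 6845 tick is only half an LCLK.

```python
# Positions in LCLKs, measured from the start of the 6845's hsync pulse.
OUTPUT_HSYNC_START, OUTPUT_HSYNC_END = 2, 2 + 4   # delayed 2 LCLKs, 4 wide
BURST_START, BURST_END = 7, 7 + 2                 # delayed 7 LCLKs, 2 wide

def hsync_end(width_in_ticks, lclks_per_tick):
    """Where the 6845's hsync pulse ends, in LCLKs."""
    return width_in_ticks * lclks_per_tick

# 40-column modes: one 6845 tick is one LCLK, and the BIOS programs 10 ticks.
print(hsync_end(10, 1.0))  # 10.0 - past BURST_END, so the burst is generated
# 80-column mode: one tick is only half an LCLK, so the same 10 ticks give:
print(hsync_end(10, 0.5))  # 5.0 - 3/4 of the way through the output hsync
                           # (which spans 2..6) but short of BURST_START at 7
# Even the maximum width of 16 ticks only reaches halfway through the burst:
print(hsync_end(16, 0.5))  # 8.0 - the "first half of the burst" case below
```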

By changing the overscan color to brown, one can create one's own color burst at the right point in each scanline, and this was the usual way of working around the problem (possibly the only way that works reliably).
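
In code, the workaround is just a single register write. This is a sketch rather than working DOS code (`outb` below is a stand-in for whatever port-write routine your environment provides); 0x3D9 is the CGA colour select register, whose low four bits set the overscan/border colour, and 6 is brown.

```python
def outb(port, value):
    """Stand-in for a real port write (an OUT instruction under DOS)."""
    print(f"out {port:#05x}, {value:#04x}")

BROWN = 6
# Set the overscan (border) colour to brown, so the border itself supplies a
# burst-like signal at the point where the hardware colour burst should be.
outb(0x3D9, BROWN)
```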

By changing the 6845's pulse width to the maximum of 16, one could generate the first half of the color burst pulse (I think) and some monitors might recognize this as a color burst.

If the CGA's designers had started the output hsync pulse at the beginning of the 6845's hsync pulse (or delayed by only 1 LCLK instead of 2) then using the maximum pulse width would have been sufficient to generate the correct color burst. I guess they were just trying to center the output hsync pulse and the color burst within the 6845 pulse, without thinking of the high-res case.

The diagram also shows why interlaced mode doesn't work on the CGA - the output vertical sync pulse is generated in a similar way to the output horizontal sync pulse, only it's 3 lines instead of 4 LCLKs. It always starts at the beginning of an output hsync pulse, so a field can't start halfway through a scanline.

CGA: Reading the current beam position with the lightpen latch

Friday, October 16th, 2009

Here is a little known trick that a genuine IBM Color Graphics Adapter can play, which I noticed when looking at its schematic recently. There are two ports (0x3db and 0x3dc) which are related to the light pen. A read from or write to 0x3db clears the light pen strobe (which you need to do after reading the light pen position so that you'll be able to read a different position next time). A read from or write to 0x3dc sets the light pen strobe - what's the point of that? One possibility might be to implement a light pen that signals the computer in a different way (via an interrupt) rather than being connected directly to the CGA card. That wouldn't work very well, though - the interrupt latency of the original IBM PCs was extremely high.

Another possibility is to allow the programmer to directly find the position of the beam at any moment, to an accuracy of 2 scanlines (in graphics modes) and one character width (1/40th of the visible screen width in graphics modes and 40-column text modes, 1/80th of the visible screen width in 80-column text modes). Read from 0x3db and 0x3dc and then read the light pen CRTC registers to find out where the beam was when you read from 0x3dc. This technique is so obscure it probably won't work on non-IBM CGA cards, so its usefulness is rather limited. Might be useful for an oldskool demo, though. I'll be sure to implement this technique when I finally get around to making my extremely accurate PC emulator.
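
Here's a sketch of what the trick might look like in code. The port helpers are stand-ins (on real hardware they would be IN/OUT instructions), and I'm assuming the standard 6845 light pen address registers, CRTC registers 16 and 17, read through the CGA's index/data ports at 0x3D4/0x3D5.

```python
def outb(port, value):
    pass      # stand-in: a real implementation would execute an OUT instruction

def inb(port):
    return 0  # stand-in: a real implementation would execute an IN instruction

CRTC_INDEX, CRTC_DATA = 0x3D4, 0x3D5
LPEN_HIGH, LPEN_LOW = 16, 17        # 6845 light pen address registers

def read_crtc(reg):
    outb(CRTC_INDEX, reg)
    return inb(CRTC_DATA)

def read_beam_position():
    """Latch and return the CRTC address the beam was at 'just now'."""
    inb(0x3DB)                      # clear the light pen strobe
    inb(0x3DC)                      # set it again, latching the current position
    return (read_crtc(LPEN_HIGH) << 8) | read_crtc(LPEN_LOW)
```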

Building 3D chips

Thursday, October 15th, 2009

In the not-too-distant future, we'll hit a limit on how small we can make transistors. The logical next step from there will be to start building up - moving from chips that are almost completely 2D to fully 3D chips. When that happens, we'll have to figure out a way to cool them. Unlike with a 2D chip, you can't just stick a big heatsink and fan on top because it would only cool one surface, leaving the bulk of the chip to overheat. What you need is a network of cooling pipes distributed throughout the chip, almost like a biological system.

I suspect these pipes would work best if they go straight through the chip and out the other side. At small scales, fluid is very viscous and trying to turn a corner would probably slow down the flow too much. So suppose you have a cubic chip with lots of tiny pipes going in one face and coming out the opposite face. The next problem is that, if the fluid is all moving the same way, one side of the chip (the side where the heated fluid comes out) would get much hotter than the other. The effect could be mitigated somewhat by having some of the pipes flowing in the opposite direction. Ideally you'd want fluid coming in on all 6 faces to maximize cooling. Another possibility is pipes that split up within the chip. A wide pipe of cold fluid will have a similar effect to several smaller pipes of warmer fluid (the increase in fluid temperature is offset by the extra surface area). It would be an interesting puzzle to try to model the heat flows and come up with optimal pipe configurations. On doubling the side length of the chip, one probably has to increase the proportion of chip volume dedicated to cooling by some factor 2^n - I wonder what this fractal dimension is.
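
As a toy illustration of the one-direction problem, here is a very crude one-dimensional model. Everything about it is made up for illustration: the chip is ten segments along the flow direction, and each segment is assumed to warm the coolant passing it by a fixed amount.

```python
def coolant_temps(segments, inlet, rise_per_segment, reverse=False):
    """Coolant temperature next to each chip segment along one pipe, assuming
    each segment warms the passing coolant by a fixed amount."""
    temps = [inlet + rise_per_segment * (i + 1) for i in range(segments)]
    return temps[::-1] if reverse else temps

N, INLET, RISE = 10, 20.0, 3.0

# Every pipe flowing the same way: by the far side of the chip the coolant
# has warmed by RISE * N = 30 degrees, so that side runs much hotter.
one_way = coolant_temps(N, INLET, RISE)

# Half the pipes reversed: each segment sees the average of the two directions.
mixed = [(f + b) / 2 for f, b in zip(coolant_temps(N, INLET, RISE),
                                     coolant_temps(N, INLET, RISE, reverse=True))]

print("same direction:", one_way)  # ramps from 23 to 50 across the chip
print("counter-flow:  ", mixed)    # flat across the chip in this toy model
```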

For most efficient cooling, one would probably want to take the cooling fluid from the CPU and any other hot parts of the system and compress it (just like the coolant in a fridge), allowing it to expand inside the CPU. Then rather than having lots of noisy fans one has one noisy compressor (which would probably be easier to acoustically isolate - maybe even by putting it outside). Fans are a big problem for noise and reliability - my main desktop machine (at the time of writing) has five of them, of which two have failed and a third is on its last legs.

Another major problem that will need to be solved is pluggable cooling lines. People expect to be able to build their own computers, which means that it must be possible to plug together a CPU, motherboard, graphics card and cooling system without expensive machinery. That means we'll need some kind of connector for plugging the coolant lines from the CPU (and other hot components) into the cooling system. Ideally it will be easy to connect up and disconnect without the possibility of introducing dirt or air into the coolant lines, and without the possibility of coolant leaks. I suspect that whoever invents such a connector will make a lot of money.

Plan for welfare

Wednesday, October 14th, 2009

I think social welfare is generally a positive thing for society - nobody should have to starve, become homeless or sacrifice their retirement savings due to circumstances beyond their control (such as losing their job). However, it is also important not to destroy the incentive to work - we should balance the aim of having everyone contribute usefully to society with the safety net. It seems the UK isn't doing a very good job of this at the moment, as there are people on welfare who could work and would like to, except that working would mean having less money for greater effort (a minimum wage job pays less than welfare, and if you have the job you stop getting the welfare). There are also many teenage girls who become pregnant just so that they will get a council house. [EDIT: This is an unsubstantiated and almost certainly false claim - I heard it on the Jeremy Vine show and failed to research it before repeating it here.]

If we were designing the system from scratch, one question might be to ask "what do we do with someone who is just Terminally Lazy (TL)?" I.e. what do we do with somebody who simply refuses to work or contribute to society at all? What sort of lifestyle should they have? For humanitarian reasons, I don't think we should let them freeze to death in the streets, so I think they should have some sort of Basic Minimum Standard Of Living (BMSOL). We would also like to avoid the possibility of them committing crimes simply so they get sent to prison and have a place to sleep (on the general principle that encouraging crimes is a bad idea). I also don't think we should treat TL-ness as a crime itself - if a TL person wakes up one day and decides to go and get a job and become a productive member of society, that should be encouraged - they should not lose the ability to do that.

I think that the concept of "prison" is the right idea here, though - apart from the "freedom to leave" thing, there is no reason to provide any better BMSOL to a TL person than to a convicted criminal. In both cases, we only provide that BMSOL for humanitarian reasons. Let anyone who wants to have a bed for the night in conditions that are roughly the same as those in prison: shelter, a bed, basic hygiene facilities, up to three square meals per day and a basic level of medical care. No sex, booze, drugs or TV.

Paying for the BMSOL for the TL whilst making the non-TL pay for those things isn't exactly fair, though, so let's have a guaranteed minimum income (equal to the cost of the BMSOL) for everyone, and give people the choice of receiving it in the form of cash or BMSOL (or some of each).

If you want more than the BMSOL you have to do some work. With the BMSOL system in place, the minimum wage could be scrapped, which would mean there would be plenty of work to go around (the problem with unemployment isn't that there's a lack of stuff to do, it's that because of the minimum wage there's a lack of money to pay people to do it).

How to pay for all this? I tend to favour an income tax since it's cheaper to collect and more progressive than a sales tax. I think inheritance tax is one of the most fair taxes, as I've mentioned here before. Seignorage (if carefully controlled) is probably also a good idea. It would be nice if the government had a national surplus rather than a national debt, so that it could make some money from investments rather than having to pay interest. Sin taxes (on booze, tobacco, recreational drugs, pollution and gambling) should be levied exactly as much as necessary to undo the harm that they cause (so smokers should pay the costs of treatment of smoking related diseases, etc) - any less leads to the abstinent paying for the sins of the partakers, and any more could lead to encouragement of the sins.

Income tax should be calculated as a function yielding y (the take home pay) from x (the salary) such that:

  • There is always an incentive to earn more (i.e. an increase in your salary always corresponds to an increase in your take-home pay) - \displaystyle \frac{dy}{dx} > 0.
  • The tax is progressive (i.e. the more you earn, the smaller the fraction of your salary you take home): \displaystyle \frac{d}{dx}\frac{y}{x} < 0.
  • There's no tax on the first penny (i.e. \displaystyle \frac{dy}{dx} = 1 at x = 0).
  • A minimum income is guaranteed (i.e. \displaystyle y \ge m) rather than a minimum wage (\displaystyle x \ge n).

Because the minimum income is part of the income tax system, the tax bill x-y is negative for people making less than a certain amount (not necessarily the same as the minimum income).
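
To make this concrete, here is one function (my own illustrative example, not a specific proposal) that satisfies all four conditions: \displaystyle y = m + x - \frac{tx^2}{x+k}, where m is the minimum income, 0 < t < 1 is the marginal rate approached at high salaries, and k sets how quickly that rate is approached. The sketch below checks the conditions numerically for some made-up values of m, t and k.

```python
def take_home(x, m=10_000.0, t=0.4, k=50_000.0):
    """Illustrative take-home pay: y = m + x - t*x^2/(x + k)."""
    return m + x - t * x * x / (x + k)

xs = [100.0 * i for i in range(1, 20_001)]   # salaries from 100 to 2,000,000
ys = [take_home(x) for x in xs]

# 1. Always an incentive to earn more: dy/dx > 0.
slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
assert all(s > 0 for s in slopes)
# 2. Progressive: y/x strictly decreasing.
ratios = [y / x for x, y in zip(xs, ys)]
assert all(b < a for a, b in zip(ratios, ratios[1:]))
# 3. No tax on the first penny: dy/dx = 1 at x = 0.
assert abs((take_home(0.01) - take_home(0.0)) / 0.01 - 1.0) < 1e-4
# 4. Minimum income guaranteed: y >= m everywhere.
assert all(y >= 10_000.0 for y in ys)
# And, as above, the tax bill x - y is negative for low earners:
print(5_000.0 - take_home(5_000.0))   # about -9818: a net payment to them
```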

It would be interesting to write a program to simulate the economics of a society and see what effect various tax and welfare schemes have.

Edit 14th July 2013:

Here's a good article about basic income.

Sometimes, doing things incrementally hurts more than it helps

Monday, October 12th, 2009

Usually the best way to make a major change to a piece of code is to try to break it down into small changes and to keep the code working the same after each such small change. The idea being that if you make too many changes and break the code too badly, you might never get it working again. Without working code it can be difficult to figure out what the next step should be.

But sometimes, incremental changes just don't work. In particular, if you're making major architectural changes, trying to construct something that is 90% original architecture and 10% new architecture can involve so much extra work making the incompatible pieces fit together that the incremental approach loses its advantage. In these cases, sometimes the only thing you can do is take the whole thing to pieces and put it back together again the way you want it.

Scaling/scanlines algorithm for monitor emulation

Monday, October 12th, 2009

For my TV emulation, I wanted to render scanlines nicely and at any resolution. xanalogtv does vertical rescaling by duplicating rows of pixels, which unfortunately makes some scanlines appear wider than others. Blargg's NTSC filters don't do any vertical rescaling at all.

The first thing I tried was a sinc interpolation filter with the kernel scaled such that the scanline only covered 70% of the pixels vertically (essentially modelling the scanlines as long thin rectangles). This worked great except that it was far too slow because of the sinc function's infinite extent (I was doing a multiplication for each combination of horizontal position, vertical position and scanline). So I windowed the kernel with a Lanczos window. I got annoying aliasing effects using fewer than 3 lobes. With 3 lobes it was still too slow because each pixel was a weighted sum of 3-4 separate scanlines. Also, because of the negative lobes I needed extra headroom, which meant I either had to reduce my colour resolution or use more than 8 bits per sample (which would also be slow).

The next thing I tried was a Gaussian kernel. This has several nice features:

  1. The Fourier Transform of a Gaussian is also a Gaussian, and a Gaussian is a better approximation of a scanline than a rectangle (the focussing of the electron beam isn't perfect, so to a first approximation the distribution of electrons around the beam center is normal).
  2. It dies off much more quickly than the sinc function.

The Gaussian kernel also gave a good image, so I kept it.

The next thing I wanted to do was improve the speed. I still had several scanlines contributing to every pixel. However, that doesn't make much physical sense - the scanlines don't really overlap (in fact there is a small gap between them) so I figured I should be able to get away with only using the highest coefficient that applies to each pixel. I tried this and it worked beautifully - no difference in the image at large sizes, and it sped the program up by a factor of several. The downside was at small sizes - the image was too dark. This is because the filter was set up so that each pixel would be the average of several scanlines, but if only one scanline is contributing then the brightness is only 1/several of what it should be. To fix this I just divided all the coefficients by the largest. There's no mathematical justification for this, but it looks fine (apart from the fact that some of the scanlines don't contribute to the picture at all).
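
Here's a rough sketch of that per-pixel coefficient calculation (my own reconstruction of the idea, not the actual emulator code): for each output pixel row, take the Gaussian weight of the nearest scanline only, then divide everything by the largest weight so that small output sizes don't come out too dark.

```python
import math

def scanline_coefficients(output_height, scanlines, sigma):
    """For each output pixel row, the (scanline, brightness) pair to use:
    a Gaussian weight from the nearest scanline only, normalised so that the
    largest weight is 1. sigma is the beam width in output pixels."""
    rows_per_scanline = output_height / scanlines
    coeffs = []
    for y in range(output_height):
        position = (y + 0.5) / rows_per_scanline - 0.5       # in scanline units
        nearest = min(max(round(position), 0), scanlines - 1)
        distance = (position - nearest) * rows_per_scanline  # back to pixels
        coeffs.append((nearest, math.exp(-0.5 * (distance / sigma) ** 2)))
    peak = max(w for _, w in coeffs)
    return [(line, w / peak) for line, w in coeffs]

# e.g. 200 scanlines scaled up to 600 output rows with a fairly narrow beam:
for row, (line, brightness) in enumerate(scanline_coefficients(600, 200, 1.0)[:6]):
    print(row, line, round(brightness, 3))
```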

If each pixel is only in one scanline, lots more optimizations are possible - for example, one can generate the image progressively, a scanline at a time, which helps keep data in the caches.

Finally, I still needed it to be faster so I moved all the rescaling (vertical and horizontal) to the GPU. I came up with a devilishly clever hack to implement the same scanline algorithm on the GPU. No shader is needed - it can be done just using textures and alpha blending. There are two passes - the first draws the actual video data. The second alpha-blends a dark texture over the top for the scanlines. This texture is 1 texel wide and as many texels high as there are pixels vertically.

One other complication is that I wanted the video data texture to be linearly interpolated horizontally and nearest-neighbour interpolated vertically. This was done by drawing this texture on a geometry consisting of a number of horizontal stripes, each of which has the same v-texture-coordinate at its top as at its bottom.
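
Here's a sketch of that stripe geometry in terms of the vertex data alone (no GPU calls, and the coordinate conventions are my own): each stripe covers one source scanline, and its top and bottom edges share a single v coordinate - the centre of that scanline - so vertical sampling is effectively nearest-neighbour, while u still runs from 0 to 1 across the stripe so horizontal sampling stays linearly interpolated.

```python
def scanline_stripes(scanlines, screen_height):
    """Horizontal stripes for drawing the video texture: one quad per scanline,
    returned as (y_top, y_bottom, v). The same v is used at the top and bottom
    of each stripe, so the vertical texture lookup never interpolates between
    scanlines; u simply runs 0..1 across each stripe."""
    stripe_height = screen_height / scanlines
    return [(i * stripe_height, (i + 1) * stripe_height, (i + 0.5) / scanlines)
            for i in range(scanlines)]

for y_top, y_bottom, v in scanline_stripes(200, 600.0)[:3]:
    print(y_top, y_bottom, round(v, 4))
```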