Archive for August, 2011

Progressively optimizing compiler

Wednesday, August 31st, 2011

Normally when compiling a program, you tell the compiler how much optimization you want and what the input files are; it goes off, does its thing and comes back when it's done. Sometimes I think it might be nice if one instead told the compiler how much time to spend doing the compiling. It would then do the absolute minimum to make a working binary, and then gradually do more and more optimizations until the timer ran out. This would make the time it takes to rebuild things much more predictable. One downside is that the performance of the resulting program would depend on unpredictable things like how busy the build machine was. Another is that it's probably more efficient for a compiler to decide upfront which optimizations to do rather than making lots of intermediate binaries at different optimization levels.

However, this sort of thing might be more useful as a run-time process - i.e. a JIT compiler. The system can monitor which bits of the code are being run most often (this is very easy to do - just interrupt at a random time and see what code is being run) and concentrate optimization efforts on those parts. The compilation can continue (gradually making the program faster and faster) until the point of diminishing returns is reached. I understand the HotSpot Java Virtual Machine does something along these lines.
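
Just to show how little machinery the sampling part needs, here's a C++ toy sketch (all the names, including the reoptimize() hook, are made up for illustration - this isn't how any particular VM implements it): the program publishes which function it is currently in, a background thread samples that at intervals, and once the time budget is up the optimization effort goes to whatever was seen most often.

    // Toy of sampling-based hot-spot detection, the core of a progressively
    // optimizing JIT: the running code publishes which function it is in, a
    // sampler thread tallies what it sees at intervals, and after the budget
    // expires the optimization effort goes to the hottest function.
    // reoptimize() is a hypothetical stand-in for the real recompilation step.
    #include <atomic>
    #include <chrono>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <thread>

    std::atomic<const char*> current_function{"<none>"};

    void workload_a() {
        current_function = "workload_a";
        std::this_thread::sleep_for(std::chrono::milliseconds(3));  // pretend work
    }

    void workload_b() {
        current_function = "workload_b";
        std::this_thread::sleep_for(std::chrono::milliseconds(1));  // pretend work
    }

    void reoptimize(const std::string& fn) {  // hypothetical recompilation hook
        std::printf("spending further effort optimizing %s\n", fn.c_str());
    }

    int main() {
        std::map<std::string, int> samples;
        std::atomic<bool> done{false};

        // "Interrupt at a random time and see what code is being run."
        std::thread sampler([&] {
            while (!done) {
                samples[current_function.load()]++;
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            }
        });

        // Run the program for a fixed profiling budget.
        auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
        while (std::chrono::steady_clock::now() < deadline) {
            workload_a();
            workload_b();
        }
        done = true;
        sampler.join();

        // Concentrate optimization on whatever was sampled most often.
        std::string hottest;
        int best = 0;
        for (const auto& [fn, count] : samples)
            if (count > best) { best = count; hottest = fn; }
        reoptimize(hottest);
    }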

Fractals on the hyperbolic plane

Tuesday, August 30th, 2011

Some amazing images have been made of fractal sets on the complex plane, but I don't think I've ever seen one which uses hyperbolic space in a clever way. I'm not counting hyperbolic tessellations here because the Euclidean analogue is not a fractal at all - it's just a repeated tiling.

The hyperbolic plane is particularly interesting because it is in some sense "bigger" than the Euclidean plane - you can tile the hyperbolic plane with regular heptagons, for example. Now, you could just take a fractal defined in the complex plane and map it to the hyperbolic plane somehow, but that doesn't take advantage of any of the interesting structure that the hyperbolic plane has. The hyperbolic plane is also approximately Euclidean at small scales, so such a mapping wouldn't add anything new there either. If you use some orbit function that is more natural in the hyperbolic plane, I think something much more interesting could result. I may have to play about with this a bit.
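
For example (and this is purely something to play with, not a known construction), one could work in the Poincaré disk model, replace the "+ c" step of the usual quadratic iteration with the Möbius translation of the disk that moves 0 to c, and measure escape by hyperbolic distance from the origin, since orbits never leave the unit disk. A rough C++ sketch, with arbitrary thresholds:

    // One orbit function to experiment with, not a known fractal: work in the
    // Poincare disk model, iterate z -> T_c(z^2) where T_c is the Mobius
    // translation of the disk taking 0 to c, and use hyperbolic distance from
    // the origin as the "escape" test (points never leave the unit disk, so
    // Euclidean magnitude can't be used directly).
    #include <cmath>
    #include <complex>
    #include <cstdio>

    using cplx = std::complex<double>;

    // Mobius transformation of the unit disk moving 0 to a (hyperbolic translation).
    cplx mobius_translate(cplx z, cplx a) {
        return (z + a) / (1.0 + std::conj(a) * z);
    }

    // Hyperbolic distance from the origin in the Poincare disk: 2 artanh(|z|).
    double hyperbolic_norm(cplx z) {
        double r = std::abs(z);
        return std::log((1.0 + r) / (1.0 - r));
    }

    // Escape-time count for parameter c (|c| < 1), analogous to the Mandelbrot
    // iteration count; the bailout threshold is an arbitrary choice to tune.
    int escape_time(cplx c, int max_iter = 100, double bailout = 10.0) {
        cplx z = 0.0;
        for (int i = 0; i < max_iter; ++i) {
            z = mobius_translate(z * z, c);
            if (hyperbolic_norm(z) > bailout)
                return i;
        }
        return max_iter;
    }

    int main() {
        // Crude ASCII rendering over the disk, just to see whether any
        // interesting structure shows up.
        for (int y = -20; y <= 20; ++y) {
            for (int x = -40; x <= 40; ++x) {
                cplx c(x / 41.0, y / 21.0);
                if (std::abs(c) >= 0.999) { std::putchar(' '); continue; }
                std::putchar(escape_time(c) == 100 ? '#' : '.');
            }
            std::putchar('\n');
        }
    }

Whether that particular orbit function produces anything pretty is exactly the sort of thing that needs experimenting with - the point is just that hyperbolic translations and distances slot in naturally where complex addition and absolute value normally go.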

Similarly, one could also do fractals on the surface of a sphere (a positively curved space - the hyperbolic plane is negatively curved and the Euclidean plane has zero curvature).

The slow intelligence explosion

Monday, August 29th, 2011

Each new technology that we invent has improved our ability to create the next generation of technologies. Assuming the relationship is a simple proportional one, our progress can be modelled as \displaystyle \frac{dI}{dt} = \frac{I}{\tau} for some measure of "intelligence" or computational power I. This differential equation has solution \displaystyle I = I_0e^\frac{t}{\tau} - exponential growth, which matches closely what we see with Moore's law.

The concept of a technological singularity is a fascinating one. The idea is that eventually we will create a computer with a level of intelligence greater than that of a human being, which will quickly invent an even cleverer computer and so on. Suppose an AI of cleverness I can implement an AI of cleverness kI in time \displaystyle \frac{1}{I}. Then the equation of progress becomes \displaystyle \frac{dI}{dt} = I^2(k-1) which has the solution \displaystyle I = \frac{1}{(k-1)(t_0-t)} for some constant t_0. But that means that at time t = t_0 we get infinite computational power and infinite progress, at which point all predictions break down - it's impossible to predict anything about what will happen post-singularity from any pre-singularity time.
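
(Filling in the integration step: separating variables gives \displaystyle \int \frac{dI}{I^2} = \int (k-1)\,dt, i.e. \displaystyle -\frac{1}{I} = (k-1)(t - t_0), which rearranges to the hyperbolic growth curve above - unlike the exponential, it reaches infinity in finite time.)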

Assuming human technology reaches a singularity at some point in the future, every human being alive at that time will have a decision to make - will you augment and accelerate your brain with the ever-advancing technology, or leave it alone? Paradoxically, augmentation is actually the more conservative choice - if your subjective experience is being accelerated at the same rate as normal progress, what you experience is just the "normal" exponential increase in technology - you never actually get to experience the singularity because it's always infinitely far away in subjective time. If you leave your brain in its normal biological state, you get to experience the singularity in a finite amount of time. That seems like it's the more radical, scary and dangerous option. You might just die at some point immediately before the singularity as intelligences which make your own seem like that of an ant decide that they have better uses for the atoms of which you are made. Or maybe they'll decide to preserve you but you'll have to live in a universe with very different rules - rules which you might never be able to understand.

The other interesting thing about this decision is that if you do decide to be augmented, you can always change your mind at any point and stop further acceleration, at which point you'll become one of those for whom the singularity washes over them instead of one of those who are surfing the wave of progress. But going the other way is only possible until the singularity hits - then it's too late.

Of course, all this assumes that the singularity happens according to the mathematical prediction. But that seems rather unlikely to me. The best evidence we have so far strongly suggests that there are physical limits to how much computation you can do in finite time, which means that I will level off at some point and progress will drop to zero. Or maybe growth will ultimately end up being polynomial - this may be a better fit to our physical universe where in time t we can access O(t^3) computational elements.

To me, a particularly likely scenario seems to be that, given intelligence I, it always takes the same amount of time to reach kI - i.e. we'll just keep on progressing exponentially as we have been doing. I don't think there's any reason to suppose that putting a human-level AI to work on the next generation of technology would make it happen any faster than putting one more human on the task. Even if the "aha moments" which currently require human ingenuity are automated, there are plenty of very time-consuming steps which are required to double the level of CPU performance, such as building new fabrication facilities and machines to make the next generation of ICs. Sure, this process becomes more and more automated each time, but it also gets more and more difficult as there are more problems that need to be solved to make the things work at all.

In any case, I think there are a number of milestones still to pass before there is any chance we could get to a singularity:

  • A computer which thinks like a human brain albeit at a much slower rate.
  • A computer which is at least as smart as a human brain and at least as fast.
  • The development of an AI which can replace itself with smarter AI of its own design without human intervention.

Debugger feature

Sunday, August 28th, 2011

Here is a debugger feature that I think I would find really useful: avoid stopping at any given location more than once. This would be particularly useful when finding my way around an unfamiliar bit of code - it would step into a function that I hadn't stepped into before, and then step over it the next time it was called, for example. In fact, I often find myself trying to do exactly this by hand, just remembering which functions I've stepped into before (and often getting it wrong).

The other use case for this feature would be doing a sanity test once some new code has been written - the debugger would just stop on the new/changed lines of code so that you can make sure that they're working as expected.
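
The core bookkeeping for this is simple enough - a persistent set of locations already stopped at, which the step/breakpoint logic consults before deciding whether to break. Something like this sketch (all names invented for illustration; a real debugger would hook it into its stepping engine):

    // Sketch of the bookkeeping such a "stop only once" mode needs: a persistent
    // set of locations already stopped at, keyed here by source file and line
    // (keying by instruction address would also work, until the next recompile).
    // The debugger's step/breakpoint logic would call should_stop() and silently
    // continue past anything already seen.
    #include <fstream>
    #include <set>
    #include <string>
    #include <utility>

    class VisitedLocations {
        std::set<std::pair<std::string, int>> seen_;
        std::string path_;
    public:
        explicit VisitedLocations(std::string path) : path_(std::move(path)) {
            // Reload what we stopped at in previous debugging sessions.
            std::ifstream in(path_);
            std::string file;
            int line;
            while (in >> file >> line)
                seen_.insert({file, line});
        }

        // Returns true exactly once per (file, line); afterwards the debugger
        // should step over / continue instead of breaking there.
        bool should_stop(const std::string& file, int line) {
            return seen_.insert({file, line}).second;
        }

        ~VisitedLocations() {
            // Persist across sessions (and, ideally, across recompiles - which
            // is the hard part discussed below).
            std::ofstream out(path_);
            for (const auto& [file, line] : seen_)
                out << file << ' ' << line << '\n';
        }
    };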

One difficult part about this feature (especially for the second use case) would be keeping track of which locations have been stopped at even across recompiles (so the instruction pointer location won't necessarily be the same) and across source changes (so the source line location won't necessarily be the same either). So really to implement this properly we need to completely overhaul the entire compilation system so that it takes as input the deltas from the last compilation and performs the corresponding modifications to the output files (including both the binary and the list of which locations have been visited).

Such an overhaul would be extremely difficult, but would be useful for other reasons as well - in particular, it has the potential to make builds much faster, which shortens the edit/recompile/test sequence and makes programming much more pleasant. Also, it could integrate with editor syntax highlighting and code completion features, so that these always reflect the current state of the program.

The logic of the minimum wage

Saturday, August 27th, 2011

I've said before that I would prefer to replace the minimum wage with a guaranteed minimum income, but I've since thought of a justification for keeping the minimum wage.

In any economy, there are commodities - things which are easily available and whose prices are easy to discover, such as gold, oil or milk. One very important commodity is the hourly labour of a human who is healthy and competent at following simple instructions but otherwise unskilled. Even without a minimum wage law, there will be a natural price for such labour. It will fluctuate with time and space but will generally stay within a certain range. In some times and places it might actually go higher than the legislated minimum wage (which corresponds to a state of zero unemployment).

Having a legislated minimum wage has the effect of setting a scale factor (or a "gauge", for the physicists) of the economy as a whole - it's sort of like having an electrical circuit that isn't connected to anything and then connecting one part of it to ground. If the minimum wage is set too high then it will cause an inflationary pressure which will dissipate once everyone has a job again. If it's set too low then it will have a negligible effect anyway, since there would be very few people who would be unable to get a job for more than minimum wage. According to this theory, the minimum wage has nothing to do with making sure the poorest members of society are paid a respectable wage (which makes sense since a minimum wage is actually a regressive policy) - it's just an economic control factor.

Now, as an economic control factor, a minimum wage has a number of problems. One is that it takes a while for the inflation to eliminate unemployment after the market rate for labour goes down, so there's always some residual unemployment. Another is that people are not all equal - the labour of some people is just naturally worth less than the basic labour rate (because they are unskilled and also either unhealthy or incompetent). While this is unfortunate, essentially forbidding them from working at all (by preventing employers from hiring them at the market rate) seems to add insult to injury (not to mention creating yet another dreaded poverty trap). A third problem is that there are many other ways that governments "ground" economies - in the electrical circuit analogy they're connecting various bits of a circuit to voltage sources that correspond to "what we think this ought to be" rather than what it actually is, which seems like a good way to get short circuits.

Net neutrality

Friday, August 26th, 2011

Lots of things have been written online about net neutrality. Here's my contribution - a "devil's advocate" exploration of what a non-neutral internet would mean.

From a libertarian point of view, a non-neutral internet seems like quite a justifiable proposition. Suppose people paid for their internet connections by the gigabyte. This wouldn't be such a bad thing because it would more accurately reflect the costs to the internet service provider of providing the service. It would eliminate annoying opaque caps, and heavy users would pay more. Even as a heavy user myself, I'd be okay with that (as long as it didn't make internet access too much more expensive than it currently is). There would be a great incentive for ISPs to upgrade their networks, since it would allow their customers to pay them money at a faster rate.

Now, some services (especially video services like YouTube and NetFlix) require a lot of bandwidth, so it seems only natural that these services would like to be able to help out their users with those bandwidth costs. Perhaps if YouTube sees you used X Gb on their site last month and knows you're with an ISP that costs $Y/Gb, they might send you a cheque for $X*Y (more than paid for by the adverts you watch on their site, or the subscription fees in the case of NetFlix) so that you'll keep using their service. Good for you, good for YouTube, good for your ISP. Everyone's happy.

Next, suppose that that $X*Y is sent directly to the ISP (or indirectly via the intermediate network providers) instead of via the consumer. Great - that simplifies things even more. YouTube doesn't have to write so many cheques (just one to their network provider) and everyone's happy again. Your ISP still charges per gigabyte, but at different rates for different sites.

The problem is then that we have an unintended consequence - a new barrier to entry for new internet services. If I'm making an awesome new video sharing site I'll have to do deals with all the major ISPs or my site will be more expensive to users than YouTube, or I'll have to write a lot of bandwidth refund cheques (which would itself be expensive).

There's also the very real possibility of ISPs becoming de-facto censors - suppose my ISP is part of a media conglomerate (many are) and wishes to punish competing media conglomerates - all they have to do is raise the per gigabyte price across the board and then give discounts for any sites that don't compete with them. Once this has been accomplished technically, governments could lean on ISPs to "soft censor" other sites that they disapprove of. Obviously this is enormously bad for consumers, the internet and free speech in general.

We can't trust the market to force the ISPs to do the right thing because in many areas there is only one broadband option. Perhaps if there were as many choices for an ISP as there are choices of coffee shop in Seattle, having a few non-neutral network providers would be more palatable (non-neutral ones would probably be very cheap given their low quality of service).

As I see it there are several possible solutions:

  1. Force ISPs to charge at a flat rate, not per gigabyte (discouraging infrastructure investments).
  2. Forbid sites from offering bandwidth rebates to customers (directly or via the ISPs).
  3. Forbid ISPs from looking at where your packets are going to end up (they can only check to see what's the next hop that they need to be sent to).

I think pretty much anything else really works out as a variation on one of these three things. The third one seems to be the most practical, and the ISPs should regard it as the penalty they pay for there being insufficient competition among them.

Legalize all drugs

Thursday, August 25th, 2011

Some people who know me in person might be surprised to learn that I think drugs should be legalized. After all, I'm not a user of illegal drugs or a die-hard libertarian (though I am finding that I have increasing sympathies for some libertarian points of view as I get older). In fact, this is something that I've changed my mind on in the past - in secondary school I thought they should remain illegal because it made it easier for impressionable teenagers (like myself) to say "no" to them (however, I did somehow manage to avoid trying cigarettes in secondary school).

Some things I've learnt since then which have contributed to me changing my mind:

  • To get people to stop taking drugs, treating addiction as a medical condition rather than a crime is much more effective.
  • To get people to avoid taking them in the first place, making sure everybody is informed about their effects would be much more effective.
  • The cost in terms of police, courts and prisons is much greater than the costs to society in terms of loss of productivity and medical costs related to drug usage and addiction.
  • Making drugs illegal creates an enormous black market, leading to a great increase in crime and a flow of wealth to unscrupulous individuals (drug kingpins certainly don't want drugs to be legalized - it would destroy their monopolies.)
  • Stories of innocent people being killed in mistaken "no-knock" drug raids.
  • The existence of illegal drugs makes it very easy for dishonest police officers to frame an innocent person - just plant some illegal drugs on the person you want to imprison (or their house/car/belonging).
  • It is now de-facto illegal to drive across the US with large quantities of cash - it can be confiscated by police if found during an unrelated stop, and forfeited even if nobody is convicted of any crime.
  • The fact that the people convicted of drug crimes are overwhelmingly poor, causing the drug war to be a massive poverty trap.
  • The horrible racist and protectionist reasons marijuana (specifically) was originally prohibited.
  • The general principle that sane adults should be solely responsible for what they put into their bodies.
  • The fact that banning some difficult-to-obtain but mostly harmless drugs has created a market for easy-to-make but much more harmful drugs. Legal drugs are likely to be safer for drug users in other ways too - illegal drugs are sometimes contaminated and sometimes of unknown purity.
  • The fact that Portugal has decriminalized drugs with great success.
  • Drug laws inconvenience law-abiding people too - you can't stock up on decongestant if your whole family has colds, since (to discourage methamphetamine production) you're only allowed to buy a small amount in any given period of time. Also, shops are forbidden from stocking such "drug paraphernalia" as tiny plastic bags.
  • It seems that cannabis (in particular) is actually a very useful medicine for some conditions (such as reducing the side-effects of chemotherapy).

Since only a few of these are specific to marijuana, I'm in favor of legalizing all illegal drugs and taxing them at a rate which neutralizes as closely as possible the harm that they cause to society (thus avoiding a perverse incentive for governments to either encourage or prohibit drug use). However, there are some drugs which are so harmful that taking them should be evidence that a person is not sufficiently sane to make such decisions - people taking those drugs should be committed, not imprisoned.

If it's legal to sell harmful drugs then it wouldn't make much sense for it to be illegal to sell beneficial drugs without a prescription. So, along with legalizing currently illegal drugs I would also get rid of prescription requirements (although having a prescription from a doctor for a potentially harmful drug would still be a very good idea just as a matter of common sense). Having FDA approval for a drug would no longer be necessary in order for doctors to prescribe it or for pharmacies to sell it, but I imagine the FDA would continue to exist as a voluntary safety testing and labeling scheme, and most sensible people would avoid taking drugs which had not been declared as safe (or whose side-effects outweigh their benefits) except when circumstances warrant it (such as it being the last hope of curing an otherwise incurable disease). There should be some kind of public awareness campaign so that people know what mark to look for when they are buying such things. To avoid the safer drugs being more expensive, the costs of FDA labeling should be borne publicly.

Such a system would also be much more sensible for small scale food manufacturers.

Correcting injustices

Wednesday, August 24th, 2011

Currently the US legal system (and certainly those of many other places as well) has a system of plea bargaining - if you are charged with a crime, the prosecutor can offer to reduce the charge to a lesser one in return for a guilty plea.

I think this is a huge infringement of rights and is the fundamental cause of a great many miscarriages of justice. I would like to see the practice ended. While it does reduce the burden on the courts, this should not be the primary concern of the justice system. The one and only concern of the justice system should be uncovering the truth and protecting innocent people.

If we didn't care about protecting innocent people, the courts would not be needed at all - the police would just be able to arrest anyone they liked and lock them up for as long as they liked. Obviously that would be awful. The courts are the check on this system - making sure that only people who have actually committed crimes end up in prison.

Overworked, underpaid public defenders will often advise innocent people to plead guilty because they would have little chance of being found innocent by the court. This is an obviously disastrous state of affairs.

Plea bargains wouldn't be so bad if public defenders were actually adequate, but public defenders secure acquittals at a much lower rate than more costly lawyers. This is also obviously disastrous - there's essentially one justice system for the rich and one for the poor, and the one for the rich is much more lenient and forgiving. There's a simple way to fix this (although of course it does require spending enough money to get a functional system) - pick a sampling of publicly defended defendants at random and give them a highly-paid lawyer. If the public defenders do worse, increase the amount of money spent on the public defense system until the difference is no longer statistically significant. The randomness is important because there may be statistically significant differences between the crime rates of wealthy and impoverished people (in fact, this seems highly likely - poverty causes desperation and desperate people take desperate measures).

A third cause of massive injustice seems to me to be how long it takes to get a trial - some people languish in prison for months and even years without ever having been found guilty (the more well off can usually obtain bail). I think all trials should be set for no more than 48 hours after arrest (maybe a week tops if there are extenuating circumstances, like a key witness in a murder case being elusive). The average should be no more than 24 hours. If you don't have the evidence to prosecute someone after that time you shouldn't have arrested them in the first place. The entire bail system should be made unnecessary and scrapped.

Injustices (especially injustices like these that predominantly affect the less well-off portion of society) have the unfortunate side effect of increasing inequality - someone who is imprisoned awaiting trial can't earn money to improve his situation, and being convicted of a crime substantially reduces one's future prospects.

My proposed reforms would be expensive, but I think not unaffordable to a rich country like the US. I think eventually they will happen, as these injustices become less tolerated by society.

Secrets and leaks

Tuesday, August 23rd, 2011

The idea that states should have the capability to keep secrets from their citizens often seems to be taken for granted. Clearly there are cases where states should be able to keep secrets (it is doubtful, for instance, that the Allied powers would have won the Second World War if the cracking of the German Enigma code had not been kept secret). But lately I've been of the opinion that this privilege should be extremely limited, and should only be used in the most extreme of circumstances.

From Wikileaks, we have learned that states are keeping things secret even when it is not in the interests of citizens or justice for those things to be secret. Sometimes these secrets are kept to protect special interests, or to avoid embarrassment to those in power. Keeping such things secret is antithetical to informed democracy.

I would like to see a system of checks and balances to avoid abuse of state secrets. Wikileaks forms (informally) one such check - as long as they redact things that really do need to be kept secret (for example, information that would reveal the identities of undercover operatives). As far as I have been able to tell, they have done this. However, it does require individual leakers to put their careers (and sometimes even their very freedom) in jeopardy and can therefore only go so far alone. A good additional balance would be to make it illegal for the government to withhold information from the public without good reason. Then if something is leaked which reveals that secrets have been kept unjustifiably, the secret-keepers could be prosecuted on that basis.

An alternative (or complementary) approach would be for a (trusted) third party to hold on to all government information, and to publicly release all the information that doesn't need to be kept secret. Determining which is which isn't free, so there would need to be some kind of penalty for revealing information which endangers operatives. Then, to prevent this organization from just redacting everything there would need to be an economic incentive to release as much (non-endangering) information as possible. Then there would have to be some process for keeping these rates properly tuned to avoid too much information being withheld or released. This tuning process would have to be done with public input (to make sure the balance doesn't swing too far one way or the other) but can't be done by the normal government (or there would be too much temptation to just turn the secrecy level way up). So it essentially needs to be made a whole new branch of government, with that responsibility and no other. Tricky.

Tinkering with the defragmentation API

Monday, August 22nd, 2011

I was looking at the Windows file defragmentation APIs recently and it made me want to write something that uses them. Not a defragmentation tool necessarily (there are tons of those, and with the advent of SSDs they're becoming irrelevant anyway). But maybe something for visualizing how directories, files and other structures are laid out on disk.

At one pixel per sector, this would make for a very big bitmap (~25000 pixels square for the 300Gb drive I use for data on my laptop and ~75000 pixels square for the largest 3Tb drives) so I'd have to make a way to zoom in and out of it. And how should I arrange the sectors within the bitmap? Row major is the obvious way, but a Hilbert space-filling curve might make for nicer results.

For colouring, I was thinking of using the file extension as a hint (e.g. red for images, green for audio, blue for video, grey for binary code, yellow for text, black for free space) and maybe making the first sector of a file brighter or darker so that you can see if it's one big file or lots of small ones (though weighting the sector by how much of it is used would have a similar effect). I also want the program to show which sector of which file is under the mouse pointer.
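
As a starting point, here's roughly what asking NTFS where a single file's clusters live looks like, using FSCTL_GET_RETRIEVAL_POINTERS (a sketch only - a real tool would loop on ERROR_MORE_DATA for heavily fragmented files, walk the whole volume rather than one file, and use FSCTL_GET_VOLUME_BITMAP for the free-space colouring):

    // Sketch of the first step such a visualizer needs: asking the filesystem
    // where a file's clusters actually live, via FSCTL_GET_RETRIEVAL_POINTERS.
    // Each returned extent is a run of contiguous clusters (VCN = offset within
    // the file, LCN = location on the volume); those LCNs are what would get
    // mapped to pixels (row major or along a Hilbert curve) and coloured by
    // extension. Error handling and buffer management are minimal for brevity.
    #include <windows.h>
    #include <winioctl.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::printf("usage: extents <file>\n");
            return 1;
        }
        HANDLE file = CreateFileA(argv[1], FILE_READ_ATTRIBUTES,
            FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr, OPEN_EXISTING,
            FILE_FLAG_BACKUP_SEMANTICS, nullptr);  // backup semantics so directories work too
        if (file == INVALID_HANDLE_VALUE) {
            std::printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        STARTING_VCN_INPUT_BUFFER input = {};
        input.StartingVcn.QuadPart = 0;

        // Room for a few hundred extents; a real tool would loop on
        // ERROR_MORE_DATA, resuming from the last NextVcn returned.
        alignas(RETRIEVAL_POINTERS_BUFFER) char buffer[64 * 1024];
        DWORD bytes = 0;
        if (!DeviceIoControl(file, FSCTL_GET_RETRIEVAL_POINTERS,
                &input, sizeof(input), buffer, sizeof(buffer), &bytes, nullptr)) {
            std::printf("FSCTL_GET_RETRIEVAL_POINTERS failed: %lu\n", GetLastError());
            CloseHandle(file);
            return 1;
        }

        auto* retrieval = reinterpret_cast<RETRIEVAL_POINTERS_BUFFER*>(buffer);
        LONGLONG vcn = retrieval->StartingVcn.QuadPart;
        for (DWORD i = 0; i < retrieval->ExtentCount; ++i) {
            LONGLONG next = retrieval->Extents[i].NextVcn.QuadPart;
            LONGLONG lcn = retrieval->Extents[i].Lcn.QuadPart;  // -1 means a sparse/compressed hole
            std::printf("clusters %lld..%lld of the file are at LCN %lld\n",
                        vcn, next - 1, lcn);
            vcn = next;
        }
        CloseHandle(file);
    }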