Archive for September, 2011

Rewinding DVDs

Saturday, September 10th, 2011

Some DVD players now remember where you stopped playing a DVD, even if the disk is removed. When you put the disk back in, it reads the disk identifier, looks in its memory to see if it has a previous position for that disk, and (if it does) starts playing from that point.

This is all very well, except for the situation where you have a rental house containing such a DVD player and a selection of DVDs to watch - at the end of the rental period, the remembered positions of the DVDs might not all be at the start, leading to somebody putting on a movie and it unexpectedly starting halfway through.

What is needed to solve this problem is some kind of mechanism... for rewinding DVDs!

(Perhaps just a button on the front that says "rewind all" which causes all the remembered positions to be forgotten. Or do what our DVD player does when it starts somewhere other than the beginning - display a "press such-and-such button to start from the beginning" message.)
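To make the idea concrete, the player's resume memory is essentially just a table keyed by disc identifier, and "rewind all" just empties it. A minimal sketch (the disc identifier and positions here are made up for illustration):

    # Minimal sketch of a player's resume memory: a map from disc identifier to
    # playback position, plus the proposed "rewind all" button. The identifiers
    # and positions below are invented for illustration.

    resume_positions = {}    # disc id -> position in seconds

    def on_disc_inserted(disc_id):
        """Start from the remembered position if we have one, else from the beginning."""
        return resume_positions.get(disc_id, 0)

    def on_disc_removed(disc_id, position):
        """Remember where playback stopped, even though the disc is leaving the tray."""
        resume_positions[disc_id] = position

    def rewind_all():
        """The proposed front-panel button: forget every remembered position."""
        resume_positions.clear()

    on_disc_removed("MOVIE_123", 3120)     # stopped 52 minutes in
    print(on_disc_inserted("MOVIE_123"))   # 3120 - starts partway through
    rewind_all()
    print(on_disc_inserted("MOVIE_123"))   # 0 - back to the start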

Fortunately the problem is rather less severe than the problem of rewinding VHS tapes, so we won't have to remember to rewind rental DVDs.

What to do about software patents

Friday, September 9th, 2011

Patents are an enormous problem for the software industry. Software development organizations can never be sure that the software they develop doesn't infringe on somebody else's patent, so the only defence against being sued for patent infringement is to build up an arsenal of your own patents on methods as trivial as you can get away with, in the hope that if someone sues you, you can counter-sue them for infringing on your patents and end up cross-licensing. This doesn't protect against patent trolls, though (who produce no products and therefore don't need to worry about infringing someone else's patents). It also favours large software development organizations over smaller ones - patents are very expensive to obtain.

The usual answer to this is that we should get rid of software patents altogether. Perhaps this will happen, if enough powerful companies are burnt by patent trolls. On the other hand, perhaps some software patents are actually useful for progress - perhaps there are areas of software development where patents make much more sense than copyright, where simply examining the end product would allow somebody to copy (without infringing copyright) the part that took hard work to discover (i.e. an idea rather than the implementation). Perhaps there are ideas that could not have been had without the investment that the promise of patentability would bring. Perhaps the additional secrecy that would have to be put in place without patents would cripple the industry.

Here's a proposal for a different sort of patent reform, which aims to solve these problems:

First, get rid of all the patent examiners. They're doing a lousy job anyway, granting all sorts of patents on things which have been done before and which are perfectly obvious to a person skilled in the art. Many of these patents don't stand up in court, but it's still expensive to defend against attacks from bogus patents. Instead, the patenting process should be a simple rubber stamp - anyone can send in a description of their idea (preferably not written in patent-ese), have it published on the USPTO (or equivalent) website and hey presto, that concept is now patented.

Of course, that doesn't get rid of the problem of bogus patents - it just moves the workload from the patent office to the courts, which seems like it would make things worse. So the next stage is to set up a special patent court. If A wants to sue B for infringing on A's patent, it all gets sorted out in this court. The court needs to find a person skilled in the art (PSITA) who is not affiliated with A or B, and also to determine whether B referred to A's product or patent. If the patent is bogus, or if B is determined to have invented the same thing independently, then the patent is thrown out and A has to pay the costs of the entire proceeding. If the patent stands up and B is found to have based their product on A's product or patent, then B must suffer the usual consequences, including having to pay for the proceedings. So (assuming the system works justly) all B needs to do to stay safe is not read unexpired patents and not refer to competitors' products - there should be no cost to them for accidental infringement or bogus patents (which really amount to the same thing - if it's possible to accidentally infringe on a patent, it's a bogus patent by definition).

The third element we need is a way for B to be sure that the product they want to release doesn't infringe on any patents (or to find out which patents it does infringe on). In practice it isn't possible to avoid seeing all competing products, so this step is necessary. The costs of this step must be borne by B, since otherwise B could cost A a lot of money by frivolously requesting repeated patent checks. The benefit is that (assuming the report comes back clean) there is then no risk of B losing a patent court battle. This sounds like it would be a very expensive exercise, since it's essentially the cost of a patent trial multiplied by the number of active patents - however, a properly indexed patent database should make it reasonable.

One other element I'd like to see is a "use it or lose it" rule for patents, like the one that applies to trademarks. If you hold a patent, become aware of someone infringing that patent, and choose not to sue them immediately, you lose that patent altogether. This avoids problems with "submarine" patents.

Energy saving idea

Thursday, September 8th, 2011

One of the most inefficient things we do with energy is heat up water with it only to let most of that water (and most of the energy) go right down the drain when we take showers (baths are a little better because more of the energy goes into heating the house in the winter, but they do use more water).

So I think that if we were serious about wanting to save energy one thing we could do is try to encourage shorter showers by making people aware of how much energy they use. What I imagine is a little display that shows you how much energy you've used since the start of your shower (if you like you could factor in the cost of the water and sewage as well as the energy used to heat the water). For maximum effectiveness, I think the value displayed by the meter should be in units of local currency, and correspond to the amount you'd spend taking a shower of that length every day for a year - that way it feels like you're spending money really fast (of course, you'd really only be spending it at 1/365th of that rate so it's a bit of a psychological hack/cheat).
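As a rough sketch of the arithmetic the meter would do (the flow rate, temperature rise and energy price below are assumed values for the sake of the example, not measurements):

    # Rough sketch of the meter's calculation. Flow rate, temperature rise and
    # energy price are assumptions for illustration only.

    SPECIFIC_HEAT = 4186.0      # J per kg per degree C, for water
    FLOW_KG_PER_S = 0.13        # roughly an 8 litre/minute shower head (assumed)
    TEMP_RISE_C = 30.0          # cold inlet to shower temperature (assumed)
    PRICE_PER_KWH = 0.12        # local energy price in local currency (assumed)

    def displayed_value(seconds_elapsed):
        """Cost of taking a shower this long every day for a year, in local currency."""
        joules = FLOW_KG_PER_S * seconds_elapsed * SPECIFIC_HEAT * TEMP_RISE_C
        kwh = joules / 3.6e6
        cost_today = kwh * PRICE_PER_KWH
        return cost_today * 365     # the psychological hack: show a whole year's worth

    print(f"10 minute shower: {displayed_value(600):.2f} per year")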

I think many people would appreciate such a device for the money it would save them, though I suppose some people might believe that long, guilt-free showers are worth that extra money.

The mathematical universe

Wednesday, September 7th, 2011

Max Tegmark's Mathematical Universe Hypothesis is extremely compelling - it's the only idea I've ever seen that comes close to the holy grail of theories of everything: "Of course! How could it possibly be any other way?" And yet it seems unsatisfying in a way, perhaps because its conclusion is (when you think about it for a bit) completely obvious. Of course the universe is a mathematical structure containing self-aware substructures - what else could it be?

Also, if it is true then it leaves a lot of questions unanswered:

  • What is the mathematical structure that describes our universe?
  • What is the mathematical definition of a self-aware substructure (SAS)?
  • If any mathematical (or even computable, or finite) structure that contains SASs exists in the same way for those SASs as our universe exists for us, why is our universe as simple and explicable as it is? (Tegmark calls this the "measure problem".)

My feeling is (as I've said on this blog before) that it's extremely likely that our universe is in some sense the simplest possible universe that supports SASs (i.e. the "measure" punishes complexity to the maximum possible extent). I have no a priori justification for this, though - it just seems to me to be the most likely explanation for the third point above. While it may seem unnecessary to have three generations of leptons and quarks, I strongly believe that when we have a more complete theory of physics we'll discover that they are completely indispensable - a universe like ours but with only a single generation of these particles would either be more complex (mathematically speaking) or make it impossible for SASs to exist. I suppose it is possible, however, that when we do find the theory of everything (and/or a theory of SASs) we'll be able to think of a simpler structure in which SASs are possible.

The other thing about MUH is that I'm not convinced that it really does make any predictions at all, because it seems like whatever we discover about our universe with respect to the other possible universes in the Level IV multiverse can be made consistent with MUH by an appropriate choice of measure.

Issues as political proxies

Tuesday, September 6th, 2011

Suppose you own a large successful business which makes money by telling customers things they want to hear - reassuring stories, comforting platitudes and advice and guidance about how to live their lives. Suppose also that, for tax reasons, you are not allowed to use your influence over your customers to push them towards voting for one particular candidate over another, and you're also not allowed to donate any of the company's profits to political parties or candidates.

However, you'd still prefer to have one candidate elected over another because your preferred candidate might lower your taxes or give you more freedom to run your business the way you want to run it, or maybe just because he's a good customer. How could you covertly support that candidate?

One thing you could do is to pick a couple of social issues which aren't fundamentally a big deal to you one way or another, but which differentiate your preferred candidate from their opposition and which the opposition is unlikely to change their minds on (perhaps because they are objectively correct in their position). Then you can use your platform to tell your audience that your preferred position on said social issues is vitally important, and that deciding the wrong way on them will lead the country to ruin. You don't even need to mention the names of the political candidates or the upcoming election to your audience at all - they can figure out for themselves what they need to do.

For this reason I think we need to avoid making "tax deductions for political neutrality" deals - it's too easy for the organizations in question to be covertly politically non-neutral, and the tricks they use put pressure on candidates to move away from objectively correct positions on these kinds of issues.

CPU usage visualization

Monday, September 5th, 2011

I saw this interesting visualization of Atari 2600 binaries. It makes me want to do something similar, but arrange the instructions according to the position of the raster beam when each instruction is executed, rather than its position in the ROM. The 2600 is a unique platform in that it lacks a frame buffer, so to produce coherent images the code must be synchronized with the raster beam. If we make an image that has 4 horizontal pixels per color carrier cycle, that gives a 912-pixel-wide image. There are 76 CPU clock cycles per raster line and instructions take 2-7 cycles, giving us 24-84 horizontal pixels per instruction, which (with a bit of squishing) ought to be enough. A raster line would have to correspond to a line of text, so we probably wouldn't want to show a full frame (field). However, a typical game will only have a few different "line sequences", so a full frame image would be very repetitive anyway.
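The layout arithmetic is simple enough to sketch, assuming we already have a cycle-accurate trace of the instructions executed on one raster line (the trace in the example below is invented):

    # Sketch of the layout: map each instruction on a raster line to the horizontal
    # pixel span it would occupy. Assumes a cycle-accurate per-line trace is already
    # available; the example trace is made up.

    CYCLES_PER_LINE = 76              # CPU cycles per raster line on the 2600
    PIXELS_PER_CYCLE = 12             # 3 color clocks per CPU cycle * 4 pixels per color clock
    LINE_WIDTH = CYCLES_PER_LINE * PIXELS_PER_CYCLE    # 912 pixels

    def layout_line(trace):
        """Return (mnemonic, left_pixel, width_pixels) for each instruction in the trace."""
        spans = []
        cycle = 0
        for mnemonic, cycles in trace:
            left = cycle * PIXELS_PER_CYCLE
            width = cycles * PIXELS_PER_CYCLE    # 24-84 pixels for 2-7 cycle instructions
            spans.append((mnemonic, left, width))
            cycle += cycles
        assert cycle <= CYCLES_PER_LINE, "trace overruns the raster line"
        return spans

    # A made-up fragment of a scanline kernel:
    print(layout_line([("STA WSYNC", 3), ("LDA (ptr),Y", 5), ("STA GRP0", 3)]))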

How do we get there from here?

Sunday, September 4th, 2011

So we have some ideas about how we want the world to look - the next question is "How do we get there from here?" It seems to be very difficult to get anything changed, at least in US politics, because there are so many entrenched interests, but here's the best idea I've had about it so far.

We use this fantastic information transfer medium of the internet to get as many people as possible interested, involved and well informed. We get these people to vote in on-line elections (which are, at least to begin with, unofficial, non-binding and informal, but are as secure as possible and only open to registered, authenticated voters). We then try to persuade politicians to take these polls into account (as well as what they suppose the opinions of the rest of the electorate to be) when making their decisions. Participating in this system costs the politician nothing at first (since when they disagree with what the poll says they can say "oh, that's just the opinion of a small minority of people, most people have the opposite opinion"), but as more and more people participate in these polls they eventually become impossible to ignore ("it's the will of the people"). When politicians vote against the will of the people, we call them out on it and hopefully get them voted out of office at the next election. Once the system has sufficient momentum, we start to field candidates who run on a platform of voting according to the results of these polls rather than their own opinions. Then eventually we can transition away from having elected politicians at all and just have a system of direct delegated democracy, so that the people can vote (directly or by proxy) on every piece of proposed legislation. This is much less susceptible to corruption by corporations, because decisions are not made by a wealthy minority.

In the meantime, we have to do something about the media. It's no good having a democracy if people are voting against their own interests and blindly following the instructions of corporate mouthpieces. I think this is more of a US problem than a UK one - the BBC is much more impartial than private media can be. Here in the US there are massive numbers of people who get all their information from Fox News and conservative talk radio, which are really just fronts for organizations like Koch Industries. This is how we get public support for absurd wars and other policies that are disastrous for almost all of the people who are voting for them. The usual method we use as a society for determining which side of an argument is true is the judicial system, so I'm wondering if we can somehow make news organizations liable for things they present as news that are not true. Don't make the penalty too big, because sometimes mistakes happen, but make it large enough that the likes of Fox can't continue their current scam. And if that puts too much power in the hands of judges, then we'll need some entirely new system of checks and balances to prevent abuse there. I guess to avoid stepping on the first amendment there would have to be some kind of voluntary labeling scheme for news organizations, and we would have to learn to take news with rather more salt when it comes from sources which don't stand behind what they say by participating in this scheme.

We still need to keep the economy growing as fast as possible. Unlike the conservatives, I don't think the way to do this is reducing taxes on the rich and reducing services for the poor. I think we need more small businesses, and that there are a lot of impediments preventing people from setting up or taking over small businesses. These impediments need to be identified and removed. More small businesses means more competition for large corporations. In the US, creating a functional public healthcare system would be a great benefit for small businesses (companies in the US can't attract the best employees without providing health insurance plans, which is much more expensive for small companies than for big ones).

Hacker show and tell

Saturday, September 3rd, 2011

As CodeSourcery employees work from home, we get together for a week-long meeting once a year in order to better understand each other. CodeSourcery was bought by Mentor Graphics last year, but the tradition continues for the division.

One of my favorite parts of the annual meeting is getting together with extremely talented people and having fascinating discussions about all sorts of technical things. Sometimes these are work related and sometimes we show each other our personal programming projects (the same sorts of things I talk about in this blog).

I often think it would be great to be able to have these kinds of in-person discussions more often than once a year. I've thought about setting up some kind of regular meetup in Seattle for just this kind of thing. I found Hack and Tell which seems to be exactly what I'm looking for except for the fact that it's in New York, so perhaps I should try to set up a Seattle chapter.

I also found a few things in Seattle which are similar but, for one reason or another, weren't quite what I was looking for:

  • Dorkbot seems to be more "electronic art" centric (although I haven't been to one of their meetings).
  • Ignite Seattle seems to have a more formal format - 5 minute talks.
  • Saturday House and Meet at the Pig seem to be inactive.
  • Seattle Wireless HackNight at Metrix Create:Space - I actually went to this for a couple of months earlier this year. It's a very cool space and the people there seemed interesting, but the focus seemed to be more on having a set place and time to do some hacking and less on showing off personal projects to each other. I stopped going after a while, partly because it didn't seem to be quite what I was looking for but mostly because it was in the evening, which is problematic when you have two children who wake you up at 7am sharp (at the latest) every day - if I don't go to bed at a sensible time I tend to be a bit of a zombie the next day.

Euclid's orchard improved image

Friday, September 2nd, 2011

I wasn't very impressed with the Euclid's Orchard perspective picture on Wikipedia (grey background, limited number of "trees", no anti-aliasing) so I made my own:
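For anyone wanting to reproduce something similar, the underlying geometry is straightforward. This is just a sketch of the projection (not a full anti-aliased renderer): trees of unit height stand at the lattice points (p, q) with gcd(p, q) = 1, and viewed from the origin towards the plane x + y = 1, the tree at (p, q) appears at horizontal position p/(p + q) with apparent height 1/(p + q).

    # Sketch of the projection behind the picture (not a full anti-aliased renderer).
    # A unit-height tree stands at each lattice point (p, q) with gcd(p, q) = 1; seen
    # from the origin, projected onto the plane x + y = 1, it lands at horizontal
    # position p / (p + q) with apparent height 1 / (p + q).

    from math import gcd

    def orchard(max_coord):
        """Projected (position, height) pairs for trees with 1 <= p, q <= max_coord."""
        trees = set()
        for p in range(1, max_coord + 1):
            for q in range(1, max_coord + 1):
                if gcd(p, q) == 1:
                    trees.add((p / (p + q), 1 / (p + q)))
        return sorted(trees)

    for position, height in orchard(4):
        print(f"x = {position:.3f}  height = {height:.3f}")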

Postel's law revisited

Thursday, September 1st, 2011

There is a general principle in computing called the Robustness principle or Postel's law, which says that you should be conservative in what you send but liberal in what you accept.

This seems like a no-brainer, but adhering to the principle does have some disadvantages. Being liberal in what you accept introduces extra complexity into software. More problematically, being liberal in what you accept allows others to be less conservative in what they send. Nowhere is this more noticeable than HTML. The first web browsers were extremely liberal in what they accepted - allowing HTML that is broken in all sorts of different ways. Many people wrote HTML by hand, tested it on just one browser and then uploaded it. Other browsers would often mis-render the broken HTML, leading people to put little buttons with slogans like "best viewed in Netscape 4" on their pages. As HTML evolved, continuing to accept all this malformed HTML while adding new features became a big headache for browser developers.

Lately, best practice involves marking your HTML in such a way that browsers will only accept it if it's correct - that way you find out about any mistakes quickly and can fix them early.

In general, I think that new protocols should now be designed to have a very simple canonical form and that only data that adheres to this form should be accepted - other data should be rejected as early as possible.

Input that comes directly from users can still be interpreted liberally, simply because that makes computers easier to use, but such data should be transformed into canonical form as early as possible.
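As a small illustration of the principle, here's a hypothetical protocol field - an unsigned decimal integer whose canonical form has no sign, no leading zeros and no surrounding whitespace - with a strict parser for data arriving from other software and a liberal front end for direct user input (the field itself is invented for the example):

    # A hypothetical protocol field: an unsigned decimal integer whose canonical form
    # has no sign, no leading zeros and no surrounding whitespace. Invented example.

    import re

    CANONICAL = re.compile(r"0|[1-9][0-9]*")

    def parse_strict(field):
        """Wire path: accept only the canonical form, reject anything else early."""
        if not CANONICAL.fullmatch(field):
            raise ValueError(f"not in canonical form: {field!r}")
        return int(field)

    def canonicalize_user_input(text):
        """Human path: interpret liberally, then convert to canonical form immediately."""
        value = int(text.strip())       # tolerates "  +007  " and the like
        if value < 0:
            raise ValueError("negative values not allowed")
        return str(value)               # canonical form, e.g. "7"

    print(parse_strict("42"))                       # -> 42
    print(canonicalize_user_input("  +007 "))       # -> "7", now safe to send on
    # parse_strict("007")                           # would raise: not canonical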