Archive for the ‘maths’ Category

Fractals on the hyperbolic plane

Tuesday, August 30th, 2011

Some amazing images have been made of fractal sets on the complex plane, but I don't think I've ever seen one which uses hyperbolic space in a clever way. I'm not counting hyperbolic tessellations here because the Euclidean analogue is not a fractal at all - it's just a repeated tiling.

The hyperbolic plane is particularly interesting because it is in some sense "bigger" than the Euclidean plane - you can tile the hyperbolic plane with regular heptagons, for example. Now, you could just take a fractal defined in the complex plane and map it to the hyperbolic plane somehow, but that doesn't take advantage of any of the interesting structure that the hyperbolic plane has. Since the hyperbolic plane is locally flat, the fine detail of such a mapped fractal would look just the same as before, so nothing new would be added. If you use some orbit function that is more natural in the hyperbolic plane, I think something much more interesting could result. I may have to play about with this a bit - a first experiment might look something like the sketch below.
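Here is a minimal sketch of the kind of experiment I have in mind, in the Poincaré disk model. Everything specific here is a guess of mine rather than an established definition: the orbit function (a conformal squaring followed by a Möbius translation, which is a hyperbolic isometry) and the escape test (hyperbolic distance from the origin exceeding an arbitrary threshold).

```cpp
#include <complex>
#include <cmath>

// Hyperbolic distance from the origin to z in the Poincaré disk model.
double hyperbolicDistance(std::complex<double> z)
{
    double r = std::abs(z);
    return std::log((1 + r) / (1 - r));  // = 2*atanh(r)
}

// A made-up orbit function: squaring (conformal, so meaningful in the disk
// model) followed by the Möbius translation taking 0 to c. For |c| < 1 this
// maps the open unit disk to itself, so orbits never leave the model.
std::complex<double> orbit(std::complex<double> z, std::complex<double> c)
{
    std::complex<double> z2 = z * z;
    return (z2 + c) / (std::conj(c) * z2 + 1.0);
}

// Escape-time count: iterations before the orbit wanders "far away" in the
// hyperbolic metric. The threshold of 10 is an arbitrary placeholder.
int escapeCount(std::complex<double> c, int maxIterations)
{
    std::complex<double> z = 0;
    for (int i = 0; i < maxIterations; ++i) {
        z = orbit(z, c);
        if (hyperbolicDistance(z) > 10)
            return i;
    }
    return maxIterations;
}
```

Colouring each point c of the disk by escapeCount(c) would then be the hyperbolic analogue of the familiar escape-time renderings.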

Similarly, one could also do fractals on the surface of a sphere (a positively curved space - the hyperbolic plane is negatively curved and the Euclidean plane has zero curvature).

The logic of the minimum wage

Saturday, August 27th, 2011

I've said before that I would prefer to replace the minimum wage with a guaranteed minimum income, but I've since thought of a justification for keeping the minimum wage.

In any economy there are commodities - things which are easily available and whose prices are easy to discover, such as gold, oil or milk. One very important commodity is the hourly labour of a human who is healthy and competent at following simple instructions but otherwise unskilled. Even without a minimum wage law, there will be a natural price for such labour. It will fluctuate over time and from place to place but will generally stay within a certain range. In some times and places it might actually go higher than the legislated minimum wage (a situation which corresponds to zero unemployment, since the wage floor then prices nobody out of a job).

Having a legislated minimum wage has the effect of setting a scale factor (or a "gauge", for the physicists) of the economy as a whole - it's sort of like having an electrical circuit that isn't connected to anything and then connecting one part of it to ground. If the minimum wage is set too high then it will cause an inflationary pressure which will dissipate once everyone has a job again. If it's set too low then it will have a negligible effect anyway, since there would be very few people who would be unable to get a job for more than minimum wage. According to this theory, the minimum wage has nothing to do with making sure the poorest members of society are paid a respectable wage (which makes sense since a minimum wage is actually a regressive policy) - it's just an economic control factor.

Now, as an economic control factor, a minimum wage has a number of problems. One is that it takes a while for inflation to eliminate unemployment after the market rate for labour goes down, so there's always some residual unemployment. Another is that people are not all equal - the labour of some people is just naturally worth less than the basic labour rate (because they are unskilled and also either unhealthy or incompetent). While this is unfortunate, essentially forbidding them from working at all (by preventing employers from hiring them at the market rate) seems to add insult to injury (not to mention creating yet another dreaded poverty trap). A third problem is that there are many other ways that governments "ground" economies - in the electrical circuit analogy, they're connecting various bits of the circuit to voltage sources that correspond to "what we think this ought to be" rather than what it actually is, which seems like a good way to get short circuits.

Net neutrality

Friday, August 26th, 2011

Lots of things have been written online about net neutrality. Here's my contribution - a "devil's advocate" exploration of what a non-neutral internet would mean.

From a libertarian point of view, a non-neutral internet seems like quite a justifiable proposition. Suppose people paid for their internet connections by the gigabyte. This wouldn't be such a bad thing because it would more accurately reflect the costs to the internet service provider of providing the service. It would eliminate annoying opaque caps, and heavy users would pay more. Even as a heavy user myself, I'd be okay with that (as long as it didn't make internet access too much more expensive than it currently is). There would be a great incentive for ISPs to upgrade their networks, since it would allow their customers to pay them money at a faster rate.

Now, some services (especially video services like YouTube and NetFlix) require a lot of bandwidth, so it seems only natural that these services would like to help out their users with their bandwidth costs. Perhaps if YouTube sees you used X GB on their site last month and knows you're with an ISP that charges $Y/GB, they might send you a cheque for $X*Y (more than paid for by the adverts you watch on their site, or the subscription fees in the case of NetFlix) so that you'll keep using their service. Good for you, good for YouTube, good for your ISP. Everyone's happy.

Next, suppose that the $X*Y is sent directly to the ISP (or indirectly via the intermediate network providers) instead of via the consumer. Great - that simplifies things even more. YouTube doesn't have to write so many cheques (just one to their network provider) and everyone's happy again. Your ISP still charges per gigabyte, but at different rates for different sites.

The problem is then that we have an unintended consequence - a new barrier to entry for new internet services. If I'm making an awesome new video sharing site I'll have to do deals with all the major ISPs or my site will be more expensive to users than YouTube, or I'll have to write a lot of bandwidth refund cheques (which would itself be expensive).

There's also the very real possibility of ISPs becoming de-facto censors - suppose my ISP is part of a media conglomerate (many are) and wishes to punish competing media conglomerates - all they have to do is raise the per gigabyte price across the board and then give discounts for any sites that don't compete with them. Once this has been accomplished technically, governments could lean on ISPs to "soft censor" other sites that they disapprove of. Obviously this is enormously bad for consumers, the internet and free speech in general.

We can't trust the market to force the ISPs to do the right thing because in many areas there is only one broadband option. Perhaps if there were as many choices for an ISP as there are choices of coffee shop in Seattle, having a few non-neutral network providers would be more palatable (non-neutral ones would probably be very cheap given their low quality of service).

As I see it there are several possible solutions:

  1. Force ISPs to charge at a flat rate, not per gigabyte (discouraging infrastructure investments).
  2. Forbid sites from offering bandwidth rebates to customers (directly or via the ISPs).
  3. Forbid ISPs from looking at where your packets are going to end up (they can only check to see what's the next hop that they need to be sent to).

I think pretty much anything else works out as a variation on one of these three things. The third seems to be the most practical, and could be imposed on ISPs as a penalty for having insufficient competition.

Bifurcation fractal

Thursday, August 18th, 2011

I wrote a little program to plot the bifurcation fractal when I was learning to write Windows programs from scratch - this is an updated version of it. Unlike most renderings of this fractal, it uses an exposure function to get smooth gradients, and it scatters the horizontal coordinate around the window so that you get progressive improvements in image quality and can see very fine details (such as the horizontal lines of miniature versions of the entire picture). The core of the technique is sketched below.
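This is roughly the idea, reconstructed from memory rather than taken from the actual program; the parameter range, iteration counts and the exposure constant k are placeholders:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

const int width = 1024, height = 768;
std::vector<double> hits(width * height);          // accumulated hit counts
std::vector<unsigned char> image(width * height);  // displayed brightness

// Process one randomly chosen column: the scattered horizontal coordinate
// means fine vertical details appear early and the whole image refines
// progressively instead of sweeping left to right.
void plotRandomColumn()
{
    int x = std::rand() % width;
    double a = 2.0 + 2.0 * x / width;   // map the column to a parameter in [2,4]
    double v = 0.5;
    for (int i = 0; i < 100; ++i)       // let the transient die out
        v = a * v * (1 - v);
    for (int i = 0; i < 1000; ++i) {    // accumulate hits along the attractor
        v = a * v * (1 - v);
        int y = static_cast<int>(v * height);
        if (y >= 0 && y < height)
            hits[y * width + x] += 1;
    }
    // Exposure function: maps unbounded hit counts to brightness smoothly,
    // like film exposure, instead of a hard "hit or no hit" threshold.
    const double k = 0.01;
    for (int y = 0; y < height; ++y)
        image[y * width + x] = static_cast<unsigned char>(
            255 * (1 - std::exp(-k * hits[y * width + x])));
}

int main()
{
    for (int i = 0; i < 100000; ++i)
        plotRandomColumn();
    // ...write `image` out as a greyscale bitmap or blit it to a window...
}
```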

You can zoom in by dragging a rectangle over the image with the left mouse button, and zoom out by dragging a rectangle with the right mouse button.

Credit card designed for internet shopping

Friday, July 23rd, 2010

It seems like it would be possible to make a great deal of money by creating a payment system better than credit cards. Paypal has come closest to doing this, but has a lot of problems.

How could one make credit cards better? It would be very difficult to get your payment system into as many retailers as Visa and Mastercard, so perhaps aiming for a niche would be a good idea. One such niche might be internet purchases. Develop a payment system/credit card designed specifically for internet purchases and it could be very popular.

One major difference between a payment system designed for the internet and one predating it is that one could authenticate the transaction, not the identity. Whenever you buy something online, you put in your card number as usual, but then (unlike with normal credit cards) there is an extra step - you log into the payment system's website and approve the requested transaction. Until you have done that, the merchant doesn't get any money (and won't deliver the goods). This cuts out all "stolen card" fraud, since the thief would also need to steal your payment system password (which never goes anywhere near the merchant). That would allow the new payment system to undercut the existing credit card issuers and become competitive. Fraud caused by loss of the payment system password would be treated the same as fraud caused by the loss of any other online banking password (which I think varies from place to place).
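To make the flow concrete, here is a minimal sketch of the transaction lifecycle I'm imagining. All of the names and the polling interface are hypothetical; a real system would need authenticated channels, timeouts, settlement and so on.

```cpp
#include <map>
#include <string>

// A transaction's lifecycle: the merchant submits it, but no money moves
// (and nothing ships) until the cardholder approves it out-of-band.
enum class State { Requested, Approved, Declined };

struct Transaction
{
    std::string merchant;
    long long amountCents;
    State state = State::Requested;
};

class PaymentSystem
{
    std::map<int, Transaction> transactions_;
    int nextId_ = 0;
public:
    // Called by the merchant with the card number and amount (card number
    // omitted here for brevity).
    int request(const std::string& merchant, long long amountCents)
    {
        transactions_[nextId_] = Transaction{merchant, amountCents};
        return nextId_++;
    }
    // Called by the cardholder after logging in to the payment system's own
    // site with a password the merchant never sees.
    void approve(int id) { transactions_[id].state = State::Approved; }
    void decline(int id) { transactions_[id].state = State::Declined; }
    // The merchant ships only once this returns true.
    bool isApproved(int id) const
    {
        auto it = transactions_.find(id);
        return it != transactions_.end() && it->second.state == State::Approved;
    }
};
```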

This system would still need a "chargeback" mechanism to combat fraud from merchants (and mechanisms to combat fraudulent chargebacks) but my impression is that the costs of these are small compared to the "stolen card" costs.

With suitable cellphone applications for authorization, the system could even be used for brick-and-mortar stores as well.

Unlike Paypal, this system would actually be able to lend money like a credit card - it wouldn't need to be linked to a bank account or an existing credit card.

Time derivative accounting

Wednesday, July 21st, 2010

Sometimes I think it would make things simpler if, when I logged onto my bank's website, there were two sorts of transactions listed. One would be the normal sort: "x dollars paid to y on date z". The other sort would be "x dollars per month paid to y since date z". For example, my monthly mortgage payment would be listed as a single transaction, not a separate one each month.

Certain calculations are much easier when done with time derivative values. For example, interest on my mortgage accumulates daily but the payments are monthly, so to figure out what is actually owed at any time requires a lengthy spreadsheet or complicated program. If the interest accumulated continuously, and the payment also happened continuously, the calculations would be much simpler.

Normally, interest is specified as a percentage by which a debt grows. But to know what that really means, you have to specify how long it takes to grow by that much - this can lead to confusion (deliberate or accidental). It would be much better if interest were specified as a growth constant f (with standard units of Hz, bizarrely enough), such that if you have a debt x_0 at time 0 which changes only because of interest (no payments or fees), then after time t the debt will have grown to x_0e^{tf}. If you are also making payments at the rate of r per unit time, then the debt x is given by the differential equation \displaystyle \frac{dx}{dt} = xf - r which has the solution

\displaystyle x = \frac{r}{f} + (x_0 - \frac{r}{f})e^{ft}

Hence one can easily solve for the time it takes to repay the loan:

\displaystyle t = \frac{\log(-\frac{r}{x_0f - r})}{f}

Or you can solve for the repayment rate if you want to repay the loan in a specific amount of time:

\displaystyle r = \frac{x_0fe^{ft}}{e^{ft} - 1}
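To put numbers in, the formulas can be evaluated directly once a conventional annual rate has been converted into a growth constant. This is just a worked example: the 5% rate, the $200,000 principal and the payment rates are all made up.

```cpp
#include <cmath>
#include <cstdio>

const double secondsPerYear = 31556952.0;    // average Gregorian year
const double secondsPerMonth = 2629746.0;    // one twelfth of that

// Convert a conventional annual growth factor (1.05 means "5% per year")
// into a growth constant f in Hz, so that x(t) = x_0 * e^(f*t).
double growthConstant(double annualFactor)
{
    return std::log(annualFactor) / secondsPerYear;
}

int main()
{
    double x0 = 200000;                  // principal in dollars (made up)
    double f = growthConstant(1.05);     // ~1.55e-9 Hz
    double r = 1200 / secondsPerMonth;   // $1200/month in dollars per second

    // Time to repay: t = log(-r / (x_0*f - r)) / f
    double t = std::log(-r / (x0 * f - r)) / f;
    std::printf("repayment takes %.1f years\n", t / secondsPerYear);

    // Rate needed to repay in 15 years: r = x_0*f*e^(f*t) / (e^(f*t) - 1)
    double t15 = 15 * secondsPerYear;
    double r15 = x0 * f * std::exp(f * t15) / (std::exp(f * t15) - 1);
    std::printf("15-year payoff needs $%.2f/month\n", r15 * secondsPerMonth);
}
```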

Similarly, one can do calculations around retirement savings. Suppose you have x_0 in retirement savings today, the inflation growth constant is i, you are saving at a rate of r e^{it}, you are investing in something with a growth constant of f, and you want to retire as soon as you can maintain an income of y e^{it}.

The amount you need in savings at retirement is the x for which withdrawing y e^{it} still leaves the savings growing in line with inflation (dx/dt = xi):

\displaystyle \frac{dx}{dt} = xf - ye^{it} = xi \quad\Rightarrow\quad x = \frac{ye^{it}}{f-i}

While working, the savings instead grow according to:

\displaystyle \frac{dx}{dt} = xf + re^{it}

Solving this gives

\displaystyle x = \left(x_0 + \frac{r}{f-i}\right)e^{ft} - \frac{r}{f-i}e^{it}

and setting that equal to the amount needed at retirement gives:

\displaystyle y = (x_0(f-i)+r)e^{(f-i)t} - r

Solving for retirement time:

\displaystyle t = \frac{\log(\frac{y+r}{x_0(f-i)+r})}{f-i}

Or savings rate:

\displaystyle r = \frac{ye^{it} - x_0(f-i)e^{ft}}{e^{ft} - e^{it}}

Of course, all that assumes you can get a fixed rate of return and that inflation stays constant. In reality, these variables are constantly changing. There's also the matter of risk to consider - trading off means against variances - but you get the idea.

The current inflation rate is about 300 picohertz. The highest recorded hyperinflation is about 6,000,000 picohertz (6 microhertz).

How should payment rates be specified in user interfaces? "Dollars per month" is probably the most meaningful unit - to make it consistent we should use a standard average month length of 2629746 seconds (one twelfth of the average Gregorian year). Balances would be updated continuously, supposing that salaries and recurring bills could also be paid continuously. One would want to set limits on the payment rate for bills (just as one does with direct debits) so that a mistake at the gas company doesn't instantly wipe out your savings.
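A sketch of the bookkeeping this implies - the account structure, the cap and all the names are my own invention:

```cpp
const double secondsPerMonth = 2629746.0;   // standard average month

struct Account
{
    double balance = 0;   // dollars
    double netRate = 0;   // dollars per second (salary inflow minus bills)

    // Advance the account through dt seconds of continuous flow.
    void update(double dt) { balance += netRate * dt; }

    // Add a recurring bill specified in dollars per month, refusing anything
    // above a cap (like a direct debit limit) so that a mistake at the gas
    // company can't instantly drain the account.
    bool addBill(double dollarsPerMonth, double capDollarsPerMonth)
    {
        if (dollarsPerMonth > capDollarsPerMonth)
            return false;
        netRate -= dollarsPerMonth / secondsPerMonth;
        return true;
    }
};
```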

Further derivatives could be taken into account so that one could pay exponentially increasing bills like the retirement savings above without making any adjustments. The ultimate outcome of this would be to link everything to inflation and display/specify amounts in "year 2000 dollars" (for example) - in other words, the Inflation-linked currency I've written about before.

Liquid democracy

Monday, July 19th, 2010

Another political idea I really like is that of electronic liquid democracy. The idea is this - instead of trying to choose which of a very small number of representatives will best represent your interests, you can instead vote directly on every measure that your representative would vote on.

Most of the time, you might not care or might not have enough background knowledge to know what's best, so you appoint a proxy to choose your vote on your behalf. You can even choose different proxies for different areas - one for health-related legislation, another for trade-related and so on. You can choose multiple ranked proxies so that if one proxy fails to vote on one issue, the vote will go to your second choice and so on. Proxies can also delegate to other proxies. All this is done electronically (with appropriate security safeguards to avoid elections getting hacked) so you can change your proxies and votes whenever you like.

It's somewhat surprising to me that this hasn't taken off already, given how often voters seem to be dissatisfied with the decisions of their elected representatives. It seems like it would be relatively easy to get going - somebody just needs to run on a platform of having no policy except voting in accordance with the result of the system. Such a representative could get votes from both sides of the political spectrum, and the system would grant voters much more power and control - it seems to be the best of all worlds. Unfortunately, when it was tried in Australia, the Senator On-Line candidate failed miserably to get elected. There are several reasons why this might have happened:

  • People not wanting to vote for a candidate they feel is unlikely to win, for fear of helping the wrong mainstream candidate get elected.
  • People not trusting this newfangled internet stuff.

The second problem should go away as the generations that haven't grown up in the internet age die out. The first problem is ever present for third parties in first-past-the-post systems. Given that politicians won't generally reform the systems that elected them, it seems like the only chances are for the third parties to sneak in at times when a majority of people have an unpopular view of both mainstream parties (which may happen more often as internet-savvy voters get better informed about what their representatives are actually doing).

Seasteading

Sunday, July 18th, 2010

I find the idea of Seasteading fascinating - it could certainly be a great thing for humanity if it accelerates the evolution of government systems and gets them to solve the Hotelling's law problem and the irrational voter problem. However, I have serious doubts that a seastead would be a good place to live in the long term. Assuming it gets off the ground, here is how I think the experiment would play out.

The first problem is that seastead "land" is extremely expensive - hundreds of dollars per square foot, on par with living space in a major US city but (initially) without the network effects that make US cities valuable places to live/work/do stuff. The "land" also has ongoing maintenance costs much greater than old-fashioned land - making sure it stays afloat, de-barnacling, ensuring a supply of fresh water and food, removing waste and so on.

This means that (at least initially) a seastead is going to be importing much more than it exports. In order to be practical, it's going to have to ramp up to being a net exporter as quickly as possible to pay back the initial costs, keep itself running and generate net wealth. But there are far fewer options for a seastead to make money than for a landstead. Farming is going to be particularly difficult as it requires a lot of space. So the seasteads will have to concentrate on producing wealth in ways that don't require a lot of space, and trade with land states for food.

The things which seasteads will be able to do more cheaply than landsteads are things which are made more expensive by government interference, such as manufacturing (no government to impose a minimum wage or environmentally friendly disposal practices), software and other IP-intensive activities (no copyrights or patents to worry about), and scientific research (particularly biological - no ethics laws to get in the way). Drugs, prostitution and gambling will be common, since sin taxes can be avoided. Seasteads which don't do these things just won't be able to compete with the land states, so by a simple survival-of-the-fittest process, this is what the successful seasteads will end up doing - a classic race to the bottom.

By design, it's supposed to be very easy to leave a seastead and join another if you disagree with how your current seastead is being run - you just pack up your house onto a ship and go to another seastead. But this very mobility will introduce its own set of problems. Crime, for example - if someone commits a serious crime aboard a seastead, there's little stopping them from leaving before the police start breathing down their necks. To avoid having a serious crime problem, the seasteads are going to have to sacrifice freedom for security in one of a number of ways:

  • Make it more difficult to move (you'll need a check to make sure you aren't under suspicion of a crime before you'll be allowed to get on the boat).
  • Have a justice system and uniform baseline of laws covering a large union of seasteads - allow free travel between these seasteads but you'll still be under the same criminal jurisdiction - similar to how US states operate. By agreeing to be a part of the union, your seastead gets some additional security but you have to play by the union's rules to keep your police protection.

Another problem is that it's going to be difficult to institute any form of social welfare - if you tax the rich to feed the poor, those who pay more in tax than they get back will just leave for a seastead which doesn't do that. So there is likely to be a great poverty problem - a big divide between wealthy seastates and poor ones. The poor ones will just die out and their "land" will be taken over by other seasteads, accelerating the Darwinism to the delight of libertarians and the dismay of humanitarians.

If lots of people end up on seasteads, I foresee large environmental problems, since it's going to be very difficult for seasteads to enact (let alone enforce) anti-pollution laws. Seasteads which take pains to avoid dumping their toxic waste in the ocean just won't be able to compete with those that don't.

There are certain economies of scale in many of the wealth-generating activities that seasteads will perform, so it seems likely that reasonably successful seasteads will merge into larger conglomerates the way corporations do in the western world (particularly in the US). These corporate sea-states will be the ones which are nicest to live on - since they are major producers of wealth, they'll be able to afford the best living conditions and attract the brightest and hardest-working people. They won't be the most free seasteads, though - only those who contribute (or have contributed) to the seastead's economy will be allowed to live there: customers, employees and their families, retired employees, employees of partner corporations and so on. As an employee, you'll have to play by the company rules and not engage in any activities which might be deleterious to the company's profit margin - don't expect to be able to take recreational drugs, for example, since they might affect your job performance and add costs to the company healthcare plan.

Some seasteads might have constitutions which prevent this sort of thing. They might not be as big, as pleasant to live on or quite as good at making money, but their small size and consequent agility might allow them to survive and become viable in certain niche areas of the seasteading economy. They may have to make some compromises to stay viable, but as different seasteads will make different compromises, there will be plenty of choice.

The outcome of the seasteading experiment is really important because when humankind inevitably starts colonizing space, similar economic forces will be at work.

Arbitrary precision Mandelbrot sets

Tuesday, July 6th, 2010

I added arbitrary precision arithmetic to my Mandelbrot zoomer. Surprisingly, the difficult part wasn't the arbitrary precision arithmetic itself, it was deciding when and how to switch from double-precision to arbitrary precision arithmetic. Because my program reuses data computed at one zoom level to speed up the computation of more deeply zoomed images, some images contained pixels computed using both methods. The trouble is, the two methods don't give the same results. In fact, after some experimentation I discovered that the problem was even worse than that: even using just the double-precision code, you get different results depending on the number of digits you use. So the point that was represented in my program by 0xffa615ce + 0x00000002i (using 10.22 bit fixed point) escaped after 548 iterations but the point 0xffa615ce00000000 + 0x0000000200000000i (using 10.54 bit fixed point) escaped after 384 iterations. The problem is that after every multiplication we shift the result right by the number of fractional bits, which performs a rounding operation. The images generated look reasonable but are actually only approximations to the true images that would be calculated if infinite precision were employed.
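The effect is easy to reproduce in miniature. This is a sketch of the idea rather than my actual code, but it shows where the rounding creeps in:

```cpp
#include <cstdint>

const int fractionalBits = 22;   // 10.22 fixed point: 10 integer, 22 fractional

// Multiply two 10.22 fixed-point numbers. The right shift discards the low
// 22 bits of the 64-bit product - that truncation is the rounding step, and
// it discards different information depending on how many fractional bits
// the representation carries.
int32_t fixedMultiply(int32_t a, int32_t b)
{
    return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> fractionalBits);
}

// One Mandelbrot iteration z -> z^2 + c in fixed point; each of the three
// multiplications rounds, so orbits computed at different precisions drift
// apart after enough iterations.
void iterate(int32_t& zr, int32_t& zi, int32_t cr, int32_t ci)
{
    int32_t zr2 = fixedMultiply(zr, zr);
    int32_t zi2 = fixedMultiply(zi, zi);
    int32_t zri = fixedMultiply(zr, zi);
    zi = zri + zri + ci;    // 2*zr*zi + ci
    zr = zr2 - zi2 + cr;
}
```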

Having determined this, I realized it would be necessary to throw away all the points computed with lower precision once we started using a higher precision. This isn't as bad as it sounds, since (when zooming in by a factor of 2) the recycled points only account for 1/4 of the computed points. But if we just threw them all out at once, it would result in a visible artifact when passing through the precision boundary - the entire window would go black and we'd restart calculation from scratch. I wanted to avoid this if possible, so I started thinking about convoluted schemes involving restarting a point if its 3 sibling points were of higher precision. Then I realized that I could just recycle the points but keep their colours when splitting blocks across a precision boundary, avoiding the glitch. There are still some artifacts while the image is being computed, but they are less jarring.

Finally there was the problem of interfacing with double-precision floating-point numbers. The Mandelbrot set is contained entirely within the circle |c|<2, and in this range a double has 52 bits of fractional precision. But if we replace the 10.22 and 10.54 bit fixed point numbers with doubles, we'll have a region of blockiness where we need 53 or 54 bits but only have 52. Rather than having two sorts of 64-bit precision numbers, I decided to simplify things and have 12 integer bits in my fixed point numbers. The program is very slow when it switches into arbitrary precision mode - it's barely optimized at all. The 96-bit precision code currently has a theoretical maximum speed about 92 times slower than the SSE double-precision code (190 million iterations per second vs 2 million). It could be significantly faster with some hand assembler tuning, though - I have a 96-bit squaring routine that should speed it up by an order of magnitude or so. All of the multiplications in the Mandelbrot inner loop can be replaced with squarings, since [latex]2xy = (x+y)^2 - x^2 - y^2[/latex]. Squaring is a bit faster than multiplication for arbitrary precision integers, since you only need to do [latex]\displaystyle \frac{n(n+1)}{2}[/latex] digit multiplications instead of [latex]n^2[/latex]. Given that we are calculating [latex]x^2[/latex] and [latex]y^2[/latex] anyway and the subtractions are very fast, this should be a win overall.
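In outline, the squarings-only inner loop would look like this. BigFixed is a placeholder for my arbitrary-precision fixed-point type (not shown), assumed to support addition, subtraction and a square() routine costing about n(n+1)/2 digit multiplications:

```cpp
// One Mandelbrot iteration z -> z^2 + c using only squarings:
//   zr' = zr^2 - zi^2 + cr
//   zi' = 2*zr*zi + ci = (zr + zi)^2 - zr^2 - zi^2 + ci
// zr^2 and zi^2 are needed for the |z|^2 escape test anyway, so the only
// extra work over the multiplication version is one squaring.
template <typename BigFixed>
void iterate(BigFixed& zr, BigFixed& zi, const BigFixed& cr, const BigFixed& ci)
{
    BigFixed zr2 = square(zr);
    BigFixed zi2 = square(zi);
    BigFixed s = square(zr + zi);
    zi = s - zr2 - zi2 + ci;
    zr = zr2 - zi2 + cr;
}
```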

Improved GPU usage for fractal plotter

Sunday, November 1st, 2009

I've been tinkering with my fractal plotter again. One thing that annoyed me about it was the pauses when you zoomed in or out past a power of 2. I thought this was due to matrix operations, until I did some profiling and discovered that it was actually the dilation (both doubling and halving) of the "tiles" of graphical data onto which squares are plotted and which are themselves painted to the screen.

This is work that can quite easily be done on the GPU, without even having to resort to pixel shaders, by using the ability to render to a texture. Here is the result and here is the source. In order to do this I moved all the tiles to video memory (default pool instead of managed pool) and used ColorFill() to actually plot blocks instead of locking and writing directly to textures. All this adds up to much more CPU time available for fractal iterations.
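The tile operations might look something like this in Direct3D 9. This is a skeleton with error handling and the surrounding tile management omitted, and I'm guessing at the details - StretchRect() is one way to do the dilation, though rendering a textured quad into a render-target tile works too:

```cpp
#include <d3d9.h>

// Dilate one tile into another on the GPU. Both textures live in the
// default pool and were created with D3DUSAGE_RENDERTARGET, so the scaled
// copy happens entirely in video memory.
void dilateTile(IDirect3DDevice9* device,
                IDirect3DTexture9* source, IDirect3DTexture9* dest,
                const RECT& sourceRect, const RECT& destRect)
{
    IDirect3DSurface9* sourceSurface = 0;
    IDirect3DSurface9* destSurface = 0;
    source->GetSurfaceLevel(0, &sourceSurface);
    dest->GetSurfaceLevel(0, &destSurface);
    device->StretchRect(sourceSurface, &sourceRect,
                        destSurface, &destRect, D3DTEXF_POINT);
    destSurface->Release();
    sourceSurface->Release();
}

// Plot a solid-coloured block without locking the texture: ColorFill()
// also runs on the GPU for default-pool render-target surfaces.
void plotBlock(IDirect3DDevice9* device, IDirect3DTexture9* tile,
               const RECT& block, D3DCOLOR colour)
{
    IDirect3DSurface9* surface = 0;
    tile->GetSurfaceLevel(0, &surface);
    device->ColorFill(surface, &block, colour);
    surface->Release();
}
```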

Another change is that instead of an array of grids of tiles, I've switched to using a grid of "towers" each of which is itself a tile and can point to 4 other towers. This simplifies the code somewhat.

There is still some glitchiness when zooming but it is much less noticeable now.

This reminds me of something I meant to write about here. When I originally converted my fractal program to use Direct3D, I figured that locking and unlocking textures was probably an expensive operation so rather than locking and unlocking every time I needed to plot a square, I kept them all locked most of the time and just unlocked them to paint. However, it turns out that this "optimization" was actually a terrible pessimization - now all the tiles were dirtied each frame and had to be copied from system memory to video memory for each paint, and because of the locking nothing else could happen during that time. I was able to get a big speed up by locking and unlocking around each plot operation - that caused only the parts of tiles that were actually plotted on to be dirtied. It just goes to show that when optimizing you do have to be careful to actually measure performance and see where the slow bits really are.
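For contrast, the per-plot locking that fixed the earlier version looks roughly like this (a sketch assuming managed-pool tiles and a 32-bits-per-pixel format): only the locked RECT is marked dirty, so only those bytes get re-uploaded on the next paint.

```cpp
#include <d3d9.h>

// Plot a filled square into a managed-pool texture, locking only the
// affected rectangle for only as long as the write takes. pBits points at
// the top-left pixel of the locked region, not of the whole texture.
void plotSquare(IDirect3DTexture9* tile, const RECT& square, D3DCOLOR colour)
{
    D3DLOCKED_RECT locked;
    if (FAILED(tile->LockRect(0, &locked, &square, 0)))
        return;
    for (LONG y = square.top; y < square.bottom; ++y) {
        DWORD* row = reinterpret_cast<DWORD*>(
            static_cast<BYTE*>(locked.pBits) + (y - square.top) * locked.Pitch);
        for (LONG x = square.left; x < square.right; ++x)
            row[x - square.left] = colour;
    }
    tile->UnlockRect(0);
}
```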