Archive for the ‘maths’ Category

Rotating fractal

Thursday, October 29th, 2009

Last year, I wrote about a way to make rotating fractals. I implemented this and here is the result:

The equation used was z ← z^1.8 + c, and the branch cut varies from 0 to ~10π.
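For anyone wanting to reproduce this, the essential trick is evaluating z^1.8 with a movable branch cut. Here is a minimal Python sketch (the function names, bailout value and iteration limit are my own choices, not taken from the original post):

```python
import cmath
import math

def zpow(z, p, cut):
    """z**p using the branch of log whose argument lies in
    [cut, cut + 2*pi). Rotating `cut` over time animates the fractal."""
    if z == 0:
        return 0j
    theta = cmath.phase(z)                       # in (-pi, pi]
    theta = (theta - cut) % (2 * math.pi) + cut  # shift into chosen branch
    return cmath.exp(p * (math.log(abs(z)) + 1j * theta))

def escape_time(c, p, cut, max_iter=100, bailout=4.0):
    """Iterate z <- z**p + c with the chosen branch cut; return the
    iteration count at escape, or max_iter if the point never escapes."""
    z = 0j
    for n in range(max_iter):
        z = zpow(z, p, cut) + c
        if abs(z) > bailout:
            return n
    return max_iter
```

Rendering one frame per value of `cut` (stepping it from 0 to ~10π) gives the rotating animation.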

A trip around the cardioid

Tuesday, October 27th, 2009

Take a point that moves around the edge of the main cardioid of the Mandelbrot set, and plot orbits with values of c close to that point. You end up with this:

It can be thought of as a sequence of cross-sections of the Buddhabrot.
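A rough sketch of how one frame could be computed: accumulate the orbit points of z ← z² + c into a 2D histogram, for c values sampled near the chosen cardioid point (the grid size, sample count and sampling disc below are arbitrary choices of mine, not from the original):

```python
import random

def orbit_histogram(c0, radius, samples=2000, max_iter=200,
                    bins=64, extent=2.0, seed=1):
    """Accumulate orbits of z <- z^2 + c into a bins x bins histogram,
    for random c within `radius` of c0 - a rough Buddhabrot slice."""
    rng = random.Random(seed)
    hist = [[0] * bins for _ in range(bins)]
    for _ in range(samples):
        c = c0 + complex(rng.uniform(-radius, radius),
                         rng.uniform(-radius, radius))
        z = 0j
        orbit = []
        for _ in range(max_iter):
            z = z * z + c
            orbit.append(z)
            if abs(z) > 2:
                break
        # Bin every orbit point that lands inside the plotting window.
        for w in orbit:
            i = int((w.real + extent) / (2 * extent) * bins)
            j = int((w.imag + extent) / (2 * extent) * bins)
            if 0 <= i < bins and 0 <= j < bins:
                hist[j][i] += 1
    return hist
```

Stepping `c0` around the cardioid boundary and rendering each histogram as a frame gives the movie.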

Why should healthcare be a right anyway?

Tuesday, October 20th, 2009

Ron Paul says healthcare isn't a right. He's correct, technically - the US constitution guarantees no such right. However, unlike Dr Paul, I think healthcare should be a right - at least, some basic level of healthcare.

There are several reasons for this. One is a basic moral imperative - enshrining such a right would save tens of thousands of lives each year.

Another is that healthcare doesn't work like other goods. I have a choice about whether to buy a new car or a new computer. If I am in need of life-saving healthcare - if my "choice" is to get the healthcare I need, or to die - that isn't really a choice at all. Faced with such a need, my only rational response would be to obtain the healthcare I needed by any means necessary, even if that means using all my savings and selling everything I own. No-one "shops around" for the cheapest cancer treatments - people go to the doctor, get fixed, and then worry about the cost afterwards. Until the bill comes, they probably have no idea how much it's going to cost.

Food and shelter are also essential goods, but they don't bankrupt people like medical bills do (at least in the US) because they are very predictable costs - one isn't going to suddenly find oneself faced with a food bill of hundreds of thousands of dollars due to circumstances beyond one's control. A "basic minimum for survival" level of food is also very cheap (we spend so much on food because we like it to be tasty and interesting as well as nutritious) but healthcare is very expensive because doctors are very highly trained professionals.

Yet another reason is that universal healthcare benefits everyone, even the healthy. Preventive medicine and quick treatment of disease (as opposed to people avoiding going to the doctor because they can't afford it) means a healthier population overall, less disease, fewer sick days and greater productivity.

One other reason that doesn't seem to be mentioned anywhere is peace of mind. Unless you work for a very large corporation, health insurance isn't reliable. Even if you have insurance, you might still be bankrupted by deductibles, copays and coinsurance. If you have an individual insurance plan, your insurance company will look for any excuse to drop you as a customer if you develop a particularly expensive condition - woe betide you if you left an i undotted or a t uncrossed on your application form! If you work for a small company, the insurance company may pressure your employer to fire you (with threat of cripplingly higher rates) if you get too sick. All these add up to worries (and stress) that those who are guaranteed healthcare by their governments simply don't have.

Plan for welfare

Wednesday, October 14th, 2009

I think social welfare is generally a positive thing for society - nobody should have to starve, become homeless or sacrifice their retirement savings due to circumstances beyond their control (such as losing their job). However, it is also important not to destroy the incentive to work - we should balance the aim of having everyone contribute usefully to society with the safety net. It seems the UK isn't doing a very good job of this at the moment, as there are people on welfare who could work and would like to, except that it would mean having less money for greater effort (a minimum wage job pays less than welfare, and if you have the job you stop getting the welfare). There are also many teenage girls who become pregnant just so that they will get a council house. [EDIT: This is an unsubstantiated and almost certainly false claim - I heard it on the Jeremy Vine show and failed to research it before repeating it here.]

If we were designing the system from scratch, one question to ask might be "what do we do with someone who is just Terminally Lazy (TL)?" I.e. what do we do with somebody who simply refuses to work or contribute to society at all? What sort of lifestyle should they have? For humanitarian reasons, I don't think we should let them freeze to death in the streets, so I think they should have some sort of Basic Minimum Standard Of Living (BMSOL). We would also like to avoid the possibility of them committing crimes simply so they get sent to prison and have a place to sleep (on the general principle that encouraging crimes is a bad idea). I also don't think we should treat TL-ness as a crime itself - if a TL person wakes up one day and decides to go and get a job and become a productive member of society, that should be encouraged - they should not lose the ability to do that.

I think that the concept of "prison" is the right idea here, though - apart from the "freedom to leave" thing, there is no reason to provide any better BMSOL to a TL person than to a convicted criminal. In both cases, we only provide that BMSOL for humanitarian reasons. Let anyone who wants to have a bed for the night in conditions that are roughly the same as those in prison: shelter, a bed, basic hygiene facilities, up to three square meals per day and a basic level of medical care. No sex, booze, drugs or TV.

Paying for the BMSOL for the TL whilst making the non-TL pay for those things isn't exactly fair, though, so let's have a guaranteed minimum income (equal to the cost of the BMSOL) for everyone, and give people the choice of receiving it in the form of cash or BMSOL (or some of each).

If you want more than the BMSOL you have to do some work. With the BMSOL system in place, the minimum wage could be scrapped which would mean there would be plenty of work to go around (the problem with unemployment isn't that there's a lack of stuff to do, it's that because of the minimum wage there's a lack of money to pay people to do it).

How to pay for all this? I tend to favour an income tax since it's cheaper to collect and more progressive than a sales tax. I think inheritance tax is one of the most fair taxes, as I've mentioned here before. Seigniorage (if carefully controlled) is probably also a good idea. It would be nice if the government had a national surplus rather than a national debt, so that it could make some money from investments rather than having to pay interest. Sin taxes (on booze, tobacco, recreational drugs, pollution and gambling) should be levied exactly as much as necessary to undo the harm that they cause (so smokers should pay the costs of treatment of smoking related diseases, etc) - any less leads to the abstinent paying for the sins of the partakers, and any more could lead to encouragement of the sins.

Income tax should be calculated as a function yielding y (the take home pay) from x (the salary) such that:

  • There is always an incentive to earn more (i.e. an increase in your salary always corresponds to an increase in your take-home pay) - \displaystyle \frac{dy}{dx} > 0.
  • The tax is progressive (i.e. the more you earn, the smaller the fraction of your salary you take home): \displaystyle \frac{d}{dx}\frac{y}{x} < 0.
  • There's no tax on the first penny (i.e. \displaystyle \frac{dy}{dx} = 1 at x = 0).
  • A minimum income is guaranteed (i.e. y >= m) rather than a minimum wage (x >= n).

Because the minimum income is part of the income tax system, the tax bill x-y is negative for people making less than a certain amount (not necessarily the same as the minimum income).
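As a sanity check, here is one toy curve that satisfies all four constraints. The functional form and the parameters m and k are purely my own illustration, not anything proposed in the post:

```python
def take_home(x, m=8000.0, k=40000.0):
    """Toy take-home-pay curve: y = m + x / (1 + x/k).

    Satisfies the four constraints listed above:
      dy/dx = 1/(1 + x/k)^2 > 0   (always an incentive to earn more)
      d/dx (y/x) < 0              (progressive)
      dy/dx = 1 at x = 0          (no tax on the first penny)
      y >= m                      (guaranteed minimum income)
    m and k are made-up parameters, not values from the post.
    """
    return m + x / (1 + x / k)
```

With these (arbitrary) parameters the tax bill x - y is indeed negative below a break-even salary of roughly £22,000 and positive above it, matching the point about the minimum income being part of the tax system.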

It would be interesting to write a program to simulate the economics of a society and see what effect various tax and welfare schemes have.

Edit 14th July 2013:

Here's a good article about basic income.

Taxing hard work versus taxing luck

Tuesday, October 6th, 2009

In the year 2000, John McCain was asked by a student, "Why is it that someone like my father who goes to school for 13 years gets penalized in a huge tax bracket because he's a doctor. Why is that - why does he have to pay higher taxes than everybody else? Just because he makes more money. How is that fair?". To his credit, McCain did actually defend the idea of progressive taxation (those who make more money should pay a higher percentage of their income as tax).

It seems to be a common misconception amongst conservatives that progressive taxation is "punishing hard work" (or perhaps it just makes a good soundbite). But I don't think that even the most diehard liberals want to punish anybody for working hard - what should be (and is) punished is good luck. Many extremely wealthy people haven't had to work particularly hard for their wealth - especially those who have inherited it, but also those who have had the good fortune to be selling the right product in the right place at the right time (for example Microsoft in its early days with MS-DOS).

Also, there are a great many people who work extremely hard, but don't make a lot of money (those who work two or three low-paying jobs).

Progressive taxation decreases the reward for being lucky and thereby increases the amount of that reward that is due to hard work. So a (well implemented) progressive taxation strategy actually encourages hard work rather than discouraging it.

Not all progressive taxes are well implemented, though. In the UK there is a VAT registration threshold of (currently) £68,000 per year. If your business turns over more than that you have to register for VAT and charge 15 percent on all your sales (not just the sales over the threshold). This greatly discourages people who are making just under the threshold from working any harder, because if they did they would actually make less money.

I think one of the fairest taxes is the inheritance tax. This is much maligned as the "death tax", a moniker which doesn't make much sense to me as it isn't the dead person that pays the tax, it's the beneficiary. It's a fair tax because there's no hard work involved in inheriting money, you just have to have the good fortune to be born to wealthy parents. The downside of the inheritance tax is its effect on the "family business" - a child wouldn't necessarily be able to afford to take over a business from his or her parents if there are large inheritance taxes to be paid, and this may lead to the destruction of some otherwise profitable businesses. But why, as a society, should we give that child the gift of this business and not expect anything in return?

A business which would be affected by this must have a large intrinsic value to be subject to the inheritance tax, but also make a small enough profit that it would take an unreasonably long time for the beneficiary to pay it back. I think the answer to this conundrum is investors - the beneficiary ceases to be sole owner of the business but the business can continue to exist and benefit society. Inheritance taxes should be set up in such a way that this can work.

The programmer cooperative

Wednesday, August 12th, 2009

If I suddenly found myself with lots of money to spare, one thing I'd like to do is hire some talented programmers to work on whatever they wanted to work on. I suspect that there are lots of talented programmers out there who have ambitious projects they'd like to take on but which they just don't have time for because they have full time jobs. Most of those projects probably have no commercial application, but it's very difficult to tell in advance what is likely to make money and what isn't. If you get enough of these projects together, at least some of them are bound to make money.

Any profits made could be shared with the entire cooperative, or used to hire more programmers. Sort of like a business and employment cooperative except for programmers rather than entrepreneurs. Ideally once it got big enough it would become self-sustaining and then undergo exponential growth. Part of the nature of software is that it requires very little capital investment (really just the programmers' salary). Another part is that because it can be copied indefinitely and exhibit network effects, once a piece of software is written it can have great influence and importance. So a successful software project can make much more money than it costs, and one only needs a small percentage of projects to be financially successful in order to sustain the entire system.

I imagine that a very large number of programmers would like to join such a cooperative - what could be better than getting paid and not having anyone tell you what to work on? So one would need some method for choosing between a very large number of applications. Let's suppose for a moment we have some infallible way of weeding out people who, given the choice, would rather go skiing or watch TV all day than write computer programs of any sort. Suppose also we have a way of weeding out those programmers who are actually working on something for someone else and want to get paid for it twice.

One way to choose amongst the remainder would be for them each to submit a few ideas of some things they would like to work on, and how much they want to be paid. The cooperative would then pick those ideas that have the most "bang per buck" (i.e. are most interesting, not too expensive and have some chance of commercial success). The member might change their mind about what they work on, but the initial idea would give some idea about what sort of things they're interested in.

Every few months, each member would give a demo of what they've been working on to the rest of the company. Or, alternatively, make regular blog posts about their progress. This gives some cross-pollination of ideas, and may lead to collaborations within the cooperative. Such collaborations can be very flexible and informal, ending as soon as they stop being fun for any of the parties involved.

Memory handling for fractal program

Sunday, August 9th, 2009

I have been poking again at my fractal plotting program recently. The latest version is available here (source). I've added multi-threading (so all available cores will be used for calculation) and Direct3D rendering (which makes zooming much smoother and allows us to rotate in real time as well with the Z and X keys - something I've not seen any other fractal program do). There is still some jerkiness when zooming in or out (especially past a power-of-two boundary) but it's greatly reduced (especially when there's lots of detail) and with a little more work (breaking up the global matrix operations) it should be possible to eliminate it altogether.

The linked grid data structure in the last version I posted worked reasonably well, but it used too much memory. At double-precision, every block needed 72 bytes even after it had been completed. This means that an image that covered a 1600x1200 screen with detail at 16 texels per pixel for anti-aliasing would use more than 2Gb. I could use a different block type for completed points but there would still need to be at least 4 pointers (one for each direction) at each point, which could take 500Mb by themselves.

So I decided to switch to a data structure based on a quadtree, but with some extra optimizations - at each node, one can have not just 2x2 children but 2^n by 2^n for any positive integer n. When a node contains 4 similar subtrees (each having the same size and subnode type) it is consolidated into a 2^(n+1) by 2^(n+1) grid. If we need to change the type of one of the subnodes of this grid, we do the reverse operation (a split). In this way, if we have a large area of completed detail it will use only 4 bytes per point (storing the final number of iterations). This works much better - memory usage can still be quite high during processing (but seems to top out at ~700Mb rather than ~2Gb for the linked grid data structure) and goes right back down to ~200Mb when the image is completed (only about 35Mb of which is the data matrix, the rest is Direct3D textures).
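The consolidation step might look something like this - a much-simplified Python sketch (the real code also matches subnode types, works in-place, and has the reverse "split" operation):

```python
def consolidate(node):
    """If all four children of a tree node are uniform grids of the same
    size n, merge them into a single 2^(n+1) x 2^(n+1) grid.

    A node is either ('grid', n, cells), with cells a 2^n x 2^n list of
    iteration counts, or ('node', [tl, tr, bl, br])."""
    if node[0] != 'node' or len(node[1]) != 4:
        return node
    children = node[1]
    if any(ch[0] != 'grid' for ch in children):
        return node            # some child is still a subtree
    n = children[0][1]
    if any(ch[1] != n for ch in children):
        return node            # children are grids of different sizes
    size = 2 ** n
    tl, tr, bl, br = (ch[2] for ch in children)
    # Stitch the four quadrants into one grid of twice the side length.
    cells = [tl[r] + tr[r] for r in range(size)]
    cells += [bl[r] + br[r] for r in range(size)]
    return ('grid', n + 1, cells)
```

A completed region then costs one int per point, with no child pointers, which is where the memory saving comes from.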

One disadvantage of the grid tree method is that it doesn't naturally enforce the rule preventing 4 or more blocks being along one side of one other block. That rule is important for ensuring that we don't lose details that protrude through a block edge. However, this turns out to be pretty easy to explicitly add back in. This gives a more efficient solid-guessing method than I've seen in any other fractal program (by which I mean it calculates the fewest possible points in areas of solid colour).

This data structure was surprisingly difficult to get right. The reason for this is that with all the splitting and consolidating of grids, one must avoid keeping grid pointers around (since they can get invalidated when a split or consolidation happens in a nearby part of the tree). That means that many algorithms that would be natural to write recursively need to be written iteratively instead (since the recursive algorithm would keep grid pointers on the call stack).

Another tricky thing to get right was the memory allocation. My first attempt just used malloc() and free() to allocate and deallocate memory for the grids. However, the way grids are used is that we allocate a whole bunch of them and then deallocate most of them. The trouble is that they are quite small, and the ones that don't get deallocated are spread throughout memory. This means that very little memory can be reclaimed by the operating system, since nearly every memory page has at least one remaining block (or part of one) on it.

The problem here is that the malloc()/free() interface assumes that a block, once allocated, never moves. However, grids turn out to be quite easy to move compared to general data structures. This is because the grid tree is only modified by one thread at once (it must be, since modifying it can invalidate any part of it) and also because we know where all the grid pointers are (we have to for split and consolidate operations). Only a dozen or so different grid sizes are currently used in practice, so it's quite practical to have a separate heap for each possible grid size. If all the objects in a heap are the same size, there's an extremely simple compaction algorithm - whenever we delete an item, just move the highest item in the heap down into the newly free space. So a heap consists of a singly-linked list of chunks of about 1Mb each. Each chunk is a contiguous region of memory, but chunks within a heap don't have to be contiguous. This allows the heaps to interleave and make the best use of scarce (on 32-bit machines) address space.
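The same-size-items compaction scheme reduces to something like the following sketch (in Python for brevity; the real allocator works on raw chunks, and the caller uses its knowledge of where all grid pointers are to patch the single pointer to the moved item):

```python
class FixedSizeHeap:
    """Compacting heap for items that are all the same size: freeing an
    item moves the last item into the freed slot, so live items always
    form a dense prefix and freed pages can be returned to the OS."""

    def __init__(self):
        self.items = []

    def alloc(self, value):
        """Allocate a slot; the returned index plays the role of a pointer."""
        self.items.append(value)
        return len(self.items) - 1

    def free(self, index):
        """Free a slot. Returns (old_index, new_index) if another item
        was moved to fill the hole - the caller must update its one
        pointer to that item - or None if nothing moved."""
        last = len(self.items) - 1
        moved = None
        if index != last:
            self.items[index] = self.items[last]
            moved = (last, index)
        self.items.pop()
        return moved
```

Because every free can move an item, this is exactly why deleting a grid can invalidate grid pointers, as described below.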

Sounds simple enough but again there turned out to be complications. Now it's not just split and consolidate that can invalidate grid pointers - now deleting a grid can also invalidate grid pointers. This means more recursive algorithms that need to be made iterative (such as deleting an entire subtree) and certain operations need to be done in a very specific order to avoid keeping more grid pointers around than are necessary. I now understand why such techniques are so rarely used - they're really difficult to get right. I do think that a general garbage-collected heap would be slower and use more memory than this special purpose allocator, but I'm not sure if the difference would be large enough to make all this effort worth it. Perhaps someone will port my code to a garbage-collected language so the performance can be compared.

Sound for fractal program

Saturday, August 8th, 2009

I'd like to be able to add some kind of sound output to my fractal program. What I'd like to do is generate a click every time a point is completed. So, when it's working on a difficult area it would emit a clicking sound but when it's finishing a lot of points it would generate white noise. The principle is very similar to that of a Geiger counter's audio output.

This is trivial to do under DOS, but in Windows it is surprisingly difficult. Ideally we want to generate a waveform at 44.1 kHz. When we finish a point, we flip the current sample in the audio buffer between its maximum and minimum positions. The problem is determining what the current sample should be (or, alternatively, moving a pointer to the next sample 44,100 times per second). What we need is a high-resolution timer, but none of the high-resolution timers seem to be particularly reliable. timeGetTime() is too low-resolution. QueryPerformanceCounter() has bugs on some systems which cause it to skip backwards and forwards a few seconds on occasion. The RDTSC CPU instruction isn't guaranteed to be synchronized between CPUs, and its count rate may vary due to power saving features on some CPUs. Another possibility is reading the playback cursor position from DirectSound, but that is unreliable on OSes before Vista.

What I may end up doing is faking it - measure the average number of blocks completed over some small period of time and use that to generate the clicks via a Poisson distribution.
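A sketch of the fake-click idea (the per-sample flip probability is my own formulation; for small probabilities the number of clicks in any time interval is approximately Poisson distributed, which is the Geiger-counter behaviour we want):

```python
import random

def click_samples(rate_per_second, duration, sample_rate=44100, seed=0):
    """Generate a square-wave-ish click buffer: at each sample, flip
    the output level with probability p = rate/sample_rate, so clicks
    arrive at roughly `rate_per_second` on average."""
    rng = random.Random(seed)
    p = rate_per_second / sample_rate
    level = 1.0
    out = []
    for _ in range(int(duration * sample_rate)):
        if rng.random() < p:
            level = -level        # one click = one edge in the waveform
        out.append(level)
    return out
```

In the real program `rate_per_second` would be the measured average block-completion rate over the last fraction of a second.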

Mandelbrot set taxonomy

Tuesday, August 4th, 2009

Lots of videos have been made of zooms into the Mandelbrot set. These videos almost all follow the same pattern. The initial image is the familiar one of the entire set. One zooms in on some tendril or other, gathering details, passing various minibrots and possibly some embedded Julia sets on the way until one reaches a final minibrot and the zoom stops.

The reason for this is that these sequences are relatively quick to calculate, they show a variety of the different types of behaviors that the Mandelbrot set has to offer, and they have a natural endpoint which causes the video not to seem like it has been cut off part-way through.

If we relax these conditions, what interesting zoom sequences open up to us?

To answer this question requires classification of the complex numbers according to the roles they play in the Mandelbrot set.

The first taxon separates exterior points (whose z sequences are unbounded, example: +1), interior points (limit of z sequence is a finite attracting cycle, example: 0) and boundary points (all other z sequence behaviours). Zooming into either of the first two will leave us with an image of uniform colour after a finite amount of time. Most zoom movies actually target the second of these.

Zooming in on the boundary will leave us with an infinite sequence of "interesting" images, so let's explore this group further. The second taxon separates shore points (those which are on the edge of an atom), Feigenbaum points (which are at the limit of a sequence of atoms of increasing period) and filament points (all other boundary points).

Feigenbaum points make quite interesting infinite zoom sequences. The canonical example is ~-1.401155, the westernmost point of the main molecule. Each time you zoom in by a factor of ~4.669201, the lakes look the same but the filaments get denser. Filaments connect to molecules at these points. These are the only places filaments attach to the continent, but each mu-molecule has one additional filament emerging from its cardioid cusp.

Filament points can be divided into two types based on orbit dynamics - those whose orbit is chaotic (irrational external angle) and those whose orbit converges to a finite repelling cycle. The latter includes two subtypes - branch points (where there are more than one but finitely many paths to other points in the set) and terminal points (where there is only one). Well-known terminal points include -2 and +i. Zooming into terminal points and branch points gives a video which is infinite but which devolves into very same-y spirals after a short time as the mu-molecules shrink to less than a pixel.

I don't know an exact formula for any chaotic filament points, but they're easy to find using a fractal plotter - just zoom in, keeping away from any mu-molecules, branch points and terminal points. As with terminal and branch points, the minibrots tend to shrink to less than a pixel quite quickly, so the images obtained will become same-y after a while - the structures formed by the filaments tend to look like L-system fractals.

Shore points can be divided into two categories, parabolic points and Siegel points. Parabolic points come in two flavors, cardioid cusps (the point of maximum curvature of a cardioid, example: +0.25) and bond points (where two atoms meet, example: -0.75). If you zoom in on a cusp you'll technically always have detail on the screen but the picture gets very boring very quickly as the cusps shrink to sub-pixel widths.

This leaves the Siegel points. You can find these by zooming into the edge of an atom but avoiding the cusps. For example:

  1. Start off with the large circle to the west (call it A) and the slightly smaller circle to the north (call it B).
  2. Zoom in until you can pick out the largest circle C between A and B on the boundary of the set.
  3. Rename B as A and C as B.
  4. Go to step 2.

This particular algorithm takes you to the point whose Julia set is known as the Siegel disk fractal, \displaystyle \frac{1}{4}(1-(e^{i(1-\sqrt{5})\pi}-1)^2), which is about -0.390541+0.586788i.
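The boundary formula and the target point are easy to check numerically (a quick sketch):

```python
import cmath
import math

def cardioid_point(theta):
    """Point on the boundary of the main cardioid with internal angle
    theta: c = (1/4) * (1 - (e^{i*theta} - 1)^2)."""
    return 0.25 * (1 - (cmath.exp(1j * theta) - 1) ** 2)

# The Siegel point from the post: theta = (1 - sqrt(5)) * pi.
golden_siegel = cardioid_point((1 - math.sqrt(5)) * math.pi)
```

theta = 0 gives the cusp at +0.25 and theta = π gives the bond point at -0.75, which is a handy sanity check on the formula.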

All points on the boundary of the continent cardioid can be found using the formula \displaystyle c=\frac{1}{4}(1-(e^{i\theta}-1)^2). For rational \displaystyle \frac{\theta}{2\pi}, this gives a parabolic point and for irrational \displaystyle \frac{\theta}{2\pi} this gives a Siegel point (a point whose Julia set has Siegel discs). The "most irrational" number (in the sense that it is least well approximated by rational numbers) is the golden ratio (used here in the equivalent form \displaystyle \frac{1}{2}(1-\sqrt{5})). The Siegel point corresponding to this number is the point reached by the algorithm given above. In some sense then, this area is the most difficult area of the Mandelbrot set to plot, and indeed if you attempt to zoom into it you'll quickly discover that you have to increase the maximum number of iterations exponentially to avoid losing detail. I don't think it's really practical to explore this area very far without a Mandelbrot set program that can do SOI, such as Fractint, Fractal Witchcraft (which is a pain to get working now as it's a DOS program), almondbread (which is a pain to get working now as one of its prerequisites seems to no longer be available) or the program I'm writing.

If you could zoom a long way into it, what would the Siegel area of the Mandelbrot set look like? Well, we know the part inside the cardioid is inside the set. We also know it's not a bond point, so at least part of the remainder is outside the set. The pattern of circles on the cardioid is self-similar (it's related to Farey addition) so the continent part would look similar to a shallow zoom on the same point. The rest is a bit hazy, though. My experiments suggest that the density and relative size of mu-molecules gets greater as one zooms in on this area, so an interesting question to ask is: what is the limit of this process? Would the area outside the continent be covered with (possibly highly deformed) tessellating mu-molecules, leading to a black screen? Or would only part of the image be covered by mu-molecules, the rest being extremely dense filament structures? Both possibilities seem to be compatible with the local Hausdorff dimension being 2. Either way I suspect there are some seriously beautiful images hiding here if we can zoom in far enough.

Summary of the taxonomy:

  • Exterior points (example: infinity)
  • Member points
    • Interior points
      • Superstable points (examples: 0, -1)
      • Non-superstable points
    • Boundary points
      • Filaments
        • Repelling points
          • Terminal points (examples: -2, +/- i)
          • Branch points
        • Chaotic, irrational external angle filaments
      • Feigenbaum points (example: ~-1.401155)
      • Shore points
        • Parabolic points
          • Cardioid cusps (examples: 0.25, -1.75)
          • Bond points (examples: -0.75, -1 +/- 0.25i, -1.25, 0.25 +/- 0.5i)
        • Siegel points (example: \displaystyle \frac{1}{4}(1-(e^{i(1-\sqrt{5})\pi}-1)^2) ~= -0.39054087021840008+0.58678790734696872i)

Determining if a point is in the Mandelbrot set

Sunday, August 2nd, 2009

I'm writing a super-fast, super-accurate Mandelbrot set plotter. This one is unusual in that no explicit iteration limit is ever set - it will keep iterating any point until it either escapes or is determined to be in the set. This works pretty well and produces much sharper cusps than any other program I've seen.

However, the downside is that it can take an arbitrarily long time to determine whether any given point is in the set or not. Points on the edge of the set are (as always) the particularly tricky ones - some of them exhibit chaotic behavior for a very long time before becoming periodic or escaping.

Currently I use two methods for determining if a point is in the set. I have explicit formulae for the main cardioid and the period-2 bulb, and I look for periodic orbits using Floyd's algorithm.

I'm wondering if a third method might be practical - one that determines if a point is in the set without waiting for it to become periodic. Here's the method I'm considering.

Let the Mandelbrot iteration function be f(z) = z^2 + c. Suppose we have a point c, and we suspect that it might be periodic with period N. We can determine if it is or not by finding a point z_f that is unchanged under N applications of f, i.e. a solution to f^N(z_f) = z_f. There will generally be 2^N such solutions, of which only N will be relevant to the case at hand. However, if we have a z that is likely to be near one of these fixed points we should be able to converge on one of them quickly by using Newton's method or something similar.

Once we have our fixed point, we take the derivative of f^N with respect to z and evaluate it at z_f. This is easy to do numerically - just evaluate f^N at z_f and at a nearby point, take the difference and divide by the original distance. If the modulus of the result is 1 or less, then the attractor is stable and c is in the set. If it isn't, the point could still be in the set, as we might have picked the wrong z_f or N - that is, we are likely to get false negatives but no false positives using this method (which is exactly what we want - if we get a false negative we can iterate some more, try a new N and/or a new z_f, and eventually get the right answer).
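Here's a sketch of the proposed test in Python: Newton's method on g(z) = f^N(z) - z to find the fixed point, then the multiplier test. One small variation on the text: I compute the derivative of f^N analytically via the chain rule rather than by finite differences.

```python
def find_periodic_attractor(c, N, z0, steps=50, eps=1e-9):
    """Newton's method on g(z) = f^N(z) - z for f(z) = z^2 + c,
    starting from the guess z0 (e.g. a neighbouring point's cycle)."""
    z = z0
    for _ in range(steps):
        w, dw = z, 1.0 + 0j
        for _ in range(N):        # evaluate f^N and d(f^N)/dz together
            dw = 2 * w * dw       # chain rule: (f^N)' = prod of 2*z_k
            w = w * w + c
        g, dg = w - z, dw - 1
        if abs(dg) < eps:
            break                 # derivative vanished; give up
        step = g / dg
        z -= step
        if abs(step) < eps:
            break
    return z

def is_stable(c, N, zf):
    """Multiplier test: the cycle through zf is attracting (or
    indifferent), i.e. c is in the set, iff |d(f^N)/dz| <= 1 there."""
    w, dw = zf, 1.0 + 0j
    for _ in range(N):
        dw = 2 * w * dw
        w = w * w + c
    return abs(dw) <= 1
```

For c = -1 this finds the superstable period-2 cycle {0, -1} (multiplier 0) and reports it stable; for c = 0.3, just past the cusp, the period-1 fixed point is repelling, so the test correctly fails to claim membership.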

Working through these steps symbolically gives us closed form solutions for the period-1 and period-2 attractors, but if we try it for period 3 we get a polynomial of degree 6 in z (2^3 = 8, minus the two roots from factoring out the period-1 fixed points) - I'm not sure if we can get a closed form solution for period-3 attractors from this.

The tricky part is choosing an appropriate N and an approximation to z_f for any given point. I think the way to do it is this: whenever we find a periodic point with Floyd's algorithm, iterate a further N times to figure out the period, and store the period and fixed point. When we start a new set of iterations on a point, check to see if any neighbouring points are periodic. If they are, try this method using the N from that point and the z_f (appropriately adjusted for the new c). When we find a periodic point this way, we store the found N and z_f just as for Floyd's algorithm.