Archive for the ‘fractals’ Category

Fourier transform of the Mandelbrot set

Sunday, August 31st, 2008

I wonder what the Fourier transform of the Mandelbrot set looks like? More specifically, the 2D Fourier transform of the function f(x,y) = {0 if x+iy is in M, 1 otherwise}. This has infinitely fine features, so the Fourier transform will extend out infinitely far from the origin. It's aperiodic, so the Fourier transform will be non-discrete.

The result will be a complex-valued function of complex numbers (since each point in the frequency domain has a phase and amplitude). That raises the question of its analytical properties - is it analytic everywhere, in some places or nowhere? (Probably nowhere).
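
A brute-force numerical sketch of what I have in mind (the grid size, region and iteration limit are arbitrary choices of mine, and a serious attempt would use a proper FFT library rather than this naive DFT):

// Sample f(x,y) = 0 inside M, 1 outside, on an N x N grid and take a
// discrete 2D Fourier transform of it (which of course only approximates
// the true, infinitely detailed, aperiodic transform).
#include <complex>
#include <cstdio>
#include <vector>

typedef std::complex<double> C;
const double pi = 3.141592653589793;

// True if c appears to be in M, i.e. the orbit hasn't escaped after
// maxIter iterations - only an approximation to membership, of course.
static bool inM(C c, int maxIter = 1000)
{
    C z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        z = z*z + c;
        if (std::norm(z) > 4.0)
            return false;
    }
    return true;
}

// Naive in-place 1D DFT, O(n^2) - fine for a small grid.
static void dft1d(std::vector<C>& a)
{
    int n = (int)a.size();
    std::vector<C> out(n);
    for (int k = 0; k < n; ++k)
        for (int j = 0; j < n; ++j)
            out[k] += a[j]*std::polar(1.0, -2.0*pi*k*j/n);
    a = out;
}

int main()
{
    const int N = 256;   // grid resolution - my arbitrary choice
    std::vector<std::vector<C> > f(N, std::vector<C>(N));
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x) {
            C c(-2.5 + 4.0*x/N, -2.0 + 4.0*y/N);
            f[y][x] = inM(c) ? 0.0 : 1.0;   // f = 0 in M, 1 outside
        }
    for (int y = 0; y < N; ++y)   // transform rows, then columns
        dft1d(f[y]);
    for (int x = 0; x < N; ++x) {
        std::vector<C> column(N);
        for (int y = 0; y < N; ++y)
            column[y] = f[y][x];
        dft1d(column);
        for (int y = 0; y < N; ++y)
            f[y][x] = column[y];
    }
    std::printf("|F(0,0)| = %f\n", std::abs(f[0][0]));
    return 0;
}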

Other interesting Mandelbrot-set related functions that could also be Fourier transformed:
  • M_n(x,y) = the nth iterate of the Mandelbrot equation (\displaystyle f = |e^{-\lim_{n \to \infty}\frac{1}{n}M_n}|).
  • D(x,y) = the distance between x+iy and the closest point in the Mandelbrot set. Phase could also encode direction (see the sketch after this list).
  • P(x,y) = the potential field around an electrically charged Mandelbrot set.
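
For D, the standard exterior distance estimate (iterating the derivative dz/dc alongside z) would give a cheap approximation to the magnitude, though not the direction-as-phase part - a rough sketch, not anything I've plotted yet:

// Exterior distance estimate: iterate z <- z^2 + c and its derivative
// with respect to c, dz <- 2*z*dz + 1, then use d ~ 2*|z|*ln|z|/|dz|
// (accurate up to a constant factor). Returns 0 for points that don't
// escape within maxIter, treating them as inside M.
#include <cmath>
#include <complex>

typedef std::complex<double> C;

double distanceToM(C c, int maxIter = 1000)
{
    C z = 0.0, dz = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        dz = 2.0*z*dz + 1.0;   // update the derivative before z
        z = z*z + c;
        double r = std::abs(z);
        if (r > 1.0e6)
            return 2.0*r*std::log(r)/std::abs(dz);
    }
    return 0.0;
}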

Simplicity density function

Friday, August 29th, 2008

A commenter on Tuesday's post wondered what the density function of numbers with low complexity looks like. This seemed like an interesting question to me too, so I wrote some code to find out. This is the result:

Simplicity density function

I actually used a slight modification of the complexity metric (L(exp(x)) == L(-x) == L(log(x)) == L(x)+1, L(x+y) == L(x)+L(y)+1, L(0) == 1) and a postfix notation instead of a bracketing one. These are all the numbers x with L(x)<=20. More than that and my program takes up too much memory without some serious optimization.
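
For the curious, the enumeration itself is straightforward - this is a simplified sketch of the approach rather than the actual program linked below, and it makes no attempt at duplicate elimination across complexity levels:

// Bucket complex values by complexity under the modified metric
// L(0) = 1, L(exp x) = L(-x) = L(log x) = L(x)+1, L(x+y) = L(x)+L(y)+1.
// Values reachable at more than one complexity are counted once per
// level - no cross-level duplicate elimination here.
#include <cmath>
#include <complex>
#include <cstdio>
#include <set>
#include <vector>

typedef std::complex<double> C;

// Ordering so complex values can live in a std::set.
struct Less
{
    bool operator()(const C& a, const C& b) const
    {
        if (a.real() != b.real())
            return a.real() < b.real();
        return a.imag() < b.imag();
    }
};

int main()
{
    const int limit = 12;  // L(x) <= 20 needs far more memory, as noted
    std::vector<std::set<C, Less>> byL(limit + 1);
    byL[1].insert(C(0.0));
    for (int l = 2; l <= limit; ++l) {
        auto add = [&](C v) {
            if (std::isfinite(v.real()) && std::isfinite(v.imag()))
                byL[l].insert(v);
        };
        // Unary operations applied to everything of complexity l-1.
        for (const C& x : byL[l - 1]) {
            add(std::exp(x));
            add(-x);
            add(std::log(x));  // log of a negative real gives the +pi*i line
        }
        // x + y where L(x) + L(y) + 1 == l.
        for (int i = 1; i + 1 < l; ++i)
            for (const C& x : byL[i])
                for (const C& y : byL[l - 1 - i])
                    add(x + y);
        std::printf("L = %d: %zu values\n", l, byL[l].size());
    }
    // Plotting: accumulate hit counts per pixel and map them through a
    // palette like the thermal one described below.
    return 0;
}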

The smooth curves and straight lines are functions of the real line (which is quite densely covered). The strong horizontal lines above and below the central (0) line are +πi and -πi, which arise from taking the logarithm of a negative real number. There is a definite fractal nature to the image and lots of repetition (as one would expect, since every function is applied to every previously generated point up to the complexity limit).

I didn't add duplicate elimination rules for duplicates that didn't appear until L(x)>=8 or so, so some points are hotter than they should be, but I don't think fixing this would make the image look significantly different.

The code is here. This header file is also required for the Complex class, and this in turn requires this and this. The program is actually a sort of embryonic symbolic algebra program as it builds trees of expressions and pattern matches strings against them. It generates a 1280x1280 RGB image which I cropped down to 800x600. The colour palette is based on a thermal spectrum where temperature goes as an inverse seventh root of the number of hits on a pixel (no particular mathematical reason for that - I just think it makes it look nice). The points between black and red are fudged a bit.

Overlapping images in escape time fractals

Tuesday, July 15th, 2008

This is a post I made on sci.fractals recently (it didn't get any replies there though).

I've been playing about with drawing escape time fractals using different formulae. I've noticed that some formulae give images which seem to have a strange feature - in some places it looks like there are two overlapping images, with one image showing through in areas of the other.

For example, here is an image I made using the formula
z <- (z+1)*(z+c)*(z+i)
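
For concreteness, the inner loop is essentially the following (the starting value z = 0 and the bailout radius are my particular choices and don't seem essential to the effect):

// Escape-time iteration for z <- (z+1)*(z+c)*(z+i).
#include <complex>

int escapeTime(std::complex<double> c, int maxIter = 256)
{
    const std::complex<double> i(0.0, 1.0);
    std::complex<double> z = 0.0;
    for (int n = 0; n < maxIter; ++n) {
        z = (z + 1.0)*(z + c)*(z + i);
        if (std::abs(z) > 100.0)
            return n;    // escaped after n iterations
    }
    return maxIter;      // treated as non-escaping
}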

Can anybody help me to understand what is going on here? Are there really two iterative processes hidden in one formula, or is it an illusion? Does this phenomenon have a name that I could search for to find out more about it?

Use derivatives for SOI

Sunday, June 15th, 2008

This is an elaboration on a point from an earlier blog post.

Synchronous orbit iteration is a way of speeding up the calculation of fractals. The idea comes from the observation that the orbits of nearby points follow similar trajectories for a while. So one can iterate a rectangular array of points together, and subdivide it once a rectangle no longer approximates the iterated points well.

It seems to me that a better way to do this might be to compute some of the derivatives of the iteration function and iterate those instead of a grid of points, for the same reason that the accuracy of numerical integration is usually better improved by switching to a higher-order method than by decreasing the step size.

This method simplifies the algorithm which determines whether to subdivide or not (just see if the magnitude of the highest derivative exceeds some limit, rather than looking for rectangularity - which amounts to the same thing for the second derivative).

It's also even easier to subdivide - instead of interpolating to find the intermediate iteration points, just evaluate the Taylor series.
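
Concretely, for z ← z²+c the idea would be to iterate a truncated Taylor expansion of z_n(c) about a reference point c₀ - something like this (untested) sketch, where the truncation order and subdivision tolerance are arbitrary choices of mine:

// Keep a truncated expansion z_n(c0 + d) ~ a0 + a1*d + a2*d^2 about a
// reference point c0 and iterate the coefficients themselves.
// Start from a0 = a1 = a2 = 0 (i.e. z_0 = 0).
#include <cmath>
#include <complex>

typedef std::complex<double> C;

struct Series
{
    C a0, a1, a2;   // value, first and second Taylor coefficients in d
};

// One iteration: z -> z^2 + c0 + d, expanded and truncated at d^2.
Series step(const Series& s, C c0)
{
    Series n;
    n.a0 = s.a0*s.a0 + c0;
    n.a1 = 2.0*s.a0*s.a1 + 1.0;          // the "+1" comes from the d in c0 + d
    n.a2 = 2.0*s.a0*s.a2 + s.a1*s.a1;
    return n;
}

// Evaluate the series for a point at offset d from c0 - the
// "just evaluate the Taylor series" step when subdividing.
C eval(const Series& s, C d)
{
    return s.a0 + (s.a1 + s.a2*d)*d;
}

// One possible subdivision test: give up on the series for a region of
// radius r once the highest-order term's contribution stops being
// negligible.
bool needSubdivide(const Series& s, double r, double tolerance = 1.0e-3)
{
    return std::abs(s.a2)*r*r > tolerance;
}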

Of course, I'll need to do some experiments to determine if this is truly a better method (at the moment there are some more fundamental changes that my fractal plotter needs before I can play with this sort of thing). As far as I can tell nobody's ever tried it before, though. Classical SOI is difficult enough to get right.

Rotating fractals

Thursday, May 22nd, 2008

Fractals like z=z^2+c and z=z^3+c are easy to draw because they just involve complex addition and multiplication. What happens if we try to generalize these sorts of fractals to non-integer powers?

We can do this using the identities
\displaystyle z^w = (e^{\frac{\log z}{\log e}})^w = e^{w\frac{\log z}{\log e}}
\displaystyle e^{x+iy} = e^x(\cos y + i \sin y)
\displaystyle \frac{\log z}{\log e} = \frac{\log |z|}{\log e} + i(\arg z + 2n\pi)

The trouble is, we have to pick a value for n. Given a parameter \theta we can pick n such that \theta - \pi \leq \arg(x+iy) + 2n\pi < \theta + \pi (i.e. choose a branch cut of constant argument and a sheet of the Riemann surface): \displaystyle n = \lfloor\frac{1}{2}(\frac{\theta - \arg(x+iy)}{\pi} + 1)\rfloor. Then as we gradually vary \theta the fractal we get will change. For w = u+iv, v \ne 0, increasing \theta will cause z^w to "spiral in" from infinity towards the origin (v<0) or out (v>0). When v = 0, increasing \theta will have a periodic effect, which will cause the fractal to appear to "rotate" in fractally sort of ways with a period of \displaystyle \frac{2\pi}{u}.
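
In code, picking the branch looks something like this sketch (my own, with an arbitrary bailout radius and assuming u > 0):

// z^w using the branch of log z selected by theta, i.e. n chosen so that
// theta - pi <= arg z + 2*n*pi < theta + pi.
#include <cmath>
#include <complex>

typedef std::complex<double> C;
const double pi = 3.141592653589793;

C powBranch(C z, C w, double theta)
{
    double a = std::arg(z);    // principal argument, in (-pi, pi]
    double n = std::floor(0.5*((theta - a)/pi + 1.0));
    C logZ(std::log(std::abs(z)), a + 2.0*pi*n);
    return std::exp(w*logZ);
}

int escapeTime(C c, C w, double theta, int maxIter = 256)
{
    // Start from z = c rather than z = 0 (equivalent for u > 0, and it
    // avoids taking the logarithm of zero on the first iteration).
    C z = c;
    for (int i = 0; i < maxIter; ++i) {
        z = powBranch(z, w, theta) + c;
        if (std::abs(z) > 1.0e6)
            return i;
    }
    return maxIter;
}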

It would be interesting to plot these as 3D images (like these) with \theta on the z axis. These would look sort of helical (or conical, for v \ne 0).

Using u < 0 poses some additional problems - the attractor isn't bounded by any circle of finite radius about the origin, so it's more difficult to tell when a point escapes and when it is convergent but just currently very far out. Clearly some further thought is necessary to plot such fractals accurately - perhaps there are other escape conditions which would work for these.

Real-time Mandelbrot set zooming

Thursday, September 6th, 2007

A while ago, I discussed a data structure which makes real-time Mandelbrot zooming possible. I finally got around to implementing it and the result is here (source code). Windows only for now, but it could be ported to other platforms with some effort.

As well as real-time zooming, the program has a logarithmic palette, progressive resolution, progressive dwell limit and anti-aliasing.

The program should run on Windows 98 or later and has no dependencies. It uses GDI for drawing - DirectX is not needed. It'll work best if your video mode is set to 32-bit colour depth. It's reasonably fast on my 2GHz Core Duo machine (even though it only uses one core at the moment), but the faster your machine the better. It can be a bit memory hungry (I've seen it use over 200MB for a detailed 1600x1200 image).

The program is controlled with the mouse - hold down the primary button to zoom in, the secondary button to zoom out and drag with the middle button to pan. I was surprised to learn after implementing them that these controls are exactly the same as XaoS's. I'd like to think that it's convergent evolution, but I probably just subconsciously remembered them from the last time I used XaoS, some months earlier.

If you zoom in too far (by a factor of 30 billion or so) you'll reach the limit of double-precision arithmetic and the image will dissolve into rectangles. Eventually I plan to implement multiple-precision arithmetic, which will make it possible to zoom in much further - though there will always be a practical limit set by just how long it takes to multiply two numbers of sufficient precision.

If you leave it for a while without zooming it will gradually refine the image to sub-pixel level and make some really gorgeous images. It will calculate faster if the window is covered up - redrawing is quite slow, especially once the image is quite highly refined. This is also why the zooming is a bit jerky if you leave it for a while. I'm thinking of ways to improve the rendering speed - I'm wondering if I can do better (at least for highly refined areas) by rendering to a grid-aligned array first and then resampling, instead of rendering each block directly.

I've experimented with SOI (using linear interpolation and a quadrilateral method for divergence detection) but so far it doesn't seem to make much difference (though the fact that it doesn't slow things down terribly is encouraging). This version doesn't include those experiments. Perhaps I need to use a higher-order method. AlmondBread and Fractint use quadratic interpolation. I'm wondering if better results can be achieved by iterating some derivatives of the Mandelbrot formula instead of extra points. Then I wouldn't need to interpolate at all - just evaluate the Taylor expansion when I need to subdivide. And divergence detection would just be a matter of looking at the highest order derivative.

Recursive subdivision

Tuesday, May 8th, 2007

Recently I was experimenting with drawing some Mandelbrot sets. Rather than scanning across the window pixel by pixel and doing one escape per pixel, I generated a sequence of complex numbers within the window by repeatedly adding a complex number with irrational real and imaginary parts (the tangent of the argument of which was also irrational) and wrapping the real and imaginary parts into the window. This scatters points across the window evenly in such a way that no point is ever hit twice (within the bounds of precision) and that the pixel grid becomes gradually more dense.

Thus the image sort of "rezzes in" and is gradually refined. Anti-aliasing is sort of built in - as the distance between samples gradually decreases, the effective resolution gradually increases. The program never completely stops, but you can tell when it's done as the picture seems perfectly anti-aliased and stops changing. I didn't build this in, but the generated points could potentially be redrawn in a larger window to do a sort of (limited) real-time zoom (a la XaoS but hopefully with less artifacting - though it might take a lot of memory to store all that data).
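
The point generator itself is just an additive recurrence - something like this sketch (the particular irrational constants are my own arbitrary choices and don't matter much):

// Scatter sample points over a w x h window by repeatedly adding a fixed
// irrational offset and wrapping. Points never repeat (up to precision)
// and the coverage gets denser over time.
#include <cmath>
#include <cstdio>

int main()
{
    const double w = 800.0, h = 600.0;
    const double dx = w*(std::sqrt(5.0) - 1.0)/2.0;   // golden-ratio step
    const double dy = h*(std::sqrt(2.0) - 1.0);
    double x = 0.0, y = 0.0;
    for (int i = 0; i < 10; ++i) {
        x = std::fmod(x + dx, w);
        y = std::fmod(y + dy, h);
        std::printf("sample %d: (%.2f, %.2f)\n", i, x, y);
        // ...map (x, y) to a point c in the complex plane, run the
        // escape-time iteration, and plot the result at the nearest pixel.
    }
    return 0;
}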

This is kind of neat but I think we can do much better:

  • The program doesn't take advantage of the natural structure of the Mandelbrot set (large "lakes" of high iteration count, which have detail only at the edges)
  • The iteration limit is fixed (it would look wrong if some pixels had different iteration limits) - it would be nice if the iteration limit could be gradually increased (for points near the boundary) at the same time as resolution was gradually increased.
  • There is no way of reusing the calculation for one point for a nearby point (Synchronous Orbit Iteration, which can really speed things up for very deep zooms)

I think all these things are possible with the right data structure. My idea is to form a sort of 2-dimensional linked list. The image is divided into rectangles. Each rectangle has iteration data (corresponding to its center) and 8 pointers to other rectangles, two on each edge. The two pointers may point to the same rectangle or to different rectangles. Suppose both right-pointers of rectangle A point to rectangle B. Then either the two left-pointers of rectangle B can point back to rectangle A, or only one of them does. Thus, each rectangle is bordered on each side by either two rectangles, one rectangle or half a rectangle. And by a process analogous to linked-list insertion, any rectangle can be subdivided horizontally (if it doesn't have half rectangles on either of the horizontal edges) and/or vertically (if it doesn't have half rectangles on either of the vertical edges).
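
In code, the structure might look something like this - just a sketch of the layout and the "half a rectangle" test, with the actual splitting logic (the fiddly part) omitted, and with field names of my own choosing:

// Each rectangle carries the iteration data for its centre and two
// neighbour pointers per edge.
#include <complex>

struct Rectangle
{
    std::complex<double> c;        // parameter value at the centre
    std::complex<double> z;        // current iterate for the centre
    int iterations;                // iterations performed so far
    bool escaped;                  // set once |z| exceeds the bailout

    // Two neighbour pointers per edge. Sides: 0 = left, 1 = right,
    // 2 = top, 3 = bottom. The two pointers on a side may refer to the
    // same rectangle or to two different (smaller) ones.
    Rectangle* neighbour[4][2];
};

// True if this side is covered by part of a single larger rectangle
// ("half a rectangle"): both our pointers refer to the same neighbour,
// but only one of its pointers on the opposite side refers back to us.
bool halfRectangleOn(const Rectangle* r, int side)
{
    const Rectangle* b = r->neighbour[side][0];
    if (b != r->neighbour[side][1])
        return false;              // two distinct smaller neighbours
    int opposite = side ^ 1;       // 0<->1, 2<->3
    return b->neighbour[opposite][0] != r || b->neighbour[opposite][1] != r;
}

// "Can be subdivided horizontally if it doesn't have half rectangles on
// either of the horizontal edges" - and similarly for vertical splits.
bool canSubdivideHorizontally(const Rectangle* r)
{
    return !halfRectangleOn(r, 2) && !halfRectangleOn(r, 3);
}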

Thus at any point we can increase resolution (by splitting rectangles) or iteration count (by iterating on existing rectangles). If a rectangle is bounded by lake on all sides it is probably also full of lake and we can avoid subdividing or iterating, meaning that we do less work (or defer work) in the boring areas. We also have a natural way of doing SOI (copying the iteration data when we split a rectangle).

We can also do real-time zooming, by noticing that the relationship between rectangle dimension and pixel dimension need not be fixed, and (if necessary) throwing away or paging out rectangles that are outside the window.

There are some fiddly parts, like "how do you divide your priorities between iterating and subdividing" and "what do you do at the edges", which are the reason I haven't explored this any further, but it's an interesting project.

Fractal gallery

Friday, November 2nd, 2001

For those of you who can't get enough of fractal pictures, here are some of my favorites. The orbit types were created with programs of my own devising, written with DJGPP and Allegro. The others were created with Fractint, at 1600x1200 and resampled to 800x600.