Rendering Fractal Images using Photographs

Images and text © 2001 Kerry Mitchell


Fractal images are generally rendered using solid regions or continuous gradients of color. Except for a few fortuitous occasions, this results in an abstract image. A different style of rendering is to use the original fractal as a map, placing small copies of an external image (for example, a photograph) into the fractal. This article presents one method of doing that.

Preparing the photograph

This method is based on square external images. The only other requirement is that the image be 256 pixels on a side. Cropping and resizing are easily done in an image-processing program. Shown here is the external picture (of the author, handsome devil!) used with the final image.

Base fractal

Since the photograph has been cropped to a square, the base fractal needs to be colored in some fashion relating to squares. With Ultra Fractal, I use the "square trap" coloring in the "lkm-special.ucl" file in the formula database. The idea is to color according to how the orbit relates to a square: how many times it enters the square, a component of #z the first time the orbit entered the square, etc. This coloring works in #z space, which means that the squares can be fairly warped when viewed in pixel space. To recognize your external image in the final fractal, you need to balance the location, size, and amount of distortion of the warped squares. This can be accomplished by changing the center and side length of the square. Here are some examples, using the standard Julia set for c = (0.3, 0):

square center = (0,0)
side length = 2

square center = (0,0)
side length = 0.4

square center = (2,0)
side length = 1.25

Each region of color is a warped square. In the left image, there is so much distortion that the original photograph would probably not be recognizable. In general, this can be improved by reducing the size of the square or moving the square further away from the fractal. In the middle panel, the size has been greatly reduced. The regions look more like squares, so the photograph would suffer less distortion. Also, the regions are no longer touching, leaving much more blank (black) space. In the right panel, some of the blank space is reduced by having the squares overlap slightly. The center was moved outside the extent of the fractal, allowing the side length to be increased. Since the center is so far out, only divergent orbits will enter the square. Thus, the bailout value must be large enough to capture those orbits. Also, this combination of settings will only work with outside points; inside orbits would never enter the square and would have their pixels colored black.
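The square-trap test itself is simple: an iterate lies inside an axis-aligned square of a given center and side length when both of its components are within half a side length of the center. Here is a minimal sketch in Python (not Ultra Fractal's actual code; the function names, iteration limit, and bailout value are illustrative):

```python
def in_square(z, center=0 + 0j, side=2.0):
    """True if complex iterate z falls inside the axis-aligned
    square with the given center and side length."""
    half = side / 2.0
    return (abs(z.real - center.real) <= half and
            abs(z.imag - center.imag) <= half)

def last_hit(z0, c=0.3 + 0j, center=0 + 0j, side=2.0,
             max_iter=100, bailout=1e6):
    """Iterate z -> z^2 + c (a Julia set) from z0 and return the last
    iterate that entered the square, or None if the orbit never did."""
    z = z0
    hit = None
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bailout:
            break
        if in_square(z, center, side):
            hit = z
    return hit
```

Note that a large bailout (as discussed above) matters here: with a square centered far from the origin, only orbits allowed to grow large enough will ever register a hit.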

Given an appropriate base fractal image, it must then be colored so that the coordinate information can effectively be retrieved. For this, I use the "last real" and "last imag" modes in the "square trap" coloring. The idea is to color by the real part of the iterate the last time the orbit entered the square. This is done with one component of color (say, blue) in a linear ramp, from 0 (black) for the minimum value to full blue for the maximum value. This has the effect of coloring the "square-ish" regions in bands of increasing blueness from one side to the other, and is shown in the left panel of the image below. The real/blue layer is then combined with a second layer for the imaginary part of the iterate, colored in another component (say, red). The red bands will run sideways to the blue bands; if the blue bands are vertical, then the red ones will be horizontal (middle panel). The layers should be combined using the Addition merge mode, in order to preserve their individuality. That is, the red component of a pixel from the overall image will be the same as the red component of the same pixel from the imaginary/red layer. The combination is shown in the right panel. The black corner will be filled from the lower left corner of the photograph; the blue corner from the lower right; red from the upper left, and magenta from the upper right. The green component is used as a control. If the orbit of a pixel never lands in the square, then that pixel won't be colored using the photograph. In the base image, full green signifies that condition.

"last real" in blue

"last imag" in red

sum of both layers
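The two-layer scheme amounts to mapping the real and imaginary parts of the trapped iterate linearly onto one 8-bit channel each, with green reserved as the control. A sketch in Python of that encoding and its inverse (the channel assignments follow the text; the function names and defaults are illustrative):

```python
def encode(z, center=0 + 0j, side=2.0):
    """Map an iterate inside the square to (red, green, blue) bytes:
    blue ramps 0..255 with the real part, red with the imaginary part.
    Green = 0 marks 'orbit entered the square'."""
    half = side / 2.0
    blue = round((z.real - (center.real - half)) / side * 255)
    red = round((z.imag - (center.imag - half)) / side * 255)
    return (red, 0, blue)

MISSED = (0, 255, 0)  # full green: orbit never entered the square

def decode(rgb):
    """Recover the photo coordinates (x, y) from a base-image pixel,
    or None if the green control channel is set."""
    red, green, blue = rgb
    if green == 255:
        return None
    return (blue, red)  # x from blue, y from red
```

Round-tripping a pixel through encode and decode recovers an (x, y) pair into the 256 x 256 photograph, which is exactly what the final rendering program does.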

Building the final image

Once the base fractal is completed, it can then be interrogated by an auxiliary program and the final image generated. The program needs to read each pixel of the base image and determine (via the green channel) how the pixel should be colored. If the green channel is set, then the pixel should be colored black. Otherwise, a pixel from the photograph is read. The blue channel of the base is a number from 0 to 255 inclusive, which gives the x-coordinate of the pixel in the photograph. The red channel of the base likewise gives the y-coordinate. Also, the program needs to handle a few housekeeping tasks like the header information. Here is some functional (but not necessarily optimized) code in QBASIC, which uses 24-bit Targa files (the base fractal is assumed to be 480 x 480 pixels):

DEFDBL A-Z: CLS
DIM one AS STRING * 1 'one-byte buffer for file I/O

'256 x 256 photo image (24 bit Targa)
OPEN "kerry.tga" FOR BINARY AS #1

'red, green and blue base fractal (24 bit Targa)
OPEN "base.tga" FOR BINARY AS #2
mbase = 480
nbase = 480

'final image (24 bit Targa)
OPEN "final.tga" FOR BINARY AS #3

'read the header of the base and write the header of the final
'read the header of the photo file to advance the pointer
FOR i = 1 TO 18
GET #2, , one: PUT #3, , one
GET #1, , one
NEXT i

'step through each pixel of the base image
FOR j = 1 TO mbase
PRINT "row"; j
FOR i = 1 TO nbase

'read the blue, red, and green components of the base
GET #2, , one: blue = ASC(one)
GET #2, , one: green = ASC(one)
GET #2, , one: red = ASC(one)

'if green is set, then paint pixel black.
'otherwise, x = blue and y = red
IF green = 255 THEN
one = CHR$(0)
PUT #3, , one
PUT #3, , one
PUT #3, , one
ELSE
x = blue
y = red
k = (y * 256 + x) * 3 + 18
k = k + 1: GET #1, k, one: PUT #3, , one
k = k + 1: GET #1, k, one: PUT #3, , one
k = k + 1: GET #1, k, one: PUT #3, , one
END IF

' next pixel
NEXT i
NEXT j

'done; close all files
CLOSE #1: CLOSE #2: CLOSE #3
END
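For readers without QBASIC, the same mapping loop can be sketched in Python, operating on in-memory pixel lists rather than Targa files (the 18-byte header handling and file I/O are omitted; the function name and data layout are illustrative):

```python
def render(base, photo):
    """Build the final image from a base fractal and a 256x256 photo.
    base  : flat list of (red, green, blue) tuples, one per pixel.
    photo : photo[y][x] -> (red, green, blue), 256 rows of 256 pixels.
    Returns the final image as a flat list of (red, green, blue)."""
    final = []
    for red, green, blue in base:
        if green == 255:
            # control channel set: orbit missed the square, paint black
            final.append((0, 0, 0))
        else:
            # blue channel is the photo x-coordinate, red is the y
            x, y = blue, red
            final.append(photo[y][x])
    return final
```

As in the QBASIC version, the base image's size is arbitrary; only the photograph must be exactly 256 pixels on a side.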

And here is the resulting final image:

Suggestions for improvements

This basic idea can be improved and extended; here are a few suggestions:

The color palettes used here were simple linear ramps, black to blue and black to red. That, combined with the exact fractal and square trap used, will affect how the photograph appears in the final image. For example, the photo used here appears upside-down on the left and right side up on the right. Changing the blue palette to blue-to-black will flip the image horizontally, and changing the red to red-to-black will flip the image vertically. Moving from a linear ramp to a curved ramp will further warp the image.

Here, a single control variable was used in the green channel: if the pixel was full green, then color the corresponding pixel black in the final image. The green channel can be used in other ways. For example, different levels of green can be used as a signal to use different photographs. Alternatively, various levels of green can be used to implement different solid colors or effects.
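A multi-way control like that could be sketched as a simple dispatch on the green value (the thresholds and the actions named here are purely illustrative examples, not part of the original method):

```python
def control(green):
    """Interpret the green channel as a multi-way switch.
    The bands chosen here are arbitrary examples."""
    if green == 255:
        return "black"        # as in the basic method
    elif green >= 128:
        return "photo2"       # signal a second photograph
    elif green >= 64:
        return "solid-color"  # some other solid color or effect
    return "photo1"           # default: the original photograph
```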

The method as presented here relies on square external images, 256 pixels on a side. With some manipulation, the 8-bit green channel could be split so that (say) 3 more bits are allocated to each of the x- and y-coordinates. This would allow for 2048 x 2048 pixel images, with 2 bits, or 4 states, left for the control channel. This would be particularly useful for high-resolution renders, where the warped square regions can be larger than 256 pixels on a side.
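One possible layout for that bit-splitting: keep the low 8 bits of x in blue and of y in red, put the high 3 bits of each into the green channel, and reserve green's top 2 bits for control. A sketch in Python (this particular layout is one choice among several, not prescribed by the article):

```python
def pack(x, y, control=0):
    """Pack 11-bit x and y coordinates plus a 2-bit control code
    into (red, green, blue) bytes."""
    assert 0 <= x < 2048 and 0 <= y < 2048 and 0 <= control < 4
    blue = x & 0xFF                 # low 8 bits of x
    red = y & 0xFF                  # low 8 bits of y
    green = ((x >> 8)               # high 3 bits of x -> bits 0-2
             | ((y >> 8) << 3)      # high 3 bits of y -> bits 3-5
             | (control << 6))      # 2-bit control    -> bits 6-7
    return (red, green, blue)

def unpack(rgb):
    """Inverse of pack: recover (x, y, control)."""
    red, green, blue = rgb
    x = blue | ((green & 0x07) << 8)
    y = red | (((green >> 3) & 0x07) << 8)
    return (x, y, green >> 6)
```

The base fractal's coloring formula would have to be modified to emit green values in this packed form, and the auxiliary program to unpack them.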
