Chaos math 101

Much has been written over the last few years on the subject of chaos. The term chaos refers to the behavior of seemingly simple systems: behavior complicated to the point of unpredictability. While chaos was receiving a great deal of media attention, a counterpoint was being developed: the idea of the spontaneous formation of order within seemingly complex systems. This concept, marketed under the name "complexity theory," promises to address questions which were previously too difficult to study rigorously.

Deterministic chaos (sensitive dependence on initial conditions) arises out of systems of iteration. When a particular process is carried out over and over again and its elements relate in a nonlinear fashion, chaos emerges. Take, for instance, a lump of bread dough. Place two raisins next to each other at some point on the dough and begin kneading it. As each fold rearranges the dough, the raisins are moved about. Despite the fact that they were close together initially, the raisins may end up far apart. Because a tiny difference in a raisin's initial position can produce a great difference in its final position, we say that the system exhibits sensitive dependence on initial conditions. Our bread dough is chaotic.

It doesn't take anything as complicated as bread dough to demonstrate chaos, however; chaos lurks even in seemingly simple, predictable functions. Take, for example, the simple parabola y = cx(1 - x). If one takes values of c ranging from 0 to 4 and iterates the function (that is, starts with a value for x, finds y, substitutes y for x in the equation, and repeats the process), then this function is known as the logistic equation; its primary use is modeling populations over time. The function's apparent simplicity belies its hidden chaotic properties. If one takes a given c value and iterates the function over the region 0 < x < 1, the function (predictably) begins to converge on a certain value; the surprising thing is that, depending on the value of c, the logistic equation may converge to anywhere from one to an infinite number of limits!
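
This is easy to see numerically. The short Python sketch below iterates the logistic equation for three illustrative c values (the particular values and starting points are choices made for this example, not taken from any source): one c that converges to a single limit, one that settles into a two-value cycle, and one that is chaotic, where even a tiny change in the starting x leads to a wildly different result.

    # Iterate the logistic map y = c*x*(1 - x) a fixed number of times.
    def iterate(c, x, steps=1000):
        for _ in range(steps):
            x = c * x * (1 - x)
        return x

    # c = 2.8: converges to one limit; c = 3.2: settles into a cycle of
    # two limits; c = 3.9: chaotic -- nearby starting points diverge.
    for c in (2.8, 3.2, 3.9):
        a = iterate(c, x=0.400000)
        b = iterate(c, x=0.400001)
        print(f"c = {c}: from 0.400000 -> {a:.6f}, from 0.400001 -> {b:.6f}")

For the first two values of c, the two nearby starting points end up in the same place; for c = 3.9, they end up nowhere near each other, which is sensitive dependence on initial conditions in action.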

If such a simple mechanism can generate such complicated behavior, is it not also possible that complicated mechanisms are able to generate relatively simple patterns? After all, the day-to-day world is an extremely complex place, yet most of the things we encounter behave in fairly predictable ways. People are able to drive cars in straight lines; basketball players are able to bounce a ball, although the ball's motion is mathematically very complex. The human form itself, while never identical for any two individuals, is eminently recognizable. What is this underlying pattern of similarity, this property of regularity in systems whose mind-boggling complexity we cannot even begin to comprehend? The answer lies in attractors.

Attractors are values or patterns that particular orbits (iterated evaluations of a function performed on some given starting point) tend to approach. For example, a ball at the bottom of a bowl will simply sit immobile. That state is the attractor of the ball-bowl system. If one places the ball at any other point in the bowl, it will eventually tend toward sitting at the bottom. Anyone who doubts this may experiment: take any bowl and ball, set the ball in motion, and watch; it will eventually slow down and stop moving, at which time it will have reached its stable state.

Attractors can be as simple as the ball's stable state, or they can be as complicated as the patterns on a butterfly's wing. The ball in the bowl is an example of a system with a simple attractor. If viewed from above, the ball will travel in a spiral. The center of the spiral is appropriately called a "point attractor": as a point travels inward along a spiral, its position tends toward the center point. Attractors that fall into the more complex category are dubbed "strange attractors." The essential property that strange attractors possess is that they consist of orbits which, although infinite in number and bounded in space, never cross. The curves of the orbits traversing the attractor are infinite in one dimension but bounded in another. Since they are neither truly one-dimensional nor two-dimensional entities, they are considered to be of fractional dimension.

[Image: The Mandelbrot Set]

The renowned Mandelbrot set. The set represents the basin of
attraction for the orbit of Z(n+1) = Z(n)^2 + C iterated from the
value Z(0) = 0 + 0i. Different points on the image correspond to
different C values on the complex plane. Points that "escape"
(whose values increase without bound) are colored according to the
number of iterations it takes them to reach a threshold value; points
in black never escape.
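
The coloring scheme in the caption translates directly into code. Here is a rough escape-time sketch in Python; the grid bounds, the character "palette," the iteration cap, and the conventional escape threshold |Z| > 2 are all assumptions made for this illustration.

    # For each C on a small grid, iterate Z = Z^2 + C from Z = 0 and count
    # how many steps it takes |Z| to exceed the escape threshold of 2.
    def escape_count(c, max_iter=30):
        z = 0 + 0j
        for n in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                return n       # escaped: "color" by iteration count
        return None            # never escaped: part of the set

    shades = " .:-=+*#%"
    for row in range(24):
        line = ""
        for col in range(64):
            c = complex(-2.2 + 3.0 * col / 63, -1.2 + 2.4 * row / 23)
            n = escape_count(c)
            line += "@" if n is None else shades[n % len(shades)]
        print(line)

Points printed as "@" play the role of the black region in the image; every other character encodes how quickly that C value escaped.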

 

What does it mean to say that an object is "of fractional dimension"? For that matter, what does it mean to say that an object has ANY dimension? Benoit Mandelbrot, a pioneer in the field of chaos, offers one answer, which relies on the principles of self-similarity and scaling.

Self-similarity and scaling are intimately related principles. To say that a thing is self-similar is to say that it can be divided into sections which, if scaled by a certain value, will resemble the figure as a whole. Nature provides us with many examples: clouds, coastlines, and even cauliflower display similar structure at different scales.

Using the idea of self-similarity, we can define dimension in terms of scaling factors. Take a line segment of length 1, for instance. It can be broken into s sub-segments, each with length 1/s, and each self-similar to the original segment when enlarged by a factor of s. Dimension is computed by taking the logarithm of the number of pieces needed to construct the whole and dividing it by the logarithm of the scaling factor. That is, if the number of pieces is a and the scaling factor is b, then dimension = (log a)/(log b). For our line segment, this means that its dimension is (log s)/(log s), or simply 1. By the formula, we can also correctly find the dimension of a cube. If we break it into smaller cubes with side length 1/s, we find that we need s^3 cubes to reconstruct the original cube, and that the scale factor needed to mimic the original is s. Applying the formula again, we find that (log s^3)/(log s) = 3(log s)/(log s) = 3.
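
The computation is short enough to state directly in Python; the sample value s = 4 below is an arbitrary choice for illustration.

    from math import log

    # Similarity dimension: log(number of pieces) / log(scaling factor).
    def dimension(pieces, scale):
        return log(pieces) / log(scale)

    print(dimension(4, 4))    # a segment in 4 pieces, each enlarged 4x: 1.0
    print(dimension(64, 4))   # a cube in 64 sub-cubes (4 per side):     3.0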

The two examples above verify our prior knowledge concerning certain objects of integer dimension: lines are one dimensional, and cubes are three dimensional. But what does it mean to say that an object has fractional dimension? An object of fractional dimension is simply one for which log(a)/log(b) is not an integer. To illustrate this principle, as well as the principle of the strange attractor as a whole, let us examine one such animal.

The attractor we will examine is known variously as the Sierpinski Triangle and the Sierpinski Gasket. It can be constructed by taking a solid triangle, subtracting out the triangle formed by the midpoints of each of the line segments composing the original triangle, and repeating this process on the remaining triangles ad infinitum. The resulting shape is neither truly one dimensional nor two dimensional; it is fractal. To calculate the fractional dimension of the Gasket, we need to find a way to break it up into a number of evenly-sized self-similar components. This is easily accomplished here, as the Gasket can quite clearly be broken up into the three sub-gaskets which were created by the first subtraction. Now to find the scaling factor. Each sub-gasket is 1/4 the area of the original triangle, and scaling any particular one by a factor of two would reproduce the original gasket. The dimension of the gasket, then, is (log 3)/(log 2), or approximately 1.585. The Sierpinski Gasket is thus about halfway between being one dimensional and being two dimensional.

[Image: The Sierpinski Gasket]

To generate the Sierpinski Gasket, one begins with a triangle, divides
it into four congruent equilateral triangles, removes the middle triangle,
then repeats the procedure for each of the remaining triangles. The
Sierpinski Gasket is the limit of this procedure as it is repeated
infinitely many times.

The aforementioned method of generating the Sierpinski Gasket is very useful for seeing its fractional dimension, but it hides the shape's great significance as an attractor. Another way of getting precisely the same shape is to take a triangle and a point within the triangle, find the midpoint of the segment connecting the point and one of the triangle's vertices (chosen at random), plot it, and repeat the process from that point ad infinitum. Strange though it may seem, the shape produced by this method is precisely the same as that produced by the triangle subtraction method. The orbit of the midpoint converges rapidly to the gasket regardless of the starting point's placement, marking the gasket as a powerful attractor for this process.
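
This midpoint process is commonly known as the "chaos game," and it takes only a few lines of Python to play; the triangle's vertices, the starting point, and the iteration count below are all arbitrary choices for illustration.

    import random

    # The chaos game: jump halfway toward a randomly chosen vertex, forever.
    vertices = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.866)]
    x, y = 0.3, 0.3   # any starting point inside the triangle works

    points = []
    for i in range(20000):
        vx, vy = random.choice(vertices)
        x, y = (x + vx) / 2, (y + vy) / 2
        if i > 20:    # skip the first few points while the orbit settles
            points.append((x, y))

    # Render on a coarse character grid; the gasket emerges on its own.
    W, H = 64, 32
    grid = [[" "] * W for _ in range(H)]
    for px, py in points:
        grid[H - 1 - int(py * (H - 1))][int(px * (W - 1))] = "*"
    print("\n".join("".join(row) for row in grid))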

If it seems strange that two completely different methods can be used to create the gasket, how about three, four, or even five? One of the more interesting ways to construct the gasket is with Pascal's Triangle: one need simply draw a few lines of the triangle and begin to shade over all of the odd numbers within it in order to see the gasket begin to form. Another way relies on space-filling curves, and still another way of creating the gasket is to "grow" it using cellular automata (such as Conway's "Game of Life"). More methods exist, but already it is evident that there is something significant about this "polydemic" shape, that is, one that exists in two or more regions. The Gasket, like the shape of trees or the Fibonacci Sequence, is a mathematical form which shows up time and time again, in many different places and under many different circumstances, and it was through examination of these seemingly universal concepts that the idea of "universality" first arose.
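
The Pascal's Triangle construction is easy to try for yourself. The small Python sketch below (16 rows is an arbitrary choice) marks odd entries with "#" and even entries with "."; the "#" marks trace out the gasket.

    # Build Pascal's Triangle row by row, shading the odd entries.
    rows = 16
    row = [1]
    for r in range(rows):
        print(" " * (rows - r) + " ".join("#" if v % 2 else "." for v in row))
        row = [1] + [row[i] + row[i + 1] for i in range(len(row) - 1)] + [1]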

Universality, the notion that there are certain underlying properties common to all systems, is in some ways a new idea, and in some ways a very old one. In ancient times the Greeks sought a single unifying relationship to describe the world. Physicists still search for a Grand Unified Theory; in the past, men of mathematics proudly predicted the day when all the universe could be reduced to a single equation. Those hopes were smashed by the discovery of chaos, which teaches us that the vast majority of nonlinear systems are not solvable, but they are now being reborn in the form of universality.

Is, then, universality at odds with chaos? Although finding order in chaos seems paradoxical, it is not. Chaos and universality, far from being in conflict, are complementary ideas. The theory of chaos says that you can never know exactly how a dynamic system will behave; universality asserts that, regardless, you can often know its approximate behavior.

This paradigm fits smoothly with our understandings from other areas of science. The motions of electrons, for instance, are chaotic in that it is impossible to predict exactly what a particular electron will do in any instance, but are universal in the sense that they are governed by basins of attraction which determine their statistical behavior. Likewise, in large-scale chemical reactions the behavior of particular molecules is unpredictable, while the reaction as a whole is not.

This blossoming of order from chaos is intuitive as well. The human visual system uses attractors to identify all manner of things; because of universality, you can recognize your crazy uncle Harry whether he is wearing golf pants or a suit of armor. Humans make decisions by looking at the central tendencies and underlying similarities in the world, and making probabilistic predictions based on them.

Ours is a world which is neither certain nor random, and our realization of the importance of the interplay of order and chaos has opened up the doors to a whole realm of investigative opportunities. Those studying this region between order and disarray have dubbed their work "complexity theory," and in examining the subtle interactions between the instability of deterministic chaos and the building of attractor conditions they have come to identify several universal behavior patterns.

One of these is self-organization: the tendency of attractor conditions to form spontaneously from chaotic interactions. Although we have heard much of "entropy," the principle that the universe tends towards disarray, we find that the matter of the universe has formed patterns. Self-organization is evident in everything from the evolution of life on earth to Jupiter's Great Red Spot, and is crucial to understanding such phenomena as memory and pattern recognition.

The most exciting thing, perhaps, about our advances in the understanding of order and chaos is the fact that chaos and complexity theory are acting as gateways, allowing scientists to address problems which were heretofore considered too difficult to study. One of the fields to benefit most from these insights has been economics, which has been intimately involved with complexity theory since shortly after the field's inception. Markets are chaotic environments, but there is a great stake in understanding their behavior, and recent work with attractors has yielded enough promise to spawn market analysis tools. One company, Cross/Z International Inc., has sold such software to companies including Club Med Inc. and American Express Co. These notions have also led to the creation of at least one investment firm which watches for chaotic patterns in the stock market.

Likewise, the potential for advances in the understanding of social systems is enormous. Whereas before only qualitative observations could be made, it is now becoming possible to look for attractors and to analyze systems for scaling factors. The growing awareness of the potential utility of these new methods is reflected in the fact that articles concerning the application of chaos and complexity theory to existing problems have cropped up in increasing numbers in professional journals such as the Journal of the American Psychoanalytic Association.

In addition to expanding the reach of the sciences, improvements in chaos and complexity theory are bearing fruit inside the technical world. One area of intense research is the treatment of heart arrhythmias. By understanding the patterns of electrical firings which lead to fatal arrhythmias, scientists hope to be able to put a stop to the process, thereby saving thousands of lives each year. Another area of promise is computer programming, where technicians are working to apply evolutionary principles to such mundane tasks as sorting and hashing algorithms. By creating programs which compete in a virtual environment to accomplish a given task with the greatest efficiency, programmers are able to let algorithms be "grown" rather than made. Successes in creating self-organizing computer programs likewise serve to bolster the work being done in nanotechnology and genetic engineering, both of which make use of the self-organizing properties of systems to accomplish goals.
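
To make the "grown rather than made" idea concrete, here is a minimal Python sketch of one such evolutionary approach (a toy example, not any particular product or project mentioned above): it "grows" a small sorting network by keeping the candidate networks that correctly sort the most test inputs and mutating them. The network size, population size, and mutation scheme are all arbitrary choices for illustration.

    import random

    N = 4   # number of values the evolved network must sort

    # A network is a list of comparators (i, j): swap if out of order.
    def apply_network(network, values):
        v = list(values)
        for i, j in network:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        return v

    # By the 0-1 principle, a network sorting every 0/1 vector sorts anything.
    TESTS = [[(x >> k) & 1 for k in range(N)] for x in range(2 ** N)]

    def fitness(network):
        return sum(apply_network(network, t) == sorted(t) for t in TESTS)

    def random_gene():
        i, j = random.sample(range(N), 2)
        return (min(i, j), max(i, j))

    def mutate(network):
        child = list(network)
        child[random.randrange(len(child))] = random_gene()
        return child

    # Evolve: keep the fittest half, refill with mutated copies.
    population = [[random_gene() for _ in range(6)] for _ in range(200)]
    for generation in range(300):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TESTS):
            break
        survivors = population[:100]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(100)]

    print("best network:", population[0])
    print("sorts all inputs:", fitness(population[0]) == len(TESTS))

No one writes the winning network by hand; it emerges from the competition, which is precisely the appeal of the approach.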

As these new perspectives on order and chaos expand the sciences and revitalize our technological capabilities, they send out ripples which will ultimately affect our lives in many different ways. Just as the old-rationalist ideas of Newton found their way into society via management policies such as Taylorism and philosophies such as utilitarianism, so shall the new rationalists have an effect on the way people perceive the world and how they act. Whereas the old perspective saw chaos and order as separate, inversely related properties, the new view recognizes the ultimate inseparability of the two. Whereas the old ideas considered it essential to have explicitly mandated, force-maintained social orders, the new perspective recognizes the potential for the evolution of cooperation among individuals. Whereas scientists used to believe that there were definite answers to all questions, but found many questions unapproachable, this new breed recognizes that, while we can't have all the answers, we can often find the underlying properties which define prevailing tendencies. Just what becomes of this scientific movement remains to be seen; chaos and complexity theory are not considered "true" sciences in the usual sense, and will eventually re-merge with the greater scientific world. Nevertheless, it seems that a new understanding of order and chaos is here to stay.