This month, Physics Today, the magazine of the American Physical Society, published an article about the 50th anniversary of the Lorenz model. At the link, you can read the entire article. In it, experts describe the history of chaos, Lorenz’s discovery of it, and some of the state of the field today, with far less technical jargon than the original literature.

Fifty years ago, Edward Lorenz first captured the mathematical phenomenon we now know as chaos, popularly called the “butterfly effect“. Below is a picture from the Lorenz model exhibiting chaos. The idea of chaos boils down to highly structured behavior that nevertheless cannot be predicted. No matter how precisely we measure, after some time we cannot know the state of the system. We can, however, say that the system will stay in a certain region; in the picture below, there are definitely places the trajectory does not visit. We observe this with weather models: the forecast is good for a couple of days, so-so for a couple of days after that, and completely inaccurate for anything further in the future. Analogously, we can say that it will not be -100 °C tomorrow. Appropriately, Lorenz’s discovery of chaos came about as he tried to develop a model of the weather. Chaos is all around us and can be observed in a great number of systems.

The Lorenz system, which turned 50 this year

At this link, you can play with a fun Lorenz model Java applet. The trick with the applet is choosing the right parameters. Try setting the “spread” to 0.1, the “variation” to 20, the “number of series” to 2, and the “refresh period” to 100. Then push the buttons “reset the parameters” and “restart”. This will start 2 trajectories in the Lorenz model whose initial conditions differ by only 0.1. You will quickly see the two paths diverge and become completely unrelated. If you reduce the “spread” to 0.01, the same thing will happen, though it will take longer. As long as the spread is greater than 0, the two paths will eventually diverge.
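If you prefer code to applets, the same experiment can be sketched in a few lines of Python. This is my own toy Euler integration with the classic Lorenz parameter values, not the applet’s code; the step size and sampling interval are arbitrary choices. Two trajectories start 0.1 apart in x, and we sample the distance between them as they evolve:

```python
# Two Lorenz trajectories whose initial conditions differ by 0.1,
# advanced with simple Euler steps (sigma, rho, beta: the classic values).
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def separation(spread=0.1, steps=20000):
    """Sample the distance between two nearby trajectories over time."""
    a = (1.0, 1.0, 1.0)
    b = (1.0 + spread, 1.0, 1.0)
    samples = []
    for i in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        if i % 5000 == 0:
            samples.append(sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5)
    return samples

print(separation())  # the distance grows until the paths are unrelated
```

Shrinking `spread` delays the divergence but never prevents it, just as with the applet.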

This is why we cannot predict the state of a chaotic system: our ability to measure the state of the system is inevitably flawed. Even if we could measure the state of the weather to 99.99999% accuracy, that 0.00001% inaccuracy would eventually lead to divergence. And you can imagine that achieving 99.99999% accuracy is much harder and more expensive than 99.9% accuracy.

The small-world phenomenon refers to the fact that even in a very large population, it takes relatively few connections to get from any one element to any other. Among people, we know this concept as the “six degrees of separation” game. Any population of objects with connections can be conceptualized this way. Examples include crickets communicating by audible chirps, websites with links, electrical components with wiring, corporate boards with shared members, and coauthors of scientific papers. All of these examples have been examined in scientific studies.

In a small-world network, elements are first connected in a regular lattice; for example, each element is connected to one or two nearby neighbors on each side. The leftmost picture below shows a regular lattice of elements. A connection between element i and element j is then removed, and a new connection is added between element i and some other element x, as in the middle picture below. If x is across the network from i, then the number of steps between i and x has been reduced from some large number to 1, and all of the elements connected to i are now 2 steps from element x. This reduces the diameter of the network, which is the maximum number of steps between any two elements, even though the number of connections remains constant. In the six degrees of separation game, the diameter would be 6. As we replace more of the lattice connections with random ones, the network becomes more and more random. We quantify a small-world network by how many of its connections have been randomly rewired, as in the picture below.
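The rewiring step above can be sketched in Python. This is a toy construction of my own (not code from any of the studies mentioned): build a ring lattice, swap one local connection for a long-range shortcut, and watch the average number of steps between elements drop while the total number of connections stays fixed.

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path distance over all node pairs (BFS from each node)."""
    total, pairs = 0, 0
    for start in adj:
        dist = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in adj[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_lattice(n, k=2):
    """Ring where each element connects to its k nearest neighbors per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

ring = ring_lattice(30)
before = avg_path_length(ring)

# Rewire one connection: drop the local edge 0-1, add a shortcut 0-15.
ring[0].discard(1); ring[1].discard(0)
ring[0].add(15); ring[15].add(0)
after = avg_path_length(ring)
print(before, after)  # the average distance drops; edge count is unchanged
```

With more rewired connections, the diameter (the worst-case distance) falls as well.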

The small-world network has been explored as a means of sending information efficiently through a population. As the diameter shrinks, so does the time it takes information to spread through the entire network. Neurons in the brain have been explored as small-world networks; certain regions of the brain are highly interconnected, with a few long-distance connections to other regions. Protein networks and gene transcription networks have also been described with the small-world model. Further information with scholarly references is available on the scholarpedia page (which is generally a great resource for complex systems problems).

Here you can read a good scientific paper by Steven Strogatz, one of the premier scientists in the area. The paper was published in Nature, one of the most prestigious scientific journals. There are some equations, but the figures are excellent if you are uncomfortable with the math. The paper models the power grid, boards of directors, and coauthorship using network ideas. I mention it specifically because I find Strogatz a very clear and relatable writer. Also consider reading his recent nontechnical book about math, The Joy of x, for more math fun.

If you have a set of items and you can connect or sequence them in many ways, you probably have a graph or network. Given such objects, some connection arrangements may be preferable to others. Heart cells are connected in patterns that make the heart contract in the proper sequence. If you must deliver items to ten different locations, some routes will be more efficient than others (the traveling salesman problem).
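For a handful of stops, you can see how much route choice matters by brute force. A toy sketch (the distance table is invented for the example; real delivery problems are far larger, which is exactly why the traveling salesman problem is hard):

```python
from itertools import permutations

# Hypothetical symmetric distances between a depot (0) and four stops.
dist = {
    (0, 1): 4, (0, 2): 7, (0, 3): 3, (0, 4): 6,
    (1, 2): 2, (1, 3): 5, (1, 4): 8,
    (2, 3): 6, (2, 4): 3,
    (3, 4): 4,
}

def d(a, b):
    return dist[(min(a, b), max(a, b))]

def tour_length(order):
    """Total distance of depot -> stops in the given order -> depot."""
    route = (0,) + order + (0,)
    return sum(d(a, b) for a, b in zip(route, route[1:]))

best = min(permutations([1, 2, 3, 4]), key=tour_length)
worst = max(permutations([1, 2, 3, 4]), key=tour_length)
print(best, tour_length(best))
print(worst, tour_length(worst))
```

Even with four stops, the worst route here is twice as long as the best one.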

Euler’s 1735 Koenigsberg bridge problem is considered the first graph theory problem. At the time, the city of Koenigsberg had seven bridges (shown above). Euler wished to find a path that crossed each bridge exactly once. He showed mathematically that no such path exists.
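Euler’s argument comes down to counting: a walk that crosses every bridge exactly once can exist only if at most two land masses touch an odd number of bridges. A short Python sketch (the labels A-D for the land masses are my own; the bridge list follows the classic seven-bridge layout):

```python
from collections import Counter

# The seven bridges of Koenigsberg: two river banks (A, B), the island
# Kneiphof (C), and the eastern land mass (D). Repeated pairs are allowed.
bridges = [("A", "C"), ("A", "C"), ("B", "C"), ("B", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

def has_euler_path(edges):
    """A path using every edge exactly once exists iff the graph is
    connected and has zero or two vertices of odd degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for deg in degree.values() if deg % 2 == 1)
    return odd in (0, 2)  # connectivity is obvious here, so not checked

print(has_euler_path(bridges))  # → False: all four land masses have odd degree
```

Every one of the four land masses touches an odd number of bridges (3, 3, 5, and 3), so Euler’s walk is impossible.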

The famous game “six degrees of Kevin Bacon” is a network theory problem. The game asserts that within six steps, any actor can be linked to Kevin Bacon through films that pairs of actors appeared in. The idea originated with the Erdős number. Paul Erdős was a brilliant and highly published mathematician (over 1500 papers!) who worked in graph theory and combinatorics. Your Erdős number is the number of coauthorship links it takes to connect you to Erdős. He was also wonderfully eccentric. Once, visiting a friend, he woke in the night to get some juice. In the morning, his friend found red liquid all over the floor. Erdős, puzzled by the juice carton, had simply stabbed a hole in the side to drink from. His biography is a fascinating glimpse into a nearly alien mind.
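Computing an Erdős number is just a shortest-path search on the coauthorship graph. A toy sketch in Python (every author name besides Erdős is invented for the example):

```python
from collections import deque

# Invented coauthorship links; each pair has written a paper together.
coauthors = {
    "Erdos": {"Alice", "Bob"},
    "Alice": {"Erdos", "Carol"},
    "Bob": {"Erdos"},
    "Carol": {"Alice", "Dave"},
    "Dave": {"Carol"},
}

def erdos_number(author, graph):
    """Breadth-first search outward from Erdos gives the shortest
    coauthorship chain; returns None if the author is unreachable."""
    dist = {"Erdos": 0}
    queue = deque(["Erdos"])
    while queue:
        name = queue.popleft()
        if name == author:
            return dist[name]
        for coauthor in graph[name]:
            if coauthor not in dist:
                dist[coauthor] = dist[name] + 1
                queue.append(coauthor)
    return dist.get(author)

print(erdos_number("Dave", coauthors))  # Dave -> Carol -> Alice -> Erdos
```

The same search over actors and shared films gives a Bacon number.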

In my own research, I look at how oscillators synchronize in small networks, such as rings. Even in a simple ring, many new types of synchrony occur, compared to all-to-all connections. It is easy to believe that the structure of the brain, and how various regions and subregions connect, might greatly influence human thinking. On a more science fiction note, I suspect that artificial intelligence will not exist in machines without complex networks of elements.

This was just a very quick overview of a huge field. In the future, I plan to write on topics like small-world networks, scale-free networks, and synchrony on networks. Check out my other science posts on synchrony, fractals, the Mandelbrot set, and chaos.

Fractals are a branch of math that better describes nature. Before fractals, there was Euclidean geometry: the geometry of lines, planes, and spaces. Euclidean geometry cannot give the length of a rough coastline; neither can it give the surface area of shaggy tree bark. The answer you arrive at in Euclidean geometry depends upon the scale at which you examine an object: intuitively, the distance an ant travels over rough terrain is different from the distance we cover walking over it, because the ant interacts with the terrain at a different length scale than we do. Fractal geometry is designed to handle objects with multiple meaningful length scales.

Fractal objects are sometimes called “scale-free”. This means that the object looks roughly the same even when examined at very different zooms. Many natural objects look similar at multiple zooms; below, I include a few examples. The craters on the moon are scale-free: however far you zoom in, you cannot tell the scale of the image.

Terrain is often scale-free in appearance. A few years ago there was a joke photo with a penny for scale; the owners of the photo subsequently revealed that the “penny” was actually 3 feet across. I couldn’t find the photo, but you truly cannot tell the difference, and the reveal was startling. Below is a picture of Mount St. Helens. The top of the mountain is five miles (about 8 km) away. Streams carry silt away from the mountain, as you can see toward the foreground. They look like tiny streams, but they are full-sized rivers. Mount St. Helens lacks much of the vegetation and other features we would usually use to determine scale. The result is amazingly disorienting, and demonstrates scale invariance.

Plants are often scale-free too. Small branches are very similar in appearance to large branches, and ferns look very fractal. Below is a picture of Huperzia phlegmaria. Each time this plant branches, there are exactly two branches, and along its length it branches many times. It is a physical realization of a binary tree.

The common concept of chaos as the disordered unknown is thousands of years old. Chaos as a branch of mathematics refers to an extremely ordered but complex behavior. At first glance, mathematical chaos might seem utterly random; closer inspection reveals order. The first model of chaos emerged 50 years ago as a model of the movement of air in the atmosphere: the Lorenz model.

Defining Chaos in Mathematics

First, let’s discuss what chaos is in a mathematical sense (chaos on scholarpedia). In daily language, we would assume that something chaotic is not at all predictable. Mathematical chaos, however, is deterministic: if we know the position or value of a chaotic element exactly, then we know every value it will ever have, for all time. Many non-chaotic systems are deterministic too; if we measure the position and velocity of a pendulum and its frictional loss, we can predict its path for all time.

If we are measuring some value, there is a limit to our possible accuracy. If you measure your weight, you are measuring your weight plus or minus some error associated with the instrument; some scales are more accurate, and some are less accurate. When we measure the position and velocity of the pendulum, there is some error, but the behavior at position x plus a little bit (or x+ε) is essentially identical to the behavior at position x. In a chaotic system, the paths following positions x+ε and x stay similar only for a window of time, whose length is determined by how chaotic the system is; after that window, the two paths become entirely different. This behavior is known as sensitive dependence upon initial conditions (or, more famously, the butterfly effect).

The lower half of the image below shows the difference between two paths starting at x+ε and x in the Lorenz attractor. The paths are nearly the same, until they quickly become very different. You can look at the Lorenz attractor in more detail and play with it here. The link goes to a Java applet hosted by Caltech where you can change the parameters and see what happens in a time series and in a different representation called the state space.

The above image also shows the third of the three most common criteria for chaos: aperiodicity. The system oscillates, but it never exactly repeats itself. The top half of the picture above shows the time series of the z variable (see link above for the full model if you are interested). If we plot the data in a different way, we get the image below. This image is called the Lorenz attractor. We see that it loops around without repeating, but that it is bounded in space. That is, there are places we can say the path will not go. Intriguingly, the Lorenz attractor has a fractal dimension of 2.06. For more on fractals, you can read my posts on fractals and fractal dimensions.

Simple example of chaos: the double pendulum

Many quite simple systems can exhibit chaos. The easiest example is the double pendulum, which is just a pendulum whose bob is the pivot for a second pendulum. The video below shows the complexity of behavior the double pendulum achieves. Chaos has also been demonstrated in electrical circuits, biological systems, electrochemical reactions, and planetary orbits. In another post, I will write about the amazing universality of chaos, which shows how all of the diverse systems I listed are in fact very similar.

This post continues Wednesday’s post about fractals and the Mandelbrot set. Fractals are a branch of mathematics that we can observe in our daily life. Something is said to be fractal when a small piece of an object resembles a larger part of itself. The featured image is of romanesco broccoli; as you can see, each small cone on the broccoli resembles the overall structure of the vegetable. For this reason, the mathematical terms “fractal” and “self-similar” are closely related.

Examples of fractals in nature abound. The heartbeat of a healthy person is fractal when plotted in time; interestingly, people with various health problems show less fractal character in their heart rate. For a great slide show with images of fractal-ness in nature, check out this Wired article. Fractals have been observed in ocean waves, mountain structures, ferns, lightning, city layouts, seashells, trees, and many other places. Many computer graphics of natural phenomena are generated using fractal processes.

Koch Snowflake (Wikipedia)

The Koch snowflake (above) is a fractal generated from a line. As the fractal pattern is repeated, the length of the curve grows without bound. A line segment does not have infinite length, and yet the Koch snowflake clearly does not fill space. So what is the dimension of this object? Through a method called the “box counting method“, we can determine the dimensionality of a fractal object. The box counting method is also used to estimate area and coastline length from satellite pictures, as demonstrated below.

Using the box counting method to estimate the area of Great Britain (Wikipedia).

In short, we watch how the number of boxes needed to cover an object changes as the box size shrinks. If we make the boxes 2 times smaller, covering a line takes 2 times as many boxes, while covering a filled-in area takes 2^2 = 4 times as many; in general, the count grows as (1/s)^d, where s is the box size and d is the dimension. The method is explained in more detail here. Intuitively, we can tell the Koch snowflake has a dimension between 1 and 2. It turns out that, using the box counting method, we can determine that the Koch snowflake has a fractal dimension of log(4)/log(3), or about 1.26.
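That scaling rule can be checked numerically. A minimal sketch of my own construction: each Koch iteration replaces every segment with 4 segments, each 1/3 the length, so we can treat the segments at step n as the “boxes” and recover the dimension from the count-versus-scale relationship.

```python
import math

# At Koch iteration n there are 4**n segments, each of size 3**-n.
def koch_dimension(n):
    count = 4 ** n        # number of "boxes" (segments) needed
    scale = 3.0 ** -n     # size of each box
    return math.log(count) / math.log(1 / scale)

print(koch_dimension(5))  # ≈ 1.2619 = log(4)/log(3), independent of n
```

The ratio is the same at every iteration, which is exactly what “scale-free” means in this context.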

Lorenz attractor from Wikipedia

Fractal dimensions turn up in strange places. For example, chaotic attractors have fractal dimension. The Lorenz attractor, above, has a fractal dimension of 2.06. In the future I will discuss chaos and chaotic attractors. Check out my previous science posts on synchrony and art resembling science.

Fractals are often immediately visually appealing, even when the underlying equation is harder to understand. For this reason, fractals have reached a wider audience than many branches of mathematics. Beyond their visual appeal, fractals give us a way to examine many natural systems that math previously could not. How long is a winding and convoluted coastline? How does a network of one-dimensional tubes like the circulatory system serve our three-dimensional bodies? How does lightning disperse its energy when it strikes? (The image below shows how electricity dissipated through a block of plexiglass; more details here.) These are all concepts related to fractals.

from Capturedlightning.com

One very famous fractal is the Mandelbrot set (pictured at the top of this entry), named after pioneer Benoit Mandelbrot. The Mandelbrot set is generated by the iterative equation z_{n+1} = z_{n}^{2} + c. This equation says that for a specific value of c, we get the next z (that is, z_{n+1}) by squaring our current z and adding the constant c. Let’s say that c is 1. z_{0} is 0, so z_{1} is z_{0} squared plus 1, and z_{1}=1. Then z_{2}=z_{1}^{2}+1=2, z_{3}=z_{2}^{2}+1=5, and so forth. A value c is in the Mandelbrot set if z_{n} remains bounded as n→∞; the sequence may settle to a constant or hop among a few values, but it does not run off to infinity. When c=1, each z keeps getting bigger and bigger, so clearly 1 is not part of the Mandelbrot set. c is a complex number, so we generate a map in two dimensions of which values of c belong to the set. The video below shows the Mandelbrot set (color giving the rate of divergence, black marking members of the set) and continues to zoom in. Even at incredible zoom scales, fine and self-repeating structure can be seen.
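Here is a quick sketch of that membership test in Python. The escape radius of 2 is the standard mathematical bound (once |z| exceeds 2, the sequence is guaranteed to diverge); the iteration cap is my own arbitrary choice, and real renderers tune it to the zoom level.

```python
def in_mandelbrot(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; if |z| ever exceeds 2,
    the sequence must diverge, so c is not in the set."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(1))           # the c = 1 example above: 0, 1, 2, 5, ... diverges
print(in_mandelbrot(-1))          # bounded: the sequence cycles 0, -1, 0, -1, ...
print(in_mandelbrot(-0.1 + 0.1j)) # complex values of c work the same way
```

Running this test over a grid of complex c values, and coloring non-members by how quickly they escaped, produces the familiar picture.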

Fractals can also be generated in a more directly visual way. Below is a fractal called the Koch snowflake, which is generated iteratively as well. The base unit is a triangle. The middle third of each leg of the triangle is replaced by a tent shape. In the next step, the middle third of each leg of the new structure is replaced by a tent, and so on. You can see in the graphic that the Koch snowflake gets complicated quickly. Many other visual fractals have been explored; the Java applet here has a few that you can play with.
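Each replacement step turns every segment into 4 segments that are each 1/3 as long, so the total length grows by a factor of 4/3 per step. A couple of lines make this concrete (a sketch of my own, starting from a triangle with unit sides):

```python
def koch_perimeter(steps, side=1.0):
    """Perimeter of the Koch snowflake after `steps` tent replacements.
    Each step turns every segment into 4 segments, each 1/3 as long."""
    segments, length = 3, side
    for _ in range(steps):
        segments *= 4
        length /= 3
    return segments * length

print(koch_perimeter(0))   # the original triangle: perimeter 3
print(koch_perimeter(1))   # each side grows by a factor of 4/3: perimeter 4
print(koch_perimeter(10))  # the perimeter keeps growing without bound
```

Since 4/3 > 1, the perimeter diverges as the iterations continue, even though the snowflake fits inside a finite area.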

Koch snowflake from Wikipedia

I will have another post about fractals on Friday, where I discuss more numerical properties and examples of fractals in nature. Food for thought: what is the perimeter length of the Koch Snowflake? Also check out my previous science posts on synchrony and art resembling science.