Tag Archives: fun science

Fun Science: Vacuum and Pressure

Pressure is caused by collisions between particles. Scientists use the term “vacuum” when there are few particles, and thus few collisions. The air in our atmosphere is dense with particles; atmospheric pressure is very high compared to a lab vacuum or the vacuum of space. Vacuums have many scientific and technical uses. They were used in incandescent lightbulbs and in vacuum tubes (such as the cathode ray tubes, or CRTs, of old TVs and computer monitors). They are used for depositing materials in clean environments, such as onto silicon wafers for microcircuitry. They are used for separating liquids that have different evaporation points. In scientific labs, we can produce pressures billions of times lower than atmospheric pressure, but the pressure in space is lower still.

Atmospheric pressure: Every cubic centimeter (also called a milliliter) of air contains 2.5 x 10^19 air molecules. That’s 25,000,000 trillion molecules; for comparison, the US debt is roughly $12 trillion, and a terabyte (TB) hard drive holds a trillion bytes of information. That is a lot of particles causing a lot of collisions. The average particle travels only 66 nanometers before colliding with another particle, only about 200 times the size of a nitrogen molecule.

On top of Mount Everest: Pressure is roughly 1/3 of the pressure at sea level, and there are 8 x 10^18 molecules of air per cubic centimeter. The average particle travels 280 nm before colliding with another particle.

Incandescent light bulb: The pressure inside a lightbulb is 1 to 10 Pascals (pressure at sea level is 100,000 Pascals). There are still about 10^14 molecules/cm^3, or 100 trillion molecules. The average particle travels a millimeter to a centimeter before a collision. This pressure is too low for plants or animals to survive.

Ultra-high lab vacuum: The most sophisticated lab vacuum equipment can produce pressures of 10^-7 to 10^-9 Pascals, yielding about 10,000,000 to 100,000 molecules/cm^3, respectively. Particles travel an average distance of 100 to 10,000 km before colliding with another particle. Such extreme vacuums require highly specialized equipment, including specialized pumps and chambers. Only certain materials can be used; paint, many plastics, and certain metals release gases at very low pressures, making them unsuitable.

Space vacuum: The vacuum of space depends on what part of space you mean. The pressure at the surface of the moon is 10^-9 Pa, roughly our best lab vacuum, with 400,000 particles/cm^3. The pressure in interplanetary space (within the solar system) is lower yet, with only about 11 particles/cm^3. It is estimated that there is only about 1 particle per cubic meter in the space between galaxies. Still, some microorganisms have survived exposures of days to space vacuum by forming a protective glass around themselves.
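All of the particle counts and collision distances above can be estimated from the ideal gas law: the number density is n = P/(kT), and the mean free path is λ = kT/(√2 π d² P) for molecules of effective diameter d. A quick sketch (the room temperature and nitrogen-like molecular diameter below are my assumptions):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 293.0            # assumed room temperature, K
D = 3.64e-10         # approximate kinetic diameter of N2, m

def number_density(pressure_pa):
    """Molecules per cubic centimeter at pressure P (ideal gas law)."""
    return pressure_pa / (K_B * T) * 1e-6  # convert from m^-3 to cm^-3

def mean_free_path(pressure_pa):
    """Average distance (m) a molecule travels between collisions."""
    return K_B * T / (math.sqrt(2) * math.pi * D**2 * pressure_pa)

for label, p in [("sea level", 101325.0), ("light bulb", 1.0),
                 ("ultra-high vacuum", 1e-9)]:
    print(f"{label}: {number_density(p):.1e} /cm^3, "
          f"mean free path {mean_free_path(p):.1e} m")
```

Running this reproduces the figures in the post: about 2.5 x 10^19 molecules/cm^3 and a ~70 nm mean free path at sea level, millimeters inside a light bulb, and thousands of kilometers at ultra-high vacuum.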

Going the other way, there are pressures much higher than the pressure of our atmosphere.

At the bottom of the Mariana trench: Pressure is about 1.1 x 10^8 Pa, or about 1,100 atmospheres. A variety of life has been observed in the Mariana trench.

At the center of the sun: Pressure is about 2.5 x 10^16 Pa, or 2.5 x 10^11 atmospheres, or about 100,000 times the pressure at the core of the earth. This pressure is sufficient to fuel the fusion process of the sun, in which hydrogen is combined to form helium.

At the center of a neutron star: Pressure is about 10^34 Pa, or 10^18 times the pressure at the center of the sun. Here, pressure is so high that normal atoms, with electrons around a core of protons and neutrons, cannot exist. Even nuclei cannot exist in the core of a neutron star.

Read about other science topics on my fun science page.

Science is Creative!

In the US, science is regarded as valuable, but dry and a bit stiff. As a student, it’s easy to get this impression, studying rigid facts first explored centuries ago. The math, chemistry, physics, and biology we learn in high school and college are about recreating long-known answers by well-established methods. But the process of making new science and math is inherently creative, and new ideas require letting the mind run wild a little. In this post, I’ll talk about how I develop my ideas.

I work with populations of oscillators. The idea of this research is that the complexity of the whole (the population) exceeds the complexity of each element (the oscillator). The human brain is a good example of such a system–each neuron is fairly simple and well-understood, but the overall brain behavior arising from the interactions of many neurons is not understood. My research tends to work by observation–I notice something I find interesting and I explore it further. Other researchers work from what they suspect they will find, based upon prior work. All research works within the context of its field. There are many interesting behaviors I have noted in my experiments, but I explore the ones I might be able to explain. Really random observations are cool, but hard to frame in a way that is meaningful to the community.

The above may not sound particularly creative. But the key to experiments like mine is imagining what might happen when one explores slightly beyond what is known. It requires extrapolating from the areas we know, in the context of the rules we know, into the areas we don’t know. Some of the rules we know are pretty absolute, like thermodynamics, but others may be flexible. (As a note on this point, the stable chemical oscillations I study were once considered thermodynamically impossible. Someone had to bend the established understanding of thermodynamics to explain these oscillations. Einstein had to bend Newton’s laws for relativity, and he arrived at that conclusion by logic rather than by observation.) In an experimental apparatus like mine, thousands of experiments are possible. It is up to the experimentalist to pick from the possibilities, guided by what might work in the imagination, to demonstrate something hitherto unknown.

In some ways, the process is similar to writing. There are rules that must be obeyed, and the process of finding something new or interesting is very indirect. With science and writing, I develop some of my best ideas drinking a beer or taking a walk. Sitting at a desk focusing is required at times, but so too is active contemplation. The rules of science are broader and more rigid and take longer to learn, but there are similarities.

A lot of historical scientists were fascinating people, akin to historical artists. Van Gogh famously lost part of his ear. Astronomer Tycho Brahe lost his nose in a duel. Salvador Dali shellacked his hair. Electrical engineer Nikola Tesla fell in love with a pigeon. Mathematician Paul Erdős lived itinerantly for decades; on one visit to a colleague, he couldn’t figure out how to open a carton of juice, so he stabbed it open instead (among many, many other oddities). Physicist Richard Feynman used to work on his physics at strip clubs. Artists may share their eccentricities more in their works, but I would argue that scientists have every bit as much oddness.

I hope this post illustrates a little what it is like to be a research scientist, and how science at the cutting edge works. For more science posts, check out my fun science list.

Fun Science: Gravitational waves

Gravitational waves were first predicted in 1916 by Einstein’s general theory of relativity; today we are trying to observe them directly. A gravitational wave is a tiny oscillation in the fabric of space-time that travels at the speed of light. The other predictions of general relativity have been confirmed, so we strongly expect gravitational waves to exist as well. Many objects create minuscule gravitational waves, and even the largest sources, such as binary stars and black holes, create waves we can just barely hope to detect. From the LIGO Wikipedia page: “gravitational waves that originate tens of millions of light years from Earth are expected to distort the 4 kilometer mirror spacing by about 10^-18 m, less than one-thousandth the charge diameter of a proton.”
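To put the quoted number in perspective, the distortion is usually expressed as a dimensionless strain, h = ΔL/L. A quick sanity check of the figures in the quote (the proton charge diameter used here, roughly 1.7 x 10^-15 m, is an approximate textbook value I've supplied):

```python
ARM_LENGTH = 4000.0        # LIGO arm length, m
DISTORTION = 1e-18         # expected change in mirror spacing, m
PROTON_DIAMETER = 1.7e-15  # approximate proton charge diameter, m

# Strain is the fractional change in length.
strain = DISTORTION / ARM_LENGTH
print(f"strain h = {strain:.1e}")

# Consistency check against the quote: "less than one-thousandth
# the charge diameter of a proton".
print(f"fraction of a proton diameter: {DISTORTION / PROTON_DIAMETER:.1e}")
```

The strain comes out around 2.5 x 10^-22, which is why the detector needs kilometer-scale arms and heroic noise isolation in the first place.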

What would we gain from this? Astronomers believe that gravitational waves could eventually become another mode of imaging by which to analyze the universe, like gamma ray, x-ray, and infrared imaging.

Example of gravitational wave distortions (from wikipedia)

LIGO (the Laser Interferometer Gravitational-Wave Observatory) ran from 2002 to 2010; it was unsuccessful in its hunt for gravitational waves. It is being upgraded to restart in 2014. The two observatories, in Louisiana and in Richland, Washington, record the same events and compare the times at which they arrive. Below is a schematic of this set-up. LISA, the Laser Interferometer Space Antenna, has been discussed for years as an orbiting detector with greater length scales (and therefore greater sensitivity) than LIGO; a proof-of-concept mission is due for launch in 2014.

Laser interferometer set-up (wikipedia)

If you want to learn more, Einstein Online, which is run by the Max Planck Institute, is a great resource (the Max Planck Institutes do great cutting-edge research, perhaps comparable to NASA). The link above covers gravitational waves, but there is also great info on other concepts related to relativity if you are interested.

Fun Science: Enzymes

An enzyme. Spirals and sheets and strands indicate different kinds of structures. (from Wikipedia)

Enzymes are the catalysts of the body, helping to facilitate chemical reactions that would be very slow or unfavorable in their absence. In a previous post, I discussed how platinum lowers the activation energy barrier for desirable chemical reactions in a car engine, among other places. Enzymes do the same thing, but they are much more selective. Platinum can act on millions of different molecules; enzymes are shaped so specifically that each acts on essentially only one. Because of this, enzyme catalysis is often compared to a lock and key–only one chemical is so perfectly shaped as to fit into the active site of the enzyme.

Enzymes are mostly proteins, which are made of hundreds of amino acids with several layers of structure. Our DNA is coded so that enzymes can be assembled from the instructions. The “primary structure” is the sequence of amino acids strung together. The shape of local groups of amino acids gives the “secondary structure”; some combinations tend to coil, others tend to be flat (see the picture at the top of this post). This is due to interactions between the amino acid groups; for example, ionic groups might attract or repel each other. The “tertiary structure” is the structure of the overall molecule, also called the “folding”. We can reproduce the primary and secondary structures in the lab; the folding is harder, because for most sequences of amino acids, there are several possible structures. In the body, the protein is assembled in such a way that it conforms properly. We are mostly still unable to synthesize proteins and enzymes. We usually use bacteria and fungi to make them, when possible.

Enzymes are essential to life. They aid in digestion. Many diseases are caused by the lack of a single enzyme. People with lactose intolerance lack lactase; the deadly Tay-Sachs disease is caused by the lack of hexosaminidase A. In Tay-Sachs, a waste product of cellular metabolism builds up in the brain. Without the enzyme to accelerate its break-down, the waste product builds up to intolerable levels. We can obtain hexosaminidase A, but we can’t therapeutically deliver it to where it is needed in the brain.

You probably already knew that the human body is a remarkable machine. But I hope this brief overview of enzymes gives an appreciation for this one small aspect. Happy digesting.

Fun Science: The Element Lithium

Lithium is the third element on the periodic table, after hydrogen and helium. It is the lightest metal, and you probably use it every day: the batteries in your phone and laptop, and most other rechargeable batteries you use, are lithium-ion batteries.

Lithium is used in batteries because it has the highest electrochemical potential of any metal; it is such a strong reducing agent that it reacts violently with water, releasing hydrogen gas. This means there is a lot of energy available to exploit. It is also why laptop batteries can sometimes explode: the batteries are sealed very tightly, but if the seal is broken, air and water vapor come into contact with the lithium, which is unsafe. 10-15 years ago, there weren’t many lithium batteries in use, but now they are everywhere. Science has made great strides in improving the configurations of the batteries to deliver more energy, such as increasing the surface area of the lithium portion. Each time you cycle your battery, the lithium undergoes an electrochemical reaction on draining and again on charging. This is also why batteries become shorter-lived over time: the high surface areas of new batteries aren’t thermodynamically favorable, and the lithium loses surface area with time. Less available surface area means less available energy.
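Lithium's light weight is a big part of why so much energy is available per gram. As a sketch, the theoretical specific capacity of lithium metal follows from Faraday's constant and lithium's molar mass, since each atom donates one electron:

```python
FARADAY = 96485.0      # Faraday's constant, C per mole of electrons
MOLAR_MASS_LI = 6.94   # molar mass of lithium, g/mol

# Each lithium atom gives up one electron (Li -> Li+ + e-),
# so one mole of lithium carries one Faraday of charge.
coulombs_per_gram = FARADAY / MOLAR_MASS_LI
mah_per_gram = coulombs_per_gram / 3.6  # 1 mAh = 3.6 C

print(f"theoretical capacity: {mah_per_gram:.0f} mAh/g")
```

This works out to roughly 3,860 mAh per gram, far above heavier battery metals, which is exactly why engineers keep fighting lithium's reactivity instead of switching to something tamer.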

Lithium ion (Li+) has another, very different use: it is a mood stabilizer, particularly useful at combatting mania. The linked Wikipedia page contains its fascinating medical history. It was first used as a mood stabilizer in the 1870s. Eventually LiCl was marketed as an alternative to table salt (NaCl) for avoiding high blood pressure, and its mood properties were forgotten. Early versions of 7 Up contained lithium. Excessive lithium use was found to be deadly, and it was banned as an additive in the 1940s. Then, in Australia, it was rediscovered to have mood-stabilizing properties. Its therapeutic dose is quite close to its toxic dose, which is perhaps why it took a while to gain approval in the US. Studies suggest that water supplies containing lithium may promote longevity and reduce the occurrence of suicide.

Lithium salts also have another really nifty use: cleansing the air in spaceships and submarines. Human breathing not only consumes oxygen; it also produces carbon dioxide, which is toxic when present in high amounts. Several lithium salts can remove carbon dioxide from the air. One even adds oxygen to the air as it removes carbon dioxide.
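The post doesn't name the specific salts, but the classic scrubber chemistry (used aboard the Apollo spacecraft, for example) is lithium hydroxide, and lithium peroxide is plausibly the one that releases oxygen as it absorbs carbon dioxide. The balanced reactions:

```latex
% lithium hydroxide absorbs carbon dioxide:
2\,\mathrm{LiOH} + \mathrm{CO_2} \longrightarrow \mathrm{Li_2CO_3} + \mathrm{H_2O}

% lithium peroxide absorbs carbon dioxide and releases oxygen:
2\,\mathrm{Li_2O_2} + 2\,\mathrm{CO_2} \longrightarrow 2\,\mathrm{Li_2CO_3} + \mathrm{O_2}
```

In both cases the carbon dioxide ends up locked away as solid lithium carbonate; in the peroxide reaction, half a molecule of breathable oxygen is returned for every carbon dioxide captured.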

Elemental lithium is highly reactive and is a member of the alkali metal group (all of which react very impressively with water). Below is a video of lithium reacting with water; it bursts into bright red flame:

Another video shows more lithium action:

The people who made the second video have a great YouTube channel with videos about all the elements, done in a university laboratory environment. Most of them have good footage of reactions as well. I just spent an hour watching their videos; they are very entertaining whether you have a little knowledge or a lot. If you have a little time to kill, the videos on sodium and potassium are also good, flammable fun.

Fun Science: Helium

Helium: filler of floating balloons, maker of high-pitched voices. But there are a lot of other interesting things about helium too!

First, helium makes our voices sound high because sound travels faster through it than through air, which raises the resonant frequencies of the vocal tract. (Also fun: in higher-density gases, like sulfur hexafluoride, the voice correspondingly becomes very low. In this case, the practitioner should be upside-down, because right-side-up the dense gas will settle in the lungs, potentially causing asphyxiation.)

Helium is the second most common element in the universe, but it’s pretty rare on earth. We get pretty much all of our helium during natural gas extraction, where it is found trapped underground. Because it has such a low density, it essentially escapes the atmosphere once it gets into the air. Helium is very common in the universe because it is formed by the fusion of hydrogen. Our sun and other stars are hydrogen-to-helium engines, pumping out tons of helium per second, though that helium never reaches Earth. Most helium on earth comes instead from the radioactive decay of uranium, which emits alpha particles (helium nuclei).

Helium is a noble gas, meaning it naturally has the right number of electrons to be stable without interacting with other atoms. Helium has the lowest boiling point of any element, about 4 K, and at atmospheric pressure it never freezes at all; it solidifies near 1 K only under substantial pressure. This is due to its stability: liquids and solids form when atoms interact energetically with one another, and helium has very little tendency to do so. Because of this, helium is used as a cryogenic coolant. Helium is an essential part of an MRI machine, shown below; it cools the superconducting magnets, which allows a stronger magnetic field and thus higher resolution.

MRI for medical imaging.

The US is the largest supplier of helium in the world. This is partially because Congress passed an act to bleed down our helium reserve by 2015. However, some scientists have pointed out that helium is hard to come by, and that we should conserve it. One source estimates that helium balloons should cost $100 each, based upon the scarcity of helium. Another says they should be illegal.

So the next time you look at a blimp or a balloon, marvel at the substance that fills it. It’s really star stuff, and rare to boot!

Fun Science: Why’s platinum so special?

In science, we tend only to learn about a small subset of the elements that populate our world. This is not unreasonable, since 96% of our bodies are composed of just hydrogen, oxygen, carbon, and nitrogen. But there are over a hundred more elements, and they often influence life outside our bodies in ways we don’t hear about. So in today’s post I will talk about platinum.

Platinum is one of the rarest metals in the Earth’s crust. Only 192 tonnes of it are mined annually, while about 2,700 tonnes of gold are mined annually. When the economy is doing well, platinum can be twice as expensive as gold. So what’s so valuable about it?

Platinum is used a lot in jewelry. Platinum has the appearance of silver, but it doesn’t oxidize and become tarnished like silver. It’s harder than gold, and its rarity can be appealing.

But it’s the chemical properties of platinum that set it apart. Platinum is a great catalyst. This means that platinum facilitates chemical reactions, but is not consumed as the reaction proceeds. The catalytic converter in your car is a platinum catalyst. The catalytic converter helps eliminate a variety of undesirable compounds such as carbon monoxide, nitrogen oxides, and incompletely combusted hydrocarbons. Platinum is also a critical part of current hydrogen fuel cells; it splits hydrogen into protons and electrons.

Platinum doesn’t force reactions to occur, but it makes them easier by reducing the energy required. The image below shows the reaction of carbon monoxide (CO) to carbon dioxide (CO2). The chart at the bottom shows the potential energy before, during and after the reaction. Imagine a ball rolling along the red curve (with platinum) and the black curve (without platinum). The ball on the black curve will need more speed to get over the hump. Any given ball is more likely to get over the red hump. Likewise, the presence of platinum lets CO get over the hump to become CO2. Platinum does this for all kinds of reactions.
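The height of the hump is the activation energy, and the Arrhenius equation, k = A exp(-Ea/RT), quantifies how much lowering it helps. The barrier heights below are made-up illustrative numbers, not the real values for CO oxidation on platinum:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def rate_factor(ea_j_per_mol, temperature_k):
    """Arrhenius exponential factor exp(-Ea / RT)."""
    return math.exp(-ea_j_per_mol / (R * temperature_k))

T = 500.0           # K, a plausible exhaust-gas temperature (assumed)
ea_without = 200e3  # J/mol, hypothetical barrier without a catalyst
ea_with = 100e3     # J/mol, hypothetical barrier on the platinum surface

speedup = rate_factor(ea_with, T) / rate_factor(ea_without, T)
print(f"speedup from the lower barrier: {speedup:.1e}")
```

Because the barrier sits inside an exponential, even a modest reduction changes the rate enormously; halving this hypothetical barrier speeds the reaction by roughly ten orders of magnitude.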

activation energy

The reaction takes less energy because once a molecule bonds to the surface of platinum, the bonds within the molecule are a little weaker. Molecules like O-O and H-H can split into individual atoms on the surface, something they would rarely do off of it. Below I show an example reaction for CO to CO2 on platinum. This diagram is meant to be illustrative: a possible mechanism for the reaction, showing how platinum helps out. In reality these reactions occur very quickly, and careers can be spent figuring out exact reaction mechanisms.

catalysis

 

Platinum is a bit like velcro. Molecules become hooked to the surface, do their reaction, and unstick. If molecules stick and then refuse to unstick, this is called catalyst poisoning, and it’s a big issue in fuel cells. Like velcro, once the hooks are occupied, they can’t do anything else. Platinum is a good catalyst because a lot of things (like hydrocarbons) want to stick to it, but they don’t stick too hard. Other metals either are not attractive enough, or they are too attractive. Platinum is so valuable because, besides being rare, its properties happen to be balanced just right for the reactions we want.

 

Fun science: scale-free networks

A scale-free network is a network with self-similar structure: as you zoom in on parts of the network, the sub-network resembles the overall network. In this way, scale-free networks are the network analog of fractals. (Read previous posts about fractals, or fractals in nature.) Fractals arise in many natural systems like coastlines, snowflakes, and topography; likewise, many naturally arising networks are self-similar. Examples include links on the internet, social networks, and protein interaction networks. Understanding the structure of a network helps us to understand the types of behavior that can occur on it; some network structures are more prone than others to particular kinds of failure or instability.

Scale-free networks have a sort of hub arrangement: some elements connect to a great many other elements, while most connect to just a few. In the social network analogy, the hubs are the people with 1,500 friends on Facebook, while most people have 100 or so. In the picture below, a random network is shown on the left and a scale-free network on the right (hubs shown in grey). In the random network, all the elements have roughly the same number of connections, some slightly more, some slightly less. The scale-free network has far more variation in the number of connections among elements.

From Wikipedia page on scale-free networks.

Getting more mathematical, the number of connections an element has is called its degree; an element with 3 connections has a degree of 3. We can say, hypothetically, that element 1 has a degree of 2, element 2 has a degree of 6, element 3 has a degree of 2, and so on, giving degrees for all of the elements of the network. If we organize this set of degrees into a histogram (binning by degree value; in our example we counted two elements of degree 2 and one of degree 6), we get a degree distribution.

If something is distributed normally, the histogram has a bell shape, like the first picture below. A histogram of the heights of all the people in your city would be a normal distribution. If something is distributed in a scale-free fashion, there are a few high-value elements (high degree, in our case) and a lot of low-value elements. This gives the bottom picture, with a peak at a low value and a long tail into the higher values. The wealth distribution in your city probably looks like the scale-free distribution. If you plot a scale-free distribution on logarithmic axes, you get a straight line. Taking the log reveals how the distribution behaves across multiple powers of 10: it looks the same at every scale, which is where the name “scale-free” comes from.
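One simple way such a heavy-tailed degree distribution arises is preferential attachment: new elements prefer to connect to elements that already have many connections, so the well-connected get better connected. A minimal sketch (the network size and connection count here are arbitrary choices; this is the mechanism behind Barabási-Albert scale-free networks):

```python
import random
from collections import Counter

def preferential_attachment(n, m, seed=0):
    """Grow a network where each new element connects to m existing
    elements, chosen with probability proportional to their degree."""
    rng = random.Random(seed)
    # start from a small fully connected core of m + 1 elements
    edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
    # each element appears in `stubs` once per connection it has,
    # so a uniform pick from `stubs` is a degree-proportional pick
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(stubs))
        for old in chosen:
            edges.append((new, old))
            stubs += [new, old]
    return edges

edges = preferential_attachment(2000, 2)
degree = Counter(v for e in edges for v in e)
hist = Counter(degree.values())  # the degree distribution

# most elements keep the minimum degree; a few hubs are far larger
print("largest hub degree:", max(degree.values()))
print("elements of degree 2:", hist[2])
```

Running this shows exactly the shape described above: roughly half the elements sit at the minimum degree, while a handful of hubs accumulate degrees tens of times larger.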

From Wikipedia article on normal distribution

From Wikipedia article on power-law distribution

If you are interested in other basic explanations of more advanced science, also check out my posts on synchrony, chaos, network theory, and small-world networks.

Americans are (statistically!) Weird

Have you ever wondered how social scientists conduct their psychological experiments? They mostly use volunteer American college undergraduates. This might seem obviously flawed; can a bunch of educated 20-year-olds possibly represent the American spectrum, much less the world? The field long hypothesized that human brain structure is universal, and thus that reasoning and decision-making, as consequences of that structure, should be universal too. The article “Why Americans are the Weirdest People in the World” explores the research of Joe Henrich. It discusses how different cultures solve different problems, and how truly diverse thinking processes are across the globe. And wouldn’t you know it, Americans are crazy, crazy outliers on nearly all of the problems.

Economics often uses behavioral experiments from game theory to understand the choices people make. In the famous “prisoner’s dilemma”, two “prisoners” may each choose whether or not to rat out the other. Depending upon the two choices, there are four possible outcomes. If both betray, they are collectively worst off (say, two years of prison each). If A betrays B and B does not betray A, A goes free while B gets 3 years of prison, and likewise for the reverse. If neither betrays, they are collectively best off, and get a year each. The structure of the game rewards betrayal.
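The four outcomes just described can be laid out as a payoff table. This tiny sketch (the choice labels are mine) makes the dilemma explicit: whatever the other player does, betraying always means less prison time for you.

```python
# Years of prison for (A, B), indexed by (A's choice, B's choice).
PAYOFF = {
    ("betray", "betray"): (2, 2),  # both betray: collectively worst
    ("betray", "silent"): (0, 3),  # A goes free, B serves 3 years
    ("silent", "betray"): (3, 0),  # the reverse
    ("silent", "silent"): (1, 1),  # collectively best
}

# Whatever B does, A serves less time by betraying (0 < 1 and 2 < 3),
# so rational self-interest pushes both players toward betrayal.
for b_choice in ("betray", "silent"):
    a_betray = PAYOFF[("betray", b_choice)][0]
    a_silent = PAYOFF[("silent", b_choice)][0]
    print(f"if B chooses {b_choice}, betraying helps A:", a_betray < a_silent)
```

That tension, between what is individually rational and what is collectively best, is what makes the game useful for probing how different cultures actually behave.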

Joe Henrich played such games with indigenous people in Peru. The Ultimatum game is a bargaining game in a similar spirit. Player 1 is given $100 and must make an offer to player 2. If player 2 feels the offer is too low, he may reject it, in which case both players keep nothing. Both players know the rules, so player 1 is compelled to offer enough that player 2 does not feel cheated. In the US, the offer is typically close to $50, and lower offers are typically rejected. In Peru, the offer was much lower, and it was typically accepted; the people there figured money was money, so why reject it? Still other cultures displayed yet different reactions to the Ultimatum game. The US is relatively typical of the West in this game. The researchers supposed that in a Western society, people have grown to accept some inconvenience on their own behalf to punish dishonesty or greed, such as taking the time to write a complaint to the Better Business Bureau.

The article goes on to detail how Americans are statistical outliers. This has major implications for economics, sociology, and psychology. It’s a great read, and for my part, I think a reason to take these kinds of sciences with a grain of salt. They are definitely fields worthy of study, but definitive conclusions are difficult; mostly, we know that we know little about the human brain. Even if you aren’t particularly interested in the science, the article is a fascinating read just for the variety of human thinking.

Fun Science: Small World Networks

The small-world phenomenon refers to the fact that even in a very large population, it takes relatively few connections to get from any one element to any other, randomly chosen element. Amongst people, we know this concept as the “six degrees of separation” game. Any population of objects with connections can be conceptualized this way. Examples include crickets communicating by audible chirps, websites with links, electrical elements with wiring, corporate boards with shared members, and authors of mutual scientific papers. All of these examples have been examined in scientific studies.

In a small-world network, elements are first connected in a regular lattice; for example, each element is connected to one or two nearby neighbors on each side. The leftmost picture below shows a regular lattice of elements. A connection between element i and element j is then removed, and a connection is added between element i and some other element x, like the middle picture below. If x is across the network from i, then the number of steps between i and x has been reduced from some large number to 1, and all of the elements connected to i are now 2 steps from element x. This reduces the diameter of the network (the maximum number of steps between any two elements) although the number of connections remains constant. In the six-degrees-of-separation game, the diameter would be 6. As we replace more of the lattice connections with random ones, the network becomes more and more random. We characterize a small-world network by its degree of randomness, as in the picture below.
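That remove-and-rewire process can be sketched directly. The code below builds a ring lattice, randomly rewires a fraction of its connections, and measures how much the average path length drops (the network size and rewiring probability are arbitrary choices, in the spirit of the Watts-Strogatz model):

```python
import random
from collections import deque

def ring_lattice(n, k):
    """Each element connects to its k nearest neighbors on each side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    return adj

def rewire(adj, p, seed=0):
    """Replace each lattice connection with a random shortcut
    with probability p."""
    rng = random.Random(seed)
    n = len(adj)
    edges = [(i, j) for i in adj for j in adj[i] if i < j]
    for i, j in edges:
        if rng.random() < p:
            x = rng.randrange(n)
            if x != i and x not in adj[i]:
                adj[i].discard(j); adj[j].discard(i)
                adj[i].add(x); adj[x].add(i)
    return adj

def avg_path_length(adj):
    """Mean shortest-path length over all reachable pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values()); pairs += len(dist) - 1
    return total / pairs

regular = avg_path_length(ring_lattice(200, 2))
small_world = avg_path_length(rewire(ring_lattice(200, 2), 0.1))
print(f"regular lattice: {regular:.1f} steps on average")
print(f"after rewiring:  {small_world:.1f} steps on average")
```

With 200 elements, the pure lattice averages around 25 steps between elements; rewiring just a tenth of the connections collapses that to a handful of steps, while the total number of connections stays the same. That is the small-world effect in miniature.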

The small-world network has been explored as a means of sending information efficiently through a population. As the diameter reduces, the time it takes information to spread through the entire network reduces. Neurons in the brain have been explored as small-world networks; certain regions of the brain are highly interconnected with a few long distance connections to other regions of the brain. Protein networks and gene transcription networks have also been described with the small-world model. Further information with scholarly references is available on the scholarpedia page (which is generally a great resource for complex systems problems).

 

Here you can read a good scientific paper by Steven Strogatz, one of the premier scientists in the area, published in Nature, one of the most prestigious scientific journals. There are some equations, but the figures are also excellent if you are uncomfortable with the math. The paper models the power grid, boards of directors, and coauthorship using network ideas. I mention this paper specifically because I find Strogatz a very relatable and clear writer. Also consider reading his recent nontechnical book about math, The Joy of x, for more math fun.

Check out my other science posts on graph theory, chaos, fractals, the Mandelbrot set, and synchrony. And drop a note with any questions!