Friday, January 30, 2009

The Ethics of Atoms

Monday's broadcast of the Trials of J. Robert Oppenheimer recalled one of the most important debates in the history of physics. Historians use the Manhattan Project to debate the ethics of science and technology in society. Many of the arguments boil down to the question of what role science plays in society and whether it's used as a force for good or for ill. Some saw the devastation wrought by the atomic bomb as proof that science was inherently a destructive force in the world, while others saw the promise of cleaner and cheaper energy. The debate continues to this day, and draws in nearly every field of science.

Today scientists working on the latest atomic advancement, nanotechnology, are facing a debate just as serious. There is a tremendous amount of potential in these cell-sized machines, but what kind of potential depends greatly on one's attitude toward science. It's an important debate to have. Just as atomic energy proved to be the most powerful technology of the latter half of the twentieth century, nanotechnology promises to be the most powerful development of the early twenty-first.

But to what ends? On the one hand, nanotech will lead to faster computers, stronger materials, better solar panels and even the ability to repair damaged nerves and tissues. However, the long-term effects of nanotechnology on human health and the environment are still unknown. Or, more disconcerting: at what point does the synthetic reproduction of life's processes start to become the creation of synthetic life?

Congress has already called for an investigation into the potential health and environmental impacts of nanotech. At the same time, Canada's parliament is deliberating whether companies should be required to register their nanotech research with the government. This is all part of the continuing introspection as to the role of science in society.

The comparison here to the Manhattan Project is appropriate because all advances in science come with both potential benefits and detriments. Some see technology as a tool, beholden completely to the hands of the people wielding it. Some prefer to focus on the potential harms of science, while I prefer to take a more positive view. The advancement of science, often shockingly fast-paced, represents the search for the betterment of humankind. There is no question that the world is better off today than a century ago because of the advancement of technology. Certainly there are some problems that stem from these advancements, but at the same time, science is the best tool to address and correct them.

What about you? What do you think about the promise and pitfalls of technology in the twenty-first century?

Read the rest of the post . . .

Thursday, January 29, 2009

Beware the Fine Print!

For anyone who has ever gotten tripped up by the fine print of a cell phone contract: watch out! Scientists at Stanford University have just created the world's smallest writing. These new letters are tiny, only about a third of a nanometer in size. I already have enough trouble figuring out my contract; if they start printing my roaming rates in letters less than a thousandth of a millimeter high, I might just give up and revert to carrier pigeons.

The team of scientists manipulated the quantum waves of an electron on a piece of copper so that a single electron encodes 35 bits, enough for each letter. A bit is the fundamental unit of computerized data, essentially a switch that signals either on or off. When multiple bits are combined, more complicated data like letters and numbers emerge. These wave patterns actually project a hologram of the letters outward, which can be seen with a powerful microscope. What would a team of scientists from Stanford University write in the world's tiniest letters? Why, the letters SU of course, so everyone can know who wrote it.
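The way on/off switches combine into letters is easy to play with yourself. Here's a quick sketch using standard 8-bit ASCII; note that this is just an everyday encoding for illustration, not the 35-bit-per-electron scheme the Stanford team actually used:

```python
# Toy illustration: combining bits into letters using ordinary 8-bit ASCII.
# (The Stanford experiment used its own 35-bit-per-electron scheme; this
# just shows how on/off switches combine into more complicated data.)

def letter_to_bits(letter):
    """Turn a single character into its list of 8 on/off bits."""
    return [int(b) for b in format(ord(letter), "08b")]

def bits_to_letter(bits):
    """Recombine 8 bits back into the character they encode."""
    return chr(int("".join(str(b) for b in bits), 2))

message = "SU"  # what the Stanford team wrote
encoded = [letter_to_bits(c) for c in message]
decoded = "".join(bits_to_letter(b) for b in encoded)

print(encoded[0])  # the 8 bits that spell out "S"
print(decoded)     # back to "SU"
```

Run it and you'll see that the letter "S" is nothing more than the switch pattern 01010011.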

This is a major step forward for storing information at small scales. Up to this point scientists had thought that atoms would be the smallest repository for data, with one atom storing a single bit of information. However, this process was able to store 35 bits on a single electron, showing that there is still no known lower limit for data storage.

The quest to write the smallest letters began in 1959, when physicist Richard Feynman offered a thousand dollars to anyone who could reprint a page from a book at 25,000 times smaller than its usual size. In 1985 a physicist, also at Stanford, won by writing the first page of "A Tale of Two Cities" by Charles Dickens in print that needed an electron microscope to read. Five years later, IBM was able to write its name in only 35 xenon atoms. Once again, Stanford is home to the smallest letters in the world.

Read the rest of the post . . .

Wednesday, January 28, 2009

Factoids about Schrödinger's Cat

Weirdness is the name of the game for particles the size of an atom or smaller. Quantum particles exist in multiple states and positions at the same time. This can be hard to visualize; fortunately, there are a couple of metaphors that can help out.

"Factoid" is one of the more curious words in the English language today. It was originally cooked up in 1973 by author Norman Mailer, while he was penning a biography of Marilyn Monroe, as a word to describe "facts" that weren't accurate. The suffix "-oid" means a "similarity, not necessarily exact, to something else," so by adding it to the word "fact," Mailer described information that is accepted as true, especially by the media, but isn't.

More to the point: In fact, a "factoid" isn't factual.

By a cruel twist of fate, the use of "factoid" has been distorted to the point where its original meaning has been obscured. Over the years it's been misused (usually, ironically, by the media) to the point where if you look up this troublesome word in the dictionary today, the two definitions you are presented with are as follows:

1. A piece of unverified or inaccurate information
2. A brief, somewhat interesting fact.

Condensed down:

1. A piece of information that is not true.
2. A piece of information that is true.

The only way to tell which definition is relevant at any given time is by its context. Until the moment that the word is measured by putting it in a sentence, it means either and both of two polar opposite definitions.

Just like a quantum particle.

Until the moment a subatomic particle like a photon or electron is measured, it exists in every possible state at once. To really illustrate how weird this idea is, Erwin Schrödinger came up with a slightly macabre thought experiment now affectionately known as Schrödinger's Cat.

There is a box with a cat inside. Also inside is a vial of poison gas hooked up to a Geiger counter and an atom of an unstable element. When the atom decays and emits radiation, the Geiger counter registers it, releases the gas, and kills the cat. The atom has a certain quantum mechanical probability of decaying at any given moment, and until the moment it's measured, it exists as both a decayed and an intact atom. Unfortunately for the poor feline involved, this means that until the box is opened, the cat would exist in both a living and a dead state. This is obviously absurd.

The point of the thought experiment is not to actually develop an overly elaborate cat-killing device, or even to claim that the feline in question would actually be both dead and alive, but to illustrate just how bizarre quantum particle behavior really is. Just like the word "factoid," which has multiple definitions at once until its context is determined, subatomic particles can exist in multiple states and positions until they're measured.
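For the computationally inclined, the "context decides the meaning" analogy can even be pushed into a toy simulation. This is a purely classical sketch with made-up names, not real quantum mechanics (a genuine quantum state also carries phase information that no bag of probabilities can capture), but it gets the measurement idea across:

```python
import random

# Toy model of measurement: before measuring, the "particle" is described
# only by probabilities over its possible states; measuring forces it to
# resolve into exactly one, and it stays there afterward.

class ToyQuantumWord:
    def __init__(self):
        # Both meanings at once, like "factoid" before context fixes it.
        self.states = {"true fact": 0.5, "false fact": 0.5}
        self.collapsed = None

    def measure(self):
        """Putting the word in a sentence (measuring) picks one state."""
        if self.collapsed is None:
            options = list(self.states)
            weights = [self.states[s] for s in options]
            self.collapsed = random.choices(options, weights)[0]
        return self.collapsed

word = ToyQuantumWord()
first = word.measure()
# Once measured, repeated measurements give the same answer.
assert all(word.measure() == first for _ in range(10))
print(first)
```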

We've got our own section of daily factoids on our Physics Central homepage. Hopefully it's easy to quantify what kind of factoids they are. Should we change the name to avoid any confusion?

Read the rest of the post . . .

Tuesday, January 27, 2009

How'd They Do That Tuesday: The Camera

Cameras are to the nineteenth and twentieth centuries what Gutenberg's printing press was to the fifteenth century. Both completely revolutionized how information is conveyed. Cameras have brought everything from the battlefields of the Civil War to the first moon landing into our homes. The first cameras, over 180 years ago, were little more than wooden boxes with a flap to let light in, but today cameras can capture images on film or digitally, still or moving, microscopic or interstellar, at nearly any wavelength of light. However, almost all cameras work using the same basic principles they've used since they were invented. Nearly all cameras share several similar components, a lens, a shutter, and a recording surface, and they use these parts, along with the physics of optics, to capture an image.

When you point the camera at a friend on vacation or a stunning landscape, the light reflected off of the subject is collected by the lens. The lens is nothing more than a finely polished piece of glass that bends light so it all converges on a single focal point. Light travels slower through glass (or whatever medium the lens is made of, sometimes plastic), so when one side of a light ray hits the lens first, that part slows down and the ray bends. Think of it like a car: if the wheels on one side of the car turn slower than those on the other side, the car turns toward the slower wheels. When one side of a light ray travels faster than the other, the light bends toward the slower side.
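The amount of bending follows the standard law of refraction, Snell's law, which the post doesn't name but which governs every lens. A quick sketch, assuming a typical glass index of refraction of about 1.5:

```python
import math

def refraction_angle(incident_deg, n1=1.0, n2=1.5):
    """Snell's law: n1 * sin(theta1) = n2 * sin(theta2).
    Returns the bent angle (in degrees, measured from the surface
    normal) of a ray entering glass (n2) from air (n1)."""
    theta1 = math.radians(incident_deg)
    theta2 = math.asin(n1 * math.sin(theta1) / n2)
    return math.degrees(theta2)

# A ray hitting glass at 30 degrees bends toward the normal,
# to roughly 19.5 degrees, because light travels slower in glass.
print(round(refraction_angle(30.0), 1))
```

The bigger the mismatch in light speed between the two materials (the ratio n2/n1), the harder the ray turns toward the slower side, exactly like the car with the slow wheels.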

The image is captured on the recording surface where the light rays converge at the focal point. The recording surface is usually a film coated with light sensitive silver halide crystals, or in a digital camera, a charge-coupled device. If the film isn't right at the focal point, the image will appear out of focus and blurry.

The lens will actually flip the object's image onto the film. Because of the angle at which the light hits the glass, rays from the object that hit the edge of the lens bend more sharply than light that hits near the center. The light then converges on the film on the side opposite from where it started. Where the focal point falls is a property of both the curvature of the lens and how far away the object is. When you twist the focus ring on a camera, it moves the lens, shifting the focal point closer to or farther from the film so you can focus on objects at different distances.
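The relationship between the lens, the object distance, and where the image lands is captured by the standard thin-lens equation, 1/f = 1/d_o + 1/d_i. The post doesn't spell it out, but it's exactly what the focus ring is exploiting:

```python
def image_distance(focal_length, object_distance):
    """Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for the image
    distance d_i. All distances in the same units (say, millimeters)."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

# A 50 mm lens focused on a subject 2 meters (2000 mm) away:
# the film must sit about 51.3 mm behind the lens.
print(round(image_distance(50.0, 2000.0), 1))

# Move the subject closer and the sharp image forms farther back,
# which is why the focus ring has to shift the lens.
print(round(image_distance(50.0, 1000.0), 1))
```

If the film isn't sitting at that computed distance, the rays haven't fully converged yet (or have already crossed), and you get the blur described above.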

The shutter and aperture both regulate how much light is allowed into the camera. The aperture acts much like the iris in your eye, opening wide to let more light in, or narrowing to restrict the flow. This allows the photographer to make sure that the right amount of light gets through for the picture to come out. The shutter is essentially the small door in the camera that exposes the film, very briefly, to the focused light so it can capture the image. The longer it's open, the more light gets through. Much of the art of photography is finding the right balance between the shutter speed and aperture size to perfectly capture the subject.
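That shutter/aperture tradeoff can be made concrete with the standard exposure-value formula photographers use, EV = log2(N²/t), where N is the f-number (aperture) and t is the shutter time in seconds:

```python
import math

def exposure_value(f_number, shutter_seconds):
    """Standard exposure value: EV = log2(N^2 / t).
    Equal EVs mean the same total light reaching the film."""
    return math.log2(f_number ** 2 / shutter_seconds)

# f/8 at 1/125 s and f/5.6 at 1/250 s admit nearly the same light:
# opening the aperture one stop while halving the shutter time
# balances out, which is the tradeoff photographers juggle.
print(round(exposure_value(8.0, 1 / 125), 2))
print(round(exposure_value(5.6, 1 / 250), 2))
```

The artistic part is that these "equivalent" settings still look different: the faster shutter freezes motion, while the wider aperture narrows the range of distances in sharp focus.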

Read the rest of the post . . .

Monday, January 26, 2009

New Year, New Podcasts!

Do you have any New Year's resolutions? Well, Physics does! Listen to the podcasts.
Read the rest of the post . . .

A Very Long Entanglement

A team at the University of Maryland was able to teleport the information of a charged atom across a meter of distance last week. The actual atom didn't move; instead, information about it appeared a large distance away instantaneously, as in faster than the universal speed limit, the speed of light. Though this might sound impossible, under the funky laws of quantum mechanics, weird things that don't make classical sense are commonplace. This wasn't even the first time scientists were able to influence distant particles instantaneously, but it's the first time an entire atom's information has been teleported.

The subatomic world of quantum mechanics is full of bizarre happenings and spooky events. One of the strangest goings-on is how particles can exist in multiple places at one time. In the very tiny world of quantum mechanics, particles don't necessarily inhabit a single point in space, but exist as a wave of probability. These waves express where a particle is likely to exist at any given moment. Only when someone measures the locations and states of these particles, do their wave functions collapse and the particles resolve themselves into single points.

When two waves overlap, the particles can become "entangled," and that's when things get really spooky. When two particles are entangled, their traits become mixed up and indistinct. Only when someone measures one of them do they resolve themselves into their original distinct particles. Up until that moment, they exist as both particles, with a certain probability attached to each.

Scientists can isolate each particle's wave function and, with careful manipulation, actually move the particles apart from each other. After the two are separated by a wide distance, scientists measure one particle, collapsing its wave function. When this happens, the other particle resolves itself at precisely the same moment as well. The two particles essentially communicate with each other at precisely the same instant, across great distances. The reason no laws of physics are broken by this instantaneous linkup is that the information needed to interpret the results still has to travel by normal means.
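A purely classical toy model can at least show the correlation side of the story. Here the two particles secretly share an outcome fixed at creation; real entanglement is provably stronger than any shared-randomness model like this one (that's what Bell's theorem is about), but the "measure one, instantly know the other" effect looks the same:

```python
import random

# Toy (classical) stand-in for an entangled pair: the two particles
# share a hidden outcome fixed when the pair is created. Measuring
# either one immediately tells you the other, however far apart.

def create_entangled_pair():
    """Return perfectly anticorrelated 'spins' for the two particles."""
    spin = random.choice(["up", "down"])
    other = "down" if spin == "up" else "up"
    return spin, other

for _ in range(5):
    here, far_away = create_entangled_pair()
    # The distant particle's state is fixed the moment ours is measured...
    assert {here, far_away} == {"up", "down"}
    # ...yet no usable message travels: each local result on its own
    # is just a random coin flip, which is why no speed limit is broken.
```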

Though this kind of "teleportation" won't help Mr. Spock beam down to a planet's surface, this transfer of information holds a lot of promise for quantum computers and secret codes. Using entanglement, information can be securely transferred between two points a great distance apart.

Read the rest of the post . . .

Friday, January 23, 2009

Can You See Me Now?

"I was working on the latest in invisible cloaking technology, but now I can't find where I put it."

With all the hullabaloo around here over the inauguration last week, this story nearly fell through the cracks. Researchers at Duke University announced that they have built a next-generation cloaking device that could make objects appear to vanish. This new device improves enormously over previous light-bending prototypes, and is a huge leap forward for the developing technology.

Cloaking devices work by bending electromagnetic waves around an object so it seems to disappear. When light hits the material the team created, the light gets redirected around an obstruction, and then released around the other side as if nothing ever happened. A spectator would see some distortion around whatever object is "cloaked" but it would be more like looking at a mirage than at a solid object.

A team at UC Berkeley developed a prototype last year that could hide objects from visible light. This new one is capable of deflecting nearly any wavelength of light instead of only the thin band we can see. The device tested last week actually used microwaves redirected around an object, but it can easily be reconfigured for visible light. There are many possible applications for the new technology outside of the obvious "make things invisible" aspect. Wrapping this material around objects that usually disrupt cell phone calls could cut down on interference, or even allow signals to connect that couldn't have otherwise.

What's really cool about this new technology is how surprisingly easy it is to make. The trickiest part, and what took the Duke team the longest, was developing the algorithms that tell the materials how to redirect the light. Once that was completed, the rest of the device took only nine days to actually build.

The whole research field has been moving forward very quickly. Metamaterials that could redirect light were first demonstrated only two years ago, and already there's a working prototype. It's impossible to say for sure, but at the rate this field is moving, a Harry Potter style invisibility cloak may be available at your local Brookstone before you know it.

Read the rest of the post . . .

Thursday, January 22, 2009

The Facts on Pollock's Fractals

What do you do when you inadvertently discover a huge cache of paintings that may have been created by the American master Jackson Pollock? Have all of them verified by experts? Naturally. Set up a world tour, then donate them to museums? Of course! Maybe even fire up the calculator for some non-Euclidean geometry?

Not so fast on that last one.

About ten years ago physicist Richard Taylor created quite a stir in the mathematics and art worlds when he claimed Pollock's works contained unique mathematical patterns in his brush strokes. Taylor said that these mathematical patterns were fractal, and so unique that one could actually identify whether any given work of art was a genuine Pollock based on them. Fractals are extraordinarily complex geometric patterns in which shapes and configurations repeat themselves at every scale.

However, a new, soon-to-be-published paper dispels the idea of using fractals to identify and date a Pollock. Physicists Lawrence Krauss, Katherine Jones-Smith and Harsh Mathur of Case Western Reserve tested several of Pollock's most famous paintings, along with works commissioned from local artists, to see if the fractal identification would hold up.

It didn't. Not even close. Several of Pollock's most famous paintings failed the test, while other paintings created in 2007 by local artists showed up as the real thing. This work builds on a previous study in 2006 where Jones-Smith and Mathur showed the fractal patterns Taylor found were far too small to usefully identify a painting. The two were able to recreate these "Pollock fractals" by drawing crude freehand stars.

This new study shows pretty conclusively that Taylor's method for identifying works by Jackson Pollock using fractals is complete bunk. At the same time, it serves as a good word of caution about seeing things that may not be there. Fractal geometry in nature is a very trendy research subject right now. The idea that life can be broken down into numeric equations is exciting both in the sciences and in pop culture. It was even the inspiration for Darren Aronofsky's mathematics thriller Pi. At the same time, it's easy to go overboard. Mathematics explains much about the physical world around us, but there are limits to what it can predict. Fractals are a cutting-edge field of research with applications across many of the sciences, but one has to be careful that each application is backed by hard evidence.

Or to quote the wizened old mathematics professor in the movie Pi, "[W]hen you abandon scientific rigor, you're no longer a mathematician, you're a numerologist!"

Read the rest of the post . . .

Wednesday, January 21, 2009

Name that Scientist! New Administration Edition

President Obama's administration is chock-full of scientists. Kudos to all who can correctly name each scientist caricatured in this picture. And if you can't....take a peek at the article it came from.
Read the rest of the post . . .

The Physics of Crowds

What do the grains of sand in an avalanche and the people at the 2009 Presidential Inauguration have in common? There are so many small parts that the best way to understand how they all flow is through the physics of fluid dynamics.

The news is reporting that close to 2 million people crowded the lawn of the National Mall for the inauguration of Barack Obama. At the end of the ceremony, the throngs of people surged out seeking to get somewhere warm. I know because I was right in the middle of it, caught up in the flow.

That's what fluid dynamics is: the study of the flow of liquids, or of large numbers of particles, or even of large crowds of people. Whenever there is a whole bunch of anything moving together, like a running river or an avalanche, certain patterns and organizations tend to emerge. Areas of smooth flow or turbulence can be predicted accurately based on the fluid's properties, like viscosity or particle size.

The same is true of large crowds of people. Properties like how densely packed the area is and where people are trying to go allow programmers to predict how a crowd will act in any environment. In large groups, people really do behave like a "smart fluid." Modeling the behavior of thousands of individual people (or particles) takes a lot of processor power. But as computers continue to improve, more detailed information about each person in the crowd, like line of sight or even individual personality, can be programmed in.
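One ingredient of such models is an empirical speed-density relation, a so-called fundamental diagram: the denser the crowd, the slower everyone moves. Here's a sketch in the spirit of Weidmann's classic pedestrian data; the exact constants are illustrative textbook values, not anything any particular inauguration model used:

```python
import math

# Sketch of why density matters in crowd models: an empirical
# speed-density relation ("fundamental diagram") for pedestrians.
# Free walking speed ~1.34 m/s; flow jams completely as density
# approaches ~5.4 people per square meter.

def walking_speed(density):
    """Mean walking speed (m/s) at a given crowd density
    (people per square meter)."""
    v_free, gamma, d_max = 1.34, 1.913, 5.4
    if density >= d_max:
        return 0.0
    return v_free * (1 - math.exp(-gamma * (1 / density - 1 / d_max)))

for d in (0.5, 1.0, 2.0, 4.0):
    print(f"{d} people/m^2 -> {walking_speed(d):.2f} m/s")
```

Feed a relation like this into a simulation of thousands of agents and you can spot where flow slows to a crawl, which is exactly where dangerous crowd pressure builds.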

This kind of crowd mapping can literally save lives. Architects used this technique to completely redesign an especially dangerous bridge in Saudi Arabia. Until its redesign, pilgrims traveling to Mecca would overwhelm the bridge, resulting in large numbers of trampling deaths.

Planners for yesterday's inauguration used similar kinds of models to make sure people would flow smoothly in and out of the National Mall. Where turbulence started to form was where people were in danger. By keeping the flow running smoothly, giving everyone a little room and avoiding crowd surges, the enormous crowd was able to disperse without major incident.


Here's a couple of photos from the inauguration. You can see how closely packed everyone is. Not too hard to see how the flow of an avalanche might be similar to everyone trying to move at once.

Read the rest of the post . . .

Friday, January 16, 2009

How'd They Do That Tuesday (Friday Edition): Spectroscopy

[Note: Due to the long weekend, next week's How'd They Do That Tuesday is being brought to you early.]

As I wrote about earlier, scientists are now scrutinizing methane emissions on Mars to determine whether they were caused by microorganisms or by some unusual geologic phenomenon. But the astute reader might wonder: how did astronomers on Earth discover Martian methane in the first place? Because of its orbit, Mars can be anywhere between 55 million and 400 million kilometers away from Earth. Not to mention that methane is a colorless gas, so how did astronomers "see" it? What gives?

Of course, physics has an answer. One of the most powerful tools in an observatory is the spectrometer, which astronomers use to tell what elements make up objects very far away. It's a brilliant system that needs nothing more than visible light to work.

When light hits an object, some wavelengths are absorbed by the object, while others are reflected off, giving the object its color. Spectroscopy uses this reflected light to create a sort of fingerprint of the elements in the object.

Atoms are the fundamental building blocks of every element in the universe. They're composed of a number of electrons orbiting a dense nucleus of protons and neutrons, and that number is unique to each element. Because of the strange laws of quantum mechanics, these electrons can only orbit at very specific, discrete distances from the nucleus, and those distances are unique for every element. When light hits the cloud of electrons, they absorb some of its energy and jump from their ground state (the closest orbit) to an excited state (an orbit farther away). The energy they absorb corresponds to a very specific wavelength on the electromagnetic spectrum.

As a result, when light reflects off of any material, it's missing narrow bands of wavelengths. When this light is shone through a prism, it spreads out from the longer, low-energy wavelengths (red) to the shorter, high-energy wavelengths (violet) in the form of a rainbow. The missing packets of energy show up as a pattern of thin dark bands at certain colors, with a unique pattern for every element.

That's for reflected light, like sunlight off of Mars. The process works in reverse when you have a star or another bright object giving off light of its own. When the object's electrons drop from their excited states back to their ground states, they give off light at specific energies, at exactly the same frequencies at which they absorb it. If you look at such an object through a spectrometer, you'll see a mostly dark spectrum with thin bright lines of color exactly where the dark bands would have been in the reflected light.
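For hydrogen, the simplest atom, you can actually compute where those bands fall using the classic Rydberg formula (not given in the post, but the textbook example of element-specific spectral lines):

```python
# Hydrogen's visible (Balmer) spectral lines from the Rydberg formula:
# 1/lambda = R * (1/n1^2 - 1/n2^2), with the lower level n1 = 2.
# These exact wavelengths show up as dark bands in reflected light
# or bright lines in emitted light, wherever hydrogen is involved.

RYDBERG = 1.0973731568e7  # Rydberg constant, per meter

def balmer_wavelength_nm(n_upper):
    """Wavelength (nm) of the jump between level n_upper and level 2."""
    inv_wavelength = RYDBERG * (1 / 2**2 - 1 / n_upper**2)
    return 1e9 / inv_wavelength

for n in (3, 4, 5, 6):
    print(f"n={n} -> 2: {balmer_wavelength_nm(n):.1f} nm")
# The n=3 line is the famous red H-alpha line near 656 nm.
```

Every element has its own such fingerprint of lines, which is what lets a spectrometer pick methane out of Martian sunlight from millions of kilometers away.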

Using these techniques, astronomers can find out what elements make up faraway objects, like the methane on Mars. And they don't stop there. Spectroscopy can be used over tremendous distances, as when carbon dioxide and water vapor were found in the atmospheres of planets many light years away.

Since it was first used on stars by Father Angelo Secchi in the mid-nineteenth century, spectroscopy has allowed astronomers to probe the composition of the universe over great distances with extraordinary accuracy. Without it, we would have to physically travel to Mars to find its methane.

Read the rest of the post . . .

Of Mars and Methane

Yesterday NASA said during a press conference that there is a chance life may still exist on Mars.


Back in 2003 and 2006, the Earth-based Keck Telescope in Hawaii detected huge bursts of methane gas above the surface of the Red Planet. After poring over the data for years, the team of scientists analyzing these inexplicable events released their preliminary conclusions yesterday. NASA says these gaseous emissions come from either microscopic life forms or some unusual geologic phenomenon.

They're hedging their bets a little because the agency jumped the gun once before, when in 1996 it announced that tiny structures on a Martian meteorite could be fossils of extraterrestrial bacteria. The announcement caused a great sensation at the time, but over the years alternate explanations have been shown to be equally plausible.

This time they're being much more cautious. In the words of team leader Michael Mumma, "Right now, we do not have enough information to tell whether biology or geology -- or both -- is producing the methane on Mars…But it does tell us the planet is still alive, at least in a geologic sense."

That's really saying a lot. Mars has been thought of as a dead planet for years, but the team simply hasn't been able to definitively rule out life as the source of these methane emissions.

Today the surface of Mars is a cold and barren place, but it wasn't always like that. Far back, in its much warmer past, the Red Planet once had its own blue oceans and rivers. It's theoretically possible that life was able to gain a foothold under these milder conditions. But Mars's climate cooled dramatically over the millennia, most of its liquid water evaporated away or froze underground, and the planet became the nearly inhospitable place it is today. If there are microbes belching methane, it's likely they live far underground, where Mars's warm core keeps the normally frozen water liquid.

What makes these methane emissions so exciting is that Mars has been long thought to be geologically inactive. There are no known active volcanoes on Mars that would produce this gas. In addition, other elements usually associated with volcanic eruptions, such as sulfur, don't seem to be present. It is possible that the gas has been stored as solid deposits below the planet's surface for millennia, which is why more study is needed before anyone can say anything definitively.

Even if it turns out that it's geology, not biology, behind these methane emissions, that alone would be Earthshaking (Marshaking?) because it would show the Red Planet isn't as geologically dead as we previously thought.

It will likely be many years before we can land a rig on Mars to drill down and investigate firsthand what's emitting this gas, but it gives us a tremendous impetus to keep searching for answers. It's way too early to jump the gun and declare Mars a wildlife reserve, but it's a promising piece of evidence. Personally, I really hope it turns out down the line to be microorganisms. If two planets in the same solar system are capable of sustaining life, then odds are we aren't nearly as alone in the universe as it seems.

Read the rest of the post . . .

Thursday, January 15, 2009

Bend it, Fold it, Twist it, Compute it

Molecules of carbon keep showing more and more promise for future use in electronics. This week scientists in South Korea demonstrated a process for making a transparent and flexible computer chip out of many interlinked carbon atoms. The day when we can wear our computers and keep the screens folded up in our pockets seems to keep getting closer and closer.

When carbon atoms are organized into a honeycomb-patterned molecular structure, they exhibit all kinds of exciting properties. They are great electrical conductors and are structurally strong while still lightweight. Roll them up into a cylinder and you have carbon nanotubes, which could be used in everything from a space elevator to microcircuits. Keep the carbon honeycomb in flat sheets and you have graphene, which is showing a tremendous amount of potential as the computer chip material of the future.

The team in South Korea was able to efficiently make these flexible nanosheets by first evaporating a carbon-rich substance over a sheet of nickel. When the gas condensed back to a solid, a honeycomb lattice of carbon atoms formed on the surface of the nickel, efficiently creating graphene. The nickel sheet is then dissolved away with chemicals and the graphene is transferred onto a more flexible plastic sheet. By etching specific patterns onto the nickel sheet, the circuitry paths needed for a working processor form when the carbon condenses.

This is a huge step forward for nano-computing. Though it still needs some refining, this method is far more efficient than previous techniques for creating large-scale sheets of graphene. Once purer graphene can be produced in greater quantities, technicians think these chips could be the key to building exponentially faster processors. This is very good news, because a recent simulation shows that simply linking more processor cores together will eventually reach its limit of efficiency, and actually start to slow down computer processing. This new technique using graphene could be the key to pushing processing speeds ever faster for years to come.

Read the rest of the post . . .

Wednesday, January 14, 2009

A Rose by Any Other Quantum Wave Function Would Smell as Sweet

Biology and quantum physics are two disciplines of science that rarely overlap. Quantum physics studies the strange workings of fundamental particles smaller than an atom, while biology looks at much larger chemical interactions and living organisms. However, new, unconventional research suggests that something as mundane as stopping to smell the roses is made possible by processes that bridge that gap.

Roses grow by using chlorophyll to convert sunlight into food through photosynthesis. A recent study of photosynthesis in green sulfur bacteria found that these tiny microorganisms might just use some quantum weirdness to help transfer that food energy efficiently. Energized electrons travel through the myriad connections within the bacterium's single cell, transferring energy throughout. Electrons, which are quantum particles, can literally exist along a wave function at multiple points at one time. Only when someone (or something) measures them do their wave functions collapse and resolve into a single point.

The bacterium takes advantage of this quantum peculiarity by letting the electron randomly wander through all of the potential paths across the connections simultaneously. The path that first reaches the intended destination collapses the wave functions of all the other particles on alternate routes, so only the most efficient path is used. This is really a fundamental form of quantum computing in the natural world.

After you've taken a deep sniff of the blossom, another quantum effect might be helping you smell that rosy smell.

A new take on how we sense smells invokes an exotic phenomenon known as quantum tunneling. This happens when a quantum-sized particle, like an electron, is seemingly able to show up on the opposite side of a theoretically impenetrable barrier. When you inhale, tiny particles called odorants enter your nose and interact with your smell receptors. A new study suggests that electrons in the smell receptors can tunnel across the odorant to the other side, but only when the odorant is vibrating at a specific frequency. That characteristic vibration, sensed this way, is what gives the rose its sweet smell.

Right now these theories are still outside the scientific mainstream, but they are starting to gain traction. It's not impossible that in a few years a new field of quantum biology will develop as more and more of the wonders of the natural world are linked to fundamental physics.


Tuesday, January 13, 2009

How'd They Do That Tuesday: The Physics of Bicycles

I'm starting a new feature this week called "How'd They Do That Tuesday." Each week I'll pick out some cool widget or doodad, possibly even a thingamajig, and take a close look at some of the physics that makes it work. So for my inaugural run, what better place to start than with a subject near and dear to my heart: bicycles.

Bicycles are elegant machines wholly governed by good old-fashioned classical mechanics. No fancy quantum probability waves or relativistic space-time curves here. Some of the oldest and most basic laws of physics are all we need when talking about the bicycle, starting with Newton's second law of motion:

Mass (M) x Acceleration (A) = Force (F)

Essentially, the harder you push, the faster the bike speeds up. When you push down on the pedals, the bike accelerates in proportion to the force you apply, scaled by the bike's (and rider's) mass. That way, if I weighed 70 kg and my bike weighed 10, and I wanted to accelerate from zero to about 11 kilometers per hour (roughly 3 meters per second) in one second, all I have to do is plug in the numbers to figure out how much force I need to put into the bike.

80 kg x 3m/s^2 = 240 Newtons

Newtons are the units used to measure force: 1 newton = 1 (kg*m)/s^2. Put simply, one newton of force will, over one second, accelerate a 1 kg object by one meter per second. If my bike and I combined were twice as heavy, it would take twice as much force to get the machine moving as quickly. Racing bikes are usually made of super-lightweight carbon fiber and titanium, so there's less weight for the rider to push.
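For the programmatically inclined, the arithmetic above fits in a few lines of Python (a toy sketch; the function name and the numbers are just my framing of the example above):

```python
# Newton's second law: F = m * a
def force_needed(mass_kg, delta_v_mps, time_s):
    """Force (in newtons) to change speed by delta_v over the given time."""
    acceleration = delta_v_mps / time_s  # a = dv/dt, in m/s^2
    return mass_kg * acceleration        # F = m * a

# Rider (70 kg) plus bike (10 kg), going from 0 to 3 m/s in one second
print(force_needed(70 + 10, 3.0, 1.0))  # 240.0 newtons
```

Doubling the mass or halving the time doubles the answer, which is exactly the proportionality the equation promises.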

Gear Ratios

That's the basic idea behind applying force to a bicycle, or any object really, but there's a bit more to it than that. Bicycles use gear ratios to change how the force put into the bike is used to make it go. Essentially, a bicycle's gear ratio determines how far the rear wheel turns for every turn of the pedals.

To get a good picture of how this works, imagine you're going up a hill. No matter what gear you're in, you'll have to travel the same distance and ultimately do the same amount of work, but by shifting into a lower gear, you can make it up a lot more easily. When you're in a low gear, the wheel only turns a short distance for each rotation of the pedals. This way, the effort needed to move a few centimeters forward on the road is spread over an entire turn of the pedals, so each push requires far less force.

If you tried riding up the same hill in a higher gear, where each rotation of the pedals turns the wheel two or three full rotations, you would find pedaling much more difficult. This is because while the work needed to push the bike up the same stretch of road would remain the same, it would be spread over fewer turns of the pedals. You would need much more force (and torque) to move the pedals the same distance around.

This is in essence a simple machine in action. No matter what, a set amount of energy is needed to climb the hill. By spreading the input over many turns of the pedals rather than only a few, the same amount of work can be done with far less force at any one moment. It's essentially the same premise that makes pulleys so useful.
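Here's one way to put rough numbers on that trade-off, as a Python sketch (the wheel and crank dimensions are typical values I've assumed, and friction is ignored entirely):

```python
import math

# Toy gear-ratio model: climbing a hill takes a fixed amount of work,
# W = m * g * h. Each pedal turn moves the bike forward by
# (gear ratio) * (wheel circumference), so a lower gear means more
# pedal turns but less force per turn.
def pedal_turns_and_force(total_work_j, hill_length_m, gear_ratio,
                          wheel_diameter_m=0.7, crank_length_m=0.17):
    distance_per_turn = gear_ratio * math.pi * wheel_diameter_m
    turns = hill_length_m / distance_per_turn
    # The fixed work is spread over the total distance the pedals travel
    pedal_distance = turns * 2 * math.pi * crank_length_m
    force_at_pedal = total_work_j / pedal_distance
    return turns, force_at_pedal

work = 80 * 9.8 * 20          # 80 kg bike+rider climbing 20 m: ~15,700 J
low = pedal_turns_and_force(work, 500, 1.0)   # low gear: 1 wheel turn per pedal turn
high = pedal_turns_and_force(work, 500, 3.0)  # high gear: 3 wheel turns per pedal turn
# Same hill, same work, but the high gear demands 3x the force per pedal stroke
```

The high gear needs exactly three times the pedal force of the low gear, because the same work is squeezed into one third as many pedal turns.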

Once you've crested that hill, the brakes use nothing fancier than old fashioned friction to slow you down again.


Monday, January 12, 2009

Top 5 Discoveries of LAST WEEK!!!

Last week the American Astronomical Society had its big annual meeting, full of groundbreaking discoveries. I already mentioned that our home galaxy is both bigger and spinning faster than we thought, because that news was too big not to mention. Now that the conference has wrapped up, I can go through and give you my five favorite discoveries.

5) Black Holes Preceded Galaxies – Since astronomers first determined that gigantic black holes live at the center of most galaxies, they've been trying to figure out which came first. Did the large cluster of stars in the middle of a galaxy collapse, causing a black hole to form, or did the black hole already exist and pull stars in around it? Astronomers have found that the mass of the black hole at the center of a galaxy is almost always about one thousandth of the mass of all the stars in the galactic bulge. However, when one peers at some of the farthest and oldest galaxies, the black holes at their centers are proportionally much bigger, leading scientists to conclude that the black holes likely formed first and then pulled stars and material in around them.

4) Stars Form around Black Hole – More about black holes. The tumultuous environment around a massive black hole, like the one at the center of our galaxy, seems hardly the place for a baby star to form. However, astronomers were able to observe two stars forming only a few light years from the galactic center, where gravitational forces should have ripped them apart. It just goes to show there's still so much we don't know about our own galactic backyard.

3) Magnetars and Quark Stars – Ok, this one is weird. After a large star runs out of fuel, it erupts into a gigantic explosion called a supernova, and gravity crushes the leftovers down into a super-dense material. If the star wasn't massive enough to collapse into a black hole, it can form a neutron star. The material in these is so dense that all of the electrons and protons are condensed together, forming a solid mass of neutrons; it is as if the entire city-sized star were one gigantic atom. Every once in a while one turns up with a surprisingly powerful magnetic field (these are dubbed magnetars), but no one was quite sure why the fields were so strong. Astrophysicists now think the reason may be that the neutrons compress to the point where they start forming a very dense material out of the neutrons' fundamental quarks themselves. This would essentially make a magnetar a "quark star," and one property of this very exotic material would be an extremely strong magnetic field.

2) Gas Giants Have to Form Fast – Planets like Jupiter form when gas and debris orbiting a star start to gravitationally coalesce around a single location, eventually creating a gas giant. After analyzing a nearby young star cluster, scientists were surprised to discover that the gas giants' raw material seems to dissipate from around a newly formed star in only a couple of million years, hardly any time at all on interstellar timescales. This means that gas giants have to form very quickly, before all of their raw materials are gone. The solid material that makes up rocky planets like Earth and Mars seems to linger longer, giving the smaller, denser planets more time to form.

1) Cosmic Background Noise Louder than Expected – When a team of scientists set out to observe some of the oldest stars in the sky using a radio receiver, they were startled to find six times as much radio interference in their instruments as they expected. After many checks and recalibrations, the team was forced to conclude that their equipment was right and there was a lot of radio noise coming from something. No one knows for sure what, though; there's no phenomenon we currently know of that could produce so powerful a radio signal from all directions. This interference will make it much harder to detect the very oldest stars the team was originally hunting for, but an unexpected result like this is tantalizing, because it once again means there's something still to figure out about the universe. Right now all we know is that there's something out there making an awfully loud racket.


Friday, January 09, 2009

Sometimes Even Scientists Have to Just Say "Huh?"

Some days scientists just have to throw up their hands and admit that they don't know why something is happening the way it is. I love it when this happens, not because I like seeing exasperated physicists (though it can be pretty funny sometimes), but because it shows that there's still so much left in the universe to discover.

That's what happened this week when scientists from Argonne National Laboratory announced that they really just don't know why certain metals can transfer electricity without resistance at relatively high temperatures. They've got some guesses, sure, but right now that's all.

When one cools most materials down to super-cold temperatures (within a tiny fraction of a degree above absolute zero), a strange phenomenon sets in. Magnetic fields stop passing through the object, and electrical current is able to flow without any resistance. This is called superconductivity, and even at these near-zero temperatures, it's a hot field of study in physics. Because superconductors don't have any resistance, electrical charge flows through them without losing any energy. This could one day be used to transfer electricity vast distances without losing any power. The current can even keep flowing after the original voltage is removed, much like an ice skater can keep gliding on momentum alone.
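To put a rough number on what "no resistance" buys you, here's a toy comparison in Python (the line resistance and current are invented but plausible figures of mine, not data from the study):

```python
# Resistive loss in a conventional transmission line is P = I^2 * R,
# while a superconductor's R = 0 means zero loss, no matter the distance.
current = 1000.0            # amps carried by the line (assumed)
resistance_per_km = 0.05    # ohms per kilometer, conventional cable (assumed)
distance_km = 500.0         # length of the run

conventional_loss_watts = current ** 2 * resistance_per_km * distance_km
superconductor_loss_watts = current ** 2 * 0.0  # zero resistance, zero loss

print(conventional_loss_watts)   # 25,000,000 W turned into waste heat
```

With these made-up numbers, the ordinary line sheds 25 megawatts as heat over the run; the superconducting one sheds nothing at all.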

Getting materials to keep these unusual properties at warmer temperatures has been a huge focus for scientists. They've been able to get certain copper-based materials to hold onto their superconductivity at temperatures as high as 77 kelvin (that's a balmy -196 degrees Celsius). Then last year, scientists found that certain iron-arsenic compounds hold onto their superconductivity up to 55 K, and now they're just not sure why.

In the copper compounds, small vibrations in the superconductor's molecular structure let electrons pair up and travel freely through the material. However tests have shown that the vibrations in the iron-arsenic compounds just aren't strong enough to let that happen, leading to speculation that maybe it's the material’s magnetic properties that cause the superconductivity.

As of right now, no one is really sure. That's what makes all of the cutting edge research so exciting, because who knows what new and exciting discoveries await around the next corner.


Thursday, January 08, 2009

Happy Birthday Stephen Hawking!

Today we celebrate the 67th birthday of Professor Stephen Hawking, perhaps the most brilliant mind alive today. He's been one of the greatest contributors to advanced physics and cosmology over the last half century, even after being confined to a wheelchair for many years because of Lou Gehrig's Disease. At the same time, he has been a tireless advocate for scientific research, all the while reaching out to involve the public. His book A Brief History of Time stayed on the London Sunday Times best-seller list for 237 weeks straight, making it the most popular physics book in history.

He first exploded onto the scene in the late 1960s by helping to prove that black holes, massive gravity wells in space so named because not even light can escape them, were predicted by Albert Einstein's general theory of relativity. At the center of every black hole is what's called a singularity, an infinitely small and infinitely dense point in space that gives the black hole its mass. Hawking realized that the laws governing a black hole's singularity would be the same as those governing the point at the center of the Big Bang, out of which the entire cosmos emerged. This breakthrough pushed the study of the beginning of the universe ahead immensely.
Just a few years later, in 1974, Hawking again turned the study of black holes on its head when he found that black holes aren't completely black after all. He showed that, because of quantum field theory, black holes have a tiny temperature and slowly radiate energy, an effect dubbed Hawking radiation. More recently Hawking has been splitting his research between predicting the ultimate fate of the universe and fleshing out string theory, in hopes of unifying quantum mechanics and general relativity once and for all.
When Hawking was 21, doctors diagnosed him with amyotrophic lateral sclerosis (ALS), commonly known as Lou Gehrig's Disease. The condition causes the nerves between the motor center of the brain and the body to break down, leading to the gradual loss of all muscle control. At the time, doctors gave Hawking only about three years to live; that was 46 years ago. He has since been confined to a wheelchair and lost the use of his voice, but his mind is as brilliant as ever. Communicating through a computerized voice synthesizer, Hawking has written numerous books for everyone from advanced physicists to the general public, and even a new series of kids' books. He's become a sort of physics celebrity, having appeared in numerous TV shows including Star Trek: The Next Generation, The Simpsons and Futurama. In 2007 he even took the time to float around in zero gravity. Last year he announced that he was stepping down as Lucasian Professor of Mathematics at Cambridge University, a post once held by Isaac Newton. He'll stay on at Cambridge as a Professor Emeritus, and is far from retiring. He's said his schedule is already booked through 2012.


Wednesday, January 07, 2009

What in the Heck is "Quantum Computing" Anyways?

The field of quantum computing is a major area of research in physics. Nearly every week a new discovery is made that helps lay the groundwork for further developments. But what exactly are quantum computers, and more importantly, how much longer will it be until I can upload my iTunes onto one?

Unfortunately, the answer to the second part is: not for a long while. As for what a quantum computer actually is, that needs a quick look at what quantum physics is.

On the scale of the extremely tiny (we're talking one billionth of a meter and smaller, as in atom-sized), particles behave very differently than what we're used to seeing all around us. Fundamental particles, such as photons and electrons, can exist in literally more than one place at a time. Their positions aren't described as a single point, but as a wave of probability, also known as their superposition. They exist simultaneously at every point along the wave until someone or something measures their location. When this happens, their probability wave collapses, and only then does the particle resolve itself into a single point.

Crazy, no? It seems so counterintuitive because we're used to looking at large objects that obey the "one place at a time" rule we all know and love. In fact, what we're seeing is the average of all the many quantum-sized particles, canceling out all but the most probable path an object can take. On the tiny quantum scale, these waves of probability reign supreme. Experiments like the famous double-slit experiment have physically shown this to be the case. And it's not just a particle's path that can exist in many different states at once, but nearly everything that defines it, like its spin.

So what does all this have to do with computers? The most basic unit of information in a computer is a single one or zero, known as a bit. Long strings of bits organized into code are the fundamental language of computer programming. Bits stored in a normal computer are usually magnetized domains on a hard disc. In a quantum computer, subatomic particles would store the bits (known as qubits) of information. The big difference is that on a normal computer each bit can be only a 1 or a 0, but in a quantum computer each qubit can be a 1, a 0, or a superposition of both a 1 and a 0 at the same time. Because this information exists in many simultaneous states, a quantum computer would be able to run a number of difficult calculations in parallel. To get the final readout, one measures the system, collapsing the wave functions of all the other solutions and yielding the correct answer. By processing all of these calculations in parallel, quantum computers could be far more powerful than even the most advanced supercomputers of today. That, in a very brief nutshell, is what quantum computing is all about.
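For the curious, the superposition-and-collapse bookkeeping for a single qubit can be sketched in a few lines of Python (a toy simulation of the math only, not how a real quantum computer is built):

```python
import random

# A toy single qubit: a pair of complex amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1, where |alpha|^2 is the probability of
# measuring 0 and |beta|^2 the probability of measuring 1.
def hadamard(state):
    """Turn a definite 0 or 1 into an equal superposition of both."""
    alpha, beta = state
    s = 2 ** -0.5
    return (s * (alpha + beta), s * (alpha - beta))

def measure(state):
    """Collapse the superposition: returns 0 or 1 with the right odds."""
    alpha, _ = state
    return 0 if random.random() < abs(alpha) ** 2 else 1

qubit = (1.0, 0.0)          # starts as a definite 0
qubit = hadamard(qubit)     # now a 50/50 mix of 0 and 1
samples = [measure(qubit) for _ in range(10000)]
# roughly half the measurements come up 1, half come up 0
```

Until it is measured, the qubit genuinely carries both answers at once; measurement is what forces it to pick one, with odds set by the amplitudes.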

Of course, it's going to be a while before we'll be cruising the information superhighway on the latest qubit-powered Pentium-Q processor. Theoretical physicists are still working out how to set up quantum algorithms, while technicians are still developing methods to store qubits. No one knows for sure when the first of these machines will hit the market, but when it does, today's best supercomputers will look like pocket calculators.


Tuesday, January 06, 2009

Milky Way Galaxy 2.0

Imagine living in one city all of your life. After a while you get to thinking you have a pretty good grasp of where things are, what the streets look like and so on. Then one day you realize nearly everything you thought you knew about home needs to be redrawn. Turns out, your home city is much bigger, shaped differently, spinning faster and… um… more likely to collide with the next nearest city. Pretty disorienting, huh? Ok, so my city metaphor sort of breaks down there at the end. Even so, what I just described is almost exactly what's happened with our home galaxy, the Milky Way, this past week.

A galaxy is a massive cluster of stars, many thousands of light years across, and galaxies come in numerous shapes and sizes. The most common kind of galaxy is elliptical in shape; the one we live in, however, is more of a couple of spiraling arms with a bulge in the center. Because we don't yet have the technology to travel outside the galaxy and map it, astronomers have had to infer its contours from data gathered here on Earth. On Monday, a team of astrophysicists announced that they've finished using infrared light to model the complete shape of the galaxy. The old debate as to whether our spiral galaxy has two or four arms can be settled: they're both right. The galaxy starts out with two arms jutting out from the center, which then each split into two. This is the first time this particular shape has been proposed.

At the same time, the American Astronomical Society is holding its huge annual meeting in Long Beach, California, so there has been a slew of new cosmic discoveries announced this week. There, another team of astronomers announced that the speed at which our Sun orbits the galactic core is around 100,000 mph faster than previously measured. For it to be moving that quickly at this distance from the center, the galaxy itself must be roughly 50 percent more massive than previously thought.
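A quick back-of-the-envelope check shows how a faster orbit implies a heavier galaxy. This Python sketch assumes a simple circular orbit and plugs in round numbers (roughly 254 km/s for the revised speed and 8.4 kiloparsecs for the Sun's distance, both approximations on my part):

```python
# Circular-orbit estimate: setting gravity equal to centripetal force
# gives the mass enclosed inside the Sun's orbit, M = v^2 * r / G.
G = 6.674e-11               # gravitational constant, m^3 / (kg * s^2)
SOLAR_MASS = 1.989e30       # kg
KPC = 3.086e19              # meters per kiloparsec

v = 254e3                   # Sun's orbital speed in m/s (assumed revised value)
r = 8.4 * KPC               # Sun's distance from the galactic center (assumed)

enclosed_mass = v ** 2 * r / G
print(enclosed_mass / SOLAR_MASS)   # on the order of 1e11 solar masses
```

Since the mass scales with the square of the speed, bumping v up by tens of kilometers per second fattens the estimate considerably, which is exactly why the faster measurement implies such a heavier galaxy.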

The one snag is that this extra mass means we've got a stronger gravitational pull. This rather considerably boosts the odds of an eventual collision with our nearest neighbor, the Andromeda Galaxy. No worries though, the predicted collision is at least two billion years away, giving us plenty of time to duck and cover.


Monday, January 05, 2009

What Do Broomsticks and Rockets have in Common?

Under normal circumstances, broomsticks and rockets have nothing (that I can think of) in common. But at the December 2008 Space Elevator Conference in Luxembourg, these trusty sweeping utensils got the job of going to space done.

European Space Agency engineer Age-Raymond Riise used a broomstick and an electric sander to demonstrate how a hypothetical "space lift" or "space elevator" might pull its cargo mechanically. The project could see a 100,000 km-long cable anchored to the Earth as a means of cheaper transportation to space.

I wrote about the concept of a space lift a few months ago. The simplicity of the idea, combined with the numerous complex technological hurdles that must be cleared to bring a project like this to life, is what fascinates me. I suppose it's a prime example of the notion that "old" ideas aren't necessarily bad ones to be quickly tossed out in favor of the completely innovative; they can be modified and applied to new situations.

Riise proposed powering the cable mechanically, with sharp thrusts from its base. Holding a broomstick upright (to represent the cable held in tension), he tied three brushes, the "cargo," around the broomstick and turned on the electric sander at its bottom. The rhythmic vibrations from the sander allowed the bundle of brushes to grip the broomstick (even as it moved slightly downward) and slide up, straight to the top of the stick (you can watch a video here).
Riise says the same principle could power a cable-based space elevator. Anyone need a lift?


Year of the Stars

2009 heralds the Year of the Ox according to the ancient Chinese calendar. However, the International Astronomical Union and UNESCO have declared it the International Year of Astronomy as well. This year marks the 400th anniversary of the birth of modern astronomy, and what better excuse is there to party with the stars?

Over the next twelve months, museums, governments and astronomy enthusiasts all over the globe will be publicizing and promoting the stars and planets above. The planned events include everything from teaching the science of outer space, to exploring how astronomy influenced societies and cultures over the millennia. Some events will utilize the very latest in internet technology while others will go back to the first telescopes, all with the intent to get people excited about astronomy. Already 135 countries are officially involved, with more expected to sign on as the year progresses. There are many ways to get involved either on your own or along with any museums or universities in the area.

The planners picked 2009 to correspond with the 400th anniversary of Galileo's first use of a telescope for astronomy. Beginning in 1609, using the new instrument, he became the first person to observe the moons of Jupiter and the strange appendages of Saturn. He used the movements of Jupiter's moons as evidence that the Sun didn't orbit the Earth, but that the Earth in fact orbited the Sun. After he published his findings, the Catholic Church denounced his conclusions and ordered him to recant. Galileo was eventually forced to do so, and he spent the rest of his life under house arrest.

The Catholic Church has come a long way from that hard-line stance. It officially embraced the Sun-centered view of the universe in 1835, and in 1992 Pope John Paul II formally apologized for the Church's actions. At the same time, the Church has gotten more and more involved with the study of astronomy. Throughout the eighteenth and nineteenth centuries it sponsored the construction of several observatories, including one in the Vatican itself. In the mid nineteenth century, Father Angelo Secchi was one of the first scientists to declare the Sun a star, and the first to use spectroscopy to classify distant stars. The Vatican's observatory has been continually upgraded over the years and is still in use today. Of course, it too is on tap to be a participant in this year's big astronomy bash.
