Wednesday, September 30, 2009

Orange juice, and fundamental physics, a la Rube



It's one of the most vivid memories of my high school years. I'm dozing on my kitchen floor, surrounded by a debris field of screwdrivers, scrap wood, ball bearings, nails, old cardboard boxes, springs, wood glue, wire hangers, and lots of duct tape. My dad nudges me in the side with his slippered foot. "It's three o'clock in the morning," he says. My cheek is squashed against the cool tile floor. I crack an eye open and see my nemesis: the golf ball that, in just a few hours, I'm supposed to somehow raise six feet in the air, in ten steps, for my physics final project.

Whom did I have to blame for my predicament? Not my physics teacher exactly, but rather a famous American cartoonist who never forgot the strange contraptions he saw in his engineering classes at UC Berkeley. After graduating in 1904, the engineer soon traded in his slide rule for a cartoonist's pen, but his training inspired his most beloved cartoons and even earned him a spot in the dictionary:


Rube Goldberg's signature drawings of hilariously baroque, hopelessly inefficient inventions drew their inspiration from engineering to poke fun at bureaucracy and irrationality. According to an article on Goldberg from UC Berkeley,

Goldberg gave credit for "one of the principal props of my career as a cartoonist" to an engineering professor, Freddy Slate. The professor had devised the Barodik to measure the earth's weight with "a series of pipes and tubes and wires and chemical containers and springs and odd pieces of weird equipment which made it look like a dumping ground for outmoded dentists' furnishings," Goldberg wrote.

"Like the Barodik, my "Rube Goldberg" inventions are incongruous combinations of unrelated elements which cause a chain reaction that accomplishes something quite useless. It points up the human characteristic of doing things the hard way."

But Goldberg's cartoons have since moved in the other direction, inspiring the engineers and physicists he satirized. Not only do hundreds of physics teachers probably assign Rube Goldberg final projects to their students each year, but the national Rube Goldberg Machine Contest at Purdue University celebrates engineering for engineering's sake with ridiculous solutions to straightforward problems. First launched in 1949 by two engineering fraternities and later resurrected in the 1980s, the contest challenges high school- and college-age engineers-in-training to complete simple tasks like changing a lightbulb or assembling a hamburger in at least 20 steps. In 2007, Ferris State University won first place with this contraption that takes probably a hundred steps to squeeze a glass of orange juice (the action starts at 2:13):



If that tickled your fancy, check out the top ten best food-related Rube Goldberg machines and this top ten list.

Nothing compares to the fun of building a Rube Goldberg for yourself, but if you just don't feel like digging out the wood glue and the duct tape, an online game called Dynamic Systems will amuse the inventor in you. The task is simple, and doesn't change—get the ball bearing into the cup. But each level adds dominoes, springs, widgets and whirligigs to the challenge. I'm not sure if the game physics is perfectly accurate, but it's a fun way to test out the conservation of energy.

A screen shot from Dynamic Systems, a Rube Goldberg video game.

Rube Goldberg may have found engineering more amusing than amazing, but his satirical inventions have some truth in them. Rube Goldberg machines powerfully demonstrate how energy is conserved, and can be converted from one form to another, potential to kinetic and back again. I also love how the machines often shift the scale of motion, from a tiny ball bearing falling down a chute to a huge lever arm swinging.

They also make me think of particle detectors. Incredibly intricate and often several stories high, particle detectors like those at CERN depend on complicated, seemingly fragile sequences of physical phenomena to turn a fleeting spray of fundamental particles far too tiny to see into an electrical current and, finally, a bit of data. I won't attempt to explain how ATLAS works, but take the humble photomultiplier, which is often found in larger particle detectors, as a small scale example:

Inside a photomultiplier.

When a photon hits the scintillating material, an electron in one of the atoms gets excited and then drops back down to its ground state, re-emitting a photon in the visible spectrum. This in turn hits the photocathode, causing an electron to fly out into the vacuum chamber of the photomultiplier thanks to the photoelectric effect. Each of the dynodes is set at a slightly higher voltage than the last, so the electrons accelerate a little more on their way, gaining energy. When an electron slams into a dynode, it knocks loose several more electrons (secondary emission), so the number of electrons multiplies at every stage. By the time you get to the end of the chamber, you've got enough electrons for a discernible jump in current, turning the tiny "bump" of the photon into something you can actually detect and quantify. And yet the process has so many steps, you might be tempted to call it a Rube Goldberg. You would also have to say that, like the contraption for squeezing a glass of orange juice, it might be complicated, but it definitely works.
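If you'd like to see the multiplication in numbers, here's a minimal back-of-the-envelope sketch (my own illustrative figures, not the specs of any particular tube): one photoelectron, ten dynodes, and a handful of secondary electrons per collision already add up to a measurable pulse.

# Rough photomultiplier gain, assuming (hypothetically) 10 dynode stages and
# about 4 secondary electrons knocked loose per electron that hits a dynode.
n_dynodes = 10
secondary_yield = 4

electrons = 1                      # the single photoelectron from the photocathode
for stage in range(n_dynodes):
    electrons *= secondary_yield   # each stage multiplies the bunch

print(f"{electrons:.1e} electrons reach the anode")   # ~10^6: a detectable current pulse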


Tuesday, September 29, 2009

The Best Approach for Avoiding Zombies



WASHINGTON -- When Woody Harrelson escapes the living dead in "Zombieland", a new movie opening this Friday, should he head for the hills or the mall? A recently published research paper suggests that he's probably better off hiding in the mall to save his delicious brain.

The world is full of things that move in zombie-like fashion, such as particles flowing through a turbulent fluid or the unpredictable price changes of the stock market, so physicists seek insight into this behavior by creating so-called "random walk" models.

Physicist Davide Cassi at the Università di Parma in Italy looked at how long an entity hiding in a complex structure could survive if pursued by predatory random walkers. Cassi's paper, recently published in the journal Physical Review E, is the first to describe a general principle for a prey's likelihood of surviving over time while hiding in an irregular structure.

Though the paper itself does not specifically refer to fleeing from zombies, it describes "the survival probability of immobile targets annihilated by random walkers." The conclusions suggest that the people trapped in a mall in "Dawn of the Dead" may be better off than the folks stuck in a farmhouse in "Night of the Living Dead."

Cassi found that the likelihood of survival when threatened by predatory random walkers is closely related to how complex the prey's hideout is. The more twists and turns, the safer you'll be. In structures that are highly complex and irregular, the chances of the predator coming into contact with its target shrink to almost zero.

Cassi formulated a model to describe the behavior of randomly moving particles as they travel through maze-like networks. He said that his work could apply to a wide variety of situations, including the distribution of information through the internet and medicine spreading through the human body.
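You can get a feel for the result with a toy simulation. The sketch below is entirely my own—a made-up "farmhouse" grid and a comb-shaped "mall," not Cassi's actual model—and it just drops a single random walker onto each structure and counts how often a hidden target survives a fixed number of steps.

import random

def survival_probability(adj, target, steps=200, trials=2000):
    """Fraction of trials in which one random walker, dropped at a random
    node, never lands on `target` within `steps` moves."""
    nodes = list(adj)
    survived = 0
    for _ in range(trials):
        walker = random.choice(nodes)
        alive = walker != target
        for _ in range(steps):
            if not alive:
                break
            walker = random.choice(adj[walker])
            alive = walker != target
        survived += alive
    return survived / trials

def grid(n):
    """An open n-by-n grid of rooms: every room connects to its neighbors."""
    adj = {}
    for x in range(n):
        for y in range(n):
            adj[(x, y)] = [(x + dx, y + dy)
                           for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                           if 0 <= x + dx < n and 0 <= y + dy < n]
    return adj

def comb(n):
    """n corridors hanging off a single spine: lots of dead ends to get lost in."""
    adj = {}
    for x in range(n):
        spine = [(x, 1)]
        if x > 0:
            spine.append((x - 1, 0))
        if x < n - 1:
            spine.append((x + 1, 0))
        adj[(x, 0)] = spine
        for y in range(1, n):
            adj[(x, y)] = [(x, y - 1)] + ([(x, y + 1)] if y < n - 1 else [])
    return adj

n = 10
print("open grid (farmhouse-ish):", survival_probability(grid(n), target=(n // 2, n // 2)))
print("comb graph (mall-ish)    :", survival_probability(comb(n), target=(n // 2, n - 1)))

Run it and the target tucked at the end of a dead-end corridor survives noticeably more often than the one sitting in the open grid—the flavor, if not the rigor, of Cassi's result.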

"There are a lot of applications of these results in a lot of fields of sciences," Cassi said. "The most amazing field of applications of these results are in biology, biochemistry and other organisms."

So remember, when the zombies come, flee to the biggest shopping mall you can find—and keep in mind that, if zombie movies are any guide, the undead often win.

Michael Lucibella
Inside Science News Service



Monday, September 28, 2009

Are Nobels predictable?


School has started and Halloween is around the corner, which can only mean one thing—it's Nobel season. On Tuesday, October 6, the Nobel committee will be announcing this year's Nobel laureates in physics at a press conference at the Royal Swedish Academy of Sciences in Stockholm. In the hours beforehand, a handful of physicists around the world will be tossing and turning in their beds or frantically checking their cell phone's reception while giving that day's lecture, wondering if they're about to get the "magic call", just minutes before the public announcement, telling them they've won the Nobel prize.

The "magic call" has been notorious for catching previous winners unawares, asleep, out shopping, and down at the pub. But the Nobel prize committee is always eerily successful in reaching their target, in a way that's reminiscent of, well, magic. Richard Ernst, for instance, was on a plane from Moscow to New York when the captain came out of the cockpit, walked down the aisle, and informed him that he'd won the 2001 Nobel Prize in chemistry. A rarity was biologist and 2008 Nobel laureate in chemistry Martin Chalfie, who heard the phone ringing in the distance in the middle of the night and turned over for another snooze.

But why should Nobel laureates be so in the dark about their nomination? The Nobel committee invites a select group of scientists all over the world to submit nominees, chooses a short list, and gets the opinion of experts in the relevant topics before deliberating. Doesn't the committee give any hint of whom they're considering? The answer is a resounding, official "no." While I suspect the list leaks out at least to some extent, not only is everyone involved forbidden from breathing a word about the nominees, but that restriction holds for half a century!

According to the Statutes of the Nobel Foundation, information about the nominations is not to be disclosed, publicly or privately, for a period of fifty years. The restriction not only concerns the nominees and nominators, but also investigations and opinions in the awarding of a prize.

While those in the know keep mum, everyone else clamors to foretell the winners. Thomson Reuters recently released their list of Nobel contenders. They base their Nobel predictions on citations, choosing people who have both vast numbers of citations overall and a record of high-impact papers—papers that have individually piled up reams of citations:

Citation Laureates have been cited so often in the last two or more decades that these scientists typically rank in the top 0.1% in their research areas. Not only do Citation Laureates have stratospheric citation totals, they also typically write multiple high-impact reports, and do so over many years.

How good are the predictions? Since they started announcing their tips in 1989, Thomson Reuters have only completely missed the mark on two occasions. Other than that, they seem to correctly predict at least one of the prizes—but keep in mind that they choose several possibilities for each category. Whether that's terribly successful statistically, I leave to the Enrico Fermis of the world. But research suggests that citations are losing their influence on determining the Nobel prize as author lists grow and fields become more diverse. A paper on the arXiv early this year looked at whether Google's PageRank algorithm, which ranks a page according to the number and importance of the pages that link to it, could correctly spin a pile of citations into a Nobel.
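For the curious, here's what the idea looks like in miniature: a tiny, entirely hypothetical citation network run through textbook PageRank power iteration. This is my own toy sketch, not the arXiv paper's code.

# Toy PageRank on a made-up citation graph: an edge from A to B means paper A cites paper B.
import numpy as np

papers = ["A", "B", "C", "D"]
cites = {"A": ["B", "C"], "B": ["C"], "C": [], "D": ["C", "B"]}   # hypothetical citations

n = len(papers)
idx = {p: i for i, p in enumerate(papers)}
M = np.zeros((n, n))
for src, targets in cites.items():
    outs = targets if targets else papers          # dangling paper: spread its vote everywhere
    for t in outs:
        M[idx[t], idx[src]] = 1 / len(outs)

d = 0.85                                           # the usual damping factor
rank = np.full(n, 1 / n)
for _ in range(100):                               # power iteration
    rank = (1 - d) / n + d * M @ rank

for p, r in sorted(zip(papers, rank), key=lambda x: -x[1]):
    print(p, round(r, 3))

In this toy graph the much-cited paper "C" floats to the top—the question the arXiv authors asked is whether the real citation network ranks the eventual laureates that way too.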
Peter Higgs: not-so-hotly-tipped for the Nobel.
Whether you've got an algorithm or just a hunch, it's fun trying to guess who the committee will choose. ScienceBlogger Chad Orzel has started a betting pool for this year's prizes. If you're low on ideas, check out the hotly-tipped physics contenders according to Thomson Reuters, which include decorated quantum physicist Yakir Aharonov and metamaterials pioneer Sir John Pendry. Surprisingly or not, the list doesn't include Peter Higgs, who made the Wall Street Journal's short list—multi-billion-dollar accelerators are not as weighty as citations, apparently.


Friday, September 25, 2009

Weekend television: The Secret Life of Scientists

Nanoscientist Rich Robinson on The Secret Life of Scientists

A child-prodigy medical researcher who loves to run. An engineer who practices back-flips in his spare time. A nanoscientist who takes soul-searching photographs.

These are a few of the scientists profiled so far on the new NOVA Web series, "The Secret Life of Scientists." NOVA explores the different facets of each scientist's life, including their passions, their research, their experiences, and their opinions. The videos are short and minimalistic, and there are no hosts or voice-overs, keeping the viewer's attention on what the individual has to say.

Every two weeks, the Secret Life team puts a new scientist under the microscope. When a scientist is profiled, viewers have a chance to submit questions and, after a couple of weeks, get some honest answers.

The team's just getting started, but they already have a couple of great videos in the can. The idea behind it reminds me vaguely of a certain US Weekly column. I can see it now - Scientists! They're just like us! Brian Greene shops at Target! Stephen Hawking enjoys ice cream and the occasional bad science fiction movie! It also has echoes of the Washington Post's awesome Web video series, onBeing. At any rate, it's refreshing to see science media that focuses on people. It's so easy to forget that a new telescope, medicine, or technology doesn't just get churned out by some alien science machine, but comes into being through days and months and years of people living out their lives. Which are just like the lives of the rest of us...with a few more test tubes, or balled-up pieces of paper, or blinking lights, perhaps.

Thursday, September 24, 2009

Adaptive optics: not high-tech, just humanitarian

Physicist Josh Silver's specs may look retro, but they can change lives.

Back in July, I wrote about the 2009 TED Global Conference, held at Oxford. The Global Conference is a sort of carnival of ideas, with talks and presentations by great thinkers of every stripe, from storytellers to designers, anthropologists to physicists, and videos of this year's talks have started to trickle onto TED's online archive.

Physicists, of course, were well-represented among the ranks of TED speakers, but when Oxford prof Joshua Silver took the stage, the audience wasn't in for the usual science lecture. Silver is an atomic physicist, but lately he's been obsessed with optics; not because he wants to design an invisibility cloak or improve high-speed communication, but because he wants to address a very important problem for the world: bad vision.

As Silver points out in his talk, glasses, contact lenses, and even laser eye surgery are facts of life for about half the people in the developed world. You can imagine that the same should be true for people in the developing world. But while there is equal need for vision correction among people, say, in Sub-Saharan Africa, there is a serious dearth of optometrists, about one to every eight million people. In the UK, by contrast, the ratio of optometrists to people is about 1 to 10,000.

Believe it or not, this is a problem for physicists to solve. Or, at least, that's the way Joshua Silver saw it. Glasses and lenses are fairly cheap and plentiful; optometrists are not. The answer: a pair of glasses that a patient could adjust on her own to fit her needs.

At a basic level, correcting bad vision boils down to high school optics. In the eye, muscles squeeze or release a small lens at the front of the eye that focuses an image on the retina, the brain's outboard receiver for visual information. Squeezing the lens makes it fatter, which focuses nearby objects; relaxing the lens flattens it, focusing faraway objects.


The eye's lens, from the Centre for Vision in the Developing World

But if the geometry of the eye is slightly off—if the lens is deformed, or the retina too far away from the lens—the lens's focal point will land just in front of the retina (nearsightedness) or behind the retina (farsightedness). Glasses are additional lenses, shaped to bend the light in a way that compensates for a refracting system that doesn't work. Usually these lenses are made from plastic and are shaped according to the patient's needs. Once made, they can't be adjusted.
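The high school optics really is that simple. Here's a minimal sketch with illustrative numbers of my own (nothing to do with Silver's actual designs): for a nearsighted eye that can't focus beyond half a meter, a weak diverging lens that makes distant objects appear at that far point does the job, because thin-lens powers simply add.

# Hypothetical example: a nearsighted eye with a far point of 0.5 m.
# A corrective lens must take light from far away and make it appear
# to come from the far point instead; its power is -1/(far point).
far_point_m = 0.5                       # assumed: this eye can't focus beyond half a meter
corrective_power = -1.0 / far_point_m   # in diopters (1/m); negative means a diverging lens
print(f"prescription: {corrective_power:+.2f} D")   # -2.00 D

# Silver's fluid-filled lens gets to the same place a different way: pumping
# liquid in or out changes the membranes' curvature, and with it the power.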

So Silver came up with an ingenious alternative to prescription glasses. He invented a pair of glasses that have, in place of each lens, a fluid-filled chamber bounded by two flexible plastic membranes. By pumping liquid into or out of the chamber, the wearer can put the glasses on and change the shape of each lens until she can see clearly.



Fluid-filled lenses, from the Centre for Vision in the Developing World

The glasses cost only about $19—though Silver wants to drive the price even further down, since many of his prospective patients live on a dollar a day— and the whole adjustment procedure takes less than ten seconds, as Silver demonstrates in the talk below, shot at the TED Global Conference. Not only that, Silver set up the Centre for Vision in the Developing World to study and remedy the problem of access to vision correction and cook up other new ways of making adjustable lenses. Now that's physics in action.




Wednesday, September 23, 2009

Nerd on your gift list? Give a Gömböc!

The Gömböc and its creator, Gabor Domokos, on the British show QI

We buy pet rocks, snuggies, and shrinky-dinks; mathematicians have Klein bottles, Möbius strips, and the ultimate mathematical novelty item, the Gömböc.
The Gömböc, in sleek plexiglass
Gömb means "sphere" in Hungarian, but the Gömböc is an extraordinary shape all its own (and is apparently pronounced "goemboets"). As QI host Stephen Fry demonstrates in the video above, no matter how you set it down, the Gömböc will wobble and rock itself right side up. And, unlike the common Weeble, the amazing Gömböc isn't weighted. It rights itself thanks to its unusual geometry.

The story of the Gömböc begins the way many mathematical tales do—with an older cannier mathematician posing a really hard problem. Mathematicians knew that it was impossible for any two-dimensional object to have just one stable equilibrium point (like the curved bottom of a round bowl) and one unstable equilibrium point (the tip of a pencil, for instance). But Vladimir Arnold, a Russian mathematician, wondered whether this was also impossible for three-dimensional objects. The question sent Hungarian mathematicians Gábor Domokos and Péter Varkonyi on a swashbuckling hunt for such a monomonostatic (one stable, one unstable equilibrium point), homogeneous, convex object. That is, a three-dimensional object, uniform in substance, without any inward bulges or weights, that always righted itself no matter how you set it down.
Gabor Domokos: crazy for Gömböc
Domokos and Varkonyi quickly realized that it was easy to add an equilibrium point to any 3-d shape. So working backwards, there should be a sort of fundamental shape from which all others could be grown. The realization gave them hope, but finding the real thing cost the pair blood, sweat, and tears. And pebbles. (Domokos once hijacked his and his wife's vacation to a Greek island and turned it into a wild monomonostatic object chase, searching for the shape among 2,000 of the beach's pebbles.) But the enterprising mathematicians finally found a way to construct the elusive shape by mathematically tinkering with a sphere.

As if to add insult to injury, the Gömböc hunters realized that someone else had been making the shapes for millions of years, for free. Nature, in all her cunning, gave the Indian star tortoise a Gömböc-like shell so that it could right itself. While it works on rather slower timescales than does mathematical derring-do, evolution too can produce mathematical gems.
The Indian star tortoise
Look, even Cambridge has got one!
Given the trials and tribulations Domokos and Varkonyi overcame to finally construct the monomonostatic object, it makes sense that they'd want some payback. They've set up a company that makes Gömböcs from aluminum alloy, plexiglass, and brass. While the cheapest, the aluminum, will set you back about $220, it's not exactly overpriced given that the object's dimensions can deviate by just one hundredth of a millimeter. The items seem to be all the rage—Cambridge just received one! So please, do a favor for poor, pebble-counting Mrs. Domokos, and the next time you've got a deep nerd on your gift list, give a Gömböc.


Tuesday, September 22, 2009

Is a Nobel laureate smarter than a fifth grader?

George Smoot on last Friday's "Are You Smarter than a 5th Grader?"



Is a Nobel laureate smarter than a fifth grader?

George Smoot, a UC Berkeley professor who won the 2006 Nobel prize in physics, stepped up to the challenge last Friday as a contestant on "Are You Smarter than a 5th Grader," the entertainingly humiliating game show that tests adults on facts a ten-year-old is expected to know.

The show entertains by painfully exposing just how little of their elementary school education adults retain, so having a Nobel laureate on stage called for even more ridiculous FOX theatrics than usual. In the opening sequence, the announcer booms, "Will he blow it, and be the laughing-stock of Nobel prize-winners everywhere?" I wonder if any of Smoot's Berkeley colleagues started to sweat at that point. Would the show expose the shortcomings of science? Would Smoot remember how to spell the word "Mississippi?"

[Warning: spoilers below. If you'd rather find out for yourself how Smoot fared, watch the episode here on Hulu.]

Unfortunately for host Jeff Foxworthy's humor routine, which is usually based on insulting his guests, Smoot sailed through questions on the angles of an equilateral triangle and whether the conga was a percussion instrument. The show's writers tried to inject some tension with a fifth-grade astronomy query: "What country was the first to put a human being in space?" Smoot quickly remembered the Soviet cosmonaut Yuri Gagarin, but Foxworthy stumped him on the year—1961. That proved to be Foxworthy's only opportunity to poke fun at the Nobel laureate.

Part of the gameplay is cribbing answers from the show's stable of adorable, TV-ready ten-year-olds. So the fifth graders began to look decidedly pouty as Smoot answered question after question unaided. Finally, apparently realizing that he was taking all the fun out of the show, Smoot decided to "cheat" by blindly "copying" the on-stage ten-year-old's answer to the $500,000 question. This is by far the riskiest move a contestant can make, and Smoot clearly knew the answer to the question. Luckily, he wasn't punished for his good sportsmanship; ten-year-old Francesca answered the question correctly, and Smoot went on to be the show's second-ever million-dollar winner by correctly answering the question "What US state is home to Acadia National Park?"—it's Maine.

Smoot flirted with the small screen earlier this year with a cameo on nerdtastic sitcom "The Big Bang Theory." Not only was it nice to see someone actually win "5th Grader" for a change—other episodes end on a rather more embarrassing note—but it was wonderful for an important modern physicist to enter the American consciousness and crack a few redneck jokes about himself while he was at it. The audience oohed and aahed when he pulled out his Nobel medal, and Smoot even got to say a word about his research.

FOXWORTHY: "So, how does a guy go about proving the Big Bang theory?"

SMOOT: "We figured out over the years a way to make a picture of the embryo universe."

FOXWORTHY: "Oh, wow."

SMOOT: "So it's the very beginning of the universe, but it's got the blueprint for what's gonna happen later."

Foxworthy then cut in, joking, "You found the infant uni...I can't even find my keys half the time!" Thanks, Jeff. Still, one of the most exciting advances in cosmology made it onto primetime television, even for just a few seconds. And well done explaining it, Dr. Smoot.

What Smoot described as finding "the embryo universe" provided crucial evidence for the big bang, transformed cosmology from a theoretical to a decidedly experimental science, and earned him half the Nobel. In 1989, Smoot and colleague John Mather (he's got the other half) launched the Cosmic Background Explorer satellite, which goes by the cute name COBE, to take detailed measurements of the cosmic microwave background.

"We have a tool that actually helps us out in this study, and that's the fact that the universe is so incredibly big that it's a time machine, in a certain sense," Smoot explained in a talk he gave at a design conference last November. Because the universe is so vast and light has a 300 million meter per second speed limit, it can take equally vast amounts of time for light from other galaxies. Light is a snapshot of a place as it looked when the light left it, whether that be eight minutes ago or millions of years ago.

The cosmic microwave background, or CMB, is the oldest light in the universe, an approximately 2.7 Kelvin bath of microwaves coming to us from just 380,000 years after the Big Bang. It is a snapshot of the universe in its infancy, "when the universe was hot and dense and very different," as Smoot says, and the farthest back our time machine can take us. The picture below, from one of Smoot's slides, shows a series of nested spheres with the Milky Way in the center. The outer boundary of what we can see, that strange Jackson Pollock of rainbow speckles, is the CMB.

"You see the whole big picture?" Smoot asked his audience. "The beginning of time is funny—it's on the outside, right?"

The speckles on the CMB map are really the tiniest of wrinkles, differences of one part in 100,000. The CMB was first detected in 1965 when a pair of scientists couldn't get rid of some noise in a radio receiver. It matched the Big Bang theory: when the hot, dense universe expanded, it cooled, filling the universe with remnant heat—the CMB. But for years after that, scientists thought the CMB was completely uniform. It turned out that the CMB was more like an abstract painting that's just one color. You have to look very closely to see the bumps and texture of the paint brush, and that's just what Smoot and Mather did with COBE.

From these tiny wrinkles, Smoot explained, "we're going to go ... to these irregular galaxies and first stars to these more advanced galaxies, and eventually the solar system, and so forth." So "embryo" is precisely the right word—the hot, dense universe carries within itself the irregularities that grew into the grand structures of stars and galaxies as we know them today.

COBE also measured the intensity of the CMB across different wavelengths. Mapping intensity versus wavelength, the researchers created a curve that matched exactly the blackbody spectrum predicted by the Big Bang theory—another experimental pillar supporting it.

Since COBE, experimental cosmology has grown into an exciting field of its own, with projects ranging from balloons at the edge of space to the South Pole Telescope to the Planck space telescope, the successor to COBE and later CMB observer WMAP.

You can see the improved resolution from COBE to WMAP in the following picture; the images of the world are for comparison to show the kinds of details that COBE smooths out. Planck, orbiting around Lagrange point 2, has just seen first light and will map the early universe in greater detail than either COBE or WMAP.

This would have been too long a story for primetime television, but I'm still happy that Smoot managed to shoehorn a sense of science and wonder into the episode. However, I'm still left wondering why George Smoot, who donated his Nobel prize money to establish fellowships, needs a million bucks.


Monday, September 21, 2009

Turning down the volume on TV ads: a tale of waves, ears, and brains


Credit: NIST.gov

WASHINGTON — Every year, television networks receive thousands of complaints from viewers bothered by commercials that seem to be getting louder and louder. They're tired of fumbling for the remote control and having the quiet moments in their romantic films spoiled by ads that sound louder than the loudest blockbuster movie explosions.

All of this may soon change. A technical organization that sets standards for digital TV broadcasters moved forward on Sept. 16 with new recommendations that may finally dial down the volume of these obnoxious ads.

"It's a problem that's been around for awhile not only in analog TV but also in FM radio," said Mark Richer, president of Advanced Television Systems Committee, the same organization that developed the standards for digital video formats now used by all broadcasters in North America.

The new audio recommendations, soon to be sent out to broadcasters for approval, provide a way to measure the loudness of television content based on current scientific understandings of how human hearing works. Shows and commercials would be tagged with information about their loudness that TVs and audio receivers could use to counteract the audio tricks that make commercials jump out at us.

"It achieves results similar to a viewer using a remote control to set a comfortable volume between disparate TV programs, commercials, and channel changing transitions," reads the working draft of the ATSC document.


Crashing Waves

Analyzing the sounds that accompany a television program or commercial is like spending a day at the beach watching the waves roll in. If asked how the waves were that day, a beachgoer could describe the biggest wave of the day or average all of the waves—big and small.

The Federal Communications Commission—the government agency that regulates the radio, television and cable industries—limits only the size of the biggest sound wave, the "peak level" of the sound. Under FCC rules, the peak of a commercial can be no higher than the programming it accompanies.

The problem with this approach is that the peak level of the sound does not accurately reflect how loud something sounds to the listener. Our brains judge loudness by averaging all of the waves that roll by—big and small.

"Human beings sum up the energy of the sound over a period of time while we listen," said Jack Randorff, an acoustics consultant at Randorff and Associates in Ransom Canyon, Texas.

Randorff said that audio engineers can find ways to get around the FCC rules by making commercials seem louder without actually increasing the peak levels of the loudest parts.

One way they do this is to use a trick called "dynamic range compression," which amplifies the softest sounds. This decreases the difference in size between the biggest and smallest waves. Compressed sound bombards the ear with more energy over a given period of time, producing audio that sounds flatter but louder.
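To see the trick in numbers, here's a crude compressor sketch of my own, with a made-up threshold and ratio and nowhere near broadcast-grade processing: the peak level stays put, but the average energy—the thing our ears actually track—goes up.

import numpy as np

t = np.linspace(0, 1, 48000)
# a "program" with a quiet first half and a loud second half
signal = np.sin(2 * np.pi * 440 * t) * np.where(t < 0.5, 0.1, 1.0)

def compress(x, threshold=0.2, ratio=4.0):
    """Squash everything above the threshold, then boost the whole thing
    so the peak matches the original: same peak, more average energy."""
    mag = np.abs(x)
    squashed = np.where(mag > threshold,
                        np.sign(x) * (threshold + (mag - threshold) / ratio),
                        x)
    return squashed * (mag.max() / np.abs(squashed).max())

def rms(x):
    return np.sqrt(np.mean(x ** 2))

loud = compress(signal)
print("peak:", round(np.abs(signal).max(), 2), "->", round(np.abs(loud).max(), 2))  # unchanged
print("rms :", round(rms(signal), 3), "->", round(rms(loud), 3))                    # higher: louder on average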

"If TV shows minimized the dynamic range the way the advertisers did, it would be really unpleasant and unnatural to listen to," said Greg Lukens, former governor of the National Academy of Recording Arts and Sciences, which gives out the Grammy awards. "But the commercials only last a minute, and they want to catch our attention."

The problem is made even worse by the recent switch to digital television, which can produce a greater range of sound than analog. This exacerbates the difference between television programs, which use the full range of sound, and commercials, which squeeze the sound and push it upwards.

Audio engineers also recognize that human beings have evolved to pay more attention to certain pitches that have been important for our survival.

"We are most sensitive in the mid-range, in the range of babies crying," said David Weinberg, chair of the Washington D.C. chapter of the Audio Engineering Society.

Experiments have shown that low and high pitches tend to sound softer, and advertisers exploit this by adjusting the mix to favor certain frequencies without changing the overall volume.

Another effective technique, said Weinberg, is to add distortion by cutting off small pieces of the sound. Ben Burtt used this technique when mixing the soundtrack for Apollo 13 to give the sound of the Saturn V lift-off an extra kick.

Ear of the Beholder

In 2001, the International Telecommunication Union recognized that the broadcast industry needed a better way to measure loudness. A series of studies asked volunteers to listen to a variety of 15- to 30-second television clips—cut from soap operas, news, music, and sports broadcasts—and to rate how loud each clip sounded. A contest was held to develop a device that could measure the loudness of the clips in a way that would match the human perceptions.

A group at Communication Research Centre Canada won, with a computer algorithm that cuts out the lowest tones—the ones that we tend to ignore—and adds together the higher frequencies over the entire clip's sound.

"The number you get is a good measurement of long-term loudness," said Louis Thibault, an audio engineer at the CRC. "Our loudness meter will tell you difference between compressed [commercial] and an uncompressed signal."

The new ATSC recommendations, which use the Canadian loudness meter, are entirely voluntary. But ATSC President Richer is confident that broadcasters will adopt them. "Broadcasters want to do things in a uniform way," he said. "Because our membership is broad— all of the major networks, many of the other broadcast groups, and also the manufacturers—we get a lot of buy-in to what we do."

Meanwhile, Congresswoman Anna Eshoo of California's 14th Congressional District has been pushing for new federal regulations. Her Commercial Advertisement Loudness Mitigation Act, H.R. 1084, would require the FCC to create legally-binding recommendations. An identical bill last year never came up for a vote, but her office believes that it is important to have an enforcement mechanism, especially because the cable and satellite providers are not members of the ATSC.

While government and industry continues to work out the loudness issue, television watchers who are bothered by booming commercials can shell out extra cash to buy special audio receivers and televisions equipped with a technology called Dolby Volume. These devices, created by the Dolby Laboratories in California, monitor and adjust loudness in real-time, using Dolby's own model of human hearing.

Of course, viewers can always stick to the traditional, tried-and-true method—pushing the mute button.

—Devin Powell
Inside Science News Service


Friday, September 18, 2009

Operating Cells Via Joystick



WASHINGTON — Biomedical research could someday look a lot like playing video games thanks to a new device that allows users to manipulate cells with the swerve of a joystick.

A team of physicists and engineers at Ohio State University in Columbus, Ohio, developed the device from a tiny piece of silicon inlaid with rows of zigzagging magnetic wires. At each corner of the zigzag, the wire behaves like two magnets pointed north to north or south to south. The fields of the two magnets create a point of strong attraction just above them. A nearby magnetic object, such as a magnetically-tagged cell, is attracted to the corner and gets stuck there.

To get the particles moving, the researchers then place two magnetic fields around the chip: one in the plane of the chip and the other perpendicular to it. By flipping the direction of these fields, the researchers can guide tagged cells along the zigzagging wire and even make them jump from one wire to the next. The researchers computerized the magnetic field switching so that a user can steer the cells simply by handling a joystick.
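The control idea is easy to caricature in code. The sketch below is purely hypothetical—I'm inventing the switching pattern and the function name, since the real sequences depend on the wire geometry—but it shows the gist: one joystick direction becomes a timed series of field flips.

# Hypothetical illustration only: translate a joystick direction into a
# sequence of (in-plane, perpendicular) field polarities that would nudge
# a trapped cell from one corner of the zigzag to the next.
def field_sequence(direction):
    cycle = [(+1, +1), (+1, -1), (-1, -1), (-1, +1)]   # one made-up switching cycle
    return cycle if direction == "right" else list(reversed(cycle))

for in_plane, perpendicular in field_sequence("right"):
    print(f"in-plane field: {in_plane:+d}   perpendicular field: {perpendicular:+d}")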



Researchers manipulate a magnetically-tagged t-cell along a magnetic wire via a joystick. Credit: Sooryakumar Group.


The team at OSU put the device through its paces with magnetically-tagged T-cells, the body's guardians against infection. They snapped the cells to attention at one end of the chip, marched them down to the other end, and made them hop from one wire to another, reaching speeds of about 20 microns—roughly one-fifth the width of a human hair—per second.

Jeffrey Chalmers, the chemical engineer who tagged the T-cells for the experiment, said that the device would be ideal for examining tumor cells. To study biopsied tumors, researchers often treat them with enzymes, which break them down into their constituent cells. Researchers then separate cancerous cells they want to study from healthy cells like fat and blood.

"Part of the problem with cancer ... is that it's our own cells going haywire, so it's a heck of a lot harder to figure out what's different," Chalmers said. With this method, he said, researchers could magnetically tag the well-understood healthy cells and then remove them from a sample, leaving only the cancerous cells. Chalmers said this would be a boon to both a researcher studying a specific type of cancer or a clinician diagnosing a patient.

"The technology to do high-level analysis is pretty amazing, but it's only as good as the purity of the sample you start with," Chalmers said. "The more you can separate them out, [the more] you know what you're looking at."

The small magnetic fields are gentle on specimens; the device works on a flat surface, an improvement over other methods; and it's also cost-effective. The project's principal investigator, physics professor Ratnasingham Sooryakumar, said that the whole setup only costs about $200. He said it could easily be scaled up to a square-centimeter silicon platform, with about 10,000 tiny traps, or scaled down to manipulate organelles within a single cell.

T-cells race along the magnetic wires, steered by joystick. Credit: Sooryakumar Group.


Sooryakumar said that scaling up would lead to a "lab on a chip," where researchers could cheaply and easily look at distinctive behavior within large populations of cells, making it easier to draw firm conclusions.

"You can look at each cell rather than averaging it out, and say, 'the cell on vertex number 348 did this,'" Sooryakumar said. "When you actually have 10,000 of them to analyze the data, you can understand stat distributions that we normally would not have gotten in ensemble measurements, and that's a huge thing."

Sooryakumar envisions embedding the device into containers that hold tiny amounts of fluid, like blood. By tagging a certain kind of particle, researchers could begin separating, say, viruses from healthy blood cells. Chalmers added it could be used to study cancer in blood samples.

"One in a million or one in a billion cells in your blood could be cancer," Chalmers said, but the technique could achieve higher concentrations of cancer cells to study by tagging and removing healthy blood cells.

Prem Thapa, a researcher at Kansas State University in Manhattan, Kan., who was not involved in the study, called the approach "interesting and innovative," adding that the technique had advantages over existing optical manipulation methods.

"The significance of these studies is high," Thapa said. But he pointed out that electrically excitable neurons or muscle cells may not take so kindly to magnetic manipulation.

Thapa's K-State colleague, physicist Brett Flanders, was impressed by the results but called the demonstration "simple."

"As with ... all potential biophysical applications, there's a lot more work to do," Flanders said. "I'm looking forward to seeing what comes next."

—Lauren Schenkman
Inside Science News Service



Thursday, September 17, 2009

Physics for your next shower



There's a classic elementary school experiment that gives you the inkling that fluids have more to them than meets the eye. You're handed a penny, a glass of water, and an eyedropper. Your task: fit as many drops of water as you can onto that penny.

As the droplet grows, the experiment acquires the dramatic tension of a game of Jenga. Will this drop burst the droplet? Will the next? The task delights and fascinates schoolchildren. Some of those schoolchildren grow up into physicists, and a good fraction of physicists, for whom the delights of the penny experiment perhaps never fade, devote their entire careers to probing the weird, often counterintuitive behavior of fluids.

The following video, filmed by researchers at the University of Twente in the Netherlands, shows the unexpected consequences of squirting shampoo out of a bottle:





The mysterious Kaye effect was first seen in the 1960s, and has fascinated scientists ever since. As the video describes, the falling shampoo piles up until a dimple is formed. Then the jet continues to pull air with it down into the dimple, eventually gliding on this air pocket and ramping out again into the air. You can see the effect with oil too.



The University of Twente's "Gallery of Fluid Motion" features videos that have won awards at the past meetings of the American Physical Society's Division of Fluid Dynamics. The videos show us strange phenomena, and the unexpected reasons behind them. For instance, one video on snapping shrimp reveals that the shrimp's claw doesn't directly stun its prey. Instead, a very different mechanism is at work.



Wednesday, September 16, 2009

First Detailed Photos of Atoms



WASHINGTON — For the first time, physicists have photographed the structure of an atom down to its electrons.

The pictures, soon to be published in the journal Physical Review B, show the detailed images of a single carbon atom's electron cloud, taken by Ukrainian researchers at the Kharkov Institute for Physics and Technology in Kharkov, Ukraine.

This is the first time scientists have been able to see an atom's internal structure directly. Since the early 1980s, researchers have been able to map out a material's atomic structure in a mathematical sense, using indirect imaging techniques such as scanning tunneling microscopy.

Quantum mechanics states that an electron doesn't exist as a single point, but spreads around the nucleus in a cloud known as an orbital. The soft blue spheres and split clouds seen in the images show two arrangements of the electrons in their orbitals in a carbon atom. The structures verify illustrations seen in thousands of chemistry books because they match established quantum mechanical predictions.

David Goldhaber-Gordon, a physics professor at Stanford University in California, called the research remarkable.

"One of the advantages [of this technique] is that it's visceral," he said. "As humans we're used to looking at images in real space, like photographs, and we can internalize things in real space more easily and quickly, especially people who are less deep in the physics."


To create these images, the researchers used a field-emission electron microscope, or FEEM. They placed a rigid chain of carbon atoms, just tens of atoms long, in a vacuum chamber and applied 425 volts to the sample. The atom at the tip of the chain emitted electrons onto a surrounding phosphor screen, rendering an image of the electron cloud around the nucleus.

Field-emission electron microscopes have been a staple for scientists probing the very small since the 1930s. Up to this point, the microscopes were only able to reveal the arrangement of atoms in the sample.

The sharper a sample's pointed tip inside the vacuum chamber, the greater the resolution of the final image on the screen, said Igor Mikhailovskij, one of the paper's authors. In the last year, physicists learned to manipulate carbon atoms into chains. With the tip of the sample now just a single atom wide, the microscope was able to resolve the electron's orbitals. The Kharkov researchers are the first to produce real images of the electrons of a single atom, making the predictions of quantum mechanics visible.

While tools like the scanning tunneling microscope already map the structure of electrons in a sample of many atoms, "it's always good to have complementary approaches," Goldhaber-Gordon said. "Sometimes something puzzling in one view becomes crystal clear in the other view. Each one gets you a step closer to a full understanding."

Goldhaber-Gordon also pointed out that the technique may not be widely applicable because the high resolution was due to the sample's specific structure.

"At the moment it's more important for displaying quantum mechanics very directly than for learning new things about materials," he said. "But that could change if [the Ukrainian team] develop new capabilities."

—Mike Lucibella & Lauren Schenkman
Inside Science News Service




Wow, so our chemistry teachers weren't lying to us! I remember peering at the little diagrams of fat blobs impaled on the x, y, and z axes, and thinking, "Yeah, right." Sure, you can solve Schrodinger's equation for a hydrogen atom and "see" for yourself, but quantum mechanics is maddeningly hard to wrap your mind around. And for those of us who can't see the answer in a math function, it's so wonderful to be able to look at a photo.

In the paper, the researchers identify the sphere and the sort-of-dumbbell shape as s and p orbitals. Carbon has six electrons total and four valence electrons, which occupy the 2s orbital and two of the 2p orbitals. That's exactly what we see in these photos. For comparison, here are the chemistry textbook versions of 2s (left) and 2p (right), courtesy of Wikipedia. The 2s orbital has been chopped in half so you can see the location of the nucleus:


The spherical 2s orbital chopped in half (left), and the 2p orbital (right).


Interestingly, this new observation was possible thanks to the marriage of nanotechnology and a pretty hoary piece of technology. The nanotech is pretty cool; to fabricate an atomic carbon chain, researchers peel one strip of atoms off of graphene, a single-atom-thick sheet of graphite, the 100-percent-carbon stuff in the lead of your number 2 pencil, the way you'd pull a thread out of a piece of fabric. (Earlier methods involved using an electron beam to burn a hole through a sheet of graphene, leaving just a thin line of carbon atoms joining two regions.)

Meanwhile, Erwin Muller invented the field-emission electron microscope in 1936. It works on a simple principle: get a strong, localized electric field between a sharpened sample and a screen coated with a fluorescent material, and the electric field will rip electrons off the sample (emission) and send them flying into your screen. Its successor, the field-ion microscope, pulled off whole ions, allowing Muller to see (albeit blurrily) the atoms in the tip of a tungsten needle.



This retro tech is the key, in my opinion, to why this research is so exciting for the general public. Physicists and materials scientists have developed dozens of imaging techniques over the years. They can probe oxidation states and atomic structure with x-rays, and can make gorgeous maps of materials using scanning tunneling microscopy. The STM especially is a really incredible invention, and allows us to explore the nanoscale world and witness phenomena that are the direct consequence of quantum mechanics.

A scanning tunneling microscope image


But, as Goldhaber-Gordon says, there's something visceral about these FEEM images. Let's go back to the STM for a second. The STM scans a needle-like tip across a sample surface and measures how many electrons tunnel (a quantum-mechanical phenomenon) between the surface and the tip. Because the tunneling current decreases exponentially with the distance between material and tip, the current can be used to continuously adjust the tip's height, keeping it the same distance from the sample as it scans. The record of those adjustments is what forms the resulting, often breathtaking, image.
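If that feedback loop sounds abstract, here's a bare-bones cartoon of it with toy numbers of my own (not a model of any real instrument): hold the current fixed and let the tip height do the talking.

import math

KAPPA = 1.0e10          # tunneling decay constant, roughly 1 per angstrom (here per meter)
SETPOINT = 1.0e-9       # hold the current at 1 nA

def tunneling_current(gap):
    return 1.0e-6 * math.exp(-2 * KAPPA * gap)        # current falls off as exp(-2*kappa*d)

surface = [0.0, 0.2e-10, 0.5e-10, 0.3e-10, 0.0]       # bumps on the sample, in meters
tip = 5.0e-10                                         # tip height above the z = 0 plane
image = []
for bump in surface:
    for _ in range(500):                              # crude bang-bang feedback loop
        error = tunneling_current(tip - bump) - SETPOINT
        tip += 1.0e-12 if error > 0 else -1.0e-12     # too much current: back the tip off
    image.append(tip)                                 # the recorded height is the "pixel"

print([round(h * 1e10, 2) for h in image])            # heights in angstroms track the bumps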

Meanwhile, the FEEM image works on the same basic principle as a film camera, except one is a case of photons impinging on a piece of film and the other has electrons splatting on a fluorescent screen. What comes out, a physical mark of the quantum world, is somehow more believable to a member of the general public (myself included) than any other explanation. There's nothing like a Hubble photograph to drive home how rich and vast the universe is; similarly, these images make the baffling laws governing the incomprehensibly small simply more believable.


Tuesday, September 15, 2009

How big is it, really?


People often say that standing outside on a clear, starry night gives you a sense of scale, of how tiny you are compared to the vastness of the universe. But it's tough to really comprehend just how vanishingly minuscule we are. We're so used to living in inches and feet and miles—or centimeters, meters, and kilometers—that it's nigh impossible to wrap our minds around the enormous distances between us and other objects in the universe, even ones we can see, like the sun and moon. Is there any way to comprehend it?

That may be a tall order, but folks at Agnes Scott College in Decatur, Georgia, have come up with a wonderful, creative approach in a project called MASS. MASS stands for Metro Atlanta Solar System, a model of the solar system scaled down by a factor of 150 million to fit within Atlanta's city limits. Agnes Scott's gorgeous Bradley Observatory is the center, specifically the circular stone courtyard in front of it. The courtyard's diameter is about 30 feet, providing the scale for the planets, which are located at landmarks around the city at the correct scaled distances. Earth is a little more than half a mile away from the sun, at Decatur Public Library, while Neptune is 18.5 miles away at Sweetwater Creek State Park.
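The numbers check out with a quick back-of-the-envelope pass (my own sketch, using rounded figures rather than anything from the MASS project):

SCALE = 150e6                      # shrink the real solar system by 150 million

AU_KM = 149.6e6                    # one astronomical unit, in km
SUN_DIAMETER_KM = 1.39e6

def scaled_m(real_km):
    return real_km * 1000 / SCALE  # model size, in meters

print("Sun     :", round(scaled_m(SUN_DIAMETER_KM), 1), "m across")        # ~9 m: the 30-ft courtyard
print("Earth at:", round(scaled_m(AU_KM) / 1609, 2), "miles from the Sun") # ~0.6 mi: Decatur library
print("Neptune :", round(scaled_m(30 * AU_KM) / 1609, 1), "miles out")     # ~18.6 mi: Sweetwater Creek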


What I love about this idea is that it reaches towards the bigness of the universe by being on a scale that requires travelling miles to get from one planet to another. When we see a scale model of the solar system that takes up a few square feet, it's hard to get a sense for comparative distances and sizes. But if you have to hop in a car and drive out to the edge of Atlanta to find Neptune, you get a better idea of what's in Earth's neighborhood and what's not. It's even clearer to Atlanta natives, who have an instinctive sense of the nearness of these locations. I can imagine people saying, "You mean Mercury's still on Agnes Scott's campus but Uranus is at the airport? Wow, that's far away." Or standing in the 30-foot-wide courtyard of the sun and then seeing the three-inch model Earth at Decatur Library might bring home the tininess of our home planet. (Here's a list of other cities that have scale solar system models—in the Peoria, Illinois scale model, it will take you an hour and a half just to drive from Pluto to Neptune.)

I'd love to see people take on other conundrums of scale in this way. But it turns out that when you go smaller and smaller instead of bigger and bigger, things become even more mind-boggling. Say you wanted to represent an atom with a sphere the size of a basketball. In this blown-up world, the basketball would now be nearly half the size of the sun. (A hydrogen atom has about the diameter of an angstrom, or ten billionths of a centimeter, while the basketball has a diameter of about 25 centimeters.)

John Norton at the University of Pittsburgh frames the atom's tininess by offering you what seems at first like a sweet deal: an atom of gold for every second since the dawn of the universe.

Go inside the atom, and scale becomes even more astonishing. While a hydrogen atom is about an angstrom across, the proton at its center is five orders of magnitude smaller. If the proton is now a basketball, the atom is 15 miles across. The rest? Empty space. As for the electron, good luck pinning it down to measure how wide across it is. What we do know is that it's about 2000 times less massive than the proton. Here's a fun and kind of bone-chilling representation of just how much nothing there is inside the atom.
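If you want to play with the arithmetic yourself, here's the blow-up in a few lines (rounded figures of my own; the proton step just uses the "five orders of magnitude" above):

BASKETBALL_M = 0.25        # basketball diameter, as above
ATOM_M       = 1.0e-10     # hydrogen atom, about one angstrom across
SUN_M        = 1.39e9      # diameter of the sun

blowup = BASKETBALL_M / ATOM_M                     # factor that turns an atom into a basketball
scaled_basketball = BASKETBALL_M * blowup
print(f"scaled-up basketball: {scaled_basketball:.2e} m,",
      f"about {scaled_basketball / SUN_M:.2f} of the sun's diameter")

orders = 5                                         # proton ~ five orders of magnitude smaller than the atom
atom_if_proton_is_basketball = BASKETBALL_M * 10 ** orders
print(f"the atom becomes {atom_if_proton_is_basketball / 1609:.1f} miles across")  # roughly the 15 miles above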



Monday, September 14, 2009

Answer to the Friday Fermi Problem

The result of our Fermi problem: you may need one of these to brave your classes this fall.



On Friday, we posed the following back-to-school-themed Fermi problem:

Assuming you're not in a big lecture hall and the professor shuts the door at the start of class, how long does it take for you and your classmates to deplete the oxygen enough to feel it?


We promised a surprising answer, and here it is. You decide if our back-of-the-envelope calculations are reasonable.

Let's build our classroom first. It's 16 feet wide and long, and 10 feet tall. In handy metric dimensions, that's:
5 meters by 5 meters by 3 meters, or 75 cubic meters.

A cubic meter is 1000 liters, so now we've got 75,000 liters of fresh air.
The oxygen content of air is about 21 percent, and at about 17.5 percent you'll run from the room screaming. To get from fresh and breathable to absolutely stifling, take the difference between 21 percent of 75,000 liters and 17.5 percent of 75,000 liters. That gives us 2,625 liters of oxygen to get through.

How much oxygen does a human consume? It was tough finding a reliable source, but this press release about the 2006 installation of a new oxygen generation system on the International Space Station provides a clue:

During normal operations, it will provide 12 pounds daily; enough to support six crew members.


Aha! So one person needs about 2 lb of oxygen a day, or 0.9 kg. But how many liters is that? Oxygen has a molar mass of 16 grams, so oxygen gas, or O2, has a mass of 32 grams per mole. One mole of gas at standard pressure and temperature takes up 22.4 liters. Now, as my high-school chemistry teacher would say, it's time to hop on the mole train:
.9 kg x (1000 g/1 kg) x (1 mole O2/32 g O2) x (22.4 L/1 mole O2)


This gives us a daily oxygen intake of 630 liters per person. Let's get a more reasonable rate:

(630 L/day) x (1 day/24 hours) x (1 hour/60 mins)


Now we have the serviceable rate of oxygen consumption of .4375 liters per minute. We're almost there.

Now populate the classroom with 34 students and 1 teacher. The 35 occupants consume 15.3125 liters per minute. Now for the final calculation:

2625 L x (1 minute/ 15.3125 L)


It will take about 171 minutes, or 2 hours and 51 minutes, for the room to become unbearably stifling. You can imagine that you'd start to feel pretty uncomfortable about an hour and a half into the lecture—a good argument for shorter classes.
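For anyone who'd rather let a computer push the envelope, here's the whole estimate in a few lines—same assumptions as above, nothing new:

ROOM_L        = 5 * 5 * 3 * 1000                   # 75 cubic meters of air, in liters
FRESH, STUFFY = 0.21, 0.175                        # breathable vs. "run from the room" oxygen fractions
O2_BUDGET_L   = (FRESH - STUFFY) * ROOM_L          # 2,625 L of usable oxygen

KG_PER_DAY    = 0.9                                # one person's daily oxygen, from the ISS figure
L_PER_DAY     = KG_PER_DAY * 1000 / 32 * 22.4      # grams -> moles of O2 -> liters, about 630 L
L_PER_MIN     = L_PER_DAY / (24 * 60)              # about 0.44 L per minute per person

people  = 35
minutes = O2_BUDGET_L / (people * L_PER_MIN)
print(f"{minutes:.0f} minutes, about {minutes / 60:.1f} hours")   # ~171 minutes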



Friday, September 11, 2009

Fermi Problem Friday: Back to School Edition




There's nothing like a crisp fall day—fresh, cool air, leaves crunching underfoot, an apple in your hand, and ten pounds of textbooks in your backpack slowly giving you scoliosis as you haul them around campus. In the spirit of back-to-school, join me in imagining the following scenario.

You get to your class on time for once, file inside with your classmates, and find a seat. Then the teacher starts talking. And talking. And talking. Time dilation seems to be at work. You check your watch—still another half hour to go. And you're starting to feel not very good. Must be this droning professor, you think. You tug at your collar and cough; your head starts to hurt. Your eyes wander to the door and you wonder whether it's been shut this whole time. Then it dawns on you...

The Fermi Problem:

Assuming you're not in a big lecture hall and the professor shuts the door at the start of class, how long does it take for you and your classmates to deplete the oxygen enough to feel it? If you're in school, try using the classroom dimensions and number of students from one of your classes. (That's a hint on where to start.) Stay tuned for the answer on Monday's blog—it might surprise you.

Thursday, September 10, 2009

In love with Hubble all over again

NGC 6302 caught on Hubble's new camera

It may have a boring name, but the nebula known as NGC 6302 is a minor celebrity these days. This butterfly-shaped cloud of gas, pluming spectacularly from a distant dying star, is all over the Web right now. It's one of the first images snapped by the Hubble Space Telescope since STS-125 astronauts replaced its camera and upgraded its instruments in May.

The Wide Field Camera 3
At 19 years old, Hubble may have seemed a bit young for a facelift. But, as any PC or camera owner knows, a lot has happened in the world of electronics and optics in the last 19 years. Judging from these first images, the Wide Field Camera 3 is doing a fantastic job. Although it shares the same ultraviolet-through-infrared range as the Wide Field and Planetary Camera 2, its field of view and resolution outshine those of the retired Whiff Pick Two.

As the Christian Science Monitor points out, there's a marked difference in the new photos, which boast a crisper image and exquisite detail. And there should be: the makeover cost NASA nearly a billion dollars. But it's giving the aging telescope a new lease on life; NASA estimates that Hubble's instruments, assuming no major disasters, will funnel in rich data from the great beyond for another four to five years. When its instruments fail, the scope will be de-orbited, which is a euphemism for burning up in the atmosphere, and replaced by the next generation James Webb Space Telescope.

Before and after image of the Omega Centauri star cluster, 16,000 light-years away.


These photos are gorgeous, but that's not only thanks to Hubble. The WFC3 sees electromagnetic radiation from ultraviolet through infrared, a far wider range than the narrow slice of the spectrum human eyes perceive. So data that falls outside the visible is assigned a representative color so humans can see the details. That's the case in the photo below of Saturn, shot in infrared (before the revamp).

An infrared shot of Saturn.


Another technique is to assign colors to the very specific wavelengths emitted by different chemical elements. For the famous photo of the Eagle Nebula below, image processors assigned the red, green, and blue channels to emissions of ionized sulfur, doubly-ionized oxygen, and hydrogen atoms.

The Eagle Nebula painted in chemical elements.
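In spirit the recipe is simple, even though the real Hubble pipeline involves far more calibration and careful stretching. Here's a minimal sketch in Python of the channel-stacking step, with made-up filenames, following the element-to-color assignment described above:

```python
# Minimal false-color compositing sketch: stack three narrowband exposures
# into the red, green, and blue channels of a single image.
# NOT the actual Hubble pipeline; the filenames are hypothetical.
import numpy as np
from PIL import Image

def load_grayscale(path):
    """Load one narrowband exposure as floats scaled to the range 0..1."""
    return np.asarray(Image.open(path).convert("F")) / 255.0

red   = load_grayscale("narrowband_SII.png")     # ionized sulfur
green = load_grayscale("narrowband_OIII.png")    # doubly-ionized oxygen
blue  = load_grayscale("narrowband_Halpha.png")  # hydrogen

# Stack the channels and write out an 8-bit color composite.
rgb = np.clip(np.dstack([red, green, blue]), 0.0, 1.0)
Image.fromarray((rgb * 255).astype(np.uint8)).save("false_color_composite.png")
```

In practice each channel also gets scaled nonlinearly to pull faint structure out of the noise, which is part of why the finished images look so dramatic.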


I remember feeling cheated when I first learned that Hubble's colors weren't "real," but now that seems like a closed-minded reaction. Human perception is extremely limited, and placing this data within grasp of our limited eyesight is simply the best way to convey the complexity and structure of these objects. For an in-depth explanation of how Hubble images are processed, check out this article from Slate.


Wednesday, September 09, 2009

ATLAS Rendered

A screenshot from Phil Owen's winning video "Origin of Mass"


Phil Owen might just be the envy of every geek on earth.

In November, the twenty-five-year-old will be flying from Australia to Geneva, Switzerland, courtesy of CERN. There he'll have a front-row seat to possibly the most anticipated event in scientific history—the startup of the Large Hadron Collider. And that's just the beginning. As the winner of a video contest held by the collaboration that works on ATLAS, the LHC's flagship detector, Owen will be the project's multimedia intern, with the opportunity to document those first moments in gorgeous 3D.

"I had some other plans for next year, but I think I'll put them off," he says. "It's an amazing opportunity."




Owen, who was born in the US, is finishing up his bachelor's degree in information technology at Monash University in Australia. While studying he's been working on medical visualization projects with the university's pharmacy faculty. "In the future I want to branch out and do material for all fields of science," he says.

Phil Owen's rendering of the Standard Model.


In his winning video, "Origin of Mass," Owen explains the significance of the Higgs boson with voiceover and shimmering 3D images. Entering the contest, he says, was a lot like cramming for a very tough exam. "I spent a couple of weeks studying really hard, learning the particle physics, making sure I understood it myself before diving into it," he says.

As the ATLAS multimedia intern, Owen will be creating animations based on the very first collisions. "It's daunting," he says, but adds that he thinks visualization provides a much-needed dimension to communicating science.

"I think it's to put things in the context for people," he says, "You can tell people how big the sun is a thousand times, but they don't get it till you can show them an actual image comparing it to earth."

A screenshot from Richard Green's video, "Proton."


Forty-seven-year-old Richard Green, a video game environment designer in Seattle, won the "Neutrino Prize" for fourth place with his charming video of a talking proton on its way to a collision.

"I just sort of tried to grab something a little off-kilter about [the LHC]," Green says. Since the LHC collides protons, he started to imagine the proton as a character. The resulting video, he says, "is the life of a proton. I thought he should just be a talking head, telling the story himself, like it's him going to work." Hence the inspiration for the proton's "working class" voice.

A screenshot from Simon Howells' video, "Atlas"


In third place with the "Electron Prize" was twenty-five-year-old Brit Simon Howells, a master's student in 3D animation at the University of Hertfordshire. His video "ATLAS" is an atmospheric tour of the behemoth detector, set almost entirely to music.

"When you're trying to get the public interested in science, I don't think ramming the technicalities down their throats is the way to go," he says. In one scene, a snow of electronics falls gently through ATLAS empty innards.

A screenshot from Simon Howells' video, "Atlas"


"I wanted to show the epicness of the scale, not just in the size, but in the hundred million components that go into making ATLAS, the vast numbers of densely packed components," he says.

While Howells was disappointed not to snag the grand prize, he's incorporated the video into his master's thesis. "I'm a big fan of what they're doing at CERN," says Howells, who also produced this video posted to YouTube earlier this year:

"I see it as my generation's space race, which was quite a simple thing to convey, get us to the moon and back, whereas the LHC is a bit more difficult to explain," he says. "I wanted to make a bit more of an artistic thing, get people more interested in the visuals."



Tuesday, September 08, 2009

To be or not to be: the magnetic monopole



You might have read it in Nature News, Starts with a Bang, or Science: physicists have discovered magnetic monopoles. Sort of.

Positive and negative charges are happily independent, but north and south poles always come in twos. As the textbook example goes, cut a bar magnet in half and you'll get two smaller bar magnets—you can never isolate one pole from the other. Monopoles—lone north or south poles—simply don't exist.

Or so I was told when I first heard about monopoles, in my first college course on electromagnetism. I heard about them for the second time from Shou-Cheng Zhang, a condensed-matter physicist who studies exotic phases of matter. He seemed to have a rather different opinion.




As my hand struggled to keep up with the interview, it occurred to me that Stanford clearly thought very highly of Zhang; sunlight flooded through a large window into the generously proportioned office, which was located just next door to one of the department's Nobel laureates.

Profiling Zhang was my first real assignment as an intern at SLAC National Accelerator Laboratory, and I couldn't have wished for a better subject. Well-spoken and patient, Zhang made his research both fascinating and accessible. He works on the quantum spin Hall effect, a strange state of matter where the spin of an electron is determined by its direction of motion. Zhang likened it to a graceful dance from the days of Jane Austen; couples moving counterclockwise around a room also spin counterclockwise, and vice-versa with couples moving clockwise. If this happened in a real material, he said, the current could flow without causing the material to heat up.

That's shocking enough for anyone who's heard that Moore's Law, the observation that the number of transistors on a chip doubles roughly every two years, may be reaching its limit as ever-tinier transistors run hotter and hotter. But then Zhang said something even more astonishing. This kind of material, he said, would give rise to a magnetic monopole.

I remember being dumbfounded. Surely I didn't hear the word monopole come out of his mouth. Quadrupole, maybe? The look on my face must have given me away, because Zhang just grinned.

Stanford had a history, he told me, of monopole searches. "People were literally waiting for it to fall out of the sky," he said. Not only was a Nobel laureate around the corner from Zhang's office, but Blas Cabrera, lifetime monopole hunter, wasn't too far away either. Here's how Ethan Siegel, an assistant professor in physics at Lewis and Clark College, recapped the search in Friday's "Starts with a Bang" post:

Magnetic monopoles have always been a curiosity for physicists, and many of us think that they ought to exist. In the 1970s, there were searches going on for them, and the most famous one was led by a physicist named Blas Cabrera. He took a long wire and made eight loops out of it, designed to measure magnetic flux through it. If a monopole passed through it, he would get a signal of exactly eight magnetons. But if a standard dipole magnet passed through it, he'd get a signal of +8 followed immediately by one of -8, so he could tell these apart.

So he built this device and waited. Occasionally he'd get one or two magnetons, but the fact that it wasn't eight was hardcore evidence that something funny was going on with just one or two loops. (Three or more was never seen.) In February of 1982, he didn't come in on Valentine's day. When he came back to the office, he surprisingly found that the computer and the device had recorded exactly eight magnetons on February 14th, 1982. Huge devices with larger surface areas and more loops were built, but despite extensive searching, another monopole was never seen. Stephen Weinberg even wrote Blas Cabrera a poem on February 14th, 1983:

Roses are red,
Violets are blue,
It's time for monopole
Number TWO!

And, as of today, no one has seen good evidence for a second magnetic monopole, leading us to believe that the first one was spurious.


But Zhang wasn't talking about monopoles falling from the sky; instead, he said you could see it in the mirror image of an electron. I'll quote the article I ended up writing for symmetry:

To understand how a material can act like a magnetic monopole, it helps to examine first how an ordinary metal acts when a charge—an electron, say—is brought close to the surface. Because like charges repel, the electrons at the surface retreat to the interior, leaving the previously neutral surface positively charged. The resulting electric field looks exactly like that of a particle with positive charge the same distance below the surface—it’s the positive mirror image of the electron. In fact, from an observer’s point of view, it’s impossible to tell the difference.

The concept of an image charge is something undergraduate physics students encounter in their very first electricity and magnetism class, along with the idea that the magnetic monopole doesn’t exist. But Zhang’s "mirror" alloy is no ordinary material. It’s what’s called a topological insulator, a strange breed of solid Zhang specializes in, in which "the laws of electrodynamics are dramatically altered," he says. In fact, if an electron was brought close to the surface of a topological insulator, Zhang’s paper demonstrates, something truly eerie would happen. Instead of an ordinary positive charge, Zhang says, "You would get what looks like a magnetic monopole in the 'mirror.'"


The image charge analogy is important—there's no physical half a bar magnet lodged somewhere inside this material. Instead, the monopole's point-source magnetic field, its signature, its defining characteristic, emerges from the behavior of the electrons inside. [For further reading, see Zhang's paper in Science.]
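For anyone who wants the textbook version of the image-charge trick, here's the standard construction for a point charge q held a distance d above a grounded conducting plane (a classic exercise, not Zhang's calculation). The potential above the plane is

$$ V(x,y,z) = \frac{q}{4\pi\epsilon_0}\left[\frac{1}{\sqrt{x^2+y^2+(z-d)^2}} - \frac{1}{\sqrt{x^2+y^2+(z+d)^2}}\right], \qquad z > 0, $$

exactly what you'd get from the real charge plus a fictitious charge -q sitting at z = -d below the surface. Zhang's topological insulator plays the same game, except the "reflection" behaves like a magnetic charge instead of an electric one.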

CREDIT: L. D. C. JAUBERT AND P. C. W. HOLDSWORTH, NATURE PHYSICS 5, 258 (2009)



The research that's making the news today comes out of the same world as Zhang's strange material—condensed matter physics. It's called "spin ice" because its constituents, magnetic ions, are arranged in the same tetrahedral configuration as water molecules in solid ice. Adrian Cho at ScienceNOW explains:

The magnetic ions sit at the tips of four-sided pyramids or tetrahedra connected corner to corner (see diagram). At temperatures near absolute zero, they should organize themselves by a simple rule: In each tetrahedron, two ions point their north poles inward toward the center and two point outward.

Flaws in this pattern are the monopoles. If one ion flips--perhaps because it gets energized by the thermal energy in the crystal—it leaves one tetrahedron with three ions pointing inward and the neighboring tetrahedron with only one ion pointing inward (see figure). The two imbalanced tetrahedra act like north and south magnetic poles, respectively. If nearby spins also flip, the imbalances can shift independently from one tetrahedron to the next, so that the north and south poles end up connected only by a string of ions that point from one to the other. Thus the imbalanced tetrahedra become magnetic monopoles.
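To make the bookkeeping concrete, here's a toy sketch of my own (not from the ScienceNOW piece): two corner-sharing tetrahedra in Python, where flipping the shared spin breaks the two-in, two-out ice rule in both tetrahedra at once and leaves behind a north/south defect pair.

```python
# Toy spin-ice bookkeeping: two corner-sharing tetrahedra, A and B.
# Convention: a spin value of +1 means the spin points INTO tetrahedron A
# (or into B, for B's private spins). The shared spin that points into A
# necessarily points OUT of B, so B sees it with the opposite sign.

def charges(a_private, b_private, shared):
    """Net 'magnetic charge' of each tetrahedron; 0 means two-in, two-out."""
    q_a = sum(a_private) + shared    # shared spin as seen by A
    q_b = sum(b_private) - shared    # ...and as seen by B
    return q_a, q_b

a_private = [+1, -1, +1]   # two in, one out so far
b_private = [+1, -1, -1]   # one in, two out so far
shared = -1                # points out of A, i.e. into B

print(charges(a_private, b_private, shared))  # (0, 0): both obey the ice rule

# Flip the shared spin: A becomes three-in/one-out, B becomes three-out/one-in.
shared = +1
print(charges(a_private, b_private, shared))  # (2, -2): a north/south defect pair
```

In the real material those two defects can then drift apart, connected only by the string of flipped spins Cho describes.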


Now, are these magnetic monopoles in a sense that would satisfy the likes of Blas Cabrera? Probably not. While the researchers detected these strings of ions, seeing the monopoles themselves is going to be a lot trickier, as Geoff Brumfiel at Nature News explains:

Like any charged particle, opposites attract, and the north and south poles typically cluster together less than a nanometre from each other. That makes them extremely hard to detect individually.


Siegel of Starts with a Bang is even more critical, though he praises the research as important in its own right elsewhere in his post:

What they did was create magnetic "strings", or very long, thin magnets on a lattice, where North and South poles are separated by great distances. If you only look at one side of this string, you only see one pole. But the other pole is still there, and so this isn't a monopole. If you tried to snap the string, you still wouldn't isolate one magnetic charge...


So it isn't clear as day whether this is really a magnetic monopole. But my question is whether spin ice could at least be used to study the creatures by analogy, because they're of great interest.

Why? "Many of us think that they ought to exist," Siegel writes, which leaves a some explanation to to be desired. Adrian Cho fleshes it out:

Monopoles would be the magnetic equivalent of electrically charged particles, and there are several reasons physicists would like to see them. In 1931, famed British theorist Paul Dirac argued that the existence of monopoles would explain the quantization of electric charge: the fact that every electron has exactly the same charge and exactly the opposite charge of every proton. In the 1980s, theorists found that the existence of monopoles is a basic prediction of "grand unified theories," which assume that three forces—the electromagnetic, the strong force that binds the nucleus, and the weak force that causes a type of radioactive decay—are all different aspects of a single force.
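Dirac's argument boils down to a remarkably compact condition, given here in Gaussian units as the standard textbook statement (it isn't in Cho's article): if even a single monopole of magnetic charge g exists anywhere in the universe, then every electric charge e must satisfy

$$ e\,g = \frac{n\hbar c}{2}, \qquad n = 0, \pm 1, \pm 2, \ldots $$

so electric charge can only come in discrete steps, which is exactly what we observe.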


Monopoles are a part of the high-energy physics menagerie of exotic, never-before-seen particles. A few of their zoo-mates include anyons, two-dimensional particles that straddle the properties of fermions and bosons, and axions, feebly interacting particles that "clear up" problems with charge-parity violation in quantum chromodynamics the way Axion detergent cleans dishes (hence the name; thanks, Frank Wilczek). Then there's the most famous hypothetical particle of them all, the Higgs boson.

These entities are lynchpins in our best descriptions of the universe. But so far there's no sign of them—at least not in cosmic ray showers or in the short-lived spaghetti of particles in a collider's detector. Meanwhile, it seems like you can't swing an atomic-force microscope in a condensed-matter system without hitting one of these things.



Let's go back to Shou-Cheng Zhang. A particle physicist by training, Zhang had a very philosophical view of the relationship between condensed matter physics and high-energy physics. Using the words of English poet William Blake, he said that studying condensed matter physics was like "seeing a world in a grain of sand."

"It means that the structure of subatomic particles is reflected in the systems they make up—the solid, the grain of sand," he told me. Or you could think of it as an Escher waterfall: just when you think you've gotten to the top of the waterfall with whole systems of particles, you find yourself at the bottom, with the fundamentals.

Spin ice and topological insulators are hardly the only materials where high-energy physicists might find their elusive particles. Take graphene, for instance. A single-atom-thick layer of graphite, the same stuff that's in your pencil, graphene has been hailed for the last five years or so as the new wonder material for electronics because electrons zip effortlessly through it. But the same physics that makes it so promising for lucrative applications also makes it a playground for high-energy pursuits. One reason is that electrons in graphene don't act like regular old electrons. They act sort of like photons with charge and spin 1/2. There's an excellent article by Robert Service in the May 15 issue of Science on the subject that's unfortunately only available to subscribers, but I'll quote some of it here.

In the lattice of a typical metal, electrons feel the push and pull of surrounding charges as they move. As a result, moving electrons behave as if they have a different mass from their less mobile partners. When electrons move through graphene, however, they act as if their mass is zero—behavior that makes them look more like neutrinos streaking through space near the speed of light. At such "relativistic" speeds, particles don't follow the usual rules of quantum mechanics. Instead, physicists must invoke the mathematical language of quantum electrodynamics, which combines quantum mechanics with Albert Einstein’s relativity theory. Even though electrons course through graphene at only 1/300 the speed of neutrinos, physicists realized several years ago that the novel material might provide a test bed for studying relativistic physics in the lab.
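The "massless" behavior Service describes comes from graphene's famously linear dispersion near its Dirac points, a standard textbook result rather than anything specific to the Science article:

$$ E(\mathbf{k}) \approx \pm \hbar v_F |\mathbf{k}|, \qquad v_F \approx 10^6\ \mathrm{m/s} \approx \frac{c}{300}, $$

the same form as the energy of a massless relativistic particle, E = pc, with the Fermi velocity standing in for the speed of light.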




Service goes on to say that graphene could be used to study a number of predictions of high-energy physics in a sort of lab-on-a-chip setting, mentioning especially something called Klein tunneling, a 1929 thought experiment of Swedish physicist Oskar Klein:

Klein realized that when electrons travel at relativistic speeds, the likelihood that they will tunnel through a barrier can skyrocket. That’s because in the spooky world of quantum mechanics, within which particles can wink in and out of existence, a relativistic particle that hits a barrier can generate its own antiparticle, in this case a positron. The electron and positron can then pair up and travel through the barrier as if it weren’t even there.


In March of this year, Columbia physicist Philip Kim reported observations of the tunneling in a real-world material—good old graphene.

So whether or not spin-ice monopoles are just as good as the kind Blas Cabrera hoped would fall from the sky, they're worth creating and studying. Condensed matter physics holds out an immediate and fairly inexpensive way to test out the predictions of Grand Unified Theories. While I'm not expecting anyone to find the Higgs boson in a sheet of graphene, maybe high-energy physicists will make worthwhile discoveries by looking in the grain of sand, instead of the galaxies, for signs of their universe.

