Monday, November 30, 2009

Discoveries, in their own words



The words Philosophical Transactions, which happen to form the title of the journal published by the Royal Society, may not ring a bell for the modern science enthusiast, but the hoary pages of this prestigious journal witnessed the birth of physics as we know it. When Newton discovered that white light was actually a combination of different wavelengths, he wrote a letter to the Royal Society vividly describing his experience:


"...In the beginning of the year 1666...I procured me a Triangular glass-Prisme, to try therewith the celebrated Phaenomena of Colours. And in order thereto having darkened my chamber, and made a small hole in my window-shuts, to let in a convenient quantity of the Suns light, I placed my Prisme at his entrance, that it might be thereby refracted to the opposite wall. It was at first a very pleasing divertisement, to view the vivid and intense colours produced thereby."

The account was published in 1672, twelve years after the Royal Society, which had its start as an informal scientific salon in London in the 1640s, was officially inaugurated as the UK's national academy. Hundreds of years later, the Society has made it possible for us to experience this moment of discovery as Newton's colleagues must have, funny-looking s's and all, through a sort of online museum of the most famous and influential letters the Society has published in its 350-year lifetime.

Newton's article and fifty-nine others appear in a virtual timeline spanning 1660 to 2010. The discoveries—Stephen Hawking and Roger Penrose's work on black holes, the invention of the first battery, Bayesian probability—may be familiar, but the presentation of each one, as a humble letter describing the first foray into now-established notions about nature, is refreshingly new in its, well, ancientness. Each article comes with a commentary written by a modern-day researcher, summing up the paper's contents and explaining its significance, in case you don't feel like trying to decipher 17th-century fonts; however, the papers themselves, with their beautiful, hand-drawn illustrations and rambling, almost conversational tone, are fun to read. Physics and astronomy are well-represented in the ranks, though something tells me that the medical letters would make good reading for those with a morbid sense of humor.

The archive, known as Trailblazing, lets you relive eureka moments from James Prescott Joule's investigations of friction as a source of heat in 1849 to the 1909 experiment that revealed that all the positive charge in an atom must be densely concentrated at its center. Meanwhile, a sprinkling of important events throughout the timeline provides historical context; it might interest you to know that James Clerk Maxwell's theory of electromagnetism came out just as the American Civil War was ending, or that Newton was playing with a prism in his darkened room at Cambridge the same year that the Great Fire of London raged.

Alessandro Volta's drawings from a letter describing a battery.

I thought I'd heard that the famous story about Ben Franklin flying kites in thunderstorms was apocryphal, but in the Royal Society archive you can find Franklin's comically blithe description of how scientists who were skeptical of his idea that lightning was a form of electricity could demonstrate this fact for themselves. After you've rigged up your silk kite with a foot-long metal wire, Franklin writes, "the kite is to be raised, when a thunder-gust appears to be coming on, (which is very frequent in this country)," in order to attract the "electric fire from the clouds." The scientist is supposed to hold onto a silk ribbon, attached to a metal key; Franklin admonishes a would-be lightning catcher to "stand within a door, or window, or under some cover, so that the silk riband may not be wet; and care must be taken, that the twine does not touch the frame of the door or window." Meanwhile, the "pointed wire will draw the electric fire…and the kite, with all the twine, will be electrified; and the loose filaments of the twine will stand out every way, and be attracted by an approaching finger." The key, he goes on to explain, can be used as a source of charge for any static electricity experiments, thus proving that lightning isn't the wrath of the gods but a form of electricity. As commentator Mark Miodownik writes, "It worked and, miraculously, did not kill him."



Wednesday, November 25, 2009

In the Brain, Seven Is A Magic Number


Having a tough time recalling a phone number someone spoke a few minutes ago or forgetting items from a mental grocery list is not a sign of mental decline; in fact, it's natural.

Countless psychological experiments have shown that, on average, the longest sequence a normal person can recall on the fly contains about seven items. This limit, which psychologists dubbed the "magical number seven" when they discovered it in the 1950s, is the typical capacity of what's called the brain's working memory.

Now physicists have come up with a model of brain activity that seems to explain the reason behind the magical memory number.

If long-term memory is like a vast library of printed tomes, working memory is a chalkboard on which we rapidly scrawl and erase information. The chalkboard, which provides continuity from one thought to the next, is also a place for quick-and-dirty calculations. It turns the spoken words that make up a telephone number into digits that can be written down or used to reply logically to a question. Working memory is essential to carrying on conversations, navigating an unfamiliar city and copying the moves in a new workout video.

It's easy to test how much you can fit on this chalkboard. Just have a friend make a list of ten words or numbers. Read the list once, and then try to recall the items. Most people max out at seven or fewer.

It makes intuitive sense: as a mental list gets longer, people are more likely to make mistakes or forget items altogether. But why do the clusters of neurons in our brains produce such a small chalkboard?

In a paper published on Nov. 19 in the journal Physical Review Letters, Mikhail Rabinovich, a neuroscientist at the BioCircuits Institute at the University of California, San Diego and Christian Bick, a graduate student at the Max Planck Institute for Dynamics and Self-Organization in Göttingen, Germany, present a mathematical picture of how neurons fire when we recall a sequence of steps -- such as turn-by-turn driving directions, the digits of a phone number or the words in a sentence.

When we hear the phrase "It was the best of times, it was the worst of times," a cluster of neurons fires during each word. When one cluster fires, it suppresses the others momentarily, preventing the sentence from coming out scrambled.

In Rabinovich and Bick's model, the excitation of a certain cluster represents a single point. As the neurons for "It," "was," "the," and "best" fire in sequence, the brain creates pathways from one point, or brain state, to the next. The more powerfully each excited cluster can inhibit or suppress all others in the sequence from firing, the more solid these pathways.

When we recall the sentence, the brain follows these pathways from state to state to reproduce the sequence, like a tightrope walker hurrying along a wire from one perch to the next.
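To get a feel for this picture, here is a toy "winnerless competition" network in Python. It is only a sketch of the kind of dynamics described above, not Rabinovich and Bick's actual equations, and the number of clusters and the two inhibition strengths are arbitrary choices. Each cluster limits itself and inhibits the rest; because the cluster for the next item is inhibited only weakly by the current winner, activity hops down the chain in order.

import numpy as np

# Toy sketch with assumed parameters, not the published model.
N = 6                       # number of items in the remembered sequence
weak, strong = 0.5, 1.5     # inhibition from the previous item vs. from everyone else

# rho[i, j] = how strongly cluster j suppresses cluster i
rho = np.full((N, N), strong)
np.fill_diagonal(rho, 1.0)      # each cluster limits its own activity
for i in range(1, N):
    rho[i, i - 1] = weak        # only weakly inhibited by the item just before it

dt, steps = 0.01, 20000
a = np.full(N, 1e-6)            # all clusters nearly silent...
a[0] = 0.5                      # ...except the first item in the sequence
history = np.empty((steps, N))

for t in range(steps):
    growth = 1.0 - rho @ a      # Lotka-Volterra-style competition
    a = a + dt * a * growth
    a = np.maximum(a, 1e-6)     # small floor so a suppressed cluster can recover later
    history[t] = a

# Report the order in which clusters dominate; it comes out 0, 1, 2, ... in turn
order = []
for t in range(steps):
    winner = int(np.argmax(history[t]))
    if history[t, winner] > 0.5 and (not order or order[-1] != winner):
        order.append(winner)
print("firing order:", order)

In this hand-wired toy the chain marches along for any length you like; the catch, as the next paragraph explains, is that in the real model the suppression needed to keep each hand-off clean grows steeply as the sequence gets longer.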

As a sentence or a string of numbers gets longer, it becomes exponentially harder for the excited cluster to suppress the others from firing, resulting in pathways that are weak or barely there. Recalling seven items requires about 15 times the suppression needed to recall three. Ten items requires inhibitory powers that are 50 times stronger, and 20 or more items would require suppression hundreds of times stronger still. That, Rabinovich explained, is normally not biologically feasible.

"Synapses can't be stronger than that," he said. "The brain is a very complex biochemical machine."

Mathematical models like these may seem removed from the gritty reality of gray matter and neural chemistry, according to Karl Friston, who directs the Wellcome Trust Centre for Neuroimaging at University College London, but they provide a critical connection between what people actually experience and the hidden mechanisms inside the brain.

Rabinovich's model, Friston said, "is both plausible and compelling." It correctly predicts the working memory's capacity and with a little elaboration could be tested experimentally. Friston said the model suggests patterns in the working memory's activity that should be discernible in the brain's electrical signals.

The exception to Rabinovich's model may be autistic individuals who skip effortlessly past seven and eight items, memorizing even a hundred random numbers in a single read-through. Their brains seem to be able to create much stronger pathways than the typical brain.

- Lauren Schenkman
Inside Science News Service


Tuesday, November 24, 2009

Ka-Blammo!

In case you haven't heard, the massive Large Hadron Collider in Geneva, Switzerland had its first proton collisions over the weekend, more than a year after shutting down in September 2008. This means that the world's largest, most complicated machine is now back and ready to search for the hypothesized Higgs boson, the particle theorized to give matter its mass.

The LHC is on record as the world's largest and most powerful particle accelerator. Not only that, it really is the world's largest and most complex machine. When running at full power it shoots protons (and, for part of each year, lead ions) around its 27-km-long ring at almost the speed of light. The particles collide with such energy that new particles are created under conditions similar to those just moments after the Big Bang. It’s the hope of physicists around the world that one of the particles that flies off into the detectors is that hard-to-pin-down Higgs boson.


Author Bill Bryson visited the collider a few weeks ago while it was still under repair. He spoke to CERN physicist James Gillies, who put the machine's astonishing precision into perspective: "[G]etting two protons to collide is exactly equivalent to firing two knitting needles from opposite sides of the Atlantic Ocean and having them strike in the middle."

Its accident in September 2008, just days after its commissioning, was a major setback. Faulty wiring caused six tons of the collider's liquid helium to destructively discharge, knocking 54 of its superconducting magnets out of alignment. Technically speaking, it was really bad.

Over at The Big Picture photo blog, Alan Taylor put together an amazing series documenting the damage and repair of the massive machine.

Now it's back up and running, and gearing up to hunt for the Higgs. These initial collisions are really just baby steps, a fraction of what the collider can do. Not surprisingly, the scientists are being careful as they ease the machine up to full power, but they are still making fast progress. These collisions came only three days after they shot the first test streams of protons around the collider.

To say these collisions are the culmination of a massive effort would be an understatement. The whole experiment cost over $9 billion and took 15 years and countless man-hours to design, build, and run (and repair). However, raw numbers really can only offer a small glimpse into how much effort has been put into the massive machine. I think the looks on the scientists' faces in the main control room say it all.
Like the old saying goes, a picture's worth 1.0 x 10^3 words.





Monday, November 23, 2009

The Story Behind The Physics of Superheroes!


James Kakalios knows that he will be forever linked to the physics of Spiderman. When he started teaching a freshman seminar class in 2001 based on the physics of superheroes, he had little inkling that it would soon lead to a whole series of public lectures, a popular book, and even a gig consulting on a major Hollywood motion picture. He jokes frequently that even if he were to win three Nobel Prizes, the photo of him surrounded by action figures would be his legacy.

There is of course more to Kakalios than caped crusaders and comic books. In addition to teaching and directing undergraduate studies at the University of Minnesota, he is also a condensed matter experimentalist. His work in disordered systems extends from the properties of amorphous semiconductors to neurological systems and the avalanche dynamics of sand.

Kakalios’s book The Physics of Superheroes originated from an impulse and momentum problem he included on exams nearly fifteen years ago. He asked his students to calculate how much force Spiderman would need to save his girlfriend Gwen Stacy after the nefarious Green Goblin sent her plummeting off the Brooklyn Bridge.
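The flavor of the calculation is easy to sketch. The numbers below (the fall height, Gwen's mass, and the time the webbing takes to stop her) are illustrative assumptions, not the figures from Kakalios's exam, but they show how impulse and momentum come in:

import math

g = 9.8       # gravitational acceleration, m/s^2
h = 90.0      # assumed fall height, m (roughly the top of a bridge tower)
m = 50.0      # assumed mass of the falling Gwen Stacy, kg
dt = 0.5      # assumed time for the webbing to bring her to rest, s

v = math.sqrt(2 * g * h)   # speed just before the webbing snaps taut
impulse = m * v            # momentum that has to be removed
F_avg = impulse / dt       # average force the web must supply

print(f"impact speed: {v:.0f} m/s")
print(f"average stopping force: {F_avg:.0f} N, about {F_avg / (m * g):.0f} times her weight")

With these guesses the web has to supply thousands of newtons and decelerate her at nearly ten times gravity, which hints at the grim twist of the original comic: the sudden stop can be as dangerous as the fall.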

“I was trying to come up with an exam problem related to impulse and momentum that hadn’t been done a hundred times already,” Kakalios said, “At the time I was just really thrilled to come up with something challenging, a freshman physics problem that hadn’t been done before.”

He found that students responded well to the question because it presented a real world physics problem cloaked in an approachable pop culture superhero getup. Though caped heroes in spandex wielding superpowers may not necessarily seem like "the real world," Kakalios pointed out that most examples in physics books, involving helicopters dropping bowling balls and such, are so abstract that, as he put it, "those are fiction. There's really no difference."

Kakalios began incorporating more pop culture references in his physics lectures until in 2001 he started teaching a freshman seminar class titled "Everything I Know About Science I Learned from Reading Comic Books." In the class every diagram and example Kakalios used was borrowed directly from the pages of comic books.

“It really is a class on the physics of everyday life with instruction in creative problem solving, and it’s hidden in a superhero ice cream sundae,” Kakalios said, “The point of the class was not the equations themselves… [it was] if you can quantitatively analyze a new situation. How to figure out something you wouldn’t expect to be a physics problem and look at it from a physics point of view.”

The class required students to break down complex problems into their constituent parts and creatively solve them. Instead of a final exam Kakalios had students pick a character from a comic book and do a scientific analysis of them. Students would come up with original approaches like calculating how many calories the Flash needed to run hundreds of miles an hour. One student used the curve of the Earth in a single panel to extrapolate the height of an airplane flown by a cartoon mouse.

Kakalios found that many of his students enrolled in the course weren’t regular comic book readers, and most were from majors outside of the sciences, like history and journalism. They were people who had enjoyed physics in high school and had wanted to learn more but were intimidated by it.

When the Spiderman movie came out in 2002, Kakalios penned an editorial for the Minneapolis Star Tribune based on his original exam question about Spiderman's imperiled girlfriend. After his editorial ran, other publications began to take notice of his novel method of teaching physics. The timing coincided with a slew of new feature films based on comic books, so news started to travel fast. The Associated Press profiled him, carrying his story around the world. Other news outlets started to contact Kakalios as well, including CNN Headline News and the BBC.

Soon after, Gotham Books approached Kakalios to write a book based on his seminar course. In 2005 The Physics of Superheroes hit store shelves with Discover Magazine naming it one of the top science books of the year. Now in its eleventh printing, the book has been translated into German, Italian and Spanish, with Greek and Korean versions in the works.

The book takes its readers on a superhero-guided tour of physics, starting with fundamental forces and motion, then working up through thermodynamics, electricity, general relativity, and finally some quantum mechanics. To make it as accessible as possible, Kakalios intersperses humor with the hard science and opts to focus on the numerous instances where comic books get their physics correct rather than the many times they don't.

“I explained all of physics using only superhero illustrations,” Kakalios said, “If you explain it using Spiderman or Superman, their shields aren’t up and people will stay engaged.”

The book’s popularity attracted the attention of Hollywood. In 2007 Ann Merchant, the deputy executive director of communications at the National Academies, asked Kakalios to act as the science advisor for the big-screen adaptation of the popular “Watchmen” graphic novel. Kakalios advised the production company on ways to help create a believable fantasy in the film. Actor Billy Crudup, who played the physicist turned superhero Dr. Manhattan, said that talking science with Kakalios helped him get into character. The studio also helped Kakalios produce a series of “The Science of Watchmen” video clips. To date these videos have been downloaded from YouTube over 1.5 million times.

Kakalios recently completed an updated edition of his book, now out. The new version expands the original book to include the fluid dynamics of Aquaman, the angular momentum of the Human Top, and the material science of the members of the Justice League of America. “I did all the calculations and now the jokes are 12.7 percent funnier,” Kakalios said.

In addition, Kakalios is writing a new book on the importance of quantum mechanics in everyday devices like CD players and cell phones. Even if his new book is as popular as The Physics of Superheroes, Kakalios still plans to attend comic conventions across the country.

“Comic book fans ask the best questions,” Kakalios said, “They’re people who enjoy the science and keep up with what’s new.”





[Reprinted with permission from the Nov. 2009 issue of APS News]



Friday, November 20, 2009

Show your school spirit! (of innovation)

If you're a high school student, teacher, or parent, this post is for you. The deadline is coming up to register and submit project plans for the Pete Conrad Spirit of Innovation Awards. Now, I'll admit to being a pretty massive geek in high school—Math Bowl, Science Bowl, Academic Decathlon. I pretty much signed up for every extra-curricular activity guaranteed to cement my status as a social pariah. But if the Pete Conrad Spirit of Innovation Awards had been around in my day, I needn't have sacrificed cool for school. From a look around their website, it kind of seems like the X-prize for teenagers.

The competition is definitely about science, problem-solving, and technology, but there's a strong entrepreneurial aspect. The idea is to use your smarts to develop a product that's actually needed: good ideas get a $1,000 grant, and winning teams get a $5,000 grant to develop their product further. There are four project themes that a team can work within: aerospace exploration, green schools, renewable energy, and space nutrition. Sounds like pretty hot stuff for up-and-coming tech entrepreneurs to think about.



Aerospace exploration
This category is wide open. Think tools for a future moon rover, an instrument on an unmanned spacecraft, a spacesuit for an astronaut landing on Mars, or a device that piggy-backs on satellites. How about an exercise machine for a lunar base, a plan for a lunar greenhouse, or a health monitor? Health-related tech could snag you a special award:
$5,000 National Space Biomedical Research Institute Prize for Innovation in Space Exploration Health Care will be awarded to one team for the best aerospace-related human health product.

Green schools
This one's in partnership with the United States Green Building Council. According to the EPA, 35 percent of waste generated in the US comes from schools. How green is yours? Projects could include assessing your school's carbon footprint or use of water, devising a plan to help your school save energy, or even just coming up with ways to make school a healthier, more pleasant place to learn.

Renewable energy
This category should need no introduction. Take a cue from William Kamkwamba who, as a teenager in rural Malawi, taught himself to build an electricity-generating windmill using outdated physics textbooks in the village library. Come up with a device that makes an everyday task run entirely on water, wind, solar power, or grass.

Space nutrition
Because the goal of this project is to produce a tasty, uber-fattening space treat, it's definitely my favorite. Will you go for the Power-bar-esque extruded recipe, bake your bar a la Clif Bars, or whip up some old-fashioned granola with staying power? Whatever recipe you decide to go with, make sure all the water molecules in your nutrition bar are bound to food, and not free to provide a growing environment for bacteria, yeasts, and molds. Winning bars must pack a whopping 375 calories into a paltry 100 grams of goodness, minimize crumbliness, and rate at least "6 or higher when tested using a 9-pointed hedonic scale"—actual astronauts might eat this stuff! I think it would be fascinating to be one of the judges.

Looks like today is the deadline to submit your spacebar recipe, but you can still register and sign on to one of the other challenges. That gives you a little less than a month to put together a technical concept and business plan.

These challenges may sound daunting, but the foundation's website provides plenty of resources to help you along. There's an online forum for those 3 AM questions, and a searchable database to help you find experts on your topic. Simply turning in your paper ideas for a product could earn you a $1,000 grant for R&D.

Looking for inspiration? Last year's winners came up with concepts for producing oxygen on the moon with algae, mining helium-3 on the moon, and building a railgun launch system to replace rocket-powered travel from low Earth orbit to lunar orbit.

Today happens to be the perfect day for coming up with bright ideas: mull over possibilities as you wait for the LHC to start up again. Thinking about product design? Check out these gorgeous photos of the LHC from the Boston Globe. Now that's some fine technology.


Thursday, November 19, 2009

The plot thickens; the ozone does not

Hello, ozone. (Image from NASA).
Here's a scientific mystery for you: where has all the ozone gone?

In 1985, researchers discovered a gaping hole in the ozone-rich layer of the stratosphere over Antarctica. Ozone, a molecule of three oxygen atoms, may seem unassuming, but it absorbs light in the UV-B range, considerably reducing how much of it reaches the surface of the earth. When it comes to sunlight, this is the nasty stuff—skin cancer-causing, molecule-destroying, plant-killing wavelengths of ultraviolet. Ozone, happily, absorbs the lion's share before it starts frying our eyes.

So what was causing the hole? Another chemical was the culprit: chlorine. A chlorine atom will tear an oxygen atom away from O3, producing chlorine monoxide (ClO) and molecular oxygen (O2). Then things get worse: a free oxygen atom grabs the oxygen back from the chlorine monoxide, not to re-form ozone, but to make another molecule of oxygen. This leaves the chlorine atom free to do its dastardly business all over again—a vicious circle if I've ever seen one.
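Written out as reactions, the cycle the last paragraph describes looks like this (O here is a lone oxygen atom, of which the sunlit stratosphere has a steady supply):

Cl + O3 → ClO + O2
ClO + O → Cl + O2
net: O3 + O → 2 O2, with the chlorine atom untouched and ready to go again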


Where does all the chlorine come from? Some of it comes from chlorine-bearing molecules naturally present in clouds, but a lot of it, at least in the 1980s, was coming from us. In the 1930s, manufacturers began making refrigerator coolants, aerosol propellants, and cleaning solvents from chlorofluorocarbons, wonderful chlorine-bearing compounds that, to their delight, couldn't react with anything, catch on fire, or poison anybody—except the ozone, that is.

Not long after the discovery that ozone in Antarctica was suffering thanks to the millions of safe refrigerators humming away in kitchens across America, the Montreal Protocol phased out production of so-called CFCs, although the CFCs already released will continue to do damage for years. There's just one hitch in this story: bound up in a compound like CFC, chlorine is pretty much harmless. So what was jimmying the molecular locks, letting chlorine loose on defenseless ozone molecules?

Scientists went back to UV light. This radiation could break down the CFCs at high altitudes, after which the fragments circulated to lower reaches. In colder months, the freezing clouds in the stratosphere provide a sort of lab-table for these fragments to engage in ozone-depleting reactions.

One scientist, however, wasn't buying it. In 2001, Qing-Bin Lu, then at the University of Sherbrooke in Canada, published a paper linking ozone depletion to cosmic rays, the high-energy nuclei, electrons, and protons produced in galaxies far, far away. Examining ozone and cosmic-ray data from satellites, balloons, and ground stations for the years 1979–1992, Lu found a statistical correlation between cosmic ray intensity, related to the 11-year solar cycle, and global ozone levels. He also showed in the lab that when CFCs bound in a "cloud" (water vapor condensed on a metal rod at below-freezing temperatures) were bombarded with low-energy electrons, they were a million times more likely to let loose active chlorine. (Cosmic rays rarely make it through the atmosphere intact, but instead produce showers of other particles, such as low-energy electrons, when they collide with atmospheric molecules.)

In April 2009, Lu came out with even stronger support for his idea, or so it seemed. This time his study spanned the years 1980-2007. Again, he showed a correlation between cosmic ray intensity and mean global ozone levels, and between cosmic ray intensity and fluctuations in Antarctic ozone.

"These correlations mean that nearly 100 percent of the ozone loss over Antarctica must be driven by cosmic rays," he told Physics World, implying that UV was just not the culprit here. But other climate experts were very skeptical of Lu's attempt to explain ozone depletion by a completely new mechanism.

Maybe they shouldn't have been. An unexpected measurement presented at a conference in 2007 seems to leave ample room for Lu's cosmic ray explanation. Markus Rex, an atmospheric scientist at the Alfred Wegener Institute of Polar and Marine Research in Potsdam, Germany, got a shock when he measured how quickly dichlorine peroxide (Cl2O2) broke down into chlorine and oxygen when exposed to wavelengths of light available in the stratosphere, where the vicious cycle of ozone-depleting chemistry takes place. Dichlorine peroxide is a crucial intermediate in the cycle's second half, in which two chlorine monoxide molecules recombine to yield molecular oxygen and free chlorine.

Rex found that the UV-induced re-liberation of chlorine occurred an order of magnitude more slowly than previously thought. This seemed to pull the rug out from under the accepted mechanism for ozone depletion—if UV light was really the culprit for breaking down these molecules, then we'd have much more ozone today. So now we need a catalyst. Cosmic rays seem like they might do the trick. Has Lu solved the mystery?

Perhaps—but now the plot has thickened—Physical Review Letters, the journal that published Lu's two previous cosmic ray papers, has recently accepted a paper by Rolf Müller and Jens-Uwe Grooß that picks apart Lu's work. (Incidentally, Müller called Lu's correlation potentially "spurious" in a New Scientist article about the April 2009 paper.) In the abstract, the authors announce, "measurements of total ozone in Antarctica do not show a compact and significant correlation with cosmic ray activity."

Müller proceeds to go through Lu's research with a fine-toothed comb; sure enough, the comb snags on subtle snarls in argument and approach. He says that the connection between the solar cycle and global ozone levels is well known, and that a correlation has been observed in years (1960-1980) when "no substantial Antarctic O3 loss has been observed." So an argument for cosmic ray-induced depletion of polar ozone can't rest on correlations between the solar cycle and global ozone.

He then looks at Lu's second piece of evidence, the percentage variation of cosmic ray intensity and mean total ozone in the polar region from one October to the next in the period 1990 to 2007. Again, Müller thinks Lu's looking at the wrong indicator. Rather than looking at mean total ozone, which has confounding influences, he should have looked at the minimum of daily average total ozone in the region. When you do, Müller says, there's no correlation with cosmic ray intensity.

Finally, Müller looks at data from 1991, when Mount Pinatubo erupted, throwing up large amounts of sulfur that formed sulfate particles in the atmosphere. The surfaces of these particles are ideal for ozone-depleting chemical reactions to take place. Consequently, the data shows low ozone levels, but it also shows high cosmic ray intensity. So what was it, Mount Pinatubo or the cosmic rays?

I'm sort of fascinated that two completely contradictory pieces of research that examine the same dataset and reach entirely different conclusions could be published in the same journal within months of each other. Reading Müller's paper, I get a sense for how disturbingly subtle science can be. When it seems like a piece of research has all the answers, a few well-placed holes deflate the whole argument.

Whether Lu's explanation is right or wrong, chlorine is still a big problem for the ozone, so we won't go back on the Montreal Protocol. But if cosmic rays don't catalyze these crucial reactions, what does? The mystery persists.


Wednesday, November 18, 2009

Old School


A proton-antiproton collision at CERN, from 1982. (Image by CERN).

Good news for anyone who didn't have plans for this Friday night: the Large Hadron Collider might be starting up, possibly at 00:00 on Saturday, Geneva time. And you can monitor everything from the magnet temperatures to detector performance, thanks to this handy CERN portal put together by a reader of the online tech newspaper the Register to make the dense, yet disconnected jungle of CERN webpages easier for the average LHC fanboy or fangirl to navigate.

As for me, well, all the excitement over the world's most advanced machine makes me long for simpler times, when physicists turned to photographic plates, not giant particle detectors, to learn about the unseen world. It's somewhat surprising, but humans have been photographing the innards of the nucleus and elementary particles for nearly as long as they've been photographing themselves in corsets, silly hats, and handlebar mustaches.

Our story begins in France, where the curse of obscurity seems to have plagued the Niepce family mercilessly. Joseph Nicephore Niepce produced the first-ever photograph in the summer of 1826, but his pioneering work has been largely ignored, while Louis Daguerre, who announced his own photographic process more than a decade later, has been immortalized in the word "daguerreotype." Then, in 1867, the son of Niepce's first cousin, Abel Niepce de St. Victor, noticed that one of his photographic plates mysteriously darkened when stored near uranium salts, even though the plate had been wrapped in black paper. Niepce de St. Victor guessed that the uranium had somehow been shining. This guess relegated yet another Niepce to crushing obscurity; nearly thirty years later, the physicist Henri Becquerel discovered radioactivity in pretty much the same way.

These days, we capture images using CCDs, but old-fashioned photography was all about chemistry. A photographer would coat a plate of glass about two millimeters thick with a layer of emulsion. (An emulsion is really just a suspension, in this case tiny grains of silver bromide or silver iodide suspended in gelatin goop.) When the plate is exposed to light, minuscule specks of silver, just a few atoms in size, form in the grains. To retrieve the image, the photographer would bathe the plate in a developer that caused grains with silver specks to go completely silver. Then he or she would dissolve the unaffected grains with a second solution known as a fixer. Darker areas in the image formed where grains were exposed to more light.

To catch a particle [2], early 20th century physicists used basically the same techniques, except the emulsion contained about 10 times more silver bromide or silver iodide grains. As an alpha particle (two protons and two neutrons) flies through the plate, it passes through some of the grains, causing specks of silver to form that blacken the entire grain when developed and fixed. If the concentration of grains is low, the particle's trajectory comes out as a dotted line, while a higher concentration yields a striking solid line, as in the following image. This explosive image was created by hundreds of alpha particles flying out from a speck of radium placed on the photographic plate.


Occhialini and Powell, 1947. Printed in [1], p. 26.
"The alpha particles emerged from a speck of a radium salt which was allowed to fall on the surface of the plate. The more heavily ionising alpha particles give continuous tracks."

Particle collisions, caught the old-fashioned way, have a certain grittiness to them that's a far cry from the rainbow-toned computer graphics we're used to seeing from modern-day experiments. The following image shows what happened when a cosmic ray—a heavy nucleus in this case—barreled through a stack of emulsions.

Sardinian Expedition, 1953. Printed in [1], p. 65.
"Successive fragmentation of a heavy nucleus of the primary cosmic radiation."

In the first column, a third of the way down, you can barely make out a small splotch as the nucleus glanced off another particle, losing an alpha particle. In the second column, about halfway down, there's a slightly more dramatic splatter where the nucleus lost several protons and even a few mesons. In the third column, things really got ugly for our somewhat reduced cosmic ray; it collided again, this time losing all its nucleons, producing that quintessential high-energy physics image, the particle spray.

References:
1. Powell, C.F., P.H. Fowler, and D.H. Perkins. The Study of Elementary Particles by the Photographic Method: An Account of the Principal Techniques and Discoveries Illustrated by an Atlas of Photomicrographs. London: Pergamon, 1959.
2. Powell, C.F. and G.P.S. Occhialini. Nuclear Physics in Photographs: Tracks of Charged Particles in Photographic Emulsions. Oxford: Clarendon Press, 1947.

Special thanks to the Niels Bohr Library and Archives at the American Institute of Physics, where I found these delightful tomes.


Tuesday, November 17, 2009

Weird, tasty science



Looking for a creative twist to put on your Thanksgiving turkey this year? Why not cryosear and cryorender it?

The chefs of the experimental kitchen (read: nefarious secret lab) at Intellectual Ventures developed this technique to solve the age-old culinary quandary: how do you cook the fatty skin on a duck breast so that it's nice and crispy, without overcooking the meat itself?

The recipe is more Mythbusters than Julia Child: it involves pricking the duck's skin with a (hopefully unused) stainless steel dog hair brush, dehydrating the skin-enshrouded duck breast on a bed of salt, and pressing it between a satchel of metal pieces and a slab of dry ice. The idea is simple: the tiny holes in the skin help the fat cook faster, and the ice treatment prevents the meat from overcooking while the skin begins to bubble and crisp. Just don't walk into your local Bed, Bath, and Beyond and ask for a block of dry ice or a "muslin satchel filled with loose pieces of metal," which is what the recipe calls for.

Oh, and I forgot to mention—to get the duck meat to the right core temperature, you'll have to vacuum-seal it in plastic and cook it in a strictly temperature-controlled warm water bath. This is a method known as sous vide: it's French for "under vacuum," and joined the more commonly found phrases au jus, au gratin, and a la mode in the 1970s.

Physics has long been an essential part of cooking. Your oven is a hotbed of convection. We zap our food with microwaves to heat it. We "deep fry" turkeys with UV light. Wait, really? Yes, really.

And now, by cooking an egg vacuum-sealed in a polymer pack in a 145 degree Fahrenheit water bath for about 75 minutes, you can achieve both a runny yolk and a runny white, while your traditional "soft-boil" will get you the inferior totally cooked white and runny yolk every time. (Ovotransferrin, a protein in egg white, starts hardening at 144 degrees Fahrenheit; meanwhile, egg yolk coagulates at about 158 F.)

Why do you care? If you're an avant-garde chef trying to push the limits of culinary creativity, you might join renowned chef Jean-Georges Vongerichten in gushing, "There are no new fish coming out of the ocean and all the food groups still remain the same. So to provide innovative tastes and culinary experiences, chefs search for new techniques and technologies—like the PolyScience Immersion Circulator, which enables us to cook with an accuracy and precision never before possible."

The secret formula for the time it takes to perfectly soft-boil an egg. Who ever said that physics wasn't useful?

The PolyScience Immersion Circulator looks like it belongs in a niche tech catalog for low-temperature physicists (between the strain-gauge section and the fold-out advertisement for the latest cryostat, I imagine), keeps your water bath stable to within 0.09 degrees Fahrenheit of the desired temperature, and will set you back $1,069.00, something you might more readily afford if you had extra grant money lying around. If you do decide to shell out for the gadget, why not help test some of the sous vide recipes that Douglas Baldwin, a Ph.D. student in applied math at the University of Colorado, is trying to develop?

Deep fried fatty deliciousness, made possible by modern science. (Photo from the New York Times.)

On the other hand, if you're trying to deep-fry mayonnaise, you might try something off the processed-food shelf: xanthan gum. The New York Times describes it quite evocatively as "a slime fermented by the bacteria Xanthomonas campestris and then dried." Look for it on the ingredients label of your salad dressing. Like cornstarch, it thickens liquids into a gel, allowing chefs at Lower East Side restaurant WD-50 to deep fry mayonnaise and synthesize foie gras you can tie into a knot, satisfying appetites for innovation and gourmet food all at once.


Monday, November 16, 2009

Sagan sings



What started out as just another YouTube video may be competing with the bird-baguette-LHC urban legend for being the best way to get physics some attention from the public. In "A Glorious Dawn," composer John Boswell remixed clips from the vintage pop-science show Cosmos into a pretty darn catchy song. "A still more glorious dawn awaits—not a sunrise, but a galaxy rise, a morning with 400 billion suns, the rising of the Milky Way." Thanks to AutoTune, the software that gave Cher's voice that robotic sound in "Believe" and helped Saturday Night Live comedians record their rap hit "I'm on a Boat," Carl Sagan sings his words of physical/philosophical wisdom.

It turned out the video isn't just for the geeks—NPR gave it a nod on their music blog, Monitor Mix, and on November 9, the Telegraph reported that White Stripes frontman Jack White is releasing "A Glorious Dawn" on his label, Third Man Records. Great timing—Carl Sagan, who died in 1996, would have been 75 on that very day. (If you miss the great man, you can watch old episodes of "Cosmos" on Hulu.)

The video is part of a project called "Symphony of Science," a way, says Boswell, to "bring scientific knowledge and philosophy to the masses, in a novel way, through the medium of music." A follow-up, "We are all Connected," brings in Bill Nye (the science guy), Neil deGrasse Tyson (of the Hayden Planetarium), and Richard Feynman. Highlights: the rap-video-esque shots of Bill Nye (as Monitor Mix pointed out) and Richard Feynman's broad Long Island accent, still detectable through the AutoTune. Just for that, I think it's even better than "A Glorious Dawn."



Hilariously, Neil deGrasse Tyson did an entire NOVA ScienceNow episode on AutoTune, interviewing Andy Hildebrand, the electrical engineer who invented the now famous—or infamous—"pitch-correction" software. Hildebrand is a former Exxon engineer who used sound waves to detect oil reserves beneath the ocean floor. And now? Well, you can thank him for T-Pain. (Warning: the following video contains scenes of an astrophysicist in the shower.)




Friday, November 13, 2009

Nobu Toge: Machine Portraits

Nobu Toge's lens peers through the final magnets of the test accelerator. (Nobu Toge/KEK)

For physicist Nobu Toge, a typical day of work at the Japanese high-energy physics lab KEK might involve attending a few meetings, calibrating a just-installed piece of equipment, or writing a report on the research's progress. But in the midst of it all, Toge might also pull out his always-ready camera and snap a photo of a gleaming piece of machinery, or a pair of technicians in bunny suits readying a component for testing. At the end of the day, reports and spreadsheets laid to rest, Toge will add the photos to the thousands he's taken on the job over the last seven or eight years.

A high-energy physics laboratory might seem an unlikely muse for a photographer. But just a glance at a few of Toge's photos might convince you otherwise. His images of scientific equipment are explorations of shape, form, and color. When you peer through his lens, you might find yourself awed by a long corridor of magnets as if it were the Grand Canyon, or admiring a vacuum chamber the way you might gawk at a dew-studded rose.

This beautiful abstraction is a close-up of a device that monitors components during cool-down. (Nobu Toge/KEK)

Toge is also a master at capturing his colleagues at work—the concentration lining a physicist's face as he fine-tunes a beam position monitor, or a technician's bemused expression as he handles a piece of equipment that looks like it was cribbed from Dr. Frankenstein's laboratory. There's a gentle humor in these photos, as if to remind us that, while the equipment may look cold, alien, and otherworldly, the enterprise of science remains quintessentially human.

Workers gingerly raise up a superconducting cavity to prepare it for testing. (Nobu Toge/KEK)

Here's what Toge had to say about being a scientist on the other side of the camera's lens.

How did you become interested in physics?

I found physics to be fundamental, and as such, when I was young, I thought I could easily associate a life worth significance with being into it. Now, I can tell I was really green to feel that way, but that is how I felt. Before then I thought a bit about going into fusion research (in high school and college), and before then I was interested in electronics and astronomy (in mid high school).

As a large particle physics lab, KEK is worlds away from most workplaces. Is there anything that still doesn't seem "normal" to you after having worked there for years?

Maybe it is worlds away, but KEK cannot quite complete its work without the help or participation by the ordinary people, whether technical or organizational. However, there are some hardcore people who are at the lab day and night even during the weekend. They are so dedicated to the research, or to be more exact I suspect they just do not know what else to do. And they call the people other than themselves "those civilians." Of course, it is a joke, but I love them. (No, I am not one of them.)

KEK staff align magnets for the Accelerator Test Facility 2. The precision required is astounding. (Nobu Toge/KEK)

When did you first become interested in photography?

Some time ago when my children were very small and growing (they are still growing now, but not very small any more), I thought I should take their records. I used to think I always have a pretty good memory of everything, but I recognized that it wouldn't be that way for too long. This was about a bit over 10 years ago.

Besides family snapshots, I got into scenic photos, and I would drive my car for three hours to Nikko or North Ibaraki or Fukushima for shooting every other weekend or so. I'd leave home at midnight on Saturday so that I could catch the sunrise in the Sunday morning in Odashiro field in Nikko. In the first week in November, the first sunshine of the day comes onto a big, famous Japanese White Birch tree out there at just the perfect backlighting angle which I absolutely could not miss. Or in mid to late April some towns in mid-Fukushima would be practically completely buried under cherry blossoms, and so on.

"Kunimatsu district of Tsukuba city, known as Tsukuba Ume Garden. This area has a number of cherry blossom trees which bloom in mid-April." (Nobu Toge/KEK)

I inherited a medium format camera (Mamiya RZ67) from my father, and it was very heavy in a backpack but I loved it, since it would give an outstanding image when I got the framing and the focus and the exposure right, which would simultaneously happen perhaps once or 0.5 times per roll of 120 film.

Toge's camera captures an early spring morning near Mount Tsukuba. (Nobu Toge/KEK)

However, then the work at KEK became much busier and I could no longer feel energetic enough to continue doing this weekend shooting tour. Out of frustration, I started looking for something to shoot and I noticed that there are many hardware pieces and the beamlines at the lab which are aesthetically very interesting. Almost all of them are placed inside buildings so I do not need to worry about the weather or the sunrise time etc. It turned out to be a perfect diversion on the weekends since I did not need to bother my colleagues there, either (except for those "non civilian" friends). This is about seven or eight years ago.

A superconducting accelerator cavity undergoing thermal tests. (Nobu Toge/KEK)

What do the other physicists think of your photography?

At first I was shooting at the lab only in the weekend, so very few people knew what I was doing. Then slowly I got more nerve and started shooting while they worked on weekdays, like during installation of major hardware equipment and so on.

By now, all in our group know that I am doing this and they have become pretty good at focusing on their work even while I am shooting, i.e. they now know how to ignore my existence with a camera, I think. And that is very good in terms of our not interrupting each other at a wrong time, and for me that is good for capturing their natural expressions.

Testing materials is essential to building great accelerators. This sample broke during a low-temperature strength test. (Nobu Toge/KEK)

Now, every other month or so some of them come to me and ask if they can use my photos for their papers or presentations. That is when I see that they like at least some of my photos, and of course, they can use any such photos 100 percent of the time, since these are really theirs.

One of my colleagues told me that my series of assembly scenes of our cryomodule (horizontal cryostat) in August 2006 helped him grasp the big picture of the process. And he is one of the guys in charge of that entire assembly process who are supposed to have created that big picture. And I thought I was shooting photos of the details rather than a big picture. I felt that this slight mutual mismatch of perception was both interesting and educational.

This "yellow submarine" is a cryomodule: inside are superconducting cavities, wrapped in insulation and cooled with liquid helium. (Nobu Toge/KEK)

It seems like it would be difficult to take compelling photographs of scientific equipment, but you do it so easily. Is that a skill you've had to learn over years of photographing KEK projects?

First off, scientific equipment is not boring. They have a reason to be there, to do some complicated things that not many others would do, and their existence is through someone's hard work, sometimes over years, and from that perspective there is always a view angle (or sometimes more) that they appear really precious, strong and beautiful. It is up to the person with a camera to know it or uncover it.

At KEK, even the most humble object is meticulously designed. This is a prototype for a grinder for delicately polishing uneven inner surfaces of accelerator cavities; under Toge's lens, it becomes a work of art. (Nobu Toge/KEK)

In a way this issue is similar to the scenic, or it is perhaps even more similar to portraiture. In practice, however, while shooting, I do not have time to think through these things really. I just keep taking as many shots as possible, while moving around in as many as possible different ways, until I think I "got it". Then, I continue shooting a bit more, just in case I get even better ones by chance, and after a few hundred shots finally I give it up. And usually it is the last few shots that turn out to be more or less OK.

At the Accelerator Test Facility 2, a line of quadrupole magnets, which focus the electron beam to the size of a hair, extends into the distance. (Nobu Toge/KEK)


Does photographing KEK and the projects you're working on affect the way you look at your research?

Through the lenses, I think I have become more appreciative of people's efforts in building these many pieces of equipment and have become more caring for the colleagues who have been doing all these. I do not think one really has to take this many pictures to learn such a basic thing, but that is what happened to me.

Staff at the Superconducting RF Test Facility take a break from installing the waveguide system. (Nobu Toge/KEK)

Nobu Toge is a physicist at KEK, a high-energy physics lab in Tsukuba, Japan. Since the 1980s he has worked on machine design, commissioning, and operation of particle accelerators ranging from the SLAC Linear Collider in northern California to KEK's electron-positron collider. He currently works on the Superconducting RF Test Facility and the Accelerator Test Facility, technology testbeds for the proposed next-generation particle accelerator, the International Linear Collider.


Thursday, November 12, 2009

Best physics inventions of 2009

What a real teleporter looks like. (Photo from the Joint Quantum Institute.)

TIME magazine has announced the 50 best inventions of 2009. NASA's Ares family of rockets was a shoo-in for best invention, given the recent launch of Ares 1-X, the family's test rocket. I'll give them that; NASA could certainly use the cheerleading.

But I was surprised to see "Teleportation" sixth on the list. When did I miss this? Has everyone else been teleporting to work while I've been trudging in the rain? How can I get my hands on a teleporter?

When something looks fishy, it's probably the usual suspect—quantum mechanics. What we're talking about here is teleporting information, in this case from one atom to another one meter away. Anton Zeilinger, reality-questioner extraordinaire, has accomplished the feat with photons, first in the lab, then across the Danube, then between two Canary Islands. But photons may seem a bit too ethereal for TIME magazine; atoms, now there's a solid piece of matter for ya. The feat with atoms was accomplished at the Joint Quantum Institute at the University of Maryland. What did they send? A "quantum state." It may not sound like much, but scientists can encode information into such a state; you could teleport quantum bits across vast distances, as long as they don't get corrupted along the way. Sounds rather handy now, doesn't it?

How did the JQI folks do it? The two atoms in question are quantum entangled, so that a measurement on one instantly determines the state of the other. They used two ytterbium ions, isolated in high vacuum cages. With a burst of microwaves, they pushed one ion into a combination of states, then stimulated both ions so that they emitted photons that were then routed through beam-splitters and into detectors. When the photons were detected in just the right way, the two ions were left entangled.

Once the researchers saw signs of entanglement, they measured the first ion, collapsing its mix of states into one definite state and setting the state of the second ion. The measurement result also tells them exactly which microwave burst will recreate the original information (the mix of states prepared in the first ion) in the second ion, transferring the information from one atom to the other.
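If you want to see the logic without the ions and lasers, a few lines of numpy will do. What follows is a sketch of the textbook one-qubit teleportation protocol, not a simulation of the JQI ion-photon experiment, but the moral is the same: shared entanglement plus two measured classical bits let a distant qubit take on an unknown state.

import numpy as np

# Single-qubit gates
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def kron(*ops):
    # Tensor product of a list of operators
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def cnot(c, t, n=3):
    # CNOT on n qubits with control c and target t (qubit 0 = most significant bit)
    dim = 2 ** n
    U = np.zeros((dim, dim), dtype=complex)
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[c] == 1:
            bits[t] ^= 1
        j = sum(b << (n - 1 - k) for k, b in enumerate(bits))
        U[j, i] = 1
    return U

# The unknown state to teleport lives on qubit 0: alpha|0> + beta|1>
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta], dtype=complex)

# Qubits 1 and 2 share the entangled Bell pair (|00> + |11>)/sqrt(2)
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)

# Sender: CNOT from qubit 0 onto qubit 1, then a Hadamard on qubit 0
state = cnot(0, 1) @ state
state = kron(H, I2, I2) @ state

# Measure qubits 0 and 1 (pick an outcome at random, as nature does)
probs = np.abs(state) ** 2
outcome = np.random.choice(8, p=probs / probs.sum())
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1

# Keep only the branch consistent with the measurement and renormalize
mask = np.array([((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)])
state = np.where(mask, state, 0)
state /= np.linalg.norm(state)

# Receiver: the two classical bits say which correction (X and/or Z) to apply
corr = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
state = kron(I2, I2, corr) @ state

# Qubit 2 now carries alpha|0> + beta|1>
recovered = np.array([state[m0 * 4 + m1 * 2], state[m0 * 4 + m1 * 2 + 1]])
print("original :", psi)
print("recovered:", recovered)

Run it a few times: whichever outcome the measurement happens to give, the recovered amplitudes match the originals.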

Hmmm. In an article in January on the experiment, TIME admitted that it wasn't exactly show-stopping. But it's nevertheless pretty cool, and could allow the transmission of a bit that's not simply a 0 or a 1, as in modern computing, but a mix of the two. That means, down the line, speedier computers. As for "Beam me up, Scotty," however:

The next step for the JQI team is to improve the photons' precision and the rate of communication between the particles. What we won't see soon — or ever, according to Monroe — is a contraption that can teleport humans from one point to another. Sorry, Captain Kirk, but beaming up is a pleasure strictly reserved for atoms. "There's way too many atoms," says Monroe. "At the other end of the transporter, you need to have some blob of atoms that represents Captain Kirk but has no information in it. I mean, what would that look like?"

A cloud for 2009: its Latin name means "turbulent undulation." (Photo from Wired.)

Other noteworthy, physicsy "inventions" of 2009 include a never-before-seen cloud named undulatus asperatus; cloud geeks are fighting for it to be included in the prestigious International Cloud Atlas published by the World Meteorological Organization. I guess Mother Nature wins the invention award for that one—see this page for more weird clouds.

This mouse floats on air. (Photo by NASA/JPL.)
There's also levitating mice—scientists at NASA's Jet Propulsion Laboratory achieved the feat with superconducting magnets that repelled the water inside the baby mice, apparently to study how long periods of time in zero gravity affect the body. And finally, my personal favorite, the Sky King. This paper airplane, folded from a single sheet of paper by Takuo Toda, chairman of the Japanese Origami Airplane Association, holds the record for longest flight at 27.9 seconds.




Wednesday, November 11, 2009

Call me a Luddite, but...


Galileo's famous experiment reincarnated in a Wolfram Demonstration.
Ever since discovering them in my first year of college, I've been a frequent visitor to Stephen Wolfram's MathWorld and ScienceWorld. Those websites were great for when, far from my textbook, I needed to remember the value of a physical constant or what the heck a Green's function does. Occasionally, the Mathematica Integrator helped smooth out a thorny integral. But they were more references than textbooks, better for jogging my memory than teaching me something new.

So I was pretty excited to see a new Wolfram webpage: Wolfram Demonstrations. After downloading (for free) Mathematica Player, you can open up any number of demonstrations in subjects ranging from pure math to biology to high school physics. I couldn't resist; I sort of love Java applets that illustrate physics concepts, and the Wolfram Demonstrations page was just so colorful and full. Galileo's gravity experiments? Archimedes' principle? Not even deterred by the required online form, I signed up for my free copy of Mathematica Player and happily downloaded the demonstration of Galileo's fabled experiment of dropping balls from the top of the Tower of Pisa.

The applet showed a computer-graphic-cute Tower of Pisa, tilted over a grassy plain on a sunny day. A stick figure leaning out a top story window was holding two balls, whose masses I could change by moving a slider. I pressed play. The two balls started to fall, accelerating rather jerkily until their black pixels met the green pixels of grass at the bottom of the tower at—big surprise—exactly the same time.

It was then that I realized there was something fundamentally wrong with the idea of using a computer program to show people this simple concept: that the acceleration of a body due to gravity does not depend on its mass. First of all, it's very intuitive to us that heavier things should fall faster. It just seems like it should be true, no matter how many times you've been told it isn't, right? Hold a bowling ball and a golf ball at the same height, and it just seems impossible that the golf ball will hit the ground at the same time. That's why replicating the experiment (it's debatable whether Galileo actually did it in the first place) is so much fun. Physics is surprising and counterintuitive, so seeing it at work in nature is very compelling. Only experiments can make the connection between the acceleration calculated from the equation for the gravitational attraction between two objects and an actual falling object.
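If you'd rather see the point in numbers than in falling pixels, the whole argument fits in a few lines of Python (the test masses below are arbitrary):

# Newton's gravity plus Newton's second law: the falling object's own mass cancels.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of the Earth, kg
R_earth = 6.371e6    # radius of the Earth, m

for m in (0.05, 7.0, 1000.0):          # a golf ball, a bowling ball, a small car (kg)
    F = G * M_earth * m / R_earth**2   # gravitational force on the object
    a = F / m                          # acceleration from F = ma
    print(f"mass {m:7.2f} kg -> acceleration {a:.2f} m/s^2")

Every line prints the same 9.8-and-change m/s^2, because the m in F = GMm/r^2 cancels against the m in F = ma; that cancellation, not a cartoon tower, is the actual content of the demo.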

Unfortunately, there is nothing compelling about a computer-generated movie that shows this. You're not going to convince any kids with this demonstration. Far more compelling would be dropping, say, a computer mouse and a CPU tower from a high window.


A Wolfram demo takes the excitement out of Archimedes' classic experiment.

Still, I'm a bit of a fan of Stephen Wolfram (anyone who tried to write a particle physics textbook at the age of twelve is sort of hard to hate), and I wanted to like this new project, so I opened up a demo that purported to reenact the experiment Archimedes carried out as a result of his bathtub "Eureka!" moment. This is a great story from the birth of science—Archimedes was charged with figuring out whether a crown the king had commissioned for a temple was really pure gold. By comparing how much water the crown displaced to how much water a lump of gold of the same mass displaced, he showed that the crown was not pure gold: it displaced a greater volume of water than the equivalent mass of gold, meaning it was bigger than it should have been. The gold had been cut with a less dense metal (silver) to make the masses match. Too bad for the crown-maker.

A less messy version of the experiment is to weigh the crown in water and air and use Archimedes' principle to figure out the crown's density from the difference:

Archimedes’ Principle states that the buoyancy support force is exactly equal to the weight of the water displaced by the crown, that is, it is equal to the weight of a volume of water equal to the volume of the crown.
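Granting that statement, the crown test is a two-line calculation. Here's a quick Python sketch of my own (the weights are made-up numbers, not anything taken from the demo):

RHO_WATER = 1000.0   # kg/m^3
RHO_GOLD = 19300.0   # kg/m^3, for comparison

def density(weight_air_N, weight_water_N):
    # The buoyant force equals the weight of displaced water: rho_water * V * g.
    buoyant = weight_air_N - weight_water_N
    # Dividing the weight in air by that buoyant force gives rho_crown / rho_water.
    return RHO_WATER * weight_air_N / buoyant

w_air, w_water = 9.81, 9.17   # a 1 kg "crown" that sheds 0.64 N when submerged
print(f"crown: {density(w_air, w_water):.0f} kg/m^3, pure gold: {RHO_GOLD:.0f} kg/m^3")

A result well short of 19,300 kg/m^3 means the crown takes up too much room for its mass, exactly the tell Archimedes was after.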

The Wolfram demo of this experiment is somewhat interesting visually but not enlightening in the slightest. (I even suspect that the way the author has reworked the experiment makes the whole thing trivial, but I won't get into that here, though it raises the question of how trustworthy these are as teaching resources.) Now here's an opportunity for a physicist-in-training to splash water around and relive quite a dramatic tale from Greek history. A sacred temple, a golden treasure, a treasonous betrayal—this physics experiment has it all! Seeing numbers change on a screen doesn't really do it for me, and I already know what I'm supposed to be looking for.


A Wolfram demonstration shows the compression wave in air created by a tuning fork.

True, these were only two of hundreds of demos available. A tuning fork demo wasn't so bad—it lets you watch air particles rarefy and compress. Compression waves are tough to visualize, and this one makes them explicit. But you can't change the loudness of the tuning fork, or its frequency. And what about harmonics? Again, it would be more instructive to have a real tuning fork you could bang and listen to and feel.
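If you want to see what the demo is drawing, the underlying model is just particles oscillating back and forth along the direction the wave travels. Here's a small Python sketch of my own, with made-up numbers for amplitude and frequency (matplotlib assumed):

import numpy as np
import matplotlib.pyplot as plt

A = 0.2                     # displacement amplitude (arbitrary units)
k = 2 * np.pi / 5.0         # wavenumber for a wavelength of 5 units
omega = 2 * np.pi * 1.0     # angular frequency for 1 Hz (a real fork might ring at 440 Hz)

x0 = np.linspace(0, 20, 300)     # rest positions of the air "particles"
for t in (0.0, 0.25, 0.5):       # three snapshots in time
    # Longitudinal wave: each particle is displaced along x by A*sin(k*x0 - omega*t).
    x = x0 + A * np.sin(k * x0 - omega * t)
    plt.plot(x, np.full_like(x, t), ".", markersize=2)

plt.xlabel("position (dots bunch up where the air is compressed)")
plt.ylabel("snapshot time (s)")
plt.show()

Where the dots crowd together is a compression; where they thin out, a rarefaction. Changing A and omega gives you the loudness and pitch knobs the demo lacks, though, again, none of this beats holding the real thing.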

But maybe that's because I've been reading High Tech Heretic, an indictment of our enthusiasm for allotting greater and greater portions of school budgets to buying computers and hooking up high-speed internet. Believe it or not, the author isn't living in a cave, or in Amish country. Clifford Stoll, an astronomer who makes topological drinkware in his spare time, has been "digital" since the 70s. While working at Lawrence Berkeley National Lab in the 80s, he detected and relentlessly tracked down a KGB-backed hacker who was popping in and out of networked computers on military bases around the country.

Wait, and this guy doesn't want computers in the classroom? Is this a grudge?

My knee-jerk reaction is to think that more computers in schools means greater, easier access to information. But Stoll's book points out that, first of all, schools' budgets are limited; is it really worth spending tens of thousands of dollars on equipment that's obsolete in two years or less, when that money could be invested in longer-lasting resources, like better libraries and talented teachers? Many proponents of computers in the classroom claim that computers can save costs by replacing laboratories, largely thanks to programs that demonstrate chemical, biological, or physical concepts—Wolfram demonstrations, for example. No toxic chemicals, no cow eyes, no costly beakers and scales—students can instead explore scientific concepts in a virtual environment. Right?

Stoll tells this hilarious story, about the daughter of an MIT sociology professor who favors bringing more computers into the classroom:

What about science? Well, Professor Turkle's seven-year-old daughter was curious about magnets. The girl had a computer program about magnets but didn't really understand them. Then Professor Turkle bought a magnet for her daughter. "Once she was holding it in her hands, she got it."

He goes on to describe a simulated chemistry lab:

School chemistry software comes complete with pretty images of thermometers, pipettes, and condensers. To simulate a titration, you type in commands, use a mouse to drag a simulated beaker across the screen, and then watch the effect on a simulated pH meter. Sure looks spiffy, but it ain't chemistry. It's simulated chemistry.

The demonstrations I found on Wolfram's site were disappointing for the same reason. It's hard to be interested in physics in a virtual world. To build these demonstrations, people took rules already extracted from nature and used them to construct a simple, boring, perfect version of nature. The more kids learn physics from computers, the harder it will be for them to understand that physics doesn't just exist in computer simulations and in math. It governs the world of cars and people, flowers and ants, towers and bathtubs. No computer can teach that.

Read the rest of the post . . .

Tuesday, November 10, 2009

A modest mathematician, a not-so-modest conjecture

Grigory Perelman didn't want the money or the fame—he loved math for math's sake.
It's not often that an unemployed, middle-aged man living with his mother in the suburbs gets a write-up in the Wall Street Journal. But Grigory Perelman, a forty-something Russian mathematician who shares an apartment in the St. Petersburg suburbs with his mother, could have had a Fields Medal and a tenured professorship at any of the top mathematics departments in the world. Instead, he turned down the notoriety for a quiet life.

Perelman's unclaimed Fields Medal traces back to a question posed in 1904 by Henri Poincaré, the celebrated French mathematician. The famous question, known as the Poincaré Conjecture, can be worded succinctly in mathematical parlance like so: "every simply-connected closed three-manifold is homeomorphic to the three-sphere." (Here's the official description from the Clay Institute—more on them later.)

The "three-sphere" in question is the skin of a four-dimensional sphere, just as a two-sphere is the curved surface of a three-dimensional sphere. Since I can't for the life of me image what a four-dimensional sphere looks like —though the language of mathematics, useful thing that it is, allows folks to consider and play with these non-existent objects—I find it easier to consider the problem in dimensions I can visualize.

People who care about problems like these are called topologists; I think of them as the type of mathematicians who never tire of the joys of playdoh. Given a doughnut of playdoh, they can mold it into a coffee mug or a teacup without having to tear it or stick pieces together. Similarly, they can mold a sphere into a football, bend it into a bowl, or smash it into a plate. But turning the plate into a doughnut requires tearing a hole in the middle, and doing the opposite requires sticking pieces of the doughnut together to close that hole.

These shapes are not homeomorphic; they are topologically distinct.

What I'm trying to get at with all this nostalgic talk of playdoh is an intuitive understanding of what "homeomorphic" means. The coffee mug and doughnut are essentially the same form, so they are homeomorphic, as are the bowl and the sphere. But the doughnut can never be molded into the sphere, so these are two fundamentally different forms. Next time you come across some playdoh (I keep a little pot next to my computer in case I'm suddenly inspired to discover new topological truths), try it yourself.

What about "simply connected?" Imagine looping a string around the middle of a sphere. What happens when you pull one end of the string, tightening the loop? The loop will slip away from the middle, enclosing a smaller and smaller circumference, until it becomes a point on the surface. The same happens when you loop the string around a football. But if you loop the string through the ring of a doughnut, there's no way to shrink it down to a point without cutting it and tying it back together. So if a loop on a surface can shrink down to a single point without breaking, the shape is "simply connected."

You can smoothly shrink the rubber band around the sphere to a point, but you can't shrink the rubber band on the doughnut without breaking it.

In the dimensions we can visualize, Poincaré's conjecture says that any closed surface on which you can shrink every loop down to a point—a football, bowl, or plate—must really be equivalent to a sphere. What Perelman did was rigorously prove that this is true one dimension up, where the magic shape is the three-dimensional skin of a four-dimensional sphere.
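In the notation mathematicians actually use (again, standard textbook phrasing rather than a quote from Perelman), "simply connected" means the fundamental group \pi_1 is trivial, and the conjecture reads:

\text{If } M \text{ is a closed 3-manifold with } \pi_1(M) = 1, \text{ then } M \text{ is homeomorphic to } S^3.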

On November 11, 2002, Perelman posted his 39-page proof to the arXiv preprint server, without a care for publication in a peer-reviewed journal or the possibility that someone might plagiarize it. You can read "The entropy formula for the Ricci flow and its geometric applications" for yourself; Ricci flow is the mechanism Perelman used to smooth out bumps in flaw-ridden three-dimensional surfaces, revealing them to be equivalent to the smooth, curved three-sphere. The proof won him the Fields Medal (many think of it as math's Nobel prize) in 2006. The Clay Institute, which named the Poincaré Conjecture one of its seven "millennium problems," has been working to turn an explication of Perelman's proof into a published work so he can be eligible for the million-dollar prize. (The jury's still out on whether Perelman will, if offered, take it.) Meanwhile, mathematics professors from all over the world aggressively tried to recruit him to their departments.

Perelman refused all offers. In the Wall Street Journal article, Masha Gessen, who has written a book about Perelman, tries to psychoanalyze both Perelman's successful proof and his refusal to accept the Fields Medal as artifacts of his upbringing in Russia's mathematical counterculture. This was the world outside of institutionalized Soviet math, the stables of Russian mathematicians preened, nurtured, and controlled by the state.

Whether because of religion, ethnicity, or political leanings, some mathematicians could never join this institution, but that didn't stop them from doing great work. Outside of state-run research towns, mathematics was pursued as a hobby or an art more akin to poetry than engineering, Gessen writes. It was a solo enterprise, independent of academia, teaching, publishing, or professional ambitions. Although Perelman spent time in the US as a post-doc in the early 90s, those mundane aspects of American math drove him back to Russia. There, in the relative solitude—he did have colleagues, and his proof rests heavily on previous work—familiar to Russian mathematicians on the outside, he worked on his proof. And he seems content to stay on the outside.

Read the rest of the post . . .

Monday, November 09, 2009

Particle physics pop-up


Nerdy science books enter a new dimension: the third. (Photo by Fons Rademakers/CERN)

Stumped by what to get for that picky particle physics buff on your Hanukkah list? You're in luck. CERN is about to debut a technolust-inspiring souvenir that's a science lesson and a work of art all in one: a pop-up book of the ATLAS detector. Folded between its covers are Geneva, both above ground and 100 meters below, the Big Bang, and the complex architecture of ATLAS.

The ATLAS detector at CERN is a vast cathedral of electronics, wires, and materials—from semiconductor to liquid argon to scintillating plastic—enshrining the spot where two protons collide. It's not a single detector, but a series of detectors, packed around the interaction point Russian-doll style. Together they'll capture every last flicker and trace of the particles hurled out from the collision and spit out this information as hard data—about 27 CDs' worth per minute.
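Taking that figure at face value, the conversion (my own rough arithmetic, not an official ATLAS spec) comes out to

27 \times 700\ \mathrm{MB/min} \approx 19\ \mathrm{GB/min} \approx 300\ \mathrm{MB/s}.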

Build your own ATLAS detector! (Photo by Papadakis Publisher.)

Anton Radevsky, a paper engineer whose pop-up oeuvre includes Modern Architecture and the Wild West, reproduced the innards of ATLAS in paper panels so intricately patterned they look like stained glass windows. You can unpack and fold ATLAS's components yourself and slot them carefully in place around the interaction point—a pleasure, surely, for accelerator physicist wannabes.



As ATLAS physicist Pippa Wells quips in the above video posted on YouTube, the feat takes about a minute, but ATLAS was 15 years in the making, from concept to commissioning. And while the book only requires a bit of uncomplicated folding, the actual construction required lowering ATLAS piece by piece from the surface down through two shafts to the tunnel floor, 100 meters below ground, and then assembling it. Here's the whole process condensed down to a minute of action:



As CERN readies itself for the magic moment when protons circulate in the tunnels and collide, tensions are high and excitement is building. It's taken more than a year for the thousands of people involved in the project to return the LHC to the point it was at last September, and the media and public seem skeptical that it will all come through. The LHC seems to be all about bigness and grandeur—it's 27 kilometers in circumference, and a single bit of ATLAS weighed 250 tons—but it's incredibly delicate and vulnerable. Last Thursday CERN made news when a bird snacking on a chunk of baguette dropped it into a high voltage component, causing a power cut. Cooled magnets threatened to warm up; the press cackled gleefully.

The accelerator is due to fire up later this month, and everyone's thinking the same thing: will it work? Bill Bryson, in a refreshingly down-to-earth piece of writing chronicling his recent visit to the lab, has it from the mouth of the LHC's head of operations: yes, without a doubt.

While you wait, though, better get on with that Hanukkah shopping: you can preorder "Journey to the Heart of Matter" (not yet available on Amazon) at the website of publisher Papadakis.


Read the rest of the post . . .

Friday, November 06, 2009

Heaven on Google Earth

Yesterday I talked about using Google Earth to get a sense for the grandeur of the huge, landscape-sized machines of experimental particle physics. But Google Earth is also perfect for touring the holy sites of the other big science, astronomy, whether you want to check out the world's biggest telescopes or explore the stars.

First stop: in the "Fly To" bar, type in "Very Large Array." Before you know it you'll be descending on a dusty, desolate patch of New Mexico that's home to 27 telescopes, each laden with a 25-meter-wide dish. The VLA is part of the National Radio Astronomy Observatory and "sees" the universe in radio waves just as we see the world in visible light, allowing astronomers to study anything from the Cosmic Microwave Background to stellar corpses known as pulsars. Some of the data collected by these telescopes has even found its way into Google Sky.
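If you'd rather script the tour than type into the "Fly To" bar, Google Earth will also open a hand-rolled KML file. Here's a minimal Python sketch of my own (the VLA coordinates are approximate, quoted from memory rather than pulled from Google's data):

# Write a bare-bones KML placemark; KML wants longitude,latitude,altitude.
vla_kml = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Very Large Array (approximate)</name>
    <Point>
      <coordinates>-107.618,34.079,0</coordinates>
    </Point>
  </Placemark>
</kml>
"""

with open("vla.kml", "w") as f:
    f.write(vla_kml)
print("Wrote vla.kml; open it in Google Earth to fly straight to the dishes.")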

The Very Large Array in New Mexico.

Apparently the Google Maps truck barreled down at least one dusty road in the complex; enter one of the photo-bubbles and look around. You're standing at the edge of the array with one of the giant listeners looming over you, dish cocked to receive a message you can only guess at. Pretty impressive.

One of the Very Large Array's telescopes, up close and personal.

Next, click the "Add Content" button on the Places bar. This will open up a window in the bottom half of Google Earth that lets you search for extra material. Search for "Chandra X-ray Observatory," download the file that comes up, and open it in Google Earth. It will become a folder under Places. Double click, and you'll fly over to the air above Cape Canaveral, where you'll see the purple-winged (actually, they're solar panels) Chandra hovering.

The Chandra X-Ray Observatory, hovering over the Kennedy Space Center.
The scale isn't quite realistic—Chandra's elliptical orbit takes her to an altitude that's about a third of the way to the moon, and she's not always above Florida—but it's convenient for zooming down onto the Kennedy Space Center just below her and checking out the launch pad.

Now it's time to head down south. In the browser window, go to the Pierre Auger Observatory's Google Earth page, where you can download a model of the cosmic-ray observatory and open it in Google Earth. Spanning roughly 3,000 square kilometers of the Argentine Pampas, Pierre Auger uses 1600 water tanks to catch secondary particles from cosmic-ray showers. The Google Earth package is wonderful because you can really get a sense for the vastness of the enterprise, and how amazing it must be to stand on that desolate plain with the Andes looming in the distance.

The 1600 cosmic-ray detectors of the Pierre Auger Observatory in Argentina.

Tropical climes are next. Fly To Mauna Kea on the big island of Hawaii. Zoom in a bit and you'll likely see a cluster of enormous white domes: the Mauna Kea Observatory, home to the northern telescope of Gemini (the other is on the summit of Cerro Pachón in Chile) and the two telescopes of the Keck Observatory. In your "Layers" menu, turn on 3D Buildings for a treat—the twin domes of Keck in 3D.

The Keck Observatory's twin telescopes, on Mauna Kea, in Hawaii.

Finally, "Fly To" Canary Islands and watch Chandra whip by as you zoom across North America to land in a remote, northeast corner of the Atlantic. Once you're there, "Fly To" Roque de los Muchachos, Canary Islands. When you land in a deep ravine covered in plant life, zoom out a bit. You'll spot a white dome in the upper left of your screen. Go to investigate—it's the Roque de los Muchachos Observatory, home to an impressive suite of telescopes, including the world's biggest optical eye on the sky, the Gran Canarias Telescope.

The telescopes of the Roque de los Muchachos Observatory on La Palma, Canary Islands.

Plenty of the telescopes have been modeled in 3D, so make sure to turn on 3D Buildings. This shot of the Gran Telescopio Canarias is as pretty as a painting.

The Gran Telescopio Canarias on its perch overlooking the Atlantic.

Once you've had your fill of astronomy on earth, go to Google Sky, Moon, and Mars to do your own exploring.

Read the rest of the post . . .