Friday, January 29, 2010

Hubble 3D!

If you haven't heard yet, the IMAX film crew who put together Space Shuttle 3D and Deep Sea 3D have a new film coming out called Hubble 3D. Le trailer:



Reports say that there will be some never-before-seen images from Hubble at the end of the movie (I'm drooling already), but most of the film is a documentary about the space shuttle crew of STS-125 going up to repair and upgrade Hubble last May. The trailer also makes it sound sort of scary (it is always scary to strap human beings to a bomb and then watch them try to repair a billion-dollar instrument 350 miles above Earth), but of course we already know that the mission was a terrific success: Hubble works wonderfully and the crew made it back safely.

I was lucky enough to catch a preview screening of about 15 minutes of footage from the film, and LIKE WOW. I will definitely be going back to catch the whole thing. Seeing the shuttle launch in IMAX 3D was so surprisingly stunning. I got teary. And then seeing Hubble, this magnificent instrument, laid out on the operating table, looking vulnerable, graceful and powerful all at the same time, was really inspiring.

Also fantastic (and I'm sorry this won't be included with all screenings of the film) was mission specialist on STS-125 Michael Massimino, who came to answer questions and discuss his experience with Hubble. He was such a likable guy, and opened right up to the audience about how emotional the experience was, and how the film brings back so many memories. He and the crew spent about two and a half years in intense training for the mission, so they became, as Massimino said, like a family.

The group was able to bring an IMAX camera and 8 minutes of film into space (they also had non-IMAX video from helmet cameras and satellite feed). I was wondering the whole time what it must have cost to bring the equipment with them. If you've seen IMAX movies like Everest, you wonder how they managed to haul all that gear along on an already difficult journey. According to the IMAX website, a full camera package weighs 650 pounds! Now I've found varying numbers, but it seems that sending something to space costs about $5,000 per pound. So the 650-pound IMAX package would cost around $3.25 million to launch. (I cannot find the exact details via IMAX or NASA, so I may be completely wrong here, but I imagine it was not cheap to send this stuff into space; otherwise I'm sure they would have included far more than 8 minutes of film.)
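For what it's worth, the back-of-the-envelope math is easy to check. Both numbers below are the rough figures quoted above, not official NASA or IMAX prices:

```python
# Rough launch-cost estimate using the approximate figures from the post.
COST_PER_POUND = 5_000   # dollars per pound to orbit (ballpark, varies by source)
PACKAGE_WEIGHT = 650     # pounds, per the IMAX website

launch_cost = COST_PER_POUND * PACKAGE_WEIGHT
print(f"Estimated launch cost: ${launch_cost:,}")  # Estimated launch cost: $3,250,000
```

Change either input and the estimate swings by millions, which is the real point: at any plausible price per pound, 8 minutes of film was a serious investment.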

Was it worth it? For people who dream of taking a space flight and will never actually get to, the IMAX film is a priceless opportunity (well, not totally priceless. But $15 is definitely a bargain). For NASA, well, no one should underestimate the importance of advertising your product, even (or especially) when the people funding it have so many other things to consider. Massimino talked briefly about the NASA budget situation. He said he's always felt support for NASA "from the President, Congress, and the public." The Hubble 3D experience may reinvigorate public support not only of telescopes like Hubble (and its replacement, the James Webb telescope) but the space shuttle as well.

Some people never lost their support for NASA, but in tough economic times, it frequently ends up on the chopping block. The agency does do a great deal of work that leads to technological and scientific progress that we can see in our everyday lives, but it is also a national symbol. It represents the pursuit of the unknown, the importance of challenging ourselves, and of chasing our dreams. And Hubble 3D shows just how magnificent those things can be. Personally I think it's a fantastic way to share the experience of a space shuttle flight with the people who pay for it, and the people who dream about it.

Read the rest of the post . . .

Thursday, January 28, 2010

Part 1 of "Prediction is difficult . . .

. . . especially about the future." Niels Bohr

With all due respect to Professor Bohr, I think some things are easy to predict. Take the latest stab at the air car. For at least a century, futurists have been predicting that we'd be flying to work eventually. Check out the video below to see NASA's latest air car idea. I feel like it's pretty easy to predict if and when these sorts of things will be commonplace.



I'm going to try my hand at futurism for a bit. I plan to throw some cold water on a few predictions, and highlight a few others that I think are feasible and likely. Because there are so many things to talk about, I thought I'd make this a semi-regular column. Today's topic: NASA's Puffin Air Car and the Holodeck - Future Tech or Dead End?

Flying Cars - You might prefer to call them "personal aircraft" or some other buzzword. People have been working on them since airplanes were invented. In fact, we already have some personal aircraft that work pretty well - ultralight planes, powered paragliders, and autogyros, to name a few. None of them have become popular commuter vehicles.

The biggest problem I see with the NASA concept and so many other air car designs is that they are basically "fail dangerous" designs. In case the term sounds odd, a "fail safe" system is one that leaves you intact if something goes wrong. "Fail dangerous" systems are ones that go terribly wrong if there's a problem.

Early elevators were an example of fail dangerous devices. If a cable broke in an old-fashioned elevator, you'd be in trouble. A fellow named Otis made elevators fail safe by incorporating brakes that automatically kicked in UNLESS the elevator cables were intact. In other words, if a cable broke, the elevator would freeze in place. Now when elevators fail, they get stuck rather than falling to the basement. That is, they are now fail safe systems.
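Otis's trick boils down to one line of logic: make the safe state the default, so that motion requires everything to be working. A toy sketch (the function name and flags here are mine, purely illustrative):

```python
# Toy model of a fail-safe elevator brake: spring-loaded brakes are held
# OPEN by cable tension, so the default (broken) state is "stopped".
def brake_engaged(cable_intact: bool) -> bool:
    # The brake clamps shut unless an intact cable holds it open.
    return not cable_intact

# Normal operation: cable intact, brake released, car free to move.
assert brake_engaged(cable_intact=True) is False
# Failure mode: cable snaps, brake engages, car freezes in place.
assert brake_engaged(cable_intact=False) is True
```

A fail-dangerous design inverts this: the brake (or the lift, for an air car) only works while power and machinery are intact, so a single failure sends you to the basement, or out of the sky.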

Consider the engine in a land car as opposed to one in an air car. If the engine fails while you're driving to work, you come to a stop (it fails safely). If it fails while you're flying to work, you plummet out of the sky (it fails very dangerously).

There are lots of ways that land cars can fail dangerously, but few as dramatic as being left hundreds of feet in the air with no power. Solve that problem, and you might have a viable air car. My proposal would be to make sure they fly low and slow, which means turning them into plain old cars. Otherwise, personal aircraft will remain toys for hobbyists and adventure seekers, and will never be practical daily transport.

Holodecks - Guess what? They're already here. Sure, we have a long way to go before we can flip a switch and find ourselves dueling Renaissance musketeers. But I can already sit down at a table and be instantly transported to a board room in Portland. At home, I can play tennis, go bowling, and practice batting in a major league-sized stadium from the comfort of my living room. Haptic devices (things like the Wii handset) can even simulate the sproing of a ball hitting a racket.

At the pace this stuff is developing, it won't be long before we have virtual reality so convincing that it'll be hard to tell reality from virtual reality. In fact, virtual reality will probably be the best way to enjoy a personal aircraft. Then your air car will be both fail safe (if the server crashes, you don't die) and exhilarating.

Next Time: invisibility cloaks and hydroponic people grown from conception to birth outside the womb. Do either of these seem likely to you?


Wednesday, January 27, 2010

Mars, Tablets and Hydrogel

There's a lot going on today. And it's not even a Saturday. I think of Saturday as the day of many rewards, but apparently the solar system and Apple are irreverent about the days of the week. They will reveal mighty wonders whenever they want!


The solar system is revealing Mars today - the red planet will be the closest to Earth that it will get until 2014. It will be particularly visible in North America tonight, so be sure to step outside and get a good look. One that will hold you over for 4 years. You know, now that I think about it, maybe the solar system did plan to have this happen on a Wednesday. Heaven knows I'm not about to leave my apartment on a Thursday night in the middle of 30 Rock to look at the sky. (I seriously still watch TV shows on the TV and not online. I can pretty much start asking for the seniors discount at the moving picture theatre.)

The other big news down on Earth is that Apple will be debuting their brand new gadget: the tablet (or as Information Week called it, "tablet computing device." Computing device? IW is in the seniors line with me at the movie theater, probs.) Word keeps coming in about what the tablet can do - it will be somewhere between an iPhone and a notebook computer, with more versatility and capability than an iPhone for people who are out of the office a lot. It also serves the functions of a Kindle, which, given all the tablet's other features, could give Amazon some tough competition.

The question will be whether or not enough people need something between an iPhone and a notebook. Will those who switch from a notebook have everything they need, and will those who switch from an iPhone find the excess overwhelming? The tablet is about the size of a Kindle, which might be more than iPhone users want.

Still, the tablet is fascinating to me because, like much of what Apple has produced in the last 10 years, it's the most futuristic-looking thing available. Apple seems to have realized that if you really want people to buy your stuff, make sure it looks like it is from a science fiction movie. Having the opportunity to hold and play with something that looks like the future we dream of is apparently worth a four-hour wait in line, $200, plus $75 every month after that. And Apple has figured out that to tap into this, you cannot do it gradually. If you make technological advances slowly, you still end up with great stuff at the end, but people have time to get used to them and then become jaded.

We should all take a look around and remember that if we transported someone from 1900 to 2010 their head might explode looking at everything we have (not to mention the fact that they would no longer have to worry about getting polio, smallpox, measles, mumps, or dying from any number of what we now consider "minor" afflictions). I'm still awed by the iPhone but in a few years I'm sure the Apple tablet will be just oh so passe to many people.

But for those of us who can appreciate the little advances, I'm very excited to see what happens with this new hydrogel (extensive article about it in Science News).

[Now let me note that by saying "little advance" I am not trying to downplay the amazingness of this stuff. Quite the opposite. I am making a comment about how sometimes we overlook truly revolutionary things in favor of entertaining things. Like, some people might look at this hydrogel and say something like "Can I check my email on it? I can't? WHATEVS NOT INTERESTED!"]

So the hydrogel is mostly water, which is ideal for biological uses, like tissue replacement. But often water-based materials that are soft enough for the human body can be easily damaged. Materials that are self healing are often hard and brittle. But the new hydrogel is turning out to be the best of both worlds and exhibiting properties not seen in any other material. It's quite soft but also tough (described as very tough jello pudding). And perhaps best of all, it's self healing.

The material is made of water and clay tablets that have a positive charge around their edges, but negative charges on their tops. The researchers then added an ingredient called the G binder, which gives the material its self healing property (it's also used in some hand creams, lubricants and laxatives). Like octopus arms, the molecules grab onto the positive charges on the clay tablets and will reconnect them if they are pulled apart.

Between the tablet, the hydrogel, CNN's holograms (very necessary), and the new wooden legs, we're pretty much living in the future world (all we need is to bring back laser disks).


Tuesday, January 26, 2010

Volcano Lightning

A type of volcanic lightning was discovered during Mount Redoubt's Jan. 2009 eruption.

When volcano seismologist Stephen McNutt at the University of Alaska Fairbanks's Geophysical Institute saw strange spikes in the seismic data from the Mount Spurr eruption in 1992, he had no idea that his research was about to take an electrifying turn.

"The seismometers were actually picking up lightning strikes," said McNutt. "I knew that I had to reach out to the physicists studying lightning."

With McNutt’s curiosity about volcanic lightning sparked, he teamed up with physicist and electrical engineer Ronald Thomas and Sonja Behnke, a graduate student in atmospheric physics at the New Mexico Institute of Mining and Technology in Socorro, N.M., for a unique collaboration to learn more about volcanic lightning.

When the Mount Redoubt volcano started making seismic noise in January 2009, McNutt alerted Thomas and Behnke that this would be a great opportunity to capture some new volcanic lightning data. By the time the volcano erupted in March, the team had four Lightning Mapping Arrays set up to monitor the lightning from the eruption.

"The LMA is basically an old TV antenna set to pick up channel 3 -- the same frequency that lightning radiates from," said Behnke.

Setting up LMAs about 50 miles away from the volcano across a body of water called Cook Inlet in south central Alaska may not seem like an ideal location, but the team explained that there are obstacles to setting up LMAs near the volcano.

"We can't put the LMAs on the volcano because the volcano is basically in a wilderness area and the stations need power and internet to function," said Thomas.

As the data started coming in from the eruption, the team found something unexpected.

"We saw lots of lightning -- 20 to 30 minutes of lightning," said Thomas. "We saw even more lightning than we would typically see during a major thunderstorm."

Not only was the amount of lightning unusual, but so was the kind of lightning coming from the volcano.

"At the moment the eruption started, there were these sparks of lightning coming from the vent of Redoubt that only lasted 1 to 2 milliseconds," said McNutt. "This was a different kind of lightning that we have never seen before."

The residents and scientists who witnessed Mount Redoubt’s explosive eruptions described the events as a breathtaking display.

"They all said that it was the most spectacular lightning display that they have ever seen," said Thomas.

The team has also been studying how the newly-discovered volcanic lightning compares to familiar thunderstorm lightning.

"It's fascinating as we learn how volcanic lightning is the same as and yet different from thunderstorm lightning," said Behnke.

Emilie Lorditch
Inside Science News Service



Monday, January 25, 2010

A Star Class is Born

Hold it right there, star. Trying to become a black hole, are ya? Well, not so fast. It's possible that as you begin to collapse and squeeze the subatomic particles at your core, the squarshed quarks could begin to radiate neutrinos and stop that collapse, leaving you stuck as a dense, nearly invisible lump for millions of years. Yeah, those tiny little quarks are real tough when you get a bunch of them together.

Scientists have proposed that there is a new, exotic type of star living in our universe that we haven't seen yet. The so-called "electroweak stars", if they exist, will be difficult to detect because they mostly emit neutrinos - subatomic particles which, for the most part, don't interact with ordinary matter.

[Image: "Electroweak" stars may recreate the conditions of the big bang in an apple-sized region in their cores (Illustration: Casey Reed, courtesy of Penn State)]

When massive stars run out of fuel to burn, they explode as supernovas and then begin to collapse, eventually compacting into black holes. Smaller stars, like our sun, collapse and may leave behind dense cores called white dwarfs, but will never form a black hole. An "electroweak star" may be what's left when a massive star's collapse toward a black hole is halted by electroweak burning - essentially the transition of quarks into leptons, namely neutrinos.

As the dying star tries to collapse, a very large amount of mass may be pressed into an incredibly small area (the mass of two Earths into the size of an apple!). It takes this incredible pressure, and the pressing of a great deal of mass into a very small area, to make this electroweak burning occur. Instead of exploding into a great supernova or collapsing into a black hole, the star would live the rest of its life mostly invisible to us. Because neutrinos don't interact with regular matter (for the most part) they are very difficult to detect.

These electroweak stars may sound a bit like quark stars, which are thought to exist in the cores of neutron stars. Neutron stars are incredibly dense, and at their cores, scientists believe the neutrons break down into their smaller parts - quarks. But in electroweak stars the density would be even greater, causing the distinction between two of the four fundamental forces - the electromagnetic and weak forces - to break down. Without this distinction, the quarks in electroweak stars would radiate neutrinos.

The electroweak stars are, for the moment, purely theoretical. Scientists who study our universe can use what we can see to make guesses about what we can't. They gather information about our surroundings, infer laws based on those observations, and come up with a model of the universe. The models aren't always perfect, but sometimes they lead to new discoveries. To better understand this, imagine you are standing in the middle of a house (let's say, in a hallway on the second floor). You are trying to make a drawing of what the entire house looks like, even though you can't see all of it. You would know quite a bit about the structure of the house immediately around you. If you couldn't see inside one room, but could see daylight coming out of it, you could reasonably guess that the room had a window. If you heard a door close and then saw someone enter your area wearing a rain coat, you could infer where the front door is.

In a similar way, scientists used the standard model to study the phenomenon of electroweak burning, in which quarks (the subatomic particles that make up protons and neutrons, which in turn make up atoms) turn into different subatomic particles called leptons. In normal stars, lighter elements like hydrogen can fuse into heavier elements like helium (the more massive the star, the more massive elements it can eventually create).

When looking for these stars, astronomers might mistake them for very dense neutron stars. They would ultimately be denser than theory predicts a neutron star can be, which might give them away. In addition, electroweak stars might not cool down nearly as fast as neutron stars.

Check out other stories on electroweak stars at Space.com, the iTwire, and New Scientist.


Friday, January 22, 2010

New Newton News

There's a fantastic little book out there called Sum, which is a collection of essays about what the afterlife might be like. The essayist takes what can often seem like wonderful and harmless scenarios for the afterlife, and carries them out to sometimes hilarious, sometimes heartwarming, sometimes heartbreaking conclusions.

In one scenario, people who have died enter a waiting room before they can enter the great beyond. They remain in the room for as long as people on Earth say their name. For people with large families, their names carry on through a few generations and then die out. For those with no family or friends, the wait ends after their funeral. A man whose name has become attached to a local legend retold by tour guides goes crazy wishing for the end of it all. And then there are the very famous people, some who have been there for centuries or millennia, who may very well be there until the end of civilization.

This scenario makes me happy because it means I'd get to meet some amazing people when I die. Including Isaac Newton, who will no doubt be trapped in that waiting room for a good long time. The guy was just too darn influential. We celebrated his birthday a few weeks ago - well over 250 years after he shed this mortal coil and stopped coming to the parties. And this month, the Royal Society produced an online version of previously unavailable pages from a biography of Newton, written shortly after his death.

The pages, written by Newton's friend and colleague William Stukeley, address the famous "Newton and the Apple" story. The folklore says that an apple fell on Newton's head and sparked his thoughts about gravity; as if God or nature were literally smacking him on the head to say, "it's right in front of you!"

Unlikely as that exact story seems, the biography suggests that it was, in fact, a falling apple that gave Newton inspiration to pursue his study of fundamental laws. The BBC conducted an interview with Martin Kemp, emeritus professor of the history of art at Oxford University's Trinity College, UK, who had this to say:

"We needn't believe that the apple hit his head, but sitting in the orchard and seeing the apple fall triggered that work.

"It was a chance event that got him engaged with something he might otherwise have shelved."

It's incredible to think that when Newton came up with his ideas, it wasn't just a scientific breakthrough but a philosophical one. People didn't believe that there were universal laws, but rather that God managed most things on an individual basis. Furthermore, people didn't think it possible that the planets (such large, heavenly things!) could move according to the same laws that governed a falling apple (so small!). Newton's time in the purgatorial waiting room is well deserved; I just hope he doesn't mind being there for a long, long time.
(Image courtesy of Royal Society)


LaserFest is Here!


The laser was born 50 years ago, so it's time to celebrate!

You can get all the news and find out what's coming up at the official LaserFest web site. Or check out articles about the laser's birthday on the Washington Post, NPR's Science Friday and All Things Considered, and no doubt many others as the year goes by. I'll keep you posted.

Wednesday, January 20, 2010

Pretty Physics Picture of the Week: Supersonic Splash

There's something very elegant about these images. It's just a disc being pulled rapidly down into water, but the space left behind looks so pretty. The researchers who took the pictures weren't out to make art. They were studying what happens when an object rapidly plunges into a fluid. It turns out that the void left behind collapses and pushes out a supersonic jet of air.

If you want to learn more about the science of the supersonic splash, it's worth looking over an explanation by University of Maryland physicist Dan Lathrop in the APS online publication Physics. Dan knows lots about the topic. I used to work across the hall from his lab where he did related experiments. Instead of dropping things into water, he had a huge pool that was mounted on an apparatus that bounced it up and down, creating waves that would collide to create jets of water that shot straight up in nearly unbelievable liquid spikes.

Of course, you can forget about the science and just appreciate the fleeting and elegant lines in the images instead. Or better yet, watch a movie of the splash in an article by Lisa Grossman at Science News. That's all I feel like doing at the moment - after all, sometimes it's nice to relax and enjoy the beauty of physics, and to leave the worrying about the calculus and data crunching for another day.


Tuesday, January 19, 2010

Picturing An Infant Universe

A new image from NASA's Hubble Telescope has provided astronomers with the earliest snapshot ever taken of galaxies in the universe's infancy, about 600 million years after the Big Bang.

The deep look into the ancient cosmos revealed baby galaxies very different from those that exist now.

"We're seeing very small galaxies that are the seeds of the galaxies today," said Garth Illingworth of the University of California, Santa Cruz.

These galaxies, which are very blue and only 1/20 the size of our own Milky Way, may help to explain where the first stars came from.

After the bright energy of the Big Bang -- which took place about 13.7 billion years ago -- the universe became a dark place. For hundreds of millions of years there were no stars or galaxies -- only hydrogen and helium gas and a faint glow.

Then something happened around 400 million years after the Big Bang that caused the first points of light, the stars, to be born and end the dark age. The stars shot off a lot of ultraviolet energy that "reionized" the universe's hydrogen gas, giving it a charge. The exact sequence of events that led to the first stars is one of the biggest mysteries in cosmology, but the formation of galaxies is thought to be a likely culprit for kicking off this process, which lit up the universe like a Christmas tree.

To find the earliest galaxies, astronomers look for far-off objects whose light has spent more time traveling to Earth -- to see a galaxy farther away is to see further back in time. Space telescopes like Hubble spend days collecting the faint light from these objects. A new instrument installed on Hubble in May 2009, called the Wide Field Camera 3, improved its ability to spot extremely faint infrared light coming from objects extremely far away.

With this instrument Hubble can see infrared light "about 250 million times fainter than the unaided eye can see in visible light from the ground," said Rogier Windhorst of Arizona State University in Tempe.

The image presented on Jan. 5 at the American Astronomical Society meeting in Washington shows a handful of galaxies that are 13 billion light years away -- that is, their light traveled for 13 billion years, 95 percent of the entire life of the universe, to reach the instruments aboard the Hubble. This pushes the time of galaxy formation back before 600 million years, within striking distance of the time of reionization.
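That 95 percent figure follows directly from the two numbers quoted in the article:

```python
# Fraction of cosmic history the light from these galaxies has been
# traveling, using the figures quoted in the article (billions of years).
AGE_OF_UNIVERSE = 13.7
LIGHT_TRAVEL_TIME = 13.0

fraction = LIGHT_TRAVEL_TIME / AGE_OF_UNIVERSE
print(f"{fraction:.0%} of the universe's history")  # 95% of the universe's history
```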

Illingworth believes that an even deeper look could reveal galaxies from an even earlier time. But traveling that far back in time will have to wait for the more powerful James Webb Space Telescope, scheduled for launch in 2014.

"We're just at the beginning of this story," said John Grunsfeld, former astronaut and deputy director of the Space Telescope Science Institute in Baltimore, Md.

By Devin Powell
Inside Science News Service


Monday, January 18, 2010

A Day to Remember

A fellow science writer once told me there is physics in everything. So far, I think she's right. But I wondered if I'd be able to find any physics in today's celebration of Martin Luther King's birthday. It sometimes feels like physics is disengaged from issues of race in our nation (I guess because the focus of the field is not on people). But I know that's not true.


Just as there is ongoing discussion about how to increase the number of professional female physicists, so there is discussion about how to increase the number of professional physicists who belong to racial minorities. The United States' recent call to increase the number of science and engineering graduates not only relies on improving college programs, but improving science education all the way down to elementary schools. In particular, many poor, inner city schools around the country are in desperate need of better (or any) science programs. Quite often, the percentage of minority students is higher in these schools. The American Physical Society supports a number of programs to assist minority students, as does the National Society of Black Physicists.

This is a topic that could lead to many a long blog post, but in honor of MLK I'd rather recount a great story about a group of physicists who supported his cause during a period of great turmoil in our nation.

In the mid 1960's, particle physics was about to enter a new era with the construction of a new government laboratory, later named Fermilab after Enrico Fermi. Fermilab would eventually host the Tevatron, an accelerator that was, up until a few weeks ago when the LHC started up, the largest and most energetic accelerator in the world.

At the same time, cities across the nation were in flames. Riots and protests were raging over issues of civil rights. On paper, political progress was being made. There was the Civil Rights Act of 1964, that banned discrimination in employment practices and public accommodations; the Voting Rights Act of 1965, that restored and protected voting rights; the Immigration and Nationality Act of 1965, that dramatically opened entry to the U.S. to immigrants other than traditional European groups; and the Civil Rights Act of 1968, that banned discrimination in the sale or rental of housing. But as with the Emancipation Proclamation, the passage of laws enforcing civil rights did not equal immediate changes in the way people behaved. The acceptance of African Americans and other minorities into equal status with whites was met with great and violent resistance. Even in seemingly progressive and racially mixed cities like New York, ingrained social practices that promoted racial discrimination and separation caused flare-ups of discontent. Putting the new legislation into practice meant removing old ways of thinking that were stuck in the collective social mind like tree stumps, which in some places did not come out easily.

Dr. Martin Luther King became a national leader of the Civil Rights Movement in 1955 when he led a boycott of the Montgomery, Alabama bus system. The boycott was set off by Rosa Parks, who refused to give up her seat to a white passenger. The boycott went on for nearly a year, and put a major dent in the bus system's budget. The success of the boycott inspired similar demonstrations around the country.

In the following years, King's influence on the national movement became a force to be reckoned with. Though King was an advocate of peaceful resistance, he insisted that the African American population not be quiet about the change that needed to happen in America.

In 1963, a federal bill to desegregate housing and make it open to all people regardless of race narrowly passed through Congress. But the state of Illinois rejected its own version of the bill, and kept segregated housing legal. Once again, passing a law turned out to be much less difficult than making people obey it.

By 1967, open housing still wasn't being enforced in Illinois. That year, Robert Wilson was elected director of the National Accelerator Laboratory, though construction had yet to begin. The lab was set to be built in Batavia, Illinois, near a tiny community called Weston, about an hour outside Chicago. In defiance of Illinois' stance on housing, King threatened to protest the construction of the new facility. With so many other cities in turmoil, such a demonstration would scar the laboratory or threaten to stop it before it began, not to mention bring the riots to quiet Batavia, Illinois.

Whoever said physicists are detached from social issues should remember a handful of people who built Fermilab. The director of the laboratory, Robert Wilson, sent this telegram to Dr. King:

"We scientists now designing the 200 BeV accelerator to be located in Batavia strongly support the struggle for open housing in Illinois. Science has always progressed only through the free contribution of people of all races and creeds. This is not less true today in America, and the full success of this laboratory will depend on achieving conditions in Illinois which allow any scientist, regardless of race or creed, to participate in this important project - a project which will contribute to a truly great intellectual and cultural heritage in Illinois. We join you in wanting to attain these great ends."

And that wasn't all. Former deputy director Ned Goldwasser went to Chicago to meet with the leaders of minority groups and tell them that the lab intended to have hiring practices that reflected what we now call affirmative action. Goldwasser met with members of the Urban League, the NAACP, as well as the Black Panthers and Chicago gang members. The plan was to interview people in the gangs, find those who wanted the chance to get out, and let the lab be their opportunity.
Ken Williams joined the lab shortly thereafter and took on the responsibility of interviewing the young gang members and recruiting them for a six-month training period at Oak Ridge National Laboratory in Tennessee. Those who completed the training got jobs at Fermilab as technicians.


The protest at Fermilab was averted, and instead of becoming a symbol of intolerance, the lab became a symbol of progress.

It's good to remember that Dr. King was also a supporter of better education and professional opportunities for minorities. His protest was not against the progress of science, but against the progress of a nation that would not treat its own citizens equally. Dr. King was assassinated on April 4, 1968.


Friday, January 15, 2010

Avatar vs. All the Cats on the Internet

If you haven't seen Avatar yet, that is OK. No, really, it's fine, you do not have to see it, because this is a free country and we are in a recession and maybe you just aren't into that sort of thing. But lots and lots of other people have seen Avatar, and I think most of them would agree that if you are going to see it, you should see it in 3-D and on IMAX, if possible. I'm not giving away any plot points here, I'm just saying that this movie would probably look pretty weird on a 12 inch black and white television. When you let all the computer power that it took to make this film flourish in three stories and three dimensions, it is pretty amazing to look at.


The visual amazingness of Avatar is due in large part to the work of Weta Digital, a visual effects company that is pretty much destroying (in the awesome sense of the word) the field of movie visuals right now. They were hired for The Lord of the Rings (which I'm sure had no small part in their growing popularity), the King Kong remake, District 9 and a handful of amazing others. A major part of producing visual effects like that is being able to hold onto them while the movie is being made, and that takes some incredible computing power.

The data storage system that held the movie as it was being shot is one of the top two hundred most powerful supercomputers in the world. Like, wow. It consists of 40,000 processors and 104 terabytes of RAM. Most desktop computers have around a gigabyte of RAM (104 terabytes is about 106,500 gigabytes). The final cut of the film takes up over 17 gigabytes per minute, whereas a compressed version of a two-hour, highly visual movie is usually less than ten gigabytes for the whole two hours.
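If you want to sanity-check those numbers, the arithmetic is quick. Here's a rough sketch; the 104 TB and 17 GB/minute figures come from the reports above, while the runtime value is my own assumption, not an official Weta number:

```python
# Back-of-the-envelope check of the storage numbers quoted above.
GB_PER_TB = 1024

ram_gb = 104 * GB_PER_TB      # Weta's reported 104 TB of RAM, in gigabytes
print(ram_gb)                 # 106496 -- roughly 106,500 gigabytes

gb_per_minute = 17            # quoted size of the final cut, per minute
runtime_minutes = 162         # assumed theatrical runtime, about 2.7 hours
final_cut_gb = gb_per_minute * runtime_minutes
print(final_cut_gb)           # 2754 GB -- versus under 10 GB for a typical compressed film
```

So the uncompressed final cut alone is hundreds of times larger than the movie file you'd actually download.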

Weta Digital reports that rendering the film is what took up most of the computing power and time during the last month of production (where rendering means taking all of the data that makes up each individual frame and turning that data into an image). Those 40,000 processors were running 24/7, pumping in 7 or 8 gigabytes of data per second, which is incredible. Meanwhile, a 75-year-old woman now has a 40 Gbps internet connection. Related? I don't know.

Nerdy as it might be to say this (I am writing on a physics blog) I think the growth of data storage is really interesting. NO REALLY! It's pretty amazing how fantastic we humans are at producing data. We are incredible at it. First there are all those books and things we produced when we used to write on paper, which Google is trying its hardest to chronicle - but there is also everything we produce on the internet. Think of everything on the internet! Think of all the cats! ALL THE CATS! THAT'S A LOT OF CATS!

Just look at all of them! In fact, there are so many cats (and other things) on the internet that companies like Google and eBay may soon compete with science experiments for who has the most data. That is to say, it used to be that science experiments (mostly particle physics, but also some biology, like mapping the human genome) produced far and away the most data. However small subatomic particles might be, studying them can produce incredible volumes of data. So much so, that particle physicists were sort of sitting around waiting for computer scientists to catch up.

Take the Large Hadron Collider. It is expected to produce roughly 15 petabytes of data per year, collected from the very tiny explosions it will create. Most particle experiments actually toss out between 80 and 90% of the total data produced by their machines. Most of that data is pretty uninteresting, so they aren't really wasting anything, but if they wanted to, the scientists at the LHC could produce hundreds of petabytes of data a year -- the capabilities to handle that much info all at once just don't exist yet.

So now internet companies are starting to compile these mountains of data as well. To store the data mountains, they must build databases. You can buy databases from companies like IBM, but only up to a certain size. Petabyte databases are not commercially available yet; they require too many specific design decisions to be built generically. Building an extremely large database (which is actually the "official" name that people in this industry use for petabyte databases -- they call them extremely large databases, or XLDBs) isn't as simple as, say, doubling a recipe. It's like having built a one-room house and then being asked to build a skyscraper. While some might view a skyscraper as just a bunch of one-room houses stacked on top of each other, there's a bit more art to it than that.

Sifting through so much data would be daunting for one computing center, so the LHC has spread the work out over the planet. CERN's computing center has put together what's known as a tiered computing grid. The grid consists of a total of 140 computing centers in 33 countries. Tier 1 centers will filter through the rough, raw data (the portion that looks like it could be interesting and is not thrown out immediately by automated computer programs). Tier 2 will filter through the stuff the Tier 1 groups pick out, and Tier 3 will follow. After all that, there will hopefully be some nicely combed packets of data, ready for analysis. This requires good group participation by institutions around the world that want in on the physics taking place at the LHC. It goes to show that physics is a worldwide collaborative effort, since almost everyone is after the same thing.
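As a toy illustration (this is emphatically not CERN's actual software), the tiers behave like a chain of progressively stricter filters. The "interestingness" scores and cuts below are made up:

```python
def tier_filter(events, cut):
    """Keep only the events whose score passes this tier's cut."""
    return [e for e in events if e >= cut]

raw = [0.10, 0.95, 0.40, 0.99, 0.70, 0.05]  # made-up scores from the detector
tier1 = tier_filter(raw, 0.3)    # automated triggers drop the obvious noise
tier2 = tier_filter(tier1, 0.6)  # regional centers refine the selection
tier3 = tier_filter(tier2, 0.9)  # the analysis-ready sample
print(tier3)                     # [0.95, 0.99]
```

Each tier only ever sees what the tier above it kept, which is what lets the work be spread across 140 centers instead of piling up in one.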

I can't think of how to wrap this up so...more cats.








Thursday, January 14, 2010

Antimatter Supernova


Evidence for a new kind of stellar explosion 7 billion years ago

Astronomers meeting in Washington last week announced that a recent search for bright exploding stars -- commonly called supernovas -- found something quite unusual: antimatter.

Usually stars like our sun are powered by fusion reactions in which the nuclei of two atoms fuse together to form a heavier nucleus. In Y-155, a star in the constellation Cetus, the astronomers argue that another process was crucial: the making and unmaking of antimatter particles.

In all stars a titanic struggle takes place between gravity, which wants to draw matter toward the center of the star, and the pressure of nuclear interactions, which tends to keep the star inflated as if it were a balloon. Only when the star uses up all its internal fuel, causing the nuclear reactions to slow down, does gravity start to win out. The resulting gravitational collapse is what causes the star to explode. When a star dies in this way, as a supernova, it often spews matter into space and can be brighter than its host galaxy, at least for a short time. Astronomers love to study such supernovas since they say a lot about the inner mechanisms of stars and also provide a yardstick for determining how far away the star was.

Notre Dame astronomer Peter Garnavich reports that what makes Y-155 different is its mass, estimated at 200 times that of our sun. With such a large mass, the pressure at the core of the star is so great that the light released in nuclear reactions is capable of creating new particles: electron-antielectron pairs. The creation of these particles actually hastens the collapse of the star and its eventual explosion.

The idea of a supernova triggered by the creation of antimatter has been around for only about 40 years, Garnavich said, but the observational evidence is sparse. In the case of Y-155 the signature light cast out after the explosion was odd: most supernovas send out higher-energy blue light first followed by cooler red light, but in this case the red light came first then the blue. That and the much larger amount of radioactive nickel shooting outwards compared to common supernovas led the researchers to suspect that antimatter was involved in triggering the explosion.

Garnavich is part of a team of scientists participating in a project called ESSENCE. Using a 4-meter-wide telescope mirror at high altitudes in Chile, the scientists observed 200 of the most explosive type of supernova. Y-155 was the most explosive of them all.

The Keck Telescope on Hawaii was directed at Y-155 so that an accurate spectrum -- that is, a summary of all the light coming from the star -- could be recorded. This allowed the distance to the star to be determined. At a distance of 7 billion light years, this star lies about halfway back in time toward the origin of the universe.

Garnavich reported these results at a meeting of the American Astronomical Society last week in Washington, D.C. He said that because of its size and powerful emission, Y-155 might resemble the first generation of stars in the universe. Another ESSENCE scientist, Alex Filippenko of the University of California, Berkeley, said that the antimatter-supernova mechanism might be important in locating these first stars.

By Phillip F. Schewe
Inside Science News Service


Wednesday, January 13, 2010

Who Is Ettore Majorana?

Imagine it's the early 1930's. You are a gifted physicist, and your world has recently been turned upside down. Physics is no longer the study of things we can see, but the exploration of worlds invisible to our sight - worlds that behave in the most peculiar of ways. The nucleus of the atom, the particles that make up light, and perhaps even smaller building blocks are now your playground. These rebel particles play a totally different game than the one your predecessors spent centuries trying to understand. You are at the forefront of a revolution.


You are working with the brightest minds of your generation, you have all the funding you need, and the network of physicists worldwide is awaiting your most recent results. When suddenly, you realize something. Atoms, which physicists have found are made up of massive, positively charged particles called protons and much lighter, negatively charged particles called electrons, must also possess a neutral particle - something just slightly more massive than a proton, and nowhere near as light as an electron. You've just discovered the neutron. It will unlock a thousand doors in physics and change our perception of the world. The person credited with its discovery will no doubt live on in physics history forever.

Do you:

a. Run and tell your supervisor right away so you can begin preparing a paper for publication.

b. Get drunk and run into the street screaming about your neutral particle - then go tell your supervisor right away so you can begin preparing a paper for publication.

c. Do nothing.

d. Haha, yeah right, like anyone would do nothing if they made one of the greatest discoveries in physics history. (c) is pretty much just a joke.

Well, actually, it is believed that Ettore Majorana, a Sicilian-born physicist working with the likes of Enrico Fermi, did in fact construct a pretty solid argument for the existence of the neutron, then sat on it and waited until someone else published it.

But why? Why sit on such a discovery and let someone else get the credit? That question nearly drove Enrico Fermi bonkers. Fermi was so obsessed with publishing every tiny result he came up with, let alone any major result, that his frustration with Majorana haunted their relationship for years to come. Why wouldn't someone want credit for their discovery? What kind of person, especially one so incredibly gifted at physics, would not choose to stake their claim?

The answer is that no one really knows. No one really understood Ettore Majorana while he was alive, and now it is too late to ask him. Ettore Majorana disappeared from Italy in 1938, and was never heard from again.

Majorana did suffer from depression; he once locked himself in his room for four years and hardly spoke to anyone. Yet, once he left the hermit lifestyle, he started teaching at the University of Naples and seemed to be coming around and enjoying his life. He may have battled suicidal thoughts; but he also withdrew a few months' salary right before his disappearance. He left a letter for his boss in which he asked forgiveness and said he would not be at work; yet many argue over whether this was the equivalent of a suicide note. He was an incredible physicist who could have marched the field forward; yet he had already displayed a total disregard for fame or the need to publish results which he believed someone else would publish eventually. Majorana was an enigma, a two-face, a mystery.

He was also brilliant. Part of Majorana's emotional suffering may have come from the fact that he was a child prodigy, gifted at mathematics. Like many smart children, Majorana couldn't fit in with his peers and leaned on a sense of superiority to deal with it. This apparently carried into his adulthood, and he would become enraged or severely irritated at anyone who couldn't keep up with him in physics and mathematics (which was just about everyone). In addition, Majorana grew up in a dictatorial household where his mother ruled with an iron fist that left a deep imprint on young Ettore. While most of his siblings made it out alive and fairly stable, Majorana was troubled by inner demons.

Still, he was a natural scientist, and he earned an undergraduate degree in engineering and a PhD in physics from the University of Rome La Sapienza. There Ettore joined Enrico Fermi and a group of young men destined for quantum mechanical greatness. They were known as the Via Panisperna Boys, named after the street on which their laboratory was located.

Alongside quantum mechanics, the physics world was focused on understanding the nucleus of the atom: its makeup, structure and behavior. How is an atom built? What particles or building blocks make it up? How can we determine more about it? How is it held together? How does it interact with magnetic or electric fields? And how can we break it apart?

Ettore dabbled in all of this during his undergraduate and doctoral work at the University of Rome. His mind seemed to allow him to join the ranks of the most established physicists of the time. His thoughts were unrestrained by classical physics or a notion of what must or must not be correct, and this freed him up to discover many of the unexpected twists and turns that atomic theory presented.

Ettore worked closely with the Boys for many years, and collectively the group made huge progress on the world's understanding of the neutron. Majorana produced a fair amount of published work that was helpful - but he was still suspected of sitting on great ideas rather than publishing them. Who knows what great discoveries we might credit to Ettore Majorana had he only been motivated to publish. And while this drove Fermi crazy, Fermi would later praise Majorana's genius, putting him in league with Galileo and Newton. He also humbled himself enough to admit that while he, Fermi, had great technical skill and tremendous mathematical ability, he lacked the creativity that Majorana possessed. Perhaps this was why he was so obsessed with getting credit for his work. Majorana seemed to know that he would always produce great physics, while Fermi did not.

After a few years with the Boys, iced with more than a little personal conflict between them, Ettore traveled to Germany, where he worked with legendary physicist Werner Heisenberg. The two became good friends and complementary colleagues, and they worked on a model of the atomic nucleus, among other things. Majorana even worked with Heisenberg's longtime mentor Niels Bohr. Not long after, Majorana published what would come to be seen as his great opus - his paper on the neutrino. It is only in the last decade that experiments to directly test Majorana's theories have been constructed. Frustrated though he was at the physics community following a theory laid out by Dirac rather than his own, Majorana's life was, for the most part, looking up.

So why did Majorana disappear? After leaving Heisenberg's company and having mostly cut himself off from the Via Boys, Majorana entered a major slump. Following a serious illness, he spent four years largely withdrawn from the world, living alone, growing out a shaggy beard, reducing his bathing schedule, and all the while writing volumes of papers on topics ranging from geophysics to relativity. Majorana never published those papers, but some of his old colleagues have since revived them.

Four years with very little human interaction would break some people, but it almost seems to have relaxed Majorana. He decided, rather suddenly, to reenter public life, even though the physics community had mostly written him off. He entered a physics "contest" that was established in order to award its winners professorships at Italian universities. But the outcomes were always known beforehand. It was never a surprise which "competitor" went to which school. When Majorana joined the competition, it threw the pre-planned schedule totally off, but for appearances' sake the organizers couldn't deny him entry. And they knew he'd flatten everyone else there (including an old friend of his). So, before the competition, they offered him a position at Naples.

Majorana was a new man. While his lectures still went over the heads of many of his students, he was far more patient and understanding than he had been with the Boys. He was still socially awkward, but his students reported that he was pleasant and helpful. His disappearance was a surprise.

On the last day he was seen alive, Majorana boarded a boat from Naples to Palermo. He supposedly took the return trip, but if he did, he didn't go home when he landed.

The theories as to what happened to Majorana cover a wide range. Most think he killed himself; but then, why did he withdraw so much money before he left? Enigmatic notes to his superior at the university suggest that he was not planning to return, but they do not amount to suicide notes. Did he simply want to leave the world behind? But why? Some speculate that he got a glimpse of what a nuclear bomb could be and left before he could bring the theory into the light. Others believe he was kidnapped or killed by the mob, some that he fled to Argentina (where alleged sightings have been reported), others that he entered a monastery, and some that he became a beggar. No one will ever really know.

What I find most tragic about this whole story is that we may never know what amazing developments Majorana might have brought to physics had he applied himself more directly, felt the need to publish, and stuck around a little longer. Who knows what impact he may have had on the field, what mysteries he might have uncovered that we still have yet to dig up.

For a wonderful telling of the entire Majorana story, I recommend the recently published A Brilliant Darkness: The Extraordinary Life and Mysterious Disappearance of Ettore Majorana, the Troubled Genius of the Nuclear Age, by Joao Magueijo. Like a true scientist, Magueijo seems unsatisfied to hear from others what they believe happened to Majorana, so he goes in search of the truth himself. His path through Italy also weaves through Majorana's physics (mostly the study of the neutrino, which is up for experimentation right now!) and his life. Magueijo makes the story sound more like a gossip column than a history. Who says physicists can't stir up human emotion? These guys are almost ready for a reality TV show. I love Magueijo's style of writing and I love this story.




Monday, January 11, 2010

Goodbye Ray Solomonoff


Long before Steven Spielberg cast Haley Joel Osment as Creepy Jr. in A.I., and before guys started marrying their video game girlfriends, way back in the early 1950's, a small group of physicists and mathematicians coined the term "Artificial Intelligence." While the group remained small for a few more years, the dawn of the computer age has seen this field blow up exponentially.

Ray Solomonoff was a member of that small group, an early pioneer in the field of artificial intelligence whose work showed that to achieve artificial intelligence, we must first understand intelligence on a systematic level. Solomonoff died on December 7, 2009, at the age of 83, of a ruptured brain aneurysm.

To understand the impact of Solomonoff, you have to go back to Alan Turing.

Alan Turing committed suicide by eating an apple dipped in cyanide. A dramatic end to a truly amazing life. Turing is not only hailed by the physics and computer science community, but by philosophers and the gay community. Turing was a homosexual, which was a criminal offense in Britain in the early 1950's. Despite doing a tremendous amount of work for the British government during World War II, he was forced to go on hormone treatment which some doctors believed cured homosexual desires.

Turing's code-breaking machines were used during WWII to crack German ciphers, and their contribution had an immeasurable impact on the success of the Allies. His theoretical Turing machine was, in essence, the first model of a computer. While the concept of an algorithm was not created by Turing, he did formalize it, and put into practice the idea that machines could "think." An algorithm, very simply, is a sort of road map that a machine can follow: "If you come to a fork in the road, then go left." Except the maps become very intricate very quickly and can help the computer arrive at the solution to a mathematical problem.

Turing also came up with the theoretical Turing Test, which he believed could tell us when we had actually achieved true artificial intelligence. If a human being conversing with the computer cannot tell the difference between the computer and a real human, then you have artificial intelligence (Terminator Salvation, anyone?).

But I digress.

It goes without saying that you need more complex algorithms to solve more complex problems. Turing machines could be programmed with different algorithms that could do a great deal of work, or potentially solve problems that humans hadn't been able to solve. Why couldn't mathematicians simply ask an algorithm to tell them whether unproven statements were true or false? However, Turing and some of his collaborators showed that some algorithms, when asked to solve certain problems, end up running in infinite loops - circling around themselves forever without finding an answer - and that no algorithm can decide, in general, whether another algorithm will ever finish. Algorithms, it turns out, have limits.
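The practical workaround is to run an algorithm with a step budget and simply report what happened within it. Here's a minimal sketch, using the (still unproven) Collatz iteration as the "program" whose halting we're probing; the budget of 1000 steps is my arbitrary choice:

```python
def runs_to_one(n, max_steps=1000):
    """Iterate the Collatz rule starting from n and report whether we
    reached 1 within max_steps. A step budget is the best we can do in
    general -- Turing showed no procedure can decide halting outright."""
    steps = 0
    while n != 1 and steps < max_steps:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return n == 1, steps

halted, steps = runs_to_one(27)
print(halted, steps)   # halts, after about a hundred steps
```

If the budget runs out, you still don't know whether the algorithm loops forever or just needed a bigger budget - which is exactly the limit Turing identified.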

To solve very difficult problems, computer scientists, physicists and mathematicians began to incorporate probability into algorithms. If you wanted an algorithm to tell you what the outcome of a dice roll would be, you couldn't make it predict the future, but you could give it enough information to deduce the probability of the roll.

Solomonoff's great contribution was algorithmic probability. Prior to the 1960's, probability was calculated simply as the number of favorable outcomes divided by the number of trials (the probability of getting heads when you flip a coin is the ratio of the number of heads to the total number of flips).
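That frequency view of probability is easy to sketch. The coin here is simulated with a fixed seed so the run is repeatable; the function name is mine:

```python
import random

def estimated_heads_probability(flips, seed=0):
    """Frequentist estimate: favorable outcomes divided by total trials."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(flips))
    return heads / flips

print(estimated_heads_probability(100_000))   # hovers very near 0.5
```

Notice that this definition says nothing about how the outcomes were generated - it just counts. Solomonoff's insight, below, is about building that missing structure into the probability itself.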

Solomonoff's theory of probability requires that we look at the world like computers: in a series of 0's and 1's. Everything that comes through a computer, from pictures of cats to mathematical equations, is represented in the core of that computer as a series of 0's and 1's. The order of the series tells the computer what to do.

This isn't a bad way to look at the world. It's very definite. It gives you a unique way to describe something that is happening.

So Solomonoff decided that probability in algorithms could better be described by how easy it is to describe a series of 0's and 1's. For example, it's rather easy to describe the series 0101010101: it is five repeats of '01'. The series 011110010010111, however, looks random; the easiest way to describe it is just to say it outright. The less complex the description of a series is, the higher its probability. Furthermore, the probability can be updated based on the input data given to a computer, so it can get better at predicting a sequence of numbers the more it has learned.
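The true "shortest description" is uncomputable (more on that below), but a general-purpose compressor makes a crude, hands-on stand-in for how hard a series is to describe. This is a sketch of the idea, not Solomonoff's actual formalism:

```python
import random
import zlib

def description_length(bits: str) -> int:
    """Compressed size in bytes -- a rough proxy for the complexity of a
    0/1 string (the exact algorithmic quantity can't be computed)."""
    return len(zlib.compress(bits.encode()))

regular = "01" * 500                          # easy to describe: '01', 500 times
rng = random.Random(1)
messy = "".join(rng.choice("01") for _ in range(1000))  # no obvious pattern

print(description_length(regular) < description_length(messy))  # True
```

The patterned string compresses to a handful of bytes while the patternless one barely shrinks - shorter description, and in Solomonoff's scheme, higher probability.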

Solomonoff's theory applies to artificial intelligence because it provides a model for learning. Once again, in order to create artificial intelligence we first need to understand what real intelligence is. As we grow and progress as humans, we acquire experience and transfer it into knowledge. The more input data we are given, the better we become at dealing with new situations. In a sense, we become better at predicting the next sequence of events: we have previous information to help us predict what the next series of events will be.

Unfortunately, Solomonoff's theory of algorithmic probability is uncomputable: no program can calculate it exactly, so only approximations of it can be used in computers. But even these approximations are better than the old methods.

Some interesting thoughts on his passing from Foresight.org and The New York Times.


Friday, January 08, 2010

The Wooden Leg Makes a Comeback

Pirate enthusiasts rejoice. Wooden legs are back.

Sort of.

Actually, anyone who has lost bone or bone density to disease or accident may have something to celebrate, and will not actually have to worry about looking like a pirate.

Scientists in Italy, at the ISTEC laboratory of bioceramics in Faenza near Bologna, have found that the wood of the rattan tree, when heated, pressurized and enhanced with calcium and phosphates, becomes incredibly similar to human bone. Early tests show that real bone is so comfortable with the organic substitute that it will bind to it (this is with segments of bone - no tests in humans have taken place yet). This acceptance by real bone, coupled with the treated rattan wood's natural durability, means the substitute bone will never need replacing.

Current substitutes for people who have suffered bone loss due to disease, where the remaining bone is also weak, often need to be replaced, and certainly don't fuse with the real bone. The BBC quoted surgeon Maurilio Marcacci, who is testing the bone substitute in sheep, as saying, "A strong, durable, load-bearing bone is really the holy grail for surgeons like me and for patients." (The treated rattan wood is shown at left.)

It could be another five years before the bone substitute is ready for use in humans. SmartPlanet also has a neat story about this, and the BBC has a video of the discovery.

It's always interesting when the advance of science goes back to its roots (no pun intended) and finds inspiration in nature. It's even more amazing when it finds raw materials there. Mother Nature is very good at what she does. She's had millions of years to develop some of her creations, and natural selection has helped preserve the best and most efficient.

I'm reminded of scientists in recent years finding that many arthropods (including crabs and spiders) have single-atom deposits of metal in the very tips of their fangs or claws. These unique structures are incredibly fracture resistant and durable, which is good, because the animals use them to come in contact with the rest of the world, such as when walking or attacking. While the building structure of these tips wouldn't work for large scale structures, they could teach engineers how to better build nanostructures.

I'm also reminded of how some doctors in the last ten years have returned to using maggots to remove infection. For a long time, these methods were viewed as ancient and unnecessary next to modern antiseptics. But infection can still kill people, even when they have access to good health care. Patients suffering from diabetes, which can lead to open sores, particularly on the feet, still suffer from gangrene if the sores become infected and the infection isn't caught soon enough. Apparently, even when doctors believe they have removed all of an infection with more modern methods, small amounts can remain. And that's all it takes. Maggots, however, are apparently very thorough when it comes to eating up infected tissue, and in fact will leave healthy tissue alone.

In addition, leeches are back in vogue as a way to treat circulation problems that may occur after reconstructive surgery. The squirmy little critters promote circulation and release a blood thinning agent that prevents coagulation. Read all about it in this fantastic article by Ben Harder called Creepy-Crawly Care.

It's important to note that taking a cue from nature, and respecting what it has to offer, isn't the same as abandoning science. The two are tightly linked, and I'm afraid I've run into many people who believe they are separate. They see scientists and doctors as old, white, balding men in lab coats who don't listen to their patients, or ignore new ideas. And while I think that image should have faded long ago, some people still assume modern medicine is separate from nature because it relies so heavily on technology and man-made drugs. I think the above examples demonstrate how modern scientists have in no way left nature behind. But it's also important not to confuse the use of naturally occurring materials in modern medicine with "alternative medicine."

Alternative medicine took a big hit a few weeks ago when an extensive study by the University of Virginia School of Medicine found that ginkgo biloba does not improve memory or prevent cognitive decline in aging test subjects. The problem with many alternative medicines is that they've never been subjected to this kind of study. The problem with many others is that they have been, and were found ineffective, yet people still use them in the belief that they will have an effect.

But it's important to keep in mind a point that Bad Astronomy blogger and all around awesome guy Phil Plait brought up when reporting this story: that the researchers were disappointed to find that ginkgo did nothing. Scientists aren't (or they shouldn't be) automatically against natural remedies. If something helps people feel better and get better, then doctors are happy. But while anecdotal evidence may be enough to convince some people to use an untested or ineffective treatment for ailments which are long term and often cannot be attributed to any one cause, this isn't enough to satisfy a good doctor. Whenever anyone tries to convince me that an alternative therapy is effective, or even better than modern medicine, I pose this question: while you might take herbs for mild mood enhancement or better digestion, or wear crystals for back pain, if your child went to the hospital in anaphylactic shock and was suffocating, would you want to take your chances with something you only know works because of stories you've heard, or with epinephrine?

The point is, when faced with more urgent care needs, modern medicine wins out.

I bring this up because I think it's important not to hit either extreme. Have you ever met anyone who swore that scientists and doctors didn't know anything? Or someone who believed there were no mysteries left to be found in nature? Once again, there were many folks who didn't think leeches or maggots were useful once we got antiseptics. On the flip side, modern medicine is advancing treatment for Alzheimer's and dementia at a rate that no purely natural remedy has been shown to match. I think the problem is not science or nature, but humans. Whatever system we use to find answers, we are always subject to our own imperfections.

Ok, maybe I brought that up so I could post this:



Wednesday, January 06, 2010

Can Physics Predict the Future?


You and I each make predictions about the future continuously throughout our day. As we drive to work, we predict that the path we take will look much like it did yesterday. When we practice a sport, we begin to learn how hard we need to kick or hit a ball to make it go where we want. And soon we understand the forces of gravity, friction, and our own muscle, enough to score a goal or block a line drive. Each of us is constantly predicting the future. And sometimes we're right.

So if physics is a science meant to break down the world that we live in - picking apart its smallest particles, figuring out its hidden mechanisms, and working out the equations that describe it - then shouldn't physics be able to predict the future?

This is a question, or an assumption, that many people have about physics. And I'll tell you right now that it's wrong.

Physics can, to some extent, tell us what to expect from the world. Just like you and I can reasonably assume that because our path to work or school has been mostly the same every day for the past however many days, weeks, and years, it will be pretty much the same tomorrow. Just like we can learn to play a sport and begin to control the outcome of the game. But the goal of physics simply isn't to predict the future.

The goal of physics is to understand the world as it is now; here in the present. Physicists want to know why things are the way they are, why the world works the way it does. Using that knowledge, it is often possible to learn more about the world around you than you originally knew. We can't see black holes, but by learning about the world around us, and coming up with equations to describe it, we can find other ways to look for black holes. We know that matter around a black hole will behave in a certain way, so we look for that behavior and aha! A black hole. This kind of discovery requires that we uncover the rules and equations that describe the world; that make it run. Some scientists are even searching for a single equation, a "theory of everything," from which all other equations could be derived. Could such an equation predict the future?

No.

K.C. Cole, a tremendous science writer for the LA Times and author of an incredible new book on the life of Frank Oppenheimer, wrote a short post over at the NPR Blog 13.7: Cosmos and Culture, about the notion of a "theory of everything." She makes the point that although scientists may (heavy on the 'may') find a single equation from which all other equations and governing limits of the universe can be derived, it will still not allow us to predict the future. That was never the intention of figuring out physical laws.

If a fortune teller told you that tomorrow you would go to work and the road you drove on would bend in the same direction that it always has, that the building you work in would be in the same place, and that the sun would rise and set at about the same time it did yesterday - well, you'd be pretty disappointed with that fortune teller. Even if she was right.

You'd want to know about something much more complex and less predictable. Let's say you chose, on your way to work or school, to forget about your destination and allowed yourself to be pushed and pulled by the stimuli in your world - turning a corner because a gust of wind pushed you that way; crossing a street because the person next to you did; pausing for a moment because you smelled something good - well, then that same fortune teller would probably fail to tell you anything that actually happened. Even you might not be able to predict where you'd end up. The possibilities would overwhelm both of you. And possibilities overwhelm physics' ability to predict what will happen next in just the same way.

One of Isaac Newton's three laws states that an object in motion tends to stay in motion unless acted upon by a force. Seems pretty straightforward. Consider the above scenario where you decide to let go of the force of will that moves you toward work or school each day. Could you predict where you would go? You could be pretty sure that you would not start to float upward and land on top of the Empire State Building (but you might take an elevator there). You would obey the laws of physics, and yet the many ways in which those laws would act upon you would overwhelm your ability to predict your fate.

Now, as a human, you have more control over where you go, so Cole uses the example of a drop of water in a waterfall. As that drop of water reaches the crest of the fall, if we could pause and try to predict its path and where it would land, we would be utterly overwhelmed. We'd have to incorporate the motion of every other drop in the river, the force of gravity and wind, the force of the rocks below. The equations would grow to fill volumes, they would swamp a supercomputer, and we'd be lost. We can assume that the drop of water will obey the laws of physics - but those laws will act upon that drop of water in so many ways that we will lose all hope of figuring out where it ends up before it ends up there.

Systems like this one are too complex to predict. And unfortunately, nearly everything we really want to know about the future is more complicated than predicting what your trip to work will be like. Even then, you never know what will happen. Who will you bump into? What sounds will you hear as you walk? What emotion might overtake you when you see something beautiful or ugly? Will you run into the person you are meant to marry? We can't say for sure until it's already happened. And the goal of physics is to best describe the current situation we are in. What does the universe look like right now? Why do things behave the way they do? Those are questions that physics hopes to answer.

That's not to say that we can't predict how things will go at all, or that we can't use those predictions to our advantage. We can't predict the path of a single electron for very long, but we can still utilize electricity. We can still count on our computers to work (most of the time). And as science progresses, we can get a clearer and clearer idea of what is happening when these things work the way they do. In essence, there are fundamental rules and regulations for the universe - like the force of gravity - that help us understand the way things are. That is, gravity helps us understand why we don't float away. But they can't predict the future.

"But wait!" you exclaim (I knew you would); aren't things like Moore's Law an example of scientists predicting the future? Moore's Law observes that about every two years, the number of transistors that fit on a computer chip doubles. So it predicts that in four years, chips will hold about four times as many transistors, no? Isn't this a prediction of the future? Well, yes and no. Maybe now we're getting to the point where we need to define what we mean by predicting the future.

Is predicting the future seeing something that can't possibly be determined by looking at cause and effect? Fortune tellers promise to reveal things like the date you will get married, how many children you will have, or how much money you will make. Of course, to get specific about these things they'd need to know everything from your genetic history to what the weather will be like for the next thirty years. If, instead, you believed that a fortune teller was simply handed this information without having to know all those things - that he or she had some sort of phone line running into the future - well, then, I can't say I'd know what to say to that. So let's say that "predicting the future" means using cause and effect to determine the outcome of something.

So with Moore's Law, scientists are, in a way, predicting the future. Moore's Law doesn't tell us who will come out with the faster processor first, or what they'll eat for breakfast the day they announce it. And Moore's Law would fail if all the engineers in the world stopped working. It's not a law that is set in stone. Instead, it's more like predicting that the lights in your house will turn on, assuming your bill is paid and the wiring is in good shape. It's likely, but not set in stone. Though we can't control every cause and effect in the world, we can use what we know of how the world works to reasonably predict what those causes will lead to; what things will be like tomorrow, or ten years from now. We obviously couldn't do much with our lives if we didn't understand cause and effect well enough to have a good idea of how many things would go. We predict that big buildings won't spontaneously switch places (although you could insert a quantum argument that it's not impossible), we predict that machines we build will run the way they did yesterday, and we predict that the sun will rise. And because of science, we know why the sun appears to rise. We'd know enough about the current state of the world to deal with situations that we can't predict.
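The arithmetic behind this kind of prediction is just compound growth: a quantity that doubles every two years has grown by a factor of 2^(n/2) after n years. A quick sketch (the numbers are illustrative, not measurements):

```python
def moores_law_factor(years, doubling_period=2.0):
    """Growth factor after `years`, for a quantity that doubles
    every `doubling_period` years."""
    return 2 ** (years / doubling_period)

print(moores_law_factor(4))   # four times after four years
print(moores_law_factor(10))  # thirty-two times after a decade
```

Note that nothing in this formula explains *why* the doubling happens; it only extrapolates a trend, which is exactly the "yes and no" above - a useful forecast that holds only as long as the underlying causes keep operating.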

Now I'm not even mentioning quantum effects - because once again, this is more a qualitative discussion about the objective of physics, rather than a discussion of the nitty gritty details that prevent us from estimating what will happen next.

So just because we can't predict the future doesn't mean we have no idea what will happen. What is more important to understand is that the purpose of physics is not to predict the future, but to describe the system we live in now.

That said, there was an interesting article in Wired today about the work of New Zealand-based physicist Sean Gourley, who gave a talk at TED discussing how he and a group he was working with thought they'd come up with a model to predict insurgencies in war. My immediate reaction to this is that there would be far too many variables, not the least of which is human decision making, that determine what happens in a war. To think that we could come up with an equation to model these events seems to simplify an incredibly complex situation. It would be like a single equation to show the path of a water droplet down a waterfall. But Gourley and his team thought they had found it, and last month they had a paper on the cover of Nature called “Common ecology quantifies human insurgency.” Nature is the top science journal in the world, so approval of this research gives it some major support.
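The regularity the paper reported was statistical: the sizes of attacks follow a heavy-tailed, power-law distribution. As a rough illustration of how such a pattern is typically quantified - this is the textbook maximum-likelihood estimator for a continuous power law, not necessarily the paper's exact pipeline - one fits the exponent like so:

```python
import math
import random

def power_law_mle(samples, xmin):
    """Textbook MLE for the exponent alpha of a continuous power law
    p(x) ~ x^(-alpha), fit to all samples at or above xmin:
    alpha = 1 + n / sum(ln(x_i / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Sanity check on synthetic data drawn from a known power law
# (inverse-CDF sampling: x = xmin * u^(-1/(alpha-1)) for uniform u).
random.seed(0)
true_alpha, xmin = 2.5, 1.0
draws = [xmin * (1 - random.random()) ** (-1 / (true_alpha - 1))
         for _ in range(50000)]
est = power_law_mle(draws, xmin)
print(est)  # close to the true exponent, 2.5
```

Of course, fitting a clean exponent to synthetic data is the easy part; the hard part, as the objections below make clear, is whether the real-world data going in means what you think it means.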

Wired contributor Katie Drummond has some major issues with this. Namely, that the model Gourley and his team used was based on information that came from the media. So their map of insurgencies only includes those that were reported, and only those that included fatalities. Unfortunately, it's sort of widely known that media coverage of insurgencies isn't always accurate, partly because of how closely the media can cover the entire war, and how much the military is willing to share. This is only part of Drummond's objection and I encourage you to read her piece.

The article seriously calls into question whether or not Gourley's model is worth anything. But the objections are mainly to his methods: what he uses for input data to construct the model and assumptions he makes about what defines an insurgency (does it only count if the insurgent forces start it, as opposed to the counter-insurgents?). With more accurate data and some discussion with army personnel to define the parameters, could we make a model to predict the patterns of war? I'm sure it's been attempted and is part of military strategy, but who knows how many complicating factors you would run into when trying to make a universal equation to describe all of these conflicts.

That's not to say there haven't been studies of systems that depend on human behavior: things like traffic patterns and the best way to board airplanes come up rather frequently. Granted, those could be viewed as much simpler systems, with much more concrete definitions and traceable data. It would certainly be wonderful if we could use physics to learn something about wars, with the ultimate objective of ending them faster and with fewer casualties.

But beware of those peddling snake oil: no one has a phone to the future.
