Wednesday, October 18, 2006

Single Pixel Camera

[Images: a conventional close-up photo of a mandrill (left) and the same scene captured by the single-pixel camera (right)]
I was on the bleeding edge of digital photography when I bought my one megapixel camera back in '96. Now some physicists at Rice University in Houston have the nerve to build a ONE PIXEL camera. The camera produces images by recording thousands of single-pixel images one after the other, rather than simultaneously recording millions of pixels.

It's not the greatest camera in the world, as you can tell from the pictures here. The image on the left is a conventional close-up photo of a mandrill, and the photo on the right is a single-pixel camera shot of the same image. What's more, it takes about fifteen minutes to record a shot like this with a single pixel. So it's not likely you'll be standing in line at Circuit City with a single-pixel camera in your cart any time soon.

The key benefit of the experimental camera is that it needs much less information to assemble an image. Massive CCD arrays collect millions of pixels worth of data, which are typically compressed to keep file sizes manageable. It's an approach the Rice researchers describe as "acquire first, ask questions later." Many pictures, however, have portions that contain relatively little information, such as a clear blue sky or a snowy white background. Conventional cameras record every pixel and later eliminate redundancy with compression algorithms.

The single pixel camera, on the other hand, compresses the image data via its hardware before the pixels are recorded. As a result, it's able to capture an image with only thousands of pieces of information rather than millions.
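
The scaling behind "thousands rather than millions" can be sketched with a standard compressed-sensing rule of thumb: roughly M ≈ C·K·log(N/K) random measurements suffice for a scene with K significant transform coefficients. All the constants below are illustrative assumptions, not figures from the Rice paper:

```python
import math

N = 128 * 128        # pixels in a small test image
K = 500              # assumed significant wavelet coefficients (illustrative)
C = 4                # small oversampling constant (illustrative)

# Compressed-sensing rule of thumb: M ~ C * K * log(N / K) random
# measurements suffice to recover a scene with K significant coefficients.
M = int(C * K * math.log(N / K))

print(M, N)  # a few thousand measurements versus 16,384 pixel reads
```

With these (assumed) numbers the camera records a few thousand detector readings instead of one reading per pixel, which is the whole point of compressing in hardware before recording.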

The compression is achieved with an array of tiny, movable mirrors. Various mirror arrangements encode information about the photographic subject as a whole, in lieu of the point-by-point image recording in a normal camera.
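
The measurement-and-reconstruction idea can be sketched in a few lines of NumPy. Each reading is just an inner product between a random ±1 mirror pattern and the scene; a sparse scene can then be recovered from far fewer readings than pixels. This is a minimal sketch assuming a scene that is sparse in the pixel basis and using orthogonal matching pursuit as the solver — a standard compressed-sensing method, not necessarily the one the Rice group used:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 256   # pixels in the (flattened) toy scene
K = 5     # nonzero pixels -- the scene is assumed sparse
M = 100   # single-pixel measurements, well under N

# Sparse test scene: mostly zeros, a few bright pixels.
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.uniform(1.0, 2.0, K)

# Each measurement: the mirrors flip to a random +-1 pattern and the
# photodiode records one number, the inner product <pattern, scene>.
Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)
y = Phi @ x

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the pixel most
    correlated with the residual, then re-fit by least squares."""
    residual, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, idx], y, rcond=None)
        residual = y - Phi[:, idx] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[idx] = coef
    return x_hat

x_hat = omp(Phi, y, K)
print(np.linalg.norm(x - x_hat))  # should be tiny: recovered from M << N samples
```

The random patterns matter: they make every measurement carry a little information about the whole scene, which is what lets the solver get away with so few of them.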

One potential payoff of this sort of research is that it may make conventional digital cameras much better. If a single pixel can do the job of an array of pixels, as the Rice University team shows, then you could potentially get each of the pixels in a megapixel camera to do extra duty as well. Effectively, you can multiply the resolution of your camera with the techniques developed in a single pixel camera.

The technology could make cameras much cheaper by letting us get by with fewer pixels, or perhaps lead (some day) to gigapixel resolution from megapixel cameras. In addition, the researchers say, the single pixel detector can be replaced with devices that register other wavelengths of light, potentially leading to images collected with light outside the ranges that CCD and CMOS detectors can handle.

You can peruse the paper that the researchers presented at the Optical Society of America meeting a few weeks ago in Rochester, New York, if you want all the details. For more readable information, visit their website or read the Physics News Update item my friends at AIP posted recently.

20 comments:

  1. In 1999, scientists at Los Alamos National Lab did essentially the same thing, except they went one better: they also added phase detection by heterodyning the receiver.

    Instead of using micro-mirrors, the Los Alamos team used an LCD, which was a more mature technology at the time. And instead of random modulation, they used a progression of Zernike polynomials, achieving much more control over the data compression in fewer measurements than a non-orthogonal random expansion.

    Additionally, here's a patent on another method:
    A patent for "A single element detector acts as an array"

    ReplyDelete
  2. So, let's say you've got a one-pixel camera and would like to take a 1M-pixel photograph with an exposure time of 1/100th of a second. That would leave 1/(100*1000000)th of a second of exposure time per pixel. . .

    Will this really improve the image. . .
    I don't think so.

    Kind regards, the Anonymous Coward. . .

    ReplyDelete
  3. I guess it's not a new idea. Cameras for mechanical television were single-pixel too. In those days the image was strange, with lines floating around. But maybe this will be useful in the future.

    ReplyDelete
  4. If it can really capture colors outside the range of CCD and CMOS, then I would say hell yes, this is an improvement. What good is the megapixel race without more realistic images?

    ReplyDelete
  5. >
    > So, let's say you've got a one pixel
    > camera and would like to take a 1M
    > pixel photograph with an exposure
    > time of 1/100th of a second, that
    > would leave 1/(100*1000000) th of a
    > second exposure time per pixel. . .
    >

    No, I believe one of the points made in the paper was that constructing an image with 1-Mpixel effective resolution requires significantly fewer samples with this approach. In other words, it won't take a million samples in that case.

    ReplyDelete
  6. Quoting another anonymous coward: "1/(100*1000000)th of a second"

    Hm, I wouldn't mind having a camera that fast, but it might gather too little light in that timeframe (just one hundred-millionth of a second). Perhaps math isn't his strong side and he meant one 100th times a million, or in numbers (1/100)*1000000th. Small difference in text, big difference in math ;)

    ReplyDelete
  7. Important for mobile-phone cameras.
    Everyone who uses them knows how pictures from mobile phones look when you view them close up: there's lots of noise in the photo.
    If we can reduce the CCD's pixel count, we'll get better pictures from mobile phones.

    ReplyDelete
  8. Even more impressive:
    http://graphics.stanford.edu/papers/dual_photography/

    Instead of mirrors they used a standard projector. By exploiting some mathematical properties of imaging, they also managed to reconstruct a picture that appears as if it were taken from the perspective of the light source.

    ReplyDelete
  9. I wonder if something similar could be done with sonar and radar allowing for better images with fewer transducers/antennas.

    Another anonymous coward.

    ReplyDelete
  10. In reaction to 1/(100*1000000):

    This sets the maximum reaction time for the scanning mirror: the mirror would have to move in 10^-8 seconds, because you don't want to expose one pixel for 1/100th of a second but the whole scene. You'd have to divide that time among a million pixels, so each pixel would get an incredibly small exposure time and the result would be very dark.

    ReplyDelete
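
The arithmetic above assumes one mirror pattern per output pixel. With compressed sensing the pattern count drops substantially, though the required rate for a 1/100 s exposure would still be far above the kHz-to-tens-of-kHz switching rates of real micromirror devices, which is consistent with the prototype needing minutes per shot. A rough sketch, where the sample count M is an assumption rather than a figure from the paper:

```python
exposure = 1 / 100         # desired total exposure, seconds
N = 1_000_000              # one pattern per pixel (the comment's assumption)
M = 400_000                # hypothetical compressed-sensing pattern count

rate_naive = N / exposure  # patterns per second if every pixel needs one
rate_cs = M / exposure     # lower with compressed sensing, but still far
                           # above what real micromirror arrays can switch

print(rate_naive, rate_cs)
```

Either way, each pattern's dwell time is what limits the light budget, so slow pattern rates (and long total exposures) are the practical trade-off.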
  11. What happens if you have a dead pixel?

    ReplyDelete
  12. Tough luck on the dead pixel; most manufacturers require five or more defective pixels before they deem the unit defective.

    ReplyDelete
  13. Moving mirrors: does that mean there will be a LOT more moving parts than in a conventional digital camera? Sounds like it'd be prone to more frequent breakdowns, then :-(

    ReplyDelete
  14. What do PowerBall and the ONE PIXEL camera have in common?

    Parts inside moving at speeds like 10,000 RPM (or RPms for the camera)?

    ReplyDelete
  15. Just Some Facts About Alan Turing

    Alan Mathison Turing, OBE (June 23, 1912 – June 7, 1954), was an English mathematician, logician, and cryptographer. Turing is often considered to be the father of modern computer science. With the Turing test, Turing made a significant and characteristically provocative contribution to the debate regarding artificial intelligence: whether it will ever be possible to say that a machine is conscious and can think. He provided an influential formalisation of the concepts of algorithm and computation with the Turing machine, formulating the now widely accepted "Turing" version of the Church–Turing thesis, namely that any practical computing model has either the equivalent or a subset of the capabilities of a Turing machine.

    ReplyDelete
  17. It's not about who did it first. These guys made the first practical implementation of compressed sensing, a new signal-processing theory. This theory is incredibly powerful, and now they've shown that it is also feasible in practice. That's why they used randomly flipped micro-mirrors: to comply with CS theory.
    We're going to see some interesting stuff. Compressed sensing started around 2004; in just a couple of years we've gone from the math to a working camera!

    ReplyDelete
  18. Yet another waste of time with a catchy name?

    ReplyDelete
  19. The way amateur radio astronomy works is by using a directional antenna as a "single-pixel" detector. As the earth spins, you sweep a line in the sky. Stack up the lines and you have a 2-D image. So I immediately thought hey, this technique could help.

    But of course, "mirrors" to combine various (in this case, random) waveplanes onto the receiver are hard to come by in the radio spectrum. Then, as I thought about it longer, I realized this concentration of image data onto a single-pixel detector is WHAT AN ANTENNA ALREADY DOES! There's just one driven loop and a bunch of extra metal included to do this 'lensing.' Of course, it's a regular pattern (not random) because it can't change between measurements.

    ReplyDelete