Paper-thin lenses.

Discussion in 'Open Discussion' started by lenshoarder, Aug 25, 2012.

  1. lenshoarder

    lenshoarder Mu-43 All-Pro

    Nov 7, 2010
  2. Promit

    Promit Mu-43 All-Pro

    Jun 6, 2011
    Baltimore, MD
    Real Name:
    Promit Roy
    Yep, as long as you only want to take photos of lasers.
  3. spatulaboy

    spatulaboy I'm not really here

    Jul 13, 2011
    North Carolina
    Yeah... but how is the bokeh? :tongue:
    • Like x 2
  4. Conrad

    Conrad Mu-43 Veteran

    It's more like a diffractive optical element. It's not clear to me whether the technique introduces a spatially varying true phase shift (wavelength independent), or a spatially varying time delay on the surface of the silicon (which results in a wavelength dependent phase shift). In the former case this would be phenomenal. In the latter case, the CA will be unimaginable. A significant achievement nonetheless, but rather useless for photography. I suspect the latter.
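    To put rough numbers on the wavelength-dependent case (my figures, not from the article): a fixed time delay τ gives a phase shift of 2πcτ/λ, and a simple diffractive element refocuses other wavelengths at roughly f(λ) ≈ f₀·λ₀/λ, so a quick sketch looks like this:

    ```python
    import math

    # Rough numbers (illustrative, not from the article): a fixed time delay tau
    # gives a phase shift phi = 2*pi*c*tau/lambda, so the phase the surface
    # imparts changes strongly across the visible band.
    c = 3.0e8                 # speed of light, m/s
    lambda_green = 550e-9     # design wavelength, m
    tau = lambda_green / c    # delay sized to exactly one wave at 550 nm

    for lam in (450e-9, 550e-9, 650e-9):
        phi = 2 * math.pi * c * tau / lam
        print(f"{lam * 1e9:.0f} nm: phase = {phi / math.pi:.2f} pi")

    # A diffractive element designed for lambda0 refocuses other wavelengths at
    # roughly f(lambda) = f0 * lambda0 / lambda -- a large focal shift from
    # blue to red, i.e. the "unimaginable CA" case.
    f0_mm = 25.0  # made-up design focal length at 550 nm
    for lam in (450e-9, 650e-9):
        print(f"{lam * 1e9:.0f} nm: focal length ~ {f0_mm * lambda_green / lam:.1f} mm")
    ```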
  5. Promit

    Promit Mu-43 All-Pro

    Jun 6, 2011
    Baltimore, MD
    Real Name:
    Promit Roy
    It sounds to me like you tune for specific wavelengths. Wonderful for fiber optics, not that helpful for traditional imaging.
  6. kevwilfoto

    kevwilfoto Mu-43 Veteran

    Sep 23, 2011
    I wonder if this would be useful in small applications or where light weight is crucial, such as cell phones or eyeglasses.
  7. KVG

    KVG Banned User

    May 10, 2011
    yyc(Calgary, AB)
    Real Name:
    Kelly Gibbons

    typed on my phone, sorry.
  8. meyerweb

    meyerweb Mu-43 Hall of Famer

    Sep 5, 2011
    This. It's not at all clear how this would work with a full spectrum, nor how one would support it physically and make it robust enough to stand up to typical field use, nor how one would use a variable diaphragm with it. How does one build different focal lengths, or build a zoom? At a minimum, I suspect there will be a pretty sizable lag between these lab tests and affordable, effective lenses for photography.

    Aside from that, this won't help with the real problem most photographers have: composition. :wink:
  9. David A

    David A Mu-43 All-Pro

    Sep 30, 2011
    Brisbane, Australia
    I don't think the support and robustness issues are all that difficult to solve. You could sandwich the metasurface with the openings between two thin layers of optical glass, for instance, or simply bond it to the rear of a protective single layer of optical glass and still have a thin, flat "lens".

    Focal length? Well, you could go the 40 MP Nokia phone camera route and simply capture the image from smaller and smaller areas of the sensor to get the effect of different focal lengths, while producing images much smaller than 40 MP. In other words, if the captured area has a greater pixel count than your output file size, you use pixel binning. That's one way of dealing with the need to capture different fields of view, which is the reason we use lenses of different focal lengths in the first place. Nokia is already doing this with a fixed focal length lens in a camera that can deliver different fields of view.
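    A back-of-the-envelope sketch of that crop-and-bin idea (illustrative figures, not Nokia's actual numbers):

    ```python
    # Crop-and-bin sketch (assumed numbers): a fixed lens plus a high-MP sensor,
    # where reading out a smaller central region acts like a longer lens and
    # binning keeps the output file at a sensible size.
    sensor_mp = 40.0     # assumed full-sensor resolution
    base_focal_mm = 8.0  # assumed fixed lens focal length
    output_mp = 5.0      # desired output file size

    for crop in (1.0, 1.5, 2.0, 2.8):
        used_mp = sensor_mp / crop ** 2   # pixels inside the cropped region
        eq_focal = base_focal_mm * crop   # narrower view, like a longer lens
        mode = "bin down" if used_mp >= output_mp else "upscale"
        print(f"crop {crop}x: ~{eq_focal:.0f} mm equivalent, "
              f"{used_mp:.1f} MP available, {mode} to {output_mp:.0f} MP")
    ```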

    There may be another way. The focal length of an optical lens controls the field of view from which light is gathered and the angle over which it is deflected to reach the sensor. If you can control the field from which light reaching the metasurface is gathered, and the angle over which it is dispersed from the metasurface, then you can simulate different optical focal lengths. That would allow you to sell different units with different "effective focal lengths".
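    A tiny sketch of that relationship (assumed sensor width, nothing from the article): choosing the angle over which the metasurface gathers light amounts to choosing a focal length, since the horizontal field of view is 2·atan(w/2f):

    ```python
    import math

    sensor_width_mm = 17.3  # assumed, roughly a Four Thirds sensor width

    for eq_focal_mm in (12, 25, 50, 100):
        # Horizontal field of view for a given equivalent focal length.
        fov_deg = 2 * math.degrees(math.atan(sensor_width_mm / (2 * eq_focal_mm)))
        print(f"{eq_focal_mm} mm equivalent -> gather light over ~{fov_deg:.0f} deg")
    ```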

    Variable diaphragm? Why not simply a variable neutral density filter system of some kind, some way of controlling the amount of light reaching the metasurface over a 12 EV or greater range? It sounds as if this "lens" has oodles of depth of field, so in that sense it may be similar to what I've heard of the lens array in the Lytro light field camera, and depth of field may need to be controlled in a different way than through aperture as in our current camera systems.

    Actually, I think it would be a real advantage if we could separate control of depth of field from aperture: use the combination of aperture and shutter speed to control exposure and the degree of desired subject movement, while controlling depth of field separately. Imagine being able to get really deep depth of field with an f/1.4 and short shutter speed combination, or really narrow depth of field with an f/16 and long shutter speed combination. What if we could blur water movement, or a moving subject, or a moving background while panning, easily, while keeping total depth of field control separate from our choice of aperture and shutter speed? I think that would open up a lot of interesting possibilities.
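    A rough sketch of why that separation would matter (illustrative figures, not from the article): in a conventional lens the f-number sets both the exposure and the depth of field, so the two are coupled, whereas an ND filter changes only the exposure:

    ```python
    import math

    f_mm = 25.0      # assumed focal length
    coc_mm = 0.015   # assumed circle of confusion for a small sensor

    def hyperfocal_m(f, n, c):
        # Hyperfocal distance in metres; everything from half of it to
        # infinity is acceptably sharp.
        return (f * f / (n * c) + f) / 1000.0

    def exposure_ev(n, t):
        # EV = log2(N^2 / t); equal EV means equal exposure.
        return math.log2(n * n / t)

    # Conventional coupling: stopping down for depth of field costs light.
    for n, t in ((1.4, 1 / 4000), (16, 1 / 30)):
        print(f"f/{n}: hyperfocal ~{hyperfocal_m(f_mm, n, coc_mm):.1f} m, "
              f"EV {exposure_ev(n, t):.1f} at 1/{1 / t:.0f} s")

    # With a variable ND instead of a diaphragm: keep f/1.4 (thin depth of
    # field) and add ND to reach a long, motion-blurring shutter speed.
    nd_stops = math.log2((1 / 30) / (1 / 4000))
    print(f"ND needed to swap 1/4000 s for 1/30 s at the same aperture: {nd_stops:.1f} stops")
    ```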

    I'll happily admit that most of the science involved here is way beyond my level of expertise, and that a lot of what isn't way beyond my level is still beyond my level, but from what I read and the little I do know, I don't necessarily see the issues you raise being all that problematic.

    I see the big issues lying elsewhere. There are comments about the thinness of the array in the article, but I saw no comments about the size of its surface, which is the equivalent of the area of a normal lens's front element, or about how it relates to sensor size. Is the front area equivalent to that of normal lenses, or smaller, which would be fine, or is it much larger, which would be bad?

    Also, there's no mention of the distance from the metasurface to the sensor. That distance is important because it relates to flange distance. If it is too small, then you really have to mount the sensor and metasurface in a single unit, and then you're effectively talking about a fixed lens camera unit if you regard the combination of lens and sensor as a "camera". You'd probably have to use an electronic shutter rather than a physical one, and while you could create an interchangeable lens camera body and lens system, it would be very different to what we're used to. The body would hold a data storage system and control electronics; the lens would house the metasurface, sensor, diaphragm and shutter systems. The "lens units" might have to do much more than our current lens units do, simply because of the need to incorporate the sensor and shutter in the lens assembly if the flange distance equivalent is too small to allow separation of shutter and lens assemblies.

    There's a lot that's unclear here, but my feeling is that they can probably deliver the equivalent of our camera lenses, in different focal length equivalents, with this approach. I also think the article leaves the actual implications for camera design far from clear, and that the real "issues" for camera design may not be where people initially think they are.