Spring 2012:LFM Proposal

From Course Wiki
Revision as of 02:57, 10 March 2012 by Leanna Morinishi (Talk | contribs)


Abstract

A light field microscope (LFM) is capable of producing a 3-dimensional rendering of a sample using information from a single image. The addition of a microlens array, a grid of lenses with diameters on the microscale, to a traditional illumination microscope grants this capability. Here we propose a set of low-cost light field fluorescence microscope experiments for use in an undergraduate teaching laboratory.

Introduction

Traditional light microscopy has been used to illuminate miniature biological specimens since the mid-1600s. Since then, researchers have worked to understand and (attempt to) circumvent the inherent limitations of the technique. One such limitation is the superimposition of out-of-focus features on the captured image: with no depth information from the subject, it is difficult to separate data specific to the focal plane from the rest of the image. The light field microscope allows angular information to be recorded in a single image. The light field is a function that represents the amount of light traveling in every direction through every point in space. By recording a sample’s light field, one can produce perspective and parallax views of the specimen. This is accomplished by inserting a microlens array into a conventional bright field microscope and analyzing the images in silico. A microlens array is an optical component containing a grid of thousands of lenses with diameters on the micron scale. Applying 3D deconvolution to the focal stacks derived from the array produces a series of cross sections that allow for a 3D reconstruction of the sample.

Light fields and lens arrays have their roots in photography [Lippman 1908] and have evolved as the technology for manufacturing mini- and microlenses has improved. Lens arrays have been used to build a microscope with an ultra-wide field of view [Weinstein 2004], and have recently been utilized in a commercial light field camera [Lytro.com, Ng et al. 2005]. The first LFM built to capture multi-viewpoint imagery was described in 2005 [Levoy et al. 2005], along with open source software to assist in the deconvolution, image stacking, and 3D volume rendering process. The setup was later improved with a second microlens array for angular control over illumination [Levoy et al. 2009].
Confocal microscopy also provides a method of visualizing 3-dimensional samples and has better spatial resolution than light field microscopy, but its images require scanning the entire sample at every depth of interest. Light field microscopy has the advantage of capturing relative depth information in a single photo. This additional visual information comes at a cost, however: diffraction places an upper limit on the lateral and axial resolution of the images, so we sacrifice spatial resolution to obtain angular resolution. As we attempt to visualize smaller and smaller features, we run into practical limitations of numerical aperture (NA), superimposed features, and limited depth of field; light propagating from an object through a lens is band-limited by NA/λ.
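As a concrete illustration of the band limit mentioned above (the numbers below are our own example values, not figures from this proposal): the smallest resolvable lateral feature is roughly λ/(2·NA), the Abbe diffraction limit.

```python
# Illustrative calculation of the diffraction-limited lateral resolution.
# Light from the object is band-limited by NA / lambda, so the smallest
# resolvable feature is approximately lambda / (2 * NA) (the Abbe limit).

def abbe_limit_nm(wavelength_nm: float, na: float) -> float:
    """Smallest resolvable lateral feature size, in nanometers."""
    return wavelength_nm / (2.0 * na)

# Example: green light (550 nm) through a mid-range 0.8 NA objective.
print(abbe_limit_nm(550.0, 0.8))  # ~344 nm
```

Raising the NA improves this limit, but at the cost of the working distance and depth of field, which is part of the tradeoff discussed above.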


…………

Theory

Experimental Goal

Development Plan

There already exists comprehensive research on both light field sensing/processing and confocal microscope design. We plan to integrate the two, so the main development challenges lie in creating effective separate parts and interfacing them. To make testing the sensor and the microscope apparatus more constrained, the sensor will be built and tested in a simple environment before being used in a confocal-like apparatus. Testing the apparatuses will consist of capturing ray bundles from different samples and evaluating the results. Development will be complete when the microscope can readily produce confocal-like images. Once built, the microscope will have a certain operating depth range and other restrictions on its ability to capture ray bundles and output images. We plan to first probe these parameters by testing the microscope with a sample of silica microbeads suspended in a clear gel. This allows for a fairly low-complication capture and will hopefully reveal any quirks of the setup. We also want to inspect the microscope’s maximum working depth in materials of varying opacity.

This project is restricted to nine weeks of work and experimentation. We have set aside three weeks for experimentation, with the six preceding allocated to constructing the apparatus and a method of processing the data from it. By the end of Week 1, we intend to have gathered the necessary equipment and materials, especially the microlens array and camera. We also intend to start assembly of a very simple brightfield transmittance microscope, which will be used to test our processing of the light field. By the end of Week 2, the simple microscope and the light field imager (microlens array and camera) will be completely built and transmitting images. This is an important milestone, since it allows us to focus on getting the imager working with software in the upcoming weeks.
In Weeks 3-4, we plan to focus solely on transforming the ray bundle from the light field imager into a coherent and sensible image stack. We intend to be able to choose arbitrary synthetic apertures for any image and to focus on any arbitrary plane within the light field’s capacity. At the end of Week 4, the light field setup should function to the full specifications of the final microscope. By the end of Week 5, we want to have built most of the final confocal-type microscope; by this time, all light paths should be set up, and the apparatus should only be missing the glass optics and the camera. By the end of Week 6, we plan to finish the confocal-type light field microscope by installing all lenses, mirrors, and objectives, as well as the light field imager. We intend to test and tweak the software to work with the new apparatus, though this should theoretically be a small change. Weeks 7-9 will be devoted to experimentation with the system.
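The Weeks 3-4 processing step (focusing on an arbitrary synthetic plane) can be sketched as classic shift-and-sum refocusing. This is a hypothetical sketch, not a spec of the software we will write: the 4D array layout L[u, v, y, x] (u, v angular samples under each lenslet; y, x lenslet positions) and the alpha parameterization of the focal plane are our assumptions.

```python
import numpy as np

def refocus(lf: np.ndarray, alpha: float) -> np.ndarray:
    """Synthetically refocus a 4D light field by shift-and-sum.

    Each sub-aperture image lf[u, v] is shifted in proportion to its
    angular offset from the center, then all views are averaged.
    alpha = 0 reproduces the native focal plane; other values select
    synthetic planes within the light field's axial range.
    """
    n_u, n_v, n_y, n_x = lf.shape
    cu, cv = (n_u - 1) / 2.0, (n_v - 1) / 2.0
    out = np.zeros((n_y, n_x))
    for u in range(n_u):
        for v in range(n_v):
            dy = int(round(alpha * (u - cu)))  # shift grows with angular offset
            dx = int(round(alpha * (v - cv)))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (n_u * n_v)
```

A full synthetic aperture averages all (u, v) views as above; a smaller aperture would simply restrict the loop to a central subset of views, trading light gathering for depth of field.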

There are two major investments in this project: a microlens array and an appropriately specified camera. The sort of microlens array required is available from a variety of retailers, including ThorLabs. We are interested in an array of ~150 µm-diameter lenses with a reasonable pitch and a displacement angle of at least ±5º. We are trying to maximize both the number of rays captured by the camera (proportional to the number of microlenses over the sensor) and the spatial resolution for every ray (proportional to the number of pixels under a single lenslet). This process sacrifices sensor spatial resolution for resolution along the depth axis. Because of this, we require a camera with ~7 µm pixels and a resolution of over 5 megapixels. The spatial resolution of the light field processing output will vary with the depth chosen, but with a 5 megapixel sensor, the minimum output resolution will be around 0.3 megapixels. The camera and microlens array are the only pieces of equipment that need to come from outside sources; the rest of the materials will be available from lab supplies and will consist of optics components for building the two microscopes, a brightfield transmittance microscope and a confocal-like microscope.
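A back-of-the-envelope check of the lenslet-scale arithmetic, using the figures quoted above (150 µm lenslets, 7 µm pixels, 5 MP sensor). The simple model below, where pixels under one lenslet set the per-lenslet sampling and the lenslet count sets the sensor coverage, is our assumption; the actual output resolution will depend on the deconvolution and refocusing software.

```python
# Assumed figures from the equipment discussion above.
lenslet_pitch_um = 150.0
pixel_pitch_um = 7.0
sensor_pixels = 5.0e6

# Pixels spanned by one lenslet (per side), and total pixels under it.
pixels_per_lenslet = lenslet_pitch_um / pixel_pitch_um  # ~21.4 per side
samples_per_lenslet = pixels_per_lenslet ** 2           # ~459 pixels

# Number of lenslets that tile a 5 MP sensor under this simple model.
lenslets_on_sensor = sensor_pixels / samples_per_lenslet

print(round(pixels_per_lenslet, 1), round(lenslets_on_sensor))
```

This makes the tradeoff explicit: a finer pixel pitch or larger lenslets buy more samples per lenslet, while smaller lenslets buy more lenslets across the sensor.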