## Abstract

We describe a compressive snapshot color polarization imager that encodes spatial, spectral, and polarization information using a liquid crystal modulator. We experimentally show that polarization imaging is compressible by multiplexing polarization states and present the reconstruction results. This compressive camera captures the spatial distribution of four polarizations and three color channels. It achieves 0.027° angular resolution, an average extinction ratio above 10^{3}, and reconstruction PSNR above 30 dB.

© 2015 Optical Society of America

## 1. Introduction

The emergence of imaging spectroscopy and imaging polarimetry has enabled applications in remote sensing [1, 2], biomedical diagnosis [3], and scattering imaging [4]. While the spectral information usually reflects the physical and chemical properties of the object [5], polarimetry analyzes surface properties by comparing different orientations of the electromagnetic field received from the object; for example, it has been used for contrast enhancement and material analysis [2]. Since the polarization parameters have little physical correlation with the spectrum, measuring both provides complementary information for deciphering the object. Combining polarimetry with photometry yields spectral polarization imaging, which diagnoses the spectral and polarization distribution of the scene. Recording this four-dimensional (4D) datastream requires an advanced sampling strategy.

Conventional spectral polarization imagers use sequential measurements or parallel recording to sample a high dimensional data-cube. Sequential measurement samples several two-dimensional (2D) or three-dimensional (3D) sub-datasets by scanning the scene or switching the polarization and/or bandpass filters. Such mechanical movements cause lengthy acquisition times, which limits the application to dynamic scenes. Motion-related errors such as beam wandering and jitter noise must also be considered and minimized by system optimization [6–8]. Parallel sampling can be achieved by using multiple detectors and beam splitters to record several sub-datasets simultaneously. An alternative parallel method uses thin film filter arrays with multichannel spectral [9] and polarization filters [10–17], known as division of focal plane (DoFP) sampling. This strategy suffers from technical challenges such as the low extinction ratio of the micro-polarizer array and the difficulty of fabricating more than 10 spectral-polarization channels. Alternatively, the computed tomography imaging channeled spectro-polarimeter (CTICS) uses a computer generated hologram to project an unfolded Fourier transformed spectral-polarization data-cube on the detector array [18]. Similarly, the polarization grating imaging spectro-polarimeter uses multiple polarization sensitive gratings to project the dispersed spectral images based on their polarization [19, 20]. Although well-registered images are captured from the same aperture, the projected data-cube occupies different regions on the image plane, which reduces the effective numerical aperture.

Those strategies follow the dimensional conservation of imaging techniques, which only allows the detector to record a limited number of object dimensions from the scene: the recordable dimensions must be equal to or less than the dimensions of the detector. Compressive sensing breaks this dimensional conservation by projecting the object space to the image space with a designed modulation and then reconstructing the object through computational processing [21]. By adapting hardware to generate amplitude, phase, or temporal modulation, compressive imagers enable signal compression in spectral imaging [22], diffraction tomography [23], x-ray scattering imaging [24], and mass spectrometry [25].

The multiplex sampling strategy expands the temporal [26], spectral [22], or polarization [27] sensitivity in compressive optical sensing. By using a coded aperture, this strategy can be viewed as a code division multiple access (CDMA) process, which encodes each channel with a spatially independent code. The detector integrates all the encoded channels, and the information is then recovered into separate channels based on their projected code patterns. This coding strategy amplifies the difference between band-limited channels by projecting them onto their corresponding spatial positions on the detector plane. However, this modulation technique can only sense the difference between two orthogonal polarization states [27].
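The CDMA idea can be sketched in a few lines: if the channels are assumed constant over a small superpixel while the codes vary per pixel, the per-pixel measurements become an overdetermined system that demultiplexes the channels (a toy noiseless sketch, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

# Two polarization channels, assumed constant over each 2x2 superpixel
f = rng.uniform(0.0, 1.0, size=2)

# Independent per-pixel codes for each channel within the superpixel
T = rng.uniform(0.0, 1.0, size=(4, 2))   # 4 pixels x 2 channels

g = T @ f                                 # detector sums the coded channels

# Demultiplex: 4 measurements, 2 unknowns -> overdetermined least squares
f_hat, *_ = np.linalg.lstsq(T, g, rcond=None)
assert np.allclose(f_hat, f)
```

In the noiseless case the random codes make the system full column rank almost surely, so the two channels are recovered exactly.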

Here we demonstrate a snapshot color polarization imager that uses a spatial light modulator (SLM) to encode 4D spatial, spectral, and polarization information on a 2D detector with a similar CDMA process. The SLM is an array of micro cells of parallel aligned nematic liquid crystal on a reflecting layer [28], which provides polarization and wavelength dependent transmission patterns to encode the scene and then multiplexes them on a 2D detector. After the compressed measurement, the polarization and color information is decoded by an iterative optimization algorithm. For the polarimetry, we decompose the signal into 0°, 90°, 45°, and 135° polarization channels to resolve the linear polarization state. This linear polarimetry suffices for applications without significant circular polarization, such as most natural scenes.

## 2. Theory

The Stokes vector describes the average irradiance between different polarization subsets to express the polarization properties of the electromagnetic field [29]. Three of the Stokes parameters are related to the linear polarization state; the relationship can be derived from the following equations:

$$S_0 = I_0 + I_{90}, \qquad S_1 = I_0 - I_{90}, \qquad S_2 = I_{45} - I_{135},$$

where *I*_{0}, *I*_{90}, *I*_{45}, and *I*_{135} represent the average intensity filtered by an ideal polarizer oriented at the corresponding angle. Conventionally, multiple filtered images with different polarization orientations have been required to recover the polarization information. Compressive sampling, on the other hand, only requires a snapshot to measure the *S*_{0} to *S*_{2} Stokes parameters. Here we propose using a Liquid Crystal on Silicon (LCoS) based SLM to modulate the color and polarization signal so that four intensity subsets are multiplexed into a single measurement. In the SLM, the electrically controllable optical anisotropy of the liquid crystal (LC) is used for signal modulation. Each layer of the LC can be considered a thin birefringent material: its long axis and short axis present different refractive indices to the electromagnetic wave. Since the orientation of the LC molecules can be controlled by the applied voltage, the controllable birefringence makes the SLM a variable wave-plate. The SLM provides up to 3*π* of programmable phase retardation to create a wavelength dependent polarization state rotation. We sample the vertical fraction of the polarization image to transfer the phase modulation into a detector recognizable amplitude modulation. Here, *β*(*λ*) denotes the variable birefringence generated by the modulator, which is a function of wavelength. Figure 1 illustrates the amplitude modulation in different polarization and color channels. Each sub-image describes the relationship between the transmitted pixel counts and the applied voltage on the SLM. Since the applied voltage determines the birefringence of the modulator, this knowledge can be used to map the transmission code to the color-polarization signal. These polarization dependent transmission patterns multiplex the polarized images into one encoded intensity measurement. We note that this multiplexing strategy offers the freedom of choosing the basis of polarization state decomposition: any two orthogonal polarization states can be assigned as one pair of polarization channels to decompose the incident light. This paper chooses linear horizontal, linear vertical, linear 45°, and linear 135° as the decomposition basis to analyze all linear polarization states.
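As a numerical illustration of these Stokes relations (hypothetical intensity values, not measured data):

```python
import math

def linear_stokes(i0, i90, i45, i135):
    """Compute the first three Stokes parameters from four
    polarizer-filtered intensity measurements."""
    s0 = i0 + i90            # total intensity
    s1 = i0 - i90            # horizontal vs. vertical preference
    s2 = i45 - i135          # +45 deg vs. 135 deg preference
    return s0, s1, s2

# Light linearly polarized at 45 deg: I0 = I90 = 0.5, I45 = 1, I135 = 0
s0, s1, s2 = linear_stokes(0.5, 0.5, 1.0, 0.0)
dolp = math.hypot(s1, s2) / s0                 # degree of linear polarization
aolp = 0.5 * math.degrees(math.atan2(s2, s1))  # angle of linear polarization
print(s0, s1, s2, dolp, aolp)  # 1.0 0.0 1.0 1.0 45.0
```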

## 3. System design and mathematical model

A schematic of this imager is shown in Fig. 2. This design includes the optical elements which relay, modulate, and record the characteristics of light. The scene of interest is first imaged by an objective lens (L1). It is then relayed by the collimating lens (L2) and the imaging lens (L3) onto an intermediate image plane. A pseudo-random voltage map is applied on the SLM to generate the phase retardation that encodes the image. After being reflected by the silicon layer of the SLM and filtered by the polarizer, the applied phase retardation becomes a wavelength and polarization dependent amplitude modulation added to the scene. Finally, the modulated image is projected by the imaging lens (L4) and recorded by a color detector. The LCoS device has a considerable pre-tilt angle, which causes additional phase retardation and transfers linear polarizations into elliptical polarizations [28]. This artifact reduces the contrast in transmission between two orthogonal linearly polarized images after the modulated signal is filtered by the polarizer. Here we apply a quarter wave-plate as a compensator to increase the contrast ratio. The modulation process can be represented by the following mathematical model:

$$g(x, y, \lambda) = T_H(x, y, \lambda)\, f_H(x, y, \lambda) + T_V(x, y, \lambda)\, f_V(x, y, \lambda),$$

where *g* represents the spectral density on the detector plane, *f* represents the spatial-spectral distribution of the scene, and *T* is the wavelength and polarization dependent transmission code pattern. The horizontal and vertical subscripts represent a given pair of orthogonal states which decompose the incident polarization. The color detector integrates the spectral density into three color channels *λ*_{R}, *λ*_{G}, and *λ*_{B}. Considering the detector array has pixel size Δ and measurement noise *w*, the discrete form of the measurement model becomes:

$$g_{mn} = \sum_{k} \sum_{p} T_{mnkp}\, f_{mnkp} + w_{mn}.$$

Finally, we denote the source spectral density discretely as *f*_{mnkp} and the color and polarization dependent code patterns as *T*_{mnkp}. Here *m* and *n* denote the (*m*, *n*)^{th} spatial location, *k* represents the spectral channel defined by the color filter, and *p* indexes the polarization subsets. We derive this measurement process in matrix form as **g** = **Hf** + **w**, where $\mathbf{H}\in {\mathbb{R}}^{\left(\frac{M}{2}\times \frac{N}{2}\times 4\right)\times \left(\frac{M}{2}\times \frac{N}{2}\times 4\times 4\right)}$ represents the forward matrix of the system, $\mathbf{f}\in {\mathbb{R}}^{\left(\frac{M}{2}\times \frac{N}{2}\times 4\times 4\right)\times 1}$ represents the object datacube discretized at the dimensions of the spectral and polarization compression ratios, and $\mathbf{w}\in {\mathbb{R}}^{\left(\frac{M}{2}\times \frac{N}{2}\times 4\right)\times 1}$ represents the sensor noise. The forward matrix **H** approximates the sensing process that encodes each color and polarization channel identically to map the continuous 4D datastream **f** onto the 2D measurement **g**.
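A minimal numerical sketch of this discrete model, with toy dimensions and random codes standing in for the calibrated patterns (illustrative only, not the paper's calibration data):

```python
import numpy as np

rng = np.random.default_rng(0)

M, N, K, P = 8, 8, 3, 4   # toy spatial size, color channels, polarization channels

# Hypothetical calibration: one transmission code per (color, polarization) channel
T = rng.uniform(0.0, 1.0, size=(M, N, K, P))

# Toy scene: spectral density per pixel, color, and polarization channel
f = rng.uniform(0.0, 1.0, size=(M, N, K, P))

# Discrete forward model: the detector sums all coded polarization channels,
# g_{mnk} = sum_p T_{mnkp} f_{mnkp} + w_{mnk}
w = 0.01 * rng.standard_normal((M, N, K))
g = (T * f).sum(axis=3) + w

# Equivalent matrix form g = H f for one color channel: H is a wide matrix
H = np.zeros((M * N, M * N * P))
for p in range(P):
    H[np.arange(M * N), np.arange(M * N) + p * M * N] = T[:, :, 0, p].ravel()
g_mat = H @ np.concatenate([f[:, :, 0, p].ravel() for p in range(P)])
assert np.allclose(g_mat, (T[:, :, 0, :] * f[:, :, 0, :]).sum(axis=2).ravel())
```

The matrix form makes the compression explicit: **H** has four times as many columns as rows, so the system is underdetermined and needs the reconstruction algorithms of Section 5.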

## 4. Experimental setup and system calibration

Figure 3 shows the experimental prototype of this compressive camera. The optics in this camera include a 60 mm commercial objective lens (Jenoptik), a 75 mm achromatic collimating lens (Edmund Optics), two 75 mm imaging lenses (Pentax), a broadband non-polarizing beam splitter (Newport), an achromatic quarter wave-plate (Newport), and a linear polarizer (Newport). This camera has a 25° field of view and a 12.5 mm clear aperture. The detector is a 1280×720 color camera (GuppyPro, AVT) with 4.08 *μ*m square pixel size. The SLM used in this setup is a 1920×1080 phase-only liquid crystal modulator (Pluto, Holoeye) with 8 *μ*m resolution. All of these components are aligned on optical rails (Newport). A translation stage and a lab jack provide precise horizontal, vertical, and axial alignment of the SLM to correct the coding on the intermediate image plane. The applied voltage map on the SLM is an 8-bit pseudo-random pattern with 16 *μ*m square feature size, which provides a one-to-four mapping between the code patterns and the detector pixels. The SLM has a 60 Hz refresh rate, which could potentially be adopted for video-rate sensing; however, we use a single pattern during modulation to simplify the system calibration. We note that for effective modulation, the exposure time of each measurement should cover at least one modulation period, i.e., it should be longer than 16.7 ms.

The reconstruction performance is correlated with the reliability of the forward matrix **H**. A revised **H** is calibrated experimentally to account for errors that break the ideal mapping relationship, such as aberrations caused by the relay optics and sub-pixel misalignment. We record the calibrated **H** matrix by illuminating the camera with four spatially uniform polarization states and recording the modulation patterns sequentially. The light source is a white light LED filtered by a rotatable polarizer. We assign linear horizontal, linear vertical, linear 45°, and linear 135° as the calibration states because these two orthogonal pairs form the basis of all linear polarizations.
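The calibration idea can be sketched as follows: under spatially uniform illumination in a single polarization channel, the detector measurement directly reveals that channel's transmission pattern (a toy noiseless model; `measure` is a hypothetical stand-in for the real detector):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, P = 8, 8, 4

# Unknown transmission code of the system for each polarization channel
T_true = rng.uniform(0.0, 1.0, size=(M, N, P))

def measure(f):
    """Toy detector model: sum of the coded polarization channels (noise ignored)."""
    return (T_true * f).sum(axis=2)

# Calibration: illuminate with spatially uniform light in one polarization
# channel at a time; the measurement directly reveals that channel's code.
T_cal = np.stack([measure(np.eye(P)[p] * np.ones((M, N, P))) for p in range(P)],
                 axis=2)

assert np.allclose(T_cal, T_true)
```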

## 5. Reconstruction algorithms

Conventional linear inversion requires that the rank of the projection **H** equal the number of object modes in **f**. Also, the magnitude of the noise **w** should be small enough to achieve a reliable reconstruction. Compressive sampling is designed to use a smaller number of measurement modes to estimate the object; thus, conventional linear inversion methods, such as the pseudo-inverse and least squares, are incapable of reconstructing the object. Modern reconstruction algorithms solve this ill-conditioned sampling problem by using convex optimization, which can effectively use finite measurement modes to estimate higher dimensional scenes. Here we use Two-step Iterative Shrinkage/Thresholding (TwIST) [30] and Generalized Alternating Projection (GAP) [31]. TwIST typically uses regularization functions such as the ℓ1-norm or total variation to constrain the estimation. GAP, on the other hand, transforms the estimate into a sparse domain to fulfill the sparsity requirement of compressive sampling.

#### 5.1. TwIST

TwIST solves the optimization by using the following estimation:

$$\hat{\mathbf{f}} = \underset{\mathbf{f}}{\arg\min}\; \|\mathbf{g} - \mathbf{Hf}\|_2^2 + \tau H_{TV}(\mathbf{f}),$$

where *τ* is the weighting factor of the regularization and the total variation (TV) regularizer *H*_{TV}(*f*) is defined by:

$$H_{TV}(\mathbf{f}) = \sum_{i,j} \sqrt{\left(f_{i+1,j} - f_{i,j}\right)^2 + \left(f_{i,j+1} - f_{i,j}\right)^2}.$$

In our reconstructions we use *τ* ∈ [0.02, 0.05] and between 50 and 300 iterations.
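For illustration, the TV value of an image can be computed with standard forward differences (a minimal sketch; the actual reconstruction uses the TwIST solver of [30]):

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2-D image via forward differences."""
    dx = np.diff(img, axis=1)[:-1, :]   # horizontal gradient, cropped to match
    dy = np.diff(img, axis=0)[:, :-1]   # vertical gradient, cropped to match
    return np.sqrt(dx ** 2 + dy ** 2).sum()

flat = np.zeros((16, 16))
step = np.zeros((16, 16)); step[:, 8:] = 1.0   # one sharp vertical edge
assert total_variation(flat) == 0.0
assert total_variation(step) == 15.0  # 15 rows (after cropping) cross the edge
```

A TV regularizer favors piecewise-smooth images with sharp edges, which is why it suits spatially smooth targets like the resolution chart.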

#### 5.2. Generalized alternating projection (GAP)

We adapt an *anytime* algorithm, GAP, from other applications to reconstruct the compressed spectral and polarization data. GAP produces a sequence of partial solutions that *monotonically* converge to the *true signal* (hence, anytime). In [31], no real data or application was considered; the algorithm was improved for video and depth compressive sensing in [32]. The manner in which the GAP algorithm is employed here, as well as the application considered, is significantly different from [31], and similar to but distinct from [26, 32]. Specifically, the wavelet transformation is used globally in space in [26, 32], while in our application we use a local DCT on different blocks. In the following, we first review the underlying GAP algorithm and then show how to improve it to obtain better results for the data considered here.

GAP is used to investigate the *group-sparsity* of wavelet/DCT coefficients of the video to be reconstructed. Let **T**_{x} ∈ ℝ^{n_x × n_x}, **T**_{y} ∈ ℝ^{n_y × n_y}, **T**_{t} ∈ ℝ^{n_t × n_t} be orthonormal matrices defining bases such as wavelets or the DCT. Define $\mathit{v}=\left({\mathbf{T}}_{t}^{T}\otimes {\mathbf{T}}_{y}^{T}\otimes {\mathbf{T}}_{x}^{T}\right)f$ and **Φ** = *H*(**T**_{t} ⊗ **T**_{y} ⊗ **T**_{x}), where ⊗ denotes the Kronecker product. Then we can write Eq. (7) concisely as *g* = **Φ***v* + *w*, where **Φ** ∈ ℝ^{n_x n_y × n_x n_y n_t} with $\mathbf{\Phi}{\mathbf{\Phi}}^{T}=\text{diag}\left(\text{vec}\left({\sum}_{k=1}^{{n}_{t}}{\mathit{H}}_{k}\odot {\mathit{H}}_{k}\right)\right)$. For simplicity, from now on we ignore the possible noise *w*. Note that *g* reflects one *n*_{x} × *n*_{y} compressively measured image, and *f* = (**T**_{t} ⊗ **T**_{y} ⊗ **T**_{x}) *v* is the *n*_{x} × *n*_{y} × *n*_{t} datacube we wish to recover.

### 5.2.1. GAP for CS inversion

GAP solves the following problem:

$$\left({\mathit{v}}^{(t)},{\theta}^{(t)}\right) = \underset{\mathit{v},\,\theta}{\arg\min}\; \frac{1}{2}\Vert \mathit{v}-\theta \Vert_{2}^{2} \quad \text{subject to} \quad \mathbf{\Phi}\mathit{v}=\mathit{g}, \quad \Vert \theta \Vert_{2,1}^{\mathcal{G}\beta}\le {C}^{(t)}, \tag{10}$$

where *λ*^{(t)} ≥ 0 is the Lagrangian multiplier uniquely associated with *C*^{(t)}. Denote by *λ*^{*} the multiplier associated with *C*^{*}. It suffices to find a sequence {*λ*^{(t)}}_{t≥1} such that lim_{t→∞} *λ*^{(t)} = *λ*^{*}.

We solve Eq. (10) by alternating projection between *v* and *θ*; given one, the other is solved analytically. *v* is a Euclidean projection of *θ* on the linear manifold, while *θ* is the result of applying group-wise shrinkage to *v*. An attractive property of GAP is that, by using a special rule for updating *λ*^{(t)}, we only need to run a single iteration of Eq. (10) for each *λ*^{(t)} to make {*λ*^{(t)}}_{t≥1} converge to *λ*^{*}. In particular, GAP starts from *θ*^{(0)} = **0** and computes two sequences, {*θ*^{(t)}}_{t≥1} and {*v*^{(t)}}_{t≥1}:

$${\mathit{v}}^{(t)}={\theta}^{(t-1)}+{\mathbf{\Phi}}^{T}{\left(\mathbf{\Phi}{\mathbf{\Phi}}^{T}\right)}^{-1}\left(\mathit{g}-\mathbf{\Phi}{\theta}^{(t-1)}\right), \tag{11}$$

$${\theta}_{{\mathcal{G}}_{j}}^{(t)}={\mathit{v}}_{{\mathcal{G}}_{j}}^{(t)}\,\max\left(1-\frac{{\lambda}^{(t)}{\beta}_{j}}{{\Vert {\mathit{v}}_{{\mathcal{G}}_{j}}^{(t)}\Vert}_{2}},\; 0\right), \tag{12}$$

where $\left({j}_{1}^{(t)},{j}_{2}^{(t)},\dots,{j}_{m}^{(t)}\right)$ is a permutation of (1, 2, …, *m*) such that ${\Vert {\mathit{v}}_{{\mathcal{G}}_{{j}_{1}^{(t)}}}^{(t)}\Vert}_{2}{\beta}_{{j}_{1}^{(t)}}^{-1}\ge {\Vert {\mathit{v}}_{{\mathcal{G}}_{{j}_{2}^{(t)}}}^{(t)}\Vert}_{2}{\beta}_{{j}_{2}^{(t)}}^{-1}\ge \cdots \ge {\Vert {\mathit{v}}_{{\mathcal{G}}_{{j}_{m}^{(t)}}}^{(t)}\Vert}_{2}{\beta}_{{j}_{m}^{(t)}}^{-1}$.
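The two alternating updates can be sketched numerically (a toy sketch, not the paper's implementation: `Phi` is built with one nonzero per row so that ΦΦ^T is diagonal as in the text, and a fixed λ is used in place of the special update rule of [31]):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 24
Phi = np.zeros((m, n))
Phi[np.arange(m), rng.choice(n, m, replace=False)] = rng.uniform(0.5, 1.5, m)
g = rng.standard_normal(m)

def project_manifold(theta):
    """Euclidean projection of theta onto {v : Phi v = g} (Eq. (11) form).
    Phi Phi^T is diagonal, so its inverse is elementwise reciprocals."""
    d = (Phi * Phi).sum(axis=1)           # diagonal of Phi Phi^T
    return theta + Phi.T @ ((g - Phi @ theta) / d)

def group_shrink(v, groups, lam, beta):
    """Group-wise shrinkage associated with the weighted ell_{2,1} norm."""
    theta = np.zeros_like(v)
    for j, idx in enumerate(groups):
        norm = np.linalg.norm(v[idx])
        if norm > 0:
            theta[idx] = v[idx] * max(1.0 - lam * beta[j] / norm, 0.0)
    return theta

groups = [np.arange(4 * j, 4 * (j + 1)) for j in range(n // 4)]
beta = np.ones(len(groups))
theta = np.zeros(n)
for _ in range(20):                       # alternate the two projections
    v = project_manifold(theta)
    theta = group_shrink(v, groups, lam=0.05, beta=beta)

assert np.allclose(Phi @ project_manifold(theta), g)  # v always satisfies Phi v = g
```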

The algorithm in Eqs. (11) and (12) is referred to as *generalized alternating projection* (GAP) to emphasize its difference from alternating projection (AP) in the conventional sense: conventional AP produces a sequence of projections between two *fixed* convex sets, while GAP produces a sequence of projections between two convex sets that undergo systematic changes over the iterations. In the GAP algorithm of Eqs. (11) and (12), the alternating projection is performed between a fixed linear manifold *𝒮*_{Φ,y} and a changing weighted-*ℓ*_{2,1} ball, i.e., ${B}_{2,1}^{\mathcal{G}\beta}\left({C}^{(t)}\right)$, whose radius *C*^{(t)} is a function of the iteration number *t*.

### 5.2.2. Extension of GAP for the proposed camera

The diagonalization of **ΦΦ**^{T} is the key to fast GAP recovery. The inversion of **ΦΦ**^{T} in Eq. (11) now requires only the reciprocals of its diagonal elements, as a result of the *hardware implementation* of the proposed camera. In our experiments, we use block-wise reconstruction. More specifically, we partition the 3D cube into overlapping 3D blocks (*b*_{x} × *b*_{y} × 4), invert each block independently, and then average the results. This enables parallel computation. The best results are found with *b*_{x} = *b*_{y} = 16 and *T*_{x}, *T*_{y}, *T*_{t} corresponding to the DCT. The weights of the DCT are similar to the time weights used in [32], and each group in GAP is 2 × 2 × 1. There is no grouping in the polarization domain because we reconstruct four polarization channels that do not necessarily share any common information.
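The patch-based bookkeeping can be sketched as follows (a minimal sketch: `process` stands in for the per-block GAP inversion, and the block size and stride below are toy values, not the paper's 16 × 16 blocks):

```python
import numpy as np

def blockwise(process, img, b=16, stride=8):
    """Apply `process` to overlapping b-by-b blocks and average the overlaps."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    count = np.zeros_like(img, dtype=float)
    for i in range(0, H - b + 1, stride):
        for j in range(0, W - b + 1, stride):
            out[i:i+b, j:j+b] += process(img[i:i+b, j:j+b])
            count[i:i+b, j:j+b] += 1.0
    return out / count

img = np.arange(64.0).reshape(8, 8)
# With an identity per-block "inversion", averaging must return the input exactly
assert np.allclose(blockwise(lambda blk: blk, img, b=4, stride=2), img)
```

Because each block is processed independently, the loop body parallelizes trivially, which is the time saving noted in the conclusion.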

## 6. Experimental results

The experimental results are presented in this section. A resolution chart, color bricks, and a scene of a parking lot were used as examples to examine the compressive sampling and reconstruction ability. We chose the reconstruction algorithm based on the spatial features of the object: TwIST was used for spatially smooth objects, such as the resolution chart, while GAP was used to reconstruct objects with spatial complexity, including the toys and the natural scene. The reconstruction recovers the linear horizontal, linear vertical, linear 45°, and/or linear 135° polarization channels with red, green, and blue color channels.

The first experiment measured the color polarization image of a 1951 USAF resolution chart. A linear polarizer was mounted in front of the light source to control the incident polarized white light. Figure 4 shows the detector measurement of the resolution chart. As shown in the image, the multiplexing process encoded the image with the code pattern. Figure 5 shows the reconstruction of this compressed measurement in three color and four polarization channels. The noisy modulation pattern that appears in Fig. 4 has been removed, and the sharp edges are now visually reconstructed. The brightness of each image, normalized to the maximum value, reveals the relative irradiance between polarization channels. Since the illumination is linearly vertically polarized, the linear vertical (90°) channels have the highest brightness, the linear horizontal (0°) channels are close to zero brightness, and the other two channels have half the brightness of the linear vertical channels.

The second experiment validated the camera's reconstruction ability as an imaging polarimeter. We rotated the polarizer to different azimuth angles to change the incident polarized light. Figures 6 and 7 show the distribution of the reconstructed *S*_{1} and *S*_{2} Stokes parameters under a series of different incident polarizations in the green channel. The reconstructed values vary with the incident polarization state and follow the theoretical values of the *S*_{1} and *S*_{2} Stokes parameters.
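The theoretical values follow from Malus's law; a short check of the expected normalized Stokes parameters for fully linearly polarized light at polarizer angle θ:

```python
import math

def linear_stokes_theory(theta_deg):
    """Normalized S1, S2 for fully linearly polarized light at angle theta."""
    th = math.radians(theta_deg)
    i0, i90 = math.cos(th) ** 2, math.sin(th) ** 2               # Malus's law
    i45 = math.cos(th - math.pi / 4) ** 2
    i135 = math.cos(th - 3 * math.pi / 4) ** 2
    s0 = i0 + i90
    return (i0 - i90) / s0, (i45 - i135) / s0                    # S1/S0, S2/S0

# The identities S1/S0 = cos(2*theta) and S2/S0 = sin(2*theta) hold exactly
for th in (0, 30, 45, 90, 135):
    s1, s2 = linear_stokes_theory(th)
    assert abs(s1 - math.cos(math.radians(2 * th))) < 1e-12
    assert abs(s2 - math.sin(math.radians(2 * th))) < 1e-12
```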

The experimental results for spatially and spectrally complex scenes are shown in Figs. 8 and 9. Polarization filtered toys and a scene of a parking lot were captured and reconstructed. Each figure includes an unpolarized reference color image, a monochrome coded image representing the compressed measurement, polarized color images serving as references for the polarization channels, and pseudo-color demodulated images depicting the reconstructed channels. Both scenes were reconstructed with patch-based GAP using a spatial DCT basis. In the reconstruction, each color channel was reconstructed separately, and Bayer filter demosaicing was used to generate the pseudo-color estimates.

We note that the errors in the recovered images are possibly caused by several factors. First, calibration error in the forward matrix **H** could reduce the quality of the reconstruction. Such errors could come from beam deviation, azimuth angle misalignment of the rotatable polarizer during calibration, and insufficient light source uniformity. Using a better calibrated **H** matrix could reduce such errors. Second, the patch-based DCT regularization function might introduce additional spatial noise while compensating for the color and polarization reconstruction in Figs. 8 and 9. A potential solution to improve the spatial reconstruction is to adopt a 2D TV regularizer to increase the spatial smoothness of the recovered images.

Figure 10 compares the spatial resolution between an unmodulated image and a reconstructed image to estimate the resolution degradation caused by the compressive measurement. The camera's angular resolution can be estimated by dividing the minimum resolvable line width on the resolution chart by the object distance. Since the object was placed 323.5 mm in front of the system, the angular resolution of the normal image measured by the camera is 0.024°, while the angular resolution of the compressive measurement is degraded to 0.027°.
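The arithmetic can be checked directly (the ~0.135 mm line width below is our back-computed assumption from the quoted 0.024° and 323.5 mm, not a value from the paper):

```python
import math

def angular_resolution_deg(line_width_mm, distance_mm):
    """Angle subtended by the smallest resolvable bar of the resolution chart."""
    return math.degrees(math.atan(line_width_mm / distance_mm))

# Assuming a ~0.135 mm minimum resolvable line width at 323.5 mm distance
theta = angular_resolution_deg(0.135, 323.5)
assert abs(theta - 0.024) < 1e-3
```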

The extinction ratio is one important parameter of a polarization camera, defined as the ratio of the maximum transmission to the minimum transmission. The measurement in Fig. 5 gives extinction ratios of 2418.5, 664.3, and 556.1 for the red, green, and blue channels, respectively. This camera has a 1075.5 overall extinction ratio, which is 10 times higher than that of a micropolarizer array. We note that the low extinction ratio in the blue channel might be due to the low visibility and high gain value in that channel, which provide a poor signal to noise ratio for the reconstruction.

We present the peak signal to noise ratio (PSNR) of all four polarization channels over multiple measurement frames to evaluate the reconstruction stability of the camera (Fig. 11). The stationary object is a resolution chart under vertically polarized illumination. Since the object has a constant spatial, polarization, and color distribution in all measurement frames, the average reconstruction was used as the ground truth. All of the polarization channels provide stable, low noise reconstructions with PSNR usually higher than 30 dB. This result shows that the reconstruction algorithm is robust enough to provide stable estimations. We note that the linear vertical channel has the highest PSNR since the test target was illuminated by linearly vertically polarized light, which also gives that channel the best signal-to-noise ratio.
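The PSNR metric used here can be sketched as follows (a standard definition; the image and noise level below are illustrative, not the paper's data):

```python
import numpy as np

def psnr_db(estimate, reference):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((estimate - reference) ** 2)
    peak = reference.max()
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.uniform(0.0, 1.0, (64, 64))
noisy = ref + 0.01 * rng.standard_normal(ref.shape)  # ~1% additive noise
assert psnr_db(noisy, ref) > 30.0                     # well above the 30 dB mark
```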

## 7. Conclusion

This single-shot color polarization imager presents an integration of color and polarization compressive and multiplexed sampling. It uniquely encodes and decomposes the scene into polarization images by using the phase modulation of an SLM, which provides extra polarization sensitivity compared to a conventional detector. The color-polarization compression eliminates the mechanical movements that hinder conventional polarimetry. We have also presented a patch-based GAP algorithm combined with a spatial DCT basis to estimate scenes with relatively complex color polarization distributions (Figs. 8 and 9). This algorithm requires neither dictionary training nor prior information about the object scene. Also, its patch-based character enables the inverse estimation to be processed in parallel, which saves reconstruction time. Finally, the experimental results show high extinction ratios, clear spatial resolution, and stable, low noise reconstructions. Future applications will use the polarization sensitivity to analyze surface information, such as curvature and roughness.

We note that the spectral compression ratio of this camera could be extended by adding a side color camera to record un-coded, high spatial resolution images [33]. Increasing the thickness of the LC cell or adding dispersive prisms to the system to enhance the complexity of the spectral modulation could also improve the quality of the spectral compression and reconstruction.

Since the SLM is an active modulating device, switching multiple SLM frames per integration may compress the scene in the temporal domain to achieve polarization compression video [26]. Such polarization-temporal compression would modulate *C* SLM frames in every measurement to acquire *C*-fold temporal resolution. This temporal coding strategy could provide proof-of-concept estimations of 5D data-cubes *f*(*x*, *y*, *λ*, *p*, *t*) under the same camera design. Future high-dimensional compressive sampling implementations may also adopt the SLM in their designs to gain polarization and/or temporal sensitivity.

## Acknowledgments

This work was supported by the Comprehensive Space-Object Characterization Using Spectrally Compressive Polarimetric program at the Air Force Office of Scientific Research, grant FA9550-11-1-0194. **Note:** LEGO is a trademark of The LEGO Group, which is not overseeing, involved with, or responsible for this activity, product, or service.

## References and links

**1. **W. G. Egan, “Polarization in remote sensing,” Proc. SPIE **1747**, 2–48 (1992). [CrossRef]

**2. **J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. **45**, 5453–5469 (2006). [CrossRef] [PubMed]

**3. **R. G. Nadeau, W. Groner, J. W. Winkelman, A. G. Harris, C. Ince, G. J. Bouma, and K. Messmer, “Orthogonal polarization spectral imaging: A new method for study of the microcirculation,” Nat. Med. **5**, 1209–1212 (1999). [CrossRef] [PubMed]

**4. **J. S. Tyo, M. P. Rowe, E. N. Pugh Jr., and N. Engheta, “Target detection in optically scattering media by polarization-difference imaging,” Appl. Opt. **35**, 1855–1870 (1996). [CrossRef] [PubMed]

**5. **W. Smith, D. Zhou, F. Harrison, H. Revercomb, A. Larar, A. Huang, and B. Huang, “Hyperspectral remote sensing of atmospheric profiles from satellites and aircraft,” Proc. SPIE **4151**, 94–102 (2001). [CrossRef]

**6. **J. E. Ahmad and Y. Takakura, “Error analysis for rotating active Stokes-Mueller imaging polarimeters,” Opt. Lett. **31**, 2858–2860 (2006). [CrossRef] [PubMed]

**7. **A.-B. Mahler, D. Diner, and R. Chipman, “Analysis of static and time-varying polarization errors in the multiangle spectropolarimetric imager,” Appl. Opt. **50**, 2080–2087 (2011). [CrossRef] [PubMed]

**8. **J. S. Tyo and H. Wei, “Optimizing imaging polarimeters constructed with imperfect optics,” Appl. Opt. **45**, 5497–5503 (2006). [CrossRef] [PubMed]

**9. **B. Bayer, “Color imaging array,” U.S. Patent 4,054,906 (20 July 1976).

**10. **J. J. Peltzer, K. A. Bachman, J. W. Rose, P. D. Flammer, T. E. Furtak, R. T. Collins, and R. E. Hollingsworth, “Ultracompact fully integrated megapixel multispectral imager,” Proc. SPIE **8364**, 83640O (2012). [CrossRef]

**11. **X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin guest-host micropolarizer arrays for visible imaging polarimetry,” Opt. Express **19**, 5565–5573 (2011). [CrossRef] [PubMed]

**12. **G. Myhre, A. Sayyad, and S. Pau, “Patterned color liquid crystal polymer polarizers,” Opt. Express **18**, 27777–27786 (2010). [CrossRef]

**13. **X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “Thin Photo-Patterned Micropolarizer Array for CMOS Image Sensors,” IEEE Circuit. Devic. **21**, 805–807 (2009).

**14. **V. Gruev, A. Ortu, N. Lazarus, J. Van der Spiegel, and N. Engheta, “Fabrication of a dual-tier thin film micropolarization array,” Opt. Express **15**, 4994–5007 (2007). [CrossRef] [PubMed]

**15. **J. Guo and D. Brady, “Fabrication of thin-film micropolarizer arrays for visible imaging polarimetry,” Appl. Opt. **39**, 1486–1492 (2000). [CrossRef]

**16. **J. Guo and D. Brady, “Fabrication of high-resolution micropolarizer array,” Opt. Eng. **36**, 2268–2271 (1997). [CrossRef]

**17. **X. Zhao, X. Pan, X. Fan, P. Xu, A. Bermak, and V. G. Chigrinov, “Patterned dual-layer achromatic micro-quarter-wave-retarder array for active polarization imaging,” Opt. Express **22**, 8024–8034 (2014). [CrossRef] [PubMed]

**18. **D. Sabatke, A. Locke, E. L. Dereniak, M. Descour, J. Garcia, T. Hamilton, and R. W. McMillan, “Snapshot imaging spectropolarimeter,” Opt. Eng. **41**, 1048–1054 (2002). [CrossRef]

**19. **J. Kim and M. J. Escuti, “Snapshot imaging spectropolarimeter utilizing polarization gratings,” Proc. SPIE **7086**, 708603 (2008). [CrossRef]

**20. **C. Oh and M. J. Escuti, “Achromatic diffraction from polarization gratings with high efficiency,” Opt. Lett. **33**, 2287–2289 (2008). [CrossRef] [PubMed]

**21. **D. J. Brady, *Optical imaging and spectroscopy* (Wiley-Interscience, 2009). [CrossRef]

**22. **M. Gehm, R. John, D. J. Brady, R. Willett, and T. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express **15**, 14013–14027 (2007). [CrossRef] [PubMed]

**23. **D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive Holography,” Opt. Express **17**, 13040–13049 (2009). [CrossRef] [PubMed]

**24. **K. MacCabe, K. Krishnamurthy, A. Chawla, D. Marks, E. Samei, and D. Brady, “Pencil beam coded aperture x-ray scatter imaging,” Opt. Express **20**, 16310–16320 (2012). [CrossRef]

**25. **E. X. Chen, M. Gehm, R. Danell, M. Wells, J. T. Glass, and D. Brady, “Compressive Mass Analysis on Quadrupole Ion Trap Systems,” J. Am. Soc. Mass Spectr. **25**, 1295–1307 (2014). [CrossRef]

**26. **P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, “Coded aperture compressive temporal imaging,” Opt. Express **21**, 10526–10545 (2013). [CrossRef] [PubMed]

**27. **T. H. Tsai and D. J. Brady, “Coded aperture snapshot spectral polarization imaging,” Appl. Opt. **52**, 2153–2161 (2013). [CrossRef] [PubMed]

**28. **W. Osten and N. Reingand, *Optical imaging and metrology advanced technologies* (Wiley-VCH, 2012). [CrossRef]

**29. **D. Goldstein, *Polarized Light*, 2nd ed (Marcel Dekker, 2003).

**30. **J. Bioucas-Dias and M. Figueiredo, “A new twist: two-step iterative shrinkage/thresholding for image restoration,” IEEE T. Image Process. **16**, 2992–3004 (2007). [CrossRef]

**31. **X. Liao, H. Li, and L. Carin, “Generalized Alternating Projection for Weighted ℓ_{2,1} Minimization with Applications to Model-based Compressive Sensing,” SIAM J. Imaging Sci. **7**(2), 797–823 (2014). [CrossRef]

**32. **X. Yuan, P. Llull, X. Liao, J. Yang, D. Brady, G. Sapiro, and L. Carin, “Low-Cost Compressive Sensing for Color Video and Depth,” Proc. CVPR IEEE, (2014).

**33. **X. Yuan, T. H. Tsai, R. Zhu, P. Llull, D. J. Brady, and L. Carin, “Compressive Hyperspectral Imaging with Side Information,” IEEE J. Sel. Top. Signa. (to be published).