A Multiview Light Field Camera for Scene Depth Estimation from a Single Lens

Shaodi You, National ICT Australia

Ran Wei, National ICT Australia

Antonio Robles-Kelly, National ICT Australia

Figure 1: Left-hand panel: Our actual camera prototype and its output; Right-hand panel: Recovered depth map (top left corner), rendering of the scene from a novel viewpoint (bottom left corner) and super-resolution results (right-hand column).

Abstract


In this paper, we present a light field camera which can be viewed as a compound of a perspective lens that transfers the light field onto a microlens array which acts as a multiview stereo device. The camera configuration presented here can employ parfocal lenses and benefits from the spatial resolution of focused plenoptic cameras while exhibiting low angular bias. To recover the dense depth map of the scene, we calibrate the camera using a non-parametric approach based upon optical flow. Indeed, since the camera configuration presented here can be viewed as two separate, intertwined optical systems, we obtain the optical flow in two steps. In the first of these, we obtain a “distortion” map so as to perform lenslet-wise geometric calibration. In the second step, optical flow is obtained between pairs of views so as to recover the dense depth map through a triangulation operation. In addition to the depth estimation of the scene, here we also illustrate the utility of our method for novel viewpoint generation and super-resolution.

Keywords: Plenoptic camera, light field, camera configuration

Concepts: • Image Processing and Computer Vision → Digitization and Image Capture, Imaging Geometry

1 Introduction

A light field or plenoptic camera allows for the depth of the scene to be captured in a single snapshot [Georgiev and Lumsdaine 2009]. This is due to the fact that, by making use of a micro-lens array, a light field or integral imager can capture the position and direction of each light ray impinging on the camera lens [Adelson and Wang 1992]. This light field [Levoy and Hanrahan 1996] hence captures not only information regarding the scene appearance but also its geometry. This is an important trait which makes plenoptic imaging an attractive means of tackling computer vision and computer graphics settings in which the distribution of the light rays in the scene may be used for 3D rendering, refocusing, low light image capture or extended depth of field imaging [Ng et al. 2005].

The plenoptic camera was first presented by Lippmann in [Lippmann 1908] using lenslets and by Ives in [Ives 1928] using pinhole screens. Both three dimensional imaging techniques are based upon the notion that each of the lenslets or pinholes can be viewed as a small camera which forms an image on the sensor plane of the scene as captured by the main lens aperture. Since these are arranged as an array whose position varies spatially, the viewpoint with respect to the scene varies accordingly. Note that, although camera arrays and stereoscopic systems also have the capacity to recover scene depth information, plenoptic cameras can avoid the angular aliasing often found in multiple camera arrays [Adelson and Wang 1992]. The trade-off here, however, is spatial aliasing [Bishop and Favaro 2011]. This spatial aliasing is dependent on the camera geometry. For instance, in [Georgiev and Lumsdaine 2009], Georgiev and Lumsdaine explore the relationship between the sensor-to-microlens spacing and the depth of focus in plenoptic cameras.

Figure 2: The optical model of our proposed camera configuration. Unlike traditional light field cameras, where the lenslet array is defocused with respect to the front lens, we focus our microlenses on the light field transferred from the main lens. Thus, the lenslets in our camera work as an array of multi-view stereo cameras. Our configuration also contrasts with catadioptric cameras, where the optical elements, i.e. lenslets, are placed in front of the main camera lens.

In a related development, Levoy and Hanrahan [Levoy and Hanrahan 1996] hint at the use of large apertures to avoid spatial aliasing. In [Bishop and Favaro 2011], spatial aliasing is reduced by making use of a space-varying filter of the light field in an iterative fashion. This method is somewhat related to that in [Chai et al. 2000], where the problem of plenoptic sampling in image-based rendering is studied and a minimum sampling rate for light field rendering is presented. In [Ng et al. 2005], spatial aliasing is examined in light of the artifacts introduced by the reconstruction of the light field. It is worth noting that this spatial aliasing effect is closely related to the output resolution of plenoptic cameras. This is because these devices often assume that each lenslet image is defocused with respect to the main camera lens. That is, the final image can only render one pixel per microlens [Georgiev and Lumsdaine 2009]. To tackle this drawback, Georgiev et al. [Georgiev et al. 2011] have proposed the focused plenoptic camera. This camera uses a micro-lens array which is focused on the image plane of the main focusing lens. By focusing the lenslets in this manner, the focused plenoptic camera can sample spatial and angular information on the light field more effectively by achieving a trade-off between spatial and angular resolution. This, in practice, delivers much improved spatial resolution as compared to “traditional” plenoptic cameras. In [Bishop and Favaro 2011], a multiresolution approach for plenoptic cameras is presented whereby, departing from the image formation process, the authors use a Bayesian framework to reconstruct the scene depth and the light field. Perwass and Wietzke [Perwass and Wietzke 2012] have proposed a multi-focused plenoptic camera which can achieve up to a quarter of the sensor resolution by making use of microlenses with different focal lengths.

2 Contribution

Here, we present a camera configuration which is, from the light field point of view, somewhat related to the focused plenoptic camera [Georgiev and Lumsdaine 2009], the microlens approach of Lippmann [Lippmann 1908] and the hand-held plenoptic camera in [Ng et al. 2005]. It is also somewhat reminiscent of the TOMBO [Tanida et al. 2001]. Indeed, the camera presented here can be viewed as a focused plenoptic camera with large lenslets which, from the optical point of view, can be considered to be a dioptric perspective camera with paraxial shifts whose image formation process is akin to that of catadioptric devices [Taguchi et al. 2010], prism-lens cameras [Georgeiv and Intwala 2003] and water drop stereo vision [You et al. 2016]. Thus, the camera configuration presented here shares the benefits in resolution achieved by focusing the microlens array on the image plane while capturing angular information on the light field. This allows for a configuration that can capture the light field using parfocal lenses, i.e. lenses that remain in focus as the focal length varies. Moreover, as we will see later on, the optical model for our camera can be posed as a compound system where the front, i.e. focusing, lens serves as a light field transfer unit whereas the micro-lenses can be treated as a multi-view stereo array. Hence, our contributions are:

• We present a versatile novel camera configuration which does not require complex fabrication or custom made sensors and can employ parfocal lenses, allowing for changes in the focal length of the system, i.e. zooming.

• We provide a detailed analysis of the optical properties of the camera showing that the configuration presented here can be treated as a perspective lens which transfers the light field on to the lenslets, which can be viewed as a multi-view camera array.

• We present non-parametric calibration and depth recovery methods applicable to tasks such as dense depth estimation, refocusing and super-resolution.

The paper is organized as follows. In Section 3 we present the optical model of our camera and elaborate further on its relation to catadioptric and traditional plenoptic cameras. In Section 4, we present the non-parametric calibration procedure used to obtain the lenslet geometric calibration and the transfer parameters between the micro lenses in the array. In Section 5, we show how a dense depth map for the scene may be recovered using optical flow. In that section, we also show depth estimation, super-resolution and novel viewpoint generation results on sample imagery acquired by our camera. Finally, in Section 6 we conclude on the developments presented here.

Figure 3: Light field transfer. The scene light field (left-hand side of the lens) is transferred to an equivalent light field on the right-hand side of the lens.

3 Optical Model

In this section, we commence by introducing our camera configuration and discussing its advantages as compared to two related systems, i.e. the focused plenoptic camera and catadioptric devices. Later on in the section, we provide the optical equations for our camera. These comprise the light field transfer and the multi-view camera geometry. For the former, we also study the effect of the optics on the light field as transferred by the camera lens.

Our proposed configuration has two components: a front parfocal lens and a large lenslet array. Note that, for such a configuration, the front lens refocuses the light field from the scene, giving rise to a new light field between the front lens and the microlens array. We call this the light field transfer function of the camera. This trait allows our lenslet array to work as a traditional multiview stereo camera array. While the joint analysis of the refraction through two sets of lenses is challenging, here we note that the front lens and the back lenslet array are not co-axial. This is an important observation since we can view them as separate optical systems, whereby the main lens transfers the light field onto the microlens array acting as a multi-view stereo. In this manner, the analysis of the camera optics can be significantly simplified.

3.1 Light-field transfer

As mentioned above, for the sake of simplicity, let us assume the front lens is a simple parfocal zoom lens and, hence, stays in focus when the focal length is altered. As we will note in this section, in such a situation, the light field transfer from the main lens to the lenslets is a straightforward matter.

To see this more clearly, in Figure 3, we show a simplified paraxial diagram of the main focusing lens. In the figure, we have assumed the origin is at the lens centre and the z-axis is given by the principal axis of the lens, where the right-hand side is positive. Consider a point in the scene, denoted as P = (X, Y, Z), which has an in-axis distance from the lens Z and an off-axis deviation (X, Y). From the paraxial diagram it becomes evident that the point P is imaged on the positive side of the z-axis. Let the image of P be denoted as p = (x, y, z). With these ingredients, we have the following lens equations

\[
-\frac{1}{Z} + \frac{1}{z} = \frac{1}{f}, \qquad \frac{X}{Z} = \frac{x}{z}, \qquad \frac{Y}{Z} = \frac{y}{z}, \tag{1}
\]

where f is the focal length.

With the expressions above, we can write the transfer equations explicitly as follows

\[
(x, y, z) = \left( \frac{fX}{f+Z}, \; \frac{fY}{f+Z}, \; \frac{fZ}{f+Z} \right), \tag{2}
\]

and, similarly, we have

\[
(X, Y, Z) = \left( \frac{fx}{f-z}, \; \frac{fy}{f-z}, \; \frac{fz}{f-z} \right). \tag{3}
\]

Note that, from Equations 2 and 3, it becomes evident that the light field is “transferred” from one side of the z-axis to the other by the lens. Moreover, making use of the notation in Figure 3, we can consider a light ray originating from P = (X, Y, Z) and passing through the main focusing lens at \(\tilde{P} = (\tilde{X}, \tilde{Y}, 0)\). By assuming a thin lens, the ray impinging on the main lens can be expressed as

\[
R = \frac{\tilde{P} - P}{\|R\|} = \frac{(\tilde{X} - X, \; \tilde{Y} - Y, \; -Z)}{\|R\|}, \tag{4}
\]

whereas the ray out-bound from the main lens is given by

\[
r = \frac{p - \tilde{P}}{\|r\|} = \frac{(x - \tilde{X}, \; y - \tilde{Y}, \; z)}{\|r\|} = \frac{\left( X - \left(1 + \tfrac{Z}{f}\right)\tilde{X}, \; Y - \left(1 + \tfrac{Z}{f}\right)\tilde{Y}, \; Z \right)}{\|r\|}, \tag{5}
\]

where we have included the denominators in Equations 4 and 5 so as to normalise the vectors to unit length.
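As a small illustration only (this sketch is not part of the original paper), the Python snippet below implements the transfer of Equations 2 and 3 and the ray directions of Equations 4 and 5. The focal length, the scene point and the lens point are arbitrary example values; the negative Z simply places the scene point on the object side of the lens under the sign convention of Equation 1.

```python
import numpy as np

def transfer(P, f):
    """Image p = (x, y, z) of a scene point P = (X, Y, Z), Equation 2."""
    X, Y, Z = P
    s = f / (f + Z)
    return np.array([s * X, s * Y, s * Z])

def inverse_transfer(p, f):
    """Scene point P recovered from its image p, Equation 3."""
    x, y, z = p
    s = f / (f - z)
    return np.array([s * x, s * y, s * z])

def rays(P, P_tilde_xy, f):
    """Unit in-bound ray R (Eq. 4) and out-bound ray r (Eq. 5) through the
    lens point P_tilde = (X_tilde, Y_tilde, 0)."""
    P_tilde = np.append(P_tilde_xy, 0.0)
    p = transfer(P, f)
    R = P_tilde - P          # direction P~ - P
    r = p - P_tilde          # direction p - P~
    return R / np.linalg.norm(R), r / np.linalg.norm(r)

if __name__ == "__main__":
    f = 50.0                                  # focal length (arbitrary units)
    P = np.array([10.0, -5.0, -200.0])        # scene point, Z < 0 (object side)
    p = transfer(P, f)
    print("image point p:", p)
    print("recovered P:  ", inverse_transfer(p, f))   # matches P
    R, r = rays(P, np.array([2.0, 1.0]), f)
    print("in-bound ray R:", R, "\nout-bound ray r:", r)
```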

Properties of the light field transfer

We conclude the section by studying the derivative properties of the light field transfer. Specifically, we show that the first order derivatives of the transfer relate the angular deviation of the light rays to the focal length.

To this end, we commence by examining the first order partial derivatives of the image of P. This can be done in a straightforward manner making use of the Jacobian matrix of Equation 2, which is given by

\[
\frac{\partial p}{\partial P} =
\begin{bmatrix}
\frac{f}{f+Z} & 0 & 0 \\
0 & \frac{f}{f+Z} & 0 \\
-\frac{fX}{(f+Z)^2} & -\frac{fY}{(f+Z)^2} & \frac{f^2}{(f+Z)^2}
\end{bmatrix}. \tag{6}
\]

With the Jacobian in hand, note that, as a result of the transfer operation, the scene depth undergoes a scaling transformation. A similar observation can be made using the inverse Jacobian matrix. This property underpins the angular characterisation of the light field transfer, which can be obtained by using the chain rule so as to compute the first order derivatives of Equation 5. This yields

\[
\frac{\partial \Theta}{\partial \theta} = -\frac{Z}{z} = -\frac{f+Z}{f}, \tag{7}
\]

where Θ and θ are the angular deviations between a specified ray and that passing through the optic centre of the focusing lens, as illustrated in Figure 3. From inspection of Equations 6 and 7, we can conclude that the spatial and angular resolution of our camera configuration is a function of the focal length f. This is telling, since it opens up the possibility of achieving a trade-off between the spatial and angular resolution of our camera by simply varying the focal length of the main lens.
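As a quick sanity check (not part of the original text), the sketch below compares the closed form of Equation 6, with the row layout as printed above (rows indexed by the scene coordinates), against central finite differences of the transfer in Equation 2, and evaluates the depth-dependent angular scaling of Equation 7. The focal length and scene point are arbitrary example values.

```python
import numpy as np

def transfer(P, f):
    # Light field transfer of Equation 2: p = f/(f+Z) * (X, Y, Z).
    X, Y, Z = P
    return (f / (f + Z)) * np.array([X, Y, Z])

def jacobian_closed_form(P, f):
    # Closed form of Equation 6; row i holds the derivatives with respect to P[i].
    X, Y, Z = P
    d = (f + Z) ** 2
    return np.array([[f / (f + Z), 0.0, 0.0],
                     [0.0, f / (f + Z), 0.0],
                     [-f * X / d, -f * Y / d, f ** 2 / d]])

def jacobian_numeric(P, f, eps=1e-5):
    # Central finite differences of the transfer, same row layout as above.
    J = np.zeros((3, 3))
    for i in range(3):
        dP = np.zeros(3)
        dP[i] = eps
        J[i] = (transfer(P + dP, f) - transfer(P - dP, f)) / (2 * eps)
    return J

f, P = 50.0, np.array([10.0, -5.0, -200.0])
print(np.allclose(jacobian_closed_form(P, f), jacobian_numeric(P, f), atol=1e-4))  # True

# Angular scaling of Equation 7: -Z/z equals -(f+Z)/f for the transferred point.
z = transfer(P, f)[2]
print(-P[2] / z, -(f + P[2]) / f)  # both evaluate to 3.0 for this example
```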

3.2 Microlens Array

As a consequence of the light field transfer, the microlens array can be treated as a multiview stereo camera system. By assuming the lenslets in the array have been calibrated accordingly (we elaborate further on this in the following section), the dense depth map of the scene can be recovered in a straightforward manner making use of well known multi-view geometry techniques widely used in the computer vision community [Hartley and Zisserman 2003].

Out of focus blurring

It is worth noting, however, that the out-of-focus blurring warrants further examination. This is due to the fact that, since the lenslets have a fixed focal length, the light field cannot be guaranteed to be all in focus. In fact, there is bound to be out-of-focus blurring. Here we examine the effects of such out-of-focus blurring and, more importantly, note that, in practice, its effects can be considered to be negligible.

Figure 4: Out of focus blurring. Note that the PSF is a function of the stop aperture (see text for more details).

Following Figure 4, assume the distance between the lens and the sensor is z. Similarly, the corresponding in-focus distance is Z, and let the aperture size be Φ. Making use of these notations, we can express the diameter of the out-of-focus blurring kernel in a straightforward manner as

\[
\phi = \Phi \, \frac{\Delta Z \, z}{z^2 - Z \Delta Z}. \tag{8}
\]

Thus, an efficient way to reduce the out-of-focus blurring is to limit the aperture of the stop. This can be accomplished in our camera configuration by making use of a large aperture front lens. As a result, a wide angle light field can be captured while the out-of-focus blurring is reduced by limiting the aperture of the lenslets. Note that, for lens-prism [Georgeiv and Intwala 2003] and catadioptric configurations, limiting the out-of-focus blur is not straightforward. This is because, in such cameras, the prisms and lenslets are front mounted and, hence, the back perspective lens cannot reduce the blurring by using a small aperture without overly reducing the magnitude of the light field captured by the camera. See Section 6 for further discussion.
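As a minimal numeric illustration only (not part of the paper), the snippet below evaluates Equation 8 as reconstructed above for a few arbitrary example values, assuming the same sign convention as the lens equations of Section 3.1 (scene-side distances negative). Its only purpose is to show that the blur diameter φ shrinks in direct proportion to the stop aperture Φ, which is the point made in the paragraph above.

```python
def blur_diameter(Phi, z, Z, dZ):
    # Diameter of the out-of-focus blur kernel, Equation 8:
    # phi = Phi * (dZ * z) / (z**2 - Z * dZ)
    return Phi * (dZ * z) / (z ** 2 - Z * dZ)

# Arbitrary example values in consistent units: sensor distance z, in-focus
# distance Z (negative, scene side), defocus dZ, and two candidate apertures.
for Phi in (2.0, 0.5):
    print(Phi, blur_diameter(Phi, z=10.0, Z=-500.0, dZ=20.0))
```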

4 Calibration

Note that, unlike common perspective cameras, our lens-lenslet array configuration is highly non-axial. That is to say, the optical axes of the front lens and the back lenslets are not co-linear. Such a non-axial configuration significantly increases the complexity of a parametric calibration. This is due to the notion that, while in-axis system distortion can be easily approximated by a few radial basis functions (RBFs) with O(n) complexity, the parameters and basis functions required for approximating a system with non-axial distortion exhibit squared complexity O(n^2) [Agrawal and Ramalingam 2013]. This complexity can only result in a high computational burden. Furthermore, most existing automatic calibration algorithms assume solely in-axis lens distortion and, hence, lack the ability to tackle non-axial calibration tasks. Thus, here we opt for a non-parametric calibration approach as an alternative to a parametric method. In this section, we first introduce the geometric calibration used by our camera. Later on in the section, we turn our attention to the photometric calibration of the system.

4.1 Geometric Calibration

As illustrated in Figure 5, here we perform geometric distortion compensation by making use of a dense distortion map for each lens computed using optical flow. This can be viewed as a geometric calibration step which operates on each lenslet independently. The main motivations for this are twofold. Firstly, correcting non-axial distortion can be highly non-linear. Making use of optical flow delivers a dense displacement map that can be used, in a straightforward manner, to recover an undistorted image for each lenslet. Secondly, the optical flow can also be used for the purposes of recovering the optical centre of each lenslet.
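To make this step concrete, the sketch below shows one possible implementation of the lenslet-wise non-parametric undistortion, using OpenCV's Farneback flow as a stand-in for the SPM-BP flow of [Li et al. 2015]. The file names are placeholders and the flow parameters are arbitrary; averaging several such maps per lenslet, as described below, further reduces the calibration error.

```python
import cv2
import numpy as np

# Placeholder file names: the digital ground-truth calibration pattern and one
# lenslet's captured (distorted) view of it.
gt = cv2.imread("pattern_gt.png", cv2.IMREAD_GRAYSCALE)
raw = cv2.imread("pattern_lenslet.png", cv2.IMREAD_GRAYSCALE)

# Dense flow from the ground-truth pattern to the captured view
# (Farneback flow used here as a stand-in for [Li et al. 2015]).
flow = cv2.calcOpticalFlowFarneback(gt, raw, None, 0.5, 5, 21, 5, 7, 1.5, 0)

# Backward warp of the raw view: the undistorted pixel (x, y) samples the raw
# image at (x + u, y + v), i.e. the flow acts as a non-parametric distortion map.
h, w = gt.shape
xs, ys = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
undistorted = cv2.remap(raw, xs + flow[..., 0], ys + flow[..., 1], cv2.INTER_LINEAR)
cv2.imwrite("lenslet_undistorted.png", undistorted)
```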

Figure 5: Geometric non-parametric undistortion using optical flow. (a) Ground truth pattern; (b) raw captured pattern; (c) dense warping map; (d) raw input; (e) unwarped input. Notice that the distortion is highly non-axial. In the figure, the hue of the distortion map corresponds to the direction of the optical flow.

Thus, we commence by generating a digital ground truth calibration pattern, shown in Figure 5.a. Such a pattern is used as a reference both to centre the lenslet under consideration and to recover a dense displacement map for the geometric distortion. As a result, when calibrating each lenslet, each one of the twelve views delivered by the camera is used to recover a dense distortion map. In Figure 5.b we show one of these sample views, whereas in Figure 5.c we show the non-parametric dense displacement map delivered by the optical flow method in [Li et al. 2015]. This displacement map is hence obtained for every one of the lenslets. Here, in order to reduce the calibration error, we employ the average over five sets of displacement maps for each microlens. Such repetition significantly reduces the calibration error. Once these maps are in hand, they can be used to undistort the imagery captured by our camera. This is illustrated in Figure 5.d, where the raw input image is undistorted using the dense map delivered by optical flow, yielding the image in Figure 5.e. It is worth noting in passing that the geometric distortion is also dependent on the main lens focal length. However, due to our choice of parfocal lenses, such changes in the geometric distortion are negligible in practice. As mentioned earlier, the geometric calibration step also involves the recovery of the optical centre of each lenslet. Note that our calibration pattern has a “cross” clearly visible in its centre (Figure 5.b). This allows for the recovery of the optical lenslet centre, which is done in a lenslet-wise manner. Note that, since the optical centre of each lens is calibrated independently, directly calculating the displacement between two lenslet optical centres might yield calibration errors.

Figure 6: Lenslet Calibration. We align the pattern centre to the optical centre of each lenslet. The off-centre displacement used in our calibration procedure is relative to one of the lenslets in the centre of the array. Here, the recovered optical centres are shown as red crosses while the effective imaging area of each lenslet as used in our experiments is enclosed in a red bounding box.

To address this problem, we align the “cross” on the calibration patterns with respect to the central lenslet in our array (the view obtained from the microlens on the second column in the middle row), as illustrated in Figure 6. By taking this lenslet as the reference for the other eleven in our array, we can compute the relative off-centre displacement. Notice that the off-centre displacement is dependent on the distance between the lens and the calibration pattern board up to a scale coefficient. For our calibration procedure, we place the board 1m away from the lens.

4.2 Photometric Calibration

For our camera, we have opted to perform photometric calibration as a preprocessing step to the geometric calibration. Recall that the apparent colour of the objects in the scene depends on a number of factors. One of these is the power spectrum of the lights illuminating the scene. In the left-hand panel of Figure 7, we show the pixel-wise illuminant colour used for our photometric calibration. To do this, we have used a white calibration target and, once the illuminant colour is in hand, the image can be white balanced in a straightforward manner [Gu et al. 2014].
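As an illustrative sketch only (a simple stand-in, not the illuminant estimation of [Gu et al. 2014]), the snippet below performs the white balancing step described here: a pixel-wise illuminant estimate is read off an image of the white target and divided out of the raw image. The file names are placeholders.

```python
import numpy as np
import cv2

# Placeholder file names: an image of the white calibration target and a raw scene image.
white = cv2.imread("white_target.png").astype(np.float32)
raw = cv2.imread("scene_raw.png").astype(np.float32)

# Pixel-wise illuminant estimate from the white target; the blur suppresses
# sensor noise and the small epsilon avoids division by zero.
illuminant = cv2.GaussianBlur(white, (31, 31), 0) + 1e-6

# Diagonal (von Kries style) correction: divide out the illuminant colour and rescale.
balanced = raw / illuminant
balanced = np.clip(255.0 * balanced / balanced.max(), 0, 255).astype(np.uint8)
cv2.imwrite("scene_white_balanced.png", balanced)
```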

5 Experiments and Applications

We now illustrate the utility of our method for scene depth estimation, super-resolution and viewpoint transfer tasks. For our experiments, we have acquired imagery of cluttered scenes with glossy and transparent objects. We show the images acquired by our camera before geometric and after photometric calibration in the top row of Figure 9.

5.1 Scene Depth Estimation

Recall that, after the images have been undistorted using our non-parametric geometric calibration, we can consider our camera as an ideal perspective camera array.


Figure 7: Photometric calibration. Left-hand panel: Image acquired by our camera of the calibration target; Middle panel: Image of a real-world scene before photometric calibration; Right-hand panel: White balanced image corresponding to that in the middle panel.

Figure 8: An example of multi-view stereo. Left: the centre image used as reference, together with another view away from it. Middle: the dense optical flow between the two images. Right: the depth recovered using optical flow. Eleven sets of optical flow, between the reference and the remaining eleven views, are used.

As mentioned earlier, this permits the application of well established multi-view geometry algorithms so as to obtain the scene depth [Hartley and Zisserman 2003]. As illustrated in Figure 8, we obtain the dense displacement between views by using optical flow, shown in Figure 8 (middle). The displacement maps from the centre view to the surrounding views are computed and later normalised. Finally, the displacement [Xu et al. 2012] is converted to depth through triangulation, as exemplified in Figure 8 (right). More results are provided in the fourth row of Figure 9. As can be seen, the estimate works well in cluttered scenes with glossy surfaces. Unlike plenoptic cameras or catadioptric devices, our model allows us to estimate depth with sharp detail. Furthermore, without special transparency handling, we managed to robustly estimate the depth of objects such as the glass or the water bottle.
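As a hedged sketch of one reference-to-neighbour pair in this pipeline (using OpenCV's Farneback flow rather than the flow methods cited above), the snippet below converts the flow magnitude into depth via the standard rectified-pair triangulation relation Z = fB/d, used here as a stand-in for the general multi-view triangulation of [Hartley and Zisserman 2003]. The focal length in pixels and the lenslet baseline are assumed example values that would come from the calibration of Section 4; file names are placeholders.

```python
import numpy as np
import cv2

# Placeholder inputs: the undistorted centre-lenslet view and one neighbouring view.
ref = cv2.imread("view_centre.png", cv2.IMREAD_GRAYSCALE)
other = cv2.imread("view_neighbour.png", cv2.IMREAD_GRAYSCALE)

# Dense displacement between the two views (Farneback flow as a stand-in).
flow = cv2.calcOpticalFlowFarneback(ref, other, None, 0.5, 5, 21, 5, 7, 1.5, 0)
disparity = np.linalg.norm(flow, axis=2) + 1e-6   # magnitude of the displacement

# Triangulation for a rectified pair: Z = f * B / d, with f the assumed focal
# length in pixels and B the lenslet baseline from the geometric calibration.
f_pixels, baseline = 800.0, 4.0                    # assumed example values
depth = f_pixels * baseline / disparity

# As in Figure 9, brightness is made inversely proportional to the scene depth.
vis = cv2.normalize(1.0 / depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_from_flow.png", vis)
```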

5.2 Super-resolution

Note that, once the undistorted views for each of the microlenses are in hand, super-resolution can be achieved in a straightforward manner. This is due to the fact that the optical camera centre displacements are known and the images can be reliably matched to one another. Here, we have used the multiview super-resolution method in [Farsiu et al. 2003] whereby, as input, we have employed the 12 frames corresponding to each of the microlenses in our camera and, as output, we have obtained a high resolution image. Each of these low resolution views is approximately 1 Mpx in resolution, while our output super-resolved images are 4.2 Mpx. In Figure 9, we show super-resolution results for four real-world scenes. In the figure, the second row, from top-to-bottom, shows the output image as delivered by the method in [Farsiu et al. 2003]. The third row shows in detail selected image regions.
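The following is a much simpler stand-in for the robust multi-frame method of [Farsiu et al. 2003]: a plain shift-and-add average of the twelve upsampled views using known sub-pixel displacements. File names and the shift values are placeholders; in practice the displacements would come from the calibrated lenslet optical centres.

```python
import numpy as np
import cv2

def shift_and_add_sr(views, shifts, scale=2):
    """Simple shift-and-add super-resolution: upsample each registered view,
    compensate its sub-pixel shift, and average. A crude stand-in for the
    robust multi-frame method of [Farsiu et al. 2003]."""
    h, w = views[0].shape[:2]
    acc = np.zeros((scale * h, scale * w, 3), np.float32)
    for img, (dx, dy) in zip(views, shifts):
        up = cv2.resize(img.astype(np.float32), (scale * w, scale * h),
                        interpolation=cv2.INTER_CUBIC)
        M = np.float32([[1, 0, -scale * dx], [0, 1, -scale * dy]])  # undo the shift
        acc += cv2.warpAffine(up, M, (scale * w, scale * h))
    return np.clip(acc / len(views), 0, 255).astype(np.uint8)

# Placeholder inputs: the 12 undistorted lenslet views and their sub-pixel
# shifts relative to the centre view (known from the geometric calibration).
views = [cv2.imread(f"view_{i:02d}.png") for i in range(12)]
shifts = [(0.0, 0.0)] * 12   # replace with the calibrated per-view displacements
cv2.imwrite("super_resolved.png", shift_and_add_sr(views, shifts))
```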

5.3 View-transfer

As noted in the previous section, our camera allows for the recovery of depth which, in general, preserves the discontinuities and detail in the scene. This has motivated our choice to show results on view transfer rather than refocusing. This is because digital refocusing can be obtained in a straightforward manner from the quantised scene depth: refocusing is often based on a fixed range or interval of scene depth values which, in practice, is achieved by separating the scene into two or three depth layers. In contrast, the depth recovered by our camera is a continuous depth map which is not limited to a few layers or quantised in a post-processing step. As a result, we have employed the depth recovered by our camera and rendered the scene from a novel viewpoint not previously captured by the camera. The results of this view transfer operation are shown in the last row of Figure 9.

Figure 9: Depth recovery, super-resolution and view transfer results. From top-to-bottom: input imagery, super-resolution results, detail of the super-resolved images in the second row, depth map and novel views rendered using the depth in the fourth row. For the results shown in the fourth row, the brightness is inversely proportional to the scene depth.

Figure 10: Left-hand panel: focused plenoptic camera model; Right-hand panel: catadioptric camera model.

To produce these novel views, we convert the depth map into a dense and continuous 3D mesh and recolor each triangle on the mesh with the RGB value corresponding to the closest pixel on the central lenslet view (the one corresponding to the microlens located on the second row and second column, from left-to-right and top-to-bottom, of the array). Once the mesh is rendered, we vary the camera pose accordingly so as to obtain a novel viewpoint.
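As a minimal sketch (a point-splatting approximation rather than the mesh rendering described above), the snippet below reprojects the centre view into a new camera pose using the recovered depth. The intrinsics, rotation and translation are assumed example values, and the input file names are placeholders.

```python
import numpy as np
import cv2

# Placeholder inputs: the centre-lenslet RGB view and its per-pixel depth map
# (same resolution), plus assumed pinhole intrinsics.
rgb = cv2.imread("view_centre.png")
depth = np.load("depth_centre.npy").astype(np.float32)
h, w = depth.shape
K = np.array([[800.0, 0, w / 2], [0, 800.0, h / 2], [0, 0, 1]])  # assumed intrinsics

# Back-project every pixel to a 3D point in the reference camera frame.
xs, ys = np.meshgrid(np.arange(w), np.arange(h))
pts = np.linalg.inv(K) @ np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)]) * depth.ravel()

# A small rotation/translation defining the novel viewpoint (example values).
a = np.deg2rad(5.0)
R = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t = np.array([[0.02], [0.0], [0.0]])
proj = K @ (R @ pts + t)
u, v, zc = proj[0] / proj[2], proj[1] / proj[2], proj[2]

# Naive point splatting: draw far-to-near so that nearer points overwrite
# farther ones (a stand-in for the mesh rendering used in the paper).
novel = np.zeros_like(rgb)
valid = (u >= 0) & (u < w - 1) & (v >= 0) & (v < h - 1) & (zc > 0)
ui, vi = u[valid].astype(int), v[valid].astype(int)
cols = rgb.reshape(-1, 3)[valid]
order = np.argsort(-zc[valid])
novel[vi[order], ui[order]] = cols[order]
cv2.imwrite("novel_view.png", novel)
```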

6 Discussion and Conclusions

Relation to the focused plenoptic camera

Note that both our camera configuration and the focused plenoptic camera employ a microlens array which is focused on the image plane of the main focusing lens. The main difference between our system and the focused plenoptic camera resides in the fact that our microlens array is not only in focus with the main lens but also with respect to the camera sensor, as shown in the simplified paraxial diagram in Figure 10. This, together with the use of larger microlenses than those often employed in focused plenoptic cameras, allows our camera to capture images with good spatial and angular resolution. This contrasts with traditional plenoptic cameras, which suffer from serious spatial aliasing. In addition, while the focused plenoptic camera requires good alignment between the sensor and the lenslets, the configuration presented here is quite forgiving with respect to shifts on the sensor plane. This can significantly reduce the manufacturing costs associated with the camera presented here.


Our camera vs the catadioptric model

It is worth noting that the configuration proposed here can be considered as an inverted catadioptric model. This can be appreciated from the paraxial diagram shown in the right-hand panel of Figure 10. Note that, as a result of such an inversion, our camera successfully avoids the focusing and spatial aliasing problems commonly found in catadioptric devices. This is because catadioptric systems generally exhibit a shallow focal depth. In addition, the light field is “dispersed” by the front lenslets, distorting it with respect to the back zoom lens. Indeed, reducing the actual size of the front lenslets or increasing the aperture of the back zoom lens can alleviate this problem to some degree. However, this requires careful consideration since reducing the front lenslet size has the effect of also reducing the angular resolution.

Conclusions

In this paper, we have presented a multiview light field camera which does not require custom made sensors and can employ parfocal lenses as main focusing elements. The camera configuration presented here can be viewed as a focused plenoptic camera which can be considered to be a dioptric perspective camera with paraxial shifts. As a result, it can capture the light field while sharing the benefits in spatial resolution achieved by focused plenoptic cameras. We have also studied the optical properties of the camera in detail, where we have treated it as a compound system comprised of a perspective lens which transfers the light field onto a microlens array. This microlens array can then be viewed as a multiview stereo system. Moreover, we have shown how the camera can be calibrated using a non-parametric approach based upon optical flow. This yields a dense scene depth map. We have illustrated the utility of our camera for scene depth estimation, super-resolution and view transfer tasks. We have also discussed the link between our camera, focused plenoptic cameras and catadioptric devices.

References

ADELSON, E. H., AND WANG, J. 1992. Single lens stereo with a plenoptic camera. IEEE Trans. on Pattern Analysis and Machine Intelligence 14, 99–106.

AGRAWAL, A., AND RAMALINGAM, S. 2013. Single image calibration of multi-axial imaging systems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1399–1406.

BISHOP, T. E., AND FAVARO, P. 2011. Full-resolution depth map estimation from an aliased plenoptic light field. In Proceedings of the 10th Asian Conference on Computer Vision - Volume Part II.

CHAI, J. X., TONG, X., CHAN, S. C., AND SHUM, H. Y. 2000. Plenoptic sampling. In SIGGRAPH '00, 307–318.

FARSIU, S., ROBINSON, D., ELAD, M., AND MILANFAR, P. 2003. Fast and robust multi-frame super-resolution. IEEE Transactions on Image Processing 13, 1327–1344.

GEORGEIV, T., AND INTWALA, C. 2003. Light field camera design for integral view photography. Adobe Technical Report.

GEORGIEV, T., AND LUMSDAINE, A. 2009. Depth of field in plenoptic cameras. In Eurographics.

GEORGIEV, T., LUMSDAINE, A., AND CHUNEV, G. 2011. Using focused plenoptic cameras for rich image capture. IEEE Computer Graphics and Applications 31, 1, 62–73.

GU, L., HUYNH, C. P., AND ROBLES-KELLY, A. 2014. Segmentation and estimation of spatially varying illumination. IEEE Trans. on Image Processing 23, 8, 3478–3489.

HARTLEY, R., AND ZISSERMAN, A. 2003. Multiple view geometry in computer vision. Cambridge University Press.

IVES, H. E. 1928. A camera for making parallax panoramagrams. J. Opt. Soc. Am. 17, 6, 435–439.

LEVOY, M., AND HANRAHAN, P. 1996. Light field rendering. In SIGGRAPH, 31–42.

LI, Y., MIN, D., BROWN, M. S., DO, M. N., AND LU, J. 2015. SPM-BP: Sped-up PatchMatch belief propagation for continuous MRFs. In Proceedings of the IEEE International Conference on Computer Vision, 4006–4014.

LIPPMANN, G. 1908. Épreuves réversibles donnant la sensation du relief. J. Phys. Theor. Appl. 7, 1, 821–825.

NG, R., LEVOY, M., BRÉDIF, M., DUVAL, G., HOROWITZ, M., AND HANRAHAN, P. 2005. Light field photography with a hand-held plenoptic camera. Computer Science Technical Report CSTR 2, 11, 1–11.

PERWASS, C., AND WIETZKE, L. 2012. Single lens 3D-camera with extended depth-of-field.

TAGUCHI, Y., AGRAWAL, A., VEERARAGHAVAN, A., RAMALINGAM, S., AND RASKAR, R. 2010. Axial-cones: Modeling spherical catadioptric cameras for wide-angle light field rendering. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2010) 29, 6, 172:1–172:8.

TANIDA, J., KUMAGAI, T., YAMADA, K., MIYATAKE, S., ISHIDA, K., MORIMOTO, T., KONDOU, N., MIYAZAKI, D., AND ICHIOKA, Y. 2001. Thin observation module by bound optics (TOMBO): concept and experimental verification. Applied Optics 40, 11, 1806–1813.

XU, L., JIA, J., AND MATSUSHITA, Y. 2012. Motion detail preserving optical flow estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 34, 9, 1744–1757.

YOU, S., TAN, R. T., KAWAKAMI, R., MUKAIGAWA, Y., AND IKEUCHI, K. 2016. Waterdrop stereo. arXiv preprint arXiv:1604.00730.
