Light field camera based particle tracking velocimetry

18th International Symposium on the Application of Laser and Imaging Techniques to Fluid Mechanics・LISBON | PORTUGAL ・JULY 4 – 7, 2016



K. Ohmi (1), S. Tuladhar (2), J. Hao (2)
1: Dept. of Information Systems Engineering, Osaka Sangyo University, Japan
2: Dept. of Information Systems Engineering, Graduate School of Osaka Sangyo University, Japan
* Corresponding author: [email protected]

Keywords: Light field PIV, Parallel view stereo, Particle tracking

ABSTRACT
A novel parallel view stereo approach has been developed and tested in light field camera based particle tracking velocimetry. A light field camera is optically equivalent to a bundle of parallel view stereo cameras. Therefore, a recorded raw image of particles can be converted into multiple stereo images, from which the 3D locations of the particles can be obtained by using the same triangulation scheme as in conventional 3D particle tracking velocimetry. The large number of parallel view stereo images in this approach facilitates the particle matching process in the stereoscopy, at the cost of reduced resolution of the individual stereo images. No computationally expensive refocus calculation or tomographic reconstruction is required to obtain a 3D volume of particles. At relatively low particle density, the preliminary test results of this approach appear promising compared with the tomographic reconstruction based approach.

1. Introduction

The light field image processing technique has recently become increasingly popular in particle image velocimetry for 3D flow measurement [1]-[2]. Depth measurement in this technique has mainly been performed either by a detailed intensity analysis of refocused images processed from the original recording [3] or by a ray tracing tomographic reconstruction based directly on the original recording [4]. However, the potential of this approach for measuring 3D volumetric velocity distributions has probably not yet been exploited in full detail [5]-[7]. One possible reason is the limited resolution of current single-sensor light field cameras; another is that the image processing algorithms for recovering the depth field remain largely unassessed. In the present work the authors propose a new basic light field imaging system that uses a commercially available handheld light field camera to perform depth recovery of recorded particles. The new approach rests on the fact that a light field camera is optically equivalent to a bundle of parallel view stereo cameras. The recorded particle images can then be converted into multiple stereo images, from which the 3D locations of

particles can be obtained by using the same triangulation scheme as in conventional 3D particle tracking velocimetry. The large number of parallel view stereo images in this approach facilitates the particle matching process in the stereoscopy, at the cost of reduced resolution of the individual stereo images. Once the 3D locations of the particles are obtained, a conventional particle tracking velocimetry technique is used to determine the velocity profile of the fluid flow inside a square cavity, illuminated by a laser sheet and recorded by a light field camera.

2. Camera calibration

2.1 Light field camera and parallax

The original recording of the handheld light field camera 'Lytro' is decomposed into raw image data and metadata using the proprietary 'Lytro Desktop' software. Figure 1 shows a whole field view and a partially enlarged view of one such raw image (showing a camera calibration plate). Parallax images captured by the micro-lens array are extracted from this raw image using the Matlab Light Field Toolbox [8]. Since these parallax images from a light field camera are equivalent to those recorded by a bundle of parallel view stereo cameras, the depth of the imaged targets can be determined by a least squares based triangulation method. However, when a single light field camera is used for stereoscopic imaging, the baseline of the parallax views is considerably small, which limits accurate depth measurement to nearby targets. The present work therefore does not aim to capture target motions in the far field.

The most basic method of extracting parallax sub-images from a raw recorded image of a light field camera is to rearrange the pixel intensity data taken from the same relative position under each micro-lens, as shown in Figure 2. More precisely, all these pixel intensity data must be rearranged in the vertical and horizontal directions in the same order as the micro-lenses are arranged in the array. This generates as many parallax images as there are pixels behind one micro-lens.
In the case of the 'Lytro' camera, each micro-lens covers approximately 9×9 = 81 pixels, so altogether 81 parallax images are extracted from a single raw image recording. The image sensor of the Lytro camera is covered by a 378×378 lenslet micro-lens array, so each parallax image has a resolution of only 378×378 pixels. Figure 3 shows 54 of the 81 parallax images of the camera calibration plate in Figure 1. Figure 4 shows enlarged views of two of these parallax images, extracted at micro-lens positions (5, 2) and (5, 8). A slight shift (parallax) in the positions of the nine calibration points between the two images can be recognized.
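The rearrangement described above can be sketched as follows. This is a simplified illustration that assumes an idealized, axis-aligned 9×9 pixel patch behind each micro-lens (the real Lytro lenslet grid is hexagonal and slightly rotated; the Light Field Toolbox performs the required resampling):

```python
import numpy as np

def extract_parallax_views(raw, n=9):
    """Split a lenslet raw image into n*n parallax (sub-aperture) views.

    Assumes an idealized n x n pixel patch behind each micro-lens,
    aligned with the sensor grid.
    """
    h, w = raw.shape[0] // n, raw.shape[1] // n
    views = np.empty((n, n, h, w), dtype=raw.dtype)
    for u in range(n):
        for v in range(n):
            # Collect the pixel at offset (u, v) inside every micro-lens patch.
            views[u, v] = raw[u::n, v::n][:h, :w]
    return views

# Synthetic raw image: 378 x 378 micro-lenses, 9 x 9 pixels each.
raw = np.arange(378 * 9 * 378 * 9, dtype=np.float32).reshape(378 * 9, 378 * 9)
views = extract_parallax_views(raw)
print(views.shape)  # (9, 9, 378, 378)
```

Each of the 81 resulting views is a 378×378 pixel image, matching the lenslet count of the Lytro sensor.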

Fig.1 Lytro raw image of a calibration plate (left) and partial enlargement (right)

Fig.2 Extraction of parallax images from a light field raw image

Fig.3 Set of parallax images of a calibration plate extracted by the Light Field Toolbox

2.2 Calibration method and depth recovery

A dot pattern (3.5 mm diameter, 15 mm horizontal and vertical spacing) printed in black on a white background is mounted on a fine scale slider, as shown in Figure 5, and used for calibrating

the light field camera. The calibration images are recorded starting from a distance of 45 mm from the front face of the main lens of the camera, in increments of 10 mm, up to 245 mm. The focal length of the (zoomable) main lens is kept at its shortest value (6.45 mm) throughout the experiment because the imaged target of the present study is located in the near field of the camera lens.

Fig.4 Enlarged views of the parallax images in Fig.3.

Fig.5 Lytro camera calibration

Then the gravity centers of all calibration points in each parallax image are measured with subpixel accuracy. This image processing consists of binarization, labelling and weighted averaging of the coordinates of the calibration points. For the subpixel analysis, a 2-D Gaussian peak fitting is performed first to 0.1 pixel accuracy, and then refined by a Gauss 3-point approximation for further decimal places. After the gravity centers are calculated in all parallax images, the 3D physical (world) coordinates and the 2D camera coordinates of all the gravity centers are processed to calibrate the camera parameters of all the parallax views. The calibration function used in this process is expressed in the form of 3rd order polynomials as follows:

x = a1X + a2Y + a3Z + a4X^2 + a5Y^2 + a6Z^2 + a7XY + a8XZ + a9YZ
    + a10X^3 + a11Y^3 + a12Z^3 + a13X^2Y + a14X^2Z + a15Y^2X + a16Y^2Z
    + a17Z^2X + a18Z^2Y + a19XYZ + a20

y = b1X + b2Y + b3Z + b4X^2 + b5Y^2 + b6Z^2 + b7XY + b8XZ + b9YZ
    + b10X^3 + b11Y^3 + b12Z^3 + b13X^2Y + b14X^2Z + b15Y^2X + b16Y^2Z
    + b17Z^2X + b18Z^2Y + b19XYZ + b20                                  (1)

where x and y are the 2D camera coordinates, X, Y and Z are the 3D world coordinates of the calibration points, and a1 to a20 and b1 to b20 are the two sets of camera parameters to be calibrated by a least squares approximation (pseudo-inverse calculation). These two sets of camera parameters have to be determined for each parallax view. A typical calibration curve of the light field camera in the present study (showing only the relationship between parallax shift and depth) is given in Figure 6.
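The subpixel gravity-center measurement described earlier in this section (binarization, weighted averaging, and Gauss 3-point refinement) can be sketched as follows. The particle image, threshold, and peak position below are synthetic stand-ins, not values from the paper:

```python
import numpy as np

def particle_center(img, threshold):
    """Intensity-weighted gravity center of a thresholded particle image."""
    w = np.where(img > threshold, img, 0.0)   # binarization / masking step
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return (w * ys).sum() / w.sum(), (w * xs).sum() / w.sum()

def gauss3_offset(y_m1, y_0, y_p1):
    """Gauss 3-point subpixel offset of a peak from its integer position."""
    l = np.log(np.maximum([y_m1, y_0, y_p1], 1e-12))  # guard non-positive values
    denom = 2.0 * l[1] - l[0] - l[2]
    return 0.0 if denom == 0 else 0.5 * (l[2] - l[0]) / denom

# Synthetic Gaussian particle image with its true center at (10.3, 12.7).
ys, xs = np.mgrid[0:21, 0:25]
img = np.exp(-((ys - 10.3) ** 2 + (xs - 12.7) ** 2) / (2 * 1.5 ** 2))

cy, cx = particle_center(img, 0.05)            # coarse gravity center
iy, ix = np.unravel_index(np.argmax(img), img.shape)
sy = iy + gauss3_offset(img[iy - 1, ix], img[iy, ix], img[iy + 1, ix])
sx = ix + gauss3_offset(img[iy, ix - 1], img[iy, ix], img[iy, ix + 1])
print(round(sy, 3), round(sx, 3))  # refined estimate, close to (10.3, 12.7)
```

For a noiseless Gaussian spot the logarithm of the intensity is exactly quadratic, so the 3-point fit recovers the subpixel position essentially exactly; with real images it serves as the refinement stage after the coarse centroid.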

Fig.6 Typical camera calibration curve showing the relation between parallax shift and depth.

The accuracy of depth measurement by this approach is checked by a self-recovery calculation of the depth of the calibration points. Table 1 shows the rms errors of the recovered depth of the calibration points for different polynomial orders and different ranges of calibration depth. The table confirms that the depth recovery accuracy improves as the order of the polynomials is increased and as the range of calibration depth is decreased. Another observation is that the benefit of the higher order polynomials diminishes as the range of calibration depth extends into the far field.

Table 1 RMS errors of the depth of calibration points

Depth range  | 3rd   | 4th   | 5th   | 6th
45-95 mm     | 0.578 | 0.497 | 0.351 | 0.127
45-145 mm    | 1.183 | 1.058 | 0.988 | 0.910
45-195 mm    | 2.098 | 1.669 | 1.559 | 1.539
95-145 mm    | 1.174 | 1.164 | 1.018 | 0.874
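A minimal sketch of the least-squares fit behind Eq. (1) and the self-recovery check of Table 1 follows. The coordinate values and parameter vectors are synthetic stand-ins, and numpy's `lstsq` plays the role of the pseudo-inverse calculation:

```python
import numpy as np

def design_matrix(X, Y, Z):
    """Rows of the monomial terms of Eq. (1), one row per calibration point."""
    return np.column_stack([
        X, Y, Z, X**2, Y**2, Z**2, X*Y, X*Z, Y*Z,
        X**3, Y**3, Z**3, X**2*Y, X**2*Z, Y**2*X, Y**2*Z,
        Z**2*X, Z**2*Y, X*Y*Z, np.ones_like(X),
    ])

def calibrate_view(X, Y, Z, x, y):
    """Least-squares estimate of the 20 a- and 20 b-parameters of one view."""
    A = design_matrix(X, Y, Z)
    a, *_ = np.linalg.lstsq(A, x, rcond=None)
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

# Synthetic self-check with hypothetical parameters: image coordinates are
# generated from known a/b vectors, and the fit must reproduce them.
rng = np.random.default_rng(0)
X = rng.uniform(-50, 50, 200)
Y = rng.uniform(-50, 50, 200)
Z = rng.uniform(45, 245, 200)          # depth range used in the calibration
a_true, b_true = rng.normal(size=20), rng.normal(size=20)
A = design_matrix(X, Y, Z)
x_img, y_img = A @ a_true, A @ b_true
a_fit, b_fit = calibrate_view(X, Y, Z, x_img, y_img)
print(np.sqrt(np.mean((A @ a_fit - x_img) ** 2)))  # residual near machine precision
```

In the actual calibration the self-recovery error is evaluated in depth rather than in image coordinates, and the polynomial order is varied from 3rd to 6th as in Table 1; the fitting mechanics are the same.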

3. Depth map results

The particle image recording by the light field camera is carried out using a small square cavity (150×150×10 cm) with an electro-magnetic stirrer, an Ar-ion laser (4 W) based volumetric light source and other optical components, as depicted in Figure 7. The main lens of the light field camera views the cavity flow with seeding particles (Mitsubishi high-porous polymer) normal to the transparent cavity wall. The distance from the front of the camera main lens to the two walls is

about 100 to 120 mm. The frame rate of the camera image recording is approximately 1 Hz, controlled manually. The decomposition of each camera recorded image into 42 parallax images is again conducted using the Matlab Light Field Toolbox. Sample depth recovery results of the seeding particles viewed from the front side are given in Figure 8 together with the respective original images. Further sample depth results of the seeding particles, viewed obliquely from above and from the front side, are given in Figure 9, and the 3D particle coordinates thus obtained are tracked over a sequence of time series recorded images as shown in Figure 10. In this 3D particle tracking, the particle matching is performed by the SOM (Self-Organizing Map) based PTV algorithm [9].
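The triangulation of one particle from many parallax views amounts to stacking the calibration equations of every view in which the particle appears and solving for (X, Y, Z) by least squares. The sketch below uses a linearized stand-in for Eq. (1), keeping only the terms a1X + a2Y + a3Z + a20 per view; the full 3rd-order model is solved the same way with an iterative (e.g. Gauss-Newton) refinement. All coefficients and coordinates are hypothetical:

```python
import numpy as np

def triangulate(params, obs):
    """Least-squares 3D particle position from many parallax views.

    params : per-view (a, b) coefficient pairs, each of the form
             (a1, a2, a3, a20) for x = a1*X + a2*Y + a3*Z + a20 (and
             likewise b for y) -- a linearized stand-in for Eq. (1).
    obs    : observed (x, y) image coordinates of the particle per view.
    """
    rows, rhs = [], []
    for (a, b), (x, y) in zip(params, obs):
        rows.append(a[:3]); rhs.append(x - a[3])   # x-equation of this view
        rows.append(b[:3]); rhs.append(y - b[3])   # y-equation of this view
    P, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return P

# Synthetic example: 9 views with hypothetical linear calibrations.
rng = np.random.default_rng(1)
true = np.array([3.0, -2.0, 120.0])                 # particle X, Y, Z (mm)
params = [(rng.normal(size=4), rng.normal(size=4)) for _ in range(9)]
obs = [(a[:3] @ true + a[3], b[:3] @ true + b[3]) for a, b in params]
P = triangulate(params, obs)
print(np.round(P, 3))  # recovers [3., -2., 120.]
```

With 9 views the system is heavily overdetermined (18 equations for 3 unknowns), which is what makes the particle matching across views both checkable and robust.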

Fig.7 Experimental setup of the light field camera based particle tracking velocimetry

Fig.8 PIV particle images recorded in a small square cavity and their depth maps

Fig.9 Oblique upward and front side views of recovered 3D particles in the square cavity

Fig.10 Sample particle tracking result between two time steps

Conclusions

(1) A novel parallel view stereo approach has been developed and tested in light field camera based particle tracking velocimetry.
(2) A recorded raw image of particles can be converted into multiple stereo images, from which the 3D locations of the particles can be obtained by using the same triangulation scheme as in conventional 3D particle tracking velocimetry.
(3) The large number of parallel view stereo images in this approach facilitates the particle matching process in the stereoscopy, at the cost of reduced resolution of the individual stereo images.
(4) At relatively low particle density, the preliminary test results of this approach appear promising compared with the tomographic reconstruction based approach.

References

[1] Ng, R.: Digital light field photography, Stanford University Doctoral Dissertation, (2006).
[2] Cenedese, A., Cenedese, C., Furia, F., Marchetti, M., Moroni, M. and Shindler, L.: 3D particle reconstruction using light field imaging, Proc. 16th Int. Symp. on Applications of Laser Techniques to Fluid Mechanics, (2012).
[3] Lynch, K., Fahringer, T. and Thurow, B.: Three-dimensional particle image velocimetry using a plenoptic camera, 50th AIAA Aerospace Sciences Meeting, (2012), AIAA 2012-1056.
[4] Thurow, B.S. and Fahringer, T.: Recent development of volumetric PIV with a light field camera, Proc. 10th Int. Symp. on Particle Image Velocimetry, (2013).
[5] Kitzhofer, J., Hess, D. and Brücker, C.: Comparison of particle reconstruction quality between a multiple camera system and a light-field camera, Proc. 10th Int. Symp. on Particle Image Velocimetry, (2013).
[6] Zeller, N., Quint, F. and Stilla, U.: Calibration and accuracy analysis of a focused light field camera, ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, II-3 (2014), 205-212.
[7] Maas, H.G., Gruen, A. and Papantoniou, D.: Particle tracking velocimetry in three-dimensional flows, Experiments in Fluids, 15-2 (1993), 133-146.
[8] Dansereau, D.G.: Light Field Toolbox for Matlab v.0.3, (2014). http://www.mathworks.com/matlabcentral/fileexchange/48405-light-field-toolbox-v0-3
[9] Ohmi, K.: SOM-based particle matching algorithm for 3-D particle tracking velocimetry, Applied Mathematics and Computation, Vol. 205, Issue 2 (2008), 890-898.
