The Photogrammetric Journal of Finland, Vol. 23, No. 2, 2013

Received 05.11.2012, Accepted 16.08.2013

doi:10.17690/013232.1

RELATIVE ORIENTATION BETWEEN A SINGLE FRAME IMAGE AND LIDAR POINT CLOUD USING LINEAR FEATURES

Petri Rönnholm 1, Mika Karjalainen 2, Harri Kaartinen 2, Kimmo Nurminen 2, Juha Hyyppä 2

1 Institute of Photogrammetry and Remote Sensing, Aalto University
2 Finnish Geodetic Institute

[email protected], [email protected], [email protected], [email protected], [email protected]

ABSTRACT

Registration of multi-source remote sensing data is an essential task prior to their efficient integrated use. It is known that accurate registration of different data sources, such as aerial frame images and lidar data, is a challenging process, in which the extraction and selection of robust tie features is the key issue. In the presented approach, we used linear features, namely roof ridges, as tie features. Roof ridges derived from lidar data are automatically located in the 2D image plane and the relative orientation is based on the well-known coplanarity condition. According to the results, the average registration (absolute) errors varied from 0.003 to 0.196 m in the X direction, from 0.018 to 0.282 m in the Y direction and from 0.010 to 0.967 m in the Z direction. Rotation (absolute) errors varied from 0.001 to 0.078 degrees, from 0.006 to 0.466 degrees and from 0.013 to 0.115 degrees for the ω, ϕ and κ rotations, respectively. This study revealed that the method has potential for the automatic relative orientation of a single frame image and lidar data. However, the distribution, orientation and number of successfully located tie features play an essential role in the success of the task.

1. INTRODUCTION

Acquiring multi-source remote sensing data directly in a common coordinate frame is usually an unrealistic task, even if the accuracy of direct georeferencing sensors is nowadays relatively high. The evolution of lidars into accurate 3D-mapping devices has increased the need for proper registration methods. For example, there is a constant need for co-registration of airborne lidar data and aerial frame images, because these data sources are typically considered to complement each other. Co-registration allows any data to be transformed into the coordinate frame of the more accurately georeferenced data set. Accurate co-registration of multi-source remotely sensed data using post-processing is a challenging task, but at the same time one of the most important tasks in the research field of photogrammetry, laser scanning and remote sensing. Once data are co-registered, they can be combined with each other and used together efficiently in order to facilitate further applications such as the creation of photorealistic 3D models, classification, production of new data sets or advanced analysis, to name but a few.

The extraction and selection of tie features used in registration are the key tasks. In the case of frame images and lidar point clouds, the radiometric information of images does not exactly correspond to the lidar intensity values, which makes it difficult to identify reliable tie features between the data sets. In the field of photogrammetry, the search for tie features has a long history and therefore various methods are available. According to Zitová and Flusser (2003), typical methods are area-based and feature-based methods. Examples of area-based methods are correlation-like methods, Fourier methods and methods using mutual information. Area-based methods are widely used in the relative orientation of photographs (Heipke, 1997). Feature-based methods typically try to identify points, linear features, centers of gravity of regions, surfaces etc. using different kinds of search criteria. In principle, all of these feature extraction strategies can be applied to find tie features between images and laser scanning data if a virtual 2D image of the laser point cloud is created (Rönnholm, 2011). However, because of the different nature of frame images and lidar data, some normalization is required before, at least, area-based methods can be reliably applied.

If oriented stereo images or multi-image blocks are available, 2D image measurements can be transformed into 3D points using standard photogrammetric techniques. Such measurements allow the registration between two 3D data sets. However, this approach excludes the registration of a single image and a laser point cloud. 3D tie features can be points, linear features or surfaces. For example, Huang et al. (2009) searched for point-like tie features from aerial images and rasterized lidar intensity images using the SIFT algorithm. However, airborne lidar data is not usually dense enough for the identification of accurate tie points. Linear features were used in Habib et al. (2008). Surfaces have been applied in Postolov et al. (1999), McIntosh et al. (1999) and Pothou et al. (2006), for example. An overview of various surface matching algorithms can be found in Akca (2007).

One possible approach uses extracted 2D features from images and their corresponding 3D features from laser scanning data. In this case, the use of linear features (e.g. Schenk and Csathó, 2002; Schenk, 2004; Habib et al., 2005a; Liu and Stamos, 2007; Wu, 2009; Choi et al., 2011; Chen and Lo, 2012) has been relatively popular. Other approaches have used, for example, roof centroids (Mitishita et al., 2008) and the general shape of objects in the scene (Rönnholm et al., 2003). An overview of the methodologies used in the fusion of lidar data and aerial imagery has been given, for example, in Schenk and Csathó (2002), who also emphasized the role of straight lines in their fusion problem. Habib et al. (2005a) studied the use of linear features in the registration of lidar data and images, where corresponding straight lines were sought from both data sets. According to Habib et al. (2005b), linear features can be free-form; however, the use of straight lines has many advantages: they can be found easily in urban areas, their detection and correspondence can be established relatively easily from both data sets, their parameters can be accurately solved, and straight line segments can approximate free-form linear features sufficiently. Choi et al. (2011) used straight lines and planar surfaces in order to register lidar data and stereo images, with a strategy of minimizing the vertical inconsistency between linear 3D features.

To the best of our knowledge, only very few studies have concentrated on the registration of lidar data and single frame images. Moreover, the simple approach of seeking out candidate edge pixels representing projected 3D control lines on the image has not been studied in the case of registration of lidar data and frame images. The objective of this article is to solve the relative orientation between airborne lidar data and an aerial image using linear tie features.
The relative orientation is solved by first extracting 3D roof ridges from lidar data and then locating their correspondences in 2D on the image plane. The coplanarity condition is used for calculating the relative orientation parameters. In this research, the exterior orientation parameters of the image are changed during the registration for simplicity. However, the solved relative orientation is finally reversed and the laser point cloud is transformed into the same coordinate system as the original image orientation. The presented approach aims to be fully automatic; however, some manual interaction may be needed in the beginning if the initial values of the relative orientation are not good enough.


2. MATERIALS

The test area was located in Espoo, in southern Finland. The frame images consisted of four Z/I DMC panchromatic images with a forward overlap of 60% and a side overlap of 20%. The size of the panchromatic images was 13824 x 7680 pixels and, because of the flying height of slightly over 500 m, the pixel size on the ground was approximately five centimeters. The lidar data included two separate data sets. The first, Leica’s ALS50-II lidar data, was acquired with a flying height of 500 m, a scanning angle of 40 degrees (±20°), a point repetition frequency (PRF) of 148 kHz, a scanning frequency of 42.5 Hz and a flying speed of 72 m/s, leading to a point density of 4-5 points/m2. Of Leica’s lidar data, part of one flying strip was included. For this data set, no additional strip adjustment or data correction was applied. The second lidar data set was acquired using Optech’s ALTM 3100. In this case, the flying height was 1000 m, the PRF was 100 kHz, the scanning frequency 67 Hz and the flying speed 75 m/s, resulting in a point density of 2-3 points/m2. The scanning angle was 24 degrees; however, only 20 degrees were processed and used (±10°). For this data set, a strip adjustment of several flying strips was made. However, only part of one flying strip was included in this examination. Both lidar data sets covered approximately the same area, which was also covered by the stereo models of the aerial images. The point densities and distribution patterns of the laser point clouds are illustrated in Figure 1. The aerial images are illustrated in Figure 2.

Figure 1. Left: A sample of Optech’s ALTM 3100 lidar data with a point density of 2-3 points/m2. Right: A sample of Leica’s ALS50-II lidar data with a point density of 4-5 points/m2. Colors in these images represent the heights of the laser points. Because the ALS data sets were not acquired at the same time, some physical changes have occurred between them.

In the present experiment, the image block was considered to be in the correct coordinate system. The interior and exterior orientations were known from aerial triangulation using known ground control points. The RMSE of the residuals of 64 ground control points was 0.039 m, 0.086 m and 0.030 m for the X, Y and Z directions, respectively. The lidar data sets were not in the same coordinate system as the image block or each other; in fact, they were shifted and slightly rotated in order to create unregistered data sets for study purposes. The evaluation of the registered and transformed lidar point clouds was made in six check areas. For reference, georeferenced terrestrial laser scanning (TLS) point clouds were acquired using Leica’s HDS 3000 (Figure 3). According to specifications, the position accuracy of measurements is 6 mm when the distance is less than 50 m. The check areas were selected in such a way that several roofs were visible within relatively short distances and at usable incidence angles. In some areas more than one scan was merged. From these TLS point clouds, planes and surfaces with different orientations were extracted.

Figure 2. Six check areas (red areas) superimposed on the image block. Each check area includes several surfaces with different orientations extracted from the TLS laser data.

Figure 3. A sample view of the TLS point cloud from one of the check areas illustrating the point density.


3. METHODS

3.1 Locating tie-line features

In our approach, 3D lines were extracted from the lidar data and their corresponding 2D features were searched for in a single image. We used roof ridges as 3D control lines. First, roof planes were searched for by fitting plane primitives to the ALS point cloud. Then, the ridge of a roof was calculated by intersecting two adjacent roof planes, if they were non-parallel. The end-points of a ridge were defined as the outermost points of the plane intersection. Because the ALS point clouds were not very dense, this method cannot accurately find the actual end-points of the ridges. However, it can find 3D lines representing the direction and location of the ridges. For the extraction of ridges from the lidar data, an automatic algorithm implemented in a beta version of Terrasolid’s TerraMatch software was applied.
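To make the ridge construction step concrete, the following minimal sketch illustrates how a ridge line can be obtained from two fitted roof planes: the ridge direction is the cross product of the plane normals, a point on the line is solved from the two plane equations, and the end-points are taken as the outermost projections of the supporting roof points onto the line. This is not the TerraMatch implementation used in the study; the function and parameter names are illustrative.

```python
import numpy as np

def ridge_from_planes(n1, d1, n2, d2, roof_points):
    """Intersect two fitted roof planes (n . X + d = 0) and clip the resulting
    3D line to the extent of the points supporting the planes."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)                      # ridge direction
    if np.linalg.norm(direction) < 1e-8:
        return None                                   # (nearly) parallel planes: no ridge
    direction /= np.linalg.norm(direction)

    # A point on the intersection line: the two plane equations plus one extra
    # constraint (zero coordinate along the ridge direction) form a 3x3 system.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point_on_line = np.linalg.solve(A, b)

    # End-points: outermost projections of the supporting roof points onto the line.
    t = (np.asarray(roof_points, float) - point_on_line) @ direction
    return point_on_line + t.min() * direction, point_on_line + t.max() * direction
```

As noted above, such end-points only approximate the true ridge ends, but the direction and location of the line are recovered.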

Figure 4. The principle of applying perpendicular 1D Canny edge detection operations along a projected 3D edge.

In the next step, the 3D lines (roof ridges) were projected onto the image plane using very rough estimates of the exterior orientation parameters in order to reduce the search space. In some cases, if the initial exterior orientation parameters are poor, the automatic search may fail to locate the line feature inside the search space. As a solution, one can apply a quick interactive method similar to Rönnholm et al. (2003) for improving the initial orientation. According to our experience, in a typical case the distance between the projected lines and their true locations should not be more than some 50 image pixels. Therefore, the initial exterior orientation parameters should be relatively close to the correct ones. Our method then automatically tries to seek out the true locations of the projected 3D lines in the image. Using the initial orientation parameters, the lidar-derived 3D ridge lines were projected into the image space using the collinearity equations. The projected 2D lines guide the direction of searching for edges in the image. We followed the projected 3D ridge and searched for possible edges in the perpendicular direction using a 1D Canny edge detection operator (Figure 4 and Figure 5). Details about the 1D Canny edge operator can be found in Canny (1986). Examples of candidate edge pixels (red circles) representing a 3D roof ridge (solid red line) are given in Figure 4. As a result, we were able to find a set of edge pixels from the image. This set of pixels typically includes noise and outliers, which can be discarded during the block adjustment using linear features. A more detailed description of the method is given in Karjalainen et al. (2006).
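The perpendicular search can be sketched as follows. This is a simplified illustration of the idea rather than the exact implementation of the study; the image sampling, profile length and smoothing scale are assumed parameters.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter1d

def edge_candidates_along_ridge(image, p0, p1, n_scans=30, half_len=25, sigma=2.0):
    """Sample grey-value profiles perpendicular to the projected ridge p0 -> p1
    (pixel coordinates) and return one candidate edge pixel per profile: the
    position of the strongest Gaussian-smoothed gradient (1D Canny-style)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    tangent = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-tangent[1], tangent[0]])               # perpendicular direction

    candidates = []
    for s in np.linspace(0.0, 1.0, n_scans):
        centre = p0 + s * (p1 - p0)
        offsets = np.arange(-half_len, half_len + 1, dtype=float)
        pts = centre + offsets[:, None] * normal               # profile sample points (x, y)
        # map_coordinates expects (row, col) = (y, x) ordering
        profile = map_coordinates(image.astype(float), [pts[:, 1], pts[:, 0]], order=1)
        gradient = gaussian_filter1d(profile, sigma, order=1)  # smoothed first derivative
        k = int(np.argmax(np.abs(gradient)))
        candidates.append(pts[k])                              # a gradient threshold could reject weak edges
    return np.array(candidates)
```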


Figure 5. Examples of automatic roof ridge localization. Initial exterior orientation parameters have been interactively refined so that the projected 3D ridge lines are close to their true locations. Left: successful; middle: partially successful; right: unsuccessful. The solid red line represents the 3D roof ridge projected on the image using the initial exterior orientation parameters. Red circles point out the edge pixels found using perpendicular scan lines. Unsuccessful cases are automatically removed from the adjustment.

3.2 Solving the relative orientation parameters

In our approach, 3D linear features extracted from lidar data were used to solve the relative orientation parameters between an image and lidar data. The relative orientation between the data sets was solved in a least squares adjustment using a coplanarity condition, in which the volume of the parallelepiped defined by three direction vectors is minimized. The three vectors (Figure 6) were a 3D roof ridge vector (β), a vector from the 3D roof ridge to the perspective centre (r) and a vector from an image point to the perspective centre (p), which should all lie on the same plane, i.e.

\[
\Delta = \begin{vmatrix} \beta_X & r_X & p_X \\ \beta_Y & r_Y & p_Y \\ \beta_Z & r_Z & p_Z \end{vmatrix} = 0 \qquad (1)
\]

The early papers using linear features were published by Mulawa and Mikhail (1988), Tommaselli and Lugnani (1988) and Tommaselli and Tozzi (1996), who used this method in the space resection of frame images in order to solve the exterior orientation parameters. In our approach, all vectors are presented as 3D vectors. The first vector of the system (β) is the direction vector of the 3D control line, which we simply defined by its 3D end-points. The lidar-derived end-points of ridges do not necessarily represent the correct end-points accurately, but rather points along the ridges that are close to the correct end-points. Therefore, the directions of the roof ridges are defined even if the lengths of the ridges are not accurate. The second vector (r) is a direction vector from any point on the 3D control line to the perspective centre of the camera. We selected the starting point of the direction vector (r) to be the centre point of the 3D control line. The third vector (p) is defined using any point representing the tie feature in the image plane and the perspective centre of the camera. Therefore, one ridge line located in the image may create many vectors of this kind, as we have point observations along the line. In the present study, three observations per line were required.
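A minimal sketch of evaluating the coplanarity residual of eq. (1) for a single image observation is given below. It assumes a rotation matrix R from the image frame to the object frame and image coordinates reduced to the principal point; these conventions, as well as the function and variable names, are illustrative rather than the exact formulation used in the study.

```python
import numpy as np

def coplanarity_residual(ridge_a, ridge_b, cam_xyz, R, xy_image, c):
    """Volume of the parallelepiped spanned by the three vectors of eq. (1)."""
    ridge_a, ridge_b = np.asarray(ridge_a, float), np.asarray(ridge_b, float)
    beta = ridge_b - ridge_a                                   # ridge direction vector
    mid = 0.5 * (ridge_a + ridge_b)                            # centre point of the control line
    r = np.asarray(cam_xyz, float) - mid                       # ridge midpoint -> perspective centre
    p = R @ np.array([xy_image[0], xy_image[1], -c], float)    # image ray in object space
    return np.linalg.det(np.column_stack([beta, r, p]))        # eq. (1): zero when coplanar
```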


Figure 6. The coplanarity condition is fulfilled if all vectors from ridges (3D and image plane) and a vector representing the ridge itself are within the same plane.

Based on eq. 1 we apply the following non-linear functional model

\[
\Delta = F(X_0, Y_0, Z_0, \omega, \varphi, \kappa) = 0 \qquad (2)
\]

which yields one equation for each image point. The exterior orientation parameters (X0, Y0, Z0, ω, ϕ, κ) of the frame image were considered as unknowns and were solved using the coplanarity condition in a least squares adjustment. The functional model after linearization of Δ is

\[
v_i = \frac{\partial \Delta}{\partial X_0}\,dX_0 + \frac{\partial \Delta}{\partial Y_0}\,dY_0 + \frac{\partial \Delta}{\partial Z_0}\,dZ_0 + \frac{\partial \Delta}{\partial \omega}\,d\omega + \frac{\partial \Delta}{\partial \varphi}\,d\varphi + \frac{\partial \Delta}{\partial \kappa}\,d\kappa + \Delta_i \qquad (3a)
\]

and written in matrix-vector form

\[
\mathbf{v} = A\mathbf{x} + \mathbf{l} \qquad (3b)
\]

in which v is the vector of residuals, A is the design matrix, x is the improvement vector [dX0, dY0, dZ0, dω, dϕ, dκ]T, and l is the vector containing the volumes of the parallelepipeds Δ (eq. 1) resulting from the current approximate values of the unknown parameters. The function Δ (eq. 1) contains vectors derived from laser- and image-derived observations of ridges. The process is typically iterative, i.e. edge detection is carried out several times until the exterior orientation parameters no longer change (Karjalainen et al., 2006). Index i denotes the current observation. The solution of the improvement vector x becomes

\[
\mathbf{x} = (A^T P A)^{-1} A^T P \mathbf{l} \qquad (4)
\]

in which matrix P contains the weights of the observations.
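The adjustment loop of eqs. (3)-(4) can be sketched as below. The partial derivatives are formed here numerically for brevity (analytical derivatives are equally possible), and the sign of the misclosure vector follows the usual Gauss-Newton convention; the function names are illustrative.

```python
import numpy as np

def update_exterior_orientation(params, observations, residual_fn, P=None, eps=1e-6):
    """One least squares update of [X0, Y0, Z0, omega, phi, kappa] (eqs. 3-4).

    residual_fn(params, obs) returns the coplanarity residual Delta of eq. (1)
    for one image observation (ridge plus image point).
    """
    params = np.asarray(params, float)
    delta = np.array([residual_fn(params, obs) for obs in observations])   # current Delta values
    A = np.zeros((len(observations), 6))
    for j in range(6):                                                     # numerical partial derivatives
        dp = np.zeros(6)
        dp[j] = eps
        shifted = np.array([residual_fn(params + dp, obs) for obs in observations])
        A[:, j] = (shifted - delta) / eps

    P = np.eye(len(observations)) if P is None else P
    l = -delta                             # misclosure; sign chosen so that eq. (4) reduces Delta
    x = np.linalg.solve(A.T @ P @ A, A.T @ P @ l)
    return params + x

# The update is repeated, re-running the edge detection with the new parameters,
# until the corrections become negligible.
```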

Our approach solves the transformation of the image orientation into the coordinate system of the lidar point cloud. However, the desired coordinate system was the one in which the images originally were. Therefore, we solved the inverse transformation using formulas described in more detail in Rönnholm et al. (2009). As a result, we were able to transfer the lidar point clouds into the expected coordinate system of the original aerial image block. The complete workflow of the proposed method is presented in Figure 7.
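As an illustration of this final step, assuming the solved change of the image orientation can be expressed as a rigid transformation X' = RX + t, the point cloud is moved by the inverse transformation; the exact formulas used in this study are those of Rönnholm et al. (2009), and the sketch below only shows the general idea.

```python
import numpy as np

def apply_inverse_rigid_transform(R, t, points):
    """Apply X = R^T (X' - t), the inverse of X' = R X + t, to lidar points so
    that the image can keep its original exterior orientation."""
    R = np.asarray(R, float)
    t = np.asarray(t, float)
    return (np.asarray(points, float) - t) @ R     # row-vector form of R^T (p - t)
```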

Figure 7. The workflow for registering ALS data and aerial images in the common coordinate system using linear features. Roof ridges are automatically extracted from the laser scanning point cloud as 3D line features and projected on the aerial image using its initial orientation parameters. The accuracy of the orientation parameters is checked visually and, if necessary, the parameters are adjusted manually. Automatic edge detection then locates the roof ridge pixels on the image and new orientation parameters are solved; these steps are repeated until the result is acceptable. Finally, the laser scanning point cloud is transformed with the solved orientation parameters, yielding the registered laser scanning point cloud.


4. RESULTS

The estimated accuracies of the exterior orientation parameters according to the least squares adjustment (calculated from the unit weight variance and variance/covariance matrix) are given in Table 1, as well as the final number of 3D roof ridges used in the calculation of the orientation parameters.

Table 1. The estimated accuracies of the orientation parameters and the number of roof ridges used in orientation. The results are given for 4 images, which were oriented using both Leica’s and Optech’s ALS data.

              X (m)   Y (m)   Z (m)   omega (deg)   phi (deg)   kappa (deg)   # of ridges
#1 /Leica     0.277   0.231   0.101   0.029         0.029       0.012         22
#2 /Leica     0.315   0.298   0.137   0.036         0.041       0.009         26
#3 /Leica     0.087   0.207   0.038   0.024         0.010       0.005         17
#4 /Leica     0.126   0.283   0.090   0.035         0.015       0.006         12
#1 /Optech    0.229   0.058   0.058   0.007         0.028       0.005         25
#2 /Optech    0.097   0.140   0.083   0.017         0.013       0.005         21
#3 /Optech    0.057   0.128   0.033   0.015         0.007       0.003         30
#4 /Optech    0.291   0.612   0.204   0.077         0.034       0.013         14

The transformed lidar point clouds were compared to the TLS-derived reference surfaces, described in the Materials chapter, in the six check areas that were distributed in the test area. Local 3D shifts (dX, dY, dZ) in each check area were solved separately by fitting the laser point clouds to the reference surfaces using the ICP (Iterative Closest Point) method. Because our method was applied separately to all images and lidar data sets, we obtained a total of 8 combinations. The results of the evaluation are presented in Tables 2-9.

Table 2. Leica’s lidar data registered with aerial image 1.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.062     0.152    -0.569
Check area 2     0.266    -0.209    -0.104
Check area 3    -0.240     0.530    -0.727
Check area 4    -0.067     0.252    -0.522
Check area 5     0.001     0.375    -0.494
Check area 6    -0.004     0.281    -0.432
Average          0.003     0.230    -0.475
Std              0.166     0.250     0.207
RMSE             0.151     0.228     0.189

Table 3. Leica’s lidar data registered with aerial image 2.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.373    -0.117    -1.199
Check area 2     0.614    -0.356    -1.148
Check area 3    -0.476     0.342     1.185
Check area 4    -0.490     0.402     1.855
Check area 5    -0.531     0.677     2.257
Check area 6    -0.667     0.744     2.852
Average         -0.196     0.282     0.967
Std              0.544     0.437     1.744
RMSE             0.500     0.400     1.593


Table 4. Leica’s lidar data registered with aerial image 3.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.082    -0.004     0.457
Check area 2     0.316    -0.199     0.698
Check area 3    -0.445     0.460    -0.330
Check area 4    -0.310     0.318    -0.458
Check area 5    -0.332     0.498    -0.589
Check area 6    -0.378     0.487    -0.754
Average         -0.178     0.260    -0.163
Std              0.305     0.294     0.595
RMSE             0.278     0.268     0.543

Table 5. Leica’s lidar data registered with aerial image 4.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.088     0.183     0.154
Check area 2     0.318    -0.448     0.603
Check area 3    -0.208     0.256    -0.020
Check area 4    -0.017     0.000     0.176
Check area 5     0.019     0.119     0.188
Check area 6     0.037     0.044     0.256
Average          0.040     0.026     0.226
Std              0.170     0.250     0.206
RMSE             0.155     0.228     0.188

Table 6. Optech’s lidar data registered with aerial image 1.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.271    -0.499     0.399
Check area 2     0.034     0.047     0.347
Check area 3    -0.088    -0.060     0.407
Check area 4    -0.292     0.224     0.407
Check area 5    -0.161     0.094     0.378
Check area 6    -0.127     0.084     0.375
Average         -0.061    -0.018     0.386
Std              0.194     0.253     0.023
RMSE             0.177     0.231     0.021

Table 7. Optech’s lidar data registered with aerial image 2.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.061     0.134     0.336
Check area 2    -0.087    -0.044    -0.033
Check area 3    -0.109    -0.038     0.215
Check area 4    -0.287     0.177    -0.028
Check area 5    -0.186     0.045    -0.148
Check area 6    -0.201     0.029    -0.282
Average         -0.135     0.050     0.010
Std              0.119     0.090     0.229
RMSE             0.110     0.082     0.209


Table 8. Optech’s lidar data registered with aerial image 3.
                dX (m)    dY (m)    dZ (m)
Check area 1     0.100     0.201     0.090
Check area 2    -0.048     0.044     0.277
Check area 3     0.035     0.025    -0.134
Check area 4     0.047     0.165    -0.010
Check area 5    -0.064     0.010    -0.070
Check area 6     0.073    -0.036    -0.068
Average          0.024     0.068     0.014
Std              0.066     0.094     0.149
RMSE             0.060     0.085     0.136

Table 9. Optech’s lidar data registered with aerial image 4.
                dX (m)    dY (m)    dZ (m)
Check area 1    -0.109     0.517    -0.822
Check area 2    -0.351     0.076    -0.752
Check area 3    -0.088     0.244     0.009
Check area 4    -0.193     0.293     0.335
Check area 5    -0.082     0.179     0.472
Check area 6     0.046     0.078     0.728
Average         -0.130     0.231    -0.005
Std              0.133     0.165     0.649
RMSE             0.121     0.151     0.592

In order to evaluate possible rotation errors, we used the local shifts between the reference and transformed lidar data from each check area and solved the rigid 3D transformation using the least squares method (a sketch of one such solution is given after Table 10). In other words, each check area was represented by one true observation and one virtual corresponding point. The difference between these points represented the local shift. In this way, the overall rotations of the data sets can be compared. The rotation results are presented in Table 10. In this table, d_omega, d_phi and d_kappa represent the rotation differences in degrees between the reference data and the transformed lidar data. In this case, the rotation center was placed at the center of one check area that was located approximately in the middle of the test area. The adjustment also provides global shift parameters, but they are dependent on the selected rotation center and thus are not presented here.

Table 10. The evaluation of the rotation errors.
                               d_omega (deg)   d_phi (deg)   d_kappa (deg)
Leica’s data, image 1             -0.013          -0.011          0.022
Leica’s data, image 2              0.078          -0.466          0.115
Leica’s data, image 3             -0.028           0.146          0.060
Leica’s data, image 4             -0.012          -0.006          0.013
Optech’s data, pan, image 1        0.007           0.009          0.046
Optech’s data, pan, image 2        0.011           0.079          0.017
Optech’s data, pan, image 3       -0.001           0.028         -0.010
Optech’s data, pan, image 4        0.031          -0.175         -0.021
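A common closed-form way to obtain such a least squares rigid transformation from the check-area point pairs is the SVD-based (Kabsch) solution sketched below. The rotation decomposition assumes the convention R = Rx(ω)·Ry(ϕ)·Rz(κ), and the names are illustrative; this is not necessarily the exact formulation used in the study.

```python
import numpy as np

def fit_rigid_transform(reference_pts, transformed_pts):
    """Least squares rigid transformation (R, t) mapping transformed_pts onto
    reference_pts, solved with the SVD-based (Kabsch) method."""
    P = np.asarray(transformed_pts, float)
    Q = np.asarray(reference_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])         # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def rotation_angles_deg(R):
    """Decompose R = Rx(omega) Ry(phi) Rz(kappa) into angles in degrees."""
    phi = np.degrees(np.arcsin(R[0, 2]))
    omega = np.degrees(np.arctan2(-R[1, 2], R[2, 2]))
    kappa = np.degrees(np.arctan2(-R[0, 1], R[0, 0]))
    return omega, phi, kappa
```

With one point pair per check area, fitting R and t to the six pairs and decomposing R gives rotation differences of the kind reported in Table 10.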


5. DISCUSSION

In the best case, when Optech’s lidar data was registered with image 3, the result was excellent regarding both shifts and rotations. In Table 1, the estimated errors in that case were also the lowest, the number of control lines was the highest and the distribution of the control lines was relatively good (see Figure 8, on the left). On the contrary, in the worst case, when Leica’s lidar data was registered with image 2, significant shift and rotation errors remained. This can also be seen in Table 1, in which the estimated errors were the highest. Moreover, in the case of image 2 and Leica’s data, the distribution of the control lines was rather poor, because all control lines were located in the upper-right corner of the image (see Figure 8, on the right). Overall, the connection between rotation errors and, especially, the standard deviations of the Z shifts is obvious. The same phenomenon is also visible in the standard deviations of the X and Y errors, although not as clearly.

Figure 8. Laser-derived ridges are illustrated with red lines. Green lines represent those ridges that have been used for registration. On the left: image 3 and Optech’s data (the best case). On the right: image 2 and Leica’s data (the worst case).

In our algorithm, only those lines that were considered to fit with the others were accepted into the solution, i.e. outliers were discarded from the process. In some cases, this may result in only a few lines being included in the adjustment. For example, Figure 8 illustrates how the algorithm has accepted only a few tie lines among the complete set of lidar-derived ridges. Another problem was that for some images the distribution of the accepted tie features was not optimal. Moreover, in the worst case, all tie features were located in one corner of the image within a relatively small area. It is evident that in such cases the adjustment becomes unreliable. The ridge extraction results differ slightly between Optech’s and Leica’s data, which also leads to a different selection of corresponding line pairs. Therefore, it is not possible to directly see similar behavior of the results when examining different combinations of ALS data and images.

The use of linear features is not feasible in all possible registration scenarios. For example, if the area does not include any man-made objects, and especially roof ridges, our approach will probably not be suitable for solving the relative orientations of the data sets. In such cases, alternative tie features and methods should be used (Rönnholm and Haggrén, 2012). Roof ridges are not always clearly definable. In some cases, the actual ridge is not exactly the intersection of two roof planes, even though that intersection can be quite accurately reconstructed from lidar point clouds. In addition, if a ridge is not sharp but rounded, as illustrated in Figure 9, it is difficult to automatically find the accurate centre line of the ridge from images. Typically, an edge-like shadow on a round ridge moves according to the sun angle. These difficulties easily cause some misalignment of tie features and thus reduce the registration accuracy.

Figure 9. Example of a ridge that is not exactly defined by the intersection of two roof planes. In addition, this type of ridge is not easy to extract accurately and automatically from images.

One advantage of the presented approach is that it is relatively fast compared to more complicated algorithms, because the processing time is linearly proportional to the number of 3D control lines. Moreover, because each point representing a roof ridge is found separately, the method appears to be able to detect outliers. Currently, possible outliers are rejected during the adjustment. However, the method could be enhanced by fitting a line to the found line points and rejecting possible outliers before the adjustment (see the sketch below). If a line is very short and corresponds to only a few pixels on the image plane, the orientation of the line becomes uncertain because of outliers. According to our experience, the 3D control lines should correspond to 30-50 pixels on the image plane in order to ensure enough information for determining the correct direction of a line.
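The suggested enhancement of fitting a line to the detected edge pixels before the adjustment could, for example, follow the sketch below: a total least squares line fit (via the principal direction of the points) with iterative rejection of pixels lying too far from the line. The threshold and names are illustrative assumptions.

```python
import numpy as np

def reject_edge_outliers(pixels, threshold=2.0, max_iter=5):
    """Fit a 2D line to candidate edge pixels (total least squares) and
    iteratively discard points farther than `threshold` pixels from it."""
    pts = np.asarray(pixels, float)
    keep = np.ones(len(pts), dtype=bool)
    for _ in range(max_iter):
        kept = pts[keep]
        if len(kept) < 3:
            break
        centroid = kept.mean(axis=0)
        _, _, Vt = np.linalg.svd(kept - centroid)      # principal direction = line direction
        direction = Vt[0]
        normal = np.array([-direction[1], direction[0]])
        dist = np.abs((pts - centroid) @ normal)       # perpendicular distances to the line
        new_keep = dist < threshold
        if np.array_equal(new_keep, keep):
            break
        keep = new_keep
    return pts[keep]
```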

6. CONCLUSIONS

We have developed a nearly automatic workflow for registering a single aerial image with airborne lidar data. Our method uses the roof ridges of buildings as tie-line features, which are identified from both frame images and lidar data. Finally, the relative orientation of the data sets is solved based on the control lines (roof ridges) and their locations on the frame images.


According to the results, the average registration (absolute) errors varied from 0.003 to 0.196 m in the X direction, from 0.018 to 0.282 m in the Y direction and from 0.010 to 0.967 m in the Z direction. Some of these shift errors can be explained by remaining rotation errors, which seem to be a subject that requires attention in forthcoming research. Rotation (absolute) errors varied from 0.001 to 0.078 degrees, from 0.006 to 0.466 degrees and from 0.013 to 0.115 degrees for the ω, ϕ and κ rotations, respectively. We noticed that the performance of the current implementation is still uncertain in some cases where the distribution of the control lines is not optimal. Therefore, further research is still needed in order to make the concept more robust. Being automatic, our registration method is relatively fast to complete. The same method can also be extended to work with sensors other than lidar and optical frame cameras.

Using linear features is feasible even if they are partially invisible or occluded on the image plane. This makes them more robust than single points. However, the use of ridges as tie features also has some limitations. Within the image footprint, there must be enough ridges and the orientations of the ridges should vary. For example, if all ridges are parallel, there is not enough information to define all exterior orientation parameters. In addition, the distribution of the tie features within the image footprint is important for the accuracy of the orientations.

7. ACKNOWLEDGEMENTS

This work was supported by the Institute of Photogrammetry and Remote Sensing, Aalto University, the Academy of Finland (projects “Interaction of Lidar/Radar Beams with Forests Using Mini-UAV and Mobile Forest Tomography”, “Residents' needs and possibilities to promote future urban infill projects and Roadside projects” and “Research on resident-driven infill development possibilities – case study in urban areas in Finland”), the Ministry of Agriculture and Forestry (LuhaGeoIT project), the Aalto Energy Efficiency Research Programme (Light Energy project), and RYM Oy (Energizing Urban Ecosystems project).

8. REFERENCES

Akca, D., 2007. Matching of 3D surfaces and their intensities. ISPRS Journal of Photogrammetry and Remote Sensing, 62(2): 112-121.

Canny, J., 1986. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6): 679-698.

Chen, L. and Lo, C., 2012. Edge-based registration for airborne imagery and lidar data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 39(B3), pp. 265-268.

Choi, K., Hong, J. and Lee, I., 2011. Precise Geometric Registration of Aerial Imagery and LIDAR Data. ETRI Journal, 33(4): 506-516.

Habib, A., Ghanma, M., Morgan, M., and Al-Ruzouq, R., 2005a. Photogrammetric and Lidar Data Registration Using Linear Features. Photogrammetric Engineering & Remote Sensing, 71(6): 699-707.


Habib, A., Ghanma, M. and Kim, E., 2005b. LIDAR Data for Photogrammetric Georeferencing. FIG Working Week 2005 and GSDI-8: From Pharaohs to Geoinformatics, Cairo, Egypt, April 16-21, 15 pages.

Habib, A., Jarvis, A., Kersting, A., and Alghamdi, Y., 2008. Comparative analysis of georeferencing procedures using various sources of control data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 37(Part B4), pp. 1147-1152.

Heipke, C., 1997. Automation of Interior, Relative, and Absolute Orientation. ISPRS Journal of Photogrammetry and Remote Sensing, 52(1): 1-19.

Huang, H., Gong, P., Cheng, X., Clinton, N., and Li, Z., 2009. Improving Measurement of Forest Structural Parameters by Co-Registering of High Resolution Aerial Imagery and Low Density LiDAR Data. Sensors, 9: 1541-1558.

Karjalainen, M., Hyyppä, J. and Kuittinen, R., 2006. Determination of Exterior Orientation Using Linear Features from Vector Maps. The Photogrammetric Record, 21(116): 329-341.

Liu, L. and Stamos, I., 2007. A Systematic Approach for 2D-Image to 3D-Range Registration in Urban Environments. Proc. Workshop Virtual Representations Modeling Large-Scale Environments, 8 pages.

Mitishita, E., Habib, A., Centeno, J., Machado, A., Lay, J., Wong, C., 2008. Photogrammetric and lidar data integration using the centroid of a rectangular roof as a control point. The Photogrammetric Record, 23(121): 19-35.

McIntosh, K., Krupnik, A. and Schenk, T., 1999. Utilizing airborne laser altimetry for the improvement of automatically generated DEMs over urban areas. International Archives of Photogrammetry and Remote Sensing, 32(Part B3), pp. 563-569.

Mulawa, D. C. and Mikhail, E. M., 1988. Photogrammetric treatment of linear features. International Archives of Photogrammetry and Remote Sensing, 27(Part B3), pp. 383-393.

Postolov, Y., Krupnik, A. and McIntosh, K., 1999. Registration of airborne laser data to surfaces generated by photogrammetric means. International Archives of Photogrammetry and Remote Sensing, 32(Part 3/W14), pp. 95-99.

Pothou, A., Karamitsos, S., Georgopoulos, A., and Kotsis, I., 2006. Performance evaluation for aerial images and airborne laser altimetry data registration procedures. ASPRS, Nevada, 13 pages.

Rönnholm, P., Hyyppä, H., Pöntinen, P., Haggrén, H., and Hyyppä, J., 2003. A Method for Interactive Orientation of Digital Images Using Backprojection of 3D Data. The Photogrammetric Journal of Finland, 18(2): 58-69.

Rönnholm, P., Hyyppä, H., Hyyppä, J., and Haggrén, H., 2009. Orientation of Airborne Laser Scanning Point Clouds with Multi-View, Multi-Scale Image Blocks. Sensors, 9: 6008-6027.


Rönnholm, P., 2011. Registration Quality – Towards Integration of Laser Scanning and Photogrammetry. In: EuroSDR Official Publication No 59, pp. 9-64.

Rönnholm, P. and Haggrén, H., 2012. Registration of laser scanning point clouds and aerial images using either artificial or natural tie features. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., I-3, pp. 63-68, doi:10.5194/isprsannals-I-3-63-2012.

Schenk, T. and Csathó, B., 2002. Fusion of LIDAR data and aerial imagery for a more complete surface description. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(3), pp. 310-317.

Schenk, T., 2004. From point-based to feature-based aerial triangulation. ISPRS Journal of Photogrammetry & Remote Sensing, 58(5-6): 315-329.

Tommaselli, A. and Lugnani, J., 1988. An alternative mathematical model to collinearity equations using straight features. International Archives of Photogrammetry and Remote Sensing, 27(Part B3), pp. 765-774.

Tommaselli, A. M. G. and Tozzi, C. L., 1996. A recursive approach to space resection using straight lines. Photogrammetric Engineering & Remote Sensing, 62(1): 57-66.

Wu, J., 2009. Automatic Geo-registration of Aerial Image Sequence with Untextured Lidar Data Using Line Features. Proc. of SPIE, Vol. 7496, 7 pages.

Zitová, B. and Flusser, J., 2003. Image registration methods: a survey. Image and Vision Computing, 21(11): 977-1000.

