UCTEA Chamber of Surveying and Cadastre Engineers

Journal of Geodesy and Geoinformation

TMMOB Harita ve Kadastro Mühendisleri Odası

Jeodezi ve Jeoinformasyon Dergisi

Vol.1  No.1  pp. 27 - 34  May 2012 www.hkmodergi.org

A novel image fusion method using IKONOS satellite images Deniz Yıldırım, Oğuz Güngör * Karadeniz Technical University, Department of Geomatics Engineering, 61080, Trabzon, Turkey

Abstract
Received: 12 April 2012; Accepted: 17 May 2012; Published online: 10 July 2012. Volume: 1, Number: 1, Pages: 27‒34, May 2012.

In satellite remote sensing, the spatial resolution of a multispectral image over a particular region can be enhanced using a higher spatial resolution panchromatic image of the same region by a process called image fusion or, more generally, data fusion. A fusion method is considered successful if the spatial detail of the panchromatic image is transferred into the multispectral image and the spectral content of the original multispectral image is preserved in the fused product. This research proposes a novel image fusion algorithm that aims to produce fused multispectral images that are both spatially enhanced and spectrally appealing. In the proposed method, an intermediary image is first created from the original panchromatic and multispectral images. This intermediary image contains the high frequency content of the panchromatic source image and is the image closest to the (upsampled) multispectral source image with respect to a defined natural semi-inner product. The final fused image is obtained by applying a function that performs a convex linear combination of the intermediary image and the upsampled multispectral image; this function depends on the local standard deviations of the source images. To test the performance of the method, images from the IKONOS sensor are fused using the Brovey, IHS, PCA and wavelet transform based methods, and the proposed method. Both visual and quantitative evaluation results indicate that the proposed method yields results as spectrally and spatially appealing as the wavelet transform based method, and that it performs better when spatial detail enhancement and spectral content preservation in the fused products are considered together. The method also has the potential to produce better results if a better fitting, more complex function is found.

Keywords: Fusion, Spectral, Spatial, Image, Mathematics, IKONOS

Özet (Turkish abstract, translated): A novel image fusion method using IKONOS satellite images
Received: 12 April 2012; Accepted: 17 May 2012; Published online: 10 July 2012. Volume: 1, Number: 1, Pages: 27‒34, May 2012.

In remote sensing, the process of improving the spatial resolution of multispectral satellite images using higher spatial resolution panchromatic images of the same region is called image fusion. An image fusion method is considered successful if the spatial detail in the panchromatic image is transferred to the multispectral image and the spectral content of the multispectral image is preserved as in the original. This study proposes a new image fusion method that aims to produce multispectral images that are improved in terms of spatial resolution and also enhanced spectrally. The proposed method first creates an intermediary image: among the images each of whose bands exactly contains the high frequency part of the panchromatic image, it is the one closest to the original multispectral image with respect to a defined semi-inner product. The fused image is formed by applying certain functions to a convex linear combination of this intermediary image and the original multispectral image; these functions depend on the local standard deviations of the original images. To test the performance of the method, IKONOS satellite images were fused using the Brovey, IHS, PCA and wavelet transform methods and the proposed method. Visual and quantitative evaluation results show that the proposed method gives results as good as wavelet transform based methods both spatially and spectrally, and performs better when the enhancement of spatial detail and the preservation of spectral content in the fused products are considered together. The method has the potential to be improved further by finding more suitable functions.

Anahtar Sözcükler (Keywords): Kaynaştırma, Spektral, Konumsal, Görüntü, Matematik, IKONOS

* Corresponding Author: Phone: +90 (462) 3772761 Fax: +90 (462) 3280918 E‒mail: [email protected] (D. Yıldırım), [email protected] (O. Güngör) © 2012 HKMO


1. Introduction

There are a number of remote sensing satellites currently in orbit around the Earth, managed by various organizations. They are the primary tools for acquiring images in satellite based remote sensing. GeoEye satellites, including IKONOS (4 m multispectral (4 bands) and 1 m panchromatic) and GeoEye‒1 (1.6 m multispectral (4 bands) and 0.41 m panchromatic), and DigitalGlobe satellites, including Quickbird (2.4 m multispectral (4 bands) and 0.6 m panchromatic) and WorldView‒2 (2 m multispectral (8 bands) and 0.5 m panchromatic), offer high spatial resolution imagery (Klemas 2011; Loarie et al. 2007). On a GeoEye‒1 image with a spatial resolution of 41 cm, a 41 cm × 41 cm object of arbitrary height on Earth can be discerned from its surroundings; the parallel borderlines of objects can be resolved if they are at least 41 cm apart. The spatial resolution is related to the ground sample distance (GSD), the distance between the centers of the ground areas represented by adjacent square pixels on the image. Unlike the GSD, the spatial resolution is not necessarily uniform between images acquired by the same sensor.

EO‒1 is a remote sensing satellite launched as a part of NASA's Earth Observing System. It carries the Hyperion sensor, which offers visible and infrared hyperspectral imagery with over 200 spectral bands, each approximately 10 nm wide, including 14 bands for the wavelength range 620‒750 nm (red), so one can discern 14 different shades of red (Url‒1). Hence, the spectral resolution of the images acquired by the Hyperion sensor is high. The spectral resolution is related to the number of bands and the spectral range of the bands. As can be inferred from the information above, remote sensing satellites generally provide both panchromatic and multispectral images simultaneously for the same region at a given time, and, mainly for technical reasons, the panchromatic images have higher spatial resolution than the multispectral images of the same sensor.
Image fusion in satellite remote sensing is the data fusion of at least two images of the same region, where the images possibly differ in basic properties such as size, spatial/spectral/radiometric resolution and date of acquisition. The fused image is superior in information content to both individual input images. A special case of image fusion is pan‒sharpening, in which a panchromatic image with a better spatial resolution is fused with a coarser resolution multispectral image to get a new multispectral image that has color information similar to the input multispectral image but a better spatial resolution, close to that of the panchromatic image (Pohl and Van Genderen 1998; Güngör 2008). The fused image has as many bands as the input multispectral image, and each band has the same GSD and size as the input panchromatic image. The fused images should have the highest possible spatial information content and, at the same time, preserve good spectral information quality (Cliché et al. 1985). Zhang (2008) divides image fusion methods into three categories, namely modulation‒based methods (e.g. the Brovey method), component substitution methods (e.g. the IHS and PCA methods), and multi‒scale analysis‒based methods.
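As a rough illustration of the modulation‒based category, a Brovey‒style ratio fusion can be sketched as follows. The helper name `brovey_fuse`, the toy arrays and the stabilizing constant `eps` are assumptions for illustration, not the implementations used in this study:

```python
import numpy as np

def brovey_fuse(pan, ms_up, eps=1e-6):
    """Brovey-style modulation fusion sketch: modulate each upsampled
    multispectral band by the ratio of the panchromatic image to a
    synthetic intensity image (the sum of the bands)."""
    synthetic = ms_up.sum(axis=0) + eps   # synthetic image: sum of the bands
    ratio = pan / synthetic               # per-pixel modulation factor
    return ms_up * ratio                  # broadcasts over the band axis

# Toy data: a 2x2 panchromatic image and three constant multispectral bands
pan = np.array([[6.0, 3.0], [9.0, 12.0]])
ms_up = np.ones((3, 2, 2))                # each band constant 1, so sum = 3
fused = brovey_fuse(pan, ms_up)           # each fused band is close to pan / 3
```

Note that the fused bands sum (almost exactly) to the panchromatic image, which is the sense in which the spatial detail is transferred.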

The proposed method fits best into the modulation‒based category. The method starts by modifying the panchromatic image so that the inessential parts, as defined by a particular space of images, are removed. It is statistical in essence, as the result depends on local variances. The function expressing this dependence can be changed with a parameter, enabling a trade‒off between spectral content preservation and spatial detail transfer in the fusion process.

2. Image Fusion Methods

2.1. Modulation‒based Methods

In modulation‒based methods, first a synthetic image is created from the input images, and then the ratio of the panchromatic image to the synthetic image is multiplied by the upsampled multispectral image, band by band, to get the fused image (Yang et al. 2010). A popular modulation‒based method is the Brovey method, where the synthetic image is the sum of the multispectral bands. First, each multispectral band is normalized by dividing by the sum of all spectral bands (adding a small constant), and then the bands are multiplied by the panchromatic band (Zhang 2002). Equation 1 gives the formula:

$F_i = C \cdot XS_i$    (1)

Here $F_i$ is the ith fused band, $XS_i$ is the upsampled ith band of the lower resolution multispectral image, and C is the ratio of the panchromatic and the synthetic (intensity) images at a pixel location.

2.2. Component Substitution Methods

Component substitution methods proceed in three steps. First, a forward transform is applied to the multispectral bands to get the components in the new data space. Then the spatial component, the component that should resemble the panchromatic image most, is replaced by the panchromatic image. Afterwards, the inverse transform is applied to get the fused image (Yang et al. 2010). The IHS and PCA methods are the best known component substitution methods. In the IHS color space, I stands for the intensity component; H is hue, an angle between 0° and 360° that gives the dominant color; and S stands for saturation. The intensity component of a multispectral image contains its spatial information (Chibani and Houacine 2002), whereas the hue and saturation components retain the spectral information (Pohl and Van Genderen 1998; González‒Audícana et al. 2005; González‒Audícana et al. 2006). For image fusion, the original multispectral bands are upsampled so that they have the same size as the panchromatic image. Then the three upsampled multispectral bands are transformed into the IHS space and the intensity component is replaced with the panchromatic image, while the original hue and saturation components are kept unchanged. Finally, the fused image is obtained via the inverse IHS transformation. The forward and inverse IHS transforms defined by Harrison and Jupp (1990) are given in Equation 2.


$$
\begin{pmatrix} I \\ V_1 \\ V_2 \end{pmatrix}
=
\begin{pmatrix}
1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \\
1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\
1/\sqrt{2} & -1/\sqrt{2} & 0
\end{pmatrix}
\begin{pmatrix} R \\ G \\ B \end{pmatrix},
\qquad
\begin{pmatrix} F_1 \\ F_2 \\ F_3 \end{pmatrix}
=
\begin{pmatrix}
1/\sqrt{3} & 1/\sqrt{6} & 1/\sqrt{2} \\
1/\sqrt{3} & 1/\sqrt{6} & -1/\sqrt{2} \\
1/\sqrt{3} & -2/\sqrt{6} & 0
\end{pmatrix}
\begin{pmatrix} \mathrm{Pan} \\ V_1 \\ V_2 \end{pmatrix}
\qquad (2)
$$

Here R, G and B are the original multispectral bands, F1, F2 and F3 are the fused bands, and Pan is the panchromatic image. The H and S components are obtained from V1 and V2; for example, H = atan(V2/V1). The fused color multispectral image will have the same spatial resolution as the panchromatic image and carry its spatial details. In fact, it turns out that the whole process amounts to adding the same value (the difference between I and the panchromatic image value) to each band, which also causes spectral distortion. In satellite remote sensing one usually has at least four multispectral bands, and using a weighted mean for the I component can give better results. The weights depend greatly on the sensor. Choi et al. (2008) calculated the best weights for IKONOS images, and found that I = R/10 + G/4 + B/12 + 17 NIR/30.

Another popular component substitution method is the PCA method (Zhang 2010). The PCA transforms the multispectral image into its principal components, minimizing the covariance between bands; the principal components are linearly independent. The first principal component conveys the spatial detail information of the multispectral image, whereas the remaining principal components contain the spectral information (Chavez and Kwarteng 1989; Zhou et al. 1998; González‒Audícana et al. 2004). The first principal component is replaced with the higher spatial resolution panchromatic image, and the inverse PCA transform is applied to get a higher resolution fused image (Gonzalez and Woods 1992). The performance of this statistical method depends much on the input images (Güngör 2008).

2.3. Multi‒scale Analysis Methods

Multi‒scale analysis methods use wavelet multi‒resolution decomposition (Zhang 2008). The wavelet transform, developed in the 1980s, decomposes an image in both space and scale (Schneider and Farge 2006). Instead of the sine and cosine basis functions of the Fourier transform, one has mother wavelet functions generating the wavelets. An orthogonal basis consisting of wavelets exists, generating the square integrable functions on [0,1]; continuous functions are quantized. The panchromatic image is decomposed into its wavelet components and, depending on the ratio of resolutions, one obtains its scale space representation. In our application, the panchromatic image is decomposed twice, as the ratio of resolutions is 4. Afterwards, the input multispectral image replaces the approximation image, the wavelet component of the panchromatic image with the same resolution as the input multispectral image. The fused image is obtained by applying the inverse wavelet transformation.
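The decompose‒substitute‒reconstruct loop described above can be sketched with a much simpler block‒mean pyramid standing in for the wavelet transform. This is a stand‒in under stated assumptions, not the actual wavelet decomposition used in the study; a real implementation would use a wavelet library, and all names and toy arrays are illustrative:

```python
import numpy as np

def fuse_approx_substitution(pan, ms_band, block=4):
    """Split the panchromatic image into a coarse approximation (4x4 block
    means stand in for the level-2 wavelet approximation at the 4:1
    resolution ratio) plus high-frequency detail, then put the multispectral
    band in place of the approximation before reconstructing."""
    h, w = pan.shape
    coarse = pan.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    up = np.ones((block, block))
    detail = pan - np.kron(coarse, up)      # pan's high frequencies
    return np.kron(ms_band, up) + detail    # MS base plus pan detail

pan = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 panchromatic image
ms = np.array([[10.0, 20.0], [30.0, 40.0]])     # toy 2x2 multispectral band
fused = fuse_approx_substitution(pan, ms)
```

By construction, the block means of the fused result equal the multispectral band (spectral base preserved) while the within‒block variation is exactly the panchromatic detail, which mirrors the approximation‒substitution idea.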

3. Methodology

The proposed method involves defining a space T, a subspace of the space of images of the same size as the input panchromatic image. By means of covariance, T defines, uniquely up to scale, a seminorm whose kernel is T, and this seminorm defines a real valued semi‒inner product. T is to help us find the essential information of the panchromatic image that is to be transferred into the fused image, by isolating the inessential part. Any such T is to include the constant images. T can be defined by a number of elements that span it. Given H, a finite set of zero‒sum filter kernels, T can be chosen to be the space of images that have zero valid convolution with any element of H, hence also with any linear combination of elements of H. Popular methods do not make use of such spaces directly.

Given P, the input panchromatic image, and T, an image P0 is found such that t = P ‒ P0 is in T, P0 and t have zero covariance, and t has zero mean. Then Z, the multispectral image closest to the upsampled input multispectral image that contains all the T‒essential information of the panchromatic image, is calculated; the bands of Z are multiples of P0. In the resultant image, for each band, the value of each pixel depends on the values of the corresponding pixels in Z and in the upsampled input multispectral image. The relation is defined by a function fr applied to a local variance with a given window size, and a parameter r > 0. This trade‒off parameter r enables us to change how spatially or spectrally good the fusion will be. The following hold true for fr (r > 0):

1. fr are continuous on [0,1], with fr(0) = 0 and fr(1) = 1.
2. fr′ exist and are continuous and non‒negative on (0,1).
3. fr are convex for r ≥ 1 and concave for 0 < r ≤ 1.
4. As r approaches s in (0,1), fr approaches fs on [0,1].
5. As r decreases to 0, fr approaches g1 (described below) on [0,1].
6. For every ε > 0, there exists Rε such that for all r > Rε, fr is within ε of g2, where g1(x) = 1 for x in (0,1] with g1(0) = 0, and g2(x) = 0 for x in [0,1) with g2(1) = 1.

Note that if two families of functions satisfy the conditions above, then so do their convex linear combinations. Also, f1 is the identity function. One may further impose that fr can be written in terms of hypergeometric functions; calculations of general hypergeometric functions, however, take a long time. Where the local variances are high, the fused image will look more like the panchromatic image, and where the local variances are small, the fused image will resemble the original multispectral image. This strategy aims to preserve the color content of the input multispectral image while enhancing its spatial detail content. Furthermore, if r is decreased, the fused image is expected to look more like the input panchromatic image.
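The paper does not give a closed form for fr. The power family fr(x) = x^r is one simple family that satisfies conditions 1‒6 above (continuous, fr(0) = 0, fr(1) = 1, convex for r ≥ 1, concave for 0 < r ≤ 1, f1 the identity, with limits g1 and g2 as r → 0 and r → ∞), and it is used here purely as an assumed stand‒in; the normalized local‒variance map is also assumed to be precomputed:

```python
import numpy as np

def f_r(x, r):
    """Power-family stand-in for the paper's f_r: f_r(0) = 0, f_r(1) = 1,
    convex for r >= 1, concave for 0 < r <= 1, and f_1 is the identity."""
    return np.power(x, r)

def blend(z, ms_up, v, r):
    """Per-pixel convex combination of the intermediary image z and the
    upsampled multispectral band ms_up, weighted by f_r applied to a
    normalized local-variance map v with values in [0, 1]."""
    w = f_r(v, r)
    return w * z + (1.0 - w) * ms_up

v = np.array([[0.0, 0.25], [0.81, 1.0]])  # toy normalized local variances
z = np.full((2, 2), 100.0)                # toy intermediary image
ms_up = np.zeros((2, 2))                  # toy upsampled multispectral band
fused = blend(z, ms_up, v, r=0.5)         # concave weighting, leans toward z
```

With this family, decreasing r enlarges the weights on (0,1) and pulls the result toward the intermediary (panchromatic‒derived) image, matching the trade‒off behavior described above.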


4. Application and Findings

4.1. Study Area

The study area is a 2.048 km × 2.048 km coastal‒urban region in the Turkish province of Trabzon, centered approximately at 40° 59.7′ North, 39° 46.4′ East. The province has a population of over 760000 and lies in the Eastern Black Sea Region (Url‒2). The location of the province within Turkey and the study area are shown in Figure 1. The image shows the Karadeniz Technical University main campus around the center, the Trabzon international airport to the northeast of the university, and their surroundings. The southern part of the image is rural, and the northern part is the Black Sea and its shores.

4.2. Study Material

An IKONOS panchromatic image acquired in May 2003 is used together with its multispectral image acquired simultaneously. The best spatial resolutions of the panchromatic and multispectral images are 1 m and 4 m, respectively. The multispectral image has four spectral bands, namely blue, green, red, and infrared. The signal quantization level is 11 bits, which relates to the dynamic range and the radiometric resolution. The input images are displayed in Figure 2.

4.3. Visual Evaluation

There are four bands, and only three bands can be displayed at once. In Figure 2 and Figure 3, infrared is displayed as red, green as green, and blue as blue. Vegetation appears red‒orange, as it reflects infrared most, much more than green (Knipling 1970). In Figure 4 and Figure 5, the images are displayed in true color. The pixel values of all images in all figures are multiplied by 32 = 2^(16‒11) for brightness. Figure 3 displays the images obtained by applying the methods to the full image, and Figure 4 is obtained by applying them to a subimage. Figure 3 gives the impression that all methods except the PCA method perform spectrally well at the full image level.
On the other hand, the images obtained through the PCA method are rather dark and blurred (Figure 3b and Figure 4b). Figure 4 helps analyze the performance of each method in terms of spectral quality and spatial detail enhancement. As can be seen in Figure 4c, the IHS method creates spectral distortion, especially inside the encircled areas. The Brovey method performs well spatially, and performs better than IHS spectrally. However, it also introduces some color distortion, as seen in Figure 4d: the color of the cars inside the encircled area is bluish, which is not the case in the original multispectral image. The wavelet method performs spectrally well, but produces blocking artifacts, especially near the edges of buildings and linear details (Figure 4e). The proposed method gives quite satisfactory results both spectrally and spatially (Figure 4f). In addition, the proposed method is flexible: one may choose different values for the parameters, or convex linear combinations of them, to achieve either spectrally or spatially better results. This creates a trade‒off between the quality of spatial detail and spectral content preservation, since spatially better configurations have poorer spectral content preservation, and vice versa. Figure 5 demonstrates this effect in true color.

4.4. Quantitative Evaluation

Many image fusion metrics have been proposed so far. Common ones include CC (here, the mean squared difference between the correlation coefficients of respective bands of the original and fused multispectral images), RMSE (root mean square error), ERGAS, RASE, SAM, SID, and SSIM (structural similarity index).

Figure 1: Trabzon province (marked green) on a map of Turkey and the study area expanded on the right.

Figure 2: The input images (full image): the panchromatic image and the multispectral image.


Figure 3: Fusion images with the proposed and selected methods (band combination Blue‒Green‒Infrared). Panels: (a) multispectral image, (b) PCA method, (c) IHS method, (d) Brovey method, (e) wavelet method, (f) proposed method.


Figure 4: Fusion images with the proposed and selected methods applied on a subimage (band combination Blue‒Green‒Red). Panels: (a) multispectral image, (b) PCA method, (c) IHS method, (d) Brovey method, (e) wavelet method, (f) proposed method.


Figure 5: Different configurations yielding different spatial and spectral qualities. Panels: original multispectral, original panchromatic, current configuration, and a spatially better but spectrally worse configuration.

Wald's (2000) ERGAS (relative dimensionless global error) is an improvement on RMSE that takes the resolution difference into account. RASE gives the average performance of the image fusion method in the spectral bands as a percentage, and is calculated using the RMSE and the mean radiance (Choi et al. 2005). The formulas for RASE and ERGAS are given below in Equations 3 and 4, respectively (Wald 2000).

$$\mathrm{RASE} = \frac{100}{M}\sqrt{\frac{1}{K}\sum_{k=1}^{K}\mathrm{RMSE}(B_k)^2} \qquad (3)$$



$$\mathrm{ERGAS} = 100\,\frac{h}{l}\sqrt{\frac{1}{K}\sum_{k=1}^{K}\frac{\mathrm{RMSE}(B_k)^2}{M_k^2}} \qquad (4)$$

Here, h/l is the ratio of resolutions, Mk is the mean of Bk, the kth spectral band of a total of K spectral bands (Güngör 2008). M, the mean radiance, is calculated as shown in Equation 5.



$$M = \frac{1}{K}\sum_{k=1}^{K} M_k \qquad (5)$$
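A hedged numpy sketch of Equations 3‒5, assuming the standard Wald (2000) formulations summarized above; the function and array names are illustrative, not from the paper:

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def rase(ms_bands, fused_bands):
    """RASE (%) per Equation 3: 100/M times the root of the mean, over the
    K bands, of the squared RMSEs; M is the mean radiance (Equation 5)."""
    rmses = np.array([rmse(m, f) for m, f in zip(ms_bands, fused_bands)])
    M = np.mean([m.mean() for m in ms_bands])
    return 100.0 / M * np.sqrt(np.mean(rmses ** 2))

def ergas(ms_bands, fused_bands, h_over_l):
    """ERGAS per Equation 4: like RASE, but each band's RMSE is divided by
    that band's own mean, and the result is scaled by the resolution ratio."""
    terms = [(rmse(m, f) / m.mean()) ** 2 for m, f in zip(ms_bands, fused_bands)]
    return 100.0 * h_over_l * np.sqrt(np.mean(terms))

# Toy single-band example: constant bands differing by 5 radiance units
ms = [np.full((4, 4), 50.0)]
fd = [np.full((4, 4), 55.0)]
r1 = rase(ms, fd)          # RMSE = 5, M = 50
r2 = ergas(ms, fd, 0.25)   # IKONOS resolution ratio h/l = 1/4
```

Lower values are better for both metrics, with 0 for a perfect spectral match.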

SAM (spectral angle mapper) gives the mean change in color angle per pixel. SID (spectral information divergence) is an improvement on SAM in spectral similarity characterization (Chang 1999). It is the sum of relative entropies; the aim is to find a probabilistic relation scale for the pixel values of the compared images, viewed as random variables (Chang 1999). Wang et al. (2004) created the SSIM index, an improvement on their previous spatial index UIQI (universal image quality index), which measures the structural similarity between images and hence the quality of spatial detail transfer. It involves calculating the means and standard deviations for each window w of a particular size, sweeping moving windows over the entire images. Equation 6 gives the formula for a window w (Wang et al. 2004).


$$\mathrm{SSIM}(x, y \mid w) = \frac{(2\,\bar{w}_x \bar{w}_y + C_1)(2\,\sigma_{w_x w_y} + C_2)}{(\bar{w}_x^2 + \bar{w}_y^2 + C_1)(\sigma_{w_x}^2 + \sigma_{w_y}^2 + C_2)} \qquad (6)$$

Table 1. Fusion evaluation statistics

Method     CC       ERGAS    RASE%     RMSE     SAM      SID      SSIM
Opt        0        0        0         0        0        0        0
Proposed   0.0948   2.6385   10.439    43.466   1.8053   0.0018   0.6901
Wavelet    0.0489   4.8662   18.855    78.506   1.4153   0.0608   0.665
PCA        0.2112   6.8023   25.816    107.49   2.7529   0.0119   0.5416
IHS        0.0538   5.85     22.6616   94.328   1.9158   0.0347   0.7461
Brovey     0.0904   87.677   85.413    355.64   2E‒07    2E‒06    0.2228

Table 1 lists the statistics for these metrics; optimal values are listed under "Opt". (In the original typeset table, the best result for each index is marked red and the second best blue.) In three of the six spectral metric tests listed, the proposed method performed best, trailed by the wavelet method. The PCA method was not even the second best performer in any of the tests for this set of images. The Brovey method resulted in a high RMSE value while yielding near‒zero SAM and SID values. The proposed method resulted in a small SID value; the distribution of pixel values in the fused image is probabilistically similar to that of the input multispectral image. According to the SSIM values, the IHS method performed best spatially, slightly better than the proposed method.
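Equation 6 for a single window pair can be sketched as follows. The helper name is hypothetical, and the stabilizing constants are the common 8‒bit choices C1 = (0.01·255)^2 and C2 = (0.03·255)^2, used here only for illustration; 11‒bit IKONOS data would use a larger dynamic range:

```python
import numpy as np

def ssim_window(x, y, C1=6.5025, C2=58.5225):
    """Single-window SSIM (Equation 6): compares window means, variances,
    and the cross-covariance of the two image windows."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

w = np.array([[1.0, 2.0], [3.0, 4.0]])
score = ssim_window(w, w)   # identical windows score 1
```

The full index averages this quantity over all moving windows of both images, so a value near 1 indicates good structural (spatial) agreement.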

5. Conclusion

The quantitative and visual evaluations show that popular image fusion methods may produce either spectrally or spatially appealing results, but not always both. The aim of this study was to develop a flexible and convenient fusion method that performs well both spatially and spectrally: a user should be able to change one input parameter and get either spatially or spectrally better results. The proposed method, a quasi‒statistical method depending on local variances, has been tested against the selected image fusion methods. With the current configuration, the results were found to be satisfactory. As desired, one may change the configuration so that more spatial detail is transferred, or so that spectral content is better preserved, during the fusion process. Because the result of the proposed method depends on the local variances, isolated corner‒like objects, which are expected to be noise in many applications, can be filtered out. Our future research will focus on finding better choices of the T space and the fr functions to enhance the results further.

References

Chang C., (1999), Spectral information divergence for hyperspectral image analysis, In: Proc. Geosci. Remote Sens. Symp., Vol.1, pp.509-511.
Chavez P.S., Kwarteng A.Y., (1989), Extracting spectral contrast in Landsat Thematic Mapper image data using selective principal component analysis, Photogrammetric Engineering and Remote Sensing, 55(3), 339-348.
Chibani Y., Houacine A., (2002), The joint use of IHS transform and redundant wavelet decomposition for fusing multispectral and panchromatic images, International Journal of Remote Sensing, 23(18), 3821-3833.
Choi M., Kim R.Y., Nam M.R., Kim H.O., (2005), Fusion of multispectral and panchromatic satellite images using the curvelet transform, IEEE Geoscience and Remote Sensing Letters, 2(2), 136-140.
Choi M., Kim H., Cho N.I., Kim H.O., (2008), An improved intensity-hue-saturation method for IKONOS image fusion, International Journal of Remote Sensing.

Cliché G., Bonn F., Teillet P., (1985), Integration of the SPOT panchromatic channel into its multispectral mode for image sharpness enhancement, Photogrammetric Engineering & Remote Sensing, 51(3), 311-316.
González-Audícana M., Saleta J.L., Catalan R.G., Garcia R., (2004), Fusion of multispectral and panchromatic images using improved IHS and PCA mergers based on wavelet decomposition, IEEE Transactions on Geoscience and Remote Sensing, 42(6), 1291-1299.
González-Audícana M., Otazu X., Fors O., Seco A., (2005), Comparison between Mallat's and the à trous discrete wavelet transform-based algorithms for the fusion of multispectral and panchromatic images, International Journal of Remote Sensing, 26(3), 595-614.
González-Audícana M., Otazu X., Fors O., Alvarez-Mozos J., (2006), A low computational-cost method to fuse IKONOS images using the spectral response function of its sensors, IEEE Transactions on Geoscience and Remote Sensing, 44(6), 1683-1691.
Gonzalez R.C., Woods R.E., (1992), Digital Image Processing, Addison-Wesley, Reading, MA.
Güngör O., (2008), Multi Sensor Multi Resolution Image Fusion, PhD Thesis, Purdue University.
Klemas V., (2011), Remote sensing techniques for studying coastal ecosystems: An overview, Journal of Coastal Research, 27(1), 2-17.
Knipling E.B., (1970), Physical and physiological basis for the reflectance of visible and near-infrared radiation from vegetation, Remote Sensing of Environment, 1, 155-159.
Loarie L.S., Joppa L.N., Pimm S.L., (2007), Satellites miss environmental priorities, Trends in Ecology & Evolution, 22(12), 630-632.
Pohl C., van Genderen J.L., (1998), Multisensor image fusion in remote sensing: Concepts, methods and applications, International Journal of Remote Sensing, 19(5), 823-854.
Schneider K., Farge M., (2006), Wavelets: Mathematical theory, In: Encyclopedia of Mathematical Physics, (Françoise J.P., Naber G., Tsun T.S., Eds.), Academic Press, Oxford, pp.426-438.
Wald L., (1999), Some terms of reference in data fusion, IEEE Transactions on Geoscience and Remote Sensing, 37(3), 1190-1193.
Wald L., (2000), Quality of high resolution synthesized images: Is there a simple criterion?, In: Proc. Int. Conf. Fusion Earth Data.
Wang Z., Bovik A.C., Sheikh H.R., Simoncelli E.P., (2004), Image quality assessment: From error measurement to structural similarity, IEEE Transactions on Image Processing, 13(4), 600-612.
Yang J., Zhang J., Li H., Sun Y., Pu P., (2010), Pixel level fusion methods for remote sensing images: A current review, International Archives of Photogrammetry and Remote Sensing (IAPRS), XXXVIII(7B).
Zhang J., (2008), Generalized model for remotely sensed data pixel-level fusion, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XXXVII(B7), 1051-1056.
Zhang J., (2010), Multi-source remote sensing data fusion: Status and trends, International Journal of Image and Data Fusion, 1, 5-24.
Zhang Y., (2002), Problems in the fusion of commercial high resolution satellite images as well as Landsat 7 images and initial solutions, International Archives of Photogrammetry and Remote Sensing (IAPRS), 34(4).
Zhou J., Civco D.L., Silander J.A., (1998), A wavelet transform method to merge Landsat TM and SPOT panchromatic data, International Journal of Remote Sensing, 19(4), 743-757.
Url-1, HYPERION Spectral Coverage, USGS EO-1 Website, http://eo1.usgs.gov/sensors/hyperioncoverage, [Accessed March 2012].
Url-2, Trabzon Province, English Wikipedia, en.wikipedia.org/wiki/Trabzon_Province, [Accessed May 2012].
