Computers & Graphics 36 (2012) 224–231


Journal homepage: www.elsevier.com/locate/cag

Applications of Geometry Processing

Abstract line drawings from photographs using flow-based filters

Shandong Wang a,b,c,*, Enhua Wu a,b,c, Youquan Liu d, Xuehui Liu a, Yanyun Chen a

a State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
b Department of Computer and Information Science, University of Macau, Macau, China
c Graduate University of Chinese Academy of Sciences, Beijing 100190, China
d School of Information Engineering, Chang'an University, Xi'an, China

Article history: Received 6 August 2011; received in revised form 13 February 2012; accepted 16 February 2012; available online 9 March 2012.

Abstract

This paper presents a non-photorealistic rendering technique for stylizing a photograph in the line drawing style. We first construct a smooth, direction-enhancing edge flow field from the eigenvectors of the smoothed structure tensor, and then use the flow field to guide the line drawing process. In particular, we develop a new operator for detecting step edges that outperforms existing edge detectors in terms of feature preservation and edge localization. Our approach applies the proposed detector in the direction perpendicular to the edge flow tangent and then smooths the intermediate results along the edge flow curve. Optionally, an anisotropic nonlinear filter with an elliptical kernel can be incorporated into the algorithms to extract line edges, which extends our technique to creating images that convey a hand-painting style. The presented algorithms are all highly parallel, allowing real-time performance with a GPU implementation. Experimental results show that our approach produces attractive and impressive line illustrations from a variety of photographs. © 2012 Elsevier Ltd. All rights reserved.

Keywords: Non-photorealistic rendering; Line drawing; Edge detection; Flow-based filtering

1. Introduction

Much research has been dedicated to Non-Photorealistic Rendering (NPR) techniques for creating a wide variety of expressive styles such as painting, drawing, technical illustration, and animated cartoons. As a simple and effective tool for shape visualization and visual communication, line drawing is one of the most popular styles used in NPR applications. Because it focuses on capturing and conveying meaningful features while minimizing distractions from relatively insignificant details, the line drawing effect enables viewers to quickly recognize and appreciate the subject [1,2].

The goal of line drawing is to produce a sparse set of lines so that our visual system, which is remarkably effective at reconstructing 3D shape, may recognize the object [3]. Most line drawing algorithms are formulated as finding the local minima or maxima of mathematically defined feature sets. In this paper, we mainly investigate image-space line drawings, in which lines are used to simplify or abstract the input photographs. In many image-based stylization and abstraction systems [4,5], line extraction is the first step, and adding the extracted lines to the smoothed regions further increases the visual distinctiveness of these locations.

* Corresponding author at: State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China. E-mail addresses: [email protected], [email protected] (S. Wang), [email protected] (E. Wu).

0097-8493/$ – see front matter © 2012 Elsevier Ltd. All rights reserved. doi:10.1016/j.cag.2012.02.011

Line extraction, often called edge detection, is an important low-level image analysis technique that has been widely developed in the fields of computer vision and pattern recognition over the last 20 years. Many edge detection algorithms have been developed for various applications [6], but from the perspective of esthetics, existing edge detectors usually fail to produce good stylistic illustrations, in part because of the difficulty of identifying shapes that are embedded in a 2D image and often corrupted by noise.

Kang et al. [7] presented a Flow-based Difference-of-Gaussians (FDoG) filter that automatically generates a high-quality stylistic line drawing from a photograph. In this approach, the meaningful structures in the scene are captured and depicted with a set of smooth and coherent lines. The key idea behind the FDoG filter is the introduction of an anisotropic, flow-based curve-shaped kernel when applying the DoG operator. Such flow-based filtering not only enhances the coherence of the extracted lines but also effectively suppresses noise. Compared with other edge detectors, the FDoG filter has proven to be more effective for creating stylistic illustrations. However, from the perspective of feature preservation and edge localization, the approach has some limitations.

Many different types of feature lines have been proposed, such as sharp creases [8], contours and suggestive contours [9,10], and image-space ridges [11]. Aiming to highlight certain meaningful features, these computer-assisted line drawing techniques are described in terms of mathematical sets.


Fig. 1. Compared with the popular FDoG filtering method (b), our approach produces more attractive and impressive line illustrations with good edge localization and better-preserved features (c). Additionally, incorporating line edges can yield a high-quality illustration that conveys a hand-painting style (d). (a) Input image. (b) FDoG filtering. (c) Our result with only step edges. (d) Our result with both step edges and line edges.

Similarly, the DoG operator used in the FDoG filter extracts feature lines as the maxima of the negative magnitude of the second directional derivatives. Although this method can detect salient lines, it may miss some perceptually important features, such as certain dense lines in high-contrast regions, as demonstrated in Figs. 1 and 4. Moreover, because of the inherent limitation of the thresholded DoG edge model [4,7], the captured lines may drift from the true edges for large values of the spatial scale parameter, as will be discussed in Section 3.2.

In addition to the above limitations, from the perspective of artistic expression there is still much room for improvement. It is well known that, regarding line drawing illustrations created by artists, human observers are not only able to rebuild complex shapes from limited information but are also able to evaluate esthetics and other properties of images. In fact, to convey a specific mood or to enhance an artistic feeling, artists usually create illustrations partially using lines other than the types found in pure line drawing, that is, lines that also incorporate properties such as color, tone, and material. Therefore, it is worth exploring approaches to creating an impressive image that, at first glance, resembles a painting by an actual human.

To address the limitations mentioned above, this paper presents a novel approach for automatically generating abstract line drawings from photographs using two flow-based filters. Our approach has several advantages:

Robustness: Following the flow-based filtering framework inspired by Kang et al. [7] and employing a smooth local edge flow field to guide the line drawing process, our method is robust to noisy input photographs and can produce coherent, high-quality line illustrations.

Feature-preserving: Our method differs from FDoG filtering in that it develops a new unbiased operator for detecting step edges (i.e., the natural boundaries between two adjacent regions), which not only captures and conveys more meaningful features in the photograph but also produces coherent, stylistic lines with good edge localization.

Hand-painting style: We optionally adopt an anisotropic nonlinear filter with an elliptical kernel for extracting line edges (i.e., the bar-shaped narrow regions of finite width seen as lines in the image), which is particularly useful for enhancing features and conveying a hand-painting style that can elicit an esthetic response from viewers.

Simplicity: The algorithms are straightforward and simple to implement, and they run in real time on contemporary graphics hardware.

2. Related work

Numerous methods for image edge detection have been studied. An early approach, the Canny edge detector [12,13], is a popular choice for many image-based NPR applications [14–17] because it usually produces satisfying results. In fact, a variety of edge detectors are useful in many image processing applications, and we make no attempt to provide a comprehensive survey in this paper; for a detailed summary of edge detection techniques, we refer the reader to [6]. From the perspective of artistic or esthetic appreciation, however, images produced with commonly used edge detectors such as the Canny edge detector often fail to qualify as good illustrations. Son et al. [1] proposed a line drawing method based on likelihood-function estimation for finding genuine shape boundaries, but the approach is too computationally expensive for real-time applications. Gooch et al. [18] presented a black-and-white facial illustration system based on a DoG operator, which is a computationally simple approximation of the Marr and Hildreth edge detector [19]. Winnemöller et al. [4] extended DoG edge detection with a slightly smoothed thresholding function to increase temporal coherence in videos. Because of the isotropic filter kernel, the resulting edge map is often corrupted by noise, making the isolated and incoherent lines less meaningful to viewers.

An improved approach to creating artistic illustrations is to take advantage of flow-based filters, which can clearly reveal the direction of the local image structure and clarify region boundaries and features. Kang et al. [7] first used a kernel-based nonlinear vector smoothing technique to build a smooth, feature-preserving local edge tangent flow (ETF) and then proposed a flow-based anisotropic DoG filtering technique that directly produces the line illustration. Fast separable implementations of the FDoG filter with comparably high-quality results have been developed in [20,5]. In addition, such flow-based filtering frameworks are widely adopted in various image and video abstraction applications [5,20–23].

Traditional edge detection algorithms, such as the Canny edge detector and the DoG operator, are not suitable for line edges because they usually detect the boundaries on both sides of narrow regions. In contrast, several approaches attempt to extract line edges as ridges and valleys in the image using differential geometric properties [24–26]. Another popular approach treats line edges as 2D regions and attempts to extract all of the pixels within those regions [27,28]. Liu et al. [27] used an isotropic nonlinear filter to extract whole pixels with high responses as wide lines. Li et al. [28] proposed a method for curvilinear structure extraction and delineation using kernel-based density estimation. These two methods eliminate directional derivatives and thus are robust against noise.

Motivated by these related studies, this paper provides a novel technique for automatically generating abstract line drawings from photographs using adapted flow-based filters. The aim is to deliver artistic line illustrations while preserving more meaningful structures with good edge localization.

226

S. Wang et al. / Computers & Graphics 36 (2012) 224–231

3. Our approach

Inspired by the flow-based filtering framework of [7], our approach begins by estimating the local edge flow field from the smoothed structure tensor of the input photograph. The proposed operations, step edge detection and line edge extraction, are then performed in a first filtering pass in the direction perpendicular to the edge tangent. Finally, we obtain coherent and stylistic lines by smoothing the intermediate results along the edge flow curve.
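To make the two-pass scheme concrete, the following NumPy sketch illustrates the second pass: accumulating a per-pixel response along a streamline of the flow field. This is our own minimal illustration, not the authors' GPU implementation; `flow` is assumed to store one unit edge-tangent vector per pixel (its construction is described in Section 3.1), and `sigma_m` is an assumed smoothing scale.

```python
import numpy as np

def bilinear(img, x, y):
    """Sample img at a real-valued (x, y) position with bilinear interpolation."""
    h, w = img.shape
    x = min(max(x, 0.0), w - 1.001)
    y = min(max(y, 0.0), h - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x0 + 1] +
            (1 - fx) * fy * img[y0 + 1, x0] + fx * fy * img[y0 + 1, x0 + 1])

def smooth_along_flow(resp, flow, sigma_m=3.0):
    """Second pass: Gaussian-weighted accumulation of `resp` along the
    streamline traced through the unit tangent field `flow` (h x w x 2)."""
    h, w = resp.shape
    out = np.empty_like(resp)
    steps = int(2 * sigma_m)  # streamline half-length, in unit steps
    wgt = np.exp(-0.5 * (np.arange(steps + 1) / sigma_m) ** 2)
    for py in range(h):
        for px in range(w):
            total, wsum = wgt[0] * resp[py, px], wgt[0]
            for sign in (1.0, -1.0):          # walk both ways from the pixel
                x, y = float(px), float(py)
                vx, vy = sign * flow[py, px]
                for s in range(1, steps + 1):
                    x, y = x + vx, y + vy     # advance along the local tangent
                    total += wgt[s] * bilinear(resp, x, y)
                    wsum += wgt[s]
                    ix = int(round(min(max(x, 0), w - 1)))
                    iy = int(round(min(max(y, 0), h - 1)))
                    tx, ty = flow[iy, ix]
                    if tx * vx + ty * vy < 0: # tangent signs are arbitrary:
                        tx, ty = -tx, -ty     # keep a consistent direction
                    vx, vy = tx, ty
            out[py, px] = total / wsum
    return out
```

In the paper this pass runs in a fragment shader; the per-pixel loops here are merely the most direct CPU transcription of the same idea.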


3.1. Edge flow estimation

Our solution to edge flow estimation derives from the robust orientation estimation based on the structure tensor introduced by [20]. For each pixel, we first construct a local initial structure tensor, and then we obtain an edge flow field from the eigenvectors of the smoothed structure tensor. Compared with the nonlinear vector smoothing technique used to construct the edge tangent flow in [7], this method produces a similar result at a much lower computational cost.

Given an input image, we first convert it to the perceptually uniform CIE-Lab color space. The initial structure tensor is the 2 × 2 matrix


$$J(\nabla I) = \nabla I\,\nabla I^{T} \qquad (1)$$

where $\nabla I$ is the gradient computed by applying the Sobel operator to the luminance channel. The orientations can then be averaged by smoothing the matrix componentwise with a Gaussian filter:

$$J_\sigma(\nabla I) = G_\sigma * (\nabla I\,\nabla I^{T}) =: \begin{pmatrix} E & F \\ F & G \end{pmatrix} \qquad (2)$$

The matrix of Eq. (2) is positive semidefinite and thus has two eigenvalues measuring the contrast along two orthogonal eigenvectors, which correspond to the directions of maximal and minimal local contrast, respectively:

$$\lambda_{1,2} = \frac{E + G \pm \sqrt{(E - G)^2 + 4F^2}}{2} \qquad (3)$$

$$m = \begin{pmatrix} F \\ \lambda_1 - E \end{pmatrix}, \qquad n = \begin{pmatrix} \lambda_2 - G \\ F \end{pmatrix} \qquad (4)$$

Selecting the eigenvector corresponding to the minimal local contrast, namely n, gives a vector field of edge flow, which is visualized with line integral convolution [29] in Fig. 2(a) and (b). The local orientation is defined as θ = arg n. We also define a local anisotropy measurement, first proposed by [30]:

$$A = \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2} = \frac{\sqrt{(E - G)^2 + 4F^2}}{E + G} \qquad (5)$$

The anisotropy measurement A ranges from 0 to 1, where 1 indicates a strongly oriented pattern and 0 an isotropic one. Fig. 2(c) illustrates how the anisotropy measurement reflects the locally oriented structure.
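For reference, this construction can be sketched in a few lines of NumPy/SciPy. This is a minimal sketch under our reading of Eqs. (1)–(5); the function and variable names are ours, not the authors' code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def edge_flow(lum, sigma=3.0):
    """Edge tangent flow and anisotropy from the smoothed structure
    tensor of a luminance image, following Eqs. (1)-(5)."""
    Ix = sobel(lum, axis=1)            # gradient via the Sobel operator
    Iy = sobel(lum, axis=0)
    # Componentwise Gaussian smoothing of the structure tensor, Eq. (2)
    E = gaussian_filter(Ix * Ix, sigma)
    F = gaussian_filter(Ix * Iy, sigma)
    G = gaussian_filter(Iy * Iy, sigma)
    root = np.sqrt((E - G) ** 2 + 4.0 * F ** 2)     # Eq. (3) discriminant
    lam2 = 0.5 * (E + G - root)                     # minimal-contrast eigenvalue
    # Minor eigenvector n = (lam2 - G, F): direction of minimal contrast
    nx, ny = lam2 - G, F
    norm = np.hypot(nx, ny)
    norm[norm == 0.0] = 1.0            # leave flat regions as zero vectors
    nx, ny = nx / norm, ny / norm
    A = root / np.maximum(E + G, 1e-12)             # anisotropy, Eq. (5)
    return np.dstack((nx, ny)), A
```

The returned tangent field has the h × w × 2 layout assumed by the `smooth_along_flow` sketch in Section 3.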

Fig. 3. Comparison between the DoG operator and our step edge detector. (a) is an input signal I(x) containing two step edges and two line edges, to which noise was added randomly. (b) is the result of convolving the signal with the first Gaussian derivative, namely $G'_{\sigma_c} * I(x)$. (c) is the result of convolving with the DoG operator, namely $(G_{\sigma_c} - G_{\sigma_s}) * I(x)$. (d) is the result of our proposed step edge detector of Eq. (7). As can be observed, compared with (c), our edge detector has better localization for step edges and bright line edges.

Fig. 2. For the input photograph of Fig. 1(a), the edge flow field is visualized with line integral convolution before (a) and after (b) smoothing. The anisotropy measurement in (c) ranges from 0 to 1, where 1 indicates strongly oriented and 0 indicates an isotropic pattern.


3.2. Step edges detection

In this paper we mainly discuss the two types of edges usually contained in natural photographs: step edges and line edges. Note that step edges mark region boundaries, whereas line edges lie within narrow regions, as illustrated in Fig. 3(a). The FDoG filter detects edges essentially by applying a linear DoG operator, a popular second-derivative edge detector that computationally approximates the Laplacian of Gaussian (LoG) operator. Because directly detecting zero-crossings is difficult and sensitive to noise, applying a thresholding function [4,7] to the DoG filtering result makes it easy to produce a wide range of stable, stylized, and coherent lines. Nonetheless, the captured lines may drift from the true edges for large values of the spatial scale parameter; that is, the estimated position is pushed toward the weak side of the true edge, as shown in Fig. 3(c).

Based on the observation that edges are extracted and localized by finding extreme values of the first derivative or zero-crossings of the second derivative of the luminance function, we define our step edge detector by combining the first Gaussian derivative and the DoG operator:

$$H(x) = \begin{cases} 1.0, & D(x) \le \tau \\ 1.0 - \tanh(\varphi_e \cdot D(x)), & \text{otherwise} \end{cases} \qquad (6)$$

where

$$D(x) = \left|G'_{\sigma_c} * I(x)\right| - \left|(G_{\sigma_c} - G_{\sigma_s}) * I(x)\right| = \left|\sum_{t \in N_g(x)} G'_{\sigma_c}(\lVert t - x \rVert)\, I(t)\right| - \left|\sum_{t \in N_g(x)} \left(G_{\sigma_c}(\lVert t - x \rVert) - G_{\sigma_s}(\lVert t - x \rVert)\right) I(t)\right| \qquad (7)$$

In the above formulas, I(x) denotes the luminance value of pixel x, and H(x) is the detected result saved in a gray-level image, where edges are represented in black (0) and other pixels in white (1). N_g(x) denotes the neighboring pixels along the gradient direction of pixel x. $G_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^2/2\sigma^2}$ is a 1D Gaussian function, and its first derivative is $G'_\sigma(x) = -\frac{x}{\sigma^3\sqrt{2\pi}}\,e^{-x^2/2\sigma^2}$. $\lVert t - x \rVert$ is the Euclidean distance between pixel positions t and x. Furthermore, σ_c and σ_s determine the spatial scale for edge detection, and σ_s = 1.6σ_c is set so that $(G_{\sigma_c} - G_{\sigma_s})$ in Eq. (7) approximates the Laplacian of Gaussian [19]. Once σ_c is given by the user, the size of N_g(x) is automatically determined as 3σ_c. The parameter φ_e controls the sharpness of the edge representations, and τ is a threshold larger than 0; a typical value of 0.3 gives a reasonable result.

The desirable property of our proposed edge detector is that, for step edges, the value of Eq. (7) is always maximal at the edge position, so that the position detected by Eq. (6) is always the true edge position, as shown in Fig. 3(d). For line edges, however, the proposed detector finds two parallel edges, one on each side of the bar-shaped narrow region, leaving a gap between them. More specifically, for a bright line edge, our method produces two edges that are much closer to the center of the narrow region than those detected by the DoG operator; for a dark line edge, our method still produces two edges on the two sides of the narrow region, while the DoG operator happens to detect one edge within the region because of the edge drift associated with the thresholded DoG edge model. Fig. 4(c) shows the line drawing result of the proposed step edge detector. To accurately localize all of the points (pixels) within the bar-shaped narrow regions, that is, to extract the line edges, we optionally apply an anisotropic nonlinear filter, which is described in the next section.
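As a concrete 1-D illustration of Eqs. (6) and (7), mirroring the signal experiment of Fig. 3, the following sketch (our own; the synthetic profile and parameter defaults are only indicative) computes D(x) and H(x):

```python
import numpy as np

def gauss(x, s):
    return np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def dgauss(x, s):
    return -x * np.exp(-x ** 2 / (2 * s ** 2)) / (s ** 3 * np.sqrt(2 * np.pi))

def step_edge_response(I, sigma_c=1.0, tau=0.3, phi_e=0.25):
    """Eq. (7): D(x) = |G'_sc * I| - |(G_sc - G_ss) * I| with ss = 1.6 sc,
    followed by the thresholding of Eq. (6)."""
    sigma_s = 1.6 * sigma_c
    # The paper sets N_g(x) from 3*sigma_c; we widen the window slightly
    # so that the broader G_ss kernel is adequately sampled.
    r = int(np.ceil(3 * sigma_s))
    t = np.arange(-r, r + 1, dtype=float)
    D = (np.abs(np.convolve(I, dgauss(t, sigma_c), mode="same")) -
         np.abs(np.convolve(I, gauss(t, sigma_c) - gauss(t, sigma_s), mode="same")))
    H = np.where(D <= tau, 1.0, 1.0 - np.tanh(phi_e * D))
    return D, H

# Synthetic profile: two step edges (x = 50, 150) and a bright line edge.
signal = np.zeros(200)
signal[50:150] = 1.0
signal[95:105] = 2.0
D, H = step_edge_response(signal)
print(D.argmax())   # the strongest response sits on a true edge position
```

Subtracting the DoG magnitude suppresses the biased side-lobe responses while keeping the first-derivative maximum at the true edge, which is the unbiased-localization property discussed above.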

3.3. Line edges extraction

To extract line edges, Liu et al. [27] proposed a wide line detector based on isotropic filtering responses over circular masks, which works well for a range of images containing line edges of different widths, especially those in which the width varies greatly. The method is also robust against noise because the filter does not depend on any derivative. In this paper, we adapt the solution of [27] to our line drawing approach by automatically adapting the shape of the mask to the local structure of the input, yielding a so-called anisotropic nonlinear filter. In isotropic regions the mask should be a circle, whereas in anisotropic regions it should become an ellipse whose major axis is aligned with the principal direction m of Eq. (4), as shown in Fig. 6. Because the filter adapts to the local structure, noise suppression is easily achieved, and directional features are better preserved and emphasized; see Fig. 7 for a comparison.

Fig. 4. Compared with FDoG filtering (b), our solution (c) preserves image features better while capturing lines with good localization. (a) Input. (b) FDoG filtering. (c) Our result with step edges.

Fig. 5. Line drawing examples by different filters. Reminder: please zoom 300–400% for better view. (a) Input. (b) Result with step edges. (c) Result with line edges. (d) Result by blending (b) and (c).


For any given image, the luminance similarity between the center pixel x and any other pixel x′ within the mask can be measured as

$$S(x, x') = \operatorname{sech}^{5}\!\left(\frac{I(x) - I(x')}{\delta}\right) \qquad (8)$$

where $\operatorname{sech}(x) = 2/(e^{x} + e^{-x})$ and δ is the luminance contrast threshold. The similarities between the center and all other pixels within the mask are accumulated to give the mass of similarity of that pixel:

$$m(x) = \sum_{x' \in C(x)} S(x, x') \qquad (9)$$

where C(x) is the filter mask of x, an ellipse determined by the local orientation θ and the anisotropy measurement A defined in Section 3.1. Let a and b be the major and minor radii of the ellipse, respectively. To adjust the eccentricity according to the amount of anisotropy, we set a = (1 + A)r and b = r/(1 + A), where r is the user-defined mask radius. Assuming that the ellipse's major axis is initially aligned with the horizontal direction, we rotate it counter-clockwise by the angle 0.5π + θ so that the major axis is aligned with the smoothed gradient direction. The specific implementation of C(x) is similar to that of the elliptical kernel used in the anisotropic Kuwahara filter [22]. For a given pixel, the smaller the mass of similarity, the more likely it is that the pixel lies within a line edge region.

Thus, our anisotropic nonlinear filter is defined as follows:

$$O(x) = \begin{cases} 1.0, & m(x) \ge g(x) \\ m(x)/g(x), & \text{otherwise} \end{cases} \qquad (10)$$

where g(x) is equal to half of the area of the elliptical kernel C(x), which is the theoretical optimum proven in [27]. Note that the relationship between the width of a detectable line edge, 2w, and the major radius a of the elliptical mask is a ≥ 2.5w. The threshold δ in the similarity definition is automatically set slightly larger than the standard deviation within the mask. The result of line edge extraction using Eq. (10) is shown in Fig. 5(c). Note that many non-line edges (e.g., the large tree branch) that align well with the step edges in Fig. 5(b) are also extracted. In those regions, the luminance contrast is strong relative to δ, so the mass of similarity m computed by Eq. (9) is usually smaller than half the mask area g; these step edges are therefore detected as line edges. However, this discrepancy does not affect the overall purpose of the line drawing. Fig. 5(d) is the enhanced result obtained by combining the step edges and the line edges (multiplying the values of Eqs. (6) and (10) in the first filtering pass), where the gaps between the two parallel edges in (b) are filled by (c). Although the resulting lines in Fig. 5(d) are wider than the original lines in Fig. 5(a), we prefer this combination mainly for its feature enhancement value. In fact, an important role of the line edges is to reinforce features after step edge detection, which is useful for conveying a hand-painting style.
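The following NumPy sketch illustrates Eqs. (8)–(10) at a single pixel. It is our own illustration, not the authors' code: the mask rasterization is simplified relative to the anisotropic Kuwahara-style kernel referenced above, and the demo parameter values (r, A, δ) are only indicative.

```python
import numpy as np

def sech(x):
    return 2.0 / (np.exp(x) + np.exp(-x))

def elliptical_mask(phi, A, r):
    """Boolean ellipse with major radius a = (1+A)r, minor radius
    b = r/(1+A), and major axis at angle phi (the gradient direction)."""
    a, b = (1.0 + A) * r, r / (1.0 + A)
    n = int(np.ceil(a))
    yy, xx = np.mgrid[-n:n + 1, -n:n + 1]
    u = xx * np.cos(phi) + yy * np.sin(phi)    # coordinate along the major axis
    v = -xx * np.sin(phi) + yy * np.cos(phi)   # coordinate along the minor axis
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

def line_edge_response(I, py, px, theta, A, r=4.0, delta=0.5):
    """O(x) of Eq. (10) at pixel (py, px): mass of similarity (Eqs. (8)-(9))
    over an elliptical mask rotated so its major axis follows the smoothed
    gradient direction, i.e. theta + pi/2."""
    mask = elliptical_mask(theta + 0.5 * np.pi, A, r)
    n = mask.shape[0] // 2
    patch = I[py - n:py + n + 1, px - n:px + n + 1]
    S = sech((patch - I[py, px]) / delta) ** 5   # Eq. (8)
    m = S[mask].sum()                            # Eq. (9), mass of similarity
    g = 0.5 * mask.sum()                         # half of the mask area
    return 1.0 if m >= g else m / g              # Eq. (10)

# Toy example: a dark, 3-pixel-wide vertical line on a bright background;
# the tangent theta is vertical, so the major axis ends up horizontal.
img = np.ones((21, 21))
img[:, 9:12] = 0.0
print(line_edge_response(img, 10, 10, theta=0.5 * np.pi, A=0.5))  # < 1: line pixel
```

Because fewer than half of the mask pixels are similar to the dark center, the response falls below 1, flagging the pixel as part of a line edge.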

4. Results and discussion

Fig. 6. Masks of different shapes shown at different positions; each shape is determined by the local orientation and anisotropy measurement.

We have implemented our abstract line drawing technique using OpenGL and the Cg shading language to accelerate rendering on the GPU. All drawing examples were tested on a PC with a 2.66 GHz Intel Core2 Q9400 CPU and an NVIDIA GeForce GTX 285 graphics card. In our implementation, most computations of the algorithms are performed in fragment shaders with the help of the Frame Buffer Object (FBO) technique.

Fig. 7. Application to an image (a) with Liu’s [27] isotropic nonlinear filtering (b), isotropic filtering with directional smoothing along the edge flow (c), and our proposed flow-based anisotropic filtering (d), that is, directional smoothing along the edge flow after anisotropic filtering.


Fig. 8. Comparison with other methods. (a) is an input image corrupted with 3% Gaussian noise. (b) is obtained by isotropic DoG filtering [4], σ_c = 1.0, τ = 0.997, φ_e = 1.25. (c) is obtained by FDoG filtering [7], σ_c = 1.0, τ = 0.995, φ_e = 0.45. (d) is obtained by Eq. (6), σ_c = 1.0, τ = 1.8, φ_e = 0.2. (a) Input. (b) Isotropic DoG. (c) FDoG. (d) Step edges.

Fig. 9. Test photographs.

Regarding the computational cost, it depends mainly on the input image size and the filter kernel size. For a common 512 × 512 image such as Fig. 1(a), our algorithms take approximately 9 ms to produce the result in Fig. 1(c) with the default parameter values σ = 3, σ_c = 1.0, φ_e = 0.25, τ = 0.3, whereas Kang's GPU-accelerated implementation takes approximately 8.5 ms to obtain the result shown in Fig. 1(b). Combining the anisotropic nonlinear filter with the algorithm, we achieve the enhanced result shown in Fig. 1(d) at a cost of 35 ms; the extra time is spent computing the elliptical mask and its standard deviation (Section 3.3). For an image with a resolution of 1000 × 808, as in Fig. 9(e), the processing rate for creating the result of Fig. 11(e) is 18 fps. This demonstrates that our approach is suited to real-time NPR applications.

From the line drawing perspective, the FDoG filter outperforms other popular line extraction techniques, including the Canny detector, mean-shift segmentation, and isotropic DoG, as demonstrated in [7]. However, the thresholded DoG edge model usually leads to edge drift as well as significantly missing edges. Fig. 8 compares our step edge detection method with the isotropic DoG and FDoG filtering techniques. Like the FDoG method, ours is robust and less susceptible to image noise. Because our method uses an unbiased step edge detector combining the first and second derivatives, it not only corrects edge drift but also produces coherent lines with more shape features preserved.


As can be seen from Fig. 3(b) and (d), the profiles of $|G'_{\sigma_c} * I(x)|$ and the original D(x) of Eq. (7) are similar except for the magnitude of the function values; therefore, similar results can be derived from this absolute form of the first Gaussian derivative alone. Based on the flow-based framework, we thus additionally implemented Eq. (6) with the modification $D(x) = |G'_{\sigma_c} * I(x)|$; the corresponding results are Fig. 10(c), with a large threshold τ = 1.0, and Fig. 10(d), with a small threshold τ = 0.5. Compared with Fig. 10(e) and (f), which were obtained with the full step edge detector using different control parameters, Fig. 10(c) and (d) inevitably contain some noise caused by the first derivative of the image.
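This modification amounts to dropping the DoG correction term from Eq. (7). A sketch of the variant (ours, with indicative defaults matching the parameters quoted for Fig. 10(c)):

```python
import numpy as np

def gradient_edge_response(I, sigma_c=1.0, tau=1.0, phi_e=0.25):
    """Variant of Eq. (6) using only D(x) = |G'_sc * I(x)|, i.e. the
    first Gaussian derivative without the DoG term of Eq. (7)."""
    r = int(np.ceil(3 * sigma_c))
    t = np.arange(-r, r + 1, dtype=float)
    dgauss = (-t * np.exp(-t ** 2 / (2 * sigma_c ** 2)) /
              (sigma_c ** 3 * np.sqrt(2 * np.pi)))
    D = np.abs(np.convolve(I, dgauss, mode="same"))
    return np.where(D <= tau, 1.0, 1.0 - np.tanh(phi_e * D))
```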

Fig. 10. Line drawing control with different edge detectors and different parameters. (c) uses Eq. (6) with the modification $D(x) = |G'_{\sigma_c} * I(x)|$, τ = 1.0, φ_e = 0.25, σ_c = 1.0. (d) is the same as (c) except τ = 0.5, φ_e = 0.45. (e) uses Eqs. (6) and (7), τ = 0.55, φ_e = 0.15, σ_c = 1.0. (f) is the same as (e) except τ = 0.25, φ_e = 0.45. (g) uses Eq. (10), w = 2.0. (h) is the same as (g) except w = 4.0. Compared with (b), the results (i) and (j) are more attractive and impressive to viewers' visual esthetic appreciation. Reminder: please zoom 300–400% for a better view. (a) Input. (b) FDoG filtering. (c) Gradient edges. (d) Gradient edges. (e) Step edges. (f) Step edges. (g) Line edges. (h) Line edges. (i) Result by blending (e) & (g). (j) Result by blending (f) & (h).

Fig. 11. More line drawing results obtained from the original photographs in Fig. 9. Default parameters for (a)–(f): σ = 3, σ_c = 1.0, φ_e = 0.25, τ = 0.3, w = 2.5, except that (a) τ = 0.8, (c) σ_c = 0.8, (d) φ_e = 0.15, τ = 0.5, (e) φ_e = 0.45, w = 4.5. Reminder: please zoom 300–400% for a better view.


Furthermore, the anisotropic nonlinear filtering process contributes to the abstract and artistic depiction of the scene, as shown in Fig. 10(g) and (h), which use different filtering radii. More line drawing results obtained from the test photographs in Fig. 9 are shown in Fig. 11. As these experimental results show, our technique performs consistently well on a variety of photographs for creating abstract and artistic line illustrations.

5. Conclusion

We presented an automatic technique for creating abstract line drawings from photographs using the flow-based filtering framework. Based on the smoothed structure tensor, our technique first generates a smooth, feature-preserving edge flow field. We then described two filters as new solutions to step edge detection and line edge extraction. Guided by the edge flow field, we first apply the proposed filters in the direction perpendicular to the edge tangent and then accumulate the individual filter responses along the edge flow curve. What separates our approach from the popular FDoG filtering method is that we develop a new filter for detecting step edges with good localization and adapt an existing isotropic nonlinear filter to follow the flow field for accurately extracting line edges. We show that our approach considerably improves line drawing performance in terms of feature preservation and artistic expression, producing high-quality line illustrations from photographs that imitate the human hand-drawing style.

Acknowledgements

The authors would like to thank the owners of the photographs included in this paper for kindly allowing us to use them for our experiments. Fig. 1(a) is from [22]. Fig. 4(a) is courtesy of ImageKingdom.com. Fig. 7(a) is from [23]. The original image of Fig. 8 is courtesy of Andrew Calder. Figs. 5(a), 10(a), 9(a) and (f) are from xitek.com. This research is supported by a National Fundamental Research Grant of Science and Technology (973 Project: 2009CB320802) and a research grant from the University of Macau.

References

[1] Son M, Kang H, Lee Y, Lee S. Abstract line drawings from 2D images. In: Proceedings of Pacific Graphics. Washington, DC, USA: IEEE Computer Society; 2007. p. 333–42.
[2] Cole F, Golovinskiy A, Limpaecher A, Barros HS, Finkelstein A, Funkhouser T, et al. Where do people draw lines? ACM Trans Graph 2008;27(3):88:1–88:11.


[3] Hertzmann A. Non-photorealistic rendering and the science of art. In: NPAR '10: proceedings of the 8th international symposium on non-photorealistic animation and rendering; 2010. p. 147–57.
[4] Winnemöller H, Olsen SC, Gooch B. Real-time video abstraction. In: Proceedings of ACM SIGGRAPH 2006. New York, NY, USA: ACM; 2006. p. 1221–6.
[5] Kang H, Lee S, Chui CK. Flow-based image abstraction. IEEE Trans Vis Comput Graph 2009;15(1):62–76.
[6] Pellegrino FA, Vanzella W, Torre V. Edge detection revisited. IEEE Trans Syst Man Cybern B Cybern 2004;34(3):1500–18.
[7] Kang H, Lee S, Chui CK. Coherent line drawing. In: NPAR '07: proceedings of the 5th international symposium on non-photorealistic animation and rendering. New York, NY, USA: ACM; 2007. p. 43–50.
[8] Markosian L, Kowalski MA, Goldstein D, Trychin SJ, Hughes JF, Bourdev LD. Real-time nonphotorealistic rendering. In: Proceedings of the 24th annual conference on computer graphics and interactive techniques; 1997. p. 415–20.
[9] DeCarlo D, Finkelstein A, Rusinkiewicz S, Santella A. Suggestive contours for conveying shape. ACM Trans Graph 2003;22:848–55.
[10] Judd T, Durand F, Adelson E. Apparent ridges for line drawing. In: ACM SIGGRAPH 2007 papers. New York, NY, USA: ACM; 2007. p. 19.
[11] Lee Y, Markosian L, Lee S, Hughes JF. Line drawings via abstracted shading. In: ACM SIGGRAPH 2007 papers. New York, NY, USA: ACM; 2007. p. 18.
[12] Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell 1986;8(6):679–98.
[13] Meer P, Georgescu B. Edge detection with embedded confidence. IEEE Trans Pattern Anal Mach Intell 2001;23:1351–65.
[14] DeCarlo D, Santella A. Stylization and abstraction of photographs. ACM Trans Graph 2002;21:769–76.
[15] Fischer J, Bartz D, Straßer W. Stylized augmented reality for improved immersion. In: VR '05: proceedings of IEEE virtual reality; 2005. p. 195–202.
[16] Kang HW, Chui CK, Chakraborty U. A unified scheme for adaptive stroke-based rendering. Vis Comput 2006;22(9):814–24.
[17] Orzan A, Bousseau A, Barla P, Thollot J. Structure-preserving manipulation of photographs. In: NPAR '07: proceedings of the 5th international symposium on non-photorealistic animation and rendering; 2007. p. 103–10.
[18] Gooch B, Reinhard E, Gooch A. Human facial illustrations: creation and psychophysical evaluation. ACM Trans Graph 2004;23(1):27–44.
[19] Marr D, Hildreth E. Theory of edge detection. Proc R Soc Lond Ser B Biol Sci 1980;207(1167):187–217.
[20] Kyprianidis JE, Döllner J. Image abstraction by structure adaptive filtering. In: Proceedings of EG UK theory and practice of computer graphics; 2008. p. 51–8.
[21] Kyprianidis JE, Kang H, Döllner J. Image and video abstraction by anisotropic Kuwahara filtering. Comput Graph Forum 2009;28(7):1955–63.
[22] Kyprianidis JE, Kang H, Döllner J. Anisotropic Kuwahara filtering on the GPU. In: Engel W, editor. GPU Pro—advanced rendering techniques. AK Peters; 2010.
[23] Kyprianidis JE, Kang H. Image and video abstraction by coherence-enhancing filtering. Comput Graph Forum 2011;30(2):593–602.
[24] Lindeberg T. Edge detection and ridge detection with automatic scale selection. Int J Comput Vis 1998;30(2):117–56.
[25] Steger C. An unbiased detector of curvilinear structures. IEEE Trans Pattern Anal Mach Intell 1998;20(2):113–25.
[26] Jacob M, Unser M. Design of steerable filters for feature detection using Canny-like criteria. IEEE Trans Pattern Anal Mach Intell 2004;26(8):1007–19.
[27] Liu L, Zhang D, You J. Detecting wide lines using isotropic nonlinear filtering. IEEE Trans Image Process 2007;16(6):1584–95.
[28] Li S-X, Chang H-X, Zhu C-F. Fast curvilinear structure extraction and delineation using density estimation. Comput Vis Image Underst 2009;113:763–75.
[29] Cabral B, Leedom LC. Imaging vector fields using line integral convolution. In: Proceedings of SIGGRAPH 1993. New York, NY, USA: ACM; 1993. p. 263–70.
[30] Förstner W, Gülch E. A fast operator for detection and precise location of distinct points, corners and centres of circular features. In: Proceedings of the ISPRS intercommission workshop; 1987. p. 281–305.