JOURNAL OF INFORMATION SCIENCE AND ENGINEERING 26, 1379-1395 (2010)

A 3D Painterly Rendering Framework for Photon Mapping*

MENG-TSAN LI AND CHUNG-MING WANG
Institute of Computer Science and Engineering
National Chung Hsing University
Taichung, 402 Taiwan

This paper presents a novel framework for 3D painterly rendering for photon mapping. We introduce a new non-photorealistic rendering pass and integrate it within the standard photon mapping pipeline. Our framework contains three passes. The photon tracing pass casts photons into a three-dimensional scene and holds photon information in a data structure developed specifically for painterly rendering. The non-photorealistic rendering pass constructs the sparse map and the informative map, which preserve the attributes of strokes required for 3D painterly rendering. In this pass, we also employ an impressionistic line integral convolution algorithm to produce various types of painterly rendering results. Finally, the rendering pass renders the photorealistic objects, generating a final image that shows the scene with both artistic painterly styles and global illumination effects. We implemented our framework and developed an experimental rendering system on a personal computer. To the best of our knowledge, the proposed framework is the first that can take advantage of both global illumination and a 3D painterly rendering environment. Experimental results and rendered images demonstrate the feasibility of our framework.

Keywords: 3D painterly rendering, non-photorealistic rendering, global illumination, photon mapping, sparse map, informative map

1. INTRODUCTION

Non-photorealistic rendering (NPR) algorithms have been under intensive study in recent years. They include algorithms developed to render effects such as stipple, pen-and-ink, and pencil techniques [1, 2]. One of the important topics has been to mimic a painter's technique of producing an image in a painterly style using various brush strokes; this technique is often called painterly rendering. Most painterly rendering algorithms are applied to a 2D image [2]. Few of them are applied to 3D models [3]; these are referred to as 3D painterly rendering. Different 3D painterly rendering techniques may apply different illumination models in order to develop their algorithms. A number of 3D painterly rendering techniques use local illumination models such as Gouraud or Phong shading for reasons of efficiency and popularity. Recently, more and more illumination models have been proposed, and one of these is global illumination. The advantage of global illumination is that rendered objects appear true to the original. To the best of our knowledge, NPR using global illumination is an open topic, with many important issues to investigate [2-4]. Among the global illumination algorithms, photon mapping, introduced by Jensen [5], is powerful because of its simplicity and its ability to produce diffuse inter-reflections and caustics.

Received May 13, 2008; revised July 8, 2009; accepted September 10, 2009. Communicated by Jiebo Luo.
* This work was partially supported by the National Science Council of Taiwan, R.O.C., under Grants No. NSC 95-2221-E-005-056, NSC 96-2221-E-005-090-MY2, and NSC 98-2221-E-005-073-MY3.

Furthermore, this algorithm is a two-pass algorithm, making it independent of the rendering stage. This independence allows us to add a novel stage for developing the 3D painterly rendering framework. In movies and games we often see a hybrid of real scenes and rendered objects; for example, the film “Stuart Little” mixes real-world scenes with an animated protagonist. Many TV or PC games such as “Dragon Quest VIII” show the protagonist in a celluloid (cel-shaded) style while the scene is rendered with the conventional Gouraud shading technique. Global illumination using photon mapping can achieve highly photorealistic rendering with lighting details. In contrast, NPR methods synthesize images with lighting and shading abstraction. The combination of these two categories of methods can create another style of image to convey information. Users will appreciate this combination, since global illumination provides pixel-level high-quality imagery while NPR methods deliver abstraction above the pixel level.
In this paper, we present a novel framework for 3D painterly rendering for photon mapping. In particular, a new NPR pass is introduced and integrated into the standard two-pass photon mapping pipeline. We extend the original data structure for photon mapping, allowing the new one to store not only the PR information but also the NPR information. Our framework presents a novel NPR pass for 3D painterly rendering. In this pass, we project the NPR photons which were absorbed on the NPR objects in order to construct the sparse map. Then, a 3D orientation field consisting of oriented vectors is constructed through interpolation with radial basis functions. We then convey the 3D NPR photons and the 3D orientation field to build the informative map, which stores the attributes of brush strokes, including positions, colors, and directions. Finally, the rendering pass renders the PR objects in order to produce the final image, illustrating the scene with global illumination effects and a painterly rendering style.
Our framework achieves the target of producing 3D painterly rendering as well as photorealistic rendering. Experimental results demonstrate the feasibility of our framework, providing much flexibility to produce various painterly styles of images as well as simultaneous global illumination effects.
The rest of the paper is organized as follows: section 2 reviews related work. We describe our framework in section 3, and demonstrate experimental results in section 4. Finally, we present conclusions and future work in section 5.

2. RELATED WORK

2.1 The Photon Mapping Algorithm

Global illumination algorithms aim to produce photorealistic images. In general, ray tracing and finite element radiosity are two different approaches. Point sampling by ray tracing is often preferred, since tessellation of the model is not necessary and it is easy to simulate specular reflections. Ray tracing casts rays from the eye point through a view plane into the 3D scene, and calculates the intensity of illumination from the light source or the surrounding environment [6]. However, this algorithm is not able to simulate caustics. Jensen therefore proposed the photon mapping algorithm to overcome this drawback.
The photon mapping algorithm [5] adopts two-pass rendering.

In the photon tracing pass, photons are emitted uniformly from the light source. According to the material characteristics associated with the objects, photons are refracted, reflected, or absorbed. Jensen records photon information in a KD-tree [7] for efficient searching. The second stage is called the rendering pass. It begins by casting rays from a viewpoint, which are then intersected with the objects in the scene. Once the visible point is determined, we search the KD-tree for a number of photons close to this visible point to estimate the radiance contributed by these photons.
The photon mapping algorithm is powerful since it can simulate caustics efficiently. The two-pass design also makes the photon tracing pass independent of the viewpoint; as a result, if the viewpoint is changed, we only need to re-estimate the radiance in the second stage. These advantages inspire us to integrate the photon mapping algorithm into our framework for 3D painterly rendering.

2.2 3D Stroke-based Rendering (SBR)

Stroke-based rendering allows us to draw a picture with different brush strokes which can simulate a painter's technique [2]. In 3D stroke-based rendering, the input is a 3D model rather than a 2D image. Unfortunately, there are very few works focusing on 3D SBR.
The earliest 3D stroke-based rendering algorithm is the painterly rendering for animation proposed by Meier [8]. Hiroaki et al. proposed an interactive point-based painterly rendering algorithm [9]. The algorithm achieved interactive frame rates by using both a point-based approach to represent the geometry of the surface and an image-based approach for the rendering. Recently, Chi et al. [3] presented an interactive 3D painterly rendering system, which uses 3D point sets and builds a multi-resolution bounding sphere hierarchy; the various sizes of spheres represent multiple stroke sizes on the canvas. The advantages of their work are that they preserve stroke densities in screen space and achieve interactive system performance. In addition, Kolliopoulos et al. [10] presented segmentation-based 3D non-photorealistic rendering in which 3D scenes are rendered as a collection of 2D image segments. They use a graph-based procedure for segmenting an image, and their algorithm can produce both curved and narrow segments. With the segmentation approach, one can create a good interface for designing artistic style images. Meier [8] presented a technique for 3D painterly rendering using particles, modelling surfaces as 3D particle sets. The particles are transformed to screen space and sorted in order of their distance from the viewpoint. Then, a painter's algorithm is used to render the particles as 2D brush strokes, where each brush stroke renders one particle [11-13].
Most 3D stroke-based rendering algorithms in the literature share a common feature: they are not built on global illumination rendering algorithms. Instead, they normally employ simplified shading algorithms, such as Gouraud shading. This is because interactivity seems to be the major concern, although this approach comes at the cost of lower image quality.

2.3 ILIC Algorithm

The line integral convolution (LIC) method [14], originally developed for imaging vector fields in scientific visualization, has the potential to produce images with directional characteristics. The ILIC (Impressionist LIC) algorithm [15, 16] takes advantage of the directional information given by a photographic image, incorporates a shading technique which blends cool or warm colors into the image, and finally applies the LIC method in order to imitate artists' paintings in the impressionist style.
The ILIC algorithm differs from the original LIC on three points. First of all, the vector field is derived from the image gradient using a Sobel filter; it is then further smoothed to automatically produce a convolution path for each pixel. Secondly, the noise texture is generated with a stratified approach, striking a balance between image distortion and blurring. Finally, the convolution is accomplished by selecting sample points with stratification along the convolution path, together with a corresponding weighting function. The success of the ILIC algorithm inspires us to adopt it in our framework as an appropriate candidate for 3D painterly rendering. Note that Wang and Lee's papers [15, 16] differ from this paper: in their work, the stroke direction is decided by texture images, whereas in this paper the stroke direction is decided by oriented vectors created on the surfaces of 3D objects.
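As a minimal sketch of the first ILIC ingredient just described, the following C++ fragment derives a per-pixel vector field from the image gradient with a Sobel filter. Taking the stroke direction perpendicular to the gradient is an assumption added here for illustration; the smoothing, stratified noise, and weighting function of the full ILIC algorithm [15, 16] are omitted, and all names are illustrative.

    #include <cmath>
    #include <vector>

    struct Vec2 { float x = 0.f, y = 0.f; };

    // 'gray' holds one luminance value per pixel, row-major, size width*height.
    std::vector<Vec2> strokeDirectionsFromGradient(const std::vector<float>& gray,
                                                   int width, int height) {
        std::vector<Vec2> dir(gray.size());
        auto at = [&](int x, int y) {                       // clamp-to-edge sampling
            x = x < 0 ? 0 : (x >= width  ? width  - 1 : x);
            y = y < 0 ? 0 : (y >= height ? height - 1 : y);
            return gray[y * width + x];
        };
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                // 3x3 Sobel kernels for the horizontal and vertical gradient components
                float gx = -at(x-1,y-1) - 2*at(x-1,y) - at(x-1,y+1)
                           + at(x+1,y-1) + 2*at(x+1,y) + at(x+1,y+1);
                float gy = -at(x-1,y-1) - 2*at(x,y-1) - at(x+1,y-1)
                           + at(x-1,y+1) + 2*at(x,y+1) + at(x+1,y+1);
                float len = std::sqrt(gx*gx + gy*gy);
                // stroke direction: unit vector perpendicular to the gradient
                dir[y * width + x] = (len > 0.f) ? Vec2{ -gy/len, gx/len } : Vec2{ 1.f, 0.f };
            }
        return dir;
    }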

3. THE PROPOSED FRAMEWORK

This section presents the proposed framework in detail; it can generate both photorealistic and artistically styled scenes in a synthesized image. Given a scene model that contains a number of 3D objects, we assume that the objects to be rendered with an artistic style are specified by users in advance. For simplicity, we denote them as NPR objects; in contrast, objects that are not NPR objects are called PR objects.
In our framework, we modify the photon tracing and rendering passes of the original photon mapping algorithm. In addition, we introduce a brand new NPR pass and integrate it with the other two passes. As a result, our framework contains the photon tracing pass, the NPR pass, and the rendering pass. Due to space limits, in the following sections we detail the NPR pass and highlight the modifications we made in the other two passes; detailed descriptions of the photon tracing and rendering passes can be found in the original paper [5] and in Jensen's book [17].

3.1 The Photon Tracing Pass

The objective of this pass is to emit photons from the light sources, trace photons through the model scene, and hold information in the data structure modified for our framework. The flowchart of this pass is shown in Fig. 1. To do so, we consider three processes: photon emission, photon scattering, and photon storing.
The photon emission process emits photons at the light sources in the model scene; this process is the same as in the original photon mapping algorithm. In the photon scattering process, we handle the interaction between a photon and the surface of an object in the scene. This process is slightly different from the original photon mapping algorithm, since there are not only PR objects but also NPR objects. When a photon hits either type of object, it can be reflected, transmitted, or absorbed, depending on the reflection model.

Fig. 1. Flowchart of the photon tracing pass and the rendering pass.

typedef struct enhance_photon {
    // original data structure for photon mapping
    float position[4];
    float incident[4];
    float power[4];
    // records extended in our framework
    float surface_normal[4];
    int   hit_object_id;
    bool  hit_npr;
} enhance_photon;

Fig. 2. We extend three records in the original photon mapping data structure in our framework to hold information for PR and NPR photons.
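As an illustration of how this extended record might be filled during the photon storing process described below, the following is a minimal sketch. The storePhoton() helper, its parameters, and the use of a std::vector as the backing store are assumptions for this sketch, not the authors' implementation; the paper organizes the stored records in a KD-tree.

    #include <vector>

    struct EnhancePhoton {
        float position[4];        // hit point on the surface
        float incident[4];        // incident direction of the photon
        float power[4];           // RGB power (4th entry unused here)
        float surface_normal[4];  // normal at the hit point (extension)
        int   hit_object_id;      // identity of the object that was hit (extension)
        bool  hit_npr;            // true if the object is flagged as an NPR object (extension)
    };

    // Called for every photon-surface interaction on a diffuse surface (a sketch).
    void storePhoton(std::vector<EnhancePhoton>& photonMap,
                     const float pos[3], const float dir[3], const float pow[3],
                     const float normal[3], int objectId, bool objectIsNPR) {
        EnhancePhoton p{};
        for (int i = 0; i < 3; ++i) {
            p.position[i]       = pos[i];
            p.incident[i]       = dir[i];
            p.power[i]          = pow[i];
            p.surface_normal[i] = normal[i];
        }
        p.hit_object_id = objectId;   // lets the NPR pass query material without intersection tests
        p.hit_npr       = objectIsNPR;
        photonMap.push_back(p);       // the paper organizes these records in a KD-tree
    }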

Finally, in the photon storing process, we store photons when they hit diffuse surfaces. This process differs from the original photon mapping algorithm. In particular, we extend the original data structure, as shown in Fig. 2, and organize the records in a KD-tree. This new data structure gathers both PR and NPR photons. For each photon-surface interaction, the first three records store the position on the surface hit by the photon, the incident direction, and the power of the incoming photon, respectively. The fourth record keeps the normal vector of the surface at the intersection point hit by the photon. The fifth record, an integer, keeps the identity of the object hit by the photon; this enables us to directly query an object's material properties without incurring ray-object intersection tests, which will be useful in the NPR pass. The last record is a Boolean flag indicating the type of the photon, NPR or PR.

3.2 The NPR Pass

We introduce a new NPR pass as the second pass, to render an artistic style image in our framework. In the following, we first present an overview of this new NPR pass, and then detail the processes and maps that are employed in it.

3.2.1 An overview of the NPR pass

The objective of the NPR pass is to construct the attributes of strokes, based on which we can render the scene model to produce an artistic style image. The attributes include the stroke type, position, color, direction, and length.

The basic scenario of this pass is to first project photons from the model space onto the screen space. The photon projection approach constructs several maps to represent the attributes of strokes on the screen space.
Fig. 3 illustrates the flowchart of the NPR pass. There are three processes in this pass. First, given the NPR photons stored in the KD-tree structure, we apply the sparse map construction process to build the sparse map. Second, we apply the informative interpolation process to construct the informative map. Finally, we render the informative map using the impressionistic line integral convolution (ILIC) algorithm. The output of the NPR pass is an NPR partial image. This image contains only the NPR objects specified by the user, rendered in an artistic style; this is the reason we call it an NPR partial image. We now detail each process below.

Fig. 3. Flowchart of the NPR pass, in which the sparse and informative maps are generated.

3.2.2 Sparse map construction process

The sparse map construction process creates the sparse map by considering the NPR photons which have “adhered” to the NPR objects. The sparse map is a two-dimensional map whose size is the same as that of the view plane. In this paper, we consider the view plane as the canvas: we “paint” strokes on the canvas to produce a styled image of the NPR objects. To do so, we store a stroke's color, position, and direction in the pixels of the sparse map. We call it the “sparse map” because the number of strokes is likely to be limited; thus, only a small number of pixels will hold stroke information, while the others will be blank. We present a three-step method for constructing the sparse map.

Step 1: NPR Photon Projection and Simplification
We need to identify the pixels in the sparse map associated with the NPR objects. In the first step, we visit every NPR photon and project it onto the sparse map with respect to the viewpoint V. Suppose an NPR photon is located at position x with normal vector N on an NPR object in the three-dimensional model space, as shown in Fig. 4.

Fig. 4. An illustration of photon projection and radiance estimation.


Fig. 5. An illustration of the NPR projection and simplification.

We position the sparse map between the viewpoint and the model space. Then, for each NPR photon, we can find a corresponding pixel Px in the sparse map that is intersected by the projection vector from x toward V. We name this pixel the “source” pixel, and the associated photon is known as the “source” NPR photon. During the projection, we only process NPR photons that can be “seen” from the viewpoint; this is similar to performing back-face culling for a polygonal model. In addition, several NPR photons may be projected onto the same pixel in the sparse map. If this happens, we perform photon simplification: we sort those photons in three-dimensional space by depth and find the NPR photon which is closest to the viewpoint V. The “source” pixel is then tagged with the index of this “source” NPR photon. This step ends when we have processed all the NPR photons and located the “source” pixels in the sparse map.
Fig. 5 illustrates the photon projection and simplification. The normal vectors of the NPR photons located on the bunny model are drawn in blue. After the photon projection, we can locate the “source” pixels in the sparse map; here, three NPR photons are projected into the pixel [3, 2]. After the simplification, however, there is only one “source” NPR photon (in green) associated with pixel [3, 2].

Step 2: Radiance Estimation
In this step, we determine a stroke's color. In particular, we estimate the radiance of each “source” NPR photon and record the result in the “source” pixels of the sparse map. To estimate the radiance of a “source” NPR photon x, we treat the vector from x to Px as the irradiance direction ω (see Fig. 4). We accumulate the radiance at x by considering its neighbouring photons within a given distance. This accumulation is the same as the radiance estimate in the rendering pass of the conventional photon mapping algorithm.

Step 3: Three-dimensional Orientation Field Construction and Projection
We stored a stroke's position in the “source” pixels in the first step, and its color in the second step; thus, all attributes of the strokes are available except the stroke's direction. In the third step, we determine this direction. This is done by first constructing a three-dimensional orientation field and then performing the projection again.
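Before turning to the orientation field, the following is a minimal sketch of the projection and simplification described in Step 1 above. The pinhole camera looking down the -z axis and the SparseEntry layout are illustrative assumptions for this sketch, not the authors' implementation.

    #include <cmath>
    #include <limits>
    #include <vector>
    #include <array>

    using Vec3 = std::array<float, 3>;

    struct SparseEntry {
        bool  isSource = false;                                  // does this pixel hold a "source" photon?
        int   photonIndex = -1;                                  // index of the surviving NPR photon
        float depth = std::numeric_limits<float>::max();         // distance to the viewpoint
    };

    struct Camera {
        Vec3  eye{0.f, 0.f, 0.f};
        float focal = 1.f;       // distance from eye to the view plane
        float planeSize = 1.f;   // half-extent of the square view plane in world units
        int   width = 512, height = 512;

        // Simple pinhole projection looking down the -z axis (an assumption of this sketch).
        bool projectToPixel(const Vec3& p, int& px, int& py) const {
            float zc = eye[2] - p[2];                            // depth in front of the camera
            if (zc <= 0.f) return false;
            float u = (p[0] - eye[0]) * focal / zc;              // view-plane coordinates
            float v = (p[1] - eye[1]) * focal / zc;
            px = static_cast<int>(std::floor((u / planeSize * 0.5f + 0.5f) * width));
            py = static_cast<int>(std::floor((v / planeSize * 0.5f + 0.5f) * height));
            return px >= 0 && px < width && py >= 0 && py < height;
        }
    };

    void buildSparseMap(const std::vector<Vec3>& nprPhotonPositions, const Camera& cam,
                        std::vector<SparseEntry>& sparseMap) {
        sparseMap.assign(cam.width * cam.height, SparseEntry{});
        for (size_t i = 0; i < nprPhotonPositions.size(); ++i) {
            const Vec3& x = nprPhotonPositions[i];
            int px, py;
            if (!cam.projectToPixel(x, px, py)) continue;        // photon not visible on the canvas
            float dx = x[0]-cam.eye[0], dy = x[1]-cam.eye[1], dz = x[2]-cam.eye[2];
            float depth = std::sqrt(dx*dx + dy*dy + dz*dz);
            SparseEntry& e = sparseMap[py * cam.width + px];
            if (depth < e.depth) {                               // photon simplification: keep the nearest one
                e.isSource = true;
                e.photonIndex = static_cast<int>(i);
                e.depth = depth;
            }
        }
    }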

1. The Orientation Field Construction
The three-dimensional orientation field contains a number of oriented vectors, which are tangent to the surface of the NPR object. For each NPR photon, we need to derive an oriented vector that is perpendicular to the normal vector N of this NPR photon at its position x. It is a challenge to construct an oriented vector for every NPR photon; we solve it using interpolation. Our approach, inspired by the vector field creation method [18], contains three procedures. First, we pick several NPR photons at different locations using a user interface we built; we call these NPR photons the “seed” photons. In the second procedure, for each “seed” photon, say x with normal vector N, a user specifies a direction vector D along the surface of the NPR object through the same user interface. Given N and D, we determine the oriented vector T using the cross product twice: T = N × (N × D). In the final procedure, once the oriented vectors are settled for all “seed” NPR photons, we interpolate an oriented vector for every non-seed NPR photon on the surface. Through this three-step approach, we semi-automatically create an oriented vector for every NPR photon. In addition, our approach preserves the coherence of the oriented vectors, which ensures that the stroke attributes built from the orientation field possess coherent features.

2. The Oriented Vector Interpolation
The remaining problem is how to interpolate an oriented vector for a non-seed NPR photon, given the oriented vectors available at the “seed” NPR photons. In this paper, we employ the radial basis function (RBF) [19], because an RBF offers a compact functional description of a set of oriented vectors on a surface, and interpolation is inherent in the functional representation. In our case, we visit every NPR photon, find at least two “seed” NPR photons neighbouring the current NPR photon, and then interpolate to determine its oriented vector. In particular, we first collect the Q NPR photons nearest to the current photon and check whether they contain at least two “seed” photons. If not, we increase Q and repeat the collection. In our implementation, setting Q = 20~30 suffices to find at least two “seed” NPR photons adjacent to the photon currently being visited. Once these “seed” NPR photons are available, we perform the interpolation.
Fig. 6 visualizes the 3D orientation field for the teapot model. Fig. 6 (a) illustrates the oriented vectors (in red) for the “seed” NPR photons specified by the user on the surface. Fig. 6 (b) shows a number of oriented vectors created using our approach; clearly, these oriented vectors preserve the directional coherence along the surface of the teapot. In contrast, Fig. 6 (c) demonstrates a 2D directional field. As expected, the directional vectors generated appear disorderly, because this approach does not take into account the 3D geometric information present in the teapot model. Consequently, these directional vectors are unable to represent the directional nature along the surface of the teapot.
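A minimal sketch of the double cross product used to derive the oriented vector for a “seed” photon is given below. The normalisation step is an added assumption (the paper only states the double cross product), and the helper names are illustrative.

    #include <cmath>
    #include <array>

    using Vec3 = std::array<float, 3>;

    static Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a[1]*b[2] - a[2]*b[1],
                 a[2]*b[0] - a[0]*b[2],
                 a[0]*b[1] - a[1]*b[0] };
    }

    static Vec3 normalize(const Vec3& v) {
        float len = std::sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        if (len == 0.f) return {0.f, 0.f, 0.f};
        return { v[0]/len, v[1]/len, v[2]/len };
    }

    // T = N x (N x D) is perpendicular to N, i.e. it lies in the tangent plane of the
    // surface; note that it points opposite to the tangential component of D.
    Vec3 orientedVector(const Vec3& N, const Vec3& D) {
        return normalize(cross(N, cross(N, D)));
    }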

Fig. 6. Visualizing and comparing the 3D orientation field for the teapot model: (a) oriented vectors at the “seed” NPR photons; (b) the interpolated 3D orientation field; (c) a 2D directional field.

3. The Oriented Vector Projection
Here, we determine a stroke's directional vector by conveying the 3D orientation field to the sparse map. In particular, we project the oriented vector T associated with every NPR photon x into the sparse map with respect to the viewpoint V. After the projection, we obtain a stroke's directional vector T′ on the sparse map. The oriented vector projection also ends the sparse map construction process.
Fig. 7 shows the colors of the “source” pixels in the sparse map for the bunny model, where we use white for the background; the bunny is inside a box with three walls (without the front wall). Note that white pixels inside the bunny silhouette indicate that no “source” NPR photon was recorded there. As expected, there are not many “source” pixels in the sparse map. We can also observe that the forehead of the bunny is directly under the light source, so it is brighter than other regions, since a great number of NPR photons are recorded there. In contrast, the bottom of the rear leg is darker, indicating few NPR photons. This is because the scene has no front wall, so very few photons are reflected toward the rear leg area.

3.2.3 The informative interpolation process

The second process in the NPR pass is the informative interpolation process. In this process, we apply an interpolation technique to the sparse map in order to create the informative map. We intend to “fill in” the “non-source” pixels by interpolation, given the attributes stored in the “source” pixels, which include the positions, colors, and directions of strokes. Once we complete the informative interpolation, we call the resulting map the informative map.
The informative interpolation process visits every pixel in the sparse map. When the pixel being visited is not a “source” pixel, we apply the interpolation. In particular, we position a virtual circle with radius R centered at the current pixel being visited; the left diagram in Fig. 5 illustrates this process. Inside this virtual circle, we interpolate colors and directional vectors using a weighted sum of those stored in the “source” pixels, where the weights are inversely proportional to the square of the distance between the current pixel of interest and the “source” pixels. The radius R influences the interpolation result; we use radii of 5-10 pixels in our implementation. When there are no “source” pixels inside the virtual circle, we assign a random color and direction to the pixel of interest.
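The following is a minimal sketch of this informative interpolation for the color attribute, following the text above: non-“source” pixels are filled from the “source” pixels inside a circle of radius R with weights proportional to the inverse squared distance, and a random color is assigned when no “source” pixel is found. The RGB and Pixel types are assumptions of this sketch; stroke directions would be interpolated in the same way.

    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct RGB { float r = 0.f, g = 0.f, b = 0.f; };

    struct Pixel {
        bool isSource = false;
        RGB  color;
        // the stroke direction would be interpolated the same way; omitted for brevity
    };

    void informativeInterpolation(std::vector<Pixel>& map, int width, int height, int R) {
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                Pixel& p = map[y * width + x];
                if (p.isSource) continue;                           // keep "source" pixels untouched
                RGB sum; float wsum = 0.f;
                for (int dy = -R; dy <= R; ++dy)                    // scan the virtual circle of radius R
                    for (int dx = -R; dx <= R; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        float d2 = float(dx*dx + dy*dy);
                        if (d2 == 0.f || d2 > float(R*R)) continue;
                        if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                        const Pixel& q = map[ny * width + nx];
                        if (!q.isSource) continue;
                        float w = 1.f / d2;                         // weight ~ 1 / distance^2
                        sum.r += w * q.color.r; sum.g += w * q.color.g; sum.b += w * q.color.b;
                        wsum += w;
                    }
                if (wsum > 0.f) {
                    p.color = { sum.r / wsum, sum.g / wsum, sum.b / wsum };
                } else {                                            // no source pixel inside the circle
                    p.color = { std::rand() / float(RAND_MAX),
                                std::rand() / float(RAND_MAX),
                                std::rand() / float(RAND_MAX) };
                }
            }
    }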

Fig. 7. Displaying colors in the sparse map for the bunny model.

Fig. 8. An illustrative image after the informative interpolation process.

Fig. 9. An image rendered by the ILIC algorithm using colors and projected oriented vectors stored in the informative map.

Fig. 8 shows a resultant image after the informative interpolation. Comparing Figs. 7 and 8, we can visualize the effects of the informative interpolation: the bunny body is clearer after the interpolation. 3.2.4 The ILIC rendering process

We now move to the third process, the impressionistic line integral convolution (ILIC) rendering process, in which we employ the ILIC algorithm for rendering. In order to enhance the artistic effect, we first add random noise to perturb the colors stored in the informative map. Then, we smooth the directional vectors twice with Gaussian filters, using window sizes of 15 × 15 and 7 × 7 pixels. Finally, we specify the length of the strokes, and the ILIC algorithm determines the path of the integral, which follows the directional vector from the current pixel to the next pixel. The ILIC algorithm renders every pixel as a weighted sum of the pixel colors along the path. As a result, we produce the NPR partial image after this process.
Fig. 9 shows the NPR partial image produced by applying the ILIC algorithm to the informative map. Observing this image closely, we find that the directions of the strokes are consistent over the surface of the bunny's body; this is because we convey 3D orientation vectors to the informative map. The bunny's body is also clearer than in Fig. 8 after the ILIC rendering, since the value at each pixel is a weighted sum along the path of the line integral convolution. The ILIC rendering is the final step in the NPR pass of our framework. In the next section, we describe the third pass, the rendering pass.
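A minimal line-integral-convolution sketch in the spirit of this rendering step is given below: starting at each pixel, the (already smoothed) direction field is followed for a given stroke length in both directions and the colors met on the path are averaged. The noise injection, stratified sampling, and weighting function of the full ILIC algorithm [15, 16] are omitted; uniform weights and the type names are assumptions of this sketch.

    #include <vector>

    struct RGB { float r = 0.f, g = 0.f, b = 0.f; };
    struct Dir { float x = 1.f, y = 0.f; };   // unit stroke direction per pixel

    std::vector<RGB> licRender(const std::vector<RGB>& color, const std::vector<Dir>& dir,
                               int width, int height, int length) {
        std::vector<RGB> out(color.size());
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                RGB sum = color[y * width + x];                 // include the pixel itself once
                int count = 1;
                for (int sign = -1; sign <= 1; sign += 2) {     // walk both ways along the stroke
                    float px = float(x), py = float(y);
                    for (int s = 1; s < length; ++s) {
                        int ix = int(px), iy = int(py);
                        const Dir& d = dir[iy * width + ix];    // step along the local direction first
                        px += sign * d.x;
                        py += sign * d.y;
                        ix = int(px); iy = int(py);
                        if (ix < 0 || ix >= width || iy < 0 || iy >= height) break;
                        const RGB& c = color[iy * width + ix];
                        sum.r += c.r; sum.g += c.g; sum.b += c.b; ++count;
                    }
                }
                out[y * width + x] = { sum.r / count, sum.g / count, sum.b / count };
            }
        return out;
    }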

3.3 The Rendering Pass

The rendering pass is the final pass in our framework. Rendering proceeds pixel by pixel through the rays cast for spatial enquiry, and we determine the type of each pixel before the ray casting. This means that we do not cast any sample rays for “source” pixels of the informative map; instead, we simply retrieve the pixel values that have already been computed and stored in the NPR partial image. During the ray casting, we perform the radiance estimation when a ray hits a PR object by collecting a number of photons near the intersection point. Collecting these photons is efficient, since we built the photon KD-tree in the photon tracing pass, enabling fast searching.
Two rendered images produced by our framework from different viewpoints for the bunny and bunny_1 models are shown in Fig. 10. The ILIC algorithm renders images with impressionistic effects, where the stroke directions are consistent with the surface of the bunny's body. The effect of color bleeding, an important feature of global illumination, is visible on the floor near the green wall. Finally, soft shadows are also visible on the floor and walls. These images demonstrate that global illumination effects and painterly style effects can coexist in an image rendered by our framework. In the next section, we present experimental results and some more rendered images.
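A minimal sketch of the per-pixel logic of this pass follows: pixels already covered by the NPR partial image are copied as-is, while all other pixels are shaded by gathering the stored photons near the visible point and applying a simple density estimate. The linear photon scan stands in for the KD-tree search, and castRay() is a placeholder for the ray casting that finds the visible point; none of these names are the authors' code.

    #include <cmath>
    #include <functional>
    #include <vector>
    #include <array>

    using Vec3 = std::array<float, 3>;
    struct RGB { float r = 0.f, g = 0.f, b = 0.f; };

    struct StoredPhoton { Vec3 position; RGB power; };

    // Gather photons within 'radius' of the visible point x and divide by the disc area
    // (a simplification of Jensen's radiance estimate).
    RGB estimateRadiance(const Vec3& x, const std::vector<StoredPhoton>& photons, float radius) {
        RGB sum; float r2 = radius * radius;
        for (const StoredPhoton& p : photons) {
            float dx = p.position[0]-x[0], dy = p.position[1]-x[1], dz = p.position[2]-x[2];
            if (dx*dx + dy*dy + dz*dz <= r2) { sum.r += p.power.r; sum.g += p.power.g; sum.b += p.power.b; }
        }
        float area = 3.14159265f * r2;
        return { sum.r / area, sum.g / area, sum.b / area };
    }

    std::vector<RGB> renderingPass(const std::vector<RGB>& nprPartialImage,
                                   const std::vector<bool>& coveredByNPR,   // pixels handled in the NPR pass
                                   const std::vector<StoredPhoton>& photonMap,
                                   int width, int height, float gatherRadius,
                                   const std::function<Vec3(int, int)>& castRay) {
        std::vector<RGB> image(width * height);
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x) {
                int idx = y * width + x;
                if (coveredByNPR[idx]) {
                    image[idx] = nprPartialImage[idx];          // reuse the precomputed painterly value
                } else {
                    Vec3 hit = castRay(x, y);                   // visible point on a PR object
                    image[idx] = estimateRadiance(hit, photonMap, gatherRadius);
                }
            }
        return image;
    }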

Fig. 10. Artistic style images rendered by our framework for the bunny model (left) and the bunny_1 model (right) using different viewpoints.

4. EXPERIMENTAL RESULTS

We developed our framework in the C++ programming language on a personal computer with a 3.0 GHz processor and 1.0 GB of main memory. To demonstrate the power of photon mapping, all test models are positioned in a box (width = 10, depth = 20, height = 10) without the front wall. For simplicity, a single hemispherical light source is located on the rear half of the ceiling.
Table 1 lists statistics for the eight test models. All models were obtained from the Internet and have various polygon counts. The models are scaled and placed near the middle of the box floor. The key object in each model is the NPR object, and the others are PR objects. We use different values of the probability of specular reflection (PSR) and the probability of absorption (PA). These values affect the number of NPR photons adhering to the NPR objects (NPR Photons) and the number of “source” pixels (“Source” Pixels) in the sparse map.

Table 1. Statistics for eight test models.
Model Number | Model Name | Polygon Number | PSR | PA  | NPR Photons | “Source” Pixels | Interpol. Radius | ILIC Length
1            | Bunny      |   1,979        | 0.2 | 0.7 |  58,814     |  33,410         |  5               | 20
2            | Bunny_1    |   1,979        | 0.2 | 0.7 |  58,814     |  47,175         |  5               | 20
3            | Teapot     |     604        | 0.1 | 0.7 |  45,189     |  10,918         |  5               | 10
4            | Athena     |   1,121        | 0.2 | 0.9 |  10,096     |   8,730         |  5               | 20
5            | Athena_1   |   1,121        | 0.2 | 0.9 |  10,096     |   8,636         | 10               | 20
6            | Cat        |     671        | 0.2 | 0.9 |   9,240     |   7,954         |  5               | 20
7            | Cat_1      |     671        | 0.2 | 0.9 |   9,240     |   8,134         | 10               | 20
8            | Armadillo  | 345,944        | 0.1 | 0.9 |  33,776     |  79,746         |  5               | 20

[Fig. 11 panels: painterly rendered Athena_1; grayscale Armadillo; painterly rendered Cat_1; painterly rendered Armadillo]
Fig. 11. Painterly styled images rendered by our framework.

To construct an informative map from the sparse map, we use radii of 5 or 10 pixels in the informative interpolation process. Finally, we employ a length of either 10 or 20 pixels in the impressionistic line integral convolution algorithm.
Fig. 11 shows close views of three painterly rendered images for the Athena_1, Cat_1, and Armadillo models produced by our framework. In particular, the Armadillo model and its grayscale image were retrieved from the Stanford 3D scanning repository (http://www-graphics.stanford.edu/data/3Dscanrep/); the model contains 172,974 vertices and 345,944 triangles. Observing these images, the effect of the ILIC algorithm is prominent on the surfaces of these NPR objects. In particular, our framework enables us to paint strokes along the curvature of the surface of these 3D objects. Moreover, these curved strokes remain consistent and coherent when viewed from different viewpoints. Again, this is because we make use of the 3D information exploited by the NPR photons in the NPR pass.
We performed two experimental tests to compare images produced by two different approaches using our framework. The first approach is referred to as “Rendered-First-ILIC-Last.”

In particular, we first render the bunny model using the conventional photon mapping algorithm. This produces a photorealistic image consisting of the bunny area and the background area. Then, we compute the gradient vector field over the bunny area and apply the ILIC algorithm exclusively to that area, with an ILIC length of 20 pixels. The first approach produces the image shown in Fig. 12 (a). Observing Fig. 12 (a), the stroke directions are chaotic: strokes are inconsistent along the curvature of the surface. This is because this approach does not take advantage of 3D information when applying the ILIC algorithm. Certainly, Fig. 12 (a) cannot reflect any curvature present in the 3D bunny model; consequently, Fig. 12 (a) is less interesting than the images shown in Fig. 10.
In the second approach, we focus on 3D texture mapping using an image processed by the ILIC algorithm; we refer to it as “ILIC-First-Texture-Mapping-Last.” In particular, we apply the ILIC algorithm to the image shown in Fig. 12 (c), employing a larger ILIC length of 60 pixels. This produces the ILIC image shown in Fig. 12 (d). Then, we use this processed image as the texture image and perform 3D texture mapping on the bunny model. Fig. 12 (b) shows the result of the second approach. Observing Fig. 12 (b), we find that although this image is visually plausible, it is NOT produced by a 3D painterly rendering technique; instead, it is a 3D texture mapping result. Consequently, in comparison with the images shown in Fig. 10, which are produced by the 3D painterly rendering technique, the image shown in Fig. 12 (b) is considered far from satisfactory. This comparison reflects the power of our framework, which supports 3D painterly rendering and produces visually plausible painterly results by fully exploiting the information present in the 3D models.

(a) (b) (c) (d) Fig. 12. A comparison of two images produced by two different approaches using our framework.

Fig. 13 compares the photorealistic image synthesized by the photon mapping algorithm with an artistic style image rendered in our framework, where the teapot is the NPR object. Global illumination effects are noticeable in both figures, as illustrated by the soft shadows on the floor around the teapot and the caustics under the transparent sphere. On the other hand, we can also see the impressionistic painting style on the surface of the teapot, thanks to the framework we propose. This comparison indicates that our framework can easily produce both photorealistic and artistically styled appearances simultaneously.

Fig. 13. The image rendered by the photon mapping algorithm (left) and that by our framework (right) with impressionistic effect on the surface of the teapot.

[Fig. 14 panel settings (interpolation radius IR, ILIC length, Gaussian filter window sizes GF):
{IR = 2, Length = 10, GF = (15 × 15, 7 × 7)}    {IR = 2, Length = 20, GF = (21 × 21, 11 × 11)}    {IR = 2, Length = 30, GF = (31 × 31, 15 × 15)}
{IR = 15, Length = 10, GF = (15 × 15, 7 × 7)}   {IR = 15, Length = 20, GF = (21 × 21, 11 × 11)}   {IR = 15, Length = 30, GF = (31 × 31, 15 × 15)}]

Fig. 14. Illustrating different styles of rendering results using various interpolation radii, ILIC lengths, and window sizes for the Gaussian filters.

Fig. 14 illustrates various artistic style images obtained by changing the rendering parameters, including the interpolation radius (IR), the ILIC length (Length), and the window sizes (in pixels) of the two applications of the Gaussian filter (GF). We observe that a sufficiently large radius makes the rendered images more colorful. A short ILIC length makes the images appear disordered, whereas a long length makes them appear smooth. Given different parameters, our framework is thus very flexible in generating different styles of 3D painterly rendering.
Our framework contains three passes. The most computationally expensive step is the photon tracing pass; this bottleneck, however, is similar to that of the conventional photon mapping algorithm.

Including the photon tracing pass, it can take 3-6 hours to render an image with a resolution of 512 × 512 pixels. Nevertheless, the reward for the long computation time is that we can produce global illumination effects that are visually plausible.

5. CONCLUSIONS AND FUTURE WORK

In this paper, we have presented a novel framework for 3D painterly rendering using photon mapping. Our framework integrates an NPR engine within the standard photon mapping pipeline. In particular, we design and develop a novel NPR pass, which constructs the sparse map and the informative map in order to determine the attributes of strokes required for 3D painterly rendering. A modified data structure is introduced to store the information carried by the NPR photons during photon propagation. We make use of the photon mapping rendering processes to produce a photorealistic image with global illumination effects including color bleeding and soft shadows; our framework thus keeps the advantage of photon mapping for producing high-quality images. For 3D painterly rendering, we employ an impressionistic line integral convolution algorithm, which provides various types of painterly rendering results.
In conclusion, our framework weaves the NPR pass into the original photon mapping algorithm. This integration makes it possible to produce artistic-style images within a photorealistic rendering environment. Experimental results demonstrate the feasibility of our framework not only for 3D painterly rendering but also for 3D texture mapping. To the best of our knowledge, our framework is the first that is capable of taking advantage of both global illumination and non-photorealistic rendering. Our framework is simple, independent, and yet powerful; this is due to the individual NPR pass module we designed, which can independently handle complex painterly rendering issues. Given a 3D model, our framework can mimic the interaction between light and the 3D objects and convey information through an artistic-style appearance. Future research will be conducted to support more artistic styles of rendering, such as cartoon and pencil styles of painting. Our framework could also take advantage of GPU computing or acceleration techniques [20] to improve performance.

ACKNOWLEDGEMENT We thank Dr. Evelyne Stitt Pickett for her assistance in improving the clarity of this article. The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

REFERENCES
1. A. Lu, C. J. Morris, J. Taylor, D. S. Ebert, C. Hansen, P. Rheingans, and M. Hartner, “Illustrative interactive stipple rendering,” IEEE Transactions on Visualization and Computer Graphics, Vol. 9, 2003, pp. 127-138.
2. A. Hertzmann, “A survey of stroke-based rendering,” IEEE Computer Graphics and Applications, Vol. 23, 2003, pp. 70-81.

3. M. T. Chi and T. Y. Lee, “Stylized and abstract painterly rendering system using a multi-scale segmented sphere hierarchy,” IEEE Transactions on Visualization and Computer Graphics, Vol. 12, 2006, pp. 61-72.
4. H. W. Kang, W. He, C. K. Chui, and U. K. Chakraborty, “Interactive sketch generation,” The Visual Computer, Vol. 21, 2005, pp. 821-830.
5. H. W. Jensen, “Global illumination using photon maps,” in Proceedings of International Conference on Rendering Techniques, 1996, pp. 21-30.
6. T. Whitted, “An improved illumination model for shaded display,” Communications of the ACM, Vol. 23, 1980, pp. 343-349.
7. J. L. Bentley, “Multidimensional binary search trees used for associative searching,” Communications of the ACM, Vol. 18, 1975, pp. 509-517.
8. B. J. Meier, “Painterly rendering for animation,” in Proceedings of ACM SIGGRAPH, 1996, pp. 477-484.
9. K. Hiroaki, G. Alexandre, and K. Takashi, “Interactive point-based painterly rendering,” in Proceedings of International Conference on Cyberworlds, 2004, pp. 293-299.
10. A. Kolliopoulos, J. M. Wang, and A. Hertzmann, “Segmentation-based 3D artistic rendering,” in Proceedings of EUROGRAPHICS Symposium on Rendering, 2006, pp. 361-370.
11. T. Y. Lee, C. R. Yan, and M. T. Chi, “Stylized rendering for anatomic visualization,” Computing in Science and Engineering, Vol. 9, 2007, pp. 13-19.
12. T. Y. Lee, C. H. Lin, S. W. Yen, and H. J. Chen, “A natural pen-and-paper like sketching interface for modeling and animation,” in Proceedings of International Conference on Computer Animation and Social Agents, 2007, pp. 87-92.
13. C. R. Yen, M. T. Chi, T. Y. Lee, and W. C. Lin, “Stylized rendering using samples of a painted image,” IEEE Transactions on Visualization and Computer Graphics, Vol. 14, 2008, pp. 468-480.
14. B. Cabral and L. C. Leedom, “Imaging vector fields using line integral convolution,” in Proceedings of SIGGRAPH, 1993, pp. 263-270.
15. C. M. Wang and J. S. Lee, “Using ILIC algorithm for an impressionist effect and stylized virtual environments,” Journal of Visual Languages and Computing, Vol. 14, 2003, pp. 255-274.
16. C. M. Wang and J. S. Lee, “Non-photorealistic rendering for aesthetic virtual environments,” Journal of Information Science and Engineering, Vol. 20, 2004, pp. 923-948.
17. H. W. Jensen, Realistic Image Synthesis Using Photon Mapping, A. K. Peters, Natick, 2001.
18. G. Turk, “Texture synthesis on surfaces,” in Proceedings of ACM SIGGRAPH, 2001, pp. 347-354.
19. J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans, “Reconstruction and representation of 3D objects with radial basis functions,” in Proceedings of ACM SIGGRAPH, 2001, pp. 67-76.
20. J. Steinhurst, G. Coombe, and A. Lastra, “Reducing photon-mapping bandwidth by query reordering,” IEEE Transactions on Visualization and Computer Graphics, Vol. 14, 2008, pp. 13-24.

Meng-Tsan Li (李孟燦) is a Ph.D. candidate at the Institute of Computer Science, National Chung Hsing University (NCHU), Taiwan. He received his B.S. degree in Computer Science and Information Engineering from National Central University in 2000. He then entered the master's program at NCHU and was allowed to proceed directly to Ph.D. studies in 2001. His research interests include global illumination in computer graphics and painterly rendering.

Chung-Ming Wang (王宗銘) is a professor at the Institute of Computer Science, National Chung Hsing University, Taiwan. Wang received his B.S. from National Chung Hsing University and then worked in industry for several years. He received his Ph.D. degree in Computer Science and Engineering from the University of Leeds, United Kingdom, in 1992. Dr. Wang has won the Dragon Thesis Award, funded by Acer, several times. His research interests include computer graphics, virtual reality, and multimedia. Dr. Wang is a member of ACM, IEEE, and Eurographics.
