Rendering with Photon Maps

Rendering with Photon Maps Roland Schregle

Diploma Thesis University of Bonn Department of Computer Science 7th May 1998

Abstract

The presentation of a recent addition to the suite of popular global illumination rendering methods is the focus of this thesis. The photon map combines a particle transport simulation with a density estimate, and is characterized by its independence of the scene geometry and the ability to efficiently model diffuse global illumination and caustics. Its embedding in existing ray tracing environments is straightforward; an implementation of the photon map and its integration into the Minimal Rendering Toolkit (MRT) is described in detail.

Declaration

I hereby certify that I have written the following thesis, "Rendering with Photon Maps", independently and have used no sources or aids other than those cited.

Bonn, 7 May 1998

Roland Schregle


Acknowledgements Special thanks go to the following individuals: Prof. Dieter Fellner for his enthusiasm and support; Henrik Wann Jensen for putting up with my nagging questions via email; Gordon Müller for his superb debugging skills, collaboration in design issues, and for giving this thesis the once over; Stephan Schäfer for his invaluable help and suggestions, and yet another perusal; and ”the computer artist formerly known as Greg Ward” for his help with the tone mapping. They’re now listed in my Book of Cool Guys. Also thanks to triton.informatik.uni-bonn.de for completing the renderings within the deadline!


Contents

1 Introduction
  1.1 The Global Illumination Problem
  1.2 Chapter Overview

2 Global Illumination Basics
  2.1 Radiometric Terms
    2.1.1 Flux
    2.1.2 Flux Density
    2.1.3 Solid Angle
    2.1.4 Radiance
  2.2 The BRDF
  2.3 The Rendering Equation
  2.4 Monte Carlo Integration
  2.5 Informed Monte Carlo: Importance Sampling
    2.5.1 The PDF
    2.5.2 Global Illumination with Importance Sampling

3 Overview of the Photon Map
  3.1 Existing Methods
  3.2 The Photon Map
    3.2.1 Effects Covered by the Photon Map
      3.2.1.1 Direct Illumination
      3.2.1.2 Global Diffuse Illumination
      3.2.1.3 Caustics
    3.2.2 Light Pass and Visualization Pass

4 Data Structures
  4.1 Shadow Photons
  4.2 Global and Caustic Photons
  4.3 Spatial Data Structure
    4.3.1 Properties
    4.3.2 Queries

5 The Light Pass
  5.1 Distribution Basics
  5.2 Shadow Photon Distribution
  5.3 Global Photon Distribution
  5.4 Caustic Photon Distribution

6 The Visualization Pass
  6.1 Illumination Components
  6.2 Shadow Photon Visualization
    6.2.1 Accurate Solution
    6.2.2 Approximate Solution
  6.3 Caustic Photon Visualization
    6.3.1 Radiance Estimate
  6.4 Global Photon Visualization
    6.4.1 Accurate Solution
      6.4.1.1 Importance Sampling with Global Photons
      6.4.1.2 Verifying the PDF
    6.4.2 Approximate Solution

7 Implementation
  7.1 The Minimal Rendering Toolkit
  7.2 Area Light Sources
    7.2.1 Modified Light Source Interface
  7.3 Implementing the Photon Map
    7.3.1 Embedding the Photon Map
    7.3.2 The kd-Tree
    7.3.3 Optimizations
      7.3.3.1 Level of Detail
      7.3.3.2 Irradiance Cache
      7.3.3.3 Restricted Search Space
      7.3.3.4 Caustics Filter
      7.3.3.5 Accelerated Photon Distribution with BReps
  7.4 Tone Mapping

8 Results
  8.1 Test Images
  8.2 Statistics
  8.3 Parametrization

9 Conclusion and Future Work

A Parameters for t_PhotonMap

Chapter 1

Introduction

Computer graphics remains one of the most demanding applications of computer hardware. The rapid development of high performance hardware coupled with the decline in production cost has stimulated the dissemination of computer graphics and made it accessible to the masses. In particular, the synthesis of photorealistic images to model "real world" scenes as captured by a camera is a complex and time consuming process, and this field has also evolved with the hardware. The first of these photorealistic methods was Ray Tracing [Whi80]. The basic ingredients are a scene consisting of a collection of geometric objects with discrete surface characteristics, a simplified pinhole camera model, and a set of light sources. The light sources are points in 3D space which emit light energy or intensity (the units are typically arbitrary) radially in all directions. A light source is considered to be shadowed if it is obstructed by an object as seen from a point in the scene. The camera consists of a projection plane and a center of projection (or eye point, corresponding to the viewer's position). The projection plane is rastered to provide the pixels for the 2D representation of the scene. Each object's surface is characterized by a set of coefficients which quantify its diffuse and specular reflectivity and/or transmissivity. The diffuse components of a surface consist of scattered, weakly directional reflection and transmission. Surfaces having only diffuse properties are termed ideal diffuse or Lambertian. The specular components consist of highly directional reflection and transmission, giving rise to specular highlights as seen on polished surfaces with varying degrees of "sharpness". These surface parameters correspond to the Phong Illumination Model [BT75] which is employed in standard ray tracing. The ray tracing algorithm emits primary rays from the eye point into the scene, which pass through pixels in the rastered projection plane (figure 1.1).
Each of these rays is examined for an intersection with one of the objects in the scene, and if one exists, the reflected and/or transmitted light intensity at the closest intersection point is determined according to Phong’s model. Although ray tracing is not restricted to this particular model, it is simple and intuitive and hence widely employed in standard ray tracing implementations. The model comprises local or direct illumination from the light sources in the scene (shadows, diffuse reflection/transmission from the light sources and specular highlights), and global or indirect illumination consisting of a specularly reflected and/or transmitted secondary ray. The intensities conveyed by the latter are determined by recursively applying the algorithm, with the intersection point as the eye point. In practice, this recursion is limited since the number of traced rays increases exponentially with the number of recursion levels and the contributions of rays from subsequent levels diminish. The intensity obtained at the top level then corresponds to that arriving at the pixel in the projection plane.


Figure 1.1: The basic ray tracing mechanism. The blue rays converging at the eye point define the field of view. (Labeled elements: light source, projection plane, eye point, primary ray, intersection, shadow feeler, reflected and transmitted secondary rays.)

1.1 The Global Illumination Problem

The standard ray tracing method suffers from a major deficit: it does not consider all possible paths via which light can travel from the light sources to the eye point whose illumination is under consideration. Taking all these light transport paths into account is the concept of global illumination. These paths can be classified according to the surface characteristics (D for diffuse, S for specular) of the objects which convey them from the light source L to the eye point E [Hec91]. The set of all possible paths can be expressed as:

    L → (S|D)* → E

While standard ray tracing considers specular indirect illumination, possibly with one intervening diffuse reflection/transmission, i.e. L → [D →] S+ → E paths, this is insufficient for true photorealism, particularly since only the perfectly reflected ray is traced. The diffuse indirect components, L → D → D+ → E, are ignored. Standard ray tracing methods attempt to compensate for this problem by adding a constant ambient illumination term to the intensities obtained from the Phong model; however, this is unacceptable for professional applications which require fastidious realism.

Several global illumination methods have evolved as the quest for visual realism continues, some of them extensions to the original ray tracing method, while others use a different approach. These methods have only become possible with the availability of high performance computer hardware, since their requirements are substantially higher than those of standard ray tracing. In return, the obtained results more closely resemble real world imagery.

1.2 Chapter Overview

Chapter 2 introduces global illumination basics such as radiometric terminology required for further discussion. Surface properties are generalized via the bidirectional reflection distribution function, and the rendering equation is presented as a mathematical formulation of the global illumination problem.


This forms the basis for Monte Carlo methods, which are introduced subsequently. This concept is in turn refined in the form of informed Monte Carlo methods, with particular emphasis on importance sampling.

Chapter 3 introduces the photon map and compares it to other global illumination methods. The capabilities of the method and an overview of the two major steps are presented.

Chapter 4 discusses the data structures required for the photon map, and their desired properties.

Chapter 5 describes in detail the first step of the photon map method, the photon distribution, or light pass.

Chapter 6 focuses on the second step, the photon gathering process, and describes in detail how radiance information is extracted from the photon map.

Chapter 7 delves into the intricacies of implementing the photon map. The embedding of the method in the Minimal Rendering Toolkit within a C++ class hierarchy is described, together with several other peripheral modifications to supplement the method. Optimization is also considered.

Chapter 8 presents the results we've all been waiting for. A series of test renderings together with their statistics is presented, and the effects of parametrization are discussed.

Chapter 9 concludes the thesis with an outlook on the future of the photon map.

Chapter 2

Global Illumination Basics

The fundamental quantities and tools employed in the field of radiometry to simulate light transport are defined in this chapter. These are the prerequisites for any discussion concerning global illumination and will be referred to extensively in this document. Furthermore, Monte Carlo methods, which have become the staple global illumination extension to ray tracing, are introduced. An excellent comprehensive survey of global illumination concepts can be found in [Laf96].

2.1 Radiometric Terms

Radiometric terms [IES86, NRH+ 77, SH92] are the conventions by which physical quantities pertaining to light transport may be measured. These are actually functions of wavelength, time, position, direction, and polarization. When applied to computer graphics, these terms are usually simplified by excluding the dependency on time and polarization. Rather than considering all possible wavelengths in the visible spectrum, these are often narrowed down to three primary colors, typically red, green, and blue.

2.1.1 Flux

The basic unit quantifying energy is the joule (J), denoted Q. When discussing moving energy such as electromagnetic radiation (including light), the basic unit becomes J/s, or watts (W). This is termed the radiant flux Φ. It describes the energy passing through an arbitrary surface per unit time.

    Radiant flux Φ = dQ/dt   [W = J/s]

2.1.2 Flux Density

To quantify the flux entering or leaving an area, the concept of flux density is required. This is the flux per unit surface area, measured in watts per square meter. In particular, the flux density incident upon a surface is termed the irradiance E.

    Irradiance E = dΦ/dA   [W/m²]

2.1.3 Solid Angle

The solid angle ω is the 3D extension of the familiar 2D concept of angle. A 2D angle is subtended by two rays intersecting a circle with their common origin at the circle's center. The magnitude of the angle then corresponds to the ratio of the arc length intercepted by the two rays to the circle's radius. By the same token, a 3D solid angle is formed by subtending a cone in a sphere, with the cone's apex


coinciding with the sphere's center. The magnitude of the solid angle is the ratio of the sphere's surface area intercepted by the cone to the squared radius of the sphere¹, and is measured in steradians (sr). In vector form, the solid angle ~ω generalizes an arbitrary vector representing a direction into a set of directions around this vector. In global illumination, solid angles are often considered with respect to hypothetical illuminated hemispheres on a surface centered at a point. A differential solid angle d~ω can then be regarded as the set of directions within a cone emanating from, or converging at, the hemisphere's center. Since the surface area of such a hemisphere is 2πr², the maximum solid angle is 2πr²/r² = 2π sr.

2.1.4 Radiance

The most important radiometric term is the radiance L. This expresses the flux arriving at or leaving from a surface per unit solid angle and per unit projected (or foreshortened) area. The latter refers to the projection of the surface onto the plane perpendicular to the direction of flux propagation. It is measured in watts per steradian per square meter.

    Radiance L = d²Φ / (dA d~ω) = dE / d~ω   [W / (sr · m²)]

2.2 The BRDF

The bidirectional reflectance distribution function fr, or BRDF, provides a general mechanism for describing arbitrary surface properties. Given a point ~x, an incident direction ~Ψi, and a direction of reflection ~Ψr, the BRDF fr(~x, ~Ψi, ~Ψr) specifies a measure of the amount of radiance incident on ~x along ~Ψi that is reflected along ~Ψr. It is formally defined as the ratio of the reflected radiance Lr(~x, ~Ψr) to the irradiance E(~x, ~Ψi):

    fr(~x, ~Ψi, ~Ψr) = Lr(~x, ~Ψr) / E(~x, ~Ψi) = dLr(~x, ~Ψr) / (Li(~x, ~Ψi) |~Ψi · ~N~x| d~ωi),

where:

- ~N~x is the surface normal at point ~x.
- d~ωi is a differential solid angle around ~Ψi.

A physically plausible BRDF must meet the following requirements:

Helmholtz reciprocity: the reflectance is invariant with respect to inversion of the direction of light transport, i.e. fr(~x, ~Ψi, ~Ψr) = fr(~x, −~Ψr, −~Ψi).

Conservation of energy: the entire radiance leaving a point in all directions can never exceed the radiance entering from all directions. Formally,

    ∫_Ωr fr(~x, ~Ψi, ~Ψr) |~Ψr · ~N~x| d~ωr ≤ 1   ∀~Ψi ∈ Ωi,

where Ωi and Ωr are the sets of all incident and outgoing directions in the hemisphere centered at ~x, respectively.

Apart from the diffuse and specular components of a surface, the BRDF can also be used to model anisotropic surfaces such as brushed metal or wood grain. The analogous counterpart for transmission is the bidirectional transmission distribution function (BTDF) ft. Examples of physically plausible BRDFs can be found in [LW94] and [Sch93].

¹ Note that in both cases this removes any dependency on the radius.

2.3 The Rendering Equation

A formalized representation of the global illumination problem was proposed by Kajiya, known as the rendering equation [Kaj86]. It expresses the radiance Lr(~x, ~Ψr) reflected by a point ~x on a surface along ~Ψr in terms of the incident radiance from all surfaces in the scene; the radiance reflected from these latter surfaces is, in turn, defined by the rendering equation. Hence the equation provides a recursive definition of light transport. The radiance emitted by a surface is also included, thus catering for light sources.

    Lr(~x, ~Ψr) = Le(~x, ~Ψr) + ∫_{~Ψi ∈ Ωi} fr(~x, ~Ψi, ~Ψr) Li(~x, ~Ψi) |~Ψi · ~N~x| d~ωi   (2.1)

where:

- Le(~x, ~Ψr) is the radiance emitted by the surface at point ~x along ~Ψr.
- Li(~x, ~Ψi) is the radiance impinging on ~x along incident direction ~Ψi. This incoming radiance is itself defined by recursively applying the equation.

The rendering equation forms a theoretical basis for global illumination methods. The goal of these methods is then to solve this equation – approximately, in practice.

2.4 Monte Carlo Integration

The rendering equation is an integral over all incident directions ~Ψi in the hemisphere Ωi centered at point ~x. Each such direction gives rise to recursion, making an evaluation by analytical methods computationally unfeasible for all practical purposes. The rendering equation is an example of an integral which requires numerical methods. One of these approaches is integration by Monte Carlo methods, which gained popularity due to their general applicability. However, their indiscriminate application to any conceivable mathematical problem has also evoked criticism ("the only good Monte Carlos are dead Monte Carlos" [TT54]), as these methods must be used with care in order to ensure the accuracy of the solutions they deliver. Hammersley and Handscomb [HH64], and Rubinstein [Rub81] provide painful, in-depth discussion of Monte Carlo techniques, while chapter 7 of [Gla95] gives a more concise presentation.

Given a function f(x) defined over some domain, assumed to be [0, 1] without loss of generality, the application of the Monte Carlo method consists of estimating its integral I, the estimand:

    I = ∫₀¹ f(x) dx

The essence of Monte Carlo lies in taking a random sample ξ ∈ [0, 1] and evaluating f(ξ), which becomes an estimator ⟨I⟩ for the integral:

    ⟨I⟩ = f(ξ)

The expected value of the estimator is then equal to the actual integral. Such an estimator is termed unbiased.

    E(⟨I⟩) = ∫₀¹ f(x) dx = I

The variance V and standard deviation σ indicate the accuracy of the estimator:

    V(⟨I⟩) = σ² = ∫₀¹ [f(x) − I]² dx = ∫₀¹ f²(x) dx − I²


Extending this concept to N samples ξi, the estimator becomes

    ⟨I⟩ = (1/N) ∑_{i=1}^{N} f(ξi).

Incorporating the N samples as a single N-dimensional sample, the expected value is then

    E(⟨I⟩) = ∫₀¹ … ∫₀¹ (1/N) ∑_{i=1}^{N} f(xi) dx₁ … dxN = I.

Hence this estimator is unbiased as well. Using N samples reduces the variance compared to the base case with one sample:

    σ² = ∫₀¹ … ∫₀¹ [(1/N) ∑_{i=1}^{N} f(xi)]² dx₁ … dxN − I² = (1/N) (∫₀¹ f²(x) dx − I²)   (2.2)

This is Monte Carlo in its crudest form. The estimate will converge to the integral as the number of samples approaches infinity. Equation 2.2 makes obvious the fact that the standard deviation is inversely proportional to √N. This is a rather modest convergence rate, and other more sophisticated Monte Carlo methods exist which strive to minimize the variance. Equation 2.2 serves as a basis to compare these methods.

2.5 Informed Monte Carlo: Importance Sampling

Most optimizations of the basic Monte Carlo method involve the distribution of the samples ξi in the integral's domain. Stratified sampling, for instance, distributes the samples uniformly in the domain to prevent clustering. A more effective approach is to base the sample distribution on some a priori knowledge of the integrand. Monte Carlo methods which rely on such information are termed informed Monte Carlo.

Some regions of the integrand may be more important than others, particularly those which have high function values. Concentrating samples in these regions can reduce the variance and possibly the required number of samples, depending on the accuracy of the information available about the integrand. This leads to the concept of importance sampling [KW86]. This prior information comes in the form of a probability density function (PDF) p(x) which is similar to the integrand². The samples are then taken according to this PDF, rather than uniformly. This nonuniform sampling requires adapting the estimator to avoid introducing bias. This is implied when rearranging the integral I:

    I = ∫₀¹ f(x) dx = ∫₀¹ [f(x)/p(x)] p(x) dx.

Given a sample ξ with probability p(ξ) over [0, 1], the unbiased estimator becomes

    ⟨I⟩ = f(ξ)/p(ξ).   (2.3)

Since the estimator is unbiased, its expected value is still equal to the integral. The variance becomes

    σ² = ∫₀¹ [f(x)/p(x)]² p(x) dx − I² = ∫₀¹ [f²(x)/p(x)] dx − I².

Using N importance samples ξi, the estimator is

    ⟨I⟩ = (1/N) ∑_{i=1}^{N} f(ξi)/p(ξi),

and the corresponding variance is

    σ² = (1/N) (∫₀¹ [f²(x)/p(x)] dx − I²).   (2.4)

Comparing this to equation 2.2 reveals that the variance of an importance sampling method depends on the PDF; a good choice of PDF can reduce the variance compared to that of basic Monte Carlo, while a poor choice can actually increase it. Equation 2.4 implies that if p(x) = f(x)/I, the variance would be zero. However, this optimal PDF requires the actual integral I, which is sought in the first place! Nevertheless, a PDF which approximates f(x) is sufficient, and care must be taken when choosing such a function.

² The two functions should correlate, but the absolute values need not correspond, since the PDF is normalized.

2.5.1 The PDF

The PDF p(x) is the heart of every importance sampling method. Choosing an appropriate function for this role is of paramount importance, and can drastically affect the variance and hence the convergence speed of the estimate, as seen in equation 2.4. Apart from resembling f(x), the PDF must also fulfil the following requirements:

- p(x) > 0 when f(x) ≠ 0
- ∫₀¹ p(x) dx = 1
- It is possible to compute the inverse P⁻¹(x) of the probability distribution function P(x). This is the cumulative function of the probability density function p(x):

    P(x) = ∫₀ˣ p(t) dt.

A candidate PDF for importance sampling must therefore be verified with respect to these requirements before it can be applied with confidence.

2.5.2 Global Illumination with Importance Sampling

It has become common practice to apply importance sampling (and other Monte Carlo methods) to global illumination. The integral to be solved in this case is of course the rendering equation (2.1). The radiance Lr (~x; ~Ψr ) reflected at point ~x on a surface along ~Ψr can then be found by tracing rays incident from directions ~Ψi in the illuminated hemisphere surrounding ~x. These directions are the domain of integration, and each of the traced rays contributes radiance which is computed by recursively applying the rendering equation at their respective intersection points. Applying importance sampling to this task involves choosing these directions according to the PDF p(~Ψi ). If no information about the expected illumination is available, the surface’s reflection characteristics may supply this information [BSS94, Lan91]. In particular, the BRDF may serve as a PDF. In this case, the BRDF guides the ray tracing algorithm by associating high probabilities with those incident directions ~Ψi which yield high reflectivity for the given reflecting direction ~Ψr . Concentrating the sample rays in these regions of high reflectivity allows a reduction of sample rays and hence improved performance. As will be seen in the next chapter, this is a sound approach for the specular component of the BRDF, whose regions of high reflectivity are mostly isolated, but less so for the diffuse component.

Chapter 3

Overview of the Photon Map

This chapter introduces the photon map as a global illumination method. The effects which can be achieved with it are discussed, and an overview of the initialization and rendering steps is given, which is refined in chapters 5 and 6.

3.1 Existing Methods

Several global illumination methods have been developed since the original ray tracing proposal. Some, such as the Monte Carlo methods discussed in section 2.4, are extensions to standard ray tracing, while others, such as radiosity, are based on a different approach.

Radiosity [GTGB84] is a very popular method which is actually based on thermal radiation transfer in a closed system. By setting up and solving a system of linear equations representing this energy exchange among all surfaces in the scene, the method can model diffuse interreflection on these surfaces. There are major advantages to this method: it can be supplemented by hardware, and since the reflection is diffuse and therefore view-independent, the solution of the radiosity equation can be reused to create real-time, interactive walk-throughs. However, radiosity has the disadvantage of necessitating object tessellation, or meshing, since the algorithm relies on planar surfaces (requiring an approximation of curved surfaces), which are adaptively subdivided into patches, possibly producing visible artifacts due to discontinuities at the seams. Furthermore, this subdivision cannot be applied to arbitrary geometries, in particular procedurally defined objects like fractals. Thus radiosity is restricted with respect to the scene geometries it can handle.

By definition, classical radiosity only accounts for diffuse reflection, i.e. all L → D* → E paths. Specular reflection is not considered (if it were, the interactive walk-throughs would not be possible). To this end, radiosity has been supplemented with directional form factors [ICG86], but this incurs a substantial increase in storage requirements. Hybrid methods exist which combine radiosity and ray tracing [WCG87, SP89, RT90, SAWG91]; however, simply serializing the two methods does not account for all light transport paths, and would only model L → D* → S* → E. The more sophisticated hybrid algorithms do in fact model all light paths, i.e. L → (S|D)* → E, but they still require object tessellation, which is plagued by the problems discussed above.

Few global illumination methods can efficiently model the phenomenon of caustics. These are highly directional indirect illumination components which result from specular reflection or transmission and manifest themselves on diffuse surfaces. This corresponds to L → S+ → D → E light paths. Examples are light reflected off polished metal or refracted through glass, particularly when it is focused onto a diffuse surface.

Arvo's illumination map [Arv86] is a two-pass method which stores irradiance on object surfaces as texture maps. The first pass emits flux from the light sources and simulates its propagation in the scene


via reflection and transmission, and accumulates the resulting irradiance in texture maps associated with the surfaces it affects. The second step involves collecting this irradiance from the textures, which is then conveyed to the eye. Thus this two-pass approach connects the light transport paths originating at the light sources, and those terminating at the eye. This method can also handle caustics. The major drawback is the fact that textures are used, which are notorious for aliasing effects, and require a parametric description of a surface in order to perform a mapping to the texture space. As with radiosity, this restricts the geometries to which this process can be applied.

3.2 The Photon Map

The photon map developed by Jensen et al. [JC95b, Jen96] evolved from the illumination map and corrects some of the latter's deficiencies. Like the illumination map, it is a two-pass method, emitting flux from the light sources and subsequently collecting it. The crucial difference is its independence of the scene geometry, which may be regarded as the fundamental philosophy behind the method. With no object dependency (tessellation, texture mapping, etc.), the photon map is applicable to arbitrary geometries. It is also efficient, and can be used as an extension to an existing ray tracing environment, as discussed in chapter 7. The method is based on the simulation of light particle¹ transport, hence the allusion to photons. The distribution of these particles is stochastic, and they may be absorbed as they strike a surface.

3.2.1

Effects Covered by the Photon Map

The photon map provides an estimation of the actual illumination in the scene, the accuracy of which is adjustable. The following sections describe the light transport paths and illumination effects the photon map can model, and how this illumination estimate is put to use. Photons come in three flavours: shadow photons, global photons, and caustic photons. Each type is responsible for a different effect, and there is a distinct photon map for each type.

3.2.1.1 Direct Illumination

The photon map may be used to render the direct illumination and shadows from light sources [JC95a], i.e. L → (D|S) → E paths. This is done using shadow photons, which are stored in the shadow photon map. The principal idea behind shadow photons is to locate shadow boundaries (penumbrae) and to concentrate the sampling of light sources via shadow feelers in these regions. Regions of uniform illumination or uniform shadow (umbra), which typically dominate in a scene, do not require this expensive sampling if shadow photons are used.

3.2.1.2 Global Diffuse Illumination

The photon map's primary purpose is to render global illumination, and this is handled by the global photons, which are stored in the global photon map. These handle all light paths which are reflected diffusely at a surface into the eye, i.e. L → (D|S)+ → D → E. The path notation indicates that this photon type is strictly dedicated to indirect illumination; it is reflected/transmitted at least once. Direct illumination is the responsibility of the shadow photons. As indicated in section 2.5.2, the importance sampling process based on the BRDF has difficulty handling the diffuse component of the BRDF; while the specular component contains highlights or "peaks" corresponding to a set of directions in which a small number of sample rays can be concentrated, the diffuse component is practically constant, hence any direction in the entire illuminated hemisphere is equally valid, and this process would simply degenerate to basic Monte Carlo integration since the PDF would be constant. Sampling the diffuse component would thus require a much higher number of sample rays than the specular component. Rather than using the BRDF to predict directions which are expected to yield high illumination contributions, it makes more sense to use directions which actually yield high contributions, particularly if the indirect illumination is nonuniform. This is where the global photons come in: they supply this information by providing a rough approximation of the actual distribution of the indirect illumination in the scene [Jen95]. This information is then drawn on for the PDF to sample the diffuse component with a smaller number of samples. Therefore global photons lend themselves to importance sampling of the diffuse component, as will be discussed in section 6.4.1.

3.2.1.3 Caustics

Caustics correspond to L → S+ → (D|S) → E paths, and are handled by caustic photons which are stored in the caustic photon map [Jen97]. They are treated separately from the global photons because their illumination contributions are highly directional, and typically require a higher number of photons. They are also visualized differently. These photons undergo a series of specular reflections/transmissions (at least one) after leaving a light source, and are reflected/transmitted into the eye. This latter reflection/transmission may be diffuse or specular, since caustic photons are not restricted to Lambertian surfaces.

1 De Broglie's hypothesis of wave/particle duality justifies considering a bundle of light rays as a particle.

3.2.2

Light Pass and Visualization Pass

As with the illumination map, the photon map is a two-pass method. The first step is the light pass, which initializes the photon map. The second is the visualization pass, which accesses the photon map to provide the illumination components. The two steps connect the transport paths originating at the light sources with those terminating at the eye, hence making it a bidirectional method. In the light pass, a certain number of photons of each type is emitted into the scene from the light sources. Each photon type is handled differently during this distribution process, but in general a photon is reflected/transmitted at a number of surfaces in the scene and inserted into an appropriate data structure (the actual photon map) corresponding to its type, which allows rapid searching. This simulates the light transport paths originating at the light sources. Chapter 5 elaborates on this. In the visualization pass, the direct and indirect contributions to the illumination of a point on a surface are determined using the photon map. The three photon types are visualized differently, but typically a number of photons in the 3D neighbourhood of the illuminated point are located and examined for their contributions. This simulates (in reverse) the light transport paths terminating at the eye. Chapter 6 gives an in-depth treatment of this pass.

Chapter 4

Data Structures

This chapter introduces the underlying data structures for the photon map without going into implementation details. The attributes of the individual photon types are presented in an abstract manner, followed by the spatial data structure in which the photons reside.

4.1

Shadow Photons

A shadow photon is endowed with the following attributes:

- Its position in 3D space on a surface,
- The surface normal at this position,
- The ID of the light source which emitted this photon,
- A shadow flag indicating that this photon lies in this light source's shadow.

The position corresponds to an intersection point with a surface and is used to locate a photon. The surface normal serves to locate photons whose surface normal lies in the same halfspace as that of the point whose direct illumination is under consideration; thus surface curvature is taken into account. If the shadow flag is not set, the light source which emitted the photon contributes to the direct illumination and is referenced to determine its radiance.
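As a concrete sketch, a shadow photon record could be laid out as follows in C++; the names and layout here are illustrative, not MRT's actual types:

```cpp
// Minimal illustrative sketch of a shadow photon record.
struct Vec3 { float x, y, z; };

struct ShadowPhoton {
    Vec3 position;   // intersection point on a surface (used as search key)
    Vec3 normal;     // surface normal at this position
    int  lightID;    // ID of the light source that emitted this photon
    bool inShadow;   // set if this point lies in that light source's shadow
};
```

A photon map implementation would store many such records in a spatial structure keyed on `position`, as discussed in section 4.3.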

4.2

Global and Caustic Photons

These photon types share common attributes:

- Its position in 3D space on a surface,
- The surface normal at this position,
- The flux it carries per primary colour1,
- Its origin in 3D space.

The position and surface normal serve the same purpose as for shadow photons. The flux represents a photon's illumination contribution to the surface it is located on. Its origin serves to determine its incident direction at the point whose illumination is under consideration. The displacement of this point relative to the photon position would not be taken into account if the incident direction at the photon position were stored explicitly.

1 Photons are physically characterized by a single wavelength, but simulating the dominant wavelength by combining three primary components into a photon is obviously more efficient.
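A hedged C++ sketch of such a record (field names are illustrative and do not reflect the actual implementation):

```cpp
// Illustrative sketch of a global/caustic photon record.
struct Vec3 { float x, y, z; };

struct FluxPhoton {
    Vec3  position;   // intersection point on a surface (search key)
    Vec3  normal;     // surface normal at this position
    float flux[3];    // carried flux per primary colour (e.g. RGB)
    Vec3  origin;     // previous path vertex; yields the incident direction
};
```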


4.3

Spatial Data Structure

The actual photon map is a spatial data structure which stores the photons and allows fast retrieval. Each photon type is stored in a separate instance of this structure.

4.3.1

Properties

The data structure must be capable of maintaining a set of 3D keys corresponding to the photon positions, which entails sorting data by space subdivision. Several such data structures exist for geometric databases, the most popular being octrees. Chapter 5 of [Mae96] gives an overview and classification of various spatial data structures.

4.3.2

Queries

When computing the illumination of a point ~x during the visualization pass, a number of photons in the vicinity of this point must be located; this is called a nearest neighbour search. This associative search (or query) must be supported by the data structure implementing the photon map. It corresponds to finding the N closest keys (photon positions) in the data structure to a given key (point ~x). This requires some metric to measure key distances, which is Euclidean in the case of the photon map. Thus the nearest neighbour query finds the N photons with the shortest Euclidean distances to ~x. This can be construed as centering a sphere at ~x and extending its radius until it contains N photons.
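For illustration, the N-nearest query can be sketched in brute force with a max-heap keyed on squared distance; a space-subdividing structure such as an octree would accelerate this, and all names here are illustrative:

```cpp
#include <vector>
#include <queue>
#include <utility>

struct Vec3 { float x, y, z; };

// Squared Euclidean distance between two points.
static float dist2(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

// Return the indices of the N photons (given by their positions) closest
// to the query point x. The max-heap keeps the current farthest candidate
// on top, so it can be evicted when a closer photon is found.
std::vector<int> nearestNeighbours(const std::vector<Vec3>& positions,
                                   const Vec3& x, std::size_t N) {
    std::priority_queue<std::pair<float, int>> heap;  // (distance^2, index)
    for (int i = 0; i < (int)positions.size(); ++i) {
        float d2 = dist2(positions[i], x);
        if (heap.size() < N)
            heap.push({d2, i});
        else if (d2 < heap.top().first) {
            heap.pop();
            heap.push({d2, i});
        }
    }
    std::vector<int> result;
    while (!heap.empty()) { result.push_back(heap.top().second); heap.pop(); }
    return result;
}
```

This corresponds to the expanding-sphere picture above: the heap's top element is the current sphere radius, which shrinks as closer photons are found.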

Chapter 5

The Light Pass

This chapter gives a detailed description of the initial step in a rendering using the photon map, the light pass. The differences in the distribution of each photon type are emphasized.

5.1

Distribution Basics

The purpose of the light pass is to initialize the photon map with the flux distribution in the scene. This flux is conveyed by the photons, which are distributed stochastically. In what follows, an object or surface referred to as being diffuse should be construed as having diffuse properties, even if these are outweighed by any specular components it might also have. A specular object or surface should be interpreted similarly. The process of photon distribution is a simulation of particle transport [AK90] in the absence of participating media. Each photon has a unique path, and impacts a surface as it follows this path. At this point it may be either absorbed or reradiated along a new path (scattering event). The probability that the photon is reradiated is the surface's albedo, which depends on the surface's total reflectance and transmissivity. These can be expressed either in terms of reflection and transmission coefficients (as in the Phong model), or by integrating the BRDF and BTDF over all outgoing directions for a given incident direction (total hemispherical reflectivity and transmissivity). The probabilistic absorption of a photon limits the number of scattering events it undergoes, since an implementation cannot follow photons ad infinitum. This leads to the concept of Russian roulette: a reradiated photon's flux is adjusted to compensate for those which are terminated prematurely via absorption, thus avoiding bias. This is done by dividing the flux by the albedo, which artificially bloats the contributions of those photons which survive. If the photon is reradiated, the BRDF/BTDF is used to probabilistically generate a new path for it – predominantly along directions of high reflectivity or transmissivity – via importance sampling. The flux is then attenuated according to the reflectivity/transmissivity along that path. When a new photon is created, a light source is chosen for the emission.
The choice of light source should depend on its radiance and surface area in the case of an area light source, or its intensity (W/sr) in the case of a point light source. Thus each light source has an importance which is drawn on when choosing a candidate for photon emission. The origin on the light source can be arbitrary, but the emission path along which the photon leaves the light source should be directed at a chosen object in the scene. The choice of object depends on the photon type, but in general this measure increases the efficiency of the distribution process, since photons which do not strike an object after emission are lost by leakage and do not contribute to the flux distribution provided by the photon map. To accomplish this, the light sources require knowledge of the scene geometry, i.e. a mechanism along the lines of the light buffer [HG86]. In [JC95b], Wann Jensen suggests using a projection map, which is a 2D projection of all objects in the scene onto the hemisphere surrounding a photon's origin.
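The Russian-roulette decision described above can be sketched as follows; this is a hypothetical helper, not MRT code, and the random number is supplied by the caller:

```cpp
// Sketch of the Russian-roulette scattering decision.
// albedo: the surface's reradiation probability;
// uniform01: a uniformly distributed random number in [0,1).
// Returns true if the photon is reradiated along a new path.
bool scatterPhoton(float flux[3], float albedo, float uniform01) {
    if (uniform01 >= albedo)
        return false;              // absorbed: the path is terminated
    for (int c = 0; c < 3; ++c)
        flux[c] /= albedo;         // survivors carry bloated flux (keeps the estimate unbiased)
    return true;                   // reradiate; the BRDF/BTDF then samples the new path
}
```

On average the flux division exactly compensates for the terminated photons, which is why the estimate remains unbiased.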


Each region in the projection map identifies the type of object (specular or diffuse), if any, which lies in the direction specified by the polar coordinates corresponding to its index. The photon origin is reused for several photon emissions to compensate for the overhead of constructing this projection map for every origin.

5.2

Shadow Photon Distribution

Shadow photons are emitted towards all objects in the scene. The first surface a shadow photon strikes after emission from a light source is directly illuminated, hence the photon is entered in the shadow photon map together with the impact point on the surface as key, the light source’s ID, and the unset shadow flag. The photon is then followed along its original path and entered in the shadow photon map at every subsequent surface intersection with the shadow flag set, since these surfaces all lie in the shadow cast by the first intersected surface. The photon is terminated when it leaves the scene (figure 5.1).

L

Figure 5.1: Shadow photon distribution from light source L. White points represent direct illumination, red points represent shadows.
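The deposition rule described above – the first hit along the emission path is directly lit, every subsequent hit lies in its shadow – might be sketched as follows (illustrative types and names):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct ShadowPhoton { Vec3 position; int lightID; bool inShadow; };

// Deposit shadow photons along one emission path.
// hits: the ray's surface intersections, sorted by distance from the light.
std::vector<ShadowPhoton> depositShadowPhotons(const std::vector<Vec3>& hits,
                                               int lightID) {
    std::vector<ShadowPhoton> photons;
    for (std::size_t i = 0; i < hits.size(); ++i)
        photons.push_back({hits[i], lightID, i > 0});  // only the first hit is lit
    return photons;
}
```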

5.3

Global Photon Distribution

Global photons are also emitted towards all objects in the scene. The probability of choosing an object should be proportional to its size (total surface area or volume), its albedo, i.e. the sum of its reflection and transmission coefficients or its total hemispherical reflectivity and transmissivity, and its proximity to the light source. Thus objects which are more likely to reradiate a photon and therefore yield greater contributions to the flux distribution are given higher probabilities. Starting with the first indirect surface intersection, the global photon is inserted in the global photon map together with the intersection point as key, the origin from whence it came (the previous intersection point), and its flux before attenuation according to the BRDF/BTDF. A scattering event then takes place, resulting in a new path and attenuated flux, and the process repeats at the next surface collision. This continues until the photon is absorbed or leaves the scene (figure 5.2).


L

Figure 5.2: Global photon distribution from light source L. Global photons are created at the indirect surface intersections marked by red points.

5.4

Caustic Photon Distribution

Caustic photons are emitted towards the specular objects in the scene. The probability of choosing such an object depends on its size and proximity, and its specular albedo, which can be either the sum of its specular reflection and transmission coefficients or the specular component of the total hemispherical reflectivity and transmissivity. As a result, objects with highly directional, iridescent specular properties will contribute most. A caustic photon is followed through a series of scattering events which result in specular reflection or transmission. These scattering events are based on the fact that surfaces which are predominantly specular are more likely to reflect or transmit specularly during such an event. The photon is entered into the caustic photon map at the first scattering event which gives rise to diffuse reflection or transmission. Like a global photon, it is inserted along with the intersection point, its origin, and the incident flux prior to application of the BRDF/BTDF (figure 5.3). After diffuse reflection/transmission, the caustic photon may optionally be treated as a global photon, and entered in the global photon map at every subsequent collision with an object. This can supplement the information provided by the global photon map, since caustics are generally characterized by high flux contributions. The photon is terminated if it is absorbed or leaves the scene. If the caustic photon is terminated prior to diffuse reflection/transmission, it has contributed nothing, which is tough luck. Similarly, if the caustic photon is reflected/transmitted diffusely at the first surface intersection following emission from the light source, it does not contribute to caustics either, since this is direct illumination, and it is therefore discarded.


D S L D

S D Figure 5.3: Caustic photon distribution from light source L. Scattering events on predominantly specular surfaces (S) are marked by white points. A photon is created once it undergoes a scattering event on a predominantly diffuse surface (D), marked by a red point.

Chapter 6

The Visualization Pass

Having initialized the photon map in the light pass, the contributions from the flux distribution represented by the photon map are assessed in the visualization pass. This chapter covers the different visualization methods for each photon type. In what follows, the contributions of each photon type to a point ~x on a surface whose reflected radiance Lr(~x, ~Ψr) along ~Ψr is under consideration are accounted for. Lr(~x, ~Ψr) is simply the sum of these contributions. When computing the radiance resulting from caustic or global photons, it becomes necessary to determine a photon's incident direction relative to point ~x. This can be done by subtracting the photon's origin from ~x and normalizing the result. Should a photon be incident from the halfspace adjacent to the surface normal ~N~x at ~x, it contributes to the transmitted radiance (if the surface is transparent), which is computed analogously, and requires use of the BTDF instead of the BRDF. What is outlined here for the reflected radiance is thus also applicable to the transmitted radiance.
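The origin-to-point construction just described can be sketched as follows (illustrative types, not MRT's):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// A photon's incident direction at point x: subtract the photon's
// origin from x and normalize the result.
Vec3 incidentDirection(const Vec3& x, const Vec3& origin) {
    Vec3 d{x.x - origin.x, x.y - origin.y, x.z - origin.z};
    double len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    return {d.x / len, d.y / len, d.z / len};
}
```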

6.1

Illumination Components

It is possible to split the BRDF fr(~x, ~Ψi, ~Ψr) into:

- a diffuse component fr,d(~x, ~Ψi, ~Ψr) and
- a specular component fr,s(~x, ~Ψi, ~Ψr),

so that fr(~x, ~Ψi, ~Ψr) = fr,d(~x, ~Ψi, ~Ψr) + fr,s(~x, ~Ψi, ~Ψr). Likewise, the incident radiance Li(~x, ~Ψi) may be partitioned into:

- direct contributions from the light sources Li,l(~x, ~Ψi),
- indirect contributions from the light sources exclusively via specular reflections (caustics) Li,c(~x, ~Ψi), and
- indirect contributions from the light sources which have been reflected diffusely at least once Li,d(~x, ~Ψi),

such that Li(~x, ~Ψi) = Li,l(~x, ~Ψi) + Li,c(~x, ~Ψi) + Li,d(~x, ~Ψi). The radiance equation (2.1) can then be expanded as follows [Jen96]:

\[
\begin{aligned}
L_r(\vec{x},\vec{\Psi}_r) ={}& \underbrace{L_e(\vec{x},\vec{\Psi}_r)}_{\text{emission}} \\
&+ \underbrace{\int_{\vec{\Psi}_i\in\Omega_i} f_r(\vec{x},\vec{\Psi}_i,\vec{\Psi}_r)\,L_{i,l}(\vec{x},\vec{\Psi}_i)\,\lvert\vec{\Psi}_i\cdot\vec{N}_{\vec{x}}\rvert\,d\vec{\omega}_i}_{\text{direct}} \\
&+ \underbrace{\int_{\vec{\Psi}_i\in\Omega_i} f_{r,d}(\vec{x},\vec{\Psi}_i,\vec{\Psi}_r)\,L_{i,c}(\vec{x},\vec{\Psi}_i)\,\lvert\vec{\Psi}_i\cdot\vec{N}_{\vec{x}}\rvert\,d\vec{\omega}_i}_{\text{caustics}} \\
&+ \underbrace{\int_{\vec{\Psi}_i\in\Omega_i} f_{r,s}(\vec{x},\vec{\Psi}_i,\vec{\Psi}_r)\,\bigl[L_{i,c}(\vec{x},\vec{\Psi}_i)+L_{i,d}(\vec{x},\vec{\Psi}_i)\bigr]\,\lvert\vec{\Psi}_i\cdot\vec{N}_{\vec{x}}\rvert\,d\vec{\omega}_i}_{\text{specularly reflected indirect}} \\
&+ \underbrace{\int_{\vec{\Psi}_i\in\Omega_i} f_{r,d}(\vec{x},\vec{\Psi}_i,\vec{\Psi}_r)\,L_{i,d}(\vec{x},\vec{\Psi}_i)\,\lvert\vec{\Psi}_i\cdot\vec{N}_{\vec{x}}\rvert\,d\vec{\omega}_i}_{\text{diffusely reflected indirect}}
\end{aligned}
\]
Omitting the straightforward emission term, four illumination components can be discerned above:

- The second term represents direct contributions from the light sources; these are visualized by shadow photons.
- The third term represents caustics on surfaces having a diffuse component; they are visualized by caustic photons.
- The fourth term is indirect illumination reflected specularly; it is visualized via importance sampling based on the specular component of the BRDF as outlined in section 2.5.2, since the choice of sampling directions and hence the number of samples is usually narrowed down.
- The fifth and last term denotes weakly directional indirect illumination which is reflected diffusely; it is visualized via importance sampling based on the global photon map, since all sampling directions in the hemisphere would be equally valid candidates if no information about the incident flux were available.

The visualization of the components contributed by photons is outlined below for each type. In all cases the visualization begins by performing a nearest neighbour search in the photon map associated with the specific photon type. The found photons must be culled according to their surface normals; a photon is only accepted if its surface normal lies in the same halfspace as ~N~x , which is the case if their dot product is positive.

6.2

Shadow Photon Visualization

The direct radiance is determined by finding the Ns shadow photons closest to ~x in the shadow photon map and, using the light source ID associated with each photon, classifying them for each light source l as follows [JC95a]:

- nl,s, the number of photons with the shadow flag set, representing shadow,
- nl,d, the number of photons with the shadow flag unset, representing direct illumination.

This leads to four possible cases:

1. nl,d = 0 ⟹ ~x lies in shadow (umbra) with respect to light source l,
2. nl,s = 0 ⟹ ~x is directly illuminated by light source l,
3. nl,d ≠ 0 and nl,s ≠ 0 ⟹ ~x lies in the shadow boundary (penumbra) of light source l,
4. nl,d = 0 and nl,s = 0 ⟹ no contribution from light source l, so there.

Cases 1 and 2 do not require any shadow feelers. The irradiance may be obtained directly from light source l and the resulting radiance by applying the BRDF. Case 4 may arise if light source l is small or emits low radiance and consequently fewer photons compared to the others. This invariably requires sampling of the light source, since no information is supplied by the shadow photons, and is handled in the same manner as the accurate solution described below. Case 3 may be handled in two ways, depending on whether an accurate or approximate solution is desired.

6.2.1

Accurate Solution

The accurate solution is called for in the lower recursion levels during the ray tracing process, where the contributions to the final radiance conveyed by the primary ray to the projection plane are still significant. The accurate solution requires sampling the light source via shadow feelers as in standard ray tracing or using an analytical method to obtain the precise irradiance on ~x and the resulting reflected radiance.

6.2.2

Approximate Solution

The approximate solution, or visibility estimate, can be used in higher recursion levels of ray tracing, where the contributions to the primary ray are relatively small. The visibility estimate is simply the ratio of the photons representing direct illumination to the total number of photons found for light source l:

\[
\frac{n_{l,d}}{n_{l,d} + n_{l,s}}
\]

This is the approximate fraction of the radiance of light source l that arrives at ~x.
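A sketch of this visibility estimate as a hypothetical helper; note that returning zero for case 4 (no photons found) is a placeholder choice here, since the text states that case actually requires sampling the light source:

```cpp
// Penumbra visibility estimate: fraction of the found shadow photons
// from light l that represent direct illumination.
float visibilityEstimate(int n_direct, int n_shadow) {
    int total = n_direct + n_shadow;
    if (total == 0)
        return 0.0f;  // case 4: no photon information; must sample the source instead
    return (float)n_direct / (float)total;
}
```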

6.3

Caustic Photon Visualization

Caustic photons are always visualized directly using a radiance estimate. For this reason, their contributions require higher accuracy than what is expected of global photons, hence they are generally greater in number, resulting in a higher density of these photons in the scene.

6.3.1

Radiance Estimate

Caustics are visualized by finding the Nc closest photons to ~x via nearest neighbour search, and estimating their radiance. Since the radiance estimate is based on the photon density, it may also be referred to as a density estimate. The estimate relies on the assumption that each photon p of the Nc caustic photons obtained from the nearest neighbour search represents a flux contribution ∆Φp along the photon's incident direction ~Ψi,p to the reflected radiance Lr(~x, ~Ψr). This can be derived by integrating over the hemisphere Ωi representing all incident directions [Jen97]:

\[
L_r(\vec{x},\vec{\Psi}_r) = \int_{\vec{\Psi}_i\in\Omega_i} f_r(\vec{x},\vec{\Psi}_i,\vec{\Psi}_r)\,\frac{d^2\Phi_i(\vec{x},\vec{\Psi}_i)}{dA\,d\vec{\omega}_i}\,d\vec{\omega}_i \approx \sum_{p=1}^{N_c} f_r(\vec{x},\vec{\Psi}_{i,p},\vec{\Psi}_r)\,\frac{\Delta\Phi_p(\vec{x},\vec{\Psi}_{i,p})}{\pi r^2}, \tag{6.1}
\]

where r is the radius of the sphere containing all Nc photons, i.e. the distance from ~x to the furthest neighbour found in the search. Hence the sum of the flux carried by the Nc neighbouring photons, divided by the surface area they cover, yields an estimate of the resulting irradiance, which then results in the reflected radiance after application of the BRDF. In equation 6.1 this surface area is approximated as a disk, implying that the surface intercepted by the sphere representing the search space containing the Nc neighbouring photons is planar. Thus surface curvature is ignored. However, it should be noted that this surface portion becomes asymptotically planar as the photon density increases and r decreases while keeping Nc constant. The radiance estimate already accounts for the solid angle; it is implicit due to the fact that the photon density decreases as the photons leave the light source.
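Assuming a purely diffuse surface with constant BRDF ρ/π – a simplification not made in the text, which applies the full BRDF per photon – the right-hand side of equation 6.1 can be sketched as:

```cpp
#include <vector>

// Density estimate for a purely diffuse surface (illustrative sketch).
// fluxes: flux Delta-Phi_p of the N_c nearest photons (one colour band);
// rho:    diffuse reflectance, giving the constant BRDF rho/pi;
// r:      distance from x to the furthest of the N_c photons.
float radianceEstimateDiffuse(const std::vector<float>& fluxes,
                              float rho, float r) {
    float sum = 0.0f;
    for (float phi : fluxes)
        sum += phi;                          // total flux of the neighbours
    const float pi = 3.14159265f;
    return (rho / pi) * sum / (pi * r * r);  // BRDF times flux over disk area
}
```

The disk area π r² in the denominator embodies the planarity assumption discussed above.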

6.4

Global Photon Visualization

Unlike caustic photons, global photons are only visualized directly at higher recursion levels by a radiance estimate. Ng global photons are located in the vicinity of ~x with a nearest neighbour search in the global photon map, and their flux contributions used either for an accurate or approximate solution, depending on the recursion level.

6.4.1

Accurate Solution

The accurate solution is used at lower recursion levels and entails importance sampling of the diffusely reflected component of the indirect illumination at ~x, using the incident flux from the global photons to derive a PDF which facilitates localising those incident directions which actually yield high contributions to this component. 6.4.1.1 Importance Sampling with Global Photons Importance sampling using global photon flux is an extension of the approach discussed in section 2.5.2. The hemisphere of incident directions centered at ~x is discretized into a 2D sampling grid which is used to construct the PDF [Jen95]. This sampling grid contains Nθ  Nφ regions which correspond to coordinates for the incident directions. Since the polar coordinates in the hemisphere lie in the interval [0; π2 ]  [0; 2π], the ratio of Nθ to Nφ should be about 1:4. There are several parametrizations for the hemisphere [LW94, Shi92] which can be used to map a sampling grid index (ξ1 ; ξ2 ) 2 [0; Nθ , 1]  [0; Nφ , 1] to a polar coordinate (θ; φ) in the hemisphere, and vice versa. The following has the advantage of mapping equally sized elements to equally sized spatial angles in the hemisphere: s (θ; φ) = (arcsin

ξ2 ξ1 ; 2π ): Nθ Nφ

(6.2)
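Equation 6.2 might be implemented as follows (illustrative helper; in practice the index would be jittered within its region before mapping, as described further on):

```cpp
#include <cmath>
#include <utility>

// Map a sampling-grid coordinate (xi1, xi2) to polar coordinates
// (theta, phi) in the hemisphere, per the parametrization of eq. 6.2.
// Equally sized grid regions map to equally sized solid angles.
std::pair<double, double> gridToPolar(double xi1, double xi2,
                                      int Ntheta, int Nphi) {
    const double pi = 3.14159265358979323846;
    double theta = std::asin(std::sqrt(xi1 / Ntheta));
    double phi   = 2.0 * pi * xi2 / Nphi;
    return {theta, phi};
}
```

The inverse mapping (polar coordinates to grid index) follows by solving for ξ1 and ξ2.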

Each photon p of the Ng global photons found in the search is entered in the region of the sampling grid corresponding to its incident direction ~Ψi,p. The region's index is derived by transforming ~Ψi,p to the surface's local coordinate system in which the y axis coincides with the surface normal ~N~x, converting these cartesian coordinates to polar coordinates, and mapping them to the corresponding grid index (ξ1, ξ2) using the inverse of equation 6.2. The product of the photon's flux and fr,d(~x, ~Ψi,p, ~Ψr) is then added to the indexed region. Including the diffuse component of the BRDF indicates how much of the flux incident from ~Ψi,p is actually reflected. A counter for the number of photons mapped to the region is also incremented. Once the flux contributions from all Ng photons have been accumulated according to their incident directions in the sampling grid, they can be used as PDF for importance sampling of the diffuse indirect component. Φ(ξ1, ξ2) denotes the averaged flux in the region with index (ξ1, ξ2), which can be determined with the region's photon counter, and corresponds to the PDF p(ξ1, ξ2). Wann Jensen suggests eliminating "empty" regions (containing zero flux) so that every region has a non-zero probability in order to avoid bias [Jen95]. The actual value placed in the region is arbitrary;


Figure 6.1: Choosing a sampling grid index (ξ1, ξ2) from a sample ξ by applying P⁻¹(ξ).

a fraction of the total flux in the grid, for instance. However, this measure increases the variance, and introduces visible noise in the rendering. It should be used with care. An alternative is to simply assume no contributions can be expected from the directions corresponding to regions containing zero flux, and leave them untouched. These regions will then be ignored by the PDF, but they will appropriately reduce the average flux in the sampling grid and thus account for partial obscuring of the illuminated hemisphere. Ng must be at least equal to Nθ × Nφ, otherwise the grid will always contain some empty regions. A direction for an importance sampling ray is chosen by taking a uniformly random sample ξ ∈ [0, Φtotal], where Φtotal is the sum of the averaged flux over all regions. The inverse of the probability distribution function, P⁻¹(ξ), is then taken for this sample, which maps to a grid index (ξ1, ξ2) (figure 6.1). This probability distribution function is the cumulative function of the photon flux in the sampling grid:

\[
P(\xi_1, \xi_2) = \sum_{t=0}^{\xi_1} \sum_{p=0}^{\xi_2} \Phi(t, p), \qquad \xi_1 \in [0, N_\theta - 1],\; \xi_2 \in [0, N_\phi - 1].
\]
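Inverting the cumulative function by linear traversal can be sketched as follows (illustrative; `regionFlux` holds the averaged flux of the grid regions in traversal order, flattened to one dimension):

```cpp
#include <vector>

// Invert the flux CDF: traverse the regions in order, subtracting each
// region's averaged flux from phiTotal until the remainder falls below
// the sample xi; the region reached is the selected index.
// The probability of selecting a region is proportional to its flux.
int invertCDF(const std::vector<float>& regionFlux, float phiTotal, float xi) {
    float remaining = phiTotal;
    for (int i = 0; i < (int)regionFlux.size(); ++i) {
        remaining -= regionFlux[i];
        if (remaining < xi)
            return i;
    }
    return (int)regionFlux.size() - 1;  // numerical fallback: last region
}
```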

Note that P(Nθ − 1, Nφ − 1) = Φtotal. P⁻¹(ξ) can be determined by traversing the regions in order and subtracting the flux Φ(t, p) in each from Φtotal until it falls below ξ. This ends the iteration at the region with index (ξ1, ξ2) corresponding to P⁻¹(ξ). The probability of choosing a region is proportional to its flux, and it is therefore more likely to select regions with high incident flux contributions. A random point within the indexed region is then mapped using equation 6.2 to polar coordinates (θ, φ), which correspond to the incident direction of the associated sampling ray (figure 6.2). The randomization of the coordinates (jittering) within the limits of the indexed region breaks up any aliasing artifacts in the mapping. The polar coordinates are finally mapped to cartesian coordinates and transformed from the surface's local coordinate system to global coordinates. A sampling ray can then be sent in the opposite direction, and its radiance contribution determined by recursive application of the visualization procedure. The radiance returned by the sampling ray must be adapted to avoid bias as in equation 2.3:


Figure 6.2: Mapping a selected sampling grid index (ξ1, ξ2) to a sampling ray direction.

\[
\frac{L_i(\vec{x}, \vec{\Psi}_i)}{p(\xi_1, \xi_2)}, \tag{6.3}
\]

where (ξ1, ξ2) is the index of the region which maps to the sampling direction ~Ψi. In order for this estimator to be unbiased, the PDF must be normalized. This is discussed in section 6.4.1.2. The jittering of the sampling ray directions has a side effect: the jittered ray does not necessarily intersect the object from which the photon(s) contributing to the sampling ray's selection originated. Two pathological cases may cause problems in this respect: the sampling ray may intersect no object at all, or it may intersect a light source. In both cases the returned radiance will not correlate with the information provided by the photon map, producing noise. Rays which yield no intersection should be ignored; those that intersect light sources should not convey their emissive components (this is direct illumination, which is handled separately).

6.4.1.2 Verifying the PDF

The adaptation of the sampled radiance in equation 6.3 requires normalising the PDF. It is also necessary to verify that it satisfies the requirements in section 2.5.1.

- \(\int_{\theta=0}^{\pi/2} \int_{\phi=0}^{2\pi} p(\theta, \phi)\, d\theta\, d\phi = 1.\)


This is only possible if the PDF is normalized. Since the average flux Φ(t ; p) in each region corresponds to p(t ; p), the normalization factor is the reciprocal of the integral of the flux over all regions. The interval of integration is the polar coordinate space in the hemisphere, which corresponds to the sampling grid indices. In order to integrate to 1, the factor must be divided by the size of the interval, Nθ Nφ . The normalization factor thus becomes Z

π 2

Z 2π

θ=0 φ=0

p(θ; φ)dθdφ 

,1

1 Nθ,1 φ Φtotal Φ(t ; p) = : ∑ ∑ Nθ Nφ t =0 p=0 Nθ Nφ N

The normalized PDF by which each sample must be divided is then Φ(ξ1 ; ξ2 )Nθ Nφ : Φtotal



- p(ξ1, ξ2) > 0 wherever Li(x, Ψi) ≠ 0. This condition is satisfied asymptotically as Ng, Nθ, Nφ, and the number of global photons in the scene increase, since Φ(ξ1, ξ2), Φtotal, and the indirect component of Li(x, Ψi) are all supplied by the photon map. Any jittering applied to (ξ1, ξ2) diminishes as Nθ and Nφ approach infinity. This also legitimizes the assumption that regions of zero flux in the sampling grid yield no contributions and can therefore be ignored.

- It is possible to determine P^(−1)(ξ); this was discussed in the previous section.
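The selection of a grid region in proportion to its flux, together with the normalized PDF value derived above, can be sketched as follows. This is an illustrative reimplementation under stated assumptions (a flat flux array of Nθ×Nφ entries, hypothetical names), not the thesis code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical sketch: draw a sampling-grid region (t, p) with probability
// proportional to its average photon flux, and return the normalized PDF
// value p(t, p) = flux(t, p) * nTheta * nPhi / fluxTotal by which the
// corresponding radiance sample must be divided.
struct GridSample {
    int t, p;    // region indices (theta, phi)
    double pdf;  // normalized PDF value for this region
};

GridSample sampleFluxGrid(const std::vector<double>& flux, // nTheta*nPhi entries
                          int nTheta, int nPhi, double xi) // xi in [0,1)
{
    double fluxTotal = 0.0;
    for (double f : flux) fluxTotal += f;

    // Walk the (implicit) CDF until the running sum exceeds xi * fluxTotal;
    // regions of zero flux can never be selected.
    double target = xi * fluxTotal, acc = 0.0;
    int idx = 0;
    for (; idx < (int)flux.size(); ++idx) {
        acc += flux[idx];
        if (acc > target) break;
    }
    if (idx == (int)flux.size()) idx--;  // guard against rounding at xi -> 1

    GridSample s;
    s.t = idx / nPhi;
    s.p = idx % nPhi;
    s.pdf = flux[idx] * nTheta * nPhi / fluxTotal;
    return s;
}
```

Note that the linear CDF walk is O(Nθ·Nφ) per sample; a precomputed cumulative table with binary search would be the obvious refinement.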

6.4.2 Approximate Solution

In higher ray tracing recursion levels it suffices to take an approximate solution for this component. This is simply a radiance estimate as outlined in section 6.3.1 using the Ng global photons found in the neighbourhood of ~x. Since the global photon density is typically lower than that of caustic photons, and subject to greater fluctuation, the radiance estimate for global photons yields very poor results if used in lower recursion levels. It is totally futile to use a radiance estimate for global photons at the top recursion level (i.e. the primary ray from the projection plane).

Chapter 7

Implementation

This chapter deals with the implementation details of the photon map within the framework of an existing rendering package, the Minimal Rendering Toolkit (MRT). The components of the software architecture pertaining to light sources, surface characteristics, and illumination models are summarized. The embedding of the photon map, additional area light sources, a tone mapping operator, and optimization issues are then examined. The extensions presented in this chapter have been integrated into the MRT source code distribution, which is available at: http://www.graphics.uni-bonn.de/MRT.

7.1 The Minimal Rendering Toolkit

The MRT is an extensible, object-oriented platform for general-purpose 3D visualization [Fel96a, Fel96b]. Although it has numerous other visualization modes (volume rendering, radiosity, polygonal approximation), it is presented here in a ray tracing context. MRT emphasizes its object-oriented design: all software components implementing the fundamental concepts found in any rendering package follow this principle, which makes C++ an appropriate implementation language. The building blocks of which MRT consists are therefore implemented as C++ classes, which can be extended by derivation to implement more complex and specialized entities. An extract of the basic building blocks comprising MRT follows (figure 7.1):

- t_Object is the base class for all geometric objects. It contains member functions to check for ray intersection, to supply a bounding volume which completely contains the object, and to generate a boundary representation (BRep) [Ben97], among others.

- t_SurfaceObject is derived from t_Object and represents geometric objects with infinitesimally thin surfaces. This distinguishes them from volume objects.

- t_Scene contains the actual scene geometry. This class is derived from t_Object and is consequently treated like a geometric object; this is transparent to program modules which access an instance of this class. The scene is organized hierarchically: scenes can contain either subscenes or elementary objects (at the bottom of the hierarchy). Thus if an intersection of an object with a ray is sought, the ray is passed down the scene hierarchy to whichever subscene's bounding volume is intersected by the ray, eventually ending up at an elementary object, which is then interrogated for an intersection.

- t_Image is the top level instance controlling the visualization process; it also handles the appropriate output device. In ray tracing, it controls the generation of primary rays emanating from the projection plane via the member function rayTrace().


- t_Shader is the base class for all shading methods. It contains common surface characteristics for the Phong model for surface and volume objects.

- t_SurfaceShader contains characteristics pertaining exclusively to instances of class t_SurfaceObject, and is derived from t_Shader. Every surface object is associated with an instance of this class.

- t_Reflector [Mül97] is the base class for the BRDF and BTDF, which are implemented as member functions brdf() and btdf(). Importance sampling according to the BRDF or BTDF is done by importanceSample(), which generates a sampling direction and also returns the type of surface interaction (diffuse/specular reflection/transmission, or absorption). Variants also exist which are restricted to the diffuse and specular components, such as diffuseBrdf(), specularBrdf(), specularImportanceSample(), etc. Every surface shader contains a reference to its associated reflector. The implementation is based on Lafortune and Willems' modified Phong BRDF [LW94].

- t_SrfIlluminator [Mül97] is the base class for all illumination models for surface objects. Every surface shader contains a reference to an associated illuminator. Typically this would be one and the same instance for the entire scene, although it is possible to apply different illumination models to the same scene (different illuminators for varying levels of detail, for example). The member incidentLight() evaluates the illumination model for a point on the surface of the object whose surface shader is associated with the illuminator. The default implementation uses plain vanilla local Phong shading without any global components.

- t_RTSrfIlluminator [Mül97] is an illuminator derived from t_SrfIlluminator implementing standard recursive ray tracing.

- t_Light is the base class for all light sources. It implements a simple point light source. The members brightness() and position() return its emission (which does not pertain to any physical units) and its position in the scene. The function directLight() performs the shadowing for a given point relative to the light source and returns the incident emission.

- t_Camera implements the pinhole camera model which handles the projection of the scene onto the projection plane and supplies the resulting primary rays via an iterator for the ray tracing process.

- t_IllumScene is a container for the entire scene hierarchy along with all light sources and other global information.

- t_IrradianceCache implements an irradiance caching scheme [WRC88] for diffuse indirect illumination. It can be used by the illuminators to accelerate the computation of these components at the expense of image quality (see section 7.3.3). Irradiance gradients [WH92] are also optionally supported if the illuminator can supply them. The cache resides in class t_IllumScene and may optionally be preprocessed before rendering with its member irradianceCachePreprocess(), which uniformly distributes a specified number of primary rays over the projection plane and invokes the ray tracing process for each. If the illuminator supports irradiance caching, it is then responsible for detecting the preprocessing stage (via a flag in t_IllumScene) and storing the computed irradiance in the cache. This preprocessing step reduces the likelihood that the irradiance cache will extrapolate (rather than interpolate) its samples, which adversely affects image quality.
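The interaction selection performed by importanceSample() can be illustrated with a small sketch (illustrative, not MRT's actual code): Russian roulette on the diffuse and specular reflectances chooses the interaction type, and the cos^n specular lobe of the modified Phong model [LW94] is sampled by inverting its CDF.

```cpp
#include <cassert>
#include <cmath>

// Illustrative interaction selection in the spirit of importanceSample():
// Russian roulette on the diffuse (kd) and specular (ks) reflectances picks
// diffuse reflection, specular reflection, or absorption.
enum class Interaction { Diffuse, Specular, Absorbed };

Interaction selectInteraction(double kd, double ks, double xi) {
    assert(kd + ks <= 1.0);                 // energy conservation
    if (xi < kd) return Interaction::Diffuse;
    if (xi < kd + ks) return Interaction::Specular;
    return Interaction::Absorbed;           // remaining probability: absorption
}

// Polar angle of a specular sample about the mirror direction: for the
// modified Phong cos^n lobe, theta = acos(xi^(1/(n+1))) for xi in [0,1].
double specularLobeTheta(double xi, double n) {
    return std::acos(std::pow(xi, 1.0 / (n + 1.0)));
}
```

The azimuth φ of the specular sample would simply be 2π times a second uniform random number.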

The ray tracing process in MRT consists of the following steps:


1. t_Image::rayTrace() is called to initiate the process.
2. Each primary ray passing through the projection plane into the scene is supplied via the iterator t_Camera::getRay().
3. The primary ray is passed on to t_IllumScene::incidentLight() to determine the radiance it conveys.
4. The closest object intersected by the ray is determined. This requires passing the ray down the scene hierarchy via t_Scene::intersect(), recursing into those subscenes whose bounding volumes it intersects. If an intersection exists, the recursion terminates at the corresponding elementary object (after calling t_Object::intersect()), otherwise the scene's background colour is returned.
5. If an intersection is found, t_IllumScene::incidentLight() calls the virtual t_Shader::incidentLight() for the shader associated with the intersected object. For a t_SurfaceObject this translates to a call to t_SurfaceShader::incidentLight().
6. t_SurfaceShader::incidentLight() calls the virtual t_SrfIlluminator::incidentLight() for its associated illuminator to evaluate the illumination model for the intersected surface.
7. The illuminator will at least take direct illumination into account by calling the virtual t_Light::directLight().
8. t_Light::directLight() calls either t_Scene::intersect() or t_Scene::checkIntersect() (a fast check for the existence of some intersection without evaluating the actual intersection point) to determine shadowing, depending on the light source.
9. If the illuminator also takes global illumination into account, it will trace secondary rays by calling t_IllumScene::incidentLight(), resulting in an additional recursion level.
10. The resulting illumination components are summed (possibly with weights) and returned as the radiance conveyed by the ray passed to t_IllumScene::incidentLight().
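The delegation chain above can be sketched in miniature. The classes and signatures below are simplified stand-ins, not MRT's actual interfaces: the scene finds the closest hit, and the illuminator sums the direct term plus a recursive secondary-ray contribution.

```cpp
#include <cassert>
#include <cmath>
#include <optional>
#include <vector>

// Minimal, hypothetical sketch of MRT's delegation chain: each "object" is
// reduced to a hit distance and an albedo.
struct Hit { double dist; double albedo; };

struct Scene {
    std::vector<Hit> objects;            // stand-in for the scene hierarchy
    std::optional<Hit> intersect() const {
        std::optional<Hit> best;
        for (const Hit& h : objects)     // pick the closest intersection
            if (!best || h.dist < best->dist) best = h;
        return best;
    }
};

// Illuminator: direct term plus recursive, albedo-attenuated indirect term
// (steps 7 and 9 above, collapsed into scalars).
double incidentLight(const Scene& scene, double directLight, int depth) {
    std::optional<Hit> hit = scene.intersect();
    if (!hit) return 0.0;                // no hit: background
    double radiance = directLight;       // direct illumination
    if (depth > 0)                       // secondary rays, one more level
        radiance += hit->albedo * incidentLight(scene, directLight, depth - 1);
    return radiance;
}
```

The real process differs mainly in that the recursion traces geometrically distinct rays and the "radiance" is spectral, but the control flow is the same.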

7.2 Area Light Sources

The standard point light sources in MRT are insufficient for global illumination purposes, particularly since they have no physical basis. Furthermore, the lack of penumbrae restricts the realism of the resulting renderings. A suite of area light sources was therefore developed and the light source interface extended, with the primary aim of maintaining as much existing functionality as possible while introducing flexible handling for extended features. A list of the area light source classes follows:

- t_DistLight ("distributed light") is an optimized spherical light source using an adaptive sampling algorithm for shadowing [Sch96]. However, like a point light source it is not actually part of the scene geometry, and therefore only its emission is visible, not the light source itself. Since the algorithm is tailored to its dedicated geometry it is very fast.

- t_AttenuatedDistLight is similar to the above, but takes global attenuation into account when determining its direct contributions on surfaces. As such, it is more appropriate for standard ray tracing than for global illumination methods.

- t_DistSpotLight corresponds to t_DistLight but emits within a user defined cone, in analogy to a spotlight.

- t_AttenuatedDistSpotLight combines the above with global attenuation effects for standard ray tracing.

- t_LightObject implements an area light source of arbitrary geometry which is associated with an object in the scene and is hence itself visible. The sampling is based on the tessellation of the object into a set of planar faces represented by BReps [Ben97]. Each face visible from the illuminated point is assigned a number of sample rays proportional to its projected area and the emission of the light source. A separate BRep is maintained for each recursion level with appropriately reduced tessellation quality and number of faces (level of detail).

7.2.1 Modified Light Source Interface

The set of light source member functions has been extended, with a consistent interface for all light sources as the primary design goal. This provides transparent handling without regard to the specific type; the encapsulated definition of the functionality is of course type specific (via C++ virtual functions). The challenge in this approach lies in the fact that some concepts pertaining to area light sources do not apply to point light sources. An extract of the member functions follows:

- position() returns the centre of the area light source's bounding box. This is trivially the point position for point light sources.

- brightness() returns the emission of the light source, which is dimensionless for point light sources and the radiance for area light sources. It is specified in the constructor.

- directLight() has been modified to sample the light source for direct illumination via reflection and/or transmission¹. The light transport mode is returned along with the resulting emission incident on the surface, which accounts for the projection of the light source (see below). This is the high level sampling routine.

- numberOfSamples() returns the number of samples for the light source. It is subject to the specific sampling algorithm of the light source, which may be adaptive. For class t_LightObject, for example, it is the number of samples per unit incident flux (resulting from the radiance multiplied by the projected area of the light source).

- area() returns the total emissive surface area of a light source. By definition, this is 0 for point light sources.

- importance() is the relative global importance of the light source, which can be used by illuminators to choose a light source based on the magnitude of its contribution. For area light sources this is the product of the surface area and the radiance. For point light sources it is based on the emission alone².

- sample() is the low level sampling routine which returns the incident emission from the light source for a given point based on one sample ray. Like directLight(), it returns the light transport mode; however, reflection and transmission are mutually exclusive, since only one sample is taken.

- sampleRay() is another low level routine which returns a ray leaving the light source. Two variants exist, one of which returns an arbitrary ray within the light source's field of emission (e.g. within a spotlight cone), the other allowing the specification of a destination point for the ray.

¹ Note that an area light source can illuminate a surface from the front and back.
² Due to their disparate importance definitions it is generally a bad idea to combine point and area light sources in a scene.

- projection() returns the projected area of the light source as seen from the given point divided by the distance to the light source, in analogy to a form factor or solid angle. For point light sources this would be 0, but for convenience it instead returns the dot product of the light vector and the surface normal.

7.3 Implementing the Photon Map

In this section the implementation of the photon map as a C++ class within MRT is explored, as well as the peripheral class which embodies the spatial data structure containing the photons. Those points which require optimization are refined later on.

7.3.1 Embedding the Photon Map

The photon map is implemented as class t_PhotonMap, which is derived from t_RTSrfIlluminator. The member function incidentLight() is overloaded and performs the visualization step. It also calls initialize(), the member function responsible for the light pass, on its first invocation. Thus the photon map is initialized implicitly, without intervention from the instance of t_SurfaceShader which called it. The visualization is broken down into the members directRadiance(), causticRadiance(), and importanceSample(), which determine the direct, caustic, and indirect diffuse and specular contributions, respectively.

The photons are implemented as class t_Photon, which represents both global and caustic photons. Each photon type is maintained in a separate photon map implemented as the spatial data structure described in section 7.3.2. Direct illumination is handled by conventional light source sampling instead of shadow photons, which were removed from the implementation (see chapter 8). These three illumination components can be combined independently, depending on the parameters; setting the number of photons of a particular type to emit or search for to zero ignores the illumination component supplied by this photon type (see appendix A). The direct component is always included.

The number of photons to emit is distributed among the light sources according to their importance. The distribution process is optimized as described in section 7.3.3.5. The number of photons which actually wind up in the photon map depends largely on the scene and the surface characteristics, since photons may be lost prematurely by leakage or absorption without making any contributions.

The member importanceSample() handles the diffusely and specularly reflected components of the indirect illumination separately.
The specular component allows three choices: no specular component at all, the perfectly reflected and possibly transmitted ray(s) as in standard ray tracing, or importance sampling of this component based on the BRDF/BTDF. The perfect rays are traced by calling t_RTSrfIlluminator::eyePathContrib(), which performs the recursion on specular rays for standard ray tracing. Importance sampling of the specularly reflected indirect component is performed using t_Reflector::specularImportanceSample(). The diffusely reflected component is acquired via importance sampling according to incident global photon flux, as in section 6.4.1.

The photon map is embedded by assigning it as illuminator to the surface shaders of surface objects in the scene. This assumes some mechanism for selecting illuminators, either via parameters or interactively. No other alteration to the architecture is necessary, as all modifications are localized in the derived illuminator t_PhotonMap, and its presence is transparent to the other modules.
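The lazy initialization and the independent combination of components described above can be sketched as follows. The class and member names are illustrative stand-ins for the t_PhotonMap behaviour, with placeholder radiance values, not the actual MRT code:

```cpp
#include <cassert>

// Hedged sketch of a photon-map illuminator: the light pass runs lazily on
// first use, and each component can be disabled by setting its photon count
// to zero. The direct term is always included.
class PhotonMapIlluminator {
public:
    PhotonMapIlluminator(int nGlobal, int nCaustic)
        : nGlobal_(nGlobal), nCaustic_(nCaustic) {}

    double incidentLight() {
        if (!initialized_) initialize();        // implicit light pass
        double L = directRadiance();            // always included
        if (nCaustic_ > 0) L += causticRadiance();
        if (nGlobal_ > 0)  L += indirectRadiance();
        return L;
    }
    bool initialized() const { return initialized_; }

private:
    void initialize() { initialized_ = true; }  // photon emission would go here
    double directRadiance()   const { return 1.0; }   // placeholder values
    double causticRadiance()  const { return 0.25; }
    double indirectRadiance() const { return 0.5; }

    int nGlobal_, nCaustic_;
    bool initialized_ = false;
};
```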

7.3.2 The kd-Tree

The spatial data structure used by t_PhotonMap is a kd-tree [Ben75, Ben79, BF79]. The "k-dimensional tree" is an extension of the conventional binary search tree, generalizing the dimension of the keys. Each node S has the following attributes:

- a k-dimensional key (s0, s1, ..., s_{k−1}),
- a discriminator d ∈ [0, k−1] such that:
  - l_d < s_d for every node L with key (l0, ..., l_{k−1}) in the left subtree,
  - r_d > s_d for every node R with key (r0, ..., r_{k−1}) in the right subtree.

Equalities between s_d and another key in dimension d are resolved via cyclic comparison of the key components. This implies that no two nodes may have identical keys which match in all dimensions. The discriminator can be graphically interpreted as subdividing, in dimension d, the search space containing all nodes in the subtree of which S is the root. Variants exist which either store the discriminator explicitly in each node following some convention, or in which it is implicit. In the latter case it is often derived from the node's tree level: on level m the discriminator d is m mod k. As with the standard binary search tree, a balanced kd-tree guarantees logarithmic search time. The primary search methods are:

- Range query: finds all keys between (t0, ..., t_{k−1}) and (u0, ..., u_{k−1}), where t_i ≤ u_i for 0 ≤ i ≤ k−1.
- Nearest neighbour search: finds the N closest keys in the neighbourhood of the given key according to the underlying metric used to define key distance [FBF77, Mar96].

The kd-tree is implemented as a template class t_KDTree for general applicability. The template parameters are dim, the key dimension; t_Key, the key component type; and t_Data, the data type of the nodes in the tree. Each node uses an implicit discriminator. The basic functions provided by the class are:

- insert() inserts a node into the kd-tree.

- remove() deletes a node from the kd-tree.

- find() finds the node associated with the specified key.

- findRange() performs a range query for nodes within the specified key range. The found nodes are placed in a dynamically enlarged query buffer and can be subsequently retrieved via the nextFound() iterator function.

- findNearestNeighbor() finds the specified number of nearest neighbouring nodes for the given key. Variants exist which additionally subject the node selection to arbitrary, user-defined selection criteria. In addition, the search space may also be restricted. The found nodes are accessed as with findRange(). Furthermore, the squared distances³ to the found nodes are also available via the nextDist() iterator.

- optimize() balances the kd-tree after node insertion to maximize performance. This is recommended if the kd-tree is static in the application.

- keyDist() is a protected member which implements the key distance metric between two keys. The default is a Euclidean metric. This may be overloaded if other metrics are required.

³ This results from the fact that square roots are omitted for efficiency when comparing key distances during the search.

Applying t_KDTree as photon map requires instantiating the template as follows:

- t_Data is t_Photon,
- t_Key is float or double, corresponding to the vector component types for the photon positions,
- dim is 3, as the photons are stored in 3D space.

Since the photon map remains static during visualization, it is balanced with the optimize() function after completion of the light pass. The nearest neighbour search uses the default Euclidean metric implemented in keyDist(). During the search, findNearestNeighbor() is supplied with an additional selection criterion which invokes a member function of t_Photon; this takes the dot product of the photon's surface normal and that of the illuminated point and signals acceptance or rejection to the kd-tree based on its sign. In this way photons can be tested "on the fly" for membership in the illuminated surface's halfspace.
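The two central mechanisms described above, balancing by recursive median split and a nearest-neighbour search that compares squared distances and prunes subtrees across the splitting plane, can be sketched for the 3D case. This is an independent miniature with an implicit discriminator (depth mod 3), not the t_KDTree template itself:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Illustrative 3-d tree: points are kept in one vector; the node of the
// range [lo, hi) is the median index, so the layout is implicitly balanced.
using Point = std::array<double, 3>;

static double sqDist(const Point& a, const Point& b) {
    double s = 0.0;
    for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return s;
}

struct KdTree {
    std::vector<Point> pts;

    void build(std::vector<Point> input) {
        pts = std::move(input);
        buildRec(0, pts.size(), 0);
    }

    // Nearest neighbour of q; returns the squared distance (square roots
    // are omitted when comparing, as in the footnote above).
    double nearest(const Point& q, Point* out) const {
        double best = std::numeric_limits<double>::max();
        searchRec(q, 0, pts.size(), 0, &best, out);
        return best;
    }

private:
    void buildRec(size_t lo, size_t hi, int depth) {
        if (hi - lo <= 1) return;
        size_t mid = (lo + hi) / 2;
        int d = depth % 3;                 // implicit discriminator
        std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                         [d](const Point& a, const Point& b) { return a[d] < b[d]; });
        buildRec(lo, mid, depth + 1);
        buildRec(mid + 1, hi, depth + 1);
    }

    void searchRec(const Point& q, size_t lo, size_t hi, int depth,
                   double* best, Point* out) const {
        if (lo >= hi) return;
        size_t mid = (lo + hi) / 2;
        int d = depth % 3;
        double ds = sqDist(q, pts[mid]);
        if (ds < *best) { *best = ds; *out = pts[mid]; }
        double plane = q[d] - pts[mid][d]; // signed distance to splitting plane
        size_t nearLo = plane < 0 ? lo : mid + 1, nearHi = plane < 0 ? mid : hi;
        size_t farLo  = plane < 0 ? mid + 1 : lo, farHi  = plane < 0 ? hi : mid;
        searchRec(q, nearLo, nearHi, depth + 1, best, out);
        // Only cross the plane if the far subtree could still improve best.
        if (plane * plane < *best) searchRec(q, farLo, farHi, depth + 1, best, out);
    }
};
```

An N-nearest query would keep a bounded max-heap of candidates instead of a single best value, and the photon-map variant would additionally apply the normal-based selection criterion before accepting a node.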

7.3.3 Optimizations

In addition to the basic photon map implementation, a number of optimizations can be applied to enhance its performance as well as the quality of the results.

7.3.3.1 Level of Detail

An obvious and very simple enhancement is to adapt the parametrization to the ray tracing recursion level. The number of photons to locate during a nearest neighbour search and the number of importance samples are reduced as recursion progresses. The adaptation consists of scaling the parameters with a level dependent factor, which may be linear or exponential. The net effect is a reduction in overhead for the progressively smaller contributions from secondary rays. This concept is applied consistently throughout the illuminators and light sources; the number of light source samples and the irradiance cache quality are adjusted in this way, for example.

7.3.3.2 Irradiance Cache

The irradiance cache available in MRT as class t_IrradianceCache has been incorporated into the importance sampling routine t_PhotonMap::importanceSample() to cache the diffusely reflected component of the indirect illumination acquired by importance sampling based on global photon flux. This allows interpolation of the cached irradiance rather than performing importance sampling for every illuminated point. The impact on performance is substantial, depending on the quality setting chosen for the irradiance cache; the cache thus trades image quality for speed. Generally the indirect illumination is weakly directional and characterized by slow variation over a surface, hence the quality reduction is moderate. This scheme cannot be applied to the specularly reflected or caustic components due to their highly directional nature.
The irradiance cache must invariably be preprocessed to contain a number of irradiance samples to interpolate before the actual rendering process begins; failure to do so results in severe extrapolation artifacts which accumulate in the image, particularly if irradiance gradients are not available. A positive side effect of the irradiance cache is its tendency to smooth out noise in the image which may arise from the importance sampling. Used properly, the irradiance cache acts as a filter for high frequency noise, resulting in very subtle gradation of the indirect diffuse illumination. However, the irradiance cache also displays a propensity to reduce shadowing effects arising from indirect illumination (a natural consequence of the interpolation process).


7.3.3.3 Restricted Search Space

As mentioned in section 7.3.2, t_KDTree::findNearestNeighbor() supports the specification of a restricted search space. This can significantly accelerate the search by cutting off branches in the kd-tree which contain photons lying outside this space. This tweak can be applied effectively when photons are visualized directly in a radiance estimate, under the assumption that there exists some threshold below which no radiance is discernible in the resulting image. This threshold Lmin actually depends on the dynamic range of the output device. Since the radiance estimate is based on density, a threshold surface area Amax may be estimated from Lmin and the average photon flux Φ in the scene. Since Amax = πr²max, there is a radius rmax defining a maximum search space for a nearest neighbour query, beyond which the resulting radiance estimate is assumed to be negligible. Omitting the BRDF, this can be derived from equation 6.1 for N photons:

Lmin = NΦ / Amax  ⟺  rmax = √(NΦ / (π·Lmin)).
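The clamping radius follows directly from this relation; a minimal sketch with an illustrative helper name (not part of MRT):

```cpp
#include <cassert>
#include <cmath>

// Maximum nearest-neighbour search radius: below lMin the radiance estimate
// of N photons with average flux avgFlux is assumed to be indiscernible, so
// the query can be clamped to r_max = sqrt(N * avgFlux / (pi * lMin)).
double maxSearchRadius(int nPhotons, double avgFlux, double lMin) {
    const double pi = std::acos(-1.0);
    return std::sqrt(nPhotons * avgFlux / (pi * lMin));
}
```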

Supplying t_KDTree::findNearestNeighbor() with rmax as maximum search distance is particularly effective if photons are concentrated in small regions of the scene, which is common for caustics.

7.3.3.4 Caustics Filter

The radiance estimate used to directly visualize caustics has been supplemented by a cone filter to enhance the contours of the caustics and reduce blurring [Jen97]. The filter applies a weight wp to the flux of each photon p used in the radiance estimate (equation 6.1) for point x during the summation:

wp = 1 − dp / (k·r),  k ≥ 1,    (7.1)

where:

- dp is the Euclidean distance between x and photon p,
- k is a filter constant which affects its slope,
- r is the radius of the neighbourhood in which the photons were found, i.e. the maximum dp.

The filter requires a normalization factor such that it integrates to 1 over its domain. Since the filter is radially symmetric, this factor can be found by integrating equation 7.1 over [0, r] using the cylindrical shell method [Gla95, pp. 519-520], which yields πr²(1 − 2/(3k)). Thus equation 6.1 with the cone filter becomes:

Lr(x, Ψr) = Σ_{p=1}^{Nc} fr(x, Ψi,p, Ψr) · wp·ΔΦp(x, Ψi,p) / (πr²·(1 − 2/(3k))).
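The cone-filtered estimate can be sketched for the simplest case of a constant (diffuse) BRDF. The function below is an illustration of equations 7.1 and the filtered radiance estimate under that assumption, not the thesis implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Cone-filtered radiance estimate: dists holds the photon distances d_p,
// flux their powers, fr is a constant diffuse BRDF, k the filter constant.
double causticRadiance(const std::vector<double>& dists,
                       const std::vector<double>& flux,
                       double fr, double k)
{
    const double pi = std::acos(-1.0);
    double r = 0.0;                            // neighbourhood radius = max d_p
    for (double d : dists) r = std::max(r, d);
    double sum = 0.0;
    for (size_t p = 0; p < dists.size(); ++p) {
        double w = 1.0 - dists[p] / (k * r);   // cone filter weight (7.1)
        sum += fr * w * flux[p];
    }
    // Normalize by the filter's integral over the disc of radius r.
    return sum / (pi * r * r * (1.0 - 2.0 / (3.0 * k)));
}
```

Note that for k → ∞ the weights approach 1 and the normalization approaches πr², recovering the unfiltered estimate of equation 6.1.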

7.3.3.5 Accelerated Photon Distribution with BReps

The photon distribution process has been accelerated by tessellating the scene geometry into boundary representations. This allows selection of a point on an object's BRep to serve as destination for a ray leaving a light source, obtained with t_Light::sampleRay(). As a result, fewer photons go astray and are rejected, which accelerates the distribution. Each object in the scene is assigned an importance, which is the probability of selecting the object as receptor for an emitted photon. This importance is proportional to:

- the object's total surface area (obtained from the BRep),
- the object's surface albedo (the sum of its reflection and transmission coefficients); for caustic photon emission this is restricted to the specular coefficients,
- the proximity (inverse distance) to the light source.

The tessellation of the scene geometry for the purpose of distributing photons actually contradicts the photon map's basic philosophy of geometry independence. However, this does not engender a severe functional restriction: for objects which cannot supply a BRep, emission ray destination points can be chosen within the object's bounding volume. Depending on the object's geometry, this can be considerably slower, as the emission rays may not necessarily intersect the object itself and are then rejected. Unfortunately, the surface area cannot be determined for these objects, and their importance is instead proportional to the bounding box size (either side length or volume), which can be misleading.
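The receptor importance described above, proportional to surface area, albedo, and inverse distance, amounts to a small discrete distribution. A sketch with illustrative names (not MRT code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical receptor-importance computation: the probability of aiming
// an emitted photon at an object is proportional to its surface area, its
// albedo, and its inverse distance to the light source.
struct Receptor { double area, albedo, distance; };

std::vector<double> receptorProbabilities(const std::vector<Receptor>& objs) {
    std::vector<double> p(objs.size());
    double total = 0.0;
    for (size_t i = 0; i < objs.size(); ++i) {
        p[i] = objs[i].area * objs[i].albedo / objs[i].distance;
        total += p[i];
    }
    for (double& v : p) v /= total;   // normalize to a discrete distribution
    return p;
}
```

For caustic photons the albedo term would be restricted to the specular coefficients, as noted above.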

7.4 Tone Mapping

The ratio of the highest to the lowest measurable amplitude of a signal is termed its dynamic range. In nature, physical quantities are not bounded; the dynamic range of "real world" signals typically spans several orders of magnitude. Capturing these signals faithfully requires compressing this range, since the corresponding media have a limited dynamic range. The compact disc system with all its sophistication is still limited to a range of "only" around 100 dB. Photographic film lacks the latitude to capture the entire gamut of real world intensities, and requires careful exposure settings to capture the area weighted mean intensity of the photographed scene. The human eye also has an uncanny ability to adapt to the dynamic range of real world imagery, a process that the observer is rarely aware of.

A physically correct light transport simulation as implemented by a global illumination method will reproduce this phenomenon if the light source emissions are not bounded. Although all data types on a computer architecture have limited domains, a suitably large type should be chosen to represent radiance. The output device on which the image is to be displayed will typically have a smaller dynamic range; in order to display the image, its dynamic range must be compressed to fit within the device's range. For imagery, this process is known as tone mapping.

Tone mappings typically operate on luminance, the photometric equivalent of radiance, measured in candelas per square metre (cd/m²). Photometric terms differ from radiometric ones in that they are adjusted to the response of the human visual system, which is nonuniform over the visual frequency spectrum.

MRT has been extended to handle the high dynamic range imagery it produces in global illumination applications.
In addition to the standard PPM image file format (which is limited to 256 intensity levels per primary), it also supports the recent TIFF format with LogLuv encoding [Lar97], which preserves the dynamic range of the image. This allows an arbitrary tone mapping to be applied as a separate postprocessing step. The LogLuv encoding is based on the CIE Luv perceptually uniform colour space. The luminance L is encoded logarithmically; since the upper byte of the logarithm changes slowly within a scanline, the encoding is well suited for RLE compression, which is also included in the format specification.

The tone mapping implemented for postprocessing was devised by Ward Larson, Rushmeier, and Piatko [LRP97]. It is based on a histogram adjustment technique using local brightness averages of pixel clusters within 1° angles of the visual field (this requires the horizontal and vertical fields of view used by the camera in the rendering). These averages simulate foveal adaptation levels. The brightness is the logarithm of the luminance taken from the image, since the human eye's perception is logarithmic rather than linear. Given the minimum and maximum luminance of the output device, the histogram adjustment then maps the luminance values into this range.


Furthermore, the method can model limitations of the human visual system such as glare (scattering within the lens), as well as reduction of visual acuity and colour sensitivity at low luminance levels. However, these were not implemented.
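The core idea of histogram adjustment can be illustrated in a greatly simplified form. The sketch below omits the foveal clustering, ceiling enforcement, glare, and acuity terms of the actual operator, and assumes the input luminances vary; it merely maps each log luminance through the normalized cumulative histogram into the display's log-luminance range:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Simplified histogram-adjustment sketch (illustrative only): build a
// histogram of log luminances and map each value through the cumulative
// distribution into the display range [ldMin, ldMax].
std::vector<double> toneMap(const std::vector<double>& lum,
                            double ldMin, double ldMax, int bins = 64)
{
    std::vector<double> b(lum.size());
    double bMin = 1e300, bMax = -1e300;
    for (size_t i = 0; i < lum.size(); ++i) {
        b[i] = std::log(lum[i]);               // brightness = log luminance
        bMin = std::min(bMin, b[i]);
        bMax = std::max(bMax, b[i]);
    }
    std::vector<double> hist(bins, 0.0);
    for (double v : b) {
        int bin = std::min(bins - 1, (int)((v - bMin) / (bMax - bMin) * bins));
        hist[bin] += 1.0;
    }
    std::vector<double> cdf(bins + 1, 0.0);    // cumulative histogram in [0,1]
    for (int i = 0; i < bins; ++i) cdf[i + 1] = cdf[i] + hist[i] / lum.size();

    std::vector<double> out(lum.size());
    for (size_t i = 0; i < lum.size(); ++i) {
        int bin = std::min(bins - 1, (int)((b[i] - bMin) / (bMax - bMin) * bins));
        double t = cdf[bin + 1];               // fraction of pixels at or below
        // Map into the display's log-luminance range.
        out[i] = std::exp(std::log(ldMin) + t * (std::log(ldMax) - std::log(ldMin)));
    }
    return out;
}
```

The full operator additionally clamps the histogram so that contrast is never exaggerated beyond what a linear mapping would produce, which this sketch does not do.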

[Class diagram: control flow via rayTrace(), getRay(), incidentLight(), intersect(), checkIntersect(), directLight(), brdf(), btdf(), importanceSample(), irradiance(), and findNearestNeighbor() among the classes t_Image, t_Camera, t_IllumScene, t_Scene, t_Object, t_Shader, t_SurfaceShader, t_SrfIlluminator, t_RTSrfIlluminator, t_PhotonMap, the t_Light hierarchy (t_SpotLight, t_DistLight, t_DistSpotLight, t_AttenuatedLight, t_AttenuatedDistLight, t_AttenuatedDistSpotLight, t_LightObject), t_Reflector, t_IrradianceCache, and t_KDTree.]

Figure 7.1: Extract of the MRT architecture. Red arrows represent flow control, green arrows represent class derivation.

Chapter 8

Results

Yes, in this chapter we finally get to see some results! A series of test images is presented along with their statistics. Furthermore, the parametrization and its effects on image quality and performance are discussed.

In practice, the shadow photon scheme did not live up to expectations. While acceptable results can be achieved with one or two light sources, the shadow quality deteriorates rapidly as the number of light sources increases. For realistic scenes containing hundreds or thousands of light sources, an inordinately high number of shadow photons would be required, severely degrading performance; in this case the nearest neighbour search time typically exceeds the time to sample the light sources conventionally, precluding any practical application. For this reason shadow photons were eventually omitted from the implementation and conventional light source sampling is used instead.

All images were generated as LogLuv encoded TIFFs and subsequently tone mapped using the operator discussed in section 7.4. The light sources used are all of type t_LightObject, with the exception of figure 8.2. The maximum recursion level was 1 for all renderings, and the number of global and caustic photons to gather in a nearest neighbour search was 50, unless noted otherwise.

8.1 Test Images

Figure 8.2 is a very simple test of diffuse importance sampling with global photons using a point spotlight source. Everything outside the spotlight's emission volume is illuminated indirectly, bearing a tinge from the directly illuminated red screen. Though physically implausible, the simplicity of point light sources makes for good performance. The unsightly banding artifacts are a side effect of the tone mapping; these appear when mapping intensities of low population in the histogram (namely those at the upper and lower extremes) or when mapping CIE colours for which there is no RGB counterpart.

Figure 8.3 depicts the evergreen of global illumination test scenes, the Cornell Box. The planar light source illuminates only the space below it; the ceiling is entirely indirectly illuminated and bears traces of colour bleeding near the red and blue walls.

Figure 8.4 is a museum scene consisting of a set of small light sources illuminating the ”exhibits” and some large planar ceiling lights. The large windows in the corner (which are also planar light sources) admit sunlight, which is characterized by a higher energy and warmer tone than the artificial lighting. The ceiling is entirely indirectly illuminated except in the vicinity of the windows, displaying colour bleeding effects from the brown floor. Closer inspection of the ceiling reveals a fundamental flaw: the ceiling lights cast no shadows on the ceiling! This is attributed to the irradiance cache's tendency to thin out shadows resulting from indirect illumination. The severity of this negative side effect is subject to the cache's quality setting. Rendering the image without the irradiance cache would actually produce the shadows, but would also leave the noise which the irradiance cache conveniently filters out, and result in exorbitant rendering times. This scene showcases a particularly nasty situation for the tone mapping operator: the high contrast between the bright sunlight at the windows and the weak artificial lighting within the room requires a tone mapping operator which preserves the visibility of both the interior and the windows. A linear mapping would overexpose the area around the windows or underexpose the interior. The specular surface on the statue is obtained with perfect specular rays.

Figures 8.5 and 8.6 are scenes consisting of a closed room, thus representing a sealed system in which photons are only terminated by absorption (not leakage). The indirect illumination through the lampshades dominates in both scenes. These renderings include importance sampling of the specular indirect component (50 samples). Figure 8.5 in particular has a high specular content (the marble floor and ceiling). Here too, the irradiance cache tends to thin out the shadow cast by the frame leaning against the wall.

Figure 8.7 is a randomly generated maze [Ahl78] cursed with nonuniform indirect illumination, as light undergoes complex reflection within the paths. This requires a high number of global photons in the map and a higher quality setting for the irradiance cache. The latter bogs the rendering down substantially. The fact that the maze is inefficiently modelled by a naive algorithm which assembles small wall segments of uniform size doesn't help performance either.

Figure 8.8 depicts diffuse indirect illumination at a swimming pool. A number of planar light sources populate the periphery of the ceiling, while the pool itself is illuminated by underwater lamps. The ceiling bears blue colour bleeding from the pool tiles. The bump mapped water surface is the only specular component in the scene, and is rendered with perfect specular sampling.

The remaining figures depict the combined effects of global and caustic photons. In figure 8.9 the camera is submerged in the pool.
The tiles bear caustics formed by reflection of the pool lights and refraction of the ceiling lights at the water surface.

Figure 8.10 is a classic example of caustics: a polished metal ring focuses light into a characteristic cardioid pattern. This image also involves global photons, resulting in weak illumination of the ring's inner surface. The specular component of the ring is rendered with perfect specular rays. 200 caustic photons were used per radiance estimate, resulting in exaggerated rendering times even though the scene geometry is actually very simple.

The illumination in figure 8.11 is dominated by the chandelier consisting of 72 crystal ornaments and 4 light sources, producing the caustics on the walls. Caustics resulting from reflection on the water are also visible in the fountain. The fountain is illuminated indirectly for the most part by the dome from which the chandelier is suspended (by some invisible means which the designer couldn't be bothered with). The indirect illumination seeps into the corridors leading off to the left and right and drops off rapidly. Importance sampling is applied to the specular surfaces (50 samples), which dominate here and are therefore responsible for the agonizing rendering times.

8.2 Statistics

Table 8.1 lists the statistics for the test images. Profiling the implementation reveals that it spends a surprisingly small percentage of the total rendering time searching for photons; the sampling of area light sources for their direct contributions and the intersection routines are in fact the dominant factors. This is mainly attributed to the use of the irradiance cache – without it the rendering times would take on harrowing proportions! This happens to be the case when the specular reflections of the indirect components are importance sampled using t_Reflector::specularImportanceSample, since irradiance caching cannot be applied there. The rendering times are therefore given with and without specular components (if any); the poignant difference emphasizes the efficiency of the irradiance caching scheme for the diffuse component. This disparity is particularly evident with the atrium scene (figure 8.11), whose rendering time borders on the ridiculous! All listed rendering times are for a Silicon Graphics Origin 200 system running IRIX64 and include the light pass and irradiance cache preprocessing.


It is possible to obtain acceptable results with more conservative parameter settings than the ones used for these images, which can effectively cut down on the rendering time.

Scene                               Resolution  Global Photons  Caustic Photons  Time
Point Spotlight                     800 × 800        23426            –            50 min
Cornell Box                         800 × 800         8704            –            78 min
Museum                              800 × 800        34104            –           214 min
Gallery 1 (diffuse only)            800 × 600        35800            –            46 min
Gallery 1 (diffuse & specular)      800 × 600        35882            –           315 min
Gallery 2 (diffuse only)            800 × 600        37669            –            49 min
Gallery 2 (diffuse & specular)      800 × 600        37737            –           320 min
Maze                                800 × 800       147858            –            38 min
Pool Exterior (diffuse only)        800 × 800        21929            –           136 min
Pool Exterior (diffuse & specular)  800 × 800        22029            –           181 min
Pool Interior (diffuse only)        800 × 800        15048         951764         250 min
Pool Interior (diffuse & specular)  800 × 800        15106         952013         284 min
Metal Ring (diffuse only)           800 × 800         8026         183229         381 min
Metal Ring (diffuse & specular)     800 × 800         7862         183087         398 min
Atrium (diffuse only)               800 × 800        42179         487458         585 min
Atrium (diffuse & specular)         800 × 800        42152         487335        3444 min

Table 8.1: Rendering statistics.

8.3 Parametrization

The effect of the parameters can be quantified objectively by the mean square error (MSE), i.e. the variance of the difference between importance sampled renderings and a high quality reference image. This also serves as an indication of the noise in the renderings. A series of such comparisons additionally provides a measure of the convergence speed of the approximation. This was done with a reference image of the Cornell Box obtained with 1024 stratified samples at one recursion level. The influence of the parameters numGlobal, totalGlobal, and numDiffuse (see appendix A) can then be assessed graphically. The resulting graph should be taken with a grain of salt, as it depends on the scene at hand: except for the corners and edges, the Cornell Box actually has fairly uniform indirect illumination. Figure 8.1 shows the effect of numDiffuse, the number of diffuse importance samples, for three combinations of numGlobal, the number of global photons to gather, and totalGlobal, the number of global photons emitted from the light sources. The MSE reduction is evident as numGlobal and totalGlobal increase; these two parameters are coupled and must therefore be adapted in relation to one another. In general, increasing the number of emitted photons requires increasing the number of gathered photons, and vice versa. The MSE reduction from increasing numDiffuse clearly levels off, suggesting the futility of an excessively high number of importance samples.
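As a concrete illustration of the error metric used here, the following is a minimal sketch of a per-pixel MSE between a test rendering and the reference image, both given as flat luminance arrays. The helper function is hypothetical and not part of MRT:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Mean square error between a test rendering and a high quality reference
// image, both stored as flat arrays of per-pixel luminance values.
double meanSquareError(const std::vector<double>& test,
                       const std::vector<double>& reference)
{
    assert(test.size() == reference.size() && !test.empty());
    double sum = 0.0;
    for (std::size_t i = 0; i < test.size(); ++i) {
        const double d = test[i] - reference[i];
        sum += d * d;      // accumulate squared per-pixel difference
    }
    return sum / static_cast<double>(test.size());
}
```

Plotting this value against a parameter such as numDiffuse, as done for figure 8.1, then visualizes the convergence behaviour.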


As with numGlobal and totalGlobal, the caustic equivalents numCaustic and totalCaustic are also coupled. Poor combinations of these two are more evident than with global photons, since caustics are visualized directly; too low a value for numCaustic produces noise, and too high a value produces blurring in the caustics. The number of regions in the importance sampling grid, impPhiIntervals and impThetaIntervals, should depend on numGlobal: as the number of gathered photons increases, so should the number of discretized incident directions in the hemisphere. If regions of zero flux in the grid are ignored, numGlobal must at least equal the number of regions.

Figure 8.1: Mean square error graph depicting the effect of parameters numGlobal, totalGlobal, and numDiffuse. The MSE is plotted against numDiffuse (10–100) for three parameter combinations: numGlobal = 10 with totalGlobal = 5000, numGlobal = 20 with totalGlobal = 10000, and numGlobal = 50 with totalGlobal = 20000.


Figure 8.2: Colour bleeding with point light source.


Figure 8.3: Cornell Box.


Figure 8.4: Museum scene, designed by Stephan Schäfer. Original textures desecrated by yours truly.


Figure 8.5: Gallery scene #1.


Figure 8.6: Gallery scene #2.


Figure 8.7: Maze scene.


Figure 8.8: Pool exterior.


Figure 8.9: Pool interior.


Figure 8.10: Metal ring caustic.


Figure 8.11: Atrium scene.

Chapter 9

Conclusion and Future Work

A global illumination method based on light particle (photon) transport has been presented, at the heart of which lies a spatial data structure used to store the photons. The efficiency of this structure dictates the overall performance of the photon map. The kd-tree presented in section 7.3.2 turns out to be an appropriate choice. The two passes (light pass and visualization) correspond to the light transport paths originating at the light sources and at the eye point, respectively. Locating photons via nearest neighbour search connects these paths.

The embedding of the photon map module proved to be straightforward once some peripheral adaptations of MRT were made to better suit global illumination, in particular with respect to light sources. The core of MRT itself required no modification. The implementation of the photon map class t_PhotonMap was however initially marred by a number of teething problems which were ironed out gradually (very gradually).

When properly used in conjunction with the irradiance cache, the indirect illumination components obtained from the photon map account for only a small fraction of the total rendering time; this is far outweighed by the direct components obtained from light source sampling when using area lights, and by importance sampling of indirect specular components. The efficiency of the photon map and the quality of its renderings can be further improved by applying the optimizations described in section 7.3.3. As far as performance is concerned, the photon map is however no match for hardware-assisted radiosity solutions when simple geometries with no specular components are at hand. In this context, the photon map's strength is its general applicability by virtue of its independence from the scene geometry and its ability to handle a broad range of surface characteristics.

The demise of the shadow photon scheme was brought about by inefficiency and poor shadow quality.
The latter couldn't be rectified even by distributing very large numbers of shadow photons when more than one light source was used, a configuration which also turned out to be slower than naively sampling the light sources. The shadow photon map has by no means lived up to expectations. The parametrization of the photon map is not straightforward; finding optimal parameters for a scene is a tedious process. Rules of thumb exist however:

- numGlobal = numCaustic = 50…80,
- totalGlobal = 20000…50000,
- totalCaustic = 10…20 × totalGlobal,
- 10% of the total number of primary rays for irradiance cache preprocessing.

These are good starting values which can then be refined. The interaction between the global photon map and the irradiance cache is quite subtle; finding an appropriate quality setting for the irradiance cache can become a sobering exercise in trial and error. Other options rarely need fiddling.
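Following these rules of thumb, a starting configuration might be passed to the renderer like this. This is a hypothetical invocation: the binary name mrt and the scene file are placeholders; the variable names are those listed in appendix A, and env(1) is used because the names contain dots and are therefore not valid shell identifiers for export:

```shell
# Rule-of-thumb starting values (see above); refine per scene.
#   numGlobal = numCaustic = 50...80       -> 60
#   totalGlobal = 20000...50000            -> 30000
#   totalCaustic = 10...20 x totalGlobal   -> 450000
env 'photonMap.numGlobal=60' \
    'photonMap.numCaustic=60' \
    'photonMap.totalGlobal=30000' \
    'photonMap.totalCaustic=450000' \
    mrt scene.mrt
```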


One issue worth investigating is parallelization. A parallel version of MRT exists (DMRT) [Zen97], based on a client-server model which partitions the image into strips and distributes them among (remote) clients. The server then joins the subimages produced by the clients upon their completion. This method could in principle be used for the photon map too, but there remains the matter of handling the kd-trees. A sound approach would be to distribute photons in parallel, resulting in a kd-tree per client, which the server then has to somehow unify. Once this is done, it would be more feasible for each client to keep a local copy of the entire unified kd-tree rather than to grab individual nodes from the server during the gathering step; the net traffic would otherwise be immense. This is all the more suitable since the kd-tree is static after initialization, so no updates among the clients need be considered.

Another issue is the applicability of the photon map to volume objects. Figure 8.9 would benefit from this: a volume object modelling underwater haze would eliminate some of its obvious sterility. The current implementation ignores volume objects, and MRT's illuminator class hierarchy is not adapted to them. The major difficulty lies in the lack of surface intersections. It is conceivable that isosurfaces may work with the photon map (photons could be created at high density gradients), but volumes which model noise and turbulence would require a uniform distribution of photons within the entire volume. In the latter case a photon would simply terminate after travelling a certain distance within the volume, and then dangle in space. The photon must also be subjected to attenuation by the volume as it propagates. The lack of surface normals would of course eliminate the need to test these while gathering photons.
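The server-side unification step could be sketched as follows. This is a minimal sketch under the stated assumptions, not actual DMRT/MRT code: the Photon struct and function names are illustrative. Since a balanced kd-tree can be rebuilt from a flat photon array, the server only needs to concatenate the per-client arrays and rebalance once before shipping the unified array back to the clients:

```cpp
#include <cassert>
#include <vector>

// Illustrative photon record; flux, incident direction, etc. omitted.
struct Photon { float pos[3]; };

// Concatenate the photon arrays returned by the clients into one array.
// A single rebalancing pass (stubbed out here) would then yield the
// unified kd-tree, which is static for the rest of the rendering.
std::vector<Photon> unifyClientPhotons(
    const std::vector<std::vector<Photon>>& perClient)
{
    std::vector<Photon> unified;
    for (const auto& photons : perClient)
        unified.insert(unified.end(), photons.begin(), photons.end());
    // balanceKDTree(unified);  // hypothetical rebalancing step
    return unified;
}
```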

Appendix A

Parameters for t_PhotonMap

All parameters specific to class t_PhotonMap are taken from the environment. General parameters which are applicable to other illuminators may also be set via command line options. Some variables are adapted to the current recursion level; in this case they are specified for recursion level 0. The following environment variables govern the behaviour of class t_PhotonMap:

- photonMap.numCaustic: The number of caustic photons to search for in a nearest neighbour query during visualization at recursion level 0. A value of 0 (the default) disables caustics.
- photonMap.totalCaustic: The total number of caustic photons to emit into the scene during the light pass, distributed among the light sources according to their relative importance. The actual number inserted into the caustic photon map results from the particle transport simulation. A value of 0 disables caustics. The default is 250000.
- photonMap.causticFilter: The cone filter constant (k) applied to caustics during the radiance estimate. The default is 1.
- photonMap.numGlobal: The number of global photons to search for in a nearest neighbour query during visualization at recursion level 0. A value of 0 disables diffuse indirect illumination. The default is 50.
- photonMap.totalGlobal: The total number of global photons to emit into the scene during the light pass. This number is distributed among the light sources according to their relative importance. The actual number of photons inserted into the global photon map results from the particle transport simulation (dependent on the likelihood of absorption and leakage). A value of 0 disables diffuse indirect illumination. The default is 25000.
- photonMap.numDiffuse: The number of diffuse importance samples based on global photon flux at recursion level 0. The default is 50.
- photonMap.impThetaIntervals, photonMap.impPhiIntervals: The number of regions in the sampling grid (which discretizes the polar coordinate space for incident photon directions) used when importance sampling based on global photon flux. Regions of zero flux are ignored, requiring that photonMap.impThetaIntervals × photonMap.impPhiIntervals ≤ photonMap.numGlobal. The number of regions is adapted automatically if this is not the case. The default is 3 × 14.
- photonMap.minDist: Define this to supply the irradiance cache with the minimum distance to the sampled objects during importance sampling of the diffuse component. If undefined, the harmonic mean of the distances to the sampled objects is passed instead. Undefined by default.
- photonMap.specularSampling: Specifies how the specular indirect component is sampled and may be one of the following (case insensitive) strings:
  - none: why bother?
  - perfect: trace perfectly reflected and/or transmitted ray(s) as in standard ray tracing.
  - importance: importance sampling based on the specular components of the BRDF and BTDF.
  The default is none.
- photonMap.numSpecular: The number of importance samples for the specular indirect component at recursion level 0 when photonMap.specularSampling = importance. The default is 25.
- maxLightLevel (command line option -l): The maximum light path length (number of reflections/transmissions) for a global and caustic photon during the light pass. The minimum is 1. See class t_MRTControl for the default setting.
- maxEyeLevel (command line option -e): The maximum recursion level for diffuse importance sampling and specular sampling (if photonMap.specularSampling ≠ none). A radiance estimate is used at or beyond this level for the diffuse component, and the specular component is omitted. See class t_MRTControl for the default setting.
- photonMap.minRadiance: The minimum discernible radiance (Lmin) for the output image. This depends on the output device. It defines the maximum search distance for photons during a radiance estimate under the assumption that no radiance can be discerned in the output if the photons cover a larger area. The default is 0.004 W/(sr·m²).
- photonMap.reradiate: Define this to reradiate caustic photons as global photons after diffuse reflection/transmission. This may substantially increase the number of global photons. Undefined by default.
- photonMap.distribQuality: The BRep quality used for all objects in the scene during photon distribution. It may be necessary to force tesselation of large surfaces (both curved and planar) to improve the distribution. The quality setting is an integer greater than or equal to 0 (the default).
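One plausible reading of how photonMap.minRadiance bounds the photon search, sketched below as a hypothetical helper rather than MRT source: a radiance estimate spreads the gathered flux over a disc of area πr², so once r exceeds sqrt(flux / (π · Lmin)) the estimate falls below the minimum discernible radiance Lmin and a larger search radius cannot contribute visibly.

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Maximum useful photon search radius: beyond this radius, spreading
// the gathered flux over the disc pi*r^2 yields a radiance estimate
// below the minimum discernible radiance Lmin.
double maxSearchRadius(double gatheredFlux, double minRadiance)
{
    return std::sqrt(gatheredFlux / (kPi * minRadiance));
}
```

With the default Lmin = 0.004 W/(sr·m²), brighter regions (more gathered flux) permit a larger search radius, while dim regions cut the search off early.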

Bibliography

[Ahl78] David H. Ahl, editor. Basic Computer Games, page 3. Creative Computing Press, Morristown, NJ, 1978.

[AK90] James Arvo and David Kirk. Particle transport and image synthesis. In Computer Graphics (Proc. Siggraph '90), volume 24, pages 63–66, August 1990.

[Arv86] James Arvo. Backward ray tracing. In Siggraph '86 Developments in Ray Tracing Course Notes, volume 12, pages 259–263. ACM Siggraph, August 1986.

[Ben75] Jon Louis Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, September 1975.

[Ben79] Jon Louis Bentley. Multidimensional binary search trees in database applications. IEEE Transactions on Software Engineering, SE-5(4):333–340, July 1979.

[Ben97] H. Bendels. Eine Topologische Datenstruktur und ihre Anwendung im 3D-Graphiksystem MRT. Diploma thesis, University of Bonn, Department of Computer Science, March 1997.

[BF79] Jon Louis Bentley and Jerome H. Friedman. Data structures for range searching. ACM Computing Surveys, 11(4):397–409, December 1979.

[BSS94] Philippe Blasi, Bertrand Le Saëc, and Christophe Schlick. An importance driven Monte Carlo solution to the global illumination problem. In Proc. 5th Eurographics Workshop on Rendering, Darmstadt, 1994.

[BT75] Bui Tuong Phong. Illumination for computer generated pictures. Communications of the ACM, 18(6):311–317, June 1975.

[FBF77] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, September 1977.

[Fel96a] Dieter W. Fellner. Extensible image synthesis. In Peter Wisskirchen, editor, Object-Oriented and Mixed Programming Paradigms, Focus on Computer Graphics, pages 7–21. Springer Verlag, February 1996.

[Fel96b] Dieter W. Fellner. MRT: Design issues and brief reference. Available at: http://www.graphics.uni-bonn.de/CompGraph.ResearchProjects.MRT.papers, January 1996.

[Gla95] Andrew S. Glassner. Principles of Digital Image Synthesis. Morgan Kaufmann, San Francisco, 1995.

[GTGB84] Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg, and Bennett Battaile. Modeling the interaction of light between diffuse surfaces. In Computer Graphics (Proc. Siggraph '84), volume 18, pages 213–222, July 1984.

[Hec91] Paul Heckbert. Simulating Global Illumination Using Adaptive Meshing. PhD thesis, University of California, Berkeley, January 1991.

[HG86] Eric A. Haines and Donald P. Greenberg. The light buffer: A shadow-testing accelerator. IEEE Computer Graphics and Applications, 6(9), 1986.

[HH64] J. M. Hammersley and D. C. Handscomb. Monte Carlo Methods. Methuen & Co., London, 1964.

[ICG86] David S. Immel, Michael F. Cohen, and Donald P. Greenberg. A radiosity method for non-diffuse environments. In Computer Graphics (Proc. Siggraph '86), volume 20, pages 133–142, August 1986.

[IES86] IES. Nomenclature and definitions for illuminating engineering. Technical Report RP-16-1986, ANSI/IES, New York, 1986.

[JC95a] Henrik Wann Jensen and Niels Jørgen Christensen. Efficiently rendering shadows using the photon map. In Proc. Compugraphics '95, pages 285–291, 1995.

[JC95b] Henrik Wann Jensen and Niels Jørgen Christensen. Photon maps in bidirectional Monte Carlo ray tracing of complex objects. Computers and Graphics, 19(2):215–224, 1995.

[Jen95] Henrik Wann Jensen. Importance driven path tracing using the photon map. In P. M. Hanrahan and W. Purgathofer, editors, Rendering Techniques '95, pages 326–335. Springer Verlag, 1995.

[Jen96] Henrik Wann Jensen. Global illumination using photon maps. In Proc. 7th Eurographics Workshop on Rendering, Porto, pages 22–31, June 1996.

[Jen97] Henrik Wann Jensen. Rendering caustics on non-Lambertian surfaces. Computer Graphics Forum, 16(1):57–64, March 1997.

[Kaj86] James T. Kajiya. The rendering equation. In Computer Graphics (Proc. Siggraph '86), volume 20, pages 143–150, August 1986.

[KW86] Malvin H. Kalos and Paula A. Whitlock. Monte Carlo Methods, Volume I: Basics. John Wiley & Sons, New York, 1986.

[Laf96] Eric P. Lafortune. Mathematical Models and Monte Carlo Algorithms for Physically Based Rendering. PhD thesis, K.U. Leuven, February 1996. Available at: http://www.cs.kuleuven.ac.be/cwis/research/graphics/ERICL/thesis/index.h

[Lan91] Brigitta Lange. The simulation of radiant light transfer with stochastic ray tracing. In Proc. 2nd Eurographics Workshop on Rendering, Barcelona, 1991.

[Lar97] Gregory Ward Larson. LogLuv encoding for TIFF images. Available at: http://www.sgi.com/Technology/pixformat/tiffluv.html, September 1997.

[LRP97] Gregory Ward Larson, Holly Rushmeier, and Christine Piatko. A visibility matching tone reproduction operator for high dynamic range scenes. Technical Report LBNL 39882, Lawrence Berkeley National Laboratory, January 1997. Available at: http://radsite.lbl.gov/radiance/papers.

[LW94] Eric P. Lafortune and Yves D. Willems. Using the modified Phong reflectance model for physically based rendering. Technical Report CW 197, Department of Computing Science, K.U. Leuven, November 1994.

[Mae96] Rudi Maelbrancke. Linear and Spatial Data Structures: New Techniques for Analysis and Optimization. PhD thesis, Department of Computer Science, K.U. Leuven, March 1996. Available at: http://www.cs.kuleuven.ac.be/~rudim/pub/WWW/reports.html.

[Mar96] Steve Margetts. Nearest neighbours using kd-trees. Available at: http://www.mathsource.com/cgi-bin/MathSource/Applications/ComputerScienc, December 1996.

[Mül97] G. Müller. Beschleunigung Strahlbasierter Rendering-Algorithmen. Diploma thesis, University of Bonn, Department of Computer Science, May 1997.

[NRH+77] F. E. Nicodemus, J. C. Richmond, J. J. Hsia, I. W. Ginsberg, and T. Limperis. Geometrical considerations and nomenclature for reflectance. Technical Report 160, National Bureau of Standards, October 1977.

[RT90] Holly E. Rushmeier and Kenneth E. Torrance. Extending the radiosity method to include specularly reflecting and translucent materials. ACM Transactions on Graphics, 9(1):1–27, January 1990.

[Rub81] Reuven Y. Rubinstein. Simulation and the Monte Carlo Method. John Wiley & Sons, 1981.

[SAWG91] François X. Sillion, James R. Arvo, Stephen H. Westin, and Donald P. Greenberg. A global illumination solution for general reflectance distributions. In Computer Graphics (Proc. Siggraph '91), volume 25, pages 187–196, July 1991.

[Sch93] Christophe Schlick. A customizable reflectance model for everyday rendering. In Proc. 4th Eurographics Workshop on Rendering, Paris, pages 73–84, 1993.

[Sch96] Roland Schregle. t_DistLight: A distributed light source class for the minimal ray tracer. Available at: http://www.graphics.uni-bonn.de/CompGraph.ResearchProjects.MRT.papers, September 1996.

[SH92] Robert Siegel and John R. Howell. Thermal Radiation Heat Transfer. Hemisphere Publishing, Bristol, PA, third edition, 1992.

[Shi92] Peter Shirley. Nonuniform random point sets via warping. In David Kirk, editor, Graphics Gems III, pages 80–83. Academic Press, 1992.

[SP89] François Sillion and Claude Puech. A general two-pass method integrating specular and diffuse reflection. In Computer Graphics (Proc. Siggraph '89), volume 23, pages 335–344, July 1989.

[TT54] H. Trotter and J. Tukey. Conditional Monte Carlo for normal samples. In Symposium on Monte Carlo Methods, University of Florida, pages 64–79, 1954.

[WCG87] John R. Wallace, Michael F. Cohen, and Donald P. Greenberg. A two-pass solution to the rendering equation: A synthesis of ray tracing and radiosity methods. In Computer Graphics (Proc. Siggraph '87), volume 21, pages 311–320, July 1987.

[WH92] Gregory J. Ward and Paul S. Heckbert. Irradiance gradients. In Proc. 3rd Eurographics Workshop on Rendering, Bristol, pages 85–98, 1992.

[Whi80] Turner Whitted. An improved illumination model for shaded display. Communications of the ACM, 23(6):343–349, June 1980.

[WRC88] Gregory J. Ward, Francis M. Rubinstein, and Robert D. Clear. A ray tracing solution for diffuse interreflection. In Computer Graphics (Proc. Siggraph '88), volume 22, pages 85–92, August 1988.

[Zen97] Marco Zens. Paralleles Rendering. Diploma thesis, University of Bonn, Department of Computer Science, November 1997.
