International Journal of Thermal Sciences 50 (2011) 906–917


Evaluation of image reconstruction algorithms for non-destructive characterization of thermal interfaces

Hakan Erturk*
Department of Mechanical Engineering, Bogazici University, Bebek 34342 Istanbul, Turkey

Article history: Received 20 May 2010; Received in revised form 6 February 2011; Accepted 7 February 2011; Available online 10 March 2011

Abstract

Thermal interfaces are encountered in many thermal management applications, and interface materials are used to minimize the thermal contact resistance resulting from solid–solid contact. For opto-electronic devices the quality of the thermal interface is critical for removing the generated heat and maintaining proper thermal management. Defects in the thermal interface introduce additional thermal resistance in the thermal path and must be prevented, so detecting them becomes critical during assembly process development. Imaging techniques such as X-ray computerized tomography or scanning acoustic microscopy can be used, but they require expensive equipment and significant processing time. Thermal tomography in conjunction with IR thermometry can be used as a lower cost alternative to these techniques. The feasibility of thermal tomography for non-destructive characterization of thermal interfaces is presented by considering different image reconstruction algorithms. The algorithms considered are the iterative perturbation algorithm, the Levenberg–Marquardt algorithm and the regularized Newton–Gauss algorithm, and they were found to be capable of characterizing the thermal interface layer. © 2011 Elsevier Masson SAS. All rights reserved.

Keywords: Thermal tomography; Thermal interface; Fault detection; Non-destructive testing; Inverse problem; Regularization; Singular value decomposition

1. Introduction

Multi-layered objects are used in many thermal management applications, such as the thermal management of electronic or optical devices. Efficient cooling is necessary to prevent temperature related device failures, and obstructions in the thermal path acting as a thermal resistance must be avoided to achieve the desired cooling performance. When two solid layers are in contact, a thermal contact resistance results that depends on the pressure applied to hold the layers together and on the micro-scale roughness of the facing surfaces. An additional layer is often introduced as a filler to reduce the thermal contact resistance, in which case the quality of the interface layer becomes important. The interface layer can have defects due to various problems with the assembly process, such as the curing speed or the deposition technique and speed [1]. These defects must be identified so that the assembly process can be modified to achieve a high quality interface layer. Imaging and non-destructive characterization techniques such as computerized tomography (CT) using X-rays or scanning acoustic microscopy (SAM) can be used during assembly process development in order to identify the defects [2].

* Tel.: +90 212 359 7356; fax: +90 212 287 2456. E-mail address: [email protected]. 1290-0729/$ – see front matter © 2011 Elsevier Masson SAS. All rights reserved. doi:10.1016/j.ijthermalsci.2011.02.002

Nomenclature

Bi      Biot number
C       volumetric heat capacity (ρc) [J m⁻³ K⁻¹]
CN      condition number of a matrix
Er      absolute error between applied and predicted non-dimensional thermal conductivity
F(χ)    objective function to be minimized
h       convective heat transfer coefficient [W m⁻² K⁻¹]
J       Jacobian matrix
k       thermal conductivity [W m⁻¹ K⁻¹]
Lx      size of the structure along x-coordinate
Ly      size of the structure along y-coordinate
Lz      thickness of the structure (along z-coordinate)
Lz,A    thickness of layer-A
Lz,B    thickness of layers A and B
NB      number of sub-elements in layer-B
Ns      number of sensors
Nt      number of measurements taken at different times
Nu      number of unknown parameters
q″      heat flux [W cm⁻²]
R       residual vector
r       coordinate vector
rs,i    coordinate vector of i-th sensor
tm,j    time for j-th measurement
tCPU    time required for the computation
T       temperature [K]
T∞      ambient temperature [K]
Tmax    maximum allowable temperature for the system
U, V    orthogonal matrices
X       solution vector

Greek letters
χ       unknown vector, = [kB*, CB*]T
ε       absolute convergence criterion
Φ       inverse of the singular value matrix (= Σ⁻¹)
Γ       regularized form of Φ
λ       damping coefficient for the Levenberg–Marquardt algorithm
Ω       conditioning matrix for the Levenberg–Marquardt algorithm
Σ       diagonal matrix of singular values
σm      standard deviation of measurement error
σpi     uncertainty of the property, or the boundary condition pi
σS      uncertainty due to uncertainties in the properties of layers and applied boundary conditions
θ       non-dimensional temperature, θ = (T − T∞)/(Tmax − T∞)
θm      measurement operator; model based estimations of the measurements for a set of unknowns
Ψ       measurement vector; non-dimensional temperatures measured by sensors at measurement times

Subscripts and superscripts
A, B, C layers A, B or C
a       applied
av      average
e       estimated or predicted
i       i-th sensor
j       j-th time step
k       k-th variable
l       combined variable identifying sensor and measurement time, = i + (j − 1)Ns
m       measurement
o       reference quantity
p       p-th iteration
s       sensor
*       dimensionless quantity
′       exact solution of the problem

Tomography is a well-known characterization and imaging technique that is used in medical [3,4], materials science [5,6], and geophysical [7] applications. The image of the object of interest is reconstructed by estimating the spatial distribution of its properties from the response detected by sensors under an applied signal. Magnetic fields, electromagnetic waves or acoustic waves are used as the signal in different tomography applications such as magnetic resonance imaging [8], X-ray computerized tomography [9], optical tomography [5], and seismic or ultrasound tomography [10]. Tomography problems are inverse problems that are usually ill-posed, and special image reconstruction techniques must be used for their solution. In the majority of the tomography applications indicated above, optical or acoustic signals are used, and quantitative reconstruction algorithms that rely on the wave characteristics of the signal are employed [9]. In thermal tomography, a thermal signal that diffuses through the characterized object is used, and the characterization is based on measurements at discrete locations by temperature sensors or on a distribution measured over a region by infrared (IR) imaging. With the advent of un-cooled IR imaging systems relying on micro-bolometer technology, thermal tomography can provide a lower cost alternative to X-ray or acoustic wave based imaging technologies. However, the reconstruction algorithms used for optical or acoustic imaging, which rely on the wave characteristics of the signal, are not suitable for thermal tomography, as the applied signal is transmitted by diffusion [11]. As a result, thermal imaging has long been implemented for qualitative flaw detection used to complement other imaging techniques [2,12], and it has not been used for quantitative inspection of electronic packages. Three alternative approaches can be adopted to use IR imaging for quantitative imaging by thermal tomography: 1) new algorithms can be developed; 2) the problem can be adjusted so that it is formulated similarly to wave-propagation-like phenomena, where existing reconstruction algorithms can be used for the solution; 3) existing image reconstruction algorithms used for diffuse optical tomography, where the system is prone to diffusion, can be adapted. The first approach is used in Ref. [11], where the algebraic reconstruction technique (ART) explained in Ref. [13] is adapted successfully to a thermal diffusion problem with point sources. The second approach is adopted in Ref. [14], where the transmission of heat is modeled similarly to wave propagation using Fourier's diffraction theory for a system subject to a harmonically changing heat load; the wave based reconstruction algorithms can then be used successfully. The application of this method is limited to harmonically applied signals. A widely preferred approach for the solution of the inverse problem in optical tomography is to use gradient based optimization techniques or a perturbation approach [4,15–18]. A similar approach has also been adapted for thermal tomography. A Laplacian operator is used to regularize the tomography problem defined as a linear integral equation in Ref. [19], where the problem

is then solved by the conjugate gradient method. The temperature dependent properties of a system are estimated using measurements at the system boundary in Refs. [20–22]. The properties are represented as polynomials using basis functions with continuous first derivatives, and the unknown coefficients of the polynomials are calculated iteratively in Refs. [20,22]. A perturbation approach is used in Ref. [21] and a modified Newton–Raphson method is used in Ref. [22]. Similar to Ref. [21], the problem in Ref. [22] is formulated as a parameter estimation problem where the solution is achieved by an iterative perturbation approach. Other similar applications can be found in Refs. [23,24], where biological tissue is characterized.

Quantitative characterization of thermal interfaces using thermal tomography is investigated in this study. More specifically, the defects in the thermal interface layer of a multi-layered system are identified using thermal tomography by estimating the variation in the thermal property distribution of the thermal interface. The problem is formulated as a least squares minimization problem, and three image reconstruction algorithms that use first gradient information, namely the iterative perturbation approach, the Levenberg–Marquardt algorithm and the regularized Newton–Gauss algorithm, are employed for the solution.

2. Problem formulation

The package of an electronic system can be represented by the three layer system sketched in Fig. 1.

The heat is generated at the base of layer-A, which represents the electronic circuit, and it passes through layers A–C, where it is released to the environment. In a thermal management application, the goal is to control the thermal conditions of layer-A using layer-C as a heat spreader or sink, while layer-B serves as the thermal interface layer used to reduce the thermal contact resistance. In a high heat flux application, the quality of the thermal interface layer becomes critical, and any small defect in the interface layer will result in overheating of layer-A. The objective of using thermal tomography is to quantitatively identify the defects so that the necessary changes in the assembly process can be considered to prevent their formation.

Fig. 1. Sketch of the problem (shown in two dimensions, not to scale).

2.1. Direct problem

Considering the system shown in Fig. 1, the heat transfer through the multi-layer system is represented by the non-dimensional heat conduction equation given as

$$ C^*(\mathbf{r}^*)\,\frac{\partial \theta(\mathbf{r}^*,t^*)}{\partial t^*} = \nabla^* \cdot \left[ k^*(\mathbf{r}^*)\, \nabla^* \theta(\mathbf{r}^*,t^*) \right] \qquad (1) $$

where θ = (T − T∞)/(Tmax − T∞), k* = k/ko, C* = C/Co, r* = [x*, y*, z*], x* = x/Lo, y* = y/Lo, z* = z/Lo, ∇* = Lo∇, t* = αot/Lo², and αo = ko/Co. Eq. (1) is subject to the initial condition

$$ \theta(\mathbf{r}^*, 0) = 0 \qquad (2) $$

and the boundary conditions

$$ -k^*(\mathbf{r}^*)\,\frac{\partial \theta}{\partial z^*}\bigg|_{z^*=0} = q''^{*}(x^*,y^*,t^*), \quad 0 \le x^* \le L_x^*,\; 0 \le y^* \le L_y^* \qquad (3\text{-a}) $$

$$ \frac{\partial \theta}{\partial x^*}\bigg|_{x^*=0,\,L_x^*} = 0, \quad 0 \le z^* \le L_z^*,\; 0 \le y^* \le L_y^* \qquad (3\text{-b}) $$

$$ \frac{\partial \theta}{\partial y^*}\bigg|_{y^*=0,\,L_y^*} = 0, \quad 0 \le z^* \le L_z^*,\; 0 \le x^* \le L_x^* \qquad (3\text{-c}) $$

$$ -k^*(\mathbf{r}^*)\,\frac{\partial \theta}{\partial z^*}\bigg|_{z^*=L_{z,C}^*} = Bi_o\,\theta(\mathbf{r}^*,t^*), \quad 0 \le x^* \le L_x^*,\; 0 \le y^* \le L_y^* \qquad (3\text{-d}) $$

where q″*(x*, y*, t*) = q″(x, y, t)/qo″ is the dimensionless transient power map, qo″ = ko(Tmax − T∞)/Lo is the reference heat flux, and Bio = hLo/ko is the reference Biot number. The interface conditions are

$$ k_i^*(\mathbf{r}^*)\,\frac{\partial \theta}{\partial z^*}\bigg|_{z^*=L_{z,i}^{*-}} = k_j^*(\mathbf{r}^*)\,\frac{\partial \theta}{\partial z^*}\bigg|_{z^*=L_{z,i}^{*+}} \qquad (4\text{-a}) $$

$$ \theta\!\left(x^*, y^*, L_{z,i}^{*-}, t^*\right) = \theta\!\left(x^*, y^*, L_{z,i}^{*+}, t^*\right) \qquad (4\text{-b}) $$

where the (i, j) pair is either (layer-A, layer-B) or (layer-B, layer-C). Eqs. (4-a,b) are valid in the range 0 ≤ x* ≤ Lx*, 0 ≤ y* ≤ Ly*, and Lz,i*−, Lz,i*+ denote values approaching Lz,i* from the +z* and −z* directions, respectively. All the properties in the system are known for the direct problem. For a given power map and convection boundary condition (Eqs. (3-a,d)), the solution yields the temperature distribution in the system. The problem can be solved numerically using any appropriate numerical solution technique [25]; an implicit finite difference solution is used in this study. For a given set of thermal properties for layer-B, χ = [kB*, CB*]T with kB* = [k1*, k2*, ..., kNB*]T and CB* = [C1*, C2*, ..., CNB*]T, the vectors of non-dimensional thermal conductivity and volumetric heat capacity of the nodes or elements of layer-B, a measurement operator can be defined as

$$ \boldsymbol{\theta}_m(\boldsymbol{\chi}) = \theta\!\left(\mathbf{r}_s^*, \mathbf{t}_m^*\right) \qquad (5) $$

representing the temperature estimated by the solution of the direct problem at the sensor locations, rs* = [rs,1*, rs,2*, ..., rs,Ns*]T, rs,i* = [xs,i*, ys,i*, zs,i*], i = 1, ..., Ns, for the measurement times tm* = [tm,1*, tm,2*, ..., tm,Nt*]T. θm(χ) is a vector represented as θm(χ) = [θm,1(χ), θm,2(χ), ..., θm,NsNt(χ)]T, where θm,l(χ) = θ(rs,i*, tm,j*) is the non-dimensional temperature at rs,i* at the measurement time tm,j* calculated by the solution of the direct problem. The index l includes spatial and temporal variations and can be calculated in terms of i and j as l = i + (j − 1)Ns, where i = 1, ..., Ns and j = 1, ..., Nt.

2.2. The inverse problem

Given temperature measurements Ψ = [Ψ1, Ψ2, ..., ΨNsNt]T measured at rs* at times tm*, under an applied heat flux q″*(x*, y*, t*), the inverse problem can be defined as estimating the unknown property distribution in layer-B, χ, that minimizes the objective function, which is the least squares norm of the difference between measured and estimated temperatures, defined as

$$ F(\boldsymbol{\chi}) = \frac{1}{2}\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right]^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] \qquad (6) $$

The objective function reaches a minimum when its gradient with respect to the unknown vector, χ, equals zero, represented as

$$ \nabla F(\boldsymbol{\chi}) = -\left[\frac{\partial \boldsymbol{\theta}_m(\boldsymbol{\chi})}{\partial \boldsymbol{\chi}}\right]^T \left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] = 0 \qquad (7) $$

At this point, the Jacobian matrix that contains the sensitivity coefficients of the system can be defined as

$$ \mathbf{J}(\boldsymbol{\chi}) = \left[\frac{\partial \boldsymbol{\theta}_m(\boldsymbol{\chi})^T}{\partial \boldsymbol{\chi}}\right]^T = \begin{bmatrix} \dfrac{\partial \theta_{m,1}}{\partial \chi_1} & \cdots & \dfrac{\partial \theta_{m,1}}{\partial \chi_{N_u}} \\ \vdots & \dfrac{\partial \theta_{m,l}}{\partial \chi_k} & \vdots \\ \dfrac{\partial \theta_{m,N_sN_t}}{\partial \chi_1} & \cdots & \dfrac{\partial \theta_{m,N_sN_t}}{\partial \chi_{N_u}} \end{bmatrix} \qquad (8) $$

Eq. (7) can then be re-written using the Jacobian matrix as

$$ \mathbf{J}(\boldsymbol{\chi})^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] = 0 \qquad (9) $$
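To make Eqs. (6)–(9) concrete, the following is a minimal Python sketch of evaluating the objective function and approximating the Jacobian column by column with forward finite differences around a black-box direct solver. The function name `solve_direct_problem` and the perturbation size are illustrative assumptions; the paper's own solver is the implicit finite difference scheme of Section 2.1.

```python
import numpy as np

def objective(psi, theta_m):
    """Least squares objective of Eq. (6): F = 0.5 * r^T r with r = psi - theta_m(chi)."""
    r = psi - theta_m
    return 0.5 * float(r @ r)

def finite_difference_jacobian(solve_direct_problem, chi, delta=1e-4):
    """Approximate the sensitivity matrix of Eq. (8).

    solve_direct_problem(chi) is assumed to return the Ns*Nt vector theta_m(chi)
    predicted at the sensor locations and measurement times.
    """
    theta0 = solve_direct_problem(chi)
    J = np.zeros((theta0.size, chi.size))
    for k in range(chi.size):
        chi_pert = chi.copy()
        chi_pert[k] += delta                      # perturb one unknown at a time
        J[:, k] = (solve_direct_problem(chi_pert) - theta0) / delta
    return J
```

In practice the cost of this step is dominated by the Nu additional direct solutions, which is why the "frozen" variants discussed later try to avoid recomputing J.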

3. Image reconstruction algorithms

The inverse problem as defined in Eq. (9) is non-linear, and there are numerous alternatives for its solution, which are outlined extensively in Refs. [4,26]. In this study three iterative algorithms that rely on first gradient information and formulate the inverse problem as the least squares minimization problem of Eq. (6) are considered: the iterative perturbation algorithm, the Levenberg–Marquardt algorithm, and the regularized Newton–Gauss algorithm.

3.1. Iterative perturbation algorithm

Consider that the exact solution of the problem represented by Eq. (9) is χ′. The Taylor series expansion of θm(χ′), if χ is in the neighborhood of χ′, can be written as

$$ \boldsymbol{\theta}_m(\boldsymbol{\chi}') = \boldsymbol{\theta}_m(\boldsymbol{\chi}) + (\boldsymbol{\chi}' - \boldsymbol{\chi})\,\frac{\partial \boldsymbol{\theta}_m(\boldsymbol{\chi})}{\partial \boldsymbol{\chi}} + \cdots \qquad (10) $$

Neglecting the higher order terms and representing the gradient term using the Jacobian, χ′ can be calculated from the solution of

$$ \mathbf{J}(\boldsymbol{\chi})\,(\boldsymbol{\chi}' - \boldsymbol{\chi}) = \left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] \qquad (11) $$

considering that Ψ = θm(χ′). An iterative solution is required, as Eq. (11) is non-linear. The solution algorithm for the iterative perturbation algorithm (IPA) is presented in Fig. 2.

1. Initial guess: p = 0, χ0
2. θm(χ0) from Eqs. (1)–(5)
3. F(χ0) from Eq. (6)
4. While (F(χp) > ε)
   i. J(χp) from Eq. (8)
   ii. χp+1 = χp + [J(χp)]⁻¹ [Ψ − θm(χp)]
   iii. θm(χp+1) from Eqs. (1)–(5)
   iv. F(χp+1) from Eq. (6)
   v. p = p + 1

Fig. 2. Iterative perturbation algorithm.
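Since J is generally rectangular (Ns·Nt rows, Nu columns), the inverse in step ii of Fig. 2 amounts in practice to a pseudo-inverse. A minimal Python sketch of one IPA iteration under that assumption, reusing the hypothetical `solve_direct_problem` and `finite_difference_jacobian` helpers introduced above, could look as follows.

```python
import numpy as np

def ipa_step(solve_direct_problem, finite_difference_jacobian, chi, psi):
    """One iterative perturbation update: solve J(chi) dchi = psi - theta_m(chi)."""
    theta_m = solve_direct_problem(chi)
    J = finite_difference_jacobian(solve_direct_problem, chi)
    # The least squares solution plays the role of [J]^-1 when J is not square.
    dchi, *_ = np.linalg.lstsq(J, psi - theta_m, rcond=None)
    return chi + dchi
```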

3.2. Levenberg–Marquardt algorithm

Combining Eqs. (9) and (11) leads to

$$ \mathbf{J}(\boldsymbol{\chi})^T \mathbf{J}(\boldsymbol{\chi})\,(\boldsymbol{\chi}' - \boldsymbol{\chi}) = \mathbf{J}(\boldsymbol{\chi})^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] \qquad (12) $$

The iterative solution of the non-linear Eq. (12) is known as the Newton–Gauss algorithm (NGA), an approximation to the Newton method in which the Hessian, which contains the second order gradient information, is approximated by J(χ)TJ(χ). The Newton–Gauss method converges if the problem is well-posed and the matrix J(χ)TJ(χ) is non-singular. As inverse problems are usually ill-posed, with J(χ)TJ(χ) singular, the system must be regularized. This is accomplished in the Levenberg–Marquardt algorithm (LMA) by using a conditioning matrix Ω. The singular matrix J(χ)TJ(χ) is modified to J(χ)TJ(χ) + λΩ, resulting in

$$ \left[\mathbf{J}(\boldsymbol{\chi})^T \mathbf{J}(\boldsymbol{\chi}) + \lambda\,\boldsymbol{\Omega}\right](\boldsymbol{\chi}' - \boldsymbol{\chi}) = \mathbf{J}(\boldsymbol{\chi})^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] \qquad (13) $$

where the conditioning matrix is chosen as Ω = diag[J(χ)TJ(χ)], a diagonal matrix containing the diagonal elements of J(χ)TJ(χ). For larger damping parameters, λ, the diagonal elements dominate the coefficient matrix in Eq. (13), and the LMA tends to a steepest descent method and converges accordingly. For smaller damping parameters the algorithm approaches the Newton–Gauss method, which has a higher order of convergence. The use of the LMA for the solution of inverse heat transfer problems is discussed extensively in Ref. [26]. The solution of the non-linear Eq. (13) is iterative, and the LMA used in this study is presented in Fig. 3.

1. Initial guess: p = 0, χ0, λ0
2. θm(χ0) from Eqs. (1)–(5)
3. F(χ0) from Eq. (6)
4. While (F(χp) > ε)
   i. J(χp) from Eq. (8)
   ii. Ωp = diag[J(χp)TJ(χp)]
   iii. χp+1 = χp + [J(χp)TJ(χp) + λpΩp]⁻¹ J(χp)T [Ψ − θm(χp)]
   iv. θm(χp+1) from Eqs. (1)–(5)
   v. F(χp+1) from Eq. (6)
   vi. While (F(χp+1) > F(χp))
       a. λp = 10λp
       b. χp+1 = χp + [J(χp)TJ(χp) + λpΩp]⁻¹ J(χp)T [Ψ − θm(χp)]
       c. θm(χp+1) from Eqs. (1)–(5)
       d. F(χp+1) from Eq. (6)
   vii. λp+1 = 0.1λp
   viii. p = p + 1

Fig. 3. The Levenberg–Marquardt algorithm [26].
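The inner loop of Fig. 3 (step vi) simply keeps inflating the damping parameter until the objective decreases. A compact sketch of one damped update is shown below, with the same hypothetical helpers as before and a bounded retry count added for safety; it is an illustration of Eq. (13), not the author's code.

```python
import numpy as np

def lma_step(solve_direct_problem, finite_difference_jacobian, chi, psi, lam, max_tries=20):
    """One Levenberg-Marquardt update of Eq. (13) with Omega = diag(J^T J)."""
    theta_m = solve_direct_problem(chi)
    residual = psi - theta_m
    J = finite_difference_jacobian(solve_direct_problem, chi)
    JtJ = J.T @ J
    omega = np.diag(np.diag(JtJ))                 # conditioning matrix
    f_old = 0.5 * float(residual @ residual)
    for _ in range(max_tries):
        dchi = np.linalg.solve(JtJ + lam * omega, J.T @ residual)
        r_new = psi - solve_direct_problem(chi + dchi)
        if 0.5 * float(r_new @ r_new) <= f_old:   # accept the step
            return chi + dchi, 0.1 * lam          # relax damping for the next iteration
        lam *= 10.0                               # otherwise increase damping and retry
    return chi, lam                               # no acceptable step found
```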

3.3. Regularized Newton–Gauss algorithm

The regularized Newton–Gauss algorithm (RNGA) used in this study regularizes Eq. (12) using truncated singular value decomposition (TSVD). TSVD is based on singular value decomposition (SVD), a well-known matrix decomposition technique, and it has been used to characterize and solve ill-posed inverse problems [27–29]. The SVD of the matrix J(χ)TJ(χ) is defined as J(χ)TJ(χ) = USVT, where U and V are orthogonal matrices and S is a diagonal matrix of singular values sorted in descending order (S1,1 > S2,2 > ... > SN,N). The pseudo-inverse of the matrix can then be defined as [J(χ)TJ(χ)]⁻¹ = VΦUT, where Φ = S⁻¹. The condition number (CN) of J(χ)TJ(χ) describes how ill-conditioned the system is, and it is defined here as the logarithmic ratio of the largest to the smallest singular value, i.e. CN = log10(S1,1/SN,N). The condition number of a discrete ill-posed system characterized by an ill-conditioned matrix is high [29], and to regularize the system the part of Φ corresponding to the smallest singular values is filtered so that a pre-defined maximum allowable condition number (CN,max) is satisfied.

$$ \Gamma_{i,i} = \begin{cases} \Phi_{i,i}, & \log_{10}\!\left(S_{1,1}/S_{i,i}\right) \le C_{N,\max} \\ 0, & \log_{10}\!\left(S_{1,1}/S_{i,i}\right) > C_{N,\max} \end{cases} \qquad i = 1, 2, \ldots, N_s N_t \qquad (14) $$

In the given form, using a relatively small CN,max leads to significant filtering, and the regularized system will differ significantly from the original system, whereas with a relatively large CN,max the regularized system differs only slightly from the original one. Therefore, the value of CN,max controls the regularization level for the TSVD. Further information on the regularization of ill-posed problems is presented in Ref. [29]. A regularized solution of Eq. (12) is then represented as

$$ (\boldsymbol{\chi}' - \boldsymbol{\chi}) = \left[\mathbf{V}\,\boldsymbol{\Gamma}\,\mathbf{U}^T\right]\left\{\mathbf{J}(\boldsymbol{\chi})^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right]\right\} \qquad (15) $$
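A minimal Python sketch of the TSVD-regularized update of Eqs. (14) and (15) is given below; `JtJ` and the right-hand side would come from the Jacobian and residual of the current iteration, and the choice of `cn_max` follows the L-curve discussion later in this section. The function name and interface are assumptions for illustration.

```python
import numpy as np

def rnga_update(JtJ, rhs, cn_max):
    """Solve (J^T J) dchi = rhs with truncated SVD filtering, Eqs. (14)-(15).

    Singular values whose logarithmic ratio to the largest one exceeds cn_max
    are discarded (their inverse is set to zero).
    """
    U, s, Vt = np.linalg.svd(JtJ)                 # JtJ = U diag(s) Vt, s in descending order
    phi = 1.0 / s                                 # Phi = S^-1
    gamma = np.where(np.log10(s[0] / s) <= cn_max, phi, 0.0)   # Eq. (14)
    return Vt.T @ (gamma * (U.T @ rhs))           # Eq. (15)
```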

The proper regularization level is strongly dependent on the nature of the problem. The so-called “L-curve” shows the variation of the norm of different solutions of Eq. (15), X = χ′ − χ, obtained with different regularization levels controlled by CN,max, with respect to the norm of the residuals of these solutions. The residual is defined as

$$ \mathbf{R} = \mathbf{J}(\boldsymbol{\chi})^T \mathbf{J}(\boldsymbol{\chi})\,(\boldsymbol{\chi}' - \boldsymbol{\chi}) - \mathbf{J}(\boldsymbol{\chi})^T\left[\boldsymbol{\Psi} - \boldsymbol{\theta}_m(\boldsymbol{\chi})\right] \qquad (16) $$

where the solution (χ′ − χ) is calculated from Eq. (15). The L-curve is a very useful tool for deciding on the appropriate regularization level, as explained in detail in Refs. [27–29]. A constant regularization level is used throughout the iterations to ensure stability and convergence, since changing the regularization level might lead to convergence and stability problems in problems where successive regularized solutions are considered [28]. The RNGA used in this study is presented in Fig. 4.

1. Initial guess: p = 0, χ0
2. θm(χ0) from Eqs. (1)–(5)
3. F(χ0) from Eq. (6)
4. While (F(χp) > ε)
   i. J(χp) from Eq. (8)
   ii. Up, Σp, Vp from SVD of J(χp)TJ(χp)
   iii. Γp from Eq. (14)
   iv. χp+1 = χp + [Vp Γp UpT] {J(χp)T [Ψ − θm(χp)]}
   v. θm(χp+1) from Eqs. (1)–(5)
   vi. F(χp+1) from Eq. (6)
   vii. p = p + 1

Fig. 4. The regularized Newton–Gauss algorithm.

4. Implementation and sample problem

The measurement system considered in this study consists of a high speed thermal imaging system that monitors the temperature over layer-C of the multi-layered system shown in Fig. 1 while power is applied at the base of layer-A. The high speed IR camera captures the transient temperature distribution over an area limited by the camera's image capturing speed and resolution. This test setup is similar to the one used in Ref. [2], where thermal imaging is used for qualitative detection of flaws in electronic packages in order to optimize the assembly process. An electronic package such as the one considered in Ref. [2] is similar to the system in Fig. 1, where layer-A represents the die and layer-C represents the lid of the package.

The objective of the current study is to demonstrate the feasibility of quantitative characterization of thermal interfaces using a similar test setup. A numerical study is carried out in which experimental data is replaced by simulated measurements to meet this objective. The simulated measurements represent the measured temperature distribution over the top surface of layer-C, and they are estimated through solution of the direct problem for a known interface that is characterized by its thermal conductivity and heat capacity distribution. Ill-posed inverse problems are known for their unstable nature; any small perturbation of the input data can have a significant effect on the estimations. All measurements are subject to some error due to measurement uncertainty, which is described in terms of a standard deviation, and the algorithms must also be tested using data subject to error based on the measurement uncertainty. In a calibrated measurement system, the error can be considered a random error with no bias error. Therefore, a random measurement error must be introduced into the simulated measurements to produce the synthetic measurement data. The random measurement error is introduced using pseudo-random numbers based on a Gaussian distribution with a standard deviation, σm*, of 6.85 × 10⁻⁴ [30], with 99.7% of the introduced random error lying within 3σm. For the sample problem considered in this study, the interface layer-B thickness is 0.25 mm, which is taken as the reference length (Lo), whereas its properties are based on the thermal grease G9 in Ref. [31] and are also taken as reference values (ko = 2.87 W m⁻¹ K⁻¹, Co = 2.12 J cm⁻³ K⁻¹). The non-dimensional cross sectional area of the structure is 40 × 40, and the non-dimensional thicknesses (Lz*) of layers A and C are 2 and 4, respectively. The non-dimensional thermal conductivities of layers A and C are 41.8 and 132.4; the non-dimensional volumetric heat capacities of A and C are 0.77 and 1.62, respectively. The ambient and maximum temperatures are taken as 300 and 373 K, respectively. The system is initially at ambient temperature, and a uniform non-dimensional heat flux of 0.35 is applied at layer-A throughout the measurement. Layer-C is subject to convective cooling, represented by a reference Biot number of 9 × 10⁻³, during the measurement until the maximum temperature in the system is reached, which takes a measurement time represented by a Fourier number of 21.7. The reference Biot number indicated above represents heat removal by forced convection over layer-C, which can be achieved using a fan, and it is assumed to be known through theoretical estimations or a prior experimental characterization. The interface layer has defects due to air voids formed during the assembly process, as discussed earlier. There might be a single void or a number of smaller voids surrounded by interface material in the regions where defects exist. The void fraction within a control volume has a significant effect on the thermal performance of the interface, which can be represented in terms of an effective thermal conductivity and volumetric heat capacity, both dependent on the void fraction. The objective of interface characterization is to detect the voids or estimate the void fraction distribution, which can be achieved by estimating the effective thermal conductivity and volumetric heat capacity distributions.

Moreover, the effective thermal conductivity and volumetric heat capacity in a control volume are both related to the void fraction; therefore, the problem can be defined and the solution carried out in terms of the effective thermal conductivity only. A zero void fraction then represents the perfect interface and is equivalent to a non-dimensional effective thermal conductivity of 1 based on effective medium theory, whereas an effective thermal conductivity of 0.01, which represents the non-dimensional conductivity of air, means that the related control volume is completely filled with air. The effective thermal conductivity distribution shown in Fig. 5 is used to produce the synthetic data for the sample problem.
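As an illustration of how such synthetic data can be produced, the following Python sketch adds zero-mean Gaussian noise with the standard deviation quoted above to a simulated measurement vector; the function name, array interface and the fixed seeding are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def make_synthetic_measurements(theta_simulated, sigma_m=6.85e-4, seed=0):
    """Add zero-mean Gaussian measurement noise to noise-free simulated data."""
    rng = np.random.default_rng(seed)             # fixed seed: the same noise set can be reused across cases
    noise = rng.normal(loc=0.0, scale=sigma_m, size=theta_simulated.shape)
    return theta_simulated + noise                # about 99.7% of samples lie within 3*sigma_m
```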


The properties across the thickness of the interface are assumed to be constant, considering effective properties. The simulated measurement data, calculated through solution of the direct problem for the conditions explained above, are presented in Fig. 6. The synthetic measurement data displayed in Fig. 7 are then obtained by introducing random error into the simulated measurements shown in Fig. 6. The simulated and synthetic measurement data shown in Figs. 6 and 7 represent data with 8 measurements (Nt = 8). A 30 × 30 × 18 grid along x*, y*, and z*, respectively, together with a time step of Δt* = 0.17, is used for producing the simulated measurements, as these produce grid and time step independent solutions. The number of measurement points used is Ns = 900, which can be achieved by a camera with 30 × 30 pixel resolution and a 300 μm pixel pitch. For the system under consideration, the high conductivity of layer-C, which is preferred for improved heat removal from the system, helps reflect the effect of the flaws to the surface. However, these effects tend to die out as the layer thickness increases. The system parameters and boundary conditions for the system investigated in this study are chosen to represent a typical electronics package with no spreading. For this particular system, the maximum non-dimensional temperature difference at the end of the measurement is on the order of 0.01, as seen in Fig. 6. Therefore, the effect of the random measurement error is significant, as can be recognized by comparing Figs. 6 and 7, making the problem a challenging one.

Fig. 5. The applied interface non-dimensional conductivity distribution.

5. Results and discussion

The iterative perturbation algorithm (IPA), the Levenberg–Marquardt algorithm (LMA) and the regularized Newton–Gauss algorithm (RNGA) are tested by estimating the interface property distribution of the system summarized in the preceding section, relying on the simulated measurements and the synthetic measurement data shown in Figs. 6 and 7. The flaws within the interface layer are due to voids formed during the assembly process. The presented algorithms are general and are capable of estimating the conductivity and heat capacity terms simultaneously, which can be used for cases where defects are due to contamination by different materials. However, as discussed earlier, the solution is carried out and presented for the non-dimensional effective conductivity distribution for the problem under consideration. As a result, the number of unknown variables, and hence the size of the Jacobian matrix, is reduced by half, which leads to a significant reduction in computation time. An absolute convergence criterion is selected based on the measurement uncertainty as ε = NsNtσm*. A relative convergence criterion based on the difference between the objective functions of two successive iterations is also considered; if this value is less than 10⁻⁹, the iterations are terminated. The algorithms are tested using simulated measurements (data with no random error) and synthetic measurements (data with random error) at different sampling frequencies. The cases with multiple measurements (Nt > 1) consider data collected at a constant period throughout the total testing time. The same random number set is used for introducing the random error in cases with the same number of measurements (Nt), for consistency.

5.1. Iterative perturbation algorithm

The IPA is first tested using a single simulated measurement (with no random error) at t* = 21.7 (Nt = 1), which is shown in Fig. 6.

Fig. 6. Simulated measurements representing the non-dimensional temperature distribution over layer-C.


Fig. 7. Synthetic measurement data obtained by introducing random error to simulated measurements.

The non-dimensional conductivity predicted by IPA is shown in Fig. 8(a), and it is identical to Fig. 5: IPA predicts the thermal interface exactly using measurement data with no random error. The computational time for this solution is 1560 s on a 3 GHz processor, and this value is used as the reference computational time, tCPU,o. If a single measurement at t* = 21.7 that is subject to measurement error is used, the IPA does not converge, with the objective function fluctuating between two values without any decreasing trend. IPA still does not converge when 4 measurements, all subject to random error, are used. IPA converges to a solution when the number of measurements used is doubled. The predicted interface conductivity distribution for Nt = 8 is presented in Fig. 8(b). It can be observed that the locations of the flaws shown in Fig. 5 are estimated correctly in Fig. 8(b); however, the shapes, sizes and estimated values are not captured. Moreover, flaw phantoms are observed due to the introduced random error. The absolute error at a point (x*, y*) can be defined as Er(x*, y*) = |ka*(x*, y*) − ke*(x*, y*)|, where ka*(x*, y*) is the applied distribution in Fig. 5 and ke*(x*, y*) is the predicted distribution. The average absolute error of the solution in Fig. 8(b) is 0.17, as shown in Table 1. Doubling the number of measurements (Nt = 16) improves the accuracy of the prediction, reducing the estimated flaw phantoms, as shown in Fig. 8(c). A more accurate prediction of the interface conductivity is obtained by using 32 measurements (Fig. 8(d)), for which the average absolute error is reduced to 0.11. Increasing the number of measurements improves the accuracy of the predictions, as shown in Table 1, at the cost of an increase in the required computational effort. Calculation of the Jacobian, which is repeated at every iteration, is the most demanding operation in the algorithm, and avoiding this operation significantly improves the computational efficiency. Considering the small change in the Jacobian between successive iterations, the IPA can be modified by employing a “frozen” algorithm similar to the frozen Newton algorithm presented in Ref. [32].

Fig. 8. The predicted interface non-dimensional conductivity distribution using IPA: (a) No error, Nt = 1, (b) σm*, Nt = 8, (c) σm*, Nt = 16, (d) σm*, Nt = 32, (e) “Frozen” algorithm, σm*, Nt = 32, (f) Ad-hoc filtering applied, σm*, Nt = 32, (g) 2σm*, Nt = 32, (h) 2σm*, Nt = 64, (i) 2σm*, Nt = 128.

Table 1. Summary of results with IPA for the test cases.

Method      Error   Nt    Iterations   tCPU*    Erav
IPA         0       1     4            1        2.8 × 10⁻⁴
IPA         σm*     8     4            1.07     0.17
IPA         σm*     16    4            2.28     0.13
IPA         σm*     32    4            5.42     0.11
IPA         2σm*    32    4            5.42     0.18
IPA         2σm*    64    4            13.11    0.13
IPA         2σm*    128   4            34.74    0.11
Frozen-IPA  σm*     32    15           3.53     0.1

In the modified algorithm, the Jacobian is calculated during the first iteration and is not recalculated unless the relative convergence criterion is satisfied without the absolute convergence criterion being satisfied. The so-called “frozen-IPA” prediction shown in Fig. 8(e) is similar to the IPA prediction shown in Fig. 8(d), and the computational time is reduced by more than 30%, with the average absolute error reducing to 0.1, as shown in Table 1. Although reduced, flaw phantoms due to the introduced random error are still observed in Fig. 8(d) and (e); they can be removed by applying an ad-hoc threshold filter that ignores any flaw with non-dimensional conductivity between 0.25 and 0.75, based on the prior information available about the ideal system. The resulting prediction, presented in Fig. 8(f), has only slight differences from the applied profile and can be considered an acceptable prediction of Fig. 5, with an average absolute error of 0.03.
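The “frozen” modification just described is essentially a caching rule for the Jacobian. A hedged Python sketch of that control flow is given below; the helper names, the convergence bookkeeping and the iteration cap are assumptions for illustration only.

```python
import numpy as np

def frozen_ipa(solve_direct_problem, finite_difference_jacobian, chi, psi,
               eps_abs, eps_rel=1e-9, max_iter=50):
    """IPA with a frozen Jacobian: J is reused until progress stalls (relative
    criterion met) while the absolute criterion is still unsatisfied."""
    J = None
    f_prev = np.inf
    for _ in range(max_iter):
        residual = psi - solve_direct_problem(chi)
        f = 0.5 * float(residual @ residual)
        if f <= eps_abs:                                  # absolute convergence reached
            break
        if J is None or abs(f_prev - f) < eps_rel:        # (re)compute the Jacobian only when needed
            J = finite_difference_jacobian(solve_direct_problem, chi)
        dchi, *_ = np.linalg.lstsq(J, residual, rcond=None)
        chi = chi + dchi
        f_prev = f
    return chi
```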


It is obvious that the measurement uncertainty has a significant effect on the prediction accuracy. The standard deviation of the introduced random error is doubled to understand the effect of the random error size on the predictive performance of IPA. The prediction that uses 32 measurements with doubled random error is presented in Fig. 8(g). The significance of the random error size for the predictions can be observed by comparing Fig. 8(d) and (g). For the doubled measurement error, the number of measurements used is first doubled (Fig. 8(h)) and then quadrupled, so that 128 measurements (Fig. 8(i)) are used for the predictions to improve the prediction accuracy. The prediction accuracy improves as more measurement sets are used (larger Nt), as shown before, and the prediction presented in Fig. 8(i) is very similar to the one shown in Fig. 8(d). Comparing the two cases, it can be observed that reaching a similar accuracy using data subject to doubled random error is possible by quadrupling the number of measurements used. The observed behavior can be interpreted based on the central limit theorem (CLT), noting that the introduced error resembles a random error. If the total number of measurements is divided into a number of sub-means, the variance of the means at every measurement point can be shown to be inversely proportional to the number of sub-means, based on the CLT. Therefore, using 128 measurements with doubled random error results in a variance of the means at each measurement point similar to that of 32 measurements, which in turn results in a similar prediction accuracy.

5.2. Levenberg–Marquardt algorithm

Similarly, the LMA is first tested using a single measurement at t* = 21.7 (Nt = 1) with no error, and the predicted interface conductivity distribution for this case, shown in Fig. 9(a), is identical to Fig. 5. When a single measurement with error is used, the LMA converges; however, the predicted distribution shown in Fig. 9(b) is not acceptable, as the locations, shapes and values associated with the flaws differ significantly from those in Fig. 5. The number of measurements is increased to Nt = 4 to improve the solution quality (Fig. 9(c)). Although the prediction quality improves and the locations of the flaws are identifiable, the shapes of the flaws are not captured accurately, and many flaw phantoms still appear in the predicted image, which is blurry and far from satisfactory. The prediction quality further increases for Nt = 8 and 16, as shown in Fig. 9(d) and (e), respectively, with the average absolute error reducing to 0.13 for Nt = 16 (Table 2). The prediction by LMA using Nt = 32 is presented in Fig. 9(f), and it is similar to the prediction shown in Fig. 8(d) by IPA using Nt = 32. The predicted distribution still has some flaw phantoms, and the average absolute error reduces to 0.11, as shown in Table 2. Similar to IPA, the improvement in prediction accuracy of the LMA with increasing Nt results in a significant increase in computational time. A frozen algorithm is also implemented for the LMA, following the implementation for IPA, and the prediction by “frozen-LMA” is presented in Fig. 9(g). There is a slight increase in prediction accuracy with a 30% reduction in the computational time, as shown in Table 2.

Fig. 9. The predicted interface non-dimensional conductivity distribution using LMA: (a) No error, Nt = 1, (b) σm*, Nt = 1, (c) σm*, Nt = 4, (d) σm*, Nt = 8, (e) σm*, Nt = 16, (f) σm*, Nt = 32, (g) “Frozen” algorithm, σm*, Nt = 32, (h) 2σm*, Nt = 32, (i) 2σm*, Nt = 128.


Table 2. Summary of results with LMA for the test cases.

Method      Error   Nt    Iterations   tCPU*    Erav
LMA         0       1     4            1.08     2.8 × 10⁻⁴
LMA         σm*     1     3            0.71     0.35
LMA         σm*     4     4            1.08     0.19
LMA         σm*     8     4            1.18     0.17
LMA         σm*     16    4            2.5      0.13
LMA         σm*     32    4            5.39     0.11
LMA         2σm*    32    4            5.47     0.17
LMA         2σm*    128   4            34.79    0.11
Frozen-LMA  σm*     32    15           3.64     0.1

Clearing the flaw phantoms using ad-hoc filtering is straightforward, as demonstrated for IPA. The predicted distribution after ad-hoc filtering is not displayed; however, the average absolute error decreases to 0.03 for the LMA. The effect of the measurement uncertainty on the predictions with the LMA is shown in Fig. 9(h) and (i). A similar trend to IPA is observed for the LMA: the number of measurements must be quadrupled to achieve a similar prediction quality if the measurement uncertainty is doubled. It must be noted that an initial damping parameter (λ0 in Fig. 3) of 10⁻⁵ is used for the results presented, as the LMA relying on this parameter converges while minimizing the iterations in the inner loop (step vi in Fig. 3) for the problem considered.

5.3. Regularized Newton–Gauss algorithm

Testing of the RNGA is performed in conjunction with the NGA, which can be considered a base case with no regularization. As before, simulated measurement data consisting of a single data set (Nt = 1) that is not subject to measurement error is used for the initial test. The distribution predicted by the NGA, shown in Fig. 10(a), is identical to Fig. 5. The solution converges in 4 iterations, and the condition number of the coefficient matrix of the system in Eq. (12), J(χ)TJ(χ), varies between 4 and 4.2 over these iterations. The solution does not require regularization when the input data is free of measurement errors. When a single synthetic measurement data set with random error is used, the NGA does not converge to a solution, and the condition number (CN) of the coefficient matrix of the system increases to 25 after 10 iterations.

When the RNGA is used with CN,max set to 5, the algorithm converges to a solution satisfying the relative convergence criterion, with the CN of the coefficient matrix reaching 15 after 11 iterations. However, the predicted distribution is dominated by flaw phantoms and the actual flaws are unidentifiable. The distribution predicted with CN,max set to 4 is more stable in terms of convergence, but the actual flaws are not identifiable due to excessive flaw phantoms, and the prediction is not acceptable either. When CN,max is further reduced to 2.5, the distribution shown in Fig. 10(b) is predicted by the RNGA. Although the flaws close to the bottom-left and upper-right corners of Fig. 10(b) are shifted significantly from those in Fig. 5, the average absolute error improves significantly with respect to the prediction by LMA using Nt = 1, as shown in Table 3. For Nt = 4, the NGA does not converge; however, with CN,max set to 4 the RNGA predicts a distribution that has an average absolute error of 0.29. The quality of the image can be increased as more filtering is applied to the system by decreasing CN,max to 2.5. The distribution presented in Fig. 10(c) is predicted using CN,max = 2.5, which reduces the average absolute error to 0.13, as presented in Table 3; this is the most accurate prediction with Nt = 4. Although the values are not predicted accurately, the locations and shapes of the flaws are captured. The NGA converges for Nt = 8, predicting the distribution shown in Fig. 10(d), which is similar to the predictions by IPA and LMA (Figs. 8(b) and 9(d)) with similar accuracy. The effect of regularization can be observed clearly by comparing Fig. 10(d) with Fig. 10(e) and (f), where CN,max is set to 2.5 and 2, respectively. The flaw locations are clearly identified in Fig. 10(e), where most of the flaw phantoms observed in Fig. 10(d) are removed. The average absolute error for the prediction by RNGA using CN,max = 2.5 is reduced to 0.12. The locations and shapes of the flaws are predicted reasonably in the regularized images. The flaws tend to smear as more filtering is applied, as can be seen in Fig. 10(f); the average absolute error increases to 0.14 for CN,max = 2 with the introduction of further filtering. The results shown in Fig. 10(d)–(f) demonstrate the importance of the regularization level. The selection of a proper regularization level can be performed by two alternative methods. If a calibration experiment can be performed, deciding on the proper regularization level is straightforward. However, if there is no possibility of performing such an experiment, the L-curve can be used.

Fig. 10. The predicted interface non-dimensional conductivity distribution using NGA or RNGA: (a) NGA, No error, Nt = 1, (b) RNGA, σm*, Nt = 1, CN,max = 2.5, (c) RNGA, σm*, Nt = 4, CN,max = 2.5, (d) NGA, σm*, Nt = 8, (e) RNGA, σm*, Nt = 8, CN,max = 2.5, (f) RNGA, σm*, Nt = 8, CN,max = 2, (g) RNGA, σm*, Nt = 16, CN,max = 3.5, (h) RNGA, σm*, Nt = 32, CN,max = 4, (i) RNGA, 2σm*, Nt = 128, CN,max = 4.



Table 3. Summary of results with NGA or RNGA for the test cases.

Method   Error   Nt    CN,max   Iterations   tCPU*    Erav
NGA      0       1     –        4            1.08     2.8 × 10⁻⁴
RNGA     σm*     1     2.5      3            0.74     0.19
RNGA     σm*     4     2.5      6            1.90     0.13
NGA      σm*     8     –        4            1.16     0.17
RNGA     σm*     8     2.5      6            1.93     0.12
RNGA     σm*     8     2        6            1.92     0.14
RNGA     σm*     16    3.5      4            2.54     0.15
RNGA     σm*     32    4        4            5.24     0.11
RNGA     2σm*    128   4        4            36.33    0.11

The L-curves of the first three iterations (p = 1–3 in Fig. 4) for the solutions presented in Fig. 10(d)–(f) are shown in Fig. 11. Every point on the L-curve represents an alternative solution with a different regularization level controlled by the parameter CN,max, which is achieved in practice by truncating a different number of singular values for each solution. The curve therefore presents the variation of the solution of Eq. (15), over all possible regularization levels, with respect to the residual of these solutions. It can be observed that the curves for the first iteration (p = 1) are identical in all three figures, whereas the curves for the subsequent iterations (p = 2, 3) differ, as different solutions, represented by squares, are selected along the L-curve. The NGA solution, which is equivalent to CN,max = 5, selects the most accurate solution with minimal residual, which has the maximum solution norm due to fluctuations or flaw phantoms (Fig. 11(a)). The solutions with CN,max = 2.5 are close to the corner of the L-shape, and this selection represents the optimal solution as described in Refs. [27–29], giving a significant improvement in solution norm for a reasonable increase in residual norm. The residual of the selected solution increases further, with relatively less decrease in the solution norm, as the regularization level is increased by setting CN,max = 2. The absolute convergence criterion is satisfied by all solutions; therefore, the value of the objective function for all solutions presented in Fig. 10(d)–(f) satisfies the convergence criterion. Although the NGA converges, the RNGA using CN,max = 4 predicts the most accurate distribution using Nt = 16, as shown in Fig. 10(g), for which the average absolute error is 0.13 (Table 3). It can be observed that the prediction accuracy is similar to that of IPA and LMA. Further increasing Nt to 32, the NGA and the RNGA with CN,max = 4 predict distributions (Fig. 10(h)) that are also similar to the distributions predicted by IPA and LMA with Nt = 32. The effect of measurement uncertainty on the predictions by RNGA is very similar to IPA and LMA; a similar prediction accuracy using data with doubled measurement uncertainty (2σm*) is possible by quadrupling the number of measurements (Nt), as shown in Fig. 10(i). Similar to the other two methods, a frozen algorithm and ad-hoc filtering can be implemented in conjunction with the RNGA to improve the computational efficiency and accuracy, respectively.
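For readers who want to reproduce an L-curve like Fig. 11, the sketch below sweeps the truncation level and records the solution and residual norms of Eqs. (15) and (16). It is not the author's plotting code; `solve_regularized` is a hypothetical callback, for example the `rnga_update` sketch shown earlier.

```python
import numpy as np

def l_curve_points(JtJ, rhs, cn_max_values, solve_regularized):
    """Return (residual_norm, solution_norm) pairs for a sweep of CN,max values.

    solve_regularized(JtJ, rhs, cn_max) is assumed to return the TSVD-regularized
    solution of Eq. (15).
    """
    points = []
    for cn_max in cn_max_values:
        dchi = solve_regularized(JtJ, rhs, cn_max)
        residual = JtJ @ dchi - rhs               # residual as defined in Eq. (16)
        points.append((float(np.linalg.norm(residual)), float(np.linalg.norm(dchi))))
    return points
```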

5.4. Effect of other uncertainties

Aside from the effects of random measurement error on the predictions discussed in the previous sections, there are other uncertainties in the system that need to be considered. These are the uncertainties regarding the applied boundary conditions and the thermal properties of the three layered system shown in Fig. 1. Considering the sample problem, the uncertainties in the thermal properties of the silicon die, the thermal interface material and the copper lid are reported as 2% [33], 0.35% [31], and 3% [34], respectively, whereas the uncertainty in the applied power is reported as 0.2% [35] and the uncertainty in the convective heat transfer coefficient can be considered as 10% [36]. The resulting uncertainty in the measurement data due to these uncertainties can be calculated from Ref. [37]

$$ \left[\sigma_s^*\!\left(\mathbf{r}_s^*, t_m^*\right)\right]^2 = \sum_i \left[\frac{\partial \theta\!\left(\mathbf{r}_s^*, t_m^*\right)}{\partial p_i}\,\sigma_{p_i}^*\right]^2 \qquad (17) $$

where pi and σpi* represent the properties or boundary conditions listed above and their uncertainties, respectively. The synthetic measurements, which already include the random measurement error shown in Fig. 7, are then modified using the uncertainty predicted by Eq. (17). The three algorithms evaluated are tested using the modified synthetic measurements for the case with Nt = 32 and σm*. The estimations that consider the random measurement error only for this case are shown in Figs. 8(d), 9(f), and 10(h) for IPA, LMA and RNGA, respectively. IPA does not converge when the modified synthetic measurements are used, whereas the LMA with λ0 = 10⁻⁵, as before, converges after 17 iterations satisfying the relative convergence criterion, while the absolute convergence criterion is not satisfied. The estimated distribution shown in Fig. 12(a) is smeared and highly regularized. A smaller initial damping parameter, λ0 = 10⁻¹⁰, is used to improve the solution by reducing the regularization. Although the prediction in Fig. 12(b) is improved, with the average absolute error reducing from 0.14 to 0.098, the predicted values in the defect zones are still not very accurate. Similar to the IPA, the NGA does not converge to a solution, whereas the RNGA yields the solution in Fig. 12(c) for CN,max = 4. Comparing this solution to Fig. 10(h), it can be seen that the method is not capable of identifying the two adjacent defects close to the lower left corner of the figure. The average absolute error of the solution in Fig. 12(c) is 0.11. In the solutions presented so far, the solution is carried out for the non-dimensional conductivity, and the local void fraction is estimated from the spatial conductivity distribution based on the bulk conductivity of the interface material. The local volumetric heat capacity is then predicted from the void fraction distribution and the bulk volumetric heat capacity of the interface material.
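A small Python sketch of the propagation rule in Eq. (17) is given below, using finite-difference sensitivities of the simulated sensor temperatures with respect to each uncertain property or boundary condition; the parameter dictionary and solver interface are assumptions for illustration.

```python
import numpy as np

def measurement_uncertainty(solve_direct_problem, params, sigmas, rel_step=1e-3):
    """Propagate property/boundary-condition uncertainties to the sensor data, Eq. (17).

    params : dict of nominal values, e.g. {"k_die": ..., "h": ...}
    sigmas : dict of absolute uncertainties for the same keys
    Returns the standard deviation of theta at every sensor location and time.
    """
    theta0 = solve_direct_problem(params)
    variance = np.zeros_like(theta0)
    for name, sigma in sigmas.items():
        perturbed = dict(params)
        dp = rel_step * abs(params[name]) if params[name] != 0 else rel_step
        perturbed[name] = params[name] + dp
        sensitivity = (solve_direct_problem(perturbed) - theta0) / dp
        variance += (sensitivity * sigma) ** 2    # sum of squared contributions
    return np.sqrt(variance)
```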

Fig. 11. The L-curve for the first three iterations of NGA or RNGA: (a) NGA, σm*, Nt = 8, (b) RNGA, σm*, Nt = 8, CN,max = 2.5, (c) RNGA, σm*, Nt = 8, CN,max = 2.


Fig. 12. The predicted interface non-dimensional conductivity distribution with modified synthetic measurements considering the uncertainty in the system for Nt = 32, σm*: (a) LMA, λ0 = 10⁻⁵, (b) LMA, λ0 = 10⁻¹⁰, (c) RNGA, CN,max = 4, (d) IPA, CB* = 0.98, (e) LMA, λ0 = 10⁻¹⁰, CB* = 0.98, (f) RNGA, CN,max = 4, CB* = 0.98.

In order to improve the solutions, the inverse problem can be re-formulated by adding the value of the bulk volumetric heat capacity of the interface material to the set of unknowns (alongside the conductivity distribution), and the predictions can be repeated. The local volumetric heat capacity can then be estimated from the predicted bulk volumetric heat capacity of the interface material and the void fraction. The distribution predicted by IPA using the modified formulation is presented in Fig. 12(d), with the bulk non-dimensional volumetric heat capacity estimated as 0.98. It can be seen that the solution, with an average absolute error of 0.1, is similar to Fig. 8(d). The modified formulation is also applied with the LMA and the RNGA, and the predicted distributions are shown in Fig. 12(e) and (f), respectively. The predicted distributions are similar to Figs. 9(f) and 10(h), with the non-dimensional bulk volumetric heat capacity estimated as 0.98 and an average absolute error of 0.1 for both predictions. To summarize, the algorithms tested in this study are capable of predicting the void fraction distribution and identifying the defects quantitatively, relying on IR measurements subject to random measurement error and considering the uncertainties in the boundary conditions and thermal properties of the layers. The prediction accuracy of all three algorithms is similar for Nt ≥ 16, as they all rely on minimizing the same objective function, whereas the prediction accuracy of the RNGA is superior for Nt < 16, where the effect of the measurement uncertainty is found to be more significant. The LMA and RNGA are found to be robust, converging for all test cases, whereas the IPA and NGA fail to converge for Nt = 1 and 4. It was found that the RNGA is a very useful tool, capable of producing reasonable images even with a relatively small number of measurements. The rate of convergence and the computational effort required are similar for all three algorithms. The resolution of all three methods is also similar and depends on a number of factors, including limitations related to the hardware, the model used, and the characterized system itself. The IR imaging system's image capturing speed is critical, as the accuracy of the algorithms increases with the number of images used. The resolution of the numerical model used is

another significant factor, in conjunction with the image capturing resolution of the system. Any flaw beyond the resolution of the model or of the IR imaging system will not be captured using the formulation presented in this study, which relies on distribution estimation. However, this limit can be overcome by adopting a parameter estimation formulation. Besides the factors listed above, the prediction resolution depends significantly on the physical system and the applied boundary conditions. With a thicker layer-C, the signal will spread more, which reduces the effects of flaws on the measured signal and in turn reduces the resolution of the system. A precise estimation of the resolution is very much case dependent, and a detailed analysis is needed that is beyond the scope of this study.

6. Conclusions

Tomography is a powerful tool for non-destructive characterization of systems used in a diverse field of applications from medicine to geology. Thermal diffusion tomography uses heat conducting through an object as the signal instead of X-rays or seismic waves, and due to the diffusive nature of the signal transport, precise image reconstruction is a challenging problem. Solution by three reconstruction algorithms, the iterative perturbation algorithm, the Levenberg–Marquardt algorithm and the regularized Newton–Gauss algorithm, is presented. The feasibility of the techniques considered is demonstrated numerically using simulated measurements provided by a numerical model. All three algorithms were able to reconstruct the image of the interface layer and successfully identify the flaws using simulated measurements that represent data with no measurement error. However, in reality any measurement data used for image reconstruction will be subject to measurement error; moreover, there will be uncertainties in the applied boundary conditions and the thermal properties of the layers. Therefore, the algorithms are also tested using synthetic data with random measurement error and with the effects of


uncertainties in the boundary conditions and thermal properties introduced. It was found that the size of the measurement error has a significant effect on the prediction accuracy, which can be improved by using more measurement sets collected at different times. The prediction accuracy of the regularized Newton–Gauss algorithm is superior to that of the iterative perturbation algorithm or the Levenberg–Marquardt algorithm for systems prone to significant random error, where the iterative perturbation algorithm and the Newton–Gauss method fail to converge. Otherwise all three algorithms have similar prediction accuracy, as all three are based on formulating the inverse problem as a least squares minimization problem. The predictive accuracy of the algorithms can be further improved by ad-hoc filtering applied based on prior knowledge of the system. The computational efficiency can be significantly improved using “frozen” algorithms that avoid the computationally demanding calculation of the Jacobian unless it is necessary. Based on the results, it can be concluded that thermal diffusion tomography has potential for characterizing thermal interfaces for fault detection and isolation. Thermal diffusion tomography can provide a lower cost and practical alternative to other imaging approaches for non-destructive testing of thermal interfaces due to the availability of low cost thermal imaging systems.

Acknowledgments

The author would like to thank Bogazici University for support under B.U. Research Fund AP-09A602P.

References

[1] M. Mukadam, J. Schake, P. Borgesen, K. Srihari, Effects of assembly process variables on voiding at a thermal interface, in: Proceedings of Thermal and Thermo-mechanical Phenomena in Electronic Systems Conference (ITHERM), Las Vegas, NV, USA, 2004, pp. 58–62.
[2] M. Pacheco, Z. Wang, L. Skoglung, Y. Liu, A. Medina, A. Raman, R. Dias, D. Goyal, S. Ramanathan, Advanced fault isolation and failure analysis techniques for future packaging technologies, Intel Technology Journal 4 (2005) 337–352.
[3] C. Guy, D. Ffytche, An Introduction to the Principles of Medical Imaging, Imperial College Press, London, 2005.
[4] S.R. Arridge, Optical tomography in medical imaging, Inverse Problems 15 (1999) R41–R93.
[5] L. Salvo, P. Cloetens, E. Maire, S. Zabler, J.J. Blandin, J.Y. Buffiere, W. Ludwig, E. Boller, D. Bellet, C. Josserond, X-ray micro-tomography an attractive characterization technique in materials science, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms 200 (2003) 273–286.
[6] V.P. Vavilov, D.A. Nesteruk, V.V. Shiryaev, A.I. Ivanov, W. Swiderski, Thermal (infrared) tomography: terminology, principal procedures, and application to nondestructive testing of composite materials, Russian Journal of Nondestructive Testing 46 (2010) 151–161.
[7] D. Zhao, New advances of seismic tomography and its applications to subduction zones and earthquake fault zones: a review, The Island Arc 10 (2001) 68–84.
[8] E.M. Haacke, R.W. Brown, M.R. Thomson, R. Venkatesan, Magnetic Resonance Imaging, Wiley, New York, 1999.
[9] A.C. Kak, M. Slaney, Principles of Computerized Tomographic Imaging, IEEE Press, New York, 1988.
[10] P. Carrion, Inverse Problems and Tomography in Acoustics and Seismology, Penn Pub. Co., Atlanta, 1987.


[11] V.F. Bakirov, R.A. Kline, Diffusion based thermal tomography, Journal of Heat Transfer 127 (2005) 1276–1279.
[12] A. Gupta, Y. Liu, N. Zamora, T. Paddock, Thermal imaging for detecting thermal interface issues in assembly and reliability stressing, in: Proceedings of Thermal and Thermomechanical Phenomena in Electronics Systems, ITHERM '06, San Diego, CA, USA, 2006, pp. 942–945.
[13] F. Natterer, The Mathematics of Computerized Tomography, SIAM, Philadelphia, 2001.
[14] N. Baddour, Fourier diffraction theorem for diffusion based thermal tomography, Journal of Physics A: Mathematical and General 39 (2006) 14379–14395.
[15] A.P. Gibson, J.C. Hebden, S.R. Arridge, Recent advances in diffuse optical imaging, Physics in Medicine and Biology 50 (2005) R1–R43.
[16] S.R. Arridge, M. Schweiger, Image reconstruction in optical tomography, Philosophical Transactions-Series B Biological Sciences 1354 (1997) 717–726.
[17] S. Arridge, A gradient based optimization scheme for optical tomography, Optics Express 6 (1998) 213–226.
[18] A.H. Hielscher, A.D. Klose, K.M. Hanson, Gradient based iterative image reconstruction scheme for time resolved optical tomography, IEEE Medical Imaging 3 (1999) 262–271.
[19] R.H. Chan, C.P. Cheung, H.W. Sun, Fast algorithms for problems on thermal tomography, in: Proceedings of the First International Workshop on Numerical Analysis and Its Applications, Rousse, Bulgaria, 1996, pp. 90–97.
[20] C.Y. Yang, A linear inverse model for the temperature-dependent thermal conductivity determination in one-dimensional problems, Applied Mathematical Modeling 22 (1998) 1–9.
[21] C.Y. Yang, Estimation of the temperature-dependent thermal conductivity in inverse heat conduction problems, Applied Mathematical Modeling 23 (1999) 469–478.
[22] C.Y. Yang, Determination of the temperature dependent thermophysical properties from temperature responses measured at medium's boundaries, International Journal of Heat and Mass Transfer 43 (2000) 1261–1270.
[23] C.H. Huang, S.C. Chin, A two-dimensional inverse problem in imaging the thermal conductivity of a non-homogeneous medium, International Journal of Heat and Mass Transfer 43 (2000) 4061–4071.
[24] C.H. Huang, C.Y. Huang, An inverse problem in estimating simultaneously the effective thermal conductivity and volumetric heat capacity of biological tissue, Applied Mathematical Modeling 31 (2007) 1785–1797.
[25] M.N. Özışık, Heat Conduction, Wiley, New York, 1980.
[26] M.N. Özışık, H.R.B. Orlande, Inverse Heat Transfer: Fundamentals and Applications, Taylor and Francis, New York, 2000.
[27] H. Erturk, O.A. Ezekoye, J.R. Howell, Comparison of three regularized solution techniques in a three-dimensional inverse radiation problem, Journal of Quantitative Spectroscopy and Radiative Transfer 73 (2002) 307–316.
[28] J.R. Howell, K.J. Daun, H. Erturk, M. Gamba, M. Hosseini Sarvari, The use of inverse methods for the design and control of radiant sources, JSME International Journal, Series B, Fluid and Thermal Engineering 46 (2003) 470–478.
[29] P.C. Hansen, Rank-deficient and Discrete Ill-posed Problems, SIAM Publications, Philadelphia, 1998.
[30] G. Machin, R. Simpson, M. Broussely, Calibration and validation of thermal imagers, in: Proceedings of 9th International Conference on Quantitative Infrared Thermography, Krakow, Poland, 2008.
[31] Y. He, Rapid thermal conductivity measurement with a hot disk sensor: Part 2. Characterization of thermal greases, Thermochimica Acta 426 (2005) 130–134.
[32] B. Kanmani, R.M. Vasu, Diffuse optical tomography through solving a system of quadratic equations without re-estimating the derivatives: the 'frozen-Newton' method, in: Proceedings of 2004 IEEE International Workshop on Biomedical Circuits and Systems, Singapore, 2004, pp. 17–20.
[33] K. Kurabayashi, K.E. Goodson, Precision measurement and mapping of die-attach thermal resistance, IEEE Transactions on Components, Packaging, and Manufacturing Technology-Part A 21 (1998) 506–514.
[34] R.W. Powell, C.Y. Ho, P.E. Liley, Thermal Conductivity of Selected Materials, National Bureau of Standards, Washington, DC, 1966, NSRDS-NBS 8.
[35] H.C. Cheng, W.H. Chen, H.F. Cheng, Theoretical and experimental characterization of heat dissipation in a board level microelectronic component, Applied Thermal Engineering 28 (2008) 575–588.
[36] W.M. Kays, M.E. Crawford, B. Weigand, Convective Heat and Mass Transfer, fourth ed., McGraw-Hill, Boston, 2005.
[37] S.J. Kline, F.A. McClintock, Describing uncertainties in single-sample experiments, Mechanical Engineering 75 (1953) 3–8.
