Image Analysis Methods for Paper Formation Evaluation

Image Analysis Methods for Paper Formation Evaluation

A thesis submitted in conformity with the requirements for the degree of Master of Applied Science, Graduate Department of Electrical & Computer Engineering, University of Toronto

© Copyright by Yinglin Chen, 1998

National Library of Canada
Acquisitions and Bibliographic Services
395 Wellington Street, Ottawa ON K1A 0N4, Canada

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.

IMAGE ANALYSIS METHODS FOR PAPER FORMATION EVALUATION
M.A.Sc., 1998
Yinglin Chen
Graduate Department of Electrical & Computer Engineering
University of Toronto

This thesis discusses image analysis methods for global paper formation evaluation and for fracture occurrence analysis.

Two methods were studied for the first problem: the local gradient method, based on a planar stochastic structure, and the co-occurrence matrix method, based on dependence texture analysis. The results showed that both give reasonable evaluations of paper formation. Improvements to their time efficiency and algorithms were made.

For the second problem, a framework was set up to predict the break stress field based on grammage, local fiber orientation estimated by a two-dimensional power spectrum of the mass distribution, and local relative bonded area estimated by a newly proposed method based on the Kubelka-Munk theory. The results indicated a multivariable correlation between the break stress and the structural properties.

ACKNOWLEDGEMENTS

I wish to take this opportunity to express heartfelt thanks to Professor Raymond H. Kwong for guidance, financial support and encouragement throughout the work, and for his careful reading and correction of the manuscript.

Special thanks to Professor Mark T. Kortschot, who gave me valuable instructions, supervised the experimental work, and acquainted me with pulp and paper engineering during the course of this research. Many thanks to Doctor Andrew X. Wu for his cooperative and constructive work in the experimental part of this research.

I wish to thank also Professor W. Murray Wonham for financial support during the work. Finally, I am grateful to Mr. Wai Ng, Mr. Rob Sighn, Dr. Ning Yan and Dr. Xintong Lu for all their helpful discussions and encouragement.


Notation List

Δg - magnitude of the local gradient
θ - orientation of the local gradient
C - local coherence
e² - global eccentricity
w - size of the domain of the operator
μ_Δg, σ²_Δg - mean and variance of the local density variability
CV_Δg - coefficient of variation, defined as CV_Δg = σ_Δg / μ_Δg
k_Δg - parameter of the normalized gamma model
μ_C, σ²_C - mean and variance of the local coherence values
CV_C - coefficient of variation, defined as CV_C = σ_C / μ_C
λ_C - parameter of the negative exponential model
T_orig - computation time for one step of local gradient using the original algorithm
T_mod - computation time for one step of local gradient using the recursive algorithm
GLCM - gray level co-occurrence matrix
P - normalized gray level co-occurrence matrix
N_g - number of distinct gray levels in the digitized image
N_f - floc number
FS - floc size
FD - floc distance
FI - formation index
P_I - probability of a transition of Type I, where I ∈ {B1, B2, B3, C1, C2, C3}
N_p - number of pixels along one scan line on the digitized image
L - length of the scanning line on the digitized image
P_peak - maximal value in the gray level co-occurrence matrix
S - number of nonzero elements in the gray level co-occurrence matrix
h(θ) - fiber orientation distribution function
p(θ) - normalized angular distribution of the power spectrum of the mass distribution
B, L, m, G - digitized grammage map; fiber width; fiber length; mass per unit area of one fiber (Chapter 3)
coherence90 - summation of absolute projections of probabilities in all directions with respect to π/2
k1-k2 - frequency band for fiber orientation analysis
A_b - total bonded fiber surface area
A_t - one-half the total external surface area of fibers
RBA, RUBA - relative bonded area, relative unbonded area
S - reflectance coefficient
N - number of unbonded surfaces
R - light reflectance
T - light transmission
S_0 - light source
N_t - total number of surfaces
σ_b - break stress

Contents

Acknowledgments ... i
Notation List ... ii

1 Introduction ... 1
  1.1 Background Overview ... 1
  1.2 Objectives and Outline of This Thesis ... 8

2 Image Analysis Methods for Global Paper Formation Evaluation ... 12
  2.2 Global Anisotropy Analysis ... 13
    2.2.1 Theory ... 13
    2.2.2 Improvement of the Algorithm ... 18
    2.2.3 Implementation ... 24
    2.2.4 Experimental Result ... 28
    2.2.5 Conclusion ... 30
  2.3 Dependence Texture Analysis ... 31
    2.3.1 Theory ... 31
    2.3.2 Improvement of the Algorithm ... 37
    2.3.3 Implementation ... 44
    2.3.4 Experimental Result ... 44
    2.3.5 Conclusion ... 46

3 Paper Fracture Occurrence Analysis ... 48
  3.2 Local Gradient Analysis ... 50
    3.2.1 Theory ... 50
    3.2.2 Implementation ... 50
    3.2.3 Experimental Result ... 51
  3.3 Local Fiber Orientation Analysis ... 53
    3.3.1 Theory ... 53
    3.3.2 New Analytical Result of the Algorithm ... 56
    3.3.3 Implementation ... 61
    3.3.4 Experimental Result ... 63
  3.4 Local Relative Bonded Area Analysis ... 64
  3.5 Experiment Procedure ... 70
    3.5.1 Material, Imaging and Testing Systems ... 70
    3.5.2 Methodology ... 72
    3.5.3 Experiment Procedure ... 72
  3.6 Experimental Result Analysis ... 77

References ... 86
Appendix ... 90

Chapter 1 Introduction

1.1 Background Overview

The manufacture of paper is one of the oldest and most widespread engineering technologies. In modern times, continuous progress in paper making has contributed much to the advancement of culture and knowledge. For example, the computer era has increased demand for more precisely engineered paper for sophisticated commercial printing and converting operations. At the same time, trees, the raw material for paper manufacturing, are a precious resource, and their wasteful usage is intolerable. Thus, finding better control technology to improve the efficiency of the paper making process is very important. A paper making production line, from headbox to final product, is shown in Fig 1.1.

The main functions of each step are [1]:

Headbox: Spreads the furnish (pulp) out to the width of the paper machine; disperses fibers and other materials uniformly; matches the slurry speed with the forming-section speed. The greatest effort for better control is made here.



Fig 1.1 Paper Making Production Line

Fourdrinier Forming Section: Forms the sheet of paper by a filtration process; dewaters the sheet to approximately 20% solids.

Press Section: Dewaters the sheet to 35-45% solids; consolidates the web; decreases the surface roughness.

Dryer Section: Dewaters the sheet to 94-97% solids; sets the inter-fiber bonds; modifies the surface characteristics of the sheet.


Reel Section: Collects the paper on a large roll at the reel.

The most important properties determining paper quality, such as mechanical, porous or optical properties, are completely determined by the structure of the paper. The mechanical properties depend primarily on the number of fiber-fiber contacts, the porous properties on the number and size of the inter-fiber spaces, and the optical properties on the density and roughness. Therefore, the study of the structural properties of paper is essential for paper quality control.

In the past few years, more and more work has been done on applying image analysis technologies to the extraction of paper structural properties. The images can be captured by β-radiography or light transmission [2]. Compared with chemical and physical methods, image analysis methods have several advantages. The information derived is more essential and accurate because the methods are based on the direct study of the paper structure (images of the fibrous structure). They are nondestructive: taking β-radiographs or optical images does not destroy or change the properties of the paper. They can potentially be applied online as real-time imaging technology develops. Two major challenges in this area are to link and verify the texture features extracted from images with the paper properties of interest, and to improve the time efficiency of the imaging technologies. Related literature can usually be found in the TAPPI Journal or the Journal of Pulp and Paper Science. Here we concentrate only on the topics of concern in this study, i.e., global paper formation evaluation, fiber orientation estimation, relative bonded area (RBA) measurement and fracture occurrence analysis.

Formation is an important property of paper that affects its strength and opacity, as well as the printability and visual appearance of all paper grades. When formation is referred to, the first thing that comes to mind is uniformity, which describes the evenness of the mass distribution. However, uniformity does not take into account the size distribution of the non-uniformity, i.e., how the regions of different basis weight are grouped, whether in large flocs or small grains. So, indices representing floc features are necessary complements to uniformity. Moreover, anisotropy is also an important property, not only because it has an impact on paper quality, but also because it reflects the motion of the paper making machine and thus provides valuable information to control engineers.

Spectral analysis methods are the most commonly used techniques to describe basis weight variations, i.e., non-uniformity, and to determine systematically the size and shape of flocs; see for example Attwood [31], Norman [28], Haglund [13], Brendemuehl [30] and Jordan [29]. Devries [27] also developed a continuous time-series model from discrete light-transmission profiles and used it to obtain spectral moments of the profiles; some indices derived from the moments were used to characterize the formation. Gorres [45] proposed three indices of floc features obtained from ensemble-averaged linear auto-correlation functions.

The texture dependence method has also been a popular technique for extracting uniformity and floc features, as in Cresson [6][7][9] and Yuhara [32][33]. The Pulp and Paper Research Institute of Canada developed a PC-based image analysis system with expert software which emulated human perception based on the TAPPI, CPPA and ISO test methods in dirt speck counting, paper formation quality and solid print non-uniformity [46]. A convenient and intrinsic index, the formation number, was introduced by Dodson [34]; it is the ratio of the variance of local basis weight for a commercial sheet to the corresponding variance of a random sheet made from the same furnish. Gagnon [25] also showed the feasibility of fast, non-destructive, high-spatial-resolution scanning of basis weight, orientation anisotropy and dominant orientation. Recently, Scharcanski [4] used a local gradient function to extract information on spatial variability and anisotropy based on planar stochastic structures. Deng [17] studied paper properties from the point of view of stochastic engineered models; her book [17] is a synthesis of studies on this topic. Our study is largely concerned with exploring the application of algorithms which are accurate, fast and have the potential for online paper formation evaluation.

The study of paper fracture occurrence is of great importance to paper quality evaluation. Catastrophic failure may happen if the paper cannot tolerate the stresses in high-speed printing presses and fractures. Another danger lies in the paper manufacturing process itself: since defects can never be eliminated completely from paper, they have the potential to cause damage during pressing, calendering or coating operations. So two engineering design strategies are to minimize fracture incidence and to maximize fracture tolerance. To carry out these strategies, a basic step is to pursue a model relating structural properties to failure [41][42][43][44]. The development of such a model consists of two steps. The first is to identify a failure criterion which describes the suspected critical stress, strain or strength distribution that will result in failure; in other words, a failure criterion is a condition to be satisfied for failure to occur. The second is to predict the stress, strain or strength field when a tensile load is imposed on the paper sheet. A recent paper by Wong [19] studied the failure criterion and developed a framework for predicting the strength of a sheet based on its structure at the millimeter scale. The only structural property considered was the grammage variation. The result showed that the grammage is not sufficient for modeling failure and that the study of other variables, like fiber orientation and relative bonded area (RBA), is necessary. It was also shown that the inspection zone size may be too large to reveal the true story of fracture occurrence. Thus, in our study, efforts are made to predict break stress based not only on grammage, but also on fiber orientation and RBA. Certainly, the measurements of fiber orientation and RBA are important aspects of this study. A background survey of fiber orientation measurement can be found in Yuhara [3]. Here we give only a brief summary. The fiber orientation distribution is usually determined from the angular variation of mechanical properties, such as sonic velocity and zero-span tensile strength. These techniques are not a direct measurement of the geometric fiber orientation and are influenced by the anisotropy of the fiber floc distribution and by drying stresses. Other techniques include X-ray diffraction, microwave transmission, and laser diffraction and transmission. Although they are strongly correlated to the fiber orientation, X-ray diffraction and microwave transmission give the cellulose molecular chain orientation rather than the geometric fiber orientation, and laser diffraction and transmission are influenced by the light scattering ability, i.e., the fiber bonding condition. To solve the above problems, Yuhara et al. [3, 38, 39, 40] proposed a method using the two-dimensional power spectrum of high-resolution soft X-ray images to measure the fiber orientation. This method was theoretically proved and experimentally confirmed to be effective. However, the method has its limitations in assuming that all the fibers in one sheet are identical, i.e., having the same length, width and mass distribution. Thus, our study concentrates on applying Yuhara's method to paper fracture occurrence analysis, and on extending the existing theoretical result to a more general case.

As paper is a strongly interacting fiber network, the structure and properties of the bonded regions are essential factors which affect almost all of its mechanical, optical, thermal and electrical properties [22]. RBA is defined as the fraction of the fibers' external surface area that is bonded. Methods for measuring the bonding properties of paper can be divided into four categories: the microscopic observation method, the gas adsorption and optical scattering method, the digitizing method, and other methods [20][21][22][23]. Microscopic observation examines the bonding state by observing the paper sample under light and electron microscopy. Although it is the most direct way to provide information on the nature of fiber-fiber bonds, difficulties in measurement and uncertainties in the interpretation of results are reported, such as the microscopic image of paper rarely being clear and the tedious examination work being affected by surface fibers. The gas adsorption and optical scattering method is based on the measurement of the surface area of the constituent fibers. Of all the above methods, the one commonly used is the optical method, which utilizes the specific scattering coefficient calculated from an adaptation of the Kubelka-Munk theory. However, the method is limited by several assumptions and is not yet universally accepted; at the same time, it is fairly complicated and time-consuming to implement because nitrogen adsorption is required. The digitizing method is a straightforward way of analyzing the internal fiber network geometry. It essentially consists of the analysis of photomicrographs of sheet cross sections on a digitizer interfaced with a computerized data processing system. This method requires a large amount of computation for each sheet cross section, taking photomicrographs of sheet cross sections is very time-consuming, and destroying the paper sheet is unavoidable. Other methods include the DC electrical conductivity method and the web shrinkage energy method, but these have either not been fully explored in application or still require more work to make them complete. So, the existing methods are usually complicated, destructive, time-consuming and, more important, valid for RBA measurement of the whole sheet rather than of a local area. Therefore, the focus of our study is to explore a convenient, nondestructive and fast method to measure the RBA.

1.2 Objectives and Outline of This Thesis

There are two main objectives for this thesis. One is to develop algorithms which are accurate and fast for online paper formation evaluation. The other is fracture occurrence analysis based on local structural properties: grammage, fiber orientation and relative bonded area (RBA). Consequently, measurements of fiber orientation and RBA are necessary sub-topics of interest. Nowadays a paper manufacturing machine can continuously produce paper ten meters wide at twenty meters per second. Thus, making the image analysis algorithms available online is of great importance, and improving the time efficiency of the algorithms is always a major objective of this study.

In Chapter 2, two algorithms, the local gradient method [4] and the co-occurrence matrix method [6][7][8][9], are studied. Both of them describe the uniformity features of the paper. In addition, the first one represents the anisotropy feature as well, while the second one focuses also on the floc features such as floc size, floc number and floc distance.

In Section 2.2, the first method is studied. Here paper is viewed as a planar stochastic structure described by random variables. By analyzing the statistical properties of these random variables, the estimated quantities are then used to evaluate paper uniformity, anisotropy, etc. The reason for choosing a stochastic model lies in the nature of the paper forming process, i.e., a large number of fibers (roughly 5 × 10^6 fibers in a thin sheet of 10 cm²) dispersed in water are deposited from the headbox onto a wire screen. The process yields randomness in the spatial distribution of fibers and indicates the necessity of a stochastic rather than a deterministic model. A local gradient function is used to measure the change of density. Through computation of the local gradient, both global and local uniformity and anisotropy can be extracted. The amplitude of the gradient vector measures the degree of uniformity, i.e., the larger the amplitude, the less the uniformity. The global anisotropy is measured by an index called eccentricity, where higher eccentricity corresponds to higher global anisotropy. Local anisotropy is measured by an index called coherence, which describes the degree of agreement between the orientation of the gradients and the local dominant orientation; higher coherence corresponds to higher local anisotropy. Efforts are made to improve the time efficiency of the algorithm by devising a recursive algorithm. Several ambiguous points concerning the definitions and proofs related to eccentricity and coherence are also clarified and complemented.

In Section 2.3, the second method is studied. The dependence texture analysis method (co-occurrence matrix) gives the probabilities of two pixels with specified gray levels and spatial separation. Standard statistics, including energy, entropy, contrast and homogeneity, measure the uniformity: high energy, high homogeneity, low contrast and low entropy correspond to high uniformity. Additional indices, such as floc number, floc size and floc distance, are used to describe the floc features of the paper. The basic idea of these additional indices is to count the occurrences of specific types of edge combinations which clamp flocs. This study improves on the previous algorithms to give a more complete and accurate estimate of the floc features. The experimental results show that the method has the potential to represent the anisotropy feature as well. This method estimates indices with visual effects (the floc features), and has the potential to replace the human perception evaluation used in industry. Section 2.4 describes future work on global formation evaluation: more effort will be made to improve the time efficiency by finding an accuracy-computation time tradeoff and implementing real-time imaging technologies.

In Chapter 3, paper fracture occurrence is studied based on the measurements of three local structural properties. The first step is to choose a failure criterion, and here it is chosen to be the stress criterion, i.e., failure will occur at the local region with the smallest break stress. Predicting the break stress of a local region from three local structural properties, the local gradient, the local fiber orientation and the local relative bonded area, is the second step. The final goal is to find the correlation between local strength and the foregoing three local structural properties. Thereafter, a mathematical model will be developed to predict fracture occurrence without destroying the paper.

Section 3.2 introduces the local gradient analysis. It is based on the same theory found in Section 2.2. In addition to the coherence and uniformity computation, the local gradient, coherence and dominant orientation are visualized by quiver plots (a built-in function of the computer analysis and design software MATLAB). The length of a vector represents its magnitude and its direction the orientation.
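For illustration, a minimal MATLAB sketch of this visualization step is given below. It assumes that the per-zone gradient magnitude map `dG` and orientation map `theta` (in radians) have already been computed; the variable names are illustrative, not those of the thesis programs.

```matlab
% Visualize the local gradient field with a quiver plot.
% dG    - matrix of local gradient magnitudes (one value per inspection zone)
% theta - matrix of local gradient orientations in radians
[nRows, nCols] = size(dG);
[X, Y] = meshgrid(1:nCols, 1:nRows);   % zone-centre coordinates
U = dG .* cos(theta);                  % x-component of each gradient vector
V = dG .* sin(theta);                  % y-component of each gradient vector
quiver(X, Y, U, V);                    % arrow length ~ magnitude, arrow angle ~ orientation
axis ij; axis equal;
title('Local gradient field');
```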

As introduced in Section 3.3, the local fiber orientation is represented by its probability distribution. It was proposed and verified in [3] that the fiber orientation distribution can be derived by rotating the angular distribution of the power spectrum of the local density map by π/2. In this study, a new index called coherence90 is introduced to describe the agreement between the orientations of all the fibers and the direction perpendicular to the forces imposed on the specimen. It is expected that the larger the coherence90, the stronger the specimen, because there are more chances of fiber-fiber bonding. The fast Fourier transform is implemented to improve the time efficiency. In addition, the analytical result is extended to a more realistic model of fibers which allows the fibers to have different lengths and widths.
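As a rough illustration of this idea (not the thesis implementation of Section 3.3), the following MATLAB sketch computes the two-dimensional power spectrum of a local grammage map and bins its energy by angle; rotating the resulting angular distribution by π/2 then gives an estimate proportional to the fiber orientation distribution. The frequency band limits `k1` and `k2` and the number of angular bins are assumed parameters.

```matlab
% Angular distribution of the 2-D power spectrum of a local grammage map g.
P = abs(fftshift(fft2(g - mean(g(:))))).^2;          % centred power spectrum
[n, m] = size(P);
[fx, fy] = meshgrid((1:m) - (floor(m/2)+1), (1:n) - (floor(n/2)+1));
r   = sqrt(fx.^2 + fy.^2);                           % radial frequency
phi = mod(atan2(fy, fx), pi);                        % spectrum angle in [0, pi)
k1 = 3;  k2 = min(n, m)/4;                           % assumed frequency band
nBins = 36;  p = zeros(1, nBins);
for b = 1:nBins
    mask = r >= k1 & r <= k2 & ...
           phi >= (b-1)*pi/nBins & phi < b*pi/nBins;
    p(b) = sum(P(mask));                             % spectral energy in this angular bin
end
p = p / sum(p);                                      % normalized angular distribution p(theta)
h = circshift(p, [0, nBins/2]);                      % rotate by pi/2 -> fiber orientation estimate
```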

A new method for measuring the local relative bonded area based on the Kubelka-Munk theory is proposed and applied in this thesis (see Section 3.4). The Kubelka-Munk theory is widely used in materials science. In this study, the theory is extended to a discrete form and, for the first time, applied to relative bonded area estimation. A local region of paper is viewed as a multi-layered structure divided by the free surfaces (unbonded fiber surfaces); that is, two fibers bonded together are considered a single layer. When a flux of light is transmitted through the region, reflection occurs only at free surfaces, and absorption is neglected since the paper is considered to be very thin. On this basis, we have developed a relationship between the transmitted light, the reflected light and the number of free surfaces (unbonded area). Also, the total number of fiber surfaces is proportional to the grammage of the local area, and the grammage can be determined from β-radiography. Therefore, using these results, we have derived an estimate of the relative bonded area (or relative unbonded area). It is expected that the larger the local unbonded area, the weaker the paper sheet.
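The sketch below is only a schematic reading of this idea, not the derivation of Section 3.4. It assumes the non-absorbing Kubelka-Munk limit, in which the reciprocal of the transmittance grows linearly with the amount of scattering material, so that the number of free (unbonded) surfaces is taken proportional to (1/T - 1) and the total number of surfaces proportional to the local grammage; the proportionality constants c1 and c2 are unknown calibration factors.

```matlab
% Schematic local relative-unbonded-area estimate (assumed non-absorbing
% Kubelka-Munk limit; c1, c2 are unknown calibration constants).
function ruba = localRUBA(T, grammage, c1, c2)
% T        - measured light transmittance of the local region (0 < T <= 1)
% grammage - local grammage from beta-radiography (g/m^2)
nFree  = c1 * (1 ./ T - 1);    % number of free (unbonded) surfaces, assumed ~ 1/T - 1
nTotal = c2 * grammage;        % total number of fiber surfaces, assumed ~ grammage
ruba   = nFree ./ nTotal;      % relative unbonded area; RBA = 1 - ruba
end
```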

A large amount of experimental work has been carried out in this study. The whole experimental system was designed and set up as introduced in Section 3.5. The major steps are paper sheet making, β-radiography, threshold digitization, tensile sample cutting, local β-radiography and local optical measurement, tensile testing and, finally, image analysis.

The experimental results and conclusions are given in Sections 3.6 and 3.7. The analysis can be divided into two steps: correlation between several structural properties, and correlation between structural and mechanical properties. For the first step, the results show that the relative unbonded area (RUBA) and coherence90 have a positive correlation with the mean grammage. More important, for the second step, it is shown that the break stress has a positive correlation with mean grammage and coherence90, and a negative correlation with RUBA. These results show that there is a correlation between the structural and mechanical properties of paper. Moreover, the correlation is a complicated, multivariable dependence. Although it is still too early to establish a model to predict paper strength, the technique and equipment have been shown to be effective and the methodology is useful for paper fracture prediction.

Future work is described in Section 3.8. It introduces the approach and ongoing work towards the final goal of paper fracture analysis, i.e., building a mathematical model and verifying the validity of the model by experiments.

Chapter 2 Image Analysis Methods for Global Paper Formation Evaluation

2.1 Motivation

As introduced in Section 1.1, formation is an important property of paper which not only affects its appearance, printability and mechanical properties, but also provides valuable information to paper machine control engineers. Usually, paper formation can be described from three aspects: uniformity, anisotropy and floc features. Although a great deal of research has been done on using image analysis techniques to extract information on paper formation, such techniques have rarely been employed in the pulp and paper industry. The slow acceptance of image analysis methods is due mainly to high cost and the need for highly skilled operators. Furthermore, instability and slow speed make them difficult to standardize and apply online.

Therefore, this study focuses on exploring applications of image analysis methods to paper formation evaluation. In order to derive a relatively complete evaluation of paper formation, two methods, the Local Gradient Method [4] and the Co-occurrence Matrix Method [6][7][8][9], are studied. Both of them describe the uniformity of the paper. In addition, the first one represents the anisotropy as well, and the second one also captures the floc features such as floc size, floc number and floc distance. Efforts are also made to improve the time efficiency of these methods and to explore the potential of their online application.

2.2 Global Anisotropy Analysis

2.2.1 Theory [4]

In this method, a local gradient function provides information on global anisotropy and local density variability from two-dimensional density images. A Prewitt operator is used to calculate the gradient by two-dimensional convolution.

A Prewitt operator of order N (N = 2w + 1) is defined as follows. In the horizontal direction, it is the N × N matrix whose left w columns are -1, whose central column is 0, and whose right w columns are +1; similarly, the row-direction (vertical) operator is its transpose.

Convolving the density map with these operators gives the horizontal and vertical responses H(x, y) and V(x, y). The local gradient is defined by its magnitude and orientation. Its magnitude is defined as

\Delta g(x,y) = \sqrt{H(x,y)^2 + V(x,y)^2} \qquad (2.1)

and its orientation as

\theta(x,y) = \arctan\!\left( \frac{V(x,y)}{H(x,y)} \right), \qquad (2.2)

where g(x, y) denotes the average density in a zone about the location (x, y) and w is the size of the domain of the operator; for an N × N operator, w = (N - 1)/2. These formulas ((2.1) to (2.8) in the original numbering) are equivalent to a two-dimensional convolution with the Prewitt operator. The local gradient describes the magnitude and orientation of the change, or flow, of the mass distribution in a local area of the paper sheet. From the formulas above, H and V are the magnitudes of the change of mass distribution in the horizontal and vertical directions respectively.
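A minimal MATLAB sketch of this computation is given below, assuming the grammage map is held in a matrix `g` and using plain (unnormalized) N × N Prewitt kernels; any normalization constant used in the thesis would simply scale `H` and `V`.

```matlab
% Local gradient magnitude and orientation via N x N Prewitt operators.
w  = 2;                                     % half-width of the operator domain
N  = 2*w + 1;                               % operator order
Ph = [-ones(N,w), zeros(N,1), ones(N,w)];   % horizontal Prewitt kernel
Pv = Ph';                                   % vertical (row-direction) kernel
H  = conv2(g, Ph, 'same');                  % horizontal response
V  = conv2(g, Pv, 'same');                  % vertical response
dG    = sqrt(H.^2 + V.^2);                  % local gradient magnitude
theta = atan2(V, H);                        % local gradient orientation
```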

The behavior of g(x, y) is well described by a Gaussian distribution. According to the Central Limit Theorem, as w increases, the partial sums H+, H-, V+ and V- approach a normal density. By [4], the summation of as few as five random variables already produces a reasonable approximation of a Gaussian density, which corresponds to an operator of domain size w = 2. Therefore, when w ≥ 2, H and V can be assumed Gaussian. The behavior of Δg(x, y) can then be modeled as a gamma distribution whose parameters are associated with the forming process,

f(\Delta g) = \frac{b^{k}}{\Gamma(k)}\, \Delta g^{\,k-1} e^{-b \Delta g}, \qquad \Delta g \ge 0 .

If we normalize the above distribution to have unit mean, i.e., k/b = 1, then b = k and the distribution reduces to a one-parameter normalized gamma model with parameter k_Δg.

Here the global anisotropy is represented by the distribution of the angle θ, where θ is the associated angle at each pixel (x, y), obtained by (2.2). When all angles θ are equally probable, the sample is isotropic. In the case of anisotropy, the eccentricity e² is defined in terms of λ₁ and λ₂, the two distinct eigenvalues of the covariance matrix of the angular distribution. The larger e², the more anisotropic the sheet. The details of computing e² are described in 2.2.3.

Local dominant orientation and local coherence are used to represent the local anisotropy. Along the local dominant orientation the density tends to vary fastest and presents the largest spatial variance in gradient directionality. The local coherence measures the degree of agreement between the orientations of the gradients in a window of size (2w + 1) × (2w + 1) and the local dominant orientation. The dominant orientation θ is estimated by maximizing the absolute values of the projections with respect to θ in the mean square sense,

S(\theta) = \frac{1}{N} \sum_{j=1}^{N} \left[ \Delta g_j \cos(\theta_j - \theta) \right]^2 , \qquad (2.11)

where the absolute value of the projection with respect to θ is |Δg_j cos(θ_j - θ)|, N = (2w + 1) × (2w + 1), and θ_j is the associated angle at pixel j defined by (2.9).

Differentiating the equation above and setting the derivative to 0, we obtain

\tan 2\theta = \frac{\sum_{j} \Delta g_j^{2} \sin 2\theta_j}{\sum_{j} \Delta g_j^{2} \cos 2\theta_j} , \qquad (2.12)

in which θ maximizes S and is the best estimate of the local dominant orientation. For a proof of this statement, see 2.2.2.

The local coherence is defined as the summation of the absolute values of the projections of all the local gradients in the local area with respect to the local dominant orientation θ′,

C = \sum_{j=1}^{N} \left| \Delta g_j \cos(\theta_j - \theta') \right| ,

where θ′ is the local dominant orientation. The discussion of the definition of local coherence is given in 2.2.2. High coherence values indicate similarity of the texture properties in specific directions. The behavior of the local coherence can sometimes be modeled as a negative exponential distribution,

f(C) = \lambda_C \, e^{-\lambda_C C}, \qquad C \ge 0 .
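As an illustration, a minimal MATLAB sketch of the local dominant orientation and coherence computation for one (2w+1) × (2w+1) window is shown below; `dGwin` and `thetaWin` are assumed to hold the gradient magnitudes and orientations of that window.

```matlab
% Local dominant orientation and coherence for one analysis window.
% dGwin, thetaWin - gradient magnitudes and orientations in the window
a = dGwin(:).^2;                                         % squared gradient magnitudes
thetaDom = 0.5 * atan2(sum(a .* sin(2*thetaWin(:))), ...
                       sum(a .* cos(2*thetaWin(:))));    % solves tan(2*theta) as in (2.12)
C = sum(abs(dGwin(:) .* cos(thetaWin(:) - thetaDom)));   % coherence: sum of absolute projections
```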

Hence, the series of indices that represent the paper formation are:

e² - global eccentricity
μ_Δg, σ²_Δg - mean and variance of the local density variability
CV_Δg - coefficient of variation, defined as CV_Δg = σ_Δg / μ_Δg
k_Δg - parameter of the normalized gamma model
μ_C, σ²_C - mean and variance of the local coherence values
CV_C - coefficient of variation, defined as CV_C = σ_C / μ_C
λ_C - parameter of the negative exponential model
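A brief MATLAB sketch of how these summary indices could be computed from the maps of Δg and C is given below. The closed-form moment estimates used for the model parameters (shape k = 1/CV² for a unit-mean gamma distribution and rate λ = 1/mean for an exponential distribution) are standard moment-matching formulas, and are assumptions insofar as the thesis may estimate them differently.

```matlab
% Summary indices from the local gradient map dG and coherence map Cmap.
muG  = mean(dG(:));    sigG = std(dG(:));
cvG  = sigG / muG;                 % coefficient of variation of dG
kG   = 1 / cvG^2;                  % gamma shape parameter by moment matching
muC  = mean(Cmap(:));  sigC = std(Cmap(:));
cvC  = sigC / muC;                 % coefficient of variation of C
lamC = 1 / muC;                    % exponential rate parameter by moment matching
```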

2.2.2 Improvement of the Algorithm

2.2.2.1 Improving Time Efficiency

As shown in [4], the Local Gradient Function Method has the potential to be modified for on-line quality control purposes. To increase the time efficiency, a recursive algorithm is devised for the gradient computation. The basic idea is to divide a step of computation into several parts, some of which can be reused in later computational steps.

Algorithm Description

Here we still use the notation of 2.2.1 and consider H(x, y) as an example. H(x, y) can be written as a sum of column sums of g over the operator window. More specifically, for the Prewitt operator, if we denote the column sum

\mathrm{newCol}(x, y) = \sum_{y' = y - w}^{y + w} g(x + w, y') ,

then H(x, y) can be obtained recursively from the value at the previous position by adding the column sums that enter the window and subtracting those that leave it. Substituting these expressions gives the recursive form of H(x, y). For V(x, y) the idea is the same; similarly, we have

V(x, y) = \mathrm{newRow}(x, y) + V^{+}(x, y - 1) - \mathrm{newRow}(x, y - w) - V^{+}(x, y - w - 1) .

Denote the time needed for a multiplication operation by T_m and for an addition by T_a. The computation time needed for H(x, y) or V(x, y) using the original algorithm can be expressed as

T_{orig} = 2w(2w + 1)\,T_m + (4w^2 + 1)\,T_a ,

while the modified (recursive) algorithm needs only a fixed number of additions and subtractions of previously computed column (or row) sums per step. The computation time is therefore considerably reduced by the modification. For example, when w = 2 the modified algorithm needs roughly 41.18% and 50% of the original computation time (for the general and the Prewitt-specific operation counts respectively), and when w = 4 roughly 16.92% and 50.7%. The larger the w, the more computation time can be saved (see Fig 2.1). For experimental results, see Table 2.2 in 2.2.4, where w = 2.
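The exact recursion used in the thesis is only partially legible above, but the underlying idea (reusing running column sums so that each new position costs a constant number of additions) can be sketched in MATLAB as follows; this is an illustrative implementation of the general technique, not a transcription of the thesis code.

```matlab
% Sketch of a running-sum (recursive) evaluation of the horizontal
% Prewitt response H along one image row, reusing column sums.
function H = prewittRowRecursive(g, x, w)
% g - grammage map, x - row index of the window centres, w - half-width
[~, nCols] = size(g);
colSum = sum(g(x-w:x+w, :), 1);     % vertical sum of each column (1 x nCols)
H = zeros(1, nCols);
for y = w+1 : nCols-w
    if y == w+1
        % first window: full right-half minus left-half sum
        H(y) = sum(colSum(y+1:y+w)) - sum(colSum(y-w:y-1));
    else
        % slide one column: add the entering sums, subtract the leaving ones
        H(y) = H(y-1) + colSum(y+w) - colSum(y) ...
                      - colSum(y-1) + colSum(y-w-1);
    end
end
end
```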

Fig 2.1 Time Efficiency of the Recursive Algorithm for the Prewitt Operator (horizontal axis: order of the Prewitt operator, 2w + 1)

2.2.2.2 Discussion of the Local Coherence Estimation

Proof of the best estimate θ

To prove that θ in (2.12) really is the best estimate of the local dominant orientation, it is necessary to show that d²S/dθ² < 0.

If we denote each gradient vector in polar form, i.e. Δg_j e^{iθ_j}, then the square of one vector is |Δg_j|² e^{i2θ_j}. If we decompose each squared gradient vector into two components, one along the 2θ = 0° direction and the other along the 2θ = 90° direction, take the summations of the components along these two directions respectively, and denote them by |Δg|₀ and |Δg|₉₀, we have

|\Delta g|_{0} = \sum_j |\Delta g_j|^2 \cos 2\theta_j , \qquad |\Delta g|_{90} = \sum_j |\Delta g_j|^2 \sin 2\theta_j .

Denote the vector summation of all the squared gradients as |Δg_s| e^{iθ_s}; then θ_s satisfies

\tan \theta_s = \frac{\sum_j |\Delta g_j|^2 \sin 2\theta_j}{\sum_j |\Delta g_j|^2 \cos 2\theta_j} .

We can see that θ_s is exactly the same as the 2θ in (2.12).

Taking the second-order derivative of (2.11), we have

\frac{d^2 S}{d\theta^2} = -\frac{2}{N} \sum_j |\Delta g_j|^2 \cos(2\theta_j - 2\theta) ,

so we need to prove that

\sum_j |\Delta g_j|^2 \cos(2\theta_j - 2\theta) \ge 0 .

Note that the left side of the inequality above is the vector summation of the projections of all the squared gradients with respect to 2θ; it is equivalent to |Δg_s| cos(θ_s - 2θ). From above we also have θ_s = 2θ, so the left side equals |Δg_s|, which is always greater than or equal to 0. Therefore, it is proved that θ in (2.12) really is the best estimate of the local dominant orientation.

Definition of local coherence

In [4], the local coherence C is defined as the summation of the projections of the local gradients in the local area with respect to the local dominant orientation,

C = \sum_{j} \Delta g_j \cos(\theta_j - \theta') , \qquad (2.28)

where θ′ ∈ [0, π] and θ_j ∈ [0, 2π]. Obviously, when θ_j - θ′ ∈ [π/2, π] or [π, 3π/2], Δg_j cos(θ_j - θ′) ≤ 0. So it is possible to have C ≤ 0, which contradicts the negative exponential model for the behavior of C, where C ≥ 0. On the other hand, according to the definition of θ′, it is the angle which maximizes the summation of the projections with respect to it in the mean square sense, i.e. it maximizes the absolute values of the projections. Therefore, the local coherence should be defined as follows:

C = \sum_{j} \left| \Delta g_j \cos(\theta_j - \theta') \right| . \qquad (2.29)

Experimental results using (2.28) are shown in 2.2.4; Table 2.3 verifies that C can be negative and that even the absolute values do not show a reasonable ranking. From Table 2.1 in 2.2.4, it can be seen that formula (2.29) gives good results.

Interpretation of the local coherence value

In [4], local dominant orientations are interpreted as the orientations in which the density tends to vary most slowly and, consequently, to present the least spatial variance in gradient directionality. But from 2.2.1 we know the local dominant orientation maximizes the projections of the gradient vectors with respect to it, i.e. it maximizes the magnitude of the resultant gradient vector. The larger the magnitude of a gradient vector, the larger the density variation along the gradient direction. Thus, local dominant orientations should be interpreted as the orientations in which the density tends to vary the fastest and to present the largest spatial variance in gradient directionality. Local coherence values indicate the similarities of the texture properties along certain directions; here the texture property means having a large gradient magnitude along the local dominant orientation.

2.2.3 Implementation

2.2.3.1 Reasons for Choosing MATLAB as the Tool

The algorithm is implemented using MATLAB programming. MATLAB is a powerful mathematics tool for engineering computation. It was selected as the programming tool of this project because:

It has built-in toolboxes for different engineering topics such as signal processing, communication and control, which are optimized and designed specifically for different hardware. For example, the function for two-dimensional convolution, conv2, is optimized for Sun SPARC stations, so the time efficiency is improved considerably. Thus MATLAB is well suited to real-time signal processing.

It provides a good graphics environment. For image processing purposes, a visual feature is necessary. In later chapters we can see that quiver plots, polar plots etc. are very helpful for texture analysis.

2.2.3.2 Computation Procedure of e²

Since details explaining the computational procedure of the eccentricity e² were not provided in the reference paper [4], we describe the procedure here.

According to [5], the k-th principal component direction of a distribution is along an eigenvector direction belonging to the k-th largest eigenvalue of the covariance matrix. Therefore, for the calculation of e², we need to determine the largest and the smallest eigenvalues of the covariance matrix C_v. Keeping the same notation as in 2.2.1, C_v is the covariance matrix of the angular distribution,

C_v = E\left[ (v - E[v])(v - E[v])^{T} \right] ,

where E is the expectation operator. C_v is symmetric, which implies that its eigenvalues are all real and its eigenvectors are orthogonal. Also, because C_v is positive semi-definite, its eigenvalues are positive or zero, and are given by the solutions of

\det(C_v - \Lambda I) = 0 ,

where Λ is the diagonal matrix of eigenvalues and I is the identity matrix. From the largest and smallest eigenvalues, λ₁ and λ₂, we can then calculate e².
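The following MATLAB fragment sketches this computation under the assumption that the angular distribution is represented by the per-pixel unit direction vectors (cos θ, sin θ). How λ₁ and λ₂ are finally combined into e² follows the thesis formula; the ratio used in the last line is only an illustrative placeholder.

```matlab
% Eigenvalues of the covariance matrix of the gradient-direction distribution.
% theta - matrix of local gradient orientations (radians)
v   = [cos(theta(:)), sin(theta(:))];    % assumed representation of the angular data
Cv  = cov(v);                            % 2 x 2 sample covariance matrix
lam = sort(eig(Cv), 'descend');          % eigenvalues, lam(1) >= lam(2)
e2  = lam(1) / lam(2);                   % placeholder combination; substitute the thesis definition of e^2
```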

2.2.3.3 Introduction of the Program globGrad.m

First, the gray scale image is converted to a grammage map using the calibration curve. When taking the β-radiograph of a specimen, a series of calibration wedges is placed beside the specimen. These calibration wedges are standard mylar sheets with known grammages [2]. There are usually two groups of calibration wedges (see Fig 2.2 (a)). One group is numbered CR103: 97.2, 81.0, 64.8, 48.6, 32.4, 16.2 g/m². The other group is numbered CW105: 16.2, 32.4, 48.6, 64.8, 81.0, 97.2, 113.4, 129.6, 145.8, 162.0 g/m². The example shown in Fig 2.2 (a) is CR103. The grammages of the calibration wedges and their corresponding gray levels on the β-radiograph thus form a group of calibration values in the format (gray level, grammage). Polynomial fitting is used to connect these points into a curve; see Fig 2.2 (b), in which the circular dots are the original values determined from the calibration wedges and the solid line is the calibration curve obtained from second-order polynomial fitting. With the calibration curve, the gray scale image can be converted to the grammage map.

Next, the two-dimensional convolution is done using the modified algorithm. The parameters are calculated following the formulas shown in 2.2.1. Finally, the computation results are written to a file. For details of the program, see Appendix I for the source code and comments.
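A minimal MATLAB sketch of the calibration step is shown below; the wedge gray levels in `grayCal` are illustrative values, not measured data, and only the second-order polynomial fit itself follows the procedure described above.

```matlab
% Convert a gray-scale beta-radiograph to a grammage map with a
% second-order polynomial calibration curve.
gramCal = [16.2 32.4 48.6 64.8 81.0 97.2];    % wedge grammages, g/m^2 (CR103 group)
grayCal = [245 210 178 150 126 105];          % illustrative wedge gray levels
p = polyfit(grayCal, gramCal, 2);             % fit grammage as a 2nd-order polynomial of gray level
grammageMap = polyval(p, double(grayImage));  % apply the calibration curve pixel-wise
```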

Fig 2.2 (a) Two samples of the β-radiograph and calibration wedges (sample pxxe46a1)

Fig 2.2 (b) Example of the calibration curve (grammage, g/m², versus gray level)

2.2.4 Experimental Result

Five image samples (from World Wide Web pages) were used to test the above algorithm. Among them, md1j44a1 was a well-formed isotropic paper and the series pxxe... were increasingly oriented samples. Their resolution is 5 pixels/mm and the size of each paper sample is 5.6 cm × 5.6 cm, which corresponds to a digitized matrix of dimension 280 × 280. Recall that the eccentricity e² represents the degree of anisotropy, i.e., the larger the value of e², the more anisotropic the paper sample. Δg is the magnitude of the local gradients: the larger the value of Δg, the larger the spatial variability of the mass distribution. μ_Δg, σ²_Δg, CV_Δg and k_Δg are estimated quantities derived by analyzing the statistical properties of the random variable Δg. The local coherence value C indicates the similarities of the texture properties along certain directions; here the texture property means having a large gradient magnitude along the local dominant orientation. A larger value of C means the mass distribution is more locally anisotropic. Similarly, μ_C, σ²_C, CV_C and λ_C are estimated quantities for the random variable C. The experimental results are listed in Tables 2.1-2.3.

Table 2.2

Table 2.3

2.2.5 Conclusion

Finally, we can make the following conclusions: (1) The method gives a good understanding of global anisotropy as well as local density variability. A stochastic structure is used, so that variation during the forming process can be incorporated. (2) Time efficiency is improved considerably with the modified recursive algorithm. Although the absolute computation time is machine dependent, the comparison of the original and modified algorithms can be seen clearly. (3) Some ambiguous points about the eccentricity and local coherence are clarified, including the definition, computation procedure, analytical proof and physical interpretation.

2.3 Dependence Texture Analysis

2.3.1 Theory [6][7][8][9]

The gray level co-occurrence matrix (GLCM) is a second-order statistical technique used as a means of extracting texture information from pictures. It gives the probabilities of two pixels with specified gray levels and spatial separation. An element GLCM(i, j) of the gray level co-occurrence matrix is defined as the relative frequency of occurrence of two pixels, separated by distance d along one specific direction (such as horizontal, vertical, right diagonal or left diagonal), one with gray tone i and the other with j. Below is an example showing the computation procedure of the GLCM. In this example, the distance is d = 1, which means two neighboring pixels, and the scanning direction is horizontal.

Example 1: The digitized map is:

Below is the horizontal co-occurrence matrix GLCM:

For example, the entry GLCM(2,1) should be equal to the number of occurrences of two neighboring pixels with gray levels 2 and 1 when scanning in the horizontal direction; it is easy to see that GLCM(2,1) = 2. As another example, GLCM(3,1) should be the number of occurrences of two neighboring pixels with gray levels 3 and 1. It can be seen that there is no such gray level transition in the digitized map, so GLCM(3,1) = 0. Next, the GLCM is normalized so that its entries (numbers of occurrences) are transformed into probabilities of occurrence (a density function). Usually, when the GLCM is referred to, the normalized GLCM is implied. Below is the normalized GLCM of Example 1.

Texture information is assessed by computing several measures representing the way the matrix elements are located with respect to the main diagonal. If the texture of an image is non-uniform, the gray levels of pixels at distance d will usually be different, so the co-occurrence values will be distributed widely over the matrix. In the case of uniform texture, pairs of pixels separated by distance d will have similar gray levels, so high co-occurrence values concentrate on or near the main diagonal. Several statistical measures, such as energy, homogeneity, contrast and entropy, can be computed from the GLCM. Denoting the normalized matrix by P, the texture measures are defined as follows (written here in their standard forms):

\text{Energy} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)^2

\text{Entropy} = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} P(i,j)\,\log P(i,j)

\text{Contrast} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-j)^2 \, P(i,j)

\text{Homogeneity} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} \frac{P(i,j)}{1+(i-j)^2}

where P(i, j) is the (i, j)-th entry of the normalized gray level co-occurrence matrix and N_g is the number of distinct gray levels in the digitized image. For Example 1, N_g = 3.

High energy, low contrast, low entropy and high homogeneity correspond to a uniform gray level distribution [8][9]. The above methodology can also be carried out using grammage values instead of gray levels when the GLCM technique is employed specifically for paper formation evaluation. In that case, high energy, low contrast, low entropy and high homogeneity mean a uniform grammage distribution, i.e., good paper formation.
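A small MATLAB sketch of the GLCM computation and of these statistics is given below; it handles only the horizontal direction with d = 1 (as in Example 1), assumes the map contains integer values in 1..Ng, and the function name is illustrative.

```matlab
% Horizontal GLCM (d = 1) and its texture statistics for an integer-valued map I.
function [P, energy, entropy, contrast, homogeneity] = glcmStats(I, Ng)
G = zeros(Ng);                              % raw co-occurrence counts
[nR, nC] = size(I);
for r = 1:nR
    for c = 1:nC-1
        G(I(r,c), I(r,c+1)) = G(I(r,c), I(r,c+1)) + 1;   % count the transition (i -> j)
    end
end
P = G / sum(G(:));                          % normalize counts to probabilities
[i, j] = meshgrid(1:Ng, 1:Ng);
energy      = sum(P(:).^2);
nz          = P(P > 0);
entropy     = -sum(nz .* log(nz));          % ignore zero entries in the log
contrast    = sum(sum((i - j).^2 .* P));
homogeneity = sum(sum(P ./ (1 + (i - j).^2)));
end
```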

Besides the conventional texture measures, some new indices are developed specifically to describe paper structural properties. They are the number of flocs (N_f), the average floc size (FS), the average floc distance (FD) and the formation index (FI). These indices are derived by rearranging and partitioning the GLCM as shown in Fig 2.3. From the definition of the GLCM, we know that each element of the matrix is the probability of a transition from the corresponding 1st basis weight to the 2nd basis weight. In the figure, the lower-left corner corresponds to the transitions with the minimal 1st basis weight and the minimal 2nd basis weight, and the upper-right corner corresponds to transitions from the maximal 1st basis weight to the maximal 2nd basis weight. It can be seen that the GLCM is partitioned by the two lines of median basis weight of the whole grammage map. Thus, five types of basis weight transitions can be defined. The most simply defined is Type E, which corresponds to the central point of the GLCM, denoted E in Fig 2.3; in Type E transitions, both the 1st and 2nd basis weights are the median value. Transitions in which both points have gray levels below the median value belong to the light basis weight zone (Type A), which is the lower-left quadrant of the matrix, denoted A(LWZ) in Fig 2.3. Transitions in which both points have gray levels above the median value belong to a floc (Type D), which is the upper-right quadrant of the matrix, denoted D(Floc). The other two quadrants of the matrix correspond to transitions defined as edges (Type B and Type C), denoted B(Edge) and C(Edge) respectively. Transitions of Type B have a 1st basis weight below the median value and a 2nd basis weight above the median value; they are also called the ascending edge. Transitions of Type C have a 1st basis weight above the median value and a 2nd basis weight below the median value; they are also called the descending edge. Types B and C can be further classified into B1, B2, B3 and C1, C2, C3. B1 is an ascending edge with the ending point equal to the median, B2 an ascending edge with the starting point equal to the median, and B3 an ascending edge with no point equal to the median. Similarly, C1 is a descending edge with the ending point equal to the median, C2 a descending edge with the starting point equal to the median, and C3 a descending edge with no point equal to the median.

Fig 2.3 Partition of the GLCM (axes: 1st basis weight versus 2nd basis weight, from minimum to maximum basis weight)
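The partition itself is easy to express in code. The MATLAB sketch below sums the normalized GLCM P over each region, using the index `m` of the median (or, for the modified algorithm of 2.3.2, the mean) basis weight as the dividing line; the function name and output structure are illustrative.

```matlab
% Partition a normalized GLCM P into the transition-type probabilities.
function T = partitionGLCM(P, m)
% P - normalized GLCM (Ng x Ng), m - index of the median/mean basis weight
Ng = size(P, 1);
lo = 1:m-1;  hi = m+1:Ng;              % below / above the dividing value
T.A  = sum(sum(P(lo, lo)));            % light basis weight zone
T.D  = sum(sum(P(hi, hi)));            % floc zone
T.E  = P(m, m);                        % both points at the dividing value
T.B1 = sum(P(lo, m));                  % ascending edge ending at the median
T.B2 = sum(P(m, hi));                  % ascending edge starting at the median
T.B3 = sum(sum(P(lo, hi)));            % ascending edge, no point at the median
T.C1 = sum(P(hi, m));                  % descending edge ending at the median
T.C2 = sum(P(m, lo));                  % descending edge starting at the median
T.C3 = sum(sum(P(hi, lo)));            % descending edge, no point at the median
end
```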

Let us look at the example below to see the procedure of partitioning the GLCM.

Example 2: The digitized image is shown in Fig 2.6.

Fig 2.6 The example of a digitized grammage map

Following the method of Example 1 (note that the scanning direction is vertical), the GLCM (after normalization) can be derived (see Fig 2.7).

Fig 2.7 GLCM of Example 2 (1st basis weight versus 2nd basis weight)

The result of partitioning the above GLCM is shown in Fig 2.8.

Fig 2.8 Partitioning result of the GLCM (1st basis weight versus 2nd basis weight)

After partitioning the GLCM, the next step is to derive some indices related to formation from the partitioned matrix. The floc number N_f is defined as the number of flocs intersected by a line scanning through one specific direction, such as horizontal, vertical, right diagonal or left diagonal, on the digitized grammage map. Assume the number of pixels along this scan line is N_p, and use P_I to represent the probability of a transition of Type I, where I ∈ {B1, B2, B3, C1, C2, C3}. N_f can be estimated according to [7] by formula (2.36). In Section 2.3.2, a new formula is developed for a more complete floc number estimation, and an interpretation of the formula is also given there.

The average floc size FS is then given by formula (2.37), where L is the length of the scanning line. The average floc distance FD can be defined as the length of a line divided by the number of flocs,

FD = L / N_f . \qquad (2.38)

The formation index FI is defined as the maximal value P_peak in the GLCM divided by the number of its nonzero elements S. For Example 2, the transition probabilities P_B1, P_B2, P_B3, P_C1, P_C2, P_C3 and P_D can be read off the partitioned matrix, and N_p = 8. With the length of the vertical scan line denoted L, formulas (2.36)-(2.38) give N_f = 1.156, FS = 0.541 cm, FD = 4.325 cm and FI = 0.00794.

2.3.2 Improvement of the Algorithm

(1) A region with grammage higher than the mean value is considered a floc. So, the median gray level value previously used to partition the GLCM in the original algorithm is changed to the mean gray level value, to give a more reasonable interpretation.

(2) The formula to estimate N_f in the original algorithm is modified to give a more complete estimation of the floc number.

Algorithm Description

The basic idea for estimating the floc number is to count the occurrences of certain types of edges which clamp flocs. For example, as shown in Fig 2.4, if there are one edge of Type B3 and one of Type C3, then we can say there is one floc between them. It was found that there are in total 24 possible combinations of different edges that clamp flocs, as shown in Fig 2.5. They are named flocType_i, where i = 1, 2, ..., 24.

Fig 2.4 An example of a floc clamped by edges B3 and C3 (basis weight versus distance)

For each situation, the probability of a floc is expressed in terms of the probabilities of its two clamping transitions. Two assumptions are made in deriving the formula to estimate N_f. First, the occurrences of all 24 types of flocs are equally probable. Second, for each transition type, its occurrences in the different floc types in which it is involved are equally probable. On this basis, the floc number N_f can be estimated from the transition probabilities as derived below.

Figure 2.5 The 24 Floc Types

The above formula is derived in three steps. First, based on the first assumption, we add all of the above 24 expressions together. Second, based on the second assumption, we divide each P_I by the number of occurrences of transition type I in all 24 floc types, and denote this count as Time_I, where I ∈ {B1, B2, B3, C1, C2, C3}. From the 24 expressions, the value of Time_I for each B type equals that of the corresponding C type, the three values being 5, 20 and 3. Substituting these values into (2.39) gives (2.41), which so far expresses the probability of flocs. In the third step, to obtain the number of flocs N_f intersected by one scan line, this probability must be multiplied by the number of transitions along the scan line. Recall that N_p is the number of points along one scan line, so the number of transitions is N_p - 1. Therefore, after this step, the formula (2.42) to estimate N_f is obtained.

The modified algorithm considers all the possible combinations of edges that clamp flocs, while the original algorithm considers only the first two situations, i.e. flocType_1 and flocType_2.

Let us continue Example 2. The mean value of the gray level is 4, so regions with gray level 5 or 6 are considered flocs. We denote the number of flocs along column j as N_fj, where j = 1, ..., 8 from left to right. From the matrix, we can get N_f1 = 1, N_f2 = 3, N_f3 = 0, N_f4 = 4, N_f5 = 2, N_f6 = 2, N_f7 = 2 and N_f8 = 0. N_f should be the mean value of the N_fj, i.e. 1.75. Now, using the old formula to compute the number of flocs, here denoted N_old, we get N_old = 1.156. Using the modified algorithm to compute the number of flocs, here denoted N_mod, we get N_mod = 1.834. The errors of these two values are 33.94% and 4.8% respectively. From the example we can see that the old algorithm underestimates the floc number compared with the modified one, because it does not take into account all the possible situations in which a floc can occur.

(3) The GLCM also has the potential to describe anisotropy features by comparing texture properties in different directions. Scanning through different directions, indices like Energy, Entropy, Contrast, Floc Size and Formation Index can be derived which represent the texture features in certain directions. The differences between these directions indicate the anisotropy property. For example, the Formation Index (FI) represents how uniform the grammage map is along one direction. If we compute FI along the machine direction (MD) and the cross machine direction (CD) respectively and compare them by |FI(MD)/FI(CD) - 1|, then this value can be used to describe the anisotropy feature of the spatial variance. When the texture properties along MD and CD are the same, |FI(MD)/FI(CD) - 1| = 0, which is the extremely isotropic situation. The larger the value of |FI(MD)/FI(CD) - 1|, the more anisotropic the grammage map. Experimental results are shown in Table 2.5 of 2.3.4. The results show the same ranking of anisotropy as derived from the Local Gradient Method.

2.3.3 Implementation

The algorithm is implemented using MATLAB programming. For the reasons for choosing MATLAB as the tool, see 2.2.3.

First, the gray scale image is converted to a grammage map as introduced in 2.2.3. Second, the GLCMs in the cross machine direction and the machine direction are computed and normalized. Third, Energy, Entropy, Contrast and Homogeneity are computed. Next, if the resolution is too high, the image resolution can be reduced by replacing several neighboring pixels by their mean value (a sketch of this step follows). For the Energy, Entropy, Contrast and Homogeneity computation, the higher the resolution of the image, the more accurate the result. But N_f, FS, FD and FI describe macro-scale structural features, so for these the resolution should be within a reasonable range, otherwise the results will be affected by micro-scale variability, such as variations within a single fiber. The appropriate resolution is on the mm scale, which is considered the macro scale in paper science. The next step is to compute N_f, FS, FI and FD according to the formulas described in 2.3.1 and 2.3.2. Finally, the computation results are written to a text file.
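A minimal MATLAB sketch of this resolution-reduction step (block averaging with an assumed block size `b`) is:

```matlab
% Reduce resolution by replacing each b x b block of pixels with its mean value.
b = 4;                                       % assumed block size (pixels)
[nR, nC] = size(grammageMap);
nR = floor(nR/b)*b;  nC = floor(nC/b)*b;     % crop to a multiple of the block size
g  = grammageMap(1:nR, 1:nC);
lowRes = squeeze(mean(mean(reshape(g, b, nR/b, b, nC/b), 1), 3));   % block means
```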

For details of the program, see Appendix II for the source code and comments.

2.3.4 Experimental Results

The five paper samples used in 2.2 are also used here to test this algorithm. Recall that energy, entropy, contrast, homogeneity and FI represent the uniformity of the paper: high energy, low entropy, low contrast and high homogeneity correspond to a uniform mass distribution. FS, FD and N_f represent the floc features, which are critical information for paper making engineers in judging the quality of paper.

Table 2.4 (columns: Energy, Entropy, Contrast, Homogeneity)

Table 2.5

Sample     | FI(CD)     | FI(MD)     | |FI(MD)/FI(CD) - 1| | e²
md1j44a1   | 6.6686e-05 | 9.1237e-05 | 0.3682              | 1.3581
pxxe48e1   | 1.8256e-05 | 1.0628e-05 | 0.4149              | 3.3912
pxxe46a1   | 2.1110e-05 | 4.4734e-05 | 1.1191              | 4.9413

2.3.5 Conclusion

Therefore, we can conclude the following:

(1) The GLCM method provides reasonable evaluations of the five paper samples. The ranking is the same as that obtained from the Local Gradient Method.

(2) Compared with the Local Gradient Method, much less computation is involved in the GLCM method. Therefore, the time efficiency of this method is preferable.

(3) This method introduces indices with visual effects, such as floc size, floc distance, number of flocs etc. It has the potential to replace the human perception evaluation applied in industry.

(4) This method has the potential to describe anisotropy features by comparing texture properties in different directions. The ranking of the five paper samples is consistent with that obtained from the Local Gradient Method.

2.4 Future Work

So far the theory of these two image analysis methods is well established and some efforts have been made to improve their time efficiency. However, more work needs to be done to provide real time versions of these two methods. One strategy is to make a time-accuracy tradeoff. For example, the computation load of the local gradient method can be reduced by increasing the inspection zone size and decreasing the resolution (pixels/mm). Thus, it is valuable to find the effect of the inspection zone size and the resolution; a tradeoff can then be found between computation load and accuracy. The other strategy is to apply real time image processing techniques based not only on software but also on hardware design.

Other possibilities are to complete the study of anisotropy analysis using the co-occurrence matrix method, and to investigate other image analysis techniques, such as wavelets, 2-D auto-regressive models and cluster analysis.

Chapter 3 Paper Fracture Occurrence Analysis

3.1 Motivation

As introduced in 1.1, the study of paper fracture occurrence is of great importance to high speed printing and pressing, calendering, and coating operations during the paper manufacturing process. Two engineering design strategies are to minimize fracture incidence and to maximize fracture tolerance. A basic step to carry out these strategies is to pursue a model relating structural properties to failure. The development of such a model consists of two steps. First, identify a failure criterion which describes the suspected critical stress, strain or strength distribution resulting in failure. Second, predict the stress, strain or strength field when a tensile load is imposed on the paper sheet. Much work has been done in this area, such as the recent work of Wong [19], which studied the failure criterion and developed a framework for predicting the strength of a sheet based on its structure at the millimeter scale. However, that study only considered one structural property, grammage variation. The result showed that the grammage is not sufficient for modeling failure, and the inspection zone size is suspected to be too large. Therefore, in our study, efforts are made to predict break stress based on not only local grammage, but also local fiber orientation and local RBA (relative bonded area). Certainly, the measurements of fiber orientation and RBA are also important aspects of this study.

As introduced in 1.1, Yuhara et al. [3, 38, 39, 40] proposed a method using a two-dimensional power spectrum of high-resolution soft X-ray images to measure the fiber orientation. The method was theoretically proved and experimentally confirmed to be effective. However, the method has its limitation in assuming that all the fibers in one sheet are identical, i.e., having the same length, width and mass distribution. Therefore, this study concentrates on applying Yuhara's method to paper fracture occurrence analysis, and on extending the existing theoretical result to a more general case.

Because paper is a strongly interacting fiber network, the structure and properties of the bonded region are essential factors which affect almost all the mechanical, optical, thermal and electrical properties. In 1.1, the existing measurement techniques for RBA are introduced. They are usually complicated, destructive, time consuming and, more importantly, valid for RBA measurement of the whole sheet rather than of a local area. Thus, the focus of our study is to explore a convenient, nondestructive and fast method to measure the RBA.

In addition, according to [43], the local gradient also has an effect on fracture occurrence. It is also suspected that certain types of gradient distribution are more liable to cause failure. Thus, in this study, the local gradient analysis is visualized so that in later work the relation between the local gradient distribution and fracture occurrence can be studied by comparing the visualized quiver plots with the video digital images taken during the paper failure process.

3.2 Local Gradient Analysis

3.2.1 Theory [4]

The local gradient and local dominant orientation are analyzed based on the theory of [4]. We will not repeat the theory description here; for details, please refer to 2.2.1. The difference in this study is that the local dominant orientation vectors are plotted to visualize the local gradient analysis. In the plot, the length of an arrow denotes the magnitude of a vector and its angle the direction of the vector. In this application, the length of an arrow represents the local coherence value and its direction represents the local dominant orientation. In addition, the mean gradient μ_Δg, the mean local coherence μ_c and the eccentricity e² are computed to describe the degree of local uniformity and anisotropy.

3.2.2 Implementation

The algorithm is implemented in MATLAB. MATLAB provides a good visual environment for the graphic outputs. First, the local gradients and local dominant orientations are calculated based on the theory in 2.2.1. Next, the local dominant orientation plot is realized by the quiver plot, which is a built-in function of MATLAB (see Fig 3.1 as an example of a quiver plot). Finally, several indices, the mean gradient μ_Δg, the mean local coherence μ_c and the eccentricity e², are computed and saved.

For details of the program localGrad.m and comments, see Appendix III.
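A minimal sketch of the plotting step is shown below, assuming the per-zone dominant orientation matrix Thetad and coherence matrix Coherence have already been computed as in 2.2.1:

% Minimal sketch: convert local dominant orientation (Thetad) and local
% coherence (Coherence) into vector components and show them as a quiver plot.
U = Coherence .* cos(Thetad);   % x-component; arrow length = local coherence
V = Coherence .* sin(Thetad);   % y-component
quiver(U, V, 1);                % scale factor 1
title('Local Dominant Orientation');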

3.2.3 Experimental Result

Below are the graphical outputs of the local gradient analysis on a paper specimen Ts10 with size 2 x 2 mm². Fig 3.1 is the quiver plot of the local dominant orientation vectors; their magnitudes represent the local coherence values, and the directions are the local dominant orientations. Fig 3.2 is the polar plot of the local dominant orientation vectors, in which the length in each radius direction represents the probability of local dominant orientation vectors along that radius direction. Although these plots and the numeric indices are not incorporated in the fracture occurrence analysis this time, they are ready to be used in later studies.

In later work, from the quiver plot, the local gradient distribution in each local area can be seen. From the video digital images taken during the paper fracture process, we can see the initiation and propagation of the failure. By comparing these two kinds of images, it is possible to find out the effect of the local gradient distribution on fracture occurrence. On the other hand, the polar plot (such as Fig 3.2) can facilitate the exploration of the correlation between the local gradient distribution and the local fiber orientation distribution, because the local fiber orientation distribution is represented by a polar probability plot as well (as introduced in 3.3).

Fig 3.1 Local Dominant Orientation Plot of Ts10

Fig 3.2 Local Dominant Orientation Polar Plot of Ts10

3.3 Local Fiber Orientation Analysis

3.3.1 Theory [3]

As shown in [3], the fiber orientation distribution h(θ) can be approximated by the normalized angular distribution of the power spectrum of the mass distribution, p(θ), rotated by π/2. For the analytical proof of this conclusion, please refer to Yuhara [3] or 3.3.2.

In this section, we concentrate on the formulation of the algorithm. The algorithm can be carried out in four steps, assuming the grammage map G(x, y) (as introduced in 2.2.3.3) of a paper sheet has been obtained, where (x, y) are the Cartesian coordinates.

First, compute the power spectrum P(u, v) of G(x, y). This step can be formulated as follows. Take the two dimensional Fourier transform F(u, v) of G(x, y),

F(u, v) = ∫∫ G(x, y) e^{-2πi(ux + vy)} dx dy

where u and v are the spatial frequencies in the x and y directions of the image. Take the square of the magnitude of F(u, v) to get the power spectrum P(u, v),

P(u, v) = |F(u, v)|²

Secondly, transform P(u, v) in the (u, v) plane to the (k, θ) polar coordinate plane by taking

k = √(u² + v²)  and  θ = arctan(v / u)

Third, normalize P(k, θ) in a selected spatial frequency band k1 → k2 as follows:

p(θ) = ∫_{k1}^{k2} P(k, θ) dk / ∫_0^π ∫_{k1}^{k2} P(k, θ') dk dθ'

As concluded in [3], the suitable frequency band should be chosen so as not to be affected by macro-scale structural anisotropy. The solution provided in [3] is that the suitable spatial frequency band is 1/(2B) → 1/B, with B being the fiber width.

Finally, rotate p(θ) by π/2 to get p(θ - π/2), which is the approximation of the fiber orientation distribution h(θ).

Three assumptions are made in this algorithm. (1) All the fibers in the sheet are identical rectangular fibers, i.e., of the same length, width and mass distribution per unit area. (2) The fiber length L is substantially larger than the fiber width B, i.e., B/L → 0. (3) The fiber width is sufficiently small compared to the surveyed area of the sheet, and sufficiently large frequency numbers k1 and k2 are used.

Therefore, the algorithm produces the probability distribution function h(θ). However, as introduced in 3.1, the goal of this study is to predict break stress based on grammage, fiber orientation and RBA when a tensile load is imposed on the paper specimen. For the purpose of multivariable modeling, the structural properties need to be represented by numeric indices. Therefore, the concepts of dominant orientation and coherence defined in 2.1 are also employed here to describe the fiber orientation anisotropy feature. Dominant orientation is defined as the direction which maximizes the summation of absolute projections in all directions with respect to it. Coherence is the summation of the absolute values of these projections. Thus, dominant orientation represents the preferential fiber orientation, and coherence represents the agreement of the fiber orientations with the dominant one.

In addition to dominant orientation and coherence, a new index, coherence90, is introduced in this study. We define it as the summation of the absolute projections of the probabilities in all directions onto the direction π/2:

coherence90 = Σ_{i=1}^{N} | p_i cos(θ_i - π/2) |

where p_i is the probability of orientation along the angle θ_i, θ_i ∈ [0, π], and N is the number of digitized angles within [0, π].

Concisely, the reason for introducing this new index is that it has a correlation with the number of fiber bonds, which is the physical essence of paper strength. Details will be provided in 3.6 (5). Further details and the justification of this algorithm are given in section 3.3.2.
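A minimal sketch of this index, assuming the normalized angular distribution is available as a 180-element column vector pnorm (one entry per degree of θ in [0, π)):

% Minimal sketch: coherence90 from a normalized angular distribution pnorm,
% where pnorm(i) is the probability of orientation at angle (i-1) degrees.
theta = (0:179)' * pi/180;                        % digitized angles in [0, pi)
coherence90 = sum(abs(pnorm .* cos(theta - pi/2)));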

3.3.2 New Analytical Result of the Algorithm

In this section, the analytical development of the algorithm is given. It was analytically proved in [3] that, when the fibers do not all have an identical mass distribution per unit area, the normalized angular distribution of the power spectrum of the mass distribution, rotated by π/2, agrees approximately with the fiber orientation distribution weighted by the fiber weight per unit area m.

In this study, an extension of the solution proposed in [3] is derived for the case where the fibers have different length L, width B and mass distribution per unit area m. It will be shown that the fiber orientation distribution weighted by B²Lm² can be approximated by the normalized angular distribution of the power spectrum of the mass distribution rotated by π/2. The resulting formula is,

p(θ + π/2) ≈ ∫∫∫ m² B² L h(m, θ, B, L) dm dB dL / E(m² B² L)        (3.6)

The model is now closer to the practical situation. The analytical proof of (3.6) is given below.

Analytical Proof

In order to be consistent with the notations used in the reference papers, in this section we use several notations different from those in 3.3.1: (x, y) is replaced by (x_1, x_2); (u, v) is replaced by (k_1, k_2); G is replaced by w; the Fourier transform F is replaced by f.

In the present model a random sheet is composed of N fibers with different length L_n, width B_n and mass per unit area m_n, where n = 1, 2, ..., N. In the whole sheet the mass distribution w(x) is the sum of the independent mass contributions w_n(x) from the N individual fibers, i.e.,

w(x) = Σ_n w_n(x)

where x is the position vector (x_1, x_2), and w_n(x) = m_n over the corresponding fiber area A_n and 0 otherwise.

The Fourier transform of w(x) is,

f(k) = ∫_A w(x) e^{-2πi k·x} dx

where k is the spatial frequency vector (k_1, k_2) and A is the area of the sheet.

Fig 3.3 The Coordinates of the Model

The position vector x is expressed as the sum of the position vector of the nth fiber center, x_n, and the position vector in the local coordinates of the nth fiber, r_n (see Fig 3.3),

x = x_n + r_n

Therefore,

f(k) = Σ_n m_n e^{-2πi k·x_n} I_n        (3.15)

where,

I_n = ∫_{A_n} e^{-2πi k·r_n} dr_n

The power spectrum P(k) is given by,

P(k) = f(k) f*(k)        (3.16)

Since the Fourier bases are orthogonal, the cross terms between contributions from different fibers vanish on average; substituting (3.15) into (3.16),

P(k) ≈ Σ_n m_n² |I_n|²

So far we have completed the first step as described in 3.3.1.

Introduce polar coordinates (k, θ). According to [13], the contribution of each rectangular fiber can be written in terms of the sinc function, defined as sinc(x) = sin(x)/x. Thus, the power spectrum in polar coordinates is obtained. The second step in 3.3.1 has now been completed.

The next step is to normalize P(k, θ) by,

p(θ) = ∫_{k1}^{k2} P(k, θ) dk / ∫_0^π ∫_{k1}^{k2} P(k, θ') dk dθ'        (3.21)

First we consider the numerator. Setting a = k cos(θ' - θ) and b = k sin(θ' - θ), we get (3.24). Based on assumption (2) in 3.3.1, B/L → 0, we obtain (3.26). According to assumption (3) in 3.3.1, the sinc factor tends to 0 except for a → 0 (i.e., θ' - θ → ±π/2), so we have,

∫_{k1}^{k2} P(k, θ) dk ≈ C ∫∫∫ m² B² L h(m, θ - π/2, B, L) dm dB dL

where C is a constant of the form (k2 - k1)·A times a limiting factor that does not depend on θ. Consider the denominator of (3.21),

∫_0^π ∫_{k1}^{k2} P(k, θ) dk dθ ≈ C · E(m² B² L)

Since the constant C cancels in the normalization, the normalized p(θ) is,

p(θ) ≈ ∫∫∫ m² B² L h(m, θ - π/2, B, L) dm dB dL / E(m² B² L)

The work above corresponds to the third step in 3.3.1. Finally, rotating p(θ) by π/2, which is the fourth step in 3.3.1, we obtain (3.6), which is exactly the result we want to prove.

3.3.3 Implementation

The algorithm is implemented using MATLAB programming. For the reasons for choosing MATLAB as the tool, see 2.2.3.

The input of the program is the matrix G, which is the grammage map of the image (as introduced in 2.2.3.3). First, take the power spectrum of G. Second, normalize the power spectrum and get the angular distribution. Third, rotate the angular distribution by π/2. Next, compute the dominant orientation, coherence and coherence90. Finally, give the polar plot of the orientation probability distribution.
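These steps can be illustrated compactly as below. This is a sketch, not the Appendix IV program: it uses fft2 with fftshift to centre the spectrum, and the frequency band limits k1 and k2 are placeholder values (in practice they follow the 1/(2B) → 1/B rule of 3.3.1).

% Minimal sketch: 2-D power spectrum of the grammage map G, binned into an
% angular distribution over a frequency band, then rotated by 90 degrees.
F  = fftshift(fft2(G));                  % centred 2-D Fourier transform
P  = abs(F).^2;                          % power spectrum
[n, m] = size(G);
[U, V] = meshgrid((1:m) - floor(m/2) - 1, (1:n) - floor(n/2) - 1);
K   = sqrt(U.^2 + V.^2);                 % radial spatial frequency (index units)
Ang = mod(atan2(V, U), pi);              % polar angle folded into [0, pi)
k1 = 10; k2 = 20;                        % placeholder band limits
band = K >= k1 & K <= k2;
p = accumarray(floor(Ang(band)/pi*180) + 1, P(band), [180 1]);
p = p / sum(p);                          % normalized angular distribution p(theta)
h = circshift(p, 90);                    % rotate by pi/2: fiber orientation estimate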

Fast Fourier Transform

Although there is a fast Fourier transform built-in function called fft2 in MATLAB, negative frequencies are not explicitly arranged in fft2 and the frequency band is automatically taken as 0 → N - 1, with N the dimension of the transformed matrix. Thus, it is not convenient to use fft2 directly. In order to still take advantage of the time efficiency of the MATLAB built-in function, we use the one-dimensional Fourier transform as an alternative. The built-in function fft.m in MATLAB uses the Radix-2 fast algorithm. Its complexity is O(N log2 N), while for the conventional algorithm the complexity is O(N²), with N the dimension of the transformed vector.

For a real vector x, its Fourier transform F(-f) at a negative frequency -f is simply the conjugate of the Fourier transform at the corresponding positive frequency f, i.e.,

F(-f) = F(f)*        (3.32)

Given a matrix G, its two-dimensional Fourier transform F2(G) can be obtained by

F2(G) = [F1([F1(G)]')]'        (3.33)

where F1(G) denotes the one-dimensional Fourier transform of G applied to each column, and ' denotes the transpose of a matrix.

Thus, by (3.32) and (3.33), we can use the fft built-in function to do the two-dimensional Fourier transform with negative frequencies and a selected frequency band. For details of the program, see Appendix IV for the source code and comments.
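A minimal sketch of (3.32) and (3.33), assuming G is the real-valued grammage map (variable names are illustrative):

% Minimal sketch: build a 2-D transform from the 1-D fft, and recover negative
% frequencies of a real signal by conjugate symmetry.
F1 = fft(G);                  % 1-D FFT of each column (inner transform of (3.33))
F2 = fft(F1.').';             % transform the rows: full 2-D FFT of G
% For a real vector, F(-f) = conj(F(f)) (3.32); e.g. for one column g = G(:,1):
g    = G(:, 1);
Fg   = fft(g);
Fneg = conj(Fg(2:end));       % spectrum at frequencies -1, -2, ..., -(N-1)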

Fig 3.4 Fiber Orientation Distribution of Ts10

Fig 3.4 is the graphic output for one paper specimen, Ts10. In the polar plot of the fiber orientation distribution, values along the circular direction are angles. It can be seen that the valid range for fiber orientation is 0° → 180°. Values along the radius directions are the probabilities of fibers in the corresponding directions. For example, from Fig 3.4 we can see that the probability of fibers in the 90° direction is 0.019027. This kind of plot and the numeric indices are used for the later analysis of fracture occurrence. The experimental results will be introduced in 3.6.

3.4 Local Relative Bonded Area Analysis

3.4.1 Theory

Relative bonded area (RBA) is defined as the ratio of the total bonded area A_b to one-half the total external surface area of the fibers A_t, i.e.

RBA = A_b / (A_t / 2)

Kubelka-Munk theory is a widely used theory in material science. The Kubelka-Munk theory and some of its applications can be found in [10]. In this study, the theory is extended to its discrete form and, for the first time, applied to relative bonded area estimation. A new theoretical result is derived, and it leads to a new, easily implemented method of estimating the relative bonded area, or equivalently, the relative unbonded area.

Fig 3.5 Model of the Discrete Form K-M Theory

The model of the light transmission system is as shown in Fig 3.5. The transverse section of a paper sheet is divided into layers according to the free surfaces (unbonded surfaces), i.e., fibers bonded together are considered as one single layer. Assume there are two light fluxes, one proceeding downward through the layer, the other simultaneously proceeding upward. Consider what must happen to the downward-proceeding flux, i, during its passage through any elementary layer (see Fig 3.5). The effect is simply to decrease i by the reflected amount Si, where S is the reflectance coefficient. Now consider what changes are made in the upward-proceeding flux, j, during its passage through the same layer. j is reduced

by the reflected amount Sj, and the amount Sj should be added to the downward-proceeding flux i. Thus, we have,

i_{n+1} = i_n - S i_n + S j_{n+1}        (3.34)

j_n = j_{n+1} - S j_{n+1} + S i_n        (3.35)

From (3.35), we can get,

j_{n+1} = (j_n - S i_n) / (1 - S)        (3.37)

So, from (3.34) and (3.37) the state equation relating (i_{n+1}, j_{n+1}) to (i_n, j_n) is obtained. The general solution of this equation is,

[ i_n ]   [ 1 - nS     nS   ] [ i_0 ]
[ j_n ] = [  -nS     1 + nS ] [ j_0 ]        (3.38b)

is the light source, R the reflectance and T the light transmission. From (3.34) and (3.37),

Substitute the initial state

and the finai state

to (3.38b), we have,

Since in this model, light absorption is neglected, we also have,

R=So-T From (3.4 1) and (3.42) ,we gef '

The total number of surfaces Nt is proportional to the grammage G in this area, i.e.,

where m is the weight per unit area of one fiber. Thus, the relative bonded area RBA is,

RBA=l-

(1+ S ) Rm

STG

From (3.46) we can see, if S , R , T, G and rn are measured, RBA can then be estimated.

In this study, to avoid the complicated experimental work of measuring S and m, a value proportional to the relative unbonded area RUBA was used for the fracture occurrence analysis. Since RUBA = 1 - RBA,

RUBA = N / N_t

RUBA = (1 + S) R m / (S T G)

Therefore we have,

RUBA ∝ R / (T G)        (3.49)

(3.49) is the new theoretical result derived from the discrete form of the Kubelka-Munk theory. There are three assumptions behind this theoretical result. First, light absorption is neglected. Second, the surveyed area is sufficiently large compared with the thickness of the paper. This is because the model we applied here is simplified; the actual case is shown in Fig 3.6. The fluxes j and i scatter in many directions rather than going perpendicularly upward and downward, so R and T are actually average values of the upward-proceeding and downward-proceeding fluxes respectively, and consequently the RBA or RUBA computed here is the average value in the surveyed area. Third, all the fibers composing the paper have identical mass per unit area m, such that N_t is proportional to G.

Fig 3.6 Light Transmission and Reflectance in Paper

3.4.2 Implementation

Mainly experimental work is involved in this implementation. The size of the specimens studied here is 2.0 mm². From 3.4.1 (3.49) we have,

RUBA ∝ R / (T G)

R, T and G can be derived in three steps. First, take the β-radiograph of one specimen, convert it to a grammage map as described in 2.2.3, and calculate the average value G of the grammage map. Second, take a light transmission image of the same specimen and calculate the average gray level value T. Third, measure the average gray level of the light source S_0 used in step two. From 3.4.1, the reflectance R can be obtained by R = S_0 - T. Thus R, T and G are all derived, and R/(TG) can be determined.

For more details of the experimental procedure, see 3.5.
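A minimal sketch of these three steps, assuming the grammage map, the transmission image and the light-source gray level have already been measured and stored in the hypothetical variables gramMap, transImg and S0:

% Minimal sketch of the RUBA-proportional index of (3.49).
G  = mean(gramMap(:));        % step 1: mean grammage of the specimen
T  = mean(transImg(:));       % step 2: mean transmitted gray level
R  = S0 - T;                  % step 3: reflectance, absorption neglected (3.42)
rubaIndex = R / (T * G);      % value proportional to RUBA, used in 3.6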

3.5 Experimental Work

In order to collect data for the fracture occurrence analysis, an experimental system was devised and set up specially for this study, although some of the steps used standard technologies.

3.5.1 Material, Imaging and Testing System

The paper samples used in this study were made from Kraft pulp following the standard British handsheet maker with a 120 second settling time [2]. The purpose of choosing a long settling time was to derive larger grammage variability so that we could cut relatively uniform small tensile samples with sufficient grammage difference from these handsheets.

The imaging system was composed of six parts.

(1) β-radiography source
    - Type: 14C-PMMA sheet (du Pont)
    - Decay: β-
    - Maximum emission: 0.156 MeV
    - Total activity: 11.9 mCi
    - Exposure time: 30 min
(2) Camera: Kodak 1.6i CCD camera with AF micro Nikon 60 mm lens
(3) Copy maker stand: Mini Repro 1192
(4) Light table: Knox 6002
(5) Computer: Pentium Pro 200 PC
(6) Software: Optimas 6.0 and ITEX IC-PCI 2.7.2.0

The tensile testing system was composed of three parts.

(1) Tensile testing machine: SINTECH 1 controlled by a 486 PC
(2) Software: MTS Sintech TestWorks 2.1
(3) Cutter: lab-developed micro-tensile sample cutter, specially for 2 mm micro-tensile samples

    - Test method: TAPPI T494 om-88 [2]
    - Test condition: 50.0% ± 2.0% RH and 23.0 ± 1.0 °C [2]
    - Test speed: 5 mm/min, the experimentally determined speed giving the best tensile features

Fig 3.7 Methodology (blocks: Handsheet Making, Radiography, Video Imaging, Local Grammage Measurement, Local Optical Property Measurement, Threshold Digitization, Tensile Testing, Relative Bonded Area, Local Gradient, Local Fibre Orientation, Mean Grammage, Result)

3.5.2 Methodology

The methodology of this study is shown in Fig 3.7. The whole procedure was designed to correlate tensile testing with relative bonded area, local gradient, local fiber orientation and mean grammage. The details of each step of the experimental work are shown in 3.5.3.

3.5.3 Experimental Procedure

Handsheet making

The paper samples used in this study were made from Kraft pulp. The method was the standard British handsheet maker [2]. The settling time was chosen to be 120 seconds. The purpose of choosing a long settling time was to derive larger grammage variability.

β-radiography and video imaging

The β-radiography and video imaging procedure is a standard technique, as shown in Fig 3.8. First, a β-radiograph of the handsheet was taken using the calibration wedge series CW103 (introduced previously in 2.2.3). Second, the β-radiograph was digitized by the video camera. Third, the digitized data was saved and displayed in the computer with the support of Optimas 6.0 and ITEX IC-PCI 2.7.2.0. Fourth, the gray level map was converted to the grammage map by the calibration wedges (as introduced in 2.2.3). Thus, the grammage map of the handsheet was derived.

Fig 3.8 β-radiography and video imaging (β source, sample, radiograph film, digitization and calibration)

Threshold digitization

After the grammage map was obtained, the next step was to threshold the map by different gray level ranges (see Fig 3.9). Each gray level range corresponded to a grammage range according to the calibration curve. The reason for doing this was to identify regions with different grammage values for use in the next step: micro-tensile sample cutting.

Fig 3.9 Histogram of the Threshold Digitization (selected threshold between lower limit and upper limit)
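A minimal sketch of the thresholding step, with illustrative (not measured) grammage limits, assuming the grammage map is stored in gramMap:

% Minimal sketch: mark the pixels of a grammage map that fall inside one
% selected grammage range [lowLim, uppLim], so that regions of the chosen
% grammage can be located for micro-tensile sample cutting.
lowLim = 40;  uppLim = 60;                 % g/m^2, illustrative limits only
mask = gramMap >= lowLim & gramMap <= uppLim;
imagesc(mask); axis image; colormap(gray); % highlight the selected regions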

Micro-tensile sample cutting

A micro-tensile sample cutter was designed for cutting the handsheets into tensile samples for this study according to the thresholded images. The central part of the tensile samples was of size 2 mm², as shown in Fig 3.10. The shape of the specimen guaranteed that the fracture would be initiated at the central part when forces were imposed, as shown in Fig 3.10.

Fig 3.10 A Micro-tensile Sample (forces applied at both ends)

Local grammage measurement

β-radiography and video imaging on the tensile samples were repeated. The resolution of the resulting grammage map was 130 pixels/mm (see Fig 3.12). Next, three image processing algorithms were applied to the grammage maps. One determined the mean grammage; the other two determined the local fiber orientation and the local gradient, introduced in 3.3 and 3.2 respectively. The computed results were used for the fracture occurrence analysis later on.

Local optical property measurement

For the purpose of the RBA analysis (introduced in 3.4), the light transmission and the light source were measured (see Fig 3.11). The measurement is a well established technology. The system was composed of four parts: camera, mask, light table and copy maker stand to hold the camera. The setup is as shown in Fig 3.11. Fig 3.12 shows an optical image of the tensile specimens.

Fig 3.11 Local Optical Measurement (camera, mask, tensile sample, light transmission)

Fig 3.12 Local Video Imaging (from β-radiography film and from tensile sample)

Tensile testing

The tensile test followed the TAPPI T494 om-88 standard [2]. The load vs. displacement curves recorded the breaking process of the specimens. From these curves, one mechanical value, the break stress of each specimen, can be calculated for the purpose of the fracture analysis.

Below is an example of the load vs. displacement curve and the procedure for calculating the break stress.

Fig 3.13 Example of Load vs. Displacement Curve

Break stress: σ_b = F_b / w

where F_b is the break load and w is the width of the tensile sample, i.e., 2 mm here.
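As an illustration with hypothetical numbers, a specimen failing at a load of F_b = 3 N over the w = 2 mm width would have a break stress of σ_b = 3 N / 2 mm = 1.5 N/mm.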

3.6 Experimental Results and Analysis

In total, 59 tensile samples were studied. For each sample, seven parameters, including the mean grammage (G), coherence90 (introduced in 3.3), R/(TG) (∝ RUBA, introduced in 3.4) and the break stress (σ_b) (introduced in 3.5), were computed.

To analyze these data and find the correlations between them, we started from the analysis among the first five parameters above. These five parameters represented structural properties of the paper. The next step was to correlate the structural properties with the mechanical property, described by the break stress σ_b.

In the plots below, the equations are derived by linear regression fitting, and R² is the correlation coefficient.
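For reference, the regression line and the R² value reported in each plot can be reproduced as in the following minimal sketch; x and y are assumed to hold the 59 values of one structural parameter and the corresponding break stresses.

% Minimal sketch: least-squares line and correlation coefficient for one plot.
c  = polyfit(x, y, 1);        % fitted line y = c(1)*x + c(2)
r  = corrcoef(x, y);          % correlation matrix
R2 = r(1,2)^2;                % R^2 reported in the plots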

(1) coherence90 vs. mean grammage (G)

Fig 3.14 coherence90 vs. mean grammage (G)

It can be seen from this plot that coherence90 and G do not have an obvious correlation. It implies that fiber orientation is independent of the mean grammage. Thus, fiber orientation could be an additional factor affecting paper strength. On the other hand, we can see that the variation of fiber orientation is not very large; this is because the paper specimens used here are cut from a handsheet. For commercial paper, fibers may have more obvious preferential orientations because of the forming process, i.e., the speed difference between the delivery of the aqueous suspension and the moving filter.

(2) R/(TG) (RUBA) vs. mean grammage (G)

From the plot above we can see that RUBA and G have a negative correlation, i.e., with the increase of G, RUBA decreases. This implies that larger G corresponds to larger relative bonded area (RBA), because there are more chances of fiber-fiber bonding. Thus, the estimated value of the relative unbonded area is as expected from the perspective of physical interpretation.

It can also be seen that RUBA does not have a large variation. This is because the handsheet is not compressed very much during the forming process compared with commercial paper, while in commercial paper, local areas with high grammage should have significantly larger RBA after heavy compressing.

(3) Break stress (σ_b) vs. mean grammage (G)

Fig 3.16 Break stress (σ_b) vs. mean grammage (G)

The plot above shows a fairly strong positive correlation between σ_b and G. This means that grammage is the most significant factor affecting the strength of paper. The reason should be that the larger the grammage, the more fibers and the more chances of fiber-fiber bonding. The scattering of the data points indicates that grammage is not the only factor of paper strength. As shown later on, there are another two factors of interest.

(4) Break stress (σ_b) vs. R/(TG) (RUBA)

Fig 3.17 Break stress (σ_b) vs. R/(TG) (RUBA) (x-axis: R/(TG) × 1000)

A negative correlation is shown in the plot above, although it is not very strong. It means that larger RUBA (smaller RBA) corresponds to less strength. This is consistent with the physical interpretation of paper strength, that is, fiber-fiber bonding. Therefore, first, RUBA is also a factor of paper strength and has a negative effect on it. Second, it is not the only factor, because of the scattering of the data points. Third, compared with grammage (G), it is less significant, because the correlation coefficient is smaller than that of plot (3). Fourth, the estimated value of the relative unbonded area is consistent with the theoretical expectation.

(5) Break stress (σ_b) vs. coherence90

The plot shows a positive correlation between break stress and coherence90. The scattering of the data means coherence90 is not the only factor of paper strength. Compared with grammage and relative unbonded area, fiber orientation is the least important factor, because the correlation coefficient of this plot is the smallest one among plots (3), (4) and (5). How can the physical meaning of this result be interpreted? Let us look at Fig 3.19.

Fig 3.19 Fiber Orientation and Bonded Area

Here B denotes the fiber width, L the fiber length, θ the angle between two fibers and BA the bonded area. There is,

BA = B² / sin θ

The two extreme cases are as shown in (b) and (c), i.e., BA = B² when θ = π/2, and BA approaches its maximum (bounded by the fiber length L) as θ → 0. We can see that when all the fibers have the same orientation, the largest bonded area will be produced. In addition, considering tensile forces in the horizontal orientation, it can be expected that the more fibers in the vertical direction, the larger the bonded area and therefore the stronger the paper. Recall that coherence90 represents the agreement of all the fiber orientations with the vertical direction; we can therefore expect coherence90 and break stress to have a positive correlation. As can be seen in plot (5), the experimental result does indicate such a correlation. Thus, the physical meaning of plot (5) is that the more fibers in the direction perpendicular to the tensile forces, the larger the bonded area resisting the tensile forces and therefore the stronger the paper.

3.7 Conclusions

(1) The methodology and equipment developed and set up in this study were verified to be effective.

(2) By proposing and applying a new method based on the Kubelka-Munk theory, with the relative unbonded area being quantified by image analysis (see plots (2) and (4)), the experimental result is consistent with the theoretical expectation.

(3) A new analytical result was derived for the estimation of the fiber orientation distribution by the two dimensional power spectrum. Thus, the previous model was extended to a more realistic one which allows different fiber widths and lengths.

(4) Three indices were developed to numerically describe the three structural properties derived by image analysis. The mean value G corresponded to grammage; coherence90 corresponded to the agreement of fiber orientation with the direction perpendicular to the tensile forces; R/(TG) (∝ RUBA) corresponded to the relative unbonded area.

(5) There exists a multivariable correlation between paper strength (described by break stress) and the three structural properties. Break stress has a positive correlation with G, a negative correlation with RUBA and a positive correlation with coherence90. These three factors affect paper strength by contributing to the number of fiber-fiber bonds.

(6) Among the three factors, grammage is the most important one. Relative bonded area is the second and fiber orientation the third.

(7) Local gradient and dominant orientation, visualized by quiver plotting in MATLAB, can be helpful for fracture initiation and propagation analysis.

3.8 Future Work

(1) Repeat the work on commercial paper with different degrees of pressing instead of the handsheet because, as discussed in 3.6 (1) and (2), handsheets have a smaller variation of relative bonded area and fiber orientation compared with commercial paper.

(2) Develop a multivariable model which incorporates the three structural properties: grammage, relative bonded area and fiber orientation. To accomplish this objective, first, a sufficient amount of data has to be derived for the multivariable fitting by repeating the experiments on commercial paper with different degrees of pressing. Second, the effects of these three factors have to be quantified by analysis of variance, a standard statistical method of multivariable analysis. The intended mathematical tools are SAS or Mathematica.

(3) Verify the validity of the model for predicting the paper break strength of a local region after developing it. The approach is as follows: make handsheets; take a β-radiograph of each neighboring 1-2 mm² area with a resolution on the fiber width scale, i.e., about 20 μm/pixel (the high resolution is chosen for the fiber orientation analysis, see 3.3); make optical measurements, light transmission and light source, of each neighboring 1-2 mm² area; calculate the mean grammage and estimate the fiber orientation distribution of each local area from the β-radiograph; estimate the local unbonded area from the local optical measurements; predict the break stress of each local area using the multivariable model, the area with the smallest break stress being expected to break first; make a tensile test of the handsheets and take images of the breaking progress with a digital video camera; analyze the images taken during the fracture and check the validity of the prediction. The whole approach is depicted in Fig 3.20.

(4) Examine the effect of the inspection zone size, because the structural properties derived by image analysis here are all dependent on the inspection zone size.

Fig 3.20 Approach to Verify the Validity of the Multivariable Model (blocks: Tensile Testing, Real-time Imaging, Relative Bonded Area, Local Fiber Orientation, Grammage, Multivariable Model Fracture Prediction, Result)

(5) Improve the time efficiency of the whole processing by software and, moreover, hardware techniques.

References

[1] J. Burdette et al., The Manufacture of Pulp and Paper: Science and Engineering Concepts, TAPPI Press, 1988
[2] TAPPI Standards, 1996
[3] T. Yuhara, M. Hasuike & K. Murakami, Fiber Orientation Measurement with the Two-Dimensional Power Spectrum of a High-Resolution Soft X-Ray Image, Journal of Pulp and Paper Science, Vol. 17, No. 4, July 1991
[4] J. Scharcanski & C. T. J. Dodson, Texture Analysis for Estimating Spatial Variability and Anisotropy in Planar Stochastic Structures, Optical Engineering Journal, 1995
[5] D. B. Damiano & J. B. Little, A Course in Linear Algebra, Harcourt Brace Jovanovich Publishers (Academic Press), Orlando, USA, 1988
[6] T. Cresson & P. Luner, The Characterization of Paper Formation, TAPPI Journal, Dec. 1990, 175-184
[7] T. Cresson & P. Luner, The Characterization of Paper Formation, TAPPI Journal, Feb. 1991, 167-175
[8] D. Marceau, A Review of Image Classification Procedures with Special Emphasis on the Grey Level Co-occurrence Matrix Method for Texture Analysis, M.A.Sc. thesis, University of Waterloo, 1989
[9] T. Cresson & P. Luner, Description of the Spatial Gray Level Dependence Method Algorithm, TAPPI Journal, Dec. 1990, 220-222
[10] D. B. Judd, Color in Business, Science and Industry, New York: Wiley, 1975
[11] A. R. Rao, A Taxonomy for Texture Description and Identification, Springer-Verlag, Berlin, Germany, 1990
[12] W. L. Ingmanson & E. F. Thode, Factors Contributing to the Strength of a Sheet of Paper, TAPPI Journal, Vol. 42, No. 1, Jan. 1959, 83-93
[13] L. Haglund, B. Norman & D. Wahren, Mass Distribution in Random Sheets - Theoretical Evaluation and Comparison with Real Sheets, Svensk Papperstidning, Vol. 77, 1974, 362-370
[14] J. C. Russ, The Image Processing Handbook, CRC Press, 1992
[15] W. Ng, Video Beta Radiography Analysis of Paper, B.A.Sc. Thesis, Pulp and Paper Centre, University of Toronto, 1993
[16] W. E. Scott & J. C. Abbott, Properties of Paper: An Introduction, TAPPI Press, 1995
[17] M. Deng & C. T. J. Dodson, Paper - An Engineered Stochastic Structure, Atlanta, GA, 1994
[18] W. K. Pratt, Digital Image Processing, John Wiley & Sons, New York, USA, 1991
[19] L. Wong, M. T. Kortschot & C. T. J. Dodson, Effect of Formation on Local Strain and Fracture of Paper, Journal of Pulp and Paper Science, Vol. 22, No. 6, June 1996, 213-219
[20] O. Kallmes & C. Eckert, The Structure of Paper - VII. The Application of the Relative Bonded Area Concept to Paper Evaluation, TAPPI Journal, Vol. 47, No. 9, Sept. 1964, 540-548
[21] J. W. Swanson & A. J. Steber, Fiber Surface Area and Bonded Area, TAPPI Journal, Vol. 42, No. 12, Dec. 1959, 986-994
[22] R. E. Mark, Handbook of Physical and Mechanical Testing of Paper and Paperboard - Chapter 25, Volume 2, Marcel Dekker Inc., 1984
[23] W. L. Ingmanson & E. F. Thode, Factors Contributing to the Strength of a Sheet of Paper - II. Relative Bonded Area, TAPPI Journal, Vol. 42, No. 1, Jan. 1959, 83-93
[24] O. Kallmes & H. Corte, The Structure of Paper - I. The Statistical Geometry of an Ideal Two Dimensional Fiber Network, TAPPI Journal, Vol. 43, No. 9, Sept. 1960, 737-751
[25] R. Gagnon, B. Drouin, L. G. Dicaire & P. Bernard, High Spatial Resolution Non-Destructive Testing of Newsprint, Journal of Pulp and Paper Science, Vol. 16, No. 6, Nov. 1990, 179-183
[26] H. W. Kropholler, B. Clarks & J. Gomes, Investigating Paper Structure Using Image Analysis, Prof. M. T. Kortschot's paper collection
[27] W. R. Devries & S. M. Wu, Characterization of the Formation of Base Sheet Paper Using Spectral Moments Determined From Time-series Models, Prof. Kortschot's paper collection
[28] B. Norman & D. Wahren, A Comprehensive Method for the Description of Mass Distribution in Sheets and Flocculation and Turbulence in Suspensions, Prof. Kortschot's paper collection
[29] B. D. Jordan & N. G. Nguyen, Specific Perimeter - a Graininess Parameter for Formation and Print-Mottle Textures, Paperi ja Puu, 6-7, 1986, 476-482
[30] R. C. Brendemuehl & J. E. Borchert, Real-time Spectral Analysis of Machine Directional Basis Weight Variations in Paper, TAPPI Journal, Vol. 59, No. 8, 1976, 79-81
[31] D. Attwood & J. R. Parker, Basis Weight Variations Over Small Areas of Paper, Paper Technology, Vol. 3, No. 5, 1962, 435-442
[32] T. Yuhara, M. Hasuike & K. Murakami, Application of Image Processing Technique for Analysis of Sheet Structure of Paper - II. Evaluation of the Formation of Paper, Prof. Kortschot's paper collection
[33] T. Yuhara, M. Hasuike & K. Murakami, Application of the Co-occurrence Matrix Technique for Evaluation of the Formation of Paper, Prof. Kortschot's paper collection
[34] C. T. J. Dodson & C. Schaffnit, Flocculation and Orientation Effects on Paper Formation Statistics, TAPPI Journal, Vol. 75, No. 1, Jan. 1992, 167-171
[35] M. Hasuike, P. Johansson, C. Fellers & O. Terland, Fiber Orientation and Its Relation to Paper Formation Studied by Image Analysis, Paper Physics Conference, 1987, 185-188
[36] C. Schaffnit, J. Silvy & C. T. J. Dodson, Orientation Density Distribution of Fibers in Paper, Nordic Pulp & Paper Research Journal, Vol. 7, No. 3, 1992, 121-125
[37] F. C. Rodrigues & F. D. Carvalho, Measurement of Randomness in Fiber Distribution in Paper using Computer Vision, TAPPI Journal, Dec. 1990, 209-212
[38] T. Yuhara, M. Hasuike & K. Murakami, Preprint of Pulp and Paper Conference, Japan TAPPI, 1986, p. 47
[39] T. Yuhara, M. Hasuike & K. Murakami, Japan TAPPI, 41, 1987, 523-529
[40] T. Yuhara, M. Hasuike & K. Murakami, Japan TAPPI, 40, 1986, 85091
[41] C. T. J. Dodson, Some Statistical Considerations in the Analysis of the Deformation of Inhomogeneous Materials, Structure, Solid Mechanics and Engineering Design, The Proceedings of the Southampton 1969 Civil Engineering, Wiley-Interscience, 1971, 579-587
[42] C. T. J. Dodson, The Failure of Bonds During the Fracture of a Random Fibrous Network, Structure, Solid Mechanics and Engineering Design, The Proceedings of the Southampton 1969 Civil Engineering, Wiley-Interscience, 1971, 357-366
[43] L. Kato, Evaluation of a Video Image Correlation Program, B.A.Sc. Thesis, University of Toronto, 1994
[44] H. Feldman, K. Jayaraman & M. T. Kortschot, A Monte Carlo Simulation of Paper Deformation and Failure, Prof. Kortschot's paper collection
[45] J. Gomes, H. W. Kropholler and P. Luner, Measuring Flocculation Using Image Analysis, Prof. Kortschot's paper collection
[46] R. J. Trepanier, A User Friendly Image Analysis System for the Routine Analysis of Paper Formation, Dirt Speck Content and Solid Print Non-Uniformity, Process and Product Quality Division Symposium, 1989, 79-83

Appendix

Appendix I

% File Name: globGrad.m
% Author: Yinglin Chen
% Date: May, 1996

resolution=0.2; % resolution(m/pixel) imgLen=56; % length of the original image (mm) suvLen=56; % length of surveyed area (mm) suvWid=56; % width of surveyed area (mm) insLen=l; % inspection zone side length (mm) xl=l; % starting row of the surveyed area yl=l ; % starting colurnn of the surveyed area x2=xl+suvWid/resolution-1; % ending row y2=yl+suvLen/resolution-1; % ending column imgDim=imgLen/resolution; % dimension of the original image suvWDim=y2-yl+l; 8 dimension of surveyed area length suvLDim=x2-xl+l; % dimension of surveyed area width opOrder=5; % order of operator w= (oporder-1)/2; % convert the gray scale map of the surveyed area to grammage map % - by polynomial fitting % grammage of the calibration wedges grammage=[97.2 81.0 . 6 4 . 8 48.6 32.4 16.21; gray= [ 1 ;

G=zeros (suvLDim,suvWDim); % initialize the mass distribution matrix for i=1:6 gray=[gray A((i-l)*2+1)]; % read in the calibration numbers end 8 2-order polyfit of mass and gray scale p=polyfit (gray, grammage, 2); % convert the gray to mass and Save in G for i=l:suvLDim for j=l :suvWDirn g=A( (i+xl-2) imgDim+ (j+yl-1)+12); G(i,j)=p(l)*gA2+p(2)* g + p ( 3 ) ; end end validLDim=suvLDim-2*w; aalidWDim=suvWDim-2*w; Ssize of deltg and t h e t a PrewCol=(l/(w*opOrder))*[l 1 O -1 -1 1 1 O -1 -1 1 1 O -1 -1 1 1 O -1 -1 1 1 O -1 -11; PrewRow= (l/(w*opOrder)) * [l 1 1 1 1 1 1 1 1 1 0 0 0 0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -Il;% define prewitt row and column operators GradR0w=conv2(G,PrewRow,~valid'); % gradient in row direction

~rad~ol=conv2 (GrPrewCol, valid' )

; 3

gradient in column direction

deltg=sqrt(GradRow~"2+GradCol."2); % magnitude of gradient theta=atan2(GradRowfGradCol); % angle of gradient in radian

Unit=ones (validLDimfva1idWDim) ; MùyN=validLDim+validWDim; Mudeltg=mean(mean(deltg)); 3 mean magnitude of gradient Sigmdeltg2=mean(mean( (deltg-Mudeltg*Unit) - * 2 1 ) ; % variance of magnitude of gradient CVdeltg=sqrt(Sigmdeltg2)/Mudeltg; % deviation of gradient madnitude Kdeltg=Mudeltgn2/Sigmdeltg2; % parameter of gamma distribution Mux=mean(mean(GradCol)); Muy=mean (mean(GradRow)) ; Vxx=sum ( s u( (Gradcol-Mux'Unit) .*2! ) /MbyN; Vyy=surn (sum( (GradRow-Muy+Unit).A2)) /MbyN; Vxy=sum(s~((GradCol-M~x*Unit).*(GradRow-Muy*Unit)))/MbyN; Lurndmax=(Vxx+Vyy+sqrt((Vxx-Vyy)"2+4+VxyA2)) / 2 ; Lumdmin=(Vxx+Vyy-~qrt((Vx~-Vyy)~2+4*Vxy~2) )/2; e2=LumdmaxA2/LumdminA2; % eccentrity number e2 local coherence distribution insDim=opOrder; ~hetad2=zeros(validLDim/insDim~validWDim/insDim); Thetad=zeros(valid~~im/ins~im~va1idWDim/insDim); C=zeros(validLDim/insDim, validWDim/insDiml; %

for i=l:validLDim/insDim for j=l:validWDim/insDim yO=(i-1)*insDirn+l; x0=(j-1) *insDim+l; sinSum=O; cossum=o; for y=yO:yO+insDim-1 for x=xO:xO+insDim-1 sinSum=sinSum+deltg(yfX} "2*sin (Zftheta(yfx ) ) ; cosSum=cosSum+deltg( y rx) *2*cos (2'theta (yfx ) 1 ; end end Thetad2 (i,j 1 =atan2 (sinSumfcosSm); for y=y0 :yO+insDim-I for x=xO:xO+insDim-1 ~ ( ij)=C(if~j)+sqrt(deltg(y,x)^2+(cos(2*theta(yrx), Thetad2 (ifj) ) +1)/2); end end ; C(i, j)=C(ifj ) / (insDimA2) end

end Mucwean (mean(C)) ; Lumdc=1 /Muc; Sigmc2=mean(mean((C-Mucfones(validLDim/insDimfvalidWDim/insDim) )*"2)); CVc=sqrt (Sigmc2)/Muc; % find the local dominant orientation and coherence (absolute values of projections)

% File Name: ConvCompare.m
% Function: Compare time efficiency of the original and recursive algorithms
%           for two-dimensional convolution
% Author: Yinglin Chen
% Date: May, 1997

% recursive algorithm tic; imgDim=280; % size of the matrix for convolution opOrder=S; % order of the operator w= (oporder-1)/2; for i=l:imgDim-2*w for k=l:w+l Hp(k)=sum(sum(G(i:2'w+i, k:w+k-1)) ) ; newCol(k)=sum(G(i:i+2*wf w+k)) ; end Ptr=O; for j=l:imgDim-2*w-1 ptrEnd=rem(Ptr+w, w+lf+l; ptrBegin=rem ( P t r , w+l)+l; HpCJew=Hp(ptrEnd)+newCol (ptrEnd)-newCol (ptrBegin); GradCol2 (ifj ) =HpNew-Hp (ptrBegin); Hp (ptrBegin)=HpNew; newCol (ptrBegin)=sum (G(i:2*w+i,j +2*w+L)) ; Ptr=Ptr+l; end j=j+i; ptrEnd=rem(Ptr+w, w+l)+l; ptrBegin=rem(Ptr, w+l)+l;

HpNew=Hp(ptrEnd)+newCol(ptrEnd)-newCol(ptrBegin); GradCol2 (i,j ) =HpNew-Hp (ptrBegin); end % GradRow for j=l:imgDim-2*w . for k = l : w+l Vp(k)=sum(sum(G(k:w+k-1, j:2*w+j) ) ) ; newRow(k)=sum(G (w+kf j :2*w+j)) ; end Ptr=O ; for i=l:imgDim-2*w-1 ptrEnd=rem (Ptr+w, w+l)+l; ptrBegin=rem ( Ptr, w+ 1) +l; VpNew=Vp (ptrEnd)+newRow(ptrEnd)-newRow (ptrBegin); GradRow2 ( i, j) =VpNew-Vp (ptrBegin); Vp (ptrBegin)=VpNew; newRow (ptrBegin)=sum (G(i+2*w+lf j:2*w+j ) ) ; Ptr=Ptr+l; end i=i+l; ptrEnd=rem(Ptr+w, w+l)+l; ptrBegin=rem(Ptr, w+l)+l; VpNew=Vp (ptrEnd)+newRow (ptrEnd)-newRow (ptrBegin); GradRow2 ( i fj ) =VpNew-Vp (ptrBegin); end tConvMod=toc; %

original algorithm

tic; for i=l:imgDim-2*w for j=l:imgDim-2*w

Hn=O ; Hp=O ; for y=i:i-l+opOrder f o r x=j :j - l + w Hn=Hn+G (y, x) ; end

for x=j +w+l :j +2*w Hp=Hp+G (y, x 1 ; end

end GradCol ( i fj ) = (Hp-Hn) / ( w i r o p 0 r d e r ) ;

Hn=O ; Hp=O ; f o r x= j :j -l+opOrder for y = i : i-l+w HpSHp+G ( y, x 1 ; end for y=i+w+l:i+2+w Hn=Hn+G(y,x) ; end end GradRow ( i fj ) = (Hp-'Hn) / ( w * o p O r d e r ) ; end end tConvOrg=toc;

Appendix II

% File Name: comatrixCD.m
% Author: Yinglin Chen
% Date: May, 1997

resolution=0.2; 8 (mm/pixel) imgLen=56; % length of the original image (mm) suvLen=56; % length of surveyed area (mm) suvWid=56; % width of surveyed area (mm) pxlLen=0.2; % inspection zone side length (mm) xl=l; % starting row of the surveyed area yl=l ; % starting column x2=xl+suvWid/resolution; % y2=yl+suvLen/resolution; % imgDim=imgLen/resolution; suvWDim=y2-yl; % dimension suvLDim=xS-xl; % dimension

ending row ending colurnn % dimension of the original image of surveyed area length of surveyed area width

% convert the gray scale of the surveyed area to mass by polynomial fitting grammage=[97.2 81.0 64.8 48-6 32.4 16.21; gray= [ 1 ; % initialize the mass distribution matrix G=zeros(suvLDim,suvWDim); for i=1:6 gray=[gray A((i-l)*S+l)]; % read in the calibration numbers end % 2-order polyfit of mass and gray scale p=polyfit (gray, grammage, 2) ; % convert the gray to mass and Save in G for i=l:suvLDim for j=l :suvWDim g=A( (i+xl-2)*irngDim+(j+yl-1)+12) ; G ( i , j)=p(l)'gA2+p(2) * g + p ( 3 ) ;

end

end m=floor (max(max( G )) ) ; P=zeros (m,rn) ; for i=l:suvLDim for j=l:suvWDim-1 P(floor(G(i, j)), floor(G(i,jil)) )=P(floor(G(i,j ) ) , floor(G(i, j+1)) ) +1; end % above generate the CO-occurrence matrix end Sum=suvLDimf (suvWDim-1); P=P ./Sun; % normalize the CO-occ matrix S=nnz (P); S Ppeak=max (max( P)) ; Ppeak FI=Ppeak/S; % formation number FI energy=sum (sum(P."2)) ; energy

contrast=O; homo=O ; entropy=O ; for i=l :m for j=l:m contrast=contrast+(i-j)A2*P(Ffj); if -(P(i,j)==O) entropy=entropy+ (-l)*P(irj ) flog(P(i,j)) ; end homo=homo+l/(l+(i-j)"2)*P(i,j); end end contrast homo entropy S energy, entropy and homogenetity if pxllen-=resolution & pxlLen>resolution aveDim=pxllen/reso~ution; Gnew=zeros(suvLen/pxlLen, suvWid/pxlLen); for i=l:suvLen/pxllen for j=l:suvWid/pxlLen y0= (i-1)*aveDim+l; xO=(j-1) *aveDim+l; Gnew ( ifj ) =mean (mean(G(y0:yO+aveDim-1,x0 :xO+aveDixn-1 ) 1 1 ; end % smooth the image to desired pxl size end G=Gnew; m=floor(max(max(G)) ) ; P=zeros (m,m) ; for i=l:suvLen/pxlLen for j=l:suvWid/pxlLen-1 P(floor(G(i,j)), floor(G(i,j+l)))=P(floor(G(i,j)1, floor ( G( i, j+l)) ) +l; end % above generate the CO-occurrence rnatrix end % nomalize the CO-occ matrix P=P/((suvLen/pxlLen)*(suvWid/pxllen-1)); end PBl=O; PB2=0; PCl=O; PC2=0; PA=O ; PD=O ; PB3=0; PC3=0; mid=floor (mean(mean(G)) ) ; PBl=sum(sum(P(l:mid,mld))) 1 ; PC2=sum(sum(P (mid,l:mid)) ) ) ; PB2=sum(sum(P (mid,mid:m) ) ) ; PCl=sum(sum(P(mid:m,mid))) ) ; PA=sum(sum(P(l:rnid-lr1:mid-1) ) ) ; PD=sum(sum(P(mid+l:m,mid+l:m) ) ) ; PB3=sum(sum(P(1:mid-l,11tid+l:m)) ) ; PC3=sum(sum(P(mid+1:mt1:mid-1)));

N~=(O.~*(PB~+PC~)+O.~~~*(PB~+PC~)~PB~~PC~)*(SUVWD~~-~); % floc number

% floc size
% floc distance

% File Name: comatrixMD.m
% Author: Yinglin Chen
% Date: May, 1997

resolution=0,2; % (mm/pixel) imgLen=56; % length of the original image (mm) suvLen=56; % length of surveyed area (mm) suvWid=56; % width of surveyed area (mm) pxlLen=0.2; 8 inspection zone side length (mm) xl=l; % starting row of the surveyed area yl=l ; % starting c o l m x2=xl+suvWid/resolution; % y2=yl+suvLen/resolution; 3 imgDim=imgLen/resolution; suvWDim=y2-yl; % dimension suvLDim=x2-xl; % dimension

ending row ending column % dimension of the original image of surveyed area length of surveyed area width

% convert the gray scale of the surveyed area to mass by polynomial fitting grammage=[97.2 81.0 64.8 48.6 32.4 16-21; gray= [ 1 ; G=zeros(suvLDim,suvWDim); 8 initialize the mass distribution matrix for i=1:6 gray=[gray A((i-1)*2+1)]; % read in the calibration numbers end p=polyfit(gray, grammage, 2 ) ; % 2-order polyfit of mass and gray scale for i=l:suvLDim for j=l :suvWDim g=A( (i+xl-2)* imgDim+ (j+yl-1)+12); G ( i , j)=p(l)*gn2+p(2)*g+p(3) ; % convert the gray to mass and Save in G end end

m=f loor (max(max( G )1 ) ; P=zeros (m,m) ; for j =l :suvWDim for i=l:suvLDim-1 P(floor(G(i,j ) ) , floor(G(i+l,j) )=P(floor(G(i,j )1 , floor(G(i+l,j)))+l; end end Sum=suvWDim* ( suvLDim-I); P=P. /Sum; S=nnz(P); S Ppeak=max (max(P)) ; Ppeak FI=Ppeak/S;

FI

energy=~um(sum(P.~Z)); energy contrast=O; entropy=0; homo=O ; for i=l:m for j = l :m contrast=contrast+(i-j)"2*P(i ,JI; if -(P(i,] ) = = O ) entropy=entropy+ (-1)*P (ifj 1 *log!P ( i fj1 ) end homo=homo+l/(l+(i-j) ^2)fP(i, j); end end contrast homo entropy % Energy, entropy, homo and contrast

;

if pxlLen-=resolution & pxlLen>resolution aveDim=pxlLen/resolution; Gnew=zeros(suvLen/px1Len, suvWid/pxlLen); for i=l:suvLen/pxlLen for j=l:suvWid/pxlLen y0= (i-1)*aveDim+l; x0= (j-1)*aveDim+l; Gnew(i,j)~ean(mean(G(yO:yO+aveDim-1,x0:xO+aveDimend % smooth the image to desired pxl size end G=Gnew;

m=f loor (max (max(G)1 1 ; P=zeros (m,m) ; for j=l:suvWid/pxlLen for i=l:suvLen/pxlLen-l P(floor(G(i,j)), floor(G(i+l,j)))=P(floor(G(i,j)), floor(G(i+l,j)))+l; end % got CO-occ matrix again end Sum=suvWid/pxlLen*(suvLen/px1Len-1); P=P./Surn; % normalize the CO-occ matrix end 3 Partition the GLCM mid=floor (mean(mean(G)) ; PBl=sum(sum(P (l:mid,mid)) ) ; PCZ=sum (sum(P(mid,1 :mid)) ) ; PB2=sum(sum(P(mid,mid:m)1 ) ; PCl=sum(sum(P(mid:m,mid)1 ) ; PA=sum(sum(P(l:mid-la,1:mid-11) 1 ; PD=sum(sum(P(mid+l:m,mid+l:m)) ) ; PB3=sum(sum(P(l:rnid-l,mid+l:m) ) ) ; PC3=sum(sum(P (mid+l:m,1:mid-1) ) ) ; ~f=(0.3*(~~1+~C1)+O~375*(PB2+PC2)+PB3+PC3)*(suvWid/pxlLen-l); Nf FS=PD*suvLen/Nf; FS FD=suvLen/Nf; FD %

write result to file

fid=f~pen('comatrixMD~mat'~~a+~); f p r i n t f ( f i d , 'FI: % g \ n l , F I ) ; f p r i n t f ( f i d , ' P p e a k : % g \ n t, Ppeak) ; f p r i n t f ( f i d , 'S: % g \ n l , S ) ; f p r i n t f ( f i d , 'energy: % g \ n l ,energy ) ; fprintf(fid, 'contrast: %g\nl, contrast); f p r i n t f ( f i d , 'entropy: % g \ n l , e n t r o p y ) ; f p r i n t f ( f i d , 'homogineity: % g \ n l , homo) ; f p r i n t f ( f i d , 'Nf: % g \ n l , N f ) ; f p r i n t f ( f i d , ' FS: % g \ n l , FS) ; f p r i n t f ( f i d , ' FD: % g \ n l , FD) ; f p r i n t f (fid, 'tSetup: %g\nl, tSetup) ; f p r i n t f ( f i d , ' tMass :. % g \ n ', tMass) ; f p r i n t f ( f i d , ' tGray: % g \ n l , tGray) ; fprintf(fid, 'te: %g\nl, t e ) ; f p r i n t f ( f i d , 'tconvention: % g \ n t , tconvention); f p r i n t f ( f i d , ' tsmooth: S g \ n l , tSmooth) ; f p r i n t f ( f i d , ' t F l o c : % g \ n r, t F l o c ) ; f p r i n t f ( f i d , ' t T o t a l : % g \ n ', t T o t a l ) ; fclose ( f i d ) ;

Appendix III

% File Name: localGrad.m
% Author: Yinglin Chen
% Date: June, 1997

%

Read in .tiff image file and convert the gray level image to

[~,ma~]=imread('/home/grad/yinglin/research/image/ts2O~film~loca~. tifr); A=double (X); for i=1:220 for j=1:220 if A(i,j) < 160 G(irj)=pUl)*A(irj1 + p l (2); else ~ ( ij)=p2(l)*A(ir , j)"2+p2(2)*A(itj)+p2(3); end end end gmean=mean(mean(G)); % Mean grammage value %save -v4 tslfilrnlocal G % Given G (the grammage map) already resolution=1.6/220; % (mm/pixel) imgLen=l. 6; % length of the original image (mm) suvLen=l.6; % length of suveyed area (mm) suvWid=1.6; % width of surveyed area (mm) %insLen=0.02; % inspection zone side length (mm) xl=l; % starting row of the surveyed area yl=l ; % starting column

x2=xl+suvWid/resolution; % ending row y2=yl+suvLen/resofution; % ending column imgDim=irngLen/resolution; % dimension of the original image suvWDim=y2-yl; % dimension of surveyed area length suvLDim=x2-xl; % dimension of surveyed area width opOrder=5; % order of operator, odd number w= (oporder-1 /2; % initialize matrices for deltg and theta deltg=zeros(suvLDim-2*wrsuvWDim-2%; theta=zeros (suvLDh-2*w,suvWDim-2*w);

define prewitt row and column operators PrewCol(1:oporder,1 :w)=ones (oporder& ; PrewRow ( 1 :w, 1 :oporder ) =ones ( w, oporder); PrewRow (w+l,1: oporder)=zeros (1,oporder); PrewCol ( 1 :oporder,w+l ) =zeros (oporde; PrewCol(l:opOrder,w+2:opOrder)=(-l)+ones~opOrder~opOrder-~-~); PrewRow (w+2:oporder,1 :oporder)= (-1)*ones (oporder-w-1,oporder) ; %

GradRow=conv2(G,PrewRow,'valid'); % gradient in row direction GradCol=conv2(G,PrewCol,'validt); % gradient in column direction deltg=sqrt(GradRo~.~2+GradCol.~2); Mudeltg=mean (deltg); Mudeltç theta=atan2(GradRow,Gradcol) ;

Mux=mean (mean(GradCcjl)) ; Muywean (mean(GradRow)) ; Vxx=sum(~um((GradCol-MuxfUnit).~2))/MbyN; Vyy=sum (sum ( (GradRow-Muy*Unit) .A2)) /MbyN; Vxy=sum(sum((GradCol-Mux*Unit).*(GradRow-Muy*Unit)))/~yN; Lumdmax=(Vxx+Vyy+sqrt((Vxx-Vyy)"2+4*VxyA2))/2; Lumdmin=(Vxx+Vyy-sqrt((Vxx-Vyy)A2+4*V~y*2))/2; e2=LumdmaxA2/LumdminA2; 3 eccentrity number e2 e2

for i=l:LoDoLDim for j =l :LoDoWDim y0= (i-1)*insDim+l; x0= (j-1)*insDim+l; sinSum=sum(sum(deltg(y0:yO+insDim-l,x0:xO+insDim1),A2.*sin(2*theta(y0:yO+insDim-lI~O:x0+insDim-l))1 ) ; cosSum=sum (sum(deltg(y0:yO+insDim-l,x0: xO+insDim1).A2.*cos(2*theta(y0:yO+insDim-1,x0:x0+insDim-l)))l; Thetad2 (i,j ) =atan2 (sinSm,cosSum); if Thetad2(i,j)>=O Thetad(i,j)=O.S*Thetad2 ( i , j ); end if Thetad2 (ifj ) CO Thetad(i,j)=0,St(2*pi+Thetad2(i,j)); end Thetadc=ones (lns~im, i n s D i m ) *Thetad (i,j ) ; Coherence(i,j)=sum(sum(abs(deltgc.*cos(thetac-Thetadc~1 ) ; Coherence(i, j)=Coherence(i,j)/ (insDimA2); end end % above find the local dominant orientation & coherence( absolute values of projections) Muc=mean(mean(Coherence)); Muc % quiver plot of the whole suveyed area with Thetad ThetadX=Coherence.*cos(Thetad); ThetadY=Coherence.*sin(Thetad); quiver (ThetadX, ThetadY,1);

title ( 'Local Dominant Orientation of ts20 print -deps LoDoOrTs20.eps pause

)

quiver plot of the surveyed area with theta %quiver(GradCol(1:2Or1 ~ 2 0,GradRow(l:20,1:20) ) ,1) ; %title('Local Gradient Magnitude and Orientation of tst201) %print -deps LoGrTst20.eps %tPlot2=toc; %pause; %

% polar plot of local dominant orientation pThetad=zeros ( 181,l); alph-0: 179; alph=alph1.*pi./180; for i=l:LoDoLDim for j=l:LoDoWDim floThetad=floor(Thetad(irj)/pi*180+1); pThetad(floThetad)=pThetad(floThetad)+Coherence(i,j); end end pThetad(l)=pThetad(l) +pThetad(l81); pThetad=pThetad(l:180); pnorm=pThetad,/sum (pThetad); polar (alph, pnorm) ;gtext('Probabi1ity Distribution of Local Dominant Orientation of ts20' ) print -deps PolarLoD.oOrTs20.eps

write result to file 1ocalGrad.mat Save localGrad Thetad tSetup tMass t G r a y tAlgrthm tPlotl tPlot2 tTotal %

Appendix IV

% File Name: OrientFFT.m
% Author: Yinglin Chen
% Date: June, 1997

% The program uses fast Fourier algorithm. When dimension of G is power of 2, Radix-2 algorithm is used which is the fastest case.

% Read in .tiff image file and convert the gray level image ta grammage map grammage=[162.0 145.8 129.6 113.4 97.2 81.0 64.8 48.6 32.4 16.21; gray=[197.156 195.262 186,809 182.402 170.865 153,401 124.636 96.8308 63.5572 37.02511;

gram= 1; pl=polyfit(gray(6:lO),grammage(6:l0),1); p2=polyfit (gray( 1:5),grammage (1:5 ) ,2) ;

[~,map]=imread('/home/grad/ying1in/research/image/ts2O~fi~m~local. tif'); A=double (X); for i=1:220 for j=1:220 if A ( i , j) C 160 G(i,j)=pl(l)*ACi,ji+p1(2); else G ( i , j)=p2(11 *A(i,j )h2+p2(2)*A& jl+p2(3); end end end gmean-ean (mean(G)) ; % Mean grammage value %save -v4 tslfilrnlocal G % given the matrix G (the grammage map) first resolut=137,5; %resolution pxls/mm B=0.02; %fibre width mm n=220; % s i z eof the matrix examined pxl kl=floor(n/(2*B*resolut)); %Frequency band kl-k2 k2=floor (n/(B*resolut)) ;

Fctemp=(fft ( G )) . '; Fcp=Fctemp(:, l:k2+1) ; Fcn=conj (Fctemp(:,2: k2+1)) ; Fp=fft (Fcp); Fp=Fp(l:k2+1, : ) . '; Fn=fft (Fcn); Fn=Fn(l:k2+1, : ) . ' F= [ Fp Fnl ; P=F.+conj(F); % get power spectrum P p=zeros(l81,1); %get the angular distribution and rotate it by 90 degrees . %frequency band is kl-k2. Angles are from O to for u=O:kl-1 179 degrees vstart=ceil(sqrt(klA2-uA2)); vend=floor (sqrt (k2"2-uA2)) ; for v=vstart :vend theta=atanZ (v,u) *180/pi;

flotheta=floor (theta); p(flotheta+91)=p(flotheta+9I)+P(v+l,u+l); theta=atan2(-v,u)*l8O/pi; ceiltheta=ceil (theta); p (ceiltheta+91)=p (ceiltheta+91)+P (k2+l+v,u+l); end end for u=kl :k2 vstart=O ; vend=floor(sqrt(k2"2-uA2) 1; for v=vstart:vend theta=atan2 (o,u)*l8O/pi; flotheta=floor (theta); p~flotheta+91)=p(flotheta+91)+P(v+l,u+l); theta=atan2(-v,u)*l8O/pi; ceiltheta=ceil (theta); p(ceiltheta+91)=p(ceiltheta+91)fP(k2+1+vfu+1); end end p(l)=p(l)+p(l81); p=p(l:180) ; pnorm=p./sum(p); %normalize the angle distribution, result stored in pnorm alph=l:180; alph=alphr; %find the dominant orientation which maximizes the sum of projections of probabilities of al1 the 180 angles SinSum=O ; CosSm=O; . for theta=O:179 SinSum=SinSum+pnorm(theta+l)"2*~in(2*theta*pi/180); CosSum=CosSum+pnorm(theta+l)^2'cos(2*theta*pi/~80); end thetadZ=atanZ (SinSm,CosSm) ; if thetad2= O thetad=Oq5*thetad2; end thetad=thetad/pit180; thetad % thetad is the dominant orientation coherence=sum(abs (pnorm(1 :180),"cos ( (alph-1)/180*piones (l80,l)*thetad)) 1 ; coherence % coherence is the agreement to the dominant orientation coherence90=sum(abs (pnorm(1:180) .*cos ( (alph-1)/180epiones (l80,l)*O.5*pi)) ) ; coherence90 3 cherence90 is the degree of agreement to 90 degree, the vertical direction Theta=0:179; %polar plot of the normalized angle distribution Theta=Thetar; Theta=Theta*pi/l80; polar (Theta, pnorm) ; gtext('Fibre orientation Distribution of ts20'); gtext ( ' Probability') ; gtext('angle'1; print -deps OrientTs20;
