Appendices

Author: Theresa Cox

Appendix A
There is a correspondence between a sufficiently long and flexible chain in a polymer melt and a random walk: the trajectory of the random walk corresponds to the coarse-grained configuration of the chain, and one step of the random walk corresponds to moving along the chain over the distance of one segment (see chapter 1). It was first shown by Flory11 (and later confirmed experimentally,24,25 theoretically26 and by computer simulations27) that the probability of finding the chain in a certain coarse-grained configuration equals the probability that the Brownian particle follows the corresponding trajectory.

Consider a random walk starting at the origin. Let g(N, x) be the probability density of arriving at the point x after N steps. In this appendix it is shown that in the limit of a large number of steps g(N, x) becomes Gaussian, irrespective of the single-step probability density g(x) ≡ g(1, x), provided that it is isotropic. The probability density g(N, x) satisfies the recurrence relation

g(N+1, \vec{x}) = \int d\vec{y}\; g(N, \vec{x} + \vec{y})\, g(\vec{y})    (A1)

Since for large values of N the characteristic length scale of g(N, x) is much larger than the characteristic length scale of g(x), the value of g(N, x) will hardly change over the region where the integrand gives the major contribution to the integral. Therefore it makes sense to expand g(N, x + y) in a Taylor series around the point x. Since the first-order term gives no contribution because of the isotropy of g(y), it is necessary to expand to second order. The result is

g(N+1, \vec{x}) = g(N, \vec{x}) + \frac{1}{2} \left. \frac{\partial^2 g}{\partial x_i \partial x_j} \right|_{(N, \vec{x})} \int d\vec{y}\; y_i y_j\, g(\vec{y})    (A2)

Due to the symmetry of g(y) the summation over i and j only gives a contribution if i = j, and this contribution is independent of i. Moreover,

\int d\vec{y}\; y_i^2\, g(\vec{y}) = \frac{1}{3} \int d\vec{y}\; y^2\, g(\vec{y}) \equiv \frac{1}{3} a^2    (A3)

The difference equation A2 can be approximated by a differential equation:

\frac{\partial g}{\partial N} = \frac{a^2}{6} \nabla^2 g    (A4)

This is just the diffusion equation. Taking as initial condition

g(0, \vec{x}) = \delta_D(\vec{x})    (A5)

the solution of this equation is given by

g(N, \vec{x}) = \left( \frac{3}{2\pi N a^2} \right)^{3/2} \exp\!\left( -\frac{3x^2}{2Na^2} \right)    (A6)

This shows that the probability density indeed becomes Gaussian after sufficiently many steps, regardless of the precise form of the single-step probability density g(x). Even if the walk is not random in the sense that the direction of each step depends on the direction of the previous step, the result A6 still applies, although the value of a is then no longer equal to the root mean square of the single-step distance. If the segments are chosen long enough, the corresponding single-step distribution function g(x) is itself Gaussian, regardless of the details of the chemical bonds. Therefore one can take

g(\vec{x}) = \left( \frac{3}{2\pi a^2} \right)^{3/2} \exp\!\left( -\frac{3x^2}{2a^2} \right),
\qquad
g(\vec{q}) = \exp\!\left( -\frac{a^2 q^2}{6} \right)    (A7)

where the second expression is the Fourier transform of the first, and a is called the "statistical segment length." Note that it is not necessary to relate a to the microscopic distances between the chemical units: it can be regarded as an experimental parameter.
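The central-limit behaviour derived above is easy to check numerically. The following sketch (an illustration, not part of the thesis) draws steps from a deliberately non-Gaussian but isotropic single-step density, a uniformly random direction with fixed length a, and confirms that the end-to-end vector has variance Na²/3 per Cartesian component and Gaussian kurtosis:

```python
import numpy as np

# Monte Carlo check of A6: even for a non-Gaussian single-step density
# (uniform direction, fixed step length a), the end-to-end vector of an
# N-step walk becomes Gaussian with variance N a^2 / 3 per component.
rng = np.random.default_rng(0)
a, N, walks = 1.0, 100, 20_000

steps = rng.normal(size=(walks, N, 3))
steps *= a / np.linalg.norm(steps, axis=2, keepdims=True)  # isotropic, |step| = a
end = steps.sum(axis=1)                                    # end-to-end vectors

print(end.var(axis=0))                    # each component close to N a^2 / 3 = 33.3
x = end[:, 0]
print(np.mean(x**4) / np.mean(x**2)**2)   # close to 3, the Gaussian value
```

The step length here is fixed, so a in A6 coincides with the root-mean-square step distance, as stated in the text.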


Appendix B

The purpose of this appendix is the evaluation of the integral given in equation B1, which may be interpreted as the partition function Z of a statistical mechanical system in the Landau representation. In this interpretation the set {y_i} is the coarse-grained state of the system, and the exponent in the numerator is the series expansion of the effective Hamiltonian (Landau free energy):

F = -\ln Z

Z = \frac{ \int d\{y_i\}\, \exp\!\left( -\frac{1}{2} B_{ij} y_i y_j - A_i y_i - \frac{1}{2!} A_{ij} y_i y_j - \frac{1}{3!} A_{ijk} y_i y_j y_k - \cdots - \frac{1}{k!} A_{i_1 \cdots i_k} y_{i_1} \cdots y_{i_k} \right) }{ \int d\{y_i\}\, \exp\!\left( -\frac{1}{2} B_{ij} y_i y_j \right) }    (B1)

In this appendix we use the Einstein summation convention: repeated indices are summed over. Note that appendix B is the only part of the thesis where this convention is used. Without loss of generality it can be assumed that the tensors A and B are symmetric. The first step is the expansion of the A-dependent part of the exponential in a power series.

Z = \frac{ \int d\{y_i\}\, e^{-\frac{1}{2} B_{ij} y_i y_j} \sum_{n=0}^{\infty} \frac{(-1)^n}{n!} \left( A_i y_i + \frac{1}{2!} A_{ij} y_i y_j + \cdots + \frac{1}{k!} A_{i_1 \cdots i_k} y_{i_1} \cdots y_{i_k} \right)^n }{ \int d\{y_i\}\, e^{-\frac{1}{2} B_{ij} y_i y_j} }    (B2)

Therefore one needs to calculate integrals of the form

\frac{ \int d\{y_i\}\, y_{i_1} y_{i_2} \cdots y_{i_m}\, e^{-\frac{1}{2} B_{ij} y_i y_j} }{ \int d\{y_i\}\, e^{-\frac{1}{2} B_{ij} y_i y_j} } \equiv \langle y_{i_1} y_{i_2} \cdots y_{i_m} \rangle    (B3)

Clearly this integral is zero if m is odd. In order to calculate it for even values of m, we use the auxiliary function S( xi ) , which is defined by

S(x_i) \equiv \frac{ \int d\{y_i\}\, e^{-\frac{1}{2} B_{ij} y_i y_j + x_i y_i} }{ \int d\{y_i\}\, e^{-\frac{1}{2} B_{ij} y_i y_j} } = e^{\frac{1}{2} B_{ij}^{-1} x_i x_j}    (B4)

The expectation values B3 can be obtained by repeated differentiation of S(x_i):

\langle y_{i_1} y_{i_2} \cdots y_{i_m} \rangle = \left. \frac{\partial^m S(x_i)}{\partial x_{i_1} \cdots \partial x_{i_m}} \right|_{x_i = 0}    (B5)

Using the explicit expression B4 for S one obtains

\langle y_i y_j \rangle = B_{ij}^{-1}

\langle y_i y_j y_k y_l \rangle = \langle y_i y_j \rangle \langle y_k y_l \rangle + \langle y_i y_k \rangle \langle y_j y_l \rangle + \langle y_i y_l \rangle \langle y_j y_k \rangle    (B6)

\langle y_i y_j y_k y_l y_m y_n \rangle = \langle y_i y_j \rangle \langle y_k y_l \rangle \langle y_m y_n \rangle + \langle y_i y_m \rangle \langle y_j y_n \rangle \langle y_k y_l \rangle + \cdots

In equation B2, Z has been written as an infinite sum. Consider a term arising from the multinomial expansion of the nth term in the Taylor expansion of the exponential, and let this term contain n_p factors A_{i_1 \cdots i_p}. These numbers satisfy

n_1 + n_2 + \cdots + n_k = n    (B7)
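The pairing rule B6 can be verified numerically. The sketch below (an illustration, not from the thesis; the matrix B is arbitrary test data) samples y from the Gaussian weight exp(-½ B_ij y_i y_j), whose covariance is B⁻¹, and compares a four-point average with the sum over the three pairings:

```python
import numpy as np

# Numerical check of B6: <y_i y_j y_k y_l> equals the sum over pairings
# of two-point averages <y_i y_j> = (B^{-1})_ij.
rng = np.random.default_rng(0)

B = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])        # symmetric, positive definite (test data)
Binv = np.linalg.inv(B)

# Sampling from exp(-1/2 B_ij y_i y_j) means covariance B^{-1}.
samples = rng.multivariate_normal(np.zeros(3), Binv, size=1_000_000)

i, j, k, l = 0, 1, 2, 1
mc = np.mean(samples[:, i] * samples[:, j] * samples[:, k] * samples[:, l])
wick = (Binv[i, j] * Binv[k, l]
        + Binv[i, k] * Binv[j, l]
        + Binv[i, l] * Binv[j, k])
print(abs(mc - wick))   # small Monte Carlo error
```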

The considered term has the numerical prefactor

(-1)^n \, \frac{1}{(1!)^{n_1} (2!)^{n_2} \cdots (k!)^{n_k}} \, \frac{1}{n_1! \, n_2! \cdots n_k!} \equiv (-1)^n \frac{1}{N}    (B8)

Using B6 such a term can be split into a finite sum of subterms. Each of these subterms can be represented by what we call an “ordered diagram.” In an ordered diagram, a tensor Ai1...im is represented by a dot having m shoots. The dots (tensors) are placed along a line. From the left, first the dots having one shoot are drawn, then the dots having two shoots, etc. (note that the factor associated with the order of the vertices has already been absorbed into B8). One shoot is pointed upwards, and the others are distributed uniformly over the circle. All subterms arising from one term have the same array of dots. The subterms are obtained by connecting

the shoots in all possible ways by lines representing ⟨y_i y_j⟩, using relation B6. See fig B.1 for an example of an ordered diagram. Many of the ordered diagrams thus obtained represent the same summation, for instance the two ordered diagrams shown in fig B.2. Ordered diagrams representing the same summation are said to be "equivalent," and we will represent all equivalent ordered diagrams by just one non-ordered diagram. Therefore two different non-ordered diagrams represent different summations. Two ordered diagrams are equivalent if one of them can be mapped onto the other by continuously deforming it, moving the dots and changing the direction of the shoots while the lines are dragged along. Therefore every non-ordered diagram (or "diagram," for short) gets as a prefactor not only the number given in equation B8, but also the number of ordered diagrams it represents.

[fig B.1: example of an ordered diagram; fig B.2: two equivalent ordered diagrams (figures not reproduced)]

[fig B.3: an ordered diagram with its standard numbering; fig B.4: two examples of a renumbering (figures not reproduced)]

Consider an ordered diagram D. Let E_D denote the set of ordered diagrams which are equivalent to D. In order to find the prefactor of the corresponding (non-ordered) diagram, we have to know the number of elements in E_D. To this end, number the shoots in D from left to right, per dot in a clockwise direction, such that for each dot the shoot pointing upwards gets the lowest number. This numbering will be called the standard numbering s_0. Fig B.3 gives an example of an ordered diagram with its standard numbering. Now consider the set S of all numberings s such that if two numbers are together in s_0 (i.e. attached to the same dot), then they are together in s (but possibly at a different dot). Fig B.4 gives two examples of a renumbering: the upper one is in S, the lower one is not. Henceforth, when we say "numbering," we refer to an element of S. The number of elements in S is just equal to N, as given by equation B8.


The lines present in the ordered diagram D define an equivalence relation on S, in the following way. Every line connects two shoots, and each shoot has a number. Therefore, to every numbering s there is associated a set of non-ordered pairs of numbers, such that the shoots associated with the numbers in one pair are connected by a line. For instance, with the diagram depicted in fig B.3 there is associated the set {{1,4}, {2,3}, {5,6}}. By definition, two numberings s_1 and s_2 are in the same equivalence class if they have the same set of pairs. This is illustrated in fig B.5: the upper two numberings belong to the same equivalence class, but the lower one belongs to a different class. [fig B.5: three numberings of a diagram (figure not reproduced)] All equivalence classes have the same number of elements. We claim that the number of ordered diagrams equivalent to D is equal to the number of such equivalence classes on the set of numberings. We prove this by constructing a bijective function from the set C of equivalence classes to the set E_D, as follows. Let c be an equivalence class in S, and let s_c be any numbering in this class. Draw the ordered diagram D and number the shoots according to s_c. Then deform the diagram continuously, by interchanging dots and rotating the shoots, until one obtains an ordered diagram D′ (by definition equivalent to D) having the standard numbering s_0. Since D′ is independent of the choice of s_c, this defines a mapping from C to E_D; moreover, this mapping is bijective. Let 1/N_E be the numerical prefactor for the (non-ordered) diagram E, apart from the sign (due to the factor (-1)^n present in B2, diagrams having an odd number of dots get a minus sign). This prefactor is a product of two factors. The first factor is the number of ordered diagrams representing E, which is equal to the number of equivalence classes in S. The second factor is the prefactor 1/N_D for any of the ordered diagrams D corresponding to E, where N_D has been given in equation B8. Since N_D is just the number of elements in S, we can write


N_E = \frac{\text{number of elements in } S}{\text{number of equivalence classes}} = \text{number of elements per equivalence class}    (B9)

Summarizing, N_E can be found in the following way. Draw the diagram E. Number the shoots. Then N_E is the number of renumberings such that:


• if two numbers are together (i.e. attached to the same dot) they remain together;
• if two numbers are connected by a line, they remain connected by a line.

N_E is often called the symmetry factor of the diagram E. This completes the calculation of Z (note that, apart from the diagrams, the term 1 should be included in the expansion of Z; take n = 0 in equation B2). Our next task is the calculation of F = -ln Z.
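The counting rule for N_E can be made concrete in a few lines of code. In the sketch below (illustrative only; the encoding of a diagram as dots, each a set of shoot labels, plus lines, each a pair of shoot labels, is my own choice, not the thesis's notation), the symmetry factor is found by brute-force enumeration of the renumberings satisfying the two bullet conditions:

```python
from itertools import permutations

def symmetry_factor(dots, lines):
    """Count renumberings that keep dot membership and line connections."""
    shoots = sorted(s for dot in dots for s in dot)
    dot_partition = {frozenset(d) for d in dots}
    line_set = {frozenset(l) for l in lines}
    count = 0
    for perm in permutations(shoots):
        relabel = dict(zip(shoots, perm))
        # numbers attached to the same dot must stay together
        if {frozenset(relabel[s] for s in d) for d in dots} != dot_partition:
            continue
        # numbers connected by a line must stay connected by a line
        if {frozenset(relabel[s] for s in l) for l in lines} != line_set:
            continue
        count += 1
    return count

# Two one-shoot dots joined by a single line: swapping the dots maps the
# diagram onto itself, so N_E = 2.
print(symmetry_factor([{1}, {2}], [{1, 2}]))  # 2

# Two three-shoot dots joined by three lines (the "theta" diagram):
# N_E = 2 * 3! = 12 (swap the dots, permute the three lines).
print(symmetry_factor([{1, 2, 3}, {4, 5, 6}], [{1, 4}, {2, 5}, {3, 6}]))  # 12
```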

The calculation of F

A diagram in Z is called connected if it is possible to go from any dot to any other dot via the lines. Let {A_1, A_2, A_3, ...} be the set of all connected diagrams, including the inverse symmetry factors. Any diagram in Z can be written as a product of connected diagrams. Consider a diagram E consisting of n connected parts. Of course, the same connected diagram may occur several times in E. Let n_i be the multiplicity of diagram A_i; then we have

\sum_i n_i = n    (B10)

As explained above, the prefactor for this diagram E is the inverse of its symmetry factor. The factors associated with the internal symmetry of the connected diagrams have already been absorbed into the definition of A_i. There is, however, an additional symmetry related to permutations of identical diagrams: for the connected diagram A_i with multiplicity n_i this gives rise to an extra factor n_i!. Therefore Z can be written as

Z = \sum_{n=0}^{\infty} \sum_{\substack{n_1, n_2, \ldots \\ \Sigma n_i = n}} \frac{A_1^{n_1}}{n_1!} \frac{A_2^{n_2}}{n_2!} \cdots
  = \sum_{n=0}^{\infty} \frac{1}{n!} \sum_{\substack{n_1, n_2, \ldots \\ \Sigma n_i = n}} \frac{n!}{n_1! \, n_2! \cdots} A_1^{n_1} A_2^{n_2} \cdots
  = \sum_{n=0}^{\infty} \frac{1}{n!} (A_1 + A_2 + \cdots)^n
  = \exp\!\left( \sum_i A_i \right)    (B11)

F = -\sum_i A_i

It follows from B11 that taking the logarithm has the effect of removing all disconnected diagrams. Therefore F is the sum of all connected diagrams, where diagrams with an even number of dots get a minus sign.
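A one-variable toy model (an illustration, not from the thesis) shows the linked-cluster result B11 at work. For Z(g) = ⟨exp(-g y⁴/4!)⟩ with a unit-variance Gaussian average, F = -ln Z should start as g/8: the only first-order connected diagram is the "figure eight" (one dot with four shoots, two self-lines), whose symmetry factor is 8:

```python
import numpy as np

# Toy check of B11: the O(g) coefficient of F = -ln Z equals the value
# of the single first-order connected diagram, 1/8.
y = np.linspace(-10.0, 10.0, 200_001)
w = np.exp(-y**2 / 2)           # Gaussian weight, unit variance

def F(g):
    # Z(g) = <exp(-g y^4 / 4!)>; the grid spacing cancels in the ratio.
    return -np.log(np.sum(w * np.exp(-g * y**4 / 24)) / np.sum(w))

g = 1e-4
print(F(g) / g)   # close to 1/8 = 0.125 for small g
```

The disconnected pieces of Z (products of figure eights) are removed by the logarithm, exactly as B11 states.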


Appendix C

In this appendix the inverse of the matrix G^{-1}, defined by C1, will be calculated:

G_{s\sigma}^{-1} = \frac{\partial^2 F_{dis}}{\partial \rho_s \, \partial \rho_\sigma} = \frac{\delta_{s\sigma}}{\rho_s} + K N_s N_\sigma - \frac{\chi}{2} \Delta_s \Delta_\sigma

N_s = N_s^A + N_s^B, \qquad \Delta_s = N_s^A - N_s^B, \qquad K \to \infty    (C1)

Equation C1 is the second-order coefficient of the expansion of the free energy F_dis in powers of the composition; see equation 4.3.8. It is possible to find a closed expression for G by inverting in two steps. The first step involves the definition of the matrix A:

A^{-1} = g^{-1} + C, \qquad g_{s\sigma} = \rho_s \delta_{s\sigma}, \qquad C_{s\sigma} = K N_s N_\sigma    (C2)

Due to the special form of C this matrix can easily be inverted. Using (gC)^n g = (K\langle N^2 \rangle)^{n-1} gCg for n ≥ 1,

A = (I + gC)^{-1} g = g + \sum_{n=1}^{\infty} (-gC)^n g = g - \frac{gCg}{1 + K\langle N^2 \rangle}

A_{s\sigma} = \rho_s \delta_{s\sigma} - \sum_{n=1}^{\infty} (-K\langle N^2 \rangle)^{n-1} K \rho_s N_s \rho_\sigma N_\sigma = \rho_s \delta_{s\sigma} - \frac{K \rho_s N_s \rho_\sigma N_\sigma}{1 + K\langle N^2 \rangle}    (C3)

The brackets denote an average over the composition of the system, as follows:

\langle X \rangle \equiv \sum_s \rho_s X_s    (C4)

If the incompressibility limit K → ∞ is taken, the expression C3 for A becomes

A_{s\sigma} = \rho_s \delta_{s\sigma} - \frac{\rho_s N_s \rho_\sigma N_\sigma}{\langle N^2 \rangle}    (C5)
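The rank-one inversion C3 and its incompressible limit C5 can be checked directly. In the sketch below (ρ_s and N_s are arbitrary test data, not from the thesis), the matrix δ_{sσ}/ρ_s + K N_s N_σ is inverted numerically and compared with the closed formula:

```python
import numpy as np

# Numerical check of C3 and its K -> infinity limit C5.
rng = np.random.default_rng(1)
rho = rng.uniform(0.5, 2.0, size=5)
N = rng.uniform(1.0, 3.0, size=5)
mean_N2 = np.sum(rho * N**2)                  # <N^2>, as defined in C4

def A_closed(K):
    outer = np.outer(rho * N, rho * N)
    return np.diag(rho) - K * outer / (1.0 + K * mean_N2)   # formula C3

K = 1e3
A_direct = np.linalg.inv(np.diag(1.0 / rho) + K * np.outer(N, N))
print(np.max(np.abs(A_direct - A_closed(K))))     # zero up to roundoff

A_inf = np.diag(rho) - np.outer(rho * N, rho * N) / mean_N2   # formula C5
print(np.max(np.abs(A_closed(1e12) - A_inf)))     # vanishes as K grows
```

This is the Sherman-Morrison identity for a rank-one update, which is exactly what the geometric series in C3 resums.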

The second step goes along exactly the same lines as the first one; just note that G is related to A via

G^{-1} = A^{-1} - D, \qquad D_{s\sigma} = \frac{\chi}{2} \Delta_s \Delta_\sigma    (C6)

After some algebra, analogous to the algebra leading to formula C3, one arrives at the equations

G = A + \frac{ADA}{1 - \frac{\chi}{2} \left( \langle \Delta^2 \rangle - \frac{\langle N\Delta \rangle^2}{\langle N^2 \rangle} \right)}

(ADA)_{s\sigma} = \frac{2\chi \, \rho_s n_s \rho_\sigma n_\sigma}{\langle N^2 \rangle^2}

n_s = N_s^\alpha z_\alpha, \qquad z_A = \langle N_A N_B \rangle + \langle N_B^2 \rangle, \qquad z_B = -\langle N_A^2 \rangle - \langle N_A N_B \rangle

\langle N^2 \rangle \left( \langle \Delta^2 \rangle - \frac{\langle N\Delta \rangle^2}{\langle N^2 \rangle} \right) = 4 \left( \langle N_A^2 \rangle \langle N_B^2 \rangle - \langle N_A N_B \rangle^2 \right)    (C7)

It is convenient to define a parameter τ, which vanishes at the spinodal for macrophase separation, by31

\tau = \frac{\langle N^2 \rangle}{\langle N_A^2 \rangle \langle N_B^2 \rangle - \langle N_A N_B \rangle^2} - 2\chi    (C8)

Since in the correlated random copolymer the spinodal for macrophase separation coincides with the spinodal for microphase separation (because the position q* of the minimum of the second-order vertex function is given by q* = 0), the parameter τ is related to the parameter t defined in equation 3.4.1 via the simple relation

t = -l\tau    (C9)

Rewriting the first equation in C7, one arrives at the final expression for G:


G_{s\sigma} = \rho_s \delta_{s\sigma} - \frac{\rho_s N_s \rho_\sigma N_\sigma}{\langle N^2 \rangle} + \frac{1}{\tau} \left( \frac{2\chi}{\langle N^2 \rangle} \right)^2 \rho_s n_s \rho_\sigma n_\sigma    (C10)

Appendix D

In this appendix equations will be derived for the smallest eigenvalue and the corresponding eigenvector of the matrix G^{-1} in the neighborhood of the spinodal τ = 0, which is determined by the condition that the lowest eigenvalue becomes zero. G^{-1} is defined by

G_{s\sigma}^{-1} = \frac{\delta_{s\sigma}}{\rho_s} + K N_s N_\sigma - \frac{\chi}{2} \Delta_s \Delta_\sigma, \qquad N_s = N_s^A + N_s^B, \qquad \Delta_s = N_s^A - N_s^B, \qquad K \to \infty    (D1)

To tackle the problem, use will be made of a perturbation theory developed by Lifshitz.75 Here this theory will be formulated for matrices of the general form

G_{s\sigma}^{-1} = g_{s\sigma}^{-1} - \sum_\alpha \xi^\alpha F_s^\alpha F_\sigma^\alpha    (D2)

Note that equation D1 can be cast into this form after the identification

g_{s\sigma} = \rho_s \delta_{s\sigma}

\xi^1 = -K \to -\infty, \qquad F_s^1 = N_s = N_s^A + N_s^B

\xi^2 = \frac{\chi}{2}, \qquad F_s^2 = \Delta_s = N_s^A - N_s^B    (D3)

The problem is to find expressions for the eigenvalues Λ_p and the eigenvectors E_s^p of G^{-1} if the eigenvalues λ_p and the eigenvectors e_s^p of g^{-1} are known. Assuming that the vectors E^p and e^p are normalized, equation D2 can be rewritten as

\sum_r \Lambda_r E_s^r E_\sigma^r = \sum_r \lambda_r e_s^r e_\sigma^r - \sum_\alpha \xi^\alpha F_s^\alpha F_\sigma^\alpha    (D4)

Multiplying both sides by e_s^p E_\sigma^q, summing over s and σ, and using the orthonormality of the eigenvectors leads, after a little rearranging, to

\sum_s e_s^p E_s^q = \sum_\alpha \frac{ \xi^\alpha \left( \sum_s F_s^\alpha e_s^p \right) \left( \sum_\sigma F_\sigma^\alpha E_\sigma^q \right) }{ \lambda_p - \Lambda_q }    (D5)

Multiplying by e_{s'}^p, summing over p, and relabelling shows that

E_s^q = \sum_\alpha \sum_{p,\sigma} \frac{ e_\sigma^p e_s^p F_\sigma^\alpha \varphi_\alpha^q }{ \lambda_p - \Lambda_q }, \qquad \varphi_\alpha^q \equiv \xi^\alpha \sum_s F_s^\alpha E_s^q    (D6)

Next multiply by F_s^\beta and sum over s:

\sum_\beta \left( \Phi_{\alpha\beta}(\Lambda_q) - (\xi^\alpha)^{-1} \delta_{\alpha\beta} \right) \varphi_\beta^q = 0, \qquad \Phi_{\alpha\beta}(\Lambda_q) = \sum_{p,s,\sigma} \frac{ F_s^\alpha e_s^p e_\sigma^p F_\sigma^\beta }{ \lambda_p - \Lambda_q }    (D7)

Therefore, the eigenvalues of G^{-1} can be found by solving the equation

\det\left( \Phi_{\alpha\beta}(\Lambda) - (\xi^\alpha)^{-1} \delta_{\alpha\beta} \right) = 0    (D8)
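The secular equation D8 can be illustrated numerically. In the sketch below (made-up test matrices, not data from the thesis; a finite negative ξ¹ stands in for -K), the smallest eigenvalue of G⁻¹ is computed directly and then shown to be a root of det(Φ(Λ) - diag(1/ξ)):

```python
import numpy as np

# Illustration of D8 for G^{-1} = g^{-1} - sum_a xi^a F^a (F^a)^T:
# every eigenvalue Lambda of G^{-1} is a root of det(Phi(Lambda) - diag(1/xi)),
# with Phi_ab(L) = sum_p (F^a . e^p)(F^b . e^p) / (lambda_p - L).
ginv = np.diag([1.0, 2.0, 3.0, 4.0])      # g^{-1}; its eigenvectors e^p are unit vectors
lam = np.diag(ginv)
e = np.eye(4)

xi = np.array([-2.0, 0.5])                # xi^1 < 0 mimics -K in D3
F = np.array([[1.0, 1.0, 1.0, 1.0],       # F^1, playing the role of N_s
              [1.0, -1.0, 1.0, -1.0]])    # F^2, playing the role of Delta_s

Ginv = ginv - sum(x * np.outer(f, f) for x, f in zip(xi, F))
Lambda_min = np.min(np.linalg.eigvalsh(Ginv))   # smallest eigenvalue, directly

def secular(L):
    proj = F @ e                                # matrix of (F^a . e^p)
    Phi = (proj / (lam - L)) @ proj.T
    return np.linalg.det(Phi - np.diag(1.0 / xi))

# secular() vanishes at the true eigenvalue, so it changes sign across it.
print(secular(Lambda_min - 1e-5) * secular(Lambda_min + 1e-5) < 0)  # True
```

The practical gain is the same as in the text: the secular determinant is only 2x2, however many species s there are.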

Having found in this way an eigenvalue of G −1 , equation D7 gives the expression for ϕα, and D6 gives the corresponding eigenvector. Now consider the special case where G −1 is given by D1. Assume that the χ-parameter is so close to its spinodal value that the smallest eigenvalue Λ satisfies the condition Λ
