Sphere Packing in a Pharmaceutical Context

Tracy Backes
May 5, 2008

1 Introduction

Pomona professor Ami Radunskaya and a group of eight colleagues are currently researching the percolation threshold of time-release medicine tablets. As part of their research, they have implemented a sphere-packing algorithm that models how several component powders compact during tablet formation. Their initial implementation of the two-dimensional case of this algorithm exhibited a persistent simulation error: in some cases, particles that were supposed to collide instead began to overlap. As the simulation progressed, these initial errors would compound until the two particles essentially occupied the same space. After receiving a copy of their code, I worked to understand the underlying concepts and to evaluate how the code functioned. I eventually located the source of the error and consulted with Radunskaya to develop several suggestions for handling these special cases of the simulation.

2 Background

Radunskaya et al. are modeling the percolation threshold of matrix tablets, a type of time-release medicine tablet. The tablets are made of three types of powders: a polymer, an excipient, and a drug. The powders are mixed, compressed into tablet form, and then heated. The excipient and drug exist in a brittle powder form that remains relatively unaffected by the compression and heating process. The polymer, however, deforms under compression and eventually melts during the heating process. As the polymer melts, it spreads out in a thin sheet, filling gaps in the tablet and enveloping the other powders. Upon cooling, the redistributed polymer forms a complex network (or "matrix") that is responsible for the time-release capabilities of the tablet.

The percolation threshold of the system describes the set of parameters necessary for creating an effective polymer network. When an effective network does form, the tablet is held together by a series of polymer channels, all of which are filled with a mixture of the excipient and drug powders (see Figure 1). After ingestion by the patient, the polymer remains intact and resists digestion. The excipient and drug powders, on the other hand, dissolve almost immediately. The diffusion of any powder located in the interior of the pill is slowed by the complex polymer channels, whereas any powder located close to the surface of the pill enters the patient's system rather quickly. The end result is a stream of drug released into the patient's body over an extended period of time.

Figure 1: A diagram of a fully formed matrix tablet. Compression and heating deform the polymer into a series of complex channels. These channels, which are filled with a mixture of the excipient and the drug powders, form the main structure of the tablet.

Currently, pharmaceutical manufacturers rely on experimental results both to evaluate how quickly tablets release their full dosage and to determine the proportions of the three components required to form appropriate networks. If researchers can develop an appropriate mathematical model of the system, pharmaceutical companies will be better able to design and manufacture these matrix tablets.

Radunskaya's model of the system involves four basic steps: modeling the tablet before compaction as a random sphere packing; modeling the final compaction of the tablet using a graph-theory framework; modeling the heating of the tablet; and predicting how long it will take the drug to diffuse to the outside of the matrix tablet [3]. My research pertains to the first stage of the model, early tablet compaction, and the numerical implementation of the random sphere-packing algorithm.

We assume that the three powders that form the matrix tablet are well mixed when compaction begins. Therefore, before the particles actually begin to deform from compaction, we can approximate how they arrange themselves as a random sphere packing, using an algorithm outlined by Lubachevsky and Stillinger [2]. In particular, Radunskaya et al. followed the implementation of the Lubachevsky-Stillinger algorithm described in Knott's model of fuel-propellant packing [1]. In the Lubachevsky-Stillinger algorithm, instead of modeling the initial compaction process in terms of shrinking boundary conditions, the boundary of the system remains fixed and the spheres themselves grow at a uniform rate (see Figure 2). Knott's modifications to the algorithm allow for particles of different aspect ratios and masses [1]. We begin with a unit cube filled with N randomly located spheres, each with a random initial velocity.
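As a concrete illustration, the initial state described above, N spheres at random positions in the unit cube with random velocities and radii that grow from zero, might be set up as follows. This is a minimal sketch; the use of NumPy, the function name, and the array layout are my assumptions and not taken from the original code:

```python
import numpy as np

def initialize_packing(n_spheres, dim=3, growth_rate=0.1, seed=0):
    """Random initial state for a Lubachevsky-Stillinger-style packing:
    centers uniform in the unit cube, random constant velocities, and
    radii that grow as a_i * t from an initial value of zero."""
    rng = np.random.default_rng(seed)
    positions = rng.uniform(0.0, 1.0, size=(n_spheres, dim))    # centers r_i
    velocities = rng.uniform(-1.0, 1.0, size=(n_spheres, dim))  # velocities u_i
    growth_rates = np.full(n_spheres, growth_rate)              # expansion rates a_i
    masses = np.ones(n_spheres)                                 # constant masses m_i
    return positions, velocities, growth_rates, masses
```

Setting `dim=2` reproduces the two-dimensional (circle) case discussed later in this report.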
Throughout the algorithm, each sphere i is characterized by several parameters, as shown in Figure 3: $\vec{r}_i$, the coordinates of its center; $\vec{u}_i$, the constant velocity at which the sphere travels; $a_i$, the rate at which the sphere expands; and $m_i$, the constant mass of the sphere. As spheres move within the cube, it is inevitable that collisions will

Figure 2: The Lubachevsky-Stillinger algorithm fixes the boundary conditions of the system and grows the radius of each sphere at a constant rate. The end result is a valid approximation of a sphere packing with a shrinking boundary. The three panels of this figure illustrate the main concept of the algorithm in two dimensions.


Figure 3: Each sphere i is characterized by several parameters: $\vec{r}_i$, the coordinates of its center; $\vec{u}_i$, the constant velocity at which the sphere travels; $a_i$, the rate at which the sphere expands; and $m_i$, the constant mass of the sphere. (This figure is shown in two dimensions, but is readily extended to three dimensions.)

occur. To monitor these collisions, the algorithm tracks the separation distance between each pair of spheres. A collision occurs when the distance between the centers of two spheres equals the sum of their instantaneous radii; in other words, when

$$\left|\vec{r}_i + \vec{u}_i\,\Delta t - \vec{r}_j - \vec{u}_j\,\Delta t\right| = (a_i + a_j)(t + \Delta t) \tag{1}$$

where the spheres i and j were located at $\vec{r}_i$ and $\vec{r}_j$ at time t, and $\Delta t$ time has elapsed since then. If we square both sides of Eq. (1), the result is quadratic in $\Delta t$. Solving for $\Delta t$, the two roots of the equation tell us the exact time of any collisions between spheres i and j: a negative root indicates a past collision, a positive root indicates a future collision, and complex roots indicate that, under current conditions, the two spheres will never collide. A similar equation can be derived for collisions with the boundaries of the cube.

The smallest positive root of Eq. (1) represents the earliest possible collision time for any two spheres; in the case where there are no positive roots, we use the convention that the earliest collision will take place at infinite time. Accordingly, we build a matrix of next-collision times, where the (i, j)-th entry is the time until the next collision between spheres i and j. To determine the next collision, we find the minimum entry in the matrix. Because the spheres travel linearly at constant velocity, once the next collision has been identified, we can jump ahead to this point in the simulation.

Our task is now to model the collision, update the velocity vectors of the two involved spheres, and update the matrix of next-collision times. As seen in Figure 4, the velocities of the two particles after collision are determined by the incoming velocities $\vec{u}_i$ and $\vec{u}_j$, the growth rates $a_i$ and $a_j$, and the masses $m_i$ and $m_j$ of the two spheres. We break each incoming velocity into two components: the velocity parallel to the centerline, $\vec{u}^{(p)}$, and the velocity transverse to the centerline, $\vec{u}^{(t)}$. These velocities are given by

$$\vec{u}_i^{\,(p)} = \frac{m_i}{m_j}\,\mathrm{Proj}_{\vec{u}_i}\{\vec{r}_i - \vec{r}_j\}; \qquad \vec{u}_j^{\,(p)} = \frac{m_j}{m_i}\,\mathrm{Proj}_{\vec{u}_j}\{\vec{r}_i - \vec{r}_j\} \tag{2}$$

$$\vec{u}_i^{\,(t)} = \vec{u}_i - \vec{u}_i^{\,(p)}; \qquad \vec{u}_j^{\,(t)} = \vec{u}_j - \vec{u}_j^{\,(p)} \tag{3}$$

After the collision, the exit velocities $\vec{v}_i$ and $\vec{v}_j$ are calculated according to

$$\vec{v}_i = \vec{u}_j^{\,(p)} + h\,\frac{\vec{r}_i - \vec{r}_j}{\left|\vec{r}_i - \vec{r}_j\right|} + \vec{u}_i^{\,(t)}; \qquad \vec{v}_j = \vec{u}_i^{\,(p)} + h\,\frac{\vec{r}_j - \vec{r}_i}{\left|\vec{r}_j - \vec{r}_i\right|} + \vec{u}_j^{\,(t)} \tag{4}$$

where h adds extra rebound to the system. This extra rebound is proportional to the growth rates of the radii and depends on the ratio of the masses of the two particles:

$$h = 2\,\frac{a_i + a_j}{1 + m_i/m_j} \tag{5}$$

where 2 is an arbitrary scaling factor that Radunskaya et al. found to work well in simulations. The factor h ensures that particles actually separate after a collision despite their growing radii. The extra rebound also encourages tight packing of the spheres, a behavior typical of mechanical systems like the one we are modeling [1]. Although the above calculations conserve momentum, the extra rebound due to h adds energy to the system after each collision.

After the velocities are updated, all entries corresponding to i and j in the matrix of next-collision times are updated accordingly. At this point we find the earliest future collision and the process iterates. The algorithm stops when the packing fraction of the spheres no longer changes significantly after each new collision; in other words, the algorithm stops when the system has reached a fairly stable configuration. It is then possible to analyze the distribution of spheres and to move on to subsequent steps of the model.
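The collision response of Eqs. (2)-(5) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name is mine, and I interpret Proj as the component of each velocity along the centerline $\vec{r}_i - \vec{r}_j$, which is an assumption about the intended notation:

```python
import numpy as np

def collision_response(ri, rj, ui, uj, ai, aj, mi, mj):
    """Post-collision velocities following Eqs. (2)-(5): exchange the
    mass-weighted parallel components, keep the transverse components,
    and add the rebound term h along the centerline."""
    n = (ri - rj) / np.linalg.norm(ri - rj)   # unit vector along the centerline
    # Eq. (2): mass-weighted parallel components (Proj interpreted as the
    # component of the velocity along the centerline -- an assumption)
    ui_p = (mi / mj) * np.dot(ui, n) * n
    uj_p = (mj / mi) * np.dot(uj, n) * n
    # Eq. (3): transverse components
    ui_t = ui - ui_p
    uj_t = uj - uj_p
    # Eq. (5): extra rebound proportional to the combined growth rate
    h = 2.0 * (ai + aj) / (1.0 + mi / mj)
    # Eq. (4): swap parallel components and push the pair apart along n
    vi = uj_p + h * n + ui_t
    vj = ui_p - h * n + uj_t
    return vi, vj
```

For equal masses, the h terms cancel in the momentum sum, so total momentum is unchanged while the pair gains outward relative speed.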

3 Discussion

My first task was to determine why circles were able to overlap in the existing implementation of the algorithm. It was initially thought that a numerical error was involved in the process: perhaps two collisions happened so close together that Matlab could not differentiate between


Figure 4: Updating circle velocities upon collision. (Left) Pre-collision. (Right) Post-collision. (This figure is shown in two dimensions, but is readily extended to three dimensions.)

them. In that case, the code would correctly handle only one of the collisions; the other collision would go undetected, allowing the two circles to overlap. Based on that assumption, I started to characterize when overlaps occurred and whether two practically simultaneous collisions were involved.

In order to understand the overlaps, I added a condition that would stop the simulation if two circles overlapped by more than a certain threshold. Because Matlab cannot recalculate velocities at the exact instant of collision, there were many small instantaneous overlaps; I was only interested in overlaps that persisted through several time steps. Curiously, when I began analyzing several simulations, I noticed that the two overlapping circles always had collision times that were distinct from any other recorded sphere-sphere collisions (in the case of overlapping circles, I simply looked at the negative roots of the collision equation to determine when the last collision was supposed to have occurred). There did not, therefore, seem to be any error associated with two simultaneous collisions.

Instead, I noticed an unexpected trend: the algorithm had calculated collisions for all of the overlapping circles in the recent past. After more detective work, I realized that the overlap began immediately following the last collision of the two circles in question. In these cases, the two circles approached each other almost tangentially, causing the post-collision velocities to be almost identical to the pre-collision velocities. The circles therefore continued on overlapping trajectories. Because both roots of the collision equation were then negative, the algorithm had no way of detecting and correcting this computational error, and the overlap increased over time.

One reason this discrepancy was able to happen is that the radii of the two colliding circles grow with time. With constant radii, the post-collision velocities would not be problematic. Because of the growing radii, however, in these special cases the post-collision velocities are not fast enough to overcome the expanding radii of the circles. Therefore, in order to fix this error it is necessary to add an additional boost to the velocity of any circles involved in these problematic collisions.


4 Results and Conclusions

After isolating the conditions under which circles overlap, the next step was to find a solution to the error. Any such solution should satisfy several conditions. First, it should be easily extensible to the three-dimensional case. Second, it should be a principled fix rather than a quick patch to the algorithm. Lastly, it should not unduly affect the vast majority of normal collisions occurring during the simulation.

There is already a "boost" factor embedded in the code that prevents circles from staying in contact after a collision despite their rate of expansion: Radunskaya et al. include the extra factor of 2 in their calculation of h in Eq. (5). It may seem reasonable to increase this "boost" factor to prevent circle overlap. However, as h increases in magnitude, more energy is added to the system at each collision. This increased energy is present for all collisions, and it soon leads to unrealistically fast circle movement and increased jostling. Furthermore, increasing h only reduces the probability that circles will overlap; there is still some chance that the post-collision velocities will be too close to the original velocities, even for large values of h.

In fact, we need to be careful about implementing any blanket solution. In the original code, the majority of collisions were handled correctly and realistically; altering the main body of the algorithm would affect the whole simulation, which may have unintended consequences. It may therefore be easier to formulate a reasonable solution by considering only the problem collisions. Each collision can be screened to see whether it has the potential to cause problems. If the outgoing velocities are problematic, we simply correct them and let the simulation continue normally.
Initially, Professor Radunskaya and I planned to check whether the post-collision velocities parallel to the centerline were faster than the combined growth rates of the radii, according to

$$\left|\mathrm{Proj}_{\vec{v}_i}\{\vec{r}_i - \vec{r}_j\} - \mathrm{Proj}_{\vec{v}_j}\{\vec{r}_i - \vec{r}_j\}\right| < a_i + a_j \tag{6}$$

where $\vec{v}_i$ and $\vec{v}_j$ are the outgoing velocities of circles i and j. However, I encountered difficulties when implementing this screening tool: every circle-circle collision satisfied Eq. (6), which meant that it could not isolate the problematic collisions. Due to time constraints, I was unable to determine why every collision satisfied this screening criterion (a result which seems counterintuitive).

After additional sleuthing, I found a different way to attack the problem. Instead of directly calculating the distance between two circles, I realized that we already had the tools to detect overlapping circles. Eq. (1) allows us to calculate the collision times between circles i and j. As defined earlier, positive roots correspond to future collisions and negative roots to past collisions. After a collision, both roots should be negative, since the circles are traveling away from the collision. If there is instead one negative root and one positive root, then the two circles are actually overlapping. Therefore, after updating the post-collision velocities, we can recalculate the collision times and determine whether the new velocities result in any overlap. If so, the post-collision velocities must be readjusted before the algorithm can be allowed to continue. To adjust the velocities, we simply add an additional boost to each velocity along

the direction of the centerline:

$$\vec{v}_i^{\,(\mathrm{new})} = \vec{v}_i + a_i\,\frac{\vec{r}_i - \vec{r}_j}{\left|\vec{r}_i - \vec{r}_j\right|}; \qquad \vec{v}_j^{\,(\mathrm{new})} = \vec{v}_j + a_j\,\frac{\vec{r}_j - \vec{r}_i}{\left|\vec{r}_j - \vec{r}_i\right|} \tag{7}$$

This additional boost is enough of an adjustment to avoid any overlap between circles i and j.

There was one additional change I had to implement before the algorithm produced overlap-free simulations. In both [1] and [2], the authors mention the need to occasionally normalize all of the velocities in the system to account for the extra energy added with each "boost" h. However, the normalization process sometimes causes the last pair of colliding circles to overlap, due to the newly reduced post-collision velocities. Removing the normalization step entirely results in overlap-free simulations. Unfortunately, due to time constraints, I was unable to investigate whether a modified normalization process could be kept in the algorithm while still avoiding circle overlap.

By screening collisions for possible overlap and removing the normalization step, I was able to prevent any circles from overlapping within each simulation. This solution meets the criteria that Radunskaya and I selected, although it is not yet clear how well it extends to three dimensions. Hopefully Radunskaya and her colleagues will be able to use these insights to improve the functionality of their model.
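The screening and correction described above can be sketched as follows: solve the squared form of Eq. (1) for $\Delta t$, flag a pair as overlapping when the two roots have opposite signs, and apply the boost of Eq. (7). This is a minimal two-dimensional sketch; the function names are mine, and it assumes a strictly quadratic equation (relative speed not exactly equal to the combined growth rate):

```python
import numpy as np

def collision_roots(ri, rj, ui, uj, ai, aj, t):
    """Roots of the squared form of Eq. (1), a quadratic in dt:
    |dr + du*dt|^2 = (ai + aj)^2 (t + dt)^2, with dr = ri - rj, du = ui - uj."""
    dr, du, s = ri - rj, ui - uj, ai + aj
    A = np.dot(du, du) - s**2
    B = 2.0 * (np.dot(dr, du) - s**2 * t)
    C = np.dot(dr, dr) - s**2 * t**2
    disc = B**2 - 4.0 * A * C
    if disc < 0.0:
        return None                      # complex roots: the pair never collides
    sq = np.sqrt(disc)
    return sorted([(-B - sq) / (2.0 * A), (-B + sq) / (2.0 * A)])

def is_overlapping(ri, rj, ui, uj, ai, aj, t):
    """One negative and one positive root means the circles currently overlap."""
    roots = collision_roots(ri, rj, ui, uj, ai, aj, t)
    return roots is not None and roots[0] < 0.0 < roots[1]

def boost_velocities(ri, rj, vi, vj, ai, aj):
    """Eq. (7): push each circle apart along the centerline at its growth rate."""
    n = (ri - rj) / np.linalg.norm(ri - rj)
    return vi + ai * n, vj - aj * n
```

In the full algorithm, `is_overlapping` would be called on a pair right after its post-collision velocities are computed, and `boost_velocities` applied until the overlap check clears.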

5 Future Work

Although I have isolated the problem and formulated a possible solution for the case of overlapping circles, additional work remains. Most importantly, it remains to be seen whether the solution adequately prevents spheres from overlapping in the three-dimensional case. Extending the algorithm to three dimensions will also require reevaluating the magnitude of h and how large a boost factor to include; the current values may not provide the correct energy balance for an additional dimension. Furthermore, the implementation of my current solution can be improved upon to increase the efficiency of each simulation.

References

[1] G. M. Knott, T. L. Jackson, and J. Buckmaster. Random packing of heterogeneous propellants. AIAA Journal, 39(4):678-686, 2001.

[2] B. D. Lubachevsky and F. H. Stillinger. Geometric properties of random disk packings. Journal of Statistical Physics, 60(5):561-583, 1990.

[3] A. Radunskaya, M. Arafat, B. Baeumer, C. duBois, B. Evans, P. Hinow, T. Rades, E. Spiro, and I. Tucker. Predicting the percolation threshold in tablets. Workshop on the Application of Mathematics to BioMedical Problems, 2007.
