Title:	Dynamic production scheduling in virtual cellular manufacturing systems

Author(s):	Ma, Jun (马俊)

Citation:	Ma, J. [马俊]. (2012). Dynamic production scheduling in virtual cellular manufacturing systems. (Thesis). University of Hong Kong, Pokfulam, Hong Kong SAR. Retrieved from http://dx.doi.org/10.5353/th_b5016256

Issued Date:	2012

URL:	http://hdl.handle.net/10722/193066

Rights:	The author retains all proprietary rights (such as patent rights) and the right to use in future works.


Dynamic Production Scheduling in Virtual Cellular Manufacturing Systems

by Ma Jun 马俊

Ph.D. Thesis

A thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at the University of Hong Kong

August 2012

Abstract of thesis entitled

Dynamic Production Scheduling in Virtual Cellular Manufacturing Systems

submitted by Ma Jun for the degree of Doctor of Philosophy at the University of Hong Kong in August 2012

Manufacturing companies must constantly improve productivity in response to dynamic changes in customer demand in order to maintain their competitiveness and market shares. This requires manufacturers to adopt more efficient methodologies to design and control their manufacturing systems. In recent decades, virtual cellular manufacturing (VCM), as an advanced manufacturing concept, has attracted increasing attention in the research community, because traditional cellular manufacturing is inadequate when operating in a highly dynamic manufacturing environment. Virtual cellular manufacturing temporarily and dynamically groups production resources to form virtual cells according to production requirements, thus achieving high production efficiency and flexibility simultaneously.

The objective of this research is to develop cost-effective methodologies for manufacturing cell formation and production scheduling in virtual cellular manufacturing systems (VCMSs) operating in single-period, multi-period, and dynamic manufacturing environments.

In this research, two mathematical models are developed to describe the characteristics of VCMSs operating under a single-period and a multi-period manufacturing environment, respectively. These models aim to develop production schedules that minimize the total manufacturing cost incurred in manufacturing products over the entire planning horizon, taking into consideration many practical constraints such as workforce requirements, effective capacities of production resources, and delivery due dates of orders. In the multi-period case, worker training is also considered, and factors affecting worker training are analyzed in detail.

This research also develops a novel hybrid algorithm to solve complex production scheduling problems optimally for VCMSs. The hybrid algorithm is based on the techniques of discrete particle swarm optimization, ant colony systems and constraint programming. Its framework is discrete particle swarm optimization, which can locate good production schedules quickly. To prevent the optimization process from being trapped in a local optimum, concepts of the ant colony system and constraint programming are incorporated into the framework to greatly enhance the exploration and exploitation of the solution space, thus ensuring production schedules of better quality. Detailed sensitivity analyses of the key parameters of the hybrid algorithm are also conducted, providing a theoretical foundation which shows that the developed hybrid algorithm is indeed an excellent optimization tool for production scheduling in VCMSs.

In practice, the occurrence of unpredictable events such as machine breakdowns, changes in the status of orders, and worker absenteeism can make the current production schedule infeasible. A new feasible production schedule may therefore need to be generated rapidly to ensure smooth manufacturing operations. This research develops several cost-effective production rescheduling strategies for VCMSs operating under different dynamic manufacturing environments. These strategies facilitate the determination of when and how to take rescheduling actions. To further enhance the performance of such strategies in generating new production schedules, especially for large-scale manufacturing systems, a parallel approach is established to implement the developed hybrid algorithm on a GPU with the Compute Unified Device Architecture (CUDA).

The convergence characteristics of the proposed hybrid algorithm are also studied theoretically using probability theory and a Markov chain model. The analysis shows that the optimization process will eventually converge to the global optimal solution. (486 words)
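The hybrid algorithm outlined in this abstract combines discrete particle swarm optimization (DPSO), an ant colony system (ACS) and constraint programming (CP). As a rough illustration only, the following Python sketch shows how such a hybrid loop can be structured on a toy single-machine sequencing problem: a DPSO-style swarm of job sequences, a pheromone matrix providing an ACS-style bias, and a precedence feasibility check standing in for CP propagation. All names, data and parameter values here are invented for illustration and do not reproduce the algorithm developed in the thesis.

```python
import random

def schedule_cost(seq, proc, weights):
    """Total weighted completion time of a job sequence on one machine."""
    t = cost = 0
    for j in seq:
        t += proc[j]
        cost += weights[j] * t
    return cost

def feasible(seq, precedence):
    """Feasibility check standing in for CP propagation:
    each pair (a, b) requires job a to appear before job b."""
    pos = {j: i for i, j in enumerate(seq)}
    return all(pos[a] < pos[b] for a, b in precedence)

def hybrid_dpso(proc, weights, precedence, n_particles=8, iters=300, seed=7):
    """DPSO framework with an ACS-style pheromone bias and CP-style pruning."""
    rng = random.Random(seed)
    n = len(proc)
    tau = [[1.0] * n for _ in range(n)]  # tau[j][i]: pheromone for job j at position i

    def random_feasible():
        while True:
            s = list(range(n))
            rng.shuffle(s)
            if feasible(s, precedence):
                return s

    swarm = [random_feasible() for _ in range(n_particles)]
    pbest = [s[:] for s in swarm]
    gbest = min(swarm, key=lambda s: schedule_cost(s, proc, weights))[:]

    for _ in range(iters):
        for k in range(n_particles):
            cand = swarm[k][:]
            # "velocity" step: pull one position toward the personal or global best
            guide = pbest[k] if rng.random() < 0.5 else gbest
            i = rng.randrange(n)
            j = cand.index(guide[i])
            cand[i], cand[j] = cand[j], cand[i]
            # ACS-style exploration: occasionally favour high-pheromone placements
            if rng.random() < 0.3:
                a, b = rng.sample(range(n), 2)
                if tau[cand[b]][a] > tau[cand[a]][a]:
                    cand[a], cand[b] = cand[b], cand[a]
            if not feasible(cand, precedence):  # CP-style pruning of infeasible moves
                continue
            swarm[k] = cand
            if schedule_cost(cand, proc, weights) < schedule_cost(pbest[k], proc, weights):
                pbest[k] = cand[:]
                if schedule_cost(cand, proc, weights) < schedule_cost(gbest, proc, weights):
                    gbest = cand[:]
        # ACS-style global update: evaporate, then deposit along the best schedule
        for i, j in enumerate(gbest):
            for row in range(n):
                tau[row][i] *= 0.9
            tau[j][i] += 1.0

    return gbest, schedule_cost(gbest, proc, weights)

# Invented example data: five jobs, one precedence constraint (job 0 before job 2).
proc = [4, 2, 6, 3, 1]
weights = [1, 3, 1, 2, 2]
precedence = [(0, 2)]
best, cost = hybrid_dpso(proc, weights, precedence)
```

With a fixed seed the run is deterministic; the returned sequence is always a feasible permutation, since the global best is only ever replaced by cheaper schedules that pass the feasibility check.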

   

DECLARATION

I declare that this thesis represents my own work, except where due acknowledgement is made, and that it has not been previously included in a thesis, dissertation or report submitted to this University or to any other institution for a degree, diploma or other qualification.

___________________
Ma Jun
August 2012


ACKNOWLEDGEMENTS

Foremost, I would like to express my sincere appreciation to my supervisor, Prof. K.L. Mak, for his dedicated guidance, invaluable advice and long-term encouragement throughout my doctoral study. His enthusiasm and supervision helped me greatly at every stage of my research. I could not have imagined having a better mentor for my PhD study. I am grateful to my fellow colleagues for their generous assistance in the preparation of this thesis; the valuable discussions with them provided helpful advice for my research. I am also thankful to the staff members of the general office in the Department of Industrial and Manufacturing Systems Engineering for their selfless help over the past four years. Last but not least, I would like to thank my parents, sister and girlfriend. Without their lasting support and understanding, I would never have completed this thesis. It was their sharing of my stress and frustration that sustained my research work. Whatever achievement there is in completing a doctoral degree, I am proud to share it with them.


TABLE OF CONTENTS  

DECLARATION ..... i
ACKNOWLEDGEMENTS ..... ii
TABLE OF CONTENTS ..... iii
LIST OF FIGURES ..... viii
LIST OF TABLES ..... x
NOTATION ..... xiii

CHAPTER 1 INTRODUCTION
1.1 Virtual Cellular Manufacturing Systems (VCMSs) ..... 1‐1
1.1.1 The virtual cellular manufacturing concept ..... 1‐2
1.1.2 Virtual cellular manufacturing versus cellular manufacturing ..... 1‐8
1.2 Production Scheduling in VCMSs ..... 1‐10
1.3 Combinatorial Optimization Problems and Algorithms for Production Scheduling Problems ..... 1‐14
1.4 Research Objectives ..... 1‐15
1.5 Outline of the Thesis ..... 1‐16
1.6 Summary of Thesis Contributions ..... 1‐17

CHAPTER 2 LITERATURE REVIEW
2.1 Introduction ..... 2‐1
2.2 Cellular Manufacturing Systems ..... 2‐1
2.2.1 Descriptive procedures ..... 2‐2
2.2.2 Product flow analysis ..... 2‐3
2.2.3 Graph partitioning approaches ..... 2‐5
2.2.4 Similarity coefficient based approaches ..... 2‐6
2.2.5 Mathematical programming approaches ..... 2‐7
2.2.6 Artificial intelligence approaches ..... 2‐8
2.2.7 Other heuristic approaches ..... 2‐10

2.3 Virtual Cellular Manufacturing Systems ..... 2‐11
2.3.1 The evolution to virtual cellular manufacturing systems ..... 2‐11
2.3.2 Human issues in cellular manufacturing environments ..... 2‐16
2.4 Optimization Approaches ..... 2‐19
2.4.1 Particle swarm optimization ..... 2‐19
2.4.2 Ant colony optimization ..... 2‐30
2.4.3 Constraint programming ..... 2‐38
2.5 Dynamic Production Scheduling ..... 2‐44
2.5.1 Knowledge of uncertainties ..... 2‐44
2.5.2 Dynamic scheduling approaches ..... 2‐45
2.5.3 When and how to reschedule ..... 2‐47
2.5.4 Dynamic production scheduling techniques ..... 2‐49
2.6 Chapter Summary ..... 2‐53

CHAPTER 3 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER A SINGLE-PERIOD MANUFACTURING ENVIRONMENT
3.1 Introduction ..... 3‐1
3.2 The Production Scheduling Framework ..... 3‐3
3.3 Mathematical Modeling for Production Scheduling in VCMSs under a Single-Period Manufacturing Environment ..... 3‐5
3.3.1 Assumptions ..... 3‐5
3.3.2 Notations ..... 3‐6
3.3.3 Mathematical model ..... 3‐8
3.4 Illustrative Example ..... 3‐12
3.4.1 Manufacturing system configuration ..... 3‐12
3.4.2 Requirement of no work-in-process inventory between workstations ..... 3‐14
3.4.3 Production schedule of all of the jobs ..... 3‐15
3.5 Solution Algorithms ..... 3‐19
3.5.1 Discrete particle swarm optimization ..... 3‐19
3.5.2 Constraint programming ..... 3‐25
3.5.3 Hybridization of DPSO and CP (CPSO) ..... 3‐27

3.5.4 Hybridization of DPSO, CP and ACS (ACPSO) ..... 3‐33
3.5.5 ACPSO performance and sensitivity analyses ..... 3‐37
3.6 Chapter Summary ..... 3‐42

CHAPTER 4 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER A MULTI-PERIOD MANUFACTURING ENVIRONMENT
4.1 Introduction ..... 4‐1
4.2 Mathematical Modeling of Production Scheduling in VCMSs under a Multi-Period Manufacturing Environment ..... 4‐2
4.2.1 Assumptions ..... 4‐2
4.2.2 Notations ..... 4‐3
4.2.3 Mathematical model ..... 4‐5
4.3 Illustrative Example ..... 4‐10
4.3.1 Manufacturing system configuration ..... 4‐10
4.3.2 Job production information ..... 4‐11
4.3.3 Complete job production schedule ..... 4‐14
4.4 Solution Algorithms ..... 4‐17
4.4.1 Discrete particle swarm optimization ..... 4‐18
4.4.2 Hybridization of DPSO, CP and ACS (ACPSO) ..... 4‐24
4.5 Computational Experiments and Results ..... 4‐29
4.5.1 Manufacturing system configuration ..... 4‐29
4.5.2 Analysis of worker training level ..... 4‐32
4.5.3 Comparison of ACPSO and CPSO performances ..... 4‐38
4.6 Chapter Summary ..... 4‐42

CHAPTER 5 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER DYNAMIC MANUFACTURING ENVIRONMENTS
5.1 Introduction ..... 5‐1
5.2 Dynamic Production Scheduling in VCMSs with Random Machine Breakdowns and Worker Absenteeism ..... 5‐2
5.2.1 VCMS characteristics ..... 5‐3
5.2.2 Right-shift policy for VCMSs ..... 5‐4
5.2.3 The proposed VCMS rescheduling policy ..... 5‐8
5.2.4 Methodology of generating predictive schedules ..... 5‐12
5.2.5 Computational experiments and results ..... 5‐13
5.3 Production Scheduling in VCMSs under a Rolling Horizon Environment ..... 5‐36
5.3.1 Mathematical model ..... 5‐36
5.3.2 Rescheduling policies ..... 5‐37
5.3.3 Comparison of rescheduling policies ..... 5‐38
5.4 Production Scheduling in VCMSs under a Comprehensive Dynamic Manufacturing Environment ..... 5‐44
5.5 Chapter Summary ..... 5‐51

CHAPTER 6 PARALLEL IMPLEMENTATION OF ACPSO ON GPU WITH CUDA
6.1 Introduction ..... 6‐1
6.2 The CUDA Architecture ..... 6‐2
6.3 Parallel Implementation of ACPSO on GPU with CUDA ..... 6‐7
6.3.1 Global memory data organization ..... 6‐9
6.3.2 Initialization stage ..... 6‐10
6.3.3 Iteration stage ..... 6‐12
6.3.4 Generating pseudo-random numbers ..... 6‐13
6.4 Computational Experiments and Results ..... 6‐14
6.4.1 Generation of test problems ..... 6‐14
6.4.2 Computational results ..... 6‐16
6.5 Chapter Summary ..... 6‐25

CHAPTER 7 THE THEORETICAL ANALYSIS OF CONVERGENCE PROPERTIES
7.1 Introduction ..... 7‐1
7.2 Convergence Properties of ACPSO ..... 7‐2
7.2.1 Convergence properties of DPSO ..... 7‐2
7.2.2 Convergence properties of ACS ..... 7‐9
7.2.3 Convergence properties of ACPSO ..... 7‐14
7.3 Markovian Properties of ACPSO ..... 7‐16
7.4 Chapter Summary ..... 7‐21

CHAPTER 8 CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE WORK  

REFERENCES......................................................................................................................... R‐1   


LIST OF FIGURES
Figure 1-1. The layout of a job shop. ..... 1‐3
Figure 1-2. The layout of a flow line. ..... 1‐4
Figure 1-3. Positioning of various manufacturing layouts. ..... 1‐5
Figure 1-4. The layout of a cellular manufacturing system. ..... 1‐6
Figure 1-5. The layout of a virtual cellular manufacturing system. ..... 1‐7
Figure 2-1. The particle swarm optimization procedure. ..... 2‐20
Figure 2-2. Concept of modification of a search point in PSO. ..... 2‐23
Figure 2-3. The ant colony optimization procedure. ..... 2‐33
Figure 2-4. A possible solution to the 8-queens problem. ..... 2‐39
Figure 3-1. Schematic diagram of the production scheduling framework. ..... 3‐3
Figure 3-2. The production schedule of the workstations. ..... 3‐16
Figure 3-3. The production schedule of the workers. ..... 3‐17
Figure 3-4. The heuristic for determining the production outputs of a job. ..... 3‐25
Figure 3-5. The procedure of constraint programming with backtracking propagation. ..... 3‐26
Figure 3-6. The CPSO hybrid algorithm procedure. ..... 3‐28
Figure 3-7. The procedure for detecting the critical production resource. ..... 3‐30
Figure 3-8. Performance of locating good solutions with the same number of iterations. ..... 3‐38
Figure 3-9. The effects of particle size on solution quality. ..... 3‐40
Figure 3-10. The effects of the heuristic value on search performance. ..... 3‐40
Figure 3-11. The effects of theta value on search performance. ..... 3‐41
Figure 3-12. The effects of pheromone evaporation rate on the search process. ..... 3‐42
Figure 4-1. A sample for illustrating the extra capacity concept. ..... 4‐25
Figure 4-2. The ACPSO procedure for production scheduling of multi-period VCMSs. ..... 4‐27
Figure 4-3. The effect of training cost on training level. ..... 4‐34
Figure 4-4. Worker training level under different inventory-holding costs. ..... 4‐37
Figure 4-5. Comparison of manufacturing cost after the same number of iterations. ..... 4‐40
Figure 5-1. The revised right-shift policy procedure for VCMSs. ..... 5‐6
Figure 5-2. The heuristic for determining production outputs of remaining tasks. ..... 5‐7
Figure 5-3. The procedure of the proposed rescheduling policy. ..... 5‐11
Figure 5-4. Absorption rates under different disruption levels. ..... 5‐23
Figure 5-5. The effect of critical cumulative task delay on rescheduling frequency. ..... 5‐23
Figure 5-6. The effect of severity level of disruptions on rescheduling rate. ..... 5‐24
Figure 5-7. The effect of efficiency weight on manufacturing cost increment ratio. ..... 5‐34
Figure 5-8. The effect of disruption severity level on manufacturing cost increment ratio. ..... 5‐35
Figure 5-9. Comparison of two different initial job sizes: (a) initial job size = 5; (b) initial job size = 10. ..... 5‐40
Figure 5-10. Performance of the ARRIVAL policy. ..... 5‐41
Figure 5-11. Performance of the PERIODIC policy. ..... 5‐41
Figure 5-12. Performance of the RATIO policy. ..... 5‐42
Figure 5-13. The procedure of the proposed strategy. ..... 5‐45
Figure 6-1. Grid of thread blocks. ..... 6‐3
Figure 6-2. The ACPSO procedure. ..... 6‐7
Figure 6-3. Global memory data organization. ..... 6‐9
Figure 6-4. Programming code for finding the global best solution. ..... 6‐11
Figure 6-5. The kernel for generating random status. ..... 6‐13
Figure 6-6. The operation for generating random numbers. ..... 6‐14
Figure 6-7. Comparison of manufacturing cost. ..... 6‐19
Figure 6-8. Computational times on CPU and GPU. ..... 6‐21
Figure 6-9. Effect of particle size on speed-up ratio. ..... 6‐22
Figure 6-10. Effect of job number on speed-up ratio. ..... 6‐23
Figure 6-11. Effect of system configuration on speed-up ratio. ..... 6‐24

LIST OF TABLES
Table 1-1. Characteristics of job shop, flow line, CM and VCM. ..... 1‐8
Table 1-2. The differences between virtual cellular manufacturing and cellular manufacturing. ..... 1‐9
Table 3-1. The manufacturing information for the list of incoming jobs. ..... 3‐12
Table 3-2. Travelling distances among the workstations. ..... 3‐13
Table 3-3. Operating cost of each workstation per second. ..... 3‐13
Table 3-4. Worker characteristics. ..... 3‐14
Table 3-5. Sample production outputs of a job. ..... 3‐15
Table 3-6. A summary of the formation of virtual manufacturing cells. ..... 3‐18
Table 3-7. An example of the job production sequence of a particle. ..... 3‐20
Table 3-8. The changes of probabilities from velocity of the job production sequence. ..... 3‐22
Table 3-9. Test problem generating schemes. ..... 3‐31
Table 3-10. Performance comparison of CPSO and DPSO. ..... 3‐33
Table 3-11. Performance comparison of ACPSO and DPSO. ..... 3‐38
Table 4-1. The operating cost of each workstation per second. ..... 4‐10
Table 4-2. Travelling distances among the workstations. ..... 4‐10
Table 4-3. Worker characteristics. ..... 4‐11
Table 4-4. Training cost of operating each type of workstation in each period. ..... 4‐11
Table 4-5. Production route, processing time and transportation cost of each job. ..... 4‐12
Table 4-6. Customer demands for all jobs in the planning horizon. ..... 4‐12
Table 4-7. The inventory-holding cost of each job in each period. ..... 4‐13
Table 4-8. The subcontracting cost of each job in each period. ..... 4‐13
Table 4-9. The formation of virtual manufacturing cells in the planning horizon. ..... 4‐14
Table 4-10. The creation and termination times of the virtual manufacturing cells. ..... 4‐15
Table 4-11. The worker training scheme. ..... 4‐16
Table 4-12. The inventory-holding and subcontracting volumes of jobs. ..... 4‐17
Table 4-13. Test problem generating schemes. ..... 4‐32
Table 4-14. Worker training level under different training costs and job tightness. ..... 4‐33
Table 4-15. Training levels under different inventory-holding costs. ..... 4‐36
Table 4-16. Training levels under different subcontracting costs. ..... 4‐37
Table 4-17. Comparative performance of CPSO and ACPSO over the same number of iterations. ..... 4‐39
Table 4-18. Comparison of manufacturing cost after the same computational time. ..... 4‐41
Table 5-1. Production resource assignment for the job. ..... 5‐3
Table 5-2. Job production outputs. ..... 5‐5
Table 5-3. Notations used in the proposed policy. ..... 5‐10
Table 5-4. Schemes for generating test problems. ..... 5‐16
Table 5-5. Rescheduling frequency for Schemes 1 to 24 under ρ = 0.5. ..... 5‐19
Table 5-6. Rescheduling frequency for Schemes 1 to 24 under ρ = 0.75. ..... 5‐19
Table 5-7. Rescheduling frequency for Schemes 1 to 24 under ρ = 1. ..... 5‐20
Table 5-8. Average rescheduling frequency for Schemes 1 to 24. ..... 5‐20
Table 5-9. Rescheduling frequency under different levels of ρ. ..... 5‐21
Table 5-10. Average rescheduling frequency for Schemes 25 to 48. ..... 5‐21
Table 5-11. Average rescheduling frequency for Schemes 49 to 72. ..... 5‐22
Table 5-12. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 0.5. ..... 5‐26
Table 5-13. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 0.75. ..... 5‐26
Table 5-14. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 1. ..... 5‐27
Table 5-15. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 0.5. ..... 5‐28
Table 5-16. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 0.75. ..... 5‐29
Table 5-17. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 1. ..... 5‐30
Table 5-18. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 0.5. ..... 5‐31
Table 5-19. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 0.75. ..... 5‐32
Table 5-20. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 1. ..... 5‐33
Table 5-21. Manufacturing cost increment ratio for Schemes 1 to 3. ..... 5‐34
Table 5-22. Reactive scheduling policies in a rolling horizon environment. ..... 5‐38
Table 5-23. Comparison of rescheduling policies under the same rescheduling frequency levels. ..... 5‐43
Table 5-24. Schemes for generating test problems. ..... 5‐46
Table 5-25. Parameter schemes for the rescheduling strategy. ..... 5‐47
Table 5-26. Performance of the proposed rescheduling strategy in a comprehensive dynamic manufacturing environment. ..... 5‐48
Table 5-27. Characteristics of machine breakdowns and worker absenteeism in comprehensive dynamic manufacturing environments. ..... 5‐51
Table 6-1. Test problem generating schemes. ..... 6‐16
Table 6-2. Computational times under different block sizes. ..... 6‐17
Table 6-3. Comparative manufacturing costs. ..... 6‐18
Table 6-4. The speed-up ratio of parallel ACPSO. ..... 6‐20
Table 6-5. Computational times on CPU and GPU. ..... 6‐21

NOTATION

d_{s,t}                  the task delay caused by the disruption occurring at time point t in time slice s.
d_{w1,w2}                the distance between workstations w1 and w2.
D*                       the critical cumulative task delay.
D_{s,t}                  the cumulative task delay at time point t of time slice s.
DD_j                     the delivery due date of job j.
DD_{j,p}                 the delivery due date of job j in period p.
D(r)                     the travelling distance of production route r.
D(r_{j,p})               the material travelling distance of production route r_{j,p}.
E_{l,w}                  equal to one if worker l has the ability to operate workstation w; otherwise, it is equal to zero.
E_{l,w,p}                equal to one if worker l can handle workstation w in period p; otherwise, it is equal to zero.
ft_{j,i,w(r),s}          the completion time of operation i of job j on workstation w using route r in time slice s.
ft_{j,i,w(r_{j,p}),s,p}  the completion time of operation i of job j on workstation w(r_{j,p}) in time slice s of period p.
IV_{j,p}                 the inventory-holding volume of job j in period p.
j, j'                    job type.
K_j                      the total number of operations of job j.
l                        labor type.
L(j,i)                   the set of workers that can handle operation i of job j.
L_{j,i}                  the number of workers that can handle operation i of job j.
M                        a fixed positive integer.
M(j,i)                   the set of workstations that can handle operation i of job j.
M_{j,i}                  the number of workstations that can manufacture operation i of job j.
MC_{w,s}                 the maximum capacity of workstation w in time slice s.
MC_{w,s,p}               the maximum production capacity of workstation w in time slice s of period p.
NC                       the number of selectable components for each variable.
O_{j,i}                  operation i of job j.
p, p'                    a period.
P_k^t                    the best solution found by particle k until iteration t.
P_g^t                    the best solution found by the swarm until iteration t.
pre[i][p]                the volume of operation i of job j in the remaining tasks scheduled up to time slice p in the current schedule.
PR_{j,i,w(r),s}          a decision variable representing the processing rate of operation i of job j processed on workstation w in production route r during time slice s.
PR_{j,i,w(r_{j,p}),s,p}  the processing rate of operation i of job j on workstation w(r_{j,p}) in time slice s of period p.
pt_{j,i,w(r)}            the processing time of producing one unit of operation i of job j on workstation w in production route r.
pt_{j,i,w(r_{j,p})}      the processing time of operation i of job j on workstation w(r_{j,p}).
PL                       the length of a time slice.
r                        a production route.
r_{j,p}                  the production route of job j in period p.
rt_j                     the release time of job j.
rs[i][p]                 the volume of operation i of job j scheduled up to time slice p in the right-shift schedule.
R                        job due date range.
s, s'                    a time slice.
st_{j,i,w(r),s}          the starting time of operation i of job j on workstation w using route r in time slice s.
st_{j,i,w(r_{j,p}),s,p}  the starting time of operation i of job j on workstation w(r_{j,p}) in time slice s of period p.
sv_i                     the volume of the i-th type of sub-jobs.
S^P_{s,t}                the current predictive schedule.
S^R_{s,t}                the right-shift schedule revised through the right-shift policy.
SV_j                     the subcontracting volume of job j.
SV_{j,p}                 the subcontracting volume of job j in period p.
SV_j^P                   the subcontracting volume of job j in the schedule S^P_{s,t}.
SV_j^R                   the subcontracting volume of job j in the schedule S^R_{s,t}.
tr_{l,w,p}               the training cost of worker l for workstation type w in period p.
T                        the tightness of due date.
TIN_{l,w,s}              the time interval in which worker l is operating workstation w in time slice s.
TIN_{l,w,s,p}            the time interval in which worker l is operating workstation w in time slice s of period p.
TN_l                     the total number of time slices in which worker l is assigned to operate workstations within the planning horizon.
TN_{l,p}                 the number of time slices to which worker l is assigned during period p.
v_i                      the volume of the i-th operation in the second part of remaining tasks.
V_j                      the production volume (customer demand) of job j.
V_{j,p}                  the volume of customer demand of job j in period p.
V^P_{j,i,s}              the unfinished volume of operation i of job j in time slice s in the predictive schedule S^P_{s,t}.
V^R_{j,i,s}              the unfinished volume of operation i of job j in time slice s in the right-shift schedule S^R_{s,t}.
V̄_{j,p}                  the volume of job j actually produced in period p.
V_max                    the maximum value of particle velocities.
V_k^t                    the velocity of particle k at iteration t.
w, w'                    workstation type.
w(r)                     workstation w used in production route r.
w(r_{j,p})               workstation w used in production route r_{j,p}.
X_k^t                    particle k of the swarm at iteration t.
X_{j,i,w(r),s}           a zero-one decision variable equal to one if job j has operation i launched by workstation w in production route r in time slice s; otherwise, it is zero.
X_{j,i,w(r_{j,p}),s,p}   a zero-one variable equal to one if job j has operation i launched on workstation w(r_{j,p}) in time slice s of period p; otherwise, it is zero.
Y_{j,i,w(r),s}           a zero-one decision variable equal to one if job j has operation i processed by workstation w in production route r in time slice s; otherwise, it is zero.
Y_{j,i,w(r_{j,p}),s,p}   a zero-one variable equal to one if job j has operation i processed on workstation w(r_{j,p}) in time slice s of period p; otherwise, it is zero.
Z_{j,i,l}                equal to one if worker l is assigned to handle operation i of job j; otherwise, it is zero.
Z_{j,i,l,p}              equal to one if worker l is assigned to handle operation i of job j in period p; otherwise, it is zero.
Z_{l,s}                  equal to one if worker l is assigned to time slice s; otherwise, it is zero.
Z_{l,s,p}                equal to one if worker l is assigned to time slice s of period p; otherwise, it is zero.
α_j                      the cost of moving one unit of job j per unit distance.
β_l                      the salary of worker l per time slice.
(β1, β2)                 disruption duration parameters.
γ_{j,p}                  the inventory-holding cost of job type j per unit in period p.
σ_j                      the subcontracting cost of job j per unit.
σ_{j,p}                  the subcontracting cost of job j per unit in period p.
ρ                        the evaporation weight of the pheromone value in ACS, and the efficiency weight in robust predictive-reactive rescheduling.
η                        production resource utilization.
λ                        disruption frequency parameter.
ψ_w                      the operating cost of workstation w per unit time.
q_0                      a parameter in the range (0, 1).
τ_0                      the initial pheromone value.

CHAPTER 1 INTRODUCTION

1.1 Virtual Cellular Manufacturing Systems (VCMSs)

The manufacturing industry has been facing relentless pressure from an increasingly dynamic manufacturing environment since the turn of the century. This environment is characterized by demand for a greater variety of products with shorter manufacturing cycles. Manufacturers must constantly enhance their production efficiency to meet customer demands as well as maintain their competitiveness and market shares. The manufacturing industry has devoted great effort to designing effective manufacturing systems for many decades. "Group Technology", for example, is a manufacturing philosophy which groups together similar parts in order to achieve a higher level of integration between the industry's design and manufacturing functions. Cellular manufacturing (CM) and virtual cellular manufacturing (VCM) are two typical manufacturing designs utilizing Group Technology.

Cellular manufacturing has long been a favorite among the various manufacturing layouts for its efficient improvement of the productivity of batch production systems (Mak et al. 2005). However, some deficiencies limit the application of CM in the practical manufacturing environment. First, the machine workload in cellular manufacturing systems (CMSs) is usually unbalanced. The machines in a cellular manufacturing system are usually duplicated in order to help restrict the manufacturing of parts to their respective cells. This will generate excessive production capacity and lead to low machine utilization. Second, CMSs are not efficient in responding to dynamic changes in the manufacturing environment. One of the underlying assumptions in cellular manufacturing systems is that both product mix and product demand are relatively stable over time. CMS efficiency may decline drastically when the product pattern changes (Wemmerlov and Hyer 1989).
Virtual cellular manufacturing was proposed as an alternative to CM in order to overcome these deficiencies (McLean et al. 1982). VCM is designed to maintain both high production efficiency and product flexibility. Furthermore, it is effective in handling various changes inherent to dynamic manufacturing environments.

1.1.1 The virtual cellular manufacturing concept

The most popular manufacturing systems in use prior to the advent of the VCM concept included "job shop", "flow line", and "cellular manufacturing system". The following presents a general overview of these commonly used systems as well as the newly proposed virtual cellular manufacturing system (VCMS).

Job shop, also called "process layout", is the most common manufacturing system in use around the world. It generally features the highest flexibility among the system options, so that a great variety of products with small lot sizes can be produced. Figure 1-1 shows the physical configuration of a job shop, in which workstations of the same type are placed together on the production floor. Due to this functional layout, job shop yields high flexibility and machine utilization. However, this layout is not without its problems. When the processing of a job operation is finished, the job usually must move a relatively long distance to reach the next stage. It may have to travel the length of the entire system to complete all of the required operations. Thus, many resources are spent on non-productive activities and complex material flow management. Furthermore, job shop efficiency may decline as the number of workstations increases. Job shop also suffers from many other deficiencies, such as significant work-in-process inventory, long production cycle time, and excessive setup time. All of these deficiencies conspire to limit the application of job shop to competitive and dynamic manufacturing environments.


Figure 1-1. The layout of a job shop.

In contrast to job shop, flow line (also called "product layout") is a product-oriented manufacturing layout. Figure 1-2 shows the physical configuration of a flow line. A flow line is organized according to the sequence of operations required for a product. It is suitable for manufacturing high volumes of products at high production rates. This manufacturing layout simplifies material flow management, and reduces work-in-process inventory and lead time. However, the major deficiency of flow line is its lack of flexibility. It is not adaptable to producing products for which it was not designed. Flow line features specialized workstations that are set up to perform limited operations and are difficult to reconfigure. Thus, flow line works well only in stable mass production environments (as opposed to dynamic manufacturing environments).

 

Figure 1-2. The layout of a flow line.

As stated above, job shop is a process-oriented manufacturing layout that features high flexibility but low efficiency; flow line is a product-oriented manufacturing layout, featuring high efficiency but low flexibility. Neither is capable of meeting modern production requirements. Cellular manufacturing was proposed as a balance between job shop and flow line, introduced to permit reasonable flexibility and efficiency simultaneously (Figure 1-3).


Figure 1-3. Positioning of various manufacturing layouts.

Figure 1-4 shows the physical configuration of a cellular manufacturing system. The parts that have similar attributes are grouped together to form a part family, and the workstations that produce those parts are grouped together to form a manufacturing cell. This enables similar parts to be manufactured efficiently within the same cell (Drolet 1989). Manufacturing cell capacity can be determined by considering only the part families that are produced by the cell, and the workstations are positioned according to the operation sequences of the majority of the parts. CM's basic tenet is to decompose a complex manufacturing facility into several smaller, simpler manufacturing cells, each of which is dedicated to manufacturing a single part family. The decomposition process limits each manufacturing cell to only a few workstations, thus greatly simplifying material flow management.

However, CM suffers the major deficiency of unbalanced machine workload, because machines in CMSs are usually duplicated to restrict the manufacturing of parts to their respective manufacturing cells. Machine duplication will generate excessive production capacity and lead to low machine utilization. Some possible remedies have been proposed in relevant studies. One idea is to route parts outside of their assigned cells in order to reduce unbalanced machine workload, but this attempt will break the cardinal CM rule and increase the difficulty in managing material flow. Another approach suggests using multi-function workstations to balance machine workload, but this will generate high investment costs because such advanced machinery is usually very expensive.



Figure 1-4. The layout of a cellular manufacturing system.

A concept called virtual cellular manufacturing was proposed in the early 1980s to overcome CM's deficiencies (McLean et al. 1982, Drolet 1989). Figure 1-5 shows the physical configuration of a VCMS. VCM, like CM, is a manufacturing concept which makes use of group technology and strikes a balance between process layout and product layout. The main difference between VCM and traditional cellular manufacturing is that the workstations in a VCMS are distributed evenly on the production floor instead of being grouped into clusters. Rather than being identified as a fixed physical grouping of workstations, a virtual manufacturing cell appears as data files in a virtual cell controller (Drolet 1989). When a job arrives, the virtual cell controller will take command of the workstations to form a virtual manufacturing cell. The controller will also oversee the manufacturing of the job until it is finished. Meanwhile, the workstations are not locked into a particular virtual manufacturing cell. This renders them free for allocation to other jobs if they possess remaining production capacities. When the job has been completed, the virtual manufacturing cell terminates, and the released workstations become available again for other incoming jobs. Hence, a workstation may be part of one virtual manufacturing cell at one moment, and part of another cell in the near future. The sharing of workstations among virtual manufacturing cells creates an expectation that machine utilization and overall productivity will be improved. Furthermore, the physical discontinuity and dynamic configuration of virtual manufacturing cells render VCMSs significantly more adaptable to demand uncertainties and product specification changes.

Figure 1-5. The layout of a virtual cellular manufacturing system.
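The cell-formation mechanism described above can be sketched as a toy data structure: a virtual cell exists only as a record of capacity claims against shared workstations, created when a job arrives and released when it completes. All names and figures below (VirtualCellController, Workstation, the capacities) are illustrative assumptions, not the thesis's implementation.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    # Hypothetical model: capacity measured in abstract load units per planning horizon.
    name: str
    capacity: float
    allocated: float = 0.0

    def remaining(self) -> float:
        return self.capacity - self.allocated

class VirtualCellController:
    """Toy controller: virtual cells are data records, not physical groupings."""

    def __init__(self, stations):
        self.stations = {s.name: s for s in stations}
        self.cells = {}  # job id -> list of (station name, load)

    def form_cell(self, job_id, demands):
        """Allocate only if every required workstation has enough remaining
        capacity; a station may already belong to other virtual cells."""
        if any(self.stations[n].remaining() < load for n, load in demands):
            return False
        for n, load in demands:
            self.stations[n].allocated += load
        self.cells[job_id] = list(demands)
        return True

    def terminate_cell(self, job_id):
        """Release the claimed capacity when the job finishes; the cell disappears."""
        for n, load in self.cells.pop(job_id):
            self.stations[n].allocated -= load

controller = VirtualCellController(
    [Workstation("A1", 10.0), Workstation("B1", 10.0)])
assert controller.form_cell("job1", [("A1", 6.0), ("B1", 4.0)])
assert controller.form_cell("job2", [("A1", 4.0), ("B1", 4.0)])  # shares both stations
assert not controller.form_cell("job3", [("A1", 1.0)])           # A1 is fully loaded
controller.terminate_cell("job1")                                # job1 completes
assert controller.form_cell("job3", [("A1", 1.0)])               # released capacity reused
```

The point of the sketch is the last three lines: workstation sharing, rejection when effective capacity is exhausted, and reuse after cell termination.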


These four manufacturing layouts differ in many aspects, and their major characteristics are summarized in Table 1-1:

Job shop
    Manufacturing environment: a great variety of products with small lot sizes.
    Advantage: high flexibility and workstation utilization.
    Disadvantage: low efficiency.

Flow line
    Manufacturing environment: high volumes of products with high production rates.
    Advantage: high efficiency.
    Disadvantage: low flexibility.

CM
    Manufacturing environment: stable production patterns with identifiable part families.
    Advantage: simplified material management, easier scheduling task.
    Disadvantage: low workstation utilization, reduced flexibility, and relatively high investment.

VCM
    Manufacturing environment: unstable production pattern with great production variation.
    Advantage: high efficiency and workstation utilization, relatively low investment.
    Disadvantage: double competition.

Table 1-1. Characteristics of job shop, flow line, CM and VCM.

1.1.2 Virtual cellular manufacturing versus cellular manufacturing

Both CM and VCM are manufacturing designs belonging to the Group Technology field, the fundamental principle of which is to make use of the similarity between operations. Both manufacturing layouts strike a balance between process layout and product layout, thus simultaneously enjoying better levels of efficiency and flexibility. The major difference between CM and VCM lies in the physical configuration of their workstations. CM assumes relatively permanent physical groupings of workstations, each grouping (or cell) being dedicated to the manufacturing of a single part family. The similarities of parts in a cell lead to reduced setup time and simplified material handling. In order to reduce the transportation of parts between different manufacturing cells, machines are usually duplicated to restrict the manufacturing of parts to their respective cells. This duplication will generate excessive production capacity and lead to low machine utilization. It was reported that the highest machine utilization rate attained under CM in the U.S. was only about 60 per cent (Lewis 1981), and that utilization rate is difficult to attain as the number of part types becomes larger. Morris and Tersine (1990) claimed that CM systems only yield good performance under conditions of relatively high setup and material handling frequencies with stable production demand patterns.

Virtual cellular manufacturing
    1. Most workstations are replicated across the production floor.
    2. Systems are operated on a just-in-time basis. No work-in-process inventory is allowed.
    3. Job production volume is usually not very large.
    4. The route of producing a job is not specified in advance, but is determined according to the current status of the production floor. More flexibility and high machine utilization.

Cellular manufacturing
    1. Workstations producing a part family are physically grouped together. Most workstations have duplication to reduce the inter-cell transportation of jobs.
    2. Systems are operated on a flow shop basis. Manufacturing cells may need to be rearranged in response to changes in product mix.
    3. Parts are produced on a batch basis, and there is no restriction on the size of batches.
    4. The route of manufacturing a job is almost the same. Reduced flexibility and low machine utilization.

Table 1-2. The differences between virtual cellular manufacturing and cellular manufacturing.

In VCMSs, the workstations are evenly distributed on the production floor, not clustered together in physical groups. When jobs arrive, virtual manufacturing cells are created according to the manufacturing requirements of the jobs and the current system status. More than one virtual cell can share a workstation if there is enough remaining production capacity. VCM thus achieves many of the benefits associated with CM while retaining high flexibility and workstation utilization. Indeed, it is widely believed that


VCM is more suitable for dynamic manufacturing environments due to the temporary configuration of virtual manufacturing cells. These two manufacturing layouts have many other differences aside from workstation layout, which are listed in Table 1-2.

1.2 Production Scheduling in VCMSs

In order to run a VCMS well, a good production schedule should provide at least the following three types of basic information (Mak et al. 2005). First, it should specify the types of workstations and other production resources that will be grouped to form virtual manufacturing cells. Second, it should identify the bottleneck production resources in each virtual manufacturing cell, and determine the most appropriate processing rates to manufacture the assigned jobs. Third, it should specify the suitable times to create and terminate the virtual manufacturing cells. Although VCM has many advantages in terms of efficiency and flexibility, its features also make the formulation of production schedules a very complex task. Some major concerns in formulating VCMS production schedules are presented as follows:

(a) How to determine a suitable production route for each part?

Most workstation types are replicated across the production floor in VCMSs. A production route should be determined for each job in order to generate a complete production schedule. Thus, a large number of possible production routes need to be analyzed. In the process of determining a suitable job production route, many factors, such as the remaining capacity of production resources and the distance between workstations, should be considered. Take this example to illustrate the point. Say there is a VCMS with two identical workstations of type A, labeled as workstations 1 and 2; three identical workstations of type B, labeled as workstations 3, 4, and 5; and three identical workstations of type C, labeled as workstations 6, 7, and 8. Assuming the job must follow production route A → B → C, there are 18 possible production routes in total for this job: (1-3-6), (1-3-7), (1-3-8), (1-4-6), (1-4-7), (1-4-8), (1-5-6), (1-5-7), (1-5-8), (2-3-6), (2-3-7), (2-3-8), (2-4-6), (2-4-7), (2-4-8), (2-5-6), (2-5-7) and (2-5-8). As the number of jobs and/or production resources increases, the search space will become larger and the production scheduling problem will become more complicated.

(b) How to improve production efficiency and machine utilization?

A good production schedule should reduce lead time and improve machine utilization as much as possible. The production of each part in a VCMS is usually of medium volume, ranging from more than a dozen to less than 100 units. Previous research decomposed production orders into numerous small jobs, each of which can potentially have a different production route (Wong 1999). The main idea of this approach is as follows. A production order of large volume will occupy some resources for a long time. Once a virtual manufacturing cell is created for this job, these production resources will be locked up for a long time, leaving relatively few remaining capacities for other cells. This may also cause these production resources to become bottlenecks, as their effective remaining capacities are relatively small. The decomposition approach can improve machine utilization, but will increase the difficulty in managing material flow. An alternative approach, dividing the planning horizon into a certain number of equal time slices, has been commonly adopted in recent years (Mak et al. 2005). In each time slice, a production resource can be utilized to produce a job, but can also be assigned to produce other jobs if it completes its work but still has available capacity. This approach can also guarantee that a production resource will not be occupied for a long time by a specific job. Lead time can thus be reduced while machine utilization is improved.

(c) Is work-in-process inventory between workstations allowed?

Control of the creation and termination of virtual manufacturing cells is crucial for running a VCMS.
Permitting work-in-process inventory between workstations would complicate material flow management, and also make controlling cell creation and termination too difficult. For example, consider a part being manufactured in a virtual manufacturing cell (2-5-6) which bottlenecks at workstation 5. The work-in-process inventory will accumulate at workstation 5, and workstation 6 will remain idle while waiting for incoming parts to arrive from workstation 5. It is hard, therefore, for the virtual cell controller to determine optimal operations for the system. This will also inevitably increase the difficulty in managing material flow in the manufacturing system. Therefore, no work-in-process inventory is allowed between workstations in this research. If the planning horizon is divided into a certain number of equal time slices as stated above, the requirement of no work-in-process inventory between workstations can be easily satisfied by imposing a constraint on the processing rates of jobs: the processing rate of an operation in a time slice must be equal to that of its preceding operation in the last time slice, and to that of its succeeding operation in the next time slice. In this way, the appropriate processing rate will be determined by the bottleneck workstation, and any remaining production capacities of other workstations can be assigned to other incoming jobs. This method can simplify material flow management while maintaining reasonably high machine utilization.

(d) Will the production schedule be affected by other production resources?

Part manufacturing depends upon much more than simply workstations. Other critical hardware, such as tools and fixtures, is necessary as well, not to mention the skilled workers who are needed to operate the machinery. A good virtual manufacturing cell must therefore consider the requirements of all of these production resources. For instance, the cell controller must ensure that all the required production resources are delivered to the appropriate workstations and that the skilled workers are available, all at the right time. The effective capacities of these production resources must therefore be taken into consideration while generating a feasible production schedule. Obviously, the inclusion of these resource constraints increases the difficulty of finding the optimal production schedule.
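The rate-chaining rule in (c) can be checked mechanically. The sketch below assumes a schedule encoded as a matrix rates[s][i], the processing rate of operation i in time slice s; this encoding and the function name are illustrative assumptions rather than the thesis's model:

```python
def no_wip_feasible(rates):
    """Check the no-WIP chaining rule: operation i's rate in time slice s must
    equal operation i-1's rate in slice s-1, so no work-in-process can pile up
    between consecutive workstations in the virtual cell."""
    for s in range(1, len(rates)):
        for i in range(1, len(rates[s])):
            if rates[s][i] != rates[s - 1][i - 1]:
                return False
    return True

# Three operations over four slices; each diagonal carries a constant rate,
# so the bottleneck workstation dictates the pace of the whole virtual cell.
ok = [[5, 0, 0],
      [5, 5, 0],
      [5, 5, 5],
      [0, 5, 5]]
bad = [[5, 0, 0],
       [5, 3, 0],   # operation 2 runs slower than operation 1 did: WIP piles up
       [5, 3, 3],
       [0, 5, 3]]
assert no_wip_feasible(ok)
assert not no_wip_feasible(bad)
```

Any workstation capacity left over on the off-diagonal entries (the zeros above) is exactly the capacity the controller can lend to other virtual cells.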
(e) Is the production scheduling algorithm capable of handling expected and unexpected changes occurring in the manufacturing system?

System performance may be affected by any number of changes in the practical manufacturing environment. Nonetheless, most disturbances can be classified into two general categories according to their characteristics: expected and unexpected. Expected changes (such as shifts in product mix and demand) usually require higher setup frequency and more complex cell formation to meet their demands. The formation of part families and manufacturing cells in cellular manufacturing systems is based on an underlying assumption that product mix and demand are relatively stable over time. Thus it is hard to maintain system optimality when demand patterns change. This handicap limits the application of cellular manufacturing systems to dynamic manufacturing environments. Unexpected changes, such as machine breakdowns and worker absenteeisms, may render the current production schedule infeasible or deteriorate the system performance. The selected production scheduling algorithm must be able to rapidly modify the current production schedule to cope with disruptions in a dynamic manufacturing environment.

Much research has been conducted in the VCM field since the concept was introduced in the 1980s. Most of these attempts focused on the possibility and feasibility of applying VCM concepts to real manufacturing environments, and on comparing the performance of VCMSs with other manufacturing systems through simulation analysis. Only a few inquiries have been made into the study of VCMS production scheduling problems. Drolet (1989) developed the first mathematical model to formulate VCMS production schedules, proposing a two-step approach. First, all of the integer requirements in the model are ignored, enabling the relaxed model to be easily solved by using any existing linear programming technique. Second, the solution obtained from the first step is adjusted by rounding off the integer variables, so as to satisfy the integer requirements. Although this approach can significantly reduce the computational effort, the ultimate solution it yields is infeasible in most cases. Genetic algorithms have been used for decades as optimization tools to solve production scheduling problems. Mak and Wang (2002a) applied a genetic algorithm to formulate production schedules for cellular manufacturing systems.
Furthermore, Mak and Wang (2002b) developed a mathematical model to describe VCMS production schedules and proposed a genetic algorithm with "age" to solve the complex problem. Production orders are decomposed into numerous small jobs in their model, each of which can have a different production route. Based on their research, Mak et al. (2005) developed another model to study VCMS production scheduling problems. In their model, the planning horizon is divided into a certain number of equal time slices, and each job is assigned a unique production route so as to simplify material flow management.
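Returning to concern (a) in Section 1.2, the growth of the route search space is easy to reproduce; the station numbering below follows the earlier A → B → C example:

```python
from itertools import product

# Workstation instances per required type, as in the example in concern (a):
# two of type A, three of type B, three of type C.
stations = {"A": [1, 2], "B": [3, 4, 5], "C": [6, 7, 8]}
sequence = ["A", "B", "C"]  # required operation sequence A -> B -> C

# Every candidate production route is one pick per required type.
routes = list(product(*(stations[t] for t in sequence)))
assert len(routes) == 18          # 2 x 3 x 3 candidate routes
assert routes[0] == (1, 3, 6)
assert routes[-1] == (2, 5, 8)
```

The Cartesian product makes the combinatorial explosion explicit: with J jobs the scheduler faces on the order of 18^J route combinations in this tiny system alone, before processing rates and worker assignments are even considered.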

1.3 Combinatorial Optimization Problems and Algorithms for Production Scheduling Problems

Almost all practical production scheduling problems are essentially combinatorial optimization problems, each of which can be described as an optimization problem with discrete variables and a finite search space. "Optimization" here usually refers to minimizing or maximizing the value of an objective function (this research only considers minimization problems, because the objective function in a production scheduling problem is usually to minimize some measure of system performance, such as makespan, job tardiness, or total manufacturing cost). The formal definition of an optimization problem was given by Papadimitriou and Steiglitz in 1982. They asserted that an optimization problem is composed of an objective function f and a feasible solution space S. The goal of the problem is to find a solution x ∈ S satisfying:

    f(x) ≤ f(y)  for all y ∈ S    (1-1)

Such a solution x satisfying Equation (1-1) is called a global optimum. The solution space of a problem is usually characterized by a set of constraints (inequalities and equalities). A great variety of algorithms have been proposed to find the global optimal solution of a combinatorial optimization problem. The most common approaches can be classified into two taxonomies: exact algorithms and approximation algorithms. Approximation algorithms can be further subdivided into two types: local search heuristics and meta-heuristics. Combinatorial optimization problems such as production scheduling problems are usually NP-hard. Exact algorithms can ensure finding the global optimal solution, but their computational time requirements increase exponentially as the problem size becomes larger. Thus exact algorithms are usually used to solve optimization problems of small size. One of the most famous exact algorithms is "branch-and-bound", a technique for the complete enumeration of all possible solutions without having to try them one by one. The key step of the branch-and-bound process is called "pruning": if the lower bound for some tree node A is greater than the upper bound of some other node

B, then node A can be safely discarded from the search space. This will reduce the search space and thus save computational effort. Since exact algorithms require a prohibitive amount of computation time and a reasonably near-optimal solution is usually acceptable in practice, approximation algorithms have attracted more attention both in practice and in theory, due to their fast computational speed. As stated above, approximation algorithms include local search heuristics and meta-heuristics. The principle of local search heuristics is to update the solution through a smart rule or an iterative rule until no improvement in the solution quality can be made. However, local search heuristics have many deficiencies. First, they easily become stuck in local optima. Second, these heuristics are usually problem-specific, and it is hard to find a general application for all types of optimization problems. Some commonly used heuristics include descent heuristics and hill-climbing heuristics. Meta-heuristics refer to intelligent strategies which optimize a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Meta-heuristics differ from local search heuristics in many aspects. First, meta-heuristics make few or no assumptions about the problems being optimized and can search a very large space of candidate solutions. Second, meta-heuristics are capable of escaping from local optima to try other parts of the solution space for better solutions. In recent years, the most popular meta-heuristics have included genetic algorithms, tabu search, simulated annealing, particle swarm optimization, and ant colony optimization.
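A minimal branch-and-bound sketch on a toy minimization problem illustrates the pruning rule just described. The stage costs are made up for illustration; with nonnegative costs, the accumulated partial cost is a valid lower bound on any completion of a partial solution, and the best complete solution found so far (the incumbent) serves as the upper bound:

```python
from itertools import product

# Hypothetical problem: pick one option per stage; total cost is the sum.
costs = [[4, 2, 7], [3, 8, 1], [5, 2, 6]]

def branch_and_bound(costs):
    best = float("inf")  # incumbent upper bound
    def branch(stage, partial):
        nonlocal best
        if partial >= best:      # prune: this node's lower bound >= incumbent
            return
        if stage == len(costs):  # complete solution: update the incumbent
            best = partial
            return
        for c in costs[stage]:   # branch on the options of the current stage
            branch(stage + 1, partial + c)
    branch(0, 0.0)
    return best

# The pruned search still finds the global optimum of Equation (1-1),
# as verified against complete enumeration.
exhaustive = min(sum(choice) for choice in product(*costs))
assert branch_and_bound(costs) == exhaustive == 5  # 2 + 1 + 2
```

This is only a didactic sketch, not the scheduling algorithms developed later in the thesis; in a real production scheduling model the bounding function is far less trivial than a running cost sum.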

1.4 Research Objectives
The goal of this research is to study production scheduling problems in VCMSs under single-period, multi-period and dynamic manufacturing environments. The main objectives of this research are listed as follows:
(a) To develop two mathematical models to formulate VCMS production schedules in single-period and multi-period situations respectively, taking workforce requirements into consideration at the same time. The objective of each model is to formulate a production schedule for all jobs in order to minimize the total manufacturing cost within the entire planning horizon.
(b) To develop an effective hybrid algorithm based on the techniques of ant colony system, constraint programming and discrete particle swarm optimization in order to locate the optimal or near-optimal production schedule for VCMSs.
(c) To derive a suitable mechanism for generating test problems to evaluate the performance of the proposed hybrid algorithm.
(d) To develop effective rescheduling strategies for VCMSs operating in dynamic manufacturing environments, where disruptions include random machine breakdowns, worker absenteeism, dynamic job arrivals, and changes in production volume.
(e) To develop an effective parallel approach for implementing the proposed algorithm on GPU with compute unified device architecture (CUDA).
(f) To study the convergence characteristics of the proposed hybrid algorithm theoretically.

1.5 Outline of the Thesis
The rest of the thesis is organized as follows: Chapter 2 presents a literature review that forms the basis of this research. The review briefly summarizes the previous research in the development of cellular manufacturing systems, the evolution to virtual cellular manufacturing systems, some related optimization algorithms, and the work in dynamic production scheduling.
Chapter 3 develops a mathematical model to formulate production schedules for VCMSs with workforce requirements under a single-period manufacturing environment. The objective is to minimize the total manufacturing cost related to the production schedule. A simple numerical example is used to illustrate VCMS characteristics. A hybrid algorithm based on the techniques of ant colony system, constraint programming, and discrete particle swarm optimization is also proposed to solve the complex optimization problem. In addition, sensitivity analyses are conducted to study the effects of the parameters on the algorithm's performance.
Chapter 4 extends the VCMS production scheduling problems to a static multi-period situation, where worker training and inventory-holding costs are also taken into consideration. The hybrid algorithm proposed in Chapter 3 is modified to make it suitable for the multi-period manufacturing environment.
Chapter 5 studies the production scheduling problems of VCMSs under dynamic manufacturing environments. This chapter first deals with machine breakdowns and worker absenteeism by using a strategy based on the cumulative task delay concept, and then studies the VCMS production scheduling problem in a rolling horizon environment. Finally, simulations are conducted in a more comprehensive dynamic manufacturing environment, where disruptions occurring in the manufacturing system include random machine breakdowns and worker absenteeism, dynamic job arrivals, and changes in production volume.
Chapter 6 presents an efficient parallel implementation approach for the proposed hybrid algorithm on GPU with CUDA to reduce the computational time. Factors affecting the speed-up ratio are investigated to facilitate the understanding of this parallel approach.
Chapter 7 focuses on analyzing the convergence characteristics of the proposed hybrid algorithm theoretically by using probability theory and a Markov chain model.
Finally, Chapter 8 completes the thesis with a brief conclusion and recommendations for future research.

1.6 Summary of Thesis Contributions
The thesis represents thorough research into production scheduling problems for VCMSs in single-period, multi-period, and dynamic manufacturing environments. A production scheduling framework is adopted to guide the production schedule formulation of VCMSs. Two mathematical models, together with a novel hybrid algorithm and several effective rescheduling strategies, are developed to generate/regenerate complete and feasible production schedules for the entire planning horizon. The objective is to provide production schedulers and managers with a clearer understanding of VCMSs, and to enable them to plan and control manufacturing activities more efficiently and systematically.
The developed scheduling methodology, including the production scheduling framework, the two models, a hybrid optimization algorithm and the rescheduling strategies, features several special characteristics. First, it reduces the locking up of production resources by dividing the planning horizon into a number of equal time slices. Second, it simplifies material flow management by eliminating work-in-process inventory between workstations. Third, the proposed hybrid algorithm can provide a complete, feasible, and good-quality production schedule for the entire planning horizon without deviation from the actual manufacturing environment. Finally, the production scheduling framework with the proposed rescheduling strategies is capable of handling various unpredictable disruptions in the manufacturing environment.
Two mathematical models are developed to describe the characteristics of VCMSs operating in a single-period and a multi-period manufacturing environment respectively. In both models, all manufacturing activities are measured in cost terms, and the aim is to minimize the total manufacturing cost incurred in manufacturing product orders for the entire planning horizon. These models include many practical constraints such as workforce requirements, effective capacities of production resources, and delivery due dates of orders. In particular, worker training and the factors affecting it are investigated in detail in the multi-period case.
The obtained production schedule facilitates the determination of the composition of each virtual manufacturing cell, the appropriate processing rates for each individual job, and the creation and termination times of each virtual manufacturing cell (as well as the worker training plan and inventory-holding scheme in the multi-period situation).
This research develops an effective hybrid algorithm based on the techniques of discrete particle swarm optimization, constraint programming, and ant colony system to solve complex production scheduling problems for VCMSs. The intent of the developed hybrid algorithm is to combine the complementary advantages of these techniques to improve search performance. Discrete particle swarm optimization helps locate good solutions quickly. Constraint programming is used to increase the exploitation of the search space, while ant colony system is adopted to increase its exploration. Since the proposed optimization algorithm does not make any strong assumptions about the form of the mathematical model, it provides production managers with great flexibility to include an objective function and system constraints that truly describe the characteristics of the manufacturing environment. The performance of the proposed hybrid algorithm is evaluated by simulating a large set of randomly generated test problems. Sensitivity analyses are also conducted to study the effects of key parameters, such as particle size, maximum number of iterations, and the weight coefficient of the pheromone value, on the search performance of the hybrid algorithm.
Dynamic production scheduling of VCMSs is conducted in order to bring the research closer to the practical manufacturing environment, where a great variety of disruptions occur unexpectedly. These unpredictable events degrade system performance and may even render the current production schedule infeasible. A new feasible production schedule may therefore need to be generated rapidly to ensure smooth manufacturing operations. This research proposes several cost-effective rescheduling strategies to deal with commonly encountered disruptions in dynamic manufacturing environments so as to fill the gap between scheduling theory and scheduling practice.
Among these strategies, a cumulative task delay based strategy is adopted to deal with machine breakdowns and worker absenteeism, a hybrid strategy based on the number of incoming jobs and the percentage of task completion is adopted to deal with dynamic job arrivals in a rolling time horizon, and an immediate rescheduling strategy is used to deal with changes in production volume by immediately adjusting the system status. These results make the scheduling methodology more reliable for practical applications.
To further improve productivity and realize a smooth manufacturing process, it is desirable to reduce the computational time of generating/regenerating production schedules as much as possible. This research proposes a parallel approach for implementing the proposed hybrid algorithm on GPU with CUDA so as to improve its computational speed. The proposed hybrid algorithm is intrinsically parallel, as its framework is a population-based meta-heuristic, and it can thus be effectively implemented on GPU. The performance of the parallel implementation approach is evaluated based on a large set of randomly generated test problems. Analyses are also conducted to reveal which factors most strongly affect the speed-up ratio.
In order to study the search performance of the proposed hybrid algorithm, theoretical analyses are conducted in this research by using probability theory and a Markov chain model. The effects of DPSO's information sharing mechanism and ACS's pheromone trail mechanism on the convergence characteristics of the proposed hybrid algorithm are investigated in detail. Furthermore, an absorbing Markov chain model is formulated to estimate the expected convergence time of the proposed hybrid algorithm. These results provide a theoretical foundation for applying the proposed hybrid algorithm as the optimization tool to formulate production schedules for VCMSs.
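As a minimal illustration of how an absorbing Markov chain yields an expected convergence time, the sketch below solves t = 1 + Qt for the expected number of steps to absorption, which is equivalent to t = (I − Q)⁻¹·1 with Q the transition matrix restricted to transient states. The two-state transition probabilities are a made-up example, not the thesis's model:

```python
# Sketch: expected convergence (absorption) time of an absorbing Markov chain.
# Q holds transition probabilities among transient states only; the expected
# steps-to-absorption vector t solves t = 1 + Q t.
# The transition probabilities below are illustrative, not from the thesis.

def expected_absorption_times(Q, tol=1e-12, max_iter=100_000):
    n = len(Q)
    t = [0.0] * n
    for _ in range(max_iter):
        # Fixed-point update t <- 1 + Q t (converges since Q is sub-stochastic).
        t_new = [1.0 + sum(Q[i][j] * t[j] for j in range(n)) for i in range(n)]
        if max(abs(a - b) for a, b in zip(t_new, t)) < tol:
            return t_new
        t = t_new
    return t

# Two transient states: each step, state 0 moves to state 1 w.p. 0.5,
# state 1 is absorbed w.p. 0.5; otherwise the chain stays put.
Q = [[0.5, 0.5],
     [0.0, 0.5]]
t = expected_absorption_times(Q)
print([round(x, 6) for x in t])  # → [4.0, 2.0]
```

Here state 1 needs two steps on average to be absorbed and state 0 needs four, which can be checked by hand from t₁ = 1 + 0.5·t₁ and t₀ = 1 + 0.5·t₀ + 0.5·t₁.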


CHAPTER 2 LITERATURE REVIEW

2.1 Introduction
This chapter provides a comprehensive literature review concerning the development of traditional cellular manufacturing systems, the evolution to virtual cellular manufacturing systems, some effective optimization algorithms for production planning and scheduling, and the research in dynamic production scheduling. The evolution of manufacturing concepts from cellular manufacturing to virtual cellular manufacturing is presented first to highlight the differences between these two effective manufacturing concepts. An approach-based classification system for the existing cell formation methods is also introduced, and the advantages and disadvantages of using these approaches to form manufacturing cells are discussed. Second, the historical background of some effective optimization techniques, including particle swarm optimization, ant colony optimization and constraint programming, is presented. In subsequent chapters of this thesis, a hybrid algorithm based on the best aspects of these three techniques will be established to optimally solve the cell formation and production scheduling problems of VCMSs. Finally, an overall review of dynamic production scheduling is presented, and several effective rescheduling strategies are introduced in detail.

2.2 Cellular Manufacturing Systems
Small batch manufacturing is usually carried out in a job shop environment. However, a job shop may not be the best choice for stable product patterns, as it will lead to high setup frequency, increased setup cost, and reduced productivity. Great effort has therefore been made to improve the productivity of batch production systems. Among the proposed approaches, “group technology” has drawn significant attention. This is a manufacturing philosophy that identifies similar parts and groups them into families in order to take advantage of their similarities in design and manufacturing functions. “Cellular manufacturing” is a successful application of group technology, and has long been considered an efficient methodology for executing batch production manufacturing (Greene and Sadowski 1984, Wemmerlov and Hyer 1989).
The major characteristic of cellular manufacturing systems is that the parts which undergo similar operations are grouped to form a part family, and the workstations that produce those parts are grouped together to form a manufacturing cell, enabling those parts to be manufactured within the same cell. Due to the similarity of parts and the proximity of workstations in a manufacturing cell, cellular manufacturing systems boast many advantages, including easier scheduling tasks, simplified material flow management, reduced setup time, low work-in-process inventory, and reduced flow time (Greene and Cleary 1985).
The most important consideration in running an efficient cellular manufacturing system, therefore, is to properly determine the taxonomies of part families and manufacturing cells, and then allocate part families to manufacturing cells or vice versa. These are very complicated tasks. Due to this complexity, various part-machine grouping approaches have been reported in the relevant literature. The prevailing objective criteria for forming part families and machine cells in these approaches include the minimization of manufacturing costs and setup time, and the maximization of machine utilization and routing flexibility. In general, the most common part-machine grouping approaches can be divided into seven categories: (a) descriptive procedures; (b) production flow analysis; (c) graph partitioning; (d) similarity coefficient based approaches; (e) mathematical programming; (f) artificial intelligence; and (g) other heuristics. These seven approaches are introduced below.

2.2.1 Descriptive procedures
The first approach for grouping parts and machines is descriptive procedures. Descriptive procedures are methods of forming part families and machine groups according to their characteristics. Generally, descriptive procedures can be further divided into three sub-categories: part families identification (PFI), machine groups identification (MGI), and part families/machine grouping (PF/MG) (Wemmerlov and Hyer 1989).


The PFI procedure first identifies the part families and then allocates machines to them. PFI methods can be sub-classified as those based on informal systems and those based on formal coding and classification systems (Burbidge 1963). The approaches based on informal systems identify part families and machine cells by using simple rules of thumb, visual examination or other criteria. These techniques are easy to implement and can quickly generate solutions. However, they are not always useful in solving large-scale problems because they can lead to hasty assumptions and, therefore, inferior solution quality. The approaches based on formal coding and classification systems, on the other hand, group parts together based on a number of attributes, such as shape, dimension, material composition, and operations requirements. A universal system of coding and classification is not practical because the development of a formal coding and classification system is usually expensive and time-consuming.
The MGI procedure, in contrast with PFI, first groups machines to form manufacturing cells based on part routings, and then allocates parts to these machine groups (Damodaran et al. 1992). The PF/MG approaches identify part families and machine groups simultaneously. Burbidge (1963) proposed the earliest PF/MG descriptive approach for the cell formation problem based on the production flow of parts. El-Essawy et al. (1972) developed another PF/MG approach based on the component flow of parts. Both methods emphasize the importance of local factors such as manufacturing processes, which cannot easily be explicitly formulated.
These descriptive procedures share a common trait: they determine the formation of part families and machine groups based on a subjective evaluation. As the performance of these approaches is not easily quantifiable, their practical application is limited.

2.2.2 Production flow analysis
In cellular manufacturing systems, the parts that undergo similar manufacturing operations are grouped together to form a part family. It is for this purpose that production flow analysis, i.e., the attempt to group parts according to their manufacturing processes, has been widely used across the manufacturing industry.
Production flow analysis consists of three successive steps (Chu and Tsai 1990). The first step is to utilize route sheets to record the relationships between parts and the associated machines. Based on these relationships, the parts requiring identical manufacturing operations are sorted out. Second, a part-machine matrix (also called an “incidence matrix” by some researchers) is established by analyzing the manufacturing sequence and workload requirements for manufacturing these parts. A part-machine matrix is a zero-one (0-1) binary matrix summarizing the relationships between the parts and machines, where the rows and columns indicate the parts and machines respectively. In the matrix, the entry in the ith row and jth column, a_ij, is equal to one if machine j is involved in the manufacturing process of part i; otherwise, it is equal to zero. Another type of matrix known as the “production flow analysis chart” has also been proposed, wherein the manufacturing sequence and workload are also taken into consideration, and the entries are not limited to zero or one. Third, part families and machine cells are determined by rearranging the rows and columns of the part-machine matrix or production flow analysis chart until diagonal blocks are obtained.
As the rearrangement process usually involves grouping the parts and machines into a number of clusters, this approach is also called “cluster analysis”. The most popular cluster analysis approaches include array-based clustering techniques, hierarchical clustering techniques, and non-hierarchical clustering techniques. In array-based clustering techniques, a machine-component matrix is developed to represent the requirements of components on machines. The commonly used array-based clustering techniques include the Bond Energy Algorithm (McCormick et al. 1972, Gongaware and Ham 1981), Direct Clustering Algorithm (Chan and Milner 1982), Rank Order Clustering (King 1980a, 1980b, King and Nakornchai 1982), Modified Rank Order Clustering (Chandrasekharan and Rajagopalan 1986), Occupancy Value Method (Khator and Irani 1987) and Hamiltonian Path Heuristic (Askin et al. 1991).
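As an illustration of array-based clustering, the sketch below implements the basic idea behind King's Rank Order Clustering on a small, made-up incidence matrix; the published algorithm and its variants differ in details such as tie-breaking:

```python
# Sketch of Rank Order Clustering (ROC) on a part-machine incidence matrix:
# rows and columns are alternately sorted by the decimal value of their
# binary patterns until the ordering stabilizes, which tends to pull the
# 1-entries into diagonal blocks. The matrix below is illustrative.

def rank_order_clustering(matrix):
    rows = list(range(len(matrix)))
    cols = list(range(len(matrix[0])))
    while True:
        # Sort rows by binary weight of their entries (leftmost column = MSB).
        new_rows = sorted(rows, key=lambda r: sum(
            matrix[r][c] << (len(cols) - 1 - k) for k, c in enumerate(cols)),
            reverse=True)
        # Sort columns by binary weight (topmost row = MSB).
        new_cols = sorted(cols, key=lambda c: sum(
            matrix[r][c] << (len(new_rows) - 1 - k) for k, r in enumerate(new_rows)),
            reverse=True)
        if new_rows == rows and new_cols == cols:
            return [[matrix[r][c] for c in cols] for r in rows]
        rows, cols = new_rows, new_cols

parts_x_machines = [
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
]
for row in rank_order_clustering(parts_x_machines):
    print(row)
# → [1, 1, 0, 0]
#   [1, 1, 0, 0]
#   [0, 0, 1, 1]
#   [0, 0, 1, 1]
```

On this toy matrix the procedure converges in two passes and exposes two part families (parts {0, 2} and {1, 3}) with their corresponding machine cells as diagonal blocks.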


In hierarchical clustering techniques, the data in the part-machine matrix are separated into a number of broad cells, each of which is then further divided into smaller groups until terminal groupings are obtained. Here “terminal” means that the groups cannot be divided any further. Two effective hierarchical clustering techniques are the Average Linkage Algorithm (Gupta and Seifoddini 1990) and the Set Merging Algorithm (Vakharia and Wemmerlov 1995). In non-hierarchical clustering techniques, the number of clusters is decided first. Following that, an iterative process is conducted from an initial partition of the data set or some preliminary seed points until the termination condition is reached (Chandrasekharan and Rajagopalan 1987, Lemoine and Mutel 1983, Srinivasan and Narendran 1991).
All of the production flow analysis procedures focus on manipulating the part-machine matrix in some way and do not take factors such as production volume and processing rate into consideration. These approaches are thus highly subjective, and the quality of the obtained solutions is highly dependent upon the initial solutions and the techniques used.

2.2.3 Graph partitioning approaches
In graph partitioning approaches, each cell formation problem is represented as a machine-machine graph or a part-machine graph, wherein the vertices denote the machines or parts, and the arcs linking these vertices denote the processing of parts. The purpose of the graph partitioning approaches is to generate disconnected sub-graphs from the part-machine graph or machine-machine graph so as to determine the formation of part families and manufacturing cells.
Many scholars have used graph partitioning approaches to study cell formation problems. Kernighan and Lin (1970) developed a two-stage graph partitioning algorithm, in which parts are allocated to specific machines in the first stage, and machines are grouped into manufacturing cells in the second stage. Faber and Carter (1986) proposed a graph theoretic algorithm to form part families and manufacturing cells by transforming the machine similarity matrix into a cluster network. Kumar et al. (1986) presented a zero-one (0-1) quadratic programming model with linear constraints to study the parts grouping problem by optimizing the production flow between machines in each sub-graph. Two algorithms were also proposed in their research to determine the minimum number or minimal manufacturing cost of subcontractible parts while achieving disaggregation. Askin and Chiu (1990) developed a cost-based mathematical model and a heuristic approach, based on the bipartite graph method (Kusiak and Chow 1987), to determine the cell formation. Their model considers additional realistic factors such as machine setup cost, investment holding cost and machine depreciation. Vohra et al. (1990) developed a network-based algorithm to minimize the processing time necessary outside of the dedicated part cell. Wu and Salvendy (1993) also developed a network model to divide the machine-machine graph into cells with consideration of operation sequences. In their research, two effective algorithms were proposed to divide the network by locating its minimum cut sets.
The major deficiency of the graph partitioning approaches is that they focus on a single criterion, while part-machine grouping problems are typically multi-criteria problems.
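The flavor of these methods can be conveyed by a greedy bipartition of a machine-machine graph that repeatedly swaps the vertex pair that most reduces the cut weight (a drastic simplification of the Kernighan and Lin idea, shown only as a sketch). The flow weights below are illustrative:

```python
# Greedy bipartition sketch for a machine-machine graph: starting from an
# initial split, swap the (i, j) pair across the cut that most reduces the
# total inter-cell flow, until no swap helps. Weights are illustrative.

def cut_weight(w, part_a):
    """Total weight of edges crossing the bipartition (part_a, rest)."""
    a = set(part_a)
    return sum(w[i][j] for i in range(len(w)) for j in range(i + 1, len(w))
               if (i in a) != (j in a))

def improve_partition(w, part_a):
    n = len(w)
    a = set(part_a)
    while True:
        base = cut_weight(w, a)
        best_gain, best_swap = 0, None
        for i in a:
            for j in set(range(n)) - a:
                trial = (a - {i}) | {j}
                gain = base - cut_weight(w, trial)
                if gain > best_gain:
                    best_gain, best_swap = gain, (i, j)
        if best_swap is None:
            return sorted(a)  # local optimum: no swap reduces the cut
        i, j = best_swap
        a = (a - {i}) | {j}

# Machines 0 and 1 interact heavily, as do 2 and 3.
w = [[0, 9, 1, 0],
     [9, 0, 0, 1],
     [1, 0, 0, 9],
     [0, 1, 9, 0]]
print(improve_partition(w, {0, 2}))  # → [2, 3]
```

Starting from the poor split {0, 2} vs {1, 3} (cut weight 18), one swap recovers the natural cells {0, 1} and {2, 3} (cut weight 2). Like any local search, this sketch can stop at a local optimum, which is exactly the single-criterion limitation noted above.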

2.2.4 Similarity coefficient based approaches
The “similarity coefficient” concept has long been used to solve cell formation problems. The principle of similarity coefficient based approaches is to form a set of mutually independent machine cells, each of which is capable of fully manufacturing the part families assigned to it. These approaches are generally more flexible than the aforementioned approaches in incorporating manufacturing data into the process of forming machine cells.
The similarity coefficient was first proposed by McAuley (1972). According to the original definition, a similarity coefficient measures the likeness of two parts by comparing their machine requirements, production routings, etc. The value of the similarity coefficient lies in the range [0, 1]: the coefficient is equal to one if the two parts are highly similar, and zero if the two parts are completely dissimilar. Once the coefficients between all pairs of parts are calculated, part families and machine cells can be formed by grouping those parts whose similarity coefficients exceed predefined thresholds. Some variants of the similarity coefficient have been proposed since the original concept was introduced, such as “single linkage cluster analysis” (McAuley 1972) and the “average linkage method” (Seifoddini and Wolfe 1986). Gupta and Seifoddini (1990) also proposed a more advanced similarity coefficient based approach to form part families and machine cells, in which the production data (including production volume, production routing, and processing time) are taken into consideration in the early stages of cell formation decisions.
Many recent researchers have incorporated similarity coefficients into the mathematical programming approach. Generally, similarity coefficients are embedded in the objective functions of the mathematical models, and mathematical programming techniques are used to form the part families and machine cells, optimizing the similarities between parts in the same machine cell. Kusiak (1988) developed a p-median model taking the selection of alternative process plans into consideration; the objective of his model is to optimize the sum of the similarity coefficients. Kasilingam (1989) combined similarity coefficients with integer programming models to form part families and machine cells, and proposed a Lagrangian relaxation approach to deal with the system constraints. Similarity coefficients are also commonly combined with graph-theoretic approaches. Rajagopalan and Batra (1975) used Jaccard's similarity to define the weights of the arcs in part-machine graphs. Faber and Carter (1986) presented a novel similarity coefficient capable of considering a large number of operational requirements when the clusters are formed. However, the approach of combining similarity coefficients with graph theory also has a serious deficiency: choosing an appropriate measure of similarity between parts and machines is difficult.
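As a small illustration, the sketch below computes a Jaccard-type similarity coefficient from a part-machine incidence matrix and then groups parts whose pairwise similarity meets a threshold; the matrix and the 0.5 threshold are invented for illustration:

```python
# Sketch of a Jaccard-type similarity coefficient computed from a
# part-machine incidence matrix, followed by simple threshold grouping.
# The incidence matrix and the 0.5 threshold are illustrative assumptions.

def jaccard(a, b):
    """Similarity of two 0-1 machine-requirement vectors."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 0.0

def group_by_threshold(matrix, threshold):
    """Union parts whose pairwise similarity reaches the threshold."""
    parent = list(range(len(matrix)))

    def find(i):
        while parent[i] != i:
            i = parent[i]
        return i

    for i in range(len(matrix)):
        for j in range(i + 1, len(matrix)):
            if jaccard(matrix[i], matrix[j]) >= threshold:
                parent[find(j)] = find(i)

    families = {}
    for p in range(len(matrix)):
        families.setdefault(find(p), []).append(p)
    return sorted(families.values())

parts = [
    [1, 1, 0, 0],   # part 0
    [1, 1, 1, 0],   # part 1
    [0, 0, 1, 1],   # part 2
    [0, 0, 0, 1],   # part 3
]
print(jaccard(parts[0], parts[1]))     # → 0.6666666666666666
print(group_by_threshold(parts, 0.5))  # → [[0, 1], [2, 3]]
```

Parts 0 and 1 share two of their three required machines (coefficient 2/3), so with a 0.5 threshold they form one family while parts 2 and 3 form the other.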

2.2.5 Mathematical programming approaches
Most of the aforementioned approaches do not consider the production data and cost elements involved in implementing the cell design, so the obtained formation of part families and machine cells may be inferior. Mathematical programming approaches determine the formation of part families and machine cells by using the techniques of linear programming or goal programming. These approaches take a great variety of objectives into consideration, such as minimizing part movement between cells, maximizing machine utilization and minimizing machine setup times. As they can consider various practical constraints, mathematical programming approaches are capable of developing more realistic models.
Kusiak (1987) was the first to adopt mathematical programming to study part family formation problems. In his research, a p-median model with the objective of maximizing the similarity between parts was developed to identify part families. Choobineh (1988) presented a two-stage approach to form part families and machine cells: in the first stage, a sequential approach is used to determine the formation of part families; in the second stage, a mathematical model is formulated to form machine cells with the goal of minimizing the total operating cost. Srinivasan et al. (1990) developed an assignment model to solve cell formation problems with the goal of maximizing the similarity between parts. In their study, a sequential procedure was provided to first form machine cells, and then to identify part families. Rajamani et al. (1990) developed an integer programming model to form part families and machine cells simultaneously. Their model incorporates several realistic issues such as alternative process plans, material handling cost, and machine investment cost. Askin and Chiu (1990) presented another mathematical programming model taking more realistic factors into consideration, including investment holding cost, machine setup cost, and machine depreciation. Due to the complexity of their mathematical model, a heuristic algorithm was proposed to generate near-optimal solutions.
The advantage of mathematical programming approaches is that they are capable of considering any realistic requirements in the mathematical model, either as a part of the objective function, or as a group of constraints, so long as the considered issues can be quantified. However, this advantage is a double-edged sword. As the number of constraints considered in the mathematical models increases, it becomes harder to obtain optimal solutions within an acceptable computational time window due to the complexity.
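The p-median idea underlying several of the models above can be sketched by brute force for a tiny instance: choose p "median" parts and assign every part to its most similar median so that total similarity is maximized. The similarity matrix below is illustrative, and real formulations solve this with integer programming rather than enumeration:

```python
# Brute-force sketch of the p-median idea: pick p median parts and assign
# each part to its most similar median, maximizing total similarity.
# Exhaustive search is only feasible for tiny instances; the similarity
# matrix S is an illustrative assumption (dyadic values, symmetric).
from itertools import combinations

def p_median(similarity, p):
    n = len(similarity)
    best_score, best_medians = -1.0, None
    for medians in combinations(range(n), p):
        # Each part joins the family of the median it is most similar to.
        score = sum(max(similarity[i][m] for m in medians) for i in range(n))
        if score > best_score:
            best_score, best_medians = score, medians
    return best_medians, best_score

S = [
    [1.0, 0.75, 0.25, 0.0],
    [0.75, 1.0, 0.25, 0.0],
    [0.25, 0.25, 1.0, 0.75],
    [0.0, 0.0, 0.75, 1.0],
]
medians, score = p_median(S, 2)
print(medians, score)  # → (0, 2) 3.5
```

With p = 2 the search selects one median in each natural family ({0, 1} and {2, 3}); an integer programming formulation reaches the same assignment through binary decision variables instead of enumeration.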

2.2.6 Artificial intelligence approaches

Artificial intelligence approaches, including the fuzzy clustering approach, neural networks, knowledge based expert systems and pattern recognition methods, have been successfully used as optimization tools to solve cell formation problems in recent decades. Most early studies on cell formation problems were conducted under several assumptions: (1) that all of the information for cell formation, such as product demand, processing time, and operating costs, is known and deterministic; (2) that the objective function and system constraints are precisely formulated; and (3) that each part/machine must belong to one and only one part family/machine cell. Some scholars have made great efforts to relax these assumptions by using artificial intelligence approaches. Chu and Hayya (1991) proposed a fuzzy c-means clustering method to solve cell formation problems in which a part (machine) could belong to more than one part family (machine cell). Chu and Tsai (1993) adopted the fuzzy mathematical programming approach to study cell formation problems with uncertainties in the objective functions and system constraints of the mathematical models. Xu and Wang (1989) utilized the fuzzy equivalence approach to find the proper number of part families, and then applied fuzzy classification to improve the results generated from the fuzzy equivalence analysis.
The aforementioned conventional approaches share a common deficiency: the system has to adjust the problem's entire data set whenever new parts or machines arrive in the system. Neural networks are able to overcome this limitation. The major advantage of neural networks lies in their ability to handle large data sets and classify new parts or machines into existing part families and/or machine cells without reconsidering the entire data set.
In addition, neural networks have a parallel processing capability which enables them to locate relatively good solutions within a short computational period. Popular neural network approaches include “competitive learning” (Malave and Ramachandran 1991, Venugopal and Narendran 1992, 1994, Malakooti and Yang 1995), “interactive activation and competition learning” (Currie 1992), the “self-organizing feature map” (Lee et al. 1992, Kulkarni and Kiang 1995), “back propagation” (Jamal 1993), “adaptive resonance theory” (Kaparthi and Suresh 1993, Dagli and Huggahalli 1995), “fuzzy adaptive resonance theory” (Burke and Kamal 1995, Suresh et al. 1995), the “Hopfield network” (Jamal 1993) and “stochastic learning” (Arizono et al. 1995).
The “knowledge based expert system” is another artificial intelligence based technique that provides a promising tool for solving cell formation problems. However, only a few research papers have analyzed this approach in the field of cell formation. Kusiak (1988) developed a knowledge based expert system considering realistic factors such as machine capacity and technological requirements during the process of cell formation.
Pattern recognition has also been applied to cell formation problems. Wu et al. (1986) used a syntactic pattern recognition approach to represent complex patterns in terms of simpler sub-patterns so as to solve cell formation problems. ElMaraghy and Gu (1988) presented a manufacturing system which combined knowledge rules and syntactic pattern recognition technologies in the process of part family formation. Singh and Qi (1991) introduced a concept called the “multi-dimensional similarity coefficient”, and developed a syntactic pattern recognition based algorithm to form natural part families.

2.2.7 Other heuristic approaches
The computation time required for locating the global optimal solution is prohibitive when the problem size is large because complex cell formation problems are usually NP-hard. In practice, it is more realistic to obtain a relatively good solution within an acceptable computation time window rather than insist upon the global optimum. For this reason, scholars have applied various heuristic algorithms in the field of cell formation. For instance, Co and Araar (1988) presented a three-stage heuristic to solve cell formation problems which simultaneously considers a suitable number of cells. Harhalakis et al. (1990) developed a heuristic algorithm to group parts and machines into cells with the objective of minimizing inter-cell flow. Mukattash et al. (2002) proposed three heuristic methods to generate more flexible manufacturing cells. In their research, each of the three heuristics has its own function: the first is used to deal with part assignment in the presence of processing times, the second copes with mixing part assignments with alternative manufacturing plans, and the last solves part assignment problems in systems with multiple machine types.

2.3 Virtual Cellular Manufacturing Systems

2.3.1 The evolution to virtual cellular manufacturing systems

Although cellular manufacturing is an efficient means of improving the productivity of batch production systems, some deficiencies of this design render it less effective in the modern manufacturing environment. First, cellular manufacturing may lead to low routing flexibility for products and low machine utilization due to the physical grouping of workstations into manufacturing cells. The second and more serious issue in implementing cellular manufacturing systems is that the product pattern has to remain relatively stable over time, which is an unrealistic assumption. The efficiency of cellular manufacturing systems may decline drastically when the product pattern changes, and the system configuration may even need to be rearranged in order to respond to severe changes. The modern consumer goods market is based on ever-increasing product variety and, concurrently, product life cycles are becoming much shorter. These trends conflict strongly with the nature of traditional cellular manufacturing. In order to revitalize cellular manufacturing techniques, an alternative approach to achieving the benefits associated with cellular manufacturing without loss of pooling synergy is to employ a process layout with family-based production control and scheduling schemes. A new concept called “virtual cellular manufacturing” was proposed based on this idea (McLean et al. 1982, Drolet 1989, Kannan and Ghosh 1996a, Mohamed 1996, Kochikar and Narendran 1998). Virtual cellular manufacturing, which also makes use of group technology, was designed to maintain, simultaneously, the setup and material handling efficiencies of a traditional cellular manufacturing system and the routing flexibility of a job shop manufacturing environment.
A virtual manufacturing cell is defined as a temporary grouping of production resources formed to maintain the benefits associated with cellular manufacturing. It is a logical grouping of production resources that are not necessarily grouped in physical proximity. Thus, the major difference between VCMSs and CMSs is that the workstations in VCMSs adopt a functional layout, i.e., they are spread over the production floor in such a way that each type of workstation is located as closely as possible to the other workstation types. In keeping with the intent of virtual cellular manufacturing, a virtual manufacturing cell exists as data files in a virtual cell controller instead of being identified as a fixed physical grouping of machines (Drolet 1989). When a job arrives in the system, the virtual cell controller takes command of the workstations to form a virtual manufacturing cell to produce this job. The formation of a manufacturing cell for a job depends on the job characteristics and the current system status. The controller also superintends the job processing from start to finish. In contrast to cellular manufacturing, the workstations in a virtual manufacturing cell are not locked up on the formation of the cell, but are free to be allocated to other jobs as long as there is remaining production capacity. When the job has been completed, the virtual manufacturing cell dissolves and the released workstations become available for other incoming jobs. That is, a workstation may be part of one virtual manufacturing cell at one moment, and part of another virtual manufacturing cell later. Hence, the number and formation of virtual cells are dynamically modified during the manufacturing process. As workstations can be shared among virtual manufacturing cells, VCMS applications are expected to yield higher productivity than traditional cellular manufacturing systems. Furthermore, VCMSs should demonstrate greater ability to deal with product pattern uncertainties due to the physical discontinuity and the dynamic configuration of virtual manufacturing cells.
Most VCMS research to date investigates the possibility and feasibility of applying this design to real manufacturing situations, but relatively little effort has been dedicated to designing a VCMS production scheduling framework. Drolet (1989) developed the first mathematical model used to formulate production schedules for VCMSs, and also proposed a well-known opportunistic scheduling strategy to solve the scheduling problem. In his research, a linear mathematical model with integer variables was proposed to provide an optimal VCMS production schedule. By relaxing the model's integer requirements, this complex production scheduling problem can easily be solved with any traditional linear programming technique, after which all of the fractional values are rounded off to meet the integer requirements. However, although this relaxation approach can significantly reduce computational time requirements, the rounding-off procedure may render the resulting production schedule suboptimal or even infeasible. Pioneering VCMS design work was conducted in the early 1990s with the aim of maintaining high performance levels in productivity and flexibility in the face of changes (Drolet et al. 1989, 1990, 1991). This research showed that VCMS production efficiency increases as more workstations are installed on the production floor: as the number of workstations increases, there are more options for selecting alternative production routes to manufacture the parts. Due to the physical discontinuity and the dynamic configuration of virtual manufacturing cells, uncertainties in product pattern will not significantly affect the efficiency and effectiveness of virtual cellular manufacturing systems. Virtual cellular manufacturing systems have been widely compared with a great variety of manufacturing systems such as process layouts, flow shops, and traditional cellular manufacturing systems. Kannan and Ghosh (1996b) compared the performance of VCMSs, CMSs, and process layouts through simulation. In their simulation experiments, the manufacturing system comprised 40 parts and 30 machines, the cellular manufacturing layout included five manufacturing cells, the process layout consisted of eight functional departments, and five different system configurations for VCMSs were considered. Primary performance measures included mean flow time, mean tardiness, and the mean and standard deviation of work-in-process. The simulation results demonstrated that VCMSs outperform both traditional CMSs and the process layout under a wide range of conditions.
When the uncertainty level in product pattern is low, the cellular advantages of VCMSs are utilized; when the uncertainty level is high, the VCM's ability to reconfigure virtual cells quickly is exploited. Furthermore, VCMSs outperform CMSs more significantly when setup time is low and demand uncertainty is high, owing to the VCM's ability to exploit part similarities while maintaining high flexibility. Later, Vakharia et al. (1999) compared the performance of virtual cells and multi-stage flow shops by using an analytical model. They found that the major factors affecting the comparative performance of virtual cells and multi-stage flow shops include the number of processing stages, the ratio of setup time to run time per batch, and the number of machines per stage. By comparing VCMSs to traditional CMSs, Subash Babu et al. (2000) divided CM benefits into three categories: (1) human-related benefits from empowerment in smaller cells; (2) improved flow and control in cells due to dealing with a smaller number of parts and machines; and (3) improved operational efficiency due to similarities, in terms of reduced setup, smaller batch sizes, and increased quality, productivity, and agility. Their comparison results demonstrated that VCMSs may not offer benefits in the first category, while enjoying great advantages in the latter two categories. In order to further improve the production efficiency of VCMSs in dynamic and turbulent manufacturing environments, another manufacturing concept called “dynamic cellular manufacturing” (DCM) was proposed (Rheault et al. 1996). In a dynamic cellular manufacturing system, the physical configuration of cells may change over time, and machines are allowed to be relocated on the production floor so as to minimize the total marginal cost of material handling and the cost of reconfiguring the cells. Marcoux et al. (1997) claimed that dynamic cellular manufacturing systems outperform traditional cellular manufacturing systems. They developed an integer programming model to dynamically determine the formation of manufacturing cells so as to respond to dynamic changes in product pattern. Ko and Egbelu (2000) demonstrated the superiority of dynamic cellular manufacturing systems over traditional cellular manufacturing systems by comparing the impact of variations in product pattern on shop performance. Their research mainly considered two performance measures: total setup time and total material handling distance. Balakrishnan and Cheng (2005) adopted the DCM concept to study manufacturing cell formation problems in multi-period planning horizons with changing product demand. In their problem, the product demand was different but deterministic in each period.
They proposed a two-stage approach based on the generalized machine assignment problem and dynamic programming to determine the formation of manufacturing cells in each period. In the first stage, all possible cellular configurations in each period are evaluated in order to obtain an optimal solution for the machine assignment problem. The second stage considers all the periods together and generates the best multi-period plan by using the dynamic programming technique. Their approach showed high flexibility in incorporating existing cell formation procedures, and provided great insight into cellular manufacturing under dynamic manufacturing environments. Although many scholars have conducted research in the field of dynamic cellular manufacturing, it is still difficult to apply this manufacturing concept in practice due to the complexity of formulating the model and optimizing the objective function. In short, dynamic cellular manufacturing is still only in the conceptual stage of development.

The production scheduling problems of VCMSs have attracted more and more attention in recent years. Ratchev (2001) proposed a four-step iterative and concurrent approach for determining the formation of virtual manufacturing cells. The first step involves the identification of component requirements and processing alternatives, the second step defines the boundaries of virtual cell capabilities, the third step allocates the various production resources, and the final step evaluates system performance. Saad et al. (2002) investigated the possibility of using virtual cells as a reconfiguration strategy. In their research, an integrated framework was proposed to re-form the manufacturing cells, and a tabu search algorithm was adopted to solve the cellular reconfiguration problem. Later, Ko and Egbelu (2003) developed an algorithm based on part routings to study the problem of machine cell formation. In their approach, the number of cells is determined based on the parts' attributes. Their research also introduced the concept of sharing manufacturing cells between multiple part families. Baykasoglu (2003) used a simulated annealing algorithm to develop a distributed layout for virtual manufacturing cells. Mak et al. (2005) proposed an age-based genetic algorithm to solve VCMS production scheduling problems with the objective of minimizing the total material and component travelling distance incurred in manufacturing the jobs. In their mathematical model, the planning horizon was divided into a number of equal time slices and each job was assigned a unique production route, so as to simplify material flow management. Mak et al. (2010a) presented a constraint network based approach to solve the production scheduling problems of multi-period VCMSs. Some effective hybrid algorithms have also been widely applied in this field (Mak et al. 2010b, Mak and Ma 2011, Mak et al. 2011). Rezazadeh et al. (2011) developed a new mathematical model for describing virtual cell formation problems in a multi-period planning horizon where the product mix and demand in each period were different but deterministic. Their mathematical model considered a great variety of realistic constraints, such as operation sequence, alternative process plans, machine time-capacity, maximal virtual cell size, and workload balance. This model was transformed into a linear programming model and then solved with particle swarm optimization. All of this research demonstrates that virtual cellular manufacturing possesses great potential for improving production efficiency. Nonetheless, although more and more fine research work is emerging in the VCMS field, practical applications are still rare. More effort should be devoted to promoting the practical development of VCMS applications.

2.3.2 Human issues in cellular manufacturing environments

Although it is important to consider human issues in designing a manufacturing process (Norman et al. 2002), researchers and practitioners usually pay most of their attention to technical issues (e.g., cell formation and design) when designing cellular manufacturing systems. Bidanda et al. (2005) indicated that, although cell-based manufacturing enhances productivity and forms a central theme of manufacturing research, the importance of human issues has never received sufficient attention. A lack of understanding of the human side of cellular manufacturing can significantly reduce the benefits associated with this manufacturing mode. Thus, more research should be conducted to include human issues in the design of cellular manufacturing systems in order to better maintain competitive advantages. In order to maintain the production efficiency of (virtual) cellular manufacturing systems, it is necessary to understand the four basic elements of (virtual) manufacturing cells, namely workforce, equipment, operating rules, and material (Vakharia and Selim, 1994). Of these four basic elements, the most difficult one to control is the workforce (also called the human element). Indeed, it is crucial to focus on both technical issues and human issues for the successful implementation of manufacturing cells. Bidanda et al. (2005) classified human issues in cellular manufacturing into eight broad areas: worker assignment strategies, skill identification, training, communication, autonomy, reward/compensation systems, teamwork, and conflict management. In particular, the areas related to worker assignment strategies and training have attracted a lot of attention in both theory and practice, because they are important and can easily be quantified and evaluated. The “worker assignment strategies” issue is of great importance in assigning the workforce to tasks within a cell or between cells. The common considerations in determining a suitable worker assignment strategy include balancing and synchronization. Balancing refers to the technique of equalizing the workload of all operations so as to minimize idle time, and synchronization refers to the method of effectively timing the material flow between cells and operations (Bidanda et al. 2005). Effective balancing and synchronization can reduce non-productive time, work-in-process, and throughput time. Many scholars have conducted research on the assignment of workers to tasks in order to optimize various system performance measures. Suer (1996) proposed a two-stage hierarchical methodology for worker assignment and cell loading in worker-intensive manufacturing cells. The major consideration of this research was to determine the number of workers in each manufacturing cell and assign workers to specific operations, so as to maximize worker productivity. Molleman and Slomp (1999) proposed a linear goal-programming approach to assign workers to specific tasks with the objective of minimizing the makespan and production time. Campbell (1999) presented a non-linear mathematical model, based on the worker capability concept, to assign cross-trained workers in a multi-department service environment. In his approach, if a worker is fully qualified at a station, the worker capability is one; if a worker cannot perform at a station, the worker capability is zero. Warner et al. (1997) analyzed the relevant factors (covering both technology and human interaction) for assigning workers to different manufacturing cells.
In their research, a matrix was utilized to record which workers have the ability required to perform the tasks in each manufacturing cell. Following their research, Norman et al. (2002) incorporated these factors to develop a mixed integer programming model for formulating the assignment of workers to operations in a manufacturing cell; in their research, each worker can be trained to learn new abilities. Slomp et al. (2005) presented a virtual cellular design framework employing an interactive goal-programming model to first form virtual cells and then assign workers to these virtual cells. Two major objectives were considered in this study, namely the efficient use of capacity and the formation of independent virtual cells. The “skill identification” issue refers to the process of identifying worker skills. In cellular manufacturing environments, employees work in teams, and thus greater emphasis is placed on human skills such as communication, teamwork, and conflict management. These skills may become as important as technical skills for maintaining a high level of system flexibility. In addition, skill identification can help determine what types of workers to hire, provide a basis for determining training schemes, and facilitate the assignment of workers to cells. “Training” is also an important human issue for developing a workforce capable of handling the increased responsibility found in cellular manufacturing environments. A suitable worker training scheme can enhance cell flexibility, improve worker motivation, and relax constraints on worker assignments. Worker training is a two-step procedure. First, the current skill levels of all workers are assessed so that the company can determine what type and amount of training each worker needs. Then, training programs are carried out based on current worker abilities and the skills needed to perform specific tasks. In many manufacturing environments, cross-training is often adopted to achieve multi-skilling, increase flexibility, create a shared sense of responsibility, and balance the workload among cross-trained workers. Various scholars have investigated the impact of training on system performance. Molleman and Slomp (1999) reported that the distribution of skills among workers and the degree of worker multifunctionality significantly affect system performance. Hopp and Van Oyen (2000) analyzed the factors that affect the type and effectiveness of worker cross-training.
Their research showed that these factors include labor interaction, variability, resource utilization, and transition efficiency, and that cross-trained workers are able to achieve higher performance than specialized workers. Askin and Huang (1997) proposed an integer mathematical model for minimizing the total worker training cost of transforming a functional layout into a cellular manufacturing system. They included cross-training in their model, but did not consider the ability of workers to rotate among the tasks for which they were cross-trained. Later, they investigated cell formation problems with worker assignment and worker training (Askin and Huang 2001), developing a mixed integer goal-programming model to describe the worker assignment and training process. Park (1991) used simulation to evaluate the effect of five different cross-training levels on a hypothetical dual resource constrained job shop. His research demonstrated that systems can achieve better performance through cross-training, and that the least amount of cross-training (e.g., each worker being able to work at two workstations) is the best alternative, as the differences between the levels of additional cross-training are not significant. Brusco and Johns (1998) obtained similar results to Park (1991): the greatest cost benefit from cross-training was obtained by cross-training on only one additional skill. McDonald et al. (2009) presented a mathematical worker assignment model to assign cross-trained workers to tasks within a manufacturing cell in order to minimize the net present cost and ensure job rotation while meeting customer demand. Their model also determined the training necessary for workers to meet the skill requirements of tasks and customer demand. Most VCMS research to date has mainly focused on the formation of manufacturing cells. Although the simultaneous development of manufacturing cell formation strategies and production schedules, taking human issues into consideration, is important in the design of VCMSs, deriving optimal solutions for such a problem has received relatively little attention in the literature due to its complexity. In this thesis, cost-effective methodologies will be developed to solve the problem of designing optimal VCMSs operating under a multi-product, multi-period environment. Worker training will also be considered in the analysis.

2.4 Optimization Approaches

Many optimization approaches have been adopted for solving various combinatorial optimization problems. The most popular algorithms include particle swarm optimization (PSO), ant colony optimization (ACO), and constraint programming (CP).

2.4.1 Particle Swarm Optimization


Particle swarm optimization, inspired by observations of bird flocking and fish schooling, is a population-based stochastic search algorithm. Unlike optimization techniques which maintain a single solution and improve upon it until an optimal solution is obtained, particle swarm optimization manipulates a swarm of particles during the entire search process. The basic idea of this approach is to locate optimal or near-optimal solutions through cooperation and the sharing of information among particles in the swarm. Each particle, representing a potential solution, has two important characteristics, namely position and velocity. Each particle also has two essential reasoning capabilities: the memory of its own best position, and knowledge of the global or its neighborhood's best position. Particles within the swarm communicate this information to each other and use it to update their own position and velocity. The search process repeats until a pre-defined termination condition is reached. The particle swarm optimization procedure is presented in Figure 2-1:

Initialize parameters
Initialize population
Evaluate
Do {
    Find the personal best
    Find the global best
    Update velocity
    Update position
    Evaluate
} While (termination condition not met)

Figure 2-1. The particle swarm optimization procedure.

1. The development of particle swarm optimization

Based on Heppner and Grenander's research work on particle swarms (Heppner and Grenander 1990), Kennedy and Eberhart developed a powerful optimization method named “particle swarm optimization” by exploiting analogues of social interaction (Kennedy and Eberhart 1995).


The most important factor affecting the particles' search trajectories in PSO is the velocity update. In the original version of PSO, the new velocity of a particle is determined by combining three parts: its previous velocity from the last iteration, the attraction of the personal best solution, and the attraction of the best solution found by the particles in its neighborhood. The new position is simply the sum of the previous position and the new velocity. In order to balance the influence of the particle's own knowledge with that of its neighborhood, two coefficients, namely the cognitive scaling coefficient and the social scaling coefficient, are introduced into the velocity updating equation. Particle velocity is usually restricted to a finite range for the purpose of confining the particles within a reasonable search space. Motivated by the aim of better controlling the particles' search scope, Shi and Eberhart (1998) proposed a new PSO version featuring a modified velocity updating equation. In their version, an “inertial weight” parameter is used to control the impact of the previous velocity on the current iteration. The value of the inertial weight is set in the range [0, 1], denoting the portion of the previous velocity added to the new one. Their experimental results showed that PSO with a high inertial weight performs extensive exploration of the search space, while PSO with a low inertial weight performs extensive exploitation of it. Since then, many scholars have studied various strategies for setting the inertial weight. Shi and Eberhart (1998) proposed a dynamic strategy that initially sets a relatively high value and then gradually reduces it to a much lower value; the principle is to emphasize exploration of the search space at the early stage of the search process, and then gradually increase exploitation. Eberhart and Shi (2000) later adopted a fuzzy system to determine the value of the inertial weight and thereby improve PSO performance. After that, they set the inertial weight randomly within the range (0.5, 1) (Eberhart and Shi 2001). Zheng et al. (2003) obtained relatively good search performance by gradually increasing the inertial weight. Due to the mechanism restricting particle velocity to a finite range, particle search trajectories may fail to converge. In order to reduce the effect of this deficiency, Clerc and Kennedy (2002) introduced another parameter called the “constriction coefficient” into the velocity updating equation. In their research, the constriction coefficient was set based on the values of the cognitive scaling coefficient and the social scaling coefficient. Their experimental results showed that constricted particles are guaranteed to converge. In the aforementioned PSO versions, the influences affecting the velocity update of a particle come from two sources: the particle itself and the best particle in its neighborhood. The information of the other neighbors has no effect whatsoever on particle movement. Kennedy and Mendes (2002) proposed a new concept called the “fully informed particle swarm” which revised the particle interaction mechanism; in their approach, any particle in the neighborhood can affect the velocity update of another particle. The most commonly used version of continuous particle swarm optimization can be presented as follows. Assuming that the search space is $n$-dimensional, particle $k$ of the swarm at iteration $t$ can be represented by an $n$-dimensional position vector $X_k^t = (x_{k,1}^t, x_{k,2}^t, \ldots, x_{k,n}^t)$, and the velocity of this particle at this iteration can be represented by the vector $V_k^t = (v_{k,1}^t, v_{k,2}^t, \ldots, v_{k,n}^t)$. The best position found by particle $k$ up to iteration $t$ can be represented as $P_k^t = (p_{k,1}^t, p_{k,2}^t, \ldots, p_{k,n}^t)$, and the best position found so far by the swarm or its neighborhood can be represented as $P_g^t = (p_{g,1}^t, p_{g,2}^t, \ldots, p_{g,n}^t)$. At each iteration, the velocity of a particle and its new position are updated respectively according to Equations (2-1) and (2-2):

vkt ,d1  t vkt ,d  c1r1t ( pkt ,d  xkt ,d )  c2 r2t ( pgt ,d  xkt ,d )

(2-1)

xkt ,d1  xkt ,d  vkt ,d1

(2-2)

where d  {1, 2,..., n} and k  {1, 2,..., K } . Here K denotes the particle size of the swarm.  The variable t represents the inertial weight that controls the impact of the particle’s previous velocity on its current speed. The value of t is usually reduced dynamically to decrease the search area in a gradual fashion, i.e., t  (max  min ) 

2‐22   

tmax  t  min , tmax

where max and min denote the maximum value and the minimum value of t respectively. Meanwhile, tmax is a given integer number denoting the maximum number of iterations. The values of r1t and r2t are random but within in the range of [0, 1]. Furthermore, c1 and c2 represent cognitive scaling coefficient and social scaling coefficient respectively. The value of particle velocity is usually constrained within the range of [vmax , vmax ] to control the excessive roaming of particles outside of the search space. Figure 2-2 illustrates the iteration process of the position and velocity of a particle in a two-dimensional coordinate field. The data makes obvious that the new velocity is affected by three factors: the previous velocity, the personal best position found so far, and the global best position found so far.

Figure 2-2. Concept of modification of a search point in PSO.

Many practical optimization problems (such as production scheduling problems) are set within discrete search spaces. To meet this demand, Kennedy and Eberhart (1997) proposed a discrete version of particle swarm optimization (DPSO). Generally, DPSO differs from the original in the following aspects. First, each particle in DPSO is composed of binary variables. Second, the velocity must be transformed into a change of probability, i.e., the chance of the variable taking the value one. This transformation is usually achieved through the sigmoid function (2-3). Many scholars have also proposed other modifications of particle swarm optimization to make it suitable for problems with discrete search spaces. Mohan and Al-Kazemi (2001) developed several approaches to implement particle swarm optimization in a binary search space. One of them, namely the “regulated discrete particle swarm”, demonstrated good performance on a large number of test problems. Agrafiotis and Cedeno (2002) used the particle locations as the probabilities for selecting features in a pattern-matching task. Pampara et al. (2005) proposed another methodology whereby a small number of coefficients of a trigonometric model were stored in each particle to generate bit strings.

$$s(V_k^t) = \frac{1}{1 + \exp(-V_k^t)} \quad (2\text{-}3)$$
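To make the mapping concrete, one step of the binary PSO of Kennedy and Eberhart (1997) can be sketched as follows: the velocity update mirrors Equation (2-1), while the position update replaces Equation (2-2) with a Bernoulli draw whose success probability is the sigmoid of Equation (2-3). The function names and parameter values are illustrative assumptions, not part of the original formulation.

```python
import math
import random

def sigmoid(v):
    """Equation (2-3): map a velocity component to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-v))

def binary_pso_step(x, v, p_best, g_best, w=0.8, c1=2.0, c2=2.0,
                    v_max=4.0, rng=random):
    """One velocity/position update for a single binary particle.

    Each bit of the new position becomes 1 with probability sigmoid(v)
    and 0 otherwise; clamping the velocity keeps the probability away
    from the extremes 0 and 1, so every bit can still flip.
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        vd = (w * v[d]
              + c1 * rng.random() * (p_best[d] - x[d])
              + c2 * rng.random() * (g_best[d] - x[d]))
        vd = max(-v_max, min(v_max, vd))   # clamp to [-v_max, v_max]
        new_v.append(vd)
        new_x.append(1 if rng.random() < sigmoid(vd) else 0)
    return new_x, new_v

# Usage: each bit is pulled toward agreement with the personal and
# global best bit strings.
x, v = binary_pso_step([0, 1, 0, 1], [0.0] * 4, [1, 1, 0, 0], [1, 0, 0, 1])
print(x, v)
```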

Another important factor in PSO is the population topology. Generally, population topology is based on proximity in the search space. A great variety of population topologies have been studied so far. The global best topology is the one in which the best particle in the entire swarm influences each target particle; PSO with the global best topology only needs to keep the index of the particle with the best objective function value. The local best topology is another method, whereby each particle is assigned a neighborhood according to a certain definition, and the best particle in the neighborhood affects the target particle. The ring lattice is a typical example of the local best topology, where each individual particle is connected with its two adjacent particles. Three particles (the two adjacent particles and the particle itself) constitute the neighborhood of the target particle. Suganthan (1999) reported that the local best topology has a better ability to explore search spaces, while the global best topology converges more quickly. Both the global and local best topologies belong to the category of static topologies, where neighbors and neighborhoods remain the same throughout the iteration process. Kennedy and Mendes (2002) tested numerous aspects of the social-network topology. In their research, exactly 1,343 random graphs were generated and then modified to meet certain criteria. Several topology structures, including the global best topology, the local best topology, and the von Neumann topology, were adopted to measure the performance of particle swarm optimization. Apart from the static topologies, some scholars have also proposed a number of dynamic topologies in which the neighborhood of each particle may change during the search process. Suganthan (1999) suggested that the population topology can start with the ring lattice, and then slowly increase the neighborhood size until the whole swarm is in the same neighborhood by the end of the search process. His research also studied another type of topology, wherein the neighbors were determined by proximity in the search space and the number of particles in the neighborhood was gradually increased during the iteration process. Peram et al. (2003) used a weighted Euclidean distance to determine the elements in a neighborhood. Liang and Suganthan (2005) generated random subpopulations of a certain size and occasionally randomized all of the connections between the particles. Janson and Middendorf (2005) organized all of the particles in a dynamic hierarchy where each particle was affected by its own previous best solution and the particle directly above it in the hierarchy. Clerc (2006) developed a parameter-free particle swarm called TRIBES, in which topographical information such as the size of a neighborhood dynamically changes in response to performance feedback. Every optimization approach has its advantages and disadvantages. The major advantage of particle swarm optimization lies in its rapid convergence speed; that is, PSO can locate relatively good solutions within a short computation time. However, PSO also suffers a severe deficiency: its evolutionary process may stagnate over time as the swarm approaches equilibrium (Shelokar et al. 2007).
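The ring-lattice neighborhood discussed in this section can be expressed with simple index arithmetic. In the sketch below (the helper names are hypothetical), the local best replaces the global best $P_g^t$ in the velocity update of Equation (2-1) for each particle:

```python
def ring_neighborhood(k, swarm_size):
    """Indices in the ring-lattice neighborhood of particle k:
    the particle itself and its two adjacent particles, wrapping around."""
    return [(k - 1) % swarm_size, k, (k + 1) % swarm_size]

def local_best(k, p_val, swarm_size):
    """Index of the particle with the best (lowest) personal-best value
    within k's neighborhood, for use in place of the global best."""
    return min(ring_neighborhood(k, swarm_size), key=lambda j: p_val[j])

# Usage: with the personal-best values below, particle 0's neighborhood
# is {4, 0, 1}, and particle 1 holds the best value among them.
p_val = [3.0, 1.0, 5.0, 2.0, 4.0]
print(ring_neighborhood(0, 5))  # [4, 0, 1]
print(local_best(0, p_val, 5))  # 1
```

Because information spreads only one hop per iteration in the ring, good solutions propagate slowly, which is one intuition behind its stronger exploration compared with the global best topology.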
Hybridization is a promising research area motivated by the desire to combine the complementary advantages of different optimization algorithms while mitigating their weaknesses. A great range of PSO hybridizations have been developed. Some hybrid algorithms combine the techniques of PSO and the genetic algorithm: particles in PSO move through the search space by means of perturbations of their positions, while individuals in a genetic algorithm population breed with each other to produce new generations. Angeline (1998) used a tournament selection mechanism to replace weak particles with those of good quality. Lovbjerg et al. (2001) improved PSO convergence speed by adopting the breeding mechanism of the genetic algorithm. Inspired by the principles of the genetic algorithm, Brits et al. (2002) proposed a hybrid algorithm called "NichePSO", which designates sub-swarm leaders by training the main swarm using the cognition model. Higashi and Iba (2003) combined Gaussian mutation with the velocity and position updating rules to improve overall search performance. Juang (2004) incorporated mutation, crossover, and elitist mechanisms into PSO. In his study, half of the next generation's population came from the pool of enhanced elites, while the other half were generated by applying crossover and mutation to the enhanced elites.

Particle swarm optimization has also been hybridized with many other optimization approaches. Shi et al. (2011) developed a cellular PSO based on the techniques of cellular automata and particle swarm optimization for function optimization problems. Liu and Abraham (2005) proposed a fuzzy adaptive turbulent PSO by combining turbulent PSO with a fuzzy logic controller. Jian and Chen (2006) hybridized PSO with the genetic algorithm's recombination operator and dynamic linkage discovery to solve complex real-number optimization problems. Their approach first applies linkage discovery to revise the objective function before performing PSO iterations with the recombination operator. Jia et al. (2011) proposed a hybrid PSO algorithm for high-dimensional problems, where classic PSO may suffer premature convergence. To overcome this deficiency, they introduced chaotic and Gaussian local search into PSO. In the initial search phase of their algorithm, chaotic local search lets the particles explore a wider search space, which helps avoid premature convergence.
In the following phase, the algorithm refines the obtained solutions through Gaussian optimization.

2. The applications of particle swarm optimization

Particle swarm optimization has been widely used as an optimization tool for a large range of problems. The main PSO applications include image and video analysis, the design and restructuring of electrical networks, antenna design, power generation and power systems, production scheduling, and sensor networks. Indeed, the PSO application possibilities are so numerous that only a few are introduced herein.

A great variety of PSO techniques have been introduced to address complex combinatorial optimization problems such as travelling salesman problems (TSPs) and production scheduling problems. Given a list of cities and their pairwise distances, the task of a TSP is to find the shortest possible route that visits each city exactly once and returns to the origin city. Pang et al. (2004) used a fuzzy discrete PSO (DPSO) approach to solve TSPs. In their study, the position of a particle was represented by a fuzzy matrix over the set of cities, whose values denote the degree of membership of the corresponding elements. Lopes and Coelho (2005) combined PSO with fast local search and the genetic algorithm to study TSPs. In their research, PSO with the mechanism of GA was employed to guide the movement of particles at the macro level in order to increase the exploration of the search space, while fast local search was used to find locally improved solutions so as to increase the exploitation of the search space. Habibi et al. (2006) proposed a hybrid algorithm based on the techniques of PSO, ant colony system, and simulated annealing to solve TSPs. In their algorithm, the ant colony system is used to replace the individual best solution of PSO, and simulated annealing is employed to control the exploration of the group best element. Parsopoulos and Vrahatis (2006) used a smallest position value based unified PSO to solve the single-machine production scheduling problem with the objective of minimizing the total weighted tardiness of jobs. The smallest position value technique derives a schedule from a particle's continuous position by giving the job whose dimension holds the lowest value the first priority, the next lowest the second priority, and so on. Chen et al.
(2006) employed DPSO with simulated annealing to study the capacitated vehicle routing problem. An advanced formulation was required to make DPSO suitable for this problem: each particle consisted of a certain number of sections, each representing a vehicle, and each section contained a certain number of bits, each denoting a customer. The hybrid algorithm works in a multi-step way. First, DPSO provides a globally influenced move for each particle to a new position. Next, simulated annealing performs a local search for each particle at the new position so as to increase the exploitation of the search space. If a better position is located in its vicinity, the particle moves to this better position. This procedure is repeated until the pre-defined termination condition is reached. Sha and Hsu (2006) proposed a hybrid particle swarm optimization to solve the job shop scheduling problem. In their approach, the particle position is set based on a preference-list-based representation, particle movement is performed based on a swap operator, and particle velocity is iterated based on the tabu list concept. In addition, Giffler and Thompson's heuristic is incorporated to decode the position of a particle into a complete production schedule.

Particle swarm optimization has also been used to solve multi-criteria problems, motivated by the desire to optimize a solution across multiple objectives. Parsopoulos and Vrahatis (2002) presented a modified PSO system to generate a Pareto front. Their approach adopted multiple swarms, each of which targeted one of the objectives; the best particle from one swarm was used as the global best for another swarm. Hu and Eberhart (2002) proposed a dynamic neighborhood PSO, wherein each particle evaluates its own fitness and uses the proximity of neighboring particles with respect to one objective to determine its own local best solution. As the particles move through the search space, the particles in each neighborhood change, and the Pareto front emerges.

PSO has been widely used in other areas as well. Conradie et al. (2002) presented an adaptive neural swarming approach, a PSO-based algorithm, to study the possibility of applying standard neuro-controllers in industrial processes. Das et al. (2005) proposed a modified PSO approach to improve the performance of biomedical imaging by adjusting the design of Infinite Impulse Response filters. Khemka et al. (2005) adopted PSO to optimize a biomechanical model of a football kick, which simulated 17 different muscle groups involved in kicking a ball and included a large set of realistic constraints, such as forbidding the toes from hitting the floor. Navalertporn and Afzulpurkar (2011) proposed an integrated approach using an artificial neural network and a bidirectional particle swarm to optimize the tile manufacturing process. In their study, the artificial neural network was used to capture the relationships between the decision variables and the performance measures of interest, while the bidirectional particle swarm was adopted to perform multiple-objective optimization. Sahoo et al.

(2012) presented a multi-objective planning approach using particle swarm optimization to study electrical distribution systems. In their research, the number of feeders and their routes, the number and locations of sectionalizing switches, and the number and locations of tie-lines of a distribution system were optimized. The multiple objectives comprised (1) minimization of the total installation and operational cost and (2) maximization of the network reliability.

3. Parallel implementation and theoretical analysis of particle swarm optimization

Like other population-based meta-heuristics, particle swarm optimization is intrinsically parallel and thus can be implemented in a parallel fashion. The goals of implementing an optimization algorithm in parallel are to improve computation speed and increase robustness. Intuitively, the computation speed should improve n-fold if the swarm is implemented in parallel on n processing nodes. In reality, however, the communication among particles reduces the speed-up ratio. Gies and Rahmat-Samii (2003) confirmed this in their experimental results: they implemented PSO in parallel on 10 nodes, and the computation speed improved about eight-fold compared with the serial implementation. They also pointed out that this figure would vary with different objective functions and communication requirements. Schutte et al. (2004) further verified this result by applying a parallel implementation of PSO to a large-scale engineering optimization problem. Their results showed that more processor-intensive fitness functions and smaller communication requirements lead to more efficient parallel implementations. Chang et al. (2005) presented a parallel implementation of PSO to investigate the effect of different communication strategies on system performance. Their results demonstrated that algorithm efficiency depends on the communication strategy, and that each strategy has its own advantages.

Many researchers have conducted theoretical analyses of particle swarm optimization in order to facilitate a better understanding of swarm intelligence. Although the PSO iteration procedure seems simple, some aspects are difficult to analyze. Three PSO qualities in particular pose challenges. First, the swarm is made up of a large number of particles, which interact with each other through velocity updating. Second, each particle

2‐29   

may be attracted towards a new personal best solution or global best solution at any iteration during the search process. Third, the stochastic elements in the velocity updating equation increase the difficulty of theoretical analysis. Nonetheless, some progress has been made in recent years. Ozcan and Mohan (1999) made the first attempt with the "surfing the waves" model to track particle search trajectories. The swarm in their research was simple in that it contained only one particle, and the problem was set in a one-dimensional search space. Furthermore, their model did not consider the inertia weight, velocity clamping, or the constriction coefficient, and it assumed that the personal best and global best solutions update concurrently. Subsequently, they relaxed this rigorous assumption and extended the research to a multi-dimensional search space with numerous particles (Ozcan and Mohan 1999). Yasuda et al. (2003) conducted similar research under the following circumstances: one particle, a one-dimensional search space, and a velocity updating equation with inertia weight but no stochastic elements. The deficiency common to these studies is that they did not take the stochastic elements into consideration. Some scholars have since extended the theoretical analysis to stochastic situations in order to bring the analysis closer to standard PSO. Clerc (2006) analyzed the distribution of the velocities of a particle governed by the updating equation with inertia weight and stochastic forces. Kadirkamanathan et al. (2006) conducted similar research using Lyapunov stability analysis, where a particle is represented as a nonlinear feedback system. Recently, Poli and Broomhead (2007) presented a novel model to study the sampling distribution and its changes over iterations for canonical PSO as well as some modified PSO versions.
Their research determined the stable regions of the parameter spaces by analyzing the moments of the sampling distribution.
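The deterministic one-particle, one-dimensional setting analyzed in the early studies above can be reproduced in a few lines: with an inertia weight and no stochastic coefficients, the particle's trajectory is a linear recurrence that, for stable parameter settings, settles on a weighted average of the personal best p and global best g. The parameter values below are illustrative assumptions, not those used in the cited papers.

```python
def deterministic_trajectory(p=1.0, g=3.0, w=0.7, c1=1.4, c2=1.4, steps=200):
    """One particle in one dimension, with inertia weight and no random
    coefficients: v <- w*v + c1*(p - x) + c2*(g - x); x <- x + v."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        v = w * v + c1 * (p - x) + c2 * (g - x)
        x = x + v
    return x
```

For these parameters the recurrence is stable (the characteristic roots have magnitude below one), so the trajectory converges to the fixed point (c1·p + c2·g)/(c1 + c2), here 2.0; with stochastic coefficients this clean fixed-point picture no longer holds, which is precisely the difficulty the later stochastic analyses address.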

2.4.2 Ant Colony Optimization

1. The origins of ant colony optimization

Inspired by observations of real ant colonies, the ant colony optimization technique is effective for solving discrete combinatorial optimization problems (Dorigo et al. 1991, Dorigo 1992). More specifically, the approach is motivated by ant foraging behavior, the sine qua non of which is indirect communication among the ants by means of chemical pheromone trails. Ants are social insects that live in colonies, and their behavior serves the survival of the colony rather than that of individuals. To find a food source, ants first explore the area around their nest randomly. While moving, ants deposit a certain amount of pheromone on the trail they traverse. When choosing where to move, ants tend to select the paths with strong pheromone concentrations. Once an ant has found a food source, it carries some of the food back to the nest and drops an amount of pheromone on the return trip according to the quantity and quality of the food found. The pheromone can be smelled by other ants and guides them to the food source. Based on these observations, Dorigo et al. (1991) proposed an ant system model consisting of a large number of artificial ants. This was the first ACO algorithm, usually called "Ant System". The artificial ants in the model differ from real ants in three major respects (Blum 2005a). First, the artificial ants move in a synchronized way, while real ants move asynchronously; each artificial ant moves from the nest to the food source and follows the same route back. Second, real ants deposit pheromone on the paths whenever they are moving, but the artificial ants only leave pheromone on the way back to the nest. Third, the foraging behavior of real ants relies on an implicit evaluation of a solution: shorter paths are completed earlier and thus receive pheromone reinforcement more quickly. The artificial ants, in contrast, evaluate a solution with an explicit quality measure that determines the strength of the pheromone reinforcement on the return trip. The details of the Ant System model can be introduced using the travelling salesman problem as an example.
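As a concrete preview of the formal description that follows, one run of Ant System on a small symmetric TSP can be sketched as below. The construction step, evaporation, and reinforcement correspond to Equations (2-4), (2-5), and (2-6) of this section; the parameter values and the uniform initial pheromone are assumptions of this sketch.

```python
import random

def tour_length(tour, dist):
    """Length of a closed tour, including the edge back to the start city."""
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def ant_system(dist, n_ants=10, iters=100, rho=0.1, Q=1.0, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]   # uniform initial pheromone (assumption)
    best, best_len = None, float("inf")
    for _ in range(iters):
        # Solution construction: each ant builds a tour, choosing the next
        # city with probability proportional to the pheromone value (Eq. 2-4).
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                j = rng.choices(cand, weights=[tau[i][j] for j in cand])[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append(tour)
        # Pheromone evaporation on all trails (Eq. 2-5).
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        # Pheromone reinforcement along each ant's tour (Eq. 2-6).
        for tour in tours:
            length = tour_length(tour, dist)
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length
            if length < best_len:
                best, best_len = tour[:], length
    return best, best_len
```

Shorter tours deposit more pheromone per edge (Q divided by the tour length), so over the iterations the edges of good tours become increasingly likely to be chosen.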
The model begins by constructing a solution for each ant. First, one node of the graph, representing a city in the problem, is randomly selected as the start point. The ant then forms a route by moving from the current node to another unvisited node, and the resulting path is added to the solution under construction. A complete solution is constructed once every city has been visited. During the construction process, the probability of choosing the next city to visit is given by Equation (2-4):

p(e_{i,j}) = τ_{i,j} / Σ_{k∈T} τ_{i,k},   j ∈ T        (2-4)
Here T denotes the set of unvisited cities, p(e_{i,j}) represents the probability of moving from the current node i to node j, and τ_{i,j} is the pheromone value on the edge from node i to node j. Once all ants have finished constructing their solutions, pheromone evaporation is performed on all of the pheromone trails according to Equation (2-5):

τ_{i,j} ← (1 − ρ)·τ_{i,j}        (2-5)

where ρ (0 < ρ ≤ 1) denotes the pheromone evaporation rate.

After that, the ants return to the nest, and each ant performs pheromone reinforcement along its route according to Equation (2-6):

τ_{i,j} ← τ_{i,j} + Q / f(s)        (2-6)

Here Q is a positive constant and f(s) represents the objective function value of the solution s represented by the traversed route. Subsequently, all ants restart constructing solutions, and this process is repeated until a pre-defined termination condition is satisfied.

The basic procedure for applying Ant System to a discrete combinatorial optimization problem is as follows. First, derive a finite set of solution components from which solutions to the problem at hand are constructed. Second, define a set of pheromone values, each associated with one solution component. The ants then perform the iteration process stated above to locate optimal solutions to the problem.

2. Successful ACO variants

The ant colony optimization meta-heuristic was first formalized in Dorigo et al. (1999). The general framework of ant colony optimization is presented in Figure 2-3:


Figure 2-3. The ant colony optimization procedure.

The AntBasedSolutionConstruction() function is used within the procedure to construct solutions for the ants. Its principle is almost identical to that of Ant System, except that in ant colony optimization a heuristic value is also used when choosing the next node to move to. The function of PheromoneUpdate() is to update the values on the pheromone trails, usually through two operations: pheromone evaporation and pheromone reinforcement. Pheromone evaporation uniformly decreases all pheromone values in an attempt to increase the exploration of the search space and avoid rapid convergence towards a sub-optimal solution. Pheromone reinforcement, meanwhile, increases the pheromone values on the solution components based on one or more solutions from the current and/or earlier iterations. Pheromone reinforcement is usually performed on the components of the selected solutions according to Equation (2-7):

τ_i ← (1 − ρ)·τ_i + ρ · Σ_{s∈S_upd} ω_s·F(s)        (2-7)

Here S_upd denotes the set of solutions selected for pheromone reinforcement, F(s) is a function measuring the quality of solution s, and ω_s denotes the weight of solution s. Many strategies have been used for determining the composition of S_upd, such as some of the solutions generated in the respective iteration, the best solution found so far, or the best solution obtained in the respective iteration. The function of DaemonActions() is to perform centralized actions that

cannot be implemented by single ants. This step is optional; the commonly adopted actions include applying local search to the constructed solutions and depositing additional pheromone on some trails based on global information.

Although the Ant System algorithm can be used to solve a great range of optimization problems, its performance is often unsatisfactory. Therefore, several ACO variants based on Ant System have been proposed in recent decades, differing mainly in their pheromone updating strategies. The most successful ACO variants include Elitist AS, Ant Colony System, Rank-Based AS, MAX-MIN Ant System, and the Hyper-Cube Framework (Blum 2005a).

Elitist AS was the first improvement over Ant System (Dorigo 1992). In this approach, S_upd contains the solutions generated in the respective iteration and the best solution

found so far. The weights for the solutions generated in the respective iteration are all equal to one, and the weight for the best solution found so far is larger than or equal to one. This approach increases the exploitation of the best solution found so far due to the bias towards the components belonging to it. Ant Colony System, developed in 1997 (Dorigo and Gambardella 1997), differs from the original Ant System algorithm in many respects. First, when selecting the next component to construct a solution, an ant selects the component with the largest product of pheromone value and heuristic value with a certain probability q, or selects the component according to the principle in Ant System (also considering the heuristic value) with probability 1 − q. Second, pheromone reinforcement is based on the best solution found so far, and pheromone evaporation is only applied to the pheromone trails that belong to the components of the best solution found so far. Third, an additional pheromone update is applied to the newly selected component after each solution construction step. This decreases the pheromone values on the visited solution components, making them less desirable for other ants to follow, and thus increases the exploration of the search space within each iteration. Bullnheimer et al. (1999) proposed another improvement called Rank-Based AS. In this approach, S_upd consists of m elements, including the best m − 1 solutions generated in

the respective iteration and the best solution found so far. The weight of a solution selected from the respective iteration is set as ω_s = m − r_s, where r_s is the rank of solution s, and the weight of the best solution found so far is set as m. Thus, the best solution found so far has the greatest influence on the pheromone update, increasing the exploitation of the search space, while the solutions selected from the respective iteration also influence the pheromone update, increasing the exploration of the search space. Stutzle and Hoos (2000) proposed MAX-MIN Ant System, wherein the pheromone update depends more often on the best solution obtained in the respective iteration at the start of the algorithm, and gradually shifts towards the best solution found so far during the run. In addition, an explicit lower bound is used to control the range of the pheromone values; since the Ant System pheromone model naturally implies an upper bound, the pheromone values in MAX-MIN Ant System are restricted to a finite range. The most recent improvement over the Ant System algorithm is the Hyper-Cube Framework (Blum and Dorigo 2004), wherein the weight in the pheromone updating equation for each solution in the set S_upd is in inverse proportion to the quality of the solution.

3. The applications of ant colony optimization

The Ant System algorithm was initially proposed to solve travelling salesman problems. Since then, ACO algorithms have been widely applied to a large number of discrete combinatorial optimization problems. For instance, many researchers have contributed great effort to extending the application of ACO algorithms to the protein folding problem (Shmygelska et al. 2002, Shmygelska and Hoos 2005), which has attracted increasing attention in the fields of computational biology, molecular biology, physics, and biochemistry.
The task of this problem is to determine the functional shape or conformation of proteins in a two- or three-dimensional space. Stutzle (1998) used ant colony optimization to solve the flow shop production scheduling problem. Gambardella et al. (1999) developed an ant colony system to solve vehicle routing problems with time windows. Overall, ACO algorithms have been widely used in a great variety of discrete combinatorial optimization problems, such as the travelling

salesman problem (Dorigo 1992, Dorigo et al. 1991, 1996, Dorigo and Gambardella 1997), the quadratic assignment problem (Maniezzo 1999, Maniezzo and Colorni 1999, Stutzle and Hoos 2000), production scheduling problems (Blum 2005b, Blum and Samples 2004, Gagne et al. 2002, Merkle et al. 2002, Stutzle 1998), the timetabling problem (Socha et al. 2003), the graph coloring problem (Costa and Hertz 1997), the shortest supersequence problem (Michel and Middendorf 1998), and the communication network design problem (Maniezzo et al. 2004).

ACO algorithms have also been extended to dynamic optimization problems, wherein the search space changes dynamically: the status of the search space and the quality of generated solutions may change during the search process itself, so the ability to adapt to such changes is crucial for the adopted algorithm. Guntsch and Middendorf (2001) used an ant colony optimization approach to solve the dynamic travelling salesman problem. Other notable research in this field includes Di Caro et al. (1998) and Guntsch and Middendorf (2003).

The multi-objective optimization problem is another research area in which ACO algorithms have been widely used. A multi-objective optimization problem refers to an optimization problem with two or more potentially conflicting objective functions. A Pareto solution set is usually sought, in which no solution is dominated by any other solution in the set. Iredi et al. (2001) proposed an ant colony based approach to solve bi-criterion optimization problems. The goal of their research was to minimize the total tardiness of jobs and the changeover costs in a single-machine manufacturing environment; several ant colonies cooperated in searching the solution space in order to find good solutions. Guntsch and Middendorf (2003) also studied multi-criteria optimization problems with a population-based ACO algorithm.
Their approach employed one pheromone matrix for each optimization criterion. These matrices were derived from the chosen population of solutions and could cope with an arbitrary number of criteria. Doerner et al. (2004) used the ant colony optimization approach to select the best project portfolio out of a given set of investment proposals, a setting in which decision-makers usually have to take multiple objectives into consideration and often have little a priori information available.

As many practical optimization problems are continuous, some researchers have attempted to apply ACO algorithms to continuous optimization problems. The simplest way to do so is to divide the domain of each variable into a set of intervals; the discretized problem can then be tackled with the original discrete optimization algorithms. Unfortunately, this approach may be infeasible when the search space is too large or very high accuracy is required. To overcome this deficiency, several continuous algorithms inspired by ant behavior have been proposed, including continuous ACO (Bilchev and Parmee 1995), the API algorithm (Monmarche et al. 2000), and continuous interacting ant colony (Dreo and Siarry 2002). All of these continuous ant-based algorithms are conceptually different from those used for discrete problems. Recently, Socha (2004) and Socha and Dorigo (2008) developed a continuous ant-based algorithm, which is the closest to the principle of ACO for discrete problems, to solve continuous optimization problems. More attention should be paid to the promising field of applying ACO algorithms to continuous and mixed-variable problems.

4. Theoretical analysis of ant colony optimization

A major concern surrounding meta-heuristics is whether they can guarantee identification of the globally optimal solution when given enough computation time. To this end, many scholars have studied the convergence behavior of ACO algorithms. Gutjahr (2000) made the first attempt to prove the convergence properties of a particular ACO algorithm called the graph-based ant system (GBAS). Following that, Gutjahr (2002) extended the research to two variants of GBAS: GBAS with time-dependent pheromone evaporation, and GBAS with time-dependent lower pheromone bounds. The results demonstrated that these algorithms converge to an optimal solution with probability one.
Furthermore, every ant is guaranteed to converge to an optimal solution in the GBAS algorithm. Stutzle and Dorigo (2002) investigated the convergence properties of a class of ACO algorithms including Ant Colony System and MAX-MIN Ant System, where the pheromone update adopts the elitist strategy and a lower bound for the pheromone values

is fixed. Their results show that these ACO algorithms find the optimal solution with probability one. Dorigo and Stutzle (2004) also extended the research to ACO algorithms with time-dependent lower bounds for the pheromone values, and demonstrated that these algorithms guarantee convergence in solution, meaning that any arbitrary ant in the colony will construct the optimal solution with probability one in the limit.

2.4.3 Constraint Programming

1. Basic knowledge of constraint programming

Constraint programming is a programming paradigm in which the relations between variables are stated in the form of constraints. This approach usually represents combinatorial problems as constraint satisfaction problems (CSPs) and solves them by implicitly eliminating infeasible regions of the search space based on the problem constraints. A constraint satisfaction problem usually consists of a set of variables, a domain for each variable, and a set of constraints restricting the values that the variables can simultaneously take. Formally, a constraint satisfaction problem is a triple (V, D, C) where (1) V denotes a set of variables V = {v1, v2, ..., vn}; (2) D is a function that maps each variable in V to a set of objects of arbitrary type, namely the domain of the variable; and (3) C represents a finite set of constraints, each on an arbitrary subset of the variables in V, restricting the values that the variables can simultaneously take. A feasible solution to a CSP is an assignment of a value from its domain to each variable without violating any constraint in the constraint set. If no feasible solution exists, the problem is unsatisfiable. According to the requirements of an application, CSPs can be classified into the following three categories: (1) CSPs in which any one solution must be found; (2) CSPs in which all solutions must be found; and (3) CSPs in which


optimal solutions must be found, where optimality is usually determined by an objective function (Tsang 1993). The most classical example of a CSP is the N-queens problem. Given an integer N, the N-queens problem requires placing N queens on N distinct squares of an

N × N chessboard, satisfying the requirement that no two queens threaten each other. In chess, a queen threatens any other piece that sits on the same row, column, or diagonal. Figure 2-4 illustrates one possible solution to the N-queens problem with 8 queens.

Figure 2-4. A possible solution to the 8-queens problem.

If all of the constraints are unary or binary, the CSP is called a binary constraint satisfaction problem, which can be represented with a constraint graph. The nodes in the constraint graph represent the variables, and the edges represent the problem constraints.
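The 8-queens problem can be written down directly as a binary CSP of this kind: one variable per row, columns as domains, and one "no threat" binary constraint per pair of variables (each such pair is an edge of the constraint graph). The sketch below is illustrative; the variable and function names are assumptions.

```python
def no_threat(row_i, col_i, row_j, col_j):
    """Binary constraint: queens in two distinct rows must not share a
    column or a diagonal (the rows differ by construction)."""
    return col_i != col_j and abs(row_i - row_j) != abs(col_i - col_j)

# The 8-queens CSP as a triple (V, D, C): variables v0..v7 (one queen per
# row), each with domain {0..7} (its column), and one binary constraint
# per pair of variables.
n = 8
variables = list(range(n))
domains = {v: set(range(n)) for v in variables}
constraints = [(i, j) for i in variables for j in variables if i < j]

def satisfies(assignment):
    """An assignment maps each row to a column; it is a feasible solution
    if every binary constraint holds."""
    return all(no_threat(i, assignment[i], j, assignment[j])
               for (i, j) in constraints)
```

Because every constraint involves exactly two variables, this is a binary CSP, and its constraint graph is the complete graph on the eight row variables.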


In CSPs, a label is a variable-value pair representing the assignment of a value to a variable, and a compound label is the simultaneous assignment of values to a set of variables. Conceptually, a constraint on a set of variables is a set of compound labels for the subject variables (Tsang 1993); the constraints restrict the values that the variables can simultaneously take. There are some important consistency concepts in the field of CSPs, namely node-consistency, arc-consistency, and path-consistency. A CSP is node-consistent if and only if all values in the domain of every variable satisfy the constraints on it. An arc (x, y) in a CSP constraint graph is arc-consistent if and only if for each value a in the domain of variable x which satisfies the constraint on x, there exists at least one value in the domain of variable y which is compatible with the label <x, a>. The definition of path-consistency is more involved and can be found in Tsang (1993). If each arc in a binary CSP is arc-consistent, then the problem is said to be arc-consistent. Making the problem arc-consistent helps reduce the complexity of finding feasible solutions, and a number of approaches have been proposed to achieve this goal (Van Hentenryck and Carillon 1988).

2. Constraint programming search techniques

Many search techniques have been developed to improve the efficiency of finding feasible CSP solutions. The most popular techniques include backtracking, backmarking, backchecking, and backjumping. The backtracking paradigm is a basic technique used to solve CSPs. Its basic operation is to pick one variable at a time and consider one value for it, making sure that the newly added label is compatible with the instantiated partial solution. If the newly added label violates certain constraints, then an alternative value (if one exists) is selected.
If at any stage no value can be assigned to a variable without violating any constraint, the search backtracks to the most recently instantiated variable. This process continues until a feasible solution has been found or all possible combinations of labels have been tried and have failed. The chronological backtracking technique exhausts one branch of the search tree before it shifts to another when no solution is found, so this strategy may be inefficient if finding just one feasible solution is the requirement. In chronological backtracking, each intermediate node is a choice point, and each branch emitted from that node represents a choice. The choice points in chronological backtracking are ordered arbitrarily. However, there is no reason to believe that the earlier choice points are more important than later ones, and hence no reason to expend greater effort on earlier choice points. In order to improve search efficiency, Ginsberg and Harvey (1990) proposed the iterative broadening algorithm, an improvement over chronological backtracking which aims to spread the computational effort more evenly across the choice points. Specifically, this approach is a depth-first search with a breadth threshold b: once a specific node has been visited b times, its unvisited children are ignored. If a solution is not found under the current threshold, the value of b is increased. This process continues until a feasible solution has been found or the value of b is equal to or greater than the number of branches in the search tree. Ginsberg and Harvey (1990) also used probability theory to demonstrate the efficiency of the iterative broadening algorithm. They calculated the probability of finding at least one goal node at any specific depth, and concluded that when the depth is large, iterative broadening can achieve better computational speeds than chronological backtracking, provided there are enough solutions in the leaves of the search tree.

Backjumping is an improvement over backtracking. Its mechanism is almost the same as that of backtracking, except when inconsistencies occur. The operation of backjumping is to pick one variable at a time and assign a value to it, making sure that the newly instantiated label is compatible with the partial solution found so far.
When violations occur, backjumping analyzes the situation in order to identify the culprit decisions which have jointly caused the failure. If each value in the domain of the current variable is in conflict with some committed labels, backjumping backtracks to the most recent culprit decision rather than to the most recently instantiated variable.

For applications with time-consuming compatibility checks, it is desirable to reduce the number of compatibility checks as much as possible. Backchecking and backmarking are effective techniques proposed for this purpose. The major control of backchecking is as follows. For a label <x, a>, backchecking checks whether it is compatible with all of the labels instantiated so far. If <x, a> is incompatible with some committed label <y, b>, then backchecking records this incompatibility; as long as <y, b> is committed to, <x, a> will not be considered again. As an improvement over backchecking, backmarking reduces the number of compatibility checks by remembering, for each label, the incompatible labels which have already been committed to. Furthermore, it avoids repeating compatibility checks which have already been performed successfully.

The aforementioned propagation techniques only verify the constraints between the currently instantiated variables. In addition, many propagation techniques take the form of forward checking. When a value is assigned to the current variable in forward checking algorithms, any value in the domain of a future variable conflicting with the newly instantiated label is removed from that domain (Haralick and Elliott 1980). If the domain of a future variable becomes empty, the current partial solution is inevitably infeasible. Thus, forward checking can prune the branches of the search tree earlier than simple backtracking. Sabin and Freuder (1994) proposed an efficient approach, namely maintaining arc consistency (MAC). The MAC algorithm does still more work in looking ahead when an assignment is made. Whenever a variable assignment creates a new sub-problem consisting of the future variables, the sub-problem is made arc-consistent. Not only does MAC check the values of future variables against the current assignment (as forward checking does), it also evaluates future variables against each other. Thus, each value which is not supported in the domain of some future variable is deleted, as are those values which are not supported by the current assignment. This removes extra values from the domains of future variables, in the hope of saving more computation time by reducing the number of compatibility checks.

3. The applications of constraint programming

Constraint programming has been widely used to solve a large number of combinatorial problems, some of which are briefly introduced in this literature review. One popular research area in Operations Research involves the location of facilities, such as warehouses, to better meet customer demand. Optimal solutions for large instances of the so-called “uncapacitated facility location problem” have been obtained

by using branch-and-bound algorithms. Among them, Erlenkotter’s (1978) algorithm is the most efficient. In his research, a simple ascent and adjustment procedure was used to produce optimal dual solutions, which in turn often corresponded directly to optimal integer primal solutions; when they did not, a branch-and-bound procedure completed the solution process. Van Hentenryck and Carillon (1988) adopted a warehouse location problem as a case study to evaluate the performance of several approaches for solving discrete combinatorial problems, including integer programming and constraint logic programming.

Assignment problems are among the first types of industrial applications solved by constraint logic programming (CLP) technology. A typical assignment problem involves aircraft stand allocation at airports, i.e., where aircraft are parked during their stay at an airport. Another involves the docking of maritime cargo transport vessels as well as the stacking and storage of their cargo containers. The first industrial CLP application was developed for the HIT container harbor in Hong Kong (Perrett 1991). Its objective was to allocate docking berths to container ships in the harbor so that port resources and container stacking space would be available.

Production scheduling involves allocating production resources to competing jobs over time. The objective of a production scheduling problem is usually to optimize some measure of system performance, such as the minimization of makespan, setup frequency, and/or manufacturing costs. Production scheduling problems are notoriously difficult to solve, even in simple manufacturing environments such as job shops. Lawrence (1984) created a theoretical example featuring only 15 jobs and 10 machines that, as of this writing, still has not been solved with the most modern branch-and-bound algorithms. Nonetheless, many heuristics have been proposed to generate relatively good solutions within a short computation time. Vaessens et al. (1996) reviewed these heuristics and compared their performances. In addition, many procedures for establishing arc-consistency in production scheduling problems have also been developed (Nuijten and Aarts 1996). Thuriot et al. (1994) proposed an approach based on constraint programming techniques for testing whether an operation must be scheduled before others can begin on a scarce production resource.
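The search techniques surveyed in this section can be made concrete with a short sketch. The following is an editor-added illustration of chronological backtracking combined with forward checking, in the spirit of Haralick and Elliott (1980), applied to the 4-queens problem; the function names and problem encoding are illustrative assumptions, not taken from any cited paper:

```python
# Hedged sketch: depth-first backtracking search with forward checking.
# After each assignment, values conflicting with the new label are pruned
# from the domains of future variables; an empty future domain triggers
# backtracking to the most recently instantiated variable.

def forward_checking_search(variables, domains, conflicts):
    """variables: ordered list; domains: var -> set of candidate values;
    conflicts(u, a, v, b): True if label <u, a> is incompatible with <v, b>."""
    def search(assignment, doms):
        if len(assignment) == len(variables):
            return dict(assignment)
        var = next(v for v in variables if v not in assignment)
        for value in sorted(doms[var]):
            assignment[var] = value
            # Forward checking: prune future domains against <var, value>.
            pruned = {v: {b for b in doms[v] if not conflicts(var, value, v, b)}
                      for v in variables if v not in assignment}
            pruned.update({v: doms[v] for v in assignment})  # keep assigned vars
            if all(pruned[v] for v in variables if v not in assignment):
                result = search(assignment, pruned)
                if result is not None:
                    return result
            del assignment[var]  # backtrack and try the next value
        return None

    return search({}, domains)

# 4-queens as a binary CSP: one queen per column; values are row indices.
def queens_conflict(u, a, v, b):
    return a == b or abs(a - b) == abs(u - v)  # same row or same diagonal

cols = [0, 1, 2, 3]
solution = forward_checking_search(cols, {c: set(range(4)) for c in cols},
                                   queens_conflict)
print(solution)  # {0: 1, 1: 3, 2: 0, 3: 2}
```

With values tried in ascending order, the pruning never changes which solution is found first relative to plain backtracking; it only detects dead ends earlier, which is precisely the benefit discussed above.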

Other constraint programming applications include car sequencing (David and Chew 1995, Dincbas et al. 1988a, Smith 1996, Regin and Puget 1997), cutting stock (Dincbas et al. 1988b, Proll and Smith 1998), vehicle routing (Gendreau et al. 1997, Christodoulou et al. 1994, Shaw 1998), and timetabling (Nuijten et al. 1994, Lajos 1995, Menezes and Barahona 1994).

2.5 Dynamic Production Scheduling

Due to the great variety of disruptions that occur on the production floor, the only real, practical problems are rescheduling problems rather than scheduling problems. Thus, great advancements in the field of “dynamic production scheduling” (also known as “production rescheduling”) are needed.

2.5.1 Knowledge of uncertainties

In order to study production rescheduling problems, the nature and frequency of the disruptions occurring on the production floor must first be understood. McKay et al. (1989) identified three basic types of uncertainty in manufacturing environments.

(1) The first type is complete unknowns. These uncertainties are unforeseen and often unpredictable, as no information may be available in advance. Such emergencies include natural disasters and sudden employee strikes. Little can be done to prevent these events in advance, but they must be reacted to in real time.

(2) The second type is suspicions about the future. These uncertainties issue from the intuition and experience of the manager and scheduler. It is hard to incorporate them into scheduling algorithms due to the difficulty of quantifying them.

(3) The final type is known uncertainties. These are uncertainties about which some information is available when the predictive schedule is generated. A classic example is machine breakdown, the frequency and duration of which can be anticipated based on probability distributions derived from historical data.

Only known uncertainties in dynamic manufacturing environments will be considered in this research.
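As an editor-added illustration of how a known uncertainty can be quantified, the sketch below samples machine breakdown events from assumed probability distributions. The exponential/uniform choices and all parameter values are hypothetical, not taken from the cited studies:

```python
import random

# Hedged sketch: a "known uncertainty" such as machine breakdown can be
# modeled with distributions fitted to historical data. Here the time
# between failures is exponential and the repair time is uniform; the
# parameters are made up for illustration.

def sample_breakdowns(horizon, mtbf, repair_lo, repair_hi, rng):
    """Return a list of (start, duration) breakdown events in [0, horizon)."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(1.0 / mtbf)   # time until the next failure
        if t >= horizon:
            return events
        duration = rng.uniform(repair_lo, repair_hi)
        events.append((t, duration))
        t += duration                      # machine is down while repaired

rng = random.Random(42)
events = sample_breakdowns(horizon=480.0, mtbf=120.0,
                           repair_lo=10.0, repair_hi=30.0, rng=rng)
for start, duration in events:
    print(f"breakdown at t={start:.1f} for {duration:.1f} time units")
```

A predictive-reactive scheduler could feed such sampled scenarios into a simulation to estimate how a candidate schedule degrades under disruption.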


From another point of view, uncertainties can also be classified into two categories based on the subject matter they affect: that is, uncertainties can be grouped as either “resource-related” or “job-related” (Ouelhadj and Petrovic 2009). Resource-related uncertainties include machine breakdowns, worker absenteeism, loading limits, material shortages, and defective materials. Job-related uncertainties include job cancellations, rush jobs, due date changes, changes in production volume, the early or late arrival of jobs, changes in job processing time, and changes in job priority.

2.5.2 Dynamic scheduling approaches

Overall, there are four categories of dynamic scheduling approaches: completely reactive scheduling, predictive-reactive scheduling, robust predictive-reactive scheduling, and robust pro-active scheduling (Ouelhadj and Petrovic 2009).

Completely reactive scheduling is the simplest dynamic scheduling approach. In this approach, no specific production schedule is generated in advance, and all decisions are made locally and in real time. Priority dispatching rules are usually adopted to select the job of the highest priority from the set of jobs awaiting service when a machine becomes free (Perkins and Kumar 1989, Church and Uzsoy 1992, Fang and Xi 1997). This approach is quick, intuitive, and easy to implement. However, it usually underperforms precisely because all of the decisions are made locally and in real time, so no global information is considered. Nonetheless, a large number of priority dispatching rules have been developed. Panwalkar and Iskander (1977) classified these dispatching rules into five categories: simple dispatching rules, combinations of simple rules, weighted priority indexes, heuristic scheduling rules, and other rules. No single rule performs well across all criteria.

The most commonly used strategy in dynamic manufacturing environments is “predictive-reactive scheduling” (Huang et al. 1990, Dhingra et al. 1992, Szelke and Kerr 1994, Henning and Cerda 1995, Jain and Elmaraghy 1997, Jones et al. 1998, Mehta and Uzsoy 1998). As its name suggests, this is a two-step approach. First, a predictive production schedule is made in advance to optimize some performance measure without

considering possible disruptions on the production floor. Second, the schedule is modified to maintain feasibility or improve system performance when disruptions occur (Yamamoto and Nof 1985, Church and Uzsoy 1992, Abumaizar and Svestka 1997, Sabuncuoglu and Karabuk 1999). Generally, dynamic scheduling is an iterative approach. Wu and Li (1995) described rescheduling as a three-step iterative process: an evaluation step, a solution step, and a revision step. The evaluation step analyzes the impact of a disruption, the solution step determines the rescheduling solutions which can maintain feasibility or improve system performance, and the revision step updates the current production schedule.

The predictive-reactive scheduling strategy is based on simple schedule adjustments, which are made by considering only system performance. The new schedule may therefore deviate significantly from the original, affecting ancillary external planning activities that are based on the original schedule. To address this, “robust predictive-reactive scheduling” focuses on generating predictive-reactive schedules that minimize the effects of disruptions on the performance measures of the realized schedule (Wu et al. 1991, 1993; Leon et al. 1994). However, little research has been conducted toward generating robust production schedules in dynamic manufacturing environments, owing to the complexity involved. The most common method involves rescheduling based on simultaneous consideration of both shop efficiency and system stability (Wu et al. 1991, 1993, Cowling and Johansson 2002, Leus and Herroelen 2005). “Stability” denotes the measure of deviation of the newly modified post-disruption schedule from the original one. Wu et al. (1991, 1993) developed a bi-criterion robustness measure for single-machine rescheduling problems with random machine breakdowns.
The criteria in their research included the minimization of the makespan and of the impact of schedule change, representing efficiency and stability respectively. They investigated two stability measures: the rate of deviation from the original job starting times, and the rate of deviation from the original job production sequence. Leon et al. (1994) developed robustness measures and robust scheduling methods to deal with production scheduling problems in situations featuring significant machine breakdowns and processing time variations. Jensen (2001) studied different robustness measures to improve the total flow time and reduce tardiness in manufacturing systems suffering machine breakdowns.
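The two kinds of stability measure described above can be sketched as follows. The exact formulas here (a sum of absolute start-time shifts and a pairwise order-change count) are simplified editor assumptions for illustration, not reproduced from Wu et al.:

```python
# Hedged sketch of two stability measures: deviation from the original
# job starting times, and deviation from the original job sequence.

def start_time_deviation(original, revised):
    """Sum of absolute start-time shifts over all jobs.
    original/revised: dict of job -> start time, same job set."""
    return sum(abs(revised[j] - original[j]) for j in original)

def sequence_deviation(original, revised):
    """Number of job pairs whose relative order changed between the
    original and revised schedules (a simple inversion count)."""
    order_o = sorted(original, key=original.get)
    pos = {j: i for i, j in enumerate(sorted(revised, key=revised.get))}
    changed = 0
    for i in range(len(order_o)):
        for k in range(i + 1, len(order_o)):
            if pos[order_o[i]] > pos[order_o[k]]:
                changed += 1
    return changed

original = {"J1": 0, "J2": 5, "J3": 9}
revised = {"J1": 0, "J2": 12, "J3": 9}   # J2 delayed past J3 by a breakdown
print(start_time_deviation(original, revised))  # 7
print(sequence_deviation(original, revised))    # 1 (J2 and J3 swapped)
```

A bi-criterion rescheduler would then trade one of these stability values off against an efficiency measure such as makespan.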

The aim of robust pro-active scheduling is to generate predictive schedules that satisfy performance requirements predictably in dynamic manufacturing environments (Mehta and Uzsoy 1999, Davenport et al. 2001, Vieira et al. 2003). The major difficulty of this approach lies in determining a suitable predictability measure. Mehta and Uzsoy (1999) developed a predictable scheduling strategy for a single machine subject to breakdowns by inserting a certain amount of idle time into the predictive schedule, with the objective of reducing delays in job completion times. The effects of disruptions were measured by the deviation of the realized job completion time from its originally planned completion time. Their experimental results showed that this approach can significantly improve predictability at the cost of only a slight degradation in the maximum lateness. O’Donovan et al. (1999) applied Mehta and Uzsoy’s predictable scheduling approach to other situations where job tardiness is the measure of system performance.

2.5.3 When and how to reschedule

There are two important issues in dynamic production scheduling: when to reschedule and how to reschedule (Ouelhadj and Petrovic 2009). The former determines the suitable points in time at which to take rescheduling actions, and the latter concerns the strategies adopted to implement those actions.

As to the first issue, i.e., when to reschedule, three policies are proposed in the literature: periodic, event driven, and hybrid policies (Sabuncuoglu and Bayiz 2000, Vieira et al. 2003). The “periodic policy” mandates taking rescheduling actions at regular intervals, after gathering all available information from the production floor. Under the periodic policy, the dynamic scheduling problem is decomposed into a series of static problems which can be easily solved by using classical scheduling algorithms. The production schedule is then executed and not revised until the next rescheduling point. This approach yields more schedule stability and reduces rescheduling tension. However, following the established production schedule may lead to unwanted products or intermediates when severe disruptions occur on the production floor. In addition, the determination of a suitable rescheduling interval is a difficult task.
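The decomposition performed by the periodic policy can be sketched as a simple loop that, at each rescheduling point, resequences the jobs known so far using a dispatching rule (shortest processing time, SPT, is used here). The job data, interval, and single-machine setting are illustrative editor assumptions:

```python
# Hedged sketch of the periodic policy: the dynamic problem is cut into
# static snapshots; each snapshot is solved with a simple dispatching rule.

def periodic_schedule(jobs, interval, horizon):
    """jobs: list of (name, arrival, processing); returns job -> completion.
    At each rescheduling point, queued jobs are resequenced by SPT."""
    completed, t = {}, 0.0
    while t < horizon and len(completed) < len(jobs):
        # Gather every job that has arrived by this rescheduling point.
        queue = [j for j in jobs if j[1] <= t and j[0] not in completed]
        queue.sort(key=lambda j: j[2])          # SPT order
        clock, deadline = t, t + interval
        for name, _, proc in queue:
            if clock + proc > deadline:          # defer to the next period
                break
            clock += proc
            completed[name] = clock
        t = deadline                             # next rescheduling point
    # Note: in this toy model a job longer than one interval is never placed.
    return completed

jobs = [("A", 0, 4), ("B", 0, 2), ("C", 5, 1), ("D", 6, 3)]
print(periodic_schedule(jobs, interval=10, horizon=40))
# → {'B': 2, 'A': 6, 'C': 11, 'D': 14}
```

Jobs C and D, which arrive after the first rescheduling point, are simply ignored until the next snapshot is built, which is exactly the behavior that makes the policy stable but slow to react to severe disruptions.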

Under the “event driven policy”, rescheduling is performed in response to unexpected events that alter the current system status. Yamamoto and Nof (1985) used the event driven policy to study job shop scheduling problems with random machine breakdowns. In their research, the system would perform a rescheduling exercise whenever a machine breakdown occurred. Their experimental results showed that the event driven policy, with its lower computational burden and higher predictability, outperforms the periodic policy and priority dispatching rules. Vieira et al. (2000a) developed analytical models to compare the performance of the periodic policy and the event driven policy in a single-machine system with dynamic job arrivals, and Vieira et al. (2000b) extended this research to parallel-machine systems. Their results demonstrated that the rescheduling frequency has a great effect on system performance: a high rescheduling frequency allows the system to respond to disruptions quickly, but may increase the number of setups. Suwa and Sandoh (2003, 2007), and Suwa (2007) proposed a novel rescheduling strategy based on cumulative task delay, which in fact belongs to the event driven policy. In this approach, the cumulative task delay is calculated whenever a disruption occurs; if it exceeds a pre-defined threshold, rescheduling is performed.

The hybrid policy is a marriage of the periodic policy and the event driven policy. This policy reschedules the system both periodically as a matter of course and reactively when disruptions occur. Church and Uzsoy (1992) proposed a hybrid policy for rescheduling in single-machine and parallel-machine situations with dynamic job arrivals. In their study, the system conducted rescheduling periodically, and urgent events also triggered rescheduling actions immediately; events classified as regular were ignored until the next rescheduling point.
Their experimental results demonstrated that the periodic policy’s performance deteriorates as the length of the rescheduling interval increases, while the event driven policy can achieve reasonably good system performance.

The periodic policy and the hybrid policy have attracted great attention in rolling horizon environments (Church and Uzsoy 1992, Ovacik and Uzsoy 1994, Sabuncuoglu and Karabuk 1999, Vieira et al. 2000a, Aytug et al. 2005). The fundamental application of the rolling horizon setting to dynamic scheduling is due to the research of Muhlemann et al. (1982). They studied how rescheduling frequency affects system performance in a

dynamic job shop manufacturing environment with processing time variations and random machine breakdowns. As expected, their experimental results indicated that system performance deteriorates as the rescheduling interval increases. Ovacik and Uzsoy (1994) adopted the rolling horizon policy to study the dynamic scheduling problem for a single-machine system with sequence-dependent setup times, with the objective of minimizing the maximum lateness. They reported that rolling horizon scheduling outperforms dispatching rules.

Regarding the second issue, i.e., how to reschedule, two major rescheduling strategies are suggested in the literature: schedule repair and complete rescheduling (Sabuncuoglu and Bayiz 2000, Cowling and Johansson 2002, Vieira et al. 2003). Schedule repair approaches make local adjustments to the current production schedule. This is preferable in practice due to the potential savings in computation time and the preserved stability of the system. Complete rescheduling regenerates a totally new production schedule from scratch. This approach can obtain high-quality solutions, but may require prohibitive computation time. In addition, the production schedules obtained through complete rescheduling may deviate significantly from the original, which will affect other external planning activities and lead to additional production costs. Many researchers have reported that practical rescheduling is usually conducted via schedule repair, with complete rescheduling used only to a limited degree due to its rigorous computation time requirements (Sun and Xue 2001, Dorn et al. 1995, Abumaizar and Svestka 1997).

2.5.4 Dynamic production scheduling techniques

Many techniques are widely used in the field of dynamic scheduling. The most common approaches include dispatching rules, heuristics, meta-heuristics, and artificial intelligence techniques.

“Dispatching rules” play an important role in dynamic scheduling and attract great practical attention. A great variety of dispatching rules have been proposed over the past several decades; nonetheless, no single rule performs well for all criteria. Many scholars have conducted simulations to evaluate the performance of various dispatching rules in different dynamic manufacturing environments. Ramasesh (1990), and Rajendran and

Holthaus (1999) presented authoritative surveys of dispatching rules in dynamic job shops and flow shops. They assessed the performance of a large number of dispatching rules with respect to some common performance criteria, including the variance of flow time, minimum flow time, mean tardiness, and maximum tardiness. Generally speaking, processing time-based dispatching rules perform well in tight load conditions, while due date-based rules yield better performance in light load conditions. Kim and Kim (1994) developed a simulation-based scheduling system for dynamic scheduling problems. Their system consisted of two important components: a simulation mechanism and a reactive control mechanism. The function of the simulation mechanism is to evaluate various dispatching rules and select the one with the best performance, while the reactive control mechanism monitors the system’s manufacturing process and determines the suitable time for running the next simulation. Sabuncuoglu (1998) studied the comparative performance of dispatching rules in flexible manufacturing systems with various levels of machine breakdowns and changes in processing times; as anticipated, no single rule performed well under all possible conditions. Jeong and Kim (1998) used dispatching rules to solve real-time scheduling problems in flexible manufacturing systems with urgent job arrivals, machine breakdowns, and tool breakage. Holthaus (1999) presented a simulation analysis to compare the performance of various dispatching rules in job shops with machine breakdowns. The experimental results showed that dispatching rule performance is significantly affected by breakdown parameters.

“Heuristics” in the field of dynamic scheduling refer to schedule repair methods. Schedule repair methods can revise solutions with low computational effort, but cannot guarantee the quality of the revised solutions.
The most popular schedule repair heuristics include the “right-shift policy”, the “match-up policy”, and “partial schedule repair”. The right-shift policy is the most commonly used schedule repair method in dynamic scheduling. Its fundamental principle is to shift all of the remaining operations forward in time by an amount equal to the duration of the downtime. The match-up policy aims to make the revised schedule match up with the original one at some point in the future. Partial schedule repair revises only the operations affected directly or indirectly by the disruptions, while keeping other operations the same. Abumaizar and Svestka (1997) compared the performances of complete rescheduling, partial schedule repair, and the right-shift policy

with respect to the measures of efficiency and stability. Their research results showed that the right-shift policy has the worst performance with respect to makespan, because this approach represents simple schedule shifting based on downtime. Meanwhile, partial schedule repair can reduce much of the deviation and computational complexity associated with complete rescheduling. Mehta and Uzsoy (1999), and O’Donovan et al. (1999) adopted the right-shift policy to define predictable scheduling by inserting idle time into the production schedules. Bean et al. (1991) developed a match-up schedule repair heuristic for dynamic job shop scheduling situations with machine breakdowns. When breakdowns occur, this approach tries to obtain a revised schedule which matches up with the original one at some point in the future. Their research results indicated that this approach can generate near-optimal solutions while maintaining high predictability. Akturk and Gorgulu (1999) used this approach to study the dynamic scheduling problems of a flow shop. The experimental results demonstrated that this approach is effective in terms of computation time, solution quality, and system stability.

Other schedule repair heuristics have been proposed beyond the three popular ones stated above. Nof and Grant (1991) presented a variety of rescheduling strategies for manufacturing environments with processing time variations, machine breakdowns, and new job arrivals. Their strategies included rerouting jobs to alternative machines, job splitting, and complete rescheduling. Kutanoglu and Sabuncuoglu (2001) also adopted the strategy of rerouting jobs to alternative machines to deal with machine breakdowns. Lee and Uzsoy (1999) proposed two heuristics, namely the “delay schedule repair heuristic” and the “update schedule repair heuristic”, to study production scheduling problems on a single batch-processing machine with dynamic job arrivals, with the objective of minimizing the makespan.
The results demonstrated that these two heuristics perform relatively well with only a modest computational burden. Jain and Elmaraghy (1997) proposed several schedule repair strategies to deal with disruptions occurring in flexible manufacturing systems. When a machine breakdown occurs, the remaining operations assigned to that machine are transferred to alternative machines. When a new job arrives, a priority based on the EDD (earliest due date) or FCFS (first come first served) dispatching rules is assigned to the job if it is not a rush job; otherwise, the job receives the highest priority and all of the affected operations are moved forward in

time. When a job is cancelled, the remaining tasks are shifted forward in time on their respective machines.

“Meta-heuristics” have been widely used to handle dynamic production scheduling problems in recent decades. Simply put, meta-heuristics are high-level heuristics with the ability to escape local optima (Reeves 1995, Glover and Laguna 1997, Pham and Karaboga 2000). They improve on local search heuristics by either accepting worse solutions, or by generating good starting solutions for the local search in a more intelligent way than simply generating initial solutions randomly. Popular meta-heuristics include genetic algorithms, simulated annealing, and ant colony optimization. Dorn et al. (1995) and Zweben et al. (1994) used meta-heuristics rather than simple heuristics to repair production schedules, in order to prevent the solutions from becoming trapped in a local optimum. Zweben et al. (1994) adopted simulated annealing to repair production schedules for space shuttle ground operations. In their research, the system used a choice function to select a repair strategy from five repair heuristics, and then applied simulated annealing to perform multiple repair iterations. Dorn et al. (1995) used tabu search to repair production schedules in response to uncertain processing times in continuous steel caster scheduling. Mehta and Uzsoy (1999) used tabu search to generate predictable schedules in their predictable scheduling approach. Chryssolouris and Subramaniam (2001) used genetic algorithms for the dynamic scheduling of job shop environments featuring machine breakdowns and alternative job routes. In their research, genetic algorithms were used to generate a new production schedule whenever a dynamic event occurred. The solutions derived from genetic algorithms were compared with those obtained from common dispatching rules, demonstrating that genetic algorithms can significantly improve system performance. Wu et al.
(1991, 1993) compared the performance of genetic algorithms with that of local search heuristics in generating robust production schedules. The results showed that genetic algorithms are capable of generating good schedules with better makespan and higher predictive stability. Rossi and Dini (2000) used genetic algorithms for the batch scheduling of flexible manufacturing systems, considering the arrival of new batches, the unavailability of parts, and machine breakdowns. The results showed that genetic algorithms can greatly reduce the makespan. However, Bierwirth and Mattfeld (1999)

reported that the superiority of genetic algorithms decreases as the problem size becomes larger. In addition, the computation time required by genetic algorithms for large-scale problems is usually prohibitive.

Artificial intelligence techniques such as “knowledge based systems”, “neural networks”, and “fuzzy logic” have also been widely adopted in the field of dynamic scheduling. Knowledge based systems focus on capturing the expertise or experience of an expert within a specific domain. For example, the “Intelligent Scheduling and Information System” (ISIS) was developed at Carnegie Mellon University in 1982. Fox (1994) and Smith (1995) attempted to utilize this knowledge based system to solve job shop scheduling problems. In their study, ISIS performed a constraint-directed search to generate a production schedule. The “Opportunistic Intelligent Scheduler” (OPIS), a successor of ISIS, is another knowledge based system; it implements a blackboard architecture in which a set of distinct heuristics are selectively employed to generate and revise the overall production schedule. The schedule repair heuristics in OPIS include a job scheduler, a resource scheduler, a right-shifter, a left-shifter, and a demand swapper (Smith 1994). Later, other knowledge based systems such as the “Intelligent Operations Scheduling System” (IOSS) (Park et al. 1996) and SONIA (Le Pape 1994) were proposed and then applied to dynamic scheduling. Some researchers have combined knowledge based systems and simulation to determine the most suitable corrective actions for handling real-time events (Belz and Mertens 1996). Other artificial intelligence techniques used in dynamic scheduling can be found in Suresh and Chaudhuri (1993), Szelke and Kerr (1994), Zweben and Fox (1994), Kerr and Szelke (1995), and Meziane et al. (2000).
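To close this subsection, the widely used right-shift repair policy discussed above can be sketched in a few lines. The single-machine schedule representation below is an illustrative editor assumption:

```python
# Hedged sketch of right-shift schedule repair: after a breakdown, every
# operation on the affected machine that has not yet started is pushed
# later by the length of the downtime.

def right_shift(schedule, breakdown_start, downtime):
    """schedule: list of (job, start, end) tuples, non-overlapping on one
    machine. Operations starting at or after the breakdown are delayed by
    `downtime`; an operation in progress is extended to cover the lost time."""
    repaired = []
    for job, start, end in schedule:
        if start >= breakdown_start:
            repaired.append((job, start + downtime, end + downtime))
        elif end > breakdown_start:              # interrupted mid-operation
            repaired.append((job, start, end + downtime))
        else:                                    # already finished
            repaired.append((job, start, end))
    return repaired

schedule = [("J1", 0, 3), ("J2", 3, 7), ("J3", 7, 9)]
print(right_shift(schedule, breakdown_start=4, downtime=2))
# → [('J1', 0, 3), ('J2', 3, 9), ('J3', 9, 11)]
```

Because every remaining operation inherits the full downtime, the repaired schedule preserves the original sequence (maximum stability) but makes no attempt to recover makespan, which matches the poor makespan performance reported for this policy above.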

2.6 Chapter Summary

This chapter has presented a comprehensive literature review covering topics including the development of traditional cellular manufacturing systems, the evolution to virtual cellular manufacturing systems, and the characteristics of particle swarm optimization, ant colony optimization, constraint programming, and dynamic production scheduling. With reference to these aspects, several important observations about production scheduling in general are noted in this chapter.

Traditional cellular manufacturing has long been considered an efficient means to improve the productivity of batch production systems because it combines a number of the advantages of the product and process layouts. The essential requirement of designing an efficient cellular manufacturing system is to identify part families and form manufacturing cells, and then allocate the part families to these manufacturing cells accordingly. The cell formation problem – the core of designing a cellular manufacturing system – is of strategic and operational importance as it affects the fundamental structure and overall performance of the system. A large number of approaches have been developed to achieve this purpose. However, cellular manufacturing systems suffer from unbalanced workloads and low machine utilization, cannot offer highly flexible production plans, and cannot respond efficiently to changes in product pattern. In order to overcome these deficiencies, a new manufacturing concept called virtual cellular manufacturing was proposed. Virtual cellular manufacturing systems combine the setup and material handling efficiency typically associated with traditional cellular manufacturing systems with the high flexibility inherited from the process layout. Unlike in traditional cellular manufacturing systems, the workstations in a virtual manufacturing cell are not physically grouped together, but rather spread over the production floor. According to the job manufacturing requirements and the current system status, machines and other production resources are temporarily grouped to form a virtual manufacturing cell for a part family instead of being permanently dedicated to it. As workstations are shared among the virtual cells, a VCMS improves machine utilization and can respond to production pattern changes more efficiently.
Despite its growing popularity, only a few attempts have thus far been made to develop optimization-based scheduling algorithms for the still-young VCMS concept. This chapter has also reviewed the development of solution techniques for combinatorial optimization problems, namely particle swarm optimization, ant colony optimization, and constraint programming. Particle swarm optimization is a population-based algorithm inspired by observations of bird flocking and fish schooling. The idea of this approach is to locate the global optimum through the cooperation of the particles in the swarm. Many particle swarm optimization variants have since been

developed to solve various optimization problems. Ant colony optimization is also a population-based algorithm, inspired by observations of the foraging behavior of real ants. This approach guides the search in much the same way that ants communicate with each other: through pheromone trails. A great variety of ACO variants, including "Elitist AS", "Rank-based AS", "MAX-MIN AS", and "Ant Colony System", have been proposed to improve search efficiency. Constraint programming is an effective technique for solving problems with hard constraints. Its contribution is to eliminate infeasible regions of the search space based on the problem constraints. Many search techniques such as backtracking, backjumping, and backmarking have been proposed to improve its efficiency. The applications of these algorithms to combinatorial optimization problems have been studied extensively in the literature. Since practical scheduling problems are essentially rescheduling problems, research in dynamic production scheduling has also been summarized. Four categories of rescheduling approaches, namely completely reactive scheduling, predictive-reactive scheduling, robust predictive-reactive scheduling, and robust pro-active scheduling, have been introduced in detail. The core of dynamic production scheduling is to take correct rescheduling actions at suitable time points in response to real-time disruptions occurring on the production floor. Thus, two major issues in dynamic scheduling are when-to-reschedule and how-to-reschedule. When-to-reschedule determines the suitable time points at which to take rescheduling actions; three popular strategies are often used: periodic, event-driven, and hybrid rescheduling. How-to-reschedule refers to the selection of suitable approaches to generate new production schedules; the two main alternatives are schedule repair and complete rescheduling.
In addition, several dynamic scheduling methods have been presented, including dispatching rules, heuristics, meta-heuristics, and artificial intelligence techniques. Dynamic production scheduling will remain an active research topic owing to its direct applications in practical manufacturing environments.


CHAPTER 3 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER A SINGLE-PERIOD MANUFACTURING ENVIRONMENT

3.1 Introduction
Owing to the complexity of the problem, relatively little research has been conducted in the field of production scheduling for virtual cellular manufacturing systems until now. One of the earliest studies in this field was given by Drolet (1989). His research developed a linear mathematical model to formulate production schedules for VCMSs that considered production capacity constraints. In order to solve this complex problem within an acceptable computational time, all of the integer requirements in the mathematical model were relaxed. The problem was solved by obtaining the optimal solution of the relaxed model and then rounding off all of the fractions to meet the integer requirements. Although this approach significantly reduces the computational effort, it cannot guarantee the feasibility, let alone the optimality, of the obtained production schedule. A decade later, Wong (1999) developed a mathematical model for VCMSs that minimizes the material travelling distance. In his study, production orders with large production volumes were decomposed into a number of small jobs, each of which can have a different production route. A production schedule was then generated by using an improved genetic algorithm with an additional attribute called "age" to optimize the objective function. Mak et al. (2005) later modified the model developed by Wong (1999), retaining the same objective function. In their mathematical model, the planning horizon was divided into a certain number of equal time slices, and each job was assigned a unique production route so as to simplify the material flow management. The age-based genetic algorithm was also adopted to solve the problem in their research.


The existing mathematical models for VCMS production scheduling problems have several deficiencies: none of them considers workforce requirements, and the objective function is usually too simple. A more realistic mathematical model with workforce requirements is therefore developed in this chapter. Its primary contribution is to formulate enhanced production schedules for VCMSs operating in a single-period manufacturing environment. The new model minimizes the total manufacturing cost within the planning horizon while including realistic cost components such as machine operating costs, material transportation costs, workers' salaries, and subcontracting costs. Furthermore, the model is highly adaptable: the forms of the objective function and the constraints can be easily modified according to actual manufacturing requirements. In order to solve this complex production scheduling problem, a hybrid algorithm (ACPSO), based on the techniques of discrete particle swarm optimization, ant colony system, and constraint programming, is proposed to locate a good solution within a reasonable computational time. Each constituent algorithm contributes distinct advantages. Particle swarm optimization, a population-based meta-heuristic, performs competitively with many other meta-heuristics on most optimization problems. Discrete particle swarm optimization is the discrete version of particle swarm optimization, proposed to meet the demand for solving problems with discrete search spaces. Although the particle swarm optimization approach (discrete or otherwise) performs relatively well, the evolution process may nonetheless stagnate over time as the swarm approaches equilibrium. This is especially common in problems with hard constraints.
Constraint programming, on the other hand, is an effective technique for solving problems with hard constraints, but may be inefficient if the feasible search space is very large. Ant colony system, a variant of ant colony optimization, has great ability to increase the exploration of the search space. Therefore, the aim of the proposed hybrid algorithm is to combine their complementary advantages in order to improve search performance. Sensitivity analyses have also been conducted to study the effects of the key parameters of the hybrid algorithm on search performance.


3.2 The Production Scheduling Framework
In order to facilitate the formulation of production schedules for a manufacturing system, a production scheduling framework is often used to help attain relatively high production efficiency and routing flexibility. Generally, the production scheduling framework includes four phases: the production order loading phase, the production scheduling phase, the production schedule executing phase, and the production situation monitoring phase. These four phases are represented in Figure 3-1.
Figure 3-1. Schematic diagram of the production scheduling framework. First, in the production order loading phase, the framework compiles information about the incoming jobs. The collected data include the operation sequence of each job, the unit processing time required on each workstation, the production volume of each job, and the delivery due date of each job, etc. This exercise serves as the preparation work for the production scheduling phase.


The second phase, production scheduling, is the most crucial phase in the framework. It involves the formulation of production schedules for all of the jobs under consideration. This phase has two important tasks: (1) determining a suitable job production sequence, and (2) allocating suitable production resources to each job. A good production schedule should satisfy several important conditions. First, it should reduce the material travelling distance among workstations as much as possible. Second, it should respect delivery due date requirements so as to meet customer demand. Third, it should satisfy all of the system constraints, such as the production capacity constraints. The production schedule itself serves two basic purposes: it allocates the limited production resources to competing jobs over time, and it serves as the basis for planning other external activities such as material procurement and preventive maintenance. The third phase, production schedule execution, implements the production schedule on the production floor. Production managers must pay close attention to two important considerations during this phase. First, they need to ensure the availability of all production-related resources. Second, the material flow in each virtual manufacturing cell must be strictly controlled according to the production schedule so that the jobs are dispatched at the appropriate processing rates; this avoids the undesirable accumulation of work-in-process inventory between workstations. The final phase, production situation monitoring, monitors the execution of the production schedule. This phase is actually embedded within the production schedule executing phase and continuously updates the production status. It serves two important functions. First, it monitors the manufacturing progress of the scheduled jobs.
When any deviation between the production plan and the actual production progress is detected, the production scheduling module will be triggered to deal with it. Second, it detects disruptions occurring in the system. When a disruption is detected, the framework informs the production scheduling module of its impact so that the current production schedule can be amended appropriately.
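The four phases above form a closed loop: orders are loaded, a schedule is produced, the schedule is executed, and monitoring feeds deviations back into the scheduling phase. A minimal sketch of this loop is given below; all class and function names are illustrative assumptions, not part of the framework itself, and the job record reuses the data of job 9 from Table 3-1.

```python
# Minimal sketch of the four-phase scheduling framework (illustrative only).
from dataclasses import dataclass

@dataclass
class Job:
    job_id: int
    route: list        # ordered workstation types, e.g. ["B", "C", "D"]
    unit_times: list   # seconds per unit on each workstation of the route
    volume: int
    due_date: int      # expressed as a time-slice index

def load_orders(raw_orders):
    """Phase 1: compile the information of the incoming jobs."""
    return [Job(**o) for o in raw_orders]

def build_schedule(jobs):
    """Phase 2: sequence jobs and allocate resources (placeholder logic)."""
    return {j.job_id: list(enumerate(j.route)) for j in jobs}

def execute_and_monitor(schedule):
    """Phases 3-4: run the schedule; return detected deviations/disruptions."""
    deviations = []    # in practice filled by the shop-floor monitor
    return deviations

jobs = load_orders([{"job_id": 9, "route": ["B", "C", "D"],
                     "unit_times": [32, 43, 53], "volume": 46, "due_date": 20}])
schedule = build_schedule(jobs)
while execute_and_monitor(schedule):   # reschedule only when deviations occur
    schedule = build_schedule(jobs)
```

The placeholder `build_schedule` stands in for the optimization described in the rest of this chapter; the point of the sketch is the feedback structure, not the scheduling logic.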


3.3 Mathematical Modeling for Production Scheduling in VCMSs under a Single-Period Manufacturing Environment
A good production schedule for VCMSs operating in a single-period situation should provide the following three types of information (Mak et al. 2005). First, it should specify the production resources that should be grouped to form virtual manufacturing cells. Second, it should identify the bottleneck resources in each virtual manufacturing cell and determine the most appropriate processing rates for the assigned jobs. Third, it should specify the suitable times to create and terminate the virtual manufacturing cells. In this section, a mathematical model is developed to describe the characteristics of VCMSs with workforce requirements operating in a single-period situation. In order to formulate a good production schedule satisfying the aforementioned requirements, many constraints are included in the mathematical model, such as the delivery due date of each job, the maximum production capacities of the production resources, and the appropriate processing rates of each job to meet the requirement of no work-in-process inventory between workstations. The objective of this model is to minimize the total manufacturing cost within the entire planning horizon.
3.3.1 Assumptions

The mathematical model is developed under the following assumptions:
(1) Each type of job consists of a certain number of operations that must be manufactured according to the production route;
(2) Similar job operations must be manufactured on the same workstation and handled by the same skilled worker;
(3) The processing time of each job operation is deterministic and known. The machine setup time is included in the processing time. Moreover, the processing time of an operation is the same on any workstation that can produce it;
(4) The production volume and the delivery due date of each job are deterministic and known;
(5) The distance between any two workstations and the transportation cost of each job are deterministic and known;
(6) The planning horizon is divided into a number of equal time slices. In addition, no work-in-process inventory is allowed between workstations. That is, the processing rates of each job must satisfy the condition that the production output of any operation in a time slice equals that of its preceding operation in the previous time slice and that of its succeeding operation in the next time slice;
(7) Each workstation can handle at most one operation at a time, and an operation cannot be interrupted once started in any time slice;
(8) The salary of each worker per time slice is deterministic and known. Moreover, a worker receives full payment for a time slice if he is assigned to it at all, regardless of the length of his actual working period; otherwise, the payment for that time slice is zero;
(9) The number of workers is constant within the planning horizon. Each worker can handle at most one workstation at a time. In addition, the ability of each worker remains the same during the entire planning horizon;
(10) The subcontracting cost of each job is much higher than the other manufacturing costs; and
(11) The material transportation time between workstations is negligible.

3.3.2 Notations

The following notations are used to develop the mathematical model:

$w, w'$ = workstation index, $w, w' \in \{1, 2, \ldots, W\}$, where $W$ is the total number of workstations.
$r$ = a production route.
$w(r)$ = workstation $w$ used in production route $r$.
$j, j'$ = job type, $j, j' \in \{1, 2, \ldots, N\}$, where $N$ is the number of job types.
$O_{j,i}$ = operation $i$ of job $j$.
$K_j$ = the total number of operations of job $j$.
$DD_j$ = the delivery due date of job $j$, expressed as the index of the time slice by whose end the job must be completed.
$V_j$ = the production volume (customer demand) of job $j$.
$SV_j$ = the subcontracting volume of job $j$.
$l$ = labor index, $l \in \{1, 2, \ldots, L\}$, where $L$ is the total number of workers.
$s, s'$ = time slice index, $s, s' \in \{1, 2, \ldots, S\}$, where $S$ denotes the planning horizon.
$PL$ = the length of a time slice.
$MC_{w,s}$ = the maximum capacity of workstation $w$ in time slice $s$.
$pt_{j,i,w(r)}$ = the processing time of producing one unit of operation $i$ of job $j$ on workstation $w$ in production route $r$.
$PR_{j,i,w(r),s}$ = a decision variable representing the processing rate of operation $i$ of job $j$ on workstation $w$ in production route $r$ during time slice $s$.
$D(r)$ = the travelling distance of production route $r$.
$E_{l,w}$ = one if worker $l$ has the ability to operate workstation $w$; otherwise zero.
$Z_{j,i,l}$ = one if worker $l$ is assigned to handle operation $i$ of job $j$; otherwise zero.
$Z_{l,s}$ = one if worker $l$ is assigned to time slice $s$; otherwise zero.
$st_{j,i,w(r),s}$ = the starting time of operation $i$ of job $j$ on workstation $w$ of route $r$ in time slice $s$.
$ft_{j,i,w(r),s}$ = the completion time of operation $i$ of job $j$ on workstation $w$ of route $r$ in time slice $s$.
$TIN_{l,w,s}$ = the time interval in which worker $l$ is operating workstation $w$ in time slice $s$.
$X_{j,i,w(r),s}$ = a zero-one decision variable equal to one if job $j$ has operation $i$ launched on workstation $w$ of production route $r$ in time slice $s$; otherwise zero. The operation may take more than one time slice to complete.
$Y_{j,i,w(r),s}$ = a zero-one decision variable equal to one if job $j$ has operation $i$ processed on workstation $w$ of production route $r$ in time slice $s$; otherwise zero.
$TN_l$ = the total number of time slices in which worker $l$ is assigned to operate workstations within the planning horizon.
$\alpha_j$ = the cost of moving one unit of job $j$ per unit distance.
$\beta_l$ = the salary of worker $l$ per time slice.
$\gamma_w$ = the operating cost of workstation $w$ per unit time.
$\delta_j$ = the subcontracting cost of job $j$ per unit.

3.3.3 Mathematical model

The mathematical model of the production scheduling problem is represented as follows: minimize:

S  ( K j 1) K j

N

  (   X j

j 1

w ( r )

W

S

s 1

N

i 1

L

)  D (r )  (V j  SV j )    l  TN l  

j ,i , w ( r ), s  i 1

l 1

Kj

N

   w  Y j ,i , w( r ), s  PR j ,i , w ( r ), s  pt j ,i , w( r )    j  SV j     w 1

s 1 j 1 i 1

(3-1)

j 1

subject to: X j ,i , w( r ), s  i 1  X j ,i 1, w( r ), s i

O j ,i , w( r ), s

PR j ,i , w( r ), s i 1  PR j ,i 1, w ( r ), s i S  ( K j 1) K j

  X

w ( r )

s 1

i 1

j ,i , w ( r ), s  i 1

1

Y j ,i , w ( r ), s'  (1  X j ,i , w( r ), s )G

SV j 

DD j  ( K j 1)

 s 1

O j ,i , w( r ), s j

O j ,i , w( r ), s, s '  s

PR j ,i , w( r ), s  V j 3‐8 

 

O j ,i

(3-2) (3-3)

(3-4)

(3-5)

(3-6)

st j ,i , w ( r ), s  ( s  1)  PL

O j ,i , w( r ), s

ft j ,i , w ( r ), s  st j ,i , w( r ), s  PR j ,i , w ( r ), s  pt j ,i , w ( r ) O j ,i , w( r ), s Y j ,i , w( r ), s ft j ,i , w ( r ), s  s  PL N

Kj

 Y j 1 i 1

j ,i , w ( r ), s

O j ,i , w( r ), s

PR j ,i , w ( r ), s pt j ,i , w( r )  MCw, s w, s

 0  Y j ,i , w ( r ), s  1 PR j ,i , w ( r ), s   0  Y j ,i , w ( r ), s  0 L

Y j ,i , w( r ), s   El , w Z j ,i ,l

O j ,i , w(r ), s

O j ,i , w(r ), s

l 1

Z l , s  max ( Z j ,i ,lY j ,i , w( r ), s ) O j ,i , w ( r )

S

TN l   Z l , s

l , s

l

(3-7) (3-8) (3-9)

(3-10)

(3-11)

(3-12) (3-13)

(3-14)

s 1

TIN l , w, s   [ Z j ,i ,lY j ,i , w ( r ), s st j ,i , w ( r ), s , Z j ,i ,lY j ,i , w( r ), s ft j ,i , w( r ), s ) l , s, w O j ,i

TINl , w, s  TINl , w' , s   L

Z l 1

j ,i ,l

1

w  w' , l , s

O j ,i

X j ,i , w( r ), s , Y j ,i , w( r ), s , Zl , s , Z j ,i ,l {0,1}

(3-15) (3-16)

(3-17)

j, j ' , i, w(r ) / w, l , s

(3-18)

where G is a large integer. The objective function (3-1) of the mathematical model is to minimize the total manufacturing cost related to the production schedule within the entire planning horizon. The first term represents the material transportation costs between workstations, the


second term signifies the workers’ salaries, the third term denotes the machine operating costs, and the last term indicates the subcontracting costs for all of the jobs. Constraints (3-2) ensure that once operation i of job j is completed, operation i  1 of job j must start immediately in the next time slice. Constraints (3-3) guarantee that no work-in-process inventory is allowed between workstations. That is, the processing rate of any operation in a time slice must be equal to that of its preceding operation in the last time slice, and that of its succeeding operation in the next time slice. Constraints (3-4) provide that the starting times of all operations will be within the planning horizon. They also guarantee that each job will have only one unique production route. Constraints (3-5) make certain that no production can start before the time slice in which that production is launched. Constraints (3-6) show the relationship among product demand, actual production volume, and the subcontracting volume of each job. These constraints guarantee that customer demand will be met through manufacturing supply, whether it is produced by the company itself or through subcontracting. Constraints (3-7) to (3-9) state the relationship between the starting time and completion time of each operation in every time slice. Constraints (3-10) make sure that all jobs assigned on a workstation can be finished in each time slice. Constraints (3-11) stipulate that the processing rate must be greater than or equal to zero. Constraints (3-12) demand that at least one skilled worker will operate the workstation when needed.
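Constraints (3-10) amount to a simple per-slice load check: the total processing time assigned to one workstation in one time slice must not exceed its capacity. The sketch below applies this check; the operation data and the use of a full 300-second slice as the capacity are illustrative assumptions, not values derived from the model.

```python
# Hedged sketch of the capacity check behind constraints (3-10).
PL = 300  # slice length in seconds (as in Section 3.4.1)

def slice_load(assignments):
    """assignments: list of (processing_rate, unit_time_seconds) pairs
    scheduled on one workstation in a single time slice."""
    return sum(rate * unit_time for rate, unit_time in assignments)

def capacity_ok(assignments, capacity=PL):
    # Constraint (3-10): sum of rate * unit processing time <= MC_{w,s}
    return slice_load(assignments) <= capacity

# Two operations sharing one workstation in one slice:
ops = [(3, 67), (2, 46)]          # 3*67 + 2*46 = 293 seconds
print(capacity_ok(ops))           # fits within 300 seconds
print(capacity_ok([(4, 84)]))     # 336 seconds exceeds the slice
```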


Constraints (3-13) determine whether a given worker is assigned to a given time slice so as to facilitate the calculation of wages. Constraints (3-14) count the number of time slices to which each worker is assigned. Constraints (3-15) give the time interval in which worker l stays on workstation w in time slice s in order to operate this workstation. Constraints (3-16) ensure that a worker can operate at most one workstation at a time. Constraints (3-17) require that each type of job operation is handled by the same skilled worker. Constraints (3-18) indicate that these variables are binary. This mathematical model is non-linear, and the production scheduling problems it represents are NP-hard. This means that the computation time needed to locate the global optimum with exact algorithms increases exponentially with the problem size. In practical applications, it is usually desirable to obtain a near-optimal (or at least a relatively good) solution within an acceptable computational time window. Thus many meta-heuristics, such as genetic algorithms, ant colony optimization, and particle swarm optimization, have been widely used in the field of production scheduling. In the mathematical model, MC_{w,s} < PL means that there is some preload on workstation w in time slice s, while MC_{w,s} = PL indicates that this workstation is available at any time in the time slice before scheduling. These two cases only affect the production capacities of the production resources and have little effect on the nature of the mathematical model. Without loss of generality, the assumption MC_{w,s} = PL is made in the following research.
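To make the structure of objective (3-1) concrete, the sketch below sums its four cost terms for a single job: transportation, wages, machine operating cost, and subcontracting. All numerical values (route distance, worker assignments, slice counts) are illustrative assumptions for demonstration, not an optimized schedule.

```python
# Illustrative evaluation of the four cost terms of objective (3-1).
def total_cost(alpha, D_r, V, SV, salaries_TN, machine_terms, delta):
    transport   = alpha * D_r * (V - SV)                             # 1st term
    wages       = sum(b * tn for b, tn in salaries_TN)               # 2nd term
    operating   = sum(g * rate * pt for g, rate, pt in machine_terms)  # 3rd term
    subcontract = delta * SV                                         # 4th term
    return transport + wages + operating + subcontract

# Assumed data loosely modelled on job 9 (alpha=2, V=46, delta=950 from
# Table 3-1); D_r, the worker slice counts, and gamma values are assumptions.
cost = total_cost(
    alpha=2, D_r=11, V=46, SV=0,
    salaries_TN=[(100, 6), (90, 6), (100, 6)],          # beta_l * TN_l
    machine_terms=[(1, 46, 32), (1, 46, 43), (1, 46, 53)],  # gamma*PR*pt
    delta=950)
print(cost)   # 8640
```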


3.4 Illustrative Example
A simple example is shown in this section to facilitate the understanding of the characteristics of VCMSs operating in a single-period manufacturing environment.

3.4.1 Manufacturing system configuration
Eleven workstations are arranged on the production floor in this virtual cellular manufacturing system, specifically: two drilling machines (denoted as A1 and A2), three multi-cutter lathes (denoted as B3, B4 and B5), three CNC turning machines (denoted as C6, C7 and C8), two CNC pinhole boring machines (denoted as D9 and D10), and one lathe (denoted as E11). A list of incoming jobs for the piston production unit and the corresponding manufacturing information are displayed in Table 3-1. The length of each time slice is 300 seconds.

Job | Production route (processing time) | Volume | Due date (time slice) | Subcontracting cost per unit | Transportation cost per unit distance per unit
1  | B(63)-C(84)-D(72)             | 15 | 20 | 950  | 2
2  | B(82)-C(109)-D(93)-E(42)      | 12 | 32 | 1000 | 1
3  | A(50)-B(74)-C(92)-D(69)-E(42) | 13 | 17 | 850  | 3
4  | B(66)-C(88)-D(69)             | 15 | 35 | 900  | 2
5  | A(45)-B(56)-C(75)-D(59)       | 17 | 35 | 1000 | 1
6  | A(30)-B(50)-C(67)-D(57)       | 18 | 36 | 950  | 1
7  | B(35)-C(46)-D(53)             | 45 | 16 | 900  | 2
8  | A(25)-B(35)-C(46)-D(50)       | 48 | 30 | 900  | 3
9  | B(32)-C(43)-D(53)             | 46 | 20 | 950  | 2
10 | A(35)-B(38)-C(50)-D(46)       | 25 | 12 | 850  | 2

Table 3-1. The manufacturing information for the list of incoming jobs.

Table 3-1 shows the manufacturing information of the list of incoming jobs. Consider job 9, for example, with its production demand of 46 units. The production route runs from workstation B, to C, to D, in that order. The unit processing times on workstations B, C, and D are 32, 43 and 53 seconds, respectively. The delivery due date is at the end of

the 20th time slice. The transportation cost of this job per unit distance between workstations is two units. In VCMSs, the workstations are widely spread over the production floor. The material travelling distances among the workstations are displayed in Table 3-2. For instance, the distance between workstations A1 and A2 is eight units on the production floor.

From/to | A1 | A2 | B3 | B4 | B5 | C6 | C7 | C8 | D9 | D10 | E11
A1  |  0 |  8 |  3 | 17 | 18 | 13 |  4 | 14 |  9 |  9 | 13
A2  |  8 |  0 |  9 |  9 | 10 |  5 | 10 |  6 |  4 |  5 |  4
B3  |  3 |  9 |  0 | 14 | 20 | 10 |  6 | 16 |  9 |  5 | 13
B4  | 17 |  9 | 14 |  0 |  6 |  5 | 19 |  9 | 14 | 10 |  5
B5  | 18 | 10 | 20 |  6 |  0 | 11 | 15 |  5 | 10 | 15 |  6
C6  | 13 |  5 | 10 |  5 | 11 |  0 | 14 |  7 |  9 |  6 |  4
C7  |  4 | 10 |  6 | 19 | 15 | 14 |  0 | 10 |  5 |  9 | 13
C8  | 14 |  6 | 16 |  9 |  5 |  7 | 10 |  0 |  5 | 12 |  3
D9  |  9 |  4 |  9 | 14 | 10 |  9 |  5 |  5 |  0 |  6 |  7
D10 |  9 |  5 |  5 | 10 | 15 |  6 |  9 | 12 |  6 |  0 |  9
E11 | 13 |  4 | 13 |  5 |  6 |  4 | 13 |  3 |  7 |  9 |  0

Table 3-2. Travelling distances among the workstations.

The operating cost of each workstation per second is listed in Table 3-3. For instance, the operating cost of workstation A1 is one unit per second.

Workstation    | A1 | A2 | B3 | B4 | B5 | C6 | C7 | C8 | D9 | D10 | E11
Operating cost |  1 |  2 |  3 |  2 |  1 |  1 |  1 |  2 |  2 |  1  |  2

Table 3-3. Operating cost of each workstation per second.

During the manufacturing process, each workstation must be operated by a skilled worker. The characteristics of all of the workers in the system are listed in Table 3-4. For instance, worker 1 can operate three types of workstations (type A, type B, and type E), and his salary is 100 units per time slice.

Worker no. | Skill type | Salary per time slice
W1  | A, B, E | 100
W2  | B, C, D | 120
W3  | A, D    | 80
W4  | C, D, E | 100
W5  | A, C    | 90
W6  | B, C, E | 110
W7  | A, B, D | 100
W8  | B, D, E | 110
W9  | C, D, E | 120
W10 | A, B, C | 100

Table 3-4. Worker characteristics.
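The travelling distance D(r) of a candidate route is simply the sum of consecutive workstation-to-workstation distances in Table 3-2, and the first term of objective (3-1) multiplies it by the unit transportation cost and the produced volume. The sketch below does this for one possible route of job 9; the choice of workstations B5, C8 and D10 is an assumption for illustration, and only the distances actually used are reproduced.

```python
# Sketch: route distance D(r) and the resulting transport-cost term.
dist = {("B5", "C8"): 5, ("C8", "D10"): 12}   # distances from Table 3-2

def route_distance(route):
    """Sum the leg distances along consecutive workstations of a route."""
    return sum(dist[(a, b)] for a, b in zip(route, route[1:]))

route = ["B5", "C8", "D10"]      # one candidate route for job 9 (B -> C -> D)
D_r = route_distance(route)      # 5 + 12 = 17
alpha_9, volume_9 = 2, 46        # from Table 3-1
print(D_r, alpha_9 * D_r * volume_9)   # 17 1564
```

Because different workstation choices give different D(r) values, the route selection directly trades transportation cost against workstation availability.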

3.4.2 Requirement of no work-in-process inventory between workstations
Work-in-process inventory between workstations is not allowed in VCMSs. This prohibition simplifies material flow management while also facilitating the creation and termination of virtual manufacturing cells. Constraints (3-2) and (3-3) in the mathematical model are developed to ensure these requirements. Table 3-5 provides an example of the production outputs of a job to illustrate constraints (3-2) and (3-3). In this table, the value in brackets represents the maximum processing rate of an operation during the time slice. For instance, the maximum processing rate of operation 1 in time slice 1 is five, that of operation 2 in time slice 2 is eight, and that of operation 3 in time slice 3 is six. Thus, the feasible processing rate of operation 1 in time slice 1 (and hence of operation 2 in time slice 2 and operation 3 in time slice 3) is five. This ensures that no work-in-process inventory exists between workstations, and that the succeeding operation starts immediately in the next time slice. Notably, after scheduling this job, some workstations or workers may have a certain amount of remaining capacity in some time slices, allowing them to be assigned to other manufacturing cells to produce other incoming jobs.


Time slice | Operation 1 (workstation 2, worker 1) | Operation 2 (workstation 7, worker 6) | Operation 3 (workstation 4, worker 8)
1 | 5 (5) | 0     | 0
2 | 2 (2) | 5 (8) | 0
3 | 3 (4) | 2 (5) | 5 (6)
4 | 0     | 3 (3) | 2 (7)
5 | 0     | 0     | 3 (6)

Table 3-5. Sample production outputs of a job.
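The no-WIP rule behind Table 3-5 can be stated computationally: a batch launched on operation 1 in time slice s is processed by operation i in slice s + i − 1, so its feasible rate is the minimum of the bracketed maximum rates along that diagonal. The sketch below reproduces the feasible rates of Table 3-5 under this reading; the function name and data layout are my own, not notation from the thesis.

```python
# Sketch of the no-WIP processing-rate computation illustrated by Table 3-5.
# max_rate[i] lists the bracketed maximum rates of operation i per time slice
# (None = the operation is not available in that slice).
max_rate = {1: [5, 2, 4, None, None],
            2: [None, 8, 5, 3, None],
            3: [None, None, 6, 7, 6]}

def feasible_rates(max_rate, launch_slices):
    """For each 1-based launch slice s, take the minimum of the maximum
    rates along the diagonal (operation i runs in slice s + i - 1)."""
    rates = []
    for s in launch_slices:
        chain = [max_rate[i][(s - 1) + (i - 1)] for i in max_rate]
        rates.append(min(chain))
    return rates

print(feasible_rates(max_rate, [1, 2, 3]))   # [5, 2, 3], as in Table 3-5
```

The three diagonals give min(5, 8, 6) = 5, min(2, 5, 7) = 2, and min(4, 3, 6) = 3, matching the unbracketed outputs in the table.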

3.4.3 Production schedule of all of the jobs
Figure 3-2 shows the workstation production schedule for the 10 jobs in the example. It is clear that all of the jobs can be completed within 36 time slices. In this figure, a broken line indicates that the corresponding virtual manufacturing cell still exists, but that the processing rate of the corresponding job in those time slices is zero due to the limited capacities of the production resources. For example, the first operation of job 10 is assigned to workstation A1, and its manufacturing period ranges from time slice 1 to time slice 5. Figure 3-3 shows the production schedule of the workers for the 10 jobs. A broken line in this figure likewise indicates that the virtual manufacturing cell still exists but the processing rate is zero. For example, the manufacturing of the second operation of job 10 is handled by worker 1, and its manufacturing period covers time slice 2 to time slice 6. Table 3-6 presents a list of the virtual manufacturing cells, including the workstations and workers composing each cell and the times at which the cells are created and terminated.


 

Figure 3-2. The production schedule of the workstations.        


 

Figure 3-3. The production schedule of the workers.



 

Job | Virtual cell configuration (workstations / workers) | Op. 1 created/terminated | Op. 2 created/terminated | Op. 3 created/terminated | Op. 4 created/terminated | Op. 5 created/terminated
1  | B4—C8—D9 / W8—W2—W9               | 1/5   | 2/6   | 3/7   | —     | —
2  | B4—C8—D10—E11 / W7—W6—W3—W1       | 8/17  | 9/18  | 10/19 | 4/8   | —
3  | A1—B3—C6—D10—E11 / W7—W8—W2—W9—W4 | 13/24 | 14/25 | 15/26 | 16/27 | 9/15
4  | B5—C7—D9 / W1—W6—W3               | 6/14  | 7/15  | 8/16  | —     | —
5  | A2—B4—C6—D10 / W3—W7—W5—W4        | 16/33 | 17/34 | 18/35 | 19/36 | —
6  | A1—B4—C7—D10 / W10—W8—W6—W3       | 25/30 | 26/31 | 27/32 | 28/33 | —
7  | B5—C7—D9 / W1—W5—W3               | 2/29  | 3/30  | 4/31  | —     | —
8  | A1—B3—C6—D9 / W3—W10—W5—W4        | 5/11  | 6/12  | 7/13  | 8/14  | —
9  | B5—C8—D10 / W1—W10—W7             | 15/26 | 16/27 | 17/28 | —     | —
10 | A1—B5—C6—D10 / W3—W1—W5—W4        | 2/17  | 3/18  | 4/19  | 18/29 | —

Table 3-6. A summary of the formation of virtual manufacturing cells.

3.5 Solution Algorithms
In order to facilitate the formation of virtual manufacturing cells, a hybrid algorithm (CPSO), based on the techniques of discrete particle swarm optimization and constraint programming, is first developed to solve the complex production scheduling problem of single-period VCMSs. Then, to further improve search performance, the ant colony system principles are incorporated into CPSO to develop another hybrid algorithm called ACPSO.

3.5.1 Discrete particle swarm optimization

Discrete particle swarm optimization (DPSO), the discrete version of particle swarm optimization, was proposed to meet the demands of solving optimization problems with discrete search spaces. Its application to the production scheduling problem of single-period VCMSs is detailed as follows.

1. Definition of discrete particles
To solve the production scheduling problem of single-period VCMSs using the DPSO approach, particle k of the swarm at iteration t is represented as

X kt  ( Skt ,1 , Skt ,2 ..., Skt , N ; M kt ,1,1 , M kt ,1,2 ,..., M kt ,1, K1 ,..., M kt , N , K N ;Wkt,1,1 ,Wkt,1,2 ,...,Wkt,1, K1 ,...,Wkt, N , K N )  .  It consists of three parts. The first part concerns the job production sequence, the second part regards the workstation assignment for each operation, and the third part deals with the worker assignment for each operation. Each bit in X kt is also a vector with the following format: i.e.,

, ,

=(

, , ,

,

,…,

, , ,

,

,

=

, , ,

,…,

,

, ,

,

,

, ,

,…,

) . Here

,

, ,

,

,,

=(

, 1, ,,

,…,

, ,,

,

,…,

, ,,

, ,

) and

represents the number of workstations that

can manufacture operation i of job j , L j ,i signifies the number of workers that can handle operation i of job j , operation i of job j, and

,

,

denotes the

expresses the

workstation that can manufacture

worker that can handle the manufacturing

of operation i of job j. The best solution found by particle k until iteration t is denoted as

3‐19   

Pkt  ( PSkt ,1 ,..., PSkt , N ; PM kt ,1,1 ,..., PM kt ,1, K1 ,..., PM kt , N , K N ; PWkt,1,1 ,..., PWkt,1, K1 ,..., PWkt, N , K N )

,

while the best solution found by the swarm until iteration t is denoted as

Pgt  ( PS gt ,1 ,..., PS gt , N ; PM gt ,1,1 ,..., PM gt ,1, K1 ,..., PM gt , N , K N ; PWgt,1,1 ,..., PWgt,1, K1 ,..., PWgt, N , K N )

.

Each bit in these two expressions is also a vector with a format similar to that stated above.

(a) Job production sequence
Each bit \(s_{k,j,d}^t\;(d\in\{1,2,\ldots,N\})\) in the job production sequence \(S_{k,j}^t\) is binary: it is equal to one if job \(j\) is placed in the \(d\)-th position of the job production sequence; otherwise, it is zero. For instance, if the job production sequence is (2, 3, 4, 1) in a particle, then according to the definition, \(s_{k,2,1}^t=s_{k,3,2}^t=s_{k,4,3}^t=s_{k,1,4}^t=1\), and all other \(s_{k,j,d}^t\) are equal to zero (see Table 3-7).

Job \ Position     1     2     3     4
1                  0     0     0     1
2                  1     0     0     0
3                  0     1     0     0
4                  0     0     1     0

Table 3-7. An example of the job production sequence of a particle.

(b) Worker and workstation assignment
Each bit in \(M_{k,j,i}^t\) is equal to one if operation \(i\) of job \(j\) is assigned to the corresponding workstation; otherwise, it is equal to zero. Likewise, each bit in \(W_{k,j,i}^t\) is binary: it is equal to one if operation \(i\) of job \(j\) is handled by the corresponding worker; otherwise, it is equal to zero.

2. Definition of velocity
Similar to the structure of the particles stated above, the velocity of particle \(k\) at iteration \(t\) can be represented as a vector with the following format:

\(V_k^t=(VS_{k,1}^t,VS_{k,2}^t,\ldots,VS_{k,N}^t;\;VM_{k,1,1}^t,\ldots,VM_{k,1,K_1}^t,\ldots,VM_{k,N,K_N}^t;\;VW_{k,1,1}^t,\ldots,VW_{k,1,K_1}^t,\ldots,VW_{k,N,K_N}^t)\).

A particle velocity consists of three parts. The first part represents the velocity of the job production sequence, the second part states the velocity of the workstation assignment for each operation, and the third part denotes the velocity of the worker assignment for each operation. Each bit in \(V_k^t\) is also a vector: \(VS_{k,j}^t=(vs_{k,j,1}^t,\ldots,vs_{k,j,N}^t)\), \(VM_{k,j,i}^t=(vm_{k,j,i}^{t,1},\ldots,vm_{k,j,i}^{t,R_{j,i}})\), and \(VW_{k,j,i}^t=(vw_{k,j,i}^{t,1},\ldots,vw_{k,j,i}^{t,L_{j,i}})\).

As stated in Chapter 2, a large variety of neighborhood topology structures have been proposed in order to balance the exploration and exploitation of the search space. Since the techniques of constraint programming and the ant colony system greatly improve the exploitation and exploration of the search space, the topology structure of the particle neighborhood takes the form of the global best in this research; that is, the whole swarm forms the neighborhood of each particle.

(a) Velocity of the job production sequence
In \(VS_{k,j}^t\), a high value of \(vs_{k,j,d}^t\) indicates that job \(j\) is more likely to be placed in the \(d\)-th position of the job production sequence, whereas a low value means that this job is

more likely to be placed elsewhere. At each iteration, \(vs_{k,j,d}^t\) is updated according to Equation (3-19):

\(vs_{k,j,d}^t=\omega^{t-1}vs_{k,j,d}^{t-1}+c_1r_1^{t-1,s}(ps_{k,j,d}^{t-1}-s_{k,j,d}^{t-1})+c_2r_2^{t-1,s}(ps_{g,j,d}^{t-1}-s_{k,j,d}^{t-1})\)   (3-19)

Here \(\omega^t\) is the value of the inertial weight at iteration \(t\): \(\omega^t=(\omega_{\max}-\omega_{\min})\cdot\frac{t_{\max}-t}{t_{\max}}+\omega_{\min}\), where \(\omega_{\max}\) and \(\omega_{\min}\) denote the maximum and minimum values of \(\omega^t\), respectively, and \(t_{\max}\) is an integer denoting the maximum number of iterations. After that, the velocity is converted to a change of probability through the sigmoid function (3-20):

\(s(vs_{k,j,d}^t)=\dfrac{1}{1+\exp(-vs_{k,j,d}^t)}\)   (3-20)
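As a small illustration (with hypothetical values; the inertia range [0.2, 0.8], velocity range [-4, 4] and scaling coefficients c1 = c2 = 2 follow the parameter settings reported later for the CPSO experiments), the velocity update of Equation (3-19) and the sigmoid conversion of Equation (3-20) for a single bit might be sketched as:

```python
import math
import random

def inertia(t, t_max, w_max=0.8, w_min=0.2):
    # Linearly decreasing inertia weight: (w_max - w_min) * (t_max - t) / t_max + w_min
    return (w_max - w_min) * (t_max - t) / t_max + w_min

def update_velocity_bit(vs, s, ps_personal, ps_global, t, t_max,
                        c1=2.0, c2=2.0, v_range=(-4.0, 4.0), rng=random.Random(0)):
    # Equation (3-19): inertia term + cognitive pull + social pull, for one bit.
    w = inertia(t, t_max)
    vs_new = (w * vs
              + c1 * rng.random() * (ps_personal - s)
              + c2 * rng.random() * (ps_global - s))
    # Clamp to the velocity range used in the experiments.
    return max(v_range[0], min(v_range[1], vs_new))

def sigmoid(vs):
    # Equation (3-20): convert a velocity into a placement probability.
    return 1.0 / (1.0 + math.exp(-vs))

# Bit currently 0, while both personal and global bests place the job here (bit = 1):
v = update_velocity_bit(vs=0.5, s=0, ps_personal=1, ps_global=1, t=10, t_max=100)
p = sigmoid(v)  # probability of placing the job in this position
```

Because both best solutions disagree with the current bit, the velocity is pulled upward and the placement probability rises above one half.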

Here \(s(vs_{k,j,d}^t)\) denotes the probability of placing job \(j\) in the \(d\)-th position of the job production sequence. For instance, \(s(vs_{k,1,2}^t)=0.2\) in Table 3-8 means that there is a 20 per cent chance that job 1 in particle \(k\) at iteration \(t\) will be placed in the second position.

Job \ Position     1      2      3      4
1                  0.4    0.2    0.4    0.5
2                  0.3    0.1    0.25   0.3
3                  0.2    0.3    0.5    0.4
4                  0.3    0.6    0.1    0.2

Table 3-8. The changes of probabilities from the velocity of the job production sequence.

(b) Velocity of the worker and workstation assignment
In \(VM_{k,j,i}^t\) and \(VW_{k,j,i}^t\), a high value of \(vm_{k,j,i}^{t,r}\) or \(vw_{k,j,i}^{t,l}\) indicates that operation \(i\) of job \(j\) is more likely to be handled by the corresponding workstation or worker, whereas a low value means that this operation would be better handled by another suitable workstation or skilled worker. These velocities are updated according to Equations (3-21) and (3-22):

\(vm_{k,j,i}^{t,r}=\omega^{t-1}vm_{k,j,i}^{t-1,r}+c_1r_1^{t-1,m}(pm_{k,j,i}^{t-1,r}-m_{k,j,i}^{t-1,r})+c_2r_2^{t-1,m}(pm_{g,j,i}^{t-1,r}-m_{k,j,i}^{t-1,r})\)   (3-21)

\(vw_{k,j,i}^{t,l}=\omega^{t-1}vw_{k,j,i}^{t-1,l}+c_1r_1^{t-1,w}(pw_{k,j,i}^{t-1,l}-w_{k,j,i}^{t-1,l})+c_2r_2^{t-1,w}(pw_{g,j,i}^{t-1,l}-w_{k,j,i}^{t-1,l})\)   (3-22)

Once updated, the velocities are converted into changes of probabilities through Equations (3-23) and (3-24):

\(s(vm_{k,j,i}^{t,r})=\dfrac{1}{1+\exp(-vm_{k,j,i}^{t,r})}\)   (3-23)

\(s(vw_{k,j,i}^{t,l})=\dfrac{1}{1+\exp(-vw_{k,j,i}^{t,l})}\)   (3-24)

Here \(s(vm_{k,j,i}^{t,r})\) denotes the probability of assigning the corresponding workstation to manufacture this operation, and \(s(vw_{k,j,i}^{t,l})\) denotes the probability of assigning the corresponding worker to handle the manufacturing process of this operation.

3. Construction of a complete production schedule
In the iteration process of discrete particle swarm optimization, each particle needs to be decoded into a complete production schedule so that its fitness can be evaluated. The decoding method is as follows.

(a) Construction of a job production sequence
The construction process starts from a null sequence and then places an unscheduled job \(j\) in the \(d\)-th position, from \(d=1\) to \(N\), according to the probability in Equation (3-25):

\(q_{k,d}^t(j)=\dfrac{s(vs_{k,j,d}^t)}{\sum_{j'\in U}s(vs_{k,j',d}^t)}\)   (3-25)
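The construction rule of Equation (3-25) amounts to repeated roulette-wheel sampling over the jobs that remain unscheduled. A minimal sketch with hypothetical velocity values (not taken from the thesis example):

```python
import math
import random

def sigmoid(v):
    # Equation (3-20): velocity -> probability.
    return 1.0 / (1.0 + math.exp(-v))

def build_sequence(vs, rng=random.Random(42)):
    # vs[j][d]: velocity of placing job j in position d (an N x N matrix).
    n = len(vs)
    unscheduled = set(range(n))
    sequence = []
    for d in range(n):  # fill positions d = 1..N in order
        # Equation (3-25): job j is chosen with probability proportional to
        # s(vs[j][d]) among the still-unscheduled jobs.
        weights = {j: sigmoid(vs[j][d]) for j in unscheduled}
        total = sum(weights.values())
        r, acc = rng.random() * total, 0.0
        for j, w in weights.items():
            acc += w
            if r <= acc:
                sequence.append(j)
                unscheduled.remove(j)
                break
    return sequence

# Three jobs; job 0 prefers position 1, job 1 position 2, job 2 position 3.
seq = build_sequence([[2.0, -2.0, 0.0], [-2.0, 2.0, 0.0], [0.0, 0.0, 2.0]])
```

The result is always a valid permutation of the jobs, since each selected job is removed from the candidate set before the next position is filled.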

Here \(U\) is the set or a subset of the unscheduled jobs, and \(q_{k,d}^t(j)\) is the probability of placing job \(j\) in the \(d\)-th position. A complete job production sequence is constructed when each job has been assigned a position.

(b) Construction of the worker and workstation assignment
The procedure for constructing the worker and workstation assignments for each operation is similar to that for constructing a job production sequence. It can be accomplished by replacing the set \(U\) with a set of suitable workstations or skilled workers for the operation under consideration, and selecting one component from the set according to that probability. The details are as follows. First, a set of workstations \(M(j,i)\) and a set of workers \(L(j,i)\) that can handle this operation are determined. Then one suitable workstation is selected from the set \(M(j,i)\) according to the probability in Equation (3-26):

\(q_{k,j,i}^t(m)=\dfrac{s(vm_{k,j,i}^{t,m})}{\sum_{w\in M(j,i)}s(vm_{k,j,i}^{t,w})},\quad m\in M(j,i)\)   (3-26)

Next, one skilled worker is selected from the set \(L(j,i)\) according to the probability in Equation (3-27):

\(q_{k,j,i}^t(l)=\dfrac{s(vw_{k,j,i}^{t,l})}{\sum_{w\in L(j,i)}s(vw_{k,j,i}^{t,w})},\quad l\in L(j,i)\)   (3-27)

Here \(q_{k,j,i}^t(m)\) denotes the probability of assigning workstation \(m\) to produce this operation, and \(q_{k,j,i}^t(l)\) denotes the probability of assigning worker \(l\) to handle the manufacturing process of this operation.

(c) A heuristic for determining the production outputs of the jobs
Since the structure of a particle does not explicitly include the production outputs of the jobs, a heuristic is developed to determine the production output of each job in each time slice so that the fitness of a particle can be evaluated. Taking job \(j\) as an example, Figure 3-4 presents the pseudo-code of the heuristic, which determines the production outputs of all of the jobs according to the production priorities in the job production sequence, based on the workstation and worker assignments in the particle. Several functions are used in this heuristic. The function UtmostPR takes the remaining capacities of the corresponding worker and workstation as inputs and generates the maximum processing rate of operation \(i\) in time slice \(s+i-1\) according to the production capacity constraints. The function update updates the remaining capacities of the corresponding workers and workstations after manufacturing a certain number of units of operation \(i\). The feasible production output of a job in a time slice is denoted as Min_PR in the heuristic, which is the smallest of the maximum feasible production outputs of all operations; this ensures that no work-in-process inventory exists between workstations. If the job cannot be finished before its due

date, the remaining production volume of this job will be subcontracted so as to meet the customer demand. The workstation assignment and worker assignment of job \(j\) can be obtained directly from the particle. Let \(RC_{w,s}\) represent the remaining capacity of workstation \(w\) in time slice \(s\), and \(LC_{l,s}\) denote the remaining capacity of worker \(l\) in time slice \(s\).

Set s = 1, Remain_qty = V_j          /* s is a time slice and V_j is the production volume of job j */
While Remain_qty > 0 and s <= DD_j - K_j + 1 do
    Min_PR = Remain_qty              /* Min_PR denotes the minimum processing rate */
    For i = 1 to K_j                 /* K_j is the number of operations of job j */
        Max_PR_{j,i,s+i-1} = UtmostPR(LC_{l,s+i-1}, RC_{w,s+i-1})
                                     /* find the maximum processing rate of operation i */
        Min_PR = min(Min_PR, Max_PR_{j,i,s+i-1})
                                     /* update the minimum processing rate */
    End-for
    If Min_PR > 0
        For i = 1 to K_j
            PR_{j,i,w(r),s+i-1} = Min_PR          /* determine the processing rate */
            update(LC_{l,s+i-1}, RC_{w,s+i-1}, PR_{j,i,w(r),s+i-1})
                                     /* update the remaining capacities of the
                                        corresponding workstation and worker */
        End-for
        Remain_qty = Remain_qty - Min_PR          /* update the remaining volume */
    End-if
    s = s + 1                        /* go to the next time slice */
End-while
If Remain_qty > 0
    Subcontract all of the remaining volume of this job to meet the customer demand.
End-if

Figure 3-4. The heuristic for determining the production outputs of a job.
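The logic of Figure 3-4 can be sketched in a few lines, under simplifying assumptions: a single remaining-capacity number per resource and time slice stands in for UtmostPR/update, and the workstation and worker names are illustrative, not taken from the thesis example.

```python
def schedule_job(volume, due, ops, mach_cap, work_cap):
    """Greedy per-slice output heuristic (sketch of Figure 3-4).

    volume: production volume V_j; due: due date DD_j (1-based time slices)
    ops: list of (workstation, worker) assignments, one per operation
    mach_cap[m][t], work_cap[w][t]: remaining capacity (units) in slice t
    Returns (outputs started per slice, volume left to subcontract).
    """
    k = len(ops)
    remain = volume
    outputs = {}
    s = 1
    while remain > 0 and s <= due - k + 1:
        # Operation i runs in slice s + i - 1; the feasible rate of the whole
        # job is the minimum over its operations (so no WIP between stations).
        min_pr = remain
        for i, (m, w) in enumerate(ops):
            t = s + i
            min_pr = min(min_pr, mach_cap[m].get(t, 0), work_cap[w].get(t, 0))
        if min_pr > 0:
            for i, (m, w) in enumerate(ops):
                t = s + i
                mach_cap[m][t] -= min_pr   # consume capacity (the "update" step)
                work_cap[w][t] -= min_pr
            remain -= min_pr
            outputs[s] = min_pr
        s += 1
    return outputs, remain  # remain > 0 means that volume is subcontracted

# Two pipelined operations; every resource has 5 units of capacity per slice.
mach = {"A1": {t: 5 for t in range(1, 10)}, "B3": {t: 5 for t in range(1, 10)}}
work = {"W1": {t: 5 for t in range(1, 10)}, "W2": {t: 5 for t in range(1, 10)}}
out, sub = schedule_job(volume=12, due=5, ops=[("A1", "W1"), ("B3", "W2")],
                        mach_cap=mach, work_cap=work)
```

With 12 units to produce at 5 units per slice, the job is started in slices 1, 2 and 3 (5 + 5 + 2 units) and nothing is subcontracted.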

3.5.2 Constraint programming


As stated in the literature review, constraint programming is a programming paradigm where the relations between variables are stated in the form of constraints. It usually represents problems at hand as constraint satisfaction problems (CSPs) and solves them by actively using the problem constraints to implicitly eliminate the infeasible regions of the solution space.

Figure 3-5. The procedure of constraint programming with backtracking propagation.

Generally, a feasible CSP solution is found through constraint propagation. Commonly used propagation techniques include backtracking, backmarking, and backjumping. Backtracking is the simplest to implement. Its basic operation is to pick one variable at a time and consider one value from its domain, making sure that the newly added label is compatible with the instantiated partial solution. If the newly added label violates certain constraints, an alternative value from the domain, if one exists, is picked. If no value can be assigned to a variable without violating any constraint, the algorithm backtracks to the most recently instantiated variable. This process continues until either a feasible solution is found or all of the possible combinations have been tried and rejected. In the former case, a feasible solution (although usually not the optimal one) is found; in the latter case, the problem has no feasible solution. The procedure for constraint programming with backtracking propagation in the field of production scheduling is presented in Figure 3-5.
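A generic sketch of the backtracking search just described (illustrative only, not the thesis implementation): one variable is instantiated at a time, each candidate value is checked against the constraints, and the search retreats to the previous variable when a domain is exhausted.

```python
def backtrack(variables, domains, consistent, assignment=None):
    # variables: ordered list of variable names
    # domains: dict variable -> list of candidate values
    # consistent: function(partial assignment) -> bool, encoding the constraints
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # a feasible (not necessarily optimal) solution
    var = variables[len(assignment)]  # next uninstantiated variable
    for value in domains[var]:
        assignment[var] = value
        if consistent(assignment):
            result = backtrack(variables, domains, consistent, assignment)
            if result is not None:
                return result
        del assignment[var]  # label rejected: try the next value
    return None  # domain exhausted: backtrack to the previous variable

# Two variables that must take distinct values from {1, 2}: feasible.
ok = backtrack(["x", "y"], {"x": [1, 2], "y": [1, 2]},
               lambda a: len(set(a.values())) == len(a))
# Three variables, still only two values: every combination is rejected.
none = backtrack(["x", "y", "z"], {v: [1, 2] for v in "xyz"},
                 lambda a: len(set(a.values())) == len(a))
```

The two calls illustrate the two terminating cases named in the text: a feasible solution is returned, or all combinations are tried and the problem is reported infeasible.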

3.5.3 Hybridization of DPSO and CP (CPSO)

1. The CPSO procedure
Although (discrete) particle swarm optimization can quickly locate relatively good solutions, its iteration process may stagnate as time passes and the swarm enters equilibrium. This is especially common for problems with hard constraints. Constraint programming, in contrast, is an effective approach for problems with hard constraints, but it may not be efficient enough when the feasible search space is large. Hence, the principle of CP is adopted to prevent DPSO from stagnating at a local optimum. The procedure of the hybrid CPSO algorithm is presented in Figure 3-6. The assumption made in this research that the subcontracting cost of a job is much higher than the other manufacturing costs usually holds true in practice. Thus, if a job cannot be finished before its due date in the hybrid CPSO algorithm, it is treated as inconsistent and critical production resources are shifted to other assignments. In addition, as the procedure outlined in Figure 3-6 shows, the constraint propagation in CPSO takes the form of single-level backtracking. When an inconsistency occurs, the approach first identifies the critical production resource and checks for alternative assignments. If an alternative exists, it is assigned to this operation. If not, the algorithm does not backtrack to the most recently scheduled job; instead, it randomly selects a suitable workstation or skilled worker according to the DPSO mechanism regardless of consistency, and then continues to schedule the next job until all of the jobs have been scheduled. The primary function of CP in the hybrid CPSO algorithm is therefore to coordinate the production resource assignment among jobs; it performs no improvement on the job production sequence.

Step 1. Initialization
    Step 1.1 Initialize parameters such as the particle size K, t_max, the inertial weight, etc.
    Step 1.2 Initialize all particles' positions X_k^t and velocities V_k^t randomly.
    Step 1.3 Evaluate the objective function value of each particle so as to initialize P_k^t and P_g^t.
Step 2. Perform the iteration process
    while (t <= t_max)
        Step 2.1 for k = 1 to K
            Update the velocity of particle k
            Update the job production sequence of particle k
            while the job production sequence list is not empty
                Pick the first job j in the job production sequence list
                Update the workstation and worker assignment of job j
                Check consistency
                while (not consistent)
                    detect the critical resource and add it to the violation set
                    if there are alternatives for this critical resource
                        change to a new assignment of this resource
                        check consistency
                    else
                        randomly select a resource according to the DPSO mechanism
                        set it consistent
                end-while
                Calculate the production outputs of job j
                Delete job j from the job production sequence
            end-while
            Update P_k^t
        end-for
        Step 2.2 Update P_g^t
        Step 2.3 Increment the iteration count t = t + 1, and update the inertial weight.
    end-while
Step 3. Report the best solution of the swarm and the corresponding objective function value.

Figure 3-6. The CPSO hybrid algorithm procedure.


Another essential issue in CPSO is to identify the critical production resources. The procedure, taking the scheduling of job j as an example, is presented in Figure 3-7.

Step 1. Initialization
    mrate[K_j]: an array recording the rate at which each operation can be produced by its assigned workstation before the due date. Each bit is initialized to 0.
    wrate[K_j]: an array recording the rate at which each operation can be produced by its assigned worker before the due date. Each bit is initialized to 0.
    indexm[K_j]: an array denoting whether the workstation producing the corresponding operation is the bottleneck in at least one time slice. Each bit is initialized to 0.
    indexw[K_j]: an array denoting whether the worker handling the corresponding operation is the bottleneck in at least one time slice. Each bit is initialized to 0.
    max_rate: the maximum processing rate of job j in a time slice.
Step 2. Calculate the values of mrate[K_j], wrate[K_j], indexm[K_j] and indexw[K_j]
    for s = 1 to DD_j - K_j + 1
        for i = 1 to K_j
            Calculate prm_{j,i,s+i-1} and prw_{j,i,s+i-1}, the maximum processing rates of
            this operation in this time slice under the corresponding workstation and
            worker respectively.
            if prm_{j,i,s+i-1} > max_rate then prm_{j,i,s+i-1} = max_rate
            if prw_{j,i,s+i-1} > max_rate then prw_{j,i,s+i-1} = max_rate
            mrate[i] = mrate[i] + prm_{j,i,s+i-1}; wrate[i] = wrate[i] + prw_{j,i,s+i-1}
            Update the remaining capacity of the corresponding workstation and worker.
        Select i^m = argmin_i {prm_{j,i,s+i-1}} and i^w = argmin_i {prw_{j,i,s+i-1}}
        if prm_{j,i^m,s+i^m-1} < max_rate then indexm[i^m] = 1
        if prw_{j,i^w,s+i^w-1} < max_rate then indexw[i^w] = 1
Step 3. Determine the critical resource
    if all bits in mrate[K_j] and wrate[K_j] are equal to 0
        no critical resource exists
    if all bits in wrate[K_j] are equal to 0 and at least one bit in mrate[K_j] is nonzero
        i* = argmin_{i: indexm[i]=1} {mrate[i]}; the workstation producing operation i* is the critical resource
    if all bits in mrate[K_j] are equal to 0 and at least one bit in wrate[K_j] is nonzero
        i* = argmin_{i: indexw[i]=1} {wrate[i]}; the worker handling operation i* is the critical resource
    if both mrate[K_j] and wrate[K_j] contain at least one nonzero bit
        i1* = argmin_{i: indexm[i]=1} {mrate[i]}, i2* = argmin_{i: indexw[i]=1} {wrate[i]}
        if mrate[i1*] < wrate[i2*]: the workstation producing operation i1* is the critical resource
        if mrate[i1*] > wrate[i2*]: the worker handling operation i2* is the critical resource
        if mrate[i1*] = wrate[i2*]: the corresponding workstation and worker are each selected as the critical resource with probability 0.5

Figure 3-7. The procedure for detecting the critical production resource.

2. CPSO performance
The performance of the hybrid CPSO algorithm is evaluated on a large set of randomly generated test problems. The parameter values of the algorithm are set as follows:

    Particle size: 100
    Maximum number of iterations: 100
    Maximum value of the inertial weight: 0.8
    Minimum value of the inertial weight: 0.2
    Range of velocity: [-4, 4]
    Cognitive scaling coefficient: 2
    Social scaling coefficient: 2

The job size takes three different values in the test problems: 10, 20 and 40. Two system configurations are considered: in the first, the system contains 12 workstations and 10 workers; in the second, it contains 20 workstations and 20 workers. Each job contains three or four operations, all of which have a processing time randomly generated from the range [20, 40]. The production volume of each job is randomly generated from the range [30, 50]. The unit subcontracting cost of a job is randomly generated from the range [500, 1000] or [1000, 2000]. In addition, the due date of each job is randomly generated from the range \([\alpha\cdot PH,\ \beta\cdot PH]\), where \(PH\) is the planning horizon determined by

\(PH=\dfrac{\text{Expected total processing time}}{\min(n_m,n_w)\times PL\times\eta}\),

where \(n_m\) is the number of workstations, \(n_w\) is the number of workers, and \(PL\) is the length of a time slice. In this research, the length of each time slice is 300 seconds, and \(\eta\) is set to 0.5 according to the research of Mak et al. (2005). Two different values of \((\alpha,\beta)\) are considered, (0.4, 0.7) and (0.6, 0.9), representing tight and loose due date constraints respectively.

No.   Parameter value (n, m, d, s)
1     (10, (12, 10), (0.4, 0.7), (500, 1000))
2     (10, (12, 10), (0.4, 0.7), (1000, 2000))
3     (10, (12, 10), (0.6, 0.9), (500, 1000))
4     (10, (12, 10), (0.6, 0.9), (1000, 2000))
5     (10, (20, 20), (0.4, 0.7), (500, 1000))
6     (10, (20, 20), (0.4, 0.7), (1000, 2000))
7     (10, (20, 20), (0.6, 0.9), (500, 1000))
8     (10, (20, 20), (0.6, 0.9), (1000, 2000))
9     (20, (12, 10), (0.4, 0.7), (500, 1000))
10    (20, (12, 10), (0.4, 0.7), (1000, 2000))
11    (20, (12, 10), (0.6, 0.9), (500, 1000))
12    (20, (12, 10), (0.6, 0.9), (1000, 2000))
13    (20, (20, 20), (0.4, 0.7), (500, 1000))
14    (20, (20, 20), (0.4, 0.7), (1000, 2000))
15    (20, (20, 20), (0.6, 0.9), (500, 1000))
16    (20, (20, 20), (0.6, 0.9), (1000, 2000))
17    (40, (12, 10), (0.4, 0.7), (500, 1000))
18    (40, (12, 10), (0.4, 0.7), (1000, 2000))
19    (40, (12, 10), (0.6, 0.9), (500, 1000))
20    (40, (12, 10), (0.6, 0.9), (1000, 2000))
21    (40, (20, 20), (0.4, 0.7), (500, 1000))
22    (40, (20, 20), (0.4, 0.7), (1000, 2000))
23    (40, (20, 20), (0.6, 0.9), (500, 1000))
24    (40, (20, 20), (0.6, 0.9), (1000, 2000))

Table 3-9. Test problem generating schemes.

Table 3-9 lists the schemes for generating the test problems. In this table, (n, m, d, s) denotes the parameter combination, where n is the number of jobs, m denotes the system configuration, d denotes the due date constraint, and s denotes the

subcontracting cost. For instance, Scheme 1 in Table 3-9 means that there are 10 jobs to be scheduled, that the system contains 12 workstations and 10 workers, that the due dates are tight, and that the subcontracting cost is generated from the range [500, 1000]. Five test problems are randomly generated for each scheme. The DPSO and CPSO performances are obtained by averaging the results of running each algorithm five times on each test problem. Table 3-10 shows the performance comparison of DPSO and CPSO after executing each of these two algorithms for 100 iterations. In the table, "cost diff" and "time diff" are calculated according to Equations (3-28) and (3-29):

\(\text{cost diff}=\dfrac{\text{cost of DPSO}-\text{cost of CPSO}}{\text{cost of DPSO}}\)   (3-28)

\(\text{time diff}=\dfrac{\text{computational time of CPSO}-\text{computational time of DPSO}}{\text{computational time of DPSO}}\)   (3-29)
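As a direct check, the first case of Table 3-10 can be reproduced from Equations (3-28) and (3-29):

```python
def cost_diff(dpso_cost, cpso_cost):
    # Equation (3-28): relative cost improvement of CPSO over DPSO.
    return (dpso_cost - cpso_cost) / dpso_cost

def time_diff(dpso_time, cpso_time):
    # Equation (3-29): relative extra computational time of CPSO.
    return (cpso_time - dpso_time) / dpso_time

# Case 1 of Table 3-10: DPSO cost 184104 in 3.26 s; CPSO cost 176424 in 5.68 s.
cd = cost_diff(184104, 176424)   # about 4.17%
td = time_diff(3.26, 5.68)       # about 74.23%
```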

Case No.   DPSO cost   DPSO time (s)   CPSO cost   CPSO time (s)   Cost diff   Time diff
1          184104      3.26            176424      5.68            4.17%       74.23%
2          234096      3.30            220378      5.63            5.86%       70.61%
3          157203      3.74            151229      6.20            3.80%       65.78%
4          173376      3.76            164117      6.23            5.34%       65.69%
5          162434      5.42            155560      10.03           4.23%       85.06%
6          169873      5.40            162313      10.05           4.45%       86.11%
7          154236      6.32            148760      11.26           3.55%       78.16%
8          160783      6.35            153869      11.28           4.30%       77.63%
9          427506      8.48            401770      13.56           6.02%       59.91%
10         504431      8.50            467960      13.50           7.23%       58.82%
11         387677      9.78            365501      13.89           5.72%       42.02%
12         409856      9.77            382108      13.82           6.77%       41.45%
13         390578      14.03           365737      20.52           6.36%       46.26%
14         403688      14.00           377448      20.55           6.50%       46.79%
15         376784      15.05           354817      22.60           5.83%       50.17%
16         382024      15.05           359217      22.56           5.97%       49.90%
17         927689      22.56           869801      36.77           6.24%       62.99%
18         1044563     22.50           971235      36.74           7.02%       63.29%
19         853346      23.62           803339      37.58           5.86%       59.10%
20         893677      23.60           839877      37.61           6.02%       59.36%
21         825662      30.23           776782      47.28           5.92%       56.40%
22         839844      30.25           789705      47.20           5.97%       56.03%
23         802256      30.68           759897      47.96           5.28%       56.32%
24         813554      30.70           768890      47.93           5.49%       56.12%

Table 3-10. Performance comparison of CPSO and DPSO.

The comparative results in Table 3-10 clearly indicate that CPSO obtains better solutions at the cost of longer computational times, especially when the problem size becomes larger, the due date requirement becomes tighter, or the subcontracting cost becomes higher. Nonetheless, an improvement of only 3 to 7 per cent in solution quality seems insufficient. In the following section, the principles of the ant colony system are integrated into CPSO to further improve search performance.

3.5.4 Hybridization of DPSO, CP and ACS (ACPSO)

The comparison of CPSO and DPSO reveals that the improvement in the quality of the solutions found by CPSO is not remarkable. The quality of a VCMS production schedule is mainly determined by two parts: the job production sequence and the production resource assignment. In CPSO, constraint programming with backtracking propagation helps to coordinate the production resource assignment among jobs, but performs no improvement on the job production sequence. Hence, the job production sequence may easily stagnate in a local optimum and thus damage the quality of the production schedule. To overcome this deficiency, the principles of the ant colony system are adopted to further improve search performance.

1. Ant colony system
Ant colony optimization is an effective meta-heuristic algorithm inspired by observations of the foraging behavior of real ant colonies. When searching for food, ants appear to begin exploring the area around their nest in a random manner. The ants produce a

chemical pheromone trail when moving, which can be smelled by other ants. When choosing their path, ants tend to select, with higher probability, the paths with strong pheromone concentrations. Once an ant finds a food source, it carries some of the food back to the nest and deposits a quantity of pheromone on the return path in proportion to the quality and quantity of the food. These pheromone trails guide other ants to the food source. The pheromone, which is the essential element guiding the behavior of the ants, evaporates as time goes on, and is also deposited on the paths during the ants' searching process. Many ACO variants have been proposed based on these pheromone deposit and evaporation techniques. ACS is one of the most successful variants; its procedure is introduced as follows.

(a) Tour construction
Ant \(k\), currently at node \(i\), decides to move to node \(j\) by applying the following state transition rule:

\(j=\begin{cases}\arg\max_{u\in S_k(i)}\{\tau_{iu}^{\alpha}\eta_{iu}^{\beta}\},&\text{if }q\le q_0\\J,&\text{if }q>q_0\end{cases}\)   (3-30)

where \(\eta_{iu}\) is the heuristic value and \(\tau_{iu}\) is the amount of pheromone on the path from node \(i\) to node \(u\). The values \(\alpha\) and \(\beta\) are two positive parameters used for controlling the relative weights of the pheromone value and the heuristic value. \(S_k(i)\) is the set of selectable nodes for ant \(k\) currently at node \(i\). The variable \(q\) is a random number uniformly distributed in the range [0, 1], and \(q_0\;(q_0\in[0,1])\) is a parameter determining the relative importance of exploitation versus exploration. Finally, \(J\) is the selection mechanism that randomly chooses a node from the set \(S_k(i)\) according to the pseudo-random proportional rule:

\(p_{ij}^k=\begin{cases}\dfrac{\tau_{ij}^{\alpha}\eta_{ij}^{\beta}}{\sum_{u\in S_k(i)}\tau_{iu}^{\alpha}\eta_{iu}^{\beta}},&\text{if }j\in S_k(i)\\[2ex]0,&\text{if }j\notin S_k(i)\end{cases}\)   (3-31)
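A sketch of the state transition rule of Equations (3-30) and (3-31) for a single move; the pheromone and heuristic tables and the parameter values are illustrative assumptions, not data from the thesis:

```python
import random

def next_node(i, candidates, tau, eta, alpha=1.0, beta=2.0, q0=0.9,
              rng=random.Random(1)):
    # Score each candidate u by tau[i,u]^alpha * eta[i,u]^beta.
    scores = {u: (tau[(i, u)] ** alpha) * (eta[(i, u)] ** beta)
              for u in candidates}
    if rng.random() <= q0:
        # Exploitation: deterministic argmax (first branch of Equation (3-30)).
        return max(scores, key=scores.get)
    # Exploration: pseudo-random proportional rule J of Equation (3-31).
    total = sum(scores.values())
    r, acc = rng.random() * total, 0.0
    for u, score in scores.items():
        acc += score
        if r <= acc:
            return u
    return u  # numerical fallback for floating-point rounding

# Edge (0, 1) carries five times the pheromone of edge (0, 2).
tau = {(0, 1): 0.5, (0, 2): 0.1}
eta = {(0, 1): 1.0, (0, 2): 1.0}
j = next_node(0, [1, 2], tau, eta)
```

With q0 = 0.9, most moves exploit the strongest edge; only occasionally does the ant fall back to weighted random exploration.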

In Equation (3-31), \(p_{ij}^k\) is the probability that ant \(k\) at node \(i\) chooses to move to node \(j\).

(b) Local updating of pheromone trails
While constructing a complete solution, an ant changes the pheromone level on its visited path by applying the following local updating rule:

\(\tau_{ij}=(1-\rho)\tau_{ij}+\rho\tau_0\)   (3-32)

where \(\rho\;(0<\rho<1)\) is the pheromone evaporation rate and \(\tau_0\) is the initial pheromone value. The function of local updating is to decrease the pheromone values on the visited solution components, making these components less desirable for other ants. This mechanism increases the efficient exploration of the search space and lets the ants make better use of the pheromone information. Without local updating, all of the ants would search in a narrow neighborhood of the best previous tour.

(c) Global updating of pheromone trails
Once all of the ants have completed their tours, a global updating of the pheromone trails is performed. Its purpose is to increase the pheromone level on the paths belonging to the best solution found so far. The global updating of the pheromone trails is performed by applying the following rule:

\(\tau_{ij}=(1-\rho_g)\tau_{ij}+\rho_g\,\Delta\tau_{ij}^{bs}\)   (3-33)

where \(\rho_g\;(0<\rho_g<1)\) is the pheromone evaporation rate for global updating and \(\Delta\tau_{ij}^{bs}\) is the amount of pheromone for the best solution found so far, calculated through Equation (3-34):

\(\Delta\tau_{ij}^{bs}=\begin{cases}1/C^{bs},&\text{if the path from node }i\text{ to node }j\text{ belongs to }T^{bs}\\0,&\text{otherwise}\end{cases}\)   (3-34)

In Equation (3-34), \(C^{bs}\) is the objective function value of the best solution found so far.
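The local rule (3-32) and the global rule (3-33)/(3-34) can then be sketched on a small pheromone table (illustrative values for the evaporation rates, initial pheromone and tour cost):

```python
def local_update(tau, edge, rho=0.1, tau0=0.01):
    # Equation (3-32): evaporate a just-visited edge toward the initial value,
    # making it less desirable for the ants that follow.
    tau[edge] = (1 - rho) * tau[edge] + rho * tau0

def global_update(tau, best_tour_edges, best_cost, rho_g=0.1):
    # Equations (3-33)/(3-34): evaporate every edge, but reinforce only the
    # edges belonging to the best-so-far tour T^bs by 1 / C^bs.
    for edge in tau:
        delta = (1.0 / best_cost) if edge in best_tour_edges else 0.0
        tau[edge] = (1 - rho_g) * tau[edge] + rho_g * delta

tau = {(0, 1): 0.5, (0, 2): 0.5}
local_update(tau, (0, 1))                    # (0, 1) dips below its neighbor
global_update(tau, {(0, 1)}, best_cost=2.0)  # then is reinforced as a best edge
```

After both steps, the edge on the best tour ends up slightly above the untouched edge, which is the intended pull toward the best-so-far solution.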

2. ACPSO procedure
ACPSO improves over CPSO in that it prevents the job production sequences from stagnating at a local optimum and facilitates the locating of other good solutions. Briefly, ACPSO differs from CPSO in the following two aspects.

(a) Construction of the job production sequence
ACPSO adopts the principles of ACS to increase the exploration of the region of the solution space related to the job production sequence, which prevents the search process from stagnating at a local optimum. The process of constructing a job production sequence consists of two steps, illustrated here by the determination of the production priority of one job:

Step 1. Use the selection mechanism \(J\) of ACS to select \(\min(\lambda Q, M)\) unscheduled jobs. Here \(Q\) is the total number of unscheduled jobs, \(M\) is a fixed positive integer, and \(\lambda\) is a parameter in the range (0, 1). If \(\min(\lambda Q, M)\) is not an integer, it is rounded up to the next integer. In this research, the heuristic value is set as the inverse of the due date of the corresponding job. In detail, this step is divided into two sub-steps:
    Substep 1.1 Determine the set of unscheduled jobs \(S_0\), the cardinality of which is \(Q\).
    Substep 1.2 Select \(\min(\lambda Q, M)\) jobs from the set \(S_0\) using the selection rule \(J\) of ACS. The set of selected jobs is denoted as \(S_1\).
Step 2. Select a job from the set \(S_1\) using the mechanism of discrete particle swarm optimization, and schedule it with this production priority.

This two-step procedure continues until each job has been given a production priority.

(b) Updating the pheromone trails
Since the ACS principle is adopted in the iteration process of the job production sequence, the local updating of the pheromone values regarding the job production sequence is performed during the construction of the job production sequence for each

particle. The global updating of the pheromone values regarding the job production sequence is performed when the swarm has finished one complete iteration. Overall, ACPSO is a hybrid algorithm based on the techniques of the ant colony system, constraint programming and discrete particle swarm optimization. DPSO serves as the main framework, ACS increases the exploration of the job production sequence in the search space, and CP increases the exploitation of the search space by coordinating the production resource assignment among jobs.
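The candidate-list construction of Step 1 selects the smallest integer not less than min(λQ, M) of the Q unscheduled jobs. A simplified sketch (the experiments reported below use λ = 0.5 and M = 6; plain roulette selection on the heuristic value 1/due date stands in here for the full ACS rule J):

```python
import math
import random

def candidate_list(unscheduled_due_dates, lam=0.5, m_limit=6,
                   rng=random.Random(7)):
    # unscheduled_due_dates: dict job -> due date; Q = number of unscheduled jobs.
    q = len(unscheduled_due_dates)
    size = math.ceil(min(lam * q, m_limit))  # round a fractional size up
    chosen = []
    pool = dict(unscheduled_due_dates)
    for _ in range(size):
        # Roulette selection weighted by the heuristic value 1 / due_date,
        # so jobs with tighter due dates are favored.
        weights = {j: 1.0 / d for j, d in pool.items()}
        total = sum(weights.values())
        r, acc = rng.random() * total, 0.0
        for j, w in weights.items():
            acc += w
            if r <= acc:
                chosen.append(j)
                del pool[j]
                break
    return chosen

# Q = 5 unscheduled jobs with lam = 0.5 -> ceil(2.5) = 3 candidates selected.
s1 = candidate_list({1: 10, 2: 20, 3: 30, 4: 40, 5: 50})
```

A job would then be picked from this candidate set by the DPSO mechanism (Step 2) and given the current production priority.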

3.5.5 ACPSO performance and sensitivity analyses

The performance of the hybrid ACPSO algorithm is evaluated on the test problems generated according to the schemes in Table 3-9. In this research, the value of θ is set at 0.5, the value of M is set at six, and the pheromone evaporation rate is set at 0.2. The computational results are listed in Table 3-11. The results in Tables 3-10 and 3-11 clearly indicate that ACPSO outperforms DPSO and CPSO in locating good-quality solutions, but requires longer computational time. Furthermore, the improvement in solution quality becomes more obvious as the problem size grows, the due date requirement becomes tighter, or the subcontracting cost becomes higher. For instance, when the job size is 10, the improvement in solution quality ranges from about 11 to 16 per cent; when the job size increases to 40, the improvement ranges from about 16 to 27 per cent. This shows that the ACPSO algorithm is robust and suitable for solving practical manufacturing problems. Taking Scheme 6 as an example, Figure 3-8 shows the performance of the three algorithms in locating schedule solutions with the same number of iterations; the other schemes exhibit similar characteristics. The figure also illustrates the stability of ACPSO: DPSO and CPSO are easily trapped at a local optimum, whereas ACPSO quickly locates a much better solution, after which the search process levels off over the remaining iterations.


Scheme   DPSO cost   DPSO time (s)   ACPSO cost   ACPSO time (s)   Cost diff   Time diff
1        184104      3.26            160814       8.26             12.65%      153.37%
2        234096      3.30            198513       8.29             15.20%      151.21%
3        157203      3.74            139549       9.43             11.23%      152.14%
4        173376      3.76            147768       9.41             14.77%      150.27%
5        162434      5.42            140326       14.04            13.61%      159.04%
6        169873      5.40            141283       14.00            16.83%      159.26%
7        154236      6.32            134586       16.04            12.74%      153.80%
8        160783      6.35            135974       16.09            15.43%      153.39%
9        427506      8.48            357181       22.33            16.45%      163.32%
10       504431      8.50            400215       22.29            20.66%      162.23%
11       387677      9.78            326113       25.04            15.88%      156.03%
12       409856      9.77            325917       24.98            20.48%      155.68%
13       390578      14.03           318555       41.65            18.44%      196.86%
14       403688      14.00           315183       41.70            21.92%      197.86%
15       376784      15.05           315971       42.78            16.14%      184.25%
16       382024      15.05           317767       42.75            16.82%      184.05%
17       927689      22.56           739553       60.77            20.28%      169.37%
18       1044563     22.50           775170       60.71            25.79%      169.82%
19       853346      23.62           714677       62.21            16.25%      163.38%
20       893677      23.60           737015       62.15            17.53%      163.35%
21       825662      30.23           629319       78.99            23.78%      161.30%
22       839844      30.25           610482       78.98            27.31%      161.09%
23       802256      30.68           657609       78.76            18.03%      156.71%
24       813554      30.70           645473       78.71            20.66%      156.38%

Table 3-11. Performance comparison of ACPSO and DPSO.

[Figure 3-8 plots manufacturing cost against iteration (1 to 100) for DPSO, CPSO and ACPSO under Scheme 6.]

Figure 3-8. Performance of locating good solutions with the same number of iterations.

3‐38   

Since ACPSO's longer computational time would otherwise bias the comparison, it is fairer to compare the three algorithms' performances under the same computational time constraint. Denoting the computation time of running DPSO for 100 iterations by TDPSO, and that of running CPSO for 100 iterations by TCPSO, the quality of the best solutions found by ACPSO within TDPSO and within TCPSO is checked. The results show that ACPSO still performs better than DPSO and CPSO given the same computation time. In fact, in every test problem the quality of the best solution found by ACPSO after about 20 to 30 iterations is almost as good as that of the solution found after 100 iterations (see Figure 3-8). Additionally, the growth in ACPSO's computational time remains relatively stable as the problem size changes: the results in Table 3-11 show that the time difference between ACPSO and DPSO stays in the range of approximately 150 to 200 per cent across all test problems.

Sensitivity analyses of the key parameters of ACPSO are then conducted to illustrate the effects of these parameters on the search performance of the hybrid algorithm. Figures 3-9 to 3-12 show the results. The particle size in the analyses ranges from 30 to 100; the heuristic value parameter β takes the values 0, 0.2 and 1; the parameter θ takes the values 0.2, 0.5 and 1; and the pheromone evaporation rate takes the values 0.1, 0.2 and 0.8. These analyses adopt Scheme 6 without sacrificing generality.

Figure 3-9 shows that particle size significantly affects search performance. Performance is relatively weak when the particle size is small. Nonetheless, the results also demonstrate that an overly large particle size is unnecessary: there is no obvious difference in results between particle sizes of 50 and 100.
This can be explained by the characteristics of ACPSO: the adoption of the ACS mechanism increases the exploration of the job production sequence in the search space, and the backtracking propagation increases the exploitation of the search space related to the production resource assignment in each job production sequence. Together, these two mechanisms ensure that ACPSO yields an acceptable solution even when the particle size or the iteration number is not very large. This property is very attractive, because in most real applications the problem size is usually very large and manufacturers want to obtain an acceptable production schedule with minimal computational effort.

[Figure 3-9 plots manufacturing cost against iteration (1 to 100) for particle sizes of 100, 50 and 30.]

Figure 3-9. The effects of particle size on solution quality.

Figure 3-10 shows the effects of the heuristic value on the search process. When the parameter is too small (e.g. β = 0, which means that the heuristic value is not taken into consideration), the convergence speed is very slow. In the test experiments, there was little difference between the results for β = 0.2 and β = 1.

[Figure 3-10 plots manufacturing cost against iteration (1 to 100) for β = 1, 0.2 and 0.]

Figure 3-10. The effects of the heuristic value on search performance.

Figure 3-11 shows the effects of θ on the search process. M is assumed to be a very large integer in order to isolate the effects of θ. When θ is set to a very large value, the

convergence process is very slow. In fact, ACPSO reduces to CPSO if θ = 1 and M is set to a large value, since the ACS mechanism is then disabled in the algorithm. There is little difference in performance between θ = 0.2 and θ = 0.5; however, a medium value is preferable for this parameter because it combines the advantage of ACS's increased exploration with DPSO's ability to quickly locate good solutions. The effect of M on the search performance is similar to that of θ.

[Figure 3-11 plots manufacturing cost against iteration (1 to 100) for θ = 1, 0.5 and 0.2.]

Figure 3-11. The effects of the theta value on search performance.

Figure 3-12 shows the effects of the pheromone evaporation rate on the search process. The behavior of the algorithm remains similar for all three values of this parameter, indicating that it has little effect on the search process. Since its effect on search performance is not obvious in the hybrid algorithm, this parameter is usually set to a small or medium value. Nevertheless, introducing the pheromone evaporation mechanism from ant colony optimization greatly increases the exploration of the search space, and the ant colony system is a successful variant of ant colony optimization with an especially strong ability to increase search space exploration.


[Figure 3-12 plots manufacturing cost against iteration (1 to 100) for evaporation rates of 0.1, 0.2 and 0.8.]

Figure 3-12. The effects of pheromone evaporation rate on the search process.

3.6 Chapter Summary

A mathematical model developed to formulate production schedules for VCMSs operating in a single-period manufacturing environment has been introduced in detail. This mathematical model takes workforce requirements into consideration and includes several realistic constraints related to production capacities, delivery due dates, and so on. The objective of the model is to minimize the total manufacturing cost within the entire planning horizon, including machine operating cost, material transportation cost, workers' salaries, and subcontracting cost. A simple example has been provided to facilitate the understanding of the VCM concept and to demonstrate the application of the mathematical model in describing practical scheduling problems.

An effective hybrid algorithm, based on the techniques of discrete particle swarm optimization, constraint programming, and ant colony system, has been proposed to solve the complex production scheduling problem as well as to provide guidance for the formation of virtual manufacturing cells. DPSO is the main framework of the proposed

hybrid algorithm, ACS elements are used to increase the exploration of the search space related to the job production sequence, and CP is included to increase the exploitation of the search space by coordinating production resource assignment among jobs. The hybrid algorithm combines these complementary advantages so as to improve the performance of the search process. The performance of the proposed hybrid algorithm has been evaluated by solving a large set of randomly generated test problems. The computational results show that ACPSO performs better than DPSO and CPSO, especially for large-scale problems, even when the particle size or the maximum number of iterations is not very large. These characteristics alone make it very attractive for practical applications.

The proposed hybrid algorithm has further advantages. First, it does not impose rigid assumptions on the mathematical model, making it very easy to apply this approach to other practical problems. Second, this approach does not relax any constraints in the mathematical model, thus guaranteeing the feasibility and effectiveness of the obtained solutions. Third, this approach can help manufacturers generate a complete production schedule for the entire planning horizon, which can provide a basis for planning other external activities in advance. In addition, sensitivity analyses have been conducted to study the effects of the algorithm's important parameters on search performance. These can help schedulers and manufacturers better understand the relationship between the parameters and the search performance, and thus apply this approach more effectively in practice.


CHAPTER 4 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER A MULTI-PERIOD MANUFACTURING ENVIRONMENT

4.1 Introduction

The last chapter explored production scheduling problems of VCMSs operating in a single-period manufacturing environment. These are, in essence, short-term planning and scheduling problems. However, confining the research to a short-term environment is not sufficient for modern practice: manufacturers usually desire effective medium-range planning and scheduling in order to manage their production activities more efficiently. For this reason, this chapter extends the VCMS production scheduling research to a more realistic, multi-period manufacturing environment.

To this end, a new mathematical model is developed to formulate production schedules for VCMSs operating in a multi-period situation. This model takes workforce requirements into consideration and includes constraints related to the production capacities of various resources, the delivery deadlines, and so on. The model's objective is to minimize the total manufacturing cost within the entire planning horizon, including machine operating cost, material transportation cost, workers' salaries, worker training cost, subcontracting cost, and inventory-holding cost.

The mathematical model for VCMSs operating in a multi-period situation differs from that for VCMSs operating in a single-period environment in two major ways. First, workers can be trained to learn new abilities in the multi-period situation. Second, inventory-holding between periods is allowed in the multi-period situation to balance the workstation utilization, the subcontracting cost, and the inventory-holding cost so as to optimize the total manufacturing cost.

The hybrid ACPSO algorithm was proposed in Chapter 3, and its effectiveness was demonstrated on a large set of randomly generated test problems. ACPSO is also adopted

in this chapter to solve the production scheduling problems for VCMSs operating in a multi-period situation. Furthermore, factors affecting the worker training scheme are studied in detail. These considerations can help manufacturers manage their manpower more effectively.

4.2 Mathematical Modeling of Production Scheduling in VCMSs under a Multi-period Manufacturing Environment A mathematical model is firstly developed to formulate production schedules for VCMSs operating in a multi-period manufacturing environment. Besides the three types of information that a production schedule for single-period VCMSs can provide, a production schedule for multi-period VCMSs with workforce requirements should present two more types of information. First, it should specify the worker training scheme. Second, it should determine the inventory-holding plan for the entire planning horizon.

4.2.1 Assumptions

The mathematical model is developed under the following assumptions:

(1) Each type of job consists of a certain number of operations that must be manufactured according to the production route;

(2) The planning horizon consists of a certain number of periods, each of which is further subdivided into a certain number of equal time slices;

(3) Similar job operations must be produced on the same workstation and handled by the same skilled worker in each period;

(4) The processing time of each job operation is deterministic and known. The machine setup time is included in the processing time. Moreover, the processing time of an operation is the same on any workstation that can produce it;

(5) The production volume and the delivery due date of each job in each period are deterministic and known;


(6) The workstation configuration remains the same within the entire planning horizon. The distance between any two workstations and the transportation cost of each job between workstations are deterministic and known;

(7) Similar to the requirement in the single-period situation, work-in-process inventory is not allowed between workstations. That is, the processing rate of an operation in a time slice must be equal to that of its preceding operation in the previous time slice, and to that of its succeeding operation in the next time slice;

(8) The salary of each worker per time slice is deterministic and known. Moreover, a worker receives full payment for a time slice if he has been assigned to it at all, regardless of the length of his actual working period; otherwise, the payment for that time slice is zero;

(9) Each workstation can manufacture at most one operation at a time, and no operation can be interrupted once started in any time slice;

(10) The number of workers is constant within the entire planning horizon. Each worker can handle at most one workstation at a time. In addition, each worker can be trained to learn new abilities of operating other types of workstations. The training cost is known and the training time is negligible;

(11) Compared with other manufacturing costs, the subcontracting cost of each job is much higher;

(12) Only end-products can be stored from period to period;

(13) Only jobs with positive customer demand can be manufactured in each period; and

(14) The material transportation time between workstations is negligible.

4.2.2 Notations

The notations used to develop the mathematical model are presented as follows:

4.2.2 Notations The notations used to develop the mathematical model are presented as follows: w, w '

= workstation type. w, w '  1, 2,..., W . W represents the total number of

4‐3   

workstations. s, s '

= a time slice. s, s '  1, 2,..., S . S denotes the number of time slices in a period.

p, p '

= a period. p, p '  1, 2,..., P. P signifies the number of periods in the planning horizon.

rj , p

= the production route of job j in period p .

w( rj , p )

= workstation w used in the production route rj , p .

j, j '

= job type. j , j '  1, 2,..., N . N denotes the number of job types.

O j ,i

= the operation i  of job j .

DD j , p

= the delivery due date of job j in period p . The due date in this research is the expiration of the overall production time slice.

MCw, s , p

= the maximum production capacity of workstation w in time slice s of period p .

V j, p

= the volume of customer demand of job j in period p .

V

= the volume of job j actually produced in period p .

j, p

IV j , p

= the inventory-holding volume of job j in period p .

SV j , p

= the subcontracting volume of job j in period p .

Kj

= the number of operations of job j .

l

= a worker. l  1, 2,..., L . L represents the number of workers.

d w1 , w2

= the distance between workstation w1 and w2 .

D ( rj , p )

= the material travelling distance of production route rj , p .

PL

= the length of a time slice.

pt j ,i , w( rj , p )

= the processing time of operation i  of job j on workstation w( rj , p ) .

PR j ,i , w( rj , p ), s , p

= the processing rate of operation i  of job j on workstation w( rj , p ) in time slice s  of period p .

El , w, p

= equal to one if worker l can handle workstation w in period p ; otherwise, it is equal to zero. p  0,1,..., P . El , w,0  1 means the operating 4‐4 

 

workstation w is the initial ability of worker l . Z j ,i ,l , p

= equal to one if worker l is assigned to handle operation i  of job j in period p ; otherwise it is zero.

Z l ,s , p

= equal to one if worker l is assigned to time slice s of period p .

st j ,i , w( rj , p ), s , p

= the starting time of operation i  of job j on workstation w( rj , p ) in time slice s of period p .

ft j ,i , w( rj , p ), s , p

= the completion time of operation i  of job j on workstation w( rj , p ) in time slice s of period p .

TIN l , w, s , p

= the time interval in which worker l is operating workstation w in time slice s of period p .

TN l , p

= the number of time slices to which worker l is assigned during period p.

X j ,i , w( rj , p ), s , p

= a zero-one binary variable where it is equal to one if job j has operation i launched on workstation w( rj , p ) in time slice s of period p ;

otherwise, it is zero.

Y j ,i , w( rj , p ),s , p

= a zero-one binary variable where it is equal to one if job j has operation i processed on workstation w( rj , p ) in time slice s of period p ;

otherwise, it is zero.

j

= the cost of moving one unit of job j per unit distance.

l

= the salary of worker l per time slice.

w

= the operating cost of workstation w per unit time.

 j, p

= the inventory-holding cost of job type j per unit in period p .

 j, p

= the subcontracting cost of job j per unit in period p .

trl , w, p

= the training cost of worker l for workstation type w in period p .

4.2.3 Mathematical model


The mathematical model of the production scheduling problems for VCMSs operating in a multi-period manufacturing environment is presented as follows:

minimize
\[
\sum_{p=1}^{P}\sum_{j=1}^{N} \alpha_j V'_{j,p} D(r_{j,p})
+ \sum_{p=1}^{P}\sum_{l=1}^{L} \omega_l TN_{l,p}
+ \sum_{p=1}^{P}\sum_{j=1}^{N}\sum_{i=1}^{K_j}\sum_{w(r_{j,p})}\sum_{s=1}^{S} \gamma_w PR_{j,i,w(r_{j,p}),s,p}\, pt_{j,i,w(r_{j,p})}
+ \sum_{p=1}^{P}\sum_{j=1}^{N} h_{j,p} IV_{j,p}
+ \sum_{p=1}^{P}\sum_{j=1}^{N} c_{j,p} SV_{j,p}
+ \sum_{p=1}^{P}\sum_{l=1}^{L}\sum_{w=1}^{W} tr_{l,w,p}\,(E_{l,w,p} - E_{l,w,p-1})
\tag{4-1}
\]

where
\[
SV_{j,p} = \max(V_{j,p} - V'_{j,p} - IV_{j,p-1},\, 0) \quad \forall j, p \tag{4-2}
\]
\[
IV_{j,p} = \max(V'_{j,p} + IV_{j,p-1} - V_{j,p},\, 0) \quad \forall j, p \tag{4-3}
\]
\[
D(r_{j,p}) = \sum_{(w,w') \in r_{j,p}} d_{w,w'} \quad \forall j, p \tag{4-4}
\]

in which the sum in (4-4) runs over the consecutive workstation pairs (w, w') along the route r_{j,p}.
subject to:

\[
X_{j,i,w(r_{j,p}),s+i-1,p} = X_{j,i+1,w(r_{j,p}),s+i,p} \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-5}
\]
\[
PR_{j,i,w(r_{j,p}),s+i-1,p} = PR_{j,i+1,w(r_{j,p}),s+i,p} \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-6}
\]
\[
\sum_{w(r_{j,p})} \sum_{s=1}^{S-(K_j-1)} \prod_{i=1}^{K_j} X_{j,i,w(r_{j,p}),s+i-1,p} = 1 \quad \forall j, p \tag{4-7}
\]
\[
Y_{j,i,w(r_{j,p}),s',p} \le \big(1 - X_{j,i,w(r_{j,p}),s,p}\big) G \quad \forall j, i, w(r_{j,p}), p,\; s' < s \tag{4-8}
\]
\[
PR_{j,i,w(r_{j,p}),s,p} \ge 0, \qquad
Y_{j,i,w(r_{j,p}),s,p} =
\begin{cases}
1 & \text{if } PR_{j,i,w(r_{j,p}),s,p} > 0 \\
0 & \text{if } PR_{j,i,w(r_{j,p}),s,p} = 0
\end{cases}
\quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-9}
\]
\[
\sum_{w(r_{j,p})} \sum_{s=1}^{S} PR_{j,i,w(r_{j,p}),s,p} = V'_{j,p} \quad \forall O_{j,i}, p \tag{4-10}
\]
\[
\sum_{j=1}^{N} \sum_{i=1}^{K_j} Y_{j,i,w(r_{j,p}),s,p}\, PR_{j,i,w(r_{j,p}),s,p}\, pt_{j,i,w(r_{j,p})} \le MC_{w,s,p} \quad \forall w, s, p \tag{4-11}
\]
\[
st_{j,i,w(r_{j,p}),s,p} \ge (p-1)\,S\,PL + (s-1)\,PL \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-12}
\]
\[
ft_{j,i,w(r_{j,p}),s,p} = st_{j,i,w(r_{j,p}),s,p} + PR_{j,i,w(r_{j,p}),s,p}\, pt_{j,i,w(r_{j,p})} \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-13}
\]
\[
Y_{j,i,w(r_{j,p}),s,p}\, ft_{j,i,w(r_{j,p}),s,p} \le (p-1)\,S\,PL + s\,PL \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-14}
\]
\[
IV_{j,0} = 0 \quad \forall j \tag{4-15}
\]
\[
Y_{j,i,w(r_{j,p}),s,p} \le \sum_{l=1}^{L} E_{l,w,p}\, Z_{j,i,l,p} \quad \forall O_{j,i}, w(r_{j,p}), s, p \tag{4-16}
\]
\[
Z_{l,s,p} = \max_{O_{j,i},\, w(r_{j,p})} \big( Z_{j,i,l,p}\, Y_{j,i,w(r_{j,p}),s,p} \big) \quad \forall l, s, p \tag{4-17}
\]
\[
TN_{l,p} = \sum_{s=1}^{S} Z_{l,s,p} \quad \forall l, p \tag{4-18}
\]
\[
TIN_{l,w,s,p} = \bigcup_{O_{j,i}} \big[\, Z_{j,i,l,p} Y_{j,i,w(r_{j,p}),s,p}\, st_{j,i,w(r_{j,p}),s,p},\;
Z_{j,i,l,p} Y_{j,i,w(r_{j,p}),s,p}\, ft_{j,i,w(r_{j,p}),s,p} \big) \quad \forall l, w, s, p \tag{4-19}
\]
\[
TIN_{l,w,s,p} \cap TIN_{l,w',s,p} = \varnothing \quad \forall w \ne w', l, s, p \tag{4-20}
\]
\[
\sum_{l=1}^{L} Z_{j,i,l,p} = 1 \quad \forall O_{j,i}, p \tag{4-21}
\]
\[
E_{l,w,p} \ge E_{l,w,p-1} \quad \forall l, w, p \tag{4-22}
\]
\[
X_{j,i,w(r_{j,p}),s,p},\; Y_{j,i,w(r_{j,p}),s,p},\; Z_{l,s,p},\; Z_{j,i,l,p},\; E_{l,w,p} \in \{0,1\} \quad \forall j, i, w, l, s, p \tag{4-23}
\]

where G is a large integer.
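The rolling relationship among demand, production, inventory and subcontracting in (4-2), (4-3) and (4-15) — subcontract the uncovered shortfall, carry the surplus forward as inventory — can be sketched for a single job as follows. This is a minimal illustration assuming the max(·, 0) form recovered from the surrounding text; `inventory_plan` is a hypothetical helper, not part of the model.

```python
def inventory_plan(demand, produced):
    """Roll the inventory and subcontracting volumes of one job forward
    across periods, following equations (4-2) and (4-3):
        SV_p = max(V_p - V'_p - IV_{p-1}, 0)
        IV_p = max(V'_p + IV_{p-1} - V_p, 0)
    with IV_0 = 0 (constraint 4-15). Returns a list of (SV_p, IV_p)."""
    iv_prev, plan = 0, []
    for v, v_act in zip(demand, produced):
        sv = max(v - v_act - iv_prev, 0)   # shortfall is subcontracted
        iv = max(v_act + iv_prev - v, 0)   # surplus carried as inventory
        plan.append((sv, iv))
        iv_prev = iv
    return plan
```

For example, overproducing 10 units in period 1 carries them as inventory into period 2, reducing the volume that must be subcontracted there.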

The objective function (4-1) of the mathematical model is to minimize the total manufacturing cost related to the production schedule over the entire planning horizon. The first term represents the material transportation costs between workstations, the second term signifies the workers’ salaries, the third term denotes the machine operating costs, the fourth term indicates the inventory-holding costs, the fifth term expresses the subcontracting costs for all of the jobs, and the last term presents the workers’ training costs. Equations (4-2) denote the method of calculating the subcontracting volume of each job in every period. Equations (4-3) show the methodology of calculating the inventory-holding volume of each job in every period. Equations (4-4) demonstrate the method of calculating the material travelling distance of a production route. Constraints (4-5) ensure that once an operation is completed, its succeeding operation must start immediately in the next time slice in each period. Constraints (4-6) guarantee that no work-in-process inventory exists between workstations. That is, the processing rate of a job operation in a time slice must be equal to that of its preceding operation in the last time slice, and that of its succeeding operation in the next time slice in each period. Constraints (4-7) provide that the starting times of all operations must be within the planning horizon, and that each job will have only one unique production route in each period. Constraints (4-8) make certain that no production can start before the time slice in which that production is launched in each period. Constraints (4-9) restrict that the processing rate must be greater than or equal to zero in each time slice of a period.


Constraints (4-10) express the relationship between the processing rates and the production volume of each job in every period. Constraints (4-11) ensure that all jobs assigned to a workstation can be finished in each time slice of a period. Constraints (4-12) to (4-14) describe the relationship between the starting time and the completion time of each operation in every time slice. Constraints (4-15) mean that there is no inventory of any job at the beginning of the planning horizon. Constraints (4-16) require that at least one skilled worker operates a workstation when needed. Constraints (4-17) indicate whether a given worker is assigned to a given time slice in each period. Constraints (4-18) show the method of calculating the number of time slices to which a worker is assigned in each period. Constraints (4-19) signify the time interval in which a worker stays on a workstation in a time slice of a period. Constraints (4-20) ensure that a worker can operate at most one workstation at a time. Constraints (4-21) make sure that similar job operations are handled by the same worker in each period. Constraints (4-22) allow each worker to learn new abilities within the planning horizon. Constraints (4-23) indicate that these variables are binary.

Similar to the research in Chapter 3, it is assumed without loss of generality that MC_{w,s,p} is equal to PL for any workstation in any time slice of any period in the following research. That is, there is no preload on any workstation at the beginning of the planning horizon. This mathematical model is clearly non-linear, and the problems it represents are NP-hard; thus approximation algorithms such as various meta-heuristics

are preferable to obtain near-optimal or relatively good solutions within an acceptable computation time window in practical applications.

4.3 Illustrative Example

A numerical example is provided to illustrate the production schedules for VCMSs operating in a multi-period manufacturing environment.

4.3.1 Manufacturing system configuration

The sample manufacturing system contains 12 workstations and 10 workers. The workstations can be divided into four types according to their manufacturing functions, namely type A (workstations A8, A10 and A12), type B (workstations B1, B5 and B7), type C (workstations C2, C3 and C4) and type D (workstations D6, D9 and D11). The operating costs of these workstations per second are listed in Table 4-1.

Workstation     B1  C2  C3  C4  B5  D6  B7  A8  D9  A10  D11  A12
Operating cost   2   2   1   2   2   2   3   1   1    3    1    2

Table 4-1. The operating cost of each workstation per second.

The workstations are spread over the production floor in order to reduce the distances that material needs to travel. Table 4-2 lists the material travelling distances among the workstations.

From\To  B1  C2  C3  C4  B5  D6  B7  A8  D9  A10  D11  A12
B1        0   7   5   8   2   7   4   9   9    4    8    7
C2        7   0   6   5  10   3  10   8   5    2    3   10
C3        5   6   0   6   7   6   5   4   8    4    2    9
C4        8   5   6   0   4   6   8   7   6    2    2    8
B5        2  10   7   4   0   9   9   3   6    7    7    3
D6        7   3   6   6   9   0  10   5   9    4    5    7
B7        4  10   5   8   9  10   0   5   7    3    9    4
A8        9   8   4   7   3   5   5   0   2   10    2    2
D9        9   5   8   6   6   9   7   2   0    5    8    7
A10       4   2   4   2   7   4   3  10   5    0    3   10
D11       8   3   2   2   7   5   9   2   8    3    0   10
A12       7  10   9   8   3   7   4   2   7   10   10    0

Table 4-2. Travelling distances among the workstations.
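As a quick check of equation (4-4), the travelling distance of a route is the sum of its consecutive leg distances from Table 4-2; for the period-1 virtual cell of job 3 (A12 to D11 to C2, see Table 4-9) this gives 10 + 3 = 13. A minimal sketch follows; only the two needed entries of Table 4-2 are included, and `route_distance` is an illustrative helper, not part of the thesis.

```python
# Distances taken from Table 4-2, keyed by workstation pair (the matrix is
# symmetric, so either ordering of a pair is looked up).
DIST = {("A12", "D11"): 10, ("D11", "C2"): 3}

def route_distance(route):
    """D(r): total material travelling distance of a production route,
    summed over consecutive workstation pairs (equation 4-4)."""
    legs = zip(route, route[1:])
    return sum(DIST.get((a, b), DIST.get((b, a), 0)) for a, b in legs)
```

Multiplying this distance by the per-unit transportation cost of the job and its produced volume gives that job's contribution to the first term of the objective (4-1).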

Each workstation must be handled by a skilled worker during the manufacturing process. The characteristics of each worker are listed in Table 4-3.

Worker no.  Types of initial ability  Salary per time slice
W1          B, D                      145
W2          B, C                      102
W3          A, D                      135
W4          B, C                      139
W5          A, C                      113
W6          A, D                      125
W7          C, D                      116
W8          C, D                      133
W9          B, C                      126
W10         C, D                      148

Table 4-3. Worker characteristics.

Each worker can be trained to learn new abilities of operating other types of workstations in a multi-period manufacturing environment. The training costs for operating each type of workstation in each period are listed in Table 4-4.

Workstation type  Period 1  Period 2  Period 3  Period 4  Period 5
A                 1827      1724      1149      1666      1675
B                 1376      1300      1671      1797      1420
C                 1698      1103      1493      1007      1211
D                 1254      1338      1196      1052      1302

Table 4-4. Training cost of operating each type of workstation in each period.
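Because the training cost tr_{l,w,p} varies by period, a schedule can time training to cheaper periods. The sketch below picks the cheapest period per workstation type using the Table 4-4 figures; it is a naive illustration that ignores when an ability is actually needed (abilities, once learned, are permanent, per constraint 4-22), and `cheapest_training_period` is a hypothetical helper.

```python
TRAIN = {  # Table 4-4: training cost per workstation type and period
    "A": [1827, 1724, 1149, 1666, 1675],
    "B": [1376, 1300, 1671, 1797, 1420],
    "C": [1698, 1103, 1493, 1007, 1211],
    "D": [1254, 1338, 1196, 1052, 1302],
}

def cheapest_training_period(ws_type):
    """Return (period, cost) minimizing the training cost for a workstation
    type. Periods are 1-based, matching Table 4-4."""
    costs = TRAIN[ws_type]
    p = min(range(len(costs)), key=costs.__getitem__)
    return p + 1, costs[p]
```

In the full model, of course, training must also occur no later than the first period in which the new ability is used, so the cheapest period is not always feasible.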

4.3.2 Job production information

The planning horizon contains five periods, each of which is further subdivided into 30 equal time slices with a length of 300 seconds. There are 12 types of jobs, whose production routes, processing times and transportation costs are listed in Table 4-5. For example, the production route of job 1 runs from workstation type C to B to A, in that order; the unit processing times of these operations are 24, 21, and 27 seconds, respectively; and the transportation cost of this job is two units per unit of distance.

Job   Production route   Processing time   Transportation cost
1     C-B-A              24-21-27          2
2     A-C-D              35-21-23          2
3     A-C-B              27-37-22          3
4     A-C-B-D            34-28-21-26       3
5     D-A-C              39-35-27          2
6     B-C-D-A            21-37-27-31      2
7     D-C-A              25-22-25          1
8     A-C-B              27-22-22          2
9     C-B-A-D            37-26-31-37       3
10    C-D-B-A            34-38-39-24       2
11    C-D-B              20-21-36          1
12    A-D-B-C            26-32-20-20       3

Table 4-5. Production route, processing time and transportation cost of each job. The customer demands for all of the jobs within the planning horizon are listed in Table 4-6. For instance, the customer demand of job 2 in period 2 is 44 units, and its delivery due date is at the end of the 18th time slice in period 2. The blank spaces in this table indicate that the customer demand for the corresponding job in the corresponding period is zero. Jobs

Period 1 Volume

1 2 3 4 5 6 7 8 9 10 11 12

31 45 44

Period 2 Due date

18 19 17

Volume

Due date

43 44

18 18

48 40 36 36

49 39 44

13 14 15

Period 3

38

18 18 15 16

Volume

48

Period 4 Due date

Period 5

Volume

Due date

Volume

Due date

31 44 31

17 16 15

32

18

31

16

44

18

31 48 49

18 17 15

46 36 43

14 15 19

35 34 37

14 18 12

12

37 47 45 35

17 15 12 14

19

Table 4-6. Customer demands for all jobs in the planning horizon.
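Given PL = 300 seconds and MC_{w,s,p} = PL, the minimum number of time slices a workstation needs to process a given volume of an operation follows from the capacity constraint (4-11); for example, 31 units of job 1's first operation (24 s each) require ceil(744/300) = 3 slices. A small sketch, where `slices_needed` is an illustrative helper and not part of the thesis model:

```python
PL = 300  # length of a time slice in seconds (Section 4.3.2)

def slices_needed(proc_time, volume):
    """Minimum number of full time slices a single workstation needs to
    process `volume` units of an operation with unit processing time
    `proc_time`, given capacity MC = PL per slice (no preload assumed)."""
    total = proc_time * volume
    return -(-total // PL)   # ceiling division
```

This lower bound ignores the staggering of successive operations (constraints 4-5 and 4-6), which can stretch the actual cell occupancy further.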


The inventory-holding cost of each job in each period is listed in Table 4-7. For instance, the inventory-holding cost of job 1 per unit in period 1 is 28 units.

Job   Period 1  Period 2  Period 3  Period 4  Period 5
1     28        48        44        33        46
2     43        29        37        47        30
3     47        30        38        39        30
4     41        40        40        34        34
5     44        42        36        38        47
6     33        29        25        26        48
7     36        47        44        48        36
8     30        40        46        36        43
9     37        48        49        27        29
10    37        44        41        41        37
11    48        38        39        28        45
12    44        34        31        38        47

Table 4-7. The inventory-holding cost of each job in each period.

The subcontracting cost of each job in each period is listed in Table 4-8.

Job   Period 1  Period 2  Period 3  Period 4  Period 5
1     1783      1793      1213      1620      1660
2     1437      1752      1917      1622      1389
3     1008      1771      1719      1186      1540
4     1053      1282      1849      1265      1788
5     1754      1698      1700      1745      1107
6     1535      1679      1306      1553      1360
7     1850      1137      1233      1630      1356
8     1197      1908      1735      1515      1829
9     1339      1683      1168      1530      1503
10    1340      1510      1860      1136      1411
11    1804      1008      1129      1464      1369
12    1073      1241      1137      1160      1357

Table 4-8. The subcontracting cost of each job in each period.


4.3.3 Complete job production schedule Period no. Period 1

Virtual manufacturing cells: workstation(worker) Job 3: A12(W3)—D11(W8)—C2(W9) Job 4: A8(W1)—C3(W5)—B1(W7)—D9(W6) Job 5: D11(W9)—A12(W3)—C3(W9) Job 10: C4(W2)—D9(W6)—B5(W7)—A8(W6) Job 11: C2(W8)—D6(W1)—B1(W10) Job 12: A12(W3)—D6(W8)—B1(W2)—C3(W5)

Period 2

Job 1: C2(W8)—B1(W2)—A12(W6) Job 2: A8(W9)—C4(W7)—D11(W6) Job 5: D11(W3)—A8(W5)—C2(W7) Job 7: D11(W3)—C2(W4)—A12(W9) Job 8: A12(W5)—C2(W2)—B5(W2) Job 9: C4(W10)—B5(W9)—A8(W6)—D6(W7) Job 12: A8(W5)—D6(W6)—B1(W2)—C2(W7)

Period 3

Job 5: D11(W8)—A8(W7)—C2(W5) Job 8: A12(W1)—C2(W7)—B1(W2) Job 9: C2(W8)—B1(W2)—A12(W5)—D11(W8) Job 10: C3(W8)—D11(W7)—B5(W8)—A8(W7) Job 11: C3(W1)—D11(W9)—B5(W3)

Period 4

Job 1: A12(W8)—C2(W2)—D6(W6) Job 3: A8(W8)—D9(W7)—C3(W8) Job 4: A10(W2)—C2(W7)—B1(W9)—D11(W6) Job 6: B5(W3)—C4(W4)—D11(W6)—A10(W2) Job 10: C3(W7)—D6(W6)—B5(W7)—A12(W6) Job 11: C3(W1)—D6(W7)—B1(W5) Job 12: A8(W9)—D9(W7)—B1(W5)—C2(W2)

Period 5

Job 1: A12(W3)—C2(W2)—D6(W6) Job 4: A10(W3)—C4(W5)—B5(W7)—D11(W7) Job 6: B5(W8)—C2(W7)—D6(W8)—A8(W6) Job 7: D11(W7)—C3(W5)—A12(W6) Job 8: A8(W2)—C3(W9)—B1(W9) Job 10: C2(W5)—D9(W9)—B5(W2)—A12(W7) Job 11: C4(W2)—D6(W6)—B1(W5) Job 12: A12(W6)—D6(W8)—B5(W2)—C3(W5)

Table 4-9. The formation of virtual manufacturing cells in the planning horizon.

Table 4-9 lists the formation of virtual manufacturing cells in each period. Taking job 3 in period 1 for example, workstation A12 and worker 3 are assigned to produce the first


operation, workstation D11 and worker 8 are assigned to produce the second operation, and workstation C2 and worker 9 are assigned to produce the last operation. It is assumed that only jobs with positive customer demand can be manufactured in each period.

Period       Job: (creation time-termination time)

Period 1
  Job 3: (10-14)—(11-15)—(12-16)
  Job 4: (8-17)—(9-18)—(10-19)—(11-20)
  Job 5: (1-24)—(2-25)—(3-26)
  Job 10: (1-24)—(2-25)—(3-26)—(4-27)
  Job 11: (1-7)—(2-8)—(3-9)
  Job 12: (1-21)—(2-22)—(3-23)—(4-24)

Period 2
  Job 1: (1-4)—(2-5)—(3-6)
  Job 2: (4-10)—(5-11)—(6-12)
  Job 5: (1-27)—(2-28)—(3-29)
  Job 7: (10-22)—(11-23)—(12-24)
  Job 8: (6-14)—(7-15)—(8-16)
  Job 9: (9-22)—(10-23)—(11-24)
  Job 12: (1-27)—(2-28)—(3-29)—(4-30)

Period 3
  Job 5: (1-3)—(2-4)—(3-5)
  Job 8: (1-13)—(2-14)—(3-15)—(4-16)
  Job 9: (1-13)—(2-14)—(3-15)—(4-16) and 25-26-27-28
  Job 10: (5-26)—(6-27)—(7-28)—(8-29)
  Job 11: (1-11)—(2-12)—(3-13)

Period 4
  Job 2: (1-16)—(2-17)—(3-18)
  Job 3: (1-10)—(2-11)—(3-12)
  Job 4: (1-4)—(2-5)—(3-6)—(4-7)
  Job 6: (3-12)—(4-13)—(5-14)—(6-15)
  Job 10: (11-12)—(12-13)—(13-14) and (23-27)—(24-28)—(25-29)
  Job 11: (4-14)—(5-15)—(6-16)
  Job 12: (11-22)—(12-23)—(13-24)—(14-25)

Period 5
  Job 2: (9-14)—(10-15)—(11-16)
  Job 4: (3-9)—(4-10)—(5-11)—(6-12)
  Job 6: (1-5)—(2-6)—(3-7)—(4-8)
  Job 7: (4-16)—(5-17)—(6-18)
  Job 8: (2-14)—(3-15)—(4-16)
  Job 11: (1-9)—(2-10)—(3-11)

Table 4-10. The creation and termination times of the virtual manufacturing cells.

Table 4-10 lists the creation and termination times of the virtual manufacturing cells in the planning horizon. For instance, the virtual manufacturing cell for manufacturing job 3

in period 1 is as follows: the cell creation period for the first operation is 10 and its termination period is 14; the cell creation period for the second operation is 11 and its termination period is 15; and the creation period for the third operation is 12 and its termination period is 16. The format of the complete production schedule of all jobs in each period is similar to that in the single-period situation. Because expressing the complete production schedule is cumbersome, only the formation of the virtual manufacturing cells and their creation/termination times are listed for the multi-period situation (see Table 4-10).

Table 4-11 lists the worker training scheme. An entry gives the period in which the worker learns to operate the corresponding workstation type, and "/" indicates that no training for that type is scheduled within the planning horizon. For instance, worker 3 has the initial ability to operate workstation types A and D, hence he does not need to relearn these abilities. Instead, according to the production schedule, worker 3 will learn to operate workstation type B in period 1, and will not learn to operate workstation type C within the planning horizon. In this research, workers are assumed to learn new abilities at the beginning of a period, and the training time is assumed to be negligible.

Worker   Type A   Type B   Type C   Type D
W1       1        /        3        /
W2       4        /        /        /
W3       /        1        /        /
W4       /        /        /        /
W5       /        4        /        /
W6       /        /        /        /
W7       3        1        /        /
W8       4        2        /        /
W9       2        /        /        1
W10      /        1        /        /

Table 4-11. The worker training scheme.

Table 4-12 lists the inventory-holding volume and the subcontracting volume of each job in every period. For instance, the inventory-holding volume of job 5 in period 1 is 32 units. With a suitable worker training scheme and a proper inventory-holding plan, the customer demands in these five periods can be fully met without subcontracting any of the jobs.

Jobs   Period 1    Period 2    Period 3    Period 4    Period 5
       inv  sub    inv  sub    inv  sub    inv  sub    inv  sub
1      0    0      0    0      0    0      0    0      0    0
2      0    0      0    0      0    0      7    0      0    0
3      0    0      0    0      0    0      0    0      0    0
4      0    0      0    0      0    0      0    0      0    0
5      32   0      39   0      0    0      0    0      0    0
6      0    0      0    0      0    0      0    0      0    0
7      0    0      21   0      0    0      0    0      0    0
8      0    0      0    0      10   0      10   0      0    0
9      0    0      33   0      0    0      0    0      0    0
10     28   0      28   0      53   0      35   0      0    0
11     0    0      0    0      9    0      0    0      0    0
12     35   0      15   0      15   0      37   0      0    0

Table 4-12. The inventory-holding and subcontracting volumes of jobs. 
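As an illustration of how the volumes in Table 4-12 combine with the unit costs of Tables 4-7 and 4-8, the corresponding cost components of the objective can be sketched as follows. This is a minimal sketch, not the thesis implementation; the function name and the flat unit holding cost of 40 used for job 5 are illustrative assumptions.

```python
# Illustrative sketch: total inventory-holding plus subcontracting cost,
# summing inv[j][p]*hold_cost[j][p] + sub[j][p]*sub_cost[j][p] over jobs/periods.

def inventory_and_subcontract_cost(inv, sub, hold_cost, sub_cost):
    total = 0
    for j, (inv_row, sub_row) in enumerate(zip(inv, sub)):
        for p, (iv, sv) in enumerate(zip(inv_row, sub_row)):
            total += iv * hold_cost[j][p] + sv * sub_cost[j][p]
    return total

# Job 5 from Table 4-12: 32 units held in period 1, 39 in period 2, no subcontracting.
inv = [[32, 39, 0, 0, 0]]
sub = [[0, 0, 0, 0, 0]]
hold_cost = [[40, 40, 40, 40, 40]]           # assumed flat unit holding cost
sub_cost = [[1754, 1698, 1700, 1745, 1107]]  # job 5's row of Table 4-8
print(inventory_and_subcontract_cost(inv, sub, hold_cost, sub_cost))  # prints 2840
```

With no subcontracting in the optimal schedule, only the holding term contributes here: (32 + 39) units times the assumed unit cost of 40.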

4.4 Solution Algorithms

The hybrid ACPSO algorithm was proposed in Chapter 3 to solve the production scheduling problems of single-period VCMSs. In ACPSO, discrete particle swarm optimization (DPSO) is the main framework, the ant colony system (ACS) is used to increase the exploration of the search space, and constraint programming (CP) is used to increase the exploitation of the search space. Because it combines these algorithms' complementary advantages, ACPSO outperforms DPSO and CPSO in locating good production schedules. ACPSO is therefore also adopted here to solve the production scheduling problems of VCMSs operating in a multi-period situation.
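Before the formal definitions, the core discrete-PSO step that ACPSO builds on can be sketched generically: real-valued velocities are updated from personal and global bests and mapped to selection probabilities by a sigmoid. This is an illustrative sketch with generic variable names, not the thesis notation; the parameter values mirror those reported for the experiments in this chapter.

```python
import math
import random

# Minimal discrete-PSO step for one binary decision vector (illustration only).
def update_velocity(v, x, pbest, gbest, w=0.8, c1=2.0, c2=2.0, vmax=4.0):
    new_v = []
    for vi, xi, pi, gi in zip(v, x, pbest, gbest):
        vi = w * vi + c1 * random.random() * (pi - xi) + c2 * random.random() * (gi - xi)
        new_v.append(max(-vmax, min(vmax, vi)))  # clamp to the velocity range [-4, 4]
    return new_v

def sigmoid(vi):
    # Squash a velocity into (0, 1); used as the probability of the bit being 1.
    return 1.0 / (1.0 + math.exp(-vi))

v = update_velocity([0.0, 0.0], [0, 1], [1, 1], [1, 0])
probs = [sigmoid(vi) for vi in v]  # probability of each bit taking value 1
```

The subsections below specialize exactly this mechanism to worker training, job sequencing, and resource assignment bits.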


4.4.1 Discrete particle swarm optimization

Due to the similarity between the production scheduling problems for single-period and multi-period VCMSs, the application of DPSO to VCMSs operating in a multi-period situation is briefly introduced as follows.

1. Definition of discrete particles

To solve the production scheduling problem for multi-period VCMSs with the DPSO approach, particle $k$ at iteration $t$ is represented as $X_k^t = (A_k^t; X_k^{t,1}, \ldots, X_k^{t,p}, \ldots, X_k^{t,P})$. Here $A_k^t = (A_{k,1}^t, \ldots, A_{k,l}^t, \ldots, A_{k,L}^t)$ denotes the worker training scheme. More specifically, $A_{k,l}^t = (a_{k,l,1}^{t,1}, \ldots, a_{k,l,1}^{t,P+1}, \ldots, a_{k,l,w}^{t,p}, \ldots, a_{k,l,W}^{t,P+1})$ expresses the training scheme of worker $l$ within the planning horizon.

$X_k^{t,p} = (S_{k,1}^{t,p}, S_{k,2}^{t,p}, \ldots, S_{k,N_p}^{t,p};\; M_{k,1,1}^{t,p}, M_{k,1,2}^{t,p}, \ldots, M_{k,1,K_1}^{t,p}, \ldots, M_{k,N_p,K_{N_p}}^{t,p};\; W_{k,1,1}^{t,p}, W_{k,1,2}^{t,p}, \ldots, W_{k,1,K_1}^{t,p}, \ldots, W_{k,N_p,K_{N_p}}^{t,p})$

signifies the job production sequence, as well as the workstation and worker assignments for all of the jobs in period $p$, where $N_p$ denotes the number of jobs in period $p$. $X_k^{t,p}$ consists of three parts: the first concerns the job production sequence, the second regards the workstation assignment for each operation, and the last part deals with the worker assignment for each operation. Each bit in $X_k^{t,p}$ is also a vector: i.e., $S_{k,j}^{t,p} = (s_{k,j,1}^{t,p}, s_{k,j,2}^{t,p}, \ldots, s_{k,j,N_p}^{t,p})$, $M_{k,j,i}^{t,p} = (m_{k,j,i}^{t,p,m_{j,i}^1}, \ldots, m_{k,j,i}^{t,p,m_{j,i}^{n_{j,i}}})$, and $W_{k,j,i}^{t,p} = (w_{k,j,i}^{t,p,1}, \ldots, w_{k,j,i}^{t,p,L})$. Here $n_{j,i}$ denotes the number of workstations that can manufacture operation $i$ of job $j$, $L$ signifies the total number of workers in the system, and $m_{j,i}^s$ denotes the $s$-th workstation that can manufacture operation $i$ of job $j$. In a multi-period situation, each worker can be trained to learn new abilities within the planning horizon; that is, each worker may come to handle the manufacturing of any operation. Thus $W_{k,j,i}^{t,p}$ is designed in the above format.

In addition, the best solution found by particle $k$ until iteration $t$ is denoted as $P_k^t = (PA_k^t, P_k^{t,1}, \ldots, P_k^{t,P})$, where $PA_k^t = (PA_{k,1}^t, \ldots, PA_{k,l}^t, \ldots, PA_{k,L}^t)$, $PA_{k,l}^t = (pa_{k,l,1}^{t,1}, \ldots, pa_{k,l,1}^{t,P+1}, \ldots, pa_{k,l,w}^{t,p}, \ldots, pa_{k,l,W}^{t,P+1})$, $P_k^{t,p} = (PS_{k,1}^{t,p}, \ldots, PS_{k,N_p}^{t,p};\; PM_{k,1,1}^{t,p}, \ldots, PM_{k,1,K_1}^{t,p}, \ldots, PM_{k,N_p,K_{N_p}}^{t,p};\; PW_{k,1,1}^{t,p}, \ldots, PW_{k,1,K_1}^{t,p}, \ldots, PW_{k,N_p,K_{N_p}}^{t,p})$, $PS_{k,j}^{t,p} = (ps_{k,j,1}^{t,p}, ps_{k,j,2}^{t,p}, \ldots, ps_{k,j,N_p}^{t,p})$, $PM_{k,j,i}^{t,p} = (pm_{k,j,i}^{t,p,m_{j,i}^1}, \ldots, pm_{k,j,i}^{t,p,m_{j,i}^{n_{j,i}}})$, and $PW_{k,j,i}^{t,p} = (pw_{k,j,i}^{t,p,1}, \ldots, pw_{k,j,i}^{t,p,L})$.

The best solution found by the swarm until iteration $t$ is denoted as $P_g^t = (PA_g^t, P_g^{t,1}, \ldots, P_g^{t,P})$, where $PA_g^t = (PA_{g,1}^t, \ldots, PA_{g,l}^t, \ldots, PA_{g,L}^t)$, $PA_{g,l}^t = (pa_{g,l,1}^{t,1}, \ldots, pa_{g,l,1}^{t,P+1}, \ldots, pa_{g,l,w}^{t,p}, \ldots, pa_{g,l,W}^{t,P+1})$, $P_g^{t,p} = (PS_{g,1}^{t,p}, \ldots, PS_{g,N_p}^{t,p};\; PM_{g,1,1}^{t,p}, \ldots, PM_{g,1,K_1}^{t,p}, \ldots, PM_{g,N_p,K_{N_p}}^{t,p};\; PW_{g,1,1}^{t,p}, \ldots, PW_{g,1,K_1}^{t,p}, \ldots, PW_{g,N_p,K_{N_p}}^{t,p})$, $PS_{g,j}^{t,p} = (ps_{g,j,1}^{t,p}, ps_{g,j,2}^{t,p}, \ldots, ps_{g,j,N_p}^{t,p})$, $PM_{g,j,i}^{t,p} = (pm_{g,j,i}^{t,p,m_{j,i}^1}, \ldots, pm_{g,j,i}^{t,p,m_{j,i}^{n_{j,i}}})$, and $PW_{g,j,i}^{t,p} = (pw_{g,j,i}^{t,p,1}, \ldots, pw_{g,j,i}^{t,p,L})$.

(a) Worker training scheme

Each bit $a_{k,l,w}^{t,p}$, $l \in \{1, \ldots, L\}$, $w \in \{1, \ldots, W\}$, $p \in \{1, \ldots, P+1\}$, in $A_k^t$ is binary: it is equal to one if worker $l$ learns the ability of operating workstation type $w$ in period $p$; otherwise, it is equal to zero. Two aspects are worth emphasizing. First, if operating workstation type $w$ is an initial ability of worker $l$, then $a_{k,l,w}^{t,p}$ is equal to zero for all $p$ because there is no need for this worker to relearn this ability. Second, $a_{k,l,w}^{t,P+1} = 1$ means that worker $l$ will not learn the ability of operating workstation type $w$ within the planning horizon.

(b) Job production sequence

Each bit $s_{k,j,d}^{t,p}$, $d \in \{1, 2, \ldots, N_p\}$, in $S_{k,j}^{t,p}$ is a binary variable: it is equal to one if job $j$ is placed in the $d$-th position of the job production sequence in period $p$ in particle $k$ at iteration $t$; otherwise, it is zero. For instance, if the job production sequence in period 1 in particle $k$ at iteration $t$ is (2, 3, 4, 1), then according to the definition, $s_{k,2,1}^{t,1} = s_{k,3,2}^{t,1} = s_{k,4,3}^{t,1} = s_{k,1,4}^{t,1} = 1$ and all other $s_{k,j,d}^{t,1}$ are equal to zero.
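The position-based binary encoding in the example above can be sketched as follows; this is a minimal illustration with hypothetical helper names, not code from the thesis.

```python
# Encode a job production sequence as the binary position matrix used by DPSO:
# s[j][d] = 1 iff job j+1 occupies position d+1 of the sequence.

def encode_sequence(seq, n_jobs):
    s = [[0] * len(seq) for _ in range(n_jobs)]
    for d, job in enumerate(seq):   # the d-th position holds job `job`
        s[job - 1][d] = 1
    return s

def decode_sequence(s):
    # Invert the encoding: position d is taken by the job whose row has a 1 there.
    return [next(j + 1 for j, row in enumerate(s) if row[d] == 1)
            for d in range(len(s[0]))]

s = encode_sequence([2, 3, 4, 1], 4)
assert s[1][0] == s[2][1] == s[3][2] == s[0][3] == 1   # matches the thesis example
assert decode_sequence(s) == [2, 3, 4, 1]
```

The four ones correspond exactly to $s_{k,2,1}^{t,1} = s_{k,3,2}^{t,1} = s_{k,4,3}^{t,1} = s_{k,1,4}^{t,1} = 1$ in the example.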


(c) Worker and workstation assignment

Each bit in $M_{k,j,i}^{t,p}$ is equal to one if operation $i$ of job $j$ is assigned to the corresponding workstation in period $p$; otherwise it is zero. Similarly, each bit in $W_{k,j,i}^{t,p}$ is equal to one if the manufacturing process of operation $i$ of job $j$ is handled by the corresponding worker in period $p$; otherwise it is zero. It is worth pointing out that the prerequisite for $w_{k,j,i}^{t,p,l}$ taking value one is that worker $l$ has the ability of operating the workstation assigned to operation $i$ of job $j$ in period $p$.

2. Definition of velocity

Similar to the structure of the particles, the velocity of particle $k$ at iteration $t$ can be presented as $V_k^t = (VA_k^t, V_k^{t,1}, \ldots, V_k^{t,p}, \ldots, V_k^{t,P})$, where $VA_k^t = (VA_{k,1}^t, \ldots, VA_{k,l}^t, \ldots, VA_{k,L}^t)$, $VA_{k,l}^t = (va_{k,l,1}^{t,1}, \ldots, va_{k,l,1}^{t,P+1}, \ldots, va_{k,l,w}^{t,p}, \ldots, va_{k,l,W}^{t,P+1})$, and $V_k^{t,p} = (VS_{k,1}^{t,p}, VS_{k,2}^{t,p}, \ldots, VS_{k,N_p}^{t,p};\; VM_{k,1,1}^{t,p}, \ldots, VM_{k,1,K_1}^{t,p}, \ldots, VM_{k,N_p,K_{N_p}}^{t,p};\; VW_{k,1,1}^{t,p}, \ldots, VW_{k,1,K_1}^{t,p}, \ldots, VW_{k,N_p,K_{N_p}}^{t,p})$. $VA_k^t$ denotes the velocity of workers' ability training. $V_k^{t,p}$ consists of three parts: the first concerns the velocity of the job production sequence, the second the velocity of the workstation assignment for each operation, and the third the velocity of the worker assignment for each operation. Each bit in $V_k^{t,p}$ is also a vector with the following format: $VS_{k,j}^{t,p} = (vs_{k,j,1}^{t,p}, \ldots, vs_{k,j,N_p}^{t,p})$, $VM_{k,j,i}^{t,p} = (vm_{k,j,i}^{t,p,m_{j,i}^1}, \ldots, vm_{k,j,i}^{t,p,m_{j,i}^{n_{j,i}}})$, and $VW_{k,j,i}^{t,p} = (vw_{k,j,i}^{t,p,1}, \ldots, vw_{k,j,i}^{t,p,L})$.

(a) Velocity of worker training

In $VA_k^t$, a high value of $va_{k,l,w}^{t,p}$ indicates that worker $l$ is more likely to learn the ability of operating workstation type $w$ in period $p$, whereas a low value means that this worker is more likely to learn this ability in another period. At each iteration, $va_{k,l,w}^{t,p}$ is updated according to Equation (4-24):

$va_{k,l,w}^{t,p} = \omega_{t-1} va_{k,l,w}^{t-1,p} + c_1 r_1^{t-1} (pa_{k,l,w}^{t-1,p} - a_{k,l,w}^{t-1,p}) + c_2 r_2^{t-1} (pa_{g,l,w}^{t-1,p} - a_{k,l,w}^{t-1,p}) \quad \forall l, w, p$   (4-24)

After that, it is converted to a probability through the sigmoid function (4-25):

$s(va_{k,l,w}^{t,p}) = \dfrac{1}{1 + \exp(-va_{k,l,w}^{t,p})} \quad \forall l, w, p$   (4-25)

Here $s(va_{k,l,w}^{t,p})$ denotes the probability of worker $l$ learning the ability to operate workstation type $w$ in period $p$.

(b) Velocity of the job production sequence

In $VS_{k,j}^{t,p}$, a high value of $vs_{k,j,d}^{t,p}$ indicates that job $j$ is more likely to be placed in the $d$-th position of the job production sequence in period $p$, whereas a low value means that this job is better placed elsewhere. At each iteration, $vs_{k,j,d}^{t,p}$ is updated according to Equation (4-26):

$vs_{k,j,d}^{t,p} = \omega_{t-1} vs_{k,j,d}^{t-1,p} + c_1 r_1^{t-1,s} (ps_{k,j,d}^{t-1,p} - s_{k,j,d}^{t-1,p}) + c_2 r_2^{t-1,s} (ps_{g,j,d}^{t-1,p} - s_{k,j,d}^{t-1,p}) \quad d = 1, 2, \ldots, N_p$   (4-26)

The velocity is then converted to a probability through the sigmoid function (4-27):

$s(vs_{k,j,d}^{t,p}) = \dfrac{1}{1 + \exp(-vs_{k,j,d}^{t,p})} \quad d = 1, 2, \ldots, N_p$   (4-27)

Here $s(vs_{k,j,d}^{t,p})$ denotes the probability of placing job $j$ in the $d$-th position of the job production sequence in period $p$.

(c) Velocity of worker and workstation assignment

In $VM_{k,j,i}^{t,p}$ and $VW_{k,j,i}^{t,p}$, a high value of $vm_{k,j,i}^{t,p,m_{j,i}^s}$ or $vw_{k,j,i}^{t,p,l}$ indicates that operation $i$ of job $j$ is more likely to be handled by the corresponding workstation or worker, whereas a low value means that this operation would be better handled by another suitable workstation or skilled worker. These velocities are updated according to Equations (4-28) and (4-29) respectively at each iteration.

$vm_{k,j,i}^{t,p,m_{j,i}^s} = \omega_{t-1} vm_{k,j,i}^{t-1,p,m_{j,i}^s} + c_1 r_1^{t-1,m} (pm_{k,j,i}^{t-1,p,m_{j,i}^s} - m_{k,j,i}^{t-1,p,m_{j,i}^s}) + c_2 r_2^{t-1,m} (pm_{g,j,i}^{t-1,p,m_{j,i}^s} - m_{k,j,i}^{t-1,p,m_{j,i}^s})$   (4-28)

$vw_{k,j,i}^{t,p,l} = \omega_{t-1} vw_{k,j,i}^{t-1,p,l} + c_1 r_1^{t-1,w} (pw_{k,j,i}^{t-1,p,l} - w_{k,j,i}^{t-1,p,l}) + c_2 r_2^{t-1,w} (pw_{g,j,i}^{t-1,p,l} - w_{k,j,i}^{t-1,p,l})$   (4-29)

Equation (4-29) updates the velocity for all workers. However, some workers may not have the ability to operate the workstation assigned to operation $i$ of job $j$ in this period. Let $L_k^{t,p}(j,i)$ denote the set of workers that can handle the workstation assigned to operation $i$ of job $j$ in period $p$ of particle $k$ at iteration $t$. The velocity of the worker assignment is then adjusted through Equation (4-30) for the workers not belonging to $L_k^{t,p}(j,i)$:

$vw_{k,j,i}^{t,p,l} = \theta \cdot vw_{k,j,i}^{t,p,l} \quad \forall l \notin L_k^{t,p}(j,i)$   (4-30)

Here  is a decimal number in the range of (0, 1). Also the velocities will be converted into the change of probabilities through Equations (4-31) and (4-32):

1

t , p ,ms

s (vwk , j ,i j ,i ) 

t , p ,ms

1  exp(vwk , j ,i j ,i ) 1

t , p , ws

s (vwk , j ,i j ,i ) 

t , p , ws

1  exp(vwk , j ,i j ,i )

(4-31)

(4-32)

t , p , ms

Here s(vwk , j ,i j ,i ) denotes the probability of assigning the corresponding workstation to t , p , ws

manufacture this operation, and s(vwk , j ,i j ,i ) denotes the probability of assigning the corresponding worker to handle the manufacturing process of this operation. 3. Construction of a complete production schedule In the iteration process of discrete particle swarm optimization, each particle should be decoded into a complete production schedule so as to evaluate its fitness. The decoding method is as follows.


(a) Workers' ability training

For a workstation type $w$ that is not among the initial abilities of worker $l$, the period in which the worker learns the ability of operating this type of workstation is determined according to Equation (4-33):

$q_{k,l,w}^{t}(p) = \dfrac{s(va_{k,l,w}^{t,p})}{\sum_{i=1}^{P+1} s(va_{k,l,w}^{t,i})} \quad 1 \le p \le P+1$   (4-33)

Here $q_{k,l,w}^{t}(p)$ denotes the probability of learning the ability of operating workstation type

w in period p for worker l . It is worth of noting that if P  1 is selected, this worker will not learn the ability of operating workstation type w within the planning horizon. (b) Construction of job production sequence The job production sequence is constructed form period to period. Taking period p for example, it starts from a null sequence and then places an unscheduled job j in the d th position from d  1 to N p according to Equation (4-34):

qkt ,,dp ( j ) 

s (vskt ,,pj ,d )

 s(vs j ' U

t, p k , j' ,d

)

(4-34)

Here U is the set or a subset of the unscheduled jobs in period p , and qkt ,,pd ( j ) is the probability of placing job j in the d th position of the job production sequence in period p . A complete job production sequence is constructed when each job in each period has

been assigned to a position.

(c) Worker and workstation assignment

In order to manufacture all of the jobs, each operation must be assigned a suitable workstation and a skilled worker. The procedure for assigning a workstation and a worker to each operation is as follows. First, a set of workstations $M(j,i)$ and a set of workers $L_k^{t,p}(j,i)$ that can handle this operation in this period are determined. Then a suitable workstation is selected from the set $M(j,i)$ according to Equation (4-35):

$q_{k,j,i}^{t,p}(m) = \dfrac{s(vm_{k,j,i}^{t,p,m})}{\sum_{w \in M(j,i)} s(vm_{k,j,i}^{t,p,w})} \quad m \in M(j,i)$   (4-35)

Here $q_{k,j,i}^{t,p}(m)$ denotes the probability of assigning workstation $m$ to produce operation $i$ of job $j$. Likewise, a skilled worker is selected from the set $L_k^{t,p}(j,i)$ according to Equation (4-36):

$q_{k,j,i}^{t,p}(l) = \dfrac{s(vw_{k,j,i}^{t,p,l})}{\sum_{w \in L_k^{t,p}(j,i)} s(vw_{k,j,i}^{t,p,w})} \quad l \in L_k^{t,p}(j,i)$   (4-36)

Here $q_{k,j,i}^{t,p}(l)$ denotes the probability of assigning worker $l$ to handle the manufacturing of operation $i$ of job $j$ in period $p$.

4.4.2 Hybridization of DPSO, CP and ACS (ACPSO)

Given the good performance of ACPSO in locating good solutions, this hybrid algorithm is adopted to solve the production scheduling problems for VCMSs operating in a multi-period situation. The details are presented as follows.

1. Capacity backward

Due to the capacity limitation of production resources, some jobs cannot be finished before their due dates in some periods, while some production resources may have a certain amount of remaining capacity in other periods. Hence it is necessary to make a trade-off among inventory-holding cost, subcontracting cost, and production resource utilization. A concept called extra capacity is first introduced: the extra capacity of a job in a period is the number of additional units of the job that can be produced in this period without affecting other scheduled jobs. When there are backlogs in a period, the remaining capacities of production resources in previous periods can be utilized to produce the backlogged jobs that have extra capacities in those periods. If backlogs still remain after this capacity backward process, the remaining amounts are subcontracted so as to meet customer demand. In a multi-period situation, the customer demand in a period can thus be met through three approaches: (1) manufacturing jobs in this period by the company itself; (2) manufacturing jobs by utilizing the extra capacities of previous periods and holding them in inventory until this period; and (3) subcontracting all of the unfinished jobs.
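The capacity backward step just described can be sketched as follows: backlog in a period is first covered by the extra capacities of earlier periods, and whatever remains is subcontracted. This is a minimal illustration with hypothetical data structures, not the thesis implementation.

```python
# Sketch of the capacity backward process (illustration only):
# `extra[p]` is the extra capacity of one job in period p; `backlog` is the
# number of unfinished units of that job in the current period.

def capacity_backward(backlog, extra):
    produced_early = {}            # units produced ahead of time, per period
    for p in sorted(extra):        # drain earlier periods in chronological order
        if backlog == 0:
            break
        take = min(extra[p], backlog)
        if take:
            produced_early[p] = take
            backlog -= take
    return produced_early, backlog  # any remaining backlog must be subcontracted

early, subcontract = capacity_backward(12, {1: 8, 2: 3})
# 8 units pulled into period 1 and 3 into period 2; 1 unit is subcontracted.
```

The returned split corresponds directly to approaches (2) and (3) above.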

Figure 4-1. A sample for illustrating the extra capacity concept.

Figure 4-1 provides a simple example to facilitate the understanding of the extra capacity concept (workforce requirements are not considered here, in order to simplify the illustration). A period consists of six time slices, each with a length of 300 seconds. A job contains three operations, and its production demand is 20 units in this period. The unit processing times of the three operations are 20, 30 and 40 seconds respectively. According to the processing-rate constraints, the maximum processing rate of this job in a time slice is seven units. Assuming that there is no preload on the workstations to which this job has been assigned, the processing duration of the first operation runs from time slice 1 to time slice 3, that of the second operation from time slice 2 to time slice 4, and that of the third operation from time slice 3 to time slice 5. The remaining capacity of workstation M1 can then produce one more unit of operation 1 in time slice 3 and seven more units in time slice 4. Due to the processing-rate constraints, workstation M1 cannot produce this job in time slices 5 and 6; otherwise the succeeding operations could not be finished within this period. As presented in Figure 4-1, the machines are assigned to manufacture this job in the time slices shown as dark blocks.

Hence the extra capacity of this job in this period is eight units. If some of these workstations are also assigned to produce other jobs, the extra capacity of this job may be affected. For instance, if workstation M3 has been fully assigned to other jobs in time slice 6, then the extra capacity of this job is only one unit.

2. ACPSO procedure

The ACPSO procedure for the production scheduling problems of multi-period VCMSs is presented in Figure 4-2. Overall, discrete particle swarm optimization is the main framework of ACPSO, ant colony system principles are adopted to increase the exploration of the search space related to worker training and the job production sequence, and constraint programming is used to increase the exploitation of the search space by coordinating production resource assignment among jobs.

Step 1. Initialization
  Step 1.1 Initialize parameters such as particle size K, t_max, inertial weight, α, β, ρ, θ.
  Step 1.2 Initialize all particles' positions X_k^t and velocities V_k^t randomly.
  Step 1.3 Evaluate the objective function value for each particle. Initialize P_k^t and P_g^t.
  Step 1.4 Initialize the pheromone values of worker training and job production sequence.
Step 2. Perform iteration process
  while (t <= t_max)
    Step 2.1 for k = 1 to K
      Update velocity of particle k, including velocities of worker training,
        job production sequence, and production resource assignment.
      Update workers' ability training of particle k.
      Update pheromone value of worker training.
      for p = 1 to P
        Update job production sequence in period p of particle k.
        Update pheromone value of job production sequence in period p.
        while job production sequence list of period p is not empty
          Pick the first job j in the job production sequence list.
          Update workstation and worker assignment of job j.
          Check consistency.
          while (not consistent)
            Detect critical resource and add it to the violation set.
            if there are alternatives for this critical resource
              Change to a new assignment of this resource.
              Check consistency.
            else
              Randomly select a resource according to DPSO.
              Set it consistent.
          end-while
          Calculate production output of job j in period p.
          Delete this job from the job production sequence list.
        end-while
      end-for
      Evaluate the fitness of the particle and update P_k^t.
    end-for
    Step 2.2 Update P_g^t and perform global updating of pheromone values.
    Step 2.3 Increment the iteration count: t = t + 1.
  end-while
Step 3. Report the best solution in the swarm and the corresponding objective function value.

Figure 4-2. The ACPSO procedure for production scheduling of multi-period VCMSs.

(a) Worker training and job production sequence

ACPSO adopts the principles of ACS to increase the exploration of the solution space related to the job production sequence and worker training. In ACPSO, the process of specifying a worker's training scheme, which consists of two steps, can be illustrated with the example of determining the period in which worker $l$ learns the new ability of operating workstation type $w$. The steps are as follows:

Step 1. Select $\min(\varepsilon Q, M)$ periods from the range $[1, P+1]$ using the selection mechanism $J$ of ACS, and denote the selected set as $S_1$. Here $Q = P+1$, $M$ is a fixed integer, and $\varepsilon$ is a decimal number in the range (0, 1). If $\min(\varepsilon Q, M)$ is not an integer, it is rounded up to the least integer larger than $\min(\varepsilon Q, M)$. In this research, the heuristic value for worker training is set as the inverse of the training cost.

Step 2. Select a period from the set $S_1$ using the mechanism of DPSO. This will be the period in which worker $l$ learns the ability of operating workstation type $w$.

This two-step procedure continues until each worker's training scheme has been determined. In a multi-period situation, the determination of the job production sequence is considered period by period. The process of constructing a job production sequence for a period, which also consists of two steps, can be illustrated with the example of determining the production priority of a job in period $p$. The steps are as follows:

Step 1. Select $\min(\varepsilon Q, M)$ unscheduled jobs using the mechanism of ACS and denote the selected set as $S_1$. Here $Q$ is the number of unscheduled jobs in period $p$, $M$ is a fixed integer, and $\varepsilon$ is a decimal in the range (0, 1). If $\min(\varepsilon Q, M)$ is not an integer, it is rounded up to the least integer larger than $\min(\varepsilon Q, M)$. In this research, the heuristic value for generating the job production sequence is set as the inverse of the due date of the corresponding job.

Step 2. Select a job from the set $S_1$ using the mechanism of DPSO and place the selected job in this production priority of this period.

This two-step procedure continues until each job has been given a production priority. Since the ACS principle is adopted in the iteration process of the worker training and job production sequence, local and global updating of the pheromone values must be performed during the iteration process. After each solution construction step, local updating of the pheromone values regarding worker training and the job production sequence is performed on the currently selected solution component. Once the swarm has finished one complete iteration, global updating of the pheromone values regarding worker training and the job production sequence is performed based on the best solution found so far. If the ACS principles are not adopted in the procedure outlined in Figure 4-2, the algorithm reduces to the hybridization of DPSO and CP, which is known as CPSO.

(b) Backtracking in production resource assignment

The backtracking propagation procedure in a multi-period situation is almost the same as that in a single-period situation. If a job cannot be finished before its due date, even after utilizing the extra capacities of previous periods, it is treated as inconsistent, and critical production resources are shifted to other assignments. The propagation technique takes the form of single-level backtracking. When an inconsistency occurs, the approach first detects the critical production resource (the procedure has been presented in Figure 3-7) and then checks for alternative assignments. If an alternative exists, an alternative production resource is assigned to this operation. If not, the algorithm does not backtrack to the most recently scheduled job; instead it randomly selects a suitable workstation or skilled worker according to the DPSO mechanism regardless of consistency, and then continues to schedule the next job until all of the jobs have been scheduled.
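The single-level backtracking logic above can be sketched as follows; this is a minimal illustration with hypothetical helper names, whereas the real procedure also propagates constraints and computes due-date feasibility.

```python
import random

# Sketch of single-level backtracking for resource assignment (illustration):
# try alternative resources for an inconsistent operation; if none is feasible,
# fall back to a random DPSO-style choice and accept it regardless.

def assign_resource(candidates, feasible):
    """candidates: resources able to handle one operation; feasible: consistency check."""
    for r in candidates:          # check alternatives for the critical resource
        if feasible(r):
            return r, True
    # No consistent alternative: pick randomly and continue with the next job.
    return random.choice(candidates), False

resource, consistent = assign_resource(["M1", "M2", "M3"], lambda r: r == "M2")
# M1 fails the check, M2 passes, so ("M2", True) is returned.
```

The `False` branch mirrors the design choice of not backtracking to previously scheduled jobs: an inconsistent assignment is tolerated so that scheduling can proceed.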

4.5 Computational Experiments and Results

4.5.1 Manufacturing system configuration

The performance of ACPSO on the production scheduling problems of multi-period VCMSs is evaluated on a large set of randomly generated test problems. The parameter values of the algorithm are as follows:

Particle size:                            100
Maximum number of iterations:             100
Maximum inertial weight:                  0.8
Minimal inertial weight:                  0.2
Cognitive scaling coefficient:            2
Social scaling coefficient:               2
Range of the velocity of particles:       [-4, 4]
Value of ε:                               0.5
Value of M:                               6
Pheromone evaporation rate:               0.2
Parameter of the heuristic value:         0.2

The number of periods takes one of two values: 5 or 10. Each period is divided into 30 equal time slices with a length of 300 seconds. The number of jobs in each period is randomly generated from the range [5, 10] or [10, 15]. Two system configurations are considered in this research: in the first, the system consists of 12 workstations and 10 workers; in the second, of 20 workstations and 20 workers. The volume of each job in a period is randomly generated from the range [30, 50] or [50, 80]. The subcontracting cost of each job is randomly generated from the range [500, 1000] or [1000, 2000]. In each period, the due date of each job is randomly generated from the range $[\alpha \cdot PH, \beta \cdot PH]$, where $PH$ denotes the number of time slices in a period. Two values of $(\alpha, \beta)$ are considered, namely (0.4, 0.7) and (0.6, 0.9), which represent tight and loose due-date constraints respectively. In addition, each job in the experiments contains three or four operations, each of which has a processing time randomly generated from the range [20, 40]. The transportation cost of each job per unit distance is generated from [1, 3], and the distance between any two workstations is generated from the range [2, 10]. Exactly 48 parameter combination schemes are considered in the experiments; they are listed in Table 4-13. Let (p, n, d, v, s, m) denote a parameter combination, where p represents the number of periods in the planning horizon, n the number of jobs in each period, d the due-date constraints, v the job volume, s the subcontracting cost, and m the system configuration. For instance, scheme 1 specifies 5 periods in the planning horizon, a number of jobs per period randomly generated from [5, 10], tight due dates, a production volume per job randomly generated from [30, 50], a subcontracting cost per job randomly generated from [500, 1000], and a system containing 12 workstations and 10 workers.

Scheme no.   Parameter value

1    (5, [5, 10], [0.4, 0.7], [30, 50], [500, 1000], (12, 10))
2    (5, [5, 10], [0.4, 0.7], [50, 80], [500, 1000], (12, 10))
3    (5, [5, 10], [0.6, 0.9], [30, 50], [500, 1000], (12, 10))
4    (5, [5, 10], [0.6, 0.9], [50, 80], [500, 1000], (12, 10))
5    (5, [5, 10], [0.4, 0.7], [30, 50], [1000, 2000], (12, 10))
6    (5, [5, 10], [0.4, 0.7], [50, 80], [1000, 2000], (12, 10))
7    (5, [5, 10], [0.6, 0.9], [30, 50], [1000, 2000], (12, 10))
8    (5, [5, 10], [0.6, 0.9], [50, 80], [1000, 2000], (12, 10))
9    (5, [5, 10], [0.4, 0.7], [30, 50], [500, 1000], (20, 20))
10   (5, [5, 10], [0.4, 0.7], [50, 80], [500, 1000], (20, 20))
11   (5, [5, 10], [0.6, 0.9], [30, 50], [500, 1000], (20, 20))
12   (5, [5, 10], [0.6, 0.9], [50, 80], [500, 1000], (20, 20))
13   (5, [5, 10], [0.4, 0.7], [30, 50], [1000, 2000], (20, 20))
14   (5, [5, 10], [0.4, 0.7], [50, 80], [1000, 2000], (20, 20))
15   (5, [5, 10], [0.6, 0.9], [30, 50], [1000, 2000], (20, 20))
16   (5, [5, 10], [0.6, 0.9], [50, 80], [1000, 2000], (20, 20))
17   (5, [10, 15], [0.4, 0.7], [30, 50], [500, 1000], (12, 10))
18   (5, [10, 15], [0.4, 0.7], [50, 80], [500, 1000], (12, 10))
19   (5, [10, 15], [0.6, 0.9], [30, 50], [500, 1000], (12, 10))
20   (5, [10, 15], [0.6, 0.9], [50, 80], [500, 1000], (12, 10))
21   (5, [10, 15], [0.4, 0.7], [30, 50], [1000, 2000], (12, 10))
22   (5, [10, 15], [0.4, 0.7], [50, 80], [1000, 2000], (12, 10))
23   (5, [10, 15], [0.6, 0.9], [30, 50], [1000, 2000], (12, 10))
24   (5, [10, 15], [0.6, 0.9], [50, 80], [1000, 2000], (12, 10))
25   (5, [10, 15], [0.4, 0.7], [30, 50], [500, 1000], (20, 20))
26   (5, [10, 15], [0.4, 0.7], [50, 80], [500, 1000], (20, 20))
27   (5, [10, 15], [0.6, 0.9], [30, 50], [500, 1000], (20, 20))
28   (5, [10, 15], [0.6, 0.9], [50, 80], [500, 1000], (20, 20))
29   (5, [10, 15], [0.4, 0.7], [30, 50], [1000, 2000], (20, 20))
30   (5, [10, 15], [0.4, 0.7], [50, 80], [1000, 2000], (20, 20))
31   (5, [10, 15], [0.6, 0.9], [30, 50], [1000, 2000], (20, 20))
32   (5, [10, 15], [0.6, 0.9], [50, 80], [1000, 2000], (20, 20))
33   (10, [5, 10], [0.4, 0.7], [30, 50], [500, 1000], (12, 10))
34   (10, [5, 10], [0.4, 0.7], [50, 80], [500, 1000], (12, 10))
35   (10, [5, 10], [0.6, 0.9], [30, 50], [500, 1000], (12, 10))
36   (10, [5, 10], [0.6, 0.9], [50, 80], [500, 1000], (12, 10))
37   (10, [5, 10], [0.4, 0.7], [30, 50], [1000, 2000], (12, 10))
38   (10, [5, 10], [0.4, 0.7], [50, 80], [1000, 2000], (12, 10))
39   (10, [5, 10], [0.6, 0.9], [30, 50], [1000, 2000], (12, 10))
40   (10, [5, 10], [0.6, 0.9], [50, 80], [1000, 2000], (12, 10))
41   (10, [10, 15], [0.4, 0.7], [30, 50], [500, 1000], (12, 10))
42   (10, [10, 15], [0.4, 0.7], [50, 80], [500, 1000], (12, 10))
43   (10, [10, 15], [0.6, 0.9], [30, 50], [500, 1000], (12, 10))
44   (10, [10, 15], [0.6, 0.9], [50, 80], [500, 1000], (12, 10))
45   (10, [10, 15], [0.4, 0.7], [30, 50], [1000, 2000], (12, 10))
46   (10, [10, 15], [0.4, 0.7], [50, 80], [1000, 2000], (12, 10))
47   (10, [10, 15], [0.6, 0.9], [30, 50], [1000, 2000], (12, 10))
48   (10, [10, 15], [0.6, 0.9], [50, 80], [1000, 2000], (12, 10))

Table 4-13. Test problem generating schemes.

4.5.2. Analysis of worker training level

In order to facilitate better medium-term manufacturing planning, the factors affecting the worker training scheme are analyzed in detail.

(a) Training cost
Intuitively, training costs will significantly affect the worker training level. Two different types of training costs are considered in Schemes 1 and 2 of the test problems. The first type of training cost is generated from the range [1000, 2000], and the second type is generated from the range [11000, 12000]. Other data are kept the same in both situations. The worker training levels in these two situations are listed in Table 4-14. From the table, it is clear that the worker training level becomes higher when the training cost is lower. The training level is calculated through Equation (4-37):

worker training level = (Σ_{l=1}^{L} NA_l) / (L · N_w − Σ_{l=1}^{L} IA_l)        (4-37)

Here L is the number of workers, N_w is the number of workstation types, IA_l is the number of initial ability types of worker l, and NA_l is the number of new ability types which worker l learns within the planning horizon. Taking Scheme 1 for example, the worker training level is 0.425 when the training cost is low, and 0.167 when the training cost is high.

                       Scheme 1                        Scheme 2
                       Low cost      High cost         Low cost      High cost
Worker training level  0.425         0.167             0.617         0.25

Table 4-14. Worker training level under different training costs and job tightness.

A theorem is offered here to verify the experimental result.
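The metric of Equation (4-37) is straightforward to compute. The sketch below is a minimal Python illustration; the worker ability counts (two workers, four workstation types) are invented for the example and are not taken from the thesis test problems.

```python
def worker_training_level(n_workstation_types, initial_abilities, new_abilities):
    """Worker training level per Equation (4-37): the number of newly learned
    ability types divided by the number of ability types the workforce
    could still have learned (L * N_w minus the initial abilities)."""
    L = len(initial_abilities)                 # number of workers
    learned = sum(new_abilities)               # sum of NA_l
    learnable = L * n_workstation_types - sum(initial_abilities)  # L*N_w - sum IA_l
    return learned / learnable

# Hypothetical example: 2 workers, 4 workstation types,
# initial abilities IA = [1, 2], newly learned abilities NA = [2, 1].
print(worker_training_level(4, [1, 2], [2, 1]))  # 3 / (8 - 3) = 0.6
```

A level of 0 means no training occurred, while 1 means every worker learned every ability type he or she did not initially possess.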

Theorem 4-1. Keeping other system statuses the same, the worker training level increases as the training cost decreases.

Proof. Let VCMS1 and VCMS2 denote two scheduling problems with the same system status, except that VCMS2 has a higher training cost. Let S1 and S2 denote the optimal production schedules of these two problems, and let tr1 and tr2 denote the corresponding worker training levels. The theorem asserts that tr1 is larger than or equal to tr2, and it can be proven by contradiction. Suppose tr1 < tr2, as depicted in Figure 4-3, where point A denotes the optimal worker training scheme of VCMS1 and point B denotes that of VCMS2. Since point A is the optimal worker training scheme of VCMS1, moving VCMS1 to any production schedule with a higher worker training level would decrease the other manufacturing costs by less than it increases the training cost. The same holds all the more in VCMS2, where training is more expensive; thus, if VCMS2 adopted the same production schedule as VCMS1, its total manufacturing cost would decrease, which contradicts the assumption that point B is the optimal worker training scheme of VCMS2. Hence the supposition tr1 < tr2 is false; that is, tr1 must be equal to or larger than tr2.

Figure 4-3. The effect of training cost on training level.

(b) Job tightness
Job tightness may also affect the worker training level. The test problems in Scheme 2 have tighter deadlines than those in Scheme 1. From the computational results in Table 4-14, it is obvious that problems with tighter deadlines lead to higher worker training levels. A theorem is also developed here to verify this experimental result.

Theorem 4-2. Keeping other statuses the same and assuming that the total production volume of all the jobs is much larger than one, the training level of workers will increase when the system requires tighter tasks.


Proof. Let VCMS1 and VCMS2 denote two scheduling problems with the same system status, except that VCMS2 has larger production volumes. Let S1 and S2 denote the optimal production schedules of these two problems, and let tr1 and tr2 denote the corresponding worker training levels. The theorem asserts that tr1 is less than or equal to tr2. Suppose, for contradiction, that tr1 > tr2, and first consider the condition that VCMS2 has only one more unit of a job than VCMS1. Then C(VCMS2) = C(VCMS1) + C(j), where j denotes the extra unit of the job, C(VCMS1) denotes the manufacturing cost of producing all the jobs in VCMS1, and C(VCMS2) represents the manufacturing cost of producing all the jobs in VCMS2. Due to the assumption that the production volume of jobs in the system is much larger than one, the manufacturing cost of VCMS2 is approximately determined by the jobs which are also included in VCMS1. If tr1 > tr2, the total manufacturing cost of VCMS1 can then be reduced by adopting the production schedule S2 in VCMS1, which is contrary to the statement that S1 is the optimal schedule of VCMS1. That is, tr1 is less than or equal to tr2. A more general proof of this theorem can be obtained through finite recursion of the above argument.

(c) Inventory-holding cost of jobs
Two types of inventory-holding costs are considered in Scheme 1 to study the effect of inventory-holding cost on the worker training level. In the first type, the inventory-holding cost is generated from the range [50, 100]; in the second type, it is generated from the range [250, 300]. The worker training levels of these two situations are listed in Table 4-15.

                Low inventory-holding cost    High inventory-holding cost
Training level  0.425                         0.55

Table 4-15. Training levels under different inventory-holding costs.

From the experimental results in Table 4-15, it is clear that the worker training level increases as the inventory-holding cost increases. Theorem 4-3 is developed to verify this result.

Theorem 4-3. Keeping other system statuses the same, the worker training level increases as the inventory-holding cost becomes larger.

Proof. Let VCMS1 and VCMS2 denote two scheduling problems with the same system status, except that VCMS2 has a higher inventory-holding cost. Let S1 and S2 denote the optimal production schedules of these two problems, and let tr1 and tr2 denote the corresponding worker training levels. The theorem asserts that tr1 is less than or equal to tr2. Once again, proof by contradiction is adopted. As demonstrated in Figure 4-4, suppose that point A is the optimal worker training scheme of VCMS1, that point C is that of VCMS2, and that tr1 > tr2. Because point C is the optimal worker training scheme of VCMS2, raising the training level of VCMS2 from point C to the level of point A would reduce the other manufacturing costs by less than it increases the training cost. Furthermore, as the inventory-holding cost in VCMS1 is less than that in VCMS2, the total manufacturing cost of VCMS1 would decrease if the production schedule S2, whose training level is only tr2, were adopted. This is contrary to the statement that point A is the optimal worker training scheme of VCMS1. That is, tr1 must be less than or equal to tr2.


Figure 4-4. Worker training level under different inventory-holding costs.

(d) Subcontracting cost
The test problems in Schemes 1 and 5 have the same system statuses, except that the problems in Scheme 5 have a higher subcontracting cost. The worker training levels of these two schemes are listed in Table 4-16.

                Low subcontracting cost    High subcontracting cost
Training level  0.425                      0.567

Table 4-16. Training levels under different subcontracting costs.

The experimental result in Table 4-16 shows that the worker training level increases as the subcontracting cost becomes larger.


Theorem 4-4. Keeping other system statuses the same, the worker training level increases as the subcontracting cost becomes larger.

The proof procedure of Theorem 4-4 is similar to that of Theorem 4-3.

4.5.3. Comparison of ACPSO and CPSO performances

In the following test problems, the inventory-holding cost per unit job per period is generated from the range [50, 100], and the worker training costs are generated from the range [1000, 2000]. The performance of ACPSO is compared with that of CPSO. Under each scheme, five test problems are randomly generated, and the reported performance is obtained by averaging the results of running each algorithm five times on each test problem. Table 4-17 shows the performance comparison of the two algorithms after 100 iterations. The "cost diff" and "time diff" columns in this table are calculated according to Equations (4-38) and (4-39):

cost diff = (cost of CPSO − cost of ACPSO) / cost of CPSO        (4-38)

time diff = (computational time of ACPSO − computational time of CPSO) / computational time of CPSO        (4-39)
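The two metrics are simple to compute directly. As a sanity check, the scheme 1 values from Table 4-17 (CPSO: cost 468217, time 404; ACPSO: cost 442953, time 516) reproduce the tabulated 5.40% and 27.72%:

```python
def cost_diff(cpso_cost, acpso_cost):
    """Relative cost improvement of ACPSO over CPSO, Equation (4-38)."""
    return (cpso_cost - acpso_cost) / cpso_cost

def time_diff(cpso_time, acpso_time):
    """Relative extra computation time of ACPSO, Equation (4-39)."""
    return (acpso_time - cpso_time) / cpso_time

# Scheme 1 values from Table 4-17:
print(f"{cost_diff(468217, 442953):.2%}")  # 5.40%
print(f"{time_diff(404, 516):.2%}")        # 27.72%
```

A positive cost diff means ACPSO found a cheaper schedule; a positive time diff means ACPSO needed more computation time than CPSO.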

Scheme       CPSO               ACPSO              Cost diff   Time diff
No.      cost      time     cost      time
1        468217    404      442953    516        5.40%       27.72%
2        820298    447      709387    592        13.52%      32.44%
3        414363    300      394885    364        4.70%       21.33%
4        924830    456      762148    591        17.59%      29.61%
5        433641    362      375211    452        13.47%      24.86%
6        1539515   470      1087711   601        29.35%      27.87%
7        649248    384      549899    496        15.30%      29.17%
8        1662090   465      1185283   640        28.69%      37.63%
9        477689    807      458208    971        4.08%       20.32%
10       619920    845      536463    1025       13.46%      21.30%
11       415425    810      401543    1051       3.34%       29.75%
12       756463    891      697312    1132       7.82%       27.16%
13       505410    885      473645    1164       6.28%       31.53%
14       1194672   998      946544    1313       20.77%      31.56%
15       577593    924      544984    1193       5.65%       29.11%
16       868841    937      721155    1204       17.00%      28.50%
17       1022037   815      908264    1285       11.13%      57.67%
18       2086286   848      1879438   1346       9.91%       58.73%
19       1008681   841      877148    1374       13.04%      63.38%
20       1987861   855      1736305   1347       12.65%      57.54%
21       1978787   786      1735449   1315       12.30%      67.31%
22       3768295   848      3186298   1365       15.44%      60.97%
23       1168358   714      840162    1145       28.09%      60.36%
24       3114096   726      2562738   1202       17.71%      65.56%
25       816367    1770     775620    2586       4.99%       46.10%
26       1806947   1631     1494022   2509       17.32%      53.83%
27       787381    1670     740902    2553       5.90%       52.87%
28       1926466   1488     1568994   2357       18.56%      58.40%
29       926931    1534     704429    2176       24.00%      41.85%
30       2224827   1661     1702224   2345       23.49%      41.18%
31       694003    1765     645206    2591       7.03%       46.80%
32       2840949   1789     2295924   2658       19.18%      48.57%
33       1268827   860      1153841   1233       9.06%       43.37%
34       1835848   824      1556318   1209       15.23%      46.72%
35       1017605   917      973467    1438       4.34%       56.82%
36       1879493   872      1615041   1383       14.07%      58.60%
37       1031420   687      931482    979        9.69%       42.50%
38       2437341   787      1731302   1093       28.97%      38.88%
39       878230    759      836691    1211       4.73%       59.55%
40       2505146   852      2114654   1344       15.59%      57.75%
41       1993579   1927     1738675   2912       12.79%      51.16%
42       3964986   1960     3602054   2941       9.15%       62.75%
43       1940489   1890     1713603   2998       11.69%      58.62%
44       3937615   1953     3526996   2963       10.43%      50.05%
45       2666383   1827     2084108   2836       21.84%      55.23%
46       7421063   1905     6326209   2885       14.75%      51.44%
47       2557149   1860     1940430   2993       24.12%      60.91%
48       6961576   1945     5793015   3073       16.79%      58.00%

Table 4-17. Comparative performance of CPSO and ACPSO over the same number of iterations.

Table 4-17 makes clear that ACPSO can locate much better scheduling solutions at the cost of longer computation time, especially as the problem size becomes larger, the due date becomes tighter, or the subcontracting cost becomes higher. For instance, scheme 5 has the same parameter values as scheme 1 except that it has a higher

subcontracting cost, and ACPSO yields a more obvious improvement in solution quality over CPSO. Taking scheme 3 as an example, Figure 4-5 shows the two algorithms' abilities to locate good production schedules within the same number of iterations. Other schemes exhibit similar characteristics.

[Figure: manufacturing cost (390000–435000) versus iteration (1–100) for CPSO and ACPSO]

Figure 4-5. Comparison of manufacturing cost after the same number of iterations.

Since ACPSO requires longer computation time than CPSO, it is fair to compare the performances of these two algorithms when the computation times are the same. Table 4-18 presents the comparative results. In this table, Tcpso represents the computation time required to run 100 iterations of CPSO.

       (1/3)Tcpso                  (1/2)Tcpso                  (2/3)Tcpso                  Tcpso
No.    CPSO     ACPSO    diff     CPSO     ACPSO    diff     CPSO     ACPSO    diff     CPSO     ACPSO    diff
1      472678   448052   5.21%    469182   446426   4.85%    468372   444787   5.04%    468217   442954   5.40%
2      833816   715262   14.22%   829466   712956   14.05%   828870   712002   14.10%   820299   709388   13.52%
3      417323   399950   4.16%    415743   398115   4.24%    415716   396172   4.70%    414363   394885   4.70%
4      932858   776065   16.81%   930592   774063   16.82%   927873   773761   16.61%   924830   773614   16.35%
5      441137   376301   14.70%   439028   375856   14.39%   437210   375856   14.03%   433641   375211   13.47%
6      1581547  1101152  30.38%   1581547  1096560  30.67%   1547606  1096560  29.14%   1539515  1096560  28.77%
7      660775   558257   15.51%   652895   558257   14.50%   649472   556147   14.37%   649248   554533   14.59%
8      1693976  1230625  27.35%   1685816  1204656  28.54%   1684419  1204086  28.52%   1662090  1195856  28.05%
9      483744   460605   4.78%    481004   460285   4.31%    481004   459659   4.44%    477689   458208   4.08%
10     623745   542595   13.01%   620642   541332   12.78%   619920   541045   12.72%   619920   536939   13.39%
11     417167   405981   2.68%    416235   404536   2.81%    415577   403988   2.79%    415425   401543   3.34%
12     767600   703406   8.36%    764737   699795   8.49%    761496   697572   8.39%    756463   697312   7.82%
13     508968   474828   6.71%    508968   474828   6.71%    508354   474828   6.60%    505410   473645   6.28%
14     1232411  950576   22.87%   1232411  948929   23.00%   1205580  948545   21.32%   1194671  947597   20.68%
15     585749   549451   6.20%    581716   546484   6.06%    579304   546484   5.67%    577593   544984   5.65%
16     886686   728982   17.79%   886686   726934   18.02%   875344   721334   17.59%   868841   721155   17.00%
17     1042662  921721   11.60%   1032595  918255   9.11%    1024098  916384   10.52%   1022037  912426   10.72%
18     2104357  1895806  9.91%    2099109  1886014  10.15%   2095443  1884990  10.04%   2086287  1879515  9.91%
19     1017954  885109   13.05%   1013822  879605   13.24%   1013822  879267   13.27%   1008681  878431   12.91%
20     1991979  1762427  11.52%   1990875  1752221  11.99%   1988908  1748768  12.07%   1987861  1747072  12.11%
21     2008440  1764365  12.15%   2003558  1752912  12.51%   1998568  1750609  12.41%   1978787  1744138  11.86%
22     3839833  3211066  16.37%   3783028  3211066  15.12%   3772922  3208302  14.97%   3768295  3194026  15.24%
23     1181389  857034   27.46%   1174215  855853   27.11%   1170361  845039   27.80%   1168358  843421   27.81%
24     3141889  2632814  16.20%   3128407  2583321  17.42%   3115840  2582619  17.11%   3114096  2675072  14.10%
25     840044   779248   7.24%    840044   777384   7.46%    837755   776830   7.27%    836167   775620   7.24%
26     1834291  1515082  17.40%   1826357  1511788  17.22%   1823662  1506387  17.40%   1806947  1502848  16.83%
27     790236   744909   5.74%    788611   743180   5.76%    788611   742388   5.86%    787381   741513   5.83%
28     1950348  1578866  19.05%   1946588  1575578  19.06%   1930768  1574750  18.44%   1926466  1574750  18.26%
29     934960   716204   23.40%   929394   708633   23.75%   929394   708633   23.75%   926930   705514   23.89%
30     2292957  1723279  24.84%   2272545  1719103  24.35%   2245191  1715190  23.61%   2224827  1709976  23.14%
31     700873   648029   7.54%    695718   647023   7.00%    694831   645426   7.11%    694003   645426   7.00%
32     2860449  2330539  18.53%   2860449  2316164  19.03%   2855843  2312786  19.02%   2840949  2305190  18.86%
33     1280113  1162654  9.18%    1278300  1159913  9.26%    1273301  1155971  9.21%    1268827  1153841  9.06%
34     1843221  1573570  14.63%   1839774  1569449  14.69%   1836689  1566321  14.72%   1835647  1557480  15.15%
35     1027241  975677   5.02%    1023091  975110   4.69%    1017605  973634   4.32%    1017605  973467   4.34%
36     1902537  1628544  14.40%   1882907  1622927  13.81%   1880583  1619530  13.88%   1879493  1618777  13.87%
37     1044920  940638   9.98%    1043413  937200   10.18%   1039933  935861   10.01%   1031420  933255   9.52%
38     2457510  1761062  28.34%   2448097  1733120  29.21%   2444605  1731302  29.18%   2437341  1731302  28.97%
39     879772   839542   4.57%    879772   838851   4.65%    879707   838851   4.64%    878230   836691   4.73%
40     2527902  2153259  14.82%   2523720  2151373  14.75%   2523720  2140681  15.18%   2505146  2133961  14.82%
41     2004926  1760222  12.21%   2004053  1753726  12.49%   1998493  1753726  12.25%   1993579  1740602  12.69%
42     3993252  3614335  9.49%    3985775  3610731  9.41%    3978275  3609858  9.26%    3964986  3606888  9.03%
43     1959139  1726129  11.89%   1954758  1725171  11.75%   1950228  1723904  11.61%   1940489  1723658  11.17%
44     3953318  3545615  10.31%   3948407  3545615  10.20%   3941194  3545615  10.04%   3937615  3543558  10.01%
45     2701924  2126238  21.31%   2694351  2113081  21.57%   2680351  2105363  21.45%   2666383  2084108  21.84%
46     7457324  6388962  14.33%   7455241  6377626  14.45%   7421063  6353452  14.39%   7421063  6353452  14.39%
47     2585204  1963459  24.05%   2581003  1962449  23.97%   2564194  1962334  23.47%   2557149  1948860  23.79%
48     6969007  5859670  15.92%   6967415  5859670  15.90%   6961576  5856788  15.87%   6961576  5837686  16.14%

Table 4-18. Comparison of manufacturing cost after the same computational time.

The comparative results in Table 4-18 show that ACPSO performs better at locating good scheduling solutions even when the computation times are equal. The improvement in solution quality becomes more obvious as the problem size increases, the due date becomes tighter, or the subcontracting cost becomes higher. Hence, ACPSO is suitable for solving real-world industrial problems with large problem sizes.

4.6 Chapter Summary

The VCMS production scheduling research has been extended to a multi-period manufacturing environment in this chapter. A new mathematical model has been developed to formulate the production schedules for VCMSs operating in a multi-period situation. This mathematical model considers workforce requirements and includes a great variety of constraints, such as the production capacities of the various resources and product delivery deadlines. The objective of the model is to minimize the total manufacturing cost related to the production schedules, including machine operating cost, material transportation cost, workers' salaries, worker training cost, inventory-holding cost, and subcontracting cost. To facilitate the understanding of the characteristics of the production schedules for VCMSs operating in a multi-period situation, an example has been provided to explicitly demonstrate the formation of virtual manufacturing cells, the worker training scheme, the inventory-holding plan, and the subcontracting plan within the planning horizon. ACPSO has been adopted to solve the complex production scheduling problems and to provide guidance for the formation of virtual manufacturing cells. The experimental results show that ACPSO yields strong performance in generating good production schedules for VCMSs operating in multi-period situations. In addition, the factors affecting the worker training level have been analyzed so that manufacturers can manage their workforce more effectively.


CHAPTER 5 PRODUCTION SCHEDULING IN VIRTUAL CELLULAR MANUFACTURING SYSTEMS UNDER DYNAMIC MANUFACTURING ENVIRONMENTS

5.1 Introduction

Production scheduling, i.e., the allocation of production resources to competing jobs over a period of time, is a decision-making process with the goal of optimizing one or more objectives such as job completion time, mean flow time, or job tardiness (Pinedo, 1995). A production schedule released to the production floor has two major functions. First, it allocates the limited production resources to different jobs to optimize some measure of shop performance. Second, it provides a basis for planning related external activities such as material procurement, preventive maintenance of machines, and delivery of orders to customers. The VCMS production scheduling research was conducted in static and deterministic manufacturing environments in the previous chapters, where all information was known in advance and no disruptions occurred during production schedule execution. It is relatively easy under these conditions to develop a predictive production schedule for a fixed planning window, based predominantly on the shop-floor status and the requirements of current orders. Such production scheduling theories are very well developed. It cannot be overstated, however, that in practice there are only "rescheduling" rather than "scheduling" problems, due to the great variety of disruptions occurring on the production floor. Machine breakdowns, worker absenteeism, dynamic job arrivals, and changes in production volume can all frustrate the best-laid plans. Managers and employees in dynamic manufacturing environments must be able to respond rapidly and effectively to unexpected events. That is, when unforeseen disruptions occur, they need to take suitable actions to maintain the overall feasibility of the current production


schedule or improve shop performance. Thus, it is important to fill the gap between scheduling theory and scheduling practice. Many scholars have conducted research in the field of dynamic production scheduling. However, most of their research is confined to simple manufacturing environments (such as the single-machine situation, the parallel-machine situation, and the job shop) due to the complexity of production rescheduling problems. Dynamic production scheduling research will be extended to VCMSs in this chapter, and some effective rescheduling strategies will be introduced to deal with the various disruptions occurring on the production floor.

5.2 Dynamic Production Scheduling in VCMSs with Random Machine Breakdowns and Worker Absenteeisms

The literature review chapter summarized that uncertainties in a manufacturing environment are often classified into three categories: complete unknowns, suspicions about the future, and known uncertainties. Usually only known uncertainties are considered in the field of dynamic production scheduling, as their frequencies and durations may be predicted by probability distributions. Machine breakdowns and worker absenteeism are two classic known uncertainties. An assumption made in this research is that when a machine breakdown or worker absence occurs, its repair or absence duration can be estimated precisely from historical experience. That is, when a machine or worker suffers a setback, its return-to-duty date is known exactly. Two key issues in the field of dynamic production scheduling are when and how to deal with real-time disruptions. Whether the problem is investigated in a single-period or multi-period situation is not crucial to this analysis. In order to reduce the computational effort, the research in this section is based on the mathematical model for VCMSs operating in a single-period situation developed in Chapter 3. Generally speaking, the more complex the manufacturing system is, the more easily its production schedule will be disturbed, because a more complex system is burdened by more and tighter constraints. Mehta and Uzsoy (1998) proposed a predictable scheduling

approach to deal with production scheduling problems with random machine breakdowns. Suwa and Sandoh (2007) proposed a cumulative task delay based rescheduling strategy to handle job shop scheduling problems with random machine breakdowns. A novel rescheduling strategy combining the advantages of these two approaches will be proposed in this chapter to study VCMS production scheduling problems with random machine breakdowns and worker absenteeisms.

5.2.1 VCMS characteristics

Constraints (3-2) and (3-3) in the mathematical model provide that some workstations or workers may have a certain amount of idle time in certain time slices. A simple example illustrates this characteristic. A job consists of three operations (operations 1, 2, and 3), its production route is A → B → C, its production volume is 10, and the processing times per unit of these three operations are 20, 30, and 40 seconds respectively. The length of a time slice is 300 seconds. The production resource assignment of this job is presented in Table 5-1, and no other job has been assigned to these production resources.

Operation no.        1    2    3
Workstation no.      3    4    5
Worker no.           2    4    6
Processing time (s)  20   30   40

Table 5-1. Production resource assignment for the job.

According to the constraints in the mathematical model, the processing rates of operation 1 in time slice 1, operation 2 in time slice 2, and operation 3 in time slice 3 are all seven units. Thus workstation 3 and worker 2 have 160 seconds of idle time in time slice 1, workstation 4 and worker 4 have 90 seconds of idle time in time slice 2, and workstation 5 and worker 6 have 20 seconds of idle time in time slice 3. That is, VCMSs naturally generate some idle time on some production resources in some time slices due to the processing rate constraints, and this idle time can be used to help absorb the impact of disruptions.
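The idle-time figures above follow directly from the processing-rate reasoning: with no work-in-process inventory allowed between workstations, each operation's per-slice rate is capped by the slowest workstation on the route. A minimal Python sketch of this arithmetic (the variable names are illustrative, not part of the thesis model):

```python
slice_length = 300                # seconds per time slice
unit_times = [20, 30, 40]         # per-unit processing times of operations 1-3

# With no work-in-process inventory between workstations, the common
# processing rate is limited by the slowest operation on the route.
rate = min(slice_length // t for t in unit_times)
print(rate)                       # 300 // 40 = 7 units per slice

# Idle time of each workstation/worker pair in the slice it processes the job.
idle = [slice_length - rate * t for t in unit_times]
for op, seconds in enumerate(idle, start=1):
    print(f"operation {op}: idle {seconds} s")
# operation 1: 160 s, operation 2: 90 s, operation 3: 20 s
```

This reproduces the 160, 90, and 20 seconds of idle time stated above.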


5.2.2 Right-shift policy for VCMSs

The right-shift policy is a popular strategy commonly used by many scholars to study dynamic production scheduling problems. It is easy to implement and can quickly modify a production schedule. In simple manufacturing systems such as the single machine, parallel machines, and the job shop, the principle of the right-shift policy is to delay the starting times of all unfinished tasks by the length of the downtime. This method is not feasible for VCMSs, however, due to the rigorous constraints (3-2) and (3-3). A revised version of the right-shift policy is developed herein for VCMSs to ensure that production schedules adjusted through this policy still satisfy all of the constraints in the mathematical model. When a disruption occurs on the production floor, the remaining tasks of each job type can be classified into two parts according to the processing state: (1) the set of the disrupted operations of this job type and their subsequent operations, and (2) the other remaining tasks. Here a disrupted operation means an operation that is being processed when the disruption occurs, and the subsequent operations of a disrupted operation are the further operations required for completing one unit of this job. Two aspects of this division are worth emphasizing. First, if a particular job type is not being processed at this time, the first part of its remaining tasks is null. Second, when disruptions occur, the remaining tasks of each job type should be compiled according to its processing state, regardless of whether the job is processed on the disrupted production resource, so as to facilitate the modification of the production schedule. This is because any job may be affected directly or indirectly by a disruption occurring on any of the production resources, due to the sharing of production resources among different virtual manufacturing cells. The example in Table 5-1 can be used again to facilitate understanding of these two parts of remaining tasks.
The production outputs of this job in the predictive schedule are listed in Table 5-2. In the predictive schedule, workstation 3 will produce seven units of operation 1 in time slice 1, from time point 0 to 140. If a breakdown occurs on workstation 3 (or any one of the production resources) at time point 90 in time slice 1, then four units of operation 1 have been finished and one unit is disrupted at this time. Hence, when this disruption occurs, the remaining tasks of this job include the following two parts. The first is the disrupted operation and its subsequent operations (one unit of A → B → C), where the first operation is disrupted and its remaining processing time is 10 seconds. The second is the other remaining tasks, including five units of operation 1, nine units of operation 2, and nine units of operation 3.

Operation no.       1    2    3
Workstation no.     3    4    5
Worker no.          2    4    6
Output in slice 1   7    0    0
Output in slice 2   3    7    0
Output in slice 3   0    3    7
Output in slice 4   0    0    3

Table 5-2. Job production outputs.

This research assumes that the system will first complete the disrupted operations on their originally assigned production resources in the production schedule modification process. Hence, the first part of remaining tasks has a higher production priority than the second part. In order to ensure that the new production schedule modified through the right-shift policy satisfies all VCMS constraints, a new concept called "sub-job" is introduced. A job with n operations op1 → op2 → ... → opn has n types of sub-jobs, the i-th type of which starts from the i-th operation and contains all of its subsequent operations. For instance, the first type of sub-job is the entire job itself, and the n-th type of sub-job contains only the last operation. Based on this concept, the remaining tasks in the second part can be classified into a number of sub-jobs. Let v_i denote the volume of the i-th operation in the second part of remaining tasks; then the volume of the i-th type of sub-job, denoted sv_i, can be calculated through Equation (5-1):

sv_i = v_i                  if i = 1
sv_i = v_i − v_{i−1}        if 1 < i ≤ n        (5-1)

For instance, the second part of remaining tasks in the aforementioned example can be decomposed into two types of sub-jobs: four units of B → C and five units of A → B → C. The scheduling of the remaining tasks in the second part can be accomplished by scheduling these sub-jobs in order.
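Equation (5-1) can be checked on the running example. A minimal Python sketch (the function name is mine, introduced for illustration):

```python
def subjob_volumes(v):
    """Decompose remaining operation volumes v[0..n-1] into sub-job volumes
    per Equation (5-1): sv_1 = v_1, and sv_i = v_i - v_{i-1} for i > 1."""
    return [v[0]] + [v[i] - v[i - 1] for i in range(1, len(v))]

# Second part of remaining tasks in the example:
# 5 units of operation 1, 9 of operation 2, 9 of operation 3.
print(subjob_volumes([5, 9, 9]))  # [5, 4, 0]
```

The result [5, 4, 0] matches the decomposition above: five units of the full sub-job A → B → C, four units of B → C, and no sub-job consisting of C alone.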

The procedure of the revised right-shift policy for VCMSs is presented in Figure 5-1. In the revised right-shift policy, the production priority and production resource assignment of all jobs are kept the same as those in the predictive schedule. The production schedule adjusted through this policy is called the right-shift schedule.

Update the remaining capacities of production resources
While the job production sequence list is not empty do
    Pick the first job in the production sequence
    Step 1. Compile the two parts of remaining tasks of this job
    Step 2. Schedule the first part of remaining tasks
    Step 3. Schedule the second part of remaining tasks
    Remove this job from the job sequence list
End-while

Figure 5-1. The revised right-shift policy procedure for VCMSs.

As each disrupted operation with its subsequent operations in the first part of remaining tasks is actually a special sub-job, the procedure for scheduling the first part of remaining tasks is almost the same as that for scheduling the second part. The special characteristics of each element in the first part are that the disrupted operation has been partially processed and that its volume is one. Figure 5-2 shows the pseudo-code for scheduling the remaining tasks, taking step 3 of the right-shift procedure as an example. This step is accomplished by scheduling a number of sub-jobs. In Figure 5-2, pre[K_j][PH] and rs[K_j][PH] are two arrays: pre[i][p] denotes the volume of operation i of job j in the remaining tasks scheduled up to time slice p in the current schedule, and rs[i][p] represents the volume of operation i of job j scheduled up to time slice p in the right-shift schedule. The processing rate of a job in a time slice in the right-shift schedule is determined by two aspects. First, it cannot exceed the capacity of the production resources, and it must satisfy the requirement of no work-in-process inventory between workstations. Second, rs[i][p] cannot exceed pre[i][p] for any operation and any time slice, because of the right-shift mechanism: compared to the current schedule, jobs in the right-shift schedule can only be delayed, never advanced. The statement minPR = min(minPR, PR_{j,i,q+i-i*}) in the procedure enforces the first requirement, and the statement minPR = min(minPR, pre[i][q+i-i*] - rs[i][q+i-i*]) guarantees the second. Once an operation has been scheduled in the right-shift schedule, the rs[K_j][PH] array is updated. Suppose it is now time slice s; the procedure first calculates pre[K_j][PH] and rs[K_j][PH].

While the set of sub-jobs of this job (suppose this job is type j) not null do Pick the first sub-job in the set, find the starting operation of this sub-job i* , and its volume svi* .     Set q=s, RemQty= svi*  

While RemQty>0 and q  DD j  K j  i*  do minPR= RemQty

for i= i* to  K j  

                         Find maximum PR j ,i , q  i 1 min PR  min(min PR, PR j ,i , q  i 1 )

//restrict processing rate according to resource capacity

end-for for i= i* to  K j  

                          min PR  min(min PR, pre[i][ q  i  i* ]  rs[i][ q  i  i* ]) end-for

//restrict processing rate according to right-shift mechanism

if minPR>0 then PRj,i,q+i-l= minPR, for i= i* to  K j  

                          RemQty= RemQty- minPR Update the remaining capacity of production resources and  rs[ K j ][ PH ] . 

                 End-if q=q+1

end-while Subcontract the remaining RemQty volume of this job, and remove this sub-job from the set

End-while

Figure 5-2. The heuristic for determining production outputs of remaining tasks. 5‐7   

5.2.3 The proposed VCMS rescheduling policy
Mehta and Uzsoy (1998) proposed a predictable scheduling approach to deal with production scheduling problems with random machine breakdowns. In their approach, a certain amount of idle time is inserted into the production schedule to absorb the impact of machine breakdowns. The key to making this methodology work is to insert the proper amount of idle time at the proper positions, which is a very difficult task for VCMSs because of their rigorous constraints. Fortunately, as shown in Section 5.2.1, some production resources in VCMSs may have a certain amount of natural idle time, which can help absorb the impact of disruptions. Suwa and Sandoh (2007) developed a novel when-to-schedule policy based on cumulative task delay, which serves as the measure for determining suitable rescheduling time points in job shop scheduling problems with random machine breakdowns. Their policy can be briefly introduced as follows. In a standard job shop scheduling problem, a certain number of jobs must be processed on a fixed set of machines. Each machine can process only one job at a time, and preemption is not allowed. Let φ_jk denote the task of job j to be processed on machine k. The production route of each job, the due date of each job, and the processing time p_jk of task φ_jk are deterministic and known. Let S_0 and H denote the predictive schedule starting at time point zero and the planning horizon of the job shop scheduling problem, respectively. When disruptions occur, the current production schedule becomes infeasible and is then modified by means of some suitable rescheduling method. During the execution of the existing schedule, inspections are performed in order to detect schedule delays at planned times τ_i (τ_i = iΔ, i = 1, 2, ..., M) over the period (0, H], where Δ is the inspection time interval. For a planned inspection time τ_i, τ_[i] is used to denote the inspection time at which rescheduling was actually performed most recently before τ_i. In addition, S_i^A denotes the realized schedule over the period (τ_{i-1}, τ_i], and S_i^P signifies the predictive schedule over the same time period; obviously, the predictive schedule S_i^P results from the scheduling performed at τ_[i]. The delay of a task in S_i^A is defined as the difference between the completion time of this task in the realized schedule and that in the predictive schedule. For example, the delay of task φ_jk in S_i^A is defined as δ_jk(S_i^A) = max[C_jk(S_i^A) - C_jk(S_i^P), 0], where C_jk(s) signifies the completion time of task φ_jk in schedule s. The cumulative task delay at inspection time τ_i is defined as the sum of all of the task delays since time point τ_[i]. Rescheduling is performed if the cumulative task delay exceeds a pre-specified threshold, called the "critical cumulative task delay", at any planned inspection point.

The aforementioned cumulative-task-delay-based rescheduling policy has two glaring deficiencies. First, the selection of suitable inspection time points is difficult. Suwa and Sandoh (2007) fixed the inspection time interval in advance and distributed inspection time points evenly over the planning horizon, but this may not be the best choice. Intuitively, a good method of selecting inspection points should satisfy at least the following requirement: the inspection frequency should be higher in periods with more, or more severe, disruptions. Second, their response to disruptions is delayed and inopportune. For example, a severe disruption may occur immediately after an inspection point, but the system must wait until the next inspection point to perform rescheduling, which can significantly deteriorate system performance. To overcome these deficiencies, a revised rescheduling policy based on cumulative task delay and the characteristics of VCMSs is proposed for VCMS production scheduling problems subject to random machine breakdowns and worker absenteeism. The proposed VCMS rescheduling policy determines, based on the cumulative task delay when disruptions occur, whether to reschedule all unfinished jobs or to adopt the right-shift policy. Suppose a disruption occurs at time point t in time slice s. Let (s, t) express this time point, τ_[s,t] denote the time point at which rescheduling was most recently performed before this time, S^P_{s,t} represent the current predictive schedule, and S^R_{s,t} indicate the right-shift schedule obtained through the right-shift policy. Other notations used in the policy are listed in Table 5-3.


| Notation | Meaning |
|---|---|
| V_{j,i,s}^P | The unfinished volume of operation i of job j in time slice s in the predictive schedule S^P_{s,t} |
| V_{j,i,s}^R | The unfinished volume of operation i of job j in time slice s in the right-shift schedule S^R_{s,t} |
| SV_j^P | The subcontracting volume of job j in the schedule S^P_{s,t} |
| SV_j^R | The subcontracting volume of job j in the schedule S^R_{s,t} |
| D* | The critical cumulative task delay |

Table 5-3. Notations used in the proposed policy.

The task delay d_{s,t} caused by the disruption occurring at time point t in time slice s is defined in Equation (5-2):

d_{s,t} = \sum_{j=1}^{N} \sum_{i=1}^{K_j} (V_{j,i,s}^{R} - V_{j,i,s}^{P}) \times p + \sum_{j=1}^{N} DD_j \times K_j \times |SV_j^{R} - SV_j^{P}|        (5-2)

The task delay caused by a disruption quantifies its impact level; its value equals the sum of the delayed time slices over all of the unfinished operations. If the disruption occurs on a production resource to which more jobs have been assigned, more operations will be affected and the impact will be more severe. The longer the disruption lasts, the further the affected operations are delayed, and thus the greater the impact. The cumulative task delay at time point t of time slice s is calculated through Equation (5-3):

D_{s,t} = \sum_{\tau_{[s,t]} < (s',t') \le (s,t)} d_{s',t'}        (5-3)
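Under the reconstruction of Equations (5-2) and (5-3) used here, the delay bookkeeping can be sketched as follows. The data layout and the volume-to-time conversion factor `p` are assumptions for illustration, not the thesis code.

```python
def task_delay_of_disruption(v_rs, v_pre, sv_rs, sv_pre, due, n_ops, p=1.0):
    """d_{s,t}: impact of one disruption, measured as the delayed time
    slices of all unfinished operations plus a subcontracting term.

    v_rs[j][i] / v_pre[j][i] -- unfinished volume of operation i of job j in
    the right-shift / predictive schedule; sv_* -- subcontracting volumes.
    """
    d = 0.0
    for j in range(len(v_rs)):
        for i in range(len(v_rs[j])):
            # Extra unfinished volume in the right-shift schedule = delay.
            d += (v_rs[j][i] - v_pre[j][i]) * p
        d += due[j] * n_ops[j] * abs(sv_rs[j] - sv_pre[j])
    return d

def cumulative_task_delay(delays_since_last_reschedule):
    """D_{s,t}: sum of task delays since the last actual rescheduling point."""
    return sum(delays_since_last_reschedule)
```

Whenever rescheduling is performed, the list of accumulated delays is cleared, matching the reset of D_{s,t} to zero in the policy below.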

It is clear that the cumulative task delay at time point t of time slice s is the sum of all the task delays caused by the disruptions that occurred from τ_[s,t] up to this time point. The procedure of the proposed rescheduling policy is presented in Figure 5-3 below. Inspection time points are determined in real time in this policy. Once a disruption occurs,

its impact is evaluated and the cumulative task delay is calculated. If the cumulative task delay exceeds the critical cumulative task delay, rescheduling is performed immediately and the cumulative task delay is reset to zero; otherwise, the system employs the revised right-shift policy to modify the current production schedule and adopts the modified schedule as the new predictive schedule. This process continues until all of the jobs have been finished or the due date of every job has expired.

    Step 1. Generate a predictive schedule. Set s = 1, t = 1, D_{s,t} = 0.
    Step 2. Check whether any disruption occurs at this time. If yes, go to Step 3; otherwise, set t = t + 1 and go to Step 7.
    Step 3. Obtain S^R_{s,t} through the right-shift policy and calculate D_{s,t}. If D_{s,t} > D*, go to Step 5; otherwise, go to Step 4.
    Step 4. Deal with this disruption using the right-shift policy; take S^R_{s,t} as the new predictive schedule.
    Step 5. Reschedule all unfinished jobs to generate a new predictive schedule, and set D_{s,t} = 0.
    Step 6. Set t = t + 1. If t <= PL, go to Step 2; otherwise, go to Step 7.
    Step 7. Check whether all jobs have been finished or the due date of every job has expired. If yes, stop; otherwise, check whether t <= PL: if so, go to Step 2; if not, go to Step 8.
    Step 8. Set s = s + 1, t = 1, and go to Step 2.

Figure 5-3. The procedure of the proposed rescheduling policy.

The proposed rescheduling policy has the following characteristics. First, the task delay in the proposed policy evaluates the impact of a disruption based on the number of delayed time slices of all unfinished operations, rather than on the difference between the completion times of the jobs in the predictive schedule and the realized schedule. This is better suited to the mathematical model, in which the planning horizon is divided into a number of time slices and no work-in-process inventory is allowed between workstations. Second, it is not necessary to specify inspection time points in advance: all rescheduling decisions are made in real time. More specifically, activities such as right-shifting or rescheduling are performed only when disruptions occur, which guarantees that responses to disruptions are timely. Third, this policy deals with disruptions from

a cumulative point of view. Many scholars have adopted event-driven policies to deal with dynamic scheduling problems, in which a disruption is ignored if it is not recognized as urgent. Under such policies, even a large number of slight disruptions would not trigger any rescheduling action if none were individually recognized as urgent, even when the cumulative impact of all of the disruptions was severe. The newly proposed policy accumulates the impact of all disruptions, whether individually urgent or not, so its impact assessment is far more precise.

The example in Table 5-2 can be used here once again to demonstrate the function of idle time on production resources. In this example, suppose the disruption occurs on workstation 2, which has 160 seconds of idle time in time slice 1. If the duration of the breakdown occurring at time point 90 is less than 160 seconds, all of the jobs assigned to this machine in this time slice can still be finished. The only difference between S^P_{s,t} and S^R_{s,t} is that the starting times of the remaining operations assigned to the broken machine in this time slice are delayed by the length of the downtime; the variables V^R_{j,i,s} and V^P_{j,i,s} are identical under the right-shift policy. Hence, no task delay is caused by this disruption: its impact is completely absorbed by the idle time. If the disruption lasts longer than 160 seconds, its impact is still mitigated, though not fully absorbed, by the idle time.

5.2.4 Methodology of generating predictive schedules
A predictive schedule is generated at the beginning of the planning horizon to minimize the total manufacturing cost, without considering possible disruptions. Whenever the cumulative task delay exceeds the critical cumulative task delay, a new predictive schedule must be regenerated. If the sole objective of the rescheduling process were also to minimize manufacturing cost, as in the pure predictive-reactive approach, the new production schedule might deviate significantly from the current one; this may disturb other related external activities and thus incur extra manufacturing cost. Therefore, the robust predictive-reactive scheduling approach, which considers efficiency and stability simultaneously, is adopted to regenerate new predictive schedules in this research. The objective function used during the rescheduling process is presented in Equation (5-4):

Minimize  \rho \times MC + (1 - \rho) \times \frac{D}{MD} \times MC_P        (5-4)

In this equation, D = \sum_{j=1}^{N} \sum_{i=1}^{K_j} \sum_{r=s}^{DD_j} |(V_{j,i,r}^{R} - V_{j,i,r}^{P}) \times p| + \sum_{j=1}^{N} DD_j \times K_j \times |SV_j^{R} - SV_j^{P}|, where all notations have the same meanings as those in Table 5-3, except that S^R_{s,t} here denotes the newly generated schedule. The variable ρ, signifying the efficiency weight, is used to balance the importance of efficiency and stability. MC_P denotes the manufacturing cost of all of the unfinished jobs in the original predictive schedule. The variable D measures the difference between the new production schedule and the original one. MD

represents the potential maximum task delay and is calculated by MD = N × E(K) × E(D) × E(V), where E(K) is the expected number of operations of a job, E(D) is the expected job due date, and E(V) is the expected job production volume.

The effective hybrid ACPSO algorithm is adopted as the optimization tool to generate and regenerate predictive schedules. In the dynamic situation, the maximum number of ACPSO iterations is capped at 30, for two reasons. First, the rescheduling response to disruptions in a real manufacturing environment must be rapid, so the computation time for generating new production schedules must be limited. Second, the quality of the production schedule obtained after 30 ACPSO iterations is nearly the same as that obtained after 100 iterations. Hence, a maximum of 30 iterations is adequate in terms of both efficiency and effectiveness.

5.2.5 Computational experiments and results The performance of the proposed rescheduling policy is evaluated based on a set of randomly generated test problems.


1. Manufacturing system and job information
The test problems consider two system configurations: the first contains 12 machines and 10 workers; the second contains 20 machines and 20 workers. The number of jobs takes two values: 5 and 10. Each job consists of three or four operations, whose processing times are randomly generated from the discrete uniform distribution [20, 40]. The production volume of each job is randomly generated from the discrete uniform distribution [30, 50], and the subcontracting cost of each job from the discrete uniform distribution [500, 1000]. The length of each time slice is 300 seconds. The due date of each job is generated from the discrete uniform distribution DD_j ~ (μ - Rμ/2, μ + Rμ/2), where μ = (1.0 + T) E(Cmax) / η. Here, the variable R determines the due date range, the variable T determines the tightness of the due dates, the variable E(Cmax) represents the expected makespan of the problem instance and is calculated as the expected total processing time divided by min(n_w, n_l) × PL, and the variable η denotes the production resource utilization.
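A sketch of this due-date generator, under the parameterization reconstructed above (function and variable names are illustrative, not from the thesis):

```python
import random

def draw_due_date(T, R, expected_cmax, eta, rng):
    """DD_j ~ discrete uniform on (mu - R*mu/2, mu + R*mu/2),
    where mu = (1.0 + T) * E(Cmax) / eta."""
    mu = (1.0 + T) * expected_cmax / eta
    lo = int(round(mu - R * mu / 2.0))
    hi = int(round(mu + R * mu / 2.0))
    return rng.randint(lo, hi)  # inclusive discrete uniform draw
```

A larger R widens the due-date window around mu, while T and the utilization eta jointly set how much slack the window has relative to the expected makespan.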

In the test problems, the variable T takes two values (0.2 and 0.4), the variable R takes two values (0.5 and 0.8), and the variable η is set to 0.5 (as in the previous experiments).

2. Disruption instance
All machines are subject to possible breakdowns, just as all workers are subject to possible absenteeism. The working durations between two consecutive breakdowns of a single machine (or two consecutive absences of a worker) are independent and identically distributed according to an exponential distribution with parameter λ. Two λ values are considered herein: 0.001 and 0.002; a higher λ value indicates more frequent disruptions. Machine repair times (or worker absence durations) are independent and identically distributed according to the uniform distribution (β1·p, β2·p), where p is the mean processing time of an operation (i.e., 30 seconds in the test problems). In this research, (β1, β2) assumes three values: (0.5, 1), (1, 2), and (2, 4); a higher (β1, β2) value means that the disruptions are more severe.
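The disruption process for one machine (or worker) described above can be sketched as follows; the function name and event representation are assumptions for illustration:

```python
import random

def disruption_timeline(horizon, lam, beta1, beta2, p_mean=30.0, seed=1):
    """Breakdown (start, duration) events for one resource over [0, horizon):
    exponential working durations with rate lam between failures, and
    uniform repair times on (beta1 * p_mean, beta2 * p_mean)."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam)  # working duration until the next failure
        if t >= horizon:
            return events
        duration = rng.uniform(beta1 * p_mean, beta2 * p_mean)
        events.append((t, duration))
        t += duration              # resource unavailable while being repaired
```

With lam = 0.002 and (beta1, beta2) = (1, 2), the mean up-time is 500 seconds and each repair lasts between 30 and 60 seconds, matching the most disruption-prone settings of the test problems.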

Based on the system frameworks and the disruption instances stated above, 72 schemes are adopted in this research for generating test problems. They are listed in Table 5-4.

| Scheme no. | (Machines, workers) | No. of jobs | T | R | λ | (β1, β2) |
|---|---|---|---|---|---|---|
| 1 | (12, 10) | 10 | 0.2 | 0.8 | 0.002 | (0.5, 1) |
| 2 | (12, 10) | 10 | 0.2 | 0.8 | 0.002 | (1, 2) |
| 3 | (12, 10) | 10 | 0.2 | 0.8 | 0.002 | (2, 4) |
| 4 | (12, 10) | 10 | 0.2 | 0.8 | 0.001 | (0.5, 1) |
| 5 | (12, 10) | 10 | 0.2 | 0.8 | 0.001 | (1, 2) |
| 6 | (12, 10) | 10 | 0.2 | 0.8 | 0.001 | (2, 4) |
| 7 | (12, 10) | 10 | 0.4 | 0.8 | 0.002 | (0.5, 1) |
| 8 | (12, 10) | 10 | 0.4 | 0.8 | 0.002 | (1, 2) |
| 9 | (12, 10) | 10 | 0.4 | 0.8 | 0.002 | (2, 4) |
| 10 | (12, 10) | 10 | 0.4 | 0.8 | 0.001 | (0.5, 1) |
| 11 | (12, 10) | 10 | 0.4 | 0.8 | 0.001 | (1, 2) |
| 12 | (12, 10) | 10 | 0.4 | 0.8 | 0.001 | (2, 4) |
| 13 | (12, 10) | 10 | 0.2 | 0.5 | 0.002 | (0.5, 1) |
| 14 | (12, 10) | 10 | 0.2 | 0.5 | 0.002 | (1, 2) |
| 15 | (12, 10) | 10 | 0.2 | 0.5 | 0.002 | (2, 4) |
| 16 | (12, 10) | 10 | 0.2 | 0.5 | 0.001 | (0.5, 1) |
| 17 | (12, 10) | 10 | 0.2 | 0.5 | 0.001 | (1, 2) |
| 18 | (12, 10) | 10 | 0.2 | 0.5 | 0.001 | (2, 4) |
| 19 | (12, 10) | 10 | 0.4 | 0.5 | 0.002 | (0.5, 1) |
| 20 | (12, 10) | 10 | 0.4 | 0.5 | 0.002 | (1, 2) |
| 21 | (12, 10) | 10 | 0.4 | 0.5 | 0.002 | (2, 4) |
| 22 | (12, 10) | 10 | 0.4 | 0.5 | 0.001 | (0.5, 1) |
| 23 | (12, 10) | 10 | 0.4 | 0.5 | 0.001 | (1, 2) |
| 24 | (12, 10) | 10 | 0.4 | 0.5 | 0.001 | (2, 4) |
| 25 | (12, 10) | 5 | 0.2 | 0.8 | 0.002 | (0.5, 1) |
| 26 | (12, 10) | 5 | 0.2 | 0.8 | 0.002 | (1, 2) |
| 27 | (12, 10) | 5 | 0.2 | 0.8 | 0.002 | (2, 4) |
| 28 | (12, 10) | 5 | 0.2 | 0.8 | 0.001 | (0.5, 1) |
| 29 | (12, 10) | 5 | 0.2 | 0.8 | 0.001 | (1, 2) |
| 30 | (12, 10) | 5 | 0.2 | 0.8 | 0.001 | (2, 4) |
| 31 | (12, 10) | 5 | 0.4 | 0.8 | 0.002 | (0.5, 1) |
| 32 | (12, 10) | 5 | 0.4 | 0.8 | 0.002 | (1, 2) |
| 33 | (12, 10) | 5 | 0.4 | 0.8 | 0.002 | (2, 4) |
| 34 | (12, 10) | 5 | 0.4 | 0.8 | 0.001 | (0.5, 1) |
| 35 | (12, 10) | 5 | 0.4 | 0.8 | 0.001 | (1, 2) |
| 36 | (12, 10) | 5 | 0.4 | 0.8 | 0.001 | (2, 4) |
| 37 | (12, 10) | 5 | 0.2 | 0.5 | 0.002 | (0.5, 1) |
| 38 | (12, 10) | 5 | 0.2 | 0.5 | 0.002 | (1, 2) |
| 39 | (12, 10) | 5 | 0.2 | 0.5 | 0.002 | (2, 4) |
| 40 | (12, 10) | 5 | 0.2 | 0.5 | 0.001 | (0.5, 1) |
| 41 | (12, 10) | 5 | 0.2 | 0.5 | 0.001 | (1, 2) |
| 42 | (12, 10) | 5 | 0.2 | 0.5 | 0.001 | (2, 4) |
| 43 | (12, 10) | 5 | 0.4 | 0.5 | 0.002 | (0.5, 1) |
| 44 | (12, 10) | 5 | 0.4 | 0.5 | 0.002 | (1, 2) |
| 45 | (12, 10) | 5 | 0.4 | 0.5 | 0.002 | (2, 4) |
| 46 | (12, 10) | 5 | 0.4 | 0.5 | 0.001 | (0.5, 1) |
| 47 | (12, 10) | 5 | 0.4 | 0.5 | 0.001 | (1, 2) |
| 48 | (12, 10) | 5 | 0.4 | 0.5 | 0.001 | (2, 4) |
| 49 | (20, 20) | 10 | 0.2 | 0.8 | 0.002 | (0.5, 1) |
| 50 | (20, 20) | 10 | 0.2 | 0.8 | 0.002 | (1, 2) |
| 51 | (20, 20) | 10 | 0.2 | 0.8 | 0.002 | (2, 4) |
| 52 | (20, 20) | 10 | 0.2 | 0.8 | 0.001 | (0.5, 1) |
| 53 | (20, 20) | 10 | 0.2 | 0.8 | 0.001 | (1, 2) |
| 54 | (20, 20) | 10 | 0.2 | 0.8 | 0.001 | (2, 4) |
| 55 | (20, 20) | 10 | 0.4 | 0.8 | 0.002 | (0.5, 1) |
| 56 | (20, 20) | 10 | 0.4 | 0.8 | 0.002 | (1, 2) |
| 57 | (20, 20) | 10 | 0.4 | 0.8 | 0.002 | (2, 4) |
| 58 | (20, 20) | 10 | 0.4 | 0.8 | 0.001 | (0.5, 1) |
| 59 | (20, 20) | 10 | 0.4 | 0.8 | 0.001 | (1, 2) |
| 60 | (20, 20) | 10 | 0.4 | 0.8 | 0.001 | (2, 4) |
| 61 | (20, 20) | 10 | 0.2 | 0.5 | 0.002 | (0.5, 1) |
| 62 | (20, 20) | 10 | 0.2 | 0.5 | 0.002 | (1, 2) |
| 63 | (20, 20) | 10 | 0.2 | 0.5 | 0.002 | (2, 4) |
| 64 | (20, 20) | 10 | 0.2 | 0.5 | 0.001 | (0.5, 1) |
| 65 | (20, 20) | 10 | 0.2 | 0.5 | 0.001 | (1, 2) |
| 66 | (20, 20) | 10 | 0.2 | 0.5 | 0.001 | (2, 4) |
| 67 | (20, 20) | 10 | 0.4 | 0.5 | 0.002 | (0.5, 1) |
| 68 | (20, 20) | 10 | 0.4 | 0.5 | 0.002 | (1, 2) |
| 69 | (20, 20) | 10 | 0.4 | 0.5 | 0.002 | (2, 4) |
| 70 | (20, 20) | 10 | 0.4 | 0.5 | 0.001 | (0.5, 1) |
| 71 | (20, 20) | 10 | 0.4 | 0.5 | 0.001 | (1, 2) |
| 72 | (20, 20) | 10 | 0.4 | 0.5 | 0.001 | (2, 4) |

Table 5-4. Schemes for generating test problems.


3. Parameters in ACPSO
The hybrid ACPSO algorithm is used as the optimization tool to generate and regenerate predictive schedules, and its parameters are set as follows:

| Parameter | Value |
|---|---|
| Particle size | 100 |
| Maximum number of iterations | 30 |
| Maximum inertial weight | 0.8 |
| Minimum inertial weight | 0.2 |
| Range of particle velocities | [-4, 4] |
| Theta | 0.5 |
| M | 6 |
| Evaporation rate of pheromone value | 0.2 |

The robust predictive-reactive approach is adopted in the rescheduling process because it considers both efficiency and stability simultaneously. In this research, the variable ρ assumes three different values: 0.5, 0.75, and 1. A higher ρ value means that efficiency is given greater weight in the objective function; when ρ equals 1, the rescheduling approach reduces to a simple predictive-reactive strategy in which stability is completely ignored. The most important parameter in the proposed rescheduling policy is the critical cumulative task delay D*. By definition, the task delay caused by a disruption is proportional to the number of jobs and inversely proportional to the number of production resources. In each scheme, three different values of D* are considered, representing the low, medium, and high rescheduling thresholds, respectively. More specifically, the values of D* in problems with 12 machines, 10 workers, and 10 jobs are 150, 300, and 600, respectively; those in problems with 12 machines, 10 workers, and 5 jobs are 38, 75, and 150, respectively; and those in problems with 20 machines, 20 workers, and 10 jobs are 75, 150, and 300, respectively. Different values of D* are set for problems with different quantities of production resources or jobs in order to keep the number of rescheduling actions at a reasonable level: too frequent rescheduling requires excessive computational effort, while too infrequent rescheduling cannot adequately demonstrate the performance of the proposed rescheduling policy.

4. Computational results and analyses
Five test problems are randomly generated for each scheme in Table 5-4. The performance of the proposed policy is obtained by averaging the results of running each problem five times. The computational experiments are performed under different values of the critical cumulative task delay D* and the efficiency weight ρ. Tables 5-5 to 5-11 list the computational results for the rescheduling ratio (R-ratio), right-shift ratio (RS-ratio), and absorption ratio (A-ratio); the three ratios are calculated through Equations (5-5) to (5-7). In these tables, D* = Low means that the value of D* is 150 for problems with 12 machines, 10 workers, and 10 jobs; 38 for problems with 12 machines, 10 workers, and 5 jobs; and 75 for problems with 20 machines, 20 workers, and 10 jobs. D* = Medium means that the value of D* for these three system frameworks is 300, 75, and 150, respectively, while D* = High means that it is 600, 150, and 300, respectively.

R-ratio = (number of rescheduling actions performed) / (number of disruptions occurred)        (5-5)

RS-ratio = (number of times the right-shift policy is adopted) / (number of disruptions occurred)        (5-6)

A-ratio = (number of disruptions totally absorbed) / (number of disruptions occurred)        (5-7)
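Since every disruption is either fully absorbed by idle time, handled by the right-shift policy, or triggers rescheduling, the three ratios of Equations (5-5) to (5-7) partition the disruptions and sum to 100% in each table row. A minimal sketch with hypothetical counters:

```python
def handling_ratios(n_reschedule, n_right_shift, n_absorbed):
    """Return (R-ratio, RS-ratio, A-ratio) as fractions of all disruptions."""
    n = n_reschedule + n_right_shift + n_absorbed
    return n_reschedule / n, n_right_shift / n, n_absorbed / n
```

For example, Scheme 1 under ρ = 0.5 and D* = Low gives 20.56% + 36.58% + 42.86% = 100%.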

Tables 5-5 to 5-8 list the computational results of the R-ratio, RS-ratio and A-ratio for Schemes 1 to 24. Specifically, Table 5-5 presents the results under ρ = 0.5, Table 5-6 the results under ρ = 0.75, and Table 5-7 the results under ρ = 1, while Table 5-8 summarizes the average results over the different ρ levels.


| Scheme no. | D* = Low (R-ratio / RS-ratio / A-ratio) | D* = Medium (R-ratio / RS-ratio / A-ratio) | D* = High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio) |
|---|---|---|---|---|
| 1 | 20.56% / 36.58% / 42.86% | 13.65% / 51% / 35.35% | 4.39% / 47.59% / 48.02% | 44.83% / 55.17% |
| 2 | 26.46% / 39.21% / 34.33% | 13.08% / 46.52% / 40.40% | 6.88% / 51.64% / 41.48% | 58.82% / 41.18% |
| 3 | 27.85% / 48.33% / 23.82% | 16.35% / 57.38% / 26.27% | 7.55% / 58.77% / 33.68% | 71.36% / 28.64% |
| 4 | 22.01% / 39.96% / 38.03% | 8.35% / 42.83% / 48.82% | 5.62% / 48.95% / 45.43% | 43.04% / 56.96% |
| 5 | 26.15% / 46.18% / 27.67% | 13.41% / 54.41% / 32.18% | 5.91% / 55.27% / 38.82% | 58.42% / 41.58% |
| 6 | 31.63% / 48.33% / 20.04% | 22.00% / 61.64% / 16.36% | 9.62% / 62.32% / 28.06% | 75.71% / 24.29% |
| 7 | 8.82% / 35.13% / 56.05% | 3.79% / 39.15% / 57.06% | 2.00% / 40.33% / 57.67% | 44.07% / 55.93% |
| 8 | 11.91% / 46.05% / 42.04% | 6.58% / 52.96% / 40.46% | 2.94% / 53.27% / 43.79% | 62.12% / 37.88% |
| 9 | 16.29% / 53.64% / 30.07% | 8.46% / 57.25% / 34.29% | 3.59% / 57.04% / 39.37% | 74.85% / 25.15% |
| 10 | 13.33% / 44.72% / 41.94% | 6.13% / 43.87% / 50.00% | 2.22% / 45.71% / 52.06% | 47.69% / 52.31% |
| 11 | 17.33% / 52.00% / 30.67% | 8.42% / 54.62% / 36.95% | 3.17% / 57.35% / 39.48% | 54.55% / 45.45% |
| 12 | 24.08% / 58.92% / 17.00% | 13.48% / 66.57% / 19.94% | 6.23% / 70.03% / 23.74% | 72.22% / 27.78% |
| 13 | 16.37% / 32.43% / 51.20% | 8.20% / 37.28% / 54.52% | 2.76% / 38.18% / 59.07% | 41.48% / 58.52% |
| 14 | 21.46% / 47.10% / 31.44% | 11.87% / 54.67% / 33.47% | 4.32% / 54.57% / 41.11% | 54.60% / 45.40% |
| 15 | 28.47% / 49.82% / 21.71% | 18.32% / 57.05% / 24.63% | 8.19% / 61.33% / 30.48% | 73.64% / 26.36% |
| 16 | 20.67% / 36.02% / 43.31% | 11.13% / 41.65% / 47.22% | 5.47% / 49.39% / 45.14% | 45.54% / 54.46% |
| 17 | 25.23% / 49.77% / 25.00% | 14.85% / 54.76% / 30.39% | 8.86% / 62.24% / 28.90% | 64.77% / 35.23% |
| 18 | 38.57% / 54.11% / 7.32% | 17.71% / 65.71% / 16.57% | 11.85% / 67.41% / 20.74% | 76.84% / 23.16% |
| 19 | 5.11% / 35.76% / 59.13% | 3.70% / 38.65% / 57.65% | 1.20% / 35.10% / 63.70% | 38.39% / 61.61% |
| 20 | 8.99% / 50.98% / 40.03% | 4.96% / 56.41% / 38.63% | 1.71% / 54.66% / 43.63% | 56.59% / 43.41% |
| 21 | 15.16% / 55.57% / 29.27% | 7.21% / 63.09% / 29.70% | 3.29% / 64.58% / 32.13% | 73.22% / 26.78% |
| 22 | 13.03% / 46.46% / 40.51% | 2.97% / 42.73% / 54.30% | 1.47% / 39.53% / 59.00% | 43.75% / 56.25% |
| 23 | 15.70% / 55.13% / 29.17% | 5.76% / 56.06% / 38.18% | 3.07% / 57.97% / 38.96% | 52.46% / 47.54% |
| 24 | 17.88% / 59.07% / 23.05% | 7.14% / 65.27% / 27.59% | 4.16% / 67.59% / 28.25% | 75.88% / 24.12% |

Table 5-5. Rescheduling frequency for Schemes 1 to 24 under ρ = 0.5.

| Scheme no. | D* = Low (R-ratio / RS-ratio / A-ratio) | D* = Medium (R-ratio / RS-ratio / A-ratio) | D* = High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio) |
|---|---|---|---|---|
| 1 | 21.71% / 38.45% / 39.84% | 13.43% / 48.55% / 38.02% | 4.18% / 47.92% / 47.92% | 44.83% / 55.17% |
| 2 | 23.19% / 42.94% / 33.87% | 14.11% / 49.55% / 36.34% | 6.00% / 52.06% / 41.94% | 58.82% / 41.18% |
| 3 | 30.30% / 48.59% / 21.11% | 17.58% / 58.94% / 23.48% | 8.29% / 58.94% / 32.77% | 71.36% / 28.64% |
| 4 | 20.72% / 40.64% / 38.64% | 11.40% / 45.20% / 43.40% | 6.36% / 44.49% / 49.15% | 43.04% / 56.96% |
| 5 | 27.06% / 49.58% / 23.36% | 13.90% / 58.30% / 27.80% | 5.68% / 54.17% / 40.15% | 58.42% / 41.58% |
| 6 | 36.21% / 48.10% / 15.69% | 19.76% / 56.64% / 23.60% | 10.56% / 68.84% / 20.60% | 75.71% / 24.29% |
| 7 | 9.78% / 37.12% / 53.09% | 4.39% / 36.52% / 59.09% | 1.71% / 38.98% / 59.31% | 44.07% / 55.93% |
| 8 | 14.18% / 54.35% / 31.46% | 5.85% / 52.46% / 41.69% | 2.91% / 56.81% / 40.28% | 62.12% / 37.88% |
| 9 | 20.49% / 52.14% / 27.37% | 9.34% / 60.44% / 30.22% | 4.44% / 61.69% / 33.87% | 74.85% / 25.15% |
| 10 | 12.11% / 43.94% / 43.94% | 3.55% / 39.64% / 56.80% | 2.11% / 47.89% / 50.00% | 47.69% / 52.31% |
| 11 | 14.88% / 53.26% / 31.85% | 7.75% / 56.85% / 35.40% | 2.97% / 55.68% / 41.35% | 54.55% / 45.45% |
| 12 | 29.17% / 59.11% / 11.72% | 11.44% / 67.29% / 21.27% | 5.54% / 69.25% / 25.21% | 72.22% / 27.78% |
| 13 | 15.24% / 36.65% / 48.11% | 7.82% / 39.55% / 52.63% | 4.13% / 42.25% / 53.62% | 41.48% / 58.52% |
| 14 | 22.43% / 48.41% / 29.16% | 11.52% / 53.72% / 34.76% | 5.23% / 53.38% / 41.39% | 54.60% / 45.40% |
| 15 | 28.16% / 52.50% / 13.93% | 17.12% / 59.64% / 23.24% | 8.73% / 61.03% / 30.24% | 73.64% / 26.36% |
| 16 | 25.10% / 36.12% / 38.78% | 9.72% / 42.66% / 47.62% | 4.99% / 41.32% / 53.69% | 45.54% / 54.46% |
| 17 | 25.00% / 53.41% / 21.59% | 15.61% / 50.68% / 33.71% | 8.27% / 64.78% / 26.95% | 64.77% / 35.23% |
| 18 | 35.11% / 55.50% / 9.40% | 21.79% / 67.03% / 11.17% | 10.09% / 64.78% / 25.13% | 76.84% / 23.16% |
| 19 | 7.99% / 37.28% / 54.73% | 2.90% / 36.95% / 60.15% | 1.08% / 37.50% / 61.42% | 38.39% / 61.61% |
| 20 | 9.70% / 52.18% / 38.12% | 5.16% / 54.67% / 40.17% | 1.77% / 54.13% / 44.10% | 56.59% / 43.41% |
| 21 | 18.29% / 54.74% / 26.96% | 9.75% / 63.96% / 26.29% | 3.38% / 63.00% / 33.62% | 73.22% / 26.78% |
| 22 | 14.76% / 45.13% / 40.11% | 6.40% / 51.45% / 42.15% | 1.86% / 43.65% / 54.49% | 43.75% / 56.25% |
| 23 | 16.33% / 54.23% / 29.44% | 5.81% / 59.63% / 34.56% | 3.21% / 63.14% / 33.65% | 52.46% / 47.54% |
| 24 | 20.20% / 62.31% / 17.49% | 8.58% / 68.87% / 22.55% | 4.28% / 70.32% / 25.40% | 75.88% / 24.12% |

Table 5-6. Rescheduling frequency for Schemes 1 to 24 under ρ = 0.75.

| Scheme no. | D* = Low (R-ratio / RS-ratio / A-ratio) | D* = Medium (R-ratio / RS-ratio / A-ratio) | D* = High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio) |
|---|---|---|---|---|
| 1 | 18.74% / 38.81% / 42.45% | 11.07% / 47.69% / 41.24% | 5.01% / 46.19% / 48.80% | 44.83% / 55.17% |
| 2 | 28.22% / 41.24% / 30.54% | 15.22% / 48.03% / 36.75% | 5.56% / 52.14% / 42.30% | 58.82% / 41.18% |
| 3 | 31.12% / 46.11% / 22.77% | 16.91% / 57.73% / 25.36% | 9.10% / 60.97% / 29.93% | 71.36% / 28.64% |
| 4 | 22.01% / 41.36% / 36.63% | 11.92% / 44.65% / 43.43% | 5.62% / 46.22% / 48.16% | 43.04% / 56.96% |
| 5 | 29.62% / 44.52% / 25.85% | 12.96% / 52.19% / 34.85% | 7.61% / 52.69% / 39.70% | 58.42% / 41.58% |
| 6 | 39.67% / 44.79% / 15.54% | 19.51% / 64.11% / 16.38% | 10.54% / 67.14% / 22.32% | 75.71% / 24.29% |
| 7 | 7.26% / 39.25% / 53.49% | 3.82% / 39.79% / 56.39% | 1.83% / 39.91% / 58.26% | 44.07% / 55.93% |
| 8 | 13.08% / 53.14% / 33.78% | 6.83% / 56.42% / 36.75% | 2.64% / 59.31% / 38.05% | 62.12% / 37.88% |
| 9 | 20.00% / 53.33% / 26.67% | 10.92% / 61.54% / 27.54% | 5.05% / 61.57% / 33.38% | 74.85% / 25.15% |
| 10 | 11.29% / 42.26% / 46.46% | 4.16% / 46.81% / 49.03% | 1.96% / 44.13% / 53.91% | 47.69% / 52.31% |
| 11 | 20.20% / 53.37% / 26.43% | 9.76% / 60.69% / 29.55% | 3.84% / 60.55% / 35.61% | 54.55% / 45.45% |
| 12 | 25.75% / 58.63% / 15.62% | 13.45% / 72.59% / 13.96% | 5.32% / 64.63% / 30.05% | 72.22% / 27.78% |
| 13 | 13.08% / 35.19% / 51.72% | 6.36% / 39.96% / 53.68% | 3.40% / 36.98% / 59.62% | 41.48% / 58.52% |
| 14 | 20.21% / 47.49% / 32.30% | 12.08% / 54.35% / 33.57% | 4.58% / 56.46% / 38.96% | 54.60% / 45.40% |
| 15 | 31.54% / 48.91% / 19.55% | 18.82% / 58.88% / 22.30% | 8.63% / 62.08% / 29.29% | 73.64% / 26.36% |
| 16 | 18.64% / 41.48% / 39.88% | 9.49% / 42.22% / 48.28% | 4.56% / 44.49% / 50.95% | 45.54% / 54.46% |
| 17 | 29.64% / 51.58% / 18.78% | 16.36% / 60.51% / 23.13% | 7.29% / 62.64% / 30.07% | 64.77% / 35.23% |
| 18 | 37.52% / 52.28% / 10.20% | 21.44% / 63.97% / 14.59% | 11.05% / 67.75% / 21.20% | 76.84% / 23.16% |
| 19 | 7.88% / 36.93% / 55.17% | 4.29% / 39.97% / 55.74% | 0.77% / 34.10% / 65.13% | 38.39% / 61.61% |
| 20 | 10.32% / 53.65% / 36.03% | 4.39% / 54.55% / 41.06% | 1.76% / 55.95% / 42.29% | 56.59% / 43.41% |
| 21 | 18.98% / 58.03% / 22.99% | 9.59% / 60.66% / 29.75% | 4.34% / 66.47% / 29.19% | 73.22% / 26.78% |
| 22 | 9.17% / 45.56% / 45.27% | 4.57% / 48.29% / 47.14% | 0.87% / 40.23% / 58.89% | 43.75% / 56.25% |
| 23 | 14.61% / 57.59% / 27.79% | 6.69% / 59.88% / 33.43% | 2.62% / 64.14% / 33.24% | 52.46% / 47.54% |
| 24 | 18.53% / 64.97% / 16.50% | 9.71% / 66.67% / 23.62% | 4.64% / 68.81% / 26.55% | 75.88% / 24.12% |

Table 5-7. Rescheduling frequency for Schemes 1 to 24 under ρ = 1.

Scheme | D*=Low (R-ratio / RS-ratio / A-ratio) | D*=Medium (R-ratio / RS-ratio / A-ratio) | D*=High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio)
1 | 20.34% / 37.95% / 41.72% | 12.72% / 49.08% / 38.20% | 4.53% / 47.23% / 48.24% | 44.83% / 55.17%
2 | 25.96% / 41.13% / 32.91% | 14.14% / 51.03% / 37.83% | 6.15% / 51.95% / 41.90% | 58.82% / 41.18%
3 | 29.76% / 47.68% / 22.56% | 16.95% / 58.02% / 25.03% | 8.31% / 59.56% / 32.13% | 71.36% / 28.64%
4 | 21.58% / 40.65% / 37.77% | 10.56% / 44.23% / 45.21% | 5.87% / 46.55% / 47.58% | 43.04% / 56.96%
5 | 27.61% / 46.76% / 25.63% | 13.42% / 54.97% / 31.61% | 6.40% / 54.04% / 39.56% | 58.42% / 41.58%
6 | 35.84% / 47.07% / 17.09% | 20.42% / 60.80% / 18.78% | 10.24% / 66.10% / 23.66% | 75.71% / 24.29%
7 | 8.62% / 27.17% / 54.21% | 4.00% / 38.49% / 57.51% | 1.85% / 39.74% / 58.41% | 44.07% / 55.93%
8 | 13.06% / 51.18% / 35.76% | 6.42% / 53.95% / 39.63% | 2.83% / 56.46% / 40.71% | 62.12% / 37.88%
9 | 18.93% / 53.04% / 28.04% | 9.57% / 59.74% / 30.68% | 4.36% / 60.10% / 35.54% | 74.85% / 25.15%
10 | 12.24% / 43.64% / 44.11% | 4.61% / 43.44% / 51.94% | 2.10% / 45.91% / 51.99% | 47.69% / 52.31%
11 | 17.47% / 52.88% / 29.65% | 8.64% / 57.39% / 33.97% | 3.33% / 57.86% / 38.81% | 45.45% / 54.55%
12 | 26.33% / 58.89% / 14.78% | 12.79% / 68.82% / 18.39% | 5.70% / 67.97% / 26.33% | 72.22% / 27.78%
13 | 14.90% / 34.76% / 50.34% | 7.46% / 38.93% / 53.61% | 3.43% / 39.14% / 57.43% | 41.48% / 58.52%
14 | 21.37% / 47.67% / 30.96% | 11.82% / 54.25% / 33.93% | 4.71% / 54.80% / 40.49% | 54.60% / 45.40%
15 | 29.39% / 50.41% / 18.40% | 18.09% / 58.23% / 23.39% | 8.52% / 61.48% / 30.00% | 73.64% / 26.36%
16 | 21.47% / 37.87% / 40.66% | 10.11% / 42.18% / 47.71% | 5.01% / 45.07% / 49.92% | 45.54% / 54.46%
17 | 26.62% / 51.59% / 21.79% | 15.61% / 55.32% / 29.07% | 8.14% / 63.22% / 28.64% | 64.77% / 35.23%
18 | 37.07% / 53.96% / 8.97% | 20.31% / 65.57% / 14.11% | 11.00% / 66.65% / 22.35% | 76.84% / 23.16%
19 | 6.99% / 36.66% / 56.34% | 3.63% / 38.52% / 57.85% | 1.02% / 35.57% / 63.41% | 38.39% / 61.61%
20 | 9.67% / 52.27% / 38.06% | 4.84% / 55.21% / 39.95% | 1.75% / 54.91% / 43.34% | 56.59% / 43.41%
21 | 17.48% / 56.11% / 26.41% | 8.85% / 62.57% / 28.58% | 3.67% / 64.68% / 31.65% | 73.22% / 26.78%
22 | 12.32% / 45.72% / 41.96% | 4.65% / 47.49% / 47.86% | 1.40% / 41.14% / 57.46% | 43.75% / 56.25%
23 | 15.55% / 55.65% / 28.80% | 6.09% / 58.52% / 35.39% | 2.97% / 61.75% / 35.28% | 52.46% / 47.54%
24 | 18.87% / 62.12% / 19.01% | 8.48% / 66.94% / 24.58% | 4.36% / 68.91% / 26.73% | 75.88% / 24.12%

Table 5-8. Average rescheduling frequency for Schemes 1 to 24.

The results in Tables 5-5 to 5-8 show that the value of ρ has little effect on the R-ratio, RS-ratio, and A-ratio. Taking Schemes 1 and 2 as examples, Table 5-9 lists the computational results under different ρ levels to demonstrate this characteristic more clearly: in each scheme, the R-ratio, RS-ratio, and A-ratio are similar under the different ρ values, and the same holds true for the other schemes. Thus, only the average rescheduling frequencies are listed for Schemes 25 to 72, which are presented in Tables 5-10 and 5-11.

ρ | D*=Low (R-ratio / RS-ratio / A-ratio) | D*=Medium (R-ratio / RS-ratio / A-ratio) | D*=High (R-ratio / RS-ratio / A-ratio)
Scheme 1:
0.5 | 20.56% / 36.58% / 42.86% | 13.65% / 51.00% / 35.35% | 4.39% / 47.59% / 48.02%
0.75 | 21.71% / 38.45% / 39.84% | 13.43% / 48.55% / 38.02% | 4.18% / 47.92% / 47.92%
1 | 18.74% / 38.81% / 42.45% | 11.07% / 47.69% / 41.24% | 5.01% / 46.19% / 48.80%
AVE | 20.34% / 37.95% / 41.72% | 12.72% / 49.08% / 38.20% | 4.53% / 47.23% / 48.24%
Right-shift (Scheme 1): RS-ratio = 44.83%, A-ratio = 55.17%
Scheme 2:
0.5 | 26.46% / 39.21% / 34.33% | 13.08% / 46.52% / 40.40% | 6.88% / 51.64% / 41.48%
0.75 | 23.19% / 42.94% / 33.87% | 14.11% / 49.55% / 36.34% | 6.00% / 52.06% / 41.94%
1 | 28.22% / 41.24% / 30.54% | 15.22% / 48.03% / 36.75% | 5.56% / 52.14% / 42.30%
AVE | 25.96% / 41.13% / 32.91% | 14.14% / 51.03% / 37.83% | 6.15% / 51.95% / 41.90%
Right-shift (Scheme 2): RS-ratio = 58.82%, A-ratio = 41.18%

Table 5-9. Rescheduling frequency under different levels of ρ.

Scheme | D*=Low (R-ratio / RS-ratio / A-ratio) | D*=Medium (R-ratio / RS-ratio / A-ratio) | D*=High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio)
25 | 18.98% / 29.14% / 51.88% | 8.67% / 33.73% / 57.60% | 3.31% / 34.62% / 62.06% | 40.54% / 59.46%
26 | 30.63% / 38.49% / 30.88% | 15.11% / 43.86% / 41.02% | 6.79% / 51.94% / 41.27% | 57.97% / 42.03%
27 | 36.18% / 38.66% / 25.17% | 21.49% / 51.61% / 26.90% | 10.04% / 59.51% / 30.45% | 71.71% / 28.29%
28 | 21.47% / 27.24% / 51.28% | 12.29% / 32.71% / 55.00% | 4.93% / 37.20% / 57.87% | 46.25% / 53.75%
29 | 31.59% / 37.85% / 30.55% | 21.24% / 47.76% / 31.00% | 9.65% / 52.04% / 38.31% | 64.76% / 35.24%
30 | 43.55% / 40.45% / 16.00% | 28.15% / 54.73% / 17.13% | 13.86% / 63.17% / 22.97% | 76.09% / 23.91%
31 | 7.60% / 31.27% / 61.12% | 1.99% / 26.91% / 71.10% | 1.04% / 29.39% / 69.56% | 28.57% / 71.43%
32 | 14.85% / 43.90% / 41.58% | 6.29% / 48.90% / 44.81% | 2.94% / 47.45% / 49.60% | 52.63% / 47.37%
33 | 25.08% / 45.89% / 28.69% | 14.53% / 55.22% / 30.25% | 6.30% / 57.56% / 36.14% | 67.32% / 32.68%
34 | 8.02% / 34.35% / 57.63% | 2.64% / 38.33% / 59.03% | 0.21% / 36.73% / 63.05% | 38.46% / 61.54%
35 | 13.65% / 49.17% / 37.18% | 8.34% / 49.18% / 42.48% | 2.42% / 53.75% / 43.83% | 54.55% / 45.45%
36 | 26.84% / 51.97% / 21.19% | 13.26% / 63.24% / 23.50% | 5.82% / 64.86% / 29.32% | 71.88% / 28.12%
37 | 8.56% / 27.13% / 64.32% | 4.14% / 32.66% / 63.20% | 1.82% / 35.45% / 62.72% | 30.26% / 69.74%
38 | 17.68% / 39.42% / 42.89% | 8.80% / 48.21% / 42.99% | 3.98% / 45.37% / 50.66% | 53.42% / 46.58%
39 | 29.73% / 40.64% / 29.62% | 17.54% / 51.56% / 30.90% | 8.97% / 57.71% / 33.32% | 67.12% / 32.88%
40 | 12.20% / 37.80% / 50.00% | 6.01% / 39.36% / 54.63% | 2.29% / 44.93% / 52.78% | 42.50% / 57.50%
41 | 18.90% / 43.91% / 37.18% | 10.13% / 55.36% / 34.51% | 4.38% / 59.61% / 36.01% | 64.50% / 35.50%
42 | 30.16% / 44.06% / 25.77% | 14.71% / 49.11% / 36.18% | 8.26% / 59.86% / 31.88% | 69.71% / 30.29%
43 | 11.79% / 30.77% / 57.44% | 4.80% / 31.74% / 63.46% | 1.70% / 27.91% / 70.39% | 27.03% / 72.97%
44 | 18.73% / 38.04% / 43.23% | 7.93% / 42.73% / 49.35% | 3.10% / 42.05% / 54.85% | 48.10% / 51.90%
45 | 31.05% / 41.55% / 27.39% | 15.76% / 52.46% / 31.78% | 7.49% / 58.31% / 34.20% | 70.32% / 29.68%
46 | 15.00% / 26.42% / 58.58% | 4.80% / 28.71% / 66.48% | 1.98% / 32.95% / 65.07% | 34.29% / 65.71%
47 | 18.53% / 35.70% / 45.77% | 10.99% / 45.23% / 42.78% | 5.12% / 50.51% / 44.37% | 60.00% / 40.00%
48 | 29.49% / 44.91% / 25.60% | 17.13% / 54.47% / 28.40% | 7.93% / 62.05% / 30.02% | 71.67% / 28.33%

Table 5-10. Average rescheduling frequency for Schemes 25 to 48.


Scheme | D*=Low (R-ratio / RS-ratio / A-ratio) | D*=Medium (R-ratio / RS-ratio / A-ratio) | D*=High (R-ratio / RS-ratio / A-ratio) | Right-shift (RS-ratio / A-ratio)
49 | 13.59% / 38.12% / 48.29% | 5.81% / 38.74% / 56.12% | 1.86% / 36.80% / 61.34% | 39.11% / 60.89%
50 | 20.35% / 49.39% / 30.26% | 7.95% / 54.54% / 37.51% | 3.77% / 56.93% / 39.31% | 53.10% / 46.90%
51 | 26.04% / 51.75% / 22.21% | 15.26% / 60.98% / 23.75% | 6.05% / 63.04% / 31.20% | 74.08% / 25.92%
52 | 21.17% / 39.51% / 39.32% | 7.02% / 42.75% / 50.22% | 3.03% / 44.11% / 52.86% | 39.74% / 60.26%
53 | 27.95% / 55.61% / 16.44% | 12.00% / 59.92% / 28.07% | 4.18% / 65.43% / 30.40% | 39.44% / 60.56%
54 | 36.30% / 56.13% / 7.57% | 16.77% / 65.86% / 17.38% | 7.85% / 69.95% / 22.20% | 82.00% / 18.00%
55 | 5.26% / 34.09% / 59.65% | 2.34% / 35.62% / 62.03% | 0.73% / 34.59% / 64.69% | 39.69% / 60.31%
56 | 10.06% / 47.74% / 42.20% | 4.29% / 49.46% / 46.25% | 1.66% / 52.58% / 45.75% | 46.85% / 53.15%
57 | 16.28% / 55.78% / 27.94% | 7.61% / 59.49% / 32.90% | 3.46% / 61.35% / 35.19% | 69.39% / 30.61%
58 | 9.10% / 44.55% / 46.35% | 1.98% / 39.41% / 58.60% | 0.39% / 38.13% / 61.48% | 36.36% / 63.64%
59 | 12.01% / 56.15% / 31.84% | 4.80% / 60.51% / 34.69% | 1.77% / 62.65% / 35.58% | 55.56% / 44.44%
60 | 9.31% / 35.82% / 54.88% | 3.76% / 36.30% / 59.93% | 1.33% / 34.38% / 64.29% | 36.90% / 63.10%
61 | 16.08% / 45.43% / 38.49% | 6.45% / 48.95% / 44.60% | 2.85% / 48.96% / 48.18% | 53.79% / 46.21%
62 | 22.47% / 53.02% / 24.51% | 11.46% / 58.39% / 30.14% | 5.53% / 60.04% / 34.43% | 71.76% / 28.24%
63 | 22.47% / 53.02% / 24.51% | 11.46% / 58.39% / 30.14% | 5.53% / 60.04% / 34.43% | 71.76% / 28.24%
64 | 14.37% / 36.27% / 49.36% | 5.71% / 41.11% / 53.17% | 1.79% / 38.30% / 59.91% | 44.00% / 56.00%
65 | 19.35% / 49.54% / 31.11% | 9.23% / 56.60% / 34.16% | 3.27% / 55.39% / 41.35% | 60.00% / 40.00%
66 | 24.51% / 53.36% / 22.13% | 13.31% / 63.91% / 22.78% | 6.19% / 67.10% / 26.70% | 74.32% / 25.68%
67 | 5.82% / 35.54% / 58.64% | 2.77% / 36.62% / 60.61% | 1.04% / 36.43% / 62.52% | 37.04% / 62.96%
68 | 11.30% / 49.07% / 39.63% | 5.35% / 50.43% / 44.22% | 2.19% / 51.14% / 46.66% | 54.70% / 45.30%
69 | 18.68% / 53.23% / 28.08% | 8.78% / 57.85% / 33.37% | 4.33% / 57.30% / 38.36% | 66.21% / 33.79%
70 | 10.15% / 42.22% / 47.63% | 3.69% / 42.02% / 54.28% | 1.01% / 42.57% / 56.42% | 48.71% / 51.29%
71 | 13.33% / 53.07% / 33.60% | 4.99% / 57.41% / 37.60% | 2.43% / 57.70% / 39.88% | 63.64% / 36.36%
72 | 22.67% / 56.64% / 20.69% | 10.73% / 63.19% / 26.08% | 4.71% / 65.48% / 29.80% | 71.25% / 28.75%

Table 5-11. Average rescheduling frequency for Schemes 49 to 72.

Based on the computational results displayed in Tables 5-5 to 5-11, the effects of the experimental factors on rescheduling frequency are summarized as follows:

(1) The efficiency weight has little effect on rescheduling frequency, as shown by the results in Table 5-9. This characteristic is easy to understand: the efficiency weight is used to control the deviation of the newly generated production schedule from the original one, whereas the rescheduling frequency is mainly determined by the critical cumulative task delay and the severity of disruptions.

(2) The impact of a portion of the disruptions can be totally absorbed by the idle time naturally generated on production resources, and the slighter the disruptions are, the higher the absorption rate is. Figure 5-4 shows the average absorption rates under different disruption levels. In this figure, when the abscissa is equal to k, the “beta = (0.5, 1)” line denotes Scheme 3k-2 (a slight level of disruptions), the “beta = (1, 2)” line denotes Scheme 3k-1 (a medium level of disruptions), and the “beta = (2, 4)” line denotes Scheme 3k (a severe level of disruptions). The results show that the impact of 50-60 per cent of slight disruptions (as well as 30-50 per cent of medium-level disruptions and 10-30 per cent of severe disruptions) can be totally absorbed.
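The absorption, right-shift, and rescheduling statistics above follow from a simple decision rule: a disruption is absorbed if idle time covers it, right-shifted while the cumulative task delay stays below the critical value, and triggers full rescheduling otherwise. A minimal sketch of that rule is given below; the function name, the zero-delay convention for absorbed disruptions, and the reset of the cumulative delay after each rescheduling are illustrative assumptions, not the thesis's exact procedure.

```python
def classify_disruptions(delays, critical_delay):
    """Classify each disruption by the reactive action it triggers.

    delays: the task delay each disruption causes after idle-time
            absorption (0 means the disruption was fully absorbed).
    critical_delay: the critical cumulative task delay.
    """
    absorbed = right_shifted = rescheduled = 0
    cumulative = 0
    for d in delays:
        if d == 0:
            absorbed += 1          # idle time absorbed the disruption
            continue
        cumulative += d
        if cumulative > critical_delay:
            rescheduled += 1       # cumulative delay exceeds the limit
            cumulative = 0         # a new schedule starts with no delay
        else:
            right_shifted += 1     # push the affected tasks to the right
    n = len(delays)
    return {"R-ratio": rescheduled / n,
            "RS-ratio": right_shifted / n,
            "A-ratio": absorbed / n}
```

Under this rule, raising the critical delay converts rescheduling events into right-shifts, which reproduces the trend visible in Tables 5-5 to 5-8.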

[Figure 5-4. Absorption rates under different disruption levels. The A-ratio is plotted against problem instance for the lines beta = (0.5, 1), beta = (1, 2), and beta = (2, 4).]

(3) The higher the critical cumulative task delay, the lower the rescheduling ratio. Figure 5-5 shows the effect of the critical cumulative task delay on rescheduling frequency, taking Schemes 1 to 24 as examples. For instance, the rescheduling ratio in Scheme 1 is 20.34 per cent when the cumulative task delay has a low value, becomes 12.72 per cent when the cumulative task delay assumes a medium value, and is only 4.53 per cent at a high value.

[Figure 5-5. The effect of critical cumulative task delay on rescheduling frequency. The R-ratio is plotted against problem instance for D = low, medium, and high.]

(4) The rescheduling ratio is in proportion to the value of (β1, β2), as a high (β1, β2) value leads to more severe disruptions. Figure 5-6 shows the effect of (β1, β2) on the rescheduling ratio, assuming a low critical cumulative task delay. In this figure, when the abscissa is equal to k, the “beta = (0.5, 1)” line denotes Scheme 3k-2, the “beta = (1, 2)” line denotes Scheme 3k-1, and the “beta = (2, 4)” line denotes Scheme 3k.

[Figure 5-6. The effect of disruption severity level on the rescheduling ratio. The R-ratio is plotted against problem instance for beta = (0.5, 1), (1, 2), and (2, 4).]

(5) The rescheduling ratio increases when the number of jobs becomes larger or the number of production resources becomes smaller. When the system contains more jobs or fewer production resources, more jobs are assigned to each production resource on average, so a disruption affects more jobs and leads to longer task delays. Taking Schemes 1, 25, and 49 as examples, the problems in Scheme 1 have more jobs than those in Scheme 25 and fewer production resources than those in Scheme 49. When the critical cumulative task delay is 150, the rescheduling ratio is 20.34 per cent in Scheme 1, 3.31 per cent in Scheme 25, and 5.81 per cent in Scheme 49.

(6) The rescheduling ratio is in inverse proportion to the value of T. T denotes the tightness of jobs, so a high T value means that more jobs are subcontracted while relatively few jobs enter the manufacturing process. As the task delay of a

disruption is in proportion to the number of jobs, a high T value will lead to lower cumulative task delays and thus a lower rescheduling ratio. For instance, the parameters in Schemes 1 and 7 are the same except that the value of T in Scheme 1 is smaller; the problems in Scheme 1 therefore have higher rescheduling frequencies.

Another important issue in dynamic production scheduling is the realized manufacturing cost. The computational results of the manufacturing cost increment ratio for these schemes are listed in Tables 5-12 to 5-20: Tables 5-12 to 5-14 list the results for Schemes 1 to 24 under ρ = 0.5, ρ = 0.75 and ρ = 1, respectively; Tables 5-15 to 5-17 list the corresponding results for Schemes 25 to 48; and Tables 5-18 to 5-20 list those for Schemes 49 to 72. The manufacturing cost increment ratio in these tables is calculated through Equation (5-8):

    MC increment ratio = (cost of the realized schedule / cost of the predictive schedule at time 0) − 1        (5-8)

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
1 | 3.92% | 7.53% | 10.70% | 16.36%
2 | 14.15% | 17.72% | 26.98% | 44.53%
3 | 34.77% | 36.13% | 38.31% | 51.65%
4 | 3.77% | 7.35% | 10.14% | 15.57%
5 | 6.69% | 9.55% | 13.51% | 21.32%
6 | 17.43% | 19.91% | 26.78% | 42.30%
7 | 7.10% | 8.00% | 10.95% | 13.03%
8 | 10.53% | 12.13% | 12.39% | 16.49%
9 | 20.79% | 20.88% | 22.56% | 28.51%
10 | 6.69% | 8.24% | 9.11% | 11.01%
11 | 8.30% | 8.93% | 11.22% | 14.52%
12 | 12.83% | 15.28% | 18.19% | 25.03%
13 | 8.55% | 13.68% | 16.16% | 32.53%
14 | 17.13% | 19.72% | 23.05% | 50.72%
15 | 44.69% | 49.55% | 55.93% | 80.57%
16 | 4.56% | 8.53% | 13.02% | 25.16%
17 | 8.43% | 13.23% | 19.29% | 34.30%
18 | 25.93% | 29.28% | 32.40% | 54.33%
19 | 6.96% | 8.93% | 11.76% | 13.27%
20 | 14.74% | 16.52% | 18.21% | 27.35%
21 | 28.12% | 30.38% | 34.05% | 42.06%
22 | 6.66% | 9.00% | 11.16% | 12.86%
23 | 10.50% | 12.55% | 14.90% | 19.20%
24 | 16.98% | 18.63% | 20.78% | 27.87%
Table 5-12. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 0.5.
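Equation (5-8) is straightforward to compute from the two schedule costs. The sketch below is a minimal illustration; the function name is an assumption made for the example.

```python
def mc_increment_ratio(realized_cost, predictive_cost):
    """Equation (5-8): relative cost increase of the realized schedule
    over the predictive schedule generated at time 0."""
    return realized_cost / predictive_cost - 1.0
```

For example, a realized schedule costing 10,470 against a predictive schedule costing 10,000 gives an increment ratio of 4.7 per cent.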

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
1 | 2.70% | 4.29% | 9.53% | 16.36%
2 | 12.31% | 15.88% | 19.98% | 44.53%
3 | 29.69% | 33.36% | 36.24% | 51.65%
4 | 1.83% | 3.69% | 6.99% | 15.57%
5 | 4.80% | 7.26% | 11.02% | 21.32%
6 | 15.31% | 16.74% | 24.81% | 42.30%
7 | 5.07% | 7.30% | 8.84% | 13.03%
8 | 8.47% | 10.97% | 11.75% | 16.49%
9 | 16.96% | 18.06% | 19.09% | 28.51%
10 | 6.06% | 6.56% | 7.84% | 11.01%
11 | 6.37% | 7.20% | 9.84% | 14.52%
12 | 11.50% | 13.90% | 15.14% | 25.03%
13 | 6.49% | 10.27% | 13.36% | 32.53%
14 | 15.73% | 17.75% | 20.83% | 50.72%
15 | 43.47% | 46.34% | 52.52% | 80.57%
16 | 3.88% | 6.80% | 8.74% | 25.16%
17 | 7.15% | 10.52% | 18.20% | 34.30%
18 | 24.82% | 27.28% | 29.08% | 54.33%
19 | 5.79% | 7.85% | 9.70% | 13.27%
20 | 12.68% | 13.25% | 16.05% | 27.35%
21 | 26.97% | 29.64% | 31.91% | 42.06%
22 | 6.29% | 7.74% | 9.86% | 12.86%
23 | 9.03% | 10.17% | 12.30% | 19.20%
24 | 15.28% | 18.05% | 19.45% | 27.87%
Table 5-13. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 0.75.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
1 | 1.31% | 1.88% | 4.13% | 16.36%
2 | 9.33% | 12.89% | 17.40% | 44.53%
3 | 24.98% | 31.44% | 33.53% | 51.65%
4 | 1.03% | 3.56% | 5.97% | 15.57%
5 | 3.10% | 6.43% | 8.32% | 21.32%
6 | 11.79% | 14.63% | 22.52% | 42.30%
7 | 3.51% | 4.75% | 5.98% | 13.03%
8 | 6.43% | 8.01% | 10.44% | 16.49%
9 | 15.49% | 17.78% | 18.65% | 28.51%
10 | 5.44% | 6.08% | 6.50% | 11.01%
11 | 4.30% | 5.54% | 7.09% | 14.52%
12 | 10.56% | 12.48% | 13.66% | 25.03%
13 | 3.02% | 6.96% | 11.46% | 32.53%
14 | 11.51% | 15.25% | 18.93% | 50.72%
15 | 39.02% | 46.09% | 49.79% | 80.57%
16 | 3.07% | 5.80% | 7.75% | 25.16%
17 | 4.49% | 9.88% | 16.80% | 34.30%
18 | 21.32% | 24.85% | 28.17% | 54.33%
19 | 4.05% | 6.25% | 8.12% | 13.27%
20 | 10.54% | 11.63% | 14.12% | 27.35%
21 | 25.18% | 30.26% | 28.75% | 42.06%
22 | 5.35% | 6.61% | 8.22% | 12.86%
23 | 7.16% | 8.71% | 10.53% | 19.20%
24 | 13.15% | 16.29% | 18.53% | 27.87%
Table 5-14. Manufacturing cost increment ratio for Schemes 1 to 24 under ρ = 1.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
25 | 9.45% | 13.50% | 15.57% | 20.61%
26 | 18.86% | 20.48% | 29.37% | 35.49%
27 | 41.96% | 44.40% | 48.23% | 56.83%
28 | 8.13% | 9.85% | 13.62% | 15.77%
29 | 12.02% | 16.47% | 18.13% | 29.23%
30 | 22.74% | 22.96% | 34.45% | 42.52%
31 | 4.73% | 7.27% | 7.82% | 8.96%
32 | 12.68% | 16.15% | 17.84% | 19.91%
33 | 28.91% | 30.52% | 33.72% | 43.53%
34 | 4.92% | 6.28% | 8.34% | 9.33%
35 | 8.95% | 9.44% | 9.75% | 10.98%
36 | 14.91% | 17.85% | 18.72% | 27.32%
37 | 14.53% | 16.07% | 17.96% | 20.94%
38 | 22.74% | 24.79% | 26.18% | 39.65%
39 | 42.57% | 45.96% | 50.39% | 64.56%
40 | 12.13% | 13.89% | 18.70% | 20.01%
41 | 16.64% | 18.49% | 20.40% | 22.84%
42 | 19.69% | 24.54% | 27.01% | 32.82%
43 | 7.73% | 8.24% | 8.89% | 10.58%
44 | 14.46% | 15.58% | 16.41% | 23.42%
45 | 27.59% | 30.81% | 33.02% | 47.27%
46 | 5.48% | 6.08% | 7.43% | 9.00%
47 | 8.63% | 11.29% | 13.97% | 17.26%
48 | 15.11% | 17.19% | 19.61% | 24.11%
Table 5-15. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 0.5.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
25 | 7.64% | 11.46% | 13.16% | 20.61%
26 | 17.81% | 19.03% | 26.76% | 35.49%
27 | 39.50% | 42.66% | 43.92% | 56.83%
28 | 5.95% | 7.68% | 11.97% | 15.77%
29 | 11.78% | 15.10% | 15.83% | 29.23%
30 | 21.38% | 22.19% | 27.88% | 42.52%
31 | 3.75% | 7.02% | 7.48% | 8.96%
32 | 12.29% | 15.34% | 16.58% | 19.91%
33 | 27.54% | 28.99% | 32.26% | 43.53%
34 | 3.83% | 5.72% | 7.09% | 9.33%
35 | 7.99% | 9.36% | 9.44% | 10.98%
36 | 13.70% | 15.68% | 17.44% | 27.32%
37 | 11.91% | 14.09% | 16.89% | 20.94%
38 | 21.22% | 23.11% | 24.98% | 39.65%
39 | 40.45% | 43.11% | 48.22% | 64.56%
40 | 10.32% | 12.58% | 17.16% | 20.01%
41 | 14.35% | 16.30% | 18.69% | 22.84%
42 | 18.81% | 22.28% | 24.39% | 32.82%
43 | 6.77% | 7.89% | 7.21% | 10.58%
44 | 13.62% | 14.58% | 15.30% | 23.42%
45 | 25.57% | 27.70% | 28.21% | 47.27%
46 | 4.61% | 5.36% | 6.81% | 9.00%
47 | 7.00% | 9.57% | 11.00% | 17.26%
48 | 14.57% | 16.27% | 17.43% | 24.11%
Table 5-16. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 0.75.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
25 | 4.59% | 7.84% | 9.78% | 20.61%
26 | 14.09% | 16.06% | 21.89% | 35.49%
27 | 37.16% | 40.36% | 42.82% | 56.83%
28 | 1.89% | 6.71% | 9.24% | 15.77%
29 | 11.35% | 13.58% | 15.02% | 29.23%
30 | 19.21% | 19.94% | 24.69% | 42.52%
31 | 2.89% | 6.69% | 7.04% | 8.96%
32 | 11.13% | 14.23% | 15.64% | 19.91%
33 | 26.45% | 27.61% | 30.43% | 43.53%
34 | 1.92% | 5.71% | 5.81% | 9.33%
35 | 7.31% | 8.84% | 8.90% | 10.98%
36 | 12.56% | 14.59% | 16.64% | 27.32%
37 | 10.93% | 12.36% | 15.20% | 20.94%
38 | 19.33% | 21.50% | 24.21% | 39.65%
39 | 39.63% | 40.83% | 45.21% | 64.56%
40 | 7.45% | 11.25% | 16.33% | 20.01%
41 | 11.85% | 14.29% | 16.77% | 22.84%
42 | 17.89% | 21.01% | 23.29% | 32.82%
43 | 6.41% | 7.08% | 7.16% | 10.58%
44 | 12.19% | 13.88% | 14.43% | 23.42%
45 | 24.41% | 26.34% | 27.29% | 47.27%
46 | 3.23% | 4.94% | 6.38% | 9.00%
47 | 6.86% | 8.27% | 9.62% | 17.26%
48 | 13.84% | 15.37% | 16.42% | 24.11%
Table 5-17. Manufacturing cost increment ratio for Schemes 25 to 48 under ρ = 1.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
49 | 5.17% | 8.67% | 11.17% | 16.58%
50 | 11.83% | 13.09% | 18.62% | 25.84%
51 | 26.71% | 27.12% | 27.52% | 39.78%
52 | 4.12% | 6.10% | 8.94% | 11.57%
53 | 6.83% | 8.93% | 12.40% | 16.86%
54 | 12.57% | 17.29% | 20.12% | 27.64%
55 | 5.87% | 8.31% | 9.57% | 10.65%
56 | 11.49% | 13.99% | 15.61% | 17.53%
57 | 24.56% | 26.08% | 29.93% | 37.37%
58 | 3.49% | 5.33% | 6.80% | 8.44%
59 | 6.15% | 8.99% | 11.03% | 13.20%
60 | 16.24% | 18.11% | 19.12% | 24.88%
61 | 5.75% | 8.47% | 10.88% | 13.83%
62 | 10.34% | 14.99% | 16.78% | 26.37%
63 | 25.27% | 29.83% | 32.17% | 40.73%
64 | 4.00% | 5.87% | 7.63% | 8.94%
65 | 5.48% | 8.78% | 11.86% | 12.88%
66 | 13.87% | 15.93% | 17.76% | 22.36%
67 | 7.49% | 8.46% | 9.63% | 10.44%
68 | 12.78% | 17.07% | 19.33% | 26.21%
69 | 29.40% | 30.90% | 32.56% | 44.59%
70 | 4.57% | 7.64% | 8.75% | 9.24%
71 | 7.65% | 9.43% | 10.65% | 13.00%
72 | 14.93% | 17.96% | 19.13% | 27.65%
Table 5-18. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 0.5.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
49 | 4.58% | 7.09% | 8.83% | 16.58%
50 | 9.50% | 11.19% | 14.60% | 25.84%
51 | 23.41% | 25.17% | 27.08% | 39.78%
52 | 3.87% | 5.10% | 7.26% | 11.57%
53 | 4.97% | 8.32% | 9.84% | 16.86%
54 | 12.17% | 14.17% | 17.06% | 27.64%
55 | 5.70% | 7.58% | 8.26% | 10.65%
56 | 9.72% | 11.77% | 13.49% | 17.53%
57 | 23.50% | 25.09% | 27.32% | 37.37%
58 | 2.91% | 4.80% | 5.76% | 8.44%
59 | 5.40% | 6.75% | 9.33% | 13.20%
60 | 14.81% | 16.76% | 18.09% | 24.88%
61 | 5.28% | 7.66% | 9.09% | 13.83%
62 | 8.39% | 12.54% | 14.31% | 26.37%
63 | 24.13% | 26.51% | 30.53% | 40.73%
64 | 2.88% | 4.12% | 6.63% | 8.94%
65 | 4.39% | 5.73% | 8.37% | 12.88%
66 | 11.33% | 15.03% | 16.40% | 22.36%
67 | 5.71% | 6.41% | 8.19% | 10.44%
68 | 11.87% | 16.02% | 18.10% | 26.21%
69 | 24.74% | 28.23% | 30.96% | 44.59%
70 | 3.90% | 5.17% | 7.46% | 9.24%
71 | 5.55% | 7.87% | 9.76% | 13.00%
72 | 13.91% | 16.57% | 17.73% | 27.65%
Table 5-19. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 0.75.

Scheme | D*=Low | D*=Medium | D*=High | Right-shift
49 | 4.52% | 6.18% | 8.29% | 16.58%
50 | 8.61% | 10.53% | 13.30% | 25.84%
51 | 21.86% | 24.93% | 26.03% | 39.78%
52 | 1.95% | 4.60% | 6.17% | 11.57%
53 | 3.58% | 8.17% | 8.94% | 16.86%
54 | 11.62% | 13.34% | 16.28% | 27.64%
55 | 5.28% | 6.56% | 7.58% | 10.65%
56 | 8.17% | 10.98% | 12.07% | 17.53%
57 | 22.11% | 24.00% | 25.46% | 37.37%
58 | 2.67% | 3.40% | 4.15% | 8.44%
59 | 4.09% | 5.27% | 7.51% | 13.20%
60 | 13.41% | 15.97% | 17.11% | 24.88%
61 | 4.18% | 7.05% | 7.93% | 13.83%
62 | 7.43% | 11.68% | 13.08% | 26.37%
63 | 24.13% | 26.51% | 30.53% | 40.73%
64 | 1.49% | 2.48% | 3.92% | 8.94%
65 | 3.45% | 4.34% | 6.79% | 12.88%
66 | 9.85% | 14.02% | 15.32% | 22.36%
67 | 3.45% | 5.57% | 6.86% | 10.44%
68 | 10.96% | 14.38% | 16.82% | 26.21%
69 | 23.03% | 27.31% | 29.32% | 44.59%
70 | 2.72% | 4.20% | 5.89% | 9.24%
71 | 3.23% | 6.57% | 9.15% | 13.00%
72 | 12.89% | 14.95% | 16.32% | 27.65%

Table 5-20. Manufacturing cost increment ratio for Schemes 49 to 72 under ρ = 1.

The effects of the experimental parameters on the manufacturing cost increment ratio, as shown in Tables 5-12 to 5-20, can be summarized as follows:

(1) The higher the critical cumulative task delay, the higher the manufacturing cost increment ratio. This is easy to understand: a high critical cumulative task delay indicates that more disruptions will be handled through the right-shift policy, which inevitably harms productivity and thus increases the manufacturing cost increment ratio.

(2) The higher the efficiency weight, the lower the manufacturing cost increment ratio. A low efficiency weight indicates that the system places primary importance on solution stability, which deteriorates the overall quality of the schedule solution. Table 5-21 presents the manufacturing cost increment ratios for Schemes 1 to 3 under different levels of ρ to demonstrate its effect on the realized manufacturing cost (the right-shift results are independent of ρ), and Figure 5-7 shows the results for Schemes 1 to 24 when the critical cumulative task delay assumes a low value.

Scheme | ρ | D*=Low | D*=Medium | D*=High | Right-shift
1 | 0.5 | 3.92% | 7.53% | 10.70% | 16.36%
1 | 0.75 | 2.70% | 3.69% | 9.53% |
1 | 1 | 1.31% | 1.88% | 4.13% |
2 | 0.5 | 14.15% | 17.72% | 26.98% | 44.53%
2 | 0.75 | 12.31% | 15.88% | 19.98% |
2 | 1 | 9.33% | 12.89% | 17.40% |
3 | 0.5 | 34.77% | 36.13% | 38.31% | 51.65%
3 | 0.75 | 29.69% | 33.36% | 36.24% |
3 | 1 | 24.98% | 31.44% | 33.53% |
Table 5-21. Manufacturing cost increment ratio for Schemes 1 to 3.

[Figure 5-7. The effect of efficiency weight on manufacturing cost increment ratio. The cost increment ratio is plotted against problem instance (Schemes 1 to 24) for ρ = 0.5, 0.75, and 1.]

(3) The manufacturing cost increment ratio is in proportion to the (β1, β2) value, as a high (β1, β2) value leads to more severe disruptions. Taking a low critical cumulative task delay and an efficiency weight of 0.5 for example, Figure 5-8 shows the effect of (β1, β2) on the manufacturing cost increment ratio. In this figure, when the abscissa is equal to k, the “beta = (0.5, 1)” line denotes Scheme 3k-2, the “beta = (1, 2)” line denotes Scheme 3k-1, and the “beta = (2, 4)” line denotes Scheme 3k.

[Figure 5-8. The effect of disruption severity level on manufacturing cost increment ratio. The cost increment ratio is plotted against problem instance (Schemes 1 to 24) for beta = (0.5, 1), (1, 2), and (2, 4).]

(4) The manufacturing cost increment ratio is in proportion to the value of λ. A high λ value means that disruptions occur more frequently during the manufacturing process, which increases the manufacturing cost of the realized schedule. For instance, problems in Scheme 2 encounter more disruptions than those in Scheme 5, so the manufacturing cost increment ratios in Scheme 2 are higher; the difference becomes more obvious as the disruptions become more severe.

(5) The manufacturing cost increment ratio is in inverse proportion to the value of R. This means that the effect of disruptions is more severe under tight job due dates.

Combining all of the results in Tables 5-5 to 5-20 makes clear that a tradeoff exists between the rescheduling ratio and the manufacturing cost increment ratio: the higher the rescheduling ratio, the lower the manufacturing cost increment ratio. Thus, it is necessary for managers to select a suitable value for the critical cumulative task delay according to actual manufacturing requirements.

5.3 Production Scheduling in VCMSs under a Rolling Horizon Environment

In all of the previous research, job information was assumed to be deterministic and known in advance. This assumption, however, does not reflect practice: in a real manufacturing environment, most information about production orders becomes available only when the orders arrive at the manufacturing system. To better resemble the real manufacturing environment, the research into VCMS production scheduling is therefore extended to a rolling horizon environment, in which jobs arrive at the manufacturing system dynamically.

5.3.1 Mathematical model

To develop the mathematical model for VCMSs operating in a rolling horizon environment, only one group of constraints needs to be added to the mathematical model presented in Chapter 3. This supplement is represented in Equation (5-9):

    st_{j,i,w(r),p} ≥ rt_j,   ∀ j        (5-9)

Here rt_j denotes the release time (i.e., arrival time) of job j. This group of constraints ensures that each job can only be scheduled after it has been released to the manufacturing system. Although the mathematical model of a VCMS operating in a rolling horizon environment is almost the same as that developed in Chapter 3, the difference between the notations in these two models must be emphasized. In Chapter 3, all job characteristics (such as due dates, the number of job operations, job production routes, operation processing times, and production volumes) are available at the beginning of the planning horizon; in a rolling horizon environment they become available only after the job arrives. In other words, the scheduler does not know any job information before the job arrives.
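A minimal feasibility check for the release-time constraint in Equation (5-9) might look as follows; the data layout and function name are assumptions made for illustration, not part of the thesis's model implementation.

```python
def respects_release_times(start_times, release_times):
    """Check Equation (5-9): every operation of job j starts no earlier
    than the job's release (arrival) time rt_j.

    start_times: {job: [start time of each scheduled operation]}
    release_times: {job: rt_j}
    """
    return all(st >= release_times[j]
               for j, starts in start_times.items()
               for st in starts)
```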

5.3.2 Rescheduling policies

All jobs are released to the shop floor dynamically in a rolling horizon environment, so the key issue is to determine a suitable “when-to-reschedule” policy for incorporating newly arrived jobs into the existing scheduling plan. The most commonly used “when-to-reschedule” policies in a rolling horizon setting are the periodic policy, the event-driven policy, and the hybrid policy.

In the periodic policy, the system is monitored at regular intervals and rescheduling is performed at each planned rescheduling point. Periodic policies can be further classified into two strategic types: (1) fixed time interval and (2) variable time interval. In the former, rescheduling is invoked after a fixed time interval (such as at the beginning of every shift, every day, or every week). In the latter, the time interval between two consecutive rescheduling points is variable rather than constant; a common implementation is to determine the rescheduling points according to the percentage of task completion on all machines in the manufacturing system. Intuitively, the variable time interval method is more responsive to the state of the system.

In the event-driven policy (also called the “continuous review” policy), the system is monitored continuously and rescheduling is performed in response to any change (i.e., job arrival) in the system. This may, however, lead to overly frequent rescheduling; hence, in practice the policy is usually triggered after a certain number of job arrivals rather than after each individual one.

The hybrid policy is a strategic combination of the periodic and event-driven policies. Table 5-22 lists the characteristics of the three typical rescheduling policies.
In this table, “PERIODIC” denotes the periodic policy with a fixed time interval, “RATIO” signifies the periodic policy with variable time interval, and “ARRIVAL” expresses the event-driven policy that calibrates rescheduling activities to occur with either every disruption or after a fixed number of disruptions. Finding an effective partial VCMS scheduling approach is difficult due to the rigorous constraints in the mathematical model. Therefore, full scheduling is adopted in terms of “how-to-scheduling.” That is, all of the available unfinished jobs will be rescheduled to generate a new production schedule at each rescheduling point.

5‐37   

Policy

Characteristic

PERIODIC

Reschedule after a fixed time interval

RATIO

Periodic policy with variable time interval

ARRIVAL

Reschedule to every disruption or a certain number of disruptions

Table 5-22. Reactive scheduling policies in a rolling horizon environment.

5.3.3 Comparison of rescheduling policies A comparison of the PERIODIC, RATIO, and ARRIVAL policy performances is conducted by simulating a large set of randomly generated test problems. A suitable rescheduling policy for VCMSs operating in a rolling horizon environment is selected based on the results. 1. Manufacturing system and experiment design The test experiments are setup as follows. The manufacturing system contains 20 machines and 20 workers, the distance between any two machines is randomly generated from [2, 10], the operating cost of each machine per second is randomly generated from [1, 5], and each worker can handle two workstation types. In addition, the length of a time slice is 300 seconds. The inter-arrival time between two consecutive incoming jobs is randomly generated from an exponential distribution with a mean of either 250 seconds or 500 seconds, denoting the short and long inter-arrival interval, respectively. Each job consists of three or four operations, the processing times of which are randomly generated from [20, 40]. The production volume of a job is randomly generated from [30, 50]. The subcontracting cost of each job per unit is randomly generated from [1000, 2000]. The salary of each worker per time slice is randomly generated from [100, 150]. The transportation cost of each job per unit of distance is randomly generated from [1, 3]. The job due date, which is supposed to correlate to the end of a time slice, is calculated by adding the serial number of the time slice in which it is released to a random number from [9, 15] or [12, 20], representing the tight and loose due date constraint respectively. Pilot experiments are conducted in order to restrict the computational effort to a reasonable level and to make the experimental results effective. These are conducted to 5‐38   

determine the values of some factors, including the initial job size and the total number of simulated jobs. Selecting a suitable value for the number of totally simulated jobs causes the system to reach a steady state and thus ensures that the experimental results are effective. At first, the system is run with two different initial job sizes: 5 and 10. It is clear from the results in Figure 5-9 that initial job size does not affect the long-term performance of the manufacturing system. The system reaches a steady state at nearly the same time (after finishing about 60 jobs). Achieving a steady state means that each of the sub-costs, including the average machine operating per job (AVE-M), the average worker salary per job (AVE-W), the average transportation cost per job (AVE-T), the average subcontracting cost per job (AVE-S) and the average total cost per job (AVE-TC), reaches at a steady level. The initial job size in the following experiments takes a value of 5. The number of totally simulated jobs is set at 100 so as to control the computational effort while ensuring suitably effective and reliable experimental results.  AVE‐TC
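The dynamic job stream described above can be generated as in the following sketch. It mirrors the stated random generators (exponential inter-arrival times, three or four operations per job, and due dates tied to time-slice boundaries), but it is not the thesis's experimental code, and the dictionary layout is an assumption made for illustration.

```python
import random

def generate_job_stream(num_jobs, mean_interarrival, due_offset, rng=None):
    """Generate a dynamic job stream for the rolling horizon experiments.

    mean_interarrival: 250 (short) or 500 (long) seconds.
    due_offset: (9, 15) for the tight or (12, 20) for the loose
                due date constraint.
    """
    rng = rng or random.Random()
    slice_len = 300            # length of a time slice in seconds
    jobs, t = [], 0.0
    for j in range(num_jobs):
        # exponential inter-arrival times with the given mean
        t += rng.expovariate(1.0 / mean_interarrival)
        release_slice = int(t // slice_len)
        jobs.append({
            "id": j,
            "release_time": t,
            # three or four operations, processing times from [20, 40]
            "operations": [rng.randint(20, 40)
                           for _ in range(rng.randint(3, 4))],
            "volume": rng.randint(30, 50),
            # the due date coincides with the end of a later time slice
            "due_slice": release_slice + rng.randint(*due_offset),
        })
    return jobs
```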

AVE‐M

AVE‐W

AVE‐T

AVE‐S

average cost per job

30000 25000 20000 15000 10000 5000 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97

0 job number

 

(a)

5‐39   

AVE‐TC

AVE‐M

AVE‐W

AVE‐T

AVE‐S

average cost per job

30000 25000 20000 15000 10000 5000 1 5 9 13 17 21 25 29 33 37 41 45 49 53 57 61 65 69 73 77 81 85 89 93 97

0 job number

 

(b) Figure 5-9. Comparison of two different initial job sizes. (a) initial job size=5. (b) initial job size=10. 2. Rescheduling policy comparative results A large set of experiments are conducted to evaluate the performances of the three “when-to-schedule” policies in a rolling horizon setting. As each of these rescheduling policies has almost the same characteristics in both tight and loose situations, the performance results are obtained by averaging all of the data from the two situations. The PERIODIC policy parameter takes five different values: 3, 6, 9, 12, and 15; that of the RATIO policy takes six different values: 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9; and that of the ARRIVAL policy takes five different values: 2, 4, 6, 8, and 10. Taking short inter-arrival interval and tight due date constraint for example, Figures 5-10 to 5-12 illustrate the characteristics of these three policies. Other problem schemes have similar characteristics. In these figures, A-N means that the system performs rescheduling in response to every N new incoming jobs, P-N indicates that the system invokes rescheduling every N time slices, and R-N denotes that the system performs rescheduling when the shop floor has finished N*100 per cent of the scheduled tasks. In each of these three rescheduling policies, the lower the parameter N, the higher the rescheduling frequency.
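The three "when-to-reschedule" conditions can be expressed as simple predicates; the following is a minimal Python sketch (the function and argument names are illustrative, not taken from the thesis implementation):

```python
# When-to-reschedule triggers for the three policies. Each predicate
# inspects a small snapshot of the shop-floor state.

def arrival_trigger(new_jobs_since_last, N):
    """ARRIVAL (A-N): reschedule after every N new incoming jobs."""
    return new_jobs_since_last >= N

def periodic_trigger(slices_since_last, N):
    """PERIODIC (P-N): reschedule every N time slices."""
    return slices_since_last >= N

def ratio_trigger(finished_tasks, scheduled_tasks, N):
    """RATIO (R-N): reschedule once N*100 per cent of the
    scheduled tasks have been finished."""
    return scheduled_tasks > 0 and finished_tasks / scheduled_tasks >= N
```

For A-N and P-N a lower N directly raises the rescheduling frequency, while for R-N a lower ratio threshold fires earlier within each predictive schedule.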

[Figure 5-10. Performance of the ARRIVAL policy: average cost per job versus job number for A-2, A-4, A-6, A-8, and A-10.]

[Figure 5-11. Performance of the PERIODIC policy: average cost per job versus job number for P-3, P-6, P-9, P-12, and P-15.]

[Figure 5-12. Performance of the RATIO policy: average cost per job versus job number for R-0.4, R-0.5, R-0.6, R-0.7, R-0.8, and R-0.9.]

The defining characteristics of these three rescheduling policies can be derived from the computational results displayed in Figures 5-10 to 5-12:
(1) Performance improves as rescheduling becomes more frequent under all three policies. Under the PERIODIC policy, for example, the average manufacturing cost per job is about 38,000 if rescheduling is performed every 12 time slices, but rises to about 40,000 if rescheduling occurs only once every 15 time slices.
(2) The marginal improvement in system performance diminishes as the rescheduling frequency increases, so it is necessary to strike a balance between rescheduling frequency and system performance. Under the PERIODIC policy, for example, when the rescheduling interval shrinks from 12 time slices to 9 time slices, the average manufacturing cost per job is reduced by about 7,000; when it shrinks from 6 time slices to 3 time slices, the improvement is very small.
(3) All three policies achieve very good system performance, with little difference among them, when the rescheduling frequency is very high. This is easy to understand: all three policies can respond to almost any change rapidly when


rescheduling reaches a high frequency level, which reduces the performance gap among the different rescheduling policies.
(4) The RATIO policy is more stable and less sensitive to the parameter N than the ARRIVAL and PERIODIC policies.
The three policy performances are next compared at the same rescheduling frequency levels. To achieve this, the RATIO policy is taken as the basis and the parameters of the ARRIVAL and PERIODIC policies are adjusted to achieve approximately the same rescheduling frequency. The comparative results are listed in Table 5-23 (at the same rescheduling frequency level, the number of reschedulings performed in the tight situation usually differs from that in the loose situation).

Rescheduling    Tight due date              Loose due date
frequency       A-N     P-N     R-N         A-N     P-N     R-N
Low             29537   30948   29034       25587   26699   25227
Middle          25923   26103   26045       24922   25168   24800
High            25286   25959   25028       24561   24537   24329
(a) Short inter-arrival interval

Rescheduling    Tight due date              Loose due date
frequency       A-N     P-N     R-N         A-N     P-N     R-N
Low             20682   23289   20938       17416   19529   17894
Middle          16247   17824   16384       15422   16088   15500
High            15745   15849   15795       15274   15391   15353
(b) Long inter-arrival interval

Table 5-23. Comparison of rescheduling policies at the same rescheduling frequency levels.

The tabulated results make clear that the RATIO policy performs better than the PERIODIC policy because it is more responsive to the state of the system. Furthermore, the difference between the RATIO and PERIODIC policies becomes more obvious as the rescheduling frequency drops. Meanwhile, there is not much difference between the RATIO and ARRIVAL policy performances. The ARRIVAL policy is responsive to the number of new incoming jobs, and the selection of an appropriate parameter N can prevent too many jobs from clogging up the system. The RATIO policy is responsive to

the status of production resources, and the selection of a suitable parameter N can improve production resource utilization. Hence, a hybrid of the ARRIVAL and RATIO policies is preferable, since it combines their advantages: rescheduling is performed when the number of new incoming jobs reaches a certain threshold, or when a certain percentage of the scheduled tasks have been finished.
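Such a hybrid trigger can be expressed directly; the following is a minimal Python sketch (the parameter names NewJob and R follow the notation used later in Figure 5-13; the function name is illustrative):

```python
def hybrid_trigger(new_jobs_since_last, finished_tasks, scheduled_tasks,
                   NewJob, R):
    """Hybrid ARRIVAL + RATIO policy: reschedule when either NewJob new
    jobs have arrived, or a fraction R of the scheduled tasks has been
    finished, whichever comes first."""
    arrived = new_jobs_since_last >= NewJob
    done = scheduled_tasks > 0 and finished_tasks / scheduled_tasks >= R
    return arrived or done
```

The disjunction lets whichever condition is binding, job pressure or resource availability, drive the rescheduling decision.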

5.4 Production Scheduling in VCMSs under a Comprehensive Dynamic Manufacturing Environment
The dynamic production scheduling problems of VCMSs have so far been studied under random machine breakdowns and worker absenteeisms, and in a rolling horizon environment, respectively. In practice, various disruptions and changes may occur simultaneously on the production floor. Here, dynamic production scheduling for VCMSs is investigated in a more comprehensive dynamic manufacturing environment, where the disruptions include not only random machine breakdowns and worker absenteeisms, but also dynamic job arrivals and changes in production volume. A rescheduling strategy based on the cumulative task delay has been proposed above to deal with machine breakdowns and worker absenteeisms, and a hybrid policy combining the ARRIVAL and RATIO policies shows good performance in dynamic job arrival situations. In practice, the production volumes of some jobs may also change during the execution of production schedules. If the system does not adapt to these changes rapidly, supply and demand will become mismatched, as the system may produce either more or fewer products than customers require. Hence, the strategy adopted in this research is to deal with changes in production volume by performing rescheduling immediately, so as to properly adjust the production information. The integrated procedure for VCMS production scheduling in a comprehensive dynamic manufacturing environment is presented in Figure 5-13. Rescheduling in this procedure is triggered by any one of the following four situations:
(1) when the cumulative task delay exceeds the critical cumulative task delay;
(2) when a certain number of new jobs have arrived at the manufacturing system;

(3) when a certain percentage of scheduled tasks have been finished; and
(4) when the production volume of a job has changed.

Step 1. Set the value of each parameter, including the critical cumulative task delay D* and the parameters of the RATIO and ARRIVAL policies, denoted by R and NewJob respectively.
Step 2. Generate a predictive schedule for the currently available jobs. Set s = 1, t = 1, D_{s,t} = 0, newjob = 0, and r = 0 (r is the percentage of finished tasks in the predictive schedule).
Step 3. (1) Check whether a new job arrives at this time; if yes, newjob = newjob + 1. (2) Calculate r for the predictive schedule. (3) Check whether the production volume of any job has changed. (4) Check whether a machine breakdown or worker absence occurs at this time; if yes, update D_{s,t}.
Step 4. If newjob >= NewJob, or r >= R, or D_{s,t} >= D*, or the production volume of a job has changed, go to Step 5; otherwise, go to Step 6.
Step 5. Reschedule all available unfinished jobs to obtain a new predictive schedule, and set r = 0, newjob = 0, and D_{s,t} = 0. Go to Step 8.
Step 6. Check whether a machine breakdown or worker absence has occurred. If yes, go to Step 7; otherwise, go to Step 8.
Step 7. Handle the machine breakdown or worker absence with the right-shift policy.
Step 8. t = t + 1. If t > PH, terminate; otherwise, go to Step 3.

Figure 5-13. The procedure of the proposed strategy.

A large set of experiments are randomly generated to evaluate the performance of the proposed strategy for VCMSs operating in such a comprehensive dynamic manufacturing environment. Only some important factors are varied in these experiments in order to limit the computational effort, including the job due date, the duration of machine breakdowns and worker absenteeisms, the critical cumulative task delay, the parameters of the ARRIVAL and RATIO policies, and the inter-arrival time of jobs. The

characteristics of other production information (such as machine operating costs, worker salaries, and operation processing times) are kept at the same levels as in the previous experiments. The inter-arrival time between two consecutive incoming jobs is generated from an exponential distribution whose mean assumes two different values: 250 seconds and 500 seconds. The job due date is calculated by adding the serial number of the time slice in which the job is released to a random number from the range [9, 15] or [12, 20]. The working period between two consecutive breakdowns of a machine, or between two absences of the same worker, is randomly generated from an exponential distribution with a mean of 1,000 seconds. The duration of a breakdown or absence is randomly generated from the uniform distribution (β1*p̄, β2*p̄), where p̄ is the expected processing time of an operation (i.e., 30 seconds in this research). Here (β1, β2) assumes two different values: (0.5, 1) and (1, 2); a higher (β1, β2) value means more severe disruptions. Each job in the manufacturing system may incur a change in its production volume with probability 0.1. When a change occurs, the production volume of the job is increased (or decreased) by a half with probability 0.2, increased (or decreased) by one-quarter with probability 0.2, or reduced to zero with probability 0.2, denoting different levels of change. The schemes for generating test problems are listed in Table 5-24.

Scheme no.   Inter-arrival of jobs   Due date   Duration of breakdown/absence
1            250                     [9, 15]    [0.5, 1]
2            250                     [9, 15]    [1, 2]
3            250                     [12, 20]   [0.5, 1]
4            250                     [12, 20]   [1, 2]
5            500                     [9, 15]    [0.5, 1]
6            500                     [9, 15]    [1, 2]
7            500                     [12, 20]   [0.5, 1]
8            500                     [12, 20]   [1, 2]

Table 5-24. Schemes for generating test problems.

In the hybrid strategy for dealing with dynamic job arrivals, two different parameter settings are adopted. In the first case, the parameters for the RATIO policy and the

ARRIVAL policy are 0.5 and 4 respectively, representing a high rescheduling frequency; in the other case, the values of these two parameters are 0.8 and 8 respectively, representing a low rescheduling frequency. In addition, two values of the critical cumulative task delay are considered: 300 and 600. The parameter schemes for the proposed rescheduling strategy are listed in Table 5-25.

No.   (RATIO_N, ARRIVAL_N)   D*
1     (0.5, 4)               300
2     (0.5, 4)               600
3     (0.8, 8)               300
4     (0.8, 8)               600

Table 5-25. Parameter schemes for the rescheduling strategy.
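The integrated procedure of Figure 5-13, parameterized as in Table 5-25, can be sketched as an event-driven loop; the following hedged Python sketch stubs out the schedule, disruption, and cost details behind caller-supplied callbacks (all names are illustrative):

```python
def run_horizon(PH, D_star, R, NewJob, events, reschedule, right_shift):
    """Skeleton of the Figure 5-13 procedure. `events` maps a time slice t
    to a dict with optional keys: 'new_jobs', 'volume_change' (bool),
    'delay' (task delay caused by a breakdown/absence in slice t), and
    'finished_ratio' (fraction of scheduled tasks finished)."""
    D = 0                # cumulative task delay D_{s,t}
    newjob = 0
    reschedules = 0
    for t in range(1, PH + 1):
        ev = events.get(t, {})
        newjob += ev.get("new_jobs", 0)          # Step 3 (1)
        r = ev.get("finished_ratio", 0.0)        # Step 3 (2)
        delay = ev.get("delay", 0)               # Step 3 (4)
        D += delay
        if (newjob >= NewJob or r >= R or D >= D_star
                or ev.get("volume_change", False)):
            reschedule(t)                        # Step 5: new predictive schedule
            D, newjob, reschedules = 0, 0, reschedules + 1
        elif delay > 0:
            right_shift(t)                       # Step 7: keep schedule feasible
    return reschedules
```

With D* = 600, R = 0.8, and NewJob = 4 (parameter Scheme 2), a batch of four new jobs triggers one reschedule, and two 400-second delays trigger another once their sum crosses the threshold.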

Five test problems are randomly generated under each problem generation scheme. The performance of the proposed rescheduling strategy is obtained by averaging the computational results of executing each test problem five times. The experimental results are listed in Table 5-26.

Problem scheme no. | Parameter scheme no. | Total reschedules | By task delay | By rolling horizon | By volume change | Realized manufacturing cost
1 | 1 | 47  | 19 (40.43%) | 24 (51.06%) | 4 (8.51%)  | 2728200
1 | 2 | 35  | 2 (5.71%)   | 29 (82.86%) | 4 (11.43%) | 2755688
1 | 3 | 31  | 17 (54.84%) | 10 (32.26%) | 4 (12.90%) | 2797013
1 | 4 | 20  | 4 (20.00%)  | 12 (60.00%) | 4 (20.00%) | 2898788
2 | 1 | 67  | 47 (70.15%) | 16 (23.88%) | 4 (5.97%)  | 2882230
2 | 2 | 38  | 9 (23.68%)  | 25 (65.79%) | 4 (10.53%) | 2923533
2 | 3 | 55  | 46 (83.64%) | 5 (9.09%)   | 4 (7.27%)  | 2882568
2 | 4 | 27  | 17 (62.96%) | 6 (22.22%)  | 4 (14.81%) | 2934684
3 | 1 | 74  | 54 (72.97%) | 16 (21.62%) | 4 (5.41%)  | 2674752
3 | 2 | 41  | 17 (41.46%) | 20 (48.78%) | 4 (9.76%)  | 2731548
3 | 3 | 61  | 54 (88.52%) | 3 (4.92%)   | 4 (6.56%)  | 2840853
3 | 4 | 30  | 19 (63.33%) | 7 (23.33%)  | 4 (13.33%) | 2858270
4 | 1 | 104 | 92 (88.46%) | 8 (7.69%)   | 4 (3.85%)  | 2922926
4 | 2 | 48  | 24 (50.00%) | 20 (41.67%) | 4 (8.33%)  | 2983106
4 | 3 | 89  | 82 (92.13%) | 3 (3.37%)   | 4 (4.49%)  | 2954458
4 | 4 | 34  | 24 (70.59%) | 6 (17.65%)  | 4 (11.76%) | 2985428
5 | 1 | 63  | 11 (17.46%) | 43 (68.25%) | 9 (14.29%) | 2009195
5 | 2 | 59  | 4 (6.78%)   | 46 (77.97%) | 9 (15.25%) | 2185807
5 | 3 | 41  | 13 (31.71%) | 19 (46.34%) | 9 (21.95%) | 2383560
5 | 4 | 35  | 5 (14.29%)  | 21 (60.00%) | 9 (25.71%) | 2484153
6 | 1 | 73  | 34 (46.58%) | 31 (42.47%) | 8 (10.95%) | 2244003
6 | 2 | 69  | 22 (31.88%) | 39 (56.52%) | 8 (11.59%) | 2248124
6 | 3 | 54  | 32 (59.26%) | 14 (25.93%) | 8 (14.81%) | 2465891
6 | 4 | 36  | 11 (30.56%) | 17 (47.22%) | 8 (22.22%) | 2492104
7 | 1 | 75  | 39 (52.00%) | 28 (37.33%) | 8 (10.67%) | 1913466
7 | 2 | 54  | 12 (22.22%) | 34 (62.96%) | 8 (14.81%) | 1955653
7 | 3 | 47  | 25 (53.19%) | 14 (29.79%) | 8 (17.02%) | 1984462
7 | 4 | 30  | 7 (23.33%)  | 15 (50.00%) | 8 (26.67%) | 2135787
8 | 1 | 78  | 46 (58.97%) | 24 (30.77%) | 8 (10.26%) | 2090714
8 | 2 | 55  | 17 (30.91%) | 30 (54.55%) | 8 (14.54%) | 2099202
8 | 3 | 70  | 56 (80.00%) | 6 (8.57%)   | 8 (11.43%) | 2128716
8 | 4 | 42  | 24 (57.14%) | 10 (23.81%) | 8 (19.05%) | 2226057

Table 5-26. Performance of the proposed rescheduling strategy in the comprehensive dynamic manufacturing environment.

In Table 5-26, the ratio in brackets is calculated by dividing the corresponding number of reschedulings by the total number of reschedulings in the experiments. For instance, the rescheduling ratio attributed to task delay is obtained by dividing the number of reschedulings caused by task delay by the total number of reschedulings. The characteristics of the proposed rescheduling strategy in the comprehensive dynamic manufacturing environment, which are in accordance with the results in previous sections, are summarized as follows:
(1) A high critical cumulative task delay value leads to a low number (ratio) of reschedulings caused by task delay. In problem Scheme 1, 19 reschedulings are caused by task delay under parameter Scheme 1, whereas only two occur under parameter Scheme 2.


(2) Keeping all other parameters the same, a high critical cumulative task delay value leads to a high realized manufacturing cost. Taking problem Scheme 1 for example, the realized manufacturing cost is 2,728,200 under parameter Scheme 1, while it is 2,755,688 under parameter Scheme 2.
(3) A high value of the parameters in the hybrid policy for the rolling horizon leads to a low number (ratio) of reschedulings caused by dynamic job arrivals. Taking problem Scheme 1 for example, 24 reschedulings are caused by dynamic job arrivals under parameter Scheme 1, whereas that number falls to ten under parameter Scheme 3.
(4) The more severe the machine breakdowns and worker absenteeisms, the higher the number (ratio) of reschedulings caused by task delay, and the higher the realized manufacturing cost. Under parameter Scheme 1, for instance, 19 reschedulings are caused by task delay in problem Scheme 1, with a final realized manufacturing cost of 2,728,200, whereas 47 such reschedulings occur in problem Scheme 2 and the realized manufacturing cost increases to 2,882,230.
(5) Keeping all other parameters the same, the longer the inter-arrival time, the lower the realized manufacturing cost. This shows that the impact of disruptions is more severe in intensive manufacturing environments.
(6) Keeping all other parameters the same, tighter job due dates lead to fewer reschedulings caused by task delay. In tight situations, more jobs are subcontracted and relatively few jobs enter the manufacturing process and are thus affected by machine breakdowns and worker absenteeisms, which leads to a low cumulative task delay and therefore a low number of reschedulings caused by task delay.
(7) Keeping all other parameters the same, a high value of the parameters in the hybrid policy for dynamic job arrivals leads to higher realized manufacturing costs.
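The bracketed ratios in Table 5-26 can be reproduced directly from the counts; for instance, for problem Scheme 1 under parameter Scheme 1 (47 reschedulings in total, 19 of them triggered by task delay), a one-line sketch:

```python
def reschedule_ratio(count, total):
    """Share of reschedulings attributable to one trigger, in per cent,
    rounded to two decimal places as in Table 5-26."""
    return round(100 * count / total, 2)
```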
In order to further verify the proposed cumulative task delay based rescheduling strategy for random machine breakdowns and worker absenteeisms, the characteristics of machine breakdowns and worker absenteeisms in the comprehensive dynamic manufacturing environment are presented in Table 5-27. Generally, the characteristics are the same as those in Section 5.2: (1) the impact of a portion of the disruptions can be totally absorbed by idle time in the production schedule; (2) more severe disruptions lead to a higher rescheduling frequency; (3) the portion of disruptions absorbed by idle time becomes larger when the disruptions are slight; and (4) a low value of the critical cumulative task delay results in a high rescheduling frequency. All of the experimental results demonstrate that the VCMS is an effective and efficient manufacturing system in both deterministic and dynamic manufacturing environments.

Problem scheme no. | Parameter scheme no. | No. of breakdowns/absences | Rescheduling no. | Right-shift no. | Absorption no.
1 | 1 | 555 | 19 | 254 | 282
1 | 2 | 562 | 2  | 266 | 294
1 | 3 | 561 | 17 | 245 | 299
1 | 4 | 552 | 4  | 231 | 317
2 | 1 | 539 | 47 | 326 | 166
2 | 2 | 525 | 9  | 338 | 178
2 | 3 | 509 | 46 | 292 | 171
2 | 4 | 511 | 17 | 290 | 204
3 | 1 | 560 | 54 | 288 | 218
3 | 2 | 567 | 17 | 278 | 272
3 | 3 | 576 | 54 | 281 | 241
3 | 4 | 573 | 19 | 286 | 268
4 | 1 | 534 | 92 | 320 | 121
4 | 2 | 541 | 24 | 351 | 164
4 | 3 | 539 | 82 | 316 | 141
4 | 4 | 556 | 24 | 343 | 189
5 | 1 | 718 | 11 | 268 | 439
5 | 2 | 729 | 4  | 295 | 430
5 | 3 | 668 | 13 | 243 | 412
5 | 4 | 663 | 5  | 223 | 435
6 | 1 | 694 | 34 | 367 | 293
6 | 2 | 687 | 22 | 393 | 272
6 | 3 | 660 | 32 | 349 | 279
6 | 4 | 629 | 11 | 329 | 289
7 | 1 | 769 | 39 | 318 | 412
7 | 2 | 748 | 12 | 354 | 382
7 | 3 | 723 | 25 | 283 | 415
7 | 4 | 714 | 7  | 311 | 396
8 | 1 | 707 | 46 | 392 | 269
8 | 2 | 712 | 17 | 438 | 257
8 | 3 | 701 | 56 | 364 | 281
8 | 4 | 708 | 24 | 388 | 296

Table 5-27. Characteristics of machine breakdowns and worker absenteeisms in the comprehensive dynamic manufacturing environment.
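The bookkeeping behind Table 5-27 classifies each breakdown or absence according to whether its delay is absorbed by idle time, handled by the right-shift policy, or (via the cumulative task delay) triggers a reschedule. The following is a hedged Python sketch of that accounting, with the schedule details reduced to an "available idle time" per disruption (names and the per-disruption idle model are illustrative assumptions):

```python
def classify_disruptions(disruptions, D_star):
    """Each disruption is a pair (duration, idle_available), where
    idle_available is the idle time at the affected resource that can
    absorb the delay. Returns (absorbed, right_shifted, reschedules),
    the three outcome counts reported in Table 5-27."""
    absorbed = shifted = reschedules = 0
    D = 0                                    # cumulative task delay
    for duration, idle_available in disruptions:
        delay = max(0, duration - idle_available)
        if delay == 0:
            absorbed += 1                    # idle time absorbs the impact
            continue
        D += delay
        if D >= D_star:
            reschedules += 1                 # threshold exceeded: reschedule
            D = 0                            # the new schedule resets the delay
        else:
            shifted += 1                     # right-shift keeps feasibility
    return absorbed, shifted, reschedules
```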

5.5 Chapter Summary
Scheduling theory was already well developed prior to this research. In practice, however, there are only "rescheduling" problems rather than "scheduling" problems, owing to the great variety of disruptions that occur on the production floor. This chapter has presented a study of the production scheduling problems of VCMSs in dynamic manufacturing environments, so as to narrow the gap between scheduling theory and scheduling practice. The important issues in the dynamic production scheduling field are "how-to-reschedule" and "when-to-reschedule." How-to-reschedule determines the means of generating new production schedules in response to disruptions. In this research, it takes the form of complete rescheduling, owing to the difficulty of finding a suitable schedule repair approach for VCMSs. Furthermore, the robust predictive-reactive approach is adopted in order to reduce the deviation between the new production schedule and the original one. When-to-reschedule determines suitable rescheduling time points. Various rescheduling strategies are developed in this research to handle different types of disruptions, including machine breakdowns, worker absences, dynamic job arrivals, and changes in production volume. A strategy based on the cumulative task delay and the characteristics of VCMSs is proposed to deal with machine breakdowns and worker absences. The production schedule naturally contains some idle time on some production


resources, owing to the processing rate requirements of VCMSs, and this idle time can be utilized later to mitigate the impact of disruptions. When a machine breakdown or worker absence occurs, the system evaluates its impact by calculating the task delay it causes. If the cumulative task delay exceeds a pre-defined threshold, rescheduling is performed; otherwise, the right-shift policy is employed to maintain the feasibility of the production schedule. A hybrid rescheduling strategy based on the number of newly arrived jobs and the percentage of finished tasks is developed to handle dynamic job arrivals in a rolling horizon environment. Under this policy, rescheduling is performed when a certain number of new jobs have arrived or when the system has finished a certain percentage of the scheduled tasks. An immediate rescheduling policy is adopted in response to changes in production volume, aiming to rapidly recalibrate the system status. A large set of randomly generated test problems are simulated to evaluate the performances of these policies, and the factors affecting system performance and rescheduling frequency have been investigated in detail. There is a general trade-off between system performance and rescheduling frequency in dynamic production scheduling: a high rescheduling frequency can improve system performance, but it also increases schedule nervousness. Furthermore, the results show that a large portion of the disruptions caused by machine breakdowns and worker absenteeisms can be totally absorbed by the available idle time (about 50-60 per cent when the disruptions are slight). This absorption capability makes it practical for the manufacturing industry to adopt VCMSs.


CHAPTER 6 PARALLEL IMPLEMENTATION OF ACPSO ON GPU WITH CUDA

6.1 Introduction
The hybrid ACPSO algorithm has been used throughout the previous chapters to solve VCMS production scheduling problems in single-period, multi-period, and dynamic manufacturing environments. The main framework of ACPSO is built upon discrete particle swarm optimization, a population-based meta-heuristic. The computation time ACPSO requires to locate the global optimal solution grows with the swarm size (a larger swarm requires a longer computation time). Hence, the computational speed of serial ACPSO may be unacceptable in practice, especially for large problems, or in dynamic manufacturing environments where adjustments must be made as soon as possible. A faster means of implementing ACPSO is therefore needed to meet the requirements of practical applications. NVIDIA introduced a general purpose parallel computing architecture named CUDA in 2006. Its purpose is to solve complex computational problems more efficiently than on a CPU. To perform parallel computation with CUDA, it is necessary to install a compatible Graphics Processing Unit (GPU), the CUDA Toolkit, and the CUDA SDK in a computer, and then develop parallel programs in the C language. A CUDA-compatible GPU consists of a number of multithreaded streaming multiprocessors, each of which can execute several thread blocks simultaneously. Discrete particle swarm optimization, for its part, is a population-based algorithm in which each particle performs the same process in every iteration. ACPSO is thus intrinsically parallel and can be effectively implemented on a GPU with CUDA so as to improve computational speed. This chapter first introduces the CUDA architecture and then presents an approach for effectively implementing ACPSO on a GPU with CUDA. The factors affecting the speed-up ratio of the parallel implementation are also analyzed in order to facilitate the understanding of CUDA.

6.2 The CUDA Architecture
Driven by the market demand for real-time, high-performance 3D graphics, the programmable GPU has evolved into a highly parallel, multithreaded, streaming processor with enormous computational horsepower and high memory bandwidth. Generally, the GPU is suited to problems that can be expressed as data-parallel computations, that is, where the same program is run on a large number of data elements in parallel with high arithmetic intensity (NVIDIA 2009). In November 2006, NVIDIA introduced a general purpose parallel computing architecture, called CUDA, to solve a great variety of complex computational problems more efficiently than on a regular CPU. CUDA can be used by installing a compatible GPU, the CUDA Toolkit, and the CUDA SDK in a computer, and parallel programs are developed in the C language (other languages, such as FORTRAN and C++, will be supported in the future). CUDA contains three key abstractions at its core: the hierarchy of thread groups, the hierarchy of memories, and barrier synchronization. These abstractions provide fine-grained data parallelism and thread parallelism, nested within coarse-grained data parallelism and task parallelism. Programmers decompose the problem at hand into many independent sub-problems and then solve them in parallel; each sub-problem may be further subdivided into many smaller tasks that can be solved cooperatively. In CUDA terms, each sub-problem constitutes a thread block, and each task represents a thread. The program that describes the instructions to be executed by each thread is called a kernel, which is the minimum function executed by the GPU. A kernel is defined using the declaration specifier __global__ and invoked using the <<<numBlocks, threadsPerBlock>>> execution configuration syntax.
1. Thread hierarchy
In CUDA, each thread that executes a kernel is given a unique thread ID, which is a 3-component vector.
Thus a thread can be identified using a one-dimensional, two-dimensional, or three-dimensional thread index, forming a one-dimensional, two-dimensional, or three-dimensional thread block.
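For the common case of a one-dimensional grid of one-dimensional blocks, each thread's unique global index is computed from its block index, the block size, and its thread index. The following small Python sketch models the arithmetic that a CUDA kernel performs in C:

```python
def global_thread_id(block_idx, block_dim, thread_idx):
    """Global index of a thread in a 1D grid of 1D blocks,
    i.e. blockIdx.x * blockDim.x + threadIdx.x in CUDA C."""
    return block_idx * block_dim + thread_idx
```

For example, thread 5 of block 2, with 256 threads per block, has global index 2 * 256 + 5 = 517; a kernel typically uses this index to select the data element each thread processes.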


Figure 6-1. Grid of thread blocks.

Threads within the same block can cooperate by sharing data through the block's shared memory, the size of which is 16,384 bytes (16 KB). On current GPUs, a thread block can contain up to 512 threads. CUDA also provides a simple but efficient mechanism for thread synchronization within a block: __syncthreads(). This function forces all threads to wait until every thread in the block has reached this point. However, there is no mechanism to synchronize the execution of threads in different blocks. In addition, a kernel can be executed by multiple equally-shaped thread blocks, so the total number of threads is equal to the number of blocks times the number of threads per block. These blocks are organized to form either a one-dimensional or two-dimensional grid; Figure 6-1 illustrates the structure of a grid. Moreover, these blocks are required to execute independently: it must be possible to execute them in any order, in series or in parallel. This independence requirement allows thread blocks to be scheduled in any order across any number of cores, and gives programmers the possibility of writing code that scales with the number of cores.
2. Memory hierarchy
CUDA threads can access data from several different memory spaces during execution. Each thread has its own registers and local memory. Each thread block has a shared memory, which is visible to all threads within the block and has the same lifetime as the block. There are two read-only memory spaces accessible by all threads, namely constant memory and texture memory. In addition, all threads can access the global memory. The types of memory utilized are thus: global, register, shared, local per-thread, constant, and texture. Global memory, the largest memory space on the GPU, is used to communicate information between the host and the device. However, the speed of reading and writing operations in global memory is very low, usually requiring 400-600 clock cycles, because it is not cached. Register memory is the fastest on-chip memory and is used to store the threads' automatic variables. The number of 32-bit registers on each multiprocessor is capped at 16,384; hence, the number of blocks executed on a multiprocessor is reduced if the threads need more registers. Shared memory is also a type of fast on-chip memory, with almost the same reading and writing speed as registers: accessing it usually requires only two clock cycles. Its major limitation, however, is that the size of the shared memory per block is only 16,384 bytes. Local per-thread memory is used to store large automatic variables that do not fit into registers. It is not cached, so accessing local memory is as time-consuming as accessing global memory. Constant memory is a cached memory space accessible by all threads. It is read-only from the device and its size is limited to 65,536 bytes per GPU. Accessing constant memory takes the same time as

accessing global memory in the event of a cache miss, but it is much faster otherwise. Finally, the texture memory space is cached, so a texture fetch costs one read from device memory only on a cache miss; otherwise it costs just one read from the texture cache. The texture cache is optimized for 2D spatial locality, so threads of the same warp that read addresses close together achieve the best performance.
3. Global memory access coalescing
As accessing global memory is a very slow process, it may become a bottleneck to achieving desirable computation speeds. To overcome this deficiency, NVIDIA introduced a mechanism called coalescing, which allows the threads of a half-warp to read several data cells in a single operation if certain requirements are met. On devices with a computational capability of 1.0 or 1.1, the global memory accesses of all threads of a half-warp are coalesced into one or two memory transactions if the following three requirements are satisfied (NVIDIA 2009):
(a) Threads must access either 4-byte words (resulting in one 64-byte memory transaction), 8-byte words (resulting in one 128-byte memory transaction), or 16-byte words (resulting in two 128-byte memory transactions);
(b) All 16 words must lie in the same segment, which is of a size equal to the memory transaction size (or twice the memory transaction size when accessing 16-byte words); and
(c) Threads must access the words in sequence, i.e., the kth thread in the half-warp must access the kth word.
If a half-warp does not fulfill all three requirements, a separate memory transaction is issued for each thread, resulting in a significant throughput reduction. On devices with a computational capability of 1.2 or higher, the global memory accesses of all threads of a half-warp are coalesced into a single transaction as soon as the words accessed lie in the same segment of size equal to 32 bytes if all threads access 1-byte words, 64 bytes if all threads access 2-byte words, or 128 bytes if all threads

access 4-byte or 8-byte words. Coalescing is then achieved for any pattern of addresses requested by the half-warp, including patterns in which multiple threads access the same address.
4. GPU characteristics used in this research
The GeForce GTX 260 is the GPU used in this research. It has a computational capability of 1.3, and its main characteristics are as follows:
(a) It has 24 streaming multiprocessors;
(b) The maximum number of threads in a block is 512, and the maximum sizes of the x-, y-, and z-dimensions of a thread block are 512, 512, and 64 respectively;
(c) The maximum size of each dimension of a grid of thread blocks is 65,535;
(d) The warp size is 32 threads;
(e) The number of registers per multiprocessor is 16,384;
(f) The amount of shared memory available per multiprocessor is 16 KB, organized into 16 banks;
(g) The total amount of constant memory is 64 KB;
(h) The total amount of local memory per thread is 16 KB, and the cache working set for constant memory is 8 KB per multiprocessor;
(i) The maximum number of active blocks per multiprocessor is 8;
(j) The maximum number of active warps per multiprocessor is 32; and
(k) The maximum number of active threads per multiprocessor is 1,024.
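These per-multiprocessor limits jointly determine how many blocks can be resident on a multiprocessor at once. The following back-of-the-envelope Python sketch uses the GTX 260 figures above; it is only an estimate, since real register and shared-memory allocation is granular:

```python
def blocks_per_multiprocessor(threads_per_block, regs_per_thread,
                              smem_per_block,
                              regs_per_sm=16384, smem_per_sm=16384,
                              max_blocks=8, max_threads=1024):
    """Estimate resident blocks per multiprocessor as the tightest of
    the block-count, thread-count, register, and shared-memory limits
    (GTX 260 defaults, compute capability 1.3)."""
    limits = [max_blocks, max_threads // threads_per_block,
              regs_per_sm // (threads_per_block * regs_per_thread)]
    if smem_per_block > 0:
        limits.append(smem_per_sm // smem_per_block)
    return max(0, min(limits))
```

With 256-thread blocks using 16 registers per thread and 4 KB of shared memory per block, the thread, register, and shared-memory limits each allow 4 blocks, so 4 blocks (1,024 threads) can be resident per multiprocessor.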

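The per-multiprocessor limits listed in item 4 above interact with one another. As a hedged sketch only: the following estimate assumes hypothetical per-thread register and per-block shared-memory usage, and ignores the allocation granularity applied by real hardware:

#include <algorithm>

// Estimates active blocks per multiprocessor on the GTX 260 from the
// limits listed above. regs_per_thread and smem_per_block are
// hypothetical workload figures; real hardware rounds allocations up
// to a granularity, which this sketch ignores.
int active_blocks(int threads_per_block, int regs_per_thread,
                  int smem_per_block) {
    const int REGS_PER_SM = 16384;  // registers per multiprocessor
    const int SMEM_PER_SM = 16384;  // 16KB shared memory
    const int MAX_BLOCKS  = 8;      // active-block limit
    const int MAX_THREADS = 1024;   // active-thread limit
    int b = MAX_BLOCKS;
    b = std::min(b, REGS_PER_SM / (threads_per_block * regs_per_thread));
    b = std::min(b, SMEM_PER_SM / smem_per_block);
    b = std::min(b, MAX_THREADS / threads_per_block);
    return b;
}

For instance, 256-thread blocks using 16 registers per thread and 4KB of shared memory per block are capped at four active blocks by the register, shared-memory, and thread limits alike.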

6.3 Parallel Implementation of ACPSO on GPU with CUDA

Without loss of generality, the parallel approach of implementing ACPSO on the GPU with CUDA is illustrated with the mathematical model for single-period VCMS production scheduling problems developed in Chapter 3. The procedure of the ACPSO algorithm is summarized in Figure 6-2.

Step 1. Initialization.
(1) Initialize the positions and velocities of all particles.
(2) Perform the first evaluation of the fitness function.
(3) Initialize the personal best solution and the global best solution.
(4) Initialize the pheromone values of the job production sequence.

Step 2. Iteration process
For (i = 0; i < tmax; i++)
    For each particle:
        (1) Update the velocity of the job production sequence;
        (2) Generate a new job production sequence and perform the local update of pheromone values;
        (3) Update the velocity of the machine assignment;
        (4) Generate a new machine assignment for each job;
        (5) Update the velocity of the worker assignment;
        (6) Generate a new worker assignment for each job;
        (7) Check consistency for each job according to the job production sequence. If a violation occurs, backtrack; otherwise continue to the next job until all jobs have been scheduled;
        (8) Evaluate the fitness function of this particle. Update the personal best solution and the global best solution.

If all particles have finished the iteration, perform a global update of pheromone values for the job production sequence.

Step 3. Return the global best solution as the final result.

Figure 6-2. The ACPSO procedure.
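Each "update the velocity" step in Figure 6-2 follows the particle swarm template. As a minimal sketch only, here is the canonical continuous PSO velocity update with illustrative coefficient values; ACPSO's actual discrete update rules for sequences and assignments are defined in the earlier chapters:

// Canonical PSO velocity update: inertia plus attraction toward the
// personal best and the global best. w, c1, c2 are illustrative
// values; r1 and r2 are random numbers in [0, 1) supplied by the
// caller.
double update_velocity(double v, double x, double pbest, double gbest,
                       double r1, double r2,
                       double w = 0.7, double c1 = 1.5, double c2 = 1.5) {
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x);
}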

In ACPSO, the co-dependence between the iteration processes of the particles lies in two aspects. First, the information of the global best solution must be shared among the particles so that they can update their velocities and positions. Second, all the particles share the pheromone values of the job production sequence to update the production priority of each job.

The most natural approach to removing the dependence between the particles’ updates is to divide ACPSO into as many threads as there are particles in the swarm. Each thread represents a particle and performs the iteration process for that particle. To avoid the time-consuming process of accessing global memory, all of the data related to a particle is stored in the local registers of the thread representing the particle. The information of the global best solution and the pheromone values of the job production sequence are kept in the shared memory of the block so that they can be accessed quickly by all the threads.

However, some restrictions limit the application of this approach in practice. First, the registers available per thread are usually not sufficient to store all of the data related to a particle, so time-consuming use of local memory and global memory becomes necessary. Second, as shared memory can only be accessed by threads within the same block, all threads in this approach must be organized into one thread block, which restricts the maximum number of particles to 512. This may not be enough for common practical problems of large size. Mussi et al. (2011) adopted this parallel approach to solve some benchmark problems; when the dimension of the problem exceeded 10, the registers were insufficient to store all of a particle’s data even in simple problems. This shows that while the approach is simple to conceive, it is of little practical use.

There is another, more promising approach that takes full advantage of the potential parallel computation capability of the GPU with CUDA.
It is possible to divide the main stages of ACPSO into separate tasks, each of which can be implemented in parallel on the GPU. In this way, each stage is organized as a kernel, and optimization is achieved by iterating the basic kernels constituting one ACPSO generation. Each kernel represents one stage of the algorithm that can be implemented in parallel on the GPU, and the sequential activation of these kernels must be maintained. Thus each kernel needs to load all of the results from its preceding kernel, and transfer its computation results to its succeeding kernel. The method of sharing data among different kernels in CUDA is to store them in global memory. In order to speed up the reading and writing operations on global memory, it is necessary to take advantage of the GPU’s coalescing capability as much as possible. The details of this parallel implementation approach are introduced as follows.
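The kernel-chaining idea can be sketched with a CPU analogue, in which a shared buffer plays the role of global memory and two hypothetical stage functions stand in for the ACPSO kernels:

#include <vector>

// CPU analogue of chaining per-stage kernels through global memory:
// each stage reads the buffer left by its predecessor and writes its
// results back for the next stage. The two stage functions are
// hypothetical stand-ins for the ACPSO kernels.
using Buffer = std::vector<int>;

void stage_a(Buffer& g) { for (int& v : g) v += 1; }  // e.g. velocity update
void stage_b(Buffer& g) { for (int& v : g) v *= 2; }  // e.g. position update

// One "generation": sequential activation of the stages, each of which
// is internally parallel over the buffer's elements on the GPU.
Buffer one_generation(Buffer g) {
    stage_a(g);
    stage_b(g);
    return g;
}

The sequential activation of the stages is the host's responsibility, while the loop body inside each stage is what the GPU executes in parallel.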

6.3.1 Global memory data organization

Figure 6-3. Global memory data organization.


A complete particle in a single-period VCMS production scheduling problem has three constituent parts: a job production sequence, a workstation assignment for each job, and a worker assignment for each job. These three parts can be updated independently in every ACPSO generation, and each of them can be represented with a vector as illustrated in Figure 6-3. The job production sequencing information for all of the particles is placed together in global memory, as are the workstation and worker assignments. The iteration processes of the job production sequence, the workstation assignment and the worker assignment can be performed independently, and the reading/writing activities are conducted on three contiguous areas of the global memory. In Figure 6-3, N represents the number of jobs, and K indicates the number of particles. 
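The Figure 6-3 layout can be sketched as an indexing helper: three contiguous regions of global memory, each holding K particles of N elements. The function name and the row-per-particle linearization within each region are assumptions about how the figure's vectors are flattened:

// Sketch of the Figure 6-3 layout: three contiguous regions, each
// K particles x N jobs. offset() maps (particle k, job n) to a flat
// index; the row-per-particle ordering is an illustrative assumption.
constexpr int N = 4;  // number of jobs (illustrative value)
constexpr int K = 3;  // number of particles (illustrative value)

enum Part { SEQUENCE = 0, WORKSTATION = 1, WORKER = 2 };

int offset(Part part, int k, int n) {
    return static_cast<int>(part) * K * N + k * N + n;
}

Because each region is contiguous and consecutive elements of a particle are adjacent, threads working on neighboring elements read from neighboring addresses, which favors the coalescing conditions described earlier.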

6.3.2 Initialization stage

The tasks in the initialization stage include initializing the positions of all of the particles in the swarm (i.e., the job production sequence, the workstation assignment, and the worker assignment), initializing the velocities of all of the particles (also for the three constituent parts), initializing the personal best solution of each particle, finding the global best solution of the swarm, and initializing the pheromone values of the job production sequence. Each of these tasks can be organized as a kernel in CUDA so as to improve computational speed.

(1) Kernel for initializing the job production sequence of all particles
The number of threads in this kernel is equal to the number of particles in the swarm. Each thread represents a particle and randomly generates a job production sequence.

(2) Kernel for initializing the workstation assignment of all particles
The number of threads in this kernel is equal to the number of operations of all of the jobs under consideration. Each thread represents an operation and randomly assigns a suitable workstation to it.

(3) Kernel for initializing the worker assignment of all particles


This kernel is similar to the kernel for initializing the workstation assignments. The difference is that each thread in this kernel is designed to assign a skilled worker to the operation that the thread represents.

(4) Kernels for initializing the velocities of all particles
The velocity of each particle is also tripartite, with separate velocities for the job production sequence, the workstation assignment, and the worker assignment. Each of the initialization processes for these three parts is organized as a kernel. In each kernel, each thread represents an element of the velocity and initializes it to zero.

(5) Kernel for calculating the manufacturing cost
Each thread in this kernel represents a particle, and calculates its objective function value.

(6) Kernel for finding the global best solution

__global__ void GlobalBest(int* g_idata, int* g_odata)
{
    __shared__ int data[N];   // N: threads (particles) per block
    __shared__ int index[N];
    int tid = threadIdx.x;
    int totalid = blockIdx.x * blockDim.x + threadIdx.x;
    data[tid] = g_idata[totalid];
    index[tid] = tid;
    __syncthreads();
    /* Parallel reduction: halve the active range each step, keeping the
       smaller cost (and the index of its particle) in the lower half.
       The loop body has been reconstructed as a standard minimum
       reduction, as the source listing is truncated after the for line. */
    for (int s = blockDim.x / 2; s > 0; s /= 2)
    {
        if (tid < s && data[tid + s] < data[tid])
        {
            data[tid] = data[tid + s];
            index[tid] = index[tid + s];
        }
        __syncthreads();
    }
    if (tid == 0)
        g_odata[blockIdx.x] = index[0];
}
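A CPU analogue of the minimum reduction used to find the global best solution may clarify the pattern: for a power-of-two number of particles, the active range is halved each step, and the smaller cost together with its particle index is kept in the lower half. The function name and the pair-return convention are illustrative:

#include <utility>
#include <vector>

// CPU analogue of the GlobalBest reduction: returns the index of the
// particle with the smallest cost and that cost. Assumes data.size()
// is a power of two, mirroring the fixed block size on the GPU.
std::pair<int, int> reduce_best(std::vector<int> data) {
    std::vector<int> index(data.size());
    for (std::size_t i = 0; i < data.size(); ++i) index[i] = (int)i;
    for (std::size_t s = data.size() / 2; s > 0; s /= 2) {
        // On the GPU, this inner loop runs as s parallel threads.
        for (std::size_t tid = 0; tid < s; ++tid) {
            if (data[tid + s] < data[tid]) {
                data[tid] = data[tid + s];
                index[tid] = index[tid + s];
            }
        }
    }
    return {index[0], data[0]};  // (best particle index, best cost)
}

Each outer step takes constant parallel time, so the reduction over K particles finishes in O(log K) parallel steps rather than the O(K) of a sequential scan.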
