Proceedings of the 2014 Industrial and Systems Engineering Research Conference Y. Guan and H. Liao, eds.

A BI-OBJECTIVE GENETIC ALGORITHM FOR ECONOMIC LOT SCHEDULING PROBLEM UNDER UNCERTAINTY

Canan Capa, Ali Akgunduz, Kudret Demirli
[email protected], [email protected], [email protected]
Concordia University, Canada

Abstract
The economic lot scheduling problem (ELSP) is one of the most studied problems in production planning. It deals with scheduling a set of products on a single machine so as to minimize the long-run average holding and setup cost under the assumptions of known, constant demand and production rates. Traditional deterministic ELSP models neglect the uncertainty in demand rates and setup times and may therefore perform poorly under certain realizations of these random quantities. In this study we consider a stochastic version of the ELSP: we relax the assumption of deterministic demand rates and setup times and formulate the problem as a bi-objective optimization problem that aims at providing solution robust production schedules. To solve the problem, we use a hybrid approach combining a genetic algorithm and linear programming. Through K possible schedule realizations, this approach generates a set of non-dominated solution robust production schedules, i.e., production schedules with the least deviation from the realized schedules in terms of makespan and total cost.

Keywords
production, scheduling, lot-sizing, multi-objective optimization, genetic algorithm, uncertainty

1. Introduction
The economic lot scheduling problem (ELSP) deals with scheduling multiple products on a single machine so as to minimize total cost by determining both a production sequence and a lot size for each product. It is typically assumed that demand rates and setup times are known. This problem occurs in many production environments such as metal forming and plastic production, blending and mixing facilities, and weaving production lines [1]. The ELSP, which is NP-hard [2], has been studied by a large number of researchers for more than 50 years. Due to the non-linearity and combinatorial characteristics of the problem, research efforts have focused on generating near-optimal cyclic schedules using heuristic methods. These methodologies are based on one of the common cycle, basic period, or time-varying lot size approaches. The common cycle approach, in which all products are produced within the same cycle, is the simplest to implement, while the basic period approach allows different cycle times for different products; however, each cycle time must be an integer multiple of a basic period. The time-varying lot size approach is more flexible than the other two, since the lot sizes of a product may vary for different production runs within one cycle. Maxwell [3] and Delporte and Thomas [4] used the time-varying lot size approach to solve the ELSP. Dobson [5] showed that under this approach any production sequence can be converted into a feasible schedule and proposed a solution known as Dobson's heuristic. More recently, meta-heuristics such as simulated annealing (SA), genetic algorithms (GA) and tabu search (TS) have been applied within the time-varying lot size approach to solve the problem. Khouja et al. [6] presented a GA for the ELSP under the basic period approach. Moon et al. [7] applied a hybrid GA based on time-varying lot sizes which outperforms Khouja's GA. Feldmann and Biskup [8] and Raza and Akgunduz [9] used simulated-annealing-based meta-heuristics. Gaafar [10] applied a GA to dynamic lot sizing with batch ordering. Raza et al. [11] proposed a TS algorithm and neighbourhood search heuristics to solve the problem. In a recent study, Goncalves and Sousa [12] presented a GA combined with linear programming (LP) which considers initial and ending inventory levels as given and allows backorders.

In this study, since traditional deterministic ELSP models neglect the uncertainty in demand rates and setup times and may therefore perform poorly under certain realizations of these random quantities, we relax the assumption of deterministic demand rates and setup times and formulate the problem as a bi-objective optimization problem that aims at providing solution robust production schedules. Building on the solution approach presented in Goncalves and Sousa [12], we adopt a hybrid approach combining a genetic algorithm and linear programming. The aim of this approach is to generate a set of non-dominated solution robust production schedules, i.e., production schedules that do not differ much from the actually realized schedules in terms of makespan and total cost. In Section 2, the problem formulation is presented. The details of the solution approach are presented in Section 3. In Section 4, the details of the experimental analysis are provided. Finally, in Section 5, the paper is concluded with a discussion of the results of the case study, final remarks and a possible future research agenda.

2. Problem Formulation
The mathematical model presented in this section is based on the one presented in Goncalves and Sousa [12]. The only difference is in the objective functions: while Goncalves and Sousa [12] consider a single objective, the model presented here has two objective functions. It is assumed that only one product can be produced at a time on the machine and that demand rates and setup times follow a statistical distribution. Production rates are assumed to be deterministic. It is also assumed that the setup costs and setup times are independent of the production sequence. Furthermore, the inventory holding cost is directly proportional to the amount of inventory, products do not have any precedence over each other, backorders are not allowed, and the production facility is assumed to be failure free and to always produce perfect quality products. The notation used in the problem formulation is introduced in Table 1.

Table 1: Notation used in the problem formulation

Sets
  I       set of products, indexed by i
  N       production intervals, indexed by n (n = 1, 2, ..., N)

Parameters
  a_i     setup cost for product i
  d_i     demand rate for product i
  h_i     holding cost per unit per unit time for product i
  p_i     production rate of product i
  s_i     setup time for product i
  M       a large number

Variables
  X_in    binary variable equal to 1 if product i is set up in the nth production interval, 0 otherwise
  L_in    inventory level of product i at the end of the nth setup
  U_in    inventory level of product i at the end of the production run following the nth setup
  T_n     time period corresponding to the beginning of the nth production interval
  IS_i    inventory level of product i at the start of the planning horizon
  IE_i    minimum inventory level of product i at the end of the planning horizon
  Z_in    quantity of product i produced during the nth production run
  C_max   makespan of the production schedule

Note that although the mathematical model presented in Goncalves and Sousa [12] allows backorders, the mathematical model presented in this section assumes that backordering is not allowed. The objective functions and constraints of the model are as follows:

minimize  E\left[\sum_{n=0}^{N}\sum_{i\in I}\left\{a_i X_{in} + h_i\left[\frac{U_{in}^2 - L_{in}^2}{2(p_i - d_i)} + \frac{U_{i,n+1}^2 - L_{i,n+1}^2}{2 d_i}\right]\right\}\right]    (1)

minimize  E(C_{max})    (2)

Subject to:

Z_{in} \le X_{in} M,    i \in I;  n = 1, \ldots, N    (3)

T_n + \sum_{i\in I} s_i X_{in} + \sum_{i\in I} \frac{Z_{in}}{p_i} \le T_{n+1},    n = 1, \ldots, N-1    (4)

T_N + \sum_{i\in I} s_i X_{iN} + \sum_{i\in I} \frac{Z_{iN}}{p_i} \le C_{max}    (5)

IS_i + \sum_{k=1}^{n-1} Z_{ik} - d_i \left(T_n + \sum_{j\in I} s_j X_{jn}\right) = L_{in},    i \in I;  n = 1, \ldots, N    (6)

L_{in} + \left(1 - \frac{d_i}{p_i}\right) Z_{in} + (X_{in} - 1) M \le U_{in},    i \in I;  n = 1, \ldots, N    (7)

IS_i + \sum_{k=1}^{N} Z_{ik} - d_i C_{max} \ge IE_i,    i \in I    (8)

C_{max} \ge \sum_{i\in I} \sum_{n=1}^{N} s_i X_{in}    (9)

The objective functions (1) and (2) minimize the expected total cost and the expected makespan of the schedule, respectively. Objective function (1) consists of the setup and inventory holding costs over the planning horizon for all products. Constraint set (3) forces a setup for a product whenever there is production of that product. Constraint sets (4) and (5) together guarantee that a production interval starts at the end of the previous one; they also assure capacity feasibility. Constraint set (6) gives the inventory level of each product at the end of the nth setup, while constraint set (7) measures the inventory levels at the end of the production runs. Constraint set (8) guarantees that the end-of-horizon inventory of each product is greater than or equal to the required minimum level IE_i. The length of the planning horizon defined by constraint (9) is at least equal to the sum of the setup times. Note that the total cost function is not positive semi-definite, so obtaining a solution for this mathematical model is not easy. In order to make the model more tractable, a surrogate model is suggested which uses an upper bound on the inventory costs obtained by setting L_{in} = L_{i,n+1} = 0 in the objective function. As stated in Goncalves and Sousa [12], the modified objective function is capable of producing good solutions. In our solution approach, we make use of the surrogate model to obtain production schedules.
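To make the cost structure concrete, the following minimal sketch (ours, not the authors' implementation; function names and the data layout are illustrative assumptions) evaluates the setup and inventory-holding terms of objective (1) for one schedule, together with the surrogate upper bound obtained by fixing L_in = L_{i,n+1} = 0.

```python
# Illustrative sketch only: evaluating the cost terms of objective (1) and
# the surrogate upper bound. Data layout is an assumption, not the paper's code.
def schedule_cost(X, L, U, a, h, p, d):
    """X[i][n] = 1 if product i is set up in interval n, else 0.
    L[i][n] / U[i][n] = inventory of product i at the start / end of the
    production run in interval n; L and U carry one extra interval so that
    the n+1 terms of objective (1) are defined."""
    cost = 0.0
    for i in range(len(a)):
        for n in range(len(X[i])):
            setup = a[i] * X[i][n]
            holding = h[i] * (
                (U[i][n] ** 2 - L[i][n] ** 2) / (2.0 * (p[i] - d[i]))   # build-up phase
                + (U[i][n + 1] ** 2 - L[i][n + 1] ** 2) / (2.0 * d[i])  # depletion phase
            )
            cost += setup + holding
    return cost


def surrogate_cost(X, U, a, h, p, d):
    """Upper bound used by the surrogate model: set L_in = L_{i,n+1} = 0."""
    zeros = [[0.0] * len(row) for row in U]
    return schedule_cost(X, zeros, U, a, h, p, d)
```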

3. Solution Approach
In this section we present a solution approach combining a bi-objective GA with linear programming to solve the ELSP. The aim of this approach is to generate non-dominated solution robust production schedules by considering the uncertainty in the demand rates and setup times. Solution robustness aims at constructing a production schedule that differs from the realized production schedule by the least possible amount. Therefore, in our solution approach we aim to minimize the difference between the makespan and total cost of the production schedule and the actually realized makespan and total cost. This difference between the production schedule and the actually realized schedule is measured with the total sum of absolute deviations (TSAD) of both the makespan and the total cost of the schedule over K possible schedule realizations. The objectives considered in the bi-objective GA are therefore to minimize the average TSAD of makespan (TSAD_avg^makespan) and the average TSAD of total cost (TSAD_avg^totalcost). The proposed bi-objective GA is an adapted version of NSGA-II, suggested by Deb et al. [13], and of the GA presented by Goncalves and Sousa [12]. The chromosome representation, chromosome decoding procedure and schedule generation scheme are exactly as presented in Goncalves and Sousa [12], and the population management is that of NSGA-II. However, our bi-objective GA differs in the chromosome evaluation procedure. The initial population is comprised of randomly generated chromosomes, and we make use of one-point crossover and swap mutation operators.
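As a minimal illustration of the genetic operators mentioned above (our own sketch; the decoding and schedule generation scheme of [12] are not reproduced here, and the per-chromosome application of the swap mutation is an assumption), a chromosome is simply a vector of N+1 random keys and the operators act on such vectors:

```python
import random

def random_chromosome(n_setups):
    """A chromosome is a vector of N+1 random keys in [0, 1]; the first N keys
    encode the production sequence, the last key the number of setups [12]."""
    return [random.random() for _ in range(n_setups + 1)]

def one_point_crossover(mother, father):
    """Classic one-point crossover on the key vectors."""
    cut = random.randint(1, len(mother) - 1)
    return mother[:cut] + father[cut:], father[:cut] + mother[cut:]

def swap_mutation(chromosome, mutation_rate):
    """Swap two randomly chosen keys with probability `mutation_rate`."""
    child = chromosome[:]
    if random.random() < mutation_rate:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child
```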

3.1 Chromosome Representation
A chromosome is represented by N+1 random keys, i.e., real numbers between 0 and 1, where N is the maximum number of setups allowed in the schedule. Each chromosome designates a production sequence (PS). The first N genes are used to obtain a maximal production sequence, while gene N+1 is used to obtain the number of setups in the PS [12].

3.2 Schedule Generation
Schedule generation for a chromosome is accomplished in two steps. In the first step the PS is decoded, and in the second step the surrogate model presented in Section 2 is solved by LP. Note that the PS provides the values of the binary variables for the surrogate model. Since the surrogate model does not provide the exact average cost for a given PS, the exact cost is computed using the correct cost function. This exact average cost value and the makespan of the schedule are used in chromosome evaluation. For details of the schedule generation scheme we refer to Goncalves and Sousa [12].

3.3 Chromosome Evaluation
Evaluation of a chromosome is based on a set of K realizations reflecting the uncertainty in the demand rates and setup times. For a given chromosome, both TSAD_avg^makespan and TSAD_avg^totalcost are assessed through a set of K realizations mimicking the implementation phase, where a realization corresponds to a sample instance obtained by a simulation run using the demand rate and setup time distributions. To calculate TSAD_avg^makespan and TSAD_avg^totalcost of a chromosome, a set of K realizations is first performed. For each realization, a production schedule is obtained with the schedule generation scheme explained in the previous section. Hence, K production schedules, each with its own makespan and total cost, are obtained. Then, using Equations (10) and (11), TSAD_makespan^k and TSAD_totalcost^k are calculated for every realization k:

TSAD_{makespan}^{k} = \sum_{r \in R: r \neq k} | makespan_k - makespan_r |    (10)

TSAD_{totalcost}^{k} = \sum_{r \in R: r \neq k} | totalcost_k - totalcost_r |    (11)

where R is the set of realizations and r is the realization index.

The K realizations are then sorted into non-domination levels using the corresponding TSAD values, and the schedules that have a rank of 1 constitute the robust non-dominated schedule set of the chromosome. The fitness pair of the chromosome (TSAD_avg^makespan, TSAD_avg^totalcost) is then simply calculated by taking the averages of the fitness pairs of the schedules in this non-dominated schedule set.

3.4 Construction of the Next Generation
Construction of the next generation starts with the non-dominated sorting of the current population and continues with the calculation of crowding distances for each chromosome in the current population. Then, parent pairs are selected from the current population and the offspring population is generated. Before the current population is combined with the offspring population, the schedule generation scheme is applied to all chromosomes in the offspring population and the performance measures (TSAD_avg^makespan and TSAD_avg^totalcost) are calculated for each offspring. After the combined population is obtained, it is reduced to size POP with a reduction procedure, where POP is the population size. The steps in constructing the next generation are given in the following subsections.

3.4.1 Non-dominated Sorting
The aim of non-dominated sorting is to determine the rank of the front that each chromosome belongs to. This information is used in the parent selection and reduction processes. In non-dominated sorting, two entities are calculated for each chromosome: i) the domination count, i.e., the number of chromosomes which dominate the chromosome; and ii) the set (Sp) of chromosomes that the chromosome dominates. All chromosomes with a domination count of zero form the first non-dominated front. Then, for each chromosome in the first front, each member of its set Sp is visited and its domination count is reduced by one. In doing so, if the domination count of any member becomes zero, it is put in a separate list; these members belong to the second non-dominated front. The procedure is continued with the remaining members of the population and the third front is identified. This process continues until all fronts are identified.

3.4.2 Crowding Distance Calculation
The crowding distance metric is used to compare chromosomes when selecting the parents and later when reducing the population. To get an estimate of the density of chromosomes surrounding a particular chromosome in the population, we calculate the average distance of the two points on either side of this point along each of the objectives. This quantity serves as an estimate of the perimeter of the cuboid formed by using the nearest neighbors as vertices and is called the crowding distance. In Figure 1, the crowding distance of the ith chromosome in its front (marked with solid circles) is the average side length of the cuboid (shown with a dashed box).

Figure 1: Crowding distance of the ith chromosome (Deb et al. [13])

The crowding distance computation requires sorting the population according to each objective function value in ascending order of magnitude. Thereafter, for each objective function, the boundary chromosomes (those with the smallest and largest function values) are assigned an infinite distance value. All other intermediate chromosomes are assigned a distance value equal to the absolute normalized difference in the function values of the two adjacent chromosomes. This calculation is repeated for the other objective functions, and the overall crowding distance is the sum of the individual distance values corresponding to each objective; each objective function is normalized before the crowding distance is calculated. The crowding distance thus gives an estimate of the density of chromosomes surrounding a particular chromosome: a chromosome with a high crowding distance is not surrounded by other chromosomes in close proximity. Hence, we may want to keep this chromosome for the next generation, as we aim for dispersed chromosomes.

3.4.3 Selection of Parent Pairs
Construction of the next generation continues with the selection of parent pairs. To obtain these pairs, a mother population and a father population are generated from the population on hand using a binary tournament selection procedure and the crowded-comparison operator. The crowded-comparison operator guides the selection process at the various stages of the algorithm. After all population members have been assigned a crowding distance, two chromosomes are compared on their non-domination rank and their proximity to other chromosomes. Between two chromosomes with different non-domination ranks, the crowded-comparison operator prefers the chromosome with the lower rank; otherwise, if both chromosomes belong to the same front, it prefers the chromosome located in the less crowded region. In our solution approach, we adopt the binary tournament selection mechanism together with the crowded-comparison operator both for the selection of parents and for the selection of chromosomes that will be transferred to the next generation. In binary tournament selection, two chromosomes are randomly selected from the population as candidates to become a mother and compared using the crowded-comparison operator; the preferred chromosome wins the tournament and is chosen as the mother. The same procedure is applied for the selection of the father, and the selection is repeated until a distinct mother-father pair is obtained.

3.4.4 Reducing the Population Size
After binary tournament selection, crossover and mutation operators are used to create an offspring population, and the schedule generation scheme is applied to the offspring chromosomes to obtain their performance measures; a combined population is then formed. The combined population is sorted according to non-domination levels and crowding distances are calculated for each chromosome in the combined population. As all previous and current population members are included, elitism is ensured. Chromosomes that belong to the best non-dominated set F1 are the best chromosomes in the combined population. If the size of F1 is smaller than the population size, all members of this set are chosen for the new population, and the remaining members of the new population are chosen from subsequent non-dominated fronts in the order of their ranking. To choose exactly POP members for the next population, the chromosomes of the last front to be considered are sorted using the crowded-comparison operator in descending order and the best chromosomes are chosen to fill the remaining population slots.
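A compact sketch of the crowding-distance calculation, the crowded-comparison operator and the population reduction described in Sections 3.4.2-3.4.4 is given below. This is an NSGA-II-style illustration with our own data layout; the non-dominated sorting step is assumed to have already produced the list of fronts.

```python
import math

def crowding_distance(front):
    """front: list of (TSAD_makespan, TSAD_totalcost) pairs in one
    non-dominated front. Returns one crowding distance per member."""
    size = len(front)
    dist = [0.0] * size
    for m in range(2):                                   # the two objectives
        order = sorted(range(size), key=lambda idx: front[idx][m])
        dist[order[0]] = dist[order[-1]] = math.inf      # boundary chromosomes
        span = front[order[-1]][m] - front[order[0]][m]
        if span == 0:
            continue
        for pos in range(1, size - 1):                   # intermediate chromosomes
            gap = front[order[pos + 1]][m] - front[order[pos - 1]][m]
            dist[order[pos]] += gap / span               # normalized difference
    return dist

def crowded_better(a, b):
    """Crowded-comparison operator on (rank, crowding distance) tuples:
    lower rank wins; ties are broken by the larger crowding distance."""
    return a[0] < b[0] if a[0] != b[0] else a[1] > b[1]

def reduce_population(fronts, pop_size):
    """Fill the next population front by front; the last, partially fitting
    front is sorted by crowding distance and only its most dispersed members
    are kept, as in Section 3.4.4."""
    next_pop = []
    for front in fronts:                                 # fronts ordered by rank
        if len(next_pop) + len(front) <= pop_size:
            next_pop.extend(front)
            continue
        dist = crowding_distance(front)
        order = sorted(range(len(front)), key=lambda idx: dist[idx], reverse=True)
        next_pop.extend(front[idx] for idx in order[:pop_size - len(next_pop)])
        break
    return next_pop
```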

4. Experimental Analysis
In this section, we report the results of a set of experiments conducted to evaluate the performance of the bi-objective GA. In this paper we only present the experimental results of a pilot study, in which we assume that demand rates and setup times are known in advance. Consequently, we minimize the makespan and the average cost of the production schedules. Following the same settings as in [12], we also impose that the initial inventory be equal to the final inventory (IS_i = IE_i) to force the surrogate model to produce a cycle. The lower bound on the total number of setups in the PS is given in Equation (12), where N_i denotes the maximum number of setups allowed for product i.

MinSetup = \sum_{i \in I} \max\{1, N_i - 4\}    (12)

4.1 Problem Instances
The analysis is conducted on two sets of problem instances, each with 5 randomly generated problems, using uniform distributions over the parameter ranges given in Table 2.

Table 2: Problem parameter ranges for randomly generated test instances

  Parameter                             Set 1    Set 2
  Number of products (units)            5        10
  Production rate (units/unit time)     [1500-5000]
  Demand rate (units/unit time)         [150-1200]
  Setup time (time)                     [0.10-0.40]
  Setup cost ($)                        [40-160]
  Holding cost ($)                      [0.003-0.010]

4.2 Fine-Tuning of the GA Parameters
Since the parameters used in GAs have a direct effect on performance, a fine-tuning procedure is needed to select the best parameter combination to be used in the implementation of the proposed solution approach. The best combination of the parameters for the bi-objective GA is determined through experimentation. For this experiment, two problem instances from each set are selected and solved with the bi-objective GA. The values of the GA parameters that we tested and used to construct the parameter combinations are shown in Table 3.

Table 3: Parameter values tested in the selection of the best parameter combination

  Crossover rate           0.6    0.75   0.9
  Mutation rate            0.1    0.15
  Population size          50     100
  Number of generations    30     50

Using each value of the GA parameters in Table 3, a total of 24 parameter combinations is obtained; each combination is tested by solving each of the four selected test instances three times to reduce the undesired effect of randomness. Thus, the bi-objective GA is run 12 times for each parameter combination (four instances, three replications each), yielding a total of 288 runs. To compare the performance of the parameter combinations, Ballestin and Blanco [14] provide a number of performance measures used in the literature to express the quality of the solutions obtained with multi-objective optimization algorithms, together with their advantages and disadvantages. In our parameter selection procedure, we have used the following four performance measures:

- Scaled Extreme Hyperarea Ratio (Scaled EHR)
- Maximum Spread (MS)
- Number of Non-dominated Solutions
- CPU Time

The scaled extreme hyperarea ratio (Scaled EHR) that we use to compare the performance of different GA parameter combinations is based on the hypervolume indicator defined by Zitzler and Thiele [15] and the extreme hyperarea ratio (EHR) developed by Kılıç et al. [16]. In our case, since both objectives of the GA are of minimization type, we have taken the point (0, 0) as the reference point. For the bi-objective case, the EHR is the ratio of the hyperarea of the front (area H in Figure 2(a)) to the area bounded by the reference point and the maximum values of the two objective functions (area A in Figure 2(b)). Hence, the smaller the EHR, the better.

Figure 2: Illustration of the EHR performance metric

We adjust the EHR by dividing it by the number of non-dominated solutions in the non-dominated front and call this metric the scaled EHR. In this way, non-dominated fronts with a better spread are given priority over non-dominated fronts with less spread, even if the solutions in the fronts are pairwise non-dominated. The maximum spread metric (MS), developed by Zitzler and Thiele [15], shows how widely the solutions in the non-dominated front are spread and is calculated as in Equation (13):

MS = \sqrt{ \sum_{i=1}^{m} \max_{(z_0, z_1) \in M \times M} \left( f_i(z_0) - f_i(z_1) \right)^2 }    (13)

Here z_k denotes the kth solution in the non-dominated front M, and f_i(z_k) denotes the scaled value of z_k in the ith objective, i.e., the objective value of z_k divided by the worst objective value realized among the solutions of the non-dominated front. In our case, since there are only two objectives, the MS value can be calculated using only the two extreme solutions of the non-dominated front: it is the square root of the sum of their squared differences over the two scaled objectives. The larger the MS value, the better the spread achieved by the corresponding parameter combination. The number of non-dominated solutions is another performance measure that we use to compare the results of the GAs employing different parameter combinations. The solutions in the non-dominated front are the possible solutions offered to the decision maker to select from; therefore, the higher the number of these solutions, the better. The final performance measure is the CPU time that the GAs employing different parameter combinations require to obtain the non-dominated fronts. To summarize, we prefer parameter combinations with smaller scaled EHR values, higher MS values, a higher number of non-dominated solutions and smaller CPU times. The results of the GA runs employing different parameter combinations are compared on the basis of these performance measures.

To determine the best parameter combination, non-dominated sorting is applied to these performance quadruples; among the non-dominated quadruples, the one with the largest weighted score is chosen, and its corresponding parameter combination is the one implemented in the computational study that follows. Table 4 shows the non-dominated parameter combinations, with their normalized performance measure values and weighted scores, obtained with the fine-tuning procedure over the 24 tested combinations.

Table 4: Non-dominated GA parameters obtained with the fine-tuning procedure

  Parameter values                                  Normalized average performance measures
  Crossover  Mutation  Population  Generation       Scaled  MS     Non-dom.  CPU     Weighted
  rate       rate      size        number           EHR            count     time    score
  0.60       0.10      50          30               0.40    0.38   0.00      0.99    0.44
  0.60       0.10      50          50               0.43    0.64   0.46      0.73    0.57
  0.60       0.10      100         30               0.01    0.41   0.82      0.50    0.43
  0.60       0.15      50          30               0.57    0.55   0.13      1.00    0.56
  0.60       0.15      50          50               0.63    0.64   0.33      0.64    0.56
  0.60       0.15      100         30               0.21    0.00   0.38      0.46    0.26
  0.75       0.10      50          30               0.75    0.81   0.64      0.95    0.79
  0.75       0.10      100         30               0.77    0.53   0.54      0.51    0.59
  0.75       0.10      100         50               0.94    1.00   0.90      0.15    0.75
  0.75       0.15      50          30               0.67    0.54   0.18      0.88    0.57
  0.75       0.15      100         30               0.00    0.40   0.54      0.45    0.35
  0.75       0.15      100         50               0.61    0.67   1.00      0.00    0.57
  0.90       0.10      50          30               1.00    0.85   0.41      1.00    0.81
  0.90       0.15      50          30               0.56    0.58   0.23      0.96    0.58

In the following sections, results are obtained employing the GA parameters listed in Table 5.

Table 5: GA parameters used in the implementation

  Crossover rate           0.9
  Mutation rate            0.1
  Population size          50
  Number of generations    30

4.3 Computational Results
Our bi-objective GA is implemented in C# and the computational experiments were carried out on a computer with an Intel i5 1.80 GHz CPU and 12 GB RAM. Table 6 shows the best average cost and best makespan among the non-dominated solutions of each problem instance, together with the CPU times and the number of non-dominated solutions.

Table 6: Performance results for the problem instances

  Problem   Problem   Best avg.   Best       CPU time    Non-dominated
  size      no        cost        makespan   (seconds)   solution count
  5         1         209.19      5.59       8.21        1
  5         2         1031.96     1.72       9.56        1
  5         3         303.55      1.49       10.06       2
  5         4         402.17      1.64       10.26       1
  5         5         515.67      11.87      7.82        1
  10        6         975.39      3.11       34.37       4
  10        7         854.81      3.28       35.84       9
  10        8         564.27      3.03       44.40       3
  10        9         1375.22     3.24       40.72       8
  10        10        927.58      3.16       41.86       3

In Table 7, we present the resulting production sequences for each non-dominated solution of each problem instance. As can be seen, while we obtain only one non-dominated solution for some instances, we obtain up to nine non-dominated solutions for others.

Table 7: Non-dominated solutions for each problem instance

  Problem no   Solution   Production sequence
  1            1          1-5-2-4-3
  2            1          5-1-3-2-4
  3            1          2-1-4-3-5
  3            2          2-5-1-3-4
  4            1          2-1-4-5-3
  5            1          4-1-5-2-3
  6            1          8-6-10-5-4-9-3-1-7-2-6
  6            2          8-6-10-7-5-4-9-2-3-1
  6            3          3-7-6-10-5-4-1-8-2-9
  6            4          8-7-6-10-5-4-1-9-2-5-3
  7            1          9-1-10-8-6-3-4-7-1-5-2-9
  7            2          9-6-1-10-2-8-6-3-4-7-5-3
  7            3          9-2-8-5-1-10-6-4-3-7-5
  7            4          9-8-10-6-4-3-7-5-2-9-1
  7            5          9-2-6-8-5-1-10-6-4-3-7-1
  7            6          9-7-10-1-8-5-6-3-4-7-2-5
  7            7          9-2-8-5-1-10-6-4-3-7
  7            8          1-9-7-3-10-2-8-1-6-3-9-5-4-7-1
  7            9          9-6-7-3-10-2-8-1-6-3-4-7-5-1-7
  8            1          9-3-7-4-1-8-6-5-10-3-2
  8            2          9-3-10-8-6-1-4-7-5-2
  8            3          9-3-10-8-5-6-1-4-7-2
  9            1          3-9-2-6-7-9-2-8-10-5-4-1-3-2
  9            2          5-7-1-4-8-6-3-8-9-10-2-1-7-2
  9            3          2-6-7-9-2-10-5-4-8-1-9-3-2
  9            4          10-8-5-9-7-10-4-8-3-6-9-1-2
  9            5          3-7-4-6-7-9-2-8-10-5-1-3
  9            6          3-2-6-7-9-2-8-10-5-4-1-3-2
  9            7          3-2-6-7-9-2-8-10-5-4-1
  9            8          3-6-7-9-2-8-10-5-4-1
  10           1          10-5-10-8-3-6-7-5-4-9-2-1
  10           2          9-5-4-1-9-7-5-8-6-3-10-2
  10           3          6-5-4-1-9-10-7-5-2-8-3

As an illustration, the non-dominated front for problem 9 is presented in Figure 3, which plots the average cost of the non-dominated solutions against their makespan.

Figure 3: Non-dominated front for problem 9


5. Conclusion
In this paper, we have addressed the problem of scheduling economic lots in a multi-product, single-machine environment and formulated it as a multi-objective optimization problem. Demand rates and setup times are taken as random variables with known distributions. To solve the problem, we developed a hybrid approach combining a bi-objective GA and LP. Minimization of the expected total cost and the expected makespan is pursued in order to generate solution robust production schedules. These expectations are calculated through K schedule realizations mimicking the implementation phase, where a realization corresponds to a sample instance obtained by a simulation run using the demand rate and setup time distributions. A pilot study was conducted assuming that the demand rates and setup times are known in advance, with the objective of minimizing the makespan and the average cost, and its results were presented. Further research might be directed toward the case where there are multiple identical machines.

References
1. Boctor, F. F., 1987, "The G-group heuristic for single machine lot scheduling," International Journal of Production Research, 25(3), 363-379.
2. Hsu, W. L., 1983, "On the general feasibility test of scheduling lot sizes for several products on one machine," Management Science, 29(1), 93-105.
3. Maxwell, W. L., 1964, "The scheduling of economic lot sizes," Naval Research Logistics Quarterly, 11(2), 89-124.
4. Delporte, C. M., and Thomas, L. J., 1977, "Lot sizing and sequencing for N products on one facility," Management Science, 23(10), 1070-1079.
5. Dobson, G., 1987, "The economic lot-scheduling problem: achieving feasibility using time-varying lot sizes," Operations Research, 35(5), 764-771.
6. Khouja, M., Michalewicz, Z., and Wilmot, M., 1998, "The use of genetic algorithms to solve the economic lot size scheduling problem," European Journal of Operational Research, 110(3), 509-524.
7. Moon, I., Silver, E. A., and Choi, S., 2002, "Hybrid genetic algorithm for the economic lot-scheduling problem," International Journal of Production Research, 40(4), 809-824.
8. Feldmann, M., and Biskup, D., 2003, "Single-machine scheduling for minimizing earliness and tardiness penalties by meta-heuristic approaches," Computers & Industrial Engineering, 44(2), 307-323.
9. Raza, A. S., and Akgunduz, A., 2008, "A comparative study of heuristic algorithms on economic lot scheduling problem," Computers & Industrial Engineering, 55(1), 94-109.
10. Gaafar, L., 2006, "Applying genetic algorithms to dynamic lot sizing with batch ordering," Computers & Industrial Engineering, 51(3), 433-444.
11. Raza, S. A., Akgunduz, A., and Chen, M. Y., 2006, "A tabu search algorithm for solving economic lot scheduling problem," Journal of Heuristics, 12(6), 413-426.
12. Gonçalves, J. F., and Sousa, P. S., 2011, "A genetic algorithm for lot sizing and scheduling under capacity constraints and allowing backorders," International Journal of Production Research, 49(9), 2683-2703.
13. Deb, K., Pratap, A., Agarwal, S., and Meyarivan, T. A. M. T., 2002, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, 6(2), 182-197.
14. Ballestin, F., and Blanco, R., 2011, "Theoretical and practical fundamentals for multi-objective optimization in resource-constrained project scheduling problems," Computers & Operations Research, 38(1), 51-62.
15. Zitzler, E., and Thiele, L., 1999, "Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, 3(4), 257-271.
16. Kılıç, M., Ulusoy, G., and Şerifoğlu, F. S., 2008, "A bi-objective genetic algorithm approach to risk mitigation in project scheduling," International Journal of Production Economics, 112(1), 202-216.
