MAPREDUCE BASED PARTICLE SWARM OPTIMIZATION FOR LARGE SCALE PROBLEMS

Saeed Mehrjoo 1 and Saman Dehghanian 2

1 Department of Computer Engineering, College of Engineering, Dariun Branch, Islamic Azad University, Dariun, Iran
[email protected]

2 Department of Computer Engineering, Payame Noor University, Iran
[email protected]

ABSTRACT

Recently, the MapReduce data-parallel programming model has become a powerful and widespread system. Solving complex and difficult optimization problems is very challenging, and over the past few years PSO has been used increasingly as an effective technique for such problems. Although PSO has many advantages, it has a considerable drawback: it takes a long time to find solutions for large scale problems. In this paper, we present a novel method to run PSO on MapReduce in parallel. While maintaining good optimization performance, MapReduce based PSO can enlarge the swarm population and problem dimension sizes, speed up the run greatly, and provide users with a feasible solution for complex optimization problems in reasonable time.

Keywords: MapReduce, Particle Swarm Optimization, speedup.

1. Introduction
Particle swarm optimization (PSO), developed by Eberhart and Kennedy and inspired by the social behavior of bird flocking and fish schooling, is a stochastic global optimization technique [1]. In PSO, each particle modifies its position in the search space according to the best position it has found so far as well as the best position found by the entire swarm, and tries to converge to the global best solution. Recently, PSO has been used increasingly as an effective technique for solving complex and difficult optimization problems. Easy implementation, combined with strong convergence and global search abilities, is an advantage of PSO compared to other swarm-based algorithms such as the genetic algorithm and the ant colony algorithm. Although PSO has many advantages, it has a considerable drawback: it takes a long time to find solutions for large scale problems, such as problems that need a large swarm population or problems with many dimensions. The main reason for this drawback is that the optimization process of PSO requires a large number of fitness evaluations, which are the most time-consuming part and are usually done sequentially.

Unfortunately, large-scale PSO, like all large-scale parallel programs, faces a wide range of problems. Inefficient communication or poor load balancing can keep a program from scaling to a large number of processors. Once a program successfully scales, it must still address the issue of failing nodes. For example, assuming that a node fails, on average, once a year, the probability of at least one node failing during a 24-hour job on a 256-node cluster is 1 - (1 - 1/365)^256 = 50.5%. On a 1000-node cluster, the probability of failure rises to 93.6%. Google faced these same problems in large-scale parallelization, with hundreds of specialized parallel programs that performed web indexing, log analysis, and other operations on large datasets.
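The failure probabilities quoted above can be reproduced directly; a quick sketch (the helper name is ours, the rates and cluster sizes are the ones from the text):

```python
# Probability that at least one node fails during a 24-hour job,
# assuming each node independently fails on average once a year.
def failure_probability(nodes: int, daily_rate: float = 1 / 365) -> float:
    return 1 - (1 - daily_rate) ** nodes

print(f"{failure_probability(256):.1%}")   # ~50.5%
print(f"{failure_probability(1000):.1%}")  # ~93.6%
```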
MapReduce [2], created by Google, is a programming model designed to simplify parallel data processing on large clusters. MapReduce hides the complex details of parallelization, fault tolerance, data distribution and load balancing [2]. One of the biggest advantages of MapReduce is that the programmer only needs to define parameters for controlling data distribution and parallelism [2]. The MapReduce framework consists of Map and Reduce functions, and an implementation of the MapReduce library. In the map phase, the mapper takes a single (key, value) pair as input and produces any number of new (key, value) pairs as output. During the shuffle phase, the underlying system that implements MapReduce sends all of the values associated with an individual key to the same machine. This occurs automatically and is seamless to the programmer [3]. In the reduce stage, the reducer takes all of the values associated with a single key k and outputs a multiset of (key, value) pairs with the same key k. This highlights one of the sequential aspects of MapReduce computation: all of the maps need to finish before the reduce stage can begin [3]. Since the reducer has access to all the values with the same key, it can perform sequential computations on these values. In the reduce step, parallelism is exploited by observing that reducers operating on different keys can be executed simultaneously. Overall, a program in the MapReduce paradigm can consist of many rounds of different map and reduce functions performed one after another [4].

There are many studies in which parallelization of the PSO algorithm is proposed [5-7], but to the best of our knowledge there is just one in which the authors implemented a parallel PSO on a MapReduce system [8]. They used traditional PSO, whereas in our research standard PSO is used, which can return better results while retaining the simplicity of traditional PSO. In standard PSO the communication between particles is less than in traditional PSO, which makes our MapReduce based implementation more efficient. In this paper, we present a novel method to run PSO on MapReduce in parallel.

Proceeding of the 3rd International Conference on Artificial Intelligence and Computer Science (AICS2015), 12 - 13 October 2015, Penang, MALAYSIA. (e-ISBN 978-967-0792-06-4). Organized by http://worldconferences.net
While maintaining good optimization performance, MapReduce based PSO can enlarge the swarm population and problem dimension sizes, speed up the run greatly, and provide users with a feasible solution for complex optimization problems in reasonable time. As Hadoop MapReduce can currently be used on any cluster of ordinary PCs, more and more people will be able to solve huge real-world problems with this parallel algorithm.

The paper is organized as follows. In Section 2, Particle Swarm Optimization is presented in detail. In Section 3, we briefly introduce MapReduce. MapReduce based PSO is elaborated in Section 4. A number of experiments on four benchmark functions and analyses of the results are reported in Section 5. Finally, we conclude the paper and outline future work in Section 6.

2. Particle Swarm Optimization
The Particle Swarm Optimization algorithm presented by Eberhart and Kennedy in 1995 [1] is the traditional particle swarm optimization. In this algorithm, each candidate solution of the optimization problem is called a particle. During each iteration, the position and velocity of every particle are updated based on its own best position so far (Ppbest(t)) and the best position of the entire swarm (Pgbest(t)). The position and velocity updates in traditional PSO can be formulated as follows:

V_id(t+1) = W*V_id(t) + c1*r1*(Ppbest(t) - X_id(t)) + c2*r2*(Pgbest(t) - X_id(t))   (1)

X_id(t+1) = X_id(t) + V_id(t+1)   (2)

where i = 1, 2, ..., N, with N the number of particles in the swarm (the population), and d = 1, 2, ..., D, with D the dimension of the solution space. In Equations (1) and (2), the learning factors c1 and c2 are nonnegative constants, r1 and r2 are random numbers uniformly distributed in the interval [0, 1], and V_id ∈ [Vmin, Vmax], where Vmin and Vmax are designated velocity bounds, constants preset according to the objective function. If the velocity on one dimension exceeds Vmax it is set to Vmax, and if it falls below Vmin it is set to Vmin. This parameter controls the convergence rate of the PSO and can prevent the velocities from growing too fast. The parameter W is the inertia weight, used to balance the global and local search abilities; it is a constant in the interval [0, 1].
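Equations (1) and (2) translate directly into code; the sketch below is a minimal per-particle update under illustrative parameter values (the function name and the clamping helper are ours, not from the paper):

```python
import random

def update_particle(x, v, p_best, g_best, w=0.7, c1=2.0, c2=2.0, v_max=4.0):
    """One traditional-PSO step: Eq. (1) then Eq. (2), applied per dimension."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]
              + c1 * r1 * (p_best[d] - x[d])
              + c2 * r2 * (g_best[d] - x[d]))
        vd = max(-v_max, min(v_max, vd))  # clamp velocity to [Vmin, Vmax]
        new_v.append(vd)
        new_x.append(x[d] + vd)           # position update uses the new velocity
    return new_x, new_v
```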

In order to improve the performance of traditional PSO by exploring the concepts, issues, and applications of the algorithm and its many variants, Daniel Bratton and James Kennedy designed a Standard Particle Swarm Optimization [9]. This standard algorithm is intended both as a baseline for performance testing of improvements to the technique and as a representative of PSO for the wider optimization community. Standard PSO differs from traditional PSO mainly in the following aspects [9]:
1) Swarm Communication Topology: Traditional PSO uses the global topology shown in Fig. 1(a). In this topology, the best particle, which is responsible for the velocity updating of all the particles, is chosen from the whole swarm population. In standard PSO there is no global best; every particle uses only a local best particle for velocity updating, chosen from its left neighbor, its right neighbor, and itself. We call this a local topology, as shown in Fig. 1(b) (assuming the swarm has a population of 12).

Figure 1: TPSO and SPSO topologies

2) Inertia Weight and Constriction: In traditional PSO, the inertia weight parameter was designed to adjust the influence of the previous particle velocities on the optimization process. By adjusting the value of W, the swarm has a greater tendency to eventually constrict itself down to the area containing the best fitness and to explore that area in detail. Analogous to the parameter W, standard PSO introduces a new parameter χ, known as the constriction factor, which is derived from the existing constants in the velocity update equation:

χ = 2 / |2 - φ - sqrt(φ^2 - 4φ)|,   where φ = c1 + c2, φ > 4

and the velocity updating formula in standard PSO is:

V_id(t+1) = χ(V_id(t) + c1*r1*(Ppbest(t) - X_id(t)) + c2*r2*(Plbest(t) - X_id(t)))   (3)

where Plbest is no longer the global best but the local best. Statistical tests have shown that, compared to traditional PSO, standard PSO returns better results while retaining the simplicity of traditional PSO. The introduction of standard PSO gives researchers a common grounding to work from: it can be used as a means of comparison for future developments and improvements of PSO, and thus prevents unnecessary effort being expended on "reinventing the wheel" with rigorously tested enhancements that are already used at the forefront of the field. We therefore use standard PSO in the proposed method.


3. MapReduce
MapReduce is a functional programming model suitable for parallel computation. In this model, a program consists of a high-level map function and a reduce function which meet a few simple requirements. If a problem is formulated in this way, it can be parallelized automatically. In MapReduce, all data are in the form of keys with associated values. For example, in a program that counts the frequency of occurrences of various words, the key would be a word and the value would be its frequency.

A MapReduce operation takes place in two main stages. In the first stage, the map function is called once for each input record. At each call, it may produce any number of output records. In the second stage, this intermediate output is sorted and grouped by key, and the reduce function is called once for each key. The reduce function is given all associated values for the key and outputs a new list of values (often "reduced" in length from the original list of values). The following notation and example are based on the original presentation [2].

3.1 Map Function
A map function takes a single key-value pair and outputs a list of new key-value pairs. The input key may be of a different type than the output keys, and the input value may be of a different type than the output values:

map : (K1, V1) -> list((K2, V2))

Since the map function only takes a single record, all map operations are independent of each other and fully parallelizable.

3.2 Reduce Function
A reduce function reads a key and a corresponding list of values and outputs a new list of values for that key. The input and output values are of the same type. Mathematically, this would be written:

reduce : (K2, list(V2)) -> list(V2)

A reduce operation may depend on the output from any number of map calls, so no reduce operation can begin until all map operations have completed.
However, the reduce operations are independent of each other and may be run in parallel. Although the formal definition of map and reduce functions would indicate building up a list of outputs and then returning the list at the end, it is more convenient in practice to emit one element of the list at a time and return nothing. Conceptually, these emitted elements still constitute a list.

3.3 Benefits of MapReduce
Although not all algorithms can be efficiently formulated in terms of map and reduce functions, MapReduce provides many benefits over other parallel processing models. In this model, a program consists of only a map function and a reduce function; everything else is common to all programs. The infrastructure provided by a MapReduce implementation manages all of the details of communication, load balancing, fault tolerance, resource allocation, job startup, and file distribution. This runtime system is written and maintained by parallel programming specialists, who can ensure that the system is robust and optimized, while those who write mappers and reducers can focus on the problem at hand without worrying about implementation details. A MapReduce system determines task granularity at runtime and distributes tasks to compute nodes as processors become available. If some nodes are faster than others, they are given more tasks, and if a node fails, the system automatically reassigns the interrupted task.
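The word-frequency example mentioned at the start of this section makes the two signatures concrete. The sketch below simulates one MapReduce round in-process; the dict grouping stands in for the shuffle the framework performs automatically, and all names are our own:

```python
from collections import defaultdict

def map_fn(key, line):                     # map : (K1, V1) -> list((K2, V2))
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):               # reduce : (K2, list(V2)) -> list(V2)
    return [sum(counts)]

def run_mapreduce(records, map_fn, reduce_fn):
    groups = defaultdict(list)
    for key, value in records:
        for k2, v2 in map_fn(key, value):  # map phase: independent per record
            groups[k2].append(v2)          # "shuffle": group values by key
    return {k: reduce_fn(k, vs) for k, vs in groups.items()}

print(run_mapreduce([(1, "to be or not to be")], map_fn, reduce_fn))
# {'to': [2], 'be': [2], 'or': [1], 'not': [1]}
```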

4. MapReduce based PSO
We first design a data-parallel model for standard PSO to be adapted to MapReduce. Since the only dependency between particles is the exchange of best positions, PSO can be parallelized as shown in the following diagram. In an iteration of Particle Swarm Optimization, each particle in the swarm moves to a new position, updates its velocity, evaluates the function at the new point, updates its personal best if this value is the best seen so far, and updates its global best after comparison with its neighbors. Except for updating its global best, each particle updates independently of the rest of the swarm. Due to the limited communication among particles, updating a swarm can be formulated as a MapReduce operation. As a particle is mapped, it receives a new position, velocity, value, and personal best. In the reduce phase, it incorporates information from other particles in the swarm to update its global best. The MapReduce based PSO implementation conforms to the MapReduce model while performing the same calculations as standard Particle Swarm Optimization. Because the number of messages emitted by the map function is proportional to the size of the particle's dependents list, we decided to use standard PSO instead of traditional PSO: as mentioned previously, in standard PSO the best position is communicated only between left and right neighbors, not among all particles, which makes the MapReduce implementation more efficient.
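The iteration structure just described can be sketched as a driver loop: each pass over the swarm is one MapReduce round, and the reducers' output becomes the next round's input. This is a serial in-process simulation of that structure, not the Hadoop job itself; the map and reduce callables are passed in as parameters and all names are illustrative:

```python
from collections import defaultdict

def mapreduce_round(swarm, pso_map, pso_reduce):
    """One PSO iteration as a MapReduce round: map every particle,
    group emitted messages by destination id, then reduce per particle."""
    groups = defaultdict(list)
    for pid, state in swarm.items():
        for dest, msg in pso_map(pid, state):    # map: move/evaluate, emit messages
            groups[dest].append(msg)             # shuffle: group by destination key
    return {pid: pso_reduce(pid, msgs) for pid, msgs in groups.items()}

def run(swarm, pso_map, pso_reduce, iterations):
    for _ in range(iterations):                  # each iteration = one MapReduce job
        swarm = mapreduce_round(swarm, pso_map, pso_reduce)
    return swarm
```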

Figure 2: Diagram of Parallel PSO

We assign a unique ID to each particle, which serves as the key in the Map stage; the particle state is represented as a string, which serves as the value of the (key, value) pair in the Map function. Because standard PSO is used, the value consists of a dependents list (the left and right neighbors' ids), position, velocity, value, personal best position, personal best value, global best position, and global best value. For the Reduce stage we again use the particle ID as the key, and the value has the same layout as in the Map phase except that messages have empty dependents lists. A message is sent from one particle to another as part of the MapReduce operation. In the reduce phase, the recipient reads the personal best from the message and updates its global best accordingly.

4.2 Map Function
The MapReduce based PSO map function is called once for each particle in the population. The key is the id of the particle, and the value is its state string representation. The PSO mapper finds the new position and velocity of the particle and evaluates the function at the new point. It then calls the update method of the particle with this information. In addition to modifying the particle's state to reflect the new position, velocity, and value, this method replaces the personal best if a fitter position has been found. The key to implementing PSO in MapReduce is communication between particles. Each particle maintains a dependents list containing the ids of all neighbors that need information from the particle to update their own global bests. After the Map function updates the state of a particle, it emits messages to all dependent particles. When a message is emitted, its key is the id of the destination particle, and its value is the string representation, which includes the position, velocity, value, and personal best of the source particle. The global best of the message is also set to the personal best. If the particle is a dependent of itself, as is usually the case, the map function updates the global best of the particle if the personal best is an improvement. Finally, the map function emits the updated particle and terminates.

4.3 Reduce Function
The MapReduce based PSO reduce function receives a key and a list of all associated values. The key is the id of a particular particle in the population. The values list contains the newly updated particle and a message from each left and right neighbor.
The PSO reducer combines the information from all of these messages to update the global best position and global best value of the particle, and emits only the updated particle. The Map and Reduce phases are illustrated in fig. 3. They are performed sequentially, and the pair is iterated until a stop condition is met, for example a specified number of iterations.
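A condensed sketch of the mapper and reducer just described, assuming minimization and with particle state as a Python dict rather than the paper's string encoding (the field names, constants, and use of the particle's stored global best as the second attractor are our illustrative choices):

```python
import random

CHI, C1, C2 = 0.7298, 2.05, 2.05  # constriction coefficients (illustrative)

def pso_map(pid, p, f):
    """Move and evaluate one particle, update its bests, then emit the
    updated particle plus a message to every dependent neighbor."""
    r1, r2 = random.random(), random.random()
    p["vel"] = [CHI * (v + C1 * r1 * (pb - x) + C2 * r2 * (gb - x))
                for v, x, pb, gb in zip(p["vel"], p["pos"],
                                        p["pbest_pos"], p["gbest_pos"])]
    p["pos"] = [x + v for x, v in zip(p["pos"], p["vel"])]
    p["value"] = f(p["pos"])
    if p["value"] < p["pbest_val"]:          # fitter position found
        p["pbest_val"], p["pbest_pos"] = p["value"], list(p["pos"])
    if p["pbest_val"] < p["gbest_val"]:      # particle is a dependent of itself
        p["gbest_val"], p["gbest_pos"] = p["pbest_val"], list(p["pbest_pos"])
    msg = {"pbest_val": p["pbest_val"], "pbest_pos": list(p["pbest_pos"])}
    return [(pid, p)] + [(dest, msg) for dest in p["deps"]]

def pso_reduce(pid, values):
    """Fold the neighbors' reported personal bests into this particle's
    global best; emit only the updated particle."""
    particle = next(v for v in values if "pos" in v)   # the full particle state
    for msg in values:
        if msg.get("pbest_val", float("inf")) < particle["gbest_val"]:
            particle["gbest_val"] = msg["pbest_val"]
            particle["gbest_pos"] = list(msg["pbest_pos"])
    return pid, particle
```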


Figure 3: Map and Reduce Phase

5. Experimental results
We implemented both serial PSO and the parallel MapReduce based PSO. The proposed method performs the same operations as the sequential method; however, instead of performing PSO iterations internally, it delegates them to the Hadoop MapReduce system. After creating the initial swarm, it saves the particles to a file as a list of key-value pairs. This file is the input for the first MapReduce operation. Hadoop performs a sequence of MapReduce operations, each of which evaluates a single iteration of the particle swarm. The output of each MapReduce operation represents the state of the swarm after that iteration of PSO, and this output is used as the input for the following iteration. In each MapReduce operation, Hadoop calls map and reduce in parallel to update the particles in the swarm.

The experimental platform for this paper is a cluster with 8 nodes, each with an Intel Core 2 Duo 2.20GHz CPU. We set up Hadoop 1.0.3 (released May 2012) on Ubuntu Linux 10.04 LTS. We compared the performance of MapReduce based PSO and serial PSO on the four classical benchmark test functions listed in table 1.

Table 1: Benchmark test functions
Name   Equation                                                            Bounds
f1     sum_{i=1..D} x_i^2                                                  (-100, 100)^D
f2     sum_{i=1..D} [x_i^2 - 10*cos(2*pi*x_i) + 10]                        (-10, 10)^D
f3     (1/4000)*sum_{i=1..D} x_i^2 - prod_{i=1..D} cos(x_i/sqrt(i)) + 1    (-600, 600)^D
f4     sum_{i=1..D-1} [100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]              (-10, 10)^D
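The four benchmarks in table 1 are standard test functions (commonly known as sphere, Rastrigin, Griewank, and Rosenbrock); straightforward implementations are:

```python
import math

def f1(x):  # sphere
    return sum(v * v for v in x)

def f2(x):  # Rastrigin
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f3(x):  # Griewank
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def f4(x):  # Rosenbrock
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

# The optimum of f1-f3 is 0 at the origin; f4's optimum is 0 at (1, ..., 1).
```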


The variables of f1, f2 and f3 are independent, while the variables of f4 are dependent; that is, there are related variables, such as the i-th and (i+1)-th variables. The optimal value of all four functions is 0. In this paper, standard PSO is run both on Hadoop MapReduce in parallel and on a single PC serially; we call these HMPPSO and SPSO, respectively. We define Speedup as the factor by which HMPPSO runs faster than SPSO:

γ = T_SPSO / T_HMPPSO   (4)

where γ is the Speedup, and T_SPSO and T_HMPPSO are, respectively, the times that SPSO and HMPPSO need to optimize a function over a given number of iterations. In the following, Iter stands for the number of iterations, D is the dimension, and N is the swarm population; SPSO-Time and HMPPSO-Time stand for the time (in seconds) consumed by SPSO on one CPU and by HMPPSO on MapReduce, respectively. SPSO-Value and HMPPSO-Value stand for the mean final optimized function values over 10 runs on one CPU and on MapReduce, respectively. The experimental results and analysis are given below.

A. Running Time and Speedup versus Swarm Population
We run both HMPPSO and SPSO on f1, f2 and f3 for 10 independent runs each, and the results are shown in tables 2, 3 and 4 (D=50, Iter=2000).

Table 2: SPSO and HMPPSO results on f1
N      SPSO time   HMPPSO time   Speedup
400    80.12       24.46         3.27
1000   256.44      51.74         4.95
2000   505.56      97.94         5.16

Table 3: SPSO and HMPPSO results on f2
N      SPSO time   HMPPSO time   Speedup
400    53.23       9.43          5.64
1000   136.34      23.61         5.77
2000   274.33      47.59         5.76

Table 4: SPSO and HMPPSO results on f3
N      SPSO time   HMPPSO time   Speedup
400    105.45      24.58         4.29
1000   276.37      54.94         5.03
2000   531.85      89.83         5.92

After analyzing the data in tables 2, 3 and 4, we can draw some conclusions.
1) Population Size Setup: When the population size grows (from 400 to 2000), the optima found are of the same magnitude; no obvious precision improvement can be seen. So a large swarm population is not always necessary. Exceptions may still exist where a large swarm population is required to obtain better optimization results, especially in real-world optimization problems.
2) Running Time: As shown in Fig. 4, the running time of HMPPSO and SPSO is proportional to the swarm population; that is, the time increases linearly with the swarm population, keeping the other parameters constant. From f1 and f2 to f3, the complexity of computation increases.
f1 contains only square arithmetic, f2 contains square and cosine arithmetic, while f3 contains not only square and cosine but also square-root arithmetic. For the same population size, SPSO takes much more time to optimize a function with more complex arithmetic than one with simpler arithmetic, over the same number of iterations. However, this is no longer true for HMPPSO. Notice that the three lines representing the time consumed by HMPPSO when optimizing f1, f2 and f3 overlap each other in Fig. 4; that is, the time remains almost the same for an HMPPSO swarm with a given population to optimize any of them, over the same number of iterations. So the more complex a function's arithmetic, the greater the speed advantage HMPPSO gains over SPSO.
3) Speedup: As seen from Fig. 5, the speedup for the same function increases with the population size, but it is bounded by a constant. Furthermore, the line of a function with more complex arithmetic lies above the lines of functions with simpler arithmetic (f3 above f2, f2 above f1); that is to say, the function with more complex arithmetic has a higher speedup.

Figure 4: Running time and Swarm Population

Figure 5: Speedup and Swarm Population

B. Running Time and Speedup versus Dimension
Now we fix the swarm population to a constant number and vary the dimension, analyzing the relationship between running time (as well as speedup) and dimension. We run both HMPPSO and SPSO on f1, f2 and f3 for 10 runs each, and the results are shown in tables 5, 6 and 7 (N=400, Iter=2000).

Table 5: SPSO and HMPPSO results on f1
D      SPSO time   HMPPSO time   Speedup
50     123.37      36.59         3.37
100    248.72      61.38         4.05
200    511.14      102.35        4.99


Table 6: SPSO and HMPPSO results on f2
D      SPSO time   HMPPSO time   Speedup
50     258.53      50.61         5.01
100    503.45      91.81         5.48
200    1002.75     195.79        5.12

Table 7: SPSO and HMPPSO results on f3
D      SPSO time   HMPPSO time   Speedup
50     256.81      43.65         5.88
100    475.37      86.82         5.47
200    1035.61     177.96        5.81


Figure 6: Running Time and Dimension

From tables 5, 6 and 7, we can conclude that:
1) Running Time: As seen from Fig. 6, the running time of both HMPPSO and SPSO increases linearly with dimension, keeping the other parameters constant. Functions with more complex arithmetic (f2 and f3) need more time to be optimized by SPSO than the function with simpler arithmetic (f1), while the time needed by HMPPSO is almost the same for all three, with the same problem dimension, just as noted in Section 5.A.
2) Speedup: The speedup remains almost the same as the dimension grows. The reason is that the parallelization is applied only over the population in SPSO, not over the dimensions. Still, the function with more complex arithmetic has a higher speedup.

6. Conclusion
We have presented a novel method to run PSO on MapReduce in parallel to solve large scale problems. As the results show, MapReduce based PSO can enlarge the swarm population and problem dimension sizes, speed up the optimization greatly, and provide users with a feasible solution for complex optimization problems in reasonable time. The speedup remains almost the same as the dimension grows, and the running time of both HMPPSO and SPSO increases linearly with dimension, keeping the other parameters constant.

Acknowledgements
The authors would like to thank Dariun Branch, Islamic Azad University, for its financial support.


References
[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE International Conference on Neural Networks, 1995, pp. 1942-1948, vol. 4.
[2] J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," in Proc. 6th Symposium on Operating Systems Design & Implementation (OSDI), San Francisco, CA, 2004.
[3] H. Karloff, S. Suri, and S. Vassilvitskii, "A model of computation for MapReduce," in Proc. Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, Austin, Texas, 2010.
[4] S. N. Srirama, P. Jakovits, and E. Vainikko, "Adapting scientific computing problems to clouds using MapReduce," Future Gener. Comput. Syst., vol. 28, pp. 184-192, 2012.
[5] H. Prasain, G. K. Jha, P. Thulasiraman, and R. Thulasiram, "A parallel particle swarm optimization algorithm for option pricing," in IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW), 2010, pp. 1-7.
[6] X. Lei and Z. Fengming, "Parallel particle swarm optimization for attribute reduction," in Eighth ACIS International Conference on Software Engineering, Artificial Intelligence, Networking, and Parallel/Distributed Computing (SNPD 2007), 2007, pp. 770-775.
[7] J. Hee-Myung, L. Hwa-Seok, and P. June-Ho, "Application of parallel particle swarm optimization on power system state estimation," in Transmission & Distribution Conference & Exposition: Asia and Pacific, 2009, pp. 1-4.
[8] A. W. McNabb, C. K. Monson, and K. D. Seppi, "Parallel PSO using MapReduce," in IEEE Congress on Evolutionary Computation (CEC 2007), 2007, pp. 7-14.
[9] D. Bratton and J. Kennedy, "Defining a standard for particle swarm optimization," in IEEE Swarm Intelligence Symposium (SIS 2007), 2007, pp. 120-127.

