Decision Context Based Evaluation of Multiattribute Decision Making Methods

by Subrata Chakraborty B. Sc. (Electronics), M. C. A., University of Pune, India

A Dissertation Submitted to Monash University in Fulfilment of the Requirements for the Degree of Doctor of Philosophy

Faculty of Information Technology Monash University, Australia November 2009

Copyright Notices Notice 1 Under the Copyright Act 1968, this thesis must be used only under the normal conditions of scholarly fair dealing. In particular no results or conclusions should be extracted from it, nor should it be copied or closely paraphrased in whole or in part without the written consent of the author. Proper written acknowledgement should be made for any assistance obtained from this thesis.

Notice 2 I certify that I have made all reasonable efforts to secure copyright permissions for third-party content included in this thesis and have not knowingly added copyright content to my work without the owner’s permission.

Declaration

I, Subrata Chakraborty, hereby declare that this thesis contains no material which has been accepted for the award of any other degree or diploma in any university or other institution. To the best of my knowledge and belief, this thesis contains no material previously published or written by other authors, except where due reference is made in the text of the thesis.

SUBRATA CHAKRABORTY Date:

Faculty of Information Technology Monash University, Australia


Acknowledgements

I would like to express my heartiest gratitude to my research supervisor, Professor Chung-Hsing Yeh. This work could never have been done successfully without his knowledgeable guidance, support and encouragement. I will always treasure the inspiration and experience I have gained by working with him. I am thankful to the Faculty of Information Technology and Monash University for providing the necessary financial support during my studies. I also thank the staff at the Clayton School of Information Technology for their great support during this period. I am grateful to my friends and fellow PhD students for their support and for making the study period fun and enjoyable. The time we spent together was encouraging and helped me at various stages of my research. Finally, I would like to thank my parents and my family for their continuous support, encouragement and sacrifices at every stage of my life. I must also thank my wife for her patience, understanding and support during this research.


Abstract

Multiattribute decision making (MADM) methods generally involve evaluating a set of decision alternatives against a set of evaluation criteria or attributes in order to achieve a decision outcome such as a ranking or selection. The diversity among decision problems in terms of their problem structures, characteristics, decision information and specific requirements has led to the development of numerous MADM methods. With many MADM methods available, selecting the most suitable one for a given problem is a challenging task for the decision maker, who may not have the experience and expertise required to judge the suitability of a method for a given problem. To help the decision maker select a suitable method, several guidelines have been developed, along with empirical and simulation studies, during the past few decades. Although existing studies provide valuable insights for selecting a suitable MADM method for specific decision problems, they are inadequate and unable to resolve several open research issues in MADM research, including: (a) the unavailability of general guidelines for specific decision settings, (b) the lack of method comparison experiments at detailed levels, (c) the inability to find the most preferred method for specific decision contexts, (d) the lack of objective measures to compare a set of suitable methods for a given problem, (e) the inability to consider all the stakeholders in method evaluation, and (f) the inadequacy of comparison studies for group decision methods.


In this study, various decision contexts are identified to understand the decision settings and the decision maker's evaluation and selection requirements. Six new methodologies are developed to resolve decision context specific issues in the area of MADM method evaluation, comparison and selection. A new simulation model is developed to provide decision setting specific method evaluation and selection guidelines, and experiments are conducted to illustrate its applications. This work highlights the need for detailed level method comparisons that consider the internal processes of MADM methods, including normalisation procedures, aggregation techniques and consensus techniques. A new rank similarity based approach, together with an objective measure, is developed to compare a set of suitable methods for a given decision problem in order to find the most preferred one. The approach measures the similarity between the ranking outcomes produced by the methods being evaluated. An alternatives-oriented approach is developed to give due consideration to the decision alternatives in the method evaluation process when they are key stakeholders. This approach provides a new dimension to method evaluation and selection. A comparison between the TOPSIS and the modified TOPSIS methods is conducted to justify the applications of these methods in MADM problem solving. Simulation experiments and mathematical proofs are provided to help the decision maker choose between them rationally.

A new group consensus technique is developed to provide a much needed rational alternative to the existing techniques and to justify their usage. A novel consensus technique selection approach is developed to compare and evaluate group consensus techniques in an objective manner, in order to find the one that most satisfies the group of decision makers as a whole. A new group decision method is developed based on comparative searching of the complete solution space, which consists of all the possible decision outcomes. The method finds the solution most preferred by the whole group of decision makers. This study contributes to MADM research by introducing the concept of decision context based evaluation of MADM methods and by developing new approaches, models and techniques to address context specific requirements in varying decision settings. It also highlights the need for new perspectives on the method evaluation process. The research outcomes of this study have great potential for practical problem solving. The various experimental results can be used as insightful guidelines for selecting the most suitable method for a given problem. With their simplicity and flexibility in concept and computation, the new approaches developed can be easily adapted to address new requirements in MADM method evaluation.


Table of Contents

Declaration ................................................................................................................... i

Acknowledgements..................................................................................................... ii

Abstract ...................................................................................................................... iii

List of Publications.................................................................................................. xiii

List of Tables ........................................................................................................... xiv

List of Figures .......................................................................................................... xvi

Chapter 1 Introduction ..... 1
  1.1 Preamble ..... 1
  1.2 Multiattribute Decision Making Challenges ..... 2
  1.3 Research Objectives ..... 4
  1.4 Research Outline ..... 5

Chapter 2 A Review of Multiattribute Decision Making Methods and Method Comparisons ..... 9
  2.1 Introduction ..... 9
  2.2 Classification of Multiattribute Decision Making Methods ..... 10
    2.2.1 Classification Based on the Data Type ..... 10
    2.2.2 Classification Based on the Information Type and Features ..... 12
    2.2.3 Classification Based on the Number of Decision Makers ..... 14
  2.3 Multiattribute Value Theory Based Methods ..... 15
    2.3.1 Simple Additive Weighting ..... 16
    2.3.2 Technique for Order Preference by Similarity to Ideal Solution ..... 16
    2.3.3 Weighted Product ..... 17
  2.4 Method Comparison Studies ..... 17
  2.5 Concluding Remarks ..... 21

Chapter 3 Methodology Formulation and Development for Method Evaluation and Selection ..... 22
  3.1 Introduction ..... 22
  3.2 The Multiattribute Decision Making Problem and Notation ..... 23
  3.3 Decision Context and Method Evaluation Challenges ..... 25
    3.3.1 Decision Context A ..... 26
      3.3.1.1 Specifications for Decision Context A ..... 26
      3.3.1.2 Current Challenges for Decision Context A ..... 27
    3.3.2 Decision Context B ..... 27
      3.3.2.1 Specifications for Decision Context B ..... 27
      3.3.2.2 Current Challenges for Decision Context B ..... 28
    3.3.3 Decision Context C ..... 28
      3.3.3.1 Specifications for Decision Context C ..... 28
      3.3.3.2 Current Challenges for Decision Context C ..... 28
    3.3.4 Decision Context D ..... 29
      3.3.4.1 Specifications for Decision Context D ..... 29
      3.3.4.2 Current Challenges for Decision Context D ..... 30
    3.3.5 Decision Context E ..... 30
      3.3.5.1 Specifications for Decision Context E ..... 30
      3.3.5.2 Current Challenges for Decision Context E ..... 30
    3.3.6 Decision Context F ..... 31
      3.3.6.1 Specifications for Decision Context F ..... 31
      3.3.6.2 Current Challenges for Decision Context F ..... 32
  3.4 Overview of the Methodology Developments ..... 33
  3.5 Concluding Remarks ..... 36

Chapter 4 Developments I: A Simulation Model for Method Evaluation and Selection ..... 38
  4.1 Introduction ..... 38
  4.2 The Simulation Model ..... 39
  4.3 Performance Measures ..... 41
    4.3.1 The Ranking Consistency Index ..... 41
    4.3.2 The Weight Sensitivity Index ..... 44
  4.4 Concluding Remarks ..... 48

Chapter 5 Applications of Developments I: Simulation Based Selection of a Normalisation Procedure ..... 50
  5.1 Introduction ..... 50
  5.2 Normalisation Procedures Evaluated ..... 51
    5.2.1 Vector Normalisation ..... 52
    5.2.2 Linear Scale Transformation (Max-Min) ..... 52
    5.2.3 Linear Scale Transformation (Max) ..... 53
    5.2.4 Linear Scale Transformation (Sum) ..... 54
  5.3 Multiattribute Decision Making Methods Evaluated ..... 55
    5.3.1 The SAW Method ..... 55
    5.3.2 The TOPSIS Method ..... 57
  5.4 Experiments and Results for SAW ..... 59
    5.4.1 Simulation Experiments for SAW ..... 60
    5.4.2 Experimental Results for SAW ..... 62
      5.4.2.1 Results for Change in Alternative Numbers ..... 62
      5.4.2.2 Results for Change in Attribute Numbers ..... 64
      5.4.2.3 Results for Change in Data Range ..... 65
  5.5 Experiments and Results for TOPSIS ..... 67
    5.5.1 Simulation Experiments for TOPSIS ..... 67
    5.5.2 Experimental Results for TOPSIS ..... 69
      5.5.2.1 Results for Change in Alternative Numbers ..... 69
      5.5.2.2 Results for Change in Attribute Numbers ..... 71
      5.5.2.3 Results for Change in Data Range ..... 73
  5.6 Concluding Remarks ..... 75

Chapter 6 Developments II: Rank Similarity Based Method Evaluation and Selection ..... 77
  6.1 Introduction ..... 77
  6.2 Methodology Development ..... 78
    6.2.1 Rank Similarity and Method Evaluation ..... 78
    6.2.2 The Rank Correlation Coefficient ..... 79
    6.2.3 Rank Similarity Index ..... 79
  6.3 Numerical Example ..... 81
    6.3.1 Methods Used in the Example ..... 81
    6.3.2 The Example ..... 83
  6.4 Concluding Remarks ..... 86

Chapter 7 Developments III: An Alternatives-Oriented Method Evaluation and Selection ..... 87
  7.1 Introduction ..... 87
  7.2 The Alternatives-Oriented Approach and the Preference Level ..... 89
  7.3 Numerical Example ..... 92
  7.4 Application in Decision Support Systems ..... 95
  7.5 Concluding Remarks ..... 99

Chapter 8 Developments IV: Comparisons between TOPSIS and Modified TOPSIS Methods ..... 100
  8.1 Introduction ..... 100
  8.2 TOPSIS and Modified TOPSIS ..... 101
    8.2.1 The TOPSIS Method ..... 101
    8.2.2 The Modified TOPSIS Method ..... 101
  8.3 Method Comparisons ..... 103
    8.3.1 Comparison with Equal Weight Settings ..... 103
    8.3.2 Comparison with Non-Equal Weight Settings ..... 106
      8.3.2.1 Simulation Results ..... 106
      8.3.2.2 Mathematical Analysis ..... 107
  8.4 Concluding Remarks ..... 110

Chapter 9 Developments V: Evaluation of Consensus Techniques in Multiattribute Group Decision Making ..... 112
  9.1 Introduction ..... 112
  9.2 Group Consensus Techniques ..... 113
    9.2.1 Consensus during the Initial Stage ..... 113
    9.2.2 Consensus during the Intermediate Stage ..... 115
    9.2.3 Consensus during the Final Stage ..... 115
  9.3 New Consensus Technique Based on TOPSIS ..... 116
  9.4 Consensus Technique Evaluation ..... 119
  9.5 Numerical Example ..... 120
  9.6 A Simulation and Ties in Ranking Outcome ..... 123
  9.7 Concluding Remarks ..... 124

Chapter 10 Developments VI: Comparison Based Group Ranking Outcome for Multiattribute Group Decisions ..... 125
  10.1 Introduction ..... 125
  10.2 Methodology Development ..... 126
    10.2.1 Finding the Most Preferred Group Ranking Outcome ..... 126
    10.2.2 The Outcome Similarity Index ..... 127
  10.3 Numerical Example ..... 128
  10.4 Concluding Remarks ..... 132

Chapter 11 Conclusions ..... 133
  11.1 Research Developments Summary ..... 133
    11.1.1 Developments I: A Simulation Model and Applications ..... 133
    11.1.2 Developments II: Rank Similarity Based Approach ..... 134
    11.1.3 Developments III: Alternatives-Oriented Approach ..... 135
    11.1.4 Developments IV: TOPSIS and Modified TOPSIS Comparison ..... 135
    11.1.5 Developments V: Group Consensus Technique ..... 136
    11.1.6 Developments VI: Comparison Based Group Decision Method ..... 137
  11.2 Application of the Developments ..... 137
  11.3 Research Contributions ..... 139
  11.4 Future Research ..... 142

References ..... 144
Appendix A: Notation ..... 159
Appendix B: Glossary of Terms ..... 164
Appendix C: Simulation Results ..... 168
  C.1 Results for SAW ..... 168
    C.1.1 Results for Change in Alternative Numbers ..... 168
    C.1.2 Results for Change in Attribute Numbers ..... 173
    C.1.3 Results for Change in Data Range ..... 178
  C.2 Results for TOPSIS ..... 181
    C.2.1 Results for Change in Alternative Numbers ..... 181
    C.2.2 Results for Change in Attribute Numbers ..... 186
    C.2.3 Results for Change in Data Range ..... 191

List of Publications

Chakraborty S and Yeh C-H (2007a). A Simulation Based Comparative Study of Normalization Procedures in Multiattribute Decision Making. In: Proceedings of the WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases (AIKED'07): 102-109.

Chakraborty S and Yeh C-H (2007b). Consistency Comparison of Normalization Procedures in Multiattribute Decision Making. WSEAS Transactions on Systems and Control 2 (2): 193-200.

Chakraborty S and Yeh C-H (2007c). Comparing Normalization Procedures in Multiattribute Decision Making under Various Problem Settings. In: Proceedings of the Fifth International Conference on Information Technology in Asia (CITA'07): 36-42.

Chakraborty S and Yeh C-H (2009). A Simulation Comparison of Normalization Procedures for TOPSIS. In: Proceedings of the International Conference on Computers and Industrial Engineering (CIE39): 1815-1820.

List of Tables

Table 3-1 Decision contexts addressed in various chapters ..... 37
Table 4-1 RCI and WSI summary ..... 48
Table 5-1 Four commonly used normalisation procedures ..... 54
Table 5-2 Four MADM methods for the experiment with SAW ..... 60
Table 5-3 Four MADM methods for the experiment with TOPSIS ..... 67
Table 5-4 Simulation results in terms of performance ..... 75
Table 6-1 Nine MADM methods used in the example ..... 82
Table 6-2 Decision matrix used in the example ..... 83
Table 6-3 Resultant rank matrix ..... 84
Table 6-4 Rank correlation coefficient between MADM methods ..... 84
Table 6-5 Rank similarity index for suitable MADM methods ..... 85
Table 7-1 Ranking outcomes obtained ..... 93
Table 7-2 Resultant rank matrix ..... 93
Table 7-3 The method preference degree matrix ..... 94
Table 7-4 The scaled method preference degree matrix ..... 94
Table 7-5 The preference level for MADM method ..... 94
Table 7-6 Comparison between existing DSS and alternatives-oriented DSS ..... 98
Table 9-1 Rank matrix generated by combining individual ranking outcomes ..... 121
Table 9-2 The rank score matrix ..... 122
Table 9-3 The overall rank score and group ranking outcomes ..... 122
Table 9-4 Rank similarity index for group outcomes ..... 123
Table 10-1 Individual ranking outcomes for each decision maker ..... 129
Table 10-2 Solution space with all the possible ranking outcomes ..... 130
Table 10-3 OSI for each possible outcome ..... 131

List of Figures

Figure 1-1 Stages of solving a multiattribute decision making problem ..... 2
Figure 1-2 The research framework ..... 6
Figure 2-1 MADM classification based on the data type ..... 11
Figure 2-2 MADM classification based on the information type and features ..... 13
Figure 2-3 MADM classification based on the number of decision makers ..... 14
Figure 3-1 Overview of the methodology developments ..... 34
Figure 5-1 With 10 attributes, the effects on the ranking consistency for changes in the number of alternatives ..... 63
Figure 5-2 With 6 alternatives, the effects on the ranking consistency for changes in the number of attributes ..... 64
Figure 5-3 With 12 alternatives, the effects on the ranking consistency for changes in the number of attributes ..... 65
Figure 5-4 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range ..... 66
Figure 5-5 With 12 attributes and 12 alternatives, the effects on the ranking consistency for changes in the data range ..... 66
Figure 5-6 With 12 attributes, the effects on the ranking consistency for changes in the number of alternatives ..... 70
Figure 5-7 With 4 alternatives, the effects on the ranking consistency for changes in the number of attributes ..... 72
Figure 5-8 With 20 alternatives, the effects on the ranking consistency for changes in the number of attributes ..... 72
Figure 5-9 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range ..... 73
Figure 5-10 With 14 attributes and 14 alternatives, the effects on the ranking consistency for changes in the data range ..... 74
Figure 7-1 Existing DSS for MADM problems ..... 96
Figure 7-2 Alternatives-oriented DSS for multiattribute decision problems ..... 97
Figure 8-1 Distance in one dimensional space ..... 107
Figure 8-2 Distance in two dimensional space ..... 108
Figure 9-1 The group decision process in the evaluation and selection phases ..... 114
Figure 11-1 A computer based decision support system for method selection ..... 138

Chapter 1 Introduction

1.1 Preamble

Decision making is an important aspect of our daily life. Simple decisions, like selecting a restaurant for dinner, and more complex decisions, like selecting a strategy for a country or organisation, require a certain decision process, often known as decision analysis (Keeney and Raiffa, 1976; Hwang and Yoon, 1981; Deng, 1998). Making correct decisions is crucial, as they may have a significant impact on the future direction of a person, an organisation, a country or the world.

Multiattribute decision making (MADM) is a special area of decision analysis. General MADM problems involve the evaluation, selection and ranking of a set of courses of action, often referred to as decision alternatives, with respect to a set of evaluation criteria or attributes. Most real world decision problems are multiattribute in nature. The suitability and applicability of MADM methods for solving real world decision problems have attracted researchers and decision makers from diverse areas including management, economics, engineering, computing, mathematics, business, psychology, social science and medical science. The vast diversity in decision problems and problem areas has led to the development of numerous MADM methods (Keeney and Raiffa, 1976; Hwang and Yoon, 1981; Zeleny, 1982; Hwang and Lin, 1987; Yoon and Hwang, 1995).

1.2 Multiattribute Decision Making Challenges

Figure 1-1 shows the two major stages of solving an MADM problem: (a) problem structuring and (b) problem solving. The structuring of the problem includes identifying the set of alternatives, identifying the set of attributes, and deciding the preferences. Stage 1 is highly dependent on the decision maker.

[Figure 1-1 Stages of solving a multiattribute decision making problem: Stage 1 (structuring the decision problem) involves selecting the set of alternatives to be evaluated, deciding the set of attributes to be considered, and specifying the preferences and specific requirements, which together define the decision problem; Stage 2 (solving the decision problem) uses an appropriate MADM method to produce the decision outcome.]


The decision maker is usually familiar with the problem area of interest and is able to identify a set of decision alternatives. Deciding on a set of attributes is sometimes challenging due to inter-attribute relations such as attribute hierarchy (Saaty, 1977). The decision maker is often required to specify preferences such as the relative importance of the attributes. The complexity of Stage 1 may increase with diversity in the data types (deterministic, probabilistic and fuzzy) present in the decision problem (Hwang and Yoon, 1981). Although problem structuring is a very challenging stage, it is assumed that the decision maker has enough skill, expertise and knowledge in the problem area to structure the decision problem.

After the decision problem is specified in Stage 1, the decision maker needs to solve it with an appropriate MADM method, as shown in Stage 2. This stage involves many challenging tasks in multicriteria analysis (Yeh, 2002; Chakraborty and Yeh, 2007a) for the following reasons:

(a) Often the decision maker does not have the required expertise and knowledge about MADM methods and is unable to make a rational selection of an appropriate method.

(b) For a given problem there may be multiple suitable MADM methods available. Objective evaluation is required to select the most preferred one among the set of suitable methods under various decision contexts and settings.


(c) Each MADM problem contains unique features in terms of decision settings. Hence, a generalised method selection guideline applicable to all decision problems is not available.

(d) Very few studies on method evaluation and comparison have been conducted, and their results are inadequate to cover the vast majority of MADM problems.

(e) Many researchers have applied MADM methods to solve different MADM problems without proper justification and validation, thus leaving further usage open to question.

(f) Method selection in a group environment has not been addressed adequately.

The selection of an MADM method may have a great impact on the outcome of an MADM problem. The challenges identified above, and the lack of sufficient research on method evaluation and selection when multiple suitable MADM methods are available for a given problem, highlight the need for further investigation and study in this area.

1.3 Research Objectives

In order to address the challenges of MADM method evaluation and selection, and to provide decision makers with rational and efficient method selection techniques and approaches, the primary objectives of this study are as follows:


(a) Review existing method evaluation and comparison studies to identify important gaps worth investigating.

(b) Develop general method selection guidelines by considering various practical MADM decision settings.

(c) Identify various decision contexts considered by decision makers when evaluating MADM methods, and develop context specific method selection approaches.

(d) Develop objective measures to validate comparison results.

(e) Develop new techniques and methods for group decision settings.

(f) Develop new measures to compare group decision making methods.

1.4 Research Outline

Figure 1-2 shows the research framework of this thesis. It outlines the improvements and developments achieved in this study. The developments are grouped into three major categories: (a) simulation based study, (b) decision problems with a single decision maker, and (c) decision problems with multiple decision makers. The simulation based study provides general method evaluation guidelines. The studies on single and group decision problems develop new approaches for evaluating and comparing the MADM methods used to solve these problems.


[Figure 1-2 The research framework: the introduction (Chapter 1), the review of multiattribute decision making methods and method comparisons (Chapter 2) and the methodology formulation and development for method evaluation (Chapter 3) lead into three streams: the simulation based study (simulation model for method evaluation, Chapter 4; normalisation procedure selection, Chapter 5), the single decision problem stream (rank similarity based method evaluation, Chapter 6; alternatives-oriented method evaluation, Chapter 7; TOPSIS and modified TOPSIS comparison, Chapter 8) and the group decision problem stream (group consensus technique evaluation, Chapter 9; group outcome based on ranking comparison, Chapter 10), followed by the conclusions (Chapter 11).]

In Chapter 2, a brief review of available MADM methods is presented. A review of method evaluation and comparison studies is also presented to identify research areas that require improvements and new developments.

In Chapter 3, the general MADM problem is formulated along with the notation to be used in the thesis. A set of decision contexts is then identified, along with their challenging issues, to pave the way for the development of new context specific method evaluation and selection approaches and techniques.

Chapter 4 develops a new simulation model for method evaluation and selection. The model is capable of comparing MADM methods based on the ranking outcomes they produce. The model can also determine the sensitivity of a method to changes in decision information. It can provide general guidelines for method selection under various decision settings using a large number of simulated decision problems.

In Chapter 5, the use of a particular normalisation procedure with the simple additive weighting (SAW) and the technique for order preference by similarity to ideal solution (TOPSIS) methods is justified by using the simulation model developed in Chapter 4.

Chapter 6 develops a new approach to method selection based on ranking outcomes. The approach is capable of comparing a set of suitable methods for a given problem to find the most suitable one in an objective manner. A new measure is developed for the purpose of objective comparison. A simple example is provided to illustrate the new approach.


In Chapter 7, a novel method selection approach is developed to select the most preferred MADM method from the perspective of the decision alternatives. The approach provides a new way of recognising the importance of all the stakeholders of a decision problem. A new objective measure is developed for method evaluation and selection. An example and the potential application of this new approach in decision support systems are also presented.

Chapter 8 compares the TOPSIS and the modified TOPSIS methods using simulations and mathematical proofs to justify their applicability.

In Chapter 9, a new group consensus technique is developed, along with a comparison approach to justify the use of existing consensus techniques. An objective measure based on the developments in Chapter 6 is also developed to validate the comparison results. The comparison is conducted based on the ranking outcomes produced. An example is provided for a better understanding of the new approach.

Chapter 10 develops a new method for solving group decision problems using comparison based searching over the whole solution space. The new method is unique in its capability of considering all possible outcomes while finding the group outcome for a given group decision problem. A worked example is provided to illustrate the new method.

Chapter 11 summarises the developments achieved in this study along with their potential applications. The contributions of this research are also highlighted before suggesting future research directions.


Chapter 2 A Review of Multiattribute Decision Making Methods and Method Comparisons

2.1 Introduction

Multiattribute decision making (MADM) methods have gained wide popularity for solving practical decision problems involving a set of decision alternatives and evaluation criteria. Various MADM methods have been developed over the past few decades to solve different types of MADM problems. With several MADM methods available for a given problem, method comparison and selection has become a significant research issue (Zanakis et al., 1998; Chakraborty and Yeh, 2007a, 2007b, 2009). Existing comparison studies have shown a major interest in justifying the suitability of certain MADM methods for a given decision problem. The existing simulation based and empirical studies are inadequate for handling method evaluation and selection in a comprehensive manner.

In this chapter, a review of the commonly used MADM methods and their classifications is first presented. A review of method comparison studies is then presented to identify the limitations of existing studies, which lead to the methodology developments in this study.


2.2 Classification of Multiattribute Decision Making Methods

MADM methods are diverse in structure, methodology and applications. Among the various classifications available for MADM methods, the widely known ones are based on (a) the data type in the decision problem, (b) the decision information type and features, and (c) the number of decision makers involved (Hwang and Yoon, 1981; Triantaphyllou, 2000).

2.2.1 Classification Based on the Data Type

Figure 2-1 shows the MADM method classification based on the data type. MADM problems may involve probabilistic data, and stochastic MADM methods have been developed to deal with such data. Stochastic dominances for discrete cases are defined and used in developing outranking and rough approximation models (Hadar and Russel, 1969; Zaras and Martel, 1994; Zaras, 2001). Probability based confidence index models have been developed to obtain relative preferences between alternatives (Martel and D'Avignon, 1982; Martel et al., 1986). Stochastic utility additive methods use ordinal regression (Siskos, 1980 and 1983; Jacquet-Lagreze and Siskos, 1982). Interactive methods for stochastic MADM problems have also been developed (Nowak, 2006). Stochastic methods are used to solve decision problems in various areas including risk analysis, portfolio analysis, financial planning and strategic planning (Muhlemann et al., 1978; De et al., 1982; Vinso, 1982; Eom et al., 1987-88; Lai and Hwang, 1993; Steuer and Na, 2003; Hanandeh and El-Zein, 2009).


[Figure 2-1 MADM classification based on the data type: stochastic data are handled by methods such as stochastic UTA and stochastic dominance; fuzzy data by methods such as fuzzy TOPSIS, fuzzy utility and outranking methods; deterministic data by methods such as SAW, TOPSIS, WP, ELECTRE and AHP.]

Fuzzy MADM problems contain information in linguistic terms. The concept of fuzzy set theory (Zadeh, 1965) is applied to formulate fuzzy MADM problems (Bellman and Zadeh, 1970). Fuzzy MADM problems let the decision maker express preferences in linguistic terms rather than on a crisp scale. Over the past few decades, numerous fuzzy MADM methods have been developed to solve various practical MADM problems. Among others, these developments include the α-cut (Baas and Kwakernaak, 1977; Kwakernaak, 1979; Cheng and McInnis, 1980; Dubois and Prade, 1982), fuzzy arithmetic (Bonissone, 1980 and 1982), the eigenvector method (Saaty, 1977), possibility measures (Dubois et al., 1988), outranking methods (Siskos et al., 1984; Brans et al., 1984), fuzzy TOPSIS (Rebai, 1993; Chen and Wei, 1997; Chu, 2002b) and fuzzy utility methods (Seo and Sakawa, 1984 and 1985). Detailed classifications of fuzzy MADM methods with various decision issues and applications can be found in several studies (e.g. Chen and Hwang, 1992; Deng, 1998).


Deterministic MADM problems contain data in numeric form whose values are given precisely. MADM methods developed for this type of decision problem have gained wide acceptance due to their simplicity and computational efficiency. Some widely used methods in this class include SAW (Churchman and Ackoff, 1954; MacCrimmon, 1968; Hwang and Yoon, 1981), TOPSIS (Hwang and Yoon, 1981; Yoon and Hwang, 1995), ELECTRE (Benayoun et al., 1966; Roy, 1968, 1971, 1973 and 1991; Nijkamp, 1974), WP (Bridgman, 1922; Starr, 1972; Yoon, 1989) and AHP (Saaty, 1980 and 1994).

2.2.2 Classification Based on the Information Type and Features

Figure 2-2 shows the MADM method classification based on the availability and features of preference information. Non-compensatory methods are based on the notion that superiority in one attribute cannot be offset by inferiority in other attributes (Yoon and Hwang, 1995). Decision problems where no preference information from the decision maker is given can be solved by finding the nondominated alternatives using pairwise comparisons in the Dominance method (Yu, 1973 and 1975; Hadar and Russel, 1974; Bergstresser et al., 1976; Wehrung et al., 1978). With pessimistic and optimistic points of view, non-compensatory problems can be solved using the Maximin (MacCrimmon, 1968; Bellman and Zadeh, 1970; Foerster, 1979) and Maximax (Dawes, 1964; MacCrimmon, 1968; Foerster, 1979) methods respectively. These methods evaluate an alternative based on its weakest and strongest attributes respectively, and require the attributes to be on a common scale (Hwang and Yoon, 1981).
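To make the two rules concrete, the following is a minimal sketch in Python (an illustration rather than part of the methods' formal definitions); the ratings are hypothetical and assumed to be on a common scale.

```python
import numpy as np

# Hypothetical ratings for 3 alternatives over 3 attributes, on a common scale
X = np.array([[6.0, 9.0, 5.0],
              [7.0, 7.0, 7.0],
              [8.5, 4.0, 8.0]])

# Maximin (pessimistic): choose the alternative whose weakest attribute is best
maximin_choice = X.min(axis=1).argmax()   # -> alternative 1 (worst rating 7.0)

# Maximax (optimistic): choose the alternative whose strongest attribute is best
maximax_choice = X.max(axis=1).argmax()   # -> alternative 0 (best rating 9.0)
```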


[Figure 2-2 MADM classification based on the information type and features (Source: adapted from Hwang and Yoon, 1981): with no preference information, the Dominance method applies; with information on the environment, the pessimistic Maximin and optimistic Maximax methods apply; with information on the attributes, standard levels are handled by the Conjunctive and Disjunctive methods, ordinal preferences by the Lexicographic and Elimination by Aspect methods, and cardinal preferences by methods such as SAW, TOPSIS, WP, ELECTRE and AHP.]

The decision maker may provide preferences in various ways, including (a) standard levels, (b) ordinal preferences and (c) cardinal preferences (Hwang and Yoon, 1981). Decision problems where the alternatives must satisfy a minimum preference level for all attributes, or where the alternatives are to be evaluated based on their greatest attribute value, can be solved using the Conjunctive method and the Disjunctive method respectively (Simon, 1955; Dawes, 1964; Ando, 1979). Decision problems where ordinal preference values given by the decision maker represent the relative importance of the attributes can be solved using the Lexicographic method (Luce, 1956; Encarnacion, 1964; Bettman, 1971 and 1974; Fishburn, 1974) and the Elimination by Aspect method (Tversky, 1972a and 1972b; Bettman, 1974).
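As an illustration of the conjunctive and lexicographic rules just described, here is a minimal Python sketch; the cut-off levels, attribute importance order and ratings are all hypothetical.

```python
import numpy as np

X = np.array([[6.0, 9.0, 5.0],    # hypothetical performance ratings
              [7.0, 7.0, 7.0],
              [8.5, 4.0, 8.0]])

# Conjunctive screening: keep alternatives meeting a minimum level on every attribute
cutoffs = np.array([5.0, 5.0, 5.0])                   # hypothetical standard levels
survivors = np.where((X >= cutoffs).all(axis=1))[0]   # -> alternatives 0 and 1

# Lexicographic choice among survivors: compare on the most important attribute
# first, breaking ties with the next attribute in the importance order
importance_order = [0, 2, 1]                          # hypothetical ordinal preference
best = max(survivors, key=lambda i: tuple(X[i, j] for j in importance_order))
```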


MADM problems with cardinal preference information given by the decision maker can be solved with several widely used methods, including (a) additive utility based methods like SAW (Churchman and Ackoff, 1954; MacCrimmon, 1968; Klee, 1971) and AHP (Charnes et al., 1973; Saaty, 1977), (b) multiplicative utility based methods like WP (Bridgman, 1922; Starr, 1972; Yoon, 1989), (c) concordance measure based methods like ELECTRE (Roy, 1971; Nijkamp and Vandelft, 1977; Voogd, 1983) and (d) closeness to ideal solution based methods like TOPSIS (Hwang and Yoon, 1981; Zeleny, 1982; Yoon and Hwang, 1995).

2.2.3 Classification Based on the Number of Decision Makers

Figure 2-3 shows a method classification based on the number of decision makers involved in the problem solving process.

[Figure 2-3 MADM classification based on the number of decision makers: problems with a single decision maker are solved by methods such as SAW, TOPSIS, WP, ELECTRE and AHP; problems with a group of decision makers by methods such as group TOPSIS, social choice functions and the Borda score technique.]

With a single decision maker problem, the decision maker needs to formulate the decision problem, decide the attribute preferences and choose an appropriate method to solve the MADM problem. Widely used MADM methods for single decision maker problems include SAW (Churchman and Ackoff, 1954; MacCrimmon, 1968; Klee, 1971), AHP (Charnes et al., 1973; Saaty, 1977), WP (Bridgman, 1922; Starr, 1972; Yoon, 1989), ELECTRE (Roy, 1971; Nijkamp and Vandelft, 1977; Voogd, 1983) and TOPSIS (Hwang and Yoon, 1981; Zeleny, 1982; Yoon and Hwang, 1995).

MADM problems with more than one decision maker are known as multiattribute group decision making (MAGDM) problems. Group decision problems are similar to MADM problems, with the added complexity that all the decision makers need to achieve an agreed outcome which satisfies them most as a whole. The decision makers need to agree on various decision aspects including the alternatives, attributes, attribute weights and the method to be applied to solve the problem. Methods to solve MAGDM problems include extensions of MADM methods like group TOPSIS (Hwang and Lin, 1987; Chen, 2000; Chu, 2002a; Shih et al., 2007), various social choice functions and consensus techniques (Hwang and Lin, 1987), and scoring techniques like the Borda score (DeBorda, 1781; DeGrazia, 1953; Black, 1958; Arrow, 1963; Fishburn, 1973).
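As an illustration, the Borda score technique mentioned above can be sketched as follows; the individual rankings are hypothetical, and each alternative is assumed to receive I-1 points for a first place down to 0 points for a last place.

```python
import numpy as np

# rankings[q][k] is the alternative placed at position k (0 = best) by
# decision maker q (hypothetical data: 3 decision makers, 3 alternatives)
rankings = [[0, 1, 2], [1, 0, 2], [0, 2, 1]]
num_alternatives = 3

borda = np.zeros(num_alternatives)
for ranking in rankings:
    for position, alt in enumerate(ranking):
        borda[alt] += num_alternatives - 1 - position   # points by position

group_ranking = np.argsort(-borda)   # alternatives in descending Borda score
```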

2.3 Multiattribute Value Theory Based Methods

In this study, three widely used methods based on multiattribute value theory (MAVT) (Keeney and Raiffa, 1976) are adopted in the explanation and examples of the new developments: (a) the simple additive weighting (SAW) method, (b) the technique for order preference by similarity to ideal solution (TOPSIS) method, and (c) the weighted product (WP) method.

2.3.1 Simple Additive Weighting

The simple additive weighting (SAW) method (Churchman and Ackoff, 1954; MacCrimmon, 1968; Klee, 1971) is probably the most widely used and well known MADM method (Hwang and Yoon, 1981; Yeh, 2003). The basic principle behind this method is to obtain an overall preference score for each alternative, which is used as the basis for evaluation and ranking. The overall preference score is calculated as the weighted sum of the individual performance ratings of each alternative with respect to each attribute. SAW requires the attributes to be comparable and in numerical form, with the attribute weights (relative importance) given by the decision maker. SAW applies a normalisation procedure to convert performance ratings with different measurement units into a comparable unit. The advantage of this method lies in its simplicity, ease of use and sound mathematical grounds.
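To make the computation concrete, the following is a minimal sketch of the SAW procedure in Python; it assumes benefit attributes normalised with the linear scale transformation (max), and the ratings and weights are hypothetical rather than taken from this thesis.

```python
import numpy as np

def saw_rank(X, w):
    """Rank alternatives by SAW: the weighted sum of normalised ratings.

    X: (I, J) matrix of performance ratings (benefit attributes assumed).
    w: length-J vector of attribute weights, summing to 1.
    """
    R = X / X.max(axis=0)                # linear scale transformation (max)
    scores = R @ w                       # overall preference score per alternative
    return scores, np.argsort(-scores)   # ranking: best (highest score) first

# Hypothetical example: 3 alternatives evaluated on 3 attributes
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])
scores, ranking = saw_rank(X, w)
```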

2.3.2 Technique for Order Preference by Similarity to Ideal Solution

The technique for order preference by similarity to ideal solution (TOPSIS) method (Hwang and Yoon, 1981) is based on the notion that the preferred alternative should have the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution. The TOPSIS method calculates the relative closeness of each alternative (comparable to the overall preference score in SAW), which is used to obtain the ranking outcome. TOPSIS requires the attributes to be numerical and comparable. Similar to the SAW method, TOPSIS uses a normalisation procedure to generate a common measurement unit for the performance ratings. TOPSIS is a simple, easy and efficient method applicable to practical MADM problem solving.
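A minimal sketch of these TOPSIS steps in Python follows; vector normalisation is assumed, all attributes are treated as benefit attributes, and the inputs would be supplied by the decision maker.

```python
import numpy as np

def topsis_rank(X, w):
    """Rank alternatives by relative closeness to the ideal solution.

    X: (I, J) matrix of performance ratings (benefit attributes assumed).
    w: length-J vector of attribute weights.
    """
    R = X / np.sqrt((X ** 2).sum(axis=0))            # vector normalisation
    V = R * w                                        # weighted normalised ratings
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)      # positive/negative ideal solutions
    d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))  # distance to positive ideal
    d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))  # distance to negative ideal
    closeness = d_neg / (d_pos + d_neg)              # relative closeness in [0, 1]
    return closeness, np.argsort(-closeness)         # best (closest to ideal) first
```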

2.3.3 Weighted Product

The weighted product (WP) method (Bridgman, 1922; Starr, 1972; Yoon, 1989) is based on multiplicative utility. Instead of the addition operator in SAW, WP uses a multiplication operator to obtain the overall utility score for each alternative by combining the performance ratings and attribute weights. The overall score is used to rank the alternatives. Beyond its simplicity and ease of use, the WP method does not require a normalisation procedure and is able to handle different measurement units for performance ratings implicitly. The WP method imposes a heavy penalty on low performing alternatives and is particularly useful where the decision maker wants to screen out poorly performing alternatives.
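A minimal sketch of the WP scoring step, again with benefit attributes assumed and hypothetical inputs; note that the raw ratings are used directly, with no normalisation.

```python
import numpy as np

def wp_rank(X, w):
    """Rank alternatives by the weighted product of their raw ratings."""
    scores = np.prod(X ** w, axis=1)     # multiplicative aggregation: prod of x_ij ** w_j
    return scores, np.argsort(-scores)   # best (highest score) first
```

Because the weights enter as exponents, a single low rating drags the whole product down sharply, which is the screening behaviour described above.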

2.4 Method Comparison Studies

Multiattribute decision making (MADM) research has shown that no single method is best for all problem settings. Different ranking outcomes may be obtained when different methods are applied to solve the same decision problem (Zanakis et al., 1998; Chakraborty and Yeh, 2007a, 2007b). Selecting an MADM method to achieve the most preferred outcome for a given decision problem thus becomes an important issue. The significance of this method evaluation and selection issue has led to many studies on how to select the most preferred method for a given decision setting. The method evaluation and selection studies conducted so far can be classified as "decision-maker-oriented" and "method-oriented" approaches.

In the decision-maker-oriented approach, the decision maker usually applies an MADM method on the basis of previous experience or recommendations by experts. This approach relies on the decision maker for method selection, which may introduce judgemental error into the decision outcome. A study examining the behavioural impact on method selection has shown that the selection process is largely influenced by the decision maker's familiarity with certain methods (Buchanan, 1994). The decision-maker-oriented method selection approach provides support and enhances the knowledge of the decision maker about different decision settings and method suitability. A set of tentative guidelines for method selection has been proposed for the decision maker to solve an MADM problem (Guitouni and Martel, 1998). Studies comparing MAVT-based MADM methods with outranking methods such as ELECTRE (Roy, 1968, 1991) have shown the differences in structure and problem formulation (Simpson, 1996). An attempt to develop a unique way of MADM method evaluation and selection has produced a set of meta-criteria which should be satisfied by the methods (Cho, 2003). The decision-maker-oriented approach relies on the decision maker's understanding and subjective judgement for method selection. Often the decision maker lacks enough knowledge and skills to make a justified choice of the most preferred method for the problem under consideration. The high variability in subjective method selection by the decision maker highlights the need for an objective way of method selection.

The issue of objective evaluation and selection of MADM methods has been addressed in quite a few studies using the method-oriented approach. The method-oriented approach compares suitable methods on the basis of an objective performance measure. With known decision outcomes available, MADM methods have been compared for predictive accuracy (Olson, 2001). This line of study has produced interesting results and is applicable to problems with historical data available, such as weather forecasting and market trend prediction. Simulation based comparison studies of several MADM methods have provided valuable insights for selecting a method for a given problem (Zanakis et al., 1998; Deng and Yeh, 2006). The results of these simulation based studies have shown the effects of the number of alternatives, the number of attributes and the distribution of information, which can be used as guidelines for selecting a method for a problem of a given problem size and distribution. Sensitivity analysis has been used in several studies to examine the degree of sensitivity of various MADM methods with respect to attribute weights (Weber and Borcherding, 1993; Triantaphyllou and Sanchez, 1997; Yeh, 2002). This approach is very useful when the attribute weights are uncertain or their sensitivity is a major concern of the decision maker.

In another line of research development, the concept of expected value loss has been introduced as a performance measure for method selection in an objective manner (Yeh, 2003). The expected value loss measures the deviations in decision outcomes under various weight settings. The method with the minimum value loss is the most preferred one. Other studies have introduced ranking consistency as a performance measure to select the most preferred method for various problem settings involving a wide range of alternative numbers, attribute numbers and assessment data (Chakraborty and Yeh, 2007a, 2007b).

Although several method evaluation and selection studies have been conducted over the past few decades, many research issues are still open for further investigation, including the following:

(a) No general guidelines are available for MADM problems under specific decision settings.

(b) No significant study has been conducted to evaluate a set of suitable methods for a given decision problem in order to find the most preferred one.

(c) Methods are not evaluated for their internal problem solving processes.

(d) Evaluation and comparison studies are not performed considering the specific selection preferences (decision context) of the decision maker.

(e) The perspective of the alternatives is not considered in existing method comparison and selection studies.

(f) The consensus techniques in multiattribute group decision making problems are not investigated for their suitability.


(g) Existing MADM methods for group decision problems often use a limited solution space to find the group ranking outcome. This solution space limitation may be a major obstacle to obtaining the most preferred outcome for the group of decision makers.

2.5 Concluding Remarks

The brief review of MADM methods and their classifications presented in this chapter has shown the wide diversity in MADM research developments. The review of method evaluation and comparison studies and the subsequent identification of unresolved issues provide the motivation and platform for new developments in the area of MADM method evaluation and selection. The challenges identified in this chapter will be discussed further in terms of the decision contexts to be addressed in Chapter 3.


Chapter 3 Methodology Formulation and Development for Method Evaluation and Selection

3.1 Introduction

Multiattribute decision making (MADM) methods are widely used to solve real life decision problems. MADM problems are diverse in terms of the decision settings and the decision information available. With the availability of several MADM methods that produce different outcomes for a given problem, selecting the most preferred one for the given problem is a challenging task for the decision maker. Previous comparative studies on MADM method evaluation provide some general and problem specific guidelines for method selection (Zanakis et al., 1998; Olson, 2001). Although these studies provide significant insights on method suitability and selection, further study is required to justify the selection of a particular method for a given decision problem under a specific decision context, for which a number of suitable MADM methods are often available.

In this chapter, the general MADM problem is first presented along with the formulation of the research problem. The notation for the MADM problem formulation, used throughout the thesis, is then introduced. Next, various decision contexts for the MADM problem are discussed, along with their associated method evaluation and selection challenges and context specific requirements. Finally, an overview of the developments of context specific approaches and models for MADM


method evaluation and selection is outlined to pave the way for their presentation in Chapters 4-10.

3.2 The Multiattribute Decision Making Problem and Notation

The general multiattribute decision making (MADM) problem Φ involves the following:
(a) A set of Q decision makers Dq; q = 1, 2, ..., Q.
(b) A set of I alternatives Ai; i = 1, 2, ..., I.
(c) A set of J attributes Cj; j = 1, 2, ..., J.
(d) A set of J attribute weights Wj; j = 1, 2, ..., J.
(e) I × J performance ratings xij; i = 1, 2, ..., I; j = 1, 2, ..., J.

The general MADM problem Φ may have only one decision maker Dq (Q = 1) or a group of decision makers Dq (Q > 1). The involvement of more than one decision maker increases the challenges in solving the MADM problem Φ. The set of decision alternatives Ai (i = 1, 2, ..., I) includes the various decision options the decision maker is considering for the given decision problem, which are to be evaluated and ranked. For example, a buyer may have several options while buying a car. The attributes Cj (j = 1, 2, ..., J) are the selection criteria the decision maker considers while evaluating the decision alternatives Ai (i = 1, 2, ..., I). For example, the car buyer may evaluate the car options based on price, comfort, mileage and performance.


The set of attribute weights Wj (j = 1, 2, ..., J) represents the relative importance of the attributes Cj (j = 1, 2, ..., J) to the decision maker. For example, the car buyer may consider price more important than comfort and performance, and hence assign it a higher attribute weight. The attribute weights are presented as a vector W, as shown in Equation (3-2). The performance rating xij (i = 1, 2, ..., I; j = 1, 2, ..., J) represents the assessment score provided by the decision maker Dq (q = 1, 2, ..., Q) for each alternative Ai (i = 1, 2, ..., I) with respect to each attribute Cj (j = 1, 2, ..., J). All the performance ratings xij for all the alternatives Ai in relation to all the attributes Cj can be represented as a decision matrix X, as shown in Equation (3-1), where rows and columns represent alternatives and attributes respectively (Hwang and Yoon, 1981; Belton and Stewart, 2002; Yeh, 2003).

$$X = \begin{bmatrix} x_{11} & x_{12} & \cdots & x_{1J} \\ x_{21} & x_{22} & \cdots & x_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ x_{I1} & x_{I2} & \cdots & x_{IJ} \end{bmatrix}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{3-1}$$

$$W = \{W_j\}; \quad j = 1, 2, \ldots, J. \tag{3-2}$$

With (a) the set of decision makers Dq (q = 1, 2, ...,Q), (b) the set of alternatives Ai (i = 1, 2, ..., I) and (c) the set of attributes Cj (j = 1, 2, ..., J) being defined, the general MADM problem Φ can be represented as a combination of the decision matrix X and the weight vector W by using Equations (3-1) and (3-2) as


$$\Phi = \{X, W\} \tag{3-3}$$

To solve the MADM problem Φ, a number of suitable MADM methods Mk (k = 1, 2, ..., K) are available. The MADM methods Mk (k = 1, 2, ..., K) require (a) a normalisation procedure Ne (e = 1, 2, ..., E) and (b) an aggregation technique. The normalisation procedure is used to transform the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) into a comparable unit, as they may have diverse measurement units. The aggregation technique is applied to combine the normalised performance ratings with the attribute weights Wj (j = 1, 2, ..., J) to obtain an overall value Vi (i = 1, 2, ..., I) for each alternative Ai (i = 1, 2, ..., I). The overall value Vi (i = 1, 2, ..., I) is used to evaluate and rank the decision alternatives Ai (i = 1, 2, ..., I).
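To make the notation concrete, the problem Φ = {X, W} of Equation (3-3) can be represented directly as a decision matrix and a weight vector. The sketch below encodes the car-buying example of this section in Python with NumPy; the ratings and weights are hypothetical values chosen for illustration only.

```python
import numpy as np

# Decision matrix X (Equation (3-1)): rows are alternatives A_i (three cars),
# columns are attributes C_j (price, comfort, mileage, performance).
# All ratings below are hypothetical.
X = np.array([
    [25000.0, 7.0, 14.0, 8.0],   # A1
    [32000.0, 9.0, 11.0, 9.0],   # A2
    [21000.0, 6.0, 16.0, 6.0],   # A3
])

# Weight vector W (Equation (3-2)): price is weighted most heavily.
W = np.array([0.4, 0.2, 0.2, 0.2])

# The general MADM problem Phi = {X, W} of Equation (3-3).
phi = {"X": X, "W": W}
```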

3.3 Decision Context and Method Evaluation Challenges

A number of MADM methods Mk (k = 1, 2, ..., K) are often available to solve the general MADM problem Φ under a given decision context. For any given decision problem Φ, there may be multiple suitable methods that are acceptable to the decision maker. The research challenge is how to evaluate and select the most preferred MADM method among a set of suitable methods Mk (k = 1, 2, ..., K) under various decision contexts. The most preferred method refers to the method that best satisfies a given decision context, and the satisfaction level for the decision context needs to be measured using objective measures.

The term “decision context” used in this thesis includes the decision settings and other method evaluation preferences and considerations. The term “decision settings” can be defined in terms of (a) problem type (such as problems with a single


decision maker or a group of decision makers), (b) problem size (in terms of the number of alternatives and attributes) and (c) information variations in terms of data range and data type (qualitative or quantitative data). Among the different MADM methods available, this study uses multiattribute value theory (MAVT) based methods (Keeney and Raiffa, 1976) in various evaluation settings and examples, due to their ability to produce a complete ranking of all the alternatives for a given problem. Six decision contexts are identified and categorised below, along with their evaluation and selection challenges and requirements.

3.3.1 Decision Context A

3.3.1.1 Specifications for Decision Context A
(a) The MADM problem involves one single decision maker only.
(b) The decision maker requires a general guideline for method selection under different decision settings based on the size of the problem and variation in decision information.
(c) The decision maker is not very confident on the assessment of attribute weights and is concerned about its impact on the decision outcome.
(d) The decision maker wants to know if the use of a specific normalisation procedure with an MADM method is justified under various decision settings.


3.3.1.2 Current Challenges for Decision Context A
(a) The few simulation studies that have been conducted so far considered only a relatively small set of MADM problems and a limited number of decision settings (Zanakis et al., 1998; Olson, 2001). To address the general MADM problem, a new simulation model is required to experiment with a large number of decision problems with various decision settings.
(b) Existing studies use one particular method as the basis for comparison, which creates doubts about the impartiality of the comparison results. New performance measures need to be developed for comparing methods objectively, based on relative comparison between them.
(c) New performance measures need to be developed to measure the sensitivity of each method to changes in certain decision information, such as attribute weights.

3.3.2 Decision Context B

3.3.2.1 Specifications for Decision Context B
(a) The MADM problem involves one single decision maker only.
(b) The decision maker can use a set of suitable MADM methods for a given problem. All these methods produce acceptable outcomes, but the decision maker must choose one method as the most preferred one among them.


3.3.2.2 Current Challenges for Decision Context B
(a) Previous studies consider that only one method is suitable for a given problem and that all other methods are not acceptable (Simpson, 1996; Triantaphyllou and Sanchez, 1997; Guitouni and Martel, 1998; Yeh, 2002, 2003; Cho, 2003). A new method selection approach is required which can compare a set of suitable methods to find the most preferred one.
(b) Objective performance measures need to be developed to find the most preferred method from a set of suitable methods.

3.3.3 Decision Context C

3.3.3.1 Specifications for Decision Context C
(a) The MADM problem involves one single decision maker only.
(b) The decision maker does not have any specific method preferences, or the decision maker is not a key stakeholder.
(c) The alternatives in the decision problem are the key stakeholders and should have greater inputs in the method selection process.

3.3.3.2 Current Challenges for Decision Context C
(a) Previous method selection studies have been conducted from two perspectives: “method-oriented” and “decision-maker-oriented”. The method-oriented studies compare MADM methods based on their


performances with certain decision settings (Weber and Borcherding, 1993; Zanakis et al., 1998; Olson, 2001; Yeh, 2002, 2003; Deng and Yeh, 2006; Chakraborty and Yeh, 2007a, 2007b). The decision-maker-oriented studies consider the preferences of the decision maker in method selection (Simpson, 1996; Guitouni and Martel, 1998; Cho, 2003). No study has considered the preferences of the decision alternatives in method selection. In certain decision problems, the alternatives are the key stakeholders (Jessop, 2009). Thus, there is a need for developing a new method selection approach which gives due consideration to the preferences of the decision alternatives in the method evaluation and selection process.
(b) New performance measures need to be developed to evaluate the MADM methods objectively in terms of the alternatives’ preferences.

3.3.4 Decision Context D

3.3.4.1 Specifications for Decision Context D
(a) The decision problem involves one single decision maker only.
(b) The decision maker is unable to select between the TOPSIS (Hwang and Yoon, 1981) and the modified TOPSIS (Deng et al., 2000; Yeh, 2002) methods for a given problem. These two methods are similar in structure, with the only difference being the handling of the attribute weights. The decision maker is concerned about the justification of using either method.


3.3.4.2 Current Challenges for Decision Context D
(a) No comparison study has been conducted to compare TOPSIS with modified TOPSIS. Thus, there is a need to conduct a comprehensive comparison study to justify the use of these two methods under specific decision settings.

3.3.5 Decision Context E

3.3.5.1 Specifications for Decision Context E
(a) The decision problem involves a group of decision makers.
(b) The decision makers have their own decision problems reflecting their preferences and wish to observe the ranking outcomes produced by the method of their choice.
(c) The group outcome needs to be achieved by consensus among the group based on the individual ranking outcomes.
(d) The consensus techniques to be used require objective evaluation and justification.

3.3.5.2 Current Challenges for Decision Context E
(a) Currently the Borda score technique (DeBorda, 1781; DeGrazia, 1953; Black, 1958; Arrow, 1963; Fishburn, 1973, 1977; Gardenfors, 1973; Fine and Fine, 1974a, 1974b; Young, 1974, 1975; Pattanaik, 1978) is


the only available group consensus technique which is able to provide a group outcome using the individual ranking outcomes provided by each of the decision makers (Hwang and Yoon, 1981). The Borda score technique uses the average rank score of the individual ranking outcomes to produce the group outcome (a minimal sketch of this averaging is given after this list). The average, however, may not always be the most preferred outcome for the group of decision makers as a whole.
(b) New consensus techniques need to be developed by considering other aggregation procedures.
(c) New approaches need to be developed to compare group consensus techniques and select the most preferred one for a given group decision problem.
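As an illustration of the averaging described in (a), the sketch below is a minimal Python rendering of the Borda-style consensus; the function name and the input layout are hypothetical. Each decision maker supplies a complete ranking, and the group outcome orders the alternatives by their average rank score.

```python
import numpy as np

def borda_group_ranking(rank_positions):
    """rank_positions[q][i] is the rank position (1 = best) that decision
    maker q assigns to alternative i.  The group outcome orders the
    alternatives by their average rank score, as the Borda score
    technique described above does."""
    avg_rank = np.asarray(rank_positions, dtype=float).mean(axis=0)
    return np.argsort(avg_rank)  # alternative indices, best first

# Three decision makers ranking four alternatives (hypothetical data):
print(borda_group_ranking([[1, 2, 3, 4],
                           [2, 1, 3, 4],
                           [1, 3, 2, 4]]))  # -> [0 1 2 3]
```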

3.3.6 Decision Context F

3.3.6.1 Specifications for Decision Context F
(a) The decision problem involves a group of decision makers.
(b) The decision makers have their own ranking outcomes for the problem.
(c) The decision makers want to find the group outcome from the set of all possible outcomes.
(d) The set of all possible outcomes is not limited by the number of available methods or decision makers. All the possible ranking combinations of the alternatives should be considered.


(e) All the ranking outcomes in the outcome set are considered valid and acceptable to the group of decision makers and they want to find the most preferred one among them.

3.3.6.2 Current Challenges for Decision Context F
(a) Currently available methods for group decision problems apply various aggregation procedures to achieve a group solution from the available individual preferences and ranking outcomes. These methods are limited by the number of individual ranking outcomes and aggregation procedures (Eckenrode, 1965; Fishburn, 1966; Souder, 1972, 1973a, 1973b; Minnehan, 1973; Keeney and Kirkwood, 1975; Dyer and Miles, 1976; Bernardo, 1977; Cook and Seiford, 1978, 1982; Hwang and Yoon, 1981; Hwang and Lin, 1987; Parkan and Wu, 1998; Chen, 2000; Chu, 2002a, 2002b; Cook, 2006; Fu and Yang, 2007; Shih et al., 2007).
(b) There is a need for developing a new method capable of finding the most preferred group outcome from the whole solution space consisting of all the possible decision outcomes for the given group decision problem.
(c) Objective performance measures need to be developed which can measure group preferences for all possible outcomes. The performance measure should be able to measure the satisfaction level of the group of decision makers as a whole.


3.4 Overview of the Methodology Developments

The context specific challenges and requirements discussed in the previous sections are addressed by developing a number of new approaches, methods and performance measures, as shown in Figure 3-1. The new methodology developments address research issues in three distinct areas of MADM research: (a) simulation based generalised guidelines development for method evaluation and selection, (b) evaluation of single decision maker methods, and (c) evaluation of group decision methods.

Chapters 4 and 5 address the challenges and requirements for Decision Context A. A new simulation model is developed for MADM method comparison in Chapter 4. The simulation model is capable of comparing MADM methods that can produce a complete ranking for all the decision alternatives, and can compare the performances of different MADM methods under various decision settings. The decision settings can be easily varied by changing the problem size (in terms of the number of alternatives and attributes), the information range and the attribute weights. Two new performance measures, the ranking consistency index (RCI) and the weight sensitivity index (WSI), are also developed. The RCI measures the level of consistency of a particular method relative to other methods while producing a ranking outcome for certain decision settings. The WSI indicates the level of sensitivity of a method towards any change in attribute weights under various decision settings. The simulation model is then used in Chapter 5 to justify the suitability of certain normalisation procedures for the SAW and TOPSIS methods under various decision settings.


[Figure 3-1 Overview of the methodology developments: a flowchart routing an MADM problem (one decision maker) or a multiattribute group decision making (MAGDM) problem (more than one decision maker) through the context specific questions of Section 3.3 to the developments of this thesis: the simulation model for method selection (Chapter 4), simulation based selection of a normalisation procedure (Chapter 5), rank similarity based selection of methods (Chapter 6), alternatives-oriented selection of methods (Chapter 7), comparing TOPSIS and modified TOPSIS methods (Chapter 8), TOPSIS based consensus and consensus technique selection (Chapter 9) and similarity based group ranking with all possible outcomes (Chapter 10).]


Chapter 6 develops a novel method selection approach for addressing the challenges and requirements for Decision Context B. The new approach considers that all the methods being compared are valid and acceptable to the decision maker. The most preferred method is selected by comparing the ranking outcomes produced by different methods in terms of their relative similarity. A new performance measure called the ranking similarity index (RSI) is developed to measure the amount of similarity that a ranking outcome (produced by a certain MADM method) has with all the other ranking outcomes produced by other MADM methods.

Chapter 7 addresses the challenges and requirements for Decision Context C. A new alternatives-oriented method selection approach is developed by considering the preferences of the decision alternatives for selecting the most preferred method. The approach calculates the overall method preference of all the decision alternatives for each method being compared and uses it for selecting the most preferred method.

Chapter 8 addresses the challenges and requirements associated with Decision Context D. A comprehensive comparison is conducted between the TOPSIS and the modified TOPSIS methods by using simulation experiments. Mathematical explanations are also presented to justify the use of these methods for making logical and rational method selection decisions.

Chapter 9 addresses the challenges and requirements associated with Decision Context E. A new group consensus technique is developed by applying the theoretical grounds of the TOPSIS method (Hwang and Yoon, 1981). This new technique is a well justified alternative to the conventional Borda score technique. A new consensus technique selection approach is also developed to


compare and select consensus techniques for a given group decision problem. The approach introduces a new performance index called the group similarity index (GSI), which calculates the degree of similarity of a group outcome (produced by a consensus technique) with all the ranking outcomes provided by each individual decision maker in the group.

Chapter 10 develops a new multiattribute group decision making method to meet the challenges and requirements for Decision Context F. The new method finds the most preferred group ranking outcome from all the possible ranking outcomes for any given group decision problem. The method is based on the concept that the ranking outcome which is most similar to all the ranking outcomes provided by all the decision makers is the one most preferred by the group as a whole. The level of similarity for each outcome in the set of all possible ranking outcomes is measured by using a new performance measure called the outcome similarity index (OSI).

3.5 Concluding Remarks

This chapter has outlined the methodology developments to be presented in the subsequent chapters. These developments address various significant unresolved issues in the area of MADM method evaluation and selection under the various decision contexts discussed in Section 3.3. Table 3-1 shows the chapters of the thesis and the decision contexts they address.


Table 3-1 Decision contexts addressed in various chapters

Decision context    Relevant chapter
Context A           Chapters 4 and 5
Context B           Chapter 6
Context C           Chapter 7
Context D           Chapter 8
Context E           Chapter 9
Context F           Chapter 10


Chapter 4 Developments I: A Simulation Model for Method Evaluation and Selection

4.1 Introduction

Multiattribute decision making (MADM) problems Φ are diverse in terms of (a) the number of alternatives Ai (i = 1, 2, ..., I) to be evaluated and ranked, (b) the number of attributes Cj (j = 1, 2, ..., J) to be considered, (c) the relative importance Wj (j = 1, 2, ..., J) of the attributes and (d) the data type and measurement unit for the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) given for each alternative Ai against each attribute Cj. For a given decision setting, there may be multiple suitable MADM methods Mk (k = 1, 2, ..., K), and selecting the most suitable one among them is a challenging task. Various MADM method taxonomies and guidelines have been developed to help the decision maker select suitable methods for specific problem types (Hobbs, 1980; Hwang and Yoon, 1981; Ozernoy, 1987 and 1992). Various empirical studies consisting of real life problem scenarios may help select suitable methods for a given problem (Currim and Sarin, 1984; Gemunden and Hauschildt, 1985; Belton, 1986; Hobbs, 1986; Hobbs et al., 1992; Stewart, 1992). Simulation experiments can supplement such empirical studies by reducing their limitations in sample availability, assumptions and the lack of expert users (Zanakis et al., 1998).


Zanakis et al. (1998) have conducted an extensive simulation study to compare several MADM methods in terms of rank and weight variations along with rank reversal scenarios (Saaty, 1987). Their study has highlighted the importance of simulation for method selection and provided some interesting comparison results which can be used for method selection purposes. Although the study has shown a new dimension to method selection, the use of the simple additive weighting (SAW) method as the basis for comparison may limit the applicability of the comparison results. The study has also used a limited set of decision settings in terms of the number of attributes and alternatives. This chapter develops a new simulation model which addresses Decision Context A outlined in Chapter 3 and generates method selection guidelines that have general application. The simulation model is capable of comparing any number of MADM methods under different decision settings. Two new performance measures are also developed to justify the comparison results.

4.2 The Simulation Model

The simulation model is developed to address how the key decision information settings may influence the decision outcomes when different MADM methods are used. The key decision information settings include (a) the number of attributes Cj (j = 1, 2, ..., J) considered, (b) the number of alternatives Ai (i = 1, 2, ..., I) to be evaluated and ranked, (c) the diversity in the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J), and (d) the weights Wj (j = 1, 2, ..., J) of each attribute.


Step 1: Identify a set of MADM methods to be compared. The set of suitable MADM methods Mk (k = 1, 2, ..., K) which are to be evaluated and compared under various decision settings is selected. The methods considered should be able to produce a complete ranking of the alternatives Ai (i = 1, 2, ..., I).

Step 2: Determine the initial and the target decision settings. For each of the decision information settings (for which Methods Mk (k = 1, 2, ..., K) are to be evaluated), the initial setting (the starting value for the experiment) and the target setting (the value at which the experiment terminates) are determined.

Step 3: Generate a set of decision problems. A large number of decision problems are generated for each of the four decision information settings. Three sets of decision problems Ω are generated to validate the correctness of the experiment results.

Step 4: Solve the decision problems. In each simulation run, the decision problems in each problem set Ω are solved by Method Mk (k = 1, 2, ..., K). The ranking outcomes obtained are used for measuring the performances of the methods.

Step 5: Evaluate the performance. Use an appropriate performance measure to evaluate the performance of Method Mk (k = 1, 2, ..., K). Objective measures should be applied as they provide a rational base for comparisons.


Step 6: Vary the decision information settings. Each of the decision information settings is changed one at a time by a predefined amount (one which produces significant variation in the results).

Step 7: Repeat the process. Steps 3 to 6 are repeated until the target settings for the decision information are reached. The whole simulation is also repeated to identify and eliminate possible irregularities in the evaluation and comparison results.
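Viewed as pseudocode, Steps 1 to 7 form a nested experiment loop. The following Python skeleton is a minimal sketch of that control flow under stated assumptions, not the implementation used in this thesis: generate_problem and evaluate are hypothetical placeholders for Steps 3 and 5, and each method object is assumed to expose a name and a rank function that returns a complete ranking.

```python
def run_simulation(methods, settings, generate_problem, evaluate,
                   problems_per_run=10_000, n_problem_sets=3):
    """Steps 2-7 of the simulation model as one loop: for every decision
    setting, generate problem sets, solve them with every method, and
    score the methods with an objective performance measure."""
    results = {}
    for setting in settings:                       # Steps 6-7: vary settings one at a time
        for s in range(n_problem_sets):            # multiple problem sets validate results
            problems = [generate_problem(setting)  # Step 3: random decision problems
                        for _ in range(problems_per_run)]
            rankings = {m.name: [m.rank(p) for p in problems]  # Step 4: solve
                        for m in methods}
            results[(setting, s)] = evaluate(rankings)         # Step 5: e.g. RCI or WSI
    return results
```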

4.3 Performance Measures

Two new performance measures are developed to compare MADM methods in terms of various decision settings, including the number of alternatives, the number of attributes and the data range of performance ratings.

4.3.1 The Ranking Consistency Index (RCI)

The ranking of the alternatives is the final decision outcome that concerns the decision maker. When the consistency of the rankings produced is a major concern, it is important for the decision maker to select the method that produces the most consistent ranking outcome among all the methods being tested under given decision settings. For a given MADM problem, a method is considered to be consistent with another method if it produces the same ranking outcome. The ranking consistency index (RCI) indicates the degree of consistency a method has with respect to all other methods under certain decision settings when a large number of decision problems


are considered. A larger RCI value indicates that the corresponding method is more consistent in terms of the ranking outcomes it produces. In order to calculate the RCI, a consistency weight (CW) for each ranking outcome is defined. If a ranking outcome produced by a method is the same as that of all other methods, then it has a consistency weight of 1. With Method Mk (k = 1, 2, ..., K), a set of consistency weights can be obtained as

$$CW_n = \frac{n}{K-1}; \quad n = 0, 1, \ldots, K-1. \tag{4-1}$$

where n represents the number of other methods that produce the same ranking as Method Mk. The RCI for Method Mk (k = 1, 2, ..., K) can then be obtained as

$$RCI_k = \frac{\sum_{n=0}^{K-1} T_{kn} \times CW_n}{T}; \quad k = 1, 2, \ldots, K. \tag{4-2}$$

where Tkn is the total number of times Method Mk (k = 1, 2, ..., K) produces the same ranking outcome as n other methods, and T is the total number of decision problems used in the simulation run.

For example, consider the following simple experiment setting. There are four (K = 4) Methods Mk (k = 1, 2, ..., K) to be compared, and the total number of decision problems in the simulation run is T = 1,000. Applying Equation (4-1), we can obtain the consistency weights for the Methods Mk (k = 1, 2, ..., K) as


CW0 = 0, when Method Mk produces an outcome different from all the other methods;
CW1 = 1/3, when Method Mk produces an outcome similar to one of the other methods;
CW2 = 2/3, when Method Mk produces an outcome similar to two of the other methods;
CW3 = 1, when Method Mk produces an outcome similar to all the other methods.

In the 1,000 decision problems, the number of times Method M1 produces a ranking outcome similar to those of the other methods is recorded as:
T10 = 200, the number of times Method M1 produces an outcome different from all the other methods;
T11 = 400, the number of times Method M1 produces an outcome similar to one of the other methods;
T12 = 300, the number of times Method M1 produces an outcome similar to two of the other methods;
T13 = 100, the number of times Method M1 produces an outcome similar to all of the other methods.

For Method M1 we can calculate the RCI by applying Equation (4-2) with the recorded information as


RCI1 = (200 × 0 + 400 × 1/3 + 300 × 2/3 + 100 × 1)/1,000 ≈ 0.43. Similarly, the RCI for the other three methods can be calculated. The method with the highest RCI value is the most consistent one among the methods evaluated, in terms of the ranking outcomes they produce for the given decision settings. The ranking consistency index is particularly useful for comparing MADM methods for decision settings with a varying number of alternatives and attributes, as well as with various ranges of performance ratings.
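The RCI arithmetic above is straightforward to mechanise. The sketch below is a minimal Python rendering (the input layout is an assumption): it counts, for each problem, how many other methods agree with Method Mk, applies the consistency weights of Equation (4-1), and reproduces RCI1 ≈ 0.43 for counts like those in the example.

```python
def ranking_consistency_index(all_rankings, k):
    """Equations (4-1) and (4-2).  all_rankings[m][t] is the ranking
    tuple that method m produced for decision problem t."""
    K = len(all_rankings)      # number of methods compared
    T = len(all_rankings[k])   # number of decision problems
    total_weight = 0.0
    for t in range(T):
        # n = number of *other* methods with the same ranking on problem t
        n = sum(1 for m in range(K)
                if m != k and all_rankings[m][t] == all_rankings[k][t])
        total_weight += n / (K - 1)   # consistency weight CW_n, Equation (4-1)
    return total_weight / T           # RCI_k, Equation (4-2)
```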

4.3.2 The Weight Sensitivity Index (WSI)

The weights associated with the attributes in an MADM problem may have significant impact on the decision outcomes. The weight sensitivity index (WSI) indicates to what extent an MADM method is sensitive to changes in attribute weights under certain decision settings. The weight sensitivity index is measured as the average amount of change in attribute weight required to produce a change in the ranking outcome, over a large sample set of MADM problems. The weight sensitivity index for a method can be obtained by the following steps.

Step 1: Select a sample set of MADM problems. A small sample set of MADM problems (usually 100) is selected randomly for given decision settings. The sample set can be defined as

$$\Omega = \{\Phi_l\}; \quad l = 1, 2, \ldots, L. \tag{4-3}$$


where each Φl (l = 1, 2, ..., L) represents a decision problem consisting of a decision matrix Xl (l = 1, 2, ..., L) and weights Wlj (l = 1, 2, ..., L; j = 1, 2, ..., J), as shown in Equation (3-3).

Step 2: Solve the decision problems. Each decision problem Φl (l = 1, 2, ..., L) is solved with Method Mk (k = 1, 2, ..., K), with the weights Wlj (l = 1, 2, ..., L; j = 1, 2, ..., J) for the attributes Cj (j = 1, 2, ..., J) set equal. The ranking outcome produced by each method for each decision problem in the decision problem set Ω is used as the base ranking outcome.

Step 3: Change each attribute weight. For each decision problem Φl (l = 1, 2, ..., L) in the decision problem set Ω, the attribute weight Wlj (l = 1, 2, ..., L; j = 1, 2, ..., J) for each attribute is gradually changed, one at a time, until the ranking outcome produced is different from the base ranking outcome. The change in weight Wlj for Method Mk (k = 1, 2, ..., K) can be expressed as

$$\Delta W_{klj} = \left| W_{klj} - W_{lj} \right|; \quad j = 1, 2, \ldots, J; \; k = 1, 2, \ldots, K; \; l = 1, 2, \ldots, L. \tag{4-4}$$

where Wklj is the weight required for attribute Cj (j = 1, 2, ..., J) to get a ranking outcome different from the base outcome for Method Mk (k = 1, 2, ..., K) for the decision problem Φl (l = 1, 2, ..., L).

Step 4: Calculate the average weight change. The average weight change for Method Mk (k = 1, 2, ..., K) over all the attributes Cj (j = 1, 2, ..., J) in all the decision problems Φl (l = 1, 2, ..., L) in the decision problem set Ω can be obtained as

$$\Delta W_k = \frac{1}{L} \sum_{l=1}^{L} \left( \frac{1}{J} \sum_{j=1}^{J} \Delta W_{klj} \right); \quad k = 1, 2, \ldots, K. \tag{4-5}$$

Step 5: Obtain the weight sensitivity index (WSI). A Method Mk (k = 1, 2, ..., K) with a higher average weight change ΔWk is less sensitive to changes in attribute weights. The weight sensitivity index can be obtained as

$$WSI_k = 1 - \Delta W_k; \quad k = 1, 2, \ldots, K. \tag{4-6}$$

A larger weight sensitivity index WSIk indicates that the corresponding Method Mk is more sensitive to attribute weight changes. The weight sensitivity index helps the decision maker select the most preferred method for various decision settings. In decision settings where the decision maker is not confident about the choice of attribute weights, a method with a lower WSI should be selected, as the weights will have a lower impact on the ranking outcome.

For example, consider the following decision problem setting. There are four (K = 4) Methods Mk (k = 1, 2, ..., K) to be compared. The total number of decision problems Φl (l = 1, 2, ..., L) in the problem set for a given decision setting is 100 (L = 100), and each of the decision problems has four (J = 4) attributes Cj (j = 1, 2, ..., J).

Following Step 2, each decision problem is solved by each of the four Methods (M1, M2, M3 and M4) with the attribute weights Wlj (l = 1, 2, ..., L; j = 1, 2, ..., J) considered equal (0.25). The ranking outcome of each decision problem Φl (l = 1, 2, ..., L) is taken as the base outcome for that problem. Following Step 3, for Method M1 and decision problem Φ1 we gradually change the weight W1 until the outcome departs from the base outcome. Equation (4-4) is then applied to obtain the weight change required; for Method M1 in decision problem Φ1 and weight W1 this is ΔW111 = 0.1. Similarly, for the other three Methods (M2, M3 and M4) the weight change for each of the attributes (W1, W2, W3 and W4) in each of the 100 decision problems can be calculated.

Following Step 4, the average weight change required for each of the Methods (M1, M2, M3 and M4) under the given decision settings can be obtained by Equation (4-5) as ΔW1 = 0.15, ΔW2 = 0.1, ΔW3 = 0.3 and ΔW4 = 0.25 respectively. Applying Equation (4-6) in Step 5, the weight sensitivity index for each of the Methods (M1, M2, M3 and M4) is WSI1 = 0.85, WSI2 = 0.90, WSI3 = 0.70 and WSI4 = 0.75 respectively. The results indicate that Method M2 is the most sensitive one and Method M3 the least sensitive one in terms of variation in attribute weights. These results can help the decision maker select an MADM method depending on the level of confidence in the attribute weights.
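Once the weight changes ΔWklj of Step 3 are recorded, Equations (4-5) and (4-6) reduce to a two-level average. A minimal sketch follows; the L × J array layout per method is an assumption.

```python
import numpy as np

def weight_sensitivity_index(delta_w):
    """Equations (4-5) and (4-6).  delta_w has shape (L, J): the weight
    change needed on each attribute of each sample problem before the
    ranking departs from the base outcome."""
    avg_change = delta_w.mean(axis=1).mean()  # Equation (4-5): average over J, then L
    return 1.0 - avg_change                   # Equation (4-6)

# With the average changes of the example above, WSI = 1 - delta_W:
for name, avg_dw in [("M1", 0.15), ("M2", 0.10), ("M3", 0.30), ("M4", 0.25)]:
    print(name, 1.0 - avg_dw)  # -> 0.85, 0.90, 0.70, 0.75
```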


4.4 Concluding Remarks

A new generalised simulation model together with two new performance measures has been developed in this chapter to compare MADM methods over a large number of decision problems.

Table 4-1 RCI and WSI summary

Ranking Consistency Index (RCI)
  Definition: the measurement of the level of consistency an MADM method shows with the other MADM methods under consideration, in terms of the outcomes they produce.
  Unit of measurement: a 0 to 1 scale, where a higher value indicates a higher level of consistency.
  Application: suitable for simulation experiments with large sample data; to be used when outcome consistency is a concern for the decision maker.

Weight Sensitivity Index (WSI)
  Definition: the measurement of the level of sensitivity an MADM method shows in response to variations in attribute weights.
  Unit of measurement: a 0 to 1 scale, where a higher value indicates a higher level of sensitivity.
  Application: suitable for simulation experiments with large sample data; to be used when weight sensitivity is a concern for the decision maker.


The performance measures provide an efficient and objective approach to method comparison and selection under highly diverse decision settings. Table 4-1 summarizes the two performance measures, RCI and WSI. The simulation model developed in this chapter is applied for method comparison in Chapter 5 to demonstrate its practical applicability.


Chapter 5 Applications of Developments I: Simulation Based Selection of a Normalisation Procedure

5.1 Introduction

In multiattribute decision making (MADM) problems, each alternative is given a performance rating for each attribute, which represents the characteristics of the alternative. It is common that performance ratings for different attributes are measured in different units. To transform performance ratings into a compatible measurement unit, normalisation procedures are used. MADM methods often use one normalisation procedure to achieve compatibility between different measurement units. For example, SAW uses linear scale transformation (max method) (Fishburn, 1967; Hwang and Yoon, 1981; Yeh, 2003), TOPSIS uses vector normalisation (Zeleny, 1982; Yoon and Hwang, 1995), ELECTRE uses vector normalisation (Roy, 1991; Yoon and Hwang, 1995; Figueira et al., 2005) and AHP uses linear scale transformation (sum method) (Saaty, 1977, 1980 and 1994). Enormous efforts have been devoted to comparative studies of MADM methods, but no significant study has been conducted on the suitability of the normalisation procedures used in those MADM methods. This leaves the effectiveness of various MADM methods in doubt and raises the necessity to examine the effects of various


normalisation procedures on the decision outcome when used with given MADM methods. The main purpose of this chapter is to justify and evaluate the use of a specific normalisation procedure by the two most widely used MADM methods (SAW and TOPSIS) under various decision settings. Four widely applied normalisation procedures are presented and then compared by simulation experiments, using the model developed in Chapter 4, to find out the most suitable ones for SAW and TOPSIS. This chapter addresses Decision Context A by providing generalised guidelines for selecting the appropriate method and normalisation procedure under various decision settings.

5.2 Normalisation Procedures Evaluated

The decision matrix X for a given MADM problem consists of performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J), which represent the preference for each alternative Ai (i = 1, 2, ..., I) with respect to each attribute Cj (j = 1, 2, ..., J). The performance ratings xij may have different measurement units, and generally a normalisation procedure is applied to convert them into a single comparable measurement unit. The four widely applied normalisation procedures in MADM methods are briefly described below: (a) vector normalisation, (b) linear scale transformation (max-min), (c) linear scale transformation (max), and (d) linear scale transformation (sum).


5.2.1 Vector Normalisation

In this procedure, each performance rating xij (i = 1, 2, ..., I; j = 1, 2, ..., J) in the decision matrix X is divided by its norm. The normalised value yij (i = 1, 2, ..., I; j = 1, 2, ..., J) is obtained by

$$y_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{I} x_{ij}^2}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-1}$$

This procedure has the advantage of converting all attributes into a dimensionless measurement unit, thus making inter-attribute comparison easier. However, it has the drawback of non-equal scale lengths, leading to difficulties in straightforward comparison (Yoon and Hwang, 1995; Olson, 2001).

5.2.2 Linear Scale Transformation (Max-Min)

This procedure considers both the maximum and the minimum of the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) of attribute Cj (j = 1, 2, ..., J) during calculation. For benefit and cost attributes, the normalised performance rating yij (i = 1, 2, ..., I; j = 1, 2, ..., J) is obtained by Equations (5-2) and (5-3) respectively.

$$y_{ij} = \frac{x_{ij} - x_j^{\min}}{x_j^{\max} - x_j^{\min}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-2}$$

$$y_{ij} = \frac{x_j^{\max} - x_{ij}}{x_j^{\max} - x_j^{\min}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-3}$$


where $x_j^{\max}$ is the maximum performance rating among alternatives for attribute Cj (j = 1, 2, ..., J) and $x_j^{\min}$ is the minimum performance rating among alternatives for attribute Cj (j = 1, 2, ..., J). This procedure has the advantage that the scale measurement is precisely between 0 and 1 for each attribute. The drawback is that the scale transformation is not proportional to outcome (Olson, 2001).

5.2.3 Linear Scale Transformation (Max)

This procedure divides the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) for alternatives Ai (i = 1, 2, ..., I) with respect to each attribute Cj (j = 1, 2, ..., J) by the maximum performance rating for that attribute. For benefit and cost attributes, the normalised performance rating yij (i = 1, 2, ..., I; j = 1, 2, ..., J) is obtained by Equations (5-4) and (5-5) respectively.

$$y_{ij} = \frac{x_{ij}}{x_j^{\max}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-4}$$

$$y_{ij} = 1 - \frac{x_{ij}}{x_j^{\max}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-5}$$

where $x_j^{\max}$ is the maximum performance rating among alternatives for attribute Cj (j = 1, 2, ..., J). The advantage of this procedure is that outcomes are transformed in a linear way (Yoon and Hwang, 1995; Olson, 2001).


5.2.4 Linear Scale Transformation (Sum)

This procedure divides the performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) of each attribute Cj (j = 1, 2, ..., J) by the sum of the performance ratings for that attribute as

$$y_{ij} = \frac{x_{ij}}{\sum_{i=1}^{I} x_{ij}}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-6}$$

where the denominator is the sum of the performance ratings of all the alternatives for attribute Cj (j = 1, 2, ..., J) (Yoon and Hwang, 1995).

Table 5-1 summarizes the four normalisation procedures described in Section 5.2.

Table 5-1 Four commonly used normalisation procedures

Vector normalisation
  Notation: $y_{ij} = x_{ij} / \sqrt{\sum_{i=1}^{I} x_{ij}^2}$
  Features: Performance ratings are divided by their norm.
  Advantages/Disadvantages: Converts all measurement units for attributes into a comparable dimensionless unit. Use of non-equal scale lengths leads to difficulties in straightforward comparison.

Linear scale transformation (max-min)
  Notation: $y_{ij} = (x_{ij} - x_j^{\min}) / (x_j^{\max} - x_j^{\min})$
  Features: Performance ratings are divided by the range.
  Advantages/Disadvantages: Converts all measurement units for attributes into a comparable dimensionless unit. Considers the two extreme performance rating values in calculation. Transformation is linear, but the scale transformation is not proportional to outcome.

Linear scale transformation (max)
  Notation: $y_{ij} = x_{ij} / x_j^{\max}$
  Features: Performance ratings are divided by the maximum one.
  Advantages/Disadvantages: Converts all measurement units for attributes into a comparable dimensionless unit. Transformation is linear. Considers only the maximum value.

Linear scale transformation (sum)
  Notation: $y_{ij} = x_{ij} / \sum_{i=1}^{I} x_{ij}$
  Features: Performance ratings are divided by their sum.
  Advantages/Disadvantages: Converts all measurement units for attributes into a comparable dimensionless unit. Transformation is linear.

5.3 Multiattribute Decision Making Methods Evaluated

In these experiments, the SAW and TOPSIS methods are evaluated to find the most suitable normalisation procedure under various decision settings.

5.3.1 The SAW Method

The simple additive weighting (SAW) method, also known as the weighted sum method, is probably the best known and most widely used MADM method (Hwang


and Yoon, 1981). The basic logic of the SAW method is to obtain a weighted sum of the performance ratings of each decision alternative over all the attributes. The overall weighted preference value is used as the basis for comparison between the alternatives. This method involves the following two steps:

Step 1: Obtain the normalised decision matrix. The performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) in the decision matrix

X shown in Equation (3-1) are normalised by applying Equation (5-4). The normalised performance ratings yij (i = 1, 2, ..., I; j = 1, 2, ..., J) can be given as the matrix shown in Equation (5-7).

$$Y = \begin{bmatrix} y_{11} & y_{12} & \cdots & y_{1J} \\ y_{21} & y_{22} & \cdots & y_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ y_{I1} & y_{I2} & \cdots & y_{IJ} \end{bmatrix} \tag{5-7}$$

Step 2: Obtain the overall preference value. The overall preference value for alternative Ai (i = 1, 2, ..., I) can be obtained by combining the attribute weights Wj (j = 1, 2, ..., J) from Equation (3-2) with Equation (5-7) as

$$V_i = \sum_{j=1}^{J} W_j \, y_{ij}; \quad i = 1, 2, \ldots, I. \tag{5-8}$$

where Vi (i = 1, 2, ..., I) is the overall preference value of decision alternative Ai (i = 1, 2, ..., I), Wj (j = 1, 2, ..., J) is the weight for attribute Cj (j = 1, 2, ..., J), and yij (i = 1, 2, ..., I; j = 1, 2, ..., J) are the normalised performance ratings (Hwang and Yoon, 1981; Zeleny, 1982). An alternative with a greater overall value Vi (i = 1, 2, ..., I) receives a higher ranking.
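Combining the two steps, a minimal sketch of the conventional SAW method follows (linear scale transformation (max) with benefit attributes assumed; the function name is illustrative).

```python
import numpy as np

def saw(X, W):
    """Steps 1-2 of SAW: normalise by Equation (5-4), then compute the
    weighted sum of Equation (5-8).  Returns the overall preference values."""
    Y = X / X.max(axis=0)   # Step 1: normalised decision matrix (5-7)
    return Y @ W            # Step 2: V_i for each alternative

# Ranking: order the alternatives from the highest V_i to the lowest,
# e.g. np.argsort(-saw(X, W)) gives the alternative indices best-first.
```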

5.3.2 The TOPSIS Method

The technique for order preference by similarity to ideal solution (TOPSIS) has been used extensively to solve various practical MADM problems, due to its simplicity, computational efficiency and ability to measure the performances of the decision alternatives in simple mathematical form (Yeh and Chang, 2009). In TOPSIS, an index known as similarity to the positive-ideal solution is defined by combining the closeness to the positive-ideal solution and the remoteness from the negative-ideal solution. This index is used to rank the alternatives (Hwang and Yoon, 1981; Zeleny, 1982). We will refer to the index as the overall preference value in order to maintain uniformity with the other methods used. The TOPSIS method involves the following steps.

Step 1: Calculate the normalised performance ratings. The performance ratings xij (i = 1, 2, ..., I; j = 1, 2, ..., J) in the decision matrix X shown in Equation (3-1) are normalised by applying Equation (5-1). The normalised performance ratings yij (i = 1, 2, ..., I; j = 1, 2, ..., J) can be given as a matrix similar to Equation (5-7).

Step 2: Calculate the weighted normalised performance ratings. The weight Wj (j = 1, 2, ..., J) from Equation (3-2) is combined with the normalised decision matrix Y from Equation (5-7) to get the weighted


normalised performance ratings vij (i = 1, 2, ..., I; j = 1, 2, ..., J) shown in Equation (5-9). The weighted normalised decision matrix is shown in Equation (5-10).

$$v_{ij} = W_j \, y_{ij}; \quad i = 1, 2, \ldots, I; \; j = 1, 2, \ldots, J. \tag{5-9}$$

$$V = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1J} \\ v_{21} & v_{22} & \cdots & v_{2J} \\ \vdots & \vdots & \ddots & \vdots \\ v_{I1} & v_{I2} & \cdots & v_{IJ} \end{bmatrix} \tag{5-10}$$

Step 3: Identify the positive-ideal and negative-ideal solutions. The positive-ideal solution A* and the negative-ideal solution A- are identified from Equation (5-10) in terms of the weighted normalised performance ratings.

$$A^* = \{ v_1^*, v_2^*, \ldots, v_J^* \} \tag{5-11}$$

$$A^- = \{ v_1^-, v_2^-, \ldots, v_J^- \} \tag{5-12}$$

where

$$v_j^* = \begin{cases} \max_i v_{ij}, & \text{if } j \text{ is a benefit attribute} \\ \min_i v_{ij}, & \text{if } j \text{ is a cost attribute} \end{cases} \qquad v_j^- = \begin{cases} \min_i v_{ij}, & \text{if } j \text{ is a benefit attribute} \\ \max_i v_{ij}, & \text{if } j \text{ is a cost attribute} \end{cases}$$

Step 4: Calculate the separation measures. The separation measure for each decision alternative Ai (i = 1, 2, ..., I) is calculated using n-dimensional Euclidean distance. The separation (distance)


of each alternative from the positive-ideal solution A* and the negative-ideal solution A- can be obtained by Equations (5-13) and (5-14) respectively.

$$S_i^* = \sqrt{\sum_{j=1}^{J} (v_{ij} - v_j^*)^2}; \quad i = 1, 2, \ldots, I. \tag{5-13}$$

$$S_i^- = \sqrt{\sum_{j=1}^{J} (v_{ij} - v_j^-)^2}; \quad i = 1, 2, \ldots, I. \tag{5-14}$$

Step 5: Obtain the overall preference value. The overall preference value Vi (i = 1, 2, ..., I) for each alternative Ai (i = 1, 2, ..., I) can be calculated as

$$V_i = \frac{S_i^-}{S_i^- + S_i^*}; \quad i = 1, 2, \ldots, I. \tag{5-15}$$

A higher value of Vi (i = 1, 2, ..., I) indicates a higher ranking of alternative Ai (i = 1, 2, ..., I).
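The five steps translate directly into code. A minimal sketch follows, assuming benefit attributes only (Step 3 would swap max and min for cost attributes); the function name is illustrative.

```python
import numpy as np

def topsis(X, W):
    """Equations (5-1) and (5-9)-(5-15) for benefit attributes."""
    Y = X / np.sqrt((X ** 2).sum(axis=0))            # Step 1: vector normalisation (5-1)
    V = Y * W                                        # Step 2: weighted ratings (5-9)
    a_pos, a_neg = V.max(axis=0), V.min(axis=0)      # Step 3: ideal solutions (5-11), (5-12)
    s_pos = np.sqrt(((V - a_pos) ** 2).sum(axis=1))  # Step 4: separation (5-13)
    s_neg = np.sqrt(((V - a_neg) ** 2).sum(axis=1))  #         separation (5-14)
    return s_neg / (s_neg + s_pos)                   # Step 5: overall preference (5-15)

# A higher returned value means a higher ranking for that alternative.
```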

5.4 Experiments and Results for SAW

Simulation studies are conducted for the SAW method to find out the most suitable normalisation procedure for this method under various decision settings. The simulation model developed in Chapter 4 is used for the experiments, and the ranking consistency index (RCI) developed in Chapter 4 is applied to compare the performance of the four normalisation procedures presented in Section 5.2.


5.4.1 Simulation Experiments for SAW

This simulation experiment is conducted to evaluate four normalisation procedures which can be used with SAW, and to identify the most suitable one for SAW under various decision settings. The experiment is conducted using the following seven steps.

Step 1: Identify the set of MADM Methods Mk (k = 1, 2, ..., K) to be compared. The combination of each normalisation procedure and the SAW aggregation technique is considered as an MADM method. Table 5-2 shows the methods to be compared for SAW.

Table 5-2 Four MADM methods for the experiment with SAW

MADM method   Normalisation procedure                      Aggregation technique
M1(S)         N1: Vector normalisation                     SAW
M2(S)         N2: Linear scale transformation (max-min)    SAW
M3(S)         N3: Linear scale transformation (max)        SAW (Conventional)
M4(S)         N4: Linear scale transformation (sum)        SAW

Step 2: Determine the initial and the target decision settings. The experiments test three decision information settings including the number of alternatives, the number of attributes and the data range for the decision problem. The initial and target settings for each are selected as


(a) The number of alternatives, with the initial setting as 4 alternatives and the target setting as 20 alternatives.
(b) The number of attributes, with the initial setting as 4 attributes and the target setting as 20 attributes.
(c) The data range for performance ratings for each attribute, with an equally divided range between 1 and 10,000.

The limits of 4 and 20 for the numbers of alternatives and attributes are chosen because this range is wide enough to produce significant results; they are not to be considered the only choice, as experiments conducted with different sets of lower and upper limits showed that limits between 4 and 20 provide the significant results required for this study. The data range of 1 to 10,000 is chosen as it can generate sufficient variations for problems with different numbers of alternatives and attributes; different data ranges were tested, and this range provides enough samples to achieve significant conclusive outcomes.

Step 3: Generate a large number of decision problems for the current settings. For each decision setting, 10,000 decision matrices are generated randomly in each simulation run. Although a sample problem set of 10,000 matrices is large enough to produce significant comparative results, the validity of the results is tested by generating three different sample sets of 10,000 matrices each for each decision information setting.


Step 4: Solve each decision problem with Method Mk (k = 1, 2, ..., K). Each of the 10,000 decision matrices generated in Step 3 is solved using each of the MADM methods in Table 5-2.

Step 5: Use measures to evaluate the performances of Method Mk (k = 1, 2, ..., K). The performances of the methods for given decision settings are evaluated using the ranking consistency index (RCI) obtained by applying Equations (4-1) and (4-2).

Step 6: Vary a particular decision information setting by a given amount. The three decision information settings presented in Step 2 are varied one at a time. The number of alternatives and the number of attributes are increased by 2 each time. The data range is narrowed by increasing the lower limit by 10% to determine the new setting.

Step 7: Repeat Steps 3 to 6 until the target information setting is reached.
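As an illustration of Step 3, decision matrices can be generated with a uniform random draw over the data range; whether the thesis draws each attribute's ratings in exactly this way is an assumption of the sketch below.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so a run can be reproduced

def generate_problem_set(n_problems, n_alternatives, n_attributes,
                         low=1.0, high=10_000.0):
    """Step 3 (sketch): n_problems random I x J decision matrices with
    performance ratings drawn uniformly from [low, high]."""
    return rng.uniform(low, high,
                       size=(n_problems, n_alternatives, n_attributes))

problems = generate_problem_set(10_000, 10, 10)  # one simulation run's sample set
```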

5.4.2 Experimental Results for SAW

5.4.2.1 Results for Change in Alternative Numbers

These experiments are conducted to investigate the impact of the number of alternatives on the ranking consistency. The number of attributes is set between 4 and 20, with the data range 1 to 10,000 (evenly distributed to the attributes). The number of alternatives is then changed from 4 to 20. With each alternative-number setting, the ranking consistency is measured for the four SAW methods given in Table 5-2.


Figure 5-1 shows the results obtained by changing the number of alternatives with the attribute number set to 10 (for complete results refer to Appendix C). The results clearly show that the ranking consistency for all the methods reduces significantly with the increase in the number of alternatives in the decision problems. Method M1(S) is clearly the best performer in all cases, whereas Method M2(S) is the worst one. Methods M3(S) and M4(S) perform close to Method M1(S); Method M3(S) is better than Method M4(S) for a small number of alternatives, but M4(S) is relatively better than M3(S) for problems with a large number of alternatives. The experiment results suggest that, instead of the conventional linear scale transformation (max) (N3) normalisation procedure, the vector normalisation (N1) procedure should be used with SAW when the number of alternatives in an MADM problem is a concern of the decision maker.

[Figure 5-1 With 10 attributes, the effects on the ranking consistency for changes in the number of alternatives (ranking consistency index against the number of alternatives, 4 to 20, for Methods M1(S) to M4(S))]


5.4.2.2 Results for Change in Attribute Numbers

These experiments are conducted to find out how the number of attributes involved in a decision problem can affect the ranking outcome. The four SAW methods given in Table 5-2 are evaluated to find their consistency in ranking for decision problems with different numbers of attributes. The data range is set between 1 and 10,000, and the number of alternatives is set between 4 and 20 for each experiment. The number of attributes is increased from 4 to 20 for each setting to measure the change in the ranking consistency index. Figures 5-2 and 5-3 show the results from two settings. The ranking consistency for all the methods decreases gradually with an increase in the number of attributes. Method M1(S) is the most consistent over all ranges of attributes and M2(S) is the least consistent at all times. With a small number of alternatives, M3(S) is more consistent than M4(S), but with a larger number, M4(S) is more consistent.

Figure 5-2 With 6 alternatives, the effects on the ranking consistency for changes in the number of attributes (ranking consistency index plotted against 4 to 20 attributes for Methods M1(S) to M4(S))


Figure 5-3 With 12 alternatives, the effects on the ranking consistency for changes in the number of attributes (ranking consistency index plotted against 4 to 20 attributes for Methods M1(S) to M4(S))

5.4.2.3 Results for Change in Data Range

These experiments address the issue of data variation in a decision problem. The size of the decision problem for each experiment is set by selecting problems with a specific number of attributes and alternatives from 4 to 20. For each decision setting, the data range for each attribute is narrowed in steps of 10%. Figures 5-4 and 5-5 show the results for two decision settings (refer to Appendix C for the complete results). The results show that Method M2(S) is not affected by the change in data range and remains the least consistent one for all decision settings. For Methods M1(S), M3(S) and M4(S), the ranking consistency increases with narrower data ranges in all decision settings.


Figure 5-4 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range (ranking consistency index plotted against data ranges narrowing from 100% to 20% for Methods M1(S) to M4(S))

Figure 5-5 With 12 attributes and 12 alternatives, the effects on the ranking consistency for changes in the data range (ranking consistency index plotted against data ranges narrowing from 100% to 20% for Methods M1(S) to M4(S))


Although Method M1(S) is the best performer, M4(S) and M3(S) show close performances. The performance of Method M4(S) is almost the same as that of M1(S) for decision problems with large problem sizes and very narrow data ranges. Method M3(S) also shows a decent performance for large problems (in terms of attributes and alternatives).

5.5 Experiments and Results for TOPSIS

5.5.1 Simulation Experiments for TOPSIS

This simulation experiment is conducted to evaluate four normalisation procedures that can be used with TOPSIS, and to identify the most suitable one for TOPSIS under various decision settings. The experiment is conducted using the following seven steps.

Step 1: Identify the set of MADM methods Mk (k = 1, 2, ..., K) to be compared. The combination of each normalisation procedure with the TOPSIS aggregation technique is regarded as an MADM method. Table 5-3 shows the methods to be compared for TOPSIS.

Table 5-3 Four MADM methods for the experiment with TOPSIS

MADM method   Normalisation procedure                      Aggregation technique
M1(T)         N1: Vector normalisation                     TOPSIS (conventional)
M2(T)         N2: Linear scale transformation (max-min)    TOPSIS
M3(T)         N3: Linear scale transformation (max)        TOPSIS
M4(T)         N4: Linear scale transformation (sum)        TOPSIS


Step 2: Determine the initial and the target decision settings. The experiments test three decision information settings: the number of alternatives, the number of attributes, and the data range of the decision problem. The initial and target settings for each are selected as: (a) the number of alternatives, with an initial setting of 4 alternatives and a target setting of 20 alternatives; (b) the number of attributes, with an initial setting of 4 attributes and a target setting of 20 attributes; and (c) the data range for the performance ratings of each attribute, with an equally divided range between 1 and 10,000. The reason for choosing 4 as the lower limit and 20 as the upper limit for the number of alternatives and attributes is that this range is wide enough to produce significant results. These limits are not to be considered the only choice: experiments were conducted with different sets of lower and upper limits, and it was found that limit values between 4 and 20 provide the significant results required for this study. The data range of 1 to 10,000 is chosen as it can generate sufficient variation for problems with different numbers of alternatives and attributes. Different data ranges were tested, and it was found that the range of 1 to 10,000 provides enough samples to achieve significant, conclusive outcomes.

Step 3: Generate a large number of decision problems for the current settings.


For each decision setting, 10,000 unique decision matrices are generated randomly in each simulation run. Although a sample problem set of 10,000 matrices is large enough to produce significant comparative results, the validity of the results is tested by generating three different sample sets of 10,000 matrices each for each decision information setting.

Step 4: Solve each decision problem with Method Mk (k = 1, 2, ..., K). Each of the 10,000 decision matrices generated in Step 3 is solved using each of the MADM methods in Table 5-3.

Step 5: Use measures to evaluate the performances of Method Mk (k = 1, 2, ..., K). The performance of the methods for a given decision setting is evaluated using the ranking consistency index (RCI) obtained by applying Equations (4-1) and (4-2).

Step 6: Vary a particular decision information setting by a given amount. The three decision information settings presented in Step 2 are varied one at a time. The number of alternatives and attributes are increased by 2 each time. The data range is narrowed by increasing the lower limit by 10% to determine the new setting.

Step 7: Repeat Steps 3 to 6 until the target information setting is reached.

5.5.2 Experimental Results for TOPSIS

5.5.2.1 Results for Change in Alternative Numbers


The experiments are conducted similarly to those for the SAW method. With the data range and the number of attributes set, the number of alternatives is increased to check the impact on the ranking consistency. The results in Figure 5-6 (the complete results are given in Appendix C) show that Method M1(T) is the best performer and M2(T) is the worst. Method M3(T) performs better than M4(T) for problems with a smaller number of alternatives, but worse for larger ones. Method M1(T) performs similarly to M4(T) for decision problems with more than 14 alternatives. For all four methods, the ranking consistency drops dramatically with a larger number of alternatives. This result shows that the conventional TOPSIS method, M1(T), already uses the most consistent normalisation procedure, the vector normalisation (N1).

Figure 5-6 With 12 attributes, the effects on the ranking consistency for changes in the number of alternatives (ranking consistency index plotted against 4 to 20 alternatives for Methods M1(T) to M4(T))


5.5.2.2 Results for Change in Attribute Numbers

The number of alternatives and the data range are set for each experiment, and the number of attributes is changed from 2 to 20 in steps of 2 in order to find the impact on the ranking consistency. Figures 5-7 and 5-8 show the results for two different decision settings (refer to Appendix C for the complete results). The ranking consistency of all the methods is not affected much by the change in the number of attributes when the problem involves a smaller number of alternatives. In decision settings with a larger number of alternatives, all four methods show a decrease in ranking consistency as the number of attributes is increased. For decision settings with a smaller number of alternatives, Method M1(T) performs best, with M3(T) slightly better than M4(T). In decision settings with a larger number of alternatives, M1(T)'s performance is matched by M4(T) as the number of attributes increases. However, in such settings, the ranking consistency of Method M3(T) decreases significantly to match the poor performance of Method M2(T).


Figure 5-7 With 4 alternatives, the effects on the ranking consistency for changes in the number of attributes (ranking consistency index plotted against 4 to 20 attributes for Methods M1(T) to M4(T))

Figure 5-8 With 20 alternatives, the effects on the ranking consistency for changes in the number of attributes (ranking consistency index plotted against 4 to 20 attributes for Methods M1(T) to M4(T))


5.5.2.3 Results for Change in Data Range

With the number of alternatives and attributes set for each of these experiments, the data range is narrowed in 10% steps to assess the impact on the ranking consistency. Figures 5-9 and 5-10 present the results of changes in the data range for two different decision settings in terms of the number of attributes and alternatives (results for the complete range are available in Appendix C). For both the smaller and larger settings, Method M2(T) is unaffected by any change in the data range and is the worst performer in terms of ranking consistency.

Figure 5-9 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range (ranking consistency index plotted against data ranges narrowing from 100% to 20% for Methods M1(T) to M4(T))


Figure 5-10 With 14 attributes and 14 alternatives, the effects on the ranking consistency for changes in the data range (ranking consistency index plotted against data ranges narrowing from 100% to 20% for Methods M1(T) to M4(T))

For decision settings with a smaller number of alternatives and attributes, Methods M1(T), M3(T) and M4(T) perform similarly, with M1(T) being slightly better. The ranking consistency of all three methods increases with a narrower data range. For decision settings with a larger number of alternatives and attributes, M1(T) and M4(T) perform very closely and show a sharp rise in performance with narrower data ranges. The performance of Method M3(T) also increases, but does not reach the level of M1(T).


5.6 Concluding Remarks

Table 5-4 provides a quick reference to the results (trends) for the different variations of the SAW and TOPSIS methods under different decision settings. The experiments in this chapter have produced useful results that can be used as general guidelines for selecting the most suitable normalisation procedure for SAW and TOPSIS under various decision settings. The results have shown that the conventional methods are not necessarily the best performing ones in all decision settings. The experiments demonstrate that using different normalisation procedures to solve a given problem may lead to different ranking outcomes, thus highlighting the need for a new way of method evaluation and comparison.

Table 5-4 Simulation results in terms of performance

Variation in the number of attributes and alternatives:

Decision Settings                SAW N1    SAW N2     SAW N3          SAW N4          TOPSIS N1  TOPSIS N2  TOPSIS N3       TOPSIS N4
Attributes: L, Alternatives: L   H (Best)  M (Worst)  H (Near to N1)  H (Near to N3)  H (Best)   M (Worst)  H (Near to N4)  H
Attributes: L, Alternatives: M   M (Best)  L (Worst)  M               M               M (Best)   L (Worst)  L               M
Attributes: L, Alternatives: H   L (Best)  L (Worst)  L (Near to N2)  L (Near to N1)  L (Best)   L (Worst)  L (Near to N2)  L (Near to N1)
Attributes: M, Alternatives: L   H (Best)  M (Worst)  H (Near to N1)  H (Near to N3)  H (Best)   M (Worst)  H               H (Near to N3)
Attributes: M, Alternatives: M   M (Best)  L (Worst)  M               M               L (Best)   L (Worst)  L               L
Attributes: M, Alternatives: H   L (Best)  L (Worst)  L (Near to N2)  L (Near to N1)  L (Best)   L (Worst)  L (Near to N2)  L (Near to N1)
Attributes: H, Alternatives: L   H (Best)  M (Worst)  H (Near to N1)  H (Near to N3)  H (Best)   M (Worst)  H               H (Near to N3)
Attributes: H, Alternatives: M   M (Best)  L (Worst)  M               M               L (Best)   L (Worst)  L (Near to N2)  L (Near to N1)
Attributes: H, Alternatives: H   L (Best)  L (Worst)  L (Near to N2)  L (Near to N1)  L (Best)   L (Worst)  L (Near to N2)  L (Near to N1)

Variation in the number of attributes and alternatives and the data range:

Decision Settings                               SAW N1    SAW N2     SAW N3          SAW N4          TOPSIS N1  TOPSIS N2  TOPSIS N3       TOPSIS N4
Attributes & alternatives: L, Data Range: L     H (Best)  M (Worst)  H (Near to N4)  H (Near to N1)  H (Best)   M (Worst)  H               H (Near to N1)
Attributes & alternatives: L, Data Range: M     H (Best)  M (Worst)  H (Near to N4)  H (Near to N1)  H (Best)   M (Worst)  H               H
Attributes & alternatives: L, Data Range: H     H (Best)  M (Worst)  H (Near to N4)  H (Near to N1)  H (Best)   M (Worst)  H (Near to N4)  H
Attributes & alternatives: M, Data Range: L     H (Best)  L (Worst)  H               H (Near to N1)  H (Best)   L (Worst)  H               H (Near to N1)
Attributes & alternatives: M, Data Range: M     H (Best)  L (Worst)  H               H (Near to N1)  H (Best)   L (Worst)  M               H
Attributes & alternatives: M, Data Range: H     M (Best)  L (Worst)  M               M (Near to N3)  M (Best)   L (Worst)  L               L
Attributes & alternatives: H, Data Range: L     H (Best)  L (Worst)  M               H (Near to N1)  H (Best)   L (Worst)  M               H (Near to N1)
Attributes & alternatives: H, Data Range: M     M (Best)  L (Worst)  L               M (Near to N1)  M (Best)   L (Worst)  L               M (Near to N1)
Attributes & alternatives: H, Data Range: H     L (Best)  L (Worst)  L               L (Near to N1)  L (Best)   L (Worst)  L (Near to N2)  L (Near to N1)

H = High, M = Moderate, L = Low

Chapter 6 Developments II: Rank Similarity Based Method Evaluation and Selection

6.1 Introduction

Multiattribute decision making (MADM) problems differ greatly in terms of the decision information, the decision context and the applications. With the availability of multiple suitable MADM methods Mk (k = 1, 2, ..., K) for a given MADM problem Φ, selecting the most suitable one is an extremely challenging task (Yeh, 2003; Chakraborty and Yeh, 2007a). Several comparative and simulation based studies suggest the suitability of certain methods under given decision settings (Simpson, 1996; Zanakis et al., 1998; Olson, 2001; Chakraborty and Yeh, 2009). Under certain decision contexts, the decision maker may use the results of these studies for method evaluation and selection. In a decision context where a suitable and acceptable set of MADM methods Mk (k = 1, 2, ..., K) is available for a given problem Φ, the decision maker needs to select the most preferred one among them. Decision Context B identified in Chapter 3 is the decision context addressed in this chapter. Previous studies cannot guarantee that the MADM method selected is the most preferred one for the given problem when Decision Context B is considered.


In this chapter, a novel method selection approach is developed to select the most preferred method from a set of suitable and acceptable MADM methods Mk (k = 1, 2, ..., K) for a given problem Φ. The approach considers the similarities between the ranking outcomes produced by a given set of suitable MADM methods Mk (k = 1, 2, ..., K).

6.2 Methodology Development

6.2.1 Rank Similarity and Method Evaluation

The new method selection approach is developed for dealing with the decision context where each of the outcomes produced by a set of suitable MADM methods Mk (k = 1, 2, ..., K) is considered valid and acceptable to the decision maker. The most preferred method is to be chosen from the set of suitable MADM methods Mk (k = 1, 2, ..., K) depending on the most preferred ranking outcome. The solution space is limited, as it consists of the ranking outcomes produced by a specific set of suitable MADM methods Mk (k = 1, 2, ..., K) for the given problem Φ. Hence, the most preferred outcome must be among the ranking outcomes produced.

For a given MADM problem Φ, the solution space consists of the different ranking outcomes Ok (k = 1, 2, ..., K) produced by each suitable Method Mk (k = 1, 2, ..., K), which are all valid and acceptable to the decision maker. The most preferred MADM method is the one that produces the most preferred outcome. The most preferred outcome is the one which is closest to all the other outcomes. The closeness between the ranking outcomes can be measured in terms of the similarity between them. In this new approach, the similarity between two ranking outcomes is measured by using the rank correlation coefficient (Spearman, 1904). The method which produces the outcome most similar to all the other outcomes is the most preferred method for the given problem.

6.2.2 The Rank Correlation Coefficient

The rank correlation coefficient is widely used as a measure of association between different rankings (Kendall, 1955; Raju and Pillai, 1999). It has been successfully applied in various studies to test the sensitivity and significance of certain information in different MADM problem settings (Zanakis et al., 1998; Triantaphyllou and Sanchez, 1997; Yurdakul and Yusuf, 2009). The rank correlation coefficient between two rankings can be defined as

$$\rho = 1 - \frac{6 \sum_{i=1}^{I} d_i^2}{I^3 - I}; \quad i = 1, 2, \dots, I. \qquad (6-1)$$

where di is the difference between the ranks given to alternative Ai (i = 1, 2, ..., I) by the two rankings.
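To make Equation (6-1) concrete, a minimal Python sketch is given below; the sample ranks are the M1 and M2 columns of Table 6-3 (Section 6.3), and the result reproduces the corresponding entry of Table 6-4.

```python
import numpy as np

def spearman_rho(rank_a, rank_b):
    """Rank correlation coefficient of Equation (6-1):
    rho = 1 - 6 * sum(d_i^2) / (I^3 - I)."""
    d = np.asarray(rank_a, dtype=float) - np.asarray(rank_b, dtype=float)
    I = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (I ** 3 - I)

# Ranks of the six alternatives by Methods M1 and M2 (Table 6-3)
m1 = [6, 5, 1, 4, 3, 2]
m2 = [2, 6, 5, 4, 1, 3]
print(round(spearman_rho(m1, m2), 3))  # -0.086, as in Table 6-4
```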

6.2.3 Rank Similarity Index

The rank similarity index (RSI) is developed as a measure of the similarity between the decision outcome of an MADM method and those of all the other suitable MADM methods in the acceptable set. This measure indicates the relative closeness of a method to the other methods in terms of ranking outcome similarity.


The RSI is the average of the rank correlation coefficients between a ranking outcome and all the other ranking outcomes. The method with the largest RSI produces the ranking outcome most similar or closest to all the other outcomes, and is hence the most preferred one. The rank similarity index can be obtained using the following four steps.

Step 1: Generate the rank matrix (R). This step involves solving the decision problem with each MADM method in the acceptable set and obtaining the ranking outcomes. The outcomes are presented as a matrix called the rank matrix (R), formed by combining the ranks given to alternative Ai (i = 1, 2, ..., I) by Method Mk (k = 1, 2, ..., K) in each ranking outcome Ok (k = 1, 2, ..., K), as shown in Equation (6-2).

$$R = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1K} \\ r_{21} & r_{22} & \cdots & r_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ r_{I1} & r_{I2} & \cdots & r_{IK} \end{bmatrix}; \quad i = 1, 2, \dots, I; \; k = 1, 2, \dots, K. \qquad (6-2)$$

where rik (1 ≤ rik ≤ I) represents the rank of alternative Ai (i = 1, 2, ..., I) obtained by using Method Mk (k = 1, 2, ..., K).

Step 2: Calculate the rank correlation (RC) between ranking outcomes. The rank correlations for Method Mk (k = 1, 2, ..., K) in relation to each of the other Methods Mh (h = 1, 2, ..., K; k ≠ h) are calculated by applying Equations (6-1) and (6-2) as

$$RC_{kh} = \rho(M_k, M_h); \quad k = 1, 2, \dots, K; \; h = 1, 2, \dots, K; \; k \neq h. \qquad (6-3)$$

Step 3: Calculate the rank similarity index (RSI) for each method. The rank similarity index for Method Mk (k = 1, 2, ..., K) is the average of the K - 1 correlations calculated by Equation (6-3), given as

$$RSI_k = \frac{1}{K-1} \sum_{\substack{h=1 \\ h \neq k}}^{K} RC_{kh}; \quad k = 1, 2, \dots, K. \qquad (6-4)$$

Step 4: Find the largest rank similarity index (RSI+).

$$RSI^{+} = \max_{k} \{RSI_k\}; \quad k = 1, 2, \dots, K. \qquad (6-5)$$

The method with the largest rank similarity index (RSI+) is the most preferred one for the given MADM problem.
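The four steps can be condensed into the following short Python sketch. Applied to the rank matrix of Table 6-3 from the example in Section 6.3, it reproduces the RSI values of Table 6-5 and selects Method M3.

```python
import numpy as np

def spearman_rho(a, b):
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    I = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (I ** 3 - I)

def rank_similarity_indices(R):
    """RSI_k of Equation (6-4) for an I x K rank matrix R (Equation (6-2)):
    the average of each method's rank correlations with the other
    K - 1 methods."""
    I, K = R.shape
    return np.array([
        np.mean([spearman_rho(R[:, k], R[:, h]) for h in range(K) if h != k])
        for k in range(K)
    ])

# Rank matrix of Table 6-3 (rows A1..A6, columns M1..M9)
R = np.array([
    [6, 2, 5, 6, 6, 4, 2, 3, 5],
    [5, 6, 6, 5, 4, 3, 6, 5, 3],
    [1, 5, 3, 2, 1, 2, 4, 2, 1],
    [4, 4, 4, 4, 3, 5, 5, 4, 4],
    [3, 1, 1, 3, 5, 1, 1, 1, 2],
    [2, 3, 2, 1, 2, 6, 3, 6, 6],
])
rsi = rank_similarity_indices(R)
print(np.round(rsi, 3))          # RSI values of Table 6-5
print(int(np.argmax(rsi)) + 1)   # Equation (6-5): Method M3 is selected
```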

6.3 Numerical Example

6.3.1 Methods Used in the Example

In this example, variants of three widely used MADM methods are used: (a) the simple additive weighting (SAW), (b) the technique for order preference by similarity to ideal solution (TOPSIS), and (c) the weighted product (WP). The SAW and TOPSIS methods have been presented in Chapter 5. The WP method can be presented as

$$V_i = \prod_{j=1}^{J} x_{ij}^{W_j}; \quad i = 1, 2, \dots, I. \qquad (6-6)$$

where xij (i = 1, 2, ..., I; j = 1, 2, ..., J) is the performance rating in decision matrix X as shown in Equation (3-1); Wj (j = 1, 2, ..., J) is the weight for attribute Cj (j = 1, 2, ..., J) as shown in Equation (3-2); and Vi is the overall preference value for alternative Ai (i = 1, 2, ..., I). The alternatives Ai (i = 1, 2, ..., I) are ranked according to the value of Vi; a higher Vi value indicates a higher ranking for the alternative.
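As a minimal illustration of Equation (6-6), the following Python sketch computes the WP preference values for the decision matrix of Table 6-2 with the weights W = (0.3, 0.1, 0.3, 0.15, 0.15) used in this example; the resulting order corresponds to the M5 column of Table 6-3.

```python
import numpy as np

def weighted_product(X, W):
    """WP overall preference values, Equation (6-6): V_i = prod_j x_ij ** W_j.
    Assumes benefit attributes with positive performance ratings."""
    X = np.asarray(X, dtype=float)
    return np.prod(X ** np.asarray(W, dtype=float), axis=1)

# Decision matrix of Table 6-2 with the weights used in this example
X = [[690, 3.1,  9,  7,  4],
     [590, 3.9,  7,  6, 10],
     [600, 3.6,  8,  8,  7],
     [620, 3.8,  7, 10,  6],
     [700, 2.8, 10,  4,  6],
     [650, 4.0,  6,  9,  8]]
W = [0.3, 0.1, 0.3, 0.15, 0.15]
V = weighted_product(X, W)
order = np.argsort(-V) + 1   # alternatives, best first: A3, A6, A4, A2, A5, A1
print(np.round(V, 2), order)
```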

Table 6-1 shows the nine suitable MADM methods evaluated in this example. These methods include the four variants of SAW shown in Table 5-2, the four variants of TOPSIS shown in Table 5-3, and the WP method shown in Equation (6-6).

Table 6-1 Nine MADM methods used in the example

MADM method   Normalisation procedure                      Aggregation technique
M1            N1: Vector normalisation                     SAW
M2            N2: Linear scale transformation (max-min)    SAW
M3*           N3: Linear scale transformation (max)        SAW
M4            N4: Linear scale transformation (sum)        SAW
M5**          N/A                                          WP
M6***         N1: Vector normalisation                     TOPSIS
M7            N2: Linear scale transformation (max-min)    TOPSIS
M8            N3: Linear scale transformation (max)        TOPSIS
M9            N4: Linear scale transformation (sum)        TOPSIS

* M3 is the conventional SAW method
** M5 is the conventional WP method
*** M6 is the conventional TOPSIS method


6.3.2 The Example

To illustrate the rank similarity based method selection approach, the decision matrix from the graduate fellowship applicants ranking case is used (Yoon and Hwang, 1995). Table 6-2 shows the decision matrix. The attribute weights for the decision problem are given as W = (0.3, 0.1, 0.3, 0.15, 0.15). The methods shown in Table 6-1 produce different ranking outcomes, which are shown as a rank matrix in Table 6-3 by applying Equation (6-2). The rank correlation coefficients for each method with respect to the other methods are calculated by applying Equation (6-3) to Table 6-3, and the results are shown in Table 6-4. The rank similarity index is calculated by applying Equation (6-4) to Table 6-4, and the results are shown in Table 6-5.

Table 6-2 Decision matrix used in the example

Alternative    C1     C2    C3    C4    C5
A1             690    3.1    9     7     4
A2             590    3.9    7     6    10
A3             600    3.6    8     8     7
A4             620    3.8    7    10     6
A5             700    2.8   10     4     6
A6             650    4.0    6     9     8

Table 6-3 Resultant rank matrix

Alternative   M1   M2   M3   M4   M5   M6   M7   M8   M9
A1             6    2    5    6    6    4    2    3    5
A2             5    6    6    5    4    3    6    5    3
A3             1    5    3    2    1    2    4    2    1
A4             4    4    4    4    3    5    5    4    4
A5             3    1    1    3    5    1    1    1    2
A6             2    3    2    1    2    6    3    6    6

Table 6-4 Rank correlation coefficient between MADM methods

       M1      M2      M3      M4      M5      M6      M7      M8      M9
M1     1      -0.086   0.714   0.943   0.829   0.143   0.086   0.143   0.371
M2    -0.086   1       0.600   0.029  -0.543   0.086   0.943   0.429  -0.257
M3     0.714   0.600   1       0.771   0.257   0.200   0.657   0.371   0.143
M4     0.943   0.029   0.771   1       0.771  -0.086   0.143  -0.086   0.086
M5     0.829  -0.543   0.257   0.771   1      -0.200  -0.429  -0.257   0.200
M6     0.143   0.086   0.200  -0.086  -0.200   1       0.257   0.829   0.886
M7     0.086   0.943   0.657   0.143  -0.429   0.257   1       0.543  -0.086
M8     0.143   0.429   0.371  -0.086  -0.257   0.829   0.543   1       0.714
M9     0.371  -0.257   0.143   0.086   0.200   0.886  -0.086   0.714   1

Table 6-5 Rank similarity index for suitable MADM methods

       M1     M2     M3     M4     M5     M6     M7     M8     M9
RSI   0.393  0.150  0.464  0.321  0.079  0.264  0.264  0.336  0.250

From Table 6-5, the largest RSI can be selected using Equation (6-5) as RSI+ = RSI(M3) = 0.464. This suggests that Method M3 produces the ranking outcome most similar to those of all the other methods. Hence, this method is the most preferred one for the given problem under the decision context considered. These results can be used in conjunction with other decision contexts where the decision maker is considering multiple contexts and can select a method which best satisfies all the contexts.

In this particular example, it is observed that the conventional SAW method (i.e. Method M3) is the best performer, while the conventional TOPSIS method (i.e. Method M6) and the conventional WP method (i.e. Method M5) do not perform well. This highlights the need for a change in the way the existing method comparison and selection studies are conducted. These results reinforce the argument that the MADM methods considered for selection should not just include the ones originally developed or commonly applied (such as M3, M5 and M6 in Table 6-1). Instead, the comparisons should be done at more detailed levels, including normalisation procedures and aggregation techniques, wherever possible.


6.4 Concluding Remarks

The rank similarity based MADM method selection approach developed in this chapter provides an efficient yet simple, context dependent way of selecting a method for a given problem. Although the illustrated example has used variants of the SAW, TOPSIS and WP methods only, the approach is applicable for selecting from any set of MADM methods capable of producing a complete ranking outcome. The importance of applying a problem specific method selection approach for a given MADM problem, rather than a generalised selection approach, has also been highlighted.


Chapter 7 Developments III: An Alternatives-Oriented Method Evaluation and Selection

7.1 Introduction

The decision-maker-oriented and the method-oriented approaches have been developed from the perspectives of the decision maker and the MADM method respectively, as discussed in Chapter 2. In MADM problem settings where the decision maker is not a key stakeholder, the decision alternatives, as the key stakeholders, should have a greater influence in determining the decision outcome. The significance of the role played by the alternatives in the decision making process has led to the development of the alternatives-oriented method evaluation and selection approach presented in this chapter.

A recent study on the MBA rankings published by the Financial Times discussed the inconsistency and bias that can arise during problem structuring (Jessop, 2009). The same decision problem (once structured) may need to address the method selection issue as well. When ranking MBA programmes, the Financial Times chooses a method without considering the views of the relevant business schools on the method selection process. The ranking outcome has great impacts on the business schools being ranked; hence they are the major stakeholders and should be involved in the method selection process. Similarly, some multiattribute ranking problems, such as the ranking of universities (as the decision alternatives), are based on a given set of evaluation criteria (as the attributes). The role of the decision maker, if one exists, is restricted to obtaining a ranking outcome and analysing the data. The MADM method applied to obtain the ranking is decided subjectively by the decision maker. With other suitable methods available, the one used by the decision maker does not necessarily produce the outcome most preferred by all the stakeholders. It is thus our belief that the decision alternatives should play the role of the decision maker (where no decision maker is available or the decision maker is not a key stakeholder), as in this case the alternatives are the key stakeholders of the decision problem. As a stakeholder, an alternative will naturally have a higher preference for a method that gives it a better ranking. The alternatives-oriented approach developed in this chapter uses a new performance measure called the method preference level for justifying the method selection. The preference level indicates the satisfaction or acceptability degree of all the alternatives as a whole for each suitable MADM method, thus providing the basis for objective comparison. In addition to the objective performance measure, the novelty of this approach lies in its new way of addressing the MADM method evaluation and selection problem from the perspective of the alternatives, which makes method selection possible even without the presence of the decision maker. This chapter addresses the Decision Context C outlined in Chapter 3. In subsequent sections, the new alternatives-oriented approach is developed along with the performance measure. A worked example is then presented to illustrate the effectiveness of the approach.


7.2 The Alternatives-Oriented Approach and the Preference Level

The alternatives-oriented approach considers the preference of each decision alternative when selecting an MADM method to solve a given multiattribute decision problem that requires a complete ranking of the decision alternatives. Each MADM method produces a ranking outcome for the given problem. An alternative will naturally have a higher preference for an MADM method which gives it a higher rank. The preference degree of each alternative for each MADM method is determined by considering its preference over the other methods. The preference degrees given to a method individually by all the alternatives are combined to obtain the overall preference level of the method. The preference level of an MADM method with respect to all the alternatives indicates the level of satisfaction or acceptability it provides to all the alternatives as a whole. The method that provides the highest level of satisfaction to all the alternatives as a whole is the most preferred one for the given problem. The new approach involves the five steps given below.

Step 1: Generate the rank matrix. Each MADM method Mk (k = 1, 2, ..., K) produces an individual ranking outcome Ok (k = 1, 2, ..., K) for the decision alternatives Ai (i = 1, 2, ..., I) of the given decision problem Φ. The rank matrix R is obtained by organising the rank of each alternative produced by each method, as shown by Equation (6-2) in Chapter 6.

Step 2: Obtain the preference degree. The method preference degree indicates the extent to which a decision alternative Ai (i = 1, 2, ..., I) prefers a Method Mk (k = 1, 2, ..., K) over the other methods. The method which provides the alternative with the highest rank receives the highest degree of preference (i.e. 1). The preference degree pik (i = 1, 2, ..., I; k = 1, 2, ..., K) of each Method Mk (k = 1, 2, ..., K) generated by each alternative Ai (i = 1, 2, ..., I) is obtained by

$$p_{ik} = \frac{K - b_{ki}}{K}; \quad i = 1, 2, \dots, I; \; k = 1, 2, \dots, K. \qquad (7-1)$$

where K is the number of methods considered and bki is the number of methods producing a better ranking than Method Mk (k = 1, 2, ..., K) for alternative Ai (i = 1, 2, ..., I). The preference degree matrix P is generated by combining the preference degrees of all the methods with respect to each alternative as

$$P = \begin{bmatrix} p_{11} & p_{12} & \cdots & p_{1K} \\ p_{21} & p_{22} & \cdots & p_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ p_{I1} & p_{I2} & \cdots & p_{IK} \end{bmatrix} \qquad (7-2)$$

Step 3: Calculate the scaled preference degree. The preference degrees of different decision alternatives in Equation (7-2) are on different scales, and need to be converted to a single scale for comparison. The highest preference degree of an alternative for any method can be 1. Hence, the preference degrees are converted to a unified scale in such a manner that, for any alternative, the scaled preference degrees sum to 1. This scaling can be obtained by applying Equation (5-6) in Chapter 5 to Equation (7-2) as

$$u_{ik} = \frac{p_{ik}}{\sum_{k=1}^{K} p_{ik}}; \quad i = 1, 2, \dots, I; \; k = 1, 2, \dots, K. \qquad (7-3)$$

The resultant scaled preference matrix U is given as

$$U = \begin{bmatrix} u_{11} & u_{12} & \cdots & u_{1K} \\ u_{21} & u_{22} & \cdots & u_{2K} \\ \vdots & \vdots & \ddots & \vdots \\ u_{I1} & u_{I2} & \cdots & u_{IK} \end{bmatrix} \qquad (7-4)$$

Step 4: Calculate the preference level. The method preference level Lk (k = 1, 2, ..., K) is the overall preference degree for each Method Mk (k = 1, 2, ..., K) by all the decision alternatives. It is calculated as the average of the scaled preference degrees given to the method by each alternative Ai (i = 1, 2, ..., I), and is presented as a percentage for ease of comparison:

$$L_k = \left( \frac{1}{I} \sum_{i=1}^{I} u_{ik} \right) \times 100; \quad i = 1, 2, \dots, I; \; k = 1, 2, \dots, K. \qquad (7-5)$$

Step 5: Select the most preferred method. The Method Mk (k = 1, 2, ..., K) with the highest preference level Lk (k = 1, 2, ..., K) is the most preferred (or most acceptable) one for the decision problem Φ under investigation, as it best satisfies all the decision alternatives in terms of their method preferences as a whole. The most preferred method can be selected by finding the highest preference level Lk (k = 1, 2, ..., K) as

$$L^{+} = \max_{k} (L_k); \quad k = 1, 2, \dots, K. \qquad (7-6)$$
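The five steps can be condensed into the following Python sketch. Applied to the rank matrix of Table 7-2 from the example in Section 7.3, it reproduces the preference levels of Table 7-5.

```python
import numpy as np

def preference_levels(R):
    """Alternatives-oriented preference levels, Equations (7-1) to (7-5).
    R is the I x K rank matrix (rank 1 = best)."""
    I, K = R.shape
    # Equation (7-1): p_ik = (K - b_ki) / K, where b_ki counts the methods
    # that rank alternative A_i better (lower rank value) than method M_k.
    P = np.zeros((I, K))
    for i in range(I):
        for k in range(K):
            b = np.sum(R[i, :] < R[i, k])
            P[i, k] = (K - b) / K
    # Equation (7-3): scale each alternative's degrees so they sum to 1
    U = P / P.sum(axis=1, keepdims=True)
    # Equation (7-5): average over the alternatives, in percent
    return U.mean(axis=0) * 100

# Rank matrix of Table 7-2 (rows A1..A6, columns M1..M9)
R = np.array([
    [5, 3, 5, 5, 5, 6, 1, 6, 6],
    [3, 6, 3, 3, 4, 2, 5, 3, 1],
    [4, 5, 4, 4, 3, 4, 6, 4, 3],
    [2, 4, 2, 2, 2, 3, 4, 2, 2],
    [6, 2, 6, 6, 6, 5, 2, 5, 5],
    [1, 1, 1, 1, 1, 1, 3, 1, 4],
])
L = preference_levels(R)
print(np.round(L, 2))         # preference levels, as in Table 7-5
print(int(np.argmax(L)) + 1)  # Equation (7-6): a most preferred method
```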

7.3 Numerical Example

To illustrate the alternatives-oriented method selection approach, we use the decision matrix from the graduate fellowship applicants ranking problem presented in Yoon and Hwang (1995). Table 6-2 in Chapter 6 shows the decision matrix. The attribute weights for the decision problem are given as W = (0.3, 0.2, 0.2, 0.15, 0.15). We use the nine MADM methods suitable for solving this problem shown in Table 6-1 in Chapter 6. The methods (M1, M2, ..., M9) given in Table 6-1 are applied separately to solve the decision problem given in Table 6-2, with six alternatives (A1, A2, ..., A6) to be ranked. Table 7-1 shows the ranking outcomes obtained by each MADM method. Table 7-2 shows the rank matrix obtained by following Step 1 with Table 7-1. Table 7-3 shows the method preference degrees, calculated by Equations (7-1) and (7-2) using the data in Table 7-2. Table 7-4 shows the scaled preference degrees, obtained by Equations (7-3) and (7-4) using the data in Table 7-3. Table 7-5 shows the overall preference level for each MADM method (M1, M2, ..., M9), calculated by Equation (7-5) using the data in Table 7-4.


Table 7-1 Ranking outcomes obtained

MADM method   Ranking
M1            A6 > A4 > A2 > A3 > A1 > A5
M2            A6 > A5 > A1 > A4 > A3 > A2
M3            A6 > A4 > A2 > A3 > A1 > A5
M4            A6 > A4 > A2 > A3 > A1 > A5
M5            A6 > A4 > A3 > A2 > A1 > A5
M6            A6 > A2 > A4 > A3 > A5 > A1
M7            A1 > A5 > A6 > A4 > A2 > A3
M8            A6 > A4 > A2 > A3 > A5 > A1
M9            A2 > A4 > A3 > A6 > A5 > A1

Table 7-2 Resultant rank matrix

Alternative   M1   M2   M3   M4   M5   M6   M7   M8   M9
A1             5    3    5    5    5    6    1    6    6
A2             3    6    3    3    4    2    5    3    1
A3             4    5    4    4    3    4    6    4    3
A4             2    4    2    2    2    3    4    2    2
A5             6    2    6    6    6    5    2    5    5
A6             1    1    1    1    1    1    3    1    4

Table 7-3 The method preference degree matrix

Alternative   M1    M2    M3    M4    M5    M6    M7    M8    M9
A1            7/9   8/9   7/9   7/9   7/9   1/3   1     1/3   1/3
A2            7/9   1/9   7/9   7/9   1/3   8/9   2/9   7/9   1
A3            7/9   2/9   7/9   7/9   1     7/9   1/9   7/9   1
A4            1     2/9   1     1     1     1/3   2/9   1     1
A5            4/9   1     4/9   4/9   4/9   7/9   1     7/9   7/9
A6            1     1     1     1     1     1     2/9   1     1/9

Table 7-4 The scaled method preference degree matrix

Alternative   M1     M2     M3     M4     M5     M6     M7     M8     M9
A1            0.13   0.15   0.13   0.13   0.13   0.06   0.17   0.06   0.06
A2            0.14   0.02   0.14   0.14   0.06   0.16   0.04   0.14   0.18
A3            0.13   0.04   0.13   0.13   0.16   0.13   0.02   0.13   0.16
A4            0.15   0.03   0.15   0.15   0.15   0.05   0.03   0.15   0.15
A5            0.07   0.16   0.07   0.07   0.07   0.13   0.16   0.13   0.13
A6            0.14   0.14   0.14   0.14   0.14   0.14   0.03   0.14   0.02

Table 7-5 The preference level for each MADM method

                          M1     M2    M3     M4     M5     M6     M7    M8     M9
Preference level L (%)   12.48  8.94  12.48  12.48  11.76  10.84  7.51  12.15  11.38

The highest preference level identified from Table 7-5 by Equation (7-6) is L+ = 12.48%. Methods M1, M3 and M4 produce the same ranking outcome, which has the highest preference level from all the alternatives as a whole. Hence, the ranking outcome produced by Methods M1, M3 and M4 is the most acceptable to all the alternatives as a whole for the given decision problem, making any of Methods M1, M3 and M4 the most preferred method. This result shows that a method is selected as the most preferred one if it produces the ranking outcome most preferred by all the alternatives as a whole. As shown in this example, there may be more than one most preferred MADM method if these methods produce the same most preferred ranking outcome.

7.4 Application in Decision Support Systems

Figure 7-1 shows a typical existing decision support system (DSS) for solving multiattribute decision problems. In a DSS of this kind, active participation of the decision maker is required at Stages 1 and 2. At Stage 1, the decision maker constructs the decision problem with a decision matrix and a weight vector. At Stage 2, the decision maker chooses a method from a given set of suitable methods, which is used at Stage 3 to solve the problem. This system may not produce the best outcome, because the method selected by the decision maker does not necessarily produce the ranking outcome most preferred by all the stakeholders. Research shows that there is no best way to select the most suitable MADM method for a given decision problem. The high dependency of the existing DSS on the decision maker for the preferred method selection may induce bias, depending on the decision maker's knowledge, expertise, experience and preference.

[Figure 7-1: flowchart of an existing DSS for MADM problems, with Stage 1 (problem formulation: identify decision alternatives and attributes, generate the multiattribute decision problem), Stage 2 (method selection: subjectively select a preferred method from the suitable MADM methods) and Stage 3 (problem solving: obtain the ranking outcome)]

Figure 7-1 Existing DSS for MADM problems

Figure 7-2 shows a new DSS based on the alternatives-oriented approach developed in this chapter for solving the general multiattribute decision problem. This new system uses the alternatives-oriented method selection approach to combine

Stages 2 and 3 of Figure 7-1 into a single stage. The system uses each method from the suitable set of methods to obtain ranking outcomes, which are then used to find the most preferred outcome and method. The system provides an objective way of selecting the method that produces the most preferred outcome, thus eliminating the subjective dependency on the decision maker for method selection.

[Figure 7-2: flowchart of the alternatives-oriented DSS, with Stage 1 (problem formulation: identify decision alternatives and attributes, generate the multiattribute decision problem) and Stage 2 (method selection and problem solving combined: obtain ranking outcomes with each suitable MADM method, then select the preferred outcome and method)]

Figure 7-2 Alternatives-oriented DSS for multiattribute decision problems


Table 7-6 shows a comparison between an existing DSS and the new alternatives-oriented DSS for the general multiattribute decision problem.

Table 7-6 Comparison between existing DSS and alternatives-oriented DSS

                    Existing DSS                               Alternatives-oriented DSS
Input               - Problem statement                        - Problem statement
                    - Suitable methods                         - Suitable methods
Method selection    - Subjective approach                      - Objective approach
                    - Selected by the decision maker based     - Selected by the system based
                      on knowledge and experiences               on ranking outcomes
Ranking             - One ranking obtained by only the         - Multiple rankings obtained by each
                      chosen method                              of the suitable methods
Output              - Ranking by the chosen method             - Preferred outcome, relative to
                                                                 other outcomes
                                                               - Preferred method, relative to
                                                                 other methods


7.5 Concluding Remarks

Method selection has become a key research issue in solving a multiattribute decision making (MADM) problem. To address this important issue, this chapter has presented a new alternatives-oriented approach that considers the preference of the decision alternatives as the stakeholders of the decision problem. Departing from the decision-maker-oriented and the method-oriented approaches in method selection research, the alternatives-oriented approach objectively selects the MADM method that produces the ranking outcome preferred most by all the alternatives as a whole. The approach is efficient in calculating the overall preference level for each MADM method by considering the preference of each alternative. With its objective measure, the approach is particularly suitable for problem settings where no decision maker is available for method selection or the decision maker is not a key stakeholder. A numerical example has also been presented to demonstrate the simplicity and ease of use of the new approach. Although the example compares compensatory MADM methods with cardinal rankings, the approach is applicable to any set of suitable MADM methods that produce a complete ranking. With its simplicity in concept and computation, it can readily be incorporated into a decision support system for solving multiattribute decision problems that require a complete ranking of the decision alternatives.


Chapter 8 Developments IV: Comparisons between TOPSIS and Modified TOPSIS Methods

8.1 Introduction

The technique for order preference by similarity to ideal solution (TOPSIS) (Hwang and Yoon, 1981) is one of the most widely used MADM methods for solving practical MADM problems. A variant of TOPSIS, named modified TOPSIS, was developed from an argument about how the attribute weights should be applied when solving MADM problems (Deng et al., 2000). Both TOPSIS and modified TOPSIS have been applied for problem solving by various researchers. The two methods use the same Euclidean distance measure and differ only in how the attribute weights are incorporated into the solution. It is very difficult for the decision maker to choose between these two methods, due to the extreme similarity of their mathematical structures and their applicability to the same kinds of MADM problems. There is thus a need to evaluate and compare the two methods to justify their suitability and applications. In this chapter, the Decision Context D outlined in Chapter 3 is addressed. Comparison studies between the TOPSIS and the modified TOPSIS methods are conducted to justify the appropriateness of the usage of attribute weights.


8.2 TOPSIS and Modified TOPSIS

The TOPSIS and the modified TOPSIS methods are explained by considering how these methods are applied to solve the general MADM problem Φ as given in Equation (3-3). The MADM problem Φ consists of the decision matrix X as shown in Equation (3-1) and the attribute weight vector W given in Equation (3-2) in Chapter 3.

8.2.1 The TOPSIS Method

TOPSIS has been used extensively to solve various practical MADM problems, owing to its comprehensible mathematical concept, ease of use and simplicity, computational efficiency, and ability to measure alternative performance in a simple mathematical form (Yeh, 2003). In TOPSIS, an index known as the similarity to the positive-ideal solution is defined by combining the closeness to the positive-ideal solution and the remoteness from the negative-ideal solution. This index is used to rank the competing alternatives (Hwang and Yoon, 1981; Zeleny, 1982). The TOPSIS method has been presented in detail in Chapter 5, Equations (5-7) and (5-9) to (5-15).

8.2.2 The Modified TOPSIS Method

Modified TOPSIS incorporates the attribute weights with the performance ratings in a different manner from the TOPSIS method. The overall performance index is calculated using the distances from the positive-ideal and negative-ideal solutions, and modified TOPSIS proposes applying the attribute weights within these Euclidean distances themselves (Deng et al., 2000). Modified TOPSIS inherits all the positive aspects of TOPSIS, and also aims to rectify the use of the non-weighted Euclidean distance in TOPSIS. The modified TOPSIS method involves the following steps.

Step 1: Obtain the normalised decision matrix. The normalised decision matrix is calculated as in TOPSIS, and can be presented as Equation (5-7) in Chapter 5.

Step 2: Identify the positive-ideal and negative-ideal solutions. The positive-ideal solution B* and the negative-ideal solution B- can be obtained in terms of the normalised performance ratings from Equation (5-7) as

$$B^* = \{y_1^*, y_2^*, \dots, y_J^*\} \qquad (8-1)$$

$$B^- = \{y_1^-, y_2^-, \dots, y_J^-\} \qquad (8-2)$$

where

$$y_j^* = \begin{cases} \max_i y_{ij}, & \text{for benefit attributes} \\ \min_i y_{ij}, & \text{for cost attributes} \end{cases} \qquad y_j^- = \begin{cases} \min_i y_{ij}, & \text{for benefit attributes} \\ \max_i y_{ij}, & \text{for cost attributes} \end{cases}$$

Step 3: Obtain the weighted Euclidean distances. The weighted Euclidean distances from the positive-ideal and negative-ideal solutions for each alternative Ai (i = 1, 2, ..., I) are calculated by applying Equations (3-2), (5-7), (8-1) and (8-2) as

$$D_i^* = \sqrt{\sum_{j=1}^{J} W_j (y_{ij} - y_j^*)^2}; \quad i = 1, 2, \dots, I. \qquad (8-3)$$

$$D_i^- = \sqrt{\sum_{j=1}^{J} W_j (y_{ij} - y_j^-)^2}; \quad i = 1, 2, \dots, I. \qquad (8-4)$$

where Wj (j = 1, 2, ..., J) is the weight for attribute Cj (j = 1, 2, ..., J).

Step 4: Obtain the overall performance index. The overall performance index for each alternative Ai (i = 1, 2, ..., I) is obtained as

$$V_i = \frac{D_i^-}{D_i^* + D_i^-}; \quad i = 1, 2, \dots, I. \qquad (8-5)$$

Performance index Vi (i = 1, 2, ..., I) is used to rank the competing alternatives. A higher index value indicates a better alternative performance.
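A minimal Python sketch of these steps is given below. It assumes the vector normalisation of conventional TOPSIS in Step 1 and, for illustration, treats the decision matrix of Table 6-2 as having benefit attributes only, with equal weights.

```python
import numpy as np

def modified_topsis(X, W, benefit=None):
    """Modified TOPSIS, Equations (8-1) to (8-5), assuming the vector
    normalisation of conventional TOPSIS in Step 1. `benefit` is a
    boolean mask (True = benefit attribute); all True if omitted."""
    X = np.asarray(X, dtype=float)
    W = np.asarray(W, dtype=float)
    J = X.shape[1]
    if benefit is None:
        benefit = np.ones(J, dtype=bool)
    Y = X / np.sqrt((X ** 2).sum(axis=0))                      # Step 1
    y_pos = np.where(benefit, Y.max(axis=0), Y.min(axis=0))    # (8-1)
    y_neg = np.where(benefit, Y.min(axis=0), Y.max(axis=0))    # (8-2)
    d_pos = np.sqrt((W * (Y - y_pos) ** 2).sum(axis=1))        # (8-3)
    d_neg = np.sqrt((W * (Y - y_neg) ** 2).sum(axis=1))        # (8-4)
    return d_neg / (d_pos + d_neg)                             # (8-5)

# Illustration: decision matrix of Table 6-2, equal weights
X = [[690, 3.1,  9,  7,  4],
     [590, 3.9,  7,  6, 10],
     [600, 3.6,  8,  8,  7],
     [620, 3.8,  7, 10,  6],
     [700, 2.8, 10,  4,  6],
     [650, 4.0,  6,  9,  8]]
W = [0.2, 0.2, 0.2, 0.2, 0.2]
print(np.round(modified_topsis(X, W), 3))
```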

8.3 Method Comparisons

The TOPSIS and modified TOPSIS methods are compared under two different weight settings: (a) all the attribute weights are equal, and (b) the attribute weights are not equal.

8.3.1 Comparison with Equal Weight Settings

A problem-solving simulation is conducted with more than 1,000 MADM problems under equal attribute weight settings. For each problem, the TOPSIS and the modified TOPSIS methods produce exactly the same ranking outcome. This result can be justified by the following mathematical proof. The TOPSIS Equation (5-15) is expanded using Equations (5-13) and (5-14) as

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} (v_{ij} - v_j^-)^2}}{\sqrt{\sum_{j=1}^{J} (v_{ij} - v_j^*)^2} + \sqrt{\sum_{j=1}^{J} (v_{ij} - v_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-6)$$

Equation (8-6) can be further expanded by applying Equations (5-9) to (5-12) as

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} (W_j y_{ij} - W_j y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} (W_j y_{ij} - W_j y_j^*)^2} + \sqrt{\sum_{j=1}^{J} (W_j y_{ij} - W_j y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-7)$$

or

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} W_j^2 (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} W_j^2 (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} W_j^2 (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-8)$$

With the equal weight settings, applying $W_j = W$ to Equation (8-8) gives

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} W^2 (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} W^2 (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} W^2 (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-9)$$

or

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-10)$$

Similarly, the modified TOPSIS Equation (8-5) can be expanded by using Equations (8-3) and (8-4) as

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} W_j (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} W_j (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} W_j (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-11)$$

With the equal weight settings, applying $W_j = W$ to Equation (8-11) gives

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} W (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} W (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} W (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-12)$$

or

$$V_i = \frac{\sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^-)^2}}{\sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^*)^2} + \sqrt{\sum_{j=1}^{J} (y_{ij} - y_j^-)^2}}; \quad i = 1, 2, \dots, I. \qquad (8-13)$$

Comparing Equations (8-10) and (8-13), it is observed that under equal weight settings the two methods are exactly the same. This mathematical explanation justifies the identical ranking results obtained during the simulation study. It also highlights the extreme structural similarity between the two methods, and justifies the need for further investigation under non-equal weights.
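The equal-weight equivalence can also be checked numerically. The following sketch implements both methods for benefit attributes only (an assumption for brevity) and confirms, for a randomly generated problem, that the two rankings coincide.

```python
import numpy as np

def topsis(X, W):
    """Conventional TOPSIS: weights applied to the normalised ratings
    before the (unweighted) Euclidean distances, as in Equation (8-8)."""
    X = np.asarray(X, dtype=float)
    Y = X / np.sqrt((X ** 2).sum(axis=0))
    V = np.asarray(W) * Y
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)   # benefit attributes only
    d_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

def modified_topsis(X, W):
    """Modified TOPSIS: weighted Euclidean distances, Equation (8-11)."""
    X = np.asarray(X, dtype=float)
    Y = X / np.sqrt((X ** 2).sum(axis=0))
    y_pos, y_neg = Y.max(axis=0), Y.min(axis=0)
    d_pos = np.sqrt((np.asarray(W) * (Y - y_pos) ** 2).sum(axis=1))
    d_neg = np.sqrt((np.asarray(W) * (Y - y_neg) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

rng = np.random.default_rng(0)
X = rng.uniform(1, 10_000, size=(6, 5))   # one random decision problem
W_equal = np.full(5, 0.2)
r1 = np.argsort(-topsis(X, W_equal))
r2 = np.argsort(-modified_topsis(X, W_equal))
print(np.array_equal(r1, r2))             # True: identical rankings
```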


8.3.2 Comparison with Non-Equal Weight Settings

A simulation study and its results are presented first, before a mathematical comparison of the TOPSIS and modified TOPSIS methods under non-equal weight settings is provided.

8.3.2.1 Simulation Results

In this simulation study, the decision matrix from the graduate fellowship applicants ranking case presented by Yoon and Hwang (1995) is used. Table 6-2 in Chapter 6 shows the decision matrix. The simulation starts with the equal attribute weights W = (0.2, 0.2, 0.2, 0.2, 0.2) for the five attributes. With this equal weight setting, the decision problem is solved with both TOPSIS and modified TOPSIS. The ranking outcomes obtained are exactly the same and are used as the base outcomes. The attribute weights are then changed gradually in steps of 0.1, producing 126 distinct weight sets between (0.6, 0.1, 0.1, 0.1, 0.1) and (0.1, 0.1, 0.1, 0.1, 0.6). The increment step of 0.1 is chosen because it produces the significant result variations required for this study. For each set of weights, the MADM problem is solved using both the TOPSIS and modified TOPSIS methods. The simulation shows that 70% of the 126 weight sets generate distinct ranking outcomes for the two methods.
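As a check on the number of weight sets, the following short sketch enumerates all vectors of five weights that are multiples of 0.1, at least 0.1 each, and sum to 1 (the assumed construction of the weight sets); there are exactly 126 of them.

```python
from itertools import product

# All 5-attribute weight vectors in steps of 0.1, each weight >= 0.1,
# summing to 1.0 - the weight sets used in this simulation.
weight_sets = [tuple(k / 10 for k in ks)
               for ks in product(range(1, 7), repeat=5)
               if sum(ks) == 10]
print(len(weight_sets))   # 126
```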


The simulation results, together with the preceding analysis for equal weight settings, highlight the fact that the only difference between TOPSIS and modified TOPSIS is in how the attribute weights are incorporated during the calculations. A closer inspection of the expanded TOPSIS Equation (8-8) and the expanded modified TOPSIS Equation (8-11) shows that the only difference between the two methods is that TOPSIS uses $W_j^2$ while modified TOPSIS uses $W_j$ when calculating the distances from the positive-ideal and negative-ideal solutions. Thus, further mathematical analysis under non-equal weight settings is required to establish the validity of these methods.

8.3.2.2 Mathematical Analysis

The modified TOPSIS method suggests that the distances between performance ratings should be weighted, rather than the performance ratings themselves as done in TOPSIS. Taking this argument as rational and valid, the weighted Euclidean distance equation is derived here from basic Euclidean distance theory (Greenacre, 2009). A single dimension problem with two vectors P [x1] and Q [x2] is shown in Figure 8-1.

[Figure 8-1: points P [x1] and Q [x2] on Axis 1, separated by the distance |x1 - x2|]

Figure 8-1 Distance in one dimensional space

The distance between P and Q is obtained as

$$|PQ| = d_x = |x_1 - x_2| \qquad (8-14)$$

If the dimension has any weight associated with it, then the weighted distance can be expressed as

$$D_x = W_1 |x_1 - x_2| \qquad (8-15)$$

Now consider the problem with two dimensions with vectors P [x1, x2] and Q [y1, y2] as shown in Figure 8-2.

[Figure 8-2: points P [x1, x2] and Q [y1, y2] in the plane, with the right-angled triangle formed by the differences |x1 - y1| and |x2 - y2|]

Figure 8-2 Distance in two dimensional space (Source: Adapted from Greenacre, 2009)

Using Pythagoras' theorem for a right-angled triangle, from Figure 8-2 we can write the distance between P and Q as

$$|PQ|^2 = (d_{xy})^2 = (x_1 - y_1)^2 + (x_2 - y_2)^2 \qquad (8-16)$$

or

$$d_{xy} = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2} \qquad (8-17)$$

By applying Equations (8-14) and (8-15), the two dimensional weighted Euclidean distance can be obtained from Equation (8-17) as

$$D_{xy} = \sqrt{(W_1 |x_1 - y_1|)^2 + (W_2 |x_2 - y_2|)^2} \qquad (8-18)$$

Similarly, the Euclidean distance and the weighted Euclidean distance can be obtained for three dimensional problems with P [x1, x2, x3] and Q [y1, y2, y3], as shown in Equations (8-19) and (8-20) respectively.

$$d_{xy} = \sqrt{(x_1 - y_1)^2 + (x_2 - y_2)^2 + (x_3 - y_3)^2} \qquad (8-19)$$

$$D_{xy} = \sqrt{(W_1 |x_1 - y_1|)^2 + (W_2 |x_2 - y_2|)^2 + (W_3 |x_3 - y_3|)^2} \qquad (8-20)$$

The weighted Euclidean distance for vectors P and Q with J dimensions (j = 1, 2, ..., J) can be obtained similarly as

$$D_{xy} = \sqrt{\sum_{j=1}^{J} (W_j |x_j - y_j|)^2} \qquad (8-21)$$

or

$$D_{xy} = \sqrt{\sum_{j=1}^{J} W_j^2 (x_j - y_j)^2} \qquad (8-22)$$

The mathematical derivation of Equation (8-22) shows that the squared weights should be used when calculating the weighted Euclidean distance. The multi-dimensional setting used in the derivation is analogous to MADM problem solving by TOPSIS and modified TOPSIS, where the attributes are treated as dimensions. Comparing Equation (8-22) with the TOPSIS Equation (8-8) and the modified TOPSIS Equation (8-11) proves that the TOPSIS method applies the weights in the correct manner, while the concept of distance weighting introduced in the modified TOPSIS is itself valid and rational. The modified TOPSIS method derives objective weights using the entropy concept (Shannon and Weaver, 1947), based on the information variation in the MADM problem (Deng et al., 2000). These objective weights show the relative significance of the attributes in terms of their impacts on the decision outcomes. The objective weights should be treated differently from the attribute weights provided by the decision maker, and should not be used in the process of solving the MADM problem. The objective weights can certainly indicate to the decision maker the significance of the attributes, so that the decision maker can be careful while solving the problem. On the other hand, although the TOPSIS method weights the normalised performance ratings and does not explicitly apply the distance weighting concept, the mathematical structure of TOPSIS is implicitly the same as that of the weighted Euclidean distance.

8.4 Concluding Remarks

This chapter has provided extensive simulation based and mathematical proof based comparisons between two widely used MADM methods: the TOPSIS method and the modified TOPSIS method. The evaluations have shown the validity of the arguments presented for the modified TOPSIS. It has been proved that the TOPSIS method should be used for MADM problems where both TOPSIS and modified TOPSIS could be applied, as it handles the attribute weights in an appropriate manner. This will help decision makers who are unsure about choosing between these two methods.


Chapter 9 Developments V: Evaluation of Consensus Techniques in Multiattribute Group Decision Making

9.1 Introduction

Multiattribute group decision making (MAGDM) problems are similar to multiattribute decision making (MADM) problems, with the exception that there are multiple decision makers. With multiple decision makers, the challenges in solving such problems are significant. In addition to the challenging issues associated with MADM problems, the major challenge in solving MAGDM problems is to find a compromise solution that will best satisfy all the decision makers as a whole. Decision consensus can be achieved at different stages of problem solving (Fu and Yang, 2007). With ranking outcomes available from each of the decision makers in a group, the conventional consensus technique for achieving final stage consensus is the additive Borda score technique (Hwang and Lin, 1987; Shih et al., 2001; 2004). The additive Borda score technique is very simple and easy to use, but the additive aggregation produces a group ranking outcome which represents the central tendency (average) of the individual ranking outcomes, and not necessarily the one most preferred by the group. This issue highlights the need to explore other aggregation techniques to achieve group consensus.
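For concreteness, a minimal Python sketch of one common form of the additive Borda score technique is given below; the data are hypothetical, and the exact scoring convention may differ across the studies cited.

```python
import numpy as np

def borda_group_ranking(R):
    """Additive Borda score consensus (one common form): each decision
    maker's ranking awards I - r points to the alternative ranked r-th;
    points are summed across decision makers and the group ranking
    follows the totals. R[i, m] is the rank of alternative i by decision
    maker m (1 = best)."""
    I, M = R.shape
    scores = (I - R).sum(axis=1)    # Borda points per alternative
    order = np.argsort(-scores)     # best first; ties not broken specially
    return scores, order + 1

# Hypothetical example: four alternatives ranked by three decision makers
R = np.array([[1, 2, 1],
              [2, 1, 3],
              [3, 4, 2],
              [4, 3, 4]])
scores, order = borda_group_ranking(R)
print(scores, order)   # scores [8 6 3 1], group order [1 2 3 4]
```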


This chapter addresses the Decision Context E outlined in Chapter 3. In the following sections, the existing group consensus techniques are discussed before presenting a novel group consensus technique based on the concept of the Euclidean distance based TOPSIS method (Hwang and Yoon, 1981; Yoon and Hwang, 1995). A new group ranking outcome similarity based approach for selecting the most preferred consensus technique is also developed, followed by a numerical example to better illustrate the new developments.

9.2 Group Consensus Techniques

Multiattribute group decision making (MAGDM) problems with various group decision problem settings can be solved by different approaches, as shown in Figure 9-1. Group consensus can be achieved during any of the three stages of MADM problem solving: (a) the initial stage, (b) the intermediate stage and (c) the final stage (Fu and Yang, 2007).

9.2.1 Consensus during the Initial Stage

In this approach, the individual decision matrices from each of the decision makers are aggregated using an aggregation method such as the average or the geometric mean. This process converts the group decision problem into a single decision maker problem. The individual preferences for attribute weights are also aggregated to generate the group weights. The decision makers then need to agree on a particular MADM method to solve the problem. Although this approach has been used and improved by several researchers (Parkan and Wu, 1998; Chen, 2000; Chu, 2002a; Fu and Yang, 2007), it may leave the decision makers unsatisfied, as they never know the possible ranking outcomes of their individual decision matrices. The agreement to use a particular MADM method is also very difficult to achieve, as some of the decision makers may have strong logical grounds or past experience supporting their preferred MADM method.

[Figure 9-1 The group decision process in the evaluation and selection phases. Source: Adapted from Hwang and Lin (1987). The original flowchart contrasts ordinal approaches (Borda score and assignment techniques) with cardinal approaches (additive weighted value and TOPSIS), for committees with common or individual criteria sets, leading to a recommendation for the top manager.]

9.2.2 Consensus during the Intermediate Stage

This approach starts by solving the individual decision matrices of each decision maker in the group separately and applies an aggregation technique at a later stage to obtain the group ranking outcome (Shih et al., 2007). Despite giving importance to individual preferences, this approach does not show the possible individual ranking outcomes, and the decision makers do not know the extent to which the group ranking outcome reflects their own preferences.

9.2.3 Consensus during the Final Stage

In this approach, the individual decision matrix of each decision maker is solved independently using TOPSIS, and then the additive Borda score (DeBorda, 1781; DeGrazia, 1953; Black, 1958; Arrow, 1963; Fishburn, 1973) is applied to aggregate the individual ranking outcomes into the group outcome (Hwang and Lin, 1987; Shih et al., 2001; 2004). This approach provides the decision makers with both the individual and group ranking outcomes, which can be used to assess the satisfaction of the decision makers with the decision outcomes. The commonly used additive aggregation technique is not necessarily the only way of achieving aggregation, and there is a need for comparing its performance with other aggregation techniques.

9.3 New Consensus Technique Based on TOPSIS

The total Borda score is usually calculated by additive aggregation, which is very simple and effective. However, the additive aggregation always indicates the central tendency of the group, which may not be the solution desired by the group members for a particular problem. In order to rectify this limitation, a new consensus technique is developed based on the concept of the popular TOPSIS method (Hwang and Yoon, 1981; Yoon and Hwang, 1995) presented in Chapter 5. The new consensus technique uses the notion from the TOPSIS method that the best alternative should have the shortest distance from the positive ideal solution and the longest distance from the negative ideal solution. The group ideal ranking outcome (consensus) for a given multiattribute group decision problem can be achieved from the set of ranking outcomes Oq (q = 1, 2, ..., Q) produced by the individual decision makers Dq (q = 1, 2, ..., Q) in the group. The new consensus technique involves the following steps.

Step 1: Obtain the ranking outcomes

For the given decision problem Φ, obtain the ranking outcomes Oq (q = 1, 2, ..., Q) given by each decision maker Dq (q = 1, 2, ..., Q). Each decision maker may apply their individually preferred method to obtain the outcome.


Step 2: Create the rank matrix (Rq)

The rank matrix (Rq), similar to Equation (6-2) in Chapter 6, is obtained by arranging the ranks given to each alternative Ai (i = 1, 2, ..., I) by the decision makers Dq (q = 1, 2, ..., Q), as shown in Equation (9-1).

$R_q = \begin{bmatrix} r_{11} & r_{12} & \cdots & r_{1Q} \\ r_{21} & r_{22} & \cdots & r_{2Q} \\ \vdots & \vdots & \ddots & \vdots \\ r_{I1} & r_{I2} & \cdots & r_{IQ} \end{bmatrix}; \quad i = 1, 2, ..., I; \ q = 1, 2, ..., Q. \qquad (9\text{-}1)$

Step 3: Provide the Borda score

The Borda score ziq (i = 1, 2, ..., I; q = 1, 2, ..., Q) for each alternative Ai (i = 1, 2, ..., I) with respect to decision maker Dq (q = 1, 2, ..., Q) can be obtained using Equation (9-1) as

$z_{iq} = I - r_{iq}; \quad i = 1, 2, ..., I; \ q = 1, 2, ..., Q. \qquad (9\text{-}2)$

The resultant rank score matrix Z can be given as

$Z = \begin{bmatrix} z_{11} & z_{12} & \cdots & z_{1Q} \\ z_{21} & z_{22} & \cdots & z_{2Q} \\ \vdots & \vdots & \ddots & \vdots \\ z_{I1} & z_{I2} & \cdots & z_{IQ} \end{bmatrix}; \quad i = 1, 2, ..., I; \ q = 1, 2, ..., Q. \qquad (9\text{-}3)$

Step 4: Identify the positive and negative ideal rank scores

The positive ideal (Z*) and negative ideal (Z-) rank scores for each decision maker Dq (q = 1, 2, ..., Q) can be identified from Equation (9-3) as

$Z^* = \left( z_1^*, z_2^*, ..., z_Q^* \right) \qquad (9\text{-}4)$

$Z^- = \left( z_1^-, z_2^-, ..., z_Q^- \right) \qquad (9\text{-}5)$

where

$z_q^* = \max_i z_{iq}; \quad z_q^- = \min_i z_{iq}; \quad i = 1, 2, ..., I; \ q = 1, 2, ..., Q.$

Step 5: Calculate the separation measures

The separation measures for each decision alternative Ai (i = 1, 2, ..., I) are calculated using the Q-dimensional Euclidean distance. The separation (distance) of each alternative from the positive ideal score Z* and the negative ideal score Z- can be obtained using Equations (9-3) to (9-5) as

$G_i^* = \sqrt{\sum_{q=1}^{Q} (z_{iq} - z_q^*)^2}; \quad i = 1, 2, ..., I. \qquad (9\text{-}6)$

$G_i^- = \sqrt{\sum_{q=1}^{Q} (z_{iq} - z_q^-)^2}; \quad i = 1, 2, ..., I. \qquad (9\text{-}7)$

Step 6: Obtain the overall rank score

The overall rank score for each decision alternative Ai (i = 1, 2, ..., I) is obtained by applying Equations (9-6) and (9-7) as

$F_i = \dfrac{G_i^-}{G_i^* + G_i^-}; \quad i = 1, 2, ..., I. \qquad (9\text{-}8)$

Step 7: Rank the alternatives

The alternatives are ranked according to the overall rank score in descending order. The ranking outcome obtained is the group ranking outcome for the given problem.
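As a concrete illustration of Steps 2 to 7, a minimal sketch follows. It is ours rather than the thesis's own code; the function name is hypothetical, and np.argsort is used for the final ranking, which breaks tied scores arbitrarily.

```python
import numpy as np

def topsis_consensus(R):
    """Group ranking from an I x Q rank matrix R (Steps 2 to 7).

    R[i, q] is the rank (1 = best) that decision maker q gives to
    alternative i."""
    R = np.asarray(R)
    I, Q = R.shape
    Z = I - R                                        # Borda scores, Equation (9-2)
    z_pos = Z.max(axis=0)                            # positive ideal, Equation (9-4)
    z_neg = Z.min(axis=0)                            # negative ideal, Equation (9-5)
    g_pos = np.sqrt(((Z - z_pos) ** 2).sum(axis=1))  # separation, Equation (9-6)
    g_neg = np.sqrt(((Z - z_neg) ** 2).sum(axis=1))  # separation, Equation (9-7)
    F = g_neg / (g_pos + g_neg)                      # overall rank score, Equation (9-8)
    order = np.argsort(-F)                           # descending score; ties broken arbitrarily
    ranks = np.empty(I, dtype=int)
    ranks[order] = np.arange(1, I + 1)               # Step 7: group ranking, 1 = best
    return F, ranks
```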


9.4 Consensus Technique Evaluation

With the availability of the traditional additive Borda score consensus technique and the new TOPSIS based consensus technique, comparative evaluations must be conducted to find out which one best satisfies all the decision makers as a whole. To calculate the group satisfaction, a new performance measure called the group similarity index (GSI) is introduced. The GSI is based on the ranking outcome similarities between the group outcome and the outcome of each individual decision maker Dq (q = 1, 2, ..., Q). The group outcome obtained by the additive Borda score can be defined as Ob and the group outcome obtained by the new TOPSIS based aggregation technique can be defined as Ot. The ranking outcomes obtained by the decision makers Dq (q = 1, 2, ..., Q) are denoted as Oq (q = 1, 2, ..., Q). The consensus technique selection can be achieved in the following steps.

Step 1: Calculate the rank correlation for each group outcome

The rank correlations between each of the two group outcomes and each outcome Oq (q = 1, 2, ..., Q) produced by each decision maker Dq (q = 1, 2, ..., Q) can be obtained by applying Equations (6-1) and (6-3) from Chapter 6 as

$RC(O_b)_q = \rho(O_b, O_q); \quad q = 1, 2, ..., Q. \qquad (9\text{-}9)$

$RC(O_t)_q = \rho(O_t, O_q); \quad q = 1, 2, ..., Q. \qquad (9\text{-}10)$

where $\rho$ denotes the rank correlation defined in Chapter 6.


Step 2: Calculate the group similarity index

The group similarity index (GSI) for each of the two group outcomes is obtained from Equations (9-9) and (9-10) by taking the average rank correlation as

$GSI(O_b) = \left( \sum_{q=1}^{Q} RC(O_b)_q \right) / Q. \qquad (9\text{-}11)$

$GSI(O_t) = \left( \sum_{q=1}^{Q} RC(O_t)_q \right) / Q. \qquad (9\text{-}12)$

Step 3: Select the consensus technique

The most preferred group consensus technique for the given problem is selected based on the value of GSI calculated in the previous step. The group consensus technique with a higher GSI should be selected for the given multiattribute group decision problem and its corresponding ranking outcome will be the group outcome.
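The GSI calculation itself is a one-line average once the rank correlation is available. The sketch below (ours) uses the tie-free form of Spearman's coefficient; the thesis's exact treatment of tied ranks via Equations (6-1) and (6-3) may differ.

```python
import numpy as np

def spearman_rho(r1, r2):
    # Spearman rank correlation for two tie-free rank vectors:
    # rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    d = np.asarray(r1) - np.asarray(r2)
    n = d.size
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))

def gsi(group_ranks, R):
    # Average rank correlation of a group outcome to the individual
    # outcomes in the I x Q rank matrix R, Equations (9-9) to (9-12).
    return np.mean([spearman_rho(group_ranks, R[:, q])
                    for q in range(R.shape[1])])
```

The consensus technique whose group outcome yields the higher gsi value is then selected for the given problem.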

9.5 Numerical Example

In this example, a group decision problem with eight alternatives (A1, A2, ..., A8) is considered. The number of decision makers in the group is six (D1, D2, ..., D6). The individual ranking outcomes from each decision maker are combined by applying Equation (9-1) to generate the rank matrix Rq, as shown in Table 9-1. Table 9-2 shows the rank score matrix Z generated by applying Equations (9-2) and (9-3) to Table 9-1.


Table 9-1 Rank matrix generated by combining individual ranking outcomes

Alternative    D1   D2   D3   D4   D5   D6
A1              1    6    8    4    5    2
A2              2    5    6    3    7    4
A3              7    4    1    7    4    3
A4              5    8    7    5    6    1
A5              6    2    3    1    1    5
A6              8    7    4    6    3    7
A7              3    3    5    2    2    8
A8              4    1    2    8    8    6

By applying Equations (9-4) to (9-8) to Table 9-2, the overall rank score Fi (i = 1, 2, ..., I) is calculated for each alternative Ai (i = 1, 2, ..., I). The alternatives are then ranked based on the overall rank score to obtain the group ranking outcome. Table 9-3 shows the rank scores and group rankings obtained by the new TOPSIS based consensus technique and the conventional additive Borda score technique. In order to select the most preferred consensus technique for this MAGDM problem, the GSI is calculated for the outcomes produced by the additive Borda score technique and the TOPSIS based technique using Equations (9-9) to (9-12), as shown in Table 9-4. The result shows that the new TOPSIS based consensus technique is more appropriate for the given MAGDM problem.
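For readers who wish to reproduce Table 9-3, the topsis_consensus sketch from Section 9.3 can be applied directly to the Table 9-1 data (again, illustrative code, not the thesis's own):

```python
import numpy as np

R = np.array([[1, 6, 8, 4, 5, 2],   # Table 9-1, rows A1..A8, columns D1..D6
              [2, 5, 6, 3, 7, 4],
              [7, 4, 1, 7, 4, 3],
              [5, 8, 7, 5, 6, 1],
              [6, 2, 3, 1, 1, 5],
              [8, 7, 4, 6, 3, 7],
              [3, 3, 5, 2, 2, 8],
              [4, 1, 2, 8, 8, 6]])

F, group_rank = topsis_consensus(R)
print(np.round(F, 5))        # 0.51637 0.5 0.51735 ... as in Table 9-3
print(group_rank)            # 4 5 3 7 1 8 2 6
print((8 - R).sum(axis=1))   # Borda scores 22 21 22 16 30 13 25 19
```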


Table 9-2 The rank score matrix

Alternative    D1   D2   D3   D4   D5   D6
A1              7    2    0    4    3    6
A2              6    3    2    5    1    4
A3              1    4    7    1    4    5
A4              3    0    1    3    2    7
A5              2    6    5    7    7    3
A6              0    1    4    2    5    1
A7              5    5    3    6    6    0
A8              4    7    6    0    0    2

Table 9-3 The overall rank score and group ranking outcomes

               TOPSIS consensus technique     Additive Borda score technique
Alternative    Rank score     Group rank      Rank score     Group rank
A1             0.51637        4               22             3
A2             0.5            5               21             5
A3             0.51735        3               22             3
A4             0.41591        7               16             7
A5             0.65913        1               30             1
A6             0.3522         8               13             8
A7             0.56927        2               25             2
A8             0.47049        6               19             6


Table 9-4 Rank similarity index for group outcomes

                      Rank correlation (RCq)
Consensus technique   O1      O2      O3      O4      O5      O6       GSI
Borda (Ob)            0.268   0.547   0.128   0.617   0.547   -0.058   0.3416
TOPSIS (Ot)           0.190   0.595   0.214   0.619   0.571   -0.119   0.3452

9.6 A Simulation on Ties in Ranking Outcomes

Ranking outcomes with a tie between two or more alternatives are a common phenomenon. Ties are sometimes a difficult issue in practical problem solving where a limited number of alternatives are to be selected based on the ranking outcome. A simulation based experiment is conducted for both the additive Borda consensus technique and the TOPSIS based consensus technique to identify which one better handles the tied rank problem while producing the group ranking outcome. The simulation is conducted by varying the rank matrix given in Table 9-1, which is then solved using both the additive Borda score and the TOPSIS based techniques. The number of times each technique produces a tied ranking outcome is then recorded to find the ratio of tied ranks. The variation in the rank matrix is achieved by varying the importance of each decision maker, considering that initially they have equal importance. The simulation result shows that the additive Borda consensus technique produces around 20% more tied ranking outcomes than the new TOPSIS based consensus technique.
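Since the exact perturbation design is not spelled out here, the sketch below is only one plausible reading of the experiment: decision-maker importance is drawn as random integer weights, the weighted rank scores are aggregated by both techniques, and trials with tied group scores are counted. The weight range and trial count are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def has_tie(scores, tol=1e-9):
    s = np.sort(scores)
    return bool(np.any(np.diff(s) < tol))

def tie_ratios(R, trials=10_000):
    # Fraction of trials in which each aggregation yields tied group
    # scores. Drawing decision-maker importance from {1, 2, 3} is our
    # assumption; the thesis does not spell out its perturbation design.
    R = np.asarray(R)
    I, Q = R.shape
    borda_ties = topsis_ties = 0
    for _ in range(trials):
        w = rng.integers(1, 4, size=Q)          # random DM importance weights
        Zw = (I - R) * w                        # weighted rank scores
        if has_tie(Zw.sum(axis=1)):             # additive Borda aggregation
            borda_ties += 1
        g_pos = np.sqrt(((Zw - Zw.max(axis=0)) ** 2).sum(axis=1))
        g_neg = np.sqrt(((Zw - Zw.min(axis=0)) ** 2).sum(axis=1))
        if has_tie(g_neg / (g_pos + g_neg)):    # TOPSIS based aggregation
            topsis_ties += 1
    return borda_ties / trials, topsis_ties / trials
```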


9.7 Concluding Remarks

The TOPSIS based consensus technique presented in this chapter provides a much needed alternative consensus technique. The new technique provides the decision makers with the opportunity to justify their consensus technique selection. The rank similarity based consensus technique selection approach provides an objective way to maximise the overall group satisfaction. Simulation based experiment results highlight the superiority of the new TOPSIS based consensus technique in producing a non-tied group ranking outcome, which is a significant issue in various practical decision problem settings.


Chapter 10 Developments VI: Comparison Based Group Ranking Outcome for Multiattribute Group Decisions

10.1 Introduction

The objective of solving a multiattribute group decision making (MAGDM) problem is to obtain a group decision outcome that best satisfies all the decision makers as a whole. In order to achieve the group decision outcome, the decision makers use various compensatory techniques to reach the group compromise outcome (Hwang and Lin, 1987). The group compromise can be achieved at different stages of solving an MAGDM problem (Fu and Yang, 2007). The group ranking outcome is usually calculated by using the decision matrices provided by each decision maker in the group (Parkan and Wu, 1998; Chen, 2000; Chu, 2002a; Fu and Yang, 2007; Shih et al., 2007) or by aggregating the individual ranking outcomes given by each of the decision makers (Hwang and Lin, 1987), as discussed in Chapter 9. The existing group decision making methods use a set of ranking outcomes to achieve the group ranking outcome, limited by the number of decision makers or by the method used. This limitation in the solution space may lead to a situation where the ranking outcome most preferred by the group as a whole may not be found at all. This practical and significant issue highlights the need to develop a method capable of finding the most preferred group ranking outcome by considering the whole solution space consisting of all the possible valid ranking outcomes for the given MAGDM problem.

In this chapter, a new group decision method is developed. The new method is based on the ranking outcome similarity and is capable of finding the most preferred group ranking outcome from all the possible ranking outcomes for a given MAGDM problem. The new method addresses the Decision Context F outlined in Chapter 3.

10.2 Methodology Development

10.2.1 Finding the Most Preferred Group Ranking Outcome

The new group decision method is based on the notion that the most preferred outcome for an MAGDM problem, if it exists, must be found by searching the whole solution space comprising all the possible ranking outcomes. To this end, a search technique based on the ranking outcome similarity is developed. With alternatives Ai (i = 1, 2, ..., I), the number of possible ranking outcomes is I!. As such, the solution space containing all the possible ranking outcomes can be defined as β = {Os} (s = 1, 2, ..., S; S = I!), in which the best outcome can be found.

In a group decision setting, there are multiple decision makers Dq (q = 1, 2, ..., Q) with individual ranking outcomes Oq (q = 1, 2, ..., Q). The individual ranking outcomes can be obtained by using an MADM method or based on the decision maker's own preference. The most preferred outcome for the group will be the one closest to all the individual ranking outcomes. The closeness in ranking outcome is calculated based on the outcome similarity index developed in the following section.

10.2.2 The Outcome Similarity Index

The outcome similarity index (OSI) is based on Spearman's rank correlation coefficient (Spearman, 1904), as shown in Equations (6-1) and (6-3) in Chapter 6. The OSI is a measure of the similarity of a ranking outcome Os (s = 1, 2, ..., S; S = I!) in the solution space β to each individual ranking outcome Oq (q = 1, 2, ..., Q) given by the decision makers Dq (q = 1, 2, ..., Q). A higher value of OSI indicates a better overall similarity. The OSI can be obtained using the following steps.

Step 1: Obtain the individual ranking outcomes

The individual ranking outcomes Oq (q = 1, 2, ..., Q) are obtained from each of the decision makers Dq (q = 1, 2, ..., Q). Each decision maker is free to apply a preferred MADM method to obtain the ranking outcome.

Step 2: Generate the solution space

The solution space β is generated by obtaining all the possible ranking outcomes Os (s = 1, 2, ..., S; S = I!) for the set of alternatives Ai (i = 1, 2, ..., I) to be evaluated and ranked.

Step 3: Calculate the rank correlations

The rank correlations between each ranking outcome Os (s = 1, 2, ..., S; S = I!) in the solution space β and each of the individual ranking outcomes Oq (q = 1, 2, ..., Q) given by the decision makers Dq (q = 1, 2, ..., Q) are calculated by applying Equations (6-1) and (6-3) from Chapter 6 as

$RC_{sq} = \rho(O_s, O_q); \quad s = 1, 2, ..., S; \ S = I!; \ q = 1, 2, ..., Q. \qquad (10\text{-}1)$

Step 4: Calculate the outcome similarity index

The outcome similarity index (OSIs) for each ranking outcome Os (s = 1, 2, ..., S; S = I!) in the solution space β is calculated by taking the average of the RCsq (q = 1, 2, ..., Q) calculated in Equation (10-1) as

$OSI_s = \left( \sum_{q=1}^{Q} RC_{sq} \right) / Q; \quad s = 1, 2, ..., S; \ S = I! \qquad (10\text{-}2)$

Step 5: Find the highest outcome similarity index

The highest outcome similarity index OSIs+ can be obtained as

$OSI_s^+ = \max_s \{ OSI_s \}; \quad s = 1, 2, ..., S; \ S = I! \qquad (10\text{-}3)$

The ranking outcome corresponding to OSIs+ is the closest to all the individual ranking outcomes Oq (q = 1, 2, ..., Q) given by the decision makers Dq (q = 1, 2, ..., Q), and is thus the one most preferred by all the decision makers as a whole.
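A brute-force implementation of Steps 1 to 5 is straightforward (a sketch of ours, assuming tie-free rank vectors). Note that the solution space grows as I!, so this exhaustive form is practical only for small numbers of alternatives.

```python
import itertools
import numpy as np

def spearman_rho(r1, r2):
    # Spearman rank correlation for two tie-free rank vectors.
    d = np.asarray(r1) - np.asarray(r2)
    n = d.size
    return 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))

def most_preferred_outcome(individual_ranks):
    # individual_ranks: Q x I array; row q holds the ranks decision maker
    # Dq gives to alternatives A1..AI. Scans all I! rank vectors (the
    # solution space) and returns the one with the highest OSI,
    # Equations (10-1) to (10-3).
    individual_ranks = np.asarray(individual_ranks)
    I = individual_ranks.shape[1]
    best_osi, best = -np.inf, None
    for perm in itertools.permutations(range(1, I + 1)):
        r = np.array(perm)
        osi = np.mean([spearman_rho(r, rq) for rq in individual_ranks])
        if osi > best_osi:
            best_osi, best = osi, r
    return best, best_osi

# With the Table 10-1 data this returns the rank vector (1, 3, 2, 4),
# i.e. A1>A3>A2>A4, with OSI = 0.667, matching Section 10.3.
print(most_preferred_outcome([[1, 2, 3, 4],
                              [1, 4, 2, 3],
                              [3, 2, 1, 4]]))
```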

10.3 Numerical Example

To illustrate the new method, consider a multiattribute group decision making (MAGDM) problem where four alternatives (A1, A2, A3 and A4) are to be ranked by a group of three decision makers (D1, D2 and D3). Table 10-1 shows the individual ranking outcomes given by each decision maker using Step 1. The solution space β is obtained by Step 2 and is shown in Table 10-2. With four alternatives to be ranked, the solution space will contain 4! = 24 possible ranking outcomes.

Table 10-1 Individual ranking outcomes for each decision maker

Decision maker    A1   A2   A3   A4
D1                 1    2    3    4
D2                 1    4    2    3
D3                 3    2    1    4

Note that Table 10-2 contains the ranking outcomes given by the three decision makers in Table 10-1, marked as follows: (*) the ranking outcome given by decision maker D1; (**) the ranking outcome given by decision maker D2; (***) the ranking outcome given by decision maker D3.

Table 10-3 shows the OSIs (s = 1, 2, ..., 24) for each outcome Os (s = 1, 2, ..., 24) in the solution space β, obtained by applying Equations (10-1) and (10-2) to Tables 10-1 and 10-2. Using Equation (10-3) and Table 10-3, the highest outcome similarity index is OSIs+ = 0.667, which corresponds to the outcome O3 (A1>A3>A2>A4). Hence, the ranking A1>A3>A2>A4 is the most preferred outcome for the given MAGDM problem.

Table 10-2 Solution space with all the possible ranking outcomes

Ranking outcome    A1   A2   A3   A4
O1*                 1    2    3    4
O2                  1    2    4    3
O3                  1    3    2    4
O4                  1    3    4    2
O5**                1    4    2    3
O6                  1    4    3    2
O7                  2    1    3    4
O8                  2    1    4    3
O9                  2    3    1    4
O10                 2    3    4    1
O11                 2    4    1    3
O12                 2    4    3    1
O13                 3    1    2    4
O14                 3    1    4    2
O15***              3    2    1    4
O16                 3    2    4    1
O17                 3    4    1    2
O18                 3    4    2    1
O19                 4    1    2    3
O20                 4    1    3    2
O21                 4    2    1    3
O22                 4    2    3    1
O23                 4    3    1    2
O24                 4    3    2    1


Table 10-3 OSI for each possible outcome

Ranking outcome           OSI
O1     A1>A2>A3>A4        0.533
O2     A1>A2>A4>A3        0.2
O3     A1>A3>A2>A4        0.667
O4     A1>A4>A2>A3        0
O5     A1>A3>A4>A2        0.467
O6     A1>A4>A3>A2        0.133
O7     A2>A1>A3>A4        0.333
O8     A2>A1>A4>A3        0
O9     A3>A1>A2>A4        0.6
O10    A4>A1>A2>A3        -0.4
O11    A3>A1>A4>A2        0.4
O12    A4>A1>A3>A2        -0.267
O13    A2>A3>A1>A4        0.267
O14    A2>A4>A1>A3        -0.4
O15    A3>A2>A1>A4        0.4
O16    A4>A2>A1>A3        -0.6
O17    A3>A4>A1>A2        0
O18    A4>A3>A1>A2        -0.333
O19    A2>A3>A4>A1        -0.133
O20    A2>A4>A3>A1        -0.467
O21    A3>A2>A4>A1        0
O22    A4>A2>A3>A1        -0.667
O23    A3>A4>A2>A1        -0.2
O24    A4>A3>A2>A1        -0.533


10.4 Concluding Remarks

The novel group decision method developed in this chapter for finding the most preferred ranking outcome removes the solution space limitations of conventional group decision making methods. The new method uses the whole solution space, rather than a partial one, to find the most preferred outcome which best satisfies all the decision makers as a whole. The new outcome similarity index measures the overall similarity of every possible group ranking outcome to the individual ranking outcomes made by each of the decision makers. The new method with the outcome similarity index provides a simple, yet efficient way to find the group ranking outcome. It shows a new approach to achieving group consensus by considering all the individual ranking outcomes produced by all the decision makers.


Chapter 11 Conclusions

11.1 Research Developments Summary

The new method evaluation approaches presented in this study are motivated by the need for objectively comparing and selecting multiattribute decision making (MADM) methods for a given problem under certain decision settings and decision contexts. A major research issue has been addressed where the most preferred MADM method is to be selected from a set of suitable and acceptable MADM methods for a given decision problem. Six new developments for method evaluation and selection have been achieved to help the decision maker(s) select the most preferred method for a given problem. A key characteristic of these developments is that all of them use the ranking outcomes produced by the MADM methods for the purpose of evaluations and comparisons. The six new developments address method evaluations in three areas of MADM research: (a) generalised method selection, (b) single decision maker problems, and (c) group decision problems. The developments are summarised below.

11.1.1 Developments I: A Simulation Model and Applications

The new simulation model developed and its application shown in Chapters 4 and 5 have addressed the Decision Context A where the decision maker requires a method selection guideline. Developments I has the following advantages:


(a) The model provides general guidelines for method selection under different decision settings, including the number of attributes and the number of alternatives that the decision problem contains and the diversity in the decision information.

(b) The model is capable of identifying the level of sensitivity that a particular method shows towards changes in attribute weights.

(c) The simple model can be implemented easily to develop computer based systems to compare any number of MADM methods that can produce a complete ranking of the decision alternatives.

(d) The application of the model justifies the use of particular normalisation procedures with the SAW and TOPSIS methods.

11.1.2 Developments II: Rank Similarity Based Approach

In Chapter 6, a new rank similarity based method evaluation and selection approach has been developed, which has addressed the Decision Context B where the most preferred method is to be selected from a set of suitable methods for a given decision problem. Developments II has the following advantages:

(a) The approach can perform a problem specific comparison of MADM methods.

(b) The approach is capable of selecting the most preferred method from a set of suitable methods for a given decision problem.

(c) The approach applies a simple and rational objective measure to justify and validate the evaluation and comparison of MADM methods.

(d) The objective measure uses the concept of outcome closeness and provides clarity in the evaluation process.

(e) The approach is particularly applicable for evaluating MADM methods used to solve single decision maker problems.

11.1.3 Developments III: Alternatives-Oriented Approach

The alternatives-oriented approach developed in Chapter 7 has addressed the Decision Context C where the decision alternatives are key stakeholders. In many practical problems, the alternatives are key stakeholders and are the ones most affected by the decision outcome. The advantages of Developments III include the following:

(a) The approach provides a whole new perspective to method selection.

(b) The approach gives due consideration to the decision alternatives in the process of method selection when they are key stakeholders.

(c) The alternatives-oriented approach uses a new objective measure to compare MADM methods by considering the preferences of the decision alternatives.

11.1.4 Developments IV: TOPSIS and Modified TOPSIS Comparison

The comparative studies presented in Chapter 8 have addressed the challenges of Decision Context D where the decision maker needs to select between the TOPSIS and the modified TOPSIS methods. Developments IV has the following advantages:

(a) The comparisons provide enough experimental results for the decision maker to decide the appropriate way of using these methods.

(b) Mathematical proofs provide justification and validate the experimental results.

11.1.5 Developments V: Group Consensus Technique

The new group consensus technique and the consensus technique comparison approach developed in Chapter 9 have addressed the Decision Context E where the consensus among the decision makers needs to be achieved based on individual ranking outcomes. Developments V has the following advantages:

(a) The new TOPSIS based group consensus technique may be a rational alternative to the conventional Borda score based technique.

(b) The new technique is able to identify differences between the performances of the decision alternatives in finer detail, thus reducing ties in ranking outcomes.

(c) The new comparison approach compares the different consensus techniques to find the most preferred one for a given decision problem.


11.1.6 Developments VI: Comparison Based Group Decision Method

The new multiattribute group decision making (MAGDM) method developed in Chapter 10 has addressed the Decision Context F where the group outcome needs to be found from the set of all possible solutions. Developments VI has the following advantages:

(a) The new method provides a new perspective for solving group decision problems.

(b) It eliminates the solution space limitations of currently used methods.

(c) The method uses the individual outcomes to find the group outcome, which provides clarity in the process, thus allowing the decision makers to validate the outcome.

(d) The new objective measure developed is capable of measuring the level of group satisfaction in terms of relative closeness to the group solution.

11.2 Application of the Developments

Figure 11-1 shows how the new research developments discussed in the previous section may be used in a computer based decision support system for method selection and problem solving. The system requires (a) a given decision problem to be solved, (b) a set of suitable MADM methods under consideration, and (c) any specific requirements or preferences related to method selection.


[Figure 11-1 A computer based decision support system for method selection. The flowchart takes the decision maker(s)' selection preferences, the decision problem and the set of suitable methods, identifies the decision context, selects the context specific evaluation approach from Developments I-VI, and applies it to produce the preferred method and the decision outcome.]


An automated module uses the decision problem settings and the preferences of the decision maker to identify the context of the decision problem. The context information is then used to select the context specific evaluation approach from the set of approaches developed in this study. The selected method evaluation approach is then applied to evaluate the set of given methods for the given problem. This process will identify the most suitable method for the given problem under the chosen decision context along with the ranking outcome for the given problem.
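The context-to-approach dispatch at the heart of such a system amounts to a simple lookup. The sketch below is purely illustrative: the context labels follow Chapter 3 and the development numbers follow Section 11.1, but the structure and names are ours.

```python
# Map each decision context (Chapter 3) to the evaluation approach
# developed for it in this study (Section 11.1).
APPROACH_BY_CONTEXT = {
    "A": "Developments I: simulation based method selection guidelines",
    "B": "Developments II: rank similarity based method selection",
    "C": "Developments III: alternatives-oriented method selection",
    "D": "Developments IV: TOPSIS vs. modified TOPSIS comparison",
    "E": "Developments V: group consensus technique evaluation",
    "F": "Developments VI: comparison based group ranking",
}

def select_evaluation_approach(context: str) -> str:
    # The identified decision context drives the choice of approach.
    return APPROACH_BY_CONTEXT[context]
```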

11.3 Research Contributions

The study makes significant contributions to the theoretical and practical areas of multiattribute decision making and method evaluation. These contributions include the following:

(a) The study introduces the idea of decision context specific method selection. The decision context includes various decision settings and evaluation preferences of the decision maker. The study proposes that the decision context for method evaluation and selection needs to be identified and then a context specific evaluation approach is to be applied.

(b) The new developments use the ranking outcomes for the purpose of evaluation and comparisons. Often the decision makers are more concerned about the decision outcome, so outcome based approaches are more rationally aligned to the decision makers' interests. The study thus provides a more acceptable way of method evaluation based on the decision outcomes.

(c) The new simulation model not only provides a set of general method selection guidelines but also provides a general framework which can be easily adapted to develop new method selection experiments with any set of MADM methods. The model is efficient and can be useful in future simulation based experiments and the development of method specific selection guidelines.

(d) The simulation based comparison of normalisation procedures provides useful results regarding their suitability with SAW and TOPSIS under various decision settings. These results can be used as guidelines for their future applications.

(e) The simulation experiments highlight a significant change necessary in the existing method evaluation processes. Existing method evaluations compare a method with others based on a performance measure without any further study of the internal processes of a method (such as normalisation, aggregation and group consensus). The simulation experiments have shown that the internal processes of an MADM method have significant impacts on the decision outcome, and for a particular method there may be multiple suitable internal processes available. The study suggests that a method with different internal processes should be treated as a new method, and such variants should be evaluated for their suitability for a given problem, instead of just evaluating the originally proposed method.

(f) The rank similarity based method selection is a new way of method evaluation based on the decision outcomes for a given problem. This approach is simple and efficient, thus paving the way for further developments based on the outcome similarity.

(g) A whole new perspective to method selection is discovered through the development of the alternatives-oriented approach. The previously ignored importance of the decision alternatives as stakeholders is duly addressed in this new approach. This gives method evaluation research a new dimension, "alternatives oriented", alongside the existing "decision maker oriented" and "method oriented" studies.

(h) The new group consensus technique developed enhances group decision making research by providing a rational alternative to the widely used Borda score based technique. The new comparison approach will help decision makers choose the most appropriate consensus technique through an objective comparison for a given problem.

(i) The new comparison based group decision method is a new addition to the existing group decision methods in group decision analysis. The new method provides a unique way to search for the outcome most acceptable to all the decision makers as a whole from the set of all possible outcomes. The new method handles the group consensus issue implicitly while obtaining the group outcome.


11.4 Future Research

The evaluation, comparison and selection of MADM methods for a specific decision problem remain major challenges in MADM research as well as for the decision makers. Significant studies need to be done in this area to help the decision makers choose the most preferred method under certain decision settings and contexts. This study is a small stride in that direction. The following areas could be further explored based on the research developments in this study:

(a) The decision context specific approaches need to be extended to evaluating fuzzy MADM methods. Many practical MADM problems are fuzzy in nature, and fuzzy MADM methods are widely used to solve them. The development of comparison approaches for evaluating fuzzy MADM methods will certainly help decision makers make rational method selections under a fuzzy decision environment.

(b) Extensive comparative studies are needed in the area of group decision making. The method comparison approaches and the simulation model may be extended to address the needs in this area.

(c) Extensions of the developments are needed to address the issue of evaluating and selecting MADM methods which do not produce a complete ranking of the decision alternatives.

(d) Simulation is used to experiment with diverse problem scenarios for demonstrating the general application of the new evaluation models developed in this study. The research scope of this study and time limitations have prevented the use of an empirical study for the evaluation models. The applications of the new evaluation models to real empirical studies are part of my future research.


References

Ando A (1979). On the Contributions of Herbert A. Simon to Economics. The Scandinavian Journal of Economics 81 (1): 83-93.

Arrow KJ (1963). Social Choice and Individual Values (2nd Ed). Yale University Press: New Haven. Baas SM and Kwakernaak H (1977). Rating and Ranking of Multiple-Aspect Alternatives using Fuzzy Sets. Automatica 13 (1): 47-58. Bellman RE and Zadeh LA (1970). Decision-Making in a Fuzzy Environment. Management Science Series B-Application 17 (4): B141-B164.

Belton V (1986). A Comparison of the Analytic Hierarchy Process and a Simple Multi-Attribute Value Function. European Journal of Operational Research 26 (1): 7-21. Belton V and Stewart TJ (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers: Boston/ Dordrecht/ London.

Benayoun R, Roy B and Sussman N (1966). Manual de Reference du Programme Electre. Note de Synthese et Formation 25. Direction Scientifique SEMA, Paris. Bergstresser K, Charnes A and Yu PL (1976). Generalization of Domination Structures and Nondominated Solutions in Multicriteria Decision Making. Journal of Optimization Theory and Applications 18 (1): 3-13.

Bernardo JJ (1977). An Assignment Approach to Choosing R & D Experiments. Decision Sciences 8 (2): 489-501.


Bettman JR (1971). A Graph Theory Approach to Comparing Consumer Information Processing Models. Management Science 18 (4 II): 114-128. Bettman JR (1974). A Threshold Model of Attribute Satisfaction Decisions. Journal of Consumer Research 1 (3): 30-35.

Black D (1958). The Theory of Committees and Elections. Cambridge University Press: Cambridge. Bonissone PP (1980). A Fuzzy Set based Linguistic Approach: Theory and Applications. In: Proceedings of the 1980 Winter Simulation Conference: 99-111. Bonissone PP (1982). A Fuzzy Sets based Linguistic Approach: Theory and Applications. In: Gupta and Sanchez (Eds.), Approximate Reasoning in Decision Analysis. North-Holland.

Brans JP, Mareshal B and Vincke P (1984). Promethee: A New Family of Outranking Methods in Multicriteria Analysis. In: Brans (Eds.), Operational Research ’84. North-Holland.

Bridgman PW (1922). Dimensional Analysis. Yale University Press: New Haven, CT. Buchanan JT (1994). An Experimental Evaluation of Interactive MCDM Methods and the Decision Making Process. Journal of the Operational Research Society 45 (9): 1156-1168.

Chakraborty S and Yeh C-H (2007a). Consistency Comparison of Normalization Procedures in Multiattribute Decision Making. WSEAS Transactions on Systems and Control 2 (2): 193-200.


Chakraborty S and Yeh C-H (2007b) Comparing Normalization Procedures in Multiattribute Decision Making under Various Problem Settings. In: Proceedings of the Fifth International Conference on Information Technology in Asia (CITA’07): 36-42.

Chakraborty S and Yeh C-H (2009). A Simulation Comparison of Normalization Procedures for TOPSIS. In: Proceedings of the International Conference on Computers and Industrial Engineering (CIE39): 1815-1820.

Charnes A, Cooper WW and Kozmetsky G (1973). Measuring, Monitoring and Modelling Quality of Life. Management Science 19 (10): 1172-1188. Chen CB and Wei CC (1997). An Approach for Solving Fuzzy MADM Problems. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 5 (4): 459-480.

Chen CT (2000). Extensions of the TOPSIS for Group Decision-making under Fuzzy Environment. Fuzzy Sets and Systems 114 (1): 1-9. Chen SJ and Hwang CL (1992). Fuzzy Multiple Attribute Decision Making: Methods and Applications. Springer-Verlag: Berlin, Heidelberg, New York.

Cheng YM and McInnis B (1980). An Algorithm for Multiple Attribute, Multiple Alternative Decision Problem based on Fuzzy Sets with Application to Medical Diagnosis. IEEE Transactions on System, Man, and Cybernetics 10 (10): 645650. Cho K (2003). Multicriteria Decision Methods: An Attempt to Evaluate and Unify. Mathematical and Computer Modelling 37 (9-10): 1099-1119.


Chu TC (2002a). Facility Location Selection using Fuzzy TOPSIS under Group Decision. International Journal of Uncertainty, Fuzziness and KnowledgeBased Systems 10 (6): 687-701.

Chu TC (2002b). Selecting Plant Location via a Fuzzy TOPSIS Approach. International Journal of Advanced Manufacturing Technology 20 (11): 859-

864. Churchman CW and Ackoff RL (1954). An Approximate Measure of Value. Journal of the Operations Research Society of America 2 (2): 172-187.

Cook WD (2006). Distance-based and Ad Hoc Consensus Models in Ordinal Preference Ranking. European Journal of Operational Research 172 (2): 369385. Cook WD and Seiford LM (1978). Priority Ranking and Consensus Formation. Management Science 24 (16): 1721-1732.

Cook WD and Seiford LM (1982). R & D Project Selection in a Multidimensional Environment: A Practical Approach. Journal of the Operational Research Society 33 (5): 397-405.

Currim IS and Sarin RK (1984). A Comparative Evaluation of Multiattribute Consumer Preference Models. Management Science 30 (5): 543-561. Dawes RM (1964). Social Selection based on Multidimensional Criteria. Journal of Abnormal and Social Psychology 68 (1): 104-109.

De PK, Acharya D and Sahu KC (1982). A Chance-constrained Goal Programming Model for Capital Budgeting. Journal of the Operational Research Society 33 (7): 635-638. DeBorda J-C (1781). Memoires de l’Academie Royale des Sciences: 657-665.


DeGrazia A (1953). Mathematical Derivation of an Election System. Isis 44 (1/ 2): 42-51. Deng H (1998). Developments in Fuzzy Multicriteria Analysis and Their Applications to Decision Problems. Ph.D. Thesis, Faculty of Information

Technology, Monash University. Deng H, Yeh C-H and Willis RJ (2000). Inter-company Comparison using Modified TOPSIS with Objective Weights. Computers & Operations Research 27 (10): 963-973. Deng H and Yeh C-H (2006). Simulation-based Evaluation of Defuzzification-based Approaches to Fuzzy Multiattribute Decision Making. IEEE Transaction on Systems, Man, and Cybernetics, Part A- Systems and Humans 36 (5): 968-977.

Dubois D and Prade H (1982). The Use of Fuzzy Numbers in Decision Analysis. In: Gupta and Sanchez (Eds.), Fuzzy Information and Decision Processes. NorthHolland. Dubois D, Prade H and Testemale C (1988). Weighted Fuzzy Pattern Matching. Fuzzy Sets and Systems 28 (3): 313-331.

Dyer JS and Miles RF Jr. (1976). An Actual Application of Collective Choice Theory to the Selection of Trajectories for the Mariner Jupiter-Saturn 1977 Project. Operations Research 24 (2): 220-244. Eckenrode RT (1965). Weighting Multiple Criteria. Management Science 12 (3): 180-192. Encarnacion J Jr. (1964). A note on Lexicographical Preferences. Econometrica 32 (1-2): 215-217.


Eom HB, Lee SM, Snyder CA and Ford FN (1987-88). A Multiple Criteria Decision Support System for Global Financial Planning. Journal of Management Information Systems 4 (3): 94-113.

Figueira J, Mousseau V and Roy B (2005). ELECTRE Methods. In: Figueira, Greco and Ehrgott (Eds.), Multiple Criteria Decision Analysis: State of the Art Surveys. Springer: 133-153.

Fine B and Fine K (1974a). Social Choice and Individual Ranking I. Review of Economic Studies 41 (3): 303-322.

Fine B and Fine K (1974b). Social Choice and Individual Ranking II. Review of Economic Studies 41 (4): 459-475.

Fishburn PC (1966). A Note on Recent Developments in Additive Utility Theories for Multiple-Factor Situations. Operations Research 14 (6): 1143-1148. Fishburn PC (1967). Additive Utilities with Incomplete Product Set: Applications to Priorities and Assignments. Operations Research 15 (3): 537-542. Fishburn PC (1973). The Theory of Social Choice. Princeton University Press: Princeton, New Jersey. Fishburn PC (1974). Lexicographic Orders, Utilities and Decision Rules: A Survey. Management Science 20 (11): 1442-1471.

Fishburn PC (1977). Condorcet Social Choice Functions. SIAM Journal on Applied Mathematics 33 (3): 469-489.

Foerster JF (1979). Mode Choice Decision Process Models: A Comparison of Compensatory and Non-Compensatory Structures. Transportation Research 13A (1): 17-28.


Fu C and Yang SL (2007). Solutions to Belief Group Decision Making Using Extended TOPSIS. In: The proceedings of the 2007 International Conference on management Science & Engineering: 458-463.

Gardenfors P (1973). Positionalist Voting Functions. Theory and Decision 4 (1): 1-24. Gemunden HG and Hauschildt J (1985). Number of Alternatives and Efficiency in Different Types of Top Management Decisions. European Journal of Operational Research 22 (2): 178-190.

Guitouni A and Martel J-M (1998). Tentative Guidelines to Help Choosing an Appropriate MCDA Method. European Journal of Operational Research 109 (2): 501-521. Greenacre M (2009). Chapter 4: Measures of Distance Between Samples: Euclidean. http://www.econ.upf.edu/~michael/stanford/maeb4.pdf. (Accessed on 11/11/2009)

Hadar J and Russel WR (1974). Decision Making with Stochastic Dominance: An Expository Review. Omega 2 (3): 365-377. Hanandeh AE and El-Zein A (2009). Strategies for the Municipal Waste Management System to Take Advantage of Carbon Trading under Competing Policies: The Role of Energy from Waste in Sydney. Waste Management 29 (7): 2188-2194. Hobbs BF (1980). A Comparison of Weighting Methods in Power Plant Siting. Decision Sciences 11 (4): 725-737. 150

References

Hobbs BF (1986). What Can We Learn From Experiments in Multiobjective Decision-Analysis. IEEE Transactions on Systems Man and Cybernetics16 (3): 384-394. Hobbs BF, Chankong V, Hamadeh W and Stakhiv E (1992). Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning. Water Resources Research 28 (7): 1767-1779.

Hwang CL and Lin MJ (1987). Group Decision Making under Multiple Criteria: Methods and Applications. Springer-Verlag: Berlin.

Hwang CL and Yoon K (1981). Multiple Attribute Decision Making: Methods and Applications: A State of the Art Survey. Springer-Verlag: Berlin, Heidelberg, New York. Jacquet-Lagreze E and Siskos J (1982). Assessing a Set of Additive Utility-functions for Multicriteria Decision-Making, The UTA Method. European Journal of Operational Research 10 (2): 151-164.

Jessop A (2009). A Portfolio Model for Performance Assessment: The Financial Times MBA Ranking. Journal of the Operational Research Society doi:10.1057/jors.2008.193. Kendall M (1955). Rank Correlation Methods (3rd edition). Hafner Publishing Company: New York. Keeney RL and Kirkwood CW (1975). Group Decision Making Using Cardinal Social Welfare Functions. Management Science 22 (4): 430-437. Keeney RL and Raiffa H (1976). Decision with Multiple Objectives, Preferences and Value Trade Offs. Cambridge University Press: New York.


Klee AJ (1971). The Role of Decision Models in the Evaluation of Competing Environmental Health Alternatives. Management Science 18 (2): B52-B67. Kwakernaak H (1979). An Algorithm for Rating Multiple-Aspect Alternatives using Fuzzy Sets. Automatica 15 (5): 615-616. Lai Y-J and Hwang C-L (1993). A Stochastic Possibility Programming Model for Bank Hedging Decision Problems. Fuzzy Sets and Systems 57 (3): 351-363. Luce RD (1956). Semiorders and a Theory of Utility Discrimination. Econometrica 24 (2): 178-191.

MacCrimmon KR (1968). Decision Making Among Multiple-Attribute Alternatives: A Survey and Consolidated Approach. RAND Memorandum RM-4823-ARPA. Martel JM and D’Avignon G (1982). Projects Ordering with Multicriteria Analysis. European Journal of Operational research 10 (1): 56-69.

Martel JM, D’Avignon G and Couillard J (1986). A Fuzzy Outranking Relation in Multicriteria Decision Making. European Journal of Operational research 25 (2): 258-271. Minnehan RF (1973). Multiple Objectives and Multigroup Decision Making in Physical Design Situations. In: Cochrane and Zeleny (Eds.), Multiple Criteria Decision Making. University of South Carolina Press: Columbia, South

Carolina. Muhlemann AP, Lockett AG and Gear AE (1978). Portfolio Modelling in Multiplecriteria Situations under Uncertainty. Decision Sciences 9 (4): 612-626. Nijkamp P (1974). A Multicriteria Analysis for Project Evaluation: EconomicEcological Evaluation of a Land Reclamation Project. Papers of the Regional Science Association 35 (1): 87-111


Nijkamp P and Vandelft A (1977). Multi-Criteria Analysis and Regional Decision Making. Martinus Nijhoff: Leiden.

Nowak M (2006). INSDECM – An Interactive Procedure for Stochastic Multicriteria Decision Problems. European Journal of Operational Research 175 (3): 1413-1430. Olson DL (2001). Comparison of Three Multicriteria Methods to Predict Known Outcomes. European Journal of Operational Research 130 (3): 576-587. Ozernoy VM (1987). Some Issues in Mathematical Modelling of Multiple Criteria Decision-Making Problems. Mathematical Modelling 8: 212-215. Ozernoy VM (1987). Choosing the "Best" Multiple Criteria Decision-Making Method. INFOR 30 (2): 159-171. Parkan C and Wu ML (1998). Process Selection with Multiple Objective and Subjective Attributes. Production Planning and Control 9 (2): 189-200. Pattanaik PK (1978). Strategy and Group Choice. North-Holland Publishing Company: New York. Raju KS and Pillai CRS (1999). Multicriterion Decision Making in River Basin Planning and Development. European Journal of Operational Research 112 (3): 249-257. Rebai A (1993). BBTOPSIS – A Bag based Technique for Order Preference by Similarity to Ideal Solution. Fuzzy Sets and Systems 60 (2): 143-162. Roy B (1968). Ranking and Choice in Pace of Multiple Points of View (ELECTRE Method). Revue Francaise D Informatique De Recherche Operationnelle 2 (8): 57-75.


Roy B (1971). Problems and Methods with Multiple Objective Functions. Mathematical Programming 1 (2): 239-266.

Roy B (1973). How Outranking Relation Helps Multiple Criteria Decision Making. In: Cochrane and Zeleny (Eds.), Multiple Criteria Decision Making. University of South Carolina Press: Columbia, South Carolina. Roy B (1991). The Outranking Approach and the Foundations of ELECTRE Methods. Theory and Decision 31 (1): 49-73. Saaty TL (1977). A Scaling Method for Priorities in Hierarchical Structure. Journal of Mathematical Psychology 15 (3): 234-281.

Saaty TL (1980). The Analytic Hierarchy Process. McGraw-Hill: New York. Saaty TL (1987). Rank Generation, Preservation, and Reversal in the Analytic Hierarchy Decision-Process. Decision Sciences 18 (2): 157-177. Saaty TL (1994). How to Make a Decision: The Analytic Hierarchy Process. Interfaces 24 (6): 19-43.

Seo F and Sakawa M (1984). Fuzzy Assessment of Multiattribute Utility-Functions. Springer-Verlag: Berlin, Heidelberg, New York. Seo F and Sakawa M (1985). Fuzzy Multiattribute Utility Analysis for Collective Choice. IEEE Transactions on Systems, Man, and Cybernetics 15 (1): 45-53. Shannon CE and Weaver W (1947). The Mathematical Theory of Communication. The University of Illinois Press: Urbana. Shih HS, Lin WY and Lee ES (2001). Group Decision Making for TOPSIS. In: Joint 9th IFSA World Congress and 20th NAFIPS International Conference: 2712-2717.


Shih HS, Shyur HJ and Lee ES (2007). An Extension of TOPSIS for Group Decision Making. Mathematical and Computer Modelling 45 (7-8): 801-813. Shih HS, Wang CH and Lee ES (2004). A Multiattribute GDSS for Aiding Problemsolving. Mathematical and Computer Modelling 39 (11-12): 1397-1412. Simon HA (1955). A Behavioral Model of Rational Choice. Quarterly Journal of Economics 69 (1): 99-114.

Simpson L (1996). Do Decision Makers Know What They Prefer?: MAVT and ELECTRE II. Journal of the Operational Research Society 47 (7): 919-929. Siskos J (1980). Method for Modelling Preferences Employing Additive Utility-functions. Rairo-Recherche Operationnelle-Operations Research 14 (1): 53-82. Siskos J (1983). Analyse de Systems de Decision Multicritere en Univers Aleatoire. Foundations of Control Engineering 8.

Siskos JL, Lochard J and Lombard J (1984). A Multicriteria Decision Making Methodology under Fuzziness: Application to the Evaluation of Radiological Protection in Nuclear Power Plants. In: Zimmermann (Eds.), TIMS/ Studies in the Management Sciences 20: 261-283. North-Holland.

Souder WE (1972). A Scoring Methodology for Assessing the Suitability of Management Science Models. Management Science 18 (10): B526-B543. Souder WE (1973a). Analytical Effectiveness of Mathematical Models for R & D Project Selection. Management Science 19 (8): 907-923. Souder WE (1973b). Utility and Perceived Acceptability of R & D Project Selection Models. Management Science 19 (12): 1384-1394.


Spearman C (1904). The Proof and Measurement of Association between Two Things. The American Journal of Psychology 15 (1): 72-101. Starr MK (1972). Production Management. Prentice-Hall: Englewood Cliffs, NJ. Steuer RE and Na P (2003). Multiple Criteria Decision Making Combined with Finance: A Categorized Bibliographic Study. European Journal of Operational Research 150 (3): 496-515.

Stewart TJ (1992). A Critical Survey on the Status of Multiple Criteria Decision Making Theory and Practice. OMEGA-International Journal of Management Science 20 (5-6): 569-586.

Triantaphyllou E and Sanchez A (1997). A Sensitivity Analysis Approach for Some Deterministic Multi-criteria Decision Making Methods. Decision Sciences 28 (1): 151-194. Triantaphyllou E (2000). Multi-Criteria Decision Making Methods: A Comparative Study. Kluwer Academic Publishers: Dordrecht/ Boston/ London.

Tversky A (1972a). Choice by Elimination. Journal of Mathematical Psychology 9 (4): 341-367. Tversky A (1972b). Elimination by Aspects: A Theory of Choice. Psychological Review 79 (4): 281-299.

Vinso JD (1982). Financial Planning for the Multinational Corporation with Multiple Goals. Journal of International Business Studies 13 (3): 43-58. Voogd H (1983). Multicriteria Evaluation for Urban and Regional Planning. Pion: London.


Weber M and Borcherding K (1993). Behavioral Influences on Weight Judgments in Multiattribute Decision Making. European Journal of Operational Research 67 (1): 1-12.

Wehrung DA, Bassler JF, MacCrimmon KR and Stanbury WT (1978). Multiple Criteria Dominance Models: An Empirical Study of Investment Preferences. In: Zionts (Eds.), Multiple Criteria Problem Solving: Proceedings. Springer-Verlag: Berlin/ Heidelberg/ New York. Yeh C-H (2002). A Problem-based Selection of Multi-attribute Decision-making Methods. International Transaction in Operational Research 9 (2): 169-181. Yeh C-H (2003). The Selection of Multiattribute Decision Making Methods for Scholarship Student Selection. International Journal of Selection and Assessment 11 (4): 289-296.

Yeh C-H and Chang Y-H (2009). Modelling Subjective Evaluation for Fuzzy Group Multicriteria Decision Making. European Journal of Operational Research 194 (2): 464-473.

Yoon KP (1989). The Propagation of Errors in Multiple-Attribute Decision Analysis: A Practical Approach. Journal of the Operational Research Society 40 (7): 681-686. Yoon KP and Hwang C-L (1995). Multiple Attribute Decision Making: An Introduction. Thousand Oaks, Sage Publications: London, New Delhi.

Young HP (1974). An Axiomatization of Borda’s Rule. Journal of Economic Theory 9 (1): 43-52.

Young HP (1975). Social Choice Scoring Functions. SIAM Journal on Applied Mathematics 28 (4): 824-838.


Yu PL (1973). Introduction to Domination Structures in Multicriteria Decision Problems. In: Cochrane and Zeleny (Eds.), Multiple Criteria Decision Making. University of South Carolina Press: Columbia, South Carolina. Yu PL (1975). Domination Structures and Nondominated Solutions. In: Leitmann and Marzollo (Eds.), Multicriteria Decision Making. Springer-Verlag: Wien/ New York. Yurdakul M and Yusuf TIC (2009). Application of Correlation Test to Criteria Selection for Multi Criteria Decision Making (MCDM) Models. International Journal of Advanced Manufacturing Technology 40 (3-4): 403-412.

Zadeh LA (1965). Fuzzy Sets. Information and Control 8 (3): 338-353.

Zanakis SH, Solomon A, Wishart N and Dublish S (1998). Multi-attribute Decision Making: A Simulation Comparison of Select Methods. European Journal of Operational Research 107 (3): 507-529.

Zaras K (2001). Rough Approximation of a Preference Relation by a Multi-attribute Stochastic Dominance for Deterministic and Stochastic Evaluation Problems. European Journal of Operational Research 130: 305-314.

Zaras K and Martel JM (1994). Multi-attribute Analysis based on Stochastic Dominance. In: Munier and Machina (Eds.), Models and Experiments in Risk and Rationality. Kluwer Academic Publishers: Dordrecht.

Zeleny M (1982). Multiple Criteria Decision Making. McGraw-Hill: New York.


Appendix A Notation

Ai: Alternative i (i = 1, 2, ..., I).

A*: Set of positive ideal solutions for weighted normalised performance ratings.

A-: Set of negative ideal solutions for weighted normalised performance ratings.

bki: Number of methods producing a better ranking than Method Mk (k = 1, 2, ..., K) for alternative Ai (i = 1, 2, ..., I).

B*: Set of positive ideal solutions for normalised performance ratings.

B-: Set of negative ideal solutions for normalised performance ratings.

Cj: Attribute or criterion j (j = 1, 2, ..., J).

CWn: Consistency weight n.

Dq: Decision maker q (q = 1, 2, ..., Q).

Dx: Weighted Euclidean distance in one-dimensional space.

Dxy: Weighted Euclidean distance in two-dimensional space.

di: Difference between ranks for alternative i (i = 1, 2, ..., I).

dx: Euclidean distance in one-dimensional space.

dxy: Euclidean distance in two-dimensional space.

Di*: Separation measure for alternative Ai (i = 1, 2, ..., I) from the positive ideal solutions for normalised performance ratings.

Di-: Separation measure for alternative Ai (i = 1, 2, ..., I) from the negative ideal solutions for normalised performance ratings.

e: Index of normalisation procedures (e = 1, 2, ..., E).


Fi: Overall rank score i (i = 1, 2, ..., I).

Gi*: Separation measure i (i = 1, 2, ..., I) from the positive-ideal rank score.

Gi-: Separation measure i (i = 1, 2, ..., I) from the negative-ideal rank score.

GSI: Group similarity index.

h: Index of MADM methods, with h ≠ k (h, k = 1, 2, ..., K).

i: Index of alternatives (i = 1, 2, ..., I).

j: Index of attributes (j = 1, 2, ..., J).

k: Index of MADM methods (k = 1, 2, ..., K).

l: Index of decision problems (l = 1, 2, ..., L).

Lk: Method preference level k (k = 1, 2, ..., K).

Lk+: Highest method preference level Lk (k = 1, 2, ..., K).

Mk: MADM method k (k = 1, 2, ..., K).

n: Number of other methods that produce the same ranking as Method Mk (k = 1, 2, ..., K).

Ne: Normalisation procedure e (e = 1, 2, ..., E).

Ob: Group outcome using the Borda score.

Ok: Ranking outcome k produced by Method Mk (k = 1, 2, ..., K).

Oq: Ranking outcome q produced by decision maker Dq (q = 1, 2, ..., Q).

Os: Ranking outcome s (s = 1, 2, ..., S) in the set of all possible ranking outcomes.

Ot: Group outcome using the TOPSIS-based technique.

OSIs: Outcome similarity index s (s = 1, 2, ..., S; S = I!).

OSIs+: Highest outcome similarity index s (s = 1, 2, ..., S; S = I!).

P: Preference degree matrix.


pik: Preference degree for Method Mk (k = 1, 2, ..., K) by alternative Ai (i = 1, 2, ..., I).

q: Index of decision makers (q = 1, 2, ..., Q).

Rk: Rank matrix for Method Mk (k = 1, 2, ..., K).

Rq: Rank matrix for decision maker Dq (q = 1, 2, ..., Q).

rik: Rank given to alternative Ai (i = 1, 2, ..., I) by Method Mk (k = 1, 2, ..., K).

riq: Rank given to alternative Ai (i = 1, 2, ..., I) by decision maker Dq (q = 1, 2, ..., Q).

RCkh: Rank correlation between the ranking outcomes produced by Methods Mk and Mh (k, h = 1, 2, ..., K; k ≠ h).

RCsq: Rank correlation between ranking outcomes Os (s = 1, 2, ..., S; S = I!) and Oq (q = 1, 2, ..., Q).

RC(Ob)q: Rank correlation of ranking outcome Oq (q = 1, 2, ..., Q) with the group outcome Ob.

RC(Ot)q: Rank correlation of ranking outcome Oq (q = 1, 2, ..., Q) with the group outcome Ot.

RCIk: Ranking consistency index k (k = 1, 2, ..., K).

RSIk: Rank similarity index k (k = 1, 2, ..., K).

RSI+: Largest rank similarity index.

s: Index of ranking outcomes in the solution space β (s = 1, 2, ..., S).

Si*: Separation measure for alternative Ai (i = 1, 2, ..., I) from the positive ideal solutions for weighted normalised performance ratings (illustrated below).

Si-: Separation measure for alternative Ai (i = 1, 2, ..., I) from the negative ideal solutions for weighted normalised performance ratings.

T: Total number of decision problems used in the simulation run.
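For orientation, the separation measures Si* and Si- combine the weighted normalised ratings vij with the ideal ratings vj* and vj- in the standard TOPSIS form (Yoon and Hwang 1995); taking the overall preference score Vi to be the usual relative closeness is an assumption made here for illustration:

\[
S_i^{*} = \sqrt{\sum_{j=1}^{J}\bigl(v_{ij}-v_j^{*}\bigr)^{2}}, \qquad
S_i^{-} = \sqrt{\sum_{j=1}^{J}\bigl(v_{ij}-v_j^{-}\bigr)^{2}}, \qquad
V_i = \frac{S_i^{-}}{S_i^{*}+S_i^{-}}.
\]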


Tkn: Number of times Method Mk (k = 1, 2, ..., K) produces the same ranking outcome as n (n = 1, 2, ..., K-1) other methods.

U: Scaled preference matrix.

uik: Scaled preference degree for Method Mk (k = 1, 2, ..., K) by alternative Ai (i = 1, 2, ..., I).

V: Decision matrix consisting of weighted normalised performance ratings.

Vi: Overall preference score for alternative Ai (i = 1, 2, ..., I).

vij: Weighted normalised performance rating for alternative Ai (i = 1, 2, ..., I) with respect to attribute Cj (j = 1, 2, ..., J).

vj*: Positive ideal weighted normalised performance rating for attribute Cj (j = 1, 2, ..., J).

vj-: Negative ideal weighted normalised performance rating for attribute Cj (j = 1, 2, ..., J).

W: Weight vector consisting of the attribute weights for a given problem.

Wj: Weight for attribute Cj (j = 1, 2, ..., J).

Wlj: Weight for attribute Cj (j = 1, 2, ..., J) of decision problem Φl (l = 1, 2, ..., L).

Wklj: Weight required for attribute Cj (j = 1, 2, ..., J) to produce a ranking outcome different from the base outcome for Method Mk (k = 1, 2, ..., K) for decision problem Φl (l = 1, 2, ..., L).

WSIk: Weight sensitivity index k (k = 1, 2, ..., K).

ΔWk: Average change in weight for Method Mk (k = 1, 2, ..., K) over all attributes Cj (j = 1, 2, ..., J) of all decision problems Φl (l = 1, 2, ..., L) in the decision problem set Ω.


ΔWklj: Change in weight for attribute Cj (j = 1, 2, ..., J) of decision problem Φl (l = 1, 2, ..., L) for Method Mk (k = 1, 2, ..., K).

X: Decision matrix consisting of performance ratings.

Xl: Decision matrix l (l = 1, 2, ..., L).

xij: Performance rating for alternative Ai (i = 1, 2, ..., I) with respect to attribute Cj (j = 1, 2, ..., J).

Y: Decision matrix consisting of normalised performance ratings.

yij: Normalised performance rating for alternative Ai (i = 1, 2, ..., I) with respect to attribute Cj (j = 1, 2, ..., J).

yj*: Positive ideal normalised performance rating for attribute Cj (j = 1, 2, ..., J).

yj-: Negative ideal normalised performance rating for attribute Cj (j = 1, 2, ..., J).

Z: Rank score matrix.

Z*: Set of positive-ideal rank scores.

Z-: Set of negative-ideal rank scores.

ziq: Borda score for alternative Ai (i = 1, 2, ..., I) and decision maker Dq (q = 1, 2, ..., Q) (illustrated below).

zq*: Positive-ideal rank score for decision maker Dq (q = 1, 2, ..., Q).

zq-: Negative-ideal rank score for decision maker Dq (q = 1, 2, ..., Q).

Φ: A multiattribute decision problem.

Φl: Multiattribute decision problem l (l = 1, 2, ..., L).

Ω: Set of given decision problems.

ρ: Rank correlation coefficient (illustrated below).

β: Set of all possible ranking outcomes.
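The Borda score ziq can be computed under one common convention of Borda's rule (Young 1974): each alternative scores the number of alternatives ranked below it, so with I alternatives

\[
z_{iq} = I - r_{iq}.
\]

This particular convention is an illustrative assumption; equivalent scorings shift every score by a constant without changing the group outcome.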
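In its standard Spearman (1904) form, which matches the definitions of di and I above and assumes no tied ranks, the rank correlation coefficient is

\[
\rho = 1 - \frac{6\sum_{i=1}^{I} d_i^{2}}{I\,(I^{2}-1)}.
\]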


Appendix B Glossary of Terms

Aggregation Technique: A process to combine performance ratings with attribute weights to obtain an overall preference value.

Alternative: A possible course of action.

Alternatives-Oriented Approach: A method selection approach from the perspective of the alternatives.

Attribute: Characteristics or objectives to be considered during the evaluation of alternatives.

Attribute Weight: The relative importance of attributes in the decision making process.

Consensus Technique: A process to achieve a unified opinion among a group of decision makers.

Decision Alternatives: Alternatives in the context of a decision problem.

Decision Analysis: A subject area devoted to decision making issues.

Decision Contexts: Specific requirements for the decision problem and for method evaluation.


Decision-Maker-Oriented Approach: A method selection approach from the perspective of the decision maker.

Decision Matrix: The performance ratings for each alternative with respect to each attribute, combined in matrix form.

Decision Settings: The characteristics of a decision problem in terms of size, data and other information.

Decision Support System: A computerised system to assist the decision maker in making rational decisions.

Method Evaluation Criteria: Specific requirements for the evaluation and comparison of MADM methods.

Group Consensus: Agreement within a group of decision makers.

Group Decision Problem: A decision problem with more than one decision maker.

Group Outcome: The outcome of a group decision problem.

Method Comparison: A process to compare MADM methods.

Method Evaluation: A process to evaluate MADM methods under a certain performance measure.

Method-Oriented Approach: A method selection approach from the perspective of MADM methods.


Method Preference Level: The degree to which a method is preferred over others under certain specific requirements.

Method Selection: A process to select a method from a group of available methods for a given decision problem.

Multiattribute Decision Making: A decision making process in which multiple alternatives are assessed on multiple criteria under given settings.

Negative Ideal Solution: The worst possible performance rating for an attribute over all the alternatives.

Normalisation Procedure: A process to convert performance ratings with different measurement units into comparable ones (two common procedures are illustrated at the end of this glossary).

Overall Preference Value: A value that represents the overall performance of an alternative with respect to all the attributes.

Performance Ratings: The performance of an alternative against an attribute.

Positive Ideal Solution: The best possible performance rating for an attribute over all the alternatives.

Ranking Consistency: A measure indicating the level of consistency a method shows under different decision settings.


Rank Correlation Coefficient: A measure of the similarity between two rankings.

Rank Reversal: A phenomenon in which the ranks of two alternatives swap irrationally with a change in decision settings.

Solution Space: The set of all possible decision outcomes.

Weight Sensitivity: A measure indicating how sensitive a particular method is to a change in attribute weights.

Weight Vector: The set of attribute weights for an MADM problem.
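To make the Normalisation Procedure entry concrete, two widely used textbook procedures for benefit attributes (e.g., Yoon and Hwang 1995) are linear (max) normalisation and vector normalisation, shown here for illustration:

\[
y_{ij} = \frac{x_{ij}}{\max_{i} x_{ij}} \quad \text{(linear)}, \qquad
y_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{I} x_{ij}^{2}}} \quad \text{(vector)}.
\]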


Appendix C Simulation Results

Detailed results of the simulation experiments presented in Chapter 5 are given below.

C.1 Results for SAW

C.1.1 Results for Change in Alternative Numbers

With a particular number of attributes (4, 6, ..., 20), the number of alternatives is increased from 4 to 20 in steps of 2. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-1 to C-9. A minimal sketch of one run of this experiment is given below.
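The following Python fragment generates one random decision problem and ranks its alternatives with SAW. It is a minimal sketch only: the uniform sampling range, the linear (max) normalisation, the benefit-only attributes and the function names are illustrative assumptions, not the exact simulation code used in Chapter 5.

import numpy as np

rng = np.random.default_rng(0)

def random_problem(I, J):
    # Performance ratings x_ij sampled uniformly (range assumed for illustration);
    # attribute weights normalised to sum to one.
    X = rng.uniform(1.0, 100.0, size=(I, J))
    W = rng.uniform(size=J)
    return X, W / W.sum()

def saw_ranking(X, W):
    # SAW: linear (max) normalisation of benefit attributes, then a weighted sum.
    Y = X / X.max(axis=0)                 # y_ij = x_ij / max_i x_ij
    V = Y @ W                             # overall preference score V_i
    return (-V).argsort().argsort() + 1   # rank 1 = best; ties ignored in this sketch

X, W = random_problem(I=8, J=4)
print(saw_ranking(X, W))

Repeating such runs over many random problems yields the same-ranking counts (Tkn in Appendix A) on which the RCI defined in Chapter 5 is based.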

[Figures C-1 to C-9 were line plots of the ranking consistency index (y-axis) against the number of alternatives (x-axis, 4 to 20) for methods M1 (S) to M4 (S); only the captions are retained.]

Figure C-1 With 4 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-2 With 6 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-3 With 8 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-4 With 10 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-5 With 12 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-6 With 14 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-7 With 16 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-8 With 18 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-9 With 20 attributes, the effects on the ranking consistency for changes in the number of alternatives

C.1.2 Results for Change in Attribute Numbers

With a particular number of alternatives (4, 6, ..., 20), the number of attributes is increased from 4 to 20 in steps of 2. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-10 to C-18.

[Figures C-10 to C-18 were line plots of the ranking consistency index (y-axis) against the number of attributes (x-axis, 4 to 20) for methods M1 (S) to M4 (S); only the captions are retained.]

Figure C-10 With 4 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-11 With 6 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-12 With 8 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-13 With 10 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-14 With 12 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-15 With 14 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-16 With 16 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-17 With 18 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-18 With 20 alternatives, the effects on the ranking consistency for changes in the number of attributes

C.1.3 Results for Change in Data Range

With a particular number of alternatives and attributes (4, 6, ..., 14), the data range is narrowed from 100% to 20% in steps of 10%. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-19 to C-24. A sketch of one way the sampling range might be narrowed is given below.
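A minimal sketch of narrowing the data range, assuming ratings are drawn from a sub-interval of the full range centred on its midpoint; the centring, the full range [1, 100] and the function name are illustrative assumptions, not the exact scheme used in Chapter 5.

import numpy as np

rng = np.random.default_rng(0)

def narrowed_ratings(I, J, range_pct, lo=1.0, hi=100.0):
    # range_pct = 100 reproduces the full interval [lo, hi];
    # range_pct = 20 keeps only the central 20% of it.
    half = (hi - lo) * (range_pct / 100.0) / 2.0
    mid = (lo + hi) / 2.0
    return rng.uniform(mid - half, mid + half, size=(I, J))

X = narrowed_ratings(I=8, J=8, range_pct=50)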

[Figures C-19 to C-24 were line plots of the ranking consistency index (y-axis) against the data range (x-axis, 100% down to 20%) for methods M1 (S) to M4 (S); only the captions are retained.]

Figure C-19 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-20 With 6 attributes and 6 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-21 With 8 attributes and 8 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-22 With 10 attributes and 10 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-23 With 12 attributes and 12 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-24 With 14 attributes and 14 alternatives, the effects on the ranking consistency for changes in the data range

C.2 Results for TOPSIS

C.2.1 Results for Change in Alternative Numbers

With a particular number of attributes (4, 6, ..., 20), the number of alternatives is increased from 4 to 20 in steps of 2. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-25 to C-33. A minimal sketch of one TOPSIS ranking, in the same style as the SAW sketch in Section C.1.1, is given below.
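Vector normalisation, benefit-only attributes and the function name are illustrative assumptions here; this is not the exact simulation code of Chapter 5.

import numpy as np

rng = np.random.default_rng(0)

def topsis_ranking(X, W):
    # Vector normalisation: y_ij = x_ij / sqrt(sum_i x_ij^2).
    Y = X / np.sqrt((X ** 2).sum(axis=0))
    V = Y * W                                        # weighted normalised ratings v_ij
    v_pos, v_neg = V.max(axis=0), V.min(axis=0)      # positive and negative ideal solutions
    S_pos = np.sqrt(((V - v_pos) ** 2).sum(axis=1))  # separation S_i*
    S_neg = np.sqrt(((V - v_neg) ** 2).sum(axis=1))  # separation S_i-
    C = S_neg / (S_pos + S_neg)                      # relative closeness
    return (-C).argsort().argsort() + 1              # rank 1 = closest to the ideal

X = rng.uniform(1.0, 100.0, size=(8, 4))
W = rng.uniform(size=4)
print(topsis_ranking(X, W / W.sum()))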

[Figures C-25 to C-33 were line plots of the ranking consistency index (y-axis) against the number of alternatives (x-axis, 4 to 20) for methods M1 (T) to M4 (T); only the captions are retained.]

Figure C-25 With 4 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-26 With 6 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-27 With 8 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-28 With 10 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-29 With 12 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-30 With 14 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-31 With 16 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-32 With 18 attributes, the effects on the ranking consistency for changes in the number of alternatives
Figure C-33 With 20 attributes, the effects on the ranking consistency for changes in the number of alternatives

C.2.2 Results for Change in Attribute Numbers

With a particular number of alternatives (4, 6, ..., 20), the number of attributes is increased from 4 to 20 in steps of 2. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-34 to C-42.

[Figures C-34 to C-42 were line plots of the ranking consistency index (y-axis) against the number of attributes (x-axis, 4 to 20) for methods M1 (T) to M4 (T); only the captions are retained.]

Figure C-34 With 4 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-35 With 6 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-36 With 8 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-37 With 10 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-38 With 12 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-39 With 14 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-40 With 16 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-41 With 18 alternatives, the effects on the ranking consistency for changes in the number of attributes
Figure C-42 With 20 alternatives, the effects on the ranking consistency for changes in the number of attributes

C.2.3 Results for Change in Data Range

With a particular number of alternatives and attributes (4, 6, ..., 14), the data range is narrowed from 100% to 20% in steps of 10%. The effects on the ranking consistency index (RCI) for each of the four methods can be observed in Figures C-43 to C-48.

[Figures C-43 to C-48 were line plots of the ranking consistency index (y-axis) against the data range (x-axis, 100% down to 20%) for methods M1 (T) to M4 (T); only the captions are retained.]

Figure C-43 With 4 attributes and 4 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-44 With 6 attributes and 6 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-45 With 8 attributes and 8 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-46 With 10 attributes and 10 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-47 With 12 attributes and 12 alternatives, the effects on the ranking consistency for changes in the data range
Figure C-48 With 14 attributes and 14 alternatives, the effects on the ranking consistency for changes in the data range