Uncertainty in Task Duration and Cost Estimates: Fusion of Probabilistic Forecasts and Deterministic Scheduling


Homayoun Khamooshi, Ph.D.¹; and Denis F. Cioffi, Ph.D.²

Abstract: A model for project budgeting and scheduling with uncertainty is developed. The traditional critical-path method (CPM) misleads because there are few, if any, real-life deterministic situations for which CPM is a great match; the program evaluation and review technique (PERT) has been seen to have its problems, too (e.g., merge bias, unavailability of data, difficulty of implementation by practitioners). A dual focus on the distributions of the possible errors in the time and cost estimates and on the reliability of the estimates used as planned values suggests an approach for developing reliable schedules and budgets with buffers for time and cost. This method for budgeting and scheduling is executed through either simulation or a simple analytical approximation. The dynamic buffers provide much-needed flexibility, accounting for the errors in cost and duration estimates associated with planning any real project, thus providing a realistic, practical, and dynamic approach to planning and scheduling. DOI: 10.1061/(ASCE)CO.1943-7862.0000616. © 2013 American Society of Civil Engineers.

CE Database subject headings: Estimation; Scheduling; Construction costs; Budgets; Probability; Project management; Simulation; Uncertainty principles; Forecasting.

Author keywords: Estimate; Reliability; Scheduling; Cost; Budget; Probability; Contingency; Project management; Simulation.

¹Assistant Professor, Dept. of Decision Sciences, School of Business, George Washington Univ., Funger Hall, Suite 415, 2201 G St. NW, Washington, DC 20052 (corresponding author). E-mail: [email protected]
²Associate Professor, Dept. of Decision Sciences, School of Business, George Washington Univ., Funger Hall, Suite 415, 2201 G St. NW, Washington, DC 20052. E-mail: [email protected]

Note: This manuscript was submitted on October 25, 2010; approved on May 29, 2012; and published online on July 24, 2012. Discussion period open until October 1, 2013; separate discussions must be submitted for individual papers. This paper is part of the Journal of Construction Engineering and Management, Vol. 139, No. 5, May 1, 2013. © ASCE, ISSN 0733-9364/2013/5-488-497/$25.00.

Introduction and Background

Inaccurate estimation has long been identified as one of the major causes of project failure (Flyvbjerg et al. 2009; Chan and Kumaraswamy 1997; Pinto and Mantel 1990), and Standish Group reports (1998, 2009) show more projects failing than succeeding. Good measures of worker productivity and of the total amount of work, which combine to determine task durations, are not easily achieved. For a specific activity, underestimation is generally caused by oversight or by lack of familiarity with, or understanding of, the job at hand, but it may even be driven by organizational culture or politics. Estimating errors on work packages or activities may delay a milestone and disrupt the remaining project schedule. The delay and disruption caused by bad estimation may lead to project failure (Lee et al. 2009) or, at best, to project management failure, that is, not delivering the project on time, within budget, and per specifications. Abundant literature provides statistics on project management failure that link the failure to an absence of good planning and scheduling, the causes of which are either the estimates or the process used for planning and scheduling (e.g., Williams 1995; Herroelen et al. 1998; Pich et al. 2002; De Meyer et al. 2002; Herroelen and Leus 2004, 2005). Williams (2005) surveys the extensive literature on project management failure and project overruns. He questions the underlying assumptions of project management theory, specifically the critical-path method (CPM) model and its suitability for managing specific types of projects. Although the focus of Williams's paper is on a specific category of projects, i.e., those that are large and complex (with high levels of uncertainty), inaccurate estimates and a rigid scheduling approach are generic problems for almost all projects, which by definition are about achieving something new. Touran (2010) likewise notes that "the use of probabilistic risk assessment in major infrastructure projects is increasing to cope with major cost overruns and schedule delays." As argued by Howick (2003), the delay and disruption caused by the uncertainty associated with estimates (as well as their presumed certainty) can drastically affect the cost, quality, and duration of the project. Although the objective should be developing the most accurate estimates, the inaccuracy inherent in estimation needs to be accounted for through some flexible mechanism (Khamooshi 1999), or the impact may cause more damage than the stakeholders will tolerate, leading to project failure. To solve the problem of unwanted scheduling iterations in design and development, Ballard (1999, 2000) and the Lean Construction Institute (LCI) at Berkeley introduced the concept of so-called phase scheduling, in which much emphasis is put on teamwork and on the reliability of the estimate. Ballard goes further and suggests, "The key point is to deliberately and publicly generate, quantify, and allocate schedule contingency."

The following assertions are derived from the literature:
• The time needed to complete individual tasks or work packages is almost always greater than, or at best equal to, the original estimates used for establishing the baseline schedule. In other words, Parkinson's law (Parkinson 1957) holds: work expands to fill the available time. Work can be further delayed because of the student syndrome, i.e., waiting until the end to finish a task.
• Statistics on classical project management success, that is, delivering the project on time, within budget, and per specification, show that most projects finish late (Eden et al. 2005) and overrun their budgeted costs (Hughes 1986; Standish Group 2009).
• In organizations with less emphasis on planning and control, inaccuracy of the baseline estimates due to padding (Burt and Kemp 1994) or even intentional underestimation can be accepted as the norm, leading to deviation from reliable, accurate, and ethical estimates. González et al. (2010) developed an approach to improve planning reliability at an operational level. They argue that reliability and commitment are key factors for improving project performance. The lack of reliability in turn can lead to delayed and overrun projects (Flyvbjerg et al. 2002).
• The cost of underestimation, which leads to rescheduling or scheduling on the go (e.g., critical chain project management, agile scheduling, and similar approaches), is normally ignored. Although dynamism and flexibility are a must for the uncertain and volatile environments in which projects operate, the objective should be to develop as stable a schedule as possible (Khamooshi 1999; Emhejellen et al. 2003). The perceived agility costs money, especially in large and complex project environments.

Williams (2005, 1995), Lichtenberg (1983), and many others have questioned the efficacy of deterministic approaches, and more than two decades ago the British Petroleum company, for example, committed itself to probabilistic approaches for time and cost analysis of its major projects. These probabilistic approaches, however, provide only for contingency; they do not focus on scheduling tasks. More realistic, reliable, and stable schedules are still needed.

Certainty, Uncertainty, and Errors in Scheduling Estimates

Program Evaluation and Review Technique and Monte Carlo Focus on Project Duration

The accuracy, and the resulting reliability, of estimates should be taken into account more seriously in developing project schedules. After the early stages of selection and approval, planning for a project continues with defining the scope of the project in greater detail, developing a work breakdown structure, and establishing estimates for the tasks and activities inside the work packages in preparation for schedule creation. The estimation process is used to specify the effort: the product of the duration needed to deliver the work-package products and the types and quantities of resources needed to achieve these objectives. The figures developed in this process are only estimates, subject to uncertainty and, hence, inaccuracy. The lower the certainty, the higher the chance of exceeding the planned duration.

Two divergent approaches are used for planning project durations: (1) deterministic (CPM) and (2) probabilistic, e.g., the program evaluation and review technique (PERT) or Monte Carlo simulation. In all these methods, single-valued task durations are used to develop a baseline schedule, i.e., the schedule against which the real work must be accomplished. With PERT, by taking advantage of the central-limit theorem and using mean activity durations from the assumed distributions of duration times for each task on the critical path, one can estimate the probabilities that correspond to completion times less than or greater than the planned project duration based on those mean times (a duration that has a 50% probability). A Monte Carlo simulation, although a more detailed calculation that eliminates PERT's merge-bias problem and its ignorance of near-critical paths, uses what is fundamentally a similar plan of attack. Again, a distribution is assumed for each task on the critical path, and the probabilities of various project durations are found.

More than 50 years ago, PERT was developed to deal with the uncertainties associated with estimated activity durations (and, through them, the project as a whole), but the model is not widely used as a real project planning and scheduling tool because few project managers are willing to schedule projects using PERT's mean values for the tasks' durations. There has been much criticism of PERT (MacCrimmon and Ryavec 1964; Klingel 1966; Britney 1976; Schonberger 1981; Baker 1986). Wayne and Cottrell (1998) introduced a simplified version to overcome some of its difficulties.

The PERT method is based on and supported by the central-limit theorem, which implies using the expected value of the duration, T_e = (a + 4m + b)/6, of each task as its planned duration (the mode or other values could be used as well). The project duration, which is the sum of the durations of the activities on the critical path, will be the average or expected project completion time, i.e., a duration with only a 50% chance of being achieved. The analysis assumes that the actual duration of each activity could be any value from the range given by the assumed distribution. Realistically, however, the planned duration of each task (the duration estimate used to develop the baseline schedule) is at best the time needed to finish the job; it is often exceeded. In other words, the probability of each task taking less time than the planned value is low (much less than the theoretical value of 50% if the average value is used as the planned value). This underestimation is not a theoretical problem or an issue with the statistical foundation of PERT analysis. Rather, it is a behavioral or perceptual one, explained by Parkinson's law, that work expands to fill the time available (Parkinson 1957; Gutierrez and Kouvelis 1991), and by the student syndrome, that the job is delayed until the deadline approaches. At the task level, the studies by Buehler et al. (1994) and Roy et al. (2005) discuss the rationale and support the reality that individuals often underestimate the time needed to do a job. Because the actual time needed to do a task is normally more than what one plans for, the actual duration typically lands to the right of the planned duration on the time axis. At the system and project levels there is also ample evidence of optimism bias in time and cost estimation (Flyvbjerg 2008; Wook and Rojas 2011).

These observations suggest that the planned duration at best is almost always perceived as, and ends up being, the minimum actual time needed to do the job, even if the worker knows, for example, that the planned duration is the most likely value (the mode) per historical data. Thus, the possible values to the left of the planned duration become very unlikely as soon as the planned duration is confirmed. This problem does not exist when the project is simulated mechanically, where all values above and below the planned value keep their probabilities of occurrence per the assumed distribution. In rare cases of a "routine" project (Cioffi 2006) and management environment, the tasks may be completed at their planned durations but hardly in less time. Both PERT and the traditional Monte Carlo simulation fail to account for the reality of these reduced probabilities; thus, they greatly overestimate the chances of project durations shorter than the scheduled duration. To develop a real working schedule, one must assume single-point values for the individual task durations.
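To make the arithmetic concrete, the short sketch below (Python; the three-point estimates are hypothetical, not drawn from any project in this paper) computes the PERT expected durations and the rough probability that a path of such tasks meets a plan built from those means:

```python
# Hypothetical three-point estimates (a = optimistic, m = most likely, b = pessimistic), in days.
tasks = [(4, 6, 10), (8, 10, 16), (3, 5, 8)]

# PERT expected duration of each task: T_e = (a + 4m + b) / 6
te = [(a + 4 * m + b) / 6 for a, m, b in tasks]
print([round(t, 2) for t in te], round(sum(te), 2))   # [6.33, 10.67, 5.17] 22.17

# If each task independently has only ~50% chance of finishing within its mean,
# the chance that every task on this 3-task path meets the plan is roughly
print(0.5 ** len(tasks))                              # 0.125
```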
In this paper, by considering the reliability of the particular single-point values used in developing the actual schedule, and by also examining the size and distribution of the error associated with those values, it is shown how one can develop a robust schedule and determine the proper size of its buffer; this method is then extended to incorporate cost budgeting. The overestimates of shorter project durations by PERT and Monte Carlo suggest using a new distribution in the statistical calculations; therefore, more-realistic duration distributions are examined first.


Which Distributions Best Describe Task Durations?

As planning and scheduling specialists start developing an estimate for the duration of an activity, they look for previous experience and data. If the data for a particular task have been recorded and are available, they can be used to develop the duration distribution of the task. A distribution can be presented with the values of the durations shown on the x-axis and the occurrence likelihood (continuous) or count (discrete) depicted on the y-axis. Figs. 1(a and b) show examples of continuous and discrete probability distributions. Fente et al. (2000) developed a model for defining a beta distribution for simulating construction project tasks, but Back et al. (2000) suggested using a triangular probability distribution function for costs in construction projects.

Fig. 1. (a) Continuous and (b) discrete probability distributions for the duration of a task

What types of estimates best describe project tasks and provide an accurate indication of their durations?
• Optimistic estimates (minimum durations) that define the lower limits of beta or triangular distributions have, by definition, a probability of order 1%. Thus, if n is the number of activities on the critical path, the minimum project duration derived when using these distributions will occur at a probability of approximately 0.01 to the nth power. This probability is vanishingly small.
• Using the expected values of the durations per the central-limit theorem analysis in PERT, or the modes as a rational choice (because these are the most frequently occurring values of each estimate), instead of using optimistic values will result in a longer project duration with a better chance of being achieved. But, in reality, how much better? Assuming that the tasks will not finish in less time than planned for (Buehler et al. 1994; Roy et al. 2005), the probability of finishing the project in a time less than or equal to this planned duration is approximately 0.5 to the nth power, where n is the number of activities on the critical path. For a critical path containing only 10 tasks, this probability is less than one-tenth of 1%.
• As explained previously with regard to Parkinson's law and the student syndrome, as soon as any value is fixed as the planned value, the actual task duration has little or no chance of being realized in a time much less than its planned duration but has some probability of lasting longer than planned, depending on the given reliability of the duration. Thus, some tasks on the critical path will exceed their planned durations, and the project will overrun.
• Pessimistic estimates could be used as the planned values. This choice increases the chances of success, with minimum delay, but it would most likely waste resources because the opportunity for shorter delivery has been lost.

The environments and characteristics in which tasks are executed and projects are developed are unique. Any historical data gathered should not necessarily be interpreted as a replication of the same experiment, and so reproducing the duration distributions in a simulation will not result in a true prediction of the new project under consideration unless valid, detailed data are available. Research by Kirytopoulos et al. (2008) suggests that when good historical data are applied properly, these data can guide the selection of the specific duration distribution to be used for scheduling, and then PERT and Monte Carlo simulations produce essentially the same duration results.

Before scheduling, the continuous probability density or the discrete distribution function for the task duration developed from historical data normally contains a mode, or most likely value, that is greater than or equal to the minimum duration. After scheduling, Parkinson's law takes effect, and the duration distribution should be truncated on the left-hand side of the scheduled duration. Choosing values less than the mode could produce underestimates of the task durations, which causes overruns and a frequent need for rescheduling. Thus, in most cases, the scheduled duration should be equal to or greater than the mode, and this new duration (i.e., the minimum of the now-truncated distribution) is the one to be used for the most realistic probabilistic forecast of the behavior of the project. Hence, for durations less than the scheduled duration, zero probability is assumed. A truncated version of a beta distribution or a right-angle triangular distribution may now show the new probability density function for the duration.

The distribution to the right of the planned duration shows the probability distribution of the error in the estimate. This probability distribution of the error is typically expected to decrease (if a continuous distribution) with increasing duration. The size and shape of the distribution characterize the error and its uncertainty. A small error will be reflected in a small range to the right of the estimate. There are many possible ways to characterize the error distribution. A decreasing function [Fig. 2(a)] gives greater reliability because it indicates a greater chance of the realized duration being close to the minimum; a constant indicates that, within some fixed limits, all possible extensions to the duration or cost are equally likely. Lastly, there could be situations in which the probability of the error increases with the size of the error [Fig. 2(b)].

Fig. 2. Possible probability distributions of error size: (a) decreasing function; (b) increasing function
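As a concrete illustration of these error shapes, the following sketch (Python; the densities and the 11-day maximum error are illustrative assumptions) draws an error size by inverse-transform sampling from the decreasing, uniform, or increasing densities just described:

```python
import random

def sample_error_fraction(shape: str) -> float:
    """Error as a fraction of the maximum error, via inverse-transform sampling."""
    u = random.random()
    if shape == "decreasing":       # density f(x) = 2(1 - x): small errors most likely
        return 1 - (1 - u) ** 0.5
    if shape == "increasing":       # density f(x) = 2x: large errors most likely
        return u ** 0.5
    return u                        # uniform: all error sizes equally likely

max_error = 11   # days; an assumed maximum duration error for one task
print(round(max_error * sample_error_fraction("decreasing"), 1))
```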
Because the high reliability of the mode was established more than four decades ago (Peterson and Miller 1964; Clark 1961), the truncation idea can be extended to a discrete, binomial distribution, where the two values of the distribution represent the minimum (the mode) and maximum durations. Now the error is represented by only a single duration: the difference between the maximum and the minimum durations. This method, with either continuous or discrete distributions, shifts the estimating focus for any given task to establishing a single reliable task duration, the error, and its associated probability. In contrast to critical chain, which takes the estimator's number and arbitrarily assigns a fraction of it, say 50%, as the planned duration, hoping that some resources will be saved, the best estimate of the planned duration is obtained by allowing the estimator to take full responsibility, and then this estimate is used in the schedule. This decision shows great trust in the estimator's judgment. These arguments are now tested with a numerical example, in which project durations obtained with this method are compared with those obtained from traditional and modified Monte Carlo simulations.

Numerical Example Using Three Different Distributions

A house-refurbishing project from the Pertmaster sample files is used to illustrate the effects of using the different distributions previously discussed; Table 1 shows the project data.

Table 1. Example Project Data: Precedence Relationships and ta (Minimum) and tb (Maximum) Values Used in Simulations

| Number | Task | Predecessor | Minimum duration (ta) | Maximum duration (tb) |
| 1 | Drain off system | — | 8 | 12 |
| 2 | Chimney rebuild | 1 | 8 | 12 |
| 3 | Drill out ties | 2 | 16 | 23 |
| 4 | Cut off and reroute electric | 2 | 8 | 12 |
| 5 | Rewire | 4 | 7 | 10 |
| 6 | Lower brickwork | 3 and 4 | 24 | 35 |
| 7 | Test electrics | 5 and 6 | 16 | 23 |
| 8 | Electrics fail | 7 | 16 | 23 |
| 9 | Electrics pass | 7 | 4 | 7 |
| 10 | Upper brickwork | 8 and 9 | 16 | 23 |
| 11 | Strip off roof cover | 10 | 8 | 12 |
| 12 | Boundary wall | 10 | 12 | 17 |
| 13 | Rotten supports | 11 | 4 | 10 |
| 14 | Roof structure work | 12 and 13 | 12 | 17 |
| 15 | Recover roof | 6 | 27 | 35 |
| 16 | Plumbing | 15 | 44 | 49 |
| 17 | Dismantle scaffold | 15 | 12 | 16 |
| 18 | Joinery | 17 | 14 | 18 |
| 19 | Plaster | 18 | 20 | 25 |
| 20 | Finish | 14, 16, and 19 | 2 | 3 |

With Pertmaster, four separate simulations of the project's schedule were conducted. For each run, one of the following distributions was used when modeling the activity durations. For each task, ta is the minimum duration, tm is the mode, and tb is the maximum duration. The distributions used are as follows:
1. Truncated beta distribution: ta = tm and tb specified;
2. Right-angle triangular distribution: ta = tm and tb specified;
3. Binomial distribution: in contrast to the previous distributions, a discrete distribution with only two possible outcomes, ta = tm and tb, where δt = tb − ta is the error of the duration estimate; and
4. Binomial distribution with half the original maximum error; the new maximum, (ta + tb)/2, yields an error δt = (tb − ta)/2.

All the distributions need two input values, ta and tb. In the simulations, only the fraction of possibilities equal to the error percentage (1 minus the reliability percentage) sits to the right of the minimum; e.g., with a reliability of x percent, x percent of the simulation selections will equal the minimum duration, with the remainder (1 − x) at the maximum for the two binomials and somewhere between the minimum and the maximum for the two continuous distributions. Hence, the simulations differ from the traditional Monte Carlo because each activity is modeled with a branched duration. One branch yields the fixed minimum duration at a frequency equal to the reliability percentage (given by the estimator), and the other branch uses the given distribution to extend the duration beyond the minimum. Changing the reliability of the estimation does not necessarily change the shape of the distribution or the size of the estimation error; it does change the branching probability, i.e., the probability of the minimum duration being realized.

The comparison between the discrete and continuous distributions will differ depending on the numbers used in the discrete distribution. For example, a binomial distribution in which the longer duration is set to the maximum error (number 3 in the preceding list) is much more conservative than the continuous distributions because when the error occurs, the estimated duration falls at the duration that corresponds to the maximum error. Thus, a less conservative approach was also used: half the maximum error, (tb − ta)/2, i.e., the average error, serves as the error duration in the binomial distribution. These behaviors are illustrated in the following. The practical advantages of the binomial outweigh minor concerns about the exact shape of the truncated duration distribution.

Now the results of numerical simulations that used the preceding distributions are examined. The following tables show probabilistic duration results from simulations of 1,000 and 10,000 iterations. Table 2 shows the results of six different simulations, each using a different distribution for all project tasks: (1) a traditional beta distribution; (2) a truncated beta distribution, which necessitates branched simulation; (3) a binomial distribution that uses ta as the default duration and tb as the extended duration, i.e., the default plus the total error; (4) a binomial distribution that uses ta as the default duration but (ta + tb)/2, i.e., the default plus the average error, as the extended duration; (5) a triangular distribution; and (6) a right-angle triangular distribution. The truncated beta, right-angle triangular, and binomial distributions were calculated with a 90% reliable planned duration (i.e., 10% probability of error).
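A minimal sketch of this branched sampling is shown below (Python; the four-task network and the 90% reliability are illustrative assumptions, not the Table 1 project, and the binomial error model of distribution 3 is used, so an overrun always lands at the maximum duration):

```python
import random

# Illustrative network: task -> (predecessors, planned duration t_a, maximum duration t_b).
tasks = {
    "A": ([], 8, 12),
    "B": (["A"], 16, 23),
    "C": (["A"], 24, 35),
    "D": (["B", "C"], 12, 17),
}
RELIABILITY = 0.90   # probability that a task finishes at its planned duration

def project_duration() -> int:
    """One branched iteration: each task keeps t_a with probability rho, else extends to t_b."""
    finish = {}
    for name, (preds, ta, tb) in tasks.items():   # insertion order is topological here
        dur = ta if random.random() < RELIABILITY else tb
        finish[name] = max((finish[p] for p in preds), default=0) + dur
    return max(finish.values())

runs = sorted(project_duration() for _ in range(10_000))
print("planned (critical path):", 8 + 24 + 12)        # 44 on planned durations
print("90th percentile:", runs[int(0.90 * len(runs))])
```

Lowering RELIABILITY shifts more iterations onto the error branch without changing the error size, which is exactly the branching behavior described above.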


Table 2. Project Durations from Numerical Simulations Using Beta, Triangular, and Binomial Distributions for Task Durations

| Project durations | Beta | Truncated beta | Binomial using ta and tb | Binomial using ta and (ta + tb)/2 | Triangular | Right-angle triangular |
| ΔT_Minimum | 116 | 131 | 131 | 131 | 116 | 131 |
| ΔT_Mean | 137 | 132 | 138 | 134 | 142 | 133 |
| ΔT_90 | 144 | 133 | 147 | 138 | 151 | 137 |
| ΔT_99 | 150 | 136 | 155 (156) | 142 | 158 | 142 (143) |
| ΔT_Maximum | 158 | 139 (140) | 164 (170) | 145 (149) | 168 | 148 (151) |

Note: The truncated beta, right-angle triangular, and binomial distributions were calculated with a 90% reliable planned duration (i.e., 10% probability of error). The number in parentheses in a given column is the result when the 10,000-event simulation differs from the 1,000-event simulation.

ΔT_Minimum is the minimum project duration, ΔT_Mean is the average duration, ΔT_90 is the 90th percentile, ΔT_99 is the 99th percentile, and ΔT_Maximum is the maximum project duration. With the modes of the truncated task distributions as the scheduled durations, the planned project duration is 131 days, i.e., the critical-path duration. The simulation results from the truncated and binomial distributions provide a more realistic view of the variation in project duration than do simulation results based on the preschedule distributions of task durations, because tasks usually are completed at or later than their planned durations.

Note the different modeling of the preschedule and postschedule distributions. For the preschedule simulation, a traditional sampling model was used with the presumed distribution before truncation; simulation based on the truncated distributions required the branching approach previously described, which proceeds in two parts. First, the computer uses the reliability estimate to determine whether the task will exceed the planned duration. If the answer is "yes," the simulation then finds the size of the error. Therefore, if the reliability is reduced from the 90% assumed for this calculation, the variation and the project durations will increase.

Except for the maximum project durations, increasing the number of simulation iterations from 1,000 to 10,000 does not change the results much. The maximum durations are expected to increase slightly because of the increased chance of realizing low-probability, long-duration combinations. The binomial distribution with the maximum error (tb − ta) did not match the continuous distributions as well as did the binomial with half the maximum error. Using the maximum error in the binomial is the most conservative choice (and somewhat unrealistic) because of the assumption that if a task is not completed in its planned duration, it instead requires the maximum amount of time. On the other hand, when half the maximum error is used, i.e., the average error, the binomial simulation agrees well with both continuous error distributions, the truncated beta and the right-angle triangular. For example, at the 99th percentile, there is no difference between the right-angle triangular and this binomial; the others differ by a few percent. Thus, numerical simulations using a binomial distribution can reproduce the project durations obtained from simulations of continuous distributions truncated at the planned durations.

Unified Scheduling Method

A project can be managed with a better schedule and with more confidence if, for the planned duration of each task, the best possible estimate and a separate error, i.e., the potential extension to the duration, are used. Thus, along with predictions of task durations, estimators should specify their associated level of confidence for the estimates they put forward. The following summarizes the essence of the unified scheduling method (USM) algorithm.

Algorithm

If a distribution for the duration of a task exists, truncate the distribution at the planned duration and regard the remainder of the distribution as the distribution of the error. Establish the reliability of the planned duration and hence the probability of exceeding that duration or cost. If a distribution does not exist, decide on the size and distribution of the error (decreasing, increasing, uniform, or a single value for a binomial distribution assumption). Use the branched Monte Carlo simulation or the analytic binomial approximation, explained in the following, to establish the time or cost contingencies. The minimum planned duration of the project is the length of the critical path, and the size of the buffer is determined through simulation or approximation.

The method covers a range of situations, from low confidence in the estimates, indicated by a high probability of error, to high confidence in the estimates, expressed by assigning a low probability to the error. If ρi denotes the reliability of an estimated task duration, then the probability of the error being realized is 1 − ρi. High ρi values (>80%) are indicative of deterministic situations (similar to CPM), whereas low ρi values (50–80%) indicate more probabilistic situations; it is expected that few projects, if any, will be scheduled with reliabilities less than 50%.

Binomial Approximation as Substitute for Branched Numerical Simulation

The probability (reliability) of the planned duration, Δti, of any given task i is ρi. The probability of the duration being extended by some (single, maximum) time δti is 1 − ρi; as in the preceding (where tb was used), bi is this maximum estimated duration, Δti + δti. The durations of the tasks are assumed independent. Although this assumption is not always the most realistic, dependencies can be accounted for by lowering the reliability or by increasing the maximum error of any dependent task's duration estimate. Because the problem is considered a binomial experiment, each task's need for extra time (or, when the method is extended to cost estimates, for extra money) will or will not materialize, and the probability of occurrence (the error being realized) is 1 − ρi. The total delay, which equals the estimation error of the project duration, is the sum of the critical-path errors that occur. But from all the tasks' errors, how can the size of the potential delay be estimated? To determine the number of tasks, a, expected to be delayed in a total of n activities, the average reliability is defined as ρ = Σ ρi / n, where i = 1, 2, …, n (the number of activities on the critical path).


Then, with a prespecified confidence level, 1 − ρ is used as the probability of success in a binomial experiment to find a. Cioffi and Khamooshi (2008) showed that for n greater than approximately 20, using this average reliability in a binomial distribution numerically matches simulated results of separate individual probabilities, ρi . In another paper, Khamooshi and Cioffi (2009) developed an approximation equation to find a99 (a at the 99% confidence level) for a given n when n is greater than approximately 20:


a99 = 1.2 n (1 − ρ) + 3.5

Now sort the errors associated with the critical-path activities in descending order, and add the first a ranked errors. This sum, which equals the project duration contingency, is the maximum delay in the project duration at the confidence level used for determining a. As shown previously, for the less conservative delay estimate, the maximum error is simply divided by 2, thereby using the average error instead of the maximum. For a set of 1 − ρ values and several confidence levels, a values are shown in Tables 3 and 4. The Appendix elaborates on the mathematical underpinning of the USM approach.

For the following reasons, the preceding procedure, which can be used for nearly any reasonable level of uncertainty, is a pragmatic approach for developing a real schedule and dealing with its uncertainty:
1. The estimators will focus on the reliability of the estimates and on their accuracy by providing ρi and δti.
2. The estimated durations are almost always more than 50% reliable, so the schedule will be more stable when these estimates are used as planned durations.
3. The calculation of the project duration contingency is straightforward (see the sketch after this list).
4. The focus on reliability introduces or strengthens a culture of responsibility, leading to a more mature estimation process and database.
5. A project's data can be updated and risk managed dynamically as the project proceeds to completion.
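A minimal sketch of this contingency calculation follows (Python; the exact binomial tail is used here instead of the a99 approximation, the function names are mine, and the nine errors with 90% reliabilities come from the critical path of the Table 1 example, worked out in the Contingency Setting section below):

```python
from math import comb

def cutoff_tasks(n: int, avg_reliability: float, confidence: float) -> int:
    """Smallest a with P(X <= a) >= confidence, X ~ Binomial(n, p = 1 - avg_reliability)."""
    p = 1.0 - avg_reliability
    cdf = 0.0
    for a in range(n + 1):
        cdf += comb(n, a) * p**a * (1.0 - p)**(n - a)
        if cdf >= confidence:
            return a
    return n

def usm_buffer(errors: list, reliabilities: list, confidence: float) -> float:
    """Contingency = sum of the a largest errors among the tasks considered."""
    a = cutoff_tasks(len(errors), sum(reliabilities) / len(reliabilities), confidence)
    return sum(sorted(errors, reverse=True)[:a])

# Critical-path errors δt_i = t_b − t_a from the Table 1 example, each 90% reliable.
errors = [4, 4, 7, 11, 8, 4, 4, 5, 1]
print(usm_buffer(errors, [0.90] * 9, 0.90))   # 19 -> ΔT_90 = 131 + 19 = 150
print(usm_buffer(errors, [0.90] * 9, 0.99))   # 26 -> ΔT_99 = 131 + 26 = 157
```

Because the exact binomial tail is used, a few entries can differ by one task from tabulated values that were built with rounded probabilities.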

Table 3. Number of Tasks, a, That Can Take Longer Than Their Scheduled Durations, at 95% Reliability and 90, 95, and 99% Confidence Levels

| Number of tasks considered | a at 90% confidence | a at 95% confidence | a at 99% confidence |
| 10 | 1 | 2 | 2 |
| 20 | 2 | 3 | 4 |
| 30 | 3 | 4 | 5 |
| 40 | 4 | 4 | 5 |
| 50 | 4 | 5 | 6 |
| 100 | 8 | 9 | 10 |
| 200 | 14 | 15 | 17 |
| 500 | 31 | 33 | 36 |

Table 4. Number of Tasks, a, That Can Take Longer Than Their Scheduled Durations, at 80% Reliability and 90, 95, and 99% Confidence Levels

| Number of tasks considered | a at 90% confidence | a at 95% confidence | a at 99% confidence |
| 10 | 4 | 4 | 5 |
| 20 | 6 | 7 | 8 |
| 30 | 9 | 10 | 11 |
| 40 | 11 | 12 | 14 |
| 50 | 14 | 15 | 16 |
| 100 | 25 | 27 | 29 |
| 200 | 47 | 49 | 53 |
| 500 | 111 | 114 | 120 |

The planned values provide a firm schedule to be implemented, and the contingency calculation procedure gives a cushion for delays caused by the uncertainty in the estimates. This process, which builds flexibility into the schedule, can be used to deal with cost estimation uncertainty as well. For the assessment and calculation of an insurance buffer against cost estimation uncertainty, all N tasks in the project are considered, not just the n tasks on the critical path. The rest of the calculation for the cost contingency is the same as for the duration cushion. The budget contingency is

Σ δci, i = 1, …, b

where b = number of tasks that could go over the base budget; and δci = possible cost estimation error (underestimation) for the b top-ranked tasks in descending order of the errors. Again, the case of cost overestimation has not been included in the method because of the low probability of such events (e.g., Hughes 1986).

Contingency Setting

The same example project described previously was used for the following calculations. The gap between the minimum duration, ΔT_Minimum, and a longer duration at any particular confidence level, say 90% (ΔT_90, as shown in Table 2), can be used to estimate a project's duration contingency:

ΔT_Buffer(90%) = ΔT_90 − ΔT_Minimum

Although a simulation could always be used to measure this time or cost contingency, these buffers can be estimated straightforwardly without simulation by using USM. To show the validity of this approach to buffer estimation, the results from the preceding binomial simulations are compared with USM results for both cost and schedule. To illustrate, the cost was simply assumed to be numerically equal to the duration of each task. The size of the buffer (as a function of the expected reliability) is calculated as

ΔT_Buffer(90%) = Σ δti, i = 1, …, a

where δti = error for task i, as defined previously; and a = number of tasks exceeding their planned durations when a reliability of 90% is assumed for the n activities on the critical path. In this example, n = 9. Because n is small, the binomial distribution is used directly to find a = 2. Then, using the errors sorted in descending order for all tasks on the critical path, the ΔT_90 buffer is calculated by adding the top two error terms: ΔT_Buffer(90%) = 11 + 8 = 19. Hence, ΔT_90 = 131 + 19 = 150 at 90% confidence. If instead 99% confidence is desired, a = 3; ΔT_Buffer(99%) = 11 + 8 + 7 = 26; and ΔT_99 = 131 + 26 = 157.

For the cost buffer calculation, n was set to N, the total number of activities in the project, to find b, the equivalent of a used for the durations. N = 20 yields b = 4 and b = 5 at 90 and 99% confidence, respectively. So, these two possible project budgets are given by

C_90 = C_Minimum + ΔC_Buffer(90%)



C_90 = 278 + (11 + 8 + 7 + 7) = 278 + 33 = 311

C_99 = C_Minimum + ΔC_Buffer(99%)

C_99 = 278 + (11 + 8 + 7 + 7 + 7) = 278 + 40 = 318

Table 5 shows how the analytic USM results match the previous binomial simulation results. The analytic forecast durations at 90 and 99% are slightly larger than the numerical simulations, by less than 5%, because of the ranking scheme used in USM: the contingency is calculated by adding errors listed in descending order of magnitude, so the largest numbers are added, thus erring on the side of caution.

To further reinforce the validity of the approach, nine groups of students in the Master of Science in Project Management (MSPM) program at George Washington University were given an assignment to develop a plan for a real-life project of their choice by using USM. The students developed the plans and provided the planning and scheduling data for their projects. The projects were scheduled, and durations were determined by using numerical simulation and USM, assuming a binomial distribution for each activity as needed for USM. Assuming the simulation yields the right answer, the percentage error is calculated as the difference between simulation results and USM values as a percentage of the planned project duration. The comparative analysis of the results summarized in Table 6 shows the level of needed temporal contingency at three confidence or probability levels and the absolute percentage error (possible error as a percentage of the planned duration of the project) for each case and for all the samples as a whole. The results strongly support USM's validity, considering that USM is an approximation technique and not an exact method.

Table 5. Binomial Numerical Simulations Compared with Calculation Results from USM

| Project duration or cost | USM analytical results using ta and tb | Binomial simulation using ta and tb | USM analytical results using ta and (ta + tb)/2 | Binomial simulation using ta and (ta + tb)/2 |
| ΔT_Minimum | 131 | 131 | 131 | 131 |
| ΔT_Mean | Not applicable | 138 | Not applicable | 134 |
| ΔT_90 | 150 | 147 | 140 | 138 |
| ΔT_99 | 157 | 156 | 144 | 142 |
| C_Minimum | 278 | 278 | 278 | 278 |
| C_Mean | Not applicable | 288 | Not applicable | 283 |
| C_90 | 311 | 299 | 294 | 288 |
| C_99 | 318 | 308 | 298 | 292 |

Note: All results were calculated with a 90% reliable planned duration (i.e., 10% probability of error). The analytical calculations reproduce these simulations to within approximately 4% at worst.

Table 6. Comparative Analysis of Results Using Simulation and USM

| Project/method | Baseline duration | Average duration | Maximum duration | Buffer at 80% | Buffer at 90% | Buffer at 99% | Error % for 80% | Error % for 90% | Error % for 99% | Average error % |
| AAR/SIM | 321 | 346 | 398 | 37 | 43 | 58 | — | — | — | — |
| AAR/USM | 321 | — | 398 | 45 | 50 | 58 | 2.492 | 2.181 | 0.000 | 1.558 |
| Analyzer/SIM | 262 | 279 | 307 | 28 | 35 | 45 | — | — | — | — |
| Analyzer/USM | 262 | — | 307 | 33 | 36 | 42 | 1.908 | 0.382 | 1.145 | 1.145 |
| BEST/SIM | 391 | 414 | 436 | 28 | 33 | 45 | — | — | — | — |
| BEST/USM | 391 | — | 436 | 42 | 46 | 55 | 3.581 | 3.325 | 2.558 | 3.154 |
| Denomy/SIM | 695 | 716 | 807 | 21 | 27 | 41 | — | — | — | — |
| Denomy/USM | 695 | — | 807 | 28 | 35 | 42 | 1.007 | 1.151 | 0.144 | 0.767 |
| JTD/SIM | 852 | 883 | 977 | 48 | 60 | 98 | — | — | — | — |
| JTD/USM | 852 | — | 977 | 50 | 54 | 76 | 0.235 | 0.704 | 2.582 | 1.174 |
| NOAA/SIM | 1097 | 1189 | 1308 | 123 | 140 | 182 | — | — | — | — |
| NOAA/USM | 1097 | — | 1308 | 123 | 146 | 170 | 0.000 | 0.547 | 1.094 | 0.547 |
| Omole/SIM | 779 | 807 | 886 | 42 | 53 | 80 | — | — | — | — |
| Omole/USM | 779 | — | 886 | 40 | 56 | 76 | 0.257 | 0.385 | 0.513 | 0.385 |
| JIAO/SIM | 417 | 434 | 495 | 28 | 37 | 57 | — | — | — | — |
| JIAO/USM | 417 | — | 495 | 50 | 56 | 60 | 5.276 | 4.556 | 0.719 | 3.517 |
| Average error % | | | | | | | 1.844 | 1.654 | 1.531 | 1.676 |

Note: Shown are the baseline (or planned), average, and maximum project durations and the buffers (contingencies) needed to match 80, 90, and 99% reliability, with the corresponding percentage errors when USM is used instead of simulation; SIM = simulation; USM = unified scheduling method.
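The worked numbers above can be checked end to end with a short script (Python; the network and durations are those of Table 1, and the cutoffs a = 2, 3 and b = 4, 5 are taken from the text rather than recomputed):

```python
# Table 1 data: id -> (predecessors, t_a, t_b).
tasks = {
    1: ([], 8, 12),     2: ([1], 8, 12),       3: ([2], 16, 23),   4: ([2], 8, 12),
    5: ([4], 7, 10),    6: ([3, 4], 24, 35),   7: ([5, 6], 16, 23), 8: ([7], 16, 23),
    9: ([7], 4, 7),    10: ([8, 9], 16, 23),  11: ([10], 8, 12),  12: ([10], 12, 17),
   13: ([11], 4, 10),  14: ([12, 13], 12, 17), 15: ([6], 27, 35), 16: ([15], 44, 49),
   17: ([15], 12, 16), 18: ([17], 14, 18),    19: ([18], 20, 25), 20: ([14, 16, 19], 2, 3),
}

# Forward pass on the planned (minimum) durations.
finish = {}
for i in sorted(tasks):                      # ids are topologically ordered
    preds, ta, _ = tasks[i]
    finish[i] = max((finish[p] for p in preds), default=0) + ta
print(finish[20])                            # 131 = planned project duration

# Trace one critical path by walking back through the latest-finishing predecessor.
path, i = [20], 20
while tasks[i][0]:
    i = max(tasks[i][0], key=finish.get)
    path.append(i)
cp_errors = sorted((tasks[j][2] - tasks[j][1] for j in path), reverse=True)
print(sorted(path))                          # [1, 2, 3, 6, 15, 17, 18, 19, 20], n = 9
print(131 + sum(cp_errors[:2]), 131 + sum(cp_errors[:3]))    # 150, 157

# Cost buffer over all N = 20 tasks (cost assumed numerically equal to duration).
all_errors = sorted((tb - ta for (_, ta, tb) in tasks.values()), reverse=True)
print(278 + sum(all_errors[:4]), 278 + sum(all_errors[:5]))  # 311 (C_90), 318 (C_99)
```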

USM Implementation Process

Using the algorithm presented previously, this section details how USM should be implemented for scheduling. To use USM for budgeting costs, use all the cost data, not just those from critical-path activities.
1. Establish a single planned duration for scheduling each task.
2. Determine the level of confidence (reliability) in each planned duration.
3. Develop the schedule with these planned durations to establish the project's baseline schedule and total duration.
4. If a distribution of the task duration is available, truncate it to the left of the planned duration and presume that the remainder of the distribution is that of the error associated with each task's planned duration.



If the duration distribution is not available, estimate the size of the maximum error as a single-point estimate, or assume a distribution for it. The distribution's shape can be uniform, decreasing, or even increasing, as shown in Fig. 2. The following approximations are suggested to establish the size of the error:
• For a single-point estimate and a uniform distribution, assume the average error of one-half the maximum error.
• For a decreasing distribution of the error, use approximately one-third of the maximum error. One-third is suggested because it is approximately the point of 50% accumulated probability in a distribution shaped as a right triangle; i.e., if the line's equation is mx + c, where m = slope and c = frequency at x = 0, then the integral of the line equals 1/2 at x = 1 − 1/√2 ≈ 0.29.
• For an increasing distribution, use about two-thirds of the maximum error, i.e., the error at x = 1/√2 ≈ 0.71.
5. If the task error distributions are available or assumed, one can use the planned durations and their given reliabilities with the assumed distributions of the errors in a Monte Carlo simulation with branching to generate a distribution of project completion times (with the minimum completion time equal to the critical path's duration). Use the distribution of the project duration from the simulation to estimate the buffer needed to secure a target completion time with a chosen level of confidence.
6. If the task error distributions are not available, or if one does not want to simulate the project network numerically, find the average reliability, and use the binomial approximation previously detailed to calculate the buffer.
7. As the project continues, the number of remaining tasks decreases, and so the buffer calculated through USM will change. Thus, rerun USM to account for this change as well as other changes, such as those in the critical path, in the reliability of future tasks, or in the size of task errors.
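The one-third and two-thirds rules of thumb in step 4 follow from the medians of the two triangular densities; a brief derivation, with the error expressed as a fraction x in [0, 1] of the maximum error, is:

```latex
% Decreasing density: f(x) = 2(1 - x), so F(x) = 1 - (1 - x)^2.
% Setting F(x) = 1/2 gives the median:
\[
1 - (1 - x)^2 = \tfrac{1}{2} \;\Rightarrow\; x = 1 - \tfrac{1}{\sqrt{2}} \approx 0.29 \quad (\text{about one-third}).
\]
% Increasing density: f(x) = 2x, so F(x) = x^2:
\[
x^2 = \tfrac{1}{2} \;\Rightarrow\; x = \tfrac{1}{\sqrt{2}} \approx 0.71 \quad (\text{about two-thirds}).
\]
```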

Summary and Recommendations

The lack of a proper emphasis on the unreliability of estimates is one of the major causes of failure for many projects. Williams (1999) questioned the theoretical underpinnings of traditional project management. One of the most questionable assumptions is the deterministic nature of the cost and duration estimates used as input to planning and scheduling, when everyone agrees that 100% reliability can never be assumed. In the case of large, complex, innovative projects, the consequences of this assumed certainty are noticeably worse.

The unified scheduling method stresses schedule development, not just a probabilistic prediction of total project duration and cost (the primary focus of traditional Monte Carlo simulation modeling). Single estimates with associated reliabilities provide the information needed for the development of more-realistic, stable schedules.

The unified scheduling method provides ancillary advantages. First, the reliability of the estimates will be expressed explicitly, putting more responsibility on those involved in developing plans and schedules. This emphasis on the reliability of individual task duration and cost estimates directly addresses issues such as overestimation for self-protection (padding) and underestimation for political reasons. Thus, it should improve the estimation culture. Second, the introduction of reliability provides the opportunity for a better assessment of the uncertainties associated with the project's cost and duration (Ward and Chapman 2003). At a given probability, USM establishes upper bounds for the project's cost and duration. Third, USM provides a dynamic way to estimate schedule and budget contingencies directly, using either numerical simulation or the analytic binomial approximation, which eliminates the need for numerical simulation. These planned contingencies increase the chances of project success and provide the much-needed flexibility and responsiveness lacking in the traditional CPM approach. As the project progresses, the schedule and cost buffers can be modified easily, making USM a readily dynamic approach to scheduling. The pragmatic procedure suggested in this paper should help practitioners develop more-reliable schedules, reducing the chances of failure caused by unrealistic planning and scheduling.

Appendix. Mathematical Underpinning of USM Approach

Assume that the planning department or the estimators are asked to provide the following information on each task or work package:

Prob(ti ≤ tei) = ρi    (1)

Prob(ti > tei) = 1 − ρi    (2)

where ti = actual duration needed for the task or work package; tei = estimated duration needed for the task or work package; δti = potential error in the time estimate; and ρi = reliability of the duration estimate stated as a probability by the estimator. Similarly, for cost,

Prob(ci ≤ cei) = ρ′i    (3)

Prob(ci > cei) = 1 − ρ′i    (4)

where ci = actual cost or budget needed to do task i; cei = estimated cost or budget needed to do task i; δci = potential error in the cost estimate; and ρ′i = reliability of the cost estimate stated as a probability by the estimator.

Although the reliability of each estimate is not necessarily the same for all the elements in a project, usually there is a level of learning and experience in each organization that provides some consistency in the accuracy of the estimates. The range of these reliability estimates will be larger for more-complex, innovative, and research-and-development types of projects and smaller for more-established and repeated types of projects. As discussed in the paper, average reliabilities for the time and cost estimates were used:

ρ = (1/n) Σ_{i=1}^{n} ρi    (5)

where n = number of tasks on the network's critical path, and

ρ′ = (1/N) Σ_{i=1}^{N} ρ′i    (6)

where N = number of tasks in the project. So the average probability of each duration estimate exceeding its value is 1 − ρ, and that of each cost estimate is 1 − ρ′.

If the completion of each task or work package is considered to be one trial in a binomial experiment, the number of tasks that will finish on time and the number that may be delayed should be determined. For the duration of the project, the focus should be on the n activities on the critical path; for the cost of the project, all the work-package tasks in the network, represented by N, should be considered. If a binomial distribution for the occurrence of the delays and cost overruns is assumed, the probability of exactly a out of n tasks being delayed and of b out of N activities needing more money can be determined from the following representations of the binomial distribution, respectively:

Prob(x = a) = [n! / (a! (n − a)!)] ρ^(n−a) (1 − ρ)^a    (7)

Prob(y = b) = [N! / (b! (N − b)!)] ρ′^(N−b) (1 − ρ′)^b    (8)

where x and y = numbers of occurrences, or successes, in binomial experiments of n or N trials, respectively.

The size of the total delay is a function of the number of activities on the critical path, n; the probability of a duration overrun for each activity, 1 − ρi; the potential extension or delay (the size of the error, for which a point estimate is assumed; this assumption is relaxed later); and management's level of confidence. For a predefined confidence level, e.g., 0.99, a is found by solving the following probability equation:

Prob(x ≤ a) = Σ_{k=0}^{a} [n! / (k! (n − k)!)] ρ^(n−k) (1 − ρ)^k = 0.99    (9)

If a distribution for the value of the error is assumed (normal, N(μ, σ); negative exponential; right-angle triangular; or any other suitable distribution), then, according to the central-limit theorem, the distributions of ΔT and ΔC will be normal. That is, if

δti ≈ N(μ_δti, σ²_δti)    (10)

δci ≈ N(μ_δci, σ²_δci)    (11)

then

ΔT99 | x=a ≈ N(Σ_{i=1}^{a} μ_δti, Σ_{i=1}^{a} σ²_δti)    (12)

ΔC99 | y=b ≈ N(Σ_{i=1}^{b} μ_δci, Σ_{i=1}^{b} σ²_δci)    (13)

To set an amount for the contingency, a confidence level has to be chosen, say α. Then the buffer or contingency value λ has to be solved for such that

Prob(ΔT ≤ λ) = α    (14)

Using Eqs. (12) and (13) and the central-limit theorem,

Prob(ΔT ≤ λ) = Prob(ΔT ≤ λ | x ≤ a) Prob(x ≤ a)    (15)

Prob(ΔT ≤ λ) = φ[(λ − Σ_{i=1}^{a} μ_δti) / √(Σ_{i=1}^{a} σ²_δti)] Prob(x ≤ a)    (16)

where φ = probability found from a standard normal distribution. For the cost contingency, if a level of confidence of β is assumed, there is a contingency value γ such that

Prob(ΔC ≤ γ) = Prob(ΔC ≤ γ | y ≤ b) Prob(y ≤ b)    (17)

Prob(ΔC ≤ γ) = φ[(γ − Σ_{i=1}^{b} μ_δci) / √(Σ_{i=1}^{b} σ²_δci)] Prob(y ≤ b)    (18)

where b is found, in analogy with Eq. (9), from

Prob(y ≤ b) = Σ_{k=0}^{b} [N! / (k! (N − k)!)] ρ′^(N−k) (1 − ρ′)^k = 0.99    (19)

The sums are over i = 1 to n for durations and i = 1 to N for the cost contingency. The probability of x being less than or equal to a, or of y being less than or equal to b, could be assumed to be, for example, 99%. Then, by using the central-limit theorem and the normal distribution, λ and γ could be determined.

The previous mathematical exercise is not practical for two reasons. First, the calculation is not straightforward for practitioners; second, the data from which the needed distributions can be developed are not available. The computational problem could be resolved for advanced practitioners by using simulation, but the lack of reliable data is a serious problem that cannot be overcome easily. Instead, a binomial distribution can be used for setting the contingencies. A binomial distribution for the duration of each task is assumed, and the total delay is found by adding the first a errors, ranked in descending order:

ΔT ≤ Σ_{i=1}^{a} δti    (20)

The cost overrun at a given certainty level (e.g., 0.99) is the sum of the deviations over the b overruns from the N activities composing the project:

ΔC ≤ Σ_{i=1}^{b} δci    (21)
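A small numerical sketch of Eqs. (9) and (16) follows (Python, standard library only; the per-task error means and variances are illustrative assumptions, not data from the paper, and my reading of Eq. (16) solves for λ given the conditional normal factor):

```python
from math import comb, sqrt
from statistics import NormalDist

n, rho, alpha = 9, 0.90, 0.99      # critical-path tasks, average reliability, confidence
mu_dt  = [5.0] * n                 # assumed mean error per task (days)
var_dt = [4.0] * n                 # assumed error variance per task

def binom_cdf(a: int) -> float:
    """Prob(x <= a) for x ~ Binomial(n, 1 - rho), as in Eq. (9)."""
    p = 1.0 - rho
    return sum(comb(n, k) * p**k * (1.0 - p)**(n - k) for k in range(a + 1))

a = next(k for k in range(n + 1) if binom_cdf(k) >= alpha)     # Eq. (9): a = 3 here

# Eq. (16): solve phi((lam - sum mu) / sqrt(sum sigma^2)) * Prob(x <= a) = alpha for lam.
z = NormalDist().inv_cdf(alpha / binom_cdf(a))
lam = sum(mu_dt[:a]) + sqrt(sum(var_dt[:a])) * z
print(a, round(lam, 1))            # duration buffer lam at confidence alpha
```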

References

Back, W. E., Boles, W. W., and Fry, G. T. (2000). "Defining triangular probability distributions from historical cost data." J. Constr. Eng. Manage., 126(1), 29–37.
Baker, W. R. (1986). "Handling uncertainty." Int. J. Proj. Manage., 4, 205–210.
Ballard, G. (1999). "Improving work flow reliability." Proc., 7th Annual Conf. of the Int. Group for Lean Construction, Univ. of California, Berkeley, CA, 275–286.
Ballard, G. (2000). "Phase scheduling." LCI White Paper No. 7, Lean Construction Institute, San Diego.
Britney, B. R. (1976). "Bayesian point estimation and the PERT scheduling of stochastic activities." Manage. Sci., 22(9), 938–948.
Buehler, R., Griffin, D., and Ross, M. (1994). "Exploring the 'planning fallacy': Why people underestimate their task completion times." J. Pers. Soc. Psychol., 67(3), 366–381.
Burt, C. D. B., and Kemp, S. (1994). "Construction of activity duration and time management potential." Appl. Cognit. Psychol., 8(2), 155–168.
Chan, D. W. M., and Kumaraswamy, M. M. (1997). "A comparative study of causes of time overruns in Hong Kong construction projects." Int. J. Proj. Manage., 15(1), 55–64.
Cioffi, D. F. (2006). "Subject expertise, management effectiveness, and the newness of a project: The creation of the Oxford English Dictionary." Proc., Project Management Institute Research Conf., Project Management Institute, Newtown Square, PA.
Cioffi, D. F., and Khamooshi, H. (2008). "A practical method to determine project risk contingency budget." J. Oper. Res. Soc., 60(4), 565–571.


Clark, C. E. (1961). "The greatest of a finite set of random variables." Oper. Res., 9(2), 145–162.
De Meyer, A., Loch, C. H., and Pich, M. T. (2002). "Managing project uncertainty: From variation to chaos." MIT Sloan Management Review, Winter, 60–67.
Eden, C., Williams, T., and Ackermann, F. (2005). "Analysing project cost overruns: Comparing the 'measured mile' analysis and system dynamics modeling." Int. J. Proj. Manage., 23(2), 135–139.
Emhejellen, M., Emhejellen, K., and Osmundsen, P. (2003). "Cost estimation overruns in the North Sea." Proj. Manage. J., 34, 23–29.
Fente, J., Schexnayder, C., and Knutson, K. (2000). "Defining a probability distribution function for construction simulation." J. Constr. Eng. Manage., 126(3), 234–241.
Flyvbjerg, B. (2008). "Curbing optimism bias and strategic misrepresentation in planning: Reference class forecasting in practice." Eur. Plann. Stud., 16(1), 3–21.
Flyvbjerg, B., Garbuio, M., and Lovallo, D. (2009). "Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster." Calif. Manage. Rev., 51(2), 170–193.
Flyvbjerg, B., Holm, M. S., and Buhl, S. (2002). "Understanding costs in public works projects: Error or lie?" J. Am. Plann. Assoc., 68(3), 279–295.
González, V., Alarcón, L. F., Maturana, S., Mundaca, F., and Bustamante, J. (2010). "Improving planning reliability and project performance using the reliable commitment model." J. Constr. Eng. Manage., 136(10), 1129–1139.
Gutierrez, G. J., and Kouvelis, P. (1991). "Parkinson's law and its implications for project management." Manage. Sci., 37(8), 990–1001.
Herroelen, W., De Reyck, B., and Demeulemeester, E. (1998). "Resource-constrained project scheduling: A survey of recent developments." Comput. Oper. Res., 25(4), 279–302.
Herroelen, W., and Leus, R. (2004). "The construction of stable project baseline schedules." Eur. J. Oper. Res., 156(3), 550–565.
Herroelen, W., and Leus, R. (2005). "Project scheduling under uncertainty—Survey and research potential." Eur. J. Oper. Res., 165(2), 289–306.
Howick, S. (2003). "Using system dynamics to analyze disruption and delay in complex projects for litigation: Can the modeling purposes be met?" J. Oper. Res. Soc., 54(3), 222–229.
Hughes, M. W. (1986). "Why projects fail: The effects of ignoring the obvious." Ind. Eng., 18(4), 14–18.
Khamooshi, H. (1999). "Dynamic priority—dynamic programming scheduling method (DP)²SM: A dynamic approach to resource-constrained project scheduling." Int. J. Proj. Manage., 17(6), 383–391.
Khamooshi, H., and Cioffi, D. F. (2009). "A holistic approach to program risk contingency planning." IEEE Trans. Eng. Manage., 56(1), 171–179.
Kirytopoulos, K., Leopoulos, V., and Diamantas, V. (2008). "PERT vs. Monte Carlo simulation along with the suitable distribution effect." Int. J. Proj. Organ. Manage., 1(1), 24–46.
Klingel, A. R., Jr. (1966). "Bias in PERT project completion times calculations for a real network." Manage. Sci., 13(4), B-194–B-201.
Lee, H.-S., Shin, J.-W., Park, M., and Ryu, H.-G. (2009). "Probabilistic duration estimation model for high-rise structural work." J. Constr. Eng. Manage., 135(12), 1289–1298.
Lichtenberg, S. (1983). "Alternatives to conventional project management." Int. J. Proj. Manage., 1(2), 101–102.
MacCrimmon, K. R., and Ryavec, C. A. (1964). "An analytical study of the PERT assumptions." Oper. Res., 12(1), 16–27.
Parkinson, C. N. (1957). Parkinson's law and other studies in administration, Random House, New York.
Pertmaster [Computer software]. 〈www.pertmaster.com〉.
Peterson, C., and Miller, A. (1964). "Mode, median, and mean as optimal strategies." J. Exp. Psychol., 68(4), 363–367.
Pich, M. T., Loch, C. H., and De Meyer, A. (2002). "On uncertainty, ambiguity and complexity in project management." Manage. Sci., 48(8), 1008–1023.
Pinto, J. K., and Mantel, S. J. (1990). "The causes of project failure." IEEE Trans. Eng. Manage., 37(4), 269–276.
Roy, M. M., Christenfeld, N. J. S., and McKenzie, C. R. M. (2005). "Underestimating the duration of future events: Memory incorrectly used or memory bias?" Psychol. Bull., 131(5), 738–756.
Schonberger, R. (1981). "Why projects are always late: Rationale based on manual simulation of a PERT/CPM network." Interfaces, 11(5), 66–70.
Standish Group. (1998). "CHAOS '98: A summary review." A Standish Group Research Note, Standish Group International, Boston, MA, 1–4.
Standish Group. (2009). "CHAOS: A summary review." A Standish Group Research Note, Standish Group International, Boston, MA.
Touran, A. (2010). "Probabilistic approach for budgeting in portfolio of projects." J. Constr. Eng. Manage., 136(3), 361–366.
Ward, S., and Chapman, C. (2003). "Transforming project risk management into project uncertainty management." Int. J. Proj. Manage., 21(2), 97–105.
Wayne, D., and Cottrell, P. E. (1998). "Simplified program evaluation and review technique (PERT)." J. Constr. Eng. Manage., 125(1), 16–22.
Williams, T. M. (1995). "What are PERT estimates?" J. Oper. Res. Soc., 46(12), 1498–1504.
Williams, T. M. (1999). "The need for new paradigms for complex projects." Int. J. Proj. Manage., 17(5), 269–273.
Williams, T. M. (2005). "Assessing and moving on from the dominant project management discourse in the light of project overruns." IEEE Trans. Eng. Manage., 52(4), 497–508.
Wook, S. J., and Rojas, E. M. (2011). "Impact of optimism bias regarding organizational dynamics on project planning and control." J. Constr. Eng. Manage., 137(2), 147–158.
