Software Renewal Process Comprehension using Dynamic Effort Estimation

Danilo Caivano, Filippo Lanubile, Giuseppe Visaggio
Dipartimento di Informatica – University of Bari
Via Orabona, 4, 70126 Bari – Italy
{caivano, lanubile, visaggio}@di.uniba.it

Abstract

This paper presents a method for dynamic effort estimation, together with its supporting tool and its experimental validation on a renewal project of a very aged software system. Method characteristics such as dynamic tuning and fine granularity allow the tool to quickly react to process variations. The experimental validation shows how the combination of meaningful predictors and fine-grained calibration is effective for understanding the enacted process and its implicit changes, and for controlling the efficacy of explicit process changes. The study also confirms that the estimation model is process-dependent and thus cannot be reused for other, albeit similar, processes.

1. Introduction

Process diversity implies estimation model diversity [Sof00], [Sof00a]. Boehm in [Boe00a] states that "traditional software estimation models did not have to deal with graphical user interface builders, object, process maturity, and web based systems". Software estimation modeling has always faced the diversity problem by tailoring models based on historical data from different projects and organizations. In a historical experimental study, Kemerer [Kem87] compared the most popular estimation models (COCOMO [Boe81], ESTIMACS [Rub83], SLIM [Put78], [Put79], and Albrecht's function point model [Alb79], [Alb83]) and highlighted the importance of calibrating estimation models when the context of model application (e.g., application domain, organization size, available resources) is different from the original context in which the model was built.

In spite of the amount of research spent on software cost estimation (for a recent survey refer to [Boe00]), current approaches such as model-based, expertise-based, learning-oriented, dynamics-based, regression-based, and composite still suffer from the following problems:
- They require the introduction of extra metrics, thus increasing both project costs and the developers' inertia due to the adoption of unknown metrics.
- Estimation functions are often overly complex or obscure, thus making it difficult to establish a cause-effect relationship between measurements and perceived process performance variations. For example, Johnson observes that complicated methods do not necessarily produce more accurate estimates [Joh00].
- Estimation models are inadequate for non-classical domains ranging from legacy system rejuvenation to web applications. For example, Reifer asserts that web projects "defeat my processes, defy my models, and make my size metrics obsolete" [Rei00].
- Process performance often changes during project execution itself, thus making cost predictions based on project-level observations useless. For example, an increase in process maturity makes the project characterization, made up of the parameters and cost drivers determined at project startup, obsolete and results in a reduction in project effort [Cla00]. As long as a project lasts, its estimation model becomes progressively inaccurate due to parameter and driver variations.
- Projects are made up of heterogeneous components, thus making it necessary to use different parameters and cost drivers for each set of homogeneous project components.

The last two problems push towards project componentization and dynamic model recalibration. For example, COCOMO II decomposes a project into three major parts: Applications Composition, Early Design and Post-Architecture [Boe95]. For each of the three project components, the estimation model is built upon different parameters and drivers. Visaggio further partitions projects on a finer granularity basis [Vis00], so that the approach is adequate for highly dynamic and heterogeneous projects such as the reverse engineering and restoration of legacy software systems.

This paper presents a method for dynamic effort estimation, together with its supporting tool, and its experimental validation on an industrial project. The tool enables the change of the estimation model at run time, according to the process improvements carried out. The level of granularity used by the tool allows project managers to quickly realize process variations and the process maturity trend.
The empirical study on a legacy system renewal project shows how the granularity level and continuous model calibration are effective in improving estimation accuracy. When meaningful predictors are involved in model building, it is easier to understand the relationship between the estimation function and process characteristics.


The rest of the paper is organized as follows: section 2 describes the architecture of the tool and its application according to the method; section 3 presents the empirical study; and section 4 draws some conclusions and lessons learnt from experimentation.

2. The Dynamic Effort Estimator

The Dynamic Effort Estimator (DEE) is a tool for the dynamic tuning of effort estimation models, as proposed in [Vis00]. DEE has been implemented using MS Access 2000 and StatSoft's STATISTICA as main components, with MS Visual Basic as glue language. In DEE an entire project can be viewed as the execution of the same process n times, once for each selected system component. Thus, a generic project component refers to the i-th iteration of the process executed on the i-th system component. For each project component there is a corresponding data point, including measures related to the input artifacts used by the process, the output artifacts produced by the process, the estimated and actual effort, and the estimation error made. The architecture of DEE is shown in figure 1, with an emphasis on the major components and the data flows among them. There are two external inputs: past experience and the current project. Past experience data derive from previously executed projects that are analogous to the current project. These data are mainly used to establish a first baseline for a coarse estimation of the effort required to execute the project components. Current project data come from the previously executed components of the current project. These data make it possible to tune the estimation model during the project.
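To make the notion of a data point concrete, the following minimal sketch shows one possible record layout for a project component. The class and field names are illustrative assumptions, not the actual schema of the DEE repositories.

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical sketch of a project-component data point as stored in the
# Project Repository; field names are illustrative, not the tool's schema.
@dataclass
class ComponentDataPoint:
    component_id: str                                               # i-th system component processed
    input_metrics: Dict[str, float] = field(default_factory=dict)   # measures on input artifacts (e.g. NLOC)
    output_metrics: Dict[str, float] = field(default_factory=dict)  # measures on output artifacts (e.g. NMOD)
    estimated_effort: float = 0.0                                   # Ee, in person-hours
    actual_effort: float = 0.0                                      # effort actually spent, in person-hours
    error: float = 0.0                                              # relative estimation error for this component
```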

Figure 1. The architectural components of DEE

After the execution of the current project component, its actual data (i.e., its data point) become available and can be collected in a Project Repository. Such a repository is conceptually partitioned into a Product Repository and a Process Repository. A data point in the Product Repository includes the identifier of the executed project component and the measurements on the attributes of its input/output artifacts. A data point in the Process Repository includes the project component name, the number and identifiers of the people involved, and the measurements on the process attributes (for example: actual effort spent, expected effort, starting and completion dates). All the measurements on product and process attributes refer to metrics whose collection has been planned by management based on organization maturity, available resources for metric collection, and the developers' ability in interpreting data.

The Effort Estimator module uses the metrics concerning input artifacts as independent variables (M_E) in order to produce the estimated effort (Ee). The Error Estimator module uses the remaining metrics as independent variables (M_Err) to determine the estimated error (Erre). The Estimation Value Refinement module uses both Ee and Erre to build the adjusted estimate (E_Adj), which is stored in the Project Repository. E_Adj is obtained through the following equation: E_Adj = Ee / (1 - Erre). The Accuracy Meter module compares E_Adj to the effort actually spent and notifies the project manager when the accuracy falls below a predefined threshold. This information is then used by the Decision Maker module to decide whether a new, more accurate econometric model must be defined.

In order to define the new econometric model, the Learning Set Capture module retrieves the data points used to populate the learning set. The learning set size is a tradeoff between the minimum number of samples needed to achieve statistical significance and the maximum inaccuracy admitted. For example, if the new forecasting model is a two-variable linear equation, a rule of thumb suggests using a learning set with 15-20 data points for each independent variable involved. However, the current econometric model, which is no longer adequate and produces forecasts with large errors, has to be used for all the components included in the learning set. Therefore, it is useful to keep the learning set as small as possible in order to reduce estimation inaccuracy. In the previous example, a learning set with 30 elements represents a good compromise. If the statistical analysis of the learning set then indicates that three independent variables are needed in the equation, the number of elements in the learning set will no longer be adequate and it will be necessary to enlarge the learning set to 45-60 elements. In this case, in spite of the inaccuracy of the current model, learning must continue with the same model. An alternative is to use the same model adjusted with an error estimation function as a corrective factor. The error estimation function may use a smaller learning set because it is applied only while collecting the additional data points needed for building a new model. For example, when a large learning set is necessary, rather than using an inaccurate effort model over 30 data points, it is preferable to use the inaccurate model over only 15 data points, build an error estimation function on those same 15 data points, and then use both models over the other 15 data points in order to build a new, more accurate effort model. The drawback of having an error model with two independent variables is compensated by the advantage of having limited the error in the effort estimation.

The Statistical Engine module builds the new econometric model E(•) and the new error estimation function Err(•) using a (forward) stepwise multiple regression analysis. Independent variables are individually added to or deleted from the model at each step of the regression until the best regression model is obtained. Stepwise multiple regression analysis helps to avoid the collinearity problem [Kit92] and is applied twice: once in the Main Component Analyzer module and again in the Corrective Factor Analyzer module. The metrics not included in the effort model are then used as independent variables to build the error model.
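As a minimal illustration of the estimate-adjust-check cycle described above, the sketch below applies the adjustment E_Adj = Ee / (1 - Erre) and flags the need for recalibration. The function names and the 25% accuracy threshold are assumptions made for the example, not values prescribed by DEE.

```python
ACCURACY_THRESHOLD = 0.25  # assumed maximum acceptable relative error


def adjusted_estimate(ee: float, erre: float) -> float:
    """Estimation Value Refinement: E_Adj = Ee / (1 - Erre)."""
    return ee / (1.0 - erre)


def needs_recalibration(e_adj: float, actual_effort: float) -> bool:
    """Accuracy Meter logic: notify the project manager when the relative
    error of the adjusted estimate exceeds the predefined threshold."""
    relative_error = abs(actual_effort - e_adj) / actual_effort
    return relative_error > ACCURACY_THRESHOLD


# Example: the effort model predicts 10 person-hours, the error model predicts
# a 20% underestimation, and the component then takes 18 person-hours.
e_adj = adjusted_estimate(ee=10.0, erre=0.2)            # 12.5 person-hours
print(needs_recalibration(e_adj, actual_effort=18.0))   # True -> rebuild the model
```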
Figure 2 shows some tool screenshots illustrating how collected measures are entered and how the effort needed to execute a project component is estimated according to the model in use. Figure 3 shows a scenario in which, after the actual effort spent has been entered, the error exceeds a given threshold. Finally, figure 4 shows the creation of a learning set and model building through statistical analysis.

Figure 2. Tool screenshots: (a) start screen offering access to the main functions; (b) output displayed after having pushed the "Estimate" button; (c) form for entering metric values

Figure 3. Scenario of actual error computation after the actual effort has been entered

Figure 4. Scenario of model building: (a) learning set definition and metric selection; (b) results from statistical analysis; (c) detailed report; (d) proposed models

3. The Empirical Study

DEE has been tested using a combination of evaluation methods, legacy data and simulation [Zel98]. Legacy data relate to a previously completed project, whose existing data represent the situation in which the tool is not available for effort prediction. Simulation, on the other hand, is used to hypothesize how the same real environment would have reacted to the introduction of the new technology. The legacy software system used for technology validation is an aged banking application that needed to be rejuvenated because of its poor quality. The details of the overall renewal process are reported in [Vis97]. At the beginning, the features of the renewal processes were not well defined, because the actual quality of the programs required some activities to be investigated in more depth. For example, some programs needed a reverse engineering process, i.e., their documentation had to be rebuilt without changing the program structure. In other cases a restoration was needed, i.e., the programs were restructured focusing on the semantic issues of the involved data and procedures, without changing the program architecture.

After a first analysis, the overall renewal process was structured into two processes: a reverse engineering process, which was executed over the entire set of programs (638) composing the legacy system, and a restoration process, executed only on a subset of programs (349). The phases of the reverse engineering and restoration processes, the activities of each phase and the corresponding deliverables are summarized in the next two tables. The reverse engineering process (REV_EN) is summarized in table 1, while the restoration process (RES) requires the execution of the phases reported in table 2 in addition to those in table 1. The two kinds of processes were very different from each other and therefore, in the following, they are considered as two different projects.

Phase 1: Inventory
Activities:
  1. Identification of:
     1.1 Duplicated / obsolete source files
     1.2 Obsolete / useless reports
     1.3 Unused files
     1.4 Temporary files
     1.5 Permanent files
     1.6 Pathological files
Deliverables:
  1. (Source) Files of the legacy system
  2. Problems to solve through the rejuvenation process

Phase 2: Abstract data
Activities:
  2. Identification of data meaning
  3. Change variables names
  4. Identification of dead data
  5. Identification of redundant data
  7. Classification of data in:
     7.1 Conceptual
     7.2 Structural
     7.3 Control
Deliverables:
  3. Data dictionary
  4. Improved dependency diagram
  5. Cross reference between old and new variables names
  6. Programs using new variables names
  7. Data classification in new data dictionary
  8. Expected functions to derive conceptual data computed through rough data

Phase 3: Reconstruction of application requirement
Activities:
  8. Expected program functions derived from report analysis and from interviews to users, maintainers and program managers
Deliverables:
  9. Requirements for data reengineering
  10. Expected functional requirements
  11. Requirements which do not satisfy users

Phase 4: Reconstruction of logical levels of programs
Activities:
  9. Building of Structure Chart
  10. Identification of dead instructions
  11. Refinement of structure chart by Sections and Paragraphs
  12. Removing of Sections and Paragraphs invoked only by dead instructions
  13. Identification of the meaning of the modules previously identified
  14. Identification of modules using procedural IFs
Deliverables:
  12. Structure chart, without changing programs structure
  13. Documentation of modules in the structure chart, using their meaning and the contained instructions

Table 1. Descriptive framework of the reverse engineering process

Phase 1: Reconstruction of logical level of data
Activities:
  1. Build the dependency diagram using the data structures in the programs
Deliverables:
  1. Dependency diagram

Phase 2: Reconstruction of logical levels of programs
Activities:
  2. Refinement of structure chart by Sections and Paragraphs
  3. Removing of Sections and Paragraphs invoked only by dead instructions
  4. Identification of the meaning of the modules previously identified
  5. Identification of modules using procedural IFs
Deliverables:
  2. Structure chart, without changing programs structure
  3. Documentation of modules in the structure chart, using their meaning and the contained instructions

Phase 3: Improve programs logical model
Activities:
  6. Identification and removal of variables that became obsolete
  7. Identification of modules implementing the expected functions
Deliverables:
  4. Structure chart with the improved structure
  5. Documentation of modules included in the improved structure chart

Phase 4: Test and debug
Activities:
  8. Equivalence test for the programs whose structure was changed
Deliverables:
  6. Test plan
  7. Test log
  8. Software debugged

Table 2. Descriptive framework of the restoration process

A four-person team executed the entire project. During the execution, some people were replaced by others, but the team size remained constant, as did the skills and experience. Data were recorded on a daily basis and monitored weekly by the project manager: such data represent a reliable set for the analysis carried out in the current work. A single person executed the renewal process for each program, so each program can be considered a project component. For each program the following product metrics were collected:
• NLOC: number of lines of code in procedure divisions; it includes neither comments nor empty lines; if a statement is written over multiple lines it is counted as 1.

• NLOCD: number of lines of code in data divisions, evaluated with the same criteria as above.
• McCABE: McCabe's cyclomatic complexity.
• McCABE-AV: McCabe's cyclomatic complexity after the rejuvenation process, as determined by the project manager.
• COMP_GA: complexity gain, i.e., the variation of the cyclomatic complexity value after the rejuvenation process.
• HALSTEAD: Halstead's complexity.
• NMODI: number of internal modules, including both sections and paragraphs.
• NMOD: number of output modules after the rejuvenation process, as determined by the project manager.

For each program the following process metrics were collected:
• EFFORT: effort spent for the rejuvenation process, expressed in person-hours.
• ID-DEVELOPER: identifier of the person who executed the rejuvenation process.

In the actual execution of the projects (legacy data), the econometric model is expressed as Ee = NLOC / PERF, where PERF (performance goal) is 400 NLOC/hour in the reverse engineering project and 200 NLOC/hour in the restoration project. When the two projects were executed, the project manager made some improvements in order to reach the intended performance goals. The above econometric model was adopted as a baseline by DEE during the simulation.

The benefits of dynamic model recalibration can be appreciated by comparing figures 5.a and 5.b, which show the percentage error in the reverse engineering project, respectively, without and with the tool. When the project was actually executed, it had a forecasting error of about 43%, while if it had been executed with the support of DEE, the error would have been about 16%. The same comparison for the restoration project is reported in figures 6.a (without the tool) and 6.b (with the tool): the errors are 17% and 8%, respectively.
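As a small worked example of the baseline model, the sketch below computes Ee = NLOC / PERF and the resulting relative error for a single program. The performance goals are the ones reported above, while the NLOC and actual-effort figures are invented for illustration.

```python
# Baseline econometric model used as DEE's starting point: Ee = NLOC / PERF.
# The sample NLOC and actual-effort values below are made up for the example.

PERF_REVERSE_ENGINEERING = 400  # NLOC per hour (performance goal)
PERF_RESTORATION = 200          # NLOC per hour


def baseline_estimate(nloc: int, perf: int) -> float:
    """Estimated effort in person-hours under the baseline model."""
    return nloc / perf


def relative_error(estimated: float, actual: float) -> float:
    return abs(actual - estimated) / actual


# A hypothetical 2,000-NLOC program in the reverse engineering project:
ee = baseline_estimate(2000, PERF_REVERSE_ENGINEERING)  # 5.0 person-hours
print(relative_error(ee, actual=8.0))                   # 0.375 -> 37.5% error
```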

Figure 5. Boxplots of (a) the error in the reverse engineering project executed without the tool and (b) the error obtained under the hypothesis of DEE usage

Figure 6. Boxplots of (a) the error in the restoration project executed without the tool and (b) the error obtained under the hypothesis of DEE usage

We now show how this approach makes it possible to better understand a process and its changes. First of all, the performances obtained with the programs under the two kinds of processes (538 NLOC/h for reverse engineering and 203 NLOC/h for restoration, respectively) are significantly different, as shown in figure 7.

Figure 7. Boxplots of mean performance values expressed in NLOC/h of (a) the reverse engineering process and (b) the restoration process

Figure 8 shows the results of the ANOVA test performed over the two projects. The independent variables were the process (with values reverse engineering and restoration) and the developers (d1, ..., d4). The dependent variable was process performance measured as NLOC per hour. Figure 8.a shows the main effects of the process, while figure 8.b shows the interactions between the process and developer factors. The test rejects the null hypothesis of no difference: the performances obtained with the reverse engineering process are significantly higher than those obtained with the restoration process. This finding confirms the theoretical differences between the two processes, reverse engineering and restoration, as described in [Vis97]. Furthermore, the restoration process also differs from a reengineering process. In fact, the system aging symptoms were not overcome by restoration [Vis97]; a reengineering process on both legacy data and functions would have been necessary. Unfortunately, when the project was executed, the existing technology required freezing the entire system for the duration of the reengineering process, and this was considered unacceptable.
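For readers who want to replicate this kind of analysis, the sketch below runs a two-factor ANOVA (process type and developer, with interaction) using pandas and statsmodels. The data frame is a stand-in with invented values and only three developers; it is not the project data set.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Stand-in data: two performance observations (NLOC/h) per (process, developer)
# cell; the real study used the measured performance of each program.
df = pd.DataFrame({
    "process":   ["REV_EN"] * 6 + ["RES"] * 6,
    "developer": ["d1", "d1", "d2", "d2", "d3", "d3"] * 2,
    "perf":      [530, 545, 520, 560, 510, 555,    # reverse engineering
                  210, 195, 205, 220, 190, 215],   # restoration
})

# Main effects of process and developer, plus their interaction
model = ols("perf ~ C(process) * C(developer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```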

Figure 8. ANOVA test results showing (a) the main effects of process type and (b) the interaction between process type and developers

During the simulation of the reverse engineering and restoration processes, it was necessary to recalibrate the models three times for each process. Tables 3 and 4 report the econometric models for effort estimation, for the reverse engineering and restoration process respectively, together with the data points in the learning sets used for model building. These tables do not contain the models for error estimation because, the error being too small, none of the remaining variables gives a significant explanation of its variability. Figure 9 shows the effects of process improvement during the execution of the reverse engineering and restoration processes, as they were actually observed (legacy data). For both processes, programs are sorted according to the chronology of the related rejuvenation processes. It can be noted that the approach required rebuilding the estimation model after each process improvement. In the context of continuous process monitoring, diverging estimates can be interpreted as clues to process changes.


Data points for learning set definition    Effort estimation model
1 to 15                                    Ee = 0.56 + 0.004257 * NLOC
16 to 30                                   Ee = 0.658227 + 0.002102 * NLOC
151 to 165                                 Ee = 1.040427 + 0.000974 * NLOC

Table 3. Sequence of effort models for the reverse engineering process

Data points for learning set definition    Effort estimation model
290 to 304                                 Ee = -1.09709 + 0.00568 * NLOC + 0.1689 * NMOD
305 to 320                                 Ee = 0.180599 + 0.005341 * NLOC
514 to 543                                 Ee = -1.20710 + 0.00400 * NLOC + 0.04454 * NMOD

Table 4. Sequence of effort models for the restoration process

Figure 9. Performance data across legacy programs of (a) the reverse engineering process and (b) the restoration process

Process changes can be spontaneous, deriving from maturation, or intended, caused by improvement interventions decided by project management. In the former case the estimation function views a change as an unexpected event and notifies it to project management. In the latter case, the estimation function views a change as an expected event and can be used to verify its efficacy. In our simulation, since the project had already been executed, the variation in process performance can only be interpreted as a consequence of process maturation. Therefore the estimation function perceives the changes as unexpected events. In order to understand the actual improvements it is necessary to retrieve the project decisions taken during project execution. We obtained this information by consulting the weekly reports compiled by the project team, summarized as follows:

First improvement. Initially the rejuvenation process was very confused. The difference between the reverse engineering and restoration processes was not clear. Given a program, the problem was deciding whether to apply a reverse engineering or a restoration process to it. The improvement consisted in clearly distinguishing and better characterizing both processes. For example, phase 1 in table 2 belonged to the reverse engineering process even though its output was not used. As a consequence of this improvement, it became clear which process to apply to a program.

Second improvement. Since personal maturation could not further improve the renewal process, it was necessary to improve the instrumentation used for both projects. This was done either by developing ad hoc tools or by modifying commercial tools. For example, a tool to support phase 1 in reverse engineering (table 1) was developed, and a commercial tool was bought and adapted in order to support phase 1 in the restoration process (table 2).

Third improvement. Reading techniques were introduced for the static verification of modified programs. In the restoration process these techniques were applied before executing the equivalence test between the restored program and the corresponding legacy program. A research group separate from the project team improved these reading procedures during project execution.

Since the above improvements involved both processes, the number of models obtained using our tool (three for reverse engineering and three for restoration, as shown in tables 3 and 4) is consistent with the number of improvements made to the processes.

To better understand both processes, the equations used must be analyzed. First, a constant term is always present in the equations. This can be interpreted as an invariant part of the required effort. Moreover, observing tables 1 and 2, the activities most likely influenced by NLOC seem to be those in phase 4 of the reverse engineering process and in phases 2 and 3 of the restoration process, respectively. Furthermore, the NLOC coefficient decreases progressively during project execution due to the growing use of tools and automation. This aspect characterizes the reverse engineering process with respect to the restoration process.

In the restoration process the variable NMOD is involved in two of the models used. This variable is the number of output modules after restoration and was set by the project manager: after having examined the program and evaluated the frequency of maintenance requests, the project manager determines the value of NMOD. This explains the reason for considering NMOD as an independent variable. Its presence in the estimation functions suggests using it as an anchor factor for the effort required by program restoration: the greater the number of modules to obtain, the greater the effort needed.

The third model in table 4 involves two independent variables, therefore the learning set used to determine it must contain 30-35 data points. Starting from data point 514 the second model was inaccurate. Moreover, at data point 529 the following model was obtained:

Ee = 0.00406 * NLOC + 0.04234 * NMOD - 1.39673    (1)

A learning set of 15 elements was too small to support a model with two independent variables. To avoid using both the above model (1), because of its poor learning set, and the second model in table 4, because of its strong inaccuracy, we decided to use an error model as a corrective factor for the second effort model. The resulting error model equation was:

Erre = 0.003667 * NMOD - 0.000061 * COMP_GA - 0.385852    (2)

Even though two independent variables are involved, we adopted this function because its use was limited to the time needed to collect the additional 15 data points required for determining the third model (table 4). The benefits of this choice are highlighted in figure 10: the use of the second model integrated with the error model (figure 10.b) yields the smallest error on data points 529-543 with respect to the other possible choices.
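The corrective-factor computation can be summarized in a few lines: the second effort model of table 4 is combined with the error model (2) through E_Adj = Ee / (1 - Erre). The coefficients below come from the equations above, while the sample metric values (NLOC, NMOD, COMP_GA) are invented for illustration.

```python
def effort_second_model(nloc: int) -> float:
    """Second effort model of table 4."""
    return 0.180599 + 0.005341 * nloc


def error_model(nmod: int, comp_ga: float) -> float:
    """Error estimation model, equation (2)."""
    return 0.003667 * nmod - 0.000061 * comp_ga - 0.385852


def adjusted_effort(nloc: int, nmod: int, comp_ga: float) -> float:
    """Adjusted estimate used on data points 529-543: E_Adj = Ee / (1 - Erre)."""
    ee = effort_second_model(nloc)
    erre = error_model(nmod, comp_ga)
    return ee / (1.0 - erre)


# Example with invented metric values for one restored program:
print(adjusted_effort(nloc=1200, nmod=15, comp_ga=30.0))  # about 4.9 person-hours
```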

Figure 10. Boxplots of the estimation error over data points 529-543: (a) using the second estimation model of table 4; (b) using the same model integrated with the error model; (c) using function (1)

Finally, the third model in table 4 was obtained after having collected the necessary data points on a 30-element learning set. The presence of NMOD in the error equation confirms that it is a good parameter to be used as a driver for both the effort and error estimation functions.

The first and third equations in table 4 include a negative constant term. A cause for negative constant terms in the effort equations might be that module recovery and concept assignment in a restoration project can exploit information reuse. Another important factor is the identification and removal of clones during project execution. This hypothesis is confirmed by observing that the second equation has a positive constant term and does not include NMOD, which means that the equation does not consider reuse as a determinant factor for effort estimation.

4. Conclusions

In this paper we have experimentally tested a tool for estimating the effort required in software process execution. The tool is based on a method that does not call for previously determined predictors but leaves the project manager free to choose intuitively understood variables. Since the estimation models are easily interpretable, they provide the basis for understanding the relationship between the estimation function and process characteristics. Moreover, better process understanding makes it possible to define improvements to the processes.

The novelty of the proposal consists in the dynamic changes of the econometric model according to the improvements operated on the processes during their execution. Each improvement injects some differences between the process preceding the improvement and the process following it: the method works by focusing on such differences and making them explicit in a new econometric model.

The experimental validation on a legacy system rejuvenation project showed that fine estimation granularity and model recalibration are successful both for improving estimation accuracy and for understanding the process and its variation, whether determined by process maturation or by explicit improvement actions. The study also confirms that the econometric model is process-dependent and thus cannot be reused for other, albeit similar, processes. In fact, the reverse engineering and restoration processes share many activities but their estimation functions are different and include distinct variables.

The experimental validation has taught us the following lessons, which can be generalized to legacy system renewal processes:
• a renewal project can be more efficient if supported by tools, but the efficacy of the tools has to be carefully tested in the process itself before their adoption;
• to improve the quality of a legacy system without altering its structure, it is useful to define a restoration process, whose characteristics lie between reverse engineering and reengineering as defined in [Chi90];
• to reduce the effort required in a restoration process, the project manager can limit the expectations for some target variables, e.g., the number of modules to be extracted from the legacy system or the complexity gain. In fact, these two independent variables appear in the estimation function and in the error model, respectively.

References

[Alb79] A.J. Albrecht, "Measuring application development productivity", Proc. of the IBM Applications Development Symposium, 1979, pp. 83-92.
[Alb83] A.J. Albrecht, J. Gaffney, "Software functions, source lines of code and development effort prediction: a software science validation", IEEE Transactions on Software Engineering, vol. 9, no. 6, 1983, pp. 639-648.
[Boe81] B.W. Boehm, Software Engineering Economics, Prentice Hall, Englewood Cliffs, 1981.
[Boe95] B. Boehm et al., "Cost Models for Future Software Life Cycle Processes: COCOMO 2.0", Annals of Software Engineering, Special Volume on Software Process and Product Measurement, Science Publishers, Amsterdam, The Netherlands, vol. 1, 1995, pp. 45-60.
[Boe00] B. Boehm, C. Abts, S. Chulani, "Software Development Cost Estimation Approaches - A Survey", USC Center for Software Engineering, USC-CSE-2000-505, 10 April 2000.
[Boe00a] B. Boehm, R.E. Fairley, "Software Estimation Perspectives", IEEE Software, November-December 2000, pp. 22-26.
[Chi90] E.J. Chikofsky, J.H. Cross, "Reverse Engineering and Design Recovery: A Taxonomy", IEEE Software, January 1990, pp. 13-17.
[Cla00] B.K. Clark, "Quantifying the Effects of Process Improvement on Effort", IEEE Software, November-December 2000, pp. 65-70.
[Joh00] P.M. Johnson et al., "Empirically Guided Software Effort Guesstimation", IEEE Software, November-December 2000, pp. 51-56.
[Kem87] C.F. Kemerer, "An Empirical Validation of Software Cost Estimation Models", Communications of the ACM, vol. 30, no. 5, 1987, pp. 416-429.
[Kit92] B. Kitchenham, "Empirical studies of assumptions that underlie software cost estimation models", Information & Software Technology, vol. 34, no. 4, 1992, pp. 211-218.
[Put78] L.H. Putnam, "A General Empirical Solution to the Macro Software Sizing and Estimating Problem", IEEE Transactions on Software Engineering, vol. 4, no. 4, 1978, pp. 345-361.
[Put79] L.H. Putnam, A. Fitzsimmons, "Estimating Software Costs", Datamation, vol. 25, nos. 10-12, 1979.
[Rei00] D.J. Reifer, "Web Development: Estimating Quick-to-Market Software", IEEE Software, November-December 2000, pp. 57-64.
[Rub83] A.S. Rubin, "Macroestimation of software development parameters: the ESTIMACS system", Proc. of the SOFTFAIR Conference on Software Development Tools, Techniques and Alternatives, 1983, pp. 109-118.
[Sof00] IEEE Software, special issue on Process Diversity, July-August 2000.
[Sof00a] IEEE Software, special issue on Estimation, November-December 2000.
[Vis97] G. Visaggio, "Comprehending the Knowledge Stored in Aged Legacy Systems to Improve their Qualities with a Renewal Process", TR-ISERN-97-26, International Software Engineering Research Network, 1997.
[Vis00] G. Visaggio, "Flexible Estimation for Improving Software Process", white paper, Computer Science Department, University of Bari, 2000.
[Wal77] C. Walston, C. Felix, "A Method of Programming Measurement and Estimation", IBM Systems Journal, vol. 16, no. 1, 1977.
[Zel98] M.V. Zelkowitz, D. Wallace, "Experimental Models for Validating Technology", IEEE Computer, vol. 31, no. 5, May 1998.
