Teaching System for Software Cost Estimation


JAMES CROWE
SEPTEMBER 2009
SUPERVISOR: PROF. WELLAND

A DISSERTATION SUBMITTED IN PART FULFILMENT OF THE REQUIREMENTS OF THE DEGREE OF MSC IN INFORMATION TECHNOLOGY AT THE UNIVERSITY OF GLASGOW

I

ABSTRACT

The overarching goal of this project is clear: create a teaching tool for software cost estimation. The basic process of computing a cost estimate is relatively straightforward: values are plugged in; an algorithm is executed; a result is produced. However, the accuracy of the estimate is critically dependent upon the choice of the inputs into the model. Furthermore, in terms of revealing the sensitivity of the model, the number of different values and the extent to which these values can be manipulated through weighting variables is less straightforward. This project seeks to approach the problem in three ways, rooted within bridging the space between the simple and the complex.

1. The 'input stage': conveying the syntax of the formula is simple; yet, effectively conveying the semantics of the drivers is complex.
2. The 'processing stage': the calculation itself is simple; yet, conveying the sensitivity of each driver within this computation is complex.
3. The 'results stage': displaying the numeric product of the calculation is simple; yet, conveying a feel of the sensitivity of the result under different assumptions is complex.

To achieve this, the research, design, implementation, testing and evaluation of a software artefact were conducted. The resultant system addresses these fundamental problems associated with an algorithmic estimate, and creates a semantic framework to support the process of learning the core processes associated with software cost estimation.


II

CONTENTS

I          ABSTRACT
II         CONTENTS
III        INTRODUCTION
  III.I      INTRODUCTORY: ESTIMATING COSTS; COSTING ESTIMATION
  III.II     METHODOLOGIES: AN OUTLINE
    III.II.I   EXPERT ESTIMATION
    III.II.II  ALGORITHMIC MODELLING
  III.III    CONSTRUCTIVE COST MODELLING
    III.III.I  COCOMO 2.0
    III.III.II THE COCOMO 2.0 SUB-MODELS
  III.IV     DISSERTATION STRUCTURE

1          DOMAIN ANALYSIS
  1.1        THE EARLY DESIGN MODEL
    1.1.1      PERSON MONTHS
    1.1.2      FUNCTION POINTS
    1.1.3      SCALE FACTORS
    1.1.4      EFFORT DRIVERS
  1.2        EXISTING ESTIMATION SOFTWARE
    1.2.1      COSTAR
    1.2.2      USC EXPERT COCOMO II
    1.2.3      USC CSSE SUITE OF CONSTRUCTIVE MODELS
  1.3        EXISTING TEXTUAL GUIDES
    1.3.1      SOFTWARE ENGINEERING (8TH)
  1.4        CONCLUSION

2          PROBLEM ANALYSIS
  2.1        A TEACHING TOOL FOR COST ESTIMATION
  2.2        PROJECT PROBLEM
  2.3        PROJECT REQUIREMENTS
    2.3.1      STAGE 1
    2.3.2      STAGE 2
    2.3.3      STAGE 3
    2.3.4      STAGE 4
  2.4        REQUIREMENTS ANALYSIS

3          DESIGN ANALYSIS
  3.1        DEVELOPMENT METHODOLOGY
  3.2        DEVELOPMENT TECHNOLOGY
  3.3        DESIGN PRINCIPLES
  3.4        DESIGN DECISIONS

4          PROJECT EVALUATION
  4.1        DESIGN TESTING
    4.1.1      TESTING STRATEGY
    4.1.2      USABILITY TESTING
  4.2        PROJECT EVALUATION
    4.2.1      IMPLEMENTATION EVALUATION
    4.2.2      FUNCTIONALITY EVALUATION
    4.2.3      CONCLUSION

A          DESIGN METHODOLOGY
B          PACKAGE DIAGRAM
C          APPLICATIONSUBSYSTEM CLASS DIAGRAM
D          CLASS DIAGRAM
E          SHNEIDERMAN'S '8 GOLDEN RULES'
F          QUESTIONNAIRE
G          TASK LIST
H          BIBLIOGRAPHY

III

INTRODUCTION

IN GENERAL PEOPLE EXPERIENCE THEIR PRESENT NAIVELY, AS IT WERE, WITHOUT BEING ABLE TO FORM AN ESTIMATE OF ITS CONTENTS; THEY HAVE FIRST TO PUT THEMSELVES AT A DISTANCE FROM IT - THE PRESENT, THAT IS TO SAY, MUST HAVE BECOME THE PAST - BEFORE IT CAN YIELD POINTS OF VANTAGE FROM WHICH TO JUDGE THE FUTURE.

SIGMUND FREUD, DIE ZUKUNFT EINER ILLUSION (1927)

III.I

INTRODUCTORY: ESTIMATING COSTS; COSTING ESTIMATION

As an activity, software cost estimation performs fundamental functions. Primarily, budgeting - in terms of both fiscal cost and time - is the key area informed by an estimate. Estimation has implications for project scheduling and control, and illuminates risk by exposing the impact of decisions regarding staffing, requirements and quality assurance on the budget. In his 2004 studyi, Capers Jones reported that, of the software projects analysed, only 10% were 'deemed successful' in terms of the 'achievement of schedule, cost, and quality objectives'. In contrast, around 70% of these projects experienced 'major delays and overruns or were terminated without completion'. Jones identifies six 'key factors' that contribute to this poor performance; of these, inadequate cost estimation plays a significant role in the failure of many projects. In the context of fixed-price contracts - where the liability for over-spend lies at the door of the developer - accurate estimation is not an academic exercise; it is integral to the performance of a software project and the survival of the development organisation.

The focus of this development project is to create a tool to augment the process of learning a software cost estimation methodology. Fundamentally, this project requires four interdependent stages, which lend their structure to this dissertation: the research and evaluation of the domain, existing products and materials (Section 1); the analysis of the problem and formalisation of requirements (Section 2); the design and implementation of software to meet the needs of the solution (Section 3); and the testing and evaluation of this artefact as a solution to this problem specifically, and as a piece of software generally (Section 4). More detailed information is included in the final section (III.IV) of this introduction. The remainder of this section seeks to contextualise the project and provides a brief, high-level introduction to algorithmic estimation as a field, and to Constructive Cost Modelling as a methodology.

III.II

METHODOLOGIES: AN OUTLINE

Of the various methodologies that seek to provide a robust estimate, expert estimation and algorithmic modelling are prevalent approaches to software cost estimation. The fundamental divide between these methodologies is located in the process of transforming (estimated) inputs to an (estimated) outputii. Essentially, this is an evaluation of the impact of factors relevant to the software development project, and a quantification of cost based upon this decision.


Included below is a brief outline of these techniques, including a short consideration of their inherent advantages and disadvantages. A more detailed analysis of problems associated with algorithmic modelling is considered in the Domain Analysis (Section 1).

III.II.I EXPERT ESTIMATION

Expert estimation is rooted within a judgment-based quantification of inputs; essentially, an estimate that an 'experienced' professional considers to be the most likely to complete the development project.

ADVANTAGES

Magne Jørgensen contends that often 'a simple average of outputs from different estimation experts' offers a reliable estimate and that, furthermore, no 'substantial empirical evidence favours using formal estimation models'iii over judgement-based techniques.

DISADVANTAGES

A degree of caution is required in regard to this notion of an 'expert'. The transition from 'amateur' to 'expert', and from inexperienced to seasoned professional, is not necessarily a linear process; rather, it is an acutely context-dependent classification. In this regard, Jørgensen argues that 'estimation expertise [should be interpreted as] experience on very similar projects, not as general software development expertise'.iv Furthermore, decision-based judgements are inherently subjective and liable to be influenced by external factors.

III.II.II ALGORITHMIC MODELLING

Algorithmic cost estimation attempts to implement a mathematical calculation to forecast costs based on identified drivers, constructing a mechanical quantification of inputs. Therefore, correctly judging the process of inputting variables into a parametric formula, and the ability to interpret and use the results of this implementation, is important. Project size, the number of development staff, and external influences are manipulated alongside historical project data to fit the parameters of an algorithmic model. The specification of these cost drivers is examined later.

ADVANTAGES

Richard E. Fairley contends that the 'primary goals' for an algorithmic cost modelling tool, such as COCOMO, are to provide an:

Open, constructive estimation model that reflects the changes in software engineering methods, models, and techniques [...] and to provide a model that is robust enough to accommodate a variety of development methods and practices.v

At least on face value, algorithmic cost modelling seems to negate some of the problems associated with other techniques, such as expert estimation. Parametric models diminish the responsibility of the expert in terms of inputs into the model. Furthermore, these decisions are inherently more traceable; thus, should estimates prove inaccurate, choices made at the estimation stage can be reviewed more easily.


It follows, then, that an advantage of this mechanical quantification lies in its more objective nature - it is less likely to be affected by factors that exist outside the realm of the equation.

DISADVANTAGES

A study carried out by Gruschke and Jørgensen seeks to reveal the process of expert estimation that underpins a formal software cost estimation modelvi. They argue that, since essential inputs into the algorithm are derived from expert opinion, it is necessarily exposed to influence from external pressures. Their study concludes that 'the initial beliefs of the chief estimator or manager regarding costs had a strong impact on the actual cost estimated, regardless of the method of estimation used'vii. This has important implications: the belief that algorithmic estimation necessarily provides unbiased results is potentially harmful. Measures to reduce the impact of outside influences may not be considered as relevant to the formal estimation process. Similarly, an evaluation of the results of an estimate may not be suitably sensitive to the impact of - potentially flawed - subjective reasoning.

III.III CONSTRUCTIVE COST MODELLING

The COCOMO model - originally published by Barry Boehm in 1981viii - sought to derive an estimate based upon a parametric algorithm. This tool enabled the classification of different types of project on a three-point scale, reflecting these choices through different weighting factors depending on the development environment. Well-understood development projects controlled by integrated teams receive a low weighting, while large-scale, complex projects affected by various external factors receive a greater weighting. However, COCOMO '81 - and the 1987 Ada programming language revision - was designed around waterfall development methodologies and, as such, became less effective as new processes were adopted. Thus, an elaboration of this initial research was required to root an estimation tool within the new development context.

III.III.I COCOMO 2.0

Boehm's COCOMO 2.0 model (1995)ix attempts to resolve these issues and adapt to new methodologies. Boehm et al. assert that this model addresses 'the issues on non-sequential and rapid development process models, reengineering, reuse driven approaches, object oriented approaches, etc'x. COCOMO 2.0 contains four sub-models; these models serve different purposes and are built upon different inputs and assumptions. Figure (1) provides a diagrammatic outline of the models. Within the context of the proposed development, the Early Design Model [highlighted in Figure (1)] will be used as the basis of this project. This model should provide an appropriate demonstration of the fundamental processes involved in COCOMO 2.0 and reveal the problems associated with its implementation, which are discussed in more detail in the Problem Analysis (Section 2). However, a brief description of the other sub-models is necessary to contextualise the proposed tool. A more detailed analysis of the processes specific to the Early Design Model, and of the components common to the COCOMO 2.0 model more generally, is included within the Domain Analysis (Section 1).

III.III.II THE COCOMO 2.0 SUB-MODELS

The following outlines are not intended to provide an exhaustive introduction to these models; rather, they seek to place the Early Design Model within the context of the other tools, in conjunction with Figure (1).

Figure (1)

THE APPLICATION-COMPOSITION MODEL

This model provides an estimation of the required effort for prototyping projects and for projects where software is developed through an integration of pre-existing components. Since this habitually involves the composition of a significant amount of reusable code, the estimate is adjusted upon this basis.

THE EARLY DESIGN MODEL

This model provides an exploratory estimation of the effort for projects where requirements have been elicited, and preliminary system design choices are in place. The effort estimate is a relatively simplified approximation based upon identified function points. This model is covered in greater detail in the Domain Analysis (Section 1).

THE REUSE MODEL

This model provides an estimation of the required effort for integrating significant volumes of pre-existing code. This is generally taken as reused code and generated code. In terms of reusable code, this is broadly defined as either 'black box' (essentially, where the code does not require modification) or 'white box' (essentially, where the code requires some degree of re-development to integrate). In terms of generated code, standardised templates are 'embedded in a generator'xi where project-specific values are plugged in. This tool enables the modelling of this data to derive an effort estimate.

THE POST-ARCHITECTURE MODEL

This model provides an estimation of the required effort for projects where detailed information is available. Subsequently, this model is driven by a larger selection of effort factors than implemented by the Early Design Model, but is - in principle - fundamentally similar. Consequently, it seeks to provide a more detailed and robust estimate.

III.IV DISSERTATION STRUCTURE

The remainder of this document expands upon the research, design, and evaluation of this project.

- Section (1) demonstrates the 'Early Design Model' in more detail, and surveys and critically reviews the existing associated literature and software products;
- Section (2) formalises the analysis of the 'problem' into a set of requirements and explains the implementation methodology;
- Section (3) describes and justifies design decisions in terms of the user interface and software architecture;
- Section (4) analyses and evaluates the project, proposes potential improvements, and includes details of the testing methodology.

Additional documentation has been included in the appendix where necessary, and source code is accessible on the accompanying CD.

i. Capers Jones, 'Software Project Management Practices: Failure Versus Success' in CrossTalk: The Journal of Defense Software Engineering (2004)

ii. Magne Jørgensen and Barry Boehm, 'Viewpoints - Software Development Effort Estimation: Formal Models or Expert Judgment?' in IEEE Software (March/April 2009)

iii. Magne Jørgensen, 'Practical Guidelines for Expert-Judgement-Based Software Effort Estimation' in IEEE Software (2005)

iv. Ibid., 2005

v. Richard E. (Dick) Fairley, 'The Influence of COCOMO on Software Engineering Education and Training'

vi. M. Jørgensen, Tanja Gruschke, 'Industrial Use of Formal Software Cost Estimation Models: Expert Estimation in Disguise?' in Software Metrics (2005)

vii. Ibid., 2005

viii. Boehm, B. (1981), Software Engineering Economics, Prentice-Hall.

ix. Boehm, B., B. Clark, E. Horowitz, C. Westland, R. Madachy, and R. Selby (1995), 'Cost Models for Future Software Life-Cycle Processes: COCOMO 2.0' in Annals of Software Engineering 1.

x. Barry Boehm, Chris Abts, Sunita Chulani, 'Software Development Cost Estimation Approaches - A Survey' in Annals of Software Engineering 10 (2000) pp.177-205

xi. Sommerville, Software Engineering (8th edition) [2007]


1

DOMAIN ANALYSIS

THEREFORE, LET US NOT DESPAIR, BUT INSTEAD, SURVEY THE POSITION, CONSIDER CAREFULLY THE ACTION WE MUST TAKE, AND THEN ADDRESS OURSELVES TO OUR COMMON TASK IN A MOOD OF SOBER RESOLUTION AND QUIET CONFIDENCE, WITHOUT HASTE AND WITHOUT PAUSE.

ARTHUR HENDERSON, ESSENTIAL ELEMENTS OF A UNIVERSAL & ENDURING PEACE (1934)

Cost estimation is an important process. Effectively communicating the significant principles of algorithmic estimation is similarly important. A tool to demonstrate the processes and, furthermore, the potential problems associated with this type of estimation facilitates this communication. The COCOMO algorithm requires various inputs to produce an estimate; this process requires an understanding of each element and the effects that these have upon the final output. These composite parameters are calculated through sets of weightings and translations, which are discussed below. Software and internet-based tools are available that interactively demonstrate the processes underlying algorithmic estimation. However, these tools - particularly professional applications - can provide more information than is necessary to demonstrate the fundamental practices and pitfalls of this type of modelling. Similarly, textbooks attempt to reveal algorithmic estimation through methodical, step-by-step explanations; however, within this format, it can prove difficult to effectively convey the processes or visualise the results. An analysis of these tools is presented below, which provides a framework for designing the solution to the problems identified.

1.1

THE EARLY DESIGN MODEL

The Early Design model is used to produce an initial estimate, where not enough information has been gathered to prepare a fine-grain estimate. In this regard, Boehm et al. assert that this model 'involves the exploration of alternative system architectures and concepts of operation'xii. Essentially, a detailed design is not required with this model; rather, an approximate effort estimate based upon relatively simple assumptions is made.
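In condensed form, the calculation described across the following subsections is (notation assumed here for brevity; the coefficient A = 2.94 is the value quoted in the Design Analysis, Section 3.4):

    \mathrm{Effort\ (PM)} = A \times \mathrm{KSLOC}^{\,B} \times M,
    \qquad B = 1.01 + \frac{\sum_{i=1}^{5} SF_i}{100},
    \qquad M = \prod_{j=1}^{7} EM_j

where the SF_i are the five weighted scale factors and the EM_j are the seven weighted effort drivers, each described in turn below.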


This model is based upon the generic formula for the COCOMO 2.0 family, calculated from function points and a set of five scale factors and seven effort drivers.

1.1.1 PERSON MONTHS

The COCOMO models produce an effort estimate (usually) given in person months. This is a metric for expressing the cost in time that personnel are required to devote to a project until completion. In turn, fiscal cost can be derived from this metric.

1.1.2 FUNCTION POINTS

Function points provide an early estimation of the likely size of a project based on the initial information available in the development cycle. They are derived from the required level of functionality in a development project as viewed through the lens of a set of project factorsxiii. Essentially, function points quantify the program functionality associated with five major factors. Each feature is then classified by complexity on a three-point scale. The complexity is reflected by a weighting, which determines the 'Unadjusted Function Points total' (UFP). Table (A) outlines these functions. The UFP is the function point metric implemented by COCOMO 2.0; usually, this unadjusted measure is refined by the degree of influence of fourteen adjustment factors. COCOMO 2.0, however, uses the UFP simply for sizing, and refines these values within its own scale factors and effort drivers, which are discussed below (Sections 1.1.3 and 1.1.4).

Table (A)

The function point value can be translated into the 'KSLOC' parameter in the COCOMO algorithm using a look-up table of programming languages. Table (B) presents a selection of this conversion data aggregated by Quantitative Software Management (QSM)xiv. Although other conversion factor tables were available, within the scope of this project the data presented by QSM is adequate to use during implementation.

Table (B)

Similarly, this selection represents only a narrow range of the dataset at hand. This range could be expanded; yet, in terms of this project, it provides a relevant and illuminating selection.


The data was compiled from '2597 completed function point projects in the QSM database' (QSM, 2005). However, one potential constraint with the data is associated with the reliability of multi-language projects as a source for these translation factors; consequently, this table is based upon single-language projects only (QSM, 2005).

1.1.3 SCALE FACTORS

The exponent B scales the effort required: this is a variable value computed through five scale factors. Table (C) outlines these factors.

Table (C)

The factors are aggregated and the result divided by 100. This value is added to 1.01 to produce exponent B.

           PREC   FLEX   RESL   TEAM   PMAT   Sum    ÷100   + 1.01 = B
Maximum    6.20   5.07   7.07   5.48   7.80   31.62  0.32   1.33
Minimum    0.00   0.00   0.00   0.00   0.00   0.00   0.00   1.01

Table (D)

Therefore, taking each factor to its extreme value, the potential value of B lies between 1.33 and 1.01 (to two decimal places). Thus the scale can be tuned to the development environment at hand.


1.1.4 EFFORT DRIVERS

Multiplier M is derived from seven effort drivers. These metrics measure specific project and general process factors that impact upon the effort. Table (E) outlines these factors.

Table (E)

Each driver is assigned a nominal value on the scale, which is reflected by a weighting. Table (E) outlines these weightings. M is the product of the seven weighted driver values.

           RCPX   RUSE   PDIF   PERS   PREX   FCIL   SCED   Product (×) = M
Maximum    2.72   1.24   2.61   2.12   1.59   1.43   1.43   60.68
Minimum    0.49   0.95   0.87   0.50   0.62   0.62   1.00   0.08

Table (F)

Therefore, taking each driver to its extreme value, the potential value of M lies between 0.08 and 60.68 (to two decimal places). Thus, the estimated effort can be tuned to the characteristics of the development at hand.
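Tying the pieces together, the following is a minimal Java sketch of the arithmetic described in Sections 1.1.3 and 1.1.4. The class and method names are illustrative rather than those of the implementation discussed in the Design Analysis, and the coefficient A = 2.94 is the value quoted there.

    // Illustrative sketch only: names are hypothetical; values come from Tables (D) and (F) above.
    public class EarlyDesignSketch {

        private static final double A = 2.94;   // COCOMO 2.0 Early Design coefficient

        // PM = A * KSLOC^B * M
        static double effort(double ksloc, double[] weightedScaleFactors, double[] weightedDrivers) {
            double sum = 0.0;
            for (double sf : weightedScaleFactors) {
                sum += sf;                        // aggregate the five weighted scale factors
            }
            double b = 1.01 + sum / 100.0;        // exponent B

            double m = 1.0;
            for (double em : weightedDrivers) {
                m *= em;                          // multiply the seven weighted effort drivers
            }
            return A * Math.pow(ksloc, b) * m;
        }

        public static void main(String[] args) {
            double[] maxScale   = {6.20, 5.07, 7.07, 5.48, 7.80};               // gives B of about 1.33
            double[] maxDrivers = {2.72, 1.24, 2.61, 2.12, 1.59, 1.43, 1.43};   // gives M of about 60.68
            System.out.printf("Worst-case effort for 10 KSLOC: %.2f PM%n",
                    effort(10.0, maxScale, maxDrivers));
        }
    }

Running the example with the extreme weightings from Tables (D) and (F) reproduces the bounds on B and M derived above.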


1.2

EXISTING ESTIMATION SOFTWARE

Three examples of existing cost estimation software based upon COCOMO 2.0 have been selected. These were chosen for several reasons. Primarily, they are widely available in the public domain, accessible online or via download. Furthermore, they offer different approaches to the problem, demonstrating different advantages and disadvantages. In terms of limitations, the second tool – 'USC Expert' – only offers a solution to the Post-Architecture model. However, it has been chosen as it conveys an appropriate structure to evaluate.

1.2.1 COSTAR 7.0xv

DESCRIPTION

Costar is a software-based estimation tool that provides an interface for all of the COCOMO II sub-models. It is a professional tool, with many customisable and expert-oriented features.

SCREEN SHOTS

The Costar tool provides a 'wizard' to guide the user through the estimation process.

However, this wizard only allows the input of SLOC, not any other method, such as a UFP calculation.


The input is derived from a simple incremental tab. Minimal explanations of the inputs are offered; however, if the driver button is clicked, the user can access and manipulate the effort weighting. An estimate can be saved through the file menu, which allows the user to retrieve data.

The interface dynamically updates the estimate as the user changes the level of a value. The display is densely packed with options and tools to tweak the estimate to the user's preferences. It is more suitable for a user that understands the principles and effects of data changes than for a user unfamiliar with the algorithmic estimation process.

Many different types of report are available; however, the simplest is the 'detail report'. Arguably, for a beginner, or as a teaching tool, this solution provides too many detailed/expert-specific functions to provide an effective pedagogical tool.


CRITIQUE

- This tool provides a professional environment in which to compute estimations. However - as a teaching tool - it arguably suffers from over-complication.
  - The scope of customisation and functionality is potentially bewildering.
  - This detracts from the central purpose of this project – to develop a simple, effective tool for teaching an algorithmic costing process.
- Some features would prove advantageous:
  - The ability to save an estimate, and re-load this data.
  - The dynamic update of the estimate conveys a sense of the sensitivity of each driver.
  - The estimation 'wizard' that leads the user through the process with helpful guidelines; however, the Costar tool still presumed a certain level of understanding and was not necessarily as clear as would be necessary for a teaching tool.


1.2.2 USC EXPERT COCOMO IIxvi

DESCRIPTION

This web-based Post-Architecture model demonstrates a public-domain COCOMO II estimation tool. The tool provides sets of radio buttons for defining the variable inputs; text areas are used to capture the project size in SLOC. Once the fields have been selected, the user submits the data through a 'submit' button, and the page navigates to a results screen. When selected, hyperlinks provide a paragraph of explanation on the meaning of each driver. This action navigates the user away from the main input page.

SCREEN SHOTS

The tool presents each set of drivers in sections, selected by radio buttons. Driver sensitivity is not apparent; rather, each is assigned an arbitrary value on the VL-EH scale.

The hyperlinks provide a link to a fairly detailed explanation of the function and purpose of each driver.

The output screens are accessed by selecting 'submit'. The user is then directed to a page with this display. The left-most diagram shows values set to minimum; the right, values set to maximum.

A breakdown of the effort is offered, which attempts to map the effort value across different stages.


CRITIQUE

- This tool does not permit dynamic changes to the input parameters; rather, a 'submit' button processes all the information before redirecting the user to the results page.
  - This does not facilitate a good understanding of how different cost drivers impact upon the effort estimation.
  - The degree of variation between two differently weighted estimates is not immediately obvious.
- This tool provides 'help' links to new pages, which provide a glossary of the terms central to the estimation process.
  - This is a useful feature; however, providing a means of integrating this guide into the tool itself would - arguably - improve the learning experience.


1.2.3 USC CSSE SUITE OF CONSTRUCTIVE MODELSxvii

DESCRIPTION

This web-based, PHP-driven tool is similar to the previous implementation; however, it offers a relatively more dynamic user experience. The inputs are determined by combo boxes which, upon selection, refresh the estimate at the bottom of the page. The layout is clear and simple, although no help or explanations are provided.

SCREEN SHOTS

Model with scale driver inputs set to minimal values. A breakdown of the estimate is provided graphically through a dynamic bar chart.

Model with scale drivers set to maximum values. The effort estimate values change fluidly; however, the scale of the bar chart remains the same. A comparison to previous values is not possible. The tool allows different software size measurements; specifically: 'SLOC' and 'UFP'. The language conversion for UFP is a choice between Java, C, and Basic.


CRITIQUE

- This tool does not provide any help in terms of the meaning of the different scale factors and cost drivers.
  - This renders the tool relatively inappropriate for users that are not aware of the semantics of the drivers.
- The tool permits relatively dynamic changes to be made to the inputs.
  - When values are selected or modified in the combo boxes, the page automatically refreshes and updates the result data at the bottom of the screen.
  - This is useful in showing the effect of different values upon the estimate and conveys a certain sense of the sensitivity.
- Overall, the model strives to hide data complexity as far as is possible.
  - Driver sensitivity is not immediately apparent.

1.3

EXISTING TEXTUAL GUIDES

An influential software engineering textbook was selected to evaluate the demonstration of COCOMO II and, specifically, the Early Design Model; namely, Sommerville's Software Engineeringxviii.

1.3.1 SOFTWARE ENGINEERING (8TH)

Sommerville provides the fundamental algorithm, and briefly explains the principles underlying each factor.

This value is incorrect


These descriptions are, at best, simplistic.

This does not convey the semantics of the drivers. It is not clear that each driver is itself weighted.

CRITIQUE

The calculation of the scale driver is essentially correct – the aggregation of the scale factors to produce a value, divided by 100, and added to the constant 1.01; however, it is not accurate. The values of the scale factors are weighted, as is shown in Table (E), not simply the arbitrary value assigned on the scale 1-7. The inherently non-interactive nature of the textbook does not demonstrate the sensitivity of the model to changes in inputs; furthermore, the values are not described in any great detail. Similarly, the formation of Unadjusted Function Points is not analysed. This explanation does provide a better overview of the mechanics of the calculation than the other tools thus far, which seek to hide the complexity. However, it is not immediately obvious how selecting different inputs impacts upon the final effort; moreover, it is unclear how UFPs are calculated. This is not explored in any great detail in the estimation section, or, indeed, any other part of the textbook.

1.4

CONCLUSION

The central themes gained from the product evaluations are as follows. Importance of:

- Conveying the presence of driver weightings;
- In-view, dynamic estimate update;
- Describing driver meaning conveniently;
- Tutorial/guide for beginners;
- Simple driver range inputs; i.e., combo/radio/tabbed solutions;
- Graphical display of estimate;
- Ability to recover past estimates;
- Simplicity of interface.


xii. Barry Boehm et al. (2000) pp.177-205

xiii. C. Behrens (1983), 'Measuring the Productivity of Computer Systems Development Activities with Function Points' in IEEE Transactions on Software Engineering, November 1983

xiv. QSM, Function Point Languages Table (2005) @ http://www.qsm.com/?q=resources/function-point-languages-table/index.html | accessed 18.08.09

xv. Costar 7.0 was downloaded from http://www.softstarsystems.com/ | accessed 15.03.09

xvi. COCOMO II with Heuristic Risk Assessment is available at | accessed 16.03.09

xvii. The COCOMO II Suite is available at | accessed 16.03.09

xviii. Sommerville, Software Engineering (8th edition) [2007]


2

PROBLEM ANALYSIS

AN INTELLECTUAL SAYS A SIMPLE THING IN A HARD WAY. AN ARTIST SAYS A HARD THING IN A SIMPLE WAY.

CHARLES BUKOWSKI, LOCKED IN THE ARMS OF A CRAZY LIFE

2.1

A TEACHING TOOL FOR COST ESTIMATION

The basic process of computing a cost estimate is relatively straightforward: values are plugged in; an algorithm is executed; a result is produced. However, the accuracy of the estimate is critically dependent upon the choice of the inputs into the model. Furthermore, in terms of revealing the sensitivity of the model, the number of different values and the extent to which these values can be manipulated through weighting variables is less straightforward. The problem can be approached in three ways, rooted within bridging the space between the simple and the complex.

1. The 'input stage': conveying the syntax of the formula is simple; yet, effectively conveying the semantics of the drivers is complex.
2. The 'processing stage': the calculation itself is simple; yet, conveying the sensitivity of each driver within this computation is complex.
3. The 'results stage': displaying the numeric product of the calculation is simple; yet, conveying a feel of the sensitivity of the result under different assumptions is complex.

These problems are diagrammatically represented below in Figure (1). Thus, the crux of the problem is located in the effective didactic conveyance of driver sensitivity within a pedagogical environment; or, more aptly: teaching a complex process in a simple way.

Figure (1)

2.2

PROJECT PROBLEM

The problem operates on four levels. Core technical challenges are located within the lower levels; more abstract, pedagogical challenges exist at a higher level. Figure (2) demonstrates these challenges, attempts to sketch the boundaries of the development project at hand, and begins to sketch out the formal requirements at a high level.

- The basic problem is to design and build a tool that obtains parametric inputs, performs an algorithmic calculation, and produces a result.
- Adding more complexity, an interface is required that effectively conveys the semantics of the cost factor and driver options.
- The visualisation of the output is important, with an emphasis upon revealing the sensitivity of the inputs in the creation of the result.
- Finally, a solution to explaining the process of the calculation, in regard to the effect of different assumptions and variations in the development environment, adds another layer of didacticism.

Figure (2)

Within each broad requirement described in (Figure 2), a set of more specific requirements exists.

2.3

PROJECT REQUIREMENTS

The formal requirements for the project are discussed below. This analysis applies a set of judgements in order to derive a more focused, testable, and realistic set of necessary functional requirements. It tacitly borrows from the 'MoSCoW' approach: identifying the 'must-', 'should-', 'could-' and 'would-have' system features; essentially, it is a prioritisation of derived requirements that facilitates the delineation of system boundaries. The corresponding diagrams seek to visualise the requirements by mapping program functionality onto an arbitrary scale of importance, starting with the most necessary functionality at the core, and working out to the least essential. This approach accords with the intended design methodology, discussed below (Section 3.1). These requirements do not seek to concretely specify how the functionality should be implemented; rather, they describe the needs from an abstract perspective. The Design Analysis (Section 3) documents and justifies the design methodology and decisions taken in greater detail.


2.3.1 FUNCTION AREA 1

Diagram (1)

REQUIREMENT 1.1
The program should allow the calculation of an effort estimate based upon a set of input values.
- Pre-determined values of 'M', 'KSLOC' and 'B' are created.
- These values are used in a method that performs the algorithm.
- The effort value is outputted as a single value.

REQUIREMENT 1.2
The program should allow the calculation of the scale factor ('B') and the effort multiplier ('M') based upon a set of inputs.
- Scale factor and effort multiplier input fields (Tables (C) and (E) in the Domain Analysis (1)) are created.
- These values are translated to weighted overall values (Tables (D) and (F) in the Domain Analysis (1)).

REQUIREMENT 1.3
The program should allow the capture of user-defined input values of 'M', 'KSLOC', and 'B'.
- A set of input dialogues is created, prompting the user to input the values.
- This data is passed to objects created in 1.2.

REQUIREMENT 1.4
The program should allow the capture of a user-defined UFP value and derive the KSLOC.
- An input dialogue for UFP is created.
- This data is translated to 'KSLOC'.

REQUIREMENT 1.5
The program should allow the capture of Function Type values and translate these into the UFP value.
- An input dialogue for Function Type values is created.
- This data is translated into the UFP value and then to 'KSLOC'.


2.3.2 FUNCTION AREA 2

Diagram (2)

REQUIREMENT 2.1
The program should provide a means of capturing valid data from the user.
- The user interface provides a means of capturing user data.
- Certain data can be inputted through components that enforce valid input.
- Other data, such as KSLOC, should be validated within the program logic.
- Components should trigger the collection of the data and the execution of the algorithm.

REQUIREMENT 2.2
The program should provide a means of conveying the semantics of the inputs to the user.
- A description of each input's meaning and function should be provided.
  - This information should be hidden initially, to reduce complexity, and revealed upon request.
  - This information should be displayed alongside the input to facilitate understanding.
- Tips and help provide a basic description of an input.

REQUIREMENT 2.3
The program should provide a means of selecting the scope of the inputs.
- Is a 'KSLOC' value known, or must it be calculated?
- Is a 'UFP' value already known, or must it be calculated?
- The interface provides options for the scale of the estimation.

REQUIREMENT 2.4
The program should provide a means of demonstrating the relative weighting of each input value. Since some values are weighted more heavily than others, selecting 'high' on two different drivers will have a different effect.
- When the user selects a driver value, the relative weighting of this choice is demonstrated.
- The proportional makeup of the value of the driver is demonstrated.


2.3.3 FUNCTION AREA 3

Diagram (3)

REQUIREMENT 3.1
The program should provide a means of displaying the estimate output graphically.
- Rather than a simple numeric value, the value should be outputted visually.
- The scale of the visualisation should be determined by the minimum possible effort (0) to the maximum possible effort; i.e., where all effort/scale drivers are set to their maximum weight.

REQUIREMENT 3.2
The program should provide a means to dynamically update the estimate when a single effort driver is altered.
- The graphic display should be immediately refreshed to convey the change in effort.
- The display should be visible from/embedded within the input screen.

REQUIREMENT 3.3
The program should provide a means of semantically conveying the result of the effort estimate.
- A line denoting the minimum possible effort could be superimposed upon this representation; i.e., where all effort/scale drivers are set to their minimum weight.
- Basic textual help explains the output of the estimate and the most important factors involved within this particular estimate.

REQUIREMENT 3.4
The program should provide a means of showing previous estimate results.
- An option to save and display a static representation of the current estimate.
- New (dynamic) estimations should be shown alongside the previous estimate for comparison.

REQUIREMENT 3.5
The program should provide a means of loading and displaying previously saved estimates from different/multiple sessions.


2.3.4 FUNCTION AREA 4

Diagram (4)

REQUIREMENT 4.1
The program should allow the execution of 'tutorials' to demonstrate estimates under different assumptions; i.e., high, medium and low values on different sizes of project.
- The tutorials guide the user through examples, where estimate inputs are predefined.
- The results convey a sense of the potential problems with choosing extreme values under different conditions.

REQUIREMENT 4.2
The program should allow the execution of 'walkthroughs' to demonstrate and justify input choices within a particular scenario.
- The walkthrough provides the most suitable input depending upon a particular scenario.
- The walkthrough justifies the choice of input and explains the final estimate and its likely advantages and disadvantages.

REQUIREMENT 4.3
The program should allow the execution of 'scenarios'. These construct a scenario conveying factors relevant to a project estimate and require the user to make decisions based upon this information.
- The user is prompted to enter their inputs.
- When all values are collected, the result is compared with a 'model solution'.
- This justifies the choices made and demonstrates the differences between the user inputs and the system-defined inputs.

REQUIREMENT 4.4
The program should allow a scenario/walkthrough/tutorial 'editor', where the user can define custom inputs.
- These custom scenarios can be saved, potentially as .xml files.
- This allows persistent data and, potentially, sharing custom scenarios among a group.


2.4 REQUIREMENTS ANALYSIS

In terms of the elicitation and validation of these requirements, no formal procedures were adhered to. Rather, discussions with Prof. Ray Welland, and the evaluation of existing resources and materials (Section 1), provided the basis of these requirements. Testing of the system is the only concrete means of validating the requirements, which perhaps inserts a weakness into the project as a whole. Given additional time and resources, it may have been appropriate to identify system requirements through more formal analysis, including interviews, questionnaires, observation and focus groups. Such methods would have been useful in formally establishing requirements and, furthermore, in gaining a clearer understanding of the current practices and associated problems.


3

DESIGN ANALYSIS

PROPOSE TO ANY ENGLISHMAN ANY PRINCIPLE OR INSTRUMENT, HOWEVER ADMIRABLE, AND YOU WILL OBSERVE THAT THE WHOLE EFFORT OF THE ENGLISH MIND IS DIRECTED TO FIND A DIFFICULTY, A DEFECT, OR AN IMPOSSIBILITY IN IT.

CHARLES BABBAGE

3.1

DEVELOPMENT METHODOLOGY

The nature of this project lends itself to an agile, dynamic development process to maximise functionality and minimise risks. Requirements have been identified and grouped into logically related units of functionality. Therefore, 'Feature Driven Development' (FDD) is the most appropriate methodology to frame the project around. Briefly: FDD is a software development methodology founded upon structured processes, and aimed towards reconciling more formally-orientated development tools with agile principles and practices. FDD does not encapsulate the entire life-cycle of a project; rather, it emphasises planning and design elements, which operate within a framework of accurate project monitoring, while demanding relatively frequent deliverables and concrete implementations.

In terms of the scope of this report, the parts of the FDD model implemented are more effectively dealt with diagrammatically; the appropriate processes and their associated tasks are outlined in 'Design Methodology' (Appendix A), adapted from descriptions provided by De Lucaxix. Suffice to say, the fundamental principle followed is to design a solution to each feature, test it, and promote it to the build. This ensured that core functionality was implemented correctly, and left time to make judgements about which features were more essential to develop than others towards the end of the scheduled development time. Due to the relatively agile, non-linear nature of this development methodology, formal design documentation is minimised, and used to communicate interesting or challenging features of the design; however, a Package Diagram (Appendix B) and a (simplified) Class Diagram (Appendix D) are included.

Clearly, this methodology presents challenges in terms of the necessary linearity of this report. For clarity, specific details about the influence of the design methodology upon the implementation and testing strategies are discussed in the Project Evaluation (Section 4). Furthermore, the iterative nature of the development resulted in design features being informed by the Usability Testing phase, which is examined in the Project Evaluation (Section 4.1.2). Reasons underlying the changes to features after testing are not discussed in this section; rather, they are documented in the Project Evaluation, which seeks to outline and justify the need for changes to the initial design, which chiefly affected user interface components (Feature Area 2). The features documented here represent the final state of the design of each feature. This section, then, somewhat conflates design and implementation; yet, this is reflective of the development methodology, and betrays the non-linearity of the project. Consequently, the structure of this section seeks to set out the underlying principles that informed the development (Section 3.3) and to describe and justify the design decisions in terms of the requirements (Section 3.4).


3.2

DEVELOPMENT TECHNOLOGY

The project has been implemented in Java, primarily using the development environment provided by the Eclipse SDK. This technology is appropriate for a number of reasons, including:

- Familiarity with the language;
  - packages that facilitate UI development: java.awt and javax.swing;
  - JUnit testing;
- Suitability for object-oriented development;
- The potential to create Java applets for a web interface.

Using a familiar language has enabled time savings during development. A significant degree of user interaction is required by this project, both in terms of receiving input and outputting semantic information. In this regard, familiarity with the tools available in Java has helped to make this process relatively simple, and enabled time to create and refine functionality. However, despite such advantages, Java is a relatively verbose tool for designing semantic and stylish interfaces. Other options could have been considered, and are discussed at greater length in the Project Evaluation (Section 4).
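As an indication of the kind of check that the JUnit support mentioned above makes cheap, a hypothetical unit test of the scale-factor arithmetic might look as follows (JUnit 4 style; the test class and the inline calculation are illustrative, not taken from the project's test suite).

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical test: checks that the exponent B reaches the documented upper bound
    // when every scale factor takes its extreme weighting from Table (D).
    public class EffortCalculationTest {

        @Test
        public void exponentBReachesDocumentedMaximum() {
            double[] maxScaleWeights = {6.20, 5.07, 7.07, 5.48, 7.80};
            double sum = 0.0;
            for (double weight : maxScaleWeights) {
                sum += weight;
            }
            double b = 1.01 + sum / 100.0;
            assertEquals(1.33, b, 0.005);   // 1.01 + 0.3162 = 1.3262, within rounding of 1.33
        }
    }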

3.3

DESIGN PRINCIPLES

One aim in development was to implement a design that enabled classes to be flexible in terms of re-engineering; specifically, to facilitate the extension of the project over different versions of the COCOMO 2.0 model. To this end, classes were logically grouped together, as reflected in the Package Diagram, which maps requirements onto the subsystems. Essentially, the functionality of the program is compartmentalised to facilitate future changes by (hopefully) reducing the number of modifications required. Sommerville defines this need as 'maintainability'xx; qualifying the requirement as the capacity to design using 'fine-grain, self-contained components that may be readily changed'. This project is designed to satisfy this need; particularly, decoupling the application logic (Application Subsystem), the data held as objects (Data Management Subsystem), and the UI layer (Interaction Subsystem) was integral to the aim of the design. For clarity, a pseudo package diagram (showing the system boundary) is included here:

- Interaction Subsystem: handles the input and the output of data.
- Application Subsystem: handles the COCOMO and UFP logic.
- Data Management Subsystem: handles collections of Estimate objects.


Ultimately, the design accords with what Sommerville describes as the ‘layered approach’xxi, which supports the incremental development of systems. Certainly, this facilitates the development of this project, where features are incrementally added to the final build. The evaluation of these principles is included in the Project Evaluation (Section 4).

3.4

DESIGN DECISIONS

This analysis seeks to describe and justify the design decisions in terms of the requirements; particularly in view of the user interface and software architecture. The solution to each requirement is discussed atomically and is structured correspondingly. Where appropriate, diagrams and pseudocode are included for clarity; however, the bulk of the documentation, including a class diagram, is located in the Appendix. The design decisions are presented by roughly following an abstract structure (Table (A)), where appropriate.

EXPLANATION
  "What"      - The design feature at hand.
  "How"       - How this feature meets the requirement.

JUSTIFICATION
  "Why"       - Discussion of the technology/techniques used; reasons underlying the choices.
  "Why not"   - Description of options; discussion of why they were not implemented.

EVALUATION
  "Were there any problems?" / "Was it successful?" - A brief outline of the problems encountered and the problems that remain. A higher-level, more detailed evaluation is provided in the Project Evaluation (Section 4).

Table (A) - The abstract structure of the Design Analysis.


FEATURE 1.1

'The program should allow the calculation of an effort estimate based upon a set of input values.'

EXPLANATION

The COCOMO 2.0 algorithm requires parameters; namely: the coefficient 'A', the 'size' in KSLOC, the exponent 'b' and the multiplier 'm'. The algorithm logic is contained within the ApplicationSubsystem package; specifically, in the Cocomo class. This contains methods to get the values of 'b' and 'm' from the EffortMultiplier and EffortScale classes, and executes the algorithm. The effort output is formatted to two decimal places and held as a private double, accessible through a public get method (Pseudocode i). Two decimal places is an appropriate level of granularity for the estimate, as more precision would not yield results of any greater use. The Cocomo class contains static methods and, as such, exists for the life of the program and does not need to be explicitly created as an object.

Pseudocode i
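A minimal Java sketch of the structure described above - the executeCocomo() and getOutput() names follow the text, while the fields, setters and method bodies are assumed for illustration rather than taken from the actual source:

    // Illustrative sketch only - not the dissertation's source code.
    public final class Cocomo {

        public static final double A = 2.94;    // immutable coefficient
        private static double ksloc;            // project size in KSLOC
        private static double b;                // composite exponent from the scale factors
        private static double m;                // composite multiplier from the effort drivers
        private static double output;           // effort estimate in person months

        private Cocomo() { }                    // static utility: a single shared instance of the logic

        public static void setKsloc(double value) { ksloc = value; }
        public static void setB(double value) { b = value; }
        public static void setM(double value) { m = value; }

        // Must be called before getOutput(), as noted in the evaluation below.
        public static void executeCocomo() {
            double effort = A * Math.pow(ksloc, b) * m;
            output = Math.round(effort * 100.0) / 100.0;   // held to two decimal places
        }

        public static double getOutput() { return output; }
    }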

JUSTIFICATION

The static class methods are shared as a single object throughout the project, which is useful when executing multiple estimates. Because the underlying logic of the algorithm is static itself, it was appropriate to create such a class. The Cocomo class is decoupled from the notion of a Cocomo estimate instance. To maintain an estimate 'object', one must be created. Such objects are discussed later. One option would have been to create the algorithm dynamically by reading in from an input file. This would enable easier access to changing the static final variables, such as the coefficient, 'A' (which is immutable at 2.94). However, this functionality was deemed to be outside the scope of the project, as only the Cocomo 'early design model' was to be simulated. Furthermore, modifications and code reuse are facilitated by decoupling components and abstracting the application data.

EVALUATION

One issue to note is that this design requires the Cocomo methods to be explicitly called; i.e., the application must call executeCocomo() before requesting getOutput() to derive the correct effort estimate.


FEATURE 1.2

'The program should allow the calculation of the scale factor ('B') and the effort multiplier ('M') based upon a set of inputs.'

EXPLANATION

The COCOMO 2.0 algorithm parameters 'b' and 'm' are composite; they are derived from separate calculations, outlined in the Domain Analysis (Section 1). As such, the Cocomo class must retrieve these values from the EffortMultiplier and EffortScale classes, which process the inputs required to derive these values. If these values have not been set, they are returned as 0. The weighting matrices discussed in the Domain Analysis (Section 1) are held as static objects within the classes (Pseudocode ii). The values are constant and, as with the Cocomo variables, are shared throughout the lifetime of the program.

Pseudocode ii

The multipliers and scales can be set, and processed to derive the ‘b’ and ‘m’ values, which are accessible (Pseudocode iii). These classes also contain methods to instantly return the maximum and minimum values of ‘b’ and ‘m’, by iterating through these matrices and finding the highest and lowest values.

Pseudocode iii
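A compressed sketch of this arrangement, with assumed names and only the extreme weightings from Table (D) of the Domain Analysis shown (intermediate ratings elided):

    // Illustrative sketch only - weighting values are the documented extremes.
    public final class EffortScale {

        // Rows: PREC, FLEX, RESL, TEAM, PMAT. Columns: [very low, extra high] only.
        private static final double[][] SCALE_WEIGHTS = {
            {6.20, 0.00},
            {5.07, 0.00},
            {7.07, 0.00},
            {5.48, 0.00},
            {7.80, 0.00}
        };

        private static final double[] selected = new double[SCALE_WEIGHTS.length];

        // Sets one scale factor to the weight held in the matrix for the chosen rating.
        public static void setScaleFactor(int factorIndex, int ratingIndex) {
            selected[factorIndex] = SCALE_WEIGHTS[factorIndex][ratingIndex];
        }

        // B = 1.01 + (sum of the weighted scale factors) / 100
        public static double getB() {
            double sum = 0.0;
            for (double weight : selected) {
                sum += weight;
            }
            return 1.01 + sum / 100.0;
        }

        // Maximum attainable B, found by iterating the matrix for the largest weight per row.
        public static double getMaxB() {
            double sum = 0.0;
            for (double[] row : SCALE_WEIGHTS) {
                double max = row[0];
                for (double w : row) {
                    if (w > max) {
                        max = w;
                    }
                }
                sum += max;
            }
            return 1.01 + sum / 100.0;
        }

        private EffortScale() { }
    }

EffortMultiplier would follow the same pattern, combining the seven driver weightings as a product rather than a sum; its setVariable() routing is sketched under Feature 1.3 below.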

JUSTIFICATION

These classes atomically contain the data and logic associated with the scale component and the effort multiplier. Many aspects of functionality rely upon these data sets, and thus creating static objects and methods was appropriate with this in mind. The data is not only used for computing the 'b' and 'm' values, but for holding semantic data, such as abbreviations, full names and the number of drivers, which are all accessible. This data can be used in various ways, from the simple calculation of the value, to determining the number of fields in a JComboBox dynamically. This is important for code reuse and modification. If the number of multipliers should change by, say, adding another one, the data only needs to be changed in one location, and these changes are cascaded through the code due to the object-oriented nature of the development.

EVALUATION

One issue to note is that this design requires the EffortScale and EffortMultiplier classes to be explicitly called; i.e., the application must call EffortScale.setEffortMultiplier() before requesting the value of 'b'.


FEATURE 1.3

'The program should allow the capture of user-defined input values of 'M', 'KSLOC', and 'B'.'

EXPLANATION

The values of 'M' and 'B' are created from a number of weighted inputs, as discussed in the Domain Analysis (Section 1). These values must be translated into their weighted value, and calculated to determine the overall value. The multiplier and scale classes contain methods to set the driver weight before calculating the overall value. The setVariable() method permits this, and corresponds to the matrices discussed in Feature 1.2. The method requires two parameters which determine the associated variable through switch statements, which ensures that the correct value is set to the correct effort multiplier or scale driver. The KSLOC is a simpler process to get and set from the user, and exists within the Cocomo class explicitly as set and get methods, as it is used in the COCOMO 2.0 algorithm.

Pseudocode iv
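A sketch of the two-parameter switch routing described above, with assumed names and indices and only the extreme weightings quoted in Table (F) of the Domain Analysis shown:

    // Illustrative sketch only - not the dissertation's source code.
    public final class EffortMultiplier {

        // Rows: RCPX, RUSE, PDIF, PERS, PREX, FCIL, SCED. Columns: [minimum, maximum] weights only.
        private static final double[][] DRIVER_WEIGHTS = {
            {0.49, 2.72},
            {0.95, 1.24},
            {0.87, 2.61},
            {0.50, 2.12},
            {0.62, 1.59},
            {0.62, 1.43},
            {1.00, 1.43}
        };

        private static final double[] multipliers = new double[DRIVER_WEIGHTS.length];

        // Two parameters: the first selects the driver via a switch, the second its rating.
        public static void setVariable(int driverIndex, int ratingIndex) {
            switch (driverIndex) {
                case 0: multipliers[0] = DRIVER_WEIGHTS[0][ratingIndex]; break;   // RCPX
                case 1: multipliers[1] = DRIVER_WEIGHTS[1][ratingIndex]; break;   // RUSE
                case 2: multipliers[2] = DRIVER_WEIGHTS[2][ratingIndex]; break;   // PDIF
                case 3: multipliers[3] = DRIVER_WEIGHTS[3][ratingIndex]; break;   // PERS
                case 4: multipliers[4] = DRIVER_WEIGHTS[4][ratingIndex]; break;   // PREX
                case 5: multipliers[5] = DRIVER_WEIGHTS[5][ratingIndex]; break;   // FCIL
                case 6: multipliers[6] = DRIVER_WEIGHTS[6][ratingIndex]; break;   // SCED
                default: throw new IllegalArgumentException("Unknown driver: " + driverIndex);
            }
        }

        // M is the product of the seven weighted drivers (0 if any driver has not been set).
        public static double getM() {
            double m = 1.0;
            for (double value : multipliers) {
                m *= value;
            }
            return m;
        }

        private EffortMultiplier() { }
    }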

JUSTIFICATION

These methods enable efficient access to the static values within each class. The user interface can be designed to hold the correct number of components through assessing the length of the relevant matrix, and thus the values can be efficiently set by iterating through the display components and setting the variables.

EVALUATION

In terms of the development principles this is, perhaps, a contentious design decision. It risks creating dependencies between classes and enforcing a concrete interaction between them. Changing the number of variables would require these methods to be modified; however, overall, this appeared to be the most logical solution to permit efficient and understandable access to the class variables.

Furthermore, it allows the collection of data from the user interface to be an efficient process; however, this is discussed in more detail later. It would have been possible to create an abstract class, as the EffortMultiplier and EffortScale classes share a significant amount of code. However, ultimately it was decided that this would be over-engineering the code to a degree where the time taken would be to the detriment of other features. Moreover, abstract classes are used elsewhere in the project, which demonstrates knowledge and comfort with this technique. A simplified class diagram expresses these interactions diagrammatically (Appendix C).


FEATURE 1.4 'The program should allow the capture of a user-defined UFP value and derive the KSLOC.' EXPLANATION The UFP value is, as with the KSLOC, relatively easy to retrieve from the user. However, this process is complicated by the requirement to convert it into KSLOC through a language look-up table, as discussed in the Domain Analysis (Section 1). All data relating to UFPs, including the types, conversion, complexity counts and weights, is collected within the UnadjustedFunctionPoints class, in the ApplicationSubsystem package. Each logical area is divided into nested inner classes. For conversions, the static class holds data relating to the names and values of conversion languages, and stores them in a 2D object array (Pseudocode v). This enables other parts of the program to access this data, to determine the language to use for the conversion (Pseudocode vi).

Pseudocode v

The convertUFP() method permits access to the language array, and looks up the correct value, depending upon user input. It multiplies the UFP count by the appropriate weight, and calculates the KSLOC by dividing by 1000. This value is accessible through the get methods within the class.
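A minimal sketch of this conversion is given below. The class and method names are assumptions, and the table uses the commonly published SLOC-per-function-point ratios rather than the project's own figures.

import java.util.LinkedHashMap;
import java.util.Map;

public final class UfpConversionSketch {

    // Language name -> approximate SLOC per unadjusted function point.
    private static final Map<String, Integer> SLOC_PER_UFP = new LinkedHashMap<>();
    static {
        SLOC_PER_UFP.put("Java", 53);
        SLOC_PER_UFP.put("C++", 55);
        SLOC_PER_UFP.put("C", 128);
    }

    private UfpConversionSketch() { }

    // Converts a UFP count into KSLOC for the chosen language.
    public static double convertUfp(int ufpCount, String language) {
        Integer slocPerFp = SLOC_PER_UFP.get(language);
        if (slocPerFp == null) {
            throw new IllegalArgumentException("Unknown language: " + language);
        }
        return (ufpCount * slocPerFp) / 1000.0;   // divide by 1000 for thousands of SLOC
    }

    public static void main(String[] args) {
        System.out.println(convertUfp(120, "Java"));   // e.g. 6.36 KSLOC
    }
}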

Pseudocode vi

JUSTIFICATION These methods enable efficient access to the static values held within each class. The user interface can be designed to hold the correct number of components by assessing the length of the relevant matrix, and thus the values can be efficiently set by returning the language at the corresponding array index. EVALUATION Rather than explicitly instantiating languages as a static object, they could be dynamically determined by creating an object from an external file; i.e. an XML file with languages and weightings. This would enable easy access for changing the weights and adding new languages. However, within the scope of this project, it was deemed unnecessary, as the time taken would be to the detriment of other features; moreover, XML parsing is used elsewhere in the project, which demonstrates knowledge and comfort with this technique.


FEATURE 1.5 'The program should allow the capture of Function Type values and translate these into the UFP value.' EXPLANATION Calculating the UFP value from user input requires a significant amount of data entry and processing. This is reflected by the structure of the UnadjustedFunctionPoints class. Here, inner classes deal with different aspects of the functionality required to calculate UFPs from user values. The method of collecting data is discussed in greater detail later; however, the underlying logic of the calculation is dealt with here. A significant amount of pseudocode has been included to facilitate explanation of this feature. Each Function Area, as discussed in the Domain Analysis (Section 1), has a different set of weightings. As with the multipliers and scales, these weightings are held within a 2D array, UFP_COMPLEXITY_WEIGHT (Pseudocode vii). The intricacy arises with UFP because each Function Area occurrence - that is, each instance of a Function Area, of which there will be multiple - must be mapped onto a weighting of the ‘complexity count’, which is stored as UFP_COMPLEXITY_COUNT (Pseudocode viii). The user enters the value of a Function Area occurrence into a matrix (discussed in detail later). To process this value, the program determines which complexity count weight the value is attributed to by using the index of the interface matrix, and uses this value (low, average or high) to look up the ‘complexity weight’ applied to the ‘base’ value. This process is expressed visually later.

Pseudocode vii

The ‘base value’ entered by the user is multiplied by this weight, which determines a UFP value. The sum of all of the evaluated occurrences represents the specific Function Area’s UFP count. The sum of all the Function Areas’ UFPs is then converted into thousands of lines of source code (KSLOC) through the same process as discussed in Feature 1.4.
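The calculation described above can be sketched as follows. The weight table uses the standard unadjusted function point weights; the class and method names, and the shape of the input array, are assumptions made for illustration.

public final class UfpCalculationSketch {

    // Rows: EI, EO, EQ, ILF, EIF; columns: low, average, high complexity.
    private static final int[][] UFP_COMPLEXITY_WEIGHT = {
        { 3, 4, 6 },    // External Inputs
        { 4, 5, 7 },    // External Outputs
        { 3, 4, 6 },    // External Inquiries
        { 7, 10, 15 },  // Internal Logical Files
        { 5, 7, 10 }    // External Interface Files
    };

    private UfpCalculationSketch() { }

    // counts[area][complexity] holds the number of occurrences entered by the user
    // for each function area at each complexity level (low/average/high).
    public static int calculateUfp(int[][] counts) {
        int total = 0;
        for (int area = 0; area < counts.length; area++) {
            for (int complexity = 0; complexity < counts[area].length; complexity++) {
                total += counts[area][complexity] * UFP_COMPLEXITY_WEIGHT[area][complexity];
            }
        }
        return total;
    }
}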

Pseudocode viii


JUSTIFICATION These methods enable efficient access to the static values held within each class. The user interface values correspond to the UFP_COMPLEXITY_COUNT, and each Function Area refers to this matrix. Because multiple calculations all rely upon the same, fixed look-up tables, it was appropriate to create them as static objects and methods.

This method, and indeed the other Function Area setters, requires that the data from the input matrices be stored in a 2D array where the inputs follow this structure:

The data logically maps onto the underlying data structure. It does not enforce the collection of data in a certain way; only that it must be collected within an array structure that maps onto the UFP complexity count matrix. Rather than discuss the interface design of this function in the relevant feature section, it is appropriate to provide it here for clarity (Figure 1).

Figure 1 – The Calculated UFP interface. The user selects a Function Area using the corresponding tab and enters the value of each occurrence using a JSpinner. Selecting ‘Calculate’ updates the UFP value for that Function Area; once all tabs are evaluated, the overall UFP value is displayed.

This interface provides a consistent framework for entering data: the top of the display deals with data entry, while the bottom of the display presents data outputs. The details of the semantic aids will be discussed later. EVALUATION Again, the question of decoupling is raised in this design. It implicitly links the input (discussed later) to the underlying data structures and logic; it enforces the passing of data across objects to a certain type. However, to create an efficient, simple, semantic interface and architecture, it was necessary to design the objects’ relations in this way. As far as possible, the code is designed to cater for reuse and modification; again, the values held within the objects could be instantiated externally to tweak the data and weightings; yet, with reference to the scope of this project, it was deemed unnecessary to provide methods to modify data outside of the source code itself.


FEATURE 2.1 'The program should provide a means of capturing valid data from the user.' EXPLANATION The validation of data is a general requirement of the program. As such, it does not refer to a specific design component so much as to a consistent methodology: minimise the need to validate at all, and validate strictly where necessary. Figure (1) conveys the type of interface that minimises errors by restricting data entry to specific types. Other data entry fields, such as the KSLOC entry and direct UFP entry, must be validated by the program logic to ensure integrity. The KSLOC field requires that a number of type double be entered, and imposes maximum and minimum limits to ensure the usefulness of an estimate. A JOptionPane is splashed if any constraints are broken. Similarly, the UFP field validates that only integer values are entered. Figure (2) demonstrates this functionality.

Figure 2 – validation message

Other user-based data to be validated, such as setting the level of multiplier and scale ranges, is discussed later. JUSTIFICATION As far as possible, the design seeks to minimise the need to validate data by implementing components such as JComboBoxes, JCheckBoxes and JRadioButtons to gauge user input. Where validation is required, it is appropriate to splash error messages to alert users to an error.
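A minimal sketch of the kind of validation described for the KSLOC field is shown below; the limits, message text and method names are illustrative assumptions.

import javax.swing.JOptionPane;
import javax.swing.JTextField;

public final class KslocValidationSketch {

    private static final double MIN_KSLOC = 0.1;
    private static final double MAX_KSLOC = 10_000.0;

    private KslocValidationSketch() { }

    // Returns the validated KSLOC value, or null after splashing an error message.
    public static Double readKsloc(JTextField field) {
        try {
            double ksloc = Double.parseDouble(field.getText().trim());
            if (ksloc < MIN_KSLOC || ksloc > MAX_KSLOC) {
                JOptionPane.showMessageDialog(field,
                        "KSLOC must be between " + MIN_KSLOC + " and " + MAX_KSLOC,
                        "Invalid input", JOptionPane.ERROR_MESSAGE);
                return null;
            }
            return ksloc;
        } catch (NumberFormatException e) {
            JOptionPane.showMessageDialog(field,
                    "Please enter a numeric value for KSLOC",
                    "Invalid input", JOptionPane.ERROR_MESSAGE);
            return null;
        }
    }
}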


FEATURE 2.2 'The program should provide a means of conveying the semantics of inputs to the user.' EXPLANATION As a teaching tool, the semantics of the components and features are essential. In this respect, as far as possible, the design sought to provide help text and information text for all components. The help and information text, along with associated methods to retrieve this data, is held in a central repository, accessible by any subsystem. The HelpSubsystem package is a logical collection of classes that directly relate to help. The HelpText class contains inner classes that include static objects and methods to return String objects of help text, indexed by ints (Pseudocode ix). The help is divided by type, with an inner class representing each user interface. The help is stored as a String array; some arrays are two-dimensional as certain help messages contain several ‘pages’ of help. The way this help is used is explained below.
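The central repository might be structured along the following lines. This is not the dissertation's own Pseudocode ix; the inner-class name, array contents and accessors are assumptions sketched to illustrate the idea of int-indexed String arrays with multi-page entries.

public final class HelpTextSketch {

    private HelpTextSketch() { }

    // Help for the estimate interface; some entries have several 'pages'.
    public static final class EstimateHelp {
        private static final String[][] HELP = {
            { "Page 1 of the KSLOC help...", "Page 2 of the KSLOC help..." },
            { "Help for the effort multiplier panel..." }
        };

        public static int pageCount(int index)            { return HELP[index].length; }
        public static String getHelp(int index, int page) { return HELP[index][page]; }
    }
}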

Pseudocode ix

Figure 3 – The help area and information bar. The info bar displays the Strings retrieved from HelpSubsystem.InfoText; on mouse over, a component's information text is shown there, and when the component is clicked, its associated help text is displayed. The help text area displays the Strings retrieved from arrays in HelpSubsystem.HelpText; some help contains multiple pages, which can be navigated through with clicks. The instructions and close buttons are always displayed: they show the instructions associated with the interface and close the help text when clicked.


Figure (3) depicts the information and help text when called and outputted to the user. The components implement ActionListeners and MouseListeners, which call methods in the UI’s model class to update the display components with Strings of help text. Each interface is based upon the Model-View-Controller pattern and extends an abstract interface to enforce consistency. This point is important, and is expanded upon in the Project Evaluation in more detail. The model object for each interface directly accesses the HelpSubsystem to retrieve objects (Pseudocode x, Figure 4) and update these on the help JTextArea.

Pseudocode x

Figure 4 – UML description of the Help Architecture – N.B. does not include the Abstract class
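A rough sketch of how such a model might push retrieved help text into the view's JTextArea is shown below, reusing the illustrative HelpTextSketch from Feature 2.2. The class and method names are assumptions, not the project's actual classes.

import javax.swing.JTextArea;

public class HelpModelSketch {

    private final JTextArea helpArea;

    public HelpModelSketch(JTextArea helpArea) {
        this.helpArea = helpArea;
    }

    // Called by the controller's ActionListener when a help icon is clicked.
    public void showHelp(int index, int page) {
        helpArea.setText(HelpTextSketch.EstimateHelp.getHelp(index, page));
        helpArea.setVisible(true);   // the help area stays hidden until requested
    }
}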

JUSTIFICATION As far as possible, the design seeks to minimise help on screen until requested by the user. This point relates to the research and critical evaluation of existing products in the Domain Analysis (Section 1), and will be expanded upon in the Project Evaluation (Section 4). Floating tooltips are implemented where appropriate, but the information bar is designed to supersede this functionality and provide an extra layer of semantic support for the user. It is hidden when not in use, to reduce the components on the display; again, this is discussed in more detail later. The instructions can always be accessed on the display through the instructions button. An option is also included in the menu bar, should the user not be familiar with the interface. Again, this is important, as the user is expected to be unfamiliar with both the system and the actions required to complete an estimate. EVALUATION The help and information is contained within one package. This is appropriate to this design as semantics are at the heart of the success of the project. Logically grouping and centralising the help facilitates editing and adding to the help and information text. Again, an option that was considered was holding all of the help text in an external file, but to maintain the integrity of the help, this idea was dismissed. FEATURE 2.3 'The program should provide a means of selecting the scope of the inputs.' EXPLANATION Fundamentally, the project enables the user to initiate an estimate in one of three ways: by entering the size of the project explicitly; by calculating the size through UFPs; or by calculating the UFPs from a set of input data. The StartUI class provides the interface for this selection, and includes components to explain the differences between the choices, and the data required by certain choices (Figure 5).

Figure 5 – The StartUI interface. Callouts indicate the current help text for each choice, the control used to select a choice, the button used to explain a choice, and the entry fields associated with a choice.

When selected, the choice is highlighted and expands if any further data is required. The interface uses a JComboBox to help validate the language conversion choice, which converts the value to KSLOC as explained in Feature 1.4. The choices are mutually exclusive, forcing the user to select one of the options. As explained in Feature 2.2, the help and information text associated with the interface is displayed in the help area and information bar as appropriate.


FEATURE 2.4 'The program should demonstrate the relative weighting of each input value.' EXPLANATION Since some values are weighted more heavily than others, selecting ‘high’ on two different drivers will have a different effect. An important feature of this project was to find a way to draw the distinction between drivers based on their weight, in order to reveal model sensitivity. This is achieved by expressing the currently selected weight of the parameter relative to the maximum weight of the driver with the most influence. The relative weight is calculated by dividing the current weight of an input by the maximum possible weight (Pseudocode xii). This value can then be used to draw the percentage, expressed as bars, onto a component.

Pseudocode xi – VolumeGraph creation

The VolumeGraph class achieves this by extending the methods of a JLabel and invoking java.awt.Graphics to draw a graphical representation (Figure 6).
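A minimal sketch of such a component is given below: a JLabel subclass that paints bars in proportion to the driver's current weight relative to the maximum possible weight. The names, bar count and colours are illustrative assumptions.

import java.awt.Color;
import java.awt.Graphics;
import javax.swing.JLabel;

public class VolumeGraphSketch extends JLabel {

    private static final int BARS = 10;
    private final double relativeWeight;   // currentWeight / maxPossibleWeight, in [0, 1]

    public VolumeGraphSketch(double currentWeight, double maxPossibleWeight) {
        this.relativeWeight = currentWeight / maxPossibleWeight;
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        int filled = (int) Math.round(relativeWeight * BARS);
        int barWidth = getWidth() / BARS;
        for (int i = 0; i < BARS; i++) {
            // Filled bars show the proportion of the maximum weight currently selected.
            g.setColor(i < filled ? Color.ORANGE : Color.LIGHT_GRAY);
            g.fillRect(i * barWidth + 1, 2, barWidth - 2, getHeight() - 4);
        }
    }
}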

Pseudocode xii – Instantiation of a VolumeGraph object

Each VolumeGraph is a distinct object: it is added to the display when instantiated, and destroyed and replaced with a new object to update the display (Pseudocode xi). When the scale of the parameter is changed, the display updates with the new weighting. The values underlying the parameter are displayed in the information bar when the mouse moves over the component.

Figure 6 – Two VolumeGraphs

The information icons permit access to more detailed explanations of the drivers; namely the current and maximum weight of the driver, and the maximum possible weight for a driver (Figure 7).

Figure 7 – Two separate info bar entries, for the corresponding VolumeGraphs.

EVALUATION Figure (6) graphically represents the difference in relative weight of two effort multipliers when they are both set to ‘high’. This is important, as the Domain Analysis concluded that expressing the underlying weight matrix was a problem that needed to be addressed. These components dynamically change and immediately update the estimate, which gives a better ‘feel’ of the repercussions of setting or changing a parameter.


FEATURE 3.1 'The program should provide a means of displaying the estimate output graphically.' EXPLANATION Rather than a simple numeric value, the effort is outputted visually, as a bar graph (Figure 9) and a line graph (Figure 8). The scale of the bar graph visualisation runs from the minimum possible effort to the maximum possible effort; i.e., where all effort and scale drivers are set to their maximum weight. The line graph shows the effect of changing the size of the project on a scale of +/-50%.

Figure 9 – Estimate Bar Chart. The bar expresses the effort as a percentage of the maximum, between the minimum and maximum possible effort.

Figure 8 – Estimate Line Graph. The size label/modifier indicates the change in project size.

These graphic displays, as well as all other graphical components, are logically centralised in the GraphicsSubsystem package. The estimate parameters are used to construct an Estimate object. Collections of Estimate objects are managed by the EstimateManager class, which maintains a HashMap of all estimate instances. These objects are contained within the DataManagementSubsystem and, consequently, are decoupled from the InteractionSubsystem, which contains the user interfaces, and the ApplicationSubsystem, which contains the program logic. When the user executes a new estimate, an Estimate object is created in the following sequence (a condensed sketch of this calculation and storage follows the list):

1. A new Estimate object is instantiated, using ‘size’ as the constructor argument;
2. The Estimate is put in the HashMap, using the estimate number as the key;
3. The EstimateUI parameter choices are used to instantiate two int arrays in the Estimate;
4. Multiplier parameters are used to calculate the value of ‘m’ in the EffortMultiplier class;
5. The value of ‘m’ is instantiated as a variable in the Estimate object;
6. Scale parameters are used to calculate the value of ‘b’ by the EffortScale class;
7. The value of ‘b’ is instantiated as a variable in the Estimate object;
8. The size, ‘b’, and ‘m’ variables are used by the Cocomo class to calculate the effort;
9. The output of the estimate is instantiated as a variable in the Estimate object;
10. The number of estimates executed is incremented.
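The sketch below condenses this sequence: compute the effort from size, ‘b’ and ‘m’, and store it in a HashMap keyed by the estimate number. The class and method names are assumptions; only the formula effort = A × size^b × m, with the published COCOMO II.2000 calibration A = 2.94, follows the model itself.

import java.util.HashMap;
import java.util.Map;

public class EstimateManagerSketch {

    private final Map<Integer, Double> estimates = new HashMap<>();
    private int estimateCount = 0;

    // Creates, stores, and returns a new effort estimate in person-months.
    public double newEstimate(double sizeKsloc, double m, double b) {
        double effort = 2.94 * Math.pow(sizeKsloc, b) * m;   // COCOMO II effort equation
        estimateCount++;
        estimates.put(estimateCount, effort);                // keyed by estimate number
        return effort;
    }

    public double getEstimate(int number) {
        return estimates.get(number);
    }
}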


The graphs are instantiated by passing an Estimate object as a parameter in the constructor (Pseudocode xiii). The Estimate objects contain methods to get the variables necessary to calculate the coordinates required by the Graphics component.

Pseudocode xiii

The graph components are embedded in a tab in a JTabbedPane. This pane is itself embedded as the component of another tab. This (master) tab has an EstimateTabComponent object set as its TabComponent (Pseudocode xiv). To clarify, there is a difference between adding a ‘component’ to a ‘tab’, and setting the ‘Tab Component’ of a ‘tab’. Figure (10) seeks to explain this difference.

Figure 10 – Tab Component. The diagram distinguishes the component added to a tab from the TabComponent set on the tab itself.

The EstimateTabComponent is set as the replacement of the default tab component. The functionality associated with this component is discussed below. When a new Estimate is instantiated, the fundamental sequence of events for adding a new EstimateTabComponent is as follows:

1. Create a container JTabbedPane component;
2. Create an EstimateBarGraphTab;
3. Create an EstimateLineGraphTab;
4. Add these objects to the container component;
5. Create a new tab in the display pane;
6. Add the container pane to the new (master) tab;
7. Create a new EstimateTabComponent;
8. Add the EstimateTabComponent to the master tab;
9. Set the selected tab of the display pane to this tab;
10. Update the VolumeGraph objects in the EstimateUI.

Pseudocode xv – Tab creation
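A minimal sketch of this sequence is given below: the graph container is added as the tab's component, while a custom header panel (title plus close button) is set as the tab's TabComponent. The names are assumptions, not the project's actual classes.

import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JTabbedPane;

public final class EstimateTabSketch {

    private EstimateTabSketch() { }

    public static void addEstimateTab(JTabbedPane displayPane, JTabbedPane graphContainer,
                                      int estimateNumber) {
        displayPane.addTab(null, graphContainer);            // content of the master tab
        int index = displayPane.getTabCount() - 1;

        // Custom tab header: title label plus a close ('x') button.
        JPanel tabHeader = new JPanel();
        tabHeader.setOpaque(false);
        tabHeader.add(new JLabel("Estimate " + estimateNumber));
        JButton close = new JButton("x");
        close.addActionListener(e ->
                displayPane.remove(displayPane.indexOfTabComponent(tabHeader)));
        tabHeader.add(close);

        displayPane.setTabComponentAt(index, tabHeader);     // replaces the default tab renderer
        displayPane.setSelectedIndex(index);
    }
}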

The tab can be closed by clicking the ‘x’ icon to the right of the tab. The custom tab component displays the name of the estimate. By default, this is set as the number of the estimate. However, if the user double clicks on the tab title field, a new, custom title can be set.


Changing the name sets the title variable within the Estimate object, so if it is saved and re-loaded, the tab will try to display the custom name. If a custom title has not been set, the tab is identified by the default setting of the current estimate number. These features are achieved by creating inner classes within the tab component that extend JButton and JTextField methods, and implement an ActionListener and a MouseListenerxxii. Components are instantiated within the EstimateTabComponent. This has access to the tabbed display, as a view object must be passed to the component in its constructor’s parameters. This enables the ‘close tab’ button to retrieve the index of its tab, and call a removeTab(index) method, which removes the tab and the object. JUSTIFICATION The HashMap was used because it enabled more flexibility in terms of storing an undefined number of objects. HashMap structures are also iterable, which facilitates data retrieval. Arrays could have been implemented; however, these data structures are not as flexible as a HashMap, since they require stricter instantiation; i.e., the length must be known in advance. Since a count of the number of estimates can provide a unique key, and the organisation of the data structure was not a priority, the HashMap is a justifiable choice. The renaming feature is intended to facilitate the comparison of two tabs that have different input parameters. For example, one tab estimate can be set as one scenario, and compared to another tab with the name of another scenario. The graph objects are not stored explicitly; rather, they are created and destroyed as required by the interface, using an Estimate as a parameter. Furthermore, it would be simple to add more graphical components to the display due to the use of the embedded JTabbedPane; it is a matter of adding another tab. Within the constraints of this project it was not possible to add extra graphical functionality; however, the development principles have facilitated the possibility of adding more components. EVALUATION The design of these components was aimed towards achieving as much polymorphism as possible. Consequently, these objects can be used in other contexts than the EstimateUI. They are components that extend the methods of a JTextArea and simply need an Estimate object in order to be created. FEATURE 3.2 'The program should provide a means to dynamically update the estimate when a single effort parameter is altered.' The EstimateBarGraphTab and EstimateLineGraphTab objects create the EstimateSizeSpinner objects, which allow the user to change the size of a particular estimate. These objects have access to the EstimateModel object and can create a new Estimate with the size indicated by the user, using a ChangeListenerxxiii. As such, the display can be immediately ‘refreshed’ to convey the effect of the change in size.

Pseudocode xvi
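A rough sketch of such a spinner listener is shown below, reusing the illustrative EstimateManagerSketch from earlier; the class and method names, spinner limits, and console output are assumptions (the real component would repaint the graph tabs rather than print).

import javax.swing.JSpinner;
import javax.swing.SpinnerNumberModel;

public final class SizeSpinnerSketch {

    private SizeSpinnerSketch() { }

    public static JSpinner createSizeSpinner(EstimateManagerSketch manager,
                                             double m, double b) {
        JSpinner spinner = new JSpinner(new SpinnerNumberModel(100.0, 1.0, 10_000.0, 1.0));
        spinner.addChangeListener(e -> {
            double size = ((Number) spinner.getValue()).doubleValue();
            double effort = manager.newEstimate(size, m, b);   // re-run the estimate
            System.out.println("Refreshed effort: " + effort);
        });
        return spinner;
    }
}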


This is achieved by instantiating a new Estimate object. The EstimateTabComponent object contains components to determine whether a new tab should be created, or if the existing tab should display the results of a new estimate (Figure 11).

Figure 11 – The ‘New Tab’ and ‘Overwrite Tab’ controls

In the case of the overwrite tab selection, the existing Estimate object associated with the EstimateTabComponent is removed from the HashMap, and a new Estimate object is instantiated. When the new tab icon is selected, a new instance is instantiated and added to the HashMap collection. Similarly, once the first estimate has been executed, if parameters in the EstimateUI are altered, a new Estimate will be created, either as a new tab, or replacing the existing Estimate object. EVALUATION The design enables the user to immediately see the response of the effort estimate to the associated parametric inputs. Multiple tabs are comparable side by side, and the VolumeGraphs are updated dynamically to indicate the inputs associated with each Estimate object when different tabs are selected. This is possible because each Estimate object is associated with a tab through the unique key (the estimate number), which enables the Estimate object to be retrieved from the HashMap. FEATURES 3.3 & 3.4 'The program should provide a means to semantically convey the result of an estimate.' 'The program should provide a means of showing previous estimate results.' EXPLANATION These features are described within the discussion of other features; namely, the VolumeGraphs (Feature 2.4), the graphical representation (Feature 3.1), and the dynamic update (Feature 3.2). The parameters for the results are updated dynamically, depending on the tab currently selected; thus, the user can comprehend which weights were responsible for the constitution of the effort. EVALUATION In terms of the constraints of the development time, the design of advanced graphical outputs was not possible. The basic functionality has been achieved; i.e. the weighted values are presented alongside the visualised output of the effort. However, creating a visualisation to express the composition of the effort would have improved the functionality of the project as a whole.


FEATURE 3.5 'The program should provide a means of loading saved estimates from different sessions.'

EXPLANATION Saving the current estimate is achieved by outputting the variables of an Estimate object to a formatted String. The Estimate object is retrieved from the HashMap collection by its ‘id’ variable; namely, the estimate number. The formatted String contains an XML header, and the appropriate Estimate data is written between XML elements, stored as variables within the XMLWriter class (Figure 12). In terms of loading a previous estimate, the program uses SAX, an event-driven parser implemented by the XMLParser class in the DataManagementSubsystem package, to provide a means of importing XML data. The interface opens the JFileChooser at the user’s project directory by implementing the System.getProperty("user.dir") methodxxiv. SAX parses the XML file and, when it encounters data between an opening tag and a closing tag, extracts this PCDATA into a char array. Through this method, tag names can be defined and data assigned to variables within the program. This data is then used to create a new Estimate object, which is stored in the EstimateManager HashMap and displayed on the EstimateUI display through the process discussed in Feature 3.1.

Figure 12 – The XML output of the ‘save’ option.
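A minimal sketch of the SAX-based loading described above is given below: character data between known tags is captured and used to rebuild an Estimate's fields. The element names are assumptions in the spirit of the output shown in Figure 12, as is the handler's structure.

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class EstimateSaxHandlerSketch extends DefaultHandler {

    private final StringBuilder text = new StringBuilder();
    private double size;
    private double effort;

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        text.setLength(0);                       // reset buffer for the next element
    }

    @Override
    public void characters(char[] ch, int start, int length) {
        text.append(ch, start, length);          // PCDATA arrives here, possibly in chunks
    }

    @Override
    public void endElement(String uri, String localName, String qName) {
        if ("size".equals(qName)) {
            size = Double.parseDouble(text.toString().trim());
        } else if ("effort".equals(qName)) {
            effort = Double.parseDouble(text.toString().trim());
        }
    }

    public static EstimateSaxHandlerSketch parse(File file) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        EstimateSaxHandlerSketch handler = new EstimateSaxHandlerSketch();
        parser.parse(file, handler);
        return handler;
    }

    public double getSize()   { return size; }
    public double getEffort() { return effort; }
}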

JUSTIFICATION In terms of other parsing options, two major types of parser exist that seek to provide a method for importing from XML-based data storage, such as the files used in this project. Event-based APIs, such as SAX discussed above, handle different events and report the data directly to the application through method calls. Document Object Model (DOM) parsing is an alternative method. DOM parsers seek to map an XML document onto a tree structurexxv, and then allow the navigation of this tree. This is a useful approach; however, it is relatively more complex to execute than SAX. Therefore, due to simplicity and efficiency, SAX was implemented for this project. EVALUATION In general terms, the production of XML permits an ‘offline’, ‘disconnected’ version of the data. This could have useful applications for users that require data outside the scope of the program; essentially, a subset of data can be output and manipulated. Changes made to this data can be reviewed within the program at a later date. XML is a useful format for sending data electronically, and the data is extensible; i.e. it has the potential to be self-describing and thus semantically accessible as well as functionally accessible. In terms of different choices, other mark-up languages - such as SGML - could have been used. However, SGML is verbose for the scope of this project, while other mark-up languages, such as HTML, are too simple and do not provide the same degree of extensibility that XML provides.


FEATURE 4.1 'The program should allow the execution of ‘tutorials’ to demonstrate estimates under different assumptions; i.e. high, medium and low values on different sizes of project.' EXPLANATION The system is intended to be a teaching tool and, as such, it is central to the success of the project to provide a framework of help and the potential to support pedagogical tools. However, Feature Area 4 marks the departure from the intended development schedule. Features 4.1-4.4 were not implemented as intended; the implications of this are discussed at greater length in the Project Evaluation.

Figure 13 – The Help Interface. Callouts indicate a selected help bubble, an unread help bubble, a moused-over help bubble, and the navigation components.

Instead, a ‘Help Interface’ feature was included (Figure 13), largely in response to the first batch of empirical evaluations, which is discussed in more detail in Usability Testing (4.1.2). The HelpUI contains a HelpArea object with an embedded BufferedImage of a certain feature of the system, and embedded dynamic help polygons to highlight the subtlety of a component. Help text associated with a feature is displayed within the help text area, and the various pages of the help are navigated through using the corresponding components. The HelpHelpGraphic component extends JButton methods, and exploits these to create dynamic components that repaint on user interaction (Pseudocode xvii).

Pseudocode xvii – Construction of the help polygon
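An illustrative sketch of a clickable ‘help bubble’ drawn as a polygon that repaints once it has been visited is given below; the shape, colours and class name are assumptions, and a real component would also manage tooltips and the associated help text.

import java.awt.Color;
import java.awt.Graphics;
import java.awt.Polygon;
import javax.swing.JButton;

public class HelpBubbleSketch extends JButton {

    private final Polygon bubble;
    private boolean visited = false;

    public HelpBubbleSketch(int[] xPoints, int[] yPoints) {
        this.bubble = new Polygon(xPoints, yPoints, xPoints.length);
        setContentAreaFilled(false);
        setBorderPainted(false);
        addActionListener(e -> {
            visited = true;    // track which bubbles have been read
            repaint();
        });
    }

    @Override
    protected void paintComponent(Graphics g) {
        // Unread bubbles are highlighted; visited bubbles are greyed out.
        g.setColor(visited ? Color.GRAY : Color.YELLOW);
        g.fillPolygon(bubble);
    }
}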

JUSTIFICATION An abstract HelpArea class was necessary, although it is likely that an interface could also have proven useful. Each ‘page’ of help - expressed as a Java class - extends the HelpArea methods, and is cast into a HelpArea object by the HelpAreaManager, which maintains a HashMap of these pages. This enables more HelpArea objects to be created with relative ease, as the HelpModel iterates through the HashMap and automatically instantiates as many objects as are found. Overall, it was considered that this was a semantic, efficient way of providing a data structure and an interactive help interface. EVALUATION The HelpUI was necessary to supplement the understanding of system features and components for novice users. The interface is successful in highlighting various components in a simple, semantic fashion. The help is interactive and tracks which help balloons have been selected, and which are yet to be viewed. Overall, this interface forms a core part of the semantic framework of help and information that is required by this project, and accords with Shneiderman’s assertion that successful designs provide ‘clear challenges, helpful tools, and excellent feedback’xxvi.

xix Jeff De Luca @ http://www.nebulon.com/articles/fdd/latestprocesses.html | Accessed 02/03/09
xx Sommerville, Ian. Software Engineering (8th) (2007) p.243
xxi Ibid., p.251
xxii The Java Tutorials, 'How to Use Tabbed Panes' @ http://java.sun.com/docs/books/tutorial/uiswing/components/tabbedpane.htm || Accessed 09/08/09
xxiii The Java Tutorials, 'How to Use JSpinners' @ http://java.sun.com/docs/books/tutorial/uiswing/components/spinner.html || Accessed 13/07/09
xxiv David Howard, 'Java Applications and the "Current Directory"' @ http://www.devx.com/tips/tip/13804 || Accessed 12/08/09
xxv David Megginson, 'Events vs. Trees' @ http://www.saxproject.org/ || Accessed 07/07/09
xxvi Shneiderman, Ben & Catherine Plaisant, Designing the User Interface: Strategies for Effective Human-Computer Interaction (4th), Addison Wesley (2005) p.547


4    PROJECT EVALUATION

The ‘Project Evaluation’ is subdivided into two components; however, these sections are by no means mutually exclusive. ‘Design Testing’ (4.1) outlines the validation and testing methodology, provides an analysis of the results, and offers a critique of the methodologies used. The ‘Project Evaluation’ builds upon this assessment, yet seeks to abstract the analysis to a higher level, offer an evaluation from a broader perspective, and suggest potential improvements and future work.

4.1    DESIGN TESTING

Testing is decomposed into a general ‘testing strategy’ and ‘usability testing’. The general strategy describes the validation processes used throughout the implementation of the project, while the usability testing documents the empirical evaluations carried out towards the end of the development. 4.1.1 TESTING STRATEGY In accordance with the development methodology, each feature has been designed, tested, and redesigned as appropriate before being included in the final build. Primarily, however, the testing conducted during the implementation phase depended upon maintaining standards of quality in the user interface. User interaction is central to the success of this system as a teaching tool. In this regard, heuristic evaluation, based upon Shneiderman’s ‘eight golden rules’xxvii, was conducted after the promotion of all graphical and interactive components to the build. A detailed outline of these ‘rules’ is included (Appendix E); however, the application of these principles to this project is discussed below.

Strive for consistency: Each user interface implements the abstract class SplashUI. This enforces consistency across displays, which reduces memory load on the user. The ‘Instructions’ button is consistently located within the ‘help text area’, and the information bar is consistently placed above this component.

Cater to universal usability: Design features and components cater for both novice and advanced users. The extensive help components support tasks and the semantics of outputs, while shortcuts and hidden help text enrich the interface for more experienced users.

Offer informative feedback: The majority of the display components implement Mouse Listeners, which offer feedback from the context of the information bar. Many components are dynamically painted, and change visual state depending upon actions.

Design dialogs to yield closure: Sequences of actions are designed across atomic interfaces. When the user has completed a task, or made a core decision, the next interface is opened, and the existing frame is discarded.

Prevent errors: As far as possible, the design seeks to minimise errors. JComboBoxes and other components enforce the selection of valid data. If input fields are necessary, the system validates data, and splashes error messages to inform the user of any problems.

Permit easy reversal of actions: Undo/Redo features would have been a useful addition in this regard; however, time constraints prevented their implementation. The display does allow the explicit saving of estimates, which offers the user the possibility of backing up and restoring data.

Support internal locus of control: The interface responds to user actions, and does not seek to override such actions unless an error is encountered.

Reduce short-term memory load: The instructions button is always accessible to remind users of the task at hand. As far as possible, interface artefacts are minimised to maintain a simple, semantic, stylish display.

4.1.2 USABILITY TESTING Since the project is intended to be used as a teaching tool, gaining user feedback and implementing design tweaks is central to the success of the project. With this in mind, several empirical evaluation techniques were used to collect data. Various evaluative tools would have been appropriate in the context of the estimation software. Informal usage-based empirical methods provide useful evidence that reveals potential usability issues and errors. Two rounds of usability testing were conducted. This was essential to improve the system as a whole, by validating the design against the requirements. The usability testing is divided into three sections. Firstly, the methods used are discussed and justified; secondly, the results of this process are analysed; finally, an evaluation of the usability testing conducted as a whole is offered. METHODS USED A combination of three techniques formed the basis of the empirical evaluation. A sample of ten participants was used as the basis of this process. Participants were given a task list and observed. Following the completion of tasks, participants completed a questionnaire, documenting their experiences. These methods are discussed and justified below:

- Task lists were an appropriate tool used in conjunction with the other techniques. They provided a simple, efficient solution to determining if core requirements could be achieved. The task list is included in the Appendix (G).
- Direct observation was used to permit an evaluation of the user’s experience and interaction with the interface during the execution of the task list; field notes were used to document responses and usability issues.
- Questionnaire responses were used to elicit and record information regarding the participant’s interaction with the system during the execution of the task list. The questionnaire is included in the Appendix (F).


QUESTIONNAIRE RESULTS The questionnaire was divided into two sections. Section (A) sought to evaluate the framework of help and information that the system is designed around. Section (B) sought to elicit more specific usability issues and evaluate the success of conveying the semantics of an effort to the user. The results examined below provide a selection of the data gathered. The bar charts compare the results of both rounds of testing side by side. Each result is analysed, and the improvements made to the design between testing phases are outlined.

'HOW CLEAR WERE THE MENU OPTIONS?' (Questionnaire Data 1 – responses from Evaluation 1 and Evaluation 2 on a five-point scale from 'Very Clear' to 'Very Unclear')

Evaluation 1 indicated that although the majority of participants found the menu options clear, problems existed with the interface and it required improvements. Evaluation 2 indicates a dramatic improvement in user comprehension, although this is likely to be partly attributed to system familiarity bias.

The design of the menu options did not change explicitly; rather, an extra layer of help (the ‘Help Interface’) was designed to explain how to access the extra help and information options (Feature INSERT). These changes improved the overall understanding of menu options, which was essential to the project as a teaching tool.

'HOW CLEAR DID YOU FIND THE INSTRUCTIONS?' (Questionnaire Data 2 – responses from Evaluation 1 and Evaluation 2 on a five-point scale from 'Very Clear' to 'Very Unclear')

Evaluation 1 indicated that the instructions could be improved upon. Evaluation 2 indicates that, although the clarity of instructions was better, the design of the instructions could still be improved significantly.

The instructional text was reworded to make the actions required to complete tasks clearer. In some cases, extra pages of instructional information were added to cover more points, particularly with reference to the UFP conversion process.


'DID YOU FIND THE HELP TEXT USEFUL?' (Questionnaire Data 3 – responses from Evaluation 1 and Evaluation 2 on a five-point scale from 'Very Useful' to 'Useless')

Evaluation 1 indicated that the help text required improvements. Evaluation 2 provided an interesting result: while some participants found the help more useful, the number that found it ‘not useful’ actually increased. One explanation for this result is that the participants interpreted ‘useful’ as ‘required to complete’ the tasks. The increased familiarity with the system could also account for this response.

Help text was expanded and reworded. The new ‘Help Interface’ explicitly guided the users towards this help text as required, to make this help more accessible. More features were given extra information components to increase access to help facilities.

'HOW CLEAR WAS THE EFFECT OF CHANGING THE VALUES OF THE INPUTS?' (Questionnaire Data 4 – responses from Evaluation 1 and Evaluation 2 on a five-point scale from 'Very Clear' to 'Very Unclear')

Evaluation 1 indicated that, overall, participants felt comfortable with understanding the effect of changing the value of the inputs. Evaluation 2 demonstrates a small improvement in the process of changing input parameters.

The design was changed slightly to increase the size of the JComboBoxes, so the full text of the scale could be read. Previously, the scales with longer names, such as ‘Extra High’, were partially obscured. This is expanded upon in the Observation Results.

'HOW CLEAR WAS THE RELATIVE WEIGHT OF EACH INPUT?' (responses from Evaluation 1 and Evaluation 2 on a five-point scale from 'Very Clear' to 'Very Unclear')

Evaluation 1 indicated that several participants had problems understanding the weight of each driver in terms of the overall estimate. Evaluation 2 reveals a marked improvement in the understanding of the weightings. This dramatic improvement could, however, be attributed to familiarity with the system.

The design was radically changed to visualise the weight of each driver as a ‘volume graph’ (Feature 2.4). Previously, only a textual representation of the data was available to the participants.

'DID YOU FIND THE HELP INTERFACE USEFUL?' (Questionnaire Data 5 – responses on a five-point scale from 'Very Useful' to 'Useless')

The help interface was designed to support the issues raised from the first batch of testing. In the second batch of evaluations, the majority of participants indicated that this feature was ‘very useful’. The design of the menu options did not change explicitly; rather, an additional layer of help was designed to explain how to access the extra help and information options. These changes improved the overall understanding of menu options, which was essential to the project as a teaching tool.

'DID YOU FIND THE LINE GRAPH USEFUL?' (Questionnaire Data 6 – yes/no responses)

The line graph was designed following the first batch of evaluations. In the second batch, participants indicated that the line graph was, generally, not felt to be useful. The graph does express the impact of changes with a great degree of sensitivity, and it is useful in demonstrating the effect of the source code being incorrectly estimated.

The design of the line graph was intended to give users a better feel of the impact of changing the size of the source code upon a project (Feature 3.1).

In general terms, the first batch of questionnaire results highlighted the core areas that required improvement, while the second batch justified the changes implemented. Two fundamental sections of functionality were redesigned to cater to the participants’ feedback. Firstly, the ‘Volume Graph’ components were designed to improve the semantics of the estimate interface. Secondly, a help interface was designed and implemented to explicitly explain functionality and components, and provide the best framework for a new user to approach the software. On the initialisation of the program, an interface queries whether the user is new to the system; if so, the user is prompted to consult the help interface, which explains the underlying framework of the program, and the novel features associated with it. One suggestion elicited by the questionnaire (Section-B/6) was the request for a tutorial to guide the user through an estimate, and explicitly explain the results and effects of selecting different parameters. This feature was initially planned in the proposed approach, and is specified in the requirements (Function 4.1); however, development time did not permit the addition of this feature to the build. The implications of this problem are discussed to a greater extent in the Project Conclusion.


DIRECT OBSERVATION RESULTS Direct observation of participants during the execution of the task list elicited several useful results (Observation 1,2,3). The most pertinent results are documented below; each result is analysed, and the improvements made to the design between testing phases are outlined. The features are described in more detail in the corresponding section of the Design Analysis.

Observation 1 – Info button problem
Problem: Several users did not immediately associate the info button with their need for extra information to complete a task.
Solution: Primarily, the addition of the help interface sought to make explicit the functionality of more subtle components.

Observation 2 – Lock tab problem
Problem: Users experienced difficulty in associating the relevant button with the ‘lock tab’ task.
Solution: The ImageIcon originally implemented was redesigned to convey its meaning more clearly. Additionally, an ActionListener was implemented in the inner class of the EstimateTabComponent, and on mouse over, the info bar explained the functionality of the component.

Observation 3 – File retrieval problem
Problem: Loading the XML file required the participant to navigate through several sub-folders to find the correct source, which proved frustrating for some participants.
Solution: The JFileChooser was redesigned to open in the current user directory, implementing: fileInput.setCurrentDirectory(new File(System.getProperty("user.dir")));

In general terms, the direct observation established problems encountered by users unfamiliar with the interface. As I was intimately familiar with the subtleties and functionality of the system, this evaluation proved useful in demonstrating the steps required by a user to become acquainted with the components, and exposing the semantic gaps that the design had introduced. The redesign of components and implementation of new functionality sought to bridge these gaps, and reduce the amount of guesswork required to become familiar with the interface components specifically, and with the system as a whole. EVALUATION OF THE USABILITY TESTING PROCESS The fundamental problem associated with the testing process is that the same sample of participants was used over both testing phases. Clearly, this has implications regarding the accuracy and usefulness of the results, as data recorded from the second round of testing is naturally influenced by the participant’s additional familiarity with the system. This is likely to inflate the success of the results, which renders any comparison of the data relatively less useful.


However, within the scope of the project, this restraint is acceptable, given the lack of resources at hand and the time constraints imposed upon the evaluation period. The aim of this process was, broadly, to establish whether the changes implemented addressed the problems identified by the first round of testing. Generally, these changes were successful. However, it must be accepted that any close analysis of the results is not likely to elicit accurate data. The sample of ten participants was deemed to be appropriate in terms of the scale of the project. Clearly, the size of this sample raises questions regarding the statistical significance of the results at hand. For example, given that the sample of subjects was small, the task list did not yield as great a depth of information as a larger survey may have provided. However, the scale of the sample also provided opportunities to gain a finer grain of feedback than may have been possible with a wider survey. Behavioural observation, while sometimes difficult to transcribe and analyse on a larger scale, and prone to the ‘Hawthorne effect’, yielded a depth of information and offered the potential to accurately tailor the interface to the user needs. In terms of the questionnaire, a limited number of questions (12) were posed to encourage participation and completion; the questionnaire used both closed and open questions, which was appropriate to the scale of the sample selection. In regard to the closed questions, the respondents were encouraged to leave additional comments. Furthermore, scaled questions (using a traditional 5-point scale) were used to statistically gauge subjective responses. This was deemed appropriate to broadly gauge the success of certain features of the interface; however, due to the relatively small sample size, again, this data would not be suitable for yielding a significant numeric analysis. In regard to the open questions, a sentence completion technique was used to elicit ideas and feedback to improve the project from an end-user viewpoint. Due to the small sample size, this was deemed an appropriate method of elicitation. The respondents were also encouraged to leave additional comments in the margins or on a separate piece of paper if they wished. In terms of other techniques considered, incident diaries may have revealed information regarding rarer problems. This tool is a relatively inexpensive method of evaluation; however - despite issues relating to participant motivation - it is more effective over a longer term. Therefore, this technique was ruled out as inappropriate for the scope of this project. A comparative analysis study could have been useful in terms of validating the success of the system, in terms of bridging the problems identified in other systems for teaching cost estimation (Domain Analysis). Participants could have attempted to complete a set of tasks using another tool, and the results of this interaction gathered. The participant would then use the same list to achieve tasks in this system. However, the size of the sample would have limited the statistical significance of such an evaluation. Furthermore, within the context of the time constraints, it was established that effort should be focused upon assessing the success and identifying the problems associated with this project. To this end, the usability testing phase and the heuristic evaluation were, overall, successful in accomplishing these goals.


4.2    PROJECT EVALUATION

This section seeks to conclude the project with a higher level assessment of the system; fundamentally, evaluating its overall success as a solution to the problems elicited, and discussing the deviations from the planned development. In this regard, the ‘Implementation evaluation’ (4.2.1) briefly draws together points regarding the processes and choices made during the implementation. ‘Functionality Evaluation’ (4.2.2) assesses the success of the system functionality against the intended aims and offers future work that can build upon this project. The ‘Conclusion’ (4.2.3) draws both of these elements together, and offers an assessment of the ultimate success of the project. 4.2.1 IMPLEMENTATION EVALUATION ARCHITECTURE In terms of the implementation, ultimately, the design seeks to accord with what Sommerville describes as the ‘layered approach’xxviii, which supports the incremental development of systems. Certainly, this facilitated the development of this project, where features are incrementally added to the final build. Sommerville notes certain problems with this design; namely that while this system can, within the inner layers: Provide basic facilities, such as file management, that are required at all levels [...] services required by the user at the top level may therefore have to punch through adjacent layers to get access to services that are provided several layers beneath it.

This, in effect, serves to subvert the model; essentially, the ‘outer layer’ in the system does not depend only on its immediate predecessor. However, Sommerville offers a compromise. He attests that: These styles can be used separately or together. For example, a system may be organised around a shared data repository but may construct layers around this to present a more abstract view of the dataxxix.

Indeed, this design conflates these methods; although, in reality, they do not appear to be mutually exclusive. This compromise structure has worked effectively for this design, enabling the decoupling of components and a more abstracted view of the data. DESIGN The design fulfils the intention to minimise the visual artefacts available on screen by hiding the help and information until the user wishes to access this data, thus reducing memory load. The displays are consistent, which was achieved through the use of an abstract user interface class that places commonly used components on each screen. Some interface components are, in the final analysis, over-engineered. The rationale of the user interactions was to develop a simple, semantic, stylish interface, which has been achieved, but the time taken to refine one feature could have been better spent defining new features. However, overall, I am satisfied with the look and feel of the display, which reacts smoothly to user interaction, and seeks to hide data complexity at all times. TECHNOLOGY Java was used to implement this project, largely due to familiarity with the language. This said, a few problems were encountered during implementation, caused chiefly by
unfamiliarity with certain techniques, where the learning process could be slow; however, once learned, the techniques are relatively simple to redeploy where necessary. Essentially, the development required concentration on a specific feature for an extended period, until the challenges associated with this feature were overcome (or worked around). Once these problems were solved, the process of reusing functionality helped to increase the rate of development significantly. For example, the Tab Component design for the FunctionPointUI required research into the peculiarities of this component, how to correctly implement components and ActionListeners, and how to interact with other objects. Once understood, it was simple to reuse the broad structures for the EstimateUI and, indeed, extend the functionality. The javax.swing and java.awt packages were suitable options for designing the user interfaces. However, more sensible choices could have been made to simplify the development process. Importing packages to draw the graph components, for example, rather than designing and building the objects from scratch, would have been more efficient. Indeed, entire toolsets and packages exist to assist the design and implementation of user interfaces. Windows Presentation Foundation (WPF), which is based around XAML, is an example of one such product. Visual Studio and Microsoft Expression Blend are integrated environments in which user-interface design and programming can proceed concurrently to increase the efficiency of implementation. However, the overhead incurred by learning new skills and familiarising with the environment would - most likely - render any efficiencies redundant. Overall, Java was an acceptable language and provided components to handle basic interaction, and the potential to create custom components to tailor the interface to the principles underlying the design. 4.2.2 FUNCTIONALITY EVALUATION In terms of functionality, the features implemented up to Feature Area 4 satisfy the criteria defined in the requirements elicitation. In addition to these aspects, the implementation of the help interface provided a decent introduction to the novelties of the interface, and supports novice users with a framework of help and information. In terms of the missing functionality, several exclusions are regrettable. If the program had allowed the execution of ‘tutorials’ to demonstrate estimates under different assumptions, i.e. high, medium and low values on different sizes of project, the user would have been guided explicitly towards the subtleties of constructive cost estimation, and attained a better understanding by working through examples. Similarly, the execution of ‘scenarios’ would have helped to place the decisions associated with the model in a business context. The construction of a scenario conveying factors relevant to a project estimate, and requiring the user to make informed decisions based upon this information, would have added an extra layer of educational functionality. This said, these features are not difficult to implement, and it is owing to time restraints that this functionality was not implemented. Furthermore, the application logic underlying this functionality is in place, and is, where possible, designed to facilitate code reuse and modification.


Thus, future work could potentially use this project as a framework, and build on several more powerful components, specifically with the focus on this system as a teaching tool. While an overwhelming degree of functionality was a criticism of the ‘Costar’ estimation tool, extra functionality would prov e useful, if implemented with the novice user in mind. Small tweaks, such as defining custom weights for the multipliers and scale drivers from within the system, or loading all static algorithm data from external XML files, would increase the control of the system, and enable a finer degree of analysis. Again, these features are built upon technology already implemented within the project, such as the SAX Parser, and, as such do not represent a huge programming step. The estimate display components, particularly the line graph, could be improved upon. As alluded to previously, importing graphic packages could prove a useful solution. The exploration of more advanced visualisation techniques provides an interesting field of research, and a new direction to take this project. The tabbed interface would only require a new component to display another type of view; the underlying framework of calculation is already in place. 4.2.3 CONCLUSION

4.2.3 CONCLUSION

Figure 14 – The problems initially elicited in the Domain Analysis

The problems recognised in the Domain Analysis provide an appropriate starting point from which to assess the ultimate ‘success’ of the project. In terms of the three broad stages, and the associated problems identified within them, the project satisfies the fundamental criteria. Furthermore, the design provides an interface that caters for novice users through the consistency of its screens and the depth of the help text, information and instructions available. From these two perspectives, then, the system provides a teaching system for cost estimation.


Where the system has deficiencies, the underlying technology is in place, and the appropriate architecture has been implemented to facilitate modular changes. Where the system succeeds, these components can be reused and modified as required.

xxvii BEN SHNEIDERMAN & CATHERINE PLAISANT, DESIGNING THE USER INTERFACE: STRATEGIES FOR EFFECTIVE HUMAN-COMPUTER INTERACTION (4TH), ADDISON WESLEY (2005)

xxviii SOMMERVILLE, IAN, SOFTWARE ENGINEERING (8TH) (2007), P.251

xxix IBID., P.252


A

DESIGN METHODOLOGY



As defined by Problem Analysis (2.2)



As defined by Problem Analysis (2.3)



As defined by Design Analysis (3)



As defined by Project Evaluation (4)

Aspects of FDD processes implemented in this project (material based on FDD process descriptions from Jeff De Luca)


B


PACKAGE DIAGRAM WITH REQUIREMENTS


C

APPLICATION SUBSYSTEM (SIMPLIFIED) CLASS DIAGRAM


D


SIMPLIFIED CLASS DIAGRAM

E

SHNEIDERMAN’S ‘8 GOLDEN RULES’

Strive for consistency

Consistent sequences of actions should be required in similar situations; identical terminology should be used in prompts, menus, and help screens; and consistent colour, layout, capitalisation, fonts, and so on should be employed throughout.

Cater to universal usability

Recognise the needs of diverse users and design for plasticity, facilitating transformation of content. [...] Adding features for novices, such as explanations, and features for experts, such as shortcuts and faster pacing, can enrich the interface design and improve perceived system quality.

Offer informative feedback

For every user action, there should be system feedback. For frequent and minor actions, the response can be modest, whereas for infrequent and major actions, the response should be more substantial.

Design dialogs to yield closure

Sequences of actions should be organised into groups with a beginning, middle, and end. Informative feedback at the completion of a group of actions gives operators the satisfaction of accomplishment.

Prevent errors

As much as possible, design the system such that users cannot make serious errors [...] if the user makes an error, the interface should detect the error and offer simple, constructive, and specific instructions for recovery.

Permit easy reversal of actions

As much as possible, actions should be reversible. This feature relieves anxiety, since the user knows that errors can be undone, thus encouraging exploration of unfamiliar options.

Support internal locus of control

Operators strongly desire the sense that they are in charge of the interface and that the interface responds to their actions.

Reduce short-term memory load

The limitation of human information processing in short-term memory requires that displays be kept simple, multiple-page displays be consolidated [...] and sufficient training time be allotted for codes, mnemonics and sequences of actions.


F


SAMPLE QUESTIONNAIRE


G

TASK SHEET

Attempt to complete the tasks listed below. Take as much time as you need, and direct questions to the assistant should you have any problems or queries.

1. Open the program by clicking the shortcut: ‘start estimation’.

2. Create an Estimate by entering Thousands of Lines of source code directly, using this data:

   KSLOC: 4.96

   Once complete, return to the start menu.

3. Create an Estimate by entering Unadjusted Function Points directly, using this data:

   UFP: 85
   Language: C#

   Once complete, return to the start menu.

4. Create an Estimate by entering the data required to calculate Unadjusted Function Points, using this data:

                                        Data Elements    File Types
   External Inputs             05             3               1
   External Outputs            02             7               2
   External Inquiries          02             1               4
   Internal Logical Files      02            35               1
   External Interface Files    03             9               9

   Language: COBOL

5. Change the value of RCPX to ‘LOW’, and execute an estimate.

6. Rename the tab ‘RCPX low’.

7. Find the Scale driver with the most effect upon the estimate, and set its value to ‘nominal’.

8. Select the first estimate, named ‘RCPX low’, and ‘lock’ the tab, so that changes to the parameters are updated in this tab.

9. Change the size of the estimate by increasing the KSLOC by +2.3.

10. Load an estimate from the file ‘test estimate’.

11. Close the tab that is neither the newly loaded estimate nor the tab named ‘RCPX low’.

12. Compare the two remaining estimates by looking at the Line Graph associated with each tab.

Congratulations! This is the end of the task sheet. Thank you for your time and input. Finally, please complete the questionnaire in the next document.


BIBLIOGRAPHY



BEHRENS, C. (1983), ‘MEASURING THE PRODUCTIVITY OF COMPUTER SYSTEMS DEVELOPMENT ACTIVITIES WITH FUNCTION POINTS’ IN IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, NOVEMBER 1983



BOEHM, B. (1981), SOFTWARE ENGINEERING ECONOMICS, PRENTICE-HALL.



BOEHM, B., B. CLARK, E. HOROWITZ, C. WESTLAND, R. MADACHY AND R. SELBY (1995), ‘COST MODELS FOR FUTURE SOFTWARE LIFE-CYCLE PROCESSES: COCOMO 2.0’ IN ANNALS OF SOFTWARE ENGINEERING 1.



BOEHM, BARRY, CHRIS ABTS AND SUNITA CHULANI, ‘SOFTWARE DEVELOPMENT COST ESTIMATION APPROACHES – A SURVEY’ IN ANNALS OF SOFTWARE ENGINEERING 10 (2000), PP.177–205



DE LUCA, JEFF @ HTTP://WWW.NEBULON.COM/ARTICLES/FDD/LATESTPROCESSES.HTML



FAIRLEY, RICHARD E. (DICK), ‘THE INFLUENCE OF COCOMO ON SOFTWARE ENGINEERING EDUCATION AND TRAINING’



JONES, CAPERS, ‘SOFTWARE PROJECT MANAGEMENT PRACTICES: FAILURE VERSUS SUCCESS’ IN CROSSTALK: THE JOURNAL OF DEFENSE SOFTWARE ENGINEERING (2004)



JØRGENSEN, MAGNE AND BARRY BOEHM, ‘VIEWPOINTS - SOFTWARE DEVELOPMENT EFFORT ESTIMATION: FORMAL MODELS OR EXPERT JUDGMENT?’ IN IEEE SOFTWARE (MARCH/APRIL 2009) @



JØRGENSEN, M. AND TANJA GRUSCHKE, ‘INDUSTRIAL USE OF FORMAL SOFTWARE COST ESTIMATION MODELS: EXPERT ESTIMATION IN DISGUISE?’ IN SOFTWARE METRICS (2005)



JØRGENSEN, MAGNE ‘PRACTICAL GUIDELINES FOR EXPERT-JUDGEMENT-BASED SOFTWARE EFFORT ESTIMATION’ IN IEEE SOFTWARE (2005)



QSM, FUNCTION POINT LANGUAGES TABLE (2005) @ HTTP://WWW.QSM.COM/?Q=RESOURCES/FUNCTION-POINT-LANGUAGES-TABLE/INDEX.HTML



SHNEIDERMAN, BEN & CATHERINE PLAISANT, DESIGNING THE USER INTERFACE: STRATEGIES FOR EFFECTIVE HUMAN-COMPUTER INTERACTION (4TH), ADDISON WESLEY (2005)



SOMMERVILLE, IAN, SOFTWARE ENGINEERING (8TH) (2007)

WEB TOOLS 

COSTAR 7.0 WAS DOWNLOADED FROM HTTP://WWW.SOFTSTARSYSTEMS.COM/ | ACCESSED 15.03.09



COCOMO II WITH HEURISTIC RISK ASSESSMENT IS AVAILABLE AT | ACCESSED 16.03.09



COCOMO II SUITE IS AVAILABLE AT | ACCESSED 16.03.09


WEB RESOURCES 

HOWARD, DAVID, ‘JAVA APPLICATIONS AND THE "CURRENT DIRECTORY"’ @ HTTP://WWW.DEVX.COM/TIPS/TIP/13804 || ACCESSED 12/08/09



MEGGINSON, DAVID, ‘EVENTS VS. TREES’ @ HTTP://WWW.SAXPROJECT.ORG/ || ACCESSED 07/07/09



THE JAVA TUTORIALS, ‘HOW TO USE JSPINNERS’ @ HTTP://JAVA.SUN.COM/DOCS/BOOKS/TUTORIAL/UISWING/COMPONENTS/SPINNER.HTML || ACCESSED 13/07/09



THE JAVA TUTORIALS, ‘HOW TO USE TABBED PANES’ @ HTTP://JAVA.SUN.COM/DOCS/BOOKS/TUTORIAL/UISWING/COMPONENTS/TABBEDPANE.HTM || ACCESSED 09/08/09

