A Model for Estimating the Cost of Securing the Network Infrastructure and Anti-Tamper

Donald J. Reifer
Reifer Consultants, Inc., P.O. Box 4046, Torrance, CA 90510-4046
[email protected]

Abstract

This paper outlines the work accomplished by the author during the past year aimed at developing a model to estimate the effort and duration involved in securing the network infrastructure. The model is intended to be used for forecasting the investments that enterprises and projects will have to make to develop networks with defenses to ward off intruders. If anti-tamper is a component of the defense, the model estimates the costs associated with it as well. The paper defines the scope of the model, its mathematical form, and the steps that we will take in the future to have our collaborator group reach agreement on the model's definition, calibrate it, and transition it into operational use.

1. Motivation

Enterprises are relying more and more on their general-purpose communications networks to develop, deploy, and deliver products and services. Similarly, many products are actually systems that use networks as their underlying communications infrastructure. Because networks in both cases are vulnerable to attack, steps need to be taken to secure them against unauthorized access and tampering. It is recognized that defenses must be engineered into systems at all levels of the enterprise to protect them against both insider and outsider attacks. According to a recent CISCO white paper [1], the costs of mounting such defenses and providing protection for applications software are skyrocketing. While many of these costs can be anticipated, significant cost growth has been incurred due to the inability to accurately estimate the costs associated with securing the network infrastructure. As a result, organizations have had difficulty acquiring the budgets needed to fund worthy projects.

To establish adequate budgets, management must be able to determine how much time and effort is really needed to put adequate defenses in place. They must understand what portions of these budgets should be considered investments versus project costs. To provide managers with the ability to prepare such estimates, more accurate cost estimating models are required. While some work on security cost models has been accomplished [2, 3, 4], little progress has been made on network infrastructure estimating, due primarily to scope issues. Questions about what the model should and should not estimate need to be answered. For example, should the community build a model to estimate the cost of building trustworthy software or the cost of securing the network infrastructure? While both models are needed, should one effort take precedence over the other, or can both be estimated with the same formulation? Should insurance or investments be estimated? Should the model address both privacy and security costs? Should the model estimate the cost of damages or the cost of defenses? What portion of the life cycle should be encompassed by the model and what portion should be excluded? Should the model be parametric or activity-based? Should recurring and non-recurring costs be included in the model? Another important consideration is who we contact to validate that the model reflects reality.


Where do we get the data to calibrate the models? These questions and many others need to be answered before models can be developed to provide management with the answers they are searching for relative to how much time and effort it will take to secure the network infrastructure and keep it operational.

2. Bounding the Effort

In early 2005, we established a collaborator group to scope our model development activity. Experts from several commercial and aerospace firms volunteered to help us by providing their opinions on what they believed would be a useful cost model. To get a clear picture of what this community desired, we used the Goal-Question-Metric approach as shown in Table 1. Reaching consensus on the model's scope was by no means an easy task. Commercial companies were driven by capital budget limitations and staffing shortfalls, while aerospace companies faced complying with large numbers of seemingly complex sets of government requirements and constraints [5, 6]. We tried to accommodate both communities as we developed the metrics/measures that would be used to form the basis of the model.

Table 1. Goal-Question-Metric Matrix for Network Security Cost Model

Goal 1: Be able to generate an accurate estimate of the time and effort needed to secure the network infrastructure

Questions/Answers:
- Who generates the estimate? Engineering or Information Technology Department personnel responsible for the network infrastructure.
- Who reviews and approves the estimate? Engineering or Information Technology Department management.
- What does the estimate include? Time, effort and materials needed to secure the network infrastructure; non-recurring and recurring costs over the life of the network; capital and engineering costs estimated separately. A major goal is to minimize the false alarm rate.
- What is the basis of estimate? Requirements specification, trade study or feasibility study.
- When is the estimate generated? When the project gets underway and periodically during development to assess performance against budget; annually as part of the capital budget cycle.
- How is the estimate generated? Time is not estimated; it is viewed as a constraint that must be met by management. Effort is based on engineering experience using an activity-based approach. Materials are based on competitive quotes for required equipment and software.

Metrics/Measures:
- Time – calendar months
- Effort – person-months (152 hours)
- Materials – $ purchases and lease costs, separated as capital and project costs
- Quality – minimum false alarm rate

Goal 2: Be able to validate the estimate using actuals

Questions/Answers:
- Who validates the estimate? Engineering or Information Technology Department management against expectations (how these are derived is questionable).
- What parts of the estimate are validated? Effort and time actuals compared against estimates; material actuals compared against authorized expenditures; actual false alarm rates experienced when the network is placed into operations compared against requirements.
- What is the basis of the validation? Requirements specification, trade study or feasibility study; program protection plan (for applications security and anti-tamper).
- When is the estimate validated? When projects are completed, when cost-to-complete estimates are submitted and/or annually when budgets are submitted.
- Where do we get the data to validate the estimates? For actuals, from the accounting records; for model development, expert collaborators will validate the model.
- How is the estimate validated? Currently it is not; however, it should be statistically validated in the future by comparing actuals to estimates.

Metrics/Measures:
- Time – calendar months
- Effort – person-months (152 hours)
- Materials – $ purchases and lease costs, separated as capital and project costs
- Quality – minimum false alarm rate
- Reasonable goal – within 30% of actuals, 78% of the time

Goal 3: Be able to predict the effort involved should anti-tamper be a requirement

Questions/Answers:
- What is anti-tamper? A unique requirement in government contracts to protect critical program information from reverse engineering and tampering.
- Who generates the estimate? Engineering management generates the estimate, which project management includes in their program budgets.
- Who reviews and approves the estimate? The estimate is reviewed by an independent red team which is charged with testing the system to ensure the protection works.
- What does the estimate include? Time, effort and materials needed to implement the protection called out in the Program Protection Plan (PPP); non-recurring and recurring costs necessary to maintain protection over the life of the system; hardware and software protection costs estimated separately.
- What is the basis of estimate? The Program Protection Plan (PPP) identifies the critical program information to be protected.
- When is the estimate generated? Prior to the start of full scale development.
- How is the estimate generated? Time is based on engineering experience. Effort is based on engineering experience using an activity-based approach. Materials are based on competitive quotes for necessary equipment and software modifications and/or enhancements.

Metrics/Measures:
- Time – calendar months
- Effort – person-months (152 hours)
- Materials – $ purchases and lease costs, separated as capital and project costs
- Quality – protection capability measured as a function of incidences
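To make the agreed metrics concrete, the minimal sketch below shows how one estimate record might be captured for later comparison against actuals. It is only an illustration: the class and field names and the example values are hypothetical; only the units (calendar months, 152-hour person-months, materials split into capital and project costs, and a false alarm rate goal) come from Table 1.

```python
from dataclasses import dataclass

HOURS_PER_PERSON_MONTH = 152  # effort convention agreed by the collaborator group

@dataclass
class SecurityCostEstimate:
    """One estimate record using the Table 1 metrics (field names are hypothetical)."""
    duration_months: float        # Time: calendar months
    effort_person_months: float   # Effort: person-months of 152 hours
    capital_materials_usd: float  # Materials: purchases/leases booked as capital costs
    project_materials_usd: float  # Materials: purchases/leases booked as project costs
    max_false_alarm_rate: float   # Quality: required maximum false alarm rate

    def effort_hours(self) -> float:
        return self.effort_person_months * HOURS_PER_PERSON_MONTH

# Illustrative values only
estimate = SecurityCostEstimate(12.0, 36.0, 250_000.0, 40_000.0, 0.02)
print(estimate.effort_hours())  # 5472.0
```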

3. Mapping Goals to Life Cycle Activities

In order to estimate the cost of securing the network infrastructure, it was necessary to map the goals that we established to the pertinent activities conducted during both the enterprise and project life cycles. Several standards have been developed to guide such an activity [7, 8]. To develop the mapping, we established a candidate task list for securing the network infrastructure by life cycle phase, using EIA 632 as the basis. We selected EIA 632 rather than ISO 12207 because it addresses the system rather than just the software life cycle. Once we completed the mapping, we sent the strawman to our collaborator group for comment. We then iterated the results with the collaborator group until we reached consensus on the output, which is summarized in Figure 1.

Figure 1. Network Defense Infrastructure Task Mappings to EIA 632 Life Cycle Phases

- Conceptualize: 1. Requirements specification; 2. Network infrastructure architecture development; 3. Project planning
- Development: 4. Product assessments; 5. Hardware and software acquisitions; 6. Software development, integration and testing
- Operational Test and Evaluation: 7. Operational test and evaluation
- Transition to Operations: 8. Transition and turnover; 9. DITSCAP
- Operate & Maintain: 10. Operations; 11. Maintenance
- Replace (or Destroy): 12. Replace (or destroy)

Program Protection Tasks (if required):
- Conceptualize: 13. Program protection planning
- Development: 14. Hardware & software acquisitions; 15. Hardware & software modifications/enhancements
- Operational Test and Evaluation: 16. Red teaming
- Transition to Operations: 17. Independent test and evaluation; 18. DITSCAP
- Operate & Maintain: 19. Maintenance
- Replace (or Destroy): 20. Destroy
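For readers who want to work with the mapping programmatically, Figure 1 can be restated as a simple lookup table. The sketch below merely re-encodes the figure; the function name and its interface are illustrative, not part of the model.

```python
# Task numbers from Figure 1, keyed by EIA 632 life cycle phase.
NETWORK_DEFENSE_TASKS = {
    "Conceptualize": [1, 2, 3],
    "Development": [4, 5, 6],
    "Operational Test and Evaluation": [7],
    "Transition to Operations": [8, 9],
    "Operate & Maintain": [10, 11],
    "Replace (or Destroy)": [12],
}

# Added only when anti-tamper is a requirement.
PROGRAM_PROTECTION_TASKS = {
    "Conceptualize": [13],
    "Development": [14, 15],
    "Operational Test and Evaluation": [16],
    "Transition to Operations": [17, 18],
    "Operate & Maintain": [19],
    "Replace (or Destroy)": [20],
}

def tasks_for_phase(phase: str, anti_tamper_required: bool) -> list:
    """Return the task numbers to be estimated for the given phase."""
    tasks = list(NETWORK_DEFENSE_TASKS.get(phase, []))
    if anti_tamper_required:
        tasks += PROGRAM_PROTECTION_TASKS.get(phase, [])
    return tasks

print(tasks_for_phase("Development", anti_tamper_required=True))  # [4, 5, 6, 14, 15]
```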

Table 2. Definitions for Tasks Needed to Secure the Network Infrastructure per EIA 632 Life Cycle Phase

Major Tasks:

1. Requirements specification (or feasibility study) – Identify the network defense requirements, including the tasks of defense strategy development, requirements synthesis and specification.
2. Network infrastructure architecture development – Develop a network defense infrastructure that satisfies the specified requirements (as an option, perform a feasibility or trade study to define requirements, identify options and recommend actions).
3. Project planning (and budgeting) – Develop a program plan for realizing the network defense infrastructure that satisfies the requirements and implements the approved architecture (including capital budget submission).
4. Product assessments – Assess candidate hardware and software products to populate the defensive architecture and select those that best satisfy the requirements (sometimes via trial use).
5. Hardware and software acquisitions – Accomplish all of the actions needed to acquire the hardware and software products selected for use in the defense infrastructure (more than just writing purchase requests).
6. Software development, integration and testing – Get the network defense infrastructure to work per the requirements by tailoring hardware and software products to enforce security policies, developing necessary scripts and glue code, and performing engineering integration and testing.
7. Operational test and evaluation – Test the network defense infrastructure in an operational setting and perform those modifications that may be needed to satisfy the end-user's requirements, including performance optimization.
8. Transition and turnover – Ready for transition, train the operators, prepare the documentation, perform cutover testing and finally turn the network defense infrastructure over to those who will operate and maintain it once it has been declared operational.
9. DITSCAP – Perform any testing required to certify and accredit the network for operational use. Such testing may involve third parties chartered with independently evaluating the protection provided to satisfy Defense Information Technology Security Certification & Accreditation Program requirements.
10. Operations – Operate the network defenses and tailor the infrastructure to satisfy new security policies and regulations. Administer the network defenses and staff the situation awareness displays as needed to respond to alerts.
11. Maintenance – Keep the defenses up-to-date by performing needed repairs, optimizing system resources and performing enhancements. Incorporate new releases of hardware and software into the infrastructure as needed per the release plan.
12. Replace (or destroy) – When appropriate, replace the defenses with improved protection and dismantle/destroy the network when outdated or no longer needed.

Program Protection Tasks (If Required):

13. Program protection planning – When anti-tamper is a requirement, the project must generate a Program Protection Plan (PPP) to describe how critical program information will be protected against reverse engineering and tampering. The plan also includes estimates of the time and materials needed to implement the protection and to demonstrate its effectiveness operationally.
14. Hardware and software acquisitions – Contract with specialist firms to mechanize program protection plans. For example, such protection might coat circuit cards with special epoxies, use commercial software taken from accepted product lists, and place explosive devices into hardware chases that would be enabled if the card is removed by unauthorized personnel.
15. Hardware and software modifications/enhancements – The engineering team inserts additional defenses per the program protection plan to protect the system and its components from tampering. For example, they might use secure wrappers to bind the software together. In addition, they might use obfuscation and guards within applications software and other protection in circuit cards to protect critical program information. Finally, they might insert honeypots to collect forensic information about exploits as attacks are mounted.
16. Red teaming – Support the conduct of red teaming of the protection. An outside group forms red teams to try to break the protection, typically using classified means.
17. Independent test and evaluation – Conduct an independent test and evaluation of the system from a program protection point of view to ensure that the defenses protect critical program information.
18. DITSCAP – Perform any testing required to certify and accredit the security of the system after protection has been engineered into its hardware and software components.
19. Maintenance – Maintain the protection as the hardware and software are maintained per the release plan.
20. Destroy – Destroy the protection when the hardware and software used in the system are replaced, dismantled and/or decommissioned (i.e., the destruction must be certified).

4. Model Development

Figure 2. Seven-Step Modeling Methodology [9]: (1) analyze existing literature; (2) perform behavioral analysis; (3) identify relative significance; (4) perform expert-judgment Delphi assessment; (5) gather project data; (6) determine Bayesian a-posteriori update; (7) gather more data and refine the model.

The model will be developed using the seven-step process developed at the University of Southern California (USC), shown in Figure 2. This approach has been used successfully to develop several other cost models, including COSYSMO [10] and COCOTS [11]. So far, we have completed the first step of this process. We anticipate that steps 2 to 5 will be completed this year. To complete these steps, we will ask the experts in our collaborator group to validate our work and help us rate the relative significance of variables. To solidify these ratings and confirm the expert opinions, we will gather project data. We will spend most of next year collecting, normalizing and analyzing this data to calibrate the model and verify its accuracy.

As noted, the first step in the modeling process involved investigating relevant literature and researching existing models being used to estimate resources for network protection and defense activities. We found few publications that addressed the issues our collaborators had identified. When queried, our collaborators admitted that most of the estimates they had prepared to date relied on engineering judgment. Most of those interviewed during fact-finding did bottom-up costing. Estimates were validated based on reasonableness and then summed to form the top-level estimates. Little was done to factor synergy and economies of scale (or diseconomies) into the estimates. The net result was that managers had little confidence in the estimates. However, they had to rely on them because there were no alternatives.

In preparation for the next step in our process, we worked with our collaborators to more fully define the goals of the effort at hand. Goals were agreed upon using the Goal-Question-Metric approach along with a life cycle. The life cycle was further decomposed into tasks that were defined and mapped to its phases. This effort was important because it allowed us to develop the notional activity-based model as a point of departure for our future research activities.

5. Notional Model

In the fall of 2005, we began performing a behavioral analysis. We asked experts from our collaborator group to help us classify parameters and determine how they varied. As an output of this effort, we wanted to be able to represent the variability in cost as a function of a range of inputs. Based on their initial inputs, we packaged the approaches selected to estimate costs for both network defense and anti-tamper according to the notional structure shown in Figure 3. Like the USC COCOTS model, they recommended that we use a staged model to gather the costs per task by life cycle phase. They felt that only a few tasks in the life cycle could be estimated using a parametric model like COCOMO II; costs for the remaining tasks throughout the life cycle could be estimated using heuristic estimating approaches and then summed as applicable. As part of this effort, we received a wide range of inputs. We are currently trying to reach a consensus on the findings.

Figure 3. Network Defense Infrastructure & Anti-Tamper Estimating Model Structure

The structure is staged by life cycle phase (Conceptualize, Development, Operational Test & Evaluation, Transition to Operations, Operate & Maintain, and Replace (or Destroy)), with an estimating approach specified for each phase for both the network defense infrastructure and anti-tamper. The Development phase is estimated with a parametric model plus acquisitions; the remaining phases are estimated with heuristic models.
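A minimal sketch of this staged structure follows. The per-phase estimator callables and the numeric values are placeholders, not the calibrated models; the point is only that phase estimates (parametric for Development, heuristic elsewhere) are summed, with acquisitions tracked separately as purchases.

```python
from typing import Callable, Dict, Tuple

PhaseEstimator = Callable[[dict], float]  # returns effort in person-months for one phase

def total_effort(phase_estimators: Dict[str, PhaseEstimator],
                 inputs: dict,
                 acquisitions_usd: float = 0.0) -> Tuple[float, float]:
    """Sum the per-phase effort estimates; acquisitions stay separate as purchases."""
    effort_pm = sum(estimate(inputs) for estimate in phase_estimators.values())
    return effort_pm, acquisitions_usd

# Illustrative stand-in estimators (not the calibrated models)
estimators = {
    "Conceptualize": lambda x: 6.0,                  # heuristic rule of thumb
    "Development": lambda x: 0.87 * x["size"],       # parametric stand-in
    "Operational Test & Evaluation": lambda x: 10.0, # heuristic rule of thumb
}
print(total_effort(estimators, {"size": 40.0}, acquisitions_usd=250_000.0))  # (50.8, 250000.0)
```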

As noted, the parametric model for estimating the development tasks is in its formative stages. The current structure of the model and the current bases of estimate included within it are shown in Figure 4 for the network defense estimating model and in Figure 5 for anti-tamper. Note that acquisitions are not included as part of the model outputs; they represent an additional cost that must be added to the effort estimates, since such acquisitions are purchases, not labor expenditures. The figures also refer to other diagrams and tables where subsidiary models are detailed.

Figure 4. Modeling Mechanisms for Network Defense Infrastructure Estimating Model

(PM = person-month; CM = calendar month)

- Conceptualize: see Figure 6.
- Development: see Figure 6.
- Operational Test & Evaluation: Effort OT&E (PM) = function (no. of test scenarios required for acceptance) (see Table 3). Duration OT&E (CM) = function (effort and available schedule time).
- Transition to Operations: Effort Turnover (PM) = Effort Transition (PM) + Effort DITSCAP (PM), where Effort Transition is an estimated level of effort based on available manpower and Effort DITSCAP is an estimated level of effort based on past experience (see Table 3). Duration Turnover (CM) is fixed at one year for transition and eighteen months for DITSCAP.
- Operate & Maintain: Effort O&M (PM) = Effort Ops (PM) + Effort Maintenance (PM), where Effort Ops is an estimated level of effort based on budgeted manpower (see Figure 7) and Effort Maintenance is estimated using a code-fragment-changed model plus additional inputs to accommodate COTS packages, hardware repairs, updates and replacement, and recertification costs (see Figure 7). Duration O&M (CM) is fixed on an annual basis for operations and by the release plans for maintenance.
- Replace (or Destroy): Effort Replace (PM) = function (system size) + Effort Recertify (PM) (see Table 3), where Effort Recertify is an estimated level of effort based on the number of requirements and the availability of regression tests and test scripts. Duration Replace (CM) = function (effort) and upgrade plans.

Table 3. Rules of Thumb for Network Defense Infrastructure Effort Estimation (PM = person-month)

Operational Test & Evaluation – Effort OT&E (PM); effort range = function (difficulty). Assumes operational test and evaluation is highly automated.
- Small: 1 to 10 test scenarios – 4 to 6 PM
- Moderate: 11 to 25 test scenarios – 8 to 12 PM
- Large: over 25 test scenarios – 18 to 24 PM

Transition to Operations – Effort DITSCAP (PM); effort range = function (difficulty).
- Limited: self-contained, little external agency coordination, informal test and acceptance – 8 to 12 PM
- Average: some external coordination, formal test and acceptance – 24 to 36 PM
- Extensive: lots of external coordination, tests witnessed by customer and very formal – 48 to 60 PM

Replace (or Destroy) – Effort = function (system size) (PM); effort range = function (difficulty).
- Small: < 1K requirements – 6 to 8 PM
- Moderate: between 1K and 10K requirements – 12 to 18 PM
- Large: > 10K requirements – 18 to 24 PM

Replace (or Destroy) – Effort Recertify (PM); effort range = function (difficulty). Assumes recertification testing is highly automated.
- Small: < 10 tests – 4 to 6 PM
- Moderate: 10 to 50 tests – 8 to 12 PM
- Large: more than 50 tests – 18 to 24 PM
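As an illustration, the operational test and evaluation rule of thumb above can be coded as a simple range lookup. The band boundaries and person-month ranges are taken directly from Table 3; the function name is our own.

```python
def otae_effort_range(num_scenarios: int) -> tuple:
    """Table 3 rule of thumb: OT&E effort range in person-months vs. test scenarios."""
    if num_scenarios <= 10:    # Small: 1 to 10 scenarios
        return (4, 6)
    if num_scenarios <= 25:    # Moderate: 11 to 25 scenarios
        return (8, 12)
    return (18, 24)            # Large: over 25 scenarios

print(otae_effort_range(18))  # (8, 12)
```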

Figure 5. Modeling Mechanisms for Anti-Tamper Estimating Model
(PM = person-month; CM = calendar month)

- Conceptualize: Effort Conceptualize (PM) = function (no. of CPI items for the PPP) (see Table 4). Duration Conceptualize (CM) = function (effort).
- Development: Effort Development (PM) = Effort Hardware + Effort Software + Acquisitions, where Effort Hardware = function (number and difficulty of modifications required), Effort Software is estimated per Figure 8, and Acquisitions covers the procurement of modified hardware and software (per the PPP). Duration Development = function (effort).
- Operational Test & Evaluation: Effort Red Teaming (PM) = function (no. of CPI for which protection needs validation) (see Table 4). Duration Red Teaming = function (effort).
- Transition to Operations: Effort Turnover (PM) = Effort IT&E (PM) + Effort DITSCAP (PM), where Effort IT&E and Effort DITSCAP are estimated levels of effort based on past experience (see Table 4). Duration Turnover (CM) is fixed at 3 to 6 months for IT&E and eighteen months for DITSCAP.
- Maintain: Effort O&M (PM) = Effort Maintenance (PM), where Effort Maintenance is estimated using a code-fragment-changed model plus additional inputs to accommodate COTS packages, hardware repairs, updates and replacement, and recertification costs (see Figure 7). Duration O&M (CM) is fixed on an annual basis for operations and by the release plans for maintenance.
- Replace (or Destroy): Effort Destroy (PM) = function (system size) (see Table 4). Duration Replace (CM) = function (effort).

Table 4. Rules of Thumb for Anti-Tamper Effort Estimation (PM = person-month)

Conceptualize – Effort = function (no. of CPI items for the PPP) (PM); effort range = function (difficulty).
- Small: < 5 CPI – 2 to 4 PM
- Moderate: 6 to 10 CPI – 4 to 6 PM
- Large: over 10 CPI – 6 to 8 PM

Operational Test & Evaluation – Effort = function (no. of CPI for which protection needs validation) (PM); effort range = function (difficulty). Assumes operational test and evaluation is highly automated.
- Small: < 5 CPI – 6 to 8 PM
- Moderate: 6 to 10 CPI – 8 to 12 PM
- Large: over 10 CPI – 18 to 24 PM

Transition to Operations – Effort IT&E (PM); effort range = function (difficulty).
- Limited: simple, with few CPI to qualify, mostly hardware – 6 to 8 PM
- Average: straightforward, less than 10 CPI, both hardware and software – 12 to 18 PM
- Extensive: complicated, with more than 10 CPI, mostly software – 24 to 36 PM

Transition to Operations – Effort DITSCAP (PM); effort range = function (difficulty).
- Limited: self-contained, little external agency coordination, informal test and acceptance – 12 to 18 PM
- Average: some external coordination, formal test and acceptance – 36 to 48 PM
- Extensive: lots of external coordination, tests witnessed and very formal – 60 to 72 PM

Replace (or Destroy) – Effort = function (system size) (PM); effort range = function (difficulty).
- Small: < 1K requirements – 8 to 12 PM
- Moderate: between 1K and 10K requirements – 18 to 24 PM
- Large: > 10K requirements – 24 to 36 PM

Figure 6. Network Defense Infrastructure Early Phase Cost Model

Size drivers: number of requirements, number of interfaces, number of operational scenarios, number of algorithms, and number of false alarms.

Effort (PM) = A × B × [Π(i = 1..12) Di] × Size^C

Duration (CM) = function (Effort)

Where:
- Effort = all hours to perform the engineering tasks (requirements, architecture, development, test and integration; includes task management), in PM (152 hours/month)
- A = calibration constant
- B = architecture constant
- C = power law
- Di = cost drivers (see Table 5 for descriptions); ΠDi is the product of their ratings
- Size = number of weighted predictors scaled for a given false alarm rate

Note: The model takes the form of a regression model. We are currently working with our collaborators to reduce the number of cost drivers to the set that captures the variations in effort as noted by our experts. The size drivers are taken from the COSYSMO model as representative of systems comprised of both hardware and software components. Acquisitions are excluded and their costs must be added to the estimates generated.
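To make the structure of this formulation concrete, the sketch below expresses Figure 6 as a small Python function. It is only a sketch: the calibration constant A, the driver ratings, and the duration constant K are placeholders, since the calibrated values are still being developed with the collaborator group; the cube-root duration relationship is the one suggested later in Section 6.

```python
import math

def network_defense_effort(size, A, B, drivers, C=1.0):
    """Figure 6: Effort (PM) = A * B * product(Di) * Size**C.

    A: calibration constant (placeholder until calibrated)
    B: architecture constant from Table 8
    drivers: the twelve cost driver ratings from Table 5
    size: weighted predictor count scaled for the false alarm rate goal
    """
    return A * B * math.prod(drivers) * size ** C

def duration_from_effort(effort_pm, K):
    """Duration (CM) as a function of effort; Section 6 suggests a cube-root law."""
    return K * effort_pm ** (1.0 / 3.0)
```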

Table 5. Cost Driver Candidates for Network Defense Infrastructure Early Phase Cost Model

- Architecture Complexity – Rates the relative difficulty of satisfying security requirements within the network architecture. This includes efforts to secure the system against malicious code in COTS/NDI.
- Innovation – Rates the ability of the team to innovate when implementing designs aimed at satisfying overarching security requirements and constraints for the network defense.
- Level of Service Requirements – Rates the difficulty of satisfying an often conflicting number of Key Performance Parameters (availability, performance, security, safety, and other "ilities") operationally as network defenses are mounted and all aspects of the infrastructure are enabled.
- Migration Complexity – Rates the complexity of migrating components, databases, procedures and workflows to the new network defense architecture.
- Number and Diversity of Vendor Products & Platforms/Installations – Rates the ability to mount defenses based on the number of vendor products being used and platforms/installations that need to be defended. Effort tends to increase non-linearly with the number of vendors and platforms.
- Personnel/Team Experience – Rates the experience of the security team when implementing defenses similar to those being proposed for the network.
- Process Capability – Rates the effectiveness and robustness of the processes used by the security team in establishing the network infrastructure defenses.
- Requirements Complexity – Rates the complexity of the overarching requirements established for network defense (common criteria assurance and functional levels, authentication and authorization, etc.).
- Secure Facility Constraints – Rates the difficulty of performing work as a function of the physical security constraints placed on the team implementing network security (cipher locks, guards, security processes, etc.).
- Stakeholder Team Cohesion – Rates the degree of shared vision and cooperation exhibited by the different organizations working on securing the network infrastructure (customer, developer, auditor, etc.).
- Technology Maturity – Rates the relative maturity of the technology selected for use in the defense of the network and the network infrastructure.
- Tool Support – Rates the coverage, integration and maturity of the tools used, both hardware and software, to mount network defenses (includes the test automation available for revalidating protection once defenses are changed).

Figure 7. Maintenance Cost Model

The model covers the operations and maintenance activities needed to keep the network defenses current, including: facility updates and management, scheduled hardware and software repairs, quality control, on-going user support (help desk), on-going user training, configuration management, system and security administration, unscheduled repairs, version testing (acceptance), user support for new versions, requirements engineering, version development, COTS update or refresh, version management (synchronize and stabilize), tailoring updates, and glue code maintenance.

PM Maintenance = PM Nominal × ACT × D Adjusted
PM Operations = PM Maintenance

Where:
- ACT = annual change traffic
- D Adjusted = adjusted cost drivers
- PM Nominal = nominal effort in PM
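A minimal sketch of the maintenance relationship above follows; the numeric inputs are illustrative only, not calibrated values.

```python
def maintenance_effort(pm_nominal: float, act: float, d_adjusted: float) -> float:
    """Figure 7: PM_Maintenance = PM_Nominal * ACT * D_Adjusted."""
    return pm_nominal * act * d_adjusted

# e.g., 120 PM nominal, 15% annual change traffic, driver adjustment of 1.1 (illustrative)
print(round(maintenance_effort(120.0, 0.15, 1.1), 1))  # 19.8 PM per year
```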

Figure 8. Anti-Tamper Early Phase Cost Model

Size driver: number of function or feature points.

Effort (PM) = A × [Π(i = 1..11) Di] × Size^C

Duration (CM) = function (Effort)

Where:
- Effort = all hours to perform the engineering tasks (reverse engineering, protection insertion, test and integration; includes task management), in PM (152 hours/month)
- A = calibration constant
- C = power law
- Di = cost drivers (see Table 6 for descriptions); ΠDi is the product of their ratings
- Size = effective size of the application that is being protected

Note: The model takes the form of a regression model. We are currently working with our collaborators to reduce the number of cost drivers to the set that captures the variations in effort as noted by our experts. Acquisitions are excluded and their costs must be added to the estimates generated.

Table 6. Cost Driver Candidates for Anti-Tamper Early Phase Cost Model

- Architecture Complexity – Rates the relative difficulty of satisfying anti-tamper requirements based on an assessment of the system architecture, including how interfaces are handled with hardware, systems software and other applications, both custom and COTS.
- Degree of Ceremony – Rates the formality with which the team operates during development, red teaming and DITSCAP certification. Ratings are a function of the support required plus documentation.
- Depth and Breadth of Protection Requirements – Rates the breadth and depth of the protection required in terms of how much protection, both hardware and software, must be mechanized to satisfy the requirements within the program protection plan.
- Level of Service Requirements – Rates the difficulty of satisfying an often conflicting number of Key Performance Parameters (availability, performance, security, safety, and other "ilities") operationally as protection is engineered into the applications and hardware assemblies.
- Number and Diversity of Platforms/Installations – Rates the ability to enable protection as a function of the number of platforms/installations that need to be defended. Effort tends to increase non-linearly with the number of platforms because protection must be tailored for individual installations.
- Personnel/Team Experience – Rates the experience of the team when implementing protection similar to that being proposed for the applications and hardware assemblies.
- Process Capability – Rates the effectiveness and robustness of the processes used by the team in establishing the protection and validating that it properly implements the provisions of the protection plan.
- Requirements Complexity – Rates the complexity of the overarching requirements established for network defense (common criteria assurance and functional levels, authentication and authorization, etc.).
- Stakeholder Team Cohesion – Rates the degree of shared vision and cooperation exhibited by the different organizations working on mechanizing the protection (customer, developer, auditor, etc.).
- Technology Maturity – Rates the relative maturity of the technology selected for use in protecting applications and hardware assemblies from reverse engineering and tampering.
- Tool Support – Rates the coverage, integration and maturity of the tools used, both hardware and software, to mechanize protection (includes the test automation available for revalidating protection once defenses are changed).


6. Example Project

We will use two examples to illustrate the use of the models. Let us assume that our firm is trying to estimate the cost of acquiring the network defense infrastructure whose configuration is shown in Figure 9. We have in hand the cost of the hardware and software that we will purchase. However, our management would also like to know how much it will cost to make the network operational. They also wish to know how much it will cost to operate and maintain the system. They have already developed a budget, but it was criticized as being too low.

(Components shown include a proxy server, a SQL server, a DMZ, an intrusion prevention system, a firewall, a router, gateways, a sniffer, and servers.)

Figure 9. Example Project

To start, we would select the appropriate model to reflect the scope of what we are estimating. As stated, we have almost completed the conceptualize activity and want to estimate development. The parametric model described in Figure 6 covers both. How would we use the model to estimate the time and effort involved in just one of these activities? The recommended procedure is to allocate the effort and duration estimated by the model using the allocations in Table 7, which have been developed empirically.

Table 7. Effort and Duration Allocations by Activity/Task

Conceptualize
- Requirements specification: 5% of effort
- Network infrastructure architecture development: 7% of effort
- Project planning: 4% of effort

Development
- Product assessments (including trial use): 6% of effort
- Hardware and software acquisitions: 8% of effort
- Software development, integration and testing: 70% of effort (development 30%, integration and testing 40%)

Total effort allocation: 100%. Duration is allocated 16%, 14% and 70% across the activities, for a total of 100%.
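As an illustration, a total development-phase effort estimate could be spread across the Table 7 tasks as shown below; the 50 person-month total is hypothetical, and only the effort percentages from Table 7 are used.

```python
# Effort allocation percentages from Table 7 (duration allocations omitted here)
EFFORT_ALLOCATIONS = {
    "Requirements specification": 0.05,
    "Network infrastructure architecture development": 0.07,
    "Project planning": 0.04,
    "Product assessments (including trial use)": 0.06,
    "Hardware and software acquisitions": 0.08,
    "Software development, integration and testing": 0.70,
}

def allocate_effort(total_effort_pm: float) -> dict:
    """Spread a total effort estimate across the Table 7 tasks."""
    return {task: round(total_effort_pm * share, 1)
            for task, share in EFFORT_ALLOCATIONS.items()}

print(allocate_effort(50.0))  # e.g., requirements specification gets 2.5 PM
```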

To develop the estimate, we analyze the high-level system requirements specification and determine the size by weighting the number of requirements, interfaces, operational scenarios and algorithms by complexity and by the false alarm rate goals. We then select the appropriate architecture constant using the guidance in Table 8 and rate the cost drivers using guidelines that are under construction. In this case, we would use 0.87 because we rated the defenses somewhere between advanced and state-of-the-art. We compute effort assuming a linear power law (C = 1) and the values for the Di. Using this value for effort, we finally estimate duration using a cube root relationship of the following form:

Duration = K × (Effort)^0.333

Where:
- K = calibration constant
- Effort = the result of the formula in Figure 6
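Putting the pieces together, a numerical walk-through of this example might look as follows. Only B = 0.87 and the linear power law C = 1 come from the text; the calibration constants, driver ratings and size are hypothetical stand-ins, since the calibrated values are still being developed.

```python
import math

A = 1.0               # calibration constant (placeholder)
B = 0.87              # between advanced and state-of-the-art defenses (Table 8)
C = 1.0               # linear power law, as assumed in the example
drivers = [1.0] * 12  # nominal ratings for the twelve cost drivers (placeholder)
size = 40.0           # weighted predictor count (hypothetical)
K = 2.0               # duration calibration constant (placeholder)

effort_pm = A * B * math.prod(drivers) * size ** C  # 34.8 person-months
duration_cm = K * effort_pm ** 0.333                # about 6.5 calendar months
print(round(effort_pm, 1), round(duration_cm, 1))   # 34.8 6.5
```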

The guidelines for selecting the appropriate cost driver rating and the associated values for use in the model are currently being developed with our collaborators, along with the calibration constants. They should be available when this paper is published later in 2006.

Table 8. Architecture Constant (B) Selection

- No Defenses (B = 1.22): Maybe a firewall, but that's it.
- Basic Defenses (B = 1.11): Hardware firewall at entrance; router authorization; operating system patches kept up-to-date; local authentication.
- Standard Defenses (B = 1.00): Basic plus intrusion detection system; network scanner to identify intrusions; log files analyzed periodically; system swept periodically to identify vulnerabilities.
- Advanced Defenses (B = 0.91): Standard plus DMZ configuration; intrusion prevention system; layered defenses aimed at identifying and recovering from both insider and outsider attacks.
- State-of-the-Art Defenses (B = 0.84): Advanced plus proxy server configuration; defense-in-depth with active alerts and situational awareness displays; honeypots for forensics, etc.

Should protection be a requirement, we would use the anti-tamper model to estimate the additional effort needed. Anti-tamper in this context represents the last line of defense against attack. We would use the model in a manner similar to the network defense model, selecting values for its parameters based on the guidance provided. Finally, we would calculate the effort using the formulas provided in Figure 8, which reflect the opinions of our experts.

7. Next Steps

We have developed a project plan for finalizing the first release of a prototype version of both models in 2006. The milestone schedule from this plan is summarized by task in Table 9. It assumes that our collaborative efforts will continue and will supply the critiques and data necessary to keep making progress. The schedule assumes that actual experience data will become available and that we will develop an initial calibration. This calibration would be devised using the advanced Bayesian approach that was used for COCOMO II [12], so that we could weight values to reflect whether they rest on expert opinion or on statistically validated actuals. While the schedule is aggressive, we believe it is achievable, assuming that the effort receives priority and that its funding is not interrupted. Otherwise, delays will be incurred and the model will not be ready for prototype use until 2007.

Table 9. 2006 Milestone Schedule (tasks are spread across the first through fourth quarters of 2006)

Model Development
1. Define drivers
2. Develop counting conventions
3. Rate drivers via Delphi
4. Develop model definition manual
5. Build spreadsheet model
6. Calibrate prototype model

Data Collection
1. Develop data collection questionnaire
2. Test questionnaire utility via trial use
3. Capture data
4. Build Excel database
5. Statistically analyze data
6. Calibrate model and its parameters

8. Future Challenges

Building models for estimating the cost of securing the network infrastructure and anti-tamper is both a time-consuming and difficult task. The challenges are numerous, but so are the rewards. Our biggest challenge will most likely be acquiring reliable data. Such data is needed to confirm expert opinion and calibrate the model against actual experience. Just getting firms to volunteer such data is difficult. Additionally, once acquired, such cost data must be normalized and validated before we can make sense of it. Using collaborators we have worked with in the past to get data will enable us to overcome some of these problems, and our experience in creating comparison cost databases will also help with the normalization issues. Yet both of these problems remain persistent, and efforts must be mounted continuously to reinforce the idea that firms are doing the right thing by sharing data with us.

In addition, as part of our future efforts we want to investigate the risk relationships between model cost drivers [13, 14, 15]. This will enable us to use relative effort and duration to quantify the impact of risks inherent in network defense and anti-tamper throughout the life cycle. The challenge is to make sure that the models properly identify the variables to which effort and duration are most sensitive and how these parameters are related. We also recognize that risk is a measure that revolves around probability distributions, which must be formulated and confirmed statistically in order to be meaningful. Our hope is that, during the forthcoming year, our efforts to confirm with actual data what the experts believe these parameters to be will establish these relationships. In parallel, we will confirm these relationships and their statistical properties using the actual experience data we capture as part of our data collection efforts.

Our final challenge is to maintain sufficient sponsorship to fund our efforts. This requires us to keep our sponsors engaged and interested. We will do this by making incremental progress and generating positive results. As part of this effort, we plan to continue to make the results of our efforts public via periodic progress reports and technical articles.


9. Conclusions

The intent of this paper was to provide insight into the development of our estimating models, their heuristics, driver definitions, and future challenges. Our efforts have confirmed that the community desires models that it can use to develop accurate estimates of effort and duration for securing the network infrastructure and anti-tamper. Our contribution represents such a model. Its parametric formulation differs from other currently available models, which estimate the cost of engineering security into developments as part of the design. We have had tremendous support in our efforts but still have a number of significant challenges ahead of us. We hope to continue collaborating with practitioners and the security community at large to produce an accurate and flexible model. Their insights and suggestions have helped us solidify concepts and generate our initial products to date. We plan to continue reporting the status and progress of our efforts as we pursue model development, with the intent of making an impact in the fields of cost estimation and security engineering.

10. References

[1] CISCO Systems, The Return on Investment White Paper, 2001 (available at http://www.cisco.com/en/us/netsol).
[2] Shawn A. Butler, "Security Attribute Evaluation Method: A Cost-Benefit Approach," Proceedings of the International Conference on Software Engineering, May 2002, pp. 232-240.
[3] Donald J. Reifer, Barry W. Boehm and Murali Gangadharan, "Estimating the Cost of Security for COTS Software," COTS-Based Software Systems, Springer, 2003, pp. 178-186.
[4] Department of Education, Information Technology Security Cost Estimation Guide, November 2002.
[5] Department of Defense, DOD 5220.22-M, National Industrial Security Program Operating Manual, January 1995.
[6] Department of Defense, CCIMB-99-031, 032 and 033, Common Criteria for Information Technology Security Evaluation, v2.1, 1999.
[7] International Standards Organization, ISO/IEC 12207, Software Life Cycle Processes, August 1995.
[8] Electronics Industries Association, EIA 632, Processes for Engineering a System, 1998.
[9] Barry W. Boehm, Chris Abts, A. Winsor Brown, Sunita Chulani, Bradford K. Clark, Ellis Horowitz, Ray Madachy, Donald Reifer and Bert Steece, Software Cost Estimation with COCOMO II, Prentice-Hall, 2000.
[10] Ricardo Valerdi, Christopher Miller and Gary Thomas, "Systems Engineering Cost Estimation by Consensus," Proceedings of the International Conference on Systems Engineering, September 2004.
[11] Chris Abts, Barry W. Boehm and Elizabeth B. Clark, COCOTS: A COTS Software Integration Life Cycle Cost Model – Model Overview and Preliminary Data Collection Findings, University of Southern California, Report USC-CSE-2000-501, 2000 (available at http://sunset.usc.edu under technical reports).
[12] Sunita Chulani and Bert Steece, A Bayesian Software Estimating Model Using a Generalized g-Prior Approach, University of Southern California, Report USC-CSE-1998-515, 1998 (available at http://sunset.usc.edu under technical reports).
[13] Ashish Arora, Dennis Hall, C. Ariel Pinto, Dwayne Ramsey, and Rahul Telang, "Measuring the Risk-Based Value of IT Security Solutions," IT PRO, November/December 2004, pp. 35-42.
[14] Gary McGraw, "Risk Analysis in Software Design," IEEE Security & Privacy, July/August 2004, pp. 79-84.
[15] Stuart E. Schechter, "Toward Econometric Models of the Security Risk from Remote Attacks," IEEE Security & Privacy, January/February 2005, pp. 40-44.

About the Author

Donald J. Reifer is recognized as one of the leading figures in the fields of software engineering and management, with over thirty-five years of progressive management experience in both industry and government. From 1993 to 1995, Mr. Reifer managed the DoD Software Initiatives Office under an Intergovernmental Personnel Act assignment with the Defense Information Systems Agency. As part of this Senior Executive Service assignment, he served as the Director of the DoD Software Reuse Initiative and Chief of the Ada Joint Program Office. Previously, while with TRW, Mr. Reifer served as Deputy Program Manager for their Global Positioning Satellite efforts. While with the Aerospace Corporation, Mr. Reifer managed all of the software efforts related to the Space Transportation System (Space Shuttle). Currently, as President of RCI, Mr. Reifer advises executives in Fortune 500 firms worldwide in the areas of software investment and improvement strategies. He is known for both his business and practical problem-solving skills. Reifer received a bachelor's degree in electrical engineering from the New Jersey Institute of Technology and a master's degree in operations research from the University of Southern California.

Reifer Consultants, Inc. P.O. Box 4046 Torrance, CA 90510-4046 Phone: 310-530-4493 Fax: 310-530-4297 E-mail: [email protected]

