The Role of Experience in Software Testing Practice

Armin Beer
Siemens IT Solutions and Services PSE
Gudrunstrasse 11, A-1101 Vienna, Austria
[email protected]

Rudolf Ramler
Software Competence Center Hagenberg
Softwarepark 21, A-4232 Hagenberg, Austria
[email protected]

Abstract
Practitioners report that experience plays an important role in effective software testing. We investigate the role of experience in a multiple case study of three successful projects conducted at Siemens Austria and document the state of practice in testing software systems. The studied projects were drawn from the domains of telecommunications, insurance and banking, and safety-critical railway systems. The study shows that in all three projects test design is to a considerable extent based on experience, and that experience-based testing is an important supplementary approach to requirements-based testing. The study further analyzes the different sources of experience, the perceived value of experience for testing, and the measures taken to manage and evolve this experience.

1. Introduction
In today's software testing practice, successful testing often relies on the tester's skills, intuition, and experience. "An experienced tester who knows the product and has been through a release cycle or two is able to test with vastly improved effectiveness" is thus one of the lessons learned in software testing [8]. Armour reports that he has "… found good testers have a 'nose' for testing. They experience a kind of intuition that tells them what to test and how …" [1], which he attributes to experience with similar systems, knowledge about the customer, the development process, and the capabilities of the developers. Consequently, experience-based testing techniques [5] such as exploratory testing [3] have emerged that emphasize the tester's experience for effective testing. Yet the tester's knowledge and experience are indispensable for any testing method or technique. Although a wide range of methods and techniques offer guidance for systematic and rigorous testing, it is the tester's knowledge that plays an essential role in ultimately and successfully applying these methods and techniques in the context of a project [16]. In this paper we study how testing is performed in three mid-sized to large industry projects and observe the impact of the testers' experience on testing. We define experience as practical knowledge [13] that has been developed by direct observation of or participation in testing activities. Our goal is to better understand the role of experience for effective testing in order to develop successful testing strategies and tool support. The remainder of the paper is structured as follows. Section 2 motivates and describes the case study approach we follow in this paper. Section 3 gives an overview of the three investigated projects. Section 4 presents the resulting observations and, finally, in Section 5 conclusions are drawn for directing future research.

2. Motivation and Study Design

2.1 Research Objective
Few empirical studies exist that focus on how software testing is conducted in practice. The importance of such studies has been repeatedly emphasized, as they provide insight into the requirements and limiting factors of applying software testing in practice and thus establish a basis for selecting research problems in which researchers and practitioners share a mutual interest. For example, Martin et al. [11] argue that "Many descriptions in Software Engineering research of the application of testing and verification methods to real world problems, whilst welcome, should be considered more-so as demonstrations than as reports of how testing is done in practice. There is a need for empirically gathered data about testing that has no vested interest in demonstrating the superiority of a particular method or technology." Furthermore, Perry et al. [12] strengthen the case for empirical studies "used not only retrospectively to validate ideas after they've been created, but also proactively to direct our research." Related studies include Martin et al.'s ethnography of testing in a small software company [11], Itkonen and Rautiainen's case studies on exploratory testing [7], Runeson's survey of unit testing practices [14], Larndorfer et al.'s investigation of software testing for continuous casting steel plants [9], and Taipale et al.'s qualitative study on testing organizations and knowledge management [15]. This paper is based on a multiple exploratory case study [17] that documents and investigates the state of practice in software testing in three successful projects conducted at Siemens Austria. The selected projects were related to different technical domains (telecommunications, banking and insurance, and railway systems) and followed development and testing processes assessed at CMMI level 3 or above. The research objective of the case study is to explore and describe the role of experience in software testing. We thereby follow the proposition that experience plays a major role in software testing and is an important factor in developing test cases. Our proposition relates to the reported evidence of the benefits of experience-based testing (e.g., [3][7][8]), which stresses experience as essential for effective software testing.

2.2 Research Questions
Based on this proposition, a list of research questions was defined to drive the study. The three cases presented in this paper were selected and analyzed in the light of these questions. Moreover, these questions were further refined into a list of open interview questions in order to serve as a guideline for the interviews and the document analysis.

(1) What activities of testing are based on experience? All activities throughout the test process, from test planning to analyzing and reporting test results, most likely benefit from experience. To focus our study we narrowed the investigation to the activities involved in the typically laborious development of test cases, i.e., designing test cases (including identifying test objects and conditions, selecting test data and inspection points, and defining expected results), implementing test cases for execution, as well as prioritizing tests, compiling test suites, and defining coverage and test stop criteria.

(2) What is the perceived value of experience for testing? While we are primarily interested in the effectiveness of testing based on experience, the value in general has to be evaluated by weighing the benefits achieved against the costs of incorporating experience in testing activities. Quantitative and qualitative data have to be collected, and, in order to minimize the influence of the subjective perception of different roles, testers and managers have to be interviewed separately.

(3) What defined experience-based testing techniques are used? Experience-based testing encompasses test design techniques that "derive and/or select test cases based on the tester's experience, knowledge and intuition" [6]. Prevalent experience-based testing techniques following this definition are: ad hoc testing [5], subsuming completely informal testing approaches; exploratory testing [3], where the design of test cases and test execution are intertwined in order to continually design new tests using the information gained while testing; and error guessing [6], where the tester's experience with potential defects, or a recorded history of faults, is used for test design.

(4) How has the experience relevant for testing been established? Experience, as defined above, is used in this paper as a synonym for knowledge developed through involvement in testing activities. This initial question of how experience has been established thus forks into who was involved, in what activity, where and when it took place, and why, in order to further distinguish the diverse sources of experience.

(5) What measures are taken for managing and evolving experience in testing? Personal skills, experience and knowledge in general heavily influence any activity in software engineering. Hence, knowledge management has become a major concern for software engineering [2][10]. Regarding knowledge and experience as an important factor, this question investigates to what extent this factor is actively managed and evolved as part of software testing and test management.
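To make the distinction concrete, error guessing can be sketched as running a checklist of inputs a tester suspects will fail. The function under test and the guessed inputs below are hypothetical illustrations, not taken from the studied projects:

```python
# Illustrative error-guessing sketch; parse_amount and the guessed inputs
# are hypothetical examples, not artifacts of the studied projects.

def parse_amount(text: str) -> float:
    """Toy function under test: parse a monetary amount."""
    value = float(text.strip().replace(",", ""))
    if value < 0:
        raise ValueError("negative amount")
    return value

# Error guessing: inputs the tester suspects will fail, based on experience
# with similar systems (empty input, whitespace, separators, negatives).
guessed_inputs = ["", "  ", "1,000.50", "-5", "1e3", "NaN"]

failures = []
for text in guessed_inputs:
    try:
        parse_amount(text)
    except (ValueError, TypeError):
        failures.append(text)

print(failures)  # → ['', '  ', '-5']
```

Notably, the guessed input "NaN" raises no error and silently yields a not-a-number value, which is exactly the kind of latent defect this technique tends to surface.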

2.3 Data Collection and Interpretation
To answer these questions the study relied on multiple sources of evidence. First, semi-structured interviews were conducted based on a list of open interview questions derived from the research questions. The interviewed project personnel were testers, developers, test managers, and project managers. Interview protocols documented the statements given by the interviewed persons. Second, project documents and artifacts such as test plans, requirements specifications, test specifications, protocols, and test case repositories were analyzed. In addition, the findings were compared with records from test process improvement initiatives guided by the test support center of Siemens. Finally, to ensure the accuracy of the results, a detailed case study report was created and reviewed by key participants involved in the study. The project settings and observations documented in this report are summarized in the next sections, Section 3 and Section 4.

3. Description of Studied Projects
In this section the three studied projects are introduced. The description is an abridgment of the documentation of the original study. For confidentiality reasons, project names were replaced by numbers and links to the involved companies were omitted. Table 1 gives an overview of the studied projects.

                 Case #1                    Case #2                        Case #3
Domain           Telecommunications         Banking and insurance          Railway systems
Size             Mid-sized, 35 persons      Mid-sized, 15 to 20 persons    Large-scale, ~100 persons
Duration         Ongoing product            Fixed-length project           Ongoing product
                 development                of one year                    development
Characteristics  - Distributed project      - Incremental development      - Safety-critical system
                 - Many non-functional      - Knowledge acquisition        - Mandatory standards for
                   requirements               from external consultants      development and testing
                 - Multiple platforms /     - Key user involvement
                   configurations

Table 1: Overview of the studied projects

3.1 Case #1: Telecommunications
Case #1 was a mid-sized project located in the telecommunications domain. The project's mission was to develop a highly reliable, high-performance software platform including IP-based services for telecommunication networks. In the preceding years four similar projects had been completed in the same domain, so considerable expertise was available. In total 35 persons worked on this project, 12 of them in testing. Development and testing were distributed across Austria, Germany and various other European regions. The Siemens development process framework for information and communication network projects was applied. The process had been tailored for iterative development with three-month release cycles. Non-functional requirements dominated the development and testing activities: more than 60 percent of the tests addressed non-functional requirements such as availability, performance, high load, and long-duration operation. Furthermore, the system had to support two different hardware platforms with a large number of different configurations per platform. Testing activities spanned several test levels conducted by separate teams: individual components, integrated services, and the overall system. Component testing was conducted by the development teams to verify the functionality of the components. The basis for defining the component tests was the interface specification written by the developers themselves. Tests were primarily run in a project-specific test environment simulating the various hardware devices. Depending on the individual developer and his or her personal experience and preferences, additional open-source testing tools and test-driven development were applied. Integration testing of the integrated services for the different platforms and configurations focused especially on performance, to provide early feedback on design issues for development. Integration testing therefore took place in parallel to and in cooperation with development. Testers reported that up to 40 emails were exchanged per day to reproduce, discuss and fix defects. The number of test cases was relatively small, and integration testing concentrated on standard scenarios due to the high effort of implementing the tests. Test cases had to be implemented as scripts for the platform-specific protocol simulator before they could be executed. The basis for test design was the requirements specification, the design specification, and documents about platform-specific principles and guidelines. They provided, for example, the throughput values for the different hardware platforms relevant for performance testing. The quality and granularity of the requirements descriptions varied greatly; sometimes there were only one or two lines for a complex requirement.
For this reason it was necessary to base tests also on experience and to work together with the developers. System testing followed a requirements-based approach, and test cases were traced to the specified requirements. There were one or more test cases for each requirement, depending on the requirement's complexity and size. The test cases were essentially derived from the testers' interpretation of the requirements. As the suite of test cases grew from version to version, test case selection for regression testing became an increasingly important task. The selection of regression test cases depended mainly on the experience of the testers and was done in cooperation with development to make sure testing covered all system parts affected by changes.
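The traceability described above, with one or more test cases linked to each requirement, can be sketched as a simple mapping; the requirement IDs and test names below are invented for illustration:

```python
# Hypothetical sketch of requirements-to-test-case traceability as used for
# system testing; all IDs and names are invented, not taken from case #1.

traceability = {
    "REQ-101": ["TC-101-basic-call"],
    "REQ-102": ["TC-102-failover", "TC-102-failover-under-load"],
    "REQ-103": [],  # requirement not yet covered by any test case
}

# Every requirement should be traced to at least one test case; a simple
# check reveals which requirements still lack coverage.
uncovered = [req for req, tests in traceability.items() if not tests]
print(uncovered)  # → ['REQ-103']
```

Such a mapping also supports the regression selection mentioned above: given a list of changed requirements, the traced test cases are natural candidates for the regression suite.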

[Figure 1: Incremental development process in case #2 (overlapping releases, each structured into definition, design and implementation, and operation phases)]

3.2 Case #2: Banking and Insurance
The goal of case #2 was to replace an existing host application in the banking and insurance domain with a Java-based Web application within a time frame of about one year. The project was conducted on the site of the company under the lead of Siemens. The project team consisted of company employees plus personnel from Siemens and other software engineering organizations. The company's employees provided the domain knowledge, and the external personnel contributed the know-how about the involved technologies as well as software engineering methods and tools. The overall project team consisted of 15 to 20 people working together in this mid-sized project. The project was split into several overlapping releases (see Figure 1), which were further structured into the phases definition, design and implementation, and operation. Each release was put into operation once completed. The project followed an incremental development process that fostered the incorporation of results from one release into the definition of the next release. In the definition phase the requirements were elicited and specified by external requirements analysts assisted by domain experts from the company. Use cases were used to specify the requirements, which were input for the design and implementation phase and, furthermore, the basis for system testing. Testing was split into unit and integration testing, system testing, and acceptance testing. Unit and integration testing was done by the developers themselves, based on guidelines issued by senior developers experienced with the applied technology. System testing was done by an independent test team of five testers recruited from key users and domain experts of the company. The test team was led by an external test manager. System test cases were derived from use cases in a formal way and specified in a test management tool.

When test cases were derived from the requirements for the first release, the testers found that some of the requirements were ambiguous, some lacked details necessary for testing special and exceptional cases, and some were even incorrect. Due to time restrictions in requirements engineering, the domain experts had not been able to review the requirements thoroughly. Therefore, from the second release onward, additional test cases were designed based on the experience of the testers. Moreover, the testers implemented additional test cases because, from several years of using the host application, they knew about the risk of potential defects in critical functions with public visibility. As a result, in the second release, parts of the system were tested much more thoroughly. The experience-based test cases were specified and managed like those derived from requirements. Similarly, non-functional requirements had been specified loosely or not at all. For example, the performance requirements stated neither a clear response time nor the usage scenarios appropriate for load testing. Again, representative values and scenarios were elaborated together with domain experts and key users, based on experience.

3.3 Case #3: Railway Systems
In this case we investigate the testing of an electronic interlocking system, a safety-critical subsystem of a railway control system. The electronic interlocking system guarantees the safe operation of the shunts in a railway yard; hence, safety requirements are extremely high. The software part of the interlocking system runs on dedicated hardware with triple redundancy for automatic fault detection. The development of the railway control system is an ongoing large-scale project with a team of about 100 persons. The railway control system is a product in use at several different railway companies all over the world, with country-specific customization and adaptation to different types of interlocking systems. Development and testing for this kind of project are regulated by the CENELEC standards (http://www.cenelec.eu), such as EN 50128 (software for railway control), EN 50129 (railway signaling systems) and EN 61508 (safety of electronic systems). The standards promote the V-model for development and testing (Figure 2). Testing takes place in the validation phases at the system level and software level as well as at the software module level.

[Figure 2: Process model applied in case #3. The V-model relates requirements, architecture, design, and implementation to validation, integration, and test at the system level, software level, and software module level.]

For testing, the CENELEC standards demand a requirements-based approach, and covering 100 percent of the requirements by testing was mandatory. Test cases at all levels were therefore documented as part of the test specification and traced to requirements. Furthermore, complete code coverage had to be achieved, or a written justification for uncovered execution paths had to be provided. For test automation the framework GRACE (Graphical Requirements Analysis and design method in a CENELEC based Engineering process) was used [4]. Test cases at the system and software level were designed by testers with a thorough knowledge of railway interlocking systems on the basis of the requirements specifications and their personal knowledge about railway systems, both from a technical perspective and from the user's point of view. In total, the project had developed an extensive set of about 3,500 system test cases. The requirements specifications were written in a very precise style, but still in natural language. As a result, the systematic inference of test cases was not straightforward, and techniques such as equivalence partitioning or state transition testing could not be applied to all requirements. Exploratory testing thus played an important role in supplementing systematic requirements-based testing.

4. Observations
In this section, recurring patterns in the cases under study are reported with respect to the research questions stated in Section 2: (1) What activities of testing are based on experience? (2) What is the perceived value of experience for testing? (3) What defined experience-based testing techniques are used? (4) How has the experience relevant for testing been established? (5) What measures are taken for managing and evolving experience in testing?

4.1 What activities of testing are based on experience?
The most prominent activities that relied heavily on experience are described in the following.

Development of test cases. The most notable activity observed across all three cases was test case development, which relied on the interpretation of the specified requirements in order to identify the objects and conditions relevant for testing. For this activity, both domain experience and testing experience were prerequisites for developing useful test cases. In case #1, test cases were derived from a requirements specification of varying quality and granularity, often insufficiently detailed for developing the test cases. The testers had to make assumptions about realistic values based on their personal experience or contact analysts and developers. In case #2, the system test cases were derived from use cases created by an external team of analysts. The time scheduled for reviewing the use cases by domain experts was not sufficient to produce a complete and unambiguous specification. Thus, testers with detailed domain knowledge were involved to derive additional test cases based on their personal experience and to revise the specification. The iterative approach of the project facilitated these revisions. In case #3, test cases were derived from natural-language requirements. The test cases were found by interpreting the requirements using knowledge and experience about how railway interlocking systems work in practice. All test cases were developed by testers with years of domain experience and a thorough knowledge of testing railway interlocking systems to overcome any omissions or ambiguities inherent in the natural-language specification.
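As a hypothetical illustration of how such assumptions can be handled, a tester may encode the values assumed from experience as clearly labeled constants, keeping the assumptions visible and reviewable; all names and numbers below are invented, not taken from the studied projects:

```python
# Illustrative sketch only: a requirement like "the system shall respond
# quickly under high load" gives no concrete threshold, so the tester fills
# the gap with values assumed from experience. Everything here is invented.

# Assumed by the tester from experience with similar systems, not specified:
ASSUMED_MAX_RESPONSE_MS = 500      # typical limit seen in comparable releases
ASSUMED_CONCURRENT_USERS = 200     # realistic peak load per domain experts

def simulated_response_time_ms(concurrent_users: int) -> float:
    """Stand-in for a measurement against the system under test."""
    return 1.5 * concurrent_users  # placeholder model, not a real system

def test_response_time_under_load():
    observed = simulated_response_time_ms(ASSUMED_CONCURRENT_USERS)
    assert observed <= ASSUMED_MAX_RESPONSE_MS, (
        f"response took {observed} ms under {ASSUMED_CONCURRENT_USERS} users"
    )

test_response_time_under_load()
print("load test passed")
```

Making the assumed values explicit also gives analysts and developers a concrete artifact to confirm or correct, which mirrors the cooperation with development reported in case #1.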


Regression testing. Similar to the development of new test cases, the selection of regression test cases depended mainly on the experience of the testers. Test cases were selected for regression testing with the intent to cover system parts that had been changed and to detect side effects caused by these changes. The selection was based on the list of changed or new requirements and on experience about implicit dependencies resulting from how the system had been constructed. In case #1, for example, the testers therefore cooperated with development to identify the system parts affected by changes.
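A minimal sketch of this kind of selection, assuming tests are organized by component and implicit dependencies are recorded from the testers' experience (all identifiers below are invented):

```python
# Hypothetical sketch: combine the list of changed components with testers'
# knowledge of implicit dependencies to select regression tests.
# All component names and test IDs are invented for illustration.

tests_by_component = {
    "billing": ["TC-bill-1", "TC-bill-2"],
    "routing": ["TC-route-1"],
    "reporting": ["TC-rep-1"],
}

# Implicit dependencies known from experience, not from any specification:
implicit_deps = {"billing": ["reporting"]}  # reporting reads billing data

def select_regression_tests(changed_components):
    """Select tests for changed components and their known dependents."""
    selected, visited = set(), set()
    worklist = list(changed_components)
    while worklist:
        comp = worklist.pop()
        if comp in visited:
            continue  # guard against cyclic dependency entries
        visited.add(comp)
        selected.update(tests_by_component.get(comp, []))
        worklist.extend(implicit_deps.get(comp, []))
    return sorted(selected)

print(select_regression_tests(["billing"]))
# → ['TC-bill-1', 'TC-bill-2', 'TC-rep-1']
```

The point of the sketch is that the dependency map encodes experience that is absent from the requirements: changing "billing" also selects the "reporting" tests.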

Testing with imperfect requirements specifications. In all three projects we observed that experience enabled the development of test cases despite imperfect requirements specifications. All analyzed projects had invested a considerable amount of time and resources in developing a comprehensive and systematic requirements specification. Although imperfect for testing, the specifications usually satisfied the needs of development, and additional investments were considered prohibitive due to project constraints. In case #1, the main benefit of experience-based tests was that test cases were established for aspects of the system for which no or only insufficient specification (just a few lines) had been available. In case #2, most test cases were derived from the specification in a systematic manner, as planned. Domain experts added supplementary, yet very effective, test cases based on their profound experience in order to overcome omissions and inaccuracies in the initial specification. Applying experience-based testing instead of "perfecting" the specification was regarded as essential for keeping the tight release deadline. In case #3, experience was used to interpret the requirements and helped to bridge the gaps in the textual, natural-language specifications.

Test automation. Automating the test cases required comprehensive experience with the automation frameworks and tools. Tool experience was emphasized in case #1, where the testers used special testing tools and a vendor-specific test environment. Experience with the setup and configuration of the testing hardware and simulators, as well as with the definition of appropriate deployment descriptors, was mentioned as a key issue for producing meaningful test results. In case #2, the test team included external consultants with experience in test automation. They supported establishing a successful test automation approach based on the test cases delivered by the domain experts, as well as a user-friendly automation framework.

4.2 What is the perceived value of experience for testing?
To answer this question we first evaluated the number of defects found by test cases based on the testers' experience. Then we investigated what benefits and costs resulted from taking an experience-based approach to testing.

Number of defects detected. We found that requirements-based tests and experience-based tests differed only in how they had been designed: either by refining specified requirements or by deriving tests from personal experience. The resulting test cases, however, were managed and executed in the same way, using the same tools, and following the same procedures. An exact identification of the defects detected by experience-based tests was therefore not possible for all studied cases. In case #2, we found that about 10 to 20 percent of the test cases had been derived from personal experience, yet about 50 percent of the total defects were detected by experience-based test cases. Many of these defects were critical. The effectiveness of experience-based tests was regarded as essential for a successful go-live as planned in case #2.
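A back-of-the-envelope calculation makes the reported effectiveness concrete; taking the midpoint, 15 percent, of the reported 10 to 20 percent range is our own assumption:

```python
# Back-of-the-envelope calculation based on the figures reported for case #2.
# The 15% share is our assumed midpoint of the reported 10-20% range.

exp_test_share = 0.15     # share of test cases derived from experience
exp_defect_share = 0.50   # share of defects found by those test cases

# Defects found per unit share of test cases, for each kind of test:
yield_exp = exp_defect_share / exp_test_share
yield_req = (1 - exp_defect_share) / (1 - exp_test_share)
ratio = yield_exp / yield_req

print(round(ratio, 1))  # → 5.7
```

Under this assumption, an experience-based test case detected roughly five to six times as many defects as a requirements-based one in case #2, which underlines the perceived value reported by the interviewees.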

Costs of experience-based testing. In case #3, where reliability requirements had high priority, the amount of testing and the resulting costs were major issues. The main cost driver, and thus the main disadvantage identified, was the number of redundant test cases caused by the use of error guessing in addition to other testing techniques. The high total number of test cases resulted in an excessively high testing effort.

4.3 What defined experience-based testing techniques are used?
All three projects had the strict requirement to specify test cases upfront, before testing was started. Test cases and test results were maintained in a test management tool. This requirement encouraged the application of formal testing techniques based on requirements rather than on experience. Furthermore, in case #1, test cases had to be automated before they could be executed. The initial effort involved in implementing the test cases contradicted the idea of exploratory testing and rendered this technique infeasible for a large share of the test cases. However, in case #2, deriving test cases from experience had been explicitly planned for the second iteration. Following the approach of error guessing, domain experts involved in testing developed test cases for usage scenarios in which they had experienced failures in the past when working with the predecessor host application. In case #3, the complementary use of error guessing was necessary to satisfy the standard IEC 61508, which mandates testing by fault expectation in addition to other testing techniques.

4.4 How has the experience relevant for testing been established?
Testers gained their experience from diverse sources. The most frequently mentioned sources were testing previous versions of the software system, involvement in analyzing and fixing defects, taking part in development and maintenance, and working in the domain with similar software systems. The testers in case #1 gained their experience over years of testing and developing software for telecommunication systems. An important source of experience for new testers was the close cooperation with senior developers, who contributed their experience about the system for designing and implementing test cases and for setting up the test environment. In addition, reusing test cases from earlier versions helped to disseminate relevant experience to future versions and to new testers. In case #2, the experience of the domain experts recruited for testing had been developed by working in different roles for the company for several years. They therefore had intimate knowledge of organizational and legal obligations as well as established processes. In case #3, testers had several years of experience in testing different country-specific versions of the interlocking system and in working, for example, in railway control centers. Some of these testers were "railway enthusiasts" and experts with a high reputation in the project. For example, they had detailed knowledge about near-accidents in the operation of railway systems. Furthermore, they had been through several product qualifications for different countries to certify the system's conformance to national regulations and international standards. Consequently, they knew how to interpret the requirements stipulated in these regulations and standards and how to put them into practice.

4.5 What measures are taken for managing and evolving experience in testing?
In all three projects, management considered experience essential for effective testing. For example, in case #1 the project manager pointed out that only experience enables effective testing and that it takes on average three quarters of a year to a full year for new testers to become productive. As the main reason he identified the domain knowledge necessary for testing. This knowledge is often not made explicit, since it is regarded as a precondition for working on the project. Furthermore, as observed in case #1 as well as in case #2, this knowledge cannot always be easily inferred from existing documents and specifications. Although measures were in place in all analyzed projects to manage and evolve the experience required for testing, these measures were not addressed in the test strategy or in the test plans we examined. From the interviews we retrieved the following information about measures for managing and evolving experience related to testing.

Training on the job. All interview partners emphasized training on the job. In case #1, new testers were assigned to a coach who supported the tester's knowledge development during the first period of working on the project. Case #2 was characterized by explicit knowledge transfer: testing knowledge was acquired in the form of external testers and consultants and then transitioned to the test team by working closely together at the same location and by documenting the applied approaches as guidelines. In case #3, the use of experienced personnel, i.e., seasoned domain experts involved in developing, repeatedly adapting, and extending the system for different countries, established a sustainable base of domain and testing experience.

Development and test process. In general, iterative and incremental development fostered the application of experience in testing as, for example, artifacts and personnel were reused. In case #1, reuse included test cases, test environments and test personnel. In case #2, the project was developed in three iterations, resulting in increasing domain experience as well as testing experience.

5. Conclusions and Future Work
In this paper we studied three projects conducted at Siemens Austria and documented our observations about the role of experience in software testing. The following conclusions were drawn from these observations. We consider these generalized conclusions to go beyond the scope of documenting the state of practice; they serve as a starting point for defining research questions and hypotheses to be tackled in future work.

• Testing knowledge vs. domain knowledge. The development of testing knowledge was an important aspect in all studied projects. However, we found that substantial domain knowledge was also required for testing, which could only be developed adequately by working in the domain or by long-term involvement in a project. The typical path of knowledge development of senior testers started with domain knowledge; testing knowledge was developed later while working as a tester and attending additional seminars. Advanced testing was usually introduced by external consultants working together with domain experts. Finding the optimal mix of testing knowledge and domain knowledge is thus a vital issue for successful projects and a major task for our future research.

• Quality and testability of requirements. In all projects we found difficulties in specifying requirements, even crucial ones, consistently, completely and correctly. We conclude that reviewing and improving requirements specifications for testability is a simple yet effective measure to improve testing. For example, in case #3, our observations have led to the enhancement of the textual specifications with more formal state charts. Nevertheless, we also conclude that instead of aiming at a "perfect" specification, projects should usually also invest in applying experience-based testing to overcome issues in imperfect specifications.

• Tool support for experience-based testing. An important lesson learned was that, while customized tools were available in all projects to support testing activities, the existing tools failed to leverage experience for effective testing. Our future work will include enhancing testing tools in two ways. First, tools should directly support the incorporation of the tester's experience, e.g., as an additional source for generating test cases. Second, tools should foster gaining and sharing new experience throughout testing activities in addition to producing test results. We will thereby further investigate the question of how tools and techniques should be designed for experience-based testing.

Acknowledgements

This work has partially been conducted within the competence network Softnet Austria (www.soft-net.at) and funded by the Austrian Federal Ministry of Economics (bm:wa), the province of Styria, the Steirische Wirtschaftsförderungsgesellschaft mbH (SFG), and the city of Vienna through the Center for Innovation and Technology (ZIT).

References

[1] Armour P.G.: The Unconscious Art of Software Testing. CACM, vol. 48, iss. 1, January 2005, pp. 15-18

[2] Aurum A., Jeffery R., Wohlin C., Handzic M.: Managing Software Engineering Knowledge. Springer, 2003

[3] Bach J.: Exploratory Testing Explained. In E. v. Veenendaal (ed.): The Testing Practitioner. UTN Publishers, 2002, pp. 209-221 (v1.3, 4/16/03, www.satisfice.com)

[4] Beer A., Heindl M.: Issues in Testing Dependable Event-based Systems at a Systems Integration Company. 2nd Int. Conference on Availability, Reliability and Security (ARES), Vienna, Austria, 2007

[5] IEEE: Guide to the Software Engineering Body of Knowledge. IEEE Computer Society, 2004

[6] International Software Testing Qualifications Board: Standard Glossary of Terms Used in Software Testing. Version 1.2, April 2006

[7] Itkonen J., Rautiainen K.: Exploratory Testing: A Multiple Case Study. 4th Int. Symposium on Empirical Software Engineering (ISESE), Noosa Heads, Australia, 2005

[8] Kaner C., Bach J., Pettichord B.: Lessons Learned in Software Testing: A Context-Driven Approach. Wiley & Sons, 2002

[9] Larndorfer S., Ramler R., Federspiel C., Lehner K.: Testing High-Reliability Software for Continuous Casting Steel Plants. 33rd EUROMICRO Conference on Software Engineering and Advanced Applications, Luebeck, Germany, 2007

[10] Lindvall M., Rus I.: Knowledge Management in Software Engineering. IEEE Software, 19(3), May/June 2002, pp. 26-38

[11] Martin D., Rooksby J., Rouncefield M., Sommerville I.: 'Good' Organisational Reasons for 'Bad' Software Testing: An Ethnographic Study of Testing in a Small Software Company. 29th Int. Conference on Software Engineering (ICSE), Minneapolis, MN, 2007

[12] Perry D.E., Porter A.A., Votta L.G.: Empirical Studies of Software Engineering: A Roadmap. The Future of Software Engineering (ICSE), Limerick, Ireland, 2000

[13] Probst G., Raub S., Romhardt K.: Managing Knowledge: Building Blocks for Success. Wiley & Sons, 1999

[14] Runeson P.: A Survey of Unit Testing Practices. IEEE Software, 23(4), July/Aug. 2006, pp. 22-29

[15] Taipale O., Karhu K., Smolander K.: Observing Software Testing Practice from the Viewpoint of Organizations and Knowledge Management. 1st Int. Symposium on Empirical Software Engineering and Measurement (ESEM), Madrid, Spain, 2007

[16] Vegas S., Juristo N., Basili V.: Packaging Experiences for Improving Testing Technique Selection. Journal of Systems and Software, vol. 79, Nov. 2006, pp. 1606-1618

[17] Yin R.K.: Case Study Research: Design and Methods. 3rd Ed., SAGE Publications, 2002
