Software Testing 3. Classes

Author: Elmer Conley

Universiteit Antwerpen

Chapter 3

3. Classes
(Based on "Part III: Patterns": Chapter 9 Results-Oriented Testing & Chapter 10 Classes)

• Results-Oriented Testing
  - Why, How
  - Patterns
• Testing Classes
  - Overview
  - Method Scope Test Patterns
  - Class Scope Test Patterns
  - Class Scope Integration
  - Flattened Class Scope


Why Results-based Testing?

Traditional view: a test strategy ladder (acceptance testing, system testing, integration testing, unit testing) combined with test techniques (white-box, black-box).

This traditional view on software testing doesn't work well for modern (object-oriented) system development:
➡ the distinction between the categories is fuzzy at best!
➡ incremental and iterative development practices

Test ready

Implementation Under Test (IUT)
• is at a certain scope (method, class, subsystem, application, …)
• consists of parts (statements, methods, clusters of classes, …)

Parts must pass an operability threshold
• = an explicit, demonstrable criterion for part interoperability
• examples:
  - clean compile & link, no memory leaks, …
  - pass smoke tests, pass minimal duty cycle, …
  - pass integration test suite, pass regression test suite, …

Assessment of part operability
• suspect: has not shown sufficient reliability; don't start testing
• okay: sufficient reliability; may start testing at this scope
• trusted: high reliability; stop testing at this scope
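The suspect/okay/trusted assessment can be sketched as a simple classifier. This is an illustrative sketch, not from the book; the three criteria are taken from the example thresholds above (clean build, smoke tests, regression suite).

```python
# Hypothetical sketch of a part-operability assessment.
def assess_operability(clean_build, passes_smoke, passes_regression):
    if not (clean_build and passes_smoke):
        return "suspect"   # insufficient reliability: don't start testing
    if passes_regression:
        return "trusted"   # high reliability: stop testing at this scope
    return "okay"          # sufficient reliability: may start testing

print(assess_operability(True, True, False))  # okay
```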


Steps in results-oriented testing (flowchart)
1. Select a test pattern
2. Analyze part structure
3. Assess part operability: all subsets operable?
   - No: choose an integration cycle; identify part subsets and their sequence (noncontrollable vs. controllable part subsets)
   - Yes: continue
4. Design and code the IUT harness
5. Generate the IUT test suite
6. Run one or more test cycles with the IUT test suite

Test Cycle (flowchart)
1. Run the test suite
2. All tests pass? No: debug, add tests, rerun
3. Coverage met? No: add tests, rerun
4. Yes to both: promote

Responsibility-based & Implementation-based meta-model (diagram)

• Validation relates required behaviour to observed behaviour.
• Completeness Checking relates required behaviour to the component representation; Consistency Checking relates the component representation to the component implementation.
• Responsibility-based Testing [black box] tests against the component representation; Implementation-based Testing [white box] tests against the component implementation.
• Verification (checklists, proofs) involves no execution.

Responsibility-based vs. Implementation-based

Responsibility-based:
• focus on capabilities important for designers, customers, and users
  - one responsibility covers many implementation issues
• reduces coupling between an implementation and its test plans
  - adaptability
• finds mistakes and omissions in requirements

Implementation-based:
• tool and technique support
  - coverage analyzers
• test planning for exception handling requires source code analysis
• some automation possible
  - smoke tests to verify minimal executability
• often the only reliable source of information

Use implementation-based test techniques to test the test suite, not the implementation!
Code coverage reports analyze test adequacy: they complement responsibility-based testing and reveal which parts have not been tested.

Class Responsibilities & Contracts

Responsibility =
• any public service offered by a class (e.g., on CRC cards, …)
• … that can be defined without regard to a particular implementation

Contracts =
• pre- & postconditions on server methods + a class invariant
  - precondition: checks state at the beginning of the server method
  - postcondition: checks state + response at the end of the server method
  - invariant: defines all valid combinations of state variables
• a subclass contract must be consistent with the superclass contract
  - Liskov's substitution principle: you may substitute an instance of a subclass for any of its superclasses
• exceptions are used to signal contract violations

Contract assertions provide an operational definition of responsibilities!
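A minimal sketch of contract assertions as an operational definition of responsibilities, on a hypothetical account class (the class, its methods, and the bounds are illustrative, not from the book):

```python
class Account:
    """Server class whose contract is checked with assertions."""
    def __init__(self):
        self.balance = 0
        self._invariant()

    def _invariant(self):
        # invariant: defines all valid combinations of state variables
        assert self.balance >= 0

    def withdraw(self, amount):
        # precondition: checks state at the beginning of the server method;
        # an exception signals the contract violation
        if not 0 < amount <= self.balance:
            raise ValueError("precondition violated")
        old_balance = self.balance
        self.balance -= amount
        # postcondition: checks state + response at the end of the method
        assert self.balance == old_balance - amount
        self._invariant()
        return self.balance

acct = Account()
acct.balance = 100
print(acct.withdraw(30))  # prints 70
```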


Sidetrack: ACM Turing Award Barbara Liskov

Press release, NEW YORK, March 10, 2009: ACM, the Association for Computing Machinery, has named Barbara Liskov of the Massachusetts Institute of Technology (MIT) the winner of the 2008 ACM A.M. Turing Award. The award cites Liskov for her foundational innovations to designing and building the pervasive computer system designs that power daily life. Her achievements in programming language design have made software more reliable and easier to maintain. They are now the basis of every important programming language since 1975, including Ada, C++, Java, and C#. The Turing Award, widely considered the "Nobel Prize in Computing," is named for the British mathematician Alan M. Turing. The award carries a $250,000 prize, with financial support provided by Intel Corporation and Google Inc. […] In another exceptional contribution, Liskov designed the CLU programming language, an object-oriented language incorporating "clusters" to provide coherent, systematic handling of abstract data types, which are comprised of a set of data and the set of operations that can be performed on the data. She and her colleagues at MIT subsequently developed efficient CLU compiler implementations on several different machines, an important step in demonstrating the practicality of her ideas. Data abstraction is now a generally accepted fundamental method of software engineering that focuses on data rather than processes, often identified as "modular" or "object-oriented" programming.

Integration vs. Unit Testing

The distinction between integration testing and unit testing is vague
• good design = clusters of tightly coupled classes
  - e.g. design patterns
• incremental development means continuous integration
  - whenever a component is added/changed at any scope

Integration testing
• search for failures caused by intercomponent faults (i.e. the connections between the parts) within the scope of interest

Responsibility-based testing
• focus on the externally visible behaviour of the component

Coverage Goals

Results-Oriented Testing has two coverage goals
• coverage of the test model
  - exercise all modeled responsibilities
• coverage of interfaces among parts
  - in so doing, exercise all parts and interfaces in our integration model

Integration test model?
• typical OOA/D representations
  - use cases, CRC cards, system architecture, object interaction diagrams, sequence diagrams, state charts, …

Testing uses of OOA/D representations

A table (Table 9.2, p. 328) maps typical OOA/D responsibility models to the test scopes at which they are useful (Class, Subsystem, Application). The rows are: Narrative Requirements, User Documentation, Use Cases, Domain Model (CRC, a.k.a. Application Object Model), System Architecture, Object Interaction, Class Interface, Class State Model, and Method Specification.

(Test) Patterns

Pattern =
• a generalized solution to a specific recurring (design) problem
  - captures the essence of proven solutions

Pattern template includes
• name (for design discussions)
• context
  - when to apply, when NOT to apply
  - forces: different interests pulling in various directions
• trade-offs
  - advantages and disadvantages in terms of the forces
• known uses

Test Pattern Template • • • •

name intent context (incl. test scope) fault model - what kind of faults are the target ? - motivate: why reach faults + trigger and propagate failures • strategy - how to design and implement the test suite - procedure, oracle, automation - example • entry criteria - preconditions for using this pattern ➡ when is the“Implementation Under Test” (IUT) test ready ? • exit criteria - defines the results necessary to achieve an adequate test Universiteit Antwerpen

15

Test Documentation (see also IEEE standard 829)

• prepare a test plan
  - what to test when
  - deliverables
  - stop criteria
• design the test suite
  - test suite hierarchy (cf. V-model)
  - procedures for running the tests
  - prepare test cases (using the appropriate test pattern)
• test the component
  - results for each test case: pass / no pass / unable to execute
  ➡ store results (and follow-up actions) for later reference
• prepare a test summary report
  - evaluate the operability of the IUT

3. Classes
(Based on "Part III: Patterns": Chapter 9 Results-Oriented Testing & Chapter 10 Classes)

• Results-Oriented Testing
  - Why, How
  - Patterns
• Testing Classes
  - Overview
  - Method Scope Test Patterns
  - Class Scope Test Patterns
  - Class Scope Integration
  - Flattened Class Scope

Overview

• Method Scope: Category-Partition, Combinational Function, Recursive Function, Polymorphic Message
• Class Scope: Invariant Boundaries, Nonmodal Class, Modal Class, Quasi-Modal Class, Transitive Operations
  (these two scopes form the typical "unit testing" perspective)
• Class Scope Integration: Small Pop, Alpha-Omega Cycle
• Flattened Class Scope: Polymorphic Server, Modal Hierarchy

Small Pop

= Primitive (and ad hoc) way of testing a class
• name: "big bang" at class level
• intent:
  - demonstrate that the class under test (CUT) is test ready
  - entry criterion for subsequent patterns
• applicable to simple classes with few dependencies (that depend only on stable and reliable server classes)
• strategy:
  1. write a test driver after the class has been developed
  2. debug the class

Alpha-Omega Cycle

= Simple and efficient way of testing a class
• intent:
  - demonstrate that the class under test (CUT) is test ready
  - entry criterion for subsequent patterns
• applicable to any class that is almost completely developed
  - allows incremental development
• strategy:
  - take the object under test from its alpha state to its omega state
  - invoke simple methods first, namely:
    1. new or constructor methods
    2. accessor (get) methods
    3. boolean (predicate) methods
    4. modifier (set) methods
    5. iterator methods
    6. delete or destructor methods
  - within each category: first private, then protected, then public
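The alpha-omega ordering can be sketched as a driver over a toy container class; the class and driver below are hypothetical illustrations of the six method categories, not code from the book:

```python
class Bag:                                        # toy class under test
    def __init__(self): self._items = []          # 1. constructor (alpha state)
    def size(self): return len(self._items)       # 2. accessor (get)
    def is_empty(self): return not self._items    # 3. boolean (predicate)
    def add(self, item): self._items.append(item) # 4. modifier (set)
    def __iter__(self): return iter(self._items)  # 5. iterator
                                                  # 6. destructor: del below

def alpha_omega_cycle():
    bag = Bag()                # alpha state: freshly constructed
    assert bag.size() == 0     # accessors first: no side effects
    assert bag.is_empty()
    bag.add(42)                # then modifiers
    assert list(bag) == [42]   # then iterators
    del bag                    # omega state: object deleted
    return "CUT is test ready"

print(alpha_omega_cycle())
```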


Overview

• Method Scope: Category-Partition, Combinational Function, Recursive Function, Polymorphic Message
• Class Scope: Invariant Boundaries, Nonmodal Class, Modal Class, Quasi-Modal Class, Transitive Operations
  (these two scopes form the typical "unit testing" perspective)
• Class Scope Integration: Small Pop, Alpha-Omega Cycle
• Flattened Class Scope: Polymorphic Server, Modal Hierarchy

Category-Partition (1/4)

= design method scope test suites based on input/output analysis
• strategy & example:
  1. identify the functions of the Method Under Test (MUT)
     - getNextElement: return the next element + keep track of the last position + throw NoPosition and EmptyList exceptions
  2. identify the input and output parameters of each function
     - getNextElement input: current position + the list itself
     - getNextElement output: current element + incremented position
  3. identify categories for each input parameter
     - = non-overlapping subsets of inputs with distinctly different outputs
     - = equivalence classes (a partition)
     - position of last referenced element: nth element; special cases
     - state of the list: m elements; special cases

Category-Partition (2/4)

• strategy & example (cont'd):
  4. partition each category into choices (= specific test values)
     - position of last referenced element
       - nth element: n = 2; n = some x > 2, x < Max; n = Max
       - special cases: Undefined; First; Last, n < Max
     - state of the list
       - m elements: m = some x > 2, x < Max
       - special cases: Empty; Singleton; Full (m = Max)
  5. identify constraints on choices (i.e. mutually exclusive/inclusive)
  6. generate test cases for all valid choice combinations
  7. develop expected results (the oracle)
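Steps 4-6 can be sketched mechanically: enumerate the choices per category, apply the constraints, and keep the valid combinations as test frames. The category and choice names mirror the example; the constraint itself is an illustrative assumption:

```python
from itertools import product

choices = {
    "position": ["Undefined", "First", "n = 2", "n = Max"],
    "list_state": ["Empty", "Singleton", "Full (m = Max)"],
}

def is_valid(frame):
    # step 5 (assumed constraint): an empty list has no defined position
    return not (frame["list_state"] == "Empty"
                and frame["position"] != "Undefined")

# step 6: one test frame per valid choice combination
frames = [dict(zip(choices, combo)) for combo in product(*choices.values())]
valid = [f for f in frames if is_valid(f)]
print(len(frames), len(valid))  # 12 combinations, 9 of them valid
```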


Category-Partition (3/4)

= design method scope test suites based on input/output analysis
• context:
  - for methods that implement one or more independent functions
  - method lacks cohesion? model it as separate functions
  - method selects 1 of many responses? use the Combinational Function Test
• fault model:
  - certain value combinations of parameters & instance variables result in missing or incorrect output
  - not for: faults manifested by sequences of method calls
  - not for: corrupt and hidden instance variables
• automation:
  - API test harness

Category-Partition (4/4)

• entry criteria: Small Pop
• exit criteria:
  - every combination of choices is tested once
  - the test suite should force each exception at least once
  - code coverage: should exercise at least all branches
• consequences:
  - general-purpose, nonquantitative, and straightforward
  - identification of categories and choices is subjective
  - size: the product of the numbers of choices, minus invalid combinations
  ➡ may result in many test cases
  ➡ for parameters with many choices, consider Invariant Boundaries
  - a superclass partition test suite may be reused for subclasses (e.g. PersonList, EmployeeList, …)
• known uses

Combinational Test (1/2)

= design a test suite for behaviour that varies according to state and/or message values
• strategy & example:
  - test according to a decision table (see "2. Models")
  - example: triangle condition/action decision table (p. 430-431)
• fault model:
  - same as Decision Table (see "2. Models")
• test procedure:
  - at least one test per action
  - for non-boolean decision variables: also test boundary conditions
• automation:
  - API test harness
  - try to vary the sequence of setter methods for decision variables!
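The decision-table procedure can be sketched with the triangle example the slide cites; the actual table is in the book (p. 430-431), so this condition/action coding is a stand-in, not the book's code:

```python
def triangle_kind(a, b, c):
    # conditions of the decision table; each return is one action
    if a + b <= c or b + c <= a or a + c <= b:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# test procedure: at least one test per action
assert triangle_kind(3, 3, 3) == "equilateral"
assert triangle_kind(3, 3, 5) == "isosceles"
assert triangle_kind(3, 4, 5) == "scalene"
assert triangle_kind(1, 1, 5) == "not a triangle"
```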


Combinational Test (2/2)

• entry criteria:
  - Small Pop
• exit criteria:
  - produce every action at least once
  - (if applicable) force each exception at least once
  - exercise at least every branch in the method under test
  - if polymorphism is used: exercise each binding at least once
• consequences:
  - reveals only faults that result in an incorrect action
  - vulnerable to the sequence of messages and to corrupt instance variables
  ➡ vary the sequence of setter methods
• known (non)uses:
  - (footnote 12 on p. 428) a miscoded "break" statement triggered recovery on other machines running the same code
  - failure propagation: a nine-hour network outage; cost $60 million


Recursive Function Test (1/2)

= design a test suite for a method that calls itself recursively
• strategy & example:
  - recursion testing resembles loop testing
  - define the base case, the recursive case, and preconditions for the initial call + all descent-phase calls (push) and all ascent-phase calls (pop)
  - violate the precondition in the initial call and in the descent phase
  - attempt to violate the postcondition in the ascent phase
  - test boundary cases on depth: 0, 1, and maximum
  - attempt to force exceptions on server objects
  - use domain analysis for methods with multiple arguments
  - use domain analysis for traversal of complex data structures
• automation:
  - API test harness
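The boundary cases on depth (0, 1, maximum) plus a precondition violation can be sketched on a hypothetical recursive function (not an example from the book):

```python
def nesting_depth(node):
    """Depth of a nested-list structure, computed recursively."""
    if not isinstance(node, list):
        raise TypeError("precondition: node must be a list")
    if not any(isinstance(c, list) for c in node):
        return 0                      # base case: no recursion
    return 1 + max(nesting_depth(c) for c in node if isinstance(c, list))

assert nesting_depth([]) == 0         # null case: zero recursion
assert nesting_depth([[]]) == 1       # singleton case: one recursion
deep = []
for _ in range(200):                  # maximal (feasible) depth
    deep = [deep]
assert nesting_depth(deep) == 200
try:
    nesting_depth("not a list")       # violate the initial precondition
    precondition_enforced = False
except TypeError:
    precondition_enforced = True
assert precondition_enforced
```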


Recursive Function Test (2/2)

• entry criteria:
  - Small Pop
• exit criteria:
  - null case: zero recursion
  - singleton case: one recursion
  - maximal case: greatest allowed or feasible depth
  - attempted violation of the initial precondition
  - attempted violation of a descent-phase precondition
  - attempted violation of an ascent-phase postcondition
  - invariant boundaries defined for values of multiple arguments and/or states of data structures traversed
  - (performance issue) worst-case runtime given system load and maximum depth
• consequences:
  - no more than 2 × 12 test cases


Polymorphic Message Test (1/5)

= tests for a client of a polymorphic server; exercise all bindings
• example:

    // client under test
    void reportHistory(Account *acct) {
        if (!acct->isOpen()) {
            acct->listTransactions();
        } else {
            acct->debit(amount);
        }
    }

    // class fragments
    class Account {
        virtual void debit();
        virtual void listTransactions();
    };

    class TimeDeposit : public Account {
        virtual void debit();
        virtual void listTransactions();
    };

    class DemandDeposit : public Account {
        virtual void debit();
        virtual void listTransactions();
    };

Class hierarchy: Account with subclasses TimeDeposit and DemandDeposit.
debit & listTransactions are polymorphic server methods.
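The same client/server structure, sketched in Python with one test per candidate binding; the return values are added purely so the driver can observe which binding ran and are not part of the C++ fragment:

```python
class Account:
    def is_open(self): return False          # stub: account stays closed
    def debit(self, amount): return "Account.debit"
    def list_transactions(self): return "Account.listTransactions"

class TimeDeposit(Account):
    def debit(self, amount): return "TimeDeposit.debit"
    def list_transactions(self): return "TimeDeposit.listTransactions"

class DemandDeposit(Account):
    def debit(self, amount): return "DemandDeposit.debit"
    def list_transactions(self): return "DemandDeposit.listTransactions"

def report_history(acct, amount=100):
    # the client under test: sends two polymorphic messages
    if not acct.is_open():
        return acct.list_transactions()
    return acct.debit(amount)

# exercise every candidate binding of the polymorphic message
assert report_history(Account()) == "Account.listTransactions"
assert report_history(TimeDeposit()) == "TimeDeposit.listTransactions"
assert report_history(DemandDeposit()) == "DemandDeposit.listTransactions"
```

Only the listTransactions branch is driven here because is_open() is stubbed to False; a full Polymorphic Message Test would also drive the debit branch for each binding.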

Polymorphic Message Test (2/5)

reportHistory effectively compiles into the following code:

    void reportHistory(Account *acct) {
        if (!acct->isOpen()) {
            switch (acct->accountType()) {
                case ACCOUNT:       acct->listTransactions(); break;
                case TIMEDEPOSIT:   acct->listTransactions(); break;
                case DEMANDDEPOSIT: acct->listTransactions(); break;
                default:            error(…);
            }
        } else {
            switch (…) { /* same expansion for debit */ }
        }
    }

(Conceptual expansion only: dynamic dispatch behaves like a switch over the runtime type Account / TimeDeposit / DemandDeposit.)

Polymorphic Message Test (3/5)

See the difference between the call graphs on p. 442.

Explicit call graph (the source as written):

    void reportHistory(Account *acct) {
        if (!acct->isOpen()) {
            acct->listTransactions();
        } else {
            acct->debit(amount);
        }
    }

Effective call graph: after "if isOpen", the single call acct->listTransactions() expands into a multiway branch over the bindings type = Account, type = TimeDeposit, type = DemandDeposit, plus a catch-all (= error) node, rejoining at End-if.

Polymorphic Message Test (4/5)

• strategy:
  1. determine the number of candidate bindings for each message sent to the polymorphic server
  2. expand each segment of the message flow graph that has a polymorphic message with a multiway branch
  3. add two nodes for each binding:
     * a branch node (representing the run-time binding logic)
     * a sequential node (representing the message send and return)
  4. add a final catch-all node (representing a run-time binding error)
  5. proceed with normal branch-coverage testing

Polymorphic Message Test (5/5)

• fault model:
  - precondition violations on server class bindings
  - unanticipated bindings (pointer arithmetic, type casts, …)
  - changes to the server class
• entry criteria:
  - Small Pop
  - stable server class (or a stub for the server class)
• exit criteria:
  - branch coverage on the extended message flow graph
• consequences:
  - focuses on bugs in the client's use of polymorphic servers (see the Polymorphic Server Test for an alternative)
  - only possible if the complete server hierarchy is available for analysis

Overview

• Method Scope: Category-Partition, Combinational Function, Recursive Function, Polymorphic Message
• Class Scope: Invariant Boundaries, Nonmodal Class, Modal Class, Quasi-Modal Class, Transitive Operations
  (these two scopes form the typical "unit testing" perspective)
• Class Scope Integration: Small Pop, Alpha-Omega Cycle
• Flattened Class Scope: Polymorphic Server, Modal Hierarchy

Invariant Boundaries (1/2)

Intent
• for classes (interfaces, components) composed of primitive data types: how to find test-efficient test value combinations?

Context
• an invariant can be written for:
  - implementation-based testing (method or class scope): an implementation invariant (incl. encapsulated variables)
  - responsibility-based testing, method or class scope: an invariant over variables visible at the class interface
  - responsibility-based testing, subsystem or application scope: an invariant over configuration parameters, system states, …

Fault Model
• boundary conditions often cause bugs
➡ also in subclasses

Invariant Boundaries (2/2)

Steps
1. define the class invariant
2. develop "on points" and "off points" (1 × 1 domain model)
   - on point = on the boundary
   - off point = just outside the boundary (smallest possible increment)
3. develop "in points" for the variables not referenced
   - in point = safely within legal boundaries

Entry criteria
• a valid invariant is available (or can be implemented) for the IUT

Exit criteria
• a complete set of domain tests has been developed

Consequences
• difficult when valid ranges or states are unknown
➡ testing opportunity: force precise specs (omissions/surprises)
• once the invariant is known, generation of test cases is fast
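Steps 2-3 can be sketched for one boundary of a hypothetical invariant 0 <= tx_counter <= 32767 (the bounds are assumed for illustration):

```python
TX_MIN, TX_MAX = 0, 32767   # assumed bounds of the invariant

def invariant_holds(tx_counter):
    return TX_MIN <= tx_counter <= TX_MAX

on_point  = TX_MAX      # step 2: on the boundary, must satisfy the invariant
off_point = TX_MAX + 1  # step 2: smallest increment outside, must violate it
in_point  = 100         # step 3: safely within legal bounds

assert invariant_holds(on_point)
assert not invariant_holds(off_point)
assert invariant_holds(in_point)
```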


Invariant Boundaries — Example (1/2)

    class CustomerProfile {
        Account account1 = new Account();
        Account account2 = new Account();
        Money creditLimit = new Money();
        short txCounter;
        // …
    }

The invariant bounds the state variables, e.g.:

    assert((txCounter >= 0 && txCounter <= …) && (creditLimit > 99.99 && creditLimit <= …));

Quasi-modal test fragment (add/remove sequence):
• add(123) twice ➡ duplicate exception
• remove(123); is_member(123) must then return false!

Steps
• = test of a state model
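The add/remove/is_member fragment above can be sketched as a runnable test on a hypothetical collection class (the class name and exception type are illustrative):

```python
class UniqueCollection:
    def __init__(self): self._items = set()
    def add(self, key):
        if key in self._items:
            raise ValueError("duplicate")   # add -> add: duplicate exception
        self._items.add(key)
    def remove(self, key): self._items.discard(key)
    def is_member(self, key): return key in self._items

c = UniqueCollection()
c.add(123)
try:
    c.add(123)
    duplicate_raised = False
except ValueError:
    duplicate_raised = True
assert duplicate_raised           # second add(123) must raise
c.remove(123)
assert c.is_member(123) is False  # after remove(123), membership is false
```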


Quasi-modal Class Test (2/2)

Entry criteria
• minimal operability, shown by running the alpha-omega cycle

Exit criteria
• achieve at least branch coverage on each method of the CUT
• will provide N+ coverage (see chapter 7)
• to obtain higher confidence: add any uncovered alpha-omega paths

Consequences
• a behavior model (= state representation) can be developed
• states can be observed
• may require test stubs!


Quasi-modal Class Test — Example (1/2)

Quasi-modal Class Test — Example (2/2)

Modal Class Test (1/2)

Intent
• class scope test suite for a class that has fixed constraints on message sequence

Context
• message + domain constraints on acceptable sequences of messages
• verify, for all valid states, that:
  1. messages that are to be accepted are accepted
  2. messages that are illegal are rejected
  3. the resultant state after accepted and rejected messages is correct
  4. responses match the expected messages + parameter values are within range

Fault model
• 1, 2, 3, or 4 does not hold
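The four verification points can be sketched on a hypothetical two-state modal class, a file-like object whose messages are constrained to the sequence closed -> open -> closed (an illustrative example, not from the book):

```python
class ModalFile:
    def __init__(self): self.state = "closed"
    def open(self):
        if self.state != "closed":
            raise RuntimeError("illegal message")  # point 2: reject
        self.state = "open"
    def read(self):
        if self.state != "open":
            raise RuntimeError("illegal message")
        return "data"
    def close(self):
        if self.state != "open":
            raise RuntimeError("illegal message")
        self.state = "closed"

f = ModalFile()
f.open()
assert f.state == "open"      # 1 + 3: accepted message, correct state
assert f.read() == "data"     # 4: response matches the expected message
try:
    f.open()                  # 2: open while already open is illegal
    rejected = False
except RuntimeError:
    rejected = True
assert rejected and f.state == "open"  # 3: state correct after rejection
f.close()
assert f.state == "closed"
```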


Modal Class Test (2/2)

Steps
• test a state model (see the example)

Entry criteria
• minimal operability, shown by running the alpha-omega cycle
• if critical method-scope functions exist: test those first!

Exit criteria
• achieve at least branch coverage on each method of the CUT
• will provide N+ coverage (see chapter 7)
• to obtain higher confidence: add any uncovered alpha-omega paths

Consequences
• a behavior model (= state representation) can be developed
• states can be observed
• a suitable test driver is available


Modal Class Test — Example (1/5)

Modal Class Test — Example (2/5)

Modal Class Test — Example (3/5)

Modal Class Test — Example (4/5)

Modal Class Test — Example (5/5)

Overview

• Method Scope: Category-Partition, Combinational Function, Recursive Function, Polymorphic Message
• Class Scope: Invariant Boundaries, Nonmodal Class, Modal Class, Quasi-Modal Class, Transitive Operations
  (these two scopes form the typical "unit testing" perspective)
• Class Scope Integration: Small Pop, Alpha-Omega Cycle
• Flattened Class Scope: Polymorphic Server, Modal Hierarchy

Class Flattening

A flattened class makes all inherited features explicit.
Flattening depends largely on language/compiler semantics.

Inheritance-related Bugs

Fault Model
• Incorrect Initialization: overriding constructors
• Inadvertent Bindings: name-scoping rules
• Missing Override: e.g. isEqual, copy, hash, …
• Naked Access: breaks encapsulation
• Square Peg in a Round Hole: subtype inheritance
• Naughty Children: the Liskov substitution principle must hold
• Worm Holes: subclass produces values which do not respect superclass invariants
• Spaghetti Inheritance: multiple inheritance
• Gnarly Hierarchies: restrictions on the domain inconsistent with the superclass
• Weird Hierarchies: code sharing
• Fat Interface: subclass inherits inappropriate methods

Individually correct parts do NOT necessarily result in a correct whole!
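The "Missing override" fault can be made concrete with a hypothetical example (not from the book): a subclass that adds state but inherits the superclass equality method compares incompletely.

```python
class Point:
    def __init__(self, x, y): self.x, self.y = x, y
    def __eq__(self, other): return self.x == other.x and self.y == other.y

class Point3D(Point):  # missing override: forgets __eq__ including z
    def __init__(self, x, y, z):
        super().__init__(x, y)
        self.z = z

a = Point3D(1, 2, 3)
b = Point3D(1, 2, 99)
print(a == b)  # True: the inherited comparison silently ignores z
```

Each class is individually plausible; only the flattened view of Point3D, with the inherited __eq__ made explicit, reveals that equality ignores z.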


Flattened Class Scope

Polymorphic Server Test
• apply the Polymorphic Message Test for the complete hierarchy
• exercise each polymorphic method in its defining class AND in all classes that inherit it

Modal Hierarchy Test
• apply the Modal Class Test for the complete hierarchy
• on each flattened class

Other Class Patterns

Class Scope
• Invariant Boundaries
• Nonmodal Class
• Modal Class
• Quasi-Modal Class
• Transitive Operations

Flattened Class Scope
• Polymorphic Server
• Modal Hierarchy

3. Classes
(Based on "Part III: Patterns": Chapter 9 Results-Oriented Testing & Chapter 10 Classes)

• Results-Oriented Testing
  - Why, How
  - Patterns
• Testing Classes
  - Overview
  - Method Scope Test Patterns
  - Class Scope Test Patterns
  - Class Scope Integration
  - Flattened Class Scope

Patterns are not a Panacea

Patterns are NOT a good vehicle for
• analytical comparison (and contrasting)
• presenting a general model, theory, or point of view
• defining terms + conceptual relationships
• lengthy and complex systems of ideas

I want you to teach some of the material & serve as an opponent for your colleagues
➡ will force you to acquire deep understanding
➡ in-depth understanding of the parts is preferred over broad understanding of the whole