Software Testing: Testing Types



References

• http://www.testingeducation.org/BBST/
• IBM testing course available through the Academic Initiative: Principles of Software Testing for Testers

Course objectives

• Present distinctions between different testing terms
• Differentiate between black box and white box testing, structural and behavioral testing, etc.
• Test cases and test plans
• High-level description of the most important testing techniques
• Relate testing techniques to product methodology

Test techniques

• Kaner: methods of designing, running, and interpreting the results of tests
• Compare with testing approaches: exploratory testing or scripted testing
• Any technique can be used in an exploratory or a scripted manner
• Techniques are typically evaluated based on one to three of the following:
  ▫ Scope
  ▫ Coverage
  ▫ Testers
  ▫ Risks
  ▫ Activities
  ▫ Evaluation/oracles
  ▫ Desired result

“So Which Technique Is the Best?”

• Each technique has strengths and weaknesses
• Think in terms of complements: there is no “one true way”
• Mixing techniques can improve coverage

(Diagram: techniques A through H arranged around the dimensions Testers, Coverage, Potential problems, Activities, and Evaluation.)

Apply Techniques According to the Life Cycle

• The test approach changes over the project
• Some techniques work well in early phases, others in later ones
• Align the techniques to iteration objectives

| Early iterations (Inception, Elaboration)   | Late iterations (Construction, Transition) |
| A limited set of focused tests              | Many varied tests                          |
| A few components of software under test     | Large system under test                    |
| Simple test environment                     | Complex test environment                   |
| Focus on architectural & requirement risks  | Focus on deployment risks                  |

Coverage-Based Techniques

• Function testing: test every function
• Equivalence class analysis
• Boundary testing
• Domain testing
• State transitions
• Specification-based testing
• Requirements-based testing
• Compliance-driven testing
• Configuration testing

Tester-Based Techniques

• User testing
• Alpha testing
• Beta testing
• Paired testing
• Expert testing

Risk-Based Techniques

• Boundary testing
• Quick tests
• Logical expressions
• Stress testing
• Load testing
• Performance testing
• History-based testing
• Interoperability testing
• Usability testing
• Risk-based multi-variable testing
• Long-sequence testing

Activity-Based Techniques

• Guerilla testing
• Random testing
• Use cases
• Scenario testing
• Installation testing
• Dumb monkey testing
• Regression testing
• Performance testing
• Load testing
• Long-sequence testing

Evaluation-Based Techniques

• Structured around oracles (consistencies)
• Function equivalence testing
• Constraints check
• Self-verifying data
• Comparison with saved results
• Diagnostic-based techniques
• Verifiable state models

Desired-Result-Based Techniques

• Based on specific documents
• Build verification
• Confirmation testing
• User acceptance testing
• Certification testing


Test Techniques—Dominant Test Approaches

• Of the 200+ published testing techniques, there are ten basic themes.
• They capture the techniques in actual practice.
• In this course:
  – Function testing
  – Equivalence analysis
  – Specification-based testing
  – Risk-based testing
  – Stress testing
  – Regression testing
  – Exploratory testing
  – User testing
  – Scenario testing
  – Stochastic or random testing

NOTE: The details here and in the following slides are adapted from the IBM testing course available through the Academic Initiative.


Black Box Testing: Individual techniques

– Function testing
– Equivalence analysis
– Specification-based testing
– Risk-based testing
– Stress testing
– Regression testing
– Exploratory testing
– User testing
– Scenario testing
– Stochastic or random testing

At a Glance: Function Testing

| Tag line           | Black/white box unit testing                                   |
| Objective          | Test each function thoroughly, one at a time.                  |
| Testers            | Any                                                            |
| Coverage           | Each function and user-visible variable                        |
| Potential problems | A function does not work in isolation. Too many functions.     |
| Activities         | Whatever works                                                 |
| Evaluation         | Whatever works                                                 |
| Complexity         | Simple                                                         |
| Harshness          | Varies                                                         |
| Readiness          | Any stage                                                      |

Function testing

• Function = feature/command/capability
• Focus on the function, ignoring the overall goal
• Discovering functions:
  ▫ Specifications
  ▫ UI: take a tour
  ▫ Try commands at the command line
  ▫ Look into the code
  ▫ Concept-mapping tools
• Testing:
  ▫ Apply function testing as a first step in testing the program
  ▫ Good at finding show stoppers
  ▫ Combine with risk-based testing
  ▫ Smoke testing: build verification testing

Strengths & Weaknesses: Function Testing

• Representative cases
  ▫ Spreadsheet: test each item in isolation
  ▫ Database: test each report in isolation
• Strengths
  ▫ Thorough analysis of each item tested
  ▫ Easy to do as each function is implemented
• Blind spots
  ▫ Misses interactions of features or interactions with background tasks
  ▫ Misses exploration of the benefits offered by the program
  ▫ Usability, scalability, interoperability, etc.


At a Glance: Equivalence Analysis (1/2)

| Tag line           | Partitioning, boundary analysis, domain testing                                                                                                                                  |
| Objective          | There are too many test cases to run. Use a stratified sampling strategy to select a few test cases from a huge population.                                                      |
| Testers            | Any                                                                                                                                                                              |
| Coverage           | All data fields, and simple combinations of data fields. Data fields include input, output, and (to the extent they can be made visible to the tester) internal and configuration variables. |
| Potential problems | Data, configuration, error handling                                                                                                                                              |

At a Glance: Equivalence Analysis (2/2)

| Activities | Divide the set of possible values of a field into subsets, and pick values to represent each subset. Typical values will be at boundaries. More generally, the goal is to find a “best representative” for each subset and to run tests with these representatives. Advanced approach: combine tests of several “best representatives”. |
| Evaluation | Determined by the data                                                                              |
| Complexity | Simple                                                                                              |
| Harshness  | Designed to discover harsh single-variable tests and harsh combinations of a few variables          |
| Readiness  | Any stage                                                                                           |

Strengths & Weaknesses: Equivalence Analysis

• Representative cases
  – Equivalence analysis of a simple numeric field
  – Printer compatibility testing (a multidimensional variable that doesn’t map to a simple numeric field, but stratified sampling is essential)
• Strengths
  – Finds the highest-probability errors with a relatively small set of tests
  – Intuitively clear approach that generalizes well
• Blind spots
  – Errors that are not at boundaries or in obvious special cases
  – The actual sets of possible values are often unknowable

Sample testing scenario

• Inspired from Kaner et al.
• Program to test: adding two numbers of at most two digits each. The program reads the numbers, echoing them, and prints the sum. The user has to press ENTER after each number.
• There are 199 values for each variable:
  – 1 to 99: 99 values
  – 0: 1 value
  – -1 to -99: 99 values
• There are 199 × 199 = 39,601 combination tests
• Should we test them all?

Equivalence classes partitioning

• To avoid unnecessary testing, partition (divide) the range of inputs into groups of equivalent tests.
• We treat two tests as equivalent if they are so similar to each other that it seems pointless to test both.
• Select an input value from the equivalence class as representative of the full group.
• If you can map the input space to a number line, boundaries mark the point or zone of transition from one equivalence class to another. These are good members of equivalence classes to use because the program is more likely to fail at a boundary.
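As a minimal sketch, the selection rule above (take the valid boundaries plus their nearest invalid neighbors) can be written down directly; the function name and return shape here are hypothetical, not from the course material:

```python
# Hypothetical sketch: picking boundary representatives for a numeric
# field, e.g. the add-two-numbers inputs, which range from -99 to 99.

def boundary_representatives(low, high):
    """Return best-representative values for an integer range:
    the valid boundaries plus the nearest invalid neighbors."""
    return {
        "valid": [low, high],            # on-boundary, in range
        "invalid": [low - 1, high + 1],  # just outside the range
    }

reps = boundary_representatives(-99, 99)
print(reps["valid"])    # [-99, 99]
print(reps["invalid"])  # [-100, 100]
```

This reduces each equivalence class to a handful of values instead of the full 199-value population.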


The classical boundary table

| Variable      | Valid case equivalence classes | Invalid case equivalence classes | Boundary and special cases | Notes                                             |
| First Number  | -99 to 99                      | < -99 or > 99                    | -99, -100, 99, 100         |                                                   |
| Second Number | -99 to 99                      | < -99 or > 99                    | -99, -100, 99, 100         |                                                   |
| Sum           | -198 to 198                    | < -198 or > 198                  | (-99, -99), (99, 99)       | Don’t know how to create cases for invalid values |
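As a sketch, the boundary rows of such a table can be turned into executable checks; `add` here is a hypothetical stand-in for the program under test, not the actual course program:

```python
# Hypothetical sketch: exercising boundary cases from the classical
# boundary table against a trivial add() implementation.

def add(a, b):
    """Stand-in for the add-two-numbers program under test."""
    if not (-99 <= a <= 99 and -99 <= b <= 99):
        raise ValueError("input out of range")
    return a + b

# Valid boundary cases for the Sum row
assert add(-99, -99) == -198   # smallest valid sum
assert add(99, 99) == 198      # largest valid sum

# Invalid boundary cases: values just outside each input's valid range
for a, b in [(-100, 0), (100, 0), (0, -100), (0, 100)]:
    try:
        add(a, b)
        raise AssertionError("out-of-range input was accepted")
    except ValueError:
        pass
```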


Boundary table as a test plan component

• Makes the reasoning obvious.
• Makes the relationships between test cases fairly obvious.
• Expected results are pretty obvious.
• Several tests fit on one page.
• Can delegate it and have the tester check off what was done.
• Provides some limited opportunity for tracking.
• Not much room for status.

Building the table (in practice)

• Relatively few programs will come to you with all fields fully specified. Therefore, you should expect to learn what variables exist and their definitions over time.
• To build an equivalence class analysis over time, put the information into a spreadsheet. Start by listing variables. Add information about them as you obtain it.
• The table should eventually contain all variables: all input variables, all output variables, and any intermediate variables that you can observe.

Examples of ordered sets

• ranges of numbers
• character codes
• how many times something is done (e.g. a shareware limit on the number of uses of a product; how many times you can do it before you run out of memory)
• how many names in a mailing list, records in a database, variables in a spreadsheet, bookmarks, abbreviations
• size of the sum of variables, or of some other computed value (think binary and think digits)
• size of a number that you enter (number of digits) or size of a character string
• size of a concatenated string
• size of a path specification
• size of a file name
• size (in characters) of a document

Examples of ordered sets

• size of a file (note special values such as exactly 64K, exactly 512 bytes, etc.)
• size of the document on the page (compared to page margins, across different page margins and page sizes)
• size of a document on a page, in terms of the memory requirements for the page; this might just be resolution × page size, but it may be more complex if there is compression
• equivalent output events (such as printing documents)
• amount of available memory (> 128 MB, > 640K, etc.)
• visual resolution, size of screen, number of colors
• operating system version
• variations within a group of “compatible” printers, sound cards, modems, etc.
• equivalent event times (when something happens)
• timing: how long between event A and event B (and in which order: races)
• length of time after a timeout (from just before to way after): what events are important?

Examples of ordered sets

• speed of data entry (time between keystrokes, menus, etc.)
• speed of input: handling of concurrent events
• number of devices connected / active
• system resources consumed / available (also handles, stack space, etc.)
• date and time
• most recent event, first event
• input or output intensity (voltage)
• speed / extent of voltage transition (e.g. from very soft to very loud sound)

Conclusion

• This is a systematic sampling approach to test design. We can’t afford to run all tests, so we divide the population of tests into subpopulations and test one or a few representatives of each subgroup. This keeps the number of tests manageable.
• Using boundary values for the tests offers a few benefits:
  – They will expose any errors that affect an entire equivalence class.
  – They will expose errors that mis-specify a boundary.
    ▫ These can be coding errors (off-by-one errors such as saying “less than” instead of “less than or equal”) or typing mistakes (such as entering 57 instead of 75 as the constant that defines the boundary).
    ▫ Mis-specification can also result from ambiguity or confusion about the decision rule that defines the boundary.
  – Non-boundary values are less likely to expose these errors.

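The off-by-one point can be made concrete with a small sketch (hypothetical function names; the buggy variant uses “less than” where “less than or equal” was intended):

```python
# Hypothetical sketch: an off-by-one coding error that only a
# boundary value can expose.

def in_range_buggy(n):
    return -99 < n < 99      # bug: should be -99 <= n <= 99

def in_range_correct(n):
    return -99 <= n <= 99

# An interior value cannot tell the two implementations apart...
assert in_range_buggy(50) == in_range_correct(50)
# ...but the boundary value 99 exposes the defect.
assert in_range_buggy(99) != in_range_correct(99)
```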

Exercise: Weinberg’s triangle problem

• The program reads three integer values from a card. The three values are interpreted as representing the lengths of the sides of a triangle. The program prints a message that states whether the triangle is scalene, isosceles, or equilateral.
  – From Glenford J. Myers, The Art of Software Testing (1979)
• Write a set of test cases that would adequately test this program.

Myers’ Answers

• Test case for a valid scalene triangle
• Test case for a valid equilateral triangle
• Three test cases for valid isosceles triangles (a=b, b=c, a=c)
• One, two, or three sides with zero value (7 cases)
• One side with a negative value
• Sum of two numbers equals the third, e.g. (1, 2, 3), tried with 3 permutations (a+b=c, a+c=b, b+c=a)
• Sum of two numbers is less than the third, e.g. (1, 2, 4), with 3 permutations
• Non-integer values
• Wrong number of values (too many, too few)

Extending the analysis

• Myers included other classes of examples:
  – Non-numeric values
  – Too few inputs or too many
  – Values that fit within the individual field constraints but that combine into an invalid result
• These are different in kind from tests that go after the wrong-boundary-specified error.
• Can we do boundary analysis on these?

Examples of Myers’ categories

1. {5, 6, 7}
2. {15, 15, 15}
3. {3,3,4; 5,6,6; 7,8,7}
4. {0,1,1; 2,0,2; 3,2,0; 0,0,9; 0,8,0; 11,0,0; 0,0,0}
5. {3, 4, -6}
6. {1,2,3; 2,5,3; 7,4,3}
7. {1,2,4; 2,6,2; 8,4,2}
8. {Q, 2, 3}
9. {2,4; 4,5,5,6}

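A minimal sketch of the triangle program (a hypothetical implementation, not Myers’ original), with several of the cases above as assertions:

```python
# Hypothetical sketch of the triangle classifier, checked against
# a few of Myers' test cases.

def classify_triangle(a, b, c):
    for side in (a, b, c):
        if not isinstance(side, int) or side <= 0:
            return "invalid"          # zero, negative, or non-integer side
    # degenerate or impossible: one side >= sum of the other two
    if a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

assert classify_triangle(5, 6, 7) == "scalene"
assert classify_triangle(15, 15, 15) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(0, 1, 1) == "invalid"   # zero side
assert classify_triangle(3, 4, -6) == "invalid"  # negative side
assert classify_triangle(1, 2, 3) == "invalid"   # a + b == c
assert classify_triangle(1, 2, 4) == "invalid"   # a + b < c
```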

Potential error: Non-numeric values

| Character | ASCII code | Note                          |
| /         | 47         | lower bound neighbor of ‘0’   |
| 0         | 48         |                               |
| 1         | 49         |                               |
| 2         | 50         |                               |
| 3         | 51         |                               |
| 4         | 52         |                               |
| 5         | 53         |                               |
| 6         | 54         |                               |
| 7         | 55         |                               |
| 8         | 56         |                               |
| 9         | 57         |                               |
| :         | 58         | upper bound neighbor of ‘9’   |
| A         | 65         |                               |
| a         | 97         |                               |

Potential error: Wrong number of inputs

• In the triangle example, the program wanted three inputs
• The valid class [of integers] is {3}
• The invalid classes [of integers] are:
  – Any number less than 3 (boundary is 2)
  – Any number more than 3 (boundary is 4)
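The same boundary reasoning can be applied to the input count itself; this is a hypothetical sketch of a line parser, checked at the invalid boundaries 2 and 4:

```python
# Hypothetical sketch: boundary analysis on the *number* of inputs.
# Valid count is exactly 3; the invalid boundaries are 2 and 4.

def parse_sides(line):
    parts = line.split()
    if len(parts) != 3:
        raise ValueError(f"expected 3 values, got {len(parts)}")
    return [int(p) for p in parts]

assert parse_sides("3 4 5") == [3, 4, 5]
for bad in ["3 4", "3 4 5 6"]:       # boundary counts 2 and 4
    try:
        parse_sides(bad)
        raise AssertionError("wrong input count was accepted")
    except ValueError:
        pass
```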

Invalid combination

• Sum of two sides compared with the other side
• Invalid values: (x, y, z) such that x + y <= z
• Also consider permutations
• Boundary limits:
  – x + y = z => (1, 2, 3) (smallest side values)
  – x + y < z => (1, 2, 4)
  – x + y > z => (2, 3, 4)

Potential error: invalid combination

| If you tested: | Would you test? |
| 61 + 62        | 62 + 63         |
| 63 + 64        | 64 + 65         |
| 65 + 66        | 66 + 67         |
| 67 + 68        | 68 + 69         |

What if 127 is the maximum stored value?


More examples of risks for the add-two-numbers example

• Memory corruption caused by an input value that is too large
• Mishandles leading zeroes or leading spaces
• Mishandles non-numbers inside number strings

Risk-driven equivalence classes

| Variable    | Risk                                                       | Class that should not trigger the failure | Class that might trigger the failure | Test cases (best representatives) |
| First Input | Fails on out-of-range values                               | -99 to 99                                 | <= -100 or >= 100                    | -100, 100                         |
|             | Doesn’t correctly discriminate in-range from out-of-range  | -99, 99                                   | -100, 100                            | -100, -99, 100, 99                |
|             | Misclassifies digits                                       | Non-digits                                | Digits 0 to 9                        | 0 (ASCII 48), 9 (ASCII 57)        |
|             | Misclassifies non-digits                                   | Digits 0-9                                | ASCII other than 48-57               | / (ASCII 47), : (ASCII 58)        |

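The digit-classification rows above translate directly into boundary checks on the ASCII neighbors of ‘0’ (48) and ‘9’ (57); the predicate below is a hypothetical sketch, not the program under test:

```python
# Hypothetical sketch: boundary tests for the "misclassifies digits"
# and "misclassifies non-digits" risks.

def is_digit_char(ch):
    """True if ch is one of the ASCII digits '0'..'9'."""
    return "0" <= ch <= "9"

# Characters that must be classified as digits
assert is_digit_char("0") and is_digit_char("9")
# ASCII neighbors just outside 48..57 must be rejected
assert not is_digit_char("/")   # ASCII 47
assert not is_digit_char(":")   # ASCII 58
```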

Summary

• Domain analysis (equivalence class analysis) is a sampling strategy to cope with the problem of too many possible tests.
• Traditional domain analysis considers numeric input and output fields.
• Boundary analysis is optimized to expose a few types of errors, such as miscoding of boundaries or ambiguity in the definition of the valid/invalid sets.
  – However, there are other possible errors that boundary tests are insensitive to.
• Domain analysis often appears mechanical and routine. Given a numeric input field and its specified boundaries, we know what to do. But as we consider new risks, we have to add a new analysis and new tests.
• Rather than thinking we can pre-specify all the tests (after predicting all the risks), we should train testers in the application of equivalence classes to risk-based tests in general. As they discover new risks associated with a field (or with anything else) while testing, they can apply the analysis to come up with optimized new tests as needed.

Exercise: GUI Equivalence Analysis

• Pick an app that you know and some dialogs
  – e.g. MS Word and its Print, Page Setup, and Font Format dialogs
• Select a dialog
• Identify each field, and for each field:
  – What is the type of the field (integer, real, string, ...)?
  – List the range of entries that are “valid” for the field
  – Partition the field and identify boundary conditions
  – List the entries that are almost too extreme and too extreme for the field
  – List a few test cases for the field and explain why the values you chose are the most powerful representatives of their sets (for showing a bug)
  – Identify any constraints imposed on this field by other fields

Exercise: Numeric Range with Output

• The program:
  ▫ K = I * J
  ▫ I, J, and K are integer variables
• Write a set of test cases that would adequately test this program
• Additional exercise: counting vowels