- A necessary part of a test case is a definition of the expected output or result.
- A programmer should avoid attempting to test his or her own program.
- A programming organization should not test its own programs.
- Thoroughly inspect the results of each test.
- Test cases must be written for invalid and unexpected, as well as valid and expected, input conditions (a short sketch follows this list).
- Examining a program to see if it does what it is supposed to do is only half the battle. The other half is seeing whether the program does what it is not supposed to do.
- Avoid throw-away test cases unless the program is truly a throw-away program.
- Do not plan testing effort under the tacit assumption that no error will be found.
- The probability of the existence of more errors in a section of a program is proportional to the number of errors already found in that section.
- Testing is an extremely creative and intellectually challenging task.
- Testing is the process of executing a program with the intent of finding errors.
- A good test case is one that has a high probability of detecting an as-yet undiscovered error.
- A successful test case is one that detects an as-yet undiscovered error.
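To ground the first and fifth principles, here is a minimal sketch using Python's unittest, built around a hypothetical parse_age function (the function, its range check, and the specific cases are illustrative assumptions, not taken from the source): each test states its expected result up front, and invalid and unexpected inputs are exercised alongside valid ones.

```python
import unittest


def parse_age(text):
    """Hypothetical function under test: convert a string to an age in years."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range: %d" % value)
    return value


class ParseAgeTests(unittest.TestCase):
    # Valid, expected inputs: the expected output is defined explicitly.
    def test_valid_input(self):
        self.assertEqual(parse_age("42"), 42)
        self.assertEqual(parse_age("0"), 0)

    # Invalid and unexpected inputs: the test also defines what "correct" means here.
    def test_invalid_input(self):
        for bad in ("-1", "200", "forty-two", ""):
            with self.assertRaises(ValueError):
                parse_age(bad)


if __name__ == "__main__":
    unittest.main()
```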
In "The Complete Guide to Software Testing" (1988), Bill Hetzel lists these key testing principles:
- Complete testing is not possible.
- Testing is creative and difficult.
- An important reason for testing is to prevent errors.
- Testing is risk-based.
- Testing must be planned.
- Testing requires independence.
In "Effective Software Testing: 50 Specific Ways to Improve Your Testing" (2003), Elfriede Dustin discusses these 50 guidelines:
- Involve Testers from the Beginning
- Verify the Requirements
- Design Test Procedures As Soon As Requirements Are Available
- Ensure That Requirement Changes Are Communicated
- Beware of Developing and Testing Based on an Existing System
- Understand the Task At Hand and the Related Testing Goals
- Consider the Risks
- Base Testing Efforts on a Prioritized Feature Schedule
- Keep Software Issues in Mind
- Acquire Effective Test Data
- Plan the Test Environment
- Estimate Test Preparation and Execution Time
- Define Roles and Responsibilities
- Require a Mixture of Testing Skills, Subject-Matter Expertise, and Experience
- Evaluate the Tester's Effectiveness
- Understand the Architecture and Underlying Components
- Verify That the System Supports Testability
- Use Logging to Increase System Testability
- Verify That the System Supports Debug and Release Execution Modes
- Divide and Conquer
- Mandate the Use of a Test-Procedure Template and Other Test-Design Standards
- Derive Effective Test Cases from Requirements
- Treat Test Procedures as "Living" Documents
- Utilize System Design and Prototypes
- Use Proven Testing Techniques When Designing Test-Case Scenarios
- Avoid Including Constraints and Detailed Data Elements within Test Procedures
- Apply Exploratory Testing
- Structure the Development Approach to Support Effective Unit Testing
- Develop Unit Tests in Parallel or Before the Implementation
- Make Unit-Test Execution Part of the Build Process (see the sketch at the end of this list)
- Know the Different Types of Testing Support Tools
- Consider Building a Tool Instead of Buying One
- Know the Impact of Automated Tools on the Testing Effort
- Focus on the Needs of Your Organization
- Test the Tools on the Application Prototype
- Do Not Rely Solely on Capture/Playback
- Develop a Test Harness When Necessary
- Use Proven Test-Script Development Techniques
- Automate Regression Tests When Feasible
- Implement Automated Builds and Smoke Tests
- Do Not Make Nonfunctional Testing an Afterthought
- Conduct Performance Testing with Production-Sized Databases
- Tailor Usability Tests to the Intended Audience
- Consider All Aspects of Security, for Specific Requirements and System-Wide
- Investigate the System's Implementation to Plan for Concurrency Tests
- Set Up an Efficient Environment for Compatibility Testing
- Clearly Define the Beginning and End of the Test-Execution Cycle
- Isolate the Test Environment from the Development Environment
- Implement a Defect-Tracking Life Cycle
- Track the Execution of the Testing Program
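As a hedged illustration of making unit-test execution part of the build process, the sketch below shows a small Python driver that a build script could invoke after compilation; the tests/ directory layout, the driver itself, and the idea of gating the build on its exit code are assumptions made for the example, not details taken from Dustin's text.

```python
"""Hypothetical build step: discover and run unit tests, fail the build on any error."""
import sys
import unittest


def run_unit_tests(test_dir="tests"):
    # Discover every test module under test_dir (assumed project layout).
    suite = unittest.defaultTestLoader.discover(test_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)
    return result.wasSuccessful()


if __name__ == "__main__":
    # A build script or CI job can run this after compilation; a non-zero exit
    # code stops the build, so broken tests are caught immediately.
    sys.exit(0 if run_unit_tests() else 1)
```

Pointed at a small, fast suite, the same driver can serve as the smoke test that an automated build runs before any deeper test cycle begins.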