An important part of software quality is the process of testing and validating the software.
Test management is the practice of organizing and controlling the process and artifacts required for the testing effort.
Aspects of test management
Test management can be broken into different phases:
Test artifact and resource organization: organizing and maintaining an inventory of the items to test, along with the various resources used to perform the testing. This addresses how teams track dependencies and relationships among test assets.
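As a sketch of what such an inventory can look like, the following Python class tracks test assets and the dependencies among them. The class, its fields, and the asset names used here are illustrative assumptions, not a standard tool.

```python
# A minimal in-memory sketch of a test-asset inventory that tracks
# dependencies among assets (names and asset types are hypothetical).
class AssetInventory:
    def __init__(self):
        self.assets = {}        # name -> asset type (e.g. "test case", "test data")
        self.depends_on = {}    # name -> set of asset names it requires

    def add(self, name, kind):
        self.assets[name] = kind
        self.depends_on.setdefault(name, set())

    def link(self, name, requires):
        # Record that one asset depends on another.
        self.depends_on[name].add(requires)

    def impacted_by(self, name):
        # Assets that directly depend on the given asset -- the ones to
        # revisit when that asset changes.
        return sorted(a for a, deps in self.depends_on.items() if name in deps)
```

For example, linking a test case to the test data it consumes lets the team ask which tests are impacted when that data changes.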
Test planning: the overall set of tasks that address the questions of why, what, where, and when to test. The reason a given test is created is called a test motivator (for example, a specific requirement must be validated). What should be tested is broken down into the many test cases of a project. Where to test is answered by determining and documenting the needed software and hardware configurations. When to test is resolved by assigning tests to iterations (or cycles, or time periods).
Test authoring: the process of capturing the specific steps required to complete a given test. This addresses the question of how something will be tested. Here, somewhat abstract test cases are developed into more detailed test steps, which in turn become test scripts (either manual or automated).
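As an illustration of authoring, here is how an abstract test case might be turned into an automated script using Python's unittest module. The function under test and its validation rules are hypothetical examples.

```python
import unittest

# Hypothetical function under test: validates a username per assumed rules
# (3-12 alphanumeric characters).
def is_valid_username(name):
    return name.isalnum() and 3 <= len(name) <= 12

# The abstract test case "usernames must be 3-12 alphanumeric characters"
# authored into concrete, executable steps.
class UsernameTests(unittest.TestCase):
    def test_accepts_valid_username(self):
        self.assertTrue(is_valid_username("alice42"))

    def test_rejects_too_short_username(self):
        self.assertFalse(is_valid_username("ab"))
```

Each test method is one detailed step derived from the abstract case; a manual script would capture the same steps as written instructions instead.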
Test execution: running the tests by assembling sequences of test scripts into a suite of tests. This continues to answer the question of how something will be tested (more specifically, how the testing will be conducted).
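A minimal sketch of assembling individual test scripts into one suite, again using Python's unittest; the two test classes below are stand-ins for real scripts.

```python
import unittest

# Two hypothetical test scripts (stand-ins for real ones).
class SmokeTests(unittest.TestCase):
    def test_app_starts(self):
        self.assertTrue(True)

class LoginTests(unittest.TestCase):
    def test_login_arithmetic_placeholder(self):
        self.assertEqual(2 + 2, 4)

def build_regression_suite():
    # Assemble the individual scripts into one suite, in the order to run:
    # smoke tests first, then feature tests.
    loader = unittest.defaultTestLoader
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(SmokeTests))
    suite.addTests(loader.loadTestsFromTestCase(LoginTests))
    return suite
```

Running the returned suite executes every script in sequence and collects all results in one place.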
Test reporting: how the various results of the testing effort are analyzed and communicated. Reporting is used to determine the current status of project testing, as well as the overall level of quality of the application or system.
The testing effort will produce a great deal of information. From this information, metrics can be extracted that define, measure, and track quality goals for the project. These quality metrics then need to be passed to whatever communication mechanism is used for the rest of the project metrics.
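One way to extract simple quality metrics from raw test results might look like the following; the result format (name, outcome pairs) is an assumption for illustration.

```python
# Extract simple quality metrics from raw test results, assuming each
# result is a (test_name, outcome) pair where outcome is "pass" or "fail".
def quality_metrics(results):
    total = len(results)
    passed = sum(1 for _, outcome in results if outcome == "pass")
    return {
        "total": total,
        "passed": passed,
        "failed": total - passed,
        "pass_rate": passed / total if total else 0.0,
    }
```

The returned dictionary is the kind of summary that can be handed to whatever mechanism reports the rest of the project metrics.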
A very common type of data produced by testing, one which is often a source for quality metrics, is defects. Defects are not static, but change over time. In addition, multiple defects are often related to one another. Effective defect tracking is crucial to both testing and development teams.
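A minimal sketch of a defect record that changes over time and can be related to other defects; the field names and status values are illustrative assumptions, not any particular tracker's schema.

```python
from dataclasses import dataclass, field

# A defect is not static: it moves through statuses and can be linked
# to related defects (statuses and fields here are hypothetical).
@dataclass
class Defect:
    defect_id: str
    summary: str
    status: str = "new"
    history: list = field(default_factory=list)
    related: set = field(default_factory=set)

    def transition(self, new_status):
        # Record each status change so the defect's evolution is traceable.
        self.history.append((self.status, new_status))
        self.status = new_status

    def link(self, other):
        # Relate two defects to each other (a symmetric link).
        self.related.add(other.defect_id)
        other.related.add(self.defect_id)
```

Keeping the transition history and the links makes both the "change over time" and the "related defects" aspects queryable.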
Test management challenges
•Not enough time to test
•Not enough resources to test
•Testing teams are not always in one place
•Difficulties with requirements
Test management recommendations
The following are general recommendations that can improve software test management.
•Start test management activities early
•Reuse test artifacts
•Utilize requirements-based testing
•Leverage remote testing resources
•Define and enforce a flexible testing process
•Coordinate and integrate with the rest of development
•Focus on goals and results
•Automate to save time
A tester should constantly pose a few questions before testing begins; doing so makes the objectives of testing clear. One way to sum up the objectives of test management is to answer the following questions:
•Why should I test?
•What should I test?
•Where do I test?
•When do I test?
•How do I conduct the tests?
A few tips on best testing practices:
•Learn to analyze your test results thoroughly:
Do not ignore test results. A final result may be ‘pass’ or ‘fail’, but troubleshooting the root cause of a failure will lead you to the solution of the problem. Testers earn respect when they not only log bugs but also suggest solutions.
•Learn to maximize the test coverage every time you test any application.
Though 100 percent test coverage might not be possible, you can always try to come close to it.
•To ensure maximum test coverage, break your application under test (AUT) into smaller functional modules.
Write test cases for each individual module and, where possible, break those modules into still smaller parts.
•While writing test cases, write test cases for intended functionality first.
Write test cases for valid conditions according to the requirements first, then write test cases for invalid conditions. This covers both the expected and the unexpected behavior of the application under test.
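The valid-first, invalid-second ordering can be sketched as follows. The parse_age function and its rules are hypothetical examples, not from any particular application.

```python
# Hypothetical function under test: parses an age from text,
# accepting 0-130 per assumed requirements.
def parse_age(text):
    value = int(text)              # raises ValueError for non-numeric input
    if not 0 <= value <= 130:
        raise ValueError("age out of range")
    return value

# Valid conditions first: expected behavior per the requirements.
valid_cases = [("0", 0), ("42", 42), ("130", 130)]

# Then invalid conditions: unexpected input the application must reject.
invalid_cases = ["-1", "131", "abc", ""]

def run_cases():
    for text, expected in valid_cases:
        assert parse_age(text) == expected
    for text in invalid_cases:
        try:
            parse_age(text)
        except ValueError:
            continue
        raise AssertionError(f"expected rejection of {text!r}")
    return len(valid_cases) + len(invalid_cases)
```

Writing the two groups in this order keeps the requirements-driven cases visible up front while still exercising the rejection paths.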
•Think positive. Start testing the application with the intent of finding bugs and errors.
Don’t assume beforehand that there will not be any bugs in the application. If you test with the intention of finding bugs, you will certainly succeed in finding even the subtle ones.
•Write your test cases during the requirements analysis and design phases themselves.
This way you can ensure all the requirements are testable.
•If possible, identify and group your test cases for regression testing.
This will ensure quick and effective manual regression testing. For example, suppose ‘accepting user information’ is one of the modules. You can break this ‘User information’ screen into smaller parts for writing test cases: UI testing, security testing, functional testing of the ‘User information’ form, and so on. Apply all field type and size tests, plus negative and validation tests, on the input fields, and write all such test cases for maximum coverage.
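One lightweight way to group test cases for regression runs is to tag them and select by tag; the catalog, tags, and test names below are hypothetical.

```python
# A hypothetical catalog mapping test names to tags; "regression" marks
# the cases to re-run after each change.
test_catalog = {
    "test_login_valid_user": {"regression", "security"},
    "test_user_form_field_sizes": {"regression", "ui"},
    "test_report_export": {"full"},
    "test_sql_injection_on_user_form": {"regression", "security"},
}

def select(tag):
    # Pick the subset of tests carrying the given tag, in a stable order.
    return sorted(name for name, tags in test_catalog.items() if tag in tags)
```

Selecting by the "regression" tag yields the pre-grouped subset, so a regression cycle does not require re-deciding which cases to run.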
•Applications requiring critical response time should be thoroughly tested for performance.
Performance testing is a critical part of many applications, yet in manual testing it is often neglected because testers lack the large data volumes that performance testing requires. Find ways to test your application for performance. If you cannot create the test data manually, write some basic scripts to generate it for the performance test, or ask the developers to write one for you.
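A basic data-generation script of the kind described might look like this; the CSV layout and column names are assumptions for illustration.

```python
import csv
import io
import random

# Generate bulk test data for a performance test as CSV text, assuming a
# simple table of user records (columns are illustrative).
def generate_users(count, seed=0):
    rng = random.Random(seed)      # fixed seed keeps the data set reproducible
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["user_id", "name", "age"])
    for i in range(count):
        writer.writerow([i, f"user{i}", rng.randint(18, 90)])
    return buffer.getvalue()
```

Writing the output to a file instead of a string, or scaling `count` into the millions, turns this into the load-generation helper the tip calls for.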
•Go beyond requirement testing.
Test the application for what it is not supposed to do.
•While doing regression testing, use your previous bug graphs (a bug graph plots the number of bugs found against time, for different modules).
This module-wise bug graph can be useful for predicting the part of the application most likely to contain bugs.
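Counting past defects per module gives a simple, data-driven version of this prediction; the record format below is an assumption.

```python
from collections import Counter

# Rank modules by how many past defects were found in them, assuming each
# defect record carries the module it was found in.
def bug_hotspots(defects):
    counts = Counter(d["module"] for d in defects)
    return counts.most_common()    # modules ordered by bug count, descending
```

The top entries of the returned list are the most probable "bug parts" of the application, and good candidates for extra regression attention.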
•Note down the new terms, concepts you learn while testing.
Keep a text file open while testing an application and note down the testing progress and your observations in it. Use these notes while preparing the final test release report. This good habit will help you provide a complete, unambiguous test report and release details.
•Many times testers or developers make changes to the code base of the application under test.
This is a necessary step in a development or testing environment, for example to avoid executing live transaction processing in banking projects. Note down all such code changes made for testing purposes, and at the time of the final release make sure you have removed all of them from the files deployed to the client.
•Keep developers away from test environment.
This is necessary to detect any configuration changes missing from the release or deployment document. Sometimes developers make system or application configuration changes but forget to mention them in the deployment steps. If developers do not have access to the test environment, they cannot make such changes there accidentally, and the missing steps can be caught in the right place.
•It’s a good practice to involve testers right from the software requirements and design phases.
This way testers gain knowledge of the application’s dependencies, resulting in more detailed test coverage. If you are not asked to be part of the development cycle, request that your lead or manager involve your testing team in all decision-making processes and meetings.
•Testing teams should share best testing practices and experience with other teams in their organization.
•Increase your communication with developers to learn more about the product.
Whenever possible, communicate face to face to resolve disputes quickly and to avoid misunderstandings. But once you understand a requirement or resolve a dispute, make sure to confirm it in writing, for example over email. Do not keep anything verbal.
•Don’t run out of time to do high priority testing tasks.
Prioritize your testing work from high to low priority and plan accordingly. Analyze all associated risks when prioritizing your work.
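A tiny sketch of ordering tasks by priority and risk; the numeric scores and task names are hypothetical.

```python
# Order testing tasks so the highest-priority, highest-risk work comes
# first, assuming each task carries numeric priority and risk scores
# (higher = more urgent).
def plan(tasks):
    return sorted(tasks, key=lambda t: (t["priority"], t["risk"]), reverse=True)
```

Sorting on the (priority, risk) pair means risk breaks ties between tasks of equal priority, which matches the advice to weigh risk when planning.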
•Write clear, descriptive, unambiguous bug reports.
Do not provide only the bug’s symptoms; also describe the effect of the bug and any possible solutions.