TIPS FOR GOOD TESTING
During my many years as a system/integration tester, I’ve worked at many sites and tested many different types of computer applications. One problem I’ve noticed at many of these sites is poorly designed test cases. Another is the lack of set procedures for the testers to follow. The goal of testing is to ensure that a ‘bug free’ application is delivered to the user, and the only way to accomplish this is to test the application thoroughly. The results of testing are directly related to the quality of the test cases used to verify the application, so adequate preparation of a complete set of test cases is imperative if you, as a tester, are to meet your objective. In the following paragraphs I describe some of the necessary steps in preparing to test new or enhanced applications. If you keep these pointers in mind when writing your test cases and running your tests, you’ll find that your test cases are more complete and will really ‘shake out’ the application.
OBTAIN THE REQUIRED DOCUMENTS.
To begin writing test cases, there are a few things you need to gather. You first need to identify what ‘Release’ you will be working on and the Project(s) you will be responsible for.
Be sure to gather all associated Problem Reports to be included in the release you will be testing. Once you know the Project(s) you will be working on, you need to go to the Change Request repository and print a copy of the Change Requests (CRs) associated with the Project(s). You also need to obtain the Requirements and Design documents associated with the release. These documents will describe the changes being made in the release you are going to be testing.
ANALYZE THE REQUIREMENTS.
Once you have obtained the applicable documents, study the requirements carefully to determine how best to test the software application or the changes being made. You must decide whether the change can be tested through the GUI or whether you need to access the database to determine that the desired action was performed by the system. As a rule of thumb, things that are normally accessible through the GUI should be tested in the GUI. Things that are not visible in the GUI, such as log entries or database entries, should be verified with queries against the database using MS Access, Telnet, SQL queries, or whatever method you have available.
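For instance, a database-side check might look something like the sketch below. This is only an illustration: it assumes a pyodbc/ODBC connection and a hypothetical audit_log table, so substitute whatever access method, connection details, and table names apply at your site.

    # Minimal sketch of verifying a change that is not visible in the GUI.
    # The connection string, table, and column names are hypothetical.
    import pyodbc  # assumes an ODBC driver and database access are available

    def audit_entry_exists(order_id):
        """Return True if the expected audit_log row was written for order_id."""
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};"
            "SERVER=testdb;DATABASE=app;Trusted_Connection=yes"
        )
        try:
            cursor = conn.cursor()
            cursor.execute(
                "SELECT COUNT(*) FROM audit_log WHERE order_id = ? AND action = 'UPDATE'",
                order_id,
            )
            (count,) = cursor.fetchone()
            return count > 0
        finally:
            conn.close()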
If you are writing test cases for interactive testing that involves other applications, you must determine whether specific actions need to be performed by other testers to set up the data for your test. For example, if you need to test for a specific Error Code, you may have to ask the testers of the application that generates this error to make that Error Code appear. You may also need the help of the analyst, designer, or developer to determine what would cause that type of error. If, on the other hand, you are simply testing whether the GUI accepts the data you input, saves it, and can display it later, your test case may be designed as an independent test.
CREATE TEST CASES FOR DATABASE TESTING.
After you have completed your analysis, you are ready to create your test cases. There are generally two forms used in Test Case development: the Test Case Cover Sheet and the Test Case Format page. These forms are normally designed by the testing organization, based on its needs. There are some basic entries that almost all test case formats should include (a small illustrative sketch follows the list):
· Identify the software to be tested, including the version number.
· Identify the title of the test and its purpose.
· List the equipment requirements (hardware, 3rd party software, etc.).
· Include a Required Actions section (the test steps: what the tester needs to do to perform the test).
· Include an Expected Results section (a description of what the application should do after each test step has been performed).
· Create an Actual Results section (completed after the tester has performed the test step, describing what the application actually did).
· Include a Pass/Fail entry (record a Pass/Fail for each test step and one for the test as a whole; keep in mind that the failure of some test steps may not be cause for the entire test case to fail).
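If your organization keeps test cases electronically rather than on paper forms, the entries above might be captured in a simple record like this sketch. The class and field names are illustrative assumptions, not a prescribed or standard format.

    # A minimal sketch of one way the entries above might be captured.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TestStep:
        required_action: str           # what the tester must do
        expected_result: str           # what the application should do
        actual_result: str = ""        # filled in while the test is run
        passed: Optional[bool] = None  # Pass/Fail for this individual step

    @dataclass
    class TestCase:
        software: str                  # application under test, including version
        title: str                     # title of the test
        purpose: str                   # what the test is meant to verify
        equipment: List[str]           # hardware, 3rd party software, etc.
        steps: List[TestStep] = field(default_factory=list)
        passed: Optional[bool] = None  # overall result for the whole test case

Keeping a Pass/Fail on each step separate from the overall Pass/Fail reflects the point above: one failed step does not automatically fail the whole test case.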
The objective of writing a good Test Case is to include enough detail for the tester to fully understand the test scenario. Make sure the Required Actions section contains all the steps the tester needs to take to perform the test and fully describes everything required to produce the desired results. The Required Actions section is the ‘instructions to the tester’ part of the Test Case. If need be, and if possible, walk through the steps yourself to make sure you have included everything needed to get the tester where they are supposed to be. If the Test Case requires a query to the database, list all the tables that need to be included in the query. The data for these entries comes from your analysis of the requirements and of what you are attempting to test.
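For example, if a verification step depends on data in more than one table, spelling the query out in the Test Case removes any guesswork for the tester. The sketch below is only an illustration; the table and column names are hypothetical.

    # Hypothetical example of listing every table a verification query needs.
    # The table and column names are assumptions for illustration only.
    VERIFY_SHIPMENT_SQL = """
        SELECT o.order_id, o.status, s.shipped_date, c.customer_name
        FROM orders AS o
        JOIN shipments AS s ON s.order_id = o.order_id
        JOIN customers AS c ON c.customer_id = o.customer_id
        WHERE o.order_id = ?
    """
    # Required Action: run the query above (in MS Access, a SQL tool, etc.)
    # for the order created earlier in the test.
    # Expected Result: exactly one row is returned, status = 'SHIPPED', and
    # shipped_date matches the date entered in the GUI.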
Once the Required Actions section is completed, the Expected Results section must be filled in. Again, pay attention to detail when entering your data. You must describe exactly what the system is supposed to display at the conclusion of the process. Make your description as simple and concise as possible, but make sure everything is included. The Tester should be able to read this column and immediately know whether the system has performed as expected.
END RESULT: HIGH-QUALITY TESTING
Following these few simple principles will ensure that the test case you have written can be used by anyone with the same results. A well-written test case is one that produces the same result every time, regardless of who performs the test.
Having said all of that, the final step in testing is reporting your results.
DOCUMENT YOUR TEST RESULTS.
Describe the results you obtain in the Actual Results section of the Test Case page. Use the Remarks column to describe things that occur but are not documented as being part of the test, such as slow response time by the system.
The Test Case results must match the Expected Results column exactly. If they do, enter a P (Pass) in the Pass/Fail column; if they do not, enter an F (Fail). Describe in clear, precise words exactly what results you did obtain in the Actual Results column of the Test Case page. As mentioned earlier, a failed test step does not necessarily cause the entire test case to fail. For example, suppose that while running your test you are verifying the format of the screen and find a data field with a misspelled heading. The field still accepts the data you enter, saves it, and displays it later. In most cases this would not be considered a serious problem; it may be a cosmetic change that can be made later. Whether it is cause for the entire test case to fail is a judgment call for the customer, and these decisions are best made by the Change Control Board (or whoever has that decision authority). The primary responsibility of the tester in this case is to document the discrepancy.
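To make that concrete, here is a minimal sketch (illustrative names only, not a prescribed tool) of recording a per-step result and setting failed steps aside for Change Control Board review rather than automatically failing the whole case.

    # Illustrative sketch only: mark each step's result, but leave the overall
    # Pass/Fail of the case to human judgment, since the Change Control Board
    # (or customer) decides whether a minor discrepancy fails the whole case.
    def record_step(step, actual, remarks=""):
        """step is a dict with 'expected' already filled in; returns 'P' or 'F'."""
        step["actual"] = actual
        step["remarks"] = remarks
        step["pass_fail"] = "P" if actual == step["expected"] else "F"
        return step["pass_fail"]

    # Example: a cosmetic discrepancy is recorded and documented as a failed
    # step, but the verdict on the case as a whole is deferred to the CCB.
    step = {"action": "Open the Order screen", "expected": "Heading reads 'Quantity'"}
    record_step(step, "Heading reads 'Quantety'", remarks="misspelled heading; cosmetic only")
    needs_ccb_review = [s for s in [step] if s["pass_fail"] == "F"]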
REPORT THE FAILED TEST CASE.
If the actual results of the Test Case differ in any way from the expected results, the Test Case has failed. Fully describe what actually occurred. The Remarks column should be used to further describe why a Test Case failed. The more information provided as to what actually happened during performance of the Test Case, the easier it will be for all concerned to analyze the cause of the failure.
CHECK THE TEST CASE.
Check the Test Case first to ensure that the Test Case itself wasn’t the cause of the failure. Poorly written or difficult-to-understand Test Cases can sometimes cause unexpected results. Recheck the Requirements Document(s) and make sure the Test Case was written to test what is being changed or added. If necessary, contact the designer or developer to make sure the Requirement is being tested correctly. Should you find a problem in the Test Case, rewrite it so that it exactly tests the Requirement.
CONTACT THE DESIGNER OR DEVELOPER.
If the Test Case does not appear to be at fault, contact the designer and/or developer and discuss the failed Test Case. This may shed some light on the cause of the failure. If it is determined that the design or code is at fault, prepare a Change Request (CR). Provide as much information as possible, including screen prints of your results. The more data provided here, the easier it will be for the developer to research and correct the problem.
It is the Tester’s responsibility to ensure that a CR is prepared and submitted for every Test Case that failed or had any problem that requires documentation. After preparing the CR Ticket, notify the Release Coordinator so that it can be properly processed and assigned.
Be prepared to replicate your test case for the developer if requested to do so. This may be the easiest way for the developer to troubleshoot the problem.
RE-TEST.
Once the developer tells you that the problem has been corrected, your final task is to retest the fix exactly as you originally tested it. This entire process repeats until the test cases pass and the application is ready for release to the customer.
CUSTOMER SATISFACTION
When the customer receives the application, they will have a product that is as ‘bug free’ as is humanly possible to produce. Your development effort will have been a success and the possibility of follow-on work will be greatly enhanced.