Wednesday, April 9, 2008

A Checklist for Writing a Test Document (STD Checklist)

The final list includes many additions by Kobi Halperin, as they appeared here.
Preparations and format:
01. Were all the requirements identified and listed while preparing the TD?
02. Does the traceability matrix exist as part of the document, or is it referenced from the TD?
03. Is every requirement covered by at least one test group in the TD (see the sketch after this section)?
04. Was the TD written in accordance with the template?
05. Was the TD written in accordance with the guidelines in the Test Approach document?
06. Are all the test groups and test cases uniquely identified?
07. Is the naming convention used consistent throughout the document?
08. Was the Doc Revision Change Control filled properly?
09. Is the Testing Gear properly identified?
a. Tool specifications.
b. Scripts and debug tools.
c. Information that should be found in logs.
10. Are some of the tests defined to run in a non-clean environment, in order to identify integration issues and the influence of other components?
11. Are all related documents mentioned and linked?
12. Are there Definitions & Abbreviations for all the terminology in the document?
13. Are both the topics that will be tested and those that will not be tested defined in the document?
14. Is there any reference to documentation tests (Help windows, User Manual & Installation Manual)?
15. Are there any TBD issues?
16. Was the spell checker run? Were all fields that need updating updated (e.g. the TOC)?
17. Does the document include diversity (users, ports, slots, etc.)?
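
Items 02-03 above call for a traceability matrix that maps every requirement to at least one test group. A minimal sketch of the idea in Python, with hypothetical requirement IDs and test group names (REQ-001, TG-01 and so on are illustrative, not taken from the original checklist):

```python
# A minimal traceability-matrix sketch. The requirement IDs and test group
# names are hypothetical; a real TD would reference the actual requirement
# document. Every requirement must map to at least one test group (item 03).
TRACEABILITY = {
    "REQ-001 Login":           ["TG-01 Authentication"],
    "REQ-002 Password reset":  ["TG-01 Authentication", "TG-04 Notifications"],
    "REQ-003 Message deposit": ["TG-02 Deposit positive", "TG-03 Deposit negative"],
}

def uncovered_requirements(matrix):
    """Return the requirements that no test group covers."""
    return [req for req, groups in matrix.items() if not groups]

if __name__ == "__main__":
    missing = uncovered_requirements(TRACEABILITY)
    print("Uncovered requirements:", missing or "none")
```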

Test groups:
18. Were all the test groups and test cases defined in accordance with system impacts (Installability, Integrity, Security, Performance, Maintainability, Serviceability, Update/Migration, Documentation, Usability, Standards, Reliability, Requirements, Capability?)
19. Were all the test groups and test cases defined in accordance with triggers (Functional, Startup/Restart, Recovery/Exception, Redundancy Failover, Hardware configuration, Software configuration?)
20. Does every test group contain at least one test case?
21. Does each test group test one specific requirement?
22. Is every test group verified under different conditions/inputs/scenarios?
23. Is the testing method described clearly for each test group (to the level that will enable the reviewer to understand how the test cases will be executed and how the results will be evaluated?)
24. Does the test method for each test group include criteria for evaluating test results (see the sketch just below)?
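
A minimal sketch of what items 23-24 ask for, assuming a hypothetical "message deposit" test group; the method describes how the test cases are executed, and the criteria state exactly when a result counts as a pass:

```python
# Hypothetical test group "TG-02 Message deposit" (all names are illustrative).
# Test method (item 23): deposit a message through the public interface and
# observe the result in two independent places: the mailbox and the activity log.
# Evaluation criteria (item 24): the test passes only if the message is present
# in the mailbox AND a matching "deposit" record exists in the activity log.

def deposit_passed(mailbox_messages, activity_log, message_id):
    """Return True only when both evaluation criteria hold."""
    in_mailbox = message_id in mailbox_messages
    logged = any(rec.get("event") == "deposit" and rec.get("id") == message_id
                 for rec in activity_log)
    return in_mailbox and logged
```
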
Test Cases:
25. Are the test cases classified according to applicable test types (Positive Test Cases, Negative Test Cases - about 30%, Boundary Test Cases, End Cases)? See the first sketch after this list.
26. Are there test cases designed to detect interaction problems between the tested feature and other features / sub-systems of the system?
27. Does each test case include the purpose of the test?
28. Are the test cases for the same test group testing the same requirement under different conditions?
29. Are the test groups and test cases designed with a bug-revealing orientation (we test in order to find bugs, not to show that the system works)?
30. Do the test cases have priorities based on the product (e.g. complexity) and the version (e.g. feature X as a milestone for a customer)?
31. Do the test groups include separate test cases for positive, negative, boundary checks?
32. Do the test groups include separate test cases for end cases?
a. Were test cases with special input values (empty file, empty message, etc.) defined?
b. Were test cases defined for testing functionality under abnormal conditions (no communication between two components, temporarily no connection to the database, etc.)?
c. Were test cases with illegal input values defined?
33. Were complicated and not only trivial scenarios defined for test case execution (for example, several users performing actions or contradictory actions, like one user depositing a message while the subscriber is being deleted, or depositing a message while another message is being deposited and causes the message quota to be exceeded)?
34. Are the test objectives described clearly for each test group?
35. Was test results evaluation defined (at least considered) by more than one method (for example, visual observation of user’s handset, activity log record and relevant log file record?)
36. Does every test case have preparations and setup (either stated explicitly or referenced to the test group or whole TD preparations and setup?)
37. Are issues of error handling addressed?
38. Were test cases with default input values defined?
39. Was test case dependency eliminated (can each test case run independently, without depending on the execution of other test cases)? See the second sketch after this list.
40. For test cases that have specific preparations and setup, was a return to initial conditions defined upon end of execution (to eliminate dependency)?
41. Were test cases defined in order to check data integrity (data loss in case of errors, failures?)
42. Were test cases defined in order to check data consistency (in case of flow errors, for example?)
43. Were test cases defined to check rollback in case of flow error?
44. Were test groups and test cases that are relevant only for specific version identified and marked as such (the idea is to have one TD for a sub-system/service/feature for all versions and test cases that are not relevant for this version can be easily identified and not executed?)
45. Are the expected test results for each test case specific and unambiguous (you understand exactly what to expect from the test results and if everything works correct, you will always expect the same results?)
46. Where we use the same set of steps (e.g. checking text fields): is it written only once with reference?
47. Where we use the same set of parameters (e.g. checking configuration files): is it written only once with reference?
48. Do all test cases include a running time estimation (before the first run it is a rough estimation, to be updated if needed after the first run)?
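
A first sketch for this section (items 25, 31-32 and 46-47): positive, boundary and negative cases for one hypothetical rule ("a message subject is 1-80 characters"), written once and reused through parametrization instead of repeating the same steps. The rule, the names and the use of pytest are assumptions for illustration, not part of the original checklist:

```python
import pytest

def subject_is_valid(subject: str) -> bool:
    """Hypothetical validation rule used only for illustration."""
    return 1 <= len(subject) <= 80

# One set of steps, many classified inputs (positive, boundary, negative, end case).
@pytest.mark.parametrize(
    "subject, expected, case_type",
    [
        ("Hello",  True,  "positive"),
        ("a",      True,  "boundary: minimum length"),
        ("a" * 80, True,  "boundary: maximum length"),
        ("",       False, "negative / end case: empty input"),
        ("a" * 81, False, "negative: above maximum length"),
    ],
)
def test_subject_length(subject, expected, case_type):
    assert subject_is_valid(subject) == expected, case_type
```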
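
A second sketch, for items 39-40: each test case gets its own setup and a return to the initial conditions, so no test depends on leftovers from another test. The Mailbox class here is a stand-in for whatever the system under test actually provides:

```python
import pytest

class Mailbox:
    """Stand-in for the system under test."""
    def __init__(self):
        self.messages = []
    def deposit(self, text):
        self.messages.append(text)

@pytest.fixture
def mailbox():
    box = Mailbox()        # setup: a fresh, empty mailbox for every test case
    yield box
    box.messages.clear()   # teardown: return to the initial conditions (item 40)

def test_deposit_adds_message(mailbox):
    mailbox.deposit("hello")
    assert mailbox.messages == ["hello"]

def test_empty_mailbox_stays_empty(mailbox):
    # Passes whether or not the previous test case was executed (item 39).
    assert mailbox.messages == []
```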

Test Steps:
49. Are the actions (steps) in the test procedure specific and unambiguous?
50. Do the steps include only relevant actions for test execution and are not mixed with test preparations and setup?
51. Are the test procedures brief (5-10 steps)? If not, you are probably testing several things at once (see the sketch after this list).
52. Wherever a range of values (including in the test gear) can be used - is it included in the tests?
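
A sketch of items 49-51: a short procedure where every step is one specific action followed by one clearly expected result, and the preparations are kept outside the steps. FakeLoginClient and its credentials are hypothetical stand-ins for the real system under test:

```python
class FakeLoginClient:
    """Hypothetical stand-in for the real login interface."""
    def login(self, user, password):
        ok = (user == "qa_user" and password == "qa_pass")
        return {"status": "OK" if ok else "FAIL", "token": "t-123" if ok else ""}

def test_login_with_valid_credentials():
    client = FakeLoginClient()          # preparation and setup, not a test step
    # Step 1: log in with known-good credentials.
    response = client.login(user="qa_user", password="qa_pass")
    # Step 2: the call reports success.
    assert response["status"] == "OK"
    # Step 3: a non-empty session token is returned.
    assert response["token"]
```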

2 comments:

  1. Hello Doron,
    I would like to know whether you have examples of writing an STD in Hebrew as well. How can one learn to write test scripts against a system specification: valid and invalid tests, the test steps and the expected results?
    Thanks in advance

  2. I have no examples and I have never written an STD in Hebrew. As for the other questions, in my opinion there is no short answer; I recommend taking a course or reading the professional literature.

