Tuesday, January 31, 2012

GUI checking checklist


A GUI checking checklist is a fact-gathering tool used to verify that a new web application behaves as expected across a range of GUI-related concerns. Work through the points below and record Yes/No against each; a sketch of how a couple of these checks might be automated follows the list.

Check Points related to Navigation

1. Assure the existence of the "Help" menu.
2. Assure that the proper commands and options are in each menu.
3. Assure that all buttons on all toolbars have corresponding key commands.
4. Assure that each menu command has an alternative (hot-key) key sequence that will invoke it where appropriate.
5. In drop-down list boxes, ensure that names are not abbreviated or cut short.
6. In drop-down list boxes, assure that the list and each entry in it can be accessed via the appropriate key/hot-key combinations.
7. Ensure that duplicate hot keys do not exist on any screen.
8. Ensure proper use of the Escape key (which is to undo any changes that have been made) and that it generates a caution message such as "Changes will be lost - Continue yes/no".
9. Assure that the Cancel button functions the same way as the Escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons used by a particular window, or in a particular dialog box, are present - i.e. make sure they do not operate on the screen behind the current screen.
12. When a command button is used at some times and not at others, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that field labels/names are not technical labels, but names meaningful to system users.
16. Assure that command buttons are of similar size and shape, and use the same font and font size.
17. Assure that each command button can be accessed via a hot-key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys.
19. Assure that each window/dialog box has a clearly marked default (command button or other object) that is invoked when the Enter key is pressed - and that this default is NOT the Cancel or Close button.
20. Assure that focus is set to an object/button that makes sense for the function of the window/dialog box.
21. Assure that option button (radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas (group boxes).
26. Assure that the Tab key sequence traverses the screen in a logical order.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green colorblind).
29. Assure that the user retains control of the desktop with respect to general color and highlighting (the application should not dictate desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl+F6 opens the next tab within a tabbed window.
32. Shift+Ctrl+F6 opens the previous tab within a tabbed window.
33. Tabbing opens the next tab within a tabbed window when on the last field of the current tab.
34. Tabbing moves to the 'Continue' button when on the last field of the last tab within a tabbed window.
35. Tabbing moves to the next editable field in the window.
36. Banner style, size, and display are exactly the same as in existing windows.
37. If a list box has 8 or fewer options, all options are displayed when the list box is opened - there should be no need to scroll.
38. Errors on Continue return the user to the offending tab, with focus on the field causing the error (i.e. the tab is opened and the field in error is highlighted).
39. Pressing Continue on the first tab of a tabbed window (assuming all fields are filled correctly) does not open all the tabs.
40. On opening a tab, focus is on the first editable field.
41. All fonts are the same.
42. Alt+F4 closes the tabbed window and returns you to the main or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
43. Micro-help text exists for every enabled field and button.
44. Ensure all fields are disabled in read-only mode.
45. Progress messages are shown while tabbed screens load.
46. Return/Enter operates the Continue button.
47. If the retrieve on load of a tabbed window fails, the window should not open.
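
Many of these checks are visual judgments, but some can be automated against the rendered page. Below is a minimal, illustrative sketch (assuming Python with Selenium WebDriver and a hypothetical application URL) of automating two of them: duplicate hot keys (items 7 and 18) and truncated drop-down entries (item 5).

# Illustrative only: the URL and selectors are hypothetical.
from collections import Counter
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/app")  # hypothetical application under test

# Items 7/18: no two controls on the screen should share the same hot key.
hot_keys = [el.get_attribute("accesskey")
            for el in driver.find_elements(By.CSS_SELECTOR, "[accesskey]")]
duplicates = [key for key, count in Counter(hot_keys).items() if count > 1]
assert not duplicates, f"Duplicate hot keys found: {duplicates}"

# Item 5: drop-down list entries should not be abbreviated or cut short.
for option in driver.find_elements(By.CSS_SELECTOR, "select option"):
    assert not option.text.endswith("..."), f"Truncated entry: {option.text!r}"

driver.quit()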
 

Metrics for product development


Taxonomy of Metrics:
Metrics for certain aspects of the project include:
  • Progress in terms of size and complexity.
  • Stability in terms of rate of change in the requirements or implementation, size, or complexity.
  • Modularity in terms of the scope of change.
  • Quality in terms of the number and type of errors.
  • Maturity in terms of the frequency of errors.
  • Resources in terms of project expenditure versus planned expenditure.
Each metric is listed below with its purpose and sample measures/perspectives.

Progress
Purpose: iteration planning; completeness.
Sample measures/perspectives: number of classes; SLOC; function points; scenarios; test cases (these measures may also be collected by class and by package); amount of rework per iteration (number of classes).

Stability
Purpose: convergence.
Sample measures/perspectives: number and type of changes (bug versus enhancement; interface versus implementation); amount of rework per iteration (this measure may also be collected by iteration and by package).

Adaptability
Purpose: convergence.
Sample measures/perspectives: software "rework"; average person-hours/change (this measure may also be collected by iteration and by package).

Modularity
Purpose: convergence.
Sample measures/perspectives: software "scrap"; number of classes/categories modified per change (this measure may also be collected by iteration).

Quality
Purpose: iteration planning; rework indicator; release criterion.
Sample measures/perspectives: number of errors; defect discovery rate; defect density; depth of inheritance; class coupling; size of interface (number of operations); number of methods overridden; method size (these measures may also be collected by class and by package).

Maturity
Purpose: test coverage/adequacy; robustness for use.
Sample measures/perspectives: test hours/failure and type of failure (this measure may also be collected by iteration and by package).

Expenditure profile
Purpose: financial insight; planned versus actual.
Sample measures/perspectives: person-days/class; full-time staff per month; % budget expended.
A Complete Metrics Set
·         The Process: the sequence of activities invoked to produce the software product (and other artifacts)
·         The Product: the artifacts of the process, including software, documents and models
·         The Project: the totality of project resources, activities and artifacts
·         The Resources: the people, methods and tools, time, effort and budget available to the project
Process Metrics:
Short-term metrics that measure the effectiveness of the product development process and can be used to predict program and product performance, for example:
- Staffing (hours) vs. plan
- Turnover rate
- Errors per 1,000 lines of code (KSLOC)
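
As a small worked example (the figures are invented), errors per KSLOC and the related defect discovery rate reduce to simple ratios:

defects_found = 42      # defects logged against the release so far
size_sloc = 18_500      # source lines of code in the release
test_weeks = 6          # weeks of testing completed

defect_density = defects_found / (size_sloc / 1000)  # errors per KSLOC
discovery_rate = defects_found / test_weeks          # defects found per week

print(f"Defect density: {defect_density:.1f} defects/KSLOC")   # 2.3
print(f"Discovery rate: {discovery_rate:.1f} defects/week")    # 7.0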

Metrics and comments:

Duration: elapsed time for the activity.
Effort: staff effort units (staff-hours, staff-days, ...).
Output: artifacts and their size and quantity (note this will include defects as an output of test activities).
Software development environment usage: CPU, storage, software tools, equipment (workstations, PCs), disposables. Note that these may be collected for a project by the Software Engineering Environment Authority (SEEA).
Defects, discovery rate, correction rate: total repair time/effort and total scrap/rework (where this can be measured) also need to be collected; these will probably come from information collected against the defects (considered as artifacts).
Change requests, imposition rate, disposal rate: comments as above on time/effort.
Other incidents that may have a bearing on these metrics (freeform text): this is a metric in that it is a record of an event that affected the process.
Staff numbers, profile (over time) and characteristics.
Staff turnover: a useful metric which may explain at a post-mortem review why a process went particularly well, or badly.
Effort application: the way effort is spent during the performance of the planned activities (against which time is formally recorded for cost account management) may help explain variations in productivity. Some subclasses of effort application are, for example:
·         Training
·         Familiarization
·         Management (by team lead, for example)
·         Administration
·         Research
·         Productive work - it is helpful to record this by artifact, and to attempt a separation of 'think' time and capture time, particularly for documents. This will tell the project manager how much of an imposition the documentation process is on the engineer's time.
·         Lost time
·         Meetings
·         Inspections, walkthroughs, reviews - preparation and meeting effort (some of these will be separate activities, and time and effort for them will be recorded against a specific review activity)
Inspections, walkthroughs, reviews (during an activity - not separately scheduled reviews): record the number of these and their duration, and the number of issues raised.
Process deviations (raised as non-compliances, requiring project change): record the number of these and their severity. This is an indicator that more education may be required, that the process is being misapplied, or that the way the process was configured was incorrect.
Process problems (raised as process defects, requiring process change): record the number of these and their severity. This will be useful information at post-mortem reviews and is essential feedback for the Software Engineering Process Authority (SEPA).

Product development Metrics
Artifacts:
·         Size — a measure of the number of things in a model, the length of something, the extent or mass of something
·         Quality
§  Defects: indications that an artifact does not perform as specified, is not compliant with its specification, or has other undesirable characteristics
§  Complexity: a measure of the intricacy of a structure or algorithm; the greater the complexity, the more difficult a structure is to understand and modify, and there is evidence that complex structures are more likely to fail (a rough way of measuring this for code is sketched after this list)
§  Coupling: a measure of how extensively elements of a system are interconnected
§  Cohesion: a measure of how well an element or component meets the requirement of having a single, well-defined purpose
§  Primitiveness: the degree to which operations or methods of a class can be composed from others offered by the class
·         Completeness: a measure of the extent to which an artifact meets all requirements, stated and implied (the Project Manager should strive to make explicit as much as possible, to limit the risk of unfulfilled expectations). We have not chosen here to distinguish between sufficient and complete.
·         Traceability — an indication that the requirements at one level are being satisfied by artifacts at a lower level, and, looking the other way, that an artifact at any level has a reason to exist
·         Volatility — the degree of change in an artifact because of defects or changing requirements
·         Effort — a measure of the work (staff-time units) that is required to produce an artifact
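
The complexity measure above is often quantified for code as cyclomatic complexity. The following is a rough, standard-library-only sketch that approximates it by counting branch points in each function's AST (the file name is hypothetical); dedicated tools such as radon or lizard compute this more rigorously.

import ast

# Node types that add a decision point (an approximation, not the full rule set).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def approx_cyclomatic_complexity(source: str) -> dict:
    """Return an approximate complexity score for each function in `source`."""
    scores = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES) for n in ast.walk(node))
            scores[node.name] = 1 + branches  # one straight-line path plus branches
    return scores

with open("module_under_test.py") as f:   # hypothetical module under measurement
    print(approx_cyclomatic_complexity(f.read()))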
Documents:
Characteristic
Metrics
Size
*       Page count
Effort
*       Staff-time units for production, change and repair
Volatility
*       Numbers of changes, defects, opened, closed; change pages
Quality
*       Measured directly through defect count
Completeness
*       Not measured directly: judgment made through review
Traceability
*       Not measured directly: judgment made through review
Models:
§  Use-Case Model
Characteristic
Metrics
Size
*       Number of Use Cases
*       Number of Use Case Packages
*       Reported Level of Use Case (see white paper, "The Estimation of Effort and Size based on Use Cases" from the Resource Center)
*       Number of scenarios, total and per use case
*       Number of actors
*       Length of Use Case (pages of event flow, for example)
Effort
*       Staff-time units (with production, change and repair separated)
Volatility
*       Number of defects and change requests (open, closed)
Quality
*       Reported complexity (0-5, by analogy with COCOMO [BOE81], at class level; complexity range is narrower at higher levels of abstraction - see white paper, "The Estimation of Effort and Size based on Use Cases" from the Resource Center)
*       Defects: number of defects, by severity, open, closed
Completeness
*       Use Cases completed (reviewed and under configuration management with no defects outstanding)/use cases identified (or estimated number of use cases)
Traceability
o    Scenarios realized in analysis model/total scenarios
o    Scenarios realized in design model/total scenarios
o    Scenarios realized in implementation model/total scenarios
o    Scenarios realized in test model (test cases)/total scenarios
§  Design Model
Characteristic
Metrics
Size
*       Number of classes
*       Number of design subsystems
*       Number of subsystems of subsystems
*       Number of packages
*       Methods per class, internal, external
*       Attributes per class, internal, external
*       Depth of inheritance tree
*       Number of children
Effort
*       Staff-time units (with production, change and repair separated)
Volatility
*       Number of defects and change requests (open, closed)
Quality
Complexity
*       Response For a Class (RFC): this may be difficult to calculate because a complete set of interaction diagrams is needed.
Coupling
*       Number of children
*       Coupling between objects (class fan-out)
Cohesion
*       Number of children
Defects
*       Number of defects, by severity (open, closed)
Completeness
*       Number of classes completed/number of classes estimated (identified)
*       Design traceability (in Use-Case model)
Traceability
*       Number of classes in Implementation Model/number of classes
§  Implementation Model
Characteristic
Metrics
Size
*       Number of classes
*       Number of components
*       Number of implementation subsystems
*       Number of subsystems of subsystems
*       Number of packages
*       Methods per class, internal, external
*       Attributes per class, internal, external
*       Size of methods*
*       Size of attributes*
*       Depth of inheritance tree
*       Number of children
*       Estimated size* at completion
Effort
*       Staff-time units (with production, change and repair separated)
Volatility
*       Number of defects and change requests (open, closed)
*       Breakage* for each corrective or perfective change, estimated (prior to fix) and actual (upon closure)
Quality
Complexity
*       Response For a Class (RFC)
*       Cyclomatic complexity of methods**
Coupling
*       Number of children
*       Coupling between objects (class fan-out)
*       Message passing coupling (MPC)***
Cohesion
*       Number of children
*       Lack of cohesion in methods (LCOM)
Defects
*       Number of defects, by severity, open, closed
Completeness
*       Number of classes unit tested/number of classes in design model
*       Number of classes integrated/number of classes in design model
*       Implementation traceability (in Use-Case model)
*       Test model traceability multiplied by Test Completeness
*       Active integration and system test time (accumulated from test process), that is, time with system operating (used for maturity calculation)
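
Several of the size measures above, such as methods per class and depth of inheritance tree, can be harvested directly from the code by introspection. A minimal Python sketch (the module name is hypothetical, and using the MRO length as the inheritance depth is only an approximation under multiple inheritance):

import inspect
import my_package.module  # hypothetical module under measurement

for name, cls in inspect.getmembers(my_package.module, inspect.isclass):
    methods = [m for m, _ in inspect.getmembers(cls, inspect.isfunction)]
    depth = len(inspect.getmro(cls)) - 1  # classes above this one, including object
    print(f"{name}: {len(methods)} methods, inheritance depth {depth}")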
§  Test Model
Characteristic
Metrics
Size
*       Number of Test Cases, Test Procedures, Test Scripts
Effort
*       Staff-time units (with production, change and repair separated) for production of test cases, and so on
Volatility
*       Number of defects and change requests (open, closed) against the test model
Quality
*       Defects: number of defects by severity, open, closed (these are defects raised against the test model itself, not defects raised by the test team against other software)
*       Number of test cases written/number of test cases estimated
*       Test traceability (in Use-Case model)
*       Code coverage
*       Number of Test Cases reported as successful in Test Evaluation Summary/Number of test cases
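
The code coverage measure above is typically collected by running the test procedures under a coverage tool. A minimal sketch (assuming pytest and coverage.py are installed, and a hypothetical tests/ directory):

import coverage
import pytest

cov = coverage.Coverage()
cov.start()
pytest.main(["tests/"])          # run the test scripts
cov.stop()
cov.save()
cov.report(show_missing=True)    # statement coverage per module, with gaps listed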
§  Management
Change Model: this is a notional model for consistent presentation; the metrics will be collected from whatever system is used to manage Change Requests.
Characteristic
Metrics
Size
*       Number of defects, change requests by severity and status, also categorized as number of perfective changes, number of adaptive changes and number of corrective changes.
Effort
*       Defect repair effort, change implementation effort in staff-time units
Volatility
*       Breakage (estimated, actual) for the implementation model subset.
Completeness
*       Number of defects discovered/number of defects predicted (if a reliability model is used)
Project Metrics
·         BCWS, Budgeted Cost for Work Scheduled
·         BCWP, Budgeted Cost for Work Performed
·         ACWP, Actual Cost of Work Performed
·         BAC, Budget at Completion
·         EAC, Estimate at Completion
·         CBB, Contract Budget Base
·         LRE, Latest Revised Estimate (EAC)
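
These quantities are normally combined into derived indices. A small worked example (the figures are invented) using the standard earned-value formulas:

BCWS = 120_000   # Budgeted Cost for Work Scheduled (planned value to date)
BCWP = 100_000   # Budgeted Cost for Work Performed (earned value to date)
ACWP = 110_000   # Actual Cost of Work Performed
BAC  = 480_000   # Budget at Completion

CPI = BCWP / ACWP    # cost performance index (< 1 means over cost)
SPI = BCWP / BCWS    # schedule performance index (< 1 means behind schedule)
EAC = BAC / CPI      # one common Estimate at Completion formula

print(f"CPI={CPI:.2f}  SPI={SPI:.2f}  EAC={EAC:,.0f}")   # CPI=0.91  SPI=0.83  EAC=528,000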


Resources Metrics:
·         People (experience, skills, cost, performance),
·         Methods and tools (in terms of effect on productivity and quality, cost),
·         Time, effort, budget (resources consumed, resources remaining)