Add a way to model test severity (similar to category)
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| HEXR | Fix Released | Medium | Unassigned | |
| PlainBox (Toolkit) | Fix Released | High | Zygmunt Krynicki | |
Bug Description
During test result review (for certification) one needs to keep track of the "severity" (a new term) of each test in relation to the certification coverage guide and the form factor being certified. This information can be stored and processed identically to how we currently handle category assignments.
In each job definition unit, we could optionally store the severity it has for certification. The severity would encode the importance of *failures*. Currently three values would be defined: "whitelist" (it has to pass), "graylist" (it may fail; it is not critical, but we need a note), or "blacklist" (it can fail silently).
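A minimal sketch of how the three severity values and their review semantics could be modeled in Python (the names `Severity` and `review_failure` are illustrative, not part of the PlainBox API):

```python
from enum import Enum


class Severity(Enum):
    """Severity of a *failure* of a job, from a certification standpoint."""
    WHITELIST = "whitelist"  # the test has to pass
    GRAYLIST = "graylist"    # may fail; not critical, but requires a note
    BLACKLIST = "blacklist"  # may fail silently


def review_failure(severity: Severity) -> str:
    """What a reviewer must do when a job with this severity fails."""
    if severity is Severity.WHITELIST:
        return "block certification"
    if severity is Severity.GRAYLIST:
        return "record a note"
    return "ignore"
```

For example, a failing graylist job would yield `review_failure(Severity.GRAYLIST) == "record a note"` rather than blocking the certification run.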
A test plan could override any implicit assignments (just as we can override categories) so that we can tailor certification requirements for a particular form factor.
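The override mechanism could work along these lines, modeled on pattern-based category overrides; the data layout and pattern syntax below are assumptions for illustration, not the actual PlainBox implementation:

```python
import re

# Implicit severities declared in the job definitions themselves.
IMPLICIT = {
    "cpu/stress": "whitelist",
    "audio/playback": "graylist",
    "suspend/advanced": "whitelist",
}

# Overrides a test plan might carry, as (job-id pattern, severity) pairs,
# mirroring how category overrides already match jobs by pattern.
OVERRIDES = [
    (r"^suspend/.*$", "graylist"),  # relax suspend tests for this form factor
]


def effective_severity(job_id: str) -> str:
    """Resolve a job's severity: implicit value, then test plan overrides."""
    severity = IMPLICIT.get(job_id, "whitelist")  # assumed default
    for pattern, new_severity in OVERRIDES:
        if re.match(pattern, job_id):
            severity = new_severity
    return severity
```

With this sketch, `effective_severity("suspend/advanced")` resolves to `"graylist"` even though the job declares `"whitelist"` implicitly, which is exactly the per-form-factor tailoring described above.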
The HEXR database could store this information in the "Test" table and display it as an icon hint in the test result review views (the views that currently list all of the test results with optional filtering).
Changed in plainbox:
status: Fix Committed → Fix Released

Changed in hexr:
status: New → Confirmed

Changed in hexr:
importance: Undecided → Medium
milestone: none → future

Changed in hexr:
status: Confirmed → Fix Released
This is also in progress in HEXR (patches are ready), but I cannot change it there.