Checkbox should include, in test reports, any variation in test plan
| Affects | Status | Importance | Assigned to | Milestone |
|---|---|---|---|---|
| Next Generation Checkbox (CLI) | Fix Released | Wishlist | Jonathan Cave | 1.6.0 |
Bug Description
Sometimes, testers de-select tests. In some cases this is fine; in others it is not acceptable. It would be helpful to be able to observe this by seeing, in the results, a diff between the original test plan (as seen on the initial Test Selection screen) and the tests actually run (after a user has manually de-selected tests).
So for example, if the PXU indicates:
network/test1
network/test2
network/test3
memory/test1
memory/test2
storage/test1
storage/test2
and the user de-selects the network tests entirely, in the results somewhere would be this:
Tests manually deselected by user:
network/test1
network/test2
network/test3
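The requested report field is essentially a set difference between the test plan's job list and the jobs the user kept, preserving plan order. A minimal sketch of that idea (the function name `manually_deselected` and the variable names are illustrative, not the actual plainbox API):

```python
def manually_deselected(planned_jobs, kept_jobs):
    """Return planned job IDs the user de-selected, in test-plan order."""
    kept = set(kept_jobs)
    return [job for job in planned_jobs if job not in kept]

planned = [
    "network/test1", "network/test2", "network/test3",
    "memory/test1", "memory/test2",
    "storage/test1", "storage/test2",
]
# User de-selected the network tests entirely:
kept = ["memory/test1", "memory/test2", "storage/test1", "storage/test2"]

print(manually_deselected(planned, kept))
# → ['network/test1', 'network/test2', 'network/test3']
```

Iterating over the planned list (rather than subtracting sets directly) keeps the output in the order the tester originally saw, which makes the report section easier to scan.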
This would help a lot in catching these cases. Since the tests are also generated dynamically depending on several factors, it's hard to always know that something has been removed.
For example, if a node has three NICs and the tests look like this:
network/
network/
network/
network/
network/
network/
but the user de-selects all but the tests for device 1, the actual tests look like this:
network/
network/
And it takes a lot more digging to realize "Hey, where are the device2 and device3 network tests?"
Related branches
- Sylvain Pineau (community): Approve
- Jeff Lane : Approve
- Sheila Miguez (community): Approve
Diff: 66 lines (+24/-3), 3 files modified:
- plainbox/impl/providers/exporters/data/checkbox.json (+7/-0)
- plainbox/impl/session/assistant.py (+8/-3)
- plainbox/impl/session/state.py (+9/-0)
tags: added: hwcert-server
Changed in checkbox-ng:
- importance: Undecided → Wishlist
Changed in checkbox-ng:
- assignee: nobody → Jonathan Cave (jocave)
- status: New → In Progress
Changed in checkbox-ng:
- milestone: none → 1.6.0
- status: In Progress → Fix Committed
Changed in checkbox-ng:
- status: Fix Committed → Fix Released
YAY!!! Thank you! This came up again this week, as I have a tester who has not only removed tests from the main test run, but also from subsequent re-tests :/ So making those cases more obvious will definitely help.