Unclear indication in dashboard when test setup fails
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
LAVA Server | In Progress | Medium | Unassigned |
Bug Description
Looking at the image report for TC2 LSK at https://validation.linaro.org/dashboard/image-reports/linux-linaro-lsk-vexpress-tc2 I see that several tests such as gator are showing as '-' indicating that they have not been run but there is no indication as to why. The job log shows the test setup failing, for example:
    + lava-install-
    0% [Working] Err http://
    Err http://
    Err http://
    0% [Working] Err http://
    Could not resolve 'ports.ubuntu.com'
    Err http://
    Could not resolve 'ppa.launchpad.net'
    0% [Working] Err http://
    Could not resolve 'ppa.launchpad.net'
I would expect this to show up as red on the dashboard or to have a separate line item there for test prerequisites showing that some of the testsuites failed to set up, or perhaps the test setup should be included as a test within the test list for reporting. As things stand it was not at all clear to me looking at the dashboard that any attempt had been made to run the tests.
The fact that the failures happened is a separate issue to the fact that this isn't reported clearly.
Changed in lava-server:
  assignee: nobody → Neil Williams (codehelp)
Changed in lava-server:
  status: New → Confirmed
  importance: Undecided → Medium
Changed in lava-server:
  status: Confirmed → In Progress
Changed in lava-server:
  assignee: Neil Williams (codehelp) → nobody
On Thu, 16 Jan 2014 12:17:21 -0000
Mark Brown <email address hidden> wrote:
> Public bug reported:
>
> Looking at the image report for TC2 LSK at
> https://validation.linaro.org/dashboard/image-reports/linux-linaro-lsk-vexpress-tc2
> I see that several tests such as gator are showing as '-'
> indicating that they have not been run but there is no indication as
> to why.
There are two reasons why:
0: The relevant lava_test_shell sections were not included in the JSON
for the job.
1: The test was included in the job but the run never reached that part
of the test suite.
Only the second reason would need to be indicated as a failure. When
the job JSON changes to no longer include that bit of YAML, that is not
a test failure.
Unfortunately, there is currently no way to reliably identify the first
case because the name of the test does not have to relate to the
filename of the YAML file.
The filter cannot tell the difference - there are simply no results
which match the filter.
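For illustration, here is a minimal sketch of that disconnect, written as Python data for want of a better notation. The repository URL, filename and test name are hypothetical, and the layout assumes the usual lava_test_shell job JSON structure:

    # Hypothetical job fragment: the lava_test_shell action names a
    # testdef *file*, but results are recorded under the test_id
    # declared inside that file.
    job = {
        "actions": [
            {
                "command": "lava_test_shell",
                "parameters": {
                    "testdef_repos": [
                        {
                            "git-repo": "https://git.example.org/tests.git",
                            "testdef": "gator/setup.yaml",  # the filename the job requests
                        }
                    ]
                },
            }
        ]
    }

    # Inside gator/setup.yaml the declared name can be anything, e.g.
    #   metadata:
    #     name: gator-smoke
    # Results land under the test_id "gator-smoke", so a filter faced
    # with an empty result set cannot work back to "gator/setup.yaml
    # was requested but never ran".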
> I would expect this to show up as red on the dashboard or to have a
> separate line item there for test prerequisites showing that some of
> the testsuites failed to set up, or perhaps the test setup should be
> included as a test within the test list for reporting. As things stand
> it was not at all clear to me looking at the dashboard that any
> attempt had been made to run the tests.
Quite possibly that is because the filter is supporting a historical
test which applied to previous runs but has since been removed from
subsequent runs.
Filters are not tied directly to the job submission, only to the test
results, and test results can be generated by a number of different job
submissions, some of which may or may not include all of the tests used
in the other submissions. For example, this allows one test to be run
across a variety of platforms (where the other tests would not be
supportable) whilst collating the results from all platforms in one
filter.
It may be possible to collate the lava_test_shell data in the result
bundle in such a way as to create a list of test definitions for that
job and then annotate each test_id if the job failed.
This could help the filter distinguish between tests which were not
requested and tests which were requested but failed to run.
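As a rough sketch of that idea (not an implemented API: every function and field name below is hypothetical, written as Python for illustration), the collation and annotation could look something like this:

    # Hypothetical sketch of the proposed annotation: gather the
    # testdefs named in the submitted job, then flag each one that
    # produced no results in the bundle. "testdef_map" stands in for
    # the testdef-to-test_id mapping that is currently missing.
    def annotate_missing(job, bundle):
        requested = []
        for action in job.get("actions", []):
            if action.get("command") == "lava_test_shell":
                params = action.get("parameters", {})
                for repo in params.get("testdef_repos", []):
                    requested.append(repo["testdef"])

        # test runs actually present in the result bundle
        ran = {run["test_id"] for run in bundle.get("test_runs", [])}

        annotations = {}
        for testdef in requested:
            test_id = bundle.get("testdef_map", {}).get(testdef)
            if test_id is None or test_id not in ran:
                annotations[testdef] = "requested but failed to run"
        return annotations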
--
Neil Williams
=============
http://www.linux.codehelp.co.uk/