Comment 8 for bug 1269782

Neil Williams (codehelp) wrote : Re: [Bug 1269782] Re: Unclear indication in dashboard when test setup fails

On Fri, 17 Jan 2014 11:31:14 -0000
Mark Brown <email address hidden> wrote:

> | Yes, but each scheduling operation is open to anyone to re-use
> | testsuites in different LAVA jobs. Therefore, more results can exist
> | for the filter to match. (Filters are the database queries behind
> | the image reports - the filter collates the test suite results into
> | sets which provide the data for the reports.)
>
> I think the biggest UX issue I'm having here is that I'm having a hard
> time matching this and therefore the conclusions you draw from it with
> what I'm looking at in the UI. What I'm doing is going to an image
> report like:
>
> https://validation.linaro.org/dashboard/image-reports/linux-linaro-lsk-
> vexpress-tc2
>
> There I can see a list of builds which if I click through are linked
> to specific jobs that LAVA ran. This means that LAVA knows exactly
> what testsuites were run in that job (since it was what ran them)
> which in turn means that it should be able to tell me if some of them
> generated errors and bombed out during their setup phase.
>
> More generally LAVA is the one running jobs so it really ought to know
> what testsuites it tried to run.

During the job execution, yes, it does know this. The fix for this
bug is to make this clearer in the final image reports, possibly by
retaining the original list of test definitions passed to the job in
the final result bundle. The filter, and then the image report, could
then indicate the difference between a test definition which was never
submitted as part of the job and one which was submitted but failed to
provide any results.
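To illustrate the idea (a minimal sketch only; the function and field names are hypothetical, not the actual LAVA bundle schema), retaining the submitted list lets the report split definitions into those with results and those that were submitted but produced none:

```python
# Hypothetical sketch: distinguish "submitted but produced no results"
# from definitions that did report, assuming the job's original test
# definition list is retained in the result bundle.

def classify_test_definitions(submitted, results_by_test):
    """Split the submitted test definitions into those that produced
    results and those that ran (or tried to run) but reported none,
    e.g. because they bombed out during their setup phase."""
    produced = {name for name, results in results_by_test.items() if results}
    return {
        "with_results": [n for n in submitted if n in produced],
        "no_results": [n for n in submitted if n not in produced],
    }

# Example: three definitions submitted, one failed during setup.
report = classify_test_definitions(
    submitted=["lava-test-shell", "ltp", "pm-qa"],
    results_by_test={
        "lava-test-shell": [{"pass": True}],
        "ltp": [],  # submitted, but no results came back
        "pm-qa": [{"pass": False}],
    },
)
```

A definition absent from `submitted` would simply never appear in the report, which is the other case the filter needs to represent.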

> | Your specific bunch of existing testsuites may not change (often or
> | at all) but most sets change frequently. There will always be
> | situations where some tests expected to be in any one filter will
> | simply be omitted from the submission by the user.
>
> Sure, but that doesn't mean that if a testsuite is run and then fails
> during the environment setup then that information should be
> discarded.
>
> If I were doing this by searching for results of a given testsuite
> what you're saying would be a bit easier to relate to but the UI I'm
> going through shows results organised by job.

Under the hood, the filter is indeed searching for results of a given
testsuite, with extra layers for particular device types and particular
bundle streams. The testsuite then matches a bundle which contains the
details of the job.

This is not obvious, so that is another part of the bug.
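The layered matching described above can be sketched roughly as follows (a simplified illustration with assumed field names, not the real LAVA query code):

```python
# Illustrative sketch of the layers a filter applies: bundle stream,
# then device type, then the testsuite whose results feed the image
# report. The matching bundle carries the details of the job that
# produced it. All names here are assumptions for illustration.

def filter_matches(bundle, bundle_stream, device_type, testsuite):
    """Return the job details and results if this bundle passes every
    layer of the filter, or None if any layer rejects it."""
    if bundle["stream"] != bundle_stream:
        return None  # wrong bundle stream
    if bundle["device_type"] != device_type:
        return None  # wrong device type
    results = bundle["test_runs"].get(testsuite)
    if results is None:
        return None  # this bundle has no run of the testsuite at all
    return {"job_id": bundle["job_id"], "results": results}

match = filter_matches(
    bundle={
        "stream": "/public/team/linaro/",
        "device_type": "vexpress-tc2",
        "job_id": 12345,
        "test_runs": {"ltp": [{"pass": True}]},
    },
    bundle_stream="/public/team/linaro/",
    device_type="vexpress-tc2",
    testsuite="ltp",
)
```

The point of the sketch is the direction of the lookup: the filter starts from the testsuite and works back to the job via the bundle, rather than starting from the job, which is why the UI organisation by job is surprising.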

--

Neil Williams
=============
http://www.linux.codehelp.co.uk/