Inconsistent result aggregation and reporting using lava-test-case

Bug #1297956 reported by Andrew McDermott
Affects: LAVA Server
Status: Fix Released
Importance: High
Assigned to: Senthil Kumaran S

Bug Description

The following repo contains an example of the issues I see:

  http://git.linaro.org/git/people/andrew.mcdermott/lava-example.git

The example runs some tests in a similar way to the Java/JTREG tests and, for
each test, invokes 'lava-test-case $TESTCASE_NAME --result pass|fail' to record
the result.

Looking at the result bundle[1], the number of tests reported doesn't match the
number of times lava-test-case is invoked.

The test-runner script runs in two ways:

    while IFS=' ' read -r tc rem; do
        echo "$FUNCNAME: TC=<$tc>, REMAINDER=<$rem>"
        lava-test-case $(basename $tc .java) --result $result
    done < $filename

Using this loop I see only 18 results reported (16 pass, 2 fail).
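The lost results can be reproduced without LAVA at all. The sketch below (my illustration, not from the report) uses a hypothetical `steal_stdin` function to stand in for any command that, like the buggy lava-test-case script, reads from stdin inside a `while read` loop: each call silently consumes the next line meant for the loop.

```shell
#!/bin/bash
# Demo: an inner command that reads stdin steals lines from the
# enclosing while-read loop, so fewer iterations run than expected.
printf 'one\ntwo\nthree\nfour\n' > /tmp/demo-input.$$

steal_stdin() {
    # Stand-in for a script that ends with a stray 'read'
    read -r _stolen
}

count=0
while IFS=' ' read -r tc rem; do
    count=$((count + 1))
    steal_stdin   # consumes the next line of the input file
done < /tmp/demo-input.$$

echo "$count"     # 2 iterations for 4 input lines
rm -f /tmp/demo-input.$$
```

Every other line goes to the inner command instead of the loop's `read`, which matches the roughly halved test count seen in the bundle.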

If I change this loop to not invoke lava-test-case immediately:

    local ofile=/tmp/jtreg-test-process-file.$$
    echo "#!/bin/bash" > $ofile
    while IFS=' ' read -r tc rem; do
        echo "$FUNCNAME: TC=<$tc>, REMAINDER=<$rem>"
        echo "lava-test-case $(basename $tc .java) --result $result" >> $ofile
    done < $filename
    [ -e $ofile ] && chmod 755 $ofile
    [ -e $ofile ] && $ofile

then I see 35 results reported (32 pass, 3 fail) - which is the correct number
based on the (synthesized) tests that are executed. summary.txt in the repo
contains 35 lines.
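The temp-file workaround above works because lava-test-case no longer runs with the loop's input on stdin. A lighter-weight guard with the same effect (my sketch, not from the report; `steal_stdin` is a hypothetical stand-in for the stdin-consuming command) is to redirect the inner command's stdin to /dev/null:

```shell
#!/bin/bash
# Demo: redirecting the inner command's stdin to /dev/null stops it
# from consuming lines meant for the loop's 'read'.
printf 'one\ntwo\nthree\nfour\n' > /tmp/demo2-input.$$

steal_stdin() { read -r _stolen; }   # stand-in for the stray 'read'

count=0
while IFS=' ' read -r tc rem; do
    count=$((count + 1))
    steal_stdin < /dev/null   # sees EOF, not the next input line
done < /tmp/demo2-input.$$

echo "$count"                 # all 4 lines processed
rm -f /tmp/demo2-input.$$
```

In the example repo this would mean appending `< /dev/null` to the lava-test-case invocation inside the first form of the loop.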

Is this expected behaviour? The JTREG tests use the first form of the loop, and this mechanism was working[2] on Jan 22nd 2014 but has been broken since then.

[1] Examples of invocations against KVM and the RTSM model using the job descriptions
checked into:

    http://git.linaro.org/git/people/andrew.mcdermott/lava-example.git

are recorded here:

  https://validation.linaro.org/dashboard/streams/anonymous/test/bundles/7d19bde7db6d96a327981ad239ad10ebcab1b99d/
  https://validation.linaro.org/dashboard/streams/anonymous/test/bundles/58cd6f915c92c1ce7df5fe3875308bf870dffd53/

[2] https://validation.linaro.org/dashboard/streams/public/team/linaro/pre-built-vexpress64/bundles/bd09e3db07e092585e50f5c6ccb052f54df49940/

summary: - Inconsistent result aggregation and reporting using lava_test_case
+ Inconsistent result aggregation and reporting using lava-test-case
description: updated
Alan Bennett (akbennett)
Changed in lava-server:
assignee: nobody → Senthil Kumaran S (stylesen)
importance: Undecided → High
status: New → Confirmed
Tyler Baker (tyler-baker) wrote :

Any progress on this, Senthil?

Changed in lava-server:
status: Confirmed → In Progress
Senthil Kumaran S (stylesen) wrote :

This is a problem with an extra 'read' command in the lava-test-case shell script.

There are two solutions:

1) Change the test-runner shell script in http://git.linaro.org/git/people/andrew.mcdermott/lava-example.git to use a for loop inside 'process_file_with_immediate_lava_test_case_invocation' instead of a while loop as follows:

<snip>
    IFS=$'\n'
    for line in $(cat $filename)
    do
        tc=$(echo "$line" | awk '{print $1}')
        rem=$(echo "$line" | cut -d ' ' -f 2-)  # everything after the first field, matching the original loop's $rem
        echo "$FUNCNAME: TC=<$tc>, REMAINDER=<$rem>"
        lava-test-case $(basename $tc .java) --result $result
    done
</snip>

NOTE: The 'read' command at the end of the lava-test-case shell script consumes the next line of pass.txt or fail.txt, which is not intended. When the while loop next reads from the file, that line has already been consumed, so fewer test cases are reported.
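A third pattern that sidesteps the problem on the test-runner side (my sketch, not from the thread; `steal_stdin` is a hypothetical stand-in for the stray 'read') is to feed the loop from a separate file descriptor with bash's `read -u`, so the inner command's stdin never points at the input file:

```shell
#!/bin/bash
# Demo: reading the loop input from fd 3 keeps it out of reach of
# any inner command that consumes stdin.
printf 'one\ntwo\nthree\nfour\n' > /tmp/demo3-input.$$

steal_stdin() { read -r _stolen; }   # stand-in for the stray 'read'

count=0
while IFS=' ' read -r -u 3 tc rem; do
    count=$((count + 1))
    steal_stdin   # reads the loop's stdin (/dev/null), not fd 3
done 3< /tmp/demo3-input.$$ < /dev/null

echo "$count"     # all 4 lines processed
rm -f /tmp/demo3-input.$$
```

This keeps the immediate lava-test-case invocation while remaining robust even if the invoked script reads stdin.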

2) Remove the extra 'read' command from lava-test-case which is implemented in patchset - https://review.linaro.org/#/c/1704/

Hopefully 2) is generic enough to avoid such problems in the future.

Changed in lava-server:
status: In Progress → Fix Committed
Changed in lava-server:
status: Fix Committed → Fix Released