Need a way to customise tests per device/channel

Bug #1387391 reported by Brendan Donegan
This bug affects 1 person
Affects: Ubuntu Test Cases
Status: New
Importance: Undecided
Assigned to: Unassigned
Milestone: none

Bug Description

For a while now there has been variation in the expected results of certain tests when run on different channels and devices. We need a way to specify this variability in the test execution infrastructure. For example, tests/click_image_tests/check_preinstalled_list/check_preinstalled_list.py fetches a hardcoded URL; we should be able to run it against a different list when we're testing on a specific channel.

I personally would need to investigate the CI infrastructure more deeply to suggest a fix for this, but it is a problem we need to solve.
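
As a rough illustration of what I mean, a test could read the identity of the running image and key its expected results off that. The sketch below is only illustrative: the [service] section and its "device"/"channel" keys in channel.ini are assumptions about that file's layout on Ubuntu Touch images, and should be verified before anyone relies on them.

    # Sketch only: read the running image's identity from system-image's
    # channel.ini. The "device" and "channel" keys under [service] are
    # assumptions about the file's layout.
    import configparser

    CHANNEL_INI = "/etc/system-image/channel.ini"

    def image_identity():
        """Return (device, channel) for the image under test."""
        cp = configparser.ConfigParser()
        cp.read(CHANNEL_INI)
        service = cp["service"]
        return service.get("device", "unknown"), service.get("channel", "unknown")

A test like check_preinstalled_list.py could then map (device, channel) to the right list instead of fetching one hardcoded URL.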

Revision history for this message
Paul Larson (pwlars) wrote :

I agree with the general idea of this, and it's something I've been thinking about a bit lately. I'd actually extend it to say that we need a more dynamic list of tests to run on a given build, based on:
1. what makes sense to run (e.g. certain apps are not in the rtm image on krillin, but are on all other krillin images and are even present in rtm on mako/flo/manta) - a rough sketch of this kind of filtering follows below.
2. restrictions on running blacklisted tests - these should be very limited and have a super high priority for someone to fix. We currently have several tests that are causing harm to other tests, and this has negatively affected our automated test results for far too long.
3. elimination, or at least mitigation, of tests found to be flaky.
I think this becomes a larger user story though, and not just a bug to fix.
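
To make point 1 concrete, here is the kind of filtering I have in mind. The manifest format and suite names below are invented for illustration; nothing like this exists in the test infrastructure today.

    # Hypothetical applicability manifest mapping suites to the
    # device/image combinations they should NOT run on.
    MANIFEST = {
        "terminal-app-autopilot": {"exclude": [("krillin", "rtm")]},
        "filemanager-app-autopilot": {"exclude": [("krillin", "rtm")]},
        "unity8-autopilot": {},  # runs everywhere
    }

    def suites_for(device, channel):
        """Return the suites that make sense on this device/channel."""
        image = "rtm" if "rtm" in channel else "devel"
        return sorted(
            name for name, rules in MANIFEST.items()
            if (device, image) not in rules.get("exclude", [])
        )

With a manifest like this, suites_for("krillin", "ubuntu-rtm/14.09") drops the terminal and filemanager suites while suites_for("mako", "ubuntu-rtm/14.09") keeps them, matching point 1 above. Blacklisted (point 2) and flaky (point 3) tests could be expressed the same way.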

However, to address the specific example you cite - click_image_tests - I don't think it fits here. That's just a broken test that someone needs to fix, or we need to abandon it if it no longer makes sense to run. It was developed by the phonedations team, I think, at a time when there was only a single image, and it references a list of click packages published to people.canonical.com. Either this needs to look at the image currently running and check against a per {device|image} list, or someone needs to say the test is bogus and we just quit running it.
I'm happy to take it out if QA no longer sees it as useful, just let me know.
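
For what it's worth, the per {device|image} lookup itself could be trivial once the lists were published per device and image. The base URL path and file naming below are invented for illustration; today the test fetches a single hardcoded URL.

    # Sketch: choose the expected preinstalled-click list for the
    # running image. BASE and the file naming are assumptions.
    BASE = "http://people.canonical.com/some/agreed/path"

    def preinstalled_list_url(device, channel):
        """Map a device/channel pair to its expected click list."""
        image = "rtm" if "rtm" in channel else "devel"
        return "%s/%s-%s.list" % (BASE, device, image)

The hard part is agreeing on who maintains those per-image lists, not the lookup.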

Revision history for this message
Brendan Donegan (brendan-donegan) wrote : Re: [Bug 1387391] Re: Need a way to customise tests per device/channel

Maybe I chose a bad example - I was also thinking about the apps that aren't in the image for RTM on Krillin, like Terminal and File Manager, as you mentioned in your first point. That is an issue we need to fix soon - or we need to remove those suites from the list.


Revision history for this message
Francis Ginther (fginther) wrote :

I think this problem gets easier by executing all tests with adt-run, as that removes the need for per-test-suite logic, but it doesn't solve the issue of which packages to run.

Perhaps we remove all knowledge of the packages and specific tests from lp:ubuntu-test-cases/touch and move it into something like a meta package. For example, a meta package for the vivid image would declare all the packages that need to be tested as its dependencies [1]. We would then create a meta package for each image, which could be maintained by a completely separate process.

[1] - I am aware that a big portion of the packages to test are click packages, which cannot be declared as dependencies :-).
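
If such a meta package existed, the runner's side could stay generic: resolve the meta package's direct dependencies and treat those as the packages to test. A sketch, where ubuntu-touch-tests-vivid is a hypothetical name standing in for whatever per-image meta package we would actually create:

    # Sketch: derive the list of packages to test from a per-image
    # meta package by parsing apt-cache's plain-text output.
    import subprocess

    def packages_to_test(meta_package):
        """Return the direct dependencies of the given meta package."""
        out = subprocess.check_output(
            ["apt-cache", "depends", meta_package], universal_newlines=True)
        return [line.split(":", 1)[1].strip()
                for line in out.splitlines()
                if line.strip().startswith("Depends:")]

    # Each returned package could then be handed to adt-run in turn.

The click packages from [1] would still need a parallel, click-specific list.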

Revision history for this message
Chris Gagnon (chris.gagnon) wrote :

What if we want to run tests for custom scopes on a device? We will never have a green dashboard if we can't have custom tests per device/channel.
