Enable automated tests for api scripts that can be promoted to tests in the Launchpad tree if needed or in the continuous integration setup

Bug #663520 reported by Diogo Matsubara
Affects:     Launchpad itself
Status:      Triaged
Importance:  Low
Assigned to: Unassigned
Milestone:   (none)

Bug Description

Context: https://wiki.canonical.com/Launchpad/PolicyandProcess/ApiSupport

Goal: Enable automated tests on staging that can be promoted to tests in the Launchpad tree if needed. Proposed graduated levels:

 1. They run the test script on their own against staging.
 2. We run the test script before release against staging. We may want to run it against launchpad.dev before landing changes we suspect might cause problems too.
 3. We add the test script into nightly buildbot runs against db-stable.
 4. We integrate the test script into Launchpad's full test run.

Basic trick: we have a script that is run after each staging database refresh *and* that can be run in a launchpad test layer. This creates QA users (and any other supporting objects) that can be expected in both staging and in LP tests.
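The key property of that bootstrap script is idempotence: it must be safe to run after every staging refresh and again inside a test layer. A minimal sketch, where the dict stands in for the real Launchpad store and the user names and roles are purely illustrative:

```python
"""Sketch of the staging-refresh bootstrap script (hypothetical API).

The registry dict stands in for the real Launchpad database; the
qa-* names and roles are illustrative, not real accounts.
"""

QA_USERS = {
    "qa-user": {"role": "user"},
    "qa-admin": {"role": "admin"},
}


def ensure_qa_users(registry):
    """Create the well-known QA users if they are missing.

    Idempotent, so running it after every staging database refresh
    and again inside a Launchpad test layer is harmless.
    """
    created = []
    for name, attrs in QA_USERS.items():
        if name not in registry:
            registry[name] = dict(attrs)
            created.append(name)
    return created


registry = {}
ensure_qa_users(registry)   # first run creates both users
ensure_qa_users(registry)   # second run is a no-op
```

Because the same function runs in both environments, tests can assume these users exist without caring whether they are on staging or in a test layer.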

Supporting trick 1: We define a new prefix that is blacklisted, like "qa-" or something. These can only be created in the script.
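The check itself could be a small guard in name validation; the "qa-" prefix and the bootstrap-script flag below are assumptions for this sketch, not Launchpad's real validation code:

```python
# Sketch of the reserved-prefix guard; "qa-" and the bootstrap flag
# are assumptions, not Launchpad's actual validation rules.
RESERVED_PREFIXES = ("qa-",)


class ReservedNameError(ValueError):
    """Raised when a normal user tries to claim a reserved name."""


def validate_name(name, from_bootstrap_script=False):
    """Reject reserved names unless the bootstrap script is creating them."""
    if name.startswith(RESERVED_PREFIXES) and not from_bootstrap_script:
        raise ReservedNameError(f"{name!r} uses a reserved prefix")
    return name
```

This keeps ordinary users from squatting on names the QA machinery depends on, while the refresh script can still create them.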

Supporting trick 2: We provide functions to generate random, non-conflicting names for projects and other LP items.
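Such a helper only needs to append enough randomness that collisions with existing staging objects are vanishingly unlikely. A sketch (the prefix and suffix length are illustrative choices):

```python
import uuid


def unique_name(base, prefix="qa-"):
    """Return a name like 'qa-myproject-3f9c2a1b'.

    The random hex suffix makes collisions with anything already on
    staging vanishingly unlikely; prefix and suffix length are
    illustrative, not a real Launchpad convention.
    """
    return f"{prefix}{base}-{uuid.uuid4().hex[:8]}"
```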

Supporting trick 3: We provide a harness that runs tests against staging. We provide another harness that runs tests against a local launchpad.dev. We provide a mechanism that runs tests within the Launchpad suite.
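The three harnesses mostly differ in which service root the test talks to, so a shared selector could sit underneath them. In this sketch the TEST_TARGET variable and the in-suite sentinel are assumptions; the staging and dev URLs mirror the usual Launchpad API hosts:

```python
import os

# Hypothetical mapping from harness target to web service root.
SERVICE_ROOTS = {
    "staging": "https://api.staging.launchpad.net/",
    "dev": "https://api.launchpad.dev/",
    "suite": None,  # None: run inside the Launchpad test layer instead
}


def service_root(target=None):
    """Pick the API root for the requested target.

    Defaults to the TEST_TARGET environment variable, falling back
    to staging, so the same test module works under all harnesses.
    """
    target = target or os.environ.get("TEST_TARGET", "staging")
    if target not in SERVICE_ROOTS:
        raise KeyError(f"unknown test target: {target!r}")
    return SERVICE_ROOTS[target]
```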

End instructions for users: "write your script using only these existing documented users (and maybe other bits). Don't rely on anything else existing. Whenever you want to create something, use one of these helpers to get a name so that you don't get conflicts."

End result: they have scripts they and we can use for QA against staging, against launchpad.dev, and incorporated in our test suite.

Some advantages:
 * should be fairly easy to implement on our side
 * should be easy for people to get started, following standard QA practices
 * quicker for people to get feedback.

Revision history for this message
Robert Collins (lifeless) wrote : Re: [Bug 663520] [NEW] Enable automated tests for api scripts that can be promoted to tests in the Launchpad tree if needed or in the continuous integration setup

I'm concerned about the blacklist change; that's disproportionate to
the benefits we'll get from this.

I worry that you'll have issues due to the reinvention of sample data
(which is what this is). It works badly in LP tests, so why will it work
any better here?

Users can create objects on staging, so why do we need precanned objects?

Gary Poster (gary)
Changed in launchpad-foundations:
status: New → Triaged
importance: Undecided → Wishlist
Revision history for this message
Gary Poster (gary) wrote :

Thanks for the comments, Robert. Diogo copied and pasted some old notes on this here at my request so that we wouldn't lose the idea, even though we have decided not to pursue it now. We'll definitely circulate this more if/when we start to think about scheduling it. That said, I'll reply now to these particular concerns.

To take a step back, why are we interested in this kind of thing?

Big goals:

 - We want to be able to have these tests create what they need to.
 - We want to be able to run the same tests on staging and dev, as close to transparently as possible.

Smaller goals:

 - We only want to work on this when we have a concrete need to--when a top-tier webservice app needs the feature for a test.
 - We want it to be easy to write these tests.
 - We want the tests to have some known approaches that can speed them up.

(We can take a further step back and talk about the rationale for these goals if you like, but I'll take it as a given for this reply that we agree these are valuable goals. The second big goal--being able to run the same tests on staging and dev--is the most arguable, AFAIK, but it's stood the test of the discussions so far.)

Those goals don't have to lead to the bug description as it stands now. As you said, we can create objects on staging. That's true.

The one thing we've identified that we can't do easily on staging is create new users, with different permissions.

To the best of our knowledge so far, a minimal, and perhaps sufficient, implementation of these goals would be to make it possible to easily create new users and log in as them on staging and dev, in tests. We might need to go further, and be able to give the users special privileges, depending on the webservice test.
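As a rough illustration of what that minimal helper might look like to a test author, here is an in-memory sketch; the naming scheme, roles, and token mechanism are all hypothetical, and the dict stands in for whatever store staging or the test layer actually uses:

```python
import itertools

# Hypothetical helper: create a fresh user with chosen privileges and
# hand back credentials a test could use to act as that user. The
# directory dict stands in for the real user store.
_counter = itertools.count(1)


def make_user(directory, role="user"):
    """Create a fresh user and return a (name, token) pair.

    Each call yields a distinct user, so a test fully controls the
    expected results of the webservice calls it makes as that user.
    """
    name = f"qa-user-{next(_counter)}"
    token = f"token-for-{name}"
    directory[name] = {"role": role, "token": token}
    return name, token
```

A test would call `make_user(directory, role="admin")` when it needs elevated privileges, rather than sharing one precanned admin account with every other script.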

A similar approach that might be easier to implement would be to have only sample-data users--a normal user, and maybe an admin user or other special users--available to log in as on staging and dev. Being able to create users seems like it would be nicer, for reasons like being able to better control the expected results of various webservice calls if you have created the user from scratch. We might also have performance issues if multiple scripts were logged in as the same person; I don't know.

Neither of these would necessarily require the blacklist change or sample data objects, per your listed concerns.

Anyway, those are our thoughts so far.

Curtis Hovey (sinzui)
Changed in launchpad:
importance: Wishlist → Low