Determining what tests to run is a manual process: testr has no callback for selecting test ids
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
Testrepository | Triaged | Wishlist | Unassigned |
Bug Description
<lifeless> james_w: no, I meant auto as in 'automatically choose the tests to run'
<james_w> right
<lifeless> james_w: so all tests if none are failing, or failures only if there are failures.
<lifeless> james_w: maybe that is what you meant by loop by itself
<james_w> I was just thinking that a loop around that could be quite useful, depending on how long your testsuite took
<lifeless> inotify++
<lifeless> + a background status window in your editor
<james_w> for my fast testsuite projects a "run everything as fast as possible, then iterate over failing", in a pane of my terminal would be good
<james_w> yeah
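
A minimal sketch of that "auto" policy (re-run only the recorded failures if there are any, otherwise the whole suite), driving testr as a subprocess. This is not an existing testr mode, and the exact behaviour of `testr failing --list` is an assumption here:

```python
# Sketch only: "auto" selection as discussed above.  Assumes
# `testr failing --list` prints the recorded failing test ids (empty
# output when nothing is failing) and that `testr run --failing`
# re-runs just those failures.
import subprocess

def run_auto():
    failing = subprocess.run(
        ["testr", "failing", "--list"],
        capture_output=True, text=True,
    )
    if failing.stdout.strip():
        # Failures are recorded: re-run just those.
        return subprocess.call(["testr", "run", "--failing"])
    # Nothing recorded as failing: run the full suite.
    return subprocess.call(["testr", "run"])

if __name__ == "__main__":
    raise SystemExit(run_auto())
```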
<james_w> plus some way to go from path -> test filter to run would be good for larger projects
<james_w> any thoughts on a way to infer test patterns based on path in testr?
<james_w> might seed my thoughts for a flight sometime soon
<lifeless> james_w: please; be nice to be friendly for C etc
<james_w> yeah, that's where I'm stuck. Would be easy to write a way to call a python function and pass the path, but that doesn't fit too well with .testr.conf
<lifeless> james_w: it is perhaps a general problem, and one that a separate tool should solve; testr could use that. That's perhaps too fine-grained: just offering food for thought
<james_w> hmm, that would make it easier
<james_w> perhaps something like urlconf from django if you know it?
<lifeless> james_w: I'm open to doing a plugin system for testr too
<lifeless> james_w: vaguely, but not intimately
<james_w> mapping of regexes -> values, pick the tightest regex that matches
<lifeless> james_w: another related problem you might enjoy
<lifeless> james_w: 'I changed file FOO, what tests do I need to run'
<james_w> plus a way to substitute groups from the match into the value would be great
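
A rough illustration (not part of testr) of that urlconf-style idea: an ordered set of regexes mapped to test-filter templates, where the tightest matching pattern wins and match groups are substituted into the value. The patterns and module names below are invented for the example:

```python
import re

# Invented example mapping: changed path -> test filter template.
PATH_TO_FILTER = [
    (r"bzrlib/tests/(?P<name>.+)\.py$", r"bzrlib.tests.\g<name>"),
    (r"bzrlib/(?P<name>[^/]+)\.py$", r"bzrlib.tests.test_\g<name>"),
    (r".*", r""),  # fallback: no filter, i.e. run everything
]

def filter_for_path(path):
    """Return a test filter for a changed path, or '' meaning all tests."""
    candidates = []
    for pattern, template in PATH_TO_FILTER:
        match = re.match(pattern, path)
        if match:
            candidates.append((len(pattern), match, template))
    if not candidates:
        return ""
    # "Tightest" approximated here as the longest pattern that matched.
    _, match, template = max(candidates, key=lambda c: c[0])
    return match.expand(template)

print(filter_for_path("bzrlib/foo.py"))  # -> bzrlib.tests.test_foo
```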
<james_w> lifeless: ah, that's exactly the problem I'm considering :-) What were you thinking?
<lifeless> james_w: many people have stabbed at this problem; there's some prior art I can dig up in python space, but essentially we are perhaps wanting that top level contract
<lifeless> james_w: a database of call graphs (for python code). something similar for C (using gprof); write an interface, implement the languages we care about; profit.
<lifeless> james_w: the complex bit is that you need test run instrumentation tunneled through to the db
<lifeless> james_w: which perhaps subunit can help with.
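
A hedged sketch of the "which tests touched which files" half of that idea, inverted to answer "I changed file FOO, what do I run?". How the per-test instrumentation is collected and tunnelled (e.g. over subunit) is left open; the JSON storage below is purely illustrative:

```python
import json

class TestTouchDB:
    """Toy store mapping test ids to the source files they touched."""

    def __init__(self, path="touched.json"):
        self.path = path
        try:
            with open(path) as f:
                self.data = json.load(f)  # {test_id: [source files]}
        except FileNotFoundError:
            self.data = {}

    def record(self, test_id, touched_files):
        self.data[test_id] = sorted(set(touched_files))

    def tests_for(self, changed_file):
        """Test ids whose last recorded run touched the given file."""
        return sorted(t for t, files in self.data.items()
                      if changed_file in files)

    def save(self):
        with open(self.path, "w") as f:
            json.dump(self.data, f, indent=2)
```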
<james_w> that's more involved than I was envisaging
<james_w> much more powerful and hands off, but...
<lifeless> james_w: precisely ;)
<lifeless> james_w: so, I've been building towards this for a while, finding good building blocks and making them nice.
<lifeless> james_w: like testr
<james_w> yeah
<lifeless> james_w: I'd be delighted to have a regexp based approximation in testr
<lifeless> james_w: I'd just like it to have a contract which is fairly close to what one might want for an automated solution
<james_w> path -> arguments for the test command was what I was thinking, would that fit?
<lifeless> james_w: command to get paths (e.g. bzr st); command to do paths->test id list as string or filename
<lifeless> james_w: perhaps
<lifeless> james_w: I suggest some experimentation
<lifeless> james_w: don't worry about code quality etc for now, just play around and see what tickles your fancy
<james_w> yeah
<james_w> I was thinking inotify, but yeah, that split would work ok
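
One way the two-command contract could compose, sketched in Python: one command yields changed paths, a user-supplied mapper turns them into test filters, and the result is handed to `testr run`. The `bzr status -S` parsing and the trivial mapper are assumptions for illustration:

```python
import subprocess

def changed_paths():
    # `bzr status -S` prints one short-format line per changed file;
    # taking the last whitespace-separated field is good enough here.
    out = subprocess.run(["bzr", "status", "-S"],
                         capture_output=True, text=True, check=True).stdout
    return [line.split()[-1] for line in out.splitlines() if line.strip()]

def run_tests_for(paths, path_to_filter):
    # testr run takes test filters as arguments; an empty set of
    # filters means "run everything".
    filters = sorted({path_to_filter(p) for p in paths} - {""})
    return subprocess.call(["testr", "run"] + filters)

def naive_mapper(path):
    # Illustrative only: bzrlib/foo.py -> bzrlib.tests.test_foo
    if path.endswith(".py"):
        return "bzrlib.tests.test_" + path.rsplit("/", 1)[-1][:-3]
    return ""

if __name__ == "__main__":
    raise SystemExit(run_tests_for(changed_paths(), naive_mapper))
```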
<lifeless> james_w: yes, we will want a way to specify the path[s] explicitly too, both for non-vcs cases and for manual control
<lifeless> inotify too
<lifeless> long as we define a sensible contract, one can write backends for inotify, bzr, etc etc as well as paths just being supplied on the command line
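
And a sketch of what an inotify-style backend for the same contract might look like, here using the third-party watchdog library (pyinotify would be another option); each changed path would be fed into whatever path-to-filter mapping is configured:

```python
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class Rerunner(FileSystemEventHandler):
    """Invoke a callback for every modified Python file."""

    def __init__(self, on_change):
        self.on_change = on_change

    def on_modified(self, event):
        if not event.is_directory and event.src_path.endswith(".py"):
            self.on_change(event.src_path)

def watch(root, on_change):
    observer = Observer()
    observer.schedule(Rerunner(on_change), root, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

# Usage sketch: watch(".", lambda path: print("changed:", path))
```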
<james_w> I wouldn't want "test id list" though, because I don't want to enumerate tests at that level, unless testr had a way to enumerate tests too
<lifeless> james_w: testr works on enumerated tests fairly intrinsically
<lifeless> it's how run --failing works: it filters the stream to get failure test ids, then uses that to construct the command line to run.
<james_w> yeah, but I just want to say "given bzrlib/foo.py run bzrlib.
<lifeless> james_w: which is a wildcard on the test id
<lifeless> james_w: inside bzrlib
<james_w> lifeless: ok, I thought you meant test id list in the -F sense: an exact list of tests to run
<lifeless> james_w: I did, and I can see the use for a wildcard here
<james_w> ok
<lifeless> james_w: I think supporting both is useful
<james_w> I'll try and take a first pass at this sometime
<james_w> I have my head full of another problem right now though
<lifeless> naturally
<lifeless> feel free to braindump wishlist bugs
<lifeless> I don't mind 'it would be nice if frogs could jump' style bugs
Changed in testrepository:
status: New → Triaged
importance: Undecided → Wishlist
summary:
- Please provide a way to specify a mapping from changed file to tests that should be run
+ Determining what tests to run is a manual process: testr has no callback for selecting test ids
I haven't read the description, but Trial has something exactly like what's described in the title.
Files have Emacs-style buffer variables like:
# -*- test-case-name: twisted.test.test_tcp -*-
Trial has a long argument '--testmodule' that will look for that buffer variable in whatever module it's given. If it can't find one, it will look in the module itself for tests.
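
For reference, the buffer variable sits at the top of a source module like this; with it in place, `trial --testmodule <module file>` runs the named test module, and without it Trial looks for tests in the module itself (the Twisted names are just the example from the comment above):

```python
# -*- test-case-name: twisted.test.test_tcp -*-
"""A module carrying the Emacs-style buffer variable Trial reads.

Running `trial --testmodule` against this file makes Trial pick up the
test-case-name above and run twisted.test.test_tcp instead.
"""
```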