Passing parameters to console runner by file

Bug #1231877 reported by Peter Brightman
This bug affects 1 person
Affects: nunit-console
Importance: Undecided
Assigned to: Unassigned

Bug Description

The console runner accepts many individual parameters. Model an appropriate XML file that can define all of the parameters that are usually passed one by one on the command line. Give the parameter set a cardinality of 1..* so that the console runner can be run multiple times with varying parameters, but within a single process. Allow this file to be passed as the one and only argument, because it encapsulates all the other parameters. Example:

<?xml version="1.0" encoding="utf-8"?>
<NUnit-Run>
<Run-Parameterset name="smoke tests" fixture="this.is.the.smoketest.fixture" dll="mysmoketest.dll">
   <Filter type="Category" mode="include">BaseLine</Filter>
   <Filter type="Category" mode="exclude">Database,Online,Proxy</Filter>
   <Output type="StdOut">{{name}}.txt</Output>
   <Output type="StdErr">{{name}}-errors.txt</Output>
   <Output type="Report">{{name}}-results.xml;test-result-to-html.xsl</Output>
</Run-Parameterset>
<Run-Parameterset name="online tests" fixture="this.is.the.onlinetest.fixture" dll="myonlinetest.dll">
   <Filter type="Category" mode="include">BaseLine,Online</Filter>
   <Filter type="Category" mode="exclude">Database</Filter>
   <Output type="StdOut">{{name}}.txt</Output>
   <Output type="StdErr">{{name}}-errors.txt</Output>
   <Output type="Report">{{name}}-results.xml;test-result-to-html.xsl</Output>
</Run-Parameterset>
</NUnit-Run>

{{name}} would be a placeholder for the name given in the Run-Parameterset's "name" attribute.
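To illustrate, here is a minimal Python sketch of how a runner front end might consume such a file. This is only a sketch of the proposed (hypothetical) format above: the element and attribute names are taken from the example, the function name is made up, and the actual test execution is not shown.

```python
import xml.etree.ElementTree as ET

def load_parameter_sets(xml_text):
    """Parse the proposed <NUnit-Run> format into plain dicts."""
    root = ET.fromstring(xml_text)
    runs = []
    for ps in root.findall("Run-Parameterset"):
        name = ps.get("name")
        runs.append({
            "name": name,
            "fixture": ps.get("fixture"),
            "dll": ps.get("dll"),
            # Category filters: (type, mode, list of category names).
            "filters": [(f.get("type"), f.get("mode"), f.text.split(","))
                        for f in ps.findall("Filter")],
            # Expand the {{name}} placeholder in every Output element.
            "outputs": [(o.get("type"), o.text.replace("{{name}}", name))
                        for o in ps.findall("Output")],
        })
    return runs
```

The runner would then iterate over the returned list, executing one run per parameter set within the same process.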

Now name the file, for example, my-nunit-runparams.xml and start the runner like this:

nunit-console-runner.exe my-nunit-runparams.xml

As mentioned in my other bug report, embed all the run parameters into the result file of the test.

Revision history for this message
Charlie Poole (charlie.poole) wrote : Re: [Bug 1231877] [NEW] Passing parameters to console runner by file

How would this feature relate to and potentially interact with NUnit projects?

Would some of this info be better added to the NUnit project XML format?

Charlie


Revision history for this message
Peter Brightman (4-peter) wrote :

Yes, this is indeed meant to be an NUnit project XML.
Yes, this info would be better added to an NUnit project XML.

Revision history for this message
Charlie Poole (charlie.poole) wrote : Re: [Bug 1231877] Re: Passing parameters to console runner by file

Any suggestions for modifying the existing format to meet your requirements?


Revision history for this message
Peter Brightman (4-peter) wrote :

Due to my lack of experience with the existing NUnit project format, I first need to learn more about it. Can you please give me a hint and point me to some description, documentation, or example? Thank you.

Revision history for this message
Charlie Poole (charlie.poole) wrote :

Hi Peter,

Sorry, I had been assuming you knew about it.

There's no schema, so I'm attaching an example. You can open it in any editor.

In any case, I'll be looking at your request in the light of the existing format. Where a parameter is always or usually needed in order to run a specific assembly, then I think that parameter belongs in the project file. Where it's something that might vary from run to run, it should probably live elsewhere.

Charlie


Revision history for this message
Peter Brightman (4-peter) wrote :

Hi Charlie,
thanks for the example project file, I'll take a look at it. Yes, a project file should fulfill the following:

Provide basic configuration settings.
Provide the test DLL names.
Provide the path/name of output files, e.g. report files.
Provide include/exclude of categories.
Provide filters/selectors to choose certain test methods by namespace.

Possibly things are different when using the GUI version and the console version. Usually the console runner is scheduled to run at night, because we have so many tests that the run takes hours to complete. Currently we start the console runner from a batch file: we loop through the lines of a configuration file, which is a kind of CSV file. Each line holds the following info:

NameOfTestcase, DLL-Name, Parameters (/Fixture= | /Run= | /Include= | /Exclude=)

The batch file loops through all the lines and calls the console runner with the DLL name and the parameters; the result-report filename is taken from the NameOfTestcase.
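The nightly loop described above can be sketched in Python as a stand-in for the batch file. The CSV layout follows the description; the runner name and the /xml result-file option are assumptions for illustration.

```python
import csv
import io
import subprocess

def build_commands(config_file, runner="nunit-console-runner.exe"):
    """Turn each CSV line (TestcaseName, DLL, parameters) into a command line.

    The result-report filename is derived from the testcase name, as in
    the batch file described above.
    """
    commands = []
    for row in csv.reader(config_file):
        testcase, dll, params = (field.strip() for field in row)
        commands.append(
            [runner, dll, f"/xml={testcase}-results.xml"] + params.split())
    return commands

def run_nightly(config_path):
    # One fresh console-runner process per test case, as the batch file does.
    with open(config_path, newline="") as f:
        for cmd in build_commands(f):
            subprocess.run(cmd, check=False)
```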

Doing it like this means we start a new console-runner process for each test case. This works, of course, and each new process starts with clean memory. But I think: why not put this "loop" inside the console runner, so we would start the console runner just once, and it would behave as if it ran multiple times, each time with a different test DLL and/or different parameters and result filenames? That is why I am thinking about a way to provide 1..* run parameter sets. This is more a scenario for the console runner; interactively, using the GUI runner, one would load project file A, press run, wait for the tests to finish, load the next project file B, press run, and so on.
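The proposed in-process loop might look like this sketch, where `run_tests` is a hypothetical stand-in for the runner's existing single-run entry point:

```python
def run_all(parameter_sets, run_tests):
    """Run the runner's core once per parameter set, all in one process.

    `parameter_sets` is a list of dicts (dll, params, result_file);
    `run_tests` stands in for the runner's single-run entry point.
    """
    results = []
    for ps in parameter_sets:
        # Same behaviour as launching the runner repeatedly, but without
        # paying process start-up cost for every test case.
        results.append(run_tests(ps["dll"], ps["params"], ps["result_file"]))
    return results
```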

OK, so let me have a look at the existing project file; hopefully I will get an idea of how to realize the feature wish. Thanks for your help.

Revision history for this message
Peter Brightman (4-peter) wrote :

OK, so I've tried to understand the NUnit project file; it looks like it is most useful for the GUI runner, isn't it? What I didn't understand is why there is a reference to a config file.

I tried to create a quick-and-dirty example; please take a look at the attached project file. It contains the following differences:

The test assemblies are listed only once; there is no more redundant information per debug/release configuration.
A set of tests can be configured independently of the configurations.
More than one set of tests can be configured independently of the configurations, classified by a test-set name.
A Config refers to a TestSet by referring to its name.
More than one config (debug/release) can refer to the same TestSet.
A TestSet contains a list of assemblies, qualified by a name (used as a placeholder, with curly braces, for part of the result filename), by a result file, by an assembly without any path, and by run parameters (/Exclude, /Include, /Run, fixture, etc.).
The set of test DLLs contains only assembly names, no paths at all. A path attribute could be allowed to override a path, but the path is really set in the Config.
An autorun property specifies whether the Config runs automatically when the project file is passed as an argument to the console-runner.exe.
Each test case, e.g. test assembly, will produce its own XML result file.

About the GUI runner: I have seen that, using the checkboxes within the treeview, each test can be checked or unchecked. But I have not seen that this is persisted within the NUnit project file. Am I missing something here?

Revision history for this message
Charlie Poole (charlie.poole) wrote :

The project file is intended to be something permanent, specifying one or more assemblies as well as settings that are *necessary* for running the tests in those assemblies. To understand what I mean by necessary, consider an assembly with dependencies that are in a subdirectory. If we don't set up a privatebinpath that includes that subdirectory, the assemblies won't be found, so it's not possible to run the tests without that setting.

On the other hand, selecting particular test cases is not a permanent thing related to the test assemblies but is more transient. That is, today I want to run a certain test fixture, but tomorrow it might be different.

More specific comments inline...

> Ok, so i've tried to understand the NUnit project file, it looks like it
> is most useful for the GUI runner, isn't it? What i didn't understand is
> that there is a reference to a configfile, huh!!??

It's possible to have multiple projects for the same assembly, using different configs.

> The test-assemblies can be listed only once, there is no more redundant information on a set for debug/release configuration

This could be seen as an improvement on the existing format, but it has a limitation: each config must always contain the same assemblies. The ability to list different assemblies in each config has always been a feature of the format. Much of what you suggest below only works if the configs all contain the same assemblies.

> A set of tests can be configured independant of the configurations.

Of course, this only works if the same assemblies appear in each config.

> More than one set of tests can be configured independant of the configurations, classified by a testset name

Or one could simply add more configurations, no?

> A autorun-property specifies if the Config runs automatically when the ProjectFile is passed as argument to the console-runner.exe

This is the existing defaultConfig setting. It's not shown in the example file because it uses a setting that exists for NUnit tests only. Since NUnit is self-testing, the autoConfig setting is used to run Debug when we are running under a Debug build of NUnit, and Release otherwise.

> Each testcase e. g. testassembly will produce it's own xml resultfile

Actually, we go to a lot of trouble to produce a single XML result file for each project. In fact, that's one of the main reasons for having the project file. What's the advantage of separate result files that you would then have to re-combine for analysis?

> About the GUI-runner: I have seen that using the checkboxes within the
> treeview, each test can be checked or unchecked. But i have not seen
> that this is persistet within the NUnit ProjectFile. Do i miss something
> here?

That's because this is considered something transient, not permanent. The info is, however, persisted in the VisualState.xml file.

***

I'll summarize using the points you listed at the start of your comment.

> Provide basic configuration stuff.
> Provide the test-dll names

This is what the current format does

> Provide path/name of output files e. g. report files
> Provide include/exclude of categories
> Provide filters/selectors to choose certain testmethods by n...


tags: added: feature framework
tags: added: console
removed: framework
affects: nunit-3.0 → nunit-console
tags: removed: console
Revision history for this message
Peter Brightman (4-peter) wrote :

Hi Charlie,

thanks for your statements and answers.

To 1: Well, we run tests as a kind of regression test; if we changed test parameters each day, how could we ever generate a history overview of our tests, e.g. certain tests along a timeline?

To 2: We produce one result file per test set in order to create an overview of the tests that ran. In the overview we see the number of failed/passed/inconclusive/ignored tests, with a link to the detailed result file; this is the NUnit result file that we transform to HTML using an XSL file.

Why would you change your /exclude and /include parameters each day? Does the module that you test change from day to day so that this would be necessary?

Revision history for this message
Charlie Poole (charlie.poole) wrote :

On your first answer, I see you are concerned with a particular usage of
nunit: the nightly run. I guess we really have to deal with two different
usage patterns:

1) Developers running tests as they work. In that case, various parameters
change depending on what the dev is working on. That's an example of how
the /exclude or /include parameters might change, not just every day, but
many times during the course of the day. Of course, some parameters can
never change, because the tests actually depend on their being set
correctly. Appbase and private binpath are in that category, for example.
It's for those parameters that I originally designed the project file
format.

2) Tests being run for the nightly build. In that case, a (relatively)
permanent set of options is used and must be saved somewhere, either in the
script or in some file format we are now creating.

I think maybe you are considering that nunit-console is _only_ used in the
second use case. However, I know of many developers who run the tests in
the console all day long, using various parameters.

That said, I think we could allow some of the options you want to set to be
included in the NUnit project file as an option. They would still, of
course, be overridden if you put an option on the command line. The
alternative approach is to create some other format to include those options,
but I don't really want to proliferate XML formats. I'll look at this
further and try to come up with an approach that adds the options you want
without changing the format for those who don't need them.
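The precedence described above (project-file options act as defaults, command-line options override them) could be sketched as follows. The function name and option names are hypothetical; this is not NUnit's actual implementation.

```python
def effective_options(project_options, command_line_options):
    """Merge option dicts: command-line values override project-file defaults.

    Both arguments map option names (e.g. "include", "exclude") to values;
    any option given on the command line wins over the project file.
    """
    merged = dict(project_options)          # start from project-file defaults
    merged.update(command_line_options)     # command line overrides
    return merged
```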

Regarding the second answer, that makes sense to me, especially in the
current environment of NUnit V2. In NUnit 3, when the runner will produce
HTML reports for you, it may be less of a concern, but allowing the
possibility of separate result files makes sense because it gives you
flexibility to work as you are accustomed.

Charlie


Revision history for this message
Peter Brightman (4-peter) wrote :

Thanks for your comments and suggestions. Yes, I see our developers also running tests while developing, using the NUnit GUI runner. Our nightly builds of course use the NUnit console runner. Sure, we should not proliferate XML formats; I agree absolutely. Where useful, I like your idea of putting certain settings into the persisted project file while still giving the opportunity to override certain options via the command line. For parameters that don't change often, users will be happy to put them into the project file and not need to specify them for each run as a command-line parameter. This way persisted parameters act as defaults but can still be changed/overridden when running a test.

By the way, do you plan to realize some kind of "test results along a timeline" functionality? I mean some kind of overview for regression tests (basically smoke tests that always test the same thing) whereby the results can be seen across a period of time. This could provide helpful information about tests that are always green and, on the other hand, tests that flip frequently between red and green. This could possibly give feedback to software architects, e.g. to locate modules that seem NOT to fulfill the OCP (Open Closed Principle).
