Madspin fails to decay events for large couplings

Bug #1805107 reported by Pablo Martin on 2018-11-26
This bug affects 1 person
Affects: MadGraph5_aMC@NLO
Importance: Undecided
Assigned to: Unassigned

Bug Description

Hi Olivier,

I am running an LO event generation with decays through MadSpin. I implemented my model in FeynRules; it is basically the SM plus a new scalar doublet with slepton quantum numbers and a new fermion that is a singlet under the SM. You helped me with a tricky bug in "#1801760 Madspin fails with majorana particle" a couple of weeks ago, so you might remember the model.

I found another issue. When MadSpin decays the events, I am getting the following warning for some of them:

INFO: All production process does not have the same total Branching Ratio.
                    Therefore the total number of events after decay will be lower than the original file.
                    [max_br = 0.99515608014, min_br = 0.992471423458]

which is not too bad, but if the coupling for the relevant vertex (phil-psi-lepton) is too large (~3), MadSpin gets stuck decaying the events:

...
...
INFO: Event 291/300 : 2.8s
INFO: Event 296/300 : 2.8s
INFO: All production process does not have the same total Branching Ratio.
                    Therefore the total number of events after decay will be lower than the original file.
                    [max_br = 0.99515608014, min_br = 0.992471423458]
INFO:
INFO: Decaying the events...

Any clues about this one? Thanks!

Cheers,

Pablo

Hi,

That message is actually informational and not a warning (it is not even tagged as a warning). It just means that MadSpin is going to drop some events in order to keep the correct relative abundance of each subprocess in your sample.
Since the branching ratios are quite close in your case, the number of events dropped should be quite small.
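The event dropping can be pictured with a small accept/reject sketch. This is only an illustration of the principle, not MadSpin's actual implementation; the `thin_events` function and the event layout are made up, with the branching-ratio values taken from the log above:

```python
import random

# Illustration: when subprocesses have different total branching ratios,
# keeping each event with probability br / max_br preserves the correct
# relative abundance of the subprocesses in the decayed sample.
def thin_events(events, max_br):
    return [ev for ev in events if random.random() < ev["br"] / max_br]

# Two hypothetical subprocesses with the BR values quoted in the log.
events = [{"id": i, "br": 0.99515608014 if i % 2 == 0 else 0.992471423458}
          for i in range(300)]

kept = thin_events(events, max_br=0.99515608014)
# With BRs this close, only a handful of the 300 events are dropped.
```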

> Madspin gets stuck decaying the events:

This typically means that the un-weighting efficiency of the process is quite bad. The typical reason is that the narrow-width approximation (NWA) does not hold for that benchmark point, leading to theoretical trouble and issues in the MadSpin method.

If you are running at LO, you have the possibility of not using MadSpin and using the decay-chain syntax instead (like p p > t t~, (t > w+ b, w+ > l+ vl), (t~ > w- b~, w- > j j)).
You can generate your process like this to test whether it runs smoothly and whether your NWA seems to work or not (that syntax is also based on the NWA, but the cross-section is not computed assuming the NWA).
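As a sketch, using the stock SM top-pair example from the reply (for the model in this thread you would substitute the phil/psi particles and their decays; the output directory name is made up), a decay-chain run in the MG5_aMC shell would look like:

```
generate p p > t t~, (t > w+ b, w+ > l+ vl), (t~ > w- b~, w- > j j)
output ttbar_decay_chain
launch
```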

Cheers,

Olivier


Hi Olivier,

Thanks for your reply.

> This typically means that the un-weighting efficiency of the process is
> quite bad. The typical reason is that the narrow-width approximation
> does not hold for that benchmark point, leading to theoretical trouble
> and issues in the MadSpin method.

I am not getting any warning about that from MadSpin, though.

> If you are running at LO, you have the possibility of not using MadSpin
> and using the decay-chain syntax instead (like p p > t t~, (t > w+ b,
> w+ > l+ vl), (t~ > w- b~, w- > j j)).
> You can generate your process like this to test whether it runs smoothly
> and whether your NWA seems to work or not.

I tried that and everything worked as expected.

Cheers,

Pablo



> I am not getting any warning about that from MadSpin, though.

We are not able to check the NWA for you. I'm not even sure that we trigger a warning in the obvious case (like when the width is larger than 0.1 times the mass).

But I will add this topic to my to-do list (as low priority) and will investigate what is happening here.

> I tried that and everything worked as expected.

OK, sounds good. So at least you can move forward with this method (which is more robust and likely faster than MadSpin).

Cheers,

Olivier


Pablo Martin (pmartin7) wrote:

Hi Olivier,

> We are not able to check the NWA for you. I'm not even sure that we
> trigger a warning in the obvious case (like when the width is larger
> than 0.1 times the mass).

My bad then! I am getting a width of ~80 GeV for a mass of ~330 GeV. I
thought that was quite big, but since MadSpin wasn't complaining I thought
it could be OK.

> OK, sounds good. So at least you can move forward with this method
> (which is more robust and likely faster than MadSpin).

Given such a big width, I guess it is not safe to compute the cross
section with the MadSpin syntax and then run the whole process using the
decay-chain syntax, is it? I am asking because the former is about 35%
larger than the latter for the benchmark I tried.

Cheers,

Pablo


Hi,

In the narrow-width approximation, you expect an error on the cross-section proportional to Gamma/M, so in your case ~25%. So indeed, finding a 35% difference when doing a proper integration sounds reasonable. Note that such a computation still neglects the interference terms with all the other diagrams leading to the same final state (and the associated amplitude squared).
With such a large width, you also have to test whether neglecting such diagrams makes sense or not.
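A quick back-of-the-envelope check of the numbers quoted in this thread (width and mass from Pablo's benchmark):

```python
# Estimate the expected NWA error as Gamma/M for the benchmark in this thread.
gamma = 80.0   # width in GeV
mass = 330.0   # mass in GeV
relative_error = gamma / mass

# Prints ~24%, of the same order as the ~35% shift seen in the full integration.
print(f"expected NWA error ~ {relative_error:.0%}")
```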

Cheers,

Olivier

Hi Olivier,

Sorry for the delay. Then the safest option is to do everything with the
decay-chain syntax, right? That solves my question, thanks! And just out
of curiosity, how is the cross section computed with this method?

Cheers,

Pablo


Hi,

The decay chain still assumes that all other contributions (in particular the interference terms) are negligible compared to the selected contribution. This is far from automatic with a large width.
And this is something that you should check at least once.

> just by curiosity, how is the cross section computed by using this
> method?

We integrate over the full matrix element with a normal propagator for your particle.
(We just have custom bounds to select the "on-shell" contribution, defined up to 15 times the width.)
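The on-shell selection can be sketched as a simple cut. This is an illustration only, not MadGraph code: `is_onshell` is a made-up helper, the factor 15 comes from the reply above, and the mass/width are the benchmark values from this thread:

```python
# Keep configurations whose invariant mass lies within 15 widths of the pole.
MASS = 330.0    # GeV, benchmark mass from this thread
WIDTH = 80.0    # GeV, benchmark width from this thread
N_WIDTHS = 15.0 # window size quoted in the reply

def is_onshell(m_inv):
    return abs(m_inv - MASS) < N_WIDTHS * WIDTH

# With Gamma = 80 GeV the window spans +-1200 GeV around the pole, which
# illustrates why the on-shell/off-shell split is delicate for large widths.
```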

Cheers,

Olivier

Pablo Martin (pmartin7) wrote:

> The decay chain still assumes that all other contributions (in particular
> the interference terms) are negligible compared to the selected
> contribution. This is far from automatic with a large width.
> And this is something that you should check at least once.

I will! Thanks Olivier!


Pablo Martin (pmartin7) wrote:

By the way, what syntax should I use to check the interference diagrams? I
am kind of lost with this. Thanks very much!

Pablo


In this case, I would do the following:
1) run everything at a fixed scale
2) generate three samples:
   - the one with the decay chain
   - the complementary one (the $ syntax)
   - the full one
3) compare the sum of the first two with the third.

This comparison is typically done in most of my tutorials.
https://cp3.irmp.ucl.ac.be/projects/madgraph/attachment/wiki/MCNET2017/17_06_02_tuto_mcnet.pdf
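As an illustration of the three samples with the SM top-pair example used earlier in the thread (the resonance in this model would replace t t~; the output directory names are made up):

```
# 1) resonant contribution: require on-shell t t~ intermediates
generate p p > t t~ > w+ w- b b~
output sample_resonant

# 2) complementary contribution: the $ syntax vetoes on-shell t/t~
generate p p > w+ w- b b~ $ t t~
output sample_nonresonant

# 3) full process, all diagrams and interferences
generate p p > w+ w- b b~
output sample_full
```

Run all three at the same fixed scale and compare sigma(1) + sigma(2) with sigma(3); a large discrepancy signals that the neglected interference matters.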

Cheers,

Olivier

Changed in mg5amcnlo:
status: New → Invalid
Pablo Martin (pmartin7) wrote:

Thanks Olivier, that solved my question!

Cheers,

Pablo

