Multiple Interactions (and Beam Remnants) - New Model

Each multiple interaction is associated with its own set of initial- and final-state radiation. In the old model such radiation was only considered for the first, i.e. hardest, interaction. The technical reason was the inability to handle junction string topologies, and therefore the need to simplify the description. In practice, it could be argued that the subsequent interactions would tend to be soft, near the lower p_T cutoff scale, and therefore not be associated with additional hard radiation. Nevertheless it was a limitation.

The new junction and beam-remnant description allows radiation to be associated with each interaction. In the intermediate model, this is done in a disjoint manner: for each interaction, all initial- and final-state radiation activity associated with it is considered in full before the next interaction is selected, and the showers are still the old, virtuality-ordered ones.

In the new model, the new, transverse-momentum-ordered showers are introduced. Thus p_T becomes the common evolution scale for multiple interactions (MI), initial-state radiation (ISR) and final-state radiation (FSR), although the technical definition of transverse momentum is slightly different in the three cases.

One can argue that, to a good approximation, the addition of FSR can be deferred until after ISR and MI have been considered in full. Specifically, FSR does not modify the total amount of energy carried by perturbatively defined partons, it only redistributes that energy among more partons. By contrast, both the addition of a further ISR branching and the addition of a further interaction imply more perturbative energy, taken from the limited beam-remnant reservoir. These two mechanisms are therefore in direct competition with each other.

We have already advocated in favour of ordering multiple interactions in a sequence of falling p_T values. This does not correspond to an ordering in physical time, but rather to the requirement that the hardest interaction should be the one for which standard parton densities should apply. Any corrections, e.g. from the kind of flavour correlations already discussed, would have to be introduced by approximate prescriptions, that would become increasingly uncertain as further interactions are considered.

We now advocate that, by the same reasoning, ISR emissions
should also be interleaved with the MI chain, in one common sequence
of decreasing p_T values. That is, a hard second interaction
should be considered before a soft ISR branching associated with
the hardest interaction. This is made possible by the adoption of
p_T as common evolution variable. Thus, the standard parton
densities are only used to describe the hardest interaction
and the ISR branchings that occur *above* the p_T scales
of any secondary interactions.

In passing, note that the old showers require two matching parameters. These values, typically in the range 1 to 4, separate for space-like and time-like showers, are there to compensate on average for the extra z-dependent factors relating the shower virtuality Q^2 to the p_T^2 of the hard interaction, so that the showers can start from a scale comparable with that of the interaction. In our new model, with p_T as the common evolution variable, this matching is automatic, i.e. the matching factor is unity.

The evolution downwards in p_T can now be viewed as a generalization
of the backwards evolution philosophy [Sjö85]: given a configuration
at some resolution scale, what can that configuration have come from
at a lower scale? Technically, starting from a hard interaction, a common
sequence of subsequent evolution steps -- interactions and branchings
mixed -- can therefore be found. Assuming that the latest step occurred
at some scale p_Ti, this sets the maximum p_Tmax = p_Ti
for the continued evolution. What can happen
next is then either a new interaction or a new ISR branching on one of
the two incoming sides in one of the existing interactions. The
probability distribution for p_T = p_T(i+1)
is given by

  dP/dp_T = ( dP_MI/dp_T + sum dP_ISR/dp_T )
            * exp( - int_{p_T}^{p_Ti} ( dP_MI/dp_T' + sum dP_ISR/dp_T' ) dp_T' ) ,

where the sums run over the incoming parton legs of the interactions already generated.
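The interleaved competition can be illustrated with the standard downward veto/competition trick: each component (the MI generator, plus one ISR generator per incoming leg of each existing interaction) proposes a trial scale below the current one, and the highest trial becomes the next step. The sketch below is ours, with toy constant rates in ln p_T rather than PYTHIA's actual evolution kernels:

```python
import random

def trial_pT(pT_now, c):
    """Propose a trial scale below pT_now for a toy rate dP = c dpT/pT.

    Solving exp(-c ln(pT_now/pT)) = R for a uniform R gives
    pT = pT_now * R**(1/c), the usual inverse-transform step of
    downward Sudakov-style evolution."""
    return pT_now * random.random() ** (1.0 / c)

def evolve(pT_start, pT_min, c_MI=0.6, c_ISR=0.3):
    """Generate one interleaved MI/ISR sequence of falling p_T values."""
    steps = []
    isr_legs = ['hard+', 'hard-']   # two incoming sides of the hard interaction
    pT = pT_start
    while True:
        # every component proposes; the hardest proposal wins
        proposals = [(trial_pT(pT, c_MI), 'MI')]
        proposals += [(trial_pT(pT, c_ISR), 'ISR') for _ in isr_legs]
        pT, kind = max(proposals)
        if pT < pT_min:
            break                    # evolution ended below the cutoff
        steps.append((pT, kind))
        if kind == 'MI':             # a new interaction adds two more ISR legs
            isr_legs += ['new+', 'new-']
    return steps

random.seed(1)
sequence = evolve(pT_start=100.0, pT_min=2.0)
# by construction the common sequence is ordered in decreasing p_T
assert all(a[0] > b[0] for a, b in zip(sequence, sequence[1:]))
```

Note how a new interaction enlarges the pool of competing ISR generators, so that later evolution automatically includes branchings on the secondary systems.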

If there were no flavour and momentum constraints linking the different
subsystems, it
is easy to see that such an interleaved evolution actually is equivalent
to considering the ISR of each interaction in full before moving on to
the next interaction. Competition is introduced via the correlated parton
densities already discussed. Thus x distributions are squeezed to be
nonvanishing in a range 0 <= x <= X, where X represents the fraction
of the original beam-remnant momentum still available for an interaction
or branching. When a trial n'th interaction is considered,
X = 1 - sum_{i=1}^{n-1} x_i, where the sum runs over all the already
existing interactions. The x_i are the respective momentum fractions
of the ISR shower initiators at the current resolution scale, i.e.,
an x_i is increased each time an ISR branching is backwards-constructed
on an incoming parton leg. Similarly, the flavour content is modified to
take into account the partons already extracted by the previous
interactions, including the effects of ISR branchings. When instead a
trial shower branching is considered, the sum excludes the interaction
under consideration, since this energy *is* at the disposal of the
interaction, and similarly for the flavour content.
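The momentum bookkeeping just described can be sketched as follows (the function and variable names are ours, purely for illustration, not PYTHIA's):

```python
def remaining_X(x_fractions, exclude=None):
    """Fraction of the beam-remnant momentum still available.

    For a trial new interaction, sum over all existing interactions
    (exclude=None). For a trial ISR branching on interaction k, that
    interaction's own x is excluded from the sum, since that energy
    is already at the interaction's disposal (exclude=k)."""
    return 1.0 - sum(x for i, x in enumerate(x_fractions) if i != exclude)

# initiator momentum fractions of three existing interactions,
# each grown by any backwards-constructed ISR branchings so far
x = [0.30, 0.10, 0.05]
X_new = remaining_X(x)                # phase space for a trial 4th interaction
X_isr = remaining_X(x, exclude=0)     # for a trial branching on interaction 0
assert abs(X_new - 0.55) < 1e-12
assert abs(X_isr - 0.85) < 1e-12
```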

The choice of p_Tmax scale for this combined evolution is process-dependent, as before. For minimum-bias QCD events the full phase space is allowed, while the p_T scale of a QCD hard process sets the maximum for the continued evolution, in order not to double-count. When the hard process represents a possibility not present in the MI/ISR machinery -- production of a Z^0, top, or supersymmetry, say -- again the full (remaining) phase space is available, though for processes that contain explicit jets in the matrix element, such as Z^0+jet, the ISR evolution is restricted to be below the jet cutoff. Note that, when interfacing events from an external matrix element generator, special care has to be taken to ensure that these scales are set consistently.
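These process-dependent starting-scale rules can be paraphrased in a small dispatch function (our own toy function, not PYTHIA's actual interface):

```python
def evolution_pT_max(process, pT_hard, pT_beam):
    """Toy rule for the starting scale of the interleaved MI/ISR evolution.

    pT_beam -- the kinematic limit of the collision;
    pT_hard -- the scale of the hard process, where defined."""
    if process == 'minbias_qcd':
        return pT_beam        # full phase space allowed
    if process == 'hard_qcd':
        return pT_hard        # avoid double-counting against MI/ISR
    if process == 'new_particle':
        return pT_beam        # e.g. Z0, top, SUSY: full remaining phase space
    raise ValueError('unknown process class: ' + process)

assert evolution_pT_max('hard_qcd', 20.0, 100.0) == 20.0
assert evolution_pT_max('minbias_qcd', 0.0, 100.0) == 100.0
```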

There is also the matter of a lower cutoff scale. Customarily such scales are chosen separately for ISR and MI, and typically lower for the former than the latter. Both cutoffs are related to the resolution of the incoming hadronic wave function, however, and in the current formalism ISR and MI are interleaved, so it makes sense to use the same regularization procedure. Therefore the branching probability is also smoothly turned off at a scale p_T0, like for MI, but the ISR suppression factor is the square root of the MI one, since only one Feynman vertex is involved in a shower branching relative to the two of a hard process. Thus the alpha_s(p_T^2)/p_T^2 divergence in a branching is tamed to alpha_s(p_T0^2 + p_T^2)/(p_T0^2 + p_T^2). The scale of parton densities in ISR and MI alike is maintained at p_T^2, however, the argument being that the actual evolution of the partonic content is given by standard DGLAP evolution, and that it is only when this content is to be resolved that a dampening is to be imposed. This also has the boon that flavour thresholds appear where they are expected.
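The relative strength of the two dampening factors can be made concrete in a short sketch (ours; it shows only the multiplicative p_T-dependent suppression, leaving aside the shifted alpha_s argument):

```python
def damp_MI(pT2, pT02):
    """Smooth MI suppression: one factor pT^2/(pT0^2 + pT^2) per Feynman
    vertex, hence squared for a 2 -> 2 hard process."""
    return (pT2 / (pT02 + pT2)) ** 2

def damp_ISR(pT2, pT02):
    """A shower branching has a single vertex, so its suppression is the
    square root of the MI one."""
    return pT2 / (pT02 + pT2)

pT2, pT02 = 4.0, 4.0              # at pT = pT0 both factors bite
assert damp_ISR(pT2, pT02) == 0.5
assert damp_MI(pT2, pT02) == 0.25
assert damp_MI(pT2, pT02) == damp_ISR(pT2, pT02) ** 2
```

Far above the cutoff both factors approach unity, so the perturbative behaviour is unaffected there.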

The cutoff for FSR is still kept separate and lower, since that scale deals with the matching between perturbative physics and the nonperturbative hadronization at long time scales, and so has a somewhat different function.

The description of beam remnants offers the same scenarios as
outlined for the intermediate model above, and as further described in
[Sjö04a]. A more recent option, not found there, is the following
`colour annealing' scenario [San05],
available only for the new model, through the options
`MSTP(95) = 2 - 5`.
It has been constructed to produce similar effects as `Tune A' and
similar tunes of the old model, but is also in some sense intended to
be representative of the `most extreme' case. It starts from the
assumption that, at hadronization time, no information from the
perturbative colour history of the event is relevant. Instead, what
determines how hadronizing strings form between the partons is a
minimization of the total potential energy stored in these strings.
That is, the partons, regardless of their formation history, will tend
to be colour connected to the partons closest to them in momentum space,
hence minimizing the total `string length', as measured by the so-called
lambda measure [And83a,Sjö03].
Technically, the model implementation starts by erasing
the colour tags of all final-state coloured partons. It then begins an
iterative procedure (which unfortunately can be quite time-consuming):

1. Loop over all final-state coloured partons. For each such parton,

   (*i*) compute the lambda measure for each possible string connection from that parton to other `colour-compatible' final-state partons which do not already have string pieces connected to them (for `MSTP(95) = 2` and `MSTP(95) = 3` with the extra condition that closed gluon loops are suppressed, or, with options `MSTP(95) = 6` and `MSTP(95) = 7` available from version 6.402, with the condition that connections must be initiated from free triplets), and

   (*ii*) store the connection with the smallest lambda measure for later comparison.

2. Compare all the possible `minimal string pieces' found, one for each parton. Select the largest of these to be carried out physically. (That parton is in some sense the one that is currently furthest away from all other partons.)

3. If any `dangling colour tags' are left, repeat from 1.

4. At the end of the iteration, it is possible that the last parton is a gluon, and that all other partons already form a complete colour singlet system. In this case, the gluon is simply attached between the two partons where its presence will increase the total lambda measure the least.
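A heavily simplified greedy sketch of this annealing idea is given below. It is our own toy version, not the PYTHIA implementation: all partons are treated as mutually connectable (no triplet/octet colour rules, no chain building), and the pair `string length' is approximated by lambda_ij = ln(1 + m_ij^2/m0^2) for an assumed fixed scale m0:

```python
import math

M0SQ = 1.0  # assumed normalization scale, in GeV^2

def mass2(p, q):
    """Invariant mass squared of the sum of two four-momenta (E, px, py, pz)."""
    E, px, py, pz = (p[i] + q[i] for i in range(4))
    return E * E - px * px - py * py - pz * pz

def lam(p, q):
    """Toy pair contribution to the lambda measure."""
    return math.log(1.0 + mass2(p, q) / M0SQ)

def anneal(partons):
    """Greedy connection order: each free parton finds its minimal-lambda
    partner among the other free partons; the parton whose minimal piece
    is LARGEST (i.e. currently furthest from everyone else) has its
    connection carried out first, and the procedure repeats."""
    free = set(range(len(partons)))
    links = []
    while len(free) > 1:
        best = {}  # parton index -> (smallest lambda, partner index)
        for i in free:
            best[i] = min((lam(partons[i], partons[j]), j)
                          for j in free if j != i)
        i = max(free, key=lambda k: best[k][0])   # largest minimal piece
        j = best[i][1]
        links.append((i, j))
        free -= {i, j}
    return links
```

For an odd number of partons one parton is left over, playing a role loosely analogous to the final gluon in step 4 above.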

Finally, let it be re-emphasized that the issue of colour correlations and reconnections, especially in hadron collisions, is an extremely challenging one, about which very little is known for certain at present (see e.g. the discussions in [San05] and references therein). The present models are therefore in this respect best regarded as vast simplifications of a presumably much more complex physical picture.