
Multiple Interactions (and Beam Remnants) - New Model

Each multiple interaction is associated with its set of initial- and final-state radiation. In the old model such radiation was only considered for the first, i.e. hardest, interaction. The technical reason had to do with the inability to handle junction string topologies, and therefore the need to simplify the description. In practice, it could be argued that the subsequent interactions would tend to be soft, near the lower $p_{\perp\mathrm{min}}$ or $p_{\perp 0}$ scales, and therefore not be associated with additional hard radiation. Nevertheless it was a limitation.

The new junction and beam-remnant description allows radiation to be associated with each interaction. In the intermediate model this is done in a disjoint manner: for each interaction, all initial- and final-state radiation activity associated with it is considered in full before the next interaction is selected; the showers are still the old, virtuality-ordered ones.

In the new model, the new, transverse-momentum-ordered showers are introduced. Thus $p_{\perp}$ becomes the common evolution scale for multiple interactions (MI), initial-state radiation (ISR) and final-state radiation (FSR) alike, although the technical definition of transverse momentum is slightly different in the three cases.

One can argue that, to a good approximation, the addition of FSR can be deferred until after ISR and MI have been considered in full. Specifically, FSR does not modify the total amount of energy carried by perturbatively defined partons, it only redistributes that energy among more partons. By contrast, both the addition of a further ISR branching and the addition of a further interaction imply more perturbative energy, taken from the limited beam-remnant reservoir. These two mechanisms are therefore in direct competition with each other.

We have already advocated in favour of ordering multiple interactions in a sequence of falling $p_{\perp}$ values. This does not correspond to an ordering in a physical time, but rather to the requirement that the hardest interaction should be the one for which standard parton densities should apply. Any corrections, e.g. from the kind of flavour correlations already discussed, would have to be introduced by approximate prescriptions, that would become increasingly uncertain as further interactions are considered.

We now advocate that, by the same reasoning, also ISR emissions should be interleaved with the MI chain, in one common sequence of decreasing $p_{\perp}$ values. That is, a hard second interaction should be considered before a soft ISR branching associated with the hardest interaction. This is made possible by the adoption of $p_{\perp}$ as common evolution variable. Thus, the standard parton densities are only used to describe the hardest interaction and the ISR branchings that occur above the $p_{\perp}$-scales of any secondary interactions.

In passing, note that the old showers require two matching parameters, $Q^2_{\mathrm{max,shower}} = f \, p^2_{\perp,\mathrm{MI}}$. These $f$ values, typically in the range 1 to 4 and separate for space-like and time-like showers, compensate on average for the extra $z$-dependent factors in the relations $p_{\perp}^2 \approx (1-z) Q^2$ and $p_{\perp}^2 \approx z(1-z) Q^2$, respectively, so that the showers can start from a $p_{\perp}$ scale comparable with that of the interaction. In our new model, with $Q^2_{\mathrm{shower}} \approx p_{\perp}^2$, this matching is automatic, i.e. $f = 1$.
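As a rough consistency check of the quoted $f$ range (an order-of-magnitude estimate, not the precise tuning used in the program), one may invert the kinematic relations above: requiring that shower emissions can reach transverse momenta comparable with the interaction scale, $p_{\perp} \approx p_{\perp,\mathrm{MI}}$, gives

\begin{displaymath}
Q^2_{\mathrm{max}} \approx \frac{p_{\perp,\mathrm{MI}}^2}{z(1-z)} \geq 4 \, p_{\perp,\mathrm{MI}}^2
\quad \mathrm{(time\mbox{-}like)}, \qquad
Q^2_{\mathrm{max}} \approx \frac{p_{\perp,\mathrm{MI}}^2}{1-z}
\quad \mathrm{(space\mbox{-}like)},
\end{displaymath}

since $z(1-z) \leq 1/4$. Thus $f$ of order unity suffices for space-like showers at moderate $z$, while values up to $f \approx 4$ are needed to open the full time-like phase space, consistent with the quoted 1 to 4 range.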

The evolution downwards in $p_{\perp}$ can now be viewed as a generalization of the backwards evolution philosophy [Sjö85]: given a configuration at some $p_{\perp}$ resolution scale, what can that configuration have come from at a lower scale? Technically, starting from a hard interaction, a common sequence of subsequent evolution steps -- interactions and branchings mixed -- can therefore be found. Assuming that the latest step occurred at some $p_{\perp i-1}$ scale, this sets the maximum $p_{\perp\mathrm{max}}= p_{\perp i-1}$ for the continued evolution. What can happen next is then either a new interaction or a new ISR branching on one of the two incoming sides in one of the existing interactions. The probability distribution for $p_{\perp}= p_{\perp i}$ is given by

\begin{displaymath}
\frac{\d\mathcal{P}}{\d p_{\perp}} =
\left( \frac{\d\mathcal{P}_{\mathrm{MI}}}{\d p_{\perp}}
+ \sum \frac{\d\mathcal{P}_{\mathrm{ISR}}}{\d p_{\perp}} \right)
\exp \left( - \int_{p_{\perp}}^{p_{\perp i-1}}
\left( \frac{\d\mathcal{P}_{\mathrm{MI}}}{\d p_{\perp}'}
+ \sum \frac{\d\mathcal{P}_{\mathrm{ISR}}}{\d p_{\perp}'} \right)
\d p_{\perp}' \right)
\end{displaymath} (228)

in simplified notation. Technically, the $p_{\perp i}$ can be found by selecting a new trial interaction according to $\d\mathcal{P}_{\mathrm{MI}} \, \exp ( - \int \d\mathcal{P}_{\mathrm{MI}})$, and a trial ISR branching in each of the possible places according to $\d\mathcal{P}_{\mathrm{ISR}} \, \exp ( - \int \d\mathcal{P}_{\mathrm{ISR}})$. The one of all of these possibilities that occurs at the largest $p_{\perp}$ preempts the others, and is allowed to be realized. The whole process is iterated, until a lower cutoff is reached, below which no further interactions or branchings are allowed.
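The competition between trials can be sketched in a few lines of Python. This is a toy model, not the actual PYTHIA algorithm: the rates below are hypothetical power-law forms $\d\mathcal{P}/\d p_{\perp} = c/p_{\perp}$, chosen only because their Sudakov factors can be inverted exactly, and the names (`next_scale`, `interleaved_evolution`) are invented for illustration.

```python
import random

PT_CUT = 1.0  # toy lower cutoff below which evolution stops (GeV)

def next_scale(pt_max, c):
    """Sample the next scale for a toy rate dP/dpT = c/pT.
    The Sudakov factor between pt and pt_max is (pt/pt_max)**c,
    so inverse-transform sampling gives pt_max * u**(1/c)."""
    return pt_max * random.random() ** (1.0 / c)

def interleaved_evolution(pt_start, c_mi=0.8, c_isr=0.5, n_isr_legs=2):
    """Generate one interleaved MI/ISR sequence in falling p_T.

    At each step one trial MI scale and one trial ISR scale per
    incoming leg are drawn below the current scale; the largest
    trial preempts the others and is realized.  (In the full
    model each new interaction would add two more ISR legs; that
    bookkeeping is omitted here.)"""
    pt = pt_start
    history = []
    while True:
        trials = [("MI", next_scale(pt, c_mi))]
        trials += [("ISR", next_scale(pt, c_isr)) for _ in range(n_isr_legs)]
        kind, pt = max(trials, key=lambda t: t[1])
        if pt < PT_CUT:
            break
        history.append((kind, pt))
    return history

seq = interleaved_evolution(100.0)
```

By construction the realized scales form a strictly decreasing chain, mirroring the common falling-$p_{\perp}$ ordering of the text.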

If there were no flavour and momentum constraints linking the different subsystems, it is easy to see that such an interleaved evolution actually is equivalent to considering the ISR of each interaction in full before moving on to the next interaction. Competition is introduced via the correlated parton densities already discussed. Thus distributions are squeezed to be nonvanishing in a range $x\in[0,X]$, where $X < 1$ represents the fraction of the original beam-remnant momentum still available for an interaction or branching. When a trial $n$'th interaction is considered, $X = 1 - \sum_{i=1}^{n-1} x_i$, where the sum runs over all the already existing interactions. The $x_i$ are the respective momentum fractions of the ISR shower initiators at the current resolution scale, i.e., an $x_i$ is increased each time an ISR branching is backwards-constructed on an incoming parton leg. Similarly, the flavour content is modified to take into account the partons already extracted by the $n-1$ previous interactions, including the effects of ISR branchings. When instead a trial shower branching is considered, the $X$ sum excludes the interaction under consideration, since this energy is at the disposal of the interaction, and similarly for the flavour content.
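The momentum bookkeeping described above reduces to simple arithmetic, sketched here in Python (the function name and list representation are invented for illustration; real parton-density squeezing and flavour bookkeeping are of course much richer):

```python
def available_x(x_initiators, exclude=None):
    """Fraction X of the original beam-remnant momentum still available
    on one beam side.  x_initiators holds the current momentum fraction
    of each existing interaction's shower initiator.  For a trial ISR
    branching, the system being evolved is excluded from the sum, since
    that energy is already at the disposal of that interaction."""
    return 1.0 - sum(x for i, x in enumerate(x_initiators)
                     if i != exclude)

x = [0.3, 0.2]                         # two existing interactions
X_new_int = available_x(x)             # trial new interaction: 1 - 0.5
X_branch = available_x(x, exclude=0)   # trial branching in system 0: 1 - 0.2
```

Each backwards-constructed ISR branching increases the corresponding $x_i$ in the list, so $X$ shrinks as the evolution proceeds, squeezing the range $x \in [0, X]$ available to later trials.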

The choice of $p_{\perp\mathrm{max}}$ scale for this combined evolution is process-dependent, as before. For minimum-bias QCD events the full phase space is allowed, while the $p_{\perp}$ scale of a QCD hard process sets the maximum for the continued evolution, in order not to double-count. When the hard process represents a possibility not present in the MI/ISR machinery -- production of $\mathrm{Z}^0$, top, or supersymmetry, say -- again the full (remaining) phase space is available, though for processes that contain explicit jets in the matrix element, such as $\mathrm{W}$+jet, the ISR evolution is restricted to be below the jet cutoff. Note that, when interfacing events from an external matrix element generator, special care has to be taken to ensure that these scales are set consistently.

There is also the matter of a lower $p_{\perp\mathrm{min}}$ scale. Customarily such scales are chosen separately for ISR and MI, and typically lower for the former than the latter. Both cutoffs are related to the resolution of the incoming hadronic wave function, however, and in the current formalism ISR and MI are interleaved, so it makes sense to use the same regularization procedure. Therefore the ISR branching probability is also smoothly turned off at a $p_{\perp 0}$ scale, as for MI, but with a suppression factor that is the square root of the MI one, since only one Feynman vertex is involved in a shower branching relative to the two of a hard process. Thus the $\alpha_{\mathrm{s}}(p_{\perp}^2) \, \d p_{\perp}^2 / p_{\perp}^2$ divergence in a branching is tamed to $\alpha_{\mathrm{s}}(p_{\perp 0}^2 + p_{\perp}^2) \, \d p_{\perp}^2 / (p_{\perp 0}^2 + p_{\perp}^2)$. The scale of parton densities in ISR and MI alike is maintained at $p_{\perp}^2$, however, the argument being that the actual evolution of the partonic content is given by standard DGLAP evolution, and that it is only when this content is to be resolved that a dampening is to be imposed. This also has the boon that flavour thresholds appear where they are expected.
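The taming of the divergence can be illustrated numerically. The sketch below is a toy, not the PYTHIA implementation: it uses a one-loop running coupling with invented values for $p_{\perp 0}$ and $\Lambda$, and the function names are made up for this example.

```python
import math

PT0 = 2.0      # toy regularization scale p_T0 (GeV)
LAMBDA = 0.2   # toy Lambda_QCD (GeV)

def alpha_s(q2, nf=5):
    """Toy one-loop running coupling; only sensible for q2 >> LAMBDA**2."""
    b0 = (33.0 - 2.0 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(q2 / LAMBDA ** 2))

def suppression_mi(pt2):
    """Ratio of the tamed MI density alpha_s(pt0^2+pt^2)/(pt0^2+pt^2)
    to the divergent alpha_s(pt^2)/pt^2."""
    tamed = alpha_s(PT0 ** 2 + pt2) / (PT0 ** 2 + pt2)
    naive = alpha_s(pt2) / pt2
    return tamed / naive

def suppression_isr(pt2):
    """ISR receives the square root of the MI suppression: one
    Feynman vertex in a branching versus two in a hard scattering."""
    return math.sqrt(suppression_mi(pt2))
```

At $p_{\perp}^2 \gg p_{\perp 0}^2$ both factors approach unity, while towards the cutoff the MI density is damped more strongly than the ISR one, as the square-root relation dictates.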

The cutoff for FSR is still kept separate and lower, since that scale deals with the matching between perturbative physics and the nonperturbative hadronization at long time scales, and so has a somewhat different function.

The description of beam remnants offers the same scenarios as outlined for the intermediate model above, and as further described in [Sjö04a]. A more recent option, not found there, is the following `colour annealing' scenario [San05], available only for the new model, through the options MSTP(95) = 2 - 5. It has been constructed to produce effects similar to `Tune A' and related tunes of the old model, but is also in some sense intended to be representative of the `most extreme' case. It starts from the assumption that, at hadronization time, no information from the perturbative colour history of the event is relevant. Instead, what determines how hadronizing strings form between the partons is a minimization of the total potential energy stored in these strings. That is, the partons, regardless of their formation history, will tend to be colour connected to the partons closest to them in momentum space, hence minimizing the total `string length', as measured by the so-called $\lambda$ measure [And83a,Sjö03]. Technically, the model implementation starts by erasing the colour tags of all final-state coloured partons. It then begins an iterative procedure (which unfortunately can be quite time-consuming):

1. Loop over all final-state coloured partons. For each such parton,
(i) compute the $\lambda$ measure for each possible string connection from that parton to other `colour-compatible' final-state partons which do not already have string pieces connected to them (for MSTP(95) = 2 and MSTP(95) = 3 with the extra condition that closed gluon loops are suppressed, or, with options MSTP(95) = 6 and MSTP(95) = 7 available from version 6.402, with the condition that connections must be initiated from free triplets), and
(ii) store the connection with the smallest $\lambda$ measure for later comparison.
2. Compare all the possible `minimal string pieces' found, one for each parton. Select the largest of these to be carried out physically. (That parton is in some sense the one that is currently furthest away from all other partons.)
3. If any `dangling colour tags' are left, repeat from 1.

At the end of the iteration, it is possible that the last parton is a gluon, and that all other partons already form a complete colour singlet system. In this case, the gluon is simply attached between the two partons where its presence will increase the total $\lambda$ measure the least.
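The maximin structure of the iteration can be sketched in Python. This toy version (all names invented) ignores the colour-compatibility rules, gluon double-connections and the MSTP(95) variants, and uses a simplified $\lambda$ measure, the logarithm of each pair's invariant mass squared for massless partons:

```python
import math

def lam(p1, p2):
    """Toy lambda measure for a string piece: log of the pair
    invariant mass squared (massless partons), floored at 0."""
    e1, x1, y1, z1 = p1
    e2, x2, y2, z2 = p2
    m2 = (e1 + e2) ** 2 - (x1 + x2) ** 2 - (y1 + y2) ** 2 - (z1 + z2) ** 2
    return math.log(max(m2, 1.0))

def anneal(partons):
    """Greedy colour-annealing sketch: for every still-unconnected
    parton find its cheapest possible connection, then realize the
    most expensive of those minima (the parton currently 'furthest
    away from all other partons'), and iterate."""
    free = set(range(len(partons)))
    connections = []
    while len(free) > 1:
        # cheapest link available to each free parton
        best = {i: min((lam(partons[i], partons[j]), j)
                       for j in free if j != i)
                for i in free}
        i = max(free, key=lambda k: best[k][0])  # maximin choice
        _, j = best[i]
        connections.append((i, j))
        free -= {i, j}
    return connections

partons = [(10.0, 0.0, 0.0, 10.0), (10.0, 0.0, 0.0, -10.0),
           (5.0, 3.0, 0.0, 4.0), (5.0, -3.0, 0.0, -4.0)]
links = anneal(partons)
```

Because every free parton must recompute its cheapest link after each realized connection, the cost grows rapidly with multiplicity, which is the time-consumption issue noted above.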

Finally, let it be re-emphasized that the issue of colour correlations and reconnections, especially in hadron collisions, is an extremely challenging one, about which very little is known for certain at present (see e.g. the discussions in [San05] and references therein). The present models are therefore, in this respect, best regarded as vast simplifications of a presumably much more complex physical picture.

Stephen Mrenna 2007-10-30