
### A model with varying impact parameters

Up to this point, it has been assumed that the initial state is the same for all hadron collisions, whereas in fact each collision is also characterized by a varying impact parameter b. Within the classical framework of the model reviewed here, b is to be thought of as a distance of closest approach, not as the Fourier transform of the momentum transfer. A small b value corresponds to a large overlap between the two colliding hadrons, and hence an enhanced probability for multiple interactions. A large b, on the other hand, corresponds to a grazing collision, with a large probability that no parton-parton interactions at all take place.

In order to quantify the concept of hadronic matter overlap, one may assume a spherically symmetric distribution of matter inside the hadron, ρ(x) = ρ(r). For simplicity, the same spatial distribution is taken to apply for all parton species and momenta. Several different matter distributions have been tried, and are available. We will here concentrate on the most extreme one, a double Gaussian

$$\rho(r) \propto \frac{1-\beta}{a_1^3} \exp\left\{ -\frac{r^2}{a_1^2} \right\} + \frac{\beta}{a_2^3} \exp\left\{ -\frac{r^2}{a_2^2} \right\} ~. \qquad (210)$$

This corresponds to a distribution with a small core region, of radius a₂ and containing a fraction β of the total hadronic matter, embedded in a larger hadron of radius a₁. While it is mathematically convenient to have the origin of the two Gaussians coinciding, the physics could well correspond to having three disjoint core regions, reflecting the presence of three valence quarks, together carrying the fraction β of the proton momentum. One could alternatively imagine a hard hadronic core surrounded by a pion cloud. Such details would affect e.g. the predictions for the t distribution in elastic scattering, but are not of any consequence for the current topics. To be specific, the values β = 0.5 and a₂/a₁ = 0.2 were picked as default values. It should be noted that the overall distance scale never enters in the subsequent calculations, since the inelastic, non-diffractive cross section σ_nd is taken from literature rather than calculated from the ρ(r).
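As an illustration, the double Gaussian is simple to evaluate numerically. The sketch below (illustrative code, not PYTHIA itself) uses the default values β = 0.5 and a₂/a₁ = 0.2 in arbitrary distance units, since the overall scale drops out:

```python
import math

# Illustrative sketch of the double-Gaussian matter distribution, eq. (210).
# beta = 0.5 and a2/a1 = 0.2 are the default values quoted in the text; the
# absolute distance scale never enters, so a1 = 1 in arbitrary units.
BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1

def rho(r):
    """Spherically symmetric matter density rho(r), up to normalization."""
    halo = (1.0 - BETA) / A1**3 * math.exp(-r**2 / A1**2)  # broad component
    core = BETA / A2**3 * math.exp(-r**2 / A2**2)          # narrow core
    return halo + core

# The core term dominates at small r because of its 1/a2^3 prefactor:
print(rho(0.0) > 50.0 * rho(1.0))  # True
```

The steep core-to-halo density contrast is exactly what drives the large fluctuations discussed below.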

Compared to other shapes, like a simple Gaussian, the double Gaussian tends to give larger fluctuations, e.g. in the multiplicity distribution of minimum-bias events: a collision in which the two cores overlap tends to have a strongly increased activity, while ones where they do not are rather less active. One also has a biasing effect: hard processes are more likely when the cores overlap, thus hard scatterings are associated with an enhanced multiple interaction rate. This provides one possible explanation for the experimental `pedestal effect' [UA187]. Recent studies of CDF data [Fie02,Mor02] have confirmed that indeed something more peaked than a single Gaussian is required to understand the transition from minimum-bias to underlying-event activity.

For a collision with impact parameter b, the time-integrated overlap between the matter distributions of the colliding hadrons is given by

$$\mathcal{O}(b) = \int dt \int d^3x \, \rho(x,y,z) \, \rho(x+b,y,z+t) \propto \frac{(1-\beta)^2}{2a_1^2} \exp\left\{ -\frac{b^2}{2a_1^2} \right\} + \frac{2\beta(1-\beta)}{a_1^2+a_2^2} \exp\left\{ -\frac{b^2}{a_1^2+a_2^2} \right\} + \frac{\beta^2}{2a_2^2} \exp\left\{ -\frac{b^2}{2a_2^2} \right\} ~. \qquad (211)$$

The necessity to use boosted ρ(x) distributions has been circumvented by a suitable scale transformation of the z and t coordinates.

The overlap 𝒪(b) is obviously strongly related to the eikonal Ω(b) of optical models. We have kept a separate notation, since the physics context of the two is slightly different: Ω(b) is based on the quantum mechanical scattering of waves in a potential, and is normally used to describe the elastic scattering of a hadron-as-a-whole, while 𝒪(b) comes from a purely classical picture of point-like partons distributed inside the two colliding hadrons. Furthermore, the normalization and energy dependence are differently realized in the two formalisms.
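For the double Gaussian the time-integrated overlap can be written in closed form, as a sum of the three Gaussians obtained by convolving the core and halo terms pairwise. A small self-contained sketch (illustrative parameters and arbitrary normalization, not PYTHIA code):

```python
import math

# Sketch of the time-integrated overlap O(b), eq. (211), for the double
# Gaussian: a sum of three Gaussians in the impact parameter b.
# Illustrative parameters: beta = 0.5, a2/a1 = 0.2, a1 = 1.
BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1
TERMS = [((1.0 - BETA) ** 2, 2.0 * A1**2),            # halo * halo
         (2.0 * BETA * (1.0 - BETA), A1**2 + A2**2),  # core * halo cross terms
         (BETA**2, 2.0 * A2**2)]                      # core * core

def overlap(b):
    """O(b), up to an overall constant."""
    return sum(w / s * math.exp(-b**2 / s) for w, s in TERMS)

# Central collisions have a far larger overlap than grazing ones:
print(overlap(0.0) > 100.0 * overlap(2.0))  # True
```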

The larger the overlap 𝒪(b) is, the more likely it is to have interactions between partons in the two colliding hadrons. In fact, there should be a linear relationship

$$\langle \tilde{n}(b) \rangle = k \, \mathcal{O}(b) ~, \qquad (212)$$

where ñ(b) counts the number of interactions when two hadrons pass each other with an impact parameter b. The constant of proportionality, k, is related to the parton-parton cross section and hence increases with c.m. energy.

For each given impact parameter, the number of interactions ñ(b) is assumed to be distributed according to a Poisson. If the matter distribution has a tail to infinity (as the double Gaussian does), events may be obtained with arbitrarily large b values. In order to obtain finite total cross sections, it is necessary to assume that each event contains at least one semi-hard interaction. The probability that two hadrons, passing each other with an impact parameter b, will actually undergo a collision is then given by

$$P_{\mathrm{int}}(b) = 1 - \exp\{ -\langle \tilde{n}(b) \rangle \} = 1 - \exp\{ -k \, \mathcal{O}(b) \} ~, \qquad (213)$$

according to Poisson statistics. The average number of interactions per event at impact parameter b is now

$$\bar{n}(b) = \frac{\langle \tilde{n}(b) \rangle}{P_{\mathrm{int}}(b)} = \frac{k \, \mathcal{O}(b)}{1 - \exp\{ -k \, \mathcal{O}(b) \}} ~, \qquad (214)$$

where the denominator comes from the removal of hadron pairs which pass without colliding, i.e. with ñ = 0.

The relationship n̄ = σ_hard/σ_nd was earlier introduced for the average number of interactions per non-diffractive, inelastic event. When averaged over all impact parameters, this relation must still hold true: the introduction of variable impact parameters may give more interactions in some events and fewer in others, but it does not affect either σ_hard or σ_nd. For the former this is because the perturbative QCD calculations only depend on the total parton flux, for the latter by construction. Integrating eq. (214) over d²b, one then obtains

$$\bar{n} = \frac{\int \bar{n}(b) \, P_{\mathrm{int}}(b) \, d^2b}{\int P_{\mathrm{int}}(b) \, d^2b} = \frac{\int k \, \mathcal{O}(b) \, d^2b}{\int \left( 1 - \exp\{ -k \, \mathcal{O}(b) \} \right) d^2b} = \frac{\sigma_{\mathrm{hard}}}{\sigma_{\mathrm{nd}}} ~. \qquad (215)$$

For 𝒪(b), σ_hard and σ_nd given, with σ_hard > σ_nd, k can thus always be found (numerically) by solving the last equality.
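This numerical step can be sketched with a simple bisection. The overlap is the double Gaussian with illustrative parameters, and the target ratio n̄ = σ_hard/σ_nd = 2 is an arbitrary example value; `mean_n` and `solve_k` are invented names for this sketch, not PYTHIA routines:

```python
import math

# Double-Gaussian overlap O(b) of eq. (211), illustrative parameters.
BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1
TERMS = [((1.0 - BETA) ** 2, 2.0 * A1**2),
         (2.0 * BETA * (1.0 - BETA), A1**2 + A2**2),
         (BETA**2, 2.0 * A2**2)]

def overlap(b):
    return sum(w / s * math.exp(-b**2 / s) for w, s in TERMS)

def mean_n(k, bmax=8.0, nstep=2000):
    """Ratio of the two radial integrals in eq. (215), d^2b = 2 pi b db."""
    num = den = 0.0
    db = bmax / nstep
    for i in range(nstep):
        b = (i + 0.5) * db
        w = 2.0 * math.pi * b * db
        num += w * k * overlap(b)
        den += w * (1.0 - math.exp(-k * overlap(b)))
    return num / den

def solve_k(target, klo=1e-3, khi=1e3, tol=1e-6):
    """Bisection in log k; mean_n(k) grows monotonically from 1 with k."""
    while khi - klo > tol * klo:
        kmid = math.sqrt(klo * khi)
        if mean_n(kmid) < target:
            klo = kmid
        else:
            khi = kmid
    return 0.5 * (klo + khi)

k = solve_k(2.0)
print(abs(mean_n(k) - 2.0) < 1e-3)  # the solved k reproduces the target
```

Note that the ratio of the two integrals is always at least one (each event has at least one interaction), which is why the solution requires σ_hard > σ_nd.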

The absolute normalization of 𝒪(b) is not interesting in itself, but only the relative variation with impact parameter. It is therefore useful to introduce an `enhancement factor' e(b), which gauges how the interaction probability for a passage with impact parameter b compares with the average, i.e.

$$k \, \mathcal{O}(b) = e(b) \, \langle k \, \mathcal{O}(b) \rangle ~. \qquad (216)$$

The definition of the average ⟨k𝒪(b)⟩ is a bit delicate, since the average number of interactions per event is pushed up by the requirement that each event contain at least one interaction. However, an exact meaning can be given [Sjö87a].

With the knowledge of e(b), the function f(x⊥) of the simple model generalizes to

$$f(x_\perp) \to e(b) \, f(x_\perp) ~. \qquad (217)$$

The naïve generation procedure is thus to pick a b according to the phase space d²b, find the relevant e(b) and plug in the resulting e(b) f(x⊥) in the formalism of the simple model. If at least one hard interaction is generated, the event is retained, else a new b is to be found. This algorithm would work fine for hadronic matter distributions which vanish outside some radius, so that the d²b phase space which needs to be probed is finite. Since this is not true for the distributions under study, it is necessary to do better.
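The naïve procedure can be sketched as a simple accept/reject loop. Since the double Gaussian has no finite support, an artificial cutoff b_max has to be imposed here, which is precisely the weakness noted above; the values of k, b_max and the matter-distribution parameters are all illustrative:

```python
import math, random

# Illustrative parameters; overlap() is the double-Gaussian O(b) of eq. (211).
BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1
K, B_MAX = 5.0, 5.0   # illustrative values only
TERMS = [((1.0 - BETA) ** 2, 2.0 * A1**2),
         (2.0 * BETA * (1.0 - BETA), A1**2 + A2**2),
         (BETA**2, 2.0 * A2**2)]

def overlap(b):
    return sum(w / s * math.exp(-b**2 / s) for w, s in TERMS)

def naive_pick_b(rng):
    """Pick b uniformly in d^2b below B_MAX; retain with the probability
    P_int(b) = 1 - exp(-k O(b)) that at least one interaction occurs."""
    while True:
        b = B_MAX * math.sqrt(rng.random())   # d^2b means b^2 is uniform
        if rng.random() < 1.0 - math.exp(-K * overlap(b)):
            return b

rng = random.Random(1)
sample = [naive_pick_b(rng) for _ in range(5000)]
# accepted events cluster at small b, where the overlap is large
print(sum(sample) / len(sample) < 0.5 * B_MAX)  # True
```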

By analogy with the corresponding equation of the simple model, it is possible to ask what the probability is to find the hardest scattering of an event at x⊥. For each impact parameter separately, the probability to have an interaction at x⊥ is given by e(b) f(x⊥), and this should be multiplied by the probability that the event contains no interactions at a scale x'⊥ > x⊥, to yield the total probability distribution

$$\frac{dP_{\mathrm{hardest}}}{d^2b \, dx_\perp} = \frac{e(b) \, f(x_\perp)}{\int P_{\mathrm{int}}(b') \, d^2b'} \, \exp\left\{ -e(b) \int_{x_\perp}^{1} f(x'_\perp) \, dx'_\perp \right\} ~. \qquad (218)$$

If the treatment of the exponential is deferred for a moment, the distribution in b and x⊥ appears in factorized form, so that the two can be chosen independently of each other. In particular, a high-p⊥ QCD scattering or any other hard scattering can be selected with whatever kinematics desired for that process, and thereafter assigned some suitable `hardness' x⊥. With the b chosen according to e(b) d²b, the neglected exponential can now be evaluated, and the event retained with a probability proportional to it. From the x⊥ scale of the selected interaction, a sequence of softer x⊥ values may again be generated as in the simple model, using the known e(b). This sequence may be empty, i.e. the event need not contain any further interactions.

It is interesting to understand how the algorithm above works. By selecting b according to e(b) d²b, i.e. 𝒪(b) d²b, the primary b distribution is maximally biased towards small impact parameters. If the first interaction is hard, by choice or by chance, the integral of the cross section above x⊥ is small, and the exponential close to unity. The rejection procedure is therefore very efficient for all standard hard processes in the program -- one may even safely drop the weighting with the exponential completely. The large e(b) value is also likely to lead to the generation of many further, softer interactions. If, on the other hand, the first interaction is not hard, the exponential is no longer close to unity, and many events are rejected. This pulls down the efficiency for `minimum bias' event generation. Since the exponent is proportional to e(b), a large e(b) leads to an enhanced probability for rejection, whereas the chance of acceptance is larger with a small e(b). Among events where the hardest interaction is soft, the b distribution is therefore biased towards larger values (smaller e(b)), and there is a small probability for yet softer interactions.
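The biased b selection is particularly simple for the double Gaussian, since 𝒪(b) d²b is then a mixture of three two-dimensional Gaussians that can be sampled exactly, with no rejection at all. A sketch with illustrative parameters (not PYTHIA code):

```python
import math, random

BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1
# (weight, width) of the three Gaussian terms of O(b), eq. (211); in
# d^2b = 2 pi b db each term (w/s) * exp(-b^2/s) integrates to pi * w,
# so the mixture probabilities are simply proportional to the weights w.
TERMS = [((1.0 - BETA) ** 2, 2.0 * A1**2),
         (2.0 * BETA * (1.0 - BETA), A1**2 + A2**2),
         (BETA**2, 2.0 * A2**2)]
WEIGHTS = [w for w, _ in TERMS]

def biased_pick_b(rng):
    """Draw b from O(b) d^2b: pick a term, then invert its 2D Gaussian CDF."""
    _, s = rng.choices(TERMS, weights=WEIGHTS, k=1)[0]
    return math.sqrt(-s * math.log(1.0 - rng.random()))

rng = random.Random(2)
sample = [biased_pick_b(rng) for _ in range(20000)]
# strongly biased towards small impact parameters
print(sum(sample) / len(sample) < 1.0)  # True
```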

To evaluate the exponential factor, the program pretabulates the integral of f(x⊥) at the initialization stage, and further increases the Monte Carlo statistics of this tabulation as the run proceeds. The grid is concentrated towards small x⊥, where the integral is large. For a selected x⊥ value, the integral is obtained by interpolation. After multiplication by the known e(b) factor, the exponential factor may be found.
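The pretabulate-and-interpolate step can be sketched generically. Here a toy weight function g stands in for f(x⊥) (it is not the PYTHIA interaction rate), a logarithmic grid concentrates points at small x, and the tabulated integral is read back by linear interpolation in ln x:

```python
import math

X_MIN, N = 1e-3, 800

def g(x):
    """Toy stand-in for f(x_perp): steep rise towards small x, regularized."""
    return x / (x**2 + 0.01) ** 2

# grid logarithmically spaced on [X_MIN, 1], densest at small x
grid = [X_MIN * (1.0 / X_MIN) ** (i / N) for i in range(N + 1)]

# pretabulate I(x_i) = integral of g from x_i to 1 (trapezoid rule),
# accumulating from x = 1 downwards
table = [0.0] * (N + 1)
for i in range(N - 1, -1, -1):
    dx = grid[i + 1] - grid[i]
    table[i] = table[i + 1] + 0.5 * (g(grid[i]) + g(grid[i + 1])) * dx

def integral(x):
    """I(x) by linear interpolation in ln x between tabulated points."""
    t = math.log(x / X_MIN) / math.log(1.0 / X_MIN) * N
    i = min(int(t), N - 1)
    f = t - i
    return (1.0 - f) * table[i] + f * table[i + 1]

# check against the closed form I(x) = 0.5/(x^2 + 0.01) - 0.5/1.01
exact = 0.5 / (0.05**2 + 0.01) - 0.5 / 1.01
print(abs(integral(0.05) / exact - 1.0) < 0.01)  # True
```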

In this section, nothing has yet been assumed about the form of the dσ/dp⊥ spectrum. Like in the impact-parameter-independent case, it is possible to use a sharp cut-off at some given p⊥min value. However, now each event is required to have at least one interaction, whereas before events without interactions were retained and put at p⊥ = 0. It is therefore aesthetically more appealing to assume a gradual turn-off, so that a (semi)hard interaction can be rather soft part of the time. The matrix elements roughly diverge like dp⊥²/p⊥⁴ for p⊥ → 0. They could therefore be regularized as follows. Firstly, to remove the 1/p⊥⁴ behaviour, multiply by a factor p⊥⁴/(p⊥² + p⊥0²)². Secondly, replace the argument p⊥² in αs by p⊥² + p⊥0². If one has included a K factor by a rescaling of the αs argument, as mentioned earlier, replace 0.075 p⊥² by 0.075 (p⊥² + p⊥0²).

With these substitutions, a continuous p⊥ spectrum is obtained, stretching from p⊥ = 0 to the kinematical limit. For p⊥ ≫ p⊥0 the standard perturbative QCD cross section is recovered, while values p⊥ ≪ p⊥0 are strongly damped. The p⊥0 scale, which now is the main free parameter of the model, in practice comes out to be of the same order of magnitude as the sharp cut-off p⊥min did, i.e. 1.5-2 GeV at collider energies, but typically about 10% higher.
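The two substitutions can be illustrated directly. The sketch below uses a one-loop αs with an illustrative Λ = 0.2 GeV, nf = 5 and p⊥0 = 2 GeV; only the shape of the cross section is kept, so all constant prefactors are dropped:

```python
import math

PT0, LAMBDA, NF = 2.0, 0.2, 5   # illustrative values (GeV)

def alpha_s(q2):
    """One-loop running coupling, purely to make the sketch self-contained."""
    b0 = (33 - 2 * NF) / (12 * math.pi)
    return 1.0 / (b0 * math.log(q2 / LAMBDA**2))

def damping(pt):
    """The regularizing factor pt^4 / (pt^2 + pt0^2)^2."""
    return pt**4 / (pt**2 + PT0**2) ** 2

def dsigma_shape(pt):
    """Shape of the regularized spectrum: damping times the naive
    alpha_s^2/pt^4, which simplifies to alpha_s^2(q2)/q2^2 with
    q2 = pt^2 + pt0^2 (the shifted alpha_s argument)."""
    q2 = pt**2 + PT0**2
    return alpha_s(q2) ** 2 / q2**2

# For pt >> pt0 the standard perturbative shape is recovered...
ratio = dsigma_shape(20.0) / (alpha_s(20.0**2) ** 2 / 20.0**4)
print(0.9 < ratio < 1.1)  # True: close to the unregularized result
# ...while pt << pt0 is strongly damped, and pt -> 0 stays finite:
print(damping(0.2) < 0.01 and dsigma_shape(0.0) > 0.0)  # True
```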

Above we have argued that p⊥min and p⊥0 should only have a slow energy dependence, and even allowed for the possibility of fixed values. For the impact-parameter-independent picture this works out fine, with all events being reduced to low-p⊥ two-string ones when the c.m. energy is reduced. In the variable-impact-parameter picture, the whole formalism only makes sense if σ_hard > σ_nd, see e.g. eq. (215). Since σ_nd does not vanish with decreasing energy, but σ_hard would do that for a fixed p⊥0, this means that p⊥0 has to be reduced significantly at low energies, possibly even more than implied by our assumed energy dependence. The more `sophisticated' model of this section therefore makes sense at collider energies, whereas it may not be well suited for applications at fixed-target energies. There one should presumably attach to a picture of multiple soft Pomeron exchanges.

Stephen Mrenna 2007-10-30