Up to this point, it has been assumed that the initial state is the same for all hadron collisions, whereas in fact each collision is also characterized by a varying impact parameter $b$. Within the classical framework of the model reviewed here, $b$ is to be thought of as a distance of closest approach, not as the Fourier transform of the momentum transfer. A small $b$ value corresponds to a large overlap between the two colliding hadrons, and hence an enhanced probability for multiple interactions. A large $b$, on the other hand, corresponds to a grazing collision, with a large probability that no parton-parton interactions at all take place.
In order to quantify the concept of hadronic matter overlap, one may assume a spherically symmetric distribution of matter inside the hadron, $\rho(\mathbf{x}) \, \mathrm{d}^3 x = \rho(r) \, \mathrm{d}^3 x$. For simplicity, the same spatial distribution is taken to apply for all parton species and momenta. Several different matter distributions have been tried, and are available. We will here concentrate on the most extreme one, a double Gaussian
\begin{equation}
\rho(r) \propto \frac{1-\beta}{a_1^3} \exp\left( -\frac{r^2}{a_1^2} \right) + \frac{\beta}{a_2^3} \exp\left( -\frac{r^2}{a_2^2} \right) ,
\end{equation}
corresponding to a small core region of radius $a_2$, containing a fraction $\beta$ of the hadronic matter, embedded in a larger hadron of radius $a_1$.
Compared to other shapes, like a simple Gaussian, the double Gaussian tends to give larger fluctuations, e.g. in the multiplicity distribution of minimum-bias events: a collision in which the two cores overlap tends to have a strongly increased activity, while ones where they do not are rather less active. One also has a biasing effect: hard processes are more likely when the cores overlap, thus hard scatterings are associated with an enhanced multiple interaction rate. This provides one possible explanation for the experimental `pedestal effect' [UA187]. Recent studies of CDF data [Fie02,Mor02] have confirmed that indeed something more peaked than a single Gaussian is required to understand the transition from minimum-bias to underlying-event activity.
For a collision with impact parameter $b$, the time-integrated overlap $\mathcal{O}(b)$ between the matter distributions of the colliding hadrons is given by
\begin{equation}
\mathcal{O}(b) \propto \int \mathrm{d}t \int \mathrm{d}^3 x \, \rho(x,y,z) \, \rho(x+b, y, z+t) .
\end{equation}
The overlap $\mathcal{O}(b)$ is obviously strongly related to the eikonal $\Omega(b)$ of optical models. We have kept a separate notation, since the physics context of the two is slightly different: $\Omega(b)$ is based on the quantum mechanical scattering of waves in a potential, and is normally used to describe the elastic scattering of a hadron as a whole, while $\mathcal{O}(b)$ comes from a purely classical picture of point-like partons distributed inside the two colliding hadrons. Furthermore, the normalization and energy dependence are differently realized in the two formalisms.
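For a double Gaussian the overlap integral can be carried out in closed form, giving a sum of three Gaussian terms in $b$. The following sketch (Python; the function names and the parameter values $\beta = 0.5$, $a_2/a_1 = 0.2$ are illustrative only, not those of any actual program) evaluates a unit-normalized version of this overlap and checks its normalization over $\mathrm{d}^2 b$:

```python
import math

def overlap(b, beta=0.5, a1=1.0, a2=0.2):
    """Time-integrated overlap of two unit-normalized double-Gaussian matter
    distributions: each of the three terms is the convolution of two
    Gaussians of radii a_i and a_j.  Parameter values are illustrative."""
    weights = [1.0 - beta, beta]
    radii2 = [a1 * a1, a2 * a2]
    total = 0.0
    for wi, ri2 in zip(weights, radii2):
        for wj, rj2 in zip(weights, radii2):
            s = ri2 + rj2   # combined width of the (i,j) convolution
            total += wi * wj * math.exp(-b * b / s) / (math.pi * s)
    return total

# Check: with this normalization the overlap integrates to 1 over d^2b.
db = 0.001
norm = sum(overlap(i * db) * 2.0 * math.pi * (i * db) * db
           for i in range(1, 10000))
```

Note the dominance of the narrow core-core term at small $b$, which is what drives the large fluctuations discussed above.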
The larger the overlap $\mathcal{O}(b)$ is, the more likely it is to have interactions between partons in the two colliding hadrons. In fact, there should be a linear relationship
\begin{equation}
\langle \tilde{n}(b) \rangle = k \, \mathcal{O}(b) ,
\end{equation}
where $\tilde{n} = 0, 1, 2, \ldots$ counts the number of interactions when two hadrons pass each other with an impact parameter $b$.
For each given impact parameter, the number of interactions $\tilde{n}$ is assumed to be distributed according to a Poisson distribution. If the matter distribution has a tail to infinity (as the double Gaussian does), events may be obtained with arbitrarily large $b$ values. In order to obtain finite total cross sections, it is necessary to assume that each event contains at least one semi-hard interaction. The probability that two hadrons, passing each other with an impact parameter $b$, will actually undergo a collision is then given by
\begin{equation}
P_{\mathrm{int}}(b) = 1 - \exp\left( -\langle \tilde{n}(b) \rangle \right) = 1 - \exp\left( -k \, \mathcal{O}(b) \right) ,
\end{equation}
according to Poisson statistics.
The quantity $\langle n \rangle = \sigma_{\mathrm{hard}}/\sigma_{\mathrm{nd}}$ was earlier introduced for the average number of interactions per non-diffractive, inelastic event. When averaged over all impact parameters, this relation must still hold true: the introduction of variable impact parameters may give more interactions in some events and fewer in others, but it does not affect either $\sigma_{\mathrm{hard}}$ or $\sigma_{\mathrm{nd}}$. For the former this is because the perturbative QCD calculations only depend on the total parton flux, for the latter by construction. Integrating eq. () over $\mathrm{d}^2 b$, one then obtains
\begin{equation}
\langle n \rangle = \frac{\int \langle \tilde{n}(b) \rangle \, \mathrm{d}^2 b}{\int P_{\mathrm{int}}(b) \, \mathrm{d}^2 b} = \frac{\int k \, \mathcal{O}(b) \, \mathrm{d}^2 b}{\int \left( 1 - \exp\left( -k \, \mathcal{O}(b) \right) \right) \mathrm{d}^2 b} = \frac{\sigma_{\mathrm{hard}}}{\sigma_{\mathrm{nd}}} .
\end{equation}
For $\mathcal{O}(b)$ and $\langle n \rangle$ given, this relation can be solved numerically for $k$.
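The solution for $k$ can be found by bisection: the ratio of the two integrals equals 1 in the limit $k \to 0$ (every event has exactly one interaction) and grows monotonically with $k$. A minimal numerical sketch (Python; a single-Gaussian overlap and a target $\langle n \rangle = 2$ are used purely for illustration):

```python
import math

A2 = 1.0  # Gaussian matter radius squared, in arbitrary illustrative units

def overlap(b):
    # Single-Gaussian overlap, normalized so that the d^2b integral is 1.
    return math.exp(-b * b / (2.0 * A2)) / (2.0 * math.pi * A2)

def n_avg(k, db=0.005, bmax=12.0):
    """Average number of interactions per event that has at least one:
    <n> = [∫ k O(b) d^2b] / [∫ (1 - exp(-k O(b))) d^2b]."""
    num = den = 0.0
    b = db
    while b < bmax:
        w = 2.0 * math.pi * b * db   # ring measure d^2b
        num += k * overlap(b) * w
        den += (1.0 - math.exp(-k * overlap(b))) * w
        b += db
    return num / den

def solve_k(target, lo=1e-6, hi=1000.0):
    # Bisection: n_avg(k) rises monotonically from 1 as k increases.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if n_avg(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = solve_k(2.0)
```

The fact that $n\_avg \to 1$ as $k \to 0$ reflects the requirement that each event contains at least one interaction.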
The absolute normalization of $\mathcal{O}(b)$ is not interesting in itself, but only the relative variation with impact parameter. It is therefore useful to introduce an `enhancement factor' $e(b)$, which gauges how the interaction probability for a passage with impact parameter $b$ compares with the average, i.e.
\begin{equation}
\langle \tilde{n}(b) \rangle = k \, \mathcal{O}(b) = e(b) \, \langle k \, \mathcal{O}(b) \rangle .
\end{equation}
With the knowledge of $e(b)$, the function $f(x_\perp)$ of the simple model generalizes to
\begin{equation}
f(x_\perp) \to e(b) \, f(x_\perp) .
\end{equation}
By analogy with eq. (), it is possible to ask what the probability is to find the hardest scattering of an event at $x_{\perp 1}$. For each impact parameter separately, the probability to have an interaction at $x_{\perp 1}$ is given by $e(b) \, f(x_{\perp 1})$, and this should be multiplied by the probability that the event contains no interactions at a scale $x'_\perp > x_{\perp 1}$,
\begin{equation}
\exp\left( -e(b) \int_{x_{\perp 1}}^{1} f(x'_\perp) \, \mathrm{d}x'_\perp \right) ,
\end{equation}
to yield the total probability distribution
\begin{equation}
\frac{\mathrm{d}P_{\mathrm{hardest}}}{\mathrm{d}^2 b \, \mathrm{d}x_{\perp 1}} \propto e(b) \, f(x_{\perp 1}) \, \exp\left( -e(b) \int_{x_{\perp 1}}^{1} f(x'_\perp) \, \mathrm{d}x'_\perp \right) .
\end{equation}
It is interesting to understand how the algorithm above works. By selecting $b$ according to $\mathcal{O}(b)$, i.e. $\mathcal{O}(b) \, \mathrm{d}^2 b$, the primary $b$ distribution is maximally biased towards small impact parameters. If the first interaction is hard, by choice or by chance, the integral of the cross section above $x_{\perp 1}$ is small, and the exponential is close to unity. The rejection procedure is therefore very efficient for all standard hard processes in the program -- one may even safely drop the weighting with the exponential completely. The large $e(b)$ value is also likely to lead to the generation of many further, softer interactions. If, on the other hand, the first interaction is not hard, the exponential is no longer close to unity, and many events are rejected. This pulls down the efficiency for `minimum bias' event generation. Since the exponent is proportional to $e(b)$, a large $e(b)$ leads to an enhanced probability for rejection, whereas the chance of acceptance is larger with a small $e(b)$. Among events where the hardest interaction is soft, the $b$ distribution is therefore biased towards larger values (smaller $e(b)$), and there is a small probability for yet softer interactions.
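The bias described above can be illustrated with a toy Monte Carlo. This is a simplified stand-in, not the program's actual algorithm: the spectrum $f(x_\perp) \propto 1/x_\perp$, the Gaussian overlap, the enhancement factor and all parameter values below are assumptions chosen only to make the mechanism visible. A trial $b$ is picked according to $\mathcal{O}(b) \, \mathrm{d}^2 b$, a trial hardest interaction $x_{\perp 1}$ according to $f$ alone, and the event is kept with the exponential no-harder-interaction weight:

```python
import math, random

rng = random.Random(42)
XMIN = 0.01   # toy lower edge of the x_perp spectrum
C = 5.0       # toy normalization of f(x_perp) = C / x_perp

def integral_f(x):
    # ∫_x^1 f(x') dx' for the toy spectrum f = C/x'.
    return C * math.log(1.0 / x)

hard_b, soft_b, accepted = [], [], 0
N = 20000
for _ in range(N):
    # b according to O(b) d^2b: for a Gaussian overlap exp(-b^2/2)/(2 pi)
    # this is a Rayleigh distribution.
    b = math.sqrt(-2.0 * math.log(1.0 - rng.random()))
    e_b = 2.0 * math.exp(-b * b / 2.0)   # toy enhancement factor, e(0) = 2
    # Trial hardest interaction from f alone: pdf ∝ 1/x on [XMIN, 1].
    x1 = XMIN ** rng.random()
    # Keep the event with the exponential (no harder interaction) weight.
    if rng.random() < math.exp(-e_b * integral_f(x1)):
        accepted += 1
        (hard_b if x1 > 0.1 else soft_b).append(b)
```

With this setup the accepted events whose hardest interaction is soft cluster at larger $b$ (smaller $e(b)$) than the hard ones, exactly the bias argued for above.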
To evaluate the exponential factor, the program pretabulates the integral of $f(x_\perp)$ at the initialization stage, and further increases the Monte Carlo statistics of this tabulation as the run proceeds. The grid is concentrated towards small $x_\perp$ values, where the integral is large. For a selected $x_{\perp 1}$ value, the integral is obtained by interpolation. After multiplication by the known $e(b)$ factor, the exponential factor may be found.
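The pretabulation step can be sketched as follows. The spectrum $f(x) = 0.1/x^2$, the grid size and the use of trapezoidal integration are illustrative assumptions (the actual program's tables are not reproduced here); the toy spectrum is chosen because its tail integral is known in closed form, so the interpolation can be checked:

```python
import math

XMIN = 1e-3

def f(x):
    # Toy spectrum with a known tail integral I(x) = 0.1*(1/x - 1).
    return 0.1 / (x * x)

# Logarithmically spaced grid: concentrated towards small x, where I is large.
M = 400
grid = [XMIN * (1.0 / XMIN) ** (i / (M - 1)) for i in range(M)]

# Tabulate I(x) = ∫_x^1 f dx' by trapezoidal integration from x = 1 down.
table = [0.0] * M
for i in range(M - 2, -1, -1):
    dx = grid[i + 1] - grid[i]
    table[i] = table[i + 1] + 0.5 * (f(grid[i]) + f(grid[i + 1])) * dx

def I_interp(x):
    """Tail integral, read back by linear interpolation in the table."""
    lo, hi = 0, M - 1
    while hi - lo > 1:          # binary search for the bracketing cell
        mid = (lo + hi) // 2
        if grid[mid] <= x:
            lo = mid
        else:
            hi = mid
    t = (x - grid[lo]) / (grid[hi] - grid[lo])
    return table[lo] + t * (table[hi] - table[lo])
```

A run-time lookup then costs only a binary search plus one linear interpolation, regardless of how expensive $f$ is to evaluate.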
In this section, nothing has yet been assumed about the form of the $p_\perp$ spectrum. As in the impact-parameter-independent case, it is possible to use a sharp cut-off at some given $p_{\perp\mathrm{min}}$ value. However, now each event is required to have at least one interaction, whereas before events without interactions were retained and put at $p_\perp = 0$. It is therefore aesthetically more appealing to assume a gradual turn-off, so that a (semi)hard interaction can be rather soft part of the time. The matrix elements roughly diverge like $\mathrm{d}p_\perp^2 / p_\perp^4$ for $p_\perp \to 0$. They could therefore be regularized as follows. Firstly, to remove the $1/p_\perp^4$ behaviour, multiply by a factor $p_\perp^4 / (p_\perp^2 + p_{\perp 0}^2)^2$. Secondly, replace the $p_\perp^2$ argument in $\alpha_{\mathrm{s}}$ by $p_\perp^2 + p_{\perp 0}^2$. If one has included a $K$ factor by a rescaling of the $\alpha_{\mathrm{s}}$ argument, as mentioned earlier, the rescaled argument is to be shifted correspondingly.
With these substitutions, a continuous $p_\perp$ spectrum is obtained, stretching from $p_\perp = 0$ to $p_\perp = E_{\mathrm{cm}}/2$. For $p_\perp \gg p_{\perp 0}$ the standard perturbative QCD cross section is recovered, while values $p_\perp \ll p_{\perp 0}$ are strongly damped. The $p_{\perp 0}$ scale, which now is the main free parameter of the model, in practice comes out to be of the same order of magnitude as the sharp cut-off $p_{\perp\mathrm{min}}$ did, i.e. 1.5--2 GeV at collider energies, but typically about 10% higher.
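The two limits can be read off directly from the damping factor; a minimal sketch (the value $p_{\perp 0} = 2$ GeV is illustrative only):

```python
PT0 = 2.0   # regularization scale in GeV (illustrative value)

def damp(pt):
    """Regularization factor p_T^4 / (p_T^2 + p_T0^2)^2:
    -> 1 for p_T >> p_T0, -> (p_T/p_T0)^4 for p_T << p_T0."""
    return pt ** 4 / (pt ** 2 + PT0 ** 2) ** 2

def alphas_argument(pt):
    # Shifted alpha_s argument: stays of order p_T0^2 as p_T -> 0,
    # so alpha_s remains finite in the soft limit.
    return pt ** 2 + PT0 ** 2
```

The factor equals $1/4$ exactly at $p_\perp = p_{\perp 0}$, marking the transition region between the damped and the perturbative regimes.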
Above we have argued that $p_{\perp\mathrm{min}}$ and $p_{\perp 0}$ should only have a slow energy dependence, and we have even allowed for the possibility of fixed values. For the impact-parameter-independent picture this works out fine, with all events being reduced to low-$p_\perp$ two-string ones when the c.m. energy is reduced. In the variable-impact-parameter picture, the whole formalism only makes sense if $\sigma_{\mathrm{hard}} > \sigma_{\mathrm{nd}}$, see e.g. eq. (), since each event is required to contain at least one interaction. Since $\sigma_{\mathrm{nd}}$ does not vanish with decreasing energy, but $\sigma_{\mathrm{hard}}$ would do so for a fixed $p_{\perp 0}$, this means that $p_{\perp 0}$ has to be reduced significantly at low energies, possibly even more than implied by our assumed energy dependence. The more `sophisticated' model of this section therefore makes sense at collider energies, whereas it may not be well suited for applications at fixed-target energies. There one should presumably attach to a picture of multiple soft Pomeron exchanges.