
A model with varying impact parameters

Up to this point, it has been assumed that the initial state is the same for all hadron collisions, whereas in fact each collision is also characterized by a varying impact parameter $b$. Within the classical framework of the model reviewed here, $b$ is to be thought of as a distance of closest approach, not as the Fourier transform of the momentum transfer. A small $b$ value corresponds to a large overlap between the two colliding hadrons, and hence an enhanced probability for multiple interactions. A large $b$, on the other hand, corresponds to a grazing collision, with a large probability that no parton-parton interactions at all take place.

In order to quantify the concept of hadronic matter overlap, one may assume a spherically symmetric distribution of matter inside the hadron, $\rho(\mathbf{x}) \, \d ^3 x = \rho(r) \, \d ^3 x$. For simplicity, the same spatial distribution is taken to apply for all parton species and momenta. Several different matter distributions have been tried, and are available. We will here concentrate on the most extreme one, a double Gaussian

\begin{displaymath}
\rho(r) \propto \frac{1 - \beta}{a_1^3} \, \exp \left\{ - \frac{r^2}{a_1^2} \right\}
+ \frac{\beta}{a_2^3} \, \exp \left\{ - \frac{r^2}{a_2^2} \right\} ~.
\end{displaymath} (210)

This corresponds to a distribution with a small core region, of radius $a_2$ and containing a fraction $\beta$ of the total hadronic matter, embedded in a larger hadron of radius $a_1$. While it is mathematically convenient to have the origins of the two Gaussians coincide, the physics could well correspond to having three disjoint core regions, reflecting the presence of three valence quarks, together carrying the fraction $\beta$ of the proton momentum. One could alternatively imagine a hard hadronic core surrounded by a pion cloud. Such details would affect e.g. the predictions for the $t$ distribution in elastic scattering, but are of no consequence for the current topics. To be specific, the values $\beta = 0.5$ and $a_2/a_1 = 0.2$ were picked as defaults. It should be noted that the overall distance scale $a_1$ never enters the subsequent calculations, since the inelastic, non-diffractive cross section $\sigma_{\mathrm{nd}}(s)$ is taken from literature rather than calculated from the $\rho(r)$.
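
For orientation, the default double Gaussian of eq. (210) can be sketched numerically. The overall scale $a_1$ and the normalization are arbitrary, as noted above; the numbers below are purely illustrative.

```python
import math

# Double-Gaussian matter distribution, eq. (210):
# rho(r) propto (1-beta)/a1^3 * exp(-r^2/a1^2) + beta/a2^3 * exp(-r^2/a2^2).
# beta = 0.5 and a2/a1 = 0.2 are the default values quoted in the text;
# a1 itself is arbitrary, since it drops out of the final results.
BETA = 0.5
A1 = 1.0          # arbitrary units
A2 = 0.2 * A1

def rho(r):
    """Unnormalized double-Gaussian matter density at radius r."""
    return ((1.0 - BETA) / A1**3 * math.exp(-r**2 / A1**2)
            + BETA / A2**3 * math.exp(-r**2 / A2**2))

# The core term dominates at small r, the halo term at large r:
assert rho(0.05) > 10.0 * rho(0.5)   # pronounced core enhancement
assert rho(0.0) > rho(0.5) > rho(2.0)
```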

Compared to other shapes, like a simple Gaussian, the double Gaussian tends to give larger fluctuations, e.g. in the multiplicity distribution of minimum-bias events: a collision in which the two cores overlap tends to have a strongly increased activity, while those where they do not are rather less active. One also has a biasing effect: hard processes are more likely when the cores overlap, so hard scatterings are associated with an enhanced multiple-interaction rate. This provides one possible explanation for the experimental `pedestal effect' [UA187]. Recent studies of CDF data [Fie02,Mor02] have confirmed that indeed something more peaked than a single Gaussian is required to understand the transition from minimum-bias to underlying-event activity.

For a collision with impact parameter $b$, the time-integrated overlap ${\cal O}(b)$ between the matter distributions of the colliding hadrons is given by

\begin{displaymath}
{\cal O}(b) \propto \int \d t \int \d ^3 x \, \rho(x,y,z) \, \rho(x+b,y,z+t)
\propto \frac{(1 - \beta)^2}{2a_1^2} \exp \left\{ - \frac{b^2}{2a_1^2} \right\}
+ \frac{2\beta(1 - \beta)}{a_1^2 + a_2^2} \exp \left\{ - \frac{b^2}{a_1^2 + a_2^2} \right\}
+ \frac{\beta^2}{2a_2^2} \exp \left\{ - \frac{b^2}{2a_2^2} \right\} ~.
\end{displaymath} (211)

The necessity to use boosted $\rho(\mathbf{x})$ distributions has been circumvented by a suitable scale transformation of the $z$ and $t$ coordinates.
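
The three-term structure of eq. (211), with the cross term between the core and halo Gaussians written out, can be put into a small function. The normalization is arbitrary and dropped, and the default $\beta = 0.5$, $a_2/a_1 = 0.2$ values are assumed for illustration.

```python
import math

BETA, A1 = 0.5, 1.0   # defaults from the text; a1 in arbitrary units
A2 = 0.2 * A1

def overlap(b):
    """Time-integrated overlap O(b) of two double Gaussians, eq. (211),
    up to an (irrelevant) overall normalization. The middle term is the
    cross term between the core and halo Gaussians."""
    return ((1 - BETA)**2 / (2 * A1**2) * math.exp(-b**2 / (2 * A1**2))
            + 2 * BETA * (1 - BETA) / (A1**2 + A2**2)
              * math.exp(-b**2 / (A1**2 + A2**2))
            + BETA**2 / (2 * A2**2) * math.exp(-b**2 / (2 * A2**2)))

# O(b) falls monotonically: grazing collisions have little matter overlap.
assert overlap(0.0) > overlap(0.5) > overlap(2.0)
```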

The overlap ${\cal O}(b)$ is obviously strongly related to the eikonal $\Omega(b)$ of optical models. We have kept a separate notation, since the physics context of the two is slightly different: $\Omega(b)$ is based on the quantum mechanical scattering of waves in a potential, and is normally used to describe the elastic scattering of a hadron-as-a-whole, while ${\cal O}(b)$ comes from a purely classical picture of point-like partons distributed inside the two colliding hadrons. Furthermore, the normalization and energy dependence are realized differently in the two formalisms.

The larger the overlap ${\cal O}(b)$ is, the more likely it is to have interactions between partons in the two colliding hadrons. In fact, there should be a linear relationship

\begin{displaymath}
\langle \tilde{n}(b) \rangle = k {\cal O}(b) ~,
\end{displaymath} (212)

where $\tilde{n} = 0, 1, 2, \ldots$ counts the number of interactions when two hadrons pass each other with an impact parameter $b$. The constant of proportionality, $k$, is related to the parton-parton cross section and hence increases with c.m. energy.

For each given impact parameter, the number of interactions is assumed to be distributed according to a Poisson distribution. If the matter distribution has a tail to infinity (as the double Gaussian does), events may be obtained with arbitrarily large $b$ values. In order to obtain finite total cross sections, it is necessary to assume that each event contains at least one semi-hard interaction. The probability that two hadrons, passing each other with an impact parameter $b$, will actually undergo a collision is then given by

\begin{displaymath}
{\cal P}_{\mathrm{int}}(b) = 1 - \exp ( - \langle \tilde{n}(b) \rangle )
= 1 - \exp (- k {\cal O}(b) ) ~,
\end{displaymath} (213)

according to Poisson statistics. The average number of interactions per event at impact parameter $b$ is now
\begin{displaymath}
\langle n(b) \rangle =
\frac{ \langle \tilde{n}(b) \rangle }{ {\cal P}_{\mathrm{int}}(b) } =
\frac{ k {\cal O}(b) }{ 1 - \exp (- k {\cal O}(b) ) } ~,
\end{displaymath} (214)

where the denominator comes from the removal of hadron pairs which pass without colliding, i.e. with $\tilde{n} = 0$.
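
Eq. (214) can be sketched directly as a function of the product $k{\cal O}(b)$. Note the $0/0$ limit for vanishing argument, where the average tends to one interaction per retained event; the small-argument expansion used below is an implementation detail, not part of the text.

```python
import math

def n_avg(k_times_O):
    """Average number of interactions per *retained event* at fixed
    impact parameter, eq. (214): <n(b)> = kO(b) / (1 - exp(-kO(b))).
    The denominator removes passages with no interaction at all."""
    x = k_times_O
    if x < 1e-10:
        # Series expansion x/(1-exp(-x)) ~ 1 + x/2 avoids 0/0 as x -> 0.
        return 1.0 + 0.5 * x
    return x / (1.0 - math.exp(-x))

# Every retained event has at least one interaction, by construction:
assert n_avg(1e-14) >= 1.0
assert n_avg(5.0) > n_avg(0.5) > 1.0
```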

The relationship $\langle n \rangle = \sigma_{\mathrm{hard}}/\sigma_{\mathrm{nd}}$ was earlier introduced for the average number of interactions per non-diffractive, inelastic event. When averaged over all impact parameters, this relation must still hold true: the introduction of variable impact parameters may give more interactions in some events and fewer in others, but it does not affect either $\sigma_{\mathrm{hard}}$ or $\sigma_{\mathrm{nd}}$. For the former this is because the perturbative QCD calculations only depend on the total parton flux; for the latter it holds by construction. Integrating eq. ([*]) over $b$, one then obtains

\begin{displaymath}
\langle n \rangle =
\frac{ \int \langle n(b) \rangle \, {\cal P}_{\mathrm{int}}(b) \, \d ^2 b }
{ \int {\cal P}_{\mathrm{int}}(b) \, \d ^2 b } =
\frac{\sigma_{\mathrm{hard}}}{\sigma_{\mathrm{nd}}} ~.
\end{displaymath} (215)

For ${\cal O}(b)$, $\sigma_{\mathrm{hard}}$ and $\sigma_{\mathrm{nd}}$ given, with $\sigma_{\mathrm{hard}} / \sigma_{\mathrm{nd}} > 1$, $k$ can thus always be found (numerically) by solving the last equality.
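
A minimal numerical solution for $k$ can be sketched as follows. Note that $\langle n(b) \rangle \, {\cal P}_{\mathrm{int}}(b) = k{\cal O}(b)$, so the numerator of eq. (215) simplifies. The toy ${\cal O}(b)$, the integration cut-off `b_max`, and the target ratio $2.5$ are illustrative assumptions, not values from the text.

```python
import math

BETA, A1 = 0.5, 1.0   # default double-Gaussian parameters (illustrative scale)
A2 = 0.2 * A1

def overlap(b):
    """Toy O(b) for the double Gaussian, eq. (211); normalization arbitrary."""
    return ((1 - BETA)**2 / (2 * A1**2) * math.exp(-b**2 / (2 * A1**2))
            + 2 * BETA * (1 - BETA) / (A1**2 + A2**2)
              * math.exp(-b**2 / (A1**2 + A2**2))
            + BETA**2 / (2 * A2**2) * math.exp(-b**2 / (2 * A2**2)))

def n_mean(k, b_max=8.0, steps=2000):
    """<n> of eq. (215). Since <n(b)> P_int(b) = kO(b), this is
    int kO(b) d^2b / int (1 - exp(-kO(b))) d^2b; midpoint rule in b,
    with the common 2*pi factor from d^2b cancelling in the ratio."""
    num = den = 0.0
    db = b_max / steps
    for i in range(steps):
        b = (i + 0.5) * db
        ko = k * overlap(b)
        num += ko * b * db
        den += -math.expm1(-ko) * b * db    # 1 - exp(-ko), numerically stable
    return num / den

def solve_k(target, lo=1e-6, hi=1e3):
    """Geometric bisection for the k giving <n> = sigma_hard/sigma_nd.
    A solution exists only for target > 1, as the text requires."""
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if n_mean(mid) < target:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

k = solve_k(2.5)   # e.g. sigma_hard/sigma_nd = 2.5 (illustrative)
assert abs(n_mean(k) - 2.5) < 1e-3
```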

The absolute normalization of ${\cal O}(b)$ is not interesting in itself, but only the relative variation with impact parameter. It is therefore useful to introduce an `enhancement factor' $e(b)$, which gauges how the interaction probability for a passage with impact parameter $b$ compares with the average, i.e.

\begin{displaymath}
\langle \tilde{n}(b) \rangle = k{\cal O}(b) =
e(b) \, \langle k{\cal O}(b) \rangle ~.
\end{displaymath} (216)

The definition of the average $\langle k{\cal O}(b) \rangle$ is a bit delicate, since the average number of interactions per event is pushed up by the requirement that each event contain at least one interaction. However, an exact meaning can be given [Sjö87a].

With the knowledge of $e(b)$, the $f(x_{\perp})$ function of the simple model generalizes to

\begin{displaymath}
f(x_{\perp},b) = e(b) \, f(x_{\perp}) ~.
\end{displaymath} (217)

The naïve generation procedure is thus to pick a $b$ according to the phase space $\d ^2 b$, find the relevant $e(b)$ and insert the resulting $f(x_{\perp},b)$ into the formalism of the simple model. If at least one hard interaction is generated, the event is retained, else a new $b$ is to be found. This algorithm would work fine for hadronic matter distributions which vanish outside some radius, so that the $\d ^2 b$ phase space which needs to be probed is finite. Since this is not true for the distributions under study, it is necessary to do better.

By analogy with eq. ([*]), it is possible to ask what the probability is to find the hardest scattering of an event at $x_{\perp 1}$. For each impact parameter separately, the probability to have an interaction at $x_{\perp 1}$ is given by $f(x_{\perp 1},b)$, and this should be multiplied by the probability that the event contains no interactions at a scale $x'_{\perp} > x_{\perp 1}$, to yield the total probability distribution

\begin{eqnarray*}
\frac{\d {\cal P}_{\mathrm{hardest}}}{\d ^2 b \, \d x_{\perp 1}}
& = & f(x_{\perp 1},b) \, \exp \left\{ - \int_{x_{\perp 1}}^{1}
f(x'_{\perp},b) \, \d x'_{\perp} \right\} \\
& = & e(b) \, f(x_{\perp 1}) \, \exp \left\{ - e(b)
\int_{x_{\perp 1}}^{1} f(x'_{\perp}) \, \d x'_{\perp} \right\} ~.
\end{eqnarray*} (218)

If the treatment of the exponential is deferred for a moment, the distribution in $b$ and $x_{\perp 1}$ appears in factorized form, so that the two can be chosen independently of each other. In particular, a high-$p_{\perp}$ QCD scattering or any other hard scattering can be selected with whatever kinematics desired for that process, and thereafter assigned some suitable `hardness' $x_{\perp 1}$. With the $b$ chosen according to $e(b) \, \d ^2 b$, the neglected exponential can now be evaluated, and the event retained with a probability proportional to it. From the $x_{\perp 1}$ scale of the selected interaction, a sequence of softer $x_{\perp i}$ values may again be generated as in the simple model, using the known $f(x_{\perp},b)$. This sequence may be empty, i.e. the event need not contain any further interactions.
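
The acceptance step can be illustrated with a toy Monte Carlo. Everything specific here is an assumption for the sketch: the double-Gaussian ${\cal O}(b)$, the exact Rayleigh sampling of $b \propto {\cal O}(b)\,\d ^2 b$, the crude ${\cal O}$-weighted stand-in for the delicate average $\langle k{\cal O}(b) \rangle$, and the two illustrative values of the remaining integral $F_1 = \int_{x_{\perp 1}}^1 f(x'_{\perp})\,\d x'_{\perp}$.

```python
import math, random

random.seed(4711)
BETA, A1 = 0.5, 1.0
A2 = 0.2 * A1

def overlap(b):
    """Toy O(b) for the double Gaussian (arbitrary normalization)."""
    return ((1 - BETA)**2 / (2 * A1**2) * math.exp(-b**2 / (2 * A1**2))
            + 2 * BETA * (1 - BETA) / (A1**2 + A2**2)
              * math.exp(-b**2 / (A1**2 + A2**2))
            + BETA**2 / (2 * A2**2) * math.exp(-b**2 / (2 * A2**2)))

def sample_b():
    """Draw b with density proportional to O(b) b db: each Gaussian term
    of O(b) is a Rayleigh distribution in b; pick a term by its integral
    weight (1:2:1 for the defaults), then sample that Rayleigh exactly."""
    u = random.random()
    if u < 0.25:
        s = A1                                   # halo-halo term
    elif u < 0.75:
        s = math.sqrt((A1**2 + A2**2) / 2.0)     # core-halo cross term
    else:
        s = A2                                   # core-core term
    return s * math.sqrt(-2.0 * math.log(1.0 - random.random()))

# Crude stand-in for the delicate average <kO(b)>: an O-weighted grid
# mean of O itself (an assumption of this sketch, see [Sjo87a]).
num = den = 0.0
for i in range(4000):
    b = (i + 0.5) * (8.0 / 4000)
    num += overlap(b)**2 * b
    den += overlap(b) * b
O_MEAN = num / den

def e_of_b(b):
    return overlap(b) / O_MEAN

def run(F1, n_events=4000):
    """Retention rate and mean accepted b for a given remaining integral
    F1 (small F1 = hard first interaction, exponential weight near 1)."""
    kept = []
    for _ in range(n_events):
        b = sample_b()
        if random.random() < math.exp(-e_of_b(b) * F1):
            kept.append(b)
    return len(kept) / n_events, sum(kept) / len(kept)

acc_hard, b_hard = run(F1=0.05)   # hard first scattering
acc_soft, b_soft = run(F1=2.0)    # soft first scattering
assert acc_hard > acc_soft        # hard events pass the exponential easily
assert b_soft > b_hard            # surviving soft events sit at larger b
```

The two assertions reproduce the qualitative behaviour discussed in the text: the rejection step is efficient for hard processes, while soft first interactions are biased towards larger impact parameters.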

It is interesting to understand how the algorithm above works. By selecting $b$ according to $e(b) \, \d ^2 b$, i.e. ${\cal O}(b) \, \d ^2 b$, the primary $b$ distribution is maximally biased towards small impact parameters. If the first interaction is hard, by choice or by chance, the integral of the cross section above $x_{\perp 1}$ is small, and the exponential close to unity. The rejection procedure is therefore very efficient for all standard hard processes in the program -- one may even safely drop the weighting with the exponential completely. The large $e(b)$ value is also likely to lead to the generation of many further, softer interactions. If, on the other hand, the first interaction is not hard, the exponential is no longer close to unity, and many events are rejected. This pulls down the efficiency for `minimum bias' event generation. Since the exponent is proportional to $e(b)$, a large $e(b)$ leads to an enhanced probability for rejection, whereas the chance of acceptance is larger with a small $e(b)$. Among events where the hardest interaction is soft, the $b$ distribution is therefore biased towards larger values (smaller $e(b)$), and there is a small probability for yet softer interactions.

To evaluate the exponential factor, the program pretabulates the integral of $f(x_{\perp})$ at the initialization stage, and further increases the Monte Carlo statistics of this tabulation as the run proceeds. The $x_{\perp}$ grid is concentrated towards small $x_{\perp}$, where the integral is large. For a selected $x_{\perp 1}$ value, the $f(x_{\perp})$ integral is obtained by interpolation. After multiplication by the known $e(b)$ factor, the exponential factor may be found.

In this section, nothing has yet been assumed about the form of the $\d\sigma / \d p_{\perp}$ spectrum. As in the impact-parameter-independent case, it is possible to use a sharp cut-off at some given $p_{\perp\mathrm{min}}$ value. However, now each event is required to have at least one interaction, whereas before events without interactions were retained and put at $p_{\perp}= 0$. It is therefore aesthetically more appealing to assume a gradual turn-off, so that a (semi)hard interaction can be rather soft part of the time. The matrix elements roughly diverge like $\alpha_{\mathrm{s}}(p_{\perp}^2) \, \d p_{\perp}^2 / p_{\perp}^4$ for $p_{\perp}\to 0$. They could therefore be regularized as follows. Firstly, to remove the $1/p_{\perp}^4$ behaviour, multiply by a factor $p_{\perp}^4 / (p_{\perp}^2 + p_{\perp 0}^2)^2$. Secondly, replace the $p_{\perp}^2$ argument in $\alpha_{\mathrm{s}}$ by $p_{\perp}^2 + p_{\perp 0}^2$. If one has included a $K$ factor by a rescaling of the $\alpha_{\mathrm{s}}$ argument, as mentioned earlier, replace $0.075 \, p_{\perp}^2$ by $0.075 \, (p_{\perp}^2 + p_{\perp 0}^2)$.
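
The two substitutions above can be written out directly. The one-loop running coupling, the value of $\Lambda$, and $p_{\perp 0} = 2$ GeV are illustrative assumptions of this sketch, not values prescribed by the text.

```python
import math

LAMBDA = 0.2    # Lambda_QCD in GeV -- illustrative value, not from the text
PT0 = 2.0       # regularization scale p_perp0 in GeV (typical collider value)

def alpha_s(q2):
    """One-loop running coupling with nf = 5 flavours (sketch)."""
    return 12.0 * math.pi / ((33.0 - 2.0 * 5.0) * math.log(q2 / LAMBDA**2))

def dsigma_bare(pt):
    """Divergent shape alpha_s(pt^2)/pt^4 per dpt^2 (constants dropped)."""
    return alpha_s(pt * pt) / pt**4

def dsigma_reg(pt):
    """Regularized shape: the factor pt^4/(pt^2+pt0^2)^2 tames the pole and
    the alpha_s argument is shifted, giving alpha_s(pt^2+pt0^2)/(pt^2+pt0^2)^2."""
    q2 = pt * pt + PT0**2
    return alpha_s(q2) / (q2 * q2)

# Finite at pt = 0, and the perturbative form is recovered for pt >> pt0:
assert dsigma_reg(0.0) == alpha_s(PT0**2) / PT0**4
assert abs(dsigma_reg(50.0) / dsigma_bare(50.0) - 1.0) < 0.01
```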

With these substitutions, a continuous $p_{\perp}$ spectrum is obtained, stretching from $p_{\perp}= 0$ to $E_{\mathrm{cm}}/2$. For $p_{\perp}\gg p_{\perp 0}$ the standard perturbative QCD cross section is recovered, while values $p_{\perp}\ll p_{\perp 0}$ are strongly damped. The $p_{\perp 0}$ scale, which now is the main free parameter of the model, in practice comes out to be of the same order of magnitude as the sharp cut-off $p_{\perp\mathrm{min}}$ did, i.e. 1.5-2 GeV at collider energies, but typically about 10% higher.

Above we have argued that $p_{\perp\mathrm{min}}$ and $p_{\perp 0}$ should only have a slow energy dependence, and even allowed for the possibility of fixed values. For the impact-parameter-independent picture this works out fine, with all events being reduced to low-$p_{\perp}$ two-string ones when the c.m. energy is reduced. In the variable-impact-parameter picture, the whole formalism only makes sense if $\sigma_{\mathrm{hard}} > \sigma_{\mathrm{nd}}$, see e.g. eq. ([*]). Since $\sigma_{\mathrm{nd}}$ does not vanish with decreasing energy, but $\sigma_{\mathrm{hard}}$ would do that for a fixed $p_{\perp 0}$, this means that $p_{\perp 0}$ has to be reduced significantly at low energies, possibly even more than implied by our assumed energy dependence. The more `sophisticated' model of this section therefore makes sense at collider energies, whereas it may not be well suited for applications at fixed-target energies. There one should presumably turn to a picture of multiple soft Pomeron exchanges.

Stephen_Mrenna 2012-10-24