A crude option for the simulation of Bose-Einstein effects has long been included, but is turned off by default. In view of its shortcomings, alternative descriptions have been introduced that try to overcome at least some of them [Lön95].

The detailed BE physics is not that well understood, see e.g. [Lör89]. What is offered is an algorithm, more than just a parameterization (since very specific assumptions and choices have been made), and yet less than a true model (since the underlying physics picture is rather fuzzy). In this scheme, the fragmentation is allowed to proceed as usual, and so is the decay of short-lived particles like the ρ. Then pairs of identical particles, π⁺ say, are considered one by one. The Q_ij value of a pair i and j is evaluated,

  Q_ij = √((p_i + p_j)² − 4m²) ,   (260)

where m is the common particle mass. A shifted (smaller) Q'_ij is then found such that the inclusive Q distribution is modified by the Bose-Einstein enhancement factor f₂(Q) = 1 + λ exp(−Q²R²):

  ∫_0^{Q_ij} Q² dQ / √(Q² + 4m²) = ∫_0^{Q'_ij} f₂(Q) Q² dQ / √(Q² + 4m²) .   (261)

Here λ parameterizes the strength of the effect (the incoherence factor) and R is related to the size of the particle source.

The change of Q_ij can be translated into an effective shift of the three-momenta of the two particles, if one uses as an extra constraint that the total three-momentum of each pair be conserved in the c.m. frame of the event. Only after all pairwise momentum shifts have been evaluated, with respect to the original momenta, are these momenta actually shifted, for each particle by the (three-momentum) sum of the evaluated shifts. The total energy of the event is slightly reduced in the process, which is compensated by an overall rescaling of all c.m.-frame momentum vectors. It can be debated which particles to involve in this rescaling. Currently the only exceptions to using everything are leptons and neutrinos coming from resonance decays (such as W's) and photons radiated by leptons (also in initial-state radiation). Finally, the decay chain is resumed with longer-lived particles like the ω.
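The Q-shift described above can be sketched numerically. The following is a minimal illustration, not the program's actual implementation: it assumes the Gaussian parameterization f₂(Q) = 1 + λ exp(−Q²R²) and inverts the integral relation between Q and Q' by bisection; all parameter values are invented for illustration.

```python
import math

# Illustrative parameter values only.
LAMBDA = 1.0      # incoherence strength lambda
R = 3.4           # source radius in GeV^-1 (~0.67 fm)
M_PI = 0.13957    # charged-pion mass in GeV

def f2(Q):
    # Gaussian Bose-Einstein enhancement factor.
    return 1.0 + LAMBDA * math.exp(-Q * Q * R * R)

def integral(Qmax, weighted, n=2000):
    # Trapezoidal integral of Q^2/sqrt(Q^2+4m^2), times f2(Q) if weighted.
    h = Qmax / n
    total = 0.0
    for i in range(n + 1):
        Q = i * h
        val = Q * Q / math.sqrt(Q * Q + 4.0 * M_PI * M_PI)
        if weighted:
            val *= f2(Q)
        total += val if 0 < i < n else 0.5 * val
    return total * h

def shifted_Q(Q):
    # Solve for Q' by bisection: the unweighted integral up to Q must
    # equal the f2-weighted integral up to Q'. Since f2 >= 1, Q' <= Q.
    target = integral(Q, weighted=False)
    lo, hi = 0.0, Q
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if integral(mid, weighted=True) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Since f₂(Q) ≥ 1 for positive λ, the weighted integral grows faster, so the shifted Q' always comes out below the input Q: identical pairs are pulled closer together, most strongly at small Q.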

Two comments can be made. First, the Bose-Einstein effect is here interpreted almost as a classical force acting on the `final state', rather than as a quantum mechanical phenomenon on the production amplitude. This is not a credo, but just an ansatz to make things manageable. Second, since only pairwise interactions are considered, the effects associated with three or more nearby particles tend to get overestimated. (More exact, but also more time-consuming, methods may be found in [Zaj87].) Thus the input λ may have to be chosen smaller than the value one wants to get out. (On the other hand, many of the pairs of an event contain at least one particle produced in some secondary vertex, e.g. from a weak decay. This reduces the fraction of pairs which may contribute to the Bose-Einstein effects, and thus reduces the potential signal.) This option should therefore be used with caution, and only as a first approximation to what Bose-Einstein effects can mean.

Probably the largest weakness of the above approach is the issue of how to conserve the total four-momentum. It preserves three-momentum locally, but at the expense of not conserving energy. The subsequent rescaling of all momenta by a common factor (in the rest frame of the event) to restore energy conservation is purely *ad hoc*. For studies of a single Z⁰ decay, it can plausibly be argued that such a rescaling does minimal harm. The same need not hold for a pair of resonances. Indeed, studies [Lön95] show that this global rescaling scheme, which we will denote BE_0, introduces an artificial negative shift in the reconstructed W mass, making it difficult (although doable) to study the true BE effects in this case. This is one reason to consider alternatives.

The global rescaling also runs counter to our philosophy that BE effects should be local. To be more specific, we assume that the energy density of the string is a fixed quantity. To the extent that a pair of particles have their four-momenta slightly shifted, the string should act as a `communicating vessel', providing the difference to other particles produced in the same local region of the string. What this means in reality is still not completely specified, so further assumptions are necessary. In the following we discuss four possible algorithms, whereof the last two are based strictly on the local conservation aspect above, while the first two attempt a slightly different twist to the locality concept. All are based on calculating an additional shift δr_ij for some pairs of particles, where particles i and j need not be identical bosons. In the end each particle momentum will then be shifted to p'_i = p_i + Σ_j δp_ij + α Σ_j δr_ij, with the α parameter adjusted separately for each event so that the total energy is conserved.
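The role of the α parameter can be illustrated with a toy calculation. In this sketch, all momenta, masses, and shifts are invented for illustration: the BE shifts δp pull one identical pair closer and thereby lower the total energy, the compensation shifts δr push another pair apart, and α is tuned by bisection until the original energy is recovered. Three-momentum is conserved automatically because each kind of shift sums to zero.

```python
import math

def energy(p, m):
    return math.sqrt(p[0]**2 + p[1]**2 + p[2]**2 + m * m)

def add(p, q, s=1.0):
    # component-wise p + s*q
    return tuple(pi + s * qi for pi, qi in zip(p, q))

# Toy event: momenta (GeV), masses, precomputed BE shifts dp and
# compensation shifts dr. Each kind of shift separately sums to zero.
momenta = [(0.3, 0, 0), (0.1, 0, 0), (0, 0.4, 0), (0, -0.2, 0)]
masses  = [0.140, 0.140, 0.140, 0.494]
dp = [(-0.05, 0, 0), (0.05, 0, 0), (0, 0, 0), (0, 0, 0)]   # pair pulled closer
dr = [(0, 0, 0), (0, 0, 0), (0, 0.05, 0), (0, -0.05, 0)]   # pair pushed apart

def total_energy(alpha):
    return sum(energy(add(add(p, d), r, alpha), m)
               for p, m, d, r in zip(momenta, masses, dp, dr))

target = sum(energy(p, m) for p, m in zip(momenta, masses))

# Bisection for alpha: the BE shifts alone lower the energy, the dr
# shifts raise it, so a root exists in [0, 2] for this configuration.
lo, hi = 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if total_energy(mid) < target:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
```

In the real algorithm one α is solved for per event, over sums of many pairwise contributions; the structure of the balance is the same as in this four-particle toy.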

In the first approach we take into account the criticism that global event weight methods, with weights always above unity, are intrinsically unstable. It appears more plausible that weights fluctuate above and below unity. For instance, the simple pair symmetrization weight is 1 + cos(Δx · Δp), with the Gaussian form 1 + λ exp(−Q²R²) only obtained after integration over a Gaussian source. Non-Gaussian sources give oscillatory behaviours.

If weights above unity correspond to a shift of pairs towards smaller relative Q values, the below-unity weights instead give a shift towards larger Q. One is therefore led to a picture where very nearby identical particles are shifted closer, those somewhat further apart are shifted away from each other, those even further out are yet again shifted closer, and so on. Probably the oscillations dampen out rather quickly, as indicated both by data and by the global model studies. We therefore simplify by simulating only the first peak and dip. Furthermore, to include the desired damping and to make contact with our normal generation algorithm (for simplicity), we retain the Gaussian form, but the standard Gaussian is multiplied by a further factor exp(−Q²R²/9). The factor 1/9 in the exponential, i.e. a factor 3 difference in the Q variable, is consistent with data and also with what one might expect from a dampened oscillatory form, but should be viewed more as a simple ansatz than as having any deep meaning.

In the first of these algorithms, which we denote BE_3, δr_ij is then non-zero only for pairs of identical bosons, and is calculated in the same way as δp_ij, with the additional factor 1/9 in the exponential. As explained above, the δr shifts are then scaled by a common factor α that ensures total energy conservation. It turns out that the average α needed is small and negative. The negative sign is exactly what we want to ensure that δr_ij corresponds to shifting a pair apart, while the order of |α| is consistent with the expected increase in the number of affected pairs when a smaller effective radius is used. One shortcoming of the method, as implemented here, is that the resulting f₂(0) value is not quite 2 for λ = 1 but somewhat lower. This could be solved by starting off with an input λ somewhat above unity.

The second algorithm, denoted BE_32, is a modification of the BE_3 form intended to give f₂(0) = 1 + λ exactly. The ansatz is

  f₂(Q) = {1 + λ exp(−Q²R²)} {1 + αλ exp(−Q²R²/9) [1 − exp(−Q²R²/4)]} .   (263)
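As a numerical check of this kind of two-factor ansatz, f₂(Q) = {1 + λ exp(−Q²R²)}{1 + αλ exp(−Q²R²/9)[1 − exp(−Q²R²/4)]} (the exact form and parameter values in the program may differ; λ, R, and α here are invented), one can verify that the correction factor leaves the intercept f₂(0) = 1 + λ untouched and produces a below-unity dip at intermediate Q for negative α:

```python
import math

LAM, R, ALPHA = 1.0, 1.0, -0.2   # illustrative values; R sets the Q scale

def f2(Q):
    # Two-factor ansatz: Gaussian BE peak times a wider correction factor
    # that vanishes at Q = 0 and gives a dip for negative ALPHA.
    q2r2 = Q * Q * R * R
    peak = 1.0 + LAM * math.exp(-q2r2)
    corr = 1.0 + ALPHA * LAM * math.exp(-q2r2 / 9.0) * (1.0 - math.exp(-q2r2 / 4.0))
    return peak * corr
```

With these values f₂(0) = 2 exactly, f₂ dips below unity around QR ≈ 2, and tends back to 1 at large Q, i.e. the first peak and dip of an oscillatory weight are reproduced.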

In the other two schemes, the original form of δp_ij is retained, and the energy is instead conserved by picking another pair of particles that are shifted apart appropriately. That is, for each pair of identical particles i and j, a pair of non-identical particles, k and l, neither of them identical to i or j, is found in the neighbourhood of i and j. For each shift δp_ij, a corresponding δr_kl is found so that the total energy and momentum in the i, j, k, l system is conserved. However, the actual momentum shift of a particle is formed as the vector sum of many contributions, so the above pair compensation mechanism is not perfect. The mismatch is reflected in a non-unit α value used to rescale the δr terms.

The (k, l) pair should be the particles `closest' to the (i, j) pair affected by the BE shift, in the spirit of local energy conservation. One option would here have been to `look behind the scenes' and use information on the order of production along the string. However, once decays of short-lived particles are included, such an approach would still need arbitrary further rules. We therefore stay with the simplifying principle of only using the produced particles.

Looking at W⁺W⁻ (or Z⁰Z⁰) events, and a pair i, j with both particles from the same W, it is not obvious whether the (k, l) pair should also be selected only from this W or if all possible pairs should be considered. We have chosen the latter as default behaviour, but the former alternative is also studied below.

One obvious measure of closeness is small invariant mass. A first choice would then be to pick the combination that minimizes the invariant mass of all four particles. However, such a procedure does not reproduce the input f₂(Q) shape very well: both the peak height and the peak width are significantly reduced, compared with what happens in the BE_0 algorithm. The main reason is that either of k or l may have particles identical to itself in its local neighbourhood. The momentum compensation shift of k is more or less at random, and therefore tends to smear the BE signal that could otherwise be introduced relative to k's identical partner. Note that, if k and its partner are very close in Q to start with, the shift required to produce a significant BE effect between them is very small. The momentum compensation shift on k can therefore easily become larger than the BE shift proper.

It is therefore necessary to disfavour momentum compensation shifts that break up close identical pairs. One alternative would have been to share the momentum conservation shifts suitably inside such pairs. We have taken a simpler course, by introducing a suppression factor 1 − exp(−Q²_kk' R²) for particle k, where Q_kk' is the Q value between k and its nearest identical partner k'. The form is fixed such that Q_kk' = 0 is forbidden, and the subsequent rise matches the shape of the BE distribution itself. Specifically, in the third algorithm, denoted BE_m, the (k, l) pair is chosen so that the measure

  W_kl = (1 − exp(−Q²_kk' R²)) (1 − exp(−Q²_ll' R²)) / m²_ikjl   (264)

is maximized, where m_ikjl is the invariant mass of the four particles i, k, j and l.
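The selection logic can be sketched as follows. This is only an illustration, with an invented particle list and with suppression/(pair mass squared) as a stand-in closeness measure; the actual criterion used in the program may differ in detail.

```python
import math
import itertools

R = 3.4  # illustrative radius parameter, GeV^-1

def minv2(pa, ma, pb, mb):
    # squared invariant mass of a pair
    ea = math.sqrt(sum(x * x for x in pa) + ma * ma)
    eb = math.sqrt(sum(x * x for x in pb) + mb * mb)
    psum = [a + b for a, b in zip(pa, pb)]
    return (ea + eb) ** 2 - sum(x * x for x in psum)

def Q2(pa, ma, pb, mb):
    # Q^2 = m^2(pair) - (ma + mb)^2
    return minv2(pa, ma, pb, mb) - (ma + mb) ** 2

# Toy particle list: (identity, momentum in GeV, mass); values invented.
parts = [("pi+", (0.3, 0, 0), 0.140), ("pi+", (0.1, 0, 0), 0.140),
         ("pi-", (0, 0.4, 0), 0.140), ("K+", (0, -0.2, 0), 0.494),
         ("pi-", (0.2, 0.1, 0), 0.140)]

def suppression(k):
    # 1 - exp(-Q^2 R^2) w.r.t. the nearest identical partner; 1 if none.
    idk, pk, mk = parts[k]
    qs = [Q2(pk, mk, p, m) for n, (i, p, m) in enumerate(parts)
          if n != k and i == idk]
    if not qs:
        return 1.0
    return 1.0 - math.exp(-min(qs) * R * R)

def pick_pair(i, j):
    # choose a compensation pair (k, l): non-identical particles, neither
    # identical to the BE pair, maximizing suppression / pair mass^2
    best, best_w = None, -1.0
    for k, l in itertools.combinations(range(len(parts)), 2):
        if i in (k, l) or j in (k, l):
            continue
        if parts[k][0] == parts[l][0]:
            continue  # k and l must be non-identical
        if parts[k][0] == parts[i][0] or parts[l][0] == parts[i][0]:
            continue  # neither may be identical to the BE pair
        w = suppression(k) * suppression(l) / minv2(
            parts[k][1], parts[k][2], parts[l][1], parts[l][2])
        if w > best_w:
            best, best_w = (k, l), w
    return best
```

The suppression factor vanishes when a candidate sits on top of an identical partner, so such particles are never chosen for compensation, and it approaches unity for isolated particles like the lone kaon above.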

The fourth algorithm, denoted BE_λ, is inspired by the so-called λ measure [And89] (not to be confused with the incoherence parameter λ above). It corresponds to a string length in the Lund string fragmentation framework. It can be shown that partons in a string are colour-connected in a way that tends to minimize this measure. The same is true for the ordering of the produced hadrons, although with large fluctuations. As above, having identical particles nearby to k or l gives undesirable side effects. Therefore the selection is made so that the measure

  W_kl = (1 − exp(−Q²_kk' R²)) (1 − exp(−Q²_ll' R²)) exp(−λ_ikjl)   (265)

is maximized, where λ_ikjl is the λ measure of the four-particle configuration.
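The λ measure itself is not defined in this section; as a rough illustration (a common string-length approximation, not necessarily the exact form of [And89]), one can take λ ≈ Σ ln(m²_{i,i+1}/m₀²) over adjacent pairs in a given ordering, with m₀ a typical hadronic scale. The invented momenta below show the key property exploited here: an ordering that keeps nearby particles adjacent gives a smaller measure.

```python
import math

M0 = 0.3  # illustrative hadronic mass scale in GeV

def m2(pa, pb, m=0.140):
    # squared invariant mass of two particles of common mass m (GeV)
    ea = math.sqrt(sum(x * x for x in pa) + m * m)
    eb = math.sqrt(sum(x * x for x in pb) + m * m)
    psum = [a + b for a, b in zip(pa, pb)]
    return (ea + eb) ** 2 - sum(x * x for x in psum)

def lam(order):
    # crude lambda-measure proxy: sum of log pair masses along the chain
    return sum(math.log(m2(order[i], order[i + 1]) / (M0 * M0))
               for i in range(len(order) - 1))

p1, p2, p3 = (1.0, 0, 0), (0.5, 0, 0), (0.1, 0, 0)
good = lam([p1, p2, p3])   # momentum-ordered chain
bad  = lam([p2, p1, p3])   # p1 inserted out of order
```

Hadrons produced along the string tend, with fluctuations, to follow the small-λ ordering, which is why the measure can serve as a proxy for `closeness along the string' when choosing the compensation pair.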

The main switches and parameters affecting the Bose-Einstein algorithm
are `MSTJ(51) - MSTJ(57)` and `PARJ(91) - PARJ(96)`.