Dark Energy turns out to be an amazingly versatile substance, and as a consequence it has been "discovered" many times in the history of cosmology. It can be used to keep the stars fixed firmly in the firmament (Einstein's static universe). It can be used to resolve age discrepancies (the geological timescale). It can be used to create a universe that obeys the "perfect cosmological principle" (steady-state). It can be used as "filler" when you don't have enough baryonic + cold dark matter to make the universe critically bound (post-inflation). Only recently has it been used to account for the observed acceleration in the expansion rate of the universe.
The first (and for a while the only) two nontrivial models of the universe were by Einstein and de Sitter in 1917 (nicely summarized in de Sitter 1917, MNRAS, 78, 3). What is interesting is that these models were created before anyone had a clear understanding of what the universe actually is! In 1917 the canonical picture of the "universe" was that it consisted of the nearby stellar disk of the Milky Way, with the sun near the center. Spiral nebulae were recognized as a distinct class of objects, but their distances and relation to the Milky Way were still a matter of conjecture. It was in 1918 that Shapley recognized that the center of the Milky Way is off in the bulge and that the sun is simply in the periphery of the Galactic system. The confirmation of the spiral nebulae as being distant counterparts of our own Milky Way came in 1923 with Hubble's discovery of Cepheid variables in the Andromeda Galaxy. It still took a few years for astronomers to fully appreciate that it is the spiral nebulae, not the stars or globular clusters in the Milky Way, that constitute the universe at large.
Einstein's model was static - both the relative velocities and relative accelerations of particles were zero. In modern language, his model had two components - zero-pressure baryons and dark energy - with the baryonic density being twice that of the dark energy density. Additionally, his universe was "closed", with the radius of curvature being uniquely related to the dark energy (and thus baryonic) density. (Luminet, arXiv:0704.3579, points out that, for Einstein, having the universe being closed was more fundamental than it being static.)
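In modern notation, the static conditions (and the factor of two between the densities) can be read off the Friedman equations; this is a sketch in today's language, not how Einstein wrote it in 1917:

```latex
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho + \frac{\Lambda c^{2}}{3} - \frac{k c^{2}}{a^{2}},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\rho + \frac{\Lambda c^{2}}{3}.
% Setting \ddot a = 0 gives \Lambda = 4\pi G \rho / c^2, i.e.
% \rho = 2\rho_\Lambda  with  \rho_\Lambda \equiv \Lambda c^2 / (8\pi G).
% Setting \dot a = 0 as well then forces k = +1 with curvature radius
% a = \Lambda^{-1/2}: the radius is uniquely fixed by the dark energy density.
```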
At this point it is worth mentioning a couple of features of his model that were not entirely appreciated at the time but help distinguish it from the de Sitter model. In general, one's choice of coordinate system is arbitrary; however, some choices are more natural than others. In modern language we would say that the natural coordinates for a model that satisfies the cosmological principle (homogeneity and isotropy) are those of the Robertson-Walker metric. In this metric, the radial coordinate is a "comoving distance" - a particle at rest with respect to the radial coordinate will remain so (i.e., a line of constant comoving distance is a geodesic.) The time coordinate is "cosmic time" - it is the proper time kept by a particle at a fixed radial coordinate. If one distributes particles uniformly in comoving distance (say, at a fixed coordinate time), then their space density is also uniform at that time. Not all geodesics (paths of freely falling particles) need be lines of constant comoving distance; any particle on such a path is said to have a "peculiar velocity" relative to the local comoving coordinate at any particular time. (If one had a uniform population of such particles with randomly oriented peculiar velocities, such a population would be described as having both a density and a pressure, still consistent with the properties of the RW metric).
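For reference, the Robertson-Walker metric in its modern form (k = +1, 0, -1 for closed, flat, and open spatial sections):

```latex
ds^{2} = -c^{2}\,dt^{2} + a^{2}(t)\left[\frac{dr^{2}}{1-kr^{2}}
  + r^{2}\left(d\theta^{2} + \sin^{2}\!\theta\,d\phi^{2}\right)\right]
% t is cosmic time, r the comoving radial coordinate, a(t) the scale factor.
% A particle at fixed (r, theta, phi) follows a geodesic and keeps proper time t.
```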
Einstein's model (Eq. 8A in de Sitter's paper) was written in proper Robertson-Walker form. Thus, there was the implicit assumption that stars were located at fixed radial coordinates and distributed uniformly in this coordinate. Also implicit in Einstein's model was that the velocities of stars are small enough (compared to the speed of light) that their pressure is negligible.
The de Sitter metric (Eq. 8B in his paper) corresponded to a universe made purely of dark energy. Any baryonic component, such as stars or galaxies, would necessarily have a negligible density compared to the dark energy. A major problem with the de Sitter metric was that it was not written in proper Robertson-Walker form but rather with coordinates chosen so as to make the metric coefficients independent of time. Such coordinates did not satisfy the cosmological principle (or, as Lemaitre would say, they violated the Copernican principle that there should be no preferred location in the universe.) A line of constant radial coordinate was not a geodesic but rather represented an accelerated observer. The time coordinate was not cosmic time. Freely falling particles had large peculiar velocities relative to lines of constant radial coordinate. A surface of constant coordinate time had spherical geometry. Two freely falling particles initially at rest relative to one another would eventually separate at exponentially growing speed (the "scattering" effect). However, there was no obvious way to distribute stars or galaxies such that they had constant density at some particular time.
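The static form de Sitter chose can be written in modern notation as (a sketch):

```latex
ds^{2} = -\left(1-\frac{r^{2}}{R^{2}}\right)c^{2}\,dt^{2}
  + \frac{dr^{2}}{1-r^{2}/R^{2}} + r^{2}\,d\Omega^{2},
\qquad R = \sqrt{3/\Lambda}.
% The coefficients are time-independent, but a worldline of constant r is
% accelerated (not a geodesic), and t is not cosmic time.
```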
Indeed, in a sense the de Sitter "model" stalled cosmology for 10 years. It survived, however, because particles on geodesics displayed motion relative to the origin, and Slipher's measurement of galaxy radial velocities showed that galaxies had large (albeit mainly positive) velocities, whereas Einstein's universe was static. [This favoritism toward de Sitter was somewhat specious because large peculiar velocities were certainly allowed in Einstein's universe as well; it is just that Einstein's model did not require them, whereas in de Sitter's model, which wasn't even a complete model, they were inevitable.]
The de Sitter model was reworked into its more familiar Robertson-Walker form by Lanczos (1922 - still need to dig up this paper!) and Robertson (1928); in both cases a coordinate transformation was used, resulting in a metric that was dynamic (explicit time dependence in the metric coefficients). Curiously, the transformation converted the spherical space of de Sitter into a flat space of a k=0 critically bound universe. (It is often not recognized that the geometry of the space-part of a metric is not absolute but depends on the choice of coordinates.) Robertson additionally derived the Hubble law (eq. 17) and also tied it to observations in the same way that Lemaitre had done in 1927 (described below). Otherwise, Robertson was more interested in the mathematical properties of the metric (important work but not expanding on de Sitter's work particularly). It is not clear if Robertson was aware of prior papers by Lanczos or Lemaitre.
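The end product of such a transformation is the now-familiar flat-slicing form of de Sitter space (modern notation; a sketch, not the notation of Lanczos or Robertson):

```latex
ds^{2} = -c^{2}\,dt^{2} + e^{2Ht}\left(dr^{2} + r^{2}\,d\Omega^{2}\right),
\qquad H = c\sqrt{\Lambda/3}.
% Surfaces of constant t are now flat (k = 0), even though surfaces of constant
% coordinate time in the static form were spherical: the geometry of the space
% part depends on the choice of time slicing.
```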
In 1922 and 1924 Friedman published his two well-known papers on expanding models of the universe. These were the first to incorporate both baryonic matter and dark energy in a dynamic model. Friedman also considered both open and closed universes. Friedman's metrics were all of Robertson-Walker form. Interestingly, in the 1922 paper, Friedman became the first to suggest taking λ = 0.
In 1927, Lemaitre wrote his seminal paper on a model of an expanding universe. In part he simply repeated the ground covered by Friedman. Lemaitre's model had three components - baryonic matter, dark energy, and radiation - although he dropped the radiation term for most of the paper. The model was closed (positive curvature), and the curvature was chosen so that it started from an Einstein universe at t = -∞ (not a big bang), infinitesimally perturbed into the expanding state. As it expanded, the dark energy density remained constant while the baryonic density dropped. The model thus evolved asymptotically to a de Sitter universe, now written in its Robertson-Walker form. The model had two free parameters - e.g., the current density of baryons and the total amount of expansion since the initial state. Lemaitre was the first to constrain a two-parameter model with observational data. He took the baryonic density from Hubble's direct measurement in 1926. To constrain the second parameter, he took the energy (Lemaitre) equation, showed how one could relate the density and expansion factor to the Hubble constant, then estimated the Hubble constant itself from data in the literature. He found that the universe had expanded by a factor of 20 by today. This expansion was large enough that the universe would be dark-energy dominated (ΩB ~ 10^-4).
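The quoted ΩB ~ 10^-4 is consistent with simple dilution arithmetic: baryon density falls as a^-3 while the dark energy density stays fixed, starting from the Einstein-state ratio of 2:1 mentioned above. A rough check (my arithmetic, not Lemaitre's numbers):

```python
# Start from an Einstein static state, where rho_B = 2 * rho_L.
rho_L = 1.0           # dark energy density, constant (arbitrary units)
rho_B0 = 2.0 * rho_L  # baryon density in the initial Einstein state

expansion = 20.0                # Lemaitre's total expansion factor
rho_B = rho_B0 / expansion**3   # baryons dilute as a^-3

ratio = rho_B / rho_L
print(ratio)  # 2 / 20^3 = 2.5e-4, i.e. of order 10^-4
```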
Even though Lemaitre's paper lay buried for 3 years, it was sufficiently forward-looking that it still had a big impact when Eddington unearthed it.
de Sitter 1930 - Explored more variations on Lemaitre's 1927 model, including those with a "big bang" type of beginning. These involved varying the ratio of lambda to baryonic mass and initial energy (or curvature).
Eddington 1931 (Proceedings Roy. Soc. London, A133, 605) derived a value for the cosmological constant: λ = (2Gm_p/π)^2 (2πm_e α/h)^4 = 9.79 x 10^-55 cm^-2. [CAUTION: Eddington's α is the INVERSE of the normal fine structure constant!!! I.e., it is 137.]
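Eddington's expression can be checked against modern constant values (cgs); his 1931 inputs differed slightly, and with my numbers the result lands within about 2% of his quoted 9.79 x 10^-55 cm^-2:

```python
from math import pi

# Modern cgs values (Eddington's 1931 constants differed slightly)
G = 6.674e-8       # cm^3 g^-1 s^-2
m_p = 1.6726e-24   # g, proton mass
m_e = 9.1094e-28   # g, electron mass
h = 6.6261e-27     # erg s, Planck constant
alpha = 137.036    # Eddington's alpha = INVERSE fine structure constant

lam = (2 * G * m_p / pi)**2 * (2 * pi * m_e * alpha / h)**4
print(lam)  # ~9.9e-55 cm^-2, close to Eddington's 9.79e-55
```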
Lemaitre (1934, PNAS, 20, 12) noted explicitly that dark energy acts like matter with negative pressure: "In order that absolute motion, i.e., motion relative to the vacuum, may not be detected, we must associate a pressure p = -ρc^2 to the density of energy ρc^2 of the vacuum. This is essentially the meaning of the cosmical constant λ which corresponds to a negative density of vacuum ρ according to ρ = λc^2/4πG = 10^-27 gr./cm^3." [I think 4π should be 8π.] [Why "negative density"? It may be because of some sign convention.]
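The 4π-vs-8π doubt can be checked numerically by inverting each relation for λ at Lemaitre's quoted ρ = 10^-27 g/cm^3 (a quick sketch with modern cgs constants; note the 4π version lands within ~5% of Eddington's 9.79 x 10^-55 cm^-2, which is at least suggestive):

```python
from math import pi

G = 6.674e-8    # cm^3 g^-1 s^-2
c = 2.9979e10   # cm/s
rho = 1e-27     # g/cm^3, Lemaitre's quoted vacuum density

lam_4pi = 4 * pi * G * rho / c**2  # Lemaitre's printed relation, inverted
lam_8pi = 8 * pi * G * rho / c**2  # modern convention, inverted

print(lam_4pi)  # ~9.3e-55 cm^-2, within ~5% of Eddington's 9.79e-55
print(lam_8pi)  # ~1.9e-54 cm^-2
```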
At this point theory had outrun observations, and cosmology moved in other directions such as Milne's kinematic relativity, Gamow's big bang, and later Bondi/Gold/Hoyle's steady state theory. (In a way, steady state theory was a variant of a de Sitter universe - Hoyle's "C-field" served nearly the same role as dark energy in driving expansion.) Once steady state crumbled (with the discovery that the universe is not invariant with redshift or, thus, with time [quasars, radio galaxies; also the microwave background]), dark energy was no longer an essential component of cosmological models (although it was always carried along.)
Recapping all the cosmological models. Friedman's models are really a three-parameter family of models. (We ignore radiation, which was not of relevance for these early models). The three parameters are the density of baryonic matter, the density of dark energy (λ), and the curvature radius of the universe, along with a sign (open or closed). An equivalent set of parameters is to take a single overall density as the only dimensional parameter and then use ΩB, ΩΛ, and Ωk as dimensionless parameters with the constraint that the sum of the Ωs is 1. Note that the "natural" value for Ωk is either 0 (flat universe), 1 (empty universe) or -∞ (static universe); in the last case an alternate set of parameters is more useful.
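The dimensionless parameterization follows from dividing the Friedman equation by H^2 (a sketch in modern notation):

```latex
H^{2} = \frac{8\pi G}{3}\rho_{B} + \frac{\Lambda c^{2}}{3} - \frac{k c^{2}}{a^{2}}
\;\Longrightarrow\;
1 = \Omega_{B} + \Omega_{\Lambda} + \Omega_{k},
% with
\Omega_{B} \equiv \frac{8\pi G\rho_{B}}{3H^{2}},\quad
\Omega_{\Lambda} \equiv \frac{\Lambda c^{2}}{3H^{2}},\quad
\Omega_{k} \equiv -\frac{k c^{2}}{a^{2}H^{2}}.
% For the Einstein static universe H -> 0, so the Omegas diverge, and the
% dimensional parameters (rho_B, Lambda, curvature radius) are the better set.
```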
H. Robertson, Rev Mod Phy, 1933, 5, 62. Cites to a passage by Einstein in a 1931 book, wherein Albert says he now prefers λ=0. [As an aside, Robertson also talks about Hubble's estimate of the density of "luminous" matter and conjectures that the total density is a factor of "between 1 and 1000?" times higher. Good shootin', Bob!]
W. Mattig, Astronomische Nachrichten, 284, 109 (1958), derives the by-now standard equations for luminosity distance, etc. He starts with the Friedman equations and immediately assumes Λ = 0.
Sandage 1961, Ap. J. 133, 355. Same thing. He writes the equations with Λ, then sets it to 0.
Robertson, 1955, PASP, 67, 82. Same thing, but with more commentary. "... the questionable Λ ..." A driver for a non-zero lambda was that it would resolve the age problem. With the old H calibration, the universe was closed but required a positive Lambda. With the newer calibration, the universe could be flat (k=0) and have lambda=0.
"Concerning this Λ we have no a priori evidence whatsoever; in particular, it may well be zero, as Einstein, who originally raised this spook in the ether, would now have it. But here I shall provisionally retain Λ as an unknown parameter, both to show its influence on the models and to allow for the possibility that this theoretically allowable term may more legitimately arise in some future, more comprehensive field theory."
"Before the recent change of distance scale, the geological evidence made it appear necessary to deal with a region lying above the curve