Dark Energy is Real. Really?

I don’t have much time to post today after spending all morning in a meeting about Assuring a Quality Experience in the Graduate College and in between reading project reports this afternoon.

However, I couldn’t resist a quickie just to draw your attention to a cosmology story that’s made it into the mass media, e.g. BBC Science. This concerns the recent publication of a couple of papers from the WiggleZ Dark Energy Survey which has used the Anglo-Australian Telescope. You can read a nice description of what WiggleZ (pronounced “Wiggle-Zee”) is all about here, but in essence it involves making two different sorts of measurements of how galaxies cluster in order to constrain the Universe’s geometry and dynamics. The first method is the “wiggle” bit, in that it depends on the imprint of baryon acoustic oscillations in the power-spectrum of galaxy clustering. The other involves analysing the peculiar motions of the galaxies by measuring the distortion of the clustering pattern introduced in redshift space; redshifts are usually denoted z in cosmology so that accounts for the “zee”.

The paper describing the results from the former method can be found here, while the second technique is described there.

This survey has been a major effort by an extensive team of astronomers: it has involved spectroscopic measurements of almost a quarter of a million galaxies, spread over 1000 square degrees on the sky, and has taken almost five years to complete. The results are consistent with the standard ΛCDM cosmological model, and in particular with the existence of the dark energy that this model implies, but which we don’t have a theoretical explanation for.

This is all excellent stuff and it obviously lends further observational support to the standard model. However, I’m not sure I agree with the headline of the press release put out by the WiggleZ team, Dark Energy is Real. I certainly agree that dark energy is a plausible explanation for a host of relevant observations, but do we really know for sure that it is “real”? Can we really be sure that there is no other explanation? WiggleZ has certainly produced evidence that’s sufficient to rule out some alternative models, but that’s not the same as proof. I worry when scientists speak like this, with what sounds like certainty, about things that are far from proven. Just because nobody has thought of an alternative explanation doesn’t mean that none exists.

The problem is that a press release entitled “dark energy is real” is much more likely to be picked up by a newspaper, radio or TV editor than one that says “dark energy remains best explanation”….


28 Responses to “Dark Energy is Real. Really?”

  1. You don’t say “Wiggle-Zed”?

  2. I thought their main claim was that now we have supernovae and galaxy evidence for the phenomenon. Independent evidence is a good thing. Does the CMBR support Dark Energy?

    About all I’ve heard that’s known for sure is that if Dark Energy is real, then it’s repulsive.

  3. Chris Blake Says:

    Hi Peter,

    Keen reader of your blog of course, so I couldn’t resist posting a reply! Thanks very much for your description of our work, and I certainly 100% agree with your conclusion. The starting point was of course that the cosmological constant currently favoured by the data is somehow a “real material” (whatever that might be) on one side of Einstein’s equations, whereas a modification to gravity would appear on the other side. But that interpretation is certainly open to further debate and exploration by many other datasets and theories.

    Media work is something that I find quite difficult personally but nonetheless very interesting. How do we strike the balance between on the one hand wanting high scientific accuracy, which I suspect often requires a lot of background to be put across, but on the other hand boiling things down to a quick, snappy message? The caveats and qualifications required by science often don’t seem to lend themselves well to a “sound bite”.

    On the other hand, hopefully even sound bites can put across the idea to the public that in astronomy and cosmology we can try and address some interesting questions which might be worth funding. Although I sometimes don’t know if the preponderance of “dark matter” and “dark energy” in the public domain of cosmology stories appears as a positive thing, in that our data contain real, exciting puzzles to solve about the Universe, or a negative thing, in that it must sometimes appear that we are “making things up” !

    Thanks again for your blogs over the years, always interesting.

    Chris

  4. Karl Glazebrook Says:

    Peter, if scientists didn’t oversimplify they would never get anything in the media. I don’t think I’ve ever seen a scientific press release which a specialist wouldn’t be able to pick holes in.

    • telescoper Says:

      Karl,

      To simplify is good, but to oversimplify is not. It’s the sort of thing that leads to public distrust of scientists. It’s a fine line, of course, but it’s an important one.

      Peter

  5. John Peacock Says:

    Peter: it’s neither wiggle-zee nor wiggle-zed. You just say wiggles, but possibly with a french accent (or so I thought – surprised Chris Blake didn’t point it out).

  6. telescoper Says:

    I could have had a laugh by saying it’s actually pronounced “fanshawe”…

  7. Yes, it’s pronounced “wiggles” and this still makes me laugh, for no apparent reason.

    • telescoper Says:

      I’m sure I’ve heard it pronounced Wiggle-Zee on more than one occasion. I must have been experiencing auditory hallucinations.

  8. Cameron Shanks Says:

    Is this the offending article?
    http://www.bbc.co.uk/news/science-environment-13462926

    • Cameron Shanks Says:

      Ah I see you have already linked, sorry.
      I agree entirely with your post however… I am decidedly a layman, but I am irritated by the manner in which dark matter and energy are referred to as fact in the media.

  9. telescoper Says:

    Now that we’ve established that it annoys Australians if it’s pronounced Wiggle-Zee or Wiggle-Zed then there’s no question how it should be spoken. 😉

  10. “I worry when scientists speak like this”

    Yes, one does need to choose one’s words carefully.

    Not really on-topic, but this reminded me of the chap whose webserver was swamped when, after a conference was over, he added an additional page to the conference web-site. Title: Submission in LaTeX. 🙂

  11. JTDwyer Says:

    I attempted to read the WiggleZ report of the growth of structure, but as an innocent bystander I could get almost nothing from it.

    That gravitation would in time increase the relative localization of massive objects while the expansion of spacetime increases the intervening space between material structures composed of clustered galaxies does not surprise me at all, since the localized effect of gravitation diminishes with distance and the effects of expansion on spacetime accumulates, regardless of whether that expansion is accelerating or decelerating.

    So, let me go back to the original Type Ia supernova studies that concluded that the expansion of the universe is accelerating. In simple terms, standard cosmological models that predicted galaxy distance from redshift agreed with the more reliable estimation of galactic distance based on type Ia SNe luminosity for more recent light emissions from nearby galaxies without using a cosmological constant parameter. However, more distant galaxies’ distance estimates disagreed unless a cosmological constant was used to indicate acceleration.

    If I understood correctly, it was the more ancient light emissions from more distant objects that indicated an increased rate of expansion. However, those more ancient light emissions from more distant, high-z objects represent the prevailing conditions looking back to the EARLIER universe, whereas the more recent light emissions from nearer, lower-z objects represent only more recent conditions of expansion.

    From that perspective of the observational data, I can only conclude the type Ia SNe data indicated that the expansion of spacetime has DECELERATED, as originally expected, not requiring any dark energy.

    Surely I’ve simply misinterpreted something fundamental, but if so no one has yet been courteous enough to explain it to me. Anyone?

    • The basic ideas behind this have been known for almost a century, and in the 1960s at the latest even the practical details were all understood. The idea (behind this and essentially all of “classical cosmology”) is that one works out how some observable quantity (apparent luminosity, angular size etc) depends on redshift for various combinations of the cosmological parameters. One then observes this quantity and the redshift (which has a negligible error and is straightforward to observe) and then fits for the values of the cosmological parameters. The key ingredient is calculating the dependence of an observable quantity on redshift for given cosmological parameters. This is not trivial and is not immediately obvious to most people, but there is absolutely no debate on this subject because it follows from the assumptions (GR describes the universe, the universe is homogeneous on large scales etc) very straightforwardly via simple mathematics. (The assumptions are also well established and in some cases today can be derived from observations and more basic assumptions.)

      Note that it is not just “these objects are fainter” etc, but rather a very precise dependency on redshift. This also makes firm predictions for redshifts which have not been observed, and these usually differ from simple ad-hoc models constructed to “explain away” the results.

      “In simple terms, standard cosmological models that predicted galaxy distance from redshift agreed with the more reliable estimation of galactic distance based on type Ia SNe luminosity for more recent light emissions from nearby galaxies without using a cosmological constant parameter. However, more distant galaxies’ distance estimates disagreed unless a cosmological constant was used to indicate acceleration.”

      At low redshift, the log of the distance is proportional to the apparent magnitude (which is a log of luminosity), the constant of proportionality being essentially the Hubble constant (which is why it is called the Hubble constant; it is in general not constant during the evolution of the universe). Thus, at low redshift one can determine the Hubble constant without knowing the other cosmological parameters. At larger redshift, deviation from linearity is determined by the cosmological parameters Omega and lambda (the cosmological constant, which in this case is actually constant in time, though one often works with a scaled version which is scaled by the Hubble constant). (Historical note: in the old days, one often used the parameter q, which is Omega/2 − lambda, because it is the first interesting term in a series expansion and thus appropriate for modest redshifts. These days, one doesn’t have to rely on series expansions and the redshifts are more than moderate. The supernova observations actually measure something closer to Omega − lambda, complementing the CMB observations, sensitive mainly to Omega + lambda.)
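To make the dependence of an observable on redshift concrete, here is a numerical sketch (my own illustration: the parameter values and H0 = 70 km/s/Mpc are assumptions for the example, not taken from any of the papers discussed). It computes the luminosity distance as a function of redshift for a concordance-like model and for an Einstein-de Sitter model with no cosmological constant:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light, km/s
H0 = 70.0            # Hubble constant, km/s/Mpc (assumed for illustration)

def luminosity_distance(z, omega_m, omega_lambda, h0=H0):
    """Luminosity distance in Mpc for an FRW model (matter + lambda)."""
    omega_k = 1.0 - omega_m - omega_lambda  # curvature term

    def inv_E(zp):
        # 1/E(z), where H(z) = H0 * E(z)
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3
                             + omega_k * (1 + zp) ** 2
                             + omega_lambda)

    dc, _ = quad(inv_E, 0.0, z)   # dimensionless comoving distance
    if abs(omega_k) < 1e-8:       # flat
        dm = dc
    elif omega_k > 0:             # open
        dm = np.sinh(np.sqrt(omega_k) * dc) / np.sqrt(omega_k)
    else:                         # closed
        dm = np.sin(np.sqrt(-omega_k) * dc) / np.sqrt(-omega_k)
    return (C_KM_S / h0) * (1 + z) * dm

z = 0.5
d_lcdm = luminosity_distance(z, 0.3, 0.7)  # concordance-like model
d_eds = luminosity_distance(z, 1.0, 0.0)   # Einstein-de Sitter, no lambda
# A standard candle at this z is fainter (larger d_L) in the lambda model;
# the magnitude difference is 5*log10(d_lcdm/d_eds).
delta_mag = 5 * np.log10(d_lcdm / d_eds)
print(f"d_L(LCDM) = {d_lcdm:.0f} Mpc, d_L(EdS) = {d_eds:.0f} Mpc, "
      f"delta_m = {delta_mag:.2f} mag")
```

At z = 0.5 the model with a cosmological constant gives a larger luminosity distance, i.e. a standard candle appears a few tenths of a magnitude fainter, which is the qualitative signature the supernova surveys picked up; at very low z the two models are indistinguishable.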

      I don’t follow the logic in your next-to-last paragraph, but it is certainly wrong in its conclusion.

  12. JTDwyer Says:

    Thank you very much for your explanation of cosmological models. Believe me, I’m not trying to be obstinate, but it is the paragraph that you didn’t understand and are certain is wrong that is the source of my conflict. The rest was just prologue. Please allow me to try to explain differently.

    Astronomers tend to consider they are observing objects, but in fact they are indirectly interpreting the properties of detected light to derive information about the emitting object. For example, the cosmological redshift imparted to distant light does not actually indicate the observed object’s recessional velocity relative to the observer but the expansion of spacetime.

    As I understand, as a packet of light traverses expanding spacetime the distance that it must traverse to arrive at any eventual destination (point of detection) is increased and its wavelength is linearly extended – an effect that accumulates in time for each packet of light or detected photon.

    A packet of light emitted from a distant object (derived from effects imparted indicating distance traversed) was initially subjected to the rate of spacetime expansion in effect at the moment of emission. For example, a detected photon that was emitted from a galaxy 5 Gly away indicates the prevailing effects of expansion as it traversed expanding spacetime for the past 5 billion years.

    A detected photon emitted from a galaxy 10 Gly away indicates not only the same effects of expansion as it traversed expanding spacetime for the past 5 billion years, just like the previous example, but it also reflects the effects of spacetime expansion that prevailed during the PRIOR 10 billion years.

    The difference between the redshifts, for example, of the two samples of light is that the light emitted from the more distant galaxy indicates the effects of expansion imparted from 5-10 billion years ago in addition to those imparted for the past 5 billion years.

    Since it was the more distant, higher-z observations that indicated a greater rate of expansion – requiring that the Omega and lambda model parameters be used to indicate increased expansion relative to the standard model parameters, it was the rate of expansion that occurred between 5 billion and 10 billion years ago that was greater than the prevailing rate during the past 5 billion years. If this is correct, the data indicates that expansion has decelerated.

    Sorry to be so tedious in my explanation, but I can’t even guess where the misunderstanding lies. I really appreciate any additional help you can provide.

    • “Since it was the more distant, higher-z observations that indicated a greater rate of expansion – requiring that the Omega and lambda model parameters be used to indicate increased expansion relative to the standard model parameters, it was the rate of expansion that occurred between 5 billion and 10 billion years ago that was greater than the prevailing rate during the past 5 billion years. If this is correct, the data indicates that expansion has decelerated.”

      First, the redshift itself tells us one thing, and one thing only: 1+z (where z is the redshift) is the ratio of the scale factor of the universe now to the scale factor of the universe when the light was emitted. Anything else we might infer requires additional assumptions.

      Second, I think your concept of how the conclusions are derived from the data is wrong. As I described, one determines the cosmological parameters from the observations. (This is not trivial, but it is absolutely well understood standard cosmology about which there is absolutely no debate since it is completely straightforward.) When one knows these parameters, one knows how the scale factor of the universe changes with time. This tells us how fast the universe is expanding now, and how fast it was expanding at any time in the past, as well as whether it was accelerating or decelerating at any given time. What comes out is that a few billion years ago, it started accelerating.
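The final step mentioned above, reading off when the acceleration began once the parameters are known, is a one-liner for a flat matter-plus-lambda model (the present-day values 0.3 and 0.7 below are illustrative assumptions, not fitted numbers). Acceleration requires the matter density, which scales as (1+z)^3, to fall below twice the cosmological-constant density:

```python
# Toy calculation: redshift at which acceleration began in a flat model
# with assumed present-day densities Omega_m = 0.3, Omega_lambda = 0.7.
# Acceleration (a_ddot > 0) requires rho_matter < 2*rho_lambda, since matter
# has zero pressure while the cosmological constant has p = -rho.
omega_m, omega_lambda = 0.3, 0.7
z_acc = (2 * omega_lambda / omega_m) ** (1.0 / 3.0) - 1
print(f"acceleration began at z ~ {z_acc:.2f}")  # → z ~ 0.67
```

For these assumed values the transition falls at z of roughly 0.7, i.e. a few billion years ago, consistent with the statement above.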

      I think your misconception comes from the idea that high-redshift objects tell us one thing and low-redshift objects another. As I said, it is the detailed dependence on redshift which is the source of our information. Of course, as you say a high-redshift object experiences the expansion of the universe for a longer time, but that is automatically taken into account.

      To simplify things somewhat, we think there is a positive cosmological constant (the value of which implies that the universe is accelerating now) since high-redshift objects are fainter than they otherwise would be. This comes from the way the so-called luminosity distance depends on the redshift. It is complicated not only because the universe is expanding, but also because space is not necessarily Euclidean.

      Maybe some things are too complicated to explain in a comment on a blog post. 😐

      Let me try one more time. At very low redshift, differences due to different values of lambda and Omega are negligible. We can use these data to get the Hubble constant (which provides a sort of overall scale factor in cosmology). At somewhat larger redshifts, differences due to different cosmological parameters come into play. When comparing two cosmological models, these differences might continue to increase with increasing redshift, but they also might increase up to a point then decrease again.

      The basic assumption is that we know the absolute luminosity. Observing the apparent luminosity gives us (by definition) the luminosity difference. The complicated part is working out how the luminosity difference depends on redshift for various cosmological models. This is highly non-trivial. If you work it out, you will find that the currently favoured cosmological model has objects at moderate redshift fainter than in a comparable model without a cosmological constant. Then, using these cosmological parameters, you can calculate the expansion history of the universe if you want to know when it was decelerating and when it was accelerating.

      Several years ago, I wrote a paper about distance calculation in cosmology. See

      http://www.astro.multivax.de:8000/helbig/research/Publications/info/angsiz.html

      or

      http://arxiv.org/abs/astro-ph/9603028

      The main idea of the paper is somewhat different (correct for the effect which local inhomogeneities in a globally homogeneous universe have on the calculation of distances), but appendix B, which summarizes equations for certain special cases when the universe is ideally homogeneous, gives some idea of what is involved. (In the general case, if the universe is ideally homogeneous, one doesn’t have a closed analytic expression, but can express the result in elliptic integrals, which is a rather complicated topic. The differential equation we present in the paper is valid in all cases, but it is not obvious how it depends on the cosmological parameters.)

    • My comment is awaiting moderation (probably because I included some URLs). Check back when it appears for my answer.

  13. JTDwyer Says:

    First, I agree this is probably too complex to clarify on a blog & will desist – thanks very much for your responses. I will read your papers with interest.

    Just to clarify a bit, as I recall the High-z Supernova Search Team seemed to use the term high-z synonymously with the term distant – that’s how I was using it and low-z (near). Briefly, I understood that they found that low-z SN Ia luminosity based distances agreed with the distances predicted by cosmological models based on the SN’s host galaxy redshift without adding ‘acceleration’. The high-z SN Ia luminosity based distances did not agree with cosmological models unless they imposed acceleration in the models.

    That suggests to me that the discrepancy between the more directly determined SN Ia distance and the relation between redshift and estimated distance only arose for the high-z or distant observations. Again, it seems to me that the primary distinction between near and far galaxy observations is that the conditions of the earlier universe affect only the farther galaxies. My primary reference is
    http://arxiv.org/abs/astro-ph/9805201v1.

    Thanks again for your patience – I know my perspective is highly unorthodox at best, wrong at worst, but sometimes that can be useful. I’ll refer to your research report…

    • While not true in all cases, at least with the luminosity distance, “high z” and “distant” are more or less interchangeable. (This does not rule out, of course, the fact that the proper distance of a high-redshift object at the time the light was emitted was less than that of a lower-redshift object now.)

      “they found that low-z SN Ia luminosity based distances agreed with the distances predicted by cosmological models based on the SN’s host galaxy redshift without adding ‘acceleration’” This is OK as far as it goes, but it doesn’t go far enough. 🙂 Basically, at low redshift, the distance derived from the redshift is independent of the cosmological parameters lambda and Omega (and depends linearly on the Hubble constant). So yes, in a sense, it is the high-z objects which are interesting, but this is to be expected. If I want to measure the curvature of the Earth by watching a ship disappear over the horizon, it is the distant ships which provide me with the information I need, since, whatever the curvature of the Earth (within reason), nearby it looks flat. (In fact, one could define “nearby” as “looks flat” and “low redshift” as “not high enough to be affected by lambda and Omega”.)

      I think I see where you’re coming from. Since the high-redshift objects provide the “signal”, how could that imply that the universe is accelerating now, since that should affect both samples (low and high redshift)? The problem is that the low-redshift objects are so near that they are not affected at all (“to first order, everything is linear” :-)). As the redshift increases, so does the discrepancy. However, the discrepancy can decrease as the redshift increases even more, depending on what cosmological models are being compared. In this sense you are right that nearer objects should show the effect, but “near” in this sense actually means the “high-z” objects, while objects of even higher redshift might show less of a discrepancy. (It is also important to keep in mind “Fainter compared to what?”. Most (in fact, all but one) cosmological models without a cosmological constant do not predict a linear relation; the question is really the form of the curve when one plots apparent magnitude against redshift and how this compares to the expectations for various cosmological models. See Figs. (4) and (5) from the paper you mentioned.)

  14. JTDwyer Says:

    Success! At least, someone who understands the subject I’m attempting to discuss and can consider it from both sides to help me understand. Physicists can be so difficult to communicate with, even when (ostensibly) speaking the same language!

    Thanks so much for the analogy of the ship sailing past the horizon – that gave me a much better basis for understanding the cosmological models (since I really can’t do the math)! Also I want to commend you for the earlier explanation of redshift as the ratio of emission-apparent scale factors – new to me and very helpful!

    [The SN Ia signal used to determine distance is a brief burst (days) peak period emission luminosity. As I understand, the diminishment of its eventual detected/apparent luminosity must accumulate linearly with distance actually traversed through expanding spacetime. That its brief burst of narrow spectrum light is also redshifted I think is strong evidence that redshift is also the accumulated physical extension of light’s wavelength, imparted by the physical extension of intervening spacetime traversed. At any rate, the correlation between luminosity and the distance the light traversed for the SN Ia samples should be nearly linear.]

    For these discussions, since the redshift of the SNe host galaxies’ broader spectrum light can be calibrated to the SNe distances, there should be no question as to whether a galaxy is considered ‘near’ or ‘far’.

    I don’t follow the meaning of: “In this sense you are right that nearer objects should show the effect, but “near” in this sense actually means the “high-z” objects, while objects of even higher redshift might show less of a discrepancy.”

    With your help (after great difficulty locating the untitled Figures 4 & 5) I was able to better understand how the models work (in very general terms). As I understand, if a data set of redshifts were processed by the model, its distance would be derived using a variety of equations intended to represent the temporally varying factors affecting redshift.

    Even the referenced report presumes that universal expansion was decelerating (‘naturally’) until at a point in time several billion years ago (determined by observational analyses) when acceleration began.

    How can that scenario be accurately represented by a model with a constant acceleration/deceleration parameter? Since the distance derived from redshift is a function of both expansion and deceleration (indexed by some inferred proxy for time?), how can an ‘acceleration’ parameter that applies increasingly to observations that are more distant AND more ancient accurately represent temporally varying universal conditions of expansion?

    It still seems to me that observational perspective is inverted. Can you possibly clarify further? Thanks again for the excellent help you’ve already given me!

      “How can that scenario be accurately represented by a model with a constant acceleration/deceleration parameter?”

      It isn’t. Apart from some special cases, the cosmological parameters lambda and Omega change with time. lambda is essentially the (truly constant) cosmological constant divided by the square of the Hubble constant, and Omega is the density divided by the square of the Hubble constant. So, as the universe expands, lambda changes since (apart from special cases) the Hubble constant changes with time. The same goes for Omega, with the additional effect that the density drops as the universe expands.

      Think of a two-dimensional parameter space of lambda and Omega. The evolution of the universe can be described by trajectories in this parameter space. Deep result: the trajectories do not cross. That means that determining lambda and Omega at any one time (like the present) determines the entire expansion history of the universe.
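A toy calculation of one such trajectory (the present-day values 0.3 and 0.7 are assumed purely for illustration): given lambda and Omega now, their values at any other scale factor a follow simply by rescaling with H(a)^2:

```python
import numpy as np

def trajectory(omega0, lambda0, a_values):
    """Omega(a) and lambda(a) along the expansion, given present-day values
    (a = 1 today). Both parameters are densities divided by H(a)^2, so they
    evolve as the Hubble parameter and the matter density change."""
    omega_k0 = 1.0 - omega0 - lambda0
    e2 = omega0 * a_values**-3 + omega_k0 * a_values**-2 + lambda0  # (H/H0)^2
    return omega0 * a_values**-3 / e2, lambda0 / e2

a = np.array([0.1, 0.5, 1.0, 2.0, 10.0])
om, la = trajectory(0.3, 0.7, a)
# Early on (small a) the model looks matter-dominated: Omega -> 1, lambda -> 0.
# Far in the future it approaches the de Sitter point: Omega -> 0, lambda -> 1.
# Fixing (Omega, lambda) at any one epoch picks out the whole trajectory.
```

Note that in the flat case Omega + lambda stays equal to 1 along the entire trajectory, which is one way of seeing that the trajectories cannot cross.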

      The deceleration parameter q is Omega/2 – lambda. In the old days, one often used the parameter sigma which is Omega/2. So, lambda is sigma – q and q is sigma – lambda. The evolution of the universe in the sigma-q plane is discussed in this wonderful paper:
      http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1966MNRAS.132..379S&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf

      If you read just one paper on cosmology, this should be it! My former colleague at Jodrell Bank, Paddy Leahy, has produced an interesting interactive version (using lambda and Omega) which is one of the few actually good uses of Java on the internet:

      http://www.jb.man.ac.uk/~jpl/cosmo/friedman.html

      “At any rate, the correlation between luminosity and the distance the light traversed for the SN Ia samples should be nearly linear.”

      By definition, apparent luminosity drops off as the square of the luminosity distance. (There are additional details: read up on the K-correction.) The idea is to calculate the luminosity distance as a function of redshift for various cosmological parameters and use the observations to determine these parameters via curve-fitting.
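The curve-fitting step can be sketched as follows. This is a toy version only: it uses noiseless distance moduli generated from a flat Omega_m = 0.3 model as stand-ins for real SN Ia data, ignores the K-correction and all observational errors, and fits the single parameter Omega_m by least squares:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

C_KM_S, H0 = 299792.458, 70.0  # H0 assumed for illustration

def distance_modulus(z, omega_m):
    """mu(z) for a flat FRW model: matter density omega_m, lambda = 1 - omega_m."""
    def inv_E(zp):
        return 1.0 / np.sqrt(omega_m * (1 + zp) ** 3 + (1 - omega_m))
    dc, _ = quad(inv_E, 0.0, z)
    d_l = (C_KM_S / H0) * (1 + z) * dc   # luminosity distance, Mpc
    return 5 * np.log10(d_l) + 25        # mu = 5 log10(d_L / 10 pc)

# Toy "data": distance moduli drawn from a flat omega_m = 0.3 model.
z_obs = np.linspace(0.05, 1.0, 20)
mu_obs = np.array([distance_modulus(z, 0.3) for z in z_obs])

def chi2(omega_m):
    model = np.array([distance_modulus(z, omega_m) for z in z_obs])
    return np.sum((mu_obs - model) ** 2)

best = minimize_scalar(chi2, bounds=(0.05, 1.0), method="bounded")
print(f"best-fit omega_m = {best.x:.2f}")  # recovers the input value, ~0.30
```

The real analyses are far more involved (errors, nuisance parameters, two-dimensional Omega-lambda fits), but the logic is the same: compute the observable as a function of redshift for trial parameters and minimise the misfit.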

      Try to digest the Stabell & Refsdal paper and spend hours at Paddy’s interactive cosmology web page.

  15. JTDwyer Says:

    Great! Sorry about the linear reference – I knew that…

    You’ve helped me a great deal already and provided some very interesting reference material. Thanks so much!

    I won’t declare conversion yet, but I have one last loose idea: if the expansion of the universe began to accelerate at z ~ 0.7, perhaps it was not due to some new factor, but that the localization (clustering) of matter and expansion of intervening space finally reached a point that eliminated universal gravitation as an effective long range inhibitor to continued expansion. Maybe the gravitational links finally broke, producing the development of large scale localized structures…

    • As the universe expands, the effects of gravity become weaker since the density drops. If there is no cosmological constant, then the deceleration approaches zero, but there is no acceleration. If the cosmological constant is positive, then there is a point where its effect becomes larger than that of gravitation, and the acceleration sets in. This has nothing to do with structure formation etc since we are talking about the average density on very large scales. (To be sure, some people have tried to construct ad-hoc scenarios to “explain away” the observations by postulating things like a) an underdense region on a very large scale and b) the fact that we are very near the middle of such a region. However, there is no independent evidence for such a scenario, and it looks unlikely on other grounds—typical of an ad-hoc scenario.)

  16. JTDwyer Says:

    Couldn’t the relatively recent development of mass structure on the scale of the ‘cosmic web’ significantly affect universal mass density and expansion?

    Aren’t both a positive cosmological constant and a negative deceleration parameter necessary to fit the data?

    Wasn’t the deceleration parameter intended to represent the effect of expansion atrophy as the energy of initial expansion becomes dispersed?

    What effect could negative atrophy represent, other than another nonphysical analytical proxy or ‘fudge factor’ like the cosmological constant?

    The few observations of SN Ia near the cusp of the apparent transition from deceleration to acceleration seem to indicate a turbulent period. Please see Figure 1.2:
    http://www.arxiv.org/abs/1010.1162

    Speaking of analytical proxies…

    This is only distantly related, but I am much more certain that galactic dark matter was a misconception: the expected Keplerian rotational curve applies only to a relatively sparse configuration of planets, each in effect independently orbiting a dominating mass (the Sun contains 99.86% of Solar System mass). The vast dispersed mass of galaxies, especially planar disc galaxies, is locally self-gravitating: their orbital velocities are primarily determined by interactions with neighbouring peer masses rather than by gravitational force emanating from some central mass.

    There have been some successful general efforts to explain the rotational characteristics of spiral galaxies using Newtonian dynamics, such as:
    http://www.arxiv.org/abs/1007.3778
    and some specific cases such as:
    http://www.iopscience.iop.org/0004-637X/679/1/373/
    These studies indicate that dark matter is not necessary to hold galaxies together: the gravitational effects of distributed masses are sufficient.

    Galactic disc objects have been employed as microlenses to test for the local presence of a dark matter halo:
    http://www.arxiv.org/abs/1103.5056

    A study of hundreds of discrete Milky Way (ordinary matter) halo objects, including satellite galaxies, globular clusters, and old stars, has been used to constrain the mass and distribution of a dark matter halo:
    http://www.adsabs.harvard.edu/abs/2005MNRAS.364..433B

    More interestingly to me, unlike the (self-gravitating) galactic disc, these more distant discrete objects do generally comply with the Keplerian rotational curve! From that direct evidence I infer that it is the independent orbits of discrete objects around a dominating mass that produces orbital velocities diminishing with distance. Distributed mass galaxies should not be required to rotate like sparse planetary systems.
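The planet-versus-disc comparison can be made quantitative. The sketch below (my own illustration, not taken from any of the linked papers) uses Freeman's classic formula for the circular speed in a thin exponential disc and compares it with the Keplerian speed around a point mass carrying the same total mass, in units where G = Sigma_0 = R_d = 1:

```python
import numpy as np
from scipy.special import i0, i1, k0, k1  # modified Bessel functions

# Units: G = 1, central surface density Sigma_0 = 1, disc scale length R_d = 1.

def v_disc(r):
    """Circular speed in an infinitely thin exponential disc (Freeman 1970):
    v^2 = 4*pi*G*Sigma_0*R_d * y^2 * [I0(y)K0(y) - I1(y)K1(y)], y = r/(2*R_d)."""
    y = np.asarray(r) / 2.0
    return np.sqrt(4 * np.pi * y**2 * (i0(y) * k0(y) - i1(y) * k1(y)))

def v_kepler(r):
    """Circular speed around a point mass equal to the disc's total mass,
    M = 2*pi*Sigma_0*R_d^2."""
    return np.sqrt(2 * np.pi / np.asarray(r))

r = np.array([2.0, 4.0, 8.0])        # radii in disc scale lengths
ratio = v_disc(r) / v_kepler(r)
# Inside ~2-3 scale lengths the disc curve lies below the equal-mass Kepler
# curve (not all the mass is enclosed yet); beyond that it actually exceeds
# it and declines more slowly, only approaching Keplerian at large radii.
```

So even without dark matter a distributed disc does not rotate like a planetary system, though whether the observed flat curves can be explained this way is exactly what the papers cited above debate.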

    • “Couldn’t the relatively recent development of mass structure on the scale of the ‘cosmic web’ significantly affect universal mass density and expansion?”

      No, since the scales, though large, are still small in the cosmological context.

      “Aren’t both a positive cosmological constant and a negative deceleration parameter necessary to fit the data?”

      In a sense, yes. One can have a positive cosmological constant and still have deceleration, but as the terms are used here one means both. But they are not completely independent: q = Omega/2 − lambda.

      “Wasn’t the deceleration parameter intended to represent the effect of expansion atrophy as the energy of initial expansion becomes dispersed?”

      No. It was intended to represent the deceleration, which decreases with time if there is no positive cosmological constant.

      “What effect could negative atrophy represent, other than another nonphysical analytical proxy or ‘fudge factor’ like the cosmological constant?”

      I don’t see the cosmological constant as a fudge factor, but rather a parameter which is determined by observations.
