Normal distribution applied to Milky Way galaxy M-Sigma relation and bulge star data

July 18, 2013

[Figure: normal distribution superposed on the Milky Way velocity-dispersion data]

This superposition is supposed to show how the M-sigma relation could be applied to a given galaxy (the Milky Way). The vertical blue lines mark +σ and −σ, one standard deviation of the normal distribution on either side of the mean. The horizontal green line is positioned at the points where the blue lines intersect the distribution curve. Values are read at the green line from the vertical velocity-dispersion axis.

Read this way, the M-sigma velocity dispersion for the Milky Way is about 100-103 km/s, which we can use to estimate the M-sigma mass of the MW central supermassive black hole.

[Figure: the M-sigma relation]

This graph was made by the author of the “M-sigma relation” article in Wikipedia. I am trying to track down his identity. No luck yet.

Take a look:

Two ten-billion-solar-mass black holes at the centers of giant elliptical galaxies

McConnell, Nicholas J.; Ma, Chung-Pei; Gebhardt, Karl; Wright, Shelley A.; Murphy, Jeremy D.; Lauer, Tod R.; Graham, James R.; Richstone, Douglas O.

Nature, Volume 480, Issue 7376, pp. 215-218 (2011)

 “Observational work conducted over the past few decades indicates that all massive galaxies have supermassive black holes at their centres. Although the luminosities and brightness fluctuations of quasars in the early Universe suggest that some were powered by black holes with masses greater than 10 billion solar masses, the remnants of these objects have not been found in the nearby Universe. The giant elliptical galaxy Messier 87 hosts the hitherto most massive known black hole, which has a mass of 6.3 billion solar masses. Here we report that NGC 3842, the brightest galaxy in a cluster at a distance from Earth of 98 megaparsecs, has a central black hole with a mass of 9.7 billion solar masses, and that a black hole of comparable or greater mass is present in NGC 4889, the brightest galaxy in the Coma cluster (at a distance of 103 megaparsecs). These two black holes are significantly more massive than predicted by linearly extrapolating the widely used correlations between black-hole mass and the stellar velocity dispersion or bulge luminosity of the host galaxy. Although these correlations remain useful for predicting black-hole masses in less massive elliptical galaxies, our measurements suggest that different evolutionary processes influence the growth of the largest galaxies and their black holes.”

    


  Of course, M-sigma works best when it is confined to galaxies of a given class. Maybe giant ellipticals constitute another such class.

  The M-sigma relation (those widely used correlations) may be written[i],[ii]:

  (1)      a)   M  =  Mbh  =  3.1 (σ/200 km s^-1)^4 × 10^8  Mʘ

  A current study, based on published black-hole masses in nearby galaxies, gives[iii]

           b)   M  =  Mbh  =  1.9 (σ/200 km s^-1)^5.1 × 10^8  Mʘ

  (2)      Solar mass[iv]  =  Mʘ  =  1.98855 × 10^30 kg

  (3)      M  =  Mbh  =  M●  =  r* v^2/κG   from eqn. (2) in the paper, by the Postulate

           Mbh  =  r* σ^2/κG  =  3.1×10^8 (σ/200,000 m s^-1)^4 Mʘ

           vσ  =  “σ” by the Postulate

  or

  (4)      Mbh  =  r* σ^2/κG  =  3.1×10^8 (σ/200,000 m s^-1)^5 Mʘ  =  M

           Milky Way mass, Mmw  =  7×10^11 Mʘ[v]

                           or   =  1–1.5×10^12 Mʘ[vi], with “Dark Matter” contributing
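As a quick numeric sketch (not the author's calculation), eqs. (1a) and (1b) can be evaluated at the σ ≈ 100 km/s read off the Milky Way plot above; the helper name is mine:

```python
# Numeric sketch of the two M-sigma calibrations in eqs. (1a)/(1b).
def m_sigma_mass(sigma_kms, coeff=3.1e8, exponent=4.0, sigma0_kms=200.0):
    """Black-hole mass in solar masses: Mbh = coeff * (sigma/sigma0)^exponent."""
    return coeff * (sigma_kms / sigma0_kms) ** exponent

m_a = m_sigma_mass(100.0)                              # eq. (1a), exponent 4
m_b = m_sigma_mass(100.0, coeff=1.9e8, exponent=5.1)   # eq. (1b), exponent 5.1

print(f"eq. (1a): Mbh = {m_a:.2e} Msun")   # about 1.94e+07
print(f"eq. (1b): Mbh = {m_b:.2e} Msun")   # about 5.5e+06
```

Both calibrations put the Milky Way's central black hole in the 10^6-10^7 Mʘ range for σ near 100 km/s.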

   We cannot have it both ways. Either bulge stars obey standard Kepler (SK) or adapted Kepler (AK). Which? Is it a mixture of SK and AK, as in eq. (6) of the paper? The Author of the paper dislikes the mixture as it appears in eq. (6). But such questions are good; they make for lots more research.

   So, astrophysicists, cosmologists and their grad students should love the Postulate.

[ii]    Ferrarese, L. and Merritt, D. (2000), “A Fundamental Relation between Supermassive Black Holes and Their Host Galaxies”, The Astrophysical Journal, 539, L9-L12.

[iii]   McConnell, N. J. et al. (2011), “Two ten-billion-solar-mass black holes at the centres of giant elliptical galaxies”, Nature, 480, 215-218.

[v]     Milky Way mass 7×10^11 Mʘ: Reid, M. J. et al. (2009), “Trigonometric Parallaxes of Massive Star-Forming Regions. VI. Galactic Structure, Fundamental Parameters, and Noncircular Motions”, The Astrophysical Journal, 700, 137-148. With solar mass Mʘ = 1.9891×10^30 kg, Mmw = 1.4×10^42 kg, computed by conventional methods.

[vi]    Milky Way mass including “Dark Matter” 1–1.5×10^12 Mʘ: McMillan, P. J. (July 2011), “Mass Models of the Milky Way”, Monthly Notices of the Royal Astronomical Society, 414(3), 2446-2457. With solar mass Mʘ = 1.9891×10^30 kg, Mmw = 2-3×10^42 kg by conventional methods.

Wikipedia, rotation velocity  =  v,  AVD

 

Estimates for the mass of the Milky Way vary, depending upon the method and data used. At the low end of the estimate range, the mass of the Milky Way is 5.8×10^11 solar masses (Mʘ), somewhat smaller than the Andromeda Galaxy. Measurements using the Very Long Baseline Array in 2009 found velocities as large as 254 km/s for stars at the outer edge of the Milky Way, higher than the previously accepted value of 220 km/s. As the orbital velocity depends on the total mass inside the orbital radius, this suggests that the Milky Way is more massive, roughly equaling the mass of the Andromeda Galaxy at 7×10^11 Mʘ within 50 kiloparsecs (160,000 ly) of its center. A 2010 measurement of the radial velocity of halo stars finds the mass enclosed within 80 kiloparsecs to be 7×10^11 Mʘ.

But we cannot apply standard Kepler, or a correlation diagram based on unadapted Kepler, to stars that obviously do not follow Kepler’s laws, as is exemplified by the flat MW velocity-dispersion diagram. But we go ahead anyway, as if we haven’t a clue and do not understand.

Most of the mass of the Galaxy appears to be matter of unknown form which interacts with other matter through gravitational but not electromagnetic forces; this is dubbed dark matter. A dark matter halo is spread out relatively uniformly to a distance beyond one hundred kiloparsecs from the Galactic Center. Mathematical models of the Milky Way suggest that the total mass of the entire Galaxy lies in the range 1-1.5×10^12 Mʘ.

 

Galactic rotation,    velocity = v,  AVD

The stars and gas in the Galaxy rotate about its center differentially, meaning that the rotation period varies with location. As is typical for spiral galaxies, the distribution of mass in the Milky Way Galaxy is such that the orbital speed of most stars in the Galaxy does not depend strongly on their distance from the center. Away from the central bulge or outer rim, the typical stellar orbital speed is between 210 and 240 km/s. Hence the orbital period of the typical star is directly proportional only to the length of the path traveled. This is unlike the situation within the Solar System, where two-body gravitational dynamics dominate and different orbits have significantly different velocities associated with them. The rotation curve (shown in the figure) describes this rotation.

If the Galaxy contained only the mass observed in stars, gas, and other baryonic (ordinary) matter, the rotation speed would decrease with distance from the center. However, the observed curve is relatively flat, indicating that there is additional mass that cannot be detected directly with electromagnetic radiation. This inconsistency is attributed to dark matter. Alternatively, a minority of astronomers propose that a modification of the law of gravity may explain the observed rotation curve. The constant rotation speed of most of the Galaxy means that objects further from the Galactic center take longer to orbit the center than objects closer in. But, in fact, they orbit faster than they would if they followed Kepler’s 3rd  law. This is actually the problem. If they orbited according to Kepler’s 3rd, they would orbit so slowly as they neared the galactic rim that the spiral arms would wrap backward multiple times around the galactic center like the mainspring of an old windup clock. So, we can actually see the anomalous velocity dispersion at work when we observe a spiral galaxy.
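The contrast between Keplerian decline and the observed flat curve can be made concrete with a toy calculation; the enclosed mass below is illustrative (roughly the Galaxy's visible stellar mass), not a fitted value:

```python
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_ENCLOSED = 1.0e41      # kg, illustrative central mass (~5e10 Msun)
kpc = 3.086e19           # metres per kiloparsec

def v_kepler(r_m):
    """Circular speed in a 1/r^2 field: v = sqrt(GM/r), falling as r^-1/2."""
    return math.sqrt(G * M_ENCLOSED / r_m)

# Keplerian prediction vs the observed ~210-240 km/s flat curve:
for r_kpc in (5, 10, 20, 40):
    v_kms = v_kepler(r_kpc * kpc) / 1e3
    print(f"r = {r_kpc:2d} kpc: Keplerian v = {v_kms:6.1f} km/s "
          f"(observed: roughly flat at 210-240 km/s)")
```

With this mass, the Keplerian speed halves every time the radius quadruples, while the real curve stays flat; that gap is the anomalous velocity dispersion the post keeps referring to.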

Black Hole Singularities and the Possibility of Two Dimensional Gravity

March 9, 2013

According to general relativity, in a 3-D universe with time, the gravitational field of all compact objects behaves as if the objects are point masses and the field strength must decline as 1/r^2. In a 3-D universe, therefore, it is said, it is impossible to support a hyperbolic 1/r gravitational field. But, black holes are different.

Why bother with the whole concept of black holes if they are not different? Collapse of matter into a black hole must not only create a singularity (within the limits imposed by the Heisenberg uncertainty principle); the spin rate or orbital frequency of the black hole’s in-falling matter must also increase without bound as the radius r decreases to values near zero below the event horizon. Attempts to explain away these singularities on the basis of a non-existent quantum gravity scheme are vacuous extrapolations of tentative hypotheses that amount to pure conjecture.

Black hole singularities exist. Einstein, through Schwarzschild and others, says so. Who claims to be more brilliant than these fellows? I appeal to authority here only because it seems to be the only thing that impresses some. If you want to claim that BH singularities are mere artifacts of an inadequate theory, show me the math.

Black holes are different. When matter and energy collapse under an infinitely strong gravitational field to a point mass that is as tiny as may be necessary to explain its properties (not necessarily to zero, the true meaning of infinity), the result is a phase change. Spacetime phase changes are S.O.P. in the repertoire of theoretical cosmologists, like Alan Guth. Let us adhere to the hydrodynamics metaphor used by Einstein in his development of GR. Flat spacetime is a massless superfluid. Helium-4 is a superfluid, but it is not massless.

To extend the metaphor, it is not hard to imagine that spacetime could undergo a phase change, just as helium-4 may. In a black hole this change involves a reduction in dimensionality. This is about the only change available to it. Analysis of the equations of GR shows that the gravitational field strength, Fg, is proportional to 1/r^(n-1), where n = the number of spatial dimensions. In a 3-D universe, Fg declines as 1/r^2. In a 2-D universe, Fg declines as 1/r.

Using basic geometry, GR allows gravitational fields that decline as 1/r^(n-1), where n is defined as above. So, if there is to be a hyperbolic SBH gravitational field, it must occur in 2-D. The gravitational field strength in its local limit is defined as flux through a surface element of an n-sphere; that surface area grows as r^(n-1), so the flux density falls as 1/r^(n-1). One can find the derivation of this relation in any good geometry textbook.
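The surface-area scaling behind the 1/r^(n-1) claim can be checked directly; the helper below uses the standard formula for the boundary of an n-ball:

```python
import math

def sphere_surface(n, r):
    """Surface 'area' of the boundary of an n-ball of radius r:
    S_n(r) = 2 * pi^(n/2) / Gamma(n/2) * r^(n-1).
    For n=2 this is the circumference 2*pi*r; for n=3 it is 4*pi*r^2."""
    return 2.0 * math.pi ** (n / 2) / math.gamma(n / 2) * r ** (n - 1)

# Flux conservation: field strength ~ (total flux) / S_n(r) ~ 1/r^(n-1).
for n in (2, 3):
    ratio = sphere_surface(n, 2.0) / sphere_surface(n, 1.0)
    print(f"n = {n}: doubling r multiplies the surface by {ratio:.0f}, "
          f"so the field falls as 1/r^{n - 1}")
```

Doubling r doubles a circle's circumference (n = 2, field falls as 1/r) but quadruples a sphere's area (n = 3, field falls as 1/r^2), which is exactly the dimensional dependence the paragraph invokes.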

So, a black hole must use the gravitational energy of in-falling matter to raise its gravitational potential, the gravitational energy level, to the 2-D “state”. We are starting to talk quantum language now.

The shape of this singular BH gravitational field strength diagram, as it is a 2-D entity embedded in a 3-D space, is a nominally flat disk or platter with a potentially infinite radius. Departing from Kerr’s terminology, I call this topology of the event horizon a “spin disk” because it arises from the infinite rotational and orbital spin rate of matter that has in-fallen toward the singularity. As a spacetime entity, this new phase ignores the event horizon and propagates outward to beyond the edge of the galaxy. It emanates from the central core of the galaxy, wherein resides any central supermassive black hole.

Here is one of the non-intuitive consequences of GR. It is known that matter in-falling toward the event horizon must experience time dilation. From our external perspective, we would perceive time for this matter as having slowed and even stopped at the event horizon. Viewed from any point outside the event horizon, time really does stop there. But from its own perspective time does not stop and such matter does indeed drop through the event horizon where it may take part in whatever processes it might (time reversed or not).

There is simultaneously an inverse-square gravitational field set up by this time-frozen matter at the event horizon and an inverse gravitational field set up by this same matter that has already in-fallen to the singularity. There is no violation of conservation laws here because no object can feel these separate effects simultaneously. If an object orbits the galactic center in the plane of the galactic disk, it feels the inverse 1/r field. If it orbits on a trajectory not aligned with the galactic plane, if it orbits chaotically, it feels the inverse-square 1/r^2 field.

This has consequences for the analysis of the orbital motion of close-in Milky Way bulge stars like S2 for the determination of the MW’s supermassive black hole mass according to Kepler’s laws. Kepler is valid for the 2-D case as well as for the 3-D case. But, it has to be adjusted for the 1/r gravitational field as does Newton’s law. No deep relativistic calculations are needed. One can determine what changes must be made in Newton’s law and Kepler’s laws by inspection.

Above, I explained how a 2-D gravitational field can exist in our 3-D universe. It must be associated with a black hole having an infinite spin rate as well as infinite density and infinite gravitational field strength. Within the bounds of Heisenberg uncertainty, these singularities must exist. There is no point in trying to explain them away using some kind of unfalsifiable overly advanced unintelligible gravitational quantum sophistry.

I show that the hyperbolic 1/r inverse gravitational field can exist as a spin disk surrounding any black hole, with said disk extending far beyond the event horizon toward infinite radius. This explains MOND and the anomalous velocity dispersion because hyperbolic 1/r gravity means that the orbital velocity, v, around a galactic center containing a black hole is v = (GMbh)^(1/2). That is, v becomes constant, dependent only on Mbh and G. This v is not only constant for a given galaxy, it is constant from galaxy to galaxy. This means that GMbh must itself be a constant, implying a new fundamental physical constant. Remember that Noether’s theorem concerning invariants, fields and symmetry is engaged whenever a fundamental constant is invoked.
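The constant-speed claim follows from a one-line force balance; here is the step written out, with the unit vector of r suppressed so that GMbh carries the dimensions of v^2 (as in the dimensional-integrity convention used later in this post):

```latex
\frac{m v^2}{r} \;=\; \frac{G M_{\mathrm{bh}}\, m}{r}
\quad\Longrightarrow\quad
v^2 = G M_{\mathrm{bh}}
\quad\Longrightarrow\quad
v = \sqrt{G M_{\mathrm{bh}}}\,,
```

independent of r: the factors of r cancel on both sides, which is precisely what cannot happen for an inverse-square force.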

But, G may not be the same G that applies in 3 dimensions. So, I call it G*. Besides by an extension of GR, one might get G* from the M‑sigma relation as well as by the anomalous velocity dispersion. But, the mass of the central galactic supermassive black holes must first be refigured on the basis of the hyperbolic field if very many of the orbits of the bulge stars that were used to get Mbh were coincident with the galactic plane. If all or most of these orbits were chaotic and not aligned with the galactic plane, the BH mass determinations may be okay.

Perhaps the main expression of the Postulate should be written v  =  (G*Mbh)^(1/2), from 2-D Newton’s law. Then G*  =  v^2/Mbh for any galaxy with any size supermassive black hole and its associated vavd  =  vσ  =  v. In order for G* to be constant, v^2 and Mbh must vary in direct proportion. We see that, from the M-sigma relation, they do.

So, we have a relation between Mbh and v^2, as vavd^2 or vσ^2, that is linear. So, the slope Mbh/v^2  =  a constant, and v^2/Mbh  =  a constant also. The Postulate says G*Mbh  =  v^2, and so G*  =  v^2/Mbh, as above.
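Under an exponent-2 relation this constancy is automatic; a hedged numeric sketch (the calibration k is illustrative, chosen only so that σ = 100 km/s returns a plausible Mbh, and g_star is my name for the Postulate's ratio):

```python
# If Mbh = k * v^2 (the exponent-2 form the Postulate argues for),
# then G* = v^2 / Mbh = 1/k comes out identical for every galaxy.
MSUN = 1.98855e30  # kg, from eq. (2)

def g_star(v_ms, m_bh_kg):
    """The Postulate's constant: G* = v^2 / Mbh."""
    return v_ms ** 2 / m_bh_kg

# Illustrative calibration: sigma = 100 km/s maps to Mbh ~ 4e6 Msun.
k = 4.0e6 * MSUN / (1.0e5) ** 2   # kg per (m/s)^2

values = []
for sigma_kms in (50, 100, 200):
    v = sigma_kms * 1e3                 # m/s
    values.append(g_star(v, k * v ** 2))
    print(f"sigma = {sigma_kms:3d} km/s: G* = {values[-1]:.3e} m^2 s^-2 kg^-1")
```

The same G* appears at every σ, by construction; the empirical question the post raises is whether real galaxies actually follow Mbh ∝ v^2 rather than the fitted exponents 4-5.1.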

When doing any calculation involving the hyperbolic (1/r) gravitational field, remember that Fg  =  G*Mm/(r r1), where r1 is the unit vector of r. This ensures dimensional integrity.

In the M-sigma relation, the exponent of v or σ has been expressed as being equal to either 4 or 5, with constants adjusted as appropriate to compensate for the difference. In the past, this was done arbitrarily, for convenience. There are now theoretical grounds to choose the exponent of v to be 2 when using the M-sigma relation for bulge stars orbiting in the plane of the galactic disk. When analyzing the AVD, one finds that vavd  =  virtually a constant within and among galaxies. When the scatter in the data is accounted for, the Postulate predicts that this will be found to be untrue and that vavd^2  =  G*Mbh, especially when the effect of massive BHs embedded in the body of each galaxy is corrected for.

I think that σ is measured too close to the singular center for the hyperbolic field to overcome chaotic effects. Perhaps v should be based on 2σ, or on the velocities of stars orbiting in the plane of the galactic disk just beyond 98% of the average orbital radii of bulge stars. This suggests the wonderful possibility of much more research.

The Postulate explains both the anomalous velocity dispersion and the M‑sigma effect.

It may not be quite this simple, because all spiral galaxies have massive black holes embedded within them besides their central black hole. Because of the flat hyperbolic (1/r) gravitational field’s longer reach, the gravitational spacetime spin disks of these BHs would tend to align with that of the central black hole, so that their hyperbolic fields would combine. So, the G* obtained from vavd may not be so pure and pristine.

The meaning of the hyperbolic gravitational field of black holes is that MOND (Modified Newtonian Dynamics, suggested by Mordehai Milgrom) is explained without recourse to Dark Matter or to modifications of Newtonian dynamics. Newton and Kepler must be understood in two dimensions, that is all.

All of the observations that are said to support Dark Matter as being, say, a huge halo of WIMPs engulfing galaxies and galactic clusters also support the hyperbolic (1/r) gravitational field postulate, even the Bullet Cluster effect.  Dark Matter 3-D maps obtained by analysis of gravitational lensing also follow logically from the Postulate.

The hyperbolic (1/r) supermassive black hole gravitational field is indeed a postulate. This means that there can be no argument against it. It must be taken at face value and carried to its logical extreme whereupon it will be either reduced to absurdity or else found to be correct.

When extrapolated to the entire universe, the hyperbolic field mimics Dark Energy too. If Alan Guth’s inflaton particle originated in 2-D space and began to roll down its own hyper-gravitational super-potential slope toward a lower energy 3-D state, the higher energy 2-D potential energy would be progressively transformed in a time dependent quantum-like transition to the new 3-D “ground state”. This potential energy would show up as apparently increasing kinematic momentum of all stars and galaxies in the universe. That is, the universe would appear to be expanding at an accelerating rate.

This is an exciting idea because the whole universe is thus to be regarded as a quantum object. It may provide a route to a falsifiable certifiable theory of quantum gravity because 2-D gravity does not lead to a gravitational catastrophe as r tends to zero, due to Heisenberg uncertainty’s prior restraint in this case. And, it is renormalizable, a prerequisite for any quantum theory of gravity. This Postulate may point to a means to prove the existence of the multiverse. If Guth is right, Hugh Everett could be right.

Galactic M-Sigma Relation and the Anomalous Stellar Velocity Dispersion

June 4, 2012

Galactic M-Sigma Relation and the Anomalous Stellar Velocity Dispersion

Inverse gravitational decline versus inverse square decline

Analyzing the implications of a black hole singularity with near infinitely tight curvature close to the center and what this means to the mathematical form of the gravitational field, one concludes that a postulated singularity requires that black hole gravity declines as 1/r, not as 1/r^2. This effective “infinitely” deep gravitational “point-mass” geometrically implies a hyperbolic gravitational field profile. So, the concept has some bizarre twists.

But, general relativity does not permit a 1/r gravitational field in 3-D + t spacetime. However, it does allow a hyperbolic field in 2-D + t spacetime. By GR, gravitational force must decline as 1/r^(n-1), where n = spatial dimensionality. If n = 2, gravity declines as 1/r. So, it is also posited (postulated) that there exists a 2-D, sub-event-horizon, hyper-spinning, centripetally induced, infinitely broad disk singularity in all central galactic SBHs. Having its mass probably concentrated nearer to the singularity center, but being spacetime in nature, the entirety of the disk singularity is immune to the event horizon of the black hole. It can therefore extend outward to far beyond the galactic rim, even to nearby galaxies within a cluster or supercluster.

This 2-D gravitational field is also quantum renormalizable. It is well known that items in a 3-D space can be projected perfectly onto a 2-D surface – the holographic principle. Might this be a simple route toward validatable, falsifiable quantum gravity? It is interesting to contemplate that a supermassive central BH with its coterie of inner bulge orbiting stars may be a quantum object obeying quantum law.

This postulated set of logical statements is immune to criticism. If otherwise logical, it cannot be argued against. It must be experimentally tested. Observation is the only choice to conclusively validate or falsify such an argument. See the definition of “postulate” given below.

Definition of a Postulate

• A Postulate is assumed to be a true statement, one that does not require proof.

• Postulates are used to derive other logical statements to solve a problem. If a problem is thereby solved, especially if proven by other data, the postulate must also be true.

• Postulates are also likened to axioms.

In other words, postulates are to be accepted at face value “for the sake of argument”, for whatever they may be worth, as if they were indisputable axioms. THEN, if a whole argument containing such postulates actually works, there may be much joy. If not, it is back to the drawing board.

Newton’s law of gravity and Kepler’s laws are all easily adjusted to accommodate the hyperbolic 1/r G-field in two dimensions plus time. Kepler’s 3rd law in 2-D is derived from 2-D Newton analogously to the 3-D derivation. It is NOT the same result as if orbiting 3-D objects were limited to a Euclidean plane.
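The adapted third law can be sketched numerically: with v = (G*M)^(1/2) constant, the period T = 2πr/v grows linearly in r, whereas standard Kepler gives T^2 ∝ r^3. The GM values and function names below are illustrative stand-ins, not fitted quantities:

```python
import math

GSTAR_M = 4.0e10   # m^2/s^2, illustrative G* * Mbh giving v = 200 km/s
GM_3D = 1.3e20     # m^3/s^2, illustrative GM for the standard-Kepler case

def period_2d(r):
    """Adapted Kepler: v = sqrt(G*M) is constant, so T = 2*pi*r/v ~ r."""
    return 2 * math.pi * r / math.sqrt(GSTAR_M)

def period_3d(r):
    """Standard Kepler: T = 2*pi*sqrt(r^3 / GM), so T^2 ~ r^3."""
    return 2 * math.pi * math.sqrt(r ** 3 / GM_3D)

r1, r2 = 1.0e20, 4.0e20   # metres; quadruple the orbital radius
print(f"adapted Kepler:  T(4r)/T(r) = {period_2d(r2) / period_2d(r1):.1f}")  # 4.0
print(f"standard Kepler: T(4r)/T(r) = {period_3d(r2) / period_3d(r1):.1f}")  # 8.0
```

Quadrupling the radius quadruples the adapted-Kepler period (linear) but multiplies the standard-Kepler period by 4^(3/2) = 8, which is the distinction between SK and AK drawn earlier in the post.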

The G-field diagram is hyperbolic when its equal gravitational force contour lines are drawn with spacing in such a way that a 1/r relation is followed to the origin where spacing approaches zero. If the contour lines are then plotted having a z axis, Flamm’s hyperboloid is the result. This is a spacetime diagram, not a gravitational potential diagram.

No inner galactic bulge stellar orbits need be fitted to raw Kepler. Kepler does not define these orbits. Kepler’s laws are used merely to analyze them. The orbits are what they are. Kepler’s 2nd law applies no matter what the form of the central force. The “adjusted” Kepler’s 3rd law follows exactly from Newton’s law of gravity with reduced dimensionality according to GR. It is “adjusted” Kepler that should be used to compute central galactic supermassive black hole mass. See the Gary Kent post on WordPress.com.

There is nothing more to prove. What there is still to be done is to compare with observation.

Mathematically, the constant velocity distribution observed in spiral galaxies is explicitly derived. This means that the M-Sigma relation is explained, because peripheral stellar v = (GM/r*)^(1/2). Also, Milgrom’s MOND constant, a_0, is derived, where a_0 = GM/(r* r_∞) = v^2/(r* r_∞). This implies that the universe must have a finite or maximum r, because a_0 is an observed finite non-zero quantity. And M, the black hole mass, may include the masses of many tens of thousands or more of very large stellar-mass black holes that are thought to be embedded in every galaxy. The unit vector of r, r*, is used to maintain dimensional integrity.

No modification of Newton’s law is required. But, Newton must be regarded in the context of a 2-D hyperbolically curved spacetime. So, gravity for black holes declines as 1/r and is not an inverse square relation.

All the other effects that have been observed that have been traced to Dark Matter are also explained in this way. These include the anomalous velocity dispersion in spiral galaxies and in clusters, the weak gravitational lensing, the Sunyaev-Zel’dovich, the Sachs-Wolfe and the Bullet Cluster effects.

The hyperbolic G-field parsimoniously explains these phenomena without appeal to any unfalsifiable hypotheses of exotic dark matter. Weakly interacting massive particles and other alien perpetrators of Dark Matter effects have been researched avidly for a very long time. They must be regarded now as unfalsifiable hypotheses because it has become clear that there is no way to prove or disprove their existence or it would have been done by now.

The hyperbolic SBH singular ultra-spin disk G-field might have mass, perhaps like Alan Guth’s inflaton field in the false vacuum. Its mass, but not its hyperbolic gravitational spacetime configuration, could be confined to below the event horizon. The horizon itself could be greatly distorted – including any surrounding plasma or photon sphere. So, a photon passing through the expansive hyper-spin singular spacetime disk would experience therein an enhanced gravitational field, just as if it had passed through a Dark Matter “halo”.

The open cell foam, network or spiderweb structure of the large scale universe is also explained by the extensiveness of the hyperbolic field and its form as a 2-D saddle shape “hyperboloid of one sheet” embedded in 3-D space. Galaxies and galactic clusters will be expected to align so that the hyperbolic surfaces of their 2-D fields tend to coincide. So, even the initial structure of the nascent universe would be influenced by supermassive BHs therein which could have formed very quickly at that time.

They might have been there from t = 0 + an instant, for all we know. After all, if the inflaton particle was like an unstable subatomic particle, it may have decayed into smaller particles including many SBHs. Some have said that the inflaton particle must have decayed all at once. Under these extreme initial conditions, what experimentally validated physical law or fundamental principle is quoted thereby? So, it decays all at once. To what?

In short, the hyperbolic 1/r SBH galactic G-Field explains all the phenomena that have ever been traced to Dark Matter. The hyperbolic G-field IS Dark Matter. Its potential energy profile is generally higher than the profile of an equivalent inverse square G-field. Since m = E/c^2, it accounts for the unseen and unseeable missing mass of Dark Matter. The HBHG field is mathematically derived rigorously and satisfies the mathematical requirements of all observations.

I have written a paper on gravitational decline with distance, but I need a reviewer to help check my mathematics. kentgen1@aol.com

Black-Holes: The Hyperbolic Hyper-Massive Black-Hole Universe

February 9, 2012

See the post

The Hyperbolic Black Hole Galactic and Universe Gravitational Field

Below, after the following post.

 


The hyperbolic (declines as 1/r) black-hole galactic and universe gravitational field explains Dark Energy and Dark Matter.

Stephen Hawking did not buy his own pronouncements regarding the disappearance of information into black holes. Instead, as a retraction, he and some others invented a whole new theory of black-hole thermodynamics. So, in a sense, they concluded, the black-hole event horizon is a real surface. It is sometimes called a “quasi-surface”. However, the center of a black-hole is a physically real singularity. It is constrained only by the Heisenberg Uncertainty Principle.

There is no such thing as a valid theory of quantum gravity (how many papers are published on arXiv about unicorns? By those standards, there should be dozens!). So, any appeal to QG to put the kibosh on black-hole singularities is therefore bogus.

See The Hyperbolic Hyper-Massive Black-Hole Universe and Galactic Gravitational Field (HHBF), which is a paper written for the blog http://garyakent.wordpress.com that describes the e-Model for inflationary expansion of the universe. The hyperbolic hyper-massive black-hole gravitational field is a phenomenological postulate, that is, it is a tentative premise that should be confirmed by experiment or observation and need not wait for theoretical justification. In the case of galaxies and galactic clusters, there is already enough observational support for the galactic hyperbolic super-massive black-hole gravitational field (HSBF).

The point is emphasized that Birkhoff’s Theorem and other interpretive principles derived from general relativity cannot apply to any real black-holes. These rules presume that the massive bodies that are considered are always “unperturbed” and are perfectly “spherically symmetric”. No real black hole meets these criteria. The rules are good only for approximate calculation, not for “precision cosmology”.

Besides, GR should not prohibit a gravitational field that declines as 1/r if a metric is found, similar to the Schwarzschild metric, using assumptions and boundary conditions wherein a singular black-hole is presumed at the outset. If such a gravitational field can be confirmed, the e-model will serve as more evidence for the existence of our universe as part of a multiverse in meta-time.

Hugh Everett may one day be seen as a thinker on a par with A. Einstein. And, John Archibald Wheeler’s suggestion concerning the quantum self-interference of probability density waves may be taken more seriously while Everett’s declaration of the “reality of probability” as a sort of substance gains credence. Self-interference can explain the virtual absence of antimatter (AM) in our universe. AM would be confined to our virtual twin, which must exist according to the logical extension of Alan Guth’s inflation hypothesis wherein a virtual particle came into existence from a hyper-excited false vacuum which came to exist precisely because of its ultra-high energy level. It would be seen as the deeper mechanism behind apparent “symmetry breaking” and unbalanced annihilation of fundamental sub-nuclear particles and antiparticles to give our universe with matter as the dominant form.

The existence of an interference twin could also be helpful in explaining the hyperbolic field as the resultant of a superposition of states. As the real expression of a statistical process within the multiverse, we experience only the total sum, the superposed probability density form from which emerges probability, P —> 1. There are ways that such a superposition might affect the shape of a gravitational potential well. Gravity itself may be viewed as a probability vortex or wave in the Einstein Aether. There is much that has not been considered.

The hyperbolic black-hole gravitational field produces the mathematical result that the velocity distribution of stars in galaxies and of galaxies in clusters follows the relation v = (GM)^(1/2), and that the gravitational potential energy is proportional to ln(r), the natural logarithm of the radial distance from a black-hole or from the barycenter of several black-holes. This is exactly the same as that predicted by hypotheses of “Dark Matter”. The hyperbolic black-hole gravitational field IS Dark Matter.
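The ln(r) potential claim is just the integral of a 1/r force; a sketch verifying it numerically (unit test mass, arbitrary force constant, midpoint integration):

```python
import math

def work_against_1_over_r(r0, r, k=1.0, steps=100000):
    """Numerically integrate F = k/r from r0 to r (unit test mass)
    with the midpoint rule; analytically this is k * ln(r/r0)."""
    h = (r - r0) / steps
    return sum(k / (r0 + (i + 0.5) * h) for i in range(steps)) * h

w = work_against_1_over_r(1.0, 10.0)
print(f"integral of dr/r from 1 to 10 = {w:.4f} (ln 10 = {math.log(10):.4f})")
```

The numerical work matches k·ln(r/r0), so a 1/r force really does imply a logarithmic potential, as the paragraph states for the hyperbolic field.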

A naive interpretation of general relativity says that radiant energy like light, or magnetic, electric or gravitational flux, must decline as 1/r^(n-1), where r is the radial distance. Since our universe is apparently 3-D, having 3 spatial dimensions, such quantities should decline as 1/r^2. A less naive interpretation would have us find a new metric that satisfies GR but allows a decline in the gravitational force as 1/r, a hyperbolic decline rather than the inverse-square 1/r^2.

One way to do this is to choose a different coordinate system. We could choose a 2-D coordinate system. This 2-D surface would not necessarily be Euclidean. In fact, it might be hyperbolic. Then, a hyperbolic 1/r decline in gravitational strength would be not only possible, but required. But, it would be required only for black-holes.

After all, a black-hole has an event horizon that is called a “quasi-surface” because the entropy represented by all objects, including photons, that fall into a black hole is preserved on the “event surface”, a 2-D representation of the entire universe (potentially, by extrapolation of the concept). If a 3-D gravitational field could be reflected in the event surface, its image would be as a 2-D entity. Since nothing, not even light, can exit a black-hole, then neither should gravity be able to do so except by reflection in the event surface from external regions. It could get there initially because a black-hole must grow from a less massive form when it did not possess an event surface.

Another equivalent way to say this is that the 2-D overlay represents a state of the universe and the reality that we experience is a superposition of states, a linear sum of states each represented by their own equations of state in GR. The experience of quantum states is always of the sum of states. We never can sense individual component states.

Since the multiverse can have an infinite number of components, if the 2-D overlay could be composed of a virtually infinite number of 2-D sub-states, say, one for each orbiting body, however such a body and its orbit may be oriented, then so be it.

The Hyperbolic Black Hole Galactic and Universe Gravitational Field

February 4, 2012


Figure 1   Proper Time versus Scale Factor a(t) or Hubble Distance, R and also versus Potential Energies, Expansion Velocity, and Acceleration with Dilated or Reduced “Root” Time or “Relativistic Time”

http://www.fotothing.com/Gak/photo/f7c4dd2a76b88a78fa2f590a8751b883/ 100/92-100

 image #100 in this series

http://www.fotothing.com/photos/f7c/f7c4dd2a76b88a78fa2f590a8751b883_fa2.jpg

image enlargement

Graph computed from equations given in Fig.2 for Hubble expansion of the universe in extensive units. With a proportional overlay of the associated potential energy state diagrams.

                Busy, Busy, Busy

This overly complicated, ugly graph is really quite simple. Further discussion can be found at

http://garyakent.wordpress.com

 First:

The “underlying” set of curves, for which the legend applies, are composed as follows:

1.) The straight black diagonal line represents expansion of the universe if it had occurred at the speed of light.

2.) The green curve represents the exponential expansion of the universe according to equation 3, plotted against proper time, t, and the extensive variable R, below. It rises almost straight up at very small values of t, then rises more slowly, nearly leveling off; then it rises again at a more sedate rate after about 2×10^-14 u (time in geometric or natural units).

3.) The deep red curve is the slope of (2.), the velocity of expansion, i.e. the first derivative in units of U/u. It declines very steeply from an apparently infinite level very early on, passes through a minimum at the extremely small t = 2×10^-14 u, and then rises monotonically as shown. This is a unique and very fortuitous feature of this relation.

4.) The purple curve is supposed to represent the acceleration of (3.), i.e. the second derivative.

5.) The sky blue curve represents "root time", "reduced time" or "relativistic time", t1, the e-th root of proper time, t, where t1 = t^(1/e).

An unexpected interaction between t1 and the rest of the exponential form produces a fortuitous minimum in expansion velocity, dropping to near zero when t is very small (which occurs much too close to the origin to show), which is a crucial feature of this graph (see the relevant plot in the image series, #97 at Gak on FotoThing.com).

This is due to the odd way that exp(B*t1) behaves at very small t. Otherwise there would be no time for the essential equilibration of temperature and density that is postulated by Alan Guth’s theory. Without such a peculiar minimum in the rate of expansion, because of how t1 works, there would be no initial exponential induction period as is assumed by Guth. If the parameter, e, is adjusted so that Hubble expansion decelerates overall, having negative or concave curvature, no such minimum in the expansion rate occurs at small values of t.

The curve (5.) is the key to the exponential equation and is the secret of why it works. Perhaps this reduced or root time may represent how proper time is vastly dilated especially as it rises from an ultra-massive physically real singularity at our initial proper time, when t = 0. For the inflaton particle is postulated by Guth to be a humongously strong gravitational point particle in the meta-time of a multiverse.
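The role of root time can be illustrated with a toy version of the model. The sketch below assumes the form a(t) = exp(B·t^(1/e)); the actual rules sheet of Fig. 2 is not reproduced here, and B = 9.33 is an illustrative value chosen only so the minimum lands at a visible t, not the author's fitted parameter. The first derivative, da/dt = B·(1/e)·t^(1/e - 1)·a(t), diverges as t → 0, falls through a minimum, then rises again, which is the qualitative behavior described above:

```python
import math

E = math.e
C = 1.0 / E  # exponent in "root time": t1 = t**(1/e)

def scale_factor(t, B):
    # assumed toy form a(t) = exp(B * t**(1/e)); the true equations are in Fig. 2
    return math.exp(B * t**C)

def expansion_rate(t, B):
    # analytic first derivative: da/dt = B * (1/e) * t**(1/e - 1) * a(t)
    return B * C * t**(C - 1.0) * scale_factor(t, B)

B = 9.33  # illustrative value only, chosen to place the minimum near t = 0.01 u
ts = [10.0**(-4.0 + 0.01 * i) for i in range(400)]  # log-spaced grid, 1e-4 to ~1
rates = [expansion_rate(t, B) for t in ts]
i_min = rates.index(min(rates))
print(f"expansion rate has an interior minimum near t = {ts[i_min]:.4f} u")
```

The minimum exists for any positive B because the power-law factor t^(1/e - 1) dominates at small t while the exponential factor dominates at large t; larger B pushes the minimum toward earlier times, as the text's 2×10^-14 u figure requires.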

Now, this curvature parameter, e, can be adjusted to describe an open, flat or closed universe. It can be adjusted to show much less curvature or much more. So, the position and duration of the minimum that occurs very early for curve (3.) can be modulated.

But, changing parameters A or B will make (2.) completely miss passage through the point (1,1) on the graphical grid. This would not work at all because the universe with all its matter and energy must have mass/energy M == 1 µ at t = 1 u. So, here is another label for the abscissa.

When the vertical axis is interpreted as the scale factor a(t), the horizontal axis must be interpreted as having proper time t = 1 u = 27.44 billion years (at least), because from the time of emission of the light that became the CMB until now, 13.72 billion years (the Hubble time), our universe has expanded for another 13.72 billion years (at least). If we insist instead that on the horizontal axis t = 1 u = 13.72 billion years, only the Hubble time itself, then the Hubble distance is only 13.72 billion light years, or R = 1 U on the vertical axis.

Note that herein R = r, interchangeably. It does not matter how the axes are interpreted as long as one remains consistent.

The author worries that the expansion velocity accelerates beyond the speed of light too soon; after more than half of the universe's lifetime at such speeds, by now we should have lost contact with the CMB. This can probably be fixed by choosing the parameter, e, so that the curvature of expansion in (2.) is a good deal shallower. This should also move the point where its slope exceeds c quite a bit to the right. Then, its effect could be viewed as more benign.

According to Guth and the consensus of cosmologists and other astrophysicists who truly respect Inflation Theory, the universe was once a purely quantum entity. It still is. The very success of quantum theory is evidence that ours is a quantum universe. Why should the universe not follow a mathematically defined trajectory like this?

Second:

Overlain upon this graph is another graph with three more curves, a black, an orange red and a bright blue one.

A.)   orange red: the curve for potential energy (P.E.), from a graph of y = ln(x), representing the integral of 1/kr, where k = 1 m and x = r = t, representing either the Hubble time or the actual age of the universe (more or less).

Here, y is the vertical axis, which is to be read as the P.E. of the "inflaton" (or galactic) hyper-excited gravitational field, or as whatever else may be the correct quantity, in natural units, depending upon which curve one is reading. Ideally, this ln(x) integral denotes the initially (near the origin) extremely high relative P.E. of the hyperbolic, ultra-massive black hole gravity.
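The ln(x) form follows directly from integrating the hyperbolic force law. With a force magnitude F(r) = GMm/(kr) (the text's k = 1 m keeps the dimensions honest), the work done against the field from a reference radius r_0 out to r is

```latex
U(r) - U(r_0) = \int_{r_0}^{r} \frac{GMm}{k\,r'}\, dr' = \frac{GMm}{k}\,\ln\!\left(\frac{r}{r_0}\right)
```

so in natural units, with GMm/k = 1 and the reference chosen at r_0 = 1 so that U(1) = 0, the potential energy is simply ln(r), which grows without bound as r increases.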

Hyperbolic gravity fields are allowed by GR if proper assumptions and boundary conditions are posited to find a metric, much like the Schwarzschild metric, for the space in which a singular black hole is assumed to reside.

That is, this curve represents the "state" of the universe's initial, hugely massive "inflaton" point particle and its associated "inflationary" hyper-excited, renormalizable 1/kr gravitational field. Every field has an associated particle, so there would have been an inflaton particle, and it should have been a hyper-massive or "excited" quantum point particle already possessing that "renormalizable" higher-energy hyperbolic gravitational field. How could it be otherwise in the Everett multiverse, which Guth explicitly posits?

Now, there is no question that, if one accepts Inflation Theory then, one must accept the multiverse and meta-time.

The author compares the implied transition from an evolving higher energy gravitational field state toward a changing ground state to a time dependent quantum transition or to a Tanabe-Sugano diagram in transition metal ligand field theory.

B.)   black: the curve from y = -1/x, the lower-energy P.E. state of the universe under the ground state of its normal gravitational field, which is proportional to 1/r^2.

P.E. being the vertical axis now, as in (A.), it is equal to -1/x, or -1/r, or -1/t (because the scale has r = R = t = 1, since this is all in natural or geometric units). The ln(x) = ln(r) = ln(t) curve and the 1/x = 1/r = 1/t curve have been translated so that P.E. (A.) = P.E. (B.) when the abscissa x = r = t ≈ 0.34, and then the whole graph was re-scaled and vertically re-positioned to fit the other abscissa and P.E.-versus-t constraints.

Originally, this point was (1,0) or the intersection of both curves, where

ln(x) = 0 and

-1/x +1 = 0

obtained from a pair of equally scaled graphical curves adapted from figures in a textbook definition of the natural logarithm.
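That the two textbook curves meet only at (1, 0) can be checked numerically: the standard inequality ln(x) ≥ 1 - 1/x holds for all x > 0, with equality exactly at x = 1, so the orange red curve touches the translated black curve at that single point. A quick Python check (of the textbook curves themselves, not the rescaled overlay):

```python
import math

def orange_red(x):   # ln(x), the hyperbolic P.E. curve
    return math.log(x)

def black(x):        # -1/x + 1, the translated inverse-square P.E. curve
    return -1.0 / x + 1.0

# ln(x) >= 1 - 1/x everywhere on x > 0, with equality only at x = 1
for x in (0.1, 0.5, 0.9, 1.0, 1.1, 2.0, 10.0):
    assert orange_red(x) >= black(x) - 1e-12

print("both curves pass through (1, 0):", orange_red(1.0), black(1.0))
```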

This was not an arbitrary adjustment. It was done so that the orange red ln(x) curve approaches the origin at the same time that the sum of the (A.) orange red and (B.) black curves equals the bright blue curve of (C.) as it passes through the point (1,1).

Then, the intersection of (A.) & (B.), at the bright green circle, which used to be at the point (1,0), had to be translated so that its position with respect to the underlying Hubble time axis corresponded to 0.344 u, or 9 billion years ago in Hubble time, when the universe is seen to have begun to “re-inflate”, consistent with observations of “acceleration”.  So, there is only one way that the overlay curves could have been re-scaled and repositioned to meet these constraints.

C.)   bright blue: the superposition, or linear sum, or mixture of the gravitational potential energy states represented by curves (A.) and (B.).

Now, as was said before, (C.) had to be made to pass through the point (1,1) on the underlying graphical grid. That is, the total P.E. had to equal all the matter and energy in the universe at t = 1, the present, including “Dark Matter” and “Dark Energy”, so that M == 1µ at t = 1u. So, total mass/energy M is yet another label for the abscissa.

The orange red curve in (A.) is identified with Dark Energy, and it is seen to keep on increasing into the future while the associated scale factor, which is to be read on the abscissa as R or a(t) in this case, also increases. Then, with these simultaneous increases, the P.E. density of the universe remains constant as is indeed postulated for Dark Energy.

At the instant of the BB, the hyperbolic inflationary gravitational inflaton field of the inflaton point particle had a potential energy curve that would have looked like the orange red curve in (A.) but, no matter/energy had yet had a chance to follow this P.E. curve at the instant that the BB occurred. And, afterward it might have been forced to follow the black curve in (B.). The matter/energy in the universe could not actually experience any of the individual states represented by (A.) or (B.) but could experience only the superposition of states in (C.), bright blue.

Still, it is all consistent. The orange red P.E. curve continues to increase as it should and the black P.E. curve becomes “nearly” constant as it must while the bright blue total P.E. curve increases as the energy density of the universe must remain constant, just as theory demands.

See? Simple. (Yuk! Yuk!)  By comparison, this should make filing an income tax-return seem like a piece of cake.

This is what time dependent quantum transitions mean. This is what the multiverse means. We can experience only the final resultant of the waveform vector sets for all terms in the total probability density wave sum. We experience only the superposition, not the separate states.

Yet, some cosmologists describe Inflation Theory minus the point-particle concept, sans a multiverse, with no meta-time, and without the implication that the inflaton field must be an excited, renormalizable gravitational field with its associated hyper-massive particle. So, the quantum nature of Inflation seems foreign to them. One can pick and choose only the ideas one likes in regard to congressional legislation in a hidebound committee, but Inflation Theory will never become that kind of law.

If the hyperbolic black hole gravitational field can be validated and extended to the entire universe this way, then we would have hard evidence for a kind of a multiverse.

Figure 2   Rules Sheet from TK Solver Plus for the “e-Model” of Inflationary Expansion of the Universe

The graphical series in Fig.1 was computed using the equations presented in this rules sheet.

#98 A  Equations for the mathematical model of inflationary Hubble expansion of the universe according to extensive variables.


http://www.fotothing.com/Gak/photo/dd3000c76d3f16f595af18ef135cfad2/ 98/92-100

image 98A  in this image series

http://www.fotothing.com/photos/dd3/dd3000c76d3f16f595af18ef135cfad2_fab.jpg

image enlargement

This is the rules page from a UTS TK Solver Plus math program that was used to plot the exponential expansion curve shown in image 96. It depicts the acceleration of Hubble expansion and the 1st and 2nd derivatives of this curve, as well as a straight diagonal baseline showing what expansion would look like if it occurred at the speed of light. Image 95, at FotoThing.com under Gak, shows a minimum in the 1st-derivative curve, the expansion rate. The rate drops to near zero, indicating an extreme slowdown in expansion that constitutes a virtual pause at around 1×10^-14 to 3×10^-14 u of "universe time", time in "natural units" or "geometric time". This period lasts around 2×10^-14 u, or 8,660 seconds (about 144 minutes), but is sensitive to somewhat arbitrary initial conditions like those chosen by Alan Guth in his first paper on inflation. The pause may have come earlier or later, and lasted longer or shorter, depending on these initial conditions. Such changes would have to be made pairwise and in the correct sense, or else the intersection with (1,1) will be lost.
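The quoted pause duration is easy to verify. A sketch, assuming 1 u equals one Hubble time of 13.72 billion years (the text's own natural-unit convention):

```python
SECONDS_PER_YEAR = 3.156e7      # approximately one year in seconds
HUBBLE_TIME_YEARS = 13.72e9     # 1 u, per the text
u_in_seconds = HUBBLE_TIME_YEARS * SECONDS_PER_YEAR

pause = 2e-14 * u_in_seconds    # the ~2 x 10^-14 u pause
print(f"pause = {pause:.0f} s = {pause / 60:.0f} min")
```

This reproduces the 8,660-second (roughly 144-minute) figure given above.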

Dark Matter is an unnecessary ad hoc fix

January 11, 2012


The singularity at the center of a black hole must be unique and have testable consequences.

Dark Matter is an unnecessary ad hoc fix to fill in the blanks in the Friedmann model under the FLRW metric. Galactic supermassive black-holes exist as true physical singularities according to the Kretschmann invariant and Schwarzschild's analysis of his spacetime metric under GR. Therefore, as point masses, they must possess a hyperbolic (1/kr) gravitational field, NOT a field that falls off as 1/r^2. Here, k = constant = 1 m, S.I., for dimensional integrity. It is not true that GR cannot tolerate hyperbolic spacetime geometries. "The universe is hyperbolic," said Albert Einstein in his classic paper of 1915. A hyperbolic field will give constant orbital acceleration to orbiting bodies as far from the center of a black-hole as we might like to measure. This means that bodies near the periphery of a galaxy should seem to move at constant velocity: rotational acceleration does not drop toward zero there, as with a 1/r^2 inverse-square law; it becomes constant. This constant velocity distribution has actually been measured and has given rise to the notion of Dark Matter.

Gravitation does not fall nearly to zero between galaxies in a cluster either. So they too can bend light and affect redshifts in ways that mimic Dark Matter. The rotation of galaxies in clusters is also influenced by the black-holes that they contain, with their 1/kr gravitational potential profiles. The not-quite-counterbalanced redshift effects in the Sunyaev-Zeldovich phenomenon are influenced by the hyperbolic galactic and galactic-cluster gravitational fields that act as light falls into a large void from such clusters and super-clusters, and as it climbs out again after the universe has expanded by another billion light years or more.

Scientists are mapping, not Dark Matter, but the huge extent of the network of hyperbolic galactic and super-galactic gravitational fields, which behave like Dark Matter because their mathematical properties are similar to those expected for Dark Matter.

Primordial massive and supermassive black-holes with their 1/kr galactic gravitational fields can also mimic the "halos" of Dark Matter that are postulated to have existed just after the big bang and before the emission of the cosmic microwave background. There is nothing that Dark Matter explains that cannot be accounted for just as well, or better, by the hyperbolic black hole gravitational field.

The hyperbolic 1/kr supermassive black-hole galactic gravitational field explains "the Dark Matter Effect" without Dark Matter. It is more parsimonious, and it is a falsifiable hypothesis, unlike Dark Matter, which is revised every time no Dark Matter is found.

The conditions for validity of Birkhoff's Theorem are not met for real black-holes. Therefore, Birkhoff's Theorem does not apply. It sometimes may be used as a first approximation, but it cannot be depended upon as a rigid rule for precise calculations. "The physics near the extreme curvature of a black-hole singularity is not well defined." This caveat covers Birkhoff's Theorem too.

It does too matter how the internal mass is distributed if it is contained within a single point. Then, in fact, it is NOT distributed at all! This is the point of Kretschmann's invariant and Schwarzschild's GR analysis of the consequences of his metric. Ordinarily, the distribution would not matter. But, a singularity must be different. If this is not explicitly acknowledged in some way, then to say there is a singularity with such intense curvature of spacetime in its vicinity that the laws of physics must begin to break down is a meaningless, fatuous gesture toward humility. It is false humility if it has no ameliorating effect on professional arrogance. Please, do not just restate Birkhoff.

I contend there is a loophole here, or a gross misinterpretation. The consensus interpretations of Birkhoff and of Schwarzschild/Kretschmann cannot both be true at the same time. There must be a measurable consequence of the presence of a singularity that goes beyond imaginary, untestable gedanken experiments. The test is the hyperbolic gravitational field. It results in a nonzero constant rotational velocity distribution in spiral galaxies, ellipticals, globulars and galactic clusters. This is easier to believe than Dark Matter.

The very same phenomena that are used to argue for Dark Matter can be used to argue for the hyperbolic field. So, it is testable. But, how do we choose between them? I think that Occam's razor is the principle of choice here. WIMPs, neutralinos and the other oddball particles that have been proposed require ad hoc additions to theories, or their complete rewrites. The hyperbolic field is far simpler. All that is needed is acknowledgement that the black hole singularity is unique. No rewrite of GR. No undetectable new heavy particles that are given self-serving, revised, lower detection limits every time they are determined to be really undetectable.

There seems to be a tendency of cosmologists to think inside the box. They never really consider anything outside the consensus. So too do journal editors rely on conventional wisdom. They would all have been supremely comfortable with the Pope's decision to censor Galileo.

“Cosmologists are always wrong, but never in doubt.”   Lev Landau

A potential energy diagram is perfectly possible for a hyperbolic black-hole gravitational field

January 11, 2012

The “normal” Newtonian potential energy diagram derived from the inverse square relation versus the hyperbolic black hole gravitational potential energy diagram derived from the inverse (1/kr) relation

First of all, note that a potential energy diagram is perfectly possible for a hyperbolic black-hole (HBH) gravitational field. The only trouble is with convention. Normally, one takes the potential energy U to be U = 0 at r = infinity. But, U keeps increasing forever with increasing r in the case of the HBH field; it does not level off to an asymptotic value. So, we would need to adopt a different convention, with U = 0 at r = 1. Then, we would have to remember that all U computed for the HBH case must be multiplied by -1 in order to be consistent with conventional usage.

We could represent how the ultra-massive universe excited inflaton HBH gravitational field collapses or transitions to the conventional inverse square field, thereby donating its potential energy to what is increasingly an inverse square gravitational universe, accelerating its expansion in the latter 2/3 of its evolution. We might use weighting factors. We could use a linear weighting factor or maybe an inverse square exponential form or even an hyperbolic expression.

Using x = r in the above diagram, let us try a weight for the inverse-square-derived contribution, S, and a weight for the hyperbolic contribution, H = 1.00 - S, with 0.00 < S < 1.00, so that U = S(-1/r + 1) + (1.00 - S)ln(r). Then total U will transition smoothly from the HBH hyperbolic potential energy to the inverse-square potential. Here, the 1 is a translation amount that lets the inverse-square-derived curve superpose upon the hyperbolically derived one. The potential energy lost by the HBH phase of the universe is made up by a gain in kinetic energy of expansion in the inverse-square phase.
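The weighting scheme just described can be sketched directly. Assuming natural units and the formula above, the blend passes through U = 0 at r = 1 for every S and interpolates between the pure hyperbolic (S = 0) and pure inverse-square (S = 1) potentials:

```python
import math

def blended_potential(r, S):
    """U = S*(-1/r + 1) + (1 - S)*ln(r); S weights the inverse-square
    contribution and (1 - S) the hyperbolic one, as in the text."""
    assert 0.0 <= S <= 1.0 and r > 0.0
    return S * (-1.0 / r + 1.0) + (1.0 - S) * math.log(r)

# Both components vanish at r = 1, so the blend is 0 there for any S.
for S in (0.0, 0.5, 1.0):
    print(S, blended_potential(1.0, S), blended_potential(10.0, S))
```

At large r the S = 0 blend keeps climbing like ln(r), while the S = 1 blend levels off toward 1, so any smooth schedule S(t) carries the total U from one regime to the other.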

Remember, it is legitimate to think of Hubble expansion of spacetime carrying the objects embedded within it as a kinematic growth process. One need not always regard it as a “stretching” of spacetime, though for some other purposes, this may help.

Obviously, the resulting composite curve will have a significant positive slope on the right, connoting Dark Energy. But, the curves explicitly describe Dark Matter. So there is a strong link between DM and DE.

There have been misstatements and misinterpretations of Birkhoff's Theorem. For instance, it has been shown by Kristin Schleich and Donald M. Witt ("A simple proof of Birkhoff's theorem for cosmological constant", arXiv:0908.4110v2, 27 Oct 2009) that Birkhoff does not demand staticity in spherically symmetric solutions to Einstein's vacuum field equations. Static solutions have heretofore been thought to be required. There may be other misstatements and misinterpretations that are not yet recognized.

For instance, Birkhoff’s Theorem must actually leave black hole singular gravitational fields as an exception to the commonly quoted rigid rule that only asymptotically flat (commonly assumed meaning: inverse square) gravitational fields are allowed. Otherwise there is no way to measure or unequivocally determine that the center of a black hole is a singularity since electric charge and gravity are the only items the influence of which can escape the interior of a black hole.  Then the theories of Schwarzschild and Kretschmann that say such singularities are physically real are largely meaningless as unfalsifiable hypotheses.

There simply must be a measurable consequence of a true singularity at the center of a black hole or else its existence cannot be postulated. That the mathematics seems so very precise is not good enough. There must be a way to experimentally verify or falsify the equations.

If the gravitational singularity at the center of a super-massive galactic black hole results in a hyperbolic gravitational field, there is a way. By measuring the velocity distribution of stars in the surrounding galactic disk, it can be determined whether they move with a constant velocity, v = (GM)^(1/2) at large r, as they must if they move in a hyperbolic gravitational field. As a matter of fact, stars in spiral galaxies do indeed move with constant velocity at large r. This can be seen as proof of a singularity at the center of a spiral galaxy's black hole.

We often state that “The laws of physics must break down at the incredibly tight curvature of spacetime near the singularity of a black hole”. What does this mean? One thing it could mean is that Birkhoff’s Theorem breaks down too. The metrics to which Birkhoff applies probably are not strictly valid near the singularities that they themselves predict so, the “asymptotically flat” dictum may not be strictly true either. Otherwise, our cautionary statement is meaningless.

Besides, the immense benefit brought by the postulate of a hyperbolic galactic super-massive black hole gravitational field is too great to be ignored. It explains the anomalous stellar velocity distributions in galaxies, anomalous velocity distributions in galactic clusters, galactic lensing phenomena, temperature distributions within galaxies, Bullet Cluster type apparent offsets in the barycenters of colliding galaxy clusters, etc. It does everything that Dark Matter is supposed to do! So, Dark Matter is an unnecessary complication that violates Occam's Rule.

Cosmologists will not like this idea. The LCDM model would have to be drastically revised. The consensus would have to change. Since journal editors and referees endorse only papers that conform to the consensus (they would have been comfortable with the Pope's decision to censor Galileo), no one will publish a paper that challenges the commonly accepted interpretation of Birkhoff's Theorem. And, if Birkhoff does break down in the vicinity of a gravitational singularity, how can it ever be proven? One would have to develop a whole new physics of the ultra-strong curvature near black hole singularities. Unless it also had consequences outside the black hole, such a theory could not be falsified, so it could not be admitted as a part of science.

Catch 22 says “Anyone who wants to get out of combat duty isn’t really crazy.” Hence, pilots who request a mental fitness evaluation are sane, and therefore must fly in combat. We would be better off flying in combat duty than trying to fly the hyperbolic super-massive galactic gravitational field into some journal’s pages.

The amazing thing is that the HBHG field does have consequences outside the BH and the event horizon.

The Scientific Method and Quintessence Analogs

December 26, 2011


Wolfram Math World: A null hypothesis is a statistical hypothesis that is tested for possible rejection under the assumption that it is true (usually that observations are the result of chance). The concept was introduced by R. A. Fisher. The hypothesis contrary to the null hypothesis, usually that the observations are the result of a real effect, is known as the alternative hypothesis.
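Fisher's idea can be made concrete with a minimal numerical example (purely illustrative, not from the text): test the null hypothesis that a coin is fair after observing 58 heads in 100 flips, by computing the exact one-sided binomial tail probability.

```python
from math import comb

def one_sided_p_value(n, k):
    """Probability of observing k or more heads in n flips of a fair coin,
    i.e. the tail probability under the null hypothesis p = 0.5."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2**n

p = one_sided_p_value(100, 58)
print(f"p-value for 58+ heads in 100 flips: {p:.4f}")
```

Here p is about 0.067, above the conventional 0.05 threshold, so the null hypothesis of a fair coin is not rejected, and the alternative hypothesis gains no support from this run.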

Most science knowledge is statistically validated. The scientific method requires scientists, and all others who claim to think rationally, to answer knotty questions by means of repeatable EXPERIMENT or careful, verifiable, direct observation. In order to do this effectively, one must formulate a hypothesis, a statement of some putative principle that engages all the known implications. These implications must be rather direct. Circumstantial consequences are just that: circumstantial, and they cannot be used to PROVE a hypothesis by their mere existence. The more direct implications must suggest substantive experiments that will verify or confirm them, or not.

It is good if there are direct elements of the principle and subservient implications of the hypothesis. It is better if a complete and utter negative statement of the hypothesis can be formulated. Then, the net algebraic sum of the original hypothesis and the negative hypothesis should be zero. Logically, the negative completely cancels the positive hypothesis. This negative hypothesis is called the "null" hypothesis because it would nullify the other if it proved to be true, and it would tend to validate the positive (or alternative) hypothesis if shown to be completely false. At least, it would fail to PROVE the alternative false if the Null were shown to be true only in some minor ways. Then, when the Null is invalidated, if direct evidence can be found that tends to corroborate the original positive (alternative) hypothesis, we can begin to regard it as a good logical beginning. AND THEN, if this alternative hypothesis can be combined with statements of principle that have already been proven, and the combined implications of such a joint statement can be verified as before, we have the beginnings of proof.

The result should always be a hypothesis or “theory” with predictive value. Or, when only observation is what may be possible, a good theory will predict the results of a program of detailed observation.

The key to this process is our ability to form an experimentally testable Null Hypothesis. The evidence FOR the positive statement of the hypothesis (the alternative) is insufficient in itself because circumstances may combine to fool our little experiments. We are human. If an appropriate, robust Null Hypothesis cannot be formulated, the original hypothesis does not merit the attention of the scientist. Such a defective hypothesis is termed "unfalsifiable" because no Null Hypothesis can be stated whose confirmation would show the bad hypothesis to be false.

This is relevant to the debates about Dark Energy (DE), quintessence, Dark Matter (DM) and so on. Dark Energy is the reservoir of potential energy that is supposed to exist as an underlayment or foundation of the universe. Quintessence is supposed to be a new force field that is just another component of the universe. All fundamental force fields have an associated particle. So, if there is quintessence, there should be a quintessence particle also. Invisible, undetectable Dark Matter (as, for instance, enormous super-galactic clouds of invisible, undetectable WIMPs – weakly interacting massive particles) is supposed to account for the anomalous rotation velocity distribution seen in galaxies and galactic clusters. Also, DM is supposed to result in redshift effects in observation of galactic lensing and in the Sunyaev-Zeldovich counterbalanced redshift phenomena. It is also supposed to explain anomalous apparent offsets in the barycenters of colliding galactic clusters. These would be indirect effects of DM. They are all explainable by other means. The confirmed existence of such phenomena cannot prove the DE or DM hypotheses because they do not address any NULL hypothesis.

Do not forget: to call one's self a scientist (even an amateur scientist), one must respect the scientific method. It is not a scientist's dogma any more than it is dogma to follow the firefighter's code; one must respect the power of fire, or else one dies. One may choose to die physically, or else one may wish to perish intellectually. The scientific method assures logical life after the virtual death of a bad theory.

An unfalsifiable hypothesis has no business occupying the time of the scientist. Whole theories have often been constructed from elaborate systems of unfalsifiable hypotheses. Such theories are often fun to think about, even edifying and inspiring – but they certainly are not science.

There is a place for faith. But, if a person of faith needs experimental proof, he will have little of either. We all need faith. Psychologically, we all use some form of faith in some way. In times of trouble and sorrow, sometimes it is all we have. Psychiatrists recommend it. Psychologists say that the “normal” person is the one who can “delude” himself successfully all the time. In other words, normal people “lack insight”. Facetiously, they say that if we had true insight, we would all be permanently severely depressed!

God loves us, some believe. Mere belief makes it so. Existentially, epistemologically and eschatologically, if we can say sincerely that a principle of human conduct or relations should be timelessly true for everyone, then it is so. The Human Condition IS what we can sincerely say it should be. “Things” ARE as they “should” be. This is called Primary Christian (or Buddhist, Hindu, etc.) existentialism. All men and women of faith are Primary Christian (or Jewish, Hindu, Muslim or whatever) existentialists. Scientists must not be professional existentialists. They must be “positivists” or “logical positivists” (A.J. Ayer), at least in their professional (or amateur) dealings.

All true scientists hew to a strict code of honor as well as to the scientific method. Verifiably and repeatably confirmed experimental or observational Truth is NOT just a buzzword. Such Truth is meaning. Truth is the scientist’s life. Truth is noble. To the scientist, Truth is not relative. Truth does not evolve. Truth is an absolute ideal – there is only one truth, that is, it is unique. Our understanding of truth, however, does evolve. Understanding is indeed relative and it can be flawed. But, it hews to the above stated ideals in all cases. Otherwise, it is fatuous pompous propaganda: good for politicians and some clerics, but not for the scientist.

In a very real sense, to the scientist, the ideal of “Truth” is the next best thing to God.

Cosmologists are always wrong, but never in doubt. – Lev Landau

Nobel Prize for Perlmutter & Riess

December 19, 2011

If one carefully reads the papers submitted to ArXiv astrophysics after 1998, one sees that Saul Perlmutter’s and Adam Riess’s supernova research groups were not independent (as claimed) and that they were in serious communication. Perlmutter and Riess actually wrote a paper together before the time at which they are supposed to have first come to cooperate.

They say that the data the two groups obtained regarding the distances to type Ia supernovae and other bright, extremely distant objects were not concordant at first. In order to force the two data sets to conform, they admit that they had to apply a mutual “adjustment”. This artificial factor was used by both groups to bring the data of each set into alignment with the other so that a smooth plot could be made that included all the data points.

The sense (sign) of this artifice alone is the sole “evidence” that they both cite for an accelerating rate of expansion of the universe. They might have applied the adjustment factor to the other data set in the opposite sense. Then, the universe’s expansion rate would have been seen as decelerating.

There was a choice to be made. A cynic might hazard a guess as to why they made the choice that they did. A cynic might also claim that P&R’s colleagues on the Nobel Committee were grossly biased because they were close friends and few in number. When a subcommittee reports to the full committee, though, their recommendation is often taken as Gospel. How often has the Nobel Prize award been found to be, if not unwarranted, uncompelling?

In college, we had to write laboratory reports on the textbook experiments that we did in lab. We were warned against manufacturing data. Our professors all said that this kind of “fudging” is a big “NO NO”. Ethical standards are not just for students. Still, as professionals who certainly are good scientists, Perlmutter and Riess, no doubt, think that they were perfectly well justified in applying their adjustment factor and did so in all honesty. But, the result is the same.

Origins, emergence and eschatology of the Universe: Dark Energy

December 14, 2011

Should we mean “the universe” or “the meta-verse” or “the multi-verse”? (Hugh Everett)

Presumably, when the universe formed from an ensemble of some sort of “inflaton” point particles (Alan Guth) as a statistically inevitable child of an extremely excited field, possibly the gravitational field itself, its hyperbolic (proportional to 1/r) field began to collapse into a parabolic (1/r^2) one. That collapse continues to this day. But, the process is almost done. There cannot be an infinite amount of energy sequestered in the hyperbolic 1/r field that would be available to fuel acceleration of the Hubble expansion rate by such a transformation. Transition to a lower potential energy parabolic field must provide a distinctly limited supply of extra impetus. Surely, after 13.72 billion years, the (1/r) potential energy mainspring has almost run down by now. The remaining (1/r) potential energy is called Dark Energy.

It accounts for the “missing mass” in audits of the universe’s contents and provides a convenient, theoretically rigorous and parsimonious basis for “acceleration”. Dark Energy could account for around 80% of the universe’s total mass, but such audits are not so accurate. Even so, the Mainspring still has enough oomph to last for at least another 140 billion years!

The hyper-excited gravitational field sprang into existence simply because it could. It came to be in a tremendously excited state because very highly excited states are much more probable than lower ones, owing to the zero-point cut-off. This is just as virtual particles come to exist and are annihilated all the time on the quantum level (confirmed by experiment). None of them become universes, though, because there is already one here. It’s a sort of Pauli exclusion principle.

There has been some confusion. So, let us switch definitions of r. In the following, r is the rate of acceleration of the expansion of the universe (or of rotational acceleration around a black hole).

If the acceleration of the expansion rate is called a, and its present value is called P, then a = P at any given time, including the present. The simplest equation for the expansion rate’s effect on P would be an exponential decay expression, P = h0·e^(-rt), where h0 is an initial value for h, r is the decay constant and t is time.

We can get an estimate of a value for h0 from Alan Guth’s formulation of the theory of simple inflation. The present values of both the expansion rate, P1, and the acceleration rate, r, are observable. We can set t = 1 for the present value of t. So, we can summarize all relevant observations with this simple equation or with the associated exponential expansion equation, R = R0·e^(rt), where R is the putative instantaneous “radius” or scale factor of the universe.

The current value of the expansion rate is H0, the Hubble “constant”, so P1 = H0.
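The two exponential expressions can be sketched numerically. In the following, the values h0 = 1, R0 = 1 and r = 0.1 are arbitrary placeholders chosen only to show the shapes of the curves; they are not fitted to any observation.

```python
import math

# Arbitrary illustrative constants (NOT observational values):
h0, R0, r = 1.0, 1.0, 0.1

def P(t):
    # Expansion-rate measure decaying exponentially: P = h0 * e^(-r*t)
    return h0 * math.exp(-r * t)

def R(t):
    # Scale factor growing exponentially: R = R0 * e^(r*t)
    return R0 * math.exp(r * t)

# P(t) falls toward zero but never reaches it -- the "plateau" or
# "dormancy" behavior discussed below -- while R(t) keeps growing.
for t in (0, 1, 10, 100):
    print(f"t={t:>3}  P={P(t):.5f}  R={R(t):.3e}")
```

Running this shows P subsiding gently toward zero at large t while R grows without bound, which is the qualitative picture the equations are meant to capture.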

Back to our original definition of r (not R) as a radius or scale factor:

Exponential decay equations exhibit what is called a “dormancy” period or final plateau region. In this part of the discussion, r refers to distance from a center of rotation. Sorry. I missed this inconsistency in previous posts. I need a nicer symbol, another name for the r in the exponential; maybe a backward Cyrillic “R”? Maybe a lower-case Cyrillic “r”?

The hyperbolic 1/r curve levels off near zero and continues to subside gently, almost linearly, for an indefinite time. Plot a graph yourself on the back of an envelope! Use mass M = 1; the smaller mass drops out for acceleration. And assume G is any self-consistent constant, like G = 1. This is just for comparison purposes, so it matters not. The equation for orbital acceleration around a galaxy, say, levels off to a constant, even at infinity, for a hyperbolic 1/r black-hole galactic gravitational field potential diagram. (You have just DERIVED Modified Newtonian Dynamics, or MOND!) You must multiply r by the constant k = 1 m (Système International) for dimensional purity.
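The back-of-the-envelope comparison above can be checked numerically. This sketch uses G = M = 1 and the dimensional constant k = 1, as the text suggests; the unit choices are arbitrary, and only the shapes of the two curves matter. For circular motion, v²/r equals the acceleration, so a 1/r² acceleration gives a falling speed curve while a 1/r acceleration gives a flat one.

```python
import math

# Arbitrary self-consistent constants, for comparison only (per the text):
G, M, k = 1.0, 1.0, 1.0

def v_newton(r):
    # Usual parabolic (1/r^2) acceleration: a = G*M/r^2.
    # From v^2/r = a, the circular speed v = sqrt(G*M/r) falls with r.
    return math.sqrt(G * M / r)

def v_hyperbolic(r):
    # Hyperbolic (1/r) acceleration: a = G*M/(k*r).
    # From v^2/r = a, the circular speed v = sqrt(G*M/k) is independent
    # of r -- a flat rotation curve, the MOND-like behavior.
    return math.sqrt(G * M / k)

for r in (1.0, 10.0, 100.0):
    print(f"r={r:>5}  v_newton={v_newton(r):.4f}  v_flat={v_hyperbolic(r):.4f}")
```

The Newtonian speed drops by a factor of 10 from r = 1 to r = 100, while the hyperbolic-field speed stays constant at every radius, which is the flat-rotation-curve result the paragraph describes.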

The current state of the universe itself may be considered as being in this (1/r) exponential decay dormancy or plateau period. The conclusion here is that acceleration of expansion may continue for a long time while very slowly decreasing nearer to zero.

The black-hole rotational acceleration connection implies that the universe may be rotating very, very slowly right now. But we cannot know. We would have to observe the universe from the outside, from the perspective of the meta-universe, to tell. From the standpoint of general relativity, we simply cannot tell from our perspective here and now.

In other words, even with acknowledged acceleration of the Hubble expansion rate, there does not necessarily have to be a “Big Rip” wherein the fabric of the cosmos is irreparably torn apart as expansion proceeds beyond a certain point.

By the way, “M Theory” doesn’t exist. M Theory is just an “ideal”. Brane Theory is not M Theory. Neither has ever predicted anything that can be experimentally verified and neither is falsifiable. Therefore, they cannot qualify as legitimate scientific propositions. Not one single unique result has ever come from either. Furthermore, they are both unnecessary. Shrewd development of general relativity and quantum theory is slowly causing both to merge. What’s the hurry? Let true “M Theory” and “Brane Theory” grow organically out of quantum theory and GR. Each step will be independently validated, then. No worry.

Origins, emergence and eschatology are fertile fields for philosophers. This is why we scientists are sometimes called “Doctors of Philosophy”, Ph.D.    Doctorae Philosophi.    I took Latin for three years and I am still not sure of this. German and Russian too, but this is no help. What happened to my old Latin grammar texts?
