Research & Reviews: Journal of Pure and Applied Physics (ISSN: 2320-2459)
Chief Scientist, Brain Perfection LTD, Israel
Received date: 28/09/2015; Accepted date: 28/03/2016; Published date: 30/03/2016
Contemporary Theoretical Physics has reached a point akin to the pre-1905 state preceding Einstein's Relativity shift: the two pillars of modern Physics, namely Quantum Mechanics (QM) and Relativity Theory (RT), seem to contradict each other, and major physical phenomena cannot be accounted for by them, e.g., "Dark Energy" and "Dark Matter" (70 to 90 percent of all mass and energy cannot be observed empirically), the "Arrow of Time" and other existing physical conundrums. Such a state in Theoretical Physics calls for a basic 'Paradigmatic Shift' (akin to Relativity's shift away from Newtonian Physics). The key question is: what are the rigorous scientific criteria by which a satisfactory 'Paradigmatic Shift' Theory (PST) can be validated? It is suggested that the rigorous scientific criteria for such a satisfactory PST should comprise: the replication of all major QM and RT validated empirical findings and theoretical relationships; the resolution of all key QM-RT theoretical inconsistencies; the identification, and empirical validation, of at least one "critical prediction" differentiating this PST from existing QM and RT predictions; and the capacity of such a PST to account for currently unexplained physical conundrums (such as the 'dark-matter'/'dark-energy' enigma). Based on the recent empirical validation of one of the 'Computational Unified Field Theory' (CUFT) predictions, alongside its satisfaction of all of these rigorous scientific criteria, it is suggested that the CUFT may qualify as an appropriate PST. Finally, the acceptance of the CUFT's new 'A-Causal Computation' Paradigm advances (potentially far-reaching) theoretical implications, such as the possibility to "reverse the flow of time" and the negation of "dark-matter" and "dark-energy" as "superfluous" – pointing instead at the Universal Computational Principle's accelerated increase in the number of spatial-pixels comprising each subsequent 'Universal Simultaneous Computational Frame' (USCF) (e.g., comprising all of the spatial-pixels in the physical universe at every minimal time-point, 'c2/h').
Keywords: Paradigmatic shift; Quantum mechanics; Physical conundrums; Material-causality.
When does a scientific discipline reach a state which necessitates its undergoing a basic 'Paradigmatic Shift', e.g., a fundamental revision of its basic theoretical framework? According to Kuhn's well-known analysis of the "Paradigmatic Shifts" occurring along the natural evolution of Science, such a state is signified by the following critical scientific criteria [1]:
a) There arises a basic intrinsic theoretical inconsistency within the 'Existing Scientific Paradigm' which cannot be resolved from within it;
b) There exists a series of unresolved scientific phenomena or findings which cannot be explained by this Existing Scientific Paradigm.
c) The ‘New Scientific Paradigm’ can replicate all of the major findings and laws of the ‘Existing Scientific Paradigm’.
d) The 'New Scientific Paradigm' can resolve the apparent theoretical inconsistencies existing within the 'Existing Scientific Paradigm' (described in 'a').
e) The ‘New Scientific Paradigm’ can account for these series of unexplained scientific phenomena (or findings) (e.g., ‘b’).
f) The ‘New Scientific Paradigm’ predicts the existence of new empirical phenomena which cannot be accounted for by the ‘Existent Scientific Paradigm’. And
g) *This "differential-critical prediction" of the 'New Scientific Paradigm' is validated empirically (e.g., as different from the prediction of the 'Existing Scientific Paradigm'), thereby indicating that the 'New Scientific Paradigm' offers a more appropriate and empirically validated framework of physical reality than the Existing Scientific Paradigm.
h) *Ideally, the New Scientific Paradigm should identify new empirical (and theoretical) constructs which were not recognized by the Existing Scientific Paradigm.
* The last two criteria are not strictly demanded by Kuhn's analysis but are rather suggested by such significant Paradigmatic Shifts in Science as the famous 1919 empirical validation of Einstein's 'critical prediction' regarding the deflection of starlight due to the curvature of space-time by the Sun's mass.
Indeed, it is suggested that Physics has reached precisely such a point along its theoretical development which necessitates such a 'Paradigmatic Shift', because it satisfies each of these four basic criteria: a) its two primary pillars, namely Quantum Mechanics and Relativity Theory, seem to contradict each other; b) there exists a series of major unresolved 'Physical Conundrums' (which cannot be accounted for by QM or RT), including "Dark Energy", "Dark Matter", the "Arrow of Time", etc.; c) over the past four years a promising new alternative model, the 'Computational Unified Field Theory' (CUFT), has been discovered [2-7] and recognized as one of the candidate 'Theories of Everything' [8-11], and was shown capable of resolving the apparent theoretical inconsistency between QM and RT; moreover, this CUFT has identified at least one "differential-critical prediction" which differs significantly from the predictions of both QM and RT – and indeed one of these (three) 'differential-critical' predictions, namely that relatively more "massive" particles should be measured more consistently across a series of 'Universal Simultaneous Computational Frames' (USCF's) than "less massive" particles, has recently been validated through the "Proton-Radius Puzzle" [12]; and finally, d) this empirically validated CUFT 'Theory of Everything' is also capable of resolving a series of key Physical Conundrums, including "Dark Energy", "Dark Matter" and the "Arrow of Time" (and the associated "Second Law of Thermodynamics")!
The key focus of the current article is hence to evince that the empirical validation of the CUFT signifies a basic Paradigmatic Shift from the current "Material-Causality Paradigm" (Quantum and Relativistic) to the CUFT's (higher-ordered) "A-Causal Computation Paradigm". Indeed, it has been previously shown [2] that both QM and RT are based upon a "Self-Referential Ontological Computational Structure" (SROCS), which assumes that it is possible to determine the value/s of any given subatomic "target" or relativistic (space-time or energy-mass) 'phenomenon' based on the physical interactions between that given 'target' and an exhaustive set of 'probe' elements, or between that relativistic 'phenomenon' and an exhaustive set of differential relativistic observer/s. But according to one of the key theoretical postulates of the CUFT, namely the computational 'Duality Principle', such a SROCS computational structure inevitably leads to both "logical inconsistency" and "computational indeterminacy", which are contradicted by robust empirical evidence indicating the capacity of both Quantum and Relativistic computational systems to determine the value/s of the given subatomic 'target' or relativistic 'phenomenon'. Therefore, the Duality Principle evinces that the only means for these Quantum and Relativistic computational systems to determine the value/s of the subatomic 'target' and relativistic 'phenomenon' is the existence of a singular higher-ordered 'Universal Computational Principle' (signified by the Hebrew letter "Yud": 'י'), which computes the simultaneous co-occurrence of all spatial pixels in the physical universe at any given minimal time-point, i.e., given by 'c2/h'. Indeed, according to this 'Universal Computational Principle' (UCP), this simultaneous computation of all of the spatial pixels in the universe (at any minimal time-point, c2/h) constitutes an extremely rapid series of "Universal Simultaneous Computational Frames" (USCF's), wherein the UCP computes, for every spatial pixel in the universe, its four 'physical features' of 'space', 'time', 'energy' and 'mass' based on three Computational Dimensions (of 'Framework', 'Consistency' and 'Locus'). The key point to be noted is that this simultaneous UCP computation of all spatial pixels in the physical universe (at any minimal time-point USCF frame/s) negates the possibility of the existence of any "material-causal" relationships between any quantum or relativistic elements or phenomena – pointing instead at the UCP's "A-Causal Computation", wherein the UCP is the sole "cause" of all of these simultaneously computed spatial pixels and phenomena in the physical universe (at any given USCF frame/s)!
Hence, the gist and purpose of this article is to highlight the Paradigmatic Shift represented by the CUFT's discovery of the UCP's 'A-Causal Computation', which negates the theoretical possibility of the currently assumed Quantum and Relativistic 'SROCS' computational systems representing a "Material-Causal" relationship – pointing instead at the existence of a singular higher-ordered UCP which carries out a simultaneous 'A-Causal Computation' of all spatial pixels and physical phenomena in the universe. The article therefore advances through the development of two converging lines of inquiry:
a) Analysis of the rigorous scientific criteria necessary to produce a “Paradigmatic Shift” in any given scientific discipline, and a demonstration of the satisfaction of these rigorous scientific criteria by the CUFT; and
b) Delineation of the essential characteristics of the CUFT’s ‘A-Causal Computation’ Paradigmatic Shift in Physics, e.g., its theoretical significance, its explanation of a series of (otherwise) unexplained Physical Conundrums and its series of new ‘critical predictions’ which can be empirically (or mathematically) validated (and which can open up completely new “horizons” in contemporary Theoretical Physics).
The Need for a Paradigmatic Shift in Contemporary ‘Material-Causal’ Physics
Let us then begin with an analysis of the imminent "crisis" in contemporary Theoretical Physics and its satisfaction of the necessary rigorous scientific criteria for the occurrence of a 'Paradigmatic Shift' in Physics. Contemporary Theoretical Physics finds itself in a real "crisis", i.e., one which satisfies the earlier-mentioned criteria for a Paradigmatic Shift: the two primary pillars of modern Physics, Quantum Mechanics (QM) and Relativity Theory (RT), seem to contradict each other, and there exists a series of "Physical Conundrums" which cannot be accounted for by either of these theories. The principal contradictions between QM and RT comprise: a) the speed-of-light limit set by Relativity Theory on the transmission of any signal (or information) between any two events – which is contradicted by QM's empirical validation of the "quantum entanglement" phenomenon, indicating that the measurement of one (of two) "entangled particles" simultaneously determines the complementary physical properties of the other 'entangled particle' [13,14], thereby negating the abovementioned 'speed of light' constraint imposed by Relativity Theory on the transmission of any signal or information across space and time; and b) whereas RT is characterized by "positivistic" features (e.g., each object or phenomenon possesses a clear, definitive space-time or energy-mass value), QM may only be characterized as a "probabilistic" model, e.g., attributing only probabilistic and "complementary" space-energy or temporal-mass values to any event or phenomenon. In addition to this principal theoretical contradiction between QM and RT, there exists a series of unresolved key Physical Conundrums which cannot be adequately accounted for by either QM or RT. These include "dark-matter" and "dark-energy", i.e., the fact that up to 90% of all the mass and energy in the universe (calculated based on the existing QM and RT 'Materialistic-Causal' Paradigm) cannot be observed empirically – and is hence attributed to "hypothetical theoretical constructs" (which have not been observed empirically to date). Other critical unresolved enigmas associated with this Material-Causal Paradigm are the "Arrow of Time" (phenomena developing only from the past to the future but not vice versa), as well as the "Second Law of Thermodynamics" (which will be challenged and revised by the new Computational Unified Field Theory's 'A-Causal Computation' Paradigm, e.g., pointing at the possibility of reversing the 'Arrow of Time' through the new theoretical vistas opened by this singular 'A-Causal Computation').
If we are to base our rigorous scientific analysis of the specific criteria that have to be demonstrated in order to call for a 'Paradigmatic Shift' (in any given scientific discipline at any given point in time) on Kuhn's famous (and accepted) criteria [1], we could very well identify these two (abovementioned) primary crises facing modern Theoretical Physics (e.g., the principal theoretical contradiction between QM and RT and their inability to account for the series of abovementioned 'Physical Conundrums') as calling for an imminent Paradigmatic Shift in contemporary Physics. This is because, according to Kuhn's conception of those particular (developmental) phases of 'Paradigmatic Shifts' along the natural development of Science, it is the appearance of precisely such internal theoretical inconsistencies between certain key theories (within a given scientific discipline), and the inability of the 'Standard Paradigm' to account for a series of observed empirical phenomena, which signal the coming of a necessary 'Paradigmatic Shift' (e.g., within a particular domain of Science). Moreover, according to Kuhn's conception of such Paradigmatic Shifts, to the extent that the 'New Scientific Paradigm' can indeed resolve those key theoretical inconsistencies of the 'Standard Paradigm' and also explain (in a satisfactory manner) the series of unresolved Physical Conundrums, then Science has to adopt the required Paradigmatic Shift (e.g., even if it sometimes seems somewhat "reluctant" to let go of the Standard Paradigm which served the progression of scientific inquiry successfully for a while). Perhaps one additional criterion to be added to Kuhn's original list of necessary rigorous criteria for the adoption of a Paradigmatic Shift within Physics may be taken from Einstein's utilization of a "differential-critical prediction" which may differentiate the 'Standard Paradigm' from the new 'Paradigmatic Shift', i.e., as in the case of his differential prediction of the (double) value of the deflection of starlight passing near the Sun due to the curvature of space-time by mass (relative to the Newtonian predicted value). Thus, to the extent that any 'New Paradigm' can identify (and quantify) the predicted value/s of any empirically testable measurement as significantly different from the predictions of the 'Standard Paradigm', this calls for an unequivocal Paradigmatic Shift in Theoretical Physics (e.g., as indeed occurred in the case of the 1919 empirical validation of Einstein's 'differential-critical prediction' regarding the deflection of starlight).
Hence, the next step to be taken in order to validate the 'Computational Unified Field Theory' (CUFT) as a satisfactory New Paradigm (e.g., one bringing about an undisputed Paradigmatic Shift in Physics) is to systematically review: the capacity of this CUFT to resolve the apparent theoretical inconsistency that exists between QM and RT; its replication of all known empirical findings of both these theories; its identification of at least one 'differential-critical' empirical prediction which differs from the predictions of both QM and RT, and that prediction's empirical validation (e.g., associated with the 'Proton-Radius Puzzle' findings); and its satisfactory explanation of the abovementioned series of otherwise unexplained 'Physical Conundrums' (e.g., the "Dark Energy", "Dark Matter" and "Arrow of Time" physical phenomena). Hence, let us begin with a review of the key theoretical postulates of the CUFT:
The CUFT [2-7] is based upon several key theoretical postulates that include:
The 'Duality Principle'
This postulate proves that for any physical system which is capable of empirically determining the values of a given 'y' factor, that system's computational structure cannot be based (solely) on any direct (or indirect) physical interactions between that 'y' factor and an exhaustive set of 'x' factors! Essentially, the CUFT asserts that both Quantum and Relativistic computational systems possess an intrinsic "computational flaw" in violating this Duality Principle [4,5,15]; this is because both Quantum and Relativistic computational systems attempt to determine the particular value of a given 'y' element, i.e., the subatomic "target" or a (space-time, energy-mass) relativistic "phenomenon", solely based on its direct (or indirect) physical interaction with another (exhaustive) set of 'x' factor/s, e.g., another subatomic "probe" element, or another "relativistic observer". The 'Duality Principle' proves that such a "Self-Referential Ontological Computational System" (SROCS) structure (e.g., of trying to determine the "existence" or "non-existence" of a particular 'y' value solely based on its direct physical interaction (PI) with another 'x' factor) inevitably leads to both 'logical inconsistency' and an ensuing 'computational indeterminacy'. This is because, in cases in which this direct physical interaction between the 'x' and 'y' elements leads to a result in which the 'y' factor (or value) is negated, then due to the SROCS assumption that the determination of the "existence" or "non-existence" of the 'y' factor/value is computed solely based on this direct 'x-y' physical interaction, we obtain that the 'y' factor/value both "exists" AND "does not exist" within the same SROCS computational structure, which constitutes a 'logical inconsistency' (e.g., a contradiction)!
SROCS: PI{x,y} → 'not y'
Moreover, based on this SROCS assumption whereby the computation of the "existence"/"non-existence" of the 'y' factor or value is determined solely based on this direct 'x-y' physical interaction, this assumed SROCS not only leads to the above 'logical inconsistency' but also seems unable to compute whether in fact the 'y' value (or factor) "exists" or "does not exist"! But since the Duality Principle applies only to computational systems which we already know are capable of determining the value of their 'y' element (e.g., such as the capacity of quantum or relativistic computational systems to determine the empirical values of their subatomic "target" element or of a relativistic 'space-time'/'energy-mass' "phenomenon"), the Duality Principle concludes that their assumed SROCS computational structure must be negated!
Specifically, in the case of the assumed Quantum SROCS structure, it is assumed that the particular measured value of the subatomic "target" ('t[i=n]') is determined solely based on the direct 'physical interaction' (PI) of this subatomic 'target' – which comprises all of the possible "probability wave function" values – with another subatomic 'probe' element:
Quantum SROCS: PI{p, t(i=1...n)} → 't[i=n]'
But this means that for all those quantum target (probability wave function) "non-measured" (i.e., "non-collapsed") values 't[i=1...(n-1)]', we obtain that these 't[i=1...(n-1)]' values seem to both "exist" and "not exist" within the same Quantum SROCS system!?
Quantum SROCS: PI{p, t[i=1...(n-1)]; t[i=n]} → 't[i=n]'; NOT 't[i=1...(n-1)]'
As stated above, this constitutes a "logical inconsistency" (contradiction), which also leads to an apparent inability of the (assumed) Quantum SROCS computational system to compute whether the measured subatomic 'target' value is 't[i=n]' or 't[i=1...(n-1)]' (e.g., a "computational indeterminacy"). But obviously, these apparent "logical inconsistency" and "computational indeterminacy" are negated by empirical findings indicating the capacity of Quantum (computational) systems to determine the measured value of the subatomic 'target' element! Hence, the Duality Principle negates the assumed Quantum SROCS computational structure!
In much the same manner, the Duality Principle evinces that the Relativistic SROCS structure inevitably leads to the same 'logical inconsistency' and ensuing 'computational indeterminacy' for all those relativistic ('space-time', 'energy-mass') "phenomenon" "non-measured" values 'ph[i=1...(n-1)]' (e.g., for a given relativistic observer). This is because, according to Relativity Theory, any given ('space-time' or 'energy-mass') "phenomenon" possesses a whole range of possible values as measured by differential relativistic observers; moreover, according to Relativity's (assumed) SROCS structure, the determination of the particular measured value 'ph[i=n]' of the "phenomenon" (by a given relativistic observer 'o') is determined solely based on the direct physical interaction between that relativistic observer and the given "phenomenon":
Relativistic SROCS: PI{o, ph(i=1...n)} → 'ph[i=n]'
But this implies that, for any given (particular) relativistic observer, all those "non-measured" "phenomenon" ('space-time', 'energy-mass') values 'ph[i=1...(n-1)]' seem to both "exist" and "not exist" within the same Relativistic SROCS system!?
Relativistic SROCS: PI{o, ph[i=1...(n-1)]; ph[i=n]} → 'ph[i=n]'; NOT 'ph[i=1...(n-1)]'
As shown above, this 'logical inconsistency' (wherein the "non-measured" 'phenomenon' values both "exist" AND "do not exist" within the same SROCS) also leads to an apparent inability of this (assumed) Relativistic SROCS to determine the relativistic values of this phenomenon – which is contradicted by robust empirical findings; hence, the Duality Principle negates this assumed Relativistic SROCS structure.
Hence, for both the Quantum and the Relativistic apparent SROCS structures, the Duality Principle concludes that the only means for determining the "existence" or "non-existence" of any given 'y' value is a conceptually higher-ordered 'D2' computational framework, which computes the "simultaneous co-occurrence" of an exhaustive series of all possible 'x-y' pairs. Finally, this recognition of the need to base both Quantum and Relativistic (apparently) SROCS computational systems on a higher-ordered 'D2' computational system has also led to the Duality Principle's conceptual computational proof that there cannot be more than one such higher-ordered 'D2' computational system underlying both Quantum and Relativistic computations [2]. Therefore, an application of the Duality Principle to both the Quantum and the Relativistic (apparently) SROCS computational systems has pointed at the inevitable recognition of a singular 'Universal Computational Principle' (signified by the Hebrew letter "Yud": 'י') which computes the simultaneous 'co-occurrences' of the entire series of all hypothetical (quantum and relativistic) 'x-y' pairs!
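To make the SROCS schema above concrete, here is a minimal toy sketch in Python (my own construction, not the authors' formalism; the 'interaction' function and all names are deliberately simplified, hypothetical stand-ins). It merely exhibits the claimed structure in which the "non-measured" y values must simultaneously "exist" (as inputs to the physical interaction) and "not exist" (per the SROCS verdict):

# Toy sketch of the SROCS schema PI{x, y} -> 'not y' described above.
# All names are illustrative assumptions, not the authors' code.

def physical_interaction(x, y_candidates):
    """Stand-in for PI{x, y}: only one 'y' value survives (is measured)."""
    return {max(y_candidates)}

def srocs_verdicts(x, y_candidates):
    surviving = physical_interaction(x, y_candidates)
    # Per the SROCS assumption, a candidate "exists" iff it survives PI --
    # yet every candidate had to "exist" already in order to enter PI.
    verdicts = {y: (y in surviving) for y in y_candidates}
    contradictions = [y for y, exists in verdicts.items() if not exists]
    return verdicts, contradictions

verdicts, contradictions = srocs_verdicts("probe", {1, 2, 3})
print(verdicts)        # e.g., {1: False, 2: False, 3: True}
print(contradictions)  # [1, 2]: values that both "exist" (as inputs) and "do not exist"

The Duality Principle's point, restated in these terms, is that no assignment of verdicts computed solely from within physical_interaction can escape this contradiction; only a computation standing "above" the x-y pairs (the 'D2' framework) can consistently assign them.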
The 'Universal Computational Principle' (UCP)
This Universal Computational Principle (deduced based on the former 'Duality Principle') is hypothesized to compute the 'simultaneous co-occurrences' of all exhaustive 'spatial pixels' in the physical universe at any given 'minimal temporal point' (e.g., thereby extrapolating the phenomenon of 'quantum entanglement' to all spatial pixels in the universe); indeed, this UCP simultaneous computation of all spatial pixels in the universe (at any given minimal time-point) produces an extremely rapid series (e.g., at the rate of 'c2/h') of 'Universal Simultaneous Computational Frames' (USCF's), each comprising all spatial pixels at a given minimal time point.
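For concreteness, the magnitude of this postulated rate can be checked by elementary arithmetic (a sketch of my own using standard SI values; the text treats the quantity 'c2/h' as a per-second count of USCF frames, and that reading is adopted here):

# Back-of-envelope evaluation of the USCF rate denoted 'c2/h' in the text.
c = 2.998e8      # speed of light, m/s
h = 6.626e-34    # Planck's constant, J*s
print(c**2 / h)  # ~1.36e50 -- the claimed number of USCF 'frames' per second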
The UCP's Computational Dimensions
The UCP is also hypothesized to employ three 'Computational Dimensions': the 'Framework' Dimension relates to certain 'computational features' that are computed at the 'object' level or at the 'frame' (USCF) level; the 'Consistency' Dimension relates to the UCP's computation of the degree of 'consistency' or 'inconsistency' of an object across a series of USCF frames (e.g., regarding its abovementioned 'object' or 'frame' measures, and also relating to the below-mentioned 'Locus' Dimension computation); and the 'Locus' Dimension relates to the UCP's computation of any 'Framework-Consistency' combination from the computational perspective of the 'frame' (termed 'global') or from the 'object's' computational perspective (termed 'local'). The fascinating facet of these three UCP Computational Dimensions is that they produce the four physical features of 'space', 'energy', 'mass' and 'time', i.e., as secondary computational combinations of the 'Framework' and 'Consistency' Computational Dimensions: the CUFT posits that 'space' and 'energy' emerge as a result of the UCP's computation of the degree of 'consistent' or 'inconsistent' measure of an object relative to the 'frame' (e.g., comprising one of the computational levels of the 'Framework' Dimension); likewise, the basic physical features of 'mass' and 'time' arise as secondary computational features associated with the degree of 'consistent' or 'inconsistent' measure of the 'object' itself (comprising the other level of the 'Framework' Dimension)!
Hence, the (new) computational definitions of 'space', 'energy', 'mass' and 'time' are given by:
S: (fi{x,y,z}[USCF(i)] + ... + fj{x,y,z}[USCF(n)]) / h x n{USCF's}
Such that: fj{x,y,z}[USCF(i)] ≤ fi{x+(hxn),y+(hxn),z+(hxn)}[USCF(i...n)]
where the 'space' measure of a given object (or event) is computed based on a frame-consistent computation that adds the object's specific (x,y,z) localization across a series of USCF's [i...n] – which nevertheless does not exceed the threshold of Planck's constant per the ('n') number of frames (e.g., thereby providing the CUFT's definition of "space" as a 'frame-consistent' USCF's measure). Conversely, the 'energy' of an object (e.g., whether it relates to the spatial dimensions of an object or event or to the spatial location of an object) is computed based on the frame differences of a given object's location/s or size/s across a series of USCF's, divided by the speed of light 'c' multiplied by the number of USCF's across which the object's energy value has been measured:
E: (fj{x,y,z}[USCF(n)] – fi{(x+m),(y+m),(z+m)}[USCF(i...n)]) / c x n{USCF's}
such that:
fj{x,y,z}[USCF(n)] > fi{x+(hxn),y+(hxn),z+(hxn)}[USCF(i...n)]
Wherein the energetic value of a given object, event, etc. is computed based on the subtraction of that object's "universal pixels" location/s across a series of USCF's, divided by the speed of light multiplied by the number of USCF's. In contrast, the 'mass' of an object is computed based on a measure of the number of times the 'object' is presented 'consistently' across a series of USCF's, divided by Planck's constant (e.g., representing the minimal degree of inter-frame change):
M: Σ(oi{x,y,z}[USCF(n)] = oj{(x+m),(y+m),(z+m)}[USCF(1...n)]) / h x n{USCF's}
where the measure of ‘mass’ is computed based on a comparison of the number of instances in which an object’s (or event’s) ‘universal-pixels’ measures (e.g., along the three axes ‘x’, y’ and ‘z’) is identical across a series of USCF’s (e.g., Σoi {x,y,z} [USCF(n)] = oj{ (x + m),(y + m),(z + m)} [USCF(1...n)]) , divided by Planck’s constant.
Again, the measure of ‘mass’ represents an object-consistent computational measure – e.g., regardless of any changes in that object’s spatial (frame) position across these frames.
Finally, the ‘time’ measure is computed based on an ‘object-inconsistent’ computation of the number of instances in which an ‘object’ (i.e., corresponding to only a particular segment of the entire USCF) changes across two subsequent USCF’s (e.g., Σ oi {x,y,z} [USCF(n)] ≠ oj {(x + m),(y + m),(z + m)} [USCF(1...n)]) , divided by ‘c’:
T: Σ(oi{x,y,z}[USCF(n)] ≠ oj{(x+m),(y+m),(z+m)}[USCF(1...n)]) / c x n{USCF's}, such that:
T: Σoi{x,y,z}[USCF(n)] - oj{ (x + m),(y + m),(z + m)} [USCF(1...n)] ≤ c x n{USCF’s}
Hence, the measure of 'time' represents a computational measure of the number of 'object-inconsistent' presentations any given object (or event) possesses across subsequent USCF's (e.g., once again, regardless of any changes in that object's 'frame' spatial position across this series of USCF's).
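To make these four computational definitions concrete, the following self-contained Python sketch (my own construction under simplifying assumptions, not the authors' code) represents an 'object' as a set of occupied pixels per frame, with placeholder constants H and C standing in for Planck's constant and the speed of light: 'space' accumulates frame-consistent pixels, 'energy' tracks inter-frame displacement, 'mass' counts object-consistent repetitions and 'time' counts object-inconsistent changes, each normalized as in the definitions above:

# Toy illustration of the CUFT's four measures over a sequence of 'frames'.
H, C = 1.0, 10.0            # placeholder stand-ins for 'h' and 'c'

frames = [
    {(0, 0), (1, 0)},       # the "object" at frame 1
    {(2, 0), (3, 0)},       # displaced by two pixels at frame 2
    {(2, 0), (3, 0)},       # unchanged at frame 3
]
n = len(frames)

def centroid_x(frame):
    return sum(x for x, y in frame) / len(frame)

# 'space' (frame-consistent): accumulated pixel extent, normalized by h x n
space = sum(len(f) for f in frames) / (H * n)

# 'energy' (frame-inconsistent): summed centroid displacement, normalized by c x n
energy = sum(abs(centroid_x(frames[i + 1]) - centroid_x(frames[i]))
             for i in range(n - 1)) / (C * n)

# 'mass' (object-consistent): frames in which the object repeats unchanged, per h
mass = sum(frames[i + 1] == frames[i] for i in range(n - 1)) / H

# 'time' (object-inconsistent): frames in which the object changes, per c
time_measure = sum(frames[i + 1] != frames[i] for i in range(n - 1)) / C

print(space, energy, mass, time_measure)  # 2.0 0.066... 1.0 0.1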
Finally, the combination of the 'Locus' Dimension together with the 'Framework' and 'Consistency' Dimensions – producing the four physical features of 'space', 'energy', 'mass' and 'time' – produces all known relativistic effects and phenomena, such as 'time-dilation', 'energy-mass' equivalence and even the curvature of 'space-time'!
The Computational Invariance Principle
Another key theoretical postulate comprising the CUFT is the 'Computational Invariance Principle', which identifies the 'Universal Computational Principle' as the sole 'computationally invariant' element, which both produces all four 'computationally variant' physical features of 'space', 'time', 'energy' and 'mass' and also exists independently of these physical features "in-between" any two subsequent USCF's. As such, the 'Computational Invariance Principle' recognizes the Universal Computational Principle as the sole (and singular) 'invariant' reality underlying the production of the four secondary computational 'variant' physical properties of 'space', 'time', 'energy' and 'mass' (based in part on the well-known scientific principle of "Ockham's Razor", which prefers the simplest, most parsimonious theoretical account of complex phenomena) [2].
The CUFT model can also be explained through a "cinematic-film metaphor": imagine yourself sitting at a cinematic film presentation (e.g., seeing a film for the first time, unaware of the 'mechanics' of the film being presented to you)... In this case you could measure (for instance) the "velocity" (or energy) of a 'jet-plane' zooming across the screen, or the "time" it took this jet-plane to get from point 'A' to point 'B' (on the screen), or the "spatial" length of the plane, etc. – being unaware that (in truth) all of these 'spatial', 'temporal', 'energy' (and 'mass') "physical" features are produced based on a 'higher-ordered' computation of the degree of "displacement", or "lack of displacement", occurring across the series of cinematic-film frames...
Thus, for instance, the plane's "energy" (or velocity) is computed based on the number of 'pixels' that the plane has been displaced across a given series of frames... Conversely, the plane's "spatial" measure is given based on the computation of the number of 'spatial pixels' that remain constant across a series of cinematic film frames (e.g., reflecting the fact that the plane's length doesn't "increase" or "decrease" across these frames)... Likewise, the "temporal" length of the plane's flight is computed based on the number of changes that occur in, or around, the plane (across a given number of film frames): imagine, for instance, what would happen to that plane's flight temporal value if the frames were projected more slowly (e.g., in "slow-motion", where there is a smaller number of changes taking place in the plane's flight, giving rise to a "dilated time" measure), or in a case in which precisely the same frame was presented over and over again for, say, one minute – time would "stand still"...
Similarly, we can devise a special 'cinematic-film' operation in which any given object is projected at "below-threshold" intensity in any given single frame, such that only the presentation of the same object (in the same spatial configuration) across a number of frames may produce a visible object – and its apparent "mass" value will be computed as a function of the number of frames in which that object appeared 'spatially consistent'... So we can see that, at least in the "cinematic-film metaphor", 'energy', 'space', 'time' and 'mass' are all produced as secondary computational measures, computed by a higher-ordered (singular) computation relating to the degree of 'changes', or 'lack of changes', of a given object across the frame, or as measured in the object itself (across a given series of cinematic film frames)...
Quite similarly, the CUFT posits that the four basic physical features of 'space', 'time', 'energy' and 'mass' are produced through the computation, by a singular (higher-ordered) 'Universal Computational Principle' (represented by the Hebrew letter "Yud"), of the degree of 'consistency' or 'inconsistency' across a series of extremely rapid (c2/h) 'Universal Simultaneous Computational Frames' (USCF's). According to the CUFT, this Universal Computational Principle (UCP) employs two 'Computational Dimensions' to compute these four (secondary computational) physical features – 'Consistency' ('consistent' vs. 'inconsistent') and 'Framework' ('frame' vs. 'object') – and an additional Computational Dimension of 'Locus' ('global' vs. 'local') which accounts for relativistic phenomena.
As shown above, the Computational Unified Field Theory postulates that the various combinations of the 'Framework' and 'Consistency' Computational Dimensions produce the known 'physical' features of 'space' ('frame-consistent'), 'energy' ('frame-inconsistent'), 'mass' ('object-consistent') and 'time' ('object-inconsistent'). The next step is to explicate the various possible relationships that exist between each of these four basic 'physical' features and the two levels of the third Computational Dimension of 'Locus', e.g., 'global' vs. 'local': it is suggested that each of these four basic physical features can be measured either from the computational framework of the entire USCF's perspective (e.g., a 'global' framework) or from the computational perspective of a particular segment of those USCF's (e.g., a 'local' framework). Thus, for instance, the spatial features of any given object can be measured from the computational perspective of the (series of the) entire USCF's, or from the computational perspective of only a segment of those USCF's – i.e., such as from the perspective of that object itself (or from the perspective of another object travelling alongside, or in some other specific relationship to, that object). In much the same manner, all three other physical features of 'energy', 'mass' and 'time' (e.g., of any given object) can be measured from the 'global' computational perspective of the entire (series of) USCF's or from a 'local' computational perspective of only a particular USCF's segment (e.g., from that object's perspective or from that of another travelling frame of reference).
One possible way of formalizing these two different 'global' vs. 'local' computational perspectives (e.g., for each of the four abovementioned basic physical features) is by attaching a 'global' {'g'} vs. 'local' {'l'} subscript to each of the two possible measurements of the four physical features. Thus, for instance, in the case of 'mass' the 'global' (computational) perspective measures the number of times that a given object has been presented consistently (i.e., unchanged) when measured across the (entire) USCF's pixels (e.g., across a series of USCF's); in contrast, the 'local' computational perspective of 'mass' measures the number of times that a given object has been presented consistently (e.g., unchanged) when measured from within that object's own frame of reference:
M(g): Σ(oi{x,y,z}(g)[USCF(n)] = oi{(x+j),(y+j),(z+j)}(g)[USCF(1...n)]) / h x n{USCF's}
such that:
[oi{x,y,z}[USCF(n)]] - [oi{(x+j),(y+j),(z+j)}[USCF(1...n)]] ≤ n x h[USCF(1...n)]
M(l): Σ(oi{x,y,z}(l)[USCF(n)] = oi{(x+j),(y+j),(z+j)}(l)[USCF(1...n)]) / h x n{USCF's}
such that:
[oi{x,y,z}[USCF(n)]] - [oi{(x+j),(y+j),(z+j)}[USCF(1...n)]] ≤ n x h[USCF(1...n)]
What is to be noted is that these (hypothesized) different measurements of the 'global' vs. 'local' computational perspectives – i.e., as measured across the entire frame's pixels ('global'), as opposed to only the pixels constituting the particular segment of the USCF's which comprises the given object or frame of reference ('local') – may in fact replicate Relativity's known phenomenon of the increase in an object's mass associated with a (relativistic) increase in its velocity (as well as all other relativistic phenomena, such as the dilation of time, the shrinkage of length, etc.). This is due to the fact that the 'global' measurement of an object's mass critically depends on the number of times that object has been presented (consistently) across a series of USCF's: the greater the number of (consistent) presentations, the higher its mass. However, since the computational measure of 'mass' is computed relative to Planck's constant 'h' (e.g., as a given object's number of consistent presentations across a specific number of USCF's frames), and since the spatial measure of any such object is contingent upon that object's consistent presentations (across the series of USCF's) such that the object does not differ ('spatially') across frames by more than the number of USCF's multiplied by Planck's constant, it follows that the higher an object's energy – i.e., its displacement of pixels across a series of USCF's – the greater the number of pixels that object has travelled, and also the greater the number of times that object has been presented across the series of USCF's – which constitutes that object's 'global' mass measure! In other words, when an object's mass is measured from the 'global' perspective we obtain a measure of that object's presentations relative to the (external) global pixel reference, which increases as its relativistic velocity increases, thereby also increasing the number of times that object is presented (e.g., from the global perspective) and hence increasing its globally measured 'mass' value. In contrast, when that object's mass is measured from the 'local' computational perspective, such a 'local mass' measurement only takes into account the number of times that object has been presented (across a given series of USCF's) as measured from within that object's own frame of reference. Therefore, even when an object increases its velocity, if we set out to measure its mass from within its own frame of reference we will not be able to measure any increase in its measured 'mass' (e.g., since, when measured from within its local frame of reference, there is no change in the number of times that object has been presented across the series of USCF's)...
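One (hedged) reading of this 'global' vs. 'local' mass argument can be sketched in a few lines of Python (my own toy, not the authors' formalism): on this reading, the 'global' measure counts the object's presentations against the frame's external pixel reference, and hence grows with displacement, while the 'local' measure counts presentations from within the object's own frame of reference and is therefore velocity-independent:

# Toy contrast of 'global' vs. 'local' mass measures, per the reading above.

def mass_global(positions):
    # presentations registered against the external ('global') pixel reference:
    # a faster object is registered over more global pixels across the series
    return len(set(positions))

def mass_local(positions):
    # from within the object's own frame of reference nothing changes, so
    # every frame contributes one 'consistent' presentation
    return len(positions)

slow = [0, 0, 1, 1, 2, 2]    # pixel position per USCF: advances every 2nd frame
fast = [0, 2, 4, 6, 8, 10]   # advances two pixels every frame

print(mass_global(slow), mass_local(slow))  # 3 6
print(mass_global(fast), mass_local(fast))  # 6 6

As the printed values illustrate, the 'global' measure alone increases with velocity, which is the qualitative pattern the text identifies with the relativistic increase of mass (the 'local' measure playing the role of the invariant rest mass).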
Likewise, it is hypothesized that if we apply the ‘global’ vs. ‘local’ computational measures to the physical features of ‘space’, ‘energy’ and ‘time’ we will also replicate the well-known relativistic findings of the shortening of an object’s length (in the direction of its travelling), and the dilation of time (as measured by a ‘global’ observer): Thus, for instance, it is suggested that an application of the same ‘global’ computational perspective to the physical feature of ‘space’ brings about an inevitable shortening of its spatial length (e.g., in the direction of its travelling):
S(g): (fi{x,y,z}(g)[USCF(i)] + ... + fj{x,y,z}(g)[USCF(n)]) / h x n{USCF's}, such that:
fj{x,y,z}(g)[USCF(i)] ≤ fi{x+(hxn),y+(hxn),z+(hxn)}(g)[USCF(i...n)]
It is hypothesized that this is due to the global computational definition of an object's spatial dimensions, which computes a given object's spatial (length) measure based on its consistent 'spatial' pixels (across a series of USCF's) – such that any changes in that object's spatial dimensions must not exceed Planck's ('h') spatial constant multiplied by the number of USCF's. This is because, given such a minimal Planck 'spatial threshold' computational constraint, the faster a given relativistic object travels (e.g., from a global computational perspective) the fewer 'consistent' spatial 'pixels' that object possesses across frames – which implies that its spatial dimensions become shorter (i.e., in the direction of its travelling); in contrast, measured from a 'local' computational perspective there is obviously no such "shrinkage" in an object's spatial dimensions – since, based on such a 'local' perspective, all of the spatial 'pixels' comprising a given object remain unchanged across the series of USCF's.
S {'l’}: (fi {x,y,z} {'l’} [USCF(i)] + … fj{x,y,z}{'l’} [USCF(n)]) / h x n{USCF’s}
Such that:
fj{x,y,z}{'l’} [USCF(i)]) ≤ fi{x+(hxn),y+(hxn),z+(hxn)} {'l’} [USCF(i…n)]
Somewhat similar is the case of the ‘global’ computation of the physical feature of ‘time’ which is computed based on the number of measured changes in the object’s spatial ‘pixels’ constitution (across frames):
T(g): Σoi{x,y,z}[USCF(n)] ≠ oj{(x+m),(y+m),(z+m)} [USCF(1...n)] / c x n{USCF's},
Such that:
T: Σoi{x,y,z}[USCF(n)] - oj{(x+m),(y+m),(z+m)} [USCF(1...n)] ≤ c x n{USCF’s}
The temporal value of an event (or object) is computed based on the number of times that a given object or event has changed, relative to the speed of light (e.g., across a certain number of USCF's); however, the measurement of temporal changes (e.g., taking place in an object or event) differs significantly when computed from the 'global' vs. the 'local' perspective: this is because, from a 'global' perspective, the faster an object travels (e.g., relative to the speed of light) the fewer potential changes are exhibited in that object's or event's presentations (across the relevant series of USCF's). In contrast, from a 'local' perspective there is no change in the number of measured changes in the given object (e.g., as its velocity increases relative to the speed of light) – since the local (computational) perspective does not encompass globally measured changes in the object's displacement (relative to the speed of light)...
Note also that we can begin appreciating the fact that, from the CUFT's (D2 USCF's) computational perspective, there exist inexorable (computational) interrelationships between the eight computational products of the three postulated Computational Dimensions of 'Framework', 'Consistency' and 'Locus'. Thus, for instance, we find that an acceleration in an object's velocity increases the number of times that object is presented (e.g., 'globally', across a given number of USCF frames) – which in turn also increases its 'mass' (e.g., from the 'global Locus' computational perspective) and (inevitably) also decreases its (global) 'temporal' value (due to the decreased number of instances in which that object changes across those given frames, e.g., globally, relative to the speed-of-light maximal-change computational constraint)... Indeed, over and beyond the hypothesized capacity of the CUFT to replicate and account for all known relativistic and quantum empirical findings, its conceptually higher-ordered 'D2' USCF's emerging computational framework may point at the unification of all apparently "distinct" physical features of 'space', 'time', 'energy' and 'mass' (and 'causality'), as well as at a complete harmonization of the apparent disparity between quantum (microscopic) and relativistic (macroscopic) phenomena and laws.

Towards that end, we next consider the applicability of the CUFT to known quantum empirical findings. Specifically, we consider the CUFT's account of the quantum (computational) complementary properties of 'space' and 'energy' or 'time' and 'mass'; an alternative CUFT account of the "collapse" of the probability wave function; and the 'quantum entanglement' and 'particle-wave duality' subatomic phenomena. It is also hypothesized that these alternative CUFT theoretical accounts may pave the way for the (long sought-after) unification of the quantum and relativistic models of physical reality. First, it is suggested that the quantum complementary 'physical' features of 'space' and 'energy', or 'time' and 'mass', may be due to a 'computational exhaustiveness' (or 'complementarity') of each of the (two) levels of the Computational Dimension of 'Framework'. It is hypothesized that both the 'frame' and 'object' ('D2-USCF's') computational perspectives are exhaustively comprised of their 'consistent' and 'inconsistent' computational aspects (e.g., the 'space' and 'energy', or 'mass' and 'time' physical features, respectively): thus, whether we choose to examine the USCF's (D2) computation of a 'frame' – which is exhaustively comprised of its 'space' ('consistent') and 'energy' ('inconsistent') computational perspectives – or whether we choose to examine the 'object' perspective of the USCF's (D2) series – which is exhaustively comprised of its 'mass' ('consistent') and 'time' ('inconsistent') computational aspects – in both cases the (D2) USCF's series is exhaustively comprised of these 'consistent' and 'inconsistent' computational aspects (e.g., of the 'frame' or 'object' perspectives)...
This means that the computational definitions of each of these pairs – 'frame': 'space' (consistent) and 'energy' (inconsistent), or 'object': 'mass' (consistent) and 'time' (inconsistent) – are 'exhaustive' in comprising the USCF's Framework (i.e., 'frame' or 'object') Dimension. Indeed, note that the computational definitions of 'space' and 'energy' exhaustively define the USCF's (D2) Framework computational perspective of a 'frame':
S: (fi{x,y,z}[USCF(n)] + ... + fj{x,y,z}[USCF(1...n)]) / h x n{USCF's},
Such that:
fi{x,y,z}[USCF(n)] ≤ fj{x+(hxn),y+(hxn),z+(hxn)}[USCF(1...n)];
And
E: (fi{x,y,z}[USCF(n)] – fj{(x+m),(y+m),(z+m)}[USCF(1...n)]) / c x n{USCF's}
Such that:
fi{x,y,z}[USCF(n)] > fj{x+(hxn),y+(hxn),z+(hxn)}[USCF(1...n)]
Likewise, note that the computational definitions of ‘mass’ and ‘time’ exhaustively define the USCF’s (D2) Framework computational perspective of an ‘object’:
M: Σ [oi{x,y,z}USCF(n)] = [oi{(x+j),(y+j),(z+j)} USCF(1...n)] / h x n{USCF’s}
Such that
[oi{x,y,z}USCF(n)] - [oi{(x+j),(y+j),(z+j)}USCF(1...n)] ≤ n x h[USCF(1...n)].
And
T: Σoi{x,y,z}[USCF(n)] ≠ oj{(x+m),(y+m),(z+m)} [USCF(1...n)] /c x n{USCF’s}
Such that:
T: Σoi{x,y,z}[USCF(n)] - oj{(x+m),(y+m),(z+m)} [USCF(1...n)] ≤ c x n{USCF’s}
Thus, it is hypothesized that it is the computational exhaustiveness of the Framework Computational Dimension's (two) levels (e.g., of the 'frame' or 'object' perspectives) which gives rise to the known quantum complementary 'physical' features of 'space' and 'energy' (e.g., the frame's 'consistent' and 'inconsistent' perspectives) or of 'mass' and 'time' (e.g., the object's 'consistent' and 'inconsistent' perspectives). However, since this hypothetical 'computational exhaustiveness' of the Framework Dimension's (two) levels arises as an integral part of the USCF's (D2) Universal Computational Principle's operation, it manifests both through the (abovementioned) computational definitions of 'space' and 'energy', 'mass' and 'time', and through a singular 'Universal Computational Formula', postulated below:
Based on the abovementioned three basic postulates of the CUFT – the 'Duality Principle' (e.g., including the existence of a conceptually higher-ordered 'D2 A-Causal' computational framework), the existence of a rapid series of 'Universal Simultaneous Computational Frames' (USCF's, e.g., postulated to be computed at the incredible rate of 'c2/h'), and the accompanying three Computational Dimensions of 'Framework' ('frame' vs. 'object'), 'Consistency' ('consistent' vs. 'inconsistent') and 'Locus' ('global' vs. 'local') – a singular 'Universal Computational Formula' is postulated which may underlie all (known) quantum and relativistic phenomena:
Universal Computational Formula: c2/h x י = (s x e) / (t x m)

wherein the left side of this singular hypothetical Universal Computational Formula represents the (abovementioned) universal rate of computation by the hypothetical Universal Computational Principle, whereas the right side represents the 'integrative-complementary' relationships between the four basic physical features of 'space' (s), 'time' (t), 'energy' (e) and 'mass' (m), e.g., as comprising different computational combinations of the three (abovementioned) Computational Dimensions of 'Framework', 'Consistency' and 'Locus'. Note that on both sides of this Universal Computational Formula there is a coalescing of the basic quantum and relativistic computational elements – such that the rate of Universal Computation is given by the maximal degree of (inter-USCF relativistic) change, 'c2', divided by the minimal degree of (inter-USCF quantum) change, 'h'; likewise, the right side of this Universal Computational Formula meshes together both quantum and relativistic computational relationships, combining the relativistic ratios of space and time (s/t) and of energy and mass (e/m) together with the quantum (computational) complementary relationships between 'space' and 'energy', and 'time' and 'mass'. More specifically, this hypothetical Universal Computational Formula fully integrates two sets of (quantum and relativistic) computations, which can be expressed through two of its derivations:
(1) s/t = (m x c2) / (e x h)
(2) t x m x c2 = s x e x h
The first of these equations indicates that there is a computational equivalence between the (relativistic) relationships of 'space and time' and of 'energy and mass'; specifically, that the computational ratio of 'space' (e.g., which according to the CUFT is a measure of the 'frame-consistent' feature) to 'time' (e.g., a measure of the 'object-inconsistent' feature) is equivalent to the computational ratio of 'mass' (e.g., a measure of the 'object-consistent' feature) to 'energy' (e.g., the 'frame-inconsistent' feature)... Interestingly, this (first) derivation of the CUFT's Universal Computational Formula incorporates (and broadens) key known relativistic laws – such as (for instance) the 'E=mc2' equation, as well as the basic concepts of 'space-time' and its curvature by the 'mass' of an object (which in turn also affects that object's movement, i.e., its 'energy').
The second equation explicates the (abovementioned) quantum 'computational exhaustiveness' (or 'complementarity') of the Computational Framework Dimension's two levels – 'frame': 'space' ('consistent') and 'energy' ('inconsistent'), and 'object': 'mass' ('consistent') and 'time' ('inconsistent') 'physical' features – as part of the singular integrated (quantum and relativistic) Universal Computational Formula...
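For the reader's convenience, the algebra linking the reconstructed Universal Computational Formula to these two derivations can be written out explicitly. The following LaTeX fragment is my own rearrangement; the Universal Computational Principle factor, written in the text as the Hebrew letter Yud, is denoted here by K for typesetting (the source's derivations sometimes carry this factor and sometimes drop it):

% Rearranging the reconstructed Universal Computational Formula:
\[
\frac{c^{2}}{h}\,K \;=\; \frac{s\,e}{t\,m}
\quad\Longleftrightarrow\quad
t\,m\,c^{2}\,K \;=\; s\,e\,h
\quad\Longleftrightarrow\quad
\frac{s}{t} \;=\; \frac{m\,c^{2}\,K}{e\,h}.
\]
% Derivation (2) is the middle form (multiply both sides by t*m*h);
% derivation (1) is the right-hand form (then divide both sides by t*e*h).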
Thus, the three (abovementioned) postulates of the 'Duality Principle', the existence of a rapid series of 'Universal Simultaneous Computational Frames' (USCF's, computed by the 'Universal Computational Principle' {'י'} at the incredible hypothetical rate of 'c2/h'), and the three Computational Dimensions of 'Framework', 'Consistency' and 'Locus' have resulted in the formulation of the (hypothetical) new 'Universal Computational Formula'. It is (finally) suggested that this (novel) CUFT and its (embedded) Universal Computational Formula can offer a satisfactory harmonization of the existing quantum and relativistic models of physical reality, e.g., precisely through their integration within the (above) broader, higher-ordered, singular 'D2' Universal Computational Formula. In a nutshell, it is suggested that this Universal Computational Formula embodies the singular higher-ordered 'D2' series of (rapid) USCF's, thereby integrating quantum and relativistic effects (laws and phenomena) and resolving any apparent 'discrepancies' or 'incongruities' between these two apparently distinct theoretical models of physical reality. Therefore, it is suggested that the three (abovementioned, apparent) principal differences between the quantum and relativistic theories – namely, 'probabilistic' vs. 'positivistic' models of physical reality, 'simultaneous-entanglement' vs. 'non-simultaneous causality', and 'single-' vs. 'multiple-' spatial-temporal modeling – can be explained (in a satisfactory manner) based on the new (hypothetical) CUFT model (represented by the Universal Computational Formula).

As suggested earlier, the apparent 'probabilistic' characteristic of quantum mechanics, e.g., wherein an (apparent) multi-spatial-temporal "probability wave function" 'collapses' upon its assumed 'SROCS' direct ('di1') physical interaction with another 'probe' element, is replaced by the CUFT's hypothesized (singular) conceptually higher-ordered 'D2' rapid series of USCF's (e.g., governed by the above Universal Computational Formula). Specifically, the Duality Principle's conceptual proof of the in-principle inability of the SROCS computational structure to compute the "collapse" of an (assumed) "probability wave function" ('target' element) based on its direct physical interaction (at 'di1') with another 'probe' measuring element has led to a re-formalization of the various subatomic quantum effects – including the "collapse" of the "probability wave function", the "particle-wave duality", the "Uncertainty Principle's" computational complementary features, and "quantum entanglement" – as arising from the (singular, higher-ordered 'D2') rapid USCF's series. Thus, instead of Quantum theory's (currently assumed) "collapse" of the 'probability wave function', the CUFT posits that there exists a rapid series of 'Universal Simultaneous Computational Frames' (USCF's) that can be looked at from a 'single' spatial-temporal perspective (e.g., a subatomic 'particle' or a relativistic, well-localized 'object' or 'event') or from a 'multiple' spatial-temporal perspective (e.g., a subatomic 'wave' measurement or conceptualization). Moreover, the CUFT hypothesizes that both the subatomic 'single spatial-temporal' "particle" and the 'multiple spatial-temporal' "wave" measurements are embedded within an exhaustive series of 'Universal Simultaneous Computational Frames' (USCF's) (e.g., governed by the abovementioned Universal Computational Formula).
In this way, it is suggested that the CUFT is able to resolve all three abovementioned (apparent) conceptual differences between the quantum and relativistic models of physical reality. This is because, instead of the 'collapse' of the assumed 'quantum probability wave function' through its (SROCS-based) direct physical interaction with another subatomic probe element, the CUFT posits the existence of a rapid series of USCF's that can give rise to a 'single spatial-temporal' (subatomic "particle" or relativistic 'object' or 'event') or to a 'multiple spatial-temporal' (subatomic or relativistic) "wave" phenomenon. Hence, instead of the current "probabilistic-quantum" vs. "positivistic-relativistic" (apparently disparate) theoretical models, the CUFT coalesces both quantum and relativistic theoretical models as constituting integral elements within a singular rapid series of USCF's. Thereby, the CUFT can explain all of the (apparently incongruent) quantum and relativistic phenomena (and laws), such as, for instance, the (abovementioned) 'particle' vs. 'wave' and 'quantum entanglement' phenomena – e.g., which is essentially a representation of the fact that all single-, multiple- (or exhaustive) measurements are embedded within the series of 'Universal Simultaneous Computational Frames' (USCF's), and therefore that two apparently "distinct" 'single spatial-temporal' measured "particles" that are embedded within the 'multiple spatial-temporal' "wave" measurement necessarily constitute integral parts of the same singular simultaneous USCF's (which therefore gives rise to the apparent 'quantum entanglement' phenomenon). Nevertheless, due to the abovementioned 'computational exhaustiveness' (or 'complementarity'), the computation of such apparently 'distinct' "particles" embedded within the same "wave" and USCF's (series) leads to the known quantum ('uncertainty principle') complementary computational (e.g., simultaneous) constraints applying to the measurement of 'space' and 'energy' (e.g., 'frame': consistent vs. inconsistent features), or of 'mass' and 'time' (e.g., 'object': consistent vs. inconsistent features). Such a USCF's-based theoretical account of the empirically validated "quantum entanglement" phenomenon is also capable of resolving the apparent contradiction that seems to exist between such simultaneous "action at a distance" (to paraphrase Einstein's famous objection) and Relativity's constraint upon the transmission of any signal at a velocity exceeding the speed of light: this is due to the fact that the CUFT postulates that the "entangled particles" are computed simultaneously (along with the entire physical universe) as part of the same USCF/s (e.g., and more specifically, of the same multi-spatial-temporal "wave" pattern), so that no signal needs to be transmitted between them across space. Another important aspect of this (hypothetical) Universal Computational Formula representation of the CUFT may be its capacity to replicate Relativity's curvature of 'space-time' based on the existence of certain massive objects (which in turn also affects their own space-time pathway, etc.). Interestingly, the CUFT points at the existence of USCF's regions that may constitute "high-space, high-time; high-mass, low-energy" vs. other regions which may be characterized as "low-space, low-time; low-mass, high-energy", based on the computational features embedded within the CUFT (and its representation by the above Universal Computational Formula).
This is based on the Universal Computational Formula's (integrated) representation of the CUFT's basic computational definitions of 'space', 'time', 'energy' and 'mass', which define: 'space' as the number of (accumulated) 'frame-consistent' USCF's pixels that any given object occupies, and (conversely) 'time' as the number of 'object-inconsistent' pixels; and likewise 'mass' as the number of 'object-consistent' USCF's pixels, and 'energy' as the number of 'frame-inconsistent' USCF's pixels. Hence, General Relativity may represent a 'special case' embedded within the CUFT's Universal Computational Formula's integrated relationships between the 'space', 'time', 'energy' and 'mass' (computational definitions). This is because General Relativity describes the specific dynamics between the "mass" of relativistic objects (e.g., a 'global-object-consistent' computational measure), their curvature of "space-time" (i.e., based on 'frame-consistent' vs. 'object-inconsistent' computational measures) and their relationship to the 'energy-mass' equivalence (e.g., reflecting 'frame-inconsistent' vs. 'object-consistent' computational measures). From the (abovementioned) 'global' computational measurement perspective, there seem to exist those USCF's regions which are displaced significantly across frames (e.g., possess a high 'global-frame-inconsistent' energy value), which therefore also exhibit an increased 'global-object-consistent' mass value, and which are moreover necessarily characterized by their (apparent) curvature of 'space-time' (i.e., an alteration of the 'global-frame-consistent' space values and associated 'global-object-inconsistent' time values)…
Therefore, in the special CUFT case described by General Relativity, we obtain those "massive" objects, i.e., those which arise from high 'global-frame-inconsistent' energy values (e.g., and which are therefore presented many times consistently across frames, yielding a high 'global-object-consistent' mass value). These objects also produce low (dilated) global temporal values, since the high 'global-object-consistent' (mass) value is inevitably linked with a low 'global-object-inconsistent' (time) value. Finally, such a high 'global-frame-inconsistent' (energy) object also invariably produces low 'global-frame-consistent' spatial measures (e.g., in the vicinity of such a 'high-energy, high-mass' object). Thus, it may be the case that General Relativity's described mechanical dynamics between the mass of objects and their curvature of 'space-time' (which interacts with these objects' charted space-time pathways) represent a particular instance embedded within the more comprehensive (CUFT) Universal Computational Formula's outline of a (singular) USCF's-series-based D2 computation (e.g., comprising the three abovementioned 'Framework', 'Consistency' and 'Locus' Computational Dimensions) of the interrelationships of the four basic 'physical' features of 'space', 'time', 'energy' and 'mass' (e.g., as 'secondary' emerging computational products of this singular Universal Computational Formula-driven process)...
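Since these four computational definitions are stated as pixel counts across consecutive USCF's, they can be mimicked in a few lines of code. The sketch below is a minimal toy under one simplifying reading (an 'object' is a set of pixel coordinates; object-consistency is judged within the object's own frame of reference, frame-consistency in absolute coordinates); the function names and the encoding are our own hypothetical choices:

```python
# Toy illustration of the CUFT's computational definitions of the four
# 'physical' features as pixel counts across two consecutive USCF's.
def normalize(pixels: set) -> set:
    """Re-express pixels relative to the object's own frame of reference."""
    ox = min(x for x, _ in pixels)
    oy = min(y for _, y in pixels)
    return {(x - ox, y - oy) for x, y in pixels}

def measures(prev: set, nxt: set) -> dict:
    return {
        "mass":   len(normalize(prev) & normalize(nxt)),  # 'object-consistent' pixels
        "time":   len(normalize(prev) - normalize(nxt)),  # 'object-inconsistent' pixels
        "space":  len(prev & nxt),                        # 'frame-consistent' pixels
        "energy": len(prev - nxt),                        # 'frame-inconsistent' pixels
    }

# A rigid object displaced wholesale between frames: its shape is fully
# retained (high 'mass', zero 'time') while every pixel is frame-displaced
# (zero 'space', high 'energy') -- echoing the text's "high-energy, high-mass"
# USCF's regions.
print(measures({(0, 0), (0, 1), (1, 0)}, {(5, 5), (5, 6), (6, 5)}))
# -> {'mass': 3, 'time': 0, 'space': 0, 'energy': 3}
```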
Indeed, the CUFT's hypothesized rapid series of USCF's (governed by the abovementioned 'Universal Computational Formula') integrates: the essential quantum complementary features of 'space and energy' or 'time and mass' (e.g., which arise as a result of the abovementioned 'computational exhaustiveness' of each of the Computational Framework Dimension's 'frame' and 'object' levels, as represented earlier by one of the derivations of the Universal Computational Formula); "quantum entanglement", the "uncertainty principle" and the "particle-wave duality" (e.g., which arise from the existence of the postulated 'Universal Simultaneous Computational Frames' [USCF's] that compute the entire spectrum of the physical universe simultaneously per each given USCF, and which embed within each of these USCF's any 'single spatial-temporal' measurements of "entangled particles" as constituting integral parts of 'multiple spatial-temporal' "wave" patterns); Quantum Mechanics' minimal degree of physical change, represented by Planck's constant 'h' (e.g., which signifies the CUFT's 'minimal degree of inter-USCF change' for all four 'physical' features of 'space', 'time', 'energy' and 'mass'); the well-validated relativistic law of the "equivalence of energy and mass" (e.g., the famous E = mc², which arises as a result of the transformation of any given object's or event's 'frame-inconsistent' into 'object-consistent' computational measures based on the maximal degree of change, but which is also embedded within the more comprehensive and integrated Universal Computational Formula derivation: t × m × (c²/h × י) = s × e); as well as Relativity's 'space-time' and 'energy-mass' relationships, expressed in terms of their constitution of an integrated singular USCF's series, which is given through an alternate derivation of the same Universal Computational Formula.
Indeed, this last derivation of the Universal Computational Formula seems to encapsulate General Relativity's proven dynamic relationships between the curvature of space-time by mass and its effect on the space-time pathways of any such (massive) object/s, through the complete integration of all four physical features within a singular (conceptually higher-ordered 'D2') USCF's series... Specifically, this (last) derivation of the (abovementioned) Universal Computational Formula integrates 'space-time', i.e., as a ratio of a 'frame-consistent' computational measure divided by an 'object-inconsistent' computational measure, as equal to the computational ratio of 'mass' (e.g., 'object-consistent') divided by 'energy' (e.g., 'frame-inconsistent'), multiplied by the Rate of Universal Computation (R = c²/h) and by the Universal Computational Principle's operation ('י'). Thus, the CUFT (represented by the above Universal Computational Formula) may supply us with an elegant, comprehensive and fully integrated account of the four basic 'physical' features constituting the physical universe (e.g., or indeed any set of computational object/s, event/s or phenomena, etc.). Moreover, the Universal Computational Formula's full integration of Relativity's maximal degree of inter-USCF change (e.g., represented as 'c²') together with Quantum Mechanics' minimal degree of inter-USCF change (e.g., represented by Planck's constant 'h') produces the 'Rate' {R} of this rapid series of USCF's as R = c²/h, which is computed by the Universal Computational Principle 'י' and gives rise to all four 'physical' features of 'space', 'time', 'energy' and 'mass' as integral aspects of the same rapid USCF's universal computational process. Thus, we can see that the discovery of the hypothetical Computational Unified Field Theory's (CUFT's) rapid series of USCF's fully integrates the hitherto validated quantum and relativistic empirical phenomena and natural laws, while resolving all of their apparent contradictions.
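For clarity, the two derivations of the Universal Computational Formula quoted above can be set side by side. Taking the text's notation at face value (i.e., treating the Universal Computational Principle's operation 'י' as a multiplicative term), they are algebraically equivalent rearrangements of one relationship:

t × m × (c²/h × י) = s × e   ⟺   s/t = (m/e) × (c²/h × י)

That is, dividing both sides of the first form by t × e yields the second form's ratio of 'space' to 'time' as the ratio of 'mass' to 'energy' multiplied by the Rate of Universal Computation R = c²/h and by 'י'.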
The CUFT has been shown to replicate all major empirical findings validated by both QM and RT and to resolve the key theoretical inconsistencies between these theoretical models, and it has also recently received empirical support for one of its "differential-critical predictions" (e.g., differentiating it from both QM and RT predictions), namely the 'Proton Radius Puzzle' findings [12] (delineated above), thereby validating the CUFT as a satisfactory PST. Indeed, one of the CUFT's 'differential-critical predictions' (e.g., differentiating it from the predictions of both QM and RT) regards the more consistent spatial presentation of a more massive particle (or element), relative to the spatial consistency of a less massive particle (or element), across a given series of USCF's frames; this prediction has now received initial empirical validation through the findings associated with the 'Proton-Radius Puzzle'! This is because the 'Proton-Radius Puzzle' empirical findings indicate that the more massive 'Muon Hydrogen Proton' is measured as (approximately) 200 times smaller and more accurate than the standard Hydrogen Proton (e.g., measured with the 200 times lighter electron particle instead of the Muon).
In order to fully understand how these 'Proton-Radius Puzzle' findings [12] empirically confirm this differential-critical prediction of the CUFT, let us return to the CUFT's computational definition of "mass": mass is defined by the CUFT as a measure of the degree of "spatial consistency" of a particle across a given series of USCF's frames. In mathematical terms, it is measured as the number of times that this particle is presented across the same spatial pixels (measured from within the object's frame of reference) across a series of USCF's frames… This computational definition of 'mass' implies at least two empirically measurable predictions (illustrated in the sketch following this list):
(a) that the more massive 'Muon' particle should be measured as more accurate and as smaller than the less massive electron particle; this is due to the fact that the more massive a particle is, the greater its spatial consistency across USCF's frames; and/or
(b) that more massive particles (e.g., such as the Muon) should be measured across a greater number of USCF's frames, relative to less massive particles (such as the electron); in other words, we could expect to measure the (more massive) Muon across a greater number of USCF's frames than the (lighter) electron.
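A hedged numerical toy of these two predictions follows. The mapping from a particle's mass to its probability of being re-presented on the same spatial pixels is our own illustrative assumption; the text specifies only that greater mass implies greater spatial consistency across USCF's frames.

```python
# Toy model: a more massive particle is re-presented over the same spatial
# pixels in a larger fraction of USCF's frames. The mass -> probability
# mapping below is purely illustrative, not derived from the CUFT.
import random

def spatial_consistency(mass: float, n_frames: int = 10_000) -> float:
    """Fraction of USCF's frames in which the particle re-appears on its pixel."""
    random.seed(42)
    p_consistent = mass / (mass + 1.0)  # hypothetical: heavier -> more consistent
    hits = sum(random.random() < p_consistent for _ in range(n_frames))
    return hits / n_frames

electron_mass, muon_mass = 1.0, 206.8  # the muon is ~207 electron masses
print(f"electron spatial consistency ~ {spatial_consistency(electron_mass):.3f}")
print(f"muon     spatial consistency ~ {spatial_consistency(muon_mass):.3f}")
# Prediction (a): higher consistency -> smaller, sharper measured radius;
# prediction (b): the muon should appear in more USCF's frames overall.
```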
Interestingly, the 'Proton-Radius Puzzle' precisely confirms the first of these two CUFT 'differential-critical' predictions, i.e., indicating that the (200 times) more massive Muon particle (e.g., when embedded within the Hydrogen Proton) is measured as 'smaller' and 'more accurate' than the Hydrogen Proton associated with the (200 times) less massive electron. Hence, these findings provide an initial empirical confirmation of the CUFT, as differing from the predictions of both the quantum and relativistic models (e.g., which cannot account for these "Proton-Radius Puzzle" findings).
Efforts should be made to empirically validate the second (abovementioned) aspect of the CUFT’s differential-critical prediction regarding the appearance of ‘more massive’ particles such as the Muon across a greater number of USCF’s frames than the appearance of less massive particles (such as the electron).
Prior to focusing on the essence of the Paradigmatic Shift signified by the CUFT, it is worthwhile to outline the gist of the empirical evidence offered (thus far), which forces us to adopt this CUFT as an appropriate 'Theory of Everything' (TOE), i.e., as replicating the key empirical results of Quantum Mechanics and Relativity Theory, resolving their apparent theoretical inconsistencies, and embedding and transcending these two existing theoretical models. We began this article by noting that Theoretical Physics has reached a critical juncture akin (perhaps) to the "crisis" that appeared in Physics prior to Einstein's 1905 revolution signified by Relativity Theory: the two pillars of modern Physics (QM and RT) seem contradictory of each other, and both of them fail to account (e.g., in a satisfactory manner) for a series of 'Physical Conundrums' (including 'Dark-Energy', 'Dark-Matter' and the 'Arrow of Time'). We then identified Kuhn's famous criteria [1] for the adoption of a "Paradigmatic Shift" within a given scientific discipline, and were able to corroborate that the current state of Theoretical Physics may indeed satisfy Kuhn's criteria for the occurrence of a 'Paradigmatic Shift' in Physics. More specifically, we were able to demonstrate that the recently discovered 'Computational Unified Field Theory' (CUFT) does in fact satisfy these 'rigorous scientific criteria' for the adoption of such a Paradigmatic Shift in 21st century Physics:
a) The CUFT is capable of replicating all major QM and RT empirical findings;
b) The CUFT was shown capable of resolving the principal theoretical inconsistencies that exist between these two models;
c) The CUFT identified at least one 'differential-critical' empirical prediction (e.g., of relatively more massive particles being measured as spatially more consistent than less massive particles across a series of USCF's frames);
d) This 'differential-critical prediction' of the CUFT has in fact been validated through the recently discovered 'Proton-Radius Puzzle' [12].
Hence, apart from the last two criteria of the New Paradigm, i.e., being capable of explaining a series of phenomena left "unexplained" by the 'Existent Paradigm', and the New Paradigm's discovery of new empirical phenomena, the CUFT seems to have satisfied all of the (abovementioned) rigorous criteria for such a 'Paradigmatic Shift' in Physics.
In fact, these last two scientific criteria necessary for the adoption of the CUFT as the (appropriate) 'New Paradigm' in Physics constitute the topics of the current and subsequent headings. Specifically, the current heading deals with the identification of the gist of the 'Paradigmatic Shift' offered by the CUFT, namely: the replacement of the current "Material-Causality" fundamental assumption underlying both Quantum Mechanics and Relativity Theory with the CUFT's Universal Computational Principle's (UCP) "A-Causal Computation"! Indeed, once the principal difference between 'Material-Causal' computation (e.g., signified by the QM and RT 'Self-Referential Ontological Computational System', SROCS) and the CUFT's UCP 'A-Causal Computation' is understood, one of the direct theoretical implications of the CUFT's (new) 'A-Causal Computation' will be the illustration of an alternative (e.g., non-material-causal) satisfactory explanation of the "Dark-Energy" and "Dark-Matter" Physical Conundrum (which cannot be explained by contemporary QM or RT). Subsequently, the adoption of the CUFT New Paradigm's 'A-Causal Computation' will also shed new light on the other 'Physical Conundrum' of the 'Arrow of Time' (e.g., wherein the CUFT New Paradigm's A-Causal Computation will be shown capable of revising and expanding the 'Arrow of Time' and 'Second Law of Thermodynamics' basic tenets of modern Theoretical Physics)…
Hence, let us focus now on the gist of the 'Paradigmatic Shift' signified by the CUFT's 'A-Causal Computation', e.g., as opposed to the Existent Paradigm's 'Material-Causal' working assumption underlying both QM and RT. As outlined earlier, the computational structure of both QM and RT is characterized as a "Self-Referential Ontological Computational System" (SROCS), e.g., wherein it is assumed that it is solely the (direct or indirect) physical interaction between the subatomic 'probe' and 'target' elements, or between the relativistic 'observer' and the (space-time or energy-mass) 'phenomenon', which determines the particular value/s of the measured subatomic 'target' or relativistic 'phenomenon'. But we have seen that such a computational structure inevitably leads to both 'logical inconsistency' and ensuing 'computational indeterminacy', which are negated by empirical evidence indicating the capacity of these quantum and relativistic computational systems to determine the precise value/s (e.g., albeit "complementary" values in the case of QM) of the subatomic 'target' and relativistic 'phenomenon'. Hence, the CUFT's 'Duality Principle' negated the basic SROCS computational structure of both QM and RT, instead pointing at the singularity of the 'Universal Computational Principle' (UCP) as the sole determinant of all quantum 'probe-target' and relativistic 'observer-phenomenon' relationships. Indeed, it is precisely this identification of the UCP as the sole source determining any quantum or relativistic relationship or phenomenon which highlights the Paradigmatic Shift, i.e., from "Material-Causality" to 'A-Causal Computation': whereas the Existent 'Material-Causal' Quantum or Relativistic (SROCS) Paradigm attempts to explain any quantum or relativistic relationship or phenomenon strictly based on the (direct or indirect) physical interactions between any hypothetical subatomic 'probe' and 'target' elements (or, more generally, between any two or more quantum entities), or between any hypothetical 'relativistic observer' and 'phenomenon' (or, more generally, between any two relativistic entities or phenomena), the CUFT's new 'A-Causal Computation' Paradigm asserts that there can exist only one singular 'Universal Computational Principle' (UCP), which computes "simultaneously" all exhaustive spatial pixels (e.g., comprising all exhaustive quantum and relativistic phenomena) in the physical universe at each minimal time-point (e.g., 'c²/h'), thereby precluding the possibility of any "material-causal" relationship/s between any two (or more) quantum or relativistic elements, phenomena, etc.!
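The contrast between the two computational structures can be caricatured in code. The sketch below is purely schematic, with 'interact' and 'universal_rule' as hypothetical stand-ins of our own: the first function determines the target's value solely from the direct probe-target interaction (the SROCS structure the Duality Principle rejects), while the second assigns values to all pixels at once, per frame, with no pairwise cause entering the computation.

```python
# Schematic contrast, in the paper's terminology, between 'Material-Causal'
# SROCS computation and the UCP's 'A-Causal Computation'. Both rules are
# hypothetical illustrations, not formal definitions from the CUFT.
def srocs_compute(probe, target):
    # Existent Paradigm: the measured value is a function of the direct
    # probe-target physical interaction alone -- the self-referential
    # structure the Duality Principle argues is computationally indeterminate.
    return interact(probe, target)

def ucp_compute(all_pixels, frame_index):
    # New Paradigm: a single higher-ordered principle assigns every pixel
    # (hence every probe-target and observer-phenomenon pair) its values
    # simultaneously, once per USCF frame; no pairwise 'cause' is involved.
    return {pixel: universal_rule(pixel, frame_index) for pixel in all_pixels}

# Minimal stand-ins so the sketch runs:
def interact(probe, target):
    return hash((probe, target)) % 100

def universal_rule(pixel, frame_index):
    return hash((pixel, frame_index)) % 100

print(srocs_compute("probe", "target"))
print(ucp_compute({"p1", "p2", "target"}, frame_index=0))
```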
Hence, the Paradigmatic Shift signified by the CUFT (and validated through the empirical verification of the CUFT's 'critical prediction' by the 'Proton-Radius Puzzle' findings, together with its satisfaction of all of the other abovementioned rigorous scientific criteria required for the validation of any Paradigmatic Shift in a given scientific discipline) is focused on the adoption of the UCP's 'A-Causal Computation', which forces us to relinquish any "Material-Causal" relationships, e.g., at either the quantum or relativistic levels. Therefore, since we have no other option but to accept the CUFT's (new) 'A-Causal Computation' Paradigm (e.g., due to its satisfaction of all of the abovementioned rigorous scientific criteria necessary for the adoption of such a Paradigmatic Shift in Science), we must abandon, and indeed revise, all current 'causal-materialistic' (quantum or relativistic) relationships, laws or phenomena! This means that in any given instance (e.g., in QM or RT) where there appear theoretical construct/s implying any kind of 'material-causal' relationship/s between given subatomic 'probe' and 'target' elements, such as the currently assumed 'collapse of the probability wave function' target element as "caused" by its direct physical interaction with the subatomic probe element, or the currently assumed determination of any relativistic (space-time or energy-mass) 'phenomenon' based on its direct physical interaction with a given relativistic observer, these construct/s must be revised based on the CUFT's (new) 'A-Causal Computation' Paradigm… In a nutshell, this Paradigmatic Shift implies that all four basic 'physical' concepts of 'space', 'time', 'energy' and 'mass' must be redefined as secondary computational features computed by the singular (higher-ordered) 'Universal Computational Principle', and moreover that this UCP computation of these four secondary computational physical features, i.e., at every spatial pixel across the physical universe (e.g., at each minimal time-point 'c²/h' comprising a single USCF frame), is carried out simultaneously, thereby precluding any possible 'material-causal' physical relationship/s at the quantum or relativistic levels. Instead, all quantum and relativistic phenomena, relationship/s and indeed laws must be transformed and embedded within the singularity of the UCP's A-Causal Computation of the series of (simultaneous) USCF's (e.g., as represented by the 'Universal Computational Formula')…
In order to demonstrate the application of this (significant) 'Paradigmatic Shift', e.g., from the currently assumed (QM and RT) 'Material-Causal' Paradigm to the CUFT's new 'A-Causal Computation' Paradigm, let us examine, for instance, two primary theoretical constructs associated with RT: those of 'Dark Energy' and 'Dark Matter', which are taken to account for up to 70 to 90 percent of all the assumed energy and mass in the physical universe! As is known, 'Dark Matter' and 'Dark Energy' constitute a ('material-causal', relativistic) explanation of the "theoretical gap" that exists between the empirically observed accelerated expansion of the physical universe and the "shortage" of up to 90% of all calculated energy and mass in the physical universe (e.g., relative to the empirically observed 10% to 30% of mass and energy in the universe). In other words, based on the existent 'material-causal' (relativistic) theoretical paradigm, the explanation of the accelerated rate of the universe's expansion must be based on the amount of "mass" and "energy" existing within the physical universe (e.g., at any given point in time), which "causes" the universe to expand at a particular accelerated rate… But based on the CUFT's new 'A-Causal Computation' Paradigm, we are precluded from any such 'material-causal' explanation of any physical relationship/s between any two (or more) physical elements, e.g., such as between the amount of 'mass' or 'energy' in the universe and its space-time rate of expansion! This is simply because, according to the new CUFT 'A-Causal Computation' Paradigm, it is only the singular 'Universal Computational Principle' (UCP) which computes 'simultaneously' all of the spatial pixels in the physical universe, i.e., including their respective 'energy', 'mass' ('space' and 'time') secondary computational values, hence negating the possibility of any "material-causal" relationship/s between, for instance, the amount of energy or mass in the universe and its (accelerated) spatial expansion! Therefore, according to this new 'A-Causal Computation' Paradigm of the CUFT, the only means of explaining the observed accelerated expansion of the physical universe is based on the UCP's singular computation of the series of 'Universal Simultaneous Computational Frames' (USCF's), i.e., denoting that this UCP (in fact) increases the number of spatial pixels within any given USCF frame/s in an "accelerated curve", e.g., with each subsequent USCF comprising an accelerated increase in the number of new spatial pixels added (relative to the previous USCF frame/s)!
Interestingly, there may be a historical parallel between the "superfluous" 'ether' theoretical concept of pre-1905, pre-Relativity Physics (e.g., which could not be detected empirically and was eventually regarded as 'superfluous' within Relativity's New Paradigm) and the contemporary "Dark Matter" and "Dark Energy" theoretical constructs, which likewise cannot be detected empirically and which, it is suggested, may also prove "superfluous" within the context of the (new) 'A-Causal Computation' Paradigm. Indeed, once we accept the CUFT's new 'A-Causal Computation' Paradigm, these hypothetical theoretical constructs of "Dark Matter" and "Dark Energy" must be abandoned and revised (due to their assumption of the existence of a 'material-causal' relationship between them and the observed accelerated expansion of the universe's space-time). Instead, the CUFT's UCP postulates that the computation of any of the four (secondary computational) physical features of 'energy', 'mass', 'space' and 'time', at every exhaustive spatial pixel in the universe at any given minimal time-point (c²/h) (comprising any single or multiple USCF frame/s), must be carried out 'simultaneously' by the UCP (e.g., precluding any possibility of "Dark Energy" or "Dark Matter" 'causing' an accelerated expansion of space-time). Rather, the singularity of the UCP's simultaneous computation of all spatial pixels comprising a (minimal time-point) USCF frame/s forces us to recognize that this UCP in fact produces an accelerated increase in the number of spatial pixels comprising each subsequent USCF frame, which constitutes a new (somewhat "radical") prediction of the CUFT (and one that significantly differs from the predictions of both QM and RT)!
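A minimal sketch of this alternative reading of the accelerated expansion follows: rather than 'dark energy' causing the expansion, the number of spatial pixels per USCF is simply produced on an accelerating schedule. The quadratic growth law below is an assumed stand-in of our own, since the text specifies only that the increase is "accelerated".

```python
# Minimal sketch of the CUFT's alternative to 'dark energy': the Universal
# Computational Principle is posited to add spatial pixels to each successive
# USCF at an accelerating rate. The growth law is an illustrative assumption.
def uscf_pixel_counts(n0: int, n_frames: int, accel: float = 1.0) -> list:
    counts = [n0]
    for k in range(1, n_frames):
        counts.append(counts[-1] + int(accel * k))  # the increment itself grows
    return counts

counts = uscf_pixel_counts(n0=100, n_frames=6)
increments = [b - a for a, b in zip(counts, counts[1:])]
print(counts)      # [100, 101, 103, 106, 110, 115]
print(increments)  # [1, 2, 3, 4, 5] -- each USCF adds more pixels than the last
```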
In much the same manner, the acceptance of the CUFT's new 'A-Causal Computation' Paradigm also calls for a revision of the 'Second Law of Thermodynamics' and 'Arrow of Time' theoretical constructs, i.e., based on the fact that both of them rely on the (abovementioned) existent 'Material-Causal' Paradigm, which (as we have seen) needs to be revised based on the CUFT's new 'A-Causal Computation' Paradigm. This is because the Second Law of Thermodynamics asserts that there exists a "causal" relationship between "time" and the (level of) "entropy", e.g., such that the level of entropy must increase with the passage of time:
T{A…N} → Entr{a…n}, such that: Entr{A}ta < Entr{B}tb < Entr{C}tc < … < Entr{N}tn
But we have already seen that the new CUFT 'A-Causal Computation' prohibits any such material-causal relationship between any two hypothetical physical elements (e.g., including 'time' and 'entropy'): the UCP's 'A-Causal Computation' dictates that the UCP computes simultaneously all four physical features at every possible spatial pixel in the entire universe (including 'energy', which embeds within it any measure of "entropy"), and therefore prohibits any 'material-causal' relationship between 'energy' (e.g., including its measured degree of 'entropy') and 'time' (as stated above, either within the same USCF frame or across different USCF's frames). Hence, the 'Second Law of Thermodynamics', e.g., stating that the level of entropy (within any given system) must grow with time, is negated by the CUFT's (new) 'A-Causal Computation' Paradigm, since the UCP's simultaneous computation of every spatial pixel in the physical universe, i.e., including its four (secondary computational) physical features of 'space', 'time', 'energy' and 'mass', precludes the possibility of any 'material-causal' relationship between 'time' and 'entropy' (e.g., which is a particular measure of 'energy'). Once again, it is suggested that the acceptance of the CUFT's new 'A-Causal Computation' Paradigm calls for a revision of the various (quantum and relativistic) laws of Physics, albeit (as we shall see in the next chapter) such a theoretical revision of the laws of Physics would in fact embed the current laws of Physics within a broader theoretical understanding (e.g., in much the same manner that the discovery of Relativity Theory retained "Newtonian Mechanics" as a "special case" within Relativity's broader theoretical framework).
In the case of the 'Second Law of Thermodynamics' and the 'Arrow of Time' phenomenon, the UCP's 'A-Causal Computation' Paradigm necessitates a revision of these theoretical constructs in accord with the UCP's computation of a series of 'Universal Simultaneous Computational Frames' (USCF's), which opens the possibility of the UCP's computation of these USCF's series also in a "reversed order"! Consider what we must accept: first, that the four (secondary computational) features of 'space', 'time', 'energy' and 'mass' are solely computed by the UCP, e.g., based on its three 'Computational Dimensions' ('Framework': frame/object; 'Consistency': consistent/inconsistent; 'Locus': global/local). Second, according to the CUFT's 'Computational Invariance Principle' [4], which is also based on one of the key inductive principles in Science, i.e., "Ockham's razor", these four secondary computational 'physical' features [16] may only represent 'computationally variant' properties (which exist only "during" the USCF frame/s, as computed solely by the UCP, but do not exist "in-between" any two USCF's frames), as opposed to the singular ('computationally invariant') 'Universal Computational Principle', which exists both "in-between" any two subsequent USCF's frames and "during" all USCF's frames, and which solely produces these four (secondary computational) 'physical' features. We then reach the inevitable conclusion that the sole existence of 'time', 'space', 'energy' and 'mass' is as secondary computational properties of this singular 'Universal Computational Principle' (UCP), as computed through its three Computational Dimensions based on its extremely rapid production of the series of USCF's frames. Indeed, "time" is defined by the UCP as its computation of the degree of "change" (e.g., 'inconsistency') across a series of USCF's as measured for a particular 'object', whereas "energy" is defined by the UCP as the computation of the degree of "frame-inconsistency" across a series of USCF's frames:
Specifically, the "flow of time", i.e., its directionality from the "past" to the "present" and to the "future", represents a particular order of USCF's frames, which according to our ordinary experience seems to 'flow' only in this unidirectional format, e.g., defined as the "Arrow of Time". Closely related to this apparently 'unidirectional' "Arrow of Time" is the abovementioned 'Second Law of Thermodynamics', which associates this apparently 'unidirectional' flow of time with an increase in the 'degree of entropy'. However, according to the CUFT's new 'A-Causal Computation' Paradigm, there exists a real possibility (which will be further explained below, utilizing another theoretical postulate of the CUFT called the 'Human Consciousness Spectrum Expansiveness' hypothesis) of the UCP presenting any given series of USCF's in the "reverse order", i.e., which would both reverse the "Arrow of Time" and negate the 'Second Law of Thermodynamics'! Hence, apart from the (abovementioned) CUFT 'A-Causal Computation' Paradigm's negation of the 'Second Law of Thermodynamics' on the basis of its implied "material-causal" assumption, the UCP's 'A-Causal Computation', which computes simultaneously all spatial pixels in the physical universe at any given minimal time-point (e.g., 'c²/h', comprising a single USCF frame) as a series of USCF frames, lends itself to the possibility of the same UCP computing the series of USCF's in the reversed order! In fact, it is the abovementioned 'Computational Invariance Principle', which regards the four (secondary computational) physical features of 'space', 'time', 'energy' and 'mass' as merely "phenomenal" (e.g., 'computationally variant', existing transiently only "during" the USCF frames but not "in-between" them), which has also led to the recognition of the 'Universal Computational Principle' as a 'Universal Consciousness Principle': since there is no "material entity" which can be "transferred" across any two subsequent USCF frames, the Universal Computational Principle also needs to possess all of the key features and qualities of a 'Universal Consciousness Principle', i.e., retaining, reproducing and evolving every spatial pixel in the universe across the series of USCF's frames… But this implies that the 'Universal Consciousness Principle' indeed retains the information regarding each exhaustive spatial pixel in each of its series of USCF's! The last "piece" of the puzzle, which may in fact allow this 'Universal Consciousness' (Universal Computational Principle) to reproduce the same series of USCF's (or segments of them) in "reversed order", is related to another theoretical postulate of the CUFT associated with the hypothetical connection between our 'individual human consciousness' and this 'Universal Consciousness', namely the 'Human Consciousness Spectrum Expansiveness' postulate. This postulate hypothesizes that in much the same manner that the Universal Computational/Consciousness Principle is capable of retaining, reproducing and evolving every spatial pixel in the physical universe (across a series of USCF's frames), so does human Consciousness possess an inherent capacity to expand its identification to include a growing number of spatial pixels comprising parts of (or even the entirety of) the series of USCF's frames, thereby allowing such an expanded human Consciousness to actually affect the production of these spatial pixels across a given series of USCF's frames, i.e., including in the "reversed order"!
Indeed, another indication within the CUFT's new 'A-Causal Computation' Paradigm that it should be possible to "reverse" the flow of time was already given through one of its (three) 'differential-critical predictions', regarding the possibility of reversing the 'spatial-electromagnetic-pixels' sequence of a given object (or phenomenon) across a series of USCF's frames [3]: this would be based on the precise recording of that given object's or phenomenon's electromagnetic spatial-pixel value/s (across the series of USCF's frames) and the application of an appropriate electromagnetic stimulation to each of these pixels (across the series of USCF's frames), so as to reverse the order of the USCF's frames (for that particular object or phenomenon)… The critical point to be noted here is that according to the CUFT's A-Causal Computation Paradigm, "time", "space", "energy" and "mass" are seen as secondary computational features produced by the singular 'Universal Computational Principle', e.g., through its three Computational Dimensions of 'Framework', 'Consistency' and 'Locus', based on its extremely rapid production of the series of USCF's frames (comprising all exhaustive spatial pixels in the physical universe at each minimal time-point, c²/h). In fact, these four physical features do not exist "in-between" USCF's frames, but only "during" these USCF's frames (e.g., hence being regarded as 'computationally variant' by the CUFT's 'Computational Invariance Principle'), and they are totally dependent upon the Universal Computational Principle for their maintenance, retention and evolution across the series of USCF's frames… Specifically, the UCP computes "time" as the degree of change (inconsistency) of a given object across frames (relative to the speed-of-light changes across frames), and the (apparent) "unidirectional" flow of time (the 'Arrow of Time') merely represents the sequence of spatial (electromagnetic) changes in a given object's pixel composition across a given series of USCF's frames… there does not exist any "objective" measure of time except those specific changes in the given object's spatial-pixel composition across the series of USCF's frames… Hence, to the extent that we are able to record the specific spatial-pixel electromagnetic value/s of that given object across a series of USCF's frames, and to apply a particular electromagnetic stimulation to each of these spatial pixels (across an equivalent number of USCF's frames) such that we obtain the reversed order of the recorded sequence of each of that object's spatial-pixel electromagnetic values, we will have effectively reversed the "flow of time" for that object!
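Schematically, the proposed procedure amounts to recording an object's pixel values frame by frame and then driving the pixels through the same values in reverse order. In the sketch below, record_frame and apply_stimulation are hypothetical placeholders of our own for the measurement and electromagnetic-stimulation apparatus the text envisions:

```python
# Schematic sketch of the 'time-reversal' critical prediction: record an
# object's spatial-electromagnetic pixel values across a series of USCF's,
# then replay the recorded values in reverse order. 'record_frame' and
# 'apply_stimulation' are hypothetical placeholders for the apparatus.
from typing import Callable

def reverse_object_time(
    record_frame: Callable[[], dict],        # pixel -> electromagnetic value
    apply_stimulation: Callable[[dict], None],
    n_frames: int,
) -> None:
    recording = [record_frame() for _ in range(n_frames)]  # forward sequence
    for frame in reversed(recording):                      # replay backwards
        apply_stimulation(frame)  # per the text, this reverses the object's
                                  # USCF ordering, i.e. its local "flow of time"

# Dummy usage with a two-pixel "object":
history = iter([{(0, 0): 0.1, (0, 1): 0.2}, {(0, 0): 0.3, (0, 1): 0.4}])
reverse_object_time(lambda: next(history), print, n_frames=2)
```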
Indeed, this suggested procedure of reversing the 'spatial-electromagnetic pixels' value order for a given object (or phenomenon) across a given series of USCF's frames was identified as one of the (three) 'differential-critical predictions' of the CUFT which differentiate it from the corresponding predictions of both QM and RT [3]. This is due to the fact that in both Relativity Theory and Quantum Mechanics it is in principle "impossible" to 'reverse the flow of time': in Relativity Theory, this is due to the constraint prohibiting the transmission of any signal at a speed greater than the speed of light, hence it is not possible to "catch" a signal travelling at the speed of light from any event that has already happened! In Quantum Mechanics, this is due to the fact that after the "collapse" of the probability wave function of any given subatomic 'target', e.g., corresponding to the measurement of any given phenomenon or event, it is not possible (in principle) to "un-collapse" this probability wave function back to its potential state (e.g., prior to its physical interaction with the probe subatomic element)… Hence, viewed from the perspective of both QM and RT (representing the existent 'Material-Causal' Paradigm) it is not possible to reverse the flow of time, whereas according to the CUFT's 'A-Causal Paradigm' the reversal of a given sequence of spatial-electromagnetic pixel values of a given object or phenomenon across a series of USCF's frames is (in fact) one of the (three) 'critical predictions' of the CUFT!
Likewise, it is suggested that this A-Causal Computation may be able to resolve the two other major Physical conundrums found in contemporary Physics, namely: the principal theoretical inconsistency between QM and RT arising from the 'quantum entanglement' phenomenon (and their incompatible "probabilistic" vs. "positivistic" modeling), and the "Arrow of Time" enigma. This is, once again, due to the singular 'A-Causal Computation' of the Universal Computational Principle (UCP), which computes all single-, multiple- and exhaustive- spatial pixels in the physical universe comprising any single or multiple USCF frame/s, thereby embedding, harmonizing and indeed transcending: Relativity's single spatial-temporal relativistic 'Phenomenon', constrained by the speed-of-light transference of information between any two hypothetical such single spatial-temporal relativistic 'Phenomenon' and 'observer' entities; Quantum Mechanics' multi spatial-temporal "subatomic probabilistic wave function"; and the CUFT's recognition of the UCP's simultaneous computation of all exhaustive spatial pixels, which gives rise to the well-validated quantum phenomenon of 'quantum entanglement'! Interestingly, it is precisely the CUFT's stipulation of the Universal Computational Principle's 'A-Causal Computation', which computes simultaneously all spatial pixels in the universe (e.g., at each minimal time-point comprising a single USCF frame), that can account for this 'quantum entanglement' phenomenon, based on its recognition of the embedding of two single spatial-temporal 'particle' entities within the broader multi spatial-temporal 'probability wave function', which is itself embedded within the exhaustive USCF frame/s series. Thus, based on the CUFT's exhaustive perspective of single spatial-temporal 'relativistic (space-time or energy-mass) Phenomenon' entities and corresponding single spatial-temporal relativistic observers, it is able to accept Relativity's assertion regarding the speed-of-light constraint imposed upon the transmission of any signal from any such single spatial-temporal 'relativistic Phenomenon' to any corresponding single spatial-temporal 'relativistic observer'; while at the same time embracing the broader multi spatial-temporal 'probability wave function', as well as the most exhaustive USCF frame/s perspective, which recognizes the UCP's simultaneous computation of all spatial pixels in the physical universe, thereby recognizing the existence of two (or more) single spatial-temporal "entangled quantum particles" which are embedded within the same multi spatial-temporal "probability wave function", e.g., hence possessing entangled complementary wave-function values.
Hence, the CUFT's exhaustive computational perspective embraces both Relativity's single spatial-temporal 'Phenomenon' and corresponding 'relativistic observer' entities, constrained by the speed-of-light barrier for the transference of any signal between these two single spatial-temporal entities, and the broader multi spatial-temporal 'probability wave function's' embedding of any two such 'single spatial-temporal entangled particles'. All of these are embedded within the exhaustive UCP's simultaneous computation of all spatial pixels in the universe comprising a single (or multiple) USCF frame/s, thereby giving rise to the apparent phenomenon of the "entanglement" of two particles within the multiple spatial-temporal 'probability wave function', which is itself embedded within the UCP's exhaustive simultaneous USCF frame production!
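The nesting described in the last two paragraphs (particles within a wave, waves within one simultaneous frame) can be pictured with a small data sketch; the dictionary layout and the 'entangled' predicate are our own illustrative assumptions, not formal CUFT definitions:

```python
# Illustrative data sketch: a 'particle' is a single spatial-temporal pixel,
# a 'wave' is a multi-pixel pattern, and a USCF embeds every wave and
# particle at once. All names and values here are hypothetical.
uscf = {
    "wave_A": {"pixels": [(2, 3), (7, 9)],  # two 'entangled particles' share
               "phase": 0.25},              # one multi spatial-temporal pattern
    "wave_B": {"pixels": [(5, 5)], "phase": 0.75},
}

def entangled(frame: dict, p1: tuple, p2: tuple) -> bool:
    """Two particles count as 'entangled' iff one wave in the same USCF holds both."""
    return any(p1 in w["pixels"] and p2 in w["pixels"] for w in frame.values())

print(entangled(uscf, (2, 3), (7, 9)))  # True: same wave, same simultaneous frame
print(entangled(uscf, (2, 3), (5, 5)))  # False: different wave patterns
```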
To conclude this overview of the 'Paradigmatic Shift' represented by the CUFT's 'A-Causal Computation' Paradigm (e.g., relative to the current 'Material-Causal' Paradigm of both QM and RT), it is perhaps important to note that although this new 'A-Causal Computation' Paradigm does necessitate a revision and reformulation of some of the key theoretical constructs (and laws) found in RT and QM, this revision merely embeds QM and RT within the broader (exhaustive) theoretical framework of the CUFT (as shown above), rather than negating their validity. As Einstein once remarked regarding his vision of the fate of his Relativity Theory (in light of further prospective theoretical advancements): "no better destiny could be allotted to any physical theory than that it should become a 'special case' within a broader theoretical understanding"… Indeed, it is hereby suggested that the new 'A-Causal Computation' Paradigm may in fact broaden the scope of our theoretical understanding of both quantum and relativistic phenomena, and indeed broaden the spectrum of our understanding of all possible physical phenomena and reality, based on the discovery of the singular higher-ordered 'Universal Computational/Consciousness Principle'. Hence, all relativistic phenomena and laws and all quantum phenomena and laws are retained, but are also embedded (and transcended) within the broader formalization of the CUFT UCP's 'Universal Computational Formula':
Finally, this recognition of the 'Universal Computational Principle' as the sole and singular reality producing and sustaining all four (secondary computational) physical properties of 'space', 'time', 'energy' and 'mass' has also led to the formulation of a singular 'Universal Computational Formula', which completely integrates these four secondary computational physical properties, as well as all known quantum and relativistic properties, e.g., as embedded within the higher-ordered Universal Computational Formula:
The Universal Computational Formula:
(c²/h) × י = (s × e) / (t × m)
This includes (but is not limited to) the realization that all apparent "material-causal" relationship/s need to be replaced by the higher-ordered 'A-Causal Computation' of the (singular) UCP, which computes simultaneously all spatial pixels in the physical universe (at each minimal time-point 'c²/h' comprising a single USCF frame). It includes the potential revision of the 'Arrow of Time' and 'Second Law of Thermodynamics' through the recognition that it should be possible to reverse the spatial-electromagnetic sequence of any given phenomenon or event, i.e., either through the recording and manipulation of the particular spatial-pixel electromagnetic values (of that given phenomenon across a series of USCF's frames) or indeed through an exploration of the potential of the expansiveness of Human Consciousness to produce certain regions of space across a series of USCF's frames… More generally, there seems to arise a need for a theoretical revision of some of the basic assumptions underlying the current Quantum Mechanical probabilistic interpretation regarding the "collapse of the probability wave function" upon the physical interaction between the subatomic 'probe' and 'target' elements, instead replacing it with the singularity of the UCP's A-Causal Computation's simultaneous production of all subatomic 'probe-target' and relativistic 'observer-phenomenon' hypothetical exhaustive pairs (at each minimal time-point c²/h comprising any single or multiple USCF frame/s)… Ultimately, though, this new CUFT 'A-Causal Computation' Paradigm may offer us a more comprehensive, exhaustive theoretical framework of the entirety of the physical universe as produced, sustained and evolved through the singularity of the Universal Computational/Consciousness Principle, which produces the four physical features of 'space', 'time', 'energy' and 'mass', and yet transcends them (altogether), and which is also connected with our individual human consciousness (and its inherent potential to expand to "coalesce" with this Universal Computational/Consciousness Principle)…
I would like to acknowledge my immense gratitude towards Mr. Brian Fisher, without whose long-lasting support this scientific work would not have been made possible; my dear supportive wife, Dr. Talyah Unger-Bentwich; and my dear beloved mother, Dr. Tirza Bentwich, for her lifelong support of my original thinking. I would also like to thank Mrs. Einat Scheinman and Mr. Yehuda Zaks, whose recent generous support has allowed this scientific work to be executed.