BrainMeta · Connectomics

> Quantum Physics
Robert the Bruce
post Oct 14, 2004, 10:43 AM
Post #1


Robert the Bruce
post Oct 14, 2004, 10:47 AM
Post #2


This report addresses the vectored forces which, in conjunction with LaFolley's design and Ed's engine, I will use to describe how to make a manna machine or dimensional force transceiver in my current book on alchemy.

Msg : 78 of 91
From : 1:343/70.10 Wed 16 Feb 94 14:48
To : All
Subj : Patent for "free energy" device
Copied by Scott Parks using timEd.

From: (Dirk Kuhlmann)
Subject: Patent for "free energy" device
Date: Wed, 16 Feb 1994 22:48:45 GMT
Organization: PRZ TU-Berlin

I stumbled over this story a few days ago. Any comments, hints and
followups will be appreciated. Since the source (SCIENCE) seems
reliable to me, I wonder what came out of this story. According
to SCIENCE, 16. Nov. 1984, p. 817, Newman won the trial against the
patent office on 31 October 1984: "Newman persuaded the U.S.
District Court in Washington, D.C., to order that his
application be granted a full review by a new examiner - in
short, a second chance." If anyone is interested, I can post
this article in full length.

My questions:

- Did Newman finally get his device patented?
- Are there any people on the net who remember the story
or who were personally involved?
- Did anyone read Newman's book (Newman, Joseph; "The
energy machine of Joseph Newman: An invention whose time
has come". 5th ed. Soule (?), Evan R. (ed). 1987 $38.45,
ISBN 0-961 3835-5-2, J. Newman Pub.) and can tell something
about it? (It takes months to get this stuff
over here)

---------------------------[ start ]-----------------------

Source : SCIENCE, 10. Feb. 1984, pp. 571-572
Title : Newman's Impossible Motor
Subtitle: The patent office does not believe that Joseph Newman has
built a generator that is more than 100 percent efficient,
but New Orleans does.
Author : Eliot Marshall

At least one physicist in Louisiana swears that the CBS News
anchorman Dan Rather was smiling on 9 January when he reported
that an inventor near New Orleans has built a generator that
defies the second law of thermodynamics. Others did not see
any smile. What they did see, to their surprise, was an
earnest but fantastic news story, which had been running on New
Orleans' biggest television channel, now repeated over the
network news.

The story is about an inventor, a self-educated Mississippian
named Joseph Wesley Newman. He was pleased with the CBS broadcast
because it may help him in a fight with the U.S. Patent and Trademark
Office, which has denied him a patent on the grounds that his latest
invention "smacks of a perpetual motion machine", meaning by
definition it cannot do what is claimed. On 25 June, the U.S. District
Court for the District of Columbia will hear a suit Newman has
brought against the patent office arguing that his device does not
aim at perpetual motion but converts mass to energy in a finite
but very efficient manner. He simply wants a patent.

Newman's invention is hard to describe, partly because its behavior
seems to be at odds with the laws of physics and partly because
the details are being kept secret while the litigation goes on.
Newman says his own theory of magnetism is "10,000 times more
important" than the invention itself, which he built to demonstrate
the concept.

He claims to have discovered the mechanical principles
of a gyroscopic particle of matter that orbits in a magnetic field
much as an electron orbits in an atomic shell. Several readers of
his theory say it is incomprehensible and would not get attention
were it not for the illustrative devices. The patent Newman seeks
is for an "Energy Generation System Having Larger Output than Input".

Those who have seen it say it is a crude direct current motor
powered by a bank of lantern batteries with a heavy, rotating
magnet at its center.

Readings of the machine's performance, like those of Dan Rather's
expression, depend on the reader. As a result of the TV coverage,
the people of New Orleans may be convinced that Newman has invented
a simple device that produces more energy than it consumes and
could end the world's energy squabbles if only an arrogant
scientific community would pay attention. That is Newman's
message. It has been taken up and broadcast in a sort of
crusade by Garland Robinette, the evening news anchorman at
the CBS affiliate in New Orleans, WWL-TV.

Last autumn Robinette aired an eight-part series on Newman's
device, charging that jealous academics and frightened
executives tried to stifle information about it. Robinette
concedes that his intense coverage of Newman's device began on a slow
news day when he was looking for a cute show-closer. He claims
he was skeptical at first and saw Newman's invention as
a curiosity. But the story soon grew into a "monster that I
couldn't let go" when New Orleans viewers, facing a 200
percent increase in utility rates, demanded to know more.

Furthermore, a Mississippi state energy official and a credible
scientist had recently vouched for Newman's claims. Robinette
says that since he began reporting on the invention, no one
has come forward to rebut Newman. He challenges people to come
forward to "get this story off my back".

Newman has benefited from the television coverage and from
several weighty endorsements. For example, the television
engineers backed him. Last year, Robinette dragged two
reluctant engineers on the WWL-TV staff to Newman's garage in
Lucedale, Mississippi, about 2 1/2 hours from New Orleans.

They were skeptics at first, but, after looking at
oscilloscope readings and watching the machine recharge
batteries, they agreed with their anchorman that the claims
seemed valid.

Engineer Ralph Hartwell described the tests he ran. When he
arrived at Newman's house, he connected some weak penlight
batteries he had brought along to a small conventional motor
in Newman's back yard. It was allowed to run until the
batteries were drained of power, taking about 1 minute.

He then moved the dead batteries over to the smallest of Newman's
demonstration motors, connected them as a power source, and
started this motor spinning. It ran until it was time for the
camera crew to leave, for something between 1 and 2 hours.

Finally, the batteries were taken from Newman's machine back
to the conventional motor and reconnected. This time the motor
ran for about 3 minutes. Hartwell ran another experiment on a
large device and concluded that it also appeared to generate
more power than it used. Other measurements were taken with
oscilloscopes and current meters, but these readings have been
questioned. After signing a confidentiality pledge, Hartwell was
allowed to examine the machine's inner wiring. He is certain
that there is no hidden source of energy. Although he feels
uncomfortable about it, he says he could not disprove Newman's
claim and would like to see a university run a controlled test.
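As a back-of-envelope check on this kind of run-time comparison, the sketch below computes the energy each run would imply at constant power. All power figures are assumptions invented for illustration; the article reports only approximate run times, not currents or voltages. The point is that a motor drawing far less current can run much longer on the same cells without any over-unity effect.

```python
# Hypothetical numbers for illustration only; the article gives run times
# (about 1 min, 1-2 h, and 3 min) but no electrical measurements.

def energy_joules(power_watts, minutes):
    """Energy delivered at an assumed constant power draw, in joules."""
    return power_watts * minutes * 60.0

# Assume the small conventional motor draws 2 W and the Newman motor only
# 0.02 W (a plausible gap for a large, slowly rotating, high-impedance coil).
conventional_run = energy_joules(2.0, 1)    # 120 J in about a minute
newman_run       = energy_joules(0.02, 90)  # 108 J over 90 minutes

# Similar energy totals: a 90x longer run time needs no >100% efficiency,
# only a roughly 100x smaller power draw.
print(conventional_run, newman_run)
```

This is exactly why Hartwell's call for a controlled university test matters: run time alone cannot distinguish an efficient low-power motor from an over-unity one.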

Newman's key endorsement comes from Roger Hastings, a
solid-state physicist for the Sperry Univac Company in
Minneapolis. A colleague who knew him as a postdoctoral fellow at
the University of Virginia says Hastings was regarded as an
adventurous and excellent theorist. Hastings's brother, a
screener for new ideas for Tonka Toys, met Newman when he
submitted an invention to Tonka. Although skeptical, Hastings
(the physicist) was persuaded to make a trip to Lucedale.

"I used to teach physics at North Dakota University", says
Hastings, "and we would get three or four people a year who
had some kind of device that was going to save the world. I
assumed it was the same." Newman talked Hastings into flying down
for a visit anyway. He returned five times, testing and
retesting the motors, until he was satisfied that he had made
no mistake. He eventually signed an affidavit describing the
invention in detail and stating unequivocally that it runs at
greater than 100 percent efficiency, producing more power than
it consumes. "I'm sticking my neck out," he says, "because
this is an important issue that should be resolved."

Endorsements such as this are essential for the credibility of
the patent application. Although Newman has read the works of
the great electrical thinkers Michael Faraday and James Clerk
Maxwell, he is not proficient in math or physics.

Newman is collecting several more endorsements. He claims to
have won the backing recently of a German aerospace engineer
and a liaison officer between the National Aeronautics and
Space Administration (NASA) and the European space consortium.

Gerald Miller, a mechanical engineer, student of advanced
physics, and electrical industry consultant in California, has
inspected the devices and says, "I saw things that I cannot
explain in conventional terms." He found that the device
produced more energy than it used, adding, "I am absolutely
certain that there is no hidden energy source." Milton Everett,
a mechanical engineer and director of the biomass program for
the Mississippi department of energy and transportation, says,
"I think Joe has discovered something that the world is going
to benefit from. It's not a perpetual motion machine; it
converts mass to energy." Excluding investors, Newman claims
to have about 27 such endorsers.

But there have been and continue to be prominent doubters.
Oddly, TV anchorman Robinette has given little attention or
credence to the only thorough analysis ever performed on
Newman's device. It was arranged by Everett (before he became a
full convert to Newman's cause) and was paid for by the
Mississippi energy department. Two electrical engineers from

--- GEcho 1.02+
* Origin: Helix: Fido<>Internet - Seattle (206)783-6368 (1:343/70.10)

Msg : 84 of 90
From : 1:343/70.10 Wed 16 Feb 94 14:48
To : All
Subj : 02:Patent for "free energy" device

Copied by Scott Parks using timEd.

Mississippi State University (MSU) - Karl Carlson and Donald
Fitzgerald - tested one of Newman's devices last March.
The conditions were unfavourable, because the motor kept
breaking down every "couple of minutes," says Carlson, as a
huge spark from the induction coil shorted out a switch on the
commutator. Thus, while it was fairly easy to measure the
power going in, it was not easy to tell what was coming out.
Newman has built a smaller, less quirky motor since then.

The pattern on the oscilloscope at the output end of a cycle
was difficult to read because as one observer says, the
discharge spark appeared as "a bright flash" or "a mess" on
the face of the screen. Newman sweeps this point aside as a
quibble, saying it merely indicates his machine's tremendous power.

The efficiency claimed for this device is anywhere from
the impossible (slightly over 100 percent) to the fantastic
(800 percent and up). A normal electric motor may be 80
percent efficient, Carlson says, and transformers are
generally in the 90's. Carlson and Fitzgerald found that
Newman's machine was between 55 and 76 percent efficient,
based on their reading of the most favourable oscillograms.
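The efficiency figures quoted throughout this dispute are just the ratio of measured output power to input power; a minimal sketch, with example readings chosen only to land on the MSU engineers' reported range:

```python
def efficiency(p_out_watts, p_in_watts):
    """Efficiency as useful output power over input power."""
    return p_out_watts / p_in_watts

# Readings like these would yield the 55-76 percent range the MSU
# engineers reported (the specific wattages here are hypothetical):
print(efficiency(5.5, 10.0))  # low end of the range
print(efficiency(7.6, 10.0))  # high end of the range
```

An over-unity claim is simply the assertion that this ratio exceeds 1.0, which is why the noisy oscilloscope readings at the output end were the crux of the measurement.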

They wrote that they found "an output which is definitely less
than the input." However, they hedged by saying it was
impossible to measure the mechanical energy lost in the
machine, which could affect the rating. They declined to call
Newman's invention a breakthrough but reported that it was
remarkably efficient given its "obvious crude configuration."

In a standard tag line, they wrote that "further investigation
is in order." Newman reads this qualified rejection as a
qualified endorsement, explaining that when it comes to praising
new discoveries, academics are mean. He speaks of Carlson and
Fitzgerald with harsher adjectives.

The physics faculties of Loyola and Tulane Universities, both
in New Orleans, have protested Robinette's reports. Daniel
Purrington, Tulane's physics chairman, says: "We all dispute
it. A number of us have told him [Robinette] we think what
he's doing is irresponsible. I talked to him for about 2 hours
about the principles involved." Carl Brans, a theorist at
Loyola, wrote Robinette a two-page letter of protest. "It's
just sensational journalism. In our opinion, it's not worth
the cost," to try to take the measurements that would end the dispute.

David Keiffer, an experimental physicist at Loyola, along with
other faculty members, offered to check Newman's device if he
would bring it to the laboratory. (Newman's patent attorney is
a physics graduate of Loyola.) But in the preliminary talks,
Keiffer says, Newman insisted that he be present during the
entire procedure. Then he and Keiffer got into an argument.

Newman packed up and left, never to return. The Loyola
physicist also sought to advise WWL-TV's engineers on testing
the device, but this proved to be a touchy proposition,
because WWL is owned by Loyola and was originally founded by
Loyola's physics department. No one wanted the advice to be
interpreted as pressure.

"I have a fairly good reputation here," Robinette says of his
science reporting, "and this thing just has the potential to
make me look like an absolute ignoramus. So I've tried
desperately to disprove this and all I've done so far is get
more and more people who are convinced."

What about the negative conclusion reached by the MSU
engineers? Robinette maintains (like Newman and Everett) that
while the engineers were testing the machine, they agreed that
it was producing more energy than it used. But "when they went
back, they wrote an ambiguous response that didn't say it
didn't work and didn't say it did." Robinette mentions that
the MSU engineers are retired, as though to diminish their
reliability. He finds it "very surprising" that they never
called to challenge his report, which gave the Newman-Everett
version of events.

Some who might otherwise voice skepticism seem to sympathize
with Newman because of the way the patent office rebuffed him.

In court filings, the patent office concedes that Newman is
correct that it rejected his claims without fully reading the
documents he submitted; that his application was handled by an
examiner - Donovan Duggan - who seems to specialize in
rejecting perpetual motion machines; that Duggan said he would not
allow a patent on Newman's device, no matter how much
supportive evidence was submitted; that the office officials
never tested the Newman device for efficacy and refused to
observe oscilloscope readings of its input and output; and,
finally, that the office issued a patent in 1979 to
a man named Howard Johnson for a perpetual motion machine that
Johnson has since agreed is inoperable.

If there were an association of militant patent rejectees,
Newman's battle with the patent office could be its rallying
cause. But there is no such association. However, Newman has
done reasonably well attracting attention by himself,
especially in New Orleans. In a few months, he will get his
day in court.

-----------------------[ end ]-----------------------------

Dirk Kuhlmann
Technische Universitaet Berlin | voice: +49 (0)30 314-21238
Prozessrechnerverbundzentrale | fax: +49 (0)30 314-21114
Sekr. MA 073 | email:
Strasse des 17. Juni 136 | or
10623 Berlin 12 GERMANY |s=derek;ou=prz;pd=tu-berlin;ad=de

--- GEcho 1.02+
* Origin: Helix: Fido<>Internet - Seattle (206)783-6368 (1:343/70.10)

Robert the Bruce
post Oct 14, 2004, 10:50 AM
Post #3


Scientific American reposted a November 2003 article by Michio Kaku entitled "Borrowed Time". It postulated how General Relativity and Quantum Mechanics allow -- in theory -- for alternate timelines within a Multiverse and "time machines". I archived it at (the 4th item). Prof. Max Tegmark wrote an article about 4 possible models for parallel universes (including Everett/Wheeler 'Many Worlds') for Scientific American, and it precedes Dr. Kaku's essay in the list (the 3rd item).


Ed Halerewicz, Jr. is adept at translating technical physics for the layperson (such as myself). In an e-mail, Ed said there are actually more than 4 parallel universe models, but added that the jury is still out as to their actual existence. He thought Paul Davies is a better author for time travel in general and that Gott had the best technical physics for this subject.


Ed and Dr. Paul Hoiland wrote an e-book that explains the 'Science' in "Warp Drive" based on battle-tested GR and QM. If you have read about topics such as "Clifford Algebra" and "Bell's Theorem" before but have no idea what the math looks like, they are covered in this manuscript. The "gist" of it, says Ed, is that there is no "blue smoke & mirrors" (i.e., sci-fi magic) in Warp Drive. Einstein's GR allows for it. It is more of an engineering challenge to produce the effects that theory allows. The original DRAFT version is in ZIP-ed .pdf format. I corrected some grammar errors in a ZIP-ed MS-Word .doc. Both downloads are accessible at .


Some physicists have "extrapolated" from the basic principles of Warp Drive to create their own particular models. And a few engineers have produced designs for "ufo"-type craft based upon their interpretations of warp drive science. Ed critiques some of these at the aforementioned URL. Paul and Ed are among the founding members of the "Journal of Advanced Propulsion Methods" => .


Ed Halerewicz, Jr.'s homepage =>

Prof. Max Tegmark's homepage =>

Dr. Michio Kaku's homepage =>
Robert the Bruce
post Oct 22, 2004, 12:05 PM
Post #4


Robert the Bruce
post Oct 30, 2004, 07:07 AM
Post #5




by Ronald Swan
17 February 2002, posted 7 May 2002


A hypothesis is presented that resolves the long-standing fracture between physical and mental phenomena in the scientific tradition and situates mind as a natural possibility. The hypothesis is "meta-scientific" in that it does not resolve issues unique to any specific discipline and is formalized using elementary mathematical logic. However, it is falsifiable within standard scientific method because it implies empirical properties that can be determined consistent or inconsistent with current knowledge and supported or invalidated by future discovery. For example, the hypothesis predicts general properties of nature that are consistent with specific features of relativity and quantum mechanics, but not classical physics. Further, it predicts that the general theory of every scientific domain will have a non-computable basis.


The premise of this paper is that a complete, empirically supportable theory of mind requires that the possibility of mind be fundamental, even obvious, within our basic ideas about nature. Mind can be neither a "separate reality" beside the physical nor an epiphenomenon subordinate to the physical. If mind is separate, how does it connect to the physical? And if it does connect, in what sense is it separate? If mind is subordinate, how does it achieve effective causal control over the physical organism by "purposive, goal directed behavior" and "acts of will"? Is "downward causality" a theoretical contradiction?

Brash, drastic, and downright unbelievable as it may sound, my "claim" is that there is a secular meta-scientific hypothesis of nature that situates all empirical phenomena such as wave-particle duality, non-locality, relativity, life, mind, sociological and cultural processes as natural possibilities, and explains why nature exists. The hypothesis has specific empirical predictions by which it can be supported or invalidated, as opposed to descriptive analysis that is solely intellectually persuasive. And there is the caveat that the price paid for obtaining an explanation across the complete range of nature’s event classes is not getting a specific theory that uniquely explains phenomena within any particular domain of study (such as physics, biology, or cognitive science). That is, it implies no concrete theory about the structure of elementary particles or the engineering of the brain.

The fundamental technical idea of the hypothesis is the non-computable fixed point concept implied by Gödel’s Incompleteness Theorems. The fixed point concept is the formal representation of an empirical model implying that both local and non-local properties are fundamental conditions of actual existence at all levels of the natural hierarchy (elementary particles to mind). The hypothesis ("a meta-scientific theory of nature" or MSTN.0) assumes that identity preservation is a general common property of all actual entities. Identity preservation is a boundary condition within an environment. MSTN.0 treats the boundary condition (entity or process) as part of the (its own) environment. If we consider a "location" or event to entail specific space-time locality, then identity preservation is necessarily non-local to the extent it comprises a pattern over an event class (multiple locations).
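The "fixed point concept implied by Gödel's Incompleteness Theorems" presumably refers to the diagonal (fixed-point) lemma on which those theorems rest; in standard notation it reads:

```latex
% Diagonal (fixed-point) lemma: for every formula \varphi(x) with one free
% variable in the language of arithmetic, there is a sentence \psi such that
\mathrm{PA} \vdash \psi \leftrightarrow \varphi(\ulcorner \psi \urcorner)
% where \ulcorner \psi \urcorner denotes the Godel number of \psi. Taking
% \varphi(x) to express "x is not provable" yields Godel's undecidable sentence.
```

The sentence \psi is a "fixed point" of \varphi in the sense that it asserts the property \varphi of its own code, which is the self-reference the hypothesis generalizes.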

If we accept the implication that space-time location is not a reified "enclosure" (container) for events, then local-non-local properties, space-time location and "non-location," and the event itself are identical. Every identity preserving entity (process, event class) is both definite and indefinite. Hence, actuality is both causal and non-causal (determinate and indeterminate). That is, the properties that have been both theoretically defining and intellectually mysterious at the quantum level are conditions of identity (being actual) at every level. Specifically, mind is a non-local property of the brain. "Acts of will" ("I am now going to pick up this fork and start eating") and "goal directed behavior" are non-causal events just as the EPR phenomena of physics are non-local and unexplainable within the bounds of relativistic causality.

It is not necessary to seek a complete explanation of the possibility of mind at the level of quantum mechanics because locality and non-locality are also properties of identity preserving processes at the level of organisms. This conclusion does not mean that the engineering of the brain is not dependent on quantum level non-locality to generate "mind." It does mean that non-locality as mind is distinct from quantum non-locality in the same way a living organism as a total boundary condition (process) is distinct from the physiological, electrical and chemical processes it depends on.

MSTN.0 implies that a complete theory of any natural domain containing identity preserving processes must have a non-computable basis. Hence, the hypothesis can be falsified by the discovery of a comparable computably based theory. A corollary is that mind also has a non-computable basis. Thus, MSTN.0 can be falsified by the creation of an entity with the full properties of mind or conscious awareness based entirely on computable procedures equivalent to a Turing machine (no matter how complex).
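To make concrete what "computable procedures equivalent to a Turing machine" means here, the following is a minimal Turing machine simulator (an illustrative sketch, not anything from the paper). The example machine simply flips every bit on its tape and halts at the first blank cell:

```python
# Minimal Turing machine: a finite rule table driving a head over a tape.
# Rules map (state, read symbol) -> (next state, write symbol, move L/R).

def run_tm(tape, rules, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine on a string tape and return the final tape."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, halt on the first blank.
FLIP = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_tm("0110", FLIP))  # -> 1001
```

The falsifiability claim in the paragraph above is that no rule table of this kind, however large, could exhibit the full properties of mind; building one that did would refute MSTN.0.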

Recall that classical mechanics has no explanation for the equivalence of inertial and gravitational mass whereas the equivalence is intrinsic in relativity. Analogously, in the meta-theory MSTN.0, the following concepts represent intrinsically equivalent perspectives on an underlying empirical and foundational property of nature: local-non-local, causal-non-causal, determinate-indeterminate, definite-indefinite, different-identical. This equivalence solves the fracture of nature into physical, living, and mental phenomena. More pictorially, identity preserving processes are the possibility of nature having no a priori, independent, or fixed basis. "Natural law" is a practical approximation to reality just as classical gravitational "action at a distance" is to the contiguous relativistic non-Euclidean model. [A complete exposition of MSTN.0 would contain the following: (1) Local existence implies non-local existence as an empirical application of Gödel’s Incompleteness Theorems. (2) Non-local existence implies local existence as an empirical application of elementary model theory. (3) The possibility of the physical-life-mind hierarchy is an empirical application of (1) and (2) as modeled in the theory of the higher infinite. This paper deals with part (1).] {Elements of a Meta-Scientific Theory of Nature}


A common complaint regarding modern science and scholarship is that the extreme level of specialization makes it hard to envision broad issues, connections, and solutions. Of course, the positive side is that many of the major triumphs of 20th century science and technology, such as medical advances, the new computer age, the beginning of high-level genetic science, and space exploration, would not be possible without specialization. The negative extreme is the modern equivalent of scholastic arguments about how many angels can dance on the head of a pin.

My interest is in the middle area: significant issues that have either been ignored as intractable to scientific method or where many attempts at resolution have been made but the intellectual community has remained divided on the correct approach with no commonly acceptable solution in sight. There have been several critical periods in the history of science where investigation regarding anomalous results has seemingly ground to a productive halt. Before the theory that the earth is the center of the universe fizzled, there were numerous attempts to explain peculiarities of planetary motion by adding epicycles to their orbits. Similarly, before Newton’s universal theory of motion and gravitation, there were pieces of loosely connected knowledge regarding the relationship of motion, matter, and energy. Before Darwin, there was the need to presume an independent force to explain life, whether a divine designer or a secular Lamarckian drive toward improvement. Following the Michelson-Morley experiments, physicists unsuccessfully attempted to fit the equations of relativistic motion into a satisfactory theory until Einstein was bold enough to overturn commonly held assumptions.

I am suggesting both that we are now in such a situation and that by changing basic assumptions about nature we can resolve deep, centuries long questions. Brilliant people have provided extensive analysis and valuable proposals regarding the issues of mind and intelligence (for example, Chalmers, Hofstadter, Lucas, Penrose). Nevertheless, there is no commonly accepted, empirically validated theory of mind. I believe that investigating the problem under two distinct levels or issues is most productive. There is the engineering or technological problem in the theoretical context of all relevant disciplines (physics, chemistry, biology, cognitive science, AI) regarding how an entity with the property "mind" can function or be made. In addition, there is the generalized problem of how it is possible for mind to be a naturally occurring phenomenon. That is, rather than being a "paste on" epiphenomenon (epicycle) of nature requiring contortions of explanation, why shouldn’t mind be an innate and fundamental, even obvious possibility? But the history of science is an irrefutable demonstration that our existing ideas about nature as a whole do not hold a direct, integrated, logical solution to the problem of the natural possibility of mind. I am proposing that radical alteration of our most basic assumptions about nature (as a totality) can provide a solution that places the possibility of mind directly within the natural world, and, further, that such a radical solution is the only adequate one.

The scientific "revolutions" referred to above have at the least a common analogical thread of letting go various intellectual supports based on religious belief or current common sense to allow specific aspects of nature to "stand on their own." With biological evolution we let go an external creator or some type of arbitrary and unseen life force. With relativity we abandon the ability to make physical measurements with reference to anything beyond the actual physical entities being measured and doing the measuring. In each case, it was necessary to make a difficult leap of imagination and alter basic insight regarding how those aspects of nature work. Clearly, such radical changes in what is assumed about nature have not come easily either to society or the scientific community.

I am claiming that if we identify exactly what "the support that we keep letting go" is at the most general and abstract level logically possible, and then take the bold and difficult step of determining the empirical consequences, a new major "revolution" in our understanding of nature is possible. Corollary claims include that (1) the consequent hypothesis can be given formal representation using elementary concepts of mathematical logic; (2) since we are "releasing" the ultimate possible support, we will receive the most general level of explanation; (3) the level of explanation will be what has formerly been considered philosophical or theological (as opposed to currently investigated scientific domains); and (4), the empirical consequences will constitute an in-principle test of validity that places the hypothesis within the standard of scientific method.

The theory of evolution may provide the closest analogy. Pre-Darwin there is no credible, fully scientific explanation for the existence and development of life. Based on pre-Darwinian assumptions, even the most scientifically acceptable theories contain arbitrary "oracular" elements that cannot be empirically resolved within the standards of scientific method. However, biological evolution is a complete, empirically testable theory over its domain. There is no condition with reference to the existence of living organisms to which "random variation and natural selection" does not apply as a general process. Although notably, it cannot be generally represented as a computable procedure. That is, "random variation and natural selection" neither is nor leads to an effective procedure for predicting the concrete form of organisms in a future environment based on their form within the current environment.

In retrospect, the root ideas of major advances in scientific knowledge seem elegantly basic, even simple. In addition to ideas introduced in classical physics, evolution, and relativity, quantum theory changes our conception of matter and energy from continuous processes to discontinuous packets having dual particle-wave properties. Even though these concepts seem to conflict with common sense, are impossible to fully visualize, and require advanced mathematics to rigorously formulate, they are not fundamentally complex. Just so, the root idea of the new hypothesis is simple even though it requires radical alteration of our common notion about the what-and-why of being "real."

The "bold and imaginative leap" required to achieve the new revolution in our view of nature is to abstract from the upper hierarchical level to obtain the general hypothesis. Even a "Theory of Everything" that unifies the domain of physics would not generate an empirical theory of nature as a totality. No matter how necessary understanding the processes occurring among fundamental particles is to understanding the function of the brain, the content of biological, mental, and sociological processes will require separate levels of investigation and theory. Hence, a true empirical hypothesis regarding nature as a whole must identify explanatory properties that apply to all hierarchical levels.

Science has spent centuries idealizing the computably representable natural law discovered in the lower hierarchical levels and attempting to find equally strong computable representability in the upper hierarchy. The inability of sociology to find strong computable representation of social event classes exemplifies an essential limitation (for example, applying survey research methods to predicting election results achieves relatively ephemeral success compared to the highly accurate long range predictions of motion required for space travel). Strong computability requires a stable field of background circumstances.

Consider that, as in the life sciences, event class invariance ("natural law") in the physical sciences is not absolute but occurs only relative to an assumed stability of environmental conditions. Again, in the general case, there must be properties common to every domain of nature. The "radical new assumption" of the revolutionary hypothesis is that the central property is a generalization of the strong circumstantiality of the upper hierarchy. As in evolutionary biology, the idea of the new assumption is quite simple. We can describe it by defining existence as solely "reflexively" or "unconditionally" circumstantial. In the case of organisms, it means that the organism is an active circumstance of its own existence and that the total circumstances *are* its existence. (We use the dictionary meaning of "reflexive" as "directed back on itself." Hence, if x is an "identity preserving process" [neutron, plant], then although locally x = x is an "internal, external" boundary condition, non-locally x = x is the identity of entity and circumstances [environment]. Compare this seemingly contradictory duality to quantum wave-particle duality, as a specific instance.) An objective of this paper is to make local-non-local duality at all levels of the natural hierarchy feel as obvious and reasonable as biological "random variation and natural selection" (and thereby completely demystify and secularize biology-based phenomena, including "being alive" and performing "acts of will").

The extreme contrast we can visualize is between the purely mechanistic "billiard ball" externally causal determinism of classical mechanics and the existence of an organism. An organism is completely constituted as the existence of a boundary interaction of internal and external circumstances. Total circumstantial dependency means simply that there is no independent guarantee (beyond the actual internal and external circumstances) of an organism coming into, sustaining, or ceasing existence at any level or by any means. The key idea is the elimination of any reification whatsoever regarding an organism as an actual entity that can be referred to, analyzed, or objectified apart from or in addition to its total concrete circumstances. Then we generalize this idea to all existence and follow the logical consequences to the full empirical conclusion, no matter what. We can visualize nature as a spectrum of causal conditions: from completely external and mechanical (an idealized Turing machine or mechanical device, macroscopic energy-motion processes), to internal-external reflexively circumstantial and boundary preserving (elementary particles, organisms), to the totality of nature defined as that unique entity (boundary condition) where internal and external circumstances are identical ("absolutely" non-local).

The critical intuitive insight can be further isolated by attempting to separate nature into two objects: the actual processes of nature, or "events" (compare the relativistic interpretation that the space-time continuum consists of events as 4-dimensional space-time locations), and the invariant patterns found across event classes ("laws"). Next, there is the distinction between law as (1) a real object existing in "Platonic space" that provides ideal form or control for concrete events, (2) a purely abstract object (mental construct or formal representation) that describes invariant, but non-reflexive, fixed patterns in concrete events, and (3) a purely abstract object representing invariance in reflexive, circumstantially embedded event classes. We need to be convinced that (a) there is no functional difference between (1) and (2), and no observable means by which to distinguish them. This (Aristotelian) conclusion is evident because both interpretations are represented in any given case by the same ideas or mathematical formulas, and the only observables are the actual and identical events being represented. (b) The difference between the first two and (3) is observable, because distinguishable empirical consequences of the versions of causation are identifiable. Hence, we require existing and in-principle empirical tests to determine whether (3) is the more adequate explanation of nature's general causal properties.

Some examples may make the range and meaning of this distinction clear.

(1) Newton's universal laws of motion and energy apply to all "real" entities and describe their necessary and fully deterministic physical behavior. Unfortunately, the assumption that these laws are literally universal creates an immediate and unbridgeable gap between "physical" and "mental." Although the classical equations can predict the motion and energy properties of macroscopic objects with high precision, neither these nor any other deterministic equations can be expected to predict the "purposeful" behavioral physical motion and effects of living entities. The equations that predict the movements of space vehicles and the orbital locations of planetary objects to seemingly incredible levels of accuracy do not predict the detailed decision process of the minds building and launching the vehicles. In addition, our hypothesis implies that no computable procedure whatsoever can provide a complete representation of the purposeful behavior of living organisms (as opposed to mimicking or imitating partial features). Hence, with respect to the classical theory, since biological and mental event classes remain unexplained, their classification as completely "real" comes into question. The logical alternatives are either to classify them as secondary epiphenomena of physical nature (that which we *are* able to explain) or to make them into a separate category of nature for "living" entities. Therein is the split between physical and mental. Current theories of physics and chemistry have not resolved this gap.
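To make the notion of "strong computable representability" concrete, here is a minimal sketch (not part of the original post; the function name and numbers are illustrative only) of the kind of closed-form, fully deterministic prediction classical mechanics provides, the very thing the paragraph claims has no analogue for purposeful behavior:

```python
# Illustrative sketch: Newtonian mechanics as a computable mapping from
# initial conditions to future states. Same inputs, same outputs, always.

def projectile_state(x0, y0, vx0, vy0, t, g=9.81):
    """Closed-form state of a projectile at time t (no air resistance)."""
    x = x0 + vx0 * t
    y = y0 + vy0 * t - 0.5 * g * t * t
    return x, y

# Given the launch conditions, the state at ANY future time is fixed in
# advance -- no comparable closed form predicts the decisions of the
# minds that chose those launch conditions.
print(projectile_state(0.0, 0.0, 10.0, 20.0, 2.0))
```

The point of the sketch is not the physics but the causal structure: the background circumstances (constant g, no perturbations) are assumed stable, which is exactly the condition the essay identifies as prerequisite for strong computability.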

(2) The equations of the theory of relativity are also fully deterministic and (by our hypothesis) provide no explanatory basis for purposeful behavior. For our purpose, the significant difference between the classical and relativistic theories lies in the "frame of reference" assumptions. We will consider the concept of "fixed point" as a means of generalizing the meaning of frame of reference. The velocity of light is the pivotal difference. The classical theory allows the assumption that light propagates in a medium (ether) that is fixed for (exists independently of) all observers, similar to sound in air. Hence, the measured velocity should vary depending on the relative velocities of the observers with respect to the medium and each other (otherwise, it would not be possible to pass the sound barrier). The medium was assumed to provide the universal frame of reference, or fixed point. When experimental evidence failed to support the variability of the velocity of light, Einstein introduced the radical, non-intuitive assumption that the measured velocity itself constitutes a quantitative fixed point for all observers regardless of relative motion. By drawing out the mathematical consequences of the new fixed-point assumption, Einstein found seemingly counterintuitive conclusions that would provide empirical tests for the new theory.
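One of those testable consequences can be sketched as a worked formula (a standard textbook light-clock derivation, added here for illustration; it is not part of the original post):

```latex
% Light clock: a pulse bounces between mirrors a distance L apart.
% In the clock's rest frame, one tick takes
%     t_0 = 2L / c .
% For an observer who sees the clock move at speed v, the pulse traces a
% longer diagonal path; but c is the SAME fixed point for both observers,
% so the moving clock's tick must dilate:
t = \frac{t_0}{\sqrt{1 - v^2/c^2}} = \gamma\, t_0,
\qquad
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} .
% Empirical test: unstable particles moving near c are observed to decay
% more slowly in the laboratory frame, by exactly this factor.
```

The derivation shows the pattern the essay describes: holding the new fixed point (the measured value of c) constant forces quantitative, falsifiable predictions elsewhere in the theory.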

(3) Unlike relativity, which has computable deterministic laws that are predictive at the event level, quantum theory does not have event-level computable representability. For example, particle emissions from radioactive isotopes are not individually predictable. And yet, the "pre-collapse" wave equation is fully determined. It is well known that relativity and quantum mechanics have not yet been integrated into a unified and complete theory over the domain of physics. We will choose the identity-preserving properties of elementary physical entities as the focus of our fixed-point investigation. The conclusion to be reached is that computably representable event classes do not have strong identity preservation, and strong identity preservation is not computably representable. Even in principle, and unlike the classical theory, computable representability of the actualized events is possible only as a probability condition based on an assumed environmental stability.
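The contrast drawn above, no event-level computability yet a fully determined statistical law, can be illustrated with a small simulation (a hypothetical sketch, not from the original post; all names and numbers are mine):

```python
import random

# Sketch: each atom's decay in a time step is treated as irreducibly
# random (the event level), yet the ensemble obeys a determined law:
# survivors ~= N0 * (1 - p)^k, the discrete analogue of exponential decay.

def simulate_decay(n_atoms, p_decay, steps, seed=0):
    """Count surviving atoms after each step of a random decay process."""
    rng = random.Random(seed)
    survivors = n_atoms
    history = [survivors]
    for _ in range(steps):
        # Nothing here predicts WHICH atoms decay -- only how many, on average.
        survivors -= sum(1 for _ in range(survivors) if rng.random() < p_decay)
        history.append(survivors)
    return history

history = simulate_decay(100_000, 0.1, 10)
expected = [100_000 * (0.9 ** k) for k in range(11)]
# history tracks the deterministic ensemble curve closely, even though
# no individual decay event in the run was predictable in advance.
```

The simulation mirrors the essay's claim: the computable representation lives at the level of the probability condition (the ensemble law), not at the level of the actualized events.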
