VTX 14
Now how are we to suppose that the I-sense changes roles? True,
usually the role shifts are small enough not to warrant the separate
roles being construed as distinct personas.
I think it reasonable to suppose that the roles (= persona or personas), like other aspects of the system, follow a process whereby some target template is acceptably approximated by (usually) converging matrices. (Pavlov's experiment that induced a neurotic breakdown in a dog tends to confirm this viewpoint, as the reader will find explained elsewhere.) But these matrices are hierarchical. Thus at the hierarchical level of questioner and responder, persona Q and persona R differ so little as to qualify as modes of the same principal persona P. Correspondingly, the persona target matrices Q' and R' are both very similar to target matrix P'.
(Bear in mind that these matrices may have millions or billions of entries.)
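To make the matrix picture a bit more concrete, here is a minimal sketch in Python. Everything in it is illustrative: tiny random matrices stand in for persona target matrices with millions of entries, and the tolerance is an arbitrary stand-in for whatever "acceptably approximated" would really mean. The point is only that Q' and R', as small perturbations of a shared target P', pass a closeness test that marks them as modes of one principal persona.

# Minimal sketch of the matrix-convergence picture (illustrative only).
# Real persona matrices would have millions or billions of entries;
# tiny random matrices stand in for them here.
import numpy as np

rng = np.random.default_rng(0)

P_target = rng.normal(size=(4, 4))                    # target matrix P'
Q_prime = P_target + 0.01 * rng.normal(size=(4, 4))   # persona Q's target Q'
R_prime = P_target + 0.01 * rng.normal(size=(4, 4))   # persona R's target R'

def same_principal_persona(A, B, target, tol=0.1):
    # Treat A and B as modes of one persona if both sit close to the target.
    return (np.linalg.norm(A - target) < tol) and (np.linalg.norm(B - target) < tol)

print(same_principal_persona(Q_prime, R_prime, P_target))  # True: Q and R are modes of P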
+++++
At this point, readers will wonder about AI machine learning models.
What stands out here is that no form of AI that I know of requires
personas. No form of AI requires an I-sense that is projectable into a
persona. Of course engineers could, if they so desired, set up a system with one or more segments that pose questions and one or more that answer questions. But these segments would require no consciousness.
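To underline that point, here is a toy sketch in Python (all names invented) of a questioner segment wired to a responder segment. It does exactly what the paragraph describes, and nothing in it requires, or even gestures at, an I-sense.

# Toy questioner/answerer segments: fixed lists and table lookup,
# no persona, no I-sense, no awareness anywhere in the loop.
FACTS = {"capital of France": "Paris", "2 + 2": "4"}

def questioner():
    # Poses questions mechanically from a fixed list.
    for q in FACTS:
        yield q

def responder(question):
    # Answers by table lookup; no understanding is involved.
    return FACTS.get(question, "unknown")

for q in questioner():
    print(q, "->", responder(q))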
It also seems wise at this point to note that a number of
philosophers, psychologists, biologists and neuroscientists have
sought to abolish the concept of consciousness (see W. James, G. Ryle)
based on the fact that the concept is nebulous and hard to pin down
scientifically. I have no problem with suspending use of that word.
Yet I do have a problem with abolishing the concepts of pleasure, pain
and emotional feels. And one would think some degree of consciousness (or awareness, or SOMETHING) must go along with these qualia. Trading the word "awareness" for "consciousness" doesn't really change anything.
Some people argue that when a motion detector scanner is tripped, we
should say the motion detector is aware of motion. The more complex you make that motion detector, the more likely people are to agree. But
until the motion detector can actually experience pain or pleasure, I
would say the use of that word is hollow.
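A trivial sketch in Python makes the hollowness concrete: the detector's "awareness," however many layers one wraps around it, is nothing more than a flag being set.

# A motion detector's "awareness" reduces to a flag being flipped.
class MotionDetector:
    def __init__(self):
        self.tripped = False   # the entire content of its "awareness"

    def sense(self, motion_detected: bool):
        self.tripped = motion_detected

detector = MotionDetector()
detector.sense(True)
print(detector.tripped)  # True -- but nothing here experiences anything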
+++++
VTX 15-18
Notice too that the "working I-sense" is attached to the persona that
is "on stage" and is always (or nearly so) in a quantum-like present.
For everyday experience (how words fail!) the NOW is not zero. That
"now" may be made mathematically equivalent to zero, but the
perceiver's present is to him largely undefinable, except by
comparisons -- as in "shorter than a minute" but "longer than a
second" (but then you face circular reasoning). Sometimes this
"working present" is called the "specious present."
Others, of course, can measure mean durations of the sense of the present. And they find that anything too close to zero is no good, as demonstrated by the brain-injured musician whose memory span was seven to thirty seconds, preventing him from forming new memories or recalling most past events. Visitors found him in a state of perpetually waking up, not recalling that he had seen the visitor seconds earlier.
This points to the importance of working (short-term) memory in the formation of the specious present and in reality construction in general. Total short-term amnesia would result in a coma-like
condition. Curiously, procedural memory can remain in place and permit
the individual to function, even tho consciousness is greatly
impaired. For example, the musician was able to play classical piano
and conduct choirs. But how conscious was he of what he was doing?
This last points to the fact that, notionally, one could design an AI program to play classical piano and conduct music. In fact, such programs already exist. And they are not conscious.
Further, note the obvious point that a very short specious present along with virtually no short-term memory meant that the musician could form no immediate plans, nor any plans at all, since to do so requires both long-term and short-term (working) memory.
VTX 19
Now consider, for example, a dog. It exists pretty much in the
present, tho as a higher level mammal, it can think well enough to
achieve short-term goals. We take it as having its own personality.
But does it have a conscious persona? Whatever one's sentimentalism
concerning dogs might be, there is little proof of much, or any, inner
reflectivity. But then, consciousness remains a will-o'-the-wisp.
In any case, no one thinks Rover has any notion of some force(?)
called time. Rover lives largely in the canine specious present, with
time based (but not conceptually based) on needs it apparently FEELS
(consciousness) and that emerge periodically. Still, even Rover's
specious present requires a mix of short and long term memories.
Animal experiments have shown that complex behavior to gratify needs
ceases when memory is truncated, tho there still remain routines for
such gratification, some of which are "hard-wired" into the animal's
CNS. Interestingly, once brain surgery has proceeded to the point that
the animal cannot feel, then we tend to deny that it is alive. More
specifically, we say the organism as a whole may still be functioning,
but that it is "brain dead," and we regard the brain as integral to
any meaningful mammalian life.
VTX 20
For the dog, past, present and future constitute a holistic unity (tho
there are numerous apparent exceptions that for this discussion are
unimportant). For the human, on the other hand, time exists with the
aid of memory. No memories, no time.
In the human's specious present, the "lapse of time" requires
attention to significant input signal change. If one does not attend
to that task, then the specious present remains a NOW with highly
blurred borders.
An important detail here is that more complex memories concern
"before" and "after" impressions. ["That day, Mom arrived home before
I did."]
But how does the brain determine beforeness, afterness and
simultaneity, and how does it obtain magnitudes of beforeness or
afterness? ["Mom arrived home long before I did."]
Observe that beforeness and afterness map 1-to-1 onto the less-than/greater-than relation. We can arrive at that relation empirically by comparing piles of beans, and then abstract from there (as logicians and set theorists do).
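A minimal sketch in Python of that abstraction: two piles are compared by pairing beans off one against one, with no counting anywhere, which is essentially how set theorists ground the less-than relation before abstracting it.

# Compare two piles by pairing beans off, without counting either pile.
def fewer_than(pile_a, pile_b):
    # True if pile_a runs out of beans before pile_b does.
    a, b = iter(pile_a), iter(pile_b)
    while True:
        a_empty = next(a, None) is None
        b_empty = next(b, None) is None
        if a_empty and not b_empty:
            return True           # A exhausted first: fewer beans
        if b_empty:
            return False          # B exhausted first (or both together): not fewer

print(fewer_than(["bean"] * 3, ["bean"] * 5))  # True
print(fewer_than(["bean"] * 5, ["bean"] * 3))  # False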
But in the case of before-after, we require memories. We can secondarily apply the less-greater relation by referring to, say, dates in our memory. But this won't work for illiterates and innumerates. That is why I suggested that emotional intensity (see my paper Toward) is one means of timewise ranking.
Consider the assertion, "I remember it like it was yesterday." How
does the persona know that "it" wasn't yesterday? Probably from
internal cues (such as "my bat that I later broke as a kid"). <bat, kid> --> BEFORE --> <all memories related to adulthood>.
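Purely as a hypothetical sketch in Python (the fields life_stage and intensity are invented, standing in for whatever cues a real memory system actually uses), here is what that kind of internal-cue ranking might look like: the BEFORE relation is reconstructed from coarse life-stage cues, with emotional intensity only as the suggested secondary ranking signal, and no explicit date anywhere.

# Hypothetical sketch: timewise ranking from internal cues rather than dates.
# "life_stage" and "intensity" are invented fields, not a claim about how
# memory actually encodes anything.
memories = [
    {"label": "broke my bat as a kid", "life_stage": 0, "intensity": 0.9},
    {"label": "first job interview",   "life_stage": 1, "intensity": 0.7},
    {"label": "yesterday's commute",   "life_stage": 2, "intensity": 0.1},
]

def before(m1, m2):
    # Coarse cue first (childhood vs. adulthood); emotional intensity only as
    # a tie-breaker, per the suggestion that intensity can help rank memories.
    if m1["life_stage"] != m2["life_stage"]:
        return m1["life_stage"] < m2["life_stage"]
    return m1["intensity"] > m2["intensity"]

print(before(memories[0], memories[2]))  # True: the bat memory comes BEFORE yesterday
ordered = sorted(memories, key=lambda m: (m["life_stage"], -m["intensity"]))
print([m["label"] for m in ordered])     # kid memory first, yesterday last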
We are not considering a simple problem. Note that Rover apparently
consciously remembers nothing that happened an hour ago, let alone years ago. (That is not to say his procedural memory doesn't go back years.)
Consider also that every recalled memory is taken as referring to some
event BEFORE now. Time is made to exist (at least partly defined) by
memory (human memory, at any rate).
And yet computers have the functional equivalent of memory (even the
same word is used). For Turing's offspring, there is no need to posit
a lapse of time such that an inner observer needs to define "before"
and "after" in relation to these memories. Yes, an external
observer-designer gives the computer a clock, which sets a drum beat
by which the logic gates can dance harmoniously. No gate or
set of gates needs to know before or after. Only the conscious
observer finds a need to parse that mystery named "time."
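A small sketch in Python of what that drum beat amounts to: a clocked two-bit counter in which each gate is a pure function of its present inputs. Only the external loop "ticks"; no gate stores or needs any notion of before or after.

# A clocked two-bit counter: each gate is a pure function of present inputs.
# Only the external driver advances the clock; no gate "knows" before or after.
def xor(a, b): return a ^ b
def and_(a, b): return a & b

def tick(state):
    # One clock edge: combinational logic maps the present state to the next.
    b0, b1 = state
    return (xor(b0, 1), xor(b1, and_(b0, 1)))  # increment modulo 4

state = (0, 0)
for _ in range(5):        # the external "drum beat"
    print(state)          # (0,0) -> (1,0) -> (0,1) -> (1,1) -> (0,0)
    state = tick(state)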
The above remarks seem to accord with quantum mechanics, whereby time,
space and conscious observers may well be inextricably intertwined.
Even in relativity theory, it is not possible to entirely abstract
away the conscious observer. Nor does the mathematization of spacetime
resolve all difficulties.
If the spacetime "block" is a changeless whole that transcends time,
how does one say that its parts move? In fact, according to general
relativity, what we perceive as motion is actually a curvature of
spacetime. That is to say, motion and much of the perceived universe
are largely illusory. The spacetime block may be construed as akin to
Kant's thing-in-itself, the thing beyond perception that cannot be
directly perceived, tho perhaps aspects of it can be apprehended by
new theories and new technology.
Here we have an analogy between relativity and quantum theory. On the
large scales of relativity, space and time merge. On the small scales
of energy quanta, space and time merge (tho the rules of merger are
different for the two disciplines, and thus far irreconcilable).
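For readers who want the standard formulas behind that talk of "merging" (none of this is original here): in relativity, space and time enter on an equal footing in the invariant interval, while in quantum theory the position-momentum relation and the heuristic energy-time relation tie the spatial and temporal descriptions together.

ds^2 = -c^2\,dt^2 + dx^2 + dy^2 + dz^2 \qquad \text{(invariant spacetime interval)}

\Delta x\,\Delta p \ge \frac{\hbar}{2}, \qquad \Delta E\,\Delta t \gtrsim \frac{\hbar}{2} \qquad \text{(uncertainty relations)}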
The middle scales of course constitute the range of human perception,
where time and space seem to exist (or, perhaps, subsist) as distinct
"modes of awareness," where Kant's space and time antinomies apply.
That is, to borrow (quite seriously) a term from topology: physics, time, space, and consciousness appear to be more than simply connected.
On that point, let us suppose it possible to construct a topological
model that accounts for all those "phenomena." One could not construct
a Lego model, which can only be built in Euclidean 3-space + time. But
what if one could construct a multiply connected logic circuit (think
Moebius band or Klein bottle)? In that case, we might, I suppose, be
on the trail of a physics of consciousness.
At the very least, such a circuit requires one or more "wormholes,"
since wormholes must exist in order for the cosmos to be multiply
connected.
The conclusion follows that, if consciousness is entirely physically
based, it exists in concert with wormholes that have not been
detected.
+++++
* No relativistic effects are known that result in multi-connectivity,
tho there has been speculation. On the other hand, it is hard to see
how the spacetime block could be anything but multi-connected.