Friday, December 27, 2024

K paper C

 VTX 14


Now how are we to suppose that the I-sense changes roles? True,
usually the role shifts are small enough not to warrant the separate
roles being construed as distinct personas.

I think it reasonable to suppose that the roles (= persona or
personas), like other aspects of the system, follow a pattern whereby
some target template is acceptably approximated by (usually)
converging matrices. (Pavlov's experiment that induced a neurotic
breakdown in a dog tends to confirm this viewpoint, as the reader will
find explained elsewhere.) But these matrices are hierarchical. Thus
at the hierarchical level of questioner and responder, persona Q and
persona R differ so little as to qualify as modes of the same
principal persona P. Correspondingly, the persona target matrices Q'
and R' are both very similar to target matrix P'.
(Bear in mind that these matrices may have millions or billions of entries.)
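As a toy Python sketch (names, sizes and rates are invented here, and
the dimensions are tiny), the idea is that each persona matrix is
nudged toward its target template by simple feedback, and that the
questioner and responder targets Q' and R' are small perturbations of
the principal target P':

import numpy as np

# Toy sizes only: real "persona matrices" would have millions or billions of entries.
rng = np.random.default_rng(0)

P_target = rng.normal(size=(4, 4))                     # principal target matrix P'
Q_target = P_target + 0.01 * rng.normal(size=(4, 4))   # questioner target Q', very close to P'
R_target = P_target + 0.01 * rng.normal(size=(4, 4))   # responder target R', very close to P'

def converge(state, target, rate=0.2, steps=50):
    # Nudge the current persona state toward its target template (negative feedback).
    for _ in range(steps):
        state = state + rate * (target - state)
    return state

state = rng.normal(size=(4, 4))          # whatever configuration the system starts in
on_stage = converge(state, Q_target)     # persona Q "on stage"
print(float(np.abs(on_stage - Q_target).max()))   # residual error is small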
+++++
At this point, readers will wonder about AI machine learning models.
What stands out here is that no form of AI that I know of requires
personas. No form of AI requires an I-sense that is projectable into a
persona. Of course engineers could set up, if they so desired, a
system with one or more segments that pose questions and one or more
that answer them. But these segments would require no consciousness.

It also seems wise at this point to note that a number of
philosophers, psychologists, biologists and neuroscientists have
sought to abolish the concept of consciousness (see W. James, G. Ryle)
based on the fact that the concept is nebulous and hard to pin down
scientifically. I have no problem with suspending use of that word.
Yet I do have a problem with abolishing the concepts of pleasure, pain
and emotional feelings. But one would think some degree of consciousness
(or awareness or SOMETHING) must go along with these qualia. Trading
the word "awareness" for "consciousness" doesn't really mean much.

Some people argue that when a motion detector scanner is tripped, we
should say the motion detector is aware of motion. The more complex
you make that motion detector, the more likely people will agree. But
until the motion detector can actually experience pain or pleasure, I
would say the use of that word is hollow.
+++++
VTX 15-18
Notice too that the "working I-sense" is attached to the persona that
is "on stage" and is always (or nearly so) in a quantum-like present.
For everyday experience (how words fail!) the NOW is not zero. That
"now" may be made mathematically equivalent to zero, but the
perceiver's present is to him largely undefinable, except by
comparisons -- as in "shorter than a minute" but "longer than a
second" (but then you face circular reasoning). Sometimes this
"working present" is called the "specious present."

Others of course can measure mean durations of the sense of present.
And they find that anything too close to zero is no good, as
demonstrated by the brain-injured musician whose memory span was seven
to 30 seconds, preventing him from forming new memories or recalling
most past events. Visitors found him in a state of perpetually waking
up, not recalling that he had seen the visitor seconds earlier.

This points to the importance of working, short term memory in the
formation of the specious present and in reality construction in
general. Total short-term amnesia would result in a coma-like
condition. Curiously, procedural memory can remain in place and permit
the individual to function, even tho consciousness is greatly
impaired. For example, the musician was able to play classical piano
and conduct choirs. But how conscious was he of what he was doing?

This last points to the fact that, notionally, one could design an AI
program to play classical piano and conduct music. In fact such
already exist. And they are not conscious.

Further, note the obvious point that a very short specious present
along with virtually no short-term memory meant the musician could
form no immediate plans, nor any plans at all, since to do so requires
both long and short term (working) memory.

VTX 19

Now consider, for example, a dog. It exists pretty much in the
present, tho as a higher level mammal, it can think well enough to
achieve short-term goals. We take it as having its own personality.
But does it have a conscious persona? Whatever one's sentimentalism
concerning dogs might be, there is little proof of much, or any, inner
reflectivity. But then, consciousness remains a will-o'-the-wisp.

In any case, no one thinks Rover has any notion of some force(?)
called time. Rover lives largely in the canine specious present, with
time based (but not conceptually based) on needs it apparently FEELS
(consciousness) and that emerge periodically. Still, even Rover's
specious present requires a mix of short and long term memories.
Animal experiments have shown that complex behavior to gratify needs
ceases when memory is truncated, tho there still remain routines for
such gratification, some of which are "hard-wired" into the animal's
CNS. Interestingly, once brain surgery has proceeded to the point that
the animal cannot feel, then we tend to deny that it is alive. More
specifically, we say the organism as a whole may still be functioning,
but that it is "brain dead," and we regard the brain as integral to
any meaningful mammalian life.

VTX 20

For the dog, past, present and future constitute a holistic unity (tho
there are numerous apparent exceptions that for this discussion are
unimportant). For the human, on the other hand, time exists with the
aid of memory. No memories, no time.
In the human's specious present, the "lapse of time" requires
attention to significant input signal change. If one does not attend
to that task, then the specious present remains a NOW with highly
blurred borders.

An important detail here is that more complex memories concern
"before" and "after" impressions. ["That day, Mom arrived home before
I did."]

But how does the brain determine beforeness, afterness and
simultaneity, and how does it obtain magnitudes of beforeness or
afterness? ["Mom arrived home long before I did."]

Observe that beforeness and afterness map 1-to-1 onto the less-than,
greater-than relation. We can arrive at that relation empirically by
comparing piles of beans, and then abstract from there (as logicians
and set theorists do).

But in the case of before-after, we require memories. We can
secondarily apply the less-greater relation by referring to, say,
dates in our memory. But this won't work for illiterates and
innumerates. So that is why I suggested that emotional intensity (see
my paper Toward) is one means of timewise ranking.

Consider the assertion, "I remember it like it was yesterday." How
does the persona know that "it" wasn't yesterday? Probably from
internal cues (such as, "my bat that I later broke as a kid").  <bat,
kid> --> BEFORE --> <all memories related to adulthood>.

We are not considering a simple problem. Note that Rover apparently
consciously remembers nothing that happened an hour ago, let alone
years ago. (That is not to say his procedural memory doesn't go back
years.)

Consider also that every recalled memory is taken as referring to some
event BEFORE now. Time is made to exist (at least partly defined) by
memory (human memory, at any rate).

And yet computers have the functional equivalent of memory (even the
same word is used). For Turing's offspring, there is no need to posit
an elapse of time such that an inner observer needs to define "before"
and "after" in relation to these memories. Yes, an external
observer-designer gives the computer a clock, which sets a drum beat
by which the logic gates can dance harmoniously. No gate or
set of gates needs to know before or after. Only the conscious
observer finds a need to parse that mystery named "time."

The above remarks seem to accord with quantum mechanics, whereby time,
space and conscious observers may well be inextricably intertwined.
Even in relativity theory, it is not possible to entirely abstract
away the conscious observer. Nor does the mathematization of spacetime
resolve all difficulties.

If the spacetime "block" is a changeless whole that transcends time,
how does one say that its parts move? In fact, according to general
relativity, what we perceive as motion under gravity is actually travel
along the curvature of spacetime. That is to say, motion and much of
the perceived universe
are largely illusory. The spacetime block may be construed as akin to
Kant's thing-in-itself, the thing beyond perception that cannot be
directly perceived, tho perhaps aspects of it can be apprehended by
new theories and new technology.

Here we have an analogy between relativity and quantum theory. On the
large scales of relativity, space and time merge. On the small scales
of energy quanta, space and time merge (tho the rules of merger are
different for the two disciplines, and thus far irreconcilable).

The middle scales of course constitute the range of human perception,
where time and space seem to exist (or, perhaps, subsist) as distinct
"modes of awareness," where Kant's space and time antinomies apply.
That is, to borrow (quite seriously) from topology and physics, time,
space, and consciousness appear to be more than simply connected.

On that point, let us suppose it possible to construct a topological
model that accounts for all those "phenomena." One could not construct
a Lego model, which can only be built in Euclidean 3-space + time. But
what if one could construct a multiply connected logic circuit (think
Moebius band or Klein bottle)? In that case, we might, I suppose, be
on the trail of a physics of consciousness.

At the very least, such a circuit requires one or more "wormholes,"
since wormholes must exist in order for the cosmos to be multiply
connected.

The conclusion follows that, if consciousness is entirely physically
based, it exists in concert with wormholes that have not been
detected.

+++++
* No relativistic effects are known that result in multi-connectivity,
tho there has been speculation. On the other hand, it is hard to see
how the spacetime block could be anything but multi-connected.

K paper E

 VGQ5


if-then occurs in the degree 1 system.
if no detection, then no action.
if detection, then action (including the null action).

detection --> action
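
In code, the degree 1 system is nothing more than a single
conditional. A minimal Python sketch:

def degree_1_system(detected: bool) -> str:
    # if no detection, then no action; if detection, then action
    # (the null action counts as one of the possible actions)
    if not detected:
        return "no action"
    return "action"

print(degree_1_system(False), degree_1_system(True))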
+++++++++++

A simple feedback system samples output and modifies the decision unit
input (homeostasis --> negative feedback control).
Such a system requires that input signals somehow be counted (perhaps
by energy threshold or time interval, or both).
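
A minimal Python sketch of such a homeostat, assuming a single scalar
set point and a gain value (both invented here for illustration):

def homeostat(setpoint, output, gain=0.5):
    # Negative feedback: sample the output and feed a sign-reversed
    # correction back to the decision unit's input.
    error = setpoint - output
    return output + gain * error

value = 10.0
for _ in range(20):
    value = homeostat(setpoint=37.0, output=value)
print(round(value, 3))   # settles near the set point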

Here we note that, as neuro studies affirm, we require some minimum
time interval, which one would like to call the minimum unit of
attention. [For example, a radar motion detector's minimum interval is
bounded below by the signal's round-trip travel time at the speed of
light.]

Plainly, we can increase the number of decision switches in tandem
with input sensors.

Since we are going from the ground up, we have every input sensor
going to a single binary decision switch, yielding one action or
inaction. Now the sensor system may be somewhat complicated -- as is
the case with a radar motion detector, with its multiplicity of
sensors, exterior and interior. Yet any detected motion above a
particular threshold yields a decision/action.
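
A Python sketch of that arrangement, with a placeholder threshold and
made-up readings:

def decide(sensor_readings, threshold=0.7):
    # Many exterior/interior sensors, one binary decision switch:
    # any reading above the threshold trips the switch; otherwise the null action.
    return "act" if max(sensor_readings) > threshold else "null action"

print(decide([0.1, 0.3, 0.9]))   # act
print(decide([0.1, 0.3, 0.2]))   # null action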

Obviously one can have a large number of sensors and decision circuits
in "primitive" robots and drones.

We might call the above degree 1 attention.

The next steps require negative feedback control and if-then (or the
equivalent) logic gates.

VQ7

Now one may easily combine (conceptually) hierarchical complexity with
feedback control complexity (those are key ingredients of AI).

We have, say,

output X' = x'_1, x'_2, x'_3, ..., x'_n

in which the output is sampled at every nth term and fed back to a
control on f(x). Call it g(x).

[It is also convenient (but not necessary) to assume here that the
inputs x_n are paired with the outputs x'_(n+1).]

That is, f_(n+1)(x) = f_n(x) + g(x_n).

We can also write f_n(x) = f_(n+1)(x) - g(x_n).

[To assure negative feedback, the sampled data must be out of phase
with the input data. Check incoming books]
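
As a rough Python sketch of the scheme above (f, g and the sampling
interval are arbitrary placeholders):

def run(inputs, f, g, sample_every=3):
    # The output stream is sampled at every nth term and the sampled value,
    # passed through g, is added back into the controller: f_(n+1)(x) = f_n(x) + g(x_n).
    outputs, correction = [], 0.0
    for i, x in enumerate(inputs, start=1):
        y = f(x) + correction
        outputs.append(y)
        if i % sample_every == 0:        # sample every nth output
            correction = g(y)            # feed the sample back to the control on f
    return outputs

outs = run(range(10), f=lambda x: 2 * x, g=lambda y: -0.1 * y)   # g sign-reversed (out of phase)
print(outs[:6])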

Speculation for now: Bak's power-law complexity, which he says lies
on the border between periodicity and chaos, might be reflected in
the borderline between negative and positive feedback. That is,
positive feedback implies that input and output are in phase, while
negative feedback is calmest at a 180-degree phase difference. So
Bakian systems would likely be found to be only mildly out of phase.
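
To make the phase point concrete, here is a toy Python run of my own
(not Bak's model); the gain, decay factor and step count are
arbitrary:

import math

def feedback_run(phase, gain=0.3, decay=0.9, steps=60):
    # Feed a copy of the output back in, attenuated by the cosine of the
    # phase difference: in phase -> growth (positive feedback);
    # 180 degrees out of phase -> damping (negative feedback).
    level = 1.0
    for _ in range(steps):
        level = decay * level + gain * math.cos(phase) * level
    return level

print(feedback_run(0.0))               # in phase: blows up
print(feedback_run(math.pi))           # fully out of phase: dies away
print(feedback_run(math.acos(1 / 3)))  # mildly out of phase: hovers at the border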

++++++++++

Because g(x) represents "internal" detection, we assign it a "basic"
level of attention.

++++++++
Thus far, we have not progressed farther than the time steps of a
computer clock.

+++++++++

Now we can use multiple detectors to attain a single decision (which
we might label as attention level 2). But here let's not confuse
things by putting our decision nets into some grab-all box.

Rather, all we need do is follow a logic gate model (see elsewhere).

For a =/= b, we have

input a and input b at logic gate yields decision c (including null
action decision).

or,

a.b --> c

a.~b --> c'

b.~a --> c''

We can ramp up the complexity somewhat with such forms as

c.c' --> d
c.~c' --> d'

and so on.
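
In Python, the gate forms above translate almost literally (booleans
stand in for the signals):

def layer_one(a: bool, b: bool):
    c  = a and b            # a.b   --> c
    c1 = a and not b        # a.~b  --> c'
    c2 = b and not a        # b.~a  --> c''
    return c, c1, c2

def layer_two(c: bool, c1: bool):
    d  = c and c1           # c.c'   --> d
    d1 = c and not c1       # c.~c'  --> d'
    return d, d1

c, c1, c2 = layer_one(True, False)
print(layer_two(c, c1))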

This type of construction program might be dubbed hierarchical "complexity."

(We needn't bother much with positive feedback here, tho one can see
its effects in severe epilepsy.)

++++++++++

So it is straightforward that a feedback system is an if-then system
(or sub-system). If the sample has value x, act to modify the output
stream. If not, do nothing.

It is also plain that one can place feedback control logic gates at
any level of a hierarchy. Usually it seems rational to control only
the previous level, but there is no conceptual reason that feedback
control from level n to level n-m is impermissible.
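
A rough Python sketch of that idea (one forward pass only; the level
functions and the feedback tap are invented for illustration), showing
a correction sent from level n down to level n - m:

def run_hierarchy(x, transforms, taps=()):
    # Forward pass up the levels, then feedback taps: each tap sends a
    # correction from level n back down to a lower level.
    levels = [x]
    for f in transforms:
        levels.append(f(levels[-1]))
    for frm, to, g in taps:
        levels[to] += g(levels[frm])
    return levels

levels = run_hierarchy(
    3.0,
    transforms=[lambda v: 2 * v, lambda v: v + 1, lambda v: v ** 2],
    taps=[(3, 1, lambda v: -0.01 * v)],   # feedback from level 3 down to level 1
)
print(levels)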

All that has actually been done with the above words is to
conceptually design a software program, which might be attached to
servomechanisms. A conceptual robot.

VGQ7

A system of logic gate hierarchies and feedback controls simply
multiplies the units of attention, which remain discrete unto
themselves.

Increased complexity implies no change in attention (defined in terms
of scanning), nor does it imply consciousness. A sufficiently complex
robot ("android") may pass a Turing test while its design shows no
potential for "inner awareness" or consciousness. Increased complexity
of detection/response DOES NOT IMPLY sentience, despite implying
heightened attention and responsiveness (from more sensors and a
hierarchical decision system).

Well, if there is no physical reason for consciousness to emerge from
the physical brain, we must ask the point of its existence and,
further, how it interacts with the physical units. (Back to
Descartes.)

Inner-reflectivity, or consciousness, does seem to be intimately
related to physical structures, as the strong evidence from brain
damage cases demonstrates. Perhaps we may regard a newborn human as
having an untrained consciousness. The physical brain is responsible
for providing a way for the consciousness to interpret the world in
which it finds itself. (Agreed, this is a conceptualization that may
not throw much light.)

How do drugs and sleep dim or block consciousness? Or do they? Perhaps
they block consciousness from interacting with the "external world."
Even in deep, delta rhythm sleep, we cannot really say that
consciousness is gone. We can say that many of the sensors have shut
down and so the consciousness has less of immediate concern to deal
with. This is equivalent to saying that much sensory awareness is shut
off.

(It is probably a good idea to scrap the term "awareness" in favor of
the word "attention." At some point, we will need to formulate a
scientific glossary, such that specific terms have precise
definitions.)

WGQ9

And then, we have the whole business about the "I sense" v. bottom-up
"will." Many programs make decisions from bottom to top. But it is
possible to design in a template (think of a matrix) toward which
the program strives, using feedback control. The "will" is then equivalent
to Matrix M. In Toward we show how much of human cognition seems to
follow this pattern. You'll also observe a similarity in AI
techniques.

The "will" then becomes a template of matrix numbers ("goal") which,
for humans, has sub-system matrix templates, sometimes vying with each
other toward the overall goal of homeostasis. The shifting will then
is the top template goal, as approximated by sub-system feedback
efforts.

So for a software program, whether the will goes top to bottom, bottom
to top or (as is very often the case) both, there is no freedom there.
The concept of free will belongs with any non-physical aspects of
existence, such as consciousness.

Further, note that for many people, chatbots pass the Turing test.
They seem as though they are consciously interacting with human
questioners. But those bots simply have no need for an internal "I
sense" or persona which it calls "me."

Consider blindsight. Lower down the scale of complexity, it is thought
there are animals that get around using a blindsight system. They
don't see as we do, but they function as tho some limited level of
consciousness were seeing thru their eyes. Yet, blindsight is
defined by the organism responding to visual stimuli while being
unconscious of seeing them.
Those animals have no need for anything within them that says "I see it."

A robot without consciousness would be in some ways similar to a human
afflicted with blindsight.

+++++++++

Up to this point, we have been unable to identify any physical need
for consciousness, nor any physical way of causing it to emerge, tho
intelligent-like behavior can be designed, as we see from AI. Nor does
there appear to be any evolutionary advantage to consciousness, nor to
the qualia of pleasure and pain, nor to emotional feelings associated
with survivalist behaviors, all of which can be strongly simulated
without those qualia.

It is noteworthy that Gould, puzzled by the "excessive" power of human
cognition, conjectured that it was akin to an architectural spandrel --
a byproduct of building something else that remains in the structure
without itself having been designed for a purpose -- or that it might
be a consequence of a neutral mutation. But
when we say there is no physically definable reason for an "inner
consciousness," we mean there is no obvious physics linking a physical
system to consciousness.

Penrose tried to account for consciousness by inserting quantum
gravity conjectures into the brain, but those ideas have gained no
traction.

What physics of consciousness?
Some have resolved the problem by saying that since the mind, or
equivalently, consciousness represents a process, there is no "it"
that can exist. Of course that is like saying that because you never
step into the same river twice, rivers don't exist. In any case, all
physics is about process. Yet the universe is held to exist.

The emergence, or epiphenomenon, concept.
"Emergence" means that systems can show collective behavior patterns
that are not evident in the smaller subsystems. Well known examples
are gas laws and population extinction lines. Power laws, too, may
emerge when complex systems (those with feedback loops) are regarded
holistically (dynamically).

But what is it that emerges? Behavior. Unless one insists that
consciousness is a form of behavior, then consciousness does not
emerge from dynamic systems. Yet if we say that consciousness is a
form of behavior, we are either saying it is a form that doesn't exist
in the physical world, or we are failing to ask the question: WHAT
form of behavior is it? How does one map it onto a Turing machine?

In other words, we may ask, How would an AI bot HAVE consciousness?
There seems to be no physical reason available for consciousness to
emerge.

I concede however that one could theoretically design an android
containing para-emotions and feelings that, to our eyes, gets angry
and flustered if challenged with questions it can't answer or that get
at an "unconscious" conflict (competing sub-goals), just as routinely
occurs among humans. BUT, is it consciousness that emerges, or is it
complex behaviors that  emerge?

My position is that you would also have to design in a para-persona
that talks to itself. But a para-persona implies a para-consciousness,
which is a system that simulates consciousness but is not sentient.

But tho one can design a reality modeler -- a prediction control unit
such as is used by anti-aircraft batteries -- how would one design a
logic circuit such that the unit identifies itself -- and FEELS that
identity -- as "I"? How does one design in subjectivity? Any
functioning self-referencing logic circuit is nothing but a feedback
control system.

The "I sense" doesn't emerge from physical systems.

WQG13

As to current concepts of quantum computing, nothing much emerges
beyond ultra-fast parallel processing, or eavesdropper detection. Of
course, this doesn't rule out the possibility that quantum
entanglement, or, say, a many worlds effect has something to do with
consciousness. But what would the associated logic circuit look like?
Even if one uses quantum logic, as in "(A.~A = 1) --> no observation",
one has to set up the decision circuit as "(A.~A = 1) --> no decision"
instead of the classical "(A.~A = 1) --> 0" . The logic circuit will
follow a decision form very closely aligned with a standard decision
tree.
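
A toy Python decision function along those lines (my own illustration,
not an actual quantum circuit):

def decide(A: bool, not_A: bool) -> str:
    # Classically A and ~A cannot both register; if the measurement
    # context reports both (the "(A.~A = 1)" case), return "no decision"
    # rather than forcing 0. The structure is still an ordinary decision tree.
    if A and not_A:
        return "no decision"
    if A:
        return "act"
    return "null action"

print(decide(True, False), decide(True, True), decide(False, False))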

For certain situations, quantum logic simply requires the inclusive-or
(or the exclusive-or) where you would not classically expect it.
Quantum logic also builds on inherent (axiomatic) weights, as opposed
to empirical derivations. All this is to say that quantum logic is
routine logic mildly modified (at least insofar as design of logic
circuits). Quantum behavior can be modeled by that route, but it is a
stretch to think that there is any obvious bridge between quantum
mechanics and the I-sense.



Sunday, March 5, 2023

Substack as band aid

A band can easily expand its social media presence by use of Substack, which publishes and distributes email newsletters.

Options:

# Assuming a band has a list of subscribers for its promos, that list can be easily added to the Substack account. The startup newsletter would probably only duplicate most of the band's other social media material. This newsletter would be free, and its purpose would be to draw new subscribers and others to your band. That is, keep picking up newbies to your stuff. You might use this newsletter to advertise your wares, whether albums or greeting cards, or what have you.

The newsletter can be issued as occasionally as you wish. No real pressure.

# You can put out a closed-circulation newsletter, offered either at a price or gratis, as an inducement for Patreon support.

# You can also put out a separate general-circulation newsletter that carries a pricetag and earns you money (hopefully).

Important note: It is safest to set up a separate account for each newsletter you put out. The Substack system would not let me produce separate newsletters on one account, despite saying that it was possible (which was true, according to its peculiar definition).

It is quite possible you could get one of your fans to handle the grunt-work gratis.

(I am not volunteering.)
