Friday, December 27, 2024

K paper E

 VGQ5


If-then occurs in the degree 1 system:
If no detection, then no action.
If detection, then action (including the null action).

detection --> action
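
A minimal sketch of this degree 1 setup, in Python with invented names
(nothing here is specified in the text): one detector driving one
binary switch.

# Degree 1 attention: one sensor, one binary decision switch.
def degree_1(detected: bool) -> str:
    if not detected:
        return "no action"   # no detection, then no action
    return "action"          # detection, then action (possibly the null action)

print(degree_1(True))    # -> action
print(degree_1(False))   # -> no action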
+++++++++++

A simple feedback system samples output and modifies the decision unit
input (homeostasis --> negative feedback control).
Such a system requires that input signals somehow be counted (perhaps
by energy threshold or time interval, or both).

Here we note that, as neuroscience studies affirm, we require some
minimum time interval, which one would like to call the minimum unit
of attention. [For example, a radar motion detector's minimum interval
is bounded below by the signal's round-trip time at the speed of light.]

Plainly, we can increase the number of decision switches in tandem
with input sensors.

Since we are going from the ground up, we have that every input sensor
feeds a single binary decision switch, yielding one action or
inaction. Now the sensor system may be somewhat complicated -- as is
the case with a radar motion detector, with its multiplicity of
sensors, exterior and interior. Yet any detected motion above a
particular threshold yields a decision/action.

Obviously one can have a large number of sensors and decision circuits
in "primitive" robots and drones.

We might call the above degree 1 attention.

The next steps require negative feedback control and if-then (or the
equivalent) logic gates.

VQ7

Now one may easily combine (conceptually) hierarchical complexity with
feedback control complexity (those are key ingredients of AI).

We have, say,

output X' = x'_1, x'_2, x'_3, ..., x'_n

in which the output is sampled at every kth term and fed back to a
control on f(x). Call that control function g(x).

[It is also convenient to assume here (but not necessary) that each
input x_n is paired with the output x'_(n+1).]

That is, f_(n+1)(x) = f_n(x) + g(x).

We can also write f_n(x) = f_(n+1)(x) - g(x).

[To assure negative feedback, the sampled data must be out of phase
with the input data. Check incoming books]
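
A small sketch of such a sampled feedback loop in Python. The
setpoint, gain, and sampling interval k are invented for illustration,
and g is reduced to a simple proportional correction folded into the
effective transfer function, per f_(n+1)(x) = f_n(x) + g(x).

# Sampled negative-feedback loop: the output is sampled every kth term and
# the correction g is accumulated into the effective transfer function.
def run_loop(inputs, setpoint=0.0, k=3, gain=0.5):
    correction = 0.0              # running effect of g(x)
    outputs = []
    for n, x in enumerate(inputs, start=1):
        y = x + correction        # f_n(x): base response plus feedback term
        outputs.append(y)
        if n % k == 0:            # sample every kth output term
            error = y - setpoint
            correction -= gain * error   # negative feedback opposes the error
    return outputs

print(run_loop([1.0] * 9))   # outputs step down toward the setpoint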

Speculation for now: Bak's power-law complexity, which he says lies on
the border between periodicity and chaos, might be reflected in the
borderline between negative and positive feedback: i.e., positive
feedback implies input and output are in phase; negative feedback is
calmest at a 180-degree phase difference. So Bakian systems would
likely be found to be only mildly out of phase.

++++++++++

Because g(x) represents "internal" detection, we assign it a "basic"
level of attention.

++++++++
Thus far, we have not progressed farther than the time steps of a
computer clock.

+++++++++

Now we can use multiple detectors to attain a single decision (which
we might label as attention level 2). But here let's not confuse
things by putting our decision nets into some grab-all box.

Rather, all we need do is follow a logic gate model (see elsewhere).

For a =/= b, we have

input a and input b at a logic gate yield decision c (including the
null action decision).

or,

a.b --> c

a.~b --> c'

b.~a --> c''

We can ramp up the complexity somewhat with such forms as

c.c' --> d
c.~c' --> d'

and so on.

This type of construction program might be dubbed hierarchical "complexity."
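
A minimal sketch of this gate construction in Python, with c', c'' and
d' written as c1, c2 and d1; the encoding is invented, but the gate
forms follow the expressions above.

# Level-1 decisions are formed from raw inputs a, b; level-2 decisions
# are formed from level-1 decisions, and so on up the hierarchy.
def AND(p, q): return p and q
def NOT(p): return not p

def level_1(a: bool, b: bool) -> dict:
    return {
        "c":  AND(a, b),          # a.b   --> c
        "c1": AND(a, NOT(b)),     # a.~b  --> c'
        "c2": AND(b, NOT(a)),     # b.~a  --> c''
    }

def level_2(l1: dict) -> dict:
    return {
        "d":  AND(l1["c"], l1["c1"]),        # c.c'  --> d
        "d1": AND(l1["c"], NOT(l1["c1"])),   # c.~c' --> d'
    }

l1 = level_1(True, False)
print(l1)            # {'c': False, 'c1': True, 'c2': False}
print(level_2(l1))   # {'d': False, 'd1': False}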

(We needn't bother much with positive feedback here, tho one can see
its effects in severe epilepsy.)

++++++++++

So it is straightforward that a feedback system is an if-then system
(or sub-system). If the sample has value x, act to modify the output
stream. If not, do nothing.

It is also plain that one can place feedback control logic gates at
any level of a hierarchy. Usually it seems rational to control only
the previous level, but there is no conceptual reason that feedback
control from level n to level n-m is impermissible.
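
A toy sketch, with invented levels and gain, of feedback control
reaching from level n back down to level n - m rather than only to the
previous level.

# The output sampled at level n is fed back to correct level n - m.
def run_hierarchy(inputs, n=3, m=2, gain=0.25):
    corrections = [0.0] * (n + 1)        # one correction slot per level
    levels = []
    for x in inputs:
        signal = x
        levels = []
        for lvl in range(1, n + 1):
            signal = signal + corrections[lvl]   # each level adds its correction
            levels.append(signal)
        top = levels[-1]                  # sample taken at level n
        corrections[n - m] -= gain * top  # feed back down to level n - m
    return levels

print(run_hierarchy([1.0, 1.0, 1.0]))    # top-level output settles downward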

All that has actually been done with the above words is to
conceptually design a software program, which might be attached to
servomechanisms. A conceptual robot.

VGQ7

A system of logic gate hierarchies and feedback controls simply
multiplies the units of attention, which remain discrete unto
themselves.

Increased complexity implies no change in attention (defined in terms
of scanning), nor does it imply consciousness. A sufficiently complex
robot ("android") may pass a Turing test while its design shows no
potential for "inner awareness" or consciousness. Increased complexity
of detection/response DOES NOT IMPLY sentience, despite implying
heightened attention and responsiveness (from more sensors and a
hierarchical decision system).

Well, if there is no physical reason for consciousness to emerge from
the physical brain, we must ask: what is the point of its existence
and, further, how does it interact with the physical units? (Back to
Descartes.)

Inner-reflectivity, or consciousness, does seem to be intimately
related to physical structures, as the strong evidence from brain
damage cases demonstrates. Perhaps we may regard a newborn human as
having an untrained consciousness. The physical brain is responsible
for providing a way for the consciousness to interpret the world in
which it finds itself. (Agreed, this is a conceptualization that may
not throw much light.)

How do drugs and sleep dim or block consciousness? Or do they? Perhaps
they block consciousness from interacting with the "external world."
Even in deep, delta rhythm sleep, we cannot really say that
consciousness is gone. We can say that many of the sensors have shut
down and so the consciousness has less of immediate concern to deal
with. This is equivalent to saying that much sensory awareness is shut
off.

(It is probably a good idea to scrap the term "awareness" in favor of
the word "attention." At some point, we will need to formulate a
scientific glossary, such that specific terms have precise
definitions.)

WGQ9

And then we have the whole business about the "I sense" v. bottom-up
"will." Many programs make decisions from bottom to top. But it is
possible to design in a template (think of a matrix) toward which the
program strives, using feedback control. The "will" is then equivalent
to Matrix M. In Toward we show how much of human cognition seems to
follow this pattern. You'll also observe a similarity in AI
techniques.

The "will" then becomes a template of matrix numbers ("goal") which,
for humans, has sub-system matrix templates, sometimes vying with each
other toward the overall goal of homeostasis. The shifting will then
is the top template goal, as approximated by sub-system feedback
efforts.
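
A toy sketch of the template idea, with made-up numbers and gain: the
state of the sub-systems is nudged toward the goal matrix M by
repeated negative-feedback steps.

# The "will" as a template: a goal matrix M toward which the current state
# is pushed by feedback; each entry acts as a sub-system's sub-goal.
M = [[1.0, 0.0],
     [0.0, 1.0]]              # top-level template ("goal")

state = [[0.2, 0.8],
         [0.6, 0.1]]          # current condition of the sub-systems

def step_toward_template(state, template, gain=0.5):
    # One feedback step: each entry moves a fraction of the way to its sub-goal.
    return [[s + gain * (t - s) for s, t in zip(srow, trow)]
            for srow, trow in zip(state, template)]

for _ in range(5):
    state = step_toward_template(state, M)
print(state)                  # entries have converged most of the way toward M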

So for a software program, whether the will goes top to bottom, bottom
to top or (as is very often the case) both, there is no freedom there.
The concept of free will belongs with any non-physical aspects of
existence, such as consciousness.

Further, note that for many people, chatbots pass the Turing test.
They seem as though they are consciously interacting with human
questioners. But those bots simply have no need for an internal "I
sense" or persona that they call "me."

Consider blindsight. Lower down the scale of complexity, it is thought
there are animals that get around using a blindsight system. They
don't see as we do, but they function as tho they have some limited
level of consciousness seeing thru their eyes. Yet, blindsight is
defined by the organism responding to visual stimuli while being
unconscious of seeing them.
Those animals have no need for anything within them that says "I see it."

A robot without consciousness would be in some ways similar to a human
afflicted with blindsight.

+++++++++

Up to this point, we have been unable to identify any physical need
for consciousness, nor any physical way of causing it to emerge, tho
intelligent-like behavior can be designed, as we see from AI. Nor does
there appear to be any evolutionary advantage to consciousness, nor to
the qualia of pleasure and pain, nor to emotional feelings associated
with survivalist behaviors, all of which can be strongly simulated
without those qualia.

It is noteworthy that Gould, puzzled by the "excessive" power of human
cognition, conjectured that it was akin to an architectural spandrel,
a byproduct of structural constraints rather than a feature built for
its own sake, or that it might be a consequence of a neutral mutation. But
when we say there is no physically definable reason for an "inner
consciousness," we mean there is no obvious physics linking a physical
system to consciousness.

Penrose tried to account for consciousness by inserting quantum
gravity conjectures into the brain, but those ideas have gained no
traction.

What physics of consciousness?
Some have resolved the problem by saying that since the mind, or
equivalently, consciousness represents a process, there is no "it"
that can exist. Of course that is like saying that because you never
step into the same river twice, rivers don't exist. In any case, all
physics is about process. Yet the universe is held to exist.

The emergence, or epiphenomenon, concept.
"Emergence" means that systems can show collective behavior patterns
that are not evident in the smaller subsystems. Well known examples
are gas laws and population extinction lines. Power laws that may
emerge when systems are regarded holistically (dynamically) emerge in
complex systems (those with feedback loops).

But what is it that emerges? Behavior. Unless one insists that
consciousness is a form of behavior, then consciousness does not
emerge from dynamic systems. Yet if we say that consciousness is a
form of behavior, we are either saying it is a form that doesn't exist
in the physical world, or we are failing to ask the question: WHAT
form of behavior is it? How does one map it onto a Turing machine?

In other words, we may ask, How would an AI bot HAVE consciousness?
There seems to be no physical reason available for consciousness to
emerge.

I concede, however, that one could theoretically design an android
containing para-emotions and feelings, one that, to our eyes, gets
angry and flustered if challenged with questions it can't answer or
that touch an "unconscious" conflict (competing sub-goals), just as
routinely occurs among humans. BUT is it consciousness that emerges,
or is it complex behavior that emerges?

My position is that you would also have to design in a para-persona
that talks to itself. But a para-persona implies a para-consciousness,
which is a system that simulates consciousness but is not sentient.

But tho one can design a reality modeler -- a prediction control unit
such as those used by anti-aircraft batteries -- how would one design a
logic circuit such that the unit identifies itself -- and FEELS that
identity -- as "I"? How does one design in subjectivity. Any
functioning self-referencing logic circuit is nothing but a feedback
control system.

The "I sense" doesn't emerge from physical systems.

WQG13

As to current concepts of quantum computing, nothing much emerges
beyond ultra-fast parallel processing, or eavesdropper detection. Of
course, this doesn't rule out the possibility that quantum
entanglement, or, say, a many worlds effect has something to do with
consciousness. But what would the associated logic circuit look like?
Even if one uses quantum logic, as in "(A.~A = 1) --> no observation",
one has to set up the decision circuit as "(A.~A = 1) --> no decision"
instead of the classical "(A.~A = 1) --> 0" . The logic circuit will
follow a decision form very closely aligned with a standard decision
tree.
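
A minimal sketch, with invented encodings, of that contrast between
the classical form and the quantum-flavored form of the decision
circuit: the classical circuit maps the contradiction to 0, while the
quantum-flavored circuit routes it to an explicit "no decision" branch.

# Classical vs. quantum-flavored handling of the condition A.~A = 1.
def classical_decision(a_and_not_a: bool) -> int:
    return 0 if a_and_not_a else 1           # (A.~A = 1) --> 0

def quantum_style_decision(a_and_not_a: bool) -> str:
    if a_and_not_a:
        return "no decision"                  # (A.~A = 1) --> no decision
    return "decision"

print(classical_decision(True), quantum_style_decision(True))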

For certain situations, quantum logic simply requires the inclusive-or
(or the exclusive-or) where you would not classically expect it.
Quantum logic also builds on inherent (axiomatic) weights, as opposed
to empirical derivations. All this is to say that quantum logic is
routine logic mildly modified (at least insofar as the design of logic
circuits is concerned). Quantum behavior can be modeled by that route, but it is a
stretch to think that there is any obvious bridge between quantum
mechanics and the I-sense.
