Friday, December 27, 2024

K paper B

 

VTX 1

In the interior "I mode," many, but not all, persons talk to
themselves, usually silently, tho a few whisper or even speak aloud.

What is going on? What does it mean to say that the self sends a
verbal message to the self? What is speaking and what is listening?
And what of persons who don't talk verbally, with their "streams of
consciousness" being impressionistic feelings, desires, emotions? Of
course, the talkers have these qualia as part of their stream also,
but tend to focus on the verbal part of the signal or, anyway, data
stream.

Let's ask what it might mean for an android to have such a situation.
It's really not that hard a problem. An android is sampling an output
signal high in its hierarchy in order to modify that stream and-or
other output streams prior to an action decision. For a highly complex
system, that "decision" may be in the form of a prioritization
projected well into the future (matrix algebra can help here).
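What might that look like concretely? A minimal sketch in Python, assuming a hypothetical payoff matrix whose rows are candidate actions and whose columns are future time steps; the action names and all numbers are invented for illustration.

    import numpy as np

    # Hypothetical payoff matrix: rows = candidate actions, columns =
    # future time steps. Entry [i, t] is the projected value of action
    # i at step t. Names and numbers invented for illustration.
    actions = ["approach", "freeze", "withdraw"]
    payoffs = np.array([
        [0.9, 0.4, 0.1],   # approach: good now, worse later
        [0.2, 0.2, 0.2],   # freeze: flat
        [0.1, 0.5, 0.8],   # withdraw: poor now, better later
    ])

    # Discount future steps geometrically, then rank actions by total
    # projected value. The "decision" is just the top-ranked row.
    discount = 0.7 ** np.arange(payoffs.shape[1])
    scores = payoffs @ discount
    for i in np.argsort(scores)[::-1]:
        print(f"{actions[i]}: projected value {scores[i]:.2f}")

The matrix product does the projection "well into the future" in one step, which is all that was meant by matrix algebra helping here.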

Now in a sample case, a human's self-talk follows a question-answer format.

Q. If I do X, what is likely to happen?
A. Y is what experience leads me to expect.
R (Reflection). But Y is not satisfactory.
(Try again)
Q. What if I do W?

... and so on.
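That loop can be written down directly. Here is a minimal Python rendering, with a toy memory of if-then associations and a made-up satisfaction threshold; nothing here is meant as more than an illustration.

    # Toy memory of if-then associations: action -> (expected outcome,
    # satisfaction score). Entries invented for illustration.
    memory = {
        "X": ("Y", 0.3),   # doing X tends to produce Y, rated unsatisfying
        "W": ("Z", 0.8),   # doing W tends to produce Z, rated satisfying
    }

    THRESHOLD = 0.5

    def deliberate(candidates):
        """Run the Q-A-R loop: question, answer from memory, reflect, retry."""
        for action in candidates:            # Q. If I do <action>, what happens?
            outcome, score = memory[action]  # A. Experience predicts <outcome>.
            if score < THRESHOLD:            # R. But <outcome> is not satisfactory.
                continue                     # (Try again.)
            return action, outcome
        return None, None                    # no satisfying plan found

    print(deliberate(["X", "W"]))            # -> ('W', 'Z')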

That is, the persona is scanning memory for possibilities of the
if-then variety. It takes the possibilities brought up by
associationist relations and puts them into roughly linear logical
if-then form. It matches the questions with potential answers on a
ranking system (as indicated in Toward).

One may say that the persona builds relations, using if-then as the
relation: x R y, where R ⊆ X × Y, with x ∈ X and y ∈ Y. (Find paper
on language and relations.)

What might be called the thruput, or verbal stream, assists in the
process of seeking matching elements from both the left and right
sides of the relation if-then.
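A minimal sketch of that two-sided matching, with the if-then relation held as a set of (x, y) pairs; the pairs are invented for illustration.

    # The if-then relation R as a set of (x, y) pairs, x ∈ X, y ∈ Y.
    R = {("go home", "backlog grows"),
         ("stay late", "backlog shrinks"),
         ("go home", "meet Mary")}

    def consequents(x):
        """Scan the right side: what tends to follow x?"""
        return {y for (a, y) in R if a == x}

    def antecedents(y):
        """Scan the left side: what tends to bring about y?"""
        return {a for (a, b) in R if b == y}

    print(consequents("go home"))     # {'backlog grows', 'meet Mary'}
    print(antecedents("meet Mary"))   # {'go home'}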

see photo VTX3

Now what about a verbal stream that doesn't follow X --> Y ? Such
thinking is slightly more abstract in the sense that the
emotion-driven shadow (an abbreviation) is continually seeking
satisfaction and continually triggering verbal streams that are
equivalent to X --> Y.  The scanner of the stream may not find a
"serious" match in Y but may help in concocting a transient Y that
briefly matches X for a mild bit of elation. Such a mechanism is very
delicate and subject to much variation, partly accounting for
psychological disturbances, tho most of those are rooted in the shadow
and not in the verbal system.

Now a question is: Where is the "I sense" located in this scheme?

Well, consider this internal dialog:

Proposed action (really a question):
   I think I will go home early, as I am rather tired.

Reflection A:
   But if I do so, this terrible backlog of work will get even worse.
So I really ought to stay.

Reflection B:
   But if I go now, there's a good chance I'll run into Mary at Joe's
Inn. She gets off at 4.

Decision:
   Hang it all, I'm on my way to Joe's! I'm outta here!

Love life trumps both fatigue and "responsibility."

{Better symbolization would be helpful.}

The shadow's conflicting needs -- rest, approval of self or others
(work ethic), love -- all feed into the decision process. The
conscious "I" owns the feelings that express the needs (tho not
necessarily), the verbal if-then analysis, and the top-ranked
potential choice, along with the decision prior to action.*
_______________
* In brain injury or strong psychological disturbance or trances of
various kinds, including sleep states, the I-sense can deny (or be
made to deny) ownership of various functions for which it usually
accepts responsibility.
_______________

There is no obvious need for an I-sense. A computer can follow the
foregoing algorithm and make comparable decisions, which are subject
to de-prioritizing based on needs resulting from emergent data,
external or internal. A computer glitch that slows down processing can
be seen as analogous to the effects of a headache.
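To make the point concrete, here is a minimal Python sketch of the foregoing algorithm, using the Joe's Inn example; the needs, weights, and effects are all invented for illustration.

    # Conflicting needs feed the decision as weights; the chosen action
    # is simply the top-scoring option.
    needs = {"rest": 0.6, "work_ethic": 0.5, "love": 0.9}

    options = {
        "stay at work": {"rest": -0.5, "work_ethic": 0.8, "love": 0.0},
        "go to Joe's":  {"rest": 0.3, "work_ethic": -0.4, "love": 0.9},
    }

    def decide(needs, options):
        def score(effects):
            # Weight each option's effects by the current needs.
            return sum(needs.get(n, 0.0) * e for n, e in effects.items())
        return max(options, key=lambda o: score(options[o]))

    print(decide(needs, options))   # -> go to Joe's

    # Emergent data (say, a fire alarm) rewrites the weights, and the
    # same routine re-prioritizes. No I-sense required.
    needs["safety"] = 1.0
    options["evacuate"] = {"safety": 1.0}
    print(decide(needs, options))   # -> evacuate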

Observe that in the dream state, the I-sense tends to remain in the
present tense. The shadow's needs, in symbolic (moving) picture
language, are quite directly expressed, and it often tries to "feel
better" with a mentally fulfilled wish, much like a tot who spanks a
doll, a safer bet than hitting Daddy. Or who cradles the doll either
because she wants to be more like Mommy or because she craves Mommy's
attention.

The wish fulfilment aspect of mental life fits into the general
mammalian method of tuning up bodily and mental skills thru play. Is
it possible that androids would need to play? Well, one can certainly
see the possibility of shakedown periods in which they teach each
other skills, much as some chess engines can match wits against each
other in order to improve their play. But they would need no desire to
"feel better." Their inner function detectors would observe something
wrong and, where worthwhile, go thru a repair routine. Most such
repairs would be to the software: auto-correcting software.

In any case, the I-sense is always the puppet of the shadow; the
waking state persona that it owns is its intermediary (ambassador) to
other people. We are not now addressing whether there is a spiritual
domain that can liberate the public persona and I-sense from the
shadow. But, thus far in our investigation, without such a possibility
there is no path to free will.

Now I point out that my Toward model requires an ability to experience
pain and pleasure, to react to emotion, to feel fear, to love, or at
least to desire. All these "qualia," at least in humans, imply but do
not define consciousness -- which we have been unable to account for
using the machine paradigm.

And importantly, there seems no obvious reason why one cannot, in
principle, design a humanistic android machine which behaves, even in
subtle ways, very much like a human being -- without designing in
consciousness.

Consider that the human pleasure-pain and emotion system is used to
inhibit, promote and build behaviors. If emotion X occurs at level w,
then behavior Y follows. If nerve sensor N is activated, then behavior
M (including perhaps the null behavior) follows. And yet a parallel
system of responses to withdraw, freeze, fight, do nothing can be
designed with no need for pleasure, pain or emotions.
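Such a parallel system can be written down in a few lines. A minimal sketch, with invented sensor names and thresholds; note that no pleasure, pain, or emotion variable appears anywhere.

    # Purely if-then response table: sensor readings in, behavior out.
    def respond(threat_level, damage_risk):
        if damage_risk > 0.8:
            return "withdraw"
        if threat_level > 0.6:
            return "fight"      # act to try to neutralize the object
        if threat_level > 0.3:
            return "freeze"
        return "do nothing"     # the null behavior

    print(respond(threat_level=0.7, damage_risk=0.2))   # -> fight
    print(respond(threat_level=0.7, damage_risk=0.9))   # -> withdraw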

That human responses are wonderfully sophisticated is an attribute of
the hierarchical feedback control system described earlier [IN
LONGHAND]. So whence consciousness?

It is clear that, for humans, consciousness (in the biological sense)
is intimately related to the need to FEEL "OK," to feel better than is
felt now (since bodily needs are always clamoring for satisfaction).
Here enters the drive to wish fulfilment, either symbolically in the
shadow or "actually" (tho this is largely another form of symbolism as
in "games people play") in relation to the  "social ego" (outward
facing persona).

That is to say, qualia and consciousness appear to be equivalent (in
the logic sense).

Yet again, we can pinpoint no way in which consciousness proceeds from
physical activity. If there is such, we will either need a new law of
physics, or we will need to uncover some new subtlety in quantum
physics covered by current law. On this last, however, we must concede
that the "weird" aspects of quantum mechanics all point toward the
idea that consciousness and measurable physical phenomena are
inextricably interwoven. None of the many attempts to escape from this
conclusion enjoys broad consensus.

It is possible that the concepts of consciousness and of quantum physics
have brought us to the upper limits of human knowledge; we know,
courtesy of Kurt Goedel, that such limits exist.

For example, suppose we make the laws of physics axiomatic. Then there
are statements that are proper physical questions that are undecidable
because they cannot, in principle, be derived from axioms.

Can it be that questions regarding the origins of consciousness are,
in principle, undecidable? Perhaps it is too early to say. After all,
we haven't even found a way to posit the concept of consciousness in
the language of physics or, for that matter, in a logic circuit model.
The various proposals to frame that concept in physical language have
gained little traction.

+++++++++

Important note: In the previous discussion of inner dialog, notice
that the I-sense (loose term here) asserts ownership control by role.

====

There is a need, followed by verbalization of the proposed action:

"I" <present> wonder if "I" <short-term future> can slip away now.

[Role of question poser now.]

"I" had better not. "I" need this job.

[Expression of the decision to avoid penalty; role of answerer now.]

("Need job" is reordered after the "no," but actually came first in
the pre-conscious.)

VTX 13

Consider what seems a bit of trivia. The mentalistic system just
discussed, along with all physics, can be conceptually represented as
"f(x) --> Y."

Granted, it may need a conscious observer to read the statement, but:
WHERE IS CONSCIOUSNESS IMPLIED IN THE STATEMENT? If Y represents a
binary decision possibility for physical action, you can make f(x) as
fancy a logic circuit as you wish, such that

(f ∘ g ∘ h ∘ ...)(x),   or

F(x) = f_1(f_2(f_3( ... f_n(x) ... )))

and at no step of the composite function -- including the final
"decision" step -- does consciousness emerge.

p/u VTX 14

K paper A

 XTY1


It is important to understand that there is no need for a robot to
make decisions based on inner feelings, or perceptions of pleasure or
pain. A logic circuit linked to scanners, no matter the hierarchical
complexity of "if-then's," can be designed to scan an animate or
inanimate object and respond to the data by approaching, freezing,
withdrawing or "fighting" (acting to try to neutralize the object)
with no corollary requirements of pleasure, pain, desire, fear,
fighting rage or any such conscious or semiconscious qualia.

There seems to be no method of designing a logic circuit from which
consciousness emerges -- since no logic control system needs to be
aware of its inner feelings.

One can easily imagine a robot that is programed to scan for and seek
a recharge source,

XTY2

to fight barriers put up against it, including other robots (or
humans) vying for access, and to withdraw when the risk of critical
damage is too high.

No consciousness needed.
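A minimal sketch of such a robot, as a bare decision table; states, readings, and thresholds are invented for illustration.

    def step(battery, barrier_present, damage_risk):
        if damage_risk > 0.8:
            return "withdraw"              # critical-damage risk too high
        if battery < 0.2:
            if barrier_present:
                return "fight barrier"     # e.g., another robot vying for access
            return "seek recharge source"
        return "patrol"

    print(step(battery=0.1, barrier_present=True, damage_risk=0.3))
    # -> fight barrier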
++++++++++++++
On sexuality and consciousness.

Even tho binary sexuality comes with high complexity, the previous
argument against a need for consciousness applies.

If we take physics to require a largely deterministic modeling of
nature, then the physical model rests on an if-then structure
(physicists know that that model isn't quite right because of quantum
indeterminacy; biologists in general seem to wave off the
indeterminacy issue). We see from the DNA model of reproduction that
the whole sexual process can be represented with a logic circuit
model. Yes, the model is fabulously complex, but nevertheless, if we
are sticking with the laws of physics as we know them, a logic circuit
model lies at the core.

Now it may be that DNA is in (yet another) Goldilocks zone in which
DNA is perpetually vectored toward replicating its own form (more or
less) and has generated giant phenotypes in order to increase odds of
"immortality." Yet, even if that line of reasoning is acceptable, from
a "logic circuit perspective," that would have nothing to do with the
conscious and semiconscious maneuvers toward sexual union.

There is no bona fide physical link between the physical-biological
theory and conscious sexual strivings. That doesn't mean there is no
link. It means that current physics cannot, in principle, supply the
link.

If a human can be modeled by a logic circuit, then there is no need
for human consciousness. Well, it seems that humans need consciousness.
But do they, from a physical standpoint? Let us say that you are a
conscious reader. Yet an android could read this paper and assimilate
it with no need of consciousness. In fact, an android could simulate
all human behavior and mannerisms -- including such activities as
wooing others -- without being conscious, tho you could well interact
with it as if it is conscious, as do many people when interacting with
chatbots.

[See matter on philosophical zombies.
https://en.wikipedia.org/wiki/Philosophical_zombie]

I am not interested in whether philosophical zombies (the behavioral
kind) are possible. My point is that when only the known laws of
physics are the basis of our model, we end up with a philosophical
zombie. Consciousness simply does not emerge from a logic circuit, no
matter how sophisticated.

K paper F

 DKR1


At one time the analogy was circulating in computer science that the brain is to the mind as the computer hardware is to the software. This analogy still has implicit attraction for many, tho these days the idea is couched in more respectable academic language.

But we can safely say that the parallel is false. First of all, there is no fundamental difference between hardware and software. What we have is a hardware system designed to be "universal," or a general system that can be modified into following specific routines. A software program selects and orders a subset of pre-existent logic gates.

A universal Turing "machine" converts to a specific TM calculation
on the idea that it is possible, conceptually, to choose one computation from the set of all computations. Or: the universal machine uses the standard description (which reduces to an integer) as an input value, and then yields the computation that a separate TM with that same number would yield.
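As a minimal sketch of that idea in Python: the machine description is ordinary input data, so one simulator runs any machine. The three-step sample program (write three 1s and halt) is invented for illustration.

    def run(table, tape, state="q0", head=0, limit=100):
        """Simulate a Turing machine whose transition table is input data."""
        tape = dict(enumerate(tape))
        for _ in range(limit):
            if state == "halt":
                break
            symbol = tape.get(head, "_")
            write, move, state = table[(state, symbol)]
            tape[head] = write
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Sample "standard description": write three 1s, then halt.
    table = {
        ("q0", "_"): ("1", "R", "q1"),
        ("q1", "_"): ("1", "R", "q2"),
        ("q2", "_"): ("1", "R", "halt"),
    }
    print(run(table, "_"))   # -> 111

Since the table can itself be reduced to an integer, the simulator given that number as input yields what the separate machine with that number would yield.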

How would one derive consciousness or qualia from Turing's 1936 paper? Yet any known AI system is directly (if only conceptually) convertible into a Turing program. There is no magic added -- that physics knows of -- whereby electronic circuitry implies consciousness.

The atheist Turing tried to address this issue with his 1938-9 paper on oracles. He, of course, realized there are one or two things human minds do that computation could never achieve. He proposed a physical solution that limited the scale of the non-standard "objects" that would, he held, correct the undecidability problem. But it turns out that his physical oracles are not even conceptually possible, on account of the Heisenberg uncertainty principle and quantum limits more generally (in that period, he would not have been expected to know much about quantum theory).

Anyone who doubts my case should perhaps re-examine his position, based on the fact that any consistent logic circuit (recognizing that programers sometimes devise inconsistent ones that result in "bugs") can be represented thru use of a single logic gate, such as NAND ("not and").

Consider [A --> B] <--> [~A v B] <--> [~(A.~B)]

where the dot represents "and."
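To make the single-gate claim concrete, here is a minimal Python sketch building the standard connectives from NAND alone and checking the equivalence above over all inputs.

    def nand(a, b):
        return not (a and b)

    # Every connective below is built from NAND alone.
    def not_(a):        return nand(a, a)
    def and_(a, b):     return nand(nand(a, b), nand(a, b))
    def or_(a, b):      return nand(nand(a, a), nand(b, b))
    def implies(a, b):  return nand(a, nand(b, b))   # A --> B == ~(A.~B)

    # Exhaustive check of the equivalence over the whole truth table.
    for a in (False, True):
        for b in (False, True):
            assert implies(a, b) == ((not a) or b)
    print("A --> B == ~(A.~B) verified for all inputs")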

By this, we see that a logic circuit may be represented as a logical product, with the input values equivalent to an axiom set.

Π (i = 1 to n) ~(x.~y)_i

The output value is then reached at step n.

A basic hierarchical structure is implicit in the essential recursion present. Other hierarchical structures can be found by, say,

Π (j = a to m) ~(x.~y)_j, where m < n

Further, of course, we can have more than two consecutive subsets of products, obtaining any level of "complexity."

So for any step 1 to step 2, no qualia are implied. And that holds for any step m to step m+1. But to reach a hierarchical subset, one first has to proceed one step at a time. Hence, no clearcut implication of qualia.

As most of us agree that consciousness, or the qualia that constitute it, is a human reality, we are again forced to the conclusion that any physics of consciousness is well beyond current conceptions of physical law. After all, at present the scientific method assumes deterministic logic, or largely deterministic logic. (The issue of tautology v. contingency is a side issue which is not relevant here.)

+++++++++
TCJ1

It seems quite reasonable to view a theory of physics as a coherent means of description that makes, or attempts to make, accurate predictions. A number of assumptions (whether construed as empirical or not) are, in terms of the theory, axioms or postulates. (Witness Newton and Einstein, for example.)

We see immediately that we have a logical structure underneath the theory, as in:

"If A and B occur close together in time, then C must follow," conceding that each letter can itself represent more than one letter. So we have ~(A.~B) --> C
or ~[(~A.B).C]

We are then free to use the product notation above whereby, interestingly, we have proved that a physical theory is equivalent to a logic circuit, or Turing machine. In fact, all logic circuits, or Turing machines, are the same except for the input value(s). Any consistent physical theory is the same as any other, except for the input values. (This is why Goedel's incompleteness proof focused only on the (conceptual) axioms of number theory.)

We recognize that for a rigorous proof of a scientific claim, we simply use the Deduction Theorem (reverse the progression of the logical product), while accepting that the deduction theorem cannot be used for a set of undecidable physical problems, as we know full well that no modern physical theory can do without number theory (answer to Hilbert question No. 6).

Again we repeat: Any modern theory of physics can be modeled as a logical product, which in turn can be used to represent any logic circuit (or Turing machine). Hence, any purely physical theory of brain functions must be representable as a logic circuit. The many sensory detectors are to be represented as an axiom set, whereby one decision can follow a specific axiom subset.

A train of axiomatic signals arrives "close" in time (a low number of time steps apart) and feeds a next-stage decision node. In turn, this next-stage node may have a time lag, such that other axiom signals hit another next-stage node. These nodes can then be sufficient for a decision. Sometimes a node receives a stream of signals. The stream can be sampled upstream from the decision node and periodically fed back to the decision node as one of the criteria used in the decision-making function.
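A minimal sketch of such a node in Python; the window, threshold, feedback weight, and signal stream are all invented for illustration.

    from collections import deque

    WINDOW = 3         # time steps that count as "close"
    THRESHOLD = 2.0    # weighted signal total needed to decide
    SAMPLE_EVERY = 4   # feed back an upstream sample every few steps

    recent = deque(maxlen=WINDOW)
    feedback = 0.0

    stream = [1, 0, 0, 1, 1, 0, 1, 1]   # incoming axiom signals
    for t, signal in enumerate(stream):
        recent.append(signal)
        if t % SAMPLE_EVERY == 0:
            # Sample the stream upstream of the decision node.
            feedback = sum(stream[: t + 1]) / (t + 1)
        # The fed-back sample is one more criterion in the decision.
        if sum(recent) + 0.5 * feedback >= THRESHOLD:
            print(f"t={t}: decision node fires (feedback={feedback:.2f})")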

https://photos.google.com/photo/AF1QipMd2CPiHdp5xcC9WvnKYjXFtnMEwAt7Kt-o7zsD

photo of exponential graph

The above picture gives a snapshot at an instant of time, rather than representing the full process. Nevertheless, it shows the simplicity underlying ALL strictly computational complexity.

This is all standard computer science. But the point is that complexity is, in principle, not all that complex. The rubric of "complex" stems from the fact that such systems tend to encompass major areas of unpredictability. But it does not follow that, because complex systems have large swaths of unpredictability, the emergence of the wild card of consciousness is a statistically reasonable idea. Probability is of no use when other considerations militate against such a conclusion.

photo for elsewhere in K notes

IMG_20241222_162500116.jpg

References to pick up:

Dynamic brain connectivity distinguishes conscious and unconscious states

The transcendental ego in Kantian philosophy

Does `Consciousness' Exist? : James, William : Internet Archive

Note (Sat, Nov 30, 2024), on the James paper above:
James says nothing here about neutral monism, tho he did at some point espouse that notion (see theta blog); thus, like Russell, he shifted from pluralism to monism.

  Paul Conant   < krypto784@gmail.com > 3:31 PM (0 minutes ago) to  me ,  U.R. VTX 1-- In the interior "I mode," many, but n...