VTX 1--
In the interior "I mode," many, but not all, persons talk to
themselves, usually silently, tho a few whisper or even speak aloud.
What is going on? What does it mean to say that the self sends a
verbal message to the self? What is speaking and what is listening?
And what of persons who don't talk verbally, with their "streams of
consciousness" being impressionistic feelings, desires, emotions? Of
course, the talkers have these qualia as part of their stream also,
but tend to focus on the verbal part of the signal or, anyway, data
stream.
Let's ask what it might mean for an android to have such a situation.
It's really not that hard a problem. An android is sampling an output
signal high in its hierarchy in order to modify that stream and/or
other output streams prior to an action decision. For a highly complex
system, that "decision" may be in the form of a prioritization
projected well into the future (matrix algebra can help here).
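To make that concrete, here is a minimal sketch (Python with numpy; the payoff numbers, discount weights, and stream names are invented placeholders, not taken from anything above): candidate output streams are rows of a matrix, projected time steps are columns, and one matrix-vector product yields the prioritization.

```python
import numpy as np

# Hypothetical payoff matrix: rows are candidate action streams, columns
# are projected future time steps. Every number here is invented.
projected_payoffs = np.array([
    [0.2, 0.5, 0.1],   # action stream A
    [0.7, 0.1, 0.3],   # action stream B
    [0.4, 0.4, 0.4],   # action stream C
])

# Discount later steps so nearer consequences weigh more heavily.
discounts = np.array([1.0, 0.7, 0.5])

# One matrix-vector product gives a priority score per action stream.
priorities = projected_payoffs @ discounts

# The "decision" is a prioritization: an ordering of candidate output
# streams, best first, subject to revision as new data arrive.
ranking = np.argsort(priorities)[::-1]
print("priority order of action streams:", ranking)
```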
Now in a sample case, a human's self-talk follows a question-answer format.
Q. If I do X, what is likely to happen?
A. Y is what experience leads me to expect.
R (Reflection). But Y is not satisfactory.
(Try again)
Q. What if I do W?
... and so on.
That is, the persona is scanning memory for possibilities of the
if-then variety. It takes the possibilities brought up by
associationist relations and puts them into roughly linear logical
if-then form. It matches the questions with potential answers on a
ranking system (as indicated in Toward).
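A minimal sketch of that question-answer loop, assuming a toy associative memory and an arbitrary satisfaction threshold (the actions, outcomes, and scores below are all invented for illustration):

```python
# Hypothetical if-then memory: action -> (expected outcome, rank score).
memory = {
    "do X": ("Y happens", 0.2),
    "do V": ("nothing happens", 0.5),
    "do W": ("Z happens", 0.8),
}

SATISFACTION_THRESHOLD = 0.6  # arbitrary cutoff for "satisfactory"

def self_talk(candidate_actions):
    """Q-A-Reflection loop: pose an action, recall the expected outcome,
    reflect on whether it satisfies, and try again if it does not."""
    best = None
    for action in candidate_actions:
        outcome, score = memory[action]
        print(f"Q. If I {action}, what is likely to happen?")
        print(f"A. {outcome} (ranked {score}).")
        if score >= SATISFACTION_THRESHOLD:
            print("R. That is satisfactory.")
            return action
        print("R. Not satisfactory. (Try again.)")
        if best is None or score > best[1]:
            best = (action, score)
    return best[0]  # nothing cleared the bar; take the top-ranked option

print("chosen:", self_talk(["do X", "do V", "do W"]))
```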
One may say that the persona builds relations, using if-then as the
relation: x R y, where x ∈ X, y ∈ Y and R ⊆ X × Y. (Find paper on language and
relations.)
What might be called the thruput, or verbal stream, assists in the
process of seeking matching elements from both the left and right
sides of the if-then relation.
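As a sketch, such a relation can be held as a set of (condition, consequence) pairs and scanned from either side; the pairs below are invented placeholders:

```python
# The if-then relation R as a subset of X x Y: a set of
# (condition, consequence) pairs. Entries are invented placeholders.
R = {
    ("go home early", "backlog worsens"),
    ("stay late", "backlog shrinks"),
    ("go home early", "meet Mary at Joe's"),
}

def consequences_of(x):
    """Scan from the left side: given a condition, which consequences
    has experience associated with it?"""
    return {y for (xx, y) in R if xx == x}

def conditions_for(y):
    """Scan from the right side: which conditions have led to this
    outcome?"""
    return {xx for (xx, yy) in R if yy == y}

print(consequences_of("go home early"))
print(conditions_for("backlog shrinks"))
```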
see photo VTX3
Now what about a verbal stream that doesn't follow X --> Y? Such
thinking is slightly more abstract in the sense that the
emotion-driven shadow (an abbreviation) is continually seeking
satisfaction and continually triggering verbal streams that are
equivalent to X --> Y. The scanner of the stream may not find a
"serious" match in Y but may help in concocting a transient Y that
briefly matches X for a mild bit of elation. Such a mechanism is very
delicate and subject to much variation, partly accounting for
psychological disturbances, tho most of those are rooted in the shadow
and not in the verbal system.
Now a question is: Where is the "I-sense" located in this scheme?
Well, consider this internal dialog:
Proposed action (really a question):
I think I will go home early, as I am rather tired.
Reflection A:
But if I do so, this terrible backlog of work will get even worse.
So I really ought to stay.
Reflection B:
But if I go now, there's a good chance I'll run into Mary at Joe's
Inn. She gets off at 4.
Decision:
Hang it all, I'm on my way to Joe's! I'm outta here!
Love life trumps both fatigue and "responsibility."
{Better symbolization would be helpful.}
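One candidate symbolization, sketched as a weighted-needs comparison (the weights and scores are invented placeholders; presumably the shadow sets them, and they shift from moment to moment):

```python
# Competing needs with shadow-assigned weights (invented numbers).
needs = {"rest": 0.5, "work ethic": 0.4, "love": 0.9}

# How much of each need every option is expected to satisfy (invented).
options = {
    "go home":     {"rest": 1.0, "work ethic": 0.0, "love": 0.0},
    "stay":        {"rest": 0.0, "work ethic": 1.0, "love": 0.0},
    "go to Joe's": {"rest": 0.3, "work ethic": 0.0, "love": 1.0},
}

def utility(option):
    satisfied = options[option]
    return sum(needs[n] * satisfied.get(n, 0.0) for n in needs)

# The top-ranked potential choice becomes the decision prior to action.
decision = max(options, key=utility)
print(decision, {o: round(utility(o), 2) for o in options})
```

With these particular weights the love need dominates, so "go to Joe's" wins, matching the dialog above.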
The shadow's conflicting needs -- rest, approval of self or others
(work ethic), love -- all feed into the decision process. The
conscious "I" owns the feelings that express the needs (tho not
necessarily), the verbal if-then analysis, and the top-ranked
potential choice, along with the decision prior to action.*
_______________
* In brain injury or strong psychological disturbance or trances of
various kinds, including sleep states, the I-sense can deny (or be
made to deny) ownership of various functions for which it usually
accepts responsibility.
_______________
There is no obvious need for an I-sense. A computer can follow the
foregoing algorithm and make comparable decisions, which are subject
to de-prioritizing based on needs resulting from emergent data,
external or internal. A computer glitch that slows down processing can
be seen as analogous to the effects of a headache.
Observe that in the dream state, the I-sense tends to remain in the
present tense. The shadow's needs, in symbolic (moving) picture
language, are quite directly expressed, and it often tries to "feel
better" with a mentally fulfilled wish, much like a tot who spanks a
doll, a safer bet than hitting Daddy. Or who cradles the doll either
because she wants to be more like Mommy or because she craves Mommy's
attention.
The wish fulfilment aspect of mental life fits into the general
mammalian method of tuning up bodily and mental skills thru play. Is
it possible that androids would need to play? Well, one can certainly
see the possibility of shakedown periods in which they teach each
other skills, much as some chess engines can match wits against each
other in order to improve their play. But they would need no desire to
"feel better." Their inner function detectors would observe something
wrong and, where worthwhile, go thru a repair routine. Most such
repairs would be to the software: auto-correcting software.
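A minimal sketch of such a detect-and-repair cycle, with invented check and repair names standing in for whatever an android's inner function detectors would actually monitor:

```python
# Invented self-checks: each returns True if the subsystem is healthy.
def check_motor_calibration():
    return True   # stub: report healthy

def check_memory_index():
    return False  # stub: report a fault

def rebuild_memory_index():
    print("rebuilding memory index...")  # software auto-correction

# Map each detector to its repair routine, if one is worthwhile.
checks = {
    check_motor_calibration: None,             # no repair defined
    check_memory_index: rebuild_memory_index,  # auto-correcting software
}

def shakedown():
    for check, repair in checks.items():
        if not check():
            if repair is not None:
                repair()  # something wrong and worth fixing: repair it
            else:
                print(f"fault in {check.__name__}: flag for review")

shakedown()
```

No desire to "feel better" appears anywhere; the loop only detects and corrects.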
In any case, the I-sense is always the puppet of the shadow; the
waking state persona that it owns is its intermediary (ambassador) to
other people. We are not now addressing whether there is a spiritual
domain that can liberate the public persona and I-sense from the
shadow. But, thus far in our investigation, without such a possibility
there is no path to free will.
Now I point out that my Toward model requires an ability to experience
pain and pleasure, to react to emotion, to feel fear, to love, or at
least to desire. All these "qualia," at least in humans, imply but do
not define consciousness -- which we have been unable to account for
using the machine paradigm.
And, importantly, there seems to be no obvious reason why one cannot, in
principle, design a humanistic android machine which behaves, even in
subtle ways, very much like a human being -- without designing in
consciousness.
Consider that the human pleasure-pain and emotion system is used to
inhibit, promote and build behaviors. If emotion X occurs at level w,
then behavior Y follows. If nerve sensor N is activated, then behavior
M (including perhaps the null behavior) follows. And yet a parallel
system of responses (withdraw, freeze, fight, do nothing) can be
designed with no need for pleasure, pain, or emotions.
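A sketch of such an affect-free response table, with invented sensor names and thresholds; nothing in it refers to how anything feels:

```python
# Threshold rules mapping sensor readings straight to behaviors, with no
# pleasure, pain, or emotion in the loop. Names and numbers are invented.
RULES = [
    ("proximity", 0.9, "withdraw"),
    ("proximity", 0.6, "freeze"),
    ("contact",   0.5, "fight"),
]

def respond(sensor, level):
    for name, threshold, behavior in RULES:
        if sensor == name and level >= threshold:
            return behavior
    return None  # the null behavior: do nothing

print(respond("proximity", 0.95))  # withdraw
print(respond("contact", 0.2))     # None -> do nothing
```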
That human responses are wonderfully sophisticated is an attribute of
the hierarchical feedback control system described earlier [IN
LONGHAND]. So whence consciousness?
It is clear that, for humans, consciousness (in the biological sense)
is intimately related to the need to FEEL "OK," to feel better than is
felt now (since bodily needs are always clamoring for satisfaction).
Here enters the drive to wish fulfilment, either symbolically in the
shadow or "actually" (tho this is largely another form of symbolism as
in "games people play") in relation to the "social ego" (outward
facing persona).
That is to say, qualia and consciousness appear to be equivalent (in
the logic sense).
Yet again, we can pinpoint no way in which consciousness proceeds from
physical activity. If there is such, we will either need a new law of
physics, or we will need to uncover some new subtlety in quantum
physics covered by current law. On this last, however, we must concede
that the "weird" aspects of quantum mechanics all point toward the
idea that consciousness and measurable physical phenomena are
inextricably interwoven. None of the many attempts to escape from this
conclusion enjoys broad consensus.
It is possible that the concepts of consciousness and of quantum physics
have brought us to the upper limits of human knowledge; we know,
courtesy of Kurt Gödel, that such limits exist.
For example, suppose we make the laws of physics axiomatic. Then there
are statements that are proper physical questions that are undecidable
because neither they nor their negations can, in principle, be derived
from the axioms.
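In standard notation, and assuming the axiomatized physics T is consistent, recursively axiomatizable, and rich enough to encode arithmetic, Gödel's first incompleteness theorem gives:

```latex
% Some sentence phi in T's language is undecidable in T: the theory
% proves neither phi nor its negation.
\exists \varphi \;\bigl(\, T \nvdash \varphi \quad\text{and}\quad T \nvdash \neg\varphi \,\bigr)
```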
Can it be that questions regarding the origins of consciousness are,
in principle, undecidable? Perhaps it is too early to say. After all,
we haven't even found a way to posit the concept of consciousness in
the language of physics or, for that matter, in a logic-circuit model.
The various proposals to frame that concept in physical language have
gained little traction.
+++++++++
Important note: In the previous discussion of inner dialog, notice
that the I-sense (a loose term here) asserts ownership and control by role.
====
There is a need, followed by verbalization of the proposed action:
   "I" <present> wonder if "I" <short-term future> can slip away now.
      [role of question poser]
   "I" had better not. "I" need this job.
      [expression of the decision to avoid penalty; role of answerer]
Note that "need this job" is verbalized after the "had better not," but
actually came first in the pre-conscious.
VTX 13
Consider what seems a bit of trivia. The mentalistic system just
discussed, along with all physics, can be conceptually represented as
"f(x) --> Y."
Granted, it may need a conscious observer to read the statement, but:
WHERE IS CONSCIOUSNESS IMPLIED IN THE STATEMENT? If Y represents a
binary decision possibility for physical action, you can make f(x) as
fancy a logic circuit as you wish, such that
(f o g o h o ...)(x), or
F(x) = f_1(f_2(f_3( ... f_n(x) ... )))
and at no step of the composite function -- including the final
"decision" step -- does consciousness emerge.
p/u VTX 14