Tuesday, July 29, 2025

Goin' up the country (Christian version)

I'm going up the country,
friend, don't you want to go?
Going up the country,
friend, don't you want to go?
I'm going to a place where I've never been before

I'm going, I'm going
where the water tastes like wine
Going where the water tastes like wine
We can jump in the water
Stay happy all the time

I'm going to leave this city,
I've been called away
I'm going to leave this city,
I've been called away
All this feudin' and fightin', Lord,
why do I need to stay?

So pack your bags, friend,
and get ready to leave today
Just exactly where,
it's kind of hard to say
But we might even leave
the good ole USA

Yes there's a brand new game
that I really want to play

Don't say no, don't end up hollerin' and cryin'
Let's go to that home
where the livin' is so fine
Universal Music Group owns the copyright to the Canned Heat lyrics. That copyright probably extends to my amended lyrics.

Friday, July 11, 2025


I met her out in Oklahoma
Down where the old Red River flows
I vowed my love to her forever
She was my sweet, sweet Rosie Jones

We walked alone down by the river
Just as the sun was sinking low
And in her eyes I saw big trouble
Like the muddy waters down below

Her lips were soft and sweet as honey
Her hair was bright as yellow gold
Her cheeks were red as summer roses
She was my sweet, sweet Rosie Jones

And then one day a tall dark stranger
With hair as black as winter coal
Rode into town as night was falling
And there he met my Rosie Jones

I woke next morning just after sunup
To find a note from my Rosie's hand
And it read: "I'd rather die than ever hurt you
But I'm in love with that tall dark man"

So now I walk alone down by the river
Where my sweet Rosie used to stroll
And soon I'm gonna join those deep dark waters
For I can't live without Rosie Jones

Her lips were soft and sweet as honey
Her hair was bright as yellow gold
Her cheeks were red as summer roses
She was my sweet, sweet Rosie Jones

Friday, March 14, 2025

https://youtube.com/playlist?list=PLk9NPTlyPj_eyBcjOXOicEEYuNWeADcHw&si=YSO1qtmtcHkQcx52

Friday, December 27, 2024

K paper B

 

VTX 1--

In the interior "I mode," many, but not all, persons talk to
themselves, usually silently, tho a few whisper or even speak aloud.

What is going on? What does it mean to say that the self sends a
verbal message to the self? What is speaking and what is listening?
And what of persons who don't talk verbally, with their "streams of
consciousness" being impressionistic feelings, desires, emotions? Of
course, the talkers have these qualia as part of their stream also,
but tend to focus on the verbal part of the signal or, anyway, data
stream.

Let's ask what it might mean for an android to be in such a situation.
It's really not that hard a problem. An android is sampling an output
signal high in its hierarchy in order to modify that stream and/or
other output streams prior to an action decision. For a highly complex
system, that "decision" may take the form of a prioritization
projected well into the future (matrix algebra can help here).
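
As a hedged illustration of that parenthetical point, here is a
minimal sketch of how matrix algebra could rank candidate actions
against current needs; every action name, need dimension and weight
below is invented for illustration, not part of the paper.

```python
import numpy as np

# Toy sketch: candidate actions are scored against current needs by
# a single matrix-vector product. All names and numbers are invented.

actions = ["recharge", "continue task", "withdraw"]

# Current urgency of the needs [energy, duty, safety].
needs = np.array([0.9, 0.4, 0.1])

# Row i gives how strongly action i serves each need.
W = np.array([
    [1.0, 0.0, 0.2],   # recharging mostly serves the energy need
    [0.0, 1.0, 0.0],   # continuing the task serves duty
    [0.1, 0.0, 1.0],   # withdrawing serves safety
])

priorities = W @ needs                      # projected priorities
print(actions[int(np.argmax(priorities))])  # -> recharge
```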

Now in a sample case, a human's self-talk follows a question-answer format.

Q. If I do X, what is likely to happen?
A. Y is what experience leads me to expect.
R (Reflection). But Y is not satisfactory.
(Try again)
Q. What if I do W?

... and so on.
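
A minimal sketch of that question-answer loop, assuming a toy memory
of if-then pairs with satisfaction scores; the table, scores and
threshold are all invented stand-ins, not the paper's own notation.

```python
# Toy Q-A-Reflection loop: scan remembered if-then pairs, reject
# unsatisfactory expected outcomes, and try the next candidate.

memory = {
    "do X": ("Y happens", 0.2),   # experience says X leads to Y
    "do W": ("Z happens", 0.8),
}

def deliberate(threshold=0.5):
    for action, (outcome, score) in memory.items():
        # Q. If I do this, what is likely to happen?
        # A. The outcome experience leads me to expect.
        # R. Is that outcome satisfactory?
        if score >= threshold:
            return action, outcome
        # (Try again with the next candidate.)
    return None, None

print(deliberate())   # -> ('do W', 'Z happens')
```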

That is, the persona is scanning memory for possibilities of the
if-then variety. It takes the possibilities brought up by
associationist relations and puts them into roughly linear, logical
if-then form. It matches the questions with potential answers on a
ranking system (as indicated in Toward).

One may say that the persona builds relations, using if-then as the
relation: x R y, where x ∈ X, y ∈ Y, and R ⊆ X × Y. (Find paper on
language and relations.)

What might be called the thruput, or verbal stream, assists in the
process of seeking matching elements from both the left and right
sides of the if-then relation.
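
Concretely (and purely as an invented illustration), the if-then
relation can be treated as a set of ordered pairs, with the verbal
stream supplying candidate elements from each side; the elements here
echo the worked dialog later in the paper.

```python
# The if-then relation R as a set of (x, y) pairs drawn from X and Y.
X = {"go home early", "stay at work"}
Y = {"backlog grows", "run into Mary"}
R = {("go home early", "run into Mary"),
     ("stay at work", "backlog grows")}

def right_matches(x):
    """Scan the relation for consequences matching a proposed action."""
    return [y for (x2, y) in R if x2 == x]

print(right_matches("go home early"))   # -> ['run into Mary']
```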

see photo VTX3

Now what about a verbal stream that doesn't follow X --> Y ? Such
thinking is slightly more abstract in the sense that the
emotion-driven shadow (an abbreviation) is continually seeking
satisfaction and continually triggering verbal streams that are
equivalent to X --> Y.  The scanner of the stream may not find a
"serious" match in Y but may help in concocting a transient Y that
briefly matches X for a mild bit of elation. Such a mechanism is very
delicate and subject to much variation, partly accounting for
psychological disturbances, tho most of those are rooted in the shadow
and not in the verbal system.

Now a question is: Where is the "I sense" located in this scheme?

Well, consider this internal dialog:

Proposed action (really a question):
   I think I will go home early, as I am rather tired.

Reflection A:
   But if I do so, this terrible backlog of work will get even worse.
So I really ought to stay.

Reflection B:
   But if I go now, there's a good chance I'll run into Mary at Joe's
Inn. She gets off at 4.

Decision:
   Hang it all, I'm on my way to Joe's! I'm outta here!

Lovelife trumps both fatigue and "responsibility."

{Better symbolization would be helpful.}

The shadow's conflicting needs -- rest, approval of self or others
(work ethic), love -- all feed into the decision process. The
conscious "I" owns the feelings that express the needs (tho not
necessarily), the verbal if-then analysis, and the top-ranked
potential choice, along with the decision prior to action.*
_______________
* In brain injury or strong psychological disturbance or trances of
various kinds, including sleep states, the I-sense can deny (or be
made to deny) ownership of various functions for which it usually
accepts responsibility.
_______________
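
The Joe's Inn dialog, reduced to a toy ranking; the numeric weights
are invented, chosen only so that lovelife outranks both fatigue and
the work ethic, as in the text.

```python
# Each proposed action serves a need of the shadow; the weights are
# invented stand-ins for the shadow's current urgencies.
options = {
    "go home early and rest":  ("rest", 0.5),
    "stay and work backlog":   ("work ethic", 0.6),
    "head to Joe's Inn":       ("love", 0.9),
}

decision = max(options, key=lambda k: options[k][1])
print(decision)   # -> head to Joe's Inn
```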

There is no obvious need for an I-sense. A computer can follow the
foregoing algorithm and make comparable decisions, which are subject
to de-prioritizing based on needs resulting from emergent data,
external or internal. A computer glitch that slows down processing can
be seen as analogous to the effects of a headache.

Observe that in the dream state, the I-sense tends to remain in the
present tense. The shadow's needs, in symbolic (moving) picture
language, are quite directly expressed, and it often tries to "feel
better" with a mentally fulfilled wish, much like a tot who spanks a
doll (a safer bet than hitting Daddy), or who cradles the doll either
because she wants to be more like Mommy or because she craves Mommy's
attention.

The wish fulfilment aspect of mental life fits into the general
mammalian method of tuning up bodily and mental skills thru play. Is
it possible that androids would need to play? Well, one can certainly
see the possibility of shakedown periods in which they teach each
other skills, much as some chess engines can match wits against each
other in order to improve their play. But they would need no desire to
"feel better." Their inner function detectors would observe something
wrong and, where worthwhile, go thru a repair routine. Most such
repairs would be to the software: auto-correcting software.

In any case, the I-sense is always the puppet of the shadow; the
waking state persona that it owns is its intermediary (ambassador) to
other people. We are not now addressing whether there is a spiritual
domain that can liberate the public persona and I-sense from the
shadow. But, thus far in our investigation, without such a possibility
there is no path to free will.

Now I point out that my Toward model requires an ability to experience
pain and pleasure, to react to emotion, to feel fear, to love, or at
least to desire. All these "qualia," at least in humans, imply but do
not define consciousness -- which we have been unable to account for
using the machine paradigm.

And, importantly, there seems to be no obvious reason why one cannot,
in principle, design a humanistic android machine which behaves, even
in subtle ways, very much like a human being -- without designing in
consciousness.

Consider that the human pleasure-pain and emotion system is used to
inhibit, promote and build behaviors. If emotion X occurs at level w,
then behavior Y follows. If nerve sensor N is activated, then behavior
M (including perhaps the null behavior) follows. And yet a parallel
system of responses (withdraw, freeze, fight, do nothing) can be
designed with no need for pleasure, pain or emotions.
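
A minimal sketch of such a parallel system: responses keyed purely on
sensor values, with no pleasure, pain or emotion variable anywhere in
the program. The thresholds are invented.

```python
def respond(threat: float, proximity: float) -> str:
    """Map raw sensor readings straight to behavior; no qualia terms."""
    if threat > 0.8:
        return "withdraw"
    if threat > 0.5 and proximity < 0.3:
        return "fight"     # act to neutralize the object
    if threat > 0.5:
        return "freeze"
    return "do nothing"    # the null behavior

print(respond(threat=0.6, proximity=0.2))   # -> fight
```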

That human responses are wonderfully sophisticated is an attribute of
the hierarchical feedback control system described earlier [IN
LONGHAND]. So whence consciousness?

It is clear that, for humans, consciousness (in the biological sense)
is intimately related to the need to FEEL "OK," to feel better than is
felt now (since bodily needs are always clamoring for satisfaction).
Here enters the drive to wish fulfilment, either symbolically in the
shadow or "actually" (tho this is largely another form of symbolism as
in "games people play") in relation to the  "social ego" (outward
facing persona).

That is to say, qualia and consciousness appear to be equivalent (in
the logic sense).

Yet again, we can pinpoint no way in which consciousness proceeds from
physical activity. If there is such, we will either need a new law of
physics, or we will need to uncover some new subtlety in quantum
physics covered by current law. On this last, however, we must concede
that the "weird" aspects of quantum mechanics all point toward the
idea that consciousness and measurable physical phenomena are
inextricably interwoven. None of the many attempts to escape from this
conclusion enjoys broad consensus.

It is possible that concepts in consciousness studies and in quantum
physics have brought us to the upper limits of human knowledge; we
know, courtesy of Kurt Goedel, that such limits exist.

For example, suppose we make the laws of physics axiomatic. Then there
are statements that are proper physical questions but are undecidable
because they cannot, in principle, be derived from the axioms.

Can it be that questions regarding the origins of consciousness are,
in principle, undecidable? Perhaps it is too early to say. After all,
we haven't even found a way to posit the concept of consciousness in
the language of physics or, for that matter, in a logic circuit model.
The various proposals to frame that concept in physical language have
gained little traction.

+++++++++

Important note: In the previous discussion of inner dialog, notice
that the I-sense (loose term here) asserts ownership control by role.

====

There is a need, followed by verbalization of a proposed action:

"I" <present> wonder if "I" <short-term future> can slip away now.

   [Role now: question poser]

"I" had better not. "I" need this job.

   [Role now: answerer; expression of a decision to avoid penalty]

Note that "need job" is verbalized after the "no," but it actually
came first in the pre-conscious.

VTX 13

Consider what seems a bit of trivia. The mentalistic system just
discussed, along with all physics, can be conceptually represented as
"f(x) --> Y."

Granted, it may need a conscious observer to read the statement, but:
WHERE IS CONSCIOUSNESS IMPLIED IN THE STATEMENT? If Y represents a
binary decision possibility for physical action, you can make f(x) as
fancy a logic circuit as you wish, such that

(f o g o h ...)(x), or, chaining n stages,

f_1(f_2(f_3( ... f_n(x) ... )))

and at no step of the composite function -- including the final
"decision" step -- does consciousness emerge.

p/u VTX 14

K paper A

 XTY1


It is important to understand that there is no need for a robot to
make decisions based on inner feelings, or perceptions of pleasure or
pain. A logic circuit linked to scanners, no matter the hierarchical
complexity of its if-thens, can be designed to scan an animate or
inanimate object and respond to the data by approaching, freezing,
withdrawing or "fighting" (acting to try to neutralize the object),
with no corollary requirements of pleasure, pain, desire, fear,
fighting rage or any such conscious or semiconscious qualia.

There seems to be no method of designing a logic circuit from which
consciousness emerges -- since no logic control system needs to be
aware of its inner feelings.

One can easily imagine a robot that is programmed to scan for and seek
a recharge source,

XTY2

to fight barriers put up against it, including other robots (or
humans) vying for access. And to withdraw when risk of critical damage
is too high.

No consciousness needed.
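
That recharge-seeking robot, sketched as a bare policy function; the
thresholds are invented, and the point is only that nothing in the
program refers to feeling anything.

```python
def recharge_policy(battery: float, barrier: bool, risk: float) -> str:
    """Scan-and-act policy for the recharge scenario; no feelings."""
    if risk > 0.7:
        return "withdraw"        # risk of critical damage too high
    if battery < 0.2 and barrier:
        return "fight barrier"   # neutralize whatever blocks access
    if battery < 0.2:
        return "approach charger"
    return "carry on"

print(recharge_policy(battery=0.1, barrier=True, risk=0.3))
# -> fight barrier
```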
++++++++++++++
On sexuality and consciousness.

Even tho binary sexuality comes with high complexity, the previous
argument against a need for consciousness applies.

If we take physics to require a largely deterministic modeling of
nature, then the physical model rests on an if-then structure
(physicists know that that model isn't quite right because of quantum
indeterminacy; biologists in general seem to wave off the
indeterminacy issue). We see from the DNA model of reproduction that
the whole sexual process can be captured with a logic circuit model.
Yes, the model is fabulously complex, but nevertheless, if we are
sticking with the laws of physics as we know them, a logic circuit
model lies at the core.

Now it may be that DNA is in (yet another) Goldilocks zone in which
DNA is perpetually vectored toward replicating its own form (more or
less) and has generated giant phenotypes in order to increase odds of
"immortality." Yet, even if that line of reasoning is acceptable, from
a "logic circuit perspective," that would have nothing to do with the
conscious and semiconscious maneuvers toward sexual union.

There is no bona fide physical link between the physical-biological
theory and conscious sexual strivings. That doesn't mean there is no
link. It means that current physics cannot, in principle, supply the
link.

If a human can be modeled by a logic circuit, then there is no need
for human consciousness. Well, it seems that humans need
consciousness. But do they, from a physical standpoint? Let us say
that you are a conscious reader. Yet an android could read this paper
and assimilate it with no need of consciousness. In fact an android
could simulate all human behavior and mannerisms -- including such
activities as wooing others -- without being conscious, tho you could
well interact with it as if it were conscious, as many people do when
interacting with chatbots.

[See matter on philosophical zombies.
https://en.wikipedia.org/wiki/Philosophical_zombie]

I am not interested in whether philosophical zombies (the behavioral
kind) are possible. My point is that when only the known laws of
physics are the basis of our model, we end up with a philosophical
zombie. Consciousness simply does not emerge from a logic circuit, no
matter how sophisticated.

Blowin' Down This Road
(I Ain't A-Gonna Be Treated This Way)

[Short instrumental intro]

I'm blowin' down this old dusty road,
I'm a-blowin' down this old dusty road, I'm ...