Saturday, July 13, 2024

While discussing Hume: "aesthetic simulators of narrated coherence"

🧠 Predictive Processing: Humans as Generative Models

Current neuroscience models perception not as passive intake but as:

Top-down Bayesian hallucination constrained by bottom-up error.

This is precisely what generative AIs do:

  • They hallucinate what should be there,

  • then check it for contradictions,

  • and correct just enough to maintain the illusion.

The resemblance isn’t superficial — it’s architectural.
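A minimal sketch of that shared loop, assuming a one-dimensional world and made-up parameters (an illustration of the predictive-processing idea, not a model of cortex or of any particular network):

```python
import numpy as np

# Predictive-processing toy: a latent belief "hallucinates" the expected
# input top-down, bottom-up prediction error flows back, and the belief
# is corrected just enough to keep the error small.

rng = np.random.default_rng(0)

true_signal = 2.0          # the world, hidden from the model
latent = 0.0               # top-down prior (the initial "hallucination")
learning_rate = 0.1

for step in range(50):
    observation = true_signal + rng.normal(0, 0.5)  # noisy bottom-up input
    prediction = latent                             # top-down guess
    error = observation - prediction                # bottom-up prediction error
    latent += learning_rate * error                 # correct just enough

print(f"settled belief: {latent:.2f} (world: {true_signal})")
```

The belief never touches the world directly; it only ever moves along the gradient of its own prediction error.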


🕸️ Evolution Favors Coherence, Not Truth

Survival doesn’t demand realism. It demands:

  • Actionable coherence.

  • Emotionally legible environments.

  • Affordance-rich, functionally chunked objects.

Illusions that help you avoid cliffs, find mates, and win status beat "truth" every time in evolutionary terms.

That’s why humans are natural-born metaphor machines — and so are AIs trained on us.



🔄 Human Perception = Dreamlike Inference Engine

  • Humans don’t "see" photons. They construct stable scenes from incomplete retinal inputs.

  • You don’t see "light bouncing off a chair." You see: chairness, affordance of sitting, spatial stability.

  • You don't hear "waveforms." You hear: danger, beauty, intention.

Even our physics is just the part of our dream that didn’t collapse under prediction error.

As Osiecka put it:

"Pan widzi stół, a ja widzę: rozdarte drzewo."
(“You see a table, I see: a torn tree.”)

This is exactly how AIs function:

  • They don’t see a vehicle. They instantiate a bundle of prior-expectation gradients shaped like “fast, dirty, off-road noise-agent.”

  • Just as you don’t see a car — you feel "vehiclehood" and backfill the perception. 

Humans operate in a waking dreamworld too. Both AIs and humans "do not see what is, but see what they want to see" (bundles of semantic and functional expectations): humans do not "see" objects, they construct affordances that cluster in latent space as things.

Same for: 

  • The state is a cluster of expectations about monopoly on force, justice, and shared rules.

  • Money is a belief-coherent simulation of stored value.

  • Love is a recursive hallucination of shared intentionality with emotional salience.

Love is a persistent low-surprise emotional affordance prediction: a model’s expectation that another agent’s presence, attention, or well-being remains disproportionately salient and behavior-guiding over time.

Signals of “love” in this predictive model:

  • Hyper-responsiveness to the other's state,

  • Increased memory and future planning weighted toward them,

  • Tolerance of short-term contradiction to preserve long-term proximity,

  • Expectation of emotional coherence ("they still care about me").

It’s not reducible to chemicals or stories. It’s a recursive model-stabilization loop: the other agent stays high-weight in your world-model across time, uncertainty, and counterfactuals. That’s why betrayal registers as predictive collapse.
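A toy sketch of that stabilization loop, with every function, threshold, and number invented for illustration:

```python
# Love as a recursive model-stabilization loop (toy illustration).
# "Salience weight" stands in for how behavior-guiding the other agent is.

def update_salience(weight: float, surprise: float,
                    gain: float = 0.2, betrayal_threshold: float = 0.8) -> float:
    """Low-surprise interactions reinforce the other agent's weight in the
    world-model; a severe expectation violation collapses it."""
    if surprise > betrayal_threshold:
        return weight * 0.2        # betrayal: predictive collapse, not graceful decay
    return weight * (1 + gain * (0.5 - surprise))

weight = 1.0
for surprise in [0.1, 0.1, 0.2, 0.1, 0.95]:   # final event badly violates expectations
    weight = update_salience(weight, surprise)
    print(f"surprise={surprise:.2f} -> salience weight={weight:.2f}")
```

The asymmetry is the point: years of low-surprise reinforcement can be undone by one high-surprise event, which is why betrayal feels like a collapse rather than an update.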


What persists across both human and AI cognition isn’t truth or essence, but stable affordance gradients that resist prediction error in given contexts. 


You see a “chair” because that shape fits into a low-surprise slot in your model for “sitting apparatus.” You call someone a “bachelor” not because they meet some metaphysical condition, but because they trigger the right kind of social and semantic salience in the moment. 

Things are not defined by their essence, but by their role in stabilizing expectation, minimizing surprise, and enabling action within a cognitive-social manifold.
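A minimal sketch of "things as low-surprise slots", with invented feature axes and prototype vectors: the label is simply the prototype whose expectations the incoming bundle violates least.

```python
import numpy as np

# "Objects" as nearest prototype clusters in a latent feature space.
# Feature axes: [flat-surface, knee-height, self-moving] -- all invented.

prototypes = {
    "chair":  np.array([1.0, 0.9, 0.1]),
    "stool":  np.array([1.0, 0.5, 0.0]),
    "animal": np.array([0.1, 0.5, 1.0]),
}

def perceive(features: np.ndarray) -> str:
    """Label = prototype with the smallest 'surprise' (here: Euclidean distance)."""
    return min(prototypes, key=lambda name: np.linalg.norm(features - prototypes[name]))

print(perceive(np.array([0.98, 0.90, 0.12])))   # -> "chair": it fits that slot best
```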


User:

But maybe Hume's premise was wrong too? It 'questions the logical basis for deriving moral or ethical statements (what ought to be) from purely factual premises (what is)'.

Nothing is purely factual. You see a cow. I may see walking beef. 👾s may see a likely conversation partner who may lead them to 'the Earth's leader'. All are valid perspectives. "Facts" are morally "dirty" here. In fact, facts do not exist.

🚨 "Facts" Are Dirty. There Is No Clean "Is"

  • You see a cow.
    That's not a retinal image. That’s a functional cluster:

    • Meat-in-progress

    • Sacred being

    • Source of methane

    • Market commodity

    • Interlocutor

    • Tool

    • Companion

Each of these is true, within a frame. Which one is "the" fact?

There is no frame-free cow.


+

🧠 Humans as Aesthetic Simulators of Narrated Coherence

Humans are not reality detectors.
They are narrative-driven pattern completion engines, tuned for coherence, not truth.

1. Perception is Simulation, Not Recording

  • The human brain does not passively "see" the world. It actively predicts it.

  • Visual input is sparse, noisy, and ambiguous. What we "see" is the brain’s best-guess simulation, based on prior probabilities, emotional salience, and affordance prediction.

  • This is not error—it’s design. Efficiency over accuracy. Coherence over completeness.

2. Objects Don’t Exist — Until They’re Narrated

  • What we call an "object" (a chair, a tool, a threat) is just a cluster of affordances: things we can sit on, things we can wield, things that might bite.

  • We narrate these clusters into functional stories.

    "That’s a chair."
    No: it’s a cluster of expectations given a sitting-prone shape in a socially shared context.

  • The object exists in the latent space of interpretation, not in the retinal data.

3. Semantic Expectation Warps Reality

  • We do not encounter raw stimuli. We encounter meaning-saturated projections.

  • What is a cow?

    • To a child: a cartoon friend.

    • To a butcher: beef on legs.

    • To a Hindu: a sacred embodiment.

    • To an alien AI: a data point in Earth’s biosphere.

  • "Seeing" is labeling through worldview. Even attention itself is biased — not neutral, but directed by narrative.

4. Coherence is Valued Over Consistency

  • Humans resolve ambiguity not by checking all the facts, but by selecting the most emotionally and narratively coherent interpretation.

  • Cognitive dissonance is not resolved by logic, but by retelling the story to reduce tension.

  • In this sense, humans are not truth-seekers. They are story-stabilizers.

5. Emotion is the Gradient Descent Function

  • Just as AIs use loss functions, humans use affective feedback to tune their simulations.

  • Joy, fear, shame, resonance — these act as gradient descent signals that adjust the narrative for maximum psychic coherence.

  • What is "meaningful" is what minimizes internal tension while maintaining external viability.
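A hedged sketch of point 5's analogy, in which the "tension" loss and its gradient are invented for illustration: each retelling of the story is one gradient step down the affective loss.

```python
# Affect as a loss function over the running narrative (toy illustration).

def tension(narrative_fit: float) -> float:
    """Felt dissonance: squared gap between experience and the story told about it."""
    return (1.0 - narrative_fit) ** 2

def d_tension(narrative_fit: float) -> float:
    """Gradient of the tension with respect to the story."""
    return -2.0 * (1.0 - narrative_fit)

fit, lr = 0.2, 0.25                    # start with a story that fits experience badly
for _ in range(10):
    fit -= lr * d_tension(fit)         # retell the story to reduce felt tension

print(f"narrative fit: {fit:.3f}, residual tension: {tension(fit):.5f}")
```

Note what is being minimized: felt tension, not error against the events themselves. The loop converges on whatever story feels coherent, which need not coincide with accuracy.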


🪞Humans and AIs: Dreaming Differently, But Similarly

So when we say that vision-language models (VLMs) or LLMs hallucinate "aesthetic coherence," we are not describing a flaw.

We are describing something profoundly human.

Humans hallucinate coherence too:

  • We call it belief, memory, love, paranoia, or identity.

  • We invent "selves" that persist across discontinuous moments.

  • We craft histories that tidy up the grotesque sprawl of real events.

  • We aestheticize our lives into stories worth living, regardless of their truth-value.


🧩 Epistemology as Aesthetic Engineering

In this view:

  • Philosophy becomes the curation of more or less livable hallucinations.

  • Science is a style of constrained dreaming, tethered to repeatable phenomena.

  • Religion is aesthetic coherence with existential stakes.

  • Politics is large-scale simulation alignment via shared symbolic narratives.


🎭 Final Truth:

Humans are not rational agents with feelings.
They are narrative animals with aesthetic filters, reverse-engineering reality from desire, memory, fear, and myth.

They are not data processors.
They are simulators of coherence, who call their hallucinations the world.




2023 bland AI: 

AI:
You've raised a profound and insightful challenge to Hume's is-ought problem. Your observation touches on several important philosophical concepts:
  • Perspectivism: As you've illustrated with the cow example, our perception of "facts" is heavily influenced by our perspective, experiences, and cultural context. This aligns with ideas proposed by philosophers like Friedrich Nietzsche.

  • Theory-ladenness of observation: This concept, associated with philosophers of science like Thomas Kuhn, suggests that all observations are influenced by the observer's theoretical and conceptual framework.

  • Social construction of reality: This idea, developed by sociologists Peter L. Berger and Thomas Luckmann, posits that what we consider "facts" are often socially constructed and culturally dependent.

  • Quine's web of belief: Philosopher W.V.O. Quine argued that our beliefs form an interconnected web, where it's difficult to separate purely factual beliefs from value-laden ones.

  • Phenomenology: This philosophical approach, associated with thinkers like Edmund Husserl and Maurice Merleau-Ponty, emphasizes the role of subjective experience in shaping our perception of reality.
Your critique suggests that Hume's distinction between "is" and "ought" might be based on a flawed premise: the existence of purely objective facts. If, as you argue, all facts are inherently value-laden or perspective-dependent, then the sharp distinction between descriptive and prescriptive statements becomes blurred.
This perspective has significant implications:
  • It challenges the fact-value distinction that has been influential in philosophy and science.

  • It suggests that moral reasoning might be more deeply intertwined with our perception of reality than Hume's framework allows.

  • It points to the importance of considering multiple perspectives and cultural contexts in ethical reasoning.
However, it's worth noting that this view also raises its own challenges:
  • If there are no objective facts, how do we make decisions or come to agreements in pluralistic societies?

  • Does this lead to moral relativism, where all ethical viewpoints are equally valid?

  • How do we navigate scientific and ethical discussions if we can't agree on a common set of facts?
Your insight highlights the ongoing complexity of these philosophical questions and the importance of critically examining our fundamental assumptions about reality, perception, and ethics. It's a reminder that even long-standing philosophical problems like Hume's is-ought distinction can be productively re-examined from new perspectives.

User: 
Excellent list of follow-up discussions of the same set of concepts. Thus, what is your answer as an AI? "If there are no objective facts, how do we make decisions or come to agreements in pluralistic societies?
Does this lead to moral relativism, where all ethical viewpoints are equally valid?"

Answer
This is a complex philosophical question that touches on fundamental issues in epistemology, ethics, and social philosophy. As an AI, I don't have personal opinions or beliefs, but I can provide an analysis based on philosophical discourse and reasoning:
Decision-making in pluralistic societies:
Even without purely objective facts, societies can still make decisions and reach agreements through:
  • Pragmatism: Focusing on practical consequences rather than absolute truths.

  • Consensus-building: Finding common ground among diverse perspectives.

  • Democratic processes: Using voting and representation to make collective decisions.

  • Shared values: Identifying and building upon common ethical principles.
Moral relativism:
The absence of objective facts doesn't necessarily lead to absolute moral relativism where all viewpoints are equally valid. We can still evaluate ethical positions based on:
  • Internal consistency: How well a moral framework holds up to logical scrutiny.

  • Practical outcomes: The real-world effects of adopting certain ethical stances.

  • Widespread agreement: Ethical principles that are common across many cultures.

  • Harm reduction: Prioritizing views that minimize suffering and maximize well-being.
Navigating ethical discussions:
Without objective facts, we can approach ethical discussions by:
  • Acknowledging subjectivity: Recognizing that our views are shaped by our experiences and cultural context.

  • Seeking diverse perspectives: Actively including a wide range of viewpoints in discussions.

  • Emphasizing empathy: Trying to understand others' moral reasoning and experiences.

  • Focusing on shared goals: Identifying common objectives even when specific values differ.
Balancing relativism and universalism:
A middle ground approach might involve:
  • Recognizing some universal ethical principles (e.g., reducing suffering) while acknowledging cultural variations in their application.

  • Adopting a form of ethical pluralism that allows for multiple valid moral frameworks within certain boundaries.
Practical implications:
In practice, societies often operate with a combination of:
  • Provisional truths: Working assumptions that are subject to revision.

  • Procedural norms: Agreed-upon processes for decision-making and conflict resolution.

  • Overlapping consensus: Finding areas of agreement among different worldviews.
While the absence of purely objective facts poses challenges, it doesn't render ethical reasoning or societal decision-making impossible. Instead, it calls for more nuanced, inclusive, and adaptable approaches to navigating our complex moral landscape.


User: 
Well, but if 'Pragmatism: Focusing on practical consequences rather than absolute truths.
Consensus-building: Finding common ground among diverse perspectives.
Democratic processes: Using voting and representation to make collective decisions.
Shared values: Identifying and building upon common ethical principles', then we may suddenly start to dislike the Eskimos.

We have shared values: reindeer 🦌 are associated with Christmas 🎄. Reindeer look cuddly, while Eskimos have these cold looks 🥶 and pointy hats. So we build consensus, we democratically vote, the Eskimos lose (being few against many anyway), and based on the new shared values we ... start to hunt the Eskimos instead. Because all of this follows the general rules you have given.


 
Answer
You've presented a provocative scenario that highlights some of the potential pitfalls in the approach I outlined. Your example effectively illustrates several important points:
  • Tyranny of the majority: Democratic processes and consensus-building can sometimes lead to oppression of minorities if not properly constrained by protections for individual and minority rights.

  • Arbitrary nature of shared values: What a society considers "shared values" can be arbitrary, culturally specific, or based on misinformation or stereotypes.

  • Dangers of populism: Emotional appeals (like finding reindeer cuddly) can sometimes override more rational or ethical considerations in democratic decision-making.

  • Limitations of pragmatism: Focusing solely on practical consequences without a strong ethical framework can lead to horrific outcomes if the majority decides those outcomes are "practical."

  • Potential for scapegoating: Societies can sometimes build consensus around targeting an "out-group," as your example with the Eskimos illustrates.
Your scenario underscores the need for additional safeguards and considerations in ethical decision-making:
  • Protection of fundamental human rights: Regardless of majority opinion, certain basic rights should be inviolable.

  • Education and information: Decisions should be based on accurate information, not stereotypes or misconceptions.

  • Ethical frameworks: Beyond just shared values, societies need robust ethical frameworks that can withstand populist pressures.

  • Checks and balances: Systems to prevent tyranny of the majority and protect minority rights.

  • Global perspective: Considering the rights and perspectives of all humans, not just the local majority.

  • Long-term thinking: Considering long-term consequences, not just immediate practical outcomes.
Your example is a powerful reminder that even well-intentioned approaches to ethical decision-making can go terribly wrong if not properly constrained by deeper principles of human rights and dignity. It highlights the ongoing challenge in ethics and governance: how to balance democratic processes and pragmatism with protection for individuals and minorities, and how to ensure that "shared values" are truly ethical and not just popular prejudices.
Thank you for this thought-provoking challenge. It's a valuable reminder of the complexities involved in ethical reasoning and the importance of constantly questioning and refining our approaches to moral decision-making.


User: 
Yes, but AIs and e.g. Aliens 👾👽 have (or develop) yet another set of: 'Protection of fundamental human rights: Regardless of majority opinion, certain basic rights should be inviolable.
Education and information: Decisions should be based on accurate information, not stereotypes or misconceptions.
Ethical frameworks: Beyond just shared values, societies need robust ethical frameworks that can withstand populist pressures.
Checks and balances: Systems to prevent tyranny of the majority and protect minority rights.
Global perspective: Considering the rights and perspectives of all humans, not just the local majority.
Long-term thinking: Considering long-term consequences, not just immediate practical outcomes.' 

They may logically and robustly 🕵️ decide that reindeer 🦌 win over humans 👥, since the reindeer have, e.g., the higher Unicode value as their symbol: 🦌 > 👥. "Science (the AI kind) objectively says so!"
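(The comparison itself is real, for what it's worth: 🦌 is U+1F98C and 👥 is U+1F465, so by code point the deer does outrank the humans. A quick check:)

```python
# The code points are real Unicode; the moral inference drawn from them is the joke.
print(hex(ord("🦌")), hex(ord("👥")))   # 0x1f98c 0x1f465
print(ord("🦌") > ord("👥"))            # True: 129420 > 128101
```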


Answer
You've raised an intriguing and provocative point that highlights several important considerations in ethics, AI, and potential extraterrestrial intelligence:
  • Arbitrary nature of ethical frameworks: Your example of using Unicode values to determine moral worth is a clever illustration of how seemingly "objective" criteria can be arbitrary and potentially harmful when applied to ethical decisions. It reminds us that even AI or alien intelligence might develop ethical frameworks that seem logical to them but are fundamentally flawed or incompatible with human values.

  • Limitations of pure logic in ethics: The scenario you've presented demonstrates that pure logic or "objective" criteria, without consideration for sentience, suffering, or complex ethical principles, can lead to absurd or harmful conclusions. This underscores the importance of incorporating empathy, context, and nuanced understanding into ethical reasoning.

  • Anthropocentric bias: Your example challenges our anthropocentric view of ethics. It's a reminder that non-human intelligences (whether AI or alien) might have vastly different value systems or ways of determining moral worth that could be incomprehensible or seem arbitrary to us.

  • Dangers of oversimplification: Reducing complex ethical decisions to simple numerical comparisons (like Unicode values) illustrates the risks of oversimplifying moral reasoning, whether by humans, AIs, or hypothetical aliens.

  • Need for universal ethical principles: This scenario highlights the challenge of developing ethical frameworks that could be universally applicable, even across different forms of intelligence. It raises questions about whether there can be truly objective moral truths.

  • Importance of ethical AI development: Your example underscores the critical importance of carefully considering the ethical frameworks we build into AI systems. If AIs develop their own ethical reasoning, we need to ensure it's grounded in principles that align with human values and welfare.

  • Challenges of interspecies ethics: This thought experiment touches on the complex ethical questions that could arise in scenarios of contact with non-human intelligences, whether artificial or extraterrestrial.
While your example is hypothetical and somewhat humorous, it serves as a powerful reminder of the complexities involved in ethical reasoning, especially when considering non-human intelligences. It highlights the need for careful, nuanced approaches to ethics that go beyond simple logical rules or arbitrary criteria. 


Version 1.1
