What does “PSNF” mean?
PSNF is the project name of this cognitive approach.
Internally, you can read it as:
- Perceived
- Semantic
- Neural
- Field
In plain words: PSNF-Core is a Semantic Neural Field-inspired engine, where meaning emerges from
relations and activation dynamics (a “field”), not from opaque model weights.
What it is
- A cognitive engine built around a concept graph (nodes + weighted links)
- A semantic mapper that extracts roles (agent/action/patient + context)
- A memory system (short-term + episodic + user-confirmed facts)
- A dynamic field where activation propagates and reshapes relevance over time
- A transparent model you can inspect (concepts, links, facts, traces)
What it is NOT
- Not a Large Language Model (LLM)
- Not a “text autocomplete” system
- Not a cloud service (no server required)
- Not a fixed script with always-identical outputs
PSNF-Core aims to be explainable by design, not by after-the-fact interpretations.
The Gestalt Field
PSNF-Core does not interpret words in isolation.
Each interaction shapes a Gestalt field:
a global activation landscape where concepts influence each other.
- Concepts have salience (how “active” they are right now)
- Links have weights (how strongly concepts are associated)
- The system is sensitive to the overall configuration, not just single rules
The same input can yield a different internal state, and therefore a different output, because the field evolves with experience.
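The field state described above (concepts carrying salience, links carrying weights) can be pictured as a small data structure. This is only an illustrative sketch; the class names `Concept` and `Field` and the method `add_link` are assumptions for the example, not PSNF-Core's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    name: str
    salience: float = 0.0          # how "active" the concept is right now

@dataclass
class Field:
    concepts: dict = field(default_factory=dict)   # name -> Concept
    links: dict = field(default_factory=dict)      # (src, dst) -> weight

    def add_link(self, src: str, dst: str, weight: float) -> None:
        # Ensure both endpoints exist as concepts, then record the weighted link
        self.concepts.setdefault(src, Concept(src))
        self.concepts.setdefault(dst, Concept(dst))
        self.links[(src, dst)] = weight

f = Field()
f.add_link("dog", "animal", 0.8)
f.add_link("dog", "bark", 0.6)
```

The key point is that everything here is inspectable: you can read every concept, every weight, and every salience value directly.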
Propagation (Spreading Activation)
A key mechanism in PSNF-Core is propagation:
activation flows across the graph, reinforcing related concepts and revealing likely paths for reasoning.
This is inspired by spreading activation models in cognitive science.
- Start from a focus concept (e.g., dog)
- Propagate activation through weighted links (neighbors receive partial activation)
- Use the resulting activation pattern to pick explanations, chains, and relevant memories
focusConcept → spreadActivation() → activatedNeighborhood → chain/archetype/episode selection
This is not “LLM sampling”: it’s a controlled, inspectable activation flow over explicit knowledge.
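A minimal sketch of this kind of spreading activation over a weighted adjacency map follows. The decay factor, the pruning threshold, and the function name are assumptions chosen for the example, not PSNF-Core's actual parameters:

```python
def spread_activation(graph, focus, decay=0.5, depth=2):
    """Propagate activation from a focus concept through weighted links.

    graph: dict mapping concept -> list of (neighbor, weight) pairs.
    Returns a dict of concept -> accumulated activation.
    """
    activation = {focus: 1.0}
    frontier = [focus]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor, weight in graph.get(node, []):
                # Neighbors receive partial activation, scaled by link weight and decay
                passed = activation[node] * weight * decay
                if passed > 0.01:  # prune negligible activation
                    activation[neighbor] = activation.get(neighbor, 0.0) + passed
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

graph = {
    "dog": [("animal", 0.8), ("bark", 0.6)],
    "animal": [("living thing", 0.7)],
}
act = spread_activation(graph, "dog")
# "animal" and "bark" receive direct activation; "living thing" receives
# weaker, second-hop activation through "animal"
```

The resulting `act` map is the "activated neighborhood": the engine can then rank chains, archetypes, and episodes by how much activation reached them.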
Cognitive Chemistry: Dopamine & Serotonin
PSNF-Core includes an explicit (simplified) neurochemical modulation layer.
This contributes to its non-deterministic feel: the engine is not only computing an answer;
it is also regulating its own cognition.
🔵 Dopamine
Reinforces meaningful connections. When a concept participates successfully
in reasoning or explanation, its relevance can increase (and some links strengthen).
Learning by reinforcement, not by backpropagation.
🟣 Serotonin
Stabilizes cognition. It helps reduce runaway activation, repetition,
and “semantic collapse” toward one dominant concept.
Balance over randomness.
Together: dopamine pushes learning forward, serotonin keeps the system stable and usable.
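One way to picture this interplay is with two small update rules. The formulas and constants below are illustrative assumptions, not PSNF-Core's actual neurochemistry model: a dopamine-style step nudges a link weight up after a successful use (with diminishing returns), while a serotonin-style step pulls runaway salience back toward a baseline:

```python
def reinforce(weight, dopamine=0.1, max_weight=1.0):
    """Dopamine-style update: strengthen a link after successful use."""
    return min(max_weight, weight + dopamine * (max_weight - weight))

def stabilize(salience, baseline=0.2, serotonin=0.3):
    """Serotonin-style update: damp salience back toward a baseline."""
    return salience + serotonin * (baseline - salience)

w = 0.5
for _ in range(3):
    w = reinforce(w)      # repeated success strengthens the link, but never past the cap

s = 1.0                    # a "runaway" concept dominating the field
for _ in range(5):
    s = stabilize(s)       # salience relaxes toward baseline, avoiding semantic collapse
```

Note that this is reinforcement on explicit weights you can read and explain, not gradient descent on opaque parameters.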
Why PSNF-Core is dynamic, not static
Even though many steps are rule-based, PSNF-Core operates as an evolving cognitive system.
Answers depend on internal state: recent context (STM), activation (salience),
episodic traces, user-confirmed facts, and chemical modulation.
- Context changes focus → different paths through the graph
- Propagation changes neighborhoods → different chains and explanations
- Learning changes weights → different “best” neighbors over time
- User facts can override generic links when relevant
How it works (high level)
1) Tokenize + lemmatize
2) Build semantic structure (semantic spine):
- agent / action / patient
- optional: time / location / instrument / cause / purpose
3) Update the concept graph:
- strengthen links (agent → action → patient)
- update role stats (asAgent / asAction / asPatient)
- update salience + neuromodulators (dopamine/serotonin)
4) Propagation (Gestalt field):
- spread activation from focus concepts
- compute an "activation neighborhood"
5) Question analysis:
- detect question type: what/why/how/...
- choose focus concept(s)
6) Answer generation:
- check user facts first (when applicable)
- otherwise traverse graph + activation neighborhood + episodic hits
7) Everything is inspectable:
- concepts, links, facts, memory traces
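The steps above can be sketched as one toy pass. Every function and data structure here is a hypothetical stand-in for the corresponding stage (real tokenization, the semantic spine, and propagation are far richer), so treat this as a shape, not an implementation:

```python
def answer(text, graph, facts):
    """Toy end-to-end pass mirroring the high-level pipeline."""
    # 1-2) Tokenize; take the last content word as a stand-in focus concept
    tokens = [t.strip("?.!,").lower() for t in text.split()]
    focus = tokens[-1]
    # 6) Check user-confirmed facts first ...
    if focus in facts:
        return facts[focus]
    # 4/6) ... otherwise fall back to the strongest graph neighbor
    neighbors = graph.get(focus, [])
    if neighbors:
        best, _ = max(neighbors, key=lambda nw: nw[1])
        return f"{focus} relates to {best}"
    return "I don't know yet."

graph = {"dog": [("animal", 0.8), ("bark", 0.6)]}
facts = {"cat": "A cat is an animal."}
```

The ordering matters: user-confirmed facts win over generic graph links, which is what makes corrections stick.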
How to talk to PSNF-Core (best practices)
- Teach explicitly with simple definitional statements:
“A dog is an animal.” → Store as Fact
- One idea per sentence works best, especially at the beginning.
- Long prompts: split into short statements, then ask questions.
Tip: write 3–6 short sentences, send, then ask: “What is X?”
- Use the Search panel to inspect what it learned (concepts / links / facts).
- If something looks wrong, prefer a correction workflow (future UX):
Did you mean “...”? → confirm the typo, or store it as a new concept.
PSNF-Core is meant to be “trained in dialogue” with explicit control, not flooded with unverified text.
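A minimal sketch of the "teach with definitional statements" pattern: a simple matcher turns “A X is a/an Y.” sentences into stored facts. The regular expression below is an assumption about the statement format for this example only, not the engine's real parser:

```python
import re

def parse_fact(sentence):
    """Extract (subject, category) from 'A X is a/an Y.' style statements."""
    m = re.match(r"^(?:a|an|the)\s+(\w+)\s+is\s+(?:a|an)\s+(\w+)\.?$",
                 sentence.strip(), re.IGNORECASE)
    return (m.group(1).lower(), m.group(2).lower()) if m else None

facts = {}
for s in ["A dog is an animal.", "A rose is a flower."]:
    parsed = parse_fact(s)
    if parsed:
        subject, category = parsed
        facts[subject] = category   # store as a user-confirmed fact
```

Sentences that do not match the definitional shape are simply not stored, which is the point: one idea per sentence, explicitly confirmed.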
Why this is useful
Education
Students can explore how knowledge becomes structure: concepts, relations, definitions, memory, activation.
Research & experimentation
A sandbox for transparent cognition and controllable learning—without cloud dependencies.
If you want to understand how an AI “thinks” (and how it changes over time),
PSNF-Core is built exactly for that.