🧠 AI-Core Hybrid — Early Command-Line Intelligence
AI-Core continues to evolve — this time, it’s taken shape as a hybrid Ollama wrapper, merging local memory and LLM connectivity into one self-learning system.
It’s still in its infancy, but already showing signs of adaptive intelligence and personalized retention.
🔹 Overview
The latest AI-Core build acts as a memory-driven interface that wraps around Ollama models like granite3.3.
It uses internal memory for short-term reasoning and, when needed, sends queries (plus the last two Q&A pairs) to the LLM.
This allows it to maintain topic consistency while building progressive understanding across each session.
When the LLM is inactive, AI-Core’s responses remain short or uncertain — limited to its current internal facts.
But when connected, it dynamically learns and refines its answers, capturing new knowledge using CLI commands like :teach and :teachlast.
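As a rough illustration of that routing, here is a minimal Python sketch using the `ollama` client library. The memory helpers and the two-pair context window mirror the behavior described above, but every name here is a hypothetical stand-in, not AI-Core's actual internals:

```python
# Minimal sketch of internal-first routing (illustrative, not AI-Core's
# real API). Assumes the `ollama` Python client and a local Ollama
# server with the granite3.3 model pulled.
import ollama

memory: dict[str, str] = {}          # locally taught facts
history: list[tuple[str, str]] = []  # chronological Q&A pairs

def ask(question: str) -> str:
    # 1. Try internal memory first: no LLM round-trip if a fact exists.
    local = memory.get(question.strip().lower())
    if local is not None:
        return local

    # 2. Otherwise gather the last two Q&A pairs as context and query the LLM.
    print("[i] No internal answer found. Gathering context for LLM...")
    context = history[-2:]
    print(f"[i] Found {len(context)} context snippet(s). Querying LLM...")

    messages = []
    for q, a in context:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})

    reply = ollama.chat(model="granite3.3", messages=messages)
    answer = reply["message"]["content"]
    history.append((question, answer))
    return answer
```

Replaying only the last two exchanges keeps the prompt small while still anchoring the model to the current topic, which is the trade-off the overview describes.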
🧩 Example Interaction
Below is a live example showing the transition from no LLM, to LLM-assisted, to a self-taught state.
YOU> who are you
AI > ai-core interface.
YOU> what can you do
AI > yes.
YOU> can we interact?
[i] No internal answer found. Gathering context for LLM...
[i] Found 2 context snippet(s). Querying LLM...
AI > Absolutely, we can interact. I'm here to assist you with information and answer your questions to the best of my ability. Please feel free to ask me anything.
YOU> what does purpose mean?
[i] No internal answer found. Gathering context for LLM...
[i] Found 2 context snippet(s). Querying LLM...
AI > The term "purpose" refers to the reason for which something exists or is done, or the intended outcome...
YOU> :teachlast
[i] taught last.
YOU> what does purpose mean?
AI > The term "purpose" refers to the reason for which something exists or is done, or the intended outcome...
Notice that before teaching, AI-Core required the LLM to explain “purpose.”
After using :teachlast, it retained the answer locally — now responding instantly without querying the LLM again.
This simple example demonstrates incremental memory and command-line teachability — a small step toward local, personalized AI memory systems.
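To make that teach step concrete, here is a hedged sketch of what :teachlast might do internally, extending the routing sketch above. The JSON persistence format and file name are assumptions, not AI-Core's actual storage:

```python
# Hypothetical :teachlast handler: promote the most recent exchange to
# persistent memory so future lookups skip the LLM entirely.
import json
from pathlib import Path

FACTS_FILE = Path("ai_core_facts.json")  # assumed persistence file

def teach_last(history: list[tuple[str, str]]) -> None:
    if not history:
        print("[i] nothing to teach.")
        return
    question, answer = history[-1]
    memory[question.strip().lower()] = answer        # usable immediately
    facts = json.loads(FACTS_FILE.read_text()) if FACTS_FILE.exists() else {}
    facts[question.strip().lower()] = answer         # survives restarts
    FACTS_FILE.write_text(json.dumps(facts, indent=2))
    print("[i] taught last.")
```

With something like this in place, the second "what does purpose mean?" resolves from local memory, which is why no "[i] Querying LLM..." lines appear after :teachlast.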
⚙️ Active Commands
Current operational commands within the AI-Core shell:
:swap :stats :listqa :teach :teachlast :reviewpending :promote
:profiles :profile <name|off> :ingestfacts :memfacts :exportfacts
:recenter :quit
Each of these commands interacts with different layers of memory and control:
- :teach / :teachlast – Manually store a new fact, or capture the last exchange into persistent memory.
- :listqa – Display recent question/answer pairs.
- :memfacts – Show structured long-term facts currently known to AI-Core.
- :stats – View live token usage and memory footprint.
- :swap / :profile – Change behavior or route model queries dynamically.
- :ingestfacts / :exportfacts – Import/export knowledge files (.txt, .json).
- :reviewpending / :promote – Curate what gets promoted to long-term memory.
- :recenter – Rebalance focus and reset context drift mid-session.
- :quit – Graceful shutdown, snapshotting session data.
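A shell like this is usually just a prefix dispatch over the input line. The sketch below shows one plausible shape, continuing the sketches above (memory, history, ask() and teach_last() are the hypothetical pieces defined earlier; only a few handlers are shown):

```python
# Plausible command-loop shape (illustrative; handler names and
# signatures are assumptions, not the project's code).
def cmd_stats(args: list[str]) -> None:
    print(f"[i] {len(memory)} fact(s), {len(history)} exchange(s) this session.")

def cmd_memfacts(args: list[str]) -> None:
    for question, answer in memory.items():
        print(f"  {question} -> {answer}")

COMMANDS = {
    ":stats": cmd_stats,
    ":memfacts": cmd_memfacts,
    ":teachlast": lambda args: teach_last(history),
}

def repl() -> None:
    while True:
        line = input("YOU> ").strip()
        if line == ":quit":
            break  # a real :quit would snapshot session data first
        if line.startswith(":"):
            name, *args = line.split()
            handler = COMMANDS.get(name)
            if handler is None:
                print(f"[!] unknown command: {name}")
            else:
                handler(args)
        elif line:
            print("AI >", ask(line))
```

Keeping every memory operation behind an explicit `:command` is what makes the storage layer inspectable: nothing is written unless the user typed the command that writes it.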
🧬 Why This Matters
Even at this stage, AI-Core can learn through conversation and retain personal information when taught — allowing more personal, contextual interaction across sessions.
By using direct command-line input, we maintain full transparency and control over what gets stored, reviewed, and recalled.
Every command is a small step toward building a personalizable intelligence core that adapts to the user while keeping its logic observable and open.
🚧 In Development
Upcoming goals include:
- Inline memory editing and tagging.
- Fact confidence scoring and decay timers (one possible shape is sketched after this list).
- Real-time :teachqueue synchronization across sessions.
- User-defined fact categories (e.g., personal, project, reference).
- Optional encryption for personal data memory blocks.
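None of these exist yet, but to show the kind of mechanism the decay-timer goal implies, here is a speculative sketch. The exponential half-life, its 30-day value, and the demotion idea are invented for illustration:

```python
# Speculative confidence decay (not implemented in AI-Core; the
# half-life constant is an assumed tuning knob).
import math
import time

HALF_LIFE_DAYS = 30.0

def decayed_confidence(initial: float, taught_at: float, now: float | None = None) -> float:
    """Exponentially decay a fact's confidence since it was taught.

    Facts dropping below some threshold could be demoted back to the
    :reviewpending queue instead of being recalled as settled knowledge.
    """
    if now is None:
        now = time.time()
    age_days = (now - taught_at) / 86_400
    return initial * math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)

# A fact taught 30 days ago at confidence 0.9 has decayed to ~0.45.
print(round(decayed_confidence(0.9, time.time() - 30 * 86_400), 2))
```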
AI-Core is young — still learning, still rough — but every command and response teaches it how to grow.
Each improvement starts as a single experiment, tested live, reviewed, and then built into the next version.