# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]

### Added

- Input firewall (Gate 1) — Security gate blocking garbage/adversarial content from the auto-capture pipeline
  - Blocks: oversized content (>10KB), control sequences (`<ctrl*>`, fake role tags), JSON metadata injection, base64/binary blocks, repetitive content, low-entropy data
  - `FirewallResult` dataclass with `blocked`, `reason`, `sanitized` fields
  - Integrated into all 3 auto-capture entry points: stop hook, precompact hook, post-tool passive capture
  - 30 new tests (`test_input_firewall.py`)
- Stop hook role filtering — JSONL transcript entries classified by role; tool results skipped, assistant messages filtered by memory markers
- Embedding semantic dedup — Removes near-duplicate auto-captures using local embedding cosine similarity (sentence_transformer/ollama only)
- Compact response mode — Reduce MCP tool response tokens by 60-80%
  - `compact=true` param on all 46 MCP tools to strip metadata hints and truncate lists
  - `token_budget=N` param for progressive response size enforcement
  - Auto-compact: responses with >20 list items are compacted automatically
  - Content preview: list items show truncated content with a `_content_truncated` flag
  - Count-replace: `fibers_matched`, `conflicts`, `expiry_warnings` → count only
  - Long string truncation: `markdown` field capped at 500 chars
  - `ResponseConfig` in config.toml: `compact_mode`, `max_list_items`, `strip_hints`, `content_preview_length`, `auto_compact_threshold`
  - 47 new tests (`test_response_compactor.py`)
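The auto-compact behavior described above can be sketched roughly as follows. This is a minimal illustration, not the shipped implementation: the helper name, the `_total` suffix, and the default limits are assumptions modeled on the changelog's `max_list_items` and `_content_truncated` descriptions.

```python
def compact_response(resp: dict, max_list_items: int = 20, preview_len: int = 80) -> dict:
    """Sketch of auto-compaction: truncate oversized lists and long strings.

    Field names with `_total` / `_truncated` suffixes are illustrative
    assumptions, not the real response schema.
    """
    out: dict = {}
    for key, value in resp.items():
        if isinstance(value, list) and len(value) > max_list_items:
            out[key] = value[:max_list_items]
            out[f"{key}_total"] = len(value)  # preserve the original count
        elif isinstance(value, str) and len(value) > preview_len:
            out[key] = value[:preview_len]
            out[f"{key}_truncated"] = True    # flag that content was cut
        else:
            out[key] = value
    return out
```

The idea is that the agent still sees how much data existed (counts) while the token cost of the payload itself is bounded.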
### Fixed

- Memory poisoning prevention — Garbage content (chat control sequences, fake role injection, 270KB payloads) no longer enters the brain through hooks (#94)
- PreCompact emergency threshold — Raised from 0.5 to 0.65 to reduce false-positive captures
- fiber.metadata type sync — `nmem_edit` now syncs type changes into `fiber.metadata` (cherry-picked from PR #85)
- Compression size guard — Skip compression when the summary is not smaller than the original (#92)
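The embedding semantic dedup added in this release can be illustrated with a minimal sketch. The function names and the 0.92 threshold are hypothetical; the real pipeline uses the configured sentence_transformer/ollama embeddings rather than the toy vectors shown here.

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def dedup(captures: list[tuple[str, list[float]]], threshold: float = 0.92) -> list[str]:
    """Keep a capture only if it is not near-duplicate of an already-kept one.

    `threshold` is an illustrative assumption, not the project's setting.
    """
    kept: list[tuple[str, list[float]]] = []
    for text, emb in captures:
        if all(cosine(emb, kept_emb) < threshold for _, kept_emb in kept):
            kept.append((text, emb))
    return [text for text, _ in kept]
```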
## [4.11.0] - 2026-03-17

### Added

- Diminishing returns gate (v4.0 Phase 5) — Stop spreading activation early when new hops add insufficient signal
  - `ActivationTrace` dataclass: per-hop tracking of new neurons and activation gain
  - `should_stop_spreading()`: absolute (< min neurons) + relative (gain ratio < threshold) criteria
  - Wired into all 3 activation engines: BFS, PPR, Reflex
  - 4 new `BrainConfig` fields: `diminishing_returns_enabled`/`threshold`/`min_neurons`/`grace_hops`
  - 25 new tests (`test_diminishing_returns.py`)
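A minimal sketch of the two stopping criteria above, assuming the field names from the changelog; the default thresholds and the exact gain-ratio definition are illustrative guesses, not the shipped defaults.

```python
from dataclasses import dataclass, field

@dataclass
class ActivationTrace:
    """Per-hop record of what each spreading-activation hop added (sketch)."""
    new_neurons: list[int] = field(default_factory=list)      # count of new neurons per hop
    activation_gain: list[float] = field(default_factory=list)  # activation added per hop

def should_stop_spreading(
    trace: ActivationTrace,
    min_neurons: int = 3,            # absolute criterion (assumed default)
    gain_ratio_threshold: float = 0.1,  # relative criterion (assumed default)
    grace_hops: int = 1,             # never stop during the first hop(s)
) -> bool:
    hops = len(trace.new_neurons)
    if hops <= grace_hops:
        return False
    # Absolute: the latest hop discovered too few new neurons.
    if trace.new_neurons[-1] < min_neurons:
        return True
    # Relative: the latest hop's gain is a tiny fraction of the total so far.
    total = sum(trace.activation_gain)
    if total > 0 and trace.activation_gain[-1] / total < gain_ratio_threshold:
        return True
    return False
```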
### Improved
- Roadmap cleanup — Removed 45 completed/obsolete plan files, consolidated remaining plans
- File watcher plan added (3 phases, Issue #66)
- Brain Quality Track C1+C2 merged
- v4.0 master plan: all 5 phases complete
### Tests
- 4140 passed, 92 skipped, 1 xfailed
## [4.10.0] - 2026-03-16

### Added
- Onboarding overhaul (Issue #82) — Reduce 26 manual setup steps to 1 command
  - `nmem init --full`: auto-detect embeddings, enable dedup, generate maintenance script, print guide URL
  - `nmem doctor` enhanced: 11 checks (was 8), `--fix` flag for auto-remediation (hooks, dedup, embedding)
  - Interactive quickstart guide page (MkDocs + animated terminal demos, scroll reveals, feature cards)
  - Dashboard `GuideCard` for new users (<50 neurons) — dismissible, persisted via localStorage
  - Help button (?) in dashboard TopBar linking to the quickstart guide
  - CLI banners link to the guide URL after init and doctor
  - 35 new tests (`test_full_setup` + `test_doctor_enhanced`)
### Fixed
- Windows npm install: OpenClaw plugin postinstall uses cross-platform Node.js instead of Unix shell syntax
## [4.9.0] - 2026-03-16

### Added
- Knowledge Surface (.nm format) — Two-tier memory architecture: Tier 1 = `.nm` flat file (~1000 tokens, loaded every session), Tier 2 = `brain.db` SQLite graph (queried on-demand)
  - `.nm` format with 5 sections: GRAPH (causal edges), CLUSTERS (topic groups), SIGNALS (urgent/watching/uncertain), DEPTH MAP (self-routing hints), META (brain stats)
  - `SurfaceGenerator` — algorithmic extraction from brain.db using composite scoring (activation + recency + connections + priority)
  - Depth-aware recall routing: SUFFICIENT entities answered from the surface (0 latency), NEEDS_DEEP triggers depth=2 recall
  - Auto-injected into MCP `instructions` on session init for immediate agent context
  - `nmem_surface` MCP tool — generate (rebuild from brain.db) and show (inspect current surface)
  - Auto-regeneration on `nmem_auto(action="process")` session-end
  - Atomic file writes (tmp + rename), project-level and global surface resolution
  - Surface reload on brain switch, cached by brain name
  - 73 new tests across 4 test files
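The "tmp + rename" atomic write mentioned above is a standard pattern; a minimal sketch, with a hypothetical helper name (the real writer lives elsewhere in the codebase):

```python
import os
import tempfile

def atomic_write(path: str, content: str) -> None:
    """Write to a temp file in the same directory, then rename into place.

    os.replace is atomic on both POSIX and Windows, so a concurrent reader
    sees either the old surface file or the new one — never a partial write.
    """
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory, suffix=".tmp")
    try:
        with os.fdopen(fd, "w", encoding="utf-8") as f:
            f.write(content)
        os.replace(tmp, path)  # atomic swap; tmp no longer exists afterwards
    except BaseException:
        os.unlink(tmp)  # clean up the temp file on any failure
        raise
```

Writing the temp file in the *same* directory matters: `os.replace` across filesystems is not atomic.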
### Fixed

- CI fixes: doc_trainer mock now uses a real `BrainConfig` instead of `MagicMock` (lazy entity promotion attrs); auto_tags tests accept bigrams from the keyword extractor
- Docs freshness: regenerated CLI reference (new PostgreSQL migrate options)
## [4.8.0] - 2026-03-16

### Added
- B7: Lazy Entity Promotion — Entities need 2+ mentions before becoming neurons; `entity_refs` table (schema v29), retroactive synapses on promotion, high-confidence/user-tagged exceptions
- A4: Auto-Importance Scoring — Heuristic priority when the user doesn't set an explicit priority; type bonus, causal/comparative language signals, entity richness
- A4: Reflection Engine — Accumulates importance from saved memories, detects patterns (recurring entities, temporal sequences, contradictions) at threshold
- PostgreSQL Migration — `nmem migrate postgres` CLI command with full connection params (#80)
- B1-B6, B8: Brain Quality Track B — Auto-consolidation, Hebbian retrieval, cross-memory linking, IDF keywords, fiber scoring, contextual compression, adaptive decay
- A1: Smart Instructions — Decision framework injected into MCP `instructions` to guide proactive memory saving
- Schema v29 — `entity_refs` table for lazy entity promotion + `keyword_document_frequency` for IDF scoring
- 73 new tests: lazy entity (11), importance (16), reflection (12), compression (12), adaptive decay (11), postgres migration (5), cross-memory link (9), IDF (7), fiber scoring (8)
### Improved

- All quality improvements are purely algorithmic — zero LLM calls added
- Pipeline steps use `getattr` for backward compat with SimpleNamespace contexts
- Entity ref operations gracefully degrade when the table doesn't exist
## [4.7.0] - 2026-03-16

### Added

- PostgreSQL + pgvector backend — Full async storage backend via `asyncpg` with vector similarity search. Supports neurons, synapses, fibers, brains, typed queries. Docker Compose included. Contributed by @zsecducna (#56)
- NeuralMemory vs Mem0 benchmark — Head-to-head comparison: 121x faster writes, equal accuracy, 0 API calls vs 70. Script at `scripts/benchmark_mem0_vs_nm.py`
- Chatbot v2 — Upgraded HF Spaces chatbot with conversation memory, cognitive reasoning for low-confidence answers, source citations, and a retrieval stats panel
### Fixed

- `ReinforcementManager.reinforce()` test — updated assertion to match the batch API (`update_neuron_states_batch`)
- `check_distribution.py` — Fixed ClawHub JSON parser, Windows shell compat, independent version channels
## [4.6.0] - 2026-03-14

### Added

- `nmem setup rules` — IDE rules file generator for multi-agent adoption. Generates `.cursorrules`, `.windsurfrules`, `.clinerules`, `GEMINI.md`, and `AGENTS.md` with NM usage instructions. Supports `--all`, `--ide <name>`, `--force`, and interactive selection
- 17 new tests for the IDE rules generator
## [4.5.0] - 2026-03-14

### Added

- Context merger (Phase A) — `nmem_remember` accepts an optional `context` dict (e.g. `{reason, alternatives, cause, fix, steps}`) that gets merged into content server-side using type-specific templates. Works with any agent — no need to craft perfect prose
- Quality scorer (Phase B) — Every `nmem_remember` response now includes `quality` ("low"/"medium"/"high"), `score` (0-10), and `hints` (actionable improvement suggestions). Soft gate: always stores, never rejects
- 36 new tests for the quality scorer (20) and context merger (16)
### Fixed

- Tool memory config default — test assertion updated to match the `enabled=True` default
## [4.4.1] - 2026-03-14

### Improved
- Embedding config-status 3-state detection — Quick Actions card now distinguishes "configured", "installed but disabled", and "not installed" for embedding provider, with actionable enable/disable commands
## [4.4.0] - 2026-03-14

### Added

- Dashboard Quick Actions card — Overview page now shows configuration status for 6 features (tool memory, cloud sync, embedding, consolidation, review queue, orphan rate) with actionable shortcut commands and copy buttons
- `/api/dashboard/config-status` endpoint — returns per-feature config status with status badges and commands
- Source-Aware Brain plan — 4-phase architecture plan for a smart index with exact citations from source documents (source locators, `nmem_cite` tool, source refresh, cloud resolvers)
### Fixed

- Plugin skills path (#71) — `skills` field in `plugin.json` changed from `"./SKILL.md"` (file) to `"./skills"` (directory) to match Claude Code's expected format. Fixes 2 load errors on plugin install
- Tool stats empty — `tool_memory.enabled` defaulted to `false`, causing the dashboard Tool Stats page to show no data. Now defaults to `true` — tool usage tracking works out of the box
- E2E health test — fixed assertion mismatch (`"healthy"` vs `"ok"`)
## [4.3.1] - 2026-03-14

### Fixed

- Plugin manifest validation (#70) — removed invalid `features`, `instructions`, `agents` keys from `plugin.json` that broke Claude Code plugin install
- Doc trainer orphan neurons — heading-less chunks now get a synthetic heading from the filename; added per-file tags for cross-cluster ENRICH linking; increased the heading dedup limit 20→100 for common headings like "Overview"
- Chatbot brain loading — use `find_brain_by_name("neuralmemory-docs")` instead of the non-existent `list_brains()` method
- HF deploy script username — fixed `nhadaututtheky` typo (double t)
### Added

- `/health` + `/ready` endpoints — `nmem serve` now exposes a health check (brain name, uptime, schema version) and a readiness probe (503 when uninitialized) for production monitoring
- Cloud sync privacy docs — privacy model table, encryption details, CF free tier limits in `docs/guides/cloud-sync.md`
### Improved
- Self-hosted cloud sync — switched default from shared hub to self-hosted model. Users deploy their own CF Worker + D1 database. Data stays on user's own Cloudflare account
- Sync setup instructions — updated README, FAQ, dashboard SyncPage, and MCP setup flow to guide self-hosted deployment first
### Tests
- 14 new health endpoint tests
- Total: 3748 passing
## [4.3.0] - 2026-03-13

### Added

- `nmem_tool_stats` MCP tool — exposes tool usage analytics (summary + daily breakdown) via MCP (#63)
- `/api/dashboard/tool-stats` REST endpoint — tool usage analytics for dashboard integration
- Dashboard: Tool Stats page — top tools bar chart, usage-over-time line chart, detailed table with success rates and durations (#63)
- Background consolidation daemon — `nmem serve` now runs periodic consolidation using the existing `maintenance.scheduled_consolidation_*` config (#65)
- HuggingFace Spaces deployment — chatbot ready for HF Spaces with proper metadata, async Gradio handlers, deploy script, and docs guide (#60)
- Cascading retrieval with fiber summary tier — FTS5 search on fiber summaries as step 2.8 before the neuron pipeline, sufficiency gate for early termination, schema v27 (#61, #62)
### Improved
- Docs messaging — restructured README and mcp-server.md with "3 tools you need, 41 the agent handles" hierarchy (#59)
### Fixed

- `nmem doctor` schema version check — was using `PRAGMA user_version` (always 0) instead of the `schema_version` table; now correctly reports v26
- `nmem brain health` crash in shared mode — hardcoded `limit=10000` exceeded the server max (1000), causing 422 errors (#67)
- `nmem info` crash in shared mode — same limit issue for the typed memories query
- `nmem consolidate` FK crash — the summarize strategy referenced anchor neurons pruned by an earlier tier; now validates neuron existence before creating summary fibers (#68)
## [4.1.1] - 2026-03-12

### Fixed

- `nmem doctor` crash — fixed `No module named 'neural_memory.storage.sqlite'` caused by a stale import after storage restructuring (now imports from `sqlite_schema`)
- `nmem_pin action=list` — new `list` action to query pinned fibers (#57)
### Improved
- Stale references audit — updated tool counts (39→44), schema version (v22→v26), test counts across README, ROADMAP, plugin.json, mcp-server.md
- FAQ — added "Why is my consolidation 0%?" entry
- Regenerated docs — MCP tools + CLI reference refreshed for v4.1.x
## [4.1.0] - 2026-03-12

### Added

- Auto-generated MCP Tool Reference — `scripts/gen_mcp_docs.py` introspects all 44 MCP tool schemas and generates `docs/api/mcp-tools.md` with parameter tables, categories, and tier badges
- Auto-generated CLI Reference — `scripts/gen_cli_docs.py` introspects all 66 CLI commands (Typer/Click) and generates `docs/getting-started/cli-reference.md`
- Documentation Chatbot — Gradio UI (`chatbot/app.py`) powered by NeuralMemory's ReflexPipeline, answers docs questions without an LLM using spreading-activation retrieval
- Docs Brain Trainer — `chatbot/train_docs_brain.py` trains a brain from project docs (40 files → 1045 chunks → 9175 neurons)
- CI Docs Freshness Check — new `docs` job in GitHub Actions runs `--check` mode on both generators, fails CI when auto-generated docs are stale
### Fixed

- Brain lookup fallback — `get_brain(name)` now falls back to `find_brain_by_name()` when id-based lookup fails, preventing duplicate "brain.v2" creation for users upgrading from older versions with UUID-based brain ids
### Improved
- Docs navigation — added orphan pages (Companion Setup, Lessons Learned) to mkdocs.yml nav
- Cross-links — CLI Guide, CLI Reference, and MCP Tools Reference now link to each other via admonition boxes
- CLI Guide renamed — title changed from "CLI Reference" to "CLI Guide" to avoid confusion with auto-generated reference
## [4.0.1] - 2026-03-12

### Security

- Fix path traversal in `index_handler.py` — adapter connection paths now validated with `is_relative_to()` against allowed directories (cwd, home, temp)
- Fix path traversal in `pre_compact.py` hook — stdin transcript path now validated against the `~/.claude` directory
- Update `cryptography>=46.0.5` — fix CVE-2026-26007
- Add `python-multipart>=0.0.22` floor constraint — fix CVE-2026-24486
- Remove internal info from error messages — 9 locations no longer leak memory IDs, hypothesis IDs, or filesystem paths to clients
- CORS hardening — replace the `localhost:*` wildcard with an explicit port list (3000, 3001, 5173, 5174, 8000, 8080, 8888)
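The `is_relative_to()` validation described in the first entry follows a common pattern; a sketch with the allowed roots named in the entry (cwd, home, temp) and a hypothetical function name:

```python
import tempfile
from pathlib import Path

def validate_connection_path(raw: str) -> Path:
    """Resolve a user-supplied path and require it to live under an allowed root.

    Resolving first defeats `../` traversal and symlink escapes; the allowed
    roots here mirror the changelog entry (cwd, home, temp).
    """
    allowed = [Path.cwd(), Path.home(), Path(tempfile.gettempdir())]
    resolved = Path(raw).resolve()
    if not any(resolved.is_relative_to(root.resolve()) for root in allowed):
        raise ValueError("path outside allowed directories")
    return resolved
```

Note `Path.is_relative_to()` requires Python 3.9+; resolving both sides before comparing is what makes `../`-laden inputs fail the check.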
### Fixed

- Fix 8 silent `except Exception: pass` blocks — all now log at DEBUG level with `exc_info=True`
- Fix 14 redundant exception tuples (`except (AttributeError, Exception)` → `except Exception`)
- Remove unused `python-dateutil` from core dependencies
## [4.0.0] - 2026-03-12

### Added

- Semantic Drift Detection — Find tag synonyms/aliases via Jaccard similarity on co-occurrence data
- Tag Co-Occurrence Matrix — Automatically recorded on every memory encode, tracks which tags appear together
- Union-Find Clustering — Groups related tags with confidence thresholds: merge (>0.7), alias (>0.4), review (>0.3)
- Temporal Drift Detection — Compares early vs recent session topics to detect terminology shifts
- `nmem_drift` MCP Tool — detect/list/merge/alias/dismiss actions for managing drift clusters
- `detect_drift` Consolidation Strategy — Runs drift analysis during periodic consolidation
- Schema v26 — New `tag_cooccurrence` and `drift_clusters` tables
### Improved
- Brain Intelligence Complete — v4.0 milestone: session intelligence, adaptive depth, predictive priming, and semantic drift detection work together as feedback loops
- Consolidation engine now includes drift detection in the final tier alongside semantic_link
### Tests
- 51 new drift detection tests (Jaccard, clustering, storage, MCP handler, Union-Find)
- Total: 3810 passing
## [3.5.0] - 2026-03-12

### Added

- Predictive Priming — Brain anticipates the next query from session context with a 4-source priming engine
- Activation Cache — Recent query results carry forward as soft activation with exponential decay (`0.7^n` per query)
- Topic Pre-Warming — Session topics with EMA > 0.5 pre-warm related neurons before query parsing (truly predictive)
- Habit-Based Priming — Query pattern co-occurrence (CONCEPT neurons + BEFORE synapses) predicts the next topic, max 3 predicted topics
- Co-Activation Priming — Hebbian binding data (strength >= 0.5, count >= 3) boosts associated neurons
- Priming Metrics — Hit rate tracking with auto-adjusted aggressiveness (0.5x-1.5x) based on priming effectiveness
- Session priming fields — `priming_hit_rate`, `priming_total` exposed in session summaries and result metadata
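The activation cache's `0.7^n` decay can be sketched as below. The helper names and the 0.05 floor are illustrative assumptions; only the decay factor comes from the changelog:

```python
def carried_activation(initial: float, queries_ago: int, decay: float = 0.7) -> float:
    """Soft activation carried forward n queries later: initial * decay**n."""
    return initial * decay ** queries_ago

def prime_from_cache(cache: list[tuple[int, float, int]], floor: float = 0.05) -> dict[int, float]:
    """Build a priming map from cached results.

    `cache` holds (neuron_id, activation, queries_ago) tuples; decayed values
    below `floor` are dropped, and a neuron appearing in several past results
    keeps its strongest decayed value.
    """
    primed: dict[int, float] = {}
    for neuron_id, activation, age in cache:
        value = carried_activation(activation, age)
        if value >= floor:
            primed[neuron_id] = max(value, primed.get(neuron_id, 0.0))
    return primed
```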
### Tests
- 57 new tests covering all priming sources, metrics, orchestration, merging, backward compat
- Total: 3759 passing
## [3.4.0] - 2026-03-12

### Added

- Session-aware depth selection — Primed topics go shallower (already in context), new topics go deeper (need exploration). Uses session EMA topic weights
- Calibration-driven gate tuning — High-accuracy gates get a confidence boost (+10%), low-accuracy gates get dampened (-30%), very low avg_confidence triggers a downgrade to insufficient
- Agent feedback signal — `agent_used_result` parameter: remember-after-recall = strong positive, unused recall = raised bar for success
- Dynamic RRF weights — Per-brain retriever weights evolve from outcome history via the `retriever_calibration` table and EMA
- Auto activation strategy — `activation_strategy="auto"` selects classic/PPR/hybrid based on graph density (synapses/neuron ratio)
- Schema v25 — `retriever_calibration` table + `graph_density` column on brains
### Tests
- 30 new tests covering all 5 features + backward compatibility
- Total: 3702 passing
## [3.3.0] - 2026-03-12

### Added

- Cloud Sync Hub — Cloudflare Workers + D1 sync hub with API key auth, brain ownership, device management. Live at `neural-memory-sync-hub.vietnam11399.workers.dev`
- API key auth — `nmk_`-prefixed keys, SHA-256 hashed storage, Bearer token transport, key masking in all outputs
- `nmem_sync_config(action='setup')` — Guided onboarding flow for cloud sync setup
- URL versioning — Cloud hub uses a `/v1/` prefix, localhost preserves backward-compatible paths
- HTTP error mapping — User-friendly messages for 401/403/413/429 status codes
- Cloud profile in `nmem_sync_status` — Shows tier, email, usage when connected to the cloud hub
- HTTPS enforcement — Refuses non-HTTPS for cloud hub URLs (localhost exempt)
### Tests
- 22 new tests: SyncConfig api_key, key masking, URL versioning, HTTP error handling
- Sync hub: 10 Vitest tests (health, auth, validation, type shapes)
- Total: 3672 passing
## [3.2.0] - 2026-03-11

### Added

- Session Intelligence (v4.0 Phase 1) — In-memory session state tracking across MCP calls with topic EMA scoring, LRU eviction (max 10 sessions), 2h auto-expiry, and SQLite persistence via the `session_summaries` table (schema v24)
- Dashboard assets in wheel — Bundled `server/static/dist/` via hatch artifacts config, fixing the blank dashboard on pip install (#54)
### Fixed

- Config singleton mutation — `wizard.py` and `embedding_setup.py` now use an immutable `replace()` pattern instead of mutating the cached config singleton (H1/H2)
- Structure detector false positives — Added a 4096-char size guard and a CSV all-text-column rejection heuristic (H4/H5)
- Source registry validation — `_row_to_source()` handles invalid SourceType/SourceStatus gracefully, `update_source()` validates before SQL write (H2/H3)
- Source handler error handling — `_require_brain_id()` and `Source.create()` wrapped in try/except ValueError (H1/M1)
### Tests
- 40 new tests for session intelligence (QueryRecord, SessionState EMA, SessionManager LRU, SQLite persistence)
- Total: 3650 passing
## [3.1.0] - 2026-03-11

### Added

- Source-Aware Memory (v3.0 Pillar 4) — A brain that knows its sources. 6-phase plan fully shipped.
- `nmem_show` tool — Retrieve the exact verbatim content of a memory by fiber ID
- Exact recall mode — `mode="exact"` in `nmem_recall` returns verbatim content without summarization
- Source Registry — Schema v23 with a `sources` table, `SOURCE_OF` synapse type, and `nmem_source` tool for registering and querying memory provenance
- Structured encoding — Schema-aware encoder detects tabular data (CSV, markdown tables, JSON arrays) and preserves structure through the pipeline
- Citation engine — `citation.py` generates citation metadata with audit synapses linking memories to their sources
- `nmem init --wizard` — Interactive first-run wizard: brain name → embedding provider → MCP config → test memory
- `nmem doctor` — System health diagnostics with 8 checks (Python, config, brain, deps, embeddings, schema, MCP, CLI tools)
- `nmem setup embeddings` — Interactive embedding provider setup with installation status and API key detection
- Change log tracking — `sqlite_change_log.py` records schema and data mutations for an audit trail
### Fixed

- SharedStorage brain_id parity — Abstract `brain_id` property on the base class, all backends implement it consistently (#53)
- Hub auto-creates brain — First sync or device registration no longer fails on a missing brain
- Error message leaks — Batch remember no longer exposes `str(e)` exception details to clients
### Improved
- DX Sprint — Actionable error messages across CLI and MCP, embedding setup guides new users through provider selection
- VS Code extension v0.5.0 — 6 lifecycle and config bug fixes
### Tests
- 200+ new tests across all v3.0 phases (show handler, source registry, structured encoding, citation, audit synapses, DX wizard/doctor/embedding)
- Total: 3515 passing
## [2.29.0] - 2026-03-10

### Added

- Reciprocal Rank Fusion (RRF) — Multi-retriever score blending for anchor ranking. Combines BM25/FTS5, embedding similarity, and graph-expansion ranks into unified scores using the RRF formula (`score = Σ weight_i / (k + rank_i)`). Anchors now start with differentiated activation levels instead of a uniform 1.0. Config: `rrf_k` (default 60).
- Graph-based query expansion — 1-hop neighbor traversal from entity/concept anchors adds soft expansion anchors. Exploits knowledge graph structure for associative priming (e.g., "auth" → OAuth2 → JWT, session). Config: `graph_expansion_enabled`, `graph_expansion_max`, `graph_expansion_min_weight`.
- Personalized PageRank (PPR) activation — Optional replacement for classic BFS spreading activation. Distributes activation proportional to edge weights / out-degree with damping (teleport back to the seed set), naturally handling hub dampening. Opt-in via `activation_strategy = "ppr"` or `"hybrid"` (PPR + reflex). Config: `ppr_damping`, `ppr_iterations`, `ppr_epsilon`.
- Tag filtering in Query API and MCP — `POST /query` accepts `tags: list[str]` (AND filter, max 20). `nmem_recall` accepts `tags: list[str]` to scope results to specific tag sets. Filters across the `tags`, `auto_tags`, and `agent_tags` columns. Backward compatible — `tags=None` returns all results as before.
### Fixed

- Marketplace plugin install — Removed the unrecognized `features` key from `marketplace.json` that caused Claude Code `/plugin marketplace add` to fail with a schema validation error (#49).
## [2.28.0] - 2026-03-08

### Added

- `nmem_remember_batch` — Bulk remember up to 20 memories in a single call. Partial success supported (individual failures don't block others). Added to the `standard` tool tier.
- Trust score — First-class `trust_score` (0.0–1.0) and `source` fields on TypedMemory. Source-specific ceiling caps: `user_input` = 0.9, `ai_inference` = 0.7, `auto_capture` = 0.5, `verified` = 1.0. Schema v22 migration adds columns + index.
- `min_trust` filter — `nmem_recall` accepts an optional `min_trust` parameter to filter out low-confidence memories.
- Auto-promote context→fact — Frequently-recalled context memories (frequency ≥ 5) are automatically promoted to `fact` during consolidation. Audit trail in metadata (`auto_promoted`, `promoted_from`, `promoted_at`).
- SEMANTIC alternative path — Memories can reach the SEMANTIC stage via intensive reinforcement (`rehearsal_count ≥ 15` + 5 distinct 2h windows) as an alternative to the 3-distinct-days spacing requirement. Enables agents with burst usage patterns.
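The source-specific ceiling caps above can be sketched as a simple clamp. The ceilings come from the changelog; the function name and the conservative fallback for unknown sources are assumptions:

```python
# Ceilings as listed in the changelog entry.
TRUST_CEILINGS = {
    "user_input": 0.9,
    "ai_inference": 0.7,
    "auto_capture": 0.5,
    "verified": 1.0,
}

def cap_trust(trust_score: float, source: str) -> float:
    """Clamp a proposed trust score to [0, ceiling] for its source.

    The 0.5 fallback for unknown sources is an assumption made here,
    not documented behavior.
    """
    ceiling = TRUST_CEILINGS.get(source, 0.5)
    return max(0.0, min(trust_score, ceiling))
```

The effect: even a confidently-phrased auto-capture can never outrank a verified memory at recall time.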
### Fixed

- FK constraint race condition — `update_fiber()` no longer raises ValueError when a fiber is deleted between deferred-write enqueue and flush. Gracefully skips with a debug log.
### Changed

- MCP startup 3x faster — Lazy-import `cli.setup` (defer until first-time init is actually needed) and `sync.client`/`sync.sync_engine` (defer aiohttp until the first sync call). Cold start: 611ms → 197ms.
## [2.27.3] - 2026-03-08

### Fixed

- OpenAI-compatible client HTTP 400 — Tool schemas now include a `parameters` alias alongside `inputSchema`, fixing "schema must be type object, got type None" errors when MCP tools are forwarded through OpenAI-compatible bridges (Cursor, LiteLLM, etc.)
### Added

- Cognitive Reasoning Guide — Full workflow documentation: hypothesize, evidence, predict, verify loop with the Bayesian confidence formula and end-to-end examples (`docs/guides/cognitive-reasoning.md`)
- Schema v21 Migration Guide — New tables, auto-migration behavior, rollback instructions (`docs/guides/schema-v21-migration.md`)
- Learning Habits Guide — 3-stage pipeline, thresholds, confidence calculation, suggestion engine (`docs/guides/learning-habits.md`)
- Pre-ship smoke tests — Auto-type classifier (13 cases) and cognitive engine integration test in `scripts/pre_ship.py`
## [2.27.2] - 2026-03-07

### Fixed

- OpenClaw plugin: lazy auto-connect — Fixed tools returning "NeuralMemory service not running" when OpenClaw calls `register()` multiple times across subsystems (gateway, agent worker, CLI). The agent worker instance now lazily connects on the first tool call via `ensureConnected()` with a connection mutex to prevent race conditions (#38)
## [2.27.1] - 2026-03-06

### Added

- `nmem_edit` — Edit memory type, content, or priority by fiber ID. Preserves all neural connections. Supports the typed_memory path (type/priority) and the anchor neuron path (content update)
- `nmem_forget` — Soft delete (sets expires_at for natural decay) or hard delete (permanent removal with cascade to fiber + typed_memory). Also handles orphan neuron deletion
- Enhanced MCP instructions — Richer behavioral directives: brain growth tips, rich language patterns (causal/temporal/relational/decisional/comparative), memory correction guidance, all 38 tools listed
- Enhanced plugin instructions — Comprehensive agent guidance in `.claude-plugin/plugin.json` for proactive memory usage
- Cognitive Reasoning Layer — 8 new MCP tools for hypothesis-driven reasoning (38 tools total)
  - `nmem_hypothesize` — Create and manage hypotheses with Bayesian confidence tracking and auto-resolution
  - `nmem_evidence` — Submit evidence for/against hypotheses, auto-updates confidence via a sigmoid-dampened shift
  - `nmem_predict` — Make falsifiable predictions with deadlines, linked to hypotheses via a PREDICTED synapse
  - `nmem_verify` — Verify predictions as correct/wrong, propagates the result to the linked hypothesis
  - `nmem_cognitive` — Hot index: ranked summary of active hypotheses + pending predictions with a calibration score
  - `nmem_gaps` — Knowledge gap metacognition: detect, track, prioritize, and resolve what the brain doesn't know
  - `nmem_schema` — Schema evolution: evolve hypotheses into new versions via a SUPERSEDES synapse chain
  - `nmem_explain` — (moved to cognitive) Trace the shortest path between concepts with evidence
- Schema v21 — Three new tables: `cognitive_state` (hypothesis/prediction tracking), `hot_index` (ranked cognitive summary), `knowledge_gaps` (metacognition)
- Pure cognitive engine (`engine/cognitive.py`) — Stateless functions: `update_confidence`, `detect_auto_resolution`, `compute_calibration`, `score_hypothesis`, `score_prediction`, `gap_priority`
- Bayesian confidence model — Sigmoid-dampened shift with a surprise factor and diminishing returns from total evidence
- Auto-resolution — Hypotheses with confidence ≥0.9 + 3 supporting evidence auto-confirm; ≤0.1 + 3 against auto-refute
- Prediction calibration — Tracks the correct/wrong ratio across all resolved predictions
- Schema version chain — `parent_schema_id` column + `get_schema_history()` walks the SUPERSEDES chain with a cycle guard
- Knowledge gap detection sources — `contradiction`, `low_confidence_hypothesis`, `user_flagged`, `recall_miss`, `stale_schema`

### Fixed

- FK constraint errors — `INSERT OR REPLACE INTO neuron_states` and `save_maturation` now catch `sqlite3.IntegrityError` when a neuron was deleted by consolidation prune (previously crashed with FOREIGN KEY constraint failed)
- Auto-type classifier bias — Reordered `suggest_memory_type()`: DECISION is now checked before INSIGHT to prevent "because" from hijacking decisions. Removed overly broad "because"/"pattern" from INSIGHT keywords. Added "rejected"/"went with" to DECISION, "prefers"/"preferred" to PREFERENCE. Tightened TODO keywords and added a guard against descriptive "should"
- DECISION_PATTERNS greediness — Removed overly broad patterns (`"we're going to"`, `"let's use"`, `"going to"`) from `auto_capture.py` that caused false decision captures
- Synapse FK error message — Distinguished FOREIGN KEY violations from UNIQUE violations in `add_synapse()` for clearer error messages
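The changelog names the ingredients of the confidence model — a sigmoid-dampened shift, a surprise factor, and diminishing returns from total evidence — without giving the formula. The sketch below is one plausible reading of those ingredients, not the shipped `update_confidence`; every coefficient is an assumption:

```python
from math import exp

def update_confidence(conf: float, supports: bool, total_evidence: int,
                      base_shift: float = 0.15) -> float:
    """One plausible reading of a sigmoid-dampened confidence update (sketch).

    - surprise factor: evidence contradicting a confident hypothesis moves it more
    - diminishing returns: the more evidence already seen, the smaller each shift
    All constants here are illustrative assumptions.
    """
    # Unexpected evidence is more surprising: against a high-conf hypothesis,
    # or for a low-conf one.
    surprise = conf if not supports else (1.0 - conf)
    # Sigmoid-style damping that shrinks as total evidence accumulates.
    damping = 1.0 / (1.0 + exp(0.5 * total_evidence))
    shift = base_shift * (0.5 + surprise) * (0.5 + damping)
    new = conf + shift if supports else conf - shift
    return max(0.0, min(1.0, new))
```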
## [2.26.1] - 2026-03-05

### Added

- Dashboard: actionable health penalties — Top penalties section shows ranked cards with a score bar, penalty points lost, estimated gain if fixed, and the exact action to improve each component
- API: `top_penalties` field in the `/api/dashboard/health` response — exposes diagnostics engine penalty analysis to the frontend
- i18n: penalty translations — English and Vietnamese keys for the top penalties section
## [2.26.0] - 2026-03-05

### Added

- Brain Health Guide (`docs/guides/brain-health.md`) — comprehensive guide explaining all 7 health metrics, thresholds, an improvement roadmap (F through A), common issues, and a maintenance schedule
- Connection Tracing docs (`nmem_explain`) — added to README, MCP prompt, and the brain health guide. Previously undocumented feature that traces the shortest path between concepts
- Embedding auto-detection (`provider = "auto"`) — automatically detects the best available embedding provider: Ollama → sentence-transformers → Gemini → OpenAI. Lowers the barrier for cross-language recall
- Consolidation post-run hints — warns about orphan neurons (>20%) and missing consolidation after running `nmem consolidate`
- Pre-ship verification script (`scripts/pre_ship.py`) — automated quality gate: version consistency, ruff, mypy, import smoke test, fast tests, plugin checks
- MCP instructions update — health interpretation, priority scale, tagging strategy, and maintenance schedule added to the system prompt
### Changed

- README: added nmem_explain to the tools table, plus brain health, connection tracing, and embedding auto-detect sections
- OpenClaw npm package renamed to `neuralmemory` (published on npm)
## [2.25.1] - 2026-03-05

### Fixed

- `nmem flush` stdin blocking — Process hangs forever when spawned as a subprocess without piped input; `sys.stdin.read()` blocks because no EOF is sent. Added a 5s timeout via `ThreadPoolExecutor` (fixes #27)
- Consolidation prune — Protects fiber members from orphan pruning + invariant tests
- Orphan rate — Counts fiber membership correctly, isolated E2E tests from the production DB
- Dashboard dist — Bundled for `pip install` compatibility
### Changed
- Published v2.25.0 release (was stuck in draft)
## [OpenClaw Plugin 1.5.0] - 2026-03-05

### Fixed

- Plugin ID mismatch warning — Renamed the package from `@neuralmemory/openclaw-plugin` to `neuralmemory` to match the manifest `id`. OpenClaw's `deriveIdHint()` extracts the unscoped package name as `idHint`, which previously produced `openclaw-plugin` ≠ `neuralmemory`
- Tool schema provider compatibility — Replaced `integer` with `number` (Gemini rejects `integer`), added `additionalProperties: false` (OpenAI strict mode), removed constraint keywords (`maxLength`, `maxItems`, `minimum`, `maximum`) that some providers reject. The MCP server validates these server-side
- Pre-existing test bugs — Config test was missing `initTimeout` in expected defaults; execute tests passed args as the `id` parameter
## [2.25.0] - 2026-03-04

### Added

- Proactive Memory Auto-Save — 4-layer system ensures agents use NeuralMemory without explicit instructions
  - MCP `instructions` — Behavioral directives in InitializeResult, auto-injected into agent context
  - Post-tool passive capture — Server-side auto-analysis of recall/context/recap/explain results with rate limiting (3/min)
  - Plugin `instructions` field — Short nudge for all plugin users
  - Enhanced stop hook — Transcript capture 80→150 lines, session summary extraction, always saves at least one context memory
- Ollama embedding provider — Local zero-cost inference via the Ollama API (contributed by @xthanhn91)
Fixed¶
- Scale performance bottlenecks — Consolidation prune, neuron dedup, cache improvements (PR #23)
- OpenClaw plugin `execute()` signature — Missing `id` parameter broke all agent tool calls (issue #19)
- Auto-consolidation crash — `ValueError: 'none' is not a valid ConsolidationStrategy` (issue #20)
- `nmem remember --stdin` — CLI now supports piped input for safe shell usage (issue #21)
- CI test compatibility — `test_remember_sensitive_content` mock fix for Python 3.11
[2.24.2] - 2026-03-03¶
Added¶
- Dashboard Phase 2 — Complete visual dashboard overhaul
- Sigma.js graph visualization — WebGL-rendered neural graph with ForceAtlas2 layout, node limit selector (100-1000), click-to-inspect detail panel, color-coded by neuron type
- ReactFlow mindmap — Interactive fiber mindmap with dagre left-to-right tree layout, custom nodes (root/group/leaf), MiniMap, zoom/pan, click-to-select neuron details
- Theme toggle — Light / Dark / System cycle button in TopBar, warm cream light mode (`#faf8f3`), class-based TailwindCSS 4 dark mode via `@custom-variant`
- Delete brain — Trash icon on inactive brains in Overview table with confirmation dialog
- Click-to-switch brain — Click inactive brain row to switch active brain
- CLI update check fix — Editable/dev installs no longer show misleading "Update available" prompts
Removed¶
- Legacy dashboard UI — Removed `dashboard.html`, `index.html`, legacy JS/CSS/locales (4,451 LOC), and the `/static` mount from FastAPI
Dependencies¶
- Added `@xyflow/react`, `@dagrejs/dagre` (ReactFlow mindmap)
- Added `graphology-layout-forceatlas2` (Sigma.js graph layout)
[2.24.1] - 2026-03-03¶
Fixed¶
- IntegrityError in consolidation — `save_maturation` FK constraint failed when orphaned maturation records referenced deleted fibers
- Added `cleanup_orphaned_maturations()` to purge stale records before stage advancement
- Defensive try/except for any remaining FK errors during `_mature()`
Tests¶
- 2 new tests for orphaned maturation handling
- Total: 3145 passing
[2.24.0] - 2026-03-03¶
Fixed¶
- [CRITICAL] SQL Injection Prevention — `get_synapses_for_neurons` direction param validated against a whitelist instead of interpolated into a raw f-string
- [HIGH] BFS max_hops off-by-one — Nodes at depth=max_hops are no longer uselessly enqueued and then discarded
- [HIGH] Bidirectional path search — `memory_store.get_path()` now respects `bidirectional=True` via `to_undirected()`
- [HIGH] JSON-RPC parse errors — Returns a proper `{"code": -32700}` error instead of silently dropping malformed messages
- [HIGH] Encryption failure policy — Returns an error instead of silently storing plaintext when encryption fails
- [HIGH] `disable_auto_save` placement — Moved inside the `try` block in tool_handlers and conflict_handler so `finally` always re-enables
- [HIGH] Cross-brain depth validation — Added int coercion + 0-3 clamping for the depth parameter
- [HIGH] Factory sync exception handling — Narrowed bare `except Exception` to specific exception types
- [HIGH] SSN pattern false positives — Excluded invalid prefixes (000, 666, 900-999); raised base64/hex minimums to 64 chars
- [MEDIUM] MCP notification handling — Unknown notifications return `None` instead of error responses
- [MEDIUM] Brain ID error propagation — New `_get_brain_or_error()` helper prevents uncaught ValueError in 6 handlers
- [MEDIUM] Connection handler I/O — Removed unused brain fetch in `_explain`
- [MEDIUM] Evidence fetch optimization — Removed wasted source neuron from evidence query
- [MEDIUM] Narrative date validation — Added `end_date < start_date` guard
- [MEDIUM] CORS port handling — Enumerate common dev ports instead of an invalid `:*` wildcard
- [MEDIUM] Embedding config — Graceful fallback instead of crash on invalid provider
- [LOW] Type coercion — max_hops/max_fibers/max_depth safely coerced to int
- [LOW] Immutability — Dict mutations replaced with spread patterns in review_handler and encoder
- [LOW] Schema cleanup — Removed empty `"required": []` from nmem_suggest
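The SQL-injection fix follows the standard whitelist pattern: the user-supplied value never reaches the SQL text, it only selects among fixed fragments. An illustrative sketch (table and column names are hypothetical, not the project's schema):

```python
_ALLOWED_DIRECTIONS = {"incoming", "outgoing", "both"}

_DIRECTION_CLAUSES = {
    "incoming": "target_id IN (SELECT value FROM json_each(?))",
    "outgoing": "source_id IN (SELECT value FROM json_each(?))",
}


def build_synapse_query(direction: str) -> str:
    """Validate `direction` against a whitelist before building SQL.

    The value picks one of the fixed clauses above; it is never
    interpolated into the query text, so injection is impossible.
    """
    if direction not in _ALLOWED_DIRECTIONS:
        raise ValueError(f"invalid direction: {direction!r}")
    clause = _DIRECTION_CLAUSES.get(direction)
    if clause is None:  # "both": combine the two fixed fragments
        clause = f"({_DIRECTION_CLAUSES['incoming']} OR {_DIRECTION_CLAUSES['outgoing']})"
    return f"SELECT * FROM synapses WHERE {clause}"
```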
Tests¶
- Fixed and added 5 tests (max_hops_capped, avg_weight, default_hops, tier assertions, embedding fallback)
- Total: 3143 passing
[2.23.0] - 2026-03-03¶
Added¶
- nmem_explain — Connection Explainer — New MCP tool to explain how two entities are related
- Finds shortest path through synapse graph via bidirectional BFS
- Hydrates path with fiber evidence (memory summaries)
- Returns structured steps + human-readable markdown explanation
- New engine module: `connection_explainer.py` with `ConnectionStep` and `ConnectionExplanation` dataclasses
- New handler mixin: `ConnectionHandler` following the established mixin pattern
- Args: `from_entity`, `to_entity` (required), `max_hops` (optional, 1-10, default 6)
Fixed¶
- OpenClaw Compatibility — Handle JSON string arguments in the MCP `tools/call` handler
- OpenClaw sends `arguments` as a JSON string instead of a dict — now auto-parsed
- Prevents a crash when receiving the `"arguments": "{\"content\": \"...\"}"` format
Improved¶
- Bidirectional BFS — `get_path()` in SQLite storage now supports `bidirectional=True`
- Uses `UNION ALL` to traverse both outgoing and incoming synapse edges
- Updated abstract base + all 5 storage implementations
Tests¶
- 11 new tests for connection explainer (engine + MCP handler + integration)
- Total: 3140+ passing
[2.22.0] - 2026-03-03¶
Fixed¶
- #12 Version Mismatch — Detect editable installs in update hint, show version in `nmem_stats`
- #14 Dedup on Remember — Enable SimHash dedup (Tier 1) by default, surface `dedup_hint` in the remember response, skip content < 20 chars
dedup_hintin remember response, skip content < 20 chars - #11 SEMANTIC Stage Blocked — Rehearse maturation records on retrieval so memories can reach SEMANTIC stage (requires 3+ distinct reinforcement days)
- #15 Low Activation Efficiency — Fix Hebbian learning None activation floor (0.1 instead of None → delta > 0), add dormant neuron reactivation during consolidation
Added¶
- #10 Semantic Linking — `SemanticLinkingStep` cross-links entity/concept neurons to existing similar neurons (reduces orphan rate)
- #13 Neuron Diversity — `ExtractActionNeuronsStep` + `ExtractIntentNeuronsStep` extract ACTION/INTENT neurons from verb/goal phrases (improves type diversity from 4-5 to 6-7 of 8 types)
- Dormant Reactivation — Consolidation ENRICH tier bumps up to 20 dormant neurons (access_frequency=0) with +0.05 activation
Tests¶
- 55 new tests across 6 test files: version check (12), dedup default (9), maturation rehearsal (5), semantic linking (6), action/intent extraction (15), activation efficiency (8)
- Total: 3127 passing
[2.21.0] - 2026-03-03¶
Added¶
- Cross-Language Recall Hint — Smart detection when recall misses due to language mismatch
- Detects query language vs brain majority language (Vietnamese ↔ English)
- Shows actionable `cross_language_hint` in recall response when embedding is not enabled
- Suggests `pip install` if sentence-transformers is not installed, config-only if already installed
- `detect_language()` extracted as a reusable module-level function with Vietnamese-unique char detection
- Embedding Setup Guide — Comprehensive docs for all embedding providers
- New `docs/guides/embedding-setup.md` with provider comparison, config examples, troubleshooting
- Free multilingual model recommendation: `paraphrase-multilingual-MiniLM-L12-v2` (50+ languages, 384D, ~440MB)
- Provider comparison table: sentence_transformer (free/local) vs Gemini vs OpenAI
- Embedding Documentation & Onboarding
- README: updated "None — pure algorithmic" → "Optional", added embedding quick-start section
- `.env.example`: added `GEMINI_API_KEY`, `OPENAI_API_KEY` vars
- Onboarding step 6: suggests cross-language recall setup for new users
Improved¶
- Vietnamese Language Detection — More accurate short-text detection
- Added `_VI_UNIQUE_CHARS` set (chars exclusive to Vietnamese, not shared with French/Spanish)
- Short text like "lỗi xác thực" now correctly detected as Vietnamese
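The unique-character trick can be shown in miniature. The character set below is an illustrative subset, not the project's full `_VI_UNIQUE_CHARS`, and the function name is hypothetical:

```python
# Characters used in Vietnamese orthography but absent from French and
# Spanish, so a single occurrence identifies even very short Vietnamese text.
_VI_UNIQUE_CHARS = set(
    "ăđơưạảấầẩậắằẳặẹẻẽếềểễệỉịọỏốồổỗộớờởỡợụủứừửữựỳỵỷỹ"
)


def looks_vietnamese(text: str) -> bool:
    """Detect Vietnamese by unique characters; robust for short strings
    where statistical language detection fails."""
    return any(ch in _VI_UNIQUE_CHARS for ch in text.lower())
```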
Tests¶
- 18 new tests in `test_cross_language_hint.py` (8 detect_language + 10 hint logic)
- All 3090+ tests pass
[2.20.0] - 2026-03-03¶
Added¶
- Gemini Embedding Provider — Cross-language recall via Google Gemini embeddings (PR #9 by @xthanhn91)
- `GeminiEmbedding` provider: `gemini-embedding-001` (3072D), `text-embedding-004` (768D)
- Parallel anchor sources: embedding + FTS5 run concurrently (not fallback-only)
- Config pipeline: `config.toml` `[embedding]` → `EmbeddingSettings` → `BrainConfig` → SQLite
- Doc training embeds anchor neurons for cross-language retrieval
- E2E validated: 100/100 Vietnamese queries on English KB (avg confidence 0.98)
- Optional dependency: `pip install 'neural-memory[embeddings-gemini]'`
- Sufficiency Enhancements — Smarter retrieval gating
- EMA calibration: per-gate accuracy tracking, auto-downgrade unreliable gates
- Per-query-type thresholds: strict (factual), lenient (exploratory), default profiles
- Diminishing returns gate: early-exit when multi-pass retrieval plateaus
Fixed¶
- Comprehensive Audit — 7 CRITICAL, 17 HIGH, 18 MEDIUM fixes
- Security: auth guard on consolidation routes, CORS wildcard removal, path traversal fix
- Performance: `@lru_cache` regex, cached QueryRouter/MemoryEncryptor, `asyncio.gather` embeddings
- Infrastructure: `.dockerignore`, `.env.example`, bounded exports, async cursor managers
- PR #9 Review Fixes — 3 HIGH, 6 MEDIUM, 3 LOW
- Bare except → specific exceptions in doc_trainer
- `EmbeddingSettings` frozen + validated (rejects invalid providers)
- Probe-first early exit in embedding anchor scan (performance)
- Correct task_type for semantic discovery consolidation
- Hardcoded paths → env vars in E2E scripts
Tests¶
- 33 new sufficiency tests (EMA calibration, query profiles, diminishing returns)
- 6 new EmbeddingSettings validation tests
- 13 new Gemini embedding provider tests
- Full suite: 3054 passed, 0 failed
[2.19.0] - 2026-03-02¶
Added¶
- React Dashboard — Modern dashboard replacing legacy Alpine.js/vis.js
- Vite 7 + React 19 + TypeScript + TailwindCSS 4 + shadcn/ui
- Warm cream light theme (`#faf8f3`) with dark mode support
- 7 pages: Overview, Health (Recharts radar), Graph, Timeline, Evolution, Diagrams, Settings
- TanStack Query 5 for data fetching, Zustand 5 for state
- Lazy-loaded routes with skeleton loaders
- `/ui` and `/dashboard` serve the React SPA, legacy at `/ui-legacy` and `/dashboard-legacy`
- Brain file info: paths, sizes, disk usage in Settings page
- Telegram Backup Integration — Send brain `.db` files to Telegram
- `TelegramClient` (aiohttp): `send_message` (auto-split >4096 chars), `send_document`, `backup_brain`
- `TelegramConfig` frozen dataclass in `unified_config.py` (`[telegram]` TOML section)
- CLI: `nmem telegram status`, `nmem telegram test`, `nmem telegram backup [--brain NAME]`
- MCP tool: `nmem_telegram_backup` (28 total tools)
- Dashboard API: `GET /api/dashboard/telegram/status`, `POST .../test`, `POST .../backup`
- Dashboard Settings page: status indicator, test button, backup button
- Bot token via `NMEM_TELEGRAM_BOT_TOKEN` env var only (never in config file)
- Chat IDs in `config.toml` under the `[telegram]` section
- Brain Files API — `GET /api/dashboard/brain-files`
- Returns brains directory path, per-brain file path + size, total disk usage
Tests¶
- 15 new Telegram tests: config, token, client, status, MCP handler
- MCP tool count updated (27→28)
[2.18.0] - 2026-03-02¶
Added¶
- Export Markdown — `nmem brain export --format markdown -o brain.md`
- Human-readable brain export grouped by memory type (facts, decisions, insights, etc.)
- Tag index with occurrence counts
- Statistics table with neuron/synapse/fiber breakdowns
- Pinned memory indicators and sensitive content exclusion support
- New module: `cli/markdown_export.py` (~180 LOC)
- Original Timestamp — `event_at` parameter on `nmem_remember`
- MCP: `nmem_remember(content="Meeting at 8am", event_at="2026-03-02T08:00:00")`
- CLI: `nmem remember "Meeting" --timestamp "2026-03-02T08:00:00"`
- Time neurons and fiber `time_start`/`time_end` use the original event time
- Supports ISO format with optional timezone (auto-stripped for UTC storage)
Changed¶
- Health Roadmap Enhancement — Concrete metrics in improvement actions
- Actions now include specific numbers: "Store memories to build ~250 more connections (current: 0.5 synapses/neuron, target: 3.0+)"
- Added `timeframe` field to roadmap: "~2 weeks with regular use"
- Dynamic action strings computed from actual brain metrics (neuron counts, orphan rate, etc.)
- Grade transition messages include estimated timeframe
Tests¶
- 31 new tests: `test_markdown_export.py` (11), `test_health_roadmap.py` (13), `test_event_timestamp.py` (7)
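The `event_at` timezone handling in 2.18.0 (accept ISO-8601, convert aware timestamps to UTC, then store naive) can be sketched as follows; the helper name is hypothetical:

```python
from datetime import datetime, timezone


def parse_event_at(value: str) -> datetime:
    """Parse an ISO-8601 event timestamp. If a timezone offset is
    present, convert to UTC and strip tzinfo so all stored values
    are naive UTC; naive inputs pass through unchanged."""
    dt = datetime.fromisoformat(value)
    if dt.tzinfo is not None:
        dt = dt.astimezone(timezone.utc).replace(tzinfo=None)
    return dt
```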
[2.17.0] - 2026-03-02¶
Added¶
- Knowledge Base Training — Multi-format document extraction with pinned memories
- 12 supported formats: .md, .mdx, .txt, .rst (passthrough), .pdf, .docx, .pptx, .html/.htm (rich docs), .json, .xlsx, .csv (structured data)
- `doc_extractor.py` — Format-specific extractors with a 50MB file size limit
- Optional dependencies via `neural-memory[extract]` for non-text formats (pymupdf4llm, python-docx, python-pptx, beautifulsoup4, markdownify, openpyxl)
- Pinned Memories — Permanent knowledge that bypasses decay, pruning, and compression
- `Fiber.pinned: bool` field — pinned fibers skip all lifecycle operations
- 4 lifecycle bypass points: decay, pruning, compression, maturation
- `nmem_pin` MCP tool for manual pin/unpin
- Training File Dedup — SHA-256 hash tracking prevents re-ingesting the same documents
- `training_files` table with hash, status, progress tracking
- Resume support for interrupted training sessions
- Tool Memory System — Tracks MCP tool usage patterns and effectiveness
- `MemoryType.TOOL` — New memory type (90-day expiry, 0.06 decay rate)
- `SynapseType.EFFECTIVE_FOR` + `USED_WITH` — Tool effectiveness and co-occurrence synapses
- PostToolUse hook — Fast JSONL buffer capture (<50ms, no SQLite on hot path)
- `engine/tool_memory.py` — Batch processing during consolidation
- `PROCESS_TOOL_EVENTS` consolidation strategy
Fixed (Comprehensive Audit — 4 CRITICAL, 8 HIGH, 12 MEDIUM)¶
- CRITICAL: Auth guard on consolidation routes, CORS wildcard removal, path traversal fix, coverage threshold enforcement
- HIGH: Reject null client IP, sanitize error messages, Windows ACL key protection, FalkorDB password warning
- Performance: Module-level regex compilation with `@lru_cache`, cached QueryRouter + MemoryEncryptor (lazy singleton), `asyncio.gather` for parallel embeddings, batch neuron delete (chunked 500), SQL FILTER clause combining queries
- Infrastructure: `.dockerignore`, `.env.example`, bounded export (LIMIT 50000), `asyncio.Lock` for storage cache, cursor context managers
Changed¶
- Schema version 18 → 20 (tool_events table, pinned column on fibers, training_files table)
- SynapseType enum: 22 → 24 types (EFFECTIVE_FOR, USED_WITH)
- MemoryType enum: 10 → 11 types (TOOL)
- MCP tools: 26 → 27 (added nmem_pin)
- ROADMAP.md — Complete rewrite as forward-looking 5-phase vision
- Agent instructions — 7 new sections covering all 28 MCP tools
- MCP prompt — Added KB training, pin, health, review, import instructions
[2.16.0] - 2026-02-28¶
Added¶
- Algorithmic Sufficiency Check — Post-stabilization gate that early-exits when activation signal is too weak
- 8-gate evaluation (priority-ordered, first match wins): no_anchors, empty_landscape, unstable_noise, ambiguous_spread, intersection_convergence, high_coverage_strong_hit, focused_result, default_pass
- Unified confidence formula from 7 weighted inputs (activation, focus_ratio, coverage, intersection_ratio, proximity, stability, path_diversity)
- Conservative bias — false-INSUFFICIENT penalized 10× more than false-SUFFICIENT
- `engine/sufficiency.py` (~302 LOC), `storage/sqlite_calibration.py` (~133 LOC)
- Schema migration v17 → v18 (`retrieval_calibration` table)
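The unified confidence formula is a weighted sum of normalized signals. A sketch with hypothetical weights — the real values live in `engine/sufficiency.py` and are not reproduced here:

```python
# Illustrative weights only; the shipped weighting may differ.
WEIGHTS = {
    "activation": 0.25, "focus_ratio": 0.20, "coverage": 0.15,
    "intersection_ratio": 0.15, "proximity": 0.10, "stability": 0.10,
    "path_diversity": 0.05,
}


def confidence(signals: dict[str, float]) -> float:
    """Unified confidence: weighted sum of the 7 inputs, each clamped
    to [0, 1]; missing signals count as 0 (conservative bias)."""
    score = sum(
        WEIGHTS[name] * max(0.0, min(1.0, signals.get(name, 0.0)))
        for name in WEIGHTS
    )
    return round(score, 4)
```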
[2.15.1] - 2026-02-28¶
Fixed¶
- SharedStorage CRUD Endpoint Mismatch — Client called endpoints that didn't exist on server
- Added 14 CRUD endpoints to `server/routes/memory.py` (neurons + synapses full lifecycle, state, neighbors, path)
- 6 new Pydantic models in `server/models.py`
- Brain Import Deduplication — Changed `INSERT` → `INSERT OR REPLACE` in `sqlite_brain_ops.py` for idempotent imports
[2.15.0] - 2026-02-28¶
Added¶
- Trusted Networks for Docker/Container Deployments — Configurable non-localhost access via `NEURAL_MEMORY_TRUSTED_NETWORKS` env var (CIDR notation)
- `is_trusted_host()` function with safe `ipaddress` module validation
- Default remains localhost-only (secure by default)
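A minimal sketch of the described `is_trusted_host()` behavior using the stdlib `ipaddress` module; the details below are assumptions about the shipped implementation, not a copy of it:

```python
import ipaddress
import os


def is_trusted_host(host: str) -> bool:
    """Localhost is always trusted; anything else must fall inside one
    of the CIDR ranges in NEURAL_MEMORY_TRUSTED_NETWORKS."""
    if host in ("127.0.0.1", "::1", "localhost"):
        return True
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not an IP literal -> untrusted
    raw = os.environ.get("NEURAL_MEMORY_TRUSTED_NETWORKS", "")
    for cidr in (part.strip() for part in raw.split(",") if part.strip()):
        try:
            if addr in ipaddress.ip_network(cidr, strict=False):
                return True
        except ValueError:
            continue  # skip malformed CIDR entries rather than crash
    return False
```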
Fixed¶
- OpenClaw Plugin Zod Peer Dependency — Pinned `zod` to `^3.0.0`
[2.14.0] - 2026-02-27¶
Added¶
- MCP Tool Tiers — 3-tier system (minimal/standard/full) for controlling exposed tools
- `ToolTierConfig` frozen dataclass with case-insensitive tier parsing
- `get_tool_schemas_for_tier()` filters tools by tier level
- Minimal: 4 core tools, Standard: 8 tools, Full: all 27 tools
- Hidden tools still callable via dispatch (tier controls visibility, not access)
- Consolidation Eligibility Hints — `_eligibility_hints()` explains why 0 changes happened
- Habits Status — Progress bars for emerging patterns
- Diagnostics Improvements — Actionable recommendations with specific numbers
- Graph SVG Export — Pure Python SVG export with dark theme, zero external deps
[2.13.0] - 2026-02-27¶
Added¶
- Error Resolution Learning — When a new FACT/INSIGHT contradicts an existing ERROR memory, the system creates a `RESOLVED_BY` synapse linking fix → error instead of just flagging a conflict
- `RESOLVED_BY` synapse type added to the `SynapseType` enum (22 types total)
- Resolved errors get ≥50% activation demotion (2× stronger than normal conflicts)
- Error neurons marked with `_conflict_resolved` and `_resolved_by` metadata
- Auto-detection via neuron metadata `{"type": "error"}` — no caller changes needed
- Zero-cost: pure graph manipulation, no LLM calls
- 7 new tests in `test_error_resolution.py`
Changed¶
- `resolve_conflicts()` accepts an optional `existing_memory_type` parameter
- `conflict_detection.py` now imports the `logging` module for RESOLVED_BY synapse debug logging
[2.8.1] - 2026-02-23¶
Added¶
- FalkorDB Graph Storage Backend — Optional graph-native storage replacing SQLite for high-performance traversal
- `FalkorDBStorage` composite class implementing the full `NeuralStorage` ABC via 5 specialized mixins
- `FalkorDBBaseMixin` — connection pooling, query helpers (`_query`, `_query_ro`), index management
- `FalkorDBNeuronMixin` — neuron CRUD with graph node operations
- `FalkorDBSynapseMixin` — synapse CRUD with typed graph edges
- `FalkorDBFiberMixin` — fiber CRUD with `CONTAINS` relationships, batch operations
- `FalkorDBGraphMixin` — native Cypher spreading activation (1-4 hop BFS via variable-length paths)
- `FalkorDBBrainMixin` — brain registry graph, import/export, graph-level clear
- Brain-per-graph isolation (`brain_{id}`) for native multi-tenancy
- Read-only query routing via `ro_query` for registry reads and fiber lookups
- Per-neuron limit enforcement in `find_fibers_batch` via UNWIND+collect/slice Cypher pattern
- Connection health verification via Redis PING with automatic reconnect
- `docker-compose.falkordb.yml` — standalone FalkorDB service configuration
- Migration CLI: `nmem migrate falkordb` to move SQLite brain data to FalkorDB
- 69 tests across 6 test files (auto-skip when FalkorDB unavailable)
- SQLite remains default — FalkorDB is opt-in via `[storage]` TOML config
Fixed¶
- mypy: `set_brain` missing from ABC — Added `set_brain(brain_id)` to the `NeuralStorage` base class, resolving 2 mypy errors in `unified_config.py`
- Registry reads used write queries — Added `_registry_query_ro()` for read-only brain registry operations (`get_brain`, `find_brain_by_name`)
- `find_fibers_batch` ignored `limit_per_neuron` — Rewrote with UNWIND+collect/slice Cypher for proper per-neuron limiting
- FalkorDB health check was superficial — `_get_falkordb_storage()` now performs an actual Redis PING instead of just a `_db is not None` check
- `export_brain` leaked `brain_id` in error — Sanitized to generic "Brain not found" message
- Import sorting (I001) — Fixed `falkordb.asyncio` before `redis.asyncio` in `falkordb_store.py`
- Unused import (F401) — Removed stale `SQLiteStorage` import from `unified_config.py`
- Quoted annotation (UP037) — Unquoted `_storage_cache` and `_falkordb_storage` type annotations
- Silent error logging — Upgraded index creation and connection close errors from debug to warning level
[2.8.0] - 2026-02-22¶
Added¶
- Adaptive Recall (Bayesian Depth Prior) — System learns optimal retrieval depth per entity pattern
- Beta distribution priors per (entity, depth) pair — picks depth with highest E[Beta(a,b)]
- 5% epsilon exploration to discover better depths for known entities
- Fallback to rule-based detection when < 5 queries or no priors exist
- Outcome recording: updates alpha (success) or beta (failure) based on confidence + fibers_matched
- 30-day decay (a and b each multiplied by 0.9) to forget stale patterns
- `DepthPrior`, `DepthDecision` frozen dataclasses + `AdaptiveDepthSelector` engine
- `SQLiteDepthPriorMixin` with batch fetch, upsert, stale decay, delete operations
- Configurable: `adaptive_depth_enabled` (default True), `adaptive_depth_epsilon` (default 0.05)
- Tiered Memory Compression — Age-based compression preserving entity graph structure (zero-LLM)
- 5 tiers: Full (< 7d), Extractive (7-30d), Entity-only (30-90d), Template (90-180d), Graph-only (180d+)
- Entity density scoring: `count(neurons_referenced) / word_count` per sentence
- Reversible for tiers 1-2 (backup stored), irreversible for tiers 3-4
- Integrated as `COMPRESS` strategy in `ConsolidationEngine` (Tier 2)
- `CompressionTier` IntEnum, `CompressionConfig`, `CompressionResult` frozen dataclasses
- `SQLiteCompressionMixin` for backup storage with stats
- Configurable: `compression_enabled` (default True), `compression_tier_thresholds` (7, 30, 90, 180 days)
- Multi-Device Sync — Hub-and-spoke incremental sync via change log + sequence numbers
- Device Identity: UUID-based device_id generation, persisted in config, `DeviceInfo` frozen dataclass
- Change Tracking: Append-only `change_log` table recording all neuron/synapse/fiber mutations
- `ChangeEntry` frozen dataclass, `SQLiteChangeLogMixin` with 6 CRUD methods
- `record_change()`, `get_changes_since(sequence)`, `mark_synced()`, `prune_synced_changes()`
- Incremental Sync Protocol: Delta-based merge using neural-aware conflict resolution
- `SyncRequest`, `SyncResponse`, `SyncChange`, `SyncConflict` frozen dataclasses
- `ConflictStrategy` enum: prefer_recent, prefer_local, prefer_remote, prefer_stronger
- Neural merge rules: weight=max, access_frequency=sum, tags=union, conductivity=max, delete wins
- Sync Engine: `SyncEngine` orchestrator with `prepare_sync_request()`, `process_sync_response()`, `handle_hub_sync()`
- Hub Server Endpoints (localhost-only by default):
- `POST /hub/register` — register device for brain
- `POST /hub/sync` — push/pull incremental changes
- `GET /hub/status/{brain_id}` — sync status + device count
- `GET /hub/devices/{brain_id}` — list registered devices
- 3 new MCP tools (full tier only):
- `nmem_sync` — trigger manual sync (push/pull/full)
- `nmem_sync_status` — show pending changes, devices, last sync
- `nmem_sync_config` — configure hub URL, auto-sync, conflict strategy
- `SyncConfig` frozen dataclass: enabled (default False), hub_url, auto_sync, sync_interval_seconds, conflict_strategy
- Device tracking columns on neurons/synapses/fibers: `device_id`, `device_origin`, `updated_at`
- Schema migrations v15 → v16 (depth_priors, compression_backups, fiber compression_tier) → v17 (change_log, devices, device columns)
Changed¶
- SQLite schema — Version 15 → 17 (two migrations)
- MCP tools — Expanded from 23 to 26 (`nmem_sync`, `nmem_sync_status`, `nmem_sync_config`)
- MCPServer mixin chain — Added `SyncToolHandler` mixin
- `Fiber` model — Added `compression_tier: int = 0` field
- `BrainConfig` — Added 4 new fields: `adaptive_depth_enabled`, `adaptive_depth_epsilon`, `compression_enabled`, `compression_tier_thresholds`
- `UnifiedConfig` — Added `device_id` field and `SyncConfig` dataclass
- `ConsolidationEngine` — Added `COMPRESS` strategy enum + Tier 2 registration + `fibers_compressed`/`tokens_saved` report fields
- Hub endpoints — Pydantic request validation with regex-based brain_id/device_id format checks
- Tests: 2687 passed (up from 2527), +160 new tests across 8 test files
[2.7.1] - 2026-02-21¶
Added¶
- MCP Tool Tiers — Config-based filtering to reduce token overhead per API turn
- 3 tiers: `minimal` (4 tools, ~84% savings), `standard` (8 tools, ~69% savings), `full` (all 23, default)
- `ToolTierConfig` frozen dataclass in `unified_config.py` with `from_dict()`/`to_dict()`
- `get_tool_schemas_for_tier(tier)` in `tool_schemas.py` — filters schemas by tier
- `[tool_tier]` TOML section in `config.toml` for persistent configuration
- Hidden tools remain callable via dispatch — only schema exposure changes
- CLI command: `nmem config tier [--show | minimal | standard | full]`
- Description Compression — All 23 tool descriptions compressed (~22% token reduction at full tier)
Changed¶
- `MCPServer.get_tools()` now respects the `config.tool_tier.tier` setting
- `tool_schemas.py` refactored: `_ALL_TOOL_SCHEMAS` module-level list + `TOOL_TIERS` dict
- Tests: added 28 new tests in `test_tool_tiers.py`
[2.7.0] - 2026-02-18¶
Added¶
- Spaced Repetition Engine — Leitner box system (5 boxes: 1d, 3d, 7d, 14d, 30d) for memory reinforcement
- `ReviewSchedule` frozen dataclass: fiber_id, brain_id, box (1–5), next_review, streak, review_count
- `SpacedRepetitionEngine`: `get_review_queue()`, `process_review()` (calls `ReinforcementManager`), `auto_schedule_fiber()`
- `advance(success)` returns a new schedule instance — box increments on success (max 5), resets to 1 on failure
- Auto-scheduling: fibers with `priority >= 7` are automatically scheduled in `_remember`
- `SQLiteReviewsMixin`: upsert, get_due, get_stats with `min(limit, 100)` cap
- `InMemoryReviewsMixin` for testing
- `ReviewHandler` MCP mixin: `nmem_review` tool (queue/mark/schedule/stats actions)
- Schema migration v14 → v15 (`review_schedules` table + 2 indexes)
- Memory Narratives — Template-based markdown narrative generation (no LLM)
- 3 modes: `timeline` (date range), `topic` (spreading activation via `ReflexPipeline`), `causal` (CAUSED_BY chain traversal)
- `NarrativeItem` + `Narrative` frozen dataclasses with `to_markdown()` rendering
- Timeline mode: queries fibers by date range, sorts chronologically, groups by date headers
- Topic mode: runs SA query, fetches matched fibers, sorts by relevance
- Causal mode: uses `trace_causal_chain()` to follow CAUSED_BY synapses, builds a cause→effect narrative
- `NarrativeHandler` MCP mixin: `nmem_narrative` tool (timeline/topic/causal actions)
- Configurable `max_fibers` with a server-side cap of 50
- Semantic Synapse Discovery — Offline consolidation using embeddings to find latent connections
- Batch embeds CONCEPT + ENTITY neurons, evaluates cosine similarity pairs above threshold
- Creates SIMILAR_TO synapses with `weight = similarity * 0.6` and `{"_semantic_discovery": True}` metadata
- Configurable: `semantic_discovery_similarity_threshold` (default 0.7), `semantic_discovery_max_pairs` (default 100)
- Integrated as Tier 5 (`SEMANTIC_LINK`) in `ConsolidationEngine` strategy dispatch
- 2× faster decay for unreinforced semantic synapses in `_prune` (reinforced_count < 2 → decay factor 0.5)
- Optional — gracefully skipped if `sentence-transformers` is not installed
- `SemanticDiscoveryResult` dataclass: neurons_embedded, pairs_evaluated, synapses_created, skipped_existing
- Cross-Brain Recall — Parallel spreading activation across multiple brains
- Extends `nmem_recall` with an optional `brains` array parameter (max 5 brains)
- Resolves brain names → DB paths via `UnifiedConfig`, opens a temporary `SQLiteStorage` per brain
- Parallel query via `asyncio.gather`, each brain runs an independent `ReflexPipeline`
- SimHash-based deduplication across brain results (keeps higher confidence on collision)
- Confidence-sorted merge with `[brain_name]` prefixed context sections
- `CrossBrainFiber` + `CrossBrainResult` frozen dataclasses
- Temporary storage instances closed in `finally` blocks
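The Leitner advance rule is easy to show in miniature; this is simplified relative to the real `ReviewSchedule`, which also tracks brain_id, next_review, and review_count:

```python
from dataclasses import dataclass, replace
from datetime import datetime, timedelta

# Review intervals per Leitner box, as described: 1d, 3d, 7d, 14d, 30d.
BOX_INTERVALS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}


@dataclass(frozen=True)
class ReviewSchedule:
    fiber_id: str
    box: int = 1
    streak: int = 0

    def advance(self, success: bool,
                now: datetime) -> tuple["ReviewSchedule", datetime]:
        """Return a new frozen instance plus the next review time:
        box +1 on success (capped at 5), reset to box 1 on failure."""
        box = min(self.box + 1, 5) if success else 1
        streak = self.streak + 1 if success else 0
        next_review = now + timedelta(days=BOX_INTERVALS[box])
        return replace(self, box=box, streak=streak), next_review
```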
Changed¶
- MCPServer mixin chain — Added `ReviewHandler` + `NarrativeHandler` mixins (16 → 18 handler mixins)
- MCP tools — Expanded from 21 to 23 (`nmem_review`, `nmem_narrative`)
- SQLite schema — Version 14 → 15 (`review_schedules` table)
- `nmem_recall` schema — Added `brains` array property for cross-brain queries
- `BrainConfig` — Added `semantic_discovery_similarity_threshold` and `semantic_discovery_max_pairs` fields
- `ConsolidationEngine` — Added `SEMANTIC_LINK` strategy enum + Tier 5 + `semantic_synapses_created` report field
- Consolidation prune — Unreinforced semantic synapses (`_semantic_discovery` metadata) decay at 2× rate
- Tests: 2399 passed (up from 2314), +85 new tests across 4 features
[2.6.0] - 2026-02-18¶
Added¶
- Smart Context Optimizer — Composite scoring replaces the naive loop in `nmem_context`
- 5-factor weighted score: activation (0.30) + priority (0.25) + frequency (0.20) + conductivity (0.15) + freshness (0.10)
- SimHash-based deduplication removes near-duplicate content before token budgeting
- Proportional token budget allocation: items get budget proportional to their composite score
- Items below minimum budget (20 tokens) are dropped; oversized items are truncated
- `optimization_stats` field in response shows `items_dropped` and `top_score`
- Proactive Alerts Queue — Persistent brain health alerts with full lifecycle management
- `Alert` frozen dataclass with `AlertStatus` (active → seen → acknowledged → resolved) and 7 `AlertType` enum values
- `SQLiteAlertsMixin` with CRUD operations: `record_alert` (6h dedup cooldown), `get_active_alerts`, `mark_alerts_seen`, `mark_alert_acknowledged`, `resolve_alerts_by_type`
- `AlertHandler` MCP mixin: `nmem_alerts` tool (list/acknowledge actions)
- Auto-creation from health pulse hints; auto-resolution when conditions clear
- Pending alert count surfaced in `nmem_remember`, `nmem_recall`, `nmem_context` responses
- Schema migration v13 → v14 (alerts table + indexes)
- Recall Pattern Learning — Discover and materialize query topic co-occurrence patterns
- `extract_topics()` — keyword-based topic extraction from recall queries (min_length=3, cap 10)
- `mine_query_topic_pairs()` — session-grouped, time-windowed (600s default) pair mining
- `extract_pattern_candidates()` — frequency filtering + confidence scoring
- `learn_query_patterns()` — materializes patterns as CONCEPT neurons + BEFORE synapses with `{"_query_pattern": True}` metadata
- `suggest_follow_up_queries()` — follows BEFORE synapses for related topic suggestions
- Integrated into LEARN_HABITS consolidation strategy
- `related_queries` field added to `nmem_recall` response
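The SimHash deduplication used by the optimizer can be illustrated with the classic token-hash voting scheme; the tokenizer, hash choice, and distance threshold below are illustrative, not the project's exact parameters:

```python
import hashlib


def simhash(text: str, bits: int = 64) -> int:
    """Classic SimHash over whitespace tokens: each token's hash votes
    per bit position, so near-identical texts get nearby fingerprints."""
    weights = [0] * bits
    for token in text.lower().split():
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i, w in enumerate(weights) if w > 0)


def is_near_duplicate(a: str, b: str, max_distance: int = 3) -> bool:
    """Near-duplicate when the fingerprints' Hamming distance is small."""
    return bin(simhash(a) ^ simhash(b)).count("1") <= max_distance
```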
Changed¶
- MCPServer mixin chain — Added `AlertHandler` mixin (15 → 16 handler mixins)
- MCP tools — Expanded from 20 to 21 (`nmem_alerts`)
- SQLite schema — Version 13 → 14 (alerts table)
- `nmem_context` response — Now includes `optimization_stats` when items are dropped
- `nmem_recall` response — Now includes `related_queries` from learned patterns
- Tests: 2314 passed (up from 2291)
[2.5.0] - 2026-02-18¶
Added¶
- Onboarding flow — Detects a fresh brain (0 neurons + 0 fibers) and surfaces a 4-step getting-started guide on the first tool call (`_remember`, `_recall`, `_context`, `_stats`). Shows once per server instance.
- Background expiry cleanup — Fire-and-forget task auto-deletes expired `TypedMemory` + underlying fibers on a configurable interval (default 12h, max 100/run). Fires `MEMORY_EXPIRED` hooks. Piggybacks on `_check_maintenance()`.
- Scheduled consolidation — Background `asyncio` loop runs consolidation every 24h (configurable strategies: prune, merge, enrich). Shares `_last_consolidation_at` with `MaintenanceHandler` to prevent overlap. Initial delay of one full interval avoids triggering on restart.
- Version check handler — Background task checks PyPI every 24h for newer versions of `neural-memory`. Caches the result and surfaces `update_hint` in `_remember`, `_recall`, `_stats` responses when an update is available. Uses `urllib` (no extra deps), validates HTTPS scheme.
- Expiry alerts — `warn_expiry_days` parameter on `nmem_recall`; expiring-soon count in health pulse thresholds
- Evolution dashboard — `/api/evolution` REST endpoint + dashboard UI tab for brain maturation metrics (stage distribution, plasticity, proficiency)
Changed¶
- MaintenanceConfig — Added 8 new config fields: `expiry_cleanup_enabled`, `expiry_cleanup_interval_hours`, `expiry_cleanup_max_per_run`, `scheduled_consolidation_enabled`, `scheduled_consolidation_interval_hours`, `scheduled_consolidation_strategies`, `version_check_enabled`, `version_check_interval_hours`
- MCPServer mixin chain — Added `OnboardingHandler`, `ExpiryCleanupHandler`, `ScheduledConsolidationHandler`, `VersionCheckHandler` mixins
- Server lifecycle — `run_mcp_server()` now starts scheduled consolidation + version check at startup, cancels all background tasks on shutdown
[2.4.0] - 2026-02-17¶
Security¶
- 6-phase security audit — Comprehensive audit across 142K LOC / 190 files covering engine, storage, server, config, MCP/CLI, core, safety, utils, sync, integration, and extraction modules
- Path traversal fixes — 3 CRITICAL path injection vulnerabilities in CLI commands (tools, brain import, shortcuts) patched with `resolve()` + `is_relative_to()`
- CORS hardening — Replaced wildcard patterns with explicit localhost origins in FastAPI server
- TOML injection prevention — Added `_sanitize_toml_str()` for user-provided dedup config fields
- API key masking — `BrainModeConfig.to_dict()` now serializes api_key as `"***"` instead of plaintext
- Info leak prevention — Removed internal IDs, adapter names, and filesystem paths from 5 error messages across MCP, integration, and sync modules
- WebSocket validation — Brain ID format + length validation on subscribe action
- Path normalization — `SQLiteStorage` and `NEURALMEMORY_DIR` env var paths now resolved with `Path.resolve()`
Fixed¶
- Frozen core models — `Synapse`, `Fiber`, `NeuronState`, `BrainSnapshot`, `FreshnessResult`, `MemoryFreshnessReport`, `Entity`, `WeightedKeyword`, `TimeHint` dataclasses are now `frozen=True` per immutability contract
- merge_brain() atomicity — Restore from backup on import failure instead of leaving empty brain
- import_brain() orphan — Brain record INSERT moved inside transaction to prevent orphan on failure
- Division-by-zero guards — `_predicates_conflict()` and homeostatic normalization protected against empty inputs
- Datetime hardening — 4 `datetime.fromisoformat()` call sites wrapped with try/except + naive UTC enforcement
- Lateral inhibition — Ceiling division for fair slot allocation across clusters
- suggest_memory_type — Word boundary matching prevents false positives (e.g. "add" no longer matches "address")
- Git update command — Detects current branch instead of hardcoded 'main'
- Dead code removal — Removed unused `updated_at` field, duplicate index, stale imports
Performance¶
- N+1 query elimination — `consolidation._prune()` pre-fetches neighbor synapses in batch (was 500+ serial queries); `activation.activate()` caches neighbors + batch state pre-fetch (was ~1000 queries); `conflict_detection` uses `asyncio.gather()` for parallel searches
- Export safety caps — `export_brain()` limited to 50K neurons, 100K synapses, 50K fibers
- Bounds enforcement — 15+ storage methods capped with `min(limit, MAX)`, schema tool limits enforced
- Regex pre-compilation — `sensitive.py` and `trigger_engine.py` patterns compiled at module level with cache
- Enrichment optimization — Early exit on empty tags + zero intersection in O(n²) Jaccard loop
- ReDoS prevention — Content length cap (100K chars) before regex matching in sensitive content detection
Changed¶
- BrainConfig.with_updates() — Replaced 80-line manual field copy with `dataclasses.replace()`
- DriftReport.variants — Changed from mutable `list` to `tuple` on frozen dataclass
- Mutable constants — `VI_PERSON_PREFIXES` and `LOCATION_INDICATORS` converted to `frozenset`
- Error handling — 8 bare `except Exception` blocks narrowed to specific exception types with logging
[2.2.0] - 2026-02-13¶
Added¶
- Config presets — Three built-in profiles: `safe-cost` (token-efficient), `balanced` (defaults), `max-recall` (maximum retention). CLI: `nmem config preset <name> [--list] [--dry-run]`
- Consolidation delta report — `run_with_delta()` wrapper computes before/after health snapshots around consolidation, showing purity, connectivity, and orphan rate changes. CLI consolidate now shows health delta.
Fixed¶
- CI lint parity — CI now passes: fixed 14 lint errors in test files (unused imports, sorting, Yoda conditions)
- Release workflow idempotency — `gh release create` no longer fails when release already exists; uploads assets to existing release instead
- CI test timeouts — Added `pytest-timeout` (60s default) and `timeout-minutes: 15` to prevent stuck CI jobs
Changed¶
- Makefile — Added `verify` target matching CI exactly (lint + format-check + typecheck + test-cov + security)
- Auto-consolidation observability — Background auto-consolidation now logs purity delta for monitoring
[2.1.0] - 2026-02-13¶
Fixed¶
- Brain reset on config migration — When upgrading to unified config (config.toml), `current_brain` is now migrated from legacy config.json so users don't lose their active brain selection
- EternalHandler stale brain cache — Eternal context now detects brain switches and re-creates the context instead of caching the initial brain ID indefinitely
- Ruff lint errors — Fixed 7 pre-existing lint violations (unused imports, naming convention, import ordering)
- Mypy type errors — Fixed 2 pre-existing type errors (`Any` import, `set()` arg-type)
Added¶
- CLI `--version` flag — `nmem --version` / `nmem -V` now prints version and exits (standard CLI convention)
- Actionable health scoring — `nmem_health` now returns `top_penalties`: top 3 ranked penalty factors with estimated gain and suggested action
- Semantic stage progress — `nmem_evolution` now returns `stage_distribution` (fiber counts per maturation stage) and `closest_to_semantic` (top 3 EPISODIC fibers with progress % and next step)
- Composable encoding pipeline — Refactored monolithic `encode()` into 14 composable async pipeline steps (`PipelineContext` / `PipelineStep` / `Pipeline`)
Changed¶
- Dependency warning suppression — pyvi/NumPy DeprecationWarnings are now suppressed at import time with targeted `filterwarnings`
[2.3.1] - 2026-02-17¶
Refactored¶
- Engine cleanup — Removed 176 lines of dead code across 6 engine modules
- Deduplicated stop-word sets into shared `_STOP_WORDS` frozenset in `conflict_detection.py`
- Replaced manual `Fiber()` constructor with `dc_replace()` in `consolidation.py`
- Removed unused `reconstitute_answer()` from `retrieval_context.py`
- Hoisted expansion suffix/prefix constants to module level in `retrieval.py`
- Used `heapq.nlargest` instead of sorted+slice in retrieval reinforcement
- Typed consolidation dispatch dict with `Callable[[], Awaitable[None]]` instead of `Any`
Fixed¶
- Unreachable break in dream — Outer loop guard added to prevent quadratic blowup when activated neuron list is large (max 50K pairs)
- JSON snapshot validation — `brain_versioning.py` now validates parsed JSON is a dict before field access
[2.3.0] - 2026-02-16¶
Added¶
- PreCompact + Stop auto-flush hooks — Pre-compaction hook fires before context compression, parallel CI tests support
- Emergency flush (`nmem_auto action="flush"`) — Pre-compaction emergency capture that skips dedup, lowers confidence threshold to 0.5, enables all memory types regardless of config, and boosts priority +2. Tag `emergency_flush` applied to all captured memories. Inspired by OpenClaw Memory's Layer 3 (memoryFlush)
- Session gap detection — `nmem_session` (action="get") now returns `gap_detected: true` when content may have been lost between sessions (e.g. user ran `/new` without saving). Uses MD5 fingerprint stored on `session_set`/`session_end` to detect gaps from older code paths missing fingerprints
- Auto-capture preference patterns — Detects explicit preferences ("I prefer...", "always use..."), corrections ("that's wrong...", "actually, it should be..."), and Vietnamese equivalents. New memory type `preference` with 0.85 confidence
- Windows surrogate crash fix — MCP server now strips lone surrogate characters (U+D800-U+DFFF) from tool arguments before processing, preventing `UnicodeEncodeError` on Windows stdio pipes
Fixed¶
- CI lint failure — Fixed ruff RUF002 (ambiguous EN DASH `–` in docstring) in `mcp/server.py`
- CI stress test timeouts — Skipped stress tests on GitHub runners to prevent CI timeout failures
Changed¶
- Release workflow hardened — `release.yml` now validates tag version matches `pyproject.toml` + `__init__.py` before publishing, and runs full CI (lint + typecheck + test) as a gate before PyPI upload
[Unreleased]¶
Fixed¶
- Agent forgets tools after `/new` — `before_agent_start` hook now always injects `systemPrompt` with tool instructions, ensuring the agent knows about NeuralMemory tools even after session reset. Previously only `prependContext` (data) was injected, leaving the agent unaware of available tools
- Agent confuses CLI vs MCP tool calls — `systemPrompt` injection explicitly states "call as tool, NOT CLI command", preventing agents from running `nmem remember` in terminal instead of calling the `nmem_remember` tool
- `openclaw plugins list` not recognizing plugin on Windows — Changed `main` and `openclaw.extensions` from TypeScript source (src/index.ts) to compiled output (dist/index.js). Added `prepublishOnly` and `postinstall` build scripts. Fixed `tsconfig.json` module resolution from `bundler` to `Node16` for broader compatibility
- OpenClaw plugin ID mismatch — Added explicit `"id": "neuralmemory"` to `openclaw` section in `package.json`, fixing the `plugin id mismatch (manifest uses "neuralmemory", entry hints "openclaw-plugin")` warning
- Content-Length framing bug — Switched from string-based buffer to raw `Buffer` for byte-accurate MCP message parsing. Fixes silent data corruption with non-ASCII content (Vietnamese, emoji, CJK)
- Null dereference after close() — `writeMessage()` and `notify()` now guard against null process reference
- Unhandled tool call errors — `callTool()` exceptions in tools.ts now caught and returned as structured error responses instead of crashing OpenClaw
Added¶
- Configurable MCP timeout — New `timeout` plugin config option (default: 30s, max: 120s) for users on slow machines or first-time init
- Actionable MCP error messages — Initialize failures now include Python stderr output and specific hints:
  - `ENOENT` → tells user to check `pythonPath` in plugin config
  - Exit code 1 → suggests `pip install neural-memory`
  - Timeout → prints captured stderr + verify command (`python -m neural_memory.mcp`)
Security¶
- Least-privilege child env — MCP subprocess now receives only whitelisted env vars (`PATH`, `HOME`, `PYTHONPATH`, `NEURALMEMORY_*`) instead of full `process.env`. Prevents leaking API keys and secrets to child process
- Config validation — `resolveConfig()` now validates types, ranges, and brain name pattern (`^[a-zA-Z0-9_\-.]{1,64}$`). Invalid values fall back to defaults instead of passing through
- Input bounds on all tools — Zod schemas now enforce max lengths: content (100K chars), query (10K), tags (50 items × 100 chars), expires_days (1-3650), context limit (1-200)
- Buffer overflow protection — 10 MB cap on stdio buffer; process killed if exceeded
- Stderr cap — Max 50 lines collected during init to prevent unbounded memory growth
- Auto-capture truncation — Agent messages truncated to 50K chars before sending to MCP
- Graceful shutdown — `close()` now removes listeners, waits up to 3s for exit, then escalates to SIGKILL
- Config schema hardened — Added `additionalProperties: false` and brain name `pattern` constraint
[1.7.4] - 2026-02-11¶
Fixed¶
- Full mypy compliance: Resolved all 341 mypy errors across 79 files (0 errors in 170 source files)
- Added `TYPE_CHECKING` protocol stubs to all mixin classes (storage, MCP handlers)
- Added generic type parameters to all bare `dict`/`list` annotations
- Narrowed `str | None` → `str` before passing to typed parameters
- Removed 14 stale `# type: ignore` comments
- Added proper type annotations to `HybridStorage` factory delegate methods
- Fixed variable name reuse across different types in same scope
- Fixed missing `await` on coroutine calls in CLI commands
Added¶
- CLAUDE.md — Type Safety Rules: New section documenting mixin protocol stubs, generic type params, Optional narrowing, and `# type: ignore` discipline to prevent future mypy regressions
[1.7.3] - 2026-02-11¶
Added¶
- Bundled skills — 3 Claude Code agent skills (memory-intake, memory-audit, memory-evolution) now ship inside the pip package under `src/neural_memory/skills/`
- `nmem install-skills` — new CLI command to install skills to `~/.claude/skills/`
  - `--list` shows available skills with descriptions
  - `--force` overwrites existing with latest version
  - Detects unchanged files (skip), changed files (report "update available"), missing `~/.claude/` (graceful error)
- `nmem init --skip-skills` — skills are now installed as part of `nmem init`; use `--skip-skills` to opt out
- Tests: 25 new unit tests for `setup_skills`, `_discover_bundled_skills`, `_classify_status`, `_extract_skill_description`
Changed¶
- `_classify_status()` now recognizes "installed" and "updated" as success states
- `skills/README.md` updated: manual copy instructions replaced with `nmem install-skills`
[1.7.2] - 2026-02-11¶
Security¶
- CORS hardening: Default CORS origins changed from `["*"]` to `["http://localhost:*", "http://127.0.0.1:*"]` (C2)
- Bind address: Default server bind changed from `0.0.0.0` to `127.0.0.1` (C4)
- Migration safety: Non-benign migration errors now halt and raise instead of silently advancing schema version (C8)
- Info leakage: Removed available brain names from 404 error responses (H21)
- URI validation: Graphiti adapter validates `bolt://` / `bolt+s://` URI scheme before connecting (H23)
- Error masking: Exception type names no longer leaked in MCP training error responses (H27)
- Import screening: `RecordMapper.map_record()` now runs `check_sensitive_content()` before importing external records (H33)
Fixed¶
- Fix `RuntimeError: Event loop is closed` from aiosqlite worker thread on CLI exit (Python 3.12+)
  - Root cause: 4 CLI commands (`decay`, `consolidate`, `export`, `import`) called `get_shared_storage()` directly, bypassing `_active_storages` tracking — aiosqlite connections were never closed before event loop teardown
  - Route all CLI storage creation through `get_storage()` in `_helpers.py` so connections are properly tracked and cleaned up
  - Add `await asyncio.sleep(0)` after storage cleanup to drain pending aiosqlite worker thread callbacks before `asyncio.run()` tears down the loop
- Bounds hardening: MCP `_habits` fiber fetch reduced 10K→1K; `_context` limit capped at 200; REST `list_neurons` capped at 1000; `EncodeRequest.content` max 100K chars (H11-H13, H32)
- Data integrity: `import_brain` wrapped in `BEGIN IMMEDIATE` with rollback on failure (H14)
- Code quality: AWF adapter gets ImportError guard; redundant `enable_auto_save()` removed from train handler (C7, H26)
- Public API: Added `current_brain_id` property to `NeuralStorage`, `SQLiteStorage`, `InMemoryStorage` — replaces private `_current_brain_id` access (H25)
Added¶
- CLAUDE.md: Project-level AI coding standards (architecture, immutability, datetime, security, bounds, testing, error handling, naming conventions)
- Quality gates: Automated enforcement via ruff, mypy, pytest, and CI
- 8 new ruff rule sets: S (bandit), A (builtins), DTZ (datetimez), T20 (print), PT (pytest), PERF (perflint), PIE, ERA (eradicate)
- Per-file-ignores for intentional patterns (CLI print, simhash MD5, SQL column names, etc.)
- Coverage threshold: 67% enforced in CI and Makefile
- CI: typecheck job now fails build (removed `continue-on-error` and `|| true`); build requires `[lint, typecheck, test]`; added security scan job
- Pre-commit: updated hooks (ruff v0.9.6, mypy v1.15.0); added `no-commit-to-branch` and `bandit`
- Makefile: added `security`, `audit` targets; `check` now includes `security`
Changed¶
- Tests: 1759 passed (up from 1696)
[1.7.1] - 2026-02-11¶
Fixed¶
- Fix `__version__` reporting "1.6.1" instead of "1.7.0" in PyPI package (runtime version mismatch)
[1.7.0] - 2026-02-11¶
Added¶
- Proactive Brain Intelligence — 3 features that make the brain self-aware during normal usage
- Related Memories on Write — `nmem_remember` now discovers and returns up to 3 related existing memories via 2-hop SpreadingActivation from the new anchor neuron. Always-on (~5-10ms overhead), non-intrusive. Response includes `related_memories` list with `fiber_id`, `preview`, and `similarity` score.
- Expired Memory Hint — Health pulse detects expired memories via cheap COUNT query on `typed_memories` table. Surfaces hint when count exceeds threshold (default: 10): `"N expired memories found. Consider cleanup via nmem list --expired."`
- Stale Fiber Detection — Health pulse detects fibers with decayed conductivity (last conducted >90 days ago or never). Surfaces hint when stale ratio exceeds threshold (default: 30%): `"N% of fibers are stale. Consider running nmem_health for review."`
- MaintenanceConfig extensions — 3 new configuration fields:
  - `expired_memory_warn_threshold` (default: 10)
  - `stale_fiber_ratio_threshold` (default: 0.3)
  - `stale_fiber_days` (default: 90)
- Storage layer — 2 new optional methods on `NeuralStorage`:
  - `get_expired_memory_count()` — COUNT of expired typed memories (SQLite + InMemory)
  - `get_stale_fiber_count(brain_id, stale_days)` — COUNT of stale fibers (SQLite + InMemory)
- HealthPulse extensions — `expired_memory_count` and `stale_fiber_ratio` fields
- HEALTH_DEGRADATION trigger — `TriggerType.HEALTH_DEGRADATION` for maintenance events
Changed¶
- Tests: 1696 passed (up from 1695)
[1.6.1] - 2026-02-10¶
Fixed¶
- CLI brain commands (`export`, `import`, `create`, `delete`, `health`, `transplant`) now work correctly in SQLite mode
  - `brain export` no longer produces empty files when brain was created with `brain create`
  - `brain delete` correctly removes `.db` files in unified config mode
  - `brain health` uses storage-agnostic `find_neurons()` instead of JSON-internal `_neurons` dict
- All `version` subcommands (`create`, `list`, `rollback`, `diff`) now find brains in SQLite mode
- `shared sync` uses correct storage backend
[1.6.0] - 2026-02-10¶
Added¶
- DB-to-Brain Schema Training (`nmem_train_db`) — Teach brains to understand database structure
  - 3-layer pipeline: `SchemaIntrospector` → `KnowledgeExtractor` → `DBTrainer`
  - Extracts schema knowledge (table structures, relationships, patterns) — NOT raw data rows
  - SQLite dialect (v1) via `aiosqlite` read-only connections
  - Schema fingerprint (SHA256) for re-training detection
- Schema Introspection — `engine/db_introspector.py`
  - `SchemaDialect` protocol with `SQLiteDialect` implementation
  - Frozen dataclasses: `ColumnInfo`, `ForeignKeyInfo`, `IndexInfo`, `TableInfo`, `SchemaSnapshot`
  - PRAGMA-based metadata extraction (table_info, foreign_key_list, index_list)
- Knowledge Extraction — `engine/db_knowledge.py`
  - FK-to-SynapseType mapping with confidence scoring (IS_A, INVOLVES, AT_LOCATION, RELATED_TO)
  - Structure-based join table detection (2+ FKs, ≤1 business column → CO_OCCURS synapse)
  - 5 schema pattern detectors: audit_trail, soft_delete, tree_hierarchy, polymorphic, enum_table
- Training Orchestrator — `engine/db_trainer.py`
  - Mirrors DocTrainer architecture: batch save, per-table error isolation, shared domain neuron
  - Configurable: `max_tables` (1-500), `salience_ceiling`, `consolidate`, `domain_tag`
- MCP Tool: `nmem_train_db` — `train` and `status` actions
Fixed¶
- Security: read-only SQLite connections, absolute path rejection, SQL identifier sanitization, info leakage prevention
Changed¶
- MCP tools expanded from 17 to 18
- Tests: 1648 passed (up from 1596)
Skills¶
- 3 composable AI agent skills — ship-faster SKILL.md pattern, installable to `~/.claude/skills/`
  - `memory-intake` — structured memory creation from messy notes, 1-question-at-a-time clarification, batch store with preview
  - `memory-audit` — 6-dimension quality review (purity, freshness, coverage, clarity, relevance, structure), A-F grading
  - `memory-evolution` — evidence-based optimization from usage patterns, consolidation, enrichment, pruning, checkpoint Q&A
[1.5.0] - 2026-02-10¶
Added¶
- Conflict Management MCP Tool (`nmem_conflicts`) — List, resolve, and pre-check memory conflicts
  - `list`, `resolve` (keep_existing/keep_new/keep_both), `check` actions
  - `ConflictHandler` mixin with full input validation
- Recall Conflict Surfacing — `has_conflicts` flag and `conflict_count` in default recall response
- Provenance Source Enrichment — `NEURALMEMORY_SOURCE` env var → `mcp:{source}` provenance
- Purity Score Conflict Penalty — Unresolved CONTRADICTS reduce health score (max -10 points)
Fixed¶
- 20+ performance bottlenecks — storage index optimization, encoder batch operations
- 25+ bugs across engine/storage/MCP — deep audit fixes including deprecated `datetime.utcnow()` replacement
Changed¶
- MCP tools expanded from 16 to 17
- Tests: 1372 passed (up from 1352)
[1.4.0] - 2026-02-09¶
Added¶
- OpenClaw Memory Plugin —
@neuralmemory/openclaw-pluginnpm package - MCP stdio client: JSON-RPC 2.0 with Content-Length framing
- 6 core tools, 2 hooks (before_agent_start, agent_end), 1 service
- Plugin manifest with `configSchema` + `uiHints`
Changed¶
- Dashboard Integrations tab simplified to status-only with deep links (Option B)
[1.3.0] - 2026-02-09¶
Added¶
- Deep Integration Status — Enhanced status cards, activity log, setup wizards, import sources
- Source Attribution — `NEURALMEMORY_SOURCE` env var for integration tracking
- 25 new i18n keys in EN + VI (87 total)
Changed¶
- Tests: 1352 passed (up from 1340)
[1.2.0] - 2026-02-09¶
Added¶
- Dashboard — Full-featured SPA at `/dashboard` (Alpine.js + Tailwind CDN, zero-build)
- 5 tabs: Overview, Neural Graph (Cytoscape.js), Integrations, Health (radar chart), Settings
- Graph toolbar, toast notifications, skeleton loading, brain management, EN/VI i18n
- ARIA accessibility, 44px mobile touch targets, design system
Fixed¶
- `ModuleNotFoundError: typing_extensions` on fresh Python 3.12 — added dependency
Changed¶
- Tests: 1340 passed (up from 1264)
[1.1.0] - 2026-02-09¶
Added¶
- ClawHub SKILL.md — Published [email protected] to ClawHub
- Nanobot Integration — 4 tools adapted for Nanobot's action interface
- Architecture Doc — `docs/ARCHITECTURE_V1_EXTENDED.md`
Changed¶
- OpenClaw PR #12596 submitted
[1.0.2] - 2026-02-09¶
Fixed¶
- Empty recall for broad queries — `format_context()` truncates long fiber content to fit token budget
- Diversity metric normalization — Shannon entropy normalized against 8 expected synapse types
- Temporal synapse diversity — `_link_temporal_neighbors()` creates BEFORE/AFTER instead of always RELATED_TO
- Consolidation prune crash — Fixed `Fiber(tags=...)` TypeError, uses `dataclasses.replace()`
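The diversity normalization fix can be illustrated with a short sketch. The helper below is hypothetical (not the project's actual code): it computes Shannon entropy over the synapse-type distribution and divides by the maximum entropy of 8 equally likely types, yielding a 0-1 score.

```python
import math
from collections import Counter

EXPECTED_TYPES = 8  # number of synapse types the metric normalizes against


def diversity_score(synapse_types: list[str]) -> float:
    """Normalized Shannon entropy of a synapse-type distribution (0..1).

    Illustrative sketch: 0.0 for a single dominant type, 1.0 when all
    8 expected types occur equally often.
    """
    if not synapse_types:
        return 0.0
    counts = Counter(synapse_types)
    total = len(synapse_types)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(EXPECTED_TYPES)  # max entropy = log2(8) = 3 bits
```

Normalizing against a fixed type count keeps the score comparable across brains, rather than rewarding whichever brain happens to use more distinct types.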
[1.0.0] - 2026-02-09¶
Added¶
- Brain Versioning — Snapshot, rollback, diff (schema v11, `brain_versions` table)
- Partial Brain Transplant — Topic-filtered merge between brains with conflict resolution
- Brain Quality Badge — Grade A-F from BrainHealthReport, marketplace eligibility
- Optional Embedding Layer — SentenceTransformer + OpenAI providers (OFF by default)
- Optional LLM Extraction — Enhanced relation extraction beyond regex (OFF by default)
Changed¶
- Version 1.0.0 — Production/Stable, schema v10 → v11
- MCP tools expanded from 14 to 16 (nmem_version, nmem_transplant)
[0.20.0] - 2026-02-09¶
Added¶
- Habitual Recall — ENRICH, DREAM, LEARN_HABITS consolidation strategies
- Action event log (hippocampal buffer), sequence mining, workflow suggestions
- `nmem_habits` MCP tool, `nmem habits` CLI, `nmem update` CLI
- Prune enhancements: dream synapse 10x decay, high-salience resistance
- Schema v10: `action_events` table
- 6 new BrainConfig fields for habit/dream configuration
Changed¶
- `ConsolidationStrategy` extended with ENRICH, DREAM, LEARN_HABITS
- Schema version 9 → 10
[0.19.0] - 2026-02-08¶
Added¶
- Temporal Reasoning — Causal chain traversal, temporal range queries, event sequence tracing
- `trace_causal_chain()`, `query_temporal_range()`, `trace_event_sequence()`
- `CAUSAL_CHAIN` and `TEMPORAL_SEQUENCE` synthesis methods
- Pipeline integration: "Why?" → causal, "When?" → temporal, "What happened after?" → event sequence
- Router enhancement with traversal metadata in `RouteDecision`
Changed¶
- Tests: 1019 passed (up from 987)
[0.17.0] - 2026-02-08¶
Added¶
- Brain Diagnostics — `BrainHealthReport` with 7 component scores and composite purity (0-100)
- Grade A/B/C/D/F, 7 warning codes, automatic recommendations
- Tag drift detection via `TagNormalizer.detect_drift()`
- MCP tool: `nmem_health` — Brain health diagnostics
- CLI command: `nmem health` — Terminal health report with ASCII progress bars
[0.16.0] - 2026-02-08¶
Added¶
- Emotional Valence — Lexicon-based sentiment extraction (EN + VI, zero LLM)
- `SentimentExtractor`, `Valence` enum, 7 emotion tag categories
- Negation handling, intensifier detection
- `FELT` synapses from anchor → emotion STATE neurons
- Emotional Resonance Scoring — Up to +0.1 retrieval boost for matching-valence memories
- Emotional Decay Modulation — High-intensity emotions decay slower (trauma persistence)
Changed¶
- Tests: 950 passed (up from 908)
[0.15.0] - 2026-02-08¶
Added¶
- Associative Inference Engine — Co-activation patterns → persistent CO_OCCURS synapses
- `compute_inferred_weight()`, `identify_candidates()`, `create_inferred_synapse()`
- `generate_associative_tags()` from BFS clustering
- Co-Activation Persistence — `co_activation_events` table (schema v8 → v9)
- `record_co_activation()`, `get_co_activation_counts()`, `prune_co_activations()`
- INFER Consolidation Strategy — Create synapses from co-activation patterns
- Tag Normalizer — ~25 synonym groups + SimHash fuzzy matching + drift detection
- 6 new BrainConfig fields for co-activation configuration
Changed¶
- Schema version 8 → 9
- Tests: 908 passed (up from 838)
[0.14.0] - 2026-02-08¶
Added¶
- Relation extraction engine: Regex-based causal, comparative, and sequential pattern detection from content — auto-creates CAUSED_BY, LEADS_TO, BEFORE, SIMILAR_TO, CONTRADICTS synapses during encoding
- Tag origin tracking: Separate `auto_tags` (content-derived) from `agent_tags` (user-provided) with backward-compatible `fiber.tags` union property
- Auto memory type inference: `suggest_memory_type()` fallback when no explicit type provided at encode time
- Confirmatory weight boost: Hebbian +0.1 boost on anchor synapses when agent tags confirm auto tags; RELATED_TO synapses (weight 0.3) for divergent agent tags
- Bilingual pattern support: English + Vietnamese regex patterns for causal ("because"/"vì"), comparative ("similar to"), and sequential ("then"/"sau khi") relations
- `RelationType`, `RelationCandidate`, `RelationExtractor` in new `extraction/relations.py`
- `Fiber.auto_tags`, `Fiber.agent_tags` fields with `Fiber.add_auto_tags()` method
- SQLite schema migration v7→v8 with backward-compatible column additions and backfill
- 62 new tests: relation extraction (25), tag origin (10), confirmatory boost (5), relation encoding (7), auto-tags update (15)
- `ROADMAP.md` with versioned plan from v0.14.0 → v1.0.0
Fixed¶
- "Event loop is closed" noise on CLI exit: aiosqlite connections now properly closed before event loop teardown via centralized `run_async()` helper
- MCP server shutdown now closes storage connection in `finally` block
Changed¶
- All 32 CLI `asyncio.run()` calls replaced with `run_async()` for proper cleanup
- Encoder pipeline extended with relation extraction (step 6b) and confirmatory boost (step 6c)
- `Fiber.create(tags=...)` preserved for backward compat — maps to `agent_tags`
- 838 tests passing
[0.13.0] - 2026-02-07¶
Added¶
- Ground truth evaluation dataset: 30 curated memories across 5 sessions (Day 1→Day 30) covering project setup, development, integration, sprint review, and production launch
- Standard IR metrics: Precision@K, Recall@K, MRR (Mean Reciprocal Rank), NDCG@K with per-query and per-category aggregation
- 25 evaluation queries: 8 factual, 6 temporal, 4 causal, 4 pattern, 3 multi-session coherence queries with expected relevant results
- Naive keyword-overlap baseline: Tokenize-and-rank strawman that NeuralMemory's activation-based recall must beat
- Long-horizon coherence test framework: 5-session simulation across 30 days with recall tracking per session (target: >= 60% at day 30)
- `benchmarks/ground_truth.py` — ground truth memories, queries, session schedule
- `benchmarks/metrics.py` — IR metrics: `precision_at_k`, `recall_at_k`, `reciprocal_rank`, `ndcg_at_k`, `evaluate_query`, `BenchmarkReport`
- `benchmarks/naive_baseline.py` — keyword overlap ranking and baseline evaluation
- `benchmarks/coherence_test.py` — multi-session coherence test with `CoherenceReport`
- Ground-truth evaluation section in `run_benchmarks.py` comparing NeuralMemory vs baseline
- 27 new unit tests: precision (6), recall (4), MRR (5), NDCG (4), query evaluation (1), report aggregation (2), baseline (5)
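Two of the listed IR metrics can be sketched in a few lines. The function names match the changelog entry, but these bodies are illustrative textbook definitions, not the project's actual implementations.

```python
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved IDs that are relevant (standard P@K)."""
    if k <= 0:
        return 0.0
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in relevant)
    return hits / k


def reciprocal_rank(retrieved: list[str], relevant: set[str]) -> float:
    """1/rank of the first relevant result; 0.0 if nothing relevant is retrieved.

    Averaging this over all queries gives MRR.
    """
    for rank, doc_id in enumerate(retrieved, start=1):
        if doc_id in relevant:
            return 1.0 / rank
    return 0.0
```

Precision@K rewards dense relevance in the top of the ranking, while reciprocal rank only cares how early the first hit appears; reporting both separates "returns good results" from "returns the best result first".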
Changed¶
- `run_benchmarks.py` now includes ground-truth evaluation with NeuralMemory vs naive baseline comparison in generated markdown output
[0.12.0] - 2026-02-07¶
Added¶
- Real-time conflict detection: Detects factual contradictions and decision reversals at encode time using predicate extraction — no LLM required
- Factual contradiction detection: Regex-based extraction of `"X uses/chose/decided Y"` patterns, compares predicates across memories with matching subjects
- Decision reversal detection: Identifies when a new DECISION contradicts an existing one via tag overlap analysis
- Dispute resolution pipeline: Anti-Hebbian confidence reduction, `_disputed` and `_superseded` metadata markers, and CONTRADICTS synapse creation
- Disputed neuron deprioritization: Retrieval pipeline reduces activation of disputed neurons by 50% and superseded neurons by 75%
- `CONTRADICTS` synapse type for linking contradictory memories
- `ConflictType`, `Conflict`, `ConflictResolution`, `ConflictReport` in new `engine/conflict_detection.py`
- `detect_conflicts()`, `resolve_conflicts()` for encode-time conflict handling
- 32 new unit tests: predicate extraction (5), predicate conflict (4), subject matching (4), tag overlap (4), helpers (4), detection integration (6), resolution (5)
Changed¶
- Encoder pipeline runs conflict detection after anchor neuron creation, before fiber assembly
- Retrieval pipeline adds `_deprioritize_disputed()` step after stabilization to suppress disputed neurons
- `SynapseType` enum extended with `CONTRADICTS = "contradicts"`
[0.11.0] - 2026-02-07¶
Added¶
- Activation stabilization: Iterative dampening algorithm settles neural activations into stable patterns after spreading activation — noise floor removal, dampening (0.85x), homeostatic normalization, convergence detection (typically 2-4 iterations)
- Multi-neuron answer reconstruction: Strategy-based answer synthesis replacing single-neuron `reconstitute_answer()` — SINGLE mode (high-confidence top neuron), FIBER_SUMMARY mode (best fiber summary), MULTI_NEURON mode (top-5 neurons ordered by fiber pathway position)
- Memory maturation lifecycle: Four-stage memory model STM → Working (30min) → Episodic (4h) → Semantic (7d + spacing effect). Stage-aware decay multipliers: STM 5x, Working 2x, Episodic 1x, Semantic 0.3x
- Spacing effect requirement: EPISODIC → SEMANTIC promotion requires reinforcement across 3+ distinct calendar days, modeling biological spaced repetition
- Pattern extraction: Episodic → semantic concept formation via tag Jaccard clustering (Union-Find). Clusters of 3+ similar fibers generate CONCEPT neurons with IS_A synapses to common entities
- MATURE consolidation strategy: New consolidation strategy that advances maturation stages and extracts semantic patterns from mature episodic memories
- `StabilizationConfig`, `StabilizationReport`, `stabilize()` in new `engine/stabilization.py`
- `SynthesisMethod`, `ReconstructionResult`, `reconstruct_answer()` in new `engine/reconstruction.py`
- `MemoryStage`, `MaturationRecord`, `compute_stage_transition()`, `get_decay_multiplier()` in new `engine/memory_stages.py`
- `ExtractedPattern`, `ExtractionReport`, `extract_patterns()` in new `engine/pattern_extraction.py`
- `SQLiteMaturationMixin` in new `storage/sqlite_maturation.py` — maturation CRUD for SQLite backend
- Schema migration v6→v7: `memory_maturations` table with composite key (brain_id, fiber_id)
- `contributing_neurons` and `synthesis_method` fields on `RetrievalResult`
- `stages_advanced` and `patterns_extracted` fields on `ConsolidationReport`
- Maturation abstract methods on `NeuralStorage` base: `save_maturation()`, `get_maturation()`, `find_maturations()`
- 49 new unit tests: stabilization (12), reconstruction (11), memory stages (16), pattern extraction (8), plus 2 consolidation tests
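The stage-aware decay multipliers above (STM 5x, Working 2x, Episodic 1x, Semantic 0.3x) can be sketched as a simple lookup. The enum and function names mirror the listed `MemoryStage` / `get_decay_multiplier()`, but the bodies are illustrative assumptions, not the project's `engine/memory_stages.py`.

```python
from enum import Enum


class MemoryStage(Enum):
    STM = "stm"
    WORKING = "working"
    EPISODIC = "episodic"
    SEMANTIC = "semantic"


# Multipliers from the changelog entry: fresh memories fade fast,
# consolidated semantic memories persist.
_DECAY_MULTIPLIERS: dict[MemoryStage, float] = {
    MemoryStage.STM: 5.0,
    MemoryStage.WORKING: 2.0,
    MemoryStage.EPISODIC: 1.0,
    MemoryStage.SEMANTIC: 0.3,
}


def get_decay_multiplier(stage: MemoryStage) -> float:
    """Scale factor applied to the base decay rate for a given stage."""
    return _DECAY_MULTIPLIERS[stage]
```

A memory's effective decay rate would then be `base_rate * get_decay_multiplier(stage)`, so promotion through the lifecycle directly slows forgetting.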
Changed¶
- Retrieval pipeline inserts stabilization phase after lateral inhibition and before answer reconstruction
- Answer reconstruction uses multi-strategy `reconstruct_answer()` instead of `reconstitute_answer()`
- Encoder initializes maturation record (STM stage) when creating new fibers
- Consolidation engine supports `MATURE` strategy for stage advancement and pattern extraction
[0.10.0] - 2026-02-07¶
Added¶
- Formal Hebbian learning rule: Principled weight update `Δw = η_eff * pre * post * (w_max - w)` replacing ad-hoc `weight += delta + dormancy_bonus`
- Novelty-adaptive learning rate: New synapses learn ~4x faster, frequently reinforced synapses stabilize toward base rate via exponential decay
- Natural weight saturation: `(w_max - w)` term prevents runaway weight growth — weights near ceiling barely change
- Competitive normalization: `normalize_outgoing_weights()` caps total outgoing weight per neuron at budget (default 5.0), implementing winner-take-most competition
- Anti-Hebbian update: `anti_hebbian_update()` for conflict resolution weight reduction (used in Phase 3)
- `learning_rate`, `weight_normalization_budget`, `novelty_boost_max`, `novelty_decay_rate` on `BrainConfig`
- `LearningConfig`, `WeightUpdate`, `hebbian_update`, `compute_effective_rate`, `normalize_outgoing_weights` in new `engine/learning_rule.py`
- 33 new unit tests covering learning rule, normalization, and backward compatibility
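The formal rule above, `Δw = η_eff * pre * post * (w_max - w)`, can be written out directly. This is a sketch of the formula as stated in the entry; the actual `hebbian_update` in `engine/learning_rule.py` may have a different signature.

```python
def hebbian_update(
    w: float,
    pre: float,              # presynaptic activation in [0, 1]
    post: float,             # postsynaptic activation in [0, 1]
    eta_eff: float = 0.1,    # effective (novelty-adjusted) learning rate
    w_max: float = 1.0,      # weight ceiling
) -> float:
    """One Hebbian step: growth is gated by both activations and
    saturates as the weight approaches w_max."""
    delta = eta_eff * pre * post * (w_max - w)
    return w + delta
```

The `(w_max - w)` factor is what makes saturation "natural": as the weight nears the ceiling the update shrinks toward zero, so repeated reinforcement asymptotes instead of exploding.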
Changed¶
- `Synapse.reinforce()` accepts optional `pre_activation`, `post_activation`, `now` parameters — uses formal Hebbian rule when activations provided, falls back to direct delta for backward compatibility
- `ReflexPipeline._defer_co_activated()` passes neuron activation levels to Hebbian strengthening
- `ReflexPipeline._defer_reinforce_or_create()` forwards activation levels to `reinforce()`
- Removed dormancy bonus from `Synapse.reinforce()` (novelty adaptation in learning rule replaces it)
[0.9.6] - 2026-02-07¶
Added¶
- Sigmoid activation function: Neurons now use sigmoid gating (`1/(1+e^(-6(x-0.5)))`) instead of raw clamping, producing bio-realistic nonlinear activation curves
- Firing threshold: Neurons only propagate signals when activation meets threshold (default 0.3), filtering borderline noise
- Refractory period: Cooldown prevents same neuron firing twice within a query pipeline (default 500ms), checked during spreading activation
- Lateral inhibition: Top-K winner-take-most competition in retrieval pipeline — top 10 neurons survive unchanged, rest suppressed by 0.7x factor
- Homeostatic target field: Reserved `homeostatic_target` field on `NeuronState` for v2 adaptive regulation
- `fired` and `in_refractory` properties on `NeuronState`
- `sigmoid_steepness`, `default_firing_threshold`, `default_refractory_ms`, `lateral_inhibition_k`, `lateral_inhibition_factor` on `BrainConfig`
- Schema migration v5→v6: four new columns on `neuron_states` table
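A minimal sketch of the sigmoid gating, firing threshold, and top-K lateral inhibition described above. Only the formula and the defaults (steepness 6, threshold 0.3, K=10, factor 0.7) come from the entries; the function names and signatures are illustrative:

```python
import math

def sigmoid_activation(x: float, steepness: float = 6.0) -> float:
    """Sigmoid gating 1/(1+e^(-k(x-0.5))) instead of raw clamping."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - 0.5)))

def fires(activation: float, threshold: float = 0.3) -> bool:
    """Neurons only propagate when activation meets the firing threshold."""
    return activation >= threshold

def lateral_inhibition(activations: dict[str, float], k: int = 10,
                       factor: float = 0.7) -> dict[str, float]:
    """Top-K winners survive unchanged; the rest are suppressed by `factor`."""
    winners = set(sorted(activations, key=activations.get, reverse=True)[:k])
    return {nid: (a if nid in winners else a * factor)
            for nid, a in activations.items()}
```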
Changed¶
- `NeuronState.activate()` applies sigmoid function and accepts `now` and `sigmoid_steepness` parameters
- `NeuronState.decay()` preserves all new fields (`firing_threshold`, `refractory_until`, `refractory_period_ms`, `homeostatic_target`)
- `DecayManager.apply_decay()` uses `state.decay()` instead of manual `NeuronState` construction
- `ReinforcementManager.reinforce()` directly sets activation level (bypasses sigmoid for reinforcement)
- Spreading activation skips neurons in refractory cooldown
- Storage layer (SQLite + SharedStore) serializes/deserializes all new NeuronState fields
[0.9.5] - 2026-02-07¶
Added¶
- Type-aware decay rates: Different memory types now decay at biologically inspired rates (facts: 0.02/day, todos: 0.15/day). `DEFAULT_DECAY_RATES` dict and `get_decay_rate()` helper in `memory_types.py`
- Retrieval score breakdown: `ScoreBreakdown` dataclass exposes confidence components (base_activation, intersection_boost, freshness_boost, frequency_boost) in `RetrievalResult` and MCP `nmem_recall` response
- SimHash near-duplicate detection: 64-bit locality-sensitive hashing via `utils/simhash.py`. New `content_hash` field on `Neuron` model. Encoder and auto-capture use SimHash to catch paraphrased duplicates
- Point-in-time temporal queries: `valid_at` parameter on `nmem_recall` filters fibers by temporal validity window (`time_start <= valid_at <= time_end`)
- Schema migration v4→v5: `content_hash INTEGER` column on neurons table
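The SimHash entry can be illustrated with a generic 64-bit implementation: token fingerprints vote per bit, so near-identical texts yield hashes with small Hamming distance. The tokenization and hash primitive here are illustrative choices, not necessarily what `utils/simhash.py` does:

```python
import hashlib
import re

def simhash64(text: str) -> int:
    """64-bit SimHash: each token's hash votes +1/-1 per bit position."""
    counts = [0] * 64
    for token in re.findall(r"\w+", text.lower()):
        h = int.from_bytes(hashlib.md5(token.encode()).digest()[:8], "big")
        for bit in range(64):
            counts[bit] += 1 if (h >> bit) & 1 else -1
    # Set each bit where the vote total is positive.
    return sum(1 << bit for bit in range(64) if counts[bit] > 0)

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_near_duplicate(a: str, b: str, max_distance: int = 3) -> bool:
    return hamming_distance(simhash64(a), simhash64(b)) <= max_distance
```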
Changed¶
- `DecayManager.apply_decay()` now uses per-neuron `state.decay_rate` instead of global rate
- `reconstitute_answer()` returns `ScoreBreakdown` as third tuple element
- `_remember()` MCP handler sets type-specific decay rates on neuron states after encoding
[0.9.4] - 2026-02-07¶
Performance¶
- SQLite WAL mode + `synchronous=NORMAL` + 8MB cache for concurrent reads and reduced I/O
- Batch storage methods: `get_synapses_for_neurons()`, `find_fibers_batch()`, `get_neuron_states_batch()` — single `IN()` queries replacing N sequential calls
- Deferred write queue: Fiber conductivity, Hebbian strengthening, and synapse writes batched after response assembly
- Parallel anchor finding: Entity + keyword lookups via `asyncio.gather()` instead of sequential loops
- Batch fiber discovery: Single junction-table query replaces 5-15 sequential `find_fibers()` calls
- Batch subgraph extraction: Single query replaces 20-50 sequential `get_synapses()` calls
- BFS state prefetch: Batch `get_neuron_states_batch()` per hop instead of individual lookups
- Target: 3-5x faster retrieval (800-4500ms → 200-800ms)
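The pragma setup and the batched `IN()` pattern above can be sketched as follows; the table and column names (`neuron_states`, `activation`) and the function signature are assumptions for illustration:

```python
import sqlite3

def open_brain_db(path: str) -> sqlite3.Connection:
    """Open the database with the pragmas listed above."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")    # concurrent readers + one writer
    conn.execute("PRAGMA synchronous=NORMAL")  # fewer fsyncs, safe with WAL
    conn.execute("PRAGMA cache_size=-8192")    # negative = KiB, i.e. 8MB page cache
    return conn

def get_neuron_states_batch(conn: sqlite3.Connection,
                            neuron_ids: list[str]) -> dict[str, float]:
    """One IN() query instead of N sequential SELECTs."""
    if not neuron_ids:
        return {}
    placeholders = ",".join("?" * len(neuron_ids))
    rows = conn.execute(
        f"SELECT neuron_id, activation FROM neuron_states "
        f"WHERE neuron_id IN ({placeholders})",
        neuron_ids,
    ).fetchall()
    return {nid: act for nid, act in rows}
```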
[0.9.0] - 2026-02-06¶
Added¶
- Codebase indexing (`nmem_index`): Index Python files into neural graph for code-aware recall
- Python AST extractor: Parse functions, classes, methods, imports, constants via stdlib `ast`
- Codebase encoder: Map code symbols to neurons (SPATIAL/ACTION/CONCEPT/ENTITY) and synapses (CONTAINS/IS_A/RELATED_TO/CO_OCCURS)
- Branch-aware sessions: `nmem_session` auto-detects git branch/commit/repo and stores in metadata + tags
- Git context utility: Detect branch, commit SHA, repo root via subprocess (zero deps)
- CLI `nmem index` command: Index codebase from command line with `--ext`, `--status`, `--json` options
- 16 new tests for extraction, encoding, and git context
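A minimal sketch of stdlib-`ast` symbol extraction along the lines of the extractor described above; the exact symbol kinds collected and the return shape are assumptions:

```python
import ast

def extract_symbols(source: str) -> dict[str, list[str]]:
    """Walk a module's AST and collect symbol names by kind."""
    tree = ast.parse(source)
    symbols: dict[str, list[str]] = {"functions": [], "classes": [], "imports": []}
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            symbols["functions"].append(node.name)   # includes nested methods
        elif isinstance(node, ast.ClassDef):
            symbols["classes"].append(node.name)
        elif isinstance(node, ast.Import):
            symbols["imports"].extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom):
            symbols["imports"].append(node.module or "")
    return symbols
```

Each extracted symbol would then be mapped to a neuron, with containment edges (module contains class, class contains method) becoming synapses.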
[0.8.0]¶
Added¶
- Initial project structure
- Core data models: Neuron, Synapse, Fiber, Brain
- In-memory storage backend using NetworkX
- Temporal extraction for Vietnamese and English
- Query parser with stimulus decomposition
- Spreading activation algorithm
- Reflex retrieval pipeline
- Memory encoder
- FastAPI server with memory and brain endpoints
- Unit and integration tests
- Docker support
[0.1.0] - TBD¶
Added¶
- First public release
- Core memory encoding and retrieval
- Multi-language support (English, Vietnamese)
- REST API server
- Brain export/import functionality