Documentation Chatbot¶
NeuralMemory includes a self-answering documentation chatbot powered by spreading activation — no LLM required.
Try it live: HuggingFace Space
How it works¶
- Project documentation is encoded into a neural memory brain (neurons + synapses + fibers)
- Your query triggers spreading activation across the knowledge graph
- The most relevant documentation chunks are retrieved and displayed
- A confidence score reflects how well the context matches your query
The chatbot uses ReflexPipeline — the same retrieval engine behind nmem_recall.
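The retrieval flow above can be sketched as a toy spreading-activation pass. This is an illustrative model only, not NeuralMemory's actual implementation: the graph, chunk names, decay factor, and hop count here are invented for the example.

```python
# Toy spreading-activation retrieval (illustrative; not the real ReflexPipeline).
# Nodes are documentation chunks; weighted edges play the role of synapses.

def spread_activation(graph, seeds, decay=0.5, hops=2):
    """Propagate activation outward from query-matched seed chunks."""
    activation = {node: 0.0 for node in graph}
    for s in seeds:
        activation[s] = 1.0
    frontier = dict.fromkeys(seeds, 1.0)
    for _ in range(hops):
        next_frontier = {}
        for node, energy in frontier.items():
            for neighbor, weight in graph[node].items():
                delta = energy * weight * decay  # energy fades with each hop
                activation[neighbor] += delta
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + delta
        frontier = next_frontier
    return activation

# Tiny knowledge graph: chunk -> {neighbor: synapse weight}
graph = {
    "install": {"quickstart": 0.9, "faq": 0.2},
    "quickstart": {"install": 0.9, "api": 0.6},
    "api": {"quickstart": 0.6},
    "faq": {"install": 0.2},
}

scores = spread_activation(graph, seeds=["install"])
top = max(scores, key=scores.get)  # highest-activation chunk wins
```

Normalizing the top activation against the total gives a rough confidence signal: a query that lights up one tight neighborhood scores higher than one that spreads thinly everywhere.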
Running locally¶
Options:
| Flag | Description |
|---|---|
| `--port 7861` | Custom port (default: 7860) |
| `--share` | Create a public Gradio URL |
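The flags in the table might be wired up roughly like this in the launcher. This is a sketch of plausible argument parsing, not the actual contents of chatbot/app.py; only the flag names and the 7860 default come from the table above.

```python
# Illustrative flag parsing mirroring the options table (not the real app.py).
import argparse

parser = argparse.ArgumentParser(description="NeuralMemory docs chatbot")
parser.add_argument("--port", type=int, default=7860,
                    help="Custom port (default: 7860)")
parser.add_argument("--share", action="store_true",
                    help="Create a public Gradio URL")

# Example: what `--port 7861` would produce
args = parser.parse_args(["--port", "7861"])
```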
Re-training the brain¶
If you've updated the documentation, re-run the training script. It reads docs/, README.md, CHANGELOG.md, and FAQ.md, and saves the trained brain to chatbot/brain/docs.db.
Training options:
| Flag | Description |
|---|---|
| `--brain NAME` | Custom brain name (default: neuralmemory-docs) |
| `--export DIR` | Copy the trained DB to another directory |
| `--no-verify` | Skip verification queries |
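Conceptually, training turns each documentation chunk into a node and links related chunks. The sketch below fakes that with a naive word-overlap rule; the real trainer's chunking, synapse weighting, and SQLite storage format are not shown here, and the sample chunks are invented.

```python
# Illustrative "training": link doc chunks whose word sets overlap.
# Overlap size stands in for synapse weight (not the real encoding scheme).

def build_brain(chunks):
    """Return a graph: chunk id -> {related chunk id: weight}."""
    words = {cid: set(text.lower().split()) for cid, text in chunks.items()}
    brain = {cid: {} for cid in chunks}
    for a in chunks:
        for b in chunks:
            if a == b:
                continue
            overlap = len(words[a] & words[b])
            if overlap:
                brain[a][b] = overlap
    return brain

# Toy stand-ins for the real source files
chunks = {
    "readme": "install neuralmemory with pip",
    "faq": "how do i install on windows",
    "changelog": "added gradio chatbot",
}
brain = build_brain(chunks)
```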
Deploying to HuggingFace Spaces¶
Prerequisites¶
One-command deploy¶
Manual deploy¶
- Create a new Space with the Gradio SDK
- Clone the Space repo
- Copy chatbot/app.py, chatbot/requirements.txt, chatbot/README.md, and chatbot/brain/ into the Space
- Push to HuggingFace
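The copy step above can be scripted. This sketch builds a throwaway fake repo layout in a temp directory so it runs anywhere; in practice you would point it at your real checkout and Space clone instead.

```python
# Sketch of the "copy files into the Space" step, demoed against a fake
# repo layout in a temp directory (the paths mirror the list above).
import shutil
import tempfile
from pathlib import Path

def copy_into_space(repo_root: Path, space_dir: Path) -> None:
    """Copy the chatbot app, requirements, README, and brain/ into the Space."""
    space_dir.mkdir(parents=True, exist_ok=True)
    for rel in ("chatbot/app.py", "chatbot/requirements.txt", "chatbot/README.md"):
        shutil.copy2(repo_root / rel, space_dir / Path(rel).name)
    shutil.copytree(repo_root / "chatbot/brain", space_dir / "brain",
                    dirs_exist_ok=True)

# Fake repo checkout so the sketch is self-contained and runnable
root = Path(tempfile.mkdtemp())
(root / "chatbot" / "brain").mkdir(parents=True)
for rel in ("chatbot/app.py", "chatbot/requirements.txt", "chatbot/README.md"):
    (root / rel).write_text("placeholder")
(root / "chatbot" / "brain" / "docs.db").write_text("placeholder")

space = root / "space"
copy_into_space(root, space)
```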
The brain DB is ~51 MB — well within HuggingFace's file size limits.
Search depth levels¶
| Level | Pipeline Depth | Speed | Best for |
|---|---|---|---|
| Quick | INSTANT | ~5 ms | Simple keyword lookups |
| Normal | CONTEXT | ~20 ms | Most questions |
| Deep | DEEP | ~50 ms | Complex multi-topic queries |
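One way to picture the trade-off: deeper levels let activation travel further through the graph, reaching related but less directly matched chunks at the cost of more work. The hop counts below are an assumption for illustration; only the level and depth names come from the table.

```python
# Illustrative mapping of search depth levels to traversal hop counts.
# Names mirror the table above; the numeric hop values are assumptions.
from enum import Enum

class Depth(Enum):
    INSTANT = 1   # Quick: direct keyword matches only
    CONTEXT = 2   # Normal: one ring of related chunks
    DEEP = 3      # Deep: multi-topic traversal

LEVELS = {"quick": Depth.INSTANT, "normal": Depth.CONTEXT, "deep": Depth.DEEP}

def hops_for(level: str) -> int:
    """How far activation spreads for a given search depth level."""
    return LEVELS[level].value
```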