Neuroloom

Core Concepts

What is a Memory

A memory is a typed, structured record that captures a unit of project knowledge — a decision, a pattern, an incident root cause, an architecture constraint — and makes it searchable by meaning, not keyword. Memories replace the static CLAUDE.md and .cursorrules files that load everything at session start regardless of what the agent actually needs.

Where a flat file forces your agent to ingest the entire document every session, Neuroloom surfaces only the memories relevant to the current task. A session focused on database query optimization retrieves discovery and architecture memories. A session debugging a failing test retrieves incident and discovery memories. The retrieval is semantic — it matches meaning, not exact text.


The Memory Record

Each memory is stored as a structured record with the following fields.

| Field | Type | Description |
| --- | --- | --- |
| memory_id | UUID | Unique identifier for the memory |
| title | string | Short, human-readable label |
| narrative | string | Full content of the memory |
| memory_type | StrEnum | Classification (see Memory Types below) |
| tags | string[] | Free-form labels for manual categorization |
| concepts | string[] | Extracted semantic concepts used in similarity matching |
| source_files | string[] | File paths associated with this memory, used in relationship discovery |
| importance_score | float | Weighted score combining recency, access count, confidence, and PageRank |
| confidence_score | float | How reliably the memory reflects current project state (0.0–1.0) |
| access_count | int | Total number of times this memory has been accessed |
| retrieval_count | int | Times this memory was returned in search results |
| last_accessed_at | timestamp | Most recent retrieval time |
| source_session_id | UUID | Session during which this memory was created |
| pagerank_score | float | Structural centrality score from the daily PageRank cron |
| community_label | string | Cluster label assigned by community detection |
| embedding | Vector(1024) | 1024-dimension vector used for semantic similarity search |
| is_consolidated | bool | Whether this memory was created by merging duplicates |
| consolidated_from | UUID[] | IDs of the source memories, if consolidated |

Memory Types

Every memory carries a memory_type that defines what kind of knowledge it represents. The type drives context injection — when a session begins, Neuroloom surfaces memories whose type matches the work pattern it detects.

Neuroloom uses 9 canonical types: 7 that the LLM assigns during extraction, and 2 that are system-only.

LLM-assignable types

| Type | Description |
| --- | --- |
| decision | A deliberate choice between alternatives, with rationale — framework selection, API design, naming convention |
| pattern | A recurring technique or approach observed in the codebase (descriptive: "when X, we do Y") |
| convention | A prescriptive rule or standard the team follows (imperative: "always/never do X") |
| architecture | How components, services, or layers structurally relate; system-level boundaries and data flow |
| discovery | A learned insight, gotcha, or non-obvious behavior where nothing broke |
| incident | Something broke and was fixed; captures the failure mode and resolution |
| general | Does not clearly fit any other type; a staging area for nightly reclassification (last resort only) |

System-only types (never LLM-assigned)

| Type | Description |
| --- | --- |
| wiki | Manually authored reference material: glossary entries, process docs, how-tos |
| sdlc_knowledge | SDLC process knowledge — deliverable patterns, playbooks, discipline captures |

Memory Anatomy

A memory record has three distinct layers: its content (title and narrative), its classification (type, tags, concepts), and its scoring metadata (importance, confidence, PageRank).

The embedding is computed from the title and narrative at write time and used for all semantic search operations. The concepts field contains extracted semantic concepts that supplement embedding-based similarity with explicit concept overlap matching.
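A minimal sketch of how concept overlap might supplement embedding similarity. The cosine-plus-Jaccard combination and the weights are illustrative assumptions, not Neuroloom's actual ranking formula:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Embedding similarity: cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def concept_overlap(a: set[str], b: set[str]) -> float:
    """Explicit concept matching: Jaccard overlap of the concept sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_score(emb_q, emb_m, concepts_q, concepts_m, w: float = 0.8) -> float:
    # The 0.8/0.2 weighting is an assumption for illustration; the real
    # blend of embedding similarity and concept overlap is not specified here.
    return w * cosine(emb_q, emb_m) + (1 - w) * concept_overlap(concepts_q, concepts_m)
```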


Living Knowledge vs. Static Files

The difference between Neuroloom memories and flat project-knowledge files is not just format — it is the retrieval model.

| Approach | Typed memory model | Confidence evolution | Supersession trail | Code graph |
| --- | --- | --- | --- | --- |
| Neuroloom | 9 types; code-aware (decision, incident, architecture) | Epistemic confidence evolves as graph edges accumulate | supersedes edges trace what replaced what — navigable via the supersession-trail endpoint | Full symbol-level graph via CodeWeaver; memories link to functions and files |
| Mem0 OpenMemory | No named types; unstructured string store | No confidence evolution | No supersession — old facts persist alongside new ones | No code graph |
| Augment Code | Codebase graph (structural only) | No memory confidence model | No supersession concept | Strong structural graph, no lifecycle memory on top of it |
| GitHub Copilot Memory | No explicit types; session-scoped context | No confidence model | No supersession | No code graph; relies on IDE indexing |
| Claude Code flat .claude/ files | No type system; flat markdown | No confidence — static text | No supersession — manually edit to remove | No code graph |

The dimensions that matter most for coding agents are whether the memory system tracks why something changed (supersedes), whether it can tell you how confident a memory still is, and whether it roots memories in actual code structure rather than prose descriptions of code.

A 500-line CLAUDE.md loads all 500 lines every session. Neuroloom injects the 8–12 memories most relevant to the task currently in front of the agent. As the project grows, context quality stays constant.

Note

Migrating from a CLAUDE.md or .cursorrules file? See the Claude.md Migration Cookbook for a step-by-step guide to converting static project knowledge into typed memories.


Storing a Memory

Store a memory by posting to the memories endpoint with a type and narrative.

curl -X POST https://api.neuroloom.dev/api/v1/memories \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Database migration strategy — always use Alembic autogenerate",
    "narrative": "All schema changes go through Alembic autogenerate. We ran into drift twice using manual migrations. The rule: never edit the migration file directly after generation; if the autogenerated migration is wrong, fix the model and regenerate.",
    "memory_type": "convention",
    "tags": ["database", "migrations", "alembic"],
    "source_files": ["alembic/env.py", "alembic/versions/"]
  }'

The API returns the created memory record including its assigned memory_id and initial scoring fields.
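The same request can be made from Python. The endpoint and token scheme follow the curl example above; `build_payload` and `store_memory` are hypothetical helpers for illustration, not an official client:

```python
import json
import os
import urllib.request

# Endpoint from the curl example above.
API_URL = "https://api.neuroloom.dev/api/v1/memories"

def build_payload(title: str, narrative: str, memory_type: str,
                  tags=None, source_files=None) -> dict:
    """Assemble the request body shown in the curl example."""
    return {
        "title": title,
        "narrative": narrative,
        "memory_type": memory_type,
        "tags": tags or [],
        "source_files": source_files or [],
    }

def store_memory(payload: dict) -> dict:
    """POST the memory; hypothetical helper mirroring the curl call."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # created record, including memory_id
```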


Searching Memories

Semantic search returns memories ranked by embedding similarity to the query, filtered by workspace.

curl -X POST https://api.neuroloom.dev/api/v1/memories/search \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "how do we handle database schema changes",
    "limit": 5
  }'

The query "how do we handle database schema changes" returns the Alembic convention memory above even though the query does not use the words "Alembic", "autogenerate", or "migrations" — the semantic embedding matches meaning.
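A toy demonstration of why this works where keyword search fails. The 3-dimension "embeddings" below are hand-made stand-ins for the real 1024-dimension vectors, and the titles are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two small stand-in embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query = "how do we handle database schema changes"
memories = {
    # title -> fabricated embedding, clustered by topic
    "Alembic migration convention": [0.9, 0.1, 0.0],
    "Frontend theming pattern":     [0.0, 0.2, 0.9],
}
query_vec = [0.85, 0.15, 0.05]  # lands near the schema/migration cluster

# Keyword search finds nothing: the query shares no words with either title.
keyword_hits = [t for t in memories
                if set(t.lower().split()) & set(query.split())]

# Semantic search ranks by embedding similarity instead.
best = max(memories, key=lambda t: cosine(query_vec, memories[t]))
```

Here `keyword_hits` is empty while `best` is the migration memory, mirroring how the query above matches the Alembic convention without sharing any of its words.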


Ready to get started?

Start building with Neuroloom for free.
