Memories API

Store and retrieve structured knowledge objects in your Neuroloom workspace. Every memory has a title, narrative, tags, concepts, source files, and importance and confidence scores. After creation, the API enqueues background jobs to generate embeddings and discover relationships to other memories.

See Memory Concepts for data model details, or the Coding Agent Memory cookbook for a worked integration example.

Base URL and Authentication

https://api.neuroloom.dev

All endpoints require an API key in the Authorization header:

Authorization: Token $MEMORIES_API_TOKEN
Warning: The auth scheme is Token, not Bearer. Requests using Authorization: Bearer <key> return 401 Unauthorized.

Set these environment variables to match what the Neuroloom MCP server reads:

export MEMORIES_API_TOKEN="your-api-key"
export MEMORIES_WORKSPACE_ID="your-workspace-id"
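The Token scheme trips up clients that default to Bearer. A minimal helper that builds the expected headers from the environment variables above (the function name is ours, not part of any SDK):

```python
import os

def auth_headers() -> dict:
    """Build the headers every Memories API request needs.

    Note the Token scheme: sending Bearer instead returns 401.
    """
    return {
        "Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}",
        "Content-Type": "application/json",
    }
```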

Memory IDs

Every memory has two identifiers:

| Field | Format | Used For |
|---|---|---|
| id | UUID | Internal database relations. Never used in API paths. |
| memory_id | mem- prefixed string, e.g. mem-a1b2c3d4e5f6g7h8 | All API paths. Use this one. |
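A quick guard against passing the internal UUID into an API path (the helper name is illustrative, not part of any SDK):

```python
def api_path_id(value: str) -> str:
    # API paths take the mem- prefixed memory_id, never the internal UUID.
    if not value.startswith("mem-"):
        raise ValueError(f"expected a mem- prefixed memory_id, got {value!r}")
    return value
```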

Store a Memory

POST /api/v1/memories/store

Persist a pre-structured memory directly, bypassing LLM extraction. Use this when you have already synthesised a well-formed insight and want to store it immediately.

memory_store(
  title="Prefer async SQLAlchemy for all DB calls",
  memory_type="pattern",
  content="All database operations must use async/await. Sync calls block the event loop and cause latency spikes under load. Use AsyncSession from sqlalchemy.ext.asyncio and await all queries.",
  tags=["sqlalchemy", "async", "backend"],
  concepts=["async", "database", "performance"],
  files=["api/neuroloom_api/database.py"],
  importance=0.9,
  confidence=0.95
)

Note: content maps to narrative, files maps to source_files, and importance/confidence map to importance_score/confidence_score in the REST API.
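The mapping in that note can be captured as a small translation table, useful if you forward MCP tool arguments straight to the REST endpoint (a sketch; the dict and function are ours):

```python
# MCP tool argument -> REST field name, per the note above.
TOOL_TO_REST = {
    "content": "narrative",
    "files": "source_files",
    "importance": "importance_score",
    "confidence": "confidence_score",
}

def to_rest_payload(tool_args: dict) -> dict:
    # Rename mapped keys; pass everything else (title, tags, ...) through unchanged.
    return {TOOL_TO_REST.get(key, key): value for key, value in tool_args.items()}
```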

import httpx, os

response = httpx.post(
    "https://api.neuroloom.dev/api/v1/memories/store",
    headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
    json={
        "title": "Prefer async SQLAlchemy for all DB calls",
        "narrative": "All database operations must use async/await. Sync calls block the event loop and cause latency spikes under load.",
        "memory_type": "pattern",
        "tags": ["sqlalchemy", "async", "backend"],
        "concepts": ["async", "database", "performance"],
        "source_files": ["api/neuroloom_api/database.py"],
        "importance_score": 0.9,
        "confidence_score": 0.95,
    },
)
memory = response.json()
print(memory["memory_id"])  # mem-a1b2c3d4e5f6g7h8
curl -X POST "https://api.neuroloom.dev/api/v1/memories/store" \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Prefer async SQLAlchemy for all DB calls",
    "narrative": "All database operations must use async/await. Sync calls block the event loop and cause latency spikes under load.",
    "memory_type": "pattern",
    "tags": ["sqlalchemy", "async", "backend"],
    "concepts": ["async", "database", "performance"],
    "source_files": ["api/neuroloom_api/database.py"],
    "importance_score": 0.9,
    "confidence_score": 0.95
  }'

Request body

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| title | string | Yes | | Short descriptive title (max 500 characters). |
| narrative | string | Yes | | Full prose description of the memory content. |
| memory_type | string | No | "general" | Category label. See Memory Types. |
| tags | array of strings | No | [] | Keyword tags for filtering. |
| concepts | array of strings | No | [] | Abstract concept labels used for relationship discovery. |
| source_files | array of strings | No | [] | File paths associated with this memory. |
| importance_score | float | No | 0.5 | Importance weight, 0.0–1.0. |
| confidence_score | float | No | 0.5 | Confidence in the memory's accuracy, 0.0–1.0. |

Response: 201 Created, memory object.

{
  "id": "550e8400-e29b-41d4-a716-446655440000",
  "memory_id": "mem-a1b2c3d4e5f6g7h8",
  "workspace_id": "ws-xyz789",
  "title": "Prefer async SQLAlchemy for all DB calls",
  "narrative": "All database operations must use async/await. Sync calls block the event loop and cause latency spikes under load.",
  "memory_type": "pattern",
  "tags": ["sqlalchemy", "async", "backend"],
  "concepts": ["async", "database", "performance"],
  "source_files": ["api/neuroloom_api/database.py"],
  "importance_score": 0.9,
  "confidence_score": 0.95,
  "pagerank_score": null,
  "community_label": null,
  "access_count": 0,
  "retrieval_count": 0,
  "last_accessed_at": null,
  "source_session_id": null,
  "is_consolidated": false,
  "consolidated_from": null,
  "created_at": "2026-03-20T09:00:00Z",
  "updated_at": null
}
Note: pagerank_score and community_label are populated after the background graph analysis job runs. They are null immediately after creation.


List Memories

GET /api/v1/memories/

Return memories for the authenticated workspace, ordered by importance_score descending then created_at descending by default.

memory_get_index(limit=50)

Returns a lightweight title-and-type index. For full memory objects, use memory_get_timeline or the REST endpoint.

import httpx, os

response = httpx.get(
    "https://api.neuroloom.dev/api/v1/memories/",
    headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
    params={"memory_type": "pattern", "limit": 10, "ordering": "-importance_score"},
)
memories = response.json()
curl "https://api.neuroloom.dev/api/v1/memories/?memory_type=pattern&limit=10&ordering=-importance_score" \
  -H "Authorization: Token $MEMORIES_API_TOKEN"

Query parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| memory_type | string | | Filter to a single memory type (e.g. decision, pattern). |
| limit | integer | 50 | Results per page (min 1, max 500). |
| offset | integer | 0 | Results to skip for pagination. |
| ordering | string | -importance_score | Sort field. Prefix with - for descending. Supported: importance_score, access_count, created_at, retrieval_count. |
| pruning_candidates | boolean | false | When true, restricts to memories eligible for pruning: importance below 0.3, older than the retention period, and not accessed in 90+ days. |

Response: 200 OK, array of memory objects.


Get a Memory

GET /api/v1/memories/{memory_id}

Fetch a single memory with its full relationship graph, including outgoing (what this memory relates to) and incoming (what relates to this memory) edges.

memory_get_detail(
  memory_id="mem-a1b2c3d4e5f6g7h8",
  include_related=true
)

Setting include_related=true appends up to 5 semantically similar memories to the response; the call also records an access event automatically.

import httpx, os

response = httpx.get(
    "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8",
    headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
)
memory = response.json()
print(memory["outgoing_relationships"])
curl "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8" \
  -H "Authorization: Token $MEMORIES_API_TOKEN"

Path parameters

| Parameter | Type | Description |
|---|---|---|
| memory_id | string | The memory_id field (e.g. mem-a1b2c3d4e5f6g7h8). Not the internal id. |

Response: 200 OK, extends the base memory object with relationship arrays.

{
  "id": "...",
  "memory_id": "mem-a1b2c3d4e5f6g7h8",
  "title": "Prefer async SQLAlchemy for all DB calls",
  "narrative": "...",
  "pagerank_score": 0.42,
  "community_label": "database-patterns",
  "feedback_positive_count": 5,
  "feedback_negative_count": 1,
  "outgoing_relationships": [
    {
      "id": "...",
      "source_memory_id": "mem-a1b2c3d4e5f6g7h8",
      "target_memory_id": "mem-zz9988776655aabb",
      "relationship_type": "similar_to",
      "discovery_method": "embedding_similarity",
      "confidence": 0.87,
      "is_bidirectional": false,
      "context": null,
      "created_at": "2026-03-20T09:00:00Z"
    }
  ],
  "incoming_relationships": []
}

feedback_positive_count and feedback_negative_count are 0 until feedback is recorded via the Feedback API.


Update a Memory

PATCH /api/v1/memories/{memory_id}

Partial update — supply only the fields you want to change. memory_id and workspace_id are immutable.

Request body — all fields optional

| Field | Type | Description |
|---|---|---|
| title | string | New title (max 500 characters). |
| narrative | string | New narrative text. |
| tags | array of strings | Replaces the existing tags list entirely. |
| concepts | array of strings | Replaces the existing concepts list entirely. |
| source_files | array of strings | Replaces the existing source files list entirely. |
| importance_score | float | New importance weight, 0.0–1.0. |
| confidence_score | float | New confidence score, 0.0–1.0. |

Response: 200 OK, updated memory object.

curl -X PATCH "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8" \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "importance_score": 0.95,
    "tags": ["sqlalchemy", "async", "critical"]
  }'

Delete a Memory

DELETE /api/v1/memories/{memory_id}

Permanently deletes a memory and all its relationship edges. This cannot be undone.

Response: 204 No Content.

curl -X DELETE "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8" \
  -H "Authorization: Token $MEMORIES_API_TOKEN"

Batch Delete Memories

POST /api/v1/memories/batch-delete

Delete up to 100 memories in a single request. Only memories belonging to the authenticated workspace are deleted. The response reports the count actually deleted, which may be lower than the number of IDs supplied if some were not found.

Request body

| Field | Type | Required | Description |
|---|---|---|---|
| memory_ids | array of strings | Yes | List of memory_id values to delete. Maximum 100 items. |

Response: 200 OK

{ "deleted_count": 3 }

Supplying more than 100 IDs returns 422 Unprocessable Entity.

curl -X POST "https://api.neuroloom.dev/api/v1/memories/batch-delete" \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "memory_ids": ["mem-a1b2c3d4e5f6g7h8", "mem-zz9988776655aabb", "mem-cc1122334455ddee"]
  }'
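Since the endpoint rejects requests with more than 100 IDs, larger deletions need to be split into compliant batches first (the helper is ours):

```python
def chunk_ids(memory_ids: list, size: int = 100) -> list:
    # batch-delete returns 422 above 100 IDs, so cap each request at `size`.
    return [memory_ids[i:i + size] for i in range(0, len(memory_ids), size)]
```

Each chunk then goes in its own POST to the batch-delete endpoint; summing the deleted_count fields gives the total removed.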

Find Related Memories

GET /api/v1/memories/{memory_id}/related

Find memories semantically similar to the given memory using pgvector cosine similarity on stored embeddings. The source memory must have a generated embedding; if not yet computed, no results are returned.

memory_get_related(
  memory_id="mem-a1b2c3d4e5f6g7h8",
  limit=5
)
import httpx, os

response = httpx.get(
    "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8/related",
    headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
    params={"limit": 5, "similarity_threshold": 0.8},
)
related = response.json()
curl "https://api.neuroloom.dev/api/v1/memories/mem-a1b2c3d4e5f6g7h8/related?limit=5&similarity_threshold=0.8" \
  -H "Authorization: Token $MEMORIES_API_TOKEN"

Query parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| limit | integer | 10 | Max results (min 1, max 50). |
| similarity_threshold | float | 0.75 | Minimum cosine similarity score, 0.0–1.0. |

Response: 200 OK, array of SearchResult objects (same shape as search results).


Explore a Memory Subgraph

POST /api/v1/memories/explore

Expand a topic or query into a subgraph of related memories using graph traversal from seed nodes. Returns a bounded set of nodes and edges for visualisation or context injection.

memory_explore(
  query="database connection pooling",
  max_nodes=30,
  relationship_types=["similar_to", "references"],
  min_edge_confidence=0.7
)
import httpx, os

response = httpx.post(
    "https://api.neuroloom.dev/api/v1/memories/explore",
    headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
    json={
        "query": "database connection pooling",
        "max_nodes": 30,
        "relationship_types": ["similar_to", "references"],
        "min_edge_confidence": 0.7,
        "seed_limit": 5,
    },
)
subgraph = response.json()
curl -X POST "https://api.neuroloom.dev/api/v1/memories/explore" \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "database connection pooling",
    "max_nodes": 30,
    "relationship_types": ["similar_to", "references"],
    "min_edge_confidence": 0.7,
    "seed_limit": 5
  }'

Request body

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | | Topic or question used to seed the traversal. |
| max_nodes | integer | No | 50 | Maximum nodes to include in the subgraph (max 200). |
| relationship_types | array of strings | No | all types | Restrict traversal to these edge types. See Relationship Types. |
| min_edge_confidence | float | No | 0.0 | Minimum edge confidence for traversal. |
| seed_limit | integer | No | 5 | Number of seed memories to start traversal from. |

Response: 200 OK

{
  "nodes": [
    {
      "memory_id": "mem-a1b2c3d4e5f6g7h8",
      "title": "Use connection pooling for all DB connections",
      "memory_type": "pattern",
      "importance_score": 0.88,
      "pagerank_score": 0.51,
      "community_label": "database-patterns"
    }
  ],
  "edges": [
    {
      "source_memory_id": "mem-a1b2c3d4e5f6g7h8",
      "target_memory_id": "mem-zz9988776655aabb",
      "relationship_type": "similar_to",
      "confidence": 0.82
    }
  ],
  "seed_memories": ["mem-a1b2c3d4e5f6g7h8"]
}

Find Shortest Path

POST /api/v1/memories/path

Find the shortest relationship path between two memories using BFS graph traversal. Useful for understanding how two concepts connect through the workspace's relationship graph.

Request body

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| source_memory_id | string | Yes | | Starting memory memory_id. |
| target_memory_id | string | Yes | | Destination memory memory_id. |
| max_depth | integer | No | 5 | Maximum BFS depth to traverse. |
| relationship_types | array of strings | No | all types | Restrict path to these edge types. |

Response: 200 OK

{
  "path": [
    "mem-a1b2c3d4e5f6g7h8",
    "mem-bb3344556677ccdd",
    "mem-zz9988776655aabb"
  ],
  "edges": [
    {
      "source_memory_id": "mem-a1b2c3d4e5f6g7h8",
      "target_memory_id": "mem-bb3344556677ccdd",
      "relationship_type": "references",
      "confidence": 0.91
    },
    {
      "source_memory_id": "mem-bb3344556677ccdd",
      "target_memory_id": "mem-zz9988776655aabb",
      "relationship_type": "related_to",
      "confidence": 0.78
    }
  ],
  "depth": 2
}

If no path exists within max_depth, returns {"path": [], "edges": [], "depth": 0}.

curl -X POST "https://api.neuroloom.dev/api/v1/memories/path" \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "source_memory_id": "mem-a1b2c3d4e5f6g7h8",
    "target_memory_id": "mem-zz9988776655aabb",
    "max_depth": 5,
    "relationship_types": ["references", "related_to", "similar_to"]
  }'
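The traversal the endpoint performs can be sketched locally over (source, target) edge pairs; this is a plain BFS, not Neuroloom's actual implementation:

```python
from collections import deque

def shortest_path(edges, source, target, max_depth=5):
    # edges: iterable of (source_memory_id, target_memory_id) pairs.
    adjacency = {}
    for src, dst in edges:
        adjacency.setdefault(src, []).append(dst)
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path                     # depth == len(path) - 1
        if len(path) - 1 >= max_depth:      # stop expanding past max_depth edges
            continue
        for nxt in adjacency.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []                               # no path within max_depth
```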

Record an Access Event

POST /api/v1/memories/{memory_id}/access

Record that a memory was viewed or read. Increments access_count and updates last_accessed_at. Used by the MCP memory_get_detail tool automatically; call this endpoint directly only if you are building a custom client.

Response: 200 OK, updated memory object.


Record a Retrieval Event

POST /api/v1/memories/{memory_id}/retrieval

Record that a memory appeared in search results. Increments retrieval_count. Distinct from access — retrieval means the memory surfaced in a search response; access means it was directly opened. Both signals feed into importance score recalculation.

Response: 200 OK, updated memory object.
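Both counters can be driven by one helper that only varies the final path segment; the post callable is injected so the sketch stays transport-agnostic (the function itself is ours):

```python
def record_event(post, memory_id: str, event: str):
    # "access": the memory was opened directly (bumps access_count).
    # "retrieval": it surfaced in search results (bumps retrieval_count).
    if event not in ("access", "retrieval"):
        raise ValueError("event must be 'access' or 'retrieval'")
    return post(f"/api/v1/memories/{memory_id}/{event}")
```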


Memory Types

Valid values for the memory_type field (9 canonical types):

LLM-assignable

| Value | When to Use |
|---|---|
| decision | Architectural or design decisions with a chosen direction and rationale |
| pattern | Recurring implementation patterns the team follows (descriptive: "when X, we do Y") |
| convention | Team conventions and style rules (prescriptive: "always/never do X") |
| architecture | System-level structural knowledge: service boundaries, data flow, dependency directions |
| discovery | Learned insights, gotchas, and non-obvious behaviors where nothing broke |
| incident | Root causes and fixes for bugs and incidents, to avoid re-diagnosing |
| general | General-purpose catch-all; staging area for nightly reclassification (last resort only) |

System-only (never submit these from the LLM extraction path)

| Value | When to Use |
|---|---|
| wiki | Manually authored reference material, not tied to a session |
| sdlc_knowledge | SDLC process and workflow knowledge |

Error Responses

| Status | When |
|---|---|
| 401 Unauthorized | Missing or invalid Authorization header, or the API key is inactive. |
| 404 Not Found | The memory_id does not exist in the authenticated workspace. |
| 422 Unprocessable Entity | Request body or query parameter validation failed. Also returned when a batch delete request exceeds 100 items. |
{ "detail": "Memory 'mem-notexist' not found." }
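The table maps naturally onto exception handling in a client; a sketch using the detail field shown above (the function and exception choices are ours):

```python
def raise_for_api_error(status_code: int, body: dict) -> dict:
    # Translate the documented error statuses into Python exceptions.
    detail = body.get("detail", "")
    if status_code == 401:
        raise PermissionError(detail or "check the Token scheme and API key")
    if status_code == 404:
        raise LookupError(detail or "memory not found in this workspace")
    if status_code == 422:
        raise ValueError(detail or "validation failed")
    return body  # success bodies pass through untouched
```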

Pagination

All list endpoints use limit + offset pagination:

| Parameter | Description |
|---|---|
| limit | Results per page. |
| offset | Results to skip before returning the page. |

To fetch the second page of 20 results: ?limit=20&offset=20.
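Walking every page then reduces to a loop that advances offset until a short page comes back (fetch_page is any callable wrapping one of the list endpoints; the helper is ours):

```python
def iter_all(fetch_page, limit: int = 100):
    # fetch_page(limit=..., offset=...) returns one page as a list.
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page
        if len(page) < limit:   # a short page means the listing is exhausted
            return
        offset += limit
```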

