Reference
MCP Tools
Call Neuroloom's memory and session operations as MCP tools from Claude Code, Cursor, Windsurf, or any MCP-compatible agent. This page documents all 28 tools with their parameters, required values, and usage patterns.
For setup instructions, see the Quickstart. For session lifecycle concepts, see Session Lifecycle. For a worked integration, see Coding Agent Memory cookbook.
Connection
Connect to the hosted MCP server using the Streamable HTTP transport:
{
"mcpServers": {
"neuroloom": {
"type": "http",
"url": "https://mcp.neuroloom.dev/mcp",
"headers": {
"Authorization": "Bearer your_api_key_here"
}
}
}
}
Or from the CLI:
# Project-scoped (shared via .mcp.json)
claude mcp add-json neuroloom -s project '{"type":"http","url":"https://mcp.neuroloom.dev/mcp","headers":{"Authorization":"Bearer nl_your_api_key_here"}}'
# User-scoped (personal, not committed)
claude mcp add-json neuroloom -s user '{"type":"http","url":"https://mcp.neuroloom.dev/mcp","headers":{"Authorization":"Bearer nl_your_api_key_here"}}'
MCP HTTP transport uses Authorization: Bearer <key>. Direct REST API calls use Authorization: Token <key>. These are different schemes for different transports.
Transport Restrictions
Some tools are only available on specific transports:
| Transport | Tools Available |
|---|---|
| HTTP (hosted) | All tools except the four stdio-only tools below |
| stdio (local) | All tools except the two HTTP-only tools below |
stdio-only tools (not available on the hosted HTTP server):
- document_ingest_file
- document_ingest_files_batch
- document_ingest_batch_from_file
- sdlc_seed_from_file
HTTP-only tools (not available on stdio):
- document_ingest_batch_get_upload_url
- sdlc_seed_get_upload_url
Memory Tools (10)
memory_search
Search cross-session memory with semantic and keyword matching. Combines pgvector cosine similarity with keyword frequency scoring. The most common tool — call before starting any non-trivial task to surface relevant past decisions and patterns.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
query | string | Yes | Natural language search query. |
workspace_id | string | No | Target workspace ID. Defaults to the workspace associated with the API key. |
memory_types | array of strings | No | Filter by type: decision, pattern, architecture, incident, etc. |
tags | array of strings | No | Filter to memories matching ANY of these tags (OR semantics). |
tag_prefixes | array of strings | No | Filter to memories whose tags start with any of these prefixes. MCP-only; not in REST API. |
files | array of strings | No | Filter to memories referencing these file paths. |
min_importance | number | No | Minimum importance score (0.0–1.0). |
min_confidence | number | No | Minimum confidence score (0.0–1.0). |
limit | integer | No | Maximum results (default 5, max 100). Keep at 5 for focused lookups; raise to 10–20 for broad surveys. |
search_profile | string enum | No | Retrieval strategy (default "default"). One of: "default" (balanced blend), "exact" (keyword-heavy, use when you know the exact term), "exploration" (semantic-heavy, for open-ended discovery), "recency" (weights recently created memories higher), "file-scoped" (boosts memories linked to specific file paths). |
min_score | number | No | Minimum cross-encoder relevance score (0.0–1.0, default 0.15). Acts as a noise floor — memories whose reranked relevance falls below this threshold are suppressed. Relevant pairs typically score 0.22–0.29; noise scores 0.01–0.07. Raise to 0.25+ for stricter filtering. |
Example
memory_search(
query="error handling patterns for async FastAPI endpoints",
memory_types=["pattern", "convention", "incident"],
min_importance=0.6,
limit=10
)
memory_get_detail
Retrieve the full record for a single memory, including its relationship graph. Returns the complete entry: narrative, tags, concepts, source files, importance and confidence scores, and relationship edges. Use after memory_search to read the complete context for a specific memory.
Also records an access event on the memory, incrementing access_count.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
memory_id | string | Yes | The memory ID (e.g. mem-abc123). |
include_related | boolean | No | Append up to 5 semantically similar memories to the response. Default true. |
Example
memory_get_detail(
memory_id="mem-a1b2c3d4e5f6g7h8",
include_related=true
)
memory_get_timeline
List recent memories ordered by creation time descending. Useful for understanding what was captured during a recent work period or reviewing what was stored in the last session.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. |
days | integer | No | How many days back to include. Default 7. |
limit | integer | No | Maximum number of memories. Default 30. |
Example
memory_get_timeline(days=3, limit=20)
memory_get_index
Return a lightweight title-and-type index of all memories. The most token-efficient way to survey your workspace. Use this for a quick overview, then memory_get_detail to read specific entries in full.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. |
limit | integer | No | Maximum entries to return. Default 50. |
Example
memory_get_index(limit=100)
memory_get_related
Find memories semantically similar to a given memory using pgvector cosine similarity on stored embeddings. Requires the source memory to have a generated embedding — if not yet computed, returns an empty list.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
memory_id | string | Yes | The memory ID of the source memory. |
limit | integer | No | Maximum related memories to return. Default 10. |
Example
memory_get_related(
memory_id="mem-a1b2c3d4e5f6g7h8",
limit=5
)
memory_by_file
Find memories that reference a specific file path. Use when opening a file to surface past decisions and patterns captured while working on it. Partial path matches are supported — database.py matches api/neuroloom_api/database.py.
This tool is a convenience wrapper that calls the search endpoint with a files filter.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
file_path | string | Yes | File path to search for. Partial matches are supported. |
workspace_id | string | No | Target workspace ID. |
limit | integer | No | Maximum results. Default 20. |
Example
memory_by_file(
file_path="api/neuroloom_api/routers/memories.py",
limit=15
)
memory_store
Persist a pre-structured memory directly to the workspace. Bypasses the observation pipeline — use when you have already synthesised a well-formed insight and want to store it immediately. After storing, the API enqueues embedding generation and relationship discovery in the background.
MCP parameter names differ from REST API field names: content maps to narrative, files maps to source_files, and importance/confidence map to importance_score/confidence_score.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
title | string | Yes | Memory title (max 500 characters). |
memory_type | string | Yes | Category: decision, pattern, convention, architecture, discovery, incident, general, wiki, sdlc_knowledge. |
content | string | Yes | The primary memory narrative. Maps to narrative in the REST API. |
workspace_id | string | No | Target workspace ID. |
summary | string | No | Short one-line summary. |
concepts | array of strings | No | Key concept labels used for relationship discovery. |
tags | array of strings | No | Freeform tags for filtering. |
files | array of strings | No | Source file paths referenced by this memory. Maps to source_files in the REST API. |
importance | number | No | Importance score (0.0–1.0). Default 0.7. Maps to importance_score. |
confidence | number | No | Confidence score (0.0–1.0). Default 0.8. Maps to confidence_score. |
Example
memory_store(
title="Use HNSW index with m=16, ef_construction=64 for pgvector",
memory_type="decision",
content="HNSW index parameters: m=16 and ef_construction=64 give good recall at acceptable build time. Use ef_search=40 as the query default, bumping to 100-200 for high-recall scenarios. Index is non-transactional — rollbacks do not undo index updates.",
tags=["pgvector", "hnsw", "performance"],
concepts=["vector-search", "database-indexing"],
files=["api/neuroloom_api/models.py"],
importance=0.88,
confidence=0.95
)
memory_rate
Rate a memory's usefulness after retrieval or use. Call after using (or deciding not to use) a retrieved memory. Positive ratings increase the memory's importance score over time; negative ratings decrease it during the next scheduled recalculation.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
memory_id | string | Yes | The memory ID (e.g. mem-abc123). |
useful | boolean | Yes | true if the memory was helpful, false if not. |
context | string | No | Explanation of why the memory was or was not useful. |
Example
memory_rate(
memory_id="mem-a1b2c3d4e5f6g7h8",
useful=true,
context="The selectinload pattern resolved the N+1 query issue immediately."
)
memory_explore
Seed a topic exploration using a query, then expand through relationship edges to return a bounded subgraph of connected memories. Use this to understand how a topic spreads across your workspace relationship graph, or to build richer context injections.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
query | string | Yes | Natural language topic used to find seed memories. |
max_nodes | integer | No | Maximum nodes in the returned subgraph. Default 50, max 200. |
relationship_types | array of strings | No | Restrict traversal to these edge types. Omit for all types. |
min_edge_confidence | number | No | Minimum edge confidence for traversal. Default 0.0.
seed_limit | integer | No | Number of seed memories to start from. Default 5. |
Example
memory_explore(
query="error handling and resilience patterns",
max_nodes=25,
relationship_types=["similar_to", "references"],
min_edge_confidence=0.65,
seed_limit=3
)
workspace_insight
Return a concise snapshot of what the memory engine knows about this project. Use this to answer "What does Neuroloom know about this project?", "How many memories have been stored?", or "When was the last memory added?" Also called internally by the /neuroloom:status slash command.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. Defaults to the workspace associated with the API key. |
Example
workspace_insight()
Response fields
| Field | Type | Description |
|---|---|---|
workspace_id | string | Owning workspace UUID |
total_memories | integer | Total memories stored in this workspace |
memory_counts | object | Memory counts keyed by type — all 9 types included, zeros shown |
total_relationships | integer | Total relationship edges |
relationship_counts | array | Top-7 edge types by count, sorted descending |
pending_observations | integer | Observation buffer depth — events not yet processed into memories |
last_memory_added_at | timestamp or null | Timestamp of the most recently created memory |
last_discovery_run_at | timestamp or null | Proxy for when relationship discovery last ran |
Session Tools (3)
session_start
Start a new Neuroloom session and retrieve initial memory context. Call at the beginning of a work session to register it with Neuroloom and receive relevant past memories for context injection.
The returned session_id must be passed to session_end and session_get_context.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. |
project_path | string | No | Working directory for this session. Used to associate memories with a project. Auto-detected when not supplied. |
Example
session_start(
project_path="/Users/dev/projects/neuroloom"
)
Response includes session_id (a sess- prefixed string), started_at, and the initial memory context block ready for injection.
session_end
End an active Neuroloom session. Marks the session inactive and enqueues an async batch extraction job that processes observations into structured memories. The response is returned immediately — extraction runs in the background.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
session_id | string | Yes | Session ID returned by session_start. |
summary | string | No | Human-written summary of the session's outcomes. Gives the extraction job additional context. |
Example
session_end(
session_id="sess-abc123",
summary="Implemented D38 graph API. Explore and path endpoints now live with BFS traversal. Added pagerank_score and community_label fields to memory responses."
)
session_get_context
Retrieve relevant memory context for an active session. Returns recent high-importance memories from the workspace for context injection. Use after session_start or mid-session to refresh context when switching to a new area of work.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
session_id | string | Yes | Active session ID. |
max_memories | integer | No | Maximum memories to return. Default 10. |
Example
session_get_context(
session_id="sess-abc123",
max_memories=15
)
Document Tools (6)
Document tools ingest structured content (markdown, plaintext, PDFs) into the workspace as memories. Useful for bulk-loading existing documentation, wikis, or decision logs.
document_ingest
Ingest a single document by passing its content directly. Creates one or more memories from the document's content using the workspace's extraction pipeline.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
content | string | Yes | Document text content (markdown or plaintext). |
title | string | Yes | Document title, used as the memory title root. |
memory_type | string | No | Memory type to assign to extracted memories. Default "wiki". |
tags | array of strings | No | Tags to apply to all extracted memories. |
workspace_id | string | No | Target workspace ID. |
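Example
A minimal sketch; the document content, title, and tags below are placeholders:
document_ingest(
content="# Error Handling Conventions\n\nAll service-layer functions raise typed exceptions rather than returning error codes.",
title="Error Handling Conventions",
memory_type="wiki",
tags=["docs", "conventions"]
)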
document_ingest_batch
Ingest multiple documents in a single call by passing a list of document objects. Each document is processed independently.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
documents | array of objects | Yes | List of document objects, each with content, title, and optional memory_type and tags. |
workspace_id | string | No | Target workspace ID. |
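Example
A sketch ingesting two documents in one call; titles and content are placeholders:
document_ingest_batch(
documents=[
{"content": "# ADR-001: Use PostgreSQL", "title": "ADR-001: Use PostgreSQL", "memory_type": "decision", "tags": ["adr"]},
{"content": "# ADR-002: Adopt FastAPI", "title": "ADR-002: Adopt FastAPI", "memory_type": "decision", "tags": ["adr"]}
]
)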
document_ingest_file
Ingest a file from the local filesystem by path. Reads the file and ingests its contents.
stdio only. This tool is not available on the hosted HTTP MCP server. Run the MCP server locally with stdio transport to use file-based ingestion.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
file_path | string | Yes | Absolute path to the file on the local filesystem. |
title | string | No | Override title. Defaults to the filename. |
memory_type | string | No | Memory type for extracted memories. Default "wiki". |
tags | array of strings | No | Tags to apply. |
workspace_id | string | No | Target workspace ID. |
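Example
A sketch assuming a locally running stdio server; the path is a placeholder:
document_ingest_file(
file_path="/Users/dev/projects/neuroloom/docs/architecture.md",
memory_type="architecture",
tags=["docs"]
)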
document_ingest_files_batch
Ingest multiple files from the local filesystem in a single call.
stdio only. Not available on the hosted HTTP MCP server.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
file_paths | array of strings | Yes | Absolute paths to the files to ingest. |
memory_type | string | No | Memory type for all extracted memories. Default "wiki". |
tags | array of strings | No | Tags to apply to all files. |
workspace_id | string | No | Target workspace ID. |
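Example
A sketch with placeholder paths; the memory type and tags apply to every file in the batch:
document_ingest_files_batch(
file_paths=[
"/Users/dev/projects/neuroloom/docs/runbook.md",
"/Users/dev/projects/neuroloom/docs/oncall.md"
],
memory_type="wiki",
tags=["ops"]
)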
document_ingest_batch_from_file
Ingest a batch of documents defined in a JSON file on the local filesystem.
stdio only. Not available on the hosted HTTP MCP server.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
batch_file_path | string | Yes | Absolute path to a JSON file containing an array of document objects. |
workspace_id | string | No | Target workspace ID. |
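Example
A sketch with a placeholder path. The JSON file is assumed to hold an array shaped like the documents parameter of document_ingest_batch (objects with content, title, and optional memory_type and tags):
document_ingest_batch_from_file(
batch_file_path="/Users/dev/projects/neuroloom/docs/batch.json"
)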
document_ingest_batch_get_upload_url
Get a pre-signed upload URL for uploading a batch document file to the hosted MCP server. Use when running in HTTP transport mode and you need to upload large document batches.
HTTP only. Not available on stdio transport.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. |
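Example
A minimal call; the response contains the pre-signed upload URL:
document_ingest_batch_get_upload_url()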
SDLC Tools (4)
SDLC tools ingest structured knowledge from SDLC deliverables (specs, plans, results) as memories. Designed for the Neuroloom SDLC plugin.
sdlc_seed
Seed the workspace with SDLC knowledge by passing structured content directly. Creates memories from SDLC deliverable content (specs, plans, results, changelogs).
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
content | string | Yes | SDLC document text content. |
document_type | string | Yes | Type of SDLC document: spec, plan, result, changelog, knowledge. |
deliverable_id | string | No | Deliverable ID (e.g. D38) for cross-referencing. |
workspace_id | string | No | Target workspace ID. |
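Example
A sketch seeding a spec excerpt; the content and deliverable ID are placeholders:
sdlc_seed(
content="# D38 Spec: Graph API\n\nExpose explore and path endpoints over the memory relationship graph.",
document_type="spec",
deliverable_id="D38"
)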
sdlc_seed_from_file
Seed the workspace from a local SDLC document file.
stdio only. Not available on the hosted HTTP MCP server.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
file_path | string | Yes | Absolute path to the SDLC document file. |
document_type | string | Yes | Type of SDLC document: spec, plan, result, changelog, knowledge. |
deliverable_id | string | No | Deliverable ID for cross-referencing. |
workspace_id | string | No | Target workspace ID. |
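Example
A sketch assuming a locally running stdio server; the path and deliverable ID are placeholders:
sdlc_seed_from_file(
file_path="/Users/dev/projects/neuroloom/sdlc/D38-plan.md",
document_type="plan",
deliverable_id="D38"
)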
sdlc_seed_get_upload_url
Get a pre-signed upload URL for uploading SDLC document files to the hosted MCP server.
HTTP only. Not available on stdio transport.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
workspace_id | string | No | Target workspace ID. |
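Example
A minimal call; the response contains the pre-signed upload URL:
sdlc_seed_get_upload_url()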
sdlc_get_version
Return the current SDLC plugin version and server version. Use to confirm compatibility between your local plugin and the hosted MCP server.
Parameters
None.
Example
sdlc_get_version()
Code Graph Tools (5)
Code graph tools query the CodeWeaver structural index — functions, classes, and modules parsed from your codebase. These tools require the CodeWeaver code graph to be populated (run the neuroloom:init skill or seed_code_graph.py locally first).
code_search
Find functions, classes, or modules by name or file pattern. Use this to resolve a function name to its symbol_id before calling code_callers, code_callees, or code_context.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
q | string | No | Name substring filter (case-insensitive). |
file_pattern | string | No | File path glob pattern (e.g. src/services/*.ts). |
symbol_type | string enum | No | Filter by type: "function", "class", or "module". |
limit | integer | No | Maximum results (default 50). |
Example
code_search(q="createUser", symbol_type="function")code_navigate
See what calls a function and what it calls — a 1-hop graph view. Use for graph topology only. For linked memories alongside the call chain, use code_context instead.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
symbol_id | string | Yes | UUID of the code symbol (from code_search). |
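Example
A sketch; the symbol_id is a placeholder UUID obtained from a prior code_search call:
code_navigate(symbol_id="9b3f2a10-6c4e-4d8a-b1f7-2e5c0a9d4e61")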
code_callers
Get all functions and methods that call this symbol (incoming edges only).
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
symbol_id | string | Yes | UUID of the code symbol (from code_search). |
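Example
A sketch resolving the symbol by name first, then listing its callers; the name and ID are placeholders:
code_search(q="create_memory", symbol_type="function")
code_callers(symbol_id="<symbol_id from the code_search result>")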
code_callees
Get all functions and methods that this symbol calls (outgoing edges only).
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
symbol_id | string | Yes | UUID of the code symbol (from code_search). |
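Example
A sketch; the symbol_id is a placeholder UUID from code_search:
code_callees(symbol_id="9b3f2a10-6c4e-4d8a-b1f7-2e5c0a9d4e61")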
code_context
Given a symbol, traverse its call chain and return all linked memories. Use when you need code structure and memory context together — answers "what do we know about this function and everything it touches?". Provide either symbol_id (from code_search) or symbol_name with an optional file_path for name-based lookup.
Parameters
| Name | Type | Required | Description |
|---|---|---|---|
symbol_id | string | No | UUID of the code symbol (from code_search). |
symbol_name | string | No | Function or class name for name-based lookup. |
file_path | string | No | Optional file path to disambiguate when multiple symbols share a name. |
Example
code_context(symbol_name="get_memories", file_path="api/routers/memories.py")When the response includes graph_empty: true, no symbols are indexed yet. Run the neuroloom:init skill or seed_code_graph.py locally to populate the code graph.
Tool Summary
| Tool | Category | Required Parameters |
|---|---|---|
memory_search | Memory | query |
memory_get_detail | Memory | memory_id |
memory_get_timeline | Memory | (none) |
memory_get_index | Memory | (none) |
memory_get_related | Memory | memory_id |
memory_by_file | Memory | file_path |
memory_store | Memory | title, memory_type, content |
memory_rate | Memory | memory_id, useful |
memory_explore | Memory | query |
workspace_insight | Memory | (none) |
session_start | Session | (none) |
session_end | Session | session_id |
session_get_context | Session | session_id |
document_ingest | Document | content, title |
document_ingest_batch | Document | documents |
document_ingest_file | Document (stdio) | file_path |
document_ingest_files_batch | Document (stdio) | file_paths |
document_ingest_batch_from_file | Document (stdio) | batch_file_path |
document_ingest_batch_get_upload_url | Document (HTTP) | (none) |
sdlc_seed | SDLC | content, document_type |
sdlc_seed_from_file | SDLC (stdio) | file_path, document_type |
sdlc_seed_get_upload_url | SDLC (HTTP) | (none) |
sdlc_get_version | SDLC | (none) |
code_search | Code Graph | (none) |
code_navigate | Code Graph | symbol_id |
code_callers | Code Graph | symbol_id |
code_callees | Code Graph | symbol_id |
code_context | Code Graph | (none) |
Troubleshooting
| Error | Cause | Fix |
|---|---|---|
401 authentication_failed | API key invalid or revoked | Create a new key at app.neuroloom.dev/settings/api-keys |
401 followed by OAuth flow | URL has a trailing slash (/mcp/) causing a 307 redirect that strips the Authorization header | Use https://mcp.neuroloom.dev/mcp (no trailing slash) |
503 upstream_unavailable | Neuroloom API temporarily down | Retry after a moment |
| Connection refused | MCP client doesn't support Streamable HTTP transport | Check your client docs; Claude Desktop (stdio-only) is not supported by the hosted server |
| Connection timeout | Server unreachable | Check https://mcp.neuroloom.dev/health |
| Client initiates OAuth flow | Client connected to an older server version that lacks the OAuth compatibility layer | Upgrade to the current MCP server (mcp.neuroloom.dev) — current versions respond to OAuth discovery probes automatically |
Related
- Quickstart — install and configure the plugin or direct MCP access
- Memory Lifecycle — how MCP tool calls feed the observation and extraction pipeline
- Coding Agent Memory — worked example using MCP tools in a coding workflow