Decision Tracking

Your project's architectural history lives in commits, Slack threads, and individual developers' memories. When someone asks "why do we use StrEnum instead of Postgres ENUM?", the answer is somewhere — but finding it takes 20 minutes. Store decisions as typed memories and retrieve the context in seconds.

Problem

Architectural Decision Records exist in theory. In practice, they live in a docs/adr/ folder that nobody updates after month two. The decisions that actually shape the codebase — the ones made during debugging sessions, code reviews, and quick Slack calls — never get written down at all. Six months later, the same trade-offs get re-litigated because there's no record that the team already evaluated them.

Persona

Daniel maintains a FastAPI backend that's been in production for two years. His team has made hundreds of architectural decisions — database schema choices, authentication approaches, caching strategies. Some are in ADRs. Most are not. When a new engineer asks why a particular pattern exists, Daniel either remembers it or the knowledge is lost.

Prerequisites

  • Neuroloom API key from app.neuroloom.dev/settings/api-keys
  • Environment variables set:
    export MEMORIES_API_TOKEN="nl_your_api_key_here"
    export MEMORIES_WORKSPACE_ID="ws_your_workspace_id_here"
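Before running any of the examples, a quick sanity check (a minimal sketch using only the standard library) confirms both variables are exported:

```python
import os

def check_env(names: list[str]) -> list[str]:
    """Return the names of any required variables that are missing or empty."""
    return [n for n in names if not os.environ.get(n)]

missing = check_env(["MEMORIES_API_TOKEN", "MEMORIES_WORKSPACE_ID"])
if missing:
    print("Missing:", ", ".join(missing))
else:
    print("Environment looks good.")
```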

Step 1: Store an ADR as a DECISION memory

Decision memories have a higher default importance than general memories, and they're filtered separately so engineers can query "all decisions" as a distinct set.

Use memory_store:
- title: "StrEnum over Postgres ENUM for choice fields"
- memory_type: "decision"
- content: "We use Python StrEnum instead of Postgres ENUM for all choice fields (memory_type, relationship_type, job_status, etc). Postgres ENUMs require DDL migrations to add new values — ALTER TYPE cannot run inside a transaction, which makes zero-downtime deploys harder. StrEnum values are stored as VARCHAR, migrations for new values are just data migrations, and we can add new values without touching the schema. Trade-off: no DB-level constraint on valid values, but application-level validation via Pydantic covers this. Decision made during D7 migration planning."
- concepts: ["database schema", "ENUM", "migrations", "StrEnum", "PostgreSQL"]
- files: ["api/neuroloom_api/models/"]
- importance: 0.92
- tags: ["schema", "migrations", "adr"]
curl -X POST https://api.neuroloom.dev/api/v1/memories/store \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "title": "StrEnum over Postgres ENUM for choice fields",
    "memory_type": "decision",
    "narrative": "We use Python StrEnum instead of Postgres ENUM for all choice fields. Postgres ENUMs require DDL migrations to add new values — ALTER TYPE cannot run inside a transaction, which makes zero-downtime deploys harder. StrEnum values are stored as VARCHAR, migrations for new values are just data migrations, and we can add new values without touching the schema. Trade-off: no DB-level constraint on valid values, but application-level validation via Pydantic covers this.",
    "concepts": ["database schema", "ENUM", "migrations", "StrEnum", "PostgreSQL"],
    "source_files": ["api/neuroloom_api/models/"],
    "importance_score": 0.92,
    "tags": ["schema", "migrations", "adr"]
  }'
import os
import httpx

token = os.environ["MEMORIES_API_TOKEN"]
workspace_id = os.environ["MEMORIES_WORKSPACE_ID"]

response = httpx.post(
    "https://api.neuroloom.dev/api/v1/memories/store",
    headers={"Authorization": f"Token {token}"},
    json={
        "workspace_id": workspace_id,
        "title": "StrEnum over Postgres ENUM for choice fields",
        "memory_type": "decision",
        "narrative": (
            "We use Python StrEnum instead of Postgres ENUM for all choice fields. "
            "Postgres ENUMs require DDL migrations to add new values — ALTER TYPE "
            "cannot run inside a transaction, which makes zero-downtime deploys harder. "
            "StrEnum values are stored as VARCHAR, migrations for new values are just "
            "data migrations, and we can add new values without touching the schema."
        ),
        "concepts": ["database schema", "ENUM", "migrations", "StrEnum", "PostgreSQL"],
        "source_files": ["api/neuroloom_api/models/"],
        "importance_score": 0.92,
        "tags": ["schema", "migrations", "adr"],
    },
)
memory = response.json()
print(f"Stored: {memory['id']} ({memory['title']})")

Response:

{
  "id": "mem-2e7b9f3d",
  "title": "StrEnum over Postgres ENUM for choice fields",
  "memory_type": "decision",
  "importance_score": 0.92,
  "created_at": "2026-04-01T10:30:00Z"
}

Step 2: Link related decisions

After storing the StrEnum decision, store a related decision about zero-downtime migrations and link them explicitly. Explicit relationships make it possible to navigate from one decision to its connected decisions without running a new search.

Use memory_store:
- title: "Zero-downtime migration strategy"
- memory_type: "decision"
- content: "All schema migrations must be backward-compatible for at least one deploy cycle. We use the expand-contract pattern: the first migration adds the new column or table, application code is deployed to write to both old and new, and a second migration removes the old column. This allows rollback without data loss. Postgres's restrictions on DDL inside transactions are the key constraint that shapes this approach."
- concepts: ["migrations", "zero-downtime", "expand-contract", "PostgreSQL", "Alembic"]
- files: ["api/alembic/", "api/neuroloom_api/"]
- importance: 0.90
- tags: ["migrations", "adr", "deployment"]

Then link them:

Use memory_explore with query "migration strategy" to find the related memory IDs, then store a relationship linking "StrEnum over Postgres ENUM" (mem-2e7b9f3d) to the zero-downtime migration memory with relationship_type "related_to"
# Store the related decision
RELATED_ID=$(curl -s -X POST https://api.neuroloom.dev/api/v1/memories/store \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "title": "Zero-downtime migration strategy",
    "memory_type": "decision",
    "narrative": "All schema migrations must be backward-compatible for at least one deploy cycle. We use the expand-contract pattern: first migration adds the new column, application writes to both, second migration removes the old column.",
    "concepts": ["migrations", "zero-downtime", "expand-contract", "PostgreSQL", "Alembic"],
    "source_files": ["api/alembic/"],
    "importance_score": 0.90,
    "tags": ["migrations", "adr", "deployment"]
  }' | python3 -c "import sys,json; print(json.load(sys.stdin)['id'])")

# Create a relationship between the two decisions
curl -X POST https://api.neuroloom.dev/api/v1/memories/relationships \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "source_memory_id": "mem-2e7b9f3d",
    "target_memory_id": "'"$RELATED_ID"'",
    "relationship_type": "related_to",
    "confidence": 0.9
  }'
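The same store-and-link flow in Python. The link_payload helper is illustrative, not part of the API, and the network calls are guarded so the sketch only sends requests when credentials are configured:

```python
import os

def link_payload(workspace_id: str, source_id: str, target_id: str) -> dict:
    """Build the request body for the /memories/relationships endpoint."""
    return {
        "workspace_id": workspace_id,
        "source_memory_id": source_id,
        "target_memory_id": target_id,
        "relationship_type": "related_to",
        "confidence": 0.9,
    }

if "MEMORIES_API_TOKEN" in os.environ:  # only call the API when configured
    import httpx

    token = os.environ["MEMORIES_API_TOKEN"]
    workspace_id = os.environ["MEMORIES_WORKSPACE_ID"]
    headers = {"Authorization": f"Token {token}"}

    # Store the related decision and capture its id
    stored = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/store",
        headers=headers,
        json={
            "workspace_id": workspace_id,
            "title": "Zero-downtime migration strategy",
            "memory_type": "decision",
            "narrative": "All schema migrations must be backward-compatible "
                         "for at least one deploy cycle (expand-contract).",
            "tags": ["migrations", "adr", "deployment"],
        },
    ).json()

    # Link the StrEnum decision from Step 1 to the new one
    rel = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/relationships",
        headers=headers,
        json=link_payload(workspace_id, "mem-2e7b9f3d", stored["id"]),
    ).json()
    print(f"Created relationship: {rel['id']}")
```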

Response:

{
  "id": "rel-5f1a8c2e",
  "source_memory_id": "mem-2e7b9f3d",
  "target_memory_id": "mem-9c4d2a7f",
  "relationship_type": "related_to",
  "confidence": 0.9
}

Neuroloom also discovers relationships automatically once embedding generation completes, using embedding similarity and concept overlap. Explicit relationships are useful when you already know the connection and want it available immediately.


Step 3: Search decisions by concept

When a new engineer asks "why do we structure migrations this way?" — search by the question, not the exact title:

Use memory_search with query "why do we use this migration approach" and memory_types ["decision"] and tags ["adr"]
curl -X POST https://api.neuroloom.dev/api/v1/memories/search \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "query": "why do we use this migration approach",
    "memory_types": ["decision"],
    "tags": ["adr"],
    "limit": 10
  }'
response = httpx.post(
    "https://api.neuroloom.dev/api/v1/memories/search",
    headers={"Authorization": f"Token {token}"},
    json={
        "workspace_id": workspace_id,
        "query": "why do we use this migration approach",
        "memory_types": ["decision"],
        "tags": ["adr"],
        "limit": 10,
    },
)
for r in response.json()["results"]:
    print(f"{r['score']:.2f}  [{r['memory_type']}]  {r['title']}")

Response:

{
  "results": [
    {
      "id": "mem-9c4d2a7f",
      "title": "Zero-downtime migration strategy",
      "memory_type": "decision",
      "score": 0.89,
      "summary": "Expand-contract pattern for backward-compatible schema changes across deploy cycles"
    },
    {
      "id": "mem-2e7b9f3d",
      "title": "StrEnum over Postgres ENUM for choice fields",
      "memory_type": "decision",
      "score": 0.81,
      "summary": "StrEnum avoids ALTER TYPE DDL constraint that blocks zero-downtime deploys"
    }
  ]
}

Both decisions surfaced because they share conceptual territory — even though the query didn't mention StrEnum or expand-contract.


Step 4: Retrieve decisions by file

When opening a migration file, surface all decisions that relate to it:

Use memory_by_file with file_path "api/alembic/"
curl -X POST https://api.neuroloom.dev/api/v1/memories/by-file \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "file_path": "api/alembic/",
    "limit": 20
  }'
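A Python equivalent of the curl call above, guarded so it only sends the request when credentials are configured:

```python
import os

# Request body for the by-file lookup; workspace_id is added at call time.
payload = {
    "file_path": "api/alembic/",
    "limit": 20,
}

if "MEMORIES_API_TOKEN" in os.environ:  # only call the API when configured
    import httpx

    payload["workspace_id"] = os.environ["MEMORIES_WORKSPACE_ID"]
    response = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/by-file",
        headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
        json=payload,
    )
    for r in response.json()["results"]:
        print(f"{r['importance_score']:.2f}  [{r['memory_type']}]  {r['title']}")
```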

Response:

{
  "results": [
    {
      "id": "mem-9c4d2a7f",
      "title": "Zero-downtime migration strategy",
      "memory_type": "decision",
      "matched_files": ["api/alembic/"],
      "importance_score": 0.90
    }
  ]
}

Partial path matching is supported — api/alembic/ matches any memory whose source_files includes a path containing that string.


Step 5: Surface contradictions

When a new decision contradicts an existing one, mark it explicitly. This prevents two conflicting conventions from silently coexisting.

Scenario: the team initially decided to use Postgres ENUM for status fields but later switched to StrEnum. The old decision should be superseded.

# Store the original (now superseded) decision first if it doesn't exist
OLD_ID="mem-1a2b3c4d"  # ID of the original ENUM decision

# Create a SUPERSEDES relationship from the new decision to the old one
curl -X POST https://api.neuroloom.dev/api/v1/memories/relationships \
  -H "Authorization: Token $MEMORIES_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "workspace_id": "'"$MEMORIES_WORKSPACE_ID"'",
    "source_memory_id": "mem-2e7b9f3d",
    "target_memory_id": "'"$OLD_ID"'",
    "relationship_type": "supersedes",
    "confidence": 1.0
  }'
# Link new decision as superseding the old one
response = httpx.post(
    "https://api.neuroloom.dev/api/v1/memories/relationships",
    headers={"Authorization": f"Token {token}"},
    json={
        "workspace_id": workspace_id,
        "source_memory_id": "mem-2e7b9f3d",   # StrEnum decision (new)
        "target_memory_id": "mem-1a2b3c4d",    # Postgres ENUM decision (old)
        "relationship_type": "supersedes",
        "confidence": 1.0,
    },
)
print(response.json())

When searching decisions in the future, the superseded memory still appears — but retrieving its detail shows the superseding relationship, so anyone reading it knows the decision has been overturned.

Note

The contradicts relationship type is for decisions that are in tension without a clear winner. supersedes is for decisions where one explicitly replaces another. Use the right type — they're navigated differently in graph exploration.
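For the contradicts case, the request body is identical except for the relationship_type. The memory IDs below are hypothetical placeholders for two caching decisions that are in tension:

```python
import os

# Hypothetical placeholder IDs for two decisions in tension, neither superseded.
REDIS_CACHE_DECISION = "mem-aaaa1111"
IN_PROCESS_CACHE_DECISION = "mem-bbbb2222"

body = {
    "source_memory_id": REDIS_CACHE_DECISION,
    "target_memory_id": IN_PROCESS_CACHE_DECISION,
    "relationship_type": "contradicts",
    "confidence": 0.8,
}

if "MEMORIES_API_TOKEN" in os.environ:  # only call the API when configured
    import httpx

    body["workspace_id"] = os.environ["MEMORIES_WORKSPACE_ID"]
    response = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/relationships",
        headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
        json=body,
    )
    print(response.json())
```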


Production Patterns

Standardize ADR format in the narrative

Use a consistent narrative structure so searches return comparable results:

Context: [What situation led to this decision]
Decision: [What we decided]
Reasoning: [Why we chose this over alternatives]
Alternatives considered: [What we rejected and why]
Consequences: [What this means going forward]

When all decision narratives follow the same structure, concept extraction and relationship discovery produce more consistent results.
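A small helper can enforce this structure when building the narrative field. The function name and example values are illustrative, not part of the API:

```python
def format_adr_narrative(context: str, decision: str, reasoning: str,
                         alternatives: str, consequences: str) -> str:
    """Render the five-part ADR structure as a single narrative string."""
    return "\n".join([
        f"Context: {context}",
        f"Decision: {decision}",
        f"Reasoning: {reasoning}",
        f"Alternatives considered: {alternatives}",
        f"Consequences: {consequences}",
    ])

narrative = format_adr_narrative(
    context="Choice fields need new values without schema churn",
    decision="Use Python StrEnum stored as VARCHAR",
    reasoning="ALTER TYPE cannot run inside a transaction",
    alternatives="Postgres ENUM, rejected: DDL migration per new value",
    consequences="Validation moves to the application layer (Pydantic)",
)
print(narrative)
```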

Bulk import existing ADRs

If you have existing ADR markdown files, ingest them in a batch. Tag them with "adr-import" so you can audit the import later:

import os
import glob
import httpx

token = os.environ["MEMORIES_API_TOKEN"]
workspace_id = os.environ["MEMORIES_WORKSPACE_ID"]

adr_files = glob.glob("docs/adr/*.md")
for filepath in adr_files:
    with open(filepath) as f:
        content = f.read()
    # Extract title from first heading
    title = content.split("\n")[0].lstrip("# ").strip()
    response = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/store",
        headers={"Authorization": f"Token {token}"},
        json={
            "workspace_id": workspace_id,
            "title": title,
            "memory_type": "decision",
            "narrative": content,
            "source_files": [filepath],
            "tags": ["adr", "adr-import"],
            "importance_score": 0.8,
        },
    )
    response.raise_for_status()  # fail loudly if an import is rejected
    print(f"Imported: {title}")

Handle supersession cleanly

When a decision is overturned:

  1. Store the new decision with full context
  2. Create a supersedes relationship from new → old
  3. Update the old memory's narrative to add a note: "Superseded by: [new memory title]"

This ensures both old and new decisions are discoverable, with clear navigation between them.


Before You Ship

  • Confirm all existing ADRs are imported with memory_type: "decision" and tags: ["adr"]
  • Verify source_files are set on each decision so memory_by_file works from the editor
  • Check that superseded decisions are linked — run memory_search for overturned decisions and confirm the relationship appears in memory_get_detail
  • Run a concept search for your major architecture areas ("authentication", "caching", "database") and verify the right decisions surface
  • Confirm contradicting decisions are marked with contradicts or supersedes — not left as orphans
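The checklist above can be spot-checked with the search endpoint from Step 3. This audit sketch lists every decision tagged adr for manual review, guarded so it only runs when credentials are configured:

```python
import os

# Search request listing decisions tagged "adr"; workspace_id is added at call time.
audit_query = {
    "query": "architecture decision",
    "memory_types": ["decision"],
    "tags": ["adr"],
    "limit": 50,
}

if "MEMORIES_API_TOKEN" in os.environ:  # only call the API when configured
    import httpx

    audit_query["workspace_id"] = os.environ["MEMORIES_WORKSPACE_ID"]
    response = httpx.post(
        "https://api.neuroloom.dev/api/v1/memories/search",
        headers={"Authorization": f"Token {os.environ['MEMORIES_API_TOKEN']}"},
        json=audit_query,
    )
    results = response.json()["results"]
    print(f"{len(results)} decision memories tagged 'adr'")
    for r in results:
        print(f"  {r['score']:.2f}  {r['title']}")
```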
