
Cases

An Intel Request isn't just a container for questions — it's a case. The case layer is where you, the analyst, build the durable picture: the things that have been confirmed, the things that are still missing, and the timeline of what happened. Investigations come and go; the case is what survives them.

Case vs. Investigation

|             | Investigation                          | Case                             |
|-------------|----------------------------------------|----------------------------------|
| Scope       | One question, one briefing             | One RFI, many briefings          |
| Lifetime    | Minutes to hours                       | Weeks to months                  |
| Who decides | The reasoning engine proposes findings | The analyst confirms what's true |
| Data model  | Evidence from the reasoning loop       | Only analyst-accepted material   |

An investigation produces claims. The case is where you decide which claims to accept. The agent proposes; you decide.

Building Blocks

The case has four types of structured objects, all scoped to a single RFI.

Key Questions

The durable "what do we need to know?" list. Key Questions aren't investigation questions — they're the lines of inquiry for the entire RFI. Everything else hangs off them.

"Create a key question: Who authorized the financial transfer?"

Create Key Questions before running investigations. The harvest flow needs them to route material, and the case board is unreadable without them.

Case Findings

Analyst-confirmed claims, each tied to exactly one Key Question.

Every finding has a confidence level:

| Confidence | Meaning                                              |
|------------|------------------------------------------------------|
| Confirmed  | Corroborated by multiple sources or documentation    |
| Assessed   | Single source, high trust                            |
| Suspected  | Plausible but unverified                             |

And a status: pending (awaiting review), accepted (part of the case), or discarded (rejected with a reason).

Findings from investigations carry a source link so you can trace back to the original evidence chain. Findings can also be flagged as contradicting each other.

"File this as an assessed finding under Key Question 2." "Flag these two findings as contradictory."

When you file a finding, it's written to both the structured case record and the case knowledge graph — a per-RFI graph where entities and relationships extracted from findings accumulate over time.

Gaps

Known unknowns. A gap says: "this is something the case needs to answer, and we don't have it yet."

"Add a gap: we still don't know who signed the authorization."

Gaps are attached to Key Questions and have a status: open, being_investigated, or closed (with a resolution). An RFI with no gaps is either perfectly solved or unexamined. Recording gaps honestly makes the case auditable.

Timeline Events

A typed, chronological view of the case. Each event has a date, a type (incident, regulatory, operational, legal, other), and optional links to entities and findings.

"Add a timeline event: On 2024-03-15, the regulator issued a warning to Operator X."

Timeline events are mirrored into the case knowledge graph, keeping the graph's temporal layer consistent with the structured timeline.
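Structurally, a timeline event is a dated, typed record with optional links. A hedged sketch of what such a payload might look like (field names and the helper are assumptions; the type vocabulary comes from the docs above):

```python
VALID_TYPES = {"incident", "regulatory", "operational", "legal", "other"}

def make_timeline_event(date, event_type, description, entities=(), findings=()):
    """Build a timeline-event record; rejects unknown event types."""
    if event_type not in VALID_TYPES:
        raise ValueError(f"unknown event type: {event_type!r}")
    return {
        "date": date,                # ISO 8601 date string
        "type": event_type,
        "description": description,
        "entities": list(entities),  # optional entity links
        "findings": list(findings),  # optional finding links
    }

event = make_timeline_event("2024-03-15", "regulatory",
                            "Regulator issued a warning to Operator X")
```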

The Harvest

The harvest bridges the workspace and the case. After an investigation completes or new files are uploaded, run a harvest:

"Start a harvest."

The harvest scans workspace files against the existing case graph and proposes new material:

| Category      | What it means                                                       |
|---------------|---------------------------------------------------------------------|
| Corroboration | Confirms an existing finding                                        |
| New evidence  | Matches a Key Question but no existing finding                      |
| Orphan        | Relevant graph matches but no Key Question fits — needs your routing |
| Gap revealed  | A Key Question with no findings and no proposals — a genuine data gap |

Proposals appear in the Harvest Review panel. You accept or reject each one — accepted proposals are filed as findings automatically. The harvest never mutates existing findings; it's append-only by design.
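The accept/reject flow can be pictured as a review queue that only ever appends findings. A sketch under assumed names and record shapes:

```python
def review_harvest(proposals, decisions, findings):
    """Apply analyst decisions to harvest proposals.

    `decisions` maps proposal id -> True (accept) or False (reject).
    Accepted proposals are appended as findings; existing findings
    are never mutated (append-only by design).
    """
    for p in proposals:
        verdict = decisions.get(p["id"])
        if verdict is True:
            p["status"] = "accepted"
            findings.append({"claim": p["claim"],
                             "key_question": p["key_question"],
                             "status": "accepted"})
        elif verdict is False:
            p["status"] = "rejected"
    return findings

findings = []
proposals = [
    {"id": 1, "claim": "Transfer was pre-approved", "key_question": 2, "status": "pending"},
    {"id": 2, "claim": "Unrelated noise", "key_question": None, "status": "pending"},
]
review_harvest(proposals, {1: True, 2: False}, findings)
```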

Harvest timing

Run the harvest after each investigation concludes, and again after uploading new files. It catches material the reasoning engine surfaced but you haven't yet committed to the case.

Working With the Case in Chat

The case layer is fully accessible through conversation:

  • "What's the current case state?" — overview of key questions, findings, gaps
  • "File this as a confirmed finding under Key Question 1" — commits a finding
  • "Discard that finding — the source is unreliable" — discards with a reason
  • "Upgrade the confidence on finding 3 to confirmed" — updates confidence
  • "Search the case graph for entities related to financial transfers" — queries the case graph
  • "What entity types are in the case graph?" — inspects the case graph structure

The assistant always checks the existing case state before filing — if a semantically equivalent finding already exists, it tells you rather than creating a duplicate.
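In spirit, that duplicate gate is a similarity check against existing claims. The sketch below uses `difflib` string similarity as a crude stand-in for the real semantic comparison (the helper and threshold are illustrative, not the product's implementation):

```python
from difflib import SequenceMatcher

def is_near_duplicate(new_claim, existing_claims, threshold=0.85):
    """Return True if new_claim closely matches an existing claim.

    Real deduplication would use embeddings or graph search; string
    similarity here only illustrates the gating logic.
    """
    norm = new_claim.lower().strip()
    return any(
        SequenceMatcher(None, norm, c.lower().strip()).ratio() >= threshold
        for c in existing_claims
    )

existing = ["Operator X authorized the financial transfer"]
```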

Practical Tips

  • Key Questions first. Create them before you run an investigation or harvest.
  • File findings deliberately. The agent will propose — don't auto-accept. Case findings are what you stand behind.
  • Use gaps honestly. An RFI without gaps isn't done — it's unfalsifiable.
  • Harvest after each investigation. It catches material you might have missed.
  • Confidence is a ladder. Start suspected → assessed when a second source confirms → confirmed when documented.

Under the Hood

The case knowledge graph

Every RFI has its own FalkorDB graph (case_rfi_{rfi_id}) managed by a dedicated Graphiti MCP instance. Two connection modes:

  • stdio mode (local dev): spawns a child Graphiti MCP process per RFI on demand.
  • HTTP mode (Docker): connects to a shared Graphiti container with group_id routing per RFI. Set CASE_GRAPH_URL to enable.
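The mode switch presumably reduces to an environment check. A minimal sketch (only `CASE_GRAPH_URL` comes from the docs; the function name and return shape are assumptions):

```python
import os

def case_graph_mode(env=os.environ):
    """Pick the case-graph connection mode for this process.

    HTTP mode when CASE_GRAPH_URL is set (shared Graphiti container,
    group_id routing per RFI); otherwise stdio mode (one child
    Graphiti MCP process per RFI, spawned on demand).
    """
    url = env.get("CASE_GRAPH_URL")
    if url:
        return ("http", url)
    return ("stdio", None)

mode, url = case_graph_mode({"CASE_GRAPH_URL": "http://graphiti:8000"})
```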

Because the case graph is a real Graphiti instance, everything that works with knowledge graphs — semantic search, entity extraction, edge inference, temporal episodes — works inside a case too. But it is populated incrementally and deliberately.

The case ontology

The case graph uses a minimal, domain-agnostic ontology with six entity types: Evidence, Actor, Event, Location, Document, Concept. Intentionally thin — the case graph is for analyst-curated links between concrete things, not a reproduction of a full domain ontology.

The ontology path is controlled by CASE_ONTOLOGY_TTL (default: aletheia/case/ontology/default.ttl). The loader uses rdflib to emit Graphiti entity_types config entries from every owl:Class and rdfs:Class with both a label and a comment.

Tips:

  • Keep the class set small (six is good; a dozen is a lot).
  • Write rdfs:comment for the extractor, not for humans.
  • Re-uploading TTL does not reclassify existing cases.
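A class entry in the TTL might look like the following. This is a hypothetical fragment, not the shipped default.ttl; note that the loader requires both the label and the comment for a class to be emitted:

```ttl
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix :     <http://example.org/case#> .

:Actor a owl:Class ;
    rdfs:label "Actor" ;
    rdfs:comment "A person or organization that performs or authorizes an action in the case. Extract named individuals and legal entities, not generic roles." .
```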

Harvest mechanics

The harvest processes files in four phases:

  1. Extract — convert each file to text, chunk on paragraph boundaries (2000 chars default)
  2. Search — search each chunk against the case graph
  3. Categorize — classify as corroboration, new evidence, orphan, or gap revealed
  4. Propose — write proposals to SQLite for analyst review

Append-only: running twice produces duplicate proposals for unchanged files, by design.
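The extract phase's paragraph chunking can be sketched as below. The 2000-character default comes from the docs; the function itself is illustrative, and a single paragraph longer than the limit is passed through whole:

```python
def chunk_text(text, max_chars=2000):
    """Split text on paragraph boundaries, packing consecutive
    paragraphs into chunks of at most max_chars where possible."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + 2 + len(para) > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = ("A" * 1500) + "\n\n" + ("B" * 1500) + "\n\n" + ("C" * 100)
chunks = chunk_text(doc)
```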

Case tools reference
| Tool                  | Purpose                                      |
|-----------------------|----------------------------------------------|
| create_key_question   | Register a new line of inquiry               |
| get_case_state        | Snapshot of all case objects                 |
| file_finding          | Commit a finding + extract to case graph     |
| discard_finding       | Discard with reason                          |
| upgrade_confidence    | Change finding confidence                    |
| flag_contradiction    | Cross-link contradicting findings            |
| create_gap            | Document a known unknown                     |
| close_gap             | Resolve a gap                                |
| add_timeline_event    | Add a dated event + extract to case graph    |
| search_case_graph     | Semantic/hybrid search                       |
| get_case_graph_schema | Inspect the case graph structure             |
| start_harvest         | Process workspace files against the case     |
| get_harvest_status    | Poll harvest progress                        |
