# Quickstart
Build your first knowledge graph and run an evaluation in minutes.
## 1. Build an Ontology Graph (Optional but Recommended)
For best results with FTM data, first load the ontology:
```bash
aletheia build-ontology-graph \
  --use-case terrorist_orgs \
  --knowledge-graph terrorist_orgs_ontology
```
## 2. Build the Knowledge Graph
```bash
aletheia build-knowledge-graph \
  --use-case terrorist_orgs \
  --knowledge-graph terrorist_orgs \
  --schema-mode graph-hybrid \
  --ontology-graph terrorist_orgs_ontology
```
This will:
- Parse the source data using the use case's parser
- Convert entities to markdown episodes
- Extract entities and relationships using Graphiti
- Store everything in your graph database
**Resume Interrupted Builds**

If the build is interrupted, use `--resume` to continue from where it left off.
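For example, rerunning the build step from above with `--resume` appended (same flags as before; this assumes `--resume` takes no argument):

```bash
aletheia build-knowledge-graph \
  --use-case terrorist_orgs \
  --knowledge-graph terrorist_orgs \
  --schema-mode graph-hybrid \
  --ontology-graph terrorist_orgs_ontology \
  --resume
```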
## 3. Run an Evaluation
```bash
aletheia evaluate-ragas \
  --knowledge-graph terrorist_orgs \
  --questions use_cases/terrorist_orgs/evaluation_questions.json \
  --output-dir output/
```
Grounding verification runs with `--grounding-mode strict` by default, checking that answers are grounded in evidence; add `--use-community-search` to include hierarchical community context.
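For example, the same evaluation with community search enabled (all other flags unchanged from the command above):

```bash
aletheia evaluate-ragas \
  --knowledge-graph terrorist_orgs \
  --questions use_cases/terrorist_orgs/evaluation_questions.json \
  --output-dir output/ \
  --use-community-search
```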
This will:
- Search the graph for each question
- Generate answers from retrieved context
- Verify grounding (in strict/lenient mode)
- Calculate RAGAS metrics (precision, recall, faithfulness, similarity)
- Output results to JSON and Markdown files
## 4. Review Results
Check the output directory for:
- `ragas_YYYYMMDD_HHMMSS.json` - Detailed results
- `ragas_YYYYMMDD_HHMMSS.md` - Human-readable summary
Key metrics to look for:
| Metric | Good Score | What It Measures |
|---|---|---|
| Context Precision | > 0.7 | Relevance ranking of retrieved context |
| Context Recall | > 0.7 | Coverage of required information |
| Faithfulness | > 0.7 | Answer grounded in context |
| Answer Similarity | > 0.7 | Semantic match to gold answer |
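A quick way to triage a results file is to compare each metric against the 0.7 threshold from the table above. The thresholds and metric names come from this guide; the snake_case keys and flat JSON layout are assumptions, so adapt the lookup to the actual structure of your `ragas_*.json` file:

```python
# Check RAGAS metrics against the quickstart's "good score" threshold.
# NOTE: the key names below are hypothetical -- inspect your JSON output
# and adjust them to match its actual structure.

THRESHOLD = 0.7
METRICS = ["context_precision", "context_recall", "faithfulness", "answer_similarity"]

def failing_metrics(results: dict) -> list[str]:
    """Return the names of metrics at or below the threshold."""
    return [m for m in METRICS if results.get(m, 0.0) <= THRESHOLD]

# Example with made-up scores:
sample = {"context_precision": 0.82, "context_recall": 0.65,
          "faithfulness": 0.91, "answer_similarity": 0.74}
print(failing_metrics(sample))  # → ['context_recall']
```

A low Context Recall with healthy precision, for instance, suggests the retrieval step is missing required information rather than ranking it poorly.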
## Next Steps
- Schema Modes - Choose the right schema mode
- Evaluation Guide - Deep dive into metrics
- Creating Use Cases - Build your own use case