Evaluatr

AI-Powered Evaluation Report Analysis and Framework Mapping

What is Evaluatr?

Evaluatr is an AI-powered system designed to automate the complex task of mapping evaluation reports against structured frameworks. Initially developed for IOM (International Organization for Migration) evaluation reports and the Strategic Results Framework (SRF), it transforms a traditionally manual, time-intensive process into an intelligent, interpretable workflow.

The system maps evaluation reports—often 150+ pages of heterogeneous content—against hierarchical frameworks like the SRF, which contains objectives, enablers, and cross-cutting priorities, each with specific outcomes, outputs, and indicators. Evaluatr targets the output level for optimal granularity and connects to broader frameworks like the Sustainable Development Goals (SDGs) for interoperability.
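
For intuition, such a framework can be represented as a nested structure. The sketch below is a hypothetical, simplified rendering, not the actual SRF schema Evaluatr uses:

# Hypothetical, simplified framework structure (not the real SRF schema)
srf = {
    "objectives": [{
        "title": "Objective 1",
        "outcomes": [{
            "title": "Outcome 1.1",
            "outputs": [{  # Evaluatr maps report content at this level
                "title": "Output 1.1.1",
                "indicators": ["Indicator 1.1.1a"],
            }],
        }],
    }],
    "enablers": [],
    "cross_cutting_priorities": [],
}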

Beyond automation, Evaluatr prioritizes interpretability and human-AI collaboration. IOM evaluators can understand the mapping process, audit AI decisions, perform error analysis, build training datasets over time, and create robust evaluation pipelines—ensuring the AI system stays aligned with business needs through an actionable, transparent, and auditable methodology.

The Challenge We Solve

IOM evaluators possess deep expertise in mapping evaluation reports against frameworks like the Strategic Results Framework (SRF), but face significant operational challenges when processing reports that often exceed 150 pages of diverse content across multiple projects and contexts.

The core challenges are:

  • Time-intensive process: Hundreds of staff-hours required per comprehensive mapping exercise
  • Individual consistency: Even expert evaluators may categorize the same content differently across sessions
  • Cross-evaluator consistency: Different evaluators may interpret and map identical content to different framework outputs
  • Scale vs. thoroughness: Growing volume of evaluation reports creates pressure to choose between speed and comprehensive analysis

IOM needs a solution that leverages evaluators’ expertise while addressing these operational bottlenecks—accelerating the mapping process while maintaining the consistency and thoroughness that manual review currently struggles to achieve at scale.

Key Features

1. Document Preparation Pipeline ✅ Available

  • Repository Processing: Read and preprocess IOM evaluation report repositories with standardized outputs
  • Automated Downloads: Batch download of evaluation documents from diverse sources
  • OCR Processing: Convert scanned PDFs to searchable text using Optical Character Recognition (OCR) technology
  • Content Enrichment: Fix OCR-corrupted headings and enrich documents with AI-generated image descriptions for high-quality input data

2. Intelligent Mapping 🚧 In Development

  • Agentic Framework Mapping: Use DSPy-powered agents for traceable, interpretable mapping of reports against evaluation frameworks like the IOM Strategic Results Framework (SRF); see the illustrative sketch after this list
  • Command-line Interface: Streamlined pipeline execution through easy-to-use CLI tools
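
To give a flavour of the approach, here is a minimal, hypothetical DSPy sketch; the actual signatures and agents in Evaluatr’s mapping module may differ:

import dspy

# Hypothetical signature for illustration only
class MapToFramework(dspy.Signature):
    """Map an evaluation report excerpt to the most relevant SRF output."""
    excerpt: str = dspy.InputField(desc="passage from an evaluation report")
    candidate_outputs: str = dspy.InputField(desc="candidate framework outputs, one per line")
    output_id: str = dspy.OutputField(desc="ID of the best-matching framework output")

# ChainOfThought surfaces the model's reasoning, which supports auditing decisions
mapper = dspy.ChainOfThought(MapToFramework)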

3. Knowledge Synthesis 📋 Planned

  • Knowledge Cards: Generate structured summaries for downstream AI tasks like proposal writing and synthesis
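
The exact schema is still to be defined; purely as an illustration, a knowledge card might carry fields along these lines:

# Purely illustrative: the knowledge card schema is not yet designed
knowledge_card = {
    "evaluation_id": "...",
    "mapped_outputs": ["..."],    # SRF outputs the report was mapped against
    "key_findings": ["..."],      # condensed evidence for proposal writing and synthesis
    "recommendations": ["..."],
}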

Installation & Setup

From GitHub

pip install git+https://github.com/franckalbinet/evaluatr.git

Development Installation

# Clone the repository
git clone https://github.com/franckalbinet/evaluatr.git
cd evaluatr

# Install in development mode
pip install -e .

# Make changes in nbs/ directory, then compile:
nbdev_prepare

Note: This project uses nbdev for literate programming; see the Development section for more details.

Environment Configuration

Create a .env file in your project root with your API keys:

MISTRAL_API_KEY="your_mistral_api_key"
GEMINI_API_KEY="your_gemini_api_key"

Note: Evaluatr uses litellm and dspy for LLM interactions, giving you the flexibility to use any compatible language model provider beyond the examples above.
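
For example, to point DSPy at one of these providers, something like the following works (a minimal sketch assuming python-dotenv is installed; the model string is an example and any litellm-compatible provider/model can be substituted):

import dspy
from dotenv import load_dotenv

load_dotenv()  # expose the API keys from .env to litellm

# Example model identifier; swap in any litellm-supported provider/model
dspy.configure(lm=dspy.LM("gemini/gemini-1.5-flash"))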

Quick Start

Reading an IOM Evaluation Repository

from evaluatr.readers import IOMRepoReader

# Initialize reader with your Excel file
reader = IOMRepoReader('files/test/eval_repo_iom.xlsx')

# Process the repository
evaluations = reader()

# Each evaluation is a standardized dictionary
for eval in evaluations[:3]:  # Show first 3
    print(f"ID: {eval['id']}")
    print(f"Title: {eval['meta']['Title']}")
    print(f"Documents: {len(eval['docs'])}")
    print("---")
ID: 1a57974ab89d7280988aa6b706147ce1
Title: EX-POST EVALUATION OF THE PROJECT:  NIGERIA: STRENGTHENING REINTEGRATION FOR RETURNEES (SRARP)  - PHASE II
Documents: 2
---
ID: c660e774d14854e20dc74457712b50ec
Title: FINAL EVALUATION OF THE PROJECT: STRENGTHEN BORDER MANAGEMENT AND SECURITY IN MALI AND NIGER THROUGH CAPACITY BUILDING OF BORDER AUTHORITIES AND ENHANCED DIALOGUE WITH BORDER COMMUNITIES
Documents: 2
---
ID: 2cae361c6779b561af07200e3d4e4051
Title: Final Evaluation of the project "SUPPORTING THE IMPLEMENTATION OF AN E RESIDENCE PLATFORM IN CABO VERDE"
Documents: 2
---

Export the processed evaluations to JSON:

reader.to_json('processed_evaluations.json')
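
Each exported record mirrors the dictionary structure shown above, roughly along these lines (contents abridged):

{
  "id": "1a57974ab89d7280988aa6b706147ce1",
  "meta": {"Title": "EX-POST EVALUATION OF THE PROJECT: ..."},
  "docs": ["...", "..."]
}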

Downloading evaluation documents

from evaluatr.downloaders import download_docs
from pathlib import Path

fname = 'files/test/evaluations.json'
base_dir = Path("files/test/pdf_library")
download_docs(fname, base_dir=base_dir, n_workers=0, overwrite=True)
(#24) ['Downloaded Internal%20Evaluation_NG20P0516_MAY_2023_FINAL_Abderrahim%20EL%20MOULAT.pdf','Downloaded RR0163_Evaluation%20Brief_MAY_%202023_Abderrahim%20EL%20MOULAT.pdf','Downloaded IB0238_Evaluation%20Brief_FEB_%202023_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_IB0238__FEB_2023_FINAL%20RE_Abderrahim%20EL%20MOULAT.pdf','Downloaded IB0053_Evaluation%20Brief_SEP_%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_IB0053_OCT_2022_FINAL_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded Internal%20Evaluation_NC0030_JUNE_2022_FINAL_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded NC0030_Evaluation%20Brief_June%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded CD0015_Evaluation%20Brief_May%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded Projet%20CD0015_Final%20Evaluation%20Report_May_202_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_Retour%20Vert_JUL_2021_Fina_Abderrahim%20EL%20MOULAT.pdf','Downloaded NC0012_Evaluation%20Brief_JUL%202021_Abderrahim%20EL%20MOULAT.pdf','Downloaded Nigeria%20GIZ%20Internal%20Evaluation_JANUARY_2021__Abderrahim%20EL%20MOULAT.pdf','Downloaded Nigeria%20GIZ%20Project_Evaluation%20Brief_JAN%202021_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded Evaluation%20Brief_ARCO_Shiraz%20JERBI.pdF','Downloaded Final%20evaluation%20report_ARCO_Shiraz%20JERBI_1.pdf','Downloaded Management%20Response%20Matrix_ARCO_Shiraz%20JERBI.pdf','Downloaded IOM%20MANAGEMENT%20RESPONSE%20MATRIX.pdf','Downloaded IOM%20Niger%20-%20MIRAA%20III%20-%20Final%20Evaluation%20Report%20%28003%29.pdf','Downloaded CE.0369%20-%20IDEE%20-%20ANNEXE%201%20-%20Rapport%20Recherche_Joanie%20DUROCHER_0.pdf'...]

OCR Processing

Convert PDF evaluation reports into structured markdown files with extracted images:

from evaluatr.ocr import process_single_evaluation_batch
from pathlib import Path

# Process a single evaluation report
report_path = Path("path/to/your/evaluation_report_folder")
output_dir = Path("md_library")

process_single_evaluation_batch(report_path, output_dir)

Output Structure:

md_library/
├── evaluation_id/
│   ├── page_1.md
│   ├── page_2.md
│   └── img/
│       ├── img-0.jpeg
│       └── img-1.jpeg

Example markdown page with image reference as generated by Mistral OCR:

The evaluation followed the Organisation of Economic Cooperation and Development/Development Assistance Committee (OECD/DAC) evaluation criteria and quality standards. The evaluation ...

FIGURE 2. OECD/DAC CRITERIA FOR EVALUATIONS
![img-2.jpeg](img-2.jpeg)

Each evaluation question includes the main data collection ...

Batch OCR Processing

Process multiple evaluation reports efficiently using Mistral’s batch OCR API:

from evaluatr.ocr import process_all_reports_batch
from pathlib import Path

# Get all evaluation report directories
reports_dir = Path("path/to/all/evaluation_reports")
report_folders = [d for d in reports_dir.iterdir() if d.is_dir()]

# Process all reports using batch OCR for efficiency
process_all_reports_batch(report_folders, md_library_path="md_library")

Benefits of batch processing:

  • Significantly faster than processing PDFs individually
  • Cost-effective through Mistral’s batch API pricing (roughly $0.50 per 1,000 pages)
  • Automatic job monitoring and result retrieval

Document Enrichment

While Mistral OCR excels at text extraction, it often struggles with heading hierarchy detection, producing inconsistent markdown levels that break document structure. Clean, properly nested headings are crucial for agentic AI systems to retrieve content hierarchically—mimicking how experienced evaluation analysts navigate reports by section and subsection (as you’ll see in the upcoming mappr module). Additionally, evaluation reports contain rich visual evidence through charts, graphs, and diagrams that standard OCR simply references as image links. The enrichr module addresses these “garbage in, garbage out” challenges by fixing structural issues and converting visual content into searchable, AI-readable descriptions.
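
To make the heading problem concrete, here is a purely illustrative before/after. OCR output with mis-nested levels such as:

# FINDINGS
### 3.1 Relevance
# 3.1.1 Alignment with national priorities

is rewritten with a consistent hierarchy:

# FINDINGS
## 3.1 Relevance
### 3.1.1 Alignment with national priorities

The enrichr helpers apply these fixes: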

from evaluatr.enrichr import fix_doc_hdgs, enrich_images
from pathlib import Path

# Fix heading hierarchy in OCR'd document
doc_path = Path("md_library/evaluation_id")
fix_doc_hdgs(doc_path)

# Enrich images with descriptive text
pages_dir = doc_path / "enhanced"
img_dir = doc_path / "img"
enrich_images(pages_dir, img_dir)

Documentation

  • Full Documentation: GitHub Pages
  • API Reference: Available in the documentation
  • Examples: See the nbs/ directory for Jupyter notebooks

Contributing

Development Philosophy

Evaluatr is built using nbdev, a literate programming framework that allows us to develop code, documentation, and tests together in Jupyter notebooks. This approach offers several advantages:

  • Documentation-driven development: Code and explanations live side-by-side, ensuring documentation stays current
  • Reproducible research: Each module’s development process is fully transparent and reproducible
  • Collaborative friendly: Notebooks make it easier for domain experts to understand and contribute to the codebase

fastcore provides the foundational utilities that power this approach, offering enhanced Python functionality and seamless integration between notebooks and production code.

Development Setup

We welcome contributions! Here’s how you can help:

  1. Fork the repository
  2. Install development dependencies: pip install -e .
  3. Create a feature branch (git checkout -b feature/amazing-feature)
  4. Make your changes in the nbs/ directory
  5. Compile with nbdev_prepare
  6. Commit your changes (git commit -m 'Add amazing feature')
  7. Push to the branch (git push origin feature/amazing-feature)
  8. Open a Pull Request

License

This project is licensed under the MIT License - see the LICENSE file for details.

Dependencies

Evaluatr is built on these key Python packages:

  • fastcore & pandas - Core data processing and utilities
  • mistralai & litellm - AI/LLM integration for OCR and enrichment
  • dspy & toolslm - Structured AI programming and tool integration

Support