Evangeline
A complete framework for running AI models (CNNs and Transformer LLMs) on the Xilinx Zynq UltraScale+ MPSoC FPGA, with end-to-end tooling for development and deployment on the ZCU102 Evaluation Kit.

Overview
Evangeline is a production-ready framework for deploying AI models on FPGA hardware, with a complete toolchain from high-level model definitions down to low-level hardware acceleration. The project implements both ResNet50 (a 25M-parameter CNN for image classification) and Stories15M (a 15M-parameter Transformer LLM for text generation), and its dual-path compilation allows seamless switching between CPU development and FPGA acceleration. It also includes a YAML-configured build system, an HLS kernel library of pre-optimized neural-network operations, and a deployment pipeline that produces SD-card-bootable Linux images. The framework offers both interactive inference and benchmark modes with detailed performance metrics and accuracy measurements.
Key Features
Dual-path compilation: seamlessly switch between CPU-only development and FPGA acceleration
Two production-ready models: ResNet50 (25M params) for image classification and Stories15M (15M params) for text generation
HLS kernel library with pre-optimized operations: convolution, batch normalization, ReLU, pooling, matrix multiplication, RMSNorm, RoPE, and softmax
YAML-based declarative build system with flexible stage control
Complete deployment pipeline: from source code to SD card-bootable Linux images
Benchmark suite with accuracy metrics (Top-1/Top-5 for CNN, perplexity for LLM) and performance measurement
Interactive inference modes for both image classification and text generation
Comprehensive documentation with Mintlify integration
FPGA-optimized implementations with custom memory management and kernel interfaces
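To give a feel for what a kernel in the HLS library might look like, here is a hedged sketch of an RMSNorm kernel in Vitis HLS style. The interface and pipeline pragmas are hints to the HLS compiler and are ignored by an ordinary host compiler, so the same source doubles as the CPU reference. The dimension (288, the hidden size of the Stories15M checkpoint) and the kernel signature are assumptions, not Evangeline's actual interface.

```cpp
#include <cmath>

constexpr int   DIM = 288;    // assumed Stories15M hidden size
constexpr float EPS = 1e-5f;  // numerical-stability epsilon

// RMSNorm: out[i] = weight[i] * x[i] / sqrt(mean(x^2) + eps)
void rmsnorm_kernel(const float x[DIM], const float weight[DIM], float out[DIM]) {
#pragma HLS INTERFACE m_axi port=x      bundle=gmem0
#pragma HLS INTERFACE m_axi port=weight bundle=gmem1
#pragma HLS INTERFACE m_axi port=out    bundle=gmem2

    // Accumulate the sum of squares; II=1 asks HLS for one iteration per cycle.
    float ss = 0.0f;
sum_sq:
    for (int i = 0; i < DIM; ++i) {
#pragma HLS PIPELINE II=1
        ss += x[i] * x[i];
    }
    const float scale = 1.0f / std::sqrt(ss / DIM + EPS);

    // Scale each element by the inverse RMS and the learned weight.
normalize:
    for (int i = 0; i < DIM; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = weight[i] * (x[i] * scale);
    }
}
```

Fixed array bounds like this let HLS size on-chip buffers statically; a production kernel would likely also stream the vectors through local BRAM rather than read `m_axi` ports element-wise.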
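The Top-1/Top-5 metrics reported by the benchmark suite have a standard definition: a prediction is Top-k correct if the true label is among the k highest-scoring classes. A minimal sketch of that check (the function name is ours, not the framework's API):

```cpp
#include <cstddef>
#include <vector>

// True if `label` ranks within the top k classes of `logits`.
// The rank of the true label equals the number of classes that
// score strictly higher than it.
bool topk_correct(const std::vector<float>& logits, int label, int k) {
    int higher = 0;
    for (std::size_t c = 0; c < logits.size(); ++c)
        if (logits[c] > logits[static_cast<std::size_t>(label)]) ++higher;
    return higher < k;
}
```

Top-1 accuracy is then the fraction of samples where `topk_correct(logits, label, 1)` holds, and Top-5 the fraction where `topk_correct(logits, label, 5)` holds; Top-5 is always at least Top-1.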