Evangeline
A complete framework for running AI models (CNNs and Transformer LLMs) on the Xilinx Zynq UltraScale+ MPSoC FPGA, with end-to-end tooling for development and deployment on the ZCU102 Evaluation Kit.

Overview
Evangeline is a production-ready framework for deploying AI models on FPGA hardware, with a full toolchain from high-level model definitions to low-level hardware acceleration. The project implements both ResNet50 (a 25M-parameter CNN for image classification) and Stories15M (a 15M-parameter Transformer LLM for text generation), with dual-path compilation that switches seamlessly between CPU development and FPGA acceleration. It also provides a YAML-based build system with flexible stage control, an HLS kernel library of pre-optimized neural network operations, and a deployment pipeline that produces SD card-bootable Linux images. Both interactive inference and benchmark modes are included, with detailed performance metrics and accuracy measurements.
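The dual-path idea can be sketched as a single call site that compiles to a plain C++ reference path by default and routes to the FPGA kernel when a build flag is set. The names below (`EVANGELINE_USE_FPGA`, `fpga_dot_product`) are illustrative assumptions, not the project's actual API:

```cpp
#include <numeric>
#include <vector>

// Dual-path dispatch sketch: the same function body serves CPU-only
// development builds and FPGA-accelerated builds, selected at compile time.
float dot_product(const std::vector<float>& a, const std::vector<float>& b) {
#ifdef EVANGELINE_USE_FPGA
    // FPGA path: would enqueue the HLS kernel via the runtime (e.g. XRT)
    // and read back the result. Wrapper assumed, not shown here.
    return fpga_dot_product(a, b);
#else
    // CPU development path: straightforward reference implementation,
    // useful for fast iteration and for validating the hardware path.
    return std::inner_product(a.begin(), a.end(), b.begin(), 0.0f);
#endif
}
```

Keeping the two paths behind one signature lets the rest of the model code stay identical across development and deployment builds.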
Key Features
Dual-path compilation: seamlessly switch between CPU-only development and FPGA acceleration
Two production-ready models: ResNet50 (25M params) for image classification and Stories15M (15M params) for text generation
HLS kernel library with pre-optimized operations: convolution, batch normalization, ReLU, pooling, matrix multiplication, RMSNorm, RoPE, and softmax
YAML-based declarative build system with flexible stage control
Complete deployment pipeline: from source code to SD card-bootable Linux images
Benchmark suite with accuracy metrics (Top-1/Top-5 for CNN, perplexity for LLM) and performance measurement
Interactive inference modes for both image classification and text generation
Comprehensive documentation with Mintlify integration
FPGA-optimized implementations with custom memory management and kernel interfaces
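As an illustration of the kernel-library style, here is a minimal RMSNorm sketch in HLS-flavored C++. The loop labels, pragmas, and fixed dimension are illustrative assumptions for exposition, not the project's actual kernel code; a synthesizable version would also need interface pragmas and a realistic vector width:

```cpp
#include <cmath>

constexpr int DIM = 8;  // small for illustration; a real model dim is much larger

// RMSNorm: out[i] = weight[i] * x[i] / sqrt(mean(x^2) + eps)
void rmsnorm(const float x[DIM], const float weight[DIM], float out[DIM],
             float eps = 1e-5f) {
    float ss = 0.0f;
rms_sum:
    for (int i = 0; i < DIM; ++i) {
#pragma HLS PIPELINE II=1
        ss += x[i] * x[i];  // accumulate sum of squares
    }
    const float scale = 1.0f / std::sqrt(ss / DIM + eps);
rms_scale:
    for (int i = 0; i < DIM; ++i) {
#pragma HLS PIPELINE II=1
        out[i] = weight[i] * x[i] * scale;  // normalize and apply gain
    }
}
```

On a CPU build the pragmas are ignored, so the same source doubles as the reference implementation for testing against the synthesized kernel.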
Project Gallery

Evangeline Setup
Here's my FPGA setup for the Evangeline project. The ZCU102 board is connected to my laptop over a serial link for programming and debugging, and I use a serial terminal to interact with the Linux OS running on the board. The project now supports both the ResNet50 CNN for image classification (achieving production-level accuracy on ImageNet) and the Stories15M Transformer for text generation with optimized FPGA kernels. The framework includes a build system based on Xilinx Vitis 2025.1, custom HLS kernels for neural network operations, and comprehensive benchmarking capabilities. It's been incredibly rewarding building a complete end-to-end deployment pipeline from high-level C++ code through hardware synthesis to SD card-bootable images.