Portfolio-Former
Fine-tuning a transformer model on my personal portfolio data to create the conversational AI you're talking to right now.

Overview
This project covers the complete pipeline for creating a specialized, personal AI assistant. I built a system that automatically generates a high-quality, instruction-based dataset from my structured portfolio data (JSON files). I then used this dataset to fine-tune a pre-trained Large Language Model (Llama 3.1 8B) with efficient techniques: Parameter-Efficient Fine-Tuning (PEFT) with LoRA and 4-bit quantization. The result is a model that answers questions about my skills, experience, and projects in a natural, first-person conversational style.
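To make the training setup concrete, here is a minimal sketch of how a 4-bit quantized base model can be wrapped with a LoRA adapter using the Hugging Face transformers, bitsandbytes, and peft libraries. The model ID, LoRA rank, and target modules below are illustrative assumptions, not the project's exact configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed model ID; the project fine-tunes a Llama 3.1 8B base model.
MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"

# 4-bit NF4 quantization keeps the frozen base weights small enough
# to fit in memory on a single GPU during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA trains small low-rank adapter matrices on top of the frozen,
# quantized base model (QLoRA-style). Rank and target modules here are
# placeholder values, not the project's actual hyperparameters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = prepare_model_for_kbit_training(model)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable
```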
Key Features
Automated instruction dataset generation from structured JSON files (see the dataset sketch after this list).
Parameter-Efficient Fine-Tuning (PEFT) with LoRA adapters.
4-bit quantization to reduce memory and computational costs during training.
A custom system prompt to define the AI's persona and ensure factual, first-person responses.
The model can synthesize information from multiple portfolio entries to answer broader, open-ended questions.
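As a rough illustration of the dataset-generation step, the sketch below turns structured portfolio JSON into chat-formatted instruction/response pairs that carry the first-person system prompt. The field names ("name", "description", "technologies"), the directory layout, and the prompt wording are all placeholder assumptions rather than the project's actual schema.

```python
import json
from pathlib import Path

# Assumed persona prompt; the project's real system prompt differs in wording.
SYSTEM_PROMPT = (
    "You are Portfolio-Former, an AI assistant that answers questions about "
    "the author's skills, experience, and projects. Answer factually, in the "
    "first person, using only the portfolio information you were trained on."
)

def project_to_examples(project: dict) -> list[dict]:
    """Turn one portfolio project entry into chat-style training examples."""
    # Assumed JSON fields; the real schema may use different keys.
    name = project["name"]
    description = project["description"]
    tech = ", ".join(project.get("technologies", []))

    qa_pairs = [
        (f"What is {name}?", description),
        (f"What technologies did you use for {name}?",
         f"For {name}, I worked with {tech}."),
    ]
    return [
        {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        for question, answer in qa_pairs
    ]

def build_dataset(json_dir: str, out_path: str) -> None:
    """Walk the structured portfolio JSON files and write a JSONL dataset."""
    examples = []
    for path in Path(json_dir).glob("*.json"):
        data = json.loads(path.read_text())
        for project in data.get("projects", []):
            examples.extend(project_to_examples(project))

    with open(out_path, "w") as f:
        for example in examples:
            f.write(json.dumps(example) + "\n")

if __name__ == "__main__":
    build_dataset("portfolio_data", "train.jsonl")
```

Each JSONL line uses the messages format that chat-template tokenizers expect, so the same examples can be fed directly into the fine-tuning setup sketched above.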
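Finally, a hedged sketch of what inference might look like once the LoRA adapter has been trained: the base model is reloaded, the adapter is attached with peft, and the same persona system prompt is prepended to each conversation. The adapter path and prompt text are assumptions.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed base model ID
ADAPTER_DIR = "portfolio-former-lora"               # assumed adapter output directory

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
base = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)  # attach the trained LoRA adapter

messages = [
    # Must match the persona prompt used during fine-tuning.
    {"role": "system", "content": "You are Portfolio-Former, an AI assistant that "
     "answers questions about the author's skills, experience, and projects in the "
     "first person."},
    {"role": "user", "content": "Tell me about your experience with transformers."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```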