oruccakir

Portfolio-Former

Python · Transformers · PyTorch · PEFT · LoRA · BitsAndBytes · LLM Fine-Tuning

Overview

Fine-tuning a transformer model on my personal portfolio data to create the conversational AI you're talking to right now.

This project involves the complete pipeline for creating a specialized, personal AI assistant. I built a system to automatically generate a high-quality, instruction-based dataset from my structured portfolio data (JSON files). I then used this dataset to fine-tune a pre-trained Large Language Model (Llama 3.1 8B) using modern, efficient techniques like PEFT/LoRA and 4-bit quantization. The result is a model that can answer questions about my skills, experience, and projects in a natural, first-person conversational style.
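The dataset-generation step described above can be sketched as follows. This is a minimal illustration, not the project's actual code: the JSON field names (`projects`, `skills`, `name`, `summary`, `stack`) and the question templates are assumptions about how structured portfolio data might be turned into instruction/response pairs.

```python
import json

def build_examples(portfolio: dict) -> list[dict]:
    """Turn structured portfolio JSON into instruction/response pairs.

    Field names and templates here are illustrative assumptions."""
    examples = []
    for project in portfolio.get("projects", []):
        examples.append({
            "instruction": f"Tell me about your project '{project['name']}'.",
            "response": f"{project['summary']} I built it with "
                        f"{', '.join(project['stack'])}.",
        })
    for skill in portfolio.get("skills", []):
        examples.append({
            "instruction": f"Do you have experience with {skill}?",
            "response": f"Yes, {skill} is one of the technologies I work with.",
        })
    return examples

# Hypothetical portfolio entry for demonstration.
portfolio = json.loads("""{
  "projects": [{"name": "Portfolio-Former",
                "summary": "A fine-tuned LLM that answers questions about my portfolio.",
                "stack": ["Python", "PyTorch", "PEFT"]}],
  "skills": ["LoRA", "quantization"]
}""")

for ex in build_examples(portfolio):
    print(ex["instruction"])
```

Keeping the responses in the first person at generation time is what lets the fine-tuned model answer conversationally without a retrieval step.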

Features

  • Automated instruction dataset generation from structured JSON files.
  • Efficient fine-tuning using Parameter-Efficient Fine-Tuning (PEFT) with LoRA.
  • 4-bit quantization to reduce memory and computational costs during training.
  • A custom system prompt to define the AI's persona and ensure factual, first-person responses.
  • The model can synthesize information from multiple sources to answer complex, general questions.
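The PEFT/LoRA and 4-bit quantization setup from the list above can be sketched with Hugging Face `transformers` and `peft`. This is a configuration sketch under assumptions: the hyperparameters (`r`, `lora_alpha`, dropout, target modules) and the exact model checkpoint name are illustrative defaults, not the values used in this project.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization cuts the memory footprint of the frozen base weights.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Checkpoint name is an assumption; substitute the actual Llama 3.1 8B variant.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA trains small low-rank adapters on the attention projections
# while the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

With this setup, only the adapter weights are updated during fine-tuning, which is what makes training an 8B-parameter model feasible on a single consumer GPU.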

Let them speak

People talked a lot, but in the end the results stayed, and that's what really mattered.
