Fine-tune ChatGPT with few-shot learning for personalized resume bullet points.
I initiated this project in response to the challenges of the current job market and the growing importance of tailoring resumes to specific job requirements. Inspired by the increasing use of Large Language Models, and in particular the practice of fine-tuning them for specific tasks, I decided to experiment with few-shot fine-tuning of GPT-3.5 Turbo to build a system that streamlines the process of customizing resumes for job descriptions.
The initial phase involved reviewing successful resumes on platforms like LinkedIn, including those of classmates and alumni, to identify effective formats and content. These selected resumes, along with a guide on crafting impactful resume bullet points such as the one provided by Columbia University, were then used to create guidelines for the bullet points the fine-tuned model should generate.
In the following stage, a dataset of prompt-answer pairs was compiled for training the model, guided by OpenAI's fine-tuning documentation. This involved crafting diverse conversations to simulate the real-world scenarios the model might encounter in production. Job descriptions were pulled from LinkedIn, and the STAR method was used to write robust bullet points for each role. Additionally, ChatGPT 3.5 with descriptive prompts was used to generate extra bullet points. The final dataset, stored in 'prompt-answer-pairs.jsonl', was then used to fine-tune the model, as sketched below.
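The sketch below shows the general shape of this step, assuming the OpenAI Python SDK (v1.x) and the chat-format training file described in OpenAI's fine-tuning documentation. The system prompt and example contents are illustrative placeholders, not the actual training data.

```python
# Sketch of one training example in prompt-answer-pairs.jsonl (illustrative content):
# {"messages": [
#   {"role": "system", "content": "You write concise, STAR-style resume bullet points."},
#   {"role": "user", "content": "Job description: ... Candidate experience: ..."},
#   {"role": "assistant", "content": "- Led a 4-person team to migrate reporting to BigQuery, cutting query costs by 30%"}
# ]}

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the training file, then start a fine-tuning job on gpt-3.5-turbo.
training_file = client.files.create(
    file=open("prompt-answer-pairs.jsonl", "rb"),
    purpose="fine-tune",
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id)  # poll this job until it reports the fine-tuned model name
```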
The results from the fine-tuned model were as follows: despite a limited fine-tuning dataset, it performed well when given few-shot examples at inference time. Opportunities for improvement include increasing dataset diversity and building a user interface. Note that this was a small-scale experiment.
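For reference, this is roughly how the fine-tuned model can be queried with few-shot examples; a minimal sketch assuming the OpenAI Python SDK, with a placeholder fine-tuned model ID and illustrative few-shot messages rather than the project's actual prompts.

```python
from openai import OpenAI

client = OpenAI()

# Placeholder ID: replace with the model name returned by the fine-tuning job.
FINE_TUNED_MODEL = "ft:gpt-3.5-turbo:personal::abc123"

# One illustrative few-shot example precedes the real request, mirroring the
# prompt-answer pairs used during fine-tuning.
messages = [
    {"role": "system", "content": "You write concise, STAR-style resume bullet points."},
    {"role": "user", "content": "Job description: data analyst role requiring SQL and dashboarding. Experience: built sales reports."},
    {"role": "assistant", "content": "- Automated weekly sales reporting with SQL and Tableau, saving 6 hours of manual work per week"},
    {"role": "user", "content": "Job description: <paste target posting>. Experience: <paste candidate experience>."},
]

response = client.chat.completions.create(model=FINE_TUNED_MODEL, messages=messages)
print(response.choices[0].message.content)
```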