
Rock, Paper, and Scissors Image Classifier

This is a final project that I worked on independently as part of the graduation requirements for the Dicoding Machine Learning Developer class. The dataset used was obtained from the Dicoding platform.

A link to the dataset used in the project.

Table of Contents

Background

Project Approach

  1. Objectives
  2. Methodology
  3. Expected Outcomes

Conclusion

Documentation

Maintainers

1. Objectives

The primary objectives of this project are:

  1. Gather and preprocess a comprehensive dataset of hand images representing rock, paper, and scissors gestures, ensuring a diverse range of hand shapes, sizes, and skin tones (a preprocessing sketch follows this list).
  2. Design and train a convolutional neural network (CNN) tailored for image classification to accurately distinguish between the three gesture categories.
  3. Assess the model's performance on a dedicated validation dataset, tracking accuracy and loss on both the training and validation data.
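A minimal sketch of the preprocessing step in objective 1, assuming TensorFlow/Keras, a dataset directory named rockpaperscissors with one subfolder per class, and a 40% validation split (all of these are assumptions; the exact code used in the project may differ):

```python
# Illustrative preprocessing sketch (assumed layout: rockpaperscissors/{rock,paper,scissors}/).
from tensorflow.keras.preprocessing.image import ImageDataGenerator

DATASET_DIR = "rockpaperscissors"  # hypothetical dataset location

# Rescale pixels, apply light augmentation, and hold out part of the data
# for validation (the 40% split here is an assumption).
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,
    horizontal_flip=True,
    zoom_range=0.2,
    validation_split=0.4,
)

train_gen = datagen.flow_from_directory(
    DATASET_DIR,
    target_size=(150, 150),
    batch_size=32,
    class_mode="categorical",
    subset="training",
)
val_gen = datagen.flow_from_directory(
    DATASET_DIR,
    target_size=(150, 150),
    batch_size=32,
    class_mode="categorical",
    subset="validation",
)
```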

2. Methodology

  • Data Collection and Preprocessing
  • Model Development (a minimal sketch follows this list)
  • Model Evaluation
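The model development step could look roughly like the sketch below, assuming TensorFlow/Keras and the train_gen/val_gen generators from the preprocessing sketch above. The layer sizes and epoch count are illustrative; the architecture actually used is shown in the Model Summary screenshot under Documentation.

```python
# Illustrative CNN for three-class gesture classification (not necessarily
# the exact architecture used in this project).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dense(3, activation="softmax"),  # rock, paper, scissors
])

model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

# Train with the generators defined in the preprocessing sketch.
history = model.fit(train_gen, validation_data=val_gen, epochs=20)
```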

3. Expected Outcomes

An image classification model that can accurately distinguish three categories of hand images: rock, paper, and scissors.
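To illustrate that outcome, the sketch below runs a single-image prediction with the trained model; predict_gesture and the example file name are hypothetical and only show the intended usage, not the project's actual code.

```python
# Hypothetical single-image prediction helper.
import numpy as np
from tensorflow.keras.preprocessing import image


def predict_gesture(img_path, model, class_indices):
    img = image.load_img(img_path, target_size=(150, 150))
    x = image.img_to_array(img) / 255.0   # same rescaling as during training
    x = np.expand_dims(x, axis=0)         # add the batch dimension
    probs = model.predict(x)[0]
    labels = {v: k for k, v in class_indices.items()}
    return labels[int(np.argmax(probs))], float(np.max(probs))


# Example usage (path and variables are placeholders):
# label, confidence = predict_gesture("example_hand.png", model, train_gen.class_indices)
```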

Conclusion

From this rock-paper-scissors image classification project, I gained a deeper understanding of machine learning, image processing, and techniques related to classification tasks.

Documentation

Model Summary

(screenshot of the model summary)

Evaluation of model

(screenshot of the evaluation results)

Visualization of accuracy and loss

(screenshot of the accuracy and loss plots)

Prediction of model

(screenshot of a sample prediction)

Maintainers

@velyncodes

Please do not steal my work. It took countless cups of coffee and many sleepless nights to create. Thank you.

©️ Zevanna Vangelyn (Velyn)