Score: 100/100
This project uses a Convolutional Neural Network (CNN) to classify handwritten digits.
To achieve this, the code does the following:
- Load the training config.
- Download the MNIST dataset (divided into training and testing sets).
- Construct the neural network.
- Update the network parameters on the training dataset by minimizing the loss (training).
- Evaluate the neural network on the testing dataset (testing).
- Plot the results.
The given code is in main_given.py.
My implementation is in main.py.
See the original repository: here
My repository: here
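The overview above includes constructing the neural network, which the step lists below do not detail. A minimal sketch of such a CNN in PyTorch follows; the `Net` name and layer sizes are illustrative assumptions, not necessarily the exact architecture in main.py:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small CNN for 28x28 grayscale MNIST digits (layer sizes are illustrative)."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)   # 28x28 -> 26x26
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)  # 26x26 -> 24x24
        self.fc1 = nn.Linear(64 * 12 * 12, 128)        # after 2x2 max-pooling
        self.fc2 = nn.Linear(128, 10)                  # 10 digit classes

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.max_pool2d(x, 2)
        x = torch.flatten(x, 1)
        x = F.relu(self.fc1(x))
        # log-softmax matches the "max log-probability" prediction step used later
        return F.log_softmax(self.fc2(x), dim=1)
```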
The main function does the following:
- Load the config and random seeds.
- Run multiprocessing with different seeds; each run modifies the config to its assigned seed, then runs with the updated config (a minimal sketch follows this list).
- Plot the results, which are recorded in .txt files.
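A minimal sketch of that dispatch logic, assuming a `run(config)` entry point and a config dict with a `"seed"` key (both names are hypothetical):

```python
import copy
import multiprocessing as mp

def run(config):
    # Placeholder for the real run function described below.
    print("running with seed", config["seed"])

def worker(base_config, seed):
    # Each process gets its own config copy with its assigned seed.
    config = copy.deepcopy(base_config)
    config["seed"] = seed
    run(config)

if __name__ == "__main__":
    base_config = {"seed": 0, "epochs": 5, "lr": 1.0}  # illustrative values
    processes = [mp.Process(target=worker, args=(base_config, s)) for s in (0, 1, 2)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    # plotting of the recorded .txt results would follow here
```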
In the run function, first assign the seed and device (CPU or CUDA). The rest of the run function was given; it mainly does the following (sketched after the list):
- Set DataLoader arguments based on whether CUDA is available.
- Load and preprocess the MNIST dataset.
- Initialize the DataLoaders for training and testing, the model, the optimizer, and the learning rate scheduler.
- For each epoch, train and test the model, step the learning rate scheduler, and record the statistics.
- Plot the training and testing performance.
- Save the results to a file and save the trained model.
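A condensed sketch of such a run function, assuming the `Net` class sketched earlier and the `train`/`test` functions sketched below; the config keys, optimizer, and hyperparameter values are illustrative:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torch.optim.lr_scheduler import StepLR

def run(config):
    """Sketch of the run function described above (names/values are assumptions)."""
    torch.manual_seed(config["seed"])
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Extra DataLoader arguments only when CUDA is available
    loader_kwargs = {"num_workers": 1, "pin_memory": True} if device.type == "cuda" else {}

    transform = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.1307,), (0.3081,)),  # standard MNIST statistics
    ])
    train_set = datasets.MNIST("data", train=True, download=True, transform=transform)
    test_set = datasets.MNIST("data", train=False, transform=transform)
    train_loader = DataLoader(train_set, batch_size=64, shuffle=True, **loader_kwargs)
    test_loader = DataLoader(test_set, batch_size=1000, **loader_kwargs)

    model = Net().to(device)  # the CNN sketched earlier
    optimizer = torch.optim.Adadelta(model.parameters(), lr=config["lr"])
    scheduler = StepLR(optimizer, step_size=1, gamma=0.7)

    stats = []
    for epoch in range(1, config["epochs"] + 1):
        train_loss, train_acc = train(model, device, train_loader, optimizer)
        test_loss, test_acc = test(model, device, test_loader)
        scheduler.step()                             # update the learning rate
        stats.append((epoch, train_loss, train_acc, test_loss, test_acc))
    # plotting and writing `stats` to a results file would follow here
    torch.save(model.state_dict(), "mnist_cnn.pt")   # save the trained model
    return stats
```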
The train function does the following (a sketch follows the list):
- Iterate through each batch in the training dataset loader.
- Move the input data and target labels to the device.
- Reset the optimizer's gradients to zero.
- Forward pass: compute the model's output from the input data.
- Compute the loss between the model's output and the target labels.
- Backward pass: compute gradients of the loss with respect to the model's parameters.
- Update the model's parameters using the gradients.
- Accumulate the total loss for the current batch.
- Get the index of the max log-probability (i.e., the predicted class) for each sample in the batch.
- Compare the predicted classes with the target classes and count the number of correct predictions.
- After the loop: compute the average loss and accuracy over the training set.
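A minimal sketch of that training loop, assuming the model returns log-probabilities (as the log-softmax output sketched earlier does):

```python
import torch.nn.functional as F

def train(model, device, train_loader, optimizer):
    """One epoch of training, following the steps listed above."""
    model.train()
    total_loss, correct = 0.0, 0
    for data, target in train_loader:                   # iterate over batches
        data, target = data.to(device), target.to(device)  # move to device
        optimizer.zero_grad()                           # reset gradients
        output = model(data)                            # forward pass
        loss = F.nll_loss(output, target)               # negative log-likelihood loss
        loss.backward()                                 # backward pass: gradients
        optimizer.step()                                # update parameters
        total_loss += loss.item() * data.size(0)        # accumulate summed batch loss
        pred = output.argmax(dim=1)                     # index of max log-probability
        correct += pred.eq(target).sum().item()         # count correct predictions
    n = len(train_loader.dataset)
    return total_loss / n, correct / n                  # average loss and accuracy
```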
The test function does the following (a sketch follows the list):
- Iterate through each batch in the test dataset loader.
- Move the input data and target labels to the device.
- Forward pass: compute the model's output from the input.
- Compute the loss, summed over the batch.
- Get the index of the max log-probability (i.e., the predicted class) for each sample in the batch.
- Count the number of correct predictions.
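A matching sketch for the test loop; the summed loss uses `reduction="sum"` so per-sample losses accumulate before averaging:

```python
import torch
import torch.nn.functional as F

def test(model, device, test_loader):
    """Evaluation pass, following the steps listed above."""
    model.eval()
    total_loss, correct = 0.0, 0
    with torch.no_grad():                               # no gradients needed for testing
        for data, target in test_loader:                # iterate over test batches
            data, target = data.to(device), target.to(device)  # move to device
            output = model(data)                        # forward pass
            # reduction="sum" gives the summed batch loss, as described above
            total_loss += F.nll_loss(output, target, reduction="sum").item()
            pred = output.argmax(dim=1)                 # index of max log-probability
            correct += pred.eq(target).sum().item()     # count correct predictions
    n = len(test_loader.dataset)
    return total_loss / n, correct / n                  # average loss and accuracy
```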
The plot function simply uses the pyplot API to plot the epoch and performance data, with the title and labels as required.
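For example, a minimal helper along those lines; the signature is an assumption, chosen to match the "performance" parameter mentioned below:

```python
import matplotlib.pyplot as plt

def plot(epochs, performance, title, ylabel, filename):
    """Hypothetical plot helper: one curve of `performance` against `epochs`."""
    plt.figure()
    plt.plot(epochs, performance)
    plt.title(title)
    plt.xlabel("Epoch")
    plt.ylabel(ylabel)
    plt.savefig(filename)
    plt.close()
```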
The plot_mean function reads the data recorded in the text files, uses a list to store the epochs, and uses a dictionary per epoch to store the four data series used for plotting. It then converts each group of data belonging to the same plot into a list, which is passed as the "performance" parameter of the plot function.
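A sketch of that aggregation, using the plot helper above and assuming each results file stores one whitespace-separated line per epoch (the exact file format and filename pattern are assumptions):

```python
import glob

def plot_mean(pattern="results_seed*.txt"):
    """Average per-epoch statistics over all recorded .txt result files.

    Assumes each line of a results file looks like (format is an assumption):
        epoch train_loss train_acc test_loss test_acc
    """
    epochs = []        # list of epoch numbers, in order of first appearance
    totals = {}        # epoch -> dict summing the four metrics across files
    counts = {}        # epoch -> number of files contributing to that epoch
    keys = ["train_loss", "train_acc", "test_loss", "test_acc"]
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                epoch, *values = line.split()
                epoch = int(epoch)
                if epoch not in totals:
                    epochs.append(epoch)
                    totals[epoch] = dict.fromkeys(keys, 0.0)
                    counts[epoch] = 0
                for k, v in zip(keys, values):
                    totals[epoch][k] += float(v)
                counts[epoch] += 1
    # Convert each metric into a list aligned with `epochs`, then hand it to plot()
    for k in keys:
        performance = [totals[e][k] / counts[e] for e in epochs]
        plot(epochs, performance, title=k, ylabel=k, filename=f"mean_{k}.png")
```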