This repository tracks my progress through Data Science Nigeria's 70 Days of Machine Learning.
- Day 1 - Introduction to Machine Learning
- Day 7 - How Linear Regression works
- Day 10 - R-Squared theory;
Another name for R-squared is the Coefficient of Determination.
Error is the distance between a data point and the line of best fit.
The error is squared so that every error contributes a positive value.
- Day 13 - Introduction to the K Nearest Neighbors algorithm;
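A minimal sketch of R-squared from these definitions, using made-up data points and a hand-picked line of best fit (both assumed purely for illustration):

```python
# Hypothetical data and an assumed line of best fit y = 2x + 1.
xs = [1, 2, 3, 4, 5]
ys = [3.1, 4.9, 7.2, 9.0, 10.8]           # observed values
ys_line = [2 * x + 1 for x in xs]         # predictions from the line

mean_y = sum(ys) / len(ys)
ss_res = sum((y - f) ** 2 for y, f in zip(ys, ys_line))  # squared errors to the line
ss_tot = sum((y - mean_y) ** 2 for y in ys)              # squared errors to the mean
r_squared = 1 - ss_res / ss_tot           # Coefficient of Determination
print(round(r_squared, 3))
```

The closer R-squared is to 1, the better the line explains the data.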
'K' in the "K Nearest Neighbors" algorithm is a parameter: the number of nearest neighbors to consider during the voting process.
The K Nearest Neighbors algorithm is a supervised learning algorithm.
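The voting process above can be sketched in a few lines; the 2-D points, labels, and k value are made-up assumptions for illustration:

```python
from collections import Counter
import math

# Hypothetical labelled training points: (coordinates, class label).
train = [((1.0, 1.0), 'a'), ((1.5, 2.0), 'a'), ((5.0, 5.0), 'b'),
         ((6.0, 5.5), 'b'), ((5.5, 6.0), 'b')]

def knn_predict(query, train, k=3):
    # Sort training points by Euclidean distance to the query,
    # keep the k nearest, then let those neighbors vote on the class.
    nearest = sorted(train, key=lambda p: math.dist(query, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((5.2, 5.4), train))  # neighbors are class 'b'
```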
Clustering is the process of dividing data points into groups of similar points.
- Day 21 - Understanding Vectors;
The magnitude of a vector is denoted with bars, e.g. ||v||.
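The magnitude calculation is the square root of the sum of the squared components; a quick sketch with an example vector:

```python
import math

# ||v|| = sqrt(sum of squared components); (3, 4) is a classic example.
v = (3, 4)
magnitude = math.sqrt(sum(c * c for c in v))
print(magnitude)  # → 5.0
```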
Learnt how to calculate the magnitude of a vector.
- Day 22 - Support Vector Assertion;
The dot product captures the relationship between the input vector and the weight vector.
If vector u dotted with w, plus b (u.w + b), equals zero, vector u lies on the decision boundary.
If u.w + b is greater than or equal to zero, the sample belongs to the class above the hyperplane.
- Day 23 - Support Vector Machine Fundamentals;
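The u.w + b decision rule above can be sketched directly; the weight vector w and bias b here are assumed values for a hypothetical trained hyperplane, not output from any real model:

```python
# Assumed hyperplane parameters for illustration only.
w = (1.0, -1.0)
b = 0.0

def side(u, w, b):
    # Compute u.w + b and read off which side of the hyperplane u falls on.
    score = sum(ui * wi for ui, wi in zip(u, w)) + b
    if score == 0:
        return 'on decision boundary'
    return 'positive class' if score > 0 else 'negative class'

print(side((2.0, 1.0), w, b))  # 2 - 1 + 0 = 1, above the hyperplane
print(side((1.0, 1.0), w, b))  # 1 - 1 + 0 = 0, on the boundary
```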
A support vector is a feature set (data point) that, if moved, changes the position of the best separating hyperplane.
- Day 24 - Support Vector Machine Optimization;
The equation for the hyperplane is X.W + b = 0.
Support Vector Machines are less effective when the data is noisy and contains overlapping points.
- Day 29 & 30 - Introduction to Kernels;
Kernels are computed using an inner product.
A kernel takes two inputs and outputs their similarity.
The inner product is a projection of x1 onto x2.
The transformation into the new feature space is denoted with the Greek letter phi; the kernel works on the transformed inputs.
Transforming the data into a new feature space and finding a hyperplane there helps the SVM perform better on non-linearly separable data.
The default kernel for SVM in scikit-learn is the Radial Basis Function (RBF).
- Day 31 - Soft Margin SVM;
For a more generalised model, you have to find the kernel that best represents the dataset.
A Soft Margin Classifier allows some data points to violate the margin of the separating hyperplane.
A Hard Margin Classifier separates the data points perfectly, with none inside the margin.
In a soft margin there is a degree of error called slack.
Slack is constrained to be non-negative: slack >= 0.
A slack value of zero for every sample indicates a hard margin.
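The slack idea above can be sketched as max(0, 1 - y * (x.w + b)) per sample; the hyperplane parameters and labelled samples below are assumed values for illustration:

```python
# Assumed hyperplane (w, b) and labelled samples (x, y) for illustration.
w, b = (1.0, 1.0), -3.0
samples = [((1.0, 1.0), -1), ((4.0, 2.0), 1), ((2.0, 1.5), 1)]

def slack(x, y, w, b):
    # Zero when the sample is correctly classified outside the margin;
    # positive when it violates the margin.
    score = sum(xi * wi for xi, wi in zip(x, w)) + b
    return max(0.0, 1 - y * score)

slacks = [slack(x, y, w, b) for x, y in samples]
print(slacks)  # the third sample sits inside the margin, so its slack is positive
```

If every slack were zero, this would be a hard margin.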
A ratio of support vectors to number of samples (SV / samples) that approaches 1 indicates overfitting, non-linearly separable data, or the wrong kernel.
- Day 33 - Support Vector Machine Parameters
SVM is a binary classifier, so it can only separate two groups per decision boundary; the multi-class strategies are OVO (One versus One) and OVR (One versus Rest).
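One-versus-Rest can be sketched as one binary scorer per class, with the highest score winning; the per-class linear scorers (w, b) below are assumed values, not trained models:

```python
# Hypothetical per-class linear scorers: class label -> (weights, bias).
classifiers = {
    'red':   ((1.0, 0.0), 0.0),
    'green': ((0.0, 1.0), 0.0),
    'blue':  ((-1.0, -1.0), 0.0),
}

def ovr_predict(x):
    # Score x against every binary "class vs rest" boundary,
    # then predict the class with the highest score.
    scores = {label: sum(xi * wi for xi, wi in zip(x, w)) + b
              for label, (w, b) in classifiers.items()}
    return max(scores, key=scores.get)

print(ovr_predict((2.0, 1.0)))  # the 'red' scorer wins here
```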
I learnt about a lot of SVM parameters.
- Day 43 - Introduction to Neural Networks.
A neural network with two or more hidden layers is considered a deep neural network.
The layers of a neural network are: Input, Hidden, and Output.
An activation function serves as a threshold that determines a neuron's output value.
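A single neuron from these notes can be sketched as a weighted sum plus a bias, passed through an activation; the weights, bias, and sigmoid activation here are assumed choices for illustration:

```python
import math

def sigmoid(z):
    # Sigmoid activation: squashes any real number into (0, 1),
    # acting as a soft threshold on the neuron's output.
    return 1 / (1 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, then the activation function.
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Assumed inputs, weights, and bias for illustration.
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 3))  # → 0.599
```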
The idea of a Neural Network is to mimic a neuron: a basic neuron has dendrites, a nucleus, an axon, and terminal axons.
- Day 44 - Installation of TensorFlow.
I wrote an article on it. Click here to read.
- Day 65 - Introduction to 3D Convolutional Neural Network
Done with 70 Days of ML!
Resources:
- #DSN70daysofML