This is an artificially intelligent system that I developed using Machine Learning and Deep Learning.
There are many people all over the world, and in our country as well, who cannot speak. They feel miserable when they try to convey a message to other people, because they always need someone who knows their sign language to interpret for them. To remove this dependency, I wanted to build a system through which they can easily convey their message without any third person involved. The system takes their gesture or sign as input and speaks out what they want to say.
Data is precious in Machine Learning and Deep Learning; without data we cannot do anything. So, first of all, we developed our own dataset, which was a very difficult task: we captured a large number of images and extracted just the sign from each of them, spending a lot of time building the dataset for our model. Our dataset contains 300x300 images.
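A minimal sketch of how such a dataset can be prepared, assuming the raw captures are stored in one folder per sign class (the folder names and paths below are hypothetical, not the actual ones used in this project):

```python
import os
import cv2  # OpenCV for reading and resizing images

RAW_DIR = "raw_captures"   # hypothetical folder of per-class image folders
OUT_DIR = "dataset"        # hypothetical output folder for the cleaned dataset
IMG_SIZE = (300, 300)      # target size used in this project

os.makedirs(OUT_DIR, exist_ok=True)

for label in os.listdir(RAW_DIR):
    src_folder = os.path.join(RAW_DIR, label)
    dst_folder = os.path.join(OUT_DIR, label)
    os.makedirs(dst_folder, exist_ok=True)
    for name in os.listdir(src_folder):
        img = cv2.imread(os.path.join(src_folder, name))
        if img is None:
            continue  # skip unreadable files
        img = cv2.resize(img, IMG_SIZE)  # resize every sign image to 300x300
        cv2.imwrite(os.path.join(dst_folder, name), img)
```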
Some images from our dataset:
I built an artificial neural network model and trained it on our dataset. It trained successfully with a good accuracy of 75-80%. I tested it on random images from the validation set: whenever I run the testing code in the "Test On validation Set" section, it chooses a random image from the validation set and predicts the result.
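The exact architecture is not listed here, so the following is only a hedged sketch of a simple artificial neural network in Keras working on 300x300 images, plus a small helper in the spirit of the "Test On validation Set" step that picks a random validation image and predicts its class. The layer sizes, number of classes, and variable names (`x_val`, `y_val`, `class_names`) are assumptions:

```python
import numpy as np
from tensorflow import keras

NUM_CLASSES = 10  # assumed number of sign classes

# Simple fully connected (ANN) model on flattened 300x300 RGB images
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(300, 300, 3)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# x_train, y_train, x_val, y_val are assumed to be prepared NumPy arrays:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=20)

def test_on_random_validation_image(x_val, y_val, class_names):
    """Pick a random validation image and print the predicted sign."""
    idx = np.random.randint(len(x_val))
    probs = model.predict(x_val[idx:idx + 1], verbose=0)
    predicted = class_names[int(np.argmax(probs))]
    print(f"True: {class_names[int(y_val[idx])]}  Predicted: {predicted}")
    return predicted
```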
The model predicts the result as text, and this text is then converted into speech. By simply playing the audio, you can hear what the deaf or mute person wants to say with that gesture.
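A hedged example of the text-to-speech step using the gTTS library (the helper name, output file name, and the choice of gTTS itself are assumptions; any text-to-speech tool and audio player can be used instead):

```python
from gtts import gTTS  # Google Text-to-Speech (pip install gTTS)

def speak_prediction(predicted_text, out_path="prediction.mp3"):
    """Convert the predicted sign text into an audio file."""
    tts = gTTS(text=predicted_text, lang="en")
    tts.save(out_path)  # play this file to hear the spoken message
    return out_path

# Example: speak the label predicted by the model
speak_prediction("Hello")
```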