
Test set acc doesn't work #18
Open
Salidor opened this issue Jul 13, 2019 · 11 comments


Salidor commented Jul 13, 2019

I am setting the test_Y line to True:

    # for 'classification'
    self.loss_and_acc = None    # loss, train_acc, test_acc, spend_time
    self.test_Y = True          # real label
    self.real_class = None
    self.pred_class = None

But it still doesn't show the accuracy on the test set.

@zhuofupan (Owner)

test_Y here is not related to whether the test result is output or not. It is only used to prevent the error "model has no attribute test_Y". The problem may be caused by a wrong setting of your dataset.


Salidor commented Jul 13, 2019

What is this line doing then?

    if test_Y is not None:
        acc = self.test_average_accuracy(test_X, test_Y, sess)
        string = string + '  | 「Test」: accuracy = {:.4}%'.format(acc*100)
        self.loss_and_acc[i][2] = acc   # <2> Test accuracy


Salidor commented Jul 14, 2019

Also, could you please provide the password ("code") for your dataset in an encoding that everyone around the world can read? I've tried many Chinese character encodings, but it didn't work; the password is wrong.

@zhuofupan (Owner)

If you provide test_Y, then the prediction procedure on the test set will be performed after each training epoch. By setting the dataset in the form [train_X, train_Y, test_X, test_Y], you should get the right result.
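
For example, a minimal sketch of assembling such a four-element dataset list; the placeholder arrays, the train_test_split usage, and the commented-out model.train_model(...) call are illustrative assumptions, not the repo's confirmed API:

    # A minimal sketch, assuming your data sits in NumPy arrays X (features)
    # and Y (one-hot labels); 'model.train_model(datasets=...)' is hypothetical.
    import numpy as np
    from sklearn.model_selection import train_test_split

    X = np.random.rand(1000, 64).astype(np.float32)        # placeholder features
    Y = np.eye(10)[np.random.randint(0, 10, size=1000)]    # placeholder one-hot labels

    train_X, test_X, train_Y, test_Y = train_test_split(X, Y, test_size=0.2, random_state=0)

    # All four elements must be provided; if test_Y is left out (None),
    # only the training accuracy is printed, as discussed above.
    datasets = [train_X, train_Y, test_X, test_Y]
    # model.train_model(datasets=datasets)                 # hypothetical call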


Salidor commented Jul 14, 2019

[image]
Could you please provide the password to your dataset? As I said, I've tried Chinese character encodings but it didn't work. Also, I've built the dataset in exactly the same way as yours and got it in the form of a sub01[usc].csv file. The accuracy was 92%, so I guess the dataset structure was OK. Which file should I set in the form [train_X, train_Y, test_X, test_Y]? May I ask you to be a bit more specific? Thank you for your time.

@zhuofupan (Owner)

The txt is already fixed; the password is '7mpb'. If you don't want to process the raw sound data in 'wav' form, you don't need to download it.
You can see the setting rule in all of the test files:
in 'classification_MINST.py'

    datasets = [mnist.train.images, mnist.train.labels, mnist.test.images, mnist.test.labels]

in 'classification_USC.py'

    datasets = read_data(meth='mfcc')   # X_train, Y_train, X_test, Y_test = read_sf_data(dynamic=t)

in 'prediction_BMS.py'

    train_X, train_Y, test_X = read_data()
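
For your own data, e.g. a CSV like the sub01[usc].csv file mentioned above, a sketch of turning it into the same four-element form could look like this; the column layout (label in the last column), the integer-label assumption, and the split ratio are all assumptions, so adjust them to your real file:

    # A sketch, assuming a CSV whose last column holds an integer class label
    # and the remaining columns hold features.
    import numpy as np
    import pandas as pd

    df = pd.read_csv('sub01[usc].csv')             # file name taken from the thread
    X = df.iloc[:, :-1].to_numpy(dtype=np.float32)
    labels = df.iloc[:, -1].to_numpy()

    n_class = len(np.unique(labels))
    Y = np.eye(n_class)[labels.astype(int)]        # one-hot encode, assuming labels are 0..n_class-1

    split = int(0.8 * len(X))                      # simple 80/20 split for illustration
    datasets = [X[:split], Y[:split], X[split:], Y[split:]]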


Salidor commented Jul 15, 2019

Thank you for your quick answers.

in 'classification_USC.py'

    datasets = read_data(meth='mfcc')   # X_train, Y_train, X_test, Y_test = read_sf_data(dynamic=t)

When I uncomment X_train, Y_train, X_test, Y_test = read_sf_data(dynamic=t), I get an error: name 'read_sf_data' is not defined. What is "sf data" here?


ghost commented Feb 19, 2020

I was working on this and I had the same problem as @Salidor. Can you please describe the exact procedure for supplying Y_test? The classifier only goes through the training phase and never does the test, because Y_test is always None. Some help, please? @fuzimaoxinan

@zhuofupan (Owner)

Hello, if you use the dataset offered by me, it indeed will not show the results, since the dataset is downloaded from the internet and it doesn't provide labels. Try using your own dataset or classification task. In the code, if you don't provide labels, the test result will not be shown in the console.


ghost commented Feb 20, 2020

Thank you, I tried with some data and it all worked. But could you explain how it is possible that, with my data, I always obtain 100% test accuracy and 50-60% train accuracy? How is this possible? It seems that it always ends up overfitting.

@zhuofupan (Owner)

That is confusing. First, to improve train accuracy, you can appropriately adjust the learning rate, reduce the dropout rate, reduce the batch size, and so on. Because datasets differ, you need to experiment repeatedly to find appropriate values. Then, is there a data-imbalance problem in your dataset? Is the number of test samples too small, so that it is easy to get a high accuracy? Finally, check your dataset carefully to rule out the possibility of program errors.
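
A quick way to check the imbalance and test-set-size points is to count the labels in each split. This is a generic diagnostic sketch, assuming train_Y and test_Y are one-hot label arrays as in the examples above (the placeholder arrays here are only for illustration):

    # Minimal diagnostic sketch for class balance and split sizes.
    from collections import Counter
    import numpy as np

    def label_distribution(Y_onehot):
        """Return {class_index: count} for a one-hot label matrix."""
        return Counter(np.argmax(np.asarray(Y_onehot), axis=1).tolist())

    # Placeholder one-hot labels; replace with your real train_Y / test_Y arrays.
    train_Y = np.eye(3)[np.random.randint(0, 3, size=600)]
    test_Y  = np.eye(3)[np.random.randint(0, 3, size=40)]

    print('train size:', len(train_Y), 'per class:', label_distribution(train_Y))
    print('test  size:', len(test_Y),  'per class:', label_distribution(test_Y))
    # A tiny or single-class test set would explain a suspiciously high test accuracy.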
