Regarding model training #17
Hello, thank you for reaching out. As you mentioned, the correct order is to pre-train the spatial model first using spatial_train.py. I recommend verifying that your pre-trained spatial model achieves performance comparable to the one provided in /models before proceeding. Next, run end2end_train.py. Hyperparameter settings:
Additionally, please note that dataset modifications, such as adding or removing videos, can also impact performance.
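Putting the order discussed in this thread together: the script names below come from this repo, but the driver itself is only an illustrative sketch (it defaults to a dry run that just prints the order; in practice each script is run directly and reads its own settings):

```python
import subprocess

# Pipeline order reconstructed from this thread (script names are from the repo).
PIPELINE = [
    "extract_features.py",  # 1. extract features for KonIQ (IQA) and LIVE-VQC (VQA)
    "spatial_train.py",     # 2. pre-train the spatial model on the KonIQ features
    "end2end_train.py",     # 3. end-to-end training on LIVE-VQC, warm-started from step 2
    "test_model.py",        # 4. evaluate the trained model
]

def run_pipeline(dry_run=True):
    """Print (or execute) each training stage in order."""
    for script in PIPELINE:
        cmd = ["python", script]
        print(" ".join(cmd))
        if not dry_run:
            subprocess.run(cmd, check=True)

run_pipeline()  # dry run: only prints the order
```

The key point from the maintainer's reply is the checkpoint between steps 2 and 3: verify the pre-trained spatial model matches the performance of the one in /models before starting end-to-end training.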
So, do I not need to use extract_features.py to extract features? The features I had previously were extracted with extract_features.py from the LIVE-VQC dataset downloaded from the official website. Which features should I use now to train the spatial model?
Could you please tell me the detailed steps to train a model using the LIVE-VQC dataset from the official website and then test it?
You definitely need to extract features using extract_features.py. Steps to train the model:
This is exactly how I trained the models, but the training of both the spatial model and the final model always ends prematurely. Moreover, the Mean Opinion Score (MOS) predicted by the trained models is far outside the 0-100 range of the training set, reaching around 200-300. Additionally, when using test_model.py for testing, the index is abnormal. Here are my detailed steps:
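Predictions of 200-300 against 0-100 labels often point to a label-scale mismatch (for example, MOS normalized in one script but consumed raw in another). A minimal sanity-check sketch, with hypothetical arrays standing in for the real training labels and model outputs:

```python
import numpy as np

# Hypothetical values -- substitute your actual training MOS and model outputs.
train_mos = np.array([32.5, 71.2, 55.0, 90.1])     # LIVE-VQC MOS lie in [0, 100]
pred_mos = np.array([210.3, 287.9, 245.1, 301.4])  # suspiciously large predictions

lo, hi = train_mos.min(), train_mos.max()
out_of_range = (pred_mos < lo) | (pred_mos > hi)
print(f"{int(out_of_range.sum())}/{pred_mos.size} predictions outside [{lo:.1f}, {hi:.1f}]")
```

If nearly every prediction falls outside the training label range, compare how the labels are scaled in spatial_train.py, end2end_train.py, and test_model.py; the fix is to make all three use the same normalization, not to rescale the outputs afterwards.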
Hello, author. If I want to train a model on the LIVE-VQC video dataset myself, and also pre-train the spatial model myself (as described in your paper), in what order should I run your code, and how should I set the parameters?
Previously, I first used extract_features.py to extract features for the LIVE-VQC and KonIQ datasets. Then I ran spatial_train.py to pre-train the spatial model on the KonIQ features. After that, I ran end2end_train.py to train the model on the full LIVE-VQC dataset using the LIVE-VQC features and the pre-trained spatial model. However, when running test_model.py, I found that all the performance metrics were lower than those reported in your paper, and I also got the following error:
RuntimeWarning: overflow encountered in exp
logisticPart = 1 + np.exp(np.negative(np.divide(X - bayta3, np.abs(bayta4))))
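The warning fires because np.exp overflows float64 once its argument exceeds roughly 709, which happens whenever (X - bayta3) / |bayta4| is large and negative. A numerically stable sketch of the 4-parameter logistic mapping commonly used in VQA evaluation (the function name and the bayta1/bayta2 outer terms are assumptions based on that common form; check test_model.py for the exact expression used here):

```python
import numpy as np

def logistic_4param(X, bayta1, bayta2, bayta3, bayta4):
    """4-parameter logistic mapping of objective scores to MOS (common VQA form)."""
    # Clip the exponent: np.exp overflows float64 above ~709, which is exactly
    # the RuntimeWarning raised when (X - bayta3) / |bayta4| is large in magnitude.
    exponent = np.clip(-(X - bayta3) / np.abs(bayta4), -700.0, 700.0)
    logisticPart = 1.0 + np.exp(exponent)
    return bayta2 + (bayta1 - bayta2) / logisticPart
```

The clipping changes nothing numerically meaningful: beyond the clip bounds the logistic is already saturated at bayta1 or bayta2 to within float64 precision. Note the warning is usually a symptom, not the cause, of poor metrics; extreme X values fed into this fit (such as the 200-300 MOS predictions above) are what push the exponent out of range.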