How to add ID column to the output #2
If you would like to store additional data, I would suggest adding it to `input_column_names` instead, but there is no option to print these values during inference; we should perhaps consider adding this functionality in a future release. However, there might be an easier solution for your needs. The data loader reads the lines of the input file in the order they appear (an assumption I made based on the other open issue), so the lines of the original file should match the lines of the predicted output. You can open both files in a program like Excel and copy additional columns from one file to the other. On Linux, this can be done automatically with a one-liner, as illustrated here by appending the SMILES column for the Tox21 dataset:
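The original one-liner did not survive in this copy of the thread. A minimal sketch of the idea, with hypothetical filenames (`tox21.csv` for the original input, `preds.csv` for the predictions) and assuming the SMILES string is the first comma-separated column, could look like:

```shell
# Hypothetical filenames and column position. Assumes both files have a
# header row plus one line per molecule, in the same order.
# cut extracts the SMILES column; paste glues it onto the predictions.
paste -d',' <(cut -d',' -f1 tox21.csv) preds.csv > preds_with_smiles.csv
```

The process substitution `<(...)` requires bash; in a plain POSIX shell you would write the extracted column to a temporary file first.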
Of course, this only works if the rows of the two files match. If a splitter was used, this is no longer the case.
Is it also the case when using the following?
This is equivalent to not using a split, because the index splitter does not shuffle samples and 100% of the samples are kept in the "test" split. It is the perfect setup for the above-mentioned use case.
I'd like to have the ID column `mol_id`, which is present in my input file, in the output file after prediction. I tried to add `mol_id` to `target_column_names`, but this results in the following error:

When `mol_id` is an integer, the result is 0 and 1, like any other value, so I suppose it shouldn't be added to `target_column_names`. `input_column_names` are not in the output, so I don't know what part of the config I should modify. Thank you.