
fix: use proper default main eval metrics for text regression model #3602

Conversation

MattGPT-ai
Contributor

This fixes the default metrics used in the text regression model to be more appropriate for regression, rather than the classification default. The new defaults are the "correlation" and "pearson" metrics.

This is mostly a copy of #3538 but for the TextRegressor model

Also refactors variables to avoid type conflicts.
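For illustration only, here is a hedged sketch (not the actual flair source) of the pattern such a fix follows: the regression model declares a regression-appropriate default metric instead of inheriting the classification default. The class and property names below are assumptions.

from typing import Tuple

class RegressionModelSketch:
    # hypothetical property; the real attribute/property name in flair may differ
    @property
    def default_main_evaluation_metric(self) -> Tuple[str, str]:
        # regression models are scored by correlation rather than micro-avg F1
        return ("correlation", "pearson")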
@MattGPT-ai force-pushed the mattb.fix.proper-default-eval-metric-text-regression branch from 3cc131a to 7c302c6 on January 26, 2025 at 21:36.
@alanakbik
Collaborator

@MattGPT-ai thanks for fixing this! Tested with:

from flair.datasets import WASSA_FEAR
from flair.embeddings import TransformerDocumentEmbeddings
from flair.models import TextRegressor
from flair.trainers import ModelTrainer

# load regression dataset (downsampled for a quick test)
corpus = WASSA_FEAR().downsample(0.1)

# simple text regressor
model = TextRegressor(document_embeddings=TransformerDocumentEmbeddings("distilbert-base-uncased"),
                      label_name="class")

# train the model and be sure to pass the correct main_evaluation_metric
trainer = ModelTrainer(corpus=corpus, model=model)

trainer.fine_tune("resources/taggers/wassa-regression",
                  main_evaluation_metric=("correlation", "spearman"),
                  train_with_dev=True,
                  monitor_test=True,
                  max_epochs=2,
                  )

Only indirectly connected to this PR, but from a usability standpoint it might be nicer not to have to explicitly set the main_evaluation_metric in the trainer for regression models. Perhaps a future PR could add a check that, if no main_evaluation_metric is provided, takes the default metric from the Model.
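A minimal sketch of that suggested fallback (hypothetical helper, not current flair code; the attribute name default_main_evaluation_metric is an assumption) could look like this:

def resolve_main_evaluation_metric(model, main_evaluation_metric=None):
    # hypothetical trainer-side helper: prefer the user's explicit choice, otherwise
    # fall back to a default published by the model, then to the classification default
    if main_evaluation_metric is not None:
        return main_evaluation_metric
    return getattr(model, "default_main_evaluation_metric", ("micro avg", "f1-score"))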

@alanakbik merged commit 087e441 into flairNLP:master on Jan 27, 2025.
1 check passed