This repository was archived by the owner on Feb 15, 2025. It is now read-only.
According to the code (and assuming that STEPS = 1), I don't understand how the outputs change after the adaptation:
```python
def forward(self, x):
    if self.episodic:
        self.reset()
    for _ in range(self.steps):
        outputs = forward_and_adapt(x, self.model, self.optimizer)
    return outputs

@torch.enable_grad()  # ensure grads in possible no grad context for testing
def forward_and_adapt(x, model, optimizer):
    """Forward and adapt model on batch of data.

    Measure entropy of the model prediction, take gradients, and update params.
    """
    # forward
    outputs = model(x)
    # adapt
    loss = softmax_entropy(outputs).mean(0)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs
```
Judging by the code, you return the original outputs, yet they do change somehow. How?
I think in online TTA a given batch won't be fed into the network again for another forward pass, so the updated model is only used for the next batch. In some other scenarios, though, the current batch does get a new output.
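To make this concrete, here is a minimal, self-contained sketch of the `forward_and_adapt` logic above. It uses a plain `nn.Linear` instead of Tent's BN-parameter-only setup, but it shows the key point: `outputs` is computed *before* `optimizer.step()`, so the returned predictions reflect the old parameters, and the update only shows up in subsequent forward passes.

```python
import torch
import torch.nn as nn

def softmax_entropy(x):
    # per-sample entropy of the softmax prediction, as in Tent
    return -(x.softmax(1) * x.log_softmax(1)).sum(1)

@torch.enable_grad()
def forward_and_adapt(x, model, optimizer):
    outputs = model(x)                       # forward with the *current* params
    loss = softmax_entropy(outputs).mean(0)  # entropy minimization objective
    loss.backward()
    optimizer.step()                         # params change only after outputs exist
    optimizer.zero_grad()
    return outputs                           # pre-update outputs are returned

torch.manual_seed(0)
model = nn.Linear(4, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)

out1 = forward_and_adapt(x, model, optimizer)
with torch.no_grad():
    out2 = model(x)  # same batch, but now with the adapted params
# out1 differs from out2: the adaptation never touches the returned outputs;
# it only affects the next forward pass (here simulated by re-running the batch)
print(torch.allclose(out1, out2))
```

With `steps > 1` in `forward`, the loop re-runs `forward_and_adapt` on the same batch, so later iterations do forward the batch through the already-updated model, and the final `outputs` are adapted.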