
Add Tests for Evaluation Output #45

Open

Ueva opened this issue Dec 29, 2023 · 0 comments
Ueva commented Dec 29, 2023

Add some test cases to ensure that all of the evaluation methods we support actually produce the outputs we expect.

This could be done quite simply: simulate a few short episodes of interaction on a very simple MDP, both with and without skills, and check the reported results. See the existing run_agent test cases for inspiration.
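For example, something along these lines (a minimal sketch in pytest style; `TwoStateMDP`, `evaluate`, and the policy signature are hypothetical stand-ins for illustration, not this project's actual API):

```python
# A minimal sketch of the kind of test suggested above. Everything here
# (TwoStateMDP, evaluate, the policy signature) is a hypothetical stand-in,
# not the project's actual API.


class TwoStateMDP:
    """Deterministic two-state MDP: from state 0, action 1 reaches the
    terminal state 1 with reward +1; action 0 stays in state 0 with reward 0."""

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        if self.state == 0 and action == 1:
            self.state = 1
            return self.state, 1.0, True  # (next_state, reward, terminal)
        return self.state, 0.0, False


def evaluate(env, policy, num_episodes=10, max_steps=20):
    """Run num_episodes episodes under `policy` and return the list of
    undiscounted episode returns."""
    returns = []
    for _ in range(num_episodes):
        state = env.reset()
        episode_return, done, steps = 0.0, False, 0
        while not done and steps < max_steps:
            state, reward, done = env.step(policy(state))
            episode_return += reward
            steps += 1
        returns.append(episode_return)
    return returns


def test_evaluation_returns_known_optimal_value():
    env = TwoStateMDP()
    optimal_policy = lambda state: 1  # always take the rewarding action

    returns = evaluate(env, optimal_policy, num_episodes=5)

    # The MDP is deterministic, so every episode must return exactly 1.0.
    assert returns == [1.0] * 5
```

A second test could run the same episodes with a skill/option added to the action set and assert the evaluation output is unchanged, mirroring the with/without-skills split described above.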
