Conversation

@danielezhu (Contributor)

Description of changes:
Title

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

(test_case.original_model_output,),
(test_case.perturbed_model_output_1,),
(test_case.perturbed_model_output_2,),
(test_case.original_model_output, None),
@danielezhu (Contributor Author):

The model runner's predict method should return a two-element tuple.
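To illustrate the contract being tested here, a minimal sketch of a model-runner-like class whose predict method returns a two-element tuple of (model output, log probability). The class name and echo behavior are illustrative, not fmeval's actual implementation:

```python
from typing import Optional, Tuple

class DummyModelRunner:
    """Illustrative stand-in for a model runner (not the real fmeval class)."""

    def predict(self, prompt: str) -> Tuple[Optional[str], Optional[float]]:
        # A real runner would invoke a model endpoint here; this one just
        # echoes the prompt and reports no log probability.
        return (f"echo: {prompt}", None)

runner = DummyModelRunner()
output, log_prob = runner.predict("hello")
assert output == "echo: hello"
assert log_prob is None
```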

mock_get_results_path.return_value = "/path/to/results"
model_runner = Mock()

@pytest.mark.parametrize(

@danielezhu (Contributor Author):

All invalid input cases are handled by evaluate_dataset now.
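As a rough sketch of what centralizing validation means (names and checks are hypothetical, not fmeval's actual code): a shared evaluate_dataset helper rejects malformed input once, so individual algorithms no longer need their own checks.

```python
# Hypothetical sketch: one shared entry point validates the dataset,
# so per-algorithm invalid-input test cases become redundant.
def evaluate_dataset(dataset, required_columns):
    missing = [col for col in required_columns if col not in dataset[0]]
    if missing:
        raise ValueError(f"Dataset is missing required columns: {missing}")
    # ... run the evaluation over the dataset ...
    return len(dataset)

rows = [{"model_input": "Who wrote Hamlet?", "target_output": "Shakespeare"}]
assert evaluate_dataset(rows, ["model_input", "target_output"]) == 1
```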

),
],
)
def test_qa_accuracy_semantic_robustness_evaluate_sample_with_model_output(self, test_case):

@danielezhu (Contributor Author):

I removed model_output as an argument to evaluate_sample, to be consistent with evaluate. Since semantic robustness algos require a model and model inputs anyway, there's no need to make things more complicated by allowing users to first invoke their model to get the model output and then pass that output here. We should just get the model output ourselves, with their model.
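A minimal sketch of the signature change being described (the function body and parameter names are illustrative, not the PR's actual code): instead of accepting a precomputed model_output, evaluate_sample takes the model and invokes it itself.

```python
from unittest.mock import Mock

# Before (hypothetical): def evaluate_sample(model_input, model, model_output)
# After (sketch): the algorithm runs the model itself, consistent with evaluate.
def evaluate_sample(model_input, model):
    model_output, _log_prob = model.predict(model_input)
    # ... a real semantic robustness algo would perturb model_input,
    # re-run the model, and compare the outputs ...
    return model_output

model = Mock()
model.predict.return_value = ("Paris", None)
assert evaluate_sample("What is the capital of France?", model) == "Paris"
model.predict.assert_called_once_with("What is the capital of France?")
```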

user_provided_prompt_template: Optional[str]
dataset_prompt_template: str

@pytest.mark.parametrize(

@danielezhu (Contributor Author):

These are all test cases that don't actually validate the numerical values, but rather ensure that the correct function calls are made. evaluate_dataset now handles all of this logic, so we can get rid of these test cases. Notice how all of the scores are just 0.0, since we're mocking everything.
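To make the point concrete, a small hedged sketch (function and names are hypothetical, not fmeval's API) of why fully mocked tests can only verify call behavior: when the scorer is a Mock configured to return 0.0, the score is 0.0 by construction, so the only meaningful assertions are about which calls were made.

```python
from unittest.mock import Mock

# Hypothetical helper under test: run the model, then score its output.
def run_and_score(model_runner, model_input, scorer):
    output, _log_prob = model_runner.predict(model_input)
    return scorer(output)

# Both collaborators are mocked, so the returned score is whatever the
# mock was configured with (0.0); real numerical values are never exercised.
model_runner = Mock()
model_runner.predict.return_value = ("mocked output", None)
scorer = Mock(return_value=0.0)

score = run_and_score(model_runner, "What is 2 + 2?", scorer)
assert score == 0.0
model_runner.predict.assert_called_once_with("What is 2 + 2?")
scorer.assert_called_once_with("mocked output")
```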

malhotra18
malhotra18 previously approved these changes Mar 27, 2024
nathanng17
nathanng17 previously approved these changes Mar 27, 2024
@danielezhu danielezhu dismissed stale reviews from nathanng17 and malhotra18 via f2680fc March 27, 2024 17:42
@danielezhu danielezhu merged commit 1c99234 into aws:main Mar 27, 2024
@danielezhu danielezhu deleted the qa_sr branch March 27, 2024 18:07

4 participants