Added mode as an argument to Evals class to enable binary evals #119
Conversation
    query_identifier=eval_config.get(_TLM_EVAL_QUERY_IDENTIFIER_KEY),
    context_identifier=eval_config.get(_TLM_EVAL_CONTEXT_IDENTIFIER_KEY),
    response_identifier=eval_config.get(_TLM_EVAL_RESPONSE_IDENTIFIER_KEY),
    mode=eval_config.get("mode") or "numeric",  # Default to numeric if not specified
Why do we want this `or "numeric"` part? Isn't the mode always specified for our default built-in Evals?
I think we should make sure the default built-in Evals always specify the mode.
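One way to enforce that, sketched below (this is not the actual cleanlab-tlm code; `get_mode` is a hypothetical helper): use strict key access so a built-in Eval config that omits `"mode"` fails loudly instead of silently falling back to `"numeric"`.

```python
# Sketch: strict lookup for the mode of a built-in Eval config.
def get_mode(eval_config: dict) -> str:
    mode = eval_config["mode"]  # raises KeyError if a built-in Eval omits mode
    if mode not in ("numeric", "binary"):
        raise ValueError(f"unsupported eval mode: {mode!r}")
    return mode
```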
    Leave this value as None (the default) if this Eval doesn't consider the response.
    mode (str, optional): The evaluation mode, either "numeric" (default) or "binary".
        - "numeric": For evaluations that naturally have a continuous score range (e.g., helpfulness, coherence).
        - "binary": For yes/no evaluations (e.g., does response mention a company, is query appropriate).
Suggested change:
-       - "binary": For yes/no evaluations (e.g., does response mention a company, is query appropriate).
+       - "binary": For yes/no evaluations (e.g., does response mention a particular company or not).
    Both modes return numeric scores in the 0-1 range. For binary evaluations detecting issues,
    low scores typically correspond to "Yes" (issue detected) and high scores to "No" (issue not detected).
Suggested change:
-       Both modes return numeric scores in the 0-1 range. For binary evaluations detecting issues,
-       low scores typically correspond to "Yes" (issue detected) and high scores to "No" (issue not detected).
+       Both modes return numeric scores in the 0-1 range.
+       For numeric evaluations, your `criteria` should define what good vs. bad looks like (low evaluation scores will correspond to cases deemed bad).
+       For binary evaluations, your `criteria` should be a Yes/No question (low evaluation scores will correspond to "Yes" cases, so phrase your question such that the likelihood of "Yes" matches the likelihood of the particular problem you wish to detect).
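To illustrate the criteria phrasing the suggestion describes, here is a minimal sketch. The `Eval` class below is a simplified stand-in mirroring the fields visible in this PR's diff, not the real cleanlab-tlm class, and the example eval names and criteria are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Eval:
    # Simplified stand-in for cleanlab-tlm's Eval, with the fields from this PR.
    name: str
    criteria: str
    query_identifier: Optional[str] = None
    context_identifier: Optional[str] = None
    response_identifier: Optional[str] = None
    mode: str = "numeric"

# Numeric mode: criteria describe what good vs. bad looks like.
helpfulness = Eval(
    name="helpfulness",
    criteria="Assess how helpful the response is for the user's query.",
    response_identifier="response",
    mode="numeric",
)

# Binary mode: criteria is a Yes/No question, phrased so that "Yes"
# corresponds to the problem being detected (low scores mean "Yes").
mentions_competitor = Eval(
    name="mentions_competitor",
    criteria="Does the response mention a particular company or not?",
    response_identifier="response",
    mode="binary",
)
```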
    Eval(
        name=cast(str, eval_config[_TLM_EVAL_NAME_KEY]),
        criteria=cast(str, eval_config[_TLM_EVAL_CRITERIA_KEY]),
        query_identifier=eval_config.get(_TLM_EVAL_QUERY_IDENTIFIER_KEY),
Since you're using "numeric" and "binary" in many places, those should be defined as constants in constants.py:
_NUMERIC_STR = "numeric"
_BINARY_STR = "binary"
and then you should use those constants throughout.
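A sketch of that refactor (the `_VALID_EVAL_MODES` set and `validate_mode` helper are hypothetical additions, not part of the PR):

```python
# constants.py (sketch): define the mode strings once and reuse them.
_NUMERIC_STR = "numeric"
_BINARY_STR = "binary"
_VALID_EVAL_MODES = frozenset({_NUMERIC_STR, _BINARY_STR})

# Call sites then reference the constants instead of bare string literals,
# e.g. mode=eval_config.get("mode") or _NUMERIC_STR. A small validator:
def validate_mode(mode: str) -> str:
    if mode not in _VALID_EVAL_MODES:
        raise ValueError(f"mode must be one of {sorted(_VALID_EVAL_MODES)}, got {mode!r}")
    return mode
```

Centralizing the strings also gives typo'd mode values one obvious place to be caught.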
    query_identifier=eval_config.get("query_identifier"),
    context_identifier=eval_config.get("context_identifier"),
    response_identifier=eval_config.get("response_identifier"),
    mode=eval_config.get("mode") or "numeric",
Same as my comment at the top: why do we have a fallback mode for the default built-in Evals? The default built-in Evals should not be allowed to have mode unspecified.
jwmueller left a comment
just small suggestions
replaced by #130
PR to add "mode" to the Eval class in "cleanlab-tlm". This allows us to specify whether an Eval is binary or numeric.
This PR serves as an API design spec.