
Conversation

@mturk24 (Contributor) commented Sep 19, 2025:

PR to add "mode" to Eval class in "cleanlab-tlm". This allows us to specify whether an Eval is Binary or Numeric.

This PR serves as an API design spec
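For illustration, a minimal sketch of how the proposed argument might be used (the import path, eval names, and identifier values here are assumptions based on this PR's description, not the final merged API):

```python
from cleanlab_tlm.utils.rag import Eval  # import path assumed for illustration

# Binary eval: a yes/no question; low scores correspond to "Yes" cases.
mentions_competitor = Eval(
    name="mentions_competitor",  # hypothetical eval, for illustration only
    criteria="Does the response mention a competitor company?",
    response_identifier="Response",
    mode="binary",  # the new argument proposed in this PR
)

# Numeric eval: a continuous 0-1 quality score (the proposed default mode).
helpfulness = Eval(
    name="helpfulness",
    criteria="Assess how helpful the response is in addressing the user's query.",
    query_identifier="Query",
    response_identifier="Response",
    mode="numeric",
)
```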

@mturk24 changed the title from "Added mode to get_default_evals fn" to "Added mode as an argument to Evals class to enable binary evals" on Sep 25, 2025
```python
query_identifier=eval_config.get(_TLM_EVAL_QUERY_IDENTIFIER_KEY),
context_identifier=eval_config.get(_TLM_EVAL_CONTEXT_IDENTIFIER_KEY),
response_identifier=eval_config.get(_TLM_EVAL_RESPONSE_IDENTIFIER_KEY),
mode=eval_config.get("mode") or "numeric",  # Default to numeric if not specified
```
@jwmueller (Member) commented Sep 26, 2025:

Why do we want this `or "numeric"` part? Isn't the mode always specified for our default built-in Evals?
I think we should make sure the default built-in Evals always specify the mode.
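For instance, each built-in eval config entry could carry an explicit mode so no fallback is ever needed (a hypothetical config shape, mirroring the keys used elsewhere in this PR):

```python
# hypothetical container for the built-in eval configs
_DEFAULT_EVAL_CONFIGS = [
    {
        "name": "response_helpfulness",
        "criteria": "Assess whether the response is helpful to the user.",
        "response_identifier": "Response",
        "mode": "numeric",  # always set explicitly; no `or "numeric"` fallback
    },
    # ... every built-in entry specifies "mode" explicitly
]
```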

```
Leave this value as None (the default) if this Eval doesn't consider the response.
mode (str, optional): The evaluation mode, either "numeric" (default) or "binary".
    - "numeric": For evaluations that naturally have a continuous score range (e.g., helpfulness, coherence).
    - "binary": For yes/no evaluations (e.g., does response mention a company, is query appropriate).
```
Suggested change:

```diff
-    - "binary": For yes/no evaluations (e.g., does response mention a company, is query appropriate).
+    - "binary": For yes/no evaluations (e.g., does response mention a particular company or not).
```

Comment on lines +868 to +869:

```
Both modes return numeric scores in the 0-1 range. For binary evaluations detecting issues,
low scores typically correspond to "Yes" (issue detected) and high scores to "No" (issue not detected).
```

Suggested change:

```diff
-Both modes return numeric scores in the 0-1 range. For binary evaluations detecting issues,
-low scores typically correspond to "Yes" (issue detected) and high scores to "No" (issue not detected).
+Both modes return numeric scores in the 0-1 range.
+For numeric evaluations, your `criteria` should define what good vs. bad looks like (low evaluation scores will correspond to cases deemed bad).
+For binary evaluations, your `criteria` should be a Yes/No question (low evaluation scores will correspond to "Yes" cases, so phrase your question such that the likelihood of "Yes" matches the likelihood of the particular problem you wish to detect).
```
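To make that phrasing rule concrete, a hypothetical pair of criteria following it (not code from this PR):

```python
from cleanlab_tlm.utils.rag import Eval  # import path assumed for illustration

# Binary: phrased so "Yes" matches the problem we want to detect,
# since low scores correspond to "Yes" cases.
pii_leak = Eval(
    name="pii_leak",  # hypothetical eval, for illustration only
    criteria="Does the response reveal any personally identifiable information?",
    mode="binary",
)

# Numeric: criteria defines good vs. bad; low scores correspond to bad cases.
conciseness = Eval(
    name="conciseness",
    criteria="Assess whether the response is concise; verbose or rambling responses are bad.",
    mode="numeric",
)
```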

```python
Eval(
    name=cast(str, eval_config[_TLM_EVAL_NAME_KEY]),
    criteria=cast(str, eval_config[_TLM_EVAL_CRITERIA_KEY]),
    query_identifier=eval_config.get(_TLM_EVAL_QUERY_IDENTIFIER_KEY),
```
@jwmueller (Member) commented Sep 26, 2025:

Since you're using "numeric" and "binary" in many places, those should be defined variables in constants.py:

```python
_NUMERIC_STR = "numeric"
_BINARY_STR = "binary"
```

and then you should use those variables throughout.
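Applied to the call sites quoted in this review, the refactor might look roughly like this (sketch):

```python
# constants.py
_NUMERIC_STR = "numeric"
_BINARY_STR = "binary"

# then at each call site, e.g.:
mode=eval_config.get("mode") or _NUMERIC_STR,
```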

query_identifier=eval_config.get("query_identifier"),
context_identifier=eval_config.get("context_identifier"),
response_identifier=eval_config.get("response_identifier"),
mode=eval_config.get("mode") or "numeric",
@jwmueller (Member) commented Sep 26, 2025:

Same as my comment at the top: why do we have a fallback mode for the default built-in Evals? The default built-in Evals should not be allowed to leave mode unspecified.
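One way to enforce that (a hypothetical sketch, not code from this PR) is to fail loudly rather than silently falling back:

```python
def _mode_from_config(eval_config: dict) -> str:
    """Hypothetical helper: require built-in eval configs to specify mode."""
    mode = eval_config.get("mode")
    if mode is None:
        raise ValueError(f"Built-in Eval {eval_config['name']!r} must specify a mode.")
    return mode
```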

@jwmueller (Member) left a comment:

Just small suggestions.

@jwmueller marked this pull request as draft on September 26, 2025 19:23
@jwmueller (Member) commented:

replaced by #130

@jwmueller closed this on Nov 18, 2025