langcheck.metrics.text_structure

langcheck.metrics.text_structure.contains_all_strings(generated_outputs: List[str] | str, strings: List[str], case_sensitive: bool = False, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs contain all strings in a given list. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • strings – A list of strings to match

  • case_sensitive – Whether to match case-sensitively. Defaults to False.

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
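
Example (a minimal usage sketch; reading per-output scores through the metric_values attribute, and the exact printed output, are assumptions based on the return type above):

  >>> from langcheck.metrics.text_structure import contains_all_strings
  >>> result = contains_all_strings(["Hello, world!"], ["hello", "world"])
  >>> result.metric_values  # 1: both strings found (matching is case-insensitive by default)
  [1]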

langcheck.metrics.text_structure.contains_any_strings(generated_outputs: List[str] | str, strings: List[str], case_sensitive: bool = False, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs contain any of the strings in a given list. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • strings – A list of strings to match

  • case_sensitive – Whether to match case-sensitively. Defaults to False.

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
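
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import contains_any_strings
  >>> result = contains_any_strings(["Goodbye, world!"], ["hello", "world"])
  >>> result.metric_values  # 1: "world" is found even though "hello" is not
  [1]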

langcheck.metrics.text_structure.contains_regex(generated_outputs: List[str] | str, regex: str, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs contain a match for a given regular expression (the pattern may match any substring of the output). This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • regex – The regular expression to match

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
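
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import contains_regex
  >>> result = contains_regex(["The total is 42 dollars.", "No numbers here."], r"\d+")
  >>> result.metric_values  # 1 where the pattern matches somewhere in the output
  [1, 0]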

langcheck.metrics.text_structure.is_float(generated_outputs: List[str] | str, min: float | None = None, max: float | None = None, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs can be parsed as floating-point numbers, optionally within a min/max range. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • min – The optional minimum valid float

  • max – The optional maximum valid float

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
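
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import is_float
  >>> result = is_float(["3.14", "not a number"], min=0, max=10)
  >>> result.metric_values  # "3.14" parses as a float within [0, 10]; the second output does not parse
  [1, 0]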

langcheck.metrics.text_structure.is_int(generated_outputs: List[str] | str, domain: Iterable[int] | Container[int] | None = None, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs can be parsed as integers, optionally within a domain of integers like range(1, 11) or {1, 3, 5}. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • domain – The optional domain of valid integers

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
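
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import is_int
  >>> result = is_int(["7", "3.5"], domain=range(1, 11))
  >>> result.metric_values  # "7" is an integer in range(1, 11); "3.5" is not an integer
  [1, 0]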

langcheck.metrics.text_structure.is_json_array(generated_outputs: List[str] | str, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs can be parsed as JSON arrays. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
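
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import is_json_array
  >>> result = is_json_array(['[1, 2, 3]', '{"a": 1}'])
  >>> result.metric_values  # the second output is valid JSON, but not an array
  [1, 0]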

langcheck.metrics.text_structure.is_json_object(generated_outputs: List[str] | str, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs can be parsed as JSON objects. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
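
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above):

  >>> from langcheck.metrics.text_structure import is_json_object
  >>> result = is_json_object(['{"a": 1}', '[1, 2, 3]'])
  >>> result.metric_values  # the second output is valid JSON, but not an object
  [1, 0]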

langcheck.metrics.text_structure.matches_regex(generated_outputs: List[str] | str, regex: str, prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs fully match a given regular expression. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • regex – The regular expression to match

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
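
Example (a minimal usage sketch; metric_values and the printed scores are assumed, as above). Unlike contains_regex, the pattern must match the entire output:

  >>> from langcheck.metrics.text_structure import matches_regex
  >>> result = matches_regex(["2024-01-31", "on 2024-01-31"], r"\d{4}-\d{2}-\d{2}")
  >>> result.metric_values  # the second output only partially matches, so it scores 0
  [1, 0]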

langcheck.metrics.text_structure.validation_fn(generated_outputs: List[str] | str, valid_fn: Callable[[str], bool], prompts: List[str] | str | None = None) → MetricValue[int]

Checks if generated outputs are valid according to an arbitrary function. This metric takes on binary 0 or 1 values.

Parameters:
  • generated_outputs – The model-generated output(s) to evaluate

  • valid_fn – A function that takes a single string and returns a bool indicating whether the string is valid. Instead of returning False, the function may also raise an exception on invalid input.

  • prompts – The prompts used to generate the output(s). Prompts are optional metadata and are not used to calculate the metric.

Returns:

A MetricValue object
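
Example (a minimal usage sketch; metric_values and the printed scores are assumed, and an exception raised by the validation function is assumed to count as invalid, per the description above):

  >>> import json
  >>> from langcheck.metrics.text_structure import validation_fn
  >>> def is_valid_json(output: str) -> bool:
  ...     json.loads(output)  # raises json.JSONDecodeError on invalid JSON
  ...     return True
  >>> result = validation_fn(['{"a": 1}', "not json"], is_valid_json)
  >>> result.metric_values
  [1, 0]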