Class PairwiseStringEvalChain

A chain for comparing two outputs, such as the outputs of two models, prompts, or outputs of a single model on similar inputs, with labeled preferences.

Properties

  Optional criterionName
  Optional evaluationName
  Optional llm
  Optional memory
  Optional skipReferenceWarning
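As a quick orientation, here is a minimal sketch of driving this evaluator through loadEvaluator from langchain/evaluation. The "labeled_pairwise_string" evaluator type, the "correctness" criterion, and the result shape are assumptions drawn from the library's built-in evaluators, not from this page:

  import { loadEvaluator } from "langchain/evaluation";

  // Compare two candidate outputs against a labeled reference answer.
  const evaluator = await loadEvaluator("labeled_pairwise_string", {
    criteria: "correctness",
  });

  const res = await evaluator.evaluateStringPairs({
    input: "How many dogs are in the park?",
    prediction: "There are three dogs.",
    predictionB: "Four.",
    reference: "Four dogs.",
  });

  // The result typically includes the preferred output ("A" or "B"),
  // a numeric score, and the model's reasoning.
  console.log(res);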
Methods

apply

  Call the chain on all inputs in the list.

  Deprecated. Use .batch() instead. Will be removed in 0.2.0.

  Parameters
    Optional config: any[]
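Since apply is deprecated, here is a sketch of the suggested .batch() replacement, reusing the evaluator from the sketch above. The input keys are assumptions and must match the chain's prompt variables:

  // batch() runs the chain over a list of inputs, replacing apply().
  const results = await evaluator.batch([
    { input: "2 + 2?", prediction: "4", predictionB: "5", reference: "4" },
    { input: "3 + 3?", prediction: "6", predictionB: "7", reference: "6" },
  ]);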
checkEvaluationArgs

  Check if the evaluation arguments are valid.

  Parameters
    Optional reference: string - The reference label.
    Optional input: string - The input string.

  Throws
    If the evaluator requires an input string but none is provided, or if the evaluator requires a reference label but none is provided.
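A sketch of the guard in use; the method name follows the reconstruction above and should be treated as an assumption:

  try {
    // Passing neither a reference nor an input; throws if either is required.
    evaluator.checkEvaluationArgs(undefined, undefined);
  } catch (e) {
    console.error("Invalid evaluation arguments:", e);
  }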
evaluateStringPairs

  Evaluate the output string pairs.

  Parameters
    Optional callOptions: unknown
    Optional config: any

  Returns
    A dictionary containing the preference, scores, and/or other information.
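A minimal sketch of the call on an unlabeled pairwise evaluator. The shape of the first argument (input, prediction, predictionB) and the fields on the result are assumptions from the evaluator's standard usage, since they are not spelled out above:

  import { loadEvaluator } from "langchain/evaluation";

  const pairwise = await loadEvaluator("pairwise_string", {
    criteria: "conciseness",
  });

  const graded = await pairwise.evaluateStringPairs({
    input: "Explain static typing in one sentence.",
    prediction: "Static typing checks types at compile time.",
    predictionB: "Types are checked while the program runs, at runtime, dynamically.",
  });

  // Typically: value is "A" or "B", score is 1 or 0, reasoning is free text.
  console.log(graded.value, graded.score, graded.reasoning);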
invoke

  Invoke the chain with the provided input and return the output.

  Parameters
    input - Input values for the chain run.
    Optional config: any - Optional configuration for the Runnable.

  Returns
    Promise that resolves with the output of the chain run.
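A sketch of invoke with a RunnableConfig, reusing the pairwise evaluator from the previous sketch. The input keys are again an assumption and must match the chain's prompt variables:

  const output = await pairwise.invoke(
    {
      input: "What is addition?",
      prediction: "Addition is a mathematical operation.",
      predictionB: "Addition combines two numbers into their sum.",
    },
    // Optional RunnableConfig: tags and metadata are attached to the run.
    { tags: ["pairwise-eval"], metadata: { source: "docs-example" } }
  );
  console.log(output);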
predict

  Format prompt with values and pass to LLM.

  Parameters
    values - keys to pass to prompt template
    Optional callbackManager: any - CallbackManager to use

  Returns
    Completion from LLM.

  Example
    llm.predict({ adjective: "funny" })
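For context, a self-contained sketch of predict on a plain LLMChain; the prompt, model, and import paths are illustrative assumptions, not part of this class's docs:

  import { OpenAI } from "@langchain/openai";
  import { LLMChain } from "langchain/chains";
  import { PromptTemplate } from "@langchain/core/prompts";

  const prompt = PromptTemplate.fromTemplate("Tell me a {adjective} joke.");
  const llm = new LLMChain({ llm: new OpenAI({ temperature: 0.9 }), prompt });

  // predict() formats the prompt with the given values and returns the completion text.
  const joke = await llm.predict({ adjective: "funny" });
  console.log(joke);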
Static deserialize

Static fromLLM

  Create a new instance of the PairwiseStringEvalChain.

  Parameters
    llm
    Optional criteria: CriteriaLike - The criteria to use for evaluation.
    Optional chainOptions: Partial<Omit<LLMEvalChainInput<EvalOutputType, BaseLanguageModelInterface>, "llm">> - Options to pass to the chain.
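A sketch of constructing the chain via fromLLM. The import paths, the model, and the "helpfulness" criterion key are assumptions; await is harmless here whether or not fromLLM is async:

  import { ChatOpenAI } from "@langchain/openai";
  import { PairwiseStringEvalChain } from "langchain/evaluation";

  const model = new ChatOpenAI({ temperature: 0 });

  // "helpfulness" is assumed to be one of the built-in criteria keys;
  // chainOptions is omitted here, so the default prompt is used.
  const chain = await PairwiseStringEvalChain.fromLLM(model, "helpfulness");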
Static resolve…

  Parameters
    Optional criteria: CriteriaLike

Static resolve…