
llm_classify_binary

Classify input as true or false using an LLM.

llm_classify_binary(prompt: str, input, fallback=None) -> bool

Sends a prompt and input to an LLM and returns a boolean classification. Useful for yes/no decisions based on conversation content — detecting intent, sentiment, threats, or specific patterns.


Parameters

  • prompt (str): The classification instruction (e.g., "Is the customer asking about returns?").
  • input (any): The text to classify. If it is not a string (e.g., a list of messages or a dict), it is automatically serialized to JSON.
  • fallback (any, optional): Value to return if the API call fails. If not provided, exceptions are raised.
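
The JSON auto-conversion described for input can be pictured as a small normalization step. The sketch below is illustrative only; normalize_input is a hypothetical name, not part of the actual helper.

```python
import json

def normalize_input(value):
    # Mirrors the documented behavior: strings pass through unchanged,
    # anything else (dict, list, etc.) is serialized to JSON first.
    if isinstance(value, str):
        return value
    return json.dumps(value)
```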

Returns

bool: True or False, based on the LLM's classification.

Error handling

  • If fallback is provided, it is returned on any API error.
  • If fallback is not provided (default), exceptions propagate — your action will fail.
  • Set a fallback when you want your action to continue even if classification fails.

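The fallback rules above can be sketched in plain Python. Both functions here are hypothetical stand-ins (unreliable_classify simulates a failing API call; classify_with_fallback mirrors the documented behavior), not the real implementation.

```python
def unreliable_classify(prompt: str, text: str) -> bool:
    # Stand-in for the real LLM call; raises to simulate an API failure.
    raise ConnectionError("LLM API unavailable")

def classify_with_fallback(prompt, text, fallback=None):
    # Mirrors the documented error handling: return the fallback
    # on any API error if one was given, otherwise re-raise.
    try:
        return unreliable_classify(prompt, text)
    except Exception:
        if fallback is not None:
            return fallback
        raise
```

With fallback=False the call degrades to a safe "no" when the API is down; without a fallback, the exception propagates and the action fails.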
Examples

Detect a chargeback threat

def execute_action(context):
    messages = context["conversation"]["messages"]

    is_threat = llm_classify_binary(
        "Has the customer explicitly threatened a chargeback, bank dispute, "
        "or legal action? Frustration alone does not count.",
        messages,
        fallback=False,
    )

    if is_threat:
        # Immediately process refund to avoid chargeback
        process_full_refund(context["args"]["orderId"])
        add_conversation_tag(context, "chargeback-threat")
        return {"action": "refunded", "reason": "chargeback_threat"}

    return {"action": "continue_normal_flow"}

Check if a message is about a specific topic

def execute_action(context):
    last_message = context["conversation"]["messages"][-1]["content"]

    is_return_request = llm_classify_binary(
        "Is the customer asking to return or exchange a product?",
        last_message,
    )

    return {"isReturnRequest": is_return_request}

Classify with a structured input

def execute_action(context):
    order = fetch_order(context["args"]["orderId"])

    is_suspicious = llm_classify_binary(
        "Does this order look suspicious? Check for mismatched "
        "shipping/billing addresses, unusually high quantities, "
        "or known fraud patterns.",
        order,  # dict is auto-converted to JSON
        fallback=False,
    )

    return {"suspicious": is_suspicious, "order": order}
