Pack llmpl -- prolog/llm.pl

This module exposes the predicate llm/2, which posts a user prompt to an HTTP-based large language model (LLM) API and unifies the model's response with the second argument.

Configuration is provided entirely through environment variables, so the same library can be reused with different APIs:

  • LLM_API_URL – required LLM endpoint accepting POST requests.
  • LLM_API_KEY – secret used to build a bearer token.
  • LLM_MODEL – (optional) model identifier, defaults to "gpt-4o-mini".
  • LLM_API_TIMEOUT – (optional) request timeout in seconds, defaults to 60.
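
The variables can be exported in the shell before starting SWI-Prolog, or set from Prolog itself with setenv/2. A minimal sketch (the endpoint URL and the `sk-...` key below are placeholders, not values shipped with the pack):

```prolog
% Configure the library from within Prolog; equivalent to exporting
% the variables in the shell before launching swipl.
:- setenv('LLM_API_URL', 'https://api.openai.com/v1/chat/completions'),
   setenv('LLM_API_KEY', 'sk-...'),       % placeholder secret
   setenv('LLM_MODEL', 'gpt-4o-mini'),    % optional; this is the default
   setenv('LLM_API_TIMEOUT', '60').       % optional; seconds
```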

The library assumes an OpenAI-compatible request payload and response format. To target a different API, adjust llm_request_body/2 or llm_extract_text/2.
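
The exact clauses of llm_request_body/2 and llm_extract_text/2 are internal to the pack; the following is a hypothetical sketch of what OpenAI-compatible versions might look like, assuming the body is an SWI-Prolog dict that is posted as JSON and the reply is parsed back into a dict:

```prolog
:- use_module(library(http/json)).

% Hypothetical sketch: build an OpenAI-style chat payload from a prompt.
llm_request_body(Prompt,
                 _{model: Model,
                   messages: [_{role: user, content: Prompt}]}) :-
    (   getenv('LLM_MODEL', Model)
    ->  true
    ;   Model = 'gpt-4o-mini'          % documented default
    ).

% Hypothetical sketch: extract the assistant's text from an
% OpenAI-style response dict (first element of "choices").
llm_extract_text(Response, Text) :-
    [Choice|_] = Response.get(choices),
    Text = Choice.get(message).get(content).
```

An adapter for a different provider would keep the same two-argument shape and only change the dict layout on each side.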

 llm(+Input, -Output) is det
Send Input as a prompt to the configured LLM endpoint and unify Output with the assistant's response text.
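
A typical session, assuming the environment variables above are set and that the pack follows the usual convention of making prolog/llm.pl loadable as library(llm):

```prolog
?- use_module(library(llm)).
true.

?- llm("What is the capital of France?", Answer),
   writeln(Answer).
```

The response text printed depends on the configured model and endpoint.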