Packs (add-ons) for SWI-Prolog

Package "llmpl"

Title:          Prolog library for interfacing with large language models.
Rating:         Not rated.
Latest version: 0.1.0
SHA1 sum:       690ecb2a7a028e0c74ed8a8ee24f54abf92e05dd
Author:         Evangelos Lamprou <vagos@lamprou.xyz>
Home page:      https://github.com/vagos/llmpl
Download URL:   https://github.com/vagos/llmpl/releases/*.zip

Reviews

No reviews.

Details by download location

Version  SHA1                                      #Downloads  URL
0.1.0    690ecb2a7a028e0c74ed8a8ee24f54abf92e05dd  2           https://github.com/vagos/llmpl.git

llmpl

Use LLMs inside Prolog!

llmpl is a minimal SWI-Prolog helper that exposes llm/2. The predicate posts a prompt to an HTTP LLM endpoint and unifies the model's response text with the second argument.

The library currently supports any OpenAI-compatible chat/completions endpoint.
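
The pack's source is not reproduced on this page, but a minimal sketch of what such a predicate could look like, built on SWI-Prolog's standard HTTP and JSON libraries, is shown below. The predicate name llm_sketch/2 and all request-handling details are illustrative assumptions, not the pack's actual implementation.

:- use_module(library(http/http_client)).  % http_post/4
:- use_module(library(http/http_json)).    % JSON request/reply support

% Illustrative only: post Prompt to an OpenAI-compatible
% chat/completions endpoint and extract the first choice's text.
llm_sketch(Prompt, Output) :-
    getenv('LLM_API_URL', URL),
    getenv('LLM_API_KEY', Key),
    (   getenv('LLM_MODEL', Model)
    ->  true
    ;   Model = 'gpt-4o-mini'
    ),
    format(atom(Auth), 'Bearer ~w', [Key]),
    % LLM_API_TIMEOUT could be threaded through here as
    % http_open's timeout(Seconds) option.
    http_post(URL,
              json(_{model: Model,
                     messages: [_{role: user, content: Prompt}]}),
              Reply,
              [ request_header('Authorization' = Auth),
                json_object(dict)
              ]),
    get_dict(choices, Reply, [Choice|_]),
    get_dict(message, Choice, Message),
    get_dict(content, Message, Output).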

Installation

?- pack_install(llmpl).

Configuration

Set the following environment variables:

Variable          Description
LLM_API_URL       The chat/completions endpoint that accepts POST requests.
LLM_API_KEY       Secret that will be sent as a bearer token.
LLM_MODEL         Optional model name (defaults to `gpt-4o-mini`).
LLM_API_TIMEOUT   Optional request timeout in seconds (defaults to 60).
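
For example, a .env file for an OpenAI-hosted model might look like the following (placeholder values; any OpenAI-compatible endpoint works):

# Example .env -- placeholder values, substitute your own
LLM_API_URL=https://api.openai.com/v1/chat/completions
LLM_API_KEY=sk-your-key-here
LLM_MODEL=gpt-4o-mini
LLM_API_TIMEOUT=60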

Usage

# Fill in .env with your settings
set -a && source .env && set +a
swipl
?- [prolog/llm].
?- llm("Say hello in French.", Output).
Output = "Bonjour !".

?- llm(Prompt, "Dog").
Prompt = "What animal is man's best friend?",
...

Reverse prompts

If you call llm/2 with an unbound first argument and a concrete response, the library first asks the LLM to suggest a prompt that would (ideally) produce that response, binds it to your variable, and then sends a second request that wraps the suggested prompt in a hard constraint ("answer only with ..."). This costs two API calls and is still best-effort; the model may ignore the constraint, in which case the predicate simply fails.
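
As a sketch of that two-call flow, reusing the hypothetical llm_sketch/2 from above (the wording of the meta-prompt is invented for illustration; this is not the pack's actual code):

% Illustrative reverse-prompt flow.
reverse_llm(Prompt, Response) :-
    var(Prompt),
    % Call 1: ask the model for a prompt likely to yield Response.
    format(string(Meta),
           "Suggest a prompt whose ideal answer is exactly: ~w",
           [Response]),
    llm_sketch(Meta, Prompt),
    % Call 2: resend the suggested prompt under a hard constraint.
    format(string(Constrained),
           "~w~nAnswer only with: ~w",
           [Prompt, Response]),
    llm_sketch(Constrained, Answer),
    % Fail, as llm/2 does, if the model ignored the constraint.
    Answer == Response.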

Contents of pack "llmpl"

Pack contains 3 files holding a total of 7.1K bytes.