Liminal is a work in progress. The documentation may not reflect the current implementation.
Liminal is an Effect-based library for composing conversation trees with language models.
- Documentation → Usage guide intended for human readers.
- Examples → Examples illustrating common use cases.
- llms.txt → Chunks of truth to be fed into LLMs.
An effect is a conversation.
example.ts

```ts
import { FileSystem } from "@effect/platform"
import { Effect } from "effect"
import { L } from "liminal"

const conversation = Effect.gen(function*() {
  // Set the system instruction.
  yield* L.system`You are an expert TypeScript developer.`

  // Get a file system service.
  const fs = yield* FileSystem.FileSystem

  // Append messages.
  yield* L.user`
    It seems as though a new mental model may arise for
    LLM conversation state management.
    What are your thoughts on the following DX?
    ${fs.readFileString("example.ts")}
  `

  // Infer and append the assistant message.
  const answer = yield* L.assistant
  answer satisfies string

  // List all messages.
  const messages = yield* L.messages

  // Clear all messages.
  yield* L.clear

  // Re-append the messages.
  yield* L.append(...messages)
}).pipe(
  L.thread,
)
```
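To actually run `conversation`, its requirements have to be provided. The following is a minimal sketch, not part of Liminal's documented API: it supplies a Node `FileSystem` implementation via `@effect/platform-node`, while the model/provider configuration that `L.assistant` presumably needs is left as a hypothetical, commented-out `ModelLayer`.

```ts
import { NodeContext, NodeRuntime } from "@effect/platform-node"
import { Effect } from "effect"

conversation.pipe(
  // Provide a real FileSystem implementation for Node.
  Effect.provide(NodeContext.layer),
  // Hypothetical: provide whatever model/provider configuration Liminal expects.
  // Effect.provide(ModelLayer),
  NodeRuntime.runMain,
)
```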
- Clone and build Liminal.

  ```sh
  git clone git@github.com:harrysolovay/liminal.git
  cd liminal
  bun i
  bun run build
  ```

- Configure any environment variables used by the example (see the sketch after this list).

- Run the example script.

  ```sh
  bun examples/<path-to-example>
  ```
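Which variables are required depends on the example and on the model provider it talks to. The line below is purely a hypothetical illustration, assuming an example that reads a provider key named `OPENAI_API_KEY` from the environment; substitute whatever the chosen example actually expects.

```sh
# Hypothetical: export whichever key the chosen example expects.
export OPENAI_API_KEY="..."
```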
Please ensure you adhere to our code of conduct when interacting in this repository.
Contributions are welcome and appreciated! Check out the contributing guide before you dive in.
Liminal is Apache-licensed.