dt.llm User Guide

dt.llm is a simple Datatailr wrapper around LangChain ChatOpenAI. Use it exactly like a LangChain chat model, with minimal setup.

Configure an LLM on a DT installation

Before using dt.llm in Python, create and start an LLM from the Job Scheduler:

  1. Go to Job Scheduler -> LLMs
  2. Click the New LLM button (top right corner of the window)
  3. Configure the general parameters as needed
  4. If your model is not in the models dropdown, enter a custom model name from Hugging Face
  5. Select the GPU machine on which your LLM will run
  6. Click Save
  7. Click Start

After starting, you can also Restart or Stop the LLM from the same UI.
The LLM job is visible in the Active Jobs section.

Quick Start

from dt.llm import LLM

# Connect to a running DT LLM by name and use it like any LangChain chat model.
llm = LLM("qwen-coder-medium", temperature=0.0)
response = llm.invoke("Write a Python function for fibonacci up to 1000.")
print(response.content)

LangChain Compatibility

dt.llm wraps LangChain ChatOpenAI, so all standard LangChain chat model features are supported.

  • Use familiar methods like invoke, stream, batch, and ainvoke (see the sketch after this list)
  • Pass standard ChatOpenAI arguments such as temperature, timeout, and max_retries
  • Access the underlying client through llm.client if needed
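For example, streaming and batching work the same way they do on any LangChain chat model. A minimal sketch, reusing the qwen-coder-medium model from the Quick Start:

from dt.llm import LLM

llm = LLM("qwen-coder-medium", temperature=0.0)

# Stream tokens as they are generated.
for chunk in llm.stream("Explain Python generators in two sentences."):
    print(chunk.content, end="", flush=True)
print()

# Send several prompts in one call; one response message per prompt.
for message in llm.batch(["What is a tuple?", "What is a set?"]):
    print(message.content)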

LangChain docs: https://python.langchain.com/docs/

Constructor

LLM(
    name: str,
    *,
    scheme: str = "http",
    api_key: str = "sk-local",
    **kwargs
)
  • name: Datatailr LLM name (required)
  • scheme: http or https (default http)
  • api_key: API key value (default sk-local)
  • **kwargs: forwarded to LangChain ChatOpenAI
    • Common examples: temperature, timeout, max_retries
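For example, to point the client at an HTTPS endpoint (the values shown are illustrative):

from dt.llm import LLM

llm = LLM(
    "qwen-coder-medium",
    scheme="https",      # use https if your DT LLM endpoint is served over TLS
    api_key="sk-local",  # the default, shown explicitly here
    temperature=0.1,     # forwarded to ChatOpenAI
)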

Common Usage

from dt.llm import LLM

llm = LLM(
    "qwen-coder-medium",
    temperature=0.2,
    timeout=60,
    max_retries=2,
)

print("Model:", llm.name)
print("Endpoint:", llm.endpoint)

answer = llm.invoke("Give me 3 tips for writing clean Python code.")
print(answer.content)

Optional: Direct LangChain client access

client = llm.client
resp = client.invoke("Summarize Python decorators in 5 lines.")
print(resp.content)
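Because the underlying client is a standard LangChain Runnable, it also composes with LangChain Expression Language (LCEL). A minimal sketch, assuming langchain_core is available in your environment:

from langchain_core.prompts import ChatPromptTemplate
from dt.llm import LLM

llm = LLM("qwen-coder-medium")
prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")

# The wrapped ChatOpenAI client is a LangChain Runnable, so it pipes with LCEL.
chain = prompt | llm.client
print(chain.invoke({"text": "LangChain composes prompts and models."}).content)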

Configure Continue Extension in DT IDE

Use the Continue VS Code extension with your DT LLMs by adding model entries in:

  • .continue/config.yaml

Start with this template:

models:
  - name: qwen-coder-medium
    provider: openai
    model: qwen-coder-medium
    apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
    apiKey: sk-local

# Optional: choose default models used by Continue features
tabAutocompleteModel:
  name: qwen-coder-medium
  provider: openai
  model: qwen-coder-medium
  apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
  apiKey: sk-local

defaultModel: qwen-coder-medium
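Replace <dt-llm-host> and <dt-llm-port> with the host and port of your running DT LLM; the llm.endpoint value shown in Common Usage above is one way to find them.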

To add more DT LLMs, copy one model block and change:

  • name
  • model
  • apiBase

One model block per DT LLM is the easiest setup.
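For example, a config with two DT LLMs might look like this (the second model's name and host are illustrative placeholders):

models:
  - name: qwen-coder-medium
    provider: openai
    model: qwen-coder-medium
    apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
    apiKey: sk-local
  - name: my-second-llm        # illustrative; use your DT LLM's name
    provider: openai
    model: my-second-llm
    apiBase: http://<other-llm-host>:<other-llm-port>/v1
    apiKey: sk-local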

Continue docs: https://docs.continue.dev/

Errors and fixes

  • LLMNotFound
    • The LLM name is not available in your environment. Verify that the name matches an LLM configured under Job Scheduler -> LLMs and that the LLM has been started.
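A minimal sketch of handling this error, assuming LLMNotFound is importable from dt.llm (the exact import path is an assumption):

from dt.llm import LLM, LLMNotFound  # LLMNotFound import path is an assumption

try:
    llm = LLM("qwen-coder-medium")
except LLMNotFound:
    # The name must match an LLM that is configured and started
    # under Job Scheduler -> LLMs.
    print("LLM not found; check the name under Job Scheduler -> LLMs.")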

Minimal helper function

from dt.llm import LLM

def ask(prompt: str) -> str:
    # For many calls, construct the LLM once outside the function and reuse it.
    llm = LLM("qwen-coder-medium", temperature=0.0)
    return llm.invoke(prompt).content

if __name__ == "__main__":
    print(ask("Explain list comprehension in Python with one example."))