dt.llm User Guide
dt.llm is a simple Datatailr wrapper around LangChain ChatOpenAI. Use it exactly like a LangChain chat model, with minimal setup.
Configure an LLM on a DT installation
Before using dt.llm in Python, create and start an LLM from the Job Scheduler:
- Go to Job Scheduler -> LLMs.
- Click the New LLM button (top right corner of the window).
- Configure the general parameters as required.
- If your model is not in the models dropdown, enter its custom name from HuggingFace.
- Select the GPU machine on which your LLM will run.
- Click Save.
- Click Start.
After starting, you can also Restart or Stop the LLM from the same UI.
The LLM job is visible in the Active Jobs section.
Quick Start
from dt.llm import LLM
llm = LLM("qwen-coder-medium", temperature=0.0)
response = llm.invoke("Write a Python function for fibonacci up to 1000.")
print(response.content)

LangChain Compatibility
dt.llm wraps LangChain ChatOpenAI, so all standard LangChain chat model features are supported.
- Use familiar methods like invoke, stream, batch, and ainvoke
- Pass standard ChatOpenAI arguments such as temperature, timeout, and max_retries
- Access the underlying client through llm.client if needed
Constructor
LLM(
name: str,
*,
scheme: str = "http",
api_key: str = "sk-local",
**kwargs
)

- name: Datatailr LLM name (required)
- scheme: http or https (default http)
- api_key: API key value (default sk-local)
- **kwargs: forwarded to LangChain ChatOpenAI; common examples: temperature, timeout, max_retries
Common Usage
from dt.llm import LLM
llm = LLM(
"qwen-coder-medium",
temperature=0.2,
timeout=60,
max_retries=2,
)
print("Model:", llm.name)
print("Endpoint:", llm.endpoint)
answer = llm.invoke("Give me 3 tips for writing clean Python code.")
print(answer.content)

Optional: Direct LangChain client access
client = llm.client
resp = client.invoke("Summarize Python decorators in 5 lines.")
print(resp.content)

Configure Continue Extension in DT IDE
Use the Continue VS Code extension with your DT LLMs by adding model entries in:
.continue/config.yaml
Start with this template:
models:
- name: qwen-coder-medium
provider: openai
model: qwen-coder-medium
apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
apiKey: sk-local
# Optional: choose default models used by Continue features
tabAutocompleteModel:
name: qwen-coder-medium
provider: openai
model: qwen-coder-medium
apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
apiKey: sk-local
defaultModel: qwen-coder-medium

To add more DT LLMs, copy one model block and change:
- name
- model
- apiBase
One model block per DT LLM is the easiest setup.
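To illustrate, here is a models list with a second, hypothetical DT LLM added; llama-chat-small is a placeholder name, and the host/port placeholders are the same as above:

```yaml
models:
  - name: qwen-coder-medium
    provider: openai
    model: qwen-coder-medium
    apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
    apiKey: sk-local
  - name: llama-chat-small
    provider: openai
    model: llama-chat-small
    apiBase: http://<dt-llm-host>:<dt-llm-port>/v1
    apiKey: sk-local
```

Note that each DT LLM typically listens on its own host/port, so the apiBase usually differs between blocks.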
Errors and fixes
- LLMNotFound: the requested model name is not available in your environment. Check the name against the LLMs list in the Job Scheduler and make sure the LLM is started, then try again.
Minimal helper function
from dt.llm import LLM
def ask(prompt: str) -> str:
llm = LLM("qwen-coder-medium", temperature=0.0)
return llm.invoke(prompt).content
if __name__ == "__main__":
print(ask("Explain list comprehension in Python with one example."))
