orka.agents.local_llm_agents module
Local LLM Agents Module
This module provides agents for interfacing with locally running large language models. It supports various local LLM serving solutions, including Ollama, LM Studio, LMDeploy, and other OpenAI-compatible APIs.
Local LLM agents enable:
- Fully offline LLM workflows
- Privacy-preserving AI processing
- Custom model deployment flexibility
- Reduced dependency on cloud services
- Integration with self-hosted models
- class orka.agents.local_llm_agents.LocalLLMAgent(agent_id, prompt, queue, **kwargs)[source]
Bases:
LegacyBaseAgent
Calls a local LLM endpoint (e.g. Ollama, LM Studio) with a prompt and returns the response.
This agent mimics the same interface as OpenAI-based agents but uses local model endpoints for inference. It supports various local LLM serving solutions like Ollama, LM Studio, LMDeploy, and other OpenAI-compatible APIs.
Supported Providers:
ollama: Native Ollama API format
lm_studio: LM Studio with OpenAI-compatible endpoint
openai_compatible: Any OpenAI-compatible API endpoint
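The providers above differ mainly in the wire format of the request and response. The following is a hedged sketch of those two formats, not OrKa's internal implementation; the URLs, port numbers, and model name are placeholders, and the requests library is used only for illustration.

    import requests

    def call_ollama_native(prompt: str) -> str:
        # Native Ollama format: POST /api/generate with a flat "prompt" field;
        # the completion comes back under the "response" key.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "mistral", "prompt": prompt, "stream": False},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    def call_openai_compatible(prompt: str) -> str:
        # OpenAI-compatible format (LM Studio, LMDeploy, etc.):
        # POST /v1/chat/completions with a "messages" list; the completion
        # comes back under choices[0].message.content.
        resp = requests.post(
            "http://localhost:1234/v1/chat/completions",  # placeholder endpoint
            json={
                "model": "mistral",
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 0.7,
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]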
Configuration Example:
    type: local_llm
    prompt: "Summarize this: {{ input }}"
    model: "mistral"
    model_url: "http://localhost:11434/api/generate"
    provider: "ollama"
    temperature: 0.7
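A minimal construction sketch based on the configuration above. The keyword arguments mirror the YAML keys and are assumed to be passed through **kwargs; the queue value is a placeholder normally supplied by the OrKa orchestrator.

    from orka.agents.local_llm_agents import LocalLLMAgent

    # Assumed kwargs: model, model_url, provider, and temperature are taken
    # from the YAML example above and may be named differently in practice.
    agent = LocalLLMAgent(
        agent_id="summarizer",
        prompt="Summarize this: {{ input }}",
        queue=[],  # placeholder; the orchestrator normally provides this
        model="mistral",
        model_url="http://localhost:11434/api/generate",
        provider="ollama",
        temperature=0.7,
    )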
- run(input_data)[source]
Generate an answer using a local LLM endpoint.
- Parameters:
input_data (dict or str) – Input data. If a dict, it may contain prompt (str), model (str), temperature (float), and other parameters. If a str, it is used directly as the input text to process.
- Returns:
Generated answer from the local model.
- Return type:
str
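Continuing with the agent instance from the construction sketch above, the two documented input shapes look roughly like this; the dict keys are taken from the parameter description, and the example text is illustrative.

    # String input: used directly as the text to process.
    summary = agent.run("OrKa orchestrates modular agents over local or cloud LLMs.")

    # Dict input: prompt, model, temperature, and other parameters.
    summary = agent.run(
        {
            "prompt": "Summarize this: OrKa orchestrates modular agents.",
            "model": "mistral",
            "temperature": 0.2,
        }
    )
    print(summary)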
- build_prompt(input_text, template=None, full_context=None)[source]
Build the prompt from template and input data.
- Parameters:
input_text (str) – The main input text to substitute
template (str, optional) – Template string, defaults to self.prompt
full_context (dict, optional) – Full context dict for complex template variables
- Returns:
The built prompt
- Return type:
str
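A short usage sketch for build_prompt(). The "{{ input }}" placeholder is borrowed from the configuration example, and the assumption that it is substituted with input_text (or resolved against full_context) reflects the parameter descriptions rather than confirmed internals.

    # The template defaults to self.prompt when omitted; an explicit template
    # is passed here for clarity.
    prompt = agent.build_prompt(
        "Local models keep data on the machine that runs them.",
        template="Summarize this: {{ input }}",
    )
    # Expected (assumed) result:
    # "Summarize this: Local models keep data on the machine that runs them."

    # full_context can supply additional variables for more complex templates.
    prompt = agent.build_prompt(
        "status report",
        template="{{ role }}: produce a {{ input }}",
        full_context={"role": "analyst"},
    )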