orka.agents.llm_agents module

🤖 LLM Agents - Cloud-Powered Intelligent Processing

This module contains specialized agents that leverage cloud LLMs (OpenAI GPT models) for sophisticated natural language understanding and generation tasks.

Core LLM Agent Types:

🎨 OpenAIAnswerBuilder: The master craftsman of responses
- Synthesizes multiple data sources into coherent answers
- Perfect for final response generation in complex workflows
- Handles context-aware formatting and detailed explanations

🎯 OpenAIClassificationAgent: The intelligent router
- Classifies inputs into predefined categories with high precision
- Essential for workflow branching and content routing
- Supports complex multi-class classification scenarios

OpenAIBinaryAgent: The precise decision maker
- Makes accurate true/false determinations
- Ideal for validation, filtering, and gate-keeping logic
- Optimized for clear yes/no decision points

Advanced Features:
- 🧠 Reasoning Extraction: Captures internal reasoning from <think> blocks
- 📊 Cost Tracking: Automatic token usage and cost calculation
- 🔧 JSON Parsing: Robust handling of structured LLM responses
- ⚡ Error Recovery: Graceful degradation for malformed responses
- 🎛️ Flexible Prompting: Jinja2 template support for dynamic prompts
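The cost-tracking feature amounts to multiplying token counts by per-token rates. A minimal sketch, assuming per-1K-token pricing; the function name and rate parameters here are illustrative, not OrKa's actual API:

```python
def estimate_llm_cost(prompt_tokens: int, completion_tokens: int,
                      prompt_rate_per_1k: float, completion_rate_per_1k: float) -> float:
    """Illustrative cost estimate: tokens / 1000 * per-1K rate, summed over both directions."""
    return ((prompt_tokens / 1000) * prompt_rate_per_1k
            + (completion_tokens / 1000) * completion_rate_per_1k)
```

Keeping the rates as parameters avoids hard-coding any particular provider's pricing, which changes over time.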

Real-world Applications:
- Customer service with intelligent intent classification
- Content moderation with nuanced decision making
- Research synthesis combining multiple information sources
- Multi-step reasoning workflows with transparent logic

orka.agents.llm_agents.parse_llm_json_response(response_text, error_tracker=None, agent_id='unknown') → dict [source]

Parse JSON response from LLM that may contain reasoning (<think> blocks) or be in various formats.

This parser is specifically designed for local LLMs and reasoning models. It handles reasoning blocks, JSON in code blocks, and malformed JSON.

Parameters:
  • response_text (str) – Raw response from LLM

  • error_tracker – Optional error tracking object for silent degradations

  • agent_id (str) – Agent ID for error tracking

Returns:

Parsed response with 'response', 'confidence', and 'internal_reasoning' keys

Return type:

dict
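The behaviors described above (reasoning-block extraction, code-fence unwrapping, graceful fallback for malformed JSON) can be sketched as follows. This is an illustrative reimplementation for explanatory purposes, not the module's actual code:

```python
import json
import re

def parse_llm_json_response_sketch(response_text: str) -> dict:
    """Illustrative parser: strip <think> blocks, unwrap code fences, fall back on bad JSON."""
    # Capture any <think>...</think> reasoning block before removing it.
    think_match = re.search(r"<think>(.*?)</think>", response_text, re.DOTALL)
    reasoning = think_match.group(1).strip() if think_match else ""
    cleaned = re.sub(r"<think>.*?</think>", "", response_text, flags=re.DOTALL).strip()

    # Unwrap ```json ... ``` (or bare ```) code fences if present.
    fence = re.search(r"```(?:json)?\s*(.*?)```", cleaned, re.DOTALL)
    if fence:
        cleaned = fence.group(1).strip()

    try:
        parsed = json.loads(cleaned)
    except json.JSONDecodeError:
        # Graceful degradation: treat the raw text as the response itself.
        return {"response": cleaned, "confidence": "0.0", "internal_reasoning": reasoning}

    return {
        "response": parsed.get("response", cleaned),
        "confidence": parsed.get("confidence", "0.0"),
        "internal_reasoning": parsed.get("internal_reasoning", reasoning),
    }
```

The reasoning captured from the <think> block is preserved under internal_reasoning even when the JSON payload omits that key.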

class orka.agents.llm_agents.OpenAIAnswerBuilder(agent_id, prompt, queue, **kwargs)[source]

Bases: LegacyBaseAgent

🎨 The master craftsman of responses - builds comprehensive answers from complex inputs.

What makes it special:
- Multi-source Synthesis: Combines search results, context, and knowledge seamlessly
- Context Awareness: Understands conversation history and user intent
- Structured Output: Generates well-formatted, coherent responses
- Template Power: Uses Jinja2 for dynamic prompt construction
- Cost Optimization: Tracks token usage and provides cost insights

Perfect for:
- Final answer generation in research workflows
- Customer service response crafting
- Content creation with multiple input sources
- Detailed explanations combining technical and user-friendly language

Example Use Cases:

```yaml
# Comprehensive Q&A system
- id: answer_builder
  type: openai-answer
  prompt: |
    Create a comprehensive answer using:
    - Search results: {{ previous_outputs.web_search }}
    - User context: {{ previous_outputs.user_profile }}
    - Classification: {{ previous_outputs.intent_classifier }}

    Provide a helpful, accurate response that addresses the user's specific needs.
```

Advanced Features:
- Automatic reasoning extraction from <think> blocks
- Confidence scoring for answer quality assessment
- JSON response parsing with fallback handling
- Template variable resolution with rich context

run(input_data) → dict [source]

Generate an answer using OpenAI’s GPT model.

Parameters:

input_data (dict) – Input data containing:
  • prompt (str): The prompt to use (optional, defaults to the agent's prompt)
  • model (str): The model to use (optional, defaults to OPENAI_MODEL)
  • temperature (float): Temperature for generation (optional, defaults to 0.7)
  • parse_json (bool): Whether to parse the JSON response (defaults to True)
  • error_tracker: Optional error tracking object
  • agent_id (str): Agent ID for error tracking

Returns:

Parsed JSON dict with keys:

response, confidence, internal_reasoning, _metrics

Return type:

dict
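A caller might consume the documented return shape as follows. The result dict below is a hypothetical sample built to match the documented keys; the _metrics contents in particular are illustrative, not the library's exact schema:

```python
# Hypothetical result, shaped like the documented return value of run().
result = {
    "response": "Paris is the capital of France.",
    "confidence": "0.95",
    "internal_reasoning": "Well-established geography fact.",
    "_metrics": {"prompt_tokens": 120, "completion_tokens": 18},
}

# Gate low-confidence answers before surfacing them to the user.
answer = result["response"] if float(result["confidence"]) >= 0.5 else None
```

Thresholding on the confidence key is one straightforward way to use the quality-assessment scoring mentioned above.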

class orka.agents.llm_agents.OpenAIBinaryAgent(agent_id, prompt, queue, **kwargs)[source]

Bases: OpenAIAnswerBuilder

The precise decision maker - makes accurate true/false determinations.

Decision-making excellence:
- High Precision: Optimized for clear binary classifications
- Context Sensitive: Considers full context for nuanced decisions
- Confidence Scoring: Provides certainty metrics for decisions
- Fast Processing: Streamlined for quick yes/no determinations

Essential for:
- Content moderation (toxic/safe, appropriate/inappropriate)
- Workflow gating (proceed/stop, valid/invalid)
- Quality assurance (pass/fail, correct/incorrect)
- User intent validation (question/statement, urgent/routine)

Real-world scenarios:

```yaml
# Content safety check
- id: safety_check
  type: openai-binary
  prompt: "Is this content safe for all audiences? {{ input }}"

# Search requirement detection
- id: needs_search
  type: openai-binary
  prompt: "Does this question require current information? {{ input }}"

# Priority classification
- id: is_urgent
  type: openai-binary
  prompt: "Is this request urgent based on content and context? {{ input }}"
```

Decision Quality:
- Leverages full GPT reasoning capabilities
- Provides transparent decision rationale
- Handles edge cases and ambiguous inputs gracefully

run(input_data) → bool [source]

Make a true/false decision using OpenAI’s GPT model.

Parameters:

input_data (dict) – Input data containing:
  • prompt (str): The prompt to use (optional, defaults to the agent's prompt)
  • model (str): The model to use (optional, defaults to OPENAI_MODEL)
  • temperature (float): Temperature for generation (optional, defaults to 0.7)

Returns:

True or False based on the model’s response.

Return type:

bool
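Coercing free-text model output to a strict bool might look like the following sketch. The helper name and the set of accepted affirmative tokens are assumptions for illustration, not OrKa's actual implementation:

```python
def coerce_to_bool(response_text: str) -> bool:
    """Illustrative coercion of a model's text output to a strict True/False."""
    # Normalize whitespace and case, then check against common affirmative tokens.
    return response_text.strip().lower() in {"true", "yes", "1"}
```

Anything outside the affirmative set maps to False, which keeps gate-keeping logic conservative by default.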

class orka.agents.llm_agents.OpenAIClassificationAgent(agent_id, prompt, queue, **kwargs)[source]

Bases: OpenAIAnswerBuilder

🎯 The intelligent router - classifies inputs into predefined categories with precision.

Classification superpowers:
- Multi-class Intelligence: Handles complex category systems with ease
- Context Awareness: Uses conversation history for better classification
- Confidence Metrics: Provides certainty scores for each classification
- Dynamic Categories: Supports runtime category adjustment
- Fallback Handling: Graceful degradation for unknown categories

Essential for:
- Intent detection in conversational AI
- Content categorization and routing
- Topic classification for knowledge systems
- Sentiment and emotion analysis
- Domain-specific classification tasks

Classification patterns:

```yaml
# Customer service routing
- id: intent_classifier
  type: openai-classification
  options: [question, complaint, compliment, request, technical_issue]
  prompt: "Classify customer intent: {{ input }}"

# Content categorization
- id: topic_classifier
  type: openai-classification
  options: [technology, science, business, entertainment, sports]
  prompt: "What topic does this article discuss? {{ input }}"

# Urgency assessment
- id: priority_classifier
  type: openai-classification
  options: [low, medium, high, critical]
  prompt: "Assess priority level based on content and context: {{ input }}"
```

Advanced capabilities:
- Hierarchical classification support
- Multi-label classification for complex content
- Confidence thresholding for quality control
- Custom category definitions with examples
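The fallback-handling behavior could be sketched as matching the model's output against the configured options, with a fallback value for anything unrecognized. The helper below and its default fallback are illustrative, not the agent's actual code:

```python
def normalize_category(raw: str, options: list[str], fallback: str = "unknown") -> str:
    """Illustrative: match model output to a predefined option, case-insensitively."""
    # Normalize whitespace, case, and a trailing period before comparing.
    cleaned = raw.strip().lower().rstrip(".")
    for option in options:
        if option.lower() == cleaned:
            return option
    return fallback
```

Routing logic downstream can then branch on a closed set of category names instead of raw model text.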

run(input_data) → str [source]

Classify input using OpenAI’s GPT model.

Parameters:

input_data (dict) – Input data containing:
  • prompt (str): The prompt to use (optional, defaults to the agent's prompt)
  • model (str): The model to use (optional, defaults to OPENAI_MODEL)
  • temperature (float): Temperature for generation (optional, defaults to 0.7)

Returns:

Category name based on the model’s classification.

Return type:

str