orka.cli.core module

CLI Core Functionality

This module contains the core CLI functionality including the programmatic entry point for running OrKa workflows.

async orka.cli.core.run_cli_entrypoint(config_path: str, input_text: str, log_to_file: bool = False, verbose: bool = False) → dict[str, Any] | list[Event] | str

🚀 Primary programmatic entry point - run OrKa workflows from any application.

What makes this special:

- Universal Integration: Call OrKa from any Python application seamlessly
- Flexible Output: Returns structured data perfect for further processing
- Production Ready: Handles errors gracefully with comprehensive logging
- Development Friendly: Optional file logging for debugging workflows

Integration Patterns:

1. Simple Q&A Integration:

result = await run_cli_entrypoint(
    "configs/qa_workflow.yml",
    "What is machine learning?",
    log_to_file=False,
)
# Returns: {"answer_agent": "Machine learning is..."}

2. Complex Workflow Integration:

result = await run_cli_entrypoint(
    "configs/content_moderation.yml",
    user_generated_content,
    log_to_file=True,  # Debug complex workflows
)
# Returns: {"safety_check": True, "sentiment": "positive", "topics": ["tech"]}

3. Batch Processing Integration:

results = []
for item in dataset:
    result = await run_cli_entrypoint(
        "configs/classifier.yml",
        item["text"],
        log_to_file=False
    )
    results.append(result)

Return Value Intelligence:

- Dict: Agent outputs mapped by agent ID (most common)
- List: Complete event trace for debugging complex workflows
- String: Simple text output for basic workflows

Perfect for:

- Web applications needing AI capabilities
- Data processing pipelines with AI components
- Microservices requiring intelligent decision making
- Research applications with custom AI workflows
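Since the entry point is async, synchronous application code (a pipeline step, a script, a non-async web view) needs an event-loop bridge. A minimal sketch of that pattern, using a stub coroutine in place of the real run_cli_entrypoint so it runs standalone:

```python
import asyncio
from typing import Any


async def fake_entrypoint(config_path: str, input_text: str) -> dict[str, Any]:
    # Stand-in for run_cli_entrypoint; the real call would run the workflow.
    return {"answer_agent": f"processed: {input_text}"}


def run_workflow_sync(config_path: str, input_text: str) -> dict[str, Any]:
    # asyncio.run drives the coroutine to completion from synchronous code.
    return asyncio.run(fake_entrypoint(config_path, input_text))


result = run_workflow_sync("configs/qa_workflow.yml", "What is machine learning?")
```

Inside an already-running event loop (e.g. an async web framework), await the coroutine directly instead of calling asyncio.run.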

orka.cli.core.run_cli(argv: list[str] | None = None) → int

Run the CLI with the given argument list and return an exit code.