orka.memory_logger module

Memory Logger

The Memory Logger is a critical component of the OrKa framework that provides persistent storage and retrieval capabilities for orchestration events, agent outputs, and system state. It serves as both a runtime memory system and an audit trail for agent workflows.

Modular Architecture

The memory logger features a modular architecture with focused components while maintaining 100% backward compatibility through factory functions.

Key Features

Event Logging

Records all agent activities and system events with detailed metadata

Data Persistence

Persists data to Redis streams for reliable, durable storage

Serialization

Handles conversion of complex Python objects to JSON-serializable formats with intelligent blob deduplication

Error Resilience

Implements fallback mechanisms for handling serialization errors gracefully

Querying

Provides methods to retrieve recent events and specific data points efficiently

File Export

Supports exporting memory logs to files for analysis and backup

RedisStack Backend

High-performance RedisStack backend with HNSW vector indexing for semantic search
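
A minimal end-to-end sketch of how these features fit together is shown below. The log, tail, and save_to_file method names are illustrative assumptions, not a confirmed API; consult the class documentation for exact signatures.

from orka.memory_logger import create_memory_logger

# Create a logger against the default RedisStack backend.
memory = create_memory_logger("redisstack", redis_url="redis://localhost:6380")

# Record an agent event (hypothetical method name and signature).
memory.log(
    agent_id="classifier_1",
    event_type="agent.output",
    payload={"answer": "positive", "confidence": 0.93},
)

# Retrieve recent events for debugging or auditing (hypothetical method).
recent = memory.tail(10)

# Export the accumulated log for offline analysis or backup (hypothetical method).
memory.save_to_file("memory_log.json")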

Core Use Cases

The Memory Logger is essential for:

  • Enabling agents to access past context and outputs

  • Debugging and auditing agent workflows

  • Maintaining state across distributed components

  • Supporting complex workflow patterns like fork/join

  • Providing audit trails for compliance and analysis

Modular Components

The memory system is composed of specialized modules:

BaseMemoryLogger

Abstract base class defining the memory logger interface

RedisMemoryLogger

Complete Redis backend implementation with streams and data structures

RedisStackMemoryLogger

High-performance RedisStack backend with HNSW vector indexing

serialization

JSON sanitization and memory processing utilities

file_operations

Save/load functionality and file I/O operations

compressor

Data compression utilities for efficient storage

Usage Examples

Factory Function (Recommended)

from orka.memory_logger import create_memory_logger

# RedisStack backend (default - recommended)
redisstack_memory = create_memory_logger("redisstack", redis_url="redis://localhost:6380")

# Basic Redis backend
redis_memory = create_memory_logger("redis", redis_url="redis://localhost:6380")

Direct Instantiation

from orka.memory.redis_logger import RedisMemoryLogger
from orka.memory.redisstack_logger import RedisStackMemoryLogger

# Redis logger
redis_logger = RedisMemoryLogger(redis_url="redis://localhost:6380")

# RedisStack logger with HNSW
redisstack_logger = RedisStackMemoryLogger(redis_url="redis://localhost:6380")

Environment-Based Configuration

import os
from orka.memory_logger import create_memory_logger

# Set backend via environment variable
os.environ["ORKA_MEMORY_BACKEND"] = "redisstack"

# Logger will use RedisStack automatically
memory = create_memory_logger()
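
Memory Presets

Presets can also be passed through the factory using the documented memory_preset and operation parameters. The sketch below assumes a running RedisStack instance; the exact defaults applied depend on the preset definitions.

from orka.memory_logger import create_memory_logger

# The "episodic" preset supplies read-oriented defaults (e.g. similarity
# settings) so they do not have to be spelled out by hand.
memory = create_memory_logger(
    backend="redisstack",
    redis_url="redis://localhost:6380",
    memory_preset="episodic",
    operation="read",
)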

Backend Comparison

RedisStack Backend (Recommended)
  • Best for: Production AI workloads, high-performance applications

  • Features: HNSW vector indexing, up to 100x faster search than the legacy Redis backend, advanced memory management

  • Performance: Sub-millisecond search, 50,000+ operations/second

Redis Backend (Legacy)
  • Best for: Development, single-node deployments, quick prototyping

  • Features: Fast in-memory operations, simple setup, full feature support

  • Limitations: Basic search capabilities, no vector indexing
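
Because both backends are created through the same factory, the choice can be driven by deployment configuration. A minimal sketch using the ORKA_MEMORY_BACKEND variable shown earlier:

import os
from orka.memory_logger import create_memory_logger

# Choose the backend from the environment, falling back to the
# recommended RedisStack backend when nothing is set.
backend = os.environ.get("ORKA_MEMORY_BACKEND", "redisstack")
memory = create_memory_logger(backend=backend, redis_url="redis://localhost:6380")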

Implementation Notes

Backward Compatibility

All existing code using RedisMemoryLogger continues to work unchanged

Performance Optimizations
  • Blob deduplication reduces storage overhead (see the sketch after this list)

  • In-memory buffers provide fast access to recent events

  • Batch operations improve throughput

  • HNSW indexing for ultra-fast vector search
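
The deduplication idea can be illustrated with a content-hashing sketch. This is a conceptual example only, not the module's actual implementation:

import hashlib
import json

blob_store: dict[str, dict] = {}

def deduplicate_blob(payload: dict) -> str:
    """Store a payload once, keyed by a hash of its JSON form,
    and return the key so events can reference it cheaply."""
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    # Repeated identical payloads reuse the same stored blob.
    blob_store.setdefault(digest, payload)
    return digest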

Error Handling
  • Robust sanitization handles non-serializable objects

  • Graceful degradation prevents workflow failures

  • Detailed error logging aids debugging

Thread Safety

All memory logger implementations are thread-safe for concurrent access
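
A small sketch of concurrent use, reusing the same illustrative log method as above (the shared-logger pattern is the point here; the exact call signature is an assumption):

import threading
from orka.memory_logger import create_memory_logger

memory = create_memory_logger("redisstack", redis_url="redis://localhost:6380")

def worker(worker_id: int) -> None:
    # Each thread logs through the same shared logger instance.
    memory.log(
        agent_id=f"agent_{worker_id}",
        event_type="agent.output",
        payload={"result": worker_id},
    )

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()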

orka.memory_logger.apply_memory_preset_to_config(config: dict[str, Any], memory_preset: str | None = None, operation: str | None = None) → dict[str, Any]

Apply memory preset operation-specific defaults to a configuration dictionary.

This function merges operation-specific (read/write) memory preset defaults into the provided configuration, allowing users to specify just a preset and have appropriate defaults applied automatically.

Parameters:
  • config – Base configuration dictionary

  • memory_preset – Name of the memory preset (sensory, working, episodic, semantic, procedural, meta)

  • operation – Memory operation type (‘read’ or ‘write’)

Returns:

Enhanced configuration with preset defaults applied

Example

>>> config = {"operation": "read", "namespace": "test"}
>>> enhanced = apply_memory_preset_to_config(config, "episodic", "read")
>>> # Returns config with episodic read defaults like similarity_threshold, vector_weight, etc.

orka.memory_logger.create_memory_logger(backend: str = 'redisstack', redis_url: str | None = None, stream_key: str = 'orka:memory', debug_keep_previous_outputs: bool = False, decay_config: dict[str, Any] | None = None, memory_preset: str | None = None, operation: str | None = None, enable_hnsw: bool = True, vector_params: dict[str, Any] | None = None, format_params: dict[str, Any] | None = None, index_name: str = 'orka_enhanced_memory', vector_dim: int = 384, force_recreate_index: bool = False, **kwargs) → BaseMemoryLogger

Enhanced factory with RedisStack as primary backend.

Creates a memory logger instance based on the specified backend. Defaults to RedisStack for optimal performance with automatic fallback.

Parameters:
  • backend – Memory backend type (“redisstack”, “redis”)

  • redis_url – Redis connection URL

  • stream_key – Redis stream key for logging

  • debug_keep_previous_outputs – Whether to keep previous outputs in logs

  • decay_config – Memory decay configuration

  • memory_preset – Memory preset name (sensory, working, episodic, semantic, procedural, meta)

  • enable_hnsw – Enable HNSW vector indexing (RedisStack only)

  • vector_params – HNSW configuration parameters

  • format_params – Content formatting parameters (e.g., newline handling, custom filters)

  • index_name – Name of the RedisStack index for vector search

  • vector_dim – Dimension of vector embeddings

  • force_recreate_index – Whether to force recreate index if it exists but is misconfigured

  • **kwargs – Additional parameters for backward compatibility

Returns:

Configured memory logger instance

Raises:
  • ImportError – If required dependencies are not available

  • ConnectionError – If backend connection fails

Notes

All parameters can be configured through YAML configuration. Vector parameters can be specified in detail through the vector_params dictionary.
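
A sketch of passing detailed vector configuration through vector_params. The dictionary keys shown (M, ef_construction, ef_runtime) are typical HNSW tuning knobs and are assumptions here, not a confirmed schema:

from orka.memory_logger import create_memory_logger

memory = create_memory_logger(
    backend="redisstack",
    redis_url="redis://localhost:6380",
    index_name="orka_enhanced_memory",
    vector_dim=384,
    enable_hnsw=True,
    # Hypothetical HNSW tuning keys; consult the RedisStack logger
    # documentation for the exact parameter names it accepts.
    vector_params={"M": 16, "ef_construction": 200, "ef_runtime": 10},
)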