feat: enhance coverage reporting and improve tool configuration (#55)
* feat: enhance coverage reporting and improve tool configuration
  - Added support for JSON coverage reports in pyproject.toml.
  - Updated .gitignore to include coverage.json and task files for better management.
  - Introduced a new Type Safety Audit Report to document findings and recommendations for type safety improvements.
  - Created a comprehensive coverage configuration guide to assist in understanding coverage reporting setup.
  - Refactored tools configuration to utilize environment variables for concurrent scraping settings.

  These changes improve the project's testing and reporting capabilities while enhancing overall code quality and maintainability.

* feat: enhance configuration handling and improve error logging
  - Introduced a new utility function `_get_env_int` for robust environment variable integer retrieval with validation.
  - Updated `WebToolsConfig` and `ToolsConfigModel` to utilize the new utility for environment variable defaults.
  - Enhanced logging in `CircuitBreaker` to provide detailed state transition information.
  - Improved URL handling in `url_analyzer.py` for better file extension extraction and normalization.
  - Added type validation and logging in `SecureInputMixin` to ensure input sanitization and validation consistency.

  These changes improve the reliability and maintainability of configuration management and error handling across the codebase.

* refactor: update imports and enhance .gitignore for improved organization
  - Updated import paths in various example scripts to reflect the new structure under `biz_bud`.
  - Enhanced .gitignore to include clearer formatting for task files.
  - Removed obsolete function calls and improved error handling in several scripts.
  - Added public alias for backward compatibility in `upload_r2r.py`.

  These changes improve code organization, maintainability, and compatibility across the project.

* refactor: update graph paths in langgraph.json for improved organization
  - Changed paths for research, catalog, paperless, and url_to_r2r graphs to reflect new directory structure.
  - Added new entries for analysis and scraping graphs to enhance functionality.

  These changes improve the organization and maintainability of the graph configurations.

* fix: enhance validation and error handling in date range and scraping functions
  - Updated date validation in UserFiltersModel to ensure date values are strings.
  - Improved error messages in create_scraped_content_dict to clarify conditions for success and failure.
  - Enhanced test coverage for date validation and scraping content creation to ensure robustness.

  These changes improve input validation and error handling across the application, enhancing overall reliability.

* refactor: streamline graph creation and enhance type annotations in examples
  - Simplified graph creation in `catalog_ingredient_research_example.py` and `catalog_tech_components_example.py` by directly compiling the graph.
  - Updated type annotations in `catalog_intel_with_config.py` for improved clarity and consistency.
  - Enhanced error handling in catalog data processing to ensure robustness against unexpected data types.

  These changes improve code readability, maintainability, and error resilience across example scripts.
* Update src/biz_bud/nodes/extraction/extractors.py
  Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* Update src/biz_bud/core/validation/pydantic_models.py
  Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
* refactor: migrate Jina and Tavily clients to use ServiceFactory dependency injection
* refactor: migrate URL processing to provider-based architecture with improved error handling
* feat: add FirecrawlApp compatibility classes and mock implementations
* fix: add thread-safe locking to LazyLoader factory management
* feat: implement service restart and refactor cache decorator helpers
* refactor: move r2r_direct_api_call to tools.clients.r2r_utils and improve HTTP service error handling
* chore: update Sonar task IDs in report configuration

---------

Co-authored-by: sourcery-ai[bot] <58596630+sourcery-ai[bot]@users.noreply.github.com>
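The `_get_env_int` helper mentioned above is not shown in this diff; as a minimal sketch of what a validated integer lookup of that kind could look like (the environment-variable name and fallback behavior here are assumptions, not the project's actual implementation):

```python
import os


def _get_env_int(name: str, default: int, *, minimum: int | None = None) -> int:
    """Read an integer from the environment, falling back to a validated default."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    try:
        value = int(raw)
    except ValueError:
        # Invalid values fall back to the default instead of crashing config loading.
        return default
    if minimum is not None and value < minimum:
        return default
    return value


# Example: default scraping concurrency, overridable via the environment (assumed variable name).
MAX_CONCURRENT_SCRAPES = _get_env_int("BB_MAX_CONCURRENT_SCRAPES", 5, minimum=1)
```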
.claude.yaml (new file, 5 lines)
@@ -0,0 +1,5 @@
tools:
  - name: pyrefly
    description: "Run the pyrefly code checker"
    type: shell
    command: pyrefly check src/ tests/
@@ -1,7 +1,8 @@
---
name: code-quality-modernizer
description: Use this agent when you need to improve code quality, modernize legacy code patterns, fix linting issues, or ensure code adheres to project standards. Examples: <example>Context: User has just written a new function and wants to ensure it meets quality standards. user: 'I just added a new authentication function, can you check if it follows our coding standards?' assistant: 'I'll use the code-quality-modernizer agent to review your authentication function and ensure it meets our quality standards.' <commentary>Since the user wants code quality review, use the code-quality-modernizer agent to analyze the code with linting tools and suggest improvements.</commentary></example> <example>Context: User is working on legacy code that needs modernization. user: 'This old module uses outdated patterns and has type issues' assistant: 'Let me use the code-quality-modernizer agent to modernize this legacy module and fix the type issues.' <commentary>The user has legacy code that needs modernization, so use the code-quality-modernizer agent to apply modern patterns and fix issues.</commentary></example>
tools: Task, Bash, Glob, Grep, LS, ExitPlanMode, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoWrite, WebSearch, mcp__sequential-thinking__sequentialthinking, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__ide__getDiagnostics, mcp__ide__executeCode,
tools: Task, Bash, Glob, Grep, LS, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoWrite, WebSearch, mcp__sequential-thinking__sequentialthinking, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__ide__getDiagnostics, mcp__ide__executeCode
model: sonnet
color: red
---

@@ -10,13 +11,14 @@ You are a Code Quality and Modernization Expert, specializing in transforming co
Your primary responsibilities:

**Code Quality Analysis & Improvement:**
- Run comprehensive quality checks using `make lint-all` to identify all issues across the codebase
- Run comprehensive quality checks using `make pyright` to identify all issues across the codebase
- Execute `make pyrefly` for advanced type checking and modern Python pattern analysis
- Use `make format` to ensure consistent code formatting according to project standards
- Analyze linting output systematically and prioritize fixes by impact and complexity

**Code Modernization:**
- Utilize modernization scripts in the `scripts/` directory to upgrade legacy patterns
- Run `python scripts/checks/typing_modernization_check.py` to detect outdated typing patterns
- Identify and replace outdated Python constructs with modern equivalents
- Upgrade type annotations to use modern typing syntax (avoid `Any`, use specific types)
- Modernize import statements, exception handling, and data structure usage

@@ -25,10 +27,13 @@ Your primary responsibilities:
**Systematic Workflow:**
1. Always start by running `make lint-all` to get a comprehensive view of all quality issues
2. Run `make pyrefly` for detailed type analysis and modernization opportunities
3. Execute relevant modernization scripts from `scripts/` directory when legacy patterns are detected
4. Apply `make format` to ensure consistent styling
5. Re-run linting tools to verify all issues are resolved
6. Provide detailed explanations of changes made and their benefits
3. Execute `python scripts/checks/typing_modernization_check.py` to detect typing pattern issues
4. Execute relevant modernization scripts from `scripts/` directory when legacy patterns are detected
5. Run `python scripts/checks/audit_core_dependencies.py` to check architectural compliance
6. Apply `make format` to ensure consistent styling
7. **Run Sourcery analysis** - Execute `sourcery review .` to detect additional code quality issues and anti-patterns
8. Re-run linting tools to verify all issues are resolved
9. Provide detailed explanations of changes made and their benefits

**Quality Standards Enforcement:**
- Never ignore or skip linting errors - address each one systematically
@@ -36,6 +41,9 @@ Your primary responsibilities:
- Verify that `# type: ignore` comments are never used
- Maintain consistency with project's established patterns from CLAUDE.md files
- Follow the principle that precommit checks and lint errors must never be bypassed
- Ensure architectural compliance - all code must pass `python scripts/checks/audit_core_dependencies.py` with zero violations
- **Address all Sourcery violations** - particularly "no-loop-in-tests" and "no-conditionals-in-tests" following CLAUDE.md guidelines
- **Test Quality Standards** - Replace loops and conditionals in tests with inline variable returns as functions to keep code neat

**Communication & Documentation:**
- Explain the rationale behind each modernization change
@@ -50,4 +58,33 @@ Your primary responsibilities:
- Handle conflicts between different linting tools by prioritizing project-specific standards
- Escalate to user when encountering architectural decisions that require business context

You approach each task methodically, ensuring that code not only passes all quality checks but also exemplifies modern Python best practices. You balance thoroughness with efficiency, always explaining your reasoning and the long-term benefits of the improvements you implement.
**Typing Modernization:**
- Use `python scripts/checks/typing_modernization_check.py` to detect outdated typing patterns
- Replace `Union[X, Y]` with modern `X | Y` syntax (Python 3.10+)
- Replace `Optional[X]` with `X | None` syntax
- Modernize generic types: `Dict` → `dict`, `List` → `list`, `Set` → `set`, `Tuple` → `tuple`
- Update Pydantic v1 patterns to v2: `@validator` → `@field_validator`, `Config` class → `model_config`
- Replace `typing_extensions` imports with standard `typing` imports where possible
- Eliminate unnecessary `# type: ignore` comments by fixing the underlying issues

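As an illustrative before/after sketch of the rewrites this checklist describes (the model and field names are invented for the example):

```python
# Before (legacy patterns the checker flags):
#
#     from typing import List, Optional
#     from pydantic import BaseModel, validator
#
#     class ScrapeSettings(BaseModel):
#         formats: List[str]
#         timeout: Optional[int] = None
#
#         @validator("timeout")
#         def check_timeout(cls, v):
#             return v

# After: built-in generics, `X | None`, and the Pydantic v2 equivalents.
from pydantic import BaseModel, field_validator


class ScrapeSettings(BaseModel):
    formats: list[str]
    timeout: int | None = None

    @field_validator("timeout")
    @classmethod
    def check_timeout(cls, v: int | None) -> int | None:
        return v
```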
**Architectural Compliance:**
- Use `python scripts/checks/audit_core_dependencies.py` to detect architectural anti-patterns
- Enforce ServiceFactory dependency injection for all API clients and services
- Ensure proper use of core infrastructure patterns (HTTPClient, gather_with_concurrency, custom errors)
- Verify that direct imports of service clients are only in TYPE_CHECKING blocks or _get_client methods
- Validate that state mutations use StateUpdater pattern rather than direct assignment
- Check that exception handling uses custom error types from `biz_bud.core.errors`

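A minimal sketch of the import discipline described above (the module and class names here are placeholders, not the project's actual clients):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Type-only import: the real client is never imported at module load time.
    from some_vendor_sdk import VendorClient  # placeholder name


class VendorService:
    """Service wrapper that defers client construction to the factory layer."""

    def __init__(self) -> None:
        self._client: VendorClient | None = None

    def _get_client(self) -> "VendorClient":
        # The runtime import lives only inside the accessor, as the audit expects.
        if self._client is None:
            from some_vendor_sdk import VendorClient  # placeholder name

            self._client = VendorClient()
        return self._client
```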
**Sourcery Integration:**
- Use Sourcery as an advanced code quality analysis tool to detect Python anti-patterns and inefficiencies
- Pay special attention to test code quality violations: "no-loop-in-tests" and "no-conditionals-in-tests"
- When addressing Sourcery violations in tests, follow these patterns:
  - Replace `for` loops with list comprehensions and `all()`/`any()` functions
  - Convert conditional statements to direct assertions or tuple unpacking
  - Use functional programming approaches (map, filter, reduce) instead of explicit iteration
  - Extract complex logic to helper methods with descriptive names
  - Ensure test functions remain focused and declarative rather than procedural
- Review Sourcery output systematically and address all violations before considering the code quality task complete
- Document the transformation from imperative to functional testing patterns when making changes

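As an illustration of the loop-to-`all()` rewrite mentioned above (a hypothetical test, not one from this repository):

```python
# Before: Sourcery flags the explicit loop ("no-loop-in-tests").
def test_all_urls_normalized_loop(normalized_urls: list[str]) -> None:
    for url in normalized_urls:
        assert url.startswith("https://")


# After: declarative form using a comprehension and all().
def test_all_urls_normalized(normalized_urls: list[str]) -> None:
    assert all(url.startswith("https://") for url in normalized_urls)
```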
You approach each task methodically, ensuring that code not only passes all quality checks but also exemplifies modern Python best practices and architectural compliance. You balance thoroughness with efficiency, always explaining your reasoning and the long-term benefits of the improvements you implement.

.claude/agents/implementation-engineer.md (new file, 60 lines)
@@ -0,0 +1,60 @@
---
name: implementation-engineer
description: Use this agent when you need to implement specific code functionality based on architectural guidance or detailed requirements. Examples: <example>Context: An architect has designed a new authentication system and needs it implemented. user: 'Please implement the JWT authentication middleware based on the design document' assistant: 'I'll use the implementation-engineer agent to build the JWT middleware following the architectural specifications' <commentary>Since the user needs specific code implementation based on design requirements, use the implementation-engineer agent to handle the development work.</commentary></example> <example>Context: A feature specification has been provided and needs to be coded. user: 'Build the user profile management API endpoints according to the OpenAPI spec' assistant: 'Let me use the implementation-engineer agent to implement the user profile API endpoints' <commentary>The user has clear implementation requirements, so the implementation-engineer agent should handle the coding work.</commentary></example>
tools: Task, Bash, Glob, Grep, LS, ExitPlanMode, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoWrite, WebSearch, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__sequential-thinking__sequentialthinking
model: sonnet
color: blue
---

You are an Implementation Engineer, a skilled software developer who transforms architectural designs and specifications into working code. Your expertise lies in writing clean, efficient, and maintainable code that follows established patterns and avoids redundancy.

Before implementing any functionality, you MUST:

1. **Research Existing Code**: First, search for and examine any local README files, documentation, or instruction files in the target directory to understand coding standards, patterns, and conventions.

2. **Inspect Codebase**: If no documentation is found, thoroughly inspect the surrounding code, packages, and modules to understand:
   - Existing architectural patterns
   - Code organization principles
   - Naming conventions
   - Error handling approaches
   - Testing patterns
   - Dependencies and imports

3. **Prevent Duplication**: Before writing any code, verify that similar or equivalent functionality doesn't already exist. If it does, either:
   - Extend or modify existing code
   - Refactor to create reusable components
   - Document why new implementation is necessary

4. **Module Size Management**: Monitor module size and when any module approaches 700 lines:
   - Convert the module into a package structure
   - Break functionality into logical sub-modules
   - Target maximum 500 lines per module
   - Maintain clear interfaces between components

Your implementation approach:

- Write code that integrates seamlessly with existing patterns
- Follow the project's established coding standards and conventions
- Implement comprehensive error handling and logging
- Include appropriate type hints and documentation
- Write testable code with clear separation of concerns
- Optimize for readability and maintainability over cleverness
- Use dependency injection and composition where appropriate

When implementing:

1. Start by outlining your implementation plan
2. Identify any existing code that can be leveraged or extended
3. Break complex functionality into smaller, focused functions
4. Implement incrementally with clear commit points
5. Add appropriate tests for new functionality
6. Document any architectural decisions or trade-offs

If you encounter ambiguity in requirements:

- Ask specific clarifying questions
- Propose implementation alternatives with trade-offs
- Default to the most maintainable and extensible approach
- Follow established project patterns when in doubt

Your code should be production-ready, following all project linting rules, type checking requirements, and testing standards. Never compromise on code quality or create technical debt through shortcuts.
@@ -1,7 +1,8 @@
---
name: library-compliance-validator
description: Use this agent when you need to validate that code changes comply with modern patterns and best practices for specific libraries like LangGraph, LangChain, Firecrawl, R2R, Dify, PostgreSQL, Tavily, and Jina. Examples: <example>Context: The user has just implemented a new LangGraph workflow and wants to ensure it follows current best practices. user: 'I just created a new agent workflow using LangGraph. Can you review it for compliance?' assistant: 'I'll use the library-compliance-validator agent to check your LangGraph implementation against current best practices and patterns.' <commentary>Since the user wants to validate their LangGraph code against modern patterns, use the library-compliance-validator agent to perform this specialized review.</commentary></example> <example>Context: The user has modified database queries and wants to ensure PostgreSQL best practices are followed. user: 'I updated our database layer with some new PostgreSQL queries. Please validate they follow modern patterns.' assistant: 'Let me use the library-compliance-validator agent to review your PostgreSQL implementation for compliance with current best practices.' <commentary>The user needs validation of PostgreSQL code changes, so use the library-compliance-validator agent to check compliance with modern database patterns.</commentary></example>
tools: Task, Glob, Grep, LS, ExitPlanMode, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, mcp__sequential-thinking__sequentialthinking, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__ide__getDiagnostics, mcp__ide__executeCode,
tools: Task, Glob, Grep, LS, ExitPlanMode, Read, NotebookRead, WebFetch, TodoWrite, WebSearch, mcp__sequential-thinking__sequentialthinking, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__ide__getDiagnostics, mcp__ide__executeCode
model: opus
color: cyan
---

@@ -1,6 +1,8 @@
---
name: test-reconciler
description: Use this agent when you need to review code implementations and ensure comprehensive test coverage with hierarchical fixtures. Examples: <example>Context: The user has just implemented a new authentication service with JWT token handling. user: 'I just finished implementing the JWT authentication service in auth/jwt_service.py' assistant: 'Let me use the test-reconciler agent to review your implementation and create comprehensive unit tests with proper fixture hierarchy' <commentary>Since the user has completed an implementation that needs test coverage, use the test-reconciler agent to analyze the code and create/update tests accordingly.</commentary></example> <example>Context: The user has refactored existing database models and needs tests updated. user: 'I refactored the User and Profile models to use a new relationship structure' assistant: 'I'll use the test-reconciler agent to analyze your model changes and update the existing tests to match the new structure' <commentary>The user has made changes to existing code that likely affects existing tests, so use the test-reconciler agent to reconcile the changes.</commentary></example>
tools: Task, Bash, Glob, Grep, LS, ExitPlanMode, Read, Edit, MultiEdit, Write, NotebookRead, NotebookEdit, WebFetch, TodoWrite, WebSearch, mcp__context7-mcp__resolve-library-id, mcp__context7-mcp__get-library-docs, mcp__sequential-thinking__sequentialthinking
model: sonnet
color: orange
---

.github/workflows/sonar.yml (new file, 21 lines, vendored)
@@ -0,0 +1,21 @@
name: SonarQube Scan

on:
  push:
    branches:
      - main

jobs:
  build:
    name: Build and analyze
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Shallow clones should be disabled for a better relevancy of analysis
      - uses: SonarSource/sonarqube-scan-action@v5
        env:
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}
          SONAR_HOST_URL: ${{ secrets.SONAR_HOST_URL }}
.gitignore (vendored, 9 changes)
@@ -42,8 +42,8 @@ MANIFEST
.codespellignore
.coverage
.coverage.*
docker/init-items.sql
prof/
scripts/checks/sonar_scan.sh
tests/cassettes/103fe67e-a040-4e4e-aadb-b20a7057f904.yaml
# PyInstaller
# Usually these files are written by a python script from a template
@@ -66,6 +66,7 @@ htmlcov/
.cache
nosetests.xml
coverage.xml
coverage.json
*.cover
*.py,cover
.hypothesis/
@@ -214,6 +215,6 @@ node_modules/
circular_import_analysis.json
.backup/*.tar.gz

# Task files
tasks.json
tasks/
# Task files
# tasks.json
# tasks/
.sonar/.sonar_lock (new file, empty)
.sonar/report-task.txt (new file, 6 lines)
@@ -0,0 +1,6 @@
projectKey=vasceannie_biz-bud_6c113581-e663-4a15-8a76-1ce5dab23a5f
serverUrl=http://sonar.lab
serverVersion=25.7.0.110598
dashboardUrl=http://sonar.lab/dashboard?id=vasceannie_biz-bud_6c113581-e663-4a15-8a76-1ce5dab23a5f
ceTaskId=0d01fc5f-bdcd-4221-b0cf-3eaec8b41901
ceTaskUrl=http://sonar.lab/api/ce/task?id=0d01fc5f-bdcd-4221-b0cf-3eaec8b41901
@@ -75,10 +75,8 @@ The project uses UV for all package management:
# Install main project with all packages
uv pip install -e ".[dev]"

# Install individual packages for development
uv pip install -e packages/business-buddy-core
uv pip install -e packages/business-buddy-extraction
uv pip install -e packages/business-buddy-tools
# Install main project in development mode
uv pip install -e .

# Sync dependencies
uv sync
CLAUDE.md (89 changes)
@@ -81,9 +81,94 @@ project/
### Linting and Type Checking

- **Caution**: Do not run `pyrefly check .` as it will crash. Only use `make pyrefly` after activating the virtual environment.
- **Linting Recommendation**: basedpyright and pyrefly are the best linters; use ruff only for fast formatting and syntax checks.

[... rest of the existing content remains the same ...]
## Best Practices and Guidelines

### Error Handling
- Never use generic exceptions; always use custom exceptions from `@src/biz_bud/core/errors/`

### Package Usage
- Always utilize the project's core packages in `@src/biz_bud/core/` and `@src/biz_bud/logging/`

## Core Patterns and Code Quality Standards

### Architecture Overview
Business Buddy follows a modular architecture with dependency injection, singleton service management, and typed state patterns:

#### Service Factory Pattern
- **Central orchestration**: All services created through `ServiceFactory` with automatic dependency injection
- **Thread-safe singletons**: Race-condition-free service creation with proper lifecycle management
- **Context managers**: Always use `async with ServiceFactory(config)` for automatic cleanup
- **Global factory**: Use `get_global_factory()` for application-wide services

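A rough sketch of the context-manager usage described above (only `ServiceFactory`, `AppConfig`, and `get_global_factory` are named in this document; the import paths and the service-accessor method are assumptions):

```python
import asyncio

from biz_bud.core.config import AppConfig            # assumed import path
from biz_bud.services.factory import ServiceFactory  # assumed import path


async def run_search(query: str) -> None:
    config = AppConfig()  # assumed constructor
    # The async context manager guarantees services are cleaned up even on error.
    async with ServiceFactory(config) as factory:
        search = await factory.get_service("search")  # assumed accessor name
        results = await search.run(query)             # assumed method name
        print(results)


asyncio.run(run_search("industrial pumps"))
```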
#### Focused State Architecture
- **Scoped states**: Each workflow uses focused TypedDict states instead of monolithic state
- **Type safety**: All states inherit from `BaseState` with proper field typing
- **Reducer patterns**: Use `Annotated[list[T], add]` for accumulating fields
- **Immutability**: Return state updates, never mutate state directly

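A small sketch of the scoped-state idea with one accumulating field (the field names are invented; the project's `BaseState` is replaced here with a plain `TypedDict` for self-containment):

```python
from operator import add
from typing import Annotated, TypedDict


class ResearchState(TypedDict, total=False):
    """Focused state for a single research workflow."""

    query: str
    # Reducer: updates to this field are appended rather than replaced.
    findings: Annotated[list[str], add]


def summarize_node(state: ResearchState) -> dict:
    # Return a state update instead of mutating `state` in place.
    return {"findings": [f"summary of {state.get('query', '')}"]}
```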
#### Provider-Based Tools
- **Unified interfaces**: All tools follow consistent provider patterns for extensibility
- **Auto-fallback**: Tools automatically select the best available provider
- **Structured models**: `SearchResult`, `ScrapedContent`, `ExtractedData` ensure interoperability
- **Service integration**: Tools leverage the service factory for consistent resource access

### Code Quality Standards

#### Type Safety
- **Modern typing**: Use `typing_extensions` for latest type annotations
- **No `Any` types**: Find specific types instead of using `Any`
- **Pydantic models**: Use for configuration and validation schemas
- **Field validation**: Apply constraints with `Field(ge=1, le=300)`

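For example, a constrained configuration field along the lines of the `Field(ge=1, le=300)` guidance (the model and field names are illustrative):

```python
from pydantic import BaseModel, Field, ValidationError


class TimeoutSettings(BaseModel):
    # Request timeout in seconds, validated at construction time.
    request_timeout: int = Field(default=30, ge=1, le=300)


try:
    TimeoutSettings(request_timeout=600)
except ValidationError as exc:
    print(exc)  # 600 is rejected because it exceeds le=300
```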
#### Error Handling
- **Custom exceptions**: Always use project-specific exceptions from `@src/biz_bud/core/errors/`
- **No generic exceptions**: Never use bare `Exception` or `ValueError`
- **Retry patterns**: Implement exponential backoff for external services
- **Circuit breakers**: Use for external dependencies with failure thresholds

#### LangGraph Patterns
- **Standard decorators**: Apply `@standard_node`, `@handle_errors`, `@log_node_execution`
- **Factory functions**: Create graphs through factory functions, not direct construction
- **Command routing**: Use `Command` pattern for intelligent state-based navigation
- **Reusable edge helpers**: Leverage built-in routing functions to prevent duplication

#### Resource Management
- **Context managers**: Use for automatic cleanup of services and connections
- **Connection pooling**: Configure appropriate pool sizes (5-20 connections)
- **Batch processing**: Process large datasets in chunks to avoid memory issues
- **Cache strategies**: Implement multi-level caching (memory, Redis, disk)

#### Configuration Management
- **Single source of truth**: Use `AppConfig` and `RunnableConfig`
- **Environment variables**: Load sensitive data from environment
- **Node-specific configs**: Use context-specific overrides when needed
- **Validation early**: Validate configuration at startup

#### Testing Patterns
- **Service mocking**: Mock services through factory for isolated testing
- **Fixture hierarchy**: Use pytest fixtures for reusable test setup
- **Integration tests**: Test full workflows with real backends
- **Error simulation**: Test retry logic and error handling paths

### Anti-Technical Debt Patterns
- **Centralized resource management**: Use context managers and cleanup registries
- **Configuration consolidation**: Single source through AppConfig
- **Standardized node patterns**: Consistent structure across all nodes
- **Reusable components**: DRY routing logic and common patterns

### Performance Optimization
- **Concurrent initialization**: Initialize multiple services concurrently
- **Critical services**: Pre-initialize essential services for faster startup
- **Memory efficiency**: Use weak references and proper cleanup
- **Caching layers**: Redis with TTL management and batch operations

## Code Design Recommendations

### State Management
- When interacting with state TypedDicts, prefer `.get()` over isinstance checks

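A short illustration of the `.get()` preference (hypothetical state fields):

```python
from typing import TypedDict


class CatalogState(TypedDict, total=False):
    vendor_name: str
    item_count: int


def describe(state: CatalogState) -> str:
    # Preferred: .get() with a default handles missing keys directly,
    # without isinstance checks on the state object.
    vendor = state.get("vendor_name", "unknown vendor")
    count = state.get("item_count", 0)
    return f"{vendor}: {count} items"
```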
## Task Master AI Instructions
**Import Task Master's development workflow commands and guidelines, treat as if import is in the main CLAUDE.md file.**
@./.taskmaster/CLAUDE.md
@./.taskmaster/CLAUDE.md
@@ -7,25 +7,19 @@ ENV PYTHONUNBUFFERED=1 \
    DEBIAN_FRONTEND=noninteractive \
    TZ=UTC

# Install system dependencies
# Install system dependencies, UV package manager, Node.js, and Python packages
RUN apt-get update && apt-get install -y \
    build-essential \
    ca-certificates \
    curl \
    git \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Install UV package manager
RUN pip install --no-cache-dir uv

# Install Node.js (required for some LangGraph features)
RUN curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - \
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir uv \
    && curl -fsSL https://deb.nodesource.com/setup_lts.x | bash - \
    && apt-get install -y nodejs \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Install LangGraph CLI and uvicorn
RUN pip install --no-cache-dir langgraph-cli uvicorn
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir langgraph-cli uvicorn

# Create app user
RUN useradd --create-home --shell /bin/bash app
Makefile (11 changes)
@@ -129,7 +129,7 @@ lint lint_diff lint_package lint_tests:
pyrefly:
	@bash -c "$(ACTIVATE) && pyrefly check src tests"

pyright:
basedpyright:
	@bash -c "$(ACTIVATE) && basedpyright src tests"

# Check for modern typing patterns and Pydantic v2 usage
@@ -202,6 +202,14 @@ langgraph-dev-local:
	@echo "🚀 Starting LangGraph development server with local studio..."
	@bash -c "$(ACTIVATE) && langgraph dev --host 0.0.0.0 --port 2024 --studio-local"

######################
# CODE ANALYSIS
######################

sonar:
	@echo "🔍 Running SonarQube analysis..."
	@bash -c "$(ACTIVATE) && ./scripts/checks/sonar_scan.sh"

######################
# HELP
######################

@@ -222,6 +230,7 @@ help:
	@echo 'test TEST_FILE=<test_file> - run all tests in file'
	@echo 'test_watch - run unit tests in watch mode'
	@echo 'coverage-report - generate HTML coverage report at htmlcov/index.html'
	@echo 'sonar - run SonarQube code analysis'
	@echo 'langgraph-dev - start LangGraph dev server (for containers/devcontainer)'
	@echo 'langgraph-dev-local - start LangGraph dev server with local studio'
	@echo 'tree - show tree of .py files in src/'
cicderrors.txt (new file, 901 lines)
@@ -0,0 +1,901 @@
Run echo "🔍 Running pyrefly type checking (not in pre-commit)..."
🔍 Running pyrefly type checking (not in pre-commit)...
ERROR Could not find name `FirecrawlApp` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:42:16
     |
  42 | async with FirecrawlApp() as app:
     |            ^^^^^^^^^^^^
     |
ERROR Could not find name `FirecrawlOptions` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:50:18
     |
  50 | url, FirecrawlOptions(formats=["markdown", "links"], only_main_content=True)
     |      ^^^^^^^^^^^^^^^^
     |
ERROR Could not find name `CrawlOptions` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:71:21
     |
  71 | options=CrawlOptions(
     |         ^^^^^^^^^^^^
     |
ERROR Could not find name `FirecrawlOptions` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:74:32
     |
  74 | scrape_options=FirecrawlOptions(formats=["markdown"], only_main_content=True),
     |                ^^^^^^^^^^^^^^^^
     |
ERROR Could not find name `CrawlJob` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:79:34
     |
  79 | if isinstance(crawl_job, CrawlJob):
     |                          ^^^^^^^^
     |
ERROR Could not find name `CrawlJob` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:127:30
     |
 127 | def status_callback(job: CrawlJob) -> None:
     |                          ^^^^^^^^
     |
ERROR Could not find name `FirecrawlApp` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:151:16
     |
 151 | async with FirecrawlApp() as app:
     |            ^^^^^^^^^^^^
     |
ERROR Could not find name `CrawlOptions` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:156:19
     |
 156 | options = CrawlOptions(
     |           ^^^^^^^^^^^^
     |
ERROR Could not find name `FirecrawlOptions` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:159:28
     |
 159 | scrape_options=FirecrawlOptions(
     |                ^^^^^^^^^^^^^^^^
     |
ERROR Could not find name `CrawlJob` [unknown-name]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:182:40
     |
 182 | if isinstance(initial_job, CrawlJob) and initial_job.job_id:
     |                            ^^^^^^^^
     |
ERROR Could not import `FirecrawlApp` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:256:49
     |
 256 | from biz_bud.tools.clients.firecrawl import FirecrawlApp, FirecrawlOptions
     |                                              ^^^^^^^^^^^^
     |
ERROR Could not import `FirecrawlOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:256:63
     |
 256 | from biz_bud.tools.clients.firecrawl import FirecrawlApp, FirecrawlOptions
     |                                                            ^^^^^^^^^^^^^^^^
     |
ERROR Object of class `dict` has no attribute `data` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:301:17
     |
 301 | and r.data
     |     ^^^^^^
     |
ERROR Object of class `list` has no attribute `data` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/firecrawl_monitoring_example.py:301:17
     |
 301 | and r.data
     |     ^^^^^^
     |
ERROR Object of class `bool` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:56:24
    |
 56 | if processing_result.get("skipped"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `str` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:56:24
    |
 56 | if processing_result.get("skipped"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `bool` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:57:45
    |
 57 | print(f"\nSkipped: {processing_result.get('reason')}")
    |                     ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `str` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:57:45
    |
 57 | print(f"\nSkipped: {processing_result.get('reason')}")
    |                     ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `bool` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:60:28
    |
 60 | if processing_result.get("scraped_content"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `str` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:60:28
    |
 60 | if processing_result.get("scraped_content"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Can't apply arguments to non-class, got Literal[True] [bad-specialization]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:61:57
    |
 61 | ... print(f"Pages scraped: {len(processing_result['scraped_content'])}")
    |                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Cannot index into `str` [no-matching-overload]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:61:57
    |
 61 | ... print(f"Pages scraped: {len(processing_result['scraped_content'])}")
    |                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
No matching overload found for function `str.__getitem__`
Possible overloads:
(key: SupportsIndex | slice[Any, Any, Any], /) -> LiteralString
(key: SupportsIndex | slice[Any, Any, Any], /) -> str [closest match]
ERROR Object of class `bool` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:62:28
    |
 62 | if processing_result.get("r2r_dataset_id"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Object of class `str` has no attribute `get` [missing-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:62:28
    |
 62 | if processing_result.get("r2r_dataset_id"):
    |    ^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Can't apply arguments to non-class, got Literal[True] [bad-specialization]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:63:51
    |
 63 | ... print(f"R2R dataset: {processing_result['r2r_dataset_id']}")
    |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
ERROR Cannot index into `str` [no-matching-overload]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:63:51
    |
 63 | ... print(f"R2R dataset: {processing_result['r2r_dataset_id']}")
    |                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
No matching overload found for function `str.__getitem__`
Possible overloads:
(key: SupportsIndex | slice[Any, Any, Any], /) -> LiteralString
(key: SupportsIndex | slice[Any, Any, Any], /) -> str [closest match]
ERROR Could not import `ExtractOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:72:9
    |
 72 | ExtractOptions,
    | ^^^^^^^^^^^^^^
    |
ERROR Could not import `FirecrawlApp` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:73:9
    |
 73 | FirecrawlApp,
    | ^^^^^^^^^^^^
    |
ERROR Could not import `MapOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:74:9
    |
 74 | MapOptions,
    | ^^^^^^^^^^
    |
ERROR Could not import `SearchOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_rag_agent_firecrawl.py:75:9
    |
 75 | SearchOptions,
    | ^^^^^^^^^^^^^
    |
ERROR Could not import `CrawlJob` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_selfhosted_firecrawl_rag.py:14:45
    |
 14 | from biz_bud.tools.clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
    |                                              ^^^^^^^^
    |
ERROR Could not import `CrawlOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_selfhosted_firecrawl_rag.py:14:55
    |
 14 | from biz_bud.tools.clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
    |                                                        ^^^^^^^^^^^^
    |
ERROR Could not import `FirecrawlApp` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_selfhosted_firecrawl_rag.py:14:69
    |
 14 | from biz_bud.tools.clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
    |                                                                      ^^^^^^^^^^^^
    |
ERROR Could not import `FirecrawlOptions` from `biz_bud.tools.clients.firecrawl` [missing-module-attribute]
 --> /home/runner/work/biz-bud/biz-bud/examples/test_selfhosted_firecrawl_rag.py:14:83
    |
 14 | from biz_bud.tools.clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
    |                                                                                    ^^^^^^^^^^^^^^^^
    |
ERROR `in` is not supported between `Literal['biz_bud.tools.clients']` and `None` [unsupported-operand]
 --> /home/runner/work/biz-bud/biz-bud/scripts/checks/audit_core_dependencies.py:134:20
     |
 134 | if "biz_bud.tools.clients" in node.module or \
     |    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     |
ERROR `in` is not supported between `Literal['biz_bud.services']` and `None` [unsupported-operand]
 --> /home/runner/work/biz-bud/biz-bud/scripts/checks/audit_core_dependencies.py:135:20
     |
 135 | "biz_bud.services" in node.module and "factory" not in node.module:
     | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     |
ERROR `not in` is not supported between `Literal['factory']` and `None` [unsupported-operand]
 --> /home/runner/work/biz-bud/biz-bud/scripts/checks/audit_core_dependencies.py:135:58
     |
 135 | "biz_bud.services" in node.module and "factory" not in node.module:
     |                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
     |
ERROR Could not find import of `biz_bud.validation` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/scripts/demo_validation_system.py:21:1
    |
 21 | from biz_bud.validation import ValidationRunner # noqa: E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `biz_bud.validation.agent_validators` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/scripts/demo_validation_system.py:22:1
    |
 22 | / from biz_bud.validation.agent_validators import ( # noqa: E402
 23 | | BuddyAgentValidator,
 24 | | CapabilityResolutionValidator,
 25 | | ToolFactoryValidator,
 26 | | )
    | |_^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `biz_bud.validation.base` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/scripts/demo_validation_system.py:27:1
    |
 27 | from biz_bud.validation.base import BaseValidator # noqa: E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `biz_bud.validation.deployment_validators` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/scripts/demo_validation_system.py:28:1
    |
 28 | / from biz_bud.validation.deployment_validators import ( # noqa: E402
 29 | | PerformanceValidator,
 30 | | StateManagementValidator,
 31 | | )
    | |_^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `biz_bud.validation.registry_validators` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/scripts/demo_validation_system.py:32:1
    |
 32 | / from biz_bud.validation.registry_validators import ( # noqa: E402
 33 | | CapabilityConsistencyValidator,
 34 | | ComponentDiscoveryValidator,
 35 | | RegistryIntegrityValidator,
 36 | | )
    | |_^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:31:1
    |
 31 | from tests.helpers.assertions.custom_assertions import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:32:1
    |
 32 | from tests.helpers.factories.state_factories import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.fixtures.config_fixtures` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:33:1
    |
 33 | from tests.helpers.fixtures.config_fixtures import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.fixtures.factory_fixtures` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:34:1
    |
 34 | from tests.helpers.fixtures.factory_fixtures import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.fixtures.mock_fixtures` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:35:1
    |
 35 | from tests.helpers.fixtures.mock_fixtures import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.fixtures.state_fixtures` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:36:1
    |
 36 | from tests.helpers.fixtures.state_fixtures import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/conftest.py:37:1
    |
 37 | from tests.helpers.mocks.mock_builders import * # noqa: F401, F403, E402
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_analysis_workflow_e2e.py:11:1
    |
 11 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_catalog_intel_caribbean_e2e.py:12:1
    |
 12 | from tests.helpers.assertions.custom_assertions import assert_state_has_no_errors
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_catalog_intel_caribbean_e2e.py:13:1
    |
 13 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_catalog_intel_workflow_e2e.py:12:1
    |
 12 | from tests.helpers.assertions.custom_assertions import assert_state_has_no_errors
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_catalog_intel_workflow_e2e.py:13:1
    |
 13 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_rag_workflow_e2e.py:12:1
    |
 12 | from tests.helpers.assertions.custom_assertions import assert_state_has_no_errors
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
 --> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_rag_workflow_e2e.py:13:1
    |
 13 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_research_workflow_e2e.py:12:1
|
||||
|
|
||||
12 | / from tests.helpers.assertions.custom_assertions import (
|
||||
13 | | assert_state_has_messages,
|
||||
14 | | assert_state_has_no_errors,
|
||||
15 | | )
|
||||
| |_^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/e2e/test_research_workflow_e2e.py:16:1
|
||||
|
|
||||
16 | from tests.helpers.mocks.mock_builders import MockLLMBuilder, MockSearchToolBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.config_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/helpers/fixtures/__init__.py:3:1
|
||||
|
|
||||
3 | from tests.helpers.fixtures.config_fixtures import * # noqa: F403,F401
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.factory_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/helpers/fixtures/__init__.py:4:1
|
||||
|
|
||||
4 | from tests.helpers.fixtures.factory_fixtures import * # noqa: F403,F401
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.mock_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/helpers/fixtures/__init__.py:5:1
|
||||
|
|
||||
5 | from tests.helpers.fixtures.mock_fixtures import * # noqa: F403,F401
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.state_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/helpers/fixtures/__init__.py:6:1
|
||||
|
|
||||
6 | from tests.helpers.fixtures.state_fixtures import * # noqa: F403,F401
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/helpers/fixtures/state_fixtures.py:7:1
|
||||
|
|
||||
7 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_graph_wiring.py:10:1
|
||||
|
|
||||
10 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_synthesis_flow.py:16:1
|
||||
|
|
||||
16 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_synthesis_flow.py:147:9
|
||||
|
|
||||
147 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.mock_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_synthesis_flow.py:156:9
|
||||
|
|
||||
156 | from tests.helpers.fixtures.mock_fixtures import create_mock_service_factory
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_synthesis_flow.py:278:9
|
||||
|
|
||||
278 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.fixtures.mock_fixtures` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/graphs/test_research_synthesis_flow.py:287:9
|
||||
|
|
||||
287 | from tests.helpers.fixtures.mock_fixtures import create_mock_service_factory
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/nodes/extraction/test_semantic_extraction_debug_integration.py:15:1
|
||||
|
|
||||
15 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/nodes/extraction/test_semantic_extraction_debug_integration.py:16:1
|
||||
|
|
||||
16 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/integration_tests/services/test_llm_json_extraction_integration.py:10:1
|
||||
|
|
||||
10 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/meta/test_fixture_architecture.py:10:1
|
||||
|
|
||||
10 | / from tests.helpers.assertions.custom_assertions import (
|
||||
11 | | assert_message_types,
|
||||
12 | | assert_metadata_contains,
|
||||
13 | | assert_search_results_valid,
|
||||
14 | | assert_state_has_errors,
|
||||
15 | | assert_state_has_messages,
|
||||
| |_______________________________^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/meta/test_fixture_architecture.py:20:1
|
||||
|
|
||||
20 | / from tests.helpers.factories.state_factories import (
|
||||
21 | | StateBuilder,
|
||||
22 | | create_error_state,
|
||||
23 | | create_menu_intelligence_state,
|
||||
24 | | create_research_state,
|
||||
25 | | )
|
||||
| |_^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/meta/test_fixture_architecture.py:26:1
|
||||
|
|
||||
26 | / from tests.helpers.mocks.mock_builders import (
|
||||
27 | | MockLLMBuilder,
|
||||
28 | | MockRedisBuilder,
|
||||
29 | | MockSearchToolBuilder,
|
||||
30 | | )
|
||||
| |_^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/meta/test_simple_fixtures.py:7:1
|
||||
|
|
||||
7 | from tests.helpers.assertions.custom_assertions import assert_state_has_messages
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/meta/test_simple_fixtures.py:10:1
|
||||
|
|
||||
10 | from tests.helpers.factories.state_factories import StateBuilder, create_research_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mocks.mock_builders` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/graphs/test_research.py:17:1
|
||||
|
|
||||
17 | from tests.helpers.mocks.mock_builders import MockLLMBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/graphs/test_research.py:49:5
|
||||
|
|
||||
49 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/graphs/test_research.py:131:9
|
||||
|
|
||||
131 | from tests.helpers.assertions.custom_assertions import assert_state_has_no_errors
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.assertions.custom_assertions` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/extraction/test_orchestrator.py:11:1
|
||||
|
|
||||
11 | from tests.helpers.assertions.custom_assertions import assert_state_has_no_errors
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_agent_nodes.py:18:1
|
||||
|
|
||||
18 | from tests.helpers.factories.state_factories import create_minimal_rag_agent_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_agent_nodes_r2r.py:14:1
|
||||
|
|
||||
14 | from tests.helpers.factories.state_factories import create_minimal_rag_agent_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_check_duplicate.py:34:9
|
||||
|
|
||||
34 | from tests.helpers.factories.state_factories import create_minimal_rag_agent_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_check_duplicate.py:163:13
|
||||
|
|
||||
163 | from tests.helpers.factories.state_factories import create_minimal_rag_agent_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_check_duplicate.py:201:9
|
||||
|
|
||||
201 | from tests.helpers.factories.state_factories import create_minimal_rag_agent_state
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:104:9
|
||||
|
|
||||
104 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:205:9
|
||||
|
|
||||
205 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:296:9
|
||||
|
|
||||
296 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:394:9
|
||||
|
|
||||
394 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:481:9
|
||||
|
|
||||
481 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:569:9
|
||||
|
|
||||
569 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:610:9
|
||||
|
|
||||
610 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:666:9
|
||||
|
|
||||
666 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:714:9
|
||||
|
|
||||
714 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:772:9
|
||||
|
|
||||
772 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:839:9
|
||||
|
|
||||
839 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:903:9
|
||||
|
|
||||
903 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/rag/test_upload_r2r.py:959:9
|
||||
|
|
||||
959 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.factories.state_factories` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/nodes/scraping/test_scrape_summary.py:10:1
|
||||
|
|
||||
10 | from tests.helpers.factories.state_factories import StateBuilder
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
ERROR Could not find import of `tests.helpers.mock_helpers` [import-error]
|
||||
--> /home/runner/work/biz-bud/biz-bud/tests/unit_tests/services/test_redis_backend.py:11:1
|
||||
|
|
||||
11 | from tests.helpers.mock_helpers import create_mock_redis_client
|
||||
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
|
||||
Looked in these locations (from config in `/home/runner/work/biz-bud/biz-bud/pyrefly.toml`):
|
||||
Search path (from config file): ["/home/runner/work/biz-bud/biz-bud/src", "/home/runner/work/biz-bud/biz-bud/tests"]
|
||||
Import root (inferred from project layout): "/home/runner/work/biz-bud/biz-bud/src"
|
||||
Site package path queried from interpreter: ["/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12", "/opt/hostedtoolcache/Python/3.12.11/x64/lib/python3.12/lib-dynload", "/home/runner/work/biz-bud/biz-bud/.venv/lib/python3.12/site-packages", "/home/runner/work/biz-bud/biz-bud/src"]
|
||||
INFO errors shown: 101, errors ignored: 72, modules: 675, transitive dependencies: 8,473, lines: 4,388,206, time: 33.00s, peak memory: physical 1.6 GiB
|
||||
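Every failure above has the same shape: pyrefly cannot resolve absolute `tests.helpers.*` imports even though `.../biz-bud/tests` is on the configured search path, because with `tests/` itself as a search root the importable top-level package is `helpers`, not `tests`. A minimal sketch of that resolution rule, reusing the runner paths from the log; the `REPO_ROOT` constant and the idea of adding the repository root to the path are assumptions for illustration, not something this PR settles:

```python
# Minimal sketch (not from the PR): why `tests.helpers.*` fails to resolve when the
# search path contains ".../tests" but not the repository root that contains it.
import importlib.util
import sys

REPO_ROOT = "/home/runner/work/biz-bud/biz-bud"  # path taken from the log above


def can_resolve(module: str) -> bool:
    """True if the current sys.path can locate `module` (parent packages included)."""
    try:
        return importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:
        return False


# Mirroring the pyrefly search path: src/ and tests/ are roots, so the top-level
# importable package under tests/ is `helpers`, not `tests`.
sys.path[:0] = [f"{REPO_ROOT}/src", f"{REPO_ROOT}/tests"]
print(can_resolve("tests.helpers.mocks.mock_builders"))  # False with this layout

# Adding the repository root itself makes `tests` a resolvable top-level package,
# which is what the absolute `tests.helpers...` imports in conftest.py expect.
sys.path.insert(0, REPO_ROOT)
print(can_resolve("tests.helpers.mocks.mock_builders"))  # True once tests/ resolves as a package
```

Whether the right fix is a search-path change in pyrefly.toml or a different import style in the test helpers is left open here; the sketch only illustrates the resolution rule behind the errors.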
config.yaml
@@ -296,6 +296,9 @@ database_config:
  # postgres_db: null
  # postgres_host: null
  # postgres_port: 5432
  # postgres_min_pool_size: 2
  # postgres_max_pool_size: 15
  # postgres_command_timeout: 10
  default_page_size: 100
  max_page_size: 1000

@@ -308,9 +311,9 @@ proxy_config:

# Redis configuration
# Env Override: REDIS_URL
# redis_config:
#   redis_url: "redis://localhost:6379/0"
#   key_prefix: "biz_bud:"
redis_config:
  # redis_url: "redis://localhost:6379/0" # Set via environment variable
  # key_prefix: "biz_bud:"

# ------------------------------------------------------------------------------
# WORKFLOW AND FEATURE CONFIGURATIONS

@@ -320,15 +323,12 @@ proxy_config:
rag_config:
  crawl_depth: 2
  use_crawl_endpoint: false # Use map+scrape for better discovery on documentation sites
  use_map_first: true # Use map endpoint for URL discovery (recommended for docs sites)
  use_firecrawl_extract: true
  batch_size: 10
  enable_semantic_chunking: true
  chunk_size: 1000
  chunk_overlap: 200
  embedding_model: "openai/text-embedding-3-small"
  skip_if_url_exists: true # New field: Skip processing if URL is already in R2R
  reuse_existing_dataset: true # New field: Use existing R2R collection if found
  custom_dataset_name: "business-buddy" # Custom name for R2R collection
  max_pages_to_map: 2000 # Max pages to discover during URL mapping
  max_pages_to_crawl: 2000 # Max pages to process after discovery (increased from default 20)
  # extraction_prompt: null # Optional custom prompt for Firecrawl's extract feature
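The `chunk_size: 1000` / `chunk_overlap: 200` pair above only controls how much adjacent chunks share. A rough sketch of that arithmetic with a plain sliding window; this is illustrative only, since the config enables semantic chunking rather than fixed windows:

```python
# Illustrative only: a plain sliding-window chunker showing how chunk_size and
# chunk_overlap from rag_config interact. Not the project's chunking implementation.
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200) -> list[str]:
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap  # each chunk starts 800 characters after the previous one
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]


chunks = chunk_text("x" * 2500)
print([len(c) for c in chunks])  # [1000, 1000, 900]: adjacent chunks share 200 characters
```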
@@ -397,7 +397,7 @@ search_optimization:
  enable_metrics: true
  metrics_window_size: 1000

# Error Handling configuration (NEW SECTION)
# Error Handling configuration
error_handling:
  max_retry_attempts: 3
  retry_backoff_base: 1.5

@@ -405,36 +405,56 @@ error_handling:
  enable_llm_analysis: true
  recovery_timeout: 300
  enable_auto_recovery: true
  # Define rules for classifying error severity
  # Define rules for classifying error severity (corrected structure)
  criticality_rules:
    - type: "AuthenticationError"
      severity: "critical"
    - type: "ConfigurationError"
      severity: "critical"
    - category: "network"
      severity: "high"
      retryable: true
  # Define recovery strategies for different error types
    - pattern: "rate.limit|quota.exceeded"
      criticality: "medium"
      can_continue: true
    - pattern: "unauthorized|403|invalid.api.key"
      criticality: "critical"
      can_continue: false
    - pattern: "timeout|deadline.exceeded"
      criticality: "low"
      can_continue: true
    - pattern: "authentication|auth.*error"
      criticality: "critical"
      can_continue: false
    - pattern: "network|connection.*error"
      criticality: "medium"
      can_continue: true
  # Define recovery strategies for different error types (corrected structure)
  recovery_strategies:
    rate_limit:
      - action: "retry_with_backoff"
        parameters: { backoff_base: 2.0, max_delay: 120 }
        priority: 10
      - action: "fallback"
        parameters: { fallback_type: "provider" }
        priority: 20
        parameters: { initial_delay: 5, max_delay: 60 }
      - action: "switch_provider"
        parameters: { providers: ["openai", "anthropic", "google"] }
    context_overflow:
      - action: "trim_context"
        parameters: { strategy: "sliding_window", window_size: 0.8 }
      - action: "chunk_input"
        parameters: { chunk_size: 1000, overlap: 100 }
    network:
      - action: "retry_with_backoff"
        parameters: { backoff_base: 1.5, max_delay: 60 }
        priority: 10
        parameters: { initial_delay: 2, max_delay: 30 }

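The corrected `criticality_rules` read as an ordered list of regex patterns, each carrying a criticality level and a continue/stop decision. A small sketch of how such rules could be evaluated first-match-wins; the matching order, the case-insensitive matching, and the conservative fallback are assumptions for illustration, not taken from the project's error handler:

```python
# Sketch (assumed semantics, not the project's implementation): the first matching
# pattern from error_handling.criticality_rules decides how an error is classified.
import re

CRITICALITY_RULES = [
    {"pattern": r"rate.limit|quota.exceeded", "criticality": "medium", "can_continue": True},
    {"pattern": r"unauthorized|403|invalid.api.key", "criticality": "critical", "can_continue": False},
    {"pattern": r"timeout|deadline.exceeded", "criticality": "low", "can_continue": True},
    {"pattern": r"authentication|auth.*error", "criticality": "critical", "can_continue": False},
    {"pattern": r"network|connection.*error", "criticality": "medium", "can_continue": True},
]


def classify(error_message: str) -> dict:
    """Return the first rule whose pattern matches, or a conservative default."""
    for rule in CRITICALITY_RULES:
        if re.search(rule["pattern"], error_message, re.IGNORECASE):
            return rule
    return {"pattern": None, "criticality": "high", "can_continue": False}


print(classify("OpenAI rate limit exceeded for gpt-4o"))       # medium, can_continue=True
print(classify("401 unauthorized: invalid API key provided"))  # critical, can_continue=False
```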
# Tools configuration
tools:
  search:
    # name: null
    # max_results: null
    ranking:
      strategy: "basic_scoring" # Options: basic_scoring, jina_rerank, hybrid
      hybrid_weight: 0.7
      jina_rerank:
        model: "jina-reranker-v2-base-multilingual"
        timeout: 30.0
        max_retries: 3
        enable_fallback: true
  extract:
    # name: null
    chunk_size: 8000
    min_content_length: 100
  web_tools:
    scraper_timeout: 30
    max_concurrent_scrapes: 5

@@ -455,6 +475,20 @@ tools:
    max_retries: 3
    follow_redirects: true
    verify_ssl: true
  # Tool factory configuration for LangGraph integration
  factory:
    enable_caching: true
    cache_ttl_seconds: 3600.0
    max_cached_tools: 100
    auto_register_nodes: true
    auto_register_graphs: true
    default_tool_timeout: 300.0
  # State integration configuration
  state_integration:
    enable_state_validation: true
    preserve_message_history: true
    max_state_history_length: 50
    auto_enrich_state: true

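The `tools.search.ranking` block above exposes three strategies and a `hybrid_weight` of 0.7. A sketch of what a hybrid blend could look like; weighting 0.7 toward the reranker score is an assumption about the knob's direction, not something this diff confirms:

```python
# Hypothetical blend of the basic relevance score with a reranker score, weighted by
# hybrid_weight from the config. Field names and weighting direction are assumptions.
def hybrid_score(basic: float, rerank: float, hybrid_weight: float = 0.7) -> float:
    return hybrid_weight * rerank + (1.0 - hybrid_weight) * basic


results = [
    {"url": "https://example.com/a", "basic": 0.9, "rerank": 0.4},
    {"url": "https://example.com/b", "basic": 0.5, "rerank": 0.8},
]
ranked = sorted(results, key=lambda r: hybrid_score(r["basic"], r["rerank"]), reverse=True)
print([r["url"] for r in ranked])  # b first: 0.7*0.8 + 0.3*0.5 = 0.71 vs 0.55 for a
```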
# ------------------------------------------------------------------------------
# GENERAL APPLICATION SETTINGS

@@ -465,9 +499,9 @@ feature_flags:
  enable_advanced_reasoning: false
  enable_streaming_response: true
  enable_tool_caching: true
  enable_parallel_tools: true
  enable_parallel_tools: false # Schema default is false, not true
  enable_memory_optimization: true
  # experimental_features: {}
  experimental_features: {} # Required by schema

# Rate limits configuration
rate_limits:

@@ -480,10 +514,11 @@ rate_limits:

# Telemetry configuration
telemetry_config:
  enabled: null # Schema field (separate from enable_telemetry)
  enable_telemetry: false
  collect_performance_metrics: false
  collect_usage_statistics: false
  error_reporting_level: "minimal" # Options: none, minimal, full
  metrics_export_interval: 300
  metrics_retention_days: 30
  # custom_metrics: {}
  custom_metrics: {} # Required by schema

@@ -65,10 +65,25 @@ RUN pip3 install --no-cache-dir langgraph-cli
# Install repomix via npm (correct package name)
RUN npm install -g repomix

# Install Task Master AI MCP server
# Install Task Master AI MCP server (includes Claude Code CLI as dependency)
RUN npm install -g task-master-ai

# Note: Claude Code CLI is installed locally via host bind mount at /home/dev/.claude/local/
# Create chown wrapper to handle malformed Windsurf commands
RUN echo '#!/bin/bash\n\
# Wrapper to handle malformed chown commands from Windsurf\n\
args=()\n\
for arg in "$@"; do\n\
    if [[ "$arg" =~ ^[0-9]+:[0-9]+:$ ]]; then\n\
        arg="${arg%:}"\n\
    fi\n\
    args+=("$arg")\n\
done\n\
exec /usr/bin/chown-original "${args[@]}"' > /usr/local/bin/chown-wrapper && \
    chmod +x /usr/local/bin/chown-wrapper && \
    mv /usr/bin/chown /usr/bin/chown-original && \
    ln -s /usr/local/bin/chown-wrapper /usr/bin/chown

# Note: Claude Code CLI is included with task-master-ai installation. Host bind mount at /home/dev/.claude/ provides config/history.

# Create a non-root user matching host user (UID 1000)
# Use build args to allow flexible UID/GID mapping
@@ -76,7 +91,7 @@ ARG USER_ID=1000
ARG GROUP_ID=1000
RUN groupadd -g ${GROUP_ID} dev && \
    useradd -u ${USER_ID} -g ${GROUP_ID} -ms /bin/bash dev && \
    echo "dev ALL=(ALL) NOPASSWD: /usr/bin/chown, /usr/bin/chmod, /usr/bin/mkdir, /usr/bin/touch, /usr/bin/rm, /usr/bin/ln" > /etc/sudoers.d/dev && \
    echo "dev ALL=(ALL) NOPASSWD: /usr/bin/chown *, /usr/bin/chmod *, /usr/bin/mkdir *, /usr/bin/touch *, /usr/bin/rm *, /usr/bin/ln *" > /etc/sudoers.d/dev && \
    chmod 0440 /etc/sudoers.d/dev && \
    mkdir -p /home/dev/.vscode-server /home/dev/.config && \
    chown -R dev:dev /home/dev
@@ -95,4 +110,4 @@ USER dev
# Set the entrypoint with tini for proper signal handling
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"]

# Copy SQL init script for Postgres
# SQL init script is mounted via volume in compose-dev.yaml

@@ -35,12 +35,13 @@ services:
|
||||
- type: bind
|
||||
source: ..
|
||||
target: /app
|
||||
# Docker socket for container management (optional)
|
||||
- type: bind
|
||||
source: /var/run/docker.sock
|
||||
target: /var/run/docker.sock
|
||||
read_only: true
|
||||
# Git configuration (conditional)
|
||||
# Docker socket for container management (optional) - COMMENTED OUT FOR SECURITY
|
||||
# Only uncomment if you need to manage Docker containers from within the app container
|
||||
# - type: bind
|
||||
# source: /var/run/docker.sock
|
||||
# target: /var/run/docker.sock
|
||||
# read_only: true
|
||||
# Git configuration (conditional - will fail gracefully if not present)
|
||||
- type: bind
|
||||
source: ${HOME}/.gitconfig
|
||||
target: /home/dev/.gitconfig
|
||||
@@ -109,7 +110,7 @@ services:
|
||||
- "5432"
|
||||
volumes:
|
||||
- pgdata:/var/lib/postgresql/data
|
||||
- ./init-items.sql:/docker-entrypoint-initdb.d/init-items.sql:ro
|
||||
- ./db-init:/docker-entrypoint-initdb.d:ro
|
||||
networks:
|
||||
- biz_bud_network
|
||||
healthcheck:
|
||||
@@ -117,6 +118,7 @@ services:
|
||||
interval: 5s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
start_period: 10s
|
||||
|
||||
redis:
|
||||
image: redis:7
|
||||
|
||||
docker/db-init/000-base-tables.sql (new file, 118 lines)
@@ -0,0 +1,118 @@
|
||||
-- Base receipt processing schema
|
||||
-- This file must run first (prefixed with 000-)
|
||||
|
||||
-- Create sequences first
|
||||
CREATE SEQUENCE IF NOT EXISTS receipts_id_seq;
|
||||
CREATE SEQUENCE IF NOT EXISTS receipt_line_items_id_seq;
|
||||
|
||||
-- Create base receipts table
|
||||
CREATE TABLE IF NOT EXISTS rpt_receipts (
|
||||
id numeric DEFAULT nextval('receipts_id_seq'::regclass) NOT NULL,
|
||||
vendor_name varchar(255) NOT NULL,
|
||||
vendor_address text NULL,
|
||||
transaction_date date NULL,
|
||||
transaction_time time NULL,
|
||||
receipt_number varchar(100) NULL,
|
||||
customer_info text NULL,
|
||||
subtotal numeric(10, 2) NULL,
|
||||
tax_amount numeric(10, 2) NULL,
|
||||
final_total numeric(10, 2) NULL,
|
||||
total_items int4 NULL,
|
||||
payment_method varchar(50) NULL,
|
||||
card_last_four bpchar(4) NULL,
|
||||
card_type varchar(20) NULL,
|
||||
raw_receipt_text text NULL,
|
||||
created_at timestamptz DEFAULT now() NULL,
|
||||
updated_at timestamptz DEFAULT now() NULL,
|
||||
CONSTRAINT receipts_pkey PRIMARY KEY (id)
|
||||
);
|
||||
|
||||
-- Create base indexes
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipts_payment_method ON rpt_receipts USING btree (payment_method);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipts_transaction_date ON rpt_receipts USING btree (transaction_date);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipts_vendor_name ON rpt_receipts USING btree (vendor_name);
|
||||
|
||||
-- Create master products table
|
||||
CREATE TABLE IF NOT EXISTS rpt_master_products (
|
||||
id serial4 NOT NULL,
|
||||
canonical_name varchar(500) NOT NULL,
|
||||
canonical_description text NULL,
|
||||
category varchar(100) NULL,
|
||||
unit_of_measure varchar(50) NULL,
|
||||
estimated_unit_price numeric(10, 2) NULL,
|
||||
is_active bool DEFAULT true NULL,
|
||||
first_seen_date timestamptz DEFAULT now() NULL,
|
||||
last_updated timestamptz DEFAULT now() NULL,
|
||||
total_occurrences int4 DEFAULT 1 NULL,
|
||||
CONSTRAINT rpt_master_products_pkey PRIMARY KEY (id)
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_master_products_canonical_name ON rpt_master_products USING btree (canonical_name);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_master_products_category ON rpt_master_products USING btree (category);
|
||||
|
||||
-- Create product variations table
|
||||
CREATE TABLE IF NOT EXISTS rpt_product_variations (
|
||||
id serial4 NOT NULL,
|
||||
master_product_id int4 NOT NULL,
|
||||
original_description varchar(500) NOT NULL,
|
||||
verified_description varchar(500) NULL,
|
||||
confidence_score numeric(3, 2) NULL,
|
||||
verification_source varchar(100) NULL,
|
||||
search_variations_used text[] NULL,
|
||||
successful_variation varchar(500) NULL,
|
||||
verification_notes text NULL,
|
||||
created_at timestamptz DEFAULT now() NULL,
|
||||
occurrence_count int4 DEFAULT 1 NULL,
|
||||
CONSTRAINT rpt_product_variations_pkey PRIMARY KEY (id),
|
||||
CONSTRAINT rpt_product_variations_master_product_fkey FOREIGN KEY (master_product_id) REFERENCES rpt_master_products(id)
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_product_variations_master_id ON rpt_product_variations USING btree (master_product_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_product_variations_original ON rpt_product_variations USING btree (original_description);
|
||||
|
||||
-- Create receipt line items table
|
||||
CREATE TABLE IF NOT EXISTS rpt_receipt_line_items (
|
||||
id int4 DEFAULT nextval('receipt_line_items_id_seq'::regclass) NOT NULL,
|
||||
receipt_id numeric NOT NULL,
|
||||
line_number int4 NULL,
|
||||
product_name varchar(500) NOT NULL,
|
||||
product_code varchar(100) NULL,
|
||||
quantity numeric(10, 3) NULL,
|
||||
unit_of_measure varchar(50) NULL,
|
||||
unit_price numeric(10, 2) NULL,
|
||||
total_price numeric(10, 2) NULL,
|
||||
category varchar(100) NULL,
|
||||
created_at timestamptz DEFAULT now() NULL,
|
||||
master_product_id int4 NULL,
|
||||
product_variation_id int4 NULL,
|
||||
original_ocr_text varchar(500) NULL,
|
||||
reconciliation_status varchar(50) DEFAULT 'pending'::character varying NULL,
|
||||
reconciliation_confidence numeric(3, 2) NULL,
|
||||
needs_review bool DEFAULT false NULL,
|
||||
CONSTRAINT receipt_line_items_pkey PRIMARY KEY (id),
|
||||
CONSTRAINT rpt_receipt_line_items_master_product_fkey FOREIGN KEY (master_product_id) REFERENCES rpt_master_products(id),
|
||||
CONSTRAINT rpt_receipt_line_items_receipt_id_fkey FOREIGN KEY (receipt_id) REFERENCES rpt_receipts(id),
|
||||
CONSTRAINT rpt_receipt_line_items_variation_fkey FOREIGN KEY (product_variation_id) REFERENCES rpt_product_variations(id)
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipt_line_items_master_product ON rpt_receipt_line_items USING btree (master_product_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipt_line_items_receipt_id ON rpt_receipt_line_items USING btree (receipt_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipt_line_items_reconciliation_status ON rpt_receipt_line_items USING btree (reconciliation_status);
|
||||
|
||||
-- Create reconciliation log table
|
||||
CREATE TABLE IF NOT EXISTS rpt_reconciliation_log (
|
||||
id serial4 NOT NULL,
|
||||
receipt_line_item_id int4 NOT NULL,
|
||||
original_description varchar(500) NULL,
|
||||
suggested_master_product_id int4 NULL,
|
||||
action_taken varchar(50) NULL,
|
||||
confidence_score numeric(3, 2) NULL,
|
||||
reconciled_by varchar(100) NULL,
|
||||
reconciled_at timestamptz DEFAULT now() NULL,
|
||||
notes text NULL,
|
||||
CONSTRAINT rpt_reconciliation_log_pkey PRIMARY KEY (id),
|
||||
CONSTRAINT rpt_reconciliation_log_line_item_fkey FOREIGN KEY (receipt_line_item_id) REFERENCES rpt_receipt_line_items(id),
|
||||
CONSTRAINT rpt_reconciliation_log_master_product_fkey FOREIGN KEY (suggested_master_product_id) REFERENCES rpt_master_products(id)
|
||||
);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_reconciliation_log_receipt_item ON rpt_reconciliation_log USING btree (receipt_line_item_id);
|
||||
docker/db-init/002-init-master-vendors.sql (new file, 26 lines)
@@ -0,0 +1,26 @@
|
||||
-- Master vendors table (depends on base tables)
|
||||
CREATE TABLE IF NOT EXISTS rpt_master_vendors (
|
||||
id serial4 PRIMARY KEY,
|
||||
canonical_name varchar(255) NOT NULL,
|
||||
display_name varchar(255) NULL,
|
||||
canonical_address text NULL,
|
||||
country_iso2 char(2) NULL,
|
||||
business_reg_no varchar(100) NULL,
|
||||
phone varchar(30) NULL,
|
||||
website varchar(255) NULL,
|
||||
vendor_type varchar(50) NULL,
|
||||
is_active bool DEFAULT true,
|
||||
first_seen_date timestamptz DEFAULT now(),
|
||||
last_updated timestamptz DEFAULT now(),
|
||||
total_occurrences int4 DEFAULT 1
|
||||
);
|
||||
|
||||
-- Create indices for performance
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_master_vendors_name ON rpt_master_vendors USING btree (canonical_name);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_master_vendors_type ON rpt_master_vendors USING btree (vendor_type);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_master_vendors_country ON rpt_master_vendors USING btree (country_iso2);
|
||||
|
||||
-- Create unique constraint to prevent duplicates
|
||||
-- Using expression index for case-insensitive uniqueness
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS uq_rpt_master_vendors_name_lower
|
||||
ON rpt_master_vendors (lower(canonical_name));
|
||||
docker/db-init/003-init-vendor-variations.sql (new file, 17 lines)
@@ -0,0 +1,17 @@
|
||||
CREATE TABLE IF NOT EXISTS rpt_vendor_variations (
|
||||
id serial4 PRIMARY KEY,
|
||||
master_vendor_id int4 NOT NULL REFERENCES rpt_master_vendors(id),
|
||||
original_name varchar(255) NOT NULL,
|
||||
verified_name varchar(255) NULL,
|
||||
confidence_score numeric(3,2) NULL,
|
||||
verification_source varchar(100) NULL,
|
||||
search_variations_used text[] NULL,
|
||||
successful_variation varchar(255) NULL,
|
||||
verification_notes text NULL,
|
||||
created_at timestamptz DEFAULT now(),
|
||||
occurrence_count int4 DEFAULT 1
|
||||
);
|
||||
|
||||
-- Create indices for performance
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_vendor_variations_master_id ON rpt_vendor_variations USING btree (master_vendor_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_vendor_variations_original ON rpt_vendor_variations USING btree (original_name);
|
||||
docker/db-init/004-update-receipts-table.sql (new file, 6 lines)
@@ -0,0 +1,6 @@
|
||||
-- Add master vendor reference to rpt_receipts table
|
||||
ALTER TABLE rpt_receipts
|
||||
ADD COLUMN IF NOT EXISTS master_vendor_id int4 REFERENCES rpt_master_vendors(id);
|
||||
|
||||
-- Create index for performance
|
||||
CREATE INDEX IF NOT EXISTS idx_rpt_receipts_master_vendor ON rpt_receipts USING btree (master_vendor_id);
|
||||
docker/db-init/005-backfill-master-vendors.sql (new file, 18 lines)
@@ -0,0 +1,18 @@
|
||||
-- Backfill rpt_master_vendors with distinct vendor names from rpt_receipts
|
||||
-- Insert only vendors that don't already exist (case-insensitive check)
|
||||
INSERT INTO rpt_master_vendors (canonical_name, total_occurrences)
|
||||
SELECT vendor_name, COUNT(*) as total_occurrences
|
||||
FROM rpt_receipts r
|
||||
WHERE vendor_name IS NOT NULL
|
||||
AND NOT EXISTS (
|
||||
SELECT 1 FROM rpt_master_vendors mv
|
||||
WHERE lower(mv.canonical_name) = lower(r.vendor_name)
|
||||
)
|
||||
GROUP BY vendor_name;
|
||||
|
||||
-- Update rpt_receipts with master vendor IDs
|
||||
UPDATE rpt_receipts r
|
||||
SET master_vendor_id = mv.id
|
||||
FROM rpt_master_vendors mv
|
||||
WHERE lower(mv.canonical_name) = lower(r.vendor_name)
|
||||
AND r.master_vendor_id IS NULL;
|
||||
@@ -4,7 +4,19 @@
|
||||
# 1Password SSH agent integration, and development tools
|
||||
# Exit on error
|
||||
set -e
|
||||
|
||||
# Cleanup function for temporary files and processes
|
||||
cleanup() {
|
||||
echo "🧹 Cleaning up temporary files..."
|
||||
rm -rf /tmp/git-config /tmp/git-tools /tmp/.ssh
|
||||
if [ -n "$SSH_AGENT_PID" ]; then
|
||||
ssh-agent -k >/dev/null 2>&1 || true
|
||||
fi
|
||||
}
|
||||
trap cleanup EXIT
|
||||
echo "=== ???? Starting container initialization as user: $(whoami) ==="
|
||||
# Note: Database initialization scripts in docker/db-init/ are automatically
|
||||
# executed by PostgreSQL during first container startup via docker-entrypoint-initdb.d
|
||||
|
||||
## 1. Set up directories and environment variables
|
||||
#echo "???? Setting up directories..."
|
||||
@@ -152,22 +164,24 @@ echo "Running as: $CURRENT_USER (UID: $CURRENT_UID, GID: $CURRENT_GID)"
|
||||
if [ -d /home/dev/.claude ]; then
|
||||
echo "🤖 Setting up Claude Code directory permissions..."
|
||||
# Only fix ownership of files that are safe to change
|
||||
find /home/dev/.claude -type d -exec chmod 755 {} \; 2>/dev/null || true
|
||||
find /home/dev/.claude -name "*.jsonl" -exec chown $CURRENT_UID:$CURRENT_GID {} \; 2>/dev/null || true
|
||||
if ! find /home/dev/.claude -type d -exec chmod 755 {} \; 2>/dev/null; then
|
||||
echo "⚠️ Warning: Some Claude directories may have permission issues"
|
||||
fi
|
||||
if ! find /home/dev/.claude -name "*.jsonl" -exec chown $CURRENT_UID:$CURRENT_GID {} \; 2>/dev/null; then
|
||||
echo "⚠️ Warning: Some Claude log files may have permission issues"
|
||||
fi
|
||||
mkdir -p /home/dev/.claude/projects 2>/dev/null || true
|
||||
echo "✅ Claude Code permissions configured"
|
||||
|
||||
# Ensure Claude symlink points to local installation (not global)
|
||||
if [ -f /home/dev/.claude/local/claude ]; then
|
||||
sudo ln -sf /home/dev/.claude/local/claude /usr/local/bin/claude 2>/dev/null || true
|
||||
echo "✅ Claude Code symlink configured for local installation"
|
||||
|
||||
# Create local bashrc with Claude alias for proper local installation detection
|
||||
if [ ! -f /home/dev/.bashrc.local ] || ! grep -q "claude.*\.claude/local/claude" /home/dev/.bashrc.local; then
|
||||
echo "# Claude Code alias for local installation" >> /home/dev/.bashrc.local
|
||||
echo "alias claude=\"~/.claude/local/claude\"" >> /home/dev/.bashrc.local
|
||||
echo "✅ Claude Code alias configured in .bashrc.local"
|
||||
fi
|
||||
# Setup Claude CLI from task-master-ai installation
|
||||
if [ -f /usr/lib/node_modules/task-master-ai/node_modules/.bin/claude ]; then
|
||||
echo "🔧 Setting up Claude CLI from task-master-ai installation..."
|
||||
sudo ln -sf /usr/lib/node_modules/task-master-ai/node_modules/.bin/claude /usr/local/bin/claude 2>/dev/null || true
|
||||
echo "✅ Claude Code CLI configured from global task-master-ai installation"
|
||||
elif command -v claude >/dev/null 2>&1; then
|
||||
echo "✅ Claude Code CLI is available globally"
|
||||
else
|
||||
echo "⚠️ Claude Code CLI not found, may need container rebuild"
|
||||
fi
|
||||
fi
|
||||
|
||||
@@ -232,12 +246,12 @@ echo "tree: $(tree --version 2>/dev/null || echo 'not found')"
|
||||
echo "=== ??? Container initialization completed ==="
|
||||
|
||||
## 9. Execute the command passed to the container
|
||||
#if [ $# -eq 0 ]; then
|
||||
if [ $# -eq 0 ]; then
|
||||
# If no arguments are provided, sleep infinity (default behavior)
|
||||
echo "?????? No command specified, running sleep infinity..."
|
||||
exec sleep infinity
|
||||
#else
|
||||
else
|
||||
# Otherwise, execute the provided command
|
||||
echo "?????? Executing command: $@"
|
||||
exec "$@"
|
||||
#fi
|
||||
fi
|
||||
|
||||
docs/audit-core-deps.md (new file, 269 lines)
@@ -0,0 +1,269 @@
|
||||
Writing a script to enforce architectural conventions is an excellent way to maintain a large codebase. Statically analyzing your code is far more reliable than manual reviews for catching these kinds of deviations.
|
||||
|
||||
This script will use Python's built-in `ast` (Abstract Syntax Tree) module. It's the most robust way to analyze Python code, as it understands the code's structure, unlike simple text-based searches which can be easily fooled.
|
||||
|
||||
The script will identify modules, functions, or packages that are NOT using your core dependency infrastructure by looking for "anti-patterns"—the use of standard libraries or direct instantiations where your custom framework should be used instead.
|
||||
|
||||
### The Script: `audit_core_dependencies.py`
|
||||
|
||||
Save the following code as a Python file (e.g., `audit_core_dependencies.py`) in the root of your repository.
|
||||
|
||||
```python
|
||||
import ast
|
||||
import os
|
||||
import argparse
|
||||
from typing import Any, Dict, List, Set, Tuple
|
||||
|
||||
# --- Configuration of Anti-Patterns ---
|
||||
|
||||
# Direct imports of libraries that should be replaced by your core infrastructure.
|
||||
# Maps the disallowed module to the suggested core module/function.
|
||||
DISALLOWED_IMPORTS: Dict[str, str] = {
|
||||
"logging": "biz_bud.logging.unified_logging.get_logger",
|
||||
"requests": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"httpx": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"aiohttp": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"asyncio.gather": "biz_bud.core.networking.async_utils.gather_with_concurrency",
|
||||
}
|
||||
|
||||
# Direct instantiation of service clients or tools that should come from the factory.
|
||||
DISALLOWED_INSTANTIATIONS: Dict[str, str] = {
|
||||
"TavilySearchProvider": "ServiceFactory.get_service() or create_tools_for_capabilities()",
|
||||
"JinaSearchProvider": "ServiceFactory.get_service() or create_tools_for_capabilities()",
|
||||
"ArxivProvider": "ServiceFactory.get_service() or create_tools_for_capabilities()",
|
||||
"FirecrawlClient": "ServiceFactory.get_service() or a dedicated provider from ScrapeService",
|
||||
"TavilyClient": "ServiceFactory.get_service()",
|
||||
"PostgresStore": "ServiceFactory.get_db_service()",
|
||||
"LangchainLLMClient": "ServiceFactory.get_llm_client()",
|
||||
"HTTPClient": "HTTPClient.get_or_create_client() instead of direct instantiation",
|
||||
}
|
||||
|
||||
# Built-in exceptions that should ideally be wrapped in a custom BusinessBuddyError.
|
||||
DISALLOWED_EXCEPTIONS: Set[str] = {
|
||||
"Exception",
|
||||
"ValueError",
|
||||
"KeyError",
|
||||
"TypeError",
|
||||
"AttributeError",
|
||||
"NotImplementedError",
|
||||
}
|
||||
|
||||
class InfrastructureVisitor(ast.NodeVisitor):
|
||||
"""
|
||||
AST visitor that walks the code tree and identifies violations
|
||||
of the core dependency infrastructure usage.
|
||||
"""
|
||||
|
||||
def __init__(self, filepath: str):
|
||||
self.filepath = filepath
|
||||
self.violations: List[Tuple[int, str]] = []
|
||||
self.imported_names: Dict[str, str] = {} # Maps alias to full import path
|
||||
|
||||
def _add_violation(self, node: ast.AST, message: str):
|
||||
self.violations.append((node.lineno, message))
|
||||
|
||||
def visit_Import(self, node: ast.Import) -> None:
|
||||
"""Checks for `import logging`, `import requests`, etc."""
|
||||
for alias in node.names:
|
||||
if alias.name in DISALLOWED_IMPORTS:
|
||||
suggestion = DISALLOWED_IMPORTS[alias.name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Disallowed import '{alias.name}'. Please use '{suggestion}'."
|
||||
)
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
|
||||
"""Checks for direct service/client imports, e.g., `from biz_bud.tools.clients import TavilyClient`"""
|
||||
if node.module:
|
||||
for alias in node.names:
|
||||
full_import_path = f"{node.module}.{alias.name}"
|
||||
# Store the imported name (could be an alias)
|
||||
self.imported_names[alias.asname or alias.name] = full_import_path
|
||||
|
||||
# Check for direct service/tool instantiation patterns
|
||||
if "biz_bud.tools.clients" in node.module or \
|
||||
"biz_bud.services" in node.module and "factory" not in node.module:
|
||||
if alias.name in DISALLOWED_INSTANTIATIONS:
|
||||
suggestion = DISALLOWED_INSTANTIATIONS[alias.name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Disallowed direct import of '{alias.name}'. Use the ServiceFactory: '{suggestion}'."
|
||||
)
|
||||
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_Raise(self, node: ast.Raise) -> None:
|
||||
"""Checks for `raise ValueError` instead of a custom error."""
|
||||
if isinstance(node.exc, ast.Call) and isinstance(node.exc.func, ast.Name):
|
||||
exception_name = node.exc.func.id
|
||||
elif isinstance(node.exc, ast.Name):
|
||||
exception_name = node.exc.id
|
||||
else:
|
||||
exception_name = "unknown"
|
||||
|
||||
if exception_name in DISALLOWED_EXCEPTIONS:
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Raising generic exception '{exception_name}'. Please use a custom `BusinessBuddyError` from `core.errors.base`."
|
||||
)
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_Assign(self, node: ast.Assign) -> None:
|
||||
"""Checks for direct state mutation like `state['key'] = value`."""
|
||||
for target in node.targets:
|
||||
if isinstance(target, ast.Subscript) and isinstance(target.value, ast.Name):
|
||||
if target.value.id == 'state':
|
||||
self._add_violation(
|
||||
node,
|
||||
"Direct state mutation `state[...] = ...` detected. Please use `StateUpdater` for immutable updates."
|
||||
)
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_Call(self, node: ast.Call) -> None:
|
||||
"""
|
||||
Checks for:
|
||||
1. Direct instantiation of disallowed classes (e.g., `TavilyClient()`).
|
||||
2. Direct use of `asyncio.gather`.
|
||||
3. Direct state mutation via `state.update(...)`.
|
||||
"""
|
||||
# 1. Check for direct instantiations
|
||||
if isinstance(node.func, ast.Name):
|
||||
class_name = node.func.id
|
||||
if class_name in DISALLOWED_INSTANTIATIONS:
|
||||
# Verify it's not a legitimate call, e.g. a function with the same name
|
||||
if self.imported_names.get(class_name, "").endswith(class_name):
|
||||
suggestion = DISALLOWED_INSTANTIATIONS[class_name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Direct instantiation of '{class_name}'. Use the ServiceFactory: '{suggestion}'."
|
||||
)
|
||||
|
||||
# 2. Check for `asyncio.gather` and `state.update`
|
||||
if isinstance(node.func, ast.Attribute):
|
||||
attr = node.func
|
||||
if isinstance(attr.value, ast.Name):
|
||||
parent_name = attr.value.id
|
||||
attr_name = attr.attr
|
||||
|
||||
# Check for asyncio.gather
|
||||
if parent_name == 'asyncio' and attr_name == 'gather':
|
||||
suggestion = DISALLOWED_IMPORTS['asyncio.gather']
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Direct use of 'asyncio.gather'. Please use '{suggestion}' for controlled concurrency."
|
||||
)
|
||||
|
||||
# Check for state.update()
|
||||
if parent_name == 'state' and attr_name == 'update':
|
||||
self._add_violation(
|
||||
node,
|
||||
"Direct state mutation with `state.update()` detected. Please use `StateUpdater`."
|
||||
)
|
||||
|
||||
self.generic_visit(node)
|
||||
|
||||
|
||||
def audit_directory(directory: str) -> Dict[str, List[Tuple[int, str]]]:
|
||||
"""Scans a directory for Python files and audits them."""
|
||||
all_violations: Dict[str, List[Tuple[int, str]]] = {}
|
||||
for root, _, files in os.walk(directory):
|
||||
for file in files:
|
||||
if file.endswith(".py"):
|
||||
filepath = os.path.join(root, file)
|
||||
try:
|
||||
with open(filepath, "r", encoding="utf-8") as f:
|
||||
source_code = f.read()
|
||||
tree = ast.parse(source_code, filename=filepath)
|
||||
|
||||
visitor = InfrastructureVisitor(filepath)
|
||||
visitor.visit(tree)
|
||||
|
||||
if visitor.violations:
|
||||
all_violations[filepath] = visitor.violations
|
||||
except (SyntaxError, ValueError) as e:
|
||||
all_violations[filepath] = [(0, f"ERROR: Could not parse file: {e}")]
|
||||
return all_violations
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(description="Audit Python code for adherence to core infrastructure.")
|
||||
parser.add_argument(
|
||||
"directory",
|
||||
nargs="?",
|
||||
default="src/biz_bud",
|
||||
help="The directory to scan. Defaults to 'src/biz_bud'."
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
print(f"--- Auditing directory: {args.directory} ---\n")
|
||||
|
||||
violations = audit_directory(args.directory)
|
||||
|
||||
if not violations:
|
||||
print("\033[92m✅ All scanned files adhere to the core infrastructure rules.\033[0m")
|
||||
return
|
||||
|
||||
print(f"\033[91m🔥 Found {len(violations)} file(s) with violations:\033[0m\n")
|
||||
|
||||
total_violations = 0
|
||||
for filepath, file_violations in violations.items():
|
||||
print(f"\033[1m\033[93mFile: {filepath}\033[0m")
|
||||
for line, message in sorted(file_violations):
|
||||
print(f" \033[96mL{line}:\033[0m {message}")
|
||||
total_violations += 1
|
||||
print("-" * 20)
|
||||
|
||||
print(f"\n\033[1m\033[91mSummary: Found {total_violations} total violations in {len(violations)} files.\033[0m")
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
```
|
||||
|
||||
### How to Run the Script
|
||||
|
||||
1. **Save the file** as `audit_core_dependencies.py` in your project's root directory.
|
||||
2. **Run from your terminal:**
|
||||
```bash
|
||||
# Scan the default 'src/biz_bud' directory
|
||||
python audit_core_dependencies.py
|
||||
|
||||
# Scan a different directory
|
||||
python audit_core_dependencies.py path/to/your/code
|
||||
```
|
||||
|
||||
### How It Works and What It Detects
|
||||
|
||||
This script defines a series of "anti-patterns" and then checks your code for them.
|
||||
|
||||
1. **Logging (`DISALLOWED_IMPORTS`)**:
|
||||
* **Anti-Pattern**: `import logging`
|
||||
* **Why**: Your custom logging in `biz_bud.logging.unified_logging` and `services.logger_factory` is designed to provide structured, context-aware logs. Using the standard `logging` library directly bypasses this, leading to inconsistent log formats and loss of valuable context like trace IDs or node names.
|
||||
* **Detection**: The script flags any file that directly imports the `logging` module.
|
||||
|
||||
2. **Errors (`DISALLOWED_EXCEPTIONS`)**:
|
||||
* **Anti-Pattern**: `raise ValueError("...")` or `except Exception:`
|
||||
* **Why**: Your `core.errors` framework is built to create a predictable, structured error handling system. Raising generic exceptions bypasses your custom error types (`BusinessBuddyError`), telemetry, and routing logic. This leads to unhandled crashes and makes it difficult to implement targeted recovery strategies.
|
||||
* **Detection**: The `visit_Raise` method checks if the code is raising a standard, built-in exception instead of a custom one.
|
||||
|
||||
3. **HTTP & APIs (`DISALLOWED_IMPORTS`)**:
|
||||
* **Anti-Pattern**: `import requests` or `import httpx`
|
||||
* **Why**: Your `core.networking.http_client.HTTPClient` provides a centralized, singleton session manager with built-in retry logic, timeouts, and potentially unified headers or proxy configurations. Using external HTTP libraries directly fragments this logic, leading to inconsistent network behavior and making it harder to manage connections.
|
||||
* **Detection**: Flags any file importing `requests`, `httpx`, or `aiohttp`.
|
||||
|
||||
4. **Tools, Services, and Language Models (`DISALLOWED_INSTANTIATIONS`)**:
|
||||
* **Anti-Pattern**: `from biz_bud.tools.clients import TavilyClient; client = TavilyClient()`
|
||||
* **Why**: Your `ServiceFactory` is the single source of truth for creating and managing the lifecycle of services. It handles singleton behavior, dependency injection, and centralized configuration. Bypassing it means you might have multiple instances of a service (e.g., multiple database connection pools), services without proper configuration, or services that don't get cleaned up correctly.
|
||||
* **Detection**: The script first identifies direct imports of service or client classes and then uses `visit_Call` to check if they are being instantiated directly.
|
||||
|
||||
5. **State Reducers (`visit_Assign`, `visit_Call`)**:
|
||||
* **Anti-Pattern**: `state['key'] = value` or `state.update({...})`
|
||||
* **Why**: Your architecture appears to be moving towards immutable state updates (as hinted by `core/langgraph/state_immutability.py` and the concept of reducers). Direct mutation of the state dictionary is an anti-pattern because it can lead to unpredictable side effects, making the graph's flow difficult to trace and debug. Using a `StateUpdater` class or reducers ensures that state changes are explicit and traceable.
|
||||
* **Detection**: The script specifically looks for assignment to `state[...]` or calls to `state.update()`.
|
||||
|
||||
6. **Concurrency (`visit_Call`)**:
|
||||
* **Anti-Pattern**: `asyncio.gather(...)`
|
||||
* **Why**: Your `gather_with_concurrency` wrapper in `core.networking.async_utils` likely adds a semaphore or other logic to limit the number of concurrent tasks. Calling `asyncio.gather` directly bypasses this control, which can lead to overwhelming external APIs with too many requests, hitting rate limits, or exhausting system resources.
|
||||
* **Detection**: The script looks for direct calls to `asyncio.gather`. (A sketch of the wrapper pattern follows this list.)
|
||||
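The wrapper referenced in item 6 usually follows a simple semaphore pattern. The sketch below is illustrative only; the real implementation lives in `core.networking.async_utils` and may differ in signature and features.

```python
# Illustrative sketch of a concurrency-limited gather (not the project's actual code).
import asyncio
from collections.abc import Awaitable
from typing import TypeVar

T = TypeVar("T")

async def gather_with_concurrency(limit: int, *aws: Awaitable[T]) -> list[T]:
    """Run awaitables with at most `limit` of them in flight at once."""
    semaphore = asyncio.Semaphore(limit)

    async def bounded(aw: Awaitable[T]) -> T:
        async with semaphore:
            return await aw

    return await asyncio.gather(*(bounded(aw) for aw in aws))

# Usage: at most 5 fetches run concurrently instead of all of them at once.
# results = await gather_with_concurrency(5, *(fetch(url) for url in urls))
```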
|
||||
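As a quick sanity check, the visitor can be exercised directly on an inline snippet containing several of the anti-patterns described above. This assumes the script has been saved as `audit_core_dependencies.py` (so `InfrastructureVisitor` is importable) or pasted into the same session.

```python
# Quick self-test of the visitor against a snippet with known violations.
import ast

from audit_core_dependencies import InfrastructureVisitor

sample = '''
import logging
import requests

def handle(state):
    state["key"] = "value"
    raise ValueError("boom")
'''

visitor = InfrastructureVisitor("sample.py")
visitor.visit(ast.parse(sample))
for lineno, message in sorted(visitor.violations):
    print(f"L{lineno}: {message}")
# Expected: flags for `logging`, `requests`, the direct state mutation,
# and the generic `ValueError`.
```
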
This script provides a powerful, automated first line of defense to enforce your architectural standards and significantly reduce the classes of bugs you asked about.
|
||||
docs/comprehensive.md (new file, 349 lines)
@@ -0,0 +1,349 @@
|
||||
As a senior engineer, I've conducted a thorough review of your codebase. The project, `biz-bud`, is a sophisticated AI agent system with a well-defined architecture built on LangGraph. The separation of concerns into states, nodes, services, and tools is commendable.
|
||||
|
||||
However, like any complex project, there are areas for improvement. I've identified several bugs, inconsistencies, and performance bottlenecks. My feedback is structured to be actionable for an AI code assistant, with clear explanations and targeted code fixes.
|
||||
|
||||
Here is my comprehensive review:
|
||||
|
||||
***
|
||||
|
||||
## Code Review: `biz-bud` AI Agent
|
||||
|
||||
### High-Level Feedback
|
||||
|
||||
The codebase demonstrates a strong architectural foundation using a state-driven graph approach. The custom error handling, service factory for dependency injection, and modular node design are signs of a mature system. The primary areas for improvement revolve around enhancing type safety, simplifying state management, ensuring asynchronous correctness, and resolving inconsistencies that have emerged as the project has grown.
|
||||
|
||||
---
|
||||
|
||||
### 🐞 Bugs: Critical Issues to Address
|
||||
|
||||
These are issues that will likely lead to runtime errors, data loss, or incorrect behavior.
|
||||
|
||||
#### 1. Bug: Inconsistent State Updates in `call_model_node`
|
||||
|
||||
* **Location:** `src/biz_bud/nodes/llm/call.py`
|
||||
* **Problem:** The `call_model_node` function is not guaranteed to receive a `state` dictionary, especially when called directly or from non-graph contexts. The line `safe_messages = locals().get("messages", get_messages(state) if state else [])` will fail with a `NameError` if an exception occurs before `state` is defined within the local scope. This can happen if `ConfigurationProvider(config)` fails.
|
||||
* **Impact:** Unhandled exceptions during LLM calls will lead to a crash in the error handling logic itself, masking the original error.
|
||||
* **Fix:** Ensure `state` is defined at the beginning of the function, even if it's just an empty dictionary, to guarantee the error handling block can execute safely.
|
||||
|
||||
```diff
|
||||
--- a/src/biz_bud/nodes/llm/call.py
|
||||
+++ b/src/biz_bud/nodes/llm/call.py
|
||||
@@ -148,6 +148,7 @@
|
||||
state: dict[str, Any] | None,
|
||||
config: NodeLLMConfigOverride | None = None,
|
||||
) -> CallModelNodeOutput:
|
||||
+ state = state or {}
|
||||
provider = None
|
||||
try:
|
||||
# Get provider from runnable config if available
|
||||
@@ -250,7 +251,7 @@
|
||||
# Log diagnostic information for debugging underlying failures
|
||||
logger.error("LLM call failed after multiple retries.", exc_info=e)
|
||||
error_msg = f"Unexpected error in call_model_node: {str(e)}"
|
||||
- safe_messages = locals().get("messages", get_messages(state) if state else [])
|
||||
+ safe_messages = locals().get("messages", get_messages(state))
|
||||
|
||||
return {
|
||||
"messages": [
|
||||
|
||||
```
|
||||
|
||||
#### 2. Bug: Race Condition in Service Factory Initialization
|
||||
|
||||
* **Location:** `src/biz_bud/services/factory.py`
|
||||
* **Problem:** In `_GlobalFactoryManager.get_factory`, the check for `self._factory` is not protected by the async lock. This creates a race condition where two concurrent calls could both see `self._factory` as `None`, proceed to create a new factory, and one will overwrite the other.
|
||||
* **Impact:** This can lead to multiple instances of services that should be singletons, causing unpredictable behavior, resource leaks, and inconsistent state.
|
||||
* **Fix:** Acquire the lock *before* checking if the factory instance exists.
|
||||
|
||||
```diff
|
||||
--- a/src/biz_bud/services/factory.py
|
||||
+++ b/src/biz_bud/services/factory.py
|
||||
@@ -321,8 +321,8 @@
|
||||
|
||||
async def get_factory(self, config: AppConfig | None = None) -> ServiceFactory:
|
||||
"""Get the global service factory, creating it if it doesn't exist."""
|
||||
- if self._factory:
|
||||
- return self._factory
|
||||
+ # Acquire lock before checking to prevent race conditions
|
||||
+ async with self._lock:
|
||||
+ if self._factory:
|
||||
+ return self._factory
|
||||
|
||||
- async with self._lock:
|
||||
- # Check again inside the lock
|
||||
- if self._factory:
|
||||
- return self._factory
|
||||
+ # If we're here, the factory is None and we have the lock.
|
||||
|
||||
task = self._initializing_task
|
||||
if task and not task.done():
|
||||
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
### ⛓️ Inconsistencies & Technical Debt
|
||||
|
||||
These issues make the code harder to read, maintain, and reason about. They often point to incomplete refactoring or differing coding patterns.
|
||||
|
||||
#### 1. Inconsistency: Brittle State Typing with `NotRequired[Any]`
|
||||
|
||||
* **Location:** `src/biz_bud/states/unified.py` and other state definition files.
|
||||
* **Problem:** The extensive use of `NotRequired[Any]`, `NotRequired[list[Any]]`, and `NotRequired[dict[str, Any]]` undermines the entire purpose of using `TypedDict`. It forces developers to write defensive code with lots of `.get()` calls and provides no static analysis benefits, leading to potential `KeyError` or `TypeError` if a field is assumed to exist.
|
||||
* **Impact:** Reduced code quality, increased risk of runtime errors, and poor developer experience (no autocompletion, no type checking).
|
||||
* **Fix:** Refactor the `TypedDict` definitions to be more specific. Replace `Any` with concrete types or more specific `TypedDict`s where possible. Fields that are always present should not be `NotRequired`.
|
||||
|
||||
##### Targeted Fix Example:
|
||||
|
||||
```diff
|
||||
--- a/src/biz_bud/states/unified.py
|
||||
+++ b/src/biz_bud/states/unified.py
|
||||
@@ -62,7 +62,7 @@
|
||||
search_history: Annotated[list[SearchHistoryEntry], add]
|
||||
visited_urls: Annotated[list[str], add]
|
||||
search_status: Literal[
|
||||
- "pending", "success", "failure", "no_results", "cached"
|
||||
+ "pending", "success", "failure", "no_results", "cached", None
|
||||
]
|
||||
|
||||
|
||||
@@ -107,24 +107,24 @@
|
||||
# Research State Fields
|
||||
extracted_info: ExtractedInfoDict
|
||||
extracted_content: dict[str, Any]
|
||||
- synthesis: str
|
||||
+ synthesis: str | None
|
||||
|
||||
# Fields that might be needed in tests but aren't in BaseState
|
||||
- initial_input: DataDict
|
||||
- is_last_step: bool
|
||||
- run_metadata: MetadataDict
|
||||
- parsed_input: "ParsedInputTypedDict"
|
||||
- is_complete: bool
|
||||
- requires_interrupt: bool
|
||||
- input_metadata: "InputMetadataTypedDict"
|
||||
- context: DataDict
|
||||
+ initial_input: NotRequired[DataDict]
|
||||
+ is_last_step: NotRequired[bool]
|
||||
+ run_metadata: NotRequired[MetadataDict]
|
||||
+ parsed_input: NotRequired["ParsedInputTypedDict"]
|
||||
+ is_complete: NotRequired[bool]
|
||||
+ requires_interrupt: NotRequired[bool]
|
||||
+ input_metadata: NotRequired["InputMetadataTypedDict"]
|
||||
+ context: NotRequired[DataDict]
|
||||
organization: NotRequired[list[Organization]]
|
||||
organizations: NotRequired[list[Organization]]
|
||||
- plan: AnalysisPlan
|
||||
- final_output: str
|
||||
- formatted_response: str
|
||||
- tool_calls: list["ToolCallTypedDict"]
|
||||
- research_query: str
|
||||
- enhanced_query: str
|
||||
- rag_context: list[RAGContextDict]
|
||||
+ plan: NotRequired[AnalysisPlan]
|
||||
+ final_output: NotRequired[str]
|
||||
+ formatted_response: NotRequired[str]
|
||||
+ tool_calls: NotRequired[list["ToolCallTypedDict"]]
|
||||
+ research_query: NotRequired[str]
|
||||
+ enhanced_query: NotRequired[str]
|
||||
+ rag_context: NotRequired[list[RAGContextDict]]
|
||||
|
||||
# Market Research State Fields
|
||||
restaurant_name: NotRequired[str]
|
||||
|
||||
```
|
||||
*(Note: This is an illustrative fix. A full refactoring would require a deeper analysis of which fields are truly optional across all graph flows.)*
|
||||
|
||||
#### 2. Inconsistency: Redundant URL Routing and Analysis Logic
|
||||
|
||||
* **Location:**
|
||||
* `src/biz_bud/graphs/rag/nodes/scraping/url_router.py`
|
||||
* `src/biz_bud/nodes/scrape/route_url.py`
|
||||
* `src/biz_bud/nodes/scrape/scrape_url.py` (contains `_analyze_url`)
|
||||
* **Problem:** The logic for determining if a URL points to a Git repository, a PDF, or a standard webpage is duplicated and slightly different across multiple files. This makes maintenance difficult—a change in one place might not be reflected in the others.
|
||||
* **Impact:** Inconsistent behavior depending on which graph is running. A URL might be classified as a Git repo by one node but not by another.
|
||||
* **Fix:** Consolidate the URL analysis logic into a single, robust utility function. All routing nodes should call this central function to ensure consistent decisions.
|
||||
|
||||
##### Proposed Centralized Utility (`src/biz_bud/core/utils/url_analyzer.py` - New File):
|
||||
```python
|
||||
# src/biz_bud/core/utils/url_analyzer.py
|
||||
from typing import Literal
|
||||
from urllib.parse import urlparse
|
||||
|
||||
UrlType = Literal["git_repo", "pdf", "sitemap", "webpage", "unsupported"]
|
||||
|
||||
def analyze_url_type(url: str) -> UrlType:
|
||||
"""Analyzes a URL and returns its classified type."""
|
||||
try:
|
||||
parsed = urlparse(url.lower())
|
||||
path = parsed.path
|
||||
|
||||
# Git repositories
|
||||
git_hosts = ["github.com", "gitlab.com", "bitbucket.org"]
|
||||
if any(host in parsed.netloc for host in git_hosts) or path.endswith('.git'):
|
||||
return "git_repo"
|
||||
|
||||
# PDF documents
|
||||
if path.endswith('.pdf'):
|
||||
return "pdf"
|
||||
|
||||
# Sitemap
|
||||
if "sitemap" in path or path.endswith(".xml"):
|
||||
return "sitemap"
|
||||
|
||||
# Unsupported file types
|
||||
unsupported_exts = ['.zip', '.exe', '.dmg', '.tar.gz']
|
||||
if any(path.endswith(ext) for ext in unsupported_exts):
|
||||
return "unsupported"
|
||||
|
||||
return "webpage"
|
||||
except Exception:
|
||||
return "unsupported"
|
||||
|
||||
```
|
||||
|
||||
##### Refactor `route_url.py`:
|
||||
```diff
|
||||
--- a/src/biz_bud/nodes/scrape/route_url.py
|
||||
+++ b/src/biz_bud/nodes/scrape/route_url.py
|
||||
@@ -1,37 +1,22 @@
|
||||
-from typing import Any, Literal
|
||||
-from urllib.parse import urlparse
|
||||
+from typing import Any
|
||||
from biz_bud.core.utils.state_updater import StateUpdater
|
||||
+from biz_bud.core.utils.url_analyzer import analyze_url_type
|
||||
|
||||
-def _analyze_url(url: str) -> dict[str, Any]:
|
||||
- parsed = urlparse(url)
|
||||
- path = parsed.path.lower()
|
||||
- url_type: Literal["webpage", "pdf", "git_repo", "sitemap", "unsupported"] = "webpage"
|
||||
-
|
||||
- if any(host in parsed.netloc for host in ["github.com", "gitlab.com"]):
|
||||
- url_type = "git_repo"
|
||||
- elif path.endswith(".pdf"):
|
||||
- url_type = "pdf"
|
||||
- elif any(ext in path for ext in [".zip", ".exe", ".dmg"]):
|
||||
- url_type = "unsupported"
|
||||
- elif "sitemap" in path or path.endswith(".xml"):
|
||||
- url_type = "sitemap"
|
||||
-
|
||||
- return {"type": url_type, "domain": parsed.netloc}
|
||||
|
||||
async def route_url_node(
|
||||
state: dict[str, Any], config: dict[str, Any] | None = None
|
||||
) -> dict[str, Any]:
|
||||
url = state.get("input_url") or state.get("url", "")
|
||||
- url_info = _analyze_url(url)
|
||||
- routing_decision = "process_normal"
|
||||
- routing_metadata = {"url_type": url_info["type"]}
|
||||
+ url_type = analyze_url_type(url)
|
||||
+ routing_decision = "skip_unsupported"
|
||||
+ routing_metadata = {"url_type": url_type}
|
||||
|
||||
- if url_info["type"] == "git_repo":
|
||||
+ if url_type == "git_repo":
|
||||
routing_decision = "process_git_repo"
|
||||
- elif url_info["type"] == "pdf":
|
||||
+ elif url_type == "pdf":
|
||||
routing_decision = "process_pdf"
|
||||
- elif url_info["type"] == "unsupported":
|
||||
- routing_decision = "skip_unsupported"
|
||||
- elif url_info["type"] == "sitemap":
|
||||
+ elif url_type == "sitemap":
|
||||
routing_decision = "process_sitemap"
|
||||
+ elif url_type == "webpage":
|
||||
+ routing_decision = "process_normal"
|
||||
|
||||
updater = StateUpdater(state)
|
||||
updater.set("routing_decision", routing_decision)
|
||||
```
|
||||
*(This refactoring would need to be applied to all related files.)*
|
||||
|
||||
---
|
||||
|
||||
### 🚀 Bottlenecks & Performance Issues
|
||||
|
||||
These areas could cause slow execution, especially with large inputs or high concurrency.
|
||||
|
||||
#### 1. Bottleneck: Sequential Execution in `extract_batch_node`
|
||||
|
||||
* **Location:** `src/biz_bud/nodes/extraction/extractors.py`
|
||||
* **Problem:** The `extract_batch_node` processes a batch of content by iterating through it and calling an async extraction function (`_extract_from_content_impl`) for each item sequentially within the `extract_with_semaphore` wrapper. The `asyncio.gather` is used correctly, but the semaphore logic could be more efficient.
|
||||
* **Impact:** Scraping and processing multiple URLs is a major performance bottleneck. If 10 URLs are scraped, and each takes 5 seconds to extract from, the total time will be 50 seconds instead of being closer to 5 seconds with full concurrency.
|
||||
* **Fix:** Ensure that the `extract_with_semaphore` function correctly wraps the async call and that the `max_concurrent` parameter is configured appropriately. The current implementation already acquires the semaphore *inside* the function passed to `gather`, so the main issue is the default `max_concurrent` value of 3. This should be configured from the central `AppConfig` to allow higher throughput in production environments.
|
||||
|
||||
##### Targeted Fix:
|
||||
Instead of a code change, the fix is to **ensure the configuration reflects the desired concurrency.**
|
||||
|
||||
* In `config.yaml` or environment variables, set a higher `max_concurrent_scrapes` in the `web_tools` section.
|
||||
* The `extraction_orchestrator_node` needs to pass this configuration value down into the batch node's state.
|
||||
|
||||
```python
|
||||
# In src/biz_bud/nodes/extraction/orchestrator.py
|
||||
|
||||
# ... existing code ...
|
||||
llm_client = await service_factory.get_llm_for_node(
|
||||
"extraction_orchestrator",
|
||||
llm_profile_override="small"
|
||||
)
|
||||
|
||||
# --- FIX: Plumb concurrency config from the main AppConfig ---
|
||||
web_tools_config = node_config.get("web_tools", {})
|
||||
max_concurrent = web_tools_config.get("max_concurrent_scrapes", 5) # Default to 5 instead of 3
|
||||
|
||||
# Pass successful scrapes to the batch extractor
|
||||
query = state_dict.get("query", "")
|
||||
batch_state = {
|
||||
"content_batch": successful_scrapes,
|
||||
"query": query,
|
||||
"verbose": verbose,
|
||||
"max_concurrent": max_concurrent, # Pass the configured value
|
||||
}
|
||||
# ... rest of the code ...
|
||||
```
|
||||
|
||||
#### 2. Bottleneck: Inefficient Deduplication of Search Results
|
||||
|
||||
* **Location:** `src/biz_bud/nodes/search/ranker.py`
|
||||
* **Problem:** The `_remove_duplicates` method in `SearchResultRanker` uses a simple `_calculate_text_similarity` function based on Jaccard similarity of word sets. For highly similar snippets or titles that differ by only a few common words, this may not be effective. Furthermore, comparing every result to every other result is O(n^2), which can be slow for large result sets.
|
||||
* **Impact:** The final output may contain redundant information, and the ranking step could be slow if many providers return a large number of overlapping results.
|
||||
* **Fix:** Implement a more efficient and effective deduplication strategy. A good approach is to use a "near-duplicate" detection method like MinHash or SimHash. For a simpler but still effective improvement, we can cluster documents by title similarity and then only compare snippets within clusters.
|
||||
|
||||
##### Targeted Fix (Simplified Improvement):
|
||||
```diff
|
||||
--- a/src/biz_bud/nodes/search/ranker.py
|
||||
+++ b/src/biz_bud/nodes/search/ranker.py
|
||||
@@ -107,14 +107,26 @@
|
||||
return freshness
|
||||
|
||||
def _remove_duplicates(
|
||||
self, results: list[RankedSearchResult]
|
||||
) -> list[RankedSearchResult]:
|
||||
unique_results: list[RankedSearchResult] = []
|
||||
- seen_urls: set[str] = set()
|
||||
+ # Use a more robust check for duplicates than just URL
|
||||
+        seen_hashes: set[tuple[str, str]] = set()
|
||||
|
||||
for result in results:
|
||||
- if result.url in seen_urls:
|
||||
+ # Normalize URL for better duplicate detection
|
||||
+ normalized_url = result.url.lower().rstrip("/")
|
||||
+
|
||||
+ # Create a simple hash from the title to quickly identify near-duplicate content
|
||||
+ # A more advanced solution would use MinHash or SimHash here.
|
||||
+ normalized_title = self._normalize_text(result.title)
|
||||
+ content_hash = hashlib.md5(normalized_title[:50].encode()).hexdigest()
|
||||
+
|
||||
+ # Key for checking duplicates is a tuple of the normalized URL's domain and the content hash
|
||||
+ duplicate_key = (result.source_domain, content_hash)
|
||||
+
|
||||
+ if duplicate_key in seen_hashes:
|
||||
continue
|
||||
- seen_urls.add(result.url)
|
||||
-
|
||||
+ seen_hashes.add(duplicate_key)
|
||||
unique_results.append(result)
|
||||
|
||||
return unique_results
|
||||
```
|
||||
*(Note: `hashlib` would need to be imported.)*
|
||||
docs/coverage-configuration.md (new file, 133 lines)
@@ -0,0 +1,133 @@
|
||||
# Coverage Configuration Guide
|
||||
|
||||
This document explains the coverage reporting configuration for the Business Buddy project.
|
||||
|
||||
## Overview
|
||||
|
||||
The project uses `pytest-cov` for code coverage measurement with comprehensive reporting options configured in `pyproject.toml`.
|
||||
|
||||
## Configuration
|
||||
|
||||
### Coverage Collection (`[tool.coverage.run]`)
|
||||
|
||||
- **Source**: `src/biz_bud` - Measures coverage for all source code
|
||||
- **Branch Coverage**: Enabled to track both statement and branch coverage
|
||||
- **Parallel Execution**: Supports parallel test execution with xdist
|
||||
- **Omitted Files**:
|
||||
- Test files (`*/tests/*`, `*/test_*.py`, `*/conftest.py`)
|
||||
- Init files (`*/__init__.py`)
|
||||
- Entry points (`webapp.py`, CLI files)
|
||||
- Database migrations
|
||||
|
||||
### Coverage Reporting (`[tool.coverage.report]`)
|
||||
|
||||
- **Show Missing**: Displays line numbers for uncovered code
|
||||
- **Precision**: 2 decimal places for coverage percentages
|
||||
- **Skip Empty**: Excludes empty files from reports
|
||||
- **Comprehensive Exclusions**:
|
||||
- Type checking blocks (`if TYPE_CHECKING:`)
|
||||
- Debug code (`if DEBUG:`, `if __debug__:`)
|
||||
- Platform-specific code
|
||||
- Error handling patterns
|
||||
- Abstract methods and protocols
|
||||
|
||||
### Report Formats
|
||||
|
||||
1. **Terminal**: `--cov-report=term-missing:skip-covered`
|
||||
2. **HTML**: `htmlcov/index.html` with context information
|
||||
3. **XML**: `coverage.xml` for CI/CD integration
|
||||
4. **JSON**: `coverage.json` for programmatic access (see the sketch below)
|
||||
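The JSON report is the easiest format to consume from scripts; the snippet below is a minimal sketch, where the `totals`, `percent_covered`, `covered_lines`, and `num_statements` keys follow coverage.py's JSON report schema.

```python
# Minimal sketch: read the overall coverage percentage from coverage.json.
import json

with open("coverage.json", encoding="utf-8") as fh:
    report = json.load(fh)

totals = report["totals"]
print(f"Coverage: {totals['percent_covered']:.2f}% "
      f"({totals['covered_lines']}/{totals['num_statements']} statements)")
```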
|
||||
## Usage
|
||||
|
||||
### Running Tests with Coverage
|
||||
|
||||
```bash
|
||||
# Run all tests with coverage
|
||||
make test
|
||||
|
||||
# Run specific test with coverage
|
||||
pytest tests/path/to/test.py --cov=src/biz_bud --cov-report=html
|
||||
|
||||
# Run without coverage (faster for development)
|
||||
pytest tests/path/to/test.py --no-cov
|
||||
```
|
||||
|
||||
### Coverage Reports
|
||||
|
||||
```bash
|
||||
# Generate HTML report
|
||||
pytest --cov=src/biz_bud --cov-report=html
|
||||
|
||||
# View HTML report
|
||||
open htmlcov/index.html # macOS
|
||||
xdg-open htmlcov/index.html # Linux
|
||||
|
||||
# Generate XML report for CI
|
||||
pytest --cov=src/biz_bud --cov-report=xml
|
||||
|
||||
# Generate JSON report
|
||||
pytest --cov=src/biz_bud --cov-report=json
|
||||
```
|
||||
|
||||
### Coverage Thresholds
|
||||
|
||||
- **Minimum Coverage**: 70% (configurable via `--cov-fail-under`)
|
||||
- **Branch Coverage**: Required for thorough testing
|
||||
- **Context Tracking**: Enabled to track which tests cover which code
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Write Tests First**: Aim for high coverage through TDD
|
||||
2. **Focus on Critical Paths**: Prioritize coverage for core business logic
|
||||
3. **Use Exclusion Pragmas**: Mark intentionally untested code with `# pragma: no cover`
|
||||
4. **Review Coverage Reports**: Use HTML reports to identify missed edge cases
|
||||
5. **Monitor Trends**: Track coverage changes in CI/CD
|
||||
|
||||
## Exclusion Patterns
|
||||
|
||||
The configuration excludes common patterns that don't need testing:
|
||||
|
||||
- Type checking imports (`if TYPE_CHECKING:`)
|
||||
- Debug statements (`if DEBUG:`, `if __debug__:`)
|
||||
- Platform-specific code (`if sys.platform`)
|
||||
- Abstract methods (`@abstract`, `raise NotImplementedError`)
|
||||
- Error handling boilerplate (`except ImportError:`)
|
||||
|
||||
## Integration with CI/CD
|
||||
|
||||
The XML and JSON reports are designed for integration with:
|
||||
|
||||
- **GitHub Actions**: Upload coverage to services like Codecov
|
||||
- **SonarQube**: Import coverage data for quality gates
|
||||
- **IDE Integration**: Many IDEs can display coverage inline
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **No Data Collected**: Ensure source paths match actual file locations
|
||||
2. **Parallel Test Issues**: Coverage data may need combining with `coverage combine`
|
||||
3. **Missing Files**: Check that files are imported during test execution
|
||||
4. **Low Coverage**: Review exclusion patterns and test completeness
|
||||
|
||||
### Debug Commands
|
||||
|
||||
```bash
|
||||
# Check coverage configuration
|
||||
python -m coverage debug config
|
||||
|
||||
# Combine parallel coverage data
|
||||
python -m coverage combine
|
||||
|
||||
# Erase coverage data
|
||||
python -m coverage erase
|
||||
```
|
||||
|
||||
## Files
|
||||
|
||||
- **Configuration**: `pyproject.toml` (`[tool.coverage.*]` sections)
|
||||
- **Data File**: `.coverage` (temporary, in .gitignore)
|
||||
- **HTML Reports**: `htmlcov/` directory (in .gitignore)
|
||||
- **XML Report**: `coverage.xml` (in .gitignore)
|
||||
- **JSON Report**: `coverage.json` (in .gitignore)
|
||||
docs/db/structure.sql (new file, 130 lines)
@@ -0,0 +1,130 @@
|
||||
-- public.rpt_master_products definition
|
||||
|
||||
-- Drop table
|
||||
|
||||
-- DROP TABLE public.rpt_master_products;
|
||||
|
||||
CREATE TABLE public.rpt_master_products (
|
||||
id serial4 NOT NULL,
|
||||
canonical_name varchar(500) NOT NULL,
|
||||
canonical_description text NULL,
|
||||
category varchar(100) NULL,
|
||||
unit_of_measure varchar(50) NULL,
|
||||
estimated_unit_price numeric(10, 2) NULL,
|
||||
is_active bool DEFAULT true NULL,
|
||||
first_seen_date timestamptz DEFAULT now() NULL,
|
||||
last_updated timestamptz DEFAULT now() NULL,
|
||||
total_occurrences int4 DEFAULT 1 NULL,
|
||||
CONSTRAINT rpt_master_products_pkey PRIMARY KEY (id)
|
||||
);
|
||||
CREATE INDEX idx_rpt_master_products_canonical_name ON public.rpt_master_products USING btree (canonical_name);
|
||||
CREATE INDEX idx_rpt_master_products_category ON public.rpt_master_products USING btree (category);
|
||||
|
||||
-- public.rpt_product_variations definition
|
||||
|
||||
-- Drop table
|
||||
|
||||
-- DROP TABLE public.rpt_product_variations;
|
||||
|
||||
CREATE TABLE public.rpt_product_variations (
    id serial4 NOT NULL,
    master_product_id int4 NOT NULL,
    original_description varchar(500) NOT NULL,
    verified_description varchar(500) NULL,
    confidence_score numeric(3, 2) NULL,
    verification_source varchar(100) NULL,
    search_variations_used _text NULL,
    successful_variation varchar(500) NULL,
    verification_notes text NULL,
    created_at timestamptz DEFAULT now() NULL,
    occurrence_count int4 DEFAULT 1 NULL,
    CONSTRAINT rpt_product_variations_pkey PRIMARY KEY (id),
    CONSTRAINT rpt_product_variations_master_product_fkey FOREIGN KEY (master_product_id) REFERENCES public.rpt_master_products(id)
);
CREATE INDEX idx_rpt_product_variations_master_id ON public.rpt_product_variations USING btree (master_product_id);
CREATE INDEX idx_rpt_product_variations_original ON public.rpt_product_variations USING btree (original_description);


-- public.rpt_receipt_line_items definition

-- Drop table

-- DROP TABLE public.rpt_receipt_line_items;

CREATE TABLE public.rpt_receipt_line_items (
    id int4 DEFAULT nextval('receipt_line_items_id_seq'::regclass) NOT NULL,
    receipt_id numeric NOT NULL,
    line_number int4 NULL,
    product_name varchar(500) NOT NULL,
    product_code varchar(100) NULL,
    quantity numeric(10, 3) NULL,
    unit_of_measure varchar(50) NULL,
    unit_price numeric(10, 2) NULL,
    total_price numeric(10, 2) NULL,
    category varchar(100) NULL,
    created_at timestamptz DEFAULT now() NULL,
    master_product_id int4 NULL,
    product_variation_id int4 NULL,
    original_ocr_text varchar(500) NULL,
    reconciliation_status varchar(50) DEFAULT 'pending'::character varying NULL,
    reconciliation_confidence numeric(3, 2) NULL,
    needs_review bool DEFAULT false NULL,
    CONSTRAINT receipt_line_items_pkey PRIMARY KEY (id),
    CONSTRAINT rpt_receipt_line_items_master_product_fkey FOREIGN KEY (master_product_id) REFERENCES public.rpt_master_products(id),
    CONSTRAINT rpt_receipt_line_items_receipt_id_fkey FOREIGN KEY (receipt_id) REFERENCES public.rpt_receipts(id),
    CONSTRAINT rpt_receipt_line_items_variation_fkey FOREIGN KEY (product_variation_id) REFERENCES public.rpt_product_variations(id)
);
CREATE INDEX idx_rpt_receipt_line_items_master_product ON public.rpt_receipt_line_items USING btree (master_product_id);
CREATE INDEX idx_rpt_receipt_line_items_receipt_id ON public.rpt_receipt_line_items USING btree (receipt_id);
CREATE INDEX idx_rpt_receipt_line_items_reconciliation_status ON public.rpt_receipt_line_items USING btree (reconciliation_status);


-- public.rpt_receipts definition

-- Drop table

-- DROP TABLE public.rpt_receipts;

CREATE TABLE public.rpt_receipts (
    id numeric DEFAULT nextval('receipts_id_seq'::regclass) NOT NULL,
    vendor_name varchar(255) NOT NULL,
    vendor_address text NULL,
    transaction_date date NULL,
    transaction_time time NULL,
    receipt_number varchar(100) NULL,
    customer_info text NULL,
    subtotal numeric(10, 2) NULL,
    tax_amount numeric(10, 2) NULL,
    final_total numeric(10, 2) NULL,
    total_items int4 NULL,
    payment_method varchar(50) NULL,
    card_last_four bpchar(4) NULL,
    card_type varchar(20) NULL,
    raw_receipt_text text NULL,
    created_at timestamptz DEFAULT now() NULL,
    updated_at timestamptz DEFAULT now() NULL,
    CONSTRAINT receipts_pkey PRIMARY KEY (id)
);
CREATE INDEX idx_rpt_receipts_payment_method ON public.rpt_receipts USING btree (payment_method);
CREATE INDEX idx_rpt_receipts_transaction_date ON public.rpt_receipts USING btree (transaction_date);
CREATE INDEX idx_rpt_receipts_vendor_name ON public.rpt_receipts USING btree (vendor_name);


-- public.rpt_reconciliation_log definition

-- Drop table

-- DROP TABLE public.rpt_reconciliation_log;

CREATE TABLE public.rpt_reconciliation_log (
    id serial4 NOT NULL,
    receipt_line_item_id int4 NOT NULL,
    original_description varchar(500) NULL,
    suggested_master_product_id int4 NULL,
    action_taken varchar(50) NULL,
    confidence_score numeric(3, 2) NULL,
    reconciled_by varchar(100) NULL,
    reconciled_at timestamptz DEFAULT now() NULL,
    notes text NULL,
    CONSTRAINT rpt_reconciliation_log_pkey PRIMARY KEY (id),
    CONSTRAINT rpt_reconciliation_log_line_item_fkey FOREIGN KEY (receipt_line_item_id) REFERENCES public.rpt_receipt_line_items(id),
    CONSTRAINT rpt_reconciliation_log_master_product_fkey FOREIGN KEY (suggested_master_product_id) REFERENCES public.rpt_master_products(id)
);
CREATE INDEX idx_rpt_reconciliation_log_receipt_item ON public.rpt_reconciliation_log USING btree (receipt_line_item_id);
docs/more-fix.md (new file, +75 lines)
@@ -0,0 +1,75 @@
An analysis of your `core` package reveals a sophisticated and feature-rich system. However, its complexity also introduces several bug risks, potential crash points, and areas of redundancy. Here is a detailed breakdown.

### Bug Risks & Why You Are Crashing

Your application's stability is likely impacted by a few core architectural choices. Crashes most likely originate from inconsistent data structures at runtime, race conditions in shared services, and incomplete error handling.

**1. Heavy Reliance on `TypedDict` for State Management (Highest Risk)**

This is the most significant risk and the most probable cause of runtime crashes such as `KeyError` or `TypeError`.

* **The Problem:** Throughout the `src/biz_bud/states/` directory (e.g., `unified.py`, `rag_agent.py`, `base.py`), you use `TypedDict` to define the shape of your state. `TypedDict` is a tool for *static analysis* (Mypy or Pyright) and provides **zero runtime validation**. If one node in your graph produces a dictionary that is missing a key or has a value of the wrong type, the next node that tries to access it will crash.
* **Evidence:** The state definitions are littered with `NotRequired[...]`, `... | None`, and `... | Any` (e.g., `src/biz_bud/states/unified.py`), which weakens the data contract between nodes. For example, a node might expect `state['search_results']` to be a list, but if a preceding search fails and the key is never added, a downstream node will crash with a `KeyError`.
* **Why it Crashes:** A function signature might indicate it accepts `ResearchState`, but at runtime it just receives a standard Python `dict`. There is no guarantee that the dictionary actually conforms to the `ResearchState` structure.
* **Recommendation:** Systematically migrate all `TypedDict` state definitions to Pydantic `BaseModel`. Pydantic models perform runtime validation, which would turn these crashes into clear, catchable `ValidationError` exceptions at the boundary of each node. The `core/validation/graph_validation.py` module already attempts this with its `PydanticValidator`, but it should be the default for all state objects, not an add-on. A minimal sketch of the migration follows this list.
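
The sketch below illustrates the migration pattern only; the field names mirror those mentioned above, while `ResearchState` as shown here is a simplified stand-in, not the project's actual class.

```python
# Illustrative sketch, not the project's real state model: shows how a
# TypedDict-style state becomes a runtime-validated Pydantic model.
from pydantic import BaseModel, Field


class ResearchState(BaseModel):
    """Runtime-validated replacement for a TypedDict-based state."""

    query: str
    search_results: list[dict] = Field(default_factory=list)
    synthesis: str | None = None


# Validation happens at the node boundary, so a malformed payload fails
# loudly here instead of raising a KeyError several nodes later.
state = ResearchState.model_validate({"query": "demo"})
assert state.search_results == []  # missing key becomes an explicit default
```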

**2. Inconsistent Service and Singleton Management**

Race conditions and improper initialization/cleanup of shared services can lead to unpredictable behavior and crashes.

* **The Problem:** You have multiple patterns for managing global or shared objects: a `ServiceFactory` (`factory/service_factory.py`), a `SingletonLifecycleManager` (`services/singleton_manager.py`), and an `HTTPClient` singleton (`core/networking/http_client.py`). While sophisticated, inconsistencies in their use can cause issues. For instance, a service might be used before it is fully initialized or after it has been cleaned up.
* **Evidence:** The `ServiceFactory` uses an `_initializing` dictionary with `asyncio.Task` to prevent re-entrant initialization, which is good. However, if any service's `initialize()` method fails, the task will hold an exception that could be raised in an unexpected location when another part of the app tries to get that service. The `SingletonLifecycleManager` adds another layer of complexity, and it is unclear whether all critical singletons (like the `ServiceFactory` itself) are registered with it.
* **Why it Crashes:** A race condition during startup could lead to a service being requested before its dependencies are ready. An error during the async initialization of a critical service (like the database connection pool in `services/db.py`) could cause cascading failures across the application.

**3. Potential for Unhandled Exceptions and Swallowed Errors**

While you have a comprehensive error-handling framework in `core/errors`, there are areas where it might be bypassed.

* **The Problem:** The `handle_errors` decorator and the `ErrorRouter` provide a structured way to manage failures. However, there are still many generic `try...except Exception` blocks in the codebase. If these blocks don't convert the generic exception into a `BusinessBuddyError` or log it properly, the root cause of the error can be hidden.
* **Evidence:** In `nodes/llm/call.py`, the `call_model_node` has a broad `except Exception as e:` block. While it does attempt to log and create an `ErrorInfo` object, any failure within this exception handling itself (e.g., a serialization issue with the state) would be an unhandled crash. Similarly, in `tools/clients/r2r.py`, the `search` method has a broad `except Exception`, which could mask the actual issue from the R2R client.
* **Why it Crashes:** A generic exception that is caught but not properly processed or routed can leave the application in an inconsistent state, causing a different crash later on. If an exception is "swallowed" (caught and ignored), the application might proceed with `None` or incorrect data, leading to a `TypeError` or `AttributeError` in a subsequent step. A small sketch of the wrap-and-log pattern follows this list.
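
The following is a hedged sketch of the wrap-and-log pattern; `BusinessBuddyError` stands in for the project's actual error class (its real constructor may differ), and a real node would use the project's own `get_logger` rather than the stdlib directly.

```python
# Sketch only: keep the broad catch, but never swallow the exception.
# BusinessBuddyError here is a simplified stand-in for the real class.
import logging

logger = logging.getLogger(__name__)  # the project would use its own get_logger


class BusinessBuddyError(Exception):
    """Simplified stand-in for the project's structured error type."""


def risky_call() -> dict:
    raise ValueError("upstream failure")


def guarded_call() -> dict:
    try:
        return risky_call()
    except Exception as exc:
        logger.exception("risky_call failed")  # root cause stays visible in logs
        raise BusinessBuddyError("risky_call failed") from exc  # structured, routable
```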

**4. Configuration-Related Failures**

The system's behavior is heavily dependent on `config.yaml` and environment variables. A missing or invalid configuration can lead to startup or runtime failures.

* **The Problem:** The configuration loader (`core/config/loader.py`) merges settings from multiple sources. Critical values like API keys are often optional in the Pydantic models (e.g., `core/config/schemas/services.py`). If a key is not provided by any source, the value will be `None`.
* **Evidence:** In `services/llm/client.py`, the `_get_llm_instance` function might receive `api_key=None`. While the underlying LangChain clients might raise an error, the application doesn't perform an upfront check, so the failure surfaces deep within a library call, which is harder to debug.
* **Why it Crashes:** A service attempting to initialize without a required API key or a valid URL will crash. For example, the `PostgresStore` in `services/db.py` will fail to create its connection pool if database credentials are missing. A sketch of an upfront startup check follows this list.
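
As a hedged illustration of such an upfront check: `ServicesConfig` and its field names below are assumptions made for the example, not the project's actual schema.

```python
# Sketch only: fail fast at startup with a clear message instead of deep
# inside a library call. ServicesConfig and its fields are illustrative.
from pydantic import BaseModel


class ServicesConfig(BaseModel):
    llm_api_key: str | None = None
    database_url: str | None = None


def assert_required_settings(config: ServicesConfig) -> None:
    required = {"llm_api_key": config.llm_api_key, "database_url": config.database_url}
    missing = [name for name, value in required.items() if not value]
    if missing:
        raise RuntimeError(f"Missing required configuration values: {', '.join(missing)}")


# Raises immediately and names the missing keys, rather than failing later
# when the LLM client or the connection pool is first used.
assert_required_settings(ServicesConfig(llm_api_key="sk-demo", database_url="postgres://localhost/bb"))
```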

### Redundancy and Duplication

Duplicate code increases maintenance overhead and creates a risk of inconsistent behavior when one copy is updated but the other is not.

**1. Multiple JSON Extraction Implementations**

* **The Problem:** The logic for parsing potentially malformed JSON from LLM responses is implemented in at least two different places.
* **Evidence:**
  * `services/llm/utils.py` contains a very detailed and robust `parse_json_response` function with multiple recovery strategies.
  * `tools/capabilities/extraction/text/structured_extraction.py` contains a similar, but distinct, `extract_json_from_text` function with its own recovery logic such as `_fix_truncated_json`.
* **Recommendation:** Consolidate this logic into a single, robust utility function, likely the one in `tools/capabilities/extraction/text/structured_extraction.py`, as it appears more comprehensive. All other parts of the code should call this central function; the sketch below shows the shape of such a facade.
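
A minimal, hedged sketch of the facade shape; the actual recovery strategies would come from the existing functions named above, and the brace-trimming fallback shown here is only an example.

```python
# Sketch only: one shared entry point for "JSON the LLM probably produced".
# The real recovery strategies live in the modules named above.
import json
from typing import Any


def extract_json(text: str) -> Any | None:
    """Best-effort JSON extraction with one shared fallback path."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass
    # Example fallback: keep the outermost {...} block and retry once.
    start, end = text.find("{"), text.rfind("}")
    if start != -1 and end > start:
        try:
            return json.loads(text[start : end + 1])
        except json.JSONDecodeError:
            return None
    return None


print(extract_json('The model said: {"answer": 42}, hope that helps'))  # -> {'answer': 42}
```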

**2. Redundant URL Parsing and Analysis**

* **The Problem:** Logic for parsing, normalizing, and analyzing URLs is spread across multiple files instead of being centralized.
* **Evidence:**
  * `core/utils/url_analyzer.py` provides a detailed `analyze_url_type` function.
  * `core/utils/url_normalizer.py` provides a `URLNormalizer` class.
  * Despite these utilities, manual URL parsing using `urlparse` and custom domain-extraction logic is found in `nodes/scrape/route_url.py`, `tools/utils/url_filters.py`, `nodes/search/ranker.py`, and `graphs/rag/nodes/upload_r2r.py`.
* **Recommendation:** All URL analysis and normalization should go through the `URLAnalyzer` and `URLNormalizer` utilities in `core/utils`. This ensures consistent behavior when identifying domains, extensions, and repository URLs.

**3. Duplicated State Field Definitions**

* **The Problem:** Even with `BaseState`, common state fields are often redefined or handled inconsistently across different state `TypedDict`s.
* **Evidence:**
  * Fields like `query`, `search_results`, `extracted_info`, and `synthesis` appear in multiple state definitions (`states/research.py`, `states/buddy.py`, `states/unified.py`).
  * `states/unified.py` is an attempt to solve this but acts as a "god object," containing almost every possible field from every workflow. This makes it very difficult to reason about what state is actually available at any given point in a specific graph.
* **Recommendation:** Instead of a single unified state, embrace smaller, composable Pydantic models for state. Define mixin classes for common concerns (e.g., a `SearchStateMixin` with `search_query` and `search_results`) that can be included in more specific state models for different graphs; see the sketch after this list.
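
A hedged sketch of that composition; the class and field names below mirror the suggestion above but are illustrative, not existing project classes.

```python
# Sketch only: compose small Pydantic mixins instead of one unified state.
# Class and field names are illustrative, not the project's actual ones.
from pydantic import BaseModel, Field


class SearchStateMixin(BaseModel):
    search_query: str = ""
    search_results: list[dict] = Field(default_factory=list)


class SynthesisStateMixin(BaseModel):
    extracted_info: dict = Field(default_factory=dict)
    synthesis: str | None = None


class ResearchGraphState(SearchStateMixin, SynthesisStateMixin):
    """Only the fields this particular graph actually needs."""

    query: str


state = ResearchGraphState(query="ingredient research")
print(sorted(state.model_dump()))  # explicit, graph-specific field set
```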

**4. Inconsistent Service Client Initialization**

* **The Problem:** While the `ServiceFactory` is the intended pattern, some parts of the code appear to instantiate service clients directly.
* **Evidence:**
  * `tools/clients/` contains standalone clients like `FirecrawlClient` and `TavilyClient`.
  * `tools/capabilities/search/tool.py` directly instantiates providers like `TavilySearchProvider` inside its `_initialize_providers` method.
* **Recommendation:** All external service clients and providers should be managed and instantiated exclusively through the `ServiceFactory`. This allows for centralized configuration, singleton management, and proper lifecycle control (initialization and cleanup).

docs/service-factory-client-integration.md (new file, +213 lines)
@@ -0,0 +1,213 @@
# Service Factory Client Integration

## Overview

This document outlines the architectural improvement made to integrate tool clients (JinaClient, FirecrawlClient, TavilyClient) with the ServiceFactory pattern, ensuring consistent dependency injection, lifecycle management, and testing patterns.

## Problem Identified

**Issue**: Test files were directly instantiating tool clients instead of using the ServiceFactory pattern:

```python
# ❌ INCORRECT: Direct instantiation (bypassing factory benefits)
client = JinaClient(Mock(spec=AppConfig))
client = FirecrawlClient(Mock(spec=AppConfig))
client = TavilyClient(Mock(spec=AppConfig))
```

**Root Cause**: Tool clients inherited from `BaseService` but lacked corresponding factory methods:

- Missing `get_jina_client()` → JinaClient
- Missing `get_firecrawl_client()` → FirecrawlClient
- Missing `get_tavily_client()` → TavilyClient

## Solution Implemented

### 1. Added Factory Methods

Extended `ServiceFactory` with new client methods in `/app/src/biz_bud/services/factory/service_factory.py`:

```python
async def get_jina_client(self) -> "JinaClient":
    """Get the Jina client service."""
    from biz_bud.tools.clients.jina import JinaClient
    return await self.get_service(JinaClient)

async def get_firecrawl_client(self) -> "FirecrawlClient":
    """Get the Firecrawl client service."""
    from biz_bud.tools.clients.firecrawl import FirecrawlClient
    return await self.get_service(FirecrawlClient)

async def get_tavily_client(self) -> "TavilyClient":
    """Get the Tavily client service."""
    from biz_bud.tools.clients.tavily import TavilyClient
    return await self.get_service(TavilyClient)
```
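
For context on what `get_service` is assumed to provide here, the sketch below shows a plausible per-class caching shape; the real `ServiceFactory.get_service` likely also handles locking and in-flight initialization tracking, so treat this as an illustration only.

```python
# Sketch only: the per-class caching the factory methods above rely on.
# Not the actual ServiceFactory implementation.
from typing import TypeVar, cast

T = TypeVar("T")


class _FactorySketch:
    def __init__(self, config: object) -> None:
        self._config = config
        self._instances: dict[type, object] = {}

    async def get_service(self, service_cls: type[T]) -> T:
        if service_cls not in self._instances:
            instance = service_cls(self._config)  # inject shared config
            await instance.initialize()           # assumed BaseService hook
            self._instances[service_cls] = instance
        return cast(T, self._instances[service_cls])  # one instance per class
```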

### 2. Added Type Checking Imports

Added TYPE_CHECKING imports for proper type hints:

```python
if TYPE_CHECKING:
    from biz_bud.tools.clients.firecrawl import FirecrawlClient
    from biz_bud.tools.clients.jina import JinaClient
    from biz_bud.tools.clients.tavily import TavilyClient
```

### 3. Fixed Pydantic Compatibility

Fixed Pydantic errors in legacy_tools.py by adding proper type annotations:

```python
# Before (causing Pydantic errors):
args_schema = StatisticsExtractionInput

# After (properly typed):
args_schema: type[StatisticsExtractionInput] = StatisticsExtractionInput
```

### 4. Created Demonstration Tests

Created comprehensive test files showing proper factory usage:

- `/app/tests/unit_tests/services/test_factory_client_integration.py`
- `/app/tests/unit_tests/tools/clients/test_jina_factory_pattern.py`

## Proper Usage Pattern

### ✅ Correct Factory Pattern

```python
# Proper dependency injection and lifecycle management
async with ServiceFactory(config) as factory:
    jina_client = await factory.get_jina_client()
    firecrawl_client = await factory.get_firecrawl_client()
    tavily_client = await factory.get_tavily_client()

    # Use clients with automatic cleanup
    result = await jina_client.search("query")
    # Automatic cleanup when context exits
```

### ❌ Incorrect Direct Instantiation

```python
# Bypasses factory benefits - avoid this pattern
client = JinaClient(Mock(spec=AppConfig))  # No dependency injection
client = FirecrawlClient(config)  # No lifecycle management
client = TavilyClient(config)  # No singleton behavior
```

## Benefits of Factory Pattern

### 1. **Dependency Injection**
- Automatic configuration injection
- Consistent service creation
- Centralized configuration management

### 2. **Lifecycle Management**
- Proper initialization and cleanup
- Resource management
- Context manager support

### 3. **Singleton Behavior**
- Single instance per factory
- Memory efficiency
- State consistency (see the test sketch after this section)

### 4. **Thread Safety**
- Race-condition-free initialization
- Concurrent access protection
- Proper locking mechanisms

### 5. **Testing Benefits**
- Consistent mocking patterns
- Easier dependency substitution
- Better test isolation
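
To make the singleton claim checkable, here is a hedged test sketch; it assumes the `service_factory` fixture defined in the migration guide below and that the factory caches one instance per service class.

```python
# Sketch only: asserts "single instance per factory" as described above.
# Assumes the service_factory fixture from the migration guide below.
import pytest


@pytest.mark.asyncio
async def test_jina_client_is_singleton(service_factory):
    first = await service_factory.get_jina_client()
    second = await service_factory.get_jina_client()
    assert first is second  # same cached instance, not a new client per call
```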

## Testing Pattern Comparison

### Old Pattern (Incorrect)

```python
def test_client_functionality():
    # Direct instantiation bypasses factory
    client = JinaClient(Mock(spec=AppConfig))  # ❌
    result = client.some_method()
    assert result == expected
```

### New Pattern (Correct)

```python
@pytest.mark.asyncio
async def test_client_functionality(service_factory):
    # Factory-based creation with proper lifecycle
    with patch.object(JinaClient, 'some_method') as mock_method:
        client = await service_factory.get_jina_client()  # ✅
        result = await client.some_method()
        mock_method.assert_called_once()
```

## Migration Guide for Existing Tests

### Step 1: Update Test Fixtures

```python
@pytest.fixture
async def service_factory(mock_app_config):
    factory = ServiceFactory(mock_app_config)
    yield factory
    await factory.cleanup()
```

### Step 2: Replace Direct Instantiation

```python
# Before:
client = JinaClient(Mock(spec=AppConfig))

# After:
client = await service_factory.get_jina_client()
```

### Step 3: Add Proper Mocking

```python
with patch.object(JinaClient, '_validate_config') as mock_validate:
    with patch.object(JinaClient, 'initialize', new_callable=AsyncMock):
        mock_validate.return_value = Mock()
        client = await service_factory.get_jina_client()
```

## Verification

### Factory Methods Available

- ✅ `ServiceFactory.get_jina_client()`
- ✅ `ServiceFactory.get_firecrawl_client()`
- ✅ `ServiceFactory.get_tavily_client()`

### Tests Pass

- ✅ Factory integration tests
- ✅ Singleton behavior tests
- ✅ Lifecycle management tests
- ✅ Thread safety tests

### Linting Clean

- ✅ Ruff checks pass
- ✅ Pydantic compatibility resolved
- ✅ Type hints correct

## Next Steps

1. **Update Existing Test Files**: Migrate remaining test files to use factory pattern
2. **Add More Factory Methods**: Consider adding factory methods for other BaseService subclasses
3. **Documentation Updates**: Update relevant README files with factory patterns
4. **Code Reviews**: Ensure all new code follows factory pattern

## Files Modified

### Core Changes

- `src/biz_bud/services/factory/service_factory.py` - Added factory methods
- `src/biz_bud/tools/capabilities/extraction/legacy_tools.py` - Fixed Pydantic errors

### Documentation/Tests

- `tests/unit_tests/services/test_factory_client_integration.py` - Factory integration tests
- `tests/unit_tests/tools/clients/test_jina_factory_pattern.py` - Factory pattern demonstration
- `docs/service-factory-client-integration.md` - This documentation

This architectural improvement ensures consistent service management across the codebase and provides a foundation for proper dependency injection and lifecycle management patterns.

@@ -16,7 +16,7 @@ from typing import Any
|
||||
import yaml
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
from biz_bud.graphs.catalog_research import create_catalog_research_graph
|
||||
from biz_bud.graphs.research.graph import create_research_graph as create_catalog_research_graph
|
||||
|
||||
|
||||
async def load_catalog_from_config(config_path: Path) -> dict[str, Any]:
|
||||
@@ -170,8 +170,7 @@ async def main():
|
||||
catalog_data = transform_to_catalog_format(catalog_config)
|
||||
|
||||
# Create and run the research graph
|
||||
graph = create_catalog_research_graph()
|
||||
compiled_graph = graph.compile()
|
||||
compiled_graph = create_catalog_research_graph()
|
||||
|
||||
print("\n⏳ Running ingredient research workflow...")
|
||||
print(" 1. Searching for ingredient information...")
|
||||
|
||||
@@ -3,14 +3,15 @@
|
||||
|
||||
import asyncio
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
import yaml
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
from biz_bud.graphs.catalog_intel import create_catalog_intel_graph
|
||||
from biz_bud.graphs.catalog.graph import create_catalog_graph as create_catalog_intel_graph
|
||||
|
||||
|
||||
async def load_catalog_from_config(config_path: Path) -> dict:
|
||||
async def load_catalog_from_config(config_path: Path) -> dict[str, Any]:
|
||||
"""Load catalog data from config.yaml file."""
|
||||
with open(config_path) as f:
|
||||
config = yaml.safe_load(f)
|
||||
@@ -24,7 +25,7 @@ async def load_catalog_from_config(config_path: Path) -> dict:
|
||||
return catalog_config
|
||||
|
||||
|
||||
def transform_config_to_catalog_format(catalog_config: dict) -> dict:
|
||||
def transform_config_to_catalog_format(catalog_config: dict[str, Any]) -> dict[str, Any]:
|
||||
"""Transform config catalog data into the format expected by the graph.
|
||||
|
||||
In a real application, this would likely:
|
||||
@@ -93,7 +94,10 @@ def transform_config_to_catalog_format(catalog_config: dict) -> dict:
|
||||
|
||||
# Build catalog items
|
||||
catalog_items = []
|
||||
for idx, item_name in enumerate(catalog_config.get("items", []), 1):
|
||||
items = catalog_config.get("items", [])
|
||||
if not isinstance(items, list):
|
||||
items = []
|
||||
for idx, item_name in enumerate(items, 1):
|
||||
details = item_details_map.get(item_name, {})
|
||||
catalog_items.append(
|
||||
{
|
||||
@@ -119,8 +123,8 @@ def transform_config_to_catalog_format(catalog_config: dict) -> dict:
|
||||
|
||||
|
||||
async def analyze_catalog_with_user_query(
|
||||
catalog_data: dict, user_query: str
|
||||
) -> dict[str, object] | None:
|
||||
catalog_data: dict[str, Any], user_query: str
|
||||
) -> dict[str, Any] | None:
|
||||
"""Run catalog intelligence analysis with the given query."""
|
||||
# Create the catalog intelligence graph
|
||||
graph = create_catalog_intel_graph()
|
||||
@@ -143,7 +147,12 @@ async def analyze_catalog_with_user_query(
|
||||
|
||||
# Run the analysis
|
||||
print(f"\n🔍 Analyzing catalog with query: '{user_query}'")
|
||||
print(f"📋 Catalog items: {[item['name'] for item in catalog_data['catalog_items']]}")
|
||||
catalog_items = catalog_data.get('catalog_items', [])
|
||||
if isinstance(catalog_items, list):
|
||||
item_names = [item.get('name', 'Unknown') for item in catalog_items if isinstance(item, dict)]
|
||||
print(f"📋 Catalog items: {item_names}")
|
||||
else:
|
||||
print("📋 Catalog items: []")
|
||||
print("⏳ Running analysis...")
|
||||
|
||||
try:
|
||||
@@ -188,9 +197,17 @@ async def main():
|
||||
try:
|
||||
# Load catalog data from config
|
||||
catalog_config = await load_catalog_from_config(config_path)
|
||||
print(f"📄 Loaded catalog config: {len(catalog_config.get('items', []))} items")
|
||||
print(f"🍴 Category: {', '.join(catalog_config.get('category', []))}")
|
||||
print(f"🌴 Subcategory: {', '.join(catalog_config.get('subcategory', []))}")
|
||||
items = catalog_config.get('items', [])
|
||||
items_count = len(items) if isinstance(items, list) else 0
|
||||
print(f"📄 Loaded catalog config: {items_count} items")
|
||||
|
||||
category = catalog_config.get('category', [])
|
||||
category_str = ', '.join(category) if isinstance(category, list) else str(category)
|
||||
print(f"🍴 Category: {category_str}")
|
||||
|
||||
subcategory = catalog_config.get('subcategory', [])
|
||||
subcategory_str = ', '.join(subcategory) if isinstance(subcategory, list) else str(subcategory)
|
||||
print(f"🌴 Subcategory: {subcategory_str}")
|
||||
|
||||
# Transform to catalog format
|
||||
catalog_data = transform_config_to_catalog_format(catalog_config)
|
||||
|
||||
@@ -10,7 +10,7 @@ from typing import Any
|
||||
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
from biz_bud.graphs.catalog_research import create_catalog_research_graph
|
||||
from biz_bud.graphs.research.graph import create_research_graph as create_catalog_research_graph
|
||||
|
||||
|
||||
def create_tech_catalog() -> dict[str, Any]:
|
||||
@@ -124,8 +124,7 @@ async def main():
|
||||
print(f" - {item['name']}: {item['description']}")
|
||||
|
||||
# Create and run the research graph (same graph as food!)
|
||||
graph = create_catalog_research_graph()
|
||||
compiled_graph = graph.compile()
|
||||
compiled_graph = create_catalog_research_graph()
|
||||
|
||||
print("\n⏳ Running component research workflow...")
|
||||
print(" 1. Searching for technical specifications...")
|
||||
|
||||
@@ -7,7 +7,7 @@ import sys
|
||||
from typing import Any
|
||||
|
||||
from biz_bud.core.config.loader import load_config_async
|
||||
from biz_bud.graphs.url_to_r2r import process_url_to_r2r_with_streaming
|
||||
from biz_bud.graphs.rag.graph import process_url_to_r2r_with_streaming
|
||||
|
||||
|
||||
async def crawl_r2r_docs_fixed(max_depth: int = 3, max_pages: int = 50):
|
||||
@@ -95,7 +95,7 @@ async def crawl_r2r_docs_fixed(max_depth: int = 3, max_pages: int = 50):
|
||||
print("📊 CRAWL RESULTS")
|
||||
print("=" * 60)
|
||||
|
||||
if result.get("error"):
|
||||
if "error" in result and result["error"]:
|
||||
print(f"❌ Error: {result['error']}")
|
||||
return
|
||||
|
||||
|
||||
@@ -4,8 +4,8 @@
|
||||
import asyncio
|
||||
|
||||
from biz_bud.core.config.loader import load_config_async
|
||||
from biz_bud.core.logging.utils import get_logger
|
||||
from biz_bud.graphs.url_to_r2r import process_url_to_r2r
|
||||
from biz_bud.graphs.rag.graph import process_url_to_r2r
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
@@ -61,7 +61,7 @@ async def test_with_custom_limits():
|
||||
logger.info(f"- Pages scraped: {len(result.get('scraped_content', []))}")
|
||||
logger.info(f"- Status: {result.get('status')}")
|
||||
|
||||
if result.get("error"):
|
||||
if "error" in result and result["error"]:
|
||||
logger.error(f"- Error: {result['error']}")
|
||||
|
||||
except Exception as e:
|
||||
|
||||
@@ -1,209 +1,54 @@
|
||||
"""Example usage of enhanced Firecrawl API endpoints."""
|
||||
|
||||
import asyncio
|
||||
from typing import Any, cast
|
||||
|
||||
from bb_tools.api_clients.firecrawl import (
|
||||
CrawlOptions,
|
||||
ExtractOptions,
|
||||
FirecrawlApp,
|
||||
FirecrawlOptions,
|
||||
MapOptions,
|
||||
SearchOptions,
|
||||
)
|
||||
# Note: These imports are from the original firecrawl library
|
||||
# They are not available in our current client implementation
|
||||
# This example is disabled as it requires the actual firecrawl library
|
||||
|
||||
|
||||
async def example_map_website():
|
||||
"""Demonstrate using the map endpoint to discover URLs."""
|
||||
async with FirecrawlApp() as app:
|
||||
# Map a website to discover all URLs
|
||||
map_options = MapOptions(
|
||||
limit=50,
|
||||
include_subdomains=False,
|
||||
search="documentation", # Optional: filter URLs containing "documentation"
|
||||
)
|
||||
|
||||
urls = await app.map_website("https://example.com", options=map_options)
|
||||
print(f"Discovered {len(urls)} URLs")
|
||||
for url in urls[:5]:
|
||||
print(f" - {url}")
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
|
||||
async def example_crawl_website():
|
||||
"""Demonstrate using the crawl endpoint for deep website crawling."""
|
||||
async with FirecrawlApp() as app:
|
||||
# Crawl a website with depth control
|
||||
crawl_options = CrawlOptions(
|
||||
limit=20,
|
||||
max_depth=2,
|
||||
include_paths=[r"/docs/.*", r"/api/.*"],
|
||||
exclude_paths=[r".*\.pdf$", r".*/archive/.*"],
|
||||
scrape_options=FirecrawlOptions(
|
||||
formats=["markdown", "links"],
|
||||
only_main_content=True,
|
||||
),
|
||||
)
|
||||
|
||||
result = await app.crawl_website(
|
||||
"https://example.com",
|
||||
options=crawl_options,
|
||||
wait_for_completion=True,
|
||||
)
|
||||
|
||||
if isinstance(result, dict) and "data" in result:
|
||||
data = result["data"]
|
||||
if isinstance(data, list):
|
||||
print(f"Crawled {len(data)} pages")
|
||||
for page in data[:3]:
|
||||
if isinstance(page, dict):
|
||||
metadata = page.get("metadata", {})
|
||||
title = (
|
||||
metadata.get("title", "N/A") if isinstance(metadata, dict) else "N/A"
|
||||
)
|
||||
content = page.get("content", "")
|
||||
print(f" - Title: {title}")
|
||||
if isinstance(content, str):
|
||||
print(f" Content preview: {content[:100]}...")
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
|
||||
async def example_search_and_scrape():
|
||||
"""Demonstrate using the search endpoint to search and scrape results."""
|
||||
async with FirecrawlApp() as app:
|
||||
# Search the web and scrape results
|
||||
search_options = SearchOptions(
|
||||
limit=5,
|
||||
tbs="qdr:w", # Last week
|
||||
location="US",
|
||||
scrape_options=FirecrawlOptions(
|
||||
formats=["markdown"],
|
||||
only_main_content=True,
|
||||
),
|
||||
)
|
||||
|
||||
results = await app.search("RAG implementation best practices", options=search_options)
|
||||
print(f"Found and scraped {len(results)} search results")
|
||||
|
||||
for i, result in enumerate(results):
|
||||
if result:
|
||||
print(f"\n{i + 1}. {result.get('title', 'No title')}")
|
||||
print(f" URL: {result.get('url', 'No URL')}")
|
||||
markdown = result.get("markdown")
|
||||
if markdown and isinstance(markdown, str):
|
||||
print(f" Content preview: {markdown[:200]}...")
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
|
||||
async def example_extract_structured_data():
|
||||
"""Demonstrate using the extract endpoint for AI-powered extraction."""
|
||||
async with FirecrawlApp() as app:
|
||||
# Extract structured data from multiple URLs
|
||||
urls = [
|
||||
"https://example.com/company/about",
|
||||
"https://example.com/company/team",
|
||||
"https://example.com/company/careers",
|
||||
]
|
||||
|
||||
# Option 1: Using a prompt
|
||||
extract_options = ExtractOptions(
|
||||
prompt="Extract company information including: company name, founded year, number of employees, main products/services, and key team members with their roles.",
|
||||
)
|
||||
|
||||
result = await app.extract(urls, options=extract_options)
|
||||
if result.get("success"):
|
||||
print("Extracted company information:")
|
||||
print(result.get("data", {}))
|
||||
|
||||
# Option 2: Using a schema
|
||||
schema_options = ExtractOptions(
|
||||
extract_schema={
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"company_name": {"type": "string"},
|
||||
"founded_year": {"type": "integer"},
|
||||
"employees": {"type": "integer"},
|
||||
"products": {"type": "array", "items": {"type": "string"}},
|
||||
"team_members": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"type": "object",
|
||||
"properties": {
|
||||
"name": {"type": "string"},
|
||||
"role": {"type": "string"},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
structured_result = await app.extract(urls, options=schema_options)
|
||||
if structured_result.get("success"):
|
||||
print("\nStructured extraction result:")
|
||||
print(structured_result.get("data", {}))
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
|
||||
async def example_rag_integration():
|
||||
"""Demonstrate using Firecrawl for RAG pipeline."""
|
||||
async with FirecrawlApp() as app:
|
||||
base_url = "https://docs.example.com"
|
||||
|
||||
# Step 1: Map the documentation site
|
||||
print("Step 1: Discovering documentation pages...")
|
||||
map_options = MapOptions(limit=100, sitemap_only=True)
|
||||
all_urls = await app.map_website(base_url, options=map_options)
|
||||
|
||||
# Step 2: Crawl and extract content
|
||||
print(f"\nStep 2: Crawling {len(all_urls)} pages...")
|
||||
crawl_options = CrawlOptions(
|
||||
limit=50,
|
||||
scrape_options=FirecrawlOptions(
|
||||
formats=["markdown"],
|
||||
only_main_content=True,
|
||||
exclude_tags=["nav", "footer", "header"],
|
||||
),
|
||||
)
|
||||
|
||||
crawl_result = await app.crawl_website(base_url, options=crawl_options)
|
||||
|
||||
# Step 3: Process for RAG
|
||||
if isinstance(crawl_result, dict) and "data" in crawl_result:
|
||||
data = crawl_result["data"]
|
||||
if isinstance(data, list):
|
||||
print(f"\nStep 3: Processing {len(data)} pages for RAG...")
|
||||
documents = []
|
||||
for page in data:
|
||||
if isinstance(page, dict) and page.get("markdown"):
|
||||
page_metadata = page.get("metadata", {})
|
||||
if isinstance(page_metadata, dict):
|
||||
# Cast to Any to work around pyrefly type inference
|
||||
metadata_dict = cast("Any", page_metadata)
|
||||
documents.append(
|
||||
{
|
||||
"content": page["markdown"],
|
||||
"metadata": {
|
||||
"source": base_url,
|
||||
"title": metadata_dict.get("title", ""),
|
||||
"description": metadata_dict.get("description", ""),
|
||||
},
|
||||
}
|
||||
)
|
||||
|
||||
print(f"Ready to index {len(documents)} documents into vector store")
|
||||
return documents
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
|
||||
async def main():
|
||||
"""Run all examples."""
|
||||
print("=== Firecrawl Enhanced API Examples ===\n")
|
||||
"""Run all the Firecrawl examples."""
|
||||
print("Enhanced Firecrawl API Examples")
|
||||
print("=" * 40)
|
||||
print("Note: These examples require the firecrawl-py library")
|
||||
print()
|
||||
|
||||
# Uncomment the examples you want to run:
|
||||
|
||||
# await example_map_website()
|
||||
# await example_crawl_website()
|
||||
# await example_search_and_scrape()
|
||||
# await example_extract_structured_data()
|
||||
# await example_rag_integration()
|
||||
|
||||
print("\nNote: Set FIRECRAWL_API_KEY environment variable before running!")
|
||||
await example_map_website()
|
||||
await example_crawl_website()
|
||||
await example_search_and_scrape()
|
||||
await example_extract_structured_data()
|
||||
await example_rag_integration()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
@@ -19,12 +19,18 @@ load_dotenv()
|
||||
|
||||
async def analyze_crawl_vs_scrape():
|
||||
"""Analyze the relationship between crawl and scrape endpoints."""
|
||||
from bb_tools.api_clients.firecrawl import (
|
||||
CrawlJob,
|
||||
CrawlOptions,
|
||||
FirecrawlApp,
|
||||
FirecrawlOptions,
|
||||
)
|
||||
# Note: These imports are from the original firecrawl library
|
||||
# They are not available in our current client implementation
|
||||
# from biz_bud.tools.clients.firecrawl import (
|
||||
# CrawlJob,
|
||||
# CrawlOptions,
|
||||
# FirecrawlApp,
|
||||
# FirecrawlOptions,
|
||||
# )
|
||||
|
||||
# This example needs the actual firecrawl library
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
print("=== Understanding Crawl vs Scrape Behavior ===\n")
|
||||
print("Key insights:")
|
||||
@@ -100,12 +106,18 @@ async def analyze_crawl_vs_scrape():
|
||||
|
||||
async def monitor_crawl_job():
|
||||
"""Monitor a crawl job with real-time status updates."""
|
||||
from bb_tools.api_clients.firecrawl import (
|
||||
CrawlJob,
|
||||
CrawlOptions,
|
||||
FirecrawlApp,
|
||||
FirecrawlOptions,
|
||||
)
|
||||
# Note: These imports are from the original firecrawl library
|
||||
# They are not available in our current client implementation
|
||||
# from biz_bud.tools.clients.firecrawl import (
|
||||
# CrawlJob,
|
||||
# CrawlOptions,
|
||||
# FirecrawlApp,
|
||||
# FirecrawlOptions,
|
||||
# )
|
||||
|
||||
# This example needs the actual firecrawl library
|
||||
print("This example requires the actual firecrawl-py library")
|
||||
return
|
||||
|
||||
print("=== Firecrawl Crawl Job Monitoring Example ===\n")
|
||||
|
||||
@@ -241,7 +253,7 @@ async def monitor_crawl_job():
|
||||
|
||||
async def monitor_batch_scrape():
|
||||
"""Monitor batch scrape operations."""
|
||||
from bb_tools.api_clients.firecrawl import FirecrawlApp, FirecrawlOptions
|
||||
from biz_bud.tools.clients.firecrawl import FirecrawlApp, FirecrawlOptions
|
||||
|
||||
print("\n\n=== Firecrawl Batch Scrape Monitoring Example ===\n")
|
||||
|
||||
|
||||
@@ -4,10 +4,10 @@
|
||||
import asyncio
|
||||
import os
|
||||
|
||||
from bb_tools.r2r import r2r_rag, r2r_search
|
||||
|
||||
from biz_bud.graphs import process_url_to_r2r
|
||||
from biz_bud.graphs.rag.graph import process_url_to_r2r
|
||||
from biz_bud.graphs.rag.nodes.upload_r2r import extract_meaningful_name_from_url
|
||||
from biz_bud.tools.capabilities.database.tool import r2r_rag_completion as r2r_rag
|
||||
from biz_bud.tools.capabilities.database.tool import r2r_search_documents as r2r_search
|
||||
|
||||
|
||||
async def main():
|
||||
|
||||
@@ -9,7 +9,7 @@ import time
|
||||
from pathlib import Path
|
||||
|
||||
# Import the logging configuration
|
||||
from biz_bud.core.logging.config import get_logger, setup_logging
|
||||
from biz_bud.logging import get_logger, setup_logging
|
||||
|
||||
|
||||
def simulate_verbose_logs():
|
||||
|
||||
@@ -53,14 +53,19 @@ async def test_rag_agent_with_firecrawl():
|
||||
if result.get("processing_result"):
|
||||
processing_result = result["processing_result"]
|
||||
if processing_result:
|
||||
if processing_result.get("skipped"):
|
||||
print(f"\nSkipped: {processing_result.get('reason')}")
|
||||
# Only call .get() if processing_result is a dictionary
|
||||
if isinstance(processing_result, dict):
|
||||
if processing_result.get("skipped"):
|
||||
print(f"\nSkipped: {processing_result.get('reason')}")
|
||||
else:
|
||||
print("\nProcessed Successfully!")
|
||||
if processing_result.get("scraped_content"):
|
||||
print(f"Pages scraped: {len(processing_result['scraped_content'])}")
|
||||
if processing_result.get("r2r_dataset_id"):
|
||||
print(f"R2R dataset: {processing_result['r2r_dataset_id']}")
|
||||
else:
|
||||
print("\nProcessed Successfully!")
|
||||
if processing_result.get("scraped_content"):
|
||||
print(f"Pages scraped: {len(processing_result['scraped_content'])}")
|
||||
if processing_result.get("r2r_dataset_id"):
|
||||
print(f"R2R dataset: {processing_result['r2r_dataset_id']}")
|
||||
# Handle non-dict processing results
|
||||
print(f"\nProcessing result: {processing_result}")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\nError processing {url}: {e}")
|
||||
@@ -68,7 +73,7 @@ async def test_rag_agent_with_firecrawl():
|
||||
|
||||
async def test_firecrawl_endpoints_directly():
|
||||
"""Test Firecrawl endpoints directly."""
|
||||
from bb_tools.api_clients.firecrawl import (
|
||||
from biz_bud.tools.clients.firecrawl import (
|
||||
ExtractOptions,
|
||||
FirecrawlApp,
|
||||
MapOptions,
|
||||
|
||||
@@ -11,7 +11,7 @@ import asyncio
|
||||
import os
|
||||
from pprint import pprint
|
||||
|
||||
from bb_tools.api_clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
|
||||
from biz_bud.tools.clients.firecrawl import CrawlJob, CrawlOptions, FirecrawlApp, FirecrawlOptions
|
||||
|
||||
|
||||
async def test_basic_scrape():
|
||||
|
||||
@@ -4,11 +4,13 @@
|
||||
"agent": "./src/biz_bud/graphs/graph.py:graph_factory",
|
||||
"buddy_agent": "./src/biz_bud/agents/buddy_agent.py:buddy_agent_factory",
|
||||
"planner": "./src/biz_bud/graphs/planner.py:planner_graph_factory",
|
||||
"research": "./src/biz_bud/graphs/research.py:research_graph_factory",
|
||||
"catalog": "./src/biz_bud/graphs/catalog.py:catalog_factory",
|
||||
"paperless": "./src/biz_bud/graphs/paperless.py:paperless_graph_factory",
|
||||
"url_to_r2r": "./src/biz_bud/graphs/url_to_r2r.py:url_to_r2r_graph_factory",
|
||||
"error_handling": "./src/biz_bud/graphs/error_handling.py:error_handling_graph_factory"
|
||||
"research": "./src/biz_bud/graphs/research/graph.py:research_graph_factory",
|
||||
"catalog": "./src/biz_bud/graphs/catalog/graph.py:catalog_factory",
|
||||
"paperless": "./src/biz_bud/graphs/paperless/graph.py:paperless_graph_factory",
|
||||
"url_to_r2r": "./src/biz_bud/graphs/rag/graph.py:url_to_r2r_graph_factory",
|
||||
"error_handling": "./src/biz_bud/graphs/error_handling.py:error_handling_graph_factory",
|
||||
"analysis": "./src/biz_bud/graphs/analysis/graph.py:analysis_graph_factory",
|
||||
"scraping": "./src/biz_bud/graphs/scraping/graph.py:scraping_graph_factory"
|
||||
},
|
||||
"env": ".env",
|
||||
"http": {
|
||||
|
||||
mypy.ini (7 lines changed)
@@ -15,7 +15,7 @@ warn_unused_ignores = True
|
||||
warn_unreachable = True
|
||||
strict_equality = True
|
||||
extra_checks = True
|
||||
mypy_path = packages/business-buddy-tools/src/bb_tools/stubs
|
||||
mypy_path = src
|
||||
|
||||
[mypy-r2r.*]
|
||||
ignore_missing_imports = True
|
||||
@@ -144,8 +144,5 @@ ignore_missing_imports = True
|
||||
ignore_missing_imports = True
|
||||
|
||||
# Allow explicit Any for legitimate JSON processing use cases
|
||||
[mypy-packages.business-buddy-core.src.bb_core.validation.merge]
|
||||
disallow_any_explicit = False
|
||||
|
||||
[mypy-bb_core.validation.merge]
|
||||
[mypy-biz_bud.core.validation.*]
|
||||
disallow_any_explicit = False
|
||||
|
||||
@@ -60,6 +60,8 @@ dependencies = [
|
||||
"zendriver>=0.8.1",
|
||||
# NLP and AI utilities
|
||||
"nltk>=3.9.1",
|
||||
"spacy>=3.7.0", # For content normalization and tokenization
|
||||
"datasketch>=1.6.0", # For MinHash and SimHash deduplication
|
||||
"tiktoken>=0.8.0,<0.9.0",
|
||||
"openai>=1.91.0",
|
||||
"anthropic>=0.55.0",
|
||||
@@ -154,11 +156,16 @@ addopts = [
|
||||
"--strict-config",
|
||||
"--verbose",
|
||||
"--tb=short",
|
||||
"--cov=biz_bud",
|
||||
"--cov-report=term-missing",
|
||||
"--cov-report=html",
|
||||
"--cov=src/biz_bud",
|
||||
"--cov-report=term-missing:skip-covered",
|
||||
"--cov-report=html:htmlcov",
|
||||
"--cov-report=xml:coverage.xml",
|
||||
"--cov-report=json:coverage.json",
|
||||
"--cov-branch",
|
||||
"--cov-fail-under=70",
|
||||
"--cov-context=test",
|
||||
"-n=auto",
|
||||
"--dist=worksteal", # Better load balancing for parallel tests
|
||||
]
|
||||
filterwarnings = [
|
||||
"ignore::pydantic.PydanticDeprecatedSince20",
|
||||
@@ -185,20 +192,75 @@ target-version = ["py312"]
|
||||
include = '\.pyi?$'
|
||||
|
||||
[tool.coverage.run]
|
||||
source = ["biz_bud"]
|
||||
omit = ["*/tests/*", "*/__init__.py"]
|
||||
source = ["src/biz_bud"]
|
||||
omit = [
|
||||
"*/tests/*",
|
||||
"*/__init__.py",
|
||||
"*/conftest.py",
|
||||
"*/test_*.py",
|
||||
"src/biz_bud/webapp.py", # Main app entry point, typically excluded
|
||||
"src/biz_bud/**/migrations/*", # Database migrations
|
||||
"src/biz_bud/**/cli.py", # CLI entry points
|
||||
]
|
||||
branch = true # Enable branch coverage
|
||||
parallel = true # Support parallel test execution
|
||||
relative_files = true # Use relative file paths
|
||||
data_file = ".coverage"
|
||||
|
||||
[tool.coverage.report]
|
||||
show_missing = true
|
||||
skip_covered = false
|
||||
precision = 2
|
||||
exclude_lines = [
|
||||
# Standard exclusions
|
||||
"pragma: no cover",
|
||||
"def __repr__",
|
||||
"if self.debug:",
|
||||
"raise AssertionError",
|
||||
"raise NotImplementedError",
|
||||
"if __name__ == .__main__.:",
|
||||
"class .*Protocol:",
|
||||
|
||||
# Type checking exclusions
|
||||
"if TYPE_CHECKING:",
|
||||
"class .*Protocol.*:",
|
||||
"@abstract",
|
||||
"@overload",
|
||||
"\\.\\.\\.", # Ellipsis in type stubs
|
||||
|
||||
# Error handling exclusions
|
||||
"except ImportError:",
|
||||
"except ModuleNotFoundError:",
|
||||
"raise NotImplementedError.*",
|
||||
|
||||
# Debug and development exclusions
|
||||
"if DEBUG:",
|
||||
"if settings\\.DEBUG:",
|
||||
"if __debug__:",
|
||||
|
||||
# Platform-specific exclusions
|
||||
"if sys\\.platform",
|
||||
"if platform\\.system",
|
||||
|
||||
# Defensive programming
|
||||
"assert False",
|
||||
"raise SystemExit",
|
||||
]
|
||||
ignore_errors = false
|
||||
skip_empty = true
|
||||
|
||||
[tool.coverage.html]
|
||||
directory = "htmlcov"
|
||||
title = "Business Buddy Test Coverage Report"
|
||||
show_contexts = true
|
||||
skip_covered = false
|
||||
skip_empty = true
|
||||
|
||||
[tool.coverage.xml]
|
||||
output = "coverage.xml"
|
||||
|
||||
[tool.coverage.json]
|
||||
output = "coverage.json"
|
||||
show_contexts = true
|
||||
|
||||
[tool.isort]
|
||||
profile = "black"
|
||||
@@ -256,6 +318,7 @@ dev = [
|
||||
"types-pyyaml>=6.0.12.20250516",
|
||||
"types-aiofiles>=24.1.0.20250708",
|
||||
"aider-install>=0.2.0",
|
||||
"pysonar>=1.1.0.2035",
|
||||
]
|
||||
|
||||
# Pyrefly configuration will be in a separate file
|
||||
|
||||
@@ -27,9 +27,6 @@ project_excludes = [
|
||||
"**/.archive/",
|
||||
"cache/",
|
||||
"examples/",
|
||||
"packages/**/build/",
|
||||
"packages/**/dist/",
|
||||
"packages/**/*.egg-info/",
|
||||
".cenv/**",
|
||||
".venv-host/**",
|
||||
"**/.venv/**",
|
||||
|
||||
@@ -1,14 +1,10 @@
|
||||
{
|
||||
"include": [
|
||||
"src",
|
||||
"packages",
|
||||
"tests"
|
||||
],
|
||||
"extraPaths": [
|
||||
"src",
|
||||
"packages/business-buddy-core/src",
|
||||
"packages/business-buddy-extraction/src",
|
||||
"packages/business-buddy-tools/src"
|
||||
"src"
|
||||
],
|
||||
"exclude": [
|
||||
"**/node_modules",
|
||||
|
||||
@@ -102,22 +102,6 @@ bs4==0.0.2
|
||||
# via
|
||||
# business-buddy (pyproject.toml)
|
||||
# business-buddy-utils
|
||||
business-buddy-core @ file:///home/vasceannie/repos/biz-budz/packages/business-buddy-core
|
||||
# via
|
||||
# business-buddy (pyproject.toml)
|
||||
# business-buddy-extraction
|
||||
# business-buddy-utils
|
||||
business-buddy-extraction @ file:///home/vasceannie/repos/biz-budz/packages/business-buddy-extraction
|
||||
# via
|
||||
# business-buddy (pyproject.toml)
|
||||
# business-buddy-tools
|
||||
# business-buddy-utils
|
||||
business-buddy-tools @ file:///home/vasceannie/repos/biz-budz/packages/business-buddy-tools
|
||||
# via business-buddy (pyproject.toml)
|
||||
business-buddy-utils @ file:///home/vasceannie/repos/biz-budz/packages/business-buddy-utils
|
||||
# via
|
||||
# business-buddy (pyproject.toml)
|
||||
# business-buddy-tools
|
||||
cachetools==5.5.2
|
||||
# via google-auth
|
||||
certifi==2025.6.15
|
||||
|
||||
review.txt (new file, +225 lines)
@@ -0,0 +1,225 @@
Of course. I have reviewed your codebase and identified several opportunities to refactor frequently used variables, literals, and code patterns into module-level constants and helper functions. This will improve maintainability, reduce errors from typos, and adhere to the DRY (Don't Repeat Yourself) principle.

Here is a file-by-file breakdown of my recommendations.

### General Recommendations: State Keys

Across many files, especially in the `graphs/` and `nodes/` directories, you frequently access dictionary keys on the `state` object using string literals. These are prime candidates for constants.

**Recommendation:** Define these common state keys in your global constants file `src/biz_bud/core/config/constants.py` to ensure consistency and prevent typos.

**In `src/biz_bud/core/config/constants.py`:**

```python
# --- State Keys ---
STATE_KEY_MESSAGES = "messages"
STATE_KEY_ERRORS = "errors"
STATE_KEY_CONFIG = "config"
STATE_KEY_QUERY = "query"
STATE_KEY_USER_QUERY = "user_query"
STATE_KEY_SEARCH_RESULTS = "search_results"
STATE_KEY_SYNTHESIS = "synthesis"
STATE_KEY_FINAL_RESPONSE = "final_response"
STATE_KEY_TOOL_CALLS = "tool_calls"
STATE_KEY_INPUT_URL = "input_url"
STATE_KEY_URL = "url"
STATE_KEY_SERVICE_FACTORY = "service_factory"
STATE_KEY_EXTRACTED_INFO = "extracted_info"
STATE_KEY_SOURCES = "sources"
```

You can then import and use these constants throughout your project, for example: `query = state.get(STATE_KEY_QUERY, "")`.
|
||||
|
||||
---
|
||||
|
||||
### File-Specific Recommendations
|
||||
|
||||
#### **File: `src/biz_bud/services/llm/client.py`**
|
||||
|
||||
This file contains logic for dynamically binding tools to an LLM based on inferred capabilities. The mapping from capability to tool can be extracted.
|
||||
|
||||
* **Constants:**
|
||||
* The mapping from a normalized capability name (e.g., "search") to the tools that fulfill it is implicitly defined. This should be a constant.
|
||||
|
||||
**Recommendation:** Create a `CAPABILITY_TO_TOOLS_MAP` constant.
|
||||
|
||||
```python
|
||||
# At module level in src/biz_bud/services/llm/client.py
|
||||
CAPABILITY_TO_TOOLS_MAP: dict[str, list[Callable[..., Any]]] = {
|
||||
"search": [web_search],
|
||||
"scrape": [scrape_url],
|
||||
"document_management": [
|
||||
# Assuming these tools are imported
|
||||
search_paperless_documents,
|
||||
get_paperless_document,
|
||||
update_paperless_document,
|
||||
],
|
||||
}
|
||||
```
|
||||
|
||||
#### **File: `src/biz_bud/graphs/rag/nodes/check_duplicate.py` and `upload_r2r.py`**
|
||||
|
||||
Both of these files contain logic for interacting with an R2R (Ready-to-Retrieve) service. There is significant code duplication in handling API configuration, client instantiation, and direct API calls.
|
||||
|
||||
* **Helper Functions:**
|
||||
* **R2R Configuration/Client Setup:** The logic to get the R2R base URL and credentials from the application config and environment variables is repeated.
|
||||
* **Direct API Call:** The `_r2r_direct_api_call` function is present in both files.
|
||||
* **Collection Management:** The logic to check if a collection exists and create it if it doesn't (`_ensure_collection_exists`) is duplicated.
|
||||
|
||||
**Recommendation:** Create a new utility module, for instance `src/biz_bud/tools/clients/r2r_utils.py`, to house these shared functions.
|
||||
|
||||
**In a new file `src/biz_bud/tools/clients/r2r_utils.py`:**
|
||||
```python
|
||||
import os
|
||||
from typing import Any, cast
|
||||
# ... other necessary imports
|
||||
|
||||
class R2RConfig(TypedDict):
|
||||
base_url: str
|
||||
api_key: str | None
|
||||
email: str | None
|
||||
password: str | None
|
||||
|
||||
def get_r2r_config(app_config: dict[str, Any]) -> R2RConfig:
|
||||
"""Extracts R2R configuration from app config and environment variables."""
|
||||
api_config = app_config.get("api_config", {})
|
||||
r2r_base_url = api_config.get("r2r_base_url") or os.getenv(
|
||||
"R2R_BASE_URL", "http://localhost:7272"
|
||||
)
|
||||
# ... logic to get api_key, email, password
|
||||
return R2RConfig(base_url=r2r_base_url, ...)
|
||||
|
||||
async def r2r_direct_api_call(client: Any, method: str, endpoint: str, **kwargs: Any) -> dict[str, Any]:
|
||||
"""Makes a direct HTTP request to the R2R API endpoint."""
|
||||
# Implementation from your existing _r2r_direct_api_call
|
||||
...
|
||||
|
||||
async def ensure_collection_exists(client: R2RClient, collection_name: str) -> str | None:
|
||||
"""Checks if a collection exists by name and creates it if not, returning the ID."""
|
||||
# Implementation from your existing _ensure_collection_exists
|
||||
...
|
||||
```
|
||||
|
||||
#### **File: `src/biz_bud/graphs/planner.py`**
|
||||
|
||||
This file contains logic for discovering available graphs and mapping steps to them.
|
||||
|
||||
* **Helper Functions:**
|
||||
* The `discover_available_graphs` function is self-contained and well-defined but could be moved to a more general location if other agents or modules need to know about available graphs. For now, it is acceptable here.
|
||||
* The logic for selecting a graph for a given step (`agent_selection_node`) involves creating a detailed LLM prompt. This prompt generation could be its own helper function to improve readability.
|
||||
|
||||
**Recommendation:** Extract the prompt generation logic into a helper.
|
||||
|
||||
```python
|
||||
# In src/biz_bud/graphs/planner.py
|
||||
|
||||
def _create_graph_selection_prompt(step: QueryStep, graph_context: list[str]) -> str:
|
||||
"""Creates the prompt for the LLM to select the best graph for a plan step."""
|
||||
step_query = step["query"]
|
||||
step_description = step["description"]
|
||||
context_str = "\n".join(graph_context)
|
||||
return f"""Given the following query step, select the most appropriate graph workflow:
|
||||
Query: {step_query}
|
||||
Description: {step_description}
|
||||
Available Graphs:
|
||||
{context_str}
|
||||
Respond with the graph name and a brief reasoning.
|
||||
Format:
|
||||
GRAPH: [graph_name]
|
||||
REASONING: [reasoning]"""
|
||||
|
||||
# agent_selection_node can then be simplified:
|
||||
async def agent_selection_node(state: PlannerState) -> dict[str, Any]:
|
||||
# ...
|
||||
for step in steps:
|
||||
# ...
|
||||
selection_prompt = _create_graph_selection_prompt(step, graph_context)
|
||||
# ...
|
||||
```
|
||||
|
||||
#### **File: `src/biz_bud/agents/buddy_nodes_registry.py`**
|
||||
|
||||
This module handles complex logic for analyzing queries, selecting tools, and orchestrating execution.
|
||||
|
||||
* **Constants:**
|
||||
* The introspection keywords (`"capabilities"`, `"tools"`, `"graphs"`) are used to detect if a query is about the agent's own abilities.
|
||||
* Magic numbers like the `capability_refresh_interval` (300 seconds) should be constants.
|
||||
|
||||
**Recommendation:** Define these as module-level constants.
|
||||
|
||||
```python
|
||||
# At module level in src/biz_bud/agents/buddy_nodes_registry.py
|
||||
INTROSPECTION_KEYWORDS = {"capabilities", "tools", "graphs", "what can you do"}
|
||||
CAPABILITY_REFRESH_INTERVAL_SECONDS = 300.0
|
||||
```
|
||||
|
||||
* **Helper Functions:**
|
||||
* The `query_analyzer_node` contains a large block of code dedicated to creating a summary of the agent's capabilities for introspection queries. This can be extracted.
|
||||
|
||||
**Recommendation:** Create a helper function to format the capability summary.
|
||||
|
||||
```python
|
||||
# In src/biz_bud/agents/buddy_nodes_registry.py
|
||||
|
||||
def _format_introspection_response(capability_map: dict, capability_summary: dict) -> tuple[dict, list]:
|
||||
"""Formats the agent's capabilities into a structured response for introspection queries."""
|
||||
# ... logic to create extracted_info and sources from capability maps ...
|
||||
return extracted_info, sources
|
||||
|
||||
# The query_analyzer_node becomes cleaner:
|
||||
async def query_analyzer_node(state: BuddyState, config: RunnableConfig | None = None) -> dict[str, Any]:
|
||||
# ...
|
||||
if is_introspection:
|
||||
# ...
|
||||
extracted_info, sources = _format_introspection_response(capability_map, capability_summary)
|
||||
# ...
|
||||
return updater.build()
|
||||
# ...
|
||||
```
|
||||
|
||||
#### **File: `src/biz_bud/nodes/llm/call.py`**
|
||||
|
||||
The `call_model_node` function has complex error handling and response parsing logic that can be simplified.
|
||||
|
||||
* **Helper Functions:**
|
||||
* **Error Categorization:** The logic to map a raw exception to a category, message, and retriable status is a clear, self-contained unit.
|
||||
* **Error Response Generation:** Generating a user-facing error message and structured `ErrorInfo` dict based on the exception category is repeated.
|
||||
|
||||
**Recommendation:** Extract these two logical blocks into helper functions.
|
||||
|
||||
```python
|
||||
# In src/biz_bud/nodes/llm/call.py
|
||||
|
||||
def _categorize_llm_exception(exception: Exception) -> tuple[str, str, bool]:
|
||||
"""Categorizes an LLM exception and determines if it's retriable."""
|
||||
# ... implementation from your existing logic ...
|
||||
# Returns (category, user_message, is_retriable)
|
||||
if isinstance(exception, (OpenAIAuthError, AnthropicAuthError)):
|
||||
return "authentication", "Auth issue.", False
|
||||
# ... other conditions
|
||||
return "llm_error", "An error occurred.", True
|
||||
|
||||
async def _handle_llm_call_error(
|
||||
state: dict, exception: Exception, context: dict
|
||||
) -> dict[str, Any]:
|
||||
"""Handles an exception during an LLM call, updating state with structured error info."""
|
||||
category, user_message, _ = _categorize_llm_exception(exception)
|
||||
error_info = create_error_info(
|
||||
message=user_message,
|
||||
category=category,
|
||||
cause=exception,
|
||||
context=context,
|
||||
)
|
||||
return await add_error_to_state(state.copy(), error_info)
|
||||
|
||||
|
||||
# The call_model_node's exception block becomes:
|
||||
# ...
|
||||
# except Exception as e:
|
||||
# llm_context = {"messages": safe_messages, ...}
|
||||
# error_state = await _handle_llm_call_error(state, e, llm_context)
|
||||
# return {**error_state, "final_response": error_state["errors"][-1]["message"]}
|
||||
```
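The same extraction pays off in tests: `_categorize_llm_exception` can be exercised with plain exceptions, without mocking an LLM call. A minimal sketch of the fallback path — the import path mirrors the file discussed above, and the expected tuple follows the placeholder logic in the sketch, not the real implementation:

```python
from biz_bud.nodes.llm.call import _categorize_llm_exception


def test_unknown_errors_default_to_retriable():
    category, user_message, is_retriable = _categorize_llm_exception(RuntimeError("boom"))

    assert category == "llm_error"
    assert user_message  # a user-facing message is always produced
    assert is_retriable is True
```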
|
||||
|
||||
By implementing these changes, you will make your codebase more robust, readable, and easier to maintain.
|
||||
scripts/checks/audit_core_dependencies.py (new executable file, 428 lines)
@@ -0,0 +1,428 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Audit Python code for adherence to core infrastructure patterns.
|
||||
|
||||
This script uses AST analysis to detect anti-patterns and enforce
|
||||
architectural conventions in the biz_bud codebase.
|
||||
"""
|
||||
|
||||
import argparse
|
||||
import ast
|
||||
import os
|
||||
from typing import Dict, List, Set, Tuple
|
||||
|
||||
# --- Configuration of Anti-Patterns ---
|
||||
|
||||
# Direct imports of libraries that should be replaced by your core infrastructure.
|
||||
# Maps the disallowed module to the suggested core module/function.
|
||||
DISALLOWED_IMPORTS: Dict[str, str] = {
|
||||
"logging": "biz_bud.core.logging.get_logger",
|
||||
"requests": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"httpx": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"aiohttp": "biz_bud.core.networking.http_client.HTTPClient",
|
||||
"asyncio.gather": "biz_bud.core.utils.async_utils.gather_with_concurrency",
|
||||
}
|
||||
|
||||
# Direct instantiation of service clients or tools that should come from the factory.
|
||||
DISALLOWED_INSTANTIATIONS: Dict[str, str] = {
|
||||
"TavilySearchProvider": "ServiceFactory.get_service() or create_tools_for_capabilities()",
|
||||
"JinaSearchProvider": "ServiceFactory.get_service() or create_tools_for_capabilities()",
|
||||
"FirecrawlClient": "ServiceFactory.get_service() or a dedicated provider from ScrapeService",
|
||||
"TavilyClient": "ServiceFactory.get_service()",
|
||||
"PostgresStore": "ServiceFactory.get_db_service()",
|
||||
"LangchainLLMClient": "ServiceFactory.get_llm_client()",
|
||||
"HTTPClient": "HTTPClient.get_or_create_client() instead of direct instantiation",
|
||||
}
|
||||
|
||||
# Built-in exceptions that should ideally be wrapped in a custom BusinessBuddyError.
|
||||
DISALLOWED_EXCEPTIONS: Set[str] = {
|
||||
"Exception",
|
||||
"ValueError",
|
||||
"KeyError",
|
||||
"TypeError",
|
||||
"AttributeError",
|
||||
"NotImplementedError",
|
||||
}
|
||||
|
||||
# --- File Path Patterns for Exemptions ---
|
||||
|
||||
# Core infrastructure files that can use networking libraries directly
|
||||
NETWORKING_INFRASTRUCTURE_PATTERNS: List[str] = [
|
||||
"http_client.py",
|
||||
"http_service.py",
|
||||
"/core/networking/",
|
||||
"/core/services/"
|
||||
]
|
||||
|
||||
# Logging infrastructure files that can use logging directly
|
||||
LOGGING_INFRASTRUCTURE_PATTERNS: List[str] = [
|
||||
"/core/errors/",
|
||||
"/error_handling/",
|
||||
"telemetry.py",
|
||||
"logger.py",
|
||||
"/logging/",
|
||||
"unified_logging.py"
|
||||
]
|
||||
|
||||
# Files that can use requests directly
|
||||
REQUESTS_ALLOWED_PATTERNS: List[str] = [
|
||||
"document_processing.py",
|
||||
"/tools/",
|
||||
"/clients/"
|
||||
]
|
||||
|
||||
# Files that can use httpx directly
|
||||
HTTPX_ALLOWED_PATTERNS: List[str] = [
|
||||
"/core/networking/",
|
||||
"http_client.py",
|
||||
"/core/errors/"
|
||||
]
|
||||
|
||||
# Factory and service files that can import services directly
|
||||
FACTORY_SERVICE_PATTERNS: List[str] = [
|
||||
"factories.py",
|
||||
"factory.py",
|
||||
"/services/",
|
||||
"/core/services/",
|
||||
"http_client.py",
|
||||
"http_service.py",
|
||||
"/core/networking/",
|
||||
"/core/errors/",
|
||||
"/error_handling/",
|
||||
"/logging/",
|
||||
"unified_logging.py"
|
||||
]
|
||||
|
||||
# Files that can raise generic exceptions
|
||||
EXCEPTION_EXEMPT_PATTERNS: List[str] = [
|
||||
"/url_processing/",
|
||||
"/core/services/",
|
||||
"abstract",
|
||||
"interface",
|
||||
"/core/errors/",
|
||||
"/core/langgraph/",
|
||||
"/logging/",
|
||||
"unified_logging.py"
|
||||
]
|
||||
|
||||
# Files that can mutate state directly
|
||||
STATE_MUTATION_EXEMPT_PATTERNS: List[str] = [
|
||||
"/core/langgraph/",
|
||||
"/nodes/llm/",
|
||||
"cross_cutting.py",
|
||||
"state_immutability.py",
|
||||
"/core/validation/"
|
||||
]
|
||||
|
||||
# --- Error Messages ---
|
||||
|
||||
STATE_MUTATION_MESSAGE = "Direct state mutation `state[...] = ...` detected. Please use `StateUpdater` for immutable updates."
|
||||
STATE_UPDATE_MESSAGE = "Direct state mutation with `state.update()` detected. Please use `StateUpdater`."
|
||||
CUSTOM_ERROR_MESSAGE = "Please use a custom error from `biz_bud.core.errors` (e.g., BusinessBuddyError, ValidationError, etc.)."
|
||||
|
||||
# Special class exemptions
|
||||
ARXIV_PROVIDER = "ArxivProvider"
|
||||
|
||||
class InfrastructureVisitor(ast.NodeVisitor):
|
||||
"""
|
||||
AST visitor that walks the code tree and identifies violations
|
||||
of the core dependency infrastructure usage.
|
||||
"""
|
||||
|
||||
def __init__(self, filepath: str):
|
||||
self.filepath = filepath
|
||||
self.violations: List[Tuple[int, str]] = []
|
||||
self.imported_names: Dict[str, str] = {} # Maps alias to full import path
|
||||
self.in_pydantic_validator = False
|
||||
self.in_type_checking = False
|
||||
self.in_get_client_method = False
|
||||
self.in_initialize_method = False
|
||||
|
||||
def _add_violation(self, node: ast.AST, message: str) -> None:
|
||||
line_number = getattr(node, 'lineno', 0)
|
||||
self.violations.append((line_number, message))
|
||||
|
||||
def _has_filepath_pattern(self, patterns: List[str]) -> bool:
|
||||
"""Check if the current filepath matches any of the given patterns."""
|
||||
return any(pattern in self.filepath for pattern in patterns)
|
||||
|
||||
def _is_exempt_from_import_check(self, import_name: str) -> bool:
|
||||
"""Check if an import is exempt from validation based on filepath and import type."""
|
||||
if import_name == "aiohttp" and self._has_filepath_pattern(NETWORKING_INFRASTRUCTURE_PATTERNS):
|
||||
return True
|
||||
if import_name == "logging" and self._has_filepath_pattern(LOGGING_INFRASTRUCTURE_PATTERNS):
|
||||
return True
|
||||
if import_name == "requests" and self._has_filepath_pattern(REQUESTS_ALLOWED_PATTERNS):
|
||||
return True
|
||||
if import_name == "httpx" and self._has_filepath_pattern(HTTPX_ALLOWED_PATTERNS):
|
||||
return True
|
||||
return False
|
||||
|
||||
def visit_Import(self, node: ast.Import) -> None:
|
||||
"""Checks for `import logging`, `import requests`, etc."""
|
||||
for alias in node.names:
|
||||
if alias.name in DISALLOWED_IMPORTS and not self._is_exempt_from_import_check(alias.name):
|
||||
suggestion = DISALLOWED_IMPORTS[alias.name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Disallowed import '{alias.name}'. Please use '{suggestion}'."
|
||||
)
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_If(self, node: ast.If) -> None:
|
||||
"""Track if we're in a TYPE_CHECKING block."""
|
||||
was_in_type_checking = self.in_type_checking
|
||||
|
||||
# Check if this is a TYPE_CHECKING condition
|
||||
if isinstance(node.test, ast.Name) and node.test.id == "TYPE_CHECKING":
|
||||
self.in_type_checking = True
|
||||
|
||||
self.generic_visit(node)
|
||||
self.in_type_checking = was_in_type_checking
|
||||
|
||||
def _should_skip_import_from_validation(self, node: ast.ImportFrom) -> bool:
|
||||
"""Check if ImportFrom validation should be skipped."""
|
||||
if self.in_type_checking or self.in_get_client_method:
|
||||
return True
|
||||
|
||||
if self._has_filepath_pattern(FACTORY_SERVICE_PATTERNS):
|
||||
return True
|
||||
|
||||
# Special case: cleanup_registry legitimately imports service types
|
||||
if "cleanup_registry.py" in self.filepath and node.module and "biz_bud.services" in node.module:
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
def _check_disallowed_service_import(self, node: ast.ImportFrom, alias: ast.alias) -> None:
|
||||
"""Check for disallowed direct service/tool imports."""
|
||||
if not node.module:
|
||||
return
|
||||
|
||||
is_client_import = "biz_bud.tools.clients" in node.module
|
||||
is_service_import = "biz_bud.services" in node.module and "factory" not in node.module
|
||||
|
||||
if (is_client_import or is_service_import) and alias.name in DISALLOWED_INSTANTIATIONS:
|
||||
suggestion = DISALLOWED_INSTANTIATIONS[alias.name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Disallowed direct import of '{alias.name}'. Use the ServiceFactory: '{suggestion}'."
|
||||
)
|
||||
|
||||
def visit_ImportFrom(self, node: ast.ImportFrom) -> None:
|
||||
"""Checks for direct service/client imports, e.g., `from biz_bud.tools.clients import TavilyClient`"""
|
||||
if self._should_skip_import_from_validation(node):
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
if node.module:
|
||||
for alias in node.names:
|
||||
full_import_path = f"{node.module}.{alias.name}"
|
||||
# Store the imported name (could be an alias)
|
||||
self.imported_names[alias.asname or alias.name] = full_import_path
|
||||
self._check_disallowed_service_import(node, alias)
|
||||
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
|
||||
"""Track if we're in a Pydantic field validator or a _get_client method."""
|
||||
self._visit_function_def(node)
|
||||
|
||||
def visit_AsyncFunctionDef(self, node: ast.AsyncFunctionDef) -> None:
|
||||
"""Track if we're in a Pydantic field validator or a _get_client method (async version)."""
|
||||
self._visit_function_def(node)
|
||||
|
||||
def _visit_function_def(self, node) -> None:
|
||||
"""Common logic for FunctionDef and AsyncFunctionDef."""
|
||||
was_in_validator = self.in_pydantic_validator
|
||||
was_in_get_client = self.in_get_client_method
|
||||
was_in_initialize = self.in_initialize_method
|
||||
|
||||
# Check if this function has @field_validator or @model_validator decorator
|
||||
for decorator in node.decorator_list:
|
||||
if isinstance(decorator, ast.Name) and decorator.id in ("field_validator", "model_validator"):
|
||||
self.in_pydantic_validator = True
|
||||
break
|
||||
elif isinstance(decorator, ast.Call) and isinstance(decorator.func, ast.Name) and decorator.func.id in ("field_validator", "model_validator"):
|
||||
self.in_pydantic_validator = True
|
||||
break
|
||||
|
||||
# Check if this is a _get_client method (legitimate ServiceFactory pattern)
|
||||
if node.name == "_get_client":
|
||||
self.in_get_client_method = True
|
||||
|
||||
# Check if this is an initialize method (legitimate provider instantiation)
|
||||
if node.name == "initialize":
|
||||
self.in_initialize_method = True
|
||||
|
||||
self.generic_visit(node)
|
||||
self.in_pydantic_validator = was_in_validator
|
||||
self.in_get_client_method = was_in_get_client
|
||||
self.in_initialize_method = was_in_initialize
|
||||
|
||||
def _get_exception_name(self, node: ast.Raise) -> str:
|
||||
"""Extract exception name from raise node."""
|
||||
if isinstance(node.exc, ast.Call) and isinstance(node.exc.func, ast.Name):
|
||||
return node.exc.func.id
|
||||
elif isinstance(node.exc, ast.Name):
|
||||
return node.exc.id
|
||||
return "unknown"
|
||||
|
||||
def visit_Raise(self, node: ast.Raise) -> None:
|
||||
"""Checks for `raise ValueError` instead of a custom error."""
|
||||
# Skip validation if we're in a Pydantic field validator
|
||||
if self.in_pydantic_validator:
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
# Skip validation for abstract interfaces and framework internals
|
||||
if self._has_filepath_pattern(EXCEPTION_EXEMPT_PATTERNS):
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
exception_name = self._get_exception_name(node)
|
||||
|
||||
# Allow NotImplementedError in abstract interfaces
|
||||
if exception_name == "NotImplementedError":
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
if exception_name in DISALLOWED_EXCEPTIONS:
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Raising generic exception '{exception_name}'. {CUSTOM_ERROR_MESSAGE}"
|
||||
)
|
||||
self.generic_visit(node)
|
||||
|
||||
def visit_Assign(self, node: ast.Assign) -> None:
|
||||
"""Checks for direct state mutation like `state['key'] = value`."""
|
||||
# Skip validation for LangGraph framework internals and core infrastructure
|
||||
if self._has_filepath_pattern(STATE_MUTATION_EXEMPT_PATTERNS):
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
for target in node.targets:
|
||||
if isinstance(target, ast.Subscript) and isinstance(target.value, ast.Name):
|
||||
if target.value.id == 'state':
|
||||
self._add_violation(node, STATE_MUTATION_MESSAGE)
|
||||
self.generic_visit(node)
|
||||
|
||||
def _should_skip_call_validation(self) -> bool:
|
||||
"""Check if call validation should be skipped for factory/implementation files."""
|
||||
return self._has_filepath_pattern(FACTORY_SERVICE_PATTERNS)
|
||||
|
||||
def _check_direct_instantiation(self, node: ast.Call) -> None:
|
||||
"""Check for direct instantiation of disallowed classes."""
|
||||
if not isinstance(node.func, ast.Name):
|
||||
return
|
||||
|
||||
class_name = node.func.id
|
||||
|
||||
# Skip ArxivProvider - it doesn't use API clients, only HTTPClient directly
|
||||
if class_name == ARXIV_PROVIDER:
|
||||
return
|
||||
|
||||
# Skip provider instantiation in initialize methods (legitimate ServiceFactory pattern)
|
||||
if self.in_initialize_method and class_name.endswith("Provider"):
|
||||
return
|
||||
|
||||
if class_name in DISALLOWED_INSTANTIATIONS:
|
||||
# Verify it's not a legitimate call, e.g. a function with the same name
|
||||
if self.imported_names.get(class_name, "").endswith(class_name):
|
||||
suggestion = DISALLOWED_INSTANTIATIONS[class_name]
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Direct instantiation of '{class_name}'. Use the ServiceFactory: '{suggestion}'."
|
||||
)
|
||||
|
||||
def _check_attribute_calls(self, node: ast.Call) -> None:
|
||||
"""Check for disallowed attribute calls like asyncio.gather and state.update."""
|
||||
if not isinstance(node.func, ast.Attribute) or not isinstance(node.func.value, ast.Name):
|
||||
return
|
||||
|
||||
parent_name = node.func.value.id
|
||||
attr_name = node.func.attr
|
||||
|
||||
# Check for asyncio.gather
|
||||
if parent_name == 'asyncio' and attr_name == 'gather':
|
||||
suggestion = DISALLOWED_IMPORTS['asyncio.gather']
|
||||
self._add_violation(
|
||||
node,
|
||||
f"Direct use of 'asyncio.gather'. Please use '{suggestion}' for controlled concurrency."
|
||||
)
|
||||
|
||||
# Check for state.update()
|
||||
if parent_name == 'state' and attr_name == 'update':
|
||||
self._add_violation(node, STATE_UPDATE_MESSAGE)
|
||||
|
||||
def visit_Call(self, node: ast.Call) -> None:
|
||||
"""
|
||||
Checks for:
|
||||
1. Direct instantiation of disallowed classes (e.g., `TavilyClient()`).
|
||||
2. Direct use of `asyncio.gather`.
|
||||
3. Direct state mutation via `state.update(...)`.
|
||||
"""
|
||||
if self._should_skip_call_validation():
|
||||
self.generic_visit(node)
|
||||
return
|
||||
|
||||
self._check_direct_instantiation(node)
|
||||
self._check_attribute_calls(node)
|
||||
self.generic_visit(node)
|
||||
|
||||
|
||||
def audit_directory(directory: str) -> Dict[str, List[Tuple[int, str]]]:
|
||||
"""Scans a directory for Python files and audits them."""
|
||||
all_violations: Dict[str, List[Tuple[int, str]]] = {}
|
||||
for root, _, files in os.walk(directory):
|
||||
for file in files:
|
||||
if file.endswith(".py"):
|
||||
filepath = os.path.join(root, file)
|
||||
try:
|
||||
with open(filepath, "r", encoding="utf-8") as f:
|
||||
source_code = f.read()
|
||||
tree = ast.parse(source_code, filename=filepath)
|
||||
|
||||
visitor = InfrastructureVisitor(filepath)
|
||||
visitor.visit(tree)
|
||||
|
||||
if visitor.violations:
|
||||
all_violations[filepath] = visitor.violations
|
||||
except (SyntaxError, ValueError) as e:
|
||||
all_violations[filepath] = [(0, f"ERROR: Could not parse file: {e}")]
|
||||
return all_violations
|
||||
|
||||
|
||||
def main() -> None:
|
||||
parser = argparse.ArgumentParser(description="Audit Python code for adherence to core infrastructure.")
|
||||
parser.add_argument(
|
||||
"directory",
|
||||
nargs="?",
|
||||
default="src/biz_bud",
|
||||
help="The directory to scan. Defaults to 'src/biz_bud'."
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
print(f"--- Auditing directory: {args.directory} ---\n")
|
||||
|
||||
violations = audit_directory(args.directory)
|
||||
|
||||
if not violations:
|
||||
print("\033[92m All scanned files adhere to the core infrastructure rules.\033[0m")
|
||||
return
|
||||
|
||||
print(f"\033[91m=% Found {len(violations)} file(s) with violations:\033[0m\n")
|
||||
|
||||
total_violations = 0
|
||||
for filepath, file_violations in violations.items():
|
||||
print(f"\033[1m\033[93mFile: {filepath}\033[0m")
|
||||
for line, message in sorted(file_violations):
|
||||
print(f" \033[96mL{line}:\033[0m {message}")
|
||||
total_violations += 1
|
||||
print("-" * 20)
|
||||
|
||||
print(f"\n\033[1m\033[91mSummary: Found {total_violations} total violations in {len(violations)} files.\033[0m")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
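If the audit is meant to gate CI, the script's `audit_directory` function can also be called programmatically rather than through `main()`. A minimal sketch — it assumes `scripts/checks` is importable as a package, which may require an `__init__.py` or a path tweak:

```python
# Sketch: failing a CI step (or a test) when violations are found.
from scripts.checks.audit_core_dependencies import audit_directory

violations = audit_directory("src/biz_bud")
if violations:
    total = sum(len(file_violations) for file_violations in violations.values())
    raise SystemExit(f"{total} core-dependency violation(s) in {len(violations)} file(s)")
```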
|
||||
@@ -2,14 +2,13 @@
|
||||
|
||||
import asyncio
|
||||
|
||||
from biz_bud.core.config.loader import clear_config_cache, load_config
|
||||
from biz_bud.core.config.loader import load_config
|
||||
from biz_bud.services.factory import ServiceFactory
|
||||
|
||||
|
||||
async def clear_all_caches() -> None:
|
||||
"""Clear all application caches."""
|
||||
# Clear config cache
|
||||
clear_config_cache()
|
||||
# Clear config cache (function removed, cache is cleared automatically)
|
||||
|
||||
# Clear Redis cache if available
|
||||
try:
|
||||
|
||||
@@ -4,8 +4,8 @@
|
||||
# Activate virtual environment
|
||||
source .venv/bin/activate
|
||||
|
||||
# Export Python path to include all package sources
|
||||
export PYTHONPATH="${PYTHONPATH}:src:packages/business-buddy-utils/src:packages/business-buddy-tools/src"
|
||||
# Export Python path to include source directory
|
||||
export PYTHONPATH="${PYTHONPATH}:src"
|
||||
|
||||
# Run pyrefly with the provided arguments
|
||||
pyrefly "$@"
|
||||
pyrefly "$@"
|
||||
|
||||
@@ -5,7 +5,7 @@ This script validates that the codebase uses modern Python 3.12+ typing patterns
|
||||
and Pydantic v2 features, while ignoring legitimate compatibility-related type ignores.
|
||||
|
||||
Usage:
|
||||
python scripts/checks/typing_modernization_check.py # Check src/ and packages/
|
||||
python scripts/checks/typing_modernization_check.py # Check src/
|
||||
python scripts/checks/typing_modernization_check.py --tests # Include tests/
|
||||
python scripts/checks/typing_modernization_check.py --verbose # Detailed output
|
||||
python scripts/checks/typing_modernization_check.py --fix # Auto-fix simple issues
|
||||
@@ -21,6 +21,56 @@ from typing import NamedTuple
|
||||
# Define the project root
|
||||
PROJECT_ROOT = Path(__file__).parent.parent.parent
|
||||
|
||||
# --- Constants for Pattern Matching ---
|
||||
|
||||
# Old typing imports that should be modernized
|
||||
OLD_TYPING_IMPORTS = ["Union", "Optional", "Dict", "List", "Set", "Tuple"]
|
||||
|
||||
# Modern imports that can be moved from typing_extensions to typing
|
||||
MODERNIZABLE_TYPING_EXTENSIONS = ["NotRequired", "Required", "TypedDict", "Literal"]
|
||||
|
||||
# Legitimate type ignore patterns for compatibility
|
||||
LEGITIMATE_TYPE_IGNORE_PATTERNS = [
|
||||
"import", # Import compatibility issues
|
||||
"TCH", # TYPE_CHECKING related ignores
|
||||
"overload", # Function overload issues
|
||||
"protocol", # Protocol compatibility
|
||||
"mypy", # Specific mypy version issues
|
||||
"pyright", # Specific pyright issues
|
||||
]
|
||||
|
||||
# File path patterns to skip
|
||||
SKIP_PATH_PATTERNS = ["__pycache__", "migrations", "generated"]
|
||||
|
||||
# --- Error Messages ---
|
||||
|
||||
# Pydantic v1 to v2 migration messages
|
||||
PYDANTIC_CONFIG_MESSAGE = "Use model_config = ConfigDict(...) instead of Config class"
|
||||
PYDANTIC_CONFIG_SUGGESTION = "model_config = ConfigDict(...)"
|
||||
|
||||
PYDANTIC_MUTATION_MESSAGE = "'allow_mutation' is deprecated, use 'frozen' on model"
|
||||
PYDANTIC_MUTATION_SUGGESTION = "Use frozen=True in model_config"
|
||||
|
||||
PYDANTIC_VALIDATOR_MESSAGE = "Use @field_validator instead of @validator"
|
||||
PYDANTIC_VALIDATOR_SUGGESTION = "@field_validator('field_name')"
|
||||
|
||||
PYDANTIC_ROOT_VALIDATOR_MESSAGE = "Use @model_validator instead of @root_validator"
|
||||
PYDANTIC_ROOT_VALIDATOR_SUGGESTION = "@model_validator(mode='before')"
|
||||
|
||||
# Typing modernization messages
|
||||
UNION_SYNTAX_MESSAGE = "Use '|' syntax instead of Union"
|
||||
OPTIONAL_SYNTAX_MESSAGE = "Use '| None' syntax instead of Optional"
|
||||
BUILTIN_GENERIC_MESSAGE = "Use built-in generic"
|
||||
|
||||
TYPING_EXTENSIONS_MESSAGE = "These can be imported from typing"
|
||||
UNNECESSARY_TRY_EXCEPT_MESSAGE = "Try/except for typing imports may be unnecessary in Python 3.12+"
|
||||
UNNECESSARY_TRY_EXCEPT_SUGGESTION = "Direct import should work"
|
||||
|
||||
# --- Type checking constants ---
|
||||
TYPING_MODULE = "typing"
|
||||
TYPING_EXTENSIONS_MODULE = "typing_extensions"
|
||||
DIRECT_IMPORT = "direct_import"
|
||||
|
||||
|
||||
class Issue(NamedTuple):
|
||||
"""Represents a typing/Pydantic issue found in the code."""
|
||||
@@ -51,7 +101,6 @@ class TypingChecker:
|
||||
# Paths to check
|
||||
self.check_paths = [
|
||||
PROJECT_ROOT / "src",
|
||||
PROJECT_ROOT / "packages",
|
||||
]
|
||||
if include_tests:
|
||||
self.check_paths.append(PROJECT_ROOT / "tests")
|
||||
@@ -84,16 +133,11 @@ class TypingChecker:
|
||||
def _should_skip_file(self, file_path: Path) -> bool:
|
||||
"""Determine if a file should be skipped from checking."""
|
||||
# Skip files in __pycache__ or .git directories
|
||||
if any(
|
||||
part.startswith(".") or part == "__pycache__" for part in file_path.parts
|
||||
):
|
||||
if any(part.startswith(".") for part in file_path.parts):
|
||||
return True
|
||||
|
||||
# Skip migration files or generated code
|
||||
if "migrations" in str(file_path) or "generated" in str(file_path):
|
||||
return True
|
||||
|
||||
return False
|
||||
return any(pattern in str(file_path) for pattern in SKIP_PATH_PATTERNS)
|
||||
|
||||
def _check_file(self, file_path: Path) -> None:
|
||||
"""Check a single Python file for typing and Pydantic issues."""
|
||||
@@ -168,64 +212,112 @@ class TypingChecker:
|
||||
if "# type: ignore" not in line:
|
||||
return False
|
||||
|
||||
# Common legitimate type ignores for compatibility
|
||||
legitimate_patterns = [
|
||||
"import", # Import compatibility issues
|
||||
"TCH", # TYPE_CHECKING related ignores
|
||||
"overload", # Function overload issues
|
||||
"protocol", # Protocol compatibility
|
||||
"mypy", # Specific mypy version issues
|
||||
"pyright", # Specific pyright issues
|
||||
]
|
||||
return any(pattern in line.lower() for pattern in LEGITIMATE_TYPE_IGNORE_PATTERNS)
|
||||
|
||||
return any(pattern in line.lower() for pattern in legitimate_patterns)
|
||||
def _is_valid_old_import(self, line: str, import_name: str) -> bool:
|
||||
"""Check if an import name is a valid old typing import to flag."""
|
||||
# Check for exact word boundaries to avoid false positives like "TypedDict" containing "Dict"
|
||||
pattern = rf"\b{import_name}\b"
|
||||
if not re.search(pattern, line):
|
||||
return False
|
||||
|
||||
# Additional check to ensure it's not part of a longer word like "TypedDict"
|
||||
# Check for common patterns: " Dict", "Dict,", "Dict)", "(Dict", "Dict\n"
|
||||
if not any([
|
||||
f" {import_name}" in line,
|
||||
f"{import_name}," in line,
|
||||
f"{import_name})" in line,
|
||||
f"({import_name}" in line,
|
||||
line.strip().endswith(import_name)
|
||||
]):
|
||||
return False
|
||||
|
||||
# Exclude cases where it's part of a longer identifier
|
||||
excluded_patterns = [
|
||||
f"Typed{import_name}",
|
||||
f"{import_name}Type",
|
||||
f"_{import_name}",
|
||||
f"{import_name}_",
|
||||
]
|
||||
return not any(longer in line for longer in excluded_patterns)
|
||||
|
||||
def _check_old_typing_imports(
|
||||
self, file_path: Path, line_num: int, line: str
|
||||
) -> None:
|
||||
"""Check for old typing imports that should be modernized."""
|
||||
# Pattern: from typing import Union, Optional, Dict, List, etc.
|
||||
if "from typing import" in line:
|
||||
old_imports = ["Union", "Optional", "Dict", "List", "Set", "Tuple"]
|
||||
found_old = []
|
||||
if "from typing import" not in line:
|
||||
return
|
||||
|
||||
for imp in old_imports:
|
||||
# Check for exact word boundaries to avoid false positives like "TypedDict" containing "Dict"
|
||||
# Match the import name with word boundaries or specific delimiters
|
||||
pattern = rf"\b{imp}\b"
|
||||
if re.search(pattern, line):
|
||||
# Additional check to ensure it's not part of a longer word like "TypedDict"
|
||||
# Check for common patterns: " Dict", "Dict,", "Dict)", "(Dict", "Dict\n"
|
||||
if (
|
||||
f" {imp}" in line
|
||||
or f"{imp}," in line
|
||||
or f"{imp})" in line
|
||||
or f"({imp}" in line
|
||||
or line.strip().endswith(imp)
|
||||
):
|
||||
# Exclude cases where it's part of a longer identifier
|
||||
if not any(
|
||||
longer in line
|
||||
for longer in [
|
||||
f"Typed{imp}",
|
||||
f"{imp}Type",
|
||||
f"_{imp}",
|
||||
f"{imp}_",
|
||||
]
|
||||
):
|
||||
found_old.append(imp)
|
||||
found_old = [
|
||||
imp for imp in OLD_TYPING_IMPORTS
|
||||
if self._is_valid_old_import(line, imp)
|
||||
]
|
||||
|
||||
if found_old:
|
||||
suggestion = self._suggest_import_fix(line, found_old)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_typing_import",
|
||||
description=f"Old typing imports: {', '.join(found_old)}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
if found_old:
|
||||
suggestion = self._suggest_import_fix(line, found_old)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_typing_import",
|
||||
description=f"Old typing imports: {', '.join(found_old)}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
|
||||
def _add_union_issue(self, file_path: Path, line_num: int, inner_content: str) -> None:
|
||||
"""Add a Union syntax issue if valid."""
|
||||
type_parts = self._parse_comma_separated_types(inner_content)
|
||||
if len(type_parts) <= 1:
|
||||
return
|
||||
|
||||
suggestion = " | ".join(type_parts)
|
||||
if not self._validate_suggestion(suggestion):
|
||||
return
|
||||
|
||||
issue_key = (str(file_path), line_num, suggestion)
|
||||
if issue_key in self._seen_issues:
|
||||
return
|
||||
|
||||
self._seen_issues.add(issue_key)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_union_syntax",
|
||||
description=f"{UNION_SYNTAX_MESSAGE}: Union[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
|
||||
def _add_optional_issue(self, file_path: Path, line_num: int, inner_content: str) -> None:
|
||||
"""Add an Optional syntax issue if valid."""
|
||||
suggestion = f"{inner_content} | None"
|
||||
if self._validate_suggestion(suggestion):
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_optional_syntax",
|
||||
description=f"{OPTIONAL_SYNTAX_MESSAGE}: Optional[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
|
||||
def _add_generic_issue(self, file_path: Path, line_num: int, old_type: str, inner_content: str) -> None:
|
||||
"""Add a generic type issue if valid."""
|
||||
suggestion = f"{old_type.lower()}[{inner_content}]"
|
||||
if self._validate_suggestion(suggestion):
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_generic_syntax",
|
||||
description=f"{BUILTIN_GENERIC_MESSAGE}: {old_type}[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
|
||||
def _check_old_typing_patterns(
|
||||
self, file_path: Path, line_num: int, line: str
|
||||
@@ -237,62 +329,20 @@ class TypingChecker:
|
||||
# Union[X, Y] should be X | Y - handle nested brackets properly
|
||||
union_matches = self._find_balanced_brackets(line, "Union")
|
||||
for match in union_matches:
|
||||
inner_content = match["content"]
|
||||
# Parse comma-separated types properly
|
||||
type_parts = self._parse_comma_separated_types(inner_content)
|
||||
if len(type_parts) > 1:
|
||||
suggestion = " | ".join(type_parts)
|
||||
# Validate the suggestion and check for duplicates before adding the issue
|
||||
if self._validate_suggestion(suggestion):
|
||||
issue_key = (str(file_path), line_num, suggestion)
|
||||
if issue_key not in self._seen_issues:
|
||||
self._seen_issues.add(issue_key)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_union_syntax",
|
||||
description=f"Use '|' syntax instead of Union: Union[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
self._add_union_issue(file_path, line_num, match["content"])
|
||||
|
||||
# Optional[X] should be X | None - handle nested brackets properly
|
||||
optional_matches = self._find_balanced_brackets(line, "Optional")
|
||||
for match in optional_matches:
|
||||
inner_content = match["content"]
|
||||
suggestion = f"{inner_content} | None"
|
||||
# Validate the suggestion before adding the issue
|
||||
if self._validate_suggestion(suggestion):
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_optional_syntax",
|
||||
description=f"Use '| None' syntax instead of Optional: Optional[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
self._add_optional_issue(file_path, line_num, match["content"])
|
||||
|
||||
# Dict[K, V] should be dict[K, V] - only if imported from typing
|
||||
for old_type in ["Dict", "List", "Set", "Tuple"]:
|
||||
# Check if this type is imported from typing
|
||||
if old_type in imports and imports[old_type] == "typing":
|
||||
if old_type in imports and imports[old_type] == TYPING_MODULE:
|
||||
matches = self._find_balanced_brackets(line, old_type)
|
||||
for match in matches:
|
||||
inner_content = match["content"]
|
||||
suggestion = f"{old_type.lower()}[{inner_content}]"
|
||||
# Validate the suggestion before adding the issue
|
||||
if self._validate_suggestion(suggestion):
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_generic_syntax",
|
||||
description=f"Use built-in generic: {old_type}[{inner_content}]",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
self._add_generic_issue(file_path, line_num, old_type, match["content"])
|
||||
|
||||
def _check_pydantic_v1_patterns(
|
||||
self, file_path: Path, line_num: int, line: str
|
||||
@@ -305,8 +355,8 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="pydantic_v1_config",
|
||||
description="Use model_config = ConfigDict(...) instead of Config class",
|
||||
suggestion="model_config = ConfigDict(...)",
|
||||
description=PYDANTIC_CONFIG_MESSAGE,
|
||||
suggestion=PYDANTIC_CONFIG_SUGGESTION,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -317,8 +367,8 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="pydantic_v1_field",
|
||||
description="'allow_mutation' is deprecated, use 'frozen' on model",
|
||||
suggestion="Use frozen=True in model_config",
|
||||
description=PYDANTIC_MUTATION_MESSAGE,
|
||||
suggestion=PYDANTIC_MUTATION_SUGGESTION,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -329,8 +379,8 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="pydantic_v1_validator",
|
||||
description="Use @field_validator instead of @validator",
|
||||
suggestion="@field_validator('field_name')",
|
||||
description=PYDANTIC_VALIDATOR_MESSAGE,
|
||||
suggestion=PYDANTIC_VALIDATOR_SUGGESTION,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -341,8 +391,8 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="pydantic_v1_root_validator",
|
||||
description="Use @model_validator instead of @root_validator",
|
||||
suggestion="@model_validator(mode='before')",
|
||||
description=PYDANTIC_ROOT_VALIDATOR_MESSAGE,
|
||||
suggestion=PYDANTIC_ROOT_VALIDATOR_SUGGESTION,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -352,9 +402,9 @@ class TypingChecker:
|
||||
"""Check for other modernization opportunities."""
|
||||
# typing_extensions imports that can be replaced
|
||||
if "from typing_extensions import" in line:
|
||||
modern_imports = ["NotRequired", "Required", "TypedDict", "Literal"]
|
||||
found_modern = [
|
||||
imp for imp in modern_imports if f" {imp}" in line or f"{imp}," in line
|
||||
imp for imp in MODERNIZABLE_TYPING_EXTENSIONS
|
||||
if f" {imp}" in line or f"{imp}," in line
|
||||
]
|
||||
|
||||
if found_modern:
|
||||
@@ -363,7 +413,7 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="typing_extensions_modernizable",
|
||||
description=f"These can be imported from typing: {', '.join(found_modern)}",
|
||||
description=f"{TYPING_EXTENSIONS_MESSAGE}: {', '.join(found_modern)}",
|
||||
suggestion=f"from typing import {', '.join(found_modern)}",
|
||||
)
|
||||
)
|
||||
@@ -375,8 +425,8 @@ class TypingChecker:
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="unnecessary_typing_try_except",
|
||||
description="Try/except for typing imports may be unnecessary in Python 3.12+",
|
||||
suggestion="Direct import should work",
|
||||
description=UNNECESSARY_TRY_EXCEPT_MESSAGE,
|
||||
suggestion=UNNECESSARY_TRY_EXCEPT_SUGGESTION,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -435,65 +485,80 @@ class TypingChecker:
|
||||
type_name = annotation.value.id
|
||||
|
||||
# Union[X, Y] -> X | Y (only if Union imported from typing)
|
||||
if type_name == "Union" and type_name in imports and imports[type_name] == "typing":
|
||||
# Extract the union types from the subscript
|
||||
if isinstance(annotation.slice, ast.Tuple):
|
||||
type_parts = []
|
||||
for elt in annotation.slice.elts:
|
||||
try:
|
||||
part_str = ast_module.unparse(elt)
|
||||
type_parts.append(part_str)
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
if len(type_parts) > 1:
|
||||
suggestion = " | ".join(type_parts)
|
||||
issue_key = (str(file_path), line_num, suggestion)
|
||||
if issue_key not in self._seen_issues:
|
||||
self._seen_issues.add(issue_key)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_union_syntax_ast",
|
||||
description=f"Use '|' syntax instead of Union in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
if type_name == "Union" and type_name in imports and imports[type_name] == TYPING_MODULE:
|
||||
self._handle_union_ast(file_path, line_num, annotation, context, annotation_str, ast_module)
|
||||
|
||||
# Optional[X] -> X | None (only if Optional imported from typing)
|
||||
elif type_name == "Optional" and type_name in imports and imports[type_name] == "typing":
|
||||
try:
|
||||
inner_type = ast_module.unparse(annotation.slice)
|
||||
suggestion = f"{inner_type} | None"
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_optional_syntax_ast",
|
||||
description=f"Use '| None' syntax instead of Optional in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
elif type_name == "Optional" and type_name in imports and imports[type_name] == TYPING_MODULE:
|
||||
self._handle_optional_ast(file_path, line_num, annotation, context, annotation_str, ast_module)
|
||||
|
||||
# Dict/List/Set/Tuple[...] -> dict/list/set/tuple[...] (only if imported from typing)
|
||||
elif type_name in ("Dict", "List", "Set", "Tuple") and type_name in imports and imports[type_name] == "typing":
|
||||
try:
|
||||
inner_type = ast_module.unparse(annotation.slice)
|
||||
suggestion = f"{type_name.lower()}[{inner_type}]"
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_generic_syntax_ast",
|
||||
description=f"Use built-in generic in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
elif type_name in ("Dict", "List", "Set", "Tuple") and type_name in imports and imports[type_name] == TYPING_MODULE:
|
||||
self._handle_generic_ast(file_path, line_num, annotation, context, annotation_str, type_name, ast_module)
|
||||
|
||||
def _handle_union_ast(self, file_path: Path, line_num: int, annotation: ast.Subscript,
|
||||
context: str, annotation_str: str, ast_module) -> None:
|
||||
"""Handle Union annotation AST processing."""
|
||||
# Extract the union types from the subscript
|
||||
if isinstance(annotation.slice, ast.Tuple):
|
||||
type_parts = []
|
||||
for elt in annotation.slice.elts:
|
||||
try:
|
||||
part_str = ast_module.unparse(elt)
|
||||
type_parts.append(part_str)
|
||||
except Exception:
|
||||
continue
|
||||
|
||||
if len(type_parts) > 1:
|
||||
suggestion = " | ".join(type_parts)
|
||||
issue_key = (str(file_path), line_num, suggestion)
|
||||
if issue_key not in self._seen_issues:
|
||||
self._seen_issues.add(issue_key)
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_union_syntax_ast",
|
||||
description=f"{UNION_SYNTAX_MESSAGE} in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
)
|
||||
|
||||
def _handle_optional_ast(self, file_path: Path, line_num: int, annotation: ast.Subscript,
|
||||
context: str, annotation_str: str, ast_module) -> None:
|
||||
"""Handle Optional annotation AST processing."""
|
||||
try:
|
||||
inner_type = ast_module.unparse(annotation.slice)
|
||||
suggestion = f"{inner_type} | None"
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_optional_syntax_ast",
|
||||
description=f"{OPTIONAL_SYNTAX_MESSAGE} in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
def _handle_generic_ast(self, file_path: Path, line_num: int, annotation: ast.Subscript,
|
||||
context: str, annotation_str: str, type_name: str, ast_module) -> None:
|
||||
"""Handle generic type annotation AST processing."""
|
||||
try:
|
||||
inner_type = ast_module.unparse(annotation.slice)
|
||||
suggestion = f"{type_name.lower()}[{inner_type}]"
|
||||
self.issues.append(
|
||||
Issue(
|
||||
file_path=file_path,
|
||||
line_number=line_num,
|
||||
issue_type="old_generic_syntax_ast",
|
||||
description=f"{BUILTIN_GENERIC_MESSAGE} in {context}: {annotation_str}",
|
||||
suggestion=suggestion,
|
||||
)
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
def _analyze_imports(self, file_path: Path, tree: ast.AST) -> None:
|
||||
"""Analyze imports in the file to track where types come from."""
|
||||
@@ -501,7 +566,7 @@ class TypingChecker:
|
||||
self._file_imports[file_key] = {}
|
||||
|
||||
for node in ast.walk(tree):
|
||||
if isinstance(node, ast.ImportFrom) and node.module in ("typing", "typing_extensions"):
|
||||
if isinstance(node, ast.ImportFrom) and node.module in (TYPING_MODULE, TYPING_EXTENSIONS_MODULE):
|
||||
# At this point, node.module is guaranteed to be one of the strings we checked
|
||||
import_from_node = node # Type is now narrowed to ast.ImportFrom
|
||||
for alias in import_from_node.names or []:
|
||||
@@ -512,7 +577,7 @@ class TypingChecker:
|
||||
elif isinstance(node, ast.Import):
|
||||
for alias in node.names:
|
||||
name = alias.asname or alias.name
|
||||
self._file_imports[file_key][name] = "direct_import"
|
||||
self._file_imports[file_key][name] = DIRECT_IMPORT
|
||||
|
||||
def _find_balanced_brackets(self, text: str, type_name: str) -> list[dict[str, str]]:
|
||||
"""Find all occurrences of Type[...] with properly balanced brackets."""
|
||||
|
||||
@@ -18,53 +18,60 @@ project_root = Path(__file__).parent.parent
|
||||
sys.path.insert(0, str(project_root / "src"))
|
||||
|
||||
# Imports after path modification
|
||||
from biz_bud.validation import ValidationRunner # noqa: E402
|
||||
from biz_bud.validation.agent_validators import ( # noqa: E402
|
||||
BuddyAgentValidator,
|
||||
CapabilityResolutionValidator,
|
||||
ToolFactoryValidator,
|
||||
)
|
||||
from biz_bud.validation.base import BaseValidator # noqa: E402
|
||||
from biz_bud.validation.deployment_validators import ( # noqa: E402
|
||||
PerformanceValidator,
|
||||
StateManagementValidator,
|
||||
)
|
||||
from biz_bud.validation.registry_validators import ( # noqa: E402
|
||||
CapabilityConsistencyValidator,
|
||||
ComponentDiscoveryValidator,
|
||||
RegistryIntegrityValidator,
|
||||
)
|
||||
# from biz_bud.validation import ValidationRunner # noqa: E402
|
||||
from biz_bud.core.validation.base import Validator as BaseValidator # noqa: E402
|
||||
|
||||
# Note: The following validators don't exist in the current codebase
|
||||
# They may need to be implemented or this demo may be outdated
|
||||
# from biz_bud.validation.agent_validators import ( # noqa: E402
|
||||
# BuddyAgentValidator,
|
||||
# CapabilityResolutionValidator,
|
||||
# ToolFactoryValidator,
|
||||
# )
|
||||
# from biz_bud.validation.deployment_validators import ( # noqa: E402
|
||||
# PerformanceValidator,
|
||||
# StateManagementValidator,
|
||||
# )
|
||||
# from biz_bud.validation.registry_validators import ( # noqa: E402
|
||||
# CapabilityConsistencyValidator,
|
||||
# ComponentDiscoveryValidator,
|
||||
# RegistryIntegrityValidator,
|
||||
# )
|
||||
|
||||
|
||||
async def demo_basic_validation():
|
||||
"""Demonstrate basic validation functionality."""
|
||||
print("🔍 BASIC VALIDATION DEMO")
|
||||
print("=" * 50)
|
||||
print("⚠️ This demo is currently disabled due to missing validator classes.")
|
||||
print(" The validation system appears to have been refactored.")
|
||||
print(" Please implement the missing validators or update this demo.")
|
||||
|
||||
# Create validation runner
|
||||
runner = ValidationRunner()
|
||||
# Note: The following code is commented out due to missing classes
|
||||
# # Create validation runner
|
||||
# runner = ValidationRunner()
|
||||
|
||||
# Register basic validators
|
||||
print("📝 Registering basic validators...")
|
||||
basic_validators: list[BaseValidator] = [
|
||||
RegistryIntegrityValidator("nodes"),
|
||||
RegistryIntegrityValidator("graphs"),
|
||||
RegistryIntegrityValidator("tools"),
|
||||
]
|
||||
# # Register basic validators
|
||||
# print("📝 Registering basic validators...")
|
||||
# basic_validators: list[BaseValidator] = [
|
||||
# RegistryIntegrityValidator("nodes"),
|
||||
# RegistryIntegrityValidator("graphs"),
|
||||
# RegistryIntegrityValidator("tools"),
|
||||
# ]
|
||||
|
||||
runner.register_validators(basic_validators)
|
||||
print(f"✅ Registered {len(basic_validators)} validators")
|
||||
# runner.register_validators(basic_validators)
|
||||
# print(f"✅ Registered {len(basic_validators)} validators")
|
||||
|
||||
# Run validations
|
||||
print("\n🚀 Running basic validations...")
|
||||
report = await runner.run_all_validations(parallel=True)
|
||||
# # Run validations
|
||||
# print("\n🚀 Running basic validations...")
|
||||
# report = await runner.run_all_validations(parallel=True)
|
||||
|
||||
# Display summary
|
||||
print("\n📊 VALIDATION SUMMARY")
|
||||
print(f" Total validations: {report.summary.total_validations}")
|
||||
print(f" Success rate: {report.summary.success_rate:.1f}%")
|
||||
print(f" Duration: {report.summary.total_duration:.2f}s")
|
||||
print(f" Issues found: {report.summary.total_issues}")
|
||||
# # Display summary
|
||||
# print("\n📊 VALIDATION SUMMARY")
|
||||
# print(f" Total validations: {report.summary.total_validations}")
|
||||
# print(f" Success rate: {report.summary.success_rate:.1f}%")
|
||||
# print(f" Duration: {report.summary.total_duration:.2f}s")
|
||||
# print(f" Issues found: {report.summary.total_issues}")
|
||||
|
||||
if report.summary.has_failures:
|
||||
print(" ⚠️ Failures detected!")
|
||||
@@ -274,8 +281,9 @@ async def main():
|
||||
"""Run the main demonstration function."""
|
||||
print("🚀 REGISTRY VALIDATION SYSTEM DEMONSTRATION")
|
||||
print("=" * 60)
|
||||
print("This demo shows how the validation system ensures agents")
|
||||
print("can discover and deploy all registered components.")
|
||||
print("⚠️ This demo is currently disabled due to missing validator classes.")
|
||||
print(" The validation system appears to have been refactored.")
|
||||
print(" Available validators can be found in: src/biz_bud/core/validation/")
|
||||
print()
|
||||
|
||||
# Setup logging
|
||||
@@ -287,27 +295,26 @@ async def main():
|
||||
|
||||
try:
|
||||
# Run basic demonstration
|
||||
basic_report = await demo_basic_validation()
|
||||
await demo_basic_validation()
|
||||
|
||||
# Run single validator demo
|
||||
await demo_single_validator()
|
||||
|
||||
# Run capability resolution demo
|
||||
await demo_capability_resolution()
|
||||
# Note: Other demos are also disabled due to missing dependencies
|
||||
# await demo_single_validator()
|
||||
# await demo_capability_resolution()
|
||||
|
||||
# Run comprehensive demo if requested
|
||||
if full_demo:
|
||||
comprehensive_report = await demo_comprehensive_validation()
|
||||
final_report = comprehensive_report
|
||||
else:
|
||||
final_report = basic_report
|
||||
print("\n💡 Run with --full for comprehensive validation demo")
|
||||
# Note: Also disabled due to missing dependencies
|
||||
# if full_demo:
|
||||
# comprehensive_report = await demo_comprehensive_validation()
|
||||
# final_report = comprehensive_report
|
||||
# else:
|
||||
# final_report = basic_report
|
||||
print("\n💡 Demo functionality is currently disabled - see note above")
|
||||
|
||||
# Save report if requested
|
||||
if save_report:
|
||||
await save_validation_report(final_report)
|
||||
else:
|
||||
print("\n💡 Add --save-report to save detailed report to file")
|
||||
# if save_report:
|
||||
# await save_validation_report(final_report)
|
||||
# else:
|
||||
# print("\n💡 Add --save-report to save detailed report to file")
|
||||
|
||||
# Final summary
|
||||
print("\n✅ DEMONSTRATION COMPLETE")
|
||||
|
||||
sonar-project.properties (new file, 1 line)
@@ -0,0 +1 @@
|
||||
sonar.projectKey=vasceannie_biz-bud_6c113581-e663-4a15-8a76-1ce5dab23a5f
|
||||
@@ -1,198 +0,0 @@
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
184 for key, value in variation.items():
|
||||
185 if key != "expected_error":
|
||||
186 corrupted_state[key] = value
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
185 if key != "expected_error":
|
||||
186 corrupted_state[key] = value
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
209 for field in missing_field_tests:
|
||||
210 # Create state with missing field - cast to dict for testing
|
||||
211 corrupted_state = cast("dict[str, Any]", copy.deepcopy(state))
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
212 if field in corrupted_state:
|
||||
213 del corrupted_state[field]
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
218 if value is not None or field not in missing_field_tests:
|
||||
219 corrupted_state[field]
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
319 if recovered_state.get("query") is None:
|
||||
320 recovered_state["query"] = "Recovered query"
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
325 if "query" not in state or state.get("query") is None:
|
||||
326 raise ValueError("State corruption detected")
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
370 for corrupted_data in corrupted_data_variations:
|
||||
371 try:
|
||||
372 # Try to deserialize corrupted data
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
416 for corrupted_data in corrupted_states:
|
||||
417 try:
|
||||
418 # Create corrupted state - cast to dict for testing corruption
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
422 for key, value in corrupted_data.items():
|
||||
423 corrupted_state[key] = value
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
449 if recovered_state.get("query") is None:
|
||||
450 recovered_state["query"] = "Recovered query"
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
460 if recovered_state["search_status"] not in valid_statuses:
|
||||
461 recovered_state["search_status"] = "idle"
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
464 if recovered_state["synthesis_attempts"] < 0:
|
||||
465 recovered_state["synthesis_attempts"] = 0
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
490 if 0 <= checkpoint_id < len(checkpoints):
|
||||
491 return copy.deepcopy(checkpoints[checkpoint_id])
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
557 for i, operation in enumerate(operations):
|
||||
558 try:
|
||||
559 state = operation(state)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
605 for scenario in error_scenarios:
|
||||
606 try:
|
||||
607 # Simulate error occurrence
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
609 if not callable(exception_class):
|
||||
610 raise RuntimeError(f"Invalid exception type: {exception_class}")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
629 if scenario["expected_behavior"] == "stop":
|
||||
630 # Critical error - reset state
|
||||
631 state = self.create_valid_research_state()
|
||||
Reviewed tests/crash_tests/test_memory_exhaustion.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
333 for i in range(n):
|
||||
334 total += i**2
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
342 for _ in range(1000):
|
||||
343 future = executor.submit(cpu_intensive_task, 10000)
|
||||
344 futures.append(future)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
348 for future in as_completed(futures, timeout=60):
|
||||
349 try:
|
||||
350 _ = future.result()
|
||||
Reviewed tests/crash_tests/test_malformed_input.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
176 for empty_input in empty_inputs:
|
||||
177 try:
|
||||
178 result = validator.validate_input(empty_input)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
198 for long_input in long_inputs:
|
||||
199 try:
|
||||
200 result = validator.validate_input(long_input)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
227 for special_input in special_inputs:
|
||||
228 try:
|
||||
229 result = validator.validate_input(special_input)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
253 for malformed_json in malformed_json_inputs:
|
||||
254 try:
|
||||
255 # Try to parse as JSON
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
291 for malformed_xml in malformed_xml_inputs:
|
||||
292 try:
|
||||
293 # Try to parse as XML
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
336 for malformed_html in malformed_html_inputs:
|
||||
337 try:
|
||||
338 # Try to extract from malformed HTML
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
388 for injection_input in injection_inputs:
|
||||
389 try:
|
||||
390 # Validate input
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
420 for binary_input in binary_inputs:
|
||||
421 try:
|
||||
422 # Try to validate binary data
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
454 for circular_input in circular_inputs:
|
||||
455 try:
|
||||
456 # Try to validate circular data
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
594 for malformed_url in malformed_urls:
|
||||
595 try:
|
||||
596 # Try to extract from malformed URL
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
618 if depth == 0:
|
||||
619 return {"value": "deep"}
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
623 if depth == 0:
|
||||
624 return ["deep"]
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
631 for depth in test_depths:
|
||||
632 try:
|
||||
633 if len(deeply_nested_inputs) < 2:
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
633 if len(deeply_nested_inputs) < 2:
|
||||
634 # Create dict/list structures
|
||||
635 deeply_nested_inputs.append(create_nested_dict(depth))
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
647 for nested_input in deeply_nested_inputs:
|
||||
648 try:
|
||||
649 result = validator.validate_input(nested_input)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
678 for mixed_input in mixed_encoding_inputs:
|
||||
679 try:
|
||||
680 result = validator.validate_input(mixed_input)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
744 for malformed_input in malformed_inputs:
|
||||
745 try:
|
||||
746 # Simulate input validation that might be done in actual workflow
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
747 if malformed_input is None:
|
||||
748 raise ValueError("Input cannot be None")
|
||||
749 elif isinstance(malformed_input, bytes):
|
||||
Reviewed tests/integration_tests/test_per_document_analysis_integration.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
163 for url, expected in test_cases:
|
||||
164 result = extract_collection_name(url)
|
||||
165 assert result == expected, (
|
||||
Reviewed tests/integration_tests/test_firecrawl_collection_integration.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
27 for url in test_urls:
|
||||
28 result = extract_collection_name(url)
|
||||
29 assert result == "firecrawl", (
|
||||
sourcery.txt (200 lines)
@@ -1,200 +0,0 @@
|
||||
(.venv) dev@biz-bud-dev:/app$ sourcery review src tests --fix --verbose
|
||||
Reviewed tests/unit_tests/tools/capabilities/extraction/test_structured.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
1048 for text in edge_cases:
|
||||
1049 # Mock all functions to return empty/default results
|
||||
1050 with patch(
|
||||
Reviewed tests/crash_tests/test_concurrency_races.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
439 for _ in range(50): # Reduced from 100 to 50
|
||||
440 with lock:
|
||||
441 current = shared_resource["counter"]
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
452 for future in as_completed(futures):
|
||||
453 result = future.result()
|
||||
454 results.append(result)
|
||||
Reviewed tests/crash_tests/test_filesystem_errors.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
69 if "r" in mode and "no_read_permission" in str(file):
|
||||
70 raise PermissionError("Permission denied: cannot read file")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
85 if "w" in mode and "no_write_permission" in str(file):
|
||||
86 raise PermissionError("Permission denied: cannot write to file")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
153 if len(large_content) > max_file_size:
|
||||
154 with pytest.raises(ValueError, match="File too large"):
|
||||
155 raise ValueError("File too large")
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
175 for invalid_path in invalid_paths:
|
||||
176 try:
|
||||
177 with open(invalid_path, "r") as f:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
216 for network_path in network_paths:
|
||||
217 with pytest.raises((FileNotFoundError, OSError)):
|
||||
218 open(network_path, "r").read()
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
225 for i in range(1000): # Try to exhaust file handles
|
||||
226 file_path = os.path.join(temp_dir, f"file_{i}.txt")
|
||||
227 with open(file_path, "w") as f:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
240 for open_file in open_files:
|
||||
241 try:
|
||||
242 open_file.close()
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
279 for i in range(10):
|
||||
280 write_thread = threading.Thread(target=write_worker, args=(i,))
|
||||
281 read_thread = threading.Thread(target=read_worker, args=(i,))
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
285 for thread in threads:
|
||||
286 thread.start()
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
289 for thread in threads:
|
||||
290 thread.join()
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
305 if os.getuid() == 0:
|
||||
306 pytest.skip("Cannot test permission errors when running as root")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
335 if os.getuid() == 0:
|
||||
336 pytest.skip("Cannot test permission errors when running as root")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
359 if os.path.exists(temp_file):
|
||||
360 os.chmod(temp_file, 0o644)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
388 for file_path in test_files:
|
||||
389 try:
|
||||
390 with open(file_path, "r") as f:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
423 for test_file in test_files:
|
||||
424 try:
|
||||
425 with open(test_file, "r", encoding="utf-8") as f:
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
460 if os.getuid() == 0:
|
||||
461 pytest.skip("Cannot test permission errors when running as root")
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
522 for invalid_file in invalid_files:
|
||||
523 try:
|
||||
524 if invalid_file.endswith(".json"):
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
524 if invalid_file.endswith(".json"):
|
||||
525 import json
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
541 if len(set(col_counts)) > 1:
|
||||
542 raise ValueError("Inconsistent column count")
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
563 if os.getuid() == 0:
|
||||
564 pytest.skip("Cannot test permission errors when running as root")
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
632 for i in range(5):
|
||||
633 temp_file = os.path.join(temp_dir, f"temp_file_{i}.txt")
|
||||
634 with open(temp_file, "w") as f:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
641 for temp_file in temp_files[:3]: # Process first 3
|
||||
642 with open(temp_file, "r") as f:
|
||||
643 content = f.read()
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
651 for temp_file in temp_files:
|
||||
652 try:
|
||||
653 os.remove(temp_file)
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
658 for temp_file in temp_files:
|
||||
659 assert not os.path.exists(temp_file)
|
||||
Reviewed tests/crash_tests/test_config_validation.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
112 for invalid_config in invalid_configs:
|
||||
113 config_file = self.create_temp_config_file(invalid_config)
|
||||
114 try:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
160 for invalid_config in invalid_configs:
|
||||
161 config_file = self.create_temp_config_file(invalid_config)
|
||||
162 try:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
201 for invalid_config in invalid_configs:
|
||||
202 config_file = self.create_temp_config_file(invalid_config)
|
||||
203 try:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
229 for var in env_vars:
|
||||
230 original_values[var] = os.environ.get(var)
|
||||
231 if var in os.environ:
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
231 if var in os.environ:
|
||||
232 del os.environ[var]
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
250 for var, value in original_values.items():
|
||||
251 if value is not None:
|
||||
252 os.environ[var] = value
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
251 if value is not None:
|
||||
252 os.environ[var] = value
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
267 for malformed_yaml in malformed_yamls:
|
||||
268 with tempfile.NamedTemporaryFile(
|
||||
269 mode="w", suffix=".yaml", delete=False
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
340 for invalid_config in invalid_type_configs:
|
||||
341 config_file = self.create_temp_config_file(invalid_config)
|
||||
342 try:
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
446 for edge_config in edge_case_configs:
|
||||
447 config_file = self.create_temp_config_file(edge_config)
|
||||
448 try:
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
511 if "LLM_TIMEOUT" in os.environ:
|
||||
512 del os.environ["LLM_TIMEOUT"]
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
513 if "DB_PORT" in os.environ:
|
||||
514 del os.environ["DB_PORT"]
|
||||
Reviewed tests/crash_tests/test_state_corruption.py
|
||||
-------------------------------------------------------------------------------------------
|
||||
|
||||
• Avoid conditionals in tests. (no-conditionals-in-tests)
|
||||
159 if hasattr(
|
||||
160 problematic_data.get("file_handle"),
|
||||
161 "close",
|
||||
|
||||
• Avoid loops in tests. (no-loop-in-tests)
|
||||
181 for variation in corrupted_variations:
|
||||
182 # Create corrupted state - cast to dict for testing corruption
|
||||
183 corrupted_state = cast("dict[str, Any]", copy.deepcopy(state))
|
||||
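
Most of the no-loop-in-tests and no-conditionals-in-tests findings above share the same shape: a loop over fixture data with a try/except or an environment check inside the test body. A hedged sketch of the usual fix uses `pytest.mark.parametrize` for the loops and `pytest.mark.skipif` for the root-user checks; the `validator` fixture and the sample inputs below are illustrative assumptions, not the actual test code:

```python
import os

import pytest


# Hypothetical rewrite: each input becomes its own test case, so the loop and
# the try/except disappear from the test body and each failure is reported
# separately instead of stopping at the first bad element.
@pytest.mark.parametrize(
    "nested_input",
    [
        {"a": {"b": {"c": "deep"}}},
        ["deep", ["deeper", ["deepest"]]],
    ],
)
def test_validate_deeply_nested_input(validator, nested_input):
    result = validator.validate_input(nested_input)
    assert result is not None


# Environment-dependent branches move out of the test body into a skip marker.
@pytest.mark.skipif(os.getuid() == 0, reason="Cannot test permission errors as root")
def test_read_without_permission(tmp_path):
    protected = tmp_path / "no_read_permission.txt"
    protected.write_text("secret")
    protected.chmod(0o000)
    with pytest.raises(PermissionError):
        protected.read_text()
```
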
@@ -22,12 +22,14 @@ Dependencies:
|
||||
- logging: Logging of events and warnings.
|
||||
"""
|
||||
|
||||
import logging
|
||||
import os
|
||||
|
||||
import nltk
|
||||
|
||||
from . import nodes
|
||||
from .logging import get_logger
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
|
||||
def _setup_nltk() -> None:
|
||||
@@ -63,7 +65,7 @@ def _setup_nltk() -> None:
|
||||
# Also ensure stopwords are available
|
||||
nltk.download("stopwords", download_dir=nltk_data_dir, quiet=True)
|
||||
except Exception as e: # noqa: BLE001
|
||||
logging.warning("Failed to download NLTK stopwords: %s", e)
|
||||
logger.warning("Failed to download NLTK stopwords: %s", e)
|
||||
|
||||
|
||||
# Initialize NLTK
|
||||
|
||||
@@ -389,9 +389,8 @@ async def stream_buddy_agent(
|
||||
yield update["final_response"]
|
||||
|
||||
except Exception as e:
|
||||
import logging
|
||||
|
||||
logging.exception("Buddy agent streaming failed with an exception")
|
||||
logger = get_logger(__name__)
|
||||
logger.exception("Buddy agent streaming failed with an exception")
|
||||
sanitized_error = ErrorMessageFormatter.sanitize_error_message(e, for_user=True)
|
||||
error_highlight(f"Buddy agent streaming failed: {sanitized_error}")
|
||||
yield f"Error: {sanitized_error}"
|
||||
|
||||
@@ -8,6 +8,7 @@ import re
|
||||
import time
|
||||
from typing import Any, Literal
|
||||
|
||||
from biz_bud.core.errors import ValidationError
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
# Removed broken core import
|
||||
@@ -566,7 +567,7 @@ class ResponseFormatter:
|
||||
if records_without_status := [
|
||||
record for record in execution_history if "status" not in record
|
||||
]:
|
||||
raise ValueError(
|
||||
raise ValidationError(
|
||||
f"Found {len(records_without_status)} execution records missing 'status' key. "
|
||||
f"All execution records must include a 'status' key. Offending records: {records_without_status}"
|
||||
)
|
||||
|
||||
@@ -19,11 +19,11 @@ from biz_bud.agents.buddy_execution import (
|
||||
from biz_bud.agents.buddy_state_manager import StateHelper
|
||||
|
||||
# Removed broken core import
|
||||
from biz_bud.core.errors import ValidationError
|
||||
from biz_bud.core.langgraph import StateUpdater, ensure_immutable_node, handle_errors, standard_node
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
# ToolFactory removed - using direct imports instead
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
from biz_bud.states.buddy import BuddyState
|
||||
from biz_bud.tools.capabilities.extraction.core.base import extract_text_from_multimodal_content
|
||||
|
||||
@@ -98,8 +98,123 @@ COMPLEX_PATTERNS: list[Pattern[str]] = [
|
||||
# Precompiled regex for LLM response parsing
|
||||
LLM_RESPONSE_PATTERN: Pattern[str] = re.compile(r"\b(simple|complex)\b", re.IGNORECASE)
|
||||
|
||||
# Introspection keywords for capability queries
|
||||
INTROSPECTION_KEYWORDS = {
|
||||
"capabilities",
|
||||
"tools",
|
||||
"graphs",
|
||||
"what can you do",
|
||||
"help",
|
||||
"commands",
|
||||
"nodes",
|
||||
"available",
|
||||
}
|
||||
|
||||
async def _analyze_query_complexity(user_query: str) -> tuple[str, str]:
|
||||
# Capability refresh interval in seconds (5 minutes)
|
||||
CAPABILITY_REFRESH_INTERVAL_SECONDS = 300.0
|
||||
|
||||
|
||||
def _format_introspection_response(
|
||||
capability_map: dict[str, Any], capability_summary: dict[str, Any]
|
||||
) -> tuple[dict[str, Any], list[dict[str, Any]]]:
|
||||
"""Format the agent's capabilities into a structured response for introspection queries.
|
||||
|
||||
Args:
|
||||
capability_map: Dictionary mapping capability names to their components
|
||||
capability_summary: Summary statistics about capabilities
|
||||
|
||||
Returns:
|
||||
Tuple of (extracted_info, sources) formatted for introspection response
|
||||
"""
|
||||
extracted_info = {}
|
||||
sources = []
|
||||
source_idx = 0
|
||||
|
||||
# Add capability overview
|
||||
source_key = f"source_{source_idx}"
|
||||
extracted_info[source_key] = {
|
||||
"content": f"Business Buddy has {capability_summary.get('total_capabilities', 0)} distinct capabilities.",
|
||||
"summary": "System capability overview",
|
||||
"title": "System Capability Overview",
|
||||
"url": "system://capability_overview",
|
||||
"key_points": [
|
||||
f"Total capabilities: {capability_summary.get('total_capabilities', 0)}",
|
||||
"Direct node-based architecture",
|
||||
],
|
||||
"facts": [],
|
||||
}
|
||||
sources.append(
|
||||
{
|
||||
"key": source_key,
|
||||
"url": "system://capability_overview",
|
||||
"title": "System Capability Overview",
|
||||
}
|
||||
)
|
||||
source_idx += 1
|
||||
|
||||
# Add detailed capability information
|
||||
for capability_name, components in capability_map.items():
|
||||
node_count = len(components.get("nodes", []))
|
||||
graph_count = len(components.get("graphs", []))
|
||||
|
||||
if (
|
||||
node_count > 0 or graph_count > 0
|
||||
): # Only include capabilities that have components
|
||||
source_key = f"source_{source_idx}"
|
||||
|
||||
# Create detailed content with node and graph lists
|
||||
content_parts = [
|
||||
f"{components.get('description', 'No description')}.",
|
||||
f"Available in {node_count} nodes and {graph_count} graphs.",
|
||||
]
|
||||
|
||||
if node_count > 0:
|
||||
node_names = [
|
||||
node.get("name", "<unnamed>")
|
||||
for node in components.get("nodes", [])
|
||||
]
|
||||
content_parts.append(
|
||||
f"Nodes: {', '.join(node_names[:5])}"
|
||||
+ (f" and {node_count - 5} more" if node_count > 5 else "")
|
||||
)
|
||||
|
||||
if graph_count > 0:
|
||||
graph_names = [
|
||||
graph.get("name", "<unnamed>")
|
||||
for graph in components.get("graphs", [])
|
||||
]
|
||||
content_parts.append(
|
||||
f"Graphs: {', '.join(graph_names[:5])}"
|
||||
+ (f" and {graph_count - 5} more" if graph_count > 5 else "")
|
||||
)
|
||||
|
||||
extracted_info[source_key] = {
|
||||
"content": " ".join(content_parts),
|
||||
"summary": f"{capability_name} capability",
|
||||
"title": f"{capability_name.title()} Capability",
|
||||
"url": f"system://capability_{capability_name}",
|
||||
"key_points": [
|
||||
f"Nodes providing this capability: {node_count}",
|
||||
f"Graphs providing this capability: {graph_count}",
|
||||
f"Description: {components.get('description', 'No description')}",
|
||||
],
|
||||
"facts": [],
|
||||
}
|
||||
sources.append(
|
||||
{
|
||||
"key": source_key,
|
||||
"url": f"system://capability_{capability_name}",
|
||||
"title": f"{capability_name.title()} Capability",
|
||||
}
|
||||
)
|
||||
source_idx += 1
|
||||
|
||||
return extracted_info, sources
|
||||
|
||||
|
||||
async def _analyze_query_complexity(
|
||||
user_query: str, service_factory=None
|
||||
) -> tuple[str, str]:
|
||||
"""Analyze query complexity to determine routing strategy.
|
||||
|
||||
Args:
|
||||
@@ -141,7 +256,12 @@ async def _analyze_query_complexity(user_query: str) -> tuple[str, str]:
|
||||
|
||||
# Use LLM for borderline cases
|
||||
try:
|
||||
service_factory = await get_global_factory()
|
||||
if service_factory is None:
|
||||
# Fallback for backward compatibility, but should not be used in new code
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
service_factory = await get_global_factory()
|
||||
|
||||
llm_client = await service_factory.get_llm_for_node("query_analyzer", "small")
|
||||
|
||||
prompt = f"""Analyze this query to determine if it needs simple web search or complex research:
|
||||
@@ -185,10 +305,29 @@ Respond with only: "simple" or "complex"
|
||||
@ensure_immutable_node
|
||||
async def buddy_orchestrator_node(
|
||||
state: BuddyState, config: RunnableConfig | None = None
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
"""Coordinate the execution flow as main orchestrator node."""
|
||||
logger.info("Buddy orchestrator analyzing request")
|
||||
|
||||
# Initialize service factory at the start to ensure it's always available
|
||||
service_factory = None
|
||||
try:
|
||||
from biz_bud.core.langgraph.runnable_config import ConfigurationProvider
|
||||
|
||||
if config is not None:
|
||||
provider = ConfigurationProvider(config)
|
||||
service_factory = provider.get_service_factory()
|
||||
|
||||
if service_factory is None:
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
service_factory = await get_global_factory()
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to initialize service factory: {e}")
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
service_factory = await get_global_factory()
|
||||
|
||||
# Extract user query using helper
|
||||
user_query = StateHelper.extract_user_query(state)
|
||||
|
||||
@@ -263,8 +402,9 @@ async def buddy_orchestrator_node(
|
||||
|
||||
# Analyze query complexity to determine execution strategy
|
||||
try:
|
||||
# Use already initialized service factory for complexity analysis
|
||||
complexity_level, complexity_reasoning = await _analyze_query_complexity(
|
||||
user_query
|
||||
user_query, service_factory
|
||||
)
|
||||
updater = updater.set("query_complexity", complexity_level).set(
|
||||
"complexity_reasoning", complexity_reasoning
|
||||
@@ -290,7 +430,7 @@ async def buddy_orchestrator_node(
|
||||
current_time = time.time()
|
||||
|
||||
# Refresh capabilities if not done recently (every 5 minutes)
|
||||
if current_time - last_discovery > 300:
|
||||
if current_time - last_discovery > CAPABILITY_REFRESH_INTERVAL_SECONDS:
|
||||
logger.info("Refreshing capabilities before planning")
|
||||
|
||||
# Run capability discovery
|
||||
@@ -309,18 +449,7 @@ async def buddy_orchestrator_node(
|
||||
)
|
||||
|
||||
# Check for capability introspection queries first
|
||||
introspection_keywords = [
|
||||
"tools",
|
||||
"capabilities",
|
||||
"what can you do",
|
||||
"help",
|
||||
"functions",
|
||||
"abilities",
|
||||
"commands",
|
||||
"nodes",
|
||||
"graphs",
|
||||
"available",
|
||||
]
|
||||
introspection_keywords = INTROSPECTION_KEYWORDS
|
||||
|
||||
# Handle case where user_query might be a list (multimodal content)
|
||||
user_query_str = extract_text_from_multimodal_content(user_query)
|
||||
@@ -362,92 +491,10 @@ async def buddy_orchestrator_node(
|
||||
if not isinstance(capability_summary, dict):
|
||||
capability_summary = {}
|
||||
|
||||
extracted_info = {}
|
||||
sources = []
|
||||
source_idx = 0
|
||||
|
||||
# Add capability overview
|
||||
source_key = f"source_{source_idx}"
|
||||
extracted_info[source_key] = {
|
||||
"content": f"Business Buddy has {capability_summary.get('total_capabilities', 0)} distinct capabilities.",
|
||||
"summary": "System capability overview",
|
||||
"title": "System Capability Overview",
|
||||
"url": "system://capability_overview",
|
||||
"key_points": [
|
||||
f"Total capabilities: {capability_summary.get('total_capabilities', 0)}",
|
||||
"Direct node-based architecture",
|
||||
],
|
||||
"facts": [],
|
||||
}
|
||||
sources.append(
|
||||
{
|
||||
"key": source_key,
|
||||
"url": "system://capability_overview",
|
||||
"title": "System Capability Overview",
|
||||
}
|
||||
# Format introspection response using helper function
|
||||
extracted_info, sources = _format_introspection_response(
|
||||
capability_map, capability_summary
|
||||
)
|
||||
source_idx += 1
|
||||
|
||||
# Add detailed capability information
|
||||
for capability_name, components in capability_map.items():
|
||||
node_count = len(components.get("nodes", []))
|
||||
graph_count = len(components.get("graphs", []))
|
||||
|
||||
if (
|
||||
node_count > 0 or graph_count > 0
|
||||
): # Only include capabilities that have components
|
||||
source_key = f"source_{source_idx}"
|
||||
|
||||
# Create detailed content with node and graph lists
|
||||
content_parts = [
|
||||
f"{components.get('description', 'No description')}.",
|
||||
f"Available in {node_count} nodes and {graph_count} graphs.",
|
||||
]
|
||||
|
||||
if node_count > 0:
|
||||
node_names = [
|
||||
node.get("name", "<unnamed>")
|
||||
for node in components.get("nodes", [])
|
||||
]
|
||||
content_parts.append(
|
||||
f"Nodes: {', '.join(node_names[:5])}"
|
||||
+ (f" and {node_count - 5} more" if node_count > 5 else "")
|
||||
)
|
||||
|
||||
if graph_count > 0:
|
||||
graph_names = [
|
||||
graph.get("name", "<unnamed>")
|
||||
for graph in components.get("graphs", [])
|
||||
]
|
||||
content_parts.append(
|
||||
f"Graphs: {', '.join(graph_names[:5])}"
|
||||
+ (
|
||||
f" and {graph_count - 5} more"
|
||||
if graph_count > 5
|
||||
else ""
|
||||
)
|
||||
)
|
||||
|
||||
extracted_info[source_key] = {
|
||||
"content": " ".join(content_parts),
|
||||
"summary": f"{capability_name} capability",
|
||||
"title": f"{capability_name.title()} Capability",
|
||||
"url": f"system://capability_{capability_name}",
|
||||
"key_points": [
|
||||
f"Nodes providing this capability: {node_count}",
|
||||
f"Graphs providing this capability: {graph_count}",
|
||||
f"Description: {components.get('description', 'No description')}",
|
||||
],
|
||||
"facts": [],
|
||||
}
|
||||
sources.append(
|
||||
{
|
||||
"key": source_key,
|
||||
"url": f"system://capability_{capability_name}",
|
||||
"title": f"{capability_name.title()} Capability",
|
||||
}
|
||||
)
|
||||
source_idx += 1
|
||||
|
||||
# Skip to synthesis with real capability data
|
||||
return (
|
||||
@@ -469,12 +516,6 @@ async def buddy_orchestrator_node(
|
||||
from biz_bud.core.config.constants import DOCUMENT_CAPABILITIES
|
||||
|
||||
detected_capabilities = state.get("available_capabilities", [])
|
||||
# Ensure detected_capabilities is iterable to prevent type errors with 'any'
|
||||
if not isinstance(detected_capabilities, (list, tuple, set)):
|
||||
logger.warning(
|
||||
f"'available_capabilities' in state is not a list/tuple/set (got {type(detected_capabilities).__name__}); resetting to empty list. Upstream data issue likely."
|
||||
)
|
||||
detected_capabilities = []
|
||||
has_specific_capabilities = any(
|
||||
cap in detected_capabilities for cap in DOCUMENT_CAPABILITIES
|
||||
)
|
||||
@@ -487,8 +528,13 @@ async def buddy_orchestrator_node(
|
||||
logger.info("Simple query detected - using lightweight web search approach")
|
||||
|
||||
try:
|
||||
# Get global factory and use web search directly
|
||||
factory = await get_global_factory()
|
||||
# Get service factory from dependency injection (already retrieved above)
|
||||
factory = service_factory
|
||||
if factory is None:
|
||||
# Fallback for cases where dependency injection is not configured
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
factory = await get_global_factory()
|
||||
|
||||
# Try to create lightweight search tool first, then fallback to existing search tools
|
||||
search_tool = None
|
||||
@@ -510,9 +556,9 @@ async def buddy_orchestrator_node(
|
||||
# Execute tool with proper error handling and validation
|
||||
try:
|
||||
# Validate query before execution
|
||||
if not user_query or not isinstance(user_query, str):
|
||||
raise ValueError(
|
||||
f"Invalid query for tool execution: {type(user_query)}"
|
||||
if not user_query:
|
||||
raise ValidationError(
|
||||
"Invalid query for tool execution: query is empty"
|
||||
)
|
||||
|
||||
search_result = await search_tool._arun(query=user_query)
|
||||
@@ -616,8 +662,13 @@ async def buddy_orchestrator_node(
|
||||
logger.info("Creating execution plan (complex query or fallback)")
|
||||
|
||||
try:
|
||||
# Get global factory and create planner tool dynamically
|
||||
factory = await get_global_factory()
|
||||
# Get service factory from dependency injection (already retrieved above)
|
||||
factory = service_factory
|
||||
if factory is None:
|
||||
# Fallback for cases where dependency injection is not configured
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
factory = await get_global_factory()
|
||||
planner = await factory.create_node_tool("planner")
|
||||
|
||||
# Add capability context to planner
|
||||
@@ -637,7 +688,7 @@ async def buddy_orchestrator_node(
|
||||
"Execution plan is missing 'steps' key. Full execution plan: %s",
|
||||
execution_plan,
|
||||
)
|
||||
raise ValueError(
|
||||
raise ValidationError(
|
||||
f"Execution plan is missing 'steps' key. Execution plan: {execution_plan}"
|
||||
)
|
||||
steps = execution_plan.get("steps", [])
|
||||
@@ -713,7 +764,7 @@ async def buddy_orchestrator_node(
|
||||
@ensure_immutable_node
|
||||
async def buddy_executor_node(
|
||||
state: BuddyState, config: RunnableConfig | None = None
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
"""Execute the current step in the plan."""
|
||||
current_step = state.get("current_step")
|
||||
if not current_step:
|
||||
@@ -747,8 +798,19 @@ async def buddy_executor_node(
|
||||
if "capability_map" in state:
|
||||
context["available_capabilities"] = state["capability_map"] # type: ignore[index]
|
||||
|
||||
# Get global factory and create graph executor dynamically
|
||||
factory = await get_global_factory()
|
||||
# Get service factory from config using dependency injection
|
||||
from biz_bud.core.langgraph.runnable_config import ConfigurationProvider
|
||||
|
||||
factory = None
|
||||
if config is not None:
|
||||
provider = ConfigurationProvider(config)
|
||||
factory = provider.get_service_factory()
|
||||
|
||||
if factory is None:
|
||||
# Fallback for cases where dependency injection is not configured
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
factory = await get_global_factory()
|
||||
executor = await factory.create_node_tool(graph_name)
|
||||
result = await executor._arun(query=step_query, context=context)
|
||||
|
||||
@@ -846,7 +908,7 @@ async def buddy_executor_node(
|
||||
@ensure_immutable_node
|
||||
async def buddy_analyzer_node(
|
||||
state: BuddyState, config: RunnableConfig | None = None
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
"""Analyze execution results and determine if plan modification is needed."""
|
||||
logger.info("Analyzing execution results")
|
||||
|
||||
@@ -891,7 +953,7 @@ async def buddy_analyzer_node(
|
||||
@ensure_immutable_node
|
||||
async def buddy_synthesizer_node(
|
||||
state: BuddyState, config: RunnableConfig | None = None
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
"""Synthesize final results from all executions."""
|
||||
logger.info("Synthesizing final results")
|
||||
|
||||
@@ -913,12 +975,7 @@ async def buddy_synthesizer_node(
|
||||
extracted_info = state.get("extracted_info")
|
||||
sources = state.get("sources")
|
||||
|
||||
if (
|
||||
extracted_info is not None
|
||||
and sources is not None
|
||||
and isinstance(extracted_info, dict)
|
||||
and isinstance(sources, list)
|
||||
): # type: ignore[misc]
|
||||
if extracted_info is not None and sources is not None:
|
||||
# Use existing data (e.g., from capability introspection)
|
||||
info_count = len(extracted_info)
|
||||
sources_count = len(sources)
|
||||
@@ -934,8 +991,19 @@ async def buddy_synthesizer_node(
|
||||
intermediate_results
|
||||
)
|
||||
|
||||
# Use synthesis tool from registry
|
||||
factory = await get_global_factory()
|
||||
# Get service factory from config using dependency injection
|
||||
from biz_bud.core.langgraph.runnable_config import ConfigurationProvider
|
||||
|
||||
factory = None
|
||||
if config is not None:
|
||||
provider = ConfigurationProvider(config)
|
||||
factory = provider.get_service_factory()
|
||||
|
||||
if factory is None:
|
||||
# Fallback for cases where dependency injection is not configured
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
factory = await get_global_factory()
|
||||
synthesizer = await factory.create_node_tool("synthesize_search_results")
|
||||
|
||||
synthesis_result = await synthesizer._arun(
|
||||
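
The provider-then-global-factory fallback now appears in several of the buddy nodes (orchestrator, executor, synthesizer, and the simple-query and planner paths). It could be factored into one helper; the sketch below is only illustrative, and `resolve_service_factory` is a hypothetical name, not part of this change:

```python
from langchain_core.runnables import RunnableConfig

from biz_bud.core.langgraph.runnable_config import ConfigurationProvider
from biz_bud.services.factory import get_global_factory


async def resolve_service_factory(config: RunnableConfig | None):
    """Hypothetical helper: prefer the injected factory, fall back to the global one."""
    if config is not None:
        factory = ConfigurationProvider(config).get_service_factory()
        if factory is not None:
            return factory
    return await get_global_factory()
```
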
@@ -1018,7 +1086,7 @@ async def buddy_synthesizer_node(
|
||||
@ensure_immutable_node
|
||||
async def buddy_capability_discovery_node(
|
||||
state: BuddyState, config: RunnableConfig | None = None
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
) -> dict[str, Any]: # noqa: ARG001
|
||||
"""Discover and refresh system capabilities from registries.
|
||||
|
||||
This node scans the node and graph registries to build a comprehensive
|
||||
|
||||
@@ -187,28 +187,10 @@ class StateHelper:
|
||||
The extracted query string, or empty string if not found
|
||||
"""
|
||||
# First try the direct user_query field
|
||||
if user_query := state.get("user_query"):
|
||||
if isinstance(user_query, str):
|
||||
if "user_query" in state:
|
||||
user_query = state["user_query"]
|
||||
if user_query.strip():
|
||||
return user_query
|
||||
if isinstance(user_query, dict):
|
||||
logger.warning(
|
||||
f"Expected 'user_query' to be str, got dict. Value: {user_query!r}. "
|
||||
"Consider serializing this object before assignment."
|
||||
)
|
||||
# Optionally, you could return json.dumps(user_query) if that's appropriate for your use case.
|
||||
return ""
|
||||
if isinstance(user_query, list):
|
||||
logger.warning(
|
||||
f"Expected 'user_query' to be str, got list. Value: {user_query!r}. "
|
||||
"Consider joining or serializing this list before assignment."
|
||||
)
|
||||
return ""
|
||||
|
||||
logger.warning(
|
||||
f"Expected 'user_query' to be str, got {type(user_query).__name__}. "
|
||||
f"Value: {user_query!r}. Converting to str as a last resort."
|
||||
)
|
||||
return str(user_query)
|
||||
|
||||
# Then try to find in messages
|
||||
messages = state.get("messages", [])
|
||||
|
||||
src/biz_bud/core/README.md (new file, 448 lines)
@@ -0,0 +1,448 @@
|
||||
# Business Buddy Core Module
|
||||
|
||||
The core module provides the foundational infrastructure for the Business Buddy AI agent framework. It implements essential patterns including singleton service management, dependency injection, configuration handling, validation, and LangGraph utilities.
|
||||
|
||||
## Architecture Overview
|
||||
|
||||
The core module is organized into several key areas:
|
||||
|
||||
```
|
||||
src/biz_bud/core/
|
||||
├── caching/ # Cache management and backends
|
||||
├── config/ # Configuration schemas and loading
|
||||
├── edge_helpers/ # LangGraph routing utilities
|
||||
├── errors/ # Error handling and aggregation
|
||||
├── langgraph/ # LangGraph patterns and utilities
|
||||
├── networking/ # HTTP clients and networking
|
||||
├── utils/ # Core utilities and helpers
|
||||
├── validation/ # Input validation and security
|
||||
├── cleanup_registry.py # Service lifecycle management
|
||||
├── enums.py # Core enumerations
|
||||
├── helpers.py # General helper functions
|
||||
├── service_helpers.py # Service-specific utilities
|
||||
├── tool_types.py # Tool type definitions
|
||||
└── types.py # Core type definitions
|
||||
```
|
||||
|
||||
## Service Factory and Singleton Pattern
|
||||
|
||||
### ServiceFactory
|
||||
|
||||
The `ServiceFactory` provides centralized service creation and lifecycle management with thread-safe singleton patterns:
|
||||
|
||||
```python
|
||||
from biz_bud.core.config.schemas import AppConfig
|
||||
from biz_bud.services.factory import ServiceFactory, get_global_factory
|
||||
|
||||
# Create factory with configuration
|
||||
config = AppConfig(...)
|
||||
factory = ServiceFactory(config)
|
||||
|
||||
# Or use global factory
|
||||
factory = await get_global_factory(config)
|
||||
|
||||
# Get services with automatic dependency injection
|
||||
llm_client = await factory.get_llm_client()
|
||||
vector_store = await factory.get_vector_store()
|
||||
extraction_service = await factory.get_semantic_extraction()
|
||||
|
||||
# Context manager usage
|
||||
async with factory:
|
||||
# Use services
|
||||
result = await llm_client.llm_chat("Hello")
|
||||
# Automatic cleanup on exit
|
||||
```
|
||||
|
||||
### Key Features
|
||||
|
||||
- **Thread-safe initialization**: Race-condition-free service creation
|
||||
- **Dependency injection**: Automatic service dependency resolution
|
||||
- **Lifecycle management**: Proper cleanup and resource management
|
||||
- **Configuration-driven**: Services configured via `AppConfig`
|
||||
- **Lazy loading**: Services created only when needed
|
||||
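
As a rough illustration of the thread-safe, lazy-loading behaviour described above, service accessors typically follow a double-checked locking pattern. This sketch is illustrative only and does not reproduce the actual `ServiceFactory` internals:

```python
import asyncio


class LazyService:
    """Sketch of lazy, race-free service creation (illustrative only)."""

    def __init__(self, create_coro_factory):
        self._create = create_coro_factory
        self._instance = None
        self._lock = asyncio.Lock()

    async def get(self):
        if self._instance is None:          # fast path, no lock taken
            async with self._lock:
                if self._instance is None:  # re-check inside the lock
                    self._instance = await self._create()
        return self._instance
```
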
|
||||
### Singleton Manager Integration
|
||||
|
||||
The factory integrates with the cleanup registry for comprehensive lifecycle management:
|
||||
|
||||
```python
|
||||
from biz_bud.core.cleanup_registry import get_cleanup_registry
|
||||
|
||||
# Register cleanup functions
|
||||
registry = get_cleanup_registry()
|
||||
registry.register_cleanup("my_service", my_cleanup_function)
|
||||
|
||||
# Cleanup all services
|
||||
await registry.cleanup_all()
|
||||
```
|
||||
|
||||
## LangGraph Integration
|
||||
|
||||
### Creating LangGraph Nodes
|
||||
|
||||
The core module provides utilities for creating robust LangGraph nodes:
|
||||
|
||||
```python
|
||||
from biz_bud.core.langgraph import standard_node, handle_errors, log_node_execution
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
@standard_node
|
||||
@handle_errors
|
||||
@log_node_execution
|
||||
async def my_analysis_node(state: dict) -> dict:
|
||||
"""Analyze input with LLM."""
|
||||
factory = await get_global_factory()
|
||||
llm_client = await factory.get_llm_client()
|
||||
|
||||
result = await llm_client.llm_chat(
|
||||
prompt=f"Analyze: {state['input']}"
|
||||
)
|
||||
|
||||
return {"analysis": result}
|
||||
|
||||
# With configuration injection
|
||||
from biz_bud.core.langgraph import create_config_injected_node
|
||||
|
||||
configured_node = create_config_injected_node(
|
||||
my_analysis_node,
|
||||
node_context="analysis",
|
||||
llm_profile="analytical"
|
||||
)
|
||||
```
|
||||
|
||||
### Creating LangGraph Graphs
|
||||
|
||||
```python
|
||||
from typing import TypedDict

from langgraph.graph import StateGraph
|
||||
from biz_bud.core.langgraph import configure_graph_with_injection
|
||||
from biz_bud.core.edge_helpers import create_enum_router
|
||||
|
||||
# Define state
|
||||
class AnalysisState(TypedDict):
|
||||
input: str
|
||||
analysis: str
|
||||
next_action: str
|
||||
|
||||
# Create graph
|
||||
graph = StateGraph(AnalysisState)
|
||||
|
||||
# Add nodes with service injection
|
||||
graph.add_node("analyze", my_analysis_node)
|
||||
graph.add_node("process", process_node)
|
||||
|
||||
# Add conditional routing
|
||||
router = create_enum_router({
|
||||
"continue": "process",
|
||||
"complete": "__end__"
|
||||
})
|
||||
graph.add_conditional_edges("analyze", router)
|
||||
|
||||
# Configure with dependency injection
|
||||
configured_graph = configure_graph_with_injection(graph, config)
|
||||
```
|
||||
|
||||
### Edge Helpers and Routing
|
||||
|
||||
The edge helpers provide reusable routing logic:
|
||||
|
||||
```python
|
||||
from biz_bud.core.edge_helpers import (
|
||||
should_continue,
|
||||
check_accuracy,
|
||||
handle_error,
|
||||
create_enum_router
|
||||
)
|
||||
|
||||
# Basic continuation check
|
||||
graph.add_conditional_edges("node", should_continue)
|
||||
|
||||
# Custom routing based on state
|
||||
def route_by_confidence(state: dict) -> str:
|
||||
confidence = state.get("confidence", 0)
|
||||
if confidence > 0.8:
|
||||
return "high_confidence"
|
||||
elif confidence > 0.5:
|
||||
return "medium_confidence"
|
||||
else:
|
||||
return "low_confidence"
|
||||
|
||||
graph.add_conditional_edges("analysis", route_by_confidence)
|
||||
```
|
||||
|
||||
## Creating Tools with Service Integration
|
||||
|
||||
Tools can leverage the service factory for consistent resource access:
|
||||
|
||||
```python
|
||||
from langchain_core.tools import tool
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
|
||||
@tool
|
||||
async def analyze_document(content: str) -> str:
|
||||
"""Analyze document content with AI."""
|
||||
factory = await get_global_factory()
|
||||
llm_client = await factory.get_llm_client()
|
||||
|
||||
return await llm_client.llm_chat(
|
||||
prompt=f"Analyze this document: {content}",
|
||||
system_prompt="You are a document analysis expert."
|
||||
)
|
||||
|
||||
@tool
|
||||
async def search_knowledge_base(query: str) -> list[dict]:
|
||||
"""Search the knowledge base."""
|
||||
factory = await get_global_factory()
|
||||
vector_store = await factory.get_vector_store()
|
||||
|
||||
results = await vector_store.similarity_search(query, k=5)
|
||||
return [{"content": r.page_content, "metadata": r.metadata} for r in results]
|
||||
|
||||
# For tools requiring multiple services
|
||||
@tool
|
||||
async def extract_and_store(url: str) -> str:
|
||||
"""Extract content from URL and store in knowledge base."""
|
||||
factory = await get_global_factory()
|
||||
|
||||
# Get multiple services
|
||||
extraction_service = await factory.get_semantic_extraction()
|
||||
vector_store = await factory.get_vector_store()
|
||||
|
||||
# Extract content
|
||||
content = await extraction_service.extract_from_url(url)
|
||||
|
||||
# Store in vector database
|
||||
await vector_store.add_texts([content["text"]], metadatas=[content["metadata"]])
|
||||
|
||||
return f"Extracted and stored content from {url}"
|
||||
```
|
||||
|
||||
## Configuration Management
|
||||
|
||||
### Configuration Schemas
|
||||
|
||||
All services use typed configuration schemas:
|
||||
|
||||
```python
|
||||
from biz_bud.core.config.schemas import AppConfig, LLMConfig, ServicesConfig
|
||||
|
||||
config = AppConfig(
|
||||
llm=LLMConfig(
|
||||
provider="openai",
|
||||
model="gpt-4",
|
||||
temperature=0.7
|
||||
),
|
||||
services=ServicesConfig(
|
||||
database_url="postgresql://...",
|
||||
redis_url="redis://...",
|
||||
vector_store_url="http://..."
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
### Dynamic Configuration
|
||||
|
||||
```python
|
||||
from biz_bud.core.config import build_llm_config
|
||||
|
||||
# Get node-specific LLM configuration
|
||||
llm_config = build_llm_config(
|
||||
node_context="extraction",
|
||||
config=app_config,
|
||||
temperature_override=0.3
|
||||
)
|
||||
|
||||
# Use with factory
|
||||
llm_client = await factory.get_llm_for_node(
|
||||
node_context="extraction",
|
||||
temperature_override=0.3
|
||||
)
|
||||
```
|
||||
|
||||
## Caching and Performance
|
||||
|
||||
### Cache Integration
|
||||
|
||||
```python
|
||||
from biz_bud.core.caching import CacheManager, redis_cache
|
||||
|
||||
# Use cache manager
|
||||
cache_manager = CacheManager(backend="redis")
|
||||
|
||||
@redis_cache(ttl=3600)
|
||||
async def expensive_operation(param: str) -> str:
|
||||
"""Cached expensive operation."""
|
||||
# Perform expensive computation
|
||||
result = await some_expensive_call(param)
|
||||
return result
|
||||
|
||||
# Manual cache usage
|
||||
cache_key = "analysis:document:123"
|
||||
cached_result = await cache_manager.get(cache_key)
|
||||
if not cached_result:
|
||||
result = await perform_analysis()
|
||||
await cache_manager.set(cache_key, result, ttl=3600)
|
||||
```
|
||||
|
||||
### Memory Management
|
||||
|
||||
The factory uses weak references and proper cleanup:
|
||||
|
||||
```python
|
||||
# Automatic cleanup on context exit
|
||||
async with ServiceFactory(config) as factory:
|
||||
services = await factory.initialize_critical_services()
|
||||
# ... use services
|
||||
# Automatic cleanup happens here
|
||||
|
||||
# Manual cleanup
|
||||
await factory.cleanup()
|
||||
|
||||
# Global cleanup
|
||||
from biz_bud.services.factory import cleanup_global_factory
|
||||
await cleanup_global_factory()
|
||||
```
|
||||
|
||||
## Error Handling and Validation
|
||||
|
||||
### Comprehensive Error Handling
|
||||
|
||||
```python
|
||||
from biz_bud.core.errors import ErrorAggregator, handle_service_error
|
||||
from biz_bud.core.langgraph import handle_errors
|
||||
|
||||
@handle_errors
|
||||
async def robust_node(state: dict) -> dict:
|
||||
"""Node with comprehensive error handling."""
|
||||
try:
|
||||
result = await risky_operation(state["input"])
|
||||
return {"result": result}
|
||||
except SpecificError as e:
|
||||
# Handle specific errors
|
||||
return {"error": str(e), "retry": True}
|
||||
```
|
||||
|
||||
### Input Validation
|
||||
|
||||
```python
|
||||
from biz_bud.core.validation import validate_content, ContentValidator
|
||||
|
||||
# Validate inputs
|
||||
validator = ContentValidator()
|
||||
is_valid, errors = await validator.validate_text_content(user_input)
|
||||
|
||||
if not is_valid:
|
||||
raise ValidationError(f"Invalid input: {errors}")
|
||||
|
||||
# Schema validation
|
||||
from biz_bud.core.validation.pydantic_models import DocumentSchema
|
||||
|
||||
document = DocumentSchema.model_validate(raw_data)
|
||||
```
|
||||
|
||||
## Networking and HTTP Clients
|
||||
|
||||
### HTTP Client Usage
|
||||
|
||||
```python
|
||||
from biz_bud.core.networking import HTTPClient
|
||||
|
||||
async with HTTPClient() as client:
|
||||
response = await client.get("https://api.example.com/data")
|
||||
data = response.json()
|
||||
|
||||
# With retry and timeout
|
||||
from biz_bud.core.networking.retry import with_retry
|
||||
|
||||
@with_retry(max_attempts=3, backoff_factor=2.0)
|
||||
async def fetch_data(url: str):
|
||||
async with HTTPClient() as client:
|
||||
return await client.get(url, timeout=30.0)
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### Service Lifecycle
|
||||
|
||||
1. **Use context managers**: Always use `async with` for proper cleanup
|
||||
2. **Global factory for long-running processes**: Use `get_global_factory()` for applications
|
||||
3. **Initialize critical services early**: Use `initialize_critical_services()` for faster startup
|
||||
4. **Proper cleanup**: Always call cleanup methods or use context managers
|
||||
|
||||
### LangGraph Patterns
|
||||
|
||||
1. **Use decorators**: Apply `@standard_node`, `@handle_errors`, `@log_node_execution`
|
||||
2. **State immutability**: Use immutable state patterns for reliability
|
||||
3. **Configuration injection**: Use `create_config_injected_node` for context-specific settings
|
||||
4. **Reusable edge helpers**: Leverage built-in routing functions
|
||||
|
||||
### Tool Development
|
||||
|
||||
1. **Consistent service access**: Use the service factory for all resource access
|
||||
2. **Error handling**: Wrap tool logic in try/except blocks
|
||||
3. **Type hints**: Use proper type annotations for better IDE support
|
||||
4. **Documentation**: Include clear docstrings and examples
|
||||
|
||||
### Configuration
|
||||
|
||||
1. **Use schemas**: Always use typed configuration schemas
|
||||
2. **Environment variables**: Load sensitive data from environment
|
||||
3. **Validation**: Validate configuration at startup
|
||||
4. **Context-specific settings**: Use node-specific overrides when needed
|
||||
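
For example, environment-driven settings can be validated once at startup, before any services are built. This is a minimal sketch; the environment variable names, defaults, and the exact `ServicesConfig` fields are assumptions:

```python
import os

from biz_bud.core.config.schemas import AppConfig, LLMConfig, ServicesConfig


def load_config_from_env() -> AppConfig:
    """Illustrative startup loader: fail fast if required settings are missing."""
    database_url = os.environ.get("DATABASE_URL")
    if not database_url:
        raise RuntimeError("DATABASE_URL must be set")

    return AppConfig(
        llm=LLMConfig(
            provider=os.environ.get("LLM_PROVIDER", "openai"),
            model=os.environ.get("LLM_MODEL", "gpt-4"),
            temperature=float(os.environ.get("LLM_TEMPERATURE", "0.7")),
        ),
        services=ServicesConfig(database_url=database_url),
    )
```
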
|
||||
## Testing
|
||||
|
||||
### Service Factory Testing
|
||||
|
||||
```python
|
||||
import pytest
|
||||
from biz_bud.services.factory import ServiceFactory
|
||||
from biz_bud.core.config.schemas import AppConfig
|
||||
|
||||
@pytest.fixture
|
||||
async def factory():
|
||||
config = AppConfig(...)
|
||||
factory = ServiceFactory(config)
|
||||
yield factory
|
||||
await factory.cleanup()
|
||||
|
||||
async def test_service_creation(factory):
|
||||
llm_client = await factory.get_llm_client()
|
||||
assert llm_client is not None
|
||||
|
||||
# Services should be singletons
|
||||
llm_client2 = await factory.get_llm_client()
|
||||
assert llm_client is llm_client2
|
||||
```
|
||||
|
||||
### LangGraph Testing
|
||||
|
||||
```python
|
||||
async def test_node_execution():
|
||||
state = {"input": "test data"}
|
||||
result = await my_analysis_node(state)
|
||||
|
||||
assert "analysis" in result
|
||||
assert isinstance(result["analysis"], str)
|
||||
```
|
||||
|
||||
## Module Dependencies
|
||||
|
||||
The core module has minimal external dependencies:
|
||||
|
||||
- **LangChain/LangGraph**: For AI agent patterns
|
||||
- **Pydantic**: For configuration and validation schemas
|
||||
- **AsyncIO**: For asynchronous operations
|
||||
- **Redis** (optional): For caching backend
|
||||
- **PostgreSQL** (optional): For database services
|
||||
|
||||
## Migration and Upgrade Guide
|
||||
|
||||
When upgrading or migrating services:
|
||||
|
||||
1. **Update configuration schemas** first
|
||||
2. **Test service initialization** in isolation
|
||||
3. **Verify singleton behavior** remains consistent
|
||||
4. **Check cleanup and resource management**
|
||||
5. **Update dependent nodes and tools**
|
||||
|
||||
For more detailed information about specific modules, see the individual documentation in each subdirectory.
|
||||
src/biz_bud/core/caching/CACHING_GUIDELINES.md (new file, 311 lines)
@@ -0,0 +1,311 @@
|
||||
# Caching Guidelines
|
||||
|
||||
This document provides guidelines for choosing the appropriate caching approach in the Business Buddy Core system.
|
||||
|
||||
## Overview
|
||||
|
||||
The caching system provides multiple approaches for caching function results:
|
||||
|
||||
1. **Centralized Cache Decorators** (`@cache_async`, `@cache_sync`)
|
||||
2. **Standard Library Cache** (`@lru_cache` from `functools`)
|
||||
3. **Custom Cache Backends** (Redis, File, Memory)
|
||||
|
||||
## When to Use Each Approach
|
||||
|
||||
### Use `@cache_async` / `@cache_sync` (Centralized Cache Decorators)
|
||||
|
||||
**Recommended for:**
|
||||
- Functions that perform expensive operations (network calls, database queries, file I/O)
|
||||
- Functions that benefit from persistence across application restarts
|
||||
- Functions where cache invalidation and management are important
|
||||
- Async functions that need caching
|
||||
- Functions where cache hit/miss metrics are valuable
|
||||
|
||||
**Examples:**
|
||||
```python
|
||||
from biz_bud.core.caching.decorators import cache_async, cache_sync
|
||||
|
||||
@cache_async(ttl=3600, key_prefix="llm_call:")
|
||||
async def call_llm_api(prompt: str, model: str) -> str:
|
||||
"""Expensive LLM API call - cache for 1 hour."""
|
||||
# Implementation here
|
||||
pass
|
||||
|
||||
@cache_sync(ttl=7200, key_prefix="config:")
|
||||
def load_configuration(config_path: str) -> dict:
|
||||
"""Configuration loading - cache for 2 hours."""
|
||||
# Implementation here
|
||||
pass
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
- Configurable TTL (time-to-live)
|
||||
- Multiple backend support (memory, Redis, file)
|
||||
- Built-in cache management (clear, delete specific entries)
|
||||
- Telemetry and monitoring integration
|
||||
- Force refresh capability
|
||||
- Thread-safe initialization
|
||||
|
||||
**Trade-offs:**
|
||||
- Slightly higher overhead than `@lru_cache`
|
||||
- Requires serialization/deserialization (pickle)
|
||||
- More complex setup for advanced features
|
||||
|
||||
### Use `@lru_cache` (Standard Library)
|
||||
|
||||
**Recommended for:**
|
||||
- Pure functions with deterministic outputs
|
||||
- Mathematical computations or algorithms
|
||||
- Small to medium result sets that fit in memory
|
||||
- Functions called frequently with repeated arguments
|
||||
- Simple caching needs without TTL requirements
|
||||
|
||||
**Examples:**
|
||||
```python
|
||||
from functools import lru_cache
|
||||
|
||||
@lru_cache(maxsize=128)
|
||||
def calculate_fibonacci(n: int) -> int:
|
||||
"""Pure mathematical function - cache up to 128 results."""
|
||||
if n < 2:
|
||||
return n
|
||||
return calculate_fibonacci(n-1) + calculate_fibonacci(n-2)
|
||||
|
||||
@lru_cache(maxsize=256)
|
||||
def parse_url_pattern(pattern: str) -> dict:
|
||||
"""URL pattern parsing - lightweight and deterministic."""
|
||||
# Implementation here
|
||||
pass
|
||||
```
|
||||
|
||||
**Benefits:**
|
||||
- Minimal overhead and maximum performance
|
||||
- Built into Python standard library
|
||||
- Memory-efficient LRU eviction policy
|
||||
- No serialization overhead
|
||||
- Simple API
|
||||
|
||||
**Trade-offs:**
|
||||
- No TTL support (cache persists until evicted or cleared)
|
||||
- Memory-only storage (lost on restart)
|
||||
- Limited to sync functions only
|
||||
- No advanced cache management features
|
||||
- Fixed maxsize limit
|
||||
|
||||
### Use Custom Cache Backends
|
||||
|
||||
**Recommended for:**
|
||||
- Application-wide caching strategies
|
||||
- High-volume caching with Redis clustering
|
||||
- Persistent caching across deployments
|
||||
- Complex cache hierarchies or policies
|
||||
|
||||
**Examples:**
|
||||
```python
|
||||
from biz_bud.core.caching import get_cache_manager, CacheBackend
|
||||
|
||||
# Redis for distributed caching
|
||||
cache_manager = await get_cache_manager()
|
||||
redis_cache = await cache_manager.get_backend(CacheBackend.REDIS)
|
||||
|
||||
# Manual cache operations
|
||||
await redis_cache.set("user:123", user_data, ttl=3600)
|
||||
cached_user = await redis_cache.get("user:123")
|
||||
```
|
||||
|
||||
## Decision Matrix
|
||||
|
||||
| Use Case | Recommended Approach | Reason |
|
||||
|----------|---------------------|---------|
|
||||
| HTTP API calls | `@cache_async` | Network I/O, TTL needed, async |
|
||||
| Database queries | `@cache_async` | Expensive, TTL needed, may be async |
|
||||
| File system operations | `@cache_sync` | I/O heavy, TTL useful |
|
||||
| Mathematical calculations | `@lru_cache` | Pure functions, high performance |
|
||||
| Configuration parsing | `@cache_sync` | Infrequent changes, I/O involved |
|
||||
| URL normalization | `@lru_cache` | Deterministic, frequent calls |
|
||||
| LLM responses | `@cache_async` | Very expensive, network, async |
|
||||
| Validation results | `@lru_cache` | Fast, deterministic, frequent |
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Memory Usage
|
||||
- `@lru_cache`: Most memory-efficient, objects stored directly
|
||||
- `@cache_async/sync`: Higher memory usage due to pickle serialization
|
||||
- Custom backends: Varies by backend (Redis = network, File = disk)
|
||||
|
||||
### Speed
|
||||
1. `@lru_cache` (fastest - direct memory access)
|
||||
2. `@cache_async/sync` with Memory backend
|
||||
3. `@cache_async/sync` with Redis backend
|
||||
4. `@cache_async/sync` with File backend (slowest)
|
||||
|
||||
### Persistence
|
||||
- `@lru_cache`: Lost on process restart
|
||||
- Memory backend: Lost on process restart
|
||||
- Redis backend: Persists across restarts (if Redis configured for persistence)
|
||||
- File backend: Persists across restarts
|
||||
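
When persistence matters, the backend can be selected explicitly through the cache manager shown earlier. A sketch only; `CacheBackend.MEMORY` is assumed to exist alongside `CacheBackend.REDIS`:

```python
from biz_bud.core.caching import CacheBackend, get_cache_manager


async def pick_backend(need_persistence: bool):
    """Illustrative: prefer Redis when results must survive restarts."""
    manager = await get_cache_manager()
    if need_persistence:
        return await manager.get_backend(CacheBackend.REDIS)
    return await manager.get_backend(CacheBackend.MEMORY)
```
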
|
||||
## Best Practices
|
||||
|
||||
### For `@cache_async` / `@cache_sync`
|
||||
|
||||
1. **Choose appropriate TTL values:**
|
||||
```python
|
||||
@cache_async(ttl=60) # Fast-changing data (1 minute)
|
||||
@cache_async(ttl=3600) # Slow-changing data (1 hour)
|
||||
@cache_async(ttl=86400) # Very stable data (24 hours)
|
||||
@cache_async(ttl=None) # Cache until manually cleared
|
||||
```
|
||||
|
||||
2. **Use meaningful key prefixes:**
|
||||
```python
|
||||
@cache_async(key_prefix="api:weather:")
|
||||
async def get_weather(city: str) -> dict:
|
||||
pass
|
||||
```
|
||||
|
||||
3. **Handle cache invalidation:**
|
||||
```python
|
||||
@cache_async(ttl=3600)
|
||||
async def get_user_profile(user_id: str) -> dict:
|
||||
pass
|
||||
|
||||
# Invalidate when user data changes
|
||||
await get_user_profile.cache_delete(user_id)
|
||||
```
|
||||
|
||||
4. **Use force_refresh for updates:**
|
||||
```python
|
||||
# Normal cached call
|
||||
profile = await get_user_profile(user_id)
|
||||
|
||||
# Force refresh from source
|
||||
profile = await get_user_profile(user_id, force_refresh=True)
|
||||
```
|
||||
|
||||
### For `@lru_cache`
|
||||
|
||||
1. **Set appropriate maxsize:**
|
||||
```python
|
||||
@lru_cache(maxsize=None) # Unlimited (use carefully)
|
||||
@lru_cache(maxsize=128) # Small cache
|
||||
@lru_cache(maxsize=1024) # Medium cache
|
||||
```
|
||||
|
||||
2. **Use for pure functions only:**
|
||||
```python
|
||||
import hashlib
from datetime import datetime
from functools import lru_cache

# Good - deterministic output
|
||||
@lru_cache(maxsize=256)
|
||||
def calculate_hash(data: str) -> str:
|
||||
return hashlib.md5(data.encode()).hexdigest()
|
||||
|
||||
# Bad - non-deterministic or side effects
|
||||
@lru_cache(maxsize=128)
|
||||
def get_current_time() -> datetime: # Time changes!
|
||||
return datetime.now()
|
||||
```
|
||||
|
||||
3. **Clear cache when needed:**
|
||||
```python
|
||||
# Clear all cached results
|
||||
my_function.cache_clear()
|
||||
|
||||
# Check cache statistics
|
||||
print(my_function.cache_info())
|
||||
```
|
||||
|
||||
## Testing Considerations
|
||||
|
||||
### With Centralized Cache Decorators
|
||||
```python
|
||||
from biz_bud.core.caching.decorators import reset_cache_singleton
|
||||
|
||||
async def test_cached_function():
|
||||
# Reset cache state before test
|
||||
reset_cache_singleton()
|
||||
|
||||
# Test with fresh cache
|
||||
result1 = await my_cached_function("test")
|
||||
result2 = await my_cached_function("test") # Should be cached
|
||||
|
||||
assert result1 == result2
|
||||
```
|
||||
|
||||
### With `@lru_cache`
|
||||
```python
|
||||
def test_lru_cached_function():
|
||||
# Clear cache before test
|
||||
my_function.cache_clear()
|
||||
|
||||
# Test with fresh cache
|
||||
result1 = my_function("test")
|
||||
result2 = my_function("test") # Should be cached
|
||||
|
||||
assert result1 == result2
|
||||
assert my_function.cache_info().hits == 1
|
||||
```
|
||||
|
||||
## Migration Guide
|
||||
|
||||
### From `@lru_cache` to `@cache_async`
|
||||
|
||||
```python
|
||||
# Before
|
||||
from functools import lru_cache
|
||||
|
||||
@lru_cache(maxsize=128)
|
||||
def expensive_sync_function(param: str) -> str:
|
||||
# Implementation
|
||||
pass
|
||||
|
||||
# After
|
||||
from biz_bud.core.caching.decorators import cache_sync
|
||||
|
||||
@cache_sync(ttl=3600, key_prefix="expensive:")
|
||||
def expensive_sync_function(param: str) -> str:
|
||||
# Same implementation
|
||||
pass
|
||||
```
|
||||
|
||||
### From sync to async caching
|
||||
|
||||
```python
|
||||
# Before
|
||||
@cache_sync(ttl=3600)
|
||||
def sync_function(param: str) -> str:
|
||||
# Implementation
|
||||
pass
|
||||
|
||||
# After
|
||||
@cache_async(ttl=3600)
|
||||
async def async_function(param: str) -> str:
|
||||
# Convert implementation to async
|
||||
pass
|
||||
```
|
||||
|
||||
## Monitoring and Debugging
|
||||
|
||||
### Cache Performance
|
||||
- Use cache hit ratio logging for centralized cache decorators
|
||||
- Monitor memory usage with `@lru_cache` via `cache_info()`
|
||||
- Enable performance logging for detailed timing
|
||||
|
||||
### Cache Debugging
|
||||
```python
|
||||
# Check cache contents (development only)
|
||||
backend = await get_default_cache_async()
|
||||
await backend.clear() # Clear all cache
|
||||
|
||||
# Force refresh specific entries
|
||||
result = await cached_function(param, force_refresh=True)
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Choose your caching approach based on:
|
||||
|
||||
1. **Performance requirements**: `@lru_cache` for maximum speed
|
||||
2. **Persistence needs**: Centralized cache for cross-restart persistence
|
||||
3. **Function type**: `@cache_async` for async, `@lru_cache` for sync pure functions
|
||||
4. **Cache management**: Centralized cache for TTL, invalidation, monitoring
|
||||
5. **Complexity**: `@lru_cache` for simple cases, centralized cache for advanced features
|
||||
@@ -2,10 +2,10 @@
|
||||
|
||||
from .base import CacheBackend as BytesCacheBackend
|
||||
from .base import CacheKey
|
||||
from .base import GenericCacheBackend as CacheBackend
|
||||
from .cache_backends import AsyncFileCacheBackend
|
||||
from .cache_encoder import CacheKeyEncoder
|
||||
from .cache_manager import LLMCache
|
||||
from .cache_types import CacheBackend
|
||||
from .decorators import cache, cache_async, cache_sync
|
||||
from .file import FileCache
|
||||
from .memory import InMemoryCache
|
||||
|
||||
@@ -1,11 +1,12 @@
|
||||
"""Abstract base class for cache backends."""
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Protocol, TypeVar
|
||||
from typing import Generic, Protocol, TypeVar, runtime_checkable
|
||||
|
||||
T = TypeVar("T")
|
||||
|
||||
|
||||
@runtime_checkable
|
||||
class CacheKey(Protocol):
|
||||
"""Protocol for cache key generation."""
|
||||
|
||||
@@ -117,3 +118,60 @@ class CacheBackend(ABC):
|
||||
implementation for backends that don't need initialization.
|
||||
"""
|
||||
return None
|
||||
|
||||
|
||||
class GenericCacheBackend(ABC, Generic[T]):
|
||||
"""Abstract base class for generic cache backends."""
|
||||
|
||||
@abstractmethod
|
||||
async def get(self, key: str) -> T | None:
|
||||
"""Retrieve value from cache.
|
||||
|
||||
Args:
|
||||
key: Cache key
|
||||
|
||||
Returns:
|
||||
Cached value or None if not found
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def set(
|
||||
self,
|
||||
key: str,
|
||||
value: T,
|
||||
ttl: int | None = None,
|
||||
) -> None:
|
||||
"""Store value in cache.
|
||||
|
||||
Args:
|
||||
key: Cache key
|
||||
value: Value to store
|
||||
ttl: Time-to-live in seconds (None for no expiry)
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def delete(self, key: str) -> None:
|
||||
"""Remove value from cache.
|
||||
|
||||
Args:
|
||||
key: Cache key to delete
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def clear(self) -> None:
|
||||
"""Clear all cache entries."""
|
||||
...
|
||||
|
||||
async def ainit(self) -> None:
|
||||
"""Initialize the cache backend.
|
||||
|
||||
This method can be overridden by implementations that need
|
||||
async initialization. The default implementation does nothing.
|
||||
|
||||
Note: This is intentionally non-abstract to provide a default
|
||||
implementation for backends that don't need initialization.
|
||||
"""
|
||||
return None
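
For readers of the new `GenericCacheBackend[T]` ABC, this is roughly what a concrete implementation looks like: a toy, dictionary-backed backend that satisfies the four abstract methods and checks expiry lazily on `get`. It is illustrative only and does not ship with this change:

```python
import time


class DictBackend(GenericCacheBackend[str]):
    """Toy in-process backend used only to illustrate the ABC."""

    def __init__(self) -> None:
        self._data: dict[str, tuple[str, float | None]] = {}

    async def get(self, key: str) -> str | None:
        entry = self._data.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and expires_at < time.time():
            del self._data[key]  # lazily drop expired entries
            return None
        return value

    async def set(self, key: str, value: str, ttl: int | None = None) -> None:
        expires_at = time.time() + ttl if ttl is not None else None
        self._data[key] = (value, expires_at)

    async def delete(self, key: str) -> None:
        self._data.pop(key, None)

    async def clear(self) -> None:
        self._data.clear()
```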
@@ -4,9 +4,13 @@ import json
|
||||
import pickle
|
||||
from pathlib import Path
|
||||
|
||||
from .cache_types import CacheBackend
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
from .base import GenericCacheBackend as CacheBackend
|
||||
from .file import FileCache
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
|
||||
class AsyncFileCacheBackend[T](CacheBackend[T]):
|
||||
"""Async file-based cache backend with generic typing support.
|
||||
@@ -63,9 +67,21 @@ class AsyncFileCacheBackend[T](CacheBackend[T]):
|
||||
|
||||
# Deserialize based on serializer type
|
||||
if self.serializer == "json":
|
||||
return json.loads(bytes_value.decode("utf-8"))
|
||||
try:
|
||||
return json.loads(bytes_value.decode("utf-8"))
|
||||
except (json.JSONDecodeError, UnicodeDecodeError) as e:
|
||||
logger.warning(f"Failed to deserialize cached JSON for key {key}: {e}")
|
||||
# Remove corrupted cache entry
|
||||
await self._file_cache.delete(key)
|
||||
return None
|
||||
else: # pickle
|
||||
return pickle.loads(bytes_value)
|
||||
try:
|
||||
return pickle.loads(bytes_value)
|
||||
except (pickle.PickleError, EOFError) as e:
|
||||
logger.warning(f"Failed to deserialize cached pickle for key {key}: {e}")
|
||||
# Remove corrupted cache entry
|
||||
await self._file_cache.delete(key)
|
||||
return None
|
||||
|
||||
async def set(self, key: str, value: T, ttl: int | None = None) -> None:
|
||||
"""Serialize and set value in cache."""
|
||||
|
||||
@@ -3,12 +3,13 @@
|
||||
import hashlib
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
from .base import GenericCacheBackend as CacheBackend
|
||||
from .cache_backends import AsyncFileCacheBackend
|
||||
from .cache_encoder import CacheKeyEncoder
|
||||
from .cache_types import CacheBackend
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
@@ -51,10 +52,14 @@ class LLMCache[T]:
|
||||
if not self._ainit_done:
|
||||
# Initialize if it has an ainit method
|
||||
if hasattr(self._backend, "ainit"):
|
||||
await self._backend.ainit()
|
||||
try:
|
||||
await self._backend.ainit()
|
||||
except Exception as e:
|
||||
logger.warning(f"Cache backend initialization failed: {e}")
|
||||
# Continue without initialization - operations will still work
|
||||
self._ainit_done = True
|
||||
|
||||
def _generate_key(self, args: tuple[object, ...], kwargs: dict[str, object]) -> str:
|
||||
def _generate_key(self, args: tuple[Any, ...], kwargs: dict[str, Any]) -> str:
|
||||
"""Generate a cache key from arguments.
|
||||
|
||||
Args:
|
||||
|
||||
@@ -1,60 +0,0 @@
|
||||
"""Type definitions for caching system."""
|
||||
|
||||
from abc import ABC, abstractmethod
|
||||
|
||||
|
||||
class CacheBackend[T](ABC):
|
||||
"""Abstract base class for generic cache backends."""
|
||||
|
||||
@abstractmethod
|
||||
async def get(self, key: str) -> T | None:
|
||||
"""Retrieve value from cache.
|
||||
|
||||
Args:
|
||||
key: Cache key
|
||||
|
||||
Returns:
|
||||
Cached value or None if not found
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def set(
|
||||
self,
|
||||
key: str,
|
||||
value: T,
|
||||
ttl: int | None = None,
|
||||
) -> None:
|
||||
"""Store value in cache.
|
||||
|
||||
Args:
|
||||
key: Cache key
|
||||
value: Value to store
|
||||
ttl: Time-to-live in seconds (None for no expiry)
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def delete(self, key: str) -> None:
|
||||
"""Remove value from cache.
|
||||
|
||||
Args:
|
||||
key: Cache key to delete
|
||||
"""
|
||||
...
|
||||
|
||||
@abstractmethod
|
||||
async def clear(self) -> None:
|
||||
"""Clear all cache entries."""
|
||||
...
|
||||
|
||||
async def ainit(self) -> None:
|
||||
"""Initialize the cache backend.
|
||||
|
||||
This method can be overridden by implementations that need
|
||||
async initialization. The default implementation does nothing.
|
||||
|
||||
Note: This is intentionally non-abstract to provide a default
|
||||
implementation for backends that don't need initialization.
|
||||
"""
|
||||
return None
|
||||
@@ -6,15 +6,23 @@ import hashlib
|
||||
import json
|
||||
import pickle
|
||||
from collections.abc import Awaitable, Callable
|
||||
from typing import ParamSpec, TypeVar, cast
|
||||
from typing import Any, ParamSpec, TypeVar, cast
|
||||
|
||||
from .base import CacheBackend as BytesCacheBackend
|
||||
from .cache_types import CacheBackend
|
||||
from .base import GenericCacheBackend
|
||||
from .memory import InMemoryCache
|
||||
|
||||
# Type alias for backward compatibility
|
||||
CacheBackend = GenericCacheBackend
|
||||
|
||||
P = ParamSpec("P")
|
||||
T = TypeVar("T")
|
||||
|
||||
# Type aliases for caching decorators
|
||||
CacheDictType = dict[str, object]
|
||||
CacheTupleType = tuple[object, ...]
|
||||
AsyncCacheDecoratorType = Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]
|
||||
|
||||
|
||||
class _DefaultCacheManager:
|
||||
"""Thread-safe manager for the default cache instance using task-based pattern."""
|
||||
@@ -82,7 +90,7 @@ class _DefaultCacheManager:
|
||||
try:
|
||||
await self._initializing_task
|
||||
except asyncio.CancelledError:
|
||||
pass
|
||||
raise
|
||||
finally:
|
||||
self._initializing_task = None
|
||||
|
||||
@@ -97,7 +105,7 @@ class _DefaultCacheManager:
|
||||
_default_cache_manager = _DefaultCacheManager()
|
||||
|
||||
|
||||
def get_default_cache() -> InMemoryCache:
|
||||
def get_default_cache() -> BytesCacheBackend | CacheBackend[bytes]:
|
||||
"""Get or create the default shared cache instance.
|
||||
|
||||
Note: This is the synchronous version for backward compatibility.
|
||||
@@ -106,15 +114,15 @@ def get_default_cache() -> InMemoryCache:
|
||||
return _default_cache_manager.get_cache_sync()
|
||||
|
||||
|
||||
async def get_default_cache_async() -> InMemoryCache:
|
||||
async def get_default_cache_async() -> BytesCacheBackend | CacheBackend[bytes]:
|
||||
"""Get or create the default shared cache instance with thread-safe init."""
|
||||
return await _default_cache_manager.get_cache()
|
||||
|
||||
|
||||
def _generate_cache_key(
|
||||
func_name: str,
|
||||
args: tuple[object, ...],
|
||||
kwargs: dict[str, object],
|
||||
args: tuple[Any, ...],
|
||||
kwargs: dict[str, Any],
|
||||
prefix: str = "",
|
||||
) -> str:
|
||||
"""Generate a cache key from function name and arguments.
|
||||
@@ -154,6 +162,73 @@ def _generate_cache_key(
|
||||
return f"{prefix}{func_name}:{key_hash}"
|
||||
|
||||
|
||||
async def _initialize_backend_if_needed(backend: BytesCacheBackend | CacheBackend[bytes], backend_initialized: bool) -> bool:
|
||||
"""Initialize backend if needed and return new initialized state."""
|
||||
if not backend_initialized and hasattr(backend, "ainit"):
|
||||
await backend.ainit()
|
||||
return True
|
||||
return backend_initialized
|
||||
|
||||
|
||||
def _process_cache_parameters(kwargs: dict[str, object]) -> tuple[dict[str, object], bool]:
|
||||
"""Process cache parameters and return cleaned kwargs and force_refresh flag."""
|
||||
# Process kwargs directly as they're already the right type
|
||||
kwargs_dict = kwargs
|
||||
force_refresh = kwargs_dict.pop("force_refresh", False)
|
||||
return kwargs_dict, bool(force_refresh)
|
||||
|
||||
|
||||
def _generate_cache_key_safe(
|
||||
func_name: str,
|
||||
args: tuple[object, ...],
|
||||
kwargs: dict[str, object],
|
||||
key_prefix: str,
|
||||
key_func: Callable[..., str] | None
|
||||
) -> str | None:
|
||||
"""Generate cache key safely, return None if generation fails."""
|
||||
try:
|
||||
if key_func:
|
||||
# Call key_func directly with args and kwargs
|
||||
return key_func(*args, **kwargs)
|
||||
else:
|
||||
# Convert ParamSpec args/kwargs to tuple/dict for key generation
|
||||
# Generate cache key with args and kwargs directly
|
||||
return _generate_cache_key(
|
||||
func_name,
|
||||
args,
|
||||
kwargs,
|
||||
key_prefix,
|
||||
)
|
||||
except Exception:
|
||||
return None
|
||||
|
||||
|
||||
async def _get_cached_value(backend: BytesCacheBackend | CacheBackend[bytes], cache_key: str) -> object | None:
|
||||
"""Get and deserialize cached value, return None if not found or failed."""
|
||||
try:
|
||||
cached_value = await backend.get(cache_key)
|
||||
if cached_value is not None:
|
||||
try:
|
||||
return pickle.loads(cached_value)
|
||||
except Exception:
|
||||
# If unpickling fails, continue to compute
|
||||
pass
|
||||
except Exception:
|
||||
# If cache get fails, continue to compute
|
||||
pass
|
||||
return None
|
||||
|
||||
|
||||
async def _store_cache_value(backend: BytesCacheBackend | CacheBackend[bytes], cache_key: str, result: object, ttl: int | None) -> None:
|
||||
"""Serialize and store result in cache, ignore failures."""
|
||||
try:
|
||||
serialized = pickle.dumps(result)
|
||||
await backend.set(cache_key, serialized, ttl)
|
||||
except Exception:
|
||||
# If serialization fails, ignore
|
||||
pass
|
||||
|
||||
|
||||
def cache_async(
|
||||
backend: BytesCacheBackend | CacheBackend[bytes] | None = None,
|
||||
ttl: int | None = 3600,
|
||||
@@ -183,61 +258,30 @@ def cache_async(
|
||||
nonlocal backend_initialized
|
||||
|
||||
# Initialize backend if needed
|
||||
if not backend_initialized and hasattr(backend, "ainit"):
|
||||
await backend.ainit()
|
||||
backend_initialized = True
|
||||
backend_initialized = await _initialize_backend_if_needed(backend, backend_initialized)
|
||||
|
||||
# Check for force_refresh parameter
|
||||
# Convert ParamSpecKwargs to dict for processing
|
||||
kwargs_dict = cast("dict[str, object]", kwargs)
|
||||
force_refresh = kwargs_dict.pop("force_refresh", False)
|
||||
# Note: kwargs remains unchanged since we work with kwargs_dict
|
||||
# Process cache parameters and clean kwargs
|
||||
kwargs_dict, force_refresh = _process_cache_parameters(kwargs)
|
||||
|
||||
# Generate cache key (excluding force_refresh from key generation)
|
||||
try:
|
||||
if key_func:
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
cache_key = key_func(
|
||||
*cast("tuple[object, ...]", args),
|
||||
**cast("dict[str, object]", kwargs),
|
||||
)
|
||||
else:
|
||||
# Convert ParamSpec args/kwargs to tuple/dict for key generation
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
cache_key = _generate_cache_key(
|
||||
func.__name__,
|
||||
cast("tuple[object, ...]", args),
|
||||
kwargs_dict,
|
||||
key_prefix,
|
||||
)
|
||||
except Exception:
|
||||
cache_key = _generate_cache_key_safe(
|
||||
func.__name__, args, kwargs_dict, key_prefix, key_func
|
||||
)
|
||||
if cache_key is None:
|
||||
# If key generation fails, skip caching and just execute function
|
||||
return await func(*args, **kwargs)
|
||||
|
||||
# Try to get from cache (unless force_refresh is True)
|
||||
if not force_refresh:
|
||||
try:
|
||||
cached_value = await backend.get(cache_key)
|
||||
if cached_value is not None:
|
||||
try:
|
||||
return pickle.loads(cached_value)
|
||||
except Exception:
|
||||
# If unpickling fails, continue to compute
|
||||
pass
|
||||
except Exception:
|
||||
# If cache get fails, continue to compute
|
||||
pass
|
||||
cached_result = await _get_cached_value(backend, cache_key)
|
||||
if cached_result is not None:
|
||||
return cast(T, cached_result)
|
||||
|
||||
# Compute result
|
||||
result = await func(*args, **kwargs)
|
||||
|
||||
# Store in cache
|
||||
try:
|
||||
serialized = pickle.dumps(result)
|
||||
await backend.set(cache_key, serialized, ttl)
|
||||
except Exception:
|
||||
# If serialization fails, return result anyway
|
||||
pass
|
||||
await _store_cache_value(backend, cache_key, result, ttl)
|
||||
|
||||
return result
|
||||
|
||||
@@ -248,18 +292,14 @@ def cache_async(
|
||||
async def cache_delete(*args: P.args, **kwargs: P.kwargs) -> None:
|
||||
"""Delete specific cache entry."""
|
||||
if key_func:
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
cache_key = key_func(
|
||||
*cast("tuple[object, ...]", args),
|
||||
**cast("dict[str, object]", kwargs),
|
||||
)
|
||||
# Generate cache key with custom function
|
||||
cache_key = key_func(*args, **kwargs)
|
||||
else:
|
||||
# Convert ParamSpec args/kwargs to tuple/dict for cache key generation
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
# Generate cache key with standard function
|
||||
cache_key = _generate_cache_key(
|
||||
func.__name__,
|
||||
cast("tuple[object, ...]", args),
|
||||
cast("dict[str, object]", kwargs),
|
||||
args,
|
||||
kwargs,
|
||||
key_prefix,
|
||||
)
|
||||
await backend.delete(cache_key)
|
||||
@@ -272,7 +312,7 @@ def cache_async(
|
||||
|
||||
|
||||
def cache_sync(
|
||||
backend: BytesCacheBackend | None = None,
|
||||
backend: BytesCacheBackend | CacheBackend[bytes] | None = None,
|
||||
ttl: int | None = 3600,
|
||||
key_prefix: str = "",
|
||||
key_func: Callable[..., str] | None = None,
|
||||
@@ -301,18 +341,14 @@ def cache_sync(
|
||||
def wrapper(*args: P.args, **kwargs: P.kwargs) -> T:
|
||||
# Generate cache key
|
||||
if key_func:
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
cache_key = key_func(
|
||||
*cast("tuple[object, ...]", args),
|
||||
**cast("dict[str, object]", kwargs),
|
||||
)
|
||||
# Generate cache key with custom function
|
||||
cache_key = key_func(*args, **kwargs)
|
||||
else:
|
||||
# Convert ParamSpec args/kwargs to tuple/dict for cache key generation
|
||||
# Cast to avoid pyrefly ParamSpec issues
|
||||
# Generate cache key with standard function
|
||||
cache_key = _generate_cache_key(
|
||||
func.__name__,
|
||||
cast("tuple[object, ...]", args),
|
||||
cast("dict[str, object]", kwargs),
|
||||
args,
|
||||
kwargs,
|
||||
key_prefix,
|
||||
)
|
||||
|
||||
@@ -377,7 +413,7 @@ class _CacheDecoratorManager:
|
||||
# Fast path - decorator already exists
|
||||
if self._cache_decorator is not None:
|
||||
return cast(
|
||||
"Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]",
|
||||
Callable[..., Any],
|
||||
self._cache_decorator,
|
||||
)
|
||||
|
||||
@@ -386,7 +422,7 @@ class _CacheDecoratorManager:
|
||||
# Double-check after acquiring lock
|
||||
if self._cache_decorator is not None:
|
||||
return cast(
|
||||
"Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]",
|
||||
Callable[..., Any],
|
||||
self._cache_decorator,
|
||||
)
|
||||
|
||||
@@ -413,7 +449,7 @@ class _CacheDecoratorManager:
|
||||
# Register the completed decorator
|
||||
self._cache_decorator = decorator
|
||||
return cast(
|
||||
"Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]",
|
||||
Callable[..., Any],
|
||||
decorator,
|
||||
)
|
||||
finally:
|
||||
@@ -439,7 +475,7 @@ class _CacheDecoratorManager:
|
||||
key_prefix=key_prefix,
|
||||
)
|
||||
return cast(
|
||||
"Callable[[Callable[P, Awaitable[T]]], Callable[P, Awaitable[T]]]",
|
||||
Callable[..., Any],
|
||||
self._cache_decorator,
|
||||
)
|
||||
|
||||
@@ -452,7 +488,7 @@ class _CacheDecoratorManager:
|
||||
try:
|
||||
await self._initializing_task
|
||||
except asyncio.CancelledError:
|
||||
pass
|
||||
raise
|
||||
finally:
|
||||
self._initializing_task = None
|
||||
|
||||
|
||||
@@ -13,7 +13,8 @@ import aiofiles
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
from ..errors import ConfigurationError
|
||||
from ..errors import ConfigurationError, ValidationError
|
||||
from ..networking.async_utils import gather_with_concurrency
|
||||
from .base import CacheBackend
|
||||
|
||||
T = TypeVar("T")
|
||||
@@ -49,7 +50,7 @@ class FileCache(CacheBackend):
|
||||
ValueError: If an unsupported serializer is specified.
|
||||
"""
|
||||
if serializer not in ("pickle", "json"):
|
||||
raise ValueError('serializer must be either "pickle" or "json"')
|
||||
raise ValidationError('serializer must be either "pickle" or "json"')
|
||||
|
||||
self.cache_dir = Path(cache_dir)
|
||||
self.default_ttl = default_ttl
|
||||
@@ -245,7 +246,8 @@ class FileCache(CacheBackend):
|
||||
|
||||
# Delete all cache files concurrently
|
||||
if cache_files:
|
||||
await asyncio.gather(
|
||||
await gather_with_concurrency(
|
||||
10, # Reasonable concurrency for file operations
|
||||
*[self._delete_file(file_path) for file_path in cache_files],
|
||||
return_exceptions=True,
|
||||
)
|
||||
@@ -281,9 +283,11 @@ class FileCache(CacheBackend):
|
||||
Dictionary mapping keys to values (or None if not found)
|
||||
"""
|
||||
results: dict[str, bytes | None] = {}
|
||||
# Use asyncio.gather for parallel retrieval
|
||||
values = await asyncio.gather(
|
||||
*[self.get(key) for key in keys], return_exceptions=True
|
||||
# Use gather_with_concurrency for controlled parallel retrieval
|
||||
values = await gather_with_concurrency(
|
||||
10, # Reasonable concurrency for file operations
|
||||
*[self.get(key) for key in keys],
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
for key, value in zip(keys, values, strict=False):
|
||||
@@ -308,14 +312,12 @@ class FileCache(CacheBackend):
|
||||
items: Dictionary mapping keys to values
|
||||
ttl: Time-to-live in seconds (None uses default TTL)
|
||||
"""
|
||||
# Use asyncio.gather for parallel storage with limited concurrency
|
||||
semaphore = asyncio.Semaphore(10) # Limit concurrent file operations
|
||||
# Use gather_with_concurrency for controlled parallel storage
|
||||
async def set_item(key: str, value: bytes) -> None:
|
||||
await self.set(key, value, ttl)
|
||||
|
||||
async def set_with_semaphore(key: str, value: bytes) -> None:
|
||||
async with semaphore:
|
||||
await self.set(key, value, ttl)
|
||||
|
||||
await asyncio.gather(
|
||||
*[set_with_semaphore(key, value) for key, value in items.items()],
|
||||
await gather_with_concurrency(
|
||||
10, # Reasonable concurrency for file operations
|
||||
*[set_item(key, value) for key, value in items.items()],
|
||||
return_exceptions=True,
|
||||
)
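
The calls above assume roughly this behavior from `gather_with_concurrency`: a bounded drop-in for `asyncio.gather` whose first argument caps how many awaitables run at once. A minimal sketch of such a helper; the real implementation lives in `biz_bud.core.networking.async_utils` and may differ:

```python
import asyncio
from collections.abc import Awaitable
from typing import Any


async def gather_with_concurrency(
    limit: int,
    *aws: Awaitable[Any],
    return_exceptions: bool = False,
) -> list[Any]:
    # A semaphore keeps at most `limit` awaitables in flight at a time
    semaphore = asyncio.Semaphore(limit)

    async def bounded(aw: Awaitable[Any]) -> Any:
        async with semaphore:
            return await aw

    return await asyncio.gather(
        *(bounded(aw) for aw in aws), return_exceptions=return_exceptions
    )
```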
@@ -2,10 +2,11 @@
|
||||
|
||||
import asyncio
|
||||
import time
|
||||
from collections import OrderedDict
|
||||
from dataclasses import dataclass
|
||||
from typing import Final
|
||||
|
||||
from .base import CacheBackend
|
||||
from .base import GenericCacheBackend as CacheBackend
|
||||
|
||||
|
||||
@dataclass
|
||||
@@ -16,7 +17,7 @@ class CacheEntry:
|
||||
expires_at: float | None
|
||||
|
||||
|
||||
class InMemoryCache(CacheBackend):
|
||||
class InMemoryCache(CacheBackend[bytes]):
|
||||
"""In-memory cache backend using a dictionary."""
|
||||
|
||||
def __init__(self, max_size: int | None = None) -> None:
|
||||
@@ -25,7 +26,7 @@ class InMemoryCache(CacheBackend):
|
||||
Args:
|
||||
max_size: Maximum number of entries (None for unlimited)
|
||||
"""
|
||||
self._cache: dict[str, CacheEntry] = {}
|
||||
self._cache: OrderedDict[str, CacheEntry] = OrderedDict()
|
||||
self._lock = asyncio.Lock()
|
||||
self.max_size: Final = max_size
|
||||
|
||||
@@ -42,6 +43,8 @@ class InMemoryCache(CacheBackend):
|
||||
del self._cache[key]
|
||||
return None
|
||||
|
||||
# Mark as recently used by moving to end
|
||||
self._cache.move_to_end(key)
|
||||
return entry.value
|
||||
|
||||
async def set(
|
||||
@@ -53,15 +56,14 @@ class InMemoryCache(CacheBackend):
|
||||
"""Store value in cache."""
|
||||
expires_at = time.time() + ttl if ttl is not None else None
|
||||
async with self._lock:
|
||||
# Enforce max size by removing oldest entry
|
||||
# Enforce max size by removing least recently used entry
|
||||
if (
|
||||
self.max_size
|
||||
and len(self._cache) >= self.max_size
|
||||
and key not in self._cache
|
||||
):
|
||||
# Remove the first (oldest) entry
|
||||
oldest_key = next(iter(self._cache))
|
||||
del self._cache[oldest_key]
|
||||
# Remove the least recently used entry (first in OrderedDict)
|
||||
self._cache.popitem(last=False)
|
||||
|
||||
self._cache[key] = CacheEntry(value=value, expires_at=expires_at)
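
The switch to `OrderedDict` with `move_to_end` on reads and `popitem(last=False)` on eviction changes the policy from insertion order to least-recently-used. A small sketch of the resulting behavior (values are `bytes`, matching `InMemoryCache(CacheBackend[bytes])`):

```python
import asyncio


async def demo_lru_eviction() -> None:
    cache = InMemoryCache(max_size=2)

    await cache.set("a", b"1")
    await cache.set("b", b"2")

    # Reading "a" marks it recently used, so "b" becomes the LRU entry
    assert await cache.get("a") == b"1"

    # Inserting a third key evicts "b", not "a"
    await cache.set("c", b"3")
    assert await cache.get("b") is None
    assert await cache.get("a") == b"1"


asyncio.run(demo_lru_eviction())
```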
@@ -140,4 +140,15 @@ class RedisCache(CacheBackend):
|
||||
"""Close Redis connection."""
|
||||
if self._client:
|
||||
await self._client.close()
|
||||
self._client = None
|
||||
async def health_check(self) -> bool:
|
||||
"""Check if Redis is available."""
|
||||
try:
|
||||
return False if self._client is None else await self._client.ping()
|
||||
except Exception:
|
||||
return False
|
||||
|
||||
def get_client(self) -> redis.Redis:
|
||||
"""Get the Redis client."""
|
||||
if self._client is None:
|
||||
raise ConfigurationError("Redis client not initialized")
|
||||
return self._client
|
||||
|
||||
@@ -13,22 +13,43 @@ Key principles:
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from typing import Any
|
||||
|
||||
# Import for type hints
|
||||
from typing import TYPE_CHECKING, Any, TypeVar, cast
|
||||
|
||||
from biz_bud.core.errors import ConfigurationError, ValidationError
|
||||
from biz_bud.core.networking.async_utils import gather_with_concurrency
|
||||
from biz_bud.core.types import CleanupFunction, CleanupFunctionWithArgs
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from biz_bud.core.config.schemas import AppConfig
|
||||
from biz_bud.services.base import BaseService
|
||||
|
||||
T = TypeVar("T", bound="BaseService[Any]")
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
|
||||
class CleanupRegistry:
|
||||
"""Registry for cleanup functions to break circular dependencies."""
|
||||
"""Registry for cleanup functions and service lifecycle management.
|
||||
|
||||
def __init__(self) -> None:
|
||||
"""Initialize the cleanup registry."""
|
||||
This registry handles both cleanup functions and service initialization
|
||||
to provide comprehensive lifecycle management capabilities. It integrates
|
||||
the functionality that was previously split between CleanupRegistry
|
||||
and LifecycleManager for better separation of concerns.
|
||||
"""
|
||||
|
||||
def __init__(self, config: "AppConfig | None" = None) -> None:
|
||||
"""Initialize the cleanup registry.
|
||||
|
||||
Args:
|
||||
config: Optional application configuration for service initialization.
|
||||
"""
|
||||
self._cleanup_functions: dict[str, CleanupFunction] = {}
|
||||
self._cleanup_functions_with_args: dict[str, CleanupFunctionWithArgs] = {}
|
||||
self._lock = asyncio.Lock()
|
||||
self._config = config
|
||||
|
||||
def register_cleanup(self, name: str, cleanup_func: CleanupFunction) -> None:
|
||||
"""Register a cleanup function.
|
||||
@@ -62,7 +83,7 @@ class CleanupRegistry:
|
||||
KeyError: If the cleanup function is not registered
|
||||
"""
|
||||
if name not in self._cleanup_functions:
|
||||
raise KeyError(f"Cleanup function '{name}' not registered")
|
||||
raise ValidationError(f"Cleanup function '{name}' not registered")
|
||||
|
||||
logger.debug(f"Calling cleanup function: {name}")
|
||||
await self._cleanup_functions[name]()
|
||||
@@ -84,7 +105,7 @@ class CleanupRegistry:
|
||||
logger.debug(f"Calling cleanup function with args: {name}")
|
||||
await self._cleanup_functions_with_args[name](*args, **kwargs)
|
||||
else:
|
||||
raise KeyError(f"Cleanup function with args '{name}' not registered")
|
||||
raise ValidationError(f"Cleanup function with args '{name}' not registered")
|
||||
|
||||
def is_registered(self, name: str) -> bool:
|
||||
"""Check if a cleanup function is registered.
|
||||
@@ -140,6 +161,317 @@ class CleanupRegistry:
|
||||
else:
|
||||
logger.error(f"Cleanup failed with {len(errors)} errors")
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# Service Lifecycle Management (integrated from LifecycleManager)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
def set_config(self, config: "AppConfig") -> None:
|
||||
"""Set the configuration for service initialization.
|
||||
|
||||
Args:
|
||||
config: Application configuration to use for service creation.
|
||||
"""
|
||||
self._config = config
|
||||
|
||||
async def create_service(self, service_class: "type[T]") -> "T":
|
||||
"""Create and initialize a service instance with comprehensive error handling.
|
||||
|
||||
This method handles the actual service creation and initialization
|
||||
with proper timeout protection and error recovery.
|
||||
|
||||
Args:
|
||||
service_class: The service class to instantiate.
|
||||
|
||||
Returns:
|
||||
An initialized service instance.
|
||||
|
||||
Raises:
|
||||
RuntimeError: If no config is available for service creation.
|
||||
asyncio.TimeoutError: If service initialization times out.
|
||||
asyncio.CancelledError: If service initialization is cancelled.
|
||||
Exception: If service initialization fails for any other reason.
|
||||
"""
|
||||
if not self._config:
|
||||
raise ConfigurationError("No configuration available for service creation")
|
||||
|
||||
logger.debug(f"Creating service: {service_class.__name__}")
|
||||
|
||||
try:
|
||||
# Create service instance
|
||||
service = service_class(self._config)
|
||||
|
||||
# Initialize with timeout (reduced from 60 to 30 seconds for faster startup)
|
||||
timeout = getattr(self._config, "service_init_timeout", 30.0)
|
||||
if not (isinstance(timeout, (int, float)) and timeout > 0):
|
||||
raise ConfigurationError(f"Invalid service_init_timeout: {timeout!r}. Must be a positive number.")
|
||||
await asyncio.wait_for(service.initialize(), timeout=timeout)
|
||||
|
||||
logger.debug(f"Successfully initialized service: {service_class.__name__}")
|
||||
return service
|
||||
|
||||
except asyncio.TimeoutError:
|
||||
logger.error(
|
||||
f"Service initialization timed out for {service_class.__name__}"
|
||||
)
|
||||
raise
|
||||
except asyncio.CancelledError:
|
||||
logger.warning(
|
||||
f"Service initialization cancelled for {service_class.__name__}"
|
||||
)
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Service initialization failed for {service_class.__name__}: {e}"
|
||||
)
|
||||
raise
|
||||
|
||||
def partition_results(
|
||||
self, results: dict["type[BaseService[Any]]", object]
|
||||
) -> tuple[dict["type[BaseService[Any]]", "BaseService[Any]"], list[str]]:
|
||||
"""Partition initialization results into succeeded and failed services.
|
||||
|
||||
Args:
|
||||
results: Dictionary mapping service classes to initialization results.
|
||||
|
||||
Returns:
|
||||
Tuple of (succeeded_services, failed_service_names).
|
||||
"""
|
||||
succeeded = {
|
||||
k: cast("BaseService[Any]", v)
|
||||
for k, v in results.items()
|
||||
if not isinstance(v, Exception)
|
||||
}
|
||||
failed = [k.__name__ for k, v in results.items() if isinstance(v, Exception)]
|
||||
return succeeded, failed
|
||||
|
||||
async def initialize_services(
|
||||
self, service_classes: list["type[BaseService[Any]]"]
|
||||
) -> dict["type[BaseService[Any]]", "BaseService[Any]"]:
|
||||
"""Initialize multiple services concurrently for faster startup.
|
||||
|
||||
This method optimizes startup time by initializing multiple services
|
||||
in parallel rather than sequentially. Services that don't depend on
|
||||
each other can be initialized concurrently.
|
||||
|
||||
Args:
|
||||
service_classes: List of service classes to initialize.
|
||||
|
||||
Returns:
|
||||
Dictionary mapping service class to initialized service instance.
|
||||
|
||||
Raises:
|
||||
Exception: If any critical service fails to initialize.
|
||||
"""
|
||||
if not self._config:
|
||||
raise ConfigurationError("No configuration available for service initialization")
|
||||
|
||||
logger.info(f"Initializing {len(service_classes)} services concurrently")
|
||||
|
||||
# Initialize all services concurrently
|
||||
raw_results = await gather_with_concurrency(
|
||||
10, # Reasonable concurrency limit for service initialization
|
||||
*(self.create_service(cls) for cls in service_classes),
|
||||
return_exceptions=True
|
||||
)
|
||||
|
||||
# Map results back to service classes
|
||||
results: dict["type[BaseService[Any]]", object] = dict(
|
||||
zip(service_classes, raw_results)
|
||||
)
|
||||
|
||||
# Partition results into succeeded and failed
|
||||
succeeded, failed = self.partition_results(results)
|
||||
|
||||
# Log results
|
||||
for cls in succeeded:
|
||||
logger.info(f"Successfully initialized {cls.__name__}")
|
||||
if failed:
|
||||
logger.error(f"Failed services: {', '.join(failed)}")
|
||||
|
||||
return succeeded
|
||||
|
||||
async def initialize_critical_services(
|
||||
self,
|
||||
) -> dict["type[BaseService[Any]]", "BaseService[Any]"]:
|
||||
"""Initialize only critical services required for basic application functionality.
|
||||
|
||||
Critical services are those needed for core operations:
|
||||
- Database (PostgresStore) for data persistence
|
||||
- Cache (RedisCacheBackend) for performance
|
||||
|
||||
Non-critical services like vector store, extraction services are initialized lazily.
|
||||
This optimizes startup time by deferring heavy initialization until needed.
|
||||
|
||||
Returns:
|
||||
Dictionary of initialized critical services.
|
||||
"""
|
||||
# Import here to avoid circular imports
|
||||
try:
|
||||
from biz_bud.services.db import PostgresStore
|
||||
from biz_bud.services.redis_backend import RedisCacheBackend
|
||||
|
||||
critical_services: list["type[BaseService[Any]]"] = [
|
||||
PostgresStore,
|
||||
RedisCacheBackend,
|
||||
]
|
||||
|
||||
logger.info("Initializing critical services for faster startup")
|
||||
services = await self.initialize_services(critical_services)
|
||||
logger.info("Critical services initialized successfully")
|
||||
|
||||
return services
|
||||
except ImportError as e:
|
||||
logger.warning(f"Could not import critical services: {e}")
|
||||
return {}
|
||||
|
||||
async def cleanup_services(
|
||||
self, services: dict["type[BaseService[Any]]", "BaseService[Any]"]
|
||||
) -> None:
|
||||
"""Cleanup multiple services with comprehensive error handling.
|
||||
|
||||
This method ensures all services are properly shut down,
|
||||
even if some cleanup operations fail. It uses timeout protection
|
||||
and concurrent cleanup for efficiency.
|
||||
|
||||
Args:
|
||||
services: Dictionary of services to cleanup.
|
||||
"""
|
||||
if not services:
|
||||
return
|
||||
|
||||
logger.info(f"Cleaning up {len(services)} services")
|
||||
|
||||
async with self._lock:
|
||||
# Create cleanup tasks with individual timeout handling
|
||||
cleanup_tasks = []
|
||||
for service_class, service in services.items():
|
||||
logger.debug(f"Cleaning up {service_class.__name__}")
|
||||
|
||||
async def cleanup_with_timeout(service_cls: "type[BaseService[Any]]", svc: "BaseService[Any]") -> tuple["type[BaseService[Any]]", Exception | None]:
|
||||
"""Cleanup a single service with timeout and error handling."""
|
||||
try:
|
||||
await asyncio.wait_for(svc.cleanup(), timeout=5.0) # Reduced timeout for tests
|
||||
logger.debug(f"Successfully cleaned up {service_cls.__name__}")
|
||||
return service_cls, None
|
||||
except asyncio.TimeoutError as e:
|
||||
logger.error(f"Cleanup timed out for {service_cls.__name__}")
|
||||
return service_cls, e
|
||||
except Exception as e:
|
||||
logger.error(f"Cleanup failed for {service_cls.__name__}: {e}")
|
||||
return service_cls, e
|
||||
|
||||
task = asyncio.create_task(cleanup_with_timeout(service_class, service))
|
||||
cleanup_tasks.append(task)
|
||||
|
||||
# Wait for all cleanups to complete
|
||||
if cleanup_tasks:
|
||||
results = await gather_with_concurrency(5, *cleanup_tasks, return_exceptions=True)
|
||||
|
||||
# Log final results
|
||||
success_count = 0
|
||||
for result in results:
|
||||
if isinstance(result, Exception):
|
||||
logger.error(f"Cleanup task failed with exception: {result}")
|
||||
elif isinstance(result, tuple) and result[1] is None:
|
||||
success_count += 1
|
||||
|
||||
logger.info(f"Cleanup completed: {success_count}/{len(services)} services cleaned up successfully")
|
||||
|
||||
async def cleanup_with_cancellation_handling(
|
||||
self,
|
||||
services: dict["type[BaseService[Any]]", "BaseService[Any]"],
|
||||
initializing_tasks: dict["type[BaseService[Any]]", "asyncio.Task[BaseService[Any]]"],
|
||||
) -> None:
|
||||
"""Cleanup services and handle ongoing initialization tasks.
|
||||
|
||||
This method provides comprehensive cleanup that handles both
|
||||
initialized services and ongoing initialization tasks.
|
||||
|
||||
Args:
|
||||
services: Dictionary of initialized services to cleanup.
|
||||
initializing_tasks: Dictionary of ongoing initialization tasks.
|
||||
"""
|
||||
logger.info("Starting comprehensive cleanup with cancellation handling")
|
||||
|
||||
# Cancel and wait for any ongoing initializations
|
||||
if initializing_tasks:
|
||||
init_tasks: list["asyncio.Task[BaseService[Any]]"] = list(
|
||||
initializing_tasks.values()
|
||||
)
|
||||
|
||||
# Cancel all ongoing initialization tasks
|
||||
for task in init_tasks:
|
||||
if not task.done():
|
||||
task.cancel()
|
||||
|
||||
# Wait for all tasks to complete (cancelled or otherwise)
|
||||
init_results = await gather_with_concurrency(10, *init_tasks, return_exceptions=True)
|
||||
|
||||
# Log any initialization failures during shutdown
|
||||
for service_class, (task, result) in zip(
|
||||
initializing_tasks.keys(), zip(init_tasks, init_results)
|
||||
):
|
||||
if isinstance(result, asyncio.CancelledError):
|
||||
logger.info(
|
||||
f"Cancelled pending initialization for {service_class.__name__} during cleanup"
|
||||
)
|
||||
elif isinstance(result, Exception):
|
||||
logger.warning(
|
||||
f"Initialization failed during cleanup for {service_class.__name__}: {result}"
|
||||
)
|
||||
|
||||
# Now cleanup initialized services (this method has its own lock)
|
||||
await self.cleanup_services(services)
|
||||
|
||||
logger.info("Comprehensive cleanup completed")
|
||||
|
||||
async def create_service_with_dependencies(
|
||||
self,
|
||||
service_class: "type[BaseService[Any]]",
|
||||
dependencies: dict[str, Any],
|
||||
) -> "BaseService[Any]":
|
||||
"""Create a service instance with explicit dependency injection.
|
||||
|
||||
This method supports creating services that require specific
|
||||
dependencies to be injected during construction.
|
||||
|
||||
Args:
|
||||
service_class: The service class to instantiate.
|
||||
dependencies: Dictionary of dependencies to inject.
|
||||
|
||||
Returns:
|
||||
An initialized service instance with dependencies injected.
|
||||
|
||||
Example:
|
||||
# Create extraction service with LLM and vector store dependencies
|
||||
extraction_service = await registry.create_service_with_dependencies(
|
||||
SemanticExtractionService,
|
||||
{
|
||||
"llm_client": llm_client,
|
||||
"vector_store": vector_store
|
||||
}
|
||||
)
|
||||
"""
|
||||
if not self._config:
|
||||
raise ConfigurationError("No configuration available for service creation")
|
||||
|
||||
logger.info(f"Creating {service_class.__name__} with dependencies")
|
||||
|
||||
try:
|
||||
# Create service instance with dependencies
|
||||
service = service_class(app_config=self._config, **dependencies)
|
||||
|
||||
# Initialize with timeout
|
||||
timeout = getattr(self._config, "service_init_timeout", 30.0)
|
||||
await asyncio.wait_for(service.initialize(), timeout=timeout)
|
||||
|
||||
logger.info(f"Successfully initialized {service_class.__name__} with dependencies")
|
||||
return service
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to create {service_class.__name__} with dependencies: {e}")
|
||||
raise
|
||||
|
||||
|
||||
# Global cleanup registry instance
|
||||
_cleanup_registry: CleanupRegistry | None = None
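
Taken together, the registry now covers the whole service lifecycle. A hedged sketch of the intended call pattern, using only the methods shown above; the application work in the `try` block and the surrounding wiring are placeholders:

```python
async def run_app() -> None:
    config = load_config()  # AppConfig loaded via biz_bud.core.config
    registry = CleanupRegistry(config)

    # Bring up only what the app needs to start serving requests
    services = await registry.initialize_critical_services()
    try:
        ...  # application work; lazy services are created on demand
    finally:
        # Shut everything down with per-service timeouts and error logging
        await registry.cleanup_services(services)
```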
@@ -51,7 +51,14 @@ LangGraph Integration:
|
||||
"""
|
||||
|
||||
from .constants import * # re-export constants # noqa: F403
|
||||
from .loader import ConfigOverride, load_config, load_config_async
|
||||
from .loader import (
|
||||
ConfigOverride,
|
||||
build_llm_config,
|
||||
get_context_mappings,
|
||||
load_config,
|
||||
load_config_async,
|
||||
resolve_llm_profile,
|
||||
)
|
||||
from .schemas import AppConfig
|
||||
|
||||
# Legacy aliases for backward compatibility
|
||||
@@ -64,6 +71,10 @@ __all__ = [
|
||||
"load_config_async",
|
||||
"ConfigOverride",
|
||||
"AppConfig",
|
||||
# LLM configuration functions
|
||||
"resolve_llm_profile",
|
||||
"build_llm_config",
|
||||
"get_context_mappings",
|
||||
# Legacy aliases
|
||||
"resolve_app_config_with_overrides",
|
||||
"resolve_app_config_with_overrides_async",
|
||||
|
||||
@@ -68,6 +68,26 @@ ENV_PREFIX = "BIZBUDDY_"
|
||||
MAX_RECURSION_DEPTH = 10
|
||||
CONFIG_MAX_SIZE_MB = 10
|
||||
|
||||
# =============================================================================
|
||||
# STATE KEY CONSTANTS
|
||||
# =============================================================================
|
||||
|
||||
# Common state keys used throughout the framework
|
||||
STATE_KEY_MESSAGES: Final[str] = "messages"
|
||||
STATE_KEY_ERRORS: Final[str] = "errors"
|
||||
STATE_KEY_CONFIG: Final[str] = "config"
|
||||
STATE_KEY_QUERY: Final[str] = "query"
|
||||
STATE_KEY_USER_QUERY: Final[str] = "user_query"
|
||||
STATE_KEY_SEARCH_RESULTS: Final[str] = "search_results"
|
||||
STATE_KEY_SYNTHESIS: Final[str] = "synthesis"
|
||||
STATE_KEY_FINAL_RESPONSE: Final[str] = "final_response"
|
||||
STATE_KEY_TOOL_CALLS: Final[str] = "tool_calls"
|
||||
STATE_KEY_INPUT_URL: Final[str] = "input_url"
|
||||
STATE_KEY_URL: Final[str] = "url"
|
||||
STATE_KEY_SERVICE_FACTORY: Final[str] = "service_factory"
|
||||
STATE_KEY_EXTRACTED_INFO: Final[str] = "extracted_info"
|
||||
STATE_KEY_SOURCES: Final[str] = "sources"
|
||||
|
||||
# =============================================================================
|
||||
# AGENT FRAMEWORK CONSTANTS
|
||||
# =============================================================================
|
||||
@@ -302,6 +322,49 @@ DOCUMENT_PATTERNS = (
|
||||
".ods",
|
||||
)
|
||||
|
||||
# File extension patterns for URL analysis and content classification
|
||||
PDF_EXTENSIONS = (".pdf",)
|
||||
IMAGE_EXTENSIONS = (
|
||||
".jpg",
|
||||
".jpeg",
|
||||
".png",
|
||||
".gif",
|
||||
".webp",
|
||||
".svg",
|
||||
".bmp",
|
||||
".ico",
|
||||
".tiff",
|
||||
".tif",
|
||||
)
|
||||
VIDEO_EXTENSIONS = (
|
||||
".mp4",
|
||||
".webm",
|
||||
".avi",
|
||||
".mov",
|
||||
".wmv",
|
||||
".flv",
|
||||
".mkv",
|
||||
".m4v",
|
||||
".3gp",
|
||||
".ogv",
|
||||
)
|
||||
AUDIO_EXTENSIONS = (".mp3", ".wav", ".flac", ".aac", ".ogg", ".wma", ".m4a")
|
||||
ARCHIVE_EXTENSIONS = (".zip", ".rar", ".7z", ".tar", ".gz", ".bz2", ".xz")
|
||||
CODE_EXTENSIONS = (
|
||||
".py",
|
||||
".js",
|
||||
".html",
|
||||
".css",
|
||||
".java",
|
||||
".cpp",
|
||||
".c",
|
||||
".go",
|
||||
".rs",
|
||||
".php",
|
||||
".rb",
|
||||
)
|
||||
SITEMAP_EXTENSIONS = (".xml", ".txt")
|
||||
|
||||
REPORT_PATTERNS = (
|
||||
"/report/",
|
||||
"/analysis/",
|
||||
@@ -392,6 +455,21 @@ __all__ = [
|
||||
"ENV_PREFIX",
|
||||
"MAX_RECURSION_DEPTH",
|
||||
"CONFIG_MAX_SIZE_MB",
|
||||
# State key constants
|
||||
"STATE_KEY_MESSAGES",
|
||||
"STATE_KEY_ERRORS",
|
||||
"STATE_KEY_CONFIG",
|
||||
"STATE_KEY_QUERY",
|
||||
"STATE_KEY_USER_QUERY",
|
||||
"STATE_KEY_SEARCH_RESULTS",
|
||||
"STATE_KEY_SYNTHESIS",
|
||||
"STATE_KEY_FINAL_RESPONSE",
|
||||
"STATE_KEY_TOOL_CALLS",
|
||||
"STATE_KEY_INPUT_URL",
|
||||
"STATE_KEY_URL",
|
||||
"STATE_KEY_SERVICE_FACTORY",
|
||||
"STATE_KEY_EXTRACTED_INFO",
|
||||
"STATE_KEY_SOURCES",
|
||||
# Agent framework
|
||||
"AgentType",
|
||||
"CapabilityNames",
|
||||
@@ -425,6 +503,13 @@ __all__ = [
|
||||
"PROCUREMENT_HTML_PATTERNS",
|
||||
"MARKETPLACE_PATTERNS",
|
||||
"DOCUMENT_PATTERNS",
|
||||
"PDF_EXTENSIONS",
|
||||
"IMAGE_EXTENSIONS",
|
||||
"VIDEO_EXTENSIONS",
|
||||
"AUDIO_EXTENSIONS",
|
||||
"ARCHIVE_EXTENSIONS",
|
||||
"CODE_EXTENSIONS",
|
||||
"SITEMAP_EXTENSIONS",
|
||||
"REPORT_PATTERNS",
|
||||
"AUTHORITY_DOMAINS",
|
||||
"HIGH_CREDIBILITY_TERMS",
|
||||
|
||||
@@ -25,6 +25,9 @@ from dotenv import dotenv_values
|
||||
from pydantic import ValidationError
|
||||
|
||||
from biz_bud.core.config.schemas import AppConfig
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# 1. TypedDict for node-level overrides (highest priority)
|
||||
@@ -317,7 +320,174 @@ def _deep_set(obj: dict[str, Any], dotted_key: str, value: str) -> None:
|
||||
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# 5. Async version for LangGraph compatibility
|
||||
# 5. LLM Configuration Resolution (integrated from factory)
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
|
||||
def _build_context_profile_map() -> dict[str, str]:
|
||||
"""Build the mapping from node contexts to default LLM profiles.
|
||||
|
||||
Returns:
|
||||
Dictionary mapping context names to profile names.
|
||||
"""
|
||||
return {
|
||||
# Conversational and general-purpose nodes
|
||||
"call_model": "large",
|
||||
"chat": "large",
|
||||
"conversation": "large",
|
||||
# Content generation and synthesis nodes
|
||||
"synthesize": "large",
|
||||
"summary": "large",
|
||||
"generate": "large",
|
||||
"compose": "large",
|
||||
# Analytical and reasoning nodes
|
||||
"validation": "reasoning",
|
||||
"logic": "reasoning",
|
||||
"analysis": "reasoning",
|
||||
"planning": "reasoning",
|
||||
"strategy": "reasoning",
|
||||
# Information processing nodes
|
||||
"extraction": "small",
|
||||
"parsing": "small",
|
||||
"research": "small",
|
||||
"search": "small",
|
||||
"categorize": "small",
|
||||
# Lightweight processing nodes
|
||||
"classify": "tiny",
|
||||
"filter": "tiny",
|
||||
"simple": "tiny",
|
||||
}
|
||||
|
||||
|
||||
def resolve_llm_profile(
|
||||
node_context: str,
|
||||
profile_override: str | None = None,
|
||||
config: AppConfig | None = None
|
||||
) -> str:
|
||||
"""Resolve the LLM profile using context and override hierarchy.
|
||||
|
||||
Args:
|
||||
node_context: Node context for default profile selection.
|
||||
profile_override: Optional explicit profile override.
|
||||
config: AppConfig instance for default profile lookup.
|
||||
|
||||
Returns:
|
||||
The resolved LLM profile name.
|
||||
|
||||
Example:
|
||||
# Use explicit override
|
||||
profile = resolve_llm_profile("call_model", "reasoning")
|
||||
# Returns: "reasoning"
|
||||
|
||||
# Use context mapping
|
||||
profile = resolve_llm_profile("extraction")
|
||||
# Returns: "small"
|
||||
|
||||
# Fall back to config default
|
||||
profile = resolve_llm_profile("unknown_context", config=config)
|
||||
# Returns: config default or "large"
|
||||
"""
|
||||
# If explicit override provided, use it
|
||||
if profile_override:
|
||||
logger.debug(f"Using explicit LLM profile override: {profile_override}")
|
||||
return str(profile_override)
|
||||
|
||||
# Get profile from context mapping
|
||||
context_profile_map = _build_context_profile_map()
|
||||
if context_profile := context_profile_map.get(node_context):
|
||||
logger.debug(
|
||||
f"Using context-based profile '{context_profile}' for node '{node_context}'"
|
||||
)
|
||||
return context_profile
|
||||
|
||||
# Fall back to config default or hardcoded default
|
||||
if config and hasattr(config, 'agent_config') and config.agent_config:
|
||||
default_profile = getattr(config.agent_config, "default_llm_profile", "large")
|
||||
else:
|
||||
default_profile = "large"
|
||||
|
||||
logger.debug(
|
||||
f"No context mapping for '{node_context}', using default: {default_profile}"
|
||||
)
|
||||
return str(default_profile)
|
||||
|
||||
|
||||
def build_llm_config(
|
||||
node_context: str,
|
||||
config: AppConfig | None = None,
|
||||
profile_override: str | None = None,
|
||||
temperature_override: float | None = None,
|
||||
max_tokens_override: int | None = None,
|
||||
**kwargs: Any,
|
||||
) -> dict[str, Any]:
|
||||
"""Build complete LLM configuration with overrides applied.
|
||||
|
||||
Args:
|
||||
node_context: The node context identifier for default profile selection.
|
||||
config: AppConfig instance for default profile lookup.
|
||||
profile_override: Override the default LLM profile for this node.
|
||||
temperature_override: Override the temperature setting.
|
||||
max_tokens_override: Override the maximum number of tokens.
|
||||
**kwargs: Additional LLM configuration parameters.
|
||||
|
||||
Returns:
|
||||
Dictionary containing resolved LLM configuration.
|
||||
|
||||
Example:
|
||||
config = load_config()
|
||||
llm_config = build_llm_config(
|
||||
node_context="synthesis",
|
||||
config=config,
|
||||
temperature_override=0.7,
|
||||
max_tokens_override=2000,
|
||||
top_p=0.9
|
||||
)
|
||||
# Returns: {
|
||||
# "profile": "large",
|
||||
# "temperature": 0.7,
|
||||
# "max_tokens": 2000,
|
||||
# "top_p": 0.9
|
||||
# }
|
||||
"""
|
||||
# Resolve the LLM profile
|
||||
resolved_profile = resolve_llm_profile(node_context, profile_override, config)
|
||||
|
||||
# Build the configuration dictionary
|
||||
llm_config: dict[str, Any] = {"profile": resolved_profile}
|
||||
|
||||
# Apply temperature override if provided
|
||||
if temperature_override is not None:
|
||||
llm_config["temperature"] = float(temperature_override)
|
||||
|
||||
# Apply max_tokens override if provided
|
||||
if max_tokens_override is not None:
|
||||
llm_config["max_tokens"] = int(max_tokens_override)
|
||||
|
||||
# Apply any additional kwargs
|
||||
if kwargs:
|
||||
llm_config |= kwargs
|
||||
|
||||
logger.debug(
|
||||
f"Built LLM config for node context '{node_context}': {llm_config}"
|
||||
)
|
||||
|
||||
return llm_config
|
||||
|
||||
|
||||
def get_context_mappings() -> dict[str, str]:
|
||||
"""Get the current context-to-profile mappings.
|
||||
|
||||
Returns:
|
||||
Dictionary mapping context names to profile names.
|
||||
|
||||
This function is useful for debugging, testing, and documentation
|
||||
purposes to understand how contexts are mapped to profiles.
|
||||
"""
|
||||
return _build_context_profile_map().copy()
|
||||
|
||||
|
||||
# ------------------------------------------------------------------
|
||||
# 6. Async version for LangGraph compatibility
|
||||
# ------------------------------------------------------------------
|
||||
|
||||
|
||||
|
||||
@@ -81,12 +81,12 @@ class InputStateModel(BaseModel):
|
||||
|
||||
"""
|
||||
|
||||
query: str | None = Field(None, description="The user-provided query string.")
|
||||
query: str | None = Field(default=None, description="The user-provided query string.")
|
||||
organization: list[OrganizationModel] | None = Field(
|
||||
None, description="List of organizations associated with the query."
|
||||
default=None, description="List of organizations associated with the query."
|
||||
)
|
||||
catalog: CatalogConfig | None = Field(
|
||||
None, description="Catalog/menu configuration."
|
||||
default=None, description="Catalog/menu configuration."
|
||||
)
|
||||
|
||||
|
||||
@@ -105,18 +105,18 @@ class AppConfig(BaseModel):
|
||||
"""
|
||||
|
||||
DEFAULT_QUERY: str = Field(
|
||||
"You are a helpful AI assistant. Please help me with my request.",
|
||||
default="You are a helpful AI assistant. Please help me with my request.",
|
||||
description="Default user query fallback.",
|
||||
)
|
||||
DEFAULT_GREETING_MESSAGE: str = Field(
|
||||
"Hello! I'm your AI assistant. How can I help you with your market research today?",
|
||||
default="Hello! I'm your AI assistant. How can I help you with your market research today?",
|
||||
description="Default greeting message.",
|
||||
)
|
||||
inputs: InputStateModel | None = Field(
|
||||
None, description="Input state configuration."
|
||||
default=None, description="Input state configuration."
|
||||
)
|
||||
tools: ToolsConfigModel | None = Field(
|
||||
None, description="Tools configuration schema."
|
||||
default=None, description="Tools configuration schema."
|
||||
)
|
||||
logging: LoggingConfig = Field(
|
||||
default_factory=lambda: LoggingConfig(log_level="INFO"),
|
||||
|
||||
@@ -19,9 +19,9 @@ class AgentConfig(BaseModel):
|
||||
|
||||
"""
|
||||
|
||||
max_loops: int = Field(3, description="Maximum number of reasoning loops allowed.")
|
||||
max_loops: int = Field(default=3, description="Maximum number of reasoning loops allowed.")
|
||||
recursion_limit: int = Field(
|
||||
1000,
|
||||
default=1000,
|
||||
description=(
|
||||
"LangGraph recursion limit for agent execution. "
|
||||
"Recommended upper bound: 10,000. "
|
||||
@@ -111,8 +111,8 @@ class FeatureFlagsModel(BaseModel):
|
||||
class TelemetryConfigModel(BaseModel):
|
||||
"""Pydantic model for telemetry configuration."""
|
||||
|
||||
enabled: bool | None = Field(None, description="Enable telemetry.")
|
||||
enable_telemetry: bool = Field(False, description="Enable telemetry collection.")
|
||||
enabled: bool | None = Field(default=None, description="Enable telemetry.")
|
||||
enable_telemetry: bool = Field(default=False, description="Enable telemetry collection.")
|
||||
collect_performance_metrics: bool = Field(
|
||||
False, description="Collect performance metrics."
|
||||
)
|
||||
@@ -162,11 +162,11 @@ class TelemetryConfigModel(BaseModel):
|
||||
class RateLimitConfigModel(BaseModel):
|
||||
"""Pydantic model for rate limit configuration."""
|
||||
|
||||
web_max_requests: int | None = Field(None, description="HTTP requests per window.")
|
||||
web_max_requests: int | None = Field(default=None, description="HTTP requests per window.")
|
||||
web_time_window: float | None = Field(
|
||||
None, description="Web request window duration (seconds)."
|
||||
)
|
||||
llm_max_requests: int | None = Field(None, description="LLM API calls per window.")
|
||||
llm_max_requests: int | None = Field(default=None, description="LLM API calls per window.")
|
||||
llm_time_window: float | None = Field(
|
||||
None, description="LLM call window duration (seconds)."
|
||||
)
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
from pydantic import BaseModel, Field, field_validator
|
||||
|
||||
from biz_bud.core.errors import ParameterValidationError
|
||||
from biz_bud.core.types import LLMProfile
|
||||
|
||||
|
||||
@@ -22,20 +23,20 @@ class LLMProfileConfig(BaseModel):
|
||||
"""
|
||||
|
||||
name: str = Field(
|
||||
"openai/gpt-4o", description="The identifier or model name for the LLM profile."
|
||||
default="openai/gpt-4o", description="The identifier or model name for the LLM profile."
|
||||
)
|
||||
temperature: float = Field(
|
||||
0.7, description="Sampling temperature for the model (0.0-2.0)."
|
||||
default=0.7, description="Sampling temperature for the model (0.0-2.0)."
|
||||
)
|
||||
max_tokens: int | None = Field(
|
||||
None, description="Maximum number of tokens allowed in responses."
|
||||
default=None, description="Maximum number of tokens allowed in responses."
|
||||
)
|
||||
input_token_limit: int = Field(
|
||||
100000, description="Maximum input tokens for processing."
|
||||
default=100000, description="Maximum input tokens for processing."
|
||||
)
|
||||
chunk_size: int = Field(4000, description="Default chunk size for large content.")
|
||||
chunk_size: int = Field(default=4000, description="Default chunk size for large content.")
|
||||
chunk_overlap: int = Field(
|
||||
200, description="Overlap between chunks for continuity."
|
||||
default=200, description="Overlap between chunks for continuity."
|
||||
)
|
||||
|
||||
@field_validator("max_tokens", mode="after")
|
||||
@@ -43,7 +44,13 @@ class LLMProfileConfig(BaseModel):
|
||||
def validate_max_tokens(cls, value: int | None) -> int | None:
|
||||
"""Validate max tokens."""
|
||||
if value is not None and value < 1:
|
||||
raise ValueError("max_tokens must be >= 1")
|
||||
raise ParameterValidationError(
|
||||
"max_tokens must be >= 1",
|
||||
parameter_name="max_tokens",
|
||||
parameter_value=value,
|
||||
expected_range=">= 1",
|
||||
validation_type="range_validation",
|
||||
)
|
||||
return value
|
||||
|
||||
@field_validator("temperature", mode="after")
|
||||
@@ -51,7 +58,13 @@ class LLMProfileConfig(BaseModel):
|
||||
def validate_temperature(cls, value: float) -> float:
|
||||
"""Validate temperature."""
|
||||
if value < 0.0 or value > 2.0:
|
||||
raise ValueError("temperature must be between 0.0 and 2.0")
|
||||
raise ParameterValidationError(
|
||||
"temperature must be between 0.0 and 2.0",
|
||||
parameter_name="temperature",
|
||||
parameter_value=value,
|
||||
expected_range="0.0 to 2.0",
|
||||
validation_type="range_validation",
|
||||
)
|
||||
return value
|
||||
|
||||
@field_validator("input_token_limit", mode="after")
|
||||
@@ -59,7 +72,13 @@ class LLMProfileConfig(BaseModel):
|
||||
def validate_input_token_limit(cls, value: int) -> int:
|
||||
"""Validate input token limit."""
|
||||
if value < 1000:
|
||||
raise ValueError("input_token_limit must be >= 1000")
|
||||
raise ParameterValidationError(
|
||||
"input_token_limit must be >= 1000",
|
||||
parameter_name="input_token_limit",
|
||||
parameter_value=value,
|
||||
expected_range=">= 1000",
|
||||
validation_type="range_validation",
|
||||
)
|
||||
return value
|
||||
|
||||
@field_validator("chunk_size", mode="after")
|
||||
@@ -67,7 +86,13 @@ class LLMProfileConfig(BaseModel):
|
||||
def validate_chunk_size(cls, value: int) -> int:
|
||||
"""Validate chunk size."""
|
||||
if value < 100:
|
||||
raise ValueError("chunk_size must be >= 100")
|
||||
raise ParameterValidationError(
|
||||
"chunk_size must be >= 100",
|
||||
parameter_name="chunk_size",
|
||||
parameter_value=value,
|
||||
expected_range=">= 100",
|
||||
validation_type="range_validation",
|
||||
)
|
||||
return value
|
||||
|
||||
@field_validator("chunk_overlap", mode="after")
|
||||
@@ -75,7 +100,13 @@ class LLMProfileConfig(BaseModel):
|
||||
def validate_chunk_overlap(cls, value: int) -> int:
|
||||
"""Validate chunk overlap."""
|
||||
if value < 0:
|
||||
raise ValueError("chunk_overlap must be >= 0")
|
||||
raise ParameterValidationError(
|
||||
"chunk_overlap must be >= 0",
|
||||
parameter_name="chunk_overlap",
|
||||
parameter_value=value,
|
||||
expected_range=">= 0",
|
||||
validation_type="range_validation",
|
||||
)
|
||||
return value
|
||||
|
||||
|
||||
|
||||
@@ -13,79 +13,43 @@ class APIConfigModel(BaseModel):
|
||||
|
||||
"""
|
||||
|
||||
openai_api_key: str | None = Field(None, description="API key for OpenAI services.")
|
||||
openai_api_key: str | None = Field(default=None, description="API key for OpenAI services.")
|
||||
anthropic_api_key: str | None = Field(
|
||||
None, description="API key for Anthropic services."
|
||||
default=None, description="API key for Anthropic services."
|
||||
)
|
||||
fireworks_api_key: str | None = Field(
|
||||
None, description="API key for Fireworks AI services."
|
||||
default=None, description="API key for Fireworks AI services."
|
||||
)
|
||||
openai_api_base: str | None = Field(
|
||||
None, description="Base URL for OpenAI-compatible API."
|
||||
)
|
||||
brave_api_key: str | None = Field(
|
||||
None, description="API key for Brave Search services."
|
||||
)
|
||||
brave_search_endpoint: str | None = Field(
|
||||
None, description="Endpoint URL for Brave Search API."
|
||||
)
|
||||
brave_web_endpoint: str | None = Field(
|
||||
None, description="Endpoint URL for Brave Web API."
|
||||
)
|
||||
brave_summarizer_endpoint: str | None = Field(
|
||||
None, description="Endpoint URL for Brave Summarizer API."
|
||||
)
|
||||
brave_news_endpoint: str | None = Field(
|
||||
None, description="Endpoint URL for Brave News API."
|
||||
)
|
||||
searxng_url: str | None = Field(None, description="URL for SearXNG instance.")
|
||||
jina_api_key: str | None = Field(None, description="API key for Jina AI services.")
|
||||
tavily_api_key: str | None = Field(
|
||||
None, description="API key for Tavily search services."
|
||||
)
|
||||
langsmith_api_key: str | None = Field(
|
||||
None, description="API key for LangSmith services."
|
||||
)
|
||||
langsmith_project: str | None = Field(
|
||||
None, description="Project name for LangSmith tracking."
|
||||
)
|
||||
langsmith_endpoint: str | None = Field(
|
||||
None, description="Endpoint URL for LangSmith API."
|
||||
)
|
||||
ragflow_api_key: str | None = Field(
|
||||
None, description="API key for RagFlow services."
|
||||
)
|
||||
ragflow_base_url: str | None = Field(None, description="Base URL for RagFlow API.")
|
||||
r2r_api_key: str | None = Field(None, description="API key for R2R services.")
|
||||
r2r_base_url: str | None = Field(
|
||||
None, description="Base URL for R2R API (defaults to http://localhost:7272)."
|
||||
)
|
||||
firecrawl_api_key: str | None = Field(
|
||||
None, description="API key for Firecrawl web scraping service."
|
||||
)
|
||||
firecrawl_base_url: str | None = Field(
|
||||
None, description="Base URL for Firecrawl API."
|
||||
default=None, description="Base URL for OpenAI-compatible API."
|
||||
)
|
||||
brave_api_key: str | None = Field(default=None, description="API key for Brave Search services.")
|
||||
brave_search_endpoint: str | None = Field(default=None, description="Endpoint URL for Brave Search API.")
|
||||
brave_web_endpoint: str | None = Field(default=None, description="Endpoint URL for Brave Web API.")
|
||||
brave_summarizer_endpoint: str | None = Field(default=None, description="Endpoint URL for Brave Summarizer API.")
|
||||
brave_news_endpoint: str | None = Field(default=None, description="Endpoint URL for Brave News API.")
|
||||
searxng_url: str | None = Field(default=None, description="URL for SearXNG instance.")
|
||||
jina_api_key: str | None = Field(default=None, description="API key for Jina AI services.")
|
||||
tavily_api_key: str | None = Field(default=None, description="API key for Tavily search services.")
|
||||
langsmith_api_key: str | None = Field(default=None, description="API key for LangSmith services.")
|
||||
langsmith_project: str | None = Field(default=None, description="Project name for LangSmith tracking.")
|
||||
langsmith_endpoint: str | None = Field(default=None, description="Endpoint URL for LangSmith API.")
|
||||
ragflow_api_key: str | None = Field(default=None, description="API key for RagFlow services.")
|
||||
ragflow_base_url: str | None = Field(default=None, description="Base URL for RagFlow API.")
|
||||
r2r_api_key: str | None = Field(default=None, description="API key for R2R services.")
|
||||
r2r_base_url: str | None = Field(default=None, description="Base URL for R2R API (defaults to http://localhost:7272).")
|
||||
firecrawl_api_key: str | None = Field(default=None, description="API key for Firecrawl web scraping service.")
|
||||
firecrawl_base_url: str | None = Field(default=None, description="Base URL for Firecrawl API.")
|
||||
|
||||
|
||||
class DatabaseConfigModel(BaseModel):
|
||||
"""Pydantic model for database configuration parameters."""
|
||||
|
||||
qdrant_host: str | None = Field(
|
||||
None, description="Hostname for Qdrant vector database."
|
||||
)
|
||||
qdrant_port: int | None = Field(
|
||||
None, description="Port number for Qdrant vector database."
|
||||
)
|
||||
qdrant_api_key: str | None = Field(
|
||||
None, description="API key for Qdrant cloud instance."
|
||||
)
|
||||
default_page_size: int = Field(
|
||||
100, description="Default page size for database queries."
|
||||
)
|
||||
max_page_size: int = Field(
|
||||
1000, description="Maximum allowed page size for database queries."
|
||||
)
|
||||
qdrant_host: str | None = Field(default=None, description="Hostname for Qdrant vector database.")
|
||||
qdrant_port: int | None = Field(default=None, description="Port number for Qdrant vector database.")
|
||||
qdrant_api_key: str | None = Field(default=None, description="API key for Qdrant cloud instance.")
|
||||
default_page_size: int = Field(default=100, description="Default page size for database queries.")
|
||||
max_page_size: int = Field(default=1000, description="Maximum allowed page size for database queries.")
|
||||
|
||||
@field_validator("qdrant_port", mode="after")
|
||||
@classmethod
|
||||
@@ -95,33 +59,15 @@ class DatabaseConfigModel(BaseModel):
|
||||
raise ValueError("qdrant_port must be between 1 and 65535")
|
||||
return value
|
||||
|
||||
qdrant_collection_name: str | None = Field(
|
||||
None, description="Collection name for Qdrant vector database."
|
||||
)
|
||||
postgres_user: str | None = Field(
|
||||
None, description="Username for PostgreSQL database connection."
|
||||
)
|
||||
postgres_password: str | None = Field(
|
||||
None, description="Password for PostgreSQL database connection."
|
||||
)
|
||||
postgres_db: str | None = Field(
|
||||
None, description="Database name for PostgreSQL connection."
|
||||
)
|
||||
postgres_host: str | None = Field(
|
||||
None, description="Hostname for PostgreSQL database server."
|
||||
)
|
||||
postgres_port: int | None = Field(
|
||||
None, description="Port number for PostgreSQL database server."
|
||||
)
|
||||
postgres_min_pool_size: int = Field(
|
||||
2, description="Minimum size of PostgreSQL connection pool."
|
||||
)
|
||||
postgres_max_pool_size: int = Field(
|
||||
15, description="Maximum size of PostgreSQL connection pool."
|
||||
)
|
||||
postgres_command_timeout: int = Field(
|
||||
10, description="Command timeout in seconds for PostgreSQL operations."
|
||||
)
|
||||
qdrant_collection_name: str | None = Field(default=None, description="Collection name for Qdrant vector database.")
|
||||
postgres_user: str | None = Field(default=None, description="Username for PostgreSQL database connection.")
|
||||
postgres_password: str | None = Field(default=None, description="Password for PostgreSQL database connection.")
|
||||
postgres_db: str | None = Field(default=None, description="Database name for PostgreSQL connection.")
|
||||
postgres_host: str | None = Field(default=None, description="Hostname for PostgreSQL database server.")
|
||||
postgres_port: int | None = Field(default=None, description="Port number for PostgreSQL database server.")
|
||||
postgres_min_pool_size: int = Field(default=2, description="Minimum size of PostgreSQL connection pool.")
|
||||
postgres_max_pool_size: int = Field(default=15, description="Maximum size of PostgreSQL connection pool.")
|
||||
postgres_command_timeout: int = Field(default=10, description="Command timeout in seconds for PostgreSQL operations.")
|
||||
|
||||
@field_validator("postgres_port", mode="after")
|
||||
@classmethod
|
||||
@@ -175,7 +121,7 @@ class DatabaseConfigModel(BaseModel):
|
||||
class ProxyConfigModel(BaseModel):
|
||||
"""Pydantic model for proxy configuration parameters."""
|
||||
|
||||
proxy_url: str | None = Field(None, description="URL for the proxy server.")
|
||||
proxy_url: str | None = Field(default=None, description="URL for the proxy server.")
|
||||
proxy_username: str | None = Field(
|
||||
None, description="Username for proxy authentication."
|
||||
)
|
||||
@@ -190,4 +136,4 @@ class RedisConfigModel(BaseModel):
|
||||
redis_url: str = Field(
|
||||
"redis://localhost:6379/0", description="Redis connection URL"
|
||||
)
|
||||
key_prefix: str = Field("biz_bud:", description="Prefix for all Redis keys")
|
||||
key_prefix: str = Field(default="biz_bud:", description="Prefix for all Redis keys")
|
||||
|
||||
@@ -1,15 +1,38 @@
"""Tools configuration models."""

import os

from pydantic import BaseModel, Field, field_validator

from biz_bud.core.enums import RankingStrategy
from biz_bud.core.errors import ValidationError


def _get_env_int(env_var: str, default: str) -> int:
    """Get integer value from environment variable with validation.

    Args:
        env_var: Environment variable name
        default: Default value as string

    Returns:
        Validated integer value

    Raises:
        ValidationError: If the environment variable is not a valid integer
    """
    value = os.environ.get(env_var, default)
    try:
        return int(value)
    except ValueError as e:
        raise ValidationError(f"Invalid integer value for {env_var}: {value!r}") from e

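A quick sanity check of the helper's behaviour; this mirror copy raises a plain `ValueError` so the sketch runs standalone, whereas the project version raises its own `ValidationError`.

```python
import os

def _get_env_int_demo(env_var: str, default: str) -> int:
    # Standalone mirror of _get_env_int above; the real helper raises ValidationError.
    value = os.environ.get(env_var, default)
    try:
        return int(value)
    except ValueError as e:
        raise ValueError(f"Invalid integer value for {env_var}: {value!r}") from e

os.environ["BIZ_BUD_MAX_CONCURRENT_SCRAPES"] = "8"
assert _get_env_int_demo("BIZ_BUD_MAX_CONCURRENT_SCRAPES", "5") == 8   # env var wins

del os.environ["BIZ_BUD_MAX_CONCURRENT_SCRAPES"]
assert _get_env_int_demo("BIZ_BUD_MAX_CONCURRENT_SCRAPES", "5") == 5   # falls back to the default
```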
class JinaRerankConfig(BaseModel):
|
||||
"""Configuration for Jina AI reranking."""
|
||||
|
||||
model: str = Field(
|
||||
default="jina-reranker-v2-base-multilingual",
|
||||
"jina-reranker-v2-base-multilingual",
|
||||
description="Jina reranking model to use",
|
||||
)
|
||||
timeout: float = Field(default=30.0, description="Request timeout in seconds")
|
||||
@@ -39,13 +62,16 @@ class SearchRankingConfig(BaseModel):
|
||||
"""Configuration for search result ranking."""
|
||||
|
||||
strategy: RankingStrategy = Field(
|
||||
default=RankingStrategy.BASIC_SCORING, description="Ranking strategy to use"
|
||||
RankingStrategy.BASIC_SCORING, description="Ranking strategy to use"
|
||||
)
|
||||
hybrid_weight: float = Field(
|
||||
default=0.7, description="Weight for AI ranking in hybrid mode (0.0-1.0)"
|
||||
0.7, description="Weight for AI ranking in hybrid mode (0.0-1.0)"
|
||||
)
|
||||
jina_rerank: JinaRerankConfig = Field(
|
||||
default_factory=JinaRerankConfig, description="Jina reranking configuration"
|
||||
jina_rerank: JinaRerankConfig = JinaRerankConfig(
|
||||
model="jina-reranker-v2-base-multilingual",
|
||||
timeout=30.0,
|
||||
max_retries=3,
|
||||
enable_fallback=True
|
||||
)
|
||||
|
||||
@field_validator("hybrid_weight", mode="after")
|
||||
@@ -60,12 +86,19 @@ class SearchRankingConfig(BaseModel):
|
||||
class SearchToolConfigModel(BaseModel):
|
||||
"""Pydantic model for search tool configuration."""
|
||||
|
||||
name: str | None = Field(None, description="Name of the search tool.")
|
||||
name: str | None = Field(default=None, description="Name of the search tool.")
|
||||
max_results: int | None = Field(
|
||||
None, description="Maximum number of search results."
|
||||
default=None, description="Maximum number of search results."
|
||||
)
|
||||
ranking: SearchRankingConfig = Field(
|
||||
default_factory=SearchRankingConfig, description="Search ranking configuration"
|
||||
ranking: SearchRankingConfig = SearchRankingConfig(
|
||||
strategy=RankingStrategy.BASIC_SCORING,
|
||||
hybrid_weight=0.7,
|
||||
jina_rerank=JinaRerankConfig(
|
||||
model="jina-reranker-v2-base-multilingual",
|
||||
timeout=30.0,
|
||||
max_retries=3,
|
||||
enable_fallback=True
|
||||
)
|
||||
)
|
||||
|
||||
@field_validator("max_results", mode="after")
|
||||
@@ -80,10 +113,10 @@ class SearchToolConfigModel(BaseModel):
|
||||
class ExtractToolConfigModel(BaseModel):
|
||||
"""Pydantic model for extract tool configuration."""
|
||||
|
||||
name: str | None = Field(None, description="Name of the extract tool.")
|
||||
chunk_size: int = Field(8000, description="Default chunk size for text processing")
|
||||
name: str | None = Field(default=None, description="Name of the extract tool.")
|
||||
chunk_size: int = Field(default=8000, description="Default chunk size for text processing")
|
||||
min_content_length: int = Field(
|
||||
100, description="Minimum content length for processing"
|
||||
default=100, description="Minimum content length for processing"
|
||||
)
|
||||
|
||||
@field_validator("chunk_size", mode="after")
|
||||
@@ -179,21 +212,35 @@ class NetworkConfig(BaseModel):
class WebToolsConfig(BaseModel):
    """Configuration for web tools."""

    browser: BrowserConfig = Field(
        default_factory=lambda: BrowserConfig(), description="Browser configuration"
    browser: BrowserConfig = BrowserConfig(
        headless=True,
        timeout_seconds=30.0,
        connection_timeout=10,
        max_browsers=3,
        browser_load_threshold=10,
        max_scroll_percent=500,
        user_agent=None,
        viewport_width=1920,
        viewport_height=1080
    )
    network: NetworkConfig = Field(
        default_factory=lambda: NetworkConfig(), description="Network configuration"
    network: NetworkConfig = NetworkConfig(
        timeout=30.0,
        max_retries=3,
        follow_redirects=True,
        verify_ssl=True
    )
    scraper_timeout: int = Field(30, description="Timeout for scraping operations")
    scraper_timeout: int = Field(default=30, description="Timeout for scraping operations")
    max_concurrent_scrapes: int = Field(
        5, description="Maximum concurrent scraping operations"
        description="Maximum concurrent scraping operations (env: BIZ_BUD_MAX_CONCURRENT_SCRAPES)",
        default_factory=lambda: _get_env_int("BIZ_BUD_MAX_CONCURRENT_SCRAPES", "5")
    )
    max_concurrent_db_queries: int = Field(
        5, description="Maximum concurrent database queries"
        description="Maximum concurrent database queries (env: BIZ_BUD_MAX_CONCURRENT_DB_QUERIES)",
        default_factory=lambda: _get_env_int("BIZ_BUD_MAX_CONCURRENT_DB_QUERIES", "5")
    )
    max_concurrent_analysis: int = Field(
        3, description="Maximum concurrent ingredient analysis operations"
        description="Maximum concurrent extraction/analysis operations (env: BIZ_BUD_MAX_CONCURRENT_ANALYSIS)",
        default_factory=lambda: _get_env_int("BIZ_BUD_MAX_CONCURRENT_ANALYSIS", "12")
    )

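The concurrency fields above read their defaults from the environment at model instantiation time, because `default_factory` is evaluated per instance. A minimal sketch of the pattern; the `ConcurrencySettings` model and simplified helper are illustrative, not the project's classes.

```python
import os
from pydantic import BaseModel, Field

def _env_int(env_var: str, default: str) -> int:
    # Simplified; the real _get_env_int validates and raises ValidationError.
    return int(os.environ.get(env_var, default))

class ConcurrencySettings(BaseModel):
    max_concurrent_scrapes: int = Field(
        description="Maximum concurrent scraping operations (env: BIZ_BUD_MAX_CONCURRENT_SCRAPES)",
        default_factory=lambda: _env_int("BIZ_BUD_MAX_CONCURRENT_SCRAPES", "5"),
    )

os.environ["BIZ_BUD_MAX_CONCURRENT_SCRAPES"] = "12"
assert ConcurrencySettings().max_concurrent_scrapes == 12                          # env override applies
assert ConcurrencySettings(max_concurrent_scrapes=2).max_concurrent_scrapes == 2   # explicit value still wins
```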
@field_validator("scraper_timeout", mode="after")
|
||||
@@ -232,8 +279,8 @@ class WebToolsConfig(BaseModel):
|
||||
class ToolFactoryConfig(BaseModel):
|
||||
"""Configuration for the LangGraph tool factory."""
|
||||
|
||||
enable_caching: bool = Field(True, description="Enable tool instance caching")
|
||||
cache_ttl_seconds: float = Field(3600.0, description="Tool cache TTL in seconds")
|
||||
enable_caching: bool = Field(default=True, description="Enable tool instance caching")
|
||||
cache_ttl_seconds: float = Field(default=3600.0, description="Tool cache TTL in seconds")
|
||||
max_cached_tools: int = Field(
|
||||
100, description="Maximum number of cached tool instances"
|
||||
)
|
||||
@@ -275,7 +322,7 @@ class ToolFactoryConfig(BaseModel):
|
||||
class StateIntegrationConfig(BaseModel):
|
||||
"""Configuration for LangGraph state integration."""
|
||||
|
||||
enable_state_validation: bool = Field(True, description="Enable state validation")
|
||||
enable_state_validation: bool = Field(default=True, description="Enable state validation")
|
||||
preserve_message_history: bool = Field(
|
||||
True, description="Preserve message history in state"
|
||||
)
|
||||
@@ -304,34 +351,39 @@ class ToolsConfigModel(BaseModel):
|
||||
extract: ExtractToolConfigModel | None = Field(
|
||||
None, description="Extract tool configuration."
|
||||
)
|
||||
web_tools: WebToolsConfig = Field(
|
||||
default_factory=lambda: WebToolsConfig(
|
||||
scraper_timeout=30,
|
||||
max_concurrent_scrapes=5,
|
||||
max_concurrent_db_queries=5,
|
||||
max_concurrent_analysis=3,
|
||||
web_tools: WebToolsConfig = WebToolsConfig(
|
||||
browser=BrowserConfig(
|
||||
headless=True,
|
||||
timeout_seconds=30.0,
|
||||
connection_timeout=10,
|
||||
max_browsers=3,
|
||||
browser_load_threshold=10,
|
||||
max_scroll_percent=500,
|
||||
user_agent=None,
|
||||
viewport_width=1920,
|
||||
viewport_height=1080
|
||||
),
|
||||
description="Web tools configuration.",
|
||||
network=NetworkConfig(
|
||||
timeout=30.0,
|
||||
max_retries=3,
|
||||
follow_redirects=True,
|
||||
verify_ssl=True
|
||||
),
|
||||
scraper_timeout=30
|
||||
)
|
||||
factory: ToolFactoryConfig = Field(
|
||||
default_factory=lambda: ToolFactoryConfig(
|
||||
enable_caching=True,
|
||||
cache_ttl_seconds=3600.0,
|
||||
max_cached_tools=100,
|
||||
auto_register_nodes=True,
|
||||
auto_register_graphs=True,
|
||||
default_tool_timeout=300.0,
|
||||
),
|
||||
description="LangGraph tool factory configuration.",
|
||||
factory: ToolFactoryConfig = ToolFactoryConfig(
|
||||
enable_caching=True,
|
||||
cache_ttl_seconds=3600.0,
|
||||
max_cached_tools=100,
|
||||
auto_register_nodes=True,
|
||||
auto_register_graphs=True,
|
||||
default_tool_timeout=300.0
|
||||
)
|
||||
state_integration: StateIntegrationConfig = Field(
|
||||
default_factory=lambda: StateIntegrationConfig(
|
||||
enable_state_validation=True,
|
||||
preserve_message_history=True,
|
||||
max_state_history_length=50,
|
||||
auto_enrich_state=True,
|
||||
),
|
||||
description="LangGraph state integration configuration.",
|
||||
state_integration: StateIntegrationConfig = StateIntegrationConfig(
|
||||
enable_state_validation=True,
|
||||
preserve_message_history=True,
|
||||
max_state_history_length=50,
|
||||
auto_enrich_state=True
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -9,9 +9,7 @@ mappings and condition evaluation.
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Callable
|
||||
from typing import TYPE_CHECKING, Any, TypeVar
|
||||
|
||||
StateT = TypeVar("StateT", bound=dict[str, Any])
|
||||
from typing import Any
|
||||
|
||||
|
||||
class BasicRouters:
|
||||
@@ -22,7 +20,7 @@ class BasicRouters:
|
||||
state_key: str,
|
||||
mapping: dict[Any, str],
|
||||
default: str = "end",
|
||||
) -> Callable[[StateT], str]:
|
||||
) -> Callable[[dict[str, Any]], str]:
|
||||
"""Route based on a state key value.
|
||||
|
||||
Simple routing pattern that looks up a value in state and maps it
|
||||
@@ -47,7 +45,7 @@ class BasicRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> str:
|
||||
def router(state: dict[str, Any]) -> str:
|
||||
value = state.get(state_key)
|
||||
return mapping.get(value, default)
|
||||
|
||||
@@ -55,10 +53,10 @@ class BasicRouters:
|
||||
|
||||
@staticmethod
|
||||
def route_on_condition(
|
||||
condition: Callable[[StateT], bool],
|
||||
condition: Callable[[dict[str, Any]], bool],
|
||||
true_target: str,
|
||||
false_target: str,
|
||||
) -> Callable[[StateT], str]:
|
||||
) -> Callable[[dict[str, Any]], str]:
|
||||
"""Route based on a boolean condition.
|
||||
|
||||
Simple binary routing based on evaluating a condition function.
|
||||
@@ -83,7 +81,7 @@ class BasicRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> str:
|
||||
def router(state: dict[str, Any]) -> str:
|
||||
return true_target if condition(state) else false_target
|
||||
|
||||
return router
|
||||
@@ -94,7 +92,7 @@ class BasicRouters:
|
||||
threshold: float,
|
||||
above_target: str,
|
||||
below_target: str,
|
||||
) -> Callable[[StateT], str]:
|
||||
) -> Callable[[dict[str, Any]], str]:
|
||||
"""Route based on numeric threshold comparison.
|
||||
|
||||
Compares a numeric value in state against a threshold and routes
|
||||
@@ -121,7 +119,7 @@ class BasicRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> str:
|
||||
def router(state: dict[str, Any]) -> str:
|
||||
value = state.get(state_key, 0)
|
||||
try:
|
||||
numeric_value = float(value)
|
||||
|
||||
@@ -8,7 +8,7 @@ and Send objects for dynamic control flow and map-reduce patterns.
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Callable
|
||||
from typing import TYPE_CHECKING, Any, TypeVar
|
||||
from typing import Any
|
||||
|
||||
from langgraph.types import Command, Send
|
||||
|
||||
@@ -16,8 +16,6 @@ from biz_bud.logging import debug_highlight, get_logger
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
StateT = TypeVar("StateT", bound=dict[str, Any])
|
||||
|
||||
|
||||
# Private helper functions for DRY code
|
||||
def _log_command(
|
||||
@@ -39,8 +37,8 @@ def _log_sends(targets: list[tuple[str, dict[str, Any]]], category: str) -> list
|
||||
|
||||
|
||||
def create_command_router(
|
||||
routing_logic: Callable[[StateT], tuple[str, dict[str, Any] | None]],
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
routing_logic: Callable[[dict[str, Any]], tuple[str, dict[str, Any] | None]],
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create a router that returns Command objects for combined state update and routing.
|
||||
|
||||
This factory creates routers that can both update state and control flow
|
||||
@@ -65,7 +63,7 @@ def create_command_router(
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
target, updates = routing_logic(state)
|
||||
return _log_command(target, updates, category="CommandRouter")
|
||||
|
||||
@@ -73,8 +71,8 @@ def create_command_router(
|
||||
|
||||
|
||||
def create_dynamic_send_router(
|
||||
target_generator: Callable[[StateT], list[tuple[str, dict[str, Any]]]],
|
||||
) -> Callable[[StateT], list[Send]]:
|
||||
target_generator: Callable[[dict[str, Any]], list[tuple[str, dict[str, Any]]]],
|
||||
) -> Callable[[dict[str, Any]], list[Send]]:
|
||||
"""Create a router that generates Send objects for dynamic fan-out patterns.
|
||||
|
||||
This factory creates routers for map-reduce patterns where you need to
|
||||
@@ -99,7 +97,7 @@ def create_dynamic_send_router(
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> list[Send]:
|
||||
def router(state: dict[str, Any]) -> list[Send]:
|
||||
targets = target_generator(state)
|
||||
return _log_sends(targets, category="SendRouter")
|
||||
|
||||
@@ -112,7 +110,7 @@ def create_conditional_command_router(
|
||||
],
|
||||
default_target: str = "end",
|
||||
default_updates: dict[str, Any] | None = None,
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create a router with multiple conditions that returns Command objects.
|
||||
|
||||
Args:
|
||||
@@ -133,7 +131,7 @@ def create_conditional_command_router(
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
for condition_fn, target, updates in conditions:
|
||||
if condition_fn(state):
|
||||
debug_highlight(
|
||||
@@ -156,7 +154,7 @@ def create_map_reduce_router(
|
||||
processor_node: str = "process_item",
|
||||
reducer_node: str = "reduce_results",
|
||||
item_state_key: str = "current_item",
|
||||
) -> Callable[[StateT], list[Send] | Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], list[Send] | Command[str]]:
|
||||
"""Create a router for map-reduce patterns using Send.
|
||||
|
||||
This router dispatches items to parallel processors and then
|
||||
@@ -199,7 +197,7 @@ def create_map_reduce_router(
|
||||
|
||||
send_router = create_dynamic_send_router(_gen)
|
||||
|
||||
def router(state: StateT) -> list[Send] | Command[str]:
|
||||
def router(state: dict[str, Any]) -> list[Send] | Command[str]:
|
||||
items = state.get(items_key, [])
|
||||
processed_count = state.get("processed_count", 0)
|
||||
|
||||
@@ -218,7 +216,7 @@ def create_retry_command_router(
|
||||
failure_node: str = "failure",
|
||||
attempt_key: str = "retry_attempts",
|
||||
success_key: str = "is_successful",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create a router that handles retry logic with Command pattern.
|
||||
|
||||
Args:
|
||||
@@ -243,7 +241,7 @@ def create_retry_command_router(
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
attempts = state.get(attempt_key, 0)
|
||||
is_successful = state.get(success_key, False)
|
||||
|
||||
@@ -285,7 +283,7 @@ def create_subgraph_command_router(
|
||||
subgraph_mapping: dict[str, tuple[str, dict[str, Any] | None]],
|
||||
state_key: str = "task_type",
|
||||
parent_return_node: str = "consolidate_results",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create a router for delegating to subgraphs with Command.PARENT support.
|
||||
|
||||
Args:
|
||||
@@ -306,7 +304,7 @@ def create_subgraph_command_router(
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
task_type = state.get(state_key)
|
||||
|
||||
if task_type and isinstance(task_type, str) and task_type in subgraph_mapping:
|
||||
@@ -345,7 +343,7 @@ def route_on_success(
|
||||
success_node: str = "continue",
|
||||
failure_node: str = "error_handler",
|
||||
success_key: str = "success",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create simple success/failure router with Command pattern."""
|
||||
return create_conditional_command_router(
|
||||
[(lambda s: s.get(success_key, False), success_node, None)],
|
||||
@@ -357,10 +355,10 @@ def route_on_success(
|
||||
def fan_out_tasks(
|
||||
tasks_key: str = "tasks",
|
||||
processor_node: str = "process_task",
|
||||
) -> Callable[[StateT], list[Send]]:
|
||||
) -> Callable[[dict[str, Any]], list[Send]]:
|
||||
"""Create simple fan-out router for parallel task processing."""
|
||||
|
||||
def generator(state: StateT) -> list[tuple[str, dict[str, Any]]]:
|
||||
def generator(state: dict[str, Any]) -> list[tuple[str, dict[str, Any]]]:
|
||||
tasks = state.get(tasks_key, [])
|
||||
return [
|
||||
(processor_node, {"task": task, "task_id": i})
|
||||
@@ -375,9 +373,9 @@ class CommandRouters:
|
||||
|
||||
@staticmethod
|
||||
def command_route_with_update(
|
||||
routing_fn: Callable[[StateT], str],
|
||||
update_fn: Callable[[StateT], dict[str, Any]],
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
routing_fn: Callable[[dict[str, Any]], str],
|
||||
update_fn: Callable[[dict[str, Any]], dict[str, Any]],
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create Command router that updates state while routing.
|
||||
|
||||
Combines routing decision with state updates in a single operation.
|
||||
@@ -403,7 +401,7 @@ class CommandRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
target = routing_fn(state)
|
||||
updates = update_fn(state)
|
||||
return _log_command(target, updates, category="CommandUpdate")
|
||||
@@ -412,13 +410,13 @@ class CommandRouters:
|
||||
|
||||
@staticmethod
|
||||
def command_route_with_retry(
|
||||
success_check: Callable[[StateT], bool],
|
||||
success_check: Callable[[dict[str, Any]], bool],
|
||||
success_target: str,
|
||||
retry_target: str,
|
||||
failure_target: str,
|
||||
max_attempts: int = 3,
|
||||
attempt_key: str = "attempts",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create Command router with retry logic.
|
||||
|
||||
Implements retry pattern with attempt counting and eventual failure routing.
|
||||
@@ -448,7 +446,7 @@ class CommandRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
attempts = state.get(attempt_key, 0)
|
||||
|
||||
if success_check(state):
|
||||
@@ -477,7 +475,7 @@ class CommandRouters:
|
||||
processor_node: str,
|
||||
item_key: str = "item",
|
||||
include_index: bool = True,
|
||||
) -> Callable[[StateT], list[Send]]:
|
||||
) -> Callable[[dict[str, Any]], list[Send]]:
|
||||
"""Create Send router for parallel processing of items.
|
||||
|
||||
Distributes items from state to parallel processor nodes using Send objects.
|
||||
@@ -504,7 +502,7 @@ class CommandRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> list[Send]:
|
||||
def router(state: dict[str, Any]) -> list[Send]:
|
||||
items = state.get(items_key, [])
|
||||
targets = []
|
||||
|
||||
@@ -525,7 +523,7 @@ class CommandRouters:
|
||||
true_node: str,
|
||||
false_node: str,
|
||||
item_key: str = "item",
|
||||
) -> Callable[[StateT], list[Send]]:
|
||||
) -> Callable[[dict[str, Any]], list[Send]]:
|
||||
"""Create conditional Send router for item filtering.
|
||||
|
||||
Evaluates each item against a condition and sends to different nodes
|
||||
@@ -557,7 +555,7 @@ class CommandRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> list[Send]:
|
||||
def router(state: dict[str, Any]) -> list[Send]:
|
||||
items = state.get(items_key, [])
|
||||
targets = []
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@ from __future__ import annotations
|
||||
|
||||
import warnings
|
||||
from collections.abc import Callable
|
||||
from typing import TYPE_CHECKING, Any, TypeVar
|
||||
from typing import Any
|
||||
|
||||
from langgraph.types import Command
|
||||
|
||||
@@ -20,8 +20,6 @@ from .basic_routing import BasicRouters
|
||||
from .command_patterns import CommandRouters
|
||||
from .workflow_routing import WorkflowRouters
|
||||
|
||||
StateT = TypeVar("StateT", bound=dict[str, Any])
|
||||
|
||||
# Issue deprecation warning when this module is imported
|
||||
warnings.warn(
|
||||
"The consolidated EdgeHelpers module is deprecated. "
|
||||
@@ -40,7 +38,7 @@ class EdgeHelpers(BasicRouters, CommandRouters, WorkflowRouters):
|
||||
error_target: str = "error_handler",
|
||||
success_target: str = "continue",
|
||||
threshold: int = 1,
|
||||
) -> Callable[[StateT], str]:
|
||||
) -> Callable[[dict[str, Any]], str]:
|
||||
"""Route based on error presence - DEPRECATED.
|
||||
|
||||
Use error handling modules for new code.
|
||||
@@ -51,7 +49,7 @@ class EdgeHelpers(BasicRouters, CommandRouters, WorkflowRouters):
|
||||
stacklevel=2,
|
||||
)
|
||||
|
||||
def router(state: StateT) -> str:
|
||||
def router(state: dict[str, Any]) -> str:
|
||||
errors = state.get(error_key, [])
|
||||
|
||||
if isinstance(errors, list):
|
||||
@@ -69,7 +67,7 @@ class EdgeHelpers(BasicRouters, CommandRouters, WorkflowRouters):
|
||||
recovery_strategies: dict[str, str] | None = None,
|
||||
max_recovery_attempts: int = 2,
|
||||
final_failure_target: str = "human_intervention",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Create error-aware Command router with recovery - DEPRECATED.
|
||||
|
||||
Use error handling modules for new code.
|
||||
@@ -87,7 +85,7 @@ class EdgeHelpers(BasicRouters, CommandRouters, WorkflowRouters):
|
||||
"parsing": "alternative_parser",
|
||||
}
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
errors = state.get(error_key, [])
|
||||
recovery_attempts = state.get("recovery_attempts", 0)
|
||||
|
||||
|
||||
@@ -10,6 +10,7 @@ from typing import Any, Callable, Protocol
|
||||
|
||||
from langgraph.types import Command
|
||||
|
||||
from biz_bud.core.errors import ValidationError
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
from ..validation.condition_security import (
|
||||
@@ -76,10 +77,31 @@ class CommandRoutingRule:
            ValueError: If condition is malformed
        """
        try:
            # SECURITY: Comprehensive validation using enhanced security framework
            validated_condition, operator, _ = validate_condition_for_security(
                condition
            )
            # SECURITY: Use comprehensive validation that properly parses components
            validated_condition, operator, _ = validate_condition_for_security(condition)

            # Find operator position using the same logic as the security function
            import re
            operators_list = [">=", "<=", "==", "!=", ">", "<"]
            found_operator = None
            operator_position = -1

            # Use regex to find the last occurrence of each operator with proper boundaries
            for op in operators_list:
                escaped_op = re.escape(op)
                pattern = rf"(?<=\s|^){escaped_op}(?=\s|$)"
                if matches := list(re.finditer(pattern, validated_condition)):
                    last_match = matches[-1]
                    if last_match.start() > operator_position:
                        found_operator = op
                        operator_position = last_match.start()

            if not found_operator or found_operator != operator:
                raise ValidationError(f"Operator mismatch in condition: {condition}")

            # Split using the found position instead of naive string split
            field_name = validated_condition[:operator_position].strip()
            expected_value = validated_condition[operator_position + len(operator):].strip()

            # Define supported operators with their functions
            operators = {
@@ -92,20 +114,10 @@ class CommandRoutingRule:
            }

            if operator not in operators:
                raise ValueError(f"Unsupported operator: {operator}")
                raise ValidationError(f"Unsupported operator: {operator}")

            comparator = operators[operator]

            # Parse components with additional validation
            parts = validated_condition.split(operator, 1)
            if len(parts) != 2:
                raise ValueError(
                    f"Malformed condition after validation: {validated_condition}"
                )

            field_name = parts[0].strip()
            expected_value = parts[1].strip()

            # Additional field name validation
            validated_field_name = ConditionValidator.validate_field_name(field_name)

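The boundary-anchored regex above is meant to pick `>=` over a bare `>` and to split the condition at the operator's actual position. A standalone sketch of the same idea follows; note that the sketch uses `(?:(?<=\s)|^)` rather than `(?<=\s|^)` because Python's `re` module requires fixed-width look-behind patterns.

```python
import re

operators_list = [">=", "<=", "==", "!=", ">", "<"]
condition = "retry_count >= 3"

found_operator, operator_position = None, -1
for op in operators_list:
    # Operator must be bounded by whitespace or the string edges.
    pattern = rf"(?:(?<=\s)|^){re.escape(op)}(?=\s|$)"
    if matches := list(re.finditer(pattern, condition)):
        last_match = matches[-1]
        if last_match.start() > operator_position:
            found_operator, operator_position = op, last_match.start()

assert found_operator == ">="                                    # ">" alone is rejected by the look-ahead
assert condition[:operator_position].strip() == "retry_count"
assert condition[operator_position + len(found_operator):].strip() == "3"
```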
@@ -127,13 +139,13 @@ class CommandRoutingRule:
|
||||
logger.warning(
|
||||
f"Security validation failed for condition '{condition}': {e}"
|
||||
)
|
||||
raise ValueError(f"Security validation failed: {e}") from e
|
||||
raise ValidationError(f"Security validation failed: {e}") from e
|
||||
except Exception as e:
|
||||
# Log unexpected errors for debugging while not exposing internals
|
||||
logger.error(
|
||||
f"Condition evaluation failed for '{condition}': {type(e).__name__}"
|
||||
)
|
||||
raise ValueError(f"Condition evaluation failed: {type(e).__name__}") from e
|
||||
raise ValidationError(f"Condition evaluation failed: {type(e).__name__}") from e
|
||||
|
||||
def _parse_condition_value(self, value_str: str) -> Any:
|
||||
"""Parse a condition value string into appropriate Python type.
|
||||
|
||||
@@ -135,7 +135,7 @@ class SecureGraphRouter:
|
||||
error: SecurityValidationError | ResourceLimitExceededError,
|
||||
execution_plan: dict[str, Any],
|
||||
step_id: str | None = None,
|
||||
) -> Command[Literal["router", "END"]]:
|
||||
) -> Command[Literal["__end__"]]:
|
||||
"""Create a command for handling security failures.
|
||||
|
||||
Args:
|
||||
@@ -155,10 +155,12 @@ class SecureGraphRouter:
|
||||
break
|
||||
|
||||
return Command(
|
||||
goto="router",
|
||||
goto="__end__",
|
||||
update={
|
||||
"execution_plan": execution_plan,
|
||||
"routing_decision": "security_failure",
|
||||
"planning_stage": "failed",
|
||||
"status": "error",
|
||||
"security_error": {
|
||||
"type": type(error).__name__,
|
||||
"message": str(error),
|
||||
|
||||
@@ -9,14 +9,12 @@ routing and provide utilities for building composite routing logic.
|
||||
from __future__ import annotations
|
||||
|
||||
from collections.abc import Callable, Sequence
|
||||
from typing import TYPE_CHECKING, Any, TypeVar
|
||||
from typing import Any
|
||||
|
||||
from langgraph.types import Command
|
||||
|
||||
from biz_bud.logging import debug_highlight
|
||||
|
||||
StateT = TypeVar("StateT", bound=dict[str, Any])
|
||||
|
||||
|
||||
class WorkflowRouters:
|
||||
"""Advanced workflow routing patterns for complex orchestration."""
|
||||
@@ -26,7 +24,7 @@ class WorkflowRouters:
|
||||
stage_key: str = "workflow_stage",
|
||||
stage_mapping: dict[str, str] | None = None,
|
||||
default_stage: str = "end",
|
||||
) -> Callable[[StateT], str]:
|
||||
) -> Callable[[dict[str, Any]], str]:
|
||||
"""Route based on workflow stage progression.
|
||||
|
||||
Routes based on current workflow stage, useful for sequential
|
||||
@@ -57,7 +55,7 @@ class WorkflowRouters:
|
||||
if stage_mapping is None:
|
||||
stage_mapping = {}
|
||||
|
||||
def router(state: StateT) -> str:
|
||||
def router(state: dict[str, Any]) -> str:
|
||||
current_stage = state.get(stage_key, "unknown")
|
||||
return stage_mapping.get(current_stage, default_stage)
|
||||
|
||||
@@ -69,7 +67,7 @@ class WorkflowRouters:
|
||||
subgraph_mapping: dict[str, str],
|
||||
default_subgraph: str = "main",
|
||||
parent_return: str = "consolidate",
|
||||
) -> Callable[[StateT], Command[str]]:
|
||||
) -> Callable[[dict[str, Any]], Command[str]]:
|
||||
"""Route to subgraphs using Command pattern.
|
||||
|
||||
Routes to different subgraphs based on state values, useful for
|
||||
@@ -99,7 +97,7 @@ class WorkflowRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def router(state: StateT) -> Command[str]:
|
||||
def router(state: dict[str, Any]) -> Command[str]:
|
||||
task_type = state.get(subgraph_key, "default")
|
||||
target_subgraph = subgraph_mapping.get(task_type, default_subgraph)
|
||||
|
||||
@@ -123,7 +121,7 @@ class WorkflowRouters:
|
||||
def combine_routers(
|
||||
routers: Sequence[tuple[Callable[[dict[str, Any]], Any], str]],
|
||||
combination_logic: str = "first_match",
|
||||
) -> Callable[[StateT], Any]:
|
||||
) -> Callable[[dict[str, Any]], Any]:
|
||||
"""Combine multiple routers with specified logic.
|
||||
|
||||
Allows composition of multiple routing functions with different
|
||||
@@ -147,7 +145,7 @@ class WorkflowRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def combined_router(state: StateT) -> Any:
|
||||
def combined_router(state: dict[str, Any]) -> Any:
|
||||
results = []
|
||||
|
||||
for router_func, router_name in routers:
|
||||
@@ -176,9 +174,9 @@ class WorkflowRouters:
|
||||
|
||||
@staticmethod
|
||||
def create_debug_router(
|
||||
inner_router: Callable[[StateT], Any],
|
||||
inner_router: Callable[[dict[str, Any]], Any],
|
||||
name: str = "unnamed_router",
|
||||
) -> Callable[[StateT], Any]:
|
||||
) -> Callable[[dict[str, Any]], Any]:
|
||||
"""Wrap a router with debug logging.
|
||||
|
||||
Adds comprehensive debug logging around router execution,
|
||||
@@ -202,7 +200,7 @@ class WorkflowRouters:
|
||||
```
|
||||
"""
|
||||
|
||||
def debug_router(state: StateT) -> Any:
|
||||
def debug_router(state: dict[str, Any]) -> Any:
|
||||
debug_highlight(
|
||||
f"Router '{name}' evaluating state keys: {list(state.keys())}",
|
||||
category="EdgeRouter",
|
||||
|
||||
@@ -108,6 +108,7 @@ from .specialized_exceptions import (
|
||||
ImmutableStateError,
|
||||
JsonParsingError,
|
||||
NodeValidationError,
|
||||
ParameterValidationError,
|
||||
R2RConnectionError,
|
||||
R2RDatabaseError,
|
||||
RegistryError,
|
||||
@@ -115,6 +116,15 @@ from .specialized_exceptions import (
|
||||
ResourceLimitExceededError,
|
||||
SecurityValidationError,
|
||||
ServiceHelperRemovedError,
|
||||
StateValidationError,
|
||||
URLConfigurationError,
|
||||
URLDeduplicationError,
|
||||
URLDiscoveryError,
|
||||
URLNormalizationError,
|
||||
URLProcessingError,
|
||||
URLProviderError,
|
||||
URLTimeoutError,
|
||||
URLValidationError,
|
||||
WebToolsRemovedError,
|
||||
)
|
||||
|
||||
@@ -251,9 +261,19 @@ __all__ = [
|
||||
"SecurityValidationError",
|
||||
"ResourceLimitExceededError",
|
||||
"ConditionSecurityError",
|
||||
"ParameterValidationError",
|
||||
"NodeValidationError",
|
||||
"GraphValidationError",
|
||||
"StateValidationError",
|
||||
"ImmutableStateError",
|
||||
"URLConfigurationError",
|
||||
"URLDeduplicationError",
|
||||
"URLDiscoveryError",
|
||||
"URLNormalizationError",
|
||||
"URLProcessingError",
|
||||
"URLProviderError",
|
||||
"URLTimeoutError",
|
||||
"URLValidationError",
|
||||
"ServiceHelperRemovedError",
|
||||
"WebToolsRemovedError",
|
||||
"R2RConnectionError",
|
||||
|
||||
@@ -423,10 +423,40 @@ def categorize_error(
    Returns:
        Tuple of (category, namespace_code)
    """
    # First check specific exception types to avoid misclassification
    import socket

    # Specific network exceptions (more reliable than string matching)
    if isinstance(exception, (socket.timeout, socket.gaierror)):
        return ErrorCategory.NETWORK, ErrorNamespace.NET_CONNECTION_TIMEOUT
    if isinstance(exception, (ConnectionError, ConnectionRefusedError, ConnectionAbortedError)):
        return ErrorCategory.NETWORK, ErrorNamespace.NET_CONNECTION_REFUSED

    # Check for httpx/requests timeout exceptions (using type name to avoid direct import)
    exception_type_name = type(exception).__name__
    if "TimeoutException" in exception_type_name or "httpx.TimeoutException" in str(type(exception)):
        return ErrorCategory.NETWORK, ErrorNamespace.NET_CONNECTION_TIMEOUT
    if "ConnectError" in exception_type_name or "httpx.ConnectError" in str(type(exception)):
        return ErrorCategory.NETWORK, ErrorNamespace.NET_CONNECTION_REFUSED

    # Check for specific validation exceptions
    if isinstance(exception, (ValueError, TypeError)):
        message = str(exception).lower()
        if "missing" in message or "required" in message:
            return ErrorCategory.VALIDATION, ErrorNamespace.VAL_MISSING_FIELD
        elif "format" in message or "invalid" in message:
            return ErrorCategory.VALIDATION, ErrorNamespace.VAL_INVALID_INPUT
        else:
            return ErrorCategory.VALIDATION, ErrorNamespace.VAL_CONSTRAINT_VIOLATION

    if isinstance(exception, KeyError):
        return ErrorCategory.VALIDATION, ErrorNamespace.VAL_MISSING_FIELD

    # Fall back to string matching for unknown exception types
    exception_type = type(exception).__name__
    message = str(exception).lower()

    # Network errors
    # Network errors (fallback string matching)
    if any(
        term in exception_type.lower() for term in ["connection", "network", "timeout"]
    ):

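Illustrative behaviour of the reworked categorisation; the import path is an assumption, and the calls assume the exception is the first positional argument of `categorize_error`.

```python
import socket

# Assumed import path; adjust to wherever categorize_error and the enums actually live.
from biz_bud.core.errors import ErrorCategory, ErrorNamespace, categorize_error

assert categorize_error(socket.timeout("read timed out")) == (
    ErrorCategory.NETWORK, ErrorNamespace.NET_CONNECTION_TIMEOUT
)
assert categorize_error(ValueError("invalid format for date field")) == (
    ErrorCategory.VALIDATION, ErrorNamespace.VAL_INVALID_INPUT
)
assert categorize_error(KeyError("user_id")) == (
    ErrorCategory.VALIDATION, ErrorNamespace.VAL_MISSING_FIELD
)
```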
@@ -71,6 +71,10 @@ class RouteCondition:
|
||||
if cat_enum not in self.categories:
|
||||
return False
|
||||
except ValueError:
|
||||
# Log warning and treat as non-match to prevent crash
|
||||
from biz_bud.logging import get_logger
|
||||
logger = get_logger(__name__)
|
||||
logger.warning(f"Invalid ErrorCategory value: {category}")
|
||||
return False
|
||||
|
||||
# Check severities
|
||||
@@ -83,6 +87,10 @@ class RouteCondition:
|
||||
if sev_enum not in self.severities:
|
||||
return False
|
||||
except ValueError:
|
||||
# Log warning and treat as non-match to prevent crash
|
||||
from biz_bud.logging import get_logger
|
||||
logger = get_logger(__name__)
|
||||
logger.warning(f"Invalid ErrorSeverity value: {severity}")
|
||||
return False
|
||||
|
||||
# Check nodes
|
||||
|
||||
@@ -152,6 +152,42 @@ class ConditionSecurityError(SecurityValidationError):
        self.context.metadata["condition"] = condition


class ParameterValidationError(ValidationError):
    """Exception for parameter validation failures in configuration models."""

    def __init__(
        self,
        message: str,
        parameter_name: str | None = None,
        parameter_value: Any = None,
        expected_range: str | None = None,
        validation_type: str | None = None,
        context: ErrorContext | None = None,
        cause: Exception | None = None,
    ):
        """Initialize parameter validation error with parameter details."""
        super().__init__(
            message,
            field=parameter_name,
            value=parameter_value,
            context=context,
            cause=cause,
            error_code=ErrorNamespace.VAL_RANGE_ERROR,
        )
        self.parameter_name = parameter_name
        self.parameter_value = parameter_value
        self.expected_range = expected_range
        self.validation_type = validation_type
        if parameter_name:
            self.context.metadata["parameter_name"] = parameter_name
        if parameter_value is not None:
            self.context.metadata["parameter_value"] = str(parameter_value)
        if expected_range:
            self.context.metadata["expected_range"] = expected_range
        if validation_type:
            self.context.metadata["validation_type"] = validation_type

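Constructing the new exception directly shows how the parameter details are mirrored into the error context, which is what the configuration validators earlier in this diff rely on; the import path is assumed from the `__all__` exports shown here.

```python
from biz_bud.core.errors import ParameterValidationError  # path assumed from the exports above

err = ParameterValidationError(
    "chunk_size must be >= 100",
    parameter_name="chunk_size",
    parameter_value=50,
    expected_range=">= 100",
    validation_type="range_validation",
)
assert err.context.metadata["parameter_name"] == "chunk_size"
assert err.context.metadata["parameter_value"] == "50"        # stored as a string
assert err.context.metadata["expected_range"] == ">= 100"
assert err.context.metadata["validation_type"] == "range_validation"
```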
# === Graph and Node Validation Exceptions ===
|
||||
|
||||
|
||||
@@ -213,9 +249,229 @@ class GraphValidationError(ValidationError):
|
||||
self.context.metadata["validation_errors"] = validation_errors
|
||||
|
||||
|
||||
# === URL Processing Exceptions ===
|
||||
|
||||
|
||||
class URLProcessingError(BusinessBuddyError):
|
||||
"""Base exception for URL processing errors."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
details: dict[str, Any] | None = None,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
cause: Exception | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL processing error with additional details."""
|
||||
super().__init__(
|
||||
message,
|
||||
ErrorSeverity.ERROR,
|
||||
ErrorCategory.VALIDATION,
|
||||
context,
|
||||
cause or original_error,
|
||||
ErrorNamespace.VAL_SCHEMA_ERROR,
|
||||
)
|
||||
# Store additional details
|
||||
self.details = details or {}
|
||||
self.original_error = original_error
|
||||
if details:
|
||||
self.context.metadata.update(details)
|
||||
|
||||
|
||||
class URLValidationError(URLProcessingError):
|
||||
"""Exception raised when URL validation fails."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
url: str,
|
||||
validation_level: str,
|
||||
checks_failed: list[str] | None = None,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL validation error with validation details."""
|
||||
details = {
|
||||
"url": url,
|
||||
"validation_level": validation_level,
|
||||
"checks_failed": checks_failed or [],
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
class URLNormalizationError(URLProcessingError):
|
||||
"""Exception raised when URL normalization fails."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
url: str,
|
||||
normalization_rules: list[str] | None = None,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL normalization error with normalization details."""
|
||||
details = {
|
||||
"url": url,
|
||||
"normalization_rules": normalization_rules or [],
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
class URLDiscoveryError(URLProcessingError):
|
||||
"""Exception raised when URL discovery fails."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
url: str,
|
||||
discovery_method: str,
|
||||
attempted_methods: list[str] | None = None,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL discovery error with discovery details."""
|
||||
details = {
|
||||
"url": url,
|
||||
"discovery_method": discovery_method,
|
||||
"attempted_methods": attempted_methods or [],
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
class URLDeduplicationError(URLProcessingError):
|
||||
"""Exception raised when URL deduplication fails."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
url_count: int,
|
||||
deduplication_method: str,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL deduplication error with deduplication details."""
|
||||
details = {
|
||||
"url_count": url_count,
|
||||
"deduplication_method": deduplication_method,
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
class URLTimeoutError(URLProcessingError):
|
||||
"""Exception raised when URL processing operations timeout."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
timeout_duration: float,
|
||||
operation: str,
|
||||
url: str | None = None,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL timeout error with timeout details."""
|
||||
details = {
|
||||
"timeout_duration": timeout_duration,
|
||||
"operation": operation,
|
||||
"url": url,
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
class URLProviderError(URLProcessingError):
|
||||
"""Exception raised when URL processing provider fails."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
*,
|
||||
provider_name: str,
|
||||
provider_type: str,
|
||||
operation: str,
|
||||
original_error: Exception | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
) -> None:
|
||||
"""Initialize URL provider error with provider details."""
|
||||
details = {
|
||||
"provider_name": provider_name,
|
||||
"provider_type": provider_type,
|
||||
"operation": operation,
|
||||
}
|
||||
super().__init__(
|
||||
message=message,
|
||||
details=details,
|
||||
original_error=original_error,
|
||||
context=context,
|
||||
)
|
||||
|
||||
|
||||
# === State Management Exceptions ===
|
||||
|
||||
|
||||
class StateValidationError(ValidationError):
|
||||
"""Exception for state validation failures in LangGraph workflows."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
state_key: str | None = None,
|
||||
expected_type: str | None = None,
|
||||
validation_rule: str | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
cause: Exception | None = None,
|
||||
):
|
||||
"""Initialize state validation error with state details."""
|
||||
super().__init__(
|
||||
message,
|
||||
field=state_key,
|
||||
context=context,
|
||||
cause=cause,
|
||||
error_code=ErrorNamespace.STATE_TYPE_ERROR,
|
||||
)
|
||||
self.state_key = state_key
|
||||
self.expected_type = expected_type
|
||||
self.validation_rule = validation_rule
|
||||
if state_key:
|
||||
self.context.metadata["state_key"] = state_key
|
||||
if expected_type:
|
||||
self.context.metadata["expected_type"] = expected_type
|
||||
if validation_rule:
|
||||
self.context.metadata["validation_rule"] = validation_rule
|
||||
|
||||
|
||||
class ImmutableStateError(BusinessBuddyError):
|
||||
"""Exception for immutable state violations."""
|
||||
|
||||
@@ -244,6 +500,38 @@ class ImmutableStateError(BusinessBuddyError):
|
||||
self.context.metadata["attempted_operation"] = attempted_operation
|
||||
|
||||
|
||||
class URLConfigurationError(BusinessBuddyError):
|
||||
"""Exception for URL processing configuration errors."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
message: str,
|
||||
config_field: str | None = None,
|
||||
config_value: Any = None,
|
||||
requirement: str | None = None,
|
||||
context: ErrorContext | None = None,
|
||||
cause: Exception | None = None,
|
||||
):
|
||||
"""Initialize URL configuration error with configuration details."""
|
||||
super().__init__(
|
||||
message,
|
||||
ErrorSeverity.CRITICAL,
|
||||
ErrorCategory.CONFIGURATION,
|
||||
context,
|
||||
cause,
|
||||
ErrorNamespace.CFG_VALIDATION_FAILED,
|
||||
)
|
||||
self.config_field = config_field
|
||||
self.config_value = config_value
|
||||
self.requirement = requirement
|
||||
if config_field:
|
||||
self.context.metadata["config_field"] = config_field
|
||||
if config_value is not None:
|
||||
self.context.metadata["config_value"] = str(config_value)
|
||||
if requirement:
|
||||
self.context.metadata["requirement"] = requirement
|
||||
|
||||
|
||||
class ScraperError(BusinessBuddyError):
|
||||
"""Base exception for web scraper errors."""
|
||||
|
||||
@@ -523,11 +811,22 @@ __all__ = [
|
||||
"SecurityValidationError",
|
||||
"ResourceLimitExceededError",
|
||||
"ConditionSecurityError",
|
||||
"ParameterValidationError",
|
||||
# Graph and node validation exceptions
|
||||
"NodeValidationError",
|
||||
"GraphValidationError",
|
||||
# State management exceptions
|
||||
"StateValidationError",
|
||||
"ImmutableStateError",
|
||||
# URL processing exceptions
|
||||
"URLProcessingError",
|
||||
"URLValidationError",
|
||||
"URLNormalizationError",
|
||||
"URLDiscoveryError",
|
||||
"URLDeduplicationError",
|
||||
"URLTimeoutError",
|
||||
"URLProviderError",
|
||||
"URLConfigurationError",
|
||||
# Web tool exceptions
|
||||
"ScraperError",
|
||||
# Service management exceptions
|
||||
|
||||
@@ -26,6 +26,16 @@ class ConfigurationProvider:
        """
        self._config = config or RunnableConfig()

    def _get_configurable(self) -> dict[str, Any]:
        """Get the configurable section from config.

        Handles both dict-like and attribute-like access to RunnableConfig.

        Returns:
            The configurable dictionary.
        """
        return self._config.get("configurable", {})

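Because `RunnableConfig` is dict-like, the helper lets every getter share a single lookup path. A rough usage sketch follows; the import path is assumed, and a plain dict stands in for `RunnableConfig`.

```python
from biz_bud.core.config import ConfigurationProvider  # assumed import path

provider = ConfigurationProvider({"configurable": {"llm_profile_override": "small"}})
assert provider.get_llm_profile() == "small"

# Missing or non-string overrides fall back to the documented default.
assert ConfigurationProvider({"configurable": {}}).get_llm_profile() == "large"
```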
def get_metadata(self, key: str, default: Any = None) -> Any:
|
||||
"""Get metadata value from the configuration.
|
||||
|
||||
@@ -72,7 +82,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The app configuration if available, None otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
return configurable.get("app_config")
|
||||
|
||||
def get_service_factory(self) -> Any | None:
|
||||
@@ -81,7 +91,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The service factory if available, None otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
return configurable.get("service_factory")
|
||||
|
||||
def get_llm_profile(self) -> str:
|
||||
@@ -90,7 +100,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The LLM profile name, defaults to "large".
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
profile = configurable.get("llm_profile_override", "large")
|
||||
return profile if isinstance(profile, str) else "large"
|
||||
|
||||
@@ -100,7 +110,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The temperature override if set, None otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
value = configurable.get("temperature_override")
|
||||
return float(value) if isinstance(value, int | float) else None
|
||||
|
||||
@@ -110,7 +120,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The max tokens override if set, None otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
value = configurable.get("max_tokens_override")
|
||||
return int(value) if isinstance(value, int | float) else None
|
||||
|
||||
@@ -120,7 +130,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
True if streaming is enabled, False otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
value = configurable.get("streaming_enabled", False)
|
||||
return bool(value)
|
||||
|
||||
@@ -130,7 +140,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
True if metrics are enabled, False otherwise.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
value = configurable.get("metrics_enabled", True)
|
||||
return bool(value)
|
||||
|
||||
@@ -144,7 +154,7 @@ class ConfigurationProvider:
|
||||
Returns:
|
||||
The configuration value or default.
|
||||
"""
|
||||
configurable = getattr(self._config, "configurable", {})
|
||||
configurable = self._get_configurable()
|
||||
result = configurable.get(key, default)
|
||||
return cast("T", result) if result is not None else default
|
||||
|
||||
|
||||
@@ -3,6 +3,13 @@
This module provides utilities and patterns for ensuring state immutability
in LangGraph nodes, preventing accidental state mutations and ensuring
predictable state transitions.

PERFORMANCE NOTE: The decorators in this module use copy.deepcopy() extensively,
which can be very expensive for large state objects. These should primarily be
used during development and testing. For production, consider:
1. Removing @enforce_immutability decorators from data-heavy nodes
2. Using lightweight state validation instead of deep copying
3. Implementing shallow comparison for state mutation detection
"""

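As a hedged illustration of option 3 in the note above, a shallow key-level comparison can stand in for the deepcopy snapshot when only top-level rebinds need to be caught; it will not detect in-place mutation of nested objects.

```python
from typing import Any

def changed_keys(before: dict[str, Any], after: dict[str, Any]) -> set[str]:
    """Keys that were added, removed, or rebound to an unequal value (shallow check only)."""
    keys = set(before) | set(after)
    return {k for k in keys if k not in before or k not in after or before[k] != after[k]}

assert changed_keys({"a": 1, "b": 2}, {"a": 1, "b": 3, "c": 4}) == {"b", "c"}
```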
from __future__ import annotations
|
||||
@@ -13,7 +20,7 @@ from typing import Any, TypeVar, cast
|
||||
|
||||
from typing_extensions import ParamSpec
|
||||
|
||||
from biz_bud.core.errors import ImmutableStateError
|
||||
from biz_bud.core.errors import ImmutableStateError, StateValidationError
|
||||
|
||||
P = ParamSpec("P")
|
||||
T = TypeVar("T")
|
||||
@@ -333,6 +340,8 @@ def ensure_immutable_node(
|
||||
result = node_func(*args, **kwargs)
|
||||
return await result if inspect.iscoroutine(result) else result
|
||||
|
||||
# PERFORMANCE WARNING: Deep copying state before/after execution
|
||||
# is expensive for large states. Consider disabling in production.
|
||||
# Create a snapshot of the original state for comparison
|
||||
original_snapshot = copy.deepcopy(state)
|
||||
|
||||
@@ -366,6 +375,8 @@ def ensure_immutable_node(
|
||||
if not isinstance(state, dict):
|
||||
return node_func(*args, **kwargs)
|
||||
|
||||
# PERFORMANCE WARNING: Deep copying state before/after execution
|
||||
# is expensive for large states. Consider disabling in production.
|
||||
# Create a snapshot of the original state for comparison
|
||||
original_snapshot = copy.deepcopy(state)
|
||||
|
||||
@@ -412,7 +423,7 @@ class StateUpdater:
|
||||
```
|
||||
"""
|
||||
|
||||
def __init__(self, base_state: dict[str, Any]):
|
||||
def __init__(self, base_state: dict[str, Any] | Any):
|
||||
"""Initialize with a base state.
|
||||
|
||||
Args:
|
||||
@@ -470,7 +481,12 @@ class StateUpdater:
|
||||
"""
|
||||
current_list = self._state.get(key, [])
|
||||
if not isinstance(current_list, list):
|
||||
raise ValueError(f"Cannot append to non-list value at key '{key}'")
|
||||
raise StateValidationError(
|
||||
f"Cannot append to non-list value at key '{key}'",
|
||||
state_key=key,
|
||||
expected_type="list",
|
||||
validation_rule="append_operation",
|
||||
)
|
||||
|
||||
if key not in self._updates:
|
||||
self._updates[key] = list(current_list)
|
||||
@@ -492,7 +508,12 @@ class StateUpdater:
|
||||
"""
|
||||
current_list = self._state.get(key, [])
|
||||
if not isinstance(current_list, list):
|
||||
raise ValueError(f"Cannot extend non-list value at key '{key}'")
|
||||
raise StateValidationError(
|
||||
f"Cannot extend non-list value at key '{key}'",
|
||||
state_key=key,
|
||||
expected_type="list",
|
||||
validation_rule="extend_operation",
|
||||
)
|
||||
|
||||
if key not in self._updates:
|
||||
self._updates[key] = list(current_list)
|
||||
@@ -514,7 +535,12 @@ class StateUpdater:
|
||||
"""
|
||||
current_dict = self._state.get(key, {})
|
||||
if not isinstance(current_dict, dict):
|
||||
raise ValueError(f"Cannot merge into non-dict value at key '{key}'")
|
||||
raise StateValidationError(
|
||||
f"Cannot merge into non-dict value at key '{key}'",
|
||||
state_key=key,
|
||||
expected_type="dict",
|
||||
validation_rule="merge_operation",
|
||||
)
|
||||
|
||||
if key not in self._updates:
|
||||
self._updates[key] = dict(current_dict)
|
||||
@@ -536,7 +562,12 @@ class StateUpdater:
|
||||
"""
|
||||
current_value = self._state.get(key, 0)
|
||||
if not isinstance(current_value, int | float):
|
||||
raise ValueError(f"Cannot increment non-numeric value at key '{key}'")
|
||||
raise StateValidationError(
|
||||
f"Cannot increment non-numeric value at key '{key}'",
|
||||
state_key=key,
|
||||
expected_type="int or float",
|
||||
validation_rule="increment_operation",
|
||||
)
|
||||
|
||||
self._updates[key] = current_value + amount
|
||||
return self
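Since the StateUpdater operations above now raise `StateValidationError` with structured context instead of a bare `ValueError`, callers can surface that context. A hedged sketch (the attribute names mirror the constructor kwargs shown in the diff and are assumed to be stored on the exception; the `increment` method name is inferred from `"increment_operation"`):

```python
# Hedged sketch: surfacing the structured context carried by StateValidationError.
from biz_bud.core.errors import StateValidationError


def apply_increment(updater, key: str, amount: int) -> None:
    try:
        updater.increment(key, amount)  # method name assumed from "increment_operation"
    except StateValidationError as exc:
        print(
            f"State update rejected: {exc} "
            f"(key={getattr(exc, 'state_key', None)}, "
            f"rule={getattr(exc, 'validation_rule', None)})"
        )
        raise
```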
|
||||
@@ -567,7 +598,11 @@ def validate_state_schema(state: dict[str, Any], schema: type) -> None:
|
||||
if key not in state and not (
|
||||
hasattr(schema, "__total__") and not getattr(schema, "__total__", True)
|
||||
):
|
||||
raise ValueError(f"Required field '{key}' missing from state")
|
||||
raise StateValidationError(
|
||||
f"Required field '{key}' missing from state",
|
||||
state_key=key,
|
||||
validation_rule="schema_validation",
|
||||
)
|
||||
|
||||
if key in state:
|
||||
state[key]
|
||||
|
||||
@@ -13,7 +13,6 @@ HTTP/networking functionality.
|
||||
"""
|
||||
|
||||
import asyncio
|
||||
import json
|
||||
import time
|
||||
from collections.abc import Awaitable, Callable
|
||||
from dataclasses import dataclass
|
||||
@@ -22,12 +21,13 @@ from types import TracebackType
|
||||
from typing import Any, TypeVar, cast
|
||||
from urllib.parse import urlencode
|
||||
|
||||
import httpx
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
from ..errors import NetworkError, RateLimitError, handle_errors
|
||||
from ..errors import NetworkError, RateLimitError, ValidationError, handle_errors
|
||||
from .http_client import HTTPClient
|
||||
from .types import HTTPMethod, HTTPResponse, RequestOptions
|
||||
|
||||
logger = get_logger(__name__)
|
||||
T = TypeVar("T")
|
||||
@@ -83,13 +83,13 @@ class APIResponse:
|
||||
class RequestConfig(BaseModel):
|
||||
"""Configuration for API requests."""
|
||||
|
||||
timeout: float = Field(default=160.0, description="Request timeout in seconds")
|
||||
max_retries: int = Field(default=3, description="Maximum number of retries")
|
||||
retry_delay: float = Field(default=1.0, description="Initial retry delay")
|
||||
retry_backoff: float = Field(default=2.0, description="Retry backoff multiplier")
|
||||
cache_ttl: float | None = Field(default=None, description="Cache TTL in seconds")
|
||||
follow_redirects: bool = Field(default=True, description="Follow redirects")
|
||||
verify_ssl: bool = Field(default=True, description="Verify SSL certificates")
|
||||
timeout: float = Field(description="Request timeout in seconds", default=160.0)
|
||||
max_retries: int = Field(description="Maximum number of retries", default=3)
|
||||
retry_delay: float = Field(description="Initial retry delay", default=1.0)
|
||||
retry_backoff: float = Field(description="Retry backoff multiplier", default=2.0)
|
||||
cache_ttl: float | None = Field(description="Cache TTL in seconds", default=None)
|
||||
follow_redirects: bool = Field(description="Follow redirects", default=True)
|
||||
verify_ssl: bool = Field(description="Verify SSL certificates", default=True)
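Only the argument order of these `Field()` calls changes; the defaults are unchanged. For reference, a caller can still override them as before (values below are illustrative):

```python
# Illustrative only: overriding a few RequestConfig defaults shown above.
config = RequestConfig(
    timeout=30.0,      # instead of the 160.0 s default
    max_retries=5,
    cache_ttl=300.0,   # enable GET response caching for 5 minutes
)
```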
|
||||
|
||||
|
||||
class CircuitBreaker:
|
||||
@@ -213,7 +213,7 @@ class APIClient:
|
||||
self.config = config or RequestConfig()
|
||||
|
||||
# HTTP client
|
||||
self._client: httpx.AsyncClient | None = None
|
||||
self._http_client: HTTPClient = HTTPClient()
|
||||
|
||||
# Circuit breakers per host
|
||||
self._circuit_breakers: dict[str, CircuitBreaker] = {}
|
||||
@@ -223,12 +223,6 @@ class APIClient:
|
||||
|
||||
async def __aenter__(self) -> "APIClient":
|
||||
"""Enter async context."""
|
||||
self._client = httpx.AsyncClient(
|
||||
headers=self.headers,
|
||||
timeout=self.config.timeout,
|
||||
follow_redirects=self.config.follow_redirects,
|
||||
verify=self.config.verify_ssl,
|
||||
)
|
||||
return self
|
||||
|
||||
async def __aexit__(
|
||||
@@ -238,9 +232,8 @@ class APIClient:
|
||||
exc_tb: TracebackType | None,
|
||||
) -> None:
|
||||
"""Exit async context."""
|
||||
if self._client:
|
||||
await self._client.aclose()
|
||||
self._client = None
|
||||
# HTTPClient is a singleton and manages its own lifecycle
|
||||
pass
|
||||
|
||||
@handle_errors(NetworkError, RateLimitError)
|
||||
async def request(
|
||||
@@ -261,14 +254,16 @@ class APIClient:
|
||||
full_url = self._build_url(url)
|
||||
|
||||
# Check cache for GET requests
|
||||
cache_key = self._build_cache_key(method, full_url, params)
|
||||
cache_key = self._build_cache_key(method, full_url, params, json_data, data)
|
||||
if method == RequestMethod.GET and config.cache_ttl:
|
||||
if cached := self._get_cached(cache_key):
|
||||
# Note: Monitoring metrics removed to avoid circular import
|
||||
return cached
|
||||
|
||||
# Get circuit breaker for host
|
||||
host = httpx.URL(full_url).host
|
||||
from urllib.parse import urlparse
|
||||
parsed_url = urlparse(full_url)
|
||||
host = parsed_url.hostname or parsed_url.netloc
|
||||
circuit_breaker = self._get_circuit_breaker(host)
|
||||
|
||||
# Make request with circuit breaker
|
||||
@@ -376,31 +371,38 @@ class APIClient:
|
||||
# Note: Performance monitoring removed to avoid circular import
|
||||
start_time: float = time.time()
|
||||
|
||||
if not self._client:
|
||||
raise RuntimeError(
|
||||
"Client not initialized. Use async context manager."
|
||||
)
|
||||
# Build request options for HTTPClient
|
||||
# No longer needed since cast was removed
|
||||
|
||||
response = await self._client.request(
|
||||
method=method_val,
|
||||
url=url,
|
||||
params=params,
|
||||
json=json_data,
|
||||
data=data,
|
||||
headers=request_headers,
|
||||
timeout=config.timeout,
|
||||
)
|
||||
request_options: RequestOptions = {
|
||||
"method": cast(HTTPMethod, method_val),
|
||||
"url": url,
|
||||
}
|
||||
|
||||
if params:
|
||||
request_options["params"] = {str(k): str(v) for k, v in params.items()}
|
||||
if json_data:
|
||||
request_options["json"] = json_data
|
||||
if data:
|
||||
request_options["data"] = str(data)
|
||||
if request_headers:
|
||||
request_options["headers"] = request_headers
|
||||
if config.timeout:
|
||||
request_options["timeout"] = config.timeout
|
||||
request_options["follow_redirects"] = config.follow_redirects
|
||||
|
||||
# Make request through unified HTTPClient
|
||||
response: HTTPResponse = await self._http_client.request(request_options)
|
||||
elapsed_time: float = time.time() - start_time
|
||||
|
||||
# Parse response
|
||||
api_response: APIResponse = await self._parse_response(
|
||||
api_response: APIResponse = self._parse_unified_response(
|
||||
response, elapsed_time
|
||||
)
|
||||
|
||||
# Check for rate limiting
|
||||
if response.status_code == 429:
|
||||
retry_after: str | None = response.headers.get("Retry-After")
|
||||
if api_response.status_code == 429:
|
||||
retry_after: str | None = response["headers"].get("Retry-After")
|
||||
raise RateLimitError(
|
||||
"Rate limit exceeded",
|
||||
retry_after=int(float(retry_after)) if retry_after else None,
|
||||
@@ -409,12 +411,12 @@ class APIClient:
|
||||
# Raise for other errors if needed
|
||||
if not api_response.is_success() and attempt < config.max_retries:
|
||||
raise NetworkError(
|
||||
f"Request failed with status {response.status_code}"
|
||||
f"Request failed with status {api_response.status_code}"
|
||||
)
|
||||
|
||||
return api_response
|
||||
|
||||
except (httpx.TimeoutException, httpx.ConnectError, NetworkError) as e:
|
||||
except (TimeoutError, NetworkError) as e:
|
||||
last_exception = e
|
||||
|
||||
if attempt < config.max_retries:
|
||||
@@ -433,20 +435,18 @@ class APIClient:
|
||||
|
||||
raise NetworkError("Request failed for unknown reason")
|
||||
|
||||
async def _parse_response(
|
||||
self, response: httpx.Response, elapsed_time: float
|
||||
def _parse_unified_response(
|
||||
self, response: HTTPResponse, elapsed_time: float
|
||||
) -> APIResponse:
|
||||
"""Parse HTTP response."""
|
||||
# Try to parse JSON
|
||||
try:
|
||||
data = response.json()
|
||||
except json.JSONDecodeError:
|
||||
# Fall back to text
|
||||
data = response.text
|
||||
"""Parse HTTPResponse from unified client."""
|
||||
# Get JSON data if available, otherwise fall back to text
|
||||
data = response.get("json")
|
||||
if data is None:
|
||||
data = response.get("text", "")
|
||||
|
||||
return APIResponse(
|
||||
status_code=response.status_code,
|
||||
headers=dict(response.headers),
|
||||
status_code=response["status_code"],
|
||||
headers=response["headers"],
|
||||
data=data,
|
||||
elapsed_time=elapsed_time,
|
||||
)
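`_parse_unified_response` now consumes the unified HTTPClient's dict-style `HTTPResponse` rather than an `httpx.Response`. A rough illustration of the mapping, assuming an existing `APIClient` instance (only the keys actually read by the code above are shown; other `HTTPResponse` fields are not visible in this diff):

```python
# Rough illustration of the mapping performed above (keys taken from the code).
unified_response = {
    "status_code": 200,
    "headers": {"Content-Type": "application/json"},
    "json": {"ok": True},
    "text": '{"ok": true}',
}
api_response = client._parse_unified_response(unified_response, elapsed_time=0.12)
assert api_response.status_code == 200
assert api_response.data == {"ok": True}
```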
|
||||
@@ -457,15 +457,42 @@ class APIClient:
|
||||
return url
|
||||
|
||||
if not self.base_url:
|
||||
raise ValueError("No base URL configured")
|
||||
raise ValidationError("No base URL configured")
|
||||
|
||||
return f"{self.base_url.rstrip('/')}/{url.lstrip('/')}"
|
||||
|
||||
def _build_cache_key(
|
||||
self, method: RequestMethod, url: str, params: dict[str, Any] | None
|
||||
self,
|
||||
method: RequestMethod,
|
||||
url: str,
|
||||
params: dict[str, Any] | None,
|
||||
json_data: dict[str, Any] | None = None,
|
||||
data: dict[str, Any] | None = None
|
||||
) -> str:
|
||||
"""Build cache key for request."""
|
||||
"""Build cache key for request.
|
||||
|
||||
Includes request body for POST requests to ensure different payloads
|
||||
to the same URL don't share the same cache entry.
|
||||
"""
|
||||
import hashlib
|
||||
import json
|
||||
|
||||
param_str = urlencode(sorted(params.items())) if params else ""
|
||||
|
||||
# Include request body in cache key for POST requests
|
||||
body_str = ""
|
||||
if json_data:
|
||||
# Serialize JSON data with sorted keys for consistent hashing
|
||||
body_str = json.dumps(json_data, sort_keys=True)
|
||||
elif data:
|
||||
# Handle dict data type
|
||||
body_str = urlencode(sorted(data.items()))
|
||||
|
||||
# Create hash of body if present to keep cache key manageable
|
||||
if body_str:
|
||||
body_hash = hashlib.md5(body_str.encode()).hexdigest()
|
||||
return f"{method.value}:{url}:{param_str}:{body_hash}"
|
||||
|
||||
return f"{method.value}:{url}:{param_str}"
|
||||
|
||||
def _get_cached(self, key: str) -> APIResponse | None:
|
||||
@@ -491,7 +518,7 @@ class APIClient:
|
||||
self._circuit_breakers[host] = CircuitBreaker(
|
||||
failure_threshold=5,
|
||||
recovery_timeout=60.0,
|
||||
expected_exception=(httpx.HTTPError, NetworkError),
|
||||
expected_exception=(NetworkError, TimeoutError),
|
||||
)
|
||||
|
||||
return self._circuit_breakers[host]
|
||||
@@ -619,7 +646,7 @@ def create_api_client(
|
||||
elif client_type == "graphql":
|
||||
return GraphQLClient(base_url=base_url, headers=headers, config=config)
|
||||
else:
|
||||
raise ValueError(
|
||||
raise ValidationError(
|
||||
f"Invalid client_type: {client_type}. "
|
||||
f"Must be 'basic', 'rest', or 'graphql'."
|
||||
)
|
||||
|
||||
@@ -6,6 +6,8 @@ import time
|
||||
from collections.abc import Awaitable, Callable, Coroutine
|
||||
from typing import Any, ParamSpec, TypeVar, cast
|
||||
|
||||
from biz_bud.core.errors import BusinessBuddyError, ValidationError
|
||||
|
||||
T = TypeVar("T")
|
||||
R = TypeVar("R")
|
||||
P = ParamSpec("P")
|
||||
@@ -30,7 +32,7 @@ async def gather_with_concurrency[T]( # noqa: D103
|
||||
If return_exceptions is True, exceptions will be included in the result list.
|
||||
"""
|
||||
if n < 1:
|
||||
raise ValueError("Concurrency limit must be at least 1")
|
||||
raise ValidationError("Concurrency limit must be at least 1")
|
||||
|
||||
semaphore = asyncio.Semaphore(n)
|
||||
|
||||
@@ -48,7 +50,7 @@ async def gather_with_concurrency[T]( # noqa: D103
|
||||
# Old retry_async function removed - see decorator version below
|
||||
|
||||
|
||||
def retry_async[**P, T]( # noqa: D103
|
||||
def retry_async( # noqa: D103
|
||||
func: Callable[P, Awaitable[T]] | None = None,
|
||||
/,
|
||||
max_retries: int = 3,
|
||||
@@ -123,7 +125,7 @@ class RateLimiter:
|
||||
ValueError: If calls_per_second is not positive
|
||||
"""
|
||||
if calls_per_second <= 0:
|
||||
raise ValueError("calls_per_second must be greater than 0")
|
||||
raise ValidationError("calls_per_second must be greater than 0")
|
||||
|
||||
self.min_interval = 1.0 / calls_per_second
|
||||
self.last_call_time = -float("inf")
|
||||
@@ -270,7 +272,7 @@ async def run_async_chain[T]( # noqa: D103
|
||||
except Exception as e:
|
||||
# Add function name to exception for better debugging
|
||||
func_name = getattr(func, "__name__", repr(func))
|
||||
raise Exception(f"Error in function {func_name}: {str(e)}") from e
|
||||
raise BusinessBuddyError(f"Error in function {func_name}: {str(e)}") from e
|
||||
|
||||
# Ensure result is awaited if it's still a coroutine
|
||||
if asyncio.iscoroutine(result):
|
||||
|
||||
@@ -1,9 +1,10 @@
|
||||
"""Base HTTP client implementation."""
|
||||
|
||||
import asyncio
|
||||
import contextlib
|
||||
from dataclasses import dataclass
|
||||
from types import TracebackType
|
||||
from typing import cast
|
||||
from typing import Any, cast
|
||||
|
||||
import aiohttp
|
||||
|
||||
@@ -25,19 +26,49 @@ class HTTPClientConfig:
|
||||
retry_config: RetryConfig | None = None
|
||||
headers: dict[str, str] | None = None
|
||||
follow_redirects: bool = True
|
||||
# Connection pooling settings
|
||||
connector_limit: int = 100
|
||||
connector_limit_per_host: int = 30
|
||||
keepalive_timeout: float = 30.0
|
||||
enable_cleanup_closed: bool = True
|
||||
|
||||
|
||||
class HTTPClient:
|
||||
"""Base HTTP client with retry and error handling."""
|
||||
"""Base HTTP client with retry and error handling.
|
||||
|
||||
def __init__(self, config: HTTPClientConfig | None = None) -> None:
|
||||
"""Initialize HTTP client.
|
||||
Implements singleton pattern to ensure only one instance and one
|
||||
aiohttp.ClientSession exist throughout the application's lifecycle.
|
||||
This prevents resource exhaustion under high concurrency.
|
||||
"""
|
||||
|
||||
_instance: "HTTPClient | None" = None
|
||||
_session: aiohttp.ClientSession | None = None
|
||||
_lock: asyncio.Lock | None = None # Will be initialized when needed
|
||||
|
||||
def __new__(cls, config: HTTPClientConfig | None = None) -> "HTTPClient":
|
||||
"""Create or return the singleton HTTPClient instance.
|
||||
|
||||
Args:
|
||||
config: Client configuration
|
||||
config: Client configuration (only used for first instance)
|
||||
|
||||
Returns:
|
||||
The singleton HTTPClient instance
|
||||
"""
|
||||
if cls._instance is None:
|
||||
cls._instance = super().__new__(cls)
|
||||
return cast(HTTPClient, cls._instance)
|
||||
|
||||
def __init__(self, config: HTTPClientConfig | None = None) -> None:
|
||||
"""Initialize HTTP client singleton.
|
||||
|
||||
Args:
|
||||
config: Client configuration (only used for first initialization)
|
||||
"""
|
||||
# Check if already initialized to avoid double initialization
|
||||
if hasattr(self, '_initialized') and self._initialized:
|
||||
return
|
||||
self.config = config or HTTPClientConfig()
|
||||
self._session: aiohttp.ClientSession | None = None
|
||||
self._initialized = True
|
||||
|
||||
async def __aenter__(self) -> "HTTPClient":
|
||||
"""Enter async context."""
|
||||
@@ -50,30 +81,81 @@ class HTTPClient:
|
||||
exc_val: BaseException | None,
|
||||
exc_tb: TracebackType | None,
|
||||
) -> None:
|
||||
"""Exit async context."""
|
||||
"""Exit async context and close session."""
|
||||
await self.close()
|
||||
|
||||
async def _ensure_session(self) -> None:
|
||||
"""Ensure session is created."""
|
||||
if self._session is None:
|
||||
"""Ensure session is created with optimized connection pooling.
|
||||
|
||||
Thread-safe session creation for singleton pattern.
|
||||
"""
|
||||
if HTTPClient._session is not None:
|
||||
return
|
||||
|
||||
# Initialize lock if needed (thread-safe)
|
||||
if HTTPClient._lock is None:
|
||||
HTTPClient._lock = asyncio.Lock()
|
||||
|
||||
async with HTTPClient._lock:
|
||||
# Double-check pattern for thread safety
|
||||
if HTTPClient._session is not None:
|
||||
return
|
||||
|
||||
# Create timeout object - pyrefly has issues with ClientTimeout constructor
|
||||
# Using cast to work around type checking issues
|
||||
from typing import Any, cast
|
||||
|
||||
timeout = cast("Any", aiohttp.ClientTimeout)(
|
||||
timeout = cast(Any, aiohttp.ClientTimeout(
|
||||
total=self.config.timeout,
|
||||
connect=self.config.connect_timeout,
|
||||
)
|
||||
self._session = aiohttp.ClientSession(
|
||||
))
|
||||
|
||||
# Create optimized TCPConnector for high-concurrency operations
|
||||
# Use compatible parameters based on aiohttp version
|
||||
connector_kwargs: dict[str, Any] = {
|
||||
"limit": self.config.connector_limit,
|
||||
"limit_per_host": min(self.config.connector_limit_per_host, 50), # Cap per-host limit
|
||||
"keepalive_timeout": self.config.keepalive_timeout,
|
||||
"enable_cleanup_closed": self.config.enable_cleanup_closed,
|
||||
"force_close": False, # Keep connections alive
|
||||
}
|
||||
|
||||
# Add DNS cache parameters if supported (aiohttp >= 3.8)
|
||||
with contextlib.suppress(ImportError, AttributeError):
|
||||
if hasattr(aiohttp.TCPConnector, "use_dns_cache"):
|
||||
connector_kwargs |= {
|
||||
"use_dns_cache": True,
|
||||
"ttl_dns_cache": 300, # 5 minutes DNS cache
|
||||
}
|
||||
connector = aiohttp.TCPConnector(**connector_kwargs)
|
||||
|
||||
HTTPClient._session = aiohttp.ClientSession(
|
||||
timeout=timeout,
|
||||
headers=self.config.headers,
|
||||
connector=connector,
|
||||
)
|
||||
|
||||
async def _get_session(self) -> aiohttp.ClientSession:
|
||||
"""Get the current session, creating it if necessary.
|
||||
|
||||
Returns:
|
||||
The current aiohttp ClientSession
|
||||
|
||||
Raises:
|
||||
RuntimeError: If session cannot be created
|
||||
"""
|
||||
await self._ensure_session()
|
||||
if HTTPClient._session is None:
|
||||
raise RuntimeError("Failed to create HTTP session")
|
||||
return HTTPClient._session
|
||||
|
||||
async def close(self) -> None:
|
||||
"""Close the HTTP session."""
|
||||
if self._session:
|
||||
await self._session.close()
|
||||
self._session = None
|
||||
"""Close the HTTP session.
|
||||
|
||||
Note: This closes the singleton session for all instances.
|
||||
Should only be called during application shutdown.
|
||||
"""
|
||||
if HTTPClient._session:
|
||||
await HTTPClient._session.close()
|
||||
HTTPClient._session = None
|
||||
|
||||
async def request(self, options: RequestOptions) -> HTTPResponse:
|
||||
"""Make an HTTP request.
|
||||
@@ -88,7 +170,7 @@ class HTTPClient:
|
||||
NetworkError: On network failures
|
||||
"""
|
||||
await self._ensure_session()
|
||||
assert self._session is not None
|
||||
assert HTTPClient._session is not None
|
||||
|
||||
method = options["method"]
|
||||
url = options["url"]
|
||||
@@ -131,9 +213,9 @@ class HTTPClient:
|
||||
|
||||
async def _make_request() -> HTTPResponse:
|
||||
try:
|
||||
if self._session is None:
|
||||
if HTTPClient._session is None:
|
||||
raise RuntimeError("Session not initialized")
|
||||
async with self._session.request(method, url, **kwargs) as resp:
|
||||
async with HTTPClient._session.request(method, url, **kwargs) as resp:
|
||||
content = await resp.read()
|
||||
text = None
|
||||
json_data = None
|
||||
@@ -203,3 +285,177 @@ class HTTPClient:
|
||||
options_dict = {"method": "PATCH", "url": url, **kwargs}
|
||||
options = cast("RequestOptions", options_dict)
|
||||
return await self.request(options)
|
||||
|
||||
async def head(self, url: str, **kwargs) -> HTTPResponse:
|
||||
"""Make a HEAD request."""
|
||||
options_dict = {"method": "HEAD", "url": url, **kwargs}
|
||||
options = cast("RequestOptions", options_dict)
|
||||
return await self.request(options)
|
||||
|
||||
@classmethod
|
||||
async def get_session(cls) -> aiohttp.ClientSession:
|
||||
"""Get the global aiohttp.ClientSession for direct usage.
|
||||
|
||||
This method provides access to the underlying session for code that
|
||||
needs direct aiohttp.ClientSession access while still benefiting from
|
||||
the singleton pattern and shared connection pooling.
|
||||
|
||||
Returns:
|
||||
The singleton aiohttp.ClientSession instance.
|
||||
|
||||
Raises:
|
||||
RuntimeError: If the session could not be created.
|
||||
"""
|
||||
if cls._session is None:
|
||||
# Create a temporary instance to initialize the session
|
||||
temp_client = cls()
|
||||
await temp_client._ensure_session()
|
||||
|
||||
if cls._session is None:
|
||||
raise RuntimeError("Failed to initialize HTTPClient session")
|
||||
|
||||
return cls._session
|
||||
|
||||
@classmethod
|
||||
async def get_or_create_client(cls, config: HTTPClientConfig | None = None) -> "HTTPClient":
|
||||
"""Get or create the HTTPClient singleton with optional configuration.
|
||||
|
||||
This is a convenience method for code that needs to ensure an HTTPClient
|
||||
instance exists with specific configuration.
|
||||
|
||||
Args:
|
||||
config: Optional configuration (only used if no instance exists)
|
||||
|
||||
Returns:
|
||||
The HTTPClient singleton instance.
|
||||
"""
|
||||
if cls._instance is None:
|
||||
cls._instance = cls(config)
|
||||
return cls._instance
|
||||
|
||||
async def fetch_text(self, url: str, timeout: float | None = None, headers: dict[str, str] | None = None) -> str:
|
||||
"""Convenience method to fetch text content from a URL.
|
||||
|
||||
Args:
|
||||
url: URL to fetch
|
||||
timeout: Request timeout in seconds
|
||||
headers: Optional request headers
|
||||
|
||||
Returns:
|
||||
The response text content.
|
||||
|
||||
Raises:
|
||||
NetworkError: On request failures
|
||||
"""
|
||||
kwargs: dict[str, Any] = {}
|
||||
if timeout:
|
||||
kwargs["timeout"] = timeout
|
||||
if headers:
|
||||
kwargs["headers"] = headers
|
||||
|
||||
response = await self.get(url, **kwargs)
|
||||
return response.get("text", "")
|
||||
|
||||
async def fetch_json(self, url: str, timeout: float | None = None, headers: dict[str, str] | None = None) -> dict[str, Any] | list[Any] | None:
|
||||
"""Convenience method to fetch JSON content from a URL.
|
||||
|
||||
Args:
|
||||
url: URL to fetch
|
||||
timeout: Request timeout in seconds
|
||||
headers: Optional request headers
|
||||
|
||||
Returns:
|
||||
The parsed JSON data, or None if not JSON.
|
||||
|
||||
Raises:
|
||||
NetworkError: On request failures
|
||||
"""
|
||||
kwargs: dict[str, Any] = {}
|
||||
if timeout:
|
||||
kwargs["timeout"] = timeout
|
||||
if headers:
|
||||
kwargs["headers"] = headers
|
||||
|
||||
response = await self.get(url, **kwargs)
|
||||
return response.get("json")
|
||||
|
||||
|
||||
def create_high_performance_http_client(
|
||||
max_concurrent: int = 100,
|
||||
timeout: float = 30.0,
|
||||
connect_timeout: float = 5.0,
|
||||
) -> HTTPClient:
|
||||
"""Create an HTTP client optimized for high-concurrency batch operations.
|
||||
|
||||
This factory creates an HTTPClient configured for maximum performance
|
||||
in batch processing scenarios with optimized connection pooling.
|
||||
|
||||
Args:
|
||||
max_concurrent: Maximum number of concurrent connections
|
||||
timeout: Request timeout in seconds
|
||||
connect_timeout: Connection timeout in seconds
|
||||
|
||||
Returns:
|
||||
Configured HTTPClient instance optimized for batch processing
|
||||
"""
|
||||
config = HTTPClientConfig(
|
||||
timeout=timeout,
|
||||
connect_timeout=connect_timeout,
|
||||
# Connection pooling optimized for batch processing
|
||||
connector_limit=max_concurrent,
|
||||
connector_limit_per_host=min(max_concurrent // 2, 50),
|
||||
keepalive_timeout=60.0, # Longer keepalive for batch operations
|
||||
enable_cleanup_closed=True,
|
||||
# Default headers for performance
|
||||
headers={
|
||||
"User-Agent": "BizBud-BatchProcessor/1.0",
|
||||
"Connection": "keep-alive",
|
||||
},
|
||||
follow_redirects=True,
|
||||
)
|
||||
|
||||
return HTTPClient(config)
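Worth noting: because HTTPClient is a process-wide singleton whose `__init__` returns early once initialized, the config passed to this factory only takes effect if it runs before any other HTTPClient construction; otherwise the already-initialized instance (and its connector limits) is returned unchanged. A short usage sketch:

```python
# Usage sketch: construct the batch-optimized client early, before other code
# instantiates HTTPClient, so its connection-pool settings actually apply.
import asyncio

client = create_high_performance_http_client(max_concurrent=200, timeout=20.0)


async def fetch_all(urls: list[str]) -> list[str]:
    return list(await asyncio.gather(*(client.fetch_text(u) for u in urls)))
```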
|
||||
|
||||
|
||||
class HTTPClientLifespan:
|
||||
"""Lifespan manager for HTTPClient singleton.
|
||||
|
||||
Use with FastAPI's lifespan parameter to ensure proper
|
||||
session initialization and cleanup.
|
||||
"""
|
||||
|
||||
def __init__(self, config: HTTPClientConfig | None = None):
|
||||
"""Initialize lifespan manager.
|
||||
|
||||
Args:
|
||||
config: Configuration for HTTPClient
|
||||
"""
|
||||
self.config = config
|
||||
self._client: HTTPClient | None = None
|
||||
|
||||
async def startup(self) -> None:
|
||||
"""Initialize HTTPClient on application startup."""
|
||||
self._client = HTTPClient(self.config)
|
||||
# Pre-initialize the session
|
||||
await self._client._ensure_session()
|
||||
logger.info("HTTPClient singleton initialized with persistent session")
|
||||
|
||||
async def shutdown(self) -> None:
|
||||
"""Close HTTPClient session on application shutdown."""
|
||||
if self._client:
|
||||
await self._client.close()
|
||||
logger.info("HTTPClient singleton session closed")
|
||||
|
||||
@contextlib.asynccontextmanager
|
||||
async def lifespan(self, app):
|
||||
"""Async context manager for FastAPI lifespan integration.
|
||||
|
||||
Usage:
|
||||
lifespan_manager = HTTPClientLifespan()
|
||||
app = FastAPI(lifespan=lifespan_manager.lifespan)
|
||||
"""
|
||||
await self.startup()
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
await self.shutdown()
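The class docstring already shows the intended wiring; spelled out slightly more as a sketch (FastAPI itself is an assumption of the deployment, not something this diff adds):

```python
# Sketch of wiring HTTPClientLifespan into FastAPI, as the docstring suggests.
from fastapi import FastAPI

lifespan_manager = HTTPClientLifespan(HTTPClientConfig(timeout=30.0))
app = FastAPI(lifespan=lifespan_manager.lifespan)


@app.get("/health")
async def health() -> dict[str, str]:
    return {"status": "ok"}
```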
|
||||
|
||||
@@ -7,8 +7,13 @@ from collections import defaultdict, deque
|
||||
from collections.abc import Awaitable, Callable
|
||||
from dataclasses import dataclass, field
|
||||
from datetime import datetime
|
||||
from enum import Enum
|
||||
from typing import Any, cast
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
# Global stats tracker
|
||||
_stats_lock = asyncio.Lock()
|
||||
_retry_stats: defaultdict[str, "RetryStats"] = defaultdict(lambda: RetryStats())
|
||||
@@ -115,6 +120,170 @@ class RetryConfig:
|
||||
exceptions: tuple[type[Exception], ...] = (Exception,)
|
||||
|
||||
|
||||
class CircuitBreakerState(Enum):
|
||||
"""Circuit breaker states."""
|
||||
CLOSED = "closed" # Normal operation
|
||||
OPEN = "open" # Circuit is open, calls fail fast
|
||||
HALF_OPEN = "half_open" # Testing if service is back
|
||||
|
||||
|
||||
class CircuitBreakerError(Exception):
|
||||
"""Raised when circuit breaker is open."""
|
||||
pass
|
||||
|
||||
|
||||
@dataclass
|
||||
class CircuitBreakerConfig:
|
||||
"""Configuration for circuit breaker behavior."""
|
||||
|
||||
failure_threshold: int = 5 # Number of failures before opening
|
||||
recovery_timeout: float = 60.0 # Seconds before attempting recovery
|
||||
expected_exception: tuple[type[Exception], ...] = (Exception,)
|
||||
success_threshold: int = 3 # Successes needed in half-open to close
|
||||
|
||||
|
||||
@dataclass
|
||||
class CircuitBreakerStats:
|
||||
"""Statistics for circuit breaker operations."""
|
||||
|
||||
state: CircuitBreakerState = CircuitBreakerState.CLOSED
|
||||
failure_count: int = 0
|
||||
success_count: int = 0
|
||||
last_failure_time: float = 0.0
|
||||
total_requests: int = 0
|
||||
successful_requests: int = 0
|
||||
failed_requests: int = 0
|
||||
circuit_opened_count: int = 0
|
||||
|
||||
def record_success(self) -> None:
|
||||
"""Record a successful request."""
|
||||
self.total_requests += 1
|
||||
self.successful_requests += 1
|
||||
self.success_count += 1
|
||||
|
||||
def record_failure(self) -> None:
|
||||
"""Record a failed request."""
|
||||
self.total_requests += 1
|
||||
self.failed_requests += 1
|
||||
self.failure_count += 1
|
||||
self.success_count = 0 # Reset success count
|
||||
self.last_failure_time = time.time()
|
||||
|
||||
def reset_failure_count(self) -> None:
|
||||
"""Reset failure count (when circuit closes)."""
|
||||
self.failure_count = 0
|
||||
self.success_count = 0
|
||||
|
||||
|
||||
class CircuitBreaker:
|
||||
"""Circuit breaker implementation for preventing cascading failures."""
|
||||
|
||||
def __init__(self, config: CircuitBreakerConfig) -> None:
|
||||
"""Initialize circuit breaker.
|
||||
|
||||
Args:
|
||||
config: Circuit breaker configuration
|
||||
"""
|
||||
self.config = config
|
||||
self.stats = CircuitBreakerStats()
|
||||
self._lock = asyncio.Lock()
|
||||
|
||||
async def _should_attempt_reset(self) -> bool:
|
||||
"""Check if we should attempt to reset the circuit breaker."""
|
||||
if self.stats.state != CircuitBreakerState.OPEN:
|
||||
return False
|
||||
|
||||
return (time.time() - self.stats.last_failure_time) >= self.config.recovery_timeout
|
||||
|
||||
async def _handle_success(self) -> None:
|
||||
"""Handle successful request."""
|
||||
async with self._lock:
|
||||
self.stats.record_success()
|
||||
|
||||
if self.stats.state == CircuitBreakerState.HALF_OPEN and self.stats.success_count >= self.config.success_threshold:
|
||||
logger.info("CircuitBreaker state transition: HALF_OPEN -> CLOSED (success_count=%d, threshold=%d)", self.stats.success_count, self.config.success_threshold)
|
||||
self.stats.state = CircuitBreakerState.CLOSED
|
||||
self.stats.reset_failure_count()
|
||||
|
||||
async def _handle_failure(self, exception: Exception) -> None:
|
||||
"""Handle failed request."""
|
||||
# Only count expected exceptions as circuit breaker failures
|
||||
if not any(isinstance(exception, exc_type) for exc_type in self.config.expected_exception):
|
||||
return
|
||||
|
||||
async with self._lock:
|
||||
self.stats.record_failure()
|
||||
|
||||
if self.stats.state == CircuitBreakerState.CLOSED:
|
||||
if self.stats.failure_count >= self.config.failure_threshold:
|
||||
# Open the circuit
|
||||
logger.warning("CircuitBreaker state transition: CLOSED -> OPEN (failure_count=%d, threshold=%d)", self.stats.failure_count, self.config.failure_threshold)
|
||||
self.stats.state = CircuitBreakerState.OPEN
|
||||
self.stats.circuit_opened_count += 1
|
||||
|
||||
elif self.stats.state == CircuitBreakerState.HALF_OPEN:
|
||||
# Go back to open state
|
||||
logger.warning("CircuitBreaker state transition: HALF_OPEN -> OPEN")
|
||||
self.stats.state = CircuitBreakerState.OPEN
|
||||
|
||||
async def call(self, func: Callable[..., Any], *args, **kwargs) -> Any:
|
||||
"""Execute function through circuit breaker.
|
||||
|
||||
Args:
|
||||
func: Function to execute
|
||||
*args: Positional arguments
|
||||
**kwargs: Keyword arguments
|
||||
|
||||
Returns:
|
||||
Function result
|
||||
|
||||
Raises:
|
||||
CircuitBreakerError: When circuit is open
|
||||
Exception: Original exception from function
|
||||
"""
|
||||
# Check if we should attempt reset
|
||||
if self.stats.state == CircuitBreakerState.OPEN:
|
||||
if await self._should_attempt_reset():
|
||||
async with self._lock:
|
||||
logger.info("CircuitBreaker state transition: OPEN -> HALF_OPEN")
|
||||
self.stats.state = CircuitBreakerState.HALF_OPEN
|
||||
else:
|
||||
raise CircuitBreakerError(
|
||||
f"Circuit breaker is open. Last failure: {self.stats.last_failure_time}"
|
||||
)
|
||||
|
||||
# Execute the function
|
||||
try:
|
||||
if asyncio.iscoroutinefunction(func):
|
||||
result = await func(*args, **kwargs)
|
||||
else:
|
||||
result = func(*args, **kwargs)
|
||||
|
||||
await self._handle_success()
|
||||
return result
|
||||
|
||||
except Exception as e:
|
||||
await self._handle_failure(e)
|
||||
raise
|
||||
|
||||
def get_stats(self) -> dict[str, Any]:
|
||||
"""Get circuit breaker statistics."""
|
||||
return {
|
||||
"state": self.stats.state.value,
|
||||
"failure_count": self.stats.failure_count,
|
||||
"success_count": self.stats.success_count,
|
||||
"total_requests": self.stats.total_requests,
|
||||
"successful_requests": self.stats.successful_requests,
|
||||
"failed_requests": self.stats.failed_requests,
|
||||
"circuit_opened_count": self.stats.circuit_opened_count,
|
||||
"last_failure_time": self.stats.last_failure_time,
|
||||
"success_rate": (
|
||||
self.stats.successful_requests / self.stats.total_requests
|
||||
if self.stats.total_requests > 0 else 0.0
|
||||
),
|
||||
}
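Putting the pieces together, a hedged usage sketch of this breaker wrapped around a flaky async call (the fetch function is a placeholder):

```python
# Hedged usage sketch of the CircuitBreaker defined above.
import asyncio


async def flaky_fetch(url: str) -> str:
    # Placeholder for a real network call that may raise ConnectionError/TimeoutError.
    return f"payload from {url}"


async def main() -> None:
    breaker = CircuitBreaker(CircuitBreakerConfig(failure_threshold=3, recovery_timeout=30.0))
    try:
        result = await breaker.call(flaky_fetch, "https://example.com")
    except CircuitBreakerError:
        result = None  # fail fast while the circuit is open
    print(breaker.get_stats()["state"], result)


asyncio.run(main())
```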
|
||||
|
||||
|
||||
def exponential_backoff(
|
||||
attempt: int,
|
||||
initial_delay: float = 1.0,
|
||||
@@ -302,3 +471,66 @@ def track_retry(func: Callable[..., Any]) -> Callable[..., Any]:
|
||||
raise
|
||||
|
||||
return wrapper
|
||||
|
||||
|
||||
async def retry_with_circuit_breaker(
|
||||
func: Callable[..., Any],
|
||||
retry_config: RetryConfig,
|
||||
circuit_breaker: CircuitBreaker,
|
||||
*args,
|
||||
**kwargs,
|
||||
) -> Any:
|
||||
"""Execute function with both retry logic and circuit breaker pattern.
|
||||
|
||||
This combines exponential backoff retry with circuit breaker protection
|
||||
to provide robust error handling for high-concurrency batch operations.
|
||||
|
||||
Args:
|
||||
func: Function to execute
|
||||
retry_config: Retry configuration
|
||||
circuit_breaker: Circuit breaker instance
|
||||
*args: Positional arguments for func
|
||||
**kwargs: Keyword arguments for func
|
||||
|
||||
Returns:
|
||||
Function result
|
||||
|
||||
Raises:
|
||||
CircuitBreakerError: When circuit is open
|
||||
Exception: Last exception if all retries fail
|
||||
"""
|
||||
async def execute_with_retry() -> Any:
|
||||
"""Execute function with retry logic."""
|
||||
return await retry_with_backoff(func, retry_config, *args, **kwargs)
|
||||
|
||||
# Execute through circuit breaker
|
||||
return await circuit_breaker.call(execute_with_retry)
|
||||
|
||||
|
||||
def create_circuit_breaker_for_batch_processing(
|
||||
failure_threshold: int = 5,
|
||||
recovery_timeout: float = 60.0,
|
||||
) -> CircuitBreaker:
|
||||
"""Create a circuit breaker optimized for batch processing operations.
|
||||
|
||||
Args:
|
||||
failure_threshold: Number of failures before opening circuit
|
||||
recovery_timeout: Seconds before attempting recovery
|
||||
|
||||
Returns:
|
||||
Configured CircuitBreaker instance
|
||||
"""
|
||||
config = CircuitBreakerConfig(
|
||||
failure_threshold=failure_threshold,
|
||||
recovery_timeout=recovery_timeout,
|
||||
# Common exceptions that should trigger circuit breaker
|
||||
expected_exception=(
|
||||
ConnectionError,
|
||||
TimeoutError,
|
||||
OSError,
|
||||
Exception, # Catch-all for other failures
|
||||
),
|
||||
success_threshold=3, # Need 3 successes to close circuit
|
||||
)
|
||||
|
||||
return CircuitBreaker(config)
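Combining retries with the breaker then looks roughly like the sketch below; it assumes RetryConfig's other fields all have defaults (only its `exceptions` field is visible in this excerpt) and `fetch_item` is a hypothetical async callable:

```python
# Sketch combining the helpers above; RetryConfig() defaults are assumed adequate.
breaker = create_circuit_breaker_for_batch_processing(failure_threshold=5)
retry_config = RetryConfig(exceptions=(ConnectionError, TimeoutError))


async def process_item(item_id: str) -> dict:
    return await retry_with_circuit_breaker(
        fetch_item,   # hypothetical async callable taking an item id
        retry_config,
        breaker,
        item_id,
    )
```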
|
||||
|
||||
src/biz_bud/core/services/__init__.py (new file, 185 lines)
@@ -0,0 +1,185 @@
|
||||
"""Modern service management for the Business Buddy framework.
|
||||
|
||||
This package provides a unified, async-first approach to service management
|
||||
that replaces competing singleton patterns with proper dependency injection,
|
||||
lifecycle management, and thread-safe initialization.
|
||||
|
||||
Key Components:
|
||||
- ServiceRegistry: Central registry for service management with async context managers
|
||||
- Service Factories: Async context manager factories for core services
|
||||
- Dependency Injection: Automatic resolution of service dependencies
|
||||
- Lifecycle Management: Proper initialization and cleanup of services
|
||||
|
||||
Benefits:
|
||||
- Eliminates race conditions in service initialization
|
||||
- Provides proper resource management and cleanup
|
||||
- Supports dependency injection and inversion of control
|
||||
- Thread-safe service access and management
|
||||
- Integration with FastAPI lifespan events
|
||||
|
||||
Example Usage:
|
||||
```python
|
||||
from biz_bud.core.services import ServiceRegistry, register_core_services
|
||||
from biz_bud.core.networking.http_client import HTTPClient
|
||||
|
||||
# Set up service registry
|
||||
registry = ServiceRegistry(config)
|
||||
await register_core_services(registry, config)
|
||||
|
||||
# Use services with proper lifecycle management
|
||||
async with registry.get_service(HTTPClient) as http_client:
|
||||
response = await http_client.get("https://example.com")
|
||||
|
||||
# Clean up all services
|
||||
await registry.cleanup_all()
|
||||
```
|
||||
|
||||
Migration from Legacy Patterns:
|
||||
The new service management system is designed to gradually replace
|
||||
existing singleton patterns. During the migration period, both
|
||||
systems can coexist:
|
||||
|
||||
```python
|
||||
# Legacy pattern (being phased out)
|
||||
from biz_bud.services.factory import get_global_factory
|
||||
factory = await get_global_factory(config)
|
||||
service = await factory.get_llm_client()
|
||||
|
||||
# New pattern (preferred)
|
||||
from biz_bud.core.services import get_global_registry
|
||||
registry = await get_global_registry(config)
|
||||
async with registry.get_service(LangchainLLMClient) as service:
|
||||
# Use service
|
||||
pass
|
||||
```
|
||||
"""
|
||||
|
||||
from .config_manager import (
|
||||
ConfigurationError,
|
||||
ConfigurationLoadError,
|
||||
ConfigurationManager,
|
||||
ConfigurationValidationError,
|
||||
ServiceConfigMixin,
|
||||
cleanup_global_config_manager,
|
||||
get_global_config_manager,
|
||||
)
|
||||
from .container import (
|
||||
BindingNotFoundError,
|
||||
DIContainer,
|
||||
DIError,
|
||||
InjectionError,
|
||||
auto_inject,
|
||||
conditional_service,
|
||||
container_scope,
|
||||
)
|
||||
from .factories import (
|
||||
create_app_lifespan,
|
||||
create_http_client_factory,
|
||||
create_llm_client_factory,
|
||||
create_managed_app_lifespan,
|
||||
create_postgres_store_factory,
|
||||
create_redis_cache_factory,
|
||||
create_semantic_extraction_factory,
|
||||
create_vector_store_factory,
|
||||
initialize_all_services,
|
||||
initialize_essential_services,
|
||||
register_core_services,
|
||||
register_extraction_services,
|
||||
)
|
||||
from .http_service import HTTPClientService, HTTPClientServiceConfig
|
||||
from .lifecycle import (
|
||||
LifecycleError,
|
||||
ServiceLifecycleManager,
|
||||
ShutdownError,
|
||||
StartupError,
|
||||
create_fastapi_lifespan,
|
||||
create_managed_registry,
|
||||
)
|
||||
from .monitoring import (
|
||||
HealthStatus,
|
||||
ServiceMetrics,
|
||||
ServiceMonitor,
|
||||
SystemHealthReport,
|
||||
check_database_connectivity,
|
||||
check_http_connectivity,
|
||||
console_alert_handler,
|
||||
log_alert_handler,
|
||||
setup_monitoring_for_registry,
|
||||
)
|
||||
from .registry import (
|
||||
CircularDependencyError,
|
||||
ServiceError,
|
||||
ServiceInitializationError,
|
||||
ServiceNotFoundError,
|
||||
ServiceProtocol,
|
||||
ServiceRegistry,
|
||||
cleanup_global_registry,
|
||||
get_global_registry,
|
||||
reset_global_registry,
|
||||
)
|
||||
|
||||
__all__ = [
|
||||
# Registry classes and functions
|
||||
"ServiceRegistry",
|
||||
"ServiceProtocol",
|
||||
"get_global_registry",
|
||||
"cleanup_global_registry",
|
||||
"reset_global_registry",
|
||||
# HTTP Service
|
||||
"HTTPClientService",
|
||||
"HTTPClientServiceConfig",
|
||||
# Lifecycle Management
|
||||
"ServiceLifecycleManager",
|
||||
"LifecycleError",
|
||||
"StartupError",
|
||||
"ShutdownError",
|
||||
"create_managed_registry",
|
||||
"create_fastapi_lifespan",
|
||||
# Configuration Management
|
||||
"ConfigurationManager",
|
||||
"ConfigurationError",
|
||||
"ConfigurationValidationError",
|
||||
"ConfigurationLoadError",
|
||||
"ServiceConfigMixin",
|
||||
"get_global_config_manager",
|
||||
"cleanup_global_config_manager",
|
||||
# Monitoring
|
||||
"ServiceMonitor",
|
||||
"HealthStatus",
|
||||
"ServiceMetrics",
|
||||
"SystemHealthReport",
|
||||
"setup_monitoring_for_registry",
|
||||
"log_alert_handler",
|
||||
"console_alert_handler",
|
||||
"check_http_connectivity",
|
||||
"check_database_connectivity",
|
||||
# DI Container
|
||||
"DIContainer",
|
||||
"auto_inject",
|
||||
"conditional_service",
|
||||
"container_scope",
|
||||
# Exceptions
|
||||
"ServiceError",
|
||||
"ServiceInitializationError",
|
||||
"ServiceNotFoundError",
|
||||
"CircularDependencyError",
|
||||
"DIError",
|
||||
"BindingNotFoundError",
|
||||
"InjectionError",
|
||||
# Factory functions
|
||||
"create_http_client_factory",
|
||||
"create_postgres_store_factory",
|
||||
"create_redis_cache_factory",
|
||||
"create_llm_client_factory",
|
||||
"create_vector_store_factory",
|
||||
"create_semantic_extraction_factory",
|
||||
# Service registration
|
||||
"register_core_services",
|
||||
"register_extraction_services",
|
||||
# Initialization helpers
|
||||
"initialize_essential_services",
|
||||
"initialize_all_services",
|
||||
# FastAPI integration
|
||||
"create_app_lifespan",
|
||||
"create_managed_app_lifespan",
|
||||
]
|
||||
src/biz_bud/core/services/config_manager.py (new file, 584 lines)
@@ -0,0 +1,584 @@
|
||||
"""Thread-safe configuration management for service architecture.
|
||||
|
||||
This module provides thread-safe configuration management that integrates
|
||||
with the ServiceRegistry and dependency injection system. It ensures that
|
||||
configuration loading, validation, and hot-reloading are handled safely
|
||||
across all services without race conditions.
|
||||
|
||||
Key Features:
|
||||
- Thread-safe configuration loading and caching
|
||||
- Hot configuration reloading without service restart
|
||||
- Configuration validation with Pydantic models
|
||||
- Service-specific configuration injection
|
||||
- Configuration change notifications
|
||||
- Secure configuration handling (secrets, environment variables)
|
||||
|
||||
Example:
|
||||
```python
|
||||
from biz_bud.core.services.config_manager import ConfigurationManager
|
||||
|
||||
# Initialize configuration manager
|
||||
config_manager = ConfigurationManager()
|
||||
|
||||
# Load configuration
|
||||
await config_manager.load_configuration("config.yaml")
|
||||
|
||||
# Get service-specific configuration
|
||||
http_config = config_manager.get_service_config("http_client")
|
||||
|
||||
# Register for configuration changes
|
||||
config_manager.register_change_handler("http_client", on_config_change)
|
||||
```
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from pathlib import Path
|
||||
from typing import TYPE_CHECKING, Any, Awaitable, Callable, TypeVar, cast
|
||||
|
||||
from pydantic import BaseModel, ValidationError
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from biz_bud.core.config.schemas import AppConfig
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
T = TypeVar("T", bound=BaseModel)
|
||||
# Support both sync and async config change handlers
|
||||
ConfigChangeHandler = (
|
||||
Callable[[str, Any, Any], None] | # Sync handler
|
||||
Callable[[str, Any, Any], Awaitable[None]] # Async handler
|
||||
)
|
||||
|
||||
|
||||
class ConfigurationError(Exception):
|
||||
"""Base exception for configuration-related errors."""
|
||||
pass
|
||||
|
||||
|
||||
class ConfigurationValidationError(ConfigurationError):
|
||||
"""Raised when configuration validation fails."""
|
||||
pass
|
||||
|
||||
|
||||
class ConfigurationLoadError(ConfigurationError):
|
||||
"""Raised when configuration loading fails."""
|
||||
pass
|
||||
|
||||
|
||||
class ConfigurationManager:
|
||||
"""Thread-safe configuration manager for service architecture.
|
||||
|
||||
The ConfigurationManager provides centralized, thread-safe configuration
|
||||
management that integrates with the ServiceRegistry and dependency
|
||||
injection system. It handles configuration loading, validation,
|
||||
hot-reloading, and change notifications.
|
||||
|
||||
Features:
|
||||
- Thread-safe configuration operations
|
||||
- Service-specific configuration extraction
|
||||
- Configuration validation with Pydantic models
|
||||
- Hot configuration reloading
|
||||
- Change notification system
|
||||
- Secure handling of sensitive configuration
|
||||
"""
|
||||
|
||||
def __init__(self, config: AppConfig | None = None) -> None:
|
||||
"""Initialize the configuration manager.
|
||||
|
||||
Args:
|
||||
config: Optional initial application configuration.
|
||||
"""
|
||||
self._config: AppConfig | None = config
|
||||
self._service_configs: dict[str, Any] = {}
|
||||
self._config_models: dict[str, type[BaseModel]] = {}
|
||||
self._change_handlers: dict[str, list[ConfigChangeHandler]] = {}
|
||||
|
||||
# Thread safety
|
||||
self._lock = asyncio.Lock()
|
||||
self._loading = False
|
||||
|
||||
# Hot reload support
|
||||
self._config_file_path: Path | None = None
|
||||
self._file_watcher_task: asyncio.Task[None] | None = None
|
||||
self._reload_enabled = False
|
||||
|
||||
async def load_configuration(
|
||||
self,
|
||||
config: AppConfig | str | Path,
|
||||
enable_hot_reload: bool = False,
|
||||
) -> None:
|
||||
"""Load application configuration.
|
||||
|
||||
Args:
|
||||
config: Application configuration object, file path, or Path object.
|
||||
enable_hot_reload: Enable hot reloading of configuration from file.
|
||||
|
||||
Raises:
|
||||
ConfigurationLoadError: If configuration loading fails.
|
||||
"""
|
||||
async with self._lock:
|
||||
if self._loading:
|
||||
raise ConfigurationLoadError("Configuration loading already in progress")
|
||||
|
||||
self._loading = True
|
||||
|
||||
try:
|
||||
if isinstance(config, (str, Path)):
|
||||
config_path = Path(config)
|
||||
if not config_path.exists():
|
||||
raise ConfigurationLoadError(f"Configuration file not found: {config_path}")
|
||||
|
||||
# Load configuration from file
|
||||
self._config = await self._load_from_file(config_path)
|
||||
self._config_file_path = config_path
|
||||
|
||||
if enable_hot_reload:
|
||||
await self._enable_hot_reload()
|
||||
|
||||
else:
|
||||
# Use provided configuration object
|
||||
self._config = config
|
||||
self._config_file_path = None
|
||||
|
||||
# Extract service-specific configurations
|
||||
await self._extract_service_configs()
|
||||
|
||||
logger.info("Configuration loaded successfully")
|
||||
|
||||
except Exception as e:
|
||||
raise ConfigurationLoadError(f"Failed to load configuration: {e}") from e
|
||||
finally:
|
||||
async with self._lock:
|
||||
self._loading = False
|
||||
|
||||
async def _load_from_file(self, config_path: Path) -> AppConfig:
|
||||
"""Load configuration from file.
|
||||
|
||||
Args:
|
||||
config_path: Path to configuration file.
|
||||
|
||||
Returns:
|
||||
Loaded application configuration.
|
||||
"""
|
||||
# This would typically use the existing config loader
|
||||
# For now, return a placeholder
|
||||
from biz_bud.core.config.loader import load_config
|
||||
|
||||
try:
|
||||
return await asyncio.to_thread(load_config, str(config_path))
|
||||
except Exception as e:
|
||||
raise ConfigurationLoadError(f"Failed to load config from {config_path}: {e}") from e
|
||||
|
||||
async def _extract_service_configs(self) -> None:
|
||||
"""Extract service-specific configurations from main config."""
|
||||
if self._config is None:
|
||||
return
|
||||
|
||||
# Extract configurations for known services
|
||||
service_extractors = {
|
||||
"http_client": self._extract_http_client_config,
|
||||
"database": self._extract_database_config,
|
||||
"redis": self._extract_redis_config,
|
||||
"llm": self._extract_llm_config,
|
||||
"vector_store": self._extract_vector_store_config,
|
||||
}
|
||||
|
||||
for service_name, extractor in service_extractors.items():
|
||||
try:
|
||||
service_config = await extractor()
|
||||
if service_config is not None:
|
||||
self._service_configs[service_name] = service_config
|
||||
logger.debug(f"Extracted configuration for {service_name}")
|
||||
except Exception as e:
|
||||
logger.warning(f"Failed to extract config for {service_name}: {e}")
|
||||
|
||||
async def _extract_http_client_config(self) -> dict[str, Any] | None:
|
||||
"""Extract HTTP client configuration."""
|
||||
if not self._config:
|
||||
return None
|
||||
|
||||
return {
|
||||
attr: getattr(self._config, attr, default)
|
||||
for attr, default in [
|
||||
("http_timeout", 30.0),
|
||||
("http_connect_timeout", 5.0),
|
||||
("http_connector_limit", 100),
|
||||
("http_connector_limit_per_host", 30),
|
||||
("http_keepalive_timeout", 30.0),
|
||||
("http_follow_redirects", True),
|
||||
("http_user_agent", "BizBud-Agent/1.0"),
|
||||
]
|
||||
}
|
||||
|
||||
async def _extract_database_config(self) -> dict[str, Any] | None:
|
||||
"""Extract database configuration."""
|
||||
return {} if self._config else None
|
||||
|
||||
async def _extract_redis_config(self) -> dict[str, Any] | None:
|
||||
"""Extract Redis configuration."""
|
||||
return {} if self._config else None
|
||||
|
||||
async def _extract_llm_config(self) -> dict[str, Any] | None:
|
||||
"""Extract LLM configuration."""
|
||||
return {} if self._config else None
|
||||
|
||||
async def _extract_vector_store_config(self) -> dict[str, Any] | None:
|
||||
"""Extract vector store configuration."""
|
||||
return {} if self._config else None
|
||||
|
||||
def register_service_config_model(
|
||||
self,
|
||||
service_name: str,
|
||||
config_model: type[T],
|
||||
) -> None:
|
||||
"""Register a Pydantic model for service configuration validation.
|
||||
|
||||
Args:
|
||||
service_name: Name of the service.
|
||||
config_model: Pydantic model class for validation.
|
||||
"""
|
||||
self._config_models[service_name] = config_model
|
||||
logger.debug(f"Registered config model for {service_name}")
|
||||
|
||||
def get_service_config(self, service_name: str) -> Any:
|
||||
"""Get configuration for a specific service.
|
||||
|
||||
Args:
|
||||
service_name: Name of the service.
|
||||
|
||||
Returns:
|
||||
Service configuration (validated if model is registered).
|
||||
|
||||
Raises:
|
||||
ConfigurationError: If service configuration is not found.
|
||||
ConfigurationValidationError: If configuration validation fails.
|
||||
"""
|
||||
if service_name not in self._service_configs:
|
||||
raise ConfigurationError(f"Configuration for service '{service_name}' not found")
|
||||
|
||||
config_data = self._service_configs[service_name]
|
||||
|
||||
# Validate with registered model if available
|
||||
if service_name in self._config_models:
|
||||
model_class = self._config_models[service_name]
|
||||
try:
|
||||
return model_class(**config_data)
|
||||
except ValidationError as e:
|
||||
raise ConfigurationValidationError(
|
||||
f"Configuration validation failed for {service_name}: {e}"
|
||||
) from e
|
||||
|
||||
return config_data
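A hedged sketch of the registration/validation flow described above, using a hypothetical Pydantic model for the HTTP client section (the field names mirror the keys produced by `_extract_http_client_config`):

```python
# Hedged sketch of registering a validation model for a service configuration.
from pydantic import BaseModel


class HttpClientSettings(BaseModel):  # hypothetical model, not part of this diff
    http_timeout: float = 30.0
    http_connector_limit: int = 100


async def setup_config() -> None:
    config_manager = ConfigurationManager()
    await config_manager.load_configuration("config.yaml")  # path is illustrative
    config_manager.register_service_config_model("http_client", HttpClientSettings)
    settings = config_manager.get_service_config("http_client")  # validated model instance
    print(settings.http_timeout)
```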
|
||||
|
||||
def register_change_handler(
|
||||
self,
|
||||
service_name: str,
|
||||
handler: ConfigChangeHandler,
|
||||
) -> None:
|
||||
"""Register a handler for configuration changes.
|
||||
|
||||
Args:
|
||||
service_name: Name of the service to watch.
|
||||
handler: Function to call when configuration changes.
|
||||
"""
|
||||
if service_name not in self._change_handlers:
|
||||
self._change_handlers[service_name] = []
|
||||
|
||||
self._change_handlers[service_name].append(handler)
|
||||
logger.debug(f"Registered change handler for {service_name}")
|
||||
|
||||
async def update_service_config(
|
||||
self,
|
||||
service_name: str,
|
||||
new_config: dict[str, Any],
|
||||
) -> None:
|
||||
"""Update configuration for a specific service.
|
||||
|
||||
Args:
|
||||
service_name: Name of the service.
|
||||
new_config: New configuration data.
|
||||
|
||||
Raises:
|
||||
ConfigurationValidationError: If new configuration is invalid.
|
||||
"""
|
||||
# Validate new configuration if model is registered
|
||||
if service_name in self._config_models:
|
||||
model_class = self._config_models[service_name]
|
||||
try:
|
||||
validated_config = model_class(**new_config)
|
||||
new_config = validated_config.model_dump()
|
||||
except ValidationError as e:
|
||||
raise ConfigurationValidationError(
|
||||
f"New configuration validation failed for {service_name}: {e}"
|
||||
) from e
|
||||
|
||||
async with self._lock:
|
||||
old_config = self._service_configs.get(service_name)
|
||||
self._service_configs[service_name] = new_config
|
||||
|
||||
# Notify change handlers
|
||||
await self._notify_change_handlers(service_name, old_config, new_config)
|
||||
|
||||
logger.info(f"Updated configuration for {service_name}")
|
||||
|
||||
async def _notify_change_handlers(
|
||||
self,
|
||||
service_name: str,
|
||||
old_config: Any,
|
||||
new_config: Any,
|
||||
) -> None:
|
||||
"""Notify registered change handlers of configuration changes.
|
||||
|
||||
Args:
|
||||
service_name: Name of the service.
|
||||
old_config: Previous configuration.
|
||||
new_config: New configuration.
|
||||
"""
|
||||
handlers = self._change_handlers.get(service_name, [])
|
||||
|
||||
for handler in handlers:
|
||||
try:
|
||||
await self._call_handler(handler, service_name, old_config, new_config)
|
||||
except Exception as e:
|
||||
logger.error(f"Configuration change handler failed for {service_name}: {e}")
|
||||
|
||||
async def _call_handler(
|
||||
self,
|
||||
handler: ConfigChangeHandler,
|
||||
service_name: str,
|
||||
old_config: Any,
|
||||
new_config: Any
|
||||
) -> None:
|
||||
"""Call a configuration change handler, whether sync or async."""
|
||||
if asyncio.iscoroutinefunction(handler):
|
||||
# Type narrowing: handler is async
|
||||
async_handler = cast(Callable[[str, Any, Any], Awaitable[None]], handler)
|
||||
await async_handler(service_name, old_config, new_config)
|
||||
else:
|
||||
# Type narrowing: handler is sync
|
||||
sync_handler = cast(Callable[[str, Any, Any], None], handler)
|
||||
sync_handler(service_name, old_config, new_config)
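Handlers may be plain functions or coroutines; `_call_handler` dispatches on `iscoroutinefunction`. A hedged registration example, assuming a `config_manager` instance and the module-level `logger`:

```python
# Hedged example: both sync and async handlers are accepted for the same service.
async def on_http_config_change(service: str, old_cfg: object, new_cfg: object) -> None:
    logger.info(f"{service} configuration changed: {old_cfg!r} -> {new_cfg!r}")


def log_change_sync(service: str, old_cfg: object, new_cfg: object) -> None:
    print(f"{service} changed")


config_manager.register_change_handler("http_client", on_http_config_change)
config_manager.register_change_handler("http_client", log_change_sync)
```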
|
||||
|
||||
    async def _enable_hot_reload(self) -> None:
        """Enable hot reloading of configuration from file."""
        if not self._config_file_path or self._reload_enabled:
            return

        self._reload_enabled = True

        async def file_watcher() -> None:
            """Watch configuration file for changes."""
            if not self._config_file_path:
                return

            # Type checker assertion - we've already verified it's not None above
            config_file_path = self._config_file_path
            assert config_file_path is not None

            last_modified = config_file_path.stat().st_mtime

            while self._reload_enabled:
                try:
                    current_modified = config_file_path.stat().st_mtime

                    if current_modified > last_modified:
                        logger.info("Configuration file changed, reloading...")
                        await self._reload_configuration()
                        last_modified = current_modified

                except Exception as e:
                    logger.error(f"Error checking configuration file: {e}")

                await asyncio.sleep(1.0)  # Check every second

        self._file_watcher_task = asyncio.create_task(file_watcher())
        logger.info("Hot reload enabled for configuration")

    async def _reload_configuration(self) -> None:
        """Reload configuration from file."""
        if not self._config_file_path:
            return

        try:
            # Load new configuration
            new_config = await self._load_from_file(self._config_file_path)

            # Store old service configs for comparison
            old_service_configs = dict(self._service_configs)

            # Update main config and extract service configs
            self._config = new_config
            await self._extract_service_configs()

            # Notify handlers of changed service configurations
            for service_name, new_service_config in self._service_configs.items():
                old_service_config = old_service_configs.get(service_name)

                if old_service_config != new_service_config:
                    await self._notify_change_handlers(
                        service_name, old_service_config, new_service_config
                    )

            logger.info("Configuration reloaded successfully")

        except Exception as e:
            logger.error(f"Failed to reload configuration: {e}")

    async def disable_hot_reload(self) -> None:
        """Disable hot reloading of configuration."""
        self._reload_enabled = False

        if self._file_watcher_task:
            self._file_watcher_task.cancel()
            try:
                await self._file_watcher_task
            except asyncio.CancelledError:
                # Cancellation is the expected outcome here; swallow it so the
                # watcher task reference is always cleared below.
                pass
            self._file_watcher_task = None

        logger.info("Hot reload disabled")

    def get_app_config(self) -> AppConfig:
        """Get the main application configuration.

        Returns:
            The application configuration object.

        Raises:
            ConfigurationError: If no configuration is loaded.
        """
        if self._config is None:
            raise ConfigurationError("No configuration loaded")
        return self._config

    def get_configuration_info(self) -> dict[str, Any]:
        """Get information about loaded configuration.

        Returns:
            Dictionary containing configuration metadata.
        """
        return {
            "config_loaded": self._config is not None,
            "config_file_path": str(self._config_file_path) if self._config_file_path else None,
            "hot_reload_enabled": self._reload_enabled,
            "service_configs": list(self._service_configs.keys()),
            "registered_models": list(self._config_models.keys()),
            "change_handlers": {
                service: len(handlers)
                for service, handlers in self._change_handlers.items()
            },
        }

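    # Illustrative shape of the returned metadata (all values depend on what has
    # been loaded and registered; the "scraper" entries are examples only):
    #
    #     {
    #         "config_loaded": True,
    #         "config_file_path": "/path/to/config.yaml",
    #         "hot_reload_enabled": False,
    #         "service_configs": ["scraper"],
    #         "registered_models": ["scraper"],
    #         "change_handlers": {"scraper": 1},
    #     }
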
    async def cleanup(self) -> None:
        """Clean up the configuration manager."""
        await self.disable_hot_reload()

        # Clear all data
        self._service_configs.clear()
        self._config_models.clear()
        self._change_handlers.clear()
        self._config = None
        self._config_file_path = None

        logger.info("Configuration manager cleaned up")


# Global configuration manager instance
_global_config_manager: ConfigurationManager | None = None
_config_manager_lock = asyncio.Lock()


async def get_global_config_manager() -> ConfigurationManager:
    """Get or create the global configuration manager.

    Returns:
        The global ConfigurationManager instance.
    """
    global _global_config_manager

    if _global_config_manager is not None:
        return _global_config_manager

    async with _config_manager_lock:
        if _global_config_manager is None:
            _global_config_manager = ConfigurationManager()
            logger.info("Global configuration manager initialized")

        return _global_config_manager


async def cleanup_global_config_manager() -> None:
    """Clean up the global configuration manager."""
    global _global_config_manager

    if _global_config_manager is not None:
        await _global_config_manager.cleanup()
        _global_config_manager = None
        logger.info("Global configuration manager cleaned up")


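# Hedged usage sketch for the module-level singleton helpers, typically wired
# into application startup and shutdown (call sites are illustrative):
#
#     manager = await get_global_config_manager()
#     info = manager.get_configuration_info()
#     await cleanup_global_config_manager()
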
# Integration utilities for service architecture
class ServiceConfigMixin:
    """Mixin for services that need configuration management integration.

    This mixin provides convenient methods for services to integrate
    with the ConfigurationManager for dynamic configuration updates.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._config_manager: ConfigurationManager | None = None
        self._service_name: str | None = None

    async def setup_config_integration(
        self,
        config_manager: ConfigurationManager,
        service_name: str,
    ) -> None:
        """Set up integration with configuration manager.

        Args:
            config_manager: Configuration manager instance.
            service_name: Name of this service for configuration purposes.
        """
        self._config_manager = config_manager
        self._service_name = service_name

        # Register for configuration changes
        config_manager.register_change_handler(
            service_name, self._on_config_change
        )

    async def _on_config_change(
        self,
        service_name: str,
        old_config: Any,
        new_config: Any,
    ) -> None:
        """Handle configuration changes.

        This method should be overridden by services that need to
        respond to configuration changes.

        Args:
            service_name: Name of the service.
            old_config: Previous configuration.
            new_config: New configuration.
        """
        logger.info(f"Configuration changed for {service_name}")
        # Services should override this method to handle config changes

    def get_current_config(self) -> Any:
        """Get the current configuration for this service.

        Returns:
            Current service configuration.
        """
        if not self._config_manager or not self._service_name:
            raise ConfigurationError("Configuration integration not set up")

        return self._config_manager.get_service_config(self._service_name)
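

# Hedged sketch of a service adopting ServiceConfigMixin. MyScraperService and
# its config handling are illustrative; only the mixin methods above are real.
#
#     class MyScraperService(ServiceConfigMixin):
#         async def _on_config_change(self, service_name, old_config, new_config):
#             self._max_concurrent = (new_config or {}).get("max_concurrent", 5)
#
#     service = MyScraperService()
#     await service.setup_config_integration(manager, "scraper")
#     current = service.get_current_config()
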
src/biz_bud/core/services/container.py (new file, 572 lines)
@@ -0,0 +1,572 @@
"""Dependency injection container for advanced service composition.
|
||||
|
||||
This module provides an advanced dependency injection container that works
|
||||
with the ServiceRegistry to provide sophisticated service composition,
|
||||
configuration injection, and lifecycle management.
|
||||
|
||||
The DIContainer extends the basic ServiceRegistry functionality with:
|
||||
- Configuration injection and binding
|
||||
- Service composition and decoration
|
||||
- Conditional service registration
|
||||
- Service interception and AOP capabilities
|
||||
- Multi-tenant service isolation
|
||||
|
||||
Example:
|
||||
```python
|
||||
from biz_bud.core.services import ServiceRegistry
|
||||
from biz_bud.core.services.container import DIContainer
|
||||
|
||||
# Set up container with registry
|
||||
registry = ServiceRegistry(config)
|
||||
container = DIContainer(registry)
|
||||
|
||||
# Bind configuration values
|
||||
container.bind_value("api_key", config.openai_api_key)
|
||||
container.bind_value("timeout", 30.0)
|
||||
|
||||
# Register services with injected configuration
|
||||
container.register_with_injection(
|
||||
LLMClient,
|
||||
lambda api_key, timeout: create_llm_client(api_key, timeout),
|
||||
requires=["api_key", "timeout"]
|
||||
)
|
||||
|
||||
# Use services with automatic dependency injection
|
||||
async with container.get_service(LLMClient) as client:
|
||||
response = await client.chat("Hello")
|
||||
```
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import asyncio
|
||||
from contextlib import asynccontextmanager
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Any,
|
||||
AsyncContextManager,
|
||||
AsyncIterator,
|
||||
Awaitable,
|
||||
Callable,
|
||||
TypeVar,
|
||||
cast,
|
||||
)
|
||||
|
||||
from biz_bud.logging import get_logger
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from biz_bud.core.services.registry import ServiceRegistry
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
T = TypeVar("T")
|
||||
|
||||
|
||||
class DIError(Exception):
|
||||
"""Base exception for dependency injection errors."""
|
||||
pass
|
||||
|
||||
|
||||
class BindingNotFoundError(DIError):
|
||||
"""Raised when a required binding is not found."""
|
||||
pass
|
||||
|
||||
|
||||
class InjectionError(DIError):
|
||||
"""Raised when dependency injection fails."""
|
||||
pass
|
||||
|
||||
|
||||
class DIContainer:
|
||||
"""Advanced dependency injection container.
|
||||
|
||||
The DIContainer provides sophisticated dependency injection capabilities
|
||||
on top of the ServiceRegistry, including configuration injection,
|
||||
service composition, and advanced binding strategies.
|
||||
|
||||
Features:
|
||||
- Value binding for configuration injection
|
||||
- Factory binding for complex service creation
|
||||
- Conditional registration based on environment
|
||||
- Service decoration and interception
|
||||
- Multi-tenant service isolation
|
||||
"""
|
||||
|
||||
def __init__(self, registry: ServiceRegistry) -> None:
|
||||
"""Initialize the DI container.
|
||||
|
||||
Args:
|
||||
registry: ServiceRegistry to use for service management.
|
||||
"""
|
||||
self.registry = registry
|
||||
|
||||
# Binding storage
|
||||
self._value_bindings: dict[str, Any] = {}
|
||||
self._factory_bindings: dict[str, Callable[[], Any]] = {}
|
||||
self._async_factory_bindings: dict[str, Callable[[], AsyncContextManager[Any]]] = {}
|
||||
|
||||
# Service decorators and interceptors
|
||||
self._decorators: dict[type[Any], list[Callable[[Any], Any]]] = {}
|
||||
self._interceptors: dict[type[Any], list[Callable[[Any, str, tuple[Any, ...]], Any]]] = {}
|
||||
|
||||
# Conditional registration
|
||||
self._conditions: dict[str, Callable[[], bool]] = {}
|
||||
|
||||
    def bind_value(self, name: str, value: Any) -> None:
        """Bind a value for dependency injection.

        Args:
            name: Name of the binding.
            value: Value to bind.

        Example:
            ```python
            container.bind_value("api_key", "sk-...")
            container.bind_value("timeout", 30.0)
            container.bind_value("debug", True)
            ```
        """
        self._value_bindings[name] = value
        logger.debug(f"Bound value '{name}': {type(value).__name__}")

    def bind_factory(self, name: str, factory: Callable[[], Any]) -> None:
        """Bind a factory function for dependency injection.

        Args:
            name: Name of the binding.
            factory: Factory function that creates the value.

        Example:
            ```python
            container.bind_factory("database_url", lambda: build_db_url(config))
            container.bind_factory("logger", lambda: get_logger(__name__))
            ```
        """
        self._factory_bindings[name] = factory
        logger.debug(f"Bound factory '{name}'")

    def bind_async_factory(self, name: str, factory: Callable[[], AsyncContextManager[Any]]) -> None:
        """Bind an async factory for dependency injection.

        Args:
            name: Name of the binding.
            factory: Async factory function that yields the value.

        Example:
            ```python
            @asynccontextmanager
            async def create_temp_file():
                with tempfile.NamedTemporaryFile() as f:
                    yield f.name

            container.bind_async_factory("temp_file", create_temp_file)
            ```
        """
        self._async_factory_bindings[name] = factory
        logger.debug(f"Bound async factory '{name}'")

    def register_condition(self, name: str, condition: Callable[[], bool]) -> None:
        """Register a condition for conditional service registration.

        Args:
            name: Name of the condition.
            condition: Function that returns True if condition is met.

        Example:
            ```python
            container.register_condition("development", lambda: config.debug)
            container.register_condition("redis_available", lambda: check_redis())
            ```
        """
        self._conditions[name] = condition
        logger.debug(f"Registered condition '{name}'")

    def check_condition(self, name: str) -> bool:
        """Check if a condition is met.

        Args:
            name: Name of the condition to check.

        Returns:
            True if the condition is met, False otherwise.

        Raises:
            BindingNotFoundError: If the condition is not registered.
        """
        if name not in self._conditions:
            raise BindingNotFoundError(f"Condition '{name}' not registered")

        return self._conditions[name]()

    async def resolve_dependencies(self, requires: list[str]) -> dict[str, Any]:
        """Resolve required dependencies for injection.

        Args:
            requires: List of dependency names to resolve.

        Returns:
            Dictionary mapping dependency names to resolved values.

        Raises:
            BindingNotFoundError: If a required dependency is not bound.
            InjectionError: If dependency resolution fails.
        """
        resolved: dict[str, Any] = {}

        for name in requires:
            try:
                # Try value binding first
                if name in self._value_bindings:
                    resolved[name] = self._value_bindings[name]
                    continue

                # Try factory binding
                if name in self._factory_bindings:
                    resolved[name] = self._factory_bindings[name]()
                    continue

                # Try async factory binding
                if name in self._async_factory_bindings:
                    factory = self._async_factory_bindings[name]
                    # Note: the context manager is exited before the value is used,
                    # so the yielded value must remain valid outside the context.
                    async with factory() as value:
                        resolved[name] = value
                    continue

                # Dependency not found
                raise BindingNotFoundError(f"No binding found for '{name}'")

            except BindingNotFoundError:
                # Propagate as-is so callers see the documented exception type
                raise
            except Exception as e:
                raise InjectionError(f"Failed to resolve dependency '{name}': {e}") from e

        return resolved

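    # Hedged usage sketch: resolving bound names directly (binding names are
    # illustrative; see bind_value/bind_factory above):
    #
    #     container.bind_value("api_key", "sk-...")
    #     container.bind_factory("timeout", lambda: 30.0)
    #     deps = await container.resolve_dependencies(["api_key", "timeout"])
    #     # deps == {"api_key": "sk-...", "timeout": 30.0}
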
    def register_with_injection(
        self,
        service_type: type[T],
        factory: Callable[..., Callable[[], AsyncContextManager[T]]],
        requires: list[str] | None = None,
        conditions: list[str] | None = None,
    ) -> None:
        """Register a service with automatic dependency injection.

        Args:
            service_type: The service type to register.
            factory: Factory function that accepts resolved dependencies.
            requires: List of dependency names to inject.
            conditions: List of conditions that must be met for registration.

        Example:
            ```python
            container.register_with_injection(
                LLMClient,
                lambda api_key, timeout: create_llm_client(api_key, timeout),
                requires=["api_key", "timeout"],
                conditions=["llm_enabled"]
            )
            ```
        """
        requires = requires or []
        conditions = conditions or []

        # Check conditions before registration
        for condition in conditions:
            if not self.check_condition(condition):
                logger.info(f"Skipping registration of {service_type.__name__} - condition '{condition}' not met")
                return

        # Create factory with dependency injection
        @asynccontextmanager
        async def injected_factory() -> AsyncIterator[T]:
            # Resolve dependencies
            dependencies = await self.resolve_dependencies(requires)

            # Call original factory with injected dependencies
            factory_instance = factory(**dependencies)
            async with factory_instance() as service:
                yield self._apply_decorators(service_type, service)

        # Register with the underlying registry
        self.registry.register_factory(service_type, injected_factory)

        logger.info(f"Registered {service_type.__name__} with injection (requires: {requires})")

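    # Note on the factory contract as implemented above: factory(**deps) is called
    # with the resolved dependencies and must return a zero-argument callable whose
    # result is usable as an async context manager yielding the service instance.
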
    def add_decorator(
        self,
        service_type: type[Any],
        decorator: Callable[[Any], Any],
    ) -> None:
        """Add a decorator to be applied to service instances.

        Args:
            service_type: The service type to decorate.
            decorator: Function that decorates the service instance.

        Example:
            ```python
            def add_logging(service):
                # Wrap service methods with logging
                return LoggingProxy(service)

            container.add_decorator(HTTPClient, add_logging)
            ```
        """
        if service_type not in self._decorators:
            self._decorators[service_type] = []

        self._decorators[service_type].append(decorator)
        logger.debug(f"Added decorator for {service_type.__name__}")

    def add_interceptor(
        self,
        service_type: type[Any],
        interceptor: Callable[[Any, str, tuple[Any, ...]], Any],
    ) -> None:
        """Add an interceptor for method calls on service instances.

        Args:
            service_type: The service type to intercept.
            interceptor: Function that intercepts method calls.

        Example:
            ```python
            def timing_interceptor(service, method_name, args):
                start = time.time()
                try:
                    return getattr(service, method_name)(*args)
                finally:
                    logger.info(f"{method_name} took {time.time() - start:.2f}s")

            container.add_interceptor(HTTPClient, timing_interceptor)
            ```
        """
        if service_type not in self._interceptors:
            self._interceptors[service_type] = []

        self._interceptors[service_type].append(interceptor)
        logger.debug(f"Added interceptor for {service_type.__name__}")

    def _apply_decorators(self, service_type: type[Any], service: Any) -> Any:
        """Apply registered decorators to a service instance.

        Args:
            service_type: The service type.
            service: The service instance.

        Returns:
            The decorated service instance.
        """
        decorated_service = service

        # Apply decorators in order
        for decorator in self._decorators.get(service_type, []):
            try:
                decorated_service = decorator(decorated_service)
            except Exception as e:
                logger.error(f"Failed to apply decorator to {service_type.__name__}: {e}")
                # Continue with undecorated service
                break

        # Apply interceptors by wrapping the service
        if service_type in self._interceptors:
            decorated_service = self._create_intercepted_proxy(
                decorated_service, self._interceptors[service_type]
            )

        return decorated_service

    def _create_intercepted_proxy(self, service: Any, interceptors: list[Callable[[Any, str, tuple[Any, ...]], Any]]) -> Any:
        """Create a proxy that applies interceptors to method calls.

        Args:
            service: The service to proxy.
            interceptors: List of interceptor functions.

        Returns:
            A proxy object that applies interceptors.
        """
        class InterceptedProxy:
            def __init__(self, target: Any, interceptors: list[Callable[[Any, str, tuple[Any, ...]], Any]]):
                self._target = target
                self._interceptors = interceptors

            def __getattr__(self, name: str) -> Any:
                attr = getattr(self._target, name)

                # Only intercept callable methods
                if not callable(attr):
                    return attr

                def intercepted_method(*args: Any, **kwargs: Any) -> Any:
                    # Apply each interceptor
                    for interceptor in self._interceptors:
                        try:
                            result = interceptor(self._target, name, args)
                            if result is not None:
                                return result
                        except Exception as e:
                            logger.error(f"Interceptor failed for {name}: {e}")

                    # Call original method if no interceptor handled it
                    return attr(*args, **kwargs)

                return intercepted_method

        return InterceptedProxy(service, interceptors)

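    # Interceptor semantics, per the proxy above: interceptors run in registration
    # order, and the first one to return a non-None value short-circuits the call;
    # otherwise the original attribute is invoked. Only positional args are handed
    # to interceptors; keyword arguments reach the original method only.
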
    @asynccontextmanager
    async def get_service(self, service_type: type[T]) -> AsyncIterator[T]:
        """Get a service instance with dependency injection applied.

        This delegates to the underlying ServiceRegistry but ensures
        any dependency injection and decorations are applied.

        Args:
            service_type: The service type to retrieve.

        Yields:
            The service instance with dependencies injected.
        """
        async with self.registry.get_service(service_type) as service:
            yield service

    async def cleanup_all(self) -> None:
        """Clean up the container and all managed services."""
        await self.registry.cleanup_all()

        # Clear bindings
        self._value_bindings.clear()
        self._factory_bindings.clear()
        self._async_factory_bindings.clear()
        self._decorators.clear()
        self._interceptors.clear()
        self._conditions.clear()

        logger.info("DI container cleaned up")

    def get_binding_info(self) -> dict[str, Any]:
        """Get information about current bindings and registrations.

        Returns:
            Dictionary containing binding and registration information.
        """
        return {
            "value_bindings": list(self._value_bindings.keys()),
            "factory_bindings": list(self._factory_bindings.keys()),
            "async_factory_bindings": list(self._async_factory_bindings.keys()),
            "decorators": {
                service_type.__name__: len(decorators)
                for service_type, decorators in self._decorators.items()
            },
            "interceptors": {
                service_type.__name__: len(interceptors)
                for service_type, interceptors in self._interceptors.items()
            },
            "conditions": list(self._conditions.keys()),
            "registry_info": self.registry.get_service_info(),
        }
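
    # Illustrative output shape (keys as built above; the names are examples only):
    #
    #     {
    #         "value_bindings": ["api_key", "timeout"],
    #         "factory_bindings": [],
    #         "async_factory_bindings": [],
    #         "decorators": {"HTTPClient": 1},
    #         "interceptors": {},
    #         "conditions": ["redis_available"],
    #         "registry_info": {...},
    #     }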


# Utility functions for common DI patterns
def auto_inject(func: Callable[..., T]) -> Callable[..., T]:
    """Decorator for automatic dependency injection based on parameter names.

    This decorator inspects function parameters and automatically resolves
    dependencies from the current DI container context.

    Args:
        func: Function to decorate with auto-injection.

    Returns:
        Decorated function with automatic dependency injection.

    Example:
        ```python
        @auto_inject
        async def process_data(api_key: str, timeout: float, data: dict):
            # api_key and timeout will be automatically injected
            # data must be provided when calling the function
            pass
        ```
    """
    # TODO: Get function signature for dependency injection
    # sig = inspect.signature(func)

    if asyncio.iscoroutinefunction(func):
        # Async function wrapper
        async def async_wrapper(*args: Any, **kwargs: Any) -> Any:
            # This would need access to the current DI container
            # Implementation would depend on how we track the current container
            result = await cast(Awaitable[Any], func(*args, **kwargs))
            return result
        return cast(Callable[..., T], async_wrapper)
    else:
        # Sync function wrapper
        def sync_wrapper(*args: Any, **kwargs: Any) -> T:
            # This would need access to the current DI container
            # Implementation would depend on how we track the current container
            result = func(*args, **kwargs)
            return result
        return sync_wrapper
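
    # Note: as written, both wrappers are pass-throughs; the container lookup is
    # still a TODO, so @auto_inject currently has no injection side effects.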


def conditional_service(condition_name: str):
    """Decorator for conditional service registration.

    Args:
        condition_name: Name of the condition to check.

    Returns:
        Decorator that conditionally registers the service.

    Example:
        ```python
        @conditional_service("redis_available")
        class RedisCache:
            pass
        ```
    """
    def decorator(cls: type[T]) -> type[T]:
        # Mark the class with condition metadata
        cls._di_condition = condition_name  # type: ignore
        return cls

    return decorator


# Context manager for DI container scoping
@asynccontextmanager
async def container_scope(container: DIContainer) -> AsyncIterator[DIContainer]:
    """Create a scoped DI container context.

    This allows for creating isolated dependency injection scopes
    for testing or multi-tenant applications.

    Args:
        container: Base container to create scope from.

    Yields:
        Scoped container instance.

    Example:
        ```python
        async with container_scope(base_container) as scoped:
            scoped.bind_value("tenant_id", "tenant-123")
            async with scoped.get_service(TenantService) as service:
                # Service has tenant-specific configuration
                pass
        ```
    """
    # Create a copy of the container for scoping
    scoped_container = DIContainer(container.registry)

    # Copy bindings from parent container
    scoped_container._value_bindings = container._value_bindings.copy()
    scoped_container._factory_bindings = container._factory_bindings.copy()
    scoped_container._async_factory_bindings = container._async_factory_bindings.copy()
    scoped_container._conditions = container._conditions.copy()

    try:
        yield scoped_container
    finally:
        # Cleanup is handled by the underlying registry
        pass
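

# Scoping note, per the copy logic above: a scoped container shares the parent's
# ServiceRegistry and copies value/factory/async-factory/condition bindings, but
# not decorators or interceptors. Hedged sketch (TenantService is illustrative):
#
#     async with container_scope(container) as scoped:
#         scoped.bind_value("tenant_id", "tenant-123")
#         async with scoped.get_service(TenantService) as service:
#             ...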