- Expose API Gateway port 4000 in docker-compose for local dev
- Enable dynamic API_GATEWAY_URL in Next.js config
- Add a hard 64K context limit in report-generator to avoid LLM context-length errors (a minimal sketch follows this list)
- Add backend API readiness report
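A minimal sketch of the 64K context cap mentioned above, assuming a character-based budget enforced just before the prompt is sent to the LLM; the constant and function name are illustrative, and the real service may budget by tokens instead:

```rust
/// Illustrative only: report-generator may count tokens rather than characters.
const MAX_CONTEXT_CHARS: usize = 64 * 1024;

/// Hard-cap the assembled context so the LLM request never exceeds the model window.
fn enforce_context_limit(context: &str) -> &str {
    if context.len() <= MAX_CONTEXT_CHARS {
        return context;
    }
    // Back off to a char boundary so we never split a multi-byte UTF-8 sequence.
    let mut cut = MAX_CONTEXT_CHARS;
    while !context.is_char_boundary(cut) {
        cut -= 1;
    }
    &context[..cut]
}
```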
- Fix `simple_test_analysis` template in E2E test setup to align with Orchestrator's data fetch logic.
- Implement and verify additional E2E scenarios:
  - Scenario C: Partial Provider Failure (verified the error-propagation fix in the Orchestrator).
  - Scenario D: Invalid Symbol input.
  - Scenario E: Analysis Module failure.
- Update `WorkflowStateMachine::handle_report_failed` to scope error broadcasting to the specific failed task, rather than swallowing the error or broadcasting the failure to every task (see the sketch after this list).
- Update testing strategy documentation to reflect completed Phase 4 testing.
- Skip Scenario B (Orchestrator Restart), since workflow persistence is not yet implemented (persistence has been deliberately deferred).
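A hedged sketch of the `handle_report_failed` change mentioned above: the failure is routed to the broadcast channel of the one task that failed rather than to every subscriber. The field names and channel types are assumptions, not the actual `WorkflowStateMachine` internals:

```rust
use std::collections::HashMap;
use tokio::sync::broadcast;

// Hypothetical shape of the per-task state; the real struct differs.
struct TaskState {
    error_tx: broadcast::Sender<String>,
}

struct WorkflowStateMachine {
    tasks: HashMap<String, TaskState>,
}

impl WorkflowStateMachine {
    fn handle_report_failed(&mut self, task_id: &str, error: String) {
        match self.tasks.get(task_id) {
            // Scope the failure to the task that actually failed.
            Some(task) => {
                let _ = task.error_tx.send(error);
            }
            // Surface unknown task ids instead of dropping the error silently.
            None => tracing::warn!(task_id, %error, "report failed for unknown task"),
        }
    }
}
```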
- Enhance LlmClient to handle malformed endpoint URLs and HTML error responses (see the sketch after this list)
- Improve logging in report-generator-service worker
- Update frontend API routes and hooks for analysis
- Update various service configurations and persistence logic
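A sketch of the LlmClient hardening mentioned above, assuming a reqwest-based client; the error type and function name are illustrative. The idea is to validate the endpoint URL up front and to detect HTML error pages (e.g. from a misconfigured reverse proxy) before attempting to parse JSON:

```rust
use reqwest::Url;

#[derive(Debug, thiserror::Error)]
enum LlmError {
    #[error("invalid LLM endpoint URL: {0}")]
    InvalidUrl(String),
    #[error("LLM endpoint returned non-JSON (HTTP {status}): {snippet}")]
    HtmlResponse { status: u16, snippet: String },
    #[error(transparent)]
    Http(#[from] reqwest::Error),
    #[error("failed to parse LLM response: {0}")]
    Parse(#[from] serde_json::Error),
}

async fn post_chat(endpoint: &str, body: &serde_json::Value) -> Result<serde_json::Value, LlmError> {
    // Reject malformed URLs up front so misconfiguration produces a clear error.
    let url = Url::parse(endpoint).map_err(|e| LlmError::InvalidUrl(e.to_string()))?;

    let resp = reqwest::Client::new().post(url).json(body).send().await?;
    let status = resp.status().as_u16();
    let text = resp.text().await?;

    // Gateways and proxies often answer with an HTML error page; report that
    // explicitly instead of failing later with a cryptic JSON parse error.
    if text.trim_start().starts_with('<') {
        let snippet = text.chars().take(200).collect();
        return Err(LlmError::HtmlResponse { status, snippet });
    }
    Ok(serde_json::from_str(&text)?)
}
```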
- Backend: Add `template_id` to DataRequest and AnalysisResult metadata (see the sketch after this list)
- Frontend: Add TemplateSelector to Report page
- Frontend: Refactor ReportPage to use AnalysisTemplateSets for dynamic tabs
- Frontend: Strict mode for report rendering based on template_id
- Cleanup: Remove legacy analysis config hooks
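A minimal sketch of the `template_id` contract change mentioned above; field names other than `template_id`, and the use of `Uuid`, are assumptions about `common-contracts`:

```rust
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DataRequest {
    pub symbol: String,
    /// Which AnalysisTemplateSet this request was issued for, so downstream
    /// services and the frontend can group results per template.
    pub template_id: Uuid,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct AnalysisResultMetadata {
    pub module_id: String,
    pub template_id: Uuid,
}
```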
- Tushare: Add a /test endpoint to verify API token validity by fetching a small dataset.
- AlphaVantage: Implement a custom HTTP transport that handles the MCP server's 400 Bad Request response on the SSE endpoint gracefully by degrading to POST-only mode (a sketch follows this list).
- AlphaVantage: Add a /test endpoint that uses `list_tools` to verify the MCP connection.
- AlphaVantage: Update configuration polling to support dynamic API URLs.
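An illustrative sketch of the AlphaVantage SSE degradation described above, assuming a reqwest-based probe and an `/sse` path; the transport shape and how it plugs into the MCP client are assumptions and are not shown here:

```rust
use reqwest::Client;

struct McpHttpTransport {
    client: Client,
    base_url: String,
    /// Cleared when the server rejects the SSE endpoint; all traffic then
    /// goes over plain POST requests instead.
    sse_enabled: bool,
}

impl McpHttpTransport {
    async fn connect(base_url: String) -> Self {
        let client = Client::new();
        // Probe the SSE endpoint; some MCP servers answer 400 Bad Request here,
        // in which case we degrade to POST-only mode rather than failing outright.
        let sse_enabled = matches!(
            client
                .get(format!("{base_url}/sse"))
                .header("Accept", "text/event-stream")
                .send()
                .await,
            Ok(resp) if resp.status().is_success()
        );
        Self { client, base_url, sse_enabled }
    }
}
```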
This commit introduces a comprehensive, template-based analysis orchestration system, refactoring the entire analysis generation workflow from the ground up.
Key Changes:
1. **Backend Architecture (`report-generator-service`):**
* Replaced the naive analysis workflow with a robust orchestrator based on a Directed Acyclic Graph (DAG) of module dependencies.
* Implemented a full topological sort (via `petgraph`) to determine the correct execution order and detect circular dependencies (a `petgraph` sketch follows this list).
2. **Data Models (`common-contracts`, `data-persistence-service`):**
* Introduced the concept of `AnalysisTemplateSets` to allow for multiple, independent, and configurable analysis workflows.
* Created a new `analysis_results` table to persist the output of each module for every analysis run, ensuring traceability.
* Implemented a file-free data seeding mechanism to populate default analysis templates on service startup.
3. **API Layer (`api-gateway`):**
* Added a new asynchronous endpoint (`POST /analysis-requests/{symbol}`) that triggers analysis workflows via NATS messages (a publishing sketch follows this list).
* Updated all configuration endpoints to support the new `AnalysisTemplateSets` model.
4. **Frontend UI (`/config`, `/query`):**
* Completely refactored the "Analysis Config" page into a two-level management UI for "Template Sets" and the "Modules" within them, supporting full CRUD operations.
* Updated the "Query" page to allow users to select which analysis template to use when generating a report.
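A minimal sketch of the DAG ordering in (1), using `petgraph`'s `toposort`; the module representation and error handling are illustrative, while the real orchestrator builds the graph from each template set's declared dependencies:

```rust
use std::collections::HashMap;

use petgraph::algo::toposort;
use petgraph::graph::DiGraph;

/// Given (module, dependencies) pairs, return an execution order in which every
/// module runs after its dependencies, or an error naming a module in a cycle.
fn execution_order(modules: &[(&str, Vec<&str>)]) -> Result<Vec<String>, String> {
    let mut graph = DiGraph::<&str, ()>::new();
    let mut nodes = HashMap::new();
    for (name, _) in modules {
        nodes.insert(*name, graph.add_node(*name));
    }
    // Edges point from dependency to dependent, so the sort yields dependencies first.
    for (name, deps) in modules {
        for dep in deps {
            graph.add_edge(nodes[dep], nodes[name], ());
        }
    }
    toposort(&graph, None)
        .map(|order| order.into_iter().map(|idx| graph[idx].to_string()).collect())
        .map_err(|cycle| format!("circular dependency involving `{}`", graph[cycle.node_id()]))
}
```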
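And a hedged sketch of the asynchronous trigger in (3), assuming an axum handler backed by `async-nats`; the subject name and payload shape are assumptions:

```rust
use axum::extract::{Path, State};
use axum::http::StatusCode;
use serde_json::json;

/// POST /analysis-requests/{symbol}: publish a NATS message and return 202
/// immediately; the workflow itself runs asynchronously in report-generator-service.
async fn trigger_analysis(
    State(nats): State<async_nats::Client>,
    Path(symbol): Path<String>,
) -> Result<StatusCode, StatusCode> {
    let payload = json!({ "symbol": symbol }).to_string();
    nats.publish("analysis.requests".to_string(), payload.into())
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
    Ok(StatusCode::ACCEPTED)
}
```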
This new architecture provides a powerful, flexible, and robust foundation for all future development of our intelligent analysis capabilities.