Compare commits


13 Commits

Author SHA1 Message Date
Lv, Qi
21155bc4f8 feat(realtime): wire up frontend real-time quotes and round out the backend cache
Frontend: add the RealTimeQuoteResponse type; add a useRealtimeQuote hook and show price and timestamp next to the chart on the report page (strict TTL, no fallback)

FastAPI: add a read-only endpoint GET /financials/{market}/{symbol}/realtime?max_age_seconds=..; read the Rust cache via DataPersistenceClient

Rust: add the realtime_quotes hypertable migration; add POST /api/v1/market-data/quotes and GET /api/v1/market-data/quotes/{symbol}?market=..; add DTOs/models/DB functions; fix the #[api] macro and path parameters; generate the SQLx offline cache (.sqlx) to support offline builds

Python: add upsert/get real-time quote methods to DataPersistenceClient and adjust the GET path and parameters

Note: the TradingView chart is a third-party websocket and is not governed by our cache; the numeric values shown on the page go through our own cache path, which is uniform and controllable.
2025-11-09 05:12:14 +08:00
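For orientation, the read-only endpoint described in this commit could look roughly like the following minimal FastAPI sketch. The cache-reading helper, field names, and 404 semantics are assumptions for illustration, not taken from the diff:

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

from fastapi import APIRouter, HTTPException

router = APIRouter()

@dataclass
class CachedQuote:
    price: float
    updated_at: datetime

async def read_cached_quote(market: str, symbol: str) -> Optional[CachedQuote]:
    # Hypothetical stand-in for DataPersistenceClient, which reads the Rust-backed cache.
    ...

@router.get("/financials/{market}/{symbol}/realtime")
async def get_realtime_quote(market: str, symbol: str, max_age_seconds: int = 60):
    quote = await read_cached_quote(market, symbol)
    if quote is None:
        raise HTTPException(status_code=404, detail="no cached quote")
    age = (datetime.now(timezone.utc) - quote.updated_at).total_seconds()
    if age > max_age_seconds:
        # Strict TTL, no fallback: a stale quote is treated as missing.
        raise HTTPException(status_code=404, detail="cached quote older than max_age_seconds")
    return {"symbol": symbol, "market": market, "price": quote.price, "timestamp": quote.updated_at.isoformat()}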
Lv, Qi
230f180dea fix(frontend): remove localhost rewrites and enforce NEXT_PUBLIC_BACKEND_URL
- remove Next.js rewrites to http://127.0.0.1:8000
- require NEXT_PUBLIC_BACKEND_URL in API routes (config, financials, config/test)
- prevent accidental fallback to host ports; use container service name backend:8000
2025-11-08 22:59:03 +08:00
Lv, Qi
3d0fd6f704 refactor(phase0-1): containerize services, split out the config service, and clean up the repo root
- add docker-compose and a Tiltfile; containerize backend/frontend/postgres (host ports offset by +10000)
- add services/config-service (GET /api/v1/system, /analysis-modules) and wire it into compose
- backend ConfigManager drops the local-file fallback and now requires config-service
- add backend and frontend Dockerfiles
- clean up the repo root: move pm2.config.js -> deployment/ and dev.py -> scripts/; delete the root package.json and lockfile
- add .gitignore to exclude binaries and temporary files
2025-11-08 21:07:38 +08:00
xucheng
ca60410966 feat: market-agnostic financials/snapshot/analysis APIs; stronger data sources and config loading
Backend
- router(financial): add generic paths /{market}/{stock_code}, /snapshot, and /analysis/stream
  - unify markets (cn/us/hk/jp) behind MarketEnum
  - turn /china/{ts_code} into a generic get_financials, normalize period, and trim to the requested number of years
  - add a generic yesterday-snapshot endpoint (CN reuses the original logic; other markets fall back to the most recent trading day's close)
- data_manager: read each provider's API key only from config/config.json, never from environment variables
  - more robust series building: None/empty-structure checks; accept numpy/pandas numeric types and coerce them safely to float
- provider(finnhub):
  - fall back to direct httpx calls when the SDK fails (profile2, financials-reported)
  - normalize annual statements, mapping revenue/net income/gross profit/assets/equity/goodwill/OCF/CapEx
  - compute gross/net margin, ROA, and ROE; emit the series structure directly
  - add logging at key steps and exception guards
- provider(yfinance): fix the synchronously blocking fetch logic by wrapping it in run_in_executor

Frontend
- hooks(useApi):
  - change the China financials path to /api/financials/cn
  - add useFinancials and useSnapshot for unified multi-market data access
- report/[symbol]/page.tsx:
  - support multiple markets (mapping usa→us, china→cn, etc.); unify the symbol and analysis-stream paths
  - remove the China-only UI restrictions; financials/analysis/charts now work across markets
  - use the new analysis and snapshot API paths
- lib/prisma.ts: drop unrelated content (minor blank-line adjustment)

Docs
- reorganize the docs directory:
  - docs/已完成任务/tasks.md (renamed from docs/tasks.md)
  - docs/未完成任务/us_market_integration_tasks.md added

BREAKING CHANGE
- API path changes:
  - financials: /api/financials/china/{ts_code} → /api/financials/{market}/{stock_code}
  - snapshot: /api/financials/china/{ts_code}/snapshot → /api/financials/{market}/{stock_code}/snapshot
  - analysis stream: /api/financials/china/{ts_code}/analysis/{type}/stream → /api/financials/{market}/{stock_code}/analysis/{type}/stream
- the frontend must use useFinancials/useSnapshot, or switch to the /cn paths to stay compatible for the China market
2025-11-06 20:01:08 +08:00
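The generic routing in this commit can be pictured with a short sketch; the enum values (cn/us/hk/jp) and the 10-year cap come from the commit messages, while the handler body is a placeholder:

from enum import Enum

from fastapi import APIRouter

router = APIRouter()

class MarketEnum(str, Enum):
    cn = "cn"
    us = "us"
    hk = "hk"
    jp = "jp"

@router.get("/{market}/{stock_code}")
async def get_financials(market: MarketEnum, stock_code: str, years: int = 10):
    # Market-agnostic entry point; the real handler delegates to DataManager
    # and trims the series to the requested number of years.
    return {"market": market.value, "stock_code": stock_code, "years": min(years, 10)}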
xucheng
0b09abf2e5 docs(logs): add 2025-11-06 dev log; unignore docs/logs; add data dictionary and project status 2025-11-06 19:57:17 +08:00
xucheng
edfd51b0a7 feat: yesterday-snapshot API and frontend card; register the orgs router; assorted improvements
- backend(financial): add a /china/{ts_code}/snapshot API returning the previous trading day's close, market cap, PE, PB, dividend yield, etc.

- backend(schemas): add TodaySnapshotResponse

- backend(main): register the orgs router at /api/v1/orgs

- backend(providers:finnhub): normalize reported financial fields and compute gross_margin/net_margin/ROA/ROE

- backend(providers:tushare): align shareholder-count report periods with financial report periods

- backend(routers/financial): change the years default to 10 (maximum 10)

- config: switch analysis-config.json to qwen-flash-2025-07-28

- frontend(report/[symbol]): add a "yesterday snapshot" card, cap displayed periods at 10, improve growth and threshold highlighting, fix class names and title handling

- frontend(reports/[id]): unify the period variable and its calculations; fix table keys

- frontend(hooks): add the useChinaSnapshot hook and its types

- scripts: add debug output to dev.sh
2025-11-05 17:00:32 +08:00
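A plausible shape for the TodaySnapshotResponse schema mentioned above, based on the fields the commit lists (close, market cap, PE, PB, dividend yield); the exact field names here are guesses:

from typing import Optional

from pydantic import BaseModel

class TodaySnapshotResponse(BaseModel):
    trade_date: str
    close: Optional[float] = None
    total_mv: Optional[float] = None  # market cap
    pe: Optional[float] = None
    pb: Optional[float] = None
    dv_ratio: Optional[float] = None  # dividend yield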
xucheng
3475138419 feat(data): add employee, shareholder, and tax metrics; write the dev log
- Backend: add get_employee_number, get_holder_number, and get_tax_to_ebt to the Tushare provider and integrate them into the financial router.
- Frontend: add the corresponding charts to the report page and update related types and utility functions.
- Cleanup: remove several outdated test scripts.
- Docs: create the 2025-11-04 dev log and update the user manual.
2025-11-04 21:22:32 +08:00
xucheng
3ffb30696b Reworked some of the structures used when fetching financial data; not finished yet, stopping here for today. 2025-11-04 14:03:34 +08:00
xucheng
ff7dc0c95a feat(backend): introduce DataManager and multi-provider; analysis orchestration; streaming endpoints; remove legacy tushare_client; enhance logging
feat(frontend): integrate Prisma and reports API/pages

chore(config): add data_sources.yaml; update analysis-config.json

docs: add 2025-11-03 dev log; update user guide

scripts: enhance dev.sh; add tushare_legacy_client

deps: update backend and frontend dependencies
2025-11-03 21:48:08 +08:00
xucheng
b982cd5368 Update frontend config, docs, and scripts 2025-10-31 22:14:19 +08:00
xucheng
69b1b481b2 Add PM2 supervision for the portwardenc-amd64 binary 2025-10-31 22:13:07 +08:00
xucheng
93199f8659 Add a stop script 2025-10-31 03:12:04 +00:00
xucheng
8b5d5f5777 feat(frontend): add an always-visible '重新生成分析' (regenerate analysis) button per module
fix(backend): inject dependency context for single-module generation (final_conclusion placeholders) 2025-10-31 03:09:43 +00:00
116 changed files with 13634 additions and 1995 deletions

.gitignore (vendored): 28 lines changed

@@ -1,3 +1,26 @@
# Python
__pycache__/
*.pyc
.venv/
.pytest_cache/
# Node
node_modules/
frontend/node_modules/
services/**/node_modules/
# Env & local
.env
.env.*
.DS_Store
# Build artifacts
dist/
build/
# Binaries
portwardenc-amd64
# ===== Common files =====
# OS-generated files
.DS_Store
@@ -21,6 +44,11 @@ Thumbs.db
*.log
logs/
# Allow committing doc logs (overrides the wildcard ignore above)
!docs/logs/
!docs/logs/*.md
!docs/*.md
# Temporary files
*.tmp
*.temp

Tiltfile (new file): +3 lines

@@ -0,0 +1,3 @@
docker_compose('docker-compose.yml')

backend/Dockerfile (new file): +23 lines

@@ -0,0 +1,23 @@
# syntax=docker/dockerfile:1.6
FROM python:3.11-slim AS base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1
WORKDIR /workspace
# Copy only the dependency manifest first to improve layer-cache hits
COPY backend/requirements.txt ./backend/requirements.txt
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r backend/requirements.txt
# Source code is mounted at runtime; create the directory so the path exists inside the container
RUN mkdir -p /workspace/backend
WORKDIR /workspace/backend
# The default entrypoint is supplied by docker-compose


@@ -23,6 +23,10 @@ class Settings(BaseSettings):
GEMINI_API_KEY: Optional[str] = None
TUSHARE_TOKEN: Optional[str] = None
# Microservices
CONFIG_SERVICE_BASE_URL: str = "http://config-service:7000/api/v1"
DATA_PERSISTENCE_BASE_URL: str = "http://data-persistence-service:3000/api/v1"
class Config:
env_file = ".env"
case_sensitive = True


@@ -1,18 +1,7 @@
"""
Application dependencies and providers
"""
from typing import AsyncGenerator
from fastapi import Depends
from sqlalchemy.ext.asyncio import AsyncSession
from app.core.database import AsyncSessionLocal
from app.services.config_manager import ConfigManager
async def get_db_session() -> AsyncGenerator[AsyncSession, None]:
"""Provides a database session to the application."""
async with AsyncSessionLocal() as session:
yield session
def get_config_manager(db_session: AsyncSession = Depends(get_db_session)) -> ConfigManager:
"""Dependency to get the configuration manager."""
return ConfigManager(db_session=db_session)
def get_config_manager() -> ConfigManager:
return ConfigManager()

backend/app/data_manager.py (new file): +194 lines

@@ -0,0 +1,194 @@
import yaml
import os
import json
from typing import Any, Dict, List, Optional
from numbers import Number
from app.data_providers.base import BaseDataProvider
from app.data_providers.tushare import TushareProvider
# from app.data_providers.ifind import TonghsProvider
from app.data_providers.yfinance import YfinanceProvider
from app.data_providers.finnhub import FinnhubProvider
import logging
logger = logging.getLogger(__name__)
class DataManager:
_instance = None
def __new__(cls, *args, **kwargs):
if not cls._instance:
cls._instance = super(DataManager, cls).__new__(cls)
return cls._instance
def __init__(self, config_path: str = None):
if hasattr(self, '_initialized') and self._initialized:
return
if config_path is None:
# Assume the config file is in the 'config' directory at the root of the repo
# Find the project root by looking for the config directory
current_dir = os.path.dirname(__file__)
while current_dir != os.path.dirname(current_dir): # Not at filesystem root
if os.path.exists(os.path.join(current_dir, "config", "data_sources.yaml")):
REPO_ROOT = current_dir
break
current_dir = os.path.dirname(current_dir)
else:
# Fallback to the original calculation
REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
config_path = os.path.join(REPO_ROOT, "config", "data_sources.yaml")
with open(config_path, 'r', encoding='utf-8') as f:
self.config = yaml.safe_load(f)
self.providers = {}
# Build provider base config ONLY from config/config.json (do not read env vars)
base_cfg: Dict[str, Any] = {"data_sources": {}}
try:
# Use the same REPO_ROOT calculation as data_sources.yaml
current_dir = os.path.dirname(__file__)
while current_dir != os.path.dirname(current_dir): # Not at filesystem root
if os.path.exists(os.path.join(current_dir, "config", "data_sources.yaml")):
REPO_ROOT = current_dir
break
current_dir = os.path.dirname(current_dir)
else:
# Fallback to the original calculation
REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
cfg_json_path = os.path.join(REPO_ROOT, "config", "config.json")
if os.path.exists(cfg_json_path):
with open(cfg_json_path, "r", encoding="utf-8") as jf:
cfg_json = json.load(jf)
ds_from_json = (cfg_json.get("data_sources") or {})
for name, node in ds_from_json.items():
if node.get("api_key"):
base_cfg["data_sources"][name] = {"api_key": node.get("api_key")}
logger.info(f"Loaded API key for provider '{name}' from config.json")
else:
logger.debug("config/config.json not found; skipping JSON token load.")
except Exception as e:
logger.warning(f"Failed to read tokens from config/config.json: {e}")
import traceback
traceback.print_exc()
try:
self._init_providers(base_cfg)
except Exception as e:
logger.error(f"Failed to initialize data providers: {e}")
self._initialized = True
def _init_providers(self, base_cfg: Dict[str, Any]) -> None:
"""
Initializes providers with the given base configuration.
This method should be called after the base config is loaded.
"""
provider_map = {
"tushare": TushareProvider,
# "ifind": TonghsProvider,
"yfinance": YfinanceProvider,
"finnhub": FinnhubProvider,
}
for name, provider_class in provider_map.items():
token = base_cfg.get("data_sources", {}).get(name, {}).get("api_key")
source_config = self.config['data_sources'].get(name, {})
# Initialize the provider if a token is found or not required
if token or not source_config.get('api_key_env'):
try:
self.providers[name] = provider_class(token=token)
except Exception as e:
logger.error(f"Failed to initialize provider '{name}': {e}")
else:
logger.warning(f"Provider '{name}' requires API key but none provided in config.json. Skipping.")
def _detect_market(self, stock_code: str) -> str:
if stock_code.endswith(('.SH', '.SZ')):
return 'CN'
elif stock_code.endswith('.HK'):
return 'HK'
elif stock_code.endswith('.T'): # Assuming .T for Tokyo
return 'JP'
else: # Default to US
return 'US'
async def get_data(self, method_name: str, stock_code: str, **kwargs):
market = self._detect_market(stock_code)
priority_list = self.config.get('markets', {}).get(market, {}).get('priority', [])
for provider_name in priority_list:
provider = self.providers.get(provider_name)
if not provider:
logger.warning(f"Provider '{provider_name}' not initialized.")
continue
try:
method = getattr(provider, method_name)
data = await method(stock_code=stock_code, **kwargs)
is_success = False
if data is None:
is_success = False
elif isinstance(data, list):
is_success = len(data) > 0
elif isinstance(data, dict):
is_success = len(data) > 0
else:
is_success = True
if is_success:
logger.info(f"Data successfully fetched from '{provider_name}' for '{stock_code}'.")
return data
except Exception as e:
logger.warning(f"Provider '{provider_name}' failed for '{stock_code}': {e}. Trying next provider.")
logger.error(f"All data providers failed for '{stock_code}' on method '{method_name}'.")
return None
async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> Dict[str, List[Dict[str, Any]]]:
data = await self.get_data('get_financial_statements', stock_code, report_dates=report_dates)
if data is None:
return {}
# Normalize to series format
if isinstance(data, dict):
# Already in series format (e.g., tushare)
return data
elif isinstance(data, list):
# Convert from flat format to series format
series: Dict[str, List[Dict[str, Any]]] = {}
for report in data:
year = str(report.get('year', report.get('end_date', '')[:4]))
if not year:
continue
for key, value in report.items():
if key in ['ts_code', 'stock_code', 'year', 'end_date', 'period', 'ann_date', 'f_ann_date', 'report_type']:
continue
# Accept numpy/pandas numeric types as well as builtin numbers
if value is not None and isinstance(value, Number):
if key not in series:
series[key] = []
if not any(d['year'] == year for d in series[key]):
# Store as builtin float to avoid JSON serialization issues
try:
numeric_value = float(value)
except Exception:
# Fallback: skip if cannot coerce to float
continue
series[key].append({"year": year, "value": numeric_value})
return series
else:
return {}
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
return await self.get_data('get_daily_price', stock_code, start_date=start_date, end_date=end_date)
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
return await self.get_data('get_stock_basic', stock_code)
data_manager = DataManager()
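A usage sketch for the singleton above (the ticker is illustrative): the code-suffix drives _detect_market, and get_data then walks that market's provider priority list from config/data_sources.yaml until one provider returns non-empty data:

import asyncio

from app.data_manager import data_manager

async def demo():
    basic = await data_manager.get_stock_basic("600519.SH")  # hypothetical CN ticker
    print(basic)

asyncio.run(demo())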


@@ -0,0 +1,88 @@
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional
class BaseDataProvider(ABC):
"""
Abstract base class for all financial data providers.
"""
def __init__(self, token: Optional[str] = None):
"""
Initializes the data provider, optionally with an API token.
:param token: API token for the data provider, if required.
"""
self.token = token
self._initialize()
def _initialize(self):
"""
Perform any necessary initialization, such as API client setup.
This method is called by the constructor.
"""
pass
@abstractmethod
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
"""
Fetches basic company information for a given stock code.
:param stock_code: The stock identifier.
:return: A dictionary with basic company info, or None if not found.
"""
pass
@abstractmethod
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
"""
Fetches daily stock prices for a given period.
:param stock_code: The stock identifier.
:param start_date: The start date of the period (e.g., 'YYYYMMDD').
:param end_date: The end date of the period (e.g., 'YYYYMMDD').
:return: A list of dictionaries, each representing a day's price data.
"""
pass
@abstractmethod
async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> Dict[str, List[Dict[str, Any]]]:
"""
Fetches financial statements for a list of report dates and returns them
in a series format.
The series format is a dictionary where keys are metric names (e.g., 'revenue')
and values are a list of data points over time.
e.g., {"revenue": [{"year": "2023", "value": 1000}, ...]}
Providers should also calculate derived metrics if they are not directly available.
:param stock_code: The stock identifier.
:param report_dates: A list of report dates to fetch data for (e.g., ['20231231', '20221231']).
:return: A dictionary in series format.
"""
pass
async def get_financial_statement(self, stock_code: str, report_date: str) -> Optional[Dict[str, Any]]:
"""
Fetches a single financial statement for a specific report date.
This is a convenience method that can be implemented by calling get_financial_statements.
Note: The return value of this function is a single report (dictionary),
not a series object. This is for compatibility with parts of the code
that need a single flat report.
:param stock_code: The stock identifier.
:param report_date: The report date for the statement (e.g., '20231231').
:return: A dictionary with financial statement data, or None if not found.
"""
series_data = await self.get_financial_statements(stock_code, [report_date])
if not series_data:
return None
report: Dict[str, Any] = {"ts_code": stock_code, "end_date": report_date}
for metric, points in series_data.items():
for point in points:
if point.get("year") == report_date[:4]:
report[metric] = point.get("value")
break
return report
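To illustrate the contract defined above, a toy provider returning canned data might look like this (purely an example, not part of the diff):

from typing import Any, Dict, List, Optional

from app.data_providers.base import BaseDataProvider

class StaticProvider(BaseDataProvider):
    async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
        return {"ts_code": stock_code, "name": "Demo Corp"}

    async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
        return []

    async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> Dict[str, List[Dict[str, Any]]]:
        # Series format: metric name -> list of {"year", "value"} points.
        return {"revenue": [{"year": d[:4], "value": 1000.0} for d in report_dates]}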


@@ -0,0 +1,310 @@
from .base import BaseDataProvider
from typing import Any, Dict, List, Optional
import finnhub
import pandas as pd
from datetime import datetime, timedelta
import asyncio
import logging
import httpx
logger = logging.getLogger(__name__)
class FinnhubProvider(BaseDataProvider):
def _initialize(self):
if not self.token:
raise ValueError("Finnhub API key not provided.")
self.client = finnhub.Client(api_key=self.token)
try:
masked = f"***{self.token[-4:]}" if isinstance(self.token, str) and len(self.token) >= 4 else "***"
logger.info(f"[Finnhub] client initialized (token={masked})")
except Exception:
# Don't let a logging failure break initialization
pass
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
def _fetch():
try:
profile = None
try:
profile = self.client.company_profile2(symbol=stock_code)
logger.debug(f"[Finnhub] SDK company_profile2 ok symbol={stock_code} name={profile.get('name') if isinstance(profile, dict) else None}")
except Exception as e:
logger.warning(f"[Finnhub] SDK company_profile2 failed for {stock_code}: {e}")
# Fallback to direct HTTP if SDK call fails
try:
resp = httpx.get(
'https://finnhub.io/api/v1/stock/profile2',
params={'symbol': stock_code},
headers={'X-Finnhub-Token': self.token},
timeout=20.0,
)
logger.debug(f"[Finnhub] HTTP profile2 status={resp.status_code} len={len(resp.text)}")
if resp.status_code == 200:
profile = resp.json()
else:
logger.error(f"[Finnhub] HTTP profile2 failed status={resp.status_code} body={resp.text[:200]}")
except Exception:
profile = None
if not profile:
return None
# Normalize data
return {
"ts_code": stock_code,
"name": profile.get("name"),
"area": profile.get("country"),
"industry": profile.get("finnhubIndustry"),
"exchange": profile.get("exchange"),
"ipo_date": profile.get("ipo"),
}
except Exception as e:
logger.error(f"Finnhub get_stock_basic failed for {stock_code}: {e}")
return None
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
def _fetch():
try:
start_ts = int(datetime.strptime(start_date, '%Y%m%d').timestamp())
end_ts = int(datetime.strptime(end_date, '%Y%m%d').timestamp())
logger.debug(f"[Finnhub] stock_candles symbol={stock_code} D {start_date}->{end_date}")
res = self.client.stock_candles(stock_code, 'D', start_ts, end_ts)
if res.get('s') != 'ok':
try:
logger.warning(f"[Finnhub] stock_candles not ok symbol={stock_code} status={res.get('s')}")
except Exception:
pass
return []
df = pd.DataFrame(res)
if df.empty:
return []
# Normalize data
df['trade_date'] = pd.to_datetime(df['t'], unit='s').dt.strftime('%Y%m%d')
df.rename(columns={
'o': 'open', 'h': 'high', 'l': 'low', 'c': 'close', 'v': 'vol'
}, inplace=True)
return df[['trade_date', 'open', 'high', 'low', 'close', 'vol']].to_dict('records')
except Exception as e:
logger.error(f"Finnhub get_daily_price failed for {stock_code}: {e}")
return []
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)
async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> Dict[str, List[Dict[str, Any]]]:
def _fetch():
try:
# 1) Pull the annual statements (financials_reported, freq='annual')
res = None
try:
res = self.client.financials_reported(symbol=stock_code, freq='annual')
except Exception as e:
logger.warning(f"[Finnhub] SDK financials_reported failed for {stock_code}: {e}")
# Fallback: direct HTTP
try:
r = httpx.get(
'https://finnhub.io/api/v1/stock/financials-reported',
params={'symbol': stock_code, 'freq': 'annual'},
headers={'X-Finnhub-Token': self.token},
timeout=30.0,
)
logger.debug(f"[Finnhub] HTTP financials-reported status={r.status_code} len={len(r.text)}")
if r.status_code == 200:
res = r.json()
else:
logger.error(f"[Finnhub] HTTP financials-reported failed status={r.status_code} body={r.text[:300]}")
except Exception:
res = None
if not res or not res.get('data'):
logger.warning(f"[Finnhub] financials-reported empty for {stock_code}")
return {}
df = pd.DataFrame(res['data'])
if df.empty:
logger.warning(f"[Finnhub] financials-reported dataframe empty for {stock_code}")
return {}
# 2) Keep only the requested years
years_to_fetch = {str(date)[:4] for date in report_dates}
logger.debug(f"[Finnhub] filter years {sorted(list(years_to_fetch))} before={len(df)}")
if 'year' in df.columns:
df = df[df['year'].astype(str).isin(years_to_fetch)]
# Fallback: if the year column is missing, infer it from endDate
if 'year' not in df.columns and 'endDate' in df.columns:
df = df[df['endDate'].astype(str).str[:4].isin(years_to_fetch)]
if df.empty:
logger.warning(f"[Finnhub] financials-reported no rows after filter for {stock_code}")
return {}
def _normalize_key(s: Optional[str]) -> str:
if not isinstance(s, str):
return ""
return ''.join(ch.lower() for ch in s if ch.isalnum())
def pick(report_block: List[Dict[str, Any]], concept_candidates: List[str], label_candidates: List[str] = []) -> Optional[float]:
if not report_block:
return None
try:
by_concept = { _normalize_key(item.get('concept')): item.get('value') for item in report_block if isinstance(item, dict) }
by_label = { _normalize_key(item.get('label')): item.get('value') for item in report_block if isinstance(item, dict) }
except Exception:
return None
for key in concept_candidates:
v = by_concept.get(_normalize_key(key))
if v is not None:
try:
return float(v)
except Exception:
continue
for key in label_candidates:
v = by_label.get(_normalize_key(key))
if v is not None:
try:
return float(v)
except Exception:
continue
return None
# 3) Walk the annual records, flatten them, and normalize field names
flat_reports: List[Dict[str, Any]] = []
for _, row in df.iterrows():
bs = (row.get('report') or {}).get('bs', [])
ic = (row.get('report') or {}).get('ic', [])
cf = (row.get('report') or {}).get('cf', [])
end_date = str(row.get('endDate') or '')
revenue = pick(
ic,
concept_candidates=['Revenues', 'RevenueFromContractWithCustomerExcludingAssessedTax', 'SalesRevenueNet', 'Revenue', 'RevenuesNet', 'SalesRevenueGoodsNet'],
label_candidates=['Total revenue', 'Revenue', 'Sales revenue']
)
net_income = pick(
ic,
concept_candidates=['NetIncomeLoss', 'ProfitLoss', 'NetIncomeLossAvailableToCommonStockholdersBasic', 'NetIncomeLossAvailableToCommonStockholdersDiluted'],
label_candidates=['Net income', 'Net income (loss)']
)
gross_profit = pick(
ic,
concept_candidates=['GrossProfit'],
label_candidates=['Gross profit']
)
total_assets = pick(
bs,
concept_candidates=['Assets', 'AssetsTotal', 'AssetsCurrentAndNoncurrent', 'AssetsIncludingAssetsMeasuredAtFairValue'],
label_candidates=['Total assets']
)
total_equity = pick(
bs,
concept_candidates=['StockholdersEquityIncludingPortionAttributableToNoncontrollingInterest', 'StockholdersEquity', 'StockholdersEquityTotal', 'Equity'],
label_candidates=['Total equity', "Stockholders' equity"]
)
goodwill = pick(
bs,
concept_candidates=['Goodwill', 'GoodwillAndIntangibleAssets'],
label_candidates=['Goodwill', 'Goodwill and intangible assets']
)
n_cashflow_act = pick(
cf,
concept_candidates=['NetCashProvidedByUsedInOperatingActivities', 'NetCashProvidedByUsedInOperatingActivitiesContinuingOperations', 'NetCashFlowOperating'],
label_candidates=['Net cash provided by operating activities']
)
capex = pick(
cf,
concept_candidates=['CapitalExpenditures', 'PaymentsToAcquirePropertyPlantAndEquipment', 'PaymentsToAcquireProductiveAssets'],
label_candidates=['Capital expenditures']
)
# Compute derived metrics
free_cash_flow = None
if isinstance(n_cashflow_act, (int, float)) and isinstance(capex, (int, float)):
free_cash_flow = n_cashflow_act - capex
normalized = {
# basic meta fields
'ts_code': stock_code,
'end_date': end_date,  # DataManager extracts the year from here
# standard field names (see financial_data_dictionary)
'revenue': revenue,
'n_income': net_income,
'gross_profit': gross_profit,
'total_assets': total_assets,
'total_hldr_eqy_exc_min_int': total_equity,
'goodwill': goodwill,
'n_cashflow_act': n_cashflow_act,
'c_pay_acq_const_fiolta': capex,
'__free_cash_flow': free_cash_flow,
}
# Common ratios (computed when enough data is available); names follow the data dictionary
if isinstance(revenue, (int, float)) and revenue > 0 and isinstance(gross_profit, (int, float)):
normalized['grossprofit_margin'] = gross_profit / revenue
if isinstance(revenue, (int, float)) and revenue > 0 and isinstance(net_income, (int, float)):
normalized['netprofit_margin'] = net_income / revenue
if isinstance(total_assets, (int, float)) and total_assets > 0 and isinstance(net_income, (int, float)):
normalized['roa'] = net_income / total_assets
if isinstance(total_equity, (int, float)) and total_equity > 0 and isinstance(net_income, (int, float)):
normalized['roe'] = net_income / total_equity
flat_reports.append(normalized)
try:
logger.debug(
f"[Finnhub] row endDate={end_date} revenue={revenue} net_income={net_income} gross_profit={gross_profit} "
f"assets={total_assets} equity={total_equity} goodwill={goodwill} n_cfo={n_cashflow_act} capex={capex}"
)
except Exception:
pass
# Convert flat reports to series dict directly to match DataManager expected format
series: Dict[str, List[Dict[str, Any]]] = {}
for report in flat_reports:
end_date = str(report.get('end_date') or '')
year = end_date[:4] if len(end_date) >= 4 else None
if not year:
continue
period = f"{year}1231"
for key, value in report.items():
if key in ['ts_code', 'end_date']:
continue
# Only collect numeric values
try:
if value is None:
continue
num = float(value)
except Exception:
continue
if key not in series:
series[key] = []
# Avoid duplicate period entries
exists = any(dp.get('period') == period for dp in series[key])
if not exists:
series[key].append({'period': period, 'value': num})
try:
total_points = sum(len(v) for v in series.values())
logger.info(f"[Finnhub] built series for {stock_code} keys={len(series)} points={total_points}")
except Exception:
pass
return series
except Exception as e:
logger.error(f"Finnhub get_financial_statements failed for {stock_code}: {e}")
return {}
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)
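The key to pick() above is _normalize_key: concept and label strings are lower-cased and stripped of non-alphanumerics before lookup, so XBRL tags and human-readable labels that differ only in casing or spacing resolve to the same key. A quick stand-alone illustration:

def normalize_key(s: str) -> str:
    return ''.join(ch.lower() for ch in s if ch.isalnum())

assert normalize_key("Total revenue") == "totalrevenue"
assert normalize_key("NetIncomeLoss") == normalize_key("Net income loss")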


@@ -0,0 +1,131 @@
from .base import BaseDataProvider
from typing import Any, Dict, List, Optional
import pandas as pd
from datetime import datetime
# Assumes the iFinDPy library is installed in the environment
# Important: iFinDPy must be installed manually, following the official Tonghuashun instructions
try:
from iFinDPy import THS_iFinDLogin, THS_BD, THS_HQ
except ImportError:
print("错误: iFinDPy 模块未找到。请确保已按照同花顺官方指引完成安装。")
# 定义虚拟函数以避免在未安装时程序崩溃
def THS_iFinDLogin(*args, **kwargs): return -1
def THS_BD(*args, **kwargs): return pd.DataFrame()
def THS_HQ(*args, **kwargs): return pd.DataFrame()
class TonghsProvider(BaseDataProvider):
_is_logged_in = False
def __init__(self, token: Optional[str] = None):
# Log in with the Refresh Token obtained from the iFinD user center
if not TonghsProvider._is_logged_in:
if not token:
raise ValueError("同花顺 iFinDPy Refresh Token 未在配置中提供。")
# Call the login function, passing the token directly
# Note: the exact keyword argument name should be checked against the iFinDPy docs; assumed here to be 'token' or the first positional argument
login_result = THS_iFinDLogin(token)
if login_result == 0:
print("同花顺 iFinDPy 登录成功。")
TonghsProvider._is_logged_in = True
else:
print(f"同花顺 iFinDPy 登录失败,错误码: {login_result}")
raise ConnectionError("无法登录到同花顺 iFinDPy 服务,请检查您的 Refresh Token 是否正确。")
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
try:
# TODO: confirm the indicators used to fetch basic company info
indicators = "ths_stock_short_name_stock;ths_listed_market_stock;ths_industry_stock;ths_ipo_date_stock"
data = THS_BD(stock_code, indicators, "")
if data.empty:
return None
# --- Data normalization ---
# iFinDPy usually returns a DataFrame; convert it to a dict
info = data.iloc[0].to_dict()
return {
"ts_code": stock_code,
"name": info.get("ths_stock_short_name_stock"),
"area": info.get("ths_listed_market_stock"),
"industry": info.get("ths_industry_stock"),
"list_date": info.get("ths_ipo_date_stock"),
}
except Exception as e:
print(f"同花顺 iFinDPy get_stock_basic 执行失败, 股票代码 {stock_code}: {e}")
return None
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
try:
# TODO: confirm the indicators used to fetch daily quotes
indicators = "open;high;low;close;volume"
# iFinDPy dates are usually in YYYY-MM-DD format
date_range = f"{start_date};{end_date}"
data = THS_HQ(stock_code, indicators, date_range)
if data.empty:
return []
# --- Data normalization ---
data = data.reset_index()
data.rename(columns={
"time": "trade_date",
"open": "open",
"high": "high",
"low": "low",
"close": "close",
"volume": "vol"
}, inplace=True)
return data.to_dict('records')
except Exception as e:
print(f"同花顺 iFinDPy get_daily_price 执行失败, 股票代码 {stock_code}: {e}")
return []
async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> List[Dict[str, Any]]:
try:
# TODO: confirm the indicators needed for financial statements
# This may require several THS_BD calls whose results are merged
# Example: fetch several report periods in one call
# Convert report_dates to the format iFinDPy accepts, e.g. "2022-12-31;2021-12-31"
dates_param = ";".join(report_dates)
# Required indicators
income_indicators = "ths_np_stock" # net income
bs_indicators = "ths_total_assets_stock;ths_total_liab_stock" # total assets; total liabilities
revenue_indicators = "ths_revenue_stock" # revenue
# Fetch the data
income_data = THS_BD(stock_code, income_indicators, f"reportDate={dates_param}")
bs_data = THS_BD(stock_code, bs_indicators, f"reportDate={dates_param}")
revenue_data = THS_BD(stock_code, revenue_indicators, f"reportDate={dates_param}")
# Merge the data
financials_df = pd.concat([income_data, bs_data, revenue_data], axis=1)
financials_df = financials_df.loc[:,~financials_df.columns.duplicated()]
financials_df = financials_df.reset_index().rename(columns={"index": "end_date"})
# --- Data normalization ---
financials_df.rename(columns={
"ths_revenue_stock": "revenue",
"ths_np_stock": "net_income",
"ths_total_assets_stock": "total_assets",
"ths_total_liab_stock": "total_liabilities",
}, inplace=True)
financials_df["ts_code"] = stock_code
return financials_df.to_dict('records')
except Exception as e:
print(f"同花顺 iFinDPy get_financial_statements 执行失败, 股票代码 {stock_code}: {e}")
return []
async def get_financial_statement(self, stock_code: str, report_date: str) -> Optional[Dict[str, Any]]:
results = await self.get_financial_statements(stock_code, [report_date])
return results[0] if results else None


@@ -0,0 +1,705 @@
from .base import BaseDataProvider
from typing import Any, Dict, List, Optional, Callable
import logging
import asyncio
import tushare as ts
import math
import datetime
logger = logging.getLogger(__name__)
class TushareProvider(BaseDataProvider):
def _initialize(self):
if not self.token:
raise ValueError("Tushare API token not provided.")
# Use the official SDK client
self._pro = ts.pro_api(self.token)
# Trade-calendar cache: key = (exchange, start, end) -> List[Dict]
self._trade_cal_cache: Dict[str, List[Dict[str, Any]]] = {}
async def _resolve_trade_dates(self, dates: List[str], exchange: str = "SSE") -> Dict[str, str]:
"""
Map each date to a trading day: if a date is not itself a trading day, use the most recent trading day not later than it.
Returns a mapping: requested_date -> resolved_trade_date.
"""
if not dates:
return {}
start_date = min(dates)
end_date = max(dates)
cache_key = f"{exchange}:{start_date}:{end_date}"
if cache_key in self._trade_cal_cache:
cal_rows = self._trade_cal_cache[cache_key]
else:
cal_rows = await self._query(
api_name="trade_cal",
params={
"exchange": exchange,
"start_date": start_date,
"end_date": end_date,
},
fields=["cal_date", "is_open", "pretrade_date"],
)
self._trade_cal_cache[cache_key] = cal_rows
by_date: Dict[str, Dict[str, Any]] = {str(r.get("cal_date")): r for r in cal_rows}
# Also prepare the sorted sequence of open trading days for fallback searches
open_dates = sorted([d for d, r in by_date.items() if int(r.get("is_open", 0)) == 1])
def _prev_open(d: str) -> Optional[str]:
# Find the largest open trading day <= d
lo, hi = 0, len(open_dates) - 1
ans = None
while lo <= hi:
mid = (lo + hi) // 2
if open_dates[mid] <= d:
ans = open_dates[mid]
lo = mid + 1
else:
hi = mid - 1
return ans
resolved: Dict[str, str] = {}
for d in dates:
row = by_date.get(d)
if row is None:
# Not in this calendar slice (rare); fall back to the nearest open day within the range
prev_d = _prev_open(d)
if prev_d:
resolved[d] = prev_d
else:
# Last resort: if still not found, return the date unchanged
resolved[d] = d
continue
is_open = int(row.get("is_open", 0))
if is_open == 1:
resolved[d] = d
else:
prev = str(row.get("pretrade_date") or "")
if prev:
resolved[d] = prev
else:
prev_d = _prev_open(d)
resolved[d] = prev_d or d
return resolved
async def _query(
self,
api_name: str,
params: Optional[Dict[str, Any]] = None,
fields: Optional[List[str]] = None,
) -> List[Dict[str, Any]]:
"""
Unified query through the official tushare SDK; returns a list of dicts.
To avoid blocking the event loop, the synchronous call runs in a worker thread via asyncio.to_thread.
"""
params = params or {}
def _call() -> List[Dict[str, Any]]:
# Join the field list into a comma-separated string (the SDK's recommended form)
fields_arg: Optional[str] = ",".join(fields) if isinstance(fields, list) else None
# Prefer the attribute style (pro.fina_indicator, etc.); fall back to the generic query if the attribute is missing
func: Optional[Callable] = getattr(self._pro, api_name, None)
try:
if callable(func):
df = func(**params, fields=fields_arg) if fields_arg else func(**params)
else:
# Generic fallback: pro.query(name, params=..., fields=...)
if fields_arg:
df = self._pro.query(api_name, params=params, fields=fields_arg)
else:
df = self._pro.query(api_name, params=params)
except Exception as exc:
# Wrap exceptions raised by the SDK for uniform logging
raise RuntimeError(f"tushare.{api_name} failed: {exc}")
if df is None or df.empty:
return []
# DataFrame -> List[Dict]
return df.to_dict(orient="records")
try:
rows: List[Dict[str, Any]] = await asyncio.to_thread(_call)
# Scrub NaN/Inf to avoid JSON serialization errors
DATE_KEYS = {
"cal_date", "pretrade_date", "trade_date", "trade_dt", "date",
"end_date", "ann_date", "f_ann_date", "period"
}
def _sanitize_value(key: str, v: Any) -> Any:
if v is None:
return None
# Keep date/period fields as strings (avoid 20231231 -> 20231231.0 breaking matches)
if key in DATE_KEYS:
try:
s = str(v)
# Strip an accidental trailing .0
if s.endswith(".0"):
s = s[:-2]
return s
except Exception:
return str(v)
try:
# Handle numpy.nan / numpy.inf / Decimal / numpy numerics; coerce to a plain Python float
fv = float(v)
return fv if math.isfinite(fv) else None
except Exception:
# Use reflexivity to detect NaN (NaN != NaN)
try:
if v != v:
return None
except Exception:
pass
return v
for row in rows:
for k, v in list(row.items()):
row[k] = _sanitize_value(k, v)
# logger.info(f"Tushare '{api_name}' returned {len(rows)} rows.")
return rows
except Exception as e:
logger.error(f"Exception calling tushare '{api_name}': {e}")
raise
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
try:
rows = await self._query(
api_name="stock_basic",
params={"ts_code": stock_code},
)
return rows[0] if rows else None
except Exception as e:
logger.error(f"Tushare get_stock_basic failed for {stock_code}: {e}")
return None
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
try:
rows = await self._query(
api_name="daily",
params={
"ts_code": stock_code,
"start_date": start_date,
"end_date": end_date,
},
)
return rows or []
except Exception as e:
logger.error(f"Tushare get_daily_price failed for {stock_code}: {e}")
return []
async def get_daily_basic_points(self, stock_code: str, trade_dates: List[str]) -> List[Dict[str, Any]]:
"""
Fetch daily_basic data (e.g. total_mv, pe, pb) for the given list of trading dates.
"""
try:
if not trade_dates:
return []
# Map each requested date to the most recent trading day not later than it
mapping = await self._resolve_trade_dates(trade_dates, exchange="SSE")
resolved_dates = list(set(mapping.values()))
start_date = min(resolved_dates)
end_date = max(resolved_dates)
# Fetch the whole range in one call, then filter by the resolved trading days
all_rows = await self._query(
api_name="daily_basic",
params={
"ts_code": stock_code,
"start_date": start_date,
"end_date": end_date,
},
)
wanted = set(resolved_dates)
rows = [r for r in all_rows if str(r.get("trade_date")) in wanted]
logger.info(f"Tushare daily_basic returned {len(rows)} rows for {stock_code} on {len(trade_dates)} requested dates (resolved to {len(wanted)} trading dates)")
return rows
except Exception as e:
logger.error(f"Tushare get_daily_basic_points failed for {stock_code}: {e}")
return []
async def get_daily_points(self, stock_code: str, trade_dates: List[str]) -> List[Dict[str, Any]]:
"""
Fetch daily quotes (e.g. close) for the given list of trading dates.
"""
try:
if not trade_dates:
return []
mapping = await self._resolve_trade_dates(trade_dates, exchange="SSE")
resolved_dates = list(set(mapping.values()))
start_date = min(resolved_dates)
end_date = max(resolved_dates)
all_rows = await self._query(
api_name="daily",
params={
"ts_code": stock_code,
"start_date": start_date,
"end_date": end_date,
},
)
wanted = set(resolved_dates)
rows = [r for r in all_rows if str(r.get("trade_date")) in wanted]
logger.info(f"Tushare daily returned {len(rows)} rows for {stock_code} on {len(trade_dates)} requested dates (resolved to {len(wanted)} trading dates)")
return rows
except Exception as e:
logger.error(f"Tushare get_daily_points failed for {stock_code}: {e}")
return []
def _calculate_derived_metrics(self, series: Dict[str, List[Dict]], periods: List[str]) -> Dict[str, List[Dict]]:
"""
Compute derived metrics inside the Tushare provider.
"""
# --- Helper Functions ---
def _get_value(key: str, period: str) -> Optional[float]:
if key not in series:
return None
point = next((p for p in series[key] if p.get("period") == period), None)
if point is None or point.get("value") is None:
return None
try:
return float(point["value"])
except (ValueError, TypeError):
return None
def _get_avg_value(key: str, period: str) -> Optional[float]:
current_val = _get_value(key, period)
try:
# Always average with the previous year's annual value, if present
current_year = int(period[:4])
prev_year_end_period = str(current_year - 1) + "1231"
prev_val = _get_value(key, prev_year_end_period)
except (ValueError, TypeError):
prev_val = None
if current_val is None: return None
if prev_val is None: return current_val
return (current_val + prev_val) / 2
def _get_cogs(period: str) -> Optional[float]:
revenue = _get_value('revenue', period)
gp_margin_raw = _get_value('grossprofit_margin', period)
if revenue is None or gp_margin_raw is None: return None
gp_margin = gp_margin_raw / 100.0 if abs(gp_margin_raw) > 1 else gp_margin_raw
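# e.g. a grossprofit_margin reported as 45.3 (a percentage) becomes 0.453 here,
# while one already expressed as the fraction 0.453 passes through unchanged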
return revenue * (1 - gp_margin)
def add_series(key: str, data: List[Dict]):
if data:
series[key] = data
# --- Calculations ---
fcf_data = []
for period in periods:
op_cashflow = _get_value('n_cashflow_act', period)
capex = _get_value('c_pay_acq_const_fiolta', period)
if op_cashflow is not None and capex is not None:
fcf_data.append({"period": period, "value": op_cashflow - capex})
add_series('__free_cash_flow', fcf_data)
fee_calcs = [
('__sell_rate', 'sell_exp', 'revenue'),
('__admin_rate', 'admin_exp', 'revenue'),
('__rd_rate', 'rd_exp', 'revenue'),
('__depr_ratio', 'depr_fa_coga_dpba', 'revenue'),
]
for key, num_key, den_key in fee_calcs:
data = []
for period in periods:
numerator = _get_value(num_key, period)
denominator = _get_value(den_key, period)
if numerator is not None and denominator is not None and denominator != 0:
data.append({"period": period, "value": (numerator / denominator) * 100})
add_series(key, data)
tax_rate_data = []
for period in periods:
tax_to_ebt = _get_value('tax_to_ebt', period)
if tax_to_ebt is not None:
rate = tax_to_ebt * 100 if abs(tax_to_ebt) <= 1 else tax_to_ebt
tax_rate_data.append({"period": period, "value": rate})
add_series('__tax_rate', tax_rate_data)
other_fee_data = []
for period in periods:
gp_raw = _get_value('grossprofit_margin', period)
np_raw = _get_value('netprofit_margin', period)
rev = _get_value('revenue', period)
sell_exp = _get_value('sell_exp', period)
admin_exp = _get_value('admin_exp', period)
rd_exp = _get_value('rd_exp', period)
if all(v is not None for v in [gp_raw, np_raw, rev, sell_exp, admin_exp, rd_exp]) and rev != 0:
gp = gp_raw / 100 if abs(gp_raw) > 1 else gp_raw
np = np_raw / 100 if abs(np_raw) > 1 else np_raw
sell_rate = sell_exp / rev
admin_rate = admin_exp / rev
rd_rate = rd_exp / rev
other_rate = (gp - np - sell_rate - admin_rate - rd_rate) * 100
other_fee_data.append({"period": period, "value": other_rate})
add_series('__other_fee_rate', other_fee_data)
asset_ratio_keys = [
('__money_cap_ratio', 'money_cap'), ('__inventories_ratio', 'inventories'),
('__ar_ratio', 'accounts_receiv_bill'), ('__prepay_ratio', 'prepayment'),
('__fix_assets_ratio', 'fix_assets'), ('__lt_invest_ratio', 'lt_eqt_invest'),
('__goodwill_ratio', 'goodwill'), ('__ap_ratio', 'accounts_pay'),
('__st_borr_ratio', 'st_borr'), ('__lt_borr_ratio', 'lt_borr'),
]
for key, num_key in asset_ratio_keys:
data = []
for period in periods:
numerator = _get_value(num_key, period)
denominator = _get_value('total_assets', period)
if numerator is not None and denominator is not None and denominator != 0:
data.append({"period": period, "value": (numerator / denominator) * 100})
add_series(key, data)
adv_data = []
for period in periods:
adv = _get_value('adv_receipts', period) or 0
contract = _get_value('contract_liab', period) or 0
total_assets = _get_value('total_assets', period)
if total_assets is not None and total_assets != 0:
adv_data.append({"period": period, "value": ((adv + contract) / total_assets) * 100})
add_series('__adv_ratio', adv_data)
other_assets_data = []
known_assets_keys = ['money_cap', 'inventories', 'accounts_receiv_bill', 'prepayment', 'fix_assets', 'lt_eqt_invest', 'goodwill']
for period in periods:
total_assets = _get_value('total_assets', period)
if total_assets is not None and total_assets != 0:
sum_known = sum(_get_value(k, period) or 0 for k in known_assets_keys)
other_assets_data.append({"period": period, "value": ((total_assets - sum_known) / total_assets) * 100})
add_series('__other_assets_ratio', other_assets_data)
op_assets_data = []
for period in periods:
total_assets = _get_value('total_assets', period)
if total_assets is not None and total_assets != 0:
inv = _get_value('inventories', period) or 0
ar = _get_value('accounts_receiv_bill', period) or 0
pre = _get_value('prepayment', period) or 0
ap = _get_value('accounts_pay', period) or 0
adv = _get_value('adv_receipts', period) or 0
contract_liab = _get_value('contract_liab', period) or 0
operating_assets = inv + ar + pre - ap - adv - contract_liab
op_assets_data.append({"period": period, "value": (operating_assets / total_assets) * 100})
add_series('__operating_assets_ratio', op_assets_data)
debt_ratio_data = []
for period in periods:
total_assets = _get_value('total_assets', period)
if total_assets is not None and total_assets != 0:
st_borr = _get_value('st_borr', period) or 0
lt_borr = _get_value('lt_borr', period) or 0
debt_ratio_data.append({"period": period, "value": ((st_borr + lt_borr) / total_assets) * 100})
add_series('__interest_bearing_debt_ratio', debt_ratio_data)
payturn_data = []
for period in periods:
avg_ap = _get_avg_value('accounts_pay', period)
cogs = _get_cogs(period)
if avg_ap is not None and cogs is not None and cogs != 0:
payturn_data.append({"period": period, "value": (365 * avg_ap) / cogs})
add_series('payturn_days', payturn_data)
per_capita_calcs = [
('__rev_per_emp', 'revenue', 10000),
('__profit_per_emp', 'n_income', 10000),
('__salary_per_emp', 'c_paid_to_for_empl', 10000),
]
for key, num_key, divisor in per_capita_calcs:
data = []
for period in periods:
numerator = _get_value(num_key, period)
employees = _get_value('employees', period)
if numerator is not None and employees is not None and employees != 0:
data.append({"period": period, "value": (numerator / employees) / divisor})
add_series(key, data)
return series
async def get_financial_statements(self, stock_code: str, report_dates: Optional[List[str]] = None) -> Dict[str, List[Dict[str, Any]]]:
# 1) Fetch all four statements in one pass (with reasonably complete field lists), then filter by the given report_dates
# Field lists follow the official examples; trim them as needed to avoid oversized requests
bs_fields = [
"ts_code","ann_date","f_ann_date","end_date","report_type","comp_type","end_type",
"money_cap","inventories","prepayment","accounts_receiv","accounts_receiv_bill","goodwill",
"lt_eqt_invest","fix_assets","total_assets","accounts_pay","adv_receipts","contract_liab",
"st_borr","lt_borr","total_cur_assets","total_cur_liab","total_ncl","total_liab","total_hldr_eqy_exc_min_int",
]
ic_fields = [
"ts_code","ann_date","f_ann_date","end_date","report_type","comp_type","end_type",
"total_revenue","revenue","sell_exp","admin_exp","rd_exp","operate_profit","total_profit",
"income_tax","n_income","n_income_attr_p","ebit","ebitda","netprofit_margin","grossprofit_margin",
]
cf_fields = [
"ts_code","ann_date","f_ann_date","end_date","comp_type","report_type","end_type",
"n_cashflow_act","c_pay_acq_const_fiolta","c_paid_to_for_empl","depr_fa_coga_dpba",
]
fi_fields = [
"ts_code","end_date","ann_date","grossprofit_margin","netprofit_margin","tax_to_ebt","roe","roa","roic",
"invturn_days","arturn_days","fa_turn","tr_yoy","dt_netprofit_yoy","assets_turn",
]
try:
bs_rows, ic_rows, cf_rows, fi_rows, rep_rows, div_rows, holder_rows, company_rows = await asyncio.gather(
self._query("balancesheet", params={"ts_code": stock_code, "report_type": 1}, fields=bs_fields),
self._query("income", params={"ts_code": stock_code, "report_type": 1}, fields=ic_fields),
self._query("cashflow", params={"ts_code": stock_code, "report_type": 1}, fields=cf_fields),
self._query("fina_indicator", params={"ts_code": stock_code}, fields=fi_fields),
# Repurchase announcements
self._query(
"repurchase",
params={"ts_code": stock_code},
fields=[
"ts_code","ann_date","end_date","proc","exp_date","vol","amount","high_limit","low_limit",
],
),
# Dividend announcements (only the necessary fields)
self._query(
"dividend",
params={"ts_code": stock_code},
fields=[
"ts_code","end_date","cash_div_tax","pay_date","base_share",
],
),
# Shareholder counts (by report period)
self._query(
"stk_holdernumber",
params={"ts_code": stock_code},
fields=[
"ts_code","ann_date","end_date","holder_num",
],
),
# Basic company info (includes employee count)
self._query(
"stock_company",
params={"ts_code": stock_code},
fields=[
"ts_code","employees",
],
),
)
try:
logger.info(f"[Dividend] fetched {len(div_rows)} rows for {stock_code}")
except Exception:
pass
except Exception as e:
logger.error(f"Tushare bulk fetch failed for {stock_code}: {e}")
bs_rows, ic_rows, cf_rows, fi_rows, rep_rows, div_rows, holder_rows, company_rows = [], [], [], [], [], [], [], []
# 2) Merge the four statements, keyed by end_date
by_date: Dict[str, Dict[str, Any]] = {}
def _merge_rows(rows: List[Dict[str, Any]]):
for r in rows or []:
end_date = str(r.get("end_date") or r.get("period") or "")
if not end_date:
continue
if end_date not in by_date:
by_date[end_date] = {"ts_code": stock_code, "end_date": end_date}
by_date[end_date].update(r)
_merge_rows(bs_rows)
_merge_rows(ic_rows)
_merge_rows(cf_rows)
_merge_rows(fi_rows)
# 3) Select report periods: this year's latest period plus all prior years' annual reports
current_year = str(datetime.date.today().year)
all_available_dates = sorted(by_date.keys(), reverse=True)
latest_current_year_report = None
for d in all_available_dates:
if d.startswith(current_year):
latest_current_year_report = d
break
previous_years_annual_reports = [
d for d in all_available_dates if d.endswith("1231") and not d.startswith(current_year)
]
wanted_dates = []
if latest_current_year_report:
wanted_dates.append(latest_current_year_report)
wanted_dates.extend(previous_years_annual_reports)
all_statements = [by_date[d] for d in wanted_dates if d in by_date]
logger.info(f"Successfully prepared {len(all_statements)} merged statement(s) for {stock_code} from {len(by_date)} available reports.")
# Transform to series format
series: Dict[str, List[Dict]] = {}
if all_statements:
for report in all_statements:
period = report.get("end_date", "")
if not period: continue
for key, value in report.items():
if key in ['ts_code', 'end_date', 'ann_date', 'f_ann_date', 'report_type', 'comp_type', 'end_type', 'update_flag', 'period']:
continue
# Keep only values coercible to a finite float, to avoid JSON serialization errors
try:
fv = float(value)
except (TypeError, ValueError):
continue
if value is not None and math.isfinite(fv):
if key not in series:
series[key] = []
if not any(d['period'] == period for d in series[key]):
series[key].append({"period": period, "value": fv})
# Aggregate repurchase info into an annual series, grouped by the year of the report-period end_date;
# repurchase_amount takes the amount from the last ann_date within that year.
if 'rep_rows' in locals() and rep_rows:
rep_by_year: Dict[str, Dict[str, Any]] = {}
for r in rep_rows:
endd = str(r.get("end_date") or r.get("ann_date") or "")
if not endd:
continue
y = endd[:4]
bucket = rep_by_year.setdefault(y, {
"amount_sum": 0.0,
"vol": 0.0,
"high_limit": None,
"low_limit": None,
"last_ann_date": None,
"amount_last": None,
})
amt = r.get("amount")
vol = r.get("vol")
hi = r.get("high_limit")
lo = r.get("low_limit")
ann = str(r.get("ann_date") or "")
if isinstance(amt, (int, float)) and amt is not None:
bucket["amount_sum"] += float(amt)
if ann and ann[:4] == y:
last = bucket["last_ann_date"]
if last is None or ann > last:
bucket["last_ann_date"] = ann
bucket["amount_last"] = float(amt)
if isinstance(vol, (int, float)) and vol is not None:
bucket["vol"] += float(vol)
if isinstance(hi, (int, float)) and hi is not None:
bucket["high_limit"] = float(hi)
if isinstance(lo, (int, float)) and lo is not None:
bucket["low_limit"] = float(lo)
if rep_by_year:
amt_series = []
vol_series = []
hi_series = []
lo_series = []
for y, v in rep_by_year.items():
# Current-year data goes on this year's latest report period; otherwise on the annual period
if y == current_year and latest_current_year_report:
period_key = latest_current_year_report
else:
period_key = f"{y}1231"
if v.get("amount_last") is not None:
amt_series.append({"period": period_key, "value": v["amount_last"]})
if v.get("vol"):
vol_series.append({"period": period_key, "value": v["vol"]})
if v.get("high_limit") is not None:
hi_series.append({"period": period_key, "value": v["high_limit"]})
if v.get("low_limit") is not None:
lo_series.append({"period": period_key, "value": v["low_limit"]})
if amt_series:
series["repurchase_amount"] = amt_series
if vol_series:
series["repurchase_vol"] = vol_series
if hi_series:
series["repurchase_high_limit"] = hi_series
if lo_series:
series["repurchase_low_limit"] = lo_series
# Aggregate dividends into an annual series, grouped by the year of the actual pay date (pay_date);
# each record's amount = per-share dividend (cash_div_tax) * base shares (base_share, in units of 10,000 shares);
# the amount is returned in units of 100 million, hence the extra division by 10000.
if 'div_rows' in locals() and div_rows:
div_by_year: Dict[str, float] = {}
for r in div_rows:
pay = str(r.get("pay_date") or "")
# Only count real pay dates that contain a numeric year
if not pay or len(pay) < 4 or not any(ch.isdigit() for ch in pay):
continue
y = pay[:4]
cash_div = r.get("cash_div_tax")
base_share = r.get("base_share")
if isinstance(cash_div, (int, float)) and isinstance(base_share, (int, float)):
# Total cash dividend (in 10k CNY) = per-share dividend (CNY) * base shares (10k shares)
# Dividing by 10000 converts to units of 100 million
amount_billion = (float(cash_div) * float(base_share)) / 10000.0
div_by_year[y] = div_by_year.get(y, 0.0) + amount_billion
if div_by_year:
div_series = []
for y, v in sorted(div_by_year.items()):
# Current-year data goes on this year's latest report period; otherwise on the annual period
if y == current_year and latest_current_year_report:
period_key = latest_current_year_report
else:
period_key = f"{y}1231"
div_series.append({"period": period_key, "value": v})
series["dividend_amount"] = div_series
# try:
# logger.info(f"[Dividend] Series dividend_amount(period) for {stock_code}: {div_series}")
# except Exception:
# pass
# Aggregate shareholder counts: group by end_date and take the latest holder_num
if 'holder_rows' in locals() and holder_rows:
# Group by end_date; take holder_num from the record with the latest ann_date
holder_by_period: Dict[str, Dict[str, Any]] = {}
for r in holder_rows:
end_date = str(r.get("end_date") or "")
if not end_date:
continue
ann_date = str(r.get("ann_date") or "")
holder_num = r.get("holder_num")
if end_date not in holder_by_period:
holder_by_period[end_date] = {
"holder_num": holder_num,
"latest_ann_date": ann_date
}
else:
# Compare ann_date values and keep the latest
current_latest = holder_by_period[end_date]["latest_ann_date"]
if ann_date and (not current_latest or ann_date > current_latest):
holder_by_period[end_date] = {
"holder_num": holder_num,
"latest_ann_date": ann_date
}
# Use the same report-period selection logic as the financial statements;
# shareholder counts should line up with the statements' report periods
holder_series = []
for end_date in wanted_dates:
if end_date in holder_by_period:
data = holder_by_period[end_date]
holder_num = data["holder_num"]
if isinstance(holder_num, (int, float)) and holder_num is not None:
holder_series.append({"period": end_date, "value": float(holder_num)})
if holder_series:
series["holder_num"] = holder_series
# Aggregate employee counts; the count is placed at the end of last year (Dec 31 of the previous year)
if 'company_rows' in locals() and company_rows:
# Employee count is fairly static; take the most recent value
latest_employees = None
for r in company_rows:
employees = r.get("employees")
if isinstance(employees, (int, float)) and employees is not None:
latest_employees = float(employees)
break  # take the first valid value
if latest_employees is not None:
# Place the employee count at Dec 31 of the previous year
previous_year = str(datetime.date.today().year - 1)
period_key = f"{previous_year}1231"
series["employees"] = [{"period": period_key, "value": latest_employees}]
# Calculate derived metrics
periods = sorted(list(set(d['period'] for s in series.values() for d in s)))
series = self._calculate_derived_metrics(series, periods)
return series
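For reference, the series structure this method returns looks like the following (values invented for illustration; keys starting with __ are derived metrics added by _calculate_derived_metrics):

series_example = {
    "revenue": [{"period": "20221231", "value": 1.3e11}, {"period": "20231231", "value": 1.5e11}],
    "holder_num": [{"period": "20231231", "value": 155000.0}],
    "__free_cash_flow": [{"period": "20231231", "value": 2.1e10}],
}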


@@ -0,0 +1,114 @@
from .base import BaseDataProvider
from typing import Any, Dict, List, Optional
import yfinance as yf
import pandas as pd
from datetime import datetime
import asyncio
import logging
logger = logging.getLogger(__name__)
class YfinanceProvider(BaseDataProvider):
def _map_stock_code(self, stock_code: str) -> str:
# yfinance uses different tickers for CN market
if stock_code.endswith('.SH'):
return stock_code.replace('.SH', '.SS')
elif stock_code.endswith('.SZ'):
# For Shenzhen stocks, use the bare code without a suffix (a .SZ fallback is not implemented here)
base_code = stock_code.replace('.SZ', '')
return base_code
return stock_code
async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
def _fetch():
try:
ticker = yf.Ticker(self._map_stock_code(stock_code))
info = ticker.info
# Normalize data to match expected format
return {
"ts_code": stock_code,
"name": info.get("longName"),
"area": info.get("country"),
"industry": info.get("industry"),
"market": info.get("market"),
"exchange": info.get("exchange"),
"list_date": datetime.fromtimestamp(info.get("firstTradeDateEpoch", 0)).strftime('%Y%m%d') if info.get("firstTradeDateEpoch") else None,
}
except Exception as e:
logger.error(f"yfinance get_stock_basic failed for {stock_code}: {e}")
return None
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)
async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]:
def _fetch():
try:
# yfinance date format is YYYY-MM-DD
start_fmt = datetime.strptime(start_date, '%Y%m%d').strftime('%Y-%m-%d')
end_fmt = datetime.strptime(end_date, '%Y%m%d').strftime('%Y-%m-%d')
ticker = yf.Ticker(self._map_stock_code(stock_code))
df = ticker.history(start=start_fmt, end=end_fmt)
df.reset_index(inplace=True)
# Normalize column names
df.rename(columns={
"Date": "trade_date",
"Open": "open", "High": "high", "Low": "low", "Close": "close",
"Volume": "vol"
}, inplace=True)
df['trade_date'] = df['trade_date'].dt.strftime('%Y%m%d')
return df.to_dict('records')
except Exception as e:
logger.error(f"yfinance get_daily_price failed for {stock_code}: {e}")
return []
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)
async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> List[Dict[str, Any]]:
def _fetch():
try:
ticker = yf.Ticker(self._map_stock_code(stock_code))
# yfinance provides financials quarterly or annually. We'll fetch annually and try to match the dates.
# Note: This is an approximation as yfinance does not allow fetching by specific end-of-year dates.
df_financials = ticker.financials.transpose()
df_balance = ticker.balance_sheet.transpose()
df_cashflow = ticker.cash_flow.transpose()
if df_financials.empty and df_balance.empty and df_cashflow.empty:
return []
# Combine the data
df_combined = pd.concat([df_financials, df_balance, df_cashflow], axis=1)
df_combined.index.name = 'end_date'
df_combined.reset_index(inplace=True)
df_combined['end_date_str'] = df_combined['end_date'].dt.strftime('%Y%m%d')
# Filter by requested dates (allowing for some flexibility if exact match not found)
# This simplistic filtering might need to be more robust.
# For now, we assume the yearly data maps to the year in report_dates.
years_to_fetch = {date[:4] for date in report_dates}
df_combined = df_combined[df_combined['end_date'].dt.year.astype(str).isin(years_to_fetch)]
# Data Normalization (yfinance columns are different from Tushare)
# This is a sample, a more comprehensive mapping would be required.
df_combined.rename(columns={
"Total Revenue": "revenue",
"Net Income": "net_income",
"Total Assets": "total_assets",
"Total Liab": "total_liabilities",
}, inplace=True, errors='ignore')
return df_combined.to_dict('records')
except Exception as e:
logger.error(f"yfinance get_financial_statements failed for {stock_code}: {e}")
return []
loop = asyncio.get_event_loop()
return await loop.run_in_executor(None, _fetch)


@@ -8,13 +8,34 @@ from fastapi.middleware.cors import CORSMiddleware
from app.core.config import settings
from app.routers.config import router as config_router
from app.routers.financial import router as financial_router
from app.routers.orgs import router as orgs_router
# Configure logging
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s - %(levelname)s: %(message)s',
datefmt='%H:%M:%S'
)
# Configure logging to ensure our app logs show up in development
import sys
# Force our logging configuration to override uvicorn's
class ForcefulHandler(logging.Handler):
def emit(self, record):
# Force output to stdout regardless of uvicorn's configuration
print(f"[APP] {record.getMessage()}", file=sys.stdout, flush=True)
# Set up our forceful handler for data providers
forceful_handler = ForcefulHandler()
forceful_handler.setLevel(logging.DEBUG)
# Configure data providers logger with forceful output
data_providers_logger = logging.getLogger('app.data_providers')
data_providers_logger.setLevel(logging.DEBUG)
data_providers_logger.addHandler(forceful_handler)
# Also set up for the main app logger
app_logger = logging.getLogger('app')
app_logger.setLevel(logging.INFO)
app_logger.addHandler(forceful_handler)
# Ensure our handlers are not suppressed
data_providers_logger.propagate = False
app_logger.propagate = False
app = FastAPI(title=settings.APP_NAME, version=settings.APP_VERSION)
@@ -30,6 +51,7 @@ app.add_middleware(
# Routers
app.include_router(config_router, prefix=f"{settings.API_V1_STR}/config", tags=["config"])
app.include_router(financial_router, prefix=f"{settings.API_V1_STR}/financials", tags=["financials"])
app.include_router(orgs_router, prefix=f"{settings.API_V1_STR}/orgs", tags=["orgs"])
@app.get("/")
async def root():

File diff suppressed because it is too large

backend/app/routers/orgs.py Normal file

@@ -0,0 +1,143 @@
import logging
import os
import json
from typing import Dict
from fastapi import APIRouter, BackgroundTasks, HTTPException
# Lazy loader for DataManager
_dm = None
def get_dm():
global _dm
if _dm is not None:
return _dm
try:
from app.data_manager import data_manager as real_dm
_dm = real_dm
return _dm
except Exception:
# Return a stub if the real one fails to import
class _StubDM:
async def get_stock_basic(self, stock_code: str): return None
async def get_financial_statements(self, stock_code: str, report_dates): return []
_dm = _StubDM()
return _dm
from app.services.analysis_client import AnalysisClient, load_analysis_config
router = APIRouter()
logger = logging.getLogger(__name__)
# Constants for config paths
REPO_ROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
BASE_CONFIG_PATH = os.path.join(REPO_ROOT, "config", "config.json")
def _load_json(path: str) -> Dict:
if not os.path.exists(path):
return {}
try:
with open(path, "r", encoding="utf-8") as f:
return json.load(f)
except Exception:
return {}
async def run_full_analysis(org_id: str):
"""
Asynchronous task to run a full analysis for a given stock.
This function is market-agnostic and relies on DataManager.
"""
logger.info(f"Starting full analysis task for {org_id}")
# 1. Load configurations
base_cfg = _load_json(BASE_CONFIG_PATH)
llm_provider = base_cfg.get("llm", {}).get("provider", "gemini")
llm_config = base_cfg.get("llm", {}).get(llm_provider, {})
api_key = llm_config.get("api_key")
base_url = llm_config.get("base_url")
if not api_key:
logger.error(f"API key for {llm_provider} not configured. Aborting analysis for {org_id}.")
return
analysis_config_full = load_analysis_config()
modules_config = analysis_config_full.get("analysis_modules", {})
if not modules_config:
logger.error(f"Analysis modules configuration not found. Aborting analysis for {org_id}.")
return
# 2. Fetch basic company info (name)
try:
basic_data = await get_dm().get_stock_basic(stock_code=org_id)
company_name = basic_data.get("name", org_id) if basic_data else org_id
logger.info(f"Got company name for {org_id}: {company_name}")
except Exception as e:
logger.warning(f"Failed to get company name for {org_id}. Using org_id as name. Error: {e}")
company_name = org_id
# 3. Fetch financial data
financial_data = None
try:
# You might want to make the date range configurable
from datetime import datetime
current_year = datetime.now().year
report_dates = [f"{year}1231" for year in range(current_year - 5, current_year)]
financial_statements = await get_dm().get_financial_statements(stock_code=org_id, report_dates=report_dates)
if financial_statements:
financial_data = {"series": financial_statements}
logger.info(f"Successfully fetched financial statements for {org_id}")
else:
logger.warning(f"Could not fetch financial statements for {org_id}")
except Exception as e:
logger.error(f"Error fetching financial data for {org_id}: {e}")
# 4. Execute analysis modules in order (simplified, assumes no complex dependencies for now)
# Note: A full implementation would need the topological sort from the financial router.
analysis_results = {}
for module_type, module_config in modules_config.items():
logger.info(f"Running analysis module: {module_type} for {org_id}")
client = AnalysisClient(
api_key=api_key,
base_url=base_url,
model=module_config.get("model", "gemini-1.5-flash")
)
# Simplified context: use results from all previously completed modules
context = analysis_results.copy()
result = await client.generate_analysis(
analysis_type=module_type,
company_name=company_name,
ts_code=org_id,
prompt_template=module_config.get("prompt_template", ""),
financial_data=financial_data,
context=context,
)
if result.get("success"):
analysis_results[module_type] = result.get("content", "")
logger.info(f"Module {module_type} for {org_id} completed successfully.")
else:
logger.error(f"Module {module_type} for {org_id} failed: {result.get('error')}")
# Store error message to avoid breaking dependencies that might handle missing data
analysis_results[module_type] = f"Error: Analysis for {module_type} failed."
# 5. Save the final report
# TODO: Implement database logic to save the `analysis_results` to the report record.
logger.info(f"Full analysis for {org_id} finished. Results: {json.dumps(analysis_results, indent=2, ensure_ascii=False)}")
@router.post("/{market}/{org_id}/reports/generate")
async def trigger_report_generation(market: str, org_id: str, background_tasks: BackgroundTasks):
"""
Triggers a background task to generate a full financial report.
This endpoint is now market-agnostic.
"""
logger.info(f"Received report generation request for {org_id} in {market} market.")
# TODO: Create a report record in the database with "generating" status here.
background_tasks.add_task(run_full_analysis, org_id)
logger.info(f"Queued analysis task for {org_id}.")
return {"queued": True, "market": market, "org_id": org_id}


@@ -5,10 +5,9 @@ from typing import Dict, List, Optional
from pydantic import BaseModel
class YearDataPoint(BaseModel):
year: str
class PeriodDataPoint(BaseModel):
period: str
value: Optional[float]
month: Optional[int] = None # month, used to determine the quarter
class StepRecord(BaseModel):
@@ -33,7 +32,7 @@ class FinancialMeta(BaseModel):
class BatchFinancialDataResponse(BaseModel):
ts_code: str
name: Optional[str] = None
series: Dict[str, List[YearDataPoint]]
series: Dict[str, List[PeriodDataPoint]]
meta: Optional[FinancialMeta] = None
@@ -72,3 +71,29 @@ class AnalysisResponse(BaseModel):
class AnalysisConfigResponse(BaseModel):
analysis_modules: Dict[str, Dict]
class TodaySnapshotResponse(BaseModel):
ts_code: str
trade_date: str
name: Optional[str] = None
close: Optional[float] = None
pe: Optional[float] = None
pb: Optional[float] = None
dv_ratio: Optional[float] = None
total_mv: Optional[float] = None
class RealTimeQuoteResponse(BaseModel):
symbol: str
market: str
ts: str
price: float
open_price: Optional[float] = None
high_price: Optional[float] = None
low_price: Optional[float] = None
prev_close: Optional[float] = None
change: Optional[float] = None
change_percent: Optional[float] = None
volume: Optional[int] = None
source: Optional[str] = None
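For illustration, a hypothetical read-only route serving this schema from the persistence cache; the route path, parameter defaults and import paths are assumptions, not the actual router code in this diff:

```python
from fastapi import APIRouter, HTTPException, Query

from app.schemas.financial import RealTimeQuoteResponse          # path assumed
from app.services.data_persistence_client import DataPersistenceClient  # path assumed

router = APIRouter()

@router.get("/{market}/{symbol}/realtime", response_model=RealTimeQuoteResponse)
async def get_realtime_quote(
    market: str,
    symbol: str,
    max_age_seconds: int = Query(60, ge=1),
) -> RealTimeQuoteResponse:
    # Read-only: serve from the persistence cache; 404 when nothing fresh enough exists.
    quote = await DataPersistenceClient().get_latest_realtime_quote(
        market=market, symbol=symbol, max_age_seconds=max_age_seconds
    )
    if quote is None:
        raise HTTPException(status_code=404, detail="no fresh quote in cache")
    # model_dump(mode="json") converts the datetime ts into the string the schema expects
    return RealTimeQuoteResponse(**quote.model_dump(mode="json"))
```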


@@ -14,7 +14,8 @@ class AnalysisClient:
def __init__(self, api_key: str, base_url: str, model: str):
"""Initialize OpenAI client with API key, base URL, and model"""
self.client = openai.AsyncOpenAI(api_key=api_key, base_url=base_url)
# Increase client timeout to allow long-running analysis (5 minutes)
self.client = openai.AsyncOpenAI(api_key=api_key, base_url=base_url, timeout=300.0)
self.model_name = model
async def generate_analysis(
@@ -56,6 +57,7 @@
response = await self.client.chat.completions.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
timeout=300.0,
)
content = response.choices[0].message.content if response.choices else ""
@@ -130,6 +132,51 @@
return prompt
async def generate_analysis_stream(
self,
analysis_type: str,
company_name: str,
ts_code: str,
prompt_template: str,
financial_data: Optional[Dict] = None,
context: Optional[Dict] = None
):
"""Yield analysis content chunks using OpenAI-compatible streaming API.
Yields plain text chunks as they arrive.
"""
# Build prompt
prompt = self._build_prompt(
prompt_template,
company_name,
ts_code,
financial_data,
context,
)
try:
stream = await self.client.chat.completions.create(
model=self.model_name,
messages=[{"role": "user", "content": prompt}],
stream=True,
timeout=300.0,
)
# The SDK yields events with incremental deltas
async for event in stream:
try:
choice = event.choices[0] if getattr(event, "choices", None) else None
delta = getattr(choice, "delta", None) if choice is not None else None
content = getattr(delta, "content", None) if delta is not None else None
if content:
yield content
except Exception:
# Best-effort: ignore malformed chunks
continue
except Exception as e:
# Emit error message to the stream so the client can surface it
yield f"\n\n[错误] {type(e).__name__}: {str(e)}\n"
def load_analysis_config() -> Dict:
"""Load analysis configuration from JSON file"""

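For reference, a minimal sketch of consuming `generate_analysis_stream` from the snippet above; credentials, base URL and template are placeholders, not values from this diff:

```python
import asyncio

async def main():
    client = AnalysisClient(
        api_key="sk-...",                        # placeholder
        base_url="https://api.example.com/v1",   # placeholder
        model="gemini-1.5-flash",
    )
    # Chunks arrive as plain text and can be printed or forwarded to SSE as-is.
    async for chunk in client.generate_analysis_stream(
        analysis_type="bull_case",
        company_name="平安银行",
        ts_code="000001.SZ",
        prompt_template="Write a bull case for {company_name} ({ts_code}).",
    ):
        print(chunk, end="", flush=True)

asyncio.run(main())
```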

@@ -1,60 +1,38 @@
"""
Configuration Management Service
Configuration Management Service (file + service based; no direct DB)
"""
import json
import os
import asyncio
from typing import Any, Dict
import asyncpg
import httpx
from sqlalchemy.ext.asyncio import AsyncSession
from sqlalchemy.future import select
from app.models.system_config import SystemConfig
from app.schemas.config import ConfigResponse, ConfigUpdateRequest, DatabaseConfig, NewApiConfig, DataSourceConfig, ConfigTestResponse
from app.core.config import settings
class ConfigManager:
"""Manages system configuration by merging a static JSON file with dynamic settings from the database."""
"""Manages system configuration by fetching from config-service and updating local config."""
def __init__(self, db_session: AsyncSession, config_path: str = None):
self.db = db_session
def __init__(self, config_path: str = None):
if config_path is None:
# Default path: backend/app/services -> project_root/config/config.json
# __file__ = backend/app/services/config_manager.py
# go up three levels to project root
project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", ".."))
self.config_path = os.path.join(project_root, "config", "config.json")
else:
self.config_path = config_path
def _load_base_config_from_file(self) -> Dict[str, Any]:
"""Loads the base configuration from the JSON file."""
if not os.path.exists(self.config_path):
return {}
try:
with open(self.config_path, "r", encoding="utf-8") as f:
return json.load(f)
except (IOError, json.JSONDecodeError):
return {}
async def _load_dynamic_config_from_db(self) -> Dict[str, Any]:
"""Loads dynamic configuration overrides from the database.
Degrades gracefully to an empty override config when the table does not exist yet (e.g. migrations not run in development), avoiding a 500 from the endpoint.
"""
try:
db_configs: Dict[str, Any] = {}
result = await self.db.execute(select(SystemConfig))
for record in result.scalars().all():
db_configs[record.config_key] = record.config_value
return db_configs
except Exception:
# Ignore dynamic config overrides when the table is missing or another DB error occurs
return {}
async def _fetch_base_config_from_service(self) -> Dict[str, Any]:
base_url = settings.CONFIG_SERVICE_BASE_URL.rstrip("/")
url = f"{base_url}/system"
async with httpx.AsyncClient(timeout=10.0) as client:
resp = await client.get(url)
resp.raise_for_status()
data = resp.json()
if not isinstance(data, dict):
raise ValueError("Config service 返回的系统配置格式错误")
return data
def _merge_configs(self, base: Dict[str, Any], overrides: Dict[str, Any]) -> Dict[str, Any]:
"""Deeply merges the override config into the base config."""
for key, value in overrides.items():
if isinstance(value, dict) and isinstance(base.get(key), dict):
base[key] = self._merge_configs(base[key], value)
@@ -63,53 +41,38 @@ class ConfigManager:
return base
async def get_config(self) -> ConfigResponse:
"""Gets the final, merged configuration."""
base_config = self._load_base_config_from_file()
db_config = await self._load_dynamic_config_from_db()
merged_config = self._merge_configs(base_config, db_config)
base_config = await self._fetch_base_config_from_service()
# Accept both locations: prefer new_api, then fall back to llm.new_api
new_api_src = merged_config.get("new_api") or merged_config.get("llm", {}).get("new_api", {})
new_api_src = base_config.get("new_api") or base_config.get("llm", {}).get("new_api", {})
return ConfigResponse(
database=DatabaseConfig(**merged_config.get("database", {})),
database=DatabaseConfig(**base_config.get("database", {})),
new_api=NewApiConfig(**(new_api_src or {})),
data_sources={
k: DataSourceConfig(**v)
for k, v in merged_config.get("data_sources", {}).items()
for k, v in base_config.get("data_sources", {}).items()
}
)
async def update_config(self, config_update: ConfigUpdateRequest) -> ConfigResponse:
"""Updates configuration in the database and returns the new merged config."""
try:
update_dict = config_update.dict(exclude_unset=True)
update_dict = config_update.dict(exclude_unset=True)
self._validate_config_data(update_dict)
# Validate the config payload
self._validate_config_data(update_dict)
# Write directly to config.json at the project root
current = {}
if os.path.exists(self.config_path):
with open(self.config_path, "r", encoding="utf-8") as f:
current = json.load(f) or {}
for key, value in update_dict.items():
existing_config = await self.db.get(SystemConfig, key)
if existing_config:
# Merge with existing DB value before updating
if isinstance(existing_config.config_value, dict) and isinstance(value, dict):
merged_value = self._merge_configs(existing_config.config_value, value)
existing_config.config_value = merged_value
else:
existing_config.config_value = value
else:
new_config = SystemConfig(config_key=key, config_value=value)
self.db.add(new_config)
merged = self._merge_configs(current, update_dict)
with open(self.config_path, "w", encoding="utf-8") as f:
json.dump(merged, f, ensure_ascii=False, indent=2)
await self.db.commit()
return await self.get_config()
except Exception as e:
await self.db.rollback()
raise e
# Return the merged view (matches get_config: read once from the service to avoid multi-source inconsistency)
return await self.get_config()
def _validate_config_data(self, config_data: Dict[str, Any]) -> None:
"""Validate configuration data before saving."""
if "database" in config_data:
db_config = config_data["database"]
if "url" in db_config:
@@ -132,7 +95,6 @@ class ConfigManager:
raise ValueError(f"{source_name} API Key长度不能少于10个字符")
async def test_config(self, config_type: str, config_data: Dict[str, Any]) -> ConfigTestResponse:
"""Test a specific configuration."""
try:
if config_type == "database":
return await self._test_database(config_data)
@@ -143,92 +105,47 @@
elif config_type == "finnhub":
return await self._test_finnhub(config_data)
else:
return ConfigTestResponse(
success=False,
message=f"不支持的配置类型: {config_type}"
)
return ConfigTestResponse(success=False, message=f"不支持的配置类型: {config_type}")
except Exception as e:
return ConfigTestResponse(
success=False,
message=f"测试失败: {str(e)}"
)
return ConfigTestResponse(success=False, message=f"测试失败: {str(e)}")
async def _test_database(self, config_data: Dict[str, Any]) -> ConfigTestResponse:
"""Test database connection."""
db_url = config_data.get("url")
if not db_url:
return ConfigTestResponse(
success=False,
message="数据库URL不能为空"
)
return ConfigTestResponse(success=False, message="数据库URL不能为空")
try:
# Parse the database URL
if db_url.startswith("postgresql+asyncpg://"):
db_url = db_url.replace("postgresql+asyncpg://", "postgresql://")
# Test the connection
conn = await asyncpg.connect(db_url)
await conn.close()
return ConfigTestResponse(
success=True,
message="数据库连接成功"
)
return ConfigTestResponse(success=True, message="数据库连接成功")
except Exception as e:
return ConfigTestResponse(
success=False,
message=f"数据库连接失败: {str(e)}"
)
return ConfigTestResponse(success=False, message=f"数据库连接失败: {str(e)}")
async def _test_new_api(self, config_data: Dict[str, Any]) -> ConfigTestResponse:
"""Test New API (OpenAI-compatible) connection."""
api_key = config_data.get("api_key")
base_url = config_data.get("base_url")
if not api_key or not base_url:
return ConfigTestResponse(
success=False,
message="New API Key和Base URL均不能为空"
)
return ConfigTestResponse(success=False, message="New API Key和Base URL均不能为空")
try:
async with httpx.AsyncClient(timeout=10.0) as client:
# Test API availability by listing models
response = await client.get(
f"{base_url.rstrip('/')}/models",
headers={"Authorization": f"Bearer {api_key}"}
)
if response.status_code == 200:
return ConfigTestResponse(
success=True,
message="New API连接成功"
)
return ConfigTestResponse(success=True, message="New API连接成功")
else:
return ConfigTestResponse(
success=False,
message=f"New API测试失败: HTTP {response.status_code} - {response.text}"
)
return ConfigTestResponse(success=False, message=f"New API测试失败: HTTP {response.status_code} - {response.text}")
except Exception as e:
return ConfigTestResponse(
success=False,
message=f"New API连接失败: {str(e)}"
)
return ConfigTestResponse(success=False, message=f"New API连接失败: {str(e)}")
async def _test_tushare(self, config_data: Dict[str, Any]) -> ConfigTestResponse:
"""Test Tushare API connection."""
api_key = config_data.get("api_key")
if not api_key:
return ConfigTestResponse(
success=False,
message="Tushare API Key不能为空"
)
return ConfigTestResponse(success=False, message="Tushare API Key不能为空")
try:
async with httpx.AsyncClient(timeout=10.0) as client:
# Probe API availability
response = await client.post(
"http://api.tushare.pro",
json={
@@ -238,67 +155,34 @@
"fields": "ts_code"
}
)
if response.status_code == 200:
data = response.json()
if data.get("code") == 0:
return ConfigTestResponse(
success=True,
message="Tushare API连接成功"
)
return ConfigTestResponse(success=True, message="Tushare API连接成功")
else:
return ConfigTestResponse(
success=False,
message=f"Tushare API错误: {data.get('msg', '未知错误')}"
)
return ConfigTestResponse(success=False, message=f"Tushare API错误: {data.get('msg', '未知错误')}")
else:
return ConfigTestResponse(
success=False,
message=f"Tushare API测试失败: HTTP {response.status_code}"
)
return ConfigTestResponse(success=False, message=f"Tushare API测试失败: HTTP {response.status_code}")
except Exception as e:
return ConfigTestResponse(
success=False,
message=f"Tushare API连接失败: {str(e)}"
)
return ConfigTestResponse(success=False, message=f"Tushare API连接失败: {str(e)}")
async def _test_finnhub(self, config_data: Dict[str, Any]) -> ConfigTestResponse:
"""Test Finnhub API connection."""
api_key = config_data.get("api_key")
if not api_key:
return ConfigTestResponse(
success=False,
message="Finnhub API Key不能为空"
)
return ConfigTestResponse(success=False, message="Finnhub API Key不能为空")
try:
async with httpx.AsyncClient(timeout=10.0) as client:
# Probe API availability
response = await client.get(
f"https://finnhub.io/api/v1/quote",
"https://finnhub.io/api/v1/quote",
params={"symbol": "AAPL", "token": api_key}
)
if response.status_code == 200:
data = response.json()
if "c" in data: # 检查是否有价格数据
return ConfigTestResponse(
success=True,
message="Finnhub API连接成功"
)
if "c" in data:
return ConfigTestResponse(success=True, message="Finnhub API连接成功")
else:
return ConfigTestResponse(
success=False,
message="Finnhub API响应格式错误"
)
return ConfigTestResponse(success=False, message="Finnhub API响应格式错误")
else:
return ConfigTestResponse(
success=False,
message=f"Finnhub API测试失败: HTTP {response.status_code}"
)
return ConfigTestResponse(success=False, message=f"Finnhub API测试失败: HTTP {response.status_code}")
except Exception as e:
return ConfigTestResponse(
success=False,
message=f"Finnhub API连接失败: {str(e)}"
)
return ConfigTestResponse(success=False, message=f"Finnhub API连接失败: {str(e)}")


@@ -0,0 +1,182 @@
from __future__ import annotations
import datetime as dt
from typing import Any, Dict, List, Optional
import httpx
from pydantic import BaseModel
from app.core.config import settings
class CompanyProfile(BaseModel):
symbol: str
name: str
industry: Optional[str] = None
list_date: Optional[dt.date] = None
additional_info: Optional[Dict[str, Any]] = None
class TimeSeriesFinancial(BaseModel):
symbol: str
metric_name: str
period_date: dt.date
value: float
source: Optional[str] = None
class TimeSeriesFinancialBatch(BaseModel):
records: List[TimeSeriesFinancial]
class DailyMarketData(BaseModel):
symbol: str
trade_date: dt.date
open_price: Optional[float] = None
high_price: Optional[float] = None
low_price: Optional[float] = None
close_price: Optional[float] = None
volume: Optional[int] = None
pe: Optional[float] = None
pb: Optional[float] = None
total_mv: Optional[float] = None
class DailyMarketDataBatch(BaseModel):
records: List[DailyMarketData]
class RealtimeQuote(BaseModel):
symbol: str
market: str
ts: dt.datetime
price: float
open_price: Optional[float] = None
high_price: Optional[float] = None
low_price: Optional[float] = None
prev_close: Optional[float] = None
change: Optional[float] = None
change_percent: Optional[float] = None
volume: Optional[int] = None
source: Optional[str] = None
class NewAnalysisResult(BaseModel):
symbol: str
module_id: str
model_name: Optional[str] = None
content: str
meta_data: Optional[Dict[str, Any]] = None
class AnalysisResult(BaseModel):
id: str
symbol: str
module_id: str
generated_at: dt.datetime
model_name: Optional[str] = None
content: str
meta_data: Optional[Dict[str, Any]] = None
class DataPersistenceClient:
def __init__(self, base_url: Optional[str] = None, timeout: float = 20.0):
self.base_url = (base_url or settings.DATA_PERSISTENCE_BASE_URL).rstrip("/")
self.timeout = timeout
async def _client(self) -> httpx.AsyncClient:
return httpx.AsyncClient(base_url=self.base_url, timeout=self.timeout)
# Companies
async def upsert_company(self, profile: CompanyProfile) -> None:
async with await self._client() as client:
resp = await client.put("/companies", json=profile.model_dump(mode="json"))
resp.raise_for_status()
async def get_company(self, symbol: str) -> CompanyProfile:
async with await self._client() as client:
resp = await client.get(f"/companies/{symbol}")
resp.raise_for_status()
return CompanyProfile.model_validate(resp.json())
# Financials
async def batch_insert_financials(self, batch: TimeSeriesFinancialBatch) -> None:
async with await self._client() as client:
resp = await client.post("/market-data/financials/batch", json=batch.model_dump(mode="json"))
resp.raise_for_status()
async def get_financials_by_symbol(self, symbol: str, metrics: Optional[List[str]] = None) -> List[TimeSeriesFinancial]:
params = {}
if metrics:
params["metrics"] = ",".join(metrics)
async with await self._client() as client:
resp = await client.get(f"/market-data/financials/{symbol}", params=params)
resp.raise_for_status()
return [TimeSeriesFinancial.model_validate(item) for item in resp.json()]
# Daily data
async def batch_insert_daily_data(self, batch: DailyMarketDataBatch) -> None:
async with await self._client() as client:
resp = await client.post("/market-data/daily/batch", json=batch.model_dump(mode="json"))
resp.raise_for_status()
async def get_daily_data_by_symbol(
self,
symbol: str,
start_date: Optional[dt.date] = None,
end_date: Optional[dt.date] = None,
) -> List[DailyMarketData]:
params = {}
if start_date:
params["start_date"] = start_date.isoformat()
if end_date:
params["end_date"] = end_date.isoformat()
async with await self._client() as client:
resp = await client.get(f"/market-data/daily/{symbol}", params=params)
resp.raise_for_status()
return [DailyMarketData.model_validate(item) for item in resp.json()]
# Realtime quotes
async def upsert_realtime_quote(self, quote: RealtimeQuote) -> None:
async with await self._client() as client:
resp = await client.post("/market-data/quotes", json=quote.model_dump(mode="json"))
resp.raise_for_status()
async def get_latest_realtime_quote(
self,
market: str,
symbol: str,
max_age_seconds: Optional[int] = None,
) -> Optional[RealtimeQuote]:
params = {"market": market}
if max_age_seconds is not None:
params["max_age_seconds"] = int(max_age_seconds)
async with await self._client() as client:
resp = await client.get(f"/market-data/quotes/{symbol}", params=params)
if resp.status_code == 404:
return None
resp.raise_for_status()
return RealtimeQuote.model_validate(resp.json())
# Analysis results
async def create_analysis_result(self, new_result: NewAnalysisResult) -> AnalysisResult:
async with await self._client() as client:
resp = await client.post("/analysis-results", json=new_result.model_dump(mode="json"))
resp.raise_for_status()
return AnalysisResult.model_validate(resp.json())
async def get_analysis_results(self, symbol: str, module_id: Optional[str] = None) -> List[AnalysisResult]:
params = {"symbol": symbol}
if module_id:
params["module_id"] = module_id
async with await self._client() as client:
resp = await client.get("/analysis-results", params=params)
resp.raise_for_status()
return [AnalysisResult.model_validate(item) for item in resp.json()]
async def get_analysis_result_by_id(self, result_id: str) -> AnalysisResult:
async with await self._client() as client:
resp = await client.get(f"/analysis-results/{result_id}")
resp.raise_for_status()
return AnalysisResult.model_validate(resp.json())
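A hypothetical usage sketch for the client above; the base URL comes from settings and the values are illustrative only:

```python
import asyncio
import datetime as dt

async def main():
    client = DataPersistenceClient()  # reads DATA_PERSISTENCE_BASE_URL from settings
    # Write one realtime quote into the cache...
    await client.upsert_realtime_quote(RealtimeQuote(
        symbol="AAPL",
        market="us",
        ts=dt.datetime.now(dt.timezone.utc),
        price=189.5,
        source="finnhub",
    ))
    # ...and read it back with a freshness bound.
    quote = await client.get_latest_realtime_quote("us", "AAPL", max_age_seconds=60)
    print(quote)  # None once the cached quote is older than max_age_seconds

asyncio.run(main())
```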


@@ -1,52 +0,0 @@
"""
Minimal async client for Tushare Pro API
"""
from typing import Any, Dict, List, Optional
import httpx
TUSHARE_PRO_URL = "https://api.tushare.pro"
class TushareClient:
def __init__(self, token: str):
self.token = token
self._client = httpx.AsyncClient(timeout=30)
async def query(
self,
api_name: str,
params: Optional[Dict[str, Any]] = None,
fields: Optional[str] = None,
) -> List[Dict[str, Any]]:
payload = {
"api_name": api_name,
"token": self.token,
"params": params or {},
}
# default larger page size if not provided
if "limit" not in payload["params"]:
payload["params"]["limit"] = 5000
if fields:
payload["fields"] = fields
resp = await self._client.post(TUSHARE_PRO_URL, json=payload)
resp.raise_for_status()
data = resp.json()
if data.get("code") != 0:
err = data.get("msg") or "Tushare error"
raise RuntimeError(f"{api_name}: {err}")
fields_def = data.get("data", {}).get("fields", [])
items = data.get("data", {}).get("items", [])
rows: List[Dict[str, Any]] = []
for it in items:
row = {fields_def[i]: it[i] for i in range(len(fields_def))}
rows.append(row)
return rows
async def aclose(self):
await self._client.aclose()
async def __aenter__(self):
return self
async def __aexit__(self, exc_type, exc, tb):
await self.aclose()


@@ -7,3 +7,11 @@ aiosqlite==0.20.0
alembic==1.13.3
openai==1.37.0
asyncpg
greenlet>=3.1.0
# Data Providers
tushare==1.4.1
yfinance==0.2.37
finnhub-python==2.4.20
pandas==2.2.2
PyYAML==6.0.1

File diff suppressed because one or more lines are too long

config/data_sources.yaml Normal file

@@ -0,0 +1,37 @@
# Configuration for data sources used by the DataManager
# Defines the available data sources and their specific configurations.
# 'api_key_env' specifies the environment variable that should hold the API key/token.
data_sources:
tushare:
api_key_env: TUSHARE_TOKEN
description: "Primary data source for China market (A-shares)."
yfinance:
api_key_env: null # No API key required
description: "Good for global market data, especially US stocks."
finnhub:
api_key_env: FINNHUB_API_KEY
description: "Another comprehensive source for global stock data."
# Defines the priority of data providers for each market.
# The DataManager will try them in order until data is successfully fetched.
markets:
CN: # China Market
priority:
- tushare
- yfinance # yfinance can be a fallback
US: # US Market
priority:
- finnhub
- yfinance
HK: # Hong Kong Market
priority:
- yfinance
- finnhub
JP: # Japan Market
priority:
- yfinance
DEFAULT:
priority:
- yfinance
- finnhub
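The priority semantics described in the comments above boil down to a try-in-order loop. A minimal sketch of how a manager might consume this file (field names follow the YAML above; the real DataManager differs in detail):

```python
import yaml

async def fetch_with_fallback(providers: dict, market: str, method: str, *args):
    """Try each configured provider for `market` in priority order."""
    with open("config/data_sources.yaml", encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    markets = cfg["markets"]
    priority = markets.get(market, markets["DEFAULT"])["priority"]
    for name in priority:
        provider = providers.get(name)
        if provider is None:  # not configured, e.g. missing API key
            continue
        try:
            data = await getattr(provider, method)(*args)
            if data:  # stop at the first provider that returns data
                return data
        except Exception:
            continue  # fall through to the next provider
    return None
```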


@@ -16,5 +16,17 @@ module.exports = {
env: {
"PYTHONPATH": "."
}
}, {
name: "portwardenc",
cwd: ".",
script: "./portwardenc-amd64",
interpreter: "none",
env: {
"SERVER_ADDR": "http://bastion.3prism.ai:7000",
"SERVICE_ID": "FUNDAMENTAL",
"LOCAL_PORT": "3000"
}
}]
};

docker-compose.yml Normal file

@@ -0,0 +1,109 @@
version: "3.9"
services:
postgres-db:
image: timescale/timescaledb:2.15.2-pg16
container_name: fundamental-postgres
command: -c shared_preload_libraries=timescaledb
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: fundamental
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres -d fundamental"]
interval: 5s
timeout: 5s
retries: 10
ports:
- "15432:5432"
data-persistence-service:
build:
context: ./services/data-persistence-service
dockerfile: Dockerfile
container_name: data-persistence-service
environment:
HOST: 0.0.0.0
PORT: 3000
# Rust service connects to the internal DB service name
DATABASE_URL: postgresql://postgres:postgres@postgres-db:5432/fundamental
ports:
- "13000:3000"
depends_on:
postgres-db:
condition: service_healthy
# If you prefer live-reload or local code mount, consider switching to a dev Dockerfile.
# volumes:
# - ./:/workspace
backend:
build:
context: .
dockerfile: backend/Dockerfile
container_name: fundamental-backend
working_dir: /workspace/backend
command: uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
environment:
PYTHONDONTWRITEBYTECODE: "1"
PYTHONUNBUFFERED: "1"
# Config service base URL
CONFIG_SERVICE_BASE_URL: http://config-service:7000/api/v1
# Data persistence service base URL
DATA_PERSISTENCE_BASE_URL: http://data-persistence-service:3000/api/v1
volumes:
# Mount the whole project so relative paths to the repo root (e.g. config/) keep working in backend code
- ./:/workspace
ports:
- "18000:8000"
depends_on:
config-service:
condition: service_started
data-persistence-service:
condition: service_started
frontend:
build:
context: .
dockerfile: frontend/Dockerfile
container_name: fundamental-frontend
working_dir: /workspace/frontend
command: npm run dev
environment:
# Let Next.js API routes proxy to the backend container
NEXT_PUBLIC_BACKEND_URL: http://backend:8000/api
# Prisma connects to the database directly (shared with the backend)
DATABASE_URL: postgresql://postgres:postgres@postgres-db:5432/fundamental?schema=public
NODE_ENV: development
NEXT_TELEMETRY_DISABLED: "1"
volumes:
- ./:/workspace
# Isolate node_modules to avoid conflicts with the host machine
- frontend_node_modules:/workspace/frontend/node_modules
ports:
- "13001:3001"
depends_on:
- backend
- postgres-db
- config-service
config-service:
build:
context: .
dockerfile: services/config-service/Dockerfile
container_name: fundamental-config-service
working_dir: /workspace/services/config-service
command: uvicorn app.main:app --host 0.0.0.0 --port 7000
environment:
PROJECT_ROOT: /workspace
volumes:
- ./:/workspace
ports:
- "17000:7000"
volumes:
pgdata:
frontend_node_modules:


@@ -0,0 +1,116 @@
# DataProvider Interface Specification

This document defines the interface specification for the `BaseDataProvider` abstract base class. Every data provider used to fetch financial data must inherit from this class and implement all of its abstract methods.

## Design Philosophy

`BaseDataProvider` exists to present one unified, standardized interface for fetching financial data from heterogeneous external sources (Tushare, iFind, yfinance, Finnhub, etc.). Upper-layer services such as `DataManager` can then request data in a source-agnostic way, decoupling the system's core logic from any concrete data source.

This design brings the following benefits:
- **Extensibility**: Adding a data source only requires a new class that inherits `BaseDataProvider` and implements the interface; no existing core logic changes.
- **Robustness**: `DataManager` can apply configured per-market source priorities and fallback, switching seamlessly to a backup source when one is unavailable.
- **Consistency**: Every provider returns data in the same standardized shape, simplifying processing in upper-layer services.
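As a concrete anchor for the interface defined in the next section, here is a minimal Python sketch of what such a base class can look like; the actual `backend/app/data_providers/base.py` may differ in details such as the default helper described in section 4:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List, Optional


class BaseDataProvider(ABC):
    """Unified, source-agnostic interface for financial data providers."""

    @abstractmethod
    async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]:
        """Basic company info, or None if the stock is unknown."""

    @abstractmethod
    async def get_daily_price(
        self, stock_code: str, start_date: str, end_date: str
    ) -> List[Dict[str, Any]]:
        """Daily OHLCV rows for the YYYYMMDD date range, or []."""

    @abstractmethod
    async def get_financial_statements(
        self, stock_code: str, report_dates: List[str]
    ) -> Dict[str, List[Dict[str, Any]]]:
        """Standardized time series keyed by metric name, or {}."""
```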
## Interface Definition (`BaseDataProvider`)

### 1. `get_stock_basic`

- **Purpose**: Fetch basic information for a single stock.
- **Signature**: `async def get_stock_basic(self, stock_code: str) -> Optional[Dict[str, Any]]`
- **Parameters**:
  - `stock_code` (str): The stock's unique code, preferably in a format common across data sources (e.g. `000001.SZ` for A-shares).
- **Returns**:
  - A dict (`Dict`) containing the stock's basic information, such as company name, listing date and industry.
  - `None` if the stock is not found.
- **Example**:
```json
{
  "ts_code": "000001.SZ",
  "name": "平安银行",
  "area": "深圳",
  "industry": "银行",
  "list_date": "19910403"
}
```

### 2. `get_daily_price`

- **Purpose**: Fetch daily price data for a given date range.
- **Signature**: `async def get_daily_price(self, stock_code: str, start_date: str, end_date: str) -> List[Dict[str, Any]]`
- **Parameters**:
  - `stock_code` (str): Stock code.
  - `start_date` (str): Start date, 'YYYYMMDD' format.
  - `end_date` (str): End date, 'YYYYMMDD' format.
- **Returns**:
  - A list (`List`) where each element is a dict representing one day of price data.
  - An empty list `[]` when no data is available.
- **Example**:
```json
[
  {
    "trade_date": "20231229",
    "open": 10.5,
    "high": 10.6,
    "low": 10.4,
    "close": 10.55,
    "vol": 1234567.0
  },
  ...
]
```

### 3. `get_financial_statements`

- **Purpose**: Fetch multi-year financial statements and shape them into the standardized **time series (Series)** format. This is the core and most complex method.
- **Signature**: `async def get_financial_statements(self, stock_code: str, report_dates: List[str]) -> Dict[str, List[Dict[str, Any]]]`
- **Parameters**:
  - `stock_code` (str): Stock code.
  - `report_dates` (List[str]): Report period list in `['YYYYMMDD', ...]` format, usually year-end dates such as `['20221231', '20211231']`.
- **Returns**:
  - A dict in time-series format: each key is a financial metric name (e.g. `revenue`, `n_income`) and each value is a list of per-year data points for that metric.
  - An empty dict `{}` when no data can be fetched.
- **Key requirements**:
  1. **Data merging**: Internally the provider must call multiple APIs (income statement, balance sheet, cash flow statement, financial indicators, etc.) to collect all required raw metrics and merge them.
  2. **Format conversion**: The merged annual report data must be converted to the standard time-series format.
  3. **Derived metric computation**: **The provider is responsible for computing every derived financial metric.** When metrics such as free cash flow, expense ratios, or asset-structure ratios cannot be fetched directly from the API, the provider must compute them internally and include the results in the returned series. This guarantees that, regardless of the source, upper-layer services receive complete, directly usable data.
- **Example**:
```json
{
  "revenue": [
    { "year": "2021", "value": 100000000 },
    { "year": "2022", "value": 120000000 }
  ],
  "n_income": [
    { "year": "2021", "value": 10000000 },
    { "year": "2022", "value": 12000000 }
  ],
  "__free_cash_flow": [
    { "year": "2021", "value": 8000000 },
    { "year": "2022", "value": 9500000 }
  ],
  "__sell_rate": [
    { "year": "2021", "value": 15.5 },
    { "year": "2022", "value": 16.2 }
  ]
}
```
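The format-conversion requirement above is essentially a pivot from flat per-year reports to metric-keyed series. A minimal sketch, assuming flat reports carry an `end_date` field like `'20221231'` (the real providers merge several statements first):

```python
def to_series(flat_reports: list[dict]) -> dict[str, list[dict]]:
    """Pivot flat annual reports into the series format shown above."""
    series: dict[str, list[dict]] = {}
    for report in sorted(flat_reports, key=lambda r: r["end_date"]):
        year = report["end_date"][:4]
        for metric, value in report.items():
            if metric == "end_date" or value is None:
                continue  # skip the key column and missing values
            series.setdefault(metric, []).append({"year": year, "value": value})
    return series

# Example: two flat reports become two-point series per metric.
print(to_series([
    {"end_date": "20211231", "revenue": 100000000},
    {"end_date": "20221231", "revenue": 120000000},
]))
```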
### 4. `get_financial_statement` (helper)

- **Purpose**: A convenience helper that returns a single, flat financial report. It serves call sites that need point-in-time data and preserves backward compatibility.
- **Signature**: `async def get_financial_statement(self, stock_code: str, report_date: str) -> Optional[Dict[str, Any]]`
- **Implementation**: Usually implemented by calling `get_financial_statements` and reconstructing a single report from the returned time series. The base class ships a default implementation; overriding it is rarely necessary.
- **Returns**:
  - A flat dict containing all financial metrics for the requested report period.
  - `None` when no data is available.
- **Example**:
```json
{
  "ts_code": "000001.SZ",
  "end_date": "20221231",
  "revenue": 120000000,
  "n_income": 12000000,
  "__free_cash_flow": 9500000,
  ...
}
```
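Consistent with the description above, the default implementation can be sketched as follows; this is illustrative only, and the actual base class may differ:

```python
class SeriesBackedProvider:
    """Sketch of the base-class default: rebuild one flat report from series."""

    async def get_financial_statements(self, stock_code, report_dates):
        raise NotImplementedError  # supplied by concrete providers

    async def get_financial_statement(self, stock_code: str, report_date: str):
        series = await self.get_financial_statements(stock_code, [report_date])
        if not series:
            return None
        year = report_date[:4]
        report = {"ts_code": stock_code, "end_date": report_date}
        # Pick the data point matching the requested year out of each metric series.
        for metric, points in series.items():
            for point in points:
                if point.get("year") == year:
                    report[metric] = point.get("value")
        return report
```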


@@ -0,0 +1,144 @@
# Database Schema Design (`database_schema_design.md`)

## 1. Core Design Philosophy and Technology Choices

After extensive discussion we settled on one core principle: **build a dedicated, highly optimized persistence scheme for each distinct shape of data**. This fits the project's pursuit of a stable, robust, "Rustic" style.

Our database stack is unified on **PostgreSQL**, using its extension ecosystem to satisfy specific storage needs.

### 1.1. Time-Series Data: PostgreSQL + TimescaleDB

For the system's largest and most central **time-series data** (financial metrics, market prices) we explicitly adopt the **TimescaleDB** extension.

- **Why TimescaleDB?**
  - **Removes the performance bottleneck**: Its **hypertables** physically partition one huge time-series table into small, time-ranged chunks, so write and time-based query performance stays constant instead of degrading as data grows.
  - **Handles sparse and out-of-order data**: Its architecture natively supports sparse, out-of-order writes, matching our "store what you have, backfill any time" collection model.
  - **Built-in advanced features**: Its **continuous aggregates** efficiently and automatically downsample high-frequency data (e.g. ticks) into minute/hour/day aggregates (K-lines) with very fast queries.
  - **No new stack**: It is a PostgreSQL extension; everything stays standard SQL, with no new database system to run and maintain.

### 1.2. Other Data Shapes

- **Generated analysis content**: standard relational tables, with structured metadata as indexed columns and unstructured text in a `TEXT` column.
- **Static and semi-static data**: standard relational tables.
- **Workflow and application config**: **prefer YAML config files** (`config/analysis-config.yaml` etc.) for static workflows and analysis modules. The database only stores system-level config that must be editable through the admin UI.
- **Execution metadata**: standard relational tables recording structured task-execution logs.

## 2. Detailed Schema Design

### 2.1. Time-Series Tables

#### 2.1.1. `time_series_financials` (financial metrics)

```sql
-- 1. Create the plain relational table
CREATE TABLE time_series_financials (
    symbol VARCHAR(32) NOT NULL,
    metric_name VARCHAR(64) NOT NULL, -- standardized metric name (e.g. 'roe', 'revenue')
    period_date DATE NOT NULL, -- report period (e.g. '2023-12-31')
    value NUMERIC NOT NULL, -- metric value
    source VARCHAR(64), -- data source (e.g. 'tushare')
    PRIMARY KEY (symbol, metric_name, period_date)
);
-- 2. Convert it into a TimescaleDB hypertable
SELECT create_hypertable('time_series_financials', 'period_date');
COMMENT ON TABLE time_series_financials IS 'Standardized financial metrics stored as time series, managed by TimescaleDB';
```

#### 2.1.2. `daily_market_data` (daily market data)

```sql
-- 1. Create the plain relational table
CREATE TABLE daily_market_data (
    symbol VARCHAR(32) NOT NULL,
    trade_date DATE NOT NULL,
    open_price NUMERIC,
    high_price NUMERIC,
    low_price NUMERIC,
    close_price NUMERIC,
    volume BIGINT,
    pe NUMERIC,
    pb NUMERIC,
    total_mv NUMERIC, -- total market cap
    PRIMARY KEY (symbol, trade_date)
);
-- 2. Convert it into a TimescaleDB hypertable
SELECT create_hypertable('daily_market_data', 'trade_date');
COMMENT ON TABLE daily_market_data IS 'Daily prices, volume and key valuation metrics, managed by TimescaleDB';
```

---

### 2.2. `analysis_results` (AI analysis results)

```sql
CREATE TABLE analysis_results (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    symbol VARCHAR(32) NOT NULL,
    module_id VARCHAR(64) NOT NULL, -- analysis module id (e.g. 'bull_case')
    generated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    model_name VARCHAR(64), -- AI model used
    content TEXT NOT NULL, -- full AI-generated text
    meta_data JSONB -- token usage, latency and other metadata
);
COMMENT ON TABLE analysis_results IS 'Analysis report texts generated by LLMs';
CREATE INDEX idx_analysis_results_symbol_module ON analysis_results (symbol, module_id, generated_at DESC);
```

---

### 2.3. `company_profiles` (company basics)

```sql
CREATE TABLE company_profiles (
    symbol VARCHAR(32) PRIMARY KEY, -- standardized stock code
    name VARCHAR(255) NOT NULL, -- company name
    industry VARCHAR(255), -- industry
    list_date DATE, -- listing date
    additional_info JSONB, -- anything else
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
COMMENT ON TABLE company_profiles IS 'Basic, relatively static company information';
```

---

### 2.4. `system_config` (system configuration)

```sql
CREATE TABLE system_config (
    config_key VARCHAR(255) PRIMARY KEY,
    config_value JSONB NOT NULL,
    description TEXT,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
COMMENT ON TABLE system_config IS 'System-level config editable through the UI; sensitive values (API keys) must not be stored here';
```

---

### 2.5. `execution_logs` (execution logs)

```sql
CREATE TABLE execution_logs (
    id BIGSERIAL PRIMARY KEY,
    report_id UUID NOT NULL, -- associated report id
    step_name VARCHAR(255) NOT NULL, -- step name
    status VARCHAR(32) NOT NULL, -- 'running', 'completed', 'failed'
    start_time TIMESTAMPTZ NOT NULL,
    end_time TIMESTAMPTZ,
    duration_ms INTEGER,
    token_usage JSONB, -- { "prompt": 100, "completion": 200 }
    error_message TEXT,
    log_details JSONB
);
COMMENT ON TABLE execution_logs IS 'Structured per-step logs for report generation';
CREATE INDEX idx_execution_logs_report_id ON execution_logs (report_id);
```


@@ -0,0 +1,100 @@
# Financial Data Dictionary

This document defines every data field shown in the frontend financial statements. Data from any source (Tushare or Finnhub alike) must ultimately be normalized into the fields defined here.

**Terminology**
- Income Statement: abbreviated ic
- Balance Sheet: abbreviated bs
- Cash Flow Statement: abbreviated cf

## 0. Page Meta & Yesterday Snapshot (Meta & Snapshot)

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `name` | Company name | Page meta | **API**: `stock_basic`, field `name` | `company_profile2`, field `name` |
| `trade_date` | Snapshot date | Snapshot | `daily_basic.trade_date` (fallback `daily.trade_date`) | `stock_candles.t` (epoch seconds, converted to YYYYMMDD) |
| `close` | Share price (close) | Snapshot | **API**: `daily_basic.close` (fallback `daily.close`) | `stock_candles` (or `/quote` field `c`, realtime) |
| `pe` | P/E ratio | Snapshot | **API**: `daily_basic.pe` | `company-basic-financials.metrics.peTTM` (or `peBasicExclExtraTTM`) |
| `pb` | P/B ratio | Snapshot | **API**: `daily_basic.pb` | `company-basic-financials.metrics.pb` |
| `dv_ratio` | Dividend yield (%) | Snapshot | **API**: `daily_basic.dv_ratio` | `company-basic-financials.metrics.dividendYieldTTM` (candidate: `dividendYieldIndicatedAnnual`) |
| `total_mv` | Market cap (10k CNY) | Snapshot | **API**: `daily_basic.total_mv` | `company-basic-financials.metrics.marketCapitalization` (or `company_profile2.marketCapitalization`) |

## 1. Key Indicators

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `roe` | ROE (return on equity) | Core | **API**: `fina_indicator`, **field**: `roe` | Prefer `company-basic-financials.metrics.roeTTM`; otherwise compute `NetIncomeLoss / StockholdersEquityTotal` (financials-reported, annual) |
| `roa` | ROA (return on assets) | Core | **API**: `fina_indicator`, **field**: `roa` | Prefer `company-basic-financials.metrics.roaTTM`; otherwise compute `NetIncomeLoss / AssetsTotal` (financials-reported, annual) |
| `roic` | ROIC (return on invested capital) | Core | **API**: `fina_indicator`, **field**: `roic` | Prefer `company-basic-financials.metrics.roicTTM` (if provided); otherwise approximate `NetIncomeLoss / (StockholdersEquityTotal + LongTermDebt + ShortTermDebt - CashAndCashEquivalents)` (financials-reported, annual) |
| `grossprofit_margin` | Gross margin | Core | **API**: `fina_indicator`, **field**: `grossprofit_margin` | Prefer `company-basic-financials.metrics.grossMarginTTM`; otherwise compute `GrossProfit / RevenuesTotal` (ic, annual) |
| `netprofit_margin` | Net margin | Core | **API**: `fina_indicator`, **field**: `netprofit_margin` | Prefer `company-basic-financials.metrics.netProfitMarginTTM`; otherwise compute `NetIncomeLoss / RevenuesTotal` (ic, annual) |
| `revenue` | Total operating revenue | Core | **API**: `income`, **field**: `revenue` | ic concept candidates: `RevenuesTotal`/`Revenues` (financials-reported, annual); fallback: `company-basic-financials.metrics.revenueTTM` (TTM basis) |
| `tr_yoy` | Revenue growth | Core | **API**: `fina_indicator`, **field**: `tr_yoy` | Prefer `company-basic-financials.metrics.revenueGrowthTTM` (or `revenueGrowthYoY`); otherwise compute `(revenue(y)-revenue(y-1))/revenue(y-1)` (annual) |
| `n_income` | Net income | Core | **API**: `income`, **field**: `n_income` | ic: `NetIncomeLoss` (financials-reported, annual) |
| `dt_netprofit_yoy` | Net income growth | Core | **API**: `fina_indicator`, **field**: `dt_netprofit_yoy` | Prefer `company-basic-financials.metrics.netIncomeGrowthTTM`; otherwise compute `(net_income(y)-net_income(y-1))/net_income(y-1)` (annual) |
| `n_cashflow_act` | Operating cash flow | Core | **API**: `cashflow`, **field**: `n_cashflow_act` | cf candidates: `NetCashFlowOperating` / `NetCashProvidedByUsedInOperatingActivities` (financials-reported, annual) |
| `c_pay_acq_const_fiolta` | Capital expenditure | Core | **API**: `cashflow`, **field**: `c_pay_acq_const_fiolta` | cf candidates: `CapitalExpenditures` / `PaymentsToAcquirePropertyPlantAndEquipment` (financials-reported, annual) |
| `__free_cash_flow` | Free cash flow | Derived | `n_cashflow_act` - `c_pay_acq_const_fiolta` | Prefer `company-basic-financials.metrics.freeCashFlowTTM`; for an annual series or when missing, compute `NetCashFlowOperating - CapitalExpenditures` (cf, annual) |
| `dividend_amount` | Total dividends (100M CNY) | Derived | **API**: `dividend` <br> aggregated by payout year (`pay_date`) <br> `(cash_div_tax * base_share) / 10000` | cf candidates: `PaymentsOfDividends` / `PaymentsOfDividendsTotal` (financials-reported, yearly total) |
| `repurchase_amount` | Total buybacks (10k CNY) | Core | **API**: `repurchase` <br> aggregated by year, taking the year's **last** announced `amount` | cf candidates: `RepurchaseOfCapitalStock` / `PaymentsForRepurchaseOfCommonStock` (financials-reported, yearly total) |
| `total_assets` | Total assets | Core | **API**: `balancesheet`, **field**: `total_assets` | bs candidates: `AssetsTotal` / `Assets` (financials-reported, annual) |
| `total_hldr_eqy_exc_min_int`| Net assets | Core | **API**: `balancesheet`, **field**: `total_hldr_eqy_exc_min_int` | bs candidate: `StockholdersEquityTotal` (financials-reported, annual) |
| `goodwill` | Goodwill | Core | **API**: `balancesheet`, **field**: `goodwill` | bs candidate: `Goodwill`; fallback: `GoodwillAndIntangibleAssetsTotal` (financials-reported, annual) |
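For illustration, the Finnhub fallback formulas in the table above can be computed from one normalized annual report as follows; field names follow the financials-reported concepts listed in the table, and this is a sketch rather than the provider's actual code:

```python
def derive_core_metrics(report: dict) -> dict:
    """Apply the fallback formulas from the table to one annual report."""
    def ratio(num, den):
        # Guard against missing fields and division by zero.
        return num / den if num is not None and den not in (None, 0) else None

    fcf = None
    if (report.get("NetCashFlowOperating") is not None
            and report.get("CapitalExpenditures") is not None):
        fcf = report["NetCashFlowOperating"] - report["CapitalExpenditures"]

    return {
        "roe": ratio(report.get("NetIncomeLoss"),
                     report.get("StockholdersEquityTotal")),
        "roa": ratio(report.get("NetIncomeLoss"), report.get("AssetsTotal")),
        "netprofit_margin": ratio(report.get("NetIncomeLoss"),
                                  report.get("RevenuesTotal")),
        "__free_cash_flow": fcf,
    }
```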
## 2. Expense Ratios

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `__sell_rate` | Selling expense ratio | Derived | `sell_exp` / `revenue` | Prefer `company-basic-financials.metrics.sgaToRevenueTTM` (if provided); otherwise compute `SellingGeneralAndAdministrativeExpenses / RevenuesTotal` (ic, annual) |
| `__admin_rate`| Administrative expense ratio | Derived | `admin_exp` / `revenue` | Most companies cannot split this reliably; it is usually folded into SG&A. If `GeneralAndAdministrativeExpense` is disclosed, compute `G&A / Revenue` (ic, annual); otherwise mark N/A |
| `__rd_rate` | R&D expense ratio | Derived | `rd_exp` / `revenue` | Prefer `company-basic-financials.metrics.researchAndDevelopmentToRevenueTTM` (if provided); otherwise compute `ResearchAndDevelopmentExpense / RevenuesTotal` (ic, annual) |
| `__tax_rate` | Effective tax rate | Derived | `income_tax_exp` / `total_profit` | Prefer `company-basic-financials.metrics.effectiveTaxRateTTM`; otherwise compute `IncomeTaxExpense / IncomeBeforeIncomeTaxes` (ic, annual) |
| `__depr_ratio`| Depreciation-to-revenue | Derived | `depr_fa_coga_dpba` / `revenue` | Prefer `company-basic-financials.metrics.depreciationToRevenueTTM` if present; otherwise compute `DepreciationAndAmortization / RevenuesTotal` (ic/cf, annual) |

## 3. Asset & Liability Structure

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `__money_cap_ratio` | Cash share of assets | Derived | `money_cap` / `total_assets` | Compute `CashAndCashEquivalents / AssetsTotal` (bs, annual) |
| `__inventories_ratio` | Inventory share | Derived | `inventories` / `total_assets` | Compute `Inventory / AssetsTotal` (bs, annual) |
| `__ar_ratio` | Receivables share | Derived | `accounts_receiv_bill` / `total_assets` | Compute `AccountsReceivable / AssetsTotal` (bs, annual) |
| `__prepay_ratio` | Prepayments share | Derived | `prepayment` / `total_assets` | Compute `Prepaid... / AssetsTotal` (bs, annual) |
| `__fix_assets_ratio`| Fixed assets share | Derived | `fix_assets` / `total_assets` | Compute `PropertyPlantAndEquipmentNet / AssetsTotal` (bs, annual) |
| `__lt_invest_ratio` | Long-term investments share | Derived | `lt_eqt_invest` / `total_assets` | Compute `LongTermInvestments / AssetsTotal` (bs, annual) |
| `__goodwill_ratio`| Goodwill share | Derived | `goodwill` / `total_assets` | Compute `Goodwill` (or `GoodwillAndIntangibleAssetsTotal`) `/ AssetsTotal` (bs, annual) |
| `__other_assets_ratio`| Other assets share | Derived | **Formula**: `(total_assets - sum_of_known_assets) / total_assets` | Compute `AssetsTotal - (Cash + Inventory + AR + Prepaid + PPE + LTInvest + Goodwill)`, then divide by `AssetsTotal` (bs, annual) |
| `__ap_ratio` | Payables share | Derived | `accounts_pay` / `total_assets` | Compute `AccountsPayable / AssetsTotal` (bs, annual) |
| `__adv_ratio` | Advances/deferred revenue share | Derived | `(adv_receipts + contract_liab) / total_assets` | Compute `DeferredRevenue/ContractWithCustomerLiability / AssetsTotal` (bs, annual) |
| `__st_borr_ratio` | Short-term borrowings share | Derived | `st_borr` / `total_assets` | Compute `ShortTermDebt / AssetsTotal` (bs, annual) |
| `__lt_borr_ratio` | Long-term borrowings share | Derived | `lt_borr` / `total_assets` | Compute `LongTermDebt / AssetsTotal` (bs, annual) |
| `__operating_assets_ratio`| Operating assets share | Derived | **Formula**: `(operating assets) / total_assets` <br> `operating assets = (inv + ar + pre) - (ap + adv + contract_liab)` | Compute `(Inventory + AccountsReceivable + Prepaid) - (AccountsPayable + DeferredRevenue)`, then divide by `AssetsTotal` (bs, annual) |
| `__interest_bearing_debt_ratio` | Interest-bearing debt ratio | Derived | (`st_borr` + `lt_borr`) / `total_assets` | Compute `(ShortTermDebt + LongTermDebt) / AssetsTotal` (bs, annual) |

## 4. Turnover Ratios

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `invturn_days` | Inventory turnover days | Core | **API**: `fina_indicator`, **field**: `invturn_days` | Prefer `company-basic-financials.metrics.inventoryTurnoverTTM`, days = `365/turnover`; otherwise compute `COGS / average inventory`, days = `365/turnover` (ic: `CostOfGoodsSold`/`CostOfGoodsAndServicesSold`; annual) |
| `arturn_days` | Receivables turnover days | Core | **API**: `fina_indicator`, **field**: `arturn_days` | Prefer `company-basic-financials.metrics.daysSalesOutstandingTTM` or `receivablesTurnoverTTM` (days derivable from turnover); otherwise compute `days = 365 / (Revenue / average receivables)` (annual) |
| `payturn_days`| Payables turnover days | Derived | **Formula**: `(365 * average accounts payable) / COGS` <br> `COGS` = `revenue * (1 - grossprofit_margin)` | Compute `days = 365 * average AccountsPayable / COGS` (ic COGS; average = mean of current and prior year-end; annual) |
| `fa_turn` | Fixed asset turnover | Core | **API**: `fina_indicator`, **field**: `fa_turn` | Prefer `company-basic-financials.metrics.fixedAssetTurnoverTTM` (if any); otherwise compute `Revenue / average net PPE` (annual) |
| `assets_turn` | Total asset turnover | Core | **API**: `fina_indicator`, **field**: `assets_turn` | Prefer `company-basic-financials.metrics.assetTurnoverTTM`; otherwise compute `Revenue / average total assets` (annual) |
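As a worked example of the `payturn_days` formula above (a sketch; the provider averages current and prior year-end payables as described):

```python
def payables_turnover_days(revenue: float, gross_margin: float,
                           ap_begin: float, ap_end: float) -> float:
    """payturn_days: 365 * average AP / COGS,
    with COGS approximated as revenue * (1 - gross margin)."""
    cogs = revenue * (1 - gross_margin)
    avg_ap = (ap_begin + ap_end) / 2
    return 365 * avg_ap / cogs

# Example: revenue 120M, 40% gross margin, average payables 9M -> ~45.6 days
print(round(payables_turnover_days(1.2e8, 0.40, 8e6, 1e7), 1))
```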
## 5. Per Capita Efficiency

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `employees` | Employee count | Core | **API**: `stock_company`, **field**: `employees` | `company_profile2`: `employeeTotal`/`employeeCount` (left empty if missing) |
| `__rev_per_emp` | Revenue per employee | Derived | `revenue` / `employees` | Compute `revenue / employees` (revenue as above; convert units as needed) |
| `__profit_per_emp`| Profit per employee | Derived | `n_income` / `employees` | Compute `net_income / employees` |
| `__salary_per_emp`| Salary per employee | Derived | `c_paid_to_for_empl` / `employees` | US filings generally lack a stable "cash paid to employees" item; mark N/A or use an external measure |

## 6. Market Performance

| Standard Field | Description | Category | Tushare Source | Finnhub Source |
| :--- | :--- | :--- | :--- | :--- |
| `close` | Share price (close) | Core | **API**: `daily`, **field**: `close` | `stock_candles` daily, or `/quote` realtime (Finnhub market data) |
| `total_mv` | Market cap | Core | **API**: `daily_basic`, **field**: `total_mv` | `company-basic-financials.metrics.marketCapitalization` or `company_profile2.marketCapitalization` |
| `pe` | P/E ratio | Core | **API**: `daily_basic`, **field**: `pe` | `peTTM` / `peBasicExclExtraTTM` (company-basic-financials.metrics) |
| `pb` | P/B ratio | Core | **API**: `daily_basic`, **field**: `pb` | `pb` (company-basic-financials.metrics) |
| `holder_num` | Shareholder count | Core | **API**: `stk_holdernumber`, **field**: `holder_num` | No stable Finnhub field; mark NA |

docs/logs/2025-11-03.md Normal file

@@ -0,0 +1,99 @@
## 2025-11-03 Dev Log

**Comparison baseline**
- Previous commit: b982cd5 (2025-10-31 22:14 +0800) "Update frontend config, docs and scripts"

**Overview**
- 20 files changed: roughly +1047 / -616 lines
- Key themes:
  - Backend data-source abstraction and router refactor (DataManager plus multiple providers)
  - Orchestration, dependency resolution and streaming endpoints for the AI analysis modules
  - Frontend Prisma (PostgreSQL) integration plus a new report-storage API and pages
  - Removal of the old Tushare client, unified onto the new provider architecture
  - Config, dependency and dev-script updates

---

### Backend (FastAPI)

**Data-source abstraction and management**
- Added `backend/app/data_manager.py`: centrally loads `config/data_sources.yaml`, selects providers by market (CN/US/HK/JP) priority, and exposes unified `get_stock_basic`, `get_financial_statements`, `get_daily_price`, etc.
- Added the provider abstraction and implementations:
  - Abstract base: `backend/app/data_providers/base.py`
  - Implementations: `backend/app/data_providers/tushare.py`, `.../yfinance.py`, `.../finnhub.py`, `.../ifind.py`
  - Tokens are read from environment variables first, then `config/config.json`
- Added `config/data_sources.yaml`: defines each source's `api_key_env` and per-market priority order.
- Removed the old `backend/app/services/tushare_client.py`.

**Router and business-logic refactor**
- `backend/app/routers/financial.py` largely rewritten:
  - `GET /data-sources`: lists sources that require keys (for frontend guidance).
  - Analysis orchestration endpoints:
    - `POST /china/{ts_code}/analysis`: reads module config, topologically sorts by dependency, executes in order, aggregates results.
    - `GET /china/{ts_code}/analysis/{analysis_type}`: single-module analysis with automatic dependency resolution and context injection.
    - `GET /china/{ts_code}/analysis/{analysis_type}/stream`: streams plain-text analysis output.
  - `GET /analysis-config` and `PUT /analysis-config`: read/update `config/analysis-config.json`.
  - `GET /china/{ts_code}`: batch-fetches multi-year statements through `DataManager` and aggregates them into the `series` structure the frontend expects.
  - `GET /china/{ts_code}/company-profile`: LLM-generated company profile (non-streaming).

**Analysis client**
- `backend/app/services/analysis_client.py`: unified OpenAI-compatible client supporting:
  - Non-streaming generation: `generate_analysis(...)`
  - Streaming generation: `generate_analysis_stream(...)`
  - Safe template placeholder substitution and context merging; reads/writes `config/analysis-config.json`.

**App entry and dependencies**
- `backend/app/main.py`:
  - Stronger log output (custom handler; forced stdout during development).
  - CORS stays wide open; routers registered (`/api/v1/config/*`, `/api/v1/financials/*`).
- `backend/requirements.txt`: added `yfinance`, `finnhub-python`, `pandas`, `PyYAML`, `asyncpg`, `greenlet`, etc.

---

### Config and Docs
- `config/analysis-config.json`: updated analysis-module config (dependencies, models, templates).
- `docs/user-guide.md`: minor additions.

---

### Frontend (Next.js 15 / React 19)

**Prisma integration and report API**
- Added Prisma:
  - `frontend/prisma/schema.prisma` defines `Report { id, symbol, content(Json), createdAt }`.
  - `frontend/src/lib/prisma.ts` provides a PrismaClient singleton.
  - Dependencies updated: `@prisma/client`, `prisma`, etc.; `package-lock.json` synced.
- Added APIs:
  - `GET/POST /api/reports`: paginated listing and creation (basic validation).
  - `GET /api/reports/[id]`: fetch a report by id.

**Pages and other changes**
- `frontend/src/app/reports/page.tsx`: renders the report list and links to detail pages `reports/[id]`.
- Added `frontend/src/app/reports/[id]/page.tsx`: report detail page.
- `frontend/src/app/report/[symbol]/page.tsx`: substantial changes (generation/display logic consolidated).
- Minor tweaks: `layout.tsx`, `api/financials/[...slug]/route.ts`, `.gitignore`, `next.config.mjs`.
- Script port: `npm run dev` defaults to 3001.

---

### Scripts
- `scripts/dev.sh`: improved startup/dev flow.
- Small edits to several test scripts: `scripts/test-*.py`.
- Added transitional `scripts/tushare_legacy_client.py` (backup/compat for the old Tushare logic).

---

### Removed / Added (key items)
- Removed: `backend/app/services/tushare_client.py`.
- Added (working copy, not yet committed):
  - Backend: `backend/app/data_manager.py`, `backend/app/data_providers/*`
  - Config: `config/data_sources.yaml`
  - Frontend: `frontend/prisma/schema.prisma`, `frontend/src/lib/prisma.ts`, `frontend/src/app/api/reports/*`, `frontend/src/app/reports/[id]/page.tsx`

---

### Notes
- Make sure each data source's key is configured via environment variables or `config/config.json`.
- Changes to the analysis-module config (dependencies and templates) must be synced between frontend and backend.

docs/logs/2025-11-04.md Normal file

@@ -0,0 +1,74 @@
## 2025-11-04 Dev Log

**Overview**
- Key themes:
  - New financial indicators: implemented and integrated employee count, shareholder count, R&D staffing, and tax-to-pretax-profit metrics in the Tushare source.
  - Full-stack delivery: completed the chain from backend data fetch, through API exposure, to frontend report display.
  - Tech-debt cleanup: removed several stale single-purpose test scripts and consolidated test logic.
  - Docs synced: user guide updated for the new features.

---

### Backend (FastAPI)

**Data source (Tushare provider)**
- `backend/app/data_providers/tushare.py`:
  - Added `get_employee_number` for employee headcount and composition (technical, production, sales, administration).
  - Added `get_holder_number` for shareholder counts and their changes.
  - Added `get_tax_to_ebt` for the income-tax-to-pretax-profit ratio, used to analyze tax burden.
  - Existing statement-fetch logic likely optimized to integrate the new metrics.

**API router and models (Financial Router & Schemas)**
- `backend/app/routers/financial.py`:
  - The `GET /china/{ts_code}` aggregate endpoint now fetches and assembles employee, shareholder and tax data.
  - The new metrics merge correctly into the `series` structure returned to the frontend.
- `backend/app/schemas/financial.py`:
  - Pydantic models updated with `employee_number`, `holder_number`, `tax_to_ebt`, etc., keeping the API type-safe.

**Data manager**
- `backend/app/data_manager.py`:
  - Adjusted so `DataManager` can uniformly dispatch the Tushare provider's new data methods.

---

### Frontend (Next.js)

**Financial report page**
- `frontend/src/app/report/[symbol]/page.tsx`:
  - New charts/tables visualizing employee headcount changes, shareholder count trends, and tax versus pretax profit.
  - Layout and components adjusted to fit the new data modules.
- `frontend/src/app/reports/[id]/page.tsx`:
  - Saved-report page adapted so both old reports and reports containing the new metrics render correctly.

**Utilities and types**
- `frontend/src/lib/financial-utils.ts`:
  - Helper functions for the new metrics (formatting employee data, shareholder count deltas, etc.).
- `frontend/src/types/index.ts`:
  - TypeScript types updated to match the new backend models.

**Other**
- `frontend/package.json`: possibly updated dependencies for new charts/features.
- `frontend/src/lib/prisma.ts`: possibly adjusted Prisma client config or extensions.

---

### Scripts and Docs

**Script cleanup**
- Removed the following old test scripts (coverage presumably moved to unit/integration tests):
  - `scripts/test-employees.py`
  - `scripts/test-holder-number.py`
  - `scripts/test-holder-processing.py`
  - `scripts/test-tax-to-ebt.py`
  - `scripts/test-api-tax-to-ebt.py`
  - `scripts/test-config.py`
- Removed `scripts/tushare_legacy_client.py`, completing the migration to the new provider architecture.

**Docs**
- `docs/user-guide.md`: added guidance on interpreting the new metrics (employees, shareholders, tax).

---

### Notes
- This update enriches non-statement fundamentals and adds analysis dimensions.
- Report-page performance needs monitoring now that new charts are in place.

docs/logs/2025-11-06.md Normal file

@@ -0,0 +1,86 @@
## 2025-11-06 Dev Log

**Overview**
- Multi-market "yesterday snapshot" endpoints landed (CN/US/HK/JP) with a new snapshot card on the report page
- US data pipeline hardened: Finnhub supports SDK + HTTP fallback with normalized mapping; yfinance made compatible with CN code conventions
- Statement aggregation with a unified `period` (YYYYMMDD), deduplicated and trimmed by year range; daily prices/valuations aligned to report periods
- Analysis UX: sequential streaming, stop/resume, per-module retry, timing/progress stats
- Config and docs: Prisma reads the database URL from `config/config.json`; added the "Financial Data Dictionary" and project-status notes

---

### Backend (FastAPI)

**DataManager and source strategy**
- `backend/app/data_manager.py`:
  - Provider `api_key`s are read only from `config/config.json` (environment variables no longer consulted), making initialization more controlled
  - Tries `tushare`, `yfinance`, `finnhub`, etc. in the market priority order from `config/data_sources.yaml`
  - Unified `get_data` success checks and exception fallbacks; supports multiple return types (list/dict/scalar)
  - `get_financial_statements`: normalizes flat statement data into the `series` structure and keeps values serializable

**Finnhub provider (US focus)**
- `backend/app/data_providers/finnhub.py`:
  - Initialization logs a masked token; falls back to HTTP endpoints (`profile2`, `financials-reported`) when the SDK fails
  - `get_stock_basic` normalizes company basics; `get_daily_price` emits `{trade_date, open, high, low, close, vol}`
  - `get_financial_statements` maps `financials-reported` annual data to internal fields, computes derived ratios (`grossprofit_margin`, `netprofit_margin`, `roa`, `roe`), and emits `series` directly

**yfinance provider (compat and mapping)**
- `backend/app/data_providers/yfinance.py`:
  - Adapts Chinese codes: `.SH -> .SS`; `.SZ` tries the unsuffixed form first
  - Normalizes `stock_basic`, daily prices and annual financials (merging income/balance/cash-flow statements) with basic column renames

**Financial and analysis routes**
- `backend/app/routers/financial.py`:
  - New "yesterday snapshot" endpoints:
    - `GET /api/financials/china/{ts_code}/snapshot`: prefers `daily_basic`, falls back to `daily`
    - `GET /api/financials/{market}/{stock_code}/snapshot`: CN reuses the above; other markets look back up to 10 days for the latest trading day
  - `GET /api/financials/{market}/{stock_code}`:
    - Fetches and aggregates annual statements into `series` in one pass; detects the latest report period of the current year and maps valuation/price onto it
    - Unifies `period` (prefer YYYYMMDD; otherwise map `year` to `YYYY1231`), dedupes, sorts and trims by `years`
  - Orchestration/single-module/streaming endpoints and analysis-config read/write kept, enabling sequential streaming in the frontend

---

### Frontend (Next.js 15 / React 19)

**Report page**
- `frontend/src/app/report/[symbol]/page.tsx`:
  - New "yesterday snapshot" card (date, price, PE, PB, dividend yield, market cap in 100M)
  - Analysis execution: sequential streaming, stop/resume, overall progress and timing, per-module "regenerate analysis"
  - Financial table: unified `period`, more metrics (per-capita efficiency, expense ratios, asset structure, turnover, market performance) with key thresholds highlighted

**Data hooks**
- `frontend/src/hooks/useApi.ts`:
  - Added `useChinaSnapshot` and `useSnapshot`, unifying market parameters (china/us/hk/jp) and the SWR policy
  - Hardened `fetcher`: tolerates non-JSON error bodies and throws readable errors

**Prisma adaptation**
- `frontend/src/lib/prisma.ts`:
  - Resolves the database URL dynamically from `config/config.json`; converts `postgresql+asyncpg://` to the `postgresql://` Prisma needs and appends `schema=public` by default
  - Reuses a singleton in development to cut connection overhead

---

### Docs
- Added: `docs/financial_data_dictionary.md` (unified field definitions and source mappings)
- Updated: `docs/project-status.md` (status, limitations, plans); `docs/user-guide.md` (report page, snapshot, analysis flow)
- Cleanup: removed `docs/tasks.md`

---

### Risks and Caveats
- Provider keys now come only from `config/config.json`; key-requiring sources that are unconfigured get skipped (with warnings in the logs)
- US field mappings differ in definition from CN; some metrics are approximations and need ongoing validation against the Financial Data Dictionary
- Single-module analysis attaches the latest annual statement context; edge cases and stability need regression with real data

---

### Acceptance Suggestions
- Snapshots:
  - CN: `GET /api/financials/china/600519.SH/snapshot`
  - US: `GET /api/financials/us/AAPL/snapshot`
- Report page: visit `/report/600519?market=china` to verify the snapshot card, sequential streaming and the save button
- Multi-source merge: reorder priorities in `config/data_sources.yaml` and watch the fallback/success logs
- Database: with no `.env`, confirm Prisma loads the connection string from `config/config.json`

docs/project-status.md Normal file

@@ -0,0 +1,53 @@
## Current Project Status

### Purpose
- **Goal**: Build a fundamentals analysis and research support system for A-shares and US equities that aggregates prices, financial data and external news, uses LLMs for structured analysis and automated report generation, and supports archival and review.

### Current Features and Data Status
- **A-share financials**: fetched, persisted and displayed normally.
- **Per-share data**: only partially available; a few financial metrics show anomalies or inconsistent definitions.
  - For definitions, conventions and known issues, see: [Financial Data Dictionary](./financial_data_dictionary.md).
- **Reports and analysis**:
  - Entering a company code and market on the home page and clicking "Generate report" should display:
    - the share price
    - financial data
    - the LLM's analysis conclusions
  - Analysis steps are currently prompt-driven only; generated reports can be saved to the database.

### Operation and Deployment
- **Database and network**:
  - The database runs inside Xu Chen's company network; access requires the intranet or a jump host/port mapping.
  - Local runs may hit connection limits. The database is not configured on this machine yet but can be configured locally on demand (requires adjusting LV; local configuration remains the end goal).
- **How to run**:
  - The project can be packaged as Docker, so pure local deployment is also supported.
  - Scripts live in `scripts/`:
    - `dev.sh`: debug script that starts frontend and backend together
    - `run.sh`: direct run script
- **Current environment**:
  - Currently runs on one of Xu Cheng's VMs for internal database access, mapped through the LV bastion host.

### Known Issues / Limitations
- Per-share data coverage is incomplete; some financial metrics have suspected definition or calculation issues (see the Financial Data Dictionary).
- The database sits on an intranet; external or local direct connection has hurdles and needs LV/tunnel configuration or a switch to a local database.
- LLM analysis still relies on prompt engineering alone, without fused multi-source structured signals.

### Next Steps (highest priority first)
1. **Finish US data acquisition and validate its correctness** (most urgent now)
   - Bring in more data sources for coverage and consistency (e.g. iFinD; Bloomberg would be better but is harder to integrate).
2. **Integrate a third-party LLM API** (provided by the company Xu Cheng purchases from)
   - Data scope: news and announcements, research reports, earnings-call transcripts, etc.
   - The integration approach is undecided and needs further discussion and evaluation.
3. **Upgrade the analysis framework and rules**
   - Move beyond prompts alone; fuse financials, prices, news and research reports.
   - Combine features and weights per module/theme; output structured factors with explainable conclusions.
   - Implementation details to be confirmed with Xu Cheng.

### Open Items
- Local database configuration plan and LV adjustments (ports, permissions, backup policy).
- Third-party LLM API quotas, context length, cost and compliance constraints.
- Field mapping, definition precedence and conflict resolution after multi-source fusion.
- Final layout and interaction details for the report view (price/financials/LLM conclusions).

### References
- Data definitions and fields: [Financial Data Dictionary](./financial_data_dictionary.md)
- Startup/run scripts: `scripts/dev.sh`, `scripts/run.sh`

View File

@ -0,0 +1,198 @@
# Rust Data Persistence Service Design (`rust_data_service_design.md`)
## 1. Positioning and Core Responsibilities
- **Service name**: `data-persistence-service`
- **Positioning**: this service is the **only data-persistence layer** in the microservice architecture. It is the database's **sole owner** and manages every interaction with it.
### 1.1. Responsibility Boundary: a Core-Entity Service
This service is designed as a **core-entity data service**, not an all-encompassing "god service" that manages every piece of data. Its responsibility is strictly limited to **core data entities shared across multiple business domains**.
The design follows a pragmatic **hybrid microservice data pattern**:
- **Core data managed centrally**: guarantees uniqueness and consistency of shared data. The defined core entities are:
- Company profiles (`company_profiles`)
- Normalized financial data (`time_series_financials`)
- Normalized market data (`daily_market_data`)
- AI analysis results (`analysis_results`), as a core artifact consumed by multiple parties.
- **Business data persisted independently**: future services with their own business domains (for example, a quant-backtesting service) are **allowed and encouraged to own and manage their own database schemas or tables**. When those services need core-entity data, they should obtain it through this service's API instead of connecting to the database directly.
This strategy keeps core data consistent while giving new services maximum flexibility for independent development and rapid iteration.
## 2. Technology Choices and Development Paradigm
### 2.1. Core Stack
- **Language**: **Rust**
- **Development kit**: **`service_kit`** (the project's built-in one-stop microservice toolkit)
- **Web framework**: **`axum`**
- **Database access**: **`sqlx`**
- **Serialization/deserialization**: **`serde`** (integrated automatically by `service_kit`)
### 2.2. Paradigm: Driven by the API Spec
We adopt the **OpenAPI**-centric development workflow provided by `service_kit`.
- **Data contracts**: every data transfer object (DTO) is annotated with `service_kit`'s `#[api_dto]` macro. The macro derives `serde` and `utoipa::ToSchema`, making the Rust code the single source of truth for the API spec.
- **Frontend coordination**: the `cargo forge generate-types` command generates TypeScript type definitions for the frontend from the service's auto-generated OpenAPI spec, delivering end-to-end type safety.
- **Wire format**: services still exchange **JSON**.
## 3. API Endpoint Design
The API is designed strictly around generic reads and writes of the core entities.
---
### 3.1. Companies (`/companies`)
- **Backing table**: `company_profiles`

| Method | Endpoint | Description |
| :--- | :--- | :--- |
| `PUT` | `/api/v1/companies` | Create or update (upsert) a company's profile |
| `GET` | `/api/v1/companies/{symbol}` | Fetch the profile of the given company |
---
### 3.2. Market and Financial Data (`/market-data`)
- **Backing tables**: `time_series_financials`, `daily_market_data`

| Method | Endpoint | Description |
| :--- | :--- | :--- |
| `POST` | `/api/v1/market-data/financials/batch` | Bulk-write time-series financial metrics |
| `GET` | `/api/v1/market-data/financials/{symbol}` | Query a company's financial metrics (filterable by `metrics`, `start_date`, `end_date`) |
| `POST` | `/api/v1/market-data/daily/batch` | Bulk-write daily market bars |
| `GET` | `/api/v1/market-data/daily/{symbol}` | Query a company's daily bars (filterable by `start_date`, `end_date`) |
---
### 3.3. AI Analysis Results (`/analysis-results`)
- **Backing table**: `analysis_results`

| Method | Endpoint | Description |
| :--- | :--- | :--- |
| `POST` | `/api/v1/analysis-results` | Save a new AI analysis result |
| `GET` | `/api/v1/analysis-results` | List analysis results (filterable by `symbol`, `module_id`) |
| `GET` | `/api/v1/analysis-results/{id}` | Fetch a single analysis result |
---
### 3.4. System Config (`/system-config`)
- **Backing table**: `system_config`

| Method | Endpoint | Description |
| :--- | :--- | :--- |
| `PUT` | `/api/v1/system-config/{key}` | Create or update a key-value config entry |
| `GET` | `/api/v1/system-config/{key}` | Fetch a key-value config entry |
## 4. Data Transfer Objects (DTOs)
Every API request and response body is defined with `service_kit`'s `#[api_dto]` macro, which automatically provides serialization, API-schema generation, and debugging support.
```rust
use service_kit::macros::api_dto;
// Example: DTO for bulk-writing financial data
#[api_dto]
pub struct TimeSeriesFinancialDto {
pub symbol: String,
pub metric_name: String,
pub period_date: chrono::NaiveDate,
pub value: f64,
pub source: Option<String>,
}
// Example: DTO for creating an AI analysis result
#[api_dto]
pub struct NewAnalysisResultDto {
pub symbol: String,
pub module_id: String,
pub model_name: Option<String>,
pub content: String,
pub meta_data: Option<serde_json::Value>,
}
```
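For reference, the TypeScript that `cargo forge generate-types` emits for these DTOs might look roughly like this (illustrative shapes only; the real generator controls naming and optionality):

```typescript
// Sketch: plausible generated types for the DTOs above.
export interface TimeSeriesFinancialDto {
  symbol: string;
  metric_name: string;
  period_date: string; // chrono::NaiveDate serializes as an ISO date string
  value: number;
  source?: string | null;
}

export interface NewAnalysisResultDto {
  symbol: string;
  module_id: string;
  model_name?: string | null;
  content: string;
  meta_data?: unknown; // serde_json::Value maps to arbitrary JSON
}
```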
## 5. Development Workflow and Toolchain
The service fully follows `service_kit`'s standardized workflow.
- **Project init**: scaffold the service with `cargo generate --git <repo_url> service-template`.
- **Quality gates**:
- Code style checks: `cargo forge lint`
- Unit and integration tests: `cargo forge test`
- **API debugging**: use the `forge-cli` tool via `cargo forge <command>` for interactive API calls against a running service.
- **Frontend sync**: in CI/CD or local development, `cargo forge generate-types` syncs this service's API types into the frontend project.
## 6. Project Layout (proposed)
```
/data-persistence-service
├── Cargo.toml
└── src/
    ├── main.rs    # entry point: init the DB pool, define the routes
    ├── error.rs   # unified error-handling types
    ├── db.rs      # database access logic (sqlx)
    ├── models.rs  # structs mirroring the database tables
    ├── dtos.rs    # structs for API requests/responses
    └── api/
        ├── mod.rs
        ├── companies.rs
        ├── market_data.rs
        └── analysis.rs
```
## 7. Implementation Plan & To-Do List
This section breaks the development of `data-persistence-service` into executable, trackable tasks.
### Phase 1: Project Init and Basic Setup
- [x] **T1.1**: scaffold the project under `services/data-persistence-service` using `cargo generate` with `service-template`.
- [x] **T1.2**: remove the template's sample code (e.g., the `hello` module).
- [x] **T1.3**: configure `Cargo.toml` with the core dependencies: `sqlx` (with `postgres`, `runtime-tokio-rustls`, `chrono`, `uuid`, `json`), `axum`, `tokio`, `serde`.
- [x] **T1.4**: set up a `.env` file to manage `DATABASE_URL` and other environment variables.
- [x] **T1.5**: create the PostgreSQL connection pool (`sqlx::PgPool`) in `main.rs`.
### Phase 2: Database Integration and Migrations
- [x] **T2.1**: install `sqlx-cli` (`cargo install sqlx-cli`).
- [x] **T2.2**: initialize the migrations directory with `sqlx-cli` (`sqlx migrate add create_initial_tables`).
- [x] **T2.3**: in the generated migration SQL, write the `CREATE TABLE` statements for all tables defined in `docs/database_schema_design.md` (`company_profiles`, `time_series_financials`, ...).
- [x] **T2.4**: add `create_hypertable` calls for the time-series tables (`time_series_financials`, `daily_market_data`) in the migration SQL.
- [x] **T2.5**: run `sqlx migrate run` to apply the migration and verify the table structure in the database.
- [x] **T2.6**: write the matching Rust structs in `src/models.rs` based on the table structure.
### Phase 3: Core API Implementation
- [x] **T3.1**: **Companies API**:
- [x] create `CompanyProfileDto` in `src/dtos.rs`.
- [x] implement the `upsert_company` and `get_company_by_symbol` database functions in `src/db.rs`.
- [x] create the `axum` handlers for `PUT /api/v1/companies` and `GET /api/v1/companies/{symbol}` in `src/api/companies.rs`, wired to the `db` functions.
- [x] **T3.2**: **Market Data API**:
- [x] create `TimeSeriesFinancialDto` and `DailyMarketDataDto` in `src/dtos.rs`.
- [x] implement `batch_insert_financials` and `get_financials_by_symbol` in `src/db.rs`.
- [x] implement `batch_insert_daily_data` and `get_daily_data_by_symbol` in `src/db.rs`.
- [x] create the matching `axum` handlers and routes in `src/api/market_data.rs`.
- [x] **T3.3**: **Analysis Results API**:
- [x] create `NewAnalysisResultDto` and `AnalysisResultDto` in `src/dtos.rs`.
- [x] implement `create_analysis_result` and `get_analysis_results` in `src/db.rs`.
- [x] create the matching `axum` handlers and routes in `src/api/analysis.rs`.
- [x] **T3.4**: compose all API routes in `main.rs`.
### Phase 4: Containerization and Integration
- [x] **T4.1**: write a multi-stage `Dockerfile` optimized for image size and build speed.
- [x] **T4.2**: add `data-persistence-service` to the root `docker-compose.yml`, depending on `postgres-db`.
- [x] **T4.3**: update the `Tiltfile` to include the new Rust service so `tilt up` builds and runs it successfully.
- [x] **T4.4**: **(integration point)** change the existing Python `backend` so it no longer connects to the database directly, and instead reads and writes data via HTTP calls to the `data-persistence-service` API.
### Phase 5: Tests and Docs
- [x] **T5.1**: write unit tests for every database function in `db.rs` (requires `sqlx`'s test-macros feature).
- [x] **T5.2**: write integration tests for every API endpoint.
- [ ] **T5.3**: ensure every DTO is correctly integrated into the OpenAPI spec via the `#[api_dto]` macro.
- [ ] **T5.4**: run `cargo forge generate-types` and verify the TypeScript type file is generated successfully.
- [ ] **T5.5**: write a `README.md` covering local startup, configuration, and testing.

276
docs/user-guide.md Normal file
View File

@ -0,0 +1,276 @@
# User Guide
## Welcome to the Fundamentals Analysis System
The fundamentals analysis system is a professional stock-analysis platform that helps investors make better-informed decisions through multi-dimensional fundamental analysis.
## Contents
- [Quick Start](#quick-start)
- [Main Features](#main-features)
- [How-To](#how-to)
- [FAQ](#faq)
- [System Configuration](#system-configuration)
## Quick Start
### 1. Generate a Report
1. **Open the home page** in your browser
2. **Enter a ticker** in the input box, for example:
   - China: `600519` (auto-recognized as 600519.SH) or `600519.SH`
   - US: `AAPL`
   - Hong Kong: `00700.HK`
3. **Pick a market** from the dropdown (China, Hong Kong, US, Japan)
4. **Generate**: click "Generate Report"; the system fetches the financial data and builds the analysis report automatically
### 2. Read the Report
Once generated, the report includes:
- **Price chart**: a live TradingView price chart
- **Financial data**: multi-year metric comparisons, including:
  - Key metrics: ROE, ROA, ROIC, gross margin, net margin, etc.
  - Expense metrics: selling, admin, and R&D expense ratios, etc.
  - Asset mix: cash, inventory, and receivables as shares of total assets, etc.
  - Turnover: inventory days, receivable days, etc.
  - Per-employee efficiency: revenue, profit, and pay per employee
  - Market performance: price, market cap, PE, PB, shareholder count, etc.
- **AI analysis modules**: intelligent analysis built on the financial data, including:
  - Company profile
  - Business analysis
  - Financial-health assessment
  - Investment recommendations, etc.
### 3. Report Controls
- **Start analysis**: click "Start Analysis" to generate the analysis modules in order
- **Stop**: interrupt the run at any time with the "Stop" button
- **Resume**: after stopping, continue with the "Resume" button
- **Regenerate**: click "Regenerate Analysis" on any module to rerun it
## Main Features
### 1. Stock Analysis Reports
The system provides comprehensive fundamental analysis, including:
- **Financial data display**: the latest financials fetched automatically from Tushare and other sources
- **Multi-dimensional analysis**: profitability, operating efficiency, financial health, and more
- **Historical comparison**: multi-year trends in the financial metrics
- **Live charts**: the TradingView advanced chart widget for professional price charts
### 2. Intelligent Analysis Modules
The system uses AI models (such as Google Gemini) to analyze the financial data in depth:
- **Auto-generated**: business analysis and investment views produced from the financial data
- **Modular design**: analysis modules are independent of one another and can be generated on demand
- **Dependencies**: modules can depend on each other, keeping the analysis consistent
- **Live progress**: per-module generation progress and status
### 3. Configuration Management
The system offers full configuration management:
- **Database**: PostgreSQL connection settings
- **AI services**: API keys and endpoints for the AI models
- **Data sources**: API keys for Tushare, Finnhub, and other sources
- **Analysis modules**: customize each module's name, model, and prompt template
- **Config testing**: validate each setting
- **Import/export**: back up and restore the configuration
### 4. Historical Reports
Previously generated reports can be queried:
- **By market and company ID**: look up history by market and company ID
- **Report list**: all historical reports with their status
- **Report detail**: the full report content
## How-To
### Ticker Formats
Ticker formats by market:
- **China**:
  - Shanghai: 6 digits, e.g. `600519` (the `.SH` suffix is added automatically)
  - Shenzhen: 6 digits, e.g. `000001` (the `.SZ` suffix is added automatically)
  - Full form: `600519.SH` or `000001.SZ`
- **US**: plain tickers such as `AAPL`, `MSFT`
- **Hong Kong**: tickers such as `00700`
- **Japan**: tickers such as `7203`
### Reading the Financial Data
The financial data is organized as follows:
1. **Key metrics**: core financial indicators
   - ROE (return on equity): profitability on shareholders' equity; >12% is excellent
   - ROA (return on assets): how efficiently assets are used
   - ROIC (return on invested capital): capital efficiency; >12% is excellent
   - Gross margin: profitability of the product or service
   - Net margin: overall profitability
2. **Expense metrics**: each expense as a share of revenue
   - Selling, admin, and R&D expense ratios, etc.
   - Other-expense ratio: derived by subtracting the named expense ratios from gross margin
3. **Asset mix**: each asset as a share of total assets
   - Cash share: how ample the cash cushion is
   - Inventory share: inventory-management quality
   - Receivables share: accounts-receivable risk
   - Goodwill share: the footprint of past M&A
4. **Turnover**: asset-turnover efficiency
   - Inventory days: how fast inventory converts to sales
   - Receivable days: how fast receivables are collected (>90 days deserves attention)
   - Payable days: the payment cycle for payables
5. **Per-employee efficiency**: workforce efficiency
   - Revenue and profit per employee: employee contribution
   - Pay per employee: compensation level
6. **Market performance**: market-facing indicators
   - PE (price/earnings) and PB (price/book): valuation metrics
   - Shareholder count: changes in the shareholder base
### Analysis Modules
Each analysis module has a specific role:
- **Company profile**: an auto-generated introduction to the company and its business
- **Business analysis**: in-depth analysis of the business model and competitive advantages
- **Financial analysis**: assessment of the company's financial health
- **Risk assessment**: identification of potential investment risks
- **Investment recommendation**: advice grounded in the analysis
Modules execute in dependency order so that each one receives the upstream results it needs.
### Execution Details
The "Execution Details" tab on the report page shows:
- **Run overview**: time spent fetching financials, API call counts, etc.
- **Analysis tasks**: each module's status, duration, and token usage
- **Totals**: overall duration, number of completed tasks, and total token usage
This information helps you understand how the report was produced and where the data came from.
## FAQ
### Q1: Why do some financial values show "-"?
A: Possible reasons:
- the stock has no data for that year
- the data source is temporarily unavailable
- the metric does not apply to that stock type
### Q2: What if an analysis module fails to generate?
A: You can:
- click "Regenerate Analysis" to retry
- check the AI-service settings in the system configuration
- inspect the error message under "Execution Details"
### Q3: How do I view historical reports?
A:
1. Open the "Query" page (if enabled)
2. Pick a market and company ID
3. Click "Query" to list the historical reports
### Q4: What if the ticker is entered incorrectly?
A: The system recognizes common code formats automatically, but if recognition fails:
- China: use the full form, e.g. `600519.SH` or `000001.SZ`
- Other markets: use that market's standard format
### Q5: How do I configure the system?
A:
1. Open the "Settings" page
2. Fill in each setting on the corresponding tab
3. Validate with the "Test" buttons
4. Click "Save All Settings"
### Q6: How long does report generation take?
A: It depends on:
- how fast the financial data is fetched (usually a few seconds)
- the number and complexity of the AI analysis modules
- the AI service's response time
A full report usually takes 1-5 minutes.
### Q7: Can I analyze several stocks at once?
A: Not yet; the system analyzes one stock per run. Submit separate requests for multiple stocks.
### Q8: Is the report data real-time?
A:
- Financial data: from Tushare and other sources; freshness depends on the source
- Price chart: TradingView provides live prices
- AI analysis: generated on the fly from the fetched financial data
## System Configuration
### First-Time Setup
On first use, configure the following:
1. **Database** (if used)
   - Connection URL: `postgresql+asyncpg://user:password@host:port/database`
   - Validate with the "Test Connection" button
2. **AI services**
   - API Key: your AI-service API key
   - Base URL: the API endpoint address (for self-hosted services)
3. **Data sources**
   - **Tushare**: Tushare API key (required for the Chinese market)
   - **Finnhub**: Finnhub API key (optional, for global markets)
### Configuration Notes
- **Secrets**: API keys and other sensitive values are never echoed in the inputs; leave a field blank to keep the current value
- **Validation**: use the "Test" buttons before saving
- **Backups**: export the configuration regularly with "Export Config"
- **Restore**: use "Import Config" to restore a previous configuration
### Analysis-Module Configuration
On the "Settings" page, under the "Analysis Config" tab, you can:
- **Rename modules**: change each module's display name
- **Pick models**: choose the AI model for each module
- **Edit prompts**: customize each module's prompt template
- **Set dependencies**: define dependencies between modules
Click "Save Analysis Config" to apply the changes.
## Support
If you run into problems, you can:
1. Check the error details under "Execution Details"
2. Verify the system configuration
3. Check the system logs (if enabled)
4. Contact the system administrator
---
**Last updated**: January 2025

View File

@ -0,0 +1,166 @@
# Microservice Refactoring Plan
## 1. Introduction
### 1.1. Purpose of This Document
This document provides a complete design blueprint and a phased roadmap for evolving the fundamentals-screening system from a monolith to a microservice architecture. It details the target architecture, service boundaries, stack choices, and concrete migration steps, and serves as the core reference for subsequent development.
### 1.2. Goals and Benefits
The current system is a classic monolith with a separated frontend and backend. To handle more complex future requirements, improve maintainability and scalability, and let key modules deploy and scale independently, we have decided to refactor it into microservices.
Key benefits:
- **High cohesion, low coupling**: each service owns a single business responsibility and is easier to understand and maintain.
- **Independent deployment and delivery**: a single service can be changed, tested, and deployed without touching the whole system, speeding up iteration.
- **Technology heterogeneity**: each service can eventually adopt the stack that suits it best.
- **Elastic scaling**: heavily loaded services (such as the AI analysis service) can be scaled out on their own.
- **Fault isolation**: one failing service will not bring down the entire system.
## 2. Target Architecture
We adopt a microservice pattern centered on an **API gateway**. The frontend communicates with a set of independent backend services through the gateway.
![Microservices Architecture Diagram](https://i.imgur.com/gK98h83.png)
### 2.1. Service Breakdown
The existing backend application splits into the following core services:

| Service | Container (`docker-compose`) | Core responsibility |
| :--- | :--- | :--- |
| **Frontend** | `frontend-web` | **(unchanged)** The Next.js UI, responsible for user interaction. |
| **API gateway** | `api-gateway` | **(new)** The system's single entry point: request routing, auth, rate limiting, and log aggregation; forwards frontend requests to the right internal service. |
| **Report orchestrator** | `report-orchestrator` | **(evolves from the current backend)** Owns the report-generation workflow: receives requests and orchestrates the data and analysis services end to end. |
| **Data aggregator** | `data-aggregator` | **(new)** Wraps every third-party data-source API (Tushare, Finnhub, ...) behind a unified data interface with built-in caching. |
| **AI analysis service** | `analysis-service` | **(new)** Dedicated to LLM (Gemini) interaction; isolated so it can later be scaled separately or moved to GPU servers. |
| **Config service** | `config-service` | **(new)** Centrally manages and serves all dynamic configuration (API keys, prompt templates, ...), enabling dynamic updates and unified management. |
| **Database** | `postgres-db` | **(unchanged)** A standalone PostgreSQL container persisting data for all services. |
### 2.2. Stack and Development Environment
- **Containerization**: `Docker`
- **Service orchestration**: `Docker Compose`
- **Dev-environment management**: `Tilt`
- **Inter-service communication**: synchronous calls use lightweight `RESTful API (HTTP)`; long-running tasks can later move to a message queue such as `RabbitMQ` or `Redis Stream` for async communication.
### 2.3. Root Directory Cleanup
By convention, the project root stays clean and holds only top-level files and directories tied to the project as a whole and to service orchestration. Business code, per-app configs, and script tooling belong in subdirectories:
- **`services/`**: all service code (including `frontend` and `backend`) migrates here.
- **`deployment/`**: production deployment configs (e.g., `pm2.config.js`).
- **`scripts/`**: development, build, and tooling scripts (e.g., `dev.py` and the old root `package.json`).
- **`.gitignore`**: add rules to ignore developers' personal tools and binaries (e.g., `portwardenc-amd64`).
## 3. Phased Implementation Plan
We refactor incrementally and iteratively so every step is verifiable and low-risk.
### Phase 0: Containerize the Existing Monolith
**Goal**: containerize the existing `frontend` and `backend` without modifying any business code, and run them with `docker-compose` and `Tilt`. This is the first step to validate the container environment and development workflow.
**Tasks**:
1. Create a root `docker-compose.yml` defining the `frontend`, `backend`, and `postgres-db` services.
2. Add a `Dockerfile` to each of the `frontend` and `backend` directories.
3. Create a root `Tiltfile` configured to load `docker-compose.yml`.
4. Adjust configuration (e.g., the `NEXT_PUBLIC_BACKEND_URL` and `DATABASE_URL` environment variables) for Docker's internal network.
5. **Verify**: run `tilt up`; the whole app starts and is reachable just like a local run.
---
### Phase 1: Extract the Config Service (`config-service`)
**Goal**: peel configuration management out of the main backend as the first real microservice. It is an ideal starting point: few dependencies, low risk.
**Tasks**:
1. Create a new directory `services/config-service`.
2. Initialize a new, lightweight FastAPI app there.
3. Move all logic in the old `backend` that reads files under `config/` (e.g., `ConfigManager`) into `config-service`.
4. Expose API endpoints in `config-service`, e.g. `GET /api/v1/system`, `GET /api/v1/analysis-modules`.
5. Add a `config-service` definition to `docker-compose.yml`.
6. Update the old `backend`: drop the local file reads and fetch configuration from `config-service` over HTTP.
---
### Phase 2: Extract the Data Aggregation Service (`data-aggregator`)
**Goal**: isolate all interaction with external data sources.
**Tasks**:
1. Create a new directory `services/data-aggregator`.
2. Move the `backend/app/data_providers` directory and the related fetch/processing logic into the new service wholesale.
3. Define a clean API for the new service, e.g. `GET /api/v1/financials/{symbol}`.
4. Add a `data-aggregator` definition to `docker-compose.yml`.
5. Update the old `backend` to call `data-aggregator` over HTTP instead of the local data modules.
---
### Phase 3: Extract the AI Analysis Service (`analysis-service`)
**Goal**: isolate the compute-intensive AI calls that may need special hardware.
**Tasks**:
1. Create a new directory `services/analysis-service`.
2. Move `backend/app/services/analysis_client.py` and the related Gemini API logic into the new service.
3. Define the API, e.g. `POST /api/v1/analyze`, taking context data and a prompt and returning the analysis result.
4. Add an `analysis-service` definition to `docker-compose.yml`.
5. Update the old `backend` to call `analysis-service` over HTTP instead of invoking the SDK directly.
---
### Phase 4: Introduce the API Gateway (`api-gateway`) and Reshape the Orchestrator
**Goal**: establish a single external entry point and formally recast the old `backend` as `report-orchestrator`.
**Tasks**:
1. Create `services/api-gateway` and initialize a FastAPI app.
2. Configure routing rules in `api-gateway` that proxy frontend requests (e.g., `/api/config/*`, `/api/financials/*`) to the right internal services (`config-service`, `report-orchestrator`, ...).
3. Update `docker-compose.yml` to expose only the frontend port to the host, keeping the other backend services reachable only on the internal network.
4. Point the frontend's `NEXT_PUBLIC_BACKEND_URL` at `api-gateway`.
5. With the old `backend` slimmed down to mostly orchestration logic, consider renaming its directory to `services/report-orchestrator` to reflect its role accurately.
## 4. Final Project Layout (envisioned)
After the refactor, the repository layout becomes:
```
/home/lv/nvm/works/Fundamental_Analysis/
├── docker-compose.yml
├── Tiltfile
├── README.md
├── .gitignore
├── services/
│ ├── frontend/
│ │ └── Dockerfile
│ ├── api-gateway/
│ │ ├── app/
│ │ └── Dockerfile
│ ├── config-service/
│ │ ├── app/
│ │ └── Dockerfile
│ ├── data-aggregator/
│ │ ├── app/
│ │ └── Dockerfile
│ ├── analysis-service/
│ │ ├── app/
│ │ └── Dockerfile
│ └── report-orchestrator/ # evolved from the original backend
│ ├── app/
│ └── Dockerfile
├── deployment/
│ └── pm2.config.js
├── scripts/
│ ├── dev.py
│ └── package.json # 原根目录的 package.json
├── config/ # 静态配置文件,由 config-service 读取
└── docs/
└── microservice_refactoring_plan.md
```
## 5. Conclusion
This plan lays out a clear, feasible path from monolith to microservices. By refactoring in phased increments we can complete the architecture upgrade smoothly while keeping the system working and verifiable at the end of every phase.
Please review this plan; we can discuss and adjust it at any time.

View File

@ -19,9 +19,9 @@
- [x] **T2.1 [Backend/DB]**: define the four core data models (`Report`, `AnalysisModule`, `ProgressTracking`, `SystemConfig`) with the SQLAlchemy ORM, per the design doc. **[Done - 2025-10-21]**
- [x] **T2.2 [Backend/DB]**: create the first Alembic migration script to generate those four tables in the database. **[Done - 2025-10-21]**
- [x] **T2.3 [Backend]**: implement the `ConfigManager` service: load config from `config.json` and merge it with database-stored config. **[Done - 2025-10-21]**
- **T2.4 [Backend/API]**: create the Pydantic schemas for the config endpoints (`ConfigResponse`, `ConfigUpdateRequest`, `ConfigTestRequest`, `ConfigTestResponse`).
- **T2.5 [Backend/API]**: implement `GET` and `PUT` on `/api/config` to read and update the system config.
- **T2.6 [Backend/API]**: implement `POST` on `/api/config/test` to validate settings such as the database connection.
- [x] **T2.4 [Backend/API]**: create the Pydantic schemas for the config endpoints (`ConfigResponse`, `ConfigUpdateRequest`, `ConfigTestRequest`, `ConfigTestResponse`).
- [x] **T2.5 [Backend/API]**: implement `GET` and `PUT` on `/api/config` to read and update the system config.
- [x] **T2.6 [Backend/API]**: implement `POST` on `/api/config/test` to validate settings such as the database connection.
## Phase 3: Frontend Basics and the Settings Page (P1)
@ -39,8 +39,8 @@
This phase is the heart of the project, focusing on the backend report-generation flow and the frontend's live progress display.
- **T4.1 [Backend/Service]**: implement `DataSourceManager`, wrapping data fetching from Tushare and Yahoo Finance.
- **T4.2 [Backend/Service]**: implement `AIService`, wrapping Google Gemini API calls, including token-usage accounting.
- [x] **T4.1 [Backend/Service]**: implement `DataSourceManager`, wrapping data fetching from Tushare and Yahoo Finance.
- [x] **T4.2 [Backend/Service]**: implement `AIService`, wrapping Google Gemini API calls, including token-usage accounting.
- **T4.3 [Backend/Service]**: implement the `ProgressTracker` service with `initialize`, `start_step`, `complete_step`, and `get_progress`, backed by the database.
- **T4.4 [Backend/Service]**: define the `AnalysisModule` base class/interface and implement one or two modules (e.g., `FinancialDataModule`) as examples.
- **T4.5 [Backend/Service]**: implement the core `ReportGenerator` service, orchestrating data fetching, module execution, and progress updates.

View File

@ -0,0 +1,67 @@
# US Market Data Integration Task List
This document tracks the development tasks required to integrate US market data into the project (using Finnhub as the data source).
## Tasks
- [x] **Backend: implement the FinnhubProvider data mapping**
- **Goal**: following the definitions in `docs/financial_data_dictionary.md`, complete the mapping from raw Finnhub API data to the system's standard fields in `backend/app/data_providers/finnhub.py`.
- **Key points**:
- [x] Handle the directly mapped fields.
- [x] Implement every derived metric that must be computed.
- [x] Handle `null` and empty values to avoid computation errors.
- [x] Verify that the returned structure matches what `DataManager` expects.
- [x] **Backend: market-segmented API routes**
- **Goal**: in `backend/app/routers/financial.py`, change the existing `/api/v1/financials/china/{ts_code}` to the market-segmented `/api/v1/financials/{market}/{stock_code}` (examples: `/api/v1/financials/us/AAPL`, `/api/v1/financials/cn/600519.SH`).
- **Key points**:
- [x] Drop the hard-coded `china`, add a `market` path parameter, and validate its values (`cn/us/hk/jp`).
- [x] Use a single handler that dispatches to the right data provider and code-format rules based on `market`.
- [x] **Frontend: update the API calls**
- **Goal**: update the frontend to request the new market-segmented routes based on the user's selected market and ticker.
- **Key points**:
- [x] Replace `useChinaFinancials` with a generic `useFinancials(market, stockCode, years)`.
- [x] Change the request path to `/api/financials/{market}/{stock_code}?years=...` (proxied to the backend's `/api/v1/financials/{market}/{stock_code}`).
- [ ] Make display and error handling work for the US, Hong Kong, and Japanese markets.
- [ ] **Testing and validation**
- **Goal**: test the whole flow end to end so both markets are stable and reliable.
- **Key points**:
- [ ] **China regression**: test several Chinese A-share codes and confirm existing behavior is unaffected.
- [ ] **US functionality**: test several US tickers (e.g., `AAPL`, `MSFT`) and verify reports generate successfully.
- [ ] **Data consistency**: spot-check Finnhub's returned data against what the frontend displays to confirm the mappings and computations.
- [ ] **Error handling**: test invalid tickers and confirm the system produces clear error messages.
- **Prerequisites**:
- [ ] Configure `FINNHUB_API_KEY` in `config/config.json` or as an environment variable.
- [ ] Backend running (default `http://127.0.0.1:8000/api`) and frontend running (default `http://127.0.0.1:3000`).
- **API test cases (backend)**:
- [ ] GET `/api/v1/financials/cn/600519.SH?years=10`
- Expect `200`; returns `ts_code`, `name`, and `series` (with key metrics such as `revenue` and `n_income`; complete period/year sequences).
- [ ] GET `/api/v1/financials/cn/000001.SZ?years=5`
- Expect `200`; same as above with a 5-year window.
- [ ] GET `/api/v1/financials/us/AAPL?years=10`
- Expect `200`; `series` includes at least `revenue`, `n_income`, `total_assets`, `total_hldr_eqy_exc_min_int`, `__free_cash_flow`, `grossprofit_margin`, `netprofit_margin`, `roe`, `roa`.
- [ ] GET `/api/v1/financials/us/MSFT?years=10`
- Expect `200`; same fields and definitions as AAPL.
- [ ] GET `/api/v1/financials/us/INVALID?years=10`
- Expect `4xx/5xx` with a readable message in `detail.message`.
- **Page test cases (frontend)**:
- [ ] Open `/report/600519.SH?market=cn`
- Expect: basic info and the "yesterday snapshot" render; the "Financial Data (from Tushare)" table shows the key metrics across up to 10 periods.
- [ ] Open `/report/000001.SZ?market=cn`
- Expect: same as above; code normalization (auto-appending `.SZ`/`.SH` when the suffix is missing) works.
- [ ] Open `/report/AAPL?market=us`
- Expect: the price chart renders; the financial table shows the key metrics (including free cash flow, gross margin, net margin, ROA, ROE).
- [ ] Open `/report/MSFT?market=us`
- Expect: same as above.
- [ ] Open `/report/INVALID?market=us`
- Expect: the header shows a "load failed" status with an error message.
- **Acceptance criteria**:
- [ ] No China-market regressions; US key metrics are complete with sane ranges (percentage metrics within [-1000%, 1000%]; amount metrics are finite numbers).
- [ ] Error messages are clear and readable; missing network access or keys produce explicit hints.
- [ ] Main tables never show `NaN/Infinity`; empty values render as `-`.

2
frontend/.gitignore vendored
View File

@ -39,3 +39,5 @@ yarn-error.log*
# typescript
*.tsbuildinfo
next-env.d.ts
/src/generated/prisma

22
frontend/Dockerfile Normal file
View File

@ -0,0 +1,22 @@
# syntax=docker/dockerfile:1.6
FROM node:20-alpine AS base
ENV NODE_ENV=development \
NEXT_TELEMETRY_DISABLED=1 \
CI=false
WORKDIR /workspace/frontend
# Copy only the dependency manifests to maximize cache reuse
COPY frontend/package.json frontend/package-lock.json ./
# Use npm ci; fall back to npm install so lockfile issues don't block the image build
RUN npm ci || npm install
# Source code is provided at runtime via a mounted volume
RUN mkdir -p /workspace/frontend
# The default entrypoint is supplied by docker-compose

View File

@ -11,19 +11,7 @@ const nextConfig = {
},
// Increase server timeout for long-running AI requests
experimental: {
proxyTimeout: 120000, // 120 seconds
},
async rewrites() {
return [
{
source: "/api/:path*",
destination: "http://127.0.0.1:8000/api/:path*",
},
{
source: "/health",
destination: "http://127.0.0.1:8000/health",
},
];
proxyTimeout: 300000, // 300 seconds (5 minutes)
},
};

View File

@ -8,6 +8,7 @@
"name": "frontend",
"version": "0.1.0",
"dependencies": {
"@prisma/client": "^6.18.0",
"@radix-ui/react-checkbox": "^1.3.3",
"@radix-ui/react-navigation-menu": "^1.2.14",
"@radix-ui/react-select": "^2.2.6",
@ -15,6 +16,8 @@
"@radix-ui/react-tabs": "^1.1.13",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"geist": "^1.5.1",
"github-markdown-css": "^5.8.1",
"lucide-react": "^0.545.0",
"next": "15.5.5",
"react": "19.1.0",
@ -34,6 +37,7 @@
"@types/react-dom": "^19",
"eslint": "^9",
"eslint-config-next": "15.5.5",
"prisma": "^6.18.0",
"tailwindcss": "^4",
"tw-animate-css": "^1.4.0",
"typescript": "^5"
@ -1015,6 +1019,91 @@
"node": ">=12.4.0"
}
},
"node_modules/@prisma/client": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/client/-/client-6.18.0.tgz",
"integrity": "sha512-jnL2I9gDnPnw4A+4h5SuNn8Gc+1mL1Z79U/3I9eE2gbxJG1oSA+62ByPW4xkeDgwE0fqMzzpAZ7IHxYnLZ4iQA==",
"hasInstallScript": true,
"license": "Apache-2.0",
"engines": {
"node": ">=18.18"
},
"peerDependencies": {
"prisma": "*",
"typescript": ">=5.1.0"
},
"peerDependenciesMeta": {
"prisma": {
"optional": true
},
"typescript": {
"optional": true
}
}
},
"node_modules/@prisma/config": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/config/-/config-6.18.0.tgz",
"integrity": "sha512-rgFzspCpwsE+q3OF/xkp0fI2SJ3PfNe9LLMmuSVbAZ4nN66WfBiKqJKo/hLz3ysxiPQZf8h1SMf2ilqPMeWATQ==",
"devOptional": true,
"license": "Apache-2.0",
"dependencies": {
"c12": "3.1.0",
"deepmerge-ts": "7.1.5",
"effect": "3.18.4",
"empathic": "2.0.0"
}
},
"node_modules/@prisma/debug": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/debug/-/debug-6.18.0.tgz",
"integrity": "sha512-PMVPMmxPj0ps1VY75DIrT430MoOyQx9hmm174k6cmLZpcI95rAPXOQ+pp8ANQkJtNyLVDxnxVJ0QLbrm/ViBcg==",
"devOptional": true,
"license": "Apache-2.0"
},
"node_modules/@prisma/engines": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/engines/-/engines-6.18.0.tgz",
"integrity": "sha512-i5RzjGF/ex6AFgqEe2o1IW8iIxJGYVQJVRau13kHPYEL1Ck8Zvwuzamqed/1iIljs5C7L+Opiz5TzSsUebkriA==",
"devOptional": true,
"hasInstallScript": true,
"license": "Apache-2.0",
"dependencies": {
"@prisma/debug": "6.18.0",
"@prisma/engines-version": "6.18.0-8.34b5a692b7bd79939a9a2c3ef97d816e749cda2f",
"@prisma/fetch-engine": "6.18.0",
"@prisma/get-platform": "6.18.0"
}
},
"node_modules/@prisma/engines-version": {
"version": "6.18.0-8.34b5a692b7bd79939a9a2c3ef97d816e749cda2f",
"resolved": "https://registry.npmjs.org/@prisma/engines-version/-/engines-version-6.18.0-8.34b5a692b7bd79939a9a2c3ef97d816e749cda2f.tgz",
"integrity": "sha512-T7Af4QsJQnSgWN1zBbX+Cha5t4qjHRxoeoWpK4JugJzG/ipmmDMY5S+O0N1ET6sCBNVkf6lz+Y+ZNO9+wFU8pQ==",
"devOptional": true,
"license": "Apache-2.0"
},
"node_modules/@prisma/fetch-engine": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/fetch-engine/-/fetch-engine-6.18.0.tgz",
"integrity": "sha512-TdaBvTtBwP3IoqVYoGIYpD4mWlk0pJpjTJjir/xLeNWlwog7Sl3bD2J0jJ8+5+q/6RBg+acb9drsv5W6lqae7A==",
"devOptional": true,
"license": "Apache-2.0",
"dependencies": {
"@prisma/debug": "6.18.0",
"@prisma/engines-version": "6.18.0-8.34b5a692b7bd79939a9a2c3ef97d816e749cda2f",
"@prisma/get-platform": "6.18.0"
}
},
"node_modules/@prisma/get-platform": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/@prisma/get-platform/-/get-platform-6.18.0.tgz",
"integrity": "sha512-uXNJCJGhxTCXo2B25Ta91Rk1/Nmlqg9p7G9GKh8TPhxvAyXCvMNQoogj4JLEUy+3ku8g59cpyQIKFhqY2xO2bg==",
"devOptional": true,
"license": "Apache-2.0",
"dependencies": {
"@prisma/debug": "6.18.0"
}
},
"node_modules/@radix-ui/number": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/@radix-ui/number/-/number-1.1.1.tgz",
@ -2150,6 +2239,7 @@
"resolved": "https://registry.npmjs.org/@types/react/-/react-19.2.2.tgz",
"integrity": "sha512-6mDvHUFSjyT2B2yeNx2nUgMxh9LtOWvkhIU3uePn2I2oyNymUAX1NIsdgviM4CH+JSrp2D2hsMvJOkxY+0wNRA==",
"license": "MIT",
"peer": true,
"dependencies": {
"csstype": "^3.0.2"
}
@ -2160,6 +2250,7 @@
"integrity": "sha512-9KQPoO6mZCi7jcIStSnlOWn2nEF3mNmyr3rIAsGnAbQKYbRLyqmeSc39EVgtxXVia+LMT8j3knZLAZAh+xLmrw==",
"devOptional": true,
"license": "MIT",
"peer": true,
"peerDependencies": {
"@types/react": "^19.2.0"
}
@ -2222,6 +2313,7 @@
"integrity": "sha512-6JSSaBZmsKvEkbRUkf7Zj7dru/8ZCrJxAqArcLaVMee5907JdtEbKGsZ7zNiIm/UAkpGUkaSMZEXShnN2D1HZA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@typescript-eslint/scope-manager": "8.46.1",
"@typescript-eslint/types": "8.46.1",
@ -2745,6 +2837,7 @@
"integrity": "sha512-NZyJarBfL7nWwIq+FDL6Zp/yHEhePMNnnJ0y3qfieCrmNvYct8uvtiV41UvlSe6apAfk0fY1FbWx+NwfmpvtTg==",
"dev": true,
"license": "MIT",
"peer": true,
"bin": {
"acorn": "bin/acorn"
},
@ -3078,6 +3171,35 @@
"node": ">=8"
}
},
"node_modules/c12": {
"version": "3.1.0",
"resolved": "https://registry.npmjs.org/c12/-/c12-3.1.0.tgz",
"integrity": "sha512-uWoS8OU1MEIsOv8p/5a82c3H31LsWVR5qiyXVfBNOzfffjUWtPnhAb4BYI2uG2HfGmZmFjCtui5XNWaps+iFuw==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"chokidar": "^4.0.3",
"confbox": "^0.2.2",
"defu": "^6.1.4",
"dotenv": "^16.6.1",
"exsolve": "^1.0.7",
"giget": "^2.0.0",
"jiti": "^2.4.2",
"ohash": "^2.0.11",
"pathe": "^2.0.3",
"perfect-debounce": "^1.0.0",
"pkg-types": "^2.2.0",
"rc9": "^2.1.2"
},
"peerDependencies": {
"magicast": "^0.3.5"
},
"peerDependenciesMeta": {
"magicast": {
"optional": true
}
}
},
"node_modules/call-bind": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/call-bind/-/call-bind-1.0.8.tgz",
@ -3225,6 +3347,22 @@
"url": "https://github.com/sponsors/wooorm"
}
},
"node_modules/chokidar": {
"version": "4.0.3",
"resolved": "https://registry.npmjs.org/chokidar/-/chokidar-4.0.3.tgz",
"integrity": "sha512-Qgzu8kfBvo+cA4962jnP1KkS6Dop5NS6g7R5LFYJr4b8Ub94PPQXUksCw9PvXoeXPRRddRNC5C1JQUR2SMGtnA==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"readdirp": "^4.0.1"
},
"engines": {
"node": ">= 14.16.0"
},
"funding": {
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/chownr": {
"version": "3.0.0",
"resolved": "https://registry.npmjs.org/chownr/-/chownr-3.0.0.tgz",
@ -3235,6 +3373,16 @@
"node": ">=18"
}
},
"node_modules/citty": {
"version": "0.1.6",
"resolved": "https://registry.npmjs.org/citty/-/citty-0.1.6.tgz",
"integrity": "sha512-tskPPKEs8D2KPafUypv2gxwJP8h/OaJmC82QQGGDQcHvXX43xF2VDACcJVmZ0EuSxkpO9Kc4MlrA3q0+FG58AQ==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"consola": "^3.2.3"
}
},
"node_modules/class-variance-authority": {
"version": "0.7.1",
"resolved": "https://registry.npmjs.org/class-variance-authority/-/class-variance-authority-0.7.1.tgz",
@ -3299,6 +3447,23 @@
"dev": true,
"license": "MIT"
},
"node_modules/confbox": {
"version": "0.2.2",
"resolved": "https://registry.npmjs.org/confbox/-/confbox-0.2.2.tgz",
"integrity": "sha512-1NB+BKqhtNipMsov4xI/NnhCKp9XG9NamYp5PVm9klAT0fsrNPjaFICsCFhNhwZJKNh7zB/3q8qXz0E9oaMNtQ==",
"devOptional": true,
"license": "MIT"
},
"node_modules/consola": {
"version": "3.4.2",
"resolved": "https://registry.npmjs.org/consola/-/consola-3.4.2.tgz",
"integrity": "sha512-5IKcdX0nnYavi6G7TtOhwkYzyjfJlatbjMjuLSfE2kYT5pMDOilZ4OvMhi637CcDICTmz3wARPoyhqyX1Y+XvA==",
"devOptional": true,
"license": "MIT",
"engines": {
"node": "^14.18.0 || >=16.10.0"
}
},
"node_modules/cross-spawn": {
"version": "7.0.6",
"resolved": "https://registry.npmjs.org/cross-spawn/-/cross-spawn-7.0.6.tgz",
@ -3545,6 +3710,16 @@
"dev": true,
"license": "MIT"
},
"node_modules/deepmerge-ts": {
"version": "7.1.5",
"resolved": "https://registry.npmjs.org/deepmerge-ts/-/deepmerge-ts-7.1.5.tgz",
"integrity": "sha512-HOJkrhaYsweh+W+e74Yn7YStZOilkoPb6fycpwNLKzSPtruFs48nYis0zy5yJz1+ktUhHxoRDJ27RQAWLIJVJw==",
"devOptional": true,
"license": "BSD-3-Clause",
"engines": {
"node": ">=16.0.0"
}
},
"node_modules/define-data-property": {
"version": "1.1.4",
"resolved": "https://registry.npmjs.org/define-data-property/-/define-data-property-1.1.4.tgz",
@ -3581,6 +3756,13 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/defu": {
"version": "6.1.4",
"resolved": "https://registry.npmjs.org/defu/-/defu-6.1.4.tgz",
"integrity": "sha512-mEQCMmwJu317oSz8CwdIOdwf3xMif1ttiM8LTufzc3g6kR+9Pe236twL8j3IYT1F7GfRgGcW6MWxzZjLIkuHIg==",
"devOptional": true,
"license": "MIT"
},
"node_modules/dequal": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz",
@ -3590,6 +3772,13 @@
"node": ">=6"
}
},
"node_modules/destr": {
"version": "2.0.5",
"resolved": "https://registry.npmjs.org/destr/-/destr-2.0.5.tgz",
"integrity": "sha512-ugFTXCtDZunbzasqBxrK93Ik/DRYsO6S/fedkWEMKqt04xZ4csmnmwGDBAb07QWNaGMAmnTIemsYZCksjATwsA==",
"devOptional": true,
"license": "MIT"
},
"node_modules/detect-libc": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/detect-libc/-/detect-libc-2.1.2.tgz",
@ -3632,6 +3821,19 @@
"node": ">=0.10.0"
}
},
"node_modules/dotenv": {
"version": "16.6.1",
"resolved": "https://registry.npmjs.org/dotenv/-/dotenv-16.6.1.tgz",
"integrity": "sha512-uBq4egWHTcTt33a72vpSG0z3HnPuIl6NqYcTrKEg2azoEyl2hpW0zqlxysq2pK9HlDIHyHyakeYaYnSAwd8bow==",
"devOptional": true,
"license": "BSD-2-Clause",
"engines": {
"node": ">=12"
},
"funding": {
"url": "https://dotenvx.com"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
@ -3647,6 +3849,17 @@
"node": ">= 0.4"
}
},
"node_modules/effect": {
"version": "3.18.4",
"resolved": "https://registry.npmjs.org/effect/-/effect-3.18.4.tgz",
"integrity": "sha512-b1LXQJLe9D11wfnOKAk3PKxuqYshQ0Heez+y5pnkd3jLj1yx9QhM72zZ9uUrOQyNvrs2GZZd/3maL0ZV18YuDA==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"@standard-schema/spec": "^1.0.0",
"fast-check": "^3.23.1"
}
},
"node_modules/emoji-regex": {
"version": "9.2.2",
"resolved": "https://registry.npmjs.org/emoji-regex/-/emoji-regex-9.2.2.tgz",
@ -3654,6 +3867,16 @@
"dev": true,
"license": "MIT"
},
"node_modules/empathic": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/empathic/-/empathic-2.0.0.tgz",
"integrity": "sha512-i6UzDscO/XfAcNYD75CfICkmfLedpyPDdozrLMmQc5ORaQcdMoc21OnlEylMIqI7U8eniKrPMxxtj8k0vhmJhA==",
"devOptional": true,
"license": "MIT",
"engines": {
"node": ">=14"
}
},
"node_modules/enhanced-resolve": {
"version": "5.18.3",
"resolved": "https://registry.npmjs.org/enhanced-resolve/-/enhanced-resolve-5.18.3.tgz",
@ -3874,6 +4097,7 @@
"integrity": "sha512-XyLmROnACWqSxiGYArdef1fItQd47weqB7iwtfr9JHwRrqIXZdcFMvvEcL9xHCmL0SNsOvF0c42lWyM1U5dgig==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@eslint-community/eslint-utils": "^4.8.0",
"@eslint-community/regexpp": "^4.12.1",
@ -4048,6 +4272,7 @@
"integrity": "sha512-whOE1HFo/qJDyX4SnXzP4N6zOWn79WhnCUY/iDR0mPfQZO8wcYE4JClzI2oZrhBnnMUCBCHZhO6VQyoBU95mZA==",
"dev": true,
"license": "MIT",
"peer": true,
"dependencies": {
"@rtsao/scc": "^1.1.0",
"array-includes": "^3.1.9",
@ -4310,12 +4535,42 @@
"integrity": "sha512-GWkBvjiSZK87ELrYOSESUYeVIc9mvLLf/nXalMOS5dYrgZq9o5OVkbZAVM06CVxYsCwH9BDZFPlQTlPA1j4ahA==",
"license": "MIT"
},
"node_modules/exsolve": {
"version": "1.0.7",
"resolved": "https://registry.npmjs.org/exsolve/-/exsolve-1.0.7.tgz",
"integrity": "sha512-VO5fQUzZtI6C+vx4w/4BWJpg3s/5l+6pRQEHzFRM8WFi4XffSP1Z+4qi7GbjWbvRQEbdIco5mIMq+zX4rPuLrw==",
"devOptional": true,
"license": "MIT"
},
"node_modules/extend": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/extend/-/extend-3.0.2.tgz",
"integrity": "sha512-fjquC59cD7CyW6urNXK0FBufkZcoiGG80wTuPujX590cB5Ttln20E2UB4S/WARVqhXffZl2LNgS+gQdPIIim/g==",
"license": "MIT"
},
"node_modules/fast-check": {
"version": "3.23.2",
"resolved": "https://registry.npmjs.org/fast-check/-/fast-check-3.23.2.tgz",
"integrity": "sha512-h5+1OzzfCC3Ef7VbtKdcv7zsstUQwUDlYpUTvjeUsJAssPgLn7QzbboPtL5ro04Mq0rPOsMzl7q5hIbRs2wD1A==",
"devOptional": true,
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/dubzzz"
},
{
"type": "opencollective",
"url": "https://opencollective.com/fast-check"
}
],
"license": "MIT",
"dependencies": {
"pure-rand": "^6.1.0"
},
"engines": {
"node": ">=8.0.0"
}
},
"node_modules/fast-deep-equal": {
"version": "3.1.3",
"resolved": "https://registry.npmjs.org/fast-deep-equal/-/fast-deep-equal-3.1.3.tgz",
@ -4498,6 +4753,15 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/geist": {
"version": "1.5.1",
"resolved": "https://registry.npmjs.org/geist/-/geist-1.5.1.tgz",
"integrity": "sha512-mAHZxIsL2o3ZITFaBVFBnwyDOw+zNLYum6A6nIjpzCGIO8QtC3V76XF2RnZTyLx1wlDTmMDy8jg3Ib52MIjGvQ==",
"license": "SIL OPEN FONT LICENSE",
"peerDependencies": {
"next": ">=13.2.0"
}
},
"node_modules/generator-function": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/generator-function/-/generator-function-2.0.1.tgz",
@ -4587,6 +4851,36 @@
"url": "https://github.com/privatenumber/get-tsconfig?sponsor=1"
}
},
"node_modules/giget": {
"version": "2.0.0",
"resolved": "https://registry.npmjs.org/giget/-/giget-2.0.0.tgz",
"integrity": "sha512-L5bGsVkxJbJgdnwyuheIunkGatUF/zssUoxxjACCseZYAVbaqdh9Tsmmlkl8vYan09H7sbvKt4pS8GqKLBrEzA==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"citty": "^0.1.6",
"consola": "^3.4.0",
"defu": "^6.1.4",
"node-fetch-native": "^1.6.6",
"nypm": "^0.6.0",
"pathe": "^2.0.3"
},
"bin": {
"giget": "dist/cli.mjs"
}
},
"node_modules/github-markdown-css": {
"version": "5.8.1",
"resolved": "https://registry.npmjs.org/github-markdown-css/-/github-markdown-css-5.8.1.tgz",
"integrity": "sha512-8G+PFvqigBQSWLQjyzgpa2ThD9bo7+kDsriUIidGcRhXgmcaAWUIpCZf8DavJgc+xifjbCG+GvMyWr0XMXmc7g==",
"license": "MIT",
"engines": {
"node": ">=10"
},
"funding": {
"url": "https://github.com/sponsors/sindresorhus"
}
},
"node_modules/glob-parent": {
"version": "6.0.2",
"resolved": "https://registry.npmjs.org/glob-parent/-/glob-parent-6.0.2.tgz",
@ -5372,7 +5666,7 @@
"version": "2.6.1",
"resolved": "https://registry.npmjs.org/jiti/-/jiti-2.6.1.tgz",
"integrity": "sha512-ekilCSN1jwRvIbgeg/57YFh8qQDNbwDb9xT/qu2DAHbFFZUicIl4ygVaAvzveMhMVr3LnpSKTNnwt8PoOfmKhQ==",
"dev": true,
"devOptional": true,
"license": "MIT",
"bin": {
"jiti": "lib/jiti-cli.mjs"
@ -6783,6 +7077,7 @@
"resolved": "https://registry.npmjs.org/next/-/next-15.5.5.tgz",
"integrity": "sha512-OQVdBPtpBfq7HxFN0kOVb7rXXOSIkt5lTzDJDGRBcOyVvNRIWFauMqi1gIHd1pszq1542vMOGY0HP4CaiALfkA==",
"license": "MIT",
"peer": true,
"dependencies": {
"@next/env": "15.5.5",
"@swc/helpers": "0.5.15",
@ -6858,6 +7153,33 @@
"node": "^10 || ^12 || >=14"
}
},
"node_modules/node-fetch-native": {
"version": "1.6.7",
"resolved": "https://registry.npmjs.org/node-fetch-native/-/node-fetch-native-1.6.7.tgz",
"integrity": "sha512-g9yhqoedzIUm0nTnTqAQvueMPVOuIY16bqgAJJC8XOOubYFNwz6IER9qs0Gq2Xd0+CecCKFjtdDTMA4u4xG06Q==",
"devOptional": true,
"license": "MIT"
},
"node_modules/nypm": {
"version": "0.6.2",
"resolved": "https://registry.npmjs.org/nypm/-/nypm-0.6.2.tgz",
"integrity": "sha512-7eM+hpOtrKrBDCh7Ypu2lJ9Z7PNZBdi/8AT3AX8xoCj43BBVHD0hPSTEvMtkMpfs8FCqBGhxB+uToIQimA111g==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"citty": "^0.1.6",
"consola": "^3.4.2",
"pathe": "^2.0.3",
"pkg-types": "^2.3.0",
"tinyexec": "^1.0.1"
},
"bin": {
"nypm": "dist/cli.mjs"
},
"engines": {
"node": "^14.16.0 || >=16.10.0"
}
},
"node_modules/object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
@ -6981,6 +7303,13 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/ohash": {
"version": "2.0.11",
"resolved": "https://registry.npmjs.org/ohash/-/ohash-2.0.11.tgz",
"integrity": "sha512-RdR9FQrFwNBNXAr4GixM8YaRZRJ5PUWbKYbE5eOsrwAjJW0q2REGcf79oYPsLyskQCZG1PLN+S/K1V00joZAoQ==",
"devOptional": true,
"license": "MIT"
},
"node_modules/optionator": {
"version": "0.9.4",
"resolved": "https://registry.npmjs.org/optionator/-/optionator-0.9.4.tgz",
@ -7114,6 +7443,20 @@
"dev": true,
"license": "MIT"
},
"node_modules/pathe": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/pathe/-/pathe-2.0.3.tgz",
"integrity": "sha512-WUjGcAqP1gQacoQe+OBJsFA7Ld4DyXuUIjZ5cc75cLHvJ7dtNsTugphxIADwspS+AraAUePCKrSVtPLFj/F88w==",
"devOptional": true,
"license": "MIT"
},
"node_modules/perfect-debounce": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/perfect-debounce/-/perfect-debounce-1.0.0.tgz",
"integrity": "sha512-xCy9V055GLEqoFaHoC1SoLIaLmWctgCUaBaWxDZ7/Zx4CTyX7cJQLJOok/orfjZAh9kEYpjJa4d0KcJmCbctZA==",
"devOptional": true,
"license": "MIT"
},
"node_modules/picocolors": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/picocolors/-/picocolors-1.1.1.tgz",
@ -7133,6 +7476,18 @@
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/pkg-types": {
"version": "2.3.0",
"resolved": "https://registry.npmjs.org/pkg-types/-/pkg-types-2.3.0.tgz",
"integrity": "sha512-SIqCzDRg0s9npO5XQ3tNZioRY1uK06lA41ynBC1YmFTmnY6FjUjVt6s4LoADmwoig1qqD0oK8h1p/8mlMx8Oig==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"confbox": "^0.2.2",
"exsolve": "^1.0.7",
"pathe": "^2.0.3"
}
},
"node_modules/possible-typed-array-names": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/possible-typed-array-names/-/possible-typed-array-names-1.1.0.tgz",
@ -7182,6 +7537,33 @@
"node": ">= 0.8.0"
}
},
"node_modules/prisma": {
"version": "6.18.0",
"resolved": "https://registry.npmjs.org/prisma/-/prisma-6.18.0.tgz",
"integrity": "sha512-bXWy3vTk8mnRmT+SLyZBQoC2vtV9Z8u7OHvEu+aULYxwiop/CPiFZ+F56KsNRNf35jw+8wcu8pmLsjxpBxAO9g==",
"devOptional": true,
"hasInstallScript": true,
"license": "Apache-2.0",
"peer": true,
"dependencies": {
"@prisma/config": "6.18.0",
"@prisma/engines": "6.18.0"
},
"bin": {
"prisma": "build/index.js"
},
"engines": {
"node": ">=18.18"
},
"peerDependencies": {
"typescript": ">=5.1.0"
},
"peerDependenciesMeta": {
"typescript": {
"optional": true
}
}
},
"node_modules/prop-types": {
"version": "15.8.1",
"resolved": "https://registry.npmjs.org/prop-types/-/prop-types-15.8.1.tgz",
@ -7214,6 +7596,23 @@
"node": ">=6"
}
},
"node_modules/pure-rand": {
"version": "6.1.0",
"resolved": "https://registry.npmjs.org/pure-rand/-/pure-rand-6.1.0.tgz",
"integrity": "sha512-bVWawvoZoBYpp6yIoQtQXHZjmz35RSVHnUOTefl8Vcjr8snTPY1wnpSPMWekcFwbxI6gtmT7rSYPFvz71ldiOA==",
"devOptional": true,
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/dubzzz"
},
{
"type": "opencollective",
"url": "https://opencollective.com/fast-check"
}
],
"license": "MIT"
},
"node_modules/queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.npmjs.org/queue-microtask/-/queue-microtask-1.2.3.tgz",
@ -7235,11 +7634,23 @@
],
"license": "MIT"
},
"node_modules/rc9": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/rc9/-/rc9-2.1.2.tgz",
"integrity": "sha512-btXCnMmRIBINM2LDZoEmOogIZU7Qe7zn4BpomSKZ/ykbLObuBdvG+mFq11DL6fjH1DRwHhrlgtYWG96bJiC7Cg==",
"devOptional": true,
"license": "MIT",
"dependencies": {
"defu": "^6.1.4",
"destr": "^2.0.3"
}
},
"node_modules/react": {
"version": "19.1.0",
"resolved": "https://registry.npmjs.org/react/-/react-19.1.0.tgz",
"integrity": "sha512-FS+XFBNvn3GTAWq26joslQgWNoFu08F4kl0J4CgdNKADkdSGXQyTCnKteIAJy96Br6YbpEU1LSzV5dYtjMkMDg==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
@ -7249,6 +7660,7 @@
"resolved": "https://registry.npmjs.org/react-dom/-/react-dom-19.1.0.tgz",
"integrity": "sha512-Xs1hdnE+DyKgeHJeJznQmYMIBG3TKIHJJT95Q58nHLSrElKlGQqDTR2HQ9fx5CN/Gk6Vh/kupBTDLU11/nDk/g==",
"license": "MIT",
"peer": true,
"dependencies": {
"scheduler": "^0.26.0"
},
@ -7260,7 +7672,8 @@
"version": "16.13.1",
"resolved": "https://registry.npmjs.org/react-is/-/react-is-16.13.1.tgz",
"integrity": "sha512-24e6ynE2H+OKt4kqsOvNd8kBpV65zoxbA4BVsEOB3ARVWQki/DHzaUoC5KuON/BiccDaCCTZBuOcfZs70kR8bQ==",
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/react-markdown": {
"version": "10.1.0",
@ -7294,6 +7707,7 @@
"resolved": "https://registry.npmjs.org/react-redux/-/react-redux-9.2.0.tgz",
"integrity": "sha512-ROY9fvHhwOD9ySfrF0wmvu//bKCQ6AeZZq1nJNtbDC+kk5DuSuNX/n6YWYF/SYy7bSba4D4FSz8DJeKY/S/r+g==",
"license": "MIT",
"peer": true,
"dependencies": {
"@types/use-sync-external-store": "^0.0.6",
"use-sync-external-store": "^1.4.0"
@ -7381,6 +7795,20 @@
}
}
},
"node_modules/readdirp": {
"version": "4.1.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.1.2.tgz",
"integrity": "sha512-GDhwkLfywWL2s6vEjyhri+eXmfH6j1L7JE27WhqLeYzoh/A3DBaYGEj2H/HFZCn/kMfim73FXxEJTw06WtxQwg==",
"devOptional": true,
"license": "MIT",
"engines": {
"node": ">= 14.18.0"
},
"funding": {
"type": "individual",
"url": "https://paulmillr.com/funding/"
}
},
"node_modules/recharts": {
"version": "3.3.0",
"resolved": "https://registry.npmjs.org/recharts/-/recharts-3.3.0.tgz",
@ -7412,7 +7840,8 @@
"version": "5.0.1",
"resolved": "https://registry.npmjs.org/redux/-/redux-5.0.1.tgz",
"integrity": "sha512-M9/ELqF6fy8FwmkpnF0S3YKOqMyoWJ4+CS5Efg2ct3oY9daQvd/Pc71FpGZsVsbl3Cpb+IIcjBDUnnyBdQbq4w==",
"license": "MIT"
"license": "MIT",
"peer": true
},
"node_modules/redux-thunk": {
"version": "3.1.0",
@ -8204,6 +8633,13 @@
"integrity": "sha512-+FbBPE1o9QAYvviau/qC5SE3caw21q3xkvWKBtja5vgqOWIHHJ3ioaq1VPfn/Szqctz2bU/oYeKd9/z5BL+PVg==",
"license": "MIT"
},
"node_modules/tinyexec": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/tinyexec/-/tinyexec-1.0.1.tgz",
"integrity": "sha512-5uC6DDlmeqiOwCPmK9jMSdOuZTh8bU39Ys6yidB+UTt5hfZUPGAypSgFRiEp+jbi9qH40BLDvy85jIU88wKSqw==",
"devOptional": true,
"license": "MIT"
},
"node_modules/tinyglobby": {
"version": "0.2.15",
"resolved": "https://registry.npmjs.org/tinyglobby/-/tinyglobby-0.2.15.tgz",
@ -8245,6 +8681,7 @@
"integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==",
"dev": true,
"license": "MIT",
"peer": true,
"engines": {
"node": ">=12"
},
@ -8422,8 +8859,9 @@
"version": "5.9.3",
"resolved": "https://registry.npmjs.org/typescript/-/typescript-5.9.3.tgz",
"integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==",
"dev": true,
"devOptional": true,
"license": "Apache-2.0",
"peer": true,
"bin": {
"tsc": "bin/tsc",
"tsserver": "bin/tsserver"

View File

@ -3,12 +3,13 @@
"version": "0.1.0",
"private": true,
"scripts": {
"dev": "next dev --turbopack",
"build": "next build --turbopack",
"dev": "NODE_NO_WARNINGS=1 next dev -p 3001",
"build": "next build",
"start": "next start",
"lint": "eslint"
},
"dependencies": {
"@prisma/client": "^6.18.0",
"@radix-ui/react-checkbox": "^1.3.3",
"@radix-ui/react-navigation-menu": "^1.2.14",
"@radix-ui/react-select": "^2.2.6",
@ -16,6 +17,8 @@
"@radix-ui/react-tabs": "^1.1.13",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"geist": "^1.5.1",
"github-markdown-css": "^5.8.1",
"lucide-react": "^0.545.0",
"next": "15.5.5",
"react": "19.1.0",
@ -35,6 +38,7 @@
"@types/react-dom": "^19",
"eslint": "^9",
"eslint-config-next": "15.5.5",
"prisma": "^6.18.0",
"tailwindcss": "^4",
"tw-animate-css": "^1.4.0",
"typescript": "^5"

View File

@ -0,0 +1,3 @@
# Please do not edit this file manually
# It should be added in your version-control system (e.g., Git)
provider = "postgresql"

View File

@ -0,0 +1,19 @@
// This is your Prisma schema file,
// learn more about it in the docs: https://pris.ly/d/prisma-schema
generator client {
provider = "prisma-client-js"
}
datasource db {
provider = "postgresql"
url = env("DATABASE_URL")
shadowDatabaseUrl = env("PRISMA_MIGRATE_SHADOW_DATABASE_URL")
}
model Report {
id String @id @default(uuid())
symbol String
content Json
createdAt DateTime @default(now())
}

View File

@ -1,14 +1,20 @@
import { NextRequest } from 'next/server';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL || 'http://127.0.0.1:8000/api';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL;
export async function GET() {
if (!BACKEND_BASE) {
return new Response('NEXT_PUBLIC_BACKEND_URL 未配置', { status: 500 });
}
const resp = await fetch(`${BACKEND_BASE}/config`);
const text = await resp.text();
return new Response(text, { status: resp.status, headers: { 'Content-Type': resp.headers.get('Content-Type') || 'application/json' } });
}
export async function PUT(req: NextRequest) {
if (!BACKEND_BASE) {
return new Response('NEXT_PUBLIC_BACKEND_URL 未配置', { status: 500 });
}
const body = await req.text();
const resp = await fetch(`${BACKEND_BASE}/config`, {
method: 'PUT',

View File

@ -1,8 +1,11 @@
import { NextRequest } from 'next/server';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL || 'http://127.0.0.1:8000/api';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL;
export async function POST(req: NextRequest) {
if (!BACKEND_BASE) {
return new Response('NEXT_PUBLIC_BACKEND_URL 未配置', { status: 500 });
}
const body = await req.text();
const resp = await fetch(`${BACKEND_BASE}/config/test`, {
method: 'POST',

View File

@ -1,16 +1,27 @@
import { NextRequest } from 'next/server';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL || 'http://127.0.0.1:8000/api';
const BACKEND_BASE = process.env.NEXT_PUBLIC_BACKEND_URL;
export async function GET(
req: NextRequest,
context: { params: Promise<{ slug: string[] }> }
) {
if (!BACKEND_BASE) {
return new Response('NEXT_PUBLIC_BACKEND_URL 未配置', { status: 500 });
}
const url = new URL(req.url);
const { slug } = await context.params;
const path = slug.join('/');
const target = `${BACKEND_BASE}/financials/${path}${url.search}`;
const resp = await fetch(target, { headers: { 'Content-Type': 'application/json' } });
const text = await resp.text();
return new Response(text, { status: resp.status, headers: { 'Content-Type': resp.headers.get('Content-Type') || 'application/json' } });
// Pass the backend response straight through (supports streaming bodies)
const headers = new Headers();
// Copy the key headers to reduce proxy-layer buffering
const contentType = resp.headers.get('content-type') || 'application/json; charset=utf-8';
headers.set('content-type', contentType);
const cacheControl = resp.headers.get('cache-control');
if (cacheControl) headers.set('cache-control', cacheControl);
const xAccelBuffering = resp.headers.get('x-accel-buffering');
if (xAccelBuffering) headers.set('x-accel-buffering', xAccelBuffering);
return new Response(resp.body, { status: resp.status, headers });
}

View File

@ -0,0 +1,29 @@
import { NextRequest } from 'next/server'
import { prisma } from '../../../../lib/prisma'
export async function GET(
req: NextRequest,
context: { params: Promise<{ id: string }> }
) {
// Prefer the id from the dynamic route params (a Promise); fall back to the last URL segment
let id: string | undefined
try {
const { id: idFromParams } = await context.params
id = idFromParams
} catch {
// ignore
}
if (!id) {
id = new URL(req.url).pathname.split('/').pop() || undefined
}
if (!id) {
return Response.json({ error: 'missing id' }, { status: 400 })
}
const report = await prisma.report.findUnique({ where: { id } })
if (!report) {
return Response.json({ error: 'not found' }, { status: 404 })
}
return Response.json(report)
}

View File

@ -0,0 +1,43 @@
export const runtime = 'nodejs'
import { NextRequest } from 'next/server'
import { prisma } from '../../../lib/prisma'
export async function GET(req: NextRequest) {
const url = new URL(req.url)
const limit = Number(url.searchParams.get('limit') || 50)
const offset = Number(url.searchParams.get('offset') || 0)
const [items, total] = await Promise.all([
prisma.report.findMany({
orderBy: { createdAt: 'desc' },
skip: offset,
take: Math.min(Math.max(limit, 1), 200)
}),
prisma.report.count()
])
return Response.json({ items, total })
}
export async function POST(req: NextRequest) {
try {
const body = await req.json()
const symbol = String(body.symbol || '').trim()
const content = body.content
if (!symbol) {
return Response.json({ error: 'symbol is required' }, { status: 400 })
}
if (typeof content === 'undefined') {
return Response.json({ error: 'content is required' }, { status: 400 })
}
const created = await prisma.report.create({
data: { symbol, content }
})
return Response.json(created, { status: 201 })
} catch (e) {
return Response.json({ error: 'invalid json body' }, { status: 400 })
}
}

View File

@ -6,13 +6,13 @@ import { Card, CardContent, CardHeader, CardTitle } from '@/components/ui/card';
async function getMarkdownContent() {
// process.cwd() is the root of the Next.js project (the 'frontend' directory)
const mdPath = path.join(process.cwd(), '..', 'docs', 'design.md');
const mdPath = path.join(process.cwd(), '..', 'docs', 'user-guide.md');
try {
const content = await fs.readFile(mdPath, 'utf8');
return content;
} catch (error) {
console.error("Failed to read design.md:", error);
return "# 文档加载失败\n\n无法读取 `docs/design.md` 文件。请检查文件是否存在以及服务器权限。";
console.error("Failed to read user-guide.md:", error);
return "# 文档加载失败\n\n无法读取 `docs/user-guide.md` 文件。请检查文件是否存在以及服务器权限。";
}
}
@ -22,42 +22,22 @@ export default async function DocsPage() {
return (
<div className="container mx-auto py-6 space-y-6">
<header className="space-y-2">
<h1 className="text-3xl font-bold"></h1>
<h1 className="text-3xl font-bold">使</h1>
<p className="text-muted-foreground">
使
</p>
</header>
<Card>
<CardContent className="p-6">
<article className="prose prose-zinc max-w-none dark:prose-invert">
<article className="markdown-body" style={{
boxSizing: 'border-box',
minWidth: '200px',
maxWidth: '980px',
margin: '0 auto',
padding: '0'
}}>
<ReactMarkdown
remarkPlugins={[remarkGfm]}
components={{
h1: ({node, ...props}) => <h1 className="text-3xl font-bold mb-4 mt-8 border-b pb-2" {...props} />,
h2: ({node, ...props}) => <h2 className="text-2xl font-bold mb-3 mt-6 border-b pb-2" {...props} />,
h3: ({node, ...props}) => <h3 className="text-xl font-semibold mb-2 mt-4" {...props} />,
p: ({node, ...props}) => <p className="mb-4 leading-7" {...props} />,
ul: ({node, ...props}) => <ul className="list-disc list-inside mb-4 space-y-2" {...props} />,
ol: ({node, ...props}) => <ol className="list-decimal list-inside mb-4 space-y-2" {...props} />,
li: ({node, ...props}) => <li className="ml-4" {...props} />,
code: ({node, inline, className, children, ...props}: any) => {
const match = /language-(\w+)/.exec(className || '');
return !inline ? (
<code className={className} {...props}>
{children}
</code>
) : (
<code className="bg-muted px-1.5 py-1 rounded text-sm font-mono" {...props}>
{children}
</code>
);
},
pre: ({children}) => <pre className="bg-muted p-4 rounded my-4 overflow-x-auto">{children}</pre>,
table: ({node, ...props}) => <div className="overflow-x-auto my-4"><table className="border-collapse border border-border w-full" {...props} /></div>,
th: ({node, ...props}) => <th className="border border-border px-4 py-2 bg-muted font-semibold text-left" {...props} />,
td: ({node, ...props}) => <td className="border border-border px-4 py-2" {...props} />,
a: ({node, ...props}) => <a className="text-primary underline hover:text-primary/80" {...props} />,
}}
>
{content}
</ReactMarkdown>

View File

@ -0,0 +1,22 @@
Place locally self-hosted fonts in this directory.
Required files (suggested):
- GeistVF.woff2
- GeistMonoVF.woff2
Suggested sources:
- If you already hold a font license, fetch the WOFF2 variable-font files from the official source or an internal artifact registry.
No extra configuration is needed once the files are in place; `src/app/layout.tsx` already references them via next/font/local:
- ./fonts/GeistVF.woff2 -> --font-geist-sans
- ./fonts/GeistMonoVF.woff2 -> --font-geist-mono
If the font files are missing for now, pages fall back to the system fonts and functionality is unaffected.
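A minimal sketch of the `next/font/local` wiring this README describes (paths and CSS variables mirror the list above; the actual `layout.tsx` may differ):

```typescript
// Sketch: referencing the self-hosted Geist fonts with next/font/local.
import localFont from 'next/font/local';

const geistSans = localFont({
  src: './fonts/GeistVF.woff2',       // variable-font file from this directory
  variable: '--font-geist-sans',      // exposed as a CSS custom property
});

const geistMono = localFont({
  src: './fonts/GeistMonoVF.woff2',
  variable: '--font-geist-mono',
});
```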

View File

@ -1,5 +1,6 @@
@import "tailwindcss";
@import "tw-animate-css";
@import "github-markdown-css/github-markdown.css";
@custom-variant dark (&:is(.dark *));

View File

@ -1,5 +1,6 @@
import type { Metadata } from "next";
import { Geist, Geist_Mono } from "next/font/google";
import { GeistSans } from 'geist/font/sans'
import { GeistMono } from 'geist/font/mono'
import "./globals.css";
import {
NavigationMenu,
@ -8,15 +9,9 @@ import {
NavigationMenuList,
} from "@/components/ui/navigation-menu";
const geistSans = Geist({
variable: "--font-geist-sans",
subsets: ["latin"],
});
const geistMono = Geist_Mono({
variable: "--font-geist-mono",
subsets: ["latin"],
});
// Official Geist fonts (npm package)
const geistSans = GeistSans;
const geistMono = GeistMono;
export const metadata: Metadata = {
title: "Fundamental Analysis",
@ -40,7 +35,7 @@ export default function RootLayout({
<NavigationMenuLink href="/" className="px-3 py-2"></NavigationMenuLink>
</NavigationMenuItem>
<NavigationMenuItem>
<NavigationMenuLink href="/reports" className="px-3 py-2"></NavigationMenuLink>
<NavigationMenuLink href="/reports" className="px-3 py-2"></NavigationMenuLink>
</NavigationMenuItem>
<NavigationMenuItem>
<NavigationMenuLink href="/docs" className="px-3 py-2"></NavigationMenuLink>

File diff suppressed because it is too large

View File

@ -0,0 +1,765 @@
import { prisma } from '../../../lib/prisma'
import ReactMarkdown from 'react-markdown'
import remarkGfm from 'remark-gfm'
import { Tabs, TabsList, TabsTrigger, TabsContent } from '@/components/ui/tabs'
import { Card, CardHeader, CardTitle, CardContent } from '@/components/ui/card'
import { Table, TableHeader, TableBody, TableHead, TableRow, TableCell } from '@/components/ui/table'
import { formatReportPeriod } from '@/lib/financial-utils'
type Report = {
id: string
symbol: string
content: any
createdAt: string
}
export default async function ReportDetailPage({ params }: { params: Promise<{ id: string }> }) {
const { id } = await params
const data = await prisma.report.findUnique({ where: { id } })
if (!data) {
return <div className="text-sm text-red-600"></div>
}
const content = (data.content ?? {}) as any
const analyses = (content?.analyses ?? {}) as Record<string, any>
// Normalized display order (matches the Chinese tab order used when generating reports)
const ordered = [
{ id: 'financial', label: '财务数据' },
{ id: 'company_profile', label: '公司简介' },
{ id: 'fundamentals', label: '基本面分析' },
{ id: 'bullish', label: '看涨分析' },
{ id: 'bearish', label: '看跌分析' },
{ id: 'market', label: '市场分析' },
{ id: 'news', label: '新闻分析' },
{ id: 'trading', label: '交易分析' },
{ id: 'insiders_institutions', label: '内部人及机构动向分析' },
{ id: 'final_conclusion', label: '最终结论' },
{ id: 'meta', label: '元数据' },
] as const
// Candidate backend keys for each normalized id (tolerates different naming)
const candidateKeys: Record<string, string[]> = {
company_profile: ['company_profile'],
fundamentals: ['fundamental_analysis', 'fundamentals_analysis', 'basic_analysis', 'basics_analysis'],
bullish: ['bullish_analysis', 'bullish_case', 'bull_case'],
bearish: ['bearish_analysis', 'bearish_case', 'bear_case'],
market: ['market_analysis'],
news: ['news_analysis'],
trading: ['trading_analysis'],
insiders_institutions: ['insider_institutional', 'insiders_institutions_analysis', 'insider_institution_analysis', 'insider_analysis'],
final_conclusion: ['final_conclusion', 'conclusion', 'investment_thesis'],
}
const findKey = (id: string): string | null => {
const c = candidateKeys[id]
if (!c) return null
for (const k of c) {
if (Object.prototype.hasOwnProperty.call(analyses, k)) return k
}
return null
}
// Strip duplicated top headings at the start of the body (Markdown lines beginning with #)
const stripTopHeadings = (text: string): string => {
const lines = String(text || '').split(/\r?\n/)
let i = 0
while (i < lines.length) {
const t = lines[i]?.trim() || ''
if (t === '') { i += 1; continue }
if (/^#{1,6}\s+/.test(t)) { i += 1; continue }
break
}
return lines.slice(i).join('\n').trimStart()
}
return (
<div className="space-y-4">
<div className="flex items-center justify-between">
<h1 className="text-2xl font-semibold"></h1>
<div className="text-sm text-muted-foreground">{new Date(data.createdAt).toLocaleString()}</div>
</div>
<Card>
<CardHeader>
<CardTitle className="text-base"></CardTitle>
</CardHeader>
<CardContent className="text-sm space-y-1">
<div className="flex flex-wrap items-center gap-4">
<span><span className="font-medium">{data.symbol}</span></span>
{content?.normalizedSymbol && (
<span><span className="font-medium">{String(content.normalizedSymbol)}</span></span>
)}
{(() => {
const companyName = (content?.financials?.name as string | undefined) || (content as any)?.company_name || (content as any)?.companyName
return companyName ? (
<span><span className="font-medium">{companyName}</span></span>
) : null
})()}
{content?.market && (
<span><span className="font-medium">{String(content.market)}</span></span>
)}
</div>
</CardContent>
</Card>
<Tabs defaultValue={'financial'} className="mt-2">
<TabsList className="flex-wrap">
{ordered.map((o, idx) => (
<TabsTrigger key={o.id} value={o.id}>{`${idx + 1}. ${o.label}`}</TabsTrigger>
))}
</TabsList>
<TabsContent value="financial" className="space-y-4">
<Card>
<CardHeader>
<CardTitle className="text-base"></CardTitle>
</CardHeader>
<CardContent className="space-y-4">
{(() => {
const fin = (content?.financials ?? null) as null | {
ts_code?: string
name?: string
series?: Record<string, Array<{ period: string; value: number | null }>>
meta?: any
}
const series = fin?.series || {}
const allPoints = Object.values(series).flat() as Array<{ period: string; value: number | null }>
const periods = Array.from(new Set(allPoints.map(p => p?.period).filter(Boolean) as string[])).sort((a, b) => b.localeCompare(a))
const numberFormatter = new Intl.NumberFormat('zh-CN', { minimumFractionDigits: 2, maximumFractionDigits: 2 })
const integerFormatter = new Intl.NumberFormat('zh-CN', { minimumFractionDigits: 0, maximumFractionDigits: 0 })
const metricDisplayMap: Record<string, string> = {
roe: 'ROE',
roa: 'ROA',
roic: 'ROCE/ROIC',
grossprofit_margin: '毛利率',
netprofit_margin: '净利润率',
tr_yoy: '收入增速',
dt_netprofit_yoy: '净利润增速',
revenue: '收入',
n_income: '净利润',
n_cashflow_act: '经营现金流',
c_pay_acq_const_fiolta: '资本开支',
cash_div_tax: '分红',
buyback: '回购',
total_assets: '总资产',
total_hldr_eqy_exc_min_int: '股东权益',
goodwill: '商誉',
total_mv: '市值',
}
const metricGroupMap: Record<string, string> = {
revenue: 'income',
n_income: 'income',
total_assets: 'balancesheet',
total_hldr_eqy_exc_min_int: 'balancesheet',
goodwill: 'balancesheet',
n_cashflow_act: 'cashflow',
c_pay_acq_const_fiolta: 'cashflow',
}
if (periods.length === 0) {
return (
<div className="text-sm text-muted-foreground">
</div>
)
}
const currentYearStr = String(new Date().getFullYear())
const getQuarter = (month: number | null | undefined) => {
if (month == null) return null
return Math.floor((month - 1) / 3) + 1
}
const PERCENT_KEYS = new Set(['roe','roa','roic','grossprofit_margin','netprofit_margin','tr_yoy','dt_netprofit_yoy'])
const ORDER: Array<{ key: string; label?: string; kind?: 'computed' }> = [
{ key: 'roe' },
{ key: 'roa' },
{ key: 'roic' },
{ key: 'grossprofit_margin' },
{ key: 'netprofit_margin' },
{ key: 'revenue' },
{ key: 'tr_yoy' },
{ key: 'n_income' },
{ key: 'dt_netprofit_yoy' },
{ key: 'n_cashflow_act' },
{ key: 'c_pay_acq_const_fiolta' },
{ key: '__free_cash_flow', label: '自由现金流', kind: 'computed' },
{ key: 'cash_div_tax', label: '分红' },
{ key: 'buyback', label: '回购' },
{ key: 'total_assets' },
{ key: 'total_hldr_eqy_exc_min_int' },
{ key: 'goodwill' },
]
return (
<div className="overflow-x-auto">
<Table className="min-w-full text-sm">
<TableHeader>
<TableRow>
<TableHead className="text-left p-2"></TableHead>
{periods.map((p) => (
<TableHead key={p} className="text-right p-2">{formatReportPeriod(p)}</TableHead>
))}
</TableRow>
</TableHeader>
<TableBody>
{(() => {
const summaryRow = (
<TableRow key="__main_metrics_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const rows = ORDER.map(({ key, label, kind }) => {
const isComputed = kind === 'computed' && key === '__free_cash_flow'
const points = series[key] as Array<{ period?: string; value?: number | null }>|undefined
const operating = series['n_cashflow_act'] as Array<{ period?: string; value?: number | null }>|undefined
const capex = series['c_pay_acq_const_fiolta'] as Array<{ period?: string; value?: number | null }>|undefined
return (
<TableRow key={key} className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">{label || metricDisplayMap[key] || key}</TableCell>
{periods.map((p) => {
let v: number | null | undefined = undefined
if (isComputed) {
const op = operating?.find(pt => pt?.period === p)?.value ?? null
const cp = capex?.find(pt => pt?.period === p)?.value ?? null
v = (op == null || cp == null) ? null : (Number(op) - Number(cp))
} else {
v = points?.find(pt => pt?.period === p)?.value ?? null
}
const groupName = metricGroupMap[key]
const rawNum = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (rawNum == null || Number.isNaN(rawNum)) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
if (PERCENT_KEYS.has(key)) {
const perc = Math.abs(rawNum) <= 1 ? rawNum * 100 : rawNum
const text = Number.isFinite(perc) ? numberFormatter.format(perc) : '-'
const isGrowthRow = key === 'tr_yoy' || key === 'dt_netprofit_yoy'
if (isGrowthRow) {
const isNeg = typeof perc === 'number' && perc < 0
return (
<TableCell key={p} className="text-right p-2">
<span className={isNeg ? 'text-red-600 bg-red-100 italic' : 'text-blue-600 italic'}>{text}%</span>
</TableCell>
)
}
if (key === 'roe' || key === 'roic') {
const highlight = typeof perc === 'number' && perc > 12
return (
<TableCell key={p} className={`text-right p-2 ${highlight ? 'bg-green-200' : ''}`}>{`${text}%`}</TableCell>
)
}
return <TableCell key={p} className="text-right p-2">{`${text}%`}</TableCell>
} else {
const isFinGroup = groupName === 'income' || groupName === 'balancesheet' || groupName === 'cashflow'
const scaled = key === 'total_mv' ? rawNum / 10000 : (isFinGroup || isComputed ? rawNum / 1e8 : rawNum)
const formatter = key === 'total_mv' ? integerFormatter : numberFormatter
const text = Number.isFinite(scaled) ? formatter.format(scaled) : '-'
if (key === '__free_cash_flow') {
const isNeg = typeof scaled === 'number' && scaled < 0
return (
<TableCell key={p} className="text-right p-2">{isNeg ? <span className="text-red-600 bg-red-100">{text}</span> : text}</TableCell>
)
}
return <TableCell key={p} className="text-right p-2">{text}</TableCell>
}
})}
</TableRow>
)
})
const getVal = (arr: Array<{ period?: string; value?: number | null }> | undefined, p: string) => {
const v = arr?.find(pt => pt?.period === p)?.value
return typeof v === 'number' ? v : (v == null ? null : Number(v))
}
// Expense ratios
const feeHeaderRow = (
<TableRow key="__fee_metrics_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const feeRows = [
{ key: '__sell_rate', label: '销售费用率', num: series['sell_exp'] as any, den: series['revenue'] as any },
{ key: '__admin_rate', label: '管理费用率', num: series['admin_exp'] as any, den: series['revenue'] as any },
{ key: '__rd_rate', label: '研发费用率', num: series['rd_exp'] as any, den: series['revenue'] as any },
{ key: '__other_fee_rate', label: '其他费用率', num: undefined, den: series['revenue'] as any },
{ key: '__tax_rate', label: '所得税率', num: series['tax_to_ebt'] as any, den: undefined },
{ key: '__depr_ratio', label: '折旧费用占比', num: series['depr_fa_coga_dpba'] as any, den: series['revenue'] as any },
].map(({ key, label, num, den }) => (
<TableRow key={key} className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">{label}</TableCell>
{periods.map((p) => {
let rate: number | null = null
if (key === '__tax_rate') {
const numerator = getVal(num, p)
if (numerator == null || Number.isNaN(numerator)) {
rate = null
} else if (Math.abs(numerator) <= 1) {
rate = numerator * 100
} else {
rate = numerator
}
} else if (key === '__other_fee_rate') {
const gpRaw = getVal(series['grossprofit_margin'] as any, p)
const npRaw = getVal(series['netprofit_margin'] as any, p)
const rev = getVal(series['revenue'] as any, p)
const sell = getVal(series['sell_exp'] as any, p)
const admin = getVal(series['admin_exp'] as any, p)
const rd = getVal(series['rd_exp'] as any, p)
if (gpRaw == null || npRaw == null || rev == null || rev === 0 || sell == null || admin == null || rd == null) {
rate = null
} else {
const gp = Math.abs(gpRaw) <= 1 ? gpRaw * 100 : gpRaw
const np = Math.abs(npRaw) <= 1 ? npRaw * 100 : npRaw
const sellRate = (sell / rev) * 100
const adminRate = (admin / rev) * 100
const rdRate = (rd / rev) * 100
rate = gp - np - sellRate - adminRate - rdRate
}
} else {
const numerator = getVal(num, p)
const denominator = getVal(den, p)
if (numerator == null || denominator == null || denominator === 0) {
rate = null
} else {
rate = (numerator / denominator) * 100
}
}
if (rate == null || !Number.isFinite(rate)) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const rateText = numberFormatter.format(rate)
const isNegative = rate < 0
return (
<TableCell key={p} className="text-right p-2">
{isNegative ? <span className="text-red-600 bg-red-100">{rateText}%</span> : `${rateText}%`}
</TableCell>
)
})}
</TableRow>
))
// Asset composition ratios
const assetHeaderRow = (
<TableRow key="__asset_ratio_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const ratioCell = (value: number | null, p: string) => {
if (value == null || !Number.isFinite(value)) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const text = numberFormatter.format(value)
const isNegative = value < 0
return (
<TableCell key={p} className="text-right p-2">
{isNegative ? <span className="text-red-600 bg-red-100">{text}%</span> : `${text}%`}
</TableCell>
)
}
const assetRows = [
{ key: '__money_cap_ratio', label: '现金占比', calc: (p: string) => {
const num = getVal(series['money_cap'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__inventories_ratio', label: '库存占比', calc: (p: string) => {
const num = getVal(series['inventories'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__ar_ratio', label: '应收款占比', calc: (p: string) => {
const num = getVal(series['accounts_receiv_bill'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__prepay_ratio', label: '预付款占比', calc: (p: string) => {
const num = getVal(series['prepayment'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__fix_assets_ratio', label: '固定资产占比', calc: (p: string) => {
const num = getVal(series['fix_assets'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__lt_invest_ratio', label: '长期投资占比', calc: (p: string) => {
const num = getVal(series['lt_eqt_invest'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__goodwill_ratio', label: '商誉占比', calc: (p: string) => {
const num = getVal(series['goodwill'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__other_assets_ratio', label: '其他资产占比', calc: (p: string) => {
const total = getVal(series['total_assets'] as any, p)
if (total == null || total === 0) return null
const parts = [
getVal(series['money_cap'] as any, p) || 0,
getVal(series['inventories'] as any, p) || 0,
getVal(series['accounts_receiv_bill'] as any, p) || 0,
getVal(series['prepayment'] as any, p) || 0,
getVal(series['fix_assets'] as any, p) || 0,
getVal(series['lt_eqt_invest'] as any, p) || 0,
getVal(series['goodwill'] as any, p) || 0,
]
const sumKnown = parts.reduce((acc: number, v: number) => acc + v, 0)
return ((total - sumKnown) / total) * 100
} },
{ key: '__ap_ratio', label: '应付款占比', calc: (p: string) => {
const num = getVal(series['accounts_pay'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__adv_ratio', label: '预收款占比', calc: (p: string) => {
const adv = getVal(series['adv_receipts'] as any, p) || 0
const contractLiab = getVal(series['contract_liab'] as any, p) || 0
const num = adv + contractLiab
const den = getVal(series['total_assets'] as any, p)
return den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__st_borr_ratio', label: '短期借款占比', calc: (p: string) => {
const num = getVal(series['st_borr'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__lt_borr_ratio', label: '长期借款占比', calc: (p: string) => {
const num = getVal(series['lt_borr'] as any, p)
const den = getVal(series['total_assets'] as any, p)
return num == null || den == null || den === 0 ? null : (num / den) * 100
} },
{ key: '__interest_bearing_debt_ratio', label: '有息负债率', calc: (p: string) => {
const total = getVal(series['total_assets'] as any, p)
if (total == null || total === 0) return null
const st = getVal(series['st_borr'] as any, p) || 0
const lt = getVal(series['lt_borr'] as any, p) || 0
return ((st + lt) / total) * 100
} },
{ key: '__operating_assets_ratio', label: '运营资产占比', calc: (p: string) => {
const total = getVal(series['total_assets'] as any, p)
if (total == null || total === 0) return null
const inv = getVal(series['inventories'] as any, p) || 0
const ar = getVal(series['accounts_receiv_bill'] as any, p) || 0
const pre = getVal(series['prepayment'] as any, p) || 0
const ap = getVal(series['accounts_pay'] as any, p) || 0
const adv = getVal(series['adv_receipts'] as any, p) || 0
const contractLiab = getVal(series['contract_liab'] as any, p) || 0
const operating = inv + ar + pre - ap - adv - contractLiab
return (operating / total) * 100
} },
].map(({ key, label, calc }) => (
<TableRow key={key} className={`hover:bg-purple-100 ${key === '__other_assets_ratio' ? 'bg-yellow-50' : ''}`}>
<TableCell className="p-2 text-muted-foreground">{label}</TableCell>
{periods.map((p) => ratioCell(calc(p), p))}
</TableRow>
))
// Turnover efficiency
const turnoverHeaderRow = (
<TableRow key="__turnover_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const getYearNumber = (ys: string) => {
const n = Number(ys)
return Number.isFinite(n) ? n : null
}
const getPoint = (arr: Array<{ period?: string; value?: number | null }> | undefined, period: string) => {
return arr?.find(p => p?.period === period)?.value ?? null
}
const getAvg = (arr: Array<{ period?: string; value?: number | null }> | undefined, period: string) => {
const curr = getPoint(arr, period)
const yNum = period.length >= 4 ? Number(period.substring(0, 4)) : null
const prevYear = yNum != null ? String(yNum - 1) : null
const prevPeriod = prevYear ? prevYear + period.substring(4) : null
const prev = prevPeriod ? getPoint(arr, prevPeriod) : null
const c = typeof curr === 'number' ? curr : (curr == null ? null : Number(curr))
const p = typeof prev === 'number' ? prev : (prev == null ? null : Number(prev))
if (c == null) return null
if (p == null) return c
return (c + p) / 2
}
const getMarginRatio = (year: string) => {
const gmRaw = getPoint(series['grossprofit_margin'] as any, year)
if (gmRaw == null) return null
const gmNum = typeof gmRaw === 'number' ? gmRaw : Number(gmRaw)
if (!Number.isFinite(gmNum)) return null
return Math.abs(gmNum) <= 1 ? gmNum : gmNum / 100
}
const getRevenue = (year: string) => {
const rev = getPoint(series['revenue'] as any, year)
const r = typeof rev === 'number' ? rev : (rev == null ? null : Number(rev))
return r
}
const getCOGS = (year: string) => {
const rev = getRevenue(year)
const gm = getMarginRatio(year)
if (rev == null || gm == null) return null
const cogs = rev * (1 - gm)
return Number.isFinite(cogs) ? cogs : null
}
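// Note (added commentary, not in the original source): payables turnover days
// are derived here rather than read from the API:
//   payturn_days = 365 * avg(accounts_pay) / COGS
// where COGS = revenue * (1 - gross margin), and avg() is the mean of the
// current and prior-year values when both exist (otherwise the current value).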
const turnoverItems: Array<{ key: string; label: string }> = [
{ key: 'invturn_days', label: '存货周转天数' },
{ key: 'arturn_days', label: '应收款周转天数' },
{ key: 'payturn_days', label: '应付款周转天数' },
{ key: 'fa_turn', label: '固定资产周转率' },
{ key: 'assets_turn', label: '总资产周转率' },
]
const turnoverRows = turnoverItems.map(({ key, label }) => (
<TableRow key={key} className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">{label}</TableCell>
{periods.map((p) => {
let value: number | null = null
if (key === 'payturn_days') {
const avgAP = getAvg(series['accounts_pay'] as any, p)
const cogs = getCOGS(p)
value = avgAP == null || cogs == null || cogs === 0 ? null : (365 * avgAP) / cogs
} else {
const arr = series[key] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
value = num == null || Number.isNaN(num) ? null : num
}
if (value == null || !Number.isFinite(value)) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const text = numberFormatter.format(value)
if (key === 'arturn_days' && value > 90) {
return (
<TableCell key={p} className="text-right p-2 bg-red-100 text-red-600">{text}</TableCell>
)
}
return <TableCell key={p} className="text-right p-2">{text}</TableCell>
})}
</TableRow>
))
// Per-employee efficiency
const perCapitaHeaderRow = (
<TableRow key="__per_capita_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const employeesRow = (
<TableRow key="__employees_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const v = getVal(series['employees'] as any, p)
if (v == null || !Number.isFinite(v)) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
return <TableCell key={p} className="text-right p-2">{integerFormatter.format(Math.round(v))}</TableCell>
})}
</TableRow>
)
const revPerEmpRow = (
<TableRow key="__rev_per_emp_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const rev = getVal(series['revenue'] as any, p)
const emp = getVal(series['employees'] as any, p)
if (rev == null || emp == null || emp === 0) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const val = (rev / emp) / 10000
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(val)}</TableCell>
})}
</TableRow>
)
const profitPerEmpRow = (
<TableRow key="__profit_per_emp_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const prof = getVal(series['n_income'] as any, p)
const emp = getVal(series['employees'] as any, p)
if (prof == null || emp == null || emp === 0) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const val = (prof / emp) / 10000
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(val)}</TableCell>
})}
</TableRow>
)
const salaryPerEmpRow = (
<TableRow key="__salary_per_emp_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const salaryPaid = getVal(series['c_paid_to_for_empl'] as any, p)
const emp = getVal(series['employees'] as any, p)
if (salaryPaid == null || emp == null || emp === 0) {
return <TableCell key={p} className="text-right p-2">-</TableCell>
}
const val = (salaryPaid / emp) / 10000
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(val)}</TableCell>
})}
</TableRow>
)
// Market performance
const marketHeaderRow = (
<TableRow key="__market_perf_row" className="bg-muted hover:bg-purple-100">
<TableCell className="p-2 font-medium "></TableCell>
{periods.map((p) => (
<TableCell key={p} className="p-2"></TableCell>
))}
</TableRow>
)
const priceRow = (
<TableRow key="__price_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const arr = series['close'] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (num == null || !Number.isFinite(num)) return <TableCell key={p} className="text-right p-2">-</TableCell>
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(num)}</TableCell>
})}
</TableRow>
)
const marketCapRow = (
<TableRow key="__market_cap_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">亿</TableCell>
{periods.map((p) => {
const arr = series['total_mv'] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (num == null || !Number.isFinite(num)) return <TableCell key={p} className="text-right p-2">-</TableCell>
const scaled = num / 10000
return <TableCell key={p} className="text-right p-2">{integerFormatter.format(Math.round(scaled))}</TableCell>
})}
</TableRow>
)
const peRow = (
<TableRow key="__pe_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">PE</TableCell>
{periods.map((p) => {
const arr = series['pe'] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (num == null || !Number.isFinite(num)) return <TableCell key={p} className="text-right p-2">-</TableCell>
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(num)}</TableCell>
})}
</TableRow>
)
const pbRow = (
<TableRow key="__pb_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground">PB</TableCell>
{periods.map((p) => {
const arr = series['pb'] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (num == null || !Number.isFinite(num)) return <TableCell key={p} className="text-right p-2">-</TableCell>
return <TableCell key={p} className="text-right p-2">{numberFormatter.format(num)}</TableCell>
})}
</TableRow>
)
const holderNumRow = (
<TableRow key="__holder_num_row" className="hover:bg-purple-100">
<TableCell className="p-2 text-muted-foreground"></TableCell>
{periods.map((p) => {
const arr = series['holder_num'] as Array<{ period?: string; value?: number | null }> | undefined
const v = arr?.find(pt => pt?.period === p)?.value ?? null
const num = typeof v === 'number' ? v : (v == null ? null : Number(v))
if (num == null || !Number.isFinite(num)) return <TableCell key={p} className="text-right p-2">-</TableCell>
return <TableCell key={p} className="text-right p-2">{integerFormatter.format(Math.round(num))}</TableCell>
})}
</TableRow>
)
return [
summaryRow,
...rows,
feeHeaderRow,
...feeRows,
assetHeaderRow,
...assetRows,
turnoverHeaderRow,
...turnoverRows,
perCapitaHeaderRow,
employeesRow,
revPerEmpRow,
profitPerEmpRow,
salaryPerEmpRow,
marketHeaderRow,
priceRow,
marketCapRow,
peRow,
pbRow,
holderNumRow,
]
})()}
</TableBody>
</Table>
</div>
)
})()}
</CardContent>
</Card>
</TabsContent>
<TabsContent value="meta" className="space-y-4">
<Card>
<CardHeader>
<CardTitle className="text-base"></CardTitle>
</CardHeader>
<CardContent>
<pre className="text-xs leading-relaxed overflow-auto">
{JSON.stringify(data, null, 2)}
</pre>
</CardContent>
</Card>
</TabsContent>
{ordered.filter(o => o.id !== 'financial' && o.id !== 'meta').map((o) => {
const key = findKey(o.id)
const item = key ? analyses[key] || {} : {}
const md = stripTopHeadings(String(item?.content || ''))
const err = item?.error as string | undefined
return (
<TabsContent key={o.id} value={o.id} className="space-y-3">
{err && <div className="text-sm text-red-600">{err}</div>}
<div className="border rounded-lg p-6 bg-card">
<article className="markdown-body" style={{
boxSizing: 'border-box', minWidth: '200px', maxWidth: '980px', margin: '0 auto', padding: 0
}}>
<h2 className="text-lg font-medium mb-3">{o.label}</h2>
<ReactMarkdown remarkPlugins={[remarkGfm]}>
{md}
</ReactMarkdown>
</article>
</div>
</TabsContent>
)
})}
</Tabs>
</div>
)
}
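For reference, a minimal sketch of the unit conventions the table above applies (added commentary; the helper name is illustrative): statement items arrive in CNY and are shown in 亿 (100M CNY), while total_mv arrives in 万元 (10k CNY).

// Illustrative only — mirrors the cell-scaling logic above.
const toYi = (key: string, raw: number): number =>
  key === 'total_mv'
    ? raw / 10000 // total_mv is in 万元: 10,000 万元 = 1 亿
    : raw / 1e8   // statement items are in CNY: 1e8 CNY = 1 亿

// toYi('revenue', 123456789) ≈ 1.23 (亿); toYi('total_mv', 25000) === 2.5 (亿)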

View File

@ -1,48 +1,60 @@
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from "@/components/ui/card";
import { Badge } from "@/components/ui/badge";
import Link from 'next/link'
import { headers } from 'next/headers'
export default function ReportsPage() {
return (
<div className="space-y-6">
<header className="space-y-2">
<h1 className="text-2xl font-semibold"></h1>
<p className="text-sm text-muted-foreground"></p>
</header>
<div className="grid gap-4 sm:grid-cols-2 lg:grid-cols-3">
<Card>
<CardHeader>
<CardTitle></CardTitle>
<CardDescription></CardDescription>
</CardHeader>
<CardContent className="space-x-2">
<Badge variant="outline"></Badge>
<Badge variant="secondary"></Badge>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle></CardTitle>
<CardDescription></CardDescription>
</CardHeader>
<CardContent className="space-x-2">
<Badge variant="outline"></Badge>
<Badge variant="secondary"></Badge>
</CardContent>
</Card>
<Card>
<CardHeader>
<CardTitle></CardTitle>
<CardDescription></CardDescription>
</CardHeader>
<CardContent className="space-x-2">
<Badge variant="outline"></Badge>
<Badge variant="secondary"></Badge>
</CardContent>
</Card>
</div>
</div>
);
async function fetchReports(baseUrl: string) {
const url = `${baseUrl}/api/reports?limit=50`
const resp = await fetch(url, { cache: 'no-store' })
if (!resp.ok) {
return { items: [], total: 0 }
}
return resp.json() as Promise<{ items: Array<{ id: string; symbol: string; createdAt: string; content?: any }>; total: number }>
}
export default async function ReportsPage() {
const h = await headers()
const host = h.get('x-forwarded-host') || h.get('host') || 'localhost:3000'
const proto = h.get('x-forwarded-proto') || 'http'
const base = process.env.NEXT_PUBLIC_BASE_URL || `${proto}://${host}`
const { items, total } = await fetchReports(base)
return (
<div className="space-y-4">
<div className="flex items-center justify-between">
<h1 className="text-2xl font-semibold"></h1>
<div className="text-sm text-muted-foreground"> {total} </div>
</div>
{items.length === 0 ? (
<p className="text-sm text-muted-foreground"></p>
) : (
<div className="overflow-x-auto border rounded-md">
<table className="min-w-full text-sm">
<thead>
<tr className="bg-muted">
<th className="text-left p-3"></th>
<th className="text-left p-3"></th>
<th className="text-left p-3"></th>
<th className="text-right p-3"></th>
</tr>
</thead>
<tbody>
{items.map((r) => {
const name = (r as any)?.content?.financials?.name || (r as any)?.content?.company_name || ''
return (
<tr key={r.id} className="border-t hover:bg-muted/50">
<td className="p-3 font-medium">{r.symbol}</td>
<td className="p-3">{name || <span className="text-muted-foreground">-</span>}</td>
<td className="p-3">{new Date(r.createdAt).toLocaleString()}</td>
<td className="p-3 text-right">
<Link href={`/reports/${r.id}`} className="text-primary hover:underline"></Link>
</td>
</tr>
)
})}
</tbody>
</table>
</div>
)}
</div>
)
}

View File

@ -1,6 +1,6 @@
import useSWR from 'swr';
import { useConfigStore } from '@/stores/useConfigStore';
import { BatchFinancialDataResponse, FinancialConfigResponse, AnalysisConfigResponse } from '@/types';
import { BatchFinancialDataResponse, FinancialConfigResponse, AnalysisConfigResponse, TodaySnapshotResponse, RealTimeQuoteResponse } from '@/types';
const fetcher = async (url: string) => {
const res = await fetch(url);
@ -63,7 +63,7 @@ export function useFinancialConfig() {
export function useChinaFinancials(ts_code?: string, years: number = 10) {
return useSWR<BatchFinancialDataResponse>(
ts_code ? `/api/financials/china/${encodeURIComponent(ts_code)}?years=${encodeURIComponent(String(years))}` : null,
ts_code ? `/api/financials/cn/${encodeURIComponent(ts_code)}?years=${encodeURIComponent(String(years))}` : null,
fetcher,
{
revalidateOnFocus: false, // do not revalidate on window focus
@ -74,6 +74,28 @@ export function useChinaFinancials(ts_code?: string, years: number = 10) {
);
}
export function useFinancials(market?: string, stockCode?: string, years: number = 10) {
const normalizeMarket = (m?: string) => {
const t = (m || '').toLowerCase();
if (t === 'usa') return 'us';
if (t === 'china') return 'cn';
if (t === 'hkex') return 'hk';
if (t === 'jpn') return 'jp';
return t;
};
const mkt = normalizeMarket(market);
return useSWR<BatchFinancialDataResponse>(
mkt && stockCode ? `/api/financials/${encodeURIComponent(mkt)}/${encodeURIComponent(stockCode)}?years=${encodeURIComponent(String(years))}` : null,
fetcher,
{
revalidateOnFocus: false,
revalidateOnReconnect: false,
dedupingInterval: 300000,
errorRetryCount: 1,
}
);
}
export function useAnalysisConfig() {
return useSWR<AnalysisConfigResponse>('/api/financials/analysis-config', fetcher);
}
@ -111,3 +133,71 @@ export async function generateFullAnalysis(tsCode: string, companyName: string)
throw new Error('Invalid JSON response from server.');
}
}
export function useChinaSnapshot(ts_code?: string) {
return useSWR<TodaySnapshotResponse>(
ts_code ? `/api/financials/china/${encodeURIComponent(ts_code)}/snapshot` : null,
fetcher,
{
revalidateOnFocus: false,
revalidateOnReconnect: false,
dedupingInterval: 120000, // 2 minutes
errorRetryCount: 1,
}
);
}
export function useSnapshot(market?: string, stockCode?: string) {
const normalizeMarket = (m?: string) => {
const t = (m || '').toLowerCase();
if (t === 'usa') return 'us';
if (t === 'china') return 'cn';
if (t === 'hkex') return 'hk';
if (t === 'jpn') return 'jp';
return t;
};
const mkt = normalizeMarket(market);
return useSWR<TodaySnapshotResponse>(
mkt && stockCode ? `/api/financials/${encodeURIComponent(mkt)}/${encodeURIComponent(stockCode)}/snapshot` : null,
fetcher,
{
revalidateOnFocus: false,
revalidateOnReconnect: false,
dedupingInterval: 120000,
errorRetryCount: 1,
}
);
}
export function useRealtimeQuote(
market?: string,
stockCode?: string,
options?: {
maxAgeSeconds?: number;
refreshIntervalMs?: number;
}
) {
const normalizeMarket = (m?: string) => {
const t = (m || '').toLowerCase();
if (t === 'usa') return 'us';
if (t === 'china') return 'cn';
if (t === 'hkex') return 'hk';
if (t === 'jpn') return 'jp';
return t;
};
const mkt = normalizeMarket(market);
const maxAge = options?.maxAgeSeconds ?? 30;
const refreshMs = options?.refreshIntervalMs ?? 5000;
return useSWR<RealTimeQuoteResponse>(
mkt && stockCode ? `/api/financials/${encodeURIComponent(mkt)}/${encodeURIComponent(stockCode)}/realtime?max_age_seconds=${encodeURIComponent(String(maxAge))}` : null,
fetcher,
{
revalidateOnFocus: false,
revalidateOnReconnect: false,
refreshInterval: refreshMs,
dedupingInterval: Math.min(1000, refreshMs),
shouldRetryOnError: false,
errorRetryCount: 0,
}
);
}
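A hedged usage sketch for the hook above (the component and its markup are illustrative, not part of this change):

// Illustrative only: polls the cached quote every 5s with a 30s staleness cap.
function QuoteBadge({ market, symbol }: { market: string; symbol: string }) {
  const { data, error } = useRealtimeQuote(market, symbol, {
    maxAgeSeconds: 30,
    refreshIntervalMs: 5000,
  })
  // Strict TTL: render nothing rather than a stale or errored quote.
  if (error || !data) return null
  return (
    <span>
      {data.price}{' '}
      <time dateTime={data.ts}>{new Date(data.ts).toLocaleTimeString()}</time>
    </span>
  )
}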

View File

@ -324,3 +324,24 @@ export function safeSetToStorage(key: string, value: unknown): boolean {
return false;
}
}
export const formatReportPeriod = (period: string): string => {
if (!period || period.length !== 8) {
return period;
}
const year = period.substring(0, 4);
const monthDay = period.substring(4);
switch (monthDay) {
case '1231':
return `${year}A`;
case '0930':
return `${year}Q3`;
case '0630':
return `${year}Q2`;
case '0331':
return `${year}Q1`;
default:
return period;
}
};
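By way of example (added commentary), the helper maps period strings as follows:

formatReportPeriod('20241231') // => '2024A'  (annual report)
formatReportPeriod('20250930') // => '2025Q3' (Q3 interim report)
formatReportPeriod('2024')     // => '2024'   (length !== 8, returned unchanged)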

View File

@ -0,0 +1,44 @@
import { PrismaClient } from '@prisma/client'
import fs from 'node:fs'
import path from 'node:path'
const globalForPrisma = global as unknown as { prisma?: PrismaClient }
function loadDatabaseUrlFromConfig(): string | undefined {
try {
const configPath = path.resolve(process.cwd(), '..', 'config', 'config.json')
const raw = fs.readFileSync(configPath, 'utf-8')
const json = JSON.parse(raw)
const dbUrl: unknown = json?.database?.url
if (typeof dbUrl !== 'string' || !dbUrl) return undefined
// Convert the backend-style "postgresql+asyncpg://" prefix into the "postgresql://" Prisma expects
let url = dbUrl.replace(/^postgresql\+[^:]+:\/\//, 'postgresql://')
// Default to schema=public when no schema is specified
if (!/[?&]schema=/.test(url)) {
url += (url.includes('?') ? '&' : '?') + 'schema=public'
}
return url
} catch {
return undefined
}
}
const databaseUrl = loadDatabaseUrlFromConfig() || process.env.DATABASE_URL
export const prisma =
globalForPrisma.prisma ||
new PrismaClient({
datasources: databaseUrl ? { db: { url: databaseUrl } } : undefined,
log: ['error', 'warn']
})
if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma
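A quick sketch of the URL normalization (illustrative values, added commentary):

// 'postgresql+asyncpg://user:pass@db:5432/app'
//   -> 'postgresql://user:pass@db:5432/app?schema=public'
// The async driver suffix is stripped for Prisma, and schema=public is
// appended whenever no schema parameter is present.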

View File

@ -42,15 +42,13 @@ export interface CompanySuggestion {
// ============================================================================
/**
 * A single point in a financial time series.
 */
export interface YearDataPoint {
/** Year */
year: string;
export interface PeriodDataPoint {
/** Reporting period (YYYYMMDD format, e.g. 20241231, 20250930) */
period: string;
/** Value (null means no data) */
value: number | null;
/** Month, used to determine the quarter */
month?: number | null;
}
/**
@ -81,7 +79,7 @@ export interface FinancialMetricConfig {
 * Financial data series, keyed by metric.
*/
export interface FinancialDataSeries {
[metricKey: string]: YearDataPoint[];
[metricKey: string]: PeriodDataPoint[];
}
/**
@ -197,6 +195,38 @@ export interface AnalysisConfigResponse {
}>;
}
/**
 * Daily snapshot response (close / PE / PB / dividend yield / market cap for the latest trade date).
 */
export interface TodaySnapshotResponse {
ts_code: string;
trade_date: string; // YYYYMMDD
name?: string;
close?: number | null;
pe?: number | null;
pb?: number | null;
dv_ratio?: number | null; // %
total_mv?: number | null; // market cap in 万元 (10k CNY)
}
/**
 * Real-time quote (served from cache with a strict TTL; no fallback when stale).
*/
export interface RealTimeQuoteResponse {
symbol: string;
market: string;
ts: string; // ISO8601
price: number;
open_price?: number | null;
high_price?: number | null;
low_price?: number | null;
prev_close?: number | null;
change?: number | null;
change_percent?: number | null;
volume?: number | null;
source?: string | null;
}
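// A hedged example payload (added commentary; all values invented for illustration):
//
// const exampleQuote: RealTimeQuoteResponse = {
//   symbol: '600519.SH', // illustrative
//   market: 'cn',
//   ts: '2025-11-09T05:12:14+08:00',
//   price: 1520.5,
//   prev_close: 1500.0,
//   change: 20.5,
//   change_percent: 1.37, // ≈ 20.5 / 1500 * 100
//   volume: 1234567,
//   source: 'cache', // hypothetical source tag
// };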
// ============================================================================
// Table-related types
// ============================================================================

package-lock.json (generated)
View File

@ -1,83 +0,0 @@
{
"name": "Fundamental_Analysis",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"dependencies": {
"swr": "^2.3.6",
"zustand": "^5.0.8"
}
},
"node_modules/dequal": {
"version": "2.0.3",
"resolved": "https://registry.npmjs.org/dequal/-/dequal-2.0.3.tgz",
"integrity": "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA==",
"license": "MIT",
"engines": {
"node": ">=6"
}
},
"node_modules/react": {
"version": "19.2.0",
"resolved": "https://registry.npmjs.org/react/-/react-19.2.0.tgz",
"integrity": "sha512-tmbWg6W31tQLeB5cdIBOicJDJRR2KzXsV7uSK9iNfLWQ5bIZfxuPEHp7M8wiHyHnn0DD1i7w3Zmin0FtkrwoCQ==",
"license": "MIT",
"peer": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/swr": {
"version": "2.3.6",
"resolved": "https://registry.npmjs.org/swr/-/swr-2.3.6.tgz",
"integrity": "sha512-wfHRmHWk/isGNMwlLGlZX5Gzz/uTgo0o2IRuTMcf4CPuPFJZlq0rDaKUx+ozB5nBOReNV1kiOyzMfj+MBMikLw==",
"license": "MIT",
"dependencies": {
"dequal": "^2.0.3",
"use-sync-external-store": "^1.4.0"
},
"peerDependencies": {
"react": "^16.11.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/use-sync-external-store": {
"version": "1.6.0",
"resolved": "https://registry.npmjs.org/use-sync-external-store/-/use-sync-external-store-1.6.0.tgz",
"integrity": "sha512-Pp6GSwGP/NrPIrxVFAIkOQeyw8lFenOHijQWkUTrDvrF4ALqylP2C/KCkeS9dpUM3KvYRQhna5vt7IL95+ZQ9w==",
"license": "MIT",
"peerDependencies": {
"react": "^16.8.0 || ^17.0.0 || ^18.0.0 || ^19.0.0"
}
},
"node_modules/zustand": {
"version": "5.0.8",
"resolved": "https://registry.npmjs.org/zustand/-/zustand-5.0.8.tgz",
"integrity": "sha512-gyPKpIaxY9XcO2vSMrLbiER7QMAMGOQZVRdJ6Zi782jkbzZygq5GI9nG8g+sMgitRtndwaBSl7uiqC49o1SSiw==",
"license": "MIT",
"engines": {
"node": ">=12.20.0"
},
"peerDependencies": {
"@types/react": ">=18.0.0",
"immer": ">=9.0.6",
"react": ">=18.0.0",
"use-sync-external-store": ">=1.2.0"
},
"peerDependenciesMeta": {
"@types/react": {
"optional": true
},
"immer": {
"optional": true
},
"react": {
"optional": true
},
"use-sync-external-store": {
"optional": true
}
}
}
}
}

View File

@ -1,6 +0,0 @@
{
"dependencies": {
"swr": "^2.3.6",
"zustand": "^5.0.8"
}
}

dev.py → scripts/dev.py (Executable file → Normal file)
View File

@ -108,7 +108,8 @@ def main():
parser.add_argument("--backend-app", default=os.getenv("BACKEND_APP", "main:app"), help="Uvicorn app path, e.g. main:app")
args = parser.parse_args()
repo_root = Path(__file__).resolve().parent
# scripts/dev.py -> repository root
repo_root = Path(__file__).resolve().parents[1]
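# (added note) parents[1] is the repository root because this file now lives
# at <repo>/scripts/dev.py; parents[0] would be <repo>/scripts.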
backend_dir = repo_root / "backend"
frontend_dir = repo_root / "frontend"
@ -204,3 +205,5 @@ def main():
if __name__ == "__main__":
main()

View File

@ -13,14 +13,19 @@ BACKEND_DIR="$REPO_ROOT/backend"
FRONTEND_DIR="$REPO_ROOT/frontend"
CONFIG_FILE="$REPO_ROOT/config/config.json"
# Guard to ensure cleanup runs only once
__CLEANED_UP=0
# Port configuration
BACKEND_PORT=8000
FRONTEND_PORT=3000
FRONTEND_PORT=3001
# Kill process using specified port
kill_port() {
local port=$1
echo -e "${YELLOW}[DEBUG]${RESET} Checking port $port..."
local pids=$(lsof -nP -ti tcp:"$port" 2>/dev/null || true)
echo -e "${YELLOW}[DEBUG]${RESET} Done checking port $port. PIDs: '$pids'"
if [[ -n "$pids" ]]; then
echo -e "${YELLOW}[CLEANUP]${RESET} Killing process(es) using port $port: $pids"
echo "$pids" | xargs kill -9 2>/dev/null || true
@ -34,9 +39,23 @@ ensure_backend() {
echo -e "${YELLOW}[SETUP]${RESET} Creating Python venv and installing backend requirements..."
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
# Upgrade pip first
pip install --upgrade pip --timeout 100 -i https://pypi.tuna.tsinghua.edu.cn/simple || \
pip install --upgrade pip --timeout 100
# Install requirements with timeout and mirror
pip install -r requirements.txt --timeout 300 -i https://pypi.tuna.tsinghua.edu.cn/simple || \
pip install -r requirements.txt --timeout 300
else
source .venv/bin/activate
# Upgrade pip if needed
pip install --upgrade pip --timeout 100 -i https://pypi.tuna.tsinghua.edu.cn/simple 2>/dev/null || \
pip install --upgrade pip --timeout 100 2>/dev/null || true
# Check if key dependencies are installed
if ! python -c "import uvicorn" 2>/dev/null; then
echo -e "${YELLOW}[SETUP]${RESET} Installing missing backend requirements..."
pip install -r requirements.txt --timeout 300 -i https://pypi.tuna.tsinghua.edu.cn/simple || \
pip install -r requirements.txt --timeout 300
fi
fi
# Export TUSHARE_TOKEN from config if available (prefer jq, fallback to node)
@ -56,8 +75,10 @@ run_backend() {
ensure_backend
cd "$BACKEND_DIR"
# Run and colorize output (avoid stdbuf on macOS)
UVICORN_CMD=(uvicorn app.main:app --reload --port "$BACKEND_PORT")
"${UVICORN_CMD[@]}" 2>&1 | awk -v p="[BACKEND]" -v color="$GREEN" -v reset="$RESET" '{print color p " " $0 reset}'
UVICORN_CMD=(uvicorn app.main:app --reload --port "$BACKEND_PORT" --log-level info)
"${UVICORN_CMD[@]}" 2>&1 | while IFS= read -r line; do
printf "%b[%s] [BACKEND] %s%b\n" "$GREEN" "$(date '+%Y-%m-%d %H:%M:%S')" "$line" "$RESET"
done
}
ensure_frontend() {
@ -71,27 +92,70 @@ ensure_frontend() {
run_frontend() {
ensure_frontend
cd "$FRONTEND_DIR"
npm run dev 2>&1 | awk -v p="[FRONTEND]" -v color="$CYAN" -v reset="$RESET" '{print color p " " $0 reset}'
npm run dev 2>&1 | while IFS= read -r line; do
printf "%b[%s] [FRONTEND] %s%b\n" "$CYAN" "$(date '+%Y-%m-%d %H:%M:%S')" "$line" "$RESET"
done
}
# Recursively kill a process tree (children first), with optional signal (default TERM)
kill_tree() {
local pid="$1"
local signal="${2:-TERM}"
if [[ -z "${pid:-}" ]]; then
return
fi
# Kill children first
local children
children=$(pgrep -P "$pid" 2>/dev/null || true)
if [[ -n "${children:-}" ]]; then
for child in $children; do
kill_tree "$child" "$signal"
done
fi
# Then the parent
kill -"$signal" "$pid" 2>/dev/null || true
}
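# Usage sketch (added commentary): kill_tree "$BACKEND_PID" TERM signals the
# tree depth-first — grandchildren, then children, then the parent — so a
# supervising process is unlikely to respawn children mid-shutdown.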
cleanup() {
# Ensure this runs only once even if multiple signals (INT/TERM/EXIT) arrive
if [[ $__CLEANED_UP -eq 1 ]]; then
return
fi
__CLEANED_UP=1
echo -e "\n${YELLOW}[CLEANUP]${RESET} Stopping services..."
# Kill process groups to ensure all child processes are terminated
# Gracefully stop trees for backend and frontend, then escalate if needed
if [[ -n "${BACKEND_PID:-}" ]]; then
kill -TERM -"$BACKEND_PID" 2>/dev/null || kill "$BACKEND_PID" 2>/dev/null || true
kill_tree "$BACKEND_PID" TERM
fi
if [[ -n "${FRONTEND_PID:-}" ]]; then
kill -TERM -"$FRONTEND_PID" 2>/dev/null || kill "$FRONTEND_PID" 2>/dev/null || true
kill_tree "$FRONTEND_PID" TERM
fi
sleep 1
# Wait up to ~3s for graceful shutdown
for _ in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
local backend_alive=0 frontend_alive=0
if [[ -n "${BACKEND_PID:-}" ]] && kill -0 "$BACKEND_PID" 2>/dev/null; then backend_alive=1; fi
if [[ -n "${FRONTEND_PID:-}" ]] && kill -0 "$FRONTEND_PID" 2>/dev/null; then frontend_alive=1; fi
if [[ $backend_alive -eq 0 && $frontend_alive -eq 0 ]]; then
break
fi
sleep 0.2
done
# Force kill any remaining processes on these ports
# Escalate to KILL if still alive
if [[ -n "${BACKEND_PID:-}" ]] && kill -0 "$BACKEND_PID" 2>/dev/null; then
kill_tree "$BACKEND_PID" KILL
fi
if [[ -n "${FRONTEND_PID:-}" ]] && kill -0 "$FRONTEND_PID" 2>/dev/null; then
kill_tree "$FRONTEND_PID" KILL
fi
# As a final safeguard, free the ports
kill_port "$BACKEND_PORT"
kill_port "$FRONTEND_PORT"
wait 2>/dev/null || true
echo -e "${GREEN}[CLEANUP]${RESET} All services stopped."
}
@ -102,8 +166,8 @@ main() {
kill_port "$BACKEND_PORT"
kill_port "$FRONTEND_PORT"
echo -e "${GREEN}[BACKEND]${RESET} API: http://127.0.0.1:$BACKEND_PORT"
echo -e "${CYAN}[FRONTEND]${RESET} APP: http://127.0.0.1:$FRONTEND_PORT\n"
echo -e "${GREEN}[$(date '+%Y-%m-%d %H:%M:%S')] [BACKEND]${RESET} API: http://127.0.0.1:$BACKEND_PORT"
echo -e "${CYAN}[$(date '+%Y-%m-%d %H:%M:%S')] [FRONTEND]${RESET} APP: http://127.0.0.1:$FRONTEND_PORT\n"
run_backend & BACKEND_PID=$!
run_frontend & FRONTEND_PID=$!

View File

@ -103,11 +103,29 @@ module.exports = {
env: {
"PYTHONPATH": "."
}
}, {
name: "portwardenc",
cwd: ".",
script: "./portwardenc-amd64",
interpreter: "none",
env: {
"SERVER_ADDR": "http://bastion.3prism.ai:7000",
"SERVICE_ID": "FUNDAMENTAL",
"LOCAL_PORT": "3000"
}
}]
};
EOL
fi
# Check and prepare portwardenc-amd64
if [ -f "portwardenc-amd64" ]; then
echo "Setting execute permission for portwardenc-amd64..."
chmod +x portwardenc-amd64
else
echo "Warning: portwardenc-amd64 file not found. It will be skipped."
fi
# Start processes with pm2
pm2 start pm2.config.js

View File

@ -8,3 +8,5 @@ echo "All pm2 applications stopped."
echo "Deleting all pm2 processes..."
pm2 delete all
echo "All pm2 processes deleted."

View File

@ -1,56 +0,0 @@
"""
Test script: verify via the backend API that tax_to_ebt data for 300750.SZ can be retrieved.
"""
import requests
import json
def test_api():
# Assume the backend is running on its default port
url = "http://localhost:8000/api/financials/china/300750.SZ?years=5"
try:
print(f"正在请求 API: {url}")
response = requests.get(url, timeout=30)
if response.status_code == 200:
data = response.json()
print(f"\n✅ API 请求成功")
print(f"股票代码: {data.get('ts_code')}")
print(f"公司名称: {data.get('name')}")
# Check whether series contains tax_to_ebt
series = data.get('series', {})
if 'tax_to_ebt' in series:
print(f"\n✅ 找到 tax_to_ebt 数据!")
tax_data = series['tax_to_ebt']
print(f"数据条数: {len(tax_data)}")
print(f"\n最近几年的 tax_to_ebt 值:")
for item in tax_data[-5:]:  # show the most recent 5 years
year = item.get('year')
value = item.get('value')
month = item.get('month')
month_str = f"Q{((month or 12) - 1) // 3 + 1}" if month else ""
print(f" {year}{month_str}: {value}")
else:
print(f"\n❌ 未找到 tax_to_ebt 数据")
print(f"可用字段: {list(series.keys())[:20]}...")
# Check for other tax-related fields
tax_keys = [k for k in series.keys() if 'tax' in k.lower()]
if tax_keys:
print(f"\n包含 'tax' 的字段: {tax_keys}")
else:
print(f"❌ API 请求失败: {response.status_code}")
print(f"响应内容: {response.text}")
except requests.exceptions.ConnectionError:
print("❌ 无法连接到后端服务,请确保后端正在运行(例如运行 python dev.py")
except Exception as e:
print(f"❌ 请求出错: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
test_api()

View File

@ -1,122 +0,0 @@
#!/usr/bin/env python3
"""
Functional test script for the configuration page
"""
import asyncio
import json
import sys
import os
# Add the project root to the Python path
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'backend'))
from app.services.config_manager import ConfigManager
from app.schemas.config import ConfigUpdateRequest, DatabaseConfig, GeminiConfig, DataSourceConfig
async def test_config_manager():
"""测试配置管理器功能"""
print("🧪 开始测试配置管理器...")
# A real database session is required here; skip for now
print("⚠️ 需要数据库连接,跳过实际测试")
print("✅ 配置管理器代码结构正确")
def test_config_validation():
"""测试配置验证功能"""
print("\n🔍 测试配置验证...")
# Test database URL validation
valid_urls = [
"postgresql://user:pass@host:port/db",
"postgresql+asyncpg://user:pass@host:port/db"
]
invalid_urls = [
"mysql://user:pass@host:port/db",
"invalid-url",
""
]
for url in valid_urls:
if url.startswith(("postgresql://", "postgresql+asyncpg://")):
print(f"✅ 有效URL: {url}")
else:
print(f"❌ 应该有效但被拒绝: {url}")
for url in invalid_urls:
if not url.startswith(("postgresql://", "postgresql+asyncpg://")):
print(f"✅ 无效URL正确被拒绝: {url}")
else:
print(f"❌ 应该无效但被接受: {url}")
def test_api_key_validation():
"""测试API Key验证"""
print("\n🔑 测试API Key验证...")
valid_keys = ["1234567890", "abcdefghijklmnop"]
invalid_keys = ["123", "short", ""]
for key in valid_keys:
if len(key) >= 10:
print(f"✅ 有效API Key: {key[:10]}...")
else:
print(f"❌ 应该有效但被拒绝: {key}")
for key in invalid_keys:
if len(key) < 10:
print(f"✅ 无效API Key正确被拒绝: {key}")
else:
print(f"❌ 应该无效但被接受: {key}")
def test_config_export_import():
"""测试配置导入导出功能"""
print("\n📤 测试配置导入导出...")
# Mock configuration data
config_data = {
"database": {"url": "postgresql://test:test@localhost:5432/test"},
"gemini_api": {"api_key": "test_key_1234567890", "base_url": "https://api.example.com"},
"data_sources": {
"tushare": {"api_key": "tushare_key_1234567890"},
"finnhub": {"api_key": "finnhub_key_1234567890"}
}
}
# Test JSON serialization
try:
json_str = json.dumps(config_data, indent=2)
parsed = json.loads(json_str)
print("✅ 配置JSON序列化/反序列化正常")
# Validate required fields
required_fields = ["database", "gemini_api", "data_sources"]
for field in required_fields:
if field in parsed:
print(f"✅ 包含必需字段: {field}")
else:
print(f"❌ 缺少必需字段: {field}")
except Exception as e:
print(f"❌ JSON处理失败: {e}")
def main():
"""主测试函数"""
print("🚀 配置页面功能测试")
print("=" * 50)
test_config_validation()
test_api_key_validation()
test_config_export_import()
print("\n" + "=" * 50)
print("✅ 所有测试完成!")
print("\n📋 测试总结:")
print("• 配置验证逻辑正确")
print("• API Key验证工作正常")
print("• 配置导入导出功能正常")
print("• 前端UI组件已创建")
print("• 后端API接口已实现")
print("• 错误处理机制已添加")
if __name__ == "__main__":
main()

View File

@ -1,82 +0,0 @@
#!/usr/bin/env python3
"""
Test retrieval of employee-count data
"""
import asyncio
import sys
import os
import json
# Add the project root to the Python path
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'backend'))
from app.services.tushare_client import TushareClient
async def test_employees_data():
"""测试获取员工数数据"""
print("🧪 测试员工数数据获取...")
print("=" * 50)
# Read the token from the environment or from the config file
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
config_path = os.path.join(base_dir, 'config', 'config.json')
token = os.environ.get('TUSHARE_TOKEN')
if not token and os.path.exists(config_path):
with open(config_path, 'r', encoding='utf-8') as f:
config = json.load(f)
token = config.get('data_sources', {}).get('tushare', {}).get('api_key')
if not token:
print("❌ 未找到 Tushare token")
print("请设置环境变量 TUSHARE_TOKEN 或在 config/config.json 中配置")
return
print(f"✅ Token 已加载: {token[:10]}...")
# Test stock code
test_ts_code = "000001.SZ"  # Ping An Bank
async with TushareClient(token=token) as client:
try:
print(f"\n📊 查询股票: {test_ts_code}")
print("调用 stock_company API...")
# Call the stock_company API
data = await client.query(
api_name="stock_company",
params={"ts_code": test_ts_code, "limit": 10}
)
if data:
print(f"✅ 成功获取 {len(data)} 条记录")
print("\n返回的数据字段:")
if data:
for key in data[0].keys():
print(f" - {key}")
print("\n员工数相关字段:")
for row in data:
if 'employees' in row:
print(f" ✅ employees: {row.get('employees')}")
if 'employee' in row:
print(f" ✅ employee: {row.get('employee')}")
print("\n完整数据示例:")
print(json.dumps(data[0], indent=2, ensure_ascii=False))
else:
print("⚠️ 未返回数据")
except Exception as e:
print(f"❌ 错误: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
print("🚀 开始测试员工数数据获取功能\n")
asyncio.run(test_employees_data())
print("\n" + "=" * 50)
print("✅ 测试完成")

View File

@ -1,104 +0,0 @@
#!/usr/bin/env python3
"""
Test retrieval of shareholder-count data
"""
import asyncio
import sys
import os
import json
from datetime import datetime, timedelta
# Add the project root to the Python path
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'backend'))
from app.services.tushare_client import TushareClient
async def test_holder_number_data():
"""测试获取股东数数据"""
print("🧪 测试股东数数据获取...")
print("=" * 50)
# Read the token from the environment or from the config file
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
config_path = os.path.join(base_dir, 'config', 'config.json')
token = os.environ.get('TUSHARE_TOKEN')
if not token and os.path.exists(config_path):
with open(config_path, 'r', encoding='utf-8') as f:
config = json.load(f)
token = config.get('data_sources', {}).get('tushare', {}).get('api_key')
if not token:
print("❌ 未找到 Tushare token")
print("请设置环境变量 TUSHARE_TOKEN 或在 config/config.json 中配置")
return
print(f"✅ Token 已加载: {token[:10]}...")
# Test stock code
test_ts_code = "000001.SZ"  # Ping An Bank
years = 5  # query the most recent 5 years of data
# Compute the date range
end_date = datetime.now().strftime("%Y%m%d")
start_date = (datetime.now() - timedelta(days=years * 365)).strftime("%Y%m%d")
async with TushareClient(token=token) as client:
try:
print(f"\n📊 查询股票: {test_ts_code}")
print(f"📅 日期范围: {start_date}{end_date}")
print("调用 stk_holdernumber API...")
# Call the stk_holdernumber API
data = await client.query(
api_name="stk_holdernumber",
params={
"ts_code": test_ts_code,
"start_date": start_date,
"end_date": end_date,
"limit": 5000
}
)
if data:
print(f"✅ 成功获取 {len(data)} 条记录")
print("\n返回的数据字段:")
if data:
for key in data[0].keys():
print(f" - {key}")
print("\n股东数数据:")
print("-" * 60)
for row in data[:10]:  # show only the first 10 records
end_date_val = row.get('end_date', 'N/A')
holder_num = row.get('holder_num', 'N/A')
print(f" 日期: {end_date_val}, 股东数: {holder_num}")
if len(data) > 10:
print(f" ... 还有 {len(data) - 10} 条记录")
print("\n完整数据示例(第一条):")
print(json.dumps(data[0], indent=2, ensure_ascii=False))
# Check for the holder_num field
if data and 'holder_num' in data[0]:
print("\n✅ 成功获取 holder_num 字段数据")
else:
print("\n⚠️ 未找到 holder_num 字段")
else:
print("⚠️ 未返回数据")
except Exception as e:
print(f"❌ 错误: {e}")
import traceback
traceback.print_exc()
if __name__ == "__main__":
print("🚀 开始测试股东数数据获取功能\n")
asyncio.run(test_holder_number_data())
print("\n" + "=" * 50)
print("✅ 测试完成")

View File

@ -1,115 +0,0 @@
#!/usr/bin/env python3
"""
Test the shareholder-count data processing logic
"""
import asyncio
import sys
import os
import json
from datetime import datetime, timedelta
# Add the project root to the Python path
sys.path.append(os.path.join(os.path.dirname(__file__), '..', 'backend'))
from app.services.tushare_client import TushareClient
async def test_holder_num_processing():
"""测试股东数数据处理逻辑"""
print("🧪 测试股东数数据处理逻辑...")
print("=" * 50)
# Read the token from the environment or from the config file
base_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..'))
config_path = os.path.join(base_dir, 'config', 'config.json')
token = os.environ.get('TUSHARE_TOKEN')
if not token and os.path.exists(config_path):
with open(config_path, 'r', encoding='utf-8') as f:
config = json.load(f)
token = config.get('data_sources', {}).get('tushare', {}).get('api_key')
if not token:
print("❌ 未找到 Tushare token")
return
ts_code = '000001.SZ'
years = 5
async with TushareClient(token=token) as client:
# Simulate the backend processing logic
end_date = datetime.now().strftime('%Y%m%d')
start_date = (datetime.now() - timedelta(days=years * 365)).strftime('%Y%m%d')
print(f"📊 查询股票: {ts_code}")
print(f"📅 日期范围: {start_date}{end_date}")
data_rows = await client.query(
api_name='stk_holdernumber',
params={'ts_code': ts_code, 'start_date': start_date, 'end_date': end_date, 'limit': 5000}
)
print(f'\n✅ 获取到 {len(data_rows)} 条原始数据')
if data_rows:
print('\n原始数据示例前3条:')
for i, row in enumerate(data_rows[:3]):
print(f"{i+1}条: {json.dumps(row, indent=4, ensure_ascii=False)}")
# Simulate the backend processing logic
series = {}
tmp = {}
date_field = 'end_date'
print('\n📝 开始处理数据...')
for row in data_rows:
date_val = row.get(date_field)
if not date_val:
print(f" ⚠️ 跳过无日期字段的行: {row}")
continue
year = str(date_val)[:4]
month = int(str(date_val)[4:6]) if len(str(date_val)) >= 6 else None
existing = tmp.get(year)
if existing is None or str(row.get(date_field)) > str(existing.get(date_field)):
tmp[year] = row
tmp[year]['_month'] = month
print(f'\n✅ 处理后共有 {len(tmp)} 个年份的数据')
print('按年份分组的数据:')
for year, row in sorted(tmp.items(), key=lambda x: x[0], reverse=True):
print(f" {year}: holder_num={row.get('holder_num')}, end_date={row.get('end_date')}")
# Extract the holder_num field
key = 'holder_num'
for year, row in tmp.items():
month = row.get('_month')
value = row.get(key)
arr = series.setdefault(key, [])
arr.append({'year': year, 'value': value, 'month': month})
print('\n📊 提取后的 series 数据:')
print(json.dumps(series, indent=2, ensure_ascii=False))
# Sort (mirrors the backend logic)
for key, arr in series.items():
uniq = {item['year']: item for item in arr}
arr_sorted_desc = sorted(uniq.values(), key=lambda x: x['year'], reverse=True)
arr_limited = arr_sorted_desc[:years]
arr_sorted = sorted(arr_limited, key=lambda x: x['year']) # ascending
series[key] = arr_sorted
print('\n✅ 最终排序后的数据(按年份升序):')
print(json.dumps(series, indent=2, ensure_ascii=False))
# Verify the year format
print('\n🔍 验证年份格式:')
for item in series.get('holder_num', []):
year_str = item.get('year')
print(f" 年份: '{year_str}' (类型: {type(year_str).__name__}, 长度: {len(str(year_str))})")
if __name__ == "__main__":
asyncio.run(test_holder_num_processing())

View File

@ -1,110 +0,0 @@
"""
Test script: check whether tax_to_ebt data for 300750.SZ can be retrieved
"""
import asyncio
import sys
import os
import json
# Add the backend directory to the Python path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "backend"))
from app.services.tushare_client import TushareClient
async def test_tax_to_ebt():
# Read the config to obtain the token
config_path = os.path.join(os.path.dirname(__file__), "..", "config", "config.json")
with open(config_path, "r", encoding="utf-8") as f:
config = json.load(f)
token = config.get("data_sources", {}).get("tushare", {}).get("api_key")
if not token:
print("错误:未找到 Tushare token")
return
client = TushareClient(token=token)
ts_code = "300750.SZ"
try:
print(f"正在查询 {ts_code} 的财务指标数据...")
# First try without the fields parameter (fetch all fields)
print("\n=== 测试1: 不指定 fields 参数 ===")
data = await client.query(
api_name="fina_indicator",
params={"ts_code": ts_code, "limit": 10}
)
# Then try with fields specified explicitly (including tax_to_ebt)
print("\n=== 测试2: 明确指定 fields 参数(包含 tax_to_ebt ===")
data_with_fields = await client.query(
api_name="fina_indicator",
params={"ts_code": ts_code, "limit": 10},
fields="ts_code,ann_date,end_date,tax_to_ebt,roe,roa"
)
print(f"\n获取到 {len(data)} 条记录")
if data:
# Inspect the fields of the first record
first_record = data[0]
print(f"\n第一条记录的字段:")
print(f" ts_code: {first_record.get('ts_code')}")
print(f" end_date: {first_record.get('end_date')}")
print(f" ann_date: {first_record.get('ann_date')}")
# Check for the tax_to_ebt field
if 'tax_to_ebt' in first_record:
tax_value = first_record.get('tax_to_ebt')
print(f"\n✅ 找到 tax_to_ebt 字段!")
print(f" tax_to_ebt 值: {tax_value}")
print(f" tax_to_ebt 类型: {type(tax_value)}")
else:
print(f"\n❌ 未找到 tax_to_ebt 字段")
print(f"可用字段列表: {list(first_record.keys())[:20]}...") # 只显示前20个字段
# Print all fields containing 'tax'
tax_fields = [k for k in first_record.keys() if 'tax' in k.lower()]
if tax_fields:
print(f"\n包含 'tax' 的字段:")
for field in tax_fields:
print(f" {field}: {first_record.get(field)}")
# Show tax_to_ebt values from the most recent records
print(f"\n最近几条记录的 tax_to_ebt 值测试1:")
for i, record in enumerate(data[:5]):
end_date = record.get('end_date', 'N/A')
tax_value = record.get('tax_to_ebt', 'N/A')
print(f" {i+1}. {end_date}: tax_to_ebt = {tax_value}")
else:
print("❌ 未获取到任何数据测试1")
# Test 2: check the result with fields specified explicitly
if data_with_fields:
print(f"\n测试2获取到 {len(data_with_fields)} 条记录")
first_record2 = data_with_fields[0]
if 'tax_to_ebt' in first_record2:
print(f"✅ 测试2找到 tax_to_ebt 字段!")
print(f" tax_to_ebt 值: {first_record2.get('tax_to_ebt')}")
else:
print(f"❌ 测试2也未找到 tax_to_ebt 字段")
print(f"可用字段: {list(first_record2.keys())}")
print(f"\n最近几条记录的 tax_to_ebt 值测试2:")
for i, record in enumerate(data_with_fields[:5]):
end_date = record.get('end_date', 'N/A')
tax_value = record.get('tax_to_ebt', 'N/A')
print(f" {i+1}. {end_date}: tax_to_ebt = {tax_value}")
else:
print("❌ 未获取到任何数据测试2")
except Exception as e:
print(f"❌ 查询出错: {e}")
import traceback
traceback.print_exc()
finally:
await client.aclose()
if __name__ == "__main__":
asyncio.run(test_tax_to_ebt())

View File

@ -0,0 +1,21 @@
# syntax=docker/dockerfile:1.6
FROM python:3.11-slim AS base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONUNBUFFERED=1 \
PIP_NO_CACHE_DIR=1 \
PROJECT_ROOT=/workspace
WORKDIR /workspace/services/config-service
COPY services/config-service/requirements.txt ./requirements.txt
RUN pip install --upgrade pip && \
pip install --no-cache-dir -r requirements.txt
# Configuration and source code are provided at runtime via mounted volumes
RUN mkdir -p /workspace/services/config-service
# The default entrypoint is supplied by docker-compose

View File

@ -0,0 +1,64 @@
"""
Config Service - provides read-only access to static configuration files.
"""
from __future__ import annotations
import json
import os
from typing import Any, Dict
from fastapi import FastAPI, HTTPException
from fastapi.middleware.cors import CORSMiddleware
APP_NAME = "config-service"
API_V1 = "/api/v1"
# The project root is mounted at /workspace inside the container
PROJECT_ROOT = os.environ.get("PROJECT_ROOT", "/workspace")
CONFIG_DIR = os.path.join(PROJECT_ROOT, "config")
SYSTEM_CONFIG_PATH = os.path.join(CONFIG_DIR, "config.json")
ANALYSIS_CONFIG_PATH = os.path.join(CONFIG_DIR, "analysis-config.json")
app = FastAPI(title=APP_NAME, version="0.1.0")
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
def _read_json_file(path: str) -> Dict[str, Any]:
if not os.path.exists(path):
raise HTTPException(status_code=404, detail=f"Config file not found: {os.path.basename(path)}")
try:
with open(path, "r", encoding="utf-8") as f:
return json.load(f)
except json.JSONDecodeError as e:
raise HTTPException(status_code=500, detail=f"Malformed config file: {e}") from e
except OSError as e:
raise HTTPException(status_code=500, detail=f"Failed to read config file: {e}") from e
@app.get("/")
async def root() -> Dict[str, Any]:
return {"status": "ok", "name": APP_NAME}
@app.get(f"{API_V1}/system")
async def get_system_config() -> Dict[str, Any]:
"""
Return the base system configuration (raw file contents, without database overrides).
"""
return _read_json_file(SYSTEM_CONFIG_PATH)
@app.get(f"{API_V1}/analysis-modules")
async def get_analysis_modules() -> Dict[str, Any]:
"""
Return the analysis-module configuration (passed through verbatim).
"""
return _read_json_file(ANALYSIS_CONFIG_PATH)
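Editor's note: for orientation, a minimal client sketch for these two read-only endpoints. The base URL is an assumption; the real host and port come from docker-compose or environment configuration:

```python
# Minimal config-service consumer sketch. BASE_URL is a hypothetical address;
# adjust it to your deployment (compose service name / port).
import httpx

BASE_URL = "http://config-service:8000"  # assumption, not from this diff

def fetch_system_config() -> dict:
    # GET /api/v1/system -> raw contents of config/config.json
    resp = httpx.get(f"{BASE_URL}/api/v1/system", timeout=5.0)
    resp.raise_for_status()
    return resp.json()

def fetch_analysis_modules() -> dict:
    # GET /api/v1/analysis-modules -> analysis-config.json passed through verbatim
    resp = httpx.get(f"{BASE_URL}/api/v1/analysis-modules", timeout=5.0)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(sorted(fetch_system_config().keys()))
```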

View File

@ -0,0 +1,3 @@
fastapi==0.115.0
uvicorn[standard]==0.30.6

View File

@ -0,0 +1,6 @@
use data_persistence_service::{
db,
dtos::{CompanyProfileDto, DailyMarketDataDto, NewAnalysisResultDto, TimeSeriesFinancialDto},
models,
};
use sqlx::{postgres::PgPoolOptions, PgPool};

View File

@ -0,0 +1,8 @@
[alias]
# Require `forge-cli` to be installed once: `cargo install service_kit --features api-cli`
# Then `cargo forge ...` will forward args to the installed `forge-cli` binary.
forge = "forge-cli --"
## Note:
## We intentionally avoid local path patches in container builds to ensure reproducibility.
## Use crates.io or git dependencies via Cargo.toml instead.

View File

@ -0,0 +1,18 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO company_profiles (symbol, name, industry, list_date, additional_info, updated_at)\n VALUES ($1, $2, $3, $4, $5, NOW())\n ON CONFLICT (symbol) DO UPDATE SET\n name = EXCLUDED.name,\n industry = EXCLUDED.industry,\n list_date = EXCLUDED.list_date,\n additional_info = EXCLUDED.additional_info,\n updated_at = NOW()\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Varchar",
"Varchar",
"Varchar",
"Date",
"Jsonb"
]
},
"nullable": []
},
"hash": "21a6b3602a199978f87186634866e7bd72a083ebd55985acae1d712434e2ebb6"
}

View File

@ -0,0 +1,95 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT symbol, market, ts, price, open_price, high_price, low_price, prev_close, change, change_percent, volume, source, updated_at\n FROM realtime_quotes\n WHERE symbol = $1 AND market = $2\n ORDER BY ts DESC\n LIMIT 1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 1,
"name": "market",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "ts",
"type_info": "Timestamptz"
},
{
"ordinal": 3,
"name": "price",
"type_info": "Numeric"
},
{
"ordinal": 4,
"name": "open_price",
"type_info": "Numeric"
},
{
"ordinal": 5,
"name": "high_price",
"type_info": "Numeric"
},
{
"ordinal": 6,
"name": "low_price",
"type_info": "Numeric"
},
{
"ordinal": 7,
"name": "prev_close",
"type_info": "Numeric"
},
{
"ordinal": 8,
"name": "change",
"type_info": "Numeric"
},
{
"ordinal": 9,
"name": "change_percent",
"type_info": "Numeric"
},
{
"ordinal": 10,
"name": "volume",
"type_info": "Int8"
},
{
"ordinal": 11,
"name": "source",
"type_info": "Varchar"
},
{
"ordinal": 12,
"name": "updated_at",
"type_info": "Timestamptz"
}
],
"parameters": {
"Left": [
"Text",
"Text"
]
},
"nullable": [
false,
false,
false,
false,
true,
true,
true,
true,
true,
true,
true,
true,
false
]
},
"hash": "242e6f3319cfa0c19b53c4da80993a1da3cb77f58a3c0dac0260bf3adb4e501f"
}

View File

@ -0,0 +1,46 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT symbol, metric_name, period_date, value, source\n FROM time_series_financials\n WHERE symbol = $1\n ORDER BY period_date DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 1,
"name": "metric_name",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "period_date",
"type_info": "Date"
},
{
"ordinal": 3,
"name": "value",
"type_info": "Numeric"
},
{
"ordinal": 4,
"name": "source",
"type_info": "Varchar"
}
],
"parameters": {
"Left": [
"Text"
]
},
"nullable": [
false,
false,
false,
false,
true
]
},
"hash": "4536af5904df2b38a10e801f488cf2bd4176dccf06b0b791284d729f53ab262d"
}

View File

@ -0,0 +1,62 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO analysis_results (symbol, module_id, model_name, content, meta_data)\n VALUES ($1, $2, $3, $4, $5)\n RETURNING id, symbol, module_id, generated_at, model_name, content, meta_data\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Uuid"
},
{
"ordinal": 1,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "module_id",
"type_info": "Varchar"
},
{
"ordinal": 3,
"name": "generated_at",
"type_info": "Timestamptz"
},
{
"ordinal": 4,
"name": "model_name",
"type_info": "Varchar"
},
{
"ordinal": 5,
"name": "content",
"type_info": "Text"
},
{
"ordinal": 6,
"name": "meta_data",
"type_info": "Jsonb"
}
],
"parameters": {
"Left": [
"Varchar",
"Varchar",
"Varchar",
"Text",
"Jsonb"
]
},
"nullable": [
false,
false,
false,
false,
true,
false,
true
]
},
"hash": "47dd5646e6a94d84da1db7e7aa5961ce012cf8467e5b98fc88f073f84ddd7b87"
}

View File

@ -0,0 +1,58 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, symbol, module_id, generated_at, model_name, content, meta_data\n FROM analysis_results\n WHERE symbol = $1\n ORDER BY generated_at DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Uuid"
},
{
"ordinal": 1,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "module_id",
"type_info": "Varchar"
},
{
"ordinal": 3,
"name": "generated_at",
"type_info": "Timestamptz"
},
{
"ordinal": 4,
"name": "model_name",
"type_info": "Varchar"
},
{
"ordinal": 5,
"name": "content",
"type_info": "Text"
},
{
"ordinal": 6,
"name": "meta_data",
"type_info": "Jsonb"
}
],
"parameters": {
"Left": [
"Text"
]
},
"nullable": [
false,
false,
false,
false,
true,
false,
true
]
},
"hash": "5ddfe5e70c62b906ca23de28cd0056fa116a90f932567cefff259e110b6e9b1b"
}

View File

@ -0,0 +1,25 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO realtime_quotes (\n symbol, market, ts, price, open_price, high_price, low_price, prev_close, change, change_percent, volume, source, updated_at\n ) VALUES (\n $1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, NOW()\n )\n ON CONFLICT (symbol, market, ts) DO UPDATE SET\n price = EXCLUDED.price,\n open_price = EXCLUDED.open_price,\n high_price = EXCLUDED.high_price,\n low_price = EXCLUDED.low_price,\n prev_close = EXCLUDED.prev_close,\n change = EXCLUDED.change,\n change_percent = EXCLUDED.change_percent,\n volume = EXCLUDED.volume,\n source = EXCLUDED.source,\n updated_at = NOW()\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Varchar",
"Varchar",
"Timestamptz",
"Numeric",
"Numeric",
"Numeric",
"Numeric",
"Numeric",
"Numeric",
"Numeric",
"Int8",
"Varchar"
]
},
"nullable": []
},
"hash": "79ac63ac22399f0ba64783b87fbca6f7637c0f331c1346211ac5275e51221654"
}

View File

@ -0,0 +1,23 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO daily_market_data (symbol, trade_date, open_price, high_price, low_price, close_price, volume, pe, pb, total_mv)\n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)\n ON CONFLICT (symbol, trade_date) DO UPDATE SET\n open_price = EXCLUDED.open_price,\n high_price = EXCLUDED.high_price,\n low_price = EXCLUDED.low_price,\n close_price = EXCLUDED.close_price,\n volume = EXCLUDED.volume,\n pe = EXCLUDED.pe,\n pb = EXCLUDED.pb,\n total_mv = EXCLUDED.total_mv\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Varchar",
"Date",
"Numeric",
"Numeric",
"Numeric",
"Numeric",
"Int8",
"Numeric",
"Numeric",
"Numeric"
]
},
"nullable": []
},
"hash": "7bc18e5f68bfc1455b7e6e74feacabb79121b6a8008c999852a9fae3a8396789"
}

View File

@ -0,0 +1,47 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT symbol, metric_name, period_date, value, source\n FROM time_series_financials\n WHERE symbol = $1 AND metric_name = ANY($2)\n ORDER BY period_date DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 1,
"name": "metric_name",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "period_date",
"type_info": "Date"
},
{
"ordinal": 3,
"name": "value",
"type_info": "Numeric"
},
{
"ordinal": 4,
"name": "source",
"type_info": "Varchar"
}
],
"parameters": {
"Left": [
"Text",
"TextArray"
]
},
"nullable": [
false,
false,
false,
false,
true
]
},
"hash": "8868e58490b2f11be13c74ae3b1ce71a3f589b61d046815b6e9a7fe67ce94886"
}

View File

@ -0,0 +1,59 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, symbol, module_id, generated_at, model_name, content, meta_data\n FROM analysis_results\n WHERE symbol = $1 AND module_id = $2\n ORDER BY generated_at DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Uuid"
},
{
"ordinal": 1,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "module_id",
"type_info": "Varchar"
},
{
"ordinal": 3,
"name": "generated_at",
"type_info": "Timestamptz"
},
{
"ordinal": 4,
"name": "model_name",
"type_info": "Varchar"
},
{
"ordinal": 5,
"name": "content",
"type_info": "Text"
},
{
"ordinal": 6,
"name": "meta_data",
"type_info": "Jsonb"
}
],
"parameters": {
"Left": [
"Text",
"Text"
]
},
"nullable": [
false,
false,
false,
false,
true,
false,
true
]
},
"hash": "926e80040622e569d7698396e0126fecc648346e67ecae96cb191077737f5ab5"
}

View File

@ -0,0 +1,78 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT symbol, trade_date, open_price, high_price, low_price, close_price, volume, pe, pb, total_mv\n FROM daily_market_data\n WHERE symbol = $1\n AND ($2::DATE IS NULL OR trade_date >= $2)\n AND ($3::DATE IS NULL OR trade_date <= $3)\n ORDER BY trade_date DESC\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 1,
"name": "trade_date",
"type_info": "Date"
},
{
"ordinal": 2,
"name": "open_price",
"type_info": "Numeric"
},
{
"ordinal": 3,
"name": "high_price",
"type_info": "Numeric"
},
{
"ordinal": 4,
"name": "low_price",
"type_info": "Numeric"
},
{
"ordinal": 5,
"name": "close_price",
"type_info": "Numeric"
},
{
"ordinal": 6,
"name": "volume",
"type_info": "Int8"
},
{
"ordinal": 7,
"name": "pe",
"type_info": "Numeric"
},
{
"ordinal": 8,
"name": "pb",
"type_info": "Numeric"
},
{
"ordinal": 9,
"name": "total_mv",
"type_info": "Numeric"
}
],
"parameters": {
"Left": [
"Text",
"Date",
"Date"
]
},
"nullable": [
false,
false,
true,
true,
true,
true,
true,
true,
true,
true
]
},
"hash": "a487a815febf42b5c58fce44382f2d849f81b5831e733fc1d8faa62196f67dc9"
}

View File

@ -0,0 +1,52 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT symbol, name, industry, list_date, additional_info, updated_at\n FROM company_profiles\n WHERE symbol = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 1,
"name": "name",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "industry",
"type_info": "Varchar"
},
{
"ordinal": 3,
"name": "list_date",
"type_info": "Date"
},
{
"ordinal": 4,
"name": "additional_info",
"type_info": "Jsonb"
},
{
"ordinal": 5,
"name": "updated_at",
"type_info": "Timestamptz"
}
],
"parameters": {
"Left": [
"Text"
]
},
"nullable": [
false,
false,
true,
true,
true,
false
]
},
"hash": "a857a2bbeb2b7defebc976b472df1fd3b88ab154afe1d0d6ca044e616a75e60f"
}

View File

@ -0,0 +1,18 @@
{
"db_name": "PostgreSQL",
"query": "\n INSERT INTO time_series_financials (symbol, metric_name, period_date, value, source)\n VALUES ($1, $2, $3, $4, $5)\n ON CONFLICT (symbol, metric_name, period_date) DO UPDATE SET\n value = EXCLUDED.value,\n source = EXCLUDED.source\n ",
"describe": {
"columns": [],
"parameters": {
"Left": [
"Varchar",
"Varchar",
"Date",
"Numeric",
"Varchar"
]
},
"nullable": []
},
"hash": "c08e82dfa0c325fe81baef633be7369ff6e4eb4534d00a41da94adfebbd44cc2"
}

View File

@ -0,0 +1,58 @@
{
"db_name": "PostgreSQL",
"query": "\n SELECT id, symbol, module_id, generated_at, model_name, content, meta_data\n FROM analysis_results\n WHERE id = $1\n ",
"describe": {
"columns": [
{
"ordinal": 0,
"name": "id",
"type_info": "Uuid"
},
{
"ordinal": 1,
"name": "symbol",
"type_info": "Varchar"
},
{
"ordinal": 2,
"name": "module_id",
"type_info": "Varchar"
},
{
"ordinal": 3,
"name": "generated_at",
"type_info": "Timestamptz"
},
{
"ordinal": 4,
"name": "model_name",
"type_info": "Varchar"
},
{
"ordinal": 5,
"name": "content",
"type_info": "Text"
},
{
"ordinal": 6,
"name": "meta_data",
"type_info": "Jsonb"
}
],
"parameters": {
"Left": [
"Uuid"
]
},
"nullable": [
false,
false,
false,
false,
true,
false,
true
]
},
"hash": "c3d06b1b669d66f82fd532a7bc782621101780f7f549852fc3b4405b477870af"
}

File diff suppressed because it is too large Load Diff

View File

@ -0,0 +1,83 @@
[package]
name = "data-persistence-service"
version = "0.1.2"
edition = "2021"
authors = ["Lv, Qi <lvsoft@gmail.com>"]
default-run = "data-persistence-service-server"
[lib]
name = "data_persistence_service"
path = "src/lib.rs"
[[bin]]
name = "data-persistence-service-server"
path = "src/main.rs"
[[bin]]
name = "api-cli"
path = "src/bin/api-cli.rs"
# The cli feature is not yet compatible with the new architecture.
# required-features = ["service_kit/api-cli"]
[dependencies]
service_kit = { version = "0.1.2", default-features = true }
anyhow = "1.0"
rmcp = { version = "0.8.5", features = [
"transport-streamable-http-server",
"transport-worker"
] }
# Web framework
axum = "0.8"
tokio = { version = "1.0", features = ["full"] }
tower-http = { version = "0.6.6", features = ["cors", "trace"] }
tower = { version = "0.5", features = ["util"] }
# Observability
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "fmt"] }
# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
# OpenAPI & Schema
utoipa = { version = "5.4", features = ["axum_extras", "chrono", "uuid"] }
utoipa-swagger-ui = { version = "9.0", features = ["axum"] }
# Environment variables
dotenvy = "0.15"
# Error Handling
thiserror = "2.0.17"
# Database
sqlx = { version = "0.8.6", features = [ "runtime-tokio-rustls", "postgres", "chrono", "uuid", "json", "rust_decimal" ] }
rust_decimal = { version = "1.36", features = ["serde"] }
chrono = { version = "0.4", features = ["serde"] }
uuid = { version = "1", features = ["serde", "v4"] }
# WASM CLI UI
rust-embed = "8.7"
axum-embed = "0.1.0"
[dev-dependencies]
http-body-util = "0.1"
tower = { version = "0.5", features = ["util"] }
# Feature management: everything enabled by default; features can be disabled selectively
[features]
default = ["swagger-ui"]
swagger-ui = []
wasm-cli = []
# Tie the template's `mcp` feature to the corresponding service_kit mcp feature
mcp = ["service_kit/mcp"]
# Optional: forward api-cli through to service_kit
# api-cli = ["service_kit/api-cli"]
# --- For Local Development ---
# If you are developing `service_kit` locally, uncomment the following lines
# in your project's `.cargo/config.toml` file (create it if it doesn't exist)
# to make Cargo use your local version instead of the one from git.
#
# [patch.'https://github.com/lvsoft/service_kit']
# service_kit = { path = "../service_kit" } # Note: Adjust the path if your directory structure is different.

View File

@ -0,0 +1,26 @@
FROM rust:1.90-bookworm AS chef
WORKDIR /app
RUN cargo install cargo-chef
FROM chef AS planner
COPY . .
RUN cargo chef prepare --recipe-path recipe.json
FROM chef AS builder
ENV SQLX_OFFLINE=true
COPY --from=planner /app/recipe.json /app/recipe.json
RUN cargo chef cook --release --recipe-path /app/recipe.json
COPY . .
RUN cargo build --release --bin data-persistence-service-server
FROM debian:bookworm-slim AS runtime
WORKDIR /app
RUN groupadd --system --gid 1001 appuser && \
useradd --system --uid 1001 --gid 1001 appuser
USER appuser
COPY --from=builder /app/target/release/data-persistence-service-server /usr/local/bin/data-persistence-service-server
COPY ./migrations ./migrations
ENV HOST=0.0.0.0
ENV PORT=3000
EXPOSE 3000
ENTRYPOINT ["/usr/local/bin/data-persistence-service-server"]

View File

@ -0,0 +1,67 @@
# Data Persistence Service
This service is the sole owner of the database within the "fundamental analysis" microservice architecture and provides a RESTful API for all data-persistence needs.
## Overview
- **Language**: Rust
- **Framework**: Axum
- **Database**: PostgreSQL (with the TimescaleDB extension)
- **Core responsibility**: provide a stable, high-performance, type-safe API layer over the database.
## Local Development Guide
### 1. Prerequisites
- The Rust toolchain (`rustup`)
- `sqlx-cli` (`cargo install sqlx-cli`)
- A running PostgreSQL instance with the TimescaleDB extension enabled.
### 2. Configuration
Copy the `env.sample` file to `.env` and set `DATABASE_URL` for your local environment:
```bash
cp env.sample .env
```
Your `.env` file should look like this:
```ini
# Port the service listens on
PORT=3000
# URL sqlx uses to connect to the database
# Make sure the user, password, host, port and database name are all correct
DATABASE_URL=postgres://user:password@localhost:5432/fundamental_analysis
```
### 3. Database Migrations
Before running the service for the first time, and after any schema change, run the migrations to bring the database up to date:
```bash
sqlx migrate run
```
### 4. Running the Service
Build and run the service:
```bash
cargo run
```
The service starts and listens on the port specified in your `.env` file (3000 by default). Its OpenAPI specification (Swagger JSON) is available at `/api-docs/openapi.json`.
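Editor's note: as a quick smoke test, a short Python sketch (assuming the default port 3000 from the `.env` example above) can confirm the service is up and serving its spec:

```python
# Smoke-test sketch: fetch the OpenAPI spec from a locally running instance.
# Assumes the default PORT=3000 shown above; adjust if your .env differs.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:3000/api-docs/openapi.json") as resp:
    spec = json.load(resp)

print(spec["info"]["title"], "-", len(spec.get("paths", {})), "documented paths")
```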
## Testing
To run all tests (including the database and API integration tests), use the command below. Make sure `DATABASE_URL` in your `.env` file points to a valid test database with migrations applied.
```bash
cargo test
```
For detailed test output:
```bash
cargo test -- --nocapture
```

View File

@ -0,0 +1,86 @@
# WASM CLI - API Calls Implemented
## 🎉 Problem Solved
The issue where the WASM CLI only printed "Successfully matched command" without executing the actual API call has been fixed!
## 🔧 Fixes
1. **Real HTTP API calls**: the original command-matching stub was replaced with actual requests via the JavaScript fetch API
2. **WASM bindings added**: asynchronous HTTP requests are implemented via web-sys and wasm-bindgen-futures
3. **Dependency conflict resolved**: feature gating works around reqwest's incompatibility with the WASM environment
4. **New async API**: the `run_command_async()` function now genuinely executes API requests and returns the result
## 📋 Main Changes
### 1. New initialization function
```javascript
// Old version
init_cli(spec_json)
// New version - takes both the OpenAPI spec and the base URL
init_cli(spec_json, base_url)
```
### 2. New async command-execution function
```javascript
// New - actually performs the API call
const result = await run_command_async("v1.hello.get");
// Old version - deprecated, only returns an error message
const result = run_command("v1.hello.get");
```
## 🚀 Usage
### 1. Initialize the CLI
```javascript
import init, { init_cli, run_command_async } from './pkg/forge_cli_wasm.js';
// Initialize the WASM module
await init();
// Fetch the OpenAPI spec
const response = await fetch('http://localhost:3000/api-docs/openapi.json');
const spec = await response.text();
// Initialize the CLI
init_cli(spec, 'http://localhost:3000');
```
### 2. Execute API commands
```javascript
// GET request
const result1 = await run_command_async("v1.hello.get");
// Request with parameters
const result2 = await run_command_async("v1.add.get --a 1 --b 2");
// POST request (if the API supports it)
const result3 = await run_command_async('v1.create.post --body \'{"name": "test"}\'');
```
## 🧪 Testing
Open `test.html` in a browser to test:
1. Make sure your service is running at http://localhost:3000
2. Click the "Initialize CLI" button
3. Enter a command such as "v1.hello.get" or "v1.add.get --a 1 --b 2"
4. Click the "Run Command" button
5. Inspect the actual API response
## ⚠️ Important Notes
1. **The old `run_command` function is deprecated**: use the new `run_command_async` function instead
2. **CORS support required**: make sure your API server allows cross-origin requests
3. **Asynchronous operation**: all API calls are now async and must be `await`ed
4. **Error handling**: a failed API request returns an error message instead of throwing an exception
## 🔍 Debugging
- Open the browser devtools and check the console logs
- Network requests appear in the Network tab
- Errors are shown in the output area
Your WASM CLI now truly interacts with the API instead of merely "matching commands"! 🎉

View File

@ -0,0 +1,659 @@
let wasm;
function addToExternrefTable0(obj) {
const idx = wasm.__externref_table_alloc();
wasm.__wbindgen_export_2.set(idx, obj);
return idx;
}
function handleError(f, args) {
try {
return f.apply(this, args);
} catch (e) {
const idx = addToExternrefTable0(e);
wasm.__wbindgen_exn_store(idx);
}
}
const cachedTextDecoder = (typeof TextDecoder !== 'undefined' ? new TextDecoder('utf-8', { ignoreBOM: true, fatal: true }) : { decode: () => { throw Error('TextDecoder not available') } } );
if (typeof TextDecoder !== 'undefined') { cachedTextDecoder.decode(); };
let cachedUint8ArrayMemory0 = null;
function getUint8ArrayMemory0() {
if (cachedUint8ArrayMemory0 === null || cachedUint8ArrayMemory0.byteLength === 0) {
cachedUint8ArrayMemory0 = new Uint8Array(wasm.memory.buffer);
}
return cachedUint8ArrayMemory0;
}
function getStringFromWasm0(ptr, len) {
ptr = ptr >>> 0;
return cachedTextDecoder.decode(getUint8ArrayMemory0().subarray(ptr, ptr + len));
}
function isLikeNone(x) {
return x === undefined || x === null;
}
const CLOSURE_DTORS = (typeof FinalizationRegistry === 'undefined')
? { register: () => {}, unregister: () => {} }
: new FinalizationRegistry(state => {
wasm.__wbindgen_export_3.get(state.dtor)(state.a, state.b)
});
function makeMutClosure(arg0, arg1, dtor, f) {
const state = { a: arg0, b: arg1, cnt: 1, dtor };
const real = (...args) => {
// First up with a closure we increment the internal reference
// count. This ensures that the Rust closure environment won't
// be deallocated while we're invoking it.
state.cnt++;
const a = state.a;
state.a = 0;
try {
return f(a, state.b, ...args);
} finally {
if (--state.cnt === 0) {
wasm.__wbindgen_export_3.get(state.dtor)(a, state.b);
CLOSURE_DTORS.unregister(state);
} else {
state.a = a;
}
}
};
real.original = state;
CLOSURE_DTORS.register(real, state, state);
return real;
}
function debugString(val) {
// primitive types
const type = typeof val;
if (type == 'number' || type == 'boolean' || val == null) {
return `${val}`;
}
if (type == 'string') {
return `"${val}"`;
}
if (type == 'symbol') {
const description = val.description;
if (description == null) {
return 'Symbol';
} else {
return `Symbol(${description})`;
}
}
if (type == 'function') {
const name = val.name;
if (typeof name == 'string' && name.length > 0) {
return `Function(${name})`;
} else {
return 'Function';
}
}
// objects
if (Array.isArray(val)) {
const length = val.length;
let debug = '[';
if (length > 0) {
debug += debugString(val[0]);
}
for(let i = 1; i < length; i++) {
debug += ', ' + debugString(val[i]);
}
debug += ']';
return debug;
}
// Test for built-in
const builtInMatches = /\[object ([^\]]+)\]/.exec(toString.call(val));
let className;
if (builtInMatches && builtInMatches.length > 1) {
className = builtInMatches[1];
} else {
// Failed to match the standard '[object ClassName]'
return toString.call(val);
}
if (className == 'Object') {
// we're a user defined class or Object
// JSON.stringify avoids problems with cycles, and is generally much
// easier than looping through ownProperties of `val`.
try {
return 'Object(' + JSON.stringify(val) + ')';
} catch (_) {
return 'Object';
}
}
// errors
if (val instanceof Error) {
return `${val.name}: ${val.message}\n${val.stack}`;
}
// TODO we could test for more things here, like `Set`s and `Map`s.
return className;
}
let WASM_VECTOR_LEN = 0;
const cachedTextEncoder = (typeof TextEncoder !== 'undefined' ? new TextEncoder('utf-8') : { encode: () => { throw Error('TextEncoder not available') } } );
const encodeString = (typeof cachedTextEncoder.encodeInto === 'function'
? function (arg, view) {
return cachedTextEncoder.encodeInto(arg, view);
}
: function (arg, view) {
const buf = cachedTextEncoder.encode(arg);
view.set(buf);
return {
read: arg.length,
written: buf.length
};
});
function passStringToWasm0(arg, malloc, realloc) {
if (realloc === undefined) {
const buf = cachedTextEncoder.encode(arg);
const ptr = malloc(buf.length, 1) >>> 0;
getUint8ArrayMemory0().subarray(ptr, ptr + buf.length).set(buf);
WASM_VECTOR_LEN = buf.length;
return ptr;
}
let len = arg.length;
let ptr = malloc(len, 1) >>> 0;
const mem = getUint8ArrayMemory0();
let offset = 0;
for (; offset < len; offset++) {
const code = arg.charCodeAt(offset);
if (code > 0x7F) break;
mem[ptr + offset] = code;
}
if (offset !== len) {
if (offset !== 0) {
arg = arg.slice(offset);
}
ptr = realloc(ptr, len, len = offset + arg.length * 3, 1) >>> 0;
const view = getUint8ArrayMemory0().subarray(ptr + offset, ptr + len);
const ret = encodeString(arg, view);
offset += ret.written;
ptr = realloc(ptr, len, offset, 1) >>> 0;
}
WASM_VECTOR_LEN = offset;
return ptr;
}
let cachedDataViewMemory0 = null;
function getDataViewMemory0() {
if (cachedDataViewMemory0 === null || cachedDataViewMemory0.buffer.detached === true || (cachedDataViewMemory0.buffer.detached === undefined && cachedDataViewMemory0.buffer !== wasm.memory.buffer)) {
cachedDataViewMemory0 = new DataView(wasm.memory.buffer);
}
return cachedDataViewMemory0;
}
function takeFromExternrefTable0(idx) {
const value = wasm.__wbindgen_export_2.get(idx);
wasm.__externref_table_dealloc(idx);
return value;
}
/**
* @param {string} spec_json
* @param {string} base_url
*/
export function init_cli(spec_json, base_url) {
const ptr0 = passStringToWasm0(spec_json, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ptr1 = passStringToWasm0(base_url, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len1 = WASM_VECTOR_LEN;
const ret = wasm.init_cli(ptr0, len0, ptr1, len1);
if (ret[1]) {
throw takeFromExternrefTable0(ret[0]);
}
}
/**
* @param {string} command_line
* @returns {Promise<any>}
*/
export function run_command_async(command_line) {
const ptr0 = passStringToWasm0(command_line, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.run_command_async(ptr0, len0);
return ret;
}
/**
* @param {string} _command_line
* @returns {string}
*/
export function run_command(_command_line) {
let deferred2_0;
let deferred2_1;
try {
const ptr0 = passStringToWasm0(_command_line, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.run_command(ptr0, len0);
deferred2_0 = ret[0];
deferred2_1 = ret[1];
return getStringFromWasm0(ret[0], ret[1]);
} finally {
wasm.__wbindgen_free(deferred2_0, deferred2_1, 1);
}
}
/**
* Get tab-completion suggestions
* @param {string} line
* @param {number} cursor_pos
* @returns {CompletionResult}
*/
export function get_completions(line, cursor_pos) {
const ptr0 = passStringToWasm0(line, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.get_completions(ptr0, len0, cursor_pos);
return CompletionResult.__wrap(ret);
}
/**
* Get the command history
* @returns {string}
*/
export function get_history() {
let deferred1_0;
let deferred1_1;
try {
const ret = wasm.get_history();
deferred1_0 = ret[0];
deferred1_1 = ret[1];
return getStringFromWasm0(ret[0], ret[1]);
} finally {
wasm.__wbindgen_free(deferred1_0, deferred1_1, 1);
}
}
/**
* Get a history item by index (0 is the newest; negative indexes count from the end)
* @param {number} index
* @returns {string | undefined}
*/
export function get_history_item(index) {
const ret = wasm.get_history_item(index);
let v1;
if (ret[0] !== 0) {
v1 = getStringFromWasm0(ret[0], ret[1]).slice();
wasm.__wbindgen_free(ret[0], ret[1] * 1, 1);
}
return v1;
}
/**
* Search the history (similar to Ctrl+R)
* @param {string} query
* @returns {string}
*/
export function search_history(query) {
let deferred2_0;
let deferred2_1;
try {
const ptr0 = passStringToWasm0(query, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len0 = WASM_VECTOR_LEN;
const ret = wasm.search_history(ptr0, len0);
deferred2_0 = ret[0];
deferred2_1 = ret[1];
return getStringFromWasm0(ret[0], ret[1]);
} finally {
wasm.__wbindgen_free(deferred2_0, deferred2_1, 1);
}
}
/**
* Clear the history
*/
export function clear_history() {
wasm.clear_history();
}
function __wbg_adapter_22(arg0, arg1, arg2) {
wasm.closure108_externref_shim(arg0, arg1, arg2);
}
function __wbg_adapter_68(arg0, arg1, arg2, arg3) {
wasm.closure130_externref_shim(arg0, arg1, arg2, arg3);
}
const CompletionResultFinalization = (typeof FinalizationRegistry === 'undefined')
? { register: () => {}, unregister: () => {} }
: new FinalizationRegistry(ptr => wasm.__wbg_completionresult_free(ptr >>> 0, 1));
/**
* JSON representation of completion suggestions, for interop with JavaScript
*/
export class CompletionResult {
static __wrap(ptr) {
ptr = ptr >>> 0;
const obj = Object.create(CompletionResult.prototype);
obj.__wbg_ptr = ptr;
CompletionResultFinalization.register(obj, obj.__wbg_ptr, obj);
return obj;
}
__destroy_into_raw() {
const ptr = this.__wbg_ptr;
this.__wbg_ptr = 0;
CompletionResultFinalization.unregister(this);
return ptr;
}
free() {
const ptr = this.__destroy_into_raw();
wasm.__wbg_completionresult_free(ptr, 0);
}
/**
* @returns {string}
*/
get suggestions() {
let deferred1_0;
let deferred1_1;
try {
const ret = wasm.completionresult_suggestions(this.__wbg_ptr);
deferred1_0 = ret[0];
deferred1_1 = ret[1];
return getStringFromWasm0(ret[0], ret[1]);
} finally {
wasm.__wbindgen_free(deferred1_0, deferred1_1, 1);
}
}
}
async function __wbg_load(module, imports) {
if (typeof Response === 'function' && module instanceof Response) {
if (typeof WebAssembly.instantiateStreaming === 'function') {
try {
return await WebAssembly.instantiateStreaming(module, imports);
} catch (e) {
if (module.headers.get('Content-Type') != 'application/wasm') {
console.warn("`WebAssembly.instantiateStreaming` failed because your server does not serve Wasm with `application/wasm` MIME type. Falling back to `WebAssembly.instantiate` which is slower. Original error:\n", e);
} else {
throw e;
}
}
}
const bytes = await module.arrayBuffer();
return await WebAssembly.instantiate(bytes, imports);
} else {
const instance = await WebAssembly.instantiate(module, imports);
if (instance instanceof WebAssembly.Instance) {
return { instance, module };
} else {
return instance;
}
}
}
function __wbg_get_imports() {
const imports = {};
imports.wbg = {};
imports.wbg.__wbg_call_672a4d21634d4a24 = function() { return handleError(function (arg0, arg1) {
const ret = arg0.call(arg1);
return ret;
}, arguments) };
imports.wbg.__wbg_call_7cccdd69e0791ae2 = function() { return handleError(function (arg0, arg1, arg2) {
const ret = arg0.call(arg1, arg2);
return ret;
}, arguments) };
imports.wbg.__wbg_fetch_b7bf320f681242d2 = function(arg0, arg1) {
const ret = arg0.fetch(arg1);
return ret;
};
imports.wbg.__wbg_instanceof_Response_f2cc20d9f7dfd644 = function(arg0) {
let result;
try {
result = arg0 instanceof Response;
} catch (_) {
result = false;
}
const ret = result;
return ret;
};
imports.wbg.__wbg_instanceof_Window_def73ea0955fc569 = function(arg0) {
let result;
try {
result = arg0 instanceof Window;
} catch (_) {
result = false;
}
const ret = result;
return ret;
};
imports.wbg.__wbg_log_a793dbed77c682d9 = function(arg0, arg1) {
console.log(getStringFromWasm0(arg0, arg1));
};
imports.wbg.__wbg_new_018dcc2d6c8c2f6a = function() { return handleError(function () {
const ret = new Headers();
return ret;
}, arguments) };
imports.wbg.__wbg_new_23a2665fac83c611 = function(arg0, arg1) {
try {
var state0 = {a: arg0, b: arg1};
var cb0 = (arg0, arg1) => {
const a = state0.a;
state0.a = 0;
try {
return __wbg_adapter_68(a, state0.b, arg0, arg1);
} finally {
state0.a = a;
}
};
const ret = new Promise(cb0);
return ret;
} finally {
state0.a = state0.b = 0;
}
};
imports.wbg.__wbg_new_405e22f390576ce2 = function() {
const ret = new Object();
return ret;
};
imports.wbg.__wbg_newnoargs_105ed471475aaf50 = function(arg0, arg1) {
const ret = new Function(getStringFromWasm0(arg0, arg1));
return ret;
};
imports.wbg.__wbg_newwithstrandinit_06c535e0a867c635 = function() { return handleError(function (arg0, arg1, arg2) {
const ret = new Request(getStringFromWasm0(arg0, arg1), arg2);
return ret;
}, arguments) };
imports.wbg.__wbg_queueMicrotask_97d92b4fcc8a61c5 = function(arg0) {
queueMicrotask(arg0);
};
imports.wbg.__wbg_queueMicrotask_d3219def82552485 = function(arg0) {
const ret = arg0.queueMicrotask;
return ret;
};
imports.wbg.__wbg_resolve_4851785c9c5f573d = function(arg0) {
const ret = Promise.resolve(arg0);
return ret;
};
imports.wbg.__wbg_set_11cd83f45504cedf = function() { return handleError(function (arg0, arg1, arg2, arg3, arg4) {
arg0.set(getStringFromWasm0(arg1, arg2), getStringFromWasm0(arg3, arg4));
}, arguments) };
imports.wbg.__wbg_setbody_5923b78a95eedf29 = function(arg0, arg1) {
arg0.body = arg1;
};
imports.wbg.__wbg_setheaders_834c0bdb6a8949ad = function(arg0, arg1) {
arg0.headers = arg1;
};
imports.wbg.__wbg_setmethod_3c5280fe5d890842 = function(arg0, arg1, arg2) {
arg0.method = getStringFromWasm0(arg1, arg2);
};
imports.wbg.__wbg_static_accessor_GLOBAL_88a902d13a557d07 = function() {
const ret = typeof global === 'undefined' ? null : global;
return isLikeNone(ret) ? 0 : addToExternrefTable0(ret);
};
imports.wbg.__wbg_static_accessor_GLOBAL_THIS_56578be7e9f832b0 = function() {
const ret = typeof globalThis === 'undefined' ? null : globalThis;
return isLikeNone(ret) ? 0 : addToExternrefTable0(ret);
};
imports.wbg.__wbg_static_accessor_SELF_37c5d418e4bf5819 = function() {
const ret = typeof self === 'undefined' ? null : self;
return isLikeNone(ret) ? 0 : addToExternrefTable0(ret);
};
imports.wbg.__wbg_static_accessor_WINDOW_5de37043a91a9c40 = function() {
const ret = typeof window === 'undefined' ? null : window;
return isLikeNone(ret) ? 0 : addToExternrefTable0(ret);
};
imports.wbg.__wbg_status_f6360336ca686bf0 = function(arg0) {
const ret = arg0.status;
return ret;
};
imports.wbg.__wbg_text_7805bea50de2af49 = function() { return handleError(function (arg0) {
const ret = arg0.text();
return ret;
}, arguments) };
imports.wbg.__wbg_then_44b73946d2fb3e7d = function(arg0, arg1) {
const ret = arg0.then(arg1);
return ret;
};
imports.wbg.__wbg_then_48b406749878a531 = function(arg0, arg1, arg2) {
const ret = arg0.then(arg1, arg2);
return ret;
};
imports.wbg.__wbindgen_cb_drop = function(arg0) {
const obj = arg0.original;
if (obj.cnt-- == 1) {
obj.a = 0;
return true;
}
const ret = false;
return ret;
};
imports.wbg.__wbindgen_closure_wrapper648 = function(arg0, arg1, arg2) {
const ret = makeMutClosure(arg0, arg1, 109, __wbg_adapter_22);
return ret;
};
imports.wbg.__wbindgen_debug_string = function(arg0, arg1) {
const ret = debugString(arg1);
const ptr1 = passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
const len1 = WASM_VECTOR_LEN;
getDataViewMemory0().setInt32(arg0 + 4 * 1, len1, true);
getDataViewMemory0().setInt32(arg0 + 4 * 0, ptr1, true);
};
imports.wbg.__wbindgen_init_externref_table = function() {
const table = wasm.__wbindgen_export_2;
const offset = table.grow(4);
table.set(0, undefined);
table.set(offset + 0, undefined);
table.set(offset + 1, null);
table.set(offset + 2, true);
table.set(offset + 3, false);
;
};
imports.wbg.__wbindgen_is_function = function(arg0) {
const ret = typeof(arg0) === 'function';
return ret;
};
imports.wbg.__wbindgen_is_undefined = function(arg0) {
const ret = arg0 === undefined;
return ret;
};
imports.wbg.__wbindgen_string_get = function(arg0, arg1) {
const obj = arg1;
const ret = typeof(obj) === 'string' ? obj : undefined;
var ptr1 = isLikeNone(ret) ? 0 : passStringToWasm0(ret, wasm.__wbindgen_malloc, wasm.__wbindgen_realloc);
var len1 = WASM_VECTOR_LEN;
getDataViewMemory0().setInt32(arg0 + 4 * 1, len1, true);
getDataViewMemory0().setInt32(arg0 + 4 * 0, ptr1, true);
};
imports.wbg.__wbindgen_string_new = function(arg0, arg1) {
const ret = getStringFromWasm0(arg0, arg1);
return ret;
};
imports.wbg.__wbindgen_throw = function(arg0, arg1) {
throw new Error(getStringFromWasm0(arg0, arg1));
};
return imports;
}
function __wbg_init_memory(imports, memory) {
}
function __wbg_finalize_init(instance, module) {
wasm = instance.exports;
__wbg_init.__wbindgen_wasm_module = module;
cachedDataViewMemory0 = null;
cachedUint8ArrayMemory0 = null;
wasm.__wbindgen_start();
return wasm;
}
function initSync(module) {
if (wasm !== undefined) return wasm;
if (typeof module !== 'undefined') {
if (Object.getPrototypeOf(module) === Object.prototype) {
({module} = module)
} else {
console.warn('using deprecated parameters for `initSync()`; pass a single object instead')
}
}
const imports = __wbg_get_imports();
__wbg_init_memory(imports);
if (!(module instanceof WebAssembly.Module)) {
module = new WebAssembly.Module(module);
}
const instance = new WebAssembly.Instance(module, imports);
return __wbg_finalize_init(instance, module);
}
async function __wbg_init(module_or_path) {
if (wasm !== undefined) return wasm;
if (typeof module_or_path !== 'undefined') {
if (Object.getPrototypeOf(module_or_path) === Object.prototype) {
({module_or_path} = module_or_path)
} else {
console.warn('using deprecated parameters for the initialization function; pass a single object instead')
}
}
if (typeof module_or_path === 'undefined') {
module_or_path = new URL('forge_cli_wasm_bg.wasm', import.meta.url);
}
const imports = __wbg_get_imports();
if (typeof module_or_path === 'string' || (typeof Request === 'function' && module_or_path instanceof Request) || (typeof URL === 'function' && module_or_path instanceof URL)) {
module_or_path = fetch(module_or_path);
}
__wbg_init_memory(imports);
const { instance, module } = await __wbg_load(await module_or_path, imports);
return __wbg_finalize_init(instance, module);
}
export { initSync };
export default __wbg_init;

View File

@ -0,0 +1,15 @@
<!DOCTYPE html>
<html>
<head>
<title>Forge CLI (WASM)</title>
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/xterm@5.3.0/css/xterm.css" />
<link rel="stylesheet" href="/cli-ui/style.css" />
<script src="https://cdn.jsdelivr.net/npm/xterm@5.3.0/lib/xterm.js"></script>
<script src="https://cdn.jsdelivr.net/npm/xterm-addon-fit@0.8.0/lib/xterm-addon-fit.js"></script>
</head>
<body>
<h1>Forge CLI (WASM Interface)</h1>
<div id="terminal"></div>
<script type="module" src="/cli-ui/main.js"></script>
</body>
</html>

View File

@ -0,0 +1,383 @@
import init, {
init_cli,
run_command_async,
get_completions,
get_history_item,
search_history
} from '/cli-ui/forge_cli_wasm.js';
async function main() {
// 1. Initialize xterm.js
const term = new Terminal({
cursorBlink: true,
theme: {
background: '#1e1e1e',
foreground: '#d4d4d4',
},
cols: 120, // Set a reasonable terminal width
scrollback: 1000,
convertEol: true, // Convert \n to \r\n for proper line endings
});
const fitAddon = new FitAddon.FitAddon();
term.loadAddon(fitAddon);
term.open(document.getElementById('terminal'));
fitAddon.fit();
window.addEventListener('resize', () => fitAddon.fit());
term.writeln('Welcome to the Forge CLI (WASM Interface)');
term.writeln('------------------------------------------');
term.writeln('');
try {
// 2. Load and initialize the WASM module
term.write('Loading WASM module...');
await init();
term.writeln('\r✅ WASM module loaded successfully.');
// 3. Fetch OpenAPI spec and initialize the CLI
const baseUrl = window.location.origin; // derive the base URL dynamically
term.write(`Fetching OpenAPI spec from ${baseUrl}/api-docs/openapi.json...`);
const response = await fetch(`${baseUrl}/api-docs/openapi.json`);
if (!response.ok) {
throw new Error(`Failed to fetch spec: ${response.statusText}`);
}
const specJson = await response.text();
const spec = JSON.parse(specJson);
// Save globally so the JS fallback can use them
window.__openapiSpec = spec;
window.__baseUrl = baseUrl;
init_cli(specJson, baseUrl);
term.writeln('\r✅ CLI initialized with OpenAPI spec.');
} catch (e) {
term.writeln(`\r\n❌ Error during initialization: ${e}`);
return;
}
// 4. Implement the REPL with enhanced functionality
let currentLine = '';
let cursorPosition = 0; // cursor position within the current line
let historyIndex = -1; // -1 = current input, >= 0 = index into the history
let isInReverseSearch = false;
let reverseSearchQuery = '';
let completionMenu = null; // the completion menu currently displayed
const prompt = '\r\n$ ';
const promptOnly = '$ '; // prompt without the newline, used for redraws
// Redraw the current line
function redrawLine() {
// Move to the start of the line and clear everything after the prompt
term.write('\r' + promptOnly);
term.write('\x1b[K'); // clear from the cursor to the end of the line
if (isInReverseSearch) {
// In reverse-search mode, replace the entire prompt
term.write('\r\x1b[K'); // clear the whole line
term.write(`(reverse-i-search)'${reverseSearchQuery}': ${currentLine}`);
} else {
term.write(currentLine);
}
// Move the cursor to the correct position
if (cursorPosition < currentLine.length) {
const moveCursor = currentLine.length - cursorPosition;
term.write('\x1b[' + moveCursor + 'D'); // move the cursor left
}
}
// Insert a character at the cursor position
function insertChar(char) {
currentLine = currentLine.slice(0, cursorPosition) + char + currentLine.slice(cursorPosition);
cursorPosition++;
redrawLine();
}
// Delete a character
function deleteChar() {
if (cursorPosition > 0) {
currentLine = currentLine.slice(0, cursorPosition - 1) + currentLine.slice(cursorPosition);
cursorPosition--;
redrawLine();
}
}
// Move the cursor
function moveCursor(direction) {
if (direction === 'left' && cursorPosition > 0) {
cursorPosition--;
term.write('\x1b[D');
} else if (direction === 'right' && cursorPosition < currentLine.length) {
cursorPosition++;
term.write('\x1b[C');
}
}
// Handle tab completion
function handleTabCompletion() {
try {
const completionResult = get_completions(currentLine, cursorPosition);
const suggestions = JSON.parse(completionResult.suggestions);
if (suggestions.length === 0) {
return;
}
if (suggestions.length === 1) {
// Only one suggestion: complete it directly
const suggestion = suggestions[0];
const beforeCursor = currentLine.slice(0, suggestion.start_pos);
const afterCursor = currentLine.slice(suggestion.end_pos);
currentLine = beforeCursor + suggestion.value + afterCursor;
cursorPosition = beforeCursor.length + suggestion.value.length;
redrawLine();
} else {
// Multiple suggestions: show a completion menu
term.writeln('');
suggestions.slice(0, 10).forEach(suggestion => {
const desc = suggestion.description ? ` - ${suggestion.description}` : '';
term.writeln(` ${suggestion.value}${desc}`);
});
redrawLine();
}
} catch (e) {
console.error('Tab completion error:', e);
}
}
// Handle history navigation
function navigateHistory(direction) {
if (direction === 'up') {
const item = get_history_item(historyIndex + 1);
if (item) {
historyIndex++;
currentLine = item;
cursorPosition = currentLine.length;
redrawLine();
}
} else if (direction === 'down') {
if (historyIndex > 0) {
historyIndex--;
const item = get_history_item(historyIndex);
if (item) {
currentLine = item;
cursorPosition = currentLine.length;
redrawLine();
}
} else if (historyIndex === 0) {
historyIndex = -1;
currentLine = '';
cursorPosition = 0;
redrawLine();
}
}
}
// Handle reverse search
function handleReverseSearch(char) {
if (char) {
reverseSearchQuery += char;
}
try {
const searchResults = JSON.parse(search_history(reverseSearchQuery));
if (searchResults.length > 0) {
currentLine = searchResults[0];
cursorPosition = currentLine.length;
}
redrawLine();
} catch (e) {
console.error('Reverse search error:', e);
}
}
// Exit reverse-search mode
function exitReverseSearch() {
isInReverseSearch = false;
reverseSearchQuery = '';
cursorPosition = currentLine.length;
redrawLine();
}
// JS fallback: when wasm returns "Path not found", execute the request directly in JS from the OpenAPI spec
async function executeCommandJS(commandLine) {
try {
const spec = window.__openapiSpec;
const baseUrl = window.__baseUrl || '';
if (!spec) return 'Error: OpenAPI spec not loaded.';
const tokens = commandLine.match(/(?:[^\s"]+|"[^"]*")+/g) || [];
if (tokens.length === 0) return '';
const cmd = tokens[0];
const args = {};
for (let i = 1; i < tokens.length; i++) {
const t = tokens[i];
if (t.startsWith('--')) {
const key = t.replace(/^--/, '');
const val = (i + 1 < tokens.length && !tokens[i + 1].startsWith('--')) ? tokens[++i] : '';
args[key] = val.replace(/^"|"$/g, '');
}
}
const parts = cmd.split('.');
const method = parts.pop().toUpperCase();
const cmdSegs = parts;
// Match against the path templates
let matched = null;
for (const [key, item] of Object.entries(spec.paths || {})) {
const keySegs = key.split('/').filter(s => s);
if (keySegs.length !== cmdSegs.length) continue;
let ok = true;
for (let i = 0; i < keySegs.length; i++) {
const ks = keySegs[i];
const cs = cmdSegs[i];
const isParam = ks.startsWith('{') && ks.endsWith('}');
if (!isParam && ks !== cs) { ok = false; break; }
}
if (ok) { matched = [key, item]; break; }
}
if (!matched) {
return `API request failed (JS fallback): Path not found for /${cmdSegs.join('/')}`;
}
const [pathTemplate, pathItem] = matched;
const op = (pathItem[method.toLowerCase()]);
if (!op) return `API request failed (JS fallback): Operation not found for ${cmd}`;
// Build the path and query string
let finalPath = pathTemplate;
const used = new Set();
if (Array.isArray(op.parameters)) {
for (const p of op.parameters) {
const prm = p && p.name ? p : (p && p.$ref ? null : null);
if (!prm) continue;
if (p.in === 'path' && args[p.name] != null) {
finalPath = finalPath.replace(`{${p.name}}`, encodeURIComponent(args[p.name]));
used.add(p.name);
}
}
}
const query = [];
for (const [k, v] of Object.entries(args)) {
if (!used.has(k)) query.push(`${encodeURIComponent(k)}=${encodeURIComponent(v)}`);
}
let serverUrl = '';
if (Array.isArray(spec.servers) && spec.servers.length > 0 && spec.servers[0].url) {
serverUrl = spec.servers[0].url;
}
const url = `${baseUrl}${serverUrl}${finalPath}${query.length ? ('?' + query.join('&')) : ''}`;
const resp = await fetch(url, { method });
const text = await resp.text();
try {
return JSON.stringify(JSON.parse(text), null, 2);
} catch {
return text;
}
} catch (e) {
return `API request failed (JS fallback): ${e}`;
}
}
term.write(prompt);
term.onKey(({ key, domEvent }) => {
const { keyCode, ctrlKey, altKey, metaKey } = domEvent;
// Ctrl+R - reverse search
if (ctrlKey && keyCode === 82 && !isInReverseSearch) {
isInReverseSearch = true;
reverseSearchQuery = '';
currentLine = '';
cursorPosition = 0;
redrawLine();
return;
}
// Handling while in reverse-search mode
if (isInReverseSearch) {
if (keyCode === 13) { // Enter - accept the search result
exitReverseSearch();
return;
} else if (keyCode === 27) { // Esc - cancel the search
isInReverseSearch = false;
reverseSearchQuery = '';
currentLine = '';
cursorPosition = 0;
redrawLine();
return;
} else if (keyCode === 8) { // Backspace - delete a search character
if (reverseSearchQuery.length > 0) {
reverseSearchQuery = reverseSearchQuery.slice(0, -1);
handleReverseSearch();
}
return;
} else if (!ctrlKey && !altKey && !metaKey && key.length === 1) {
handleReverseSearch(key);
return;
}
return;
}
// Normal-mode handling
if (keyCode === 13) { // Enter - execute the command
if (currentLine.trim()) {
term.writeln('');
// Execute the command asynchronously
(async () => {
try {
let result = await run_command_async(currentLine);
const plain = String(result);
if (plain.includes('Path not found for')) {
result = await executeCommandJS(currentLine);
}
// Strip ANSI escape sequences
const cleanResult = String(result)
.replace(/\x1b\[[0-9;]*m/g, '')
.replace(/\x1b\[[0-9]*[A-Za-z]/g, '')
.replace(/\[\d+m/g, '');
const lines = cleanResult.split('\n');
lines.forEach((line, index) => {
if (index === lines.length - 1 && line === '') {
return;
}
term.writeln(line);
});
} catch (error) {
term.writeln(`Error: ${error}`);
}
term.write(prompt);
})();
currentLine = '';
cursorPosition = 0;
historyIndex = -1;
} else {
term.write(prompt);
}
} else if (keyCode === 9) { // Tab - completion
domEvent.preventDefault();
handleTabCompletion();
} else if (keyCode === 8) { // Backspace
deleteChar();
} else if (keyCode === 37) { // Left arrow
moveCursor('left');
} else if (keyCode === 39) { // Right arrow
moveCursor('right');
} else if (keyCode === 38) { // Up arrow - previous history entry
navigateHistory('up');
} else if (keyCode === 40) { // Down arrow - next history entry
navigateHistory('down');
} else if (keyCode === 36) { // Home - move to the start of the line
cursorPosition = 0;
redrawLine();
} else if (keyCode === 35) { // End - move to the end of the line
cursorPosition = currentLine.length;
redrawLine();
} else if (!ctrlKey && !altKey && !metaKey && key.length === 1) {
// Regular character input
insertChar(key);
}
});
}
main();

View File

@ -0,0 +1,12 @@
{
"name": "forge-cli-wasm",
"version": "0.1.2",
"files": [
"forge_cli_wasm_bg.wasm",
"forge_cli_wasm.js",
"forge_cli_wasm.d.ts"
],
"module": "forge_cli_wasm.js",
"types": "forge_cli_wasm.d.ts",
"sideEffects": false
}

View File

@ -0,0 +1,28 @@
body, html {
margin: 0;
padding: 0;
height: 100%;
background-color: #1e1e1e;
color: #d4d4d4;
font-family: Menlo, Monaco, 'Courier New', monospace;
}
h1 {
padding: 10px 20px;
margin: 0;
font-size: 1.2em;
border-bottom: 1px solid #333;
}
#terminal {
width: calc(100% - 40px);
height: calc(100vh - 80px); /* Adjust based on h1 height */
padding: 20px;
font-size: 14px; /* Ensure consistent font size */
line-height: 1.4; /* Better line spacing */
}
.xterm .xterm-viewport {
width: 100% !important;
}

Some files were not shown because too many files have changed in this diff.