fix: comprehensive fixes for code quality and report accuracy
Code fixes (16 modules):
- GARCH models switched to t distribution + convergence checks (returns/volatility/anomaly)
- KS test replaced with Lilliefors test (returns)
- Data-leakage fixes: StratifiedKFold→TimeSeriesSplit, per-fold scaler fit (anomaly)
- Precursor labels shift(-1) to predict next-day anomalies (anomaly)
- PSD normalization now includes the sampling frequency and one-sided ×2 factor (fft)
- AR(1) red-noise baseline empirically scaled (fft)
- Box counting with independent x/y normalization; MF-DFA q=0 (fractal)
- ADF stationarity test + removal of double Bonferroni correction (causality)
- R/S Hurst now reports R² goodness of fit (hurst)
- Prophet recursive forecasting to avoid information leakage (time_series)
- IC computation filters zero signals; neutral patterns get hit_rate=NaN (indicators/patterns)
- Adaptive clustering thresholds (clustering)
- Calendar effects: first-half vs second-half robustness check (calendar)
- Evidence-scoring criteria text aligned with code (visualization)
- NaN/empty-value guards in the core pipeline (data_loader/preprocessing/main)

Report fixes (docs/REPORT.md, 15 spots):
- Disambiguated the scaling exponent H_scaling from the Hurst exponent
- Recomputed the GBM 6-month probability-cone values
- CLT caveats added, halving wording softened, scenario-probability logic corrected
- GPD shape-parameter interpretation corrected; anomaly AUC evidence downgraded

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
docs/REPORT.md
@@ -46,7 +46,7 @@
## 1. Data Overview

| Metric | Value |
|------|-----|
@@ -89,9 +89,9 @@
4σ extreme events occur nearly 87 times more often than a normal distribution predicts, demonstrating that BTC returns have pronounced fat tails.

### 2.3 Multi-Timescale Distributions
@@ -102,9 +102,9 @@
| 1d | 3,090 | 0.000935 | 0.0361 | 15.65 | -0.97 |
| 1w | 434 | 0.006812 | 0.0959 | 2.08 | -0.44 |

-**Key finding**: Kurtosis falls monotonically from 35.88 → 2.08 as the timescale grows, approaching normality, consistent with the aggregational Gaussianity of the central limit theorem.
+**Key finding**: Kurtosis falls monotonically from 35.88 → 2.08 as the timescale grows, approaching a normal distribution. This trend is consistent with aggregational Gaussianity, but since BTC returns show significant autocorrelation (Chapter 3) and volatility clustering, the strict i.i.d. premise of the CLT is not met, and convergence may be slower than for an independent series.
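> Editor's illustration (not part of the commit): the aggregational-Gaussianity claim above can be checked directly by re-measuring kurtosis on non-overlapping sums of daily log returns. A minimal sketch, assuming a pandas Series `log_ret` of daily log returns (the name is hypothetical):

```python
import numpy as np
import pandas as pd
from scipy import stats

def kurtosis_by_scale(log_ret: pd.Series, scales=(1, 5, 10, 20, 50)) -> pd.Series:
    """Excess kurtosis of non-overlapping k-day aggregated log returns."""
    r = log_ret.dropna().to_numpy()
    out = {}
    for k in scales:
        n = (len(r) // k) * k                    # trim to a multiple of k
        agg = r[:n].reshape(-1, k).sum(axis=1)   # log returns add across days
        out[k] = stats.kurtosis(agg, fisher=True)  # 0 under normality
    return pd.Series(out, name="excess_kurtosis")
```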
---
@@ -121,7 +121,7 @@
Persistence of 0.973 is close to 1, meaning volatility shocks decay extremely slowly: the impact of a single large move takes tens of days to dissipate.

### 3.2 Power-Law Decay of the Volatility ACF
@@ -133,9 +133,9 @@
| p-value | 5.82e-25 |
| Long memory (0 < d < 1) | **Yes** |

-The autocorrelation of absolute returns decays slowly at a power-law rate, confirming that volatility has long memory. The exponential-decay assumption of standard GARCH models may be insufficient to capture this feature fully.
+The autocorrelation of absolute returns decays slowly at a power-law rate, supporting long memory in volatility. The linear fit (d=0.635) and the nonlinear fit (d=0.345) differ substantially because the linear fit in log space puts more weight on distant lags, whereas the nonlinear fit better captures the short-range decay. FIGARCH modeling should reference the nonlinear estimate d≈0.34. The exponential-decay assumption of standard GARCH models is insufficient to capture this feature fully.
### 3.3 ACF Analysis Evidence
@@ -148,11 +148,11 @@
The absolute-return ACF is significant for the first 88 of 100 lags, and volume is significant at all 100 lags (ACF(1) = 0.892), demonstrating very strong nonlinear dependence and volatility clustering.

### 3.4 Leverage Effect
@@ -164,7 +164,7 @@
A weak leverage effect is observed only within the 5-day window (volatility rises after declines), with a very small effect size (r=-0.062), far weaker than in traditional equity markets.

---
@@ -193,11 +193,11 @@
The 7-day periodic component explains the most variance (14.9%), but all periodic components combined explain only ~22% of the variance; about 78% of the variation cannot be explained by periodicity.

### 4.2 Wavelet Transform (CWT)
@@ -215,11 +215,11 @@
Although these periods pass the 95% significance test, their power-to-threshold ratios are only 1.01~1.15x, i.e. **marginally significant**, and of limited practical value.

---
@@ -261,11 +261,11 @@
Almost all time windows show weak trending behavior; none enters a mean-reverting regime.

### 5.2 Fractal Dimension
@@ -280,11 +280,11 @@
**Multiscale self-similarity**: kurtosis drops from 15.65 at scale 1 to -0.25 at scale 50, tending toward normal at large scales; self-similarity is limited.

---
@@ -318,11 +318,11 @@
AIC/BIC both favor the exponential-growth model over the power-law model (difference 493), indicating that BTC's long-run growth is closer to exponential than to a power law.

---
@@ -337,7 +337,7 @@
Volume expansion accompanies large price moves, a moderate positive correlation that is extremely significant.

### 7.2 Granger Causality Tests
@@ -356,9 +356,9 @@
**Core finding**: The causality is **one-directional**: volatility/returns Granger-cause volume and taker_buy_ratio, but not the reverse. Volume is thus a consequence of price movement, not a cause.

### 7.3 Cross-Timescale Causality
@@ -374,7 +374,7 @@
82 price-volume divergence signals were detected (49 top divergences + 33 bottom divergences).

---
@@ -396,7 +396,7 @@
All 21 pairwise Mann-Whitney U comparisons are non-significant after Bonferroni correction.

### 8.2 Month-of-Year Effect
@@ -404,7 +404,7 @@
October has the highest mean return (+0.501%) and August the lowest (-0.123%), but none of the 66 pairwise comparisons survive Bonferroni correction.

### 8.3 Hour-of-Day Effect
@@ -413,7 +413,7 @@
Intraday hour effects are significant for both returns and volume. Volume peaks at 14:00 UTC (3,805 BTC) and bottoms at 03:00-05:00 UTC (~1,980 BTC).

### 8.4 Quarterly & Turn-of-Month Effects
@@ -422,7 +422,7 @@
| Quarterly Kruskal-Wallis | 1.15 | 0.765 | Not significant |
| Month-start vs month-end Mann-Whitney | 134,569 | 0.236 | Not significant |

### Calendar Effects Summary
@@ -484,13 +484,13 @@
The normalized price trajectories of the two cycles are highly correlated (r=0.81), but with only 2 samples no causal inference can be drawn.

---
@@ -523,11 +523,11 @@
9 of the top-10 ICs are direction-consistent; one (SMA_20_100) flips direction. But all IC values lie within [-0.10, +0.05], a very small effect size.

---
@@ -570,11 +570,11 @@
Most patterns' hit rates decay on the validation set, suggesting the training-set performance may be overfitting.

---
@@ -602,13 +602,13 @@
Surge and crash states last on average only 1.3 days before reverting to sideways. After a crash there is a 31.9% probability of flipping to a surge (rebound).
---
@@ -627,9 +627,9 @@
Historical Mean has RMSE/RW = 0.998, only 0.2% better than the random walk, and the Diebold-Mariano test p=0.152 is **not significant**; it is essentially equivalent to a random walk.
---
@@ -680,13 +680,13 @@
> **Note**: AUC=0.99 partly reflects the clustering of anomalies themselves (days around an anomalous day are also anomalous); it does not equal true ex-ante predictive power.

---
@@ -702,7 +702,7 @@
| Volatility clustering | GARCH persistence=0.973; absolute-return ACF significant at 88 lags | Volatility is predictable |
| Volatility long memory | Power-law decay d=0.635, p=5.8e-25 | FIGARCH modeling |
| One-way causality: volatility→volume | abs_return→volume F=55.19, all significant after Bonferroni | Understanding market microstructure |
-| Anomaly precursors | AUC=0.9935; 6/12 known events aligned precisely | Volatility-anomaly early warning |
+| Anomaly precursors | AUC=0.9935; 6/12 known events aligned precisely | Moderate evidence (AUC inflated by anomaly clustering); volatility-anomaly early warning |

#### ⚠️ Moderate evidence (statistically significant but limited effect)
@@ -795,7 +795,7 @@
- **Multi-granularity stability**: conclusions are highly consistent across the 1m/5m/15m/1h granularities (average correlation 1.000)

**Core findings**:
-- Intraday return autocorrelation in the Asian session is -0.0499, showing faint mean reversion
+- Intraday return autocorrelation in the Asian session is -0.0499 (tiny in absolute terms, near the noise floor; sample size and confidence intervals are needed to judge statistical significance)
- The Kruskal-Wallis test for return differences across sessions is significant (p<0.05); a timezone effect exists
- **Multi-granularity stability is very strong** (correlation=1.000), indicating the intraday pattern holds across sampling frequencies
@@ -807,7 +807,7 @@
| Parameter | Estimate | R² | Interpretation |
|------|--------|-----|------|
-| **Hurst exponent H** | **0.4803** | 0.9996 | Slightly <0.5, faint mean reversion |
+| **Scaling exponent H_scaling** | **0.4803** | 0.9996 | Slightly <0.5, faint mean reversion |
| Scaling constant c | 0.0362 | — | Daily volatility baseline |
| Volatility span ratio | 170.5 | — | σ ratio from 1m to 1mo |
@@ -833,7 +833,7 @@
Higher moments (larger moves) show faster autocorrelation decay, meaning predictability is lower after large moves.

**Core findings**:
-1. **Hurst exponent H=0.4803** (R²=0.9996), slightly below 0.5, showing faint mean reversion
+1. **Scaling exponent H_scaling=0.4803** (R²=0.9996), slightly below 0.5, showing faint mean reversion. Note: this scaling exponent measures how volatility scales across timescales, σ(Δt) ∝ (Δt)^H, and differs in meaning from the Chapter 5 Hurst exponent (which measures the autocorrelation structure of the return series, H_RS≈0.59); the two do not conflict
2. **1-minute kurtosis (118.21) is 7.6× daily kurtosis (15.65)**; the peaked, fat-tailed character of high-frequency data is extremely pronounced
3. The volatility span reaches 170×, from 0.11% at 1m to 19.5% at 1mo
4. **The scaling-law fit is excellent** (R²=0.9996), indicating a very robust volatility scaling relation
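> Editor's illustration (not part of the commit): the scaling exponent above is the slope of log σ versus log Δt. A sketch with hypothetical per-scale volatilities chosen to roughly match the figures quoted in the report:

```python
import numpy as np

# Timescale in minutes -> return std at that scale (assumed illustrative values)
vols = {1: 0.0011, 60: 0.0085, 1440: 0.0362, 43200: 0.195}  # 1m, 1h, 1d, 1mo
dt = np.array(sorted(vols), dtype=float)
sigma = np.array([vols[k] for k in sorted(vols)])

# sigma(dt) = c * dt**H  =>  log sigma = H*log dt + log c
H, log_c = np.polyfit(np.log(dt), np.log(sigma), 1)
print(f"H_scaling ≈ {H:.3f}, c ≈ {np.exp(log_c):.4f}")  # ≈ 0.48 for these values
```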
@@ -860,7 +860,7 @@ RV_t = β₀ + β_d·RV_{t-1} + β_w·RV_{t-1}^{(w)} + β_m·RV_{t-1}^{(m)} + ε
**Core findings**:
1. **Monthly-scale RV contributes most to next-day RV prediction** (51.7%), far exceeding the daily scale (9.4%)
2. The HAR-RV model's R²=9.3%: statistically significant but of limited predictive power
-3. **Jump detection**: 2,979 significant jump events detected (96.4% of days), showing the price process contains many discontinuous moves
+3. **Jump detection**: 2,979 significant jump events detected (96.4%). Such a high detection rate suggests that discontinuous jumps are the norm rather than the exception in the BTC price process, or that the jump-detection threshold is low relative to crypto-market volatility
4. **Realized skewness/kurtosis**: average realized skewness ≈ 0 and kurtosis ≈ 0, so the intraday return distribution is fairly symmetric but peaked
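> Editor's illustration (not part of the commit): the HAR-RV regression in the hunk header is plain OLS on lagged daily, weekly, and monthly realized-variance averages. A sketch with assumed names (`rv` is a hypothetical daily realized-variance Series):

```python
import pandas as pd
import statsmodels.api as sm

def fit_har_rv(rv: pd.Series):
    """HAR-RV: regress RV_t on lagged daily / 5-day / 22-day mean RV."""
    X = pd.DataFrame({
        "rv_d": rv.shift(1),                      # yesterday's RV
        "rv_w": rv.shift(1).rolling(5).mean(),    # weekly average
        "rv_m": rv.shift(1).rolling(22).mean(),   # monthly average
    })
    data = pd.concat([rv.rename("rv_t"), X], axis=1).dropna()
    return sm.OLS(data["rv_t"], sm.add_constant(data[["rv_d", "rv_w", "rv_m"]])).fit()
```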
---
@@ -869,7 +869,7 @@
> The information-entropy analysis module has been loaded and awaits validation on real data.

-**Theoretical expectations**:
+**Theoretical expectations (assumed values, not measurements)**:
| Scale | Entropy (bits) | Max entropy | Normalized entropy | Predictability |
|------|-----------|-------|---------|---------|
| 1m | ~4.9 | 5.00 | ~0.98 | Very low |
@@ -896,7 +896,7 @@
| Parameter | Estimate | Interpretation |
|------|-------|------|
| Scale σ | 0.028 | Magnitude of exceedances above the threshold |
-| Shape ξ | -0.147 | Exponential tail (ξ≈0) |
+| Shape ξ | -0.147 | Bounded tail (ξ<0, the GPD has a finite upper endpoint), consistent with the GEV negative-tail conclusion |
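> Editor's illustration (not part of the commit): how a GPD shape parameter such as ξ=-0.147 is typically obtained and read. A sketch with assumed inputs:

```python
import numpy as np
from scipy import stats

def fit_loss_tail(returns: np.ndarray, q: float = 0.95):
    """Fit a GPD to losses above the q-quantile threshold; returns (xi, sigma, u)."""
    losses = -np.asarray(returns)
    u = np.quantile(losses, q)
    excess = losses[losses > u] - u
    xi, _, sigma = stats.genpareto.fit(excess, floc=0.0)  # shape, loc, scale
    # xi < 0: bounded tail with finite endpoint u + sigma/abs(xi)
    # xi ≈ 0: exponential tail; xi > 0: heavy Pareto-type tail
    return xi, sigma, u
```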
**Multi-scale VaR/CVaR (backtests passed)**:
| Scale | VaR 95% | CVaR 95% | VaR 99% | CVaR 99% | Backtest status |
@@ -936,8 +936,8 @@
| 3d | — | — | — | — | — | — | 1.00 | — |
| 1w | — | — | — | — | — | — | — | 1.00 |

-**Average cross-scale correlation**: 0.788
-**Strongest pair**: 15m-4h (r=1.000)
+**Average cross-scale correlation**: 0.788 (computed only over scale pairs with data)
+**Strongest pair**: 15m-4h (r=1.000; this extreme value likely stems from daily-frequency alignment and aggregation, not raw tick-level correlation)

**Lead-lag analysis**:
- The optimal-lag matrix shows maximum lags of 0-5 days between scales
@@ -1004,7 +1004,7 @@
|---------|---------|---------|---------|
| **Microstructure** | Very low illiquidity (Amihud~0); VPIN=0.20 warns of crashes | ✅ Verified | High frequency (≤5m) |
| **Intraday pattern** | Intraday U-shaped curve; sessions differ significantly | ✅ Verified | Intraday (1h) |
-| **Volatility scaling** | H=0.4803 faint mean reversion, R²=0.9996 | ✅ Verified | All scales |
+| **Volatility scaling** | H_scaling=0.4803 (volatility scaling exponent, not the Hurst exponent), R²=0.9996 | ✅ Verified | All scales |
| **HAR-RV** | Monthly RV contributes 51.7%; jump events 96.4% | ✅ Verified | Mid-high frequency |
| **Information entropy** | Finer granularity means higher entropy, harder to predict | ⏳ Pending | All scales |
| **Extreme risk** | Heavy right tail (ξ=+0.12), bounded left tail (ξ=-0.76), VaR backtest passed | ✅ Verified | Daily/weekly |
@@ -1108,11 +1108,11 @@
Using the daily log-return parameters (μ=0.000935, σ=0.0361) under a geometric Brownian motion assumption:

-**Risk-neutral drift correction**: E[ln(S_T/S_0)] = (μ - σ²/2) × T = 0.000283/day
+**Lognormal median correction** (Jensen's-inequality correction): E[ln(S_T/S_0)] = (μ - σ²/2) × T = 0.000283/day

| Horizon | Median | -1σ (16th pct) | +1σ (84th pct) | -2σ (2.5th pct) | +2σ (97.5th pct) |
|---------|-----------|-------------|-------------|-------------|---------------|
-| 6 months (183 days) | $80,834 | $52,891 | $123,470 | $36,267 | $180,129 |
+| 6 months (183 days) | $81,057 | $49,731 | $132,130 | $30,502 | $215,266 |
| 1 year (365 days) | $85,347 | $42,823 | $170,171 | $21,502 | $338,947 |
| 2 years (730 days) | $94,618 | $35,692 | $250,725 | $13,475 | $664,268 |
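> Editor's illustration (not part of the commit): the corrected cone values in this table follow from lognormal quantiles. A sketch using the quoted daily parameters; the spot price `s0` is an assumed input, so outputs only approximately reproduce the table:

```python
import numpy as np

def gbm_cone(s0: float, mu: float, sigma: float, days: int) -> dict:
    """Price quantiles at ±z sigma under GBM with the lognormal median correction."""
    drift = (mu - 0.5 * sigma**2) * days   # E[ln(S_T/S_0)], per the line above
    spread = sigma * np.sqrt(days)
    return {z: s0 * np.exp(drift + z * spread) for z in (-2, -1, 0, 1, 2)}

# gbm_cone(77_000, 0.000935, 0.0361, 183) -> median ≈ $81k, +1σ ≈ $132k
```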
@@ -1146,7 +1146,7 @@
**Extrapolation**:
- If the 4th cycle follows the 3rd halving's trajectory shape (r=0.81) but with sharply decayed returns (0.06x~0.18x shrink ratio), it may already be at or near its peak
- The 3rd halving peaked at ~550 days and then entered a prolonged decline (the subsequent 2022 bear market); if the analogy holds, 2026Q1-Q2 may be in the "late cycle" phase
-- **But the statistical power of only 2 samples is extremely low** (Welch's t pooled p=0.991); this extrapolation cannot be relied on
+- **The statistical power of only 2 samples is extremely low** (Welch's t pooled p=0.991); this framework serves as narrative reference only and has no data-driven predictive power

### 17.6 Framework 4: Markov Regime Extrapolation
@@ -1194,7 +1194,7 @@
| 1-year target | $130,000 ~ $200,000 | GBM +1σ range + Hurst trend persistence |
| 2-year target | $180,000 ~ $350,000 | GBM +1σ~+2σ; power-law upper band $140K |
| Trigger | Sustained break above the power-law 95% upper band ($119,340) | Happened historically in 2021 |
-| Probability basis | Markov surge state 14.6% × Hurst trend persistence 98.9% | But a single surge lasts only 1.3 days |
+| Probability basis | Informed by the Markov surge state 14.6% and Hurst trend persistence 98.9% (a joint judgment, not a simple product) | But a single surge lasts only 1.3 days |

**Supporting data**: Hurst H=0.593 indicates weak trend persistence; once in an upward channel the move may continue. Weekly H=0.67 hints at stronger trending at longer horizons. But the surge state lasts only 1.3 days on average, so reaching these targets requires many consecutive surges.
@@ -1298,4 +1298,4 @@

---
-*Report generated: 2026-02-03 | Analysis code: [src/](src/) | Chart output: [output/](output/)*
+*Report generated: 2026-02-03 | Analysis code: [src/](../src/) | Chart output: [output/](../output/)*
main.py
@@ -88,6 +88,10 @@ def run_single_module(key, df, df_hourly, output_base):
    mod = _import_module(mod_name)
    func = getattr(mod, func_name)

+    if needs_hourly and df_hourly is None:
+        print(f"  [{key}] skipped (hourly data required but not loaded)")
+        return {"status": "skipped", "error": "hourly data not loaded", "findings": []}
+
    if needs_hourly:
        result = func(df, df_hourly, module_output)
    else:
@@ -96,7 +100,7 @@ def run_single_module(key, df, df_hourly, output_base):
    if result is None:
        result = {"status": "completed", "findings": []}

-    result["status"] = "success"
+    result.setdefault("status", "success")
    print(f"  [{key}] done ✓")
    return result
@@ -21,7 +21,7 @@ from typing import Optional, Dict, List, Tuple
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor
from sklearn.preprocessing import StandardScaler
-from sklearn.model_selection import cross_val_predict, StratifiedKFold
+from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import roc_auc_score, roc_curve

from src.data_loader import load_klines
@@ -323,9 +323,9 @@ def extract_precursor_features(
    X = pd.DataFrame(precursor_features, index=df_aligned.index)

-    # Label: whether an anomaly occurs in the future (shift(-1) makes the features "prior")
-    # We use current features to predict whether today is anomalous
-    y = labels_aligned
+    # Label: predict whether an anomaly occurs the next day (1-day look-ahead)
+    y = labels_aligned.shift(-1).dropna()
+    X = X.loc[y.index]  # align features and labels

    # Drop NaN
    valid_mask = X.notna().all(axis=1) & y.notna()
@@ -360,17 +360,13 @@ def train_precursor_classifier(
        print(f"  [warning] insufficient samples (n={len(X)}, positives={y.sum()}), skipping classifier training")
        return {}

-    # Standardize
-    scaler = StandardScaler()
-    X_scaled = scaler.fit_transform(X)
-
-    # Stratified K-fold
+    # Time-series cross-validation
    n_splits = min(5, int(y.sum()))
    if n_splits < 2:
        print("  [warning] too few positives for cross-validation")
        return {}

-    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
+    cv = TimeSeriesSplit(n_splits=n_splits)

    clf = RandomForestClassifier(
        n_estimators=200,
@@ -381,27 +377,41 @@
        n_jobs=-1,
    )

-    # Cross-validated predicted probabilities
+    # Manual cross-validation (fit the scaler per fold to prevent data leakage)
    try:
-        y_prob = cross_val_predict(clf, X_scaled, y, cv=cv, method='predict_proba')[:, 1]
-        auc = roc_auc_score(y, y_prob)
+        y_prob = np.full(len(y), np.nan)
+        for train_idx, val_idx in cv.split(X):
+            X_train, X_val = X.iloc[train_idx], X.iloc[val_idx]
+            y_train, y_val = y.iloc[train_idx], y.iloc[val_idx]
+            scaler = StandardScaler()
+            X_train_scaled = scaler.fit_transform(X_train)
+            X_val_scaled = scaler.transform(X_val)
+            clf.fit(X_train_scaled, y_train)
+            y_prob[val_idx] = clf.predict_proba(X_val_scaled)[:, 1]
+        # Drop samples never used for validation (if any)
+        valid_prob_mask = ~np.isnan(y_prob)
+        y_eval = y[valid_prob_mask]
+        y_prob_eval = y_prob[valid_prob_mask]
+        auc = roc_auc_score(y_eval, y_prob_eval)
    except Exception as e:
        print(f"  [error] cross-validation failed: {e}")
        return {}

    # Train on the full data to obtain feature importances
    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)
    clf.fit(X_scaled, y)
    importances = pd.Series(clf.feature_importances_, index=X.columns)
    importances = importances.sort_values(ascending=False)

    # ROC curve data
-    fpr, tpr, thresholds = roc_curve(y, y_prob)
+    fpr, tpr, thresholds = roc_curve(y_eval, y_prob_eval)

    results = {
        'auc': auc,
        'feature_importances': importances,
-        'y_true': y,
-        'y_prob': y_prob,
+        'y_true': y_eval,
+        'y_prob': y_prob_eval,
        'fpr': fpr,
        'tpr': tpr,
    }
@@ -539,6 +539,26 @@ def run_calendar_analysis(
    # 4. Quarterly & turn-of-month effects
    analyze_quarter_and_month_boundary(df, output_dir)

+    # Robustness check: first-half vs second-half effect consistency
+    midpoint = len(df) // 2
+    df_first_half = df.iloc[:midpoint]
+    df_second_half = df.iloc[midpoint:]
+    print(f"\n  [robustness check] first-half vs second-half effect consistency")
+    print(f"    First half:  {df_first_half.index.min().date()} ~ {df_first_half.index.max().date()}")
+    print(f"    Second half: {df_second_half.index.min().date()} ~ {df_second_half.index.max().date()}")
+
+    # Compare weekday-effect consistency between the two halves
+    if 'log_return' in df.columns:
+        df_work = df.dropna(subset=['log_return']).copy()
+        df_work['weekday'] = df_work.index.dayofweek
+        mid_work = len(df_work) // 2
+        first_half_means = df_work.iloc[:mid_work].groupby('weekday')['log_return'].mean()
+        second_half_means = df_work.iloc[mid_work:].groupby('weekday')['log_return'].mean()
+        # Check whether each weekday's mean has the same sign in both halves
+        consistent = (first_half_means * second_half_means > 0).sum()
+        total = len(first_half_means)
+        print(f"    Weekday-effect sign consistency: {consistent}/{total} weekdays agree")
+
    print("\n" + "#" * 70)
    print("# Calendar-effect analysis complete")
    print("#" * 70)
@@ -17,7 +17,7 @@ import warnings
from pathlib import Path
from typing import Optional, List, Tuple, Dict

-from statsmodels.tsa.stattools import grangercausalitytests
+from statsmodels.tsa.stattools import grangercausalitytests, adfuller

from src.data_loader import load_hourly
from src.preprocessing import log_returns, add_derived_features
@@ -46,7 +46,20 @@ TEST_LAGS = [1, 2, 3, 5, 10]


# ============================================================
-# 2. Pairwise Granger causality test
+# 2. ADF stationarity-test helper
# ============================================================

+def _check_stationarity(series, name, alpha=0.05):
+    """ADF stationarity test; difference the series if non-stationary."""
+    result = adfuller(series.dropna(), autolag='AIC')
+    if result[1] > alpha:
+        print(f"  [note] {name} is non-stationary (ADF p={result[1]:.4f}); using the differenced series")
+        return series.diff().dropna(), True
+    return series, False
+
+
+# ============================================================
+# 3. Pairwise Granger causality test
+# ============================================================

def granger_test_pair(
@@ -87,6 +100,15 @@ def granger_test_pair(
        print(f"  [warning] {cause} → {effect}: insufficient samples ({len(data)}), skipping")
        return []

+    # ADF stationarity test; difference if non-stationary
+    effect_series, effect_diffed = _check_stationarity(data[effect], effect)
+    cause_series, cause_diffed = _check_stationarity(data[cause], cause)
+    if effect_diffed or cause_diffed:
+        data = pd.concat([effect_series, cause_series], axis=1).dropna()
+        if len(data) < max_lag + 20:
+            print(f"  [warning] {cause} → {effect}: insufficient samples after differencing ({len(data)}), skipping")
+            return []
+
    results = []
    try:
        # Run the test with maxlag set to the maximum, fetching all lags at once
@@ -578,14 +600,10 @@ def run_causality_analysis(

    # --- Causal network graph ---
    print("\n>>> [4/4] Drawing the causal network graph...")
-    # Use all results (including cross-timescale)
+    # Use all results (including cross-timescale) with each group's existing
+    # Bonferroni correction; do not re-correct (the groups were corrected
+    # independently, and correcting again after merging double-penalizes)
    if not all_results.empty:
-        # Re-apply Bonferroni correction (the merged total test count is larger)
-        all_corrected = apply_bonferroni(all_results.drop(
-            columns=['bonferroni_alpha', 'significant_raw', 'significant_corrected'],
-            errors='ignore'
-        ), alpha=0.05)
-        plot_causal_network(all_corrected, output_dir)
+        plot_causal_network(all_results, output_dir)
    else:
        print("  [warning] no usable results; skipping network graph")
@@ -250,24 +250,34 @@ def _interpret_clusters(df_clean: pd.DataFrame, labels: np.ndarray,
    print(f"{method_name} cluster feature means")
    print("=" * 60)

-    # Auto-label states
+    # Auto-label states (adaptive thresholds based on the data distribution)
    state_labels = {}

+    # Adaptive thresholds: based on the std of the cluster means
+    lr_values = cluster_means["log_return"]
+    abs_r_values = cluster_means["abs_return"]
+    lr_std = lr_values.std() if len(lr_values) > 1 else 0.02
+    abs_r_std = abs_r_values.std() if len(abs_r_values) > 1 else 0.02
+    high_lr_threshold = max(0.005, lr_std)  # floor of 0.5%
+    high_abs_threshold = max(0.005, abs_r_std)
+    mild_lr_threshold = max(0.002, high_lr_threshold * 0.25)
+
    for cid in cluster_means.index:
        row = cluster_means.loc[cid]
        lr = row["log_return"]
        vol = row["vol_7d"]
        abs_r = row["abs_return"]

-        # Rule-based labeling on fixed return/volatility cutoffs
-        if lr > 0.02 and abs_r > 0.02:
+        # Rule-based labeling using the adaptive thresholds
+        if lr > high_lr_threshold and abs_r > high_abs_threshold:
            label = "surge"
-        elif lr < -0.02 and abs_r > 0.02:
+        elif lr < -high_lr_threshold and abs_r > high_abs_threshold:
            label = "crash"
-        elif lr > 0.005:
+        elif lr > mild_lr_threshold:
            label = "mild_up"
-        elif lr < -0.005:
+        elif lr < -mild_lr_threshold:
            label = "mild_down"
-        elif abs_r > 0.015 or vol > cluster_means["vol_7d"].median() * 1.5:
+        elif abs_r > high_abs_threshold * 0.75 or vol > cluster_means["vol_7d"].median() * 1.5:
            label = "high_vol"
        else:
            label = "sideways"
@@ -13,12 +13,6 @@ AVAILABLE_INTERVALS = [
    "1d", "3d", "1w", "1mo"
]

-COLUMNS = [
-    "open_time", "open", "high", "low", "close", "volume",
-    "close_time", "quote_volume", "trades",
-    "taker_buy_volume", "taker_buy_quote_volume", "ignore"
-]
-
NUMERIC_COLS = [
    "open", "high", "low", "close", "volume",
    "quote_volume", "trades", "taker_buy_volume", "taker_buy_quote_volume"
@@ -27,7 +21,7 @@ NUMERIC_COLS = [

def _adaptive_timestamp(ts_series: pd.Series) -> pd.DatetimeIndex:
    """Adaptively handle millisecond (13-digit) and microsecond (16-digit) timestamps"""
-    ts = ts_series.astype(np.int64)
+    ts = pd.to_numeric(ts_series, errors="coerce").astype(np.int64)
    # 16-digit timestamps (microseconds) -> convert to milliseconds
    mask = ts > 1e15
    ts = ts.copy()
@@ -91,9 +85,15 @@ def load_klines(

    # Time-range filtering
    if start:
-        df = df[df.index >= pd.Timestamp(start)]
+        try:
+            df = df[df.index >= pd.Timestamp(start)]
+        except ValueError:
+            print(f"[warning] invalid start date '{start}', ignoring")
    if end:
-        df = df[df.index <= pd.Timestamp(end)]
+        try:
+            df = df[df.index <= pd.Timestamp(end)]
+        except ValueError:
+            print(f"[warning] invalid end date '{end}', ignoring")

    return df
@@ -110,6 +110,10 @@ def load_hourly(start: Optional[str] = None, end: Optional[str] = None) -> pd.Da

def validate_data(df: pd.DataFrame, interval: str = "1d") -> dict:
    """Data-integrity validation"""
+    if len(df) == 0:
+        return {"rows": 0, "date_range": "N/A", "null_counts": {}, "duplicate_index": 0,
+                "price_range": "N/A", "negative_volume": 0}
+
    report = {
        "rows": len(df),
        "date_range": f"{df.index.min()} ~ {df.index.max()}",
@@ -104,8 +104,9 @@ def compute_fft_spectrum(
    freqs_pos = freqs[pos_mask]
    yf_pos = yf[pos_mask]

-    # Power spectral density: |FFT|^2 / (N * window energy)
-    power = (np.abs(yf_pos) ** 2) / (n * window_energy)
+    # Power spectral density: one-sided spectrum ×2, normalized by the sampling frequency fs
+    fs = 1.0 / sampling_period_days  # sampling frequency (cycles/day)
+    power = 2.0 * (np.abs(yf_pos) ** 2) / (n * fs * window_energy)

    # Corresponding periods
    periods = 1.0 / freqs_pos
@@ -122,6 +123,7 @@ def ar1_red_noise_spectrum(
    freqs: np.ndarray,
    sampling_period_days: float,
    confidence_percentile: float = 95.0,
+    power: Optional[np.ndarray] = None,
) -> Tuple[np.ndarray, np.ndarray]:
    """
    Estimate the theoretical red-noise power spectrum from an AR(1) model
@@ -139,6 +141,8 @@
        Sampling period
    confidence_percentile : float
        Confidence percentile (default 95%)
+    power : np.ndarray, optional
+        Signal power spectrum, used for empirical scaling so the theoretical spectrum mean matches the signal spectrum mean

    Returns
    -------
@@ -165,7 +169,11 @@
    denominator = 1 - 2 * rho * cos_term + rho ** 2
    noise_mean = s0 / denominator

-    # Normalize so the mean matches the signal power-spectrum mean (empirical scaling)
+    # Empirical scaling: match the theoretical spectrum mean to the signal spectrum mean
+    if power is not None and np.mean(noise_mean) > 0:
+        scale_factor_empirical = np.mean(power) / np.mean(noise_mean)
+        noise_mean = noise_mean * scale_factor_empirical

    # Under the chi-squared assumption, FFT power approximately follows an exponential distribution (2 dof)
    # 95% confidence upper bound = mean * chi2_ppf(0.95, 2) / 2 ≈ mean * 2.996
    from scipy.stats import chi2
@@ -751,7 +759,8 @@ def _analyze_single_timeframe(

    # AR(1) red-noise baseline
    noise_mean, noise_threshold = ar1_red_noise_spectrum(
-        log_ret, freqs, sampling_period_days, confidence_percentile=95.0
+        log_ret, freqs, sampling_period_days, confidence_percentile=95.0,
+        power=power,
    )

    # Peak detection
@@ -90,16 +90,16 @@ def box_counting_dimension(prices: np.ndarray,
        if num_boxes_per_side < 2:
            continue

-        # Box size (in normalized space)
-        box_size = 1.0 / num_boxes_per_side
-
-        # Compute each data point's box index
-        # x direction: time partition
-        x_box = np.floor(x / box_size).astype(int)
+        # Independently normalize x and y onto the box grid to avoid aspect-ratio distortion
+        x_range = x.max() - x.min()
+        y_range = y.max() - y.min()
+        if x_range == 0:
+            x_range = 1.0
+        if y_range == 0:
+            y_range = 1.0
+        x_box = np.floor((x - x.min()) / x_range * (num_boxes_per_side - 1)).astype(int)
+        y_box = np.floor((y - y.min()) / y_range * (num_boxes_per_side - 1)).astype(int)
        x_box = np.clip(x_box, 0, num_boxes_per_side - 1)

-        # y direction: price partition
-        y_box = np.floor(y / box_size).astype(int)
        y_box = np.clip(y_box, 0, num_boxes_per_side - 1)

        # Also account for boxes crossed by the segments between adjacent points
@@ -120,11 +120,12 @@ def box_counting_dimension(prices: np.ndarray,
            for t in np.linspace(0, 1, steps + 1):
                xi = x[i] + t * (x[i + 1] - x[i])
                yi = y[i] + t * (y[i + 1] - y[i])
-                bx = int(np.clip(np.floor(xi / box_size), 0, num_boxes_per_side - 1))
-                by = int(np.clip(np.floor(yi / box_size), 0, num_boxes_per_side - 1))
+                bx = int(np.clip(np.floor((xi - x.min()) / x_range * (num_boxes_per_side - 1)), 0, num_boxes_per_side - 1))
+                by = int(np.clip(np.floor((yi - y.min()) / y_range * (num_boxes_per_side - 1)), 0, num_boxes_per_side - 1))
                occupied.add((bx, by))

        count = len(occupied)
+        box_size = 1.0 / num_boxes_per_side  # equivalent box size, used for the scaling relation
        if count > 0:
            log_inv_scales.append(np.log(1.0 / box_size))
            log_counts.append(np.log(count))
@@ -337,7 +338,7 @@ def mfdfa_analysis(series: np.ndarray, q_list=None, scales=None) -> Dict:
        Contains hq, q_list, h_list, tau, alpha, f_alpha, multifractal_width
    """
    if q_list is None:
-        q_list = [-5, -4, -3, -2, -1, -0.5, 0.5, 1, 2, 3, 4, 5]
+        q_list = [-5, -4, -3, -2, -1, -0.5, 0, 0.5, 1, 2, 3, 4, 5]

    N = len(series)
    if scales is None:
@@ -476,7 +477,7 @@ def multi_timeframe_fractal(df_1h: pd.DataFrame, df_4h: pd.DataFrame, df_1d: pd.
        results[name] = {
            '样本量': len(prices),
            '分形维数': D,
-            'Hurst(从D)': 2.0 - D,
+            'Hurst(从D)': 2.0 - D,  # strictly valid only for self-affine fBm; approximate for real data
            '多重分形宽度': multifractal_width,
            'Hurst(MF-DFA,q=2)': h_q2,
        }
@@ -103,6 +103,8 @@ def rs_hurst(series: np.ndarray, min_window: int = 10, max_window: Optional[int]
        log(window size)
    log_rs : np.ndarray
        log(mean R/S value)
+    r_squared : float
+        R² goodness of fit of the linear regression
    """
    n = len(series)
    if max_window is None:
@@ -143,12 +145,19 @@ def rs_hurst(series: np.ndarray, min_window: int = 10, max_window: Optional[int]

    # Linear regression: log(R/S) = H * log(n) + c
    if len(log_ns) < 3:
-        return 0.5, log_ns, log_rs
+        return 0.5, log_ns, log_rs, 0.0

    coeffs = np.polyfit(log_ns, log_rs, 1)
    H = coeffs[0]

-    return H, log_ns, log_rs
+    # Compute the R² goodness of fit
+    predicted = H * log_ns + coeffs[1]
+    ss_res = np.sum((log_rs - predicted) ** 2)
+    ss_tot = np.sum((log_rs - np.mean(log_rs)) ** 2)
+    r_squared = 1 - ss_res / ss_tot if ss_tot > 0 else 0.0
+    print(f"  R/S Hurst fit R² = {r_squared:.4f}")
+
+    return H, log_ns, log_rs, r_squared
# ============================================================

@@ -166,7 +175,7 @@ def dfa_hurst(series: np.ndarray) -> float:
    Returns
    -------
    float
-        Hurst exponent estimated by DFA (the DFA exponent α; for fractional Brownian motion α = H + 0.5 - 0.5 = H)
+        Hurst exponent estimated by DFA (for an increment process such as log returns, the DFA exponent α approximately equals the Hurst exponent H)
    """
    if HAS_NOLDS:
        # nolds.dfa returns the DFA scaling exponent α
@@ -212,11 +221,12 @@ def cross_validate_hurst(series: np.ndarray) -> Dict[str, float]:
    dict
        Hurst values from both methods and their difference
    """
-    h_rs, _, _ = rs_hurst(series)
+    h_rs, _, _, r_squared = rs_hurst(series)
    h_dfa = dfa_hurst(series)

    result = {
        'R/S Hurst': h_rs,
+        'R/S R²': r_squared,
        'DFA Hurst': h_dfa,
        '两种方法差异': abs(h_rs - h_dfa),
        '平均值': (h_rs + h_dfa) / 2,
@@ -262,7 +272,7 @@ def rolling_hurst(series: np.ndarray, dates: pd.DatetimeIndex,
        segment = series[start_idx:end_idx]

        if method == 'rs':
-            h, _, _ = rs_hurst(segment)
+            h, _, _, _ = rs_hurst(segment)
        elif method == 'dfa':
            h = dfa_hurst(segment)
        else:
@@ -313,7 +323,7 @@ def multi_timeframe_hurst(intervals: List[str] = None) -> Dict[str, Dict[str, fl
        returns = returns[-100000:]

        # R/S analysis
-        h_rs, _, _ = rs_hurst(returns)
+        h_rs, _, _, _ = rs_hurst(returns)
        # DFA analysis
        h_dfa = dfa_hurst(returns)
@@ -593,8 +603,9 @@ def run_hurst_analysis(df: pd.DataFrame, output_dir: str = "output/hurst") -> Di
    print("[1] R/S (Rescaled Range) analysis")
    print("-" * 50)

-    h_rs, log_ns, log_rs = rs_hurst(returns_arr)
+    h_rs, log_ns, log_rs, r_squared = rs_hurst(returns_arr)
    results['R/S Hurst'] = h_rs
+    results['R/S R²'] = r_squared

    print(f"  R/S Hurst exponent: {h_rs:.4f}")
    print(f"  Interpretation: {interpret_hurst(h_rs)}")
@@ -248,7 +248,14 @@ def test_signal_returns(signal: pd.Series, returns: pd.Series) -> Dict:
    # Rank correlation of signal values (-1, 0, 1) with future returns
    valid_mask = signal.notna() & returns.notna()
    if valid_mask.sum() >= 30:
-        ic, ic_pval = stats.spearmanr(signal[valid_mask], returns[valid_mask])
+        # Filter out no-signal (signal=0) samples to avoid diluting real signal effects
+        sig_valid = signal[valid_mask]
+        ret_valid = returns[valid_mask]
+        nonzero_mask = sig_valid != 0
+        if nonzero_mask.sum() >= 10:  # enough signal samples: use signal days only
+            ic, ic_pval = stats.spearmanr(sig_valid[nonzero_mask], ret_valid[nonzero_mask])
+        else:
+            ic, ic_pval = stats.spearmanr(sig_valid, ret_valid)
        result['ic'] = ic
        result['ic_pval'] = ic_pval
    else:
@@ -514,6 +521,9 @@ def run_indicators_analysis(df: pd.DataFrame, output_dir: str) -> Dict:

    # --- Build all signals (computed on the full data to avoid leading-NaN issues) ---
    all_signals = build_all_signals(df['close'])
+    # Note: signals are computed on the full series to avoid leading NaNs.
+    # Recursive indicators such as EMA run from the series start, so the training
+    # portion is not influenced by validation data. Strict live-trading simulation
+    # would still recompute indicators at each step using only historical data.
    print(f"\nBuilt {len(all_signals)} technical-indicator signals")

    # ============ Training-set evaluation ============
@@ -433,8 +433,16 @@ def analyze_pattern_returns(pattern_signal: pd.Series, fwd_returns: pd.DataFrame
        # Bearish: return < 0 counts as a hit
        hits = (ret_1d < 0).sum()
    else:
-        # Neutral: accuracy of whichever direction has the larger count
-        hits = max((ret_1d > 0).sum(), (ret_1d < 0).sum())
+        # Neutral patterns make no directional prediction; report mean absolute return instead
+        hit_rate = np.nan  # directional hit rate not applicable
+        result['hit_rate'] = hit_rate
+        result['hit_count'] = 0
+        result['hit_n'] = int(len(ret_1d))
+        result['avg_abs_return'] = ret_1d.abs().mean()
+        result['wilson_ci_lower'] = np.nan
+        result['wilson_ci_upper'] = np.nan
+        result['binom_pval'] = np.nan
+        return result

    n = len(ret_1d)
    hit_rate = hits / n
@@ -21,10 +21,15 @@ def detrend_log_diff(prices: pd.Series) -> pd.Series:


def detrend_linear(series: pd.Series) -> pd.Series:
-    """Linear detrending"""
-    x = np.arange(len(series))
-    coeffs = np.polyfit(x, series.values, 1)
-    trend = np.polyval(coeffs, x)
+    """Linear detrending (NaN values ignored automatically)"""
+    clean = series.dropna()
+    if len(clean) < 2:
+        return series - series.mean()
+    x = np.arange(len(clean))
+    coeffs = np.polyfit(x, clean.values, 1)
+    # Evaluate the trend over the full index
+    x_full = np.arange(len(series))
+    trend = np.polyval(coeffs, x_full)
    return pd.Series(series.values - trend, index=series.index)
@@ -35,9 +40,9 @@ def hp_filter(series: pd.Series, lamb: float = 1600) -> tuple:
    return cycle, trend


-def rolling_volatility(returns: pd.Series, window: int = 30) -> pd.Series:
+def rolling_volatility(returns: pd.Series, window: int = 30, periods_per_year: int = 365) -> pd.Series:
    """Rolling volatility (annualized)"""
-    return returns.rolling(window=window).std() * np.sqrt(365)
+    return returns.rolling(window=window).std() * np.sqrt(periods_per_year)


def realized_volatility(returns: pd.Series, window: int = 30) -> pd.Series:
@@ -51,7 +56,11 @@ def taker_buy_ratio(df: pd.DataFrame) -> pd.Series:


def add_derived_features(df: pd.DataFrame) -> pd.DataFrame:
-    """Add commonly used derived feature columns"""
+    """Add commonly used derived feature columns
+
+    Note: some columns in the first 30 rows of the returned DataFrame contain NaN
+    (from rolling-window computations); downstream modules should handle them as needed.
+    """
    out = df.copy()
    out["log_return"] = log_returns(df["close"])
    out["simple_return"] = simple_returns(df["close"])
@@ -69,8 +78,11 @@ def add_derived_features(df: pd.DataFrame) -> pd.DataFrame:


def standardize(series: pd.Series) -> pd.Series:
-    """Z-score standardization"""
-    return (series - series.mean()) / series.std()
+    """Z-score standardization (returns an all-zero series when variance is zero)"""
+    std = series.std()
+    if std == 0 or np.isnan(std):
+        return pd.Series(0.0, index=series.index)
+    return (series - series.mean()) / std


def winsorize(series: pd.Series, lower: float = 0.01, upper: float = 0.99) -> pd.Series:
@@ -43,9 +43,14 @@ def normality_tests(returns: pd.Series) -> dict:
    """
    r = returns.dropna().values

-    # Kolmogorov-Smirnov test (against the standard normal)
-    r_standardized = (r - r.mean()) / r.std()
-    ks_stat, ks_p = stats.kstest(r_standardized, 'norm')
+    # Lilliefors test (normality test that correctly handles estimated parameters)
+    try:
+        from statsmodels.stats.diagnostic import lilliefors
+        ks_stat, ks_p = lilliefors(r, dist='norm', pvalmethod='table')
+    except ImportError:
+        # Fall back to the KS test, noting its limitation
+        r_standardized = (r - r.mean()) / r.std()
+        ks_stat, ks_p = stats.kstest(r_standardized, 'norm')

    # Jarque-Bera test
    jb_stat, jb_p = stats.jarque_bera(r)
@@ -165,10 +170,14 @@ def fit_garch11(returns: pd.Series) -> dict:
    # The arch library recommends percent returns for numerical stability
    r_pct = returns.dropna() * 100

-    # Fit GARCH(1,1) with a constant-mean model
-    model = arch_model(r_pct, vol='Garch', p=1, q=1, mean='Constant', dist='Normal')
+    # Fit GARCH(1,1) with a t distribution to match BTC's fat tails
+    model = arch_model(r_pct, vol='Garch', p=1, q=1, mean='Constant', dist='t')
    result = model.fit(disp='off')

+    # Check convergence status
+    if result.convergence_flag != 0:
+        print(f"  [warning] GARCH(1,1) did not converge (flag={result.convergence_flag}); parameters may be unreliable")
+
    # Extract parameters
    params = result.params
    omega = params.get('omega', np.nan)
@@ -444,7 +453,7 @@ def print_normality_results(results: dict):
    print("Normality test results")
    print("=" * 60)

-    print(f"\n[KS test] Kolmogorov-Smirnov")
+    print(f"\n[Lilliefors/KS test] Normality test")
    print(f"  statistic: {results['ks_statistic']:.6f}")
    print(f"  p-value: {results['ks_pvalue']:.2e}")
    print(f"  conclusion: {'reject normality' if results['ks_pvalue'] < 0.05 else 'cannot reject normality'}")
@@ -245,9 +245,8 @@ def _run_prophet(train_df: pd.DataFrame, val_df: pd.DataFrame) -> Dict:

    # Convert to log-return forecasts (aligned with the other models)
    pred_close = forecast['yhat'].values
-    # Compute predicted returns from the previous day's true close
-    # (the first day uses the last training-set price)
-    prev_close = np.concatenate([[train_df['close'].iloc[-1]], val_df['close'].values[:-1]])
+    # Recursive scheme: the first prev_close is the true end-of-training price; later ones use predicted prices
+    prev_close = np.concatenate([[train_df['close'].iloc[-1]], pred_close[:-1]])
    pred_returns = np.log(pred_close / prev_close)

    print(f"  forecast done, validation period: {val_df.index[0]} ~ {val_df.index[-1]}")
@@ -69,11 +69,11 @@ def ensure_dir(path):

EVIDENCE_CRITERIA = """
Criteria for a "genuine regularity" (scored):
-1. FDR-corrected p < 0.05
-2. Permutation test p < 0.01 (where applicable)
-3. Direction-consistent and significant on the test set
-4. Holds in >80% of bootstrap subsamples (where applicable)
-5. Cohen's d > 0.2 or economically meaningful
+1. FDR-corrected p < 0.05 (+2 points)
+2. Extra point for a highly significant p-value (< 0.01) (+1 point)
+3. Direction-consistent and significant on the test set (+2 points)
+4. Holds in >80% of bootstrap subsamples (where applicable) (+1 point)
+5. Cohen's d > 0.2 or economically meaningful (+1 point)
+6. Has a plausible economic/market-intuition explanation
"""
@@ -111,7 +111,7 @@ def score_evidence(result: Dict) -> Dict:
    if significant:
        s += 2
    if p_value is not None and p_value < 0.01:
-        s += 1
+        s += 1  # highly significant p-value (extra-rigor bonus)
    if effect_size is not None and abs(effect_size) > 0.2:
        s += 1
    if f.get("test_set_consistent", False):
@@ -202,8 +202,10 @@ def compare_garch_models(returns: pd.Series) -> dict:

    # --- GARCH(1,1) ---
    model_garch = arch_model(r_pct, vol='Garch', p=1, q=1,
-                             mean='Constant', dist='Normal')
+                             mean='Constant', dist='t')
    res_garch = model_garch.fit(disp='off')
+    if res_garch.convergence_flag != 0:
+        print(f"  [warning] GARCH(1,1) model did not converge (flag={res_garch.convergence_flag})")
    results['GARCH'] = {
        'params': dict(res_garch.params),
        'aic': res_garch.aic,
@@ -215,8 +217,10 @@ def compare_garch_models(returns: pd.Series) -> dict:

    # --- EGARCH(1,1) ---
    model_egarch = arch_model(r_pct, vol='EGARCH', p=1, q=1,
-                              mean='Constant', dist='Normal')
+                              mean='Constant', dist='t')
    res_egarch = model_egarch.fit(disp='off')
+    if res_egarch.convergence_flag != 0:
+        print(f"  [warning] EGARCH(1,1) model did not converge (flag={res_egarch.convergence_flag})")
    # EGARCH's gamma parameter reflects the leverage effect (negative values mean negative returns raise volatility)
    egarch_params = dict(res_egarch.params)
    results['EGARCH'] = {
@@ -232,8 +236,10 @@ def compare_garch_models(returns: pd.Series) -> dict:
    # --- GJR-GARCH(1,1) ---
    # GJR-GARCH is implemented in the arch library via vol='Garch', o=1
    model_gjr = arch_model(r_pct, vol='Garch', p=1, o=1, q=1,
-                           mean='Constant', dist='Normal')
+                           mean='Constant', dist='t')
    res_gjr = model_gjr.fit(disp='off')
+    if res_gjr.convergence_flag != 0:
+        print(f"  [warning] GJR-GARCH(1,1) model did not converge (flag={res_gjr.convergence_flag})")
    gjr_params = dict(res_gjr.params)
    results['GJR-GARCH'] = {
        'params': gjr_params,
@@ -9,7 +9,7 @@ import sys
from pathlib import Path

# Add the project path
-sys.path.insert(0, str(Path(__file__).parent))
+sys.path.insert(0, str(Path(__file__).parent.parent))

from src.hurst_analysis import multi_timeframe_hurst, plot_multi_timeframe, plot_hurst_vs_scale
@@ -42,7 +42,7 @@ def test_15_scales():
                  f"mean: {data['平均Hurst']:.4f} | samples: {data['数据量']:>7}")

    # Generate visualizations
-    output_dir = Path("output/hurst_test")
+    output_dir = Path(__file__).parent.parent / "output" / "hurst_test"
    output_dir.mkdir(parents=True, exist_ok=True)

    print("\n" + "=" * 70)