Browse source

Final version of custom metrics

cicv 1 month ago
commit
059c4672e0
46 changed files with 5721 additions and 0 deletions
  1. 22 0
      .vscode/launch.json
  2. 132 0
      README.md
  3. 359 0
      config/all_metrics_config.yaml
  4. 22 0
      config/custom_metrics_config.yaml
  5. 342 0
      config/metrics_config.yaml
  6. 66 0
      custom_metrics/metric_safety_safeTime_CustomTTC.py
  7. 72 0
      custom_metrics/metric_user_safeTime_CustomTTC.py
  8. 30 0
      logs/test.log
  9. BIN
      modules/lib/__pycache__/common.cpython-312.pyc
  10. BIN
      modules/lib/__pycache__/common.cpython-313.pyc
  11. BIN
      modules/lib/__pycache__/data_process.cpython-312.pyc
  12. BIN
      modules/lib/__pycache__/data_process.cpython-313.pyc
  13. BIN
      modules/lib/__pycache__/log.cpython-312.pyc
  14. BIN
      modules/lib/__pycache__/log.cpython-313.pyc
  15. BIN
      modules/lib/__pycache__/log_manager.cpython-312.pyc
  16. BIN
      modules/lib/__pycache__/log_manager.cpython-313.pyc
  17. BIN
      modules/lib/__pycache__/metric_registry.cpython-312.pyc
  18. BIN
      modules/lib/__pycache__/metric_registry.cpython-313.pyc
  19. BIN
      modules/lib/__pycache__/score.cpython-312.pyc
  20. BIN
      modules/lib/__pycache__/score.cpython-313.pyc
  21. 185 0
      modules/lib/common.py
  22. 227 0
      modules/lib/data_process.py
  23. 115 0
      modules/lib/log_manager.py
  24. 131 0
      modules/lib/metric_registry.py
  25. 243 0
      modules/lib/score.py
  26. BIN
      modules/metric/__pycache__/comfort.cpython-312.pyc
  27. BIN
      modules/metric/__pycache__/comfort.cpython-313.pyc
  28. BIN
      modules/metric/__pycache__/efficient.cpython-312.pyc
  29. BIN
      modules/metric/__pycache__/efficient.cpython-313.pyc
  30. BIN
      modules/metric/__pycache__/function.cpython-312.pyc
  31. BIN
      modules/metric/__pycache__/function.cpython-313.pyc
  32. BIN
      modules/metric/__pycache__/safety.cpython-312.pyc
  33. BIN
      modules/metric/__pycache__/safety.cpython-313.pyc
  34. BIN
      modules/metric/__pycache__/traffic.cpython-312.pyc
  35. BIN
      modules/metric/__pycache__/traffic.cpython-313.pyc
  36. 560 0
      modules/metric/comfort.py
  37. 148 0
      modules/metric/efficient.py
  38. 164 0
      modules/metric/function.py
  39. 105 0
      modules/metric/safety.py
  40. 1220 0
      modules/metric/traffic.py
  41. 623 0
      scripts/evaluator_enhanced.py
  42. 498 0
      scripts/evaluator_optimized.py
  43. 106 0
      templates/custom_metric_template.py
  44. 226 0
      templates/unified_custom_metric_template.py
  45. 28 0
      test/custom_metrics_config.yaml
  46. 97 0
      test/split.py

+ 22 - 0
.vscode/launch.json

@@ -0,0 +1,22 @@
+{
+    // Use IntelliSense to learn about possible attributes.
+    // Hover to view descriptions of existing attributes.
+    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
+    "version": "0.2.0",
+    "configurations": [
+        {
+            "name": "Python Debugger: Current File",
+            "type": "debugpy",
+            "request": "launch",
+            "program": "${file}",
+            "console": "integratedTerminal"
+        },
+        {
+            "name": "Python Debugger: Current File",
+            "type": "debugpy",
+            "request": "launch",
+            "program": "${file}",
+            "console": "integratedTerminal"
+        }
+    ]
+}

+ 132 - 0
README.md

@@ -0,0 +1,132 @@
+# Custom Metric Development Guide
+
+## Overview
+
+This system supports user-defined evaluation metrics. You can implement your own metric calculation logic in a Python script and integrate it into the evaluation system.
+
+## Quick Start
+
+1. Copy the `custom_metric_template.py` template file
+2. Modify the metric calculation logic to fit your needs
+3. Place your custom metric script in the designated directory
+4. When running an evaluation, pass the custom metric directory via the `--customMetricsPath` argument
+
+## Custom Metric Specification
+
+### Requirements
+
+1. Each metric class must inherit from the `BaseMetric` base class
+2. It must implement the `calculate()` method
+3. The file must define a `METRIC_CATEGORY` variable specifying the metric category
+
+### Metric Categories
+
+The available metric categories are:
+- safety: safety metrics
+- comfort: comfort metrics
+- traffic: traffic-rule metrics
+- efficient: efficiency metrics
+- function: functional metrics
+- custom: custom category
+
+### Return Value Format
+
+The `calculate()` method should return a dictionary with the following fields:
+- value: the computed metric value
+- score: a score (0-100)
+- details: detailed information (optional)
+
+## Example
+
+```python
+from modules.lib.metric_registry import BaseMetric
+
+METRIC_CATEGORY = "custom"
+
+class MyCustomMetric(BaseMetric):
+    def __init__(self, data):
+        super().__init__(data)
+    
+    def calculate(self):
+        # Implement your calculation logic here
+        return {
+            "value": 42.0,
+            "score": 85,
+            "details": {"max": 100, "min": 0}
+        }
+```
+
+## Running the Evaluator
+
+```bash
+python evaluator.py --configPath config.yaml --dataPath data_dir --reportPath report_dir --logPath logs --customMetricsPath custom_metrics
+```
+
+## Architecture Notes
+
+The new architecture consists of the following parts:
+
+1. **Metric registration system**: implemented by the `MetricRegistry` class, which manages all available metrics (built-in and custom).
+
+2. **Metric base class**: every metric inherits from the `BaseMetric` base class, ensuring a consistent interface.
+
+3. **Dynamic metric selection**: driven by the metric definitions in the configuration file, the system only runs the selected metrics, improving efficiency.
+
+4. **Custom metric loading**: user-defined metric scripts can be loaded from a specified directory, extending the system's functionality.
+
+5. **Backward compatibility**: the original `safety.py`, `comfort.py`, and related modules are preserved, keeping the system backward compatible.
+
+This design makes the system more flexible: it can run metrics selectively, supports user-defined metrics, and keeps the original code structure stable.
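The registration flow described above can be sketched as follows. Note that `MetricRegistry`, `register`, and `create` here are illustrative assumptions based on this README, not the actual API of `modules/lib/metric_registry.py`:

```python
# Sketch of a metric registry mapping "category.name" keys to metric classes.
# The class and method names are assumptions, not the real implementation.
from typing import Any, Dict, Type


class BaseMetric:
    """Base class all metrics inherit from (assumed interface)."""

    def __init__(self, data: Any):
        self.data = data

    def calculate(self) -> Dict[str, Any]:
        raise NotImplementedError


class MetricRegistry:
    """Maps 'category.name' keys to metric classes."""

    def __init__(self) -> None:
        self._metrics: Dict[str, Type[BaseMetric]] = {}

    def register(self, category: str, name: str, cls: Type[BaseMetric]) -> None:
        self._metrics[f"{category}.{name}"] = cls

    def create(self, key: str, data: Any) -> BaseMetric:
        # Instantiate the registered metric class with the evaluation data.
        return self._metrics[key](data)


registry = MetricRegistry()


class DemoTTC(BaseMetric):
    def calculate(self) -> Dict[str, Any]:
        return {"value": 3.2, "score": 80, "details": {}}


registry.register("safety", "DemoTTC", DemoTTC)
result = registry.create("safety.DemoTTC", data=None).calculate()
```

A custom-metric loader would do the same `register` call after importing each script found in `--customMetricsPath`.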
+
+## Directory Structure
+
+```
+d:\Kevin\zhaoyuan\zhaoyuan\
+├── scripts/
+│   └── evaluator.py          # main evaluation engine
+├── modules/
+│   ├── lib/
+│   │   ├── metric_registry.py  # metric registration system
+│   │   ├── data_process.py     # data processing module
+│   │   └── log_manager.py      # log management module
+│   └── metric/
+│       ├── safety.py           # safety metric module
+│       ├── comfort.py          # comfort metric module
+│       ├── traffic.py          # traffic-rule metric module
+│       ├── efficient.py        # efficiency metric module
+│       └── function.py         # functional metric module
+├── templates/
+│   ├── custom_metric_template.py  # custom metric template
+│   └── README.md                  # custom metric development guide
+└── custom_metrics/                # user custom metric directory
+```
+
+
+## Workflow
+
+1. Initialization phase
+   - Load the configuration file
+   - Register built-in metric modules
+   - Load custom metric scripts
+   - Extract the list of enabled metrics
+2. Evaluation phase
+   - Load and preprocess data
+   - Instantiate the enabled metrics
+   - Run metric calculations in parallel
+   - Collect and organize the results
+3. Reporting phase
+   - Generate a structured evaluation report
+   - Write it to the specified directory
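The parallel metric-calculation step of the evaluation phase could be implemented roughly as below. This is a sketch using `concurrent.futures`; the real evaluator's threading model and error handling may differ:

```python
# Sketch: run each metric's calculate() concurrently and collect results
# by metric name. Illustrative only; not the evaluator's actual code.
from concurrent.futures import ThreadPoolExecutor


def run_metrics(metric_instances):
    """Run calculate() for each metric in a thread pool."""
    results = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {name: pool.submit(m.calculate)
                   for name, m in metric_instances.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result()
            except Exception as e:  # a failing metric must not abort the run
                results[name] = {"details": {"error": str(e)}}
    return results


class FakeMetric:
    def calculate(self):
        return {"value": 1.0, "score": 90}


out = run_metrics({"safety.TTC": FakeMetric()})
```

Collecting per-metric errors instead of raising keeps one broken custom metric from invalidating the whole report.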
+## Extensibility
+
+1. Adding a built-in metric
+   - Add a new method to the corresponding category module
+   - The system registers and discovers it automatically
+2. Adding a custom metric
+   - Create a new metric script from the template
+   - Place it in the custom metric directory
+   - No core code changes required
+3. Adding a metric category
+   - Create a new category module
+   - Add support for the category in the registration system
+
+## Advantages
+
+1. Flexibility: metrics can be run selectively based on configuration, improving efficiency
+2. Extensibility: user-defined metrics are supported without modifying core code
+3. Compatibility: the original module structure is preserved, ensuring backward compatibility
+4. Parallelism: multi-threading speeds up evaluation
+5. Modularity: clear separation of responsibilities makes maintenance and extension easier
+
+This architecture supports selectively running metrics from the configuration file and extending the system with custom scripts, while preserving the original modular structure.

+ 359 - 0
config/all_metrics_config.yaml

@@ -0,0 +1,359 @@
+vehicle:
+  CAR_WIDTH: 1.872       
+  CAR_LENGTH: 4.924      
+  CAR_HEIGHT: 1.3        
+  CAR_OFFX: 1.321        
+  RHO: 0.3               
+  EGO_ACCEL_MAX: 6       
+  OBJ_DECEL_MAX: 8       
+  EGO_DECEL_MIN: 1       
+  EGO_DECEL_LON_MAX: 8   
+  EGO_DECEL_LAT_MAX: 1   
+  EGO_WHEELBASS: 2.8     
+
+T_threshold:
+  T0_threshold: 0  
+  T1_threshold: 2  
+  T2_threshold: 5  
+
+safety:
+  name: safety
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:  
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
+    TTC:
+      name: TTC
+      priority: 0
+      max: 2000.0
+      min: 2.86
+    MTTC:
+      name: MTTC
+      priority: 0
+      max: 2000.0
+      min: 3.0
+    THW:
+      name: THW
+      priority: 0
+      max: 2000.0
+      min: 1.5
+  safeDistance:
+    name: safeDistance
+    priority: 0
+    LonSD:
+      name: LonSD
+      priority: 0
+      max: 2000.0
+      min: 10.0
+    LatSD:
+      name: LatSD 
+      priority: 0
+      max: 2000.0
+      min: 2.0
+  safeAcceleration:
+    name: safeAcceleration
+    priority: 0
+    BTN:
+      name: BTN
+      priority: 0
+      max: 1.0
+      min: -2000.0
+  safeProbability:
+    name: safeProbability
+    priority: 0
+    collisionRisk:
+      name: collisionRisk
+      priority: 0
+      max: 10.0
+      min: 0.0
+    collisionSeverity:
+      name: collisionSeverity
+      priority: 0
+      max: 10.0
+      min: 0.0
+
+user:
+  name: user
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:  # newly added third-level metric
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
+
+comfort:
+  name: comfort
+  priority: 0
+  comfortLat:
+    name: comfortLat
+    priority: 0
+    Weaving:
+      name: Weaving
+      priority: 0
+      max: 0
+      min: 0
+    shake:
+      name: shake
+      priority: 0
+      max: 0
+      min: 0
+  comfortLon:
+    name: comfortLon
+    priority: 0
+    cadence:
+      name: cadence
+      priority: 0
+      max: 0
+      min: 0
+    slamBrake:
+      name: slamBrake 
+      priority: 0
+      max: 0
+      min: 0
+    slamAccelerate:
+      name: slamAccelerate
+      priority: 0
+      max: 0
+      min: 0
+
+efficient:
+  name: efficient
+  priority: 0
+  drivingMode:
+    name: drivingMode
+    priority: 0
+    max_speed:
+      name: maxSpeed
+      priority: 0
+      max: 0.0
+      min: 0.0
+    devation_speed:
+      name: deviationSpeed
+      priority: 0
+      max: 0.0
+      min: 0.0
+    averagedSpeed:
+      name: averagedSpeed
+      priority: 0
+      max: 80.0
+      min: 30.0
+  parkingMode:
+    name: parkingMode
+    priority: 0
+    stopDuration:
+      name: stopDuration
+      priority: 0
+      max: 1
+      min: 0
+
+function:
+  name: function
+  priority: 0
+  LKA:
+    name: LKA
+    priority: 0
+    latestWarningDistance_TTC:
+      name: latestWarningDistance_TTC
+      priority: 0
+      max: 5
+      min: 1.98
+    latestWarningDistance:
+      name: latestWarningDistance
+      priority: 0
+      max: 150
+      min: 0
+
+traffic:
+  name: traffic
+  priority: 0
+  majorViolation:
+    name: majorViolation
+    priority: 0
+    urbanExpresswayOrHighwaySpeedOverLimit50:
+      name: urbanExpresswayOrHighwaySpeedOverLimit50
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayReverse:
+      name: higwayreverse
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayDrivingAgainst:
+      name: higwayDrivingAgainst
+      priority: 0
+      max: 0
+      min: 0
+
+  seriousViolation:
+    name: seriousViolation
+    priority: 0
+    urbanExpresswayOrHighwayDrivingLaneStopped:
+      name: urbanExpresswayOrHighwayDrivingLaneStopped
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayEmergencyLaneStopped:
+      name: highwayEmergencyLaneStopped
+      priority: 0
+      max: 0
+      min: 0
+
+  dangerousViolation:
+    name: dangerousViolation
+    priority: 0
+    urbanExpresswayEmergencyLaneDriving:
+      name: urbanExpresswayEmergencyLaneDriving
+      priority: 0
+      max: 0
+      min: 0
+    trafficSignalViolation:
+      name: trafficSignalViolation
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwaySpeedOverLimit20to50:
+      name: urbanExpresswayOrHighwaySpeedOverLimit20to50
+      priority: 0
+      max: 0
+      min: 0
+    generalRoadSpeedOverLimit50:
+      name: generalRoadSpeedOverLimit50
+      priority: 0
+      max: 0
+      min: 0
+
+  generalViolation:
+    name: generalViolation
+    priority: 0
+    generalRoadSpeedOverLimit20to50:
+      name: generalRoadSpeedOverLimit20to50
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwaySpeedUnderLimit:
+      name: UrbanExpresswayOrHighwaySpeedUnderLimit
+      priority: 0
+      max: 0
+      min: 0
+    illegalDrivingOrParkingAtCrossroads:
+      name: illegalDrivingOrParkingAtCrossroads
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_right:
+      name: overtake_on_right
+      priority: 0
+      max: 0
+      min: 0
+    overtake_when_turn_around:
+      name: overtake_when_turn_around
+      priority: 0
+      max: 0
+      min: 0
+    overtake_when_passing_car:
+      name: overtake_when_passing_car
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_forbid_lane:
+      name: overtake_in_forbid_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_ramp:
+      name: overtake_in_ramp
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_tunnel:
+      name: overtake_in_tunnel
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_accelerate_lane:
+      name: overtake_on_accelerate_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_decelerate_lane:
+      name: overtake_on_decelerate_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_different_senerios:
+      name: overtake_in_different_senerios
+      priority: 0
+      max: 0
+      min: 0
+    slow_down_in_crosswalk:
+      name: slow_down_in_crosswalk
+      priority: 0
+      max: 0
+      min: 0
+    avoid_pedestrian_in_crosswalk:
+      name: avoid_pedestrian_in_crosswalk
+      priority: 0
+      max: 0
+      min: 0
+    avoid_pedestrian_in_the_road:
+      name: avoid_pedestrian_in_the_road
+      priority: 0
+      max: 0
+      min: 0
+    aviod_pedestrian_when_turning:
+      name: aviod_pedestrian_when_turning
+      priority: 0
+      max: 0
+      min: 0
+    NoStraightThrough:
+      name: NoStraightThrough
+      priority: 0
+      max: 0
+      min: 0
+    SpeedLimitViolation:
+      name: SpeedLimitViolation
+      priority: 0
+      max: 0
+      min: 0
+    MinimumSpeedLimitViolation:
+      name: MinimumSpeedLimitViolation
+      priority: 0
+      max: 0
+      min: 0
+
+  minorViolation:
+    name: minorViolation
+    priority: 0
+    noUTurnViolation:
+      name: noUTurnViolation
+      priority: 0
+      max: 0
+      min: 0
+
+  warningViolation:
+    name: warningViolation
+    priority: 0
+    urbanExpresswayOrHighwaySpeedOverLimit0to20:
+      name: urbanExpresswayOrHighwaySpeedOverLimit0to20
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayRideLaneDivider:
+      name: urbanExpresswayOrHighwayRideLaneDivider
+      priority: 0
+      max: 0
+      min: 0
+    generalRoadIrregularLaneUse:
+      name: generalRoadIrregularLaneUse
+      priority: 0
+      max: 0
+      min: 0

+ 22 - 0
config/custom_metrics_config.yaml

@@ -0,0 +1,22 @@
+safety:
+  name: safety
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
+user:
+  name: user
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
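The nested structure above (category → subcategory → metric, each leaf carrying `name`, `priority`, `max`, `min`) can be flattened into an enabled-metric list roughly as follows. This is an illustrative parsing sketch, not the evaluator's actual configuration code:

```python
# Sketch: flatten a metrics-config dict into (category, subcategory, metric)
# leaf entries. Illustrative only.
import yaml

CONFIG = """
safety:
  name: safety
  priority: 0
  safeTime:
    name: safetime
    priority: 0
    CustomTTC:
      name: CustomTTC
      priority: 0
      max: 20.0
      min: 3.5
"""


def enabled_metrics(config):
    """Yield (category, subcategory, metric, params) for every leaf metric."""
    for category, cat_node in config.items():
        for sub_key, sub_node in cat_node.items():
            if not isinstance(sub_node, dict):
                continue  # skip the category's own name/priority fields
            for metric_key, params in sub_node.items():
                if isinstance(params, dict):
                    yield category, sub_key, metric_key, params


config = yaml.safe_load(CONFIG)
metrics = list(enabled_metrics(config))
```

Each yielded tuple identifies one metric to run, with its `max`/`min` thresholds available for scoring.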

+ 342 - 0
config/metrics_config.yaml

@@ -0,0 +1,342 @@
+vehicle:
+  CAR_WIDTH: 1.872       
+  CAR_LENGTH: 4.924      
+  CAR_HEIGHT: 1.3        
+  CAR_OFFX: 1.321        
+  RHO: 0.3               
+  EGO_ACCEL_MAX: 6       
+  OBJ_DECEL_MAX: 8       
+  EGO_DECEL_MIN: 1       
+  EGO_DECEL_LON_MAX: 8   
+  EGO_DECEL_LAT_MAX: 1   
+  EGO_WHEELBASS: 2.8     
+
+T_threshold:
+  T0_threshold: 0  
+  T1_threshold: 2  
+  T2_threshold: 5  
+
+safety:
+  name: safety
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    TTC:
+      name: TTC
+      priority: 0
+      max: 2000.0
+      min: 2.86
+    MTTC:
+      name: MTTC
+      priority: 0
+      max: 2000.0
+      min: 3.0
+    THW:
+      name: THW
+      priority: 0
+      max: 2000.0
+      min: 1.5
+  safeDistance:
+    name: safeDistance
+    priority: 0
+    LonSD:
+      name: LonSD
+      priority: 0
+      max: 2000.0
+      min: 10.0
+    LatSD:
+      name: LatSD 
+      priority: 0
+      max: 2000.0
+      min: 2.0
+  safeAcceleration:
+    name: safeAcceleration
+    priority: 0
+    BTN:
+      name: BTN
+      priority: 0
+      max: 1.0
+      min: -2000.0
+  safeProbability:
+    name: safeProbability
+    priority: 0
+    collisionRisk:
+      name: collisionRisk
+      priority: 0
+      max: 10.0
+      min: 0.0
+    collisionSeverity:
+      name: collisionSeverity
+      priority: 0
+      max: 10.0
+      min: 0.0
+
+comfort:
+  name: comfort
+  priority: 0
+  comfortLat:
+    name: comfortLat
+    priority: 0
+    Weaving:
+      name: Weaving
+      priority: 0
+      max: 0
+      min: 0
+    shake:
+      name: shake
+      priority: 0
+      max: 0
+      min: 0
+  comfortLon:
+    name: comfortLon
+    priority: 0
+    cadence:
+      name: cadence
+      priority: 0
+      max: 0
+      min: 0
+    slamBrake:
+      name: slamBrake 
+      priority: 0
+      max: 0
+      min: 0
+    slamAccelerate:
+      name: slamAccelerate
+      priority: 0
+      max: 0
+      min: 0
+
+efficient:
+  name: efficient
+  priority: 0
+  drivingMode:
+    name: drivingMode
+    priority: 0
+    max_speed:
+      name: maxSpeed
+      priority: 0
+      max: 0.0
+      min: 0.0
+    devation_speed:
+      name: deviationSpeed
+      priority: 0
+      max: 0.0
+      min: 0.0
+    averagedSpeed:
+      name: averagedSpeed
+      priority: 0
+      max: 80.0
+      min: 30.0
+  parkingMode:
+    name: parkingMode
+    priority: 0
+    stopDuration:
+      name: stopDuration
+      priority: 0
+      max: 1
+      min: 0
+
+function:
+  name: function
+  priority: 0
+  LKA:
+    name: LKA
+    priority: 0
+    latestWarningDistance_TTC:
+      name: latestWarningDistance_TTC
+      priority: 0
+      max: 5
+      min: 1.98
+    latestWarningDistance:
+      name: latestWarningDistance
+      priority: 0
+      max: 150
+      min: 0
+
+traffic:
+  name: traffic
+  priority: 0
+  majorViolation:
+    name: majorViolation
+    priority: 0
+    urbanExpresswayOrHighwaySpeedOverLimit50:
+      name: urbanExpresswayOrHighwaySpeedOverLimit50
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayReverse:
+      name: higwayreverse
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayDrivingAgainst:
+      name: higwayDrivingAgainst
+      priority: 0
+      max: 0
+      min: 0
+
+  seriousViolation:
+    name: seriousViolation
+    priority: 0
+    urbanExpresswayOrHighwayDrivingLaneStopped:
+      name: urbanExpresswayOrHighwayDrivingLaneStopped
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayEmergencyLaneStopped:
+      name: highwayEmergencyLaneStopped
+      priority: 0
+      max: 0
+      min: 0
+
+  dangerousViolation:
+    name: dangerousViolation
+    priority: 0
+    urbanExpresswayEmergencyLaneDriving:
+      name: urbanExpresswayEmergencyLaneDriving
+      priority: 0
+      max: 0
+      min: 0
+    trafficSignalViolation:
+      name: trafficSignalViolation
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwaySpeedOverLimit20to50:
+      name: urbanExpresswayOrHighwaySpeedOverLimit20to50
+      priority: 0
+      max: 0
+      min: 0
+    generalRoadSpeedOverLimit50:
+      name: generalRoadSpeedOverLimit50
+      priority: 0
+      max: 0
+      min: 0
+
+  generalViolation:
+    name: generalViolation
+    priority: 0
+    generalRoadSpeedOverLimit20to50:
+      name: generalRoadSpeedOverLimit20to50
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwaySpeedUnderLimit:
+      name: UrbanExpresswayOrHighwaySpeedUnderLimit
+      priority: 0
+      max: 0
+      min: 0
+    illegalDrivingOrParkingAtCrossroads:
+      name: illegalDrivingOrParkingAtCrossroads
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_right:
+      name: overtake_on_right
+      priority: 0
+      max: 0
+      min: 0
+    overtake_when_turn_around:
+      name: overtake_when_turn_around
+      priority: 0
+      max: 0
+      min: 0
+    overtake_when_passing_car:
+      name: overtake_when_passing_car
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_forbid_lane:
+      name: overtake_in_forbid_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_ramp:
+      name: overtake_in_ramp
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_tunnel:
+      name: overtake_in_tunnel
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_accelerate_lane:
+      name: overtake_on_accelerate_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_on_decelerate_lane:
+      name: overtake_on_decelerate_lane
+      priority: 0
+      max: 0
+      min: 0
+    overtake_in_different_senerios:
+      name: overtake_in_different_senerios
+      priority: 0
+      max: 0
+      min: 0
+    slow_down_in_crosswalk:
+      name: slow_down_in_crosswalk
+      priority: 0
+      max: 0
+      min: 0
+    avoid_pedestrian_in_crosswalk:
+      name: avoid_pedestrian_in_crosswalk
+      priority: 0
+      max: 0
+      min: 0
+    avoid_pedestrian_in_the_road:
+      name: avoid_pedestrian_in_the_road
+      priority: 0
+      max: 0
+      min: 0
+    aviod_pedestrian_when_turning:
+      name: aviod_pedestrian_when_turning
+      priority: 0
+      max: 0
+      min: 0
+    NoStraightThrough:
+      name: NoStraightThrough
+      priority: 0
+      max: 0
+      min: 0
+    SpeedLimitViolation:
+      name: SpeedLimitViolation
+      priority: 0
+      max: 0
+      min: 0
+    MinimumSpeedLimitViolation:
+      name: MinimumSpeedLimitViolation
+      priority: 0
+      max: 0
+      min: 0
+
+  minorViolation:
+    name: minorViolation
+    priority: 0
+    noUTurnViolation:
+      name: noUTurnViolation
+      priority: 0
+      max: 0
+      min: 0
+
+  warningViolation:
+    name: warningViolation
+    priority: 0
+    urbanExpresswayOrHighwaySpeedOverLimit0to20:
+      name: urbanExpresswayOrHighwaySpeedOverLimit0to20
+      priority: 0
+      max: 0
+      min: 0
+    urbanExpresswayOrHighwayRideLaneDivider:
+      name: urbanExpresswayOrHighwayRideLaneDivider
+      priority: 0
+      max: 0
+      min: 0
+    generalRoadIrregularLaneUse:
+      name: generalRoadIrregularLaneUse
+      priority: 0
+      max: 0
+      min: 0

+ 66 - 0
custom_metrics/metric_safety_safeTime_CustomTTC.py

@@ -0,0 +1,66 @@
+"""自定义TTC指标评测脚本
+
+此脚本实现了一个自定义的TTC(Time To Collision)指标评测逻辑
+"""
+from typing import Dict, Any
+import math
+from modules.lib.score import Score
+import logging
+from modules.lib.metric_registry import BaseMetric
+
+# 指定指标类别
+METRIC_CATEGORY = "safety"
+
+class CustomTTCMetric(BaseMetric):
+    """自定义TTC指标类"""
+    
+    def __init__(self, data: Any):
+        """初始化指标
+        
+        Args:
+            data: 输入数据,包含场景、轨迹等信息
+        """
+        super().__init__(data)
+    
+    def calculate(self) -> Dict[str, Any]:
+        """计算指标
+        
+        Returns:
+            计算结果字典,包含指标值、评分和详细信息
+        """
+        try:
+            # 计算最小TTC值
+            min_ttc = self._calculate_min_ttc()
+            
+            # 构建返回结果
+            result =  {"CustomTTC": min_ttc}
+            
+            return result
+            
+        except Exception as e:
+            logging.error(f"评测CustomTTC指标失败: {str(e)}")
+            return {
+                "value": 0.0,
+                "score": 0,
+                "details": {"error": str(e)}
+            }
+    
+    def _calculate_min_ttc(self) -> float:
+        """Compute the minimum TTC value
+        
+        Returns:
+            The minimum TTC value
+        """
+        if self.data is None or not hasattr(self.data, 'ego_data'):
+            raise ValueError("Input data must not be empty or malformed")
+        
+        # Start from a large value
+        min_ttc = float('inf')
+        
+        # The actual TTC calculation belongs here, e.g. iterating over all
+        # timestamps and computing the TTC between the ego vehicle and
+        # every other vehicle.
+        
+        # Placeholder value standing in for the real calculation
+        min_ttc = 1.0
+        
+        return min_ttc

+ 72 - 0
custom_metrics/metric_user_safeTime_CustomTTC.py

@@ -0,0 +1,72 @@
+"""
+自定义TTC指标评测脚本示例
+
+此脚本实现了一个自定义的TTC(Time To Collision)指标评测逻辑
+"""
+from typing import Dict, Any
+import math
+from modules.lib.score import Score
+import logging
+import inspect  # 添加缺少的inspect模块导入
+
+def evaluate(data) -> Dict[str, Any]:
+    """
+    Evaluate the custom TTC metric
+    
+    Args:
+        data: evaluation data containing scenario, trajectory, etc.
+        
+    Returns:
+        The evaluation result
+    """
+
+    try:
+        # Compute the minimum TTC value
+        min_ttc = calculate_min_ttc(data.ego_data)
+        
+        # Optionally score the result with the Score class:
+        # evaluator = Score(config)   
+        # result = evaluator.evaluate(min_ttc)
+        return min_ttc
+        
+    except Exception as e:
+        logging.error(f"Failed to evaluate the CustomTTC metric: {str(e)}")
+        # Return error details on failure
+        return {
+            "details": {
+                "error": str(e)
+            }
+        }
+    
+
+def calculate_min_ttc(data):
+    """
+    Compute the minimum TTC value
+    
+    Args:
+        data: trajectory data
+        
+    Returns:
+        Dictionary with the minimum TTC value
+    """
+    # The concrete TTC calculation goes here; in practice it should be
+    # derived from the trajectory data of the vehicles involved.
+    # The following is a simplified example.
+    
+    if data is None:
+        raise ValueError("Input data must not be empty")
+    
+    # Start from a large value
+    min_ttc = float('inf')
+    
+    # Each timestamp is assumed to contain the position, velocity, etc. of
+    # the ego vehicle and the other vehicles.
+    
+    # The actual TTC calculation belongs here, e.g. iterating over all
+    # timestamps and computing the TTC between the ego vehicle and
+    # every other vehicle.
+    
+    # Placeholder value standing in for the real calculation
+    min_ttc = 1.0
+    
+    return {"CustomTTC": min_ttc}

File diff suppressed because it is too large
+ 30 - 0
logs/test.log


BIN
modules/lib/__pycache__/common.cpython-312.pyc


BIN
modules/lib/__pycache__/common.cpython-313.pyc


BIN
modules/lib/__pycache__/data_process.cpython-312.pyc


BIN
modules/lib/__pycache__/data_process.cpython-313.pyc


BIN
modules/lib/__pycache__/log.cpython-312.pyc


BIN
modules/lib/__pycache__/log.cpython-313.pyc


BIN
modules/lib/__pycache__/log_manager.cpython-312.pyc


BIN
modules/lib/__pycache__/log_manager.cpython-313.pyc


BIN
modules/lib/__pycache__/metric_registry.cpython-312.pyc


BIN
modules/lib/__pycache__/metric_registry.cpython-313.pyc


BIN
modules/lib/__pycache__/score.cpython-312.pyc


BIN
modules/lib/__pycache__/score.cpython-313.pyc


+ 185 - 0
modules/lib/common.py

@@ -0,0 +1,185 @@
+import json
+from typing import List, Dict, Tuple
+
+import numpy as np
+import pandas as pd
+
+        
+def dict2json(data_dict: Dict, file_path: str) -> None:
+    """
+    Serialize a dictionary to JSON and save it to a file.
+
+    Args:
+        data_dict (dict): the dictionary to serialize.
+        file_path (str): path of the output JSON file.
+    """
+    try:
+        with open(file_path, "w", encoding="utf-8") as json_file:
+            json.dump(data_dict, json_file, ensure_ascii=False, indent=4)
+        print(f"JSON file saved to {file_path}")
+    except Exception as e:
+        print(f"Error while saving JSON file: {e}")
+
+
+def get_interpolation(x: float, point1: Tuple[float, float], point2: Tuple[float, float]) -> float:
+    """
+    Determine the straight line through two points and evaluate it at x.
+
+    Args:
+        x: the independent variable.
+        point1: coordinates of the first point.
+        point2: coordinates of the second point.
+
+    Returns:
+        y: the dependent variable.
+    """
+    if point1[0] == point2[0]:
+        raise ValueError("point1 and point2 must have distinct x coordinates")
+    k = (point1[1] - point2[1]) / (point1[0] - point2[0])
+    b = (point1[0] * point2[1] - point1[1] * point2[0]) / (point1[0] - point2[0])
+    return x * k + b
+
+
+def get_frame_with_time(df1: pd.DataFrame, df2: pd.DataFrame) -> pd.DataFrame:
+    """
+    Merge two DataFrames on their time columns and return the result.
+
+    Args:
+        df1: DataFrame with start_time and end_time columns.
+        df2: DataFrame with simTime and simFrame columns.
+
+    Returns:
+        The merged DataFrame.
+    """
+    df1_start = df1.merge(df2[["simTime", "simFrame"]], left_on="start_time", right_on="simTime")
+    df1_start = df1_start[["start_time", "simFrame"]].rename(columns={"simFrame": "start_frame"})
+
+    df1_end = df1.merge(df2[["simTime", "simFrame"]], left_on="end_time", right_on="simTime")
+    df1_end = df1_end[["end_time", "simFrame"]].rename(columns={"simFrame": "end_frame"})
+
+    return pd.concat([df1_start, df1_end], axis=1)
+
+
+class PolynomialCurvatureFitting:
+    def __init__(self, data_path: str, degree: int = 3):
+        self.data_path = data_path
+        self.degree = degree
+        self.data = pd.read_csv(self.data_path)
+        self.points = self.data[['centerLine_x', 'centerLine_y']].values
+        self.x_data, self.y_data = self.points[:, 0], self.points[:, 1]
+
+    def curvature(self, coefficients: np.ndarray, x: float) -> float:
+        """
+        Compute the curvature of the polynomial at x.
+
+        Args:
+            coefficients: polynomial coefficients.
+            x: the independent variable.
+
+        Returns:
+            The curvature value.
+        """
+        first_derivative = np.polyder(coefficients)
+        second_derivative = np.polyder(first_derivative)
+        return np.abs(np.polyval(second_derivative, x)) / (1 + np.polyval(first_derivative, x) ** 2) ** (3 / 2)
+
+    def polynomial_fit(self, x_window: np.ndarray, y_window: np.ndarray) -> Tuple[np.ndarray, np.poly1d]:
+        """
+        Fit a polynomial to the given window of data.
+
+        Args:
+            x_window: x data inside the window.
+            y_window: y data inside the window.
+
+        Returns:
+            The polynomial coefficients and the polynomial object.
+        """
+        coefficients = np.polyfit(x_window, y_window, self.degree)
+        return coefficients, np.poly1d(coefficients)
+
+    def find_best_window(self, point: Tuple[float, float], window_size: int) -> int:
+        """
+        Find the start index of the best-matching window.
+
+        Args:
+            point: coordinates of the target point.
+            window_size: the window size.
+
+        Returns:
+            The start index of the best window.
+        """
+        x1, y1 = point
+        window_means = np.array([
+            (np.mean(self.x_data[start:start + window_size]), np.mean(self.y_data[start:start + window_size]))
+            for start in range(len(self.x_data) - window_size + 1)
+        ])
+        distances = np.sqrt((x1 - window_means[:, 0]) ** 2 + (y1 - window_means[:, 1]) ** 2)
+        return np.argmin(distances)
+
+    def find_projection(self, x_point: float, y_point: float, polynomial: np.poly1d, x_data_range: Tuple[float, float], search_step: float = 0.0001) -> Tuple[float, float, float]:
+        """
+        Find the projection of the target point onto the polynomial curve.
+
+        Args:
+            x_point: x coordinate of the target point.
+            y_point: y coordinate of the target point.
+            polynomial: the polynomial object.
+            x_data_range: the range of x values.
+            search_step: the search step size.
+
+        Returns:
+            The projection point coordinates and the minimum distance.
+        """
+        x_values = np.arange(x_data_range[0], x_data_range[1], search_step)
+        y_values = polynomial(x_values)
+        distances = np.sqrt((x_point - x_values) ** 2 + (y_point - y_values) ** 2)
+        min_idx = np.argmin(distances)
+        return x_values[min_idx], y_values[min_idx], distances[min_idx]
+
+    def fit_and_project(self, points: List[Tuple[float, float]], window_size: int) -> List[Dict]:
+        """
+        Perform polynomial fitting and projection for each point.
+
+        Args:
+            points: list of target points.
+            window_size: the window size.
+
+        Returns:
+            A list of dicts with the projection point, curvature, curvature
+            change, and minimum distance.
+        """
+        results = []
+        for point in points:
+            best_start = self.find_best_window(point, window_size)
+            x_window = self.x_data[best_start:best_start + window_size]
+            y_window = self.y_data[best_start:best_start + window_size]
+            coefficients, polynomial = self.polynomial_fit(x_window, y_window)
+            proj_x, proj_y, min_distance = self.find_projection(point[0], point[1], polynomial, (min(x_window), max(x_window)))
+            curvature_value = self.curvature(coefficients, proj_x)
+            second_derivative_coefficients = np.polyder(np.polyder(coefficients))
+            curvature_change_value = np.polyval(second_derivative_coefficients, proj_x)
+
+            results.append({
+                'projection': (proj_x, proj_y),
+                'curvHor': curvature_value,
+                'curvHorDot': curvature_change_value,
+                'coefficients': coefficients,
+                'laneOffset': min_distance
+            })
+
+        return results
+
+    
+
+if __name__ == "__main__":
+    data_path = "/home/kevin/kevin/zhaoyuan/zhaoyuan/data/raw/data/LaneMap.csv"
+    point_path = "/home/kevin/kevin/zhaoyuan/zhaoyuan/data/raw/data/EgoMap.csv"
+
+    points_data = pd.read_csv(point_path)
+    points = points_data[['posX', 'posY']].values
+
+    window_size = 4
+
+    fitting_instance = PolynomialCurvatureFitting(data_path)
+    projections = fitting_instance.fit_and_project(points, window_size)
+    print(projections)  # note: PolynomialCurvatureFitting defines no plot_results method

+ 227 - 0
modules/lib/data_process.py

@@ -0,0 +1,227 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+##################################################################
+#
+# Copyright (c) 2024 CICV, Inc. All Rights Reserved
+#
+##################################################################
+"""
+@Authors:           zhanghaiwen(zhanghaiwen@china-icv.cn)
+@Date:              2024/10/17
+@Last Modified:     2024/10/17
+@Summary:           Evaluation functions
+"""
+
+import os
+
+import numpy as np
+import pandas as pd
+
+import yaml
+
+
+
+from modules.lib.log_manager import LogManager
+
+
+
+class DataPreprocessing:
+    def __init__(self, data_path, config_path):
+        # Initialize paths and data containers
+        # self.logger = log.get_logger()
+        
+        self.data_path = data_path
+        self.case_name = os.path.basename(os.path.dirname(data_path))
+
+        self.config_path = config_path
+
+        # Initialize DataFrames
+        self.object_df = pd.DataFrame()
+        self.driver_ctrl_df = pd.DataFrame()
+        self.vehicle_sys_df = pd.DataFrame()
+        self.ego_data_df = pd.DataFrame()
+
+        # Environment data
+        self.lane_info_df = pd.DataFrame()
+        self.road_mark_df = pd.DataFrame()
+        self.road_pos_df = pd.DataFrame()
+        self.traffic_light_df = pd.DataFrame()
+        self.traffic_signal_df = pd.DataFrame()
+
+        self.vehicle_config = {}
+        self.safety_config = {}
+        self.comfort_config = {}
+        self.efficient_config = {}
+        self.function_config = {}
+        self.traffic_config = {}
+
+        # Initialize data for later processing
+        self.obj_data = {}
+        self.ego_data = {}
+        self.obj_id_list = []
+
+        # Data quality level
+        self.data_quality_level = 15
+
+        # Process mode and prepare report information
+        self._process_mode()
+        self._get_yaml_config()
+        self.report_info = self._get_report_info(self.obj_data.get(1, pd.DataFrame()))
+
+    def _process_mode(self):
+        """Handle different processing modes."""
+        self._real_process_object_df()
+
+    def _get_yaml_config(self):
+        with open(self.config_path, 'r') as f:
+            full_config = yaml.safe_load(f)
+
+        modules = ["vehicle", "T_threshold", "safety", "comfort", "efficient", "function", "traffic"]
+        
+        # 1. vehicle_config (no T_threshold merge needed)
+        self.vehicle_config = full_config[modules[0]]
+        
+        # 2. Wrap T_threshold into its own dict
+        T_threshold_config = {"T_threshold": full_config[modules[1]]}
+        
+        # 3. Merge T_threshold into each module config
+        self.safety_config = {"safety": full_config[modules[2]]}
+        self.safety_config.update(T_threshold_config)
+        
+        self.comfort_config = {"comfort": full_config[modules[3]]}
+        self.comfort_config.update(T_threshold_config)
+        
+        self.efficient_config = {"efficient": full_config[modules[4]]}
+        self.efficient_config.update(T_threshold_config)
+        
+        self.function_config = {"function": full_config[modules[5]]}
+        self.function_config.update(T_threshold_config)
+        
+        self.traffic_config = {"traffic": full_config[modules[6]]}
+        self.traffic_config.update(T_threshold_config)
+
+    @staticmethod
+    def cal_velocity(lat_v, lon_v):
+        """Calculate resultant velocity from lateral and longitudinal components."""
+        return np.sqrt(lat_v**2 + lon_v**2)
+
+    def _real_process_object_df(self):
+        """Process the object DataFrame."""
+        try:
+            # Read the merged object-state CSV
+            merged_csv_path = os.path.join(self.data_path, "merged_ObjState.csv")
+            self.object_df = pd.read_csv(
+                merged_csv_path, dtype={"simTime": float}
+            ).drop_duplicates(subset=["simTime", "simFrame", "playerId"])
+
+            data = self.object_df.copy()
+
+            # Calculate common parameters
+            data["lat_v"] = data["speedY"]
+            data["lon_v"] = data["speedX"]
+            data["v"] = data.apply(
+                lambda row: self.cal_velocity(row["lat_v"], row["lon_v"]), axis=1
+            )
+
+            # Calculate acceleration components
+            data["lat_acc"] = data["accelY"]
+            data["lon_acc"] = data["accelX"]
+            data["accel"] = data.apply(
+                lambda row: self.cal_velocity(row["lat_acc"], row["lon_acc"]), axis=1
+            )
+
+            # Drop rows with missing 'type' and reset index
+            data = data.dropna(subset=["type"])
+            data.reset_index(drop=True, inplace=True)
+            self.object_df = data.copy()
+
+            # Calculate respective parameters for each object
+            for obj_id, obj_data in data.groupby("playerId"):
+                self.obj_data[obj_id] = self._calculate_object_parameters(obj_data)
+
+            # Get object id list
+            EGO_PLAYER_ID = 1
+            self.obj_id_list = list(self.obj_data.keys())
+            self.ego_data = self.obj_data[EGO_PLAYER_ID]
+
+        except Exception as e:
+            LogManager().get_logger().error(f"Error processing object DataFrame: {e}")
+            raise
+
+    def _calculate_object_parameters(self, obj_data):
+        """Calculate additional parameters for a single object."""
+        obj_data = obj_data.copy()
+        obj_data["time_diff"] = obj_data["simTime"].diff()
+
+        obj_data["lat_acc_diff"] = obj_data["lat_acc"].diff()
+        obj_data["lon_acc_diff"] = obj_data["lon_acc"].diff()
+        obj_data["yawrate_diff"] = obj_data["speedH"].diff()
+
+        obj_data["lat_acc_roc"] = (
+            obj_data["lat_acc_diff"] / obj_data["time_diff"]
+        ).replace([np.inf, -np.inf], [9999, -9999])
+        obj_data["lon_acc_roc"] = (
+            obj_data["lon_acc_diff"] / obj_data["time_diff"]
+        ).replace([np.inf, -np.inf], [9999, -9999])
+        obj_data["accelH"] = (
+            obj_data["yawrate_diff"] / obj_data["time_diff"]
+        ).replace([np.inf, -np.inf], [9999, -9999])
+
+        return obj_data
+
+    def _get_driver_ctrl_data(self, df):
+        """
+        Process and get driver control information.
+
+        Args:
+            df: A DataFrame containing driver control data.
+
+        Returns:
+            A dictionary of driver control info.
+        """
+        driver_ctrl_data = {
+            "time_list": df["simTime"].round(2).tolist(),
+            "frame_list": df["simFrame"].tolist(),
+            "brakePedal_list": (
+                (df["brakePedal"] * 100).tolist()
+                if df["brakePedal"].max() < 1
+                else df["brakePedal"].tolist()
+            ),
+            "throttlePedal_list": (
+                (df["throttlePedal"] * 100).tolist()
+                if df["throttlePedal"].max() < 1
+                else df["throttlePedal"].tolist()
+            ),
+            "steeringWheel_list": df["steeringWheel"].tolist(),
+        }
+        return driver_ctrl_data
+
+    def _get_report_info(self, df):
+        """Extract report information from the DataFrame."""
+        mileage = self._mileage_cal(df)
+        duration = self._duration_cal(df)
+        return {"mileage": mileage, "duration": duration}
+
+    def _mileage_cal(self, df):
+        """Calculate mileage based on the driving data."""
+        if df["travelDist"].nunique() == 1:
+            # travelDist is constant (not reported); reconstruct it by
+            # integrating speed (km/h -> m/s via /3.6) over time.
+            df["time_diff"] = df["simTime"].diff().fillna(0)
+            df["avg_speed"] = (df["v"] + df["v"].shift()).fillna(0) / 2
+            df["distance_increment"] = df["avg_speed"] * df["time_diff"] / 3.6
+            df["travelDist"] = df["distance_increment"].cumsum().fillna(0)
+
+        return round(df["travelDist"].iloc[-1] - df["travelDist"].iloc[0], 2)
+
+    def _duration_cal(self, df):
+        """Calculate duration of the driving data."""
+        return df["simTime"].iloc[-1] - df["simTime"].iloc[0]

+ 115 - 0
modules/lib/log_manager.py

@@ -0,0 +1,115 @@
+import logging
+import os
+import threading
+from logging.handlers import QueueHandler, QueueListener
+from queue import Queue
+
+class LogManager:
+    _instance = None
+    _lock = threading.Lock()
+    _configured = False  # ensures the singleton is configured only once
+    
+    def __new__(cls, log_path="/home/kevin/kevin/zhaoyuan/zhaoyuan/log/app.log"):
+        with cls._lock:
+            if not cls._instance:
+                cls._instance = super().__new__(cls)
+                # Path handling: make sure the log directory exists before
+                # the file handler opens the file
+                cls._instance._full_path = log_path
+                os.makedirs(os.path.dirname(log_path) or ".", exist_ok=True)
+                cls._instance._init_logger()
+            return cls._instance
+
+    @classmethod
+    def _validate_path(cls, path):
+        """Validate the log path and create it if needed."""
+        default_path = os.path.join(os.getcwd(), "logs")
+        target_path = path or default_path
+        
+        try:
+            os.makedirs(target_path, exist_ok=True)
+            # Probe write permission
+            test_file = os.path.join(target_path, "write_test.tmp")
+            with open(test_file, "w") as f:
+                f.write("permission_test")
+            os.remove(test_file)
+            return target_path
+        except PermissionError:
+            logging.error(f"Insufficient permissions for {target_path}, using default")
+            os.makedirs(default_path, exist_ok=True)
+            return default_path
+        except Exception as e:
+            logging.error(f"Path error: {str(e)}, using default")
+            return default_path
+
+    @staticmethod
+    def _sanitize_filename(name):
+        """Strip characters that are illegal in file names."""
+        invalid_chars = {'/', '\\', ':', '*', '?', '"', '<', '>', '|'}
+        cleaned = ''.join(c for c in name if c not in invalid_chars)
+        return cleaned[:50]  # cap the file-name length
+
+    def _init_logger(self):
+        """Initialize the logging system."""
+        self.log_queue = Queue(-1)
+        self.logger = logging.getLogger("GlobalLogger")
+        self.logger.setLevel(logging.DEBUG)
+
+        if not self.logger.handlers:
+            # Formatter with thread name and line number
+            formatter = logging.Formatter(
+                "[%(asctime)s][%(levelname)s][%(threadName)s][%(filename)s:%(lineno)d] %(message)s"
+            )
+            
+            # File handler (UTF-8 encoded)
+            file_handler = logging.FileHandler(
+                self._full_path, 
+                encoding='utf-8',
+                delay=True  # defer opening the file until the first write
+            )
+            file_handler.setFormatter(formatter)
+            
+            # Console handler (ERROR level only)
+            console_handler = logging.StreamHandler()
+            console_handler.setLevel(logging.ERROR)
+            console_handler.setFormatter(formatter)
+
+            # Asynchronous listener
+            self.listener = QueueListener(
+                self.log_queue,
+                file_handler,
+                console_handler,
+                respect_handler_level=True
+            )
+            self.listener.start()
+
+            # Queue handler wiring
+            queue_handler = QueueHandler(self.log_queue)
+            queue_handler.setLevel(logging.DEBUG)
+            self.logger.addHandler(queue_handler)
+            self.logger.propagate = False
+
+    def get_logger(self):
+        """Return the thread-safe logger."""
+        return self.logger
+
+    @classmethod
+    def shutdown(cls):
+        """Shut down the logging system safely."""
+        if cls._instance:
+            cls._instance.listener.stop()
+            cls._instance = None
+
+# Usage example
+if __name__ == "__main__":
+    # Custom path and file name
+    custom_logger = LogManager(
+        log_path="/home/kevin/kevin/zhaoyuan/zhaoyuan/log/runtime.log"
+    ).get_logger()
+    
+    custom_logger.info("Custom logger configured successfully")
+    
+    # Default configuration
+    default_logger = LogManager().get_logger()
+    default_logger.warning("Using default configuration")
+    
+    # Safe shutdown
+    LogManager.shutdown()
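The `_init_logger` setup above follows the standard asynchronous logging pattern: a `QueueHandler` enqueues records without blocking, and a `QueueListener` drains them on a background thread. A minimal self-contained sketch of the same wiring (the in-memory sink is only for demonstration):

```python
import logging
from logging.handlers import QueueHandler, QueueListener
from queue import Queue

# Records enter the queue through QueueHandler; a QueueListener drains
# them on a background thread, so logging calls never block on I/O.
collected = []

class ListHandler(logging.Handler):
    """Demo sink that records messages in memory."""
    def emit(self, record):
        collected.append(record.getMessage())

log_queue = Queue(-1)
listener = QueueListener(log_queue, ListHandler(), respect_handler_level=True)
listener.start()

logger = logging.getLogger("async_sketch")
logger.setLevel(logging.DEBUG)
logger.addHandler(QueueHandler(log_queue))
logger.propagate = False

logger.info("queued message")
listener.stop()  # joins the worker thread, flushing pending records
```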

+ 131 - 0
modules/lib/metric_registry.py

@@ -0,0 +1,131 @@
+"""Metric registration module.
+
+Provides the infrastructure for registering and managing metrics, including
+the BaseMetric base class and the MetricRegistry class. Every custom metric
+should subclass BaseMetric and implement calculate().
+"""
+from typing import Dict, Any, List, Type, Optional
+import logging
+import inspect
+import importlib.util
+from pathlib import Path
+
+class BaseMetric:
+    """Base class for all metrics.
+    
+    Custom metrics must subclass this and implement calculate().
+    """
+    
+    def __init__(self, data: Any):
+        """Initialize the metric.
+        
+        Args:
+            data: input data containing scenario, trajectory, and related info
+        """
+        self.data = data
+    
+    def calculate(self) -> Dict[str, Any]:
+        """Compute the metric.
+        
+        Returns:
+            Result dict with the metric value, score, and details.
+        """
+        raise NotImplementedError("Subclasses must implement calculate()")
+
+
+class MetricRegistry:
+    """Metric registry.
+    
+    Registers and manages every available metric (built-in and custom).
+    """
+    
+    def __init__(self, logger: Optional[logging.Logger] = None):
+        """Initialize the registry.
+        
+        Args:
+            logger: logger instance; a default one is created when None
+        """
+        self.metrics: Dict[str, Type[BaseMetric]] = {}
+        self.logger = logger or logging.getLogger(__name__)
+    
+    def register(self, metric_key: str, metric_class: Type[BaseMetric]) -> None:
+        """Register a metric class.
+        
+        Args:
+            metric_key: metric key, normally in 'level1.level2.level3' form
+            metric_class: metric class; must be a BaseMetric subclass
+        """
+        if not issubclass(metric_class, BaseMetric):
+            raise TypeError(f"Metric class {metric_class.__name__} must subclass BaseMetric")
+        
+        self.metrics[metric_key] = metric_class
+        self.logger.info(f"Registered metric: {metric_key}")
+    
+    def get_metric(self, metric_key: str) -> Optional[Type[BaseMetric]]:
+        """Return the metric class for a key.
+        
+        Args:
+            metric_key: metric key
+            
+        Returns:
+            The metric class, or None if not registered.
+        """
+        return self.metrics.get(metric_key)
+    
+    def get_all_metrics(self) -> Dict[str, Type[BaseMetric]]:
+        """Return all registered metric classes.
+        
+        Returns:
+            Dict of metric classes keyed by metric key.
+        """
+        return self.metrics
+    
+    def load_metrics_from_directory(self, directory_path: Path) -> List[str]:
+        """Load metric classes from a directory of scripts.
+        
+        Args:
+            directory_path: directory containing metric scripts
+            
+        Returns:
+            List of metric keys that were loaded successfully.
+        """
+        if not directory_path.exists() or not directory_path.is_dir():
+            self.logger.warning(f"Metric directory does not exist: {directory_path}")
+            return []
+        
+        loaded_metrics = []
+        for py_file in directory_path.glob("*.py"):
+            try:
+                # Import the module dynamically
+                module_name = f"custom_metric_{py_file.stem}"
+                spec = importlib.util.spec_from_file_location(module_name, py_file)
+                module = importlib.util.module_from_spec(spec)
+                spec.loader.exec_module(module)
+                
+                # Find BaseMetric subclasses in the module
+                for name, obj in inspect.getmembers(module):
+                    if (inspect.isclass(obj) and 
+                        issubclass(obj, BaseMetric) and 
+                        obj != BaseMetric):
+                        
+                        # Metric category (module-level override, defaults to 'custom')
+                        category = getattr(module, 'METRIC_CATEGORY', 'custom')
+                        
+                        # Derive the metric key from the file name
+                        if py_file.stem.startswith('metric_'):
+                            parts = py_file.stem[len('metric_'):].split('_')
+                            if len(parts) >= 3:
+                                level1 = parts[0] if category == 'custom' else category
+                                level2 = parts[1]
+                                level3 = parts[2]
+                                metric_key = f"{level1}.{level2}.{level3}"
+                                
+                                # Register the metric class
+                                self.register(metric_key, obj)
+                                loaded_metrics.append(metric_key)
+                                
+                                # Register only one metric class per file
+                                break
+            except Exception as e:
+                self.logger.error(f"Failed to load metric file {py_file}: {str(e)}")
+        
+        return loaded_metrics
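`load_metrics_from_directory` derives the metric key from the file name: with `METRIC_CATEGORY = 'custom'`, a file named `metric_safety_safeTime_CustomTTC.py` (as in this commit's `custom_metrics/` directory) registers under `safety.safeTime.CustomTTC`. A minimal sketch of what such a file could contain; the TTC formula, field names, and scoring rule here are illustrative only, and the real class must subclass `BaseMetric`:

```python
from typing import Any, Dict

# Module-level category picked up by the loader via getattr(...)
METRIC_CATEGORY = "custom"

class CustomTTC:  # would subclass BaseMetric in the real module
    def __init__(self, data: Any):
        self.data = data

    def calculate(self) -> Dict[str, Any]:
        # Illustrative time-to-collision: gap divided by closing speed
        gap = self.data["gap_m"]
        closing_speed = self.data["closing_speed_mps"]
        ttc = gap / closing_speed if closing_speed > 0 else float("inf")
        return {"value": ttc, "score": 100 if ttc > 3.0 else 0}

# Illustrative invocation
result = CustomTTC({"gap_m": 40.0, "closing_speed_mps": 10.0}).calculate()
```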

+ 243 - 0
modules/lib/score.py

@@ -0,0 +1,243 @@
+import json
+
+from modules.lib.log_manager import LogManager
+
+
+class Score:  
+    def __init__(self, yaml_config, module_name: str):
+        self.logger = LogManager().get_logger()  # global logger instance
+        self.calculated_metrics = None
+        self.config = yaml_config
+        self.module_config = None
+        self.module_name = module_name
+        self.t_threshold = None
+        self.process_config(self.config)
+        self.level_3_merics = self._extract_level_3_metrics(self.module_config) 
+        self.result = {}  
+    
+    def process_config(self, config_dict):
+        t_threshold = config_dict.get("T_threshold")
+        if t_threshold is None:
+            raise ValueError("Missing 'T_threshold' key in config")
+
+        self.module_config = config_dict[self.module_name]
+        self.t_threshold = t_threshold
+        self.logger.info(f'Module name: {self.module_name}')
+        self.logger.info(f'Module config: {self.module_config}')
+        self.logger.info(f'T_threshold: {t_threshold}')
+    def _extract_level_3_metrics(self, d):
+        names = []
+        for key, value in d.items():
+            if isinstance(value, dict):  # recurse into nested dicts
+                names.extend(self._extract_level_3_metrics(value))
+            elif key == 'name':  # collect every 'name' value
+                names.append(value)
+        return names
+                         
+    def is_within_range(self, value, min_val, max_val):  
+        return min_val <= value <= max_val  
+  
+    def evaluate_level_3(self, metrics):  
+        result3 = {}  
+        name = metrics.get('name')  
+        priority = metrics.get('priority')  
+        max_val = metrics.get('max')  
+        min_val = metrics.get('min')
+
+        metric_value = self.calculated_metrics.get(name)
+        result3[name] = {  
+            'result': True,  
+            'priority': priority 
+        } 
+        if metric_value is None:  
+            return result3  
+  
+        if not self.is_within_range(metric_value, min_val, max_val):
+            # An out-of-range value fails the metric regardless of priority;
+            # priority-based tolerance is applied during level-2 aggregation.
+            result3[name]['result'] = False  
+  
+        return result3  
+  
+    def evaluate_level_2(self, metrics):  
+        result2 = {}  
+        name = metrics.get('name')  
+        priority = metrics.get('priority') 
+        result2[name] = {}  
+  
+        for metric, sub_metrics in metrics.items():  
+            if metric not in ['name', 'priority']:  
+                result2[name].update(self.evaluate_level_3(sub_metrics))  
+  
+        # Aggregate level-3 results against the T0/T1/T2 thresholds
+        priority_0_count = sum(1 for v in result2[name].values() if v['priority'] == 0 and not v['result']) 
+        priority_1_count = sum(1 for v in result2[name].values() if v['priority'] == 1 and not v['result']) 
+        priority_2_count = sum(1 for v in result2[name].values() if v['priority'] == 2 and not v['result']) 
+
+        if priority_0_count > self.t_threshold['T0_threshold']:  
+            result2[name]['result'] = False
+        elif priority_1_count > self.t_threshold['T1_threshold']:  
+            for metric in result2[name].values():  
+                metric['result'] = False 
+            result2[name]['result'] = False
+        elif priority_2_count > self.t_threshold['T2_threshold']:  
+            for metric in result2[name].values():  
+                metric['result'] = False
+            result2[name]['result'] = False
+        else:  
+            result2[name]['result'] = True  # Default to True unless overridden  
+        result2[name]['priority'] = priority 
+        result2[name]['priority_0_count'] = priority_0_count  
+        result2[name]['priority_1_count'] = priority_1_count
+        result2[name]['priority_2_count'] = priority_2_count  
+  
+        return result2  
+  
+    def evaluate_level_1(self): 
+
+        name = self.module_config.get('name')
+        priority = self.module_config.get('priority') 
+        result1 = {} 
+        result1[name] = {}  
+        for metric, metrics in self.module_config.items():
+            if metric not in ['name', 'priority']:  
+                result1[name].update(self.evaluate_level_2(metrics))
+                
+        # Aggregate level-2 results against the T0/T1/T2 thresholds
+        priority_0_count = sum(1 for v in result1[name].values() if v['priority'] == 0 and not v['result']) 
+        priority_1_count = sum(1 for v in result1[name].values() if v['priority'] == 1 and not v['result']) 
+        priority_2_count = sum(1 for v in result1[name].values() if v['priority'] == 2 and not v['result']) 
+
+        if priority_0_count > self.t_threshold['T0_threshold']:  
+            result1[name]['result'] = False
+        elif priority_1_count > self.t_threshold['T1_threshold']:  
+            for metric in result1[name].values():  
+                metric['result'] = False 
+            result1[name]['result'] = False
+        elif priority_2_count > self.t_threshold['T2_threshold']:  
+            for metric in result1[name].values():  
+                metric['result'] = False
+            result1[name]['result'] = False
+        else:  
+            result1[name]['result'] = True  # Default to True unless overridden  
+        result1[name]['priority'] = priority 
+        result1[name]['priority_0_count'] = priority_0_count  
+        result1[name]['priority_1_count'] = priority_1_count
+        result1[name]['priority_2_count'] = priority_2_count  
+
+        return result1  
+  
+    def evaluate(self, calculated_metrics):
+        self.calculated_metrics = calculated_metrics  
+        self.result = self.evaluate_level_1()  
+        return self.result 
+
+    def evaluate_single_case(self, case_name, priority, json_dict):
+        name = case_name
+        result = {name: {}}
+        # Aggregate the per-metric results against the T0/T1/T2 thresholds
+        priority_0_count = sum(1 for v in json_dict.values() if v['priority'] == 0 and not v['result']) 
+        priority_1_count = sum(1 for v in json_dict.values() if v['priority'] == 1 and not v['result']) 
+        priority_2_count = sum(1 for v in json_dict.values() if v['priority'] == 2 and not v['result']) 
+
+        if (priority_0_count > self.t_threshold['T0_threshold']
+                or priority_1_count > self.t_threshold['T1_threshold']
+                or priority_2_count > self.t_threshold['T2_threshold']):
+            result[name]['result'] = False
+        else:  
+            result[name]['result'] = True  # Default to True unless overridden  
+        result[name]['priority'] = priority 
+        result[name]['priority_0_count'] = priority_0_count  
+        result[name]['priority_1_count'] = priority_1_count
+        result[name]['priority_2_count'] = priority_2_count  
+        result[name].update(json_dict)
+        
+        return result  
+
+def evaluate_single_case_back(case_name, priority, json_dict, t_threshold=None):
+    """Evaluate a single case (module-level helper)."""
+    # The fallback thresholds below are illustrative defaults; pass the
+    # project's T_threshold config in real use.
+    t_threshold = t_threshold or {"T0_threshold": 0, "T1_threshold": 3, "T2_threshold": 5}
+    result = {case_name: {}}
+    priority_counts = {p: sum(1 for v in json_dict.values() if v['priority'] == p and not v['result'])
+                       for p in [0, 1, 2]}
+
+    if (priority_counts[0] > t_threshold['T0_threshold']
+            or priority_counts[1] > t_threshold['T1_threshold']
+            or priority_counts[2] > t_threshold['T2_threshold']):
+        result[case_name]['result'] = False
+    else:  
+        result[case_name]['result'] = True
+
+    result[case_name]['priority'] = priority
+    result[case_name]['priority_0_count'] = priority_counts[0]
+    result[case_name]['priority_1_count'] = priority_counts[1]
+    result[case_name]['priority_2_count'] = priority_counts[2]
+    result[case_name].update(json_dict)  # merge the raw per-metric data
+    return result 
+  
+  
+def main():  
+    report_path = r'/home/kevin/kevin/zhaoyuan/zhaoyuan/result/data_zhaoyuan/data_zhaoyuan_single_report.json'  
+    
+    with open(report_path, 'r') as file:
+        data = json.load(file)
+
+    result = evaluate_single_case_back("case1", 0, data)
+    print(result)
+  
+  
+if __name__ == '__main__':  
+    main()
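The level-1 and level-2 evaluations above reduce to one aggregation rule: count the failed metrics at each priority and compare each count with its T0/T1/T2 threshold. A standalone sketch of that rule; the threshold values and metric names are illustrative only:

```python
# Minimal sketch of the threshold aggregation used by evaluate_level_1/2:
# count failed metrics per priority, then compare with T0/T1/T2.
def aggregate(metrics: dict, t_threshold: dict) -> bool:
    counts = {
        p: sum(1 for v in metrics.values() if v["priority"] == p and not v["result"])
        for p in (0, 1, 2)
    }
    if counts[0] > t_threshold["T0_threshold"]:
        return False
    if counts[1] > t_threshold["T1_threshold"]:
        return False
    if counts[2] > t_threshold["T2_threshold"]:
        return False
    return True

thresholds = {"T0_threshold": 0, "T1_threshold": 3, "T2_threshold": 5}
metrics = {
    "TTC": {"priority": 0, "result": False},
    "THW": {"priority": 1, "result": True},
}
overall = aggregate(metrics, thresholds)  # one priority-0 failure exceeds T0_threshold = 0
```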

BIN
modules/metric/__pycache__/comfort.cpython-312.pyc


BIN
modules/metric/__pycache__/comfort.cpython-313.pyc


BIN
modules/metric/__pycache__/efficient.cpython-312.pyc


BIN
modules/metric/__pycache__/efficient.cpython-313.pyc


BIN
modules/metric/__pycache__/function.cpython-312.pyc


BIN
modules/metric/__pycache__/function.cpython-313.pyc


BIN
modules/metric/__pycache__/safety.cpython-312.pyc


BIN
modules/metric/__pycache__/safety.cpython-313.pyc


BIN
modules/metric/__pycache__/traffic.cpython-312.pyc


BIN
modules/metric/__pycache__/traffic.cpython-313.pyc


+ 560 - 0
modules/metric/comfort.py

@@ -0,0 +1,560 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+##################################################################
+#
+# Copyright (c) 2023 CICV, Inc. All Rights Reserved
+#
+##################################################################
+"""
+@Authors:           zhanghaiwen(zhanghaiwen@china-icv.cn), yangzihao(yangzihao@china-icv.cn)
+@Date:              2023/06/25
+@Last Modified:     2023/06/25
+@Summary:           Comfort metrics
+"""
+
+import sys
+import math
+import pandas as pd
+import numpy as np
+import scipy.signal
+from pathlib import Path 
+from typing import Dict, List, Any, Optional, Callable, Union, Tuple
+
+from modules.lib.score import Score
+from modules.lib.common import get_interpolation, get_frame_with_time
+from modules.lib import data_process
+
+from modules.lib.log_manager import LogManager
+
+COMFORT_INFO = [
+    "simTime",
+    "simFrame",
+    "speedX",
+    "speedY",
+    "accelX",
+    "accelY",
+    "curvHor",
+    "lightMask",
+    "v",
+    "lat_acc",
+    "lon_acc",
+    "time_diff",
+    "lon_acc_diff",
+    "lon_acc_roc",
+    "speedH",
+    "accelH",
+]
+# ----------------------
+# Standalone metric functions
+# ----------------------
+def weaving(data_processed) -> dict:
+    """Count weaving (zigzag) events."""
+    comfort = ComfortCalculator(data_processed)
+    zigzag_count = comfort.calculate_zigzag_count()
+    return {"weaving": float(zigzag_count)}
+
+def shake(data_processed) -> dict:
+    """Count shake events."""
+    comfort = ComfortCalculator(data_processed)
+    shake_count = comfort.calculate_shake_count()
+    return {"shake": float(shake_count)}
+
+def cadence(data_processed) -> dict:
+    """Count cadence (jerky speed-hold) events."""
+    comfort = ComfortCalculator(data_processed)
+    cadence_count = comfort.calculate_cadence_count()
+    return {"cadence": float(cadence_count)}
+
+def slamBrake(data_processed) -> dict:
+    """Count slam-brake events."""
+    comfort = ComfortCalculator(data_processed)
+    slam_brake_count = comfort.calculate_slam_brake_count()
+    return {"slamBrake": float(slam_brake_count)}
+
+def slamAccelerate(data_processed) -> dict:
+    """Count slam-acceleration events."""
+    comfort = ComfortCalculator(data_processed)
+    slam_accel_count = comfort.calculate_slam_accel_count()
+    return {"slamAccelerate": float(slam_accel_count)}
+
+
+# Peak/valley detection decorator
+def peak_valley_decorator(method):
+    def wrapper(self, *args, **kwargs):
+        peak_valley = self._peak_valley_determination(self.df)
+        pv_list = self.df.loc[peak_valley, ['simTime', 'speedH']].values.tolist()
+        if len(pv_list) != 0:
+            flag = True
+            p_last = pv_list[0]
+
+            for i in range(1, len(pv_list)):
+                p_curr = pv_list[i]
+
+                if self._peak_valley_judgment(p_last, p_curr):
+                    # method(self, p_curr, p_last)
+                    method(self, p_curr, p_last, flag, *args, **kwargs)
+                else:
+                    p_last = p_curr
+
+            return method
+        else:
+            flag = False
+            p_curr = [0, 0]
+            p_last = [0, 0]
+            method(self, p_curr, p_last, flag, *args, **kwargs)
+            return method
+
+    return wrapper
+
+
+class ComfortRegistry:
+    """Registry of comfort metrics."""
+    
+    def __init__(self, data_processed):
+        self.logger = LogManager().get_logger()  # global logger instance
+        self.data = data_processed
+        self.comfort_config = data_processed.comfort_config["comfort"]
+        self.metrics = self._extract_metrics(self.comfort_config)
+        self._registry = self._build_registry()
+    
+    def _extract_metrics(self, config_node: dict) -> list:
+        """Extract metric names via DFS traversal of the config tree."""
+        metrics = []
+        def _recurse(node):
+            if isinstance(node, dict):
+                if 'name' in node and not any(isinstance(v, dict) for v in node.values()):
+                    metrics.append(node['name'])
+                for v in node.values():
+                    _recurse(v)
+        _recurse(config_node)
+        self.logger.info(f'Comfort metrics to evaluate: {metrics}')
+        return metrics
+    
+    def _build_registry(self) -> dict:
+        """Auto-register metric functions by name."""
+        registry = {}
+        for metric_name in self.metrics:
+            try:
+                registry[metric_name] = globals()[metric_name]
+            except KeyError:
+                self.logger.error(f"Metric function not implemented: {metric_name}")
+        return registry
+    
+    def batch_execute(self) -> dict:
+        """Execute all registered metric functions."""
+        results = {}
+        for name, func in self._registry.items():
+            try:
+                result = func(self.data)
+                results.update(result)
+            except Exception as e:
+                self.logger.error(f"{name} failed: {str(e)}", exc_info=True)
+                results[name] = None
+        self.logger.info(f'Comfort metric results: {results}')
+        return results
+
+
+class ComfortCalculator:
+    """Core calculator for comfort metrics."""
+    
+    def __init__(self, data_processed):
+        self.data_processed = data_processed
+        self.logger = LogManager().get_logger()
+        
+        self.data = data_processed.ego_data
+        self.ego_df = pd.DataFrame()
+        self.discomfort_df = pd.DataFrame(columns=['start_time', 'end_time', 'start_frame', 'end_frame', 'type'])
+        
+        self.time_list = self.data['simTime'].values.tolist()
+        self.frame_list = self.data['simFrame'].values.tolist()
+        
+        self.zigzag_count = 0
+        self.shake_count = 0
+        self.cadence_count = 0
+        self.slam_brake_count = 0
+        self.slam_accel_count = 0
+        
+        self.zigzag_time_list = []
+        self.zigzag_stre_list = []
+        self.cur_ego_path_list = []
+        self.curvature_list = []
+        
+        self._initialize_data()
+    
+    def _initialize_data(self):
+        """初始化数据"""
+        self.ego_df = self.data[COMFORT_INFO].copy()
+        self.df = self.ego_df.reset_index(drop=True)
+        self._prepare_comfort_parameters()
+    
+    def _prepare_comfort_parameters(self):
+        """准备舒适性计算所需参数"""
+        # 计算加减速阈值
+        self.ego_df['ip_acc'] = self.ego_df['v'].apply(get_interpolation, point1=[18, 4], point2=[72, 2])
+        self.ego_df['ip_dec'] = self.ego_df['v'].apply(get_interpolation, point1=[18, -5], point2=[72, -3.5])
+        self.ego_df['slam_brake'] = (self.ego_df['lon_acc'] - self.ego_df['ip_dec']).apply(
+            lambda x: 1 if x < 0 else 0)
+        self.ego_df['slam_accel'] = (self.ego_df['lon_acc'] - self.ego_df['ip_acc']).apply(
+            lambda x: 1 if x > 0 else 0)
+        self.ego_df['cadence'] = self.ego_df.apply(
+            lambda row: self._cadence_process_new(row['lon_acc'], row['ip_acc'], row['ip_dec']), axis=1)
+
+        # 计算曲率相关参数
+        self.ego_df['cur_ego_path'] = self.ego_df.apply(self._cal_cur_ego_path, axis=1)
+        self.ego_df['curvHor'] = self.ego_df['curvHor'].astype('float')
+        self.ego_df['cur_diff'] = (self.ego_df['cur_ego_path'] - self.ego_df['curvHor']).abs()
+        self.ego_df['R'] = self.ego_df['curvHor'].apply(lambda x: 10000 if x == 0 else 1 / x)
+        self.ego_df['R_ego'] = self.ego_df['cur_ego_path'].apply(lambda x: 10000 if x == 0 else 1 / x)
+        self.ego_df['R_diff'] = (self.ego_df['R_ego'] - self.ego_df['R']).abs()
+        
+        self.cur_ego_path_list = self.ego_df['cur_ego_path'].values.tolist()
+        self.curvature_list = self.ego_df['curvHor'].values.tolist()
+    
+    def _cal_cur_ego_path(self, row):
+        """计算车辆轨迹曲率"""
+        try:
+            divide = (row['speedX'] ** 2 + row['speedY'] ** 2) ** (3 / 2)
+            if not divide:
+                res = None
+            else:
+                res = (row['speedX'] * row['accelY'] - row['speedY'] * row['accelX']) / divide
+        except Exception:
+            res = None
+        return res
+    
+    def _peak_valley_determination(self, df):
+        """确定角速度的峰谷"""
+        peaks, _ = scipy.signal.find_peaks(df['speedH'], height=0.01, distance=1, prominence=0.01)
+        valleys, _ = scipy.signal.find_peaks(-df['speedH'], height=0.01, distance=1, prominence=0.01)
+        peak_valley = sorted(list(peaks) + list(valleys))
+        return peak_valley
+    
+    def _peak_valley_judgment(self, p_last, p_curr, tw=10000, avg=0.02):
+        """判断峰谷是否满足蛇行条件"""
+        t_diff = p_curr[0] - p_last[0]
+        v_diff = abs(p_curr[1] - p_last[1])
+        s = p_curr[1] * p_last[1]
+
+        zigzag_flag = t_diff < tw and v_diff > avg and s < 0
+        if zigzag_flag and ([p_last[0], p_curr[0]] not in self.zigzag_time_list):
+            self.zigzag_time_list.append([p_last[0], p_curr[0]])
+        return zigzag_flag
+    
+    def _cadence_process_new(self, lon_acc, ip_acc, ip_dec):
+        """处理顿挫数据"""
+        # abs(lon_acc) < 1 已涵盖 lon_acc == 0 的情形
+        if abs(lon_acc) < 1 or lon_acc > ip_acc or lon_acc < ip_dec:
+            return np.nan
+        elif 0 < lon_acc < ip_acc:
+            return 1
+        elif ip_dec < lon_acc < 0:
+            return -1
+        else:
+            return 0
+    
+    @peak_valley_decorator
+    def _zigzag_count_func(self, p_curr, p_last, flag=True):
+        """计算蛇行次数"""
+        if flag:
+            self.zigzag_count += 1
+    
+    @peak_valley_decorator
+    def _cal_zigzag_strength(self, p_curr, p_last, flag=True):
+        """计算蛇行强度"""
+        if flag:
+            v_diff = abs(p_curr[1] - p_last[1])
+            t_diff = p_curr[0] - p_last[0]
+            self.zigzag_stre_list.append(v_diff / t_diff)  # 平均角加速度
+        else:
+            self.zigzag_stre_list = []
+    
+    def calculate_zigzag_count(self):
+        """计算蛇行指标"""
+        self._zigzag_count_func()
+        return self.zigzag_count
+    
+    def calculate_shake_count(self):
+        """计算晃动指标"""
+        self._shake_detector()
+        return self.shake_count
+    
+    def calculate_cadence_count(self):
+        """计算顿挫指标"""
+        self._cadence_detector()
+        return self.cadence_count
+    
+    def calculate_slam_brake_count(self):
+        """计算急刹车指标"""
+        self._slam_brake_detector()
+        return self.slam_brake_count
+    
+    def calculate_slam_accel_count(self):
+        """计算急加速指标"""
+        self._slam_accel_detector()
+        return self.slam_accel_count
+    
+    def _shake_detector(self, Cr_diff=0.05, T_diff=0.39):
+        """晃动检测器"""
+        time_list = []
+        frame_list = []
+
+        df = self.ego_df.copy()
+        df = df[df['cur_diff'] > Cr_diff]
+        df['frame_ID_diff'] = df['simFrame'].diff()
+        filtered_df = df[df.frame_ID_diff > T_diff]
+
+        row_numbers = filtered_df.index.tolist()
+        cut_column = pd.cut(df.index, bins=row_numbers)
+
+        grouped = df.groupby(cut_column)
+        dfs = {}
+        for name, group in grouped:
+            dfs[name] = group.reset_index(drop=True)
+
+        for name, df_group in dfs.items():
+            # 直道,未主动换道
+            df_group['curvHor'] = df_group['curvHor'].abs()
+            df_group_straight = df_group[(df_group.lightMask == 0) & (df_group.curvHor < 0.001)]
+            if not df_group_straight.empty:
+                time_list.extend(df_group_straight['simTime'].values)
+                frame_list.extend(df_group_straight['simFrame'].values)
+                self.shake_count = self.shake_count + 1
+
+            # 打转向灯,道路为直道
+            df_group_change_lane = df_group[(df_group['lightMask'] != 0) & (df_group['curvHor'] < 0.001)]
+            df_group_change_lane_data = df_group_change_lane[df_group_change_lane.cur_diff > Cr_diff + 0.2]
+            if not df_group_change_lane_data.empty:
+                time_list.extend(df_group_change_lane_data['simTime'].values)
+                frame_list.extend(df_group_change_lane_data['simFrame'].values)
+                self.shake_count = self.shake_count + 1
+
+            # 转弯,打转向灯
+            df_group_turn = df_group[(df_group['lightMask'] != 0) & (df_group['curvHor'].abs() > 0.001)]
+            df_group_turn_data = df_group_turn[df_group_turn.cur_diff.abs() > Cr_diff + 0.1]
+            if not df_group_turn_data.empty:
+                time_list.extend(df_group_turn_data['simTime'].values)
+                frame_list.extend(df_group_turn_data['simFrame'].values)
+                self.shake_count = self.shake_count + 1
+
+        # 分组处理
+        TIME_RANGE = 1
+        t_list = time_list
+        f_list = frame_list
+        group_time = []
+        group_frame = []
+        sub_group_time = []
+        sub_group_frame = []
+        
+        if len(f_list) > 0:
+            for i in range(len(f_list)):
+                if not sub_group_time or t_list[i] - t_list[i - 1] <= TIME_RANGE:
+                    sub_group_time.append(t_list[i])
+                    sub_group_frame.append(f_list[i])
+                else:
+                    group_time.append(sub_group_time)
+                    group_frame.append(sub_group_frame)
+                    sub_group_time = [t_list[i]]
+                    sub_group_frame = [f_list[i]]
+
+            group_time.append(sub_group_time)
+            group_frame.append(sub_group_frame)
+
+        # 输出图表值
+        shake_time = [[g[0], g[-1]] for g in group_time]
+        shake_frame = [[g[0], g[-1]] for g in group_frame]
+        self.shake_count = len(shake_time)  # 以分组后的事件段数为最终晃动次数(覆盖上面循环中的逐条累加)
+
+        if shake_time:
+            time_df = pd.DataFrame(shake_time, columns=['start_time', 'end_time'])
+            frame_df = pd.DataFrame(shake_frame, columns=['start_frame', 'end_frame'])
+            discomfort_df = pd.concat([time_df, frame_df], axis=1)
+            discomfort_df['type'] = 'shake'
+            self.discomfort_df = pd.concat([self.discomfort_df, discomfort_df], ignore_index=True)
+
+        return time_list
+    
+    def _cadence_detector(self):
+        """顿挫检测器"""
+        data = self.ego_df[['simTime', 'simFrame', 'lon_acc', 'lon_acc_roc', 'cadence']].copy()
+        time_list = data['simTime'].values.tolist()
+
+        # 注意:任何值与 np.nan 作 != 比较恒为 True,须用 notna() 过滤
+        data = data[data['cadence'].notna()]
+        data['cadence_diff'] = data['cadence'].diff()
+        data.dropna(subset=['cadence_diff'], inplace=True)
+        data = data[data['cadence_diff'] != 0]
+
+        t_list = data['simTime'].values.tolist()
+        f_list = data['simFrame'].values.tolist()
+
+        TIME_RANGE = 1
+        group_time = []
+        group_frame = []
+        sub_group_time = []
+        sub_group_frame = []
+        for i in range(len(f_list)):
+            if not sub_group_time or t_list[i] - t_list[i - 1] <= TIME_RANGE:  # 特征点相邻一秒内的,算作同一组顿挫
+                sub_group_time.append(t_list[i])
+                sub_group_frame.append(f_list[i])
+            else:
+                group_time.append(sub_group_time)
+                group_frame.append(sub_group_frame)
+                sub_group_time = [t_list[i]]
+                sub_group_frame = [f_list[i]]
+
+        group_time.append(sub_group_time)
+        group_frame.append(sub_group_frame)
+        group_time = [g for g in group_time if len(g) >= 1]  # 有一次特征点则算作一次顿挫
+        group_frame = [g for g in group_frame if len(g) >= 1]
+
+        # 输出图表值
+        cadence_time = [[g[0], g[-1]] for g in group_time]
+        cadence_frame = [[g[0], g[-1]] for g in group_frame]
+
+        if cadence_time:
+            time_df = pd.DataFrame(cadence_time, columns=['start_time', 'end_time'])
+            frame_df = pd.DataFrame(cadence_frame, columns=['start_frame', 'end_frame'])
+            discomfort_df = pd.concat([time_df, frame_df], axis=1)
+            discomfort_df['type'] = 'cadence'
+            self.discomfort_df = pd.concat([self.discomfort_df, discomfort_df], ignore_index=True)
+
+        # 以各顿挫组的起止时间为区间,重新统计落入区间内的时间点
+        cadence_time_list = [time for pair in cadence_time for time in time_list if pair[0] <= time <= pair[1]]
+
+        stre_list = []
+        freq_list = []
+        for g in group_time:
+            # calculate strength
+            g_df = data[data['simTime'].isin(g)]
+            strength = g_df['lon_acc'].abs().mean()
+            stre_list.append(strength)
+
+            # calculate frequency(单点组 t_delta 为 0,需防止除零)
+            cnt = len(g)
+            t_start = g_df['simTime'].iloc[0]
+            t_end = g_df['simTime'].iloc[-1]
+            t_delta = t_end - t_start
+            frequency = cnt / t_delta if t_delta > 0 else 0
+            freq_list.append(frequency)
+
+        self.cadence_count = len(freq_list)
+        cadence_stre = sum(stre_list) / len(stre_list) if stre_list else 0
+
+        return cadence_time_list
+    
+    def _slam_brake_detector(self):
+        """急刹车检测器"""
+        data = self.ego_df[['simTime', 'simFrame', 'lon_acc', 'lon_acc_roc', 'ip_dec', 'slam_brake']].copy()
+        res_df = data[data['slam_brake'] == 1]
+        t_list = res_df['simTime'].values
+        f_list = res_df['simFrame'].values.tolist()
+
+        TIME_RANGE = 1
+        group_time = []
+        group_frame = []
+        sub_group_time = []
+        sub_group_frame = []
+        for i in range(len(f_list)):
+            if not sub_group_time or f_list[i] - f_list[i - 1] <= TIME_RANGE:  # 连续帧的算作同一组急刹
+                sub_group_time.append(t_list[i])
+                sub_group_frame.append(f_list[i])
+            else:
+                group_time.append(sub_group_time)
+                group_frame.append(sub_group_frame)
+                sub_group_time = [t_list[i]]
+                sub_group_frame = [f_list[i]]
+
+        group_time.append(sub_group_time)
+        group_frame.append(sub_group_frame)
+        group_time = [g for g in group_time if len(g) >= 2]  # 达到两帧算作一次急刹
+        group_frame = [g for g in group_frame if len(g) >= 2]
+
+        # 输出图表值
+        slam_brake_time = [[g[0], g[-1]] for g in group_time]
+        slam_brake_frame = [[g[0], g[-1]] for g in group_frame]
+
+        if slam_brake_time:
+            time_df = pd.DataFrame(slam_brake_time, columns=['start_time', 'end_time'])
+            frame_df = pd.DataFrame(slam_brake_frame, columns=['start_frame', 'end_frame'])
+            discomfort_df = pd.concat([time_df, frame_df], axis=1)
+            discomfort_df['type'] = 'slam_brake'
+            self.discomfort_df = pd.concat([self.discomfort_df, discomfort_df], ignore_index=True)
+
+        time_list = [element for sublist in group_time for element in sublist]
+        self.slam_brake_count = len(group_time)
+        return time_list
+    
+    def _slam_accel_detector(self):
+        """急加速检测器"""
+        data = self.ego_df[['simTime', 'simFrame', 'lon_acc', 'ip_acc', 'slam_accel']].copy()
+        res_df = data.loc[data['slam_accel'] == 1]
+        t_list = res_df['simTime'].values
+        f_list = res_df['simFrame'].values.tolist()
+
+        group_time = []
+        group_frame = []
+        sub_group_time = []
+        sub_group_frame = []
+        for i in range(len(f_list)):
+            if not sub_group_time or f_list[i] - f_list[i - 1] <= 1:  # 连续帧的算作同一组急加速
+                sub_group_time.append(t_list[i])
+                sub_group_frame.append(f_list[i])
+            else:
+                group_time.append(sub_group_time)
+                group_frame.append(sub_group_frame)
+                sub_group_time = [t_list[i]]
+                sub_group_frame = [f_list[i]]
+
+        group_time.append(sub_group_time)
+        group_frame.append(sub_group_frame)
+        group_time = [g for g in group_time if len(g) >= 2]
+        group_frame = [g for g in group_frame if len(g) >= 2]
+
+        # 输出图表值
+        slam_accel_time = [[g[0], g[-1]] for g in group_time]
+        slam_accel_frame = [[g[0], g[-1]] for g in group_frame]
+
+        if slam_accel_time:
+            time_df = pd.DataFrame(slam_accel_time, columns=['start_time', 'end_time'])
+            frame_df = pd.DataFrame(slam_accel_frame, columns=['start_frame', 'end_frame'])
+            discomfort_df = pd.concat([time_df, frame_df], axis=1)
+            discomfort_df['type'] = 'slam_accel'
+            self.discomfort_df = pd.concat([self.discomfort_df, discomfort_df], ignore_index=True)
+
+        time_list = [element for sublist in group_time for element in sublist]
+        self.slam_accel_count = len(group_time)
+        return time_list
+
+
+class ComfortManager:
+    """舒适性指标计算主类"""
+    
+    def __init__(self, data_processed):
+        self.data = data_processed
+        self.logger = LogManager().get_logger()
+        self.registry = ComfortRegistry(self.data)
+
+    def report_statistic(self):
+        """生成舒适性评分报告"""
+        comfort_result = self.registry.batch_execute()
+        # evaluator = Score(self.data.comfort_config)
+        # result = evaluator.evaluate(comfort_result) 
+        # return result
+        return comfort_result
+
+
+if __name__ == '__main__':
+    case_name = 'ICA'
+    mode_label = 'PGVIL'
+    
+    data = data_process.DataPreprocessing(case_name, mode_label)
+    comfort_instance = ComfortManager(data)
+    
+    try:  
+        comfort_result = comfort_instance.report_statistic() 
+        result = {'comfort': comfort_result}
+        print(result) 
+    except Exception as e:  
+        print(f"An error occurred in Comfort.report_statistic: {e}")
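All four Registry classes in this commit share the same config-driven extraction step: `_extract_metrics` walks the nested config dict depth-first and collects the `name` of every leaf node, where a leaf is a dict that has a `name` key and no dict-valued children. A standalone sketch of that walk follows; `toy_config` is invented for illustration and only the leaf convention is taken from the source:

```python
# Standalone sketch of the DFS used by ComfortRegistry._extract_metrics.
# toy_config is a made-up example; the real structure comes from the YAML
# files under config/ and may differ.
def extract_metrics(config_node: dict) -> list:
    metrics = []

    def _recurse(node):
        if isinstance(node, dict):
            # leaf: has a 'name' and no nested dict values
            if 'name' in node and not any(isinstance(v, dict) for v in node.values()):
                metrics.append(node['name'])
            for v in node.values():
                _recurse(v)

    _recurse(config_node)
    return metrics


toy_config = {
    "comfort": {
        "name": "comfort",
        "lateral": {
            "name": "lateral",
            "zigzag": {"name": "calculate_zigzag_count", "weight": 0.5},
        },
        "longitudinal": {
            "name": "longitudinal",
            "cadence": {"name": "calculate_cadence_count", "weight": 0.5},
        },
    }
}
print(extract_metrics(toy_config))  # ['calculate_zigzag_count', 'calculate_cadence_count']
```

Intermediate nodes like `"comfort"` and `"lateral"` carry a `name` but also dict-valued children, so only the innermost metric entries are collected.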

+ 148 - 0
modules/metric/efficient.py

@@ -0,0 +1,148 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+##################################################################
+#
+# Copyright (c) 2024 CICV, Inc. All Rights Reserved
+#
+##################################################################
+"""
+@Authors:           zhanghaiwen
+@Data:              2024/12/23
+@Last Modified:     2024/12/23
+@Summary:           Efficient metrics calculation
+"""
+
+from modules.lib.score import Score
+from modules.lib.log_manager import LogManager
+import numpy as np
+from typing import Dict, Tuple, Optional, Callable, Any
+import pandas as pd
+
+
+# ----------------------
+# 基础指标计算函数
+# ----------------------
+def maxSpeed(data_processed) -> dict:
+    """计算最大速度"""
+    max_speed = data_processed.ego_data['v'].max()
+    return {"maxSpeed": float(max_speed)}
+
+def deviationSpeed(data_processed) -> dict:
+    """计算速度方差"""
+    deviation = data_processed.ego_data['v'].var()
+    return {"deviationSpeed": float(deviation)}
+
+def averagedSpeed(data_processed) -> dict:
+    """计算平均速度"""
+    avg_speed = data_processed.ego_data['v'].mean()
+    return {"averagedSpeed": float(avg_speed)}
+
+def stopDuration(data_processed) -> dict:
+    """计算停车持续时间和次数"""
+    STOP_SPEED_THRESHOLD = 0.05  # 停车速度阈值
+    FRAME_RANGE = 13  # 停车帧数阈值
+    
+    ego_df = data_processed.ego_data
+    
+    stop_time_list = ego_df[ego_df['v'] <= STOP_SPEED_THRESHOLD]['simTime'].values.tolist()
+    stop_frame_list = ego_df[ego_df['v'] <= STOP_SPEED_THRESHOLD]['simFrame'].values.tolist()
+
+    stop_frame_group = []
+    stop_time_group = []
+    sum_stop_time = 0
+    stop_count = 0
+    
+    if not stop_frame_list:
+        return {"stopDuration": 0.0, "stopCount": 0}
+        
+    f1, t1 = stop_frame_list[0], stop_time_list[0]
+
+    for i in range(1, len(stop_frame_list)):
+        if stop_frame_list[i] - stop_frame_list[i - 1] != 1:  # 帧不连续
+            f2, t2 = stop_frame_list[i - 1], stop_time_list[i - 1]
+            # 如果停车有效(帧间隔 >= FRAME_RANGE)
+            if f2 - f1 >= FRAME_RANGE:
+                stop_frame_group.append((f1, f2))
+                stop_time_group.append((t1, t2))
+                sum_stop_time += (t2 - t1)
+                stop_count += 1
+            # 更新 f1, t1
+            f1, t1 = stop_frame_list[i], stop_time_list[i]
+
+    # 检查最后一段停车(前面已对空列表提前返回;若停车延续至数据末尾则不计入)
+    f2, t2 = stop_frame_list[-1], stop_time_list[-1]
+    if f2 - f1 >= FRAME_RANGE and f2 != ego_df['simFrame'].values[-1]:
+        stop_frame_group.append((f1, f2))
+        stop_time_group.append((t1, t2))
+        sum_stop_time += (t2 - t1)
+        stop_count += 1
+
+    # 计算停车持续时间
+    stop_duration = sum_stop_time / stop_count if stop_count != 0 else 0
+    
+    return {"stopDuration": float(stop_duration), "stopCount": stop_count}
+
+
+class EfficientRegistry:
+    """高效性指标注册器"""
+    
+    def __init__(self, data_processed):
+        self.logger = LogManager().get_logger()  # 获取全局日志实例
+        self.data = data_processed
+        self.eff_config = data_processed.efficient_config["efficient"]
+        self.metrics = self._extract_metrics(self.eff_config)
+        self._registry = self._build_registry()
+    
+    def _extract_metrics(self, config_node: dict) -> list:
+        """DFS遍历提取指标"""
+        metrics = []
+        def _recurse(node):
+            if isinstance(node, dict):
+                if 'name' in node and not any(isinstance(v, dict) for v in node.values()):
+                    metrics.append(node['name'])
+                for v in node.values():
+                    _recurse(v)
+        _recurse(config_node)
+        self.logger.info(f'评比的高效性指标列表:{metrics}')
+        return metrics
+    
+    def _build_registry(self) -> dict:
+        """自动注册指标函数"""
+        registry = {}
+        for metric_name in self.metrics:
+            try:
+                registry[metric_name] = globals()[metric_name]
+            except KeyError:
+                self.logger.error(f"未实现指标函数: {metric_name}")
+        return registry
+    
+    def batch_execute(self) -> dict:
+        """批量执行指标计算"""
+        results = {}
+        for name, func in self._registry.items():
+            try:
+                result = func(self.data)
+                results.update(result)
+            except Exception as e:
+                self.logger.error(f"{name} 执行失败: {str(e)}", exc_info=True)
+                results[name] = None
+        self.logger.info(f'高效性指标计算结果:{results}')
+        return results
+
+
+class EfficientManager:
+    """高效性指标管理类"""  
+    def __init__(self, data_processed):
+        self.data = data_processed
+        self.efficient = EfficientRegistry(self.data)
+    
+    def report_statistic(self):
+        """Generate the statistics and report the results."""
+        # 使用注册表批量执行指标计算
+        efficient_result = self.efficient.batch_execute()
+        # evaluator = Score(self.data.efficient_config)
+        # result = evaluator.evaluate(efficient_result) 
+        # return result
+        return efficient_result
+        
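The core of `stopDuration` above is a run-splitting step: frames with speed below the threshold are grouped into runs of consecutive `simFrame` values, and a run counts as a stop only if it spans at least `FRAME_RANGE` frames. A simplified standalone sketch of that grouping (synthetic frame numbers, and omitting the original's extra rule that a trailing stop reaching the last data frame is discarded):

```python
# Run-splitting sketch mirroring stopDuration's grouping: consecutive
# simFrame values form one stop candidate; keep candidates spanning
# at least FRAME_RANGE frames. Input frames are synthetic.
FRAME_RANGE = 13

def group_stops(frames):
    """Split a sorted frame list into consecutive runs, keeping
    only runs whose span is >= FRAME_RANGE frames."""
    if not frames:
        return []
    runs, start = [], frames[0]
    for prev, curr in zip(frames, frames[1:]):
        if curr - prev != 1:          # gap -> close the current run
            if prev - start >= FRAME_RANGE:
                runs.append((start, prev))
            start = curr
    if frames[-1] - start >= FRAME_RANGE:
        runs.append((start, frames[-1]))
    return runs

# two candidates: frames 10..30 (span 20, kept) and 50..55 (span 5, dropped)
print(group_stops(list(range(10, 31)) + list(range(50, 56))))  # [(10, 30)]
```

The same gap-then-flush pattern recurs in the comfort module's shake, cadence, and slam-brake detectors, with time gaps instead of frame gaps.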

+ 164 - 0
modules/metric/function.py

@@ -0,0 +1,164 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+##################################################################
+#
+# Copyright (c) 2025 CICV, Inc. All Rights Reserved
+#
+##################################################################
+"""
+@Authors:           zhanghaiwen(zhanghaiwen@china-icv.cn)
+@Data:              2025/01/5
+@Last Modified:     2025/01/5
+@Summary:           Function Metrics Calculation
+"""
+
+
+from modules.lib.score import Score
+from modules.lib.log_manager import LogManager
+import numpy as np
+from typing import Dict, Tuple, Optional, Callable, Any
+import pandas as pd
+
+
+# ----------------------
+# 基础工具函数 (Pure functions)
+# ----------------------
+def calculate_distance(ego_pos: np.ndarray, obj_pos: np.ndarray) -> np.ndarray:
+    """向量化距离计算"""
+    return np.linalg.norm(ego_pos - obj_pos, axis=1)
+
+def calculate_relative_speed(ego_speed: np.ndarray, obj_speed: np.ndarray) -> np.ndarray:
+    """向量化相对速度计算"""
+    return np.linalg.norm(ego_speed - obj_speed, axis=1)
+
+def extract_ego_obj(data: pd.DataFrame) -> Tuple[pd.Series, pd.DataFrame]:
+    """数据提取函数"""
+    ego = data[data['playerId'] == 1].iloc[0]
+    obj = data[data['playerId'] != 1]
+    return ego, obj
+
+
+def get_first_warning(ego_df: pd.DataFrame, obj_df: pd.DataFrame) -> Optional[pd.DataFrame]:
+    """获取首次预警时刻对应的目标数据(当前未做缓存)"""
+    warning_times = ego_df[ego_df['ifwarning'] == 1]['simTime']
+    if warning_times.empty:
+        return None
+    first_time = warning_times.iloc[0]
+    return obj_df[obj_df['simTime'] == first_time]
+
+# ----------------------
+# 核心计算功能函数
+# ----------------------
+def latestWarningDistance(data_processed) -> dict:
+    """预警距离计算流水线"""
+    ego_df = data_processed.ego_data
+    obj_df = data_processed.object_df
+    warning_data = get_first_warning(ego_df, obj_df)
+    if warning_data is None:
+        return {"latestWarningDistance": 0.0}
+
+    ego, obj = extract_ego_obj(warning_data)
+    distances = calculate_distance(
+        np.array([[ego['posX'], ego['posY']]]),
+        obj[['posX', 'posY']].values
+    )
+    return {"latestWarningDistance": float(np.min(distances))}
+
+def latestWarningDistance_TTC(data_processed) -> dict:
+    """TTC计算流水线"""
+    ego_df = data_processed.ego_data
+    obj_df = data_processed.object_df
+    warning_data = get_first_warning(ego_df, obj_df)
+    if warning_data is None:
+        return {"latestWarningDistance_TTC": 0.0}
+
+    ego, obj = extract_ego_obj(warning_data)
+    
+    # 向量化计算
+    ego_pos = np.array([[ego['posX'], ego['posY']]])
+    ego_speed = np.array([[ego['speedX'], ego['speedY']]])
+    obj_pos = obj[['posX', 'posY']].values
+    obj_speed = obj[['speedX', 'speedY']].values
+
+    distances = calculate_distance(ego_pos, obj_pos)
+    rel_speeds = calculate_relative_speed(ego_speed, obj_speed)
+
+    with np.errstate(divide='ignore', invalid='ignore'):
+        ttc = np.where(rel_speeds != 0, distances / rel_speeds, np.inf)
+    
+    return {"latestWarningDistance_TTC": float(np.nanmin(ttc))}
+
+class FunctionRegistry:
+    """动态函数注册器(支持参数验证)"""
+    
+    def __init__(self, data_processed):
+        self.logger = LogManager().get_logger()  # 获取全局日志实例
+        self.data = data_processed
+        self.fun_config = data_processed.function_config["function"]
+        self.level_3_merics = self._extract_level_3_metrics(self.fun_config)
+        self._registry: Dict[str, Callable] = {}
+        self._registry = self._build_registry()
+
+    
+    def _extract_level_3_metrics(self, config_node: dict) -> list:
+        """DFS遍历提取第三层指标(时间复杂度O(n))"""
+        metrics = []
+        def _recurse(node):
+            if isinstance(node, dict):
+                if 'name' in node and not any(isinstance(v, dict) for v in node.values()):
+                    metrics.append(node['name'])
+                for v in node.values():
+                    _recurse(v)
+        _recurse(config_node)
+        self.logger.info(f'评比的功能指标列表:{metrics}')
+        return metrics
+
+    def _build_registry(self) -> dict:
+        """自动注册指标函数(防御性编程)"""
+        registry = {}
+        for func_name in self.level_3_merics:
+            try:
+                registry[func_name] = globals()[func_name]
+            except KeyError:
+                print(f"未实现指标函数: {func_name}")
+                self.logger.error(f"未实现指标函数: {func_name}")
+        return registry
+
+    def batch_execute(self) -> dict:
+        """批量执行指标计算(带熔断机制)"""
+        results = {}
+        for name, func in self._registry.items():
+            try:
+                result = func(self.data)  # 统一传递数据上下文
+                results.update(result)
+            except Exception as e:
+                print(f"{name} 执行失败: {str(e)}")
+                self.logger.error(f"{name} 执行失败: {str(e)}", exc_info=True)
+                results[name] = None
+        self.logger.info(f'功能指标计算结果:{results}')
+        return results
+
+
+class FunctionManager:
+    """管理功能指标计算的类"""
+
+    def __init__(self, data_processed):
+        self.data = data_processed
+        self.function = FunctionRegistry(self.data)
+
+    def report_statistic(self):
+        """
+        计算并报告功能指标结果。
+        :return: 评估结果
+        """
+        function_result = self.function.batch_execute()
+        # evaluator = Score(self.data.function_config)
+        # result = evaluator.evaluate(function_result)
+        # return result
+        return function_result
+
+
+# 使用示例
+if __name__ == "__main__":
+    pass
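The TTC step in `latestWarningDistance_TTC` divides pairwise distances by relative speeds, mapping zero-relative-speed pairs to `+inf` so they never win the minimum. A standalone sketch with made-up positions and speeds (one ego row against two object rows):

```python
import numpy as np

# Vectorised TTC sketch mirroring latestWarningDistance_TTC.
# All positions/speeds below are invented sample values.
ego_pos = np.array([[0.0, 0.0]])
ego_speed = np.array([[10.0, 0.0]])
obj_pos = np.array([[50.0, 0.0], [20.0, 0.0]])
obj_speed = np.array([[5.0, 0.0], [10.0, 0.0]])

distances = np.linalg.norm(ego_pos - obj_pos, axis=1)       # [50., 20.]
rel_speeds = np.linalg.norm(ego_speed - obj_speed, axis=1)  # [5., 0.]

# same-speed pairs would divide by zero; map them to +inf as the module does
with np.errstate(divide='ignore', invalid='ignore'):
    ttc = np.where(rel_speeds != 0, distances / rel_speeds, np.inf)

print(float(np.nanmin(ttc)))  # 10.0 -> closing at 5 m/s from 50 m away
```

Note that `np.where` evaluates `distances / rel_speeds` for every element before selecting, which is why the `errstate` guard is needed even though the zero-divisor results are discarded.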

+ 105 - 0
modules/metric/safety.py

@@ -0,0 +1,105 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+"""
+安全指标计算模块
+"""
+
+import numpy as np
+import pandas as pd
+from typing import Dict, Any, List, Optional
+
+from modules.lib.score import Score
+from modules.lib.log_manager import LogManager
+
+
+# 安全指标计算函数
+def calculate_ttc(data_processed) -> dict:
+    """计算TTC (Time To Collision)"""
+    # 实现TTC计算逻辑
+    # ...
+    return {"TTC": 3.5}  # 示例返回值
+
+def calculate_mttc(data_processed) -> dict:
+    """计算MTTC (Modified Time To Collision)"""
+    # 实现MTTC计算逻辑
+    # ...
+    return {"MTTC": 4.2}  # 示例返回值
+
+def calculate_thw(data_processed) -> dict:
+    """计算THW (Time Headway)"""
+    # 实现THW计算逻辑
+    # ...
+    return {"THW": 2.1}  # 示例返回值
+
+def calculate_collisionrisk(data_processed) -> dict:
+    """计算碰撞风险(函数名须符合注册约定 calculate_{指标名.lower()},camelCase 指标不含下划线)"""
+    # 实现碰撞风险计算逻辑
+    # ...
+    return {"collisionRisk": 0.15}  # 示例返回值
+
+
+class SafetyRegistry:
+    """安全指标注册器"""
+    
+    def __init__(self, data_processed):
+        self.logger = LogManager().get_logger()
+        self.data = data_processed
+        self.safety_config = data_processed.safety_config["safety"]
+        self.metrics = self._extract_metrics(self.safety_config)
+        self._registry = self._build_registry()
+    
+    def _extract_metrics(self, config_node: dict) -> list:
+        """从配置中提取指标名称"""
+        metrics = []
+        def _recurse(node):
+            if isinstance(node, dict):
+                if 'name' in node and not any(isinstance(v, dict) for v in node.values()):
+                    metrics.append(node['name'])
+                for v in node.values():
+                    _recurse(v)
+        _recurse(config_node)
+        self.logger.info(f'评比的安全指标列表:{metrics}')
+        return metrics
+    
+    def _build_registry(self) -> dict:
+        """构建指标函数注册表"""
+        registry = {}
+        for metric_name in self.metrics:
+            func_name = f"calculate_{metric_name.lower()}"
+            if func_name in globals():
+                registry[metric_name] = globals()[func_name]
+            else:
+                self.logger.warning(f"未实现安全指标函数: {func_name}")
+        return registry
+    
+    def batch_execute(self) -> dict:
+        """批量执行指标计算"""
+        results = {}
+        for name, func in self._registry.items():
+            try:
+                result = func(self.data)
+                results.update(result)
+            except Exception as e:
+                self.logger.error(f"{name} 执行失败: {str(e)}", exc_info=True)
+                results[name] = None
+        self.logger.info(f'安全指标计算结果:{results}')
+        return results
+
+
+class SafeManager:
+    """安全指标管理类"""
+    
+    def __init__(self, data_processed):
+        self.data = data_processed
+        self.registry = SafetyRegistry(self.data)
+    
+    def report_statistic(self):
+        """计算并报告安全指标结果"""
+        safety_result = self.registry.batch_execute()
+        
+        # evaluator = Score(self.data.safety_config)
+        # result = evaluator.evaluate(safety_result)
+        # return result
+        return safety_result
+    
+    

+ 1220 - 0
modules/metric/traffic.py

@@ -0,0 +1,1220 @@
+
+import math
+import numpy as np
+import pandas as pd
+from modules.lib import log_manager
+from modules.lib.score import Score
+
+OVERTAKE_INFO = [
+    "simTime",
+    "simFrame",
+    "playerId",
+    "speedX",
+    "speedY",
+    "posX",
+    "posY",
+    "posH",
+    "lane_id",
+    "lane_type",
+    "road_type",
+    "interid",
+    "crossid",
+]
+SLOWDOWN_INFO = [
+    "simTime",
+    "simFrame",
+    "playerId",
+    "speedX",
+    "speedY",
+    "posX",
+    "posY",
+    "crossid",
+    "lane_type",
+]
+TURNAROUND_INFO = [
+    "simTime",
+    "simFrame",
+    "playerId",
+    "speedX",
+    "speedY",
+    "posX",
+    "posY",
+    "sign_type1",
+    "lane_type",
+]
+
+TRFFICSIGN_INFO = [
+    "simTime",
+    "simFrame",
+    "playerId",
+    "speedX",
+    "speedY",
+    "v",
+    "posX",
+    "posY",
+    "sign_type1",
+    "sign_ref_link",
+    "sign_x",
+    "sign_y",
+]
+
+
+class OvertakingViolation(object):
+    """超车违规类"""
+
+    def __init__(self, df_data):
+        print("超车违规类初始化中...")
+        self.traffic_violations_type = "超车违规类"
+
+        # self.logger = log.get_logger()  # 使用时再初始化
+
+        self.data = df_data.obj_data[1]
+        self.ego_data = (
+            self.data[OVERTAKE_INFO].copy().reset_index(drop=True)
+        )  # Copy to avoid modifying the original DataFrame
+        self.data_obj = df_data.obj_data[2]
+        self.obj_data = (
+            self.data_obj[OVERTAKE_INFO].copy().reset_index(drop=True)
+        )  # Copy to avoid modifying the original DataFrame
+        self.object_items = []
+        for i, item in df_data.obj_data.items():
+            self.object_items.append(i)
+        if 3 in self.object_items:
+            self.other_obj_data1 = df_data.obj_data[3]
+            self.other_obj_data = (
+                self.other_obj_data1[OVERTAKE_INFO].copy().reset_index(drop=True)
+            )
+
+        self.overtake_on_right_count = 0
+        self.overtake_when_turn_around_count = 0
+        self.overtake_when_passing_car_count = 0
+        self.overtake_in_forbid_lane_count = 0
+        self.overtake_in_ramp_count = 0
+        self.overtake_in_tunnel_count = 0
+        self.overtake_on_accelerate_lane_count = 0
+        self.overtake_on_decelerate_lane_count = 0
+        self.overtake_in_different_senerios_count = 0
+
+    def different_road_area_simtime(self, df, threshold=0.5):
+        if not df:
+            return []
+        simtime_group = []
+        current_simtime_group = [df[0]]
+
+        for i in range(1, len(df)):
+            if abs(df[i] - df[i - 1]) <= threshold:
+                current_simtime_group.append(df[i])
+            else:
+                simtime_group.append(current_simtime_group)
+                current_simtime_group = [df[i]]
+
+        simtime_group.append(current_simtime_group)
+        return simtime_group
+
+    def _is_overtake(self, lane_id, dx, dy, ego_speedx, ego_speedy):
+        lane_start = lane_id[0]
+        lane_end = lane_id[-1]
+        start_condition = dx[0] * ego_speedx[0] + dy[0] * ego_speedy[0] >= 0
+        end_condition = dx[-1] * ego_speedx[-1] + dy[-1] * ego_speedy[-1] < 0
+
+        return lane_start == lane_end and start_condition and end_condition
+
+    def _is_dxy_of_car(self, ego_df, obj_df):
+        """
+        :param ego_df: ego-vehicle frames (objstate.csv slice)
+        :param obj_df: target-vehicle frames over the same simFrames
+        :return: arrays of dx and dy pointing from the ego to the object
+        """
+        car_dx = obj_df["posX"].values - ego_df["posX"].values
+        car_dy = obj_df["posY"].values - ego_df["posY"].values
+
+        return car_dx, car_dy
+
+    # Overtaking on the right of the lead car, while meeting oncoming
+    # traffic, or while the lead car is making a U-turn.
+    def illegal_overtake_with_car(self, window_width=250):
+
+        # last frame id available in the data
+        end_frame_id = self.ego_data["simFrame"].iloc[-1]
+        start_frame_id = self.ego_data["simFrame"].iloc[0]  # first frame id
+
+        while (start_frame_id + window_width) < end_frame_id:
+            simframe_window1 = list(
+                np.arange(start_frame_id, start_frame_id + window_width)
+            )
+            simframe_window = list(map(int, simframe_window1))
+            # slice the sliding window out of each DataFrame
+            ego_data_frames = self.ego_data[
+                self.ego_data["simFrame"].isin(simframe_window)
+            ]
+            obj_data_frames = self.obj_data[
+                self.obj_data["simFrame"].isin(simframe_window)
+            ]
+            other_data_frames = self.other_obj_data[
+                self.other_obj_data["simFrame"].isin(simframe_window)
+            ]
+            # lane ids across the window
+            lane_id = ego_data_frames["lane_id"].tolist()
+            # heading (posH) at the start and end of the window
+            driverctrl_start_state = ego_data_frames["posH"].iloc[0]
+            driverctrl_end_state = ego_data_frames["posH"].iloc[-1]
+            # relative positions of the ego and lead vehicle
+            dx, dy = self._is_dxy_of_car(ego_data_frames, obj_data_frames)
+            ego_speedx = ego_data_frames["speedX"].tolist()
+            ego_speedy = ego_data_frames["speedY"].tolist()
+
+            obj_speedx = obj_data_frames[obj_data_frames["playerId"] == 2][
+                "speedX"
+            ].tolist()
+            obj_speedy = obj_data_frames[obj_data_frames["playerId"] == 2][
+                "speedY"
+            ].tolist()
+            if len(other_data_frames) > 0:
+                other_start_speedx = other_data_frames["speedX"].iloc[0]
+                other_start_speedy = other_data_frames["speedY"].iloc[0]
+                if (
+                    ego_speedx[0] * other_start_speedx
+                    + ego_speedy[0] * other_start_speedy
+                    < 0
+                ):
+                    self.overtake_when_passing_car_count += self._is_overtake(
+                        lane_id, dx, dy, ego_speedx, ego_speedy
+                    )
+                    start_frame_id += window_width
+                    continue  # window consumed; skip the remaining checks
+            """
+            Count a right-side overtake when the window starts and ends in the
+            same lane, the heading flips from right to left over the window,
+            and the ego and lead vehicle swap positions.
+            """
+            if driverctrl_start_state > 0 and driverctrl_end_state < 0:
+                self.overtake_on_right_count += self._is_overtake(
+                    lane_id, dx, dy, ego_speedx, ego_speedy
+                )
+                start_frame_id += window_width
+            elif obj_speedx and obj_speedy and (
+                ego_speedx[0] * obj_speedx[0] + ego_speedy[0] * obj_speedy[0] < 0
+            ):
+                self.overtake_when_turn_around_count += self._is_overtake(
+                    lane_id, dx, dy, ego_speedx, ego_speedy
+                )
+                start_frame_id += window_width
+            else:
+                start_frame_id += 1
+
+    # overtaking in a lane the vehicle may not occupy
+    def overtake_in_forbid_lane(self):
+        simTime = self.obj_data["simTime"].tolist()
+        simtime_devide = self.different_road_area_simtime(simTime)
+        for simtime in simtime_devide:
+            lane_overtake = self.ego_data[self.ego_data["simTime"].isin(simtime)]
+            try:
+                lane_type = lane_overtake["lane_type"].tolist()
+                if (50002 in lane_type and len(set(lane_type)) > 2) or (
+                    50002 not in lane_type and len(set(lane_type)) > 1
+                ):
+                    self.overtake_in_forbid_lane_count += 1
+            except KeyError:
+                print("Data is missing the lane_type column")
+
+    # overtaking on a ramp
+    def overtake_in_ramp_area(self):
+        ramp_simtime_list = self.ego_data[(self.ego_data["road_type"] == 19)][
+            "simTime"
+        ].tolist()
+        ramp_simTime_list = self.different_road_area_simtime(ramp_simtime_list)
+        for ramp_simtime in ramp_simTime_list:
+            lane_id = self.ego_data["lane_id"].tolist()
+            ego_in_ramp = self.ego_data[self.ego_data["simTime"].isin(ramp_simtime)]
+            objstate_in_ramp = self.obj_data[
+                self.obj_data["simTime"].isin(ramp_simtime)
+            ]
+            dx, dy = self._is_dxy_of_car(ego_in_ramp, objstate_in_ramp)
+            ego_speedx = ego_in_ramp["speedX"].tolist()
+            ego_speedy = ego_in_ramp["speedY"].tolist()
+            if len(lane_id) > 0:
+                self.overtake_in_ramp_count += self._is_overtake(
+                    lane_id, dx, dy, ego_speedx, ego_speedy
+                )
+            else:
+                continue
+
+    def overtake_in_tunnel_area(self):
+        tunnel_simtime_list = self.ego_data[(self.ego_data["road_type"] == 15)][
+            "simTime"
+        ].tolist()
+        tunnel_simTime_list = self.different_road_area_simtime(tunnel_simtime_list)
+        for tunnel_simtime in tunnel_simTime_list:
+            lane_id = self.ego_data["lane_id"].tolist()
+            ego_in_tunnel = self.ego_data[self.ego_data["simTime"].isin(tunnel_simtime)]
+            objstate_in_tunnel = self.obj_data[
+                self.obj_data["simTime"].isin(tunnel_simtime)
+            ]
+            dx, dy = self._is_dxy_of_car(ego_in_tunnel, objstate_in_tunnel)
+            ego_speedx = ego_in_tunnel["speedX"].tolist()
+            ego_speedy = ego_in_tunnel["speedY"].tolist()
+            if len(lane_id) > 0:
+                self.overtake_in_tunnel_count += self._is_overtake(
+                    lane_id, dx, dy, ego_speedx, ego_speedy
+                )
+            else:
+                continue
+
+    # overtaking on an acceleration lane
+    def overtake_on_accelerate_lane(self):
+        accelerate_simtime_list = self.ego_data[self.ego_data["lane_type"] == 2][
+            "simTime"
+        ].tolist()
+        accelerate_simTime_list = self.different_road_area_simtime(
+            accelerate_simtime_list
+        )
+        for accelerate_simtime in accelerate_simTime_list:
+            lane_id = self.ego_data["lane_id"].tolist()
+            ego_in_accelerate = self.ego_data[
+                self.ego_data["simTime"].isin(accelerate_simtime)
+            ]
+            objstate_in_accelerate = self.obj_data[
+                self.obj_data["simTime"].isin(accelerate_simtime)
+            ]
+            dx, dy = self._is_dxy_of_car(ego_in_accelerate, objstate_in_accelerate)
+            ego_speedx = ego_in_accelerate["speedX"].tolist()
+            ego_speedy = ego_in_accelerate["speedY"].tolist()
+
+            self.overtake_on_accelerate_lane_count += self._is_overtake(
+                lane_id, dx, dy, ego_speedx, ego_speedy
+            )
+
+    # overtaking on a deceleration lane
+    def overtake_on_decelerate_lane(self):
+        decelerate_simtime_list = self.ego_data[(self.ego_data["lane_type"] == 3)][
+            "simTime"
+        ].tolist()
+        decelerate_simTime_list = self.different_road_area_simtime(
+            decelerate_simtime_list
+        )
+        for decelerate_simtime in decelerate_simTime_list:
+            lane_id = self.ego_data["lane_id"].tolist()
+            ego_in_decelerate = self.ego_data[
+                self.ego_data["simTime"].isin(decelerate_simtime)
+            ]
+            objstate_in_decelerate = self.obj_data[
+                self.obj_data["simTime"].isin(decelerate_simtime)
+            ]
+            dx, dy = self._is_dxy_of_car(ego_in_decelerate, objstate_in_decelerate)
+            ego_speedx = ego_in_decelerate["speedX"].tolist()
+            ego_speedy = ego_in_decelerate["speedY"].tolist()
+
+            self.overtake_on_decelerate_lane_count += self._is_overtake(
+                lane_id, dx, dy, ego_speedx, ego_speedy
+            )
+
+    # at intersections
+    def overtake_in_different_senerios(self):
+        crossroad_simTime = self.ego_data[self.ego_data["interid"] != 10000][
+            "simTime"
+        ].tolist()  # times spent inside an intersection or tunnel area
+        # filter the objectstate/driverctrl/laneinfo rows inside those areas
+        crossroad_ego = self.ego_data[self.ego_data["simTime"].isin(crossroad_simTime)]
+        crossroad_objstate = self.obj_data[
+            self.obj_data["simTime"].isin(crossroad_simTime)
+        ]
+
+        # lane ids before and after
+        lane_id = crossroad_ego["lane_id"].tolist()
+
+        # relative positions of the ego and lead vehicle
+        dx, dy = self._is_dxy_of_car(crossroad_ego, crossroad_objstate)
+        ego_speedx = crossroad_ego["speedX"].tolist()
+        ego_speedy = crossroad_ego["speedY"].tolist()
+        """
+        Count an overtake when the lane id is unchanged over the span
+        and the ego and lead vehicle swap positions.
+        """
+        if len(lane_id) > 0:
+            self.overtake_in_different_senerios_count += self._is_overtake(
+                lane_id, dx, dy, ego_speedx, ego_speedy
+            )
+
+    def statistic(self):
+        self.overtake_in_forbid_lane()
+        self.overtake_on_decelerate_lane()
+        self.overtake_on_accelerate_lane()
+        self.overtake_in_ramp_area()
+        self.overtake_in_tunnel_area()
+        self.overtake_in_different_senerios()
+        self.illegal_overtake_with_car()
+
+        self.calculated_value = {
+            "overtake_on_right": self.overtake_on_right_count,
+            "overtake_when_turn_around": self.overtake_when_turn_around_count,
+            "overtake_when_passing_car": self.overtake_when_passing_car_count,
+            "overtake_in_forbid_lane": self.overtake_in_forbid_lane_count,
+            "overtake_in_ramp": self.overtake_in_ramp_count,
+            "overtake_in_tunnel": self.overtake_in_tunnel_count,
+            "overtake_on_accelerate_lane": self.overtake_on_accelerate_lane_count,
+            "overtake_on_decelerate_lane": self.overtake_on_decelerate_lane_count,
+            "overtake_in_different_senerios": self.overtake_in_different_senerios_count,
+        }
+        # self.logger.info(f"Overtaking metrics done: {self.calculated_value}")
+        return self.calculated_value
+
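The `different_road_area_simtime` helper that every class above duplicates splits a sorted timestamp list into contiguous segments wherever the gap between neighbours exceeds a threshold. A standalone sketch of the same grouping logic (the function name here is illustrative, not part of the module):

```python
def group_by_gap(times, threshold=0.5):
    """Split a sorted list of timestamps into runs whose
    consecutive gaps are all <= threshold."""
    if not times:
        return []
    groups = [[times[0]]]
    for prev, cur in zip(times, times[1:]):
        if abs(cur - prev) <= threshold:
            groups[-1].append(cur)  # still within the current segment
        else:
            groups.append([cur])    # gap too large: start a new segment
    return groups

# 0.1..0.3 are contiguous; the jump to 2.0 starts a second segment.
print(group_by_gap([0.1, 0.2, 0.3, 2.0, 2.4]))
# → [[0.1, 0.2, 0.3], [2.0, 2.4]]
```

Each returned sub-list is then used as a `simTime` window for `isin` filtering, so one physical crossing of a ramp, tunnel, or crosswalk is counted once rather than per frame.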
+
+class SlowdownViolation(object):
+    """Yield/slow-down violation checks."""
+
+    def __init__(self, df_data):
+        print("Initializing yield/slow-down violation checks...")
+        self.traffic_violations_type = "yield violations"
+        self.object_items = []
+        self.data = df_data.obj_data[1]
+        self.ego_data = (
+            self.data[SLOWDOWN_INFO].copy().reset_index(drop=True)
+        )  # copy to avoid modifying the original DataFrame
+        self.pedestrian_data = pd.DataFrame()
+
+        self.object_items = set(df_data.object_df.type.tolist())
+        if 13 in self.object_items:  # pedestrians have type 13
+            self.pedestrian_df = df_data.object_df[df_data.object_df.type == 13]
+            self.pedestrian_data = (
+                self.pedestrian_df[SLOWDOWN_INFO].copy().reset_index(drop=True)
+            )
+
+        self.slow_down_in_crosswalk_count = 0
+        self.avoid_pedestrian_in_crosswalk_count = 0
+        self.avoid_pedestrian_in_the_road_count = 0
+        self.aviod_pedestrian_when_turning_count = 0
+
+    def pedestrian_in_front_of_car(self):
+        if len(self.pedestrian_data) == 0:
+            return []
+        else:
+            # vector from the ego vehicle to the pedestrian
+            self.ego_data["dx"] = self.pedestrian_data["posX"] - self.ego_data["posX"]
+            self.ego_data["dy"] = self.pedestrian_data["posY"] - self.ego_data["posY"]
+            self.ego_data["dist"] = np.sqrt(
+                self.ego_data["dx"] ** 2 + self.ego_data["dy"] ** 2
+            )
+
+            # a positive projection on the velocity means the pedestrian is ahead
+            self.ego_data["rela_pos"] = (
+                self.ego_data["dx"] * self.ego_data["speedX"]
+                + self.ego_data["dy"] * self.ego_data["speedY"]
+            )
+            simtime = self.ego_data[
+                (self.ego_data["rela_pos"] > 0) & (self.ego_data["dist"] < 50)
+            ]["simTime"].tolist()
+            return simtime
+
+    def different_road_area_simtime(self, df, threshold=0.6):
+        if not df:
+            return []
+        simtime_group = []
+        current_simtime_group = [df[0]]
+
+        for i in range(1, len(df)):
+            if abs(df[i] - df[i - 1]) <= threshold:
+                current_simtime_group.append(df[i])
+            else:
+                simtime_group.append(current_simtime_group)
+                current_simtime_group = [df[i]]
+
+        simtime_group.append(current_simtime_group)
+        return simtime_group
+
+    def slow_down_in_crosswalk(self):
+        # time points inside a crosswalk area
+        crosswalk_simTime = self.ego_data[self.ego_data["crossid"] != 20000][
+            "simTime"
+        ].tolist()
+        crosswalk_simTime_divide = self.different_road_area_simtime(crosswalk_simTime)
+
+        for crosswalk_simtime in crosswalk_simTime_divide:
+            # rows within the current time segment
+            start_time = crosswalk_simtime[0]
+            end_time = crosswalk_simtime[-1]
+            print(f"Current segment: {start_time} - {end_time}")
+            crosswalk_objstate = self.ego_data[
+                (self.ego_data["simTime"] >= start_time)
+                & (self.ego_data["simTime"] <= end_time)
+            ]
+
+            # ego speed
+            ego_speedx = np.array(crosswalk_objstate["speedX"].tolist())
+            ego_speedy = np.array(crosswalk_objstate["speedY"].tolist())
+            ego_speed = np.sqrt(ego_speedx**2 + ego_speedy**2)
+
+            # flag a failure to slow down
+            if max(ego_speed) > 15 / 3.6:  # 15 km/h converted to m/s
+                self.slow_down_in_crosswalk_count += 1
+
+        # report the total
+        print(f"Failed to slow down at a crosswalk {self.slow_down_in_crosswalk_count} times")
+
+    def avoid_pedestrian_in_crosswalk(self):
+        crosswalk_simTime = self.ego_data[self.ego_data["crossid"] != 20000][
+            "simTime"
+        ].tolist()
+        crosswalk_simTime_devide = self.different_road_area_simtime(crosswalk_simTime)
+        for crosswalk_simtime in crosswalk_simTime_devide:
+            if not self.pedestrian_data.empty:
+                crosswalk_objstate = self.pedestrian_data[
+                    self.pedestrian_data["simTime"].isin(crosswalk_simtime)
+                ]
+            else:
+                crosswalk_objstate = pd.DataFrame()
+            if len(crosswalk_objstate) > 0:
+                pedestrian_objstate = crosswalk_objstate
+                ego_speed = np.sqrt(
+                    pedestrian_objstate["speedX"] ** 2
+                    + pedestrian_objstate["speedY"] ** 2
+                )
+                if (ego_speed > 0).any():
+                    self.avoid_pedestrian_in_crosswalk_count += 1
+
+    def avoid_pedestrian_in_the_road(self):
+        simtime = self.pedestrian_in_front_of_car()
+        if len(simtime) == 0:
+            return
+        else:
+            pedestrian_on_the_road = self.pedestrian_data[
+                self.pedestrian_data["simTime"].isin(simtime)
+            ]
+            simTime = pedestrian_on_the_road["simTime"].tolist()
+            simTime_devide = self.different_road_area_simtime(simTime)
+            for simtime1 in simTime_devide:
+                sub_pedestrian_on_the_road = pedestrian_on_the_road[
+                    pedestrian_on_the_road["simTime"].isin(simtime1)
+                ]
+                ego_car = self.ego_data.loc[(self.ego_data["simTime"].isin(simtime1))]
+                dist = np.sqrt(
+                    (ego_car["posX"].values - sub_pedestrian_on_the_road["posX"].values)
+                    ** 2
+                    + (
+                        ego_car["posY"].values
+                        - sub_pedestrian_on_the_road["posY"].values
+                    )
+                    ** 2
+                )
+                speed = np.sqrt(
+                    ego_car["speedX"].values ** 2 + ego_car["speedY"].values ** 2
+                )
+                data = {"dist": dist, "speed": speed}
+                new_ego_car = pd.DataFrame(data)
+                new_ego_car = new_ego_car.assign(
+                    stopped_near_ped=lambda x: (x["dist"] < 1) & (x["speed"] == 0)
+                )
+                if new_ego_car["stopped_near_ped"].any():
+                    self.avoid_pedestrian_in_the_road_count += 1
+
+    def aviod_pedestrian_when_turning(self):
+        pedestrian_simtime_list = self.pedestrian_in_front_of_car()
+        if len(pedestrian_simtime_list) > 0:
+            simtime_list = self.ego_data[
+                (self.ego_data["simTime"].isin(pedestrian_simtime_list))
+                & (self.ego_data["lane_type"] == 20)
+            ]["simTime"].tolist()
+            simTime_list = self.different_road_area_simtime(simtime_list)
+            pedestrian_on_the_road = self.pedestrian_data[
+                self.pedestrian_data["simTime"].isin(simtime_list)
+            ]
+            for simtime in simTime_list:
+                sub_pedestrian_on_the_road = pedestrian_on_the_road[
+                    pedestrian_on_the_road["simTime"].isin(simtime)
+                ]
+                ego_car = self.ego_data.loc[
+                    (self.ego_data["simTime"].isin(simtime))
+                ].copy()
+                ego_car["dist"] = np.sqrt(
+                    (ego_car["posX"].values - sub_pedestrian_on_the_road["posX"].values)
+                    ** 2
+                    + (
+                        ego_car["posY"].values
+                        - sub_pedestrian_on_the_road["posY"].values
+                    )
+                    ** 2
+                )
+                ego_car["speed"] = np.sqrt(
+                    ego_car["speedX"].values ** 2 + ego_car["speedY"].values ** 2
+                )
+                if (ego_car["speed"] != 0).any():
+                    self.aviod_pedestrian_when_turning_count += 1
+
+    def statistic(self):
+        self.slow_down_in_crosswalk()
+        self.avoid_pedestrian_in_crosswalk()
+        self.avoid_pedestrian_in_the_road()
+        self.aviod_pedestrian_when_turning()
+
+        self.calculated_value = {
+            "slow_down_in_crosswalk": self.slow_down_in_crosswalk_count,
+            "avoid_pedestrian_in_crosswalk": self.avoid_pedestrian_in_crosswalk_count,
+            "avoid_pedestrian_in_the_road": self.avoid_pedestrian_in_the_road_count,
+            "aviod_pedestrian_when_turning": self.aviod_pedestrian_when_turning_count,
+        }
+        # self.logger.info(f"Yield metrics done: {self.calculated_value}")
+        return self.calculated_value
+
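`pedestrian_in_front_of_car` reduces the "is the pedestrian ahead?" question to the sign of a dot product between the relative-position vector and the ego velocity. A minimal sketch of that test (note the relative vector must point from the ego to the pedestrian for a positive product to mean "ahead"; the function name is illustrative):

```python
import numpy as np

def is_ahead(ego_pos, ego_vel, other_pos):
    """True when other_pos lies in the ego's direction of travel:
    the ego-to-other vector projects positively onto the velocity."""
    rel = np.asarray(other_pos, dtype=float) - np.asarray(ego_pos, dtype=float)
    return float(np.dot(rel, ego_vel)) > 0.0

# Ego at the origin driving along +x: x=10 is ahead, x=-10 is behind.
print(is_ahead((0, 0), (5, 0), (10, 0)))   # True
print(is_ahead((0, 0), (5, 0), (-10, 0)))  # False
```

Combining this sign test with a distance threshold (50 m in the module) gives the "pedestrian ahead and close" filter used by both the yield and U-turn checks.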
+
+class TurnaroundViolation(object):
+    """U-turn violation checks."""
+
+    def __init__(self, df_data):
+        print("Initializing U-turn violation checks...")
+        self.traffic_violations_type = "U-turn violations"
+
+        self.data = df_data.obj_data[1]
+        self.ego_data = (
+            self.data[TURNAROUND_INFO].copy().reset_index(drop=True)
+        )  # copy to avoid modifying the original DataFrame
+        self.pedestrian_data = pd.DataFrame()
+
+        self.object_items = set(df_data.object_df.type.tolist())
+        if 13 in self.object_items:  # pedestrians have type 13
+            self.pedestrian_df = df_data.object_df[df_data.object_df.type == 13]
+            self.pedestrian_data = (
+                self.pedestrian_df[SLOWDOWN_INFO].copy().reset_index(drop=True)
+            )
+
+        self.turning_in_forbiden_turn_back_sign_count = 0
+        self.turning_in_forbiden_turn_left_sign_count = 0
+        self.avoid_pedestrian_when_turn_back_count = 0
+
+    def pedestrian_in_front_of_car(self):
+        if len(self.pedestrian_data) == 0:
+            return []
+        else:
+            # vector from the ego vehicle to the pedestrian
+            self.ego_data["dx"] = self.pedestrian_data["posX"] - self.ego_data["posX"]
+            self.ego_data["dy"] = self.pedestrian_data["posY"] - self.ego_data["posY"]
+            self.ego_data["dist"] = np.sqrt(
+                self.ego_data["dx"] ** 2 + self.ego_data["dy"] ** 2
+            )
+
+            # a positive projection on the velocity means the pedestrian is ahead
+            self.ego_data["rela_pos"] = (
+                self.ego_data["dx"] * self.ego_data["speedX"]
+                + self.ego_data["dy"] * self.ego_data["speedY"]
+            )
+            simtime = self.ego_data[
+                (self.ego_data["rela_pos"] > 0) & (self.ego_data["dist"] < 50)
+            ]["simTime"].tolist()
+            return simtime
+
+    def different_road_area_simtime(self, df, threshold=0.5):
+        if not df:
+            return []
+        simtime_group = []
+        current_simtime_group = [df[0]]
+
+        for i in range(1, len(df)):
+            if abs(df[i] - df[i - 1]) <= threshold:
+                current_simtime_group.append(df[i])
+            else:
+                simtime_group.append(current_simtime_group)
+                current_simtime_group = [df[i]]
+
+        simtime_group.append(current_simtime_group)
+        return simtime_group
+
+    def turn_back_in_forbiden_sign(self):
+        """
+        sign_type1 == 8 marks "no U-turn"; sign_type1 == 9 marks "no left turn".
+        """
+        forbiden_turn_back_simTime = self.ego_data[self.ego_data["sign_type1"] == 8][
+            "simTime"
+        ].tolist()
+        forbiden_turn_left_simTime = self.ego_data[self.ego_data["sign_type1"] == 9][
+            "simTime"
+        ].tolist()
+
+        forbiden_turn_back_simtime_devide = self.different_road_area_simtime(
+            forbiden_turn_back_simTime
+        )
+        forbiden_turn_left_simtime_devide = self.different_road_area_simtime(
+            forbiden_turn_left_simTime
+        )
+        for forbiden_turn_back_simtime in forbiden_turn_back_simtime_devide:
+            ego_car1 = self.ego_data.loc[
+                (self.ego_data["simTime"].isin(forbiden_turn_back_simtime))
+            ]
+            ego_start_speedx1 = ego_car1["speedX"].iloc[0]
+            ego_start_speedy1 = ego_car1["speedY"].iloc[0]
+            ego_end_speedx1 = ego_car1["speedX"].iloc[-1]
+            ego_end_speedy1 = ego_car1["speedY"].iloc[-1]
+
+            if (
+                ego_end_speedx1 * ego_start_speedx1
+                + ego_end_speedy1 * ego_start_speedy1
+                < 0
+            ):
+                self.turning_in_forbiden_turn_back_sign_count += 1
+
+        for forbiden_turn_left_simtime in forbiden_turn_left_simtime_devide:
+            ego_car2 = self.ego_data.loc[
+                (self.ego_data["simTime"].isin(forbiden_turn_left_simtime))
+            ]
+            ego_start_speedx2 = ego_car2["speedX"].iloc[0]
+            ego_start_speedy2 = ego_car2["speedY"].iloc[0]
+            ego_end_speedx2 = ego_car2["speedX"].iloc[-1]
+            ego_end_speedy2 = ego_car2["speedY"].iloc[-1]
+
+            if (
+                ego_end_speedx2 * ego_start_speedx2
+                + ego_end_speedy2 * ego_start_speedy2
+                < 0
+            ):
+                self.turning_in_forbiden_turn_left_sign_count += 1
+
+    def avoid_pedestrian_when_turn_back(self):
+        sensor_on_intersection = self.pedestrian_in_front_of_car()
+        avoid_pedestrian_when_turn_back_simTime_list = self.ego_data[
+            self.ego_data["lane_type"] == 20
+        ]["simTime"].tolist()
+        avoid_pedestrian_when_turn_back_simTime_devide = (
+            self.different_road_area_simtime(
+                avoid_pedestrian_when_turn_back_simTime_list
+            )
+        )
+        if len(sensor_on_intersection) > 0:
+            for (
+                avoid_pedestrian_when_turn_back_simtime
+            ) in avoid_pedestrian_when_turn_back_simTime_devide:
+                pedestrian_in_intersection_simtime = self.pedestrian_data[
+                    self.pedestrian_data["simTime"].isin(
+                        avoid_pedestrian_when_turn_back_simtime
+                    )
+                ]["simTime"].tolist()
+                ego_df = self.ego_data[
+                    self.ego_data["simTime"].isin(pedestrian_in_intersection_simtime)
+                ].reset_index(drop=True)
+                pedestrian_df = self.pedestrian_data[
+                    self.pedestrian_data["simTime"].isin(
+                        pedestrian_in_intersection_simtime
+                    )
+                ].reset_index(drop=True)
+                ego_df["dist"] = np.sqrt(
+                    (ego_df["posX"] - pedestrian_df["posX"]) ** 2
+                    + (ego_df["posY"] - pedestrian_df["posY"]) ** 2
+                )
+                ego_df["speed"] = np.sqrt(ego_df["speedX"] ** 2 + ego_df["speedY"] ** 2)
+                if (ego_df["speed"] != 0).any():
+                    self.avoid_pedestrian_when_turn_back_count += 1
+
+    def statistic(self):
+        self.turn_back_in_forbiden_sign()
+        self.avoid_pedestrian_when_turn_back()
+
+        self.calculated_value = {
+            "turn_back_in_forbiden_turn_back_sign": self.turning_in_forbiden_turn_back_sign_count,
+            "turn_back_in_forbiden_turn_left_sign": self.turning_in_forbiden_turn_left_sign_count,
+            "avoid_pedestrian_when_turn_back": self.avoid_pedestrian_when_turn_back_count,
+        }
+        # self.logger.info(f"U-turn metrics done: {self.calculated_value}")
+        return self.calculated_value
+
+
+class WrongWayViolation:
+    """Illegal-stopping violation checks."""
+
+    def __init__(self, df_data):
+        print("Initializing illegal-stopping violation checks...")
+        self.traffic_violations_type = "stopping violations"
+        self.data = df_data.obj_data[1].copy()  # copy: `v` is rescaled in place
+        # initialize the violation counters
+        self.violation_count = {
+            "urbanExpresswayOrHighwayDrivingLaneStopped": 0,
+            "urbanExpresswayOrHighwayEmergencyLaneStopped": 0,
+            "urbanExpresswayEmergencyLaneDriving": 0,
+        }
+
+    def process_violations(self):
+        """Detect illegal stopping and emergency-lane driving."""
+        # relevant road / lane type codes
+        urban_expressway_or_highway = {1, 2}
+        driving_lane = {1, 4, 5, 6}
+        emergency_lane = {12}
+        self.data["v"] *= 3.6  # m/s -> km/h
+
+        # vectorized violation conditions
+        conditions = [
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & self.data["lane_type"].isin(driving_lane)
+                & (self.data["v"] == 0)
+            ),
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & self.data["lane_type"].isin(emergency_lane)
+                & (self.data["v"] == 0)
+            ),
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & self.data["lane_type"].isin(emergency_lane)
+                & (self.data["v"] != 0)
+            ),
+        ]
+
+        violation_types = [
+            "urbanExpresswayOrHighwayDrivingLaneStopped",
+            "urbanExpresswayOrHighwayEmergencyLaneStopped",
+            "urbanExpresswayEmergencyLaneDriving",
+        ]
+
+        # label each row with its violation type
+        self.data["violation_type"] = None
+        for condition, violation_type in zip(conditions, violation_types):
+            self.data.loc[condition, "violation_type"] = violation_type
+
+        # tally the violations (keeping zero counts)
+        self.violation_count = (
+            self.data["violation_type"]
+            .value_counts()
+            .reindex(violation_types, fill_value=0)
+            .to_dict()
+        )
+
+    def statistic(self) -> dict:
+
+        self.process_violations()
+        # self.logger.info(f"Stopping metrics done: {self.violation_count}")
+        return self.violation_count
+
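The pattern in `process_violations` above — a list of boolean masks zipped with labels, then `value_counts().reindex(..., fill_value=0)` so that categories with no hits still appear in the result — can be exercised on a toy frame (column names and labels here are illustrative only):

```python
import pandas as pd

# Toy data: two highway rows (one stopped, one moving) and one ordinary road.
df = pd.DataFrame({"road_fc": [1, 1, 3], "v": [0.0, 80.0, 0.0]})
conditions = [
    (df["road_fc"] == 1) & (df["v"] == 0),  # stopped on a highway
    (df["road_fc"] == 1) & (df["v"] > 0),   # moving on a highway
]
labels = ["highway_stopped", "highway_moving"]

df["violation_type"] = None
for cond, label in zip(conditions, labels):  # masks are mutually exclusive
    df.loc[cond, "violation_type"] = label

# reindex keeps zero-count categories that value_counts would drop
counts = (
    df["violation_type"].value_counts().reindex(labels, fill_value=0).to_dict()
)
print(counts)
```

Rows matching no condition keep `None` and are ignored by `value_counts` (`dropna=True` by default), so only labelled violations are tallied.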
+
+class SpeedingViolation(object):
+    """Speeding violation checks."""
+
+    """No sign-posted speed-limit metric here: the shp map lacks that data."""
+
+    def __init__(self, df_data):
+        print("Initializing speeding violation checks...")
+        self.traffic_violations_type = "speeding violations"
+        self.data = df_data.obj_data[1].copy()  # copy to avoid modifying the original DataFrame
+        # initialize the violation counters
+        self.violation_counts = {
+            "urbanExpresswayOrHighwaySpeedOverLimit50": 0,
+            "urbanExpresswayOrHighwaySpeedOverLimit20to50": 0,
+            "urbanExpresswayOrHighwaySpeedOverLimit0to20": 0,
+            "urbanExpresswayOrHighwaySpeedUnderLimit": 0,
+            "generalRoadSpeedOverLimit50": 0,
+            "generalRoadSpeedOverLimit20to50": 0,
+        }
+
+    def process_violations(self):
+        """Check each frame for over- and under-speed violations."""
+        # relevant road type codes
+        urban_expressway_or_highway = {1, 2}
+        general_road = {3}
+        self.data["v"] *= 3.6  # m/s -> km/h
+
+        # violation conditions
+        conditions = [
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & (self.data["v"] > self.data["road_speed_max"] * 1.5)
+            ),
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & (self.data["v"] > self.data["road_speed_max"] * 1.2)
+                & (self.data["v"] <= self.data["road_speed_max"] * 1.5)
+            ),
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & (self.data["v"] > self.data["road_speed_max"])
+                & (self.data["v"] <= self.data["road_speed_max"] * 1.2)
+            ),
+            (
+                self.data["road_fc"].isin(urban_expressway_or_highway)
+                & (self.data["v"] < self.data["road_speed_min"])
+            ),
+            (
+                self.data["road_fc"].isin(general_road)
+                & (self.data["v"] > self.data["road_speed_max"] * 1.5)
+            ),
+            (
+                self.data["road_fc"].isin(general_road)
+                & (self.data["v"] > self.data["road_speed_max"] * 1.2)
+                & (self.data["v"] <= self.data["road_speed_max"] * 1.5)
+            ),
+        ]
+
+        violation_types = [
+            "urbanExpresswayOrHighwaySpeedOverLimit50",
+            "urbanExpresswayOrHighwaySpeedOverLimit20to50",
+            "urbanExpresswayOrHighwaySpeedOverLimit0to20",
+            "urbanExpresswayOrHighwaySpeedUnderLimit",
+            "generalRoadSpeedOverLimit50",
+            "generalRoadSpeedOverLimit20to50",
+        ]
+
+        # 设置违规类型
+        self.data["violation_type"] = None
+        for condition, violation_type in zip(conditions, violation_types):
+            self.data.loc[condition, "violation_type"] = violation_type
+
+        # Count each violation type; update() keeps the zero-count keys
+        # initialised in __init__ for categories with no violations
+        self.violation_counts.update(
+            self.data["violation_type"].value_counts().to_dict()
+        )
+
+    def statistic(self) -> dict:
+        # run the violation checks before reporting
+        self.process_violations()
+        # self.logger.info(f"超速违规类指标统计完成,统计结果:{self.violation_counts}")
+        return self.violation_counts
+
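The chained boolean masks above can also be collapsed into a single `numpy.select` call, which applies the first matching condition per row. A minimal sketch with a synthetic DataFrame (column names follow the code above; speeds and limits are illustrative):

```python
import numpy as np
import pandas as pd

# Synthetic frames: road_fc 1/2 = expressway/highway, 3 = general road
df = pd.DataFrame({
    "road_fc": [1, 1, 3],
    "v": [130.0, 90.0, 95.0],          # km/h, already converted
    "road_speed_max": [80.0, 80.0, 60.0],
})

highway = df["road_fc"].isin({1, 2})
conditions = [
    highway & (df["v"] > df["road_speed_max"] * 1.5),
    highway & (df["v"] > df["road_speed_max"] * 1.2),
    df["road_fc"].isin({3}) & (df["v"] > df["road_speed_max"] * 1.5),
]
labels = [
    "urbanExpresswayOrHighwaySpeedOverLimit50",
    "urbanExpresswayOrHighwaySpeedOverLimit20to50",
    "generalRoadSpeedOverLimit50",
]
# np.select evaluates conditions in order, so the >1.5x band takes
# precedence over >1.2x without explicit upper bounds
df["violation_type"] = np.select(conditions, labels, default=None)
print(df["violation_type"].tolist())
```

Because later conditions never override earlier matches, the explicit `<=` upper bounds in the original masks become unnecessary.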
+
+class TrafficLightViolation(object):
+    """违反交通灯类"""
+
+    """需要补充判断车辆是左转直行还是右转,判断红绿灯是方向性红绿灯还是通过性红绿灯"""
+
+    def __init__(self, df_data):
+        """初始化方法"""
+        self.traffic_violations_type = "违反交通灯类"
+        print("违反交通灯类 类初始化中...")
+        self.config = df_data.vehicle_config
+
+        self.data_ego = df_data.ego_data  # 获取数据
+        self.violation_counts = {
+            "trafficSignalViolation": 0,
+            "illegalDrivingOrParkingAtCrossroads": 0,
+        }
+
+        # 处理数据并判定违规
+        self.process_violations()
+
+    def is_point_cross_line(self, point, stop_line_points):
+        """
+        判断车辆的某一坐标点是否跨越了由两个点定义的停止线(线段)。
+        使用向量叉积判断点是否在线段上,并通过计算车辆的航向角来判断是否跨越了停止线。
+
+        :param point: 车辆位置点 (x, y, heading),包括 x, y 位置以及朝向角度(弧度制)
+        :param stop_line_points: 停止线两个端点 [[x1, y1], [x2, y2]]
+        :return: True 如果车辆跨越了停止线,否则 False
+        """
+        line_vector = np.array(
+            [
+                stop_line_points[1][0] - stop_line_points[0][0],
+                stop_line_points[1][1] - stop_line_points[0][1],
+            ]
+        )
+        point_vector = np.array(
+            [point[0] - stop_line_points[0][0], point[1] - stop_line_points[0][1]]
+        )
+
+        # 2-D cross product computed explicitly (np.cross on 2-D inputs is
+        # deprecated in NumPy 2.0).  An exact zero almost never occurs with
+        # floating-point positions, so compare against a small tolerance.
+        cross_product = (
+            line_vector[0] * point_vector[1] - line_vector[1] * point_vector[0]
+        )
+        if abs(cross_product) > 1e-6:
+            return False
+
+        mid_point = (
+            np.array([stop_line_points[0][0], stop_line_points[0][1]])
+            + 0.5 * line_vector
+        )
+        axletree_to_mid_vector = np.array(
+            [point[0] - mid_point[0], point[1] - mid_point[1]]
+        )
+        direction_vector = np.array([math.cos(point[2]), math.sin(point[2])])
+
+        norm_axletree_to_mid = np.linalg.norm(axletree_to_mid_vector)
+        norm_direction = np.linalg.norm(direction_vector)
+
+        if norm_axletree_to_mid == 0 or norm_direction == 0:
+            return False
+
+        cos_theta = np.dot(axletree_to_mid_vector, direction_vector) / (
+            norm_axletree_to_mid * norm_direction
+        )
+        angle_theta = math.degrees(math.acos(cos_theta))
+
+        return angle_theta <= 90
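To make the heading-angle part of the geometry concrete, here is a small standalone check using the same math with simplified names (purely illustrative; not part of the module):

```python
import math
import numpy as np

def heading_faces_line(point, stop_line):
    """Return True if the heading at `point` (x, y, heading_rad) points
    toward the midpoint of `stop_line` ([[x1, y1], [x2, y2]]) within 90deg."""
    mid = np.array([(stop_line[0][0] + stop_line[1][0]) / 2,
                    (stop_line[0][1] + stop_line[1][1]) / 2])
    to_mid = mid - np.array(point[:2])
    direction = np.array([math.cos(point[2]), math.sin(point[2])])
    denom = np.linalg.norm(to_mid) * np.linalg.norm(direction)
    if denom == 0:
        return False
    cos_theta = float(np.dot(to_mid, direction)) / denom
    # clamp to the acos domain to guard against rounding
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta)))) <= 90

stop_line = [[0.0, -1.0], [0.0, 1.0]]                      # vertical line at x = 0
print(heading_faces_line((-2.0, 0.0, 0.0), stop_line))     # heading +x, toward line
print(heading_faces_line((-2.0, 0.0, math.pi), stop_line)) # heading -x, away
```

The clamp on `cos_theta` matters in practice: floating-point dot products can land slightly outside [-1, 1] and make `math.acos` raise.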
+
+    def _filter_data(self):
+        """Select the rows relevant to traffic-light analysis."""
+        # .copy() avoids SettingWithCopyWarning when _group_data adds columns
+        return self.data_ego[
+            (self.data_ego["stopline_id"] != -1)
+            & (self.data_ego["stopline_type"] == 1)
+            & (self.data_ego["trafficlight_id"] != -1)
+        ].copy()
+
+    def _group_data(self, filtered_data):
+        """按时间差对数据进行分组"""
+        filtered_data["time_diff"] = filtered_data["simTime"].diff().fillna(0)
+        threshold = 0.5
+        filtered_data["group"] = (filtered_data["time_diff"] > threshold).cumsum()
+        return filtered_data.groupby("group")
+
+    def _analyze_group(self, group_data):
+        """分析单个分组的数据,判断是否闯红灯"""
+        photos = []
+        stop_in_intersection = False
+
+        for _, row in group_data.iterrows():
+            vehicle_pos = np.array([row["posX"], row["posY"], row["posH"]])
+            stop_line_points = [
+                [row["stopline_x1"], row["stopline_y1"]],
+                [row["stopline_x2"], row["stopline_y2"]],
+            ]
+            traffic_light_status = row["traffic_light_status"]
+            heading_vector = np.array([np.cos(row["posH"]), np.sin(row["posH"])])
+            heading_vector = heading_vector / np.linalg.norm(heading_vector)
+
+            # with open(self.config_path / "vehicle_config.yaml", 'r') as f:
+            #     config = yaml.load(f, Loader=yaml.FullLoader)
+            front_wheel_pos = vehicle_pos[:2] + self.config["EGO_WHEELBASS"] * heading_vector
+            rear_wheel_pos = vehicle_pos[:2] - self.config["EGO_WHEELBASS"] * heading_vector
+            dist = math.sqrt(
+                (row["posX"] - row["traffic_light_x"]) ** 2
+                + (row["posY"] - row["traffic_light_y"]) ** 2
+            )
+
+            if abs(row["speedH"]) > 0.01 or abs(row["speedH"]) < 0.01:
+                has_crossed_line_front = (
+                    self.is_point_cross_line(front_wheel_pos, stop_line_points)
+                    and traffic_light_status == 1
+                )
+                has_crossed_line_rear = (
+                    self.is_point_cross_line(rear_wheel_pos, stop_line_points)
+                    and row["v"] > 0
+                    and traffic_light_status == 1
+                )
+                has_stop_in_intersection = has_crossed_line_front and row["v"] == 0
+                has_passed_intersection = has_crossed_line_front and dist < 1.0
+                # print(f'time: {row["simTime"]}, speed: {row["speedH"]}, posH: {row["posH"]}, dist: {dist:.2f}, has_stop_in_intersection: {has_stop_in_intersection}, has_passed_intersection: {has_passed_intersection}')
+
+                photos.extend(
+                    [
+                        has_crossed_line_front,
+                        has_crossed_line_rear,
+                        has_passed_intersection,
+                        has_stop_in_intersection,
+                    ]
+                )
+                stop_in_intersection = has_passed_intersection
+
+        return photos, stop_in_intersection
+
+    def is_vehicle_run_a_red_light(self):
+        """判断车辆是否闯红灯"""
+        filtered_data = self._filter_data()
+        grouped_data = self._group_data(filtered_data)
+        self.photos_group = []
+        self.stop_in_intersections = []
+
+        for _, group_data in grouped_data:
+            photos, stop_in_intersection = self._analyze_group(group_data)
+            self.photos_group.append(photos)
+            self.stop_in_intersections.append(stop_in_intersection)
+
+    def process_violations(self):
+        """处理数据并判定违规"""
+        self.is_vehicle_run_a_red_light()
+        # all([]) is True, so guard against groups that produced no checks
+        count_1 = sum(bool(photos) and all(photos) for photos in self.photos_group)
+        count_2 = sum(self.stop_in_intersections)
+
+        self.violation_counts["trafficSignalViolation"] = count_1
+        self.violation_counts["illegalDrivingOrParkingAtCrossroads"] = count_2
+
+    def statistic(self):
+        """返回统计结果"""
+        return self.violation_counts
+
+
+class WarningViolation(object):
+    """警告性违规类"""
+
+    def __init__(self, df_data):
+        self.traffic_violations_type = "警告性违规类"
+        print("警告性违规类 类初始化中...")
+        self.config = df_data.vehicle_config
+        self.data_ego = df_data.obj_data[1]
+        self.data = self.data_ego.copy()  # 避免修改原始 DataFrame
+        self.violation_counts = {
+            "generalRoadIrregularLaneUse": 0,  # 驾驶机动车在高速公路、城市快速路以外的道路上不按规定车道行驶
+            "urbanExpresswayOrHighwayRideLaneDivider": 0,  # 机动车在高速公路或者城市快速路上骑、轧车行道分界线
+        }
+
+    def process_violations(self):
+        general_road = {3}  # 普通道路
+        lane_type = {11}  # 车道类型 # 10: 机动车道,11: 非机动车道
+        # with open(self.config_path / "vehicle_config.yaml", 'r') as f:
+        #     config = yaml.load(f, Loader=yaml.FullLoader)
+        car_width = self.config["CAR_WIDTH"]
+        lane_width = self.data["lane_width"]  # 假定 'lane_width' 在数据中存在
+
+        # 驾驶机动车在高速公路、城市快速路以外的道路上不按规定车道行驶
+        # 使用布尔索引来筛选满足条件的行
+        condition = (self.data["road_fc"].isin(general_road)) & (
+            self.data["lane_type"].isin(lane_type)
+        )
+
+        # 创建一个新的列,并根据条件设置值
+        self.data["is_violation"] = condition
+
+        # 统计满足条件的连续时间段
+        violation_segments = self.count_continuous_violations(
+            self.data["is_violation"], self.data["simTime"]
+        )
+
+        # 更新骑行车道线违规计数
+        self.violation_counts["generalRoadIrregularLaneUse"] += len(violation_segments)
+
+        # 机动车在高速公路或者城市快速路上骑、轧车行道分界线
+
+        # 计算阈值
+        threshold = (lane_width - car_width) / 2
+
+        # 找到满足条件的行
+        self.data["is_violation"] = self.data["laneOffset"] > threshold
+
+        # 统计满足条件的连续时间段
+        violation_segments = self.count_continuous_violations(
+            self.data["is_violation"], self.data["simTime"]
+        )
+
+        # 更新骑行车道线违规计数
+        self.violation_counts["urbanExpresswayOrHighwayRideLaneDivider"] += len(
+            violation_segments
+        )
+
+    def count_continuous_violations(self, violation_series, time_series):
+        """统计连续违规的时间段数量"""
+        continuous_segments = []
+        current_segment = []
+
+        for is_violation, time in zip(violation_series, time_series):
+            if is_violation:
+                if not current_segment:  # 新的连续段开始
+                    current_segment.append(time)
+            else:
+                if current_segment:  # 连续段结束
+                    continuous_segments.append(current_segment)
+                    current_segment = []
+
+        # 检查是否有一个未结束的连续段在最后
+        if current_segment:
+            continuous_segments.append(current_segment)
+
+        return continuous_segments
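The segment counting above is equivalent to grouping consecutive boolean values; a compact sketch of the same semantics using `itertools.groupby` (illustrative, with a hypothetical helper name):

```python
from itertools import groupby

def count_violation_segments(flags):
    """Count maximal runs of True in a boolean sequence."""
    return sum(1 for value, _ in groupby(flags) if value)

print(count_violation_segments([False, True, True, False, True]))  # 2 runs of True
```

`groupby` collapses each run of equal values into a single group, so counting the True groups yields the number of continuous violation intervals without tracking segment state by hand.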
+
+    def statistic(self):
+        # 处理数据
+        self.process_violations()
+        # self.logger.info(f"警告性违规类指标统计完成,统计结果:{self.violation_counts}")
+        return self.violation_counts
+
+class TrafficSignViolation(object):
+    """交通标志违规类"""
+
+    def __init__(self, df_data):
+        self.traffic_violations_type = "交通标志违规类"
+        print("交通标志违规类 类初始化中...")
+        self.data_ego = df_data.obj_data[1]
+        self.ego_data = (
+            self.data_ego[TRFFICSIGN_INFO].copy().reset_index(drop=True)
+        )
+        self.data_ego = self.data_ego.copy()  # 避免修改原始 DataFrame
+        self.violation_counts = {
+            "NoStraightThrough": 0,  # 禁止直行标志地方直行
+            "SpeedLimitViolation": 0,  # 违反限速规定
+            "MinimumSpeedLimitViolation": 0,  # 违反最低限速规定
+        }
+
+    def checkForProhibitionViolation(self):
+        """禁令标志判断违规:7 禁止直行,12:限制速度"""
+        # 筛选出 sign_type1 为7(禁止直行)的数据
+        violation_straight_df = self.data_ego[self.data_ego["sign_type1"] == 7].copy()
+        
+        # 判断车辆是否在禁止直行路段直行
+        if not violation_straight_df.empty:
+            # 按时间戳排序(假设数据按时间顺序处理)
+            violation_straight_df = violation_straight_df.sort_values('simTime')
+            
+            # 计算航向角变化(前后时间点的差值绝对值)
+            violation_straight_df['posH_diff'] = violation_straight_df['posH'].diff().abs()
+            
+            # 筛选条件:航向角变化小于阈值(例如5度)且速度不为0
+            threshold = 5  # 单位:度(根据场景调整)
+            mask = (violation_straight_df['posH_diff'] <= threshold) & (violation_straight_df['v'] > 0)
+            straight_violations = violation_straight_df[mask]
+            
+            # record the count under the key initialised in __init__
+            self.violation_counts["NoStraightThrough"] = len(straight_violations)
+            
+        
+        # Speed-limit signs (sign_type1 == 12): check only the frames where
+        # such a sign is actually present
+        violation_speed_limit_df = self.data_ego[self.data_ego["sign_type1"] == 12]
+        if not violation_speed_limit_df.empty:
+            mask = violation_speed_limit_df["v"] > violation_speed_limit_df["sign_speed"]
+            self.violation_counts["SpeedLimitViolation"] = int(mask.sum())
+
+    def checkForInstructionViolation(self):
+        """Mandatory signs: 13 = minimum speed limit."""
+        violation_minimum_speed_limit_df = self.data_ego[self.data_ego["sign_type1"] == 13]
+        if not violation_minimum_speed_limit_df.empty:
+            mask = violation_minimum_speed_limit_df["v"] < violation_minimum_speed_limit_df["sign_speed"]
+            self.violation_counts["MinimumSpeedLimitViolation"] = int(mask.sum())
+
+    def statistic(self):
+        self.checkForProhibitionViolation()
+        self.checkForInstructionViolation()
+        # self.logger.info(f"交通标志违规类指标统计完成,统计结果:{self.violation_counts}")
+        return self.violation_counts
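The straight-driving heuristic above (small heading change at nonzero speed) can be exercised on a few synthetic frames; thresholds and values here are illustrative only:

```python
import pandas as pd

df = pd.DataFrame({
    "simTime": [0.0, 0.1, 0.2, 0.3],
    "posH":    [10.0, 10.5, 11.0, 40.0],   # degrees; last frame is a sharp turn
    "v":       [5.0, 5.0, 5.0, 5.0],
})
df = df.sort_values("simTime")
df["posH_diff"] = df["posH"].diff().abs()
threshold = 5.0  # degrees, scenario-dependent
straight = df[(df["posH_diff"] <= threshold) & (df["v"] > 0)]
print(len(straight))
```

Note that `diff()` leaves the first row as NaN, which compares False against the threshold, so the first frame of a window is never counted; that matches the method's behavior.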
+
+
+class ViolationManager:
+    """违规管理类,用于管理所有违规行为"""
+
+    def __init__(self, data_processed):
+
+        self.violations = []
+        self.data = data_processed
+        self.config = data_processed.traffic_config
+
+        self.over_take_violation = OvertakingViolation(self.data)
+        self.slow_down_violation = SlowdownViolation(self.data)
+        self.wrong_way_violation = WrongWayViolation(self.data)
+        self.speeding_violation = SpeedingViolation(self.data)
+        self.traffic_light_violation = TrafficLightViolation(self.data)
+        self.warning_violation = WarningViolation(self.data)
+
+        # self.report_statistic()
+
+    def report_statistic(self):
+
+        traffic_result = self.over_take_violation.statistic()
+        traffic_result.update(self.slow_down_violation.statistic())
+        traffic_result.update(self.traffic_light_violation.statistic())
+        traffic_result.update(self.wrong_way_violation.statistic())
+        traffic_result.update(self.speeding_violation.statistic())
+        traffic_result.update(self.warning_violation.statistic())
+
+
+        # evaluator = Score(self.config)
+        # result = evaluator.evaluate(traffic_result)
+
+        # print("\n[交规类表现及得分情况]")
+        # # self.logger.info(f"Traffic Result:{traffic_result}")
+        # return result
+        return traffic_result
+
+
+# 示例使用
+if __name__ == "__main__":
+    pass

+ 623 - 0
scripts/evaluator_enhanced.py

@@ -0,0 +1,623 @@
+#!/usr/bin/env python3
+# evaluator_enhanced.py
+import sys
+import warnings
+import time
+import importlib
+import importlib.util
+import yaml
+from pathlib import Path
+import argparse
+from concurrent.futures import ThreadPoolExecutor
+from functools import lru_cache
+from typing import Dict, Any, List, Optional, Type, Tuple, Callable, Union
+from datetime import datetime
+import logging
+import traceback
+import json
+import inspect
+
+# 常量定义
+DEFAULT_WORKERS = 4
+CUSTOM_METRIC_PREFIX = "metric_"
+CUSTOM_METRIC_FILE_PATTERN = "*.py"
+
+# 安全设置根目录路径
+if hasattr(sys, "_MEIPASS"):
+    _ROOT_PATH = Path(sys._MEIPASS)
+else:
+    _ROOT_PATH = Path(__file__).resolve().parent.parent
+
+sys.path.insert(0, str(_ROOT_PATH))
+
+class ConfigManager:
+    """配置管理组件"""
+    
+    def __init__(self, logger: logging.Logger):
+        self.logger = logger
+        self.base_config: Dict[str, Any] = {}
+        self.custom_config: Dict[str, Any] = {}
+        self.merged_config: Dict[str, Any] = {}
+    
+    def split_configs(self, all_config_path: Path, base_config_path: Path, custom_config_path: Path) -> None:
+        """从all_metrics_config.yaml拆分成内置和自定义配置"""
+        try:
+            with open(all_config_path, 'r', encoding='utf-8') as f:
+                all_metrics = yaml.safe_load(f) or {}
+            
+            with open(base_config_path, 'r', encoding='utf-8') as f:
+                builtin_metrics = yaml.safe_load(f) or {}
+            
+            custom_metrics = self._find_custom_metrics(all_metrics, builtin_metrics)
+            
+            if custom_metrics:
+                with open(custom_config_path, 'w', encoding='utf-8') as f:
+                    yaml.dump(custom_metrics, f, allow_unicode=True, sort_keys=False, indent=2)
+                self.logger.info(f"Split configs: custom metrics saved to {custom_config_path}")
+        except Exception as e:
+            self.logger.error(f"Failed to split configs: {str(e)}")
+            raise
+    
+    def _find_custom_metrics(self, all_metrics, builtin_metrics, current_path=""):
+        """递归比较找出自定义指标"""
+        custom_metrics = {}
+        
+        if isinstance(all_metrics, dict) and isinstance(builtin_metrics, dict):
+            for key in all_metrics:
+                if key not in builtin_metrics:
+                    custom_metrics[key] = all_metrics[key]
+                else:
+                    child_custom = self._find_custom_metrics(
+                        all_metrics[key], 
+                        builtin_metrics[key],
+                        f"{current_path}.{key}" if current_path else key
+                    )
+                    if child_custom:
+                        custom_metrics[key] = child_custom
+        elif all_metrics != builtin_metrics:
+            return all_metrics
+        
+        if custom_metrics:
+            return self._ensure_structure(custom_metrics, all_metrics, current_path)
+        return None
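The recursive diff above returns only the nodes present in the full config but missing from the builtin one. A simplified standalone version of that idea (ignoring the name/priority back-fill done by `_ensure_structure`; names are illustrative):

```python
def find_custom(all_metrics, builtin):
    """Return the sub-tree of `all_metrics` that `builtin` does not contain."""
    if not (isinstance(all_metrics, dict) and isinstance(builtin, dict)):
        # leaf comparison: report the value only if it differs
        return all_metrics if all_metrics != builtin else None
    custom = {}
    for key, value in all_metrics.items():
        if key not in builtin:
            custom[key] = value
        else:
            child = find_custom(value, builtin[key])
            if child:
                custom[key] = child
    return custom or None

all_cfg = {"safety": {"TTC": {"max": 3}, "CustomTTC": {"max": 5}}}
builtin = {"safety": {"TTC": {"max": 3}}}
print(find_custom(all_cfg, builtin))
```

Keys shared by both trees recurse; only genuinely new branches (here `CustomTTC`) survive into the custom config.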
+    
+    def _ensure_structure(self, metrics_dict, full_dict, path):
+        """确保每级包含name和priority"""
+        if not isinstance(metrics_dict, dict):
+            return metrics_dict
+        
+        current = full_dict
+        for key in path.split('.'):
+            if key in current:
+                current = current[key]
+            else:
+                break
+        
+        result = {}
+        if isinstance(current, dict):
+            if 'name' in current:
+                result['name'] = current['name']
+            if 'priority' in current:
+                result['priority'] = current['priority']
+        
+        for key, value in metrics_dict.items():
+            if key not in ['name', 'priority']:
+                result[key] = self._ensure_structure(value, full_dict, f"{path}.{key}" if path else key)
+        
+        return result
+
+    def load_configs(self, base_config_path: Optional[Path], custom_config_path: Optional[Path]) -> Dict[str, Any]:
+        """加载并合并配置"""
+        # 自动拆分配置
+        if base_config_path and base_config_path.exists():
+            all_config_path = base_config_path.parent / "all_metrics_config.yaml"
+            if all_config_path.exists():
+                target_custom_path = custom_config_path or (base_config_path.parent / "custom_metrics_config.yaml")
+                self.split_configs(all_config_path, base_config_path, target_custom_path)
+                custom_config_path = target_custom_path
+        
+        self.base_config = self._safe_load_config(base_config_path) if base_config_path else {}
+        self.custom_config = self._safe_load_config(custom_config_path) if custom_config_path else {}
+        self.merged_config = self._merge_configs(self.base_config, self.custom_config)
+        return self.merged_config
+    
+    def _safe_load_config(self, config_path: Path) -> Dict[str, Any]:
+        """安全加载YAML配置"""
+        try:
+            if not config_path.exists():
+                self.logger.warning(f"Config file not found: {config_path}")
+                return {}
+                
+            with config_path.open('r', encoding='utf-8') as f:
+                config = yaml.safe_load(f) or {}
+                self.logger.info(f"Loaded config: {config_path}")
+                return config
+        except Exception as e:
+            self.logger.error(f"Failed to load config {config_path}: {str(e)}")
+            return {}
+    
+    def _merge_configs(self, base_config: Dict, custom_config: Dict) -> Dict:
+        """智能合并配置"""
+        merged = base_config.copy()
+        
+        for level1_key, level1_value in custom_config.items():
+            if not isinstance(level1_value, dict) or 'name' not in level1_value:
+                if level1_key not in merged:
+                    merged[level1_key] = level1_value
+                continue
+                
+            if level1_key not in merged:
+                merged[level1_key] = level1_value
+            else:
+                for level2_key, level2_value in level1_value.items():
+                    if level2_key in ['name', 'priority']:
+                        continue
+                        
+                    if isinstance(level2_value, dict):
+                        if level2_key not in merged[level1_key]:
+                            merged[level1_key][level2_key] = level2_value
+                        else:
+                            for level3_key, level3_value in level2_value.items():
+                                if level3_key in ['name', 'priority']:
+                                    continue
+                                    
+                                if isinstance(level3_value, dict):
+                                    if level3_key not in merged[level1_key][level2_key]:
+                                        merged[level1_key][level2_key][level3_key] = level3_value
+        
+        return merged
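The merge policy above adds custom nodes only where the builtin tree has no entry; builtin leaves are never overridden. A minimal standalone illustration of that behavior (synthetic dicts; `merge_missing` is a hypothetical helper mirroring the policy, not the method itself):

```python
def merge_missing(base, custom):
    """Recursively add keys from `custom` that `base` lacks; never override
    existing builtin values (mirrors the policy of _merge_configs)."""
    merged = dict(base)
    for key, value in custom.items():
        if key not in merged:
            merged[key] = value
        elif isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_missing(merged[key], value)
    return merged

base = {"safety": {"name": "safety", "safeTime": {"TTC": {"max": 3}}}}
custom = {"safety": {"safeTime": {"TTC": {"max": 99}, "CustomTTC": {"max": 5}}}}
merged = merge_missing(base, custom)
print(merged["safety"]["safeTime"]["TTC"]["max"])   # builtin value kept
print("CustomTTC" in merged["safety"]["safeTime"])  # custom metric added
```

Even though the custom config tries to set `TTC.max: 99`, the builtin value wins; only the new `CustomTTC` branch is merged in.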
+    
+    def get_config(self) -> Dict[str, Any]:
+        return self.merged_config
+    
+    def get_base_config(self) -> Dict[str, Any]:
+        return self.base_config
+    
+    def get_custom_config(self) -> Dict[str, Any]:
+        return self.custom_config
+
+class MetricLoader:
+    """指标加载器组件"""
+    
+    def __init__(self, logger: logging.Logger, config_manager: ConfigManager):
+        self.logger = logger
+        self.config_manager = config_manager
+        self.metric_modules: Dict[str, Type] = {}
+        self.custom_metric_modules: Dict[str, Any] = {}
+    
+    def load_builtin_metrics(self) -> Dict[str, Type]:
+        """加载内置指标模块"""
+        module_mapping = {
+            "safety": ("modules.metric.safety", "SafeManager"),
+            "comfort": ("modules.metric.comfort", "ComfortManager"),
+            "traffic": ("modules.metric.traffic", "ViolationManager"),
+            "efficient": ("modules.metric.efficient", "EfficientManager"),
+            "function": ("modules.metric.function", "FunctionManager"),
+        }
+        
+        self.metric_modules = {
+            name: self._load_module(*info)
+            for name, info in module_mapping.items()
+        }
+        
+        self.logger.info(f"Loaded builtin metrics: {', '.join(self.metric_modules.keys())}")
+        return self.metric_modules
+    
+    @lru_cache(maxsize=32)
+    def _load_module(self, module_path: str, class_name: str) -> Type:
+        """动态加载Python模块"""
+        try:
+            module = __import__(module_path, fromlist=[class_name])
+            return getattr(module, class_name)
+        except (ImportError, AttributeError) as e:
+            self.logger.error(f"Failed to load module: {module_path}.{class_name} - {str(e)}")
+            raise
+    
+    def load_custom_metrics(self, custom_metrics_path: Optional[Path]) -> Dict[str, Any]:
+        """加载自定义指标模块"""
+        if not custom_metrics_path or not custom_metrics_path.is_dir():
+            self.logger.info("No custom metrics path or path not exists")
+            return {}
+
+        loaded_count = 0
+        for py_file in custom_metrics_path.glob(CUSTOM_METRIC_FILE_PATTERN):
+            if py_file.name.startswith(CUSTOM_METRIC_PREFIX):
+                if self._process_custom_metric_file(py_file):
+                    loaded_count += 1
+        
+        self.logger.info(f"Loaded {loaded_count} custom metric modules")
+        return self.custom_metric_modules
+    
+    def _process_custom_metric_file(self, file_path: Path) -> bool:
+        """处理单个自定义指标文件"""
+        try:
+            metric_key = self._validate_metric_file(file_path)
+            
+            module_name = f"custom_metric_{file_path.stem}"
+            spec = importlib.util.spec_from_file_location(module_name, file_path)
+            module = importlib.util.module_from_spec(spec)
+            spec.loader.exec_module(module)
+            
+            from modules.lib.metric_registry import BaseMetric
+            metric_class = None
+            
+            for name, obj in inspect.getmembers(module):
+                if (inspect.isclass(obj) and 
+                    issubclass(obj, BaseMetric) and 
+                    obj != BaseMetric):
+                    metric_class = obj
+                    break
+            
+            if metric_class:
+                self.custom_metric_modules[metric_key] = {
+                    'type': 'class',
+                    'module': module,
+                    'class': metric_class
+                }
+                self.logger.info(f"Loaded class-based custom metric: {metric_key}")
+            elif hasattr(module, 'evaluate'):
+                self.custom_metric_modules[metric_key] = {
+                    'type': 'function',
+                    'module': module
+                }
+                self.logger.info(f"Loaded function-based custom metric: {metric_key}")
+            else:
+                raise AttributeError(f"Missing evaluate() function or BaseMetric subclass: {file_path.name}")
+                
+            return True
+        except ValueError as e:
+            self.logger.warning(str(e))
+            return False
+        except Exception as e:
+            self.logger.error(f"Failed to load custom metric {file_path}: {str(e)}")
+            return False
+    
+    def _validate_metric_file(self, file_path: Path) -> str:
+        """验证自定义指标文件命名规范"""
+        stem = file_path.stem[len(CUSTOM_METRIC_PREFIX):]
+        parts = stem.split('_')
+        if len(parts) < 3:
+            raise ValueError(f"Invalid custom metric filename: {file_path.name}, should be metric_<level1>_<level2>_<level3>.py")
+
+        level1, level2, level3 = parts[:3]
+        if not self._is_metric_configured(level1, level2, level3):
+            raise ValueError(f"Unconfigured metric: {level1}.{level2}.{level3}")
+        return f"{level1}.{level2}.{level3}"
+    
+    def _is_metric_configured(self, level1: str, level2: str, level3: str) -> bool:
+        """检查指标是否在配置中注册"""
+        custom_config = self.config_manager.get_custom_config()
+        try:
+            return (level1 in custom_config and 
+                    isinstance(custom_config[level1], dict) and
+                    level2 in custom_config[level1] and
+                    isinstance(custom_config[level1][level2], dict) and
+                    level3 in custom_config[level1][level2] and
+                    isinstance(custom_config[level1][level2][level3], dict))
+        except Exception:
+            return False
+    
+    def get_builtin_metrics(self) -> Dict[str, Type]:
+        return self.metric_modules
+    
+    def get_custom_metrics(self) -> Dict[str, Any]:
+        return self.custom_metric_modules
+
+class EvaluationEngine:
+    """评估引擎组件"""
+    
+    def __init__(self, logger: logging.Logger, config_manager: ConfigManager, metric_loader: MetricLoader):
+        self.logger = logger
+        self.config_manager = config_manager
+        self.metric_loader = metric_loader
+    
+    def evaluate(self, data: Any) -> Dict[str, Any]:
+        """执行评估流程"""
+        raw_results = self._collect_builtin_metrics(data)
+        custom_results = self._collect_custom_metrics(data)
+        return self._process_merged_results(raw_results, custom_results)
+    
+    def _collect_builtin_metrics(self, data: Any) -> Dict[str, Any]:
+        """收集内置指标结果"""
+        metric_modules = self.metric_loader.get_builtin_metrics()
+        raw_results: Dict[str, Any] = {}
+        if not metric_modules:
+            # ThreadPoolExecutor rejects max_workers=0
+            return raw_results
+
+        with ThreadPoolExecutor(max_workers=len(metric_modules)) as executor:
+            futures = {
+                executor.submit(self._run_module, module, data, module_name): module_name
+                for module_name, module in metric_modules.items()
+            }
+
+            for future in futures:
+                module_name = futures[future]
+                try:
+                    result = future.result()
+                    raw_results[module_name] = result[module_name]
+                except Exception as e:
+                    self.logger.error(
+                        f"{module_name} evaluation failed: {str(e)}",
+                        exc_info=True,
+                    )
+                    raw_results[module_name] = {
+                        "status": "error",
+                        "message": str(e),
+                        "timestamp": datetime.now().isoformat(),
+                    }
+        
+        return raw_results
+    
+    def _collect_custom_metrics(self, data: Any) -> Dict[str, Dict]:
+        """收集自定义指标结果"""
+        custom_metrics = self.metric_loader.get_custom_metrics()
+        if not custom_metrics:
+            return {}
+            
+        custom_results = {}
+        
+        for metric_key, metric_info in custom_metrics.items():
+            try:
+                level1, level2, level3 = metric_key.split('.')
+                
+                if metric_info['type'] == 'class':
+                    metric_class = metric_info['class']
+                    metric_instance = metric_class(data)
+                    metric_result = metric_instance.calculate()
+                else:
+                    module = metric_info['module']
+                    metric_result = module.evaluate(data)
+                
+                if level1 not in custom_results:
+                    custom_results[level1] = {}
+                # merge rather than overwrite, so several custom metrics
+                # under the same level1 category all survive
+                custom_results[level1].update(metric_result)
+                
+                self.logger.info(f"Calculated custom metric: {level1}.{level2}.{level3}")
+                
+            except Exception as e:
+                self.logger.error(f"Custom metric {metric_key} failed: {str(e)}")
+                
+                try:
+                    level1 = metric_key.split('.')[0]
+                    # key the error per metric so one failure does not
+                    # clobber results from other custom metrics
+                    custom_results.setdefault(level1, {})[metric_key] = {
+                        "status": "error",
+                        "message": str(e),
+                        "timestamp": datetime.now().isoformat(),
+                    }
+                except Exception:
+                    pass
+        
+        return custom_results
+    
+    def _process_merged_results(self, raw_results: Dict, custom_results: Dict) -> Dict:
+        """处理合并后的评估结果"""
+        from modules.lib.score import Score
+        final_results = {}
+        merged_config = self.config_manager.get_config()
+
+        for level1, level1_data in raw_results.items():
+            if level1 in custom_results:
+                level1_data.update(custom_results[level1])
+
+            try:
+                evaluator = Score(merged_config, level1)
+                final_results.update(evaluator.evaluate(level1_data))
+            except Exception as e:
+                final_results[level1] = self._format_error(e)
+
+        for level1, level1_data in custom_results.items():
+            if level1 not in raw_results:
+                try:
+                    evaluator = Score(merged_config, level1)
+                    final_results.update(evaluator.evaluate(level1_data))
+                except Exception as e:
+                    final_results[level1] = self._format_error(e)
+
+        return final_results
+        
+    def _format_error(self, e: Exception) -> Dict:
+        return {
+            "status": "error",
+            "message": str(e),
+            "timestamp": datetime.now().isoformat()
+        }
+                
+    def _run_module(self, module_class: Any, data: Any, module_name: str) -> Dict[str, Any]:
+        """执行单个评估模块"""
+        try:
+            instance = module_class(data)
+            return {module_name: instance.report_statistic()}
+        except Exception as e:
+            self.logger.error(f"{module_name} execution error: {str(e)}", exc_info=True)
+            return {module_name: {"error": str(e)}}
+
+class LoggingManager:
+    """Logging management component"""
+    
+    def __init__(self, log_path: Path):
+        self.log_path = log_path
+        self.logger = self._init_logger()
+    
+    def _init_logger(self) -> logging.Logger:
+        """Initialize the logging system"""
+        try:
+            from modules.lib.log_manager import LogManager
+            log_manager = LogManager(self.log_path)
+            return log_manager.get_logger()
+        except (ImportError, PermissionError, IOError) as e:
+            logger = logging.getLogger("evaluator")
+            logger.setLevel(logging.INFO)
+            console_handler = logging.StreamHandler()
+            console_handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
+            logger.addHandler(console_handler)
+            logger.warning(f"Failed to init standard logger: {str(e)}, using fallback logger")
+            return logger
+    
+    def get_logger(self) -> logging.Logger:
+        return self.logger
+
+class DataProcessor:
+    """Data processing component"""
+    
+    def __init__(self, logger: logging.Logger, data_path: Path, config_path: Optional[Path] = None):
+        self.logger = logger
+        self.data_path = data_path
+        self.config_path = config_path
+        self.processor = self._load_processor()
+        self.case_name = self.data_path.name
+    
+    def _load_processor(self) -> Any:
+        """Load the data preprocessor"""
+        try:
+            from modules.lib import data_process
+            return data_process.DataPreprocessing(self.data_path, self.config_path)
+        except ImportError as e:
+            self.logger.error(f"Failed to load data processor: {str(e)}")
+            raise RuntimeError(f"Failed to load data processor: {str(e)}") from e
+    
+    def validate(self) -> None:
+        """Validate the data path"""
+        if not self.data_path.exists():
+            raise FileNotFoundError(f"Data path does not exist: {self.data_path}")
+        if not self.data_path.is_dir():
+            raise NotADirectoryError(f"Invalid data directory: {self.data_path}")
+
+class EvaluationPipeline:
+    """Evaluation pipeline controller"""
+    
+    def __init__(self, config_path: str, log_path: str, data_path: str, report_path: str, 
+                 custom_metrics_path: Optional[str] = None, custom_config_path: Optional[str] = None):
+        # Path initialization
+        self.config_path = Path(config_path) if config_path else None
+        self.custom_config_path = Path(custom_config_path) if custom_config_path else None
+        self.data_path = Path(data_path)
+        self.report_path = Path(report_path)
+        self.custom_metrics_path = Path(custom_metrics_path) if custom_metrics_path else None
+        
+        # Component initialization
+        self.logging_manager = LoggingManager(Path(log_path))
+        self.logger = self.logging_manager.get_logger()
+        self.config_manager = ConfigManager(self.logger)
+        self.config_manager.load_configs(self.config_path, self.custom_config_path)
+        self.metric_loader = MetricLoader(self.logger, self.config_manager)
+        self.metric_loader.load_builtin_metrics()
+        self.metric_loader.load_custom_metrics(self.custom_metrics_path)
+        self.evaluation_engine = EvaluationEngine(self.logger, self.config_manager, self.metric_loader)
+        self.data_processor = DataProcessor(self.logger, self.data_path, self.config_path)
+    
+    def execute(self) -> Dict[str, Any]:
+        """Execute the evaluation pipeline"""
+        try:
+            self.data_processor.validate()
+            
+            self.logger.info(f"Start evaluation: {self.data_path.name}")
+            start_time = time.perf_counter()
+            results = self.evaluation_engine.evaluate(self.data_processor.processor)
+            elapsed_time = time.perf_counter() - start_time
+            self.logger.info(f"Evaluation completed, time: {elapsed_time:.2f}s")
+            
+            report = self._generate_report(self.data_processor.case_name, results)
+            return report
+            
+        except Exception as e:
+            self.logger.critical(f"Evaluation failed: {str(e)}", exc_info=True)
+            return {"error": str(e), "traceback": traceback.format_exc()}
+    
+    def _generate_report(self, case_name: str, results: Dict[str, Any]) -> Dict[str, Any]:
+        """Generate the evaluation report"""
+        from modules.lib.common import dict2json
+        
+        self.report_path.mkdir(parents=True, exist_ok=True)
+        
+        results["metadata"] = {
+            "case_name": case_name,
+            "timestamp": datetime.now().isoformat(),
+            "version": "3.1.0",
+        }
+        
+        report_file = self.report_path / f"{case_name}_report.json"
+        dict2json(results, report_file)
+        self.logger.info(f"Report generated: {report_file}")
+        
+        return results
+
+def main():
+    """Command-line entry point"""
+    parser = argparse.ArgumentParser(
+        description="Autonomous Driving Evaluation System V3.1",
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+    )
+    
+    parser.add_argument(
+        "--logPath",
+        type=str,
+        default="logs/test.log",
+        help="Log file path",
+    )
+    parser.add_argument(
+        "--dataPath",
+        type=str,
+        default="data/zhaoyuan1",
+        help="Input data directory",
+    )
+    parser.add_argument(
+        "--configPath",
+        type=str,
+        default="config/metrics_config.yaml",
+        help="Metrics config file path",
+    )
+    parser.add_argument(
+        "--reportPath",
+        type=str,
+        default="reports",
+        help="Output report directory",
+    )
+    parser.add_argument(
+        "--customMetricsPath",
+        type=str,
+        default="custom_metrics",
+        help="Custom metrics scripts directory (optional)",
+    )
+    parser.add_argument(
+        "--customConfigPath",
+        type=str,
+        default="config/custom_metrics_config.yaml",
+        help="Custom metrics config path (optional)",
+    )
+    
+    args = parser.parse_args()
+
+    try:
+        pipeline = EvaluationPipeline(
+            args.configPath, 
+            args.logPath, 
+            args.dataPath, 
+            args.reportPath, 
+            args.customMetricsPath, 
+            args.customConfigPath
+        )
+        
+        start_time = time.perf_counter()
+        result = pipeline.execute()
+        elapsed_time = time.perf_counter() - start_time
+
+        if "error" in result:
+            print(f"Evaluation failed: {result['error']}")
+            sys.exit(1)
+
+        print(f"Evaluation completed, total time: {elapsed_time:.2f}s")
+        print(f"Report path: {pipeline.report_path}")
+        
+    except KeyboardInterrupt:
+        print("\nUser interrupted")
+        sys.exit(130)
+    except Exception as e:
+        print(f"Execution error: {str(e)}")
+        traceback.print_exc()
+        sys.exit(1)
+
+if __name__ == "__main__":
+    warnings.filterwarnings("ignore")
+    main()
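The pipeline above discovers custom metric scripts by filename, following the `metric_<level1>_<level2>_<level3>.py` convention (e.g. `custom_metrics/metric_safety_safeTime_CustomTTC.py` in this commit). A minimal sketch of that parse; `parse_metric_filename` is a hypothetical helper, not part of the project:

```python
from pathlib import Path

def parse_metric_filename(path: Path) -> tuple:
    """Split 'metric_<level1>_<level2>_<level3>.py' into its three levels."""
    parts = path.stem[len("metric_"):].split("_")
    if len(parts) < 3:
        raise ValueError(f"Bad metric filename: {path.name}")
    # Extra underscore-separated segments beyond the third are ignored,
    # matching the engine's parts[0], parts[1], parts[2] unpacking.
    return parts[0], parts[1], parts[2]

print(parse_metric_filename(Path("metric_safety_safeTime_CustomTTC.py")))
# → ('safety', 'safeTime', 'CustomTTC')
```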

+ 498 - 0
scripts/evaluator_optimized.py

@@ -0,0 +1,498 @@
+# evaluation_engine.py
+import sys
+import warnings
+import time
+import importlib.util  # importlib.util is required for spec_from_file_location below
+import yaml
+from pathlib import Path
+import argparse
+from concurrent.futures import ThreadPoolExecutor
+from functools import lru_cache
+from typing import Dict, Any, List, Optional
+from datetime import datetime
+
+# Force-import any modules that may be loaded dynamically
+
+
+
+# Safely set the project root path (dynamic path management)
+# Check whether we are running from a frozen (PyInstaller) build
+if hasattr(sys, "_MEIPASS"):
+    # Frozen build: use the temporary resource directory
+    _ROOT_PATH = Path(sys._MEIPASS)
+else:
+    # Development mode: use the original project path
+    _ROOT_PATH = Path(__file__).resolve().parent.parent
+
+sys.path.insert(0, str(_ROOT_PATH))
+print(f"Project root: {_ROOT_PATH}")
+print(f"sys.path: {sys.path}")
+
+
+class EvaluationCore:
+    """Evaluation engine core (singleton)"""
+
+    _instance = None
+
+    def __new__(cls, logPath: str, configPath: str = None, customConfigPath: str = None, customMetricsPath: str = None):
+        if not cls._instance:
+            cls._instance = super().__new__(cls)
+            cls._instance._init(logPath, configPath, customConfigPath, customMetricsPath)
+        return cls._instance
+
+    def _init(self, logPath: str = None, configPath: str = None, customConfigPath: str = None, customMetricsPath: str = None) -> None:
+        """Initialize engine components"""
+        self.log_path = logPath
+        self.config_path = configPath
+        self.custom_config_path = customConfigPath
+        self.custom_metrics_path = customMetricsPath
+        
+        # Load configurations
+        self.metrics_config = {}
+        self.custom_metrics_config = {}
+        self.merged_config = {}  # merged built-in + custom config
+        
+        # Custom metric script modules
+        self.custom_metrics_modules = {}
+        
+        self._init_log_system()
+        self._load_configs()  # load and merge the configs
+        self._init_metrics()
+        self._load_custom_metrics()
+
+    def _init_log_system(self) -> None:
+        """Centralized log management"""
+        try:
+            from modules.lib.log_manager import LogManager
+
+            log_manager = LogManager(self.log_path)
+            self.logger = log_manager.get_logger()
+        except (PermissionError, IOError) as e:
+            sys.stderr.write(f"Failed to initialize logging system: {str(e)}\n")
+            sys.exit(1)
+
+    def _init_metrics(self) -> None:
+        """Initialize evaluation modules (strategy pattern)"""
+        # from modules.metric import safety, comfort, traffic, efficient, function
+        self.metric_modules = {
+            "safety": self._load_module("modules.metric.safety", "SafeManager"),
+            "comfort": self._load_module("modules.metric.comfort", "ComfortManager"),
+            "traffic": self._load_module("modules.metric.traffic", "ViolationManager"),
+            "efficient": self._load_module("modules.metric.efficient", "EfficientManager"),
+            "function": self._load_module("modules.metric.function", "FunctionManager"),
+        }
+
+    @lru_cache(maxsize=32)
+    def _load_module(self, module_path: str, class_name: str) -> Any:
+        """Dynamically load an evaluation module (cached)"""
+        try:
+            __import__(module_path)
+            return getattr(sys.modules[module_path], class_name)
+        except (ImportError, AttributeError) as e:
+            self.logger.error(f"Failed to load module: {module_path}.{class_name} - {str(e)}")
+            raise
+
+    def _load_configs(self) -> None:
+        """Load and merge the built-in and custom metric configs"""
+        # Load the built-in metrics config
+        if self.config_path and Path(self.config_path).exists():
+            try:
+                with open(self.config_path, 'r', encoding='utf-8') as f:
+                    self.metrics_config = yaml.safe_load(f)
+                self.logger.info(f"Loaded built-in metrics config: {self.config_path}")
+            except Exception as e:
+                self.logger.error(f"Failed to load built-in metrics config: {str(e)}")
+                self.metrics_config = {}
+        
+        # Load the custom metrics config
+        if self.custom_config_path and Path(self.custom_config_path).exists():
+            try:
+                with open(self.custom_config_path, 'r', encoding='utf-8') as f:
+                    self.custom_metrics_config = yaml.safe_load(f)
+                self.logger.info(f"Loaded custom metrics config: {self.custom_config_path}")
+            except Exception as e:
+                self.logger.error(f"Failed to load custom metrics config: {str(e)}")
+                self.custom_metrics_config = {}
+        
+        # Merge the two configs
+        self.merged_config = self._merge_configs(self.metrics_config, self.custom_metrics_config)
+
+    def _merge_configs(self, base_config: Dict, custom_config: Dict) -> Dict:
+        """
+        Merge the built-in and custom metric configs.
+        
+        Strategy:
+        1. If a custom level-1 metric matches a built-in one, merge their level-2 metrics
+        2. If a custom level-2 metric matches a built-in one, merge their level-3 metrics
+        3. Brand-new metrics are added as-is
+        """
+        merged = base_config.copy()
+        
+        for level1_key, level1_value in custom_config.items():
+            # Skip non-metric config entries (e.g. vehicle)
+            if not isinstance(level1_value, dict) or 'name' not in level1_value:
+                if level1_key not in merged:
+                    merged[level1_key] = level1_value
+                continue
+                
+            if level1_key not in merged:
+                # Brand-new level-1 metric
+                merged[level1_key] = level1_value
+            else:
+                # Merge into the existing level-1 metric
+                for level2_key, level2_value in level1_value.items():
+                    if level2_key == 'name' or level2_key == 'priority':
+                        continue
+                        
+                    if isinstance(level2_value, dict):
+                        if level2_key not in merged[level1_key]:
+                            # New level-2 metric
+                            merged[level1_key][level2_key] = level2_value
+                        else:
+                            # Merge into the existing level-2 metric
+                            for level3_key, level3_value in level2_value.items():
+                                if level3_key == 'name' or level3_key == 'priority':
+                                    continue
+                                    
+                                if isinstance(level3_value, dict):
+                                    if level3_key not in merged[level1_key][level2_key]:
+                                        # New level-3 metric
+                                        merged[level1_key][level2_key][level3_key] = level3_value
+        
+        return merged
+
+    def _load_custom_metrics(self) -> None:
+        """Load custom metric scripts"""
+        if not self.custom_metrics_path or not Path(self.custom_metrics_path).exists():
+            return
+            
+        custom_metrics_dir = Path(self.custom_metrics_path)
+        if not custom_metrics_dir.is_dir():
+            self.logger.warning(f"Custom metrics path is not a directory: {custom_metrics_dir}")
+            return
+            
+        # Scan the custom metrics script directory
+        for file_path in custom_metrics_dir.glob("*.py"):
+            if file_path.name.startswith("metric_") and file_path.name.endswith(".py"):
+                try:
+                    # Parse the script name to get the metric hierarchy
+                    parts = file_path.stem[7:].split('_')  # strip the 'metric_' prefix
+                    if len(parts) < 3:
+                        self.logger.warning(f"Custom metric script {file_path.name} does not follow the naming convention metric_<level1>_<level2>_<level3>.py")
+                        continue
+                    
+                    level1, level2, level3 = parts[0], parts[1], parts[2]
+                    
+                    # Make sure the metric exists in the config
+                    if not self._check_metric_in_config(level1, level2, level3, self.custom_metrics_config):
+                        self.logger.warning(f"Custom metric {level1}.{level2}.{level3} not found in config, skipping")
+                        continue
+                    
+                    # Load the script as a module
+                    module_name = f"custom_metric_{level1}_{level2}_{level3}"
+                    spec = importlib.util.spec_from_file_location(module_name, file_path)
+                    module = importlib.util.module_from_spec(spec)
+                    spec.loader.exec_module(module)
+                    
+                    # Verify the module exposes the required function
+                    if not hasattr(module, 'evaluate'):
+                        self.logger.warning(f"Custom metric script {file_path.name} is missing an evaluate function")
+                        continue
+                    
+                    # Keep a reference to the module
+                    key = f"{level1}.{level2}.{level3}"
+                    self.custom_metrics_modules[key] = module
+                    self.logger.info(f"Loaded custom metric script: {file_path.name}")
+                    
+                except Exception as e:
+                    self.logger.error(f"Failed to load custom metric script {file_path.name}: {str(e)}")
+
+    def _check_metric_in_config(self, level1: str, level2: str, level3: str, config: Dict) -> bool:
+        """Check whether the metric exists in the config"""
+        try:
+            return (level1 in config and 
+                    isinstance(config[level1], dict) and
+                    level2 in config[level1] and
+                    isinstance(config[level1][level2], dict) and
+                    level3 in config[level1][level2] and
+                    isinstance(config[level1][level2][level3], dict))
+        except Exception:
+            return False
+
+    def parallel_evaluate(self, data: Any) -> Dict[str, Any]:
+        """Parallel evaluation engine (dynamic thread pool)"""
+        # Collect all evaluation results
+        results = {}
+        
+        # 1. Evaluate built-in metrics first
+        self._evaluate_built_in_metrics(data, results)
+        
+        # 2. Then evaluate custom metrics and merge their results
+        self._evaluate_and_merge_custom_metrics(data, results)
+        
+        return results
+    
+    def _evaluate_built_in_metrics(self, data: Any, results: Dict[str, Any]) -> None:
+        """Evaluate the built-in metrics"""
+        # Key change 1: thread count equals the number of modules
+        with ThreadPoolExecutor(max_workers=len(self.metric_modules)) as executor:
+            # Key change 2: map futures by module name
+            futures = {
+                module_name: executor.submit(
+                    self._run_module, module, data, module_name
+                )
+                for module_name, module in self.metric_modules.items()
+            }
+
+            # Key change 3: process results in module order
+            for module_name, future in futures.items():
+                try:
+                    from modules.lib.score import Score
+                    evaluator = Score(self.merged_config, module_name)
+                    result_module = future.result()
+                    result = evaluator.evaluate(result_module)
+                    results.update(result)
+                except Exception as e:
+                    self.logger.error(
+                        f"{module_name} evaluation failed: {str(e)}",
+                        exc_info=True,
+                        extra={"stack": True},  # record the full stack
+                    )
+                    # Store the error in a structured form
+                    results[module_name] = {
+                        "status": "error",
+                        "message": str(e),
+                        "timestamp": datetime.now().isoformat(),
+                    }
+    
+    def _evaluate_and_merge_custom_metrics(self, data: Any, results: Dict[str, Any]) -> None:
+        """Evaluate the custom metrics and merge their results"""
+        if not self.custom_metrics_modules:
+            return
+            
+        # Group custom metrics by level-1 metric
+        grouped_metrics = {}
+        for metric_key in self.custom_metrics_modules:
+            level1 = metric_key.split('.')[0]
+            if level1 not in grouped_metrics:
+                grouped_metrics[level1] = []
+            grouped_metrics[level1].append(metric_key)
+        
+        # Process each level-1 group
+        for level1, metric_keys in grouped_metrics.items():
+            # Check whether this is a built-in level-1 metric
+            is_built_in = level1 in self.metrics_config and 'name' in self.metrics_config[level1]
+            level1_name = self.merged_config[level1].get('name', level1) if level1 in self.merged_config else level1
+            
+            # Built-in level-1 metric: merge into the existing results
+            if is_built_in and level1_name in results:
+                for metric_key in metric_keys:
+                    self._evaluate_and_merge_single_metric(data, results, metric_key, level1_name)
+            else:
+                # New level-1 metric: create a fresh result structure
+                if level1_name not in results:
+                    results[level1_name] = {}
+                
+                # Evaluate every custom metric under this level-1 metric
+                for metric_key in metric_keys:
+                    self._evaluate_and_merge_single_metric(data, results, metric_key, level1_name)
+    
+    def _evaluate_and_merge_single_metric(self, data: Any, results: Dict[str, Any], metric_key: str, level1_name: str) -> None:
+        """Evaluate a single custom metric and merge its result"""
+        try:
+            level1, level2, level3 = metric_key.split('.')
+            module = self.custom_metrics_modules[metric_key]
+            
+            # Fetch the metric config
+            metric_config = self.custom_metrics_config[level1][level2][level3]
+            
+            # Resolve the display names
+            level2_name = self.custom_metrics_config[level1][level2].get('name', level2)
+            level3_name = metric_config.get('name', level3)
+            
+            # Make sure the result dict structure exists
+            if level2_name not in results[level1_name]:
+                results[level1_name][level2_name] = {}
+            
+            # Call the custom metric's evaluate function
+            metric_result = module.evaluate(data)
+            from modules.lib.score import Score
+            evaluator = Score(self.merged_config, level1_name)
+            
+            result = evaluator.evaluate(metric_result)
+            results.update(result)
+            
+            self.logger.info(f"Evaluated custom metric: {level1_name}.{level2_name}.{level3_name}")
+            
+        except Exception as e:
+            self.logger.error(f"Failed to evaluate custom metric {metric_key}: {str(e)}")
+            
+            # Try to record the error in the results
+            try:
+                level1, level2, level3 = metric_key.split('.')
+                level2_name = self.custom_metrics_config[level1][level2].get('name', level2)
+                level3_name = self.custom_metrics_config[level1][level2][level3].get('name', level3)
+                
+                if level2_name not in results[level1_name]:
+                    results[level1_name][level2_name] = {}
+                    
+                results[level1_name][level2_name][level3_name] = {
+                    "status": "error",
+                    "message": str(e),
+                    "timestamp": datetime.now().isoformat(),
+                }
+            except Exception:
+                pass
+
+    def _run_module(
+        self, module_class: Any, data: Any, module_name: str
+    ) -> Dict[str, Any]:
+        """Run a single evaluation module (with circuit-breaker behavior)"""
+        try:
+            instance = module_class(data)
+            return {module_name: instance.report_statistic()}
+        except Exception as e:
+            self.logger.error(f"{module_name} raised an exception: {str(e)}", stack_info=True)
+            return {module_name: {"error": str(e)}}
+
+
+
+
+class EvaluationPipeline:
+    """Evaluation pipeline controller"""
+
+    def __init__(self, configPath: str, logPath: str, dataPath: str, resultPath: str, customMetricsPath: Optional[str] = None, customConfigPath: Optional[str] = None):
+        self.configPath = Path(configPath)
+        self.custom_config_path = Path(customConfigPath) if customConfigPath else None
+        self.data_path = Path(dataPath)
+        self.report_path = Path(resultPath)
+        self.custom_metrics_path = Path(customMetricsPath) if customMetricsPath else None
+        
+        # Create the evaluation engine with all required parameters
+        self.engine = EvaluationCore(
+            logPath, 
+            configPath=str(self.configPath), 
+            customConfigPath=str(self.custom_config_path) if self.custom_config_path else None,
+            customMetricsPath=str(self.custom_metrics_path) if self.custom_metrics_path else None
+        )
+        
+        self.data_processor = self._load_data_processor()
+
+    def _load_data_processor(self) -> Any:
+        """Dynamically load the data preprocessing module"""
+        try:
+            from modules.lib import data_process
+
+            return data_process.DataPreprocessing(self.data_path, self.configPath)
+        except ImportError as e:
+            raise RuntimeError(f"Failed to load data processor: {str(e)}") from e
+
+    def execute_pipeline(self) -> Dict[str, Any]:
+        """Run the evaluation flow end to end"""
+        self._validate_case()
+
+        try:
+            metric_results = self.engine.parallel_evaluate(self.data_processor)
+            report = self._generate_report(
+                self.data_processor.case_name, metric_results
+            )
+            return report
+        except Exception as e:
+            self.engine.logger.critical(f"Pipeline execution failed: {str(e)}", exc_info=True)
+            return {"error": str(e)}
+
+    def _validate_case(self) -> None:
+        """Validate the test-case path"""
+        case_path = self.data_path
+        if not case_path.exists():
+            raise FileNotFoundError(f"Case path does not exist: {case_path}")
+        if not case_path.is_dir():
+            raise NotADirectoryError(f"Invalid case directory: {case_path}")
+
+    def _generate_report(self, case_name: str, results: Dict) -> Dict:
+        """Generate the evaluation report (template method pattern)"""
+        from modules.lib.common import dict2json
+
+        report_path = self.report_path
+        report_path.mkdir(parents=True, exist_ok=True, mode=0o777)
+
+        report_file = report_path / f"{case_name}_report.json"
+        dict2json(results, report_file)
+        self.engine.logger.info(f"Report generated: {report_file}")
+        return results
+
+
+def main():
+    """Command-line entry point (factory pattern)"""
+    parser = argparse.ArgumentParser(
+        description="Autonomous Driving Evaluation System V3.0 - dynamic metric selection and custom metrics",
+        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
+    )
+    # Argument definitions with help text and defaults
+    parser.add_argument(
+        "--logPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/logs/test.log",
+        help="Log file path",
+    )
+    parser.add_argument(
+        "--dataPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/data/zhaoyuan1",
+        help="Preprocessed input data directory",
+    )
+    parser.add_argument(
+        "--configPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/config/metrics_config.yaml",
+        help="Metrics config file path",
+    )
+    parser.add_argument(
+        "--reportPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/reports",
+        help="Output report directory",
+    )
+    # Custom metrics script directory parameter (optional)
+    parser.add_argument(
+        "--customMetricsPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/custom_metrics",
+        help="Custom metrics scripts directory (optional)",
+    )
+    # Custom metrics config path parameter (optional)
+    parser.add_argument(
+        "--customConfigPath",
+        type=str,
+        default="d:/Kevin/zhaoyuan/zhaoyuan_new/config/custom_metrics_config.yaml",
+        help="Custom metrics config file path (optional)",
+    )
+    args = parser.parse_args()
+
+    try:
+        pipeline = EvaluationPipeline(
+            args.configPath, args.logPath, args.dataPath, args.reportPath, args.customMetricsPath, args.customConfigPath
+        )
+        start_time = time.perf_counter()
+
+        result = pipeline.execute_pipeline()
+
+        if "error" in result:
+            print(f"Evaluation failed: {result['error']}")
+            sys.exit(1)
+
+        print(f"Evaluation completed, elapsed: {time.perf_counter()-start_time:.2f}s")
+        print(f"Report path: {pipeline.report_path}")
+    except KeyboardInterrupt:
+        print("\nUser interrupted")
+        sys.exit(130)
+    except Exception as e:
+        print(f"Execution error: {str(e)}")
+        sys.exit(1)
+        sys.exit(1)
+
+
+if __name__ == "__main__":
+    warnings.filterwarnings("ignore")
+    main()
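The `_merge_configs` strategy documented in the engine above can be illustrated with a trimmed-down sketch. `merge_configs` here is a hypothetical stand-in: it merges matching branches recursively and lets existing base values win, but ignores the original's special-casing of `name`/`priority` keys and its fixed three-level depth:

```python
# Hypothetical, simplified illustration of the config-merge strategy:
# matching dict branches merge recursively, new keys are added, and
# values already present in the base config are never overwritten.
def merge_configs(base: dict, custom: dict) -> dict:
    merged = {k: (v.copy() if isinstance(v, dict) else v) for k, v in base.items()}
    for key, value in custom.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = merge_configs(merged[key], value)
        elif key not in merged:
            merged[key] = value
    return merged

base = {"safety": {"name": "safety", "safeTime": {"TTC": {"max": 3}}}}
custom = {"safety": {"safeTime": {"CustomTTC": {"max": 5}}}}
merged = merge_configs(base, custom)
print(merged["safety"]["safeTime"])
# → {'TTC': {'max': 3}, 'CustomTTC': {'max': 5}}
```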

+ 106 - 0
templates/custom_metric_template.py

@@ -0,0 +1,106 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+"""
+Custom metric template
+Users can create their own metrics based on this template
+"""
+
+from typing import Dict, Any
+import numpy as np
+from modules.lib.score import Score
+
+# Import the base class
+# Note: make sure the import path is correct for your setup
+from modules.lib.metric_registry import BaseMetric
+
+# Metric category (required)
+# Allowed values: safety, comfort, traffic, efficient, function, custom
+METRIC_CATEGORY = "custom"
+
+class CustomMetricExample(BaseMetric):
+    """Example custom metric - average speed"""
+    
+    def __init__(self, data: Any):
+        """
+        Initialize the metric
+        
+        Args:
+            data: input data
+        """
+        super().__init__(data)
+        # Add your own initialization code here
+        
+    def calculate(self) -> Dict[str, Any]:
+        """
+        Compute the metric
+        
+        Returns:
+            Result dictionary
+        """
+        # Implement the metric calculation logic here
+        result = {
+            "value": 0.0,  # metric value
+            "score": 100,  # score
+            "details": {}  # extra details
+        }
+        
+        # Example: compute the average speed
+        try:
+            if hasattr(self.data, 'velocities') and self.data.velocities:
+                velocities = self.data.velocities
+                if isinstance(velocities, dict) and 'vx' in velocities and 'vy' in velocities:
+                    # Combined speed magnitude
+                    vx = np.array(velocities['vx'])
+                    vy = np.array(velocities['vy'])
+                    speeds = np.sqrt(vx**2 + vy**2)
+                    
+                    # Mean speed
+                    avg_speed = np.mean(speeds)
+                    result['value'] = float(avg_speed)
+                    
+                    # Simple scoring logic
+                    if avg_speed < 10:
+                        result['score'] = 60  # too slow
+                    elif avg_speed > 50:
+                        result['score'] = 70  # too fast
+                    else:
+                        result['score'] = 100  # reasonable speed
+                    
+                    # Extra details
+                    result['details'] = {
+                        "max_speed": float(np.max(speeds)),
+                        "min_speed": float(np.min(speeds)),
+                        "std_speed": float(np.std(speeds))
+                    }
+        except Exception as e:
+            # Record the error on failure
+            result['value'] = 0.0
+            result['score'] = 0
+            result['details'] = {"error": str(e)}
+                
+        return result
+    
+    def report_statistic(self) -> Dict[str, Any]:
+        """
+        Report the statistics
+        The result format can be customized here
+        """
+        result = self.calculate()
+        
+        # Additional processing can go here,
+        # e.g. adding timestamps or formatting the result
+        
+        return result
+
+
+# Multiple metric classes may be defined in the same file
+class AnotherCustomMetric(BaseMetric):
+    """Another example metric - rate of change of acceleration (jerk)"""
+    
+    def __init__(self, data: Any):
+        super().__init__(data)
+    
+    def calculate(self) -> Dict[str, Any]:
+        # Implement your calculation logic here
+        return {"value": 0.0, "score": 100, "details": {}}
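Scripts written from the template above are discovered and imported at runtime by the engine. A self-contained sketch of the `importlib.util` mechanism it relies on; the throwaway script and module name here are illustrative only:

```python
# Sketch of dynamic loading via importlib.util.spec_from_file_location,
# mirroring how the engine imports custom metric scripts by path.
import importlib.util
import tempfile
from pathlib import Path

# Write a throwaway metric script so the sketch is self-contained.
script = Path(tempfile.mkdtemp()) / "metric_demo_subMetric_example.py"
script.write_text("def evaluate(data):\n    return {'value': 1.0, 'score': 90}\n")

spec = importlib.util.spec_from_file_location("custom_metric_demo", script)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# The engine rejects scripts that do not expose an evaluate function
assert hasattr(module, "evaluate")
print(module.evaluate(None))
# → {'value': 1.0, 'score': 90}
```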

+ 226 - 0
templates/unified_custom_metric_template.py

@@ -0,0 +1,226 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+"""
+Unified custom metric template
+
+This template shows two ways to implement a custom metric:
+1. Class-based (recommended): subclass BaseMetric and implement calculate
+2. Function-based: implement an evaluate function
+
+Pick whichever style fits your needs.
+"""
+
+from typing import Dict, Any, Union, Optional
+import numpy as np
+import logging
+from modules.lib.score import Score
+
+# Import the base class
+# Note: make sure the import path is correct for your setup
+from modules.lib.metric_registry import BaseMetric
+
+# Metric category (required)
+# Allowed values: safety, comfort, traffic, efficient, function, custom
+METRIC_CATEGORY = "custom"
+
+#############################################################
+# Option 1: class-based implementation (recommended)
+# Pros: clear structure, easy to extend, supports complex metric calculations
+# Use when: the metric needs state management or multi-step computation
+#############################################################
+
+class CustomMetricExample(BaseMetric):
+    """Example custom metric - average speed"""
+    
+    def __init__(self, data: Any):
+        """
+        Initialize the metric
+        
+        Args:
+            data: input data, typically scenario and trajectory information
+        """
+        super().__init__(data)
+        # Add your own initialization code here
+        
+    def calculate(self) -> Dict[str, Any]:
+        """
+        Compute the metric
+        
+        Returns:
+            Result dictionary with the following fields:
+            - value: metric value
+            - score: score (0-100)
+            - details: extra details (optional)
+        """
+        # Implement the metric calculation logic here
+        result = {
+            "value": 0.0,  # metric value
+            "score": 100,  # score
+            "details": {}  # extra details
+        }
+        
+        # Example: compute the average speed
+        try:
+            if hasattr(self.data, 'ego_data') and hasattr(self.data.ego_data, 'v'):
+                # Fetch the speed data
+                speeds = self.data.ego_data['v'].values
+                
+                # Mean speed
+                avg_speed = np.mean(speeds)
+                result['value'] = float(avg_speed)
+                
+                # Simple scoring logic
+                if avg_speed < 10:
+                    result['score'] = 60  # too slow
+                elif avg_speed > 50:
+                    result['score'] = 70  # too fast
+                else:
+                    result['score'] = 100  # reasonable speed
+                
+                # Extra details
+                result['details'] = {
+                    "max_speed": float(np.max(speeds)),
+                    "min_speed": float(np.min(speeds)),
+                    "std_speed": float(np.std(speeds))
+                }
+        except Exception as e:
+            # Record the error on failure
+            logging.error(f"Metric calculation failed: {str(e)}")
+            result['value'] = 0.0
+            result['score'] = 0
+            result['details'] = {"error": str(e)}
+                
+        return result
+    
+    def report_statistic(self) -> Dict[str, Any]:
+        """
+        报告统计结果
+        可以在这里自定义结果格式
+        
+        Returns:
+            统计结果字典
+        """
+        result = self.calculate()
+        
+        # 可以在这里添加额外的处理逻辑
+        # 例如:添加时间戳、格式化结果等
+        
+        return result
+
+
+#############################################################
+# Option 2: function-based implementation
+# Pros: simple and direct, easy to understand
+# Use when the metric is simple and needs no complex state management
+#############################################################
+
+def evaluate(data) -> Dict[str, Any]:
+    """
+    评测自定义指标
+    
+    Args:
+        data: 评测数据,包含场景、轨迹等信息
+        
+    Returns:
+        评测结果,包含指标值、分数、详情等
+    """
+
+    try:
+        # Compute the metric value
+        result = calculate_metric(data)
+        
+        # Optionally post-process the result with the Score class
+        # evaluator = Score(config)   
+        # result = evaluator.evaluate(result)
+        return result
+        
+    except Exception as e:
+        logging.error(f"Metric evaluation failed: {str(e)}")
+        # Return an error result on failure
+        return {
+            "value": 0.0,
+            "score": 0,
+            "details": {
+                "error": str(e)
+            }
+        }
+    
+
+def calculate_metric(data) -> Dict[str, Any]:
+    """
+    计算指标值
+    
+    Args:
+        data: 输入数据
+        
+    Returns:
+        指标计算结果
+    """
+    # 这里是计算指标的具体逻辑
+    # 以下是一个简化的示例
+    
+    if data is None:
+        raise ValueError("输入数据不能为空")
+    
+    try:
+        # Example: compute TTC (Time To Collision)
+        if hasattr(data, 'ego_data'):
+            # The real metric computation goes here;
+            # a fixed value is used as a placeholder
+            metric_value = 1.5
+            
+            # Assemble the result
+            return {
+                "value": metric_value,
+                "score": 85,  # example score
+                "details": {
+                    "min_value": metric_value,
+                    "max_value": metric_value * 2
+                }
+            }
+        else:
+            raise ValueError("Invalid data format: ego_data is missing")
+    except Exception as e:
+        logging.error(f"Metric calculation failed: {str(e)}")
+        raise
+        raise
+
+
+#############################################################
+# Usage notes
+#############################################################
+"""
+Choosing an implementation style:
+
+1. Class-based (recommended):
+   - suited to complex metric computations
+   - needed when state or multi-step calculation is involved
+   - integrates deeply with the framework
+
+2. Function-based:
+   - suited to simple metric computations
+   - simple logic with no complex state management
+   - quick prototyping
+
+File naming convention:
+- the file name must start with metric_
+- followed by the metric category, second-level name, and third-level name
+- e.g. metric_safety_safeTime_CustomTTC.py
+
+Requirements:
+1. Class style: inherit from BaseMetric and implement calculate()
+2. Function style: implement an evaluate() function
+3. The file must define a METRIC_CATEGORY variable naming the metric category
+
+Result format:
+{
+    "value": 0.0,    # metric value
+    "score": 100,   # score (0-100)
+    "details": {}   # extra information (optional)
+}
+"""
+
+# Test code (remove in production use)
+if __name__ == "__main__":
+    # Add test code here
+    pass

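The function-style contract above can be exercised in isolation. The sketch below is a minimal, runnable stand-in: the `SimpleNamespace` data object and the banded scoring thresholds mirror the template but are assumptions, not the framework's real pipeline.

```python
import logging
from types import SimpleNamespace

import numpy as np

# Hypothetical stand-in for the pipeline's data object; the real one is
# supplied by the evaluation framework.
data = SimpleNamespace(ego_data={"v": np.array([12.0, 15.0, 18.0])})

def evaluate(data):
    """Return the {value, score, details} dict the framework expects."""
    try:
        speeds = np.asarray(data.ego_data["v"], dtype=float)
        avg_speed = float(np.mean(speeds))
        # Same banded scoring as the class-based example above
        score = 100 if 10 <= avg_speed <= 50 else (60 if avg_speed < 10 else 70)
        return {
            "value": avg_speed,
            "score": score,
            "details": {"max_speed": float(np.max(speeds))},
        }
    except Exception as e:
        logging.error(f"Metric evaluation failed: {e}")
        return {"value": 0.0, "score": 0, "details": {"error": str(e)}}

result = evaluate(data)
```

Keeping the error path inside `evaluate()` means a broken metric degrades to a zero score instead of crashing the whole evaluation run.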
+ 28 - 0
test/custom_metrics_config.yaml

@@ -0,0 +1,28 @@
+# Example custom metric configuration
+
+# Example 1: add a new third-level metric under an existing first-level metric
+
+
+safety:
+  name: safety
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:  # newly added third-level metric
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
+user:
+  name: user
+  priority: 0
+  safeTime:
+    name: safetime
+    priority: 0
+    CustomTTC:  # newly added third-level metric
+      name: CustomTTC
+      priority: 0
+      max: 20.0
+      min: 3.5
+  

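For reference, the YAML above parses (e.g. via `yaml.safe_load`) into a plain nested dict, so a metric implementation reads its thresholds by key access. A sketch with the parsed equivalent of the `safety` branch inlined as a literal (no framework code assumed):

```python
# Parsed equivalent of the safety branch of the YAML above
cfg = {
    "safety": {
        "name": "safety",
        "priority": 0,
        "safeTime": {
            "name": "safetime",
            "priority": 0,
            "CustomTTC": {"name": "CustomTTC", "priority": 0, "max": 20.0, "min": 3.5},
        },
    },
}

# Drill down: category -> second-level metric -> third-level metric
ttc = cfg["safety"]["safeTime"]["CustomTTC"]
threshold_span = ttc["max"] - ttc["min"]
```

The `name` and `priority` keys sit alongside the child metrics at every level, which is why readers must select the third-level key explicitly rather than iterating blindly.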
+ 97 - 0
test/split.py

@@ -0,0 +1,97 @@
+import yaml
+from pathlib import Path
+
+def ensure_structure(metrics_dict, full_dict, path):
+    """确保每一级都包含name和priority字段"""
+    if not isinstance(metrics_dict, dict):
+        return metrics_dict
+    
+    # Walk the full config down to the current path
+    current = full_dict
+    for key in path.split('.'):
+        if key in current:
+            current = current[key]
+        else:
+            break
+    
+    # Preserve name and priority if the original structure has them
+    result = {}
+    if isinstance(current, dict):
+        if 'name' in current:
+            result['name'] = current['name']
+        if 'priority' in current:
+            result['priority'] = current['priority']
+    
+    # Append the custom content
+    for key, value in metrics_dict.items():
+        if key not in ['name', 'priority']:
+            result[key] = ensure_structure(value, full_dict, f"{path}.{key}" if path else key)
+    
+    return result
+
+def find_custom_metrics(all_metrics, builtin_metrics, current_path=""):
+    """递归比较两个配置,找出自定义指标"""
+    custom_metrics = {}
+    
+    if isinstance(all_metrics, dict) and isinstance(builtin_metrics, dict):
+        for key in all_metrics:
+            if key not in builtin_metrics:
+                # Entirely new key: keep its full structure
+                custom_metrics[key] = all_metrics[key]
+            else:
+                # Recurse into the shared substructure
+                child_custom = find_custom_metrics(
+                    all_metrics[key], 
+                    builtin_metrics[key],
+                    f"{current_path}.{key}" if current_path else key
+                )
+                if child_custom:
+                    custom_metrics[key] = child_custom
+    elif all_metrics != builtin_metrics:
+        # Leaf values differ
+        return all_metrics
+    
+    # Normalize the result so every level keeps name and priority
+    if custom_metrics:
+        return ensure_structure(custom_metrics, all_metrics, current_path)
+    return None
+
+def split_metrics_config(all_metrics_path, builtin_metrics_path, custom_metrics_path):
+    # Load the full metrics config
+    with open(all_metrics_path, 'r', encoding='utf-8') as f:
+        all_metrics = yaml.safe_load(f) or {}
+    
+    # Load the built-in metrics config as the baseline
+    with open(builtin_metrics_path, 'r', encoding='utf-8') as f:
+        builtin_metrics = yaml.safe_load(f) or {}
+    
+    # Extract the custom metrics
+    custom_metrics = find_custom_metrics(all_metrics, builtin_metrics)
+    
+    # Save the custom metrics
+    if custom_metrics:
+        with open(custom_metrics_path, 'w', encoding='utf-8') as f:
+            yaml.dump(custom_metrics, f, allow_unicode=True, sort_keys=False, indent=2)
+        
+        print(f"成功拆分指标配置:")
+        print(f"- 内置指标已保存到: {builtin_metrics_path}")
+        print(f"- 自定义指标已保存到: {custom_metrics_path}")
+        print("\n自定义指标内容:")
+        print(yaml.dump(custom_metrics, allow_unicode=True, sort_keys=False, indent=2))
+    else:
+        print("未发现自定义指标")
+
+if __name__ == "__main__":
+    # Config file paths
+    all_metrics_path = '/home/kevin/kevin/zhaoyuan/zhaoyuan_v2.0/zhaoyuan_new/config/all_metrics_config.yaml'
+    builtin_metrics_path = '/home/kevin/kevin/zhaoyuan/zhaoyuan_v2.0/zhaoyuan_new/config/metrics_config.yaml'
+    custom_metrics_path = '/home/kevin/kevin/zhaoyuan/zhaoyuan_v2.0/zhaoyuan_new/config/custom_metrics_config.yaml'
+    
+    # Make sure the input files exist
+    if not Path(all_metrics_path).exists():
+        raise FileNotFoundError(f"File not found: {all_metrics_path}")
+    if not Path(builtin_metrics_path).exists():
+        raise FileNotFoundError(f"File not found: {builtin_metrics_path}")
+    
+    # Run the split
+    split_metrics_config(all_metrics_path, builtin_metrics_path, custom_metrics_path)

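The core of `find_custom_metrics` is a recursive set difference over nested dicts. The simplified sketch below (omitting the `name`/`priority` normalization done by `ensure_structure`) shows the intended behavior on a toy config; the `TTC`/`CustomTTC` keys are illustrative, not taken from the real config files.

```python
def diff_new_keys(all_cfg, builtin_cfg):
    """Keep only keys (at any depth) present in all_cfg but not in builtin_cfg."""
    if not (isinstance(all_cfg, dict) and isinstance(builtin_cfg, dict)):
        # Leaf comparison: report the value only if it changed
        return all_cfg if all_cfg != builtin_cfg else None
    out = {}
    for key, value in all_cfg.items():
        if key not in builtin_cfg:
            # Entirely new key: keep its full subtree
            out[key] = value
        else:
            child = diff_new_keys(value, builtin_cfg[key])
            if child is not None:
                out[key] = child
    return out or None

builtin = {"safety": {"safeTime": {"TTC": {"max": 9.0}}}}
full = {"safety": {"safeTime": {"TTC": {"max": 9.0}, "CustomTTC": {"max": 20.0}}}}
custom = diff_new_keys(full, builtin)
```

Returning `None` for unchanged subtrees is what lets shared branches like `safety.safeTime.TTC` drop out, so only the custom additions survive into `custom_metrics_config.yaml`.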
Not all files are shown because too many files changed in this diff